otsukare Thoughts after a day of work

Saving Webcompat images as a microservice

Update: You may want to fast forward to the latest part… of this blog post. (Head explodes).

Thinking out loud on separating our images into a separate service. The initial goal was to push the images to the cloud, but I think we could have a first step: keep the images on our server, but instead of the current save, send them to another service, let's say with an HTTP PUT, and have that service save them locally.
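To make that idea concrete, here is a minimal sketch of such a service using only the stdlib — the `ImageStore` name, the port, and the `uploads/` destination are all made up for illustration, not part of our codebase:

```python
import pathlib
from http.server import BaseHTTPRequestHandler, HTTPServer

UPLOAD_DIR = pathlib.Path('uploads')  # hypothetical local destination


class ImageStore(BaseHTTPRequestHandler):
    """Tiny image-saving service: PUT /<filename> stores the body on disk."""

    def do_PUT(self):
        length = int(self.headers.get('Content-Length', 0))
        body = self.rfile.read(length)
        # Keep only the final path component, to refuse path traversal.
        name = pathlib.Path(self.path.lstrip('/')).name
        if not name:
            self.send_error(400, 'missing filename')
            return
        UPLOAD_DIR.mkdir(parents=True, exist_ok=True)
        (UPLOAD_DIR / name).write_bytes(body)
        self.send_response(201)
        self.end_headers()


if __name__ == '__main__':
    HTTPServer(('127.0.0.1', 8301), ImageStore).serve_forever()
```

The core app would then PUT the validated bytes to this service instead of writing to disk itself.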

That way it would allow us two things:

  1. Virtualize the core app on Heroku if needed.
  2. Replace the microservice with another cloud hosting solution when we are ready.

All of this is mainly thinking for now.

Anatomy of our environment

config/ defines the app configuration. The maximum size limit for images is set there, and there is a route for localhost uploads.

# set limit of 5.5MB for file uploads
# in practice, this is ~4MB (5.5 / 1.37)
# after the data URI is saved to disk
app.config['MAX_CONTENT_LENGTH'] = 5.5 * 1024 * 1024
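The 1.37 factor in that comment is the overhead of shipping the image as a data URI: base64 alone inflates the payload by 4/3 (about 1.33), and the extra presumably covers the `data:` prefix and the rest of the request. A quick stdlib check of the ratio:

```python
import base64
import os

raw = os.urandom(3 * 1024 * 1024)  # a pretend 3 MB image
data_uri = 'data:image/png;base64,' + base64.b64encode(raw).decode('ascii')

# base64 maps every 3 raw bytes to 4 characters, so ~1.33x,
# which is why a 5.5 MB request carries roughly a 4 MB image.
overhead = len(data_uri) / len(raw)
print(round(overhead, 2))
```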

The localhost part would probably not change much. It is just for reading the image URLs.

if app.config['LOCALHOST']:
    # route path assumed; the original decorator was lost in extraction
    @app.route('/uploads/<path:filename>')
    def download_file(filename):
        """Route just for local environments to send uploaded images.

        In production, nginx handles this without needing to touch the
        Python app.
        """
        return send_from_directory(
            app.config['UPLOADS_DEFAULT_DEST'], filename)

Then the API for uploads is defined in api/.

This is where the production route is defined.

@uploads.route('/', methods=['POST'])
def upload():
    '''Endpoint to upload an image.

    If the image asset passes validation, it's saved as:
        UPLOADS_DEFAULT_DEST + /year/month/random-uuid.ext

    Returns a JSON string that contains the filename and url.
    '''
    # cut some stuff.
    try:
        upload = Upload(imagedata)
        data = {
            'filename': upload.get_filename(upload.image_path),
            'url': upload.get_url(upload.image_path),
            'thumb_url': upload.get_url(upload.thumb_path)
        }
        return (json.dumps(data), 201, {'content-type': JSON_MIME})
    except (TypeError, IOError):
        abort(415)
    except RequestEntityTooLarge:
        abort(413)

This is basically where we should replace the local save with an HTTP PUT to a microservice.
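A hedged sketch of what that swap could look like on the core-app side — the service URL and endpoint are invented here, and stdlib urllib stands in for whatever HTTP client we would actually pick:

```python
import urllib.request

IMAGE_SERVICE = 'http://localhost:8301'  # hypothetical microservice URL


def build_put_request(filename, imagedata):
    """Build the PUT request handing validated image bytes to the service."""
    return urllib.request.Request(
        f'{IMAGE_SERVICE}/{filename}',
        data=imagedata,
        method='PUT',
        headers={'Content-Type': 'application/octet-stream'})


def put_image(filename, imagedata):
    """Send the image; True means the service stored it (201 Created)."""
    with urllib.request.urlopen(build_put_request(filename, imagedata)) as resp:
        return resp.status == 201
```

The validation and the JSON response would stay in the core app; only the write moves behind the PUT.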

What is Amazon S3 doing?

In these musings, I wonder if we could mimic the way Amazon S3 operates, at a very high level. No need to replicate everything; we just need to save some bytes into a folder structure.

boto3 has documentation for uploading files.

def upload_file(file_name, bucket, object_name=None):
    """Upload a file to an S3 bucket

    :param file_name: File to upload
    :param bucket: Bucket to upload to
    :param object_name: S3 object name. If not specified then file_name is used
    :return: True if file was uploaded, else False
    """

    # If S3 object_name was not specified, use file_name
    if object_name is None:
        object_name = file_name

    # Upload the file
    s3_client = boto3.client('s3')
    try:
        response = s3_client.upload_file(file_name, bucket, object_name)
    except ClientError as e:
        logging.error(e)
        return False
    return True

We could keep the image validation (on the size, for example) in the core app, where the naming and checking is already done, and then save to a service the same way AWS does.

So our privileged service could accept images and save them locally, in the same folder structure, as a separate Flask app. And later on, we could adjust it to use S3.
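As a minimal sketch of that saving side, assuming the same year/month/random-uuid.ext layout the upload endpoint describes — the `save_image` name and the `uploads/` root are made up here:

```python
import datetime
import pathlib
import uuid

UPLOADS_ROOT = pathlib.Path('uploads')  # stand-in for UPLOADS_DEFAULT_DEST


def save_image(imagedata, ext='png'):
    """Write image bytes under uploads/year/month/random-uuid.ext."""
    today = datetime.date.today()
    folder = UPLOADS_ROOT / str(today.year) / f'{today.month:02d}'
    folder.mkdir(parents=True, exist_ok=True)
    path = folder / f'{uuid.uuid4()}.{ext}'
    path.write_bytes(imagedata)
    return path
```

Swapping this out for an S3 `upload_file` call later would not change the callers, since the year/month/uuid key scheme maps directly onto S3 object names.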

Surprise. Surprise.

I just found out that each time you put an image in an issue or a comment, GitHub makes a private copy of this image. Not sure if it's borderline with regards to property.

If you enter:

![I'm root](

Then it creates this markup.

<p><a target="_blank"
      rel="noopener noreferrer"
      href="…"><img src="…"
                    alt="I'm root"></a></p>

And we can notice that the img src is pointing to… GitHub?

I checked in my server logs to be sure. And I found…

- - [20/Nov/2019:06:44:54 +0000] "GET /2019/01/01/2535-misere HTTP/1.1" 200 62673 "-" "github-camo (876de43e)"

That will seriously challenge the OKR for this quarter.

Update: 2019-11-21. So I tried to decipher what was really happening. It seems GitHub acts as a proxy using camo, but it still has a caching system keeping a real copy of the images, instead of just proxying. And this can become a problem in the context of

Early on, we had added to our connect-src since we had uses that were making requests to However, this effectively opened up our connect-src to any Amazon S3 bucket. We refactored our URL generation and switched all call sites and our connect-src to use to reference our bucket.

GitHub is hosting the images on Amazon S3.