otsukare Thoughts after a day of work

Week notes - 2020 w04 - worklog - Python

Monday

Some webcompat diagnosis. Nothing exciting in the issues found, except maybe something about clipping and scrolling. Update: there is a bug! Thanks, Daniel.

Tuesday

Scoped the webhook request handling to the actual repo. That way we do not do useless work or, worse, create conflicting label assignments. I struggled a bit with the mock. Basically, for the test I wanted to avoid making the call to GitHub. When I think about it, there are possibly two options:

  • Mocking.
  • Putting a flag in the code to skip the call when in the test environment. I'm not sure which is the best strategy.
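To make the two options concrete, here is a minimal sketch. The `GitHubClient` class and `send_to_github` helper are hypothetical stand-ins, not our actual webhook code, and the `APP_ENV` variable is an invented flag name:

```python
import os
from unittest.mock import patch

# Hypothetical client standing in for the code that calls GitHub.
class GitHubClient:
    def new_opened_issue(self, payload):
        raise RuntimeError('would send a real HTTP request to GitHub')

client = GitHubClient()

# Option 1: mocking — replace the method for the duration of the test.
with patch.object(GitHubClient, 'new_opened_issue') as mock_call:
    mock_call.return_value.status_code = 200
    assert client.new_opened_issue({}).status_code == 200

# Option 2: an environment flag — skip the network call in tests.
def send_to_github(payload):
    if os.environ.get('APP_ENV') == 'test':
        return None  # no network call during tests
    return client.new_opened_issue(payload)

os.environ['APP_ENV'] = 'test'
assert send_to_github({}) is None
```

The mock keeps test logic out of production code, while the flag leaks test concerns into the code path it is supposed to test, which is one argument for option 1.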

I also separated some tests which were tied together under the same function, so that it is clearer which one is failing.
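A small sketch of that split, with a hypothetical `extract_metadata` helper standing in for the real webhook parsers:

```python
# Hypothetical metadata parser, standing in for the webhook helpers.
def extract_metadata(body):
    """Parse 'key: value' lines from an issue body."""
    pairs = (line.split(':', 1) for line in body.splitlines() if ':' in line)
    return {key.strip(): value.strip() for key, value in pairs}

BODY = 'browser: firefox\npriority: p1'

# Before: both checks in one test — a failure does not say which one broke,
# and the second assertion never even runs if the first one fails.
def test_extract_metadata():
    assert extract_metadata(BODY)['browser'] == 'firefox'
    assert extract_metadata(BODY)['priority'] == 'p1'

# After: one behavior per test, so the failing test name points at the bug.
def test_extract_browser():
    assert extract_metadata(BODY)['browser'] == 'firefox'

def test_extract_priority():
    assert extract_metadata(BODY)['priority'] == 'p1'

test_extract_browser()
test_extract_priority()
```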

Phone meeting with the webcompat team this night from 23:00 to midnight. Minutes.

About Mocking

Yes, mocking is evil in unit tests, but it becomes necessary when you depend on external services that you do not control. A good reminder is that you need to patch the function where it is looked up (in the module that uses it), not in the module where it is defined. In my case, I wanted to write a couple of tests for our webhook without actually sending requests to GitHub. The HTTP responses from GitHub which interest us are either:
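That "patch where it is looked up" rule is easy to get wrong, so here is a self-contained demonstration. The `net` and `hooks` modules are fabricated on the fly just to show the two patch targets:

```python
import sys
import types
from unittest.mock import patch

# Simulate two modules: 'net' defines the function, 'hooks' imports it.
net = types.ModuleType('net')
net.send = lambda: 'real request'
sys.modules['net'] = net

hooks = types.ModuleType('hooks')
exec('from net import send\ndef handle():\n    return send()', hooks.__dict__)
sys.modules['hooks'] = hooks

# Wrong target: patching the defining module does not touch the copy
# that 'hooks' already imported into its own namespace.
with patch('net.send', return_value='mocked'):
    assert hooks.handle() == 'real request'

# Right target: patch the name where it is looked up, i.e. in 'hooks'.
with patch('hooks.send', return_value='mocked'):
    assert hooks.handle() == 'mocked'
```

This is why the test below patches `webcompat.webhooks.new_opened_issue`: that is the namespace the webhook code reads the name from at call time.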

  • 4xx
  • 200

So I created a mock for the successful case, replacing the function that actually makes the call. I added comments in the code to explain.

    @patch('webcompat.webhooks.new_opened_issue')
    def test_new_issue_right_repo(self, mock_proxy):
        """Test that repository_url matches the CONFIG for public repo.

        Success is:
        payload: 'gracias, amigo.'
        status: 200
        content-type: text/plain
        """
        # Payload and signature of a valid "issue opened" event.
        json_event, signature = event_data('new_event_valid.json')
        headers = {
            'X-GitHub-Event': 'issues',
            'X-Hub-Signature': 'sha1=2fd56e551f8243a4c8094239916131535051f74b',
        }
        with webcompat.app.test_client() as c:
            # The patched new_opened_issue never hits GitHub;
            # we only pretend the call to GitHub succeeded.
            mock_proxy.return_value.status_code = 200
            rv = c.post(
                '/webhooks/labeler',
                data=json_event,
                headers=headers
            )
            # The webhook replies with its own success message.
            self.assertEqual(rv.data, b'gracias, amigo.')
            self.assertEqual(rv.status_code, 200)
            self.assertEqual(rv.content_type, 'text/plain')

Wednesday

Asia Dev Roadshow

Sandra has published the summary and the videos of the Developer Roadshow in Asia. This is the talk we gave about Web Compatibility and devtools in Seoul.

Anonymous reporting

Still working on our new anonymous reporting workflow.

Started to work on the PATCH of an issue when it is moderated positively, but before adding code I needed to refactor a bit so we don't end up with a pile of things. I think we can simplify further. Unit tests make it so much easier to move things around: when moving code across modules and files, we break tests, and then we fix both the code and the tests until everything works again. In the end we know that all the essential features are still working.

Skipping tests before completion

I had ideas for tests and I didn't want to forget them, so I wanted to add them to the code where they would be visible, but without making the test suite fail.

I could use pass:

    def test_patch_not_acceptable_issue(self):
        pass

but this will be silent, and you might forget about them. Then I thought: let's raise NotImplementedError

    def test_patch_not_acceptable_issue(self):
        raise NotImplementedError

but then the test is reported as an error and the whole suite fails. So not good either. I searched and found unittest.SkipTest

    def test_patch_not_acceptable_issue(self):
        """Test for not acceptable issues from private repo.

        payload: 'Moderated issue rejected'
        status: 200
        content-type: text/plain
        """
        raise unittest.SkipTest('TODO')

Exactly what I needed for nose.
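For the record, unittest also ships dedicated helpers for this: the `@unittest.skip` decorator and `self.skipTest()`. A minimal sketch with hypothetical test names:

```python
import unittest

class TestModeration(unittest.TestCase):

    # Same effect as raising unittest.SkipTest, declared up front.
    @unittest.skip('TODO')
    def test_patch_not_acceptable_issue(self):
        """Test for not acceptable issues from private repo."""

    def test_patch_wrong_repo(self):
        # self.skipTest is handy when skipping depends on a runtime check.
        self.skipTest('TODO')

# Run the two tests and check that both are counted as skipped.
suite = unittest.TestLoader().loadTestsFromTestCase(TestModeration)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert len(result.skipped) == 2
```

nose understands these as well, so the choice between them and a bare `raise unittest.SkipTest('TODO')` is mostly a matter of taste.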

    (env) ~/code/webcompat.com % nosetests tests/unit/test_webhook.py -v

gives:

    Extract browser label name. ... ok
    Extract 'extra' label. ... ok
    Extract dictionary of metadata for an issue body. ... ok
    Extract priority label. ... ok
    POST without bogus signature on labeler webhook is forbidden. ... ok
    POST with event not being 'issues' or 'ping' fails. ... ok
    POST without signature on labeler webhook is forbidden. ... ok
    POST with an unknown action fails. ... ok
    GET is forbidden on labeler webhook. ... ok
    Extract the right information from an issue. ... ok
    Extract list of labels from an issue body. ... ok
    Validation tests for GitHub Webhooks: Everything ok. ... ok
    Validation tests for GitHub Webhooks: Missing X-GitHub-Event. ... ok
    Validation tests for GitHub Webhooks: Missing X-Hub-Signature. ... ok
    Validation tests for GitHub Webhooks: Wrong X-Hub-Signature. ... ok
    Test that repository_url matches the CONFIG for public repo. ... ok
    Test when repository_url differs from the CONFIG for public repo. ... ok
    Test the core actions on new opened issues for WebHooks. ... ok
    Test for acceptable issues comes from private repo. ... SKIP: TODO
    Test for rejected issues from private repo. ... SKIP: TODO
    Test for issues in the wrong repo. ... SKIP: TODO
    Test the private scope of the repository. ... ok
    Test the public scope of the repository. ... ok
    Test the unknown of the repository. ... ok
    Test the signature check function for WebHooks. ... ok
    POST with PING events just return a 200 and contains pong. ... ok

    ----------------------------------------------------------------------
    Ran 26 tests in 0.102s

    OK (SKIP=3)

It doesn't fail the test suite, but at least I know I have work to do. We can see exactly what is missing:

    Test for acceptable issues comes from private repo. ... SKIP: TODO
    Test for rejected issues from private repo. ... SKIP: TODO
    Test for issues in the wrong repo. ... SKIP: TODO

Thursday and Friday

I dedicated most of my time to advancing the new anonymous reporting workflow. The interesting part of the process was having tests and having to refactor some functions a couple of times until they made more sense.

Tests really are a safe place to make progress. A new function will break test results, and we work on fixing the tests and/or the function until we reach a cleaner place. Then we move on to the next modification of the code. Tests become a lifeline in your development.

Another thing I realized: it is maybe time we create a new module for our issues themselves. It would model and instantiate our issues, and we could use it in multiple places. Currently we have too much back and forth parsing text, accessing dictionary items, etc. We can probably improve this with a dedicated module. Probably for phase 2 of our new workflow project.
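A rough sketch of what such a module could look like. The `Issue` class, its fields, and the payload shape here are all hypothetical, just to illustrate the idea of parsing the webhook payload once instead of passing dictionaries around:

```python
from dataclasses import dataclass, field

@dataclass
class Issue:
    """Hypothetical model for a webcompat issue (phase 2 idea)."""
    number: int
    title: str
    body: str
    labels: list = field(default_factory=list)

    @classmethod
    def from_payload(cls, payload):
        """Build an Issue once from the webhook payload, instead of
        digging into nested dictionaries all over the code base."""
        issue = payload['issue']
        return cls(
            number=issue['number'],
            title=issue['title'],
            body=issue['body'],
            labels=[label['name'] for label in issue.get('labels', [])],
        )

payload = {'issue': {'number': 42, 'title': 'site broken',
                     'body': 'browser: firefox', 'labels': []}}
issue = Issue.from_payload(payload)
assert issue.number == 42
```

The rest of the code would then work with `issue.title` and `issue.labels` rather than repeating the dictionary lookups and text parsing everywhere.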

Also, I have not been as effective as I wished. The windmill of thoughts about my ex-work colleagues' future is running wild.

Otsukare!