otsukare Thoughts after a day of work

Browser Wish List - Distressful Content Filtering

Let's continue with my browser wish list (previous ideas are covered in earlier posts of this series).

Here are a couple of proposals.

So how do we make it possible for people to shield themselves against what they consider to be distressful content? I'm deliberately not qualifying what distressful content is, because that is exactly the point of this post. The nature of content which creates emotional harm is a very personal matter, which cannot be decided by others.

Universal shields

Systems like security alerts for harmful websites or privacy shields all rely on a general list decided for the user. Some systems offer a level of customization: you may decide to allow or bypass the shielding. But in the first place, the shielding was based on a general rule you had no control over.

That's an issue because, when it comes to the nature of the content, it can lead to catastrophic decisions that exclude whole types of content simply because the shield owner considers them harmful.

Personal shields

On the other hand, there are systems where you can shield yourself against a website's practices. For privacy, for example, you may want to use something like uMatrix, where you block everything by default and allow certain HTTP response types for each individual URI. This is what I do on my main browser. It requires a strong effort in tailoring each individual page: you build a policy as you go. It creates general rules for future sites (you may block Google Analytics for every future site you encounter), but it doesn't really learn more than that about how to act on your future browsing.
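To make the idea concrete, here is a minimal sketch of such a block-by-default policy. The rule structure and names are illustrative only, not uMatrix's actual rule format.

```typescript
// Hypothetical block-by-default policy, in the spirit of uMatrix.
type RequestType = "script" | "image" | "xhr" | "frame" | "media";

interface Rule {
  sourceOrigin: string;        // origin of the page being visited, "*" for any
  targetOrigin: string;        // origin of the requested resource
  type: RequestType | "*";
  action: "allow" | "block";
}

// Personal policy built while browsing: everything is blocked unless allowed.
const rules: Rule[] = [
  { sourceOrigin: "*", targetOrigin: "www.google-analytics.com", type: "*", action: "block" },
  { sourceOrigin: "example.org", targetOrigin: "example.org", type: "script", action: "allow" },
];

function decide(sourceOrigin: string, targetOrigin: string, type: RequestType): "allow" | "block" {
  const matches = rules.filter(r =>
    (r.sourceOrigin === "*" || r.sourceOrigin === sourceOrigin) &&
    (r.targetOrigin === "*" || r.targetOrigin === targetOrigin) &&
    (r.type === "*" || r.type === type)
  );
  if (matches.length === 0) return "block";   // block by default
  // Prefer rules naming an explicit source origin over wildcard rules.
  matches.sort((a, b) => Number(b.sourceOrigin !== "*") - Number(a.sourceOrigin !== "*"));
  return matches[0].action;
}
```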

We could imagine applying this method to distressful content by matching keywords in the page. But it would likely fail dramatically, for the same reason universal shields fail: it doesn't understand the content, it just applies a set of rules.
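A naive illustration of that keyword approach, with a purely hypothetical block list, shows the limitation: it matches words, not meaning or context.

```typescript
// Flags a page when it contains any word on a personal block list.
const blockedKeywords = ["keywordA", "keywordB"]; // chosen by the user

function pageLooksDistressful(doc: Document): boolean {
  const text = (doc.body.textContent ?? "").toLowerCase();
  return blockedKeywords.some(k => text.includes(k.toLowerCase()));
}
```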

Personal ML shields

So I was wondering if machine learning could help here: a personal, in-browser machine learning engine that flags the content behind the links while we are browsing. When we reach a webpage, the engine could follow the links before we click on them and create a pre-analysis of each linked page.

If we click or hover on a link and the analysis is not finished, we could get a popup message saying that the browser does not yet know the nature of the content because it has not finished the analysis. If the analysis is done, it could tell us that, based on the analysis and our past browsing experience, the content is about this and that, and whether it matches our interests.
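A rough sketch of the idea, not a real browser API: a local classifier (classifyText below is a hypothetical placeholder for a personal, in-browser model) pre-analyses linked pages, and the result is attached to each link so the UI can warn or reassure on hover.

```typescript
type Analysis =
  | { state: "pending" }
  | { state: "done"; topics: string[]; distressScore: number }; // 0..1, on a personal scale

const analyses = new Map<string, Analysis>();

// Placeholder for a personal model trained on the user's own signals; assumed, not real.
declare function classifyText(text: string): Promise<{ topics: string[]; distressScore: number }>;

async function preAnalyseLinks(doc: Document): Promise<void> {
  for (const link of Array.from(doc.querySelectorAll<HTMLAnchorElement>("a[href]"))) {
    const url = link.href;
    if (analyses.has(url)) continue;
    analyses.set(url, { state: "pending" });
    try {
      const res = await fetch(url);            // in practice this would be throttled
      const text = await res.text();
      const result = await classifyText(text); // runs locally, never leaves the browser
      analyses.set(url, { state: "done", ...result });
    } catch {
      analyses.delete(url);                    // leave unknown links unannotated
    }
  }
}

function describeLink(url: string): string {
  const a = analyses.get(url);
  if (!a || a.state === "pending") {
    return "The browser does not know yet the nature of this content.";
  }
  return `This page seems to be about ${a.topics.join(", ")} ` +
         `(personal distress score: ${a.distressScore.toFixed(2)}).`;
}
```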

It should be possible to bypass such a system if the user wishes to do so.

It could also help create pages which are easier to cope with. For example, on a page full of images depicting violence, we may want to read the text but have the images blurred by default, with a reveal on click if we wish.
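A minimal sketch of that last idea, assuming the page has already been flagged: blur every image by default and reveal an image only when the user explicitly clicks on it.

```typescript
function blurImagesByDefault(doc: Document): void {
  for (const img of Array.from(doc.querySelectorAll<HTMLImageElement>("img"))) {
    img.style.filter = "blur(20px)";
    img.style.cursor = "pointer";
    img.title = "Image blurred; click to reveal.";
    img.addEventListener("click", () => {
      img.style.filter = "none";   // reveal on explicit user action only
      img.title = "";
    }, { once: true });
  }
}
```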

PS: To address the elephant in the room, I still have a job. But if you are reading this, know that many qualified people will need your help in finding a new job.

Otsukare!