• 0 Posts
  • 90 Comments
Joined 1 year ago
Cake day: July 14th, 2023






  • smpl@discuss.tchncs.de to Firefox@lemmy.ml · *Permanently Deleted* · edited 12 days ago

    If your general stance is “people want me to give them free shit, I say gtfo”, then I understand you.

    That’s just not proportional to Mozilla and Firefox. In 2022 they had a total revenue of $595 million¹. That allows them to hire 3305 software developers at a salary of $180,000. Google was responsible for 81% of that revenue¹. If you remove Google and their influence from the equation, you’re left with $113 million, and Mozilla can then still hire 628 software developers. I think that would be more than adequate to maintain a browser.

    1. https://en.wikipedia.org/wiki/Mozilla_Corporation
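    A quick back-of-the-envelope check of those figures, assuming the flat $180,000 per-developer cost used in the comment and the revenue split cited from Wikipedia (a rough sketch, not an exact budget model):

    ```python
    # Sanity check of the headcount figures above.
    total_revenue = 595_000_000   # Mozilla Corporation revenue, 2022 (USD)
    google_share = 0.81           # share of that revenue attributed to Google
    cost_per_dev = 180_000        # assumed fully-loaded cost per developer (USD)

    devs_total = total_revenue // cost_per_dev
    non_google_revenue = total_revenue * (1 - google_share)
    devs_without_google = non_google_revenue // cost_per_dev

    print(f"developers on full revenue: {devs_total}")                # 3305
    print(f"revenue without Google:     ${non_google_revenue:,.0f}")  # $113,050,000
    print(f"developers without Google:  {devs_without_google:.0f}")   # 628
    ```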


  • It seems we are focusing our interest on two different parts of the problem.

    Finding the optimal way to classify which images are best compressed in bulk is an interesting problem in itself. In this particular case, though, the person asking had already picked out the similar images by hand, and they can be identified by their timestamps, which simplifies any similarity comparison. What I wanted to find out was how well those similar images can be compressed with various methods and codecs with minimal loss of quality. My goal was not to use this as a method of classifying the images; it was simply to examine how well the compression stage would work with the various methods.





  • I was not talking about classification. What I was talking about was a simple probe of how well a collage of similar images compares in compressed size to the same images compressed individually. The hypothesis is that a compression codec will compress images with a similar color distribution better when they are packed into a sprite sheet than when each image is encoded on its own. I don’t know, the savings might be negligible, but I’d assume there is something to gain, at least for some codecs. I doubt doing deduplication after compression has much to gain. (A rough sketch of this probe follows below.)

    I think you’re overthinking the classification task. These images are very similar, and I think comparing their color distributions would be adequate. It would of course be interesting to compare the different methods :)
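    A minimal sketch of the probe described above, using Pillow; the grid layout, the WebP codec, the quality setting and the `similar_images` directory are arbitrary illustration choices, not anything from the original discussion:

    ```python
    # Compare the compressed size of a sprite sheet of similar images
    # against the sum of the same images compressed individually.
    import io
    from pathlib import Path
    from PIL import Image

    def compressed_size(img: Image.Image, fmt: str = "WEBP", quality: int = 90) -> int:
        """Encode the image in memory and return the resulting byte count."""
        buf = io.BytesIO()
        img.save(buf, format=fmt, quality=quality)
        return buf.tell()

    def probe(paths: list[Path], columns: int = 4) -> None:
        images = [Image.open(p).convert("RGB") for p in paths]
        tile_w = max(im.width for im in images)
        tile_h = max(im.height for im in images)
        rows = -(-len(images) // columns)  # ceiling division

        # Pack the images onto a fixed grid; unused tiles stay black,
        # which slightly inflates the sprite-sheet size.
        sheet = Image.new("RGB", (columns * tile_w, rows * tile_h))
        for i, im in enumerate(images):
            sheet.paste(im, ((i % columns) * tile_w, (i // columns) * tile_h))

        individual = sum(compressed_size(im) for im in images)
        collage = compressed_size(sheet)
        print(f"individually: {individual:>10} bytes")
        print(f"sprite sheet: {collage:>10} bytes")
        print(f"difference:   {100 * (1 - collage / individual):.1f}%")

    probe(sorted(Path("similar_images").glob("*.jpg")))  # hypothetical input folder
    ```

    Swapping the format string lets you rerun the same probe for other codecs (PNG, or AVIF if your Pillow build supports it) to see which, if any, benefit from the shared color distribution.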








  • One of the most controversial changes of Chrome’s MV3 approach is the removal of blocking WebRequest, which provides a level of power and flexibility that is critical to enabling advanced privacy and content blocking features. Unfortunately, that power has also been used to harm users in a variety of ways. Chrome’s solution in MV3 was to define a more narrowly scoped API (declarativeNetRequest) as a replacement. However, this will limit the capabilities of certain types of privacy extensions without adequate replacement.

    Mozilla will maintain support for blocking WebRequest in MV3. To maximize compatibility with other browsers, we will also ship support for declarativeNetRequest. We will continue to work with content blockers and other key consumers of this API to identify current and future alternatives where appropriate. Content blocking is one of the most important use cases for extensions, and we are committed to ensuring that Firefox users have access to the best privacy tools available.

    https://blog.mozilla.org/addons/2022/05/18/manifest-v3-in-firefox-recap-next-steps/