- cross-posted to:
- hackernews@lemmy.smeargle.fans
Chrome pushes forward with plans to limit ad blockers in the future: Google has set a date for the introduction of Manifest V3, which will hurt the capabilities of many ad blockers.
Users push forward with using Firefox
I might be misunderstanding but the article says:
Nevertheless, Firefox said it will adopt Manifest V3 in the interest of cross-browser compatibility.
They don’t plan to drop MV2, though.
To be more accurate: Mozilla does plan on deprecating MV2 once all of the MV3 functionality is supported and sufficient time to transition has been given, but unlike Chrome, they will keep the crucial “webRequestBlocking” API used by ad blockers available in MV3 for those extensions that need to do more than declarativeNetRequest allows for.
Towards the end of 2023 — once we’ve had time to evaluate and assess MV3’s rollout (including identifying important MV2 use cases that will persist into MV3) — we’ll decide on an appropriate timeframe to deprecate MV2.
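For context on what ad blockers lose: declarativeNetRequest replaces the blocking webRequest API with static rules the extension declares up front, instead of code that inspects each request. A minimal sketch of one such rule (the domain is hypothetical):

```json
{
  "id": 1,
  "priority": 1,
  "action": { "type": "block" },
  "condition": {
    "urlFilter": "||ads.example.com^",
    "resourceTypes": ["script", "image", "xmlhttprequest"]
  }
}
```

Because rules like this are declared ahead of time and capped in number, dynamic filtering logic still needs webRequestBlocking, which is why Mozilla keeping it matters.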
Awesome news, thanks!
This is what happens when you invest in a platform run by a glorified ad-tech company.
Precisely. Since Google makes most of its revenue from ads, this move shouldn’t surprise anyone.
At this point I’m beginning to feel fine with it. If you care, switch to Firefox (you probably already have); if not, enjoy your ads.
Ads have become so intrusive that they’ve made using the internet a nightmare.
This video does a nice job of explaining the ad-based internet we face at the moment: https://tilvids.com/w/99oSPPBJb3tkdVLc3fsSe5
I quite enjoy my PiHole as well.
I’m just a tech hobbyist and not a network engineer, but my general understanding is that even if Chrome is “updated” so ad blockers won’t work as before, blocking at the DNS level should still work, right? Assuming that sites still use separate ad servers.
So, DNS blocking will always block requests to things on the block list, which includes ads. However, I’ve noticed that many sites now use JS to detect images that don’t load before making the “fill the body with text” API call. Originally this would just do some fancy CSS hiding of the content so SEO scraping would still work (and one could just use reader view), but now I’ve seen them pull the first paragraph and then, only if the images loaded (or an additional call to a tracking pixel succeeded, for instance), call their API to get the remainder of the content.
The other way it’ll fail is if they serve content and ads from the same server/DNS hostname.
Switch? I never left! 🗿🗿🗿