

Yes, it’s the exact same practice.
The main difference, though, is that “Just Walk Out” was financially insignificant to Amazon as a whole. So Amazon churns along while that one small business unit gets quietly shut down.
For the company in this post, though, there’s no trillion-dollar business subsidizing the losses from this AI scheme.
They’re actually only about 48% accurate, meaning they’re wrong more often than right: flipping a coin would beat them by 2 percentage points.
Wait, what are the Bayesian priors? Are we assuming the baseline is 50% true and 50% false? And what is the error rate for false positives versus false negatives? All of these matter for determining, after the fact, how much probability to assign to the test being right or wrong.
Put another way, imagine a stupid device that just says “true” literally every time. If I hook that device up to a person who never lies, then that machine is 100% accurate! If I hook that same device up to a person who lies only 5% of the time, it’s still 95% accurate.
So what do you mean by 48% accurate? That’s not enough information to do anything with.
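To make the point concrete, here’s a minimal Python sketch (the function names and every number are mine, purely illustrative): a single “accuracy” figure only tells you anything once you fix a base rate and split the errors into sensitivity and specificity.

```python
# Why "X% accurate" is meaningless on its own: accuracy depends on the base
# rate of true statements plus two separate error rates. Illustrative numbers.

def posterior_true(prior_true: float, sensitivity: float, specificity: float) -> float:
    """P(statement is actually true | device says 'true'), via Bayes' rule."""
    p_true_positive = sensitivity * prior_true               # says true, is true
    p_false_positive = (1 - specificity) * (1 - prior_true)  # says true, is false
    return p_true_positive / (p_true_positive + p_false_positive)

def overall_accuracy(prior_true: float, sensitivity: float, specificity: float) -> float:
    """Fraction of all statements the device labels correctly."""
    return sensitivity * prior_true + specificity * (1 - prior_true)

# The stupid always-says-"true" device: catches every truth (sensitivity 1.0),
# catches no lies (specificity 0.0). Its "accuracy" is just the subject's honesty.
for honesty in (1.0, 0.95, 0.5):
    print(f"base rate {honesty:.0%} -> accuracy {overall_accuracy(honesty, 1.0, 0.0):.0%}")
# base rate 100% -> accuracy 100%
# base rate 95%  -> accuracy 95%
# base rate 50%  -> accuracy 50%

# With a 50/50 prior and a symmetric 48%-accurate device (sens = spec = 0.48),
# a "true" verdict only buys you a 48% posterior:
print(f"posterior = {posterior_true(0.5, 0.48, 0.48):.2f}")  # 0.48
```

Same device, three different accuracy numbers, just from changing the base rate. That’s the whole objection in twenty lines.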
Teslas will (allegedly) start on a small, low-complexity street grid in Austin; exact size TBA. Presumably they’re mapping the shit out of it and throwing compute power at analyzing their existing data for that postage stamp.
Lol where are the Tesla fanboys insisting that geofencing isn’t useful for developing self driving tech?
Wouldn’t a louder room raise the noise floor, too, so that any quieter signal couldn’t be extracted from the noisy background?
If we were to put a microphone and recording device in that room, could any amount of audio processing extract the sound of the small server from the background noise of all the bigger servers? Because if not, then that’s not just an auditory processing problem, but a genuine example of destruction of information.
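Mostly yes, with one caveat: if the quiet source is stationary (a constant fan hum, say), long-term averaging can pull it back out from under the single-shot noise floor; only a non-repeating sound buried below the floor is truly gone. Here’s a minimal numpy sketch of that caveat (all numbers invented for illustration): a tone roughly 37 dB below broadband noise is invisible in one FFT frame but obvious after averaging 500 frames.

```python
# Illustrative sketch: a weak but stationary tone hides below the per-frame
# noise floor, yet emerges once many power spectra are averaged, because the
# tone adds the same energy to its bin every frame while the floor's random
# fluctuation averages down like 1/sqrt(N).
import numpy as np

rng = np.random.default_rng(0)
fs, n = 48_000, 4_096          # sample rate, FFT length
k = 120                        # tone lands exactly on bin k (no spectral leakage)
tone_freq = k * fs / n         # = 1406.25 Hz
t = np.arange(n) / fs

def frame_spectrum() -> np.ndarray:
    """One frame: loud noise (sigma = 1) plus a tiny tone (amplitude = 0.02)."""
    x = rng.normal(0.0, 1.0, n) + 0.02 * np.sin(2 * np.pi * tone_freq * t)
    return np.abs(np.fft.rfft(x)) ** 2

def z_score(spec: np.ndarray) -> float:
    """How far the tone's bin sits above the surrounding noise floor."""
    noise = np.delete(spec, k)  # every bin except the tone's
    return (spec[k] - np.mean(noise)) / np.std(noise)

one = frame_spectrum()
avg = np.mean([frame_spectrum() for _ in range(500)], axis=0)

print(f"single frame:  tone bin {z_score(one):5.1f} sigma above floor")  # buried
print(f"500-frame avg: tone bin {z_score(avg):5.1f} sigma above floor")  # stands out
```

So whether it’s “destruction of information” depends on whether the small server’s sound repeats. A steady hum survives averaging; a one-off transient drowned the same way really is unrecoverable.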
That’s never really been true. It’s a cat and mouse game.
If Google actually used its 2015 or 2005 algorithms as written, but on a 2025 index of webpages, that ranking system would be dogshit because the spammers have already figured out how to crowd out the actual quality pages with their own manipulated results.
Tricking the 2015 engine with 2025 SEO techniques is easy. The problem is that Google hasn’t actually been on the winning side of ranking for quality in maybe 5-10 years; it quietly outsourced ranking to the signals of the big user-voted sites: Pinterest, Quora, Stack Overflow, Reddit, even Twitter to some degree. If a responsive result ranks highly on those sites, it’s probably a good result. And that methodology held up just long enough for each of those services to drown in SEO spam of its own, so they’re all much worse than they were in 2015, and ranking search off those sites no longer produces good results.
There’s no turning back. We need to adopt new rankings for the new reality, not try to return to when we were able to get good results.
Stingrays don’t do shit for this. They’re mostly about real-time location data, gathered by tricking your phone into reporting its location to a fake cell tower controlled by an adversary. That doesn’t get at the data on your phone, and even if someone used the fake tower to man-in-the-middle the connection, by default pretty much all of a phone’s Internet traffic is already encrypted from the ISP’s point of view.
Breaking disk encryption on devices is a completely different world of technology, tools, and techniques.