Quote of the Day

Matthew G. Saroff
3 min read · Jun 8


The Pitchmen Who Made Out like Bandits on Crypto — Leaving Mom-And-Pop Investors Holding the Bag — Are Precisely the Same People Who Are Beating the Drum for AI Today.

— Cory Doctorow

Mr. Doctorow is correct.

Much like crypto, there is no "there" there.

You have some pattern recognition, and nothing that meaningfully resembles intelligence, and the solution of the snake-oil salesmen is to keep shoving more dubious data at dubious "machine learning" algorithms and hope, as the joke goes, that there is a pony at the bottom of this pile of sh%$:

It didn’t happen.

The story you heard, about a US Air Force AI drone warfare simulation in which the drone resolved the conflict between its two priorities (“kill the enemy” and “obey its orders, including orders not to kill the enemy”) by killing its operator?

It didn’t happen.

The story was widely reported on Friday and Saturday, after Col. Tucker "Cinco" Hamilton, USAF Chief of AI Test and Operations, included the anecdote in a speech to the Future Combat Air System (FCAS) Summit.

But once again: it didn’t happen:

“Col Hamilton admits he ‘mis-spoke’ in his presentation at the FCAS Summit and the ‘rogue AI drone simulation’ was a hypothetical “thought experiment” from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation,” the Royal Aeronautical Society, the organization where Hamilton talked about the simulated test, told Motherboard in an email.

The story got a lot more play than the retraction, naturally. “A lie is halfway round the world before the truth has got its boots on.”

Why is this lie so compelling? Why did Col. Hamilton tell it?

Because it’s got a business model.


Tech critic Lee Vinsel coined the term “criti-hype” to describe criticism that incorporates a self-serving commercial boast. For years, critics of Facebook and other ad-tech platforms accepted and repeated the companies’ claims of having “hacked our dopamine loops” to control our behavior.

These claims are based on thin, warmed-over notions from the largely deprecated ideas of behaviorism, which nevertheless bolstered Facebook’s own sales-pitch:


If the problem with “AI” (neither “artificial,” nor “intelligent”) is that it is about to become self-aware and convert the entire solar system to paperclips, then we need a moonshot to save our species from these garish harms.

If, on the other hand, the problem is that AI systems just suck and shouldn’t be trusted to fly drones, or drive cars, or decide who gets bail, or identify online hate-speech, or determine your creditworthiness or insurability, then all those AI companies are out of business.

Take away every consequential activity through which AI harms people, and all you’ve got left is low-margin activities like writing SEO garbage: lengthy reminiscences about “the first time I ate an egg” that help an omelette recipe float to the top of a search result. Sure, you can put 95 percent of the commercial illustrators on the breadline, but their total wages don’t rise to one percent of the valuation of the big AI companies.


The story that AI sophistication is on a screaming hockey-stick curve headed to the moon lets companies who replace competent humans with shitty algorithms claim that we are simply experiencing a temporary growing pain — not a major step towards terminal ensh%$tification.

(%$ mine)

This is yet another scam from Silicon Valley.

Unless and until the company founders who create these fraudulent businesses, and more importantly the VCs who run pump-and-dump frauds using them, are criminally prosecuted, this will continue.

This is not sustainable, and the longer we wait, the more destructive and disruptive the final reckoning will be.


