Mechanical Turks Are for Market Risk Only
In recent news, AI startup Builder.ai, formerly Engineer.ai, entered bankruptcy proceedings. But the first time it made headlines was in 2019, when former engineers of the company alleged that its claims of using AI to fully automate app building were exaggerated and that it was using human developers in the background: "This AI startup claims to automate app making but actually just uses humans".
The first instance of passing a human off as an artificial intelligence was the Mechanical Turk, a “fraudulent chess-playing machine constructed in 1770”. These days, Amazon offers a service with that name, providing cheap human labour for repetitive tasks that cannot yet be easily automated.
Let’s be clear: Faking AI capabilities with human labour in order to attract customers and investors is fraudulent, even if the plan is to use those investments to build out real AI capabilities.
Market Risk vs Product Risk
However, if you’re merely in the early stages of validating a business idea, it can make sense to postpone building the AI and simulate it with human labour instead. But only if the main risk is market risk. If you are supremely confident that you can build the AI part but just aren’t sure whether anyone will pay for it, it won’t hurt to test the waters the cheap way. But if there’s no guarantee that you can build it, faking it is a dangerous dead end.
That is the trap Builder.ai fell into: All product risk, no market risk. If you don’t need to answer whether people would pay for the real thing, don’t bother faking it.
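When market risk is the genuine uncertainty, on the other hand, the "fake" can stay deliberately thin: keep the customer-facing interface fixed and put a human team behind it until the real model exists. Here is a minimal sketch of that pattern, assuming a hypothetical app-building service; none of the names (AppSpec, AppBuilder, HumanBackedBuilder, AiBuilder) come from Builder.ai or any real product.

```python
# Wizard-of-Oz sketch: one stable interface, two interchangeable backends.
# All names here are hypothetical illustrations, not Builder.ai's architecture.
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class AppSpec:
    """What the customer asks for."""
    description: str


@dataclass
class BuildResult:
    """What the customer gets back."""
    artifact_url: str


class AppBuilder(ABC):
    """The interface customers integrate against; it never changes."""

    @abstractmethod
    def build(self, spec: AppSpec) -> BuildResult: ...


class HumanBackedBuilder(AppBuilder):
    """Early-stage stand-in: the spec goes onto a human team's work queue."""

    def __init__(self) -> None:
        self.queue: list[AppSpec] = []

    def build(self, spec: AppSpec) -> BuildResult:
        # In a real setup this would open a ticket and wait for a developer;
        # here we just record the request and hand back a placeholder URL.
        self.queue.append(spec)
        return BuildResult(artifact_url=f"https://example.com/builds/{len(self.queue)}")


class AiBuilder(AppBuilder):
    """The real implementation, built only once demand is confirmed."""

    def build(self, spec: AppSpec) -> BuildResult:
        raise NotImplementedError("worth building only after the market risk is retired")


if __name__ == "__main__":
    builder: AppBuilder = HumanBackedBuilder()
    print(builder.build(AppSpec(description="to-do app with user login")))
```

The point of keeping the interface stable is that whatever you learn during the human-backed phase, about demand, pricing, and willingness to pay, remains valid evidence once the AI replaces the humans.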
Always start with this question: What is the most significant uncertainty about our product? And how can we most effectively address it?