
Why did Stanford take down its Alpaca AI chatbot?

Answer: "Hallucinations," among other things.

No, the Alpaca AI chatbot didn’t get high and start hallucinating, although you might be forgiven for thinking that after reading Stanford’s reasons for axing it. University researchers cited “hallucinations” among their reasons for taking Alpaca offline after launching a public demo last week.

According to Gizmodo, “hallucinating” is the term used when an artificial intelligence presents misinformation as fact. In other words, the bot asserts something is real when it isn’t — a loose analogy to human hallucination that gave the phenomenon its name.

Beyond the hallucinations, rising costs and safety concerns reportedly factored into the decision. “The original goal of releasing a demo was to disseminate our research in an accessible way. We feel that we have mostly achieved this goal, and given the hosting costs and the inadequacies of our content filters, we decided to bring down the demo,” said a spokesperson for Stanford’s Human-Centered Artificial Intelligence institute. The bot was built on Meta’s LLaMA model — hence the name Alpaca.