Select Page

What Amara’s Law Can Tell Us About AI

Throughout 2025, AI dominated headlines and keynote slots — and the hype shows little sign of slowing down. There are bold predictions about productivity gains, promises of transformation across industries and almost every app now has some kind of AI-assisted feature. 

But as with any technological shift of this scale, it isn’t always smooth sailing. AI still suffers from hallucinations and bias. Plenty of pilots have failed. And many businesses are still struggling to achieve the ROI they expected. Consequently, there is growing speculation that the AI boom is actually an AI bubble.

The reality is that tech predictions almost always miss the mark, especially when they concern something as far-reaching as AI. Amara’s Law gives us a useful way to visualize why this is the case.

What is Amara’s Law?

Old predictions about the future can often seem absurd in hindsight. In 1959, the U.S. Postmaster General predicted that mail would be delivered from New York to Australia in a matter of hours… using guided missiles. Futurists in the 1960s and 1970s expected us to have robot butlers, flying cars and personal jetpacks by now.

But for every overly optimistic prediction, there’s a doubting voice that has been proven emphatically wrong. In a 1966 edition of Time magazine, “remote shopping” was predicted to flop. In 2003, Steve Jobs stated that the subscription model for music was “bankrupt.” And in 1995, Bob Metcalfe, co-inventor of Ethernet, predicted that the internet would collapse. He famously (and literally) ate his words at the International World Wide Web Conference just two years later.

As president of the Institute for the Future (IFTF), Roy Amara was very well acquainted with tech predictions and hype cycles. In 1978, the Boston-born futurologist observed a pattern that is now known as Amara’s Law:

“We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”

This observation has played out time and time again. Perhaps the best example is the internet. Short-term overestimations led to the dot-com bubble. But 25 years later, the internet has had a greater impact than most could have imagined in the ‘90s. Mobile phones, social media and cloud computing all followed a similar trajectory. AI, particularly generative AI and agentic AI, seems to be the latest breakthrough technology to demonstrate Amara’s Law.

A chart depicting Amara's Law, showing the mismatch between the expected impact of technology and its actual impact over time.

The Gartner Hype Cycle expands on Amara’s Law, adding phases such as the “Peak of Inflated Expectations” and the “Slope of Enlightenment”. According to Gartner’s analysis, 2025 was the year Gen AI entered the “Trough of Disillusionment”, as organizations encountered disappointing returns, failed experiments and governance challenges. Agentic AI is still in the previous phase, the Peak of Inflated Expectations. 

Both Amara’s Law and the Gartner Hype Cycle indicate that we’re reaching the point where we shift from overestimation to underestimation. That’s where the real impact becomes evident.

 

What is Amara’s Law?

With any emerging technology, it’s important to take a reality check and assess the real value. In 2025, there were clear indications that AI was not meeting short-term expectations:

  • Less than 30% of CEOs are satisfied with the return on their AI investments (Gartner).
  • 95% of task-specific Gen AI pilots fail (MIT).
  • 42% of companies abandon the majority of their AI initiatives before they reach production (S&P Global Market Intelligence).

These statistics indicate a mismatch between the enthusiasm for AI and the ability to extract value from it. But they don’t tell the whole story. Failed pilots and a lack of ROI don’t automatically mean that AI is broken or not fit for purpose. Rather, they reveal that we are still in the first phase of Amara’s Law: overestimated impact.

This is partly due to what I call “Artificial Confidence” — a tendency to put too much faith in the capabilities of artificial intelligence without laying the necessary groundwork. This happens for two main reasons. First, AI is so versatile that it can feel like a cure-all solution. Second, companies fear they will get left behind if they are not quick to adopt AI. 

We have seen numerous high-profile blunders: support chatbots misleading customers, fictitious citations appearing in published work, and developers generating code that contains vulnerabilities. Failures like these are not always bugs within the AI itself. They are symptoms of trying to force a short-term rollout without the long-term infrastructure to support it. AI is a tool; its effectiveness depends on how it is wielded.

Report
The State of Digital Quality in AI in 2025

See how AI is changing the way teams build and functionally test software and digital experiences. 

Read the Report Now

Creating the conditions for AI success

Amara’s Law and the Gartner Hype Cycle both point toward the same conclusion: we need to think longer term. As with any groundbreaking technology, the true potential is only revealed once we have transformed the structure around it. Electric cars are not much use without a sufficient charging infrastructure. Smartphones wouldn’t be as useful without an ecosystem of apps to expand their functionality. 

We’re witnessing the same dynamic with AI. Employees using generic AI tools without guidance are unlikely to see meaningful productivity gains. Even customized, task-specific AI solutions will fail to move the needle if existing workflows don’t adapt. The real long-term value is in redesigning processes and roles around AI-human collaboration, not just replacing people.

While each AI journey will look a little different, here are some important steps organizations can take to start realizing the full potential of AI:

  • Establish a strong AI governance framework: Create clear policies and guardrails for ethical use, data privacy and responsible deployment of AI tools across the organization. This includes defining who is accountable when a mistake is made.
  • Use high-quality training data: Inconsistent data leads to unreliable output. For the best results, training data must be clean, complete and of high quality.
  • Foster AI literacy: Providing employees with training can help them understand how to interact effectively with AI and how to interpret and evaluate its output.
  • Adjust core workflows: Identify areas where AI can drive efficiency and then adapt processes to integrate it effectively. Conversely, identifying processes where AI is not suitable can help avoid costly disruption.
  • Adapt QA and testing for AI systems: Traditional functional testing is not enough for non-deterministic models. It’s crucial to implement advanced AI testing methodologies, such as red teaming, to uncover bias, toxicity and edge-case failures before they reach production.

AI will exceed our expectations in the long term

Amara’s Law is just one of the indicators that AI is reaching an inflection point. The short-term impact may have been overestimated, and there will surely be many more failed pilots along the way. But in the long term, AI will influence our lives in ways that no one can yet predict. The organizations that will emerge as winners are the ones who take a considered approach, build the right infrastructure and invest in quality assurance.

For now, the best approach is to test your AI solutions in the real world, with real people and diverse datasets to get feedback on the quality of responses. This will help you identify and remediate bias, toxicity and hallucinations so you can release AI-driven products with confidence. Contact us to learn more.

Want to see more like this?
Published On: January 13, 2026
Reading Time: 7 min

What Amara’s Law Can Tell Us About AI

From overhyped to underestimated: Why AI requires long-term thinking.

Why the Human Element of Testing Is Essential

Customers are human. Discover why human testers are the essential safety net for quality AI.

Fintech Monetization: a UX Balancing Act

In a bid to generate new revenue streams, fintechs are putting ads in their apps. What does this mean for the user experience?

AI: The Apex Tech Predator

Software ate the world; now AI is devouring software. Why faster code requires smarter testing.

5 iGaming Test Cases That Pay Off

Build your comprehensive test plan for the highly regulated iGaming sector

Key Points to Consider for AI Data Collection

Source the right data from the right people under the right conditions
No results found.