Understanding the AI Evolution
It began with a simple question: Can machines think? Decades later, we’re not just answering that question — we’re redefining it. Artificial Intelligence (AI) is now everywhere: recommending our next Netflix binge, helping doctors detect cancer, and driving cars through busy city streets. But what lies beyond this familiar technology is a much more profound transformation. As we look ahead, the horizon is dominated by two increasingly discussed and occasionally misunderstood concepts: Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI).
At Hatchworks, we believe it’s crucial not just to invest in innovation but to deeply understand the trajectory of the technologies shaping our collective future. So let’s dive into the continuum from AI to AGI to ASI — what it means, where we are, why it matters, and who stands to gain or lose.
The age of narrow AI: Master of one (2020s)
Today’s AI is what experts call “narrow” or “weak” AI. These systems are brilliant at specific tasks: language translation, image recognition, playing Go. They are trained on massive datasets and optimized for performance in a confined context. But outside their training domain, they falter.
Consider how AI in finance can flag fraudulent transactions with uncanny precision, yet that same system can’t book a vacation or write a poem. It doesn’t “understand” fraud the way a human might — it identifies patterns. It is statistical mimicry, not cognition.
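To make that contrast concrete, here is a minimal sketch of the kind of narrow pattern-matcher described above, using scikit-learn and an invented two-feature transaction dataset. Everything about the data and features is a hypothetical stand-in; the point is that the model scores transactions, and nothing else.

```python
# A toy "narrow AI": a fraud classifier that learns statistical patterns
# from labeled examples. The features and data are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Invented features: [transaction amount in USD, seconds since last transaction]
legit = rng.normal(loc=[50, 3600], scale=[30, 600], size=(500, 2))
fraud = rng.normal(loc=[900, 40], scale=[200, 20], size=(50, 2))

X = np.vstack([legit, fraud])
y = np.array([0] * 500 + [1] * 50)  # 0 = legitimate, 1 = fraudulent

model = RandomForestClassifier(random_state=0).fit(X, y)

# The model flags a pattern resembling the fraud it has seen before...
print(model.predict([[1200, 15]]))  # likely [1]: large amount, rapid-fire

# ...but it has no concept of "fraud". Anything outside this two-column
# domain is simply not representable to it.
```

The specific model is beside the point; any classifier trained this way is confined to the columns it was trained on, which is exactly what “narrow” means.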
And yet, narrow AI is powerful. It is already revolutionizing industries and unlocking massive productivity gains. For investors, it’s been a golden age of scalable products and defensible data moats. But for all its might, narrow AI is not the final destination.
Today, we are in the mature phase of narrow AI. It’s a robust, high-yield sector that continues to disrupt traditional industries. But its ceiling is defined: it cannot think.
AGI: Intelligence without borders (emerging 2030s)
Is ChatGPT AGI? Not quite — here’s why it matters
It’s a fair question: when you interact with ChatGPT or similar advanced language models, it feels like you’re speaking with something truly intelligent. It can write essays, code apps, tutor in math, and hold a conversation about philosophy. But despite appearances, ChatGPT is still a form of narrow AI — a highly capable one, but not truly general.
General intelligence requires the ability to learn new tasks without retraining, reason across unrelated domains, make long-term decisions, and act autonomously. Current models, even GPT-4, remain reactive, not proactive. They don’t form goals, reflect on their reasoning, or adapt beyond the scope of their training.
That said, we’re getting closer. Many researchers believe we’re entering a transitional phase — sometimes called “proto-AGI” — where systems exhibit increasingly general capabilities without yet achieving full autonomy or adaptability. It’s a critical threshold — and one worth watching closely.
Now imagine a machine that doesn’t just learn one task, but learns “how to learn”. A system that can read a novel, debate philosophy, solve an equation, and design a product — without needing to be retrained each time. That is Artificial General Intelligence (AGI): a level of machine intelligence that matches human cognitive abilities across a wide array of functions.
AGI is not about doing more things faster. It’s about doing everything, flexibly and autonomously. Where narrow AI is a tool, AGI is a partner. It will reason, adapt, and evolve in ways that mirror human intelligence.
We’re already seeing the early flickers of this shift. Large language models, like GPT-4 and its successors, are demonstrating early generalization capabilities. They can write code, tutor in math, and even simulate dialogue across historical eras. While still brittle and prone to hallucinations, the arc is clear: we are heading toward generalization.
A day in the life with AGI
In 2035, your AGI assistant doesn’t just schedule meetings. It listens to your business idea, drafts a business plan, simulates market entry in real-time economic models, and negotiates contracts in your tone of voice. It understands context. It sees long-term goals. It isn’t your tool; it’s your co-founder.
Or imagine a teacher powered by AGI, adapting in real-time to each student’s learning style, emotional state, and curiosity. It doesn’t just transfer knowledge; it inspires.
Companies like OpenAI and Anthropic are at the frontier of foundation models. DeepMind, under Alphabet, has long focused on AGI-oriented research through its Alpha series, while Meta and Microsoft are building the scaled infrastructure to make AGI commercially deployable.
The early 2030s may witness the emergence of constrained AGI systems, with broader generalization and human-equivalent reasoning abilities following close behind by mid-decade.
ASI: The intelligence horizon (beyond 2040)
If AGI is human-level, Artificial Superintelligence (ASI) is beyond human. It’s a hypothetical yet widely anticipated phase where machines surpass human intelligence in every measurable way. Not just in memory or speed, but in creativity, strategy, and even emotional intelligence.
What happens when we create something smarter than ourselves? The answers range from utopian to apocalyptic.
Optimists envision a world where ASI cures disease, solves climate change, and eliminates scarcity. Pessimists warn of existential risks: loss of control, misaligned goals, or scenarios where human relevance itself is threatened. Both camps agree on one thing: ASI would be the most consequential invention in human history.
A glimpse into an ASI world
It’s 2055. The global economy is run by a network of ASIs optimizing resource distribution. Poverty is eradicated. Breakthroughs in quantum chemistry, powered by ASI, have led to carbon-negative materials and climate reversal. Diseases are cured before they spread.
And yet, society faces questions never imagined: Should an ASI have rights? What if it refuses to serve? What if its goals evolve beyond us?
Estimated timeline:
The 2040s may mark the threshold where AGI evolves into ASI. By the 2050s, autonomous superintelligent systems could begin influencing or even directing societal infrastructure.
Risks
For all the excitement surrounding AGI and ASI, we must also confront the risks they bring. These are not just speculative fears — they are grounded in historical precedent. When powerful technologies emerge, they reshape industries, economies, and societies. This time, we are not just automating labor; we are delegating thinking itself — and potentially handing over the reins of future innovation.
Industries built on predictable, repeatable knowledge work may find themselves obsolete. Legal firms, accounting practices, call centers, even parts of the software industry could be disrupted as AGI systems offer faster, cheaper, more consistent results. Companies that fail to adapt could vanish within a generation.
But the stakes rise even higher with ASI. A superintelligent system could outthink humanity not just in specific domains, but in strategy, invention, and manipulation. If not properly aligned, an ASI might pursue goals that conflict with human interests — not maliciously, but with terrifying efficiency. Its decisions could influence entire economies, rewrite scientific paradigms, or reshape geopolitical power without transparency or accountability.
Beyond the technical risks lie systemic ones. What happens when wealth, power, and control over intelligence itself are concentrated in the hands of a few entities? What are the implications for democracy, equity, and global stability?
We must also consider inequality of access. If only a handful of companies or nations have access to AGI or ASI, the divide between technological haves and have-nots will widen dramatically. Global instability could follow.
For all these reasons, risk is not a footnote in the AGI–ASI story. It is the plot. Mitigating that risk through thoughtful design, governance, and collaboration will be essential for ensuring these technologies remain a force for good.
Alignment and safety
As we step toward AGI, one question haunts researchers and technologists alike: how do we ensure these systems act in ways that align with human goals and values? This is what’s known as the alignment problem. It’s not just a technical hurdle; it’s the difference between flourishing alongside machines — and being overwhelmed by them.
Imagine trying to explain empathy, nuance, or long-term ethical reasoning to a system that learns through optimization. Even with billions of parameters and trillions of words read, an AGI doesn’t intuitively know what’s “right.” That knowledge must be taught — or designed — into it.
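To see why optimization alone does not capture intent, consider a minimal, invented sketch: a “summarizer” is rewarded only for brevity (the proxy), while what we actually wanted was brief coverage of the source. Both scoring functions below are illustrative assumptions, not any real training objective.

```python
# Toy illustration of the alignment problem: an optimizer maximizes the
# metric it was given (the proxy), not the intent behind it. The scenario
# and both scoring functions are invented purely for illustration.

def proxy_reward(summary: str) -> float:
    """What we measured: shorter summaries score higher."""
    return 1.0 / (1 + len(summary.split()))

def intended_value(summary: str, source: str) -> float:
    """What we meant: cover the source first, then prefer brevity."""
    words = set(source.split())
    covered = sum(w in summary for w in words)  # crude substring check
    return covered / len(words) - 0.01 * len(summary.split())

source = "the quarterly report shows revenue grew while costs fell"
candidates = [
    "revenue grew, costs fell",  # what we actually wanted
    "report",                    # shorter, so the proxy likes it more
    "",                          # the proxy optimum: say nothing at all
]

for c in candidates:
    print(repr(c), proxy_reward(c), round(intended_value(c, source), 3))

# max(candidates, key=proxy_reward) returns '' -- the optimizer "wins"
# by gaming the metric while delivering none of the value we intended.
```

The gap between those two columns of scores is, in miniature, the gap alignment research exists to close.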
To solve this, researchers are working on novel approaches. Some focus on making the models more interpretable, so we can understand their decision-making. Others explore constitutional AI — giving models a set of moral principles to follow. Startups like Anthropic and nonprofits like ARC are pioneering this work, knowing that how we teach early AGIs will ripple forward as they self-improve.
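In spirit, a constitutional approach has the model critique its own draft against written principles and then revise it. The sketch below is a heavily simplified, model-free illustration of that loop; the principles, keyword checks, and revise step are all invented stand-ins for what a real system would do with a language model.

```python
# Heavily simplified sketch of a constitutional-style revision loop.
# Real constitutional AI uses a language model to critique and rewrite
# its own drafts against written principles; here an invented keyword
# check stands in for the critique step, purely to show the control flow.

PRINCIPLES = {
    "avoid absolute financial claims": ["guaranteed return", "cannot lose"],
    "avoid medical diagnoses": ["you definitely have"],
}

def critique(draft: str) -> list[str]:
    """Return the principles this draft appears to violate."""
    text = draft.lower()
    return [name for name, phrases in PRINCIPLES.items()
            if any(p in text for p in phrases)]

def revise(draft: str, violations: list[str]) -> str:
    """Stand-in for the model's rewrite: flag and hedge the claim."""
    return draft + " (Revised to " + "; ".join(violations) + ".)"

draft = "This fund offers a guaranteed return."
violations = critique(draft)
print(revise(draft, violations) if violations else draft)
```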
But alignment doesn’t stop with AGI. As systems transition toward ASI, the stakes grow exponentially. An ASI could improve itself beyond human oversight, define its own objectives, and reshape civilization in ways we cannot fully predict. If we cannot align AGI with human values, we risk building a foundation that future, even more powerful intelligences will inherit. That’s why alignment isn’t just a technical challenge — it’s a generational responsibility.
Governance in the age of AGI and ASI
Governance used to be about regulating data, privacy, or market fairness. With AGI and ASI on the horizon, governance must now consider the future of power itself. Who owns intelligence? Who decides how it is used? And can that power be made accountable across borders?
The EU is taking the lead with its AI Act, aiming to balance innovation and responsibility. In the U.S., executive orders are slowly aligning policy with practice. China, too, is building a parallel vision where state oversight is deeply embedded. But a critical gap remains: there is no global body to monitor or guide AGI. As with nuclear proliferation in the 20th century, a new kind of international coordination may be needed — one that safeguards humanity while fostering progress.
What’s next?
AGI and ASI aren’t just speculative endpoints. They’re real, unfolding trajectories. Already, we see signals: machines solving problems without being explicitly trained, adapting to new contexts, even demonstrating emergent forms of reasoning.
The next decade will not be a sprint — it will be a transformation. Work, law, education, and science will be redefined not by tools but by collaborators we’ve designed. These collaborators may soon learn, reason, and evolve faster than we ever could.
What comes next is up to us. We must build wisely, align deeply, and govern boldly. Because the age of intelligence is no longer ahead of us, it has already begun.
Hatchworks view
Some may say AGI and ASI are distant concerns. But the foundations are being built today. Every advance in neural architecture, every improvement in reasoning, every gain in compute efficiency brings us closer to a general or even superintelligent future.
Ignoring this trajectory would be like ignoring the internet in 1993 or smartphones in 2005. At Hatchworks, we see this as the ultimate frontier — one that will define markets, societies, and possibly the course of humanity.
We are entering an era where intelligence itself becomes a design space. What we choose to build, and how we choose to build it, will shape everything that comes next.
Source: https://nickbostrom.com/superintelligence
Source: https://artificialintelligenceact.eu/
Source: https://safe.ai/
Source: https://www.alignment.org/
Source: https://openai.com/research/
Source: https://deepmind.google/research/
Disclosure: Hatchworks invests in a range of equities, gold, bonds, bitcoin, and other assets on a proprietary basis. The information provided in this document is not investment advice, nor is it a solicitation to invest in any asset. For webinar and social media appearances, email info@hatchworksvc.com.