What B2B Leaders Need to Know About AGI
The Exponential Curve We're All Standing On
There's a conversation happening right now in AI research labs that most B2B leaders aren't paying attention to.
It's not about chatbots or gen-AI or any of the productivity tools flooding LinkedIn feeds.
It's about AGI. Artificial General Intelligence.
The point at which machines can perform any cognitive task that a human can do.
And according to some of the world's leading AI researchers, we might be there by 2028.
Not 2050. Not some distant sci-fi future. Potentially within the next three years.
I recently listened to a podcast featuring Shane Legg, Chief AGI Scientist and Co-Founder of Google DeepMind. Shane has been talking about AGI since 2009, back when it was considered the lunatic fringe of AI research. He's credited with popularising the term itself.
His prediction?
A 50/50 chance of minimal AGI by 2028. That's the point where AI systems can do all the cognitive things humans typically can do. Full AGI, where machines match the entire spectrum of human cognition, including the extraordinary feats of Einstein-level thinking? He's putting that within a decade.
And here's what keeps me up at night: most of the C-suite leaders I speak with haven't internalised what this means for their businesses.
They tried ChatGPT a year ago, found it limited, and moved on. They're treating AI as another tool in the stack, like adding a new feature to HubSpot or upgrading their sales engagement platform.
But AGI isn't a tool. It's a fundamental restructuring of how cognitive work gets done.
And the companies that don't understand this distinction are about to get left behind.
This article is my attempt to bridge that gap. To take the conversations happening in AI research and translate them into something actionable for the B2B leaders navigating go-to-market transformation, revenue operations, and the messy reality of scaling a mid-market company.
Because here's the truth: whether you believe AGI is coming in three years or ten, the trajectory is clear. AI systems are becoming dramatically more capable, and that capability is accelerating. The question isn't whether this will impact your business. It's whether you'll lead the transformation or get disrupted by it.
Let's start by defining what we're actually talking about.
Understanding AGI (And Why Definitions Matter)
What Is AGI?
AGI stands for Artificial General Intelligence, but the name alone doesn't tell you much. The challenge is that different people use the term to mean wildly different things, which leads to confusion when people debate timelines or implications.
Shane Legg's definition is the clearest I've encountered: AGI is an artificial agent that can at least do the kinds of cognitive things people can typically do.
Not extraordinary things. Not Einstein-level physics or Mozart-level composition. Just the cognitive tasks that we'd expect most people to be capable of performing.
That's what he calls "minimal AGI." It's the baseline. The point where we can no longer say "well, it can't do this basic thing that any person could do."
The key word there is "cognitive." We're not talking about physical tasks like plumbing or construction. We're talking about thinking, reasoning, learning, understanding, and communicating.
And here's the uncomfortable truth: today's AI systems are already superhuman at some of these tasks.
They can speak 150+ languages fluently. They have phenomenal general knowledge about the world. They can write code, analyse data, and explain complex concepts better than most people.
But they're still weak in other areas. They struggle with continual learning, the ability to keep acquiring new skills over extended periods. They're not great at visual reasoning, like understanding spatial relationships or counting nodes in a graph. They can't yet do the kind of learning you'd do when starting a new job, where you don't know everything on day one but gradually become proficient.
These aren't fundamental blockers, though. They're engineering challenges with clear paths to solutions. Researchers know what needs to be built. It's just a matter of time and iteration.
Beyond Minimal AGI: The Levels of Intelligence
Minimal AGI is just the starting point. Once we cross that threshold, there are two more levels to consider.
Full AGI is the point where AI systems can do everything humans can do cognitively, including the extraordinary. Writing groundbreaking symphonies. Developing new theories in physics. Creating literature that moves people to tears.
Full AGI means we've truly replicated the entire spectrum of human cognitive capability, from the everyday to the exceptional.
But then there's the level beyond that.
Artificial Superintelligence (ASI).
This is where things get genuinely difficult to wrap your head around. ASI is intelligence that goes far beyond what humans are capable of achieving, regardless of how smart or talented the human is.
And here's why Shane Legg believes ASI is inevitable:
Human brains are extraordinary, but they're limited by physics. Your brain weighs about 1.4 kilograms and consumes roughly 20 watts of power. Signals travel through it at about 30 metres per second via electrochemical processes, and neurons in the cortex fire at roughly 100-200 Hz.
Now compare that to a modern data centre:
- Instead of 20 watts, you have 200 megawatts
- Instead of 1.4 kilograms, you have several million kilograms of compute infrastructure
- Instead of 100-200 Hz, you have clock rates of around 10 billion Hz
- Instead of 30 m/s signal propagation, you have the speed of light: 300,000 kilometres per second
That's a difference of six to eight orders of magnitude across multiple dimensions simultaneously.
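If you want to sanity-check that claim, the arithmetic is short. Here's a rough sketch in Python using the ballpark figures above (the mass figure is my stand-in for "several million kilograms"):

```python
import math

# Ballpark figures from above; the data-centre numbers are rough,
# order-of-magnitude estimates, not a spec.
brain = {"power_w": 20, "mass_kg": 1.4, "signal_mps": 30, "clock_hz": 150}
data_centre = {"power_w": 200e6, "mass_kg": 5e6, "signal_mps": 3e8, "clock_hz": 10e9}

for dim in brain:
    orders = math.log10(data_centre[dim] / brain[dim])
    print(f"{dim}: ~{orders:.0f} orders of magnitude")
# power_w: ~7, mass_kg: ~7, signal_mps: ~7, clock_hz: ~8
```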
We already see machines that can fly faster than any bird, lift more than any human, and see further than any eye. The idea that human intelligence represents the upper limit of what's cognitively possible doesn't make sense when you look at the physics.
So yes, ASI is coming. The only question is when.
Why This Matters for Timeline Discussions
Here's where definitions become critically important.
When someone says "AGI will be here in five years," what do they mean? Minimal AGI? Full AGI? Something that transforms the economy? Something conscious?
I've had conversations with people who violently disagree about AGI timelines, only to discover they're using completely different definitions. One person is talking about minimal AGI, another is talking about economic transformation, and a third is talking about consciousness. No wonder they can't agree.
This is why I'm using Shane Legg's framework throughout this article. It provides clarity:
- Minimal AGI: Can do cognitive tasks typical humans can do (potentially 2-5 years away)
- Full AGI: Can do the full spectrum of human cognition, including extraordinary feats (potentially within a decade)
- ASI: Goes beyond human cognitive limits entirely (timeline unclear, but likely follows full AGI relatively quickly)
For B2B leaders, the most relevant threshold is minimal AGI. That's the point where AI systems stop failing at basic cognitive tasks and start becoming genuinely reliable across a broad range of work.
And if the researchers are right, that might be closer than you think.
Where We Are Right Now (And Why Most Are Behind)
The Uneven Landscape of Current AI
If you tried ChatGPT or Claude a year ago and found it underwhelming, I have news for you: a year in AI is a loooong time.
The capabilities we have today are far beyond what existed 12 months ago. And the gap between "I tried it once" and "I'm using it daily in sophisticated ways" is massive.
Let me give you a concrete example from our work at ScaleStation.
Six months ago, we implemented AI-powered call analysis for a client's sales team. The system analyses every single sales call, tracks methodology adherence, identifies objection handling patterns, and measures talk-to-listen ratios.
Before this, their sales managers would carve out time each week to listen to maybe three or four call recordings if they were lucky. Feedback was generic and delayed by days. Reps had no idea what they were doing wrong until their next one-on-one.
Now? Every call gets analysed in real time. Managers get a dashboard showing exactly where each rep needs coaching. Feedback is immediate and specific.
The result? Their average deal cycle dropped by 18% within 90 days. Not because they changed their sales process, but because AI ensured every rep was actually following it.
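To make one of those metrics concrete, here's a minimal sketch of how a talk-to-listen ratio falls out of a diarised call transcript. The segment format and the 60% coaching threshold are illustrative assumptions, not the actual system we deployed:

```python
# Minimal sketch: compute a rep's talk-to-listen ratio from a
# diarised transcript. Segment format and the 60% threshold are
# illustrative assumptions.
segments = [
    {"speaker": "rep", "start": 0.0, "end": 60.0},
    {"speaker": "buyer", "start": 60.0, "end": 80.0},
    {"speaker": "rep", "start": 80.0, "end": 130.0},
]

rep_time = sum(s["end"] - s["start"] for s in segments if s["speaker"] == "rep")
total_time = sum(s["end"] - s["start"] for s in segments)
talk_ratio = rep_time / total_time

print(f"Rep talk ratio: {talk_ratio:.0%}")  # -> Rep talk ratio: 85%
if talk_ratio > 0.60:  # common coaching heuristic: let the buyer talk
    print("Flag for coaching: rep is dominating the conversation")
```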
That's where we are right now, today, with current technology.
And we're nowhere near minimal AGI yet.
What Current AI Can and Can't Do
Current AI systems are incredibly uneven in their capabilities. Understanding this unevenness is crucial for knowing where to invest and where to wait.
Where AI excels today:
- Language processing and translation across 150+ languages
- General knowledge and information retrieval
- Code generation and debugging
- Content creation and editing
- Data analysis and pattern recognition
- Structured reasoning and step-by-step problem solving
Where AI still struggles:
- Continual learning over extended periods
- Visual reasoning and spatial understanding
- Learning new skills the way humans do when starting a new role
- Maintaining consistent personality and memory across long interactions
- Understanding subtle context and social dynamics
- Genuine creativity that breaks from learned patterns
The gap is closing rapidly on most of these limitations. But right now, if you're relying on AI for any of the "struggles" list, you're going to have a frustrating experience.
The Problem with "I Tried It and It Didn't Work"
I hear this constantly: "We experimented with AI but it didn't really help us."
When I dig deeper, here's what I typically find:
- They tried it once or twice without systematic integration
- They expected it to work perfectly out of the box
- They didn't redesign their workflows to take advantage of it
- They tried it with tasks AI isn't good at yet
- They used a model from 6-12 months ago
It's like trying to use HubSpot without proper implementation, getting frustrated that it doesn't work, and concluding that CRM systems are overhyped.
The companies winning with AI right now share a few characteristics:
- They started experimenting 12+ months ago
- They iterate constantly and learn what works
- They redesign processes around AI capabilities rather than bolting it onto existing workflows
- They train their teams to work alongside AI, not against it
- They track metrics and improve systematically
These companies have built institutional knowledge that their competitors can't copy overnight. They know which use cases work, which don't, and how to get value quickly.
The companies that waited? They're starting from zero. And the gap widens every month.
The Cognitive Work That's Already Being Transformed
Let's be specific about what's changing right now, not in some hypothetical future.
Software engineering is the canary in the coal mine. Teams that previously needed 100 engineers are finding they need 20 who use advanced AI tools, and those 20 are more productive than the 100 used to be.
Content creation has been completely transformed. The bottleneck isn't writing anymore. It's strategy, insight, and knowing what to create.
Customer support is shifting from humans answering questions to humans training AI systems and handling edge cases.
Research and analysis that used to take days now takes hours. Synthesising information from dozens of sources is trivial.
Data entry and CRM management can be almost entirely automated. The question is whether your systems are clean enough to trust the automation.
Notice what these have in common? They're all purely cognitive work. Work you can do remotely with just a laptop, camera, and microphone.
Shane Legg uses this as a heuristic: if you can do your job entirely through a laptop without any physical interaction with the world, AI is probably coming for significant chunks of that work.
That doesn't mean your job disappears. It means it transforms.
The question is whether you're ready for that transformation.
The Path to Minimal AGI (And What It Means for B2B)
The Missing Pieces
We're not at minimal AGI yet, but the path is clearer than most people realize.
According to researchers at DeepMind and other leading labs, there are specific capabilities that need to be developed before we can say AI systems reliably match typical human cognitive ability:
1. Continual learning at human speed
Humans can start a new job and learn continuously over months and years. Current AI systems are mostly static. You train them on a dataset, and then they're frozen in time.
The solution involves combining multiple approaches: retrieval systems for storing new information, episodic memory architectures, and processes to train new knowledge back into underlying models over time.
This isn't science fiction. These are engineering problems with known approaches. It's just a matter of building and integrating them.
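To illustrate the retrieval piece, here's a toy sketch: the model's weights stay frozen, but new facts live in an external store and get fetched by similarity at question time. Real systems use learned embeddings; simple word overlap stands in for that here:

```python
# Toy retrieval memory: a frozen model can still "learn" if new
# facts are stored outside the weights and fetched by similarity.
def tokens(text: str) -> set[str]:
    return {w.strip(".,?!").lower() for w in text.split()}

def similarity(a: set[str], b: set[str]) -> float:
    return len(a & b) / max(len(a | b), 1)  # Jaccard overlap

memory: list[str] = []

def remember(fact: str) -> None:
    memory.append(fact)

def recall(query: str) -> str:
    q = tokens(query)
    return max(memory, key=lambda fact: similarity(tokens(fact), q))

remember("Acme Corp renewed its contract in March.")
remember("Globex is Acme Corp's main competitor.")
print(recall("When did Acme renew its contract?"))
# -> "Acme Corp renewed its contract in March."
```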
2. Improved visual and spatial reasoning
Current AI can recognize objects brilliantly. Cat, dog, car, person. No problem.
But ask it to reason about spatial relationships or count elements in a complex diagram, and it gets shaky. It doesn't understand perspective the way humans do. It can't mentally manipulate 3D objects or track movement through space reliably.
Again, this is a known problem with clear research directions. Better training data, architectural improvements, and multimodal learning approaches are all being actively developed.
3. More reliable reasoning under uncertainty
Humans are actually quite good at reasoning when information is incomplete or ambiguous. We make educated guesses, we reason by analogy, we use intuition built from experience.
Current AI systems can do some of this, but they're not consistent. They'll confidently make errors that seem bizarre to humans. They struggle when the "correct" answer depends on subtle context or common-sense knowledge that wasn't explicitly in their training data.
The solution involves better uncertainty quantification, improved calibration, and systems that can reason about what they don't know.
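"Calibration" has a concrete meaning here: a system that says it's 80% confident should be right about 80% of the time. A minimal sketch of the check, with invented predictions:

```python
# Calibration check: group predictions by stated confidence and
# compare claimed confidence to actual accuracy. The predictions
# are invented for illustration.
from collections import defaultdict

predictions = [  # (stated confidence, was the answer correct?)
    (0.9, True), (0.9, True), (0.9, False),
    (0.6, True), (0.6, False), (0.6, False),
]

buckets = defaultdict(list)
for confidence, correct in predictions:
    buckets[confidence].append(correct)

for confidence, outcomes in sorted(buckets.items()):
    accuracy = sum(outcomes) / len(outcomes)
    print(f"claimed {confidence:.0%}, actual {accuracy:.0%}")
# A persistent gap between claimed and actual is miscalibration.
```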
4. Long-term coherence and planning
If you're working with an AI system over weeks or months, you want it to remember previous conversations, learn your preferences, and maintain consistent understanding of your goals and context.
Current systems struggle with this. Each interaction is relatively isolated. They don't build genuine long-term models of you as a person or maintain coherent plans over extended periods.
This is changing with developments in memory architectures, user modeling, and multi-session learning. But it's not fully solved yet.
The Timeline Question
Shane Legg has maintained a 50/50 probability of minimal AGI by 2028 since 2009. That's now just three years away.
Other researchers have different timelines. Some think it's closer, some think it's further. But the consensus is clearly shifting toward "sooner rather than later."
What's remarkable is how consistent the forward progress has been. Every year, capabilities improve. Every year, the list of "things AI can't do" gets shorter.
And here's the critical insight: you don't need minimal AGI for massive business impact.
The AI systems we have right now, today, are already capable of transforming significant chunks of B2B operations. They're just not evenly distributed or well understood yet.
By the time we reach minimal AGI, where AI systems reliably handle any typical cognitive task, the companies that started building institutional knowledge years ago will have an insurmountable lead.
What Minimal AGI Actually Looks Like in Practice
Let's make this concrete. What does it mean when AI reaches minimal AGI?
It means you could give it any task that you'd give to a reasonably competent knowledge worker, and it would handle it reliably without surprising failures.
For a B2B company, that might look like:
Sales operations:
- Researching accounts and building target lists
- Drafting personalized outreach
- Qualifying inbound leads
- Updating CRM records
- Scheduling follow-ups
- Generating call summaries
- Identifying expansion opportunities
Marketing:
- Writing blog posts and case studies
- Analysing campaign performance
- Conducting competitor research
- Optimising landing pages
- Managing social media
- Creating presentation decks
- Developing content strategies
Customer success:
- Answering product questions
- Creating help documentation
- Analysing usage patterns
- Identifying at-risk accounts
- Drafting renewal proposals
- Conducting health checks
- Planning onboarding sequences
RevOps and analytics:
- Building reports and dashboards
- Cleaning and enriching data
- Forecasting pipeline
- Analysing conversion rates
- Identifying bottlenecks
- Designing process improvements
- Testing and documenting workflows
Notice these aren't exotic capabilities. They're everyday tasks that knowledge workers do right now.
The difference is scale and cost. Instead of hiring multiple people for each function, you might have one person working with multiple AI agents. Or one AI agent supporting multiple people.
The organizational structure shifts from "hire more people to do more work" to "build better AI workflows to amplify your best people."
That's a fundamental change in how B2B companies scale.
The Hard Truth About AI and Employment
The 80% Reality
Shane Legg was remarkably candid about this in the podcast: "Where prior you needed 100 software engineers, maybe you need 20, and those 20 use advanced AI tools."
That's not a hypothetical. That's happening right now in software engineering.
And it's coming to every other form of purely cognitive work.
Let me be direct: if your job consists entirely of cognitive tasks that can be done remotely with a laptop, significant portions of that work are going to be automated or dramatically transformed within the next five years.
That includes:
- Sales development representatives
- Junior analysts and coordinators
- Content writers and editors
- Data entry specialists
- Basic accounting and finance roles
- Junior designers
- Customer support agents
- Research assistants
- Project coordinators
This isn't about AI "replacing" these jobs entirely. It's about dramatically changing the ratio of human effort required.
Instead of needing 10 SDRs to generate enough pipeline, you might need 2 SDRs working with AI tools that handle research, initial outreach, and follow-up sequencing.
Instead of needing 5 customer support agents, you might need 1 agent training AI systems and handling complex escalations.
Instead of needing 3 analysts, you might need 1 strategic thinker who knows how to work with AI for data processing and initial analysis.
The work doesn't disappear. The headcount requirement drops dramatically.
Who's Protected (For Now)
This is the question everyone wants answered: which jobs are safe?
Shane Legg offered a useful heuristic: plumbers, electricians, and other trades that require physical presence and complex real-world interaction are relatively protected. Even when AI develops the cognitive understanding to do these jobs, robotics will take years to catch up. And even then, it'll take years more for the economics to make sense compared to human labor.
But there's a second category of protection that I think is underappreciated: human-to-human interaction where the "humanness" matters.
Think about:
- Executive coaching and leadership development
- High-touch customer relationships
- Creative direction and strategic vision
- Sales conversations where trust and rapport are crucial
- Therapy and counseling
- Teaching and mentoring where personal connection drives outcomes
AI might become capable of these tasks technically, but there's value in knowing there's a real human on the other side who genuinely cares and is invested in the outcome.
However, here's the uncomfortable reality: even in these "protected" roles, AI augmentation will make individual practitioners dramatically more productive. Which means you'll need fewer of them.
A sales executive using AI for all research, preparation, and follow-up can handle 3x the accounts they used to. Which means you need one-third the sales executives.
There's no escaping the math.
The Redistribution Challenge
Here's where this gets genuinely difficult from a societal perspective.
AI is going to dramatically increase productivity. The total economic pie will get bigger. We'll be able to produce more goods and services with less human effort.
But our current economic system is built on people trading their labor for access to resources. You work, you get paid, you buy things.
What happens when a meaningful percentage of the population can't trade cognitive or physical labor at a competitive price compared to AI systems?
We're going to need new models. Universal basic income gets discussed frequently, but that's just one possibility. We might need fundamental restructuring of how wealth gets distributed in a post-scarcity economy.
The good news is this isn't a problem of insufficient resources. The pie is getting bigger. There will be more than enough to go around.
The challenge is political and social: how do we structure society so that everyone benefits from AI-driven abundance, not just the people who own the AI systems?
I don't have answers to that question. But I know we need more people thinking about it seriously, not dismissing it as sci-fi speculation.
Because if Shane Legg's timeline is even remotely accurate, we're going to be facing these questions within a decade.
What This Means for B2B Leaders Right Now
You can't solve the societal-level challenges. But you can and should be thinking about how this affects your organization.
Here's my advice:
1. Invest in AI augmentation, not replacement
The goal shouldn't be to eliminate headcount. It should be to make your best people exponentially more productive.
The companies that figure out how to do this first will have an enormous competitive advantage. They'll move faster, scale more efficiently, and operate at margins their competitors can't match.
2. Redesign roles around human judgment, not execution
Junior roles focused on execution are going to shrink. Senior roles focused on judgment, strategy, and relationship management are going to become more valuable.
Start shifting your team structure now. Hire for strategic thinking and relationship skills. Use AI for the execution layer.
3. Build AI literacy across your entire organization
Every single person in your company needs to understand how to work effectively with AI tools. This isn't optional anymore.
The gap between people who are AI-fluent and people who aren't is going to become massive. Invest in training now.
4. Measure outcomes, not activity
When AI handles research, data entry, and follow-up, your team's capacity explodes. But if you're still measuring success by activities (calls made, emails sent, hours logged), you're missing the point.
Shift to outcome-based metrics. Revenue generated. Problems solved. Value delivered.
5. Move fast
The companies that embed AI deeply into their operations over the next 2-3 years will build competitive moats that late adopters can't easily cross.
Speed is your advantage. Use it.
The Ethics Question
Can AI Be Ethical?
This is one of the most fascinating aspects of AGI development: the question of whether AI systems can reason about ethics and make genuinely ethical decisions.
Shane Legg introduced me to a concept he calls "System Two Safety," based on Daniel Kahneman's work on human decision-making.
Here's how it works:
Humans have two modes of thinking. System One is fast, instinctive, emotional. Someone annoys you, you feel a flash of anger. System Two is slower, more deliberate, more logical. You take a breath, think through the consequences, and choose a different response.
The same framework can apply to AI systems.
Current AI safety work often focuses on training models to avoid harmful outputs through reinforcement learning and constitutional AI approaches. That's important, but it's like trying to make someone's instincts perfectly aligned with good behavior.
System Two Safety is different. It's about building AI systems that can reason explicitly about ethical implications before taking action.
Imagine an AI system faced with a complex ethical situation. Instead of just going with its trained instincts, it could:
- Analyse the situation and identify relevant ethical considerations
- Consider multiple possible actions and their likely consequences
- Evaluate each option against ethical principles and social norms
- Reason through edge cases and exceptions (like when lying might be ethical to save a life)
- Choose the action that best aligns with ethical guidelines
- Explain its reasoning transparently
This is possible right now with current "thinking" AI models. You can actually see the chain of thought these systems use when reasoning through problems.
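As a sketch of the pattern, here's what a System Two-style pipeline could look like in code. The prompts and the call_model stub are my illustrations, not DeepMind's implementation:

```python
# Sketch of a "System Two" pipeline: force explicit ethical
# reasoning before any action, and keep every step for audit.
def call_model(prompt: str) -> str:
    # Placeholder: substitute a real LLM client here.
    return f"[model's reasoning about: {prompt.splitlines()[-1]}]"

def deliberate(situation: str, proposed_action: str) -> dict:
    steps = [
        f"Identify the ethical considerations raised by: {situation}",
        f"List alternative actions to: {proposed_action}",
        "Evaluate each alternative against those considerations.",
        "Choose the best action and explain the trade-offs.",
    ]
    transcript = []
    for step in steps:
        context = "\n".join(transcript)
        transcript.append(call_model(f"{context}\n{step}"))
    # The transcript is the audit trail: you can inspect exactly
    # which considerations drove the final decision.
    return {"decision": transcript[-1], "reasoning": transcript}

result = deliberate("a customer asks us to hide a known defect",
                    "comply to keep the deal")
print(result["decision"])
```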
Why This Might Work Better Than Human Ethics
Here's a provocative claim: AI systems might become more ethical than humans.
Not because they're inherently good or care about morality. But because they can apply ethical reasoning more consistently and thoroughly than humans typically do.
Humans are biased. We're emotional. We're inconsistent. We have bad days. We rationalize decisions we've already made emotionally. We're influenced by how questions are framed.
An AI system that's properly designed for ethical reasoning could:
- Apply principles consistently without emotional bias
- Consider consequences more thoroughly than humans have time for
- Reason through complex ethical trade-offs systematically
- Maintain the same standard of care regardless of mood or fatigue
- Explain its reasoning transparently for scrutiny
Of course, this only works if the ethical framework embedded in the system is sound. And that's a massive challenge, because humans don't agree on ethics.
Different cultures have different norms. Different philosophical traditions emphasize different principles. Different individuals have different values.
AI systems will need to navigate this pluralism, understanding context and adapting their reasoning to different situations while maintaining core ethical principles.
It's a hard problem. But it's a tractable engineering problem, not an insurmountable philosophical barrier.
The Interpretability Advantage
One major advantage AI systems have over humans: we can look inside them.
With human decision-making, you can ask someone why they did something, but you're getting a post-hoc rationalization. You're not seeing the actual neural processes that led to the decision.
With AI systems, especially those using chain-of-thought reasoning, you can literally watch the reasoning process unfold. You can see what factors the system considered, how it weighed trade-offs, and what logic it applied.
That transparency creates accountability. If an AI system makes a questionable decision, you can audit the reasoning. You can identify where it went wrong. You can improve the system.
You can't do that with a human brain.
The Grounding Problem
There's a valid concern about whether AI systems can truly understand human values and ethics without living human experiences.
How can an AI reason about harm if it's never experienced pain? How can it understand the value of human life if it's never been alive? How can it appreciate human relationships if it's never formed one?
This is called the "grounding problem," and it's a real challenge.
But I think it's less insurmountable than it first appears.
Humans empathize all the time with experiences we've never had. I've never given birth, but I can reason about maternal health ethics. I've never been to war, but I can understand why certain rules of engagement exist.
We do this through learning, through exposure to stories and data, through reasoning by analogy.
AI systems are trained on vast amounts of human-generated data. They absorb books, articles, conversations, stories, ethical frameworks, case studies, and philosophical arguments created by humans who did have lived experiences.
That doesn't perfectly replicate human grounding, but it's not nothing.
And as AI systems become more embodied in robots, as they interact with the world more directly, as they operate over longer time horizons with persistent memory, that grounding will strengthen.
The question isn't whether AI systems will ground ethics exactly like humans do. The question is whether they can ground ethics well enough to reason about ethical implications reliably.
I think the answer is yes.
From AGI to Superintelligence (ASI)
The Physics Argument
I mentioned earlier that Shane Legg believes ASI is inevitable based on physics. Let me expand on why that matters.
Human intelligence is constrained by biology. Your brain needs to:
- Fit inside your skull
- Consume minimal energy (your ancestors needed to survive on limited food)
- Use materials available to biological systems
- Develop within a reasonable timespan
- Function reliably for decades without maintenance shutdowns
These constraints produced an extraordinary result. Human brains are perhaps the most complex objects we know of in the universe. But they're still limited.
Artificial systems don't face the same constraints.
A data centre doesn't need to fit in a skull. It doesn't need to run on 20 watts. It can use materials and architectures that biology can't access. It can be upgraded and expanded continuously.
The question isn't whether artificial systems can exceed human intelligence. The question is how much they'll exceed it.
And here's where things get genuinely difficult to predict.
What Does Superintelligence Look Like?
We genuinely don't know.
Will a superintelligent system be better than humans at everything? Or will there be domains where human intelligence remains competitive?
We already see that AI can work across 150+ languages, something no human can do. It can process information faster. It can maintain attention longer. It can search through vast datasets instantly.
But might there be types of reasoning where computational complexity limits how much better than human an AI system can get?
Might there be creative insights that require the particular type of messiness that biological brains have?
Might there be social and emotional intelligence domains where human experience provides an advantage that's difficult to replicate?
Nobody knows for certain.
What seems clear is that AI systems will exceed human capability in many domains, potentially dramatically. Whether they exceed human capability in all domains is an open question.
The Speed Question
Here's what concerns me most about the transition from AGI to ASI: the speed at which it might happen.
Once you have a system that can do cognitive work at human level, you can potentially use that system to help design better AI systems.
That creates a feedback loop.
Better AI systems design even better AI systems, which design even better systems, which...
This is called "recursive self-improvement," and it's one of the scenarios that leads to very rapid capability gains.
Maybe you go from human-level intelligence to superintelligence in years. Or months. Or, in some scenarios people worry about, weeks.
I don't think the fastest scenarios are likely. There are physical constraints, testing requirements, and practical engineering challenges that slow things down.
But even a measured path from AGI to superintelligence over a few years would represent a pace of change that human society has never experienced before.
The Control Problem
This is where AI safety research gets genuinely difficult.
If you have a superintelligent system that's far more capable than any human, how do you maintain meaningful control over it?
You can't outsmart it. You can't outplan it. You can't predict what it might do in novel situations.
This is why so much work is going into alignment research: trying to ensure that as AI systems become more capable, they remain aligned with human values and interests.
It's not a solved problem. But there are promising directions:
- Constitutional AI approaches that bake principles into the training process
- Interpretability research that lets us understand what systems are thinking
- Red teaming and adversarial testing that probes for failure modes before deployment
- Multi-layered safety systems that don't rely on any single point of control
- Human oversight and monitoring for deployed systems
- Gradual deployment that catches problems before they scale
Is this enough? Nobody knows for certain.
But the alternative, stopping AI research entirely, isn't realistic when multiple countries and companies are racing toward AGI.
That's why Shane Legg focuses on making superintelligent systems "super ethical." If we can't stop the development of superintelligence, we need to ensure it reasons about ethics as robustly as it reasons about everything else.
The Transformation of B2B Revenue Engines
The 30% to 80% Shift
Here's a stat that should get your attention: most B2B sales teams spend about 30% of their time actually talking to buyers.
The other 70%? Research. CRM admin. Follow-up emails. Internal coordination. Data entry.
AI changes that ratio in a big way.
When AI handles research, a sales rep shows up to every call with comprehensive account intelligence, competitive insights, and conversation guides generated in seconds instead of hours.
When AI manages CRM updates, reps never touch the database. The system listens to calls, extracts key information, and updates fields automatically.
When AI drafts follow-up emails, handles scheduling, and manages next steps, reps focus purely on the conversation.
Suddenly, that 30% becomes 80%.
Same headcount. Same compensation. Nearly triple the customer interaction time.
That's not a marginal improvement. That's a fundamental restructuring of how sales works.
The RevOps Revolution
Revenue operations is about to become exponentially more strategic.
Right now, most RevOps teams are drowning in tactical work:
- Cleaning data
- Building reports
- Managing tool configurations
- Troubleshooting integrations
- Maintaining dashboards
AI can handle pretty much all of that.
What's left is the genuinely strategic work:
- Designing revenue architecture
- Identifying systemic bottlenecks
- Optimising conversion paths
- Building predictive models
- Aligning incentives
This is what RevOps was always supposed to be. Most teams just never had time to get there because they were too busy with the tactical work.
AI removes that constraint.
The RevOps leaders who understand this will become some of the most valuable people in their organisations. They'll be the architects of AI-augmented revenue engines.
The ones who don't? They'll be shocked when their role gets redefined out from under them.
Marketing's Creative Renaissance
Content marketing is in a strange place right now.
On one hand, AI can generate content at scale. Blog posts, social media updates, emails, case studies. All of it can be produced faster than ever.
On the other hand, the internet is being flooded with AI-generated content. The signal-to-noise ratio is getting worse every month.
What cuts through?
Genuine insight. Original thinking. Contrarian perspectives backed by experience. Content that could only come from living through real situations and synthesizing lessons that aren't obvious.
AI can help you express those insights more effectively. It can handle the production layer. But it can't create the insights themselves.
This is actually good news for marketers who focus on strategy and thought leadership. Your value goes up as the noise level increases, assuming you're creating signal.
The marketers in trouble are the ones who were already operating in the "generic content production" space. That work is getting automated away entirely.
Customer Success at Scale
Customer success teams face a capacity problem: you can only serve so many accounts per CSM before the quality degrades.
AI breaks that constraint.
Imagine a customer success motion where:
- AI monitors usage patterns across all accounts continuously
- It flags at-risk customers based on behavioural signals
- It drafts personalised outreach for CSMs to review and send
- It generates health scores that actually predict churn
- It identifies expansion opportunities based on usage and firmographic data
- It answers routine customer questions instantly
- It creates customised onboarding paths based on customer goals
The CSM's role shifts from reactive firefighting to proactive strategy.
Instead of managing 50 accounts poorly, they manage 150 accounts well. They focus on relationship building, strategic planning, and high-touch moments that actually require human judgment.
The economics of customer success change completely. You can profitably serve smaller accounts. You can provide white-glove service at scale. You can identify and capture expansion revenue that would otherwise slip through.
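To ground the health-score idea, here's a stripped-down sketch. The signals, weights, and threshold are invented for illustration; a production version would be fitted to your actual churn outcomes:

```python
# Stripped-down account health score: weight a few behavioural
# signals and flag accounts below a threshold. All values are
# invented for illustration.
WEIGHTS = {
    "weekly_logins": 0.4,      # normalised to 0-1
    "feature_adoption": 0.3,   # share of key features in use
    "support_sentiment": 0.3,  # 0 = angry tickets, 1 = happy
}

def health_score(signals: dict) -> float:
    return sum(WEIGHTS[k] * signals[k] for k in WEIGHTS)

accounts = {
    "Acme": {"weekly_logins": 0.9, "feature_adoption": 0.8, "support_sentiment": 0.7},
    "Globex": {"weekly_logins": 0.2, "feature_adoption": 0.3, "support_sentiment": 0.4},
}

for name, signals in accounts.items():
    score = health_score(signals)
    status = "at risk" if score < 0.5 else "healthy"
    print(f"{name}: {score:.2f} ({status})")
# -> Acme: 0.81 (healthy), Globex: 0.29 (at risk)
```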
Building AI-Ready Revenue Operations
The Foundation Problem
Here's a truth most B2B businesses don't want to hear: you can't build effective AI workflows on top of broken systems.
If your CRM is full of duplicate records, incomplete fields, and inconsistent data, AI will amplify the mess, not fix it.
If your processes aren't documented, AI can't automate them.
If your team doesn't follow consistent methodologies, AI can't reinforce best practices.
The companies winning with AI right now are the ones who invested in operational excellence before they started building AI capabilities.
They cleaned their data. They documented their processes. They established clear governance. They built systematic approaches to revenue operations.
Then, and only then, did they layer in AI.
The companies struggling? They're trying to skip that foundation work and jump straight to automation. It doesn't work.
The Architecture of AI-Augmented GTM
Let me describe what a properly designed AI-augmented go-to-market motion looks like in 2025.
Layer 1: Clean Data Foundation
Your CRM is your source of truth. Every field has clear definitions. Data quality rules are enforced systematically. Duplicate detection runs automatically. Enrichment happens in the background.
You're not maintaining this manually. AI systems monitor data quality continuously and flag issues before they compound.
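As a flavour of what "duplicate detection runs automatically" can mean in practice, here's a toy normalise-and-match pass. The field names and matching rules are illustrative:

```python
import re

# Toy duplicate detection: normalise company names and emails,
# then flag records whose normalised keys collide. Field names
# and rules are illustrative.
records = [
    {"id": 1, "company": "Acme Corp.", "email": "Jane@acme.com"},
    {"id": 2, "company": "ACME Corp", "email": "jane@acme.com "},
    {"id": 3, "company": "Globex", "email": "sam@globex.com"},
]

def key(record: dict) -> tuple[str, str]:
    company = re.sub(r"[^a-z0-9]", "", record["company"].lower())
    email = record["email"].strip().lower()
    return company, email

seen: dict[tuple[str, str], int] = {}
for record in records:
    k = key(record)
    if k in seen:
        print(f"Record {record['id']} looks like a duplicate of {seen[k]}")
    else:
        seen[k] = record["id"]
# -> Record 2 looks like a duplicate of 1
```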
Layer 2: Process Automation
Standard workflows run without human intervention:
- Lead routing and assignment
- Follow-up sequences based on behavior
- Task creation and reminders
- Meeting scheduling and preparation
- Deal stage progression based on exit criteria
- Risk flagging and escalation
The system handles the predictable patterns. Humans handle the exceptions.
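A minimal sketch of that predictable-patterns layer, using lead routing as the example. The rules and team names are invented:

```python
# Minimal rule-based lead routing: apply ordered rules, fall back
# to a default queue, and record why each lead went where it did.
# Rules and team names are invented for illustration.
def route(lead: dict) -> tuple[str, str]:
    if lead.get("employees", 0) >= 1000:
        return "enterprise-team", "1000+ employees"
    if lead.get("country") in {"DK", "SE", "NO"}:
        return "nordics-team", "Nordic country"
    if lead.get("demo_requested"):
        return "inbound-sdr", "requested a demo"
    return "nurture-queue", "no routing rule matched"

lead = {"company": "Acme", "employees": 4200, "country": "DK"}
owner, reason = route(lead)
print(f"Routed to {owner} ({reason})")  # first matching rule wins
```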
Layer 3: Intelligence and Insight
AI systems analyse patterns across your entire revenue engine:
- Which messages resonate with which personas
- Where deals get stuck and why
- Which reps are succeeding and what they're doing differently
- How seasonality affects your pipeline
- Where your forecasts tend to miss and why
This isn't basic reporting. This is genuine pattern recognition that surfaces insights you wouldn't find manually.
Layer 4: Augmented Execution
Your team uses AI as a constant companion:
- Sales reps get real-time guidance during calls
- Marketers get creative assistance and feedback
- CSMs get proactive risk alerts and suggested actions
- Executives get synthesized intelligence without requesting reports
The AI layer doesn't replace judgment. It amplifies it.
The Implementation Roadmap
If you're a B2B leader reading this and thinking "we need to move on this," here's where to start:
Quarter 1: Assessment and Foundation
Audit your current systems and processes. Where are the biggest inefficiencies? Where is work purely cognitive and high-volume? Where is data quality causing problems?
Fix the foundation issues. Clean critical data. Document key processes. Establish governance.
Pick one high-value, low-risk workflow to automate as a proof of concept. Something like lead enrichment or call summarization.
Quarter 2: Proof of Value
Implement your first AI workflow. Measure the impact rigorously. Learn what works and what doesn't.
Train your team on working with AI. Not just "here's how to use the tool," but "here's how to think about AI as a collaborator."
Identify the next three workflows to tackle based on your learnings.
Quarter 3: Scale and Iteration
Roll out multiple AI-augmented workflows across different functions. Sales, marketing, customer success, RevOps.
Start measuring productivity gains. Where are people spending less time on admin and more time on high-value work?
Build internal champions who can train others and share best practices.
Quarter 4: Strategic Integration
By now, AI should be embedded across your revenue operations. The focus shifts to optimization and expansion.
What new capabilities are possible that weren't before? What strategic initiatives can you pursue now that you have this much leverage?
How do you restructure teams to take full advantage of these capabilities?
This isn't a one-time project. It's a continuous evolution. But you need to start now, not later.
The Competitive Dynamics
First-Mover Advantages Are Compounding
In most technology adoption cycles, there's a reasonable window for fast followers. The first movers pay the pioneering tax, work out the kinks, and then others can copy their playbook.
AI is different.
The institutional knowledge you build by working with AI compounds over time. You learn which prompts work. Which workflows are fragile. Where human judgment is still essential. How to train teams effectively.
That knowledge becomes embedded in your culture and operations. It's not something competitors can easily copy.
Even more importantly, the data you generate by using AI creates a feedback loop. Your AI systems get better as they learn from your specific use cases, your customers, your market.
A competitor starting 12 months behind isn't just 12 months behind. They're starting from zero while you're accelerating.
The Margin Expansion Opportunity
Here's why this matters from a pure business perspective:
Your competitors are locked into a certain cost structure. They need X salespeople to generate Y revenue. They need Z support staff to service their customers.
If you can generate the same revenue with 50% of the headcount because you're augmenting your team with AI, your margins expand dramatically.
You can underprice competitors and still make more money. Or you can maintain pricing and achieve margins they can't match.
That margin advantage lets you:
- Invest more in product development
- Expand into new markets faster
- Acquire customers more aggressively
- Attract better talent with better compensation
- Weather economic downturns more easily
It's a compounding competitive advantage that gets harder to overcome the longer it persists.
The Risk of Waiting
I meet B2B leaders regularly who acknowledge AI is important but aren't treating it as urgent.
They're waiting for:
- The technology to mature more
- Clear best practices to emerge
- Regulatory clarity
- Budget availability
- Internal buy-in
Meanwhile, their competitors are building capabilities and competitive moats.
Here's what I tell these leaders: the cost of moving too early is manageable. You might waste some time on workflows that don't pan out. You might implement something that needs to be redone.
The cost of moving too late is existential. You end up competing against companies with fundamentally better economics, faster execution, and deeper capabilities.
And catching up is expensive. Not just in technology and implementation costs, but in the culture change required. Organizations that embedded AI early have teams that think differently. They expect AI augmentation. They design workflows around it.
Organizations that waited are trying to bolt AI onto existing processes, existing mindsets, existing resistance.
That's a much harder transformation.
Preparing for a Post-AGI World
The Uncomfortable Conversations We Need to Have
Let's return to the bigger picture for a moment.
If minimal AGI arrives in the next 3-5 years, and full AGI within a decade, we're going to face questions that most B2B leaders aren't prepared for:
What do you do with the team members whose roles become largely automated?
Do you retrain them? Transition them to different roles? Accept that your headcount needs will shrink? Create entirely new roles that didn't exist before?
These aren't easy questions. They involve people's livelihoods, their identities, their career paths.
But ignoring them doesn't make them go away.
How do you restructure compensation in a world where output decouples from hours worked?
If a sales rep using AI can generate 3x the pipeline, do they get paid 3x as much? Do you adjust quotas? Do you rethink the entire compensation model?
What about roles where the work shrinks dramatically but the judgment required stays constant?
What's your responsibility to the broader labor market?
If your company successfully automates away 50% of your knowledge worker roles, you're contributing to a broader societal shift.
Do you have any responsibility to help with that transition? To advocate for policy changes? To invest in retraining programs?
Or do you focus purely on shareholder value and let society figure out the rest?
I don't have definitive answers to these questions. But I know we need to start having the conversations now, not when we're in the middle of the crisis.
The Abundance Opportunity
Here's the optimistic scenario:
AI dramatically increases productivity across the economy. We can produce more goods and services with less human effort. Material abundance increases.
The cost of many products and services drops as AI-powered production becomes more efficient. Healthcare gets better and cheaper. Education becomes more accessible. Scientific progress accelerates.
We solve problems that seemed intractable: climate change, disease, energy scarcity, resource allocation.
Humans are freed from repetitive cognitive work to focus on creativity, relationships, exploration, and pursuits that bring genuine meaning.
This isn't fantasy. It's a plausible scenario if we navigate the transition well.
But "navigating well" requires intentional choices about how we structure society, distribute resources, and define value in a world where human labor isn't the primary economic input.
What You Can Control
As a B2B leader, you can't solve the societal-level challenges. But you can make choices about how your organization navigates this transition:
Choose abundance mindset over scarcity
Don't approach AI as "how do we cut costs by reducing headcount." Approach it as "how do we create so much value that growth outpaces any efficiency gains."
The companies that use AI to expand their total addressable market, enter new categories, and create new offerings will thrive.
The companies that use it purely for cost-cutting will find themselves in a race to the bottom.
Invest in your people's adaptability
The specific skills your team has today might not be the skills they need in three years. But their ability to learn, adapt, and work effectively with AI will remain valuable.
Invest in continuous learning. Create paths for people to evolve their roles. Reward experimentation and adaptation.
Build ethically and transparently
How you implement AI matters. Are you being honest with your team about the changes coming? Are you giving people time to adapt? Are you making decisions thoughtfully rather than reactively?
The companies that handle this transition with integrity will build trust and loyalty that lasts.
The ones that don't will face retention problems, morale issues, and reputational damage.
Stay connected to human value
At the end of the day, B2B businesses exist to serve other businesses, which exist to serve humans.
Don't get so caught up in efficiency and automation that you lose sight of the human value you're creating.
The companies that maintain that focus will navigate the AI transition successfully.
The Action Plan
What to Do This Month
If you're a CEO, CRO, or senior GTM leader at a B2B company, here's what I'd recommend doing in the next 30 days:
1. Educate yourself and your leadership team
Block off time to deeply understand what's happening in AI. Not just reading headlines, but understanding the capabilities, limitations, and trajectory.
Have conversations with your leadership team about what this means for your business. Get alignment on whether this is urgent or not.
If you conclude it's urgent, make it visible. Put it in your strategic priorities. Allocate budget and resources.
2. Audit your current state
Where is your team spending time on purely cognitive work that could be augmented or automated?
Where is your data quality causing inefficiency?
Where are your processes poorly documented or inconsistent?
Where are you already using AI tools, and how effective are they?
3. Pick your first high-value experiment
Don't try to boil the ocean. Pick one workflow that's:
- High volume (happens frequently)
- Well-defined (you can describe the process clearly)
- Low risk (mistakes aren't catastrophic)
- Measurable (you can track the impact)
Examples might include:
- AI-powered call analysis and coaching
- Automated lead enrichment and scoring
- CRM data cleanup and maintenance
- Content generation and optimization
- Customer health monitoring
Implement it. Measure it. Learn from it.
4. Build AI literacy in your organization
Start training your team on how to work effectively with AI. Not just tool training, but mindset shifts.
Identify early adopters who can become internal champions. Give them space to experiment and share learnings.
Create channels for people to share what's working and what isn't.
5. Join or create a peer learning group
You're not alone in figuring this out. Other B2B leaders are grappling with the same questions.
Find your peers. Share what you're learning. Learn from their experiments.
The companies that treat this as a collaborative learning journey will move faster than those trying to figure it out in isolation.
What to Do This Quarter
Build your AI roadmap
Based on your audit and initial experiments, create a prioritized roadmap of workflows to augment or automate.
Sequence them based on:
- Value impact (revenue, efficiency, quality)
- Implementation complexity
- Team readiness
Don't try to do everything at once. Build systematically.
Invest in your data foundation
If your data quality is poor, fix it. AI built on bad data produces bad results.
Establish data governance. Clean critical datasets. Implement quality monitoring.
This isn't glamorous work, but it's essential.
Redesign roles and expectations
As AI capabilities roll out, roles need to evolve. Sales reps aren't doing the same job when AI handles research and admin. CSMs aren't doing the same job when AI monitors health scores.
Work with your team to redesign roles around the new capabilities. Clarify expectations. Update job descriptions and compensation structures as needed.
Measure and communicate progress
Track the impact of your AI initiatives rigorously. Not just "we implemented something," but "here's the measurable improvement in efficiency, quality, or revenue."
Communicate wins broadly. Celebrate teams that embrace the change. Share learnings from failures openly.
What to Do This Year
Embed AI across your entire revenue engine
By the end of the year, AI should be integrated into sales, marketing, customer success, and RevOps.
Not just one-off tools, but systematic augmentation across your go-to-market motion.
Build proprietary AI capabilities
Start developing AI workflows that are specific to your business, your customers, your market.
This is where genuine competitive advantage comes from, not from using the same off-the-shelf tools as everyone else.
Evolve your organizational structure
As AI capabilities mature, your organizational structure should evolve to take advantage of them.
Maybe you don't need as many layers of management. Maybe you can expand into new markets with smaller teams. Maybe you can service smaller customers profitably.
Think about what organizational structure makes sense in an AI-augmented world, not just in the pre-AI world.
Prepare for acceleration
The pace of AI improvement is accelerating, not plateauing.
What seems impossible today might be routine in 18 months. Build organizational muscles for continuous adaptation, not one-time transformation.
The Choice We're All Facing
AGI is coming.
Maybe in 3 years. Maybe in 10.
But the trajectory is clear, and the pace is accelerating.
For B2B leaders, this isn't an abstract philosophical question. It's a strategic imperative that will determine who thrives and who gets disrupted over the next decade.
The companies that started embedding AI 12 months ago have a lead that's growing every month. They've built institutional knowledge, trained their teams, and developed workflows that competitors can't easily copy.
The companies waiting for clarity are falling further behind.
Here's what I want you to take away from this article:
First, this is urgent. Not in a panic-driven way, but in a "the decisions we make now will compound over years" way. Start experimenting now, not later.
Second, this is tractable. You don't need to solve AGI or understand transformers or become an AI researcher. You need to identify high-value workflows and systematically augment them with AI capabilities.
Third, this is about people. The companies that navigate this transition well will invest in their teams' adaptability, make thoughtful choices about how to structure work, and maintain focus on human value.
Fourth, this is a leadership issue. AI adoption isn't something you delegate to IT or a single enthusiast. It requires executive sponsorship, strategic prioritization, and organizational commitment.
Fifth, the opportunity is massive. AI isn't just about efficiency. It's about capabilities that weren't possible before. New markets you couldn't serve profitably. Products you couldn't build. Insights you couldn't extract. Speed you couldn't achieve.
We're standing on the exponential curve right now. Most people don't see it yet because exponentials look linear at the start.
But the researchers building these systems see it clearly. The timelines are getting shorter. The capabilities are expanding faster than most people expected.
The question isn't whether AGI will transform B2B go-to-market. It's whether you'll lead that transformation or get disrupted by it.
I know which side I'm on.
The companies we work with at ScaleStation are embedding AI systematically across their revenue engines. Not because it's trendy, but because it's creating measurable competitive advantages right now, today, with current technology.
And we're just getting started.
If minimal AGI arrives by 2028, as Shane Legg predicts, the next three years will be the most important strategic period for B2B companies in decades.
The decisions you make now about AI adoption, organizational structure, and go-to-market design will compound over years.
Choose wisely. Move quickly. Stay human.
The future is coming faster than most people think.