Neural Dispatch: Agentic AI’s lack of intelligence, a DeepSeek moment, and Nvidia’s AI supercomputer

The biggest AI developments, decoded. October 29, 2025.

Cognitive warmup. Is this another DeepSeek moment for AI to contend with? Chinese tech giant Alibaba says its Alibaba Cloud platform has successfully tested a new compute pooling system called Aegaeon, which reduced the number of Nvidia H20 GPUs required to serve dozens of models of up to 72 billion parameters from 1,192 to 213. This is according to a research paper presented this week at the 31st Symposium on Operating Systems Principles (SOSP) in Seoul, South Korea. “Aegaeon is the first work to reveal the excessive costs associated with serving concurrent LLM workloads on the market,” researchers from Peking University and Alibaba Cloud wrote in the paper. This may well be another example of working smarter rather than simply spending on massive compute infrastructure – exactly what a lot of AI companies have been intently talking about these past few weeks. Alibaba’s methodology means a single GPU can support up to seven models for users to call upon, compared with a maximum of three models under alternative systems, the researchers point out. Investors will soon begin to realise this.
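For a sense of scale, here is a back-of-envelope sketch using only the figures quoted above. The arithmetic is illustrative; it is not Alibaba's scheduler, and the split between per-GPU packing and fleet-level pooling is an assumption on my part.

```python
# Back-of-envelope arithmetic on the figures quoted from the Aegaeon paper.
# The comparison logic is illustrative only, not Alibaba's actual system.

gpus_before, gpus_after = 1192, 213                    # H20 GPUs needed for the same model fleet
models_per_gpu_before, models_per_gpu_after = 3, 7     # concurrent models a single GPU can host

fleet_reduction = 1 - gpus_after / gpus_before
packing_gain = models_per_gpu_after / models_per_gpu_before

print(f"Fleet-level GPU reduction: {fleet_reduction:.0%}")   # ~82%
print(f"Models packed per GPU:     {packing_gain:.1f}x more") # ~2.3x

# Packing alone (3 -> 7 models per GPU) explains only part of the saving;
# the rest presumably comes from pooling GPUs across models with sparse,
# bursty traffic instead of dedicating hardware to each one.
```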

ALGORITHM

This week, we talk about Microsoft trying to find a more stable AI foundation that doesn’t rely as much on OpenAI, Nvidia’s ‘personal AI supercomputer’ that costs a pretty penny for your AI obsession, and OpenAI telling us, once again, that it’s concerned about the well-being of humanity in general.

Microsoft is redrawing AI alliances

Microsoft MAI-Image-1

Microsoft has introduced a new text-to-image model called MAI-Image-1, its first generative image system developed in-house. It’s a clear move towards reducing reliance on partners such as OpenAI, and perhaps even a signal (among many others; the widening scope of Microsoft’s work with Anthropic is an indicator too) that the Redmond-based tech giant wants a stronger foundation of its own, and more of a stake in the overall creative stack. Early testing has returned impressive results on AI benchmarks, which is always a good sign if generative foundations such as lighting, texture, and realism aren’t completely off from the outset. You’d be surprised how many image generators still fall short, even now. Strategically, it’s a big deal: it opens the door to tighter integration into Copilot, which Microsoft has been talking up a lot of late, and makes a case for better control of intellectual property. As generative image tools become commoditised, control and quality will separate the winners from the noise.

Nvidia, a personal AI supercomputer, and Apple

Nvidia AI supercomputer

There was considerable excitement this past week when Nvidia began to ship its compact AI computer, the DGX Spark, meant for developers and researchers. Priced at $3,999 (around ₹3,82,000 onwards), it packs some serious power: a Grace Blackwell GB10 superchip and 128 GB of unified memory delivering up to a petaflop of inferencing performance. This means models with hundreds of billions of parameters can now be trained or fine-tuned right on a desktop. Powerful AI compute hardware is coming to your desk. But I’ve been wondering: didn’t we already have similar levels of compute available (and therefore isn’t Nvidia hardly breaking new ground here) in the form of the Apple Mac Studio with the M1 Max and the Mac Mini with an M4 Pro? I chanced upon a nice graphic by lmsys.org comparing benchmarks, which indicates that an M1 Max Mac Studio (priced around $2,000) consistently returns higher output tokens per second than Nvidia’s computing foray. Even the really compact Mac Mini (around $1,400) matches the DGX Spark. Makes one wonder what the thought process was…
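One plausible reading of those lmsys.org numbers: single-stream LLM inference on these boxes tends to be limited by memory bandwidth rather than raw compute. A rough sketch, assuming approximate published bandwidth specs for each machine and ignoring batching, KV-cache traffic, and compute ceilings:

```python
# Crude decode-phase tokens/sec estimate for a memory-bandwidth-bound LLM.
# tokens/sec ~= memory bandwidth / bytes of weights streamed per token.
# Bandwidth figures are approximate published specs (assumptions), not measurements.

def est_tokens_per_sec(bandwidth_gb_s: float, params_b: float, bytes_per_param: float = 0.5) -> float:
    """Crude upper bound: each generated token streams all model weights once."""
    weight_bytes = params_b * 1e9 * bytes_per_param   # 0.5 bytes/param ~ 4-bit quantised weights
    return bandwidth_gb_s * 1e9 / weight_bytes

machines = {
    "DGX Spark (~273 GB/s, assumed)": 273,
    "Mac Studio M1 Max (~400 GB/s, assumed)": 400,
    "Mac Mini M4 Pro (~273 GB/s, assumed)": 273,
}

for name, bw in machines.items():
    print(f"{name}: ~{est_tokens_per_sec(bw, params_b=70):.1f} tok/s on a 70B, 4-bit model")
```

By that yardstick, the M1 Max’s wider memory bus, not its FLOPS, would explain why it keeps pace with or beats the Spark on token generation; the Spark’s pitch is rather the 128 GB of unified memory and CUDA tooling for larger models and fine-tuning workflows.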

AI therapy by day, AI temptation by night?

OpenAI has established what it calls an Expert Council on Wellbeing and AI. This will consist of eight specialists tasked with studying how constant interaction with AI systems affects human emotion, cognition, and behaviour. The focus is on defining what “healthy AI use” means across contexts, from education to therapy to everyday chatbots. The council will advise on design, ethics, and potential behavioural side effects. Is it a sign that the AI conversation is maturing? Unlikely. But it is clear OpenAI tends to bring us back to the smarter systems and impact conversations quite often, to keep investors, consumers and perhaps even policymakers happy. This is the same company that wants to make AI sexting mainstream, mind you. AGI can wait, porn is the revenue generator for the immediate future?

THINKING

Andrej Karpathy

“They just don’t work. They don’t have enough intelligence, they’re not multimodal enough, they can’t do computer use and all this stuff. They don’t have continual learning. You can’t just tell them something and they’ll remember it. They’re cognitively lacking and it’s just not working. It will take about a decade to work through all of those issues.” — Andrej Karpathy, ex-OpenAI, on the Dwarkesh Podcast.

Andrej Karpathy has taken a pin to the AI bubble. He estimates artificial general intelligence (AGI) is at least 10 years away, that the code generated by today’s models includes considerable ‘slop’, and argues the approach should be smaller models with better memorisation and context. In case you’re in the mood to question Karpathy’s credentials just because he may not be aligning with your world view on all things AI and how AI agents would be better than humans, here’s something to ponder: he’s a research scientist, a founding member of OpenAI, was Senior Director of AI at Tesla, and has since founded Eureka Labs, which is focused on education-aligned AI.

The Context: Andrej Karpathy’s words hit at the core of the current AI narrative — and they do so from someone who has actually built the systems now being mythologised. His comments come at a time when “AI agents” have become Silicon Valley’s new obsession, marketed as the next big leap beyond chatbots, and better than humans in the workplace. But his verdict is sobering. The technology simply isn’t ready. In his view, today’s large language models are still pattern mimics, not thinkers. They can recall, summarise, and respond impressively within a conversation, but lack the cognitive infrastructure that makes intelligence continuous and cumulative. They don’t truly remember; they merely reference transient context windows. They don’t understand multimodality; they correlate text and image tokens without an integrated sense of reasoning across them. They don’t learn from experience; they only re-perform from training.

The current hype cycle around agentic AI (agents that can act on behalf of users, browse the web, execute commands, and make autonomous decisions) does just enough to paper over structural gaps. Every AI company’s demos look polished, and tall claims create an illusion of progress, but the foundation seems far from general intelligence.

A Reality Check: Karpathy’s decade-long timeline feels almost radical in an industry addicted to showing off quarterly breakthroughs. He’s not saying progress will stall, only that the essential leap from simulation to cognition will take far longer than investors or AI bosses are willing to admit. True agents will need persistent memory, real-time learning, cross-modal reasoning, and safe self-correction, all of which demand breakthroughs in model architecture, data efficiency, and energy cost. And perhaps even a rethink of model size, with larger not always better.

For AI companies, every new model is somehow always more aligned, more efficient, and more multimodal than the previous one, which was supposedly already the greatest. We are simply making the best better? It is all boxed within a narrow frame of competence, because the reality is these models can’t sustain context or evolve behaviour the way humans do naturally.

For enterprises, this perspective matters. It suggests that AI, for now, should be positioned as one piece of the puzzle, not put in a position to lead, amplify, or replace anything that requires human creativity, insight, and productivity. Automate the basic tasks; that’s all. Overpromising “autonomous agents” risks frustration and mistrust, especially when systems inevitably fail at continuity or nuance. In that sense, Karpathy’s realism isn’t cynicism. It challenges the industry to focus less on hype and more on foundational science: memory systems, continual learning, interpretability, and integration with real-world perception.

Neural Dispatch is your weekly guide to the rapidly evolving landscape of artificial intelligence. Each edition delivers curated insights on breakthrough technologies, practical applications, and strategic implications shaping our digital future.

The article originally appeared on Hindustan Times
