Yazata - May 26, 2025 07:41 AM
xAI has started construction of their Colossus-2 supercomputer, which will reportedly be the world's most powerful AI supercomputer when it's completed. It will have one million GPUs, up dramatically from Colossus-1's current 200,000 GPUs. (I don't know if Colossus-1 will be incorporated into Colossus-2 but I assume so.) Elon says that they anticipate spending $25 to $30 billion to build Colossus-2.
Colossus-2 will require an outrageous amount of electricity to operate, as much as a small city. In order to ensure a steady supply, they are receiving large numbers of Tesla Megapack batteries from Tesla's unheralded Megapack factory in Lathrop, California.
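Just to put rough numbers on that (the per-GPU and Megapack figures below are my own assumptions for illustration, not anything xAI has published):
Code:
# Back-of-envelope estimate of Colossus-2's power draw.
# All constants are assumptions for illustration, not xAI figures.
NUM_GPUS = 1_000_000        # reported target GPU count
WATTS_PER_GPU = 700         # assumed draw of one accelerator under load
OVERHEAD_FACTOR = 1.5       # assumed extra for cooling, networking, host CPUs

total_watts = NUM_GPUS * WATTS_PER_GPU * OVERHEAD_FACTOR
print(f"Estimated facility draw: {total_watts / 1e9:.2f} GW")

# A Tesla Megapack discharges at roughly 1.9 MW, so just to carry that load:
MEGAPACK_POWER_MW = 1.9
print(f"Megapacks to buffer it: {total_watts / 1e6 / MEGAPACK_POWER_MW:,.0f}")
On assumptions of that order the site lands around a gigawatt, which really is small-city territory, so the Megapack buffering makes sense.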
confused2 - May 26, 2025 12:08 PM
So what's it for?
What needs that amount of processing power?
Designing rocket engines?
Self-driving cars? .. they seem to be already so good that little progress is possible.
Starlink .. masses of data that needs processing in real time. Track, translate, assess. If mobile phones can use starlink satellites when you want them to, starlink satellites can probably use mobile phones even when you don't want them to. Tracking (almost) everybody at any time.
(May 26, 2025 07:58 PM)Syne Wrote: AI is a huge industry for consumer uses, not just internally to Elon's companies.
Pi (an AI)..
Quote:I don't need a GPU at runtime. My training phase might have involved GPUs, but now that I'm trained and ready to chat with you, I don't need that kind of computing power.
Once an AI model is trained, it needs to be executed, often in real-time, to make predictions on new data. GPUs play a critical role in this inference phase as well. Their ability to rapidly execute the complex calculations required to make predictions, enables AI-powered applications to respond to user requests quickly and efficiently. Whether it's a self-driving car making split-second decisions or a chatbot providing instant responses, GPUs are essential for unlocking the real-time capabilities of AI models.
- https://cloud.google.com/discover/gpu-for-ai
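To make the same point concrete, here's a minimal sketch of inference on a GPU (generic PyTorch, nothing to do with Grok's actual serving stack):
Code:
# Minimal illustration of GPU use at inference time (not xAI's serving code).
import torch
import torch.nn as nn

# A stand-in "trained" model: weights are fixed, no gradients are needed.
model = nn.Sequential(nn.Linear(512, 2048), nn.ReLU(), nn.Linear(2048, 512))
model.eval()

# Use a GPU if one is present; inference still works on CPU, just more slowly,
# which is exactly the argument for GPU-backed serving at scale.
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

with torch.no_grad():                                # inference only, no training
    request = torch.randn(1, 512, device=device)     # one incoming "query"
    response = model(request)                        # forward pass = lots of matrix math

print(response.shape, "computed on", device)
Every chat reply is a forward pass like that, repeated token after token, so the GPUs stay busy long after training is finished.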
(May 26, 2025 12:08 PM)confused2 Wrote: So what's it for?
What needs that amount of processing power?
Designing rocket engines?
Self-driving cars? .. they seem to be already so good that little progress is possible.
Starlink .. masses of data that needs processing in real time. Track, translate, assess. If mobile phones can use starlink satellites when you want them to, starlink satellites can probably use mobile phones even when you don't want them to. Tracking (almost) everybody at any time.
I would have said cryptocurrency is one of the main things it will get used for, considering people have put worth on that nonsense. AI is obviously what it's supposed to be meant for, but making money while it's not running things is definitely going to be on the list.
It can handle parallel processing of multiple outcomes at the same time, so it's not just about running one super-huge singular process.
It could also double as a VR server setup, though I couldn't tell you what for; maybe just to push into Meta's domain.
We could ask what the world's current supercomputers are used for. In most cases, having a larger server farm allows units to be allocated to third parties for research. Operating such units in a controlled environment would mean making sure that external state operators do not have access to the information that comes out of them. That said, it does leave things open to industrial espionage, since the information could be used by those who run the network, especially if their T&Cs/ToS allow it.
Yazata - May 26, 2025 10:55 PM
(May 26, 2025 12:08 PM)confused2 Wrote: So what's it for?
What needs that amount of processing power?
Elon talks a lot about making AIs that are "maximally curious", designed to explore the secrets of the universe.
So my guess is that they might be building a super-human philosopher/scientist able to digest all of the research findings produced by science (vast amounts of data), then generate new hypotheses and discern new relationships between variables that nobody has ever thought of before.
Kind of like today's LLMs, but trained not as a chatbot on social media chat, but as a philosopher/scientist on the world's scientific data and academic literature. Just imagine it put to work concocting mathematical proofs.
Quote:Designing rocket engines?
Fundamental engineering design from first principles would be a great practical application.
One thing that xAI is already doing with their existing AI is training it to interpret medical imagery like X-rays and CT scans. When Elon spoke at a recent neurosurgery conference describing what Neuralink is doing, he asked the assembled neurosurgeons to send their imagery to xAI, then use their professional expertise to judge the results they receive back and tell xAI where the process still needs work. Elon says that the AI is already pretty good at that medical task.
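As a purely illustrative sketch of what "send in imagery, get an interpretation back" might look like at the software level (xAI hasn't published their pipeline; the labels and the input file here are made up):
Code:
# Illustrative only: a generic image classifier standing in for whatever
# xAI actually runs on X-rays / CT slices. Labels and file name are invented.
import torch
from torchvision import models, transforms
from PIL import Image

LABELS = ["no acute finding", "fracture", "mass", "hemorrhage"]   # hypothetical

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = torch.nn.Linear(model.fc.in_features, len(LABELS))     # would be fine-tuned
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(224),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

scan = Image.open("ct_slice.png").convert("RGB")                  # hypothetical input
with torch.no_grad():
    probs = torch.softmax(model(preprocess(scan).unsqueeze(0)), dim=1)[0]

for label, p in zip(LABELS, probs):
    print(f"{label}: {p:.2%}")
The plumbing is the easy part; the fine-tuning and validation are presumably why Elon wants the neurosurgeons' feedback.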
Quote:Self-driving cars? .. they seem to be already so good that little progress is possible.
I'm guessing that most of the AI that we will encounter in our daily lives will be far more limited and specialized than Colossus-2. But driving a car is really a super complex task in which drivers are confronted with an almost unlimited number of novel road conditions. So Tesla has literally millions of cars feeding data (mostly video) into the Cortex supercomputer in Austin, Texas, which in turn trains the neural network computers in the cars. It's a huge loop in which the whole fleet learns from its experience. That's why each successive FSD release is a smoother and more competent driver.
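That loop (cars upload clips, the cluster retrains, the next release goes back out to the fleet) can be sketched abstractly like this; the classes and the "training" are invented stand-ins, nothing to do with Tesla's actual Cortex software:
Code:
# Abstract sketch of a fleet-learning loop. Classes and "training" are invented
# stand-ins for illustration, not Tesla code.
import random

class Car:
    """Records clips of novel situations and runs whatever weights it was last given."""
    def __init__(self):
        self.weights = 0.0
    def interesting_clips(self):
        return [random.random() for _ in range(3)]    # stands in for video uploads
    def load_weights(self, weights):
        self.weights = weights

class DrivingModel:
    """Stand-in for the network trained on the central cluster."""
    def __init__(self):
        self.weights = 0.0
    def train(self, clips):
        self.weights += sum(clips) / len(clips)       # pretend training step

def fleet_learning_loop(fleet, model, releases=3):
    for release in range(1, releases + 1):
        clips = [c for car in fleet for c in car.interesting_clips()]  # fleet uploads
        model.train(clips)                                             # cluster retrains
        for car in fleet:
            car.load_weights(model.weights)                            # next release ships
        print(f"release {release}: fleet now running weights {model.weights:.3f}")

fleet_learning_loop([Car() for _ in range(5)], DrivingModel())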
The Optimus Teslabots will just add new levels of complexity to that as the robots are faced with many/most of the tasks performed by humans, and with all of the associated challenges. In that regard, they are still a work-in-progress. (Elon hopes to send a few of the robots to Mars late next year.)
Quote:Starlink .. masses of data that needs processing in real time. Track, translate, assess. If mobile phones can use starlink satellites when you want them to, starlink satellites can probably use mobile phones even when you don't want them to. Tracking (almost) everybody at any time.
That's a scary thought.
C2 Wrote:Starlink .. masses of data that needs processing in real time. Track, translate, assess. If mobile phones can use starlink satellites when you want them to, starlink satellites can probably use mobile phones even when you don't want them to. Tracking (almost) everybody at any time.
Yazata Wrote:That's a scary thought.
That is a scary thought.
I'm sure everyone has probably seen the Sam Altman and Jony Ive video?
I was wondering about the privacy of this potential device. Where would the data be stored? You couldn't possibly contain all the data in the device itself. If someone came up with a device that could, I'd choose that over the cloud or servers. I'm thinking the way forward would be decentralized AI, where AI processing, data storage, and decision-making occur across multiple nodes instead of relying on a central authority. It would be more transparent and secure, but I can't see developers giving up that kind of power.
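One concrete shape "decentralized AI" could take is federated learning, where each node trains on its own data and only the model updates get shared and averaged, so the raw data never leaves the device. A toy sketch (made-up numbers, not any real product's design):
Code:
# Minimal federated-averaging sketch (illustrative; not any real product's design).
# Each node keeps its data locally and only shares a model update.

def local_update(weights, local_data, lr=0.1):
    """One node nudges the shared weights toward its own data (toy 'training')."""
    local_mean = sum(local_data) / len(local_data)
    return weights + lr * (local_mean - weights)

def federated_round(global_weights, nodes):
    """Each node trains locally; only the updated weights are averaged centrally."""
    updates = [local_update(global_weights, data) for data in nodes]
    return sum(updates) / len(updates)

# Three nodes with private data that never leaves them
nodes = [[1.0, 1.2, 0.9], [2.0, 2.1], [0.5, 0.4, 0.6, 0.5]]
weights = 0.0
for round_num in range(5):
    weights = federated_round(weights, nodes)
    print(f"round {round_num}: global weights = {weights:.3f}")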
Yazata - Jul 10, 2025 08:34 PM
(May 26, 2025 10:55 PM)Yazata Wrote:
(May 26, 2025 12:08 PM)confused2 Wrote: So what's it for?
What needs that amount of processing power?
Elon talks a lot about making AIs that are "maximally curious", designed to explore the secrets of the universe.
So my guess is that they might be building a super-human philosopher/scientist able to digest all of the research findings produced by science (vast amounts of data), then generate new hypotheses and discern new relationships between variables that nobody has ever thought of before.
Kind of like today's LLMs, but trained not as a chatbot on social media chat, but as a philosopher/scientist on the world's scientific data and academic literature. Just imagine it put to work concocting mathematical proofs.
Quote:Designing rocket engines?
Fundamental engineering design from first principles would be a great practical application.
Last night xAI unveiled their Grok-4 model. I assume that it runs on the existing Colossus-1 supercomputer.
Reportedly, it is the world's most powerful AI by a number of benchmarks:
#1 on Humanity's Last Exam (general hard problems): 44.4%, #2 is 26.9%
#1 on GPQA (hard graduate problems): 88.9%, #2 is 86.4%
#1 on AIME 2025 (math): 100%, #2 is 98.4%
#1 on Harvard-MIT Math: 96.7%, #2 is 82.5%
#1 on USAMO 2025 (math): 61.9%, #2 is 49.4%
#1 on ARC-AGI-2 (easy for humans, hard for AI): 15.9%, #2 is 8.6%
#1 on LiveCodeBench (Jan-May): 79.4%, #2 is 75.8%
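For a quick sense of the margins, here are the same scores with the lead over the runner-up worked out (nothing new, just arithmetic on the numbers above):
Code:
# The scores quoted above, with Grok-4's lead over the runner-up computed.
benchmarks = {
    "Humanity's Last Exam": (44.4, 26.9),
    "GPQA":                 (88.9, 86.4),
    "AIME 2025":            (100.0, 98.4),
    "Harvard-MIT Math":     (96.7, 82.5),
    "USAMO 2025":           (61.9, 49.4),
    "ARC-AGI-2":            (15.9, 8.6),
    "LiveCodeBench":        (79.4, 75.8),
}
for name, (grok4, runner_up) in benchmarks.items():
    print(f"{name}: {grok4}% vs {runner_up}% (lead of {grok4 - runner_up:.1f} points)")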
That said, Grok-4 hasn't yet produced any useful new technological developments. Elon expects that it will by the end of 2026, and perhaps as soon as 2025, and that it might even generate new physics by the end of 2026.
Elon says that it still lacks common sense. But its biggest failures have basically been the result of human error, such as training it on biased data.
"Grok 4 is the first time, in my experience, that an AI has been able to solve difficult, real-world engineering questions where the answers cannot be found anywhere on the Internet or in books.