This was amazing. I learned so much from this. I use AI all the time for coding and it's very helpful. But for now, at least, it only works for what I need it to do because I know how to code and how to test code. Otherwise it would get stuck or miss big problems. I am a physician, and AI has recently become very helpful there, too, but again I can only use it well because I know how to be a doctor without it. I'm terrified for young people. What jobs will be available to them? And how will they get the experience to catch what AI misses? And what happens to all of us when no one knows what they're doing anymore? Another issue with AI is that when I use it for coding it usually has to think for 1-10 minutes before giving me an answer, and then it needs my feedback again. I am still working on strategies to use that time efficiently. And it will probably drive my attention span down even further. I had not thought about the lack of investment in other sectors, which also seems quite bad. We are in for a rough road, I think.
Yes, this was an excellent discussion. I would love to see another one on jobs, creativity, and the societal impacts of AI. This is such an important topic right now.
AI requires original, human-generated content to train on – but at the same time, AI is disincentivizing the very content that it needs to move forward... For example, traffic is the lifeblood of any website, but Google now plasters enormous "AI Overviews" scraped from other sites at the top of each search, robbing those human authors of the traffic (and the incentive) to create new or additional content.
Without new material, generative AI models are trained on increasingly large amounts of their own output, leading to what researchers call "model collapse" – a degenerative process resulting in a decline in quality, variety, and accuracy. Like digital incest, this occurs because AI-generated data, lacking the full richness of human-created data, can introduce errors that compound over generations, causing models to drift from reality and produce less useful or "polluted" outputs.
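The compounding-error dynamic can be sketched with a toy simulation (my own illustration, nothing from an actual training pipeline): treat the "model" as a fitted Gaussian, then repeatedly refit it to samples drawn from its own previous output. Because each finite sample underrepresents the tails, the fitted spread tends to decay over generations – a crude analogue of the loss of variety:

```python
import random
import statistics

# Toy analogue of model collapse: the "model" is just a fitted Gaussian.
# Each generation is trained only on samples from the previous generation.
random.seed(42)

mu, sigma = 0.0, 1.0   # generation 0: the "real" data distribution
history = [sigma]
for generation in range(100):
    # Draw a finite training set from the current model's own output...
    sample = [random.gauss(mu, sigma) for _ in range(50)]
    # ...and refit the model to that synthetic data.
    mu = statistics.fmean(sample)
    sigma = statistics.pstdev(sample)
    history.append(sigma)

# The fitted spread drifts away from the original sigma = 1;
# in expectation it shrinks, i.e. the model loses variety.
print(f"start sigma: {history[0]:.2f}, final sigma: {history[-1]:.2f}")
```

This toy ignores everything that makes real LLM training different, but the tail-loss mechanism is the same one the model-collapse research describes.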
So in a future with diminishing human-authored content and worsening AI-generated content, where will we be? Everyone comes to rely on AI, but what does AI rely on? Has anybody thought this through?
At seventy-five I can see the goalposts of life, so all of this is far more important for you (relative) youngsters, but what I'm trying to do, to the very best of my rudimentary computer/technological abilities, is remove AI from my daily life. What I'm discovering is that it's not so easy, especially with my new Windows 11 computer. I don't want Copilot and Gemini and AI-assisted searches; I want to use my brain. I have found some info that purports to show me how to delete or block this or that, but none of it seems to work very well. At this point I'd be happy just to rid my computer of those maddeningly seductive "AI Overviews" that make my brain go to sleep and that I don't really trust for completeness or correctness.
You can consider an alternative to Google like DuckDuckGo, which uses less AI.
Also, if you include “-ai” in any Google search, you won’t get the “AI Overview” at all… plus your results will be faster. But not many people are aware of this.
I went Google searching and it gave me two AI-enhanced answers, and they were inconsistent with each other! AI couldn’t tell, because it’s kind of stupid!
We see things that aren’t there because this is what humans are born to do.
The coconut ‘monkey face’ doesn’t look like a face! There’s great research on infants showing how they smile at three dots, yes, three dots… we are programmed to see faces…
I'm 70, so a bit behind you - maybe on the 20-yard line. I actually like the AI summaries. They can save me a lot of time rummaging around for answers. But I take the summaries with a grain of salt. If I want to dive in deeper, there is a link, and if I don't trust it at all, I can scroll further and find other sources. So I don't think it takes anything away.
The most impressive for me was when a part fell off the dishwasher - a little wheely thing. I took a picture of it and used Google lens which identified it and gave me the exact part number (I assume that's AI) and a link to where I could buy it. I'm not sure how long it would have taken me otherwise to figure out what it was.
74yo retired programmer here. The short answer is: Linux Mint.
I made my living for 43 years on DOS/Windows. But I actually liked Windows only with Win7; thereafter, Microsoft shoved too many "helpers" into everything and ruined the product. Along the way I set up Linux servers, but couldn't get comfortable enough with it to build GUI software until the last half-dozen years of my career.
So with the impending Win11 and all the warnings about its souped-up AI features, and with me in retirement and just wanting to be able to surf the net and occasionally use a word processor and spreadsheet program, I asked DuckDuckGo to find me articles on "Linux for Grandma".
The answer is: Linux Mint. I ordered a laptop from a company in the Netherlands, LaptopWithLinux.com. The order sheet allowed me to have them preinstall my preferred browser (Firefox) and email program (Thunderbird), as well as LibreOffice for word processing, spreadsheets, etc.
My laptop arrived Jan. 21, 2025, just before DJT got busy imposing tariffs, so I got mine for $920; it's probably more now. But it worked right out of the box, and while I did occasional research about how to accomplish some things with Linux I was able to get down to surfing and reading without fuss.
Is it a dynamic breakthrough that will greatly change our traditional world?
Is it a massive investment that is creating a short-term bubble that will significantly affect our short-term economy and, longer term, our traditional job market?
Is it something that I needn’t be concerned about, except for how it is likely to affect my 5 grandchildren?
I was a 1971-1972 MIT Sloan Fellow, far too long ago to understand the implications of AI. I guess I must simply learn to grin and bear it.
AI is the latest way a small handful of fast talkin' tech bros are rushing to cash in on a flood of capital. Maybe they can make a ton of money but at what cost to the rest of us? Is there a credible, verifiable cost benefit analysis of AI?
Robert, this is the most sensible AI comment that I’ve read. Back when I was an MIT Sloan Fellow (1971-1972), our big tech breakthrough was computer time-sharing. When it broke down during a group marketing problem, the only noise was the two Japanese Fellows with their abacuses. Guess times have changed.
In the long run, AI will have the biggest technical impact on human society since writing, assuming we don't hit the AI robotic science fiction apocalypse or any of the other end-of-the-species scenarios.
For you directly, the biggest issue is probably whether your finances are insulated from a potential 2008-like implosion.
How AI will impact your grandkids is probably beyond your control, except to do your best to see that they are well educated, and perhaps to try to get them other passports if possible.
(Based on my grandparents I could have had EU and Canadian passports, but it required paperwork done long ago.)
68 and refuse to even read the default AI answers. Instead, I scroll across the header in Google and click ‘web’ to see search results. On the bright side, I just received information about a class action lawsuit filed against Anthropic because they used two of my books for training. I’m hoping for more litigation against these beasts in the future!
I, too, try to avoid the AI answers so readily offered in Google and elsewhere. I’ve switched to DuckDuckGo for searches, as one can turn off AI answers and just see the more primary sources and choose what to read. If I do use Google, I quickly scroll past all the AI summarizing and questions.
I’m 80, and AI is useful, as long as you check it for accuracy. You can choose on the top Google menu whether you want All, AI, or some other selection. All gives you the usual AI rundown, which sometimes is accurate, but it makes sense to check the sources also listed below. AI accuracy also depends on how carefully you phrase a question. Be precise and narrow in your question, and the AI answer is much more likely to be accurate. Still do a quick source check. I composite in Photoshop. If I’m designing a crazy card for a friend, I’ll use text-to-image, and use parts of the resulting AI image that I like, blended with my own drawing or photos. It’s fun. If I’m doing serious work, I selectively use some smart tools to help me blend faster, but all the parts of the final image are my own work, not AI. I was an English major. I do my own writing, for good or ill. I keep spellcheck and her friends muzzled, unless I have a question for the dictionary. You pick and choose what you want. You’re the human. You’re the boss.
My objection to AI is equally environmental. An AI search uses 7-8 times as much energy as a traditional Google search. I just don't think we need to be destroying the planet while we're looking up things in a virtual encyclopedia. I know, I know, but my four-year-old grandson's parents, while they don't abjure the internet, usually have him use the encyclopedia when he wants to learn about something.
Some time ago I read an article by a travel writer who decided to plan a trip to visit a number of beautiful places using AI. It seemed so paradoxical to use a destructive tool to admire natural beauty. Are we really all so busy that we can't do anything ourselves?
I bemoan the fact that AI, as generally used, is an awfully sloppy term for some pretty precise chips. The slopover seems to include anything anyone might do with them. We need specific definitions for chips that vary and are programmed in so many different directions. Not likely to happen.

I worked in Photoshop for 20-odd years, and watched Adobe build on what went before. You can still push a pixel all by yourself, or use a "smart tool" on your computer, offline, thanks to your GPU (Graphics Processing Unit). Or you can use Adobe AI, which only uses graphics Adobe has licensed, or that are out of copyright, if you need a stock photo or inspiration. Or you can use plug-ins which search the web for anything they're told to find. Only the last two need to go online. They use different amounts of power for different processes, and get that power from different places, thanks to the descendants of GPUs, which include machine learning algorithms and the misnamed AI.

So that's graphics. I used to airbrush, silkscreen, and shibori fabrics I made into clothing for people. Then I printed hosiery, using Photoshop and a digital printer. Now in my dotage, most of the artwork I do is digital, plus muscling and trimming my prints into accordion-book form. Digital work uses less material, and is arguably healthier for me and the environment, in many ways.

I can imagine really good transportation systems using AI to keep traffic smooth: keep lights green if no one's on the cross street, saving time, tire rubber, brakes, and fuels such as electricity. Or AI modulation of the electric grid, so it's much more efficient, with no brownouts or surges to ruin equipment. AI control of wastewater systems could aid cleaning water, prevent backups or overflows in storms, and keep an eye on illnesses before humans can start counting. AI systems might work out ways to use less power themselves, or design better solar/wind/wave/thermal electrical supply.
They might be able to see useful variation in plant and animal changes, fire danger, or careful, waste-free irrigation. AI machines might inspect airplane frames and engines before flight, and pipelines of all kinds against spills, and of course there's pharmaceutical research. The limit is human imagination.

I think kids need to learn a lot of things, among them older ways to get things done, and new ways. They will live in the world long after me, and must prepare as best they can, as every generation has. I'm not going to second-guess parents on the "right way" for everyone to learn. I get your point, but like other tools, since the first stick, it all depends on how they're used. I do think it matters for people to know a subject well enough to make sensible decisions and regulations, and not bow down all wobbly to the new idol, or go full Luddite with pitchforks and shoes.
On my W-11 machine, I use Firefox as my browser. As Acela recommended, I use DuckDuckGo as my search engine. I use (but do not really recommend) AOL as my mailer. Except for the operating system itself, Word and Excel are the only Microsoft tools I use. The only Google tool I use is the mapping one and "streetview"; I avoid gmail like the plague.
Every time Microsoft updates the operating system, it pretends that I just bought the computer and tries to get me to reset everything to the Microsoft products. It's mildly irritating, but I just keep selecting "Skip" until it gives up.
Me too. I see two areas where AI is helpful to me: editing poorly exposed photos, and health imagery such as mammograms, MRIs, and such. Maybe protecting your retirement savings. Otherwise, I don’t want it in my life.
It's the Microsoft paperclip all over again. Really, it's designed to direct you in what to do, to make you a thought follower. People are dazzled by technology, so they hope this can be used to help engineer society. This is what they're doing, imo. It will continue to make the rich richer and everyone else financially and psychologically dependent on them. But gaining political alignment through the political process is excruciatingly slow; to borrow a quote from Planck, it progresses one funeral at a time. So as much as I hate the hype, this might represent one workable compromise. It's the world of tomorrow. But tomorrow, by definition, never comes.
Also, use the Firefox or DuckDuckGo browser. Go into Settings and disable any features you don't need.
I am looking at installing Linux with Windows as a dual-boot option.
Another useful trick is to turn off Wi-Fi until you need it. One of my computers is disconnected from the web 99+% of the time, and as a result seems faster but for sure interrupts me a LOT less.
There is a very simple way to view AI and data: fire up a search engine, type in ‘business leaders,’ and ask for image output… look at the gender and color. Or ‘attractive women,’ or anything else you’d like… the data bias is there for all to see! Digitized data sources are strongly biased, and so is AI!
I commented above about how lawyers using AI have been sanctioned since AI doesn't seem to be concerned with doing accurate legal research. (It makes up supporting case citations, amongst other things.) But your reference to your attention span reminded me of a book I recently read which documents how using computers/tablets, etc. (even before AI kicks in), damages the ability of children to focus/concentrate/problem solve. It makes me feel like we're deliberately using technology that will end up destroying us by aggressively dumbing down the population.
Some time ago I read a fascinating article about studies showing how the use of GPS is doing serious damage to some of our most important brain functions. Check it out. From (oh no!) AI Overview, or do a search and read some of the source material.
Impact of External GPS Use
Research indicates that habitual reliance on external GPS negatively affects the brain's natural spatial memory and learning processes.
Reduced Hippocampal Activity: Studies using brain scans show that when people rely on turn-by-turn GPS directions, activity in the hippocampus decreases. Navigating without GPS requires active use of spatial memory, which activates this region.
Strategy Shift: GPS encourages a "stimulus-response" or "auto-pilot" navigation strategy, which relies more on the caudate nucleus (involved in habit learning) rather than the spatial memory strategy that builds cognitive maps. This can lead to more rigid behavior and poorer overall spatial knowledge of an environment.
Potential Atrophy: Just as London taxi drivers were famously found to have larger gray-matter volume in their hippocampi from memorizing city streets, lack of use can lead to the opposite. Infrequent use of natural navigation skills may impair hippocampal connections over time.
Divided Attention: Some research suggests that the use of navigational aids sufficiently divides attention, which contributes to impaired spatial memory.
I believe you, but if your sense of direction were as bad as mine is, you would (as I do) bless GPS every day. I've often said that if GPS existed when I started driving (in 1966), I'd have been able to live a whole additional lifetime with the time I'd have saved by not getting lost.
I listened to an interesting discussion about AI and education. The professors liked it as a tool, but all of them were concerned about its use by students who aren't experts and don't have the background to know what's true and what's fabricated. There was also a big concern that the students would never gain that knowledge, because AI makes it so much easier to "appear" to know the answers.
Yes, people and coding are the real innovations, the real energy and design, the joy, the fun! AI is none of this… there is a reason some have become so excited about marketing these tools… no people, no social contract, no lives, no loves.
I read a news story recently, I think in Slate, that two teams of coders, one with and one without AI, were pitted against each other to see which would be more productive. The team using AI lost because the coders had to spend inordinate amounts of time checking the code AI produced for errors. I guess the coders using AI didn't have a good way to use the 1-10 minutes they needed to wait for AI's wrong answers, either, huh?
Anyway, I look forward to serving our future AGI masters. They can't possibly do a crummier job of running the world than our current overlords. Or can they (cue spooky music)?
I can see that. It depends on what the task is. I've asked it to do things that I realized later I could have done more quickly myself. And I've also asked it to do things it just couldn't do. But it's done things in minutes that would take me a day, and it's done things in hours that would take me weeks. It's also done things that I've wanted to do for years and just didn't have the skill to do, and never would have the skill to do because of the opportunity cost of gaining that skill. And it's great for debugging. Many human coders make a lot of mistakes, too. I definitely do. I spend a lot of time testing my own code, so my guess is that part is a wash for me. I also ask Claude to teach me as it goes, so I learn stuff for when I code on my own. I think there are a lot of bad things about AI. When I said it's become useful in medicine, I should have clarified that it's a minority of AI that's useful. Other AI is unhelpful or worse, and many people make ridiculous claims about what AI can do. But some of the good stuff is really good.
If your software program can give you a list of your most often used codes, hire a scribe with the skill to expand each dx or procedure category from simplest to expanded precision (plain diabetes followed by diabetes of every flavor, then the same for kidney disease, hypertension, whatever you treat), and have individual patient codes appear onscreen at each visit with easy access to your master list; you might save hours every week.
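As a rough sketch of the idea (the codes, descriptions, and usage log here are hypothetical examples; a real setup would pull them from the practice's billing software):

```python
from collections import Counter

# Hypothetical usage log: dx codes billed over some period.
billed = ["E11.9", "E11.9", "I10", "E11.22", "I10", "E11.9", "N18.3"]

# Hypothetical scribe-built expansions: simplest code -> more precise options.
expansions = {
    "E11.9": ["E11.22 (diabetes w/ diabetic CKD)", "E11.40 (diabetes w/ neuropathy)"],
    "I10":   ["I11.9 (hypertensive heart disease)", "I12.9 (hypertensive CKD)"],
}

# Build the onscreen quick-pick list: most-billed codes first,
# each followed by its more precise variants from the master list.
quick_pick = []
for code, _count in Counter(billed).most_common():
    quick_pick.append(code)
    quick_pick.extend(expansions.get(code, []))

print(quick_pick[0])  # the single most-billed code comes first
```

The point is just that a frequency count plus a hand-curated expansion table is enough to surface the right precise code in a click or two instead of a search.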
As a doctor, have you read C. M. Kornbluth's 1950 sci-fi short story, "The Little Black Bag"? The future is filled with people who are not very intelligent, but the technology compensates. The black bag of the story is the classic physician's black bag for carrying the tools of the trade to the patient: a remarkable piece of technology that was acquired by a mid-20th-century primary care physician. That is how kids without experience will do medicine in the future.
Isn't this how your dealer services your modern car? They plug it into a computer with diagnostic software, which determines what needs to be changed or fixed. All that knowledge of engines and transmissions has become obsolete for new cars; the skills are now used for older vehicles without the fancy electronics. AIs are claimed to be better than radiologists at detecting cancers from imaging results. Fortunately, hospitals ignored the suggestion that radiologists would become obsolete. However, I think it is clear that this technology could be used to overcome shortages of specialists in countries with stressed healthcare systems, and in countries with only poor systems.
Where I live, the local hospital cannot find a neurologist to hire. The region is almost devoid of them. In counties where hospitals are closing, I can see that AI systems will increasingly shoulder the load of various specialists. Those SciFi robotic devices that do the "hands-on" work of specialists or surgeons may become the future of healthcare in some places. At least LLMs can be programmed to have good bedside manner.
Today's NY Times used the accident rates of Waymo self-driving taxis to suggest that we should be happy that they exist, as fatalities are reduced by 90% (but the stats are not really comparable, AFAIK). Yet most people are not comfortable with pilotless aircraft. Self-driving taxis may change that in time. For some aircraft losses, like that one over the Pacific that was never found, it might be very helpful to have a remote pilot able to take over the piloting if the crew on board is incapacitated, just as is being done with self-driving cars that get into difficulties.
My local hospital has a "neurologist" on tap via screen. But as I noted with a recent need to find a neurologist, this doesn't count when tests need to be done. My only recent experience of hands-on tests neurologists do is testing reflexes and doing nerve conduction tests with electrodes. How hard would that be to do with a machine, even if it needed to be observed or run remotely? The only really skilled task I have experienced is taking spinal fluid for analysis, but does that require a neurologist, or can another specialty do that task? (At UCSF, the neurology dept wasn't able to do such a spinal tap as it needed imaging in my case to insert the needle.) The rest is just analysis, which might well be done by some sort of AI. Where I live, "specialists" seem more like a label than having any real skill other than concentrating the patient load on a particular physician to gain more practice. It is a veritable "medical desert" regarding specialist availability.
Classic story and cautionary tale of relying on a seemingly miraculous technology that: (a) misleadingly appears to offer you a path to riches; and (b) you don't have the first clue on how it actually works.
I heard this and then just a couple days ago heard that prompt engineer jobs are less needed because the models are better at reading bad prompts. So I started telling Claude to help me write better prompts and it does. It is its own prompt engineer.
Neither of my two key physicians uses AI, which is why I'm leaving their practices. Both are very much "old school" and have dismissed all of the objective data I've given them, as well as what those data portend for my health. If they had taken a few minutes to use any of the AI chatbots, they could have verified what I told them, which could have major ramifications for my health. So, your use of AI is wise. It's the future of healthcare and clinical medicine. (I'm the son and brother of physicians, as well as an early AI research scientist.)
Excellent comment, e.g., "we're in for a rough road, I think." Amen to that. The younger generation is entering an era with no models to rely on... a frontier no one can explain to them. Very concerning. Some compare the transformation via AI to electricity, but AI is radically changing everything, and at a pace a thousand times faster than the changes due to electricity. Ahhhh... but the potential for major advances in virtually every discipline. There's the rub. Potentially unimaginable benefits. It reminds me of that famous first line in A Tale of Two Cities: "It was the best of times, it was the worst of times." Only in 10-20 years will we be able to know which these times are.
An illuminating discussion! Thanks to both Pauls for a consideration of the process of AI itself and its even more convoluted economics. What I’ve extracted from your back-and-forth is that AI will give the most reliable answers for rigidly defined logical systems like computer software, but will be inherently less reliable as the ‘loose grammar engine’ that uses human vocabulary. Moreover, the data from the public internet that was used initially to ‘train’ the LLMs that are the logic of the system has now been mostly consumed, and what data are left are less useful and maybe even destructive if used. Apparently, such AI systems (and I’ve experienced this for myself) can aggregate bits of knowledge, suggest corollaries, and thus mimic a human’s ability to make predictions. Why not? Maybe humans do it the same way? But, it appears, like humans, AI is stuck with its knowledge base and comes up with conclusions reminiscent of a 37-year-old guy fulminating on Reddit.
Moreover, the computer chips, which were originally devised to run video games, heat up, behave erratically in ways that may be hard to identify, and therefore have a short useful half-life. And they are 60% of the cost of the whole system. Each system runs on massive amounts of electricity, which the AI giants must either build plants to provide for themselves or hoodwink utilities and their customers into providing for them.
The two Pauls then switch away from the technology itself to a discussion of what all this is costing and what economic effect it will have on the U.S. - on the U.S. mostly because these AI behemoths are fighting it out mostly here, while models from other countries, like China’s DeepSeek, are more sparing of resources. Turns out it’s slippery to estimate how much is being spent on this - maybe 14% of total GDP? And many investors have distorted ideas about what’s under the hood, relying instead on the perfume of Meta and Google to close the deal. Oh - there are also convoluted business arrangements called SPVs - shades of CDSs, anyone?
Dr. Kedrosky notes that this astonishing outpouring of economic resources - justified or not - is likely to distort our society in many ways: gobble up available credit and land to build power plants and data centers, and create a huge deflationary effect when it actually starts doing what it has promised to do. To which Professor Krugman responds “oh joy”.
Any interview on technology where the interviewer asks what a "thingee" is speaks my language.
Thank you for such a clarifying conversation. The message I take in is that, despite the seemingly enormous brain power of the people who are making AI, and of AI itself, we are making the same loop around the same race track we take every generation, only faster and with more combustible gasoline, heading into a fiery crash.
I think it's interesting that AI allows people (maybe lawyer types?) to work within the context of some "collection of documents" without having to actually read those documents. "Adjust the language of the proposed legislation to be consistent with corporate policy (as described in [secret] corporate documents AJ94.5)." No one needs to ever actually know what the secret corporate policy says or what it is even about.
Gloria, you are correct. There are always people who will use technology for evil. Like a coin, there are two sides: atomic energy and atomic bombs. There are good uses for AI, but we must be vigilant to limit the evil uses.
No, but it presented different threats in a different era, which created fear. And nobody, but nobody, knows whether AGI will be Terminator, or whether that will even happen, and when.
I don't believe they have more brain power than Paul and Paul - and many, many others - and they certainly have VERY low EQs. They have much more ambition for power and are much greedier. The first thing that needs to happen when the House and Senate turn over is to overturn Citizens United, so they have less opportunity to ruin our democracy.
Terry is great! I always translated it as ‘what’s his name.’ But I could be wrong… Pratchett got a lot of very complex human things correct, although his idea of libertarians, murderers and thieves, the guilds, is funny, as is the smart version of the dictator… he makes the trains run on time. Of course Vimes is the hero; sorry he was never able to make Carrot the rightful king, who would have made interesting children for the succession.
Found the article well worth my time. One question it left me with was the impact of the unprecedented involvement of four sectors in this possible bubble: real estate, technology, credit, and government. Will that prevent the bubble from bursting because “everyone” is involved, or will it make for a far more disastrous economic crash?
For the discussion of circularity just over 1/2 way in this presentation, I was reminded of the analogy of the AI driven factory that makes paper clips, described in Harari's "Nexus" and of the Sorcerer's Apprentice in "Fantasia".
A key point from this discussion is that asymptotic limits are being approached in ways that are not generally understood. The short mean time between failures (MTBF) has also not been prominently discussed. Another key point is that the utility of these LLMs in real life can only go so far for the average person. It's not driving your car (yet), and it's not preparing dinner (think Jetsons or Star Trek), nor is it helping in the labor of pouring a foundation or butchering a steer.
A point not discussed is the cost in this "industry" of robustness. Not just the hot-swapping of chips that fail: a whole data center cannot be allowed to go dark, so there must be power generation backup, including batteries of enormous scale. In terms of water cooling needs, there is the problem of drought as a chronic issue, but there are also flooding and wind to be considered.
My response to all of this discussion is to prepare for a steep recession, for which the government is not prepared, especially with the proposed replacement for Jerome Powell.
Former emergency generator tech here: the increase in scale over previous large financial data centers (themselves a very large increase over the previous generation) is mind-boggling. Lehman (now Barclays) started in the 80s with 3×1,000 kW. As their doom approached in 2007, they were building out 6×2.8 MW. I worked at both sites.
Microsoft's complex in Mount Pleasant, WI: existing, 39; permitted, 40 more; proposed, about 150. From AI Overview. I wonder: can anyone else, like a hospital, even get one now?
So that's where those "widgets" I heard about in Econ 101 came from! Trouble here seems to be, AI widgets are "slop", and the machines that make them are very expensive and wear out fast.
There was an earlier theatrical reference but yes, Econ 101…but at least they are creating something rather than stealing…different economics for sure. Fake it till you make it! Fake reality!
In other words, financialization, more or less. Don’t need a real and useful good, just a quasi-plausible story to build financial structures on. And, ‘as long as we all stay irrational, we’re ok’.
Oh, I forgot ‘influencers,’ and the drugification of phones… really, even the most bizarre science fiction never predicted that we’d pay 100 dollars a month to become addicted to cat videos, the storage of which is killing our planet… although AI, which is neither, has jump-started the distortion!
I specifically am concerned about the "average person" upon which the LLMs are based. My brain functions at the 99th percentile so AI results are pedantic and almost useless.
Wow! This is incredible. I’ve been concerned about the AI boom and possible impacts on my IRAs. This is laid out so well and understandably that I, a music major, can understand it. I’ve wanted to know this info for a long time. And while all of this money is going into data center infrastructure, what is happening with all of the non-tech infrastructure like bridges, roads, water and sewer lines, etc.?
Well, along with those, what we SHOULD be investing a lot of that money in is green energy, or even nuclear. Instead, along with crypto (which is totally useless), we are generating more greenhouse gases at the worst possible time for very uncertain benefit. We should be solving the energy problem and the climate problem, instead of working hard to make it worse.
And then these shitheads like Musk, instead of focusing on preserving the Earth's environment, want to go to Mars when we've so thoroughly ratfucked what we have here.
To illustrate just how unrealistic the Mars pipe dream is: even after a global nuclear armageddon that kills 99% of humanity and poisons the entire planetary surface and atmosphere, Earth will still be more hospitable to human life than Mars currently is.
We have a magnetic field. You won’t be able to live on the surface of Mars without maybe wrapping the planet in superconducting cable to generate one. Otherwise the atmosphere will always be thin. And no oxygen.
One of my favorites is the dust. It blows everywhere on Mars and it ain't ordinary dust. It's fine, microscopically sharp and angular, abrasive to equipment, and full of toxic, carcinogenic perchlorates.
Because of the externalities that cost the general population, should the data centers be nationalized? By the way, should StarLink be nationalized for national security purposes?
.. how many infrastructure weeks did the last Trump administration have while delivering goose eggs? Check with your local Dept. of Transportation. They grade bridges and road surfaces ... many are aged and MacGyvered to meet bare-minimum standards, falling in at a C or lower, having been built in the 70s or earlier.
The Biden administration did pass an infrastructure bill, some of which is being kneecapped by HeWhoMustNotBeNamed.
Add to that the growing pressure of these data centers on an already aged grid ... anyone remember the Great Blackout? Or the rolling blackouts of the Enron era in the West?
It's going to blow small boats out of the water off the coast of Venezuela with the full might of the US Navy instead of Coast Guard cutters arresting them. It's going to build up an alternate domestic military force called Border Patrol and ICE, and build concentration camps for landscapers, roofers, cleaners, etc. It's going to demolish the White House and put in an AMAZING BALLROOM so that thousands can pay homage to the first American King.
I think the most important thing is to realize that there is no good reason to think large language models are an important step towards a general intelligence machine. I cannot say for sure that it is not, but if it is, where is the actual connection?
It seems likely that LLMs are at least a stepping stone to more sophisticated AI since at a basic level they operate in a similar way to some neural functions.
On the other hand, brains also have filtering systems and elaborate feedback mechanisms to adjust the filtering.
Filtering can at one level be thought of as evaluating multiple ideas before choosing a word or action, but there is filtering at multiple levels down to electro/chemical signalling between neurons.
LLMs are trained in some facets of this filtering, but the mechanisms in the brain are much more complicated.
It is unclear whether the building blocks in an LLM are too simple to be useful to get to AGI, or will be a component, or are sufficient in and of themselves.
I wish I understood most of this conversation. I do understand how important it is to be aware of the issues addressed. I appreciate the opportunity to “listen in” on the conversation and to read the comments here as well. It is a foreign language to my 78 year old brain whose processor is becoming degraded🤣. Thanks for sharing the interview and thanks to the commenters for sharing their insights as well.
I found this discussion informative and fascinating. It was also quite terrifying. I had no idea how shaky the entire AI boom is. I'm left with the feeling that it's only a matter of time before this enormous bubble bursts! Meanwhile, the massive build-out of AI centers is causing serious environmental damage.
I came away both more and less terrified. Less so in terms of the singularity happening anytime soon, more so in how much crappier capitalism is in this reality than I realized...
As it works in the US, yes. Capitalism is not a political system, but a system that can work fairly when both sides of a transaction can negotiate on equal footing.
In our world, corporations have more rights than individuals and have become an oligarchy. The Federalist Papers, nine and ten in particular, discuss the need for guardrails to prevent certain factions (religion, the merchant class, etc.) from throwing their weight around to undermine the common good.
Trade can be free and fair where guardrails, i.e., system of government, exists to support and enforce the common good. We are not in that place.
I’m also appalled by how consumptive these data centers are, in terms of how much land, water and power they will need… concerned that they will be expensive, useless burdens on the public in a few short years
We are having a big fight over a data center to be built in Saline, MI, near Ann Arbor. DTE wants to raise rates to accommodate the extra infrastructure needed, while politicians are offering up all kinds of tax breaks in exchange for a minimum of 30 new jobs.
Good info on this can be found at Distill on Substack. One state's fight not to subsidize yet another corporation.
After reading this, I would stay away from it, particularly subsidizing power. Why should the general population subsidize the power this data center needs, when the parent companies have made billions of dollars?
Yes, and for state and local governments to give concessions to attract these projects while forcing residential consumers to bear the increased cost of building out supporting infrastructure is malfeasance.
Very good. Kedrosky is doing a good job here (which is pretty rare in GenAI).
"Is there a less-than-90-minute explanation of how the whole thing operates?" https://youtu.be/9Q3R8G_W0Wc is a 45-min talk from two years ago, when certain things were still easy to illustrate/visualise (nothing about how a transformer works or the math; purely functional, on generating text by token selection). These things have not fundamentally changed, though a lot of 'engineering the hell out of/around it' has taken place and quality has improved a lot (but fundamental limitations still remain). The 'thinking' models, for instance, are massively costly because of their 'indirection': it is just a system with massively more behind-the-scenes inference, with models trained on certain forms of 'token sequences' (like mathematical reasoning text); see https://ea.rna.nl/2025/02/28/generative-ai-reasoning-models-dont-reason-even-if-it-seems-they-do/. There is also much more parallelism (multiple inferences running side by side in the models, with one put out at the end; this, I guess, was probably the big improvement in GPT-4).
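For readers who want a concrete picture of that "generating text by token selection" idea: here is a toy sketch. Everything in it, including the candidate tokens and their probabilities, is invented for illustration; a real model computes a distribution over tens of thousands of tokens at each step.

```python
import random

def sample_next_token(probs, temperature=1.0):
    """Pick one token from a {token: probability} dict.

    Lower temperature sharpens the distribution (more deterministic);
    higher temperature flattens it (more varied output).
    """
    tokens = list(probs)
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    total = sum(weights)
    return random.choices(tokens, weights=[w / total for w in weights], k=1)[0]

# Pretend the model assigned these probabilities after "The cat sat on the":
next_probs = {"mat": 0.6, "sofa": 0.3, "moon": 0.1}
print(sample_next_token(next_probs, temperature=0.7))
```

Generation is just this step in a loop: sample a token, append it to the text, feed the longer text back in, repeat.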
One remark. As far as has been reported (it's all trade secrets, so hard to estimate; there are a few papers, like https://arxiv.org/abs/2512.03024 and https://arxiv.org/abs/2505.09598, but they aren't very strong, and it is a lot of professional guesswork), about 90% of a model's energy spend comes from inference. While one big job for training sounds special, that big job too is a constant stream of little jobs, and I would doubt that inference is dramatically less stressful for the chips than training is. But that is an estimate, surely.
At this point, there is an abundance of training (often additional training on top of an existing dataset), but that is because Google, OpenAI, Anthropic, etc. are in a race to get to the top position, the near-monopoly position, with the best model, and that requires a lot of training during development. When that race settles, we will see that inference, not training, makes up almost all of the cost. It is that '90%' inference that is why they are building these data centers.
The firms lose massive amounts of money on inference, and training is pure development cost. So making everything more efficient is where most of the attention (pun intended) now is. GPT-5, for instance, tries to estimate the difficulty of the text so far (both human and AI-generated) and tries to route inference to cheaper (smaller) models, also depending on your level of pay. Google has paid a lot of attention to hardware (TPUs) and in the past created a large cache of (not very reliable or complete) 'AI-generated summaries' of web pages, which at one point it used instead of reading the web pages themselves during an inference, like others do.
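The routing idea described above can be sketched roughly like this. Note this is a hypothetical illustration: the heuristics, thresholds, and model names are all invented; the real routers are learned systems and proprietary.

```python
def estimate_difficulty(prompt: str) -> float:
    """Crude stand-in for a learned difficulty estimator (invented heuristics)."""
    score = 0.0
    if len(prompt) > 500:  # long context suggests a harder task
        score += 0.4
    if any(kw in prompt.lower() for kw in ("prove", "derive", "step by step")):
        score += 0.5       # reasoning-style requests are costly
    return min(score, 1.0)

def route(prompt: str, paid_tier: bool) -> str:
    """Send easy prompts to a small, cheap model; hard ones to a big one."""
    threshold = 0.3 if paid_tier else 0.6  # paying users escalate sooner
    if estimate_difficulty(prompt) > threshold:
        return "big-expensive-model"
    return "small-cheap-model"

print(route("What's the capital of France?", paid_tier=False))
print(route("Prove the theorem step by step.", paid_tier=True))
```

The economic point is that most queries are cheap to answer, so diverting them away from the largest model cuts the average cost per inference dramatically.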
But that this stuff depreciates extremely fast remains true. And that all that massive growth in volume will not bring us AGI-quality is also pretty clear. Which means that the models have to become the economically 'cheap' option (both in cost and in quality) for mental work, just as factories became the 'cheap' (physical-automation) option for physical work at the start of the Industrial Revolution. It is quite uncertain at this stage whether that is economically viable, and at the least it will put pressure on many of today's 'mental artisans' (especially creatives). Some really large breakthroughs (e.g. combining with non-digital hardware) are still required, and these are pretty far off.
"Quite uncertain"??? Presuming I've accurately grasped what I read here, I would infer that the likelihood of economic disaster is quite high. Of course, that presumption may be as dubious as presuming the accuracy of AI summaries.
The likelihood of the bubble bursting is very high indeed. A recession seems quite likely too, but to my mind there is some uncertainty about how deep it will get.
I was talking about the somewhat longer term (say 10-15 years). Suppose this research on combined analog-digital calculations can be scaled up by a factor of 256 from recent results; then we're looking at a factor-100,000 improvement for this stuff (1,000 times as fast at 1% of the energy for the same calculation). No AGI, but the economics of inference will then change dramatically. I doubt the current technology will be economically viable.
It would seem that a recession is certain. Kedrosky says “[Bubbles] tend to be about real estate, or they tend to be about technology, or they tend to be about loose credit, and sometimes they even have a government role with respect to some kind of perverse incentive that was created. This is the first bubble to have all four.”
In other words, to avoid a recession, these companies have to do everything perfectly and have a tremendous amount of luck on top of that.
Agreed. Though we should not forget sequences like the dotcom crash leading central banks to set very low interest rates in an attempt to stimulate the economy, which then sent money fleeing into stocks and real estate and thus, in combination with MBS silliness like turning junk bonds into AAA, led to the 2007 megaboom and 2008 megabust.
Hmm, maybe we should call this magaboom and magabust. Just kidding.
This was a great interview! I listened to Paul Kedrosky's talk with Derek Thompson, so things like the SPVs and "Dutch disease" were not news to me, but I still learned some new things: the idea of those land/power companies gobbling up parcels of land just to hoard and flip, à la Chinatown, is insane. We are building in so many layers of complexity and interdependence that it's impossible to know exactly how it will turn out, but it seems the downside risks are a lot more prominent than the potential upside...
I was telling a colleague the other day that using a chat AI tool is like working with a 7-year-old and a yellow lab at the same time. Lots of questions and lots of pleasing.
So the GPUs, like ripe bananas, quickly stop being useful?
If the faith in AGI capturing the future pays off, the obvious winners are gonna be the private shareholders of the monopolistic platforms, 6 or 7 of them total, dominating the market.
These platforms must capture the entire market, winner take all.
Eminent domain of water rights is ALREADY going on, along with electricity produced from lignite coal, depleted oil reserves, and those holding future drilling rights.
None of this has anything to do with Democracy.
All this for future profits for... whom again? Time to cozy up to those in charge. "Trump? Oh Trump? Have some gold, but give me WATER RIGHTS."
Universal healthcare? $200 bn. High-speed train network? $500 bn. Free college for everyone? $250 bn. None of those put any $ in a billionaire's pocket.
Wow. The comments are just as interesting as the initial discussion, not because they are all gems but bc they highlight humans' dismissiveness. For those who got lost: read the transcript vs. watching it, and if it's too long for you, run it thru an AI program for a detailed summary, bc this is a really great piece that many humans need to grok, especially those of us who have been running away from understanding AI out of fear or complexity.
I use AI to pull info on really basic topics, then dive into links and related searches. Mostly just to home in on related words and phrases to use in a more Boolean search.
The difference is clicking on Paul's picture in the post for the live chat, or the voice-to-text link at the top right-hand side of every post (dark black arrow).
This was amazing. I learned so much from this. I use AI all the time for coding and it’s amazing. But for now, at least, it only works for what I need it to do because I know how to code and how to test code. Otherwise it would get stuck or miss big problems. I am a physician and AI has recently become very helpful there, too, but again I can only use it well because I know how to be a doctor without it. I’m terrified for young people. What jobs will be available to them? And how will they get the experience to catch what AI misses? And what happens to all of us when no one knows what they’re doing anymore? Another issue with AI is that when I use it for coding it usually has to think for 1-10 minutes before giving me an answer and then it needs my feedback again. I am still working on strategies to use that time efficiently. And it will probably drive my attention span down even further. I had not thought about lack of investment in other sectors which also seems quite bad. We are in for a rough road, I think.
Yes, this was an excellent discussion. I would love to see another one on jobs, creativity, and the societal impacts of AI. This is such an important topic right now.
AI requires original, human-generated content to train on – but at the same time, AI is disincentivizing the very content that it needs to move forward... For example, traffic is the lifeblood of any website, but Google now plasters enormous "AI Overviews" scraped from other sites in response to each search, robbing the traffic (and incentive) for those human authors to create new or additional content.
Without new material, generative AI models are trained on increasingly large amounts of their own output, leading to what researchers call "model collapse" – a degenerative process resulting in a decline in quality, variety, and accuracy. Like digital incest, this occurs because AI-generated data, lacking the full richness of human-created data, can introduce errors that compound over generations, causing models to drift from reality and produce less useful or "polluted" outputs.
So in a future with diminishing human-authored content and worsening AI-generated content, where will we be? Everyone comes to rely on AI, but what does AI rely on? Has anybody thought this through?
At seventy-five I can see the goalposts of life so all of this is far more important for you (relative) youngsters, but what I'm trying to do to the very best of my rudimentary computer/technological abilities is remove AI from my daily life. What I'm discovering is it's not so easy especially with my new Windows 11 computer. I don't want copilot and Gemini and AI assisted searches, I want to use my brain. I have found some info that purports to show me how to delete or block this or that, but none of it seems to work very well. At this point I'd be happy to just rid my computer of those maddeningly seductive "AI Overviews" that make my brain go to sleep and that I don't really trust for completeness or correctness.
Any suggestions, anyone?
You can consider an alternative to Google like DuckDuckGo, which uses less AI.
Also, if you include “-ai” in any Google search, you won’t get the “AI Overview” at all… plus your results will be faster. But not many people are aware.
I started using DuckDuckGo instead of Google years ago, and never turned back.
This latest spam bot is back with a vengeance. Reported.
Another cute spam bot with its own substack post. Reported.
Very cute. A spam bot with its own substack post. Reported.
Thank you.
I went Google searching and it gave me two AI-enhanced answers, and they were inconsistent with each other! AI couldn't tell, because it's kind of stupid!
We see things that aren’t there because this is what humans are born to do.
The coconut ‘monkey face’ doesn’t look like a face! Great research on infants seeing how they smile at the three dots, yes three dots…we are programmed to see faces…
I’ve had Google AI give me wrong answers. Of course most of the time, I don’t know whether they’re right or wrong.
Likewise I use startpage.com for search and get no AI slop at all.
I'm 70, so a bit behind you - maybe on the 20 yard line. I actually like the AI summaries. They can save me a lot of time rummaging around for answers. But I take the summaries with a grain of salt. If I want to dive in deeper there is a link, and if I don't trust it at all, I can scroll further and find other sources. So I don't think it takes anything away.
The most impressive for me was when a part fell off the dishwasher - a little wheely thing. I took a picture of it and used Google lens which identified it and gave me the exact part number (I assume that's AI) and a link to where I could buy it. I'm not sure how long it would have taken me otherwise to figure out what it was.
I like the AI summaries too and I’m 74.
74yo retired programmer here. The short answer is: Linux Mint.
I made my living for 43 years on DOS/Windows. But I actually liked Windows only with Win7; thereafter, Microsoft shoved too many "helpers" into everything and ruined the product. Along the way I set up Linux servers, but couldn't get comfortable enough with it to build GUI software until the last half-dozen years of my career.
So with the impending Win11 and all the warnings about its souped-up AI features, and with me in retirement and just wanting to be able to surf the net and occasionally use a word processor and spreadsheet program, I asked DuckDuckGo to find me articles on "Linux for Grandma".
The answer is: Linux Mint. I ordered a laptop from a company in the Netherlands, LaptopWithLinux.com. The order sheet allowed me to have them preinstall my preferred browser (Firefox) and email program (Thunderbird), as well as LibreOffice for word processing, spreadsheets, etc.
My laptop arrived Jan. 21, 2025, just before DJT got busy imposing tariffs, so I got mine for $920; it's probably more now. But it worked right out of the box, and while I did occasional research about how to accomplish some things with Linux I was able to get down to surfing and reading without fuss.
Windows 7 was Microsoft's high-water mark. It has been one deprovement after another since then.
Eric, at 92 I remain gobsmacked by AI.
Is it a dynamic breakthrough that will greatly change our traditional world?
Is it a massive investment that is creating a short-term bubble that will significantly affect our short-term economy and, longer term, our traditional job market?
Is it something that I needn’t be concerned about, except for how it is likely to affect my 5 grandchildren?
I was a 1971-1972 MIT Sloan Fellow—-far too long ago to understand the implications of AI. I guess I must simply learn to grin and bear it.
AI is the latest way a small handful of fast talkin' tech bros are rushing to cash in on a flood of capital. Maybe they can make a ton of money but at what cost to the rest of us? Is there a credible, verifiable cost benefit analysis of AI?
Robert, this is the most sensible AI comment that I’ve read. Back when I was an MIT Sloan Fellow (1971-1972), our big tech breakthrough was computer timeshare. When it broke down during a group marketing problem, the only noise was the two Japanese Fellows with their abacuses. Guess times have changed.
https://shapingwork.mit.edu/news/mit-sloan-management-review-nobel-laureate-busts-the-ai-hype/
In the long run, AI will have the biggest technical impact on human society since writing, assuming we don't hit the AI robotic science fiction apocalypse or any of the other end-of-the-species scenarios.
For you directly, the biggest issue is probably whether your finances are insulated from a potential 2008-like implosion.
How AI will impact your grandkids is probably beyond your control, except to do your best to see that they are well educated, and perhaps to try to get them other passports if possible.
(Based on my grandparents, I could have had EU and Canadian passports, but it required paperwork done long ago.)
Here are some ideas:
Sick of AI in your search results? Try these 7 Google alternatives with old-school, AI-free charm | ZDNET https://share.google/jBciNIQnoehy43Cv7
Thank you so much. I'll give it a shot.
68 and refuse to even read the default AI answers. Instead, I scroll across the header in Google and click ‘web’ to see search results. On the bright side, I just received information about a class action lawsuit filed against Anthropic because they used two of my books for training. I’m hoping for more litigation against these beasts in the future!
I, too, try to avoid the AI answers so readily offered in Google and elsewhere. I’ve switched to DuckDuckGo for searches, as one can turn off AI answers and just see the more primary sources and choose what to read. If I do use Google, I quickly scroll past all the AI summarizing and questions.
Append -AI to the end of your search words
I’m 80 and AI is useful, as long as you check it for accuracy. You can choose on the top Google menu whether you want All, AI, or some other selection. All gives you the usual AI rundown, which sometimes is accurate, but it makes sense to check the sources also listed below. AI accuracy also depends on how carefully you phrase a question. Be precise and narrow in your question, and the AI answer is much more likely to be accurate. Still do a quick source check.

I composite in Photoshop. If I’m designing a crazy card for a friend, I’ll text-to-image, and use parts of the resulting AI image that I like, blended with my own drawing or photos. It’s fun. If I’m doing serious work, I selectively use some smart tools to help me blend faster, but all the parts of the final image are my own work, not AI.

I was an English major. I do my own writing for good or ill. I keep spellcheck and her friends muzzled, unless I have a question for the dictionary. You pick and choose what you want. You’re the human. You’re the boss.
My objection to AI is equally environmental. It uses 7-8 times as much energy to do an AI search compared to a traditional Google search. I just don't think we need to be destroying the planet while we're looking up things in a virtual encyclopedia. I know, I know, but my four-year-old grandson's parents, while they don't abjure the internet, usually have him use the encyclopedia when he wants to learn about something.
Some time ago I read an article by a travel writer who decided to plan a trip to visit a number of beautiful places using AI. It seemed so paradoxical to use a destructive tool to admire natural beauty. Are we really all so busy that we can't do anything ourselves?
I bemoan the fact that AI, as generally used, is an awfully sloppy term for some pretty precise chips. The slopover seems to include anything anyone might do with them. We need specific definitions for chips that vary and are programmed in so many different directions. Not likely to happen.

I worked in Photoshop for 20-odd years and watched Adobe build on what went before. You can still push a pixel all by yourself, or use a "smart tool" on your computer, offline, thanks to your GPU (Graphics Processing Unit). Or you can use Adobe AI, which only uses graphics Adobe has licensed or that are out of copyright, if you need a stock photo or inspiration. Or you can use plug-ins which search the web for anything they're told to do. Only the last two need to go online. They use different amounts of power for different processes, and get that power from different places, thanks to the descendants of GPUs, which include machine-learning algorithms and the misnamed AI. So that's graphics.

I used to airbrush, silkscreen, and shibori fabrics I made into clothing for people. Then I printed hosiery, using Photoshop and a digital printer. Now in my dotage, most of the artwork I do is digital, plus muscling and trimming my prints into accordion-book form. Digital work uses less material, and is arguably healthier for me and the environment, in many ways.

I can imagine really good transportation systems using AI to keep traffic smooth: keep lights green if no one's on the cross street, saving time, tire rubber, brakes, and fuels such as electricity. Or AI modulation of the electric grid, so it's much more efficient, with no brownouts or surges to ruin equipment. AI control of wastewater systems could aid in cleaning water, prevent backups or overflows in storms, and keep an eye on illnesses before humans can start counting. AI systems might work out ways to use less power themselves, or design better solar/wind/wave/thermal electrical supply. They might be able to see useful variation in plant and animal changes, fire danger, or careful, waste-free irrigation. AI machines might inspect airplane frames and engines before flight, pipelines of all kinds against spills, and of course help with pharmaceutical research. The limit is human imagination.

I think kids need to learn a lot of things, among them older ways to get things done, and new ways. They will live in the world long after me, and must prepare as best they can, as every generation has. I'm not going to second-guess parents on the "right way" for everyone to learn. I get your point, but like other tools, since the first stick, it all depends on how they're used. I do think it matters for people to know a subject well enough to make sensible decisions and regulations, and not bow down all wobbly to the new idol, or go full Luddite with pitchforks and shoes.
On my W-11 machine, I use Firefox as my browser. As Acela recommended, I use DuckDuckGo as my search engine. I use (but do not really recommend) AOL as my mailer. Except for the operating system itself, Word and Excel are the only Microsoft tools I use. The only Google tool I use is the mapping one and "streetview"; I avoid gmail like the plague.
Every time Microsoft updates the operating system, it pretends that I just bought the computer and tries to get me to reset everything to the Microsoft products. It's mildly irritating, but I just keep selecting "Skip" until it gives up.
Me too. I see 2 areas where AI is helpful to me. Editing poorly exposed photos and health imagery such as mammograms, MRI’s and such. Maybe protecting your retirement savings. Otherwise, don’t want it in my life.
In the Google search box, put -AI, then type your question.
It just queries the old way. Becomes a bit monotonous.
It's the Microsoft paperclip all over again. Really, it's designed to direct what you do, to make you a thought follower. People are dazzled by technology, so they hope this can be used to help engineer society. This is what they're doing, imo. It will continue to make the rich richer and everyone else financially and psychologically dependent on them. But gaining political alignment through the political process is excruciatingly slow; to borrow a quote from Planck, it progresses one funeral at a time. So as much as I hate the hype, this might represent one workable compromise. It's the world of tomorrow. But tomorrow, by definition, never comes.
Also, use the Firefox or DuckDuckGo browser. Go into Settings and disable any features you don't need.
I am looking at installing Linux with windows as a dual boot option.
Another useful trick is to turn off Wi-Fi until you need it. One of my computers is disconnected from the web 99+% of the time, and as a result seems faster but for sure interrupts me a LOT less.
Blue books? 🤣🤣🤣🤣
37 yo Reddit users? Nowhere near a reasonable data set for use in any statistical analysis.
What could possibly go wrong?
There is a very simple view of AI and data: fire up a search engine, type in ‘business leaders’, and ask for image output... look at the gender and color. Or attractive women, or anything else you’d like... the data bias is there for all to see! Digitized data sources are strongly biased, and so is AI!
Exactly my thought!
I think the tech jerks refer to it as "peak data," analogous to "peak oil."
I commented above about how lawyers using AI have been sanctioned since AI doesn't seem to be concerned with doing accurate legal research. (It makes up supporting case citations, amongst other things.) But your reference to your attention span reminded me of a book I recently read which documents how using computers/tablets, etc. (even before AI kicks in), damages the ability of children to focus/concentrate/problem solve. It makes me feel like we're deliberately using technology that will end up destroying us by aggressively dumbing down the population.
Some time ago I read a fascinating article about studies showing how the use of GPS is doing serious damage to some of our most important brain functions. Check it out. From (oh no!) AI Overview, or do a search and read some of the source material.
Impact of External GPS Use
Research indicates that habitual reliance on external GPS negatively affects the brain's natural spatial memory and learning processes.
Reduced Hippocampal Activity: Studies using brain scans show that when people rely on turn-by-turn GPS directions, activity in the hippocampus decreases. Navigating without GPS requires active use of spatial memory, which activates this region.
Strategy Shift: GPS encourages a "stimulus-response" or "auto-pilot" navigation strategy, which relies more on the caudate nucleus (involved in habit learning) rather than the spatial memory strategy that builds cognitive maps. This can lead to more rigid behavior and poorer overall spatial knowledge of an environment.
Potential Atrophy: Just as London taxi drivers were famously found to have larger gray-matter volume in their hippocampi from memorizing city streets, lack of use can lead to the opposite. Infrequent use of natural navigation skills may impair hippocampal connections over time.
Divided Attention: Some research suggests that the use of navigational aids sufficiently divides attention, which contributes to impaired spatial memory.
I believe you, but if your sense of direction were as bad as mine is, you would (as I do) bless GPS every day. I've often said that if GPS existed when I started driving (in 1966), I'd have been able to live a whole additional lifetime with the time I'd have saved by not getting lost.
I listened to an interesting discussion about AI and education. The professors liked it as a tool, but all of them were concerned about its use by students who aren't experts and don't have the background to know what's true and what's fabricated. There was also a big concern that the students would never gain that knowledge, because AI makes it so much easier to "appear" to know the answers.
Yes people and coding are the real innovations, the real energy and design, the joy, the fun! AI is none of this…there is a reason some have become so excited about marketing these tools…no people, no social contract, no lives, no loves.
I read a news story recently, I think in Slate, that two teams of coders, one with and one without AI, were pitted against each other to see which would be more productive. The team using AI lost because the coders had to spend inordinate amounts of time checking the code AI produced for errors. I guess the coders using AI didn't have a good way to use the 1-10 minutes of time they needed to wait for AI's wrong answers as well, huh?
Anyway, I look forward to serving our future AGI masters. They can't possibly do a crummier job of running the world than our current overlords. Or can they (cue spooky music)?
I can see that. It depends on what the task is. I've asked it to do things that I realized later I could have done more quickly myself. And I've also asked it to do things it just couldn't do. But it's done things in minutes that would take me a day, and it's done things in hours that would take me weeks. It's also done things that I've wanted to do for years and just didn't have the skill to do, and never would have the skill to do because of the opportunity cost of gaining that skill. And it's great for debugging. Many human coders make a lot of mistakes, too. I definitely do. I spend a lot of time testing my own code, so my guess is that part is a wash for me. I also ask Claude to teach me as it goes, so I learn stuff for when I code on my own. I think there are a lot of bad things about AI. When I said it's become useful in medicine, I should have clarified it's a minority of AI that's useful. Other AI is unhelpful or worse, and many people make ridiculous claims about what AI can do. But some of the good stuff is really good.
If your software program can give you a list of your most often used codes, hire a scribe with the skill to expand each category of dx or procedure from simplest to expanded precision (plain Diabetes followed by Diabetes of every flavor, etc., for kidney disease, hypertension, whatever you treat), and have individual patient codes appear onscreen at each visit with easy access to your master list; you might save hours every week.
As a doctor, have you read C. M. Kornbluth's 1950 sci-fi short story, "The Little Black Bag"? The future is filled with people who are not very intelligent, but the technology compensates. The black bag of the story is the classic physician's black bag for carrying the tools of the trade to the patient: a remarkable piece of technology that was acquired by a mid-20th-century primary care physician. That is how kids without experience will do medicine in the future.
I will read it. Agree I worry that’s where we are headed.
Isn't this how your dealer services your modern car? They plug it into a computer with diagnostic software, which determines what needs to be changed or fixed. All that knowledge of engines and transmissions has become obsolete for new cars; the skills are now used for older vehicles without the fancy electronics. AIs are claimed to be better than radiologists at detecting cancers from imaging results. Fortunately, hospitals ignored the suggestion that radiologists would become obsolete. However, I think it is clear that this technology could be used to overcome shortages of specialists in countries with stressed healthcare systems, and in countries with only poor systems.
Where I live, the local hospital cannot find a neurologist to hire. The region is almost devoid of them. In counties where hospitals are closing, I can see that AI systems will increasingly shoulder the load of various specialists. Those SciFi robotic devices that do the "hands-on" work of specialists or surgeons may become the future of healthcare in some places. At least LLMs can be programmed to have good bedside manner.
Today's NY Times used the accident rates of Waymo self-driving taxis to suggest that we should be happy they exist, as fatalities are reduced by 90% (but the stats are not really comparable, AFAIK). Yet most people are not comfortable with pilotless aircraft. Self-driving taxis may change that in time. For some aircraft losses, like the one over the Pacific that was never found, it might be very helpful to have a remote pilot able to take over the piloting if the crew on board is incapacitated, just as is being done with self-driving cars that get into difficulties.
O, brave new world!
Interesting. Neurologists seem to be hard to find in FL too.
My local hospital has a "neurologist" on tap via screen. But as I noted with a recent need to find a neurologist, this doesn't count when tests need to be done. My only recent experience of hands-on tests neurologists do is testing reflexes and doing nerve conduction tests with electrodes. How hard would that be to do with a machine, even if it needed to be observed or run remotely? The only really skilled task I have experienced is taking spinal fluid for analysis, but does that require a neurologist, or can another specialty do that task? (At UCSF, the neurology dept wasn't able to do such a spinal tap as it needed imaging in my case to insert the needle.) The rest is just analysis, which might well be done by some sort of AI. Where I live, "specialists" seem more like a label than having any real skill other than concentrating the patient load on a particular physician to gain more practice. It is a veritable "medical desert" regarding specialist availability.
Classic story and cautionary tale of relying on a seemingly miraculous technology that: (a) misleadingly appears to offer you a path to riches; and (b) you don't have the first clue on how it actually works.
The kind of jobs they're talking about these days are things like "prompt engineer" 🙄
I heard this and then just a couple days ago heard that prompt engineer jobs are less needed because the models are better at reading bad prompts. So I started telling Claude to help me write better prompts and it does. It is its own prompt engineer.
Neither of my two key physicians uses AI, which is why I'm leaving their practices. Both are very much "old school" and have dismissed all of the objective data I've given them as well as what those data portend for my health. If they had taken a few minutes to use any of the AI chatbots, they could have verified what I told them, which could have major ramifications for my health. So, your use of AI is wise. It's the future of healthcare and clinical medicine. (I'm the son and brother of physicians as well as an early AI research scientist.)
Excellent comment, e.g., "we're in for a rough road, I think." Amen to that. The younger generation is entering an era with no models to rely on......a frontier no one can explain to them. Very concerning. Some compare the transformation via AI to electricity, e.g., but AI is radically changing everything, and at a pace a thousand times faster than the changes due to electricity. Ahhhh....but the potential for major advances in virtually every discipline. There's the rub. Potentially unimaginable benefits. It reminds me of that famous first line of A Tale of Two Cities: "It was the best of times, it was the worst of times." Only in 10-20 years will we be able to know which of those these times were.
An illuminating discussion! Thanks to both Pauls for a consideration of the process of AI itself and its even more convoluted economics. What I’ve extracted from your back-and-forth is that AI will give its most reliable answers in rigidly-defined logical systems like computer software, but will be inherently less reliable in the ‘loose grammar engine’ that uses human vocabulary. Moreover, the data from the public internet that was used initially to ‘train’ the LLMs that are the logic of the system has now been mostly consumed, and what data are left are less useful and maybe even destructive if used. Apparently, such AI systems (and I’ve experienced this for myself) can aggregate bits of knowledge, suggest corollaries, and thus mimic a human’s ability to make predictions. Why not? Maybe humans do it that same way? But, it appears, like humans, AI is stuck with its knowledge base and comes up with conclusions reminiscent of a 37-year-old guy fulminating on Reddit.
Moreover, the computer chips, which were originally devised to run video games, heat up, behave erratically and in ways that may be hard to identify, and therefore have a short useful half-life. And they are 60% of the cost of the whole system. Each system runs on massive amounts of electricity which the AI giants must build plants to provide themselves with or hoodwink utilities and their customers to provide for them.
The two Pauls then switch away from the technology itself to a discussion of what all this is costing and what economic effect it will have on the U.S. - on the U.S. mostly because these AI behemoths are fighting it out mostly here, while models from other countries like China (with DeepSeek) are more sparing of resources. Turns out it’s slippery to estimate how much is being spent on this - maybe 14% of total GDP? And many investors have distorted ideas about what’s under the hood and rely instead on the perfume of Meta and Google to close the deal. Oh - there are also convoluted business arrangements called SPVs - shades of CDSs, anyone?
Dr. Kedrosky notes that this astonishing outpouring of economic resources - justified or not - is likely to distort our society in many ways - gobble up available credit and land to build power plants and data centers, and create a huge deflationary effect when it actually starts doing what it has promised to do. To which Professor Krugman responds “oh joy”.
Any interview on technology where the interviewer asks what a "thingee" is speaks my language.
Thank you for such a clarifying conversation. The message I take away is that, despite the seemingly enormous brain power of the people who are making AI (and of AI itself), we are making the same loop around the same race track we take every generation, only faster and with more combustible gasoline, heading into a fiery crash.
I worry about how this will be used for evil ends e. g. Politically.
Yes, definitely. Different and related topic. I can only take in one category of existential doom at a time, though.
I think it's interesting that AI allows people (maybe lawyer types?) to work within the context of some "collection of documents" without having to actually read those documents. "Adjust the language of the proposed legislation to be consistent with corporate policy (as described in [secret] corporate documents AJ94.5)." No one ever needs to actually know what the secret corporate policy says or what it is even about.
Gloria, you are correct. There are always people who will use technology for evil. Like a coin, there are two sides: atomic energy and atomic bombs. There are good uses for AI, but we must be vigilant to limit the evil uses.
This has been a constant refrain on technology since the Gutenberg press.
The printing press never threatened to become independent.
No, but it presented different threats in a different era, which created fear. And nobody but nobody knows whether AGI will be Terminator, or whether that will even happen, and when.
And it has affected politics and well, everything, ever since.
I don't believe they have more brain power than Paul and Paul - and many, many others - and they certainly have VERY low EQs. They have much more ambition for power and are much greedier. The first thing that needs to happen when the House and Senate turn over is to overturn Citizens United so they have less opportunity to ruin our democracy.
Doohicky, whatsit ...
Gremlins…phramastat…widget…thing-a-ma-jig…
Terry is great! I always translated it as ‘what’s-his-name.’ But could be wrong… Pratchett got a lot of very complex human things correct, although his idea of libertarians (murderers and thieves, the guilds) is funny, as is his smart version of the dictator… he makes the trains run on time. Of course Vimes is the hero; sorry he was never able to make Carrot the rightful King, which would have made for interesting children in the succession.
Elisabeth's Russian grandmother used "dishere." We follow Terry Pratchett and use "Wosname."
Yes, exactly! LOL
Found the article well worth my time. One question it left me with was the impact of the unprecedented involvement of four sectors in this possible bubble: real estate, technology, credit, and government. Will that prevent the bubble from bursting because “everyone” is involved, or will it be a far more disastrous economic crash?
yup
From my perspective, one of the best (if not the best) sessions since the start of this blog. Highly educational and enjoyable. Thanks to both PKs.
For the discussion of circularity just over 1/2 way in this presentation, I was reminded of the analogy of the AI driven factory that makes paper clips, described in Harari's "Nexus" and of the Sorcerer's Apprentice in "Fantasia".
A key point from this discussion is that asymptotic limits are being approached in ways that are not generally understood. The short mean time between failures (MTBF) has also not been prominently discussed. Another key point is that the utility of these LLMs in real life can only go so far for the average person. It's not driving your car (yet), and it's not preparing dinner (think Jetsons or Star Trek), nor is it helping in the labor of pouring a foundation or butchering a steer.
A point not discussed is the cost in this "industry" of robustness. Not just the hot swapping of chips that fail. A whole data center cannot be allowed to go dark, so there must be power generation backup, including batteries of enormous scale. In terms of water cooling needs, there is the problem of drought as a chronic issue, but there is also flooding and wind to be considered.
My response to all of this discussion is to prepare for a steep recession, for which the government is not prepared, especially with the proposed replacement for Jerome Powell.
Thanks - I was a bit disappointed that this discussion didn't really address the issue of the water needs of these centers...
Former emergency generator tech here: the increase in scale over previous large financial data centers (themselves a very large increase over the previous generation) is mind-boggling. Lehman (now Barclays) started in the 80s with 3×1,000 kW. As their doom approached in 2007, they were building out 6×2.8 MW. I worked at both sites.
Microsoft's complex in Mount Pleasant, WI: existing, 39; permitted, 40 more; proposed, about 150. From AI Overview. I wonder, can anyone else, like a hospital, even get one now?
Not to worry. Rural hospitals are going broke
I think it was GE who made a movie on how we make widgets. https://youtu.be/pLE3NjuCsNI?si=NOGMcLFwnsksQOEI
So that's where those "widgets" I heard about in Econ 101 came from! Trouble here seems to be, AI widgets are "slop", and the machines that make them are very expensive and wear out fast.
There was an earlier theatrical reference but yes, Econ 101…but at least they are creating something rather than stealing…different economics for sure. Fake it till you make it! Fake reality!
In other words, financialization, more or less. Don’t need a real and useful good, just a quasi-plausible story to build financial structures on. And, ‘as long as we all stay irrational, we’re ok’.
TikTok, fashion, vibe, Donny-John…crypto…the willful suspension…Palantir.
Oh, I forgot 'influencers.' And the drugification of phones... really, even the most bizarre science fiction never predicted that we'd pay $100 a month to become addicted to cat videos, the storage of which is killing our planet... although AI, which is neither, has jump-started the distortion!
I specifically am concerned about the "average person" upon which the LLMs are based. My brain functions at the 99th percentile so AI results are pedantic and almost useless.
Wow! This is incredible. I’ve been concerned about the AI boom and possible impacts on my IRAs. This is laid out so well and so understandably that I, a music major, can follow it. I’ve wanted to know this info for a long time. And while all of this money is going into data center infrastructure, what is happening with all of the non-tech infrastructure like bridges, roads, water and sewer lines, etc.?
Well, along with those, what we SHOULD be investing a lot of that money in is green energy, or even nuclear. Instead, along with crypto (which is totally useless), we are generating more greenhouse gases at the worst possible time for very uncertain benefit. We should be solving the energy problem and the climate problem, instead of working hard to make it worse.
And then these shitheads like Musk, instead of focusing on preserving the Earth's environment, want to go to Mars when we've so thoroughly ratfucked what we have here.
To illustrate just how unrealistic the Mars pipe dream is: even after a global nuclear armageddon that kills 99% of humanity and poisons the entire planetary surface and atmosphere, Earth will still be more hospitable to human life than Mars currently is.
We have a magnetic field. You won’t be able to live on the surface of Mars without maybe wrapping the planet in superconducting cable to generate a field. Otherwise the atmosphere will always be thin. And no oxygen.
One of my favorites is the dust. It blows everywhere on Mars and it ain't ordinary dust. It's fine, microscopically sharp and angular, abrasive to equipment, and full of toxic, carcinogenic perchlorates.
Yes. People can live in little huts. That’s about it
Underground.
Because of the externalities that cost the general population, should the data centers be nationalized? By the way, should StarLink be nationalized for national security purposes?
Starlink, definitely, but maybe by a future administration?
How many infrastructure weeks did the last Trump administration have while delivering goose eggs? Check with your local Dept of Transportation. They rate and grade bridges and road surfaces ... many are aged and MacGyvered to meet bare minimum standards, falling in at C or lower, having been built in the 70s or earlier.
The Biden administration did pass an infrastructure bill, some of which is being kneecapped by HeWhoMustNotBeNamed.
Add to that the growing pressure of these data centers on an already aged grid ... anyone remember the Great Blackout? Or the rolling blackouts of the Enron era in the West?
Pop goes the weasel.
It's going to blow up small boats out of the water off the coast of Venezuela with the full might of the US navy instead of Coast Guard cutters arresting them. It's going to build up an alternate domestic military force called Border Patrol and ICE, and build concentration camps for landscapers, roofers, cleaners etc. It's going to demolish the White House and put in an AMAZING BALLROOM so that thousands can pay homage to the first American King.
Thanks for the transcript. I read faster than I listen.
I think the most important thing is to realize that there is no good reason to think large language models are an important step towards a general intelligence machine. I cannot say for sure that it is not, but if it is, where is the actual connection?
So much this. It doesn’t seem like you can get there from here.
It seems likely that LLMs are at least a stepping stone to more sophisticated AI since at a basic level they operate in a similar way to some neural functions.
On the other hand, brains also have filtering systems and elaborate feedback mechanisms to adjust the filtering.
Filtering can at one level be thought of as evaluating multiple ideas before choosing a word or action, but there is filtering at multiple levels down to electro/chemical signalling between neurons.
LLMs are trained in some facets of this filtering, but the mechanisms in the brain are much more complicated.
It is unclear whether the building blocks in an LLM are too simple to be useful to get to AGI, or will be a component, or are sufficient in and of themselves.
I wish I understood most of this conversation. I do understand how important it is to be aware of the issues addressed. I appreciate the opportunity to “listen in” on the conversation and to read the comments here as well. It is a foreign language to my 78 year old brain whose processor is becoming degraded🤣. Thanks for sharing the interview and thanks to the commenters for sharing their insights as well.
I'm with you Martha! I couldn't even write a SUMMARY of the conversation, but it sure did get my gray matter roiling!
I think if they were honest many of the (praising) commenters would admit they also wish they understood most of the conversation.
I found this discussion informative and fascinating. It was also quite terrifying. I had no idea how shaky the entire AI boom is. I'm left with the feeling that it's only a matter of time before this enormous bubble bursts! Meanwhile, the massive build-out of AI centers is causing serious environmental damage.
I came away both more and less terrified. Less so in terms of the singularity happening anytime soon, more so in how much crappier capitalism is in this reality than I realized...
This is not true capitalism.
OK, well then my basic "how capitalism is shitty bc it openly ignores the real human cost vs cost as investment" understanding grew exponentially
As it works in the US, yes. Capitalism is not a political system, but a system that can work fairly when both sides of a transaction can negotiate on equal footing.
In our world, corporations have more rights than individuals and have become an oligarchy. The Federalist Papers - nine and ten in particular - discuss the need for guardrails to prevent certain factions (religion, the merchant class, etc.) from throwing their weight around to undermine the common good.
Trade can be free and fair where guardrails, i.e., system of government, exists to support and enforce the common good. We are not in that place.
Capitalism must be controlled with good governance. That isn't happening now.
I’m also appalled by how consumptive these data centers are, in terms of how much land, water, and power they will need… concerned that they will be expensive, useless burdens on the public in a few short years.
We are having a big fight over a data center to be built in Saline, MI, near Ann Arbor. DTE wants to raise rates to accommodate the extra infrastructure needed, while politicians are offering up all kinds of tax breaks in exchange for a minimum of 30 new jobs.
Good info on this can be found at Distill on Substack. One state's fight not to subsidize yet another corporation.
After reading this, I would stay away from it, particularly subsidizing power. Why should the general population subsidize the power this data center needs, when the parent companies have made billions of dollars?
Exactly.
Buena suerte.
Yes, and for state and local governments to give concessions to attract them while forcing residential consumers to bear the increased cost of building out supporting infrastructure is malfeasance.
Very good. Kedrosky is doing a good job here (which is pretty rare in GenAI).
"Is there a less-than-90-minute explanation of how the whole thing operates?" https://youtu.be/9Q3R8G_W0Wc is a 45min talk from two years ago, when certain things were still easy to illustrate/visualise (nothing about how a transformer works or the math; purely functional, about generating text by token selection). These things have not fundamentally changed, though a lot of 'engineering the hell out of/around it' has taken place and quality has improved a lot (but fundamental limitations still remain). The 'thinking' models, for instance, are massively costly because of their 'indirection': it is just a system with massively more behind-the-scenes inference, with models trained on certain forms of 'token sequences' (like mathematical reasoning text); see https://ea.rna.nl/2025/02/28/generative-ai-reasoning-models-dont-reason-even-if-it-seems-they-do/. There is also much more parallelism (multiple inferences running side by side in the models, with one at the end that is put out; this, I guess, was probably the big improvement in GPT-4).
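The "generating text by token selection" that the linked talk covers can be sketched in a few lines. This is a toy illustration only, not any vendor's actual implementation: the tiny vocabulary and the constant scoring function stand in for a real trained model, which would produce context-dependent scores over tens of thousands of tokens.

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Turn raw scores into a probability distribution over tokens."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def generate(score_fn, vocab, prompt, max_tokens=3, temperature=1.0, rng=None):
    """Toy next-token loop: score candidates, sample one, append, repeat."""
    rng = rng or random.Random(0)  # fixed seed so the sketch is reproducible
    tokens = list(prompt)
    for _ in range(max_tokens):
        probs = softmax(score_fn(tokens), temperature)
        # sample the next token according to the distribution
        next_token = rng.choices(vocab, weights=probs, k=1)[0]
        tokens.append(next_token)
    return tokens

# Made-up "model": a constant scorer that slightly favours "cat".
vocab = ["the", "cat", "sat"]
score_fn = lambda toks: [1.0, 2.0, 1.0]
out = generate(score_fn, vocab, ["the"], max_tokens=3)
```

Everything on top of this loop (chat formatting, "thinking", tool use) is, at bottom, still repeated token selection of this kind.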
One remark. As far as has been reported (it's all trade secrets, so hard to estimate; there are a few papers, like https://arxiv.org/abs/2512.03024 and https://arxiv.org/abs/2505.09598, but they aren't very strong, and a lot of it is professional guesswork), about 90% of a model's energy spend comes from inference. While one big job for training sounds special, this big job too is a constant stream of little jobs, and I would doubt that inference is dramatically less stressful for the chips than training is. But that is an estimate, surely.
At this point, there is an abundance of (often additional to an existing dataset) training, but that is because Google, OpenAI, Anthropic, etc. are in a race to get to the top position, the near-monopoly position, with the best model, and that requires a lot of training during development. When that race settles, we will see that inference, not training, makes up almost all of the cost. It is that '90%' inference for which they are building these data centers.
The firms lose massive amounts of money on inference, and training is pure development cost. So, making everything more efficient is where most of the attention (pun intended) now is. GPT-5, for instance, tries to estimate the difficulty of the text so far (both human- and AI-generated) and tries to route inference to cheaper (smaller) models, also depending on your level of pay. Google has paid a lot of attention to hardware (TPUs) and in the past created a large cache of (not very reliable or complete) 'AI-generated summaries' of web pages, which it at one point used instead of reading the web pages themselves during an inference, like others do.
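That routing idea can be sketched very crudely: estimate how hard a request is, then send it to a cheap small model or an expensive large one. This is a made-up illustration; real routers are themselves trained models, and the model names and the length-based difficulty heuristic here are invented.

```python
def estimate_difficulty(prompt: str) -> float:
    """Toy difficulty score in [0, 1]: long prompts with many
    long-ish words score higher. Purely illustrative."""
    words = prompt.split()
    if not words:
        return 0.0
    long_words = sum(1 for w in words if len(w) > 8)
    return min(1.0, len(words) / 200 + long_words / len(words))

def route(prompt: str, threshold: float = 0.5) -> str:
    """Send easy prompts to a small, cheap model; hard ones to a large one."""
    return "small-model" if estimate_difficulty(prompt) < threshold else "large-model"
```

The economic point is that every request answered by the small model is inference capacity (and money) saved, which is why so much engineering effort goes into this kind of triage.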
But that this stuff depreciates extremely fast remains true. And that all that massive growth in volume will not bring us AGI-quality is also pretty clear. Which means that the models have to economically become the 'cheap' option (both in cost and in quality) for mental work, just as factories became the 'cheap' (physical automation) option for physical work at the start of the Industrial Revolution. It is quite uncertain at this stage whether that is economically viable, and at the least it will put pressure on many 'mental artisans' of today (especially creatives). Some really large breakthroughs (e.g. combining with non-digital hardware) are still required, and these are pretty far off.
"Quite uncertain"??? Presuming I've accurately grasped what I read here, I would infer that the likelihood of economic disaster is quite high. Of course, that presumption may be as dubious as presuming the accuracy of AI summaries.
The likelihood of the bubble bursting is very high indeed. A recession seems quite likely as well, but there are in my mind some uncertainties about how deep it will get.
I was talking about the somewhat longer term (say 10-15 years). Suppose this research on combined analog-digital calculations can be scaled up by a factor of 256 from recent research; then we’re looking at a factor-of-100,000 energy drop for this stuff (1,000 times as fast at 1% of the power for the same calculation). No AGI, but the economics of inference will then change dramatically. The current technology I doubt will be economically viable.
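The arithmetic behind that factor of 100,000, reading "1% of energy" as the chip drawing 1% of the power, works out like this:

```python
# Energy per calculation = power x time.
power_factor = 0.01      # chip draws 1% of the power
time_factor = 1 / 1000   # 1000x as fast -> each calculation takes 1/1000 the time
energy_factor = power_factor * time_factor  # fraction of the original energy
energy_drop = round(1 / energy_factor)      # 1 / (0.01 * 0.001) = 100,000
```

So the two improvements multiply: 100 (power) times 1,000 (speed) gives the 100,000-fold drop in energy per calculation.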
It would seem that a recession is certain. Kedrosky says “[Bubbles] tend to be about real estate, or they tend to be about technology, or they tend to be about loose credit, and sometimes they even have a government role with respect to some kind of perverse incentive that was created. This is the first bubble to have all four.”
In other words, to avoid a recession, these companies have to do everything perfectly and have a tremendous amount of luck on top of that.
Agreed. Though we should not forget sequences like the dotcom crash leading to central banks setting very low interest rates in attempts to stimulate the economy, which then led to money fleeing into stocks and real estate, and thus - in combination with MBS silliness like turning junk bonds into AAA - to the 2007 megaboom and 2008 megabust.
Hmm, maybe we should call this magaboom and magabust. Just kidding.
This was a great interview! I listened to Paul Kedrosky's talk with Derek Thompson, so things like the SPVs and "Dutch disease" were not news to me, but I still learned some new things: the idea of those land power companies gobbling up parcels of land just to hoard and flip, à la Chinatown, is insane. We are building in so many layers of complexity and interdependence that it's impossible to know exactly how it will turn out, but it seems the downside risks are a lot more prominent than the potential upside...
I was telling a colleague the other day that using a chat AI tool is like working with a 7-year-old and a yellow lab at the same time. Lots of questions and lots of pleasing.
The yellow lab has a big head but no brains.
And great social skills!
Until that tail wipes everything off the coffee table in one swell foop
So, the GPUs, like ripe bananas, stop being useful?
With all the faith in AGI capturing the future, the obvious winners are gonna be the private shareholders of the monopolistic platforms, 6 or 7 of them total, dominating the market.
These platforms must capture the entire market, winner take all.
Eminent domain of water rights is ALREADY going on, electricity produced from lignite coal, depleted oil reserves, those holding future drilling rights.
None of this has anything to do with Democracy.
All this for future profits for --- whom again? Time to cozy up to those in charge. "Trump? Oh, Trump? Have some gold, but give me WATER RIGHTS."
You can't even make chocolate chip banana bread with GPUs.
GPUs. Pass the "chips."
A Trillion dollars?
Universal healthcare? $200 bn. High-speed train network? $500 bn. Free college for everyone? $250 bn. None of those put any $ in a billionaire's pocket.
Wow. The comments are just as interesting as the initial discussion, not because they are all gems but bc they highlight humans' dismissiveness. For those who got lost: read the transcript vs. watching it, and if it's too long for you, run it thru an AI program for a detailed summary, bc this is a really great piece that many humans need to grok, especially those of us who have been running away from understanding AI out of fear or complexity.
Just don't use Grok!
I try not to use anything Elon has touched, which is impossible, I know, but I was using the word grok as it was intended, not as he is ruining it.
I use AI to pull info on really basic topics, then dive into links and related searches. Mostly just to home in on related words and phrases to use in a more Boolean search.
Somehow, my listening mode shifted from the actual conversation between Paul and Paul to an AI-generated 37-year-old male voice on Reddit 😂
The difference is clicking on the Pauls' picture in the post for the live chat, or the voice-to-text link at the top right-hand side of every post (dark black arrow).
thanks!
I use the text to voice if driving and I want my PK fix.