346 Comments
Mark Stave:

I read this interview with a good deal of dismay. In my job as an attorney representing abused and neglected children in foster care, I spend a great deal of time with low-income families. I do see a great deal of screen time being used to substitute for human interaction - leaving both children and adults with an impoverished set of social skills and negative self-images (comparing their lives to Facebook images, influencers and advertising images). My question is: "How will the 'intangible' GDP capture that negative impact?"

My second big worry stems from the toxic intersection (as I see it) between the promise of AI, human hubris and greed, the recommissioning of oil/gas/coal/nuke plants, and the ever-increasing impacts of climate change. Specifically, we already have, IMO, every solution we need to mitigate climate change - and we aren't using them.

One reason, of course, is the Fossil Fuel wealth funding denial, fake research, greenwashing and geoengineering - all selling the poisonous story that 'We got this, you relax.'

With this new, shiny bauble to distract from the ever-accelerating human death rate from climate-driven disasters and resource wars, I fear the distraction will lead to another decade of doing nothing much - baking in truly catastrophic impacts and the resulting human decimation.

What shall we say to our grandchildren? Whoopsy, we got fascinated by the shiny new computer toys, and justified doing nothing partially with the fantasy that we could machine-intelligence our way out of the zero-sum game that is our planetary ecology?

Lee Peters:

Re your first point (social skills):

Just before reading this newsletter I read a report on the state of young males and the influence of misogynists like Andrew Tate. The discussion of LLMs using digitized and internet-based content does not bode well for the half of humanity that is female, or for men of color. Your point about less screen time, more human interaction is crucial.

Jeff Luth:

There was a documentary on the use of AI in policing and hiring. Amazon tried to AI its hiring to get rid of the biases in its system. Unfortunately the AI machine very quickly unearthed the biases, amplified them, and the whole experiment failed. Basically it created a hyper-Hegseth hiring manager instantly.

Ryan Collay:

There are so many 'hidden' costs to our current unregulated, market-driven use of these tools to addict people. Cat videos are cute, but if you are ignoring your two daughters as you doomscroll… this will hurt our whole community. Put down the ducking phone… you are addicted to a drug that is AI-driven and abusing your children.

Terry O'Neill:

Thank you, Mark Stave, for reassuring me I’m not crazy. I had almost exactly the same concerns as I read this conversation. While these tech/economists are measuring the benefits to consumers of “free” content on our phones & other screens, why don’t they measure the costs to consumers? The appropriation of our personal data is just one, albeit huge, cost which we haven’t figured out how to quantify. Or maybe the economists just aren’t bothering. Is that because our system rewards those who privatize their own gain while socializing their costs?

WinstonSmithLondonOceania:

Wow, you raise some great points here. Thank you. Isn't reality grand?

Ryan Collay:

Yes: artificial reality, video games, computer-generated and market-driven porn, the whole future based on political powers using big data, China and its ‘people’ control.

The impact of cat videos stored forever and all the other digital AI dreck increasing energy use - and for what? Server farms are the biggest energy suck on the grid.

I would suggest we give a shelf life to some of this ill-named “data”… a self-destruct sell-by date, as it were. Or you could buy a hard drive.

Susan Burgess:

Yes Mark, that is what we will have to tell them. Think of all the other modern inventions and occurrences we let slide before this — plastics, insecticides, GMOs, microwave ovens, Teflon, the internal combustion engine, topsoil degradation, lying in advertising, excessive logging, pollution of our air and water, overpopulation, ineffectual education for children, allowing non-living wages. Hiring incompetent teachers who influence our kids, not paying the good teachers enough. Unsustainable trash disposal. Nuclear waste disposal. And on and on. It’s everyone’s fault.

Question: why does a store bought tomato sit unchanged on my counter for one month or more? Is our produce being irradiated?

Matt Fulkerson:

I'd like to see Paul Krugman write on the effectiveness of a carbon fee. Obviously all of these digital wonders don't mean much if we can't rescue the ecosystem that is planet Earth from runaway global warming. Their value will in fact be greatly reduced in the GDP-B measure.

Dennis Allshouse:

Great reality check post!

Thomas Patrick McGrane:

It's worse. Television is blowing up the world.

Steve Schroeter:

Thank you, Steve, for your perspective and comments.

Daniel Weintraub:

Fascinating discussion, Paul. Now can you do another starting where you stopped this one? Most of us might grant that your guest is correct about the transformative potential of AI. But what we really want to know is: how might it affect our lives? Thanks!

Bruce Clark:

Awesome! What I do not see here is an acknowledgment that, while genius inventions are important to advancing our culture (nay, our economics), seemingly disparate technologies can also come together to produce unique advances. As a wide data base enabling that kind of collaboration, that would seem to be AI's most powerful potential.

Mark Wegman:

Try them out and see. You can learn a lot by playing with them. Of course they will get better but they are impressive at the moment. And most of them are free to use.

Mary Pressman:

YES!!!

Rachel Lyons:

this

[Comment removed]

Paul:

Jacob Reiss is a bot. Please disregard.

Shari Lawrence Pfleeger:

Interesting discussion, but it leaves out some important developments that lead to very different conclusions. For example, in suggesting that Amazon dominated Barnes & Noble: yes, in the short run. But B&N has come roaring back, and Amazon has closed most of its brick-and-mortar stores. (Thank you, James Daunt!) And independent bookstores are surging. Amazon Fresh and Amazon Go are failing; consumers don't like interacting only with technology when they shop. Newspaper articles keep describing how to get away from screens and re-enter real life. And no one likes having to deal with chatbots and automated phone lists when a problem arises. Much of the move to all-tech has ignored what social scientists have demonstrated: our need for strong and weak ties - those human connections that give meaning to our lives. You can't focus only on productivity. You need to look at what gives meaning and purpose to each of us.

WinstonSmithLondonOceania:

So true. I might also add that Amazon's initial domination was purely Vulture Capital funded. It was the release of the IPO that transformed it into a reckless monster.

Ryan Collay:

Ah, now there’s an interesting ‘investing’ productivity data point… just how much money have we wasted chasing the next butterfly? Tens of billions? Hundreds of billions? More?

And all in the name of ‘innovation’, most of which fails. And stock markets and values often get it wrong, so the only money made is in the ‘exuberance’ phase; those who get in early and sell fast make money... can you say insider trading? This is a Ponzi scheme… after all.

WinstonSmithLondonOceania:

Probably trillions. And yeah, big time Ponzi scheme.

Ryan Collay:

Thinking Thiel and JD…

WinstonSmithLondonOceania:

Not to mention Andreessen, MuskRat, Bozo Bezos, Zuck, etc.

Ryan Collay:

I grew up in the valley before it was Silicon, now Silli-con... Dave came to my 5th grade class and talked about business, entrepreneurialism, and a tide that floats all boats. Big guy, very nice. What happened? He didn't hate anyone! Except maybe fascists.

Brooks Keogh:

Many stores have been closing automated check-out lines for some years now - people would rather talk to people, even if it's superficial and short - just like I (and I presume others) dialogue with Meals on Wheels meal deliverers - a shallow connection beats no connection. Glad you mentioned weak links, Shari.

mike harper:

One of the joys of using the cashier line is being able to flirt with the beautiful young women, and joke with anyone male or female, young or old in the line or behind the counter.

sell-by:

That's horrible. Do not sexually harass young women trapped behind cash registers trying to make a few bucks an hour. I hope they report you.

Pragmatic Folly:

The scourge of loneliness caused by our hours of interaction daily with screens is certainly something to be considered.

Brooks Keogh:

You sound like a well-balanced man - exactly our point.

Pam Birkenfeld:

And aren’t we happy that Amazon is shrinking! I hope some of it is due to those of us who are using it much less.

Pam Birkenfeld:

Many people are boycotting Amazon. I haven’t completely weaned off of it yet because I’m still using the video part, but in one of the other columns on Substack, someone suggested this website which offers alternatives for buying things other than through Amazon.

goodgoodgood.com

I sent it to a couple of friends, and people have started using some of the suggestions.

Frau Katze:

I’ve read that some people are boycotting it. Don’t know how extensive that is.

sell-by:

Is it? How gratifying!

Urb:

You need to do a "rebuttal" episode with Ed Zitron ("Where's Your Ed At?"), who will tell you that Brynjolfsson is wrong, that the models will not get better as they eat more of their own garbage, and that the whole GenAI thing is basically a blind alley. Erik makes a lot of good observations, but then he moves the goalposts by arguing that "Maybe a lot of us, what we do is super powered auto completion." I spent my career as a journalist and technical writer - right in the crosshairs of LLMs -- and if you think an LLM can replace what I did, then... you have no idea what a technical writer actually does. (Hint: The job is about 5% actual writing.) The Achilles heel of LLMs is that they can't -- cannot, by their very nature -- do anything original in terms of thinking, ideation, and creativity. I've spent a ton of time with these tools and found them useless for anything beyond answering easily-verified questions and generating vanilla prose. Anyway, I strongly recommend bringing Zitron in to drop the hammer on all this nonsense.

Tehanu:

"If you think an LLM can replace what I did, then... you have no idea what a technical writer actually does. (Hint: The job is about 5% actual writing.)"

Hear, hear! I'm a tech writer too, and I tell people all the time: it's 5% grammar & punctuation, 5% tricks like signposting, and the other 90% is being able to think straight and remember that your readers can't read your mind. My experience with AI is that it just quotes the same legal & technical gobbledygook I'm trying to turn into plain language.

Kent Myers:

Maybe you are a great journalist/tech writer, but LLM output is good enough for most people. A recent big 'innovation' in intelligence has been to run all the late-breaking news from around the world through summarizers. (The prompt engineers are now pretty skillful.) There's simply too much of it for live analysts to sift through, and they have shifted over to 'exquisite' work. But that product is not quite so compelling for consumers because it's delayed. It feels like reading last week's newspaper.
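In code terms, the whole pipeline is roughly this (a minimal sketch, not any real system; `ask_llm` is a stand-in for whatever completion API is used, and the instruction wording is invented for illustration):

```python
# One LLM call per incoming news item; batching, retries, and routing omitted.
def ask_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

SUMMARY_INSTRUCTION = (
    "Summarize the following news item in two sentences. "
    "Name the actors, the action, and the date. Do not speculate."
)

def summarize_feed(items: list[str]) -> list[str]:
    # The "prompt engineering" lives almost entirely in SUMMARY_INSTRUCTION.
    return [ask_llm(f"{SUMMARY_INSTRUCTION}\n\n{item}") for item in items]
```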

sell-by:

LLM output is not good enough for most people who're paying attention, and the news summarizers were already tried and disposed of because it turns out that people reading news actually want to know what's up. I'm seeing people getting fired, not hired, and bounced from classrooms because they're relying on ChatGPT. Commercial products based on text prediction are already getting booed down because they're wrong so often, sometimes damagingly. Clear the stars from your eyes.

Urb:

They always love to use summaries as an example of a big "transformational innovation." But come on -- I can get a good free summary of the news just by scanning headlines in the NYT, bbc.co.uk, Semafor, and any number of others. As for summaries in technical writing, executives love to tell tech writers to "throw some AI at the doc to make it better," and usually fall back on summaries as the deliverable -- but a summary of a doc set is useless, when you're trying to find out how to do a task in the product. This is why OpenAI is losing money in buckets, Microsoft is cancelling data center plans, and generally, the GenAI bubble is... not producing results. They're good at writing some code -- which you still have to be a programmer to turn into something usable and secure -- but the killer use cases are just not there.

sell-by:

Actually there is a silver lining to all the crap the LLMs are producing: it might finally break Elsevier and Macmillan's hold on scientific publishing. Journal after journal is being forced to use AIs, and the editorial staffs and scientists are losing their minds because the AIs produce such shit -- to the point that the staff people are quitting, sometimes en masse, and scientists are starting to be willing to trade some impact factor for not being a laughingstock in their own communities and having their reps trashed. It takes an awfully long time for the message to travel to the corporate brain, so maybe it'll take long enough that Elsevier will be a dead man walking by the time it gets there.

GaryF:

Similar for actual software engineering - see my comment from a few minutes ago (a few days late, sorry) if you are curious....

Thomas Formanek:

I love Paul Krugman, but living near Ithaca NY, I have seen too many outings where the vitality all around us is sucked out by esoteric alpha posturing by Ivy Leaguers. Good grief.

That was this article.

The most vital question was an aside at the end: what does AI, etc., mean for how prosperity is shared? Duh!

What DOGE under the Project 2025 DOPE is all about is further consolidation of oligarchic power over the lives of the rest of us: no unions, no sharing power, no more band-aids mitigating economic hardship using government, while both parties since the Clintons accept and anoint the uber-klass!

Red and blue both see government as co-opted by elites, while Gilens and Page have shown that since the 70s government does the work of donors.

Government is supposed to be in control of ensuring widespread democratic political and economic outcomes. That elusive life, liberty, pursuit of happiness thing... remember?

Democrats put band-aids on the resulting hurts while MAGAs rip them off. But the outrage in both parties has become a distracting social war rather than an economic and political one. The enemy is not guns nor CRT. It is a political system run by, for and of the rich, on the cusp of replacing great swaths of human labor.

Thanks for focusing on LLMs and computing models. Next time, please invite a social scientist for focus purposes?

kbaa:

Sure, unmeasured benefits and contributions to the general welfare are important but so are unmeasured losses and increasing angst. Any revision of the GDP should include many different quality of life measurements if it is going to accurately reflect the health of a society. When a country like South Korea suddenly stops having children, or a country like the US twice elects a Donald Trump, there is a big problem somewhere and it would be nice to have a measurement that indicates what is wrong…

sell-by:

And women. These seem almost never to exist in Paul's professional universe.

Ryan Collay:

Yes, and free markets, deregulation, stupid rules, no guardrails for the smart-billionaire fake-it-till-you-make-it crowd… Tesla makes one good car, and there are now ten others made by normal companies that are as good. Missing was the actual innovation of building and installing the early charging stations. Now we know that most rich folks will charge at home… and have two cars, one electric and one hybrid.

WinstonSmithLondonOceania:

Right on!

[Comment removed]

Thomas Formanek:

I am trying to work out whether you are an idiotic minimalist who perceives wisdom in fortune cookie phrases or simply a bot.

In the meantime I have classified you as an "idibot".

Carry on with your inanity by all means.

WinstonSmithLondonOceania:

I think you nailed it. I'm not sure if it's a bot or just a bored adolescent, but I blocked it and reported it. Hopefully Substack will close its account.

Carolyn Meinel:

Me, too. The more reports, the merrier.

Thomas Formanek:

Great choice on "it's".

Dunning-Kruger effect on full display!

Pam Birkenfeld:

A friend found the word "shidiot" to be useful!

[Comment removed]

sell-by:

lol, it's a rare commenter not on Soc Sec here. But go on, keep being awful.

Roger Kellman:

Interesting article. So AI helps us with productivity and growth, and we need a new measure to better account for that. Is that how many iPhones are produced? How cheap can we get one? How many people have an iPhone? How many free hours do I have to spend on my iPhone?

Here are a few of the measures I think we need. How many people are dying in wars? How many people are dying from starvation? How many people struggle to survive in general? How many people commit suicide? How many people suffer depression? How happy are people in general? How many animal species are going extinct? How clean is our water? How clean is our air? You get the idea.

And don't take those measures in Silicon Valley or in Washington DC. Nor California, nor the United States. But the entire world of people and our environment!

Lee Peters:

A lot of us have thought similarly about the inadequacy of GDP for decades, so it’s disheartening elite economists are only now making a concerted effort to develop alternatives.

Erwin Dreessen:

The Genuine Progress Indicator tried to augment GDP as early as 1994. Looking at the GDP-B proposition, its narrow scope is striking: "consumer surplus generated by free digital goods and other nonmarket goods" -- my goodness, the looking-at-your-screen craze has now reached the point that it is thought of as the most important component of people's well-being. There's so much wrong with this idea.

James M. Coyle:

The use and misuse of time, for example. As we get older, we begin to realize what a precious commodity time is. And spending it on screens is not necessarily a good thing. Not all "free digital goods" are useful.

Paul Olmsted:

Economics texts have consistently warned us that GDP measures goods and services - and "bads" like spending more money on sick people with cancer. The bads are counted with the goods. What we need to understand is how to interpret the data. One important idea I always brought up (in my classes) was that macroeconomics is income theory. And money spent (with a few adjustments) becomes money earned. Used this way, the numbers can help us understand how the economy works.

Once you start talking about what it doesn't do very well, you might get confused about some of the things the model is useful for. No doubt some concept of a non-monetary benefit that is difficult to measure with our current statistics is of interest - especially when we consider productivity and the shortcomings associated with that. Obviously - by reading some of the above replies - time spent on phone apps can have positive as well as huge negative impacts.

In my book, gross national welfare (regardless of how different people define it) must not leave society with an even worse distribution of income than we have at present. If AI is going to serve humankind rather than dismantle it even more, the net result needs to reverse the pattern of the rich getting richer and better luck next time for the rest of us.

WinstonSmithLondonOceania:

You have concisely nailed it.

Jay:

My goodness, that was the very definition of informative! I relived certain parts of my life during that conversation recalling the productivity gains I’ve witnessed and experienced! This passage struck a chord with me:

“GPT’s rarely have a big effect right away. It’s usually after they start transforming the way business is done, and companies rethink what they’re doing, that business process changes. And that takes time before it translates into productivity.”

David in Tokyo:

The underlying algorithm in LLMs is exactly and only _random text generation_.

Let that sink in.

Now, allow me to point out that LLMs have no mechanisms _whatsoever_ for relating the text they read, or the text they output, to anything in the real world. Everything any LLM has ever output, and ever will output, has no calculated, created, or reasoned relationship to anything anywhere of any nature (including abstract thought/ideas) whatsoever.

But when you, as a user, read text generated randomly by an LLM, being human, you read that text as ordinary, everyday text that, barring serious mental illness on the part of the speaker, makes sense. But that "sense" was not created by the LLM. LLMs don't do "sense". That sense was created by the reader.

So when an LLM randomly generates some text that "makes sense" (and some other text that doesn't), the user (that's you, sucker) does the work of figuring out which is hallucination and which is coincidentally sensible.

Now, the LLM algorithm is pretty neat. Anyone with a degree in Comp. Sci. (this writer raises a hand) is impressed. But anyone with a degree in AI from Minsky or Roger Schank or the like (this writer has another hand raised) is aghast at the stupidity of the idea that random text generation could be of any utility, or intellectual interest, whatsoever.

Other opinions differ, of course. Me, I'm predicting a horrific crash and burn when people get tired of the silliness.
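To make the point concrete, here is a toy version of the core loop - context in, probability distribution over next tokens out, weighted random draw. All the numbers are invented; no real model has a three-token vocabulary:

```python
import math
import random

def sample_next(logits: dict[str, float], temperature: float = 1.0) -> str:
    # Softmax over the scores, then a weighted random choice.
    scaled = [(tok, score / temperature) for tok, score in logits.items()]
    peak = max(s for _, s in scaled)
    weights = [(tok, math.exp(s - peak)) for tok, s in scaled]
    total = sum(w for _, w in weights)
    r = random.uniform(0, total)
    for tok, w in weights:
        r -= w
        if r <= 0:
            return tok
    return weights[-1][0]

# Pretend scores for continuations of "The capital of France is":
logits = {" Paris": 6.0, " Lyon": 2.0, " blue": 0.5}
print(sample_next(logits, temperature=0.8))  # usually " Paris" - but only usually
```

Nothing in that loop knows what Paris is; the draw just happens to land on sensible text most of the time.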

Chris Savage:

There's a recent article in Science Magazine that says the way to think about LLMs (and AI) is as, in effect, a way of organizing and presenting pre-existing knowledge. E.g., I use Claude, which just added a web search capability, as the equivalent of a conversational and intelligent (casual use of the term) search engine. A lot of my queries are, "Tell me about [x]."

But you are, in my view, correct about a very important point, which is that LLMs do not have any "understanding" in a normal human sense. They don't have a "theory of mind," and as a result, they can't do well tasks and questions that call on that capability.

This comes up a lot in discussions of what types of tasks AIs/LLMs might someday take over. In my estimation the presence or absence of a required "theory of mind" component is likely to be an important differentiator among jobs that survive and jobs that don't.

Lee Peters:

The pre-existing knowledge aspect is thrown into high relief when dealing with a customer service chatbot. If you have an unusual situation with your product, good luck trying to find an answer. Chatbots are limited to the most common questions and problems. A live person can actually think through the situation and get you an answer a lot faster.

Chris Savage:

That's true. On the other hand, when a company doesn't carefully constrain its chatbots, it can get into trouble. In Moffatt v. Air Canada, the Air Canada chatbot told a customer that a discount was available when (according to the official Air Canada policies) it was not. The customer (Moffatt) sued for the discount, and won: MOFFATT V. AIR CANADA, Civil Resolution Tribunal (Canada), 2024 BCCRT 149.

The benefit for companies is that an LLM-driven chatbot will likely be much, much less expensive than a first-level customer service rep. The challenge for companies trying to deploy non-fully-scripted chatbots (LLMs) is that they may say things that create legal liability for the company.

I saw some reference to an early study, though, that said that AI in general doesn't necessarily supplant workers, but instead helps the workers improve their performance -- and more so with the lower-performing workers than the best performers. A live CSR with an AI-driven screen giving him/her prompts and ideas about how to respond to a given customer's inquiry might be the best of both worlds.

Mark Wegman:

You accurately describe the old (OK, 1-2 years ago - the field moves fast) LLMs, which were trained to guess what the next token would be were it in human-generated text. However, the systems can now "reason" about a problem in manifold ways and see if there's a compelling solution in any of the manifold approaches they came up with. This is called Chain of Thought, and using CoT these systems can be trained, for example, to do very well on math Olympiad questions. They may be able to be tested and trained on physics questions. As they get better at this they may sometimes solve things humans can't or don't want to bother solving. Given the originality and vast intellectual resources being spent on the field I wouldn't bet that there will be no originality or intellectual interest coming out.
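As a rough sketch of what CoT plus "self-consistency" looks like in practice (a hypothetical illustration; `ask_llm` stands in for a real model client, and the prompt wording is just an example):

```python
from collections import Counter

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

QUESTION = "A train travels 120 km in 1.5 hours. What is its average speed?"

# The CoT trick is in the instruction: elicit intermediate reasoning
# before the final answer instead of asking for the answer directly.
cot_prompt = (
    f"{QUESTION}\n"
    "Think step by step, then give the final answer on a line "
    "starting with 'Answer:'."
)

def self_consistent_answer(prompt: str, samples: int = 5) -> str:
    # Each sample may reason differently; keep the most common final answer.
    finals = []
    for _ in range(samples):
        for line in ask_llm(prompt).splitlines():
            if line.startswith("Answer:"):
                finals.append(line[len("Answer:"):].strip())
                break
    return Counter(finals).most_common(1)[0][0]
```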

David in Tokyo:

The point that these things don't actually do the reasoning, they just randomly instantiate patterns (from training data), even though they may be getting better at finding patterns that might have been reasoning, hasn't changed.

Sure. They're good at that. But there's still no mechanism for not hallucinating. (By the definition of the term "hallucination" (visions or ideas that are unrelated to actual reality), it's all hallucination, since this whole game doesn't do relations to reality.)

Generating random stuff, even though it's based on (really good and lots of) patterns in the training data, and praying that the user will find something useful/interesting there isn't what anyone should call "intelligence". As I said, opinions vary, but from a philosophical standpoint, this whole game is problematic.

And that inability to "do reality" means that these things are always going to be problematic. Programmers love them and can use them because programmers are used to debugging things. But using an LLM in a commercial/production situation where the output will become a contractual obligation on the part of the user is going to continue to be a bad idea.

Thus my crash and burn prediction stands.

Historically (1970s/1980s), we thought (correctly, IMHO) that human intelligence is based on symbolic and logical processing: that humans think logically about concepts (i.e. symbols) shouldn't be controversial. We thought we were figuring out how to do some of those things in Lisp or Prolog. But human intelligence is humongously wonderful and amazing. And our programs ran off the rails almost instantly. "Some" was nowhere near enough. As the "expert systems" folks rediscovered.

Anyway, this random text generation stuff looks to me to be a distraction from the problem of figuring out what intelligence is and how it works. But doing that requires respecting human intelligence, and it looks to me that our current AI folks don't.

Chris Savage:

Good thoughts. FWIW my take is that what is (at least for now) distinct about human intelligence is (a) the ability to draw and understand novel analogies (LLMs suck at that); (b) a theory of mind, i.e., understanding that other entities have motives and goals, and that those motives and goals will affect the other entity's actions based on the details of a situation; and (c) the ability to update our estimates of the probabilities of different outcomes (including what other teleological entities will do) on the fly, in real time (we're all Bayesians at heart...). Explicit language and logic are, in my view, basically input-output channels for what our brains/selves do automatically. (Note: we could have a long discussion here of Type 1 v. Type 2 thinking in the Kahneman sense, and the importance sometimes of NOT going with our intuition and instead logick-ing things out, but you get my point, I think...)

David in Tokyo:

"Type 1 v. Type 2 thinking in the Kahnemann "

FWIW, although it's a slog of a read, there's a new book, "Anatomy of a Train Wreck" by Ruth Leys, that relates to the stuff you are saying/thinking here. (Truth in advertising: it's a slog and I'm only partway through.) It's essentially the history of the cognitive end of psychology as it walked its way into the reproducibility crisis. (Your use of the word "automatically" tells me you'll love this book: it's a (if not the) major theme of the (subjects of the) book.)

Pragmatic Folly:

I agree, and the "training" on the internet means garbage-in, garbage-out in many instances. AI-generated language has no human meaning at all except what we humans assign to it.

Michael Kaplan:

I'm really bothered by the glib observation that "we've digitized" so much information that now enables machine learning. Much of this "information" is copyrighted work (including obviously Krugman's, as well as my own) that is now being recycled into chatbot-generated pablum. This is not just theft; it eviscerates education and research alike.

Mark Wegman:

Copyrights protect the form of the expression of an idea. They do not protect the idea. If you publish a news article that says, for example, that Trump says he didn't sign some important papers, that doesn't prevent me from saying that there are some critical papers Trump didn't sign. Even if I only knew that by reading your writing. It also doesn't prevent an LLM from doing so. If you come up with a patentable idea you can get protection through the patent system, but that's expensive. We may need a middle ground, but we don't have a legal one. Maybe copyright can apply to LLMs, but I think you want something different, and figuring out exactly what that is, and getting it through our broken Congress, will be very hard.

Michael Kaplan:

I don't even know where to start with this hopelessly confused tangle of erroneous claims. Among other things, it should be pretty obvious that a report about some fact in the world is entirely different from an original idea, which by definition *adds* something new to the world. And even so, when Marisa Kabas broke a story that got picked up by NYT and WaPo, there was a minor scandal about the lack of attribution in those publications, which they had to correct. Neither the idea nor the expression was the decisive factor; the fact that *she did the original reporting* was.

But more to the point: if you produce an article detailing the theory of relativity and try to submit it to a physics journal, you'll have to answer some embarrassing questions, because the idea is Einstein's and it matters not one bit that your explanation has a different form from his. At minimum, the editors will ask you to cite his work and clarify where your particular contribution lies.

LLMs not only freely use the ideas *and expressions* of others without attribution, but actually attribute ideas to people who never had them and people to entirely nonsensical ideas. I have never given permission for my work to be presented without attribution and out of context as part of some random ChatGPT reply, and I certainly do not want to be "credited" for publishing work I never wrote and would not endorse. Since a human author would not be permitted to do any of these things, there's no reason that LLMs should—to say nothing of the fact that neither I nor my publishers have given permission for my work *to be copied* as training fodder for commercial use. That is literally what copyright is for.

[Comment removed]

Tom Hudak:

Not at all. Dr. Brynjolfsson wants to add in unrecognized benefits. He should also add in unrecognized costs.

WinstonSmithLondonOceania:

A few random observations:

Access to "free" music and other digital "products" is all very nice and well, but there's a catch, in fact more than one.

1. If it's "free", >you< are the product.

2. To access all these "free goods", you have to have a computer and internet access (smart phones count), none of which are free.

3. Enjoying "free goods", such as music, doesn't pay the rent or feed a family, not even a "family" of one. So the amount of "equalizing" is barely detectable in real world terms.

4. AI >is< being pushed to replace humans. If you don't believe this, then you're either in denial deeper than the Mariana Trench, or just plain lying.

5. Any and all productivity growth goes straight to the top. All of us below management are bypassed. It does not now, nor has it really ever, nor apparently will it ever, raise all boats.

Like all software, LLMs have bugs. It's cute to call them "hallucinations", but unexpected output, especially of the erroneous variety, has always been called a bug. Why call it anything else?

On energy use: Stephen Hawking predicted that one day the world would glow red hot because of our energy usage.

Speaking of Asimov's Foundation Series, Trantor had gigantic heat diffusers to send excess heat out into space. In my humble opinion, that's a waste of energy. It could and should be recycled, but that's just my perspective.

Amazon didn't do "pretty well". It grew into a monopolistic beast that created one of the richest billionaire oligarchs on the planet (briefly >the< richest). He did this through wage theft, treating humans like robots, and most of all by playing the Wall St. shell game. Remember, he first became a billionaire upon the release of Amazon's IPO. Incidentally, before the IPO, he was able to sustain Amazon's persistent undercutting of the competition through bets placed by Vulture Capitalists.

"And now what I see when I visit companies is that they roll out these systems for coding we were talking about, for call centers, for lots of other things, and they're getting 20, 30, 50% productivity gains in those particular applications."

I'd like to know how big the corresponding layoffs were.

"We're almost obsolete already."

Yes, exactly, thank you.

"...how do we figure out a system for not just having prosperity, but for having shared prosperity?"

Ah, therein lies the rub. The oligarchs controlling all this growth aren't interested in shared prosperity. They're only interested in maximizing their own prosperity, and macroeconomics won't solve that problem.

"I think the default will be that benefits get more and more concentrated and both wealth and power get in the hands of fewer and fewer people. And if we don't want that to happen, we have to be proactive about kind of reinventing our system and our economy in a way that we have the benefits widely shared."

Yes, thank you.

DC Contrarian:

The opposite of a bug is working as designed. Hallucinations are LLMs working as designed.

Dennis Allshouse:

Major dunk! Btw, finally got your handle

[Comment removed]

WinstonSmithLondonOceania:

I don't do business with trolls. Bye bye.

Judy the Lazy Gardener:

I'm beginning to look forward to Jacob Reiss's comments. He's like a magic 8 ball or a fortune cookie. At first I had his name confused with Jacob Riis, muckraking journalist and social reformer. Alas, I don't think that is this Jacob Reiss's profession.

Chris Savage:

Paul,

My first comment here. I teach Internet Law, Privacy Law, and now Artificial Intelligence Law at a couple of law schools here in DC; I also practice law in the communications/privacy/Internet space. Your conversation touches on a number of things I discuss in class, and I wanted to see if you or the group had any reaction.

1. In our pre-Solow understanding, per-capita economic growth (as opposed to more people doing the same stuff leading to proportionately higher overall numbers) was driven by finding new, cheap resources: new forests, new iron deposits, new coal mines, etc. We mined the physical environment for newer, cheaper stuff. Drawing on Stuart Kauffman's ideas about "possibility space," I think of technology- and innovation-driven growth as mining our vast store of ignorance. Just as a newly found seam of coal -- that we didn't know was there before -- adds to wealth, so does a newly found production process or innovation like LLMs. I'm curious what you (and the group) think of the notion of "mining ignorance".

2. In discussing the pattern of development and adoption of new technologies in particular, I refer to Carlota Perez's "Technological Revolutions and Financial Capital," which seems to me to describe the phenomenon you and Erik were talking about: when a hot new tech comes along, folks with money invest in it to a seemingly absurd degree as a form of bet, the way you guys talked about Microsoft and betting on AI. Perez says that (in effect) capitalist FOMO drives bubbles in the tech sector, leading to vast amounts of what turns out to be over-investment during the bubble, because nobody knows which deployments will work and which won't. Then the bubble bursts and economic Darwinism takes effect, leading to bankruptcies, etc. But the places where the new tech works, remain functional and spread through the economy. What do you think of her work?

3. AI is great at generating text. It is (thus far) terrible at tasks that require a theory of mind - assessing other people's intentions in real time. Now, there's some overlap -- an ML online targeted ad system may recognize that I'm more likely to buy something on a Friday night. A theory of mind would say that if I'm at home looking online on a Friday night, I'm feeling lonely and will buy as retail therapy. The ML system doesn't have to know THAT; it just has to know I'm more likely to buy on Friday nights. But for tasks that require understanding intentions -- and how teleological entities modify their actions in real time based on the interaction between intention and environment -- current AIs, whether LLMs or ML systems, are just not up to the task.

Michael Fuchs:

Along the lines of your point 3, even when it comes to coding, LLMs are not really up to the task. Three problems:

(1) understanding the purpose of the code—what the imprecise humans meant when they described what they wanted—requires a theory of mind, common sense, etc. Without AGI (artificial general intelligence, which doesn’t exist yet), LLMs generate code that runs and accomplishes the unintended thing.

(2) Relatedly, debugging is 90% of coding. LLMs don’t diagnose bugs, neither of miscoding nor of misinterpretation.

(3) Finally, the other 90% of coding (yeah, I know) is maintenance over time. Without a theory of mind, you can’t imagine what this old code meant to accomplish for the human users, or what the (non-existent mind) LLM was thinking when it coded things this way instead of that.

mike harper:

#1 reminded me of what my late wife told me about working the phones and the walk-in desk at the IRS. She said you have to be a good listener so you can figure out from the garbled question what the real question is.

Bill Karwin:

A friend of mine who worked at a bookstore said she'd often get customers walking in: "hey, I'm looking for a book... I don't know the title or the author or the subject. It's blue..."

My friend had to guess which book they meant. She'd pick a book with a blue cover from the most recent bestseller list or Oprah book club. That would nearly always be the one the customer was looking for.

I think every field has its share of inarticulate customers.

Chris Savage:

Fair. I'm a lawyer, and I was recently giving a talk about AI taking over legal work. It can certainly do some of the drudgery of saying what cases say about what issues (noting the hallucination problem), but it isn't very good at crafting arguments that will be persuasive to a judge or jury. Being persuasive requires a theory of mind -- you're trying to affect someone else's mental state, and thereby affect their motivations, and thereby affect their actions -- "Rule for me!"

But you're right about coding. Just this morning I was teaching a (make-up) law school class on the CFAA (the anti-hacking statute), and one of the questions was whether exploiting a bug counted as "hacking." One argument is that the "purpose" and "intention" of the code is determined by literally what is written. On that theory, the coder "intended" the bug to be there and it can't be illegal to use the code as written/intended. Of course courts don't buy that: the coder (and/or their organization) clearly has a purpose or intention behind code that goes beyond what the code literally says, so exploiting a bug is not consistent with the "intent" of the code.

So your point that LLMs can't really do full-on coding the way a human can makes total sense to me.

Whit Blauvelt:

Basic Gricean semantics: The meaning of an instance of language depends on knowing something of the intentions of the person who produced it -- which are not themselves fully embedded in the language token. Basically, syntax is not semantics. LLMs only ever do syntax.

It may be the case that a large portion of the modern human population is more engaged in syntax than semantics too. This relates to what in philosophy is known as "the zombie problem." Producing zombie machines is exactly not how to solve it, no matter how much syntax they multiply and matrix.

mike harper:

MMMMMM???? Will AI replace judges and juries???? Brave new world.

Kent Myers:

Most developers are using LLMs as an assist to get more done, more quickly. Are you telling them they should all be purists and gut it out the old-fashioned way? Maybe revert to machine language, because that's what the real pros used to know how to do?

Michael Fuchs:

The programmers I know who do benefit, as you say, from using LLMs don’t depend on them for generating an entire solution for anything. LLMs are useful for coding contained pieces. For example, an interface to a new external service that the programmer would otherwise have to slog through the documentation to figure out how to make into a working bit of code. But that is more about implementing something precisely specified rather than converting human imprecise desires into code.

Carolyn Meinel:

I use WordPress for a website I manage, but as an aging geek I often take a shortcut by editing the HTML. I've sometimes even used a hex editor on a compiled program, and even heard of programmers who edit binaries, but that's beyond me. Will there always be a role for people like us? Is it possible that doing things the ancient ways from time to time might give us a better perspective on generative AIs?

sell-by:

Oh, I get it, you're a software guy, or were. Thing is you guys are always immensely high on your own supply, like the way things work in your field is not only the only thing that matters, but the only thing that is. And I have seldom seen a field more fully devoted to laziness, greed, and hiding from other people while believing fully that everyone else is also lazy, greedy, and wants to hide from other people. Which means that your takes about everything outside your field are usually wildly off.

A great irony in all this is that the jobs AI most readily replaces are in coding, then in other areas of STEM. Largely untouched: jobs that the CS guys and engineers, and they are guys, spent their lives deriding. Why? Because the guys know nothing about them: they belittled them all day long, but never actually bothered to learn about them. When they make a foray, they're so far off base that they're laughed out. And they're that far off because the people in those fields, who do specialist work there, are not often lazy, greedy, or afraid of people. Do the AI guys learn from this? No. But in the meantime, job security through job obscurity.

Maybe the title of this post should've been "The Computing Industry Eats Itself".

Michael Fuchs:

Great job living in a world of stereotypes! I wonder if the comment was written by an LLM, given its verbal fluency combined with lack of psychological depth!

FWIW, I did spend decades in software, but also have advanced degrees in creative writing and international relations, was a senior suit in banking, and have my Equity card as a professional actor. So, I may not be the perfect cliche you imagine.

My time in software does alert me to the hype aspects of the current LLM craze. The honchos like Sam Altman overstate wildly because what other choice do they have? Once they started praising the naked emperor’s imperial clothes, they couldn’t go back and start bragging that they finally got him to put a sock on one foot.

sell-by:

Note that the comment was in response to Kent, not you, but since you mention them, another stereotype: older men named Michael, Andrew, Chris, etc. jumping immediately to the conclusion that anything with a keyword related to them must be primarily about them.

As for your background,

"FWIW, I did spend decades in software,"

...and you really think you drank no koolaid in all that time?

"but also have advanced degrees in creative writing and international relations,"

tell me your birth year without telling me your birth year

"was a senior suit in banking,"

so greed, then

"and have my Equity card as a professional actor."

and easily flattered.

"So, I may not be the perfect cliche you imagine."

Just don't start talking about chefs.

Michael Fuchs:

I must say, that was really funny. Well done!

Carolyn Meinel:

Huh? May I suggest that you check me out on Google Scholar to see whether I am just a "software guy"? Or even a "guy." Or you could view my ranch website https://www.facebook.com/PrairieRoseRanch. People in many lines of work like to fiddle with code, just like many people like to fiddle with old cars, like my 1972 full-bed Ford Ranger. Or enjoy training horses, like the gray Paso Fino mare I'm riding at the top of my ranch page. You can never know who someone is just by guessing.

sell-by:

Response was to Kent, not you, sorry. Everyone in the thread is apparently sociably and confusingly sent updates.

Carolyn Meinel:

Thanks. This thread system is confusing to me :(

Dennis Allshouse:

I see it now making sense for pros who know what they’re doing. For professionals it NEVER hurts to learn another language. Bootcamp coders are toast. (Btw, real programmers write in 1s & 0s - throwback ‘90s joke 🤣)

Parker Dooley:

"There are 10 kinds of people in the world..."

Carolyn Meinel:

LOL

[Comment removed]

Chris Savage:

1. I'm sorry, who are you?

2. Where did you go to school? How many economics degrees do you have?

3. How many Nobel-Prize-winning economists have advised papers you've written?

4. Do you have any publications in any relevant field?

5. What do you do for a living? Does anyone pay you to think about this stuff?

WinstonSmithLondonOceania:

It's a troll. Feel free to ignore it.

Chris Savage:

So I’m gathering… Thanks for confirming…

Eike Pierstorff:

I feel that a conversation about the economics of AI should at least briefly mention the fact that AI companies are bleeding money without any idea how to recover it. Their strategy seems to rest mainly on the hope that they can starve out the competition and then jack up the prices for those companies who have come to rely on AI (so your human-level artificial intellect might not be a lot cheaper than the human it replaces, and you now have vendor lock-in as well). Also, it is at least interesting how fortuitous it was for Nvidia that the AI thing took off just as crypto lost traction and they required another outlet for overpriced GPUs.

This is a bit myopic on my part, or at least very anecdotal, but for my work the technological ingenuity (on which I have zero opinion and don't feel qualified to judge) matters less than the result, and the result is not there. I work in online marketing for a medium-sized company. Our goal is to get as many people as possible to use our services while spending as little money as possible, while the companies who help us do that want to extract as much money from us as feasible while rendering the exact same service to other companies in the exact same business. This is already such a dumb oligopolistic constellation that no amount of AI will improve it, but it also means their use of AI for our advertising does not confer any advantage, and we just pay more for AI-enabled services so as not to fall behind. And that is before they tell us that all this amazing data modeling does not even work, and that if we want results we will have to surrender personal data from all our end users lest they not receive "relevant" advertising, at the peril of our campaign performance. (I am not per se against online advertising, which stands to reason in my profession, but you would not believe the horrendous amount of money that is priced into the stuff you pay for, and that we essentially just pay for fear of missing out.)

What it boils down to, and I wonder if this is true for other industries, is that AI is just one more layer of abstraction added to an industry that would provide better benefits to the world if you peeled several layers of abstraction off instead. Sorry for the essay, I obviously have strong feelings about this.

Tricia L Murphy:

I would really like someone to price out what replacing a human customer service rep with ChatGPT really costs, in electricity, infrastructure to provide the electricity, payouts to the intellectual property owners of the stuff that was used to train the AI, etc. I have a hard time believing it's cheaper. And then you need to factor in that human beings with a salary are money multipliers in their community, while any money multiplication from AI happens wherever the data centers are, and the rest of it is locked up by the AI owners. Just a bad deal all around for pretty much everyone. None of the externalities of this technology are being priced in, which is what has always happened with new technology and always leaves us cleaning up a huge pile of shit afterwards, i.e., see environment and third-world exploitation.

Eike Pierstorff:

Customer service was one of the poster use cases for Klarna (a Swedish fintech that provides payment gateways and some banking services), and they announced they had replaced most human staff (some 2000 people IIRC, although that probably wasn't just service reps) with AI. Less than a year later they did at least a partial reversal, because (as I understood their press release) the quality was not satisfying. AI can do absolutely amazing things IMO, but just saying "let us do the same thing but with AI" might not be the best way to utilize it.

Al Keim:

AI art is to art what Sousa is to music.

Dennis Allshouse:

Whatever. Some people like brass band music.

Ann Johnson:

Dis

Al Keim:

A weak take on an old joke.

[Comment removed]

Chris Savage:

FWIW I think Goldman Sachs put out an advisory making this point: the investment is huge but there are no obvious revenue streams to generate ROI.

Eike Pierstorff:

Sam Altman, for example, who says that OpenAI will continue to lose money for the next few years, who admits that he creates pricing models from gut feelings, and who is offering a "PhD level" AI assistant at the price of four actual PhDs.

Chad Bailey:

I work in the federal government, and I’m trying to get my mind around what AI means for my agency. I won’t go into detail, but I can say that I work for a regulatory agency that has received some attention in the last few months. That said, I feel like Elon Musk and his brand of AI maximalism is probably causing damage. He has said that we don’t need fighter planes anymore, because drones can do the job that a manned fighter plane can do, but aircraft and Air Force experts say that he is way too early. I feel like that type of ideology is also guiding what he is doing in the government.

Al Keim:

Elon predicts many things. He's a lot like Carnac the Magnificent.

Kent Myers:

I am one of those low-life Fed contractors, and I agree that our human labor is not so easily eliminated, or not without serious damage. However, on the matter of fighter aircraft, I think they are obsolete. It's a great shame that we are doing it again with the F-47 "Trumper." We spent a lot of effort getting Ukraine spun up to use fighters, and the planes have been quietly put aside, along with main battle tanks. Meanwhile, there is an incredibly rapid escalation in drone/anti-drone technology. If you insist on including a trigger puller, put him in a darkened room in Nebraska, not in a cockpit where he's truly dead weight.

Carolyn Meinel:

Speed of light is the ultimate constraint on where the human in the loop should be. Sometimes it takes a hair-trigger decision/reaction. Determining the use cases where that applies is the next challenge.

Vefessh:

Long before the current administration, the Pentagon put out a call for designs of an ultra-survivable drone carrier and command vehicle, to hold the trigger pullers despite ever-increasing EW that can cut off satellite communication. (Or, indeed, destruction of the satellites. Because the only treaty against space warfare forbids nukes. Everything else is fair game.)

[Comment removed]

Chad Bailey:

I’m sorry that I don’t understand

Genevieve Guenther:

I feel very frustrated that your interlocutor sort of waved away your concerns about the electricity consumption of data centers for LLMs. If the world doesn't decarbonize now, today's teenagers will see catastrophic levels of heating by the time they're our age. The UK Institute of Actuaries warned recently that, worst case, on current emissions pathways we will see an absolute global GDP reduction of 50% between 2070 and 2090. And it's no accident that your guest cited Solow and Nordhaus as his touchstones, as they are the two economists who have most influentially insisted that growth is exogenous to accounts of environmental externalities. See chps. 2 and 3 of this: https://bookshop.org/p/books/the-language-of-climate-politics-fossil-fuel-propaganda-and-how-to-fight-it-genevieve-guenther/20688378?ean=9780197642238

I know climate change bedevils any discussion we might have about economics or politics now, but tragically there is no way around it. Thankfully there's still time to repair our blind earlier mistakes!

J.C. Snow:

I am really surprised at placing digital consumption as a net benefit of productivity. We all experience a massive drop in personal productivity from the endless scroll (and a corresponding drop in happiness). It doesn't seem to be a benefit at all. I am also surprised that the future loss of many jobs, careers, and artistic pursuits is not coming up as a negative or even discussed. The optimism is bizarre to me. And finally, the complete openness that there is no real humanity-wide benefit to AI, for which we are going to mortgage our climate and inequality levels - nothing like "we'll cure cancer", only "we will streamline business processes and make more money!" - well, again, I guess I appreciate the honesty. And no comment at all on the open theft by Meta, recently revealed by The Atlantic, to feed their LLM's maw? To save money? No mention at all?

Erwin Dreessen:

Precisely. As I say elsewhere in these comments, the narrow scope of Erik Brynjolfsson's GDP-B proposition, in particular, is striking: "consumer surplus generated by free digital goods and other nonmarket goods" -- my goodness, the looking-at-your-screen craze has now reached the point that it is thought of as the most important component of people's well-being.

And, as you say, no word about the costs. Despite the massive data, this amounts to little more than a narrowly-based measure of consumer sentiment.

sell-by:

And a harder steer into bro territory there, Paul. Please fix the gender balance.

Anyway. How should we think about the economics? Easy. We already know that everything associated with AI is to do with horrible techbros with stunted senses of humanity. We also know that a much larger circle of assorted bros, largely with money, finds them fascinating, rather than a case of blighted humanity. So they will go on and have a big money party about AI while damaging people and not caring. Should we care?

No. They're not interesting; they're a misery. There is also nothing you can do about these people. Insulate yourself from them as far as you can in the same way that you insulate yourself from the war bros and the re-enslave-women bros and all the other various crooks out there. And go on about your life.

(I will take this opportunity to remind Paul that there are interesting economic events happening in the world that do not come advertised by journo bros obsessing about other bros. And that if he fixed the gender balance in his conversations and associations, he might hear more about them.)
