
Epistemic Apocalypse and Prediction Markets (Bo Cowgill Pt. 2)

On the uses of language, from information to incantation.

We continue our conversation with Columbia professor Bo Cowgill. We start with a detour through Roman Jakobson’s six functions of language (plus two bonus functions Seth insists on adding: performative and incantatory). Can LLMs handle the referential? The expressive? The poetic? What about magic?

The conversation gets properly technical as we dig into Crawford-Sobel cheap talk models, the collapse of costly signaling, and whether “pay to apply” is the inevitable market response to a world where everyone can produce indistinguishable text. Bo argues we’ll see more referral hiring (your network as the last remaining credible signal), while Andrey is convinced LinkedIn Premium’s limited signals are just the beginning of mechanism design for application markets.

We take a detour into Bo’s earlier life running Google’s internal prediction markets (once the largest known corporate prediction market), why companies still don’t use them for decision-making despite strong forecasting performance, and whether AI agents participating in prediction markets will have correlated errors if they all derive from the same foundation models.

We then discuss whether AI-generated content will create demand for cryptographic proof of authenticity, whether “proof of humanity” protocols can scale, and whether Bo’s 4-year-old daughter’s exposure to AI-generated squirrel videos constitutes evidence of aggregate information loss.

Finally: the superhuman persuasion debate. Andrey clarifies he doesn’t believe in compiler-level brain hacks (sorry, Snow Crash fans), Bo presents survey evidence that 85% of GenAI usage involves content meant for others, and Seth closes with the contrarian hot take that information transmission will actually improve on net. General equilibrium saves us all—assuming a spherical cow.

Topics Covered:

  • Jakobson’s functions of language (all eight of them, apparently)

  • Signaling theory and the pooling equilibrium problem

  • Crawford-Sobel cheap talk games and babbling equilibria

  • “Pay to apply” as incentive-compatible mechanism design

  • Corporate prediction markets and conflicts of interest

  • The ABC conjecture and math as a social enterprise

  • Cryptographic verification and proof of humanity

  • Why live performance and in-person activities may increase in economic value

  • The Coasean singularity

  • Robin Hanson’s “everything is signaling” worldview

Papers & References:

  • Crawford & Sobel (1982), “Strategic Information Transmission”

  • Cowgill & Zitzewitz (2015), “Corporate Prediction Markets: Evidence from Google, Ford, and Firm X”

  • Jakobson (1960), “Linguistics and Poetics”

  • Binet, The Seventh Function of Language

  • Stephenson, Snow Crash


Transcript:

Andrey:
Well, let’s go to speculation mode.

Seth: All right. Speculation mode. I have a proposal that I’m gonna ask you guys to indulge me in as we think about how AI will affect communication in the economy. For my book club, I’ve been recently reading some postmodern fiction. In particular, a book called The Seventh Function of Language.

The book is a reference to Jakobson’s six famous functions of language. He was a semiotician interested in how language functions in society, and he says language functions in six ways. I’m gonna add two bonus ones to that, because of course there are seven functions of language, not just six. Maybe this will be a good framework for us to think about how AI will change different functions of language. All right. Are you ready for me?

Bo Cowgill: Yes.

Seth: Bo’s ready. Okay.

Bo Cowgill: Remember all six when you...

Seth: No, we’re gonna do ‘em one by one. Okay. The first is the Referential or Informational function. This is just: is the language conveying facts about the world or not? Object level first. No Straussian stuff. Just very literally telling you a thing.

When I think about how LLMs will do at this task, we think that LLMs at least have the potential to be more accurate, right? If we’re thinking about cover letters, the LLMs should maybe do a better job at choosing which facts to describe. Clearly there might be an element of choosing which facts to report as being the most relevant, but maybe that belongs to a different function.

If we ask about how LLMs change podcasts? Well, presumably an LLM-based podcast, if the LLM was good enough, would get stuff right more often. I’m sure I make errors. Andrey doesn’t make errors. So restricting attention to this object-level, “is the language conveying the facts it needs to convey,” how do you see LLMs changing communication?

Bo Cowgill: Do I go first?

Seth: Yeah, of course Bo, you’re the guest.

Bo Cowgill: Of course. Sorry, I should’ve known. Well, it sounds like you’re optimistic that it’ll improve. Is that right?

Seth: I think that if we’re talking about hallucinations, those will be increasingly fixed and be a non-issue for things like CVs and resumes in the next couple of years. And then it becomes the question of: would an LLM be less able to correctly report on commonly agreed-upon facts than a human? I don’t know. The couple-years-out LLM, you gotta figure, is gonna be pretty good at reliably reproducing facts that are agreed upon.

Bo Cowgill: Yeah, I see what you mean. So, I’m gonna say “it depends,” but I’ll tell you exactly what I think it depends on. I think in instances where the sender and the receiver are basically playing a zero-sum game, I don’t think that the LLM is gonna help. And arguably, nothing is gonna help. Maybe costly signaling could help, but...

Seth: Sender and the receiver are playing a zero-sum game? If I wanna hire someone, that’s a positive-sum game, I thought.

Andrey: Two senders are playing a zero-sum game.

Seth: Oh, two senders. Yes. Two senders are zero-sum with each other. Okay.

Bo Cowgill: Right. This is another domain-specific answer, but I think that it depends on what game the two parties are playing. Are they trying to coordinate on something? Is it a zero-sum game where they have total opposite objectives? If all costly signaling has been destroyed, then I don’t think that the LLM is gonna help overcome that total separation.

On the other hand, if there’s some alignment between sender and receiver—even in a cheap talk world—we know from the Crawford and Sobel literature that you can have communication happen even without the cost of a signal. I do think that in those Crawford and Sobel games, you have these multiple equilibria ranging from the babbling equilibrium to the much more precise one. And it seems like, if I’m trying to communicate with Seth costlessly, and all costly signal has been destroyed so we only have cheap talk, the LLM could put us on a more communicative equilibrium.

Seth: We could say more if we’re at the level where you trust me. The LLM can tell you more facts than I ever could.

Bo Cowgill: Right. Put us into those finer partitions in the cheap talk literature. At least that’s how I think the potential for it to help would go.
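[Note: for readers who want the model behind this exchange, the following is the standard uniform-quadratic Crawford-Sobel example, a textbook sketch added for reference rather than anything worked out in the episode or in Bo’s paper.]

```latex
% Uniform-quadratic Crawford-Sobel cheap talk (standard textbook version).
% State \theta \sim U[0,1]; sender has bias b > 0; receiver chooses action a.
% Payoffs: receiver -(a-\theta)^2, sender -(a-\theta-b)^2.
\[
  \text{Every equilibrium partitions } [0,1] \text{ into intervals } [t_{i-1},t_i],\quad
  \text{and the receiver plays the interval midpoint } a_i = \tfrac{t_{i-1}+t_i}{2}.
\]
\[
  \text{Boundary types are indifferent between adjacent actions:}\quad
  t_{i+1} - t_i = (t_i - t_{i-1}) + 4b .
\]
\[
  \text{A partition with } N \text{ intervals exists iff } 2N(N-1)\,b < 1 .
\]
% N = 1 is the babbling equilibrium (messages carry no information); smaller bias b
% permits finer partitions, i.e., the more communicative equilibrium Bo mentions.
```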

Andrey: I wanna jump in a little bit because I’m a little bit worried for our listeners if we have to go through eight...

Seth: You’re gonna love these functions, dude. They’re gonna love... this is gonna be the highlight of the episode.

Andrey: I guess rather than having a discussion after every single one, I think it’s just good to list them and then we can talk.

Seth: Okay. That’ll help Bo at least. I don’t know if the audience needs this; the audience is up to date with all the most lame postmodern literature. So for the sake of Bo, though, I’ll give you the six functions plus two bonus functions.

  1. Referential (or Informational): Literal truth.

  2. Expressive (or Emotive): Expressing something about the sender. This is what actually seems to break in your paper: I can’t express that I’m a good worker bee if now everybody can easily express they’re good worker bees.

  3. Conative (or Directive): The rhetorical element. That’s the “I am going to figure out how to flatter you and persuade you,” not necessarily on a factual level. That’s the zero-sum game maybe you were just talking about.

  4. Phatic: This is funny. This is the language used to just maintain communications. So the way I’m thinking about this is if we’re in an automated setting, you know how they have those “dead man’s switches” where it’s like, “If I ever die, my lawyer will send the information to the federal government.” And so you might have a message from your heart being like, “Bo’s alive. Bo’s alive. Bo’s alive.” And then the problem is when the message doesn’t go.

  5. Metalingual (or Metalinguistic): Language to talk about language. You can tell me if you think LLMs have anything to help us with there.

  6. Poetic: Language as beautiful for the sake of language. Maybe LLMs will change how beautiful language is.

  7. Performative: This comes to us from John Searle, who talks about, “I now pronounce you man and wife.” That’s a function of language that is different than conveying information. It’s an act. And maybe LLMs can or can’t do those acts.

  8. Incantatory (Magic): The most important function. Doing magic. You can come back to us about whether or not LLMs are capable of magic.

Okay? So there’s eight functions of language for you. LLMs gonna change language? All right. Take any of them, Bo.

Andrey: Seth, can I reframe the question? I try to be more grounded in what might be empirically falsifiable. We have these ideas that in certain domains—and we can focus on the jobs one—LLMs are going to be writing a lot of the language that was previously written by humans, and presumably by the human who was sending the signal. So how is that going to affect how people find jobs in the future? And how do we think this market is gonna adjust as a result? Do you have any thoughts on that?

Bo Cowgill: Yeah. So I guess the reframing is about how the market as a whole will adjust on both sides?

Andrey: Yes, exactly.

Bo Cowgill: Well, one, we have some survey results about this in the paper. It suggests you would shift towards more costly signals, maybe verifiable things like, “Where did you go to school?”

Andrey: No, but that is easy, right? That already exists, more or less.

Bo Cowgill: That’s true. Yeah, I mean, you could start using these more and start ignoring cover letters and things like this.

One thing somewhat motivated by the discussion of cheap talk a minute ago is that there’d be more referral hiring. This is something that lots of practitioners talk about: we can’t trust the signal anymore, but I can still trust my current employees that worked with this person in the past. It has a theoretical interpretation as well, which is that when all you have is cheap talk, the only communication you can have is maybe between people who are allies in some sense or who share the same objective. This would be why you could learn or communicate through a network-based referral. So I think that’s super interesting and lots of people are already talking about it. It would be cool to try to have an experiment to measure that.

Andrey: What about work trials? Do you think that’s gonna become more common? Anecdotally, I see some of the AI labs doing some of this. If you can’t trust the signals, maybe just give a trial.

Bo Cowgill: Most definitely. The cheap talk idea is not the only one. You could have a variety of contractual solutions to this problem. There was a recent Management Science paper about this: actually charging people to apply, thinking that they have a private signal of whether they can actually do this or not. If they’re gonna get found out, they would be less likely to be willing to part with this money. It’s less of a free lottery ticket just to apply if you’re charging.

Andrey: For what it’s worth, I strongly think that we’re gonna move into the “pay to apply” world.

Bo Cowgill: Oh. That’s interesting. I mean, I think that “pay to apply” is super underrated. Having said that, people have been willing to ignore more obvious good things for longer, so I don’t think it’s as inevitable as it sounds like you do.

Andrey: Well, I think it’s the natural solution to the extent that what the cover letter is doing is signaling your expected match quality. And you have private information about that. I think both Indeed and LinkedIn now have premium plans with costly signals. So it’s not exactly “pay to apply,” but you pay for a subscription that gives you limited signals, which is essentially the same thing.

Bo Cowgill: Makes sense.

Andrey: Yeah. So I think, whether that solves these issues, I’m not sure. It needs to be objective to really do the deed.

Seth: It solves the express... well, which is fine if we think willingness to spend on this thing is more correlated with ability. It’s back to the same signaling model.

Bo Cowgill: I mean this solution also relies on the applicant themselves to know whether they’re a good match in some sense, and some people are just deluded.

Andrey: Yeah. Well also the platform, like in advertising, could be a full auction-type thing.

Bo Cowgill: It could be a scoring auction that has its own objectives and gives people discounts. What Seth says raises a common objection to “pay to apply,” which is: “What about the people who can’t afford it?” And I think a high number of the people who have said that in my life work for an institution that charges people to apply for admission. So you could use some of the same things. You could have fee waivers, and the fee waivers might require a little bit of effort to get.

Another idea I’ve heard is that you could put the money in escrow and then possibly give it back if it doesn’t work out. Or you could actually give it back if it does work out. So yeah, people have different takes on this. But there are various ways to harness “pay to apply” and then deal with the negative aspects of it in other ways.
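[Note: a toy version of the screening logic behind “pay to apply” and the escrow variant Bo describes; this is our illustration with made-up symbols, not the model from the Management Science paper he mentions.]

```latex
% Toy "pay to apply" screening condition (illustrative only).
% An applicant privately knows their match probability p; W is their surplus if hired;
% f is the application fee.
\[
  \text{Flat fee: apply iff } pW - f \ge 0 \iff p \ge \frac{f}{W}.
\]
\[
  \text{Fee held in escrow and refunded on hire: apply iff } pW - (1-p)f \ge 0
  \iff p \ge \frac{f}{W+f}.
\]
% The refund lowers the cutoff (a gentler screen) while still deterring applicants
% who know their match probability is very low.
```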

Seth: So what it seems to solve is this very narrow element of what we call the expressive function of language. So one thing I’m trying to express with my cover letter is, “I’m a good worker bee. I do the things. I have resources. I will bring my resources to your firm.” But we also want the letters to do lots of different things, like be beautiful and tell me a little bit about yourself. Have heterogeneous match quality elements, right? So it seems like this money only helps with one vertical dimension of quality.

Andrey: Actually, when you’re sending that costly signal and you cater your cover letter to that employer, that is about match quality, right? The costly signal, the “pay to apply,” gives you the incentive to reveal that information in your cover letter.

Seth: Right. It’s a “both,” right? It’s not a payment or a cover letter. It’s a both. Good point.

Andrey: We’ve spent a lot of time thinking about the signaling, this information apocalypse—or epistemic apocalypse—that Bo has been calling it. I think one solution to various epistemic issues has been prediction markets. I wanted to ask Bo about his earlier life experiences with those because it’s a very hot topic now, with a lot of prediction markets gaining traction.

Bo Cowgill: Yeah, definitely. We should get back to the GenAI information apocalypse as well and ask: do we think it’s gonna happen? But yeah, it is true that some of my first papers out of grad school were about prediction markets. In my former life I worked at Google, where at one time people had 20% projects. I started an internal prediction market. At the time it was the largest internal prediction market known to exist.

There were around 400 or so different markets where we offered employees the ability to anonymously bet on different corporate performance measures. The two most common ones were: What will the demand for our products be? How many new advertisers, Gmail signups, or 7-day-active-users will we get? And then also, project launch deadlines. Basically, would it be on time or early or late? Not very often early, but sometimes on time.

I had a paper about this in the Review of Economic Studies. It showed that, as in many other cases, the markets performed really well, both in absolute terms and relative to other forecasters at Google. We eventually got other companies’ data to try to do similar things.

I think one interesting thing is that prediction markets have gotten really big externally for things like elections, but you still don’t see a lot of companies using them to guide decision-making.
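[Note: the episode doesn’t describe the mechanism Google’s market used, so for background, here is one common automated market-maker design for small internal prediction markets, Hanson’s logarithmic market scoring rule (LMSR). It is a minimal sketch, not a claim about Google’s implementation.]

```python
import math

class LMSRMarket:
    """Hanson's logarithmic market scoring rule (LMSR): one common automated
    market-maker design for small internal prediction markets."""

    def __init__(self, n_outcomes, b=100.0):
        self.b = b                    # liquidity parameter: higher b = slower-moving prices
        self.q = [0.0] * n_outcomes   # net shares sold so far for each outcome

    def _cost(self, q):
        # Cost function C(q) = b * log(sum_i exp(q_i / b))
        return self.b * math.log(sum(math.exp(qi / self.b) for qi in q))

    def price(self, i):
        # Instantaneous price of outcome i, interpretable as its implied probability
        denom = sum(math.exp(qi / self.b) for qi in self.q)
        return math.exp(self.q[i] / self.b) / denom

    def buy(self, i, shares):
        # A trader buying `shares` of outcome i pays the change in the cost function
        new_q = list(self.q)
        new_q[i] += shares
        payment = self._cost(new_q) - self._cost(self.q)
        self.q = new_q
        return payment

# Example: a two-outcome "will the project launch on time?" market.
m = LMSRMarket(n_outcomes=2)
print(round(m.price(0), 3))      # 0.5 before any trades
print(round(m.buy(0, 30), 2))    # cost of buying 30 "on time" shares
print(round(m.price(0), 3))      # implied probability of "on time" moves up
```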

Andrey: I want to hear your best explanation for why you think the internal prediction markets haven’t taken off.

Bo Cowgill: There are lots of reasons. Our prediction market at Google was really built around having a proof of concept that we can then use to launch our own Kalshi, or our own Polymarket. I think it was a little bit too soon for that. In our case, we weren’t really trying to make it as good of a decision-making tool as possible. Like we wanted to go public and have the election markets be hosted by Google. There were some regulatory barriers I think that Kalshi eventually was able to get past.

The part of the problem I’ve been working on recently is that the prediction market paradigm inside of a company assumes that all the workers have some information about what plan of action would be best, but they otherwise have no preference about what you do with this information. Like, “Should we launch a new product?” The paradigm assumes that they all know something about whether it’s gonna be a successful product, but they sort of don’t care whether you do it or not. Obviously they care. Some of the people with the best information about this new product could have a very strong preference. I heard about this situation in Asia, where the person with the best information on the new product would also probably have their career sabotaged if they launched a competing product. So that could interfere with the incentive compatibility of the market.

Seth: The incentives aren’t high-powered enough.

Bo Cowgill: That’s true. And it’s hard to think about how the incentives would ever be high-powered enough to offset this unless the company proactively designs the market differently to deal with these conflicts of interest.

Seth: I wanna follow up with Andrey’s question. This seems like a really good way to accumulate information, and maybe AI will help us do these better. Is there really an epistemic apocalypse or will prediction markets plus AI predictors save us all?

Bo Cowgill: It’s possible that prediction markets will help in this way just by making the information... it’s essentially a form of a contract. When we talked about various contracts, including “pay to apply” and maybe doing a trial period at a job, all these are contractual ways of making it costly to lie. And that could possibly discipline this sort of thing.

One reason I think that the epistemic apocalypse isn’t going to fully happen is that for cases where there’s an information bottleneck, I think the economy is gonna find a way to get the information it needs so that you can hire someone for a valuable role. There are lots of reasons that buyers want to coordinate on information.

Seth: It’s positive-sum.

Bo Cowgill: Right. So that would be one reason. I think in a lot of cases, the informational bottlenecks will be closed even if you don’t have as good of positive, costly signaling as you used to. But, number one, we could just have to tolerate a lot of mistakes. And that already happens in the hiring setting. So it’s possible that we could have to tolerate even more hiring mistakes because now the signal is actually worse.

Andrey: Bo, why are we hiring anyone? I thought all the jobs would be non-human jobs. Maybe it’ll be a Coasean singularity where we’re all one-person firms.

Seth: Exactly. What is the Coasean singularity? It’s the zero bargaining frictions, and one of the bargaining frictions is information asymmetry. Bo, would it be fair to say then that you’re kind of more optimistic about convergence in sort of public, big-question information—the kinds of stuff that prediction markets are good at at scale—but you’re more pessimistic about Seth trying to send a message to stranger number three?

Bo Cowgill: That is a good distinction. The prediction markets are generally better at forecasts when there’s lots of information that’s dispersed around lots of different actors, and the market kind of aggregates this up.

Seth: And theoretically, a high-quality LLM that has a budget to do training will be a super-forecaster and will be conveying and aggregating this information, right?

Bo Cowgill: That’s true. But when we think about agents participating in prediction markets, a bunch of the theory assumes that everyone receives some independent signal or a signal with some independent noise. Insofar as everyone’s agent derives from the same three or four big labs, then they might not actually be all that independent. And that would be a reason to not think that the markets will save us.

Seth: Only if they’re not independent ‘cause they’re wrong.

Andrey: Well, even if the foundation models are the same, they may be going out to acquire different pieces of information.

Bo Cowgill: That’s true. You also have the temperature in the models that adds some level of randomness to the responses.

Andrey: No, but I literally mean, like, you have these sci-fi novels where you tell the AI to go out and find information, and that’s a costly acquisition process for the LLM. Maybe it has to interview some humans or pay for some data. I think this viewpoint that you’re just taking an identical prompt from some off-the-shelf chatbot and asking, “Hey, what’s the prediction here?” is really not the right way to think about what agent-assisted functions would be doing. Think about hedge funds: they’re all using various machine learning to trade, but it’s not like they’re all doing the same thing, even though I assume that many of the algorithms they’re using are in some sense the same.

Bo Cowgill: I see. So you’re basically more optimistic about prediction markets and AI being a combined thing that would help overcome the apocalypse.

Andrey: Yes.
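[Note: a toy simulation of Bo’s correlated-errors worry. If every agent’s forecast inherits a common error component, say from the same foundation model, then averaging across agents stops helping. The function and parameters below are our own construction, not from any paper discussed here.]

```python
import numpy as np

def market_rmse(n_agents, rho, sigma=1.0, n_trials=20_000, seed=0):
    """RMSE of the simple average of n_agents noisy forecasts of a true value,
    where each forecast error shares a common component with correlation rho
    (e.g., agents built on the same foundation model)."""
    rng = np.random.default_rng(seed)
    theta = 0.0                                               # true value being forecast
    common = rng.normal(0.0, sigma, size=(n_trials, 1))       # error shared by all agents
    idio = rng.normal(0.0, sigma, size=(n_trials, n_agents))  # agent-specific error
    forecasts = theta + np.sqrt(rho) * common + np.sqrt(1.0 - rho) * idio
    market = forecasts.mean(axis=1)                           # naive "market" aggregation
    return float(np.sqrt(np.mean((market - theta) ** 2)))

for rho in [0.0, 0.3, 0.7, 0.95]:
    print(f"rho={rho:.2f}  RMSE of market average: {market_rmse(50, rho):.3f}")
# With rho = 0 the aggregate error shrinks roughly like 1/sqrt(N); as rho rises it
# barely shrinks at all, because every agent inherits the same mistake.
```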

Bo Cowgill: I don’t know. Well, one way in which I guess I’m a little bit more pessimistic is that, in the world that we’re just coming from, I think there is just more reliable, ambient information that you would get just from being in the environment that you could trust.

I think in the old world, you could just trust a photograph. Now it’s true that there were a lot of staged photographs even back in the day...

Andrey: Have you seen the friends of Comrade Stalin?

Bo Cowgill: Totally.

Seth: Losing his friends very quickly.

Bo Cowgill: But it does still feel like... maybe not stuff that you would see in the media, where there were parties that would have some incentive to doctor photos. But if your friend said that they met Tom Brady, they could bust out a picture and show you Tom Brady and you could have more faith in that. Or other smaller-stakes, ambient things that used to be a little bit more trustworthy, and losing that could accumulate.

Seth: That’s the question. Does all of the little small stuff add up to an apocalypse if we’re all still agreeing on the big stuff from the top down?

Andrey: What about reputation? He’s not gonna show you fake photos, come on.

Bo Cowgill: This is true. Well, I mean, if we’re not gonna interact again, then who knows?

Seth: Zero-shot.

Bo Cowgill: You’re a sock puppet, you know?

Seth: Shit. Stay contrary.

Andrey: That’s the twist: this was an AI podcast the entire time. I am a robot.

Bo Cowgill: That’s funny.

Andrey: I mean, reputation is not a bilateral thing only, right? You have reputational signals that you can accumulate, and certainly for media outlets, they could form reputations. That’s kind of the point of media outlets.

Seth: In the future, everyone’s their own media outlet. Everyone’s got their own Substack. Everyone could have an LLM pointed at them saying, “Hey, keep track if Seth and Andrey ever lie or do anything bad on their podcast.” So there’s a sense in which it’s the classic AI attack-defense thing. It makes it easier to make fakes, but it also makes it easier to monitor fakes.

Bo Cowgill: I see what you’re saying. So yeah, this is why I say I think in situations where it’s high-stakes enough to form a contract and do monitoring, we don’t necessarily get these huge amounts of information loss. But a lot of what you learn about the world comes from outside those high-stakes settings.

Actually, here’s a specific example. I have a 4-year-old daughter.

Seth: Cute. Can confirm.

Bo Cowgill: Thank you. So there was a GenAI video of a squirrel eating a piece of candy or something like that. It was GenAI, but it was high-quality, and the squirrel had expressive body language saying how good it was. I would know that that’s not a real squirrel, that they were trying to create a viral video. But she hasn’t really experienced real squirrels yet. So I actually think that she probably thought this was something that could actually happen. Now we’re gonna have a whole generation of people who have probably seen more fake cat videos than actual cat videos. And I just think that will accumulate, not necessarily to an apocalypse, but to some level of aggregate information loss.

Andrey: It’s interesting ‘cause I would think that it’s not the kids who are gonna be affected, but it’s the adults. Think about who are the primary spreaders of mass emails with completely unverified information.

Seth: Even better. And at the end it says, “Please share. Share with everyone.”

Bo Cowgill: Right. I mean, one answer to that is: yes, and/or why not both?

Seth: It’s attack and defense again on the squirrel thing. When I grew up, I had no idea that trees actually looked like these lollipop palm trees that they have here in Southern California. When I was reading Dr. Seuss, I thought those were made-up BS. And then I had to actually go out here to find out.

Bo Cowgill: Stuff you believe. I’m just kidding.

Seth: Fair enough. I guess what I’m trying to say is that, as a child, I was exposed to a lot of media with talking animals and eventually I figured it out. And who knows, maybe your daughter will have access to LLMs and instead of having to wait until she’s 20 to find out, she can ask, “Hey, do squirrels actually thank you and emote in a human-like way?”

Bo Cowgill: Yeah. What do you guys think about the idea that the rise of fake AI will actually create demand for crypto and for things being cryptographically signed as proof of their authenticity?

Andrey: Yes. I think the answer is yes. I’m very interested in ideas such as “proof of humanity.” I think on a practical level, the concepts involved in crypto are just too abstract for most people. So the success will come from essentially someone putting a very nice user interface on it, so people aren’t actually thinking about the crypto part.

Seth: The blocks. I mean, I definitely see a huge role for just this idea of timestamping: this thing went on the blockchain at this date, and if we can’t agree on anything else, at least we can agree on the original photo of Stalin with his four friends.

Andrey: I guess the big question for all of these systems is they’re not that useful until lots of people are on them. It’s a chicken-and-egg problem.

Seth: Really? You don’t think if you got the three big news services on it, wouldn’t that be standard-setting?

Andrey: Yeah. But I view that as a different and a harder ask than the timestamping. I know news organizations can do that themselves. I assume they’re actually already doing it to some extent. And normal human beings would never check. But if there was an investigation, someone could in principle check.

Seth: Well, it comes up all the time in terms of documenting war events. It’s like, “Oh, you said this was a bombing from yesterday, but this is photos from 10 years ago,” right?

Andrey: Yes. And if we had some enlightened CEOs of social media companies, they might facilitate that. It’s not clear that their business interests are actually well-aligned with that. But I think with the proof-of-humanity type stuff, you’re gonna wanna use it when everyone else is using it. Let’s say Meta wanted to verify that everyone on its platform was a unique human being. If everyone has access to proof-of-humanity technology, then that’s very feasible to do. But if only a tiny share of the population is using it, then it’s not a very effective mechanism.
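[Note: a heavily simplified sketch of the “cryptographically signed authenticity” idea: a publisher signs a hash of the original image, and anyone can later verify the signature. It uses Python’s hashlib and the open-source cryptography package; the key handling, file name, and timestamping step are placeholders, not any specific proof-of-humanity or provenance protocol.]

```python
# Heavily simplified "sign the photo, verify later" sketch (illustrative only; the key
# handling, file name, and timestamping step are placeholders, not a real protocol).
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The publisher (say, a news organization) generates a long-lived keypair once.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# At capture/publication time: hash the original bytes and sign the hash.
image_bytes = open("stalin_and_four_friends.jpg", "rb").read()   # placeholder file name
digest = hashlib.sha256(image_bytes).digest()
signature = private_key.sign(digest)
# The (digest, signature, date) record is what you would anchor somewhere public,
# e.g., a timestamping service or a blockchain, so the date can't later be backdated.

# Later, anyone holding the publisher's public key can check an allegedly original file.
candidate = hashlib.sha256(open("stalin_and_four_friends.jpg", "rb").read()).digest()
try:
    public_key.verify(signature, candidate)
    print("Signature valid: these bytes match what the publisher signed.")
except InvalidSignature:
    print("Signature invalid: the file was altered or never signed by this key.")
```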

Seth: What do we think? One thing we haven’t talked a lot about today, and I wanna give us a chance to at least address it in passing, is that it seems like the effect of LLMs on writing has a lot to do with how much LLMs will be doing reading. We’ve already talked in passing about how LLMs prefer the writing of other LLMs; it seems to show up in your study. It makes perfect sense. If you prompt an LLM saying, “Write the best thing,” it should be pretty good at it, right? Because it can just evaluate it itself and iterate.

To what extent is that a problem or a solution? The positive vision is the LLMs are going to be able to convey extremely detailed information and then on the other end, parse extremely detailed information in an efficient way. That’s Andrey’s Coasean singularity. But you might imagine that because now only LLMs are reading, people put less effort into submitting, and that’s the epistemic apocalypse: “Why even try if they prefer a bullshitted GenAI version?”

Bo Cowgill: Yeah, totally. Or I guess in a lot of my own prompts, sometimes I know I don’t have to describe what I’m talking about in very fine detail ‘cause it knows the context of the question and can do it. It does seem like it’s potentially a problem to me, mainly because we should still care about the human-to-AI communication pipeline, and that pipeline might actually need to go in both directions. And so if the LLMs are basically really good at talking to each other, but lose the ability to talk to normal people, then that seems potentially bad for us.

Seth: But there’s one thing LLMs are great at, it’s translating. That’s something I’m optimistic about.

Bo Cowgill: That’s true. Arguably it needs to be trained and/or prompted or rewarded somehow to do that. And maybe the business models of the companies will keep those incentives aligned to actually do this.

Andrey: Well, the models are gonna be scheming against each other, so they wouldn’t wanna tell us what they’re really conspiring to do. One final topic I wanted to get to was superhuman persuasion.

Bo Cowgill: So, Andrey, I think, had this provocative statement at some point that he doesn’t think of persuasion as being a big part of the effects of GenAI. I was surprised by that. I think maybe Andrey is representing a common view out there.

There’s a lot more discussion of the productivity effects of GenAI maybe than the persuasion effects. And I don’t know if at some level, without persuasion... persuasion ultimately is some part of productivity if we’re measuring productivity in some sort of price-weighted way. Because two companies could have the same exact technology, one with a bad sales force, and it might show up as one of them being a zero-productivity company.

Seth: But how much is that zero-sum? I guess the idea there would be that sure, if Coke spends more on advertising, we’ll sell more Coke and less Pepsi. But is that positive-sum GDP or have we just moved around the deck chairs?

Bo Cowgill: In order to get the positive sum, I think you would still need to persuade someone that this is worth buying.

Seth: No, ‘cause it could be negative. You can make Pepsi shitty. You can be like, “Don’t drink Pepsi. It’s shit.” But it’s negative-sum. It’s negative GDP.

Andrey: I just wanna state precisely what I think my claim was, which is: I don’t believe in substantially superhuman persuasion. Which isn’t to say that in jobs that require persuasion, AI can’t be used. It’s just more that I don’t think there’s this super level of like, you talk to the AI and it convinces you to go jump off a bridge.

Seth: Right. So in Snow Crash, it’s posited that there’s a compiler-level language for the human brain that if you can speak in that, you can just control people. Similarly, in The Seventh Function of Language, there’s this idea of a function of language that is just so powerful, you can declare something and it happens.

Andrey: That’s the magic.

Bo Cowgill: Right. Productivity is not that many steps away from persuasion about willingness to pay or willingness to supply. And it does seem like the persuasion aspects of GenAI should be talked about more.

I wanted to bring up this ABC conjecture because I think that there’s a belief that in areas that are very cut and dried, like math, there is no real room for persuasion because something is just either true or not. This story about the ABC conjecture illustrates this.

There’s a Japanese professor of math who studied at Princeton and has all of the credentials to have solved a major conjecture in number theory. He puts forth this 500-page attempted solution of the ABC conjecture. A credible person claiming this is the proof. Unfortunately, his proof is so poorly written, so technical and so badly explained, that no one else has been able to follow the proof.

Seth: Or even put it in a formal proof checker. If they had put it in a formal proof checker, everyone would’ve been satisfied.

Bo Cowgill: Yes. I think that this story is interesting because it highlights that, even in something like math, it’s ultimately a social enterprise where you have to try to convince other human beings that you have come up with something that has some value.

Seth: Wait, people aren’t born with values? Without a marketing company, I would still wanna drink water.

Andrey: That’s actually not true. I mean, isn’t there the whole movement to drink more water?

Bo Cowgill: It’s true that you may have been persuaded just by your parents or your rabbi or whoever. But let’s get to a narrower objection. As part of the motivation for this “cheaper talk” paper, we ran some surveys to try to get a sense of what people do with AI. One of the first questions was, “Think of the recent time that you’ve used GenAI. Were you developing something that you were eventually going to share with other people?” Something like 85-90% were using it on something that they would share directly with other people.

Seth: Really? I’m at like 95% of my usage is just looking stuff up for me.

Bo Cowgill: But were you looking it up and ultimately going to share this as part of a paper or a podcast conversation?

Seth: I mean, only insofar as the Quinean epistemic web of everything in the universe is connected to everything else. So yeah, if I learn about tree care, it could help me write an economics paper.

Andrey: Everything is signaling according to Robin Hanson, right?

Bo Cowgill: Sure. I think it’s fair that if this was not your intent, even two or three steps away, then you shouldn’t say yes in the survey. But anyway, a big majority of people say yes.

Then the next question, for the people who were using it for something that would be shared: “Were you using the GenAI to try to improve the audience’s impression of you?” So come up with your prior.

Seth: Hundred percent. Wait, sorry. So 15% of people use GenAI to make other people feel worse about them?

Bo Cowgill: Well, I assume these people would say that they weren’t trying to make it feel worse. They were just not trying to sort of propaganda the person.

Andrey: And to be clear, these are Prolific participants, so they’re trying to just make sure that their Prolific researchers don’t kick them out of their sample.

Bo Cowgill: Maybe. But most people who I tell these results to are like, “Well, yes, of course. I use GenAI a ton of time to help with writing, to rewrite emails, to explain something in a way that sounds a little bit nicer or smarter.” And it does seem like a very dominant use of GenAI.

If this is the case, then the fact that it’s making it easier to impress people all at once is a super interesting part of the effects. And, I know Andrey has offered his caveat about what he actually meant, but I think that would put this persuasion aspect as more of one of the central things.

Andrey: I agree that what you’re saying is interesting. The claim I was disputing is more the one where people—mostly in the Bay Area—think that super AI is gonna take over the world.

Bo Cowgill: That we’ll just turn people into puppets.

Andrey: Yeah, exactly.

Bo Cowgill: No, fine. I won’t take any more cheap shots at you.

Seth: We can bring up the Anthropic AI index.

Andrey: Well, I was gonna do the ChatGPT usage paper, but you do the AI one first.

Bo Cowgill: Of course, one of the major things that the ChatGPT usage paper says is writing.

Seth: Which, interestingly, showed up in GDPval: ChatGPT seems a little bit better at writing, and Claude seems a little bit better at coding, and it seems to show up in usage also.

Bo Cowgill: But they should break down writing. The question that this raises is: who is the writing for? And why aren’t you writing yourself? And are you possibly trying to signal something about yourself by having this clear writing?

Andrey: But I guess I truly do think, like Robin Hanson, that a vast majority of what humans do, period, is signaling to others.

Seth: Is that your claim, Bo? Or is your claim that AI is gonna make it worse?

Bo Cowgill: I’m not as Robin Hanson on “everything is signaling,” but I would just claim that this should be a more front-and-center thing that people think about with regards to the effects of the tech.

Seth: Listen. If you wanna be an economist, you gotta tell us what to study less. You can’t tell us to study everything more. What are we gonna do less of?

Bo Cowgill: I mean, I guess the easy thing would be to say human-AI replacement just because there’s so many studies on that right now.

Andrey: The productivity effects of this one deployment of a chatbot in this one company.

Bo Cowgill: Oh, yes. I can totally get on board with complaining about that.

Seth: Bo, help me get beyond it. This is what you need to do for me. People are gonna do what you said and write that paper on signal quality in one population. What’s the meta-paper? How can we get beyond that into a more comprehensive view of what’s going on? What’s your vision for research in this direction?

Bo Cowgill: Part of this goes back to the question about just what are general equilibrium effects overall? If people all become more persuasive all at once, then this totally destroys the quality of information.

Another question is, how much do the AI labs themselves actually have an incentive to build positive-covariance technology or negative-covariance technology? If part of the value of a camera is that you could take pictures and then show people and be like, “Look, this is real, this is a costly signal,” then you might actually want to keep the covariance of your technology somewhat high because this will be one use case that people would actually want.

Andrey: This is a very interesting, broader question. I was at a dinner with a few AI folks and we were talking about the responsibility of the AI labs to do academic research. We don’t expect the company that creates a tool to create the solutions to all of the unintended consequences of that tool. That to me is a very strange expectation. It seems impossible, and we don’t expect that from any other company.

Bo Cowgill: Definitely. But just to put a finer point on what I’m talking about: suppose that the covariance is so negative that you’re just getting a lot of signal jamming, to the point where now there’s just less demand for writing in general. Even if there’s still some demand, that lower demand for writing could feed back into the underlying demand for the LLM product itself, because the product was supposed to help you write better, but now no one trusts the writing. And there could be something financially self-defeating about having this technology that is negative.

Seth: It would be general equilibrium self-defeating. Individually, we’d all wanna defect and use it.

Andrey: Even if one company tried to [fix it], the solution by the market is: if you really care that a human wrote this, the market will create a technology where we verify that the human is literally typing the thing as it’s happening.

Personally, I think that live performance and in-person activities in general are gonna rise up in economic value because they’re naturally... I do think humans care about interacting with other humans. We care that other humans are creating speech, art, and so on.

Seth: So those are the expressive functions of language. That’s the phatic function of, “Hey, look, I’m still alive, Grandma.” That’s the poetic function. And LLMs can’t... we don’t think they can do this performative function. It’ll be interesting to see whether AIs get enough rights to be able to make binding contracts on our behalf.

Andrey: There’s gonna be a ubiquitous monitoring technology, and every time I declare bankruptcy, it will enact.

Seth: It’ll immediately get locked in.

If I can just share my wrapping-up thoughts. I come away a little, not as scared as Bo about this epistemic apocalypse. He has scared me. But I come away thinking that it’s fundamentally kind of partial equilibrium to say, “Hey, look, we used to send signals this way. There’s a new technology that comes along. Now that signal isn’t coming through as well.” To me, that doesn’t mean communication is impossible. Now I just get to: “Okay, what’s the next evolution of the communication? Are we gonna have LLM readers? Are we gonna have verified human communication?” There seem to be solutions.

Bo Cowgill: It’s probably a little bit of an exaggeration of what I was saying to characterize it that way. But I did say that Andrey said that persuasion wasn’t important, so maybe I’m owed some exaggeration back.

Seth: Fair enough. If you put a gun to my head, I would say that information transmission will get better on net because of AI.

Andrey: What a hot take to end this.

Seth: That’s my hot take.

Andrey: You don’t hear anyone saying that. That is fun.

Seth: Who would’ve thought that the greatest information technology product of all time might actually give us more useful information?

Andrey: No, no, no. You’re only allowed to be pessimistic, Seth. That’s the rules of the game.

Bo Cowgill: So Seth, do you think this is mainly because people will be able to substitute away from other things?

Seth: It’s partially that. I think what you’re identifying in this paper is definitely important. But it does seem like this is transitional and that more fundamentally, LLMs help us say more and help us hear more. And so I think once the institutional details are worked out—and of course that’s a lot of assuming a spherical cow—there will be better information in the long run.

Andrey: There are even entrepreneurial activities that one could undertake to try to address some of the concerns raised by this paper. We oftentimes take this very observer perspective on the world, but certainly we could also, if we think that a solution is useful, do something about that.

Seth: Right. We will sell human verification. We will verify you are a human. If you pay us a thousand dollars, we will give you a one-minute spot on this podcast where we will confirm you are human.

So Bo, I guess we’re just a little bit different on this. What do you think?

Bo Cowgill: Well, I do agree that the paper was proof of concept and partial equilibrium, and what happens in the general equilibrium... we’ll just have to figure out in future episodes of Justified Posteriors.

Andrey: Yeah. Well, thanks so much, Bo, for being a great guest.

Seth: And Bo, both you, everybody else, keep your posteriors justified.
