Balancing Act with John Katko
Artificial Intelligence
Episode 109 | 26m 46s | Video has Closed Captions
John Katko finds the balance in conversations about artificial intelligence.
John Katko is joined by Ryan Cummings from the Stanford Institute for Economic Policymaking to learn about artificial intelligence. In the Trapeze, we'll hear from New York Assemblymember Alex Bores and Adam Thierer from the R Street Institute about AI regulation.
Balancing Act with John Katko is a local public television program presented by WCNY
How to Watch Balancing Act with John Katko
Balancing Act with John Katko is available to stream on pbs.org and the free PBS App, available on iPhone, Apple TV, Android TV, Android smartphones, Amazon Fire TV, Amazon Fire Tablet, Roku, Samsung Smart TV, and Vizio.
♪ ♪ This program is brought to you by the members of WCNY.
Thank you.
♪ ♪ ♪ ♪ JOHN KATKO: Welcome to "Balancing Act", the show that aims to tame the political circus of two-party politics.
I'm John Katko.
This week: Artificial Intelligence, better known as AI, and its effect on us and the economy.
Giving his input in the center ring is Ryan Cummings, Chief of Staff of Research at the Stanford Institute for Economic Policymaking.
Then, the R Street Institute's Adam Thierer and New York Assembly member Alex Bores discuss its ramifications on the trapeze.
Then I'll give you my take, and we'll speak with Bloomberg's Nancy Cook about what's happening next week in Washington.
But first, it's time to walk the tightrope.
♪ ♪ If the letters AI make you feel like OMG, you're not alone.
But what exactly is artificial intelligence?
To find out, we asked AI, which answered: "The science of creating machines and software that can perform tasks that normally require human intelligence, like recognizing speech, understanding language, learning from experience, and making decisions."
AI is hot now, but it wasn't created overnight.
In 1950, mathematician Alan Turing asked, "Can machines think?"
And in 1955, John McCarthy, at Dartmouth, coined the phrase "artificial intelligence."
Now let's jump to 2012, when researchers at the University of Toronto built a neural network that recognized images better than anything before, launching deep learning, or computers teaching themselves from mountains of data.
Now, let's go to 2017, when Google scientists introduced the transformer model, which allowed AI to understand language and not just memorize it.
That became the backbone of systems like ChatGPT and Gemini.
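For readers curious what that transformer step actually does, here is a toy sketch of its core "attention" operation, in which each word weighs every other word to build context. It is a minimal illustration in Python with NumPy; the sizes and random weights are made up, and real systems like ChatGPT stack many such layers with billions of learned parameters.

```python
# Toy sketch of transformer self-attention (illustrative numbers only).
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # subtract max for numerical stability
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Each word (row of X) looks at every other word and blends
    their meanings, weighted by how relevant each one is to it."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv            # queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])      # relevance of every word to every other word
    return softmax(scores) @ V                   # context-weighted blend of the values

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                      # 4 "words", 8-dimensional embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)       # (4, 8): one context-aware vector per word
```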
ChatGPT itself went public only three years ago, reaching 100 million users in just two months - the fastest tech adoption in history.
So, what sparked the boom?
Smarter algorithms, massive computing power, and oceans of online data training systems to sound and reason more like us.
But not everyone is okay with AI.
A Quinnipiac poll found 83% of Americans worry AI could weaken the next generation's ability to think for themselves.
A Gallup survey reported three-quarters believe AI will cut jobs over the next decade, and nine out of ten think it's at least somewhat likely to be used in a cyberattack.
Also troubling, AI trading bots now handle up to 70% of all U.S. stock trades.
In the past, the government regulated industries like railroads, oil, communications, and banking as they developed.
But as AI moves at the speed of code, it's global, invisible, and constantly learning without human direction, making it very difficult to rein in.
Washington is trying.
Recent White House orders called for safety testing, transparency labels, and stricter privacy protections.
The NIST AI Risk Management Framework now guides both government and industry.
And this year, the Trump administration unveiled America's AI Action Plan, focused on speed, infrastructure, and global competition, aiming to make the U.S. the world leader in AI innovation.
Clearly, AI is here to stay.
So the real question is, can human intelligence keep up with artificial intelligence?
For the answer, let's enter the center ring.
♪ ♪ KATKO: Joining me in the center ring is Chief of Staff of Research at the Stanford Institute for Economic Policymaking, Ryan Cummings.
Welcome, Ryan. Right into it: briefly, what's a good summary of what AI actually is?
CUMMINGS: First of all, thank you very much for having me.
So, in short, AI, which stands for artificial intelligence, is broadly, as we know it, a technology that uses these things called large language models, which help automate a lot of different kinds of tasks.
KATKO: Okay, and so there is a lot of discussion around the country about the pros and cons.
Why don't you tell us what we should be optimistic about and what we should be concerned about regarding this rapid growth of this technology?
CUMMINGS: As I just said, it automates a lot of different tasks, whether that's computer programming or just something as simple as summarizing a document.
So on the upside, what we should really look for is an increase in productivity.
So that means, for example, if you're a computer programmer, this will allow you to write a lot more code than you previously did because you have an AI agent that's sitting there and helping you debug or something like this.
Similarly, if you're perhaps a lawyer and you're looking to summarize different kinds of court cases and need help doing background research, that should enable you to do that more efficiently.
So we're kind of looking for people being able to do more work with their time, which should help firms and governments and everybody else have more profitability or enable more service for constituents or something like this.
So that's on the upside.
On the downside, what people worry about broadly is displacement of jobs.
So, for example, on the labor side, well, if the software engineer can now do a lot more using AI, then maybe you don't need as many software engineers.
And it's not just these high-prestige jobs like software engineering that are at risk.
It's also things like, for example, individuals that work at call centers.
So an AI agent is very good at that.
Instead of navigating through a phone tree, you can have a customized AI agent that talks back and forth with you.
So these are the two kind of big things that people are excited about and worried about, probably.
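To make the task-automation idea concrete, here is a minimal sketch of the document-summarizing example Cummings describes. It assumes the OpenAI Python SDK with an API key set in the environment; the model name, prompt, and file name are illustrative placeholders rather than any particular firm's setup.

```python
# Minimal sketch of LLM task automation: summarizing a document.
# Assumes `pip install openai` and OPENAI_API_KEY in the environment.
# Model name, prompt, and file name below are hypothetical examples.
from openai import OpenAI

def summarize(document: str) -> str:
    client = OpenAI()  # reads the API key from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice; any chat-capable model would do
        messages=[
            {"role": "system",
             "content": "Summarize the user's document in three sentences."},
            {"role": "user", "content": document},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    with open("court_case.txt") as f:  # hypothetical input file
        print(summarize(f.read()))
```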
KATKO: To drill down a little bit deeper: over the last couple of days, we've seen a lot of announcements about major companies cutting their workforces tremendously because of AI.
Is that something we think is going to continue to happen going forward?
CUMMINGS: So right now, the data isn't particularly clear about how big an impact AI is having on the labor market.
So, as you mentioned, some firms are saying, well, we're finding out we have a lot more productive workers when they're using AI, so we don't need as many.
But there's a lot of industries still where it's not clear that AI is immediately going to help you be more productive.
For example, with agriculture, you need people to go pick strawberries and cultivate fields, and maybe some of the machines can be used a little bit better, but it's not going to have that immediate of an impact right away.
So the jury's still out, but if it is a revolutionary technology, one thing we know about these productivity-increasing technologies throughout history, whether that's the internet or the electrification of the factory floor or going all the way back to railroads in the 19th century, is that these technologies always do displace some people.
Who's going to be affected and how is uncertain, but we should be pretty confident that in the future there is going to be some displacement.
So we need to start thinking about the policy architecture now to assist those who are displaced.
KATKO: So let's say I'm a bad guy in the world, a bad actor, and I'm going to use AI.
What are some ways that there's real concerns that AI could be used for nefarious means?
CUMMINGS: So, in kind of the most basic sense, there's a lot of scams that are occurring with AI right now.
For example, somebody pretends to be your grandma, or alternatively your grandson, and is chatting with you.
And instead of having a human on the other end of the line halfway across the world, now it can even just be a robot.
Alternatively, if you think about something even more severe, a lot of folks that I talk to in the cybersecurity industry worry about this quite a bit, because with AI, instead of having an individual human sitting there trying to hack into someone's network, it can just do that for them and look for vulnerabilities.
So just like it offers a computer programmer at Google the ability to, for example, make improvements on Gmail much faster, it also allows a nefarious hacker to be able to exploit vulnerabilities in networks much faster, too.
KATKO: So it seems like with AI that if you have a company that's being traded on exchanges or any company at all, if it has anything to do with artificial intelligence, it seems to have a meteoric rise in value.
There is a boom here going on for sure.
Can this boom be characterized as a speculative bubble?
And what does it mean if it is?
CUMMINGS: So this is something my co-author, Jared Bernstein, and I wrote about in a New York Times piece last week, which is that generally a bubble is where expectations for future profits well exceed what many would consider reasonable.
So right now, what we see in the markets, as you mentioned, is that anybody who mentions they're doing AI is getting a little bit of this magic fairy dust and having their valuation uplifted.
Now with that being said, for example, the last time we had a speculative technology bubble in 2000, there were firms that were able to harness the power of the internet and make trillions of dollars if you think about the Googles and Amazons of the world.
However, there were a lot of firms like Pets.com, which said, "Oh, we're going to use this internet technology to make a lot of money," and then went bankrupt nine months after they IPO'd.
So right now, from our perspective, if you look at a variety of different metrics in the financial markets, it does look like the amount of profits that markets are expecting AI to deliver across the market is a little too high for what we're actually seeing in terms of actual revenues and profits being generated, and adoption across the economy as well.
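As a back-of-the-envelope illustration of the gap Cummings describes between expected and actual profits, with hypothetical round numbers rather than figures for any real company, one can ask what yearly profit a price implies at an ordinary earnings multiple:

```python
# Back-of-the-envelope bubble check with hypothetical numbers:
# given a market value and a "normal" price-to-earnings multiple,
# what annual profit is the market implicitly betting on?
def implied_annual_profit(market_cap: float, fair_pe: float) -> float:
    """Yearly profit needed to justify the price at an ordinary multiple."""
    return market_cap / fair_pe

market_cap = 1_000e9    # hypothetical: a $1 trillion valuation
fair_pe = 20            # a historically ordinary earnings multiple
current_profit = 10e9   # hypothetical: $10 billion of actual annual profit today

needed = implied_annual_profit(market_cap, fair_pe)
print(f"Profit the price implies: ${needed / 1e9:.0f}B per year")    # $50B
print(f"Gap versus today's profit: {needed / current_profit:.0f}x")  # 5x
```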
KATKO: So we have potential for misuse.
It's growing very quickly, and overvaluation is a real issue.
What's the regulatory environment right now and what should it be?
CUMMINGS: So right now, it's pretty lightly regulated.
The Trump administration has made clear that they're not going to, you know, put their hands into what the AI companies are doing, and they're not going to try and disrupt their innovation.
Now, obviously, that's good in the sense that it enables companies to move fast, but it's bad in the sense of all these things that we talked about.
For example, people are using these AIs to try and hack into someone's system or try and scam people.
So a regulatory infrastructure should enforce on these companies an obligation to think about safety, to make sure people aren't using the tools for these nefarious purposes that we talked about, but also it doesn't hamstring them in their ability to deliver new kinds of uses for AI.
For example, it might be really helpful to have an AI help you file your taxes.
That's a good thing that we should enable these companies to learn how to do and ship these products.
But we should not enable them to just say, "You can do whatever you want and use it for whatever purpose that's going to make money," even if it's at the expense of different kinds of vulnerable groups.
Those groups, which are particularly exposed - if you think about the elderly or potentially lower-income groups - need to be protected in the usage of AI.
KATKO: We have about 30 seconds left, and quickly, the regulatory environment surrounding public trading of AI - is that where it should be?
CUMMINGS: Yeah, so in terms of how people are trading on AI news, they should definitely be, you know, made aware of companies' actual revenues.
So, for example, hiding that you're not doing particularly well and trying to sweep that under some AI rug - saying a loss is actually an AI investment - that should be closely tracked to make sure that shareholders and the broader public are not being taken advantage of.
KATKO: All right, Ryan Cummings, thanks so much for your great insight.
Now, let's take to the trapeze.
♪ ♪ KATKO: On the trapeze this week are New York Assembly member Alex Bores and Senior Fellow on the R Street Institute's Technology and Innovation Team, Adam Thierer.
Welcome, gentlemen, and Adam, we'll get right at it.
What are your concerns and optimisms about AI?
THIERER: Well, I'm excited about where artificial intelligence can take our nation and our civilization.
We're talking about the most important general-purpose technology of our lifetime.
It is affecting every single segment of our economy and our culture.
It offers us a potential to have a true knowledge explosion and an accompanying innovation explosion alongside of it.
This can improve human health and well-being along multiple dimensions and, more importantly, have profound geopolitical, strategic significance for our nation as we compete against China and others for technological supremacy across the world.
BORES: I agree.
I'm equally bullish on the capabilities of AI and really excited for what it's going to do to cure diseases, to create economic opportunity.
But because I'm so excited about its capabilities, I also worry about misuse and what can go wrong if we don't have guardrails on the downside of it.
So I think government has a strong role in protecting us from the worst-case scenario so that we can get all the benefits of the best case.
KATKO: Adam, do you have any concerns?
THIERER: So of course, there's lots of concerns about artificial intelligence as there are with any new emerging technology.
It takes time for society to adjust, but we have to be very careful with sort of precautionary principle-based regulations that would essentially freeze the technology upfront based upon hypothetical fears.
What we instead want to do is embrace the real opportunity of innovation in this space and many others, but that requires some risk-taking and an understanding that we need to find a more flexible approach to dealing with concerns and risks that do arise.
KATKO: Now, Assemblyman, you hear from the titans in the industry like Musk and Andreessen and the others, and Sam Altman.
And they all seem to have different takes on what the regulatory scheme should look like.
And, boy, do they vary dramatically between the three.
So just curious, from your standpoint, from a regulatory standpoint, you know, obviously, I think you think regulations are needed.
What should they look like?
BORES: Where I believe Adam and I agree is that most of that regulation should be at the federal level, and most of it should be on specific use cases.
I think where we disagree is that that can't be everything.
If the federal government doesn't act, states have a role in protecting their citizens.
And beyond that, there are certain really horrific outcomes that, if they happen, we can't come back afterwards and just sue who's responsible, right?
If you think about a technology as important as AI, that even Adam says could involve national security - generally, when you think about national security, you don't say, "Wait till the bad thing happens."
You look at hypothetical harms, you look at real probable harms, you take action ahead of time to protect Americans.
KATKO: So what should that look like in your mind?
BORES: Yeah, I think that requires the major labs having safety plans that they make public.
I think that requires disclosure of critical safety incidents - things that show a huge increase in the risk of harms occurring.
And I think they bear some responsibility to not release models that their own tests show have an unreasonable risk of critical harm.
We don't want them to be like tobacco companies that knew cigarettes caused cancer but denied it publicly and kept releasing their products.
KATKO: Adam?
THIERER: Well, if we have that kind of a framework - and I don't disagree with all of the specifics that the Assembly member outlined - the reality is, if we have that sort of a framework on a patchwork basis in New York and California and Illinois and Colorado, then we're going to face a real problem as a nation trying to create a coherent national framework that everybody can actually live up to.
The reality is that regulation like that has costs and consequences, especially for smaller and mid-size players that I think we all agree we want more of - the sort of little tech companies that would bring fresh faces into this sector.
We're not going to get them if they're stuck dealing with a mountain of compliance costs on a state-by-state basis.
With over 1,100 bills pending in the United States, and the vast majority of them being state and local enactments, we're going to run into a real set of roadblocks for that sort of innovation and competition to happen.
This really needs to be more of a national framework.
KATKO: Oh, if it is a national framework, Adam, what does it look like in your mind?
THIERER: Well, if it's a national framework, it can actually incorporate some of what we just heard.
It can have certain transparency mechanisms and certain maybe whistleblower protections.
There's a variety of so-called light-touch regulations.
But let's be clear, there's a whole bunch of regulation that already applies to artificial intelligence, machine learning, and robotics in this country.
You know, just because you use AI or a robot to do something doesn't mean you get a get-out-of-jail-free card from things like consumer rights and civil rights protections and a variety of other types of regulations already on the books, especially in sensitive sectors, whether it's things like transportation and aviation, medicine, finance.
We already have mountains of regulation, and the Biden administration, when they were in office, pointed out quite clearly and correctly that all of that applies already to AI, machine learning, and robotics.
So the question is, why not allow those sorts of things to run their course and work their magic?
KATKO: Assemblyman, your thoughts?
BORES: Well, I think largely that's accurate, and also, AI is special.
That was our intro to this conversation about how different it is and, in particular, how quickly it's moving - faster than any technology before.
And any time there's new technology that's different, you sometimes require new regulation.
We didn't need driver's licenses until the invention of the car, right?
There are going to be specific aspects when it comes to AI, when we think about its impact on the labor force, when we think about its impact on cybersecurity attacks, or the creation of bioweapons, that government better take a look at ahead of time and require specific action.
KATKO: Now, do you agree with Adam that there is a concern with respect to this patchwork quilt of regulations coming out of the states, and that we need really more robust federal oversight?
BORES: I agree in principle, if not in the specifics.
So for example, the bill in New York is written in such a way that it only applies if you are spending more than $100 million on training frontier models.
By our best approximation, that applies to four companies right now: Google, OpenAI, Anthropic, and xAI.
I think by next year it'll apply to Amazon and DeepSeek as well.
It is a very small set of companies that we're asking to take action.
And lobbyists for the industry put in a memo that said the compliance costs in that New York bill would require one full-time employee.
So, no, I don't think requiring one additional employee is drowning startups in regulation.
KATKO: Adam, does regulation in this field stunt the growth of the AI enterprise?
THIERER: Well, it very clearly could if done improperly, and if we had that sort of a patchwork, then it's not just the kind of bill that the Assembly member is sponsoring that's out there.
There are all sorts of different bills having to do with, you know, hiring practices and various other types of siting rules, permitting rules for AI, new licensing schemes - just layer upon layer of regulation.
And at some point, it just becomes an incoherent mess.
And right now, we have a pretty good model at the federal level, but we need to obviously supplement it with some additional rules, and Congress is getting around to that.
But let's be clear.
Right now, because of the kind of light touch we're taking, this has had enormous benefits for the U.S. economy.
We're on the way to a $400 billion digital infrastructure spend this year on data centers and AI buildout.
We're talking about this being responsible for something like 92% of GDP growth this year - essentially a massive private sector-led stimulus, thanks to the fact that we haven't gone and taken the European approach of preemptively overregulating based on hypothetical fears.
We need a more flexible approach; it needs to be primarily federal.
KATKO: So we've got about a minute left, so it's kind of like the lightning round here. It's something we need to touch on, something you just alluded to, Adam: is there a lot of overvaluation going on in these companies?
Because it sure seems to me a bit like the dotcom boom and then bust in the early 2000s.
So Alex, you first.
BORES: Yes, but that doesn't mean don't invest in it or think that that's the future.
The dotcom boom still led to an internet-driven economy a few years later.
Even if AI is overvalued now, it will be the future and should be focused on.
THIERER: Yeah, I agree with that wholeheartedly.
The reality is, even if there is a bit of a bubble burst, we've had a lot of great investment and buildout of infrastructure that can be utilized for many, many purposes.
This benefits the United States and our economy.
So, you know, so what if there's a little bit of a bubble burst?
But I don't think it actually will.
I think we're actually going to see more growth in the coming years, so long as we get public policy right on this front.
KATKO: Alex Bores, Adam Thierer, thanks, gentlemen, for a great conversation.
Now it's time for my take.
♪ ♪ KATKO: Artificial intelligence represents transformational change that is both exciting and terrifying - not to mention a playground for speculative and sometimes irresponsible stock trading valuations.
It's been developed with very little, if any, guardrails.
It's almost like designing a breathtaking automobile that can reach astronomical speeds but leaving out every single safety feature.
It's high time state and federal regulators do the necessary deep dive into the potential pitfalls of artificial intelligence and fashion protective measures accordingly.
It's also long past due for stock trading regulators to take a sober, realistic look at the true financial health of the seemingly endless AI companies out there.
This feels a lot like the dotcom bubble that spectacularly burst in the early 2000s, devastating the stock market and exposing just how overvalued many of those dotcom companies actually were.
I'm all for designing the fastest and coolest car on the road, but I'd also like it to have brakes and airbags.
And that's my take.
♪ ♪ KATKO: Now, let's look at what's happening next week in Washington.
Here's Bloomberg's Nancy Cook.
Hello, Nancy, and tell us what's on your mind this week in Washington.
COOK: Thanks so much for having me, John.
What is happening in Washington next week-or actually, it's around the country-is that there are three really key elections that we're watching.
These are really the elections that are happening in 2025, and they'll offer both parties some key takeaways heading into the 2026 midterms about where the party stands and where the power is.
There are two key governor's races in New Jersey and Virginia that I'll be watching.
The New Jersey governor's race, in particular, has been really close.
And there are a lot of issues there where Americans and New Jersey residents are very mad about utility bills and affordability issues.
So that's really a race centered around an economic message.
In Virginia, Abigail Spanberger is running against Winsome Earle-Sears.
That has been a race that has definitely tightened in the last few weeks.
And then, of course, there's the New York mayoral race, where it looks like Mamdani is going to win.
And that, of course, presents its own issues for Democrats because he really has a socialist message, whereas many people in the party want the Democrats to move towards a more centrist message - something that appeals to a wider swath of voters.
But what I will be looking for are the takeaways that both Democrats and Republicans can gather from just this handful of races in an off-election year.
KATKO: Yeah, it just seems like the easy one is the New York City mayoral race.
If Mamdani wins like everyone expects, it could portend a headache for Governor Hochul's reelection chances next year because she was an early supporter of him.
So that'll be interesting.
The more complicated ones seem to be the governor's races in Jersey and in Virginia.
What's the potential referendum issues there?
Also, talking about California and redistricting as well.
What's going on out there?
COOK: Well, in California, there's a redistricting issue that's really been put forward by its governor, Gavin Newsom.
You know, he is likely to run for president in 2028.
And so this is a key way that he is trying to prove to the party that he is a major player by pushing forward on that.
It does seem like that is likely to pass, and that will be a huge feather in his cap and also set him up for, again, the 2028 cycle.
But I think what is tricky is that there's not necessarily going to be a key kind of through-line with these races where Democrats can say, "This is where we're headed.
These are what the most effective messages are for voters.
This is what our party leadership should look like."
You know, we're going to have a bunch of different victories.
Gavin Newsom is not much different than Mamdani in New York.
They are much different than the Democratic candidates in New Jersey and Virginia.
And so, you know, everyone's going to be looking for, like, what can we understand about both parties heading into 2026?
And I think it will be a muddied picture.
KATKO: Nancy Cook from Bloomberg News, thanks so much.
That's all for this week.
To send in your comments for the show or see "Balancing Act" extras and exclusives, follow us on social media or go to BalancingActWithJohnKatko.com.
Thank you for joining us.
And remember, in the circus that is politics, there is always a "Balancing Act".
I'm John Katko.
We'll see you next time, America.
