Politico has an opinion piece today on nationalizing AI. I don’t agree with its conclusion, but it contains some important tidbits about AI that far too often go unexplained.
Some highlights
I am going to stay out of most of the politics here; that can wait for another post.
The Black Box
That’s the thing about AI: Not even the engineers who build this stuff know exactly how it works.
To be clear: we know how we build it, but we don’t know how it works.
…certain aspects of today’s thinking machines are beyond anyone’s understanding… There’s an element of uncertainty — even unknowability — in AI’s most powerful applications. This uncertainty grows as AIs get faster, smarter and more interconnected… They solve problems in ways that boggle human experts.
AI models are trained on thousands to millions of examples. The training process itself is well understood, but what “knowledge” the model ends up encoding is not.
It’s like human learning: we know how to teach and how to study, but we can’t peer into a physical brain and see how thoughts work. AI is much the same.
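To make that concrete, here is a minimal, illustrative sketch (my example, not from the article): a tiny model learns the logical AND function. Every line of the training procedure is known and auditable, yet the learned parameters are just numbers that say nothing human-readable about what was learned.

```python
# Illustrative sketch: the training loop is fully specified,
# but the learned parameters are opaque.
import math

# Toy dataset: learn the logical AND function.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]

w = [0.0, 0.0]  # weights
b = 0.0         # bias

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x):
    return sigmoid(w[0] * x[0] + w[1] * x[1] + b)

# The training procedure (stochastic gradient descent) is
# completely known and auditable...
for _ in range(5000):
    for x, target in zip(X, y):
        err = predict(x) - target
        w[0] -= 0.5 * err * x[0]
        w[1] -= 0.5 * err * x[1]
        b -= 0.5 * err

# ...but the result is just a bag of numbers. Nothing about these
# raw values says "AND" in any human-readable way.
print("weights:", w, "bias:", b)
print("predictions:", [round(predict(x)) for x in X])
```

Scale the same opacity up from three parameters to billions, and you have the black box the article is describing.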
AI Is Everywhere
It isn’t a thing of the future, even if ChatGPT seemed to break out of nowhere. Existing uses:
- Facial recognition
- Radiology
- Driving assistance
“Electrification”
This is my favorite analogy of how AI is affecting people:
We live in the era of mass AI electrification, except this time the electricity itself keeps evolving.
There is much about AI we don’t know, but AI experts do agree on one thing: The pace of AI’s disruption of society will never be this slow again.
Regulation Won’t Work
A third option is regulation of AI by current agencies of the U.S. government. As a West Coast techie who has worked extensively in D.C., my first thought is: Good luck with that.
The author doesn’t go into details on why it won’t work, but here are a handful of ideas.
- The R&D is not geographically constrained. You can regulate usage, but not creation.
- A lot of it is open source: large portions of AI are already freely available for download.
- …And editing. The generative image model Stable Diffusion has hundreds of user-generated models that work on top of the original.
- A working definition will be impossible.
- When Siri recognizes your voice, is that an appropriate AI use?
- What about when my non-cloud photo catalog recognizes photos of my kids?
- What about programs that write up short articles on sports and financial releases?
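The last example shows why a working definition is so hard. Here is a hypothetical sketch of a “sports article writer” that contains no machine learning at all, just a fill-in-the-blank template; any regulation that defines AI by what it produces would have to decide whether code like this counts.

```python
# Hypothetical example: an "article writer" with zero machine learning.
# It just fills a template from structured game data.

def write_recap(home, away, home_score, away_score):
    winner, loser = (home, away) if home_score > away_score else (away, home)
    margin = abs(home_score - away_score)
    verb = "edged" if margin <= 3 else "defeated"  # simple rule, not learning
    return (f"{winner} {verb} {loser} "
            f"{max(home_score, away_score)}-{min(home_score, away_score)} "
            f"on Saturday.")

print(write_recap("Riverdale", "Hillcrest", 24, 21))
```

The output reads like a machine-written article, but it is indistinguishable in kind from a mail-merge letter. Drawing a legal line between this and a generative model is exactly the definitional problem above.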
And I’m not an “all regulation is bad” kind of guy! It’s just technically too late to regulate AI creation in any meaningful way.
If you have 5 minutes, it is worth the read today.