Infosec guru Schneier warns corporate AI will manipulate us
Bruce Schneier, a well-known infosec expert, has raised alarms about the integrity of AI models developed by corporations, suggesting that they’re often biased and tailored to manipulate users. During a recent keynote at the RSA Conference, he emphasised the risk of AI being used for commercial manipulation, reflecting a broader concern about industry influence over technology.
Key Points
- Schneier warns that corporate AI models are skewed to favour their creators’ interests.
- He compares AI recommendation systems to biased search engines that manipulate users.
- The EU AI Act is highlighted as a positive step towards transparency and regulation in AI development.
- Governments and academia are encouraged to create non-corporate AI models to counterbalance commercial ones.
- The French initiative ‘Current AI’ aims to build transparent AI infrastructure funded by public-private partnerships.
Why should I read this?
If you’re interested in the future of AI and its implications for society, this article is worth your time. Schneier’s insights challenge us to rethink how AI is developed and regulated, which matters as these systems become ever more integrated into daily life. Understanding who shapes an AI model’s incentives will help you navigate the evolving tech landscape more effectively.