We’re entering an era of deepfakes, where determining the authenticity of audio and video becomes extremely difficult. Did a politician actually say what they were supposedly recorded saying? Can we believe it even when we see their lips move in a video?
How much of the public even cares about the truth? Has objectivity ever been the dominant factor for informing public opinion?
“The villain in the drama of communications is the present condition of society itself, and especially the political and economic ideas which rule modern society.”
That statement is from a book published in 1934. The book is Mobilizing for Chaos: The Story of the New Propaganda, written by O.W. Riegel and published by Yale University Press.
Riegel died in 1997 at the age of 94. He was a reporter, a journalism professor, and co-founder of the academic journal Public Opinion Quarterly.
His life spanned the 20th century. In our 21st-century lives, we are seeing many parallels to what came before, though in different forms.
Communications networks have evolved from the telegraph and radio to an always-connected presence on the network, where we are constantly seduced and provoked by opinions disguised as news.
Propaganda, a term popularized in the first half of the last century, is most often applied to enemy nations and their slanted promotion of nationalistic views.
Twenty-first-century nationalism fragments into internal discourse, conveniently labeled left and right, or conservative and progressive, to use loaded terminology.
People proudly proclaim that they speak truth to power. But whose truth are they speaking?
We were warned in 1934 by Riegel:
“the world is moving rapidly into an era of universal obstruction of the free flow of information and opinion.”
Ninety years later, we find ourselves on the other side of that warning: not an obstruction of information but a deluge of it, along with everybody’s opinions.
The situation is worsening. Social scientists have methods and tools for studying public opinion, but scholarship does not make governmental policy. The public at the ballot box is not swayed by research studies. Politicians, corporations, and the media all have their agendas.
Weaponized AI is inevitable
The AI threat does not arise from robots wielding destruction, like scenes from a movie.
The AI threat is a deluge of multimedia content generated with ease.
Generative AI has extended image-making techniques in ways that were inconceivable only recently. The controversies over AI art are nothing compared to the upheaval society will experience with generative audio and video.
These tools will result in enormous productivity, enhancing value and providing satisfying experiences for most people. The same tools hold the potential for irresponsible use. Responsible AI is an emerging field, and professional careers are being established that focus on using AI ethically. Responsible AI is NOT only a philosophical exercise in discerning best practices for judging how and when to use AI. Responsible AI also guides the implementation of the software and APIs that enable widespread use of AI.
The capabilities of AI are already more advanced than what we see in applications today. Researchers are busy releasing papers and advancing the state of the art in machine learning.
A challenging question for society is whether AI should be controlled by a handful of large corporations. The reality, though, is that AI cannot be controlled easily, either by corporations or by governments. Ultimately, that’s a good thing for society. But with the freedom of AI come dangers, as with any freedom.
Governments will regulate aspects of AI. Corporations will hold back their most advanced algorithms and AI models for competitive advantage. And an open source community around AI has emerged. Open source allows everyone to view the code, understand the model, and inspect the data set. Open source does not mean non-profit; many companies have built very successful businesses around open source software.
Open source software and openly released models and datasets play an important role in transparency and accountability in the development and use of machine learning.
With open source, the code and data are publicly available under a license that typically allows free use, modification, and distribution. This means that anyone can access the code, examine it, and use it to develop new applications or to recommend improvements to the existing code.
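As a minimal sketch of what that transparency looks like in practice, the snippet below assumes the Hugging Face transformers and datasets libraries; the model (gpt2) and dataset (ag_news) names are illustrative examples chosen for this sketch, not anything discussed in this essay. The point is simply that anyone can download an openly published model and dataset and inspect them directly.

```python
# A minimal sketch of open source transparency in practice.
# Assumes the Hugging Face `transformers` and `datasets` libraries are installed;
# the model and dataset names below are illustrative examples only.
from transformers import AutoConfig, AutoModelForCausalLM
from datasets import load_dataset

# Anyone can download an openly licensed model and read its configuration.
config = AutoConfig.from_pretrained("gpt2")
print(config)  # architecture, layer counts, vocabulary size, etc.

model = AutoModelForCausalLM.from_pretrained("gpt2")
print(f"Parameters: {model.num_parameters():,}")

# Public datasets can be examined record by record.
dataset = load_dataset("ag_news", split="train")
print(dataset.features)  # schema of the data
print(dataset[0])        # an individual training example
```

Nothing here requires permission from the model’s creators; the inspection is possible only because the code, weights, and data were released openly.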
Of course, AI will be misused. But AI is already in the hands of those who aim to deceive. We cannot stop those with the financial means, whether individuals or nation-states, who intend to promote their own viewpoints for whatever reason.
In 1934, Riegel stated that the “only practical defense is exposure and counter-attack.” In 2024, however, it’s not clear that the public cares about objectivity.
Misinformation, enhanced and expanded through AI, is not the primary threat in this century. The threat rests in the fervent attachment to our own viewpoints without any dialog with opposing perspectives.
Overall, I’m extremely optimistic about the benefits of AI. At the same time, we must all be vigilant and skeptical about the content that we encounter.