I’m worried about the dangers of social media, AI, deep fakes, metaverse privacy, security, misrepresentation, and fraud. So much so that I’ll be spending much of my time in 2023 studying, talking about, and possibly even litigating these issues.
Digital media is addictive
Digital media has become integral to our daily lives, but its highly addictive nature has raised concerns. As Anderson Cooper explored in his 60 Minutes piece, Silicon Valley is engineering your phone, apps, and social media to get you hooked.
The Netflix documentary “The Social Dilemma” examines how social media’s design nurtures addiction to maximize profit and its ability to manipulate people’s views, emotions, and behavior and spread conspiracy theories and disinformation.
Now add AI and the metaverse into the mix
My concern is that when you add artificial intelligence and the metaverse into the mix, these addictive and manipulative dynamics will only become more powerful and dangerous.
AI will analyze, evaluate, and then serve content to your devices and experiences that your brain wants to see 24/7. You won’t know what’s coming, but the AI will.
Metaverse VR headsets and experiences will add millions of new data points to company databases every 20 minutes (facial expressions, reactions, time spent, and points of interest in virtual spaces and environments). This new information will be used and sold as desired (see the TOS agreements).
Once this data is served into the different algorithms, the providers will control what you see, think, and do.
Deep fake audio and video
Distinguishing between what’s real and what’s fake will eventually become impossible for most consumers.
Adobe already has audio technology called Adobe VoCo that allows anyone to sound exactly like someone else. Work on Adobe VoCo was suspended due to ethical concerns, but dozens of other companies are perfecting the technology, with some offering alternatives today. Take a look, or listen, for yourself.
Deep fake images and videos are getting better and better. Sometimes, it's impossible to tell the fakes from the real thing.
Without safeguards, we're only three to five years away from this technology being used to manipulate the financial markets or even threaten nuclear war. Decentralized web3 blockchain platforms may make it impossible to identify the wrongdoers and to remove deep fake content from the Internet and other platforms.
Private data and security
Don’t even get me started on the current and future challenges relating to companies that use and sell our private data. The new factors above will increase the amount and kind of data recorded by 1000X and make that data much more valuable to the highest bidder.
Current internet and data security systems will become inadequate.
In the near future, AI will easily be able to breach most security systems without too much effort. Codes and recognition systems (passwords, fingerprints, face, voice) will be duplicated and used for improper purposes.
Corporate espionage and blackmail, which are already happening but often unreported, will increase in frequency, scope, and size.
Tangible and digital assets, including cryptocurrency held in once relatively safe digital wallets, will be exposed to bad actors, including foreign governments.
This conversation is essential
Please understand that I’m not trying to put fear into anyone reading this post. I’m a big advocate of using this technology to improve our personal and professional lives and, frankly, make the world a better place.
However, at the same time, I want everyone to appreciate the following:
If people are easily manipulated by today’s incomplete, misleading, conspiracy-driven fake news on television and the Internet, imagine what will happen in the near future when AI and deep fake technology are tossed onto the misinformation fire.
When you add the additional elements of AI, deep fakes, and millions of new data points to the mass media persuasion engine, the fake news will become much worse and, in all likelihood, indistinguishable from the actual facts. When this happens, humanity will be in serious trouble.
Questions for your consideration and feedback
So what can we do to keep everyone safe?
Extensive regulation is necessary. Any piece of manipulated or fake content should be required to prominently display a rating or warning, much like the movie industry’s ratings (G, PG, R, and X) or the music industry’s Parental Advisory label. Maybe something like “Digitally Altered” or “DA.”
What steps can we take to separate authentic and accurate content from fake and misleading content?
National and international civil and criminal laws and penalties must be established and enforced.
What can we do to educate society about these issues moving forward?
Conversation is always good. I’d like to see an AI service that allows us to input a URL or actual content to determine if a digital file has been altered from the original.
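At a basic level, the alteration-check idea can be sketched with a cryptographic hash comparison, assuming the publisher of the original file has released a known-good fingerprint of it (content-provenance efforts like C2PA build on a similar principle, with signatures and edit histories attached to the file). The function name and sample data below are purely illustrative:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 fingerprint of a file's raw bytes."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical example: the publisher releases the hash of the original file.
original = b"original video bytes"
published_hash = fingerprint(original)

# Later, anyone can check whether a copy they received matches the original.
faithful_copy = b"original video bytes"
altered_copy = b"deep-faked video bytes"

print(fingerprint(faithful_copy) == published_hash)  # unmodified copy matches
print(fingerprint(altered_copy) == published_hash)   # any alteration changes the hash
```

A hash only proves a file differs from a specific original; detecting that a file was AI-generated in the first place is a much harder, unsolved problem, which is why a verification service would also need provenance metadata, not just fingerprints.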
What do you think? What suggestions do you have moving forward?
What did I miss? What would you add to this conversation?
P.S. The video at the top of this post was AI-generated through a service I use called Synthesia, and much of this written post was created for me using OpenAI’s new ChatGPT.
⚖️ I’m a lawyer with three decades of experience who enjoys helping others.
🔦 I shine a bright light on others by pinning and sharing their content.
🔔 Please follow me to be notified when I post and go live.
💡 Each day, I try to add value to our community.