The Darkest Sides of AI
It’s official: AI is everywhere now, present in almost every industry. While there are plenty of uses for AI that we enjoy, there is also a dark side that gets far less attention.
While AI does make mundane computer tasks easier, it also accelerates hacking, bot activity, and other malicious uses. Keep reading to learn about the dark side of AI.
Data Security Concerns
The biggest concern with AI is its ability to help attackers break into sensitive systems and websites. AI can already be used in a variety of malicious ways to breach databases and steal data.
One of the clearest examples is an AI-driven password-cracking bot, which can try millions of password combinations in just a few minutes, a task that previously would have taken a hacker hours, if not days. And if the attacker trains the bot to work personal information (scraped from a Facebook page, for example) into its guessing process, it becomes more dangerous still.
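To see why speed matters, here is a rough back-of-the-envelope sketch in Python. The guess rates are illustrative assumptions, not measured benchmarks, but they show how a wordlist seeded with personal details collapses the attack time from millennia to under a second.

```python
# Rough arithmetic only; the guess rates below are illustrative assumptions.
ALPHABET = 62          # a-z, A-Z, 0-9
LENGTH = 8
search_space = ALPHABET ** LENGTH   # ~2.2e14 candidate passwords

for label, guesses_per_sec in [
    ("1 guess/sec (a human at a login form)", 1.0),
    ("1 billion guesses/sec (an automated cracking rig)", 1e9),
]:
    days = search_space / guesses_per_sec / 86400
    print(f"{label}: {days:,.1f} days to exhaust every 8-character password")

# A wordlist seeded with scraped personal details (pet names, birthdays,
# favorite teams) might hold only a few million plausible candidates:
print(f"5 million targeted guesses at 1e9/sec: {5e6 / 1e9:.3f} seconds")
```

The lesson: the raw size of the search space barely matters once an attacker can narrow the candidate list using your personal details.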
AI can also be designed to impersonate a person. As generative models improve, deepfakes have become a major concern in industries around the world, especially when they are combined with social engineering scams used to steal money and cryptocurrency from unsuspecting victims.
Researchers have also begun to worry about generative AI models being turned against high-level security systems. An attack tool that adapts each time it is blocked from entering a system makes brute-force attacks far more worthwhile for hackers equipped with the right AI.
Not only that, but not all AI software comes from reputable companies, and even legitimate AI can be compromised: an attacker who gains access to a deployed model could alter its behavior from within systems that are already trusted. This is a particular concern for AI assistants built directly into major platforms, such as Meta AI. Just think about what someone could do after compromising an assistant with access to the personal data of everyone who has a Meta profile.
Data Accuracy Concerns
If that weren’t enough, many scientists have begun to worry about the accuracy of the data AI puts out. Models rely on enormous amounts of input data, and cybercriminals could deliberately feed in large volumes of false data to skew the output, a technique known as data poisoning.
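Here is a deliberately tiny, fabricated illustration of the idea: a system that acts on an average of user-submitted ratings can be flipped by flooding it with fake entries.

```python
# A toy illustration of data poisoning (all numbers are made up):
# a system that recommends a product when its average rating is high
# can be flipped by flooding the input with fabricated ratings.

genuine_ratings = [4.5, 4.0, 5.0, 4.5, 4.0]   # real users
fake_ratings = [1.0] * 50                     # injected by an attacker

def recommend(ratings, threshold=3.5):
    """Return (should_recommend, average_rating)."""
    avg = sum(ratings) / len(ratings)
    return avg >= threshold, avg

print(recommend(genuine_ratings))                 # (True, 4.4)
print(recommend(genuine_ratings + fake_ratings))  # (False, ~1.31)
```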
While this may not affect the average person who is asking ChatGPT how to make sourdough starter, it is a national security concern as AI could be used for propaganda, spreading misinformation, and more.
Major concerns have also been raised in the medical industry: AI could increase the accuracy of medical diagnoses, but an improperly trained model could just as easily increase misdiagnoses.
Lack of Transparency
Many AI programs aren’t very transparent. How they are built and trained is kept private, often for security or commercial reasons, but this also means outsiders cannot identify or correct bias.
This matters all the more when you consider that racial discrimination was legal in the United States until the civil rights legislation of the 1960s, and a vast amount of historical data around the world was produced under openly discriminatory systems. Outlawing discrimination did not scrub that bias from the records now being fed to AI.
You may be asking why this matters. Many companies now use AI to screen applicant resumes, and last names have long been associated with particular ethnic groups in the US, African-American and Mexican-American surnames among them. If a screening model carries any racial bias from its training data, it could reject qualified applicants on the basis of a last name alone, as the toy example below illustrates.
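Here is a minimal, entirely fabricated sketch of the mechanism. If historical hiring decisions were biased and a naive model learns from them with the surname as its only feature, the model reproduces the bias exactly.

```python
# A toy sketch of how a naive screening model inherits bias.
# The "historical decisions" below are fabricated for illustration:
# surname_b applicants were rejected more often for reasons
# unrelated to qualifications.

from collections import defaultdict

historical = [
    ("surname_a", True), ("surname_a", True), ("surname_a", False),
    ("surname_b", False), ("surname_b", False), ("surname_b", True),
]

# "Training": acceptance rate per surname, the only feature the model sees.
totals, accepts = defaultdict(int), defaultdict(int)
for surname, hired in historical:
    totals[surname] += 1
    accepts[surname] += hired  # True counts as 1, False as 0

def score(surname: str) -> float:
    return accepts[surname] / totals[surname]

# Two equally qualified applicants get different scores from name alone.
print(score("surname_a"))  # ~0.67
print(score("surname_b"))  # ~0.33
```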
Increased Plagiarism
In the media and publishing industries, AI has become a major concern in terms of plagiarism. When data is fed into a model, it can be rehashed and served back as an answer, regardless of the data’s copyright status.
This is especially an issue in visual art, where AI-generated images have already surfaced containing artists’ signatures. Plagiarism of written media is harder to prove, but it has happened.
Even when data is labeled as copyrighted and off-limits for training, nothing actively prevents a model from learning from it or pulling from it if the system finds the information on its own (in a Google search result, for example). It is a mounting problem with no clear solution.
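The closest thing to an opt-out today is robots.txt, which asks crawlers (OpenAI’s GPTBot among them) to stay away. A quick sketch using Python’s standard library shows how the signal works:

```python
# robots.txt is the main opt-out signal sites have today. GPTBot is
# OpenAI's crawler user agent; other crawlers use their own names.

from urllib import robotparser

robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# The AI crawler is asked to stay out; everyone else may fetch freely.
print(rp.can_fetch("GPTBot", "https://example.com/articles/essay"))      # False
print(rp.can_fetch("SomeBrowser", "https://example.com/articles/essay")) # True
```

The catch is that compliance is entirely voluntary: a crawler that ignores the file faces no technical barrier at all, which is exactly the weakness described above.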
Safety Concerns in Autonomous Devices
Self-driving cars are already on the road, and as AI-powered devices grow in popularity, scientists and cybersecurity analysts have raised the alarm about physical safety.
For example, if a hacker managed to inject malicious code into an AI device, the results could be catastrophic. Just imagine a fleet of self-driving cars veering off the road and into buildings; the losses, human and financial, would be steep.
Of course, many companies are taking steps to prevent this, but attackers routinely find previously unknown vulnerabilities, which means it takes only one malicious actor getting one lucky break to cause a disaster.
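One of those steps is refusing to apply any update that isn’t cryptographically signed. The sketch below is a minimal illustration using an HMAC with a shared secret for brevity; real devices typically verify public-key signatures (e.g. Ed25519) so the device itself holds no signing key.

```python
# A minimal sketch of one common safeguard: refusing to install a
# firmware or model update unless it carries a valid signature.
# SECRET_KEY and the payload strings are hypothetical, for the demo only.

import hmac
import hashlib

SECRET_KEY = b"device-provisioning-secret"

def sign_update(payload: bytes) -> bytes:
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()

def install_update(payload: bytes, signature: bytes) -> str:
    # compare_digest avoids timing side channels during verification
    if not hmac.compare_digest(sign_update(payload), signature):
        return "REJECTED: signature mismatch, update not applied"
    return "OK: update verified and applied"

official = b"steering_module v2.1"
print(install_update(official, sign_update(official)))        # OK
print(install_update(b"steering_module v2.1 + backdoor",
                     sign_update(official)))                  # REJECTED
```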
Overall, we know AI is part of the future, and in many industries it could help revolutionize supply chains, automated processes, and more. But the dark side deserves attention too: proper security controls absolutely need to be in place before we begin to live in a world designed and run by AI.
It is also important to note that many people could lose their jobs to AI, and we need to create pathways that soften the widespread hardship AI could cause, and in some cases is already causing. Whether you use AI or not, remember the safety, security, and creative concerns these systems raise, and take steps to make sure you are protected.