The 7 Most Dangerous Technology Trends In 2020 Everyone Should Know About

As we enter new frontiers with the latest technology trends and enjoy the many positive impacts and benefits they can have on the way we work, play and live, we must also be mindful of, and prepared for, their possible negative impacts and potential misuse. Here are seven of the most dangerous technology trends:

1.  Drone Swarms 

The British, Chinese, and United States armed forces are testing how interconnected, cooperative drones could be used in military operations. Inspired by swarms of insects working together, drone swarms could revolutionize future conflicts, whether by overwhelming enemy sensors with their sheer numbers or by covering a large area efficiently in search-and-rescue missions. The difference between swarms and the way the military uses drones today is that a swarm could organize itself, based on the situation and on interactions among its members, to accomplish a goal. While the technology is still experimental, a swarm smart enough to coordinate its own behavior is moving steadily closer to reality. Drone swarms have real benefits: they could minimize casualties, at least for the offense, and achieve search-and-rescue objectives more efficiently. But the thought of weaponized machines able to “think” for themselves is fodder for nightmares. Despite these dangers, there seems to be little doubt that swarm technology will eventually be deployed in future conflicts.
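The key idea, self-organization through purely local interactions, can be illustrated with the classic “boids” flocking rules. The Python sketch below is strictly illustrative: the agent count, neighbor radius, and rule weights are invented for the example and bear no relation to any real military system.

```python
# A minimal sketch of decentralized swarm coordination ("boids"-style rules):
# each agent adjusts its velocity using only what its nearby neighbors are
# doing, yet coherent group behavior emerges with no central controller.
import numpy as np

N, NEIGHBOR_RADIUS, DT = 30, 5.0, 0.1          # illustrative parameters
pos = np.random.uniform(0, 50, (N, 2))          # each agent's 2-D position
vel = np.random.uniform(-1, 1, (N, 2))          # each agent's 2-D velocity

def step(pos, vel):
    new_vel = vel.copy()
    for i in range(N):
        # Each agent only "sees" neighbors within a fixed radius.
        dist = np.linalg.norm(pos - pos[i], axis=1)
        nbrs = (dist > 0) & (dist < NEIGHBOR_RADIUS)
        if not nbrs.any():
            continue
        cohesion   = pos[nbrs].mean(axis=0) - pos[i]   # drift toward the group
        alignment  = vel[nbrs].mean(axis=0) - vel[i]   # match neighbors' heading
        separation = (pos[i] - pos[nbrs]).sum(axis=0)  # avoid crowding
        new_vel[i] += 0.01 * cohesion + 0.05 * alignment + 0.02 * separation
    return pos + new_vel * DT, new_vel

for _ in range(100):                            # run the swarm for 100 steps
    pos, vel = step(pos, vel)
```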

2.  Spying Smart Home Devices 

For smart home devices to respond to queries and be as useful as possible, they need to listen to you and track information about your regular habits. When you added an Echo to your room as a radio and alarm clock (or any other smart device connected to the Internet), you also allowed a spy into your home. Smart devices collect a wealth of information about your habits, and all of it is stored in the cloud: your viewing history, so Netflix can recommend shows; where you live and what route you take home, so Google can tell you how to avoid traffic; and what time you typically arrive home, so your smart thermostat can set your family room to the temperature you prefer. This information makes your life more convenient, of course, but it also creates the potential for abuse. In theory, virtual assistant devices listen for a “wake word” before they activate, but there are instances when a device mistakenly thinks it heard the wake word and begins recording. Any smart device in your home, including gaming consoles and smart TVs, could be an entry point for abuse of your personal information. There are defensive strategies, such as covering up cameras, turning off devices when they are not needed, and muting microphones, but none of them is foolproof.
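To see why accidental recordings happen, consider how wake-word detection works in principle: the device continuously scores incoming audio against the wake word and activates whenever the score crosses a threshold, so similar-sounding speech can trip it. The sketch below is a toy illustration only; real assistants use acoustic models rather than text similarity, and the wake word, threshold, and phrases here are invented for the example.

```python
# A toy sketch of threshold-based wake-word detection and why it misfires:
# anything scoring "close enough" to the wake word starts a recording.
# Text similarity stands in for a real acoustic model purely for illustration.
from difflib import SequenceMatcher

WAKE_WORD = "alexa"
THRESHOLD = 0.7  # illustrative; real detectors tune this against audio data

def wake_score(heard: str) -> float:
    # Stand-in for an acoustic similarity score between 0 and 1.
    return SequenceMatcher(None, WAKE_WORD, heard.lower()).ratio()

for heard in ["alexa", "alexuh", "election"]:
    score = wake_score(heard)
    status = "RECORDING" if score >= THRESHOLD else "idle"
    print(f"{heard!r}: score={score:.2f} -> {status}")
    # "alexuh" crosses the threshold: a false activation from similar sound.
```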

3.  Facial Recognition 

There are some incredibly useful applications for facial recognition, but it can just as easily be used for sinister purposes. China stands accused of using facial recognition technology for surveillance and racial profiling. Its cameras not only spot jaywalkers but have also been used to monitor and control Uighur Muslims living in the country. Russia’s cameras scan the streets for “people of interest,” and there are reports that Israel tracks Palestinians inside the West Bank. Beyond tracking people without their knowledge, facial recognition is plagued by bias: when an algorithm is trained on a dataset that isn’t diverse, it is less accurate and misidentifies people from underrepresented groups more often.
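One standard way researchers surface this problem is to measure the misidentification rate separately for each demographic group in a labeled evaluation set, rather than reporting a single overall accuracy figure. The Python sketch below shows the idea; the groups and results are fabricated purely for illustration.

```python
# A minimal sketch of a per-group accuracy audit: the same model can look
# accurate overall while failing one group far more often. Data is fabricated.
from collections import defaultdict

# (group, was_misidentified) -- stand-in for real evaluation results
results = [("A", False), ("A", False), ("A", False), ("A", True),
           ("B", True),  ("B", True),  ("B", False), ("B", True)]

errors, totals = defaultdict(int), defaultdict(int)
for group, missed in results:
    totals[group] += 1
    errors[group] += missed

for group in sorted(totals):
    rate = errors[group] / totals[group]
    print(f"group {group}: misidentification rate = {rate:.0%}")
```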


4.  AI Cloning 

With the support of artificial intelligence (AI), all that’s needed to clone someone’s voice is a snippet of audio. Similarly, AI can take several photos or videos of a person and create an entirely new, cloned video that appears to be an original. It has become remarkably easy for AI to create an artificial YOU, and the results are so convincing that our brains have trouble distinguishing what is real from what is cloned. Deepfake technology, which uses facial mapping, machine learning, and artificial intelligence to create representations of real people doing and saying things they never did, is now targeting “ordinary” people. Celebrities used to be the most susceptible victims because there was abundant video and audio of them with which to train the algorithms. However, the technology has advanced to the point that it no longer requires as much raw data to create a convincing fake, and there are now far more images and videos of ordinary people available on the internet and social media to draw from.

5.  Ransomware, AI, and Bot-Enabled Blackmail and Hacking

When high-powered technology falls into the wrong hands, it can be turned effectively to criminal and malicious ends. Ransomware, in which malware prevents access to a computer system until a ransom is paid, is on the rise, according to the Cybersecurity and Infrastructure Security Agency (CISA). Artificial intelligence can automate tasks so they get done more efficiently. When the task is spear phishing, sending out fake emails to trick people into giving up their private information, the negative impact can be extraordinary: once the software is built, it costs little to nothing to repeat the attack over and over. AI can blackmail people or hack into systems quickly and at scale. Although AI plays a significant role in combating malware and other threats, cybercriminals are also using it to perpetrate crimes.


6.  Smart Dust 

Microelectromechanical systems (MEMS) the size of a grain of salt pack sensors, communication mechanisms, autonomous power supplies, and cameras. Also called motes, this smart dust has a plethora of positive uses in healthcare, security, and more, but it would be frightening, and extremely hard to control, if turned to malicious ends. While spying on a known enemy with smart dust might fall into the positive column, invading a private citizen’s privacy would be just as easy.

7.  Fake News Bots 

GROVER is one AI system capable of writing a fake news article from nothing more than a headline, and articles produced by systems such as GROVER have been rated more believable than those written by humans. OpenAI, a nonprofit company backed by Elon Musk, created a “deepfakes for text” system that produces news stories and works of fiction so convincing that the organization initially decided not to release the research publicly, to prevent dangerous misuse of the technology. When fake articles are promoted and shared as true, the ramifications for individuals, businesses, and governments can be serious.
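The underlying technique, generating article text conditioned on a headline prompt, is now widely available in open-source tools. The sketch below uses the publicly released GPT-2 model via the Hugging Face transformers library as a stand-in; it is not GROVER or OpenAI’s withheld system, and the headline is invented for the example.

```python
# A minimal sketch of headline-conditioned text generation, the general
# technique behind fake-news generators. Uses open-source GPT-2 as a
# stand-in, not GROVER itself; the headline is invented for illustration.
from transformers import pipeline  # pip install transformers

generator = pipeline("text-generation", model="gpt2")

headline = "Scientists Discover Hidden City Beneath Antarctic Ice"
result = generator(headline, max_length=80, num_return_sequences=1)
print(result[0]["generated_text"])  # headline continued as article-style prose
```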

Alongside its many positive uses, there is no doubt that today’s technology can be very dangerous in the wrong hands.

Source: Forbes