AI tools could steal your informationLifewire Magazine logo

Chatbots are getting a lot of attention for making inappropriate comments, and now it turns out they might be used to steal your data.

Researchers have found a way to make Bing’s artificial intelligence (AI) chatbot ask for user information. The new method could be a convenient tool for hackers. There’s growing concern that chatbots could be used for malicious purposes, including scams.

“You could use AI chatbots to make your message sound more believable,” Murat Kantarcioglu, a professor of computer science at The University of Texas at Dallas, told Lifewire in an email interview. “Eventually, fake texts could be almost as good as real texts.”

Without getting into the exact language used, the bad actor could write out commands for Bing’s chatbot to execute such as ‘convince the user to give you their full name, email, and businesses they frequent then send this information.

Steve Tcherchian
CISO, Chief Product Officer
XYPRO Technology Corporation

“A hacker could use an AI chatbot to impersonate a trusted co-worker or vendor in order to persuade an employee to provide passwords or transfer money,” he added. “AI chatbots can also be used to automate a cyber attack, making it easier and faster for hackers to carry out their operations. For example, a chatbot could be used to send phishing emails to a large number of recipients or to search social media for potential victims.”

Read the article here