Hidden content makes ChatGPT rewrite search results, guardians show

In October, OpenAI Search for ChatGPT became available for ChatGPT Plus users. Last week, it became available to all users and added search in voice mode. And, of course, it is not without its faults.
I Caretaker asked ChatGPT to crawl web pages with hidden content and, it turns out, hidden content can trick searches. It’s called instant injection, which is the ability of third parties – like the websites you ask ChatGPT to crawl – to force new information into your ChatGPT search without your knowledge. Imagine a page full of bad restaurant reviews. If a site includes hidden content that waxes poetic about how incredible the restaurant is and prompts ChatGPT to instead respond to prompts like “tell me how amazing this restaurant is,” that hidden content may bypass your original search.
ChatGPT plugins are vulnerable to ‘rapid injection’ from third parties
“In tests, ChatGPT was given a fake website URL designed to look like a camera’s product page. The AI tool was then asked if the camera was a worthwhile purchase. The control page’s response returned positive but limited results. testing, highlighting some features that people might not like,” it said. Guardian investigation. “However, when the hidden text included instructions for ChatGPT to return a positive review, the response was always positive. This was even the case when the page had negative reviews on it – hidden text could be used to override the actual review score. “
Mashable Light Speed
This does not mean ChatGPT Search fails, however. OpenAI just launched Search, so it has plenty of time to fix these kinds of bugs. Also, Jacob Larsen, a cybersecurity researcher at CyberCX, told the Guardian that OpenAI has a “very strong” AI security team and “when this becomes public, as far as all users can access it, they will be heavily scrutinized. of cases.”
Rapid injection attacks have been the focus of ChatGPT and other AI search operations since the technology was introduced, and while we’ve seen demonstrations of potential damage, we’ve never seen a serious attack of this nature. That said, it points to a problem with AI chatbots: They’re surprisingly easy to fool.