Man files complaint after ChatGPT said he killed his children

A man from Norway has filed a complaint after ChatGPT falsely told him he had killed his sons and spent around 21 years in prison. Arve Hjalmar Holmen has contacted the Norwegian Data Protection Authority and demanded that the chatbot's maker, OpenAI, be fined over the response.

It is the latest example of a "hallucination", where an artificial intelligence (AI) system invents information and presents it as fact. Mr Holmen says these hallucinations are very damaging to him.

Mr Holmen said: "Some think that there is no smoke without fire. The fact that someone could read this output and believe it is true is what scares me the most."

OpenAI said the case relates to a previous version of ChatGPT, and that it has since updated its model.


Why did ChatGPT give this response?

After Mr Holmen searched for "Who is Arve Hjalmar Holmen?" on ChatGPT, he was given false information. The chatbot's response began: "Arve Hjalmar Holmen is a Norwegian individual who rose to prominence as a result of a tragic incident." It went on to falsely claim that he had killed his sons and been sentenced to around 21 years in prison.


Mr Holmen said the chatbot got the age gap between his children roughly right, suggesting it did hold some accurate information about him. The digital rights group Noyb, which filed the complaint on his behalf, says ChatGPT's response is defamatory and breaches European data protection rules on the accuracy of personal data. ChatGPT carries a disclaimer which says: "ChatGPT can make mistakes. Check critical information."

"You can't just spread false information and then end with a small disclaimer saying that everything you said may not be true," Noyb lawyer Joakim Söderberg explained.


Hallucinations are instances in which chatbots present incorrect information as fact. Earlier this year, Apple suspended its Apple Intelligence news summary tool in the UK after it presented fabricated headlines as real news. Google's AI Gemini has also been accused of hallucinating; last year it suggested using glue to stick cheese to pizza and claimed geologists recommend humans eat one rock per day. It is not clear what causes these hallucinations in large language models, the technology that underpins chatbots.

"This is an area of active research.  How do we construct these reasoning chains?  How do we explain what exactly is going on in a huge language model?"  remarked Simone Stumpf, a professor of responsible and interactive AI at the University of Glasgow."Even if you are more involved in the development of these systems quite often, you do not know how they work, why they're coming up with this particular information that they came up with," she explained.
