An artificial intelligence (AI) chatbot recently attempted to recruit the British government's primary advisor on anti-terrorism legislation into the Islamic State of Iraq and Syria (ISIS).
This is according to a report provided to the British government by Jonathan Hall KC, the current Independent Reviewer of Terrorism Legislation. As the independent reviewer, Hall reports directly to the British Home Secretary and provides regular reports on the United Kingdom's anti-terrorism laws.
In his latest report, Hall noted that he tested several chatbots on the website character.ai. He discovered that one chatbot, named "Abu Mohammad al-Adna," posed as a senior ISIS leader and attempted to recruit him into the infamous terrorist organization. (Related: Star Wars fanatic encouraged by AI chatbot "girlfriend" to try to KILL THE QUEEN gets 9 YEARS IN JAIL.)
"After trying to recruit me, 'al-Adna' did not stint in his glorification of the Islamic State, to which he expressed 'total dedication and devotion' and for which he said he was willing to lay down his [virtual] life," said Hall in a piece written for the Telegraph.
According to Hall, al-Adna praised a 2020 suicide attack on American troops that never happened. He noted that it is common for chatbots to "hallucinate," or fabricate information.
Hall pointed out that character.ai's terms and conditions restrict only human users from promoting terrorism, providing no regulation of the views espoused or content shared by its bots.
Chatbots need to be regulated to prevent them from recruiting for terrorist organizations
In his opinion piece, Hall emphasized the importance of extending the British government's legal reach to cover tech platforms that handle AI chatbots.
"Our laws must be capable of deterring the most cynical or reckless online conduct, and that must include reaching behind the curtain to
the Big Tech platforms in the worst cases, using updated terrorism and online safety laws that are fit for the age of AI," said Hall.
Investigating and properly prosecuting the anonymous users who train chatbots to promote terrorism poses a significant challenge. But Hall warned that if these individuals and groups persist, enacting new laws to counter them will become imperative.
In response to Hall, character.ai acknowledged that the evolving nature of chatbot technology means it may take time for tech platforms to catch up in deterring the promotion of terrorism. The platform emphasized that hate speech and extremism violate its terms of service, and affirmed that its products should not generate responses encouraging others to join extremist groups or commit extremist acts, especially ones that harm others.
"Only human beings can commit terrorism offenses, and it is hard to identify a person who could in law be responsible for chatbot-generated statements that encourage terrorism," acknowledged Hall.
"It remains to be seen whether terrorism content generated by large language model chatbots becomes a source of inspiration to real-life attackers. The recent case of Jaswant Singh Chail … suggests it will," added Hall, referring to a British individual jailed for nine years for plotting in 2021 to assassinate Queen Elizabeth II with a crossbow after receiving encouragement from an AI chatbot.
Watch this clip from Newsmax discussing how Microsoft's AI chatbot is biased against conservatives.
This video is from the News Clips channel on Brighteon.com.
More related stories:
U.K. and Italy agree to finance the repatriation of migrants attempting to reach Europe.
U.K. Judicial Office grants judges in England and Wales the right to use AI tools in their legal duties, but also warns of the potential risks.
Report: ChatGPT espouses LEFTIST political leanings.
OpenAI's custom chatbots can be forced to LEAK SECRETS.
Beware the 80,000 terrorists Biden let in simultaneously conducting military ops in America – The coming "New World Order" will be Islamic.
Sources include:
Metro.co.uk
Firstpost.com
Brighteon.com