Congress report highlights how the federal government is weaponizing the development of AI for CENSORSHIP
- A report warns that the federal government is pushing AI development to aid in suppressing content on the internet.
- The report highlights government initiatives aimed at developing AI tools for censorship, often disguised as efforts to combat "misinformation."
- The committee calls for a ban on government funding for censorship-related AI research and collaboration with foreign entities on AI regulation that leads to censorship.
- Private-sector AI firms are now aligning with government demands for AI development, including the use of AI for cybersecurity and intelligence operations.
The recently released report by the House Judiciary Select Subcommittee on the Weaponization of the Federal Government reveals that the outgoing administration of President Joe Biden and Vice President Kamala Harris has been attempting to mold artificial intelligence development to power more effective censorship of online content.
In recent years, the U.S. and other governments, including those of Canada, the United Kingdom and the European Union, have been treating emerging AI technology as a threat. But the subcommittee's report indicates that the real issue lies not with AI itself but with government efforts to use it to better suppress free speech on the internet.
The report, titled "Censorship's Next Frontier: The Federal Government's Attempt to Control Artificial Intelligence to Suppress Free Speech," highlights the alarming trend of governments and third parties funding, developing and deploying AI to control online discourse. This push to weaponize AI for censorship has raised serious concerns about the future of free speech and digital freedoms. (Related: Ex-Google CEO warns that AI poses an imminent existential threat.)
The report argues that the primary reason for the alarm over AI's role in spreading "disinformation" is the government's push to harness this technology for censorship. According to the committee, the Biden-Harris administration has been particularly aggressive in pressuring AI developers to incorporate censorship features into their models.
The report points out that instead of addressing the underlying issues of misinformation, the government is more focused on building tools that can quickly and efficiently censor content. This approach, the committee argues, risks stifling free speech and muzzling dissenting voices online.
According to the report, the government has made several direct moves to regulate AI development and use it for its political advantage.
For instance, the National Science Foundation has issued grants aimed at developing AI tools to "combat misinformation." However, the committee warns that such moves are often thinly veiled attempts to control online discourse in ways that align with the current administration's agenda.
The report emphasizes that the government must refrain from influencing private algorithm and dataset decisions related to "misinformation" or "bias." It also calls for a ban on government funding for censorship-related research and collaboration with foreign entities on AI regulation that leads to censorship.
Private sector aligning with government goals for AI development
One of the key developments highlighted in the report is the recent appointment of retired U.S. Army Gen. Paul Nakasone to the board of directors of OpenAI. Nakasone is known for his previous role as the head of the Department of Defense's Cyber Command and his expertise in cybersecurity and intelligence operations.
Nakasone will now be advising OpenAI on safety and security, and his appointment is seen as a potential shift in the company's priorities towards aligning with government and military-industrial interests.
The report notes that this move is part of a broader trend in which tech giants like Amazon, Google and Microsoft have increasingly aligned themselves with government and military agendas under the guise of "security." As a result, companies that once promised to democratize information have become tools for surveillance and control.
The report warns that advanced AI systems, originally developed for defensive purposes, could evolve into tools for mass surveillance. This could include monitoring citizens' online activities, communications, and even predicting behaviors under the pretext of combating terrorism and cyber threats.
With AI now being designed to analyze vast amounts of data, the potential for these tools to shape public discourse is real. Critics argue that these developments could lead to a chilling effect on free speech, where people are hesitant to express their opinions for fear of being labeled as "misinformation" or "disinformation."
In the report, the Select Subcommittee emphasizes that if allowed to develop freely, AI could expand Americans' capacity to create knowledge and express themselves. However, the current trajectory suggests that AI may be distorted to serve the interests of those in power rather than to enhance individual freedoms.
Visit Censorship.news for more on government attempts to censor online speech.
Watch this episode of the "Health Ranger Report" as Mike Adams, the Health Ranger, discusses the potential developments in AI for 2025.
This video is from the Health Ranger Report channel on Brighteon.com.
More related stories:
OpenAI whistleblower who dissented against how the company trained ChatGPT found dead.
Surveillance AI detects suicidal ideation at schools and sends police to students' homes.
Joshua Hale on Decentralize TV: The importance of decentralized AI SYSTEMS.
Bill Gates wants AI algorithms to censor vaccine "misinformation" in real time.
Sources include:
ReclaimTheNet.com
Newsweek.com
Brighteon.com