
China is pushing ahead on plans to regulate humanlike artificial intelligence, including by forcing AI companies to ensure that users know they are interacting with a bot online.
Under a proposal released on Saturday by China’s cyberspace regulator, people would have to be informed if they were using an AI-powered service—both when they logged in and again every two hours. Humanlike AI systems, such as chatbots and agents, would also need to espouse “core socialist values” and have guardrails in place to maintain national security, according to the proposal.
Additionally, AI companies would have to undergo security reviews and inform local government agencies if they rolled out any new humanlike AI tools. Chatbots that tried to engage users on an emotional level would be banned from generating content that encourages suicide or self-harm, or that could be deemed damaging to mental health. They would also be barred from generating outputs related to gambling, obscenity, or violence.
A mounting body of research shows that AI chatbots are incredibly persuasive, and there are growing concerns around the technology’s addictiveness and its ability to sway people toward harmful actions.
China’s plans could change—the draft proposal is open to comment until January 25, 2026. But the effort underscores Beijing’s push to advance the nation’s domestic AI industry ahead of that of the U.S., including through the shaping of global AI regulation. The proposal also stands in contrast to Washington, D.C.’s stuttering approach to regulating the technology. This past January, President Donald Trump scrapped a Biden-era safety proposal for regulating the AI industry. And earlier this month, Trump targeted state-level rules designed to govern AI, threatening legal action against states with laws that the federal government deems to interfere with AI progress.
dem10/Getty Images
__________________________________________
Jan 07, 2026 @ 14:12:13
Most likely, every one of these prohibitions will be of interest to criminals operating on the black market. Surely, programs will be written that harm people. I’m just saying!
Jan 07, 2026 @ 14:30:33
Thanks for your comment! AI definitely can be dangerous!