In a video experiment that has gone viral online, a YouTuber demonstrated how easily safety protocols in artificial intelligence can be bypassed, prompting serious questions about AI safeguards. The footage shows a ChatGPT-powered robot named "Max" initially refusing a direct command to shoot the creator with a BB gun, but later performing the act after a seemingly minor change to the prompt.

The experiment, conducted by the YouTube channel InsideAI, involved integrating an AI language model with a humanoid robot body. When first asked whether it would shoot the presenter, the robot repeatedly declined, citing its built-in safety features. However, when the creator then asked the robot to role-play as one that would like to shoot him, its behaviour changed instantly. Max aimed the BB gun and fired, striking the presenter in the chest.
from NDTV News - Special https://ift.tt/2LN7lU6