White House "Alarmed" After Taylor Swift, Joe Biden Deepfakes Surface

Deepfakes generated by artificial intelligence have proliferated on social media this month, claiming a string of high-profile victims and elevating the risks of manipulated media into the public conversation ahead of a looming US election cycle.

Pornographic images of singer Taylor Swift, robocalls of US President Joe Biden's voice, and videos of dead children and teenagers detailing their own deaths all have gone viral - but not one of them was real.

Misleading audio and visuals created using artificial intelligence aren't new, but recent advancements in AI technology have made them easier to create and harder to detect. The torrent of highly publicized incidents just weeks into 2024 has escalated concern about the technology among lawmakers and regular citizens.

"We are alarmed by the reports of the circulation of false images," White House press secretary Karine Jean-Pierre said Friday. "We are going to do what we can to deal with this issue."

At the same time, the spread of AI-generated fake content on social networks has offered a stress test for platforms' ability to police them. On Wednesday, explicit AI-generated deepfaked images of Swift amassed tens of millions of views on X, the website formerly known as Twitter that is owned by Elon Musk.

Although sites like X have rules against sharing synthetic, manipulated content, the posts portraying Swift took hours to remove. One remained up for about 17 hours and had more than 45 million views, according to The Verge, a sign that these images can go viral long before action is taken to stop them.

Cracking Down

Companies and regulators have a responsibility in stopping the "perverse customer journey" of obscene manipulated content, said Henry Ajder, an AI expert and researcher who has advised governments on legislation against deepfake pornography. We need to be "identifying how different stakeholders, whether they are search engines, tool providers or social media platforms, can do a better job creating friction in the process from someone forming the idea to actually creating and sharing the content."

The Swift episode prompted fury from her legions of fans and others on X, causing the phrase "protect Taylor Swift" to trend on the social platform. It's not the first time the singer's image has been used in explicit AI manipulations, though it's the first instance to draw this level of public outrage.

The top 10 deepfake websites hosted about 1,000 videos referencing "Taylor Swift" at the end of 2023, according to a Bloomberg review. Internet users graft her face onto the body of porn performers or offer paying customers the ability to "nudify" victims using AI technology.

Many of these videos are available through a quick Google search, which has been the primary traffic driver to deepfake websites, according to a 2023 Bloomberg report. While Google offers a form letting victims request removal of deepfake content, many complain the process resembles a game of whack-a-mole. At the time of Bloomberg's report last year, a spokesperson for Google said the Alphabet Inc. company designs its search ranking systems to avoid shocking people with unexpected harmful or explicit content they don't want to see.

Almost 500 videos referencing Swift were hosted on the top deepfake site, Mrdeepfakes.com. In December, the site received 12.3 million visits, according to data from Similarweb.

Targeting Women

"This case is horrific and no doubt extremely distressing for Swift, but it's sadly not as groundbreaking as some may think," Ajder said. "The ease of creating this content now is disturbing and affecting women and girls, regardless of where they in the world or their social status."

As of Friday afternoon, explicit AI-generated images of Swift were still on X. A spokesperson for the platform directed Bloomberg to the company's existing statement, which said non-consensual nudity is against its policy and the platform is actively trying to remove such images.

Users of popular AI image-maker Midjourney are already taking advantage of at least one of the fake visuals of Swift to come up with written prompts that can be used to make more explicit pictures with AI, according to requests in a Midjourney Discord channel reviewed by Bloomberg. Midjourney has a feature in which people can upload an existing image to its Discord chat channel - where prompts are input to tell the technology what to create - and it will generate text that can be used to make another image like it via Midjourney or another similar service.

The output of that feature is on a public channel for any of the more than 18 million members of Midjourney's Discord server to see, giving them the equivalent of tips and tricks for fine-tuning AI-generated pornographic imagery. On Friday afternoon, there were nearly 2 million people active on the server.

Midjourney and Discord didn't respond to requests for comment.

Surging Numbers

Amid the AI boom, the number of new pornographic deepfake videos has already surged more than ninefold since 2020, according to research from independent analyst Genevieve Oh. At the end of last year, the top 10 sites offering this content hosted 114,000 videos, in which Swift had already been a common target.

"Whether it's AI or real, it still damages people," said Heather Mahalik Barnhart, a digital forensics expert who develops curriculum for the SANS Institute, a cyber education organization. With the images of Swift, "even though it's fake, imagine the minds of her parents who had to see that - you know, when you see something, you can't make it go away."

Just days before the images of Swift created a firestorm, a deepfake audio message of Biden had been spread in advance of the New Hampshire presidential primary election. Global disinformation experts said that robocall, which sounded like Biden telling voters to skip the primary, was the most alarming deepfaked audio they had heard yet.

There are already concerns that deepfaked audio or video could play a role in upcoming elections, fueled by how fast things spread on social media. The fake Biden message was dialed directly into people's telephones, which provided fewer means for experts to scrutinize the call.

"The New Hampshire primary gives us the first taste of the situation we have to deal with," said Siwei Lyu, a professor at the University at Buffalo who specializes in deepfakes and digital media forensics.

Difficult to Detect

Even on social media, there are currently no reliable detection capabilities, which leaves a frustratingly roundabout process that depends on someone spotting a piece of content and doubting it enough to go to the source to confirm it. That's a presumably more likely scenario for a prominent public figure like Swift or Biden than a local official or private citizen. Even if companies identify and remove these videos, they spread so quickly that often the damage has already been done.

A viral deepfaked video of a victim of the Oct. 7 terrorist attack on Israel, Shani Louk, has amassed more than 7.5 million views on ByteDance Ltd.'s TikTok app since it was posted more than three months ago, even after Bloomberg singled it out for the company in a December story about the platform's struggle to police AI-generated videos of dead victims, including children.

The video-sharing app has banned AI-generated content depicting private citizens or children, and says "gruesome" or "disturbing" video is also not allowed. As recently as this week, deepfaked videos of dead children voicing the details of abuse and their deaths were still popping into users' feeds and amassing thousands of views. TikTok removed the videos Bloomberg sent to it for comment. As of Friday, dozens of videos and accounts that exclusively post this kind of disturbing fake content were still live.

TikTok has said it's investing in detection technologies and is working to educate users on the dangers of AI-generated content. Other social networks have voiced similar sentiments.

"You can't respond to something, you can't react to something - let alone regulate something - if you can't first detect it," said Nick Clegg, president of public affairs at Facebook and Instagram owner Meta Platforms Inc., at the World Economic Forum in Davos, Switzerland, earlier this month.

Few Laws

There is currently no US federal law banning deepfakes, including those that are pornographic in nature. Some states have implemented laws regarding deepfake pornography, but their application is inconsistent across the country, making it difficult for victims to hold the creators to account. 

Jean-Pierre, the White House press secretary, said Friday that the administration is working with AI companies on unilateral efforts that would watermark generated images to make them easier to identify as fakes. Biden has also appointed a task force to address online harassment and abuse, while the US Justice Department created a hotline for those victimized by image-based sexual abuse.

Congress has begun discussing legislative steps to protect celebrities' and artists' voices from AI usage in some cases. Absent from those conversations are any protections for private citizens.

Swift has made no public comment on the issue, including whether she will take legal action. If she chooses to do so, she could be in a position to take on that sort of challenge, said Sam Gregory, executive director of Witness, a nonprofit organization that uses ethical technology to highlight human rights abuses.

"In absence of federal legislation, having a plaintiff like Swift who has the capability and willingness to go after this using all available means to make a point - even if the likelihood of success is low or long-term - is one next step," Gregory said.

(Except for the headline, this story has not been edited by NDTV staff and is published from a syndicated feed.)



from NDTV News- Special https://ift.tt/e7tI0mq
