Dec 3, 2024 - WASHINGTON, D.C. — U.S. Senate Commerce Committee Ranking Member Ted Cruz’s (R-Texas) bipartisan legislation, the TAKE IT DOWN Act, today passed the Senate unanimously and now moves to the House for consideration. The TAKE IT DOWN Act, which Sen. Cruz introduced with Sen. Amy Klobuchar (D-Minn.), would criminalize the publication of non-consensual intimate imagery (NCII), including AI-generated NCII (commonly referred to as “deepfake revenge pornography”), and require social media and similar websites to have in place procedures to remove such content within 48 hours of notice from a victim.
Nov 26, 2024 - As artificial intelligence technology evolves and becomes more accessible, these kinds of scams are becoming more common.
According to Deloitte, a leading professional services and research firm, AI-generated content contributed to more than $12 billion in fraud losses last year, and losses could reach $40 billion in the U.S. by 2027.
Oct 26, 2024 - A mother in Florida is suing the Google-backed Character.AI platform, claiming it had a large part in the suicide of her 14-year-old son.
Sewell Setzer III fatally shot himself in February 2024, weeks before his 15th birthday, after developing what his mother calls a “harmful dependency” on the platform, no longer wanting to “live outside” the fictional relationships it had created.
Oct 23, 2024 - A 14-year-old Florida boy killed himself after a lifelike “Game of Thrones” chatbot he’d been messaging for months on an artificial intelligence app sent him an eerie message telling him to “come home” to her, a new lawsuit filed by his grief-stricken mom claims.
Oct 19, 2024 - Now, a WIRED review of Telegram communities involved with the explicit nonconsensual content has identified at least 50 bots that claim to create explicit photos or videos of people with only a couple of clicks. The bots vary in capabilities, with many suggesting they can “remove clothes” from photos while others claim to create images depicting people in various sexual acts.
Oct 17, 2024 - Cops aren't sure how to protect kids from an ever-escalating rise in fake child sex abuse imagery fueled by advances in generative AI. Last year, child safety experts warned of thousands of "AI-generated child sex images" rapidly spreading on the dark web around the same time the FBI issued a warning that "benign photos" of children posted online could be easily manipulated to exploit and harm kids.
Oct 9, 2024 - As part of my ongoing research on the human elements of AI, I have spoken with AI companion app developers, users, psychologists and academics about the possibilities and risks of this new technology. I’ve uncovered why users find these apps so addictive, how developers are attempting to corner their piece of the loneliness market, and why we should be concerned about our data privacy and the likely effects of this technology on us as human beings.
Oct 8, 2024 - A new report about a hacked “AI girlfriend” website claims that many users are trying (and possibly succeeding) at using the chatbot to simulate horrific sexual abuse of children.
NOTE: ARTICLE IS GRAPHIC
Oct 4, 2024 - According to a recent survey, 90% of workers find that AI (artificial intelligence) is improving their productivity. One in four of us uses AI on a daily basis, and 73% of workers use AI at least once a week. The survey, commissioned by YouGov and Leapsome, offers insight into productivity, culture and communication in the age of AI. But AI is being asked to do more than just help with workplace efficiency. In addition to on-the-job productivity, AI is helping millions to find friendship, and more, as online platforms and search engines serve up artificial relationships.
Oct 2, 2024 - California Gov. Gavin Newsom signed two bills on Sunday to help protect minors from harmful sexual imagery of children created through the misuse of artificial intelligence tools.
Sep 27, 2024 - A disaster is brewing on dark-web forums, in messaging apps, and in schools around the world: Generative AI is being used to create sexually explicit images and videos of children, likely thousands a day. “Perhaps millions of kids nationwide have been affected in some way by the emergence of this technology,” I reported this week, “either directly victimized themselves or made aware of other students who have been.”
Sep 26, 2024 - For years now, generative AI has been used to conjure all sorts of realities—dazzling paintings and startling animations of worlds and people, both real and imagined. This power has brought with it a tremendous dark side that many experts are only now beginning to contend with: AI is being used to create nonconsensual, sexually explicit images and videos of children. And not just in a handful of cases—perhaps millions of kids nationwide have been affected in some way by the emergence of this technology, either directly victimized themselves or made aware of other students who have been.
“This is an issue that affects everybody — from celebrities to high school girls.”
That’s how Jen Klein, director of the White House Gender Policy Council, describes the pervasiveness of image-based sexual abuse, a problem that artificial intelligence (AI) has intensified in recent years, touching everyone from students to public figures like Taylor Swift and Rep. Alexandria Ocasio-Cortez.
Now starring in Beetlejuice, Jenna Ortega, who previously starred on Disney's "Stuck in the Middle" beginning in 2016, says she was subjected to explicit content from the moment she joined social media.
TAMPA, Fla. (WFLA) — Artificial Intelligence technology is already changing lives and not always for the better.
Some are using AI to create pornographic images of people without their consent, and sometimes, the victims are children. This is already happening here in Florida, including in the Bay area. You might be shocked to hear that in the State of Florida, and at the federal level, there is no law to protect victims.
Artificial intelligence (AI) is being used to generate deepfake child sexual abuse images based on real victims, a report has found.
The tools used to create the images remain legal in the UK, the Internet Watch Foundation (IWF) said, even though AI child sexual abuse images are illegal.
There is clear evidence of a growing demand for AI-generated images of child sexual abuse on the dark web, according to a new research report published by Anglia Ruskin University's International Policing and Public Protection Research Institute (IPPPRI). The full report can be viewed here.
After a series of highly publicized scandals related to deepfakes and child sexual abuse material (CSAM) have plagued the artificial intelligence industry, top AI companies have come together and pledged to combat the spread of AI-generated CSAM.
Thorn, a nonprofit that creates technology to fight child sexual abuse, announced Tuesday that Meta, Google, Microsoft, Civitai, Stability AI, Amazon, OpenAI and several other companies have signed onto new standards created by the group in an attempt to address the issue. At least five of the companies have previously responded to reports that their products and services have been used to facilitate the creation and spread of sexually explicit deepfakes featuring children.
GENEVA (5 February 2024) – A UN expert today warned of the urgent need to put children’s rights at the heart of the development and regulation of the internet and new digital products. Ahead of Safer Internet Day, the UN Special Rapporteur on sale and sexual exploitation of children, Mama Fatima Singhateh, issued the following statement:
“The internet and digital platforms can be a double-edged sword for children and young people. They can allow them to positively interact and further develop as autonomous human beings, claiming their own space, while also facilitating age-inappropriate content and online sexual harms against children by adults and peers.
Artificial intelligence (AI), now an integral part of our everyday lives, is becoming increasingly accessible and ubiquitous. Consequently, there’s a growing trend of AI advancements being exploited for criminal activities.
One significant concern is the ability AI provides to offenders to produce images and videos depicting real or deepfake child sexual exploitation material.
Deepfake image-based sexual abuse is a growing and alarming form of tech-facilitated sexual exploitation and abuse that uses advanced artificial intelligence (AI) to create deceptive and non-consensual sexually explicit content. Vulnerable groups, particularly women and girls, face amplified risks and unique challenges in combatting deepfake image-based sexual abuse.
Equality Now and The Alliance for Universal Digital Rights recently held a webinar focusing on deepfake legislation across the world and the real-life experiences and responses to this unnerving new medium for sexual exploitation.
Nearly a year after AI-generated nude images of high school girls upended a community in southern Spain, a juvenile court this summer sentenced 15 of their classmates to a year of probation.
But the artificial intelligence tool used to create the harmful deepfakes is still easily accessible on the internet, promising to “undress any photo" uploaded to the website within seconds.
It’s horrifyingly easy to make deepfake pornography of anyone thanks to today’s generative AI tools. A 2023 report by Home Security Heroes (a company that reviews identity-theft protection services) found that it took just one clear image of a face and less than 25 minutes to create a 60-second deepfake pornographic video—for free.
There is no official tally of how many students have become victims of explicit deepfakes, but their stories are mounting faster than school officials are prepared to handle the abuse.
Generative AI is exacerbating the problem of online child sexual abuse materials (CSAM), as watchdogs report a proliferation of deepfake content featuring real victims' imagery.
A Spanish court has reportedly penalized 15 teens who were charged with creating nude, artificial intelligence-generated images of two of their female classmates and spreading them in a WhatsApp group.
Safety groups say they’re increasingly finding chats about creating images based on past child sexual abuse materials
Predators active on the dark web are increasingly using artificial intelligence to create sexually explicit images of children, fixating especially on “star” victims, child safety experts warn.
AI is emerging as a critical tool to sort through record-breaking amounts of digital evidence in the fight against the online exploitation of children and teens.
We’ve all heard of catfish scams — when someone pretends to be a lover on the other side of the screen, but instead, they aren’t who they say they are once their real face is revealed. Now, there’s a similar scam on the rise, and it’s much more sophisticated because scammers can fake the face, too. The scam is known as the “Yahoo Boys” scam, and it’s taking “catfishing” to a whole new level.
A tipline set up 26 years ago to combat online child exploitation has not lived up to its potential and needs technological and other improvements to help law enforcement go after abusers and rescue victims, a new report from the Stanford Internet Observatory has found.
You may feel confident in your ability to avoid becoming a victim of cyber scams. You know what to look for, and you won’t let someone fool you.
Then you receive a phone call from your son, which is unusual because he rarely calls. You hear a shout and sounds resembling a scuffle, making you take immediate notice. Suddenly, you hear a voice that you are absolutely certain is your son, screaming for help. When the alleged kidnappers come on the line and demand money to keep your son safe, you are sure that everything is real because you heard his voice.
Multiple Los Angeles-area school districts have investigated instances of "inappropriate," artificial intelligence-generated images of students circulating online and in text messages in recent months.
Etsy, the online retailer known for providing a platform to sell hand-made and vintage products, continues to host sellers of "deepfake" pornographic images of celebrities and random women despite the company's efforts to clean up the site.
A Florida man is facing charges after authorities said he took a photo of a young girl in his neighborhood then used artificial intelligence to create child pornography with it.
Officials with the Martin County Sheriff's Office said the case is the first of its type they've handled.
The suspect, 51-year-old Daniel Warren, is facing 17 child pornography charges.
AI-generated images are everywhere. They’re being used to make nonconsensual pornography, muddy the truth during elections and promote products on social media using celebrity impersonations.
Mar 14, 2024 - A sophomore at Richmond Burton High School in Illinois said what she discovered this week was disturbing, alarming and upsetting.
"I felt really nauseous and violated. It was not a good feeling,” said Stevie Hyder. "I actually went home right after that for the rest of the day. It was just really bad, and as soon as I got into the car, I just started crying."
Police are now investigating after Hyder said someone took her prom picture online and altered the image into explicit content using artificial intelligence.
Feb 27, 2024 - Students at a middle school in Beverly Hills, California, used artificial intelligence technology to create fake nude photos of their classmates, according to school administrators. Now, the community is grappling with the fallout.
Nov 26, 2023 - WASHINGTON — When Ellis, a 14-year-old from Texas, woke up one October morning with several missed calls and texts, they were all about the same thing: nude images of her circulating on social media.
That she had not actually taken the pictures didn't make a difference, as artificial intelligence makes so-called "deepfakes" more and more realistic.
Parents of girls at a New Jersey high school said their daughters were humiliated after they learned fake pornographic images of themselves generated with the use of Artificial Intelligence (AI), were circulated among classmates.
Nov 3, 2023 - The dangers of artificial intelligence technology have been brought home in Westfield after fake pornographic images of female students were circulated at the local high school.
The images were created using real photos from social media on an AI app that is shockingly easy to access and use. Altered photos of this type are known as 'deepfakes,' and in many cases it is difficult to tell that the photos are not real.
Jun 20, 2023 - Child safety experts are growing increasingly powerless to stop thousands of "AI-generated child sex images" from being easily and rapidly created, then shared across dark web pedophile forums, The Washington Post reported.
GREENVILLE, S.C. (FOX Carolina) - Two bills filed in South Carolina are seeking to crack down on a new way predators are exploiting children. The rise of artificial intelligence has also led to a rise in what investigators are calling “morphed pornography.”
After several reports of artificial intelligence-generated child pornography surfaced in California, Ventura County Dist. Atty. Erik Nasarenko advocated for a change to state law to protect children who are increasingly vulnerable to this misuse of technology.
Police agencies and investigators the world over are finding that AI-generated images of child pornography are so incredibly realistic that they have lost time, effort, and money investigating images that aren't even real. And it is a growing problem.
A study by the Stanford Internet Observatory found 3,226 images of suspected child sexual abuse in an AI database called LAION, which is used to train popular text-to-image AI programs like Stable Diffusion.
MillionKids.org
Million Kids is a registered 501(c)(3) organization located in Riverside, California | EIN: 26-3174662
Copyright © 2024 MillionKids.org
Written by Lynda Bergh Herring, Million Kids contributor and speaker.
Human trafficking involves using force, fraud, or coercion to obtain some form of labor or commercial sex act. Every year millions of men, women, and children are trafficked worldwide. This includes sextortion, in which individuals are blackmailed into giving money or sexual favors to someone who threatens to reveal evidence of their sexual activity. The vast majority of human trafficking and sextortion begins online.
In her book, Your Amazing Itty Bitty ™ Keep Your Children Safe Book, Lynda Bergh Herring will educate you on prot