What is this about AI images and videos?

Rose65 (Well-known Member, United Kingdom)
I just don't understand this business about AI and graphic, unauthorized videos. Something about Taylor Swift currently being a high-profile victim - it's awful. I read somewhere it's also happening in India, where women are being humiliated by terrible videos made of them, all without consent.

Can people really make video nasties of just about anyone, or only of those who have visible, usable photos of themselves online?
Should and can people protect themselves with security measures online?
 

I think Taylor is suing the dude who created the awful video of her. I feel like right now there is no way of stopping this content from spreading. It will be up to the large companies to start policing and regulating it. Remember, Musk promised there would be "no censorship" on "X". Well, he'd better start acting soon, because the lawsuits are gonna start flying.
 

The only way you can protect your photos and videos online is to make them private, not public.
 
Just think what people who want to sway elections will be able to do with this kind of technology! A voice you trust sending people to the wrong polling stations, political statements made in the image of opponents that the real person never said.... The world is in trouble on this one, because no government seems to be recognizing the potential dangers.
 
Bad enough that evil people created such images of Taylor Swift, but surely worse are the millions who viewed and shared them. Sometimes I really, truly hate this world; there seems to be no end to the abuses. That poor girl must be devastated.

As for ordinary people like us, just ensure your privacy settings are set as high as possible. Trouble is, in this day and age many people, especially the young, want lots of online likes and interactions. It leaves them vulnerable.

I am a very private person; I do not publicise my face or put personal business on my FB page. Yet however careful one might be, I suppose determined hackers can do just about anything. It's like with burglars: do not make yourself an easy target.
 
The average person has to worry more about being fooled and manipulated by the AI creators. If it's put out on a platform that pays for hits, they'll make money off everyone who just takes a look. The biggest fear this year is really believable ads put out by opposing political parties. I saw two on TV that, had I not been told they were AI, I wouldn't have known were manipulated.
 
To a certain degree, the thing that happened to Taylor has been around for a long time... photoshopping someone's head onto a nude body, etc. But this is a whole other level and has the potential to ruin many lives and careers. And like @Paco Dennis and Warren Buffett said, there is no stopping it now. :(
 
AI can create an image of you doing whatever, but it's really your face superimposed on somebody else doing whatever. Now, if you're a famous person, your image may also be doing whatever. A video of Lester Crandon and Eva Flatbush rocking the bed boards may not be a big seller, but a video of Tom Cruise and Taylor Swift could be.
Also, what about: "Joe couldn't have murdered his wife; there's a video of him sitting in a movie theater at the time"? That destroys the credibility of video evidence.
 
Can people really make video nasties of just about anyone? YES, they can. They can also hire people to make the stuff for them.
Should and can people protect themselves with security measures online? Yes, but how will you protect yourself from a Ring video doorbell or store security cameras? Wear a heavy coat everywhere you go and a paper bag over your head?
 
AI is taking over the world... fast... and it's a real concern.

Just a small example: much of the narration on YouTube videos may sound like a real person, but it isn't; it's an AI voice. You'll get to recognise them quite quickly when you realise they often pronounce words phonetically...
 
A.I. technology is just the latest new evil unleashed upon society. Why? Because California, i.e. Silicon Valley, wants to make money from it!

This Taylor Swift BS, it's all to generate clicks and web traffic. People are reactionary and when the "news", the so-called journalism profession, says fake nudes of Taylor Swift are online, 200 million people go looking for them to see if it's true.

It's quite sad. Maybe the death of local newspapers is karma for this kind of manipulation of people who have actual, real problems they need and want help with? It's a shame if someone did that to Ms. Swift, but she's a billionaire now. She can take care of it herself. She does not need press advocacy. But there are plenty of non-millionaires who do need a little help from the press with all kinds of issues.
 
I believe that in the future, AI-recognition software that scans for AI-generated content will be as common and necessary as antivirus and anti-malware software is today.

Fact-checking is already a nightmare. Talk about a recipe for global paranoia.
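For anyone curious what that kind of scanning software might look like under the hood, here is a minimal sketch in Python. It is only an illustration: the model name "example-org/ai-image-detector" is a placeholder for a hypothetical classifier trained to separate real photos from AI-generated ones, not a real model.

# Minimal sketch of an "AI-content scanner", assuming a hypothetical
# image-classification model that distinguishes real photos from
# AI-generated ones. The model name below is a placeholder.
from transformers import pipeline

detector = pipeline("image-classification", model="example-org/ai-image-detector")

def looks_ai_generated(image_path: str, threshold: float = 0.8) -> bool:
    # The pipeline returns a list of {"label": ..., "score": ...} predictions.
    predictions = detector(image_path)
    top = max(predictions, key=lambda p: p["score"])
    return "ai" in top["label"].lower() and top["score"] >= threshold

if looks_ai_generated("suspicious_photo.jpg"):
    print("This image was probably machine-generated.")

The real difficulty, of course, is that detectors and generators improve against each other, so any such scanner would need constant updating, much like antivirus definitions do today.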
 
That may be true for now, but it's constantly improving; AI is 'learning', and there will come a day when there aren't any clues for the average person to catch. Even now, you have to watch very carefully to see that mouths aren't 100% synced with the words, and even that is hard to catch. Now it's like 95% synchronized.....
 
No, you just have to listen. I'm not talking about AI where you can see a 'person'... I'm talking about narration...
 
Those are easy to spot, of course, and so far I'm only hearing them on fairly inconsequential videos where they narrate a pet video or a rescued animal or stuff like that. But my concern is when people use it to misdirect us on health issues, or election issues and voting places, or investment videos, and we're starting to see a lot of those. Any of those can be life-changing decisions.
 
Yes, and that is going on currently... unfortunately... and it is a real concern.
 
Guess what? You can't do anything about this. You can pass laws that sound good, but then you have to police it. How long have they been trying to stop people downloading movies and music? A couple decades? How long have they been trying to remove hate speech from the web? Forever.

This is a consequence of both AI and the internet itself. My prediction is (just personal, opinion) that the internet will eventually be legislated. Further, access will be controlled, at a country level, and eventually a personal level. The free-for-all of the internet is eroding democracy, moral rights and wrongs, and decency. It is turning entire countries upon themselves. Someone, somewhere, will eventually draw a line in the sand.

Still, that's some way off. We should turn our minds not to the result of the problem (the video itself) but to how such laws could possibly be policed.
 

