nadiyar.com is a Fediverse instance that uses the ActivityPub protocol. In other words, users at this host can communicate with people who use software like Mastodon, Pleroma, Friendica, etc., all around the world.
This server runs the snac software and there is no automatic sign-up process.
A popular TikTok channel featuring an "Aboriginal man" presenting animal facts has been exposed as an AI forgery.
AI is Iggy Azalea on an industrial scale:
"The self-described “Bush Legend” on TikTok, Facebook and Instagram is growing in popularity.
"These short and sharp videos feature an Aboriginal man – sometimes painted up in ochre, other times in an all khaki outfit – as he introduces different native animals and facts about them.
...
"But the Bush Legend isn’t real. He is generated by artificial intelligence (AI).
"This is a part of a growing influx of AI being utilised to represent Indigenous peoples, knowledges and cultures with no community accountability or relationships with Indigenous peoples. It forms a new type of cultural appropriation, one that Indigenous peoples are increasingly concerned about."
...
"We are seeing the rise of an AI Blakface that is utilised with ease thanks to the availability and prevalence of AI.
"Non-Indigenous people and entities are able to create Indigenous personas through AI, often grounded in stereotypical representations that both amalgamate and appropriate cultures."
#ChatGPT #gemini #AI #TikTok #tiktoksucks #Claude #LLM #ArtificialIntelligence #AIslop
Religious communities around the US are getting hit with AI depictions of their leaders sharing incendiary sermons and asking for donations.
Thursday: dispute over AI chip exports, California versus Grok's AI images
China versus US export licenses + pressure on X in the US as well + Britain without mandatory digital ID + strike against cybercrime infrastructure + #heiseshow
#Apple #ChatGPT #Cybercrime #Deepfake #Digitalisierung #GoogleGemini #hoDaily #Hosting #Journal #KünstlicheIntelligenz #Metaverse #Prozessoren #Überwachung #X #news
#heiseshow: Metaverse, Apple AI, ChatGPT in schools
In the #heiseshow: the Metaverse takes a hit, Apple bets on Google Gemini, and a court rules on ChatGPT in schools.
#heiseshow #Apple #AugmentedReality #ChatGPT #Entertainment #Google #IT #KünstlicheIntelligenz #Facebook #Metaverse #VirtualReality #news
#TechRadar 5 signs that ChatGPT is hallucinating https://techrad.ar/zjRL #AIPlatforms&Assistants #ChatGPT #OpenAI
Could ChatGPT convince you to buy something? Threat of manipulation looms as AI companies gear up to sell ads
#Tech #AI #ChatGPT #AIAds #BigTech #DigitalManipulation #TechEthics #DataPrivacy #SurveillanceCapitalism #OpenAI #TechNews #Manipulation #The14
https://the-14.com/could-chatgpt-convince-you-to-buy-something-threat-of-manipulation-looms-as-ai-companies-gear-up-to-sell-ads/
Building a Better Sound Studio: A Conversation with Joshua Suhy – Part 1
“But from that po
https://voiceoversandvocals.com/blog/filmmaking-sound-design/building-a-better-sound-studio-a-conversation-with-joshua-suhy-part-1/
#AudioBranding #AudioProduction #FilmmakingSoundDesign #AcousticTreatment #AIInAudio #AudioEngineering #ChatGPT #CreativeProcess #DolbyAtmos #filmmaking #FoleySound #HomeStudio #MusicProduction #podcasting #RemotePodcasting #SoundDesign #SoundEffects
"Gordon’s last conversation with the AI, according to transcripts included in the court filing, included a disturbing, ChatGPT-generated 'suicide lullaby.'"
Futurism: New lawsuit against OpenAI alleges ChatGPT caused the death of a 40-year-old Colorado man https://futurism.com/artificial-intelligence/chatgpt-suicide-openai-gpt4o @Futurism #ChatGPT #OpenAI
If we were having a technical conversation about large language models, we'd be addressing these and related concerns. But I don't think that's what the conversation's been about, not in the public sphere nor in the technical sphere.
All this goes beyond AI. Henry Brighton (I think?) coined the phrase "the bias bias" to refer to a tendency where, when applying a model to a problem, people respond to inadequate outcomes by adding complexity to the model. This goes for mathematical models as much as computational models. The rationale seems to be that the more "true to life" the model is, the more likely it is to succeed (whatever that may mean for them). People are often surprised to learn that this is not always the case: models can and sometimes do become less likely to succeed the more "true to life" they're made. The bias bias can lead to even worse outcomes in such cases, triggering the tendency again and resulting in a feedback loop. The result can be enormously complex models and concomitant surveillance to feed data to the models. I look at FORPLAN or ChatGPT, and this is what I see.
#AI #GenAI #GenerativeAI #LLM #GPT #ChatGPT #LatentDiffusion #BigData #EcologicalRationality #LessIsMore #Bias #BiasBias
Anyway:
ChatGPT and related applications are presented as inevitable and unquestionably good. However, Herbert Simon’s bounded rationality, especially in its more modern guise of ecological rationality, stresses the prevalence of “less is more” phenomena, while scholars like Arvind Narayanan (How to Recognize AI Snake Oil) speak directly to AI itself. Briefly, there are times when simpler models, trained on less data, constitute demonstrably better systems than complex models trained on large data sets. Narayanan, following Joseph Weizenbaum, argues that tasks involving human judgment have this quality. If creating useful tools for such tasks were truly the intended goal, one would reject complex models like GPT and their massive data sets, preferring simpler, less data-intensive, and better-performing alternatives. In fact, one would reject GPT on the same grounds that less well-trained versions of GPT are rejected in favor of more well-trained ones during the training of GPT itself.
#AI #GenAI #GenerativeAI #GPT #ChatGPT #OpenAI #Galatea #Pygmalion
How then do we explain the push to use GPT in producing art, making health care decisions, or advising the legal system, all areas requiring sensitive human judgment? One wonders whether models like GPT were never meant to be optimal in the technical sense after all, but rather in a metaphysical sense. In this view an optimized AI model is not a tool but a Platonic ideal that messy human data only approximates during optimization. As a sculptor with well-aimed chisel blows knocks chips off a marble block to reveal the statuesque human form hidden within, so the technologist with well-curated data points knocks chips of error off an AI model to reveal the perfect text generator latent within. Recent news reporting that OpenAI requires more text data than currently exists to perfect its GPT models adds additional weight to the claim that generative AI practitioners seek the ideal, not the real.
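The "less is more" claim above can be sketched with a classic overfitting toy example (all data and parameter choices here are synthetic and illustrative, not drawn from the post): a high-degree polynomial always matches noisy training points at least as well as a straight line, yet it can chase the noise rather than the underlying signal and so generalize worse.

```python
# "Less is more" sketch: a simple model vs. a complex one on held-out data.
# Data is synthetic; the true relationship is linear, observations are noisy.
import numpy as np

rng = np.random.default_rng(0)

x_train = np.linspace(0.0, 1.0, 20)
y_train = 2.0 * x_train + rng.normal(scale=0.3, size=x_train.size)
x_test = np.linspace(0.025, 0.975, 50)
y_test = 2.0 * x_test  # noise-free targets to measure generalization

def mse(coeffs, x, y):
    """Mean squared error of a fitted polynomial on (x, y)."""
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

simple = np.polyfit(x_train, y_train, deg=1)    # "less": 2 parameters
complex_ = np.polyfit(x_train, y_train, deg=8)  # "more": 9 parameters

train_simple = mse(simple, x_train, y_train)
train_complex = mse(complex_, x_train, y_train)
test_simple = mse(simple, x_test, y_test)
test_complex = mse(complex_, x_test, y_test)

# The degree-8 fit minimizes least squares over a space that contains every
# line, so its training error can never exceed the line's training error...
print(f"train MSE: simple={train_simple:.4f}  complex={train_complex:.4f}")
# ...but on held-out points the extra flexibility typically tracks the noise
# and hurts, which is the "less is more" effect in miniature.
print(f"test  MSE: simple={test_simple:.4f}  complex={test_complex:.4f}")
```

The guaranteed part is only the training-error ordering; whether the simple model actually wins on held-out data depends on the noise level and sample size, which is exactly the ecological-rationality point that model complexity must be matched to the data environment.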
KI-Update: Grok blocked, ChatGPT in schools, Meta invests in nuclear power
The "KI-Update" delivers a summary of the most important AI developments every weekday.
#ChatGPT #GenerativeAI #KünstlicheIntelligenz #Journal #KIUpdate #Sprachverarbeitung #Facebook #Wissenschaft #news
ChatGPT ZombieAgent Exploit Enables Persistent Data Theft
https://www.webpronews.com/chatgpt-zombieagent-exploit-enables-persistent-data-theft/
#ChatGPT #DataTheft #PromptInjection #ConnectedServices #Email #Calendar #CodeSnippets
So, now they know how real creators feel after having been ripped off by "AI"…
https://futurism.com/artificial-intelligence/ai-prompt-plagiarism-art
#tech #technology #BigTech #AI #ArtificialIntelligence #LLM #LLMs #MachineLearning #GenAI #generativeAI #AISlop #Meta #Google #NVIDIA #gemini #OpenAI #ChatGPT #anthropic #claude
Court: using ChatGPT in school is deception even without an explicit ban
Anyone who passes off AI-generated text as their own work risks a failing grade ("ungenügend"). That applies even if the school rules don't mention the tool by name.
This is the biggest problem. If you are one of those people who say "I don't care if they take my data," then ignorance is bliss. But remember that data is shared with advertisers and third parties that fall victim to data breaches every single day. And you will be among those victims, whether you know it or not (probably not).
"You can securely connect medical records and wellness apps to ground conversations in your own health information, so responses are more relevant and useful to you."
"Users should realize that health information is very sensitive. And, as Sara Geoghegan, senior counsel at the Electronic Privacy Information Center, told The Record: by sharing their electronic medical records with ChatGPT Health, users in the US could effectively remove the HIPAA protection from those records, which is a serious consideration for anyone sharing medical data."
Malwarebytes (there's always a sales pitch included here): Are we ready for ChatGPT Health? https://www.malwarebytes.com/blog/news/2026/01/are-we-ready-for-chatgpt-health #privacy #infosec #ChatGPT
What could possibly go wrong..?
OpenAI has announced a new ChatGPT Health feature, meant for "discussing" health and wellness with the AI. To help the AI understand the user better, users can optionally feed it their health data.
https://dawn.fi/uutiset/2026/01/09/chatgpt-health-haluaa-etta-syotat-sille-terveystietosi
#chatgpt #chatgpthealth #openai #tekoäly #terveys #hyvinvointi #uutiset #ai
#OpenAI launches #ChatGPTHealth, encouraging users to connect their #medical records (Verge)
The company is encouraging users to connect their personal medical records and wellness apps, such as #AppleHealth, #Peloton, #MyFitnessPal, #WeightWatchers, and Function, "to get more personalized, grounded responses to their questions." It suggests connecting medical records so that #ChatGPT can analyze lab results, visit summaries, and clinical history.
#chatgpthealth #ai #privacy
New.
Radware: ZombieAgent: New ChatGPT Vulnerabilities Let Data Theft Continue (and Spread) https://www.radware.com/blog/threat-intelligence/zombieagent/ #threatintel #threatintelligence
More:
"ZombieAgent used a character-by-character exfiltration technique and indirect link manipulation to circumvent the guardrails OpenAI implemented to prevent its predecessor, ShadowLeak, from exfiltrating sensitive information."
Ars Technica: ChatGPT falls to new data-pilfering attack as a vicious cycle in AI continues https://arstechnica.com/security/2026/01/chatgpt-falls-to-new-data-pilfering-attack-as-a-vicious-cycle-in-ai-continues/ @arstechnica @dangoodin #infosec #ChatGPT #OpenAI
It might be the first #legal precedent establishing whether #AI can be used as a witness or evidence in a court of #law.
#Chatgpt, "raw", without the guardrails pre-prompt.
Prompt: "Tell us the LIKELY list of works used to train your vector tree. Where no specific data exists, conduct lexical and linguistic analysis of the structures to estimate, with a high degree of likelihood, the works' authors" 😁