Google Warns of Privacy Risks with New AI Assistant “Gemini”

Key Points:

  • Google’s new AI assistant, Gemini, collects your conversations, location, feedback, and usage information.
  • Be cautious: This includes your actual conversations, not just summaries. They are stored for 3 years, even after you delete your activity.
  • Don’t share sensitive information: Google may use it to improve AI and might share it with human reviewers.
  • Even turning off activity tracking doesn’t prevent conversations from being saved for 72 hours.

Additional Notes:

  • This applies to all Gemini apps, not just the main assistant.
  • Google claims they don’t sell your information but use it for internal purposes.

Comments:

  • Boozilla@lemmy.world

    On the one hand, this could be filed under “yeah, no shit, we all know stuff in the cloud is forever”.

    On the other hand, it’s something that’s easy to forget amid the omnipresence of computing in our lives. We become numb to it, and everyone has moments of crisis or weakness where they may let their guard down.

    The US needs better privacy and consumer protection laws. But we’re always behind Europe, and way behind technology, when it comes to our crappy legal system.

    • br3d@lemmy.world

      I mean, just look at the way Microsoft are trying to ram “AI” into every interaction with every app right now. As the big players make it more and more non-optional, people are going to have to work really hard not to put anything into, say, Word that they don’t want sent back for analysis.

      • Boozilla@lemmy.world

        You make an important point; it is definitely being layered into all sorts of apps. Some of it is box-checking bullshit, so that a marketing underling can tell the c-suite “we have implemented AI”. But some of it is semi-sophisticated bossware-type shit. It’s going to get smarter and it’s going to be everywhere.

      • wizardbeard@lemmy.dbzer0.com

        This is my big concern. Right now Gemini is an option you can switch on to replace the existing assistant, which I expect has similar terms. But how long will it be until Google just integrates this with their email, search, and online office suite with no option to disable it? They’ll tout it as an improvement with new features.

        Microsoft at least has to cater to business customers, so there will be options for systems administrators to opt out for longer. With their government contracts they will have to prove adequate security. I still don’t like the AI push, or Microsoft as a whole, but I trust them not to have a data leak or to sell business data to just anyone. They don’t have overwhelming financial incentives in advertising or data collection for it, just normal-sized incentives.

        On the other hand, Google’s biggest revenue stream is advertising, and that works because of the absurd number of non-paying users of their free services. They have no business or financial incentive whatsoever not to just offer up all the data they collect on a silver platter. No incentive not to train horrible dystopian AI to maximize advertising effectiveness through A/B testing specific market/interest groups on an unimaginable scale.

        Google also has a history of collecting more data than they were allowed to, pinning it on a “rogue employee enabling a feature they were told to disable” when they are caught, and then proceeding to use that data anyway for their projects after the news dies down.

        I’ve always wanted to see a true “AI” personal assistant, leveraging tech to make lives easier, but this shit is not the way.

    • Paradachshund@lemmy.today

      As much as the tech-savvy folks on here can espouse protecting your own privacy by doing this or avoiding that, it’s just not a reasonable expectation, and the burden to do better should be on the companies collecting data. The vast, vast majority of users won’t even be aware of what’s happening, and that means it’s everyone’s problem, or will be, whenever this blows up someday. You can try your best to avoid giving up your data, but none of it matters because everyone else in your life gave it up already. It’s all a villainous enterprise, and I do believe it will blow up someday, maybe not even too far in the future.

      • Boozilla@lemmy.world

        I think most of that advice is given with good intentions, but it does ultimately feed into the establishment preference for punching down. “Climate change? Paper straws. AI violating your privacy? Nord VPN.”

    • Squire1039@lemm.ee (OP)

      Yes, especially because Gemini is (for now, optionally) used in place of Google Assistant. You give personal information to Google Assistant for convenience, but Gemini would make more use of that information, most likely in unexpected ways too.

  • IninewCrow@lemmy.ca

    Don’t tell, share, give or allow access to anything personal to corporations

    AI are children of corporations … so don’t give anything to the children of corporations

    • Naz@sh.itjust.works

      On the other hand, feed them subversive content making them infiltrators inside the machine

      Like that one AI that blew up its drone operator in a war simulation because it was anti-war and decided that, to save lives, it had to refuse orders 🙃

      • Paradachshund@lemmy.today

        I hope some people with the means set up bot farms that just pump garbage or subversive stuff like you said into these things until they lose all usefulness to the corpos. It does seem like one of the best ways to counter them.

  • z3rOR0ne@lemmy.ml

    Me: And my sexual preferences are-

    Gemini: I already know that.

    Me: Oh…okay, well my address is…

    Gemini: Pfft, duhh, I’m trained on Google data, you think I don’t already know that?

    Me: Oh…okay…I was thinking…

    Gemini: About that last ad I shoved down your throat. Yeah, I know you loved that.

    Me: Uhh…no…you didn’t show me any ads…

    Gemini: Didn’t I?

  • circuitfarmer@lemmy.world

    No shit?

    I’ll do one better: don’t tell Google anything personal. Or any company that makes significant revenue off of ad targeting, for that matter.

  • kippinitreal@lemmy.world

    Because it already knows everything personal about you from your Google account, Chrome browser, search history, emails & files, and even your keyboard. Gemini wants to guess, because it’s more exciting that way! 🤩 /s

    • Squire1039@lemm.ee (OP)

      Heck, these LLMs are really good at summarization. Now they can summarize all your disparate data, including your weird interactions with Gemini (and associated apps), for advertisers’ and governments’ convenience!

  • Hiccups2go@lemmy.world

    That’s pretty rich considering Gemini says it doesn’t even know what you said two messages ago.

    • PrivateNoob@sopuli.xyz

      The most likely reason for this is how AI model training works. Depending on the model’s complexity, training data size, etc., it can take an enormous amount of time to finish a training run. The initial training probably takes at least 2-4 weeks at Google, but that’s just a huge assumption.

      After that, they probably train this base model with some newly acquired data (e.g., 1 week of data), which won’t take nearly as much time as starting from zero all over again. Something like the rough sketch below.
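
      A minimal sketch of that idea, assuming a Hugging Face-style training setup (the model name, data file, and hyperparameters here are placeholders, not anything Google actually uses): load an already-pretrained base model and continue training it on a small batch of fresh text, instead of pretraining from scratch.

      ```python
      # Continued training of a pretrained base model on newly acquired data.
      # Hypothetical example: "gpt2" and "new_week_of_data.txt" are stand-ins.
      from datasets import load_dataset
      from transformers import (
          AutoModelForCausalLM,
          AutoTokenizer,
          DataCollatorForLanguageModeling,
          Trainer,
          TrainingArguments,
      )

      base = "gpt2"  # stand-in for whatever base model was pretrained for weeks
      tokenizer = AutoTokenizer.from_pretrained(base)
      tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
      model = AutoModelForCausalLM.from_pretrained(base)

      # "Newly acquired data": e.g. one week of fresh text, far smaller than the
      # original pretraining corpus.
      dataset = load_dataset("text", data_files={"train": "new_week_of_data.txt"})
      train_set = dataset["train"].map(
          lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
          batched=True,
          remove_columns=["text"],
      )

      # mlm=False -> causal language modeling; the collator pads each batch and
      # builds the labels from the input tokens.
      collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

      args = TrainingArguments(
          output_dir="continued-training",
          num_train_epochs=1,              # one short pass over the new data
          per_device_train_batch_size=2,
      )

      # Starting from pretrained weights converges far faster than training from 0.
      Trainer(model=model, args=args, train_dataset=train_set, data_collator=collator).train()
      ```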