
  • I attended a federal contracting conference a few months ago, and they had one of these things (or a variant) walking around the lobby.

    From talking to the guy who was babysitting it, I learned they can operate autonomously in units or be controlled in a general way (think higher-level unit deployment and firing policies rather than individual remote control) given a satellite connection. In a panel at the same conference, they were discussing AI safety, and I asked:

    Given that AI seems to be developing from less complex tasks like chess (still complicated, obviously, but a constrained problem) to more complex, ill-defined tasks like image generation, it seems inevitable that we will develop AI capable of producing strategic or tactical plans, if we haven’t already. If two otherwise equally matched military units are fighting, it seems reasonable to believe that the one using an AI to make decisions within seconds would win over the one relying on human leadership, simply because it would react more quickly to changing battlefield conditions. That would create an enormous incentive for the US military to adopt AI-assisted strategic control, which would likely lead to units of autonomous weapons that are themselves controlled by an autonomous system. Do any of you have concerns about this, and if so, do you have any ideas about how we can mitigate the problem?

    (Paraphrasing, obviously, but this is close)

    The panel members looked at each other, looked at me, smiled, shrugged, and didn’t say anything. The moderator asked them explicitly if they would like to respond, and they all declined.

    I think we’re at the point where an AI could be used to create strategies, and I would be very surprised if no one were trying to do this. We already have autonomous weapons, and it’s only a matter of time before someone starts putting the two together. Yes, they will generally act reasonably, because they’ll be trained on human tactics across a variety of scenarios, but that will be cold comfort to dead civilians who happened to get in the way of a hallucinating strategic model.

    EDIT: I know I’m not actually addressing anything you said, but you seem to have thought about this a bit, and I was curious about what you thought of this scenario.