
  1. SS:

    DARPA is looking to predict, incentivize, and deter the future behaviors of the Pentagon’s adversaries by developing an algorithmic “theory of mind.”

    ***“The program will seek not only to understand an actor’s current strategy but also to find a decomposed version of the strategy into relevant basis vectors to track strategy changes under non-stationary assumptions”*** — DARPA

    The US Defense Advanced Research Projects Agency (DARPA) is putting together a research program called “Theory of Mind” with the goal of developing “new capabilities to enable national security decisionmakers to optimize strategies for deterring or incentivizing actions by adversaries,” according to a very brief special announcement.

    ***“The goal of an upcoming program will be to develop an algorithmic theory of mind to model adversaries’ situational awareness and predict future behavior”*** –DARPA

    According to DARPA, “The program will seek to combine algorithms with human expertise to explore, in a modeling and simulation environment, potential courses of action in national security scenarios with far greater breadth and efficiency than is currently possible.

    “This would provide decisionmakers with more options for incentive frameworks while preventing unwanted escalation.”

    ***“DARPA is interested in developing new capabilities to enable national security decisionmakers to optimize strategies for deterring or incentivizing actions by adversaries” — DARPA***
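    The “basis vectors” phrasing in the quote above suggests something like the following toy sketch (entirely my own illustration, with made-up numbers — not DARPA’s method): represent an adversary’s observed mixed strategy as a least-squares combination of a few assumed “pure” strategy profiles, and watch the coefficients drift over time.

    ```python
    # Hypothetical sketch of decomposing an observed strategy into basis
    # strategies and tracking changes in the coefficients. All profiles
    # and labels here are invented for illustration.
    import numpy as np

    # Columns are assumed basis strategies (e.g. escalate, negotiate, stall),
    # each a probability profile over 4 possible actions.
    basis = np.array([
        [0.70, 0.10, 0.10],
        [0.20, 0.60, 0.10],
        [0.05, 0.20, 0.20],
        [0.05, 0.10, 0.60],
    ])

    def decompose(observed):
        """Least-squares coefficients of the observed strategy in the basis."""
        coeffs, *_ = np.linalg.lstsq(basis, observed, rcond=None)
        return coeffs

    week1 = np.array([0.60, 0.20, 0.10, 0.10])  # mostly "escalate"
    week2 = np.array([0.30, 0.45, 0.10, 0.15])  # shifting toward "negotiate"

    # A growing weight on the second basis vector flags a strategy change,
    # even under non-stationary behavior where raw counts are noisy.
    drift = decompose(week2) - decompose(week1)
    print(drift)
    ```

    The point of the decomposition is that a shift shows up as a sparse, interpretable change in a few coefficients rather than a diffuse change across every observed action frequency.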

  2. That sounds airy-fairy. Minds are extremely complex objects; trying to model them with an algorithm is bound to fail IMO, because it requires oversimplifying them. Just the fact that the enemy knows such a tool exists will alter their decision-making process.

  3. mule_roany_mare on

    My completely uninformed gut says that these tools will probably not be effective for predicting a known individual’s behavior, but will be effective at the population level. They could still be useful if they’re just better than chance: even a model that performs worse than the average human becomes useful once it can scale up to, say, 340 million people.

    You can predict everyone in a room of 100 people with some success, but not know which prediction corresponds to which person.

    Can’t wait to see all the dangerous & destructive ways this could be utilized. You could manipulate entire economies as easily as a single stock.
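    The aggregate-vs-individual point above can be illustrated with a tiny simulation (my own made-up numbers, not from the article): when each person’s choice is only mildly predictable, guessing any one individual barely beats a coin flip, yet the population total is predicted almost exactly.

    ```python
    # Toy illustration: weak individual predictability, strong aggregate
    # predictability. The population size and probability are hypothetical.
    import random

    random.seed(0)
    N = 100_000   # hypothetical population
    p = 0.55      # true chance any one person picks option A

    choices = [random.random() < p for _ in range(N)]

    # Individual level: always guessing the more likely option A is right
    # only ~55% of the time -- barely better than chance.
    individual_accuracy = sum(choices) / N

    # Population level: the predicted count p*N lands within a fraction of
    # a percent of the actual count, so aggregate behavior is predictable.
    actual_count = sum(choices)
    relative_error = abs(actual_count - p * N) / N
    print(individual_accuracy, relative_error)
    ```

    This is just the law of large numbers at work: per-person noise averages out, which is exactly why a tool like this could be weak on any single target and still potent against a population (or an economy).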

  4. I’m curious how they will model unprecedented events, given the lack of training data and shifts in the geopolitical climate.

    The more we rely on autonomous agents as middlemen to information, the more important this becomes.

    My theory on measuring [consciousness](https://jdsemrau.substack.com/p/agency-predictive-processing-and) addresses some of these issues and also my work on game theory in adversarial agents ([1](https://jdsemrau.substack.com/p/game-theory-and-agent-reasoning-i),[2](https://jdsemrau.substack.com/p/game-theory-and-agent-reasoning-ii),[3](https://jdsemrau.substack.com/p/game-theory-and-agent-reasoning-iii))

  6. Just like the Army was trying to find psychics, and MKUltra was trying to mind control people? Good luck ya dumb bastards.

  7. Westworld spoiler alert:

    >!This is basically one of the big reveals of Westworld. As fancy and expensive as the parks are, they are a loss leader for the real business, which is data harvesting. They read guests’ minds via the cowboy hats that guests wear, and use that data to learn how each guest makes decisions. They store that data- how you think, how you make decisions- for each guest, and also generalize to humanity as a whole, based on different people’s different life experiences. The show got a lot of flak for going in a wildly different direction in seasons 2 – 4, but I thought it was absolutely fantastic.!<