Nash equilibrium computation

Nash equilibrium (NE) computation is a class of computational problems in the intersection of game theory and computer science. The input to this problem is a normal-form game, usually represented as a list of payoff matrices. The required output is a Nash equilibrium of the game.

NE computation problems can be broadly divided into two kinds: computing mixed-strategy NE and computing pure-strategy NE.

Mixed-strategy equilibria

When mixed strategies are allowed, every finite game has a Nash equilibrium. This was proved by John Nash in 1950[1] using the Kakutani fixed-point theorem, and later in 1951[2] using the Brouwer fixed-point theorem. For games with a small number of actions per player, a NE can be computed manually by solving a set of indifference equations. However, as the number of actions per player grows, the number of possible strategy profiles grows exponentially and the computation becomes intractable.
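
For a 2×2 game, for example, the indifference equations can be solved directly, as in the following sketch (illustrative, not taken from the cited sources; it assumes a fully mixed equilibrium exists):

    # A minimal sketch: computing the fully mixed NE of a 2x2 bimatrix game by solving
    # indifference equations. Assumes an interior equilibrium exists, so that the
    # denominators below are nonzero.
    import numpy as np

    def mixed_ne_2x2(A, B):
        """A, B: 2x2 payoff matrices of the row and column players.
        Returns (p, q): the probabilities of the first row and the first column."""
        # q makes the row player indifferent between her two rows.
        q = (A[1, 1] - A[0, 1]) / (A[0, 0] - A[0, 1] - A[1, 0] + A[1, 1])
        # p makes the column player indifferent between her two columns.
        p = (B[1, 1] - B[1, 0]) / (B[0, 0] - B[1, 0] - B[0, 1] + B[1, 1])
        return p, q

    # Matching pennies: both players mix 50-50.
    p, q = mixed_ne_2x2(np.array([[1., -1.], [-1., 1.]]), np.array([[-1., 1.], [1., -1.]]))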

Non-polynomial-time algorithms

There are various algorithms that work well in practice, but do not guarantee termination in polynomial time. One of the most famous such algorithms is the Lemke–Howson algorithm.[3]

Porter, Nudelman and Shoham[4] present an algorithm based on simple search heuristics, that performs well in practice on a large variety of games. They use the GAMUT testbed for testing the performance of their algorithm.

Lipton, Markakis and Mehta[5] presented a quasi-polynomial-time algorithm for computing an approximate NE. It takes time n^(O(log n / ε^2)), where n is the number of actions per player and ε is the approximation accuracy. They do so by proving that there exist approximate NE strategies with support logarithmic in n, and that the payoffs to all players in any exact NE can be ε-approximated by such an approximate NE. They also prove that, if the payoff matrices have constant rank, then an exact NE can be found in polytime.
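
The following sketch illustrates the support-enumeration idea behind their algorithm (an illustrative rendering, assuming payoffs in [0,1]; the support-size constant is taken from their analysis): it searches over uniform mixtures on multisets of logarithmically many pure strategies and tests each candidate pair for being an ε-equilibrium.

    # A sketch of the support-enumeration idea of Lipton, Markakis and Mehta
    # (illustrative; payoffs assumed in [0,1]).
    import itertools
    import numpy as np

    def is_eps_ne(A, B, x, y, eps):
        # x, y: mixed strategies of the row and column players; A, B: their payoff matrices.
        return (A @ y).max() - x @ A @ y <= eps and (x @ B).max() - x @ B @ y <= eps

    def lmm_search(A, B, eps):
        m, n = A.shape
        k = max(1, int(np.ceil(12 * np.log(max(m, n)) / eps ** 2)))  # support-size bound
        for S in itertools.combinations_with_replacement(range(m), k):
            x = np.bincount(S, minlength=m) / k       # uniform mixture on the multiset S
            for T in itertools.combinations_with_replacement(range(n), k):
                y = np.bincount(T, minlength=n) / k
                if is_eps_ne(A, B, x, y, eps):
                    return x, y
        return None   # by the existence theorem, this is not reached for eps matching the bound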

Computational hardness

Daskalakis, Goldberg and Papadimitriou[6] proved that finding a NE is PPAD-complete in games with four or more players; later, Chen and Deng[7] extended the result even to two-player games (bimatrix games). Under standard complexity assumptions, these hardness results imply that no polynomial-time algorithm for general equilibrium computation is likely to exist.

Computing a Nash equilibrium is PPAD-complete even for win-lose bimatrix games, that is, two-player games in which the payoff of each player is either 0 or 1.[citation needed]

Approximation algorithms

Tsaknakis and Spirakis[8] presented a polytime algorithm that finds a 0.3393-approximate NE of a bimatrix game. Their algorithm minimizes a certain function, representing the distance from a NE, using gradient descent. The procedure converges in polynomial time to a local optimum, which is a 0.3393-approximate NE.

Deligkas, Fearnley, Savani and Spirakis[9] extend the descent techniques to polymatrix games, attaining a (0.5+δ)-approximate NE in time polynomial in the input size and 1/δ. For general n-player games, the approximation ratio increases with n (e.g. it is 0.6022 for n=3 and 0.7153 for n=4).

Approximation hardness

The PPAD-completeness results in [6][7] in fact show that computing an ε-approximate NE is PPAD-complete, if ε is exponentially small (smaller than 2^(-m), where m is the number of actions per player).

Chen, Deng and Teng[10] proved PPAD-hardness even for ε that is polynomially small. In other words, they proved that no algorithm with runtime polynomial in n and 1/ε can compute an ε-approximate Nash equilibrium in a two-player game with n actions per player, unless PPAD ⊆ P. In particular, this means that there is probably no FPTAS for NE.

Aviad Rubinstein[11] showed that finding an ε-approximate Nash equilibrium is PPAD-complete even for a constant ε and even for a simple class of games: graphical games of degree three in which each agent has only two actions. In particular, there is no PTAS for NE in general games unless PPAD ⊆ P. He also proved inapproximability results for related problems, such as Bayesian Nash equilibrium in two-player games, relative ε-Nash equilibrium in two-player games, market equilibrium in non-monotone markets, and approximate competitive equilibrium from equal incomes.

Later, Rubinstein[12] proved that, assuming the Exponential Time Hypothesis for PPAD, there exists a positive constant ε such that computing an ε-approximate NE in a two-player game with n actions per player requires quasi-polynomial time, matching the runtime of the algorithm of Lipton, Markakis and Mehta.[5]

Smoothed complexity

Smoothed analysis has been used to prove that many problems that are computationally hard in the worst case are in fact "almost always" easy: if a problem instance is perturbed randomly, then the perturbed instance is easy. Interestingly, this is not the case for the problem of computing a NE. In particular:

  • Chen, Deng and Teng[10] proved that no algorithm for computing NE in a two-player game has smoothed complexity polynomial in n and 1/s, where s is the magnitude of the input perturbation, unless PPAD ⊆ RP. In particular, the smoothed complexity of the Lemke–Howson algorithm is probably not polynomial.
  • Boodaghians, Brakensiek, Hopkins and Rubinstein[13] prove that computing NE in a 2-player game is PPAD-hard (under randomized reductions) even when smoothing with noise of constant magnitude.

Irrationality of outcomes

Two-player games with rational payoff matrices always have at least one NE in which all probabilities are rational numbers. However, there are three-player games with rational payoff matrices in which every NE requires irrational probabilities, which cannot be written down exactly in finite time. An example was already shown by Nash[2]: 293 : it is a simplified variant of poker for three players, with a unique NE in which all probabilities are linear functions of sqrt(321), which is an irrational number. Another example can be found in [14]: Proposition 3 : it is a three-player game in which each player has only two possible actions, "0" and "1". There is a unique NE in which all players choose "0" with probabilities that are linear functions of sqrt(409).

Datta[15] shows that every real algebraic variety is isomorphic to the set of totally mixed Nash equilibria of some three-person game, and also to the set of totally mixed Nash equilibria of an n-person game in which each player has two actions. This means that, in each of these classes, there are games whose probabilities require roots of any real polynomial.[clarification needed]

Orzech and Rinard[16] show that, for any n ≥ 4, there is an n-player game where all payoffs are in {0,1,2}, with a unique NE in which all the probabilities are irradical numbers (algebraic numbers that cannot be expressed with m-th roots for any integer m).

Two-player zero-sum games

Two-player zero-sum games represent the most fundamental class with polynomial-time equilibrium computation. In these games, one player's gain equals the other's loss, creating a pure conflict scenario. The key insight is that NE in zero-sum games correspond to minimax strategies, which can be computed via linear programming. For a zero-sum game with payoff matrix A for the row player, the minimax theorem of John von Neumann establishes that the game value can be computed by solving dual linear programs.[17] Since linear programming can be solved in polynomial time using interior-point methods or the ellipsoid algorithm, NE can be computed in time polynomial in the size of the payoff matrices.
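
As an illustration, the following sketch (an assumed formulation using SciPy's general-purpose linear-programming solver, not a game-specific library) computes the value of the game and a maximin strategy for the row player:

    # A minimal sketch: computing a maximin strategy for the row player of a zero-sum game
    # by linear programming, using SciPy.
    import numpy as np
    from scipy.optimize import linprog

    def zero_sum_equilibrium(A):
        """Return (value, row_strategy) for the zero-sum game with payoff matrix A (payoffs
        to the row player). Solves: max v subject to A^T x >= v, sum(x) = 1, x >= 0."""
        m, n = A.shape
        c = np.zeros(m + 1)
        c[-1] = -1.0                          # linprog minimizes, so minimize -v
        # For every column j:  v - sum_i x_i A[i, j] <= 0
        A_ub = np.hstack([-A.T, np.ones((n, 1))])
        b_ub = np.zeros(n)
        # Probabilities sum to 1.
        A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
        b_eq = np.array([1.0])
        bounds = [(0, None)] * m + [(None, None)]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
        return res.x[-1], res.x[:m]

    # Example: matching pennies has value 0 and maximin strategy (1/2, 1/2).
    value, x = zero_sum_equilibrium(np.array([[1.0, -1.0], [-1.0, 1.0]]))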

Anonymous games

In an anonymous game, the payoff of each player depends only on his own strategy and on the number of players choosing each strategy, but not on their identities. Anonymous games admit efficient computation of approximate NE. In particular:

  • Daskalakis and Papadimitriou[18] presented polytime approximation algorithms for games with many players but few strategies (s strategies per player). Their algorithm computes an approximate pure NE whose approximation guarantee depends on s and on the Lipschitz constant L of the utility functions. They also show a PTAS for mixed NE when s=2. In a later work,[19] they prove that approximate mixed NE in anonymous games can be computed in polytime to any desired accuracy if the number of strategies is a constant (i.e., there is a PTAS). Moreover, if the utility functions are Lipschitz continuous, one can compute in polytime an approximate pure NE whose quality depends on the number of strategies and the Lipschitz constant. If the game has two strategies, there always exists an approximate NE in which either only a small number of players randomize, or all players who randomize do so in the same way.
  • Cheng, Diakonikolas and Stewart[20] present other algorithms for finding approximate mixed NE in anonymous games, in which the players' strategies are "simple": each player plays one strategy with probability 1−δ, for some small δ, and plays uniformly at random with probability δ.
  • In contrast, Chen, Durfee and Orfanou[21] prove that, if the approximation parameter is exponentially small in the number of players, then the problem is PPAD-complete when there are at least 7 strategies per player.

Graphical games

Graphical games represent strategic interactions using graphs where each player's payoff depends only on neighbors' actions, enabling algorithms that exploit sparsity.

In general, computing NE on graphical games is still PPAD-hard, even if the graph degree is at most 3 and each player has only 2 actions.[22] However, when the graph is a tree with a bounded degree, more efficient algorithms are possible.

Kearns, Littman and Singh[23] presented a dynamic-programming algorithm for computing all NE of a tree-graphical game (with two actions per player); its run-time is exponential in the worst case, but it can compute approximate NE in polynomial time. The algorithm requires only messages between neighboring nodes, and thus can be implemented in a distributed fashion.

Kearns, Littman and Singh also proposed a modification intended to find a single NE in polytime. However, Elkind, Goldberg and Goldberg[22] showed that this modification is incorrect (it does not always find a NE). Building on the same ideas, Elkind, Goldberg and Goldberg proposed a new algorithm that computes all NE in quadratic time on a path graph, and in polynomial time on a general graph of degree at most 2. For general trees, the run-time of this algorithm is exponential even when the tree has bounded degree and pathwidth 2, and the same holds for any algorithm of the same kind. Finding a NE of a degree-3 graphical game with constant pathwidth and 2 actions per player is PPAD-complete.

Bramoullé, Kranton and D'Amours[24] study graphical games in which the best-response function of each player is a linear function of the actions of his neighbors. They prove that the Nash equilibrium depends only on a single parameter of the graph: the smallest eigenvalue of the matrix representing the graph. Hence, it can be computed efficiently.[clarification needed]

Win-lose games

Win-lose games are games in which all payoffs are either 0 or 1.

  • Codenotti, Leoncini and Resta[25] presented a linear-time algorithm for win-lose bimatrix games where the number of winning positions per strategy of each of the players is at most two.
  • Liu, Li and Deng[26] showed that, for polymatrix games, approximating a Nash equilibrium with polynomial precision is PPAD-hard, even for sparse win-lose games.

Bounded-rank bimatrix games

Rank-1 bimatrix games are games in which the sum of the two payoff matrices has rank 1, i.e., it can be written as an outer product of two vectors. Adsul, Garg, Mehta and Sohoni[27] presented a polytime algorithm for computing an exact NE of such games. They also presented an algorithm to enumerate all the NE of a rank-1 game.

For bounded-rank bimatrix games more generally, Lipton, Markakis and Mehta[5] prove that, if the payoff matrices have constant rank, then an exact NE can be found in polytime.

Auction games

Auctions with dominant strategies (e.g., second-price auctions and some special cases of combinatorial auctions[28]) clearly have pure-strategy NE that can be computed efficiently. But even for first-price auctions, which do not have dominant strategies, it is often possible to compute a NE analytically, using knowledge of the agents' value distributions. For example,[29] when all n buyers have valuations drawn independently and uniformly from [0,1], the symmetric equilibrium bidding function is b(v) = (n-1)v/n.
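
As a sanity check of this formula, the following sketch (illustrative; the helper names are ad hoc) numerically maximizes a single bidder's expected utility against opponents who bid according to b(v) = (n-1)v/n and confirms that the maximizer coincides with that bid:

    # A minimal sketch: numerically verifying that b(v) = (n-1)v/n is a symmetric equilibrium
    # bid in a first-price auction with n buyers and i.i.d. Uniform[0,1] valuations.
    import numpy as np

    def expected_utility(bid, value, n):
        # Opponents bid b(v) = (n-1)v/n, so this bid wins whenever every opponent's value is
        # below bid*n/(n-1); the win probability is that quantity raised to the power n-1.
        p_win = min(bid * n / (n - 1), 1.0) ** (n - 1)
        return (value - bid) * p_win

    n, value = 3, 0.8
    grid = np.linspace(0.0, value, 1001)
    best_bid = grid[np.argmax([expected_utility(b, value, n) for b in grid])]
    # best_bid is numerically close to (n-1)*value/n = 0.5333...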

Pure-strategy equilibria

A pure-strategy Nash equilibrium (PNE) is not guaranteed to exist in general games, though there are special classes of games in which a PNE always exists.

Deciding existence

Gottlob, Greco and Scarcello[30] prove that, even in very restrictive settings, deciding the existence of a PNE is NP-hard. Deciding the existence of a strong Nash equilibrium is even harder: it is Σ2P-complete (complete for the second level of the polynomial hierarchy).

However, when the utility function for each player depends only on the actions of a logarithmically small number of other players (that is, the dependency graph of the game has a small degree), then finding a PNE can be done in polytime. For graphical games, these problems are LOGCFL-complete.[31]
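
Without such structure, deciding the existence of a PNE essentially amounts to checking all strategy profiles, as in the following brute-force sketch (purely illustrative; its running time is exponential in the number of players):

    # A brute-force sketch for deciding PNE existence (exponential in the number of players).
    from itertools import product

    def find_pure_ne(num_actions, utility):
        """num_actions[i]: number of actions of player i.
        utility(i, profile): payoff of player i under the strategy profile (a tuple)."""
        for profile in product(*(range(k) for k in num_actions)):
            if all(utility(i, profile) >= utility(i, profile[:i] + (a,) + profile[i + 1:])
                   for i in range(len(num_actions)) for a in range(num_actions[i])):
                return profile          # a pure Nash equilibrium
        return None                     # no PNE exists

    # Example: a two-player coordination game (payoff 1 if both choose the same action).
    profile = find_pure_ne([2, 2], lambda i, prof: 1 if prof[0] == prof[1] else 0)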

Anonymous games

Daskalakis and Papadimitriou[18][19] proved that anonymous games with Lipschitz-continuous utility functions and at most s strategies per player always contain an approximate PNE, whose approximation guarantee depends on s and on the Lipschitz constant L of the utility functions; such a PNE can be computed in polytime.

Graphical games

Gottlob, Greco and Scarcello[30] present algorithms for graphical games on graphs of bounded treewidth. They show that a PNE can be found in polynomial time using tree-decomposition techniques. The algorithm's complexity is exponential in the treewidth but polynomial in the game size.

Potential games and congestion games

Potential games always have a PNE, since they satisfy the finite improvement property: every sequence of unilateral beneficial deviations terminates at a Nash equilibrium.[32] This immediately gives a simple algorithm: start with any strategy profile; while some player can improve their payoff by switching strategies, let that player switch to a better strategy; return the final strategy profile, which is guaranteed to be a PNE. However, this algorithm might go through exponentially many different states before it converges.
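
The following sketch (illustrative; the game encoding is ad hoc) implements this improvement dynamic for a congestion game, where termination is guaranteed by Rosenthal's potential function:

    # A minimal sketch of better-response dynamics in a congestion game. Each strategy is a
    # set of resources; delay[r] maps the load on resource r to its delay. Every improving
    # move decreases Rosenthal's potential, so the loop terminates.
    def player_cost(p, profile, strategies, delay):
        load = {}
        for q, s in enumerate(profile):
            for r in strategies[q][s]:
                load[r] = load.get(r, 0) + 1
        return sum(delay[r](load[r]) for r in strategies[p][profile[p]])

    def better_response_dynamics(strategies, delay, profile):
        improved = True
        while improved:
            improved = False
            for p in range(len(strategies)):
                current = player_cost(p, profile, strategies, delay)
                for s in range(len(strategies[p])):
                    trial = list(profile)
                    trial[p] = s
                    if player_cost(p, trial, strategies, delay) < current:
                        profile, improved = trial, True
                        break
        return profile      # a pure Nash equilibrium

    # Two players choosing between two parallel links with delays x and 1.5x.
    strategies = [[{0}, {1}], [{0}, {1}]]
    delay = {0: lambda k: k, 1: lambda k: 1.5 * k}
    equilibrium = better_response_dynamics(strategies, delay, [0, 0])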

Fabrikant, Papadimitriou and Talwar[33] studied the special case of congestion games (which are equivalent to exact potential games). They proved that:

  • PNE can be found in polytime in the special case of symmetric network congestion games. Such a game is represented by a directed graph with two nodes marked as "source" and "target". The set of actions available to each player is the set of paths from the source to the target. For each edge, there is a delay which is a function of the number of players who use it. The payoff to each player is minus the total delay on his chosen path. In this special case, a PNE can be computed in polynomial time by maximizing the potential, through reduction to min-cost flow.
  • The algorithm can be adapted to nonatomic congestion games: if the utility functions of all players are Lipschitz continuous with constant L, then an ε-approximate PNE can be computed in strongly-polynomial time: polynomial in n, L and 1/ε.
  • In contrast, for asymmetric network congestion games (which are network congestion games in which each player has a different source and a different target), computing a PNE is PLS-complete. The same holds for non-network congestion games (games in which all agents have the same set of strategies, but it is not the set of paths in any graph), whether symmetric or not. The proof implies that there are examples of such games in which some improvement paths are exponentially long. It also implies that the problem of finding a PNE reachable from a given input state is PSPACE-complete.
  • The class of ordinal potential games is even larger than the class of congestion games: every problem in the class PLS can be presented as an ordinal potential game, where the set of PNE coincides with the set of local optima.

Even-Dar, Kesselman and Mansour[34] analyze the number of steps required for convergence to equilibrium in a load-balancing setting.

Caragiannis, Fanelli, Gravin and Skopalik[35] study asymmetric congestion games. They present an algorithm that computes a constant-factor approximate PNE. In particular:

  • With linear delay functions, the approximation ratio is 2+ε, and the runtime is polynomial in the number of players, the number of resources, and 1/ε.
  • When delay functions are degree-d polynomials, the approximation ratio is d^O(d).

Their algorithm identifies a short sequence of best-response moves that leads to an approximate equilibrium. They also show that, for congestion games with more general delay functions (not bounded-degree polynomials), attaining any polynomial approximation of a PNE is PLS-complete.

Fabrikant, Jaggard and Schapira[36] study weakly-acyclic games - a generalization of potential games in which, from any starting state, there is a sequence of better-response moves that leads to a PNE, so that natural distributed dynamics cannot enter "inescapable oscillations".

Generalized congestion games

A weighted CG is a congestion game in which each player may have a different weight; the congestion on a resource is the sum of the weights of the players using it, and its delay is a function of this congestion. A player-specific-cost CG is a congestion game in which each player has a different delay function. Neither of these generalizations necessarily has a potential function, and neither necessarily has a PNE. Milchtaich[37] proves that deciding whether a given network CG has a PNE is NP-hard even in the following cases:

  • Weighted network CG with two players with the same delay functions; all players are allowed to use all paths; all delay functions are nonnegative.
  • Unweighted network CG with two players with different delay functions; all delay functions are nonnegative.

The proof is by reduction from the directed edge-disjoint paths problem.[38]

Fotakis, Kontogiannis and Spirakis[39] present an algorithm that, in any weighted network CG with linear delay functions, finds a PNE in pseudo-polynomial time (polynomial in the number of players n and the sum of players' weights W). Their algorithm is a greedy best-response algorithm: players enter the game in descending order of their weight, and choose a best-response to existing players' strategies.
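
The following sketch illustrates the greedy idea on the special case of weighted players choosing among parallel links with proportional (linear) delays; the actual algorithm handles general weighted network congestion games with linear delay functions:

    # A sketch of greedy best-response for weighted players (illustrated on parallel links
    # with linear delays; the algorithm of Fotakis, Kontogiannis and Spirakis handles general
    # weighted network congestion games with linear delay functions).
    def greedy_assignment(weights, slopes):
        # slopes[r]: delay per unit of load on link r.
        loads = [0.0] * len(slopes)
        assignment = {}
        for p in sorted(range(len(weights)), key=lambda p: -weights[p]):   # heaviest first
            # Best response: the link on which this player would experience the least delay.
            r = min(range(len(slopes)), key=lambda r: slopes[r] * (loads[r] + weights[p]))
            assignment[p] = r
            loads[r] += weights[p]
        return assignment       # a pure Nash equilibrium of the parallel-links game

    assignment = greedy_assignment(weights=[3.0, 2.0, 2.0, 1.0], slopes=[1.0, 1.0])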

Panagopoulou and Spirakis[40] show empirical evidence that the algorithm of Fotakis, Kontogiannis and Spirakis in fact runs in time polynomial in n and log W. They also propose an initial strategy vector that dramatically speeds up this algorithm.

Caragiannis, Fanelli, Gravin and Skopalik[41] present an algorithm that computes a constant-factor approximate PNE in weighted CGs. In particular:

  • With linear delay functions, the approximation ratio is a constant, and the runtime is polynomial in the number of players, the number of resources, and 1/ε.
  • When delay functions are degree-d polynomials, the approximation ratio is a constant that depends only on d.

To prove their results, they show that, although weighted CGs may not have a potential function, every weighted CG can be approximated by a certain potential game. This lets them show that every weighted CG has a (d!)-approximate PNE. Their algorithm identifies a short sequence of best-response moves that leads to such an approximate PNE.

Submodular games

A finite game is called submodular if (a) the set of feasible joint decisions is a sublattice, and (b) the cost function of each player is submodular and has antitone differences. Topkis[42] proves, using fixed-point arguments, that such submodular games have a PNE. Two algorithms, corresponding to fictitious play in dynamic games, generate sequences of feasible joint decisions that converge monotonically to a PNE.

Concave games

Concave games (where each player's payoff function is concave in their own strategy) admit efficient computation via convex optimization techniques.[43]

Correlated equilibria

Correlated equilibria are, in general, computationally easier to find.

Hart and Mas-Colell[44] present a dynamic called 'regret matching', in which players depart from their current play with probabilities proportional to their regret for not having used other strategies in the past. Under these dynamics, the empirical distribution of play converges with probability 1 to the set of correlated equilibria.
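
The following sketch is an illustrative rendering of conditional regret matching in the spirit of their procedure (the normalization constant is chosen crudely); the empirical joint distribution of play that it returns approaches the set of correlated equilibria:

    # A sketch of conditional regret matching, in the spirit of Hart and Mas-Colell, for a
    # bimatrix game with payoff matrices A (row player) and B (column player).
    import numpy as np

    def regret_matching(A, B, T=50000, seed=0):
        rng = np.random.default_rng(seed)
        m, n = A.shape
        mu = 2 * max(m, n) * max(np.ptp(A), np.ptp(B), 1.0)   # large-enough normalization
        R1 = np.zeros((m, m))   # R1[j, k]: cumulative regret of the row player for not switching j -> k
        R2 = np.zeros((n, n))
        i, j = rng.integers(m), rng.integers(n)
        counts = np.zeros((m, n))
        for t in range(1, T + 1):
            counts[i, j] += 1
            R1[i, :] += A[:, j] - A[i, j]   # regret of switching from row i, against column j
            R2[j, :] += B[i, :] - B[i, j]
            # Switch away from the current action with probability proportional to positive regret.
            p = np.maximum(R1[i, :], 0) / (t * mu)
            p[i] = 1.0 - p.sum()
            q = np.maximum(R2[j, :], 0) / (t * mu)
            q[j] = 1.0 - q.sum()
            i, j = rng.choice(m, p=p), rng.choice(n, p=q)
        return counts / T       # empirical joint distribution of play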

Papadimitriou and Roughgarden[45] studied the computational problem of finding a correlated equilibrium (CE). They presented a polytime algorithm for finding a CE in various multiplayer games, such as graphical games, symmetric (anonymous) games, polymatrix games, congestion games, scheduling games and local-effect games.

They also studied the problem of optimizing a linear function over the set of CE. For this problem, they give a polytime algorithm for two cases, symmetric games and graphical games of bounded treewidth, and prove NP-hardness for other classes of games.
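
For a bimatrix game, the set of CE is a polytope described by linear incentive constraints, which is why such optimization is a linear program. The following sketch (an assumed formulation using SciPy, with the sum of payoffs as one possible linear objective) computes a CE by linear programming:

    # A minimal sketch: computing a correlated equilibrium of a bimatrix game as a linear
    # program over joint distributions p(i, j), maximizing the sum of the players' payoffs.
    import numpy as np
    from scipy.optimize import linprog

    def correlated_equilibrium(A, B):
        m, n = A.shape
        cons = []
        # Row player: for each recommended row i and deviation k, obeying i is at least as good.
        for i in range(m):
            for k in range(m):
                c = np.zeros((m, n)); c[i, :] = A[k, :] - A[i, :]
                cons.append(c.ravel())
        # Column player: analogous constraints over recommended columns.
        for j in range(n):
            for k in range(n):
                c = np.zeros((m, n)); c[:, j] = B[:, k] - B[:, j]
                cons.append(c.ravel())
        A_ub, b_ub = np.array(cons), np.zeros(len(cons))
        A_eq, b_eq = np.ones((1, m * n)), np.array([1.0])
        res = linprog(-(A + B).ravel(), A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0, None)] * (m * n))
        return res.x.reshape(m, n)

    # Example: the game of chicken; the payoff-maximizing CE randomizes over three outcomes.
    p = correlated_equilibrium(np.array([[6, 2], [7, 0]]), np.array([[6, 7], [2, 0]]))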

Repeated games

In a repeated game, the strategy of each player is a finite-state machine (FSM). Gilboa[46] studies the following problem: given a game and the FSMs of n−1 of the players, find a best-response FSM for the n-th player. The problem is polytime solvable when n is fixed, but computationally hard when n is part of the input.

Littman and Stone[47] present a polytime algorithm computing NE for an average-payoff repeated game between two players. Their algorithm relies on the folk theorem. It computes finite-state equilibrium strategies which can be succinctly represented.

Gilboa and Zemel[48] study the following decision problems:

  1. Is there a NE in which all players' expected payoff is at least r (given in the input)? This is NP-hard (NP-complete for 2 players).
  2. Is there a unique NE? This is NP-hard (CoNP-complete for 2 players).
  3. Is there a NE in which each player i plays only actions from a subset Ti (given in the input)? This is NP-hard (NP-complete for 2 players).
  4. Is there a NE in which each player i plays all actions from a subset Ti (given in the input) with positive probability? This is NP-hard (NP-complete for 2 players).
  5. Is there a NE in which each player i plays at least r (given in the input) actions with positive probability? This is NP-hard (NP-complete for 2 players).
  6. Is there a NE in which each player i plays at most r (given in the input) actions with positive probability? This is NP-hard (NP-complete for 2 players; NP-hard even for zero-sum games).

For correlated equilibria, problems 1–5 are polytime solvable, whereas problem 6 is still NP-hard even for zero-sum games (NP-complete for any number of players).

Conitzer and Sandholm[49] prove the following hardness results, even for symmetric games for two players:

  • It is NP-hard to approximate some maximization problems on the set of NE;
  • It is #P-hard to count the Nash equilibria;
  • It is NP-complete to decide whether a pure-strategy Bayes–Nash equilibrium exists in a Bayesian game;
  • It is PSPACE-hard to decide whether a pure-strategy Nash equilibrium exists in a Markov game.

Bilò and Mavronicolas[14] study the following decision problems:

  • Does there exist a NE in which all probabilities are rational numbers? This problem is NP-hard.
  • Does there exist a NE in which at least one probability is irrational? This problem is NP-hard.
  • Are the NE sets of two given games equivalent? This problem is coNP-hard.
  • Is there a Nash reduction between two given games? This problem is NP-hard.

Recent developments and modern approaches

Machine learning methods

Deep learning approaches have emerged as promising techniques for large-scale equilibrium computation. Li, Long and Deng[50] introduce the Deep Iterative Nash Equilibrium Solver (DINES), that integrates deep learning into iterative algorithms, achieving polynomial time complexity by leveraging query-based access to utility functions rather than requiring full utility matrices.

Reinforcement learning approaches have also enabled advances in Nash equilibrium computation. Zhang, Wang, Cui, Zhou, Kakade and Du[51] introduce preference-based multi-agent reinforcement learning (PbMARL), which addresses Nash equilibrium identification from preference-only offline datasets. They show that single-policy coverage, which suffices for single-agent reinforcement learning, is inadequate for multi-agent settings, which instead require unilateral dataset coverage conditions.

Generative adversarial networks

Generative adversarial networks (GANs) train a generative model by framing training as a game between a generator and a discriminator. Heusel, Ramsauer, Unterthiner, Nessler and Hochreiter[52] introduce a two time-scale update rule (TTUR). Using the theory of stochastic approximation, they prove that it converges to a local Nash equilibrium of the GAN training game.

Blockchain systems

Reynouard, Laraki and Gorelkina[53] apply Nash equilibrium analysis to blockchain systems through the BAR Nash equilibrium (BARNE), an equilibrium among Byzantine, altruistic, and rational agents. This framework addresses the verifier's dilemma in cryptocurrency systems, demonstrating how fines and forced errors can re-establish honest behavior as a globally stable equilibrium.

Software and practical implementations

Computational tools

Gambit is the primary comprehensive software package for game theory computations, supporting both extensive-form and strategic-form games.[54] Version 16.0 includes implementations of the Lemke-Howson algorithm, simplicial subdivision methods, and quantal response equilibrium computation, with both GUI and command-line interfaces plus Python integration.

Specialized tools include Game Theory Explorer (GTE) for web-based small game analysis, and various research implementations focusing on specific algorithms. Integration with modern deep learning frameworks (PyTorch, TensorFlow) enables scalable implementations and GPU acceleration for large-scale problems.

Scalability challenges

Computational scaling presents the primary practical limitation, with exponential growth in complexity as games increase in size. Current approaches handle small-to-medium games (up to roughly 100×100 strategies) through approximation algorithms, while larger games require heuristic methods.

Numerical stability issues arise from degenerate linear systems and floating-point precision limitations in equilibrium computation. Rational arithmetic implementations provide exact computation but at significant computational cost, making ε-equilibria the practical standard for providing robustness to numerical errors.

Further reading

  • Several chapters of the Algorithmic Game Theory book[55] survey topics related to complexity of equilibrium computation.

See also

References

  1. ^ Nash, John F. (1950). "Equilibrium points in n-person games". PNAS. 36 (1): 48–49. Bibcode:1950PNAS...36...48N. doi:10.1073/pnas.36.1.48. PMC 1063129. PMID 16588946.
  2. ^ a b Nash, John (1951). "Non-Cooperative Games". Annals of Mathematics. 54 (2): 286–295. doi:10.2307/1969529. JSTOR 1969529.
  3. ^ Lemke, C. E.; Howson, J. T. (1964). "Equilibrium points of bimatrix games". SIAM Journal on Applied Mathematics. 12 (2): 413–423. doi:10.1137/0112033.
  4. ^ Porter, Ryan; Nudelman, Eugene; Shoham, Yoav (2008-07-01). "Simple search methods for finding a Nash equilibrium". Games and Economic Behavior. Second World Congress of the Game Theory Society. 63 (2): 642–662. doi:10.1016/j.geb.2006.03.015. ISSN 0899-8256.
  5. ^ a b c Lipton, Richard J.; Markakis, Evangelos; Mehta, Aranyak (2003-06-09). "Playing large games using simple strategies". Proceedings of the 4th ACM conference on Electronic commerce. EC '03. New York, NY, USA: Association for Computing Machinery. pp. 36–41. doi:10.1145/779928.779933. ISBN 978-1-58113-679-1.
  6. ^ a b Daskalakis, Constantinos; Goldberg, Paul W.; Papadimitriou, Christos H. (2009). "The complexity of computing a Nash equilibrium". SIAM Journal on Computing. 39 (1): 195–259.
  7. ^ a b Chen, Xi; Deng, Xiaotie (2006). "Settling the complexity of two-player Nash equilibrium". 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2006). pp. 261–272.
  8. ^ Tsaknakis, Haralampos; Spirakis, Paul G. (2007). Deng, Xiaotie; Graham, Fan Chung (eds.). "An Optimization Approach for Approximate Nash Equilibria". Internet and Network Economics. Berlin, Heidelberg: Springer: 42–56. doi:10.1007/978-3-540-77105-0_8. ISBN 978-3-540-77105-0.
  9. ^ Deligkas, Argyrios; Fearnley, John; Savani, Rahul; Spirakis, Paul (2017-02-01). "Computing Approximate Nash Equilibria in Polymatrix Games". Algorithmica. 77 (2): 487–514. doi:10.1007/s00453-015-0078-7. ISSN 1432-0541.
  10. ^ a b Chen, Xi; Deng, Xiaotie; Teng, Shang-hua (October 2006). "Computing Nash Equilibria: Approximation and Smoothed Complexity". 2006 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS'06). pp. 603–612. arXiv:cs/0602043. doi:10.1109/FOCS.2006.20. ISBN 0-7695-2720-5.
  11. ^ Rubinstein, Aviad (2015-06-14). "Inapproximability of Nash Equilibrium". Proceedings of the forty-seventh annual ACM symposium on Theory of Computing. STOC '15. New York, NY, USA: Association for Computing Machinery. pp. 409–418. arXiv:1405.3322. doi:10.1145/2746539.2746578. ISBN 978-1-4503-3536-2.
  12. ^ Rubinstein, Aviad (2016-08-29), Settling the complexity of computing approximate two-player Nash equilibria, arXiv:1606.04550
  13. ^ Boodaghians, Shant; Brakensiek, Joshua; Hopkins, Samuel B.; Rubinstein, Aviad (2020-07-21), Smoothed Complexity of 2-player Nash Equilibria, arXiv:2007.10857
  14. ^ a b Bilò, Vittorio; Mavronicolas, Marios (2014-04-01). "Complexity of Rational and Irrational Nash Equilibria". Theory of Computing Systems. 54 (3): 491–527. doi:10.1007/s00224-013-9523-7. ISSN 1433-0490.
  15. ^ Datta, Ruchira S. (August 2003). "Universality of Nash Equilibria". Mathematics of Operations Research. 28 (3): 424–432. arXiv:math/0301001. doi:10.1287/moor.28.3.424.16397. ISSN 0364-765X.
  16. ^ Orzech, Edan; Rinard, Martin (2025-07-12), Nash Equilibria with Irradical Probabilities, arXiv:2507.09422
  17. ^ von Neumann, John (1928). "Zur Theorie der Gesellschaftsspiele". Mathematische Annalen. 100 (1): 295–320. doi:10.1007/BF01448847.
  18. ^ a b Daskalakis, Constantinos; Papadimitriou, Christos (October 2007). "Computing Equilibria in Anonymous Games". 48th Annual IEEE Symposium on Foundations of Computer Science (FOCS'07). pp. 83–93. arXiv:0710.5582. doi:10.1109/FOCS.2007.24. ISBN 978-0-7695-3010-9.
  19. ^ a b Daskalakis, Constantinos; Papadimitriou, Christos H. (2015-03-01). "Approximate Nash equilibria in anonymous games". Journal of Economic Theory. Computer Science and Economic Theory. 156: 207–245. doi:10.1016/j.jet.2014.02.002. ISSN 0022-0531.
  20. ^ Cheng, Yu; Diakonikolas, Ilias; Stewart, Alistair (January 2017), "Playing Anonymous Games using Simple Strategies", Proceedings of the 2017 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), Proceedings, Society for Industrial and Applied Mathematics, pp. 616–631, doi:10.1137/1.9781611974782.40, ISBN 978-1-61197-478-2, retrieved 2025-07-29
  21. ^ Chen, Xi; Durfee, David; Orfanou, Anthi (2015-06-14). "On the Complexity of Nash Equilibria in Anonymous Games". Proceedings of the forty-seventh annual ACM symposium on Theory of Computing. STOC '15. New York, NY, USA: Association for Computing Machinery. pp. 381–390. doi:10.1145/2746539.2746571. ISBN 978-1-4503-3536-2.
  22. ^ a b Elkind, Edith; Goldberg, Leslie Ann; Goldberg, Paul (2006-06-11). "Nash equilibria in graphical games on trees revisited". Proceedings of the 7th ACM conference on Electronic commerce. EC '06. New York, NY, USA: Association for Computing Machinery. pp. 100–109. doi:10.1145/1134707.1134719. ISBN 978-1-59593-236-5.
  23. ^ Kearns, Michael; Littman, Michael L.; Singh, Satinder (2001). "Graphical models for game theory". Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence: 253–260. arXiv:1301.2281.
  24. ^ Bramoullé, Yann; Kranton, Rachel; D'Amours, Martin (March 2014). "Strategic Interaction and Networks". American Economic Review. 104 (3): 898–930. doi:10.1257/aer.104.3.898. ISSN 0002-8282.
  25. ^ Codenotti, Bruno; Leoncini, Mauro; Resta, Giovanni (2006). "Efficient Computation of Nash Equilibria for Very Sparse Win-Lose Bimatrix Games". In Azar, Yossi; Erlebach, Thomas (eds.). Algorithms – ESA 2006. Lecture Notes in Computer Science. Vol. 4168. Berlin, Heidelberg: Springer. pp. 232–243. doi:10.1007/11841036_23. hdl:11380/641688. ISBN 978-3-540-38876-0.
  26. ^ Liu, Zhengyang; Li, Jiawei; Deng, Xiaotie (2021-05-18). "On the Approximation of Nash Equilibria in Sparse Win-Lose Multi-player Games". Proceedings of the AAAI Conference on Artificial Intelligence. 35 (6): 5557–5565. doi:10.1609/aaai.v35i6.16699. ISSN 2374-3468.
  27. ^ Adsul, Bharat; Garg, Jugal; Mehta, Ruta; Sohoni, Milind (2011-06-06). "Rank-1 bimatrix games: A homeomorphism and a polynomial time algorithm". Proceedings of the forty-third annual ACM symposium on Theory of computing. STOC '11. New York, NY, USA: Association for Computing Machinery. pp. 195–204. doi:10.1145/1993636.1993664. ISBN 978-1-4503-0691-1.
  28. ^ Lehmann, Daniel; O'Callaghan, Liadan; Shoham, Yoav (2002). "Truth revelation in approximately efficient combinatorial auctions". Journal of the ACM. 49 (5): 577–602. doi:10.1145/585265.585266.
  29. ^ Krishna, Vijay (2009). Auction Theory. Academic Press. ISBN 978-0123745071.
  30. ^ a b Gottlob, Georg; Greco, Gianluigi; Scarcello, Francesco (2005-09-01). "Pure Nash equilibria: hard and easy games". J. Artif. Int. Res. 24 (1): 357–406. arXiv:1109.2152. doi:10.1613/jair.1683. ISSN 1076-9757.
  31. ^ Gottlob, Georg; Greco, Gianluigi; Scarcello, Francesco (2003-06-20). "Pure Nash equilibria: Hard and easy games". Proceedings of the 9th conference on Theoretical aspects of rationality and knowledge. TARK '03. New York, NY, USA: Association for Computing Machinery. pp. 215–230. doi:10.1145/846241.846269. ISBN 978-1-58113-731-6.
  32. ^ Monderer, Dov; Shapley, Lloyd S. (1996). "Potential Games". Games and Economic Behavior. 14 (1): 124–143. doi:10.1006/game.1996.0044.
  33. ^ Fabrikant, Alex; Papadimitriou, Christos; Talwar, Kunal (2004-06-13). "The complexity of pure Nash equilibria". Proceedings of the thirty-sixth annual ACM symposium on Theory of computing. STOC '04. New York, NY, USA: Association for Computing Machinery. pp. 604–612. doi:10.1145/1007352.1007445. ISBN 978-1-58113-852-8. S2CID 1037326.
  34. ^ Even-Dar, Eyal; Kesselman, Alex; Mansour, Yishay (2003). "Convergence Time to Nash Equilibria". In Baeten, Jos C. M.; Lenstra, Jan Karel; Parrow, Joachim; Woeginger, Gerhard J. (eds.). Automata, Languages and Programming. Lecture Notes in Computer Science. Vol. 2719. Berlin, Heidelberg: Springer. pp. 502–513. doi:10.1007/3-540-45061-0_41. ISBN 978-3-540-45061-0.
  35. ^ Caragiannis, Ioannis; Fanelli, Angelo; Gravin, Nick; Skopalik, Alexander (2011-10-01). "Efficient Computation of Approximate Pure Nash Equilibria in Congestion Games". 2011 IEEE 52nd Annual Symposium on Foundations of Computer Science. pp. 532–541. arXiv:1104.2690. doi:10.1109/FOCS.2011.50. ISBN 978-0-7695-4571-4. S2CID 14879292.
  36. ^ Fabrikant, Alex; Jaggard, Aaron D.; Schapira, Michael (2013-07-01). "On the Structure of Weakly Acyclic Games". Theory of Computing Systems. 53 (1): 107–122. doi:10.1007/s00224-013-9457-0. ISSN 1433-0490.
  37. ^ Milchtaich, Igal (2015-08-01). "Network topology and equilibrium existence in weighted network congestion games". International Journal of Game Theory. 44 (3): 515–541. doi:10.1007/s00182-014-0443-9. hdl:10419/95995. ISSN 1432-1270. S2CID 253723798.
  38. ^ Fortune, Steven; Hopcroft, John; Wyllie, James (1980-02-01). "The directed subgraph homeomorphism problem". Theoretical Computer Science. 10 (2): 111–121. doi:10.1016/0304-3975(80)90009-2. ISSN 0304-3975.
  39. ^ Fotakis, Dimitris; Kontogiannis, Spyros; Spirakis, Paul (2005-12-08). "Selfish unsplittable flows". Theoretical Computer Science. Automata, Languages and Programming: Algorithms and Complexity (ICALP-A 2004). 348 (2): 226–239. doi:10.1016/j.tcs.2005.09.024. ISSN 0304-3975.
  40. ^ Panagopoulou, Panagiota N.; Spirakis, Paul G. (2007-02-09). "Algorithms for pure Nash equilibria in weighted congestion games". ACM Journal of Experimental Algorithmics. 11: 2.7–es. doi:10.1145/1187436.1216584. ISSN 1084-6654. S2CID 17903962.
  41. ^ Caragiannis, Ioannis; Fanelli, Angelo; Gravin, Nick; Skopalik, Alexander (2015-03-27). "Approximate Pure Nash Equilibria in Weighted Congestion Games: Existence, Efficient Computation, and Structure". ACM Transactions on Economics and Computation. 3 (1): 2:1–2:32. doi:10.1145/2614687. ISSN 2167-8375. S2CID 5581666.
  42. ^ Topkis, Donald M. (November 1979). "Equilibrium Points in Nonzero-Sum n-Person Submodular Games". SIAM Journal on Control and Optimization. 17 (6): 773–787. doi:10.1137/0317054. ISSN 0363-0129.
  43. ^ Rosen, J. B. (1965). "Existence and Uniqueness of Equilibrium Points for Concave N-Person Games". Econometrica. 33 (3): 520–534. doi:10.2307/1911749. hdl:2060/19650010164. ISSN 0012-9682.
  44. ^ Hart, Sergiu; Mas-Colell, Andreu (2000). "A Simple Adaptive Procedure Leading to Correlated Equilibrium". Econometrica. 68 (5): 1127–1150. doi:10.1111/1468-0262.00153. hdl:10230/525. ISSN 1468-0262.
  45. ^ Papadimitriou, Christos H.; Roughgarden, Tim (2008-08-06). "Computing correlated equilibria in multi-player games". J. ACM. 55 (3): 14:1–14:29. doi:10.1145/1379759.1379762. ISSN 0004-5411.
  46. ^ Gilboa, Itzhak (1988-08-01). "The complexity of computing best-response automata in repeated games". Journal of Economic Theory. 45 (2): 342–352. doi:10.1016/0022-0531(88)90274-8. ISSN 0022-0531.
  47. ^ Littman, Michael L.; Stone, Peter (2003-06-09). "A polynomial-time nash equilibrium algorithm for repeated games". Proceedings of the 4th ACM conference on Electronic commerce. EC '03. New York, NY, USA: Association for Computing Machinery. pp. 48–54. doi:10.1145/779928.779935. ISBN 978-1-58113-679-1.
  48. ^ Gilboa, Itzhak; Zemel, Eitan (1989-03-01). "Nash and correlated equilibria: Some complexity considerations". Games and Economic Behavior. 1 (1): 80–93. doi:10.1016/0899-8256(89)90006-7. ISSN 0899-8256.
  49. ^ Conitzer, Vincent; Sandholm, Tuomas (2008-07-01). "New complexity results about Nash equilibria". Games and Economic Behavior. Second World Congress of the Game Theory Society. 63 (2): 621–641. doi:10.1016/j.geb.2008.02.015. ISSN 0899-8256.
  50. ^ Li, Ningyuan; Long, Tianyan; Deng, Xiaotie (2024-10-04). Solving Nash Equilibrium Scalably via Deep-Learning-Augmented Iterative Algorithms. International Conference on Learning Representations.
  51. ^ Zhang, Natalia; Wang, Xinqi; Cui, Qiwen; Zhou, Runlong; Kakade, Sham M.; Du, Simon S. (2025-01-09), Preference-Based Multi-Agent Reinforcement Learning: Data Coverage and Algorithmic Techniques, arXiv:2409.00717
  52. ^ Heusel, Martin; Ramsauer, Hubert; Unterthiner, Thomas; Nessler, Bernhard; Hochreiter, Sepp (2017). "GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium". arXiv:1706.08500 [cs.LG].
  53. ^ Reynouard, Maxime; Laraki, Rida; Gorelkina, Olga (2024-01-31), BAR Nash Equilibrium and Application to Blockchain Design, arXiv:2401.16856
  54. ^ Richard D. McKelvey, Andrew M. McLennan (2006). "Gambit: Software Tools for Game Theory". jmvidal.cse.sc.edu. Retrieved 2025-07-27. Gambit website.
  55. ^ Vazirani, Vijay V.; Nisan, Noam; Roughgarden, Tim; Tardos, Éva (2007). Algorithmic Game Theory (PDF). Cambridge, UK: Cambridge University Press. ISBN 0-521-87282-0.