Karmarkar's algorithm is an algorithm introduced by Narendra Karmarkar in 1984 for solving linear programming problems. It was the first reasonably efficient algorithm that solves these problems in polynomial time. The ellipsoid method is also polynomial time but proved to be inefficient in practice.
Where n is the number of variables and L is the number of bits of input to the algorithm, Karmarkar's algorithm requires O(n^{3.5}L) operations on O(L)-digit numbers, as compared to O(n^{6}L) such operations for the ellipsoid algorithm. The runtime of Karmarkar's algorithm is thus O(n^{3.5}L^{2}·log L·log log L) using FFT-based multiplication (see Big O notation).
Karmarkar's algorithm falls within the class of interior point methods: the current guess for the solution does not follow the boundary of the feasible set as in the simplex method, but it moves through the interior of the feasible region, improving the approximation of the optimal solution by a definite fraction with every iteration, and converging to an optimal solution with rational data.^{[1]}
The algorithm
Consider a linear programming problem in matrix form:
maximize c^{T}x
subject to Ax ≤ b.
Karmarkar's algorithm determines the next feasible direction toward optimality and scales back by a factor 0 < γ ≤ 1. It is described in a number of sources.^{[2]}^{[3]}^{[4]}^{[5]}^{[6]}^{[7]}
Since the actual algorithm is rather complicated, researchers looked for a more intuitive version of it, and in 1985 developed affine scaling, a version of Karmarkar's algorithm that uses affine transformations where Karmarkar used projective ones, only to realize four years later that they had reinvented an algorithm published by the Soviet mathematician I. I. Dikin in 1967.^{[8]} The affine-scaling method can be described succinctly as follows.^{[9]} Note that the affine-scaling algorithm, while applicable to small-scale problems, is not a polynomial-time algorithm. Karmarkar has also extended the method^{[10]}^{[11]}^{[12]}^{[13]} to solve problems with integer constraints and non-convex problems.^{[14]}
Algorithm Affine-Scaling
Input: A, b, c, x^{0}, stopping criterion, γ.
k ← 0
do while stopping criterion not satisfied
    v^{k} ← b − Ax^{k}
    D_{k} ← diag(v^{k}_{1}, ..., v^{k}_{m})
    h_{x} ← (A^{T}D_{k}^{−2}A)^{−1}c
    h_{v} ← −Ah_{x}
    if h_{v} ≥ 0 then
        return unbounded
    end if
    α ← γ·min{−v^{k}_{i}/(h_{v})_{i} | (h_{v})_{i} < 0}
    x^{k+1} ← x^{k} + αh_{x}
    k ← k + 1
end do
 "←" is a shorthand for "changes to". For instance, "largest ← item" means that the value of largest changes to the value of item.
 "return" terminates the algorithm and outputs the value that follows.
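The iteration above can be sketched in Python with NumPy (a minimal illustration, not a production implementation; the function name, the default γ = 0.5, and the step-size-based stopping rule are our choices, not prescribed by the pseudocode):

```python
import numpy as np

def affine_scaling(A, b, c, x0, gamma=0.5, tol=1e-9, max_iter=1000):
    """Affine-scaling iteration for:  maximize c^T x  subject to  Ax <= b.

    x0 must be strictly feasible (A x0 < b componentwise).
    Returns the final iterate; raises if the problem appears unbounded.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        v = b - A @ x                        # slack vector v^k, strictly positive
        Dinv2 = np.diag(1.0 / v**2)          # D_k^{-2}
        hx = np.linalg.solve(A.T @ Dinv2 @ A, c)   # ascent direction h_x
        hv = -A @ hx                         # induced change in the slacks h_v
        if np.all(hv >= 0):
            raise ValueError("problem appears unbounded")
        # largest step keeping every slack positive, scaled back by gamma
        mask = hv < 0
        alpha = gamma * np.min(-v[mask] / hv[mask])
        if alpha * np.linalg.norm(hx) < tol:
            break
        x = x + alpha * hx
    return x
```

Starting from any strictly interior point, each step moves a fraction γ of the distance to the boundary along the steepest-ascent direction of the rescaled problem.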
Example
Consider the linear program
maximize x_{1} + x_{2}
subject to 2px_{1} + x_{2} ≤ p^{2} + 1 for p = 0.0, 0.1, 0.2, ..., 0.9, 1.0.
That is, there are 2 variables x_{1}, x_{2} and 11 constraints associated with varying values of p. The accompanying figure shows each iteration of the algorithm as red circle points. The constraints are shown as blue lines.
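Since the optimum of a bounded linear program lies at a vertex of the feasible region, this small example can be checked by brute force: intersect each pair of constraint boundary lines and keep the best feasible point (a pure-Python sketch; the helper names are ours):

```python
from itertools import combinations

# Constraints 2*p*x1 + x2 <= p**2 + 1 for p = 0.0, 0.1, ..., 1.0,
# plus x1 >= 0 and x2 >= 0, written uniformly as a*x1 + b*x2 <= rhs.
ps = [i / 10 for i in range(11)]
rows = [(2 * p, 1.0, p * p + 1) for p in ps] + [(-1.0, 0.0, 0.0), (0.0, -1.0, 0.0)]

def intersect(r1, r2):
    """Intersection of the two boundary lines, or None if parallel."""
    a1, b1, c1 = r1
    a2, b2, c2 = r2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

def feasible(pt):
    return all(a * pt[0] + b * pt[1] <= rhs + 1e-9 for a, b, rhs in rows)

best = max(pt[0] + pt[1]
           for r1, r2 in combinations(rows, 2)
           for pt in [intersect(r1, r2)]
           if pt is not None and feasible(pt))
print(best)  # optimal value of the example, 1.25 up to floating-point error
```

The maximum 1.25 is attained along the edge of the p = 0.5 constraint x_{1} + x_{2} ≤ 1.25, which is what the affine-scaling iterates approach from the interior.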
Patent controversy — Can mathematics be patented?
At the time he invented the algorithm, Narendra Karmarkar was employed by AT&T. After he applied the algorithm to optimizing AT&T's telephone network,^{[15]} the company realized that his invention could be of practical importance. In April 1985, AT&T promptly applied for a patent on Karmarkar's algorithm.
The patent became more fuel for the ongoing controversy over software patents.^{[16]} It left many mathematicians uneasy; Ronald Rivest (himself one of the holders of the patent on the RSA algorithm) expressed the opinion that research had proceeded on the basis that algorithms should be free. Even before the patent was granted, it was argued that there might have been applicable prior art.^{[17]} Mathematicians who specialized in numerical analysis, including Philip Gill and others, claimed that Karmarkar's algorithm is equivalent to a projected Newton barrier method with a logarithmic barrier function, if the parameters are chosen suitably.^{[18]} The legal scholar Andrew Chin opines that Gill's argument was flawed, insofar as the method they describe does not constitute an "algorithm", since it requires choices of parameters that do not follow from the internal logic of the method but rely on external guidance, essentially from Karmarkar's algorithm.^{[19]} Furthermore, Karmarkar's contributions are considered far from obvious in light of all prior work, including Fiacco–McCormick, Gill, and others cited by Saltzman.^{[19]}^{[20]}^{[21]} The patent was debated in the U.S. Senate and granted in recognition of the essential originality of Karmarkar's work, as U.S. Patent 4,744,026, "Methods and apparatus for efficient resource allocation", in May 1988.
AT&T designed a vector multiprocessor computer system specifically to run Karmarkar's algorithm, calling the resulting combination of hardware and software KORBX,^{[22]} and marketed this system at a price of US$8.9 million.^{[23]}^{[24]} Its first customer was the Pentagon.^{[25]}^{[26]}
Opponents of software patents have further argued that the patent ruined the positive interaction cycles that previously characterized the relationship between researchers in linear programming and industry, and specifically that it isolated Karmarkar himself from the network of mathematical researchers in his field.^{[27]}
The patent itself expired in April 2006, and the algorithm is presently in the public domain.
References
 Ilan Adler, Narendra Karmarkar, Mauricio G.C. Resende and Geraldo Veiga (1989). "An Implementation of Karmarkar's Algorithm for Linear Programming". Mathematical Programming, Vol 44, p. 297–335.
 Narendra Karmarkar (1984). "A New Polynomial Time Algorithm for Linear Programming", Combinatorica, Vol 4, nr. 4, p. 373–395.
 ^ Strang, Gilbert (1 June 1987). "Karmarkar's algorithm and its place in applied mathematics". The Mathematical Intelligencer (New York: Springer) 9 (2): 4–10. doi:10.1007/BF03025891. ISSN 0343-6993. MR 883185.
 ^ http://dl.acm.org/citation.cfm?id=808695
 ^ http://link.springer.com/article/10.1007%2FBF02579150
 ^ http://onlinelibrary.wiley.com/doi/10.1002/j.15387305.1989.tb00316.x/abstract
 ^ Karmarkar, N. K., "An Interior Point Approach to NP-Complete Problems – Part I", AMS series on Contemporary Mathematics 114, pp. 297–308 (June 1990). http://www.ams.org/books/conm/114/conm114endmatter.pdf
 ^ Karmarkar, N. K., "Riemannian Geometry Underlying Interior Point Methods for Linear Programming", AMS series on Contemporary Mathematics 114, pp. 51–75 (June 1990). http://www.ams.org/books/conm/114/conm114endmatter.pdf
 ^ Karmarkar, N. K., Lagarias, J. C., Slutsman, L., and Wang, P., "Power Series Variants of Karmarkar-Type Algorithms", AT&T Technical Journal 68, No. 3, May/June (1989).
 ^ Vanderbei, R. J.; Lagarias, J. C. (1990). "I. I. Dikin's Convergence Result for the Affine-Scaling Algorithm" (PDF). Contemporary Mathematics 114.
 ^ Robert J. Vanderbei; Meketon, Marc; Freedman, Barry (1986). "A Modification of Karmarkar's Linear Programming Algorithm" (PDF). Algorithmica 1: 395–407. doi:10.1007/BF01840454.
 ^ Karmarkar, N. K., "Interior Point Methods in Optimization", Proceedings of the Second International Conference on Industrial and Applied Mathematics, SIAM, pp. 160–181 (1991).
 ^ Karmarkar, N. K. and Kamath, A. P., "A Continuous Approach to Deriving Upper Bounds in Quadratic Maximization Problems with Integer Constraints", Recent Advances in Global Optimization, pp. 125–140, Princeton University Press (1992).
 ^ Karmarkar, N. K., Thakur, S. A., "An Interior Point Approach to a Tensor Optimisation Problem with Application to Upper Bounds in Integer Quadratic Optimization Problems", Proceedings of Second Conference on Integer Programming and Combinatorial Optimisation (May 1992).
 ^ Kamath, A., Karmarkar, N. K., "A Continuous Method for Computing Bounds in Integer Quadratic Optimisation Problems", Journal of Global Optimization (1992).
 ^ Karmarkar, N. K., Beyond Convexity: New Perspectives in Computational Optimization. Springer Lecture Notes in Computer Science LNCS 6457, Dec 2010
 ^ Sinha L.P., Freedman, B. A., Karmarkar, N. K., Putcha, A., and Ramakrishnan K.G., Overseas Network Planning, Proceedings of the Third International Network Planning Symposium, NETWORKS' 86, Tarpon Springs, Florida (June 1986).
 ^ Kolata, Gina (1989-03-12). "IDEAS & TRENDS; Mathematicians Are Troubled by Claims on Their Recipes". The New York Times.
 ^ Various posts by Matthew Saltzman, Clemson University
 ^ Gill, Philip E.; Murray, Walter; Saunders, Michael A.; Tomlin, J. A.; Wright, Margaret H. (1986). "On projected Newton barrier methods for linear programming and an equivalence to Karmarkar’s projective method". Mathematical Programming 36 (2): 183–209. doi:10.1007/BF02592025.
 ^ ^{a} ^{b} Andrew Chin (2009). "On Abstraction and Equivalence in Software Patent Doctrine: A Response to Bessen, Meurer and Klemens" (PDF). Journal of Intellectual Property Law 16: 214–223.
 ^ Mark A. Paley (1995). "The Karmarkar Patent: Why Congress Should 'Open the Door' to Algorithms as Patentable Subject Matter". 22 COMPUTER L. REP. 7
 ^ Margaret H. Wright (2004). "The Interior-Point Revolution in Optimization: History, Recent Developments, and Lasting Consequences" (PDF). Bulletin of the American Mathematical Society 42: 39–56.
 ^ Marc S. Meketon; Y.C. Cheng, D.J. Houck, J.M.Liu, L. Slutsman, Robert J. Vanderbei, and P. Wang (1989). "The AT&T KORBX System". AT&T Technical Journal 68: 7–19.
 ^ Lowenstein, Roger (15 August 1988). "AT&T markets problem solver, based on math whiz's find, for $8.9 million" (PDF). Wall Street Journal.
 ^ Markoff, John (13 August 1988). "Big A.T.&T. Computer for Complexities".
 ^ http://www.apnewsarchive.com/1989/MilitaryIsFirstAnnouncedCustomerOfATTSoftware/id8a376783cd62cdf141de700a7c948f61
 ^ http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=70419&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D70419
 ^ "今野浩: カーマーカー特許とソフトウェア – 数学は 特許に なるか (Konno Hiroshi: The Karmarkar Patent and Software – Has Mathematics Become Patentable?)". FFII. Retrieved 2008-06-27.