Linear programming – A powerful problem-solving method that works effectively in practice despite being provably slow in the worst case

There was a time when scientists had to predict nature through pure reasoning and then look to observations for confirmation. There was a time when practitioners could barely scan through all possible solutions to a problem and had to ask mathematicians for help. And there was a time when mathematics labelled a method as hard in principle (its classic algorithm, the simplex method, is exponential in the worst case), yet it turned out to be an excellent fit in practice. Among such methods is Linear Programming (LP).

This post gives a brief overview of how LP works, suggests several useful solvers, and hopefully makes you comfortable enough to use them, even though adapting LP to your own problem takes, to say the least, some effort. In other words, this is an introduction from an engineering point of view. Readers interested in the underlying theory can easily find plenty of helpful blog posts or optimization textbooks.



LP is an optimization method: given a set of variables together with a linear objective function, it finds the combination of values that optimizes the objective, subject to linear constraints on the variables' domain.

More formally, given a vector of variables x = (x1, x2, …, xn), two vectors of fixed values c and b, and a matrix of fixed values A, where cT denotes the transpose of c, we can express our problem [1] as

maximize: cTx

subject to Ax <= b

and x >= 0

Simply put, as long as you can formulate a problem in this particular form, it is likely to be solvable. An LP solver can also identify an infeasible problem, one where no values of x satisfy the constraints, so the term likely here refers to the running time.
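To make the standard form concrete, here is a minimal sketch that feeds a tiny instance of it to SciPy's `scipy.optimize.linprog` (an assumption: that SciPy is installed; the numbers below are made up for illustration). One wrinkle to note: `linprog` minimizes, so a maximization is passed as -c.

```python
import numpy as np
from scipy.optimize import linprog

# A tiny instance of the standard form: maximize c^T x subject to Ax <= b, x >= 0.
# linprog minimizes by default, so we negate c to maximize.
c = np.array([3.0, 2.0])          # objective coefficients
A = np.array([[1.0, 1.0],
              [2.0, 1.0]])        # constraint matrix
b = np.array([4.0, 6.0])          # right-hand sides

res = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None), (0, None)])
print(res.x)      # optimal x, here (2, 2)
print(-res.fun)   # optimal objective value (undo the negation), here 10
```

The pattern is always the same: objective vector, constraint matrix, right-hand side, variable bounds, and the solver does the rest.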

Example 1: Given a bag of coins, if I allow you to take at most 40 coins, how can you maximize the total value of the selected coins? You know that there are 3 types of coins: Galleon ($6.64), Sickle ($0.39), and Knut ($0.01) [2], and you cannot take more than 20 Galleons or 10 Sickles, but there is no limit on Knuts. Calling the numbers of coins, in decreasing order of value, g, s, and k, the formulation is as follows

maximize: 6.64 * g + 0.39 * s + 0.01 * k

subject to

g + s + k <= 40

g <= 20

s <= 10

and g >= 0, s >= 0, k >= 0

One does not need to sketch anything on paper to see that the optimal solution is (g, s, k) = (20, 10, 10). However, this example serves to familiarize you with the formulation process rather than to demonstrate the full capacity of the method.
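To double-check that intuition, the coin formulation can be handed to a solver directly; here is a sketch using SciPy's `linprog` (again assuming SciPy is available, and negating the objective since `linprog` minimizes):

```python
from scipy.optimize import linprog

# Coin example: maximize 6.64*g + 0.39*s + 0.01*k
# subject to g + s + k <= 40, g <= 20, s <= 10, and all variables >= 0.
# linprog minimizes, so the objective is negated.
c = [-6.64, -0.39, -0.01]
A_ub = [[1, 1, 1]]                       # at most 40 coins in total
b_ub = [40]
bounds = [(0, 20), (0, 10), (0, None)]   # per-coin limits as variable bounds

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
g, s, k = res.x
print(g, s, k, -res.fun)   # (20, 10, 10) with total value 136.8
```

Note that the per-coin limits are passed as variable bounds rather than extra rows of A, which is both shorter to write and cheaper for the solver.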

Note that all variables must appear in the first order to preserve linearity; in other words, a term may only contain g, not g², g³, or any higher power.

When I first came across this method, I instantly looked for a connection to Machine Learning, since at its core it is an optimization problem. For those who may not be familiar with Machine Learning, it is a field that studies patterns in data. Given a data point and its corresponding label (an image of a cat and the label “Cat”, for instance), we need to find a model that automatically learns the mapping from data to its true label. This task is called classification. Admittedly, this is an oversimplified explanation, but it suffices for the following example.

Intuitively, for the classification task, we want to minimize the loss function, here the number of misclassified data points. But what are the variables we need to solve for? It turns out to be a bit tricky to formulate this in the standard form.

Example 2: Given a set of n 2-dimensional data points X = {(x11, x12), (x21, x22), …, (xn1, xn2)} and a set of corresponding binary labels yi ∈ {-1, 1}, find a weight vector w that correctly classifies every element of X, i.e., for any data point xi = (xi1, xi2), sign(wTxi) = yi, where sign(x) is 1 if x is positive and -1 otherwise. Note that sign() itself is not linear, so it cannot appear in the objective; instead, observe that a correct classification means yi * (wTxi) > 0, which, after rescaling w, we can write as yi * (wTxi) >= 1. The formulation is then a pure feasibility problem:

maximize: 0 (any feasible w will do)

subject to: yi * (wTxi) >= 1 for i = 1, …, n, with w1, w2 ∈ ℝ (otherwise unconstrained)


This might look confusing at first glance, so let me break it down bit by bit. Basically, we assume that the data points belong to 2 classes, -1 and 1, and that there is a line separating them. The target line, or the variable to be optimized, is w, with wTxi as the prediction. Any point that lies on or above the line, wTxi >= 0, gets predicted label 1; likewise, any point below it, wTxi < 0, gets predicted label -1. From this observation, if we classify a point correctly, the product of the actual label and the prediction, yi * (wTxi), is always positive, and the constraints yi * (wTxi) >= 1 enforce exactly that. Apart from these constraints there is no restriction on the value of w, which completes the formulation.

The point of this example is to show that it takes some analysis to arrive at an appropriate LP form. Later on, when learning about Support Vector Machines, I realized that this idea had already been exploited! Moreover, SVMs elegantly extend it to handle even non-linearly separable data.
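The feasibility formulation above can also be run through a solver. Here is a sketch on a tiny made-up dataset (the points and labels are invented for illustration, and SciPy is assumed to be available); the constraints yi * (wTxi) >= 1 are rewritten in `linprog`'s Ax <= b form by negating both sides:

```python
import numpy as np
from scipy.optimize import linprog

# Feasibility LP for linear separation: find w with y_i * (w^T x_i) >= 1 for all i.
# The objective is constant (c = 0); any feasible w separates the data.
X = np.array([[2.0, 2.0], [3.0, 1.0],      # class +1
              [-1.0, -1.0], [-2.0, 0.0]])  # class -1
y = np.array([1, 1, -1, -1])

# Rewrite y_i * (w^T x_i) >= 1 as -(y_i * x_i)^T w <= -1 for linprog's A_ub form.
A_ub = -(y[:, None] * X)
b_ub = -np.ones(len(y))

res = linprog(c=[0, 0], A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None), (None, None)])
w = res.x
print(res.status)            # 0 means a feasible (hence separating) w was found
print(np.sign(X @ w) == y)   # every point classified correctly
```

Two details are worth noticing: the bounds must be opened to (None, None) because w may be negative, unlike the default x >= 0 of the standard form; and on non-separable data the solver would instead report infeasibility, which is precisely the gap SVMs fill.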



Okay, I now know how to formulate my problem, but how exactly do I solve it? Fortunately, there are plenty of solvers out there that can do the work once you feed them the formulation. For students, I recommend Gurobi: it is a commercial (and therefore well-engineered) solver that also provides a free academic license. Other choices are CPLEX, GLPK, or the standard libraries of your preferred programming language (MATLAB, Python).


  • Website:
  • Gurobi's download package provides mini problem sets in each supported programming language to help users get accustomed to it.
  • Gurobi has APIs for the most commonly used languages: C, Java, Python, MATLAB…. It can also be used as standalone software; in fact, testing Gurobi through the standalone tool before moving on to the API is a good idea. Try the Mip1 problem set.
  • All the required information on installation and usage is here.
  • Issues you may run into when installing on Windows: [1], [2].
  • Installation is easier on Windows than on Ubuntu. However, when running on a server, there is usually no choice other than Ubuntu. Abel's blog is a helpful reference. A few notes:
    • remember to check whether the grbgetkey file is executable: chmod +x file_name
    • check whether the file is hidden, or whether the key file sits in a hidden folder.
    • when compiling and running with Java (the javac and java commands), you need to link to the Gurobi folder. You can save this path in your .bashrc file. All declared paths must point directly to the folders; do not use relative path names like ~. A helpful thread.
      • Example: export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/vinh/opt/gurobi752/linux64/bin/
  • You can set parameters for Gurobi here, though it is well tuned by default.
  • If you installed Gurobi with an academic license properly, you should see the following line printed every time you call the library:
    • Academic license – for non-commercial use only




As stated previously, this post is for engineers, developers, etc. who seek a solution to a problem that fits the class of Linear Programming. It is noteworthy that, at first, it might not cross your mind that your problem bears any similarity to LP; the ability to formulate a problem is, in my opinion, an art in itself. Next time you have an optimization problem, try this 😉

Hopefully, you find this post useful in some way. Please feel free to make suggestions on this post or share your opinion on the pros and cons of solvers you have tried.




