Monthly Archives: October 2012

I found OpenSolver, a free Excel add-in, through Mike Trick’s OR Blog.  It took me about five minutes to download it, install it, and get it working with one of the existing spreadsheets I use in class.

I had used Excel’s built-in solver only because all my students had it installed and it was easy to learn.  Its size limits, though, always kept my models small, and I never fully trusted it with integer programs.

One nice feature of OpenSolver is that it shows you the model.  OpenSolver automatically added the coloring you see below to the spreadsheet.  Cell G208 is red, showing that it is the objective cell I’m minimizing.  Column F and the Y(i,j) matrix below it are colored pink, since these are decision variables.  Cell C213 shows that these decision variables are binary.  And F209:F210 show an example of a constraint; you can see that it is a less-than constraint.

One of the reasons I use Excel is that students are used to it, and it is natural for them to set up problems there.  When OpenSolver adds this extra information to the spreadsheet, it is even better.  Students can see how their model works and can look for bugs or problems.

I’ve only done a few large-scale tests.  In one, I had 200 warehouses serving 200 customers and wanted the best 3 warehouses.  OpenSolver solved this fine, but it took about 5-10 minutes just to build the model.  By contrast, CPLEX Studio 12.4 read data from the same spreadsheet and solved the model in just a few seconds.
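If you want to see what this kind of model looks like outside of Excel, here is a minimal sketch of the same style of problem (pick the best 3 warehouses to serve every customer at minimum cost) in Python with the free PuLP library.  The cost data below is randomly generated stand-in data, scaled down from the 200x200 case so it solves instantly:

```python
import random
import pulp

random.seed(0)
n = 20  # scaled down from 200x200 for a quick demo
warehouses = range(n)
customers = range(n)
# Hypothetical cost matrix standing in for the spreadsheet data.
cost = [[random.randint(1, 100) for _ in customers] for _ in warehouses]

prob = pulp.LpProblem("best_3_warehouses", pulp.LpMinimize)
x = pulp.LpVariable.dicts("open", warehouses, cat="Binary")
y = pulp.LpVariable.dicts(
    "assign", [(i, j) for i in warehouses for j in customers], cat="Binary")

# Objective: minimize the total cost of serving every customer.
prob += pulp.lpSum(cost[i][j] * y[i, j] for i in warehouses for j in customers)

for j in customers:  # each customer is served by exactly one warehouse
    prob += pulp.lpSum(y[i, j] for i in warehouses) == 1
for i in warehouses:  # customers can only be assigned to open warehouses
    for j in customers:
        prob += y[i, j] <= x[i]
prob += pulp.lpSum(x[i] for i in warehouses) == 3  # open the best 3

prob.solve()
print(pulp.LpStatus[prob.status], pulp.value(prob.objective))
```

The binary x and y variables here play the same role as the pink cells in the spreadsheet, and the objective line corresponds to the red cell.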

BusinessWeek just published an article on Big Data for the Dairy Industry.

I wouldn’t naturally have thought of big data being applied to the dairy industry.  But this isn’t the first place I’ve seen the industry referenced.  Cows are expensive and produce a lot of milk, so keeping the herd healthy and productive is important.  With new sensors, new tests, and the ability for dairy farmers to upload data for analysis, the industry is a good target for big data efforts.

I often see articles in the business press that equate “analytics” with “big data.”  That is, these articles imply that the field of analytics is only about working with big data sets.

But, analytics is about much more than this.

Michael Schrage recently wrote that he asked executives what they would do with 100 times more data about their customers.  He said that none of them could say what they would do with that data, and one CEO suggested that costs might actually increase since they couldn’t deal with all of it.  The article points out that acquiring and analyzing big data is not the real issue:

Instead of asking, “How can we get far more value from far more data?” successful big data overseers seek to answer, “What value matters most, and what marriage of data and algorithms gets us there?”

If you extend his key question, you can see that you may not even need big data to get value.  Analytics helps you determine what data you need to collect, how you need to analyze that data, and what actions you need to take as a result.

Depending on what you are trying to achieve, you may not even need a big data set.  We see many companies that already have the data they need to help improve their business.  They just need to use more advanced techniques, like optimization, to get value from that data and take action.

A more extensive definition of analytics breaks the field into three types: Descriptive Analytics, Predictive Analytics, and Prescriptive Analytics.  Different types of problems call for different types of analytics.

On Oct 30, Northwestern’s Transportation Center is hosting a workshop on dealing with big data in the transportation industry:

Data-Driven Business: Challenges and Best Practices in the Transportation Industry

Tuesday, October 30, 2012 – 2:00-4:45 pm

Location:
Transportation Center, Chambers Hall, Lower Level
Northwestern University
600 Foster Street
Evanston, IL

Transportation companies are confronted with growing – some may say exploding – and diverse sources of data. This data may be mined from social media, obtained from customer surveys, collected from environmental sensors, and gleaned from geo-positioning radios, among other sources. Looking through the lens of the transportation industry, the Northwestern University Transportation Center’s fall Industry Workshop will examine challenges and best practices in data-driven business. In the workshop, a keynote speaker and two panel discussions will address questions such as:

  • Why is “data-driven” a different way of competing?
  • What does it take to unlock the opportunity in data?
  • What are the organizational implications of being “data-driven”?

2:00 – 3:15 pm – Panel 1: Marketing and Operations: Social media, Location-based, System-health

3:15 – 3:30 pm – Networking Break

3:30 – 4:45 pm – Panel 2: Freight Management & Logistics

The final program will be posted on October 23.

An article from Forbes reviewing Steve Sashihara’s book, The Optimization Edge, helps make the case for more optimization (and gives the book a strong recommendation):

One of his most interesting arguments is that a great deal of the effort spent on information gathering and analysis is wasted — or, at least, used sub-optimally — when it’s used to feed business intelligence systems that produce reports that ultimately wind up being fed into spreadsheets and PowerPoint slides. Managers then sit around in a conference room listening to presentations and debating what the data means and what decisions should be made about it — when, in many cases, good software could make the decision itself. The GPS in your car is optimizing when it says “turn left at Main Street” rather than presenting you with a list of possible routes.

If you stop your analytics efforts short of applying optimization, you may be missing out on a lot of value.

Dan Gilmore at The SupplyChainDigest reported on a study that showed the importance of good inventory control:

“…permanently reducing your level of inventories relative to sales and sales growth can have a dramatic impact on a company’s share price.”

Inventory is a very visible measure of a firm’s supply chain efficiency.  However, inventory is also a buffer against variability in the supply chain.  So, if you want to permanently reduce inventory, you need to go after the underlying variability.
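To see why, consider the standard textbook safety-stock formula: the buffer you have to carry grows in direct proportion to demand variability.  Here is a quick sketch in Python; the numbers are purely illustrative:

```python
import math

# Standard safety-stock formula: buffer = z * sigma * sqrt(lead time),
# where z sets the service level and sigma is the demand variability.
z = 1.645            # roughly a 95% cycle service level
sigma_daily = 40.0   # std dev of daily demand, in units (illustrative)
lead_time_days = 9.0

safety_stock = z * sigma_daily * math.sqrt(lead_time_days)
print(round(safety_stock))  # about 197 units
```

Halve the demand variability and the required safety stock is halved with it; cut the inventory without touching the variability and you are simply accepting more stockouts.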

Companies with a wide range of products always struggle with the question of how many SKUs (unique product IDs) they should make.  Should they cut the number of SKUs?  Should they increase it?

If you only offer a few SKUs, then you are potentially missing parts of your market, giving your competitors a chance to make inroads, and possibly losing shelf space at retailers.

On the other hand, if you offer a lot of SKUs, you may be confusing your customers and adding cost through extra inventory and more complex manufacturing.

Usually, the sales side of an organization wants more SKUs and the supply chain organization wants fewer.  Our experience has shown that the sales side usually wins.

Two years ago, the Wall Street Journal published an article arguing that firms have too many products.

However, I don’t think there can ever be a single right answer to this question.  But posing the question and analyzing the situation should lead to better-informed decisions.  Interestingly, the techniques of mass customization are an attempt to break through this dilemma: if you can offer customers more choice without the extra cost, you can create a lot of value in the market.

With the rise of Big Data (see here), there is an increased need for people who can analyze that data and turn it into information.  Thomas Davenport and D.J. Patil recently wrote an article in Harvard Business Review about how Data Scientists are going to be the hot job of the 21st Century.

A Data Scientist is

“a high-ranking professional with the training and curiosity to make discoveries in the world of big data. The title has been around for only a few years. (It was coined in 2008 by one of us, D.J. Patil, and Jeff Hammerbacher, then the respective leads of data and analytics efforts at LinkedIn and Facebook.)… ”

…”More than anything, what data scientists do is make discoveries while swimming in data. It’s their preferred method of navigating the world around them. At ease in the digital realm, they are able to bring structure to large quantities of formless data and make analysis possible.”

The article is well done and worth a read.  I think you could extend the definition of Data Scientist to include the field of Operations Research (which includes optimization).  Beyond just analyzing data, optimization can help you get even more value from it.

Optimization allows you to take the stream of data, automatically evaluate alternatives against your goal, and determine the best course of action.  For example, optimization can help you route trucks, schedule maintenance within a limited budget, schedule your workforce, adjust the electricity output of a power plant, and so on.
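As a toy illustration of that last point, here is a sketch of the maintenance-with-a-limited-budget decision written as a small binary program in Python with the PuLP library.  The job names, benefit scores, and costs are all made up:

```python
import pulp

# Hypothetical data: benefit score and cost (in $K) for each candidate job.
jobs = {
    "pump_overhaul": {"benefit": 12, "cost": 30},
    "belt_swap":     {"benefit": 5,  "cost": 10},
    "sensor_recal":  {"benefit": 7,  "cost": 15},
    "motor_rewind":  {"benefit": 9,  "cost": 25},
}
budget = 50  # $K available this month

prob = pulp.LpProblem("maintenance_budget", pulp.LpMaximize)
do = pulp.LpVariable.dicts("do", jobs, cat="Binary")

# Maximize the total benefit of the funded jobs...
prob += pulp.lpSum(jobs[j]["benefit"] * do[j] for j in jobs)
# ...subject to staying within the budget.
prob += pulp.lpSum(jobs[j]["cost"] * do[j] for j in jobs) <= budget

prob.solve()
print([j for j in jobs if do[j].value() == 1])
```

Feed it this month’s data and it hands back this month’s best plan, the same way the GPS hands you the next turn instead of a list of possible routes.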