Machine Learning in Business Optimization

“This is a phenomenon that Turing had predicted: that machine intelligence would become so pervasive, so comfortable, and so well integrated into our information-based economy that people would fail even to notice it.” (Ray Kurzweil 1999)

1 Historical background

According to Arthur Samuel (1959), machine learning is the “field of study that gives computers the ability to learn without being explicitly programmed”. Machine learning derives from a real-world need for intelligent machines, or artificial intelligence in other words. But to understand why machine learning was formulated as a field, we need to look a bit into history.

Back in time, mathematicians believed that they would be able to describe each and every law in terms of a finite and complete set of formulas, and then pass that structured knowledge to a computer. In 1931 Kurt Gödel published his incompleteness theorems – logical proofs that it is simply not possible to create such a finite and complete set of axioms, and that even if some system achieved it, that system could not prove its own consistency. A bit later, in 1936, Alan Turing formulated the so-called “Turing machine”, which helps us understand the limits of what can be computed.


Surprisingly, both results proved the limitations of what mathematical logic can achieve; yet within those limits, mechanical devices can still carry out mathematical reasoning. This later led to a change of paradigm: instead of describing laws and passing them to a machine, scientists decided it might be much more productive to give a computer the ability to learn instead. One of the first successes was SNARC (Stochastic Neural Analog Reinforcement Calculator), the first neural network simulator, created by Marvin Minsky in 1951. A bit later, in 1959 at IBM, Arthur Samuel created the first self-learning program – a checkers program.

2 Reasons to use

With all this said, a fair question arises – why would we use such a thing? It turns out that many industries already use learning algorithms. The first reason is data analysis. Indeed, the amount of data that has to be handled is growing rapidly, and it is becoming almost impossible for researchers or analysts to analyse it all; machine learning can help with that.

The second reason is that there are situations where not a single engineer knows how to write a computer program for a specific task, for example handwriting recognition or computer vision.

Lastly, the most common reason is the need for self-customizing or self-improving programs, where we would like a certain system to change itself over time based on some inputs. Examples of this kind of system are recommendation systems, which recommend certain products to a customer based on the history of their purchases.

3 Developments

3.1 Quality Control case

Many retailers have a business process of checking products delivered by manufacturers before further sale. In some situations this process slows down the sales process. In other cases – when the number of products is huge and selling a defective product brings no risk to customers’ lives – retailers do not check the whole delivery; they simply leave the customer the right to return a defective product. Where this process exists in a company, it is usually an operational-level process which directly influences the efficiency of the sales process.

So we can represent the situation as follows: what is the probability that object r is good and has no defects, P(y_r = 1 | x_r; θ), so that there is no need to check it manually? To conduct this case I took real data about wine quality from a machine learning data repository.

To create the decision model, a logistic regression learning algorithm was used. To give a bit of intuition about what the machine learning agent is trying to accomplish: it tries to learn how to separate good and bad examples based on previously collected data. In even simpler words, we can say that it tries to draw a decision boundary line.
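To make the idea concrete, here is a minimal sketch of logistic regression trained by batch gradient descent. It is written in Python with NumPy (the original programs used GNU Octave), and the two-feature synthetic data below is only a stand-in for the wine-quality measurements; all function names here are illustrative, not taken from the original main.m.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Fit theta by batch gradient descent on the logistic (cross-entropy) loss."""
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])   # prepend intercept column
    theta = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        grad = Xb.T @ (sigmoid(Xb @ theta) - y) / len(y)
        theta -= lr * grad
    return theta

def predict(theta, X):
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])
    return (sigmoid(Xb @ theta) >= 0.5).astype(int)   # P(y=1|x;theta) >= 0.5

# Synthetic stand-in for the wine data: two features, "good" vs "defective".
rng = np.random.default_rng(0)
good = rng.normal([2.0, 2.0], 0.5, size=(50, 2))
bad = rng.normal([0.0, 0.0], 0.5, size=(50, 2))
X = np.vstack([good, bad])
y = np.array([1] * 50 + [0] * 50)

theta = train_logistic(X, y)
accuracy = (predict(theta, X) == y).mean()
```

The learned decision boundary is the straight line where theta_0 + theta_1*x_1 + theta_2*x_2 = 0; on one side the model predicts “good”, on the other “defective”.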


After running the learning algorithm, it gave 95.18 percent prediction accuracy, which can be considered a success.


3.2 Credit Approval case

Some companies already use credit rate calculators, but in cases when the requested credit amount is not very big, it is also possible to automate the decision on whether an applicant is eligible. If the number of applications is large and the sums are relatively small, this decision can be brought to the operational level and automated. So the problem can be formed like this: based on their personal information, is a customer likely to return the whole sum of the credit? To conduct this case I took real data about credit approval statistics from a machine learning data repository.

This is again a classification problem, but for the sake of variety a different approach was chosen – this time a neural network was used to create a non-linear decision boundary. To give more intuition, we can again think about a line that separates the good and bad previously collected examples; however, this time the computer will try to draw not a straight line but a curve, which can potentially give us better accuracy.
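For intuition about how a curved boundary emerges, here is a minimal sketch of a one-hidden-layer neural network trained by backpropagation, again in Python with NumPy rather than the original Octave. The synthetic data (an inner cluster surrounded by a ring) cannot be separated by any straight line; the network sizes and names are illustrative assumptions, not the original model.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)

# Non-linearly separable toy data: class 1 inside a small circle, class 0 on a ring.
ang = rng.uniform(0, 2 * np.pi, 100)
inner = np.c_[0.5 * np.cos(ang[:50]), 0.5 * np.sin(ang[:50])]
outer = np.c_[2.0 * np.cos(ang[50:]), 2.0 * np.sin(ang[50:])]
X = np.vstack([inner, outer])
y = np.array([1] * 50 + [0] * 50).reshape(-1, 1)

# One hidden layer of 16 sigmoid units: enough to bend the boundary into a curve.
W1 = rng.normal(0, 1, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 1, (16, 1)); b2 = np.zeros(1)

lr = 1.0
for _ in range(6000):
    h = sigmoid(X @ W1 + b1)            # hidden activations
    p = sigmoid(h @ W2 + b2)            # predicted P(y=1|x)
    d2 = p - y                          # output error (cross-entropy gradient)
    d1 = (d2 @ W2.T) * h * (1.0 - h)    # error back-propagated to hidden layer
    W2 -= lr * (h.T @ d2) / len(y); b2 -= lr * d2.mean(axis=0)
    W1 -= lr * (X.T @ d1) / len(y); b1 -= lr * d1.mean(axis=0)

pred = (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) >= 0.5).astype(int)
accuracy = (pred == y).mean()
```

Because the hidden units each contribute their own “bent” sigmoid response, the combined decision boundary can close into a curve around the inner class, which a plain logistic regression cannot do.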


After running the algorithm, the program reported 90.58 % prediction accuracy, which can be considered a fairly good result. Taking into consideration that the training set consists of only 552 records, I am pretty sure the algorithm would reach better accuracy with a bigger data set. For comparison, the previous case used 3992 records as the training set.



To run these programs, GNU Octave is required. The main function is in the file main.m.



5 References

Ray Kurzweil 1999. The Age of Spiritual Machines.
Stanford Encyclopedia of Philosophy 2013. “Gödel’s Incompleteness Theorems”.
Stanford Encyclopedia of Philosophy 1995. “Turing Machine”.
Alan Turing 1950. Computing Machinery and Intelligence.
UC Irvine Machine Learning Repository.

Smart technologies: Voice search

1. From history to nowadays

People have always had three main virtues: the ability to dream, laziness, and the ability to dream about being even lazier…
That’s how story begins.

One of the first to formulate “machine thinking”, or artificial intelligence, and also to create a specific test for it was Alan Mathison Turing, the creator of the Turing test, which he described in his article “Computing Machinery and Intelligence” (1950). In simple words, the test looks like this: we have a judge who asks questions and evaluates the answers. In another room, where the judge cannot see, we have a normal person and a machine. Both answer questions, and the judge then evaluates the answers, trying to understand whether each answer belongs to a human or a machine. If the machine’s answers are counted as human answers, then we might think the machine has passed the test and can be considered intelligent. Of course, the question set is built on difficult questions like “what is the meaning of life”, “what is death”, etc.


So far, as of the year 2014, no machine has passed this test.

1.1 Voice search

“Google Voice Search”, its newer Android implementation “Google Now”, iOS Siri and “Bing Voice Search” are familiar to many smartphone owners. Logically, this kind of system can be divided into two separate subsystems.

Speech recognition
The first one is speech recognition, which is responsible for the translation of spoken words into text. The first device capable of understanding at least something of human speech was the “Audrey” system (Automatic Digit Recognition), developed by Bell Laboratories in 1952. Although it could understand human speech with a good accuracy rate, it could only recognize the digits 0 to 9. After that, IBM and other companies decided to develop their own systems.

Natural language processing
The other part of voice search is the “natural language processing” system – the one which actually understands the meaning of what was said in context and, if needed, is able to delegate some work to web services like Google Maps. This part has its roots in the simple chat bot. One of the first successful representatives of chat-bot systems was ELIZA, a chat bot developed by Joseph Weizenbaum between 1964 and 1966 at MIT. Later he wrote the book “Computer Power and Human Reason”, which includes an overview of and explanations about that system.

2. Market overview

  • 2008 – Google Voice Search
    In chronological order, everything started in 2008 with Google Voice Search, which only took voice input and pasted it into the text box of the search engine; only search results were shown, just as with simple keyboard input.
  • 2010 – Bing Voice Search
    Later, Microsoft released its own Bing Voice Search in 2010, with the same functionality as Google Voice Search. These applications were nothing more than voice input systems, so they relied fully on search engines.
  • 2011 – iOS Siri
    The situation changed with iOS Siri in 2011. This one is more than just input-processing software: it has a natural language processing system, which means it does not send the phrase straight to a search engine but tries to understand the context of what has been said. So this one was the first of its kind on the market. Of course, some similar apps were available before, and even Siri itself was not initially developed by Apple, but this one had great marketing support and much greater abilities than any of the apps developed by other companies.
  • 2012 – Google Now
    One year later, Google released “Google Now” – their own virtual assistant software.
  • 2014 – Microsoft Cortana
    Together with the 8.1 update for Windows Phone, Microsoft is planning to ship its own virtual assistant, named Cortana.

3. Technology overview

The idea of this kind of software is simply awesome. More than that, it is really good that these technologies are already available to people nowadays, and of course consumer interest pushes further developments. There are some obstacles, of course. From time to time a phrase might not be understood correctly. One of the issues is the learning curve: you have to learn how to use these applications. Another is that sometimes it is simply faster or handier to type your query into the search engine’s text box.

Artificial Intelligence?

It is quite important to understand that the term “Artificial Intelligence” can be approached from different perspectives.

If we think about it as a science, then we are trying to build a non-organic brain – a very smart machine – and we tend to expect that the machine will adapt to the real-world environment. That is why no machine has yet passed the Turing test, and scientists do not have many promising results in that field.

But if we approach the term from an engineering point of view, we do not care whether the machine can really think, as long as it does the job in the right way. Then we are not necessarily talking about something smart; we can even think about a “stupid robot agent”. We do not expect that agent to adapt to the real-world environment – in fact, we build a special environment which is very friendly to it: structured, defined, rule-driven and easily explainable. As a result, that “stupid robot agent” behaves smartly in that environment. In this sense we have outstanding results. Siri – in the environment of a search engine, a movie database and a maps application – behaves quite smartly and thus can be viewed as “engineering artificial intelligence”.

Own developments

Echo Lynx is a product of my own development. It is planned as a voice control system.



At the end of the day

Every day we hear technological news: every week it is about a new discovery, every month we hear about a breakthrough, every year about an important piece of fundamental research.


First frontier

Already today, robots in everyday life are not just science fiction: you can buy in a shop a robot that will clean the floor, or one which looks like a dog. Of course, their possibilities are still restricted by factors like production cost and energy consumption, but the most problematic are the differences between operating systems, which make it hard for programmers to develop additional robot functions, and weak marketing. Even though I do not know how to solve the marketing issues, I have already seen on the internet a kind of “robot app store”, where people can, for example, buy a program which will make their robot dance the rumba.

Even though today’s machines are created for specific tasks, the future makes us think about multi-purpose robots. However, many people are afraid of the so-called “doomsday” or “rise of the machines” – when robots will try to take over the world. I am quite optimistic about that, and I am not the only one. Isaac Asimov (science fiction writer and creator of the three basic laws of robotics) wrote, in the middle of the twentieth century, the “Foundation” series of books, where he shows robots as personal assistants of the whole of humanity. Even when robots did bad things in his books, they did them because of the true villain – a human. Also, people can sleep peacefully, because already today there is an organisation (the Lifeboat Foundation) with a program called “AIShield” (Artificial Intelligence Shield); involving many scientists, the organisation develops methods of fighting “terminators”.

Second frontier

Bioengineering – the unity of humans and mechanisms. A lot of laboratories are right now developing either near-future technology, like brain implants plus glasses with a camera which will allow blind people to see, or something futuristic, like nano-bots which will be able to heal us. Already today we can find videos of the American military testing exoskeletons which will help soldiers carry heavy things. Even in our everyday life we will soon see a lot of people with Google Glass (it is not really about bioengineering, but still a big step in the right direction).

There is also a Russian avatar project – a group of scientists whose goal is to build an “avatar”, so that at the end of life a human could transfer his mind into it and thereby become immortal. Personally, being an educated man, I understand that no one should live forever; however, increasing the duration of life by scientific methods, or helping people with limited abilities, are always good ideas.


The last frontier

According to Moore’s law, processor speed doubles every eighteen months. Futurists like Ray Kurzweil believe this will lead to a technological singularity (a point after which technical progress becomes so rapid and complex that it will no longer be possible to understand it). In simple but more confusing words, we will observe limitless progress in limited time. If Moore’s law holds, then at approximately 2030 people will create an artificial intelligence capable of self-improvement; it will strengthen itself unboundedly, passing each acceleration cycle faster, and at each stage finding new technological and logical possibilities for self-improvement. Automation and efficiency will be everywhere around us.

And here, as always, we can find two opinion groups. One says that it will bring us the solution to all the world’s problems, like food distribution and global warming, because a machine will find such solutions much faster than anybody. On the other side, people either do not believe that it will happen soon (or happen at all), or believe that it will bring the above-mentioned “doomsday” upon humanity.

At the end of the day

Personally, I believe that the truth is always somewhere in between. I do not think that this point (the singularity) will come soon.

So, at the end of the day: if we follow rational behaviour, we make rational decisions. If we work together for a common goal, we solve our common problems. Progress (technological or any other) is a symptom of life, and if we are not optimistic about it, we are not optimistic about life – we are doomed to failure. At the end of the day, progress is just a tool in our hands, and only we ourselves bear the responsibility for how we are going to use that tool and how much benefit we will get from it.



Singularity –
(physics) the central point of a black hole, at which gravitation approaches infinity.
(mathematics) a point at which the derivative of a given function does not exist, but every neighbourhood of which contains points where the derivative exists.