Machine learning in business optimization

“This is a phenomenon that Turing had predicted: that machine intelligence would become so pervasive, so comfortable, and so well integrated into our information-based economy that people would fail even to notice it.” (Ray Kurzweil 1999)

1 Historical background

According to Arthur Samuel (1959), machine learning is the “field of study that gives computers the ability to learn without being explicitly programmed”. Machine learning derives from a real-world need for intelligent machines – artificial intelligence, in other words. But to understand why machine learning was formulated as a field, we need to look a bit into history.

For a long time, mathematicians believed they would be able to describe every law as a finite and complete set of formulas and then pass that structured knowledge to a computer. In 1931 Kurt Gödel published his incompleteness theorems – logical proofs that it is simply not possible to create such a finite and complete set of axioms, and that even if some system achieved it, that system could not prove its own consistency. A bit later, in 1937, Alan Turing formulated the so-called “Turing machine”, which helps to understand the limits of what can be computed.

Kurt Gödel

Surprisingly, both proved limitations of what mathematical logic could achieve; however, within those limits mechanical devices can still carry out mathematical reasoning. And so, later, came a change of paradigm: instead of describing laws and passing them to the machine, scientists decided it might be much more productive to give the computer the ability to learn instead. One of the first successes was SNARC (Stochastic Neural Analog Reinforcement Calculator), the first neural network simulator, created by Marvin Minsky in 1951. A bit later, at IBM in 1959, Arthur Samuel created the first self-learning program – a checkers program.

2 Reasons to use

With all this having been said, a fair question arises – why would we use such a thing? It turns out that many industries already use learning algorithms. The first reason is data analysis. Indeed, the amount of data to be handled is growing rapidly, and it is becoming almost impossible for researchers or analysts to analyse it manually; machine learning can help with that.

The second reason is that there are situations where no engineer knows how to write a computer program for a specific task, for example handwriting recognition or computer vision.

Lastly, the most common reason is the need for self-customizing or self-improving programs, where we would like a system to change itself over time based on some input. Examples of this kind of system are recommendation systems, which suggest products to a customer based on their purchase history.

3 Developments

3.1 Quality Control case

Many retailers have a business process of checking products delivered by manufacturers before further sale. In some situations this process slows down sales. In other cases, when the number of products is huge and selling a defective product brings no risk to customers’ lives, retailers don’t check the whole delivery – they simply leave the customer the right to return a defective product. Where this process exists in a company, it is usually an operational-level process with a direct influence on the efficiency of the sales process.

So we can represent the situation as follows: what is the probability P that object r is good and has no defects, P(y_r = 1 | x_r; θ), so that there is no need to check it manually? To conduct this case I took real data about wine quality from a machine learning data repository.

To create the decision model, a logistic regression learning algorithm was used. To give a bit of intuition about what exactly the machine learning agent is trying to accomplish: it tries to learn how to separate good and bad examples based on previously collected data. In even simpler words, it tries to draw a decision boundary line.
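The original program was written in GNU Octave; purely as an illustration (with synthetic data standing in for the real wine records, and made-up feature weights), the same idea can be sketched in Python with scikit-learn:

```python
# Illustrative sketch only: a logistic regression "good/defective" classifier,
# trained on synthetic data standing in for the wine-quality features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 4))  # four stand-in quality features
# label "good" (1) when a simple linear score is positive, plus some noise
y = (X @ np.array([1.5, -2.0, 1.0, 0.5])
     + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)

# P(y = 1 | x; theta) for one test example, plus overall accuracy
p_good = model.predict_proba(X_te[:1])[0, 1]
acc = model.score(X_te, y_te)
print(f"P(good) = {p_good:.2f}, accuracy = {acc:.2%}")
```

The learned coefficients define exactly the straight decision boundary described above.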


After running the learning algorithm, it reported a prediction accuracy of 95.18 percent, which can be considered successful.


3.2 Credit Approval case

Some companies already use credit rate calculators, but in cases where the requested credit amount is not very big, it would also be possible to automate the decision on whether an applicant is eligible. If the number of applications is large and the sums are relatively small, then this decision can be brought to the operational level and automated. So the problem can be formulated like this: based on their personal information, is a customer likely to return the whole sum of the credit? To conduct this case I took real data about credit approval statistics from a machine learning data repository.

Here it is again a classification problem, but for the sake of variety a different approach was chosen – this time a neural network was used to create a non-linear decision boundary. To give more intuition, we can again think about a line that separates the good and bad previously collected examples; however, this time the computer will try to draw not a straight line but a curve, which can potentially give better accuracy.
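Again purely as an illustration (synthetic two-dimensional data, not the real credit records), a small neural network can draw such a curved boundary where a straight line cannot:

```python
# Illustrative sketch only: a small neural network drawing a non-linear
# decision boundary, on synthetic data standing in for the credit records.
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# two concentric classes: impossible to separate with a straight line
X, y = make_circles(n_samples=800, noise=0.1, factor=0.4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(X_tr, y_tr)
acc = net.score(X_te, y_te)
print(f"non-linear boundary accuracy = {acc:.2%}")
```

A linear model would do no better than chance here, which is the point of the curved boundary.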


After running the algorithm, the program reported a prediction accuracy of 90.58%, which can be considered a fairly good result. Taking into consideration that the training set consists of only 552 records, I’m pretty sure the algorithm would achieve better accuracy with a bigger data set. For comparison, the previous case used 3992 records as the training set.



To run these programs, GNU Octave is required. The main function is in the file main.m.



4 References

Kurzweil, R. 1999. The Age of Spiritual Machines.
Stanford Encyclopedia of Philosophy 2013. “Gödel’s Incompleteness Theorems”. URL:
Stanford Encyclopedia of Philosophy 1995. “Turing Machine”. URL:
Turing, A. 1950. Computing Machinery and Intelligence. URL:
UC Irvine Machine Learning Repository. URL:

Smart technologies: Voice search

1. From history to nowadays

People have always had three main virtues: the ability to dream, laziness, and the ability to dream about being even lazier…
That’s how the story begins.

One of the first to formulate “machine thinking”, or artificial intelligence, and also to create a specific test for it was Alan Mathison Turing, the creator of the Turing test, which he described in his article “Computing Machinery and Intelligence” (1950). In simple words, the test looks like this: we have a judge who asks questions and evaluates the answers. In another room, where the judge cannot see, we have a normal person and a machine. Both answer the questions, and then the judge evaluates the answers, trying to understand whether each answer belongs to the human or the machine. If the machine’s answers are counted as human answers, then we might consider that the machine has passed the test and can be called intelligent. Of course, the question set is built on difficult questions like “what is the meaning of life”, “what is death”, etc.


So far, as of 2014, no machine has passed this test.

1.1 Voice search

“Google Voice Search”, its newer Android implementation “Google Now”, “iOS Siri” and “Bing Voice Search” are familiar to many smartphone owners. Logically, this kind of system can be divided into two separate subsystems.

Speech recognition
The first one is speech recognition, which is responsible for translating spoken words into text. The first device capable of understanding at least something of human speech was the “Audrey” system (Automatic Digit Recognition), developed by Bell Laboratories in 1952. Although it could understand human speech with a good accuracy rate, it could only recognize the digits 0 to 9. After that, IBM and other companies started to develop their own systems.

Natural language processing
The other part of voice search is the “natural language processing” system – the one which actually understands the meaning of what was said, in context, and, if needed, is able to delegate some work to web services like Google Maps. This part has its roots in the simple chat bot. One of the first successful representatives of chat bot systems was ELIZA, a chat bot developed by Joseph Weizenbaum between 1964 and 1966 at MIT. Later he wrote the book “Computer Power and Human Reason”, which includes an overview and explanation of that system.

2. Market overview

  • 2008 – Google Voice Search
    In chronological order, everything started in 2008 with Google Voice Search, which only takes voice input and pastes it into the text box of the search engine; only search results are shown, just as with simple keyboard input.
  • 2010 – Bing Voice Search
    Later, Microsoft released its own Bing Voice Search in 2010, with the same functionality as Google Voice Search. These applications were nothing more than voice input systems, so they relied fully on search engines.
  • 2011 – iOS Siri
    The situation changed with iOS Siri in 2011. This one is more than just input-processing software: it has a natural language processing system, which means it doesn’t send the phrase straight to a search engine but tries to understand the context of what was said. So this one was first on the market. Of course, somewhat similar apps had been available before, and even Siri itself wasn’t initially developed by Apple, but this one had great marketing support and much greater abilities than any of the apps developed by other companies.
  • 2012 – Google Now
    One year later, Google released “Google Now” – their own software virtual assistant.
  • 2014 – Microsoft Cortana
    Together with the 8.1 update for Windows Phone, Microsoft is planning to ship its own virtual assistant, named Cortana.

3. Technology overview

The idea of this kind of software is simply awesome. More than that, it is really good that these technologies are already available to people, and of course consumer interest pushes further development. There are some obstacles, of course. From time to time a phrase might not be understood correctly. One of the issues is the learning curve: you have to learn how to use these applications. The other thing is that sometimes it is much faster or handier to simply type your query into the search engine text box.

Artificial Intelligence?

It is quite important to understand that the term “Artificial Intelligence” can be approached from different perspectives.

If we think about it as science, then we are trying to build a non-organic brain, a very smart machine, and we tend to expect that the machine will adapt to the real-world environment. That’s why no machine has passed the Turing test yet, and scientists don’t have many promising results in that field.

But we can also approach the term from an engineering point of view, where we don’t care whether the machine can really think as long as it does the job in the right way. Then we are not necessarily talking about smart; we can even think about a “stupid robot agent”. And we don’t expect that agent to adapt to the real-world environment – in fact, we build a special environment which is very friendly to it: structured, defined, rule-driven and easily explainable. As a result, that “stupid robot agent” behaves smartly in that environment. In that sense we have outstanding results. Siri, in the environment of a search engine, a movie database and a maps application, behaves quite smartly, and thus can be viewed as “engineering artificial intelligence”.

Own devs

Echo Lynx is a product of my own development. It is planned as a voice control system.



At the end of the day

Every day we hear technological news: every week it’s about a new discovery, every month we hear about a breakthrough, every year about an important piece of fundamental research.


First frontier

Already today, robots in everyday life are not just science fiction: you can buy a robot from a shop that will clean the floor, or one which looks like a dog. But of course their possibilities are still restricted by factors like production cost and energy consumption; the most problematic, though, are the differences between operating systems, which don’t make it easy for programmers to develop additional robot functions, and weak marketing. Even though I don’t know how to solve the marketing issues, I have already seen on the internet a kind of “robot app store”, where people can, for example, buy a program which will make their robot dance the rumba.

Even though today machines are created for specific tasks, the future makes us think about multi-purpose robots. However, many people are afraid of the so-called “doomsday” or “rise of the machines”, when robots will try to take over the world. I’m quite optimistic about that, and I am not the only one. Isaac Asimov (science fiction writer and creator of the three basic laws of robotics) wrote, in the middle of the twentieth century, the “Foundation” series of books, where he shows robots as personal assistants to the whole of humanity. Even when robots did bad things in his books, they did them because of the true villain – a human. Also, people can sleep peacefully, because already today there is an organisation (the Lifeboat Foundation) which has a program called “AIShield” (Artificial Intelligence Shield); involving a lot of scientists, the organisation develops methods of fighting the “terminators”.

Second frontier

Bioengineering: the unity of humans and mechanisms. A lot of laboratories are right now developing either near-future technology, like brain implants plus glasses with a camera which will allow blind people to see, or something futuristic, like nano-bots which will be able to heal us. Already today we can find videos of the American military testing exoskeletons which will help soldiers carry heavy things. Even in our everyday life we will soon see a lot of people with Google Glass (it is not really about bioengineering, but still a big step in the right direction).

There is the “” (Russian avatar project) – a group of scientists whose goal is to build an “avatar”, so that at the end of their life a human could transfer their mind into it and in this way become immortal. Personally, being an educated man, I understand that no one should live forever; however, increasing life duration by scientific methods, or helping people with limited abilities, are always good ideas.


The last frontier

According to Moore’s law, processor speed doubles every eighteen months. Futurists like Ray Kurzweil believe this will lead to a technological singularity (a point after which technical progress becomes so rapid and complex that it will no longer be possible to comprehend it). In simpler but more confusing words, it sounds like we will observe limitless progress in limited time. Within Moore’s law, approximately by 2030 people will create an artificial intelligence capable of self-improvement; it will strengthen itself unboundedly, passing each acceleration cycle faster, and at each stage finding new technological and logical possibilities for self-improvement. Automation and efficiency will be everywhere.
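As a back-of-the-envelope illustration of that compounding (my own arithmetic, not a claim from Kurzweil), a doubling every eighteen months adds up very quickly:

```python
# Back-of-the-envelope: if speed doubles every 18 months,
# how many times faster are processors after n years?
def speedup(years, doubling_period_months=18):
    return 2 ** (years * 12 / doubling_period_months)

# from 2014 to 2030 is 16 years
print(f"{speedup(16):.0f}x")  # roughly a 1600-fold increase
```

That steepness is exactly what the singularity argument leans on.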

And here, as always, we can find two opinion groups. One says that it will bring us solutions to all the world’s problems, like food distribution and global warming, because the machine will find those solutions much faster than anybody else. On the other side, people either don’t believe it will happen soon (or at all), or believe it will bring the above-mentioned “doomsday” for humanity.

At the end of the day

Personally, I believe that the truth is always somewhere in between. I don’t think that point (the singularity) will come soon.

So, at the end of the day, if we follow rational behaviour, we make rational decisions. If we work together for a common goal, we solve common problems. Progress (technological or any other) is a symptom of life, and if we are not optimistic about it, we are not optimistic about life – we are doomed to failure. At the end of the day, progress is just a tool in our hands, and only we ourselves bear the responsibility for how we use that tool and how much benefit we get from it.



Singularity –
(physics) the central point of a black hole, at which gravitation approaches infinity.
(mathematics) a point at which the derivative of a given function does not exist, but every neighbourhood of which contains points at which the derivative exists.

Programming in Linux

It’s time to talk about programming. In Linux we can work with pretty much every programming language, so at the end of the day it is only a matter of choice or the specifics of the task. To start with any programming language, all you need is a text editor and an interpreter/compiler; get one and you can do pretty much whatever you want.

C++

This old-school guy is very useful for many purposes, especially when it comes to high performance. To start with it, first check whether the compiler is installed: in the terminal simply type 'g++', and if something like 'no input file' appears, that means you already have it. So let’s start with a program. In some text editor, create the following:

#include <iostream>
using namespace std;

int main() {
    cout << "Hello world" << endl;
    return 0;
}

Save it, for example, with the name 'helloCpp.cpp'. Then, on the command line, compile it like this: "g++ -o helloCpp helloCpp.cpp", where the parameter '-o' specifies a name for the output file. Here we go: to run the program, simply type "./helloCpp" and that’s it.

Java

A language that has been used heavily in production for many years. Just as before, ensure that you have the development kit, with the following command: "sudo apt-get install openjdk-7-jdk". Now let’s go to the text editor and write the same program:

class HelloJava {
    public static void main(String[] args) {
        System.out.println("Hello world");
    }
}
To compile it, use "javac", and then just run it: "java HelloJava".

Ruby

This language, along with Python, suits quick development very well. To start, get the interpreter and libraries with the command "sudo apt-get install ruby-full". When it is installed, just as with any other language, create a file 'helloRuby.rb' and in that file simply put:

puts 'Hello world!!!'

Since Ruby is a scripting language, there is no need for an entry point in the program. After that, simply type: "ruby helloRuby.rb". More than that, Ruby has the Interactive Ruby Shell, started from the command line with 'irb', which gives the possibility to run Ruby commands in real time without creating a source code file.

Octave

Octave is a high-level interpreted language with a specific purpose – mathematical computations. It suits algorithm prototyping perfectly. It can be installed with the simple command "sudo apt-get install octave". Octave is used through an interactive shell, but that doesn’t mean no files can be used. Let’s create the file "helloOctave.m" and define the following function in it:

function helloOctave(a, c)
    x = (1:10)';
    y = x .* a + c;
    plot(x, y);
endfunction

Then we just need to run Octave with the command 'octave' and call the function: "helloOctave(5, 2)".

Python

To try this one, simply either create a file and run it – "echo 'print "Hello"' >" and then "python" – or just call the interactive shell with the command "python".

###Have fun, and don’t forget – Chuck Norris is watching you###

SSH and OpenSSH server

So, first of all, I need a server to provide the connection. To get one, type in the terminal: "sudo apt-get install openssh-server". You won’t even have time to make a cup of coffee, because your server will be up and running after a couple of seconds. Now, to test it, I will create a new user and connect to my own computer through ssh: "sudo adduser [nameForUser]", in my case "sudo adduser testalex123". And yeap – the new user is created.

#Ssh connection
Now, for test purposes, it is time to log on. The following command should be executed: "ssh testalex123@localhost", where the user name comes before the (at) sign and the host name – here localhost, which points to my own machine – comes after it. The same can be done by typing the IP address; to find out your IP it is enough to type "ip addr". And, as expected, "ssh testalex123@" followed by my private LAN IP works exactly the same way, just fine.

#Doing things remotely
After a successful connection, everything you can do on your own computer you can do remotely as well – or, in my case, being logged on with one account, I pretend that my other account is located somewhere else. In the previous post, "Apache and MySql", I already installed a web server and set it up to use user home directories. So let’s create a personal web page for our new, figuratively speaking, remote user using ssh. After establishing the connection, check the working directory with the command "pwd". If it shows "/home/testalex123", then we are good to go. First we need a folder for the personal web page, so type the following: "mkdir public_html", and then simply go into it: "cd public_html". Now we need an index page and some simple HTML mark-up to test things. We could use a terminal-based text editor, but since we just need to test things, we can simply type: "echo '<h1>Hello</h1><p>ssh working properly.</p><h3>We are online and ready</h3>' > index.html". Alright, so the final part is the actual test – navigate your browser to "localhost/~testalex123".
If you see the page in your browser, then the surgery was successful. Yes, we managed to do things (figuratively) remotely, and what is more important, it was no more difficult than doing things on your own machine. That’s it!


Apache and MySql

So… Apache is Apache – nothing to say – just install and use it. To do so, execute the following command in the terminal: "sudo apt-get install apache2". And if you manage to type your sudo password correctly, it will be installed after a couple of seconds. You can even check it right away by navigating your browser to "localhost".
“It works!” your browser will say – no jokes. But to start creating your personal page, it would be nice to change the physical directory for websites on your computer. The easy way is to set it up under the user home directory, where you and other users can have dedicated pages. To do so, in the good old terminal execute "sudo a2enmod userdir", after which the server needs to be restarted: "sudo service apache2 restart". Now, to complete the preparation, a directory "public_html" has to be created under the user’s home directory. To check that it works, create a simple "index.html" file in the public_html directory and put a simple test paragraph or header there.
Here we go – quickly and easily, in just a few minutes, we can start creating a web page.

Well, you know, sooner or later you’ll come to a point where you need a database, and MySQL is a good choice. To manage it you can use either phpMyAdmin or the command-line client. Today I’m in a command-line mood, so just type in the terminal: "sudo apt-get install mysql-server mysql-client". And yep, now we can play with it a bit. You can run it with the command "mysql -u root -p" and then just create your first database.

For test purposes, I’ll create database where information about courses I’m currently taking will be kept.

  • So: "CREATE DATABASE hhCourses;" (and "USE hhCourses;" to switch to it).
  • The next thing is to create a little teacher table: "CREATE TABLE Teacher(id INTEGER PRIMARY KEY, name NVARCHAR(50), email NVARCHAR(50));".
  • The second table I need is Room: "CREATE TABLE Room(number INTEGER PRIMARY KEY, type NVARCHAR(50));".
  • After that I need a table to represent course modules: "CREATE TABLE Module(id INTEGER PRIMARY KEY, code NVARCHAR(10));".
  • And finally the main course table: "CREATE TABLE Course(id INTEGER PRIMARY KEY, name NVARCHAR(100), dayOfWeek NVARCHAR(10), courseTime TIME, moduleId INTEGER REFERENCES Module(id), roomNumber INTEGER REFERENCES Room(number), teacherId INTEGER REFERENCES Teacher(id));".


Now it is time to put some data into it:

INSERT INTO Teacher VALUES(1, 'Pekka', '');
INSERT INTO Teacher VALUES(2, 'Juhani', '');
INSERT INTO Teacher VALUES(3, 'Tero', '');

INSERT INTO Room VALUES(4004, 'lab');
INSERT INTO Room VALUES(1001, 'lecture');
INSERT INTO Room VALUES(5001, 'lab');
INSERT INTO Room VALUES(5009, 'lab');

INSERT INTO Course VALUES(101, 'Development project', 'MON', '10:00:00', 1, 4004, 1);
INSERT INTO Course VALUES(102, 'Linux', 'THU', '12:00:00', 2, 5001, 3);
INSERT INTO Course VALUES(103, 'Java', 'WED', '16:00:00', 3, 5009, 2);
INSERT INTO Course VALUES(104, 'Mobile', 'FRI', '08:00:00', 3, 5009, 2);
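As a quick aside (not part of the original MySQL session), the same kind of schema and a join query can be tried instantly with Python’s built-in sqlite3 – this is a trimmed-down, illustrative version of the tables above, with SQLite standing in for MySQL:

```python
# Illustration: a trimmed-down version of the courses schema plus a join
# query, using Python's built-in sqlite3 instead of MySQL.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE Teacher(id INTEGER PRIMARY KEY, name TEXT, email TEXT);
CREATE TABLE Room(number INTEGER PRIMARY KEY, type TEXT);
CREATE TABLE Course(id INTEGER PRIMARY KEY, name TEXT, dayOfWeek TEXT,
                    courseTime TEXT,
                    roomNumber INTEGER REFERENCES Room(number),
                    teacherId INTEGER REFERENCES Teacher(id));
INSERT INTO Teacher VALUES(1, 'Pekka', ''), (3, 'Tero', '');
INSERT INTO Room VALUES(4004, 'lab'), (5001, 'lab');
INSERT INTO Course VALUES
    (101, 'Development project', 'MON', '10:00:00', 4004, 1),
    (102, 'Linux', 'THU', '12:00:00', 5001, 3);
""")

# which teacher teaches which course, and in which room?
rows = db.execute("""
    SELECT c.name, t.name, r.number
    FROM Course c
    JOIN Teacher t ON t.id = c.teacherId
    JOIN Room r ON r.number = c.roomNumber
    ORDER BY c.id
""").fetchall()
for course, teacher, room in rows:
    print(f"{course}: {teacher} in room {room}")
```

The join is where a relational schema like this pays off: one query ties courses, teachers and rooms together.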


That’s it folks! Play around and enjoy.

Terminal hands-on and logs

Alright, it is time to install a couple of apps without using the package manager’s graphical interface, and to look into the logs a bit.

#1 – Game
First of all, it is the perfect time to play some game. After the command "apt-cache search rpg" (simply because RPG is one of my favourite genres), the console gives me a lot of choice, but after reading some of the descriptions I decided to go with "freedroid", so with the simple command "sudo apt-get install freedroid" I can start playing within a few seconds. And I should say it is incredible. A lot of action; it is not an RPG at all, but still playable if you are a fan of 8-bit music.


#2 System monitoring

So, searching for a tool to monitor system activity, the search gave me an option called "ksysguard". It seems to have quite many dependencies, and because of my slow internet it takes a while to install. Everything appears to be running smoothly, and there is nothing to worry about.

#3 Glitch in the matrix
You know – the Matrix has you. But don’t be upset; just follow the white rabbit and choose the right pill. The application cmatrix will turn your console into a Matrix monitor, and it can also run in screen-saver mode.

Logs are located in the /var/log directory, so with /var as the working directory, the command "tail -f log/*" will allow you to monitor and follow all the logs in that directory. For example, changing the language will cause changes in Xorg.0.log.

Or, for example, opening the date and time settings will cause changes in the syslog.


The terminal is a very efficient, fast and useful tool for most activities, but only as long as you know how to use it. There is no way to read a book and learn all the commands you might need, so the only option is to learn “at need”, by doing.

Command line and arguments

In many cases the command line is handier to use – but that applies only if you know what to do. Well, the basic commands are quite easy: "ls" lists files and folders in the current directory, "pwd" prints the working directory, "cd" changes directory, and so on. But on top of that, with different commands it is possible to use arguments, or so-called flags.

One example would be "ls -a", which lists all the files and folders in the directory, including hidden ones. With applications, it is usually possible to see which arguments a particular application accepts by using the "-h" parameter, for example "nano -h"; there I can find, for example, a "-V" parameter, which with most other applications as well will display the current version of the application. Another example is the apt-get package manager: it is possible to pass the parameter "-y" so that all questions will be answered with yes.

There are many other flags, and it is quite difficult to learn them – and which applications they can be used with – without actually trying them out at need. So it is probably a good idea to learn command-line commands and arguments while you are actually trying to do something with the terminal. Over and out.
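As a side note, the same flag mechanism can be seen from a program’s point of view; here is a small sketch with Python’s built-in argparse (the program name "demo" and its flags are made up for illustration):

```python
# Illustration: how command-line flags like "-V" and "-y" are handled
# inside a program, sketched with Python's built-in argparse module.
import argparse

parser = argparse.ArgumentParser(prog="demo", description="flag demo")
parser.add_argument("-V", "--version", action="version", version="demo 1.0")
parser.add_argument("-y", "--yes", action="store_true",
                    help="answer yes to all questions")

args = parser.parse_args(["-y"])
print(args.yes)  # a flag is just a boolean the program checks later
```

Running the hypothetical "demo -h" would print exactly the kind of help text that "nano -h" shows.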

Apps with Software Center

Today is the day… to get into the Ubuntu Software Center and install a couple of apps. Based on my personal experience, the Software Center is quite a good thing; however, for a new user it might be a little difficult to learn about open source alternatives to the apps they used previously. But a Google search helps a lot. So in the end it is nothing but pleasure.

Here we go
First of all, I have one problem – I sleep soundly. Even if an earthquake occurred in the nearest neighbourhood, it wouldn’t disturb me a bit. So the usual case is that, in order to wake up at 6 a.m., I need a couple of loud alarm clocks around. One of them can be found in the Software Center – the Alarm Clock applet.

The app is just amazing. Easy, simple, straightforward and, most importantly, reliable. What else do you need to ensure that you’ll wake up in time?


Numero deux
Well, if you want to succeed in life, waking up in time is not the only thing you need to ensure. The so-called “to do” list is what each and every busy person should have; otherwise anarchy and chaos in task management might simply ruin your life. For that purpose, one of the best apps in the Software Center is ‘Nitro’.
It has not only a GUI client but can also be accessed through the browser as a web app. As many other Linux apps do, it keeps that simplicity that I like so much – nothing extra, just exactly what you need to accomplish your goals.
It is easy to use, and easy to set up and order items in the list. It has a search box as well, in case you are a very lazy person and the list expands insanely over time.

Numero trois
And finally, just to stay busy, we need an IDE. There are plenty, but this time I’ll go with Eclipse.
Installation is nice and easy for all the apps; it takes literally seconds for the package manager to resolve dependencies and download the app. And you know what? Linux doesn’t ask you for reboots when you install or delete something, as Windows does.

Anyway, after downloading comes the usual procedure: define the working directory and create your new awesome project. That’s pretty much it – there is nothing else to tell, since the Software Center is so nice and easy to use. Hope you’ll enjoy it. Over and out.

Linux try out

First of all, the reason – why?
Here might come the question: why would you do that? The answer is quite simple:

  • It comes for free – no tricks, no catches.
  • Linux is faster than Windows.
  • There is more freedom to change things.
  • You might be tired of Windows crashing all the time.

And here is my favorite:

  • I’m tired of Windows asking for reboots almost daily; whatever happens, Windows in panic wants you to reboot the system, and from time to time it doesn’t even ask your permission to do so… Maaaaan… You know how many times I have needed to reboot Linux?!? – Once!! Right after installation.


First of all you need to check the system type (32- or 64-bit). The easiest way to do that is to right-click the “Computer” icon on the desktop or in the file explorer and choose Properties. There you will find the system type.


Downloading Distribution
This time I will go with Xubuntu, though I already have Ubuntu installed. On the Xubuntu website you can read about it a bit, and then just press “Get Xubuntu”. I would recommend an LTS (Long Term Support) release. Then you have a choice whether or not to use a torrent for the download. Pick the image that matches your system type (32/64), as we’ve just checked it. Through a torrent it took me about 20 minutes to download the distribution.

Creating Live CD / USB
To burn a disk image onto a CD you can use Nero, Daemon Tools Lite or Alcohol 120%. I will go with the USB option. To create a live USB I will use Universal USB Installer. In just three steps the magic will happen. Be careful: all the data on the USB stick will be erased, so before doing that, copy the files from the USB stick somewhere else.

Back ups
Usually everything goes alright, but backups are good practice anyway. Get an external hard drive and leave your computer copying all the files you need overnight. In case you really worry about your Windows, you can create a system image and a recovery CD.

(Note: a recovery CD isn’t the same as an installation CD, so you won’t be able to use it if you uninstall Windows. You cannot create a recovery USB, so if you don’t have an internal CD drive – buy an external one.)

To create both the system image and the recovery CD, go to Control Panel / Backup and Restore.

Try out
Turn off your computer and plug in the live USB. If you have a newer computer, there will surely be a boot option where you can choose which device the operating system should be loaded from. Otherwise you can edit the boot order in the BIOS. Usually, to get there you need to press F12 right after pressing the power button; however, on different computers it might be something else (F10, F2). Press “Try Xubuntu”. Check things out, and if it is fine for you – install it, either erasing Windows or alongside it (dual boot). What to do after – enjoy!

Different distributions
Xubuntu is not the only Linux distribution. The easiest way to choose is to download several of them and run them in a virtual machine (VirtualBox).
The most popular ones over the last 6 months are:

  • Mint
  • Debian
  • Ubuntu
  • Mageia
  • Fedora

After installing Ubuntu I can name a couple of good changes, from my personal experience:

  • It is really faster than Win 7 (boot and applications)
  • It requires fewer resources (I can say that based on the temperature of the laptop)
  • Getting apps through repositories is handier
  • The sound is louder and of higher quality
  • Workspaces are awesome
  • It works without reboots

However, there is one small thing: for some reason Skype doesn’t work… Anyway, I’m pretty much satisfied.