Saturday, 27 September 2014

Approximating differential equations with Euler’s method

It looks like it’s time for differential equations.

So, let’s say you have a differential equation like the one below and you are striving to find an analytical solution. Chill out, maybe take a walk outside or ride your bike and calm down: sometimes an analytical solution does not exist. Some differential equations simply don’t have one, and they might drive you mad if you insist on looking for it.

Fortunately, there is a method to find the answer you need, or rather, an approximation of it. Sometimes this approximation is good enough.

While there are different methods for approximating a solution, I find Euler’s method quite easy to use and to implement in Python. Furthermore, this method has recently been explained by Sal from Khan Academy in a YouTube video, which in my opinion is a great explanation. You can find it here.

Here below is the differential equation we will be using:

[image of the differential equation]

and the solution (in this case it exists)

[image of the analytical solution]

Here is the algorithm in Python. Note that the smaller the step size h, the better the approximation (although it may take longer to run).

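A minimal sketch of the method, using dy/dx = y with y(0) = 1 (whose exact solution is e^x) as a stand-in example rather than the equation above:

import math

def f(x, y):
    # Right-hand side of the ODE dy/dx = f(x, y) (stand-in example)
    return y

def euler(f, x0, y0, h, x_end):
    # Approximate y(x_end) starting from (x0, y0) with step size h
    n = int(round((x_end - x0) / h))
    x, y = x0, y0
    for _ in range(n):
        y += h * f(x, y)   # follow the tangent line for one step
        x += h
    return y

if __name__ == "__main__":
    for h in (0.5, 0.05, 0.005):   # a few step sizes to compare
        approx = euler(f, 0.0, 1.0, h, 1.0)
        print("h = {}: approximation = {:.5f}, exact = {:.5f}".format(
            h, approx, math.exp(1)))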

Here are some approximations using different values of h:


[plots of the approximation for three successively smaller values of h]

Hope this was interesting.

Sunday, 21 September 2014

Generate slope fields in R and Python

Here is a short post on how to generate a quick slope field in R and Python.

If you do not know what a slope field is, well, I am not the best person to explain it to you since it is a relatively new concept for me as well. From what I’ve understood on the subject, a slope field is a graphical way of examining a first-order differential equation and getting a grasp of its solutions without having to solve it analytically.

As the Wikipedia page says, “it may be used to qualitatively visualize solutions, or to numerically approximate them.”

In general, I feel safe saying that a slope field is a sort of graphical approach to a differential equation.

Say you have the following differential equation:

[image of the differential equation]

drawing the slope field would look something like this:
In Python (without arrows)
[slope field plots in Python]

and in R (with arrows, x=f and y=h)
[slope field plot in R]

Of course these plots are just quick drafts and can be improved.

Here is the Python code I used to draw them.

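As a rough sketch of the general approach, a slope field can be drawn with matplotlib’s quiver; the equation dy/dx = x - y below is just a stand-in example, not the one from the plots above:

import numpy as np
import matplotlib.pyplot as plt

def dydx(x, y):
    # Right-hand side of the ODE (stand-in example)
    return x - y

# Grid of points where a small slope segment will be drawn
x, y = np.meshgrid(np.linspace(-3, 3, 21), np.linspace(-3, 3, 21))
slope = dydx(x, y)

# Normalise each segment to unit length so the field looks even
dx = 1.0 / np.sqrt(1.0 + slope**2)
dy = slope * dx

# headwidth/headlength set to 0 gives plain segments instead of arrows
plt.quiver(x, y, dx, dy, angles="xy", headwidth=0, headlength=0, headaxislength=0)
plt.xlabel("x")
plt.ylabel("y")
plt.title("Slope field of dy/dx = x - y")
plt.show()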

And the R code


Here is a beautiful slope field for the following differential equation:

[image of the differential equation]

In Python
[slope field plot]


If you need a quick tool for drawing slope fields, this online resource is good: click here.


Hope this was interesting.

Tuesday, 9 September 2014

Multivariable gradient descent

This article is a follow-up to the following:
Gradient descent algorithm

Here below you can find the multivariable (two-variable) version of the gradient descent algorithm. You could easily add more variables. For the sake of simplicity, and to make it more intuitive, I decided to post the two-variable case. In fact, it would be quite challenging to plot functions with more than two arguments.

Say you have the function f(x,y) = x**2 + y**2 - 2*x*y plotted below (check the bottom of the page for the code to plot the function in R):

[surface plot of f(x, y)]

Well, in this case we need to calculate two thetas in order to find the point (theta, theta1) such that f(theta, theta1) is a minimum.

Here is the simple algorithm in Python to do this:

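As a minimal sketch, an implementation can look like the following; the learning rate alpha and the starting point are arbitrary choices, and sympy takes care of the partial derivatives:

import sympy as sp

x, y = sp.symbols("x y")
f = x**2 + y**2 - 2*x*y

# Partial derivatives of f, turned into ordinary Python functions
dfdx = sp.lambdify((x, y), sp.diff(f, x))
dfdy = sp.lambdify((x, y), sp.diff(f, y))

alpha = 0.1                  # learning rate (arbitrary choice)
theta, theta1 = 3.0, -2.0    # starting point (arbitrary choice)

for _ in range(1000):
    # update both parameters simultaneously
    theta, theta1 = (theta - alpha * dfdx(theta, theta1),
                     theta1 - alpha * dfdy(theta, theta1))

print("Approximate minimum at:", theta, theta1)   # converges towards theta = theta1
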
This function, though, is really well behaved: in fact, it attains its minimum whenever x = y. Furthermore, it does not have many different local minima, which could have been a problem. For instance, the function below would have been harder to deal with.


[surface plot of a function with several local minima]


Finally, note that the function I used in my example is, again, convex.
For more information on gradient descent, check out the Wikipedia page here.
Hope this was useful and interesting.


R code to plot the function

Gradient descent algorithm

Today I’m going to post a simple Python implementation of gradient descent, a first-order optimization algorithm. In Machine Learning this technique is pretty useful for finding the values of the parameters you need. To do this I will use the sympy module, but you can also compute the derivative by hand if you do not have it.

The idea behind it is pretty simple. Imagine you have a function f(x) = x^2 and you want to find the value of x, let’s call it theta, such that f(theta) is the minimum value that f can assume. By iterating the following process a sufficient number of times, you can obtain the desired value of theta:

The update step is the standard gradient descent rule:

theta := theta - alpha * f'(theta)

where alpha is the learning rate, i.e. the size of each step.

Now, for this method to work, and for theta to converge to a value, some conditions must be met, namely:
-The function f must be convex
-The value of alpha must not be too large or too small: in the former case you’ll end up with the value of theta diverging, and in the latter you’ll approach the desired value really slowly
-Depending on the function f, a suitable value of alpha can change significantly.

Note that, if the function has local minima and not just an absolute minimum, this optimization algorithm may well end up “trapped” in a local minimum and never find your desired global minimum. For the sake of argument, suppose the function below goes to infinity as x gets bigger and that the global minimum is somewhere near 1. With the algorithm presented here, you may well end up with x = -0.68 or something like that as an answer when you are looking roughly for x = 0.95.

[plot of a function with a local minimum near x = -0.68 and the global minimum near x = 0.95]

In this case, of course, it is trivial to find the value and you don’t even need derivatives. However, for a different function it may not be that easy; furthermore, for multivariable functions it may be even harder (in the next article I will cover multivariable functions).

Here is the Python code of my implementation of the algorithm.

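As a minimal sketch of what such an implementation can look like (using sympy for the derivative of f(x) = x**2; alpha and the starting value are arbitrary choices):

import sympy as sp

x = sp.symbols("x")
f = x**2
df = sp.lambdify(x, sp.diff(f, x))   # f'(x) = 2*x as a Python function

alpha = 0.1    # learning rate (arbitrary choice)
theta = 5.0    # starting guess (arbitrary choice)

for _ in range(1000):
    theta = theta - alpha * df(theta)   # theta := theta - alpha * f'(theta)

print("Approximate minimizer:", theta)   # should be close to 0
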
Hope this was interesting and useful.

Thursday, 4 September 2014

Generate words through the use of Markov chains

In general, computer programs are believed to be bad at being creative and at doing tasks which require “a human mind”. Dealing with the meaning of text, words and sentences is one of these tasks. That’s not always the case, though. For instance, sentiment analysis is a branch of Machine Learning where computer programs try to determine the overall feeling coming from tons of articles.

But what we are talking about here is a lot simpler: by using Markov chains and some statistics, I developed a simple computer program which generates words. The model works as follows:
As a first step, the program is fed a long text in the selected language. The longer the text, the better. Next, the program analyses each word and gets the probability distributions for the following (a rough sketch of how these counts could be gathered is shown after the list):
-Length of the word
-First character (or letter)
-Character (or letter) next to a given one
-Last character (or letter)

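As a rough illustration (not the original program), the counts behind these distributions could be gathered with collections.Counter like this:

from collections import Counter, defaultdict

def build_distributions(text):
    # Keep only purely alphabetic words, lower-cased
    words = [w.lower() for w in text.split() if w.isalpha()]

    length_dist = Counter(len(w) for w in words)   # word lengths
    first_dist = Counter(w[0] for w in words)      # first characters
    last_dist = Counter(w[-1] for w in words)      # last characters

    # For each character, count which characters tend to follow it
    next_dist = defaultdict(Counter)
    for w in words:
        for a, b in zip(w, w[1:]):
            next_dist[a][b] += 1

    return length_dist, first_dist, next_dist, last_dist

# Example usage (the file name is a placeholder):
# with open("sample.txt") as fh:
#     length_dist, first_dist, next_dist, last_dist = build_distributions(fh.read())
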
Then, once all this data has been gathered, the following function is run in order to generate words:

[image of the word-generating function]

Each part is then glued together and returned by the function.

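Again as an illustration rather than the original model, a generating function based on the counts collected in the previous sketch could look like this:

import random

def weighted_choice(counter):
    # Pick a key of a Counter with probability proportional to its count
    total = sum(counter.values())
    r = random.uniform(0, total)
    upto = 0
    for key, count in counter.items():
        upto += count
        if upto >= r:
            return key
    return key   # fallback, only reached through rounding issues

def generate_word(length_dist, first_dist, next_dist, last_dist):
    length = weighted_choice(length_dist)    # draw the word length
    word = weighted_choice(first_dist)       # draw the first character
    # Middle characters: each one drawn from the distribution of
    # characters that follow the previous one
    while len(word) < length - 1:
        prev = word[-1]
        if prev not in next_dist:
            break
        word += weighted_choice(next_dist[prev])
    if length > 1:
        word += weighted_choice(last_dist)   # draw the last character
    return word
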
On average, 10% of the generated words are real English (or French, German or Italian) words, prepositions, or some other kind of real sequence of characters.

The most interesting aspect, in my opinion, is that by changing the language of the text fed into the program, one can clearly see how the sequences of characters change dramatically. For instance, when fed English, the program is more likely to write “th” inside words; with German, words will often contain “ge”; and with Italian, words will often end with a vowel.

Here is the code. Note that in order to speed up the word-checking process, I had to use the NLTK package, which at the time of writing is available only for Python 2. To avoid using Python 2 you could check each word with urllib and an online dictionary by parsing the web page, but this way is tediously slow: it would take about 15 minutes to check 1000 words. By using NLTK you can speed the checking process up considerably.

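A minimal sketch of the check with NLTK’s word list (it assumes the “words” corpus has already been downloaded with nltk.download("words")):

from nltk.corpus import words

english_words = set(w.lower() for w in words.words())

def is_real_word(candidate):
    # True if the generated string happens to be a real English word
    return candidate.lower() in english_words

# Example: count how many of 1000 generated words are real
# generated = [generate_word(length_dist, first_dist, next_dist, last_dist)
#              for _ in range(1000)]
# print(sum(is_real_word(w) for w in generated), "real words out of", len(generated))
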

Hope this was interesting.