Saturday, December 19, 2009

Benford's law

This is the probabilistic law that governs the distribution of the leading digits that appear in, for example, our bills, bank statements, and national debts, where the digit 1 is the most probable.

The reason for this counter-intuitive law is that the natural underlying distribution is logarithmic.
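A quick way to see the law in action (a minimal sketch; using powers of 2 as a scale-spanning test sequence is my own choice for illustration):

```python
import math
from collections import Counter

# Benford's predicted probability for leading digit d is log10(1 + 1/d),
# so P(1) = log10(2) ≈ 0.301 -- far above the naive guess of 1/9.
benford = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

# Empirical check on a sequence spanning many orders of magnitude: 2^n
leading = [int(str(2 ** n)[0]) for n in range(1, 1001)]
counts = Counter(leading)
freqs = {d: counts[d] / len(leading) for d in range(1, 10)}
```

The observed frequency of the leading digit 1 comes out very close to log10(2), just as the law predicts.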

When probabilities are involved, our intuition may catastrophically lead us to wrong conclusions.

Sunday, December 13, 2009

The Black-Scholes equation

I do not remember the first time I heard about the Black-Scholes equation, but I have finally satisfied my curiosity. The Black-Scholes equation is a differential equation for the value of a contract that grants the optional right to buy (call option) or sell (put option) a stock. The typical call option is a contract that gives the option to buy the stock at a certain point in the future T for a given strike price k. The value of the stock changes in time, so if I buy a call option written with a strike price k, I am betting that the price of the stock at time T will be higher than k (plus the cost of the contract itself), so that I can exercise the option and pocket the difference. Otherwise, I can let the contract expire, losing all the money I paid for the call option.

The value of the contract is well known on the last day T, when the option can be exercised: it is exactly the stock price minus the strike price if the stock price is higher than k; otherwise, the contract's value is zero. The big problem is to estimate the fair value of the contract as a function of the stock price S and time t. Under certain assumptions, this value obeys the Black-Scholes partial differential equation.

The figure below shows a typical solution of the Black-Scholes partial differential equation. The curve in red corresponds to the value of the contract at time T as a function of the stock price, acting as the boundary condition. The remaining curves correspond to times farther and farther into the past. The strike price is k=120, the maturity time is T=1, the interest rate is r=0.05, and the volatility is sigma=0.5.


This is an example adapted from the book:
Computational Financial Mathematics using Mathematica, by Srdjan Stojanovic.
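Under the same assumptions, the Black-Scholes equation for a European call has a well-known closed-form solution, which takes only a few lines to code (a sketch in Python rather than Mathematica; the parameter values are the ones from the figure):

```python
import math

def norm_cdf(x):
    """Standard normal CDF, written in terms of the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, k, T, t, r, sigma):
    """Black-Scholes value of a European call at stock price S and time t."""
    tau = T - t  # time remaining to maturity
    if tau <= 0:
        return max(S - k, 0.0)  # boundary condition: the payoff at time T
    d1 = (math.log(S / k) + (r + 0.5 * sigma ** 2) * tau) / (sigma * math.sqrt(tau))
    d2 = d1 - sigma * math.sqrt(tau)
    return S * norm_cdf(d1) - k * math.exp(-r * tau) * norm_cdf(d2)

# Parameters from the figure: k=120, T=1, r=0.05, sigma=0.5
price = bs_call(120.0, 120.0, 1.0, 0.0, 0.05, 0.5)
```

At maturity the function reproduces the kinked red payoff curve, and at earlier times it should trace out the smoothed curves shown in the figure.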

Now, having a reasonable idea of the value of the contract, I can even buy and sell contracts!

Tuesday, December 8, 2009

Functional Programming: MapReduce

Functional programming is a programming paradigm that was introduced by LISP. Many modern languages and computational systems were inspired by LISP, and one of them is Mathematica.

It seems that the next big impact of functional programming is going to come from its application to parallel/distributed programming. One example of this emerging technology is the MapReduce framework developed by Google. In a similar way, Yahoo is developing Hadoop for the same purposes.
Another example is of course the implementation of WolframAlpha mostly developed in Mathematica.
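The core idea is just the functional primitives map and reduce applied at data-center scale. A toy word-count sketch in plain Python (not Google's actual API, only an illustration of the shape of the computation):

```python
from collections import defaultdict
from itertools import chain

def map_phase(doc):
    # Mapper: emit a (word, 1) pair for every word in the document.
    return [(word, 1) for word in doc.split()]

def shuffle(pairs):
    # Shuffle: group all values emitted under the same key.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # Reducer: combine the values for each key (here, by summing).
    return {key: sum(values) for key, values in grouped.items()}

docs = ["the cat", "the dog the cat"]
pairs = chain.from_iterable(map_phase(d) for d in docs)
counts = reduce_phase(shuffle(pairs))
# counts == {"the": 3, "cat": 2, "dog": 1}
```

The point of the framework is that the map and reduce calls are independent per key, so the runtime can scatter them across thousands of machines without changing the program.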

There are many tutorials available including youtube videos such as
MapReduce Cluster Computing

One of the implementations that caught my attention is MARS, which is developed on top of CUDA.

Friday, December 4, 2009

Computational Linear Algebra

How do we implement programs that require linear algebra?
For most cases there is no reason to spend effort developing our own linear algebra libraries, because very good and freely available ones already exist.
  • The first layer is the Fortran BLAS library that implements matrix products. There are many variants of BLAS developed by certain companies and research institutions.
  • The second layer is the Fortran Lapack library, which implements more advanced routines, such as matrix decompositions and the computation of eigenvalues, on top of BLAS. However, there are no high-level routines such as matrix inverse, determinant, etc. The reason for their absence is that there are many ways to implement them in terms of the Lapack routines, and one has to choose a particular method according to one's own requirements for maximum efficiency. Programs written against Lapack have high potential for optimization because one can specify the type of matrix in the operation (real, complex, symmetric, etc.) and the decomposition that best fits the final purpose.
  • The third layer is a Blas/Lapack wrapper, which implements the operations that are taught in a first course of linear algebra. The routines in Lapack are usually taught in a second course of linear algebra. The wrappers are easier to use, but some of the flexibility is lost.
  • For those who work with C, GSL is probably the best option. For C++, one has the choice of the Boost library, which includes much more than linear algebra. Personally, the library I find the friendliest is the Armadillo C++ linear algebra library.
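As an illustration of the layering, NumPy in Python plays the role of a third-layer wrapper: its high-level routines dispatch to BLAS/LAPACK underneath (a minimal sketch):

```python
import numpy as np

# A small symmetric positive-definite system.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

# Prefer solving the system (LU factorization via LAPACK's *gesv)
# over explicitly forming the inverse -- cheaper and more accurate.
x = np.linalg.solve(A, b)

# Eigenvalues of a symmetric matrix (LAPACK's symmetric eigensolvers).
w = np.linalg.eigvalsh(A)
```

This is exactly the trade-off described above: the wrapper is far easier to use, but the choice of decomposition and matrix type is made for you.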