Monday, February 1, 2010

Multidimensional Arrays

Linear algebra is very well developed, with a vast literature on numerical methods. This is not true for multi-dimensional arrays of dimension higher than 2 (the 2-dimensional case corresponds to the familiar matrices). In general, it is difficult to generalize matrix operations to higher dimensions. One paper that gives an introduction to the topic and treats the problem of diagonalizing higher-dimensional arrays is
Tensor Diagonalization

The literature usually refers to multi-dimensional arrays as tensors, but I prefer the term array, because in physics a tensor is more than a multidimensional array.

The following figures correspond to a 9x9x9 array after and before a diagonalization attempt by applying orthogonal transformations.
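
To get a feeling for what such orthogonal transformations do, here is a minimal Python sketch (not the algorithm of the paper above): it applies to each mode of a random 9x9x9 array the orthogonal factor obtained from the SVD of the corresponding unfolding, a construction known as the higher-order SVD. The resulting core array is not diagonal; it only tends to concentrate its mass in a few entries, which is precisely what makes exact diagonalization of higher-dimensional arrays hard. The helper names are mine and only numpy is assumed.

    import numpy as np

    def unfold(a, mode):
        """Matricize a multi-way array along the given mode."""
        return np.moveaxis(a, mode, 0).reshape(a.shape[mode], -1)

    def hosvd_core(a):
        """Apply an orthogonal transformation to each mode (higher-order SVD).

        Returns the core array and the orthogonal factors. The core is only
        'all-orthogonal': a generic array with more than 2 dimensions cannot
        be made exactly diagonal by orthogonal transformations.
        """
        core, factors = a.copy(), []
        for mode in range(a.ndim):
            u, _, _ = np.linalg.svd(unfold(a, mode), full_matrices=False)
            factors.append(u)
            # mode-n product: multiply the current mode by u^T
            core = np.moveaxis(np.tensordot(u.T, core, axes=(1, mode)), 0, mode)
        return core, factors

    a = np.random.rand(9, 9, 9)
    core, _ = hosvd_core(a)
    top3 = lambda x: np.sort(np.abs(x.ravel()))[-3:].sum() / np.abs(x).sum()
    print("fraction of mass in the 3 largest entries, before:", top3(a))
    print("fraction of mass in the 3 largest entries, after: ", top3(core))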


Saturday, December 19, 2009

Benford's law

This is the probabilistic law that governs the distribution of the leading digits that appear in, for example, our bills, bank statements, national debts, ..., where the digit 1 is the most probable.

The reason for this counter-intuitive law is that the natural underlying distribution is logarithmic. Benford's law
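
To see the law in action, here is a minimal Python sketch; the data set, the leading digits of the first 1000 powers of 2, is just a convenient choice that is known to follow the law closely.

    import math
    from collections import Counter

    # Benford's law: P(d) = log10(1 + 1/d) for the leading digit d = 1..9
    benford = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

    # Leading digits of the first 1000 powers of 2
    digits = [int(str(2 ** n)[0]) for n in range(1, 1001)]
    counts = Counter(digits)

    for d in range(1, 10):
        print(f"digit {d}: observed {counts[d] / len(digits):.3f}   Benford {benford[d]:.3f}")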

When probabilities are involved, our intuition may catastrophically lead us to wrong conclusions.

Sunday, December 13, 2009

The Black-Scholes equation

I do not remember the first time I heard about the Black-Scholes equation, but I have finally satisfied my curiosity. The Black-Scholes equation is a differential equation for the value of a contract that establishes the optional right to buy (call option) or sell (put option) a stock. The typical call option is a contract that gives the option to buy the stock at a certain time T in the future for a given strike price k. The value of the stock changes in time, so if I buy a call option written with strike price k, I am betting that the price of the stock at time T will be higher than k (plus the cost of the contract itself), so that I will be able to execute the transaction and recover the difference. Otherwise, I could let the contract expire, losing all the money I paid for the call option.

The value of the contract is well known on the last day T, when the option can be executed: it is exactly the stock price minus the strike price if the stock price is higher than k; otherwise, the contract's value is ZERO. The big problem is to estimate the fair value of the contract as a function of the stock price S and the time t. Under certain assumptions, the value of the contract obeys the Black-Scholes partial differential equation.
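
For reference, the equation in question for the contract value V(S, t), together with the terminal payoff that acts as the boundary condition described above, is

$$\frac{\partial V}{\partial t} + \frac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2} + r S \frac{\partial V}{\partial S} - r V = 0, \qquad V(S, T) = \max(S - k, 0),$$

where r is the risk-free interest rate and sigma is the volatility of the stock.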

The figure below shows a typical solution of the Black-Scholes partial differential equation. The curve in red corresponds to the value of the contract at time T as a function of the stock price, acting as the boundary condition. The remaining curves correspond to times farther and farther in the past. The strike price is k=120, the maturity time is T=1, the interest rate is r=0.05 and the volatility is sigma=0.5.
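
Curves like those in the figure can be checked against the closed-form solution of the equation for a European call. The following is a minimal Python sketch using the parameters quoted above; the sample stock prices and times are arbitrary choices of mine, and it evaluates the standard formula rather than solving the PDE numerically.

    import math
    from statistics import NormalDist

    def call_value(S, t, k=120.0, T=1.0, r=0.05, sigma=0.5):
        """Black-Scholes value of a European call at stock price S and time t <= T."""
        tau = T - t                          # time remaining until maturity
        if tau <= 0:
            return max(S - k, 0.0)           # terminal payoff (the red curve)
        N = NormalDist().cdf
        d1 = (math.log(S / k) + (r + 0.5 * sigma ** 2) * tau) / (sigma * math.sqrt(tau))
        d2 = d1 - sigma * math.sqrt(tau)
        return S * N(d1) - k * math.exp(-r * tau) * N(d2)

    # One curve per time, from maturity back into the past
    for t in (1.0, 0.75, 0.5, 0.0):
        print(f"t = {t:4.2f}:", [round(call_value(S, t), 2) for S in (60, 120, 180)])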


This is an example adapted from the book:
Computational Financial Mathematics using Mathematica, by Srdjan Stojanovic.

Now, having a reasonable idea of the value of the contract, I can even buy and sell contracts!

Tuesday, December 8, 2009

Functional Programming: MapReduce

Functional programming is a programming paradigm that was introduced by LISP. Many modern languages and computational systems were inspired by LISP, and one of them is Mathematica.

It seems that the next big impact of functional programming is going to come from its application to parallel/distributed programming. One example of this emerging technology is the MapReduce framework developed by Google. In a similar way, Yahoo is developing Hadoop for the same purposes.
Another example is, of course, the implementation of WolframAlpha, mostly developed in Mathematica.
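
The core idea is easy to state in a few lines of ordinary functional code. The toy word count below only mimics the map, shuffle and reduce phases inside a single Python process; all the distribution and fault tolerance that make the real frameworks interesting is absent.

    from functools import reduce
    from collections import defaultdict

    documents = ["the cat sat on the mat", "the dog sat"]

    # Map phase: each document is turned into (key, value) pairs independently,
    # which is what lets the real frameworks distribute this step across machines.
    mapped = [(word, 1) for doc in documents for word in doc.split()]

    # Shuffle phase: group the intermediate values by key.
    groups = defaultdict(list)
    for word, count in mapped:
        groups[word].append(count)

    # Reduce phase: combine the values of each key.
    word_counts = {word: reduce(lambda a, b: a + b, counts)
                   for word, counts in groups.items()}

    print(word_counts)  # {'the': 3, 'cat': 1, 'sat': 2, 'on': 1, 'mat': 1, 'dog': 1}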

There are many tutorials available, including YouTube videos such as
MapReduce Cluster Computing

One of the implementations that caught my attention is MARS, which is developed on top of CUDA.

Friday, December 4, 2009

Computational Linear Algebra

How do we implement programs that require linear algebra?
In most cases there is no reason to spend effort developing our own linear algebra libraries, because very good ones are freely available.
  • The first layer is the Fortran BLAS library, which implements basic operations such as matrix products. There are many variants of BLAS developed by companies and research institutions.
  • The second layer is the Fortran Lapack library, which implements more advanced routines, such as matrix decompositions and the computation of eigenvalues, on top of BLAS. However, there are no high-level routines such as matrix inverses, determinants, etc. The reason for their absence is that there are many ways to implement them in terms of the Lapack routines, and one has to choose a particular method according to one's requirements for maximum efficiency. Programs written directly against Lapack have high potential for optimization, because one can specify the type of matrix in the operation, i.e. real, complex, symmetric, etc., and the decomposition that best fits the final purpose.
  • The third layer is the BLAS/Lapack wrapper, which implements the operations that are taught in a first course of linear algebra. The routines in Lapack are usually taught in a second course of linear algebra. The wrappers are easier to use, but some of the flexibility is lost (see the sketch after this list).
  • For those who work with C, GSL is probably the best option. For C++, one has the choice of the Boost library, which includes a lot more than linear algebra. Personally, the library I find friendliest is the Armadillo C++ linear algebra library.
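
To make the layering concrete, here is a small sketch in Python (SciPy is used here only because it conveniently exposes both layers; the same structure exists in C or C++): the high-level wrapper call and the underlying Lapack routine solve the same linear system, but the low-level routine also returns the LU factors, the pivots and an info flag that the wrapper hides.

    import numpy as np
    from scipy import linalg
    from scipy.linalg import lapack

    A = np.array([[4.0, 1.0],
                  [1.0, 3.0]])
    b = np.array([1.0, 2.0])

    # Third layer: a high-level wrapper, one call and no decisions to make.
    x_wrapper = linalg.solve(A, b)

    # Second layer: the Lapack routine dgesv (LU factorization + solve),
    # which also exposes the factorization, the pivots and the error flag.
    lu, piv, x_lapack, info = lapack.dgesv(A, b)

    print(x_wrapper, x_lapack, "info =", info)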

Thursday, November 19, 2009

Magnetic sensors

One of the technologies with the most important applications is the ability to detect small magnetic fields. It allows us to make more compact hard disks and, more recently, more sensitive detectors, including bio-detectors.

  • Giant Magnetoresistance (GMR) is one of the important effects behind these technologies. It exploits the difference in electrical resistivity when the current traverses two magnetic layers (separated by a very thin non-magnetic layer) whose spins are either parallel or anti-parallel. Giant Magnetoresistance: The Really Big Idea Behind a Very Tiny Tool

  • The Tunnel Magnetoresistance (TMR) effect, which relies on quantum tunneling, is another similar effect that can be even more sensitive. Tunnel Magnetoresistance effect
  • The transmission of light through a gas such as rubidium can also be very sensitive to very small external magnetic fields. Laser magnetometer

A link covering some of these technologies applied to sensors is
GMR for sensors

Wednesday, October 28, 2009

Geometric Optics with Lie Groups

Donald Barnharth, from Optical Software, introduced me to another very interesting application of Lie groups in geometric optics.

This should not be a surprise, because geometric optics, like classical mechanics, can be expressed in terms of a variational principle. In classical mechanics we have the principle of least action, and in geometric optics we have Fermat's principle, which states that the trajectories of light rays minimize the optical path length, defined as

$$L = \int_C n \, ds$$

where n is the refractive index, ds is the differential arc length and C is the path. The whole machinery of classical mechanics can then be translated to geometric optics, with the Hamiltonian formulation becoming the phase-space formulation of geometric optics.

The key idea of Hamiltonian mechanics is that the trajectories of the particles can be seen as active continuous symplectic transformations. For example, acting on the phase-space coordinates z = (q, p), one has the following time-evolution operator

$$z(t) = e^{-t\,:H:}\, z(0)$$

where H is the Hamiltonian (independent of time), with the Lie operator defined in terms of the Poisson brackets as

$$:H:\, f = \{H, f\}$$

and

$$\{H, f\} = \sum_i \left( \frac{\partial H}{\partial q_i}\,\frac{\partial f}{\partial p_i} - \frac{\partial H}{\partial p_i}\,\frac{\partial f}{\partial q_i} \right)$$

The same formalism can be applied to geometric optics with some particular adjustments. The time-evolution operator is replaced by the transformation that propagates the optical phase-space state through space. For practical applications this transformation is factorized into a perturbative-like product expansion, reminiscent of the Fer expansion:

$$\mathcal{M} = e^{:f_2:}\, e^{:f_3:}\, e^{:f_4:} \cdots$$

where the first factor represents the paraxial approximation, while the remaining factors represent the corresponding higher-order corrections.
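
As a small numerical illustration of the first, paraxial factor, the Python sketch below propagates rays in the (height, angle) phase space with 2x2 ray-transfer matrices, which are the linear symplectic maps of this formalism; the remaining factors of the expansion would add aberration corrections on top of this. The focal length and distances are arbitrary choices of mine.

    import numpy as np

    def free_space(d):
        """Paraxial (linear) map for propagation over a distance d."""
        return np.array([[1.0, d],
                         [0.0, 1.0]])

    def thin_lens(f):
        """Paraxial (linear) map for a thin lens of focal length f."""
        return np.array([[1.0, 0.0],
                         [-1.0 / f, 1.0]])

    # A parallel bundle of rays, stored as phase-space points (height, angle)
    rays = np.array([[-1.0, 0.0, 1.0],    # heights
                     [ 0.0, 0.0, 0.0]])   # angles

    # 50 units of free space, a thin lens with f = 50, then 50 more units:
    # a parallel bundle should be focused to a point (all heights -> 0).
    system = free_space(50.0) @ thin_lens(50.0) @ free_space(50.0)

    print(system @ rays)   # first row (the heights) is numerically zero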

References
  1. V. Lakshminarayanan, Ajoy Ghatak, and K. Thyagarajan, Lagrangian Optics
  2. Alex J. Dragt, A Lie connection between Hamiltonian and Lagrangian optics
  3. Kurt B. Wolf, Geometric Optics on Phase Space, Springer, 2004
