Saturday, December 19, 2009

Benford's law

This is the probabilistic law that governs the distribution of the leading digits that appear, for example, in our bills, bank statements, national debts, ..., where the digit 1 is the most probable leading digit.

The reason for this counter-intuitive law is that the natural underlying distribution is logarithmic. Benford's law
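Under Benford's law the probability of leading digit d is P(d) = log10(1 + 1/d), so 1 appears about 30% of the time. Below is a small Python sketch (my own illustration, not part of the original post) comparing this prediction with the leading digits of powers of 2, a classic Benford-distributed sequence.

```python
import math
from collections import Counter

# Benford's law: P(d) = log10(1 + 1/d) for leading digit d = 1..9
benford = {d: math.log10(1 + 1/d) for d in range(1, 10)}

# Empirical leading-digit frequencies of 2^1 .. 2^5000
counts = Counter(int(str(2**n)[0]) for n in range(1, 5001))
total = sum(counts.values())

for d in range(1, 10):
    print(d, round(benford[d], 4), round(counts[d]/total, 4))
```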

When probabilities are involved, our intuition may catastrophically lead us to wrong conclusions.

Sunday, December 13, 2009

The Black-Scholes equation

I do not remember the first time I heard about the Black-Scholes equation, but I finally satisfied my curiosity. The Black-Scholes equation is a differential equation for the value of a contract that establishes the optional right to buy (call option) or sell (put option) a stock. The typical call option is a contract that gives the option to buy the stock at a certain point in the future T for a given strike price k. The value of the stock changes in time, so if I buy a call option written with a strike price k, I am betting that the price of the stock at time T will be higher than k (plus the cost of the contract itself), so that I can execute the transaction and recover the difference. Otherwise, I can let the contract expire, losing all the money I paid for the call option.

The value of the contract is well known on the last day T, when the option can be exercised: it is exactly the stock price minus the strike price if the stock price is higher than k, and ZERO otherwise. The big problem is to estimate the fair value of the contract as a function of the stock price S and time t. Under certain assumptions, the value of the contract obeys the Black-Scholes partial differential equation.
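For reference, the standard form of the equation for the contract value V(S, t) is

$$ \frac{\partial V}{\partial t} + \frac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2} + r S \frac{\partial V}{\partial S} - r V = 0, $$

with the terminal condition V(S, T) = max(S - k, 0) for a call option.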

The figure below shows a typical solution of the Black-Scholes partial differential equation. The curve in red corresponds to the value of the contract at time T as a function of the stock price, acting as the boundary condition. The remaining curves correspond to times farther and farther in the past. The strike price is k=120, the maturity time is T=1, the interest rate is r=0.05 and the volatility is sigma=0.5.


This is an example adapted from the book:
Computational Financial Mathematics using Mathematica, by Srdjan Stojanovic.
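A minimal Python sketch (my own, not taken from the book) of the closed-form value of a European call with the same parameters (k=120, T=1, r=0.05, sigma=0.5) reproduces curves like those in the figure:

```python
import numpy as np
from scipy.stats import norm

def call_value(S, t, k=120.0, T=1.0, r=0.05, sigma=0.5):
    """Closed-form Black-Scholes value of a European call at stock price S and time t."""
    tau = T - t                               # time to maturity
    if tau <= 0:
        return np.maximum(S - k, 0.0)         # terminal payoff (the red curve)
    d1 = (np.log(S/k) + (r + 0.5*sigma**2)*tau) / (sigma*np.sqrt(tau))
    d2 = d1 - sigma*np.sqrt(tau)
    return S*norm.cdf(d1) - k*np.exp(-r*tau)*norm.cdf(d2)

S = np.linspace(1.0, 250.0, 6)
for t in (0.0, 0.5, 1.0):                     # curves farther and farther from maturity
    print(t, np.round(call_value(S, t), 2))
```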

Now, having a reasonable idea of the value of the contract, I could even buy and sell contracts!

Tuesday, December 8, 2009

Functional Programming: MapReduce

Functional programming is a programming paradigm that was introduced by LISP. Many modern languages and computational systems were inspired by LISP, and one of them is Mathematica.

It seems that the next big impact of functional programming is going to come from its application to parallel/distributed programming. One example of this emerging technology is the MapReduce framework developed by Google. In a similar way, Yahoo is developing Hadoop for the same purposes.
Another example is of course the implementation of WolframAlpha, mostly developed in Mathematica.
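The core pattern is easy to illustrate: a map stage emits key/value pairs, and a reduce stage combines all values sharing a key. Here is a minimal single-machine Python sketch of that pattern (my own toy illustration, not Google's or Hadoop's API):

```python
from itertools import groupby
from operator import itemgetter

def map_phase(document):
    # emit a (word, 1) pair for every word in the document
    for word in document.split():
        yield (word.lower(), 1)

def reduce_phase(word, counts):
    # combine all values that share the same key
    return (word, sum(counts))

documents = ["the map stage emits pairs", "the reduce stage combines the pairs"]

pairs = [pair for doc in documents for pair in map_phase(doc)]   # map
pairs.sort(key=itemgetter(0))                                    # shuffle/sort
result = [reduce_phase(w, (c for _, c in grp))                   # reduce
          for w, grp in groupby(pairs, key=itemgetter(0))]
print(result)
```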

There are many tutorials available, including YouTube videos such as
MapReduce Cluster Computing

One of the implementations that caught my attention is MARS, which is developed on top of CUDA.

Friday, December 4, 2009

Computational Linear Algebra

How do we implement programs that require linear algebra?
In most cases there is no reason to spend effort developing our own linear algebra libraries, because very good ones are freely available.
  • The first layer is the Fortran BLAS library, which implements basic vector and matrix operations such as matrix products. There are many variants of BLAS developed by various companies and research institutions.
  • The second layer is the Fortran Lapack library, which implements more advanced routines such as matrix decompositions and the computation of eigenvalues on top of BLAS. However, there are no high-level routines such as matrix inverses, determinants, etc. The reason for their absence is that there are many ways to implement them in terms of the Lapack routines, and one has to choose a particular method according to one's requirements for maximum efficiency. Programs written directly against Lapack have high potential for optimization because one can specify the type of matrix in the operation, i.e. real, complex, symmetric, etc., and the decomposition that best fits the final purpose.
  • The third layer is a Blas/Lapack wrapper, which implements the operations that are taught in a first course of linear algebra. The routines in Lapack are usually taught in a second course of linear algebra. The wrappers are easier to use, but some of the flexibility is lost.
  • For those who work with C, GSL is probably the best option. For C++, one has the choice of the Boost library, which includes a lot more than linear algebra. Personally, the library I find friendliest is the Armadillo C++ library for linear algebra. A small sketch of these layers is shown below.
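As a concrete illustration of the layering (my own sketch, using the Python scientific stack because it exposes all three levels): raw BLAS and Lapack routines are available through scipy.linalg.blas and scipy.linalg.lapack, while numpy/scipy provide the high-level wrappers. This solves the same linear system at two levels:

```python
import numpy as np
from scipy.linalg import solve
from scipy.linalg.lapack import dgesv   # raw Lapack routine: LU-factorize and solve
from scipy.linalg.blas import dgemm     # raw BLAS routine: general matrix-matrix product

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([[1.0], [2.0]])            # right-hand side as a column

# Third layer (wrapper): one call, no control over the underlying method
x_wrapper = solve(A, b)

# Second layer (Lapack): dgesv returns the LU factors, pivot indices, solution and status
lu, piv, x_lapack, info = dgesv(A, b)

# First layer (BLAS): check the residual A @ x - b with a raw matrix product
residual = dgemm(1.0, A, x_lapack) - b

print(x_wrapper.ravel(), x_lapack.ravel(), abs(residual).max(), info)
```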

Thursday, November 19, 2009

Magnetic sensors

One of the technologies with the most important applications is the ability to detect small magnetic fields. It allows us to make more compact hard disks and, more recently, more sensitive detectors, including bio-detectors.

  • Giant Magnetoresistance (GMR) is one of the important effects behind these technologies. It exploits the change in electrical resistivity when a current traverses two magnetic layers (separated by a very thin non-magnetic material) whose spins are parallel or anti-parallel. Giant Magnetoresistance: The Really Big Idea Behind a Very Tiny Tool

  • The Tunnel Magnetoresistance (TMR) effect, which relies on quantum tunneling, is a similar effect that can be even more sensitive. Tunnel Magnetoresistance effect
  • The transmission of light in a gas such as rubidium can also be very sensitive to very small external magnetic fields. Laser magnetometer

A link with some of these technologies applied to sensors is
GMR for sensors

Wednesday, October 28, 2009

Geometric Optics with Lie Groups

Donald Barnharth, from Optical Software, introduced me to another very interesting application of Lie groups in geometric optics.

This should not be a surprise because geometric optics, as well as classical mechanics, can be expressed in terms of a variational principle. In classical mechanics we have the principle of least action, and in geometric optics we have Fermat's principle, which states that the trajectory of light rays minimizes the optical path length, defined as
$$ L = \int_C n \, ds $$
where n is the refraction index, ds is the differential arc length and C is the path. The whole machinery of classical mechanics can be translated to geometric optics, with the corresponding Hamiltonian formulation becoming the phase-space formulation of geometric optics.

The key idea of Hamiltonian mechanics is that the trajectory of the particles can be seen as active continuous symplectic transformations. For example, one has the following time-evolution operator
$$ \mathcal{M}(t) = e^{-t\,:H:} $$

where H is the Hamiltonian (independent of time), with the Lie operator defined in terms of the Poisson brackets as

$$ :H:\, f = \{H, f\} $$

and

$$ \{f, g\} = \sum_i \left( \frac{\partial f}{\partial q_i}\frac{\partial g}{\partial p_i} - \frac{\partial f}{\partial p_i}\frac{\partial g}{\partial q_i} \right) $$
The same formalism can be applied to geometric optics with some particular adjustments. The time-evolution operator is replaced by the transformation that propagates the optical phase-space state through space. For practical applications, this transformation is factorized in a perturbative-like product expansion, reminiscent of the Fer expansion,

$$ \mathcal{M} = e^{:f_2:}\, e^{:f_3:}\, e^{:f_4:} \cdots $$
where the first factor represents the paraxial approximation, while the remaining factors represent the corresponding higher-order corrections.
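To make the paraxial (linear, symplectic) part concrete, here is a small Python sketch, entirely my own illustration, that propagates a ray in optical phase space (height y, angle u) through free space and a thin lens using the standard ABCD ray-transfer matrices:

```python
import numpy as np

def free_space(d):
    """Paraxial ray-transfer matrix for propagation over a distance d."""
    return np.array([[1.0, d], [0.0, 1.0]])

def thin_lens(f):
    """Paraxial ray-transfer matrix for a thin lens of focal length f."""
    return np.array([[1.0, 0.0], [-1.0/f, 1.0]])

# a ray in optical phase space: (height y, angle u)
ray = np.array([1.0, 0.0])

# propagate 10 units, pass through a lens with f = 5, propagate to the focal plane
M = free_space(5.0) @ thin_lens(5.0) @ free_space(10.0)
print(M @ ray)           # the ray crosses the axis at the focal plane
print(np.linalg.det(M))  # determinant 1: the linear (paraxial) map is symplectic
```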

References
  1. V. Lakshminarayanan, Ajoy Ghatak, and K. Thyagarajan, Lagrangian Optics
  2. Alex J. Dragt, A Lie connection between Hamiltonian and Lagrangian optics
  3. Kurt B. Wolf, Geometric Optics on Phase Space, Springer, 2004

LaTeX was powered by MathTran

Sunday, October 25, 2009

Relativistic Many-body dynamics

An intuitive generalization of the explicitly covariant single-body relativistic dynamical equation to multiple interacting particles is difficult. The no-go theorem of Currie, Jordan and Sudarshan is an example of the difficulties in devising such an explicitly covariant description of relativistic interacting particles. I do not know if a satisfactorily elegant solution has been found.

A book that explains this problem is

From Classical to Quantum Mechanics, by Giampiero Esposito and Giuseppe Marmo

Also see
Form of relativistic dynamics with world lines

The Lorentz-Dirac Force

There is a general naive idea that we already understand classical mechanics very well, but I do not think that is the case. There are many fundamental questions in classical mechanics without a satisfactory answer. One of them is the description of the trajectory of a charged particle in the presence of an electromagnetic field. Yes, we have the Lorentz force, but it does not take into account the effect of the radiation emitted by the charged particle as it accelerates. The Lorentz-Dirac force is an attempt to include the effect of the radiation, but unfortunately the solutions of this equation are pathological. Most books only mention this situation, but Baylis's book on electrodynamics devotes a complete chapter to this subject.
Electrodynamics: a modern geometric approach
By William Eric Baylis
(chapter 12)
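For reference (my own addition, not from the post): in the nonrelativistic limit the radiation-reaction correction reduces to the Abraham-Lorentz force,

$$ \mathbf{F}_{\mathrm{rad}} = \frac{q^2}{6\pi\epsilon_0 c^3}\,\dot{\mathbf{a}}, $$

and it is precisely this dependence on the derivative of the acceleration that produces the pathological runaway and pre-accelerating solutions.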

Returning from IMUC

I am finally writing again after my return from the magnificent International Mathematica User Conference 2009, where I learned about the features of the future Mathematica 8 and met many people. I am going to devote a series of entries to IMUC 2009, but some general observations and things that I learned are
  • Mathematica is growing and improving at an increasingly faster rate.
  • The new Mathematica notebook will have nearly the same capability found in LaTeX for creating static documents. However, it is the dynamic capability that makes it revolutionary and much better than LaTeX.
  • The Mathematica kernel is eventually going to support efficient tensor computation. These routines are going to be implemented in separate modules, so that Tensorial (the tensor package I develop with David Park and Jean-Francois Gouyet) will benefit from these new features. Moreover, I now have more ideas on how to independently improve Tensorial.
  • The rendering of 3D images with CUDA is truly amazing and I am waiting to play with it when it gets implemented in the kernel. Now I also know that my next laptop has to have an NVidia graphics card.
  • I am also waiting for the Mathematica plugin for web browsers, so that I can publish Mathematica notebooks directly on the web without the need to export any HTML.

Tuesday, October 13, 2009

Hyperdeterminants

The concept of the hyperdeterminant is only a few days new to me, and it is introducing me to a fascinating branch of algebra that I did not even suspect existed. I am very familiar with tensors and how they can be seen as multidimensional arrays that generalize matrices, but the concept of hypermatrices goes further.

The hyperdeterminant was invented (discovered) by the famous mathematician Cayley, who gave us many things including the Cayley transform, useful to approximate the exponential of anti-Hermitian matrices, and the amazing Cayley-Hamilton theorem of linear algebra.
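For concreteness, the smallest non-trivial case is Cayley's hyperdeterminant of a 2x2x2 hypermatrix, which has a closed polynomial form; here is a small Python sketch of it (the function name is mine):

```python
import numpy as np

def hyperdet_222(a):
    """Cayley's hyperdeterminant of a 2x2x2 hypermatrix a[i, j, k]."""
    a = np.asarray(a, dtype=float)
    return (a[0,0,0]**2*a[1,1,1]**2 + a[0,0,1]**2*a[1,1,0]**2
            + a[0,1,0]**2*a[1,0,1]**2 + a[1,0,0]**2*a[0,1,1]**2
            - 2*(a[0,0,0]*a[0,0,1]*a[1,1,0]*a[1,1,1]
                 + a[0,0,0]*a[0,1,0]*a[1,0,1]*a[1,1,1]
                 + a[0,0,0]*a[0,1,1]*a[1,0,0]*a[1,1,1]
                 + a[0,0,1]*a[0,1,0]*a[1,0,1]*a[1,1,0]
                 + a[0,0,1]*a[0,1,1]*a[1,1,0]*a[1,0,0]
                 + a[0,1,0]*a[0,1,1]*a[1,0,1]*a[1,0,0])
            + 4*(a[0,0,0]*a[0,1,1]*a[1,0,1]*a[1,1,0]
                 + a[0,0,1]*a[0,1,0]*a[1,0,0]*a[1,1,1]))

# example: a hypermatrix with only a000 = a111 = 1 has hyperdeterminant 1
ghz = np.zeros((2, 2, 2)); ghz[0,0,0] = ghz[1,1,1] = 1.0
print(hyperdet_222(ghz))
```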

There is active research today and I even found a blog
hyperdeterminant.wordpress.com

More recently I found that Thomas Wolf and Sergey Tsarev are carrying out serious computational work in order to find hyperdeterminants of higher order:
Hyperdeterminants as integrable discrete systems.

What I had in mind was the hyperdeterminant of at least 5x5x5 cubic matrices, but now I know this is an exceedingly difficult problem. Now my question is whether there is a way to find a good approximation.

Hmm... I just learned about other alternative definitions of hyperdeterminants, so the possibilities are still open.

Inverted Retina

If you did not know, our retinas are inverted, in the sense that the light entering our eyes has to pass through many layers of nerves, blood vessels and all the wiring before reaching the photodetectors themselves. Even more troubling is the fact that the images seem to be completely distorted and blurred by the time they arrive at the photodetectors. However, we surely know we can see very well. Why did we evolve this feature, and how do we really see? Some of the answers can be found in this absolutely amazing paper

V. D. Svet and A. M. Khazen, About the formation of an image in the inverted retina of the eye, Biophysics, Volume 54, Number 2 / April, 2009, pp 193-203

The key point of this article is that we process images in blocks. This means that we collect a sequence of images, which we process together in order to improve the signal-to-noise ratio.
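Averaging a block of N noisy frames of the same scene improves the signal-to-noise ratio by roughly a factor of sqrt(N); this toy Python sketch (my own illustration, not from the paper) shows the effect:

```python
import numpy as np

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 2*np.pi, 200))       # a toy "image" (1-D for simplicity)
noise_sigma = 1.0

def snr(n_frames):
    frames = signal + noise_sigma*rng.standard_normal((n_frames, signal.size))
    block_average = frames.mean(axis=0)              # process the whole block together
    residual = block_average - signal
    return signal.std() / residual.std()

for n in (1, 4, 16, 64):
    print(n, round(snr(n), 2))   # SNR grows roughly like sqrt(n)
```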

There is a recent independent article here, suggesting that we actually see images in discrete sequences, which is completely consistent with Svet's theory.

I met Dr Svet in Windsor, Canada, where he gave a few seminars about topics concerning sonar imaging and detection. I expect to write more entries in my blog about other topics of his research.

Sunday, October 11, 2009

What really matters is to challenge our brains

Our brains thrive when we learn new, challenging things. It is not enough to practice what we already know well.
Effect of challenging our brains on our brains

Saturday, October 10, 2009

Rodolfo Sanchez Ph.D.

I was very glad to hear again from my old Bolivian friend Rodolfo Sanchez, who earned his Ph.D. in physics in Germany and is now working as a postdoctoral researcher at the GSI institute.

Rodolfo Sanchez PhD

This world is small, because it happens that he is now collaborating with Dr. Gordon Drake, who is a professor at the University of Windsor, Canada.

Tree Fractal

I was a little bit surprised to find one of my Mathematica fractals as part of a collection of other fractals, as you can see in the October 17, 2004 entry

http://nylander.wordpress.co/2004/10/

Thursday, October 8, 2009

Calculation of the unitary part of the Bures measure for N-level quantum systems

My paper about the Bures measure was published

Calculation of the unitary part of the Bures measure for N-level quantum systems

The Bures measure can be written as the product of two factors: one corresponding to the populations and the other corresponding to the unitary transformation. Many people have arrived at formulas for the Bures volume and for the volume of the unitary part of the Bures measure, but in this paper we give an explicit expression of the measure itself in terms of even-dimensional balls.

Much more about measures is going to come to the public very soon!

Wednesday, October 7, 2009

Fisher Information In Natural Selection

I first encountered the concept of Fisher information in my readings about quantum estimation and the Cramer-Rao bound. It was then that I read the book

B. Roy Frieden, Physics from Fisher Information: A Unification,

which showed me that Fisher information may even play an important role in the foundations of physics itself. Unexpectedly, I ran into Fisher information again when I was reading about Jeffreys' prior for Bayesian estimation. Now I have come across another interesting paper about the role of Fisher information in natural selection

Frank, S. A. 2009. Natural selection maximizes Fisher
information. Journal of Evolutionary Biology 22:231–244

Can we use this in order to refine genetic/evolutionary optimization algorithms?
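As a reminder of the classical quantity (a numerical sketch of my own, not from the paper): the Fisher information I(theta) = E[(d log f(x; theta)/d theta)^2] bounds the variance of any unbiased estimator through the Cramer-Rao inequality Var >= 1/(n I(theta)). For a Bernoulli model, I(p) = 1/(p(1-p)), and the bound is saturated by the sample mean:

```python
import numpy as np

rng = np.random.default_rng(1)
p = 0.3
n, trials = 1000, 2000

# Monte Carlo estimate of the Fisher information of a single Bernoulli observation:
# score = d/dp log f(x;p) = x/p - (1-x)/(1-p), and I(p) = E[score^2]
x = rng.binomial(1, p, size=10**6)
score = x/p - (1 - x)/(1 - p)
print(score.var(), 1/(p*(1 - p)))      # both close to 4.76

# Cramer-Rao: the sample mean (an unbiased estimator of p) saturates the bound
estimates = rng.binomial(n, p, size=trials)/n
print(estimates.var(), p*(1 - p)/n)    # variance ~ 1/(n I(p))
```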

Quantum estimation

Quantum estimation/metrology is a research field closely tied to quantum information theory. Everything was born with the Heisenberg uncertainty principle, when we realized that measurements can in general be non-commutative, in contrast with the classical world where everything is commutative, i.e. the order of the measurements is irrelevant. The consequence of the non-commutativity is that two given observables may be incompatible for simultaneous, arbitrarily precise measurements.

The next step was taken by Helstrom in the 70's, who introduced the Symmetric Logarithmic Derivative (SLD) operator to obtain the quantum Fisher information. Later, in the 80's, Wootters came up with the refined and elegant concept of quantum statistical distinguishability, which was tied to the work of Helstrom by Braunstein and Caves (1994) in their famous paper Statistical distance and the geometry of quantum states. This paper is remarkable because it also established the connection with the completely independent work of Uhlmann on the Bures metric, which was born in the generalization of the Berry phase to mixed states.

More recently, we have witnessed many important developments in quantum single-parameter estimation, with one unexpected twist. Now we know that quantum mechanics, and in particular quantum entanglement, has the potential to outperform classical estimation methods in the quest for higher precision. For example, you can see my older post about NOON states and quantum interferometry.

However, this new knowledge cannot be applied in the real world until we learn how to generate a considerable number of entangled states in a reliable way.

You can also see my Wikipedia article about the Bures metric

Tuesday, October 6, 2009

Learning from being wrong

Being wrong seems to be something to avoid, but it may be the price to pay if we want to improve or optimize something. This is the general philosophy behind Genetic/Evolutionary algorithms and even statistical sampling.

In Genetic/Evolutionary algorithms, the key ingredient is the introduction of mutations. In most cases a mutation will be destructive, but occasionally a mutation will be beneficial. We improve by discarding the destructive mutations while keeping the beneficial ones.

It was Thomas Bäck from Leiden University who told us how Toyota promotes mutation in the production system in order to make more efficient cars.

In statistical sampling, the objective is to explore the space of a probability distribution (explore the opportunities). Here we have the Metropolis (and Metropolis-Hastings) algorithm, where one is forced to sometimes accept a "wrong" move in order to explore the probability distribution.
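A minimal Metropolis sketch (my own toy example) sampling a 1-D standard Gaussian: moves toward higher probability are always accepted, but less probable moves are also accepted with some probability, which is exactly the "being wrong on purpose" that lets the chain explore.

```python
import numpy as np

rng = np.random.default_rng(42)

def log_target(x):
    return -0.5*x**2          # unnormalized log-density of a standard Gaussian

x, samples = 0.0, []
for _ in range(50_000):
    proposal = x + rng.normal(scale=1.0)
    # accept a "worse" point with probability exp(log_target(proposal) - log_target(x))
    if np.log(rng.uniform()) < log_target(proposal) - log_target(x):
        x = proposal
    samples.append(x)

samples = np.array(samples[5000:])      # drop burn-in
print(samples.mean(), samples.std())    # close to 0 and 1
```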

Ben Schumacher, the author of the quantum noiseless coding theorem, has something to say about this
Ben Schumacher on "Being wrong"

Disclaimer: The only way to profit from being wrong (mutating) is to be ready to correct ourselves as fast as possible.

Sunday, October 4, 2009

A scientific approach to science education

In 2006, when I was in Windsor, I had the opportunity to meet Carl Wieman, winner of the 2001 Nobel Prize in Physics for his experimental work on the production of the Bose-Einstein condensate. However, something that caught my attention even more was his research and discoveries about science education. He showed us with experimental data how our current approach is far from being the most efficient way to teach science. Even worse, sometimes teaching science as we do it today leaves students with even more misconceptions!

More about his research in education can be found HERE

Saturday, October 3, 2009

Quantum/Classical transition

The nature of the quantum/classical transition is, in my opinion, the most important fundamental question of physics. There are many approaches and many insights that have consequences for the continuing debate on the interpretation of quantum mechanics and its mysterious features, such as the collapse of the wave function. Some of the approaches and related topics, not necessarily independent from each other, are
  1. The Ehrenfest theorem (My favorite).
  2. The Koopman-von Neumann equation for the classical wave function.
  3. The Wigner function for a quantum formulation in phase space, with Moyal brackets and Weyl quantization [1].
  4. Feynman Path integrals and decoherence.
  5. The classical spinor formalism. Electrodynamics: a modern geometric approach By William Eric Baylis
  6. The limit to the Thomas-Fermi model by Elliott H. Lieb
  7. The analogy of classical statistical mechanics with quantum mechanics inspired by the Wigner function, due to Blokhintsev. Another researcher, A. O. Bolivar, pursues this idea further, but it seems that he was unaware of the previous work by Blokhintsev.
  8. Related to the previous approach is the work by Amir Caldeira and Anthony J. Leggett, who proposed a quantum dissipation model.
  9. The very intriguing derivation of the Schrodinger equation from classical mechanics with complexified Brownian motion by Edward Nelson.  
  10. The p-mechanics formalism, which exploits the representations of the Heisenberg group. This method is also closely related to Weyl quantization.
  11. Geometric quantization, another method inspired by Weyl quantization, trying to maintain a coordinate-free procedure based on differential geometry.


[1]  Zachos, C. and Fairlie, D. and Curtright, T., Quantum mechanics in phase space: an overview with selected papers, World Scientific Pub Co Inc, 2005

    Friday, October 2, 2009

    The creation of the universe in 6 days

    And God created the universe in 6 days, but in order to give us a nice sky full of beautiful shiny stars, he decided to simulate the light as though it were coming from thousands, millions and billions of years in the past. This was necessary because the speed of light is so slow, relative to those distances, that in 6 days there would be no chance for us to see anything, not even the light from our closest star, excepting our own sun. We would not be able to see the Milky Way as we see it today, not even in 6 thousand years.

    However, God did his job so well that there is no way for anybody to find any hint of a crack in the simulation. Unfortunately, this implies that humans will never be able to prove that the universe was created in 6 days, no matter how hard they try.

    Shannon's noisy-channel coding theorem

    This is an amazing theorem that completely defies intuition and lies at the core of classical information theory. One way to defeat noise is to encode the original information with added redundancy. The price to pay is a reduction in the efficiency of transmission, also called the transmission rate. Intuition says that as the communication error goes to zero through redundancy coding, the transmission rate should go to zero as well. NOT TRUE! Shannon actually found a finite bound for the transmission rate, even as the communication error goes to zero.
    http://en.wikipedia.org/wiki/Noisy-channel_coding_theorem
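    For example (my own illustration, not part of the original post): for a binary symmetric channel that flips each transmitted bit with probability p, the capacity is C = 1 - H2(p), where H2 is the binary entropy. Reliable communication is possible at any rate below C, which stays well above zero for realistic error rates:

```python
import math

def binary_entropy(p):
    if p in (0.0, 1.0):
        return 0.0
    return -p*math.log2(p) - (1 - p)*math.log2(1 - p)

def bsc_capacity(p):
    # Capacity of a binary symmetric channel with crossover probability p
    return 1.0 - binary_entropy(p)

for p in (0.01, 0.05, 0.11):
    print(p, round(bsc_capacity(p), 3))   # e.g. ~0.919 bits/use at 1% errors
```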

    Princeton Plasma Physics Laboratory

    The director of the Princeton Plasma Physics Laboratory, Stewart Prager, gave an overview presentation today about the research carried out at the institution. They design and construct tokamaks for nuclear fusion and collaborate with other institutions and projects such as ITER. After seeing the steady progress in this research field, I am much more optimistic about the feasibility of one day having a commercial power plant based on nuclear fusion. Something that also caught my attention was the importance of lithium as a coating material for the inner surfaces of the plasma chamber.

    Wednesday, September 30, 2009

    Quantum Interferometer

    Jonathan Dowling from Louisiana State University gave a presentation about the use of entangled photons for high-precision interferometry [1], showing that they can lead to much better measurements than is possible with classical light. These entangled photon states are of the form
    $$ |N::0\rangle = \frac{1}{\sqrt{2}}\left( |N,0\rangle + |0,N\rangle \right) $$
    Some problems remain, such as the difficulty of producing NOON states at high power, but they are very promising anyway.

    So far they have devised ingenious conditional measurements in order to generate NOON states.

    Another important topic was the emulation of non-linear effects by conditional measurements within linear optics [2].

    References

    1. J. Dowling, arXiv:0904.0163
    2. G. G. Lapaire et al., Conditional linear-optical measurement schemes generate effective photon nonlinearities, DOI: 10.1103/PhysRevA.68.042314
    3. Jonathan Dowling ppt presentation

    LaTeX was possible in this entry with MathTran

    Saturday, September 26, 2009

    Human chromosome 2

    Humans have 23 pairs of chromosomes while all the other hominids have 24 pairs. The question is, how can this be possible if we are supposed to have a common ancestor? Now we know that human chromosome 2 is the result of the fusion of two chromosomes found in our hominid relatives.
    YouTube:Chromosome 2

    Friday, September 25, 2009

    Quantum Feedback Control

    Kurt Jacobs gave a presentation about quantum feedback control. He showed how to introduce the action of weak continuous measurements into a master equation. This equation resembles the Lindblad equation with an extra stochastic term, and the feedback can be introduced through the Hamiltonian. One application was to create a cat state of a harmonic oscillator.

    Adiabatic quantum complexity

    The adiabatic quantum computer was proposed in arXiv:quant-ph/0001106v1 in order to solve certain types of problems.

    For low dimensions it was shown that the adiabatic quantum computer runs in polynomial time for a certain problem that has exponential complexity on a classical computer. Peter Young, who gave a presentation at Princeton yesterday, is using quantum Monte Carlo simulations, as used in statistical mechanics for the partition function, in order to attack this problem in higher dimensions where direct simulations are not feasible.
    The Complexity Of The Quantum Adiabatic Algorithm

    Wednesday, September 23, 2009

    Intelligence gene

    The following article seems to indicate that there is a gene associated with higher intelligence, but those who have it are at a disadvantage in high-pressure exams
    memory gene

    Tuesday, September 22, 2009

    Epigenetics

    Epigenetics studies the effect of the environment on the expression of genes. I saw the following NOVA TV program
    http://www.pbs.org/wgbh/nova/genes/issa.html
    and I was very surprised.

    LaTeX in my blog

    I want to write equations in my blog, so I found mimetex, but I could not run it from the Princeton CGI server. I do not know why; I may need to set up complicated permissions, or there may be a compatibility issue between mimetex and the CGI server.

    Here I am testing the CGI provided by mimetex



    It works, but I can only use it as a test.

    Another method, less elegant though, is using MathTran, which I used to make the following equation

    Monday, September 21, 2009

    Quantum mechanics in plants??

    Professor Gregory D. Scholes gave a presentation at Princeton about the possible role of quantum mechanical effects in the absorption of light and energy transfer in photosynthetic processes in plants.

    www.nature.com/nature/journal/v431/n7006/full/431256a.html

    This seems to be very controversial because they claim high coherence at room temperature. Personally, I find it difficult to believe, but I'll wait and see further developments.

    PkPd

    PKPD stands for pharmacokinetics/pharmacodynamics, and it seems to be one of the most fruitful research areas at the boundary between mathematics, statistics, chemistry and BIOLOGY.

    http://en.wikipedia.org/wiki/Pharmacodynamics

    In particular, I am mostly interested in modeling with differential equations and the use of control theory.
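    To give a flavor of that kind of modeling (a generic textbook example of my own, not tied to any specific study): a one-compartment pharmacokinetic model with first-order absorption and elimination is just a pair of linear ODEs.

```python
import numpy as np
from scipy.integrate import solve_ivp

# One-compartment PK model with first-order absorption (ka) and elimination (ke).
# A = amount of drug at the absorption site, C = plasma concentration, V = volume.
ka, ke, V, dose = 1.0, 0.2, 10.0, 100.0

def pk_model(t, y):
    A, C = y
    dA = -ka*A
    dC = ka*A/V - ke*C
    return [dA, dC]

sol = solve_ivp(pk_model, (0.0, 24.0), [dose, 0.0], t_eval=np.linspace(0, 24, 7))
for t, C in zip(sol.t, sol.y[1]):
    print(f"t = {t:4.1f} h   C = {C:5.2f}")   # concentration rises, peaks, then decays
```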

    Alpha

    Alpha is what they call a knowledge database

    http://www.wolframalpha.com/

    The difference from regular search engines is that it is designed to deliver well-formatted data.

    For example, using "Dow Jones" it gives a very nice page with graphs, which can be downloaded along with the raw data from a Mathematica notebook.

    The Dow Jones Industrial Average is one of the most important indexes on Wall Street.

    In the same way one can download the information for specific companies such as Toyota or Microsoft.

    International Mathematica User Conference 2009

    I am going to give an oral presentation at the International Mathematica User Conference 2009 in Champaign, Illinois, October 22-24.

    List of presentations

    My presentation is going to be about the package I developed for performing Magnus expansions. This expansion can be used to find approximate solutions of systems of linear differential equations and, more generally, of equations involving linear operators.

    Magnus expansion in Wikipedia
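    A minimal numerical sketch of the idea (mine, not the package itself): for y'(t) = A(t) y(t), the Magnus expansion approximates the solution as y(t) = exp(Omega(t)) y(0), where the first term Omega_1 is the integral of A and the second term is built from commutators. Below, the first two terms are computed by simple quadrature for a small time-dependent 2x2 system and compared against a fine step-by-step integration.

```python
import numpy as np
from scipy.linalg import expm

def A(t):
    # a simple time-dependent generator
    return np.array([[0.0, t], [-t, 0.1]])

T, n = 1.0, 400
ts = np.linspace(0.0, T, n + 1)
dt = ts[1] - ts[0]

# Reference: step-by-step integration with small steps
U_ref = np.eye(2)
for t in ts[:-1]:
    U_ref = expm(A(t + 0.5*dt)*dt) @ U_ref

# Magnus: Omega_1 = int A dt,  Omega_2 = (1/2) int_0^T dt1 int_0^t1 dt2 [A(t1), A(t2)]
omega1 = sum(A(t + 0.5*dt) for t in ts[:-1])*dt
omega2 = np.zeros((2, 2))
for i, t1 in enumerate(ts[:-1]):
    for t2 in ts[:i]:
        A1, A2 = A(t1), A(t2)
        omega2 += (A1 @ A2 - A2 @ A1)*dt*dt
omega2 *= 0.5

print(np.abs(expm(omega1) - U_ref).max())           # first-order Magnus error
print(np.abs(expm(omega1 + omega2) - U_ref).max())  # smaller with the second term
```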

    Sunday, September 20, 2009

    Entanglement measure

    I developed a very nice method to measure the entanglement of pure states

    Renan Cabrera, Herschel Rabitz, The landscape of quantum transitions driven by single-qubit unitary transformations with implications for entanglement, J. Phys. A: Math. Theor. 42 (2009) 275303.

    doi: 10.1088/1751-8113/42/27/275303

    This method is based on the measurement of the Bures distance between a given state and the closest separable state, which allows one to calculate the generalized Schmidt state. The entanglement can be measured from the coefficients of this Schmidt state.
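    For the standard bipartite pure-state case (not necessarily the construction used in the paper), the Schmidt coefficients can be obtained from a singular value decomposition of the coefficient matrix; this small Python sketch computes them and the entanglement entropy for a two-qubit state.

```python
import numpy as np

def schmidt_coefficients(psi, dim_a, dim_b):
    """Schmidt coefficients of a bipartite pure state |psi> in C^dim_a x C^dim_b."""
    coeff_matrix = np.asarray(psi, dtype=complex).reshape(dim_a, dim_b)
    return np.linalg.svd(coeff_matrix, compute_uv=False)

# example: a partially entangled two-qubit state a|00> + b|11>
a, b = np.sqrt(0.8), np.sqrt(0.2)
psi = np.array([a, 0.0, 0.0, b])             # ordering |00>, |01>, |10>, |11>

s = schmidt_coefficients(psi, 2, 2)
p = s**2                                     # squared Schmidt coefficients
entropy = -np.sum(p*np.log2(p))              # entanglement entropy in bits
print(np.round(s, 4), round(entropy, 4))     # ~[0.894, 0.447], ~0.722 bits
```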

    Bures measure

    My paper about the calculation of the unitary part of the Bures measure was recently accepted for publication

    Renan Cabrera, Herschel Rabitz, Calculation of the Unitary Part of the Bures Measure for N-level Quantum Systems, accepted at J. Phys. A: Math. Theor.

    This paper shows how the Bures measure can be expressed as the product of the measures of even-dimensional Euclidean balls. This means that we now have a simple and easy formula for sampling. The ultimate application will be in Bayesian quantum estimation.


    My Wikipedia article explaining the basics of the Bures metric is

    http://en.wikipedia.org/wiki/Bures_metric

    Group theory for wireless communications

    The technology for transmitting information between multiple antennas began to be developed about 10 years ago. The purpose is to increase the fidelity and the transmission rate between multiple sources and multiple receivers.

    The particular technology I am most interested in is Unitary Space-Time Codes (UST). In this technology, the information is encoded in blocks spanning space and time. In the simplest case, we can think of N antennas emitting a sequence of N pulses, so that the information block can be represented as an NxN complex matrix. It is mathematically convenient to use unitary matrices, so the name UST is justified in this case.

    Within the UST field, I am paying attention to the Cayley encoding [1,2,3] because it seems to be elegant, simple and easy to understand.
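    The Cayley transform that gives these codes their name maps a Hermitian matrix to a unitary one, which is the trick that turns real-valued data into unitary space-time blocks. A minimal sketch of that map (my own illustration, not the actual codes of refs [1-3]):

```python
import numpy as np

def cayley(A):
    """Cayley transform of a Hermitian matrix A: V = (I - iA)(I + iA)^{-1}, which is unitary."""
    n = A.shape[0]
    I = np.eye(n)
    return (I - 1j*A) @ np.linalg.inv(I + 1j*A)

rng = np.random.default_rng(7)
X = rng.standard_normal((2, 2)) + 1j*rng.standard_normal((2, 2))
A = 0.5*(X + X.conj().T)                          # a random Hermitian "data" matrix

V = cayley(A)
print(np.abs(V @ V.conj().T - np.eye(2)).max())   # ~0: V is unitary
```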

    References
    1. Cayley Differential Unitary Space-Time Codes by B. Hochwald, B. Hassibi
    2. Unitary space-time modulation via Cayley transform by Y Jing, B Hassibi
    3. Yindi Jing Thesis
    4. Representation Theory for High-Rate Multiple-Antenna Code Design by B. Hassibi, B. Hochwald, A. Shokrollahi, W. Sweldens
    5. Differential Unitary Space-Time Modulation by B. Hochwald, W. Sweldens
    6. Random Matrices for Wireless Communications
      A. Tulino, S. Verdu (at Princeton)
    7. Circuits for Wireless Communications: Selected Readings, by Banlue Srisuchinwong (Editor), Wanlop Surakampontorn (Editor), Sawasd Tantaratana (Editor)
    8. Space-time coding for broadband wireless communications, by Georgios B. Giannakis, Zhiqiang Liu
    9. Random Matrices For Wireless Communications I
    10. Random Matrices For Wireless Communications II
    More information can be found at Bell Labs

    http://mars.bell-labs.com/


    More applications of group theory in engineering can be found at

    http://www.usna.edu/Users/math/wdj/repn_thry_appl.htm

    Greetings

    Hi, my name is Renan Cabrera L.

    I am a physicist interested in quantum mechanics, group theory, and science in general.

    My web page is

    Renan Cabrera's Web Page