The curse of dimensionality is a phrase used by several subfields in the mathematical sciences; I use it here to refer to the apparent intractability of systematically searching through a high-dimensional space, the apparent intractability of accurately approximating a general high-dimensional function, and the apparent intractability of integrating a high-dimensional function. The blessings of dimensionality are less widely noted, but they include the concentration of measure phenomenon (so-called in the geometry of Banach spaces), which means that certain random fluctuations are very well controlled in high dimensions, and the success of asymptotic methods, used widely in mathematical statistics and statistical physics, which suggest that statements about very high-dimensional settings may be made where moderate dimensions would be too complicated. There is a large body of interesting work going on in the mathematical sciences, both to attack the curse of dimensionality in specific ways and to extend the benefits of dimensionality.
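Both phenomena can be made concrete in a few lines of numpy. The sketch below is illustrative only (the function name, point counts, and grid sizes are my own choices, not from the text): it measures how the relative spread of pairwise distances among random points collapses as the dimension grows (a face of concentration of measure), and counts the nodes of a tensor-product grid to show why systematic search or quadrature becomes intractable.

```python
import numpy as np

rng = np.random.default_rng(0)

def distance_spread(d, n_points=200):
    """Relative spread (max - min) / min of pairwise Euclidean distances
    among n_points uniform random points in the unit cube [0, 1]^d."""
    pts = rng.random((n_points, d))
    # Squared distances via the Gram matrix:
    # ||x - y||^2 = ||x||^2 + ||y||^2 - 2 <x, y>
    gram = pts @ pts.T
    sq = np.diag(gram)
    d2 = np.clip(sq[:, None] + sq[None, :] - 2.0 * gram, 0.0, None)
    iu = np.triu_indices(n_points, k=1)
    dists = np.sqrt(d2[iu])
    return (dists.max() - dists.min()) / dists.min()

# Blessing (concentration of measure): as d grows, the relative spread
# of distances shrinks -- the random fluctuations are tightly controlled,
# and "near" versus "far" neighbours become almost indistinguishable.
for d in (2, 20, 200, 2000):
    print(f"d = {d:5d}: relative distance spread = {distance_spread(d):.2f}")

# Curse: a tensor-product grid with just 10 nodes per axis needs 10**d
# nodes, so exhaustive search or grid quadrature is hopeless beyond small d.
for d in (2, 10, 20):
    print(f"d = {d:5d}: grid nodes = 10**{d} = {10**d:,}")
```

The Gram-matrix trick avoids materializing an `n_points × n_points × d` difference array, which would be prohibitively large precisely in the high-dimensional cases of interest.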
The coming century is surely the century of data. A combination of blind faith and serious purpose makes our society invest massively in the collection and processing of data of all kinds, on scales unimaginable until recently. Hyperspectral Imagery, Internet Portals, Financial tick-by-tick data, and DNA Microarrays are just a few of the better-known sources, feeding data in torrential streams into scientific and business databases worldwide.

In traditional statistical data analysis, we think of observations of instances of particular phenomena, these observations being a vector of values measured on several variables. In traditional statistical methodology, we assumed many observations and a few, well-chosen variables. The trend today is towards more observations but, even more so, towards radically larger numbers of variables: voracious, automatic, systematic collection of hyper-informative detail about each observed instance. We are seeing examples where the observations gathered on individual instances are curves, or spectra, or images, or even movies, so that a single observation has dimensions in the thousands or billions, while there are only tens or hundreds of instances available for study. Classical methods are simply not designed to cope with this kind of explosive growth of dimensionality of the observation vector.

We can say with complete confidence that in the coming century, high-dimensional data analysis will be a very significant activity, and completely new methods of high-dimensional data analysis will be developed; we just don't know what they are yet. Mathematicians are ideally prepared for appreciating the abstract issues involved in finding patterns in such high-dimensional data. Two of the most influential principles in the coming century will be principles originally discovered and cultivated by mathematicians: the blessings of dimensionality and the curse of dimensionality.
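The "tens of instances, thousands of variables" regime breaks classical machinery in a way that is easy to demonstrate. The sketch below is a hypothetical example (the sample sizes and the spectral interpretation are my assumptions, not from the text): with far fewer observations than variables, the classical sample covariance matrix is necessarily rank-deficient, so any method that inverts it fails outright.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative "few instances, many variables" setting: n = 50 observed
# spectra, each measured at p = 500 wavelengths (numbers are assumptions).
n, p = 50, 500
X = rng.standard_normal((n, p))

# The classical p x p sample covariance is estimated from only n
# observations; after centering, its rank is at most n - 1 = 49, so this
# 500 x 500 matrix is singular, and classical multivariate procedures
# that rely on inverting it break down whenever p >> n.
Xc = X - X.mean(axis=0)
S = Xc.T @ Xc / (n - 1)
rank = int(np.linalg.matrix_rank(S))
print(S.shape, rank)
```

This rank deficiency is structural, not a numerical accident: centering removes one degree of freedom, and `n - 1` rank-one terms can never span a `p`-dimensional space when `p` exceeds `n`.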