Big Data: History, Development, Application, and Dangers

History & Development

The background of big data traces back to the beginning of measurement itself. Measurement and basic counting were practiced in the Indus Valley, Mesopotamia, and Egypt as early as the third millennium B.C., at the dawn of the earliest civilizations. Over time the accuracy and use of measurement continued to increase, making possible the later development of volume and area measurements.

A numeral system that emerged in India around the first century was improved by the Persians and then refined by the Arabs into the Arabic numerals we use today. Latin translations spread the system across Europe by the 12th century and led to an explosion of mathematics.

Mathematics would eventually ally with data. One of the earliest recorded pairings occurred in 1494, when Luca Pacioli, a Franciscan monk, published a book on the commercial application of mathematics and explained a new accounting format called double-entry bookkeeping, which enabled merchants to compare profits and losses. This revolutionized business, particularly banking.

The Scientific Revolution of the 1600s and 1700s saw a growing interest in measurement and mathematics as powerful tools for understanding the world.

Sir Francis Galton's 1888 discovery that a man's height correlated with the length of his forearm was an important precedent for big data. The authors explain that the statistical relationship between two data values can be quantified as a correlation, so that a change in one value can be used to predict a change in the other.

Today, statistical correlation is one of the primary tools of big data: it is used to probe cause and effect and to drive predictive analysis that helps us understand, prepare for, and in some instances influence the future. Supercomputers run algorithms that identify correlations in uploaded data and turn them into valuable insights.
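
To make the idea concrete, here is a minimal Python sketch of computing a correlation and using it for a simple prediction; the height and forearm figures are invented for illustration, not Galton's actual measurements.

```python
import numpy as np

# Invented, illustrative measurements (cm); not Galton's real data.
height  = np.array([165, 170, 172, 175, 178, 180, 183, 185, 188, 190])
forearm = np.array([ 42,  44,  44,  45,  46,  47,  47,  48,  49,  50])

# Pearson correlation coefficient: values near +1 mean a strong positive
# linear relationship between the two data series.
r = np.corrcoef(height, forearm)[0, 1]
print(f"correlation r = {r:.2f}")

# A least-squares line lets one value predict the other.
slope, intercept = np.polyfit(height, forearm, 1)
print(f"predicted forearm length for a 176 cm person: {slope * 176 + intercept:.1f} cm")
```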

By the close of the 19th century, all the components of big data were in place, and data management and analysis were in vogue, although limits remained on how much information could be analyzed and stored. The computer age would solve that problem.

Application

Computers expanded the variety and range of things we could record and capture, and that capacity was multiplied exponentially by the introduction of the internet (see Disruptive Technologies: The Internet of Things). Our online behavior is now used to determine our tastes and preferences, with sites such as Google, Twitter, Facebook, and LinkedIn (among many others) recording, analyzing, and storing even private information relating to our health, relationships, finances, and almost anything else. Sensors, including those in our phones, track our every move, and computers allow us to quantify everything under the sun, from location to heart rate to engine vibration.

Computers can measure, record, analyze, and store data on a near-limitless scale, with processing speeds and storage capacity improving daily. The authors note that it originally took a decade to sequence the three billion base pairs of the human genome, while by 2012 the same amount of DNA could be sequenced in a single day.

Computers have enabled us to move on from the previous method of analyzing small samples of data, drawing conclusions with varying margins of error, and building entire theories on those limited samples, to the current ability to analyze the entire data set and gain a far more precise picture of a given subject. This is expressed as N=all.

Even in the recent past, we had to look for correlations in data values manually, choose proxies, and run correlations against them for validation. Today we can load even disorganized data and have "intelligent" algorithms find correlations we may not have suspected. And while that potential is great, gleaning insights from correlations has its downside: sometimes correlations are coincidental and do not reflect a causal relationship. Careful interpretation of computer output is still necessary.

Practical Application

In 2009, Google applied big data to the search terms entered into its engine in order to track the spread of the H1N1 pandemic in real time. Researchers compared data from the Centers for Disease Control and Prevention (CDC) on flu outbreaks between 2003 and 2008 against the most popular search terms entered into Google during that period. Google's system looked for correlations between the frequency with which certain queries were entered and how the flu spread over time and space. The software found a combination of some 45 search terms whose predictions showed a high correlation with official figures countrywide. This predictive model actually proved more reliable than official state figures at the height of the pandemic in 2009.
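
A highly simplified sketch of that kind of analysis might look like the following; the file names, columns, and scaling are hypothetical stand-ins, not Google's or the CDC's actual data or method.

```python
import pandas as pd

# Hypothetical weekly data: one column per candidate search term, plus
# official flu case counts for the same weeks (invented file names).
searches = pd.read_csv("weekly_search_frequencies.csv", index_col="week")
flu = pd.read_csv("official_weekly_flu_cases.csv", index_col="week")["cases"]

# Correlate every candidate search term with the official case counts.
correlations = searches.corrwith(flu).sort_values(ascending=False)

# Keep the terms that track the flu most closely (the book reports that
# Google settled on a combination of about 45 terms).
best_terms = correlations.head(45).index

# A naive "nowcast": average frequency of the best terms each week,
# rescaled to the historical case counts.
signal = searches[best_terms].mean(axis=1)
estimate = signal * (flu.mean() / signal.mean())
print(estimate.tail())
```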

Google has also used big data for translation, harvesting billions of translated pages from the internet to build a database that can translate any uploaded text; the service is known as Google Translate. Google uses much more data than other translation services, which makes its system superior to the competition.

Other Google projects, such as Google Books, use this same big data model to provide thorough service.

As one would expect, some if not most of the information Google collects is also diverted toward its profit-making ventures. For example, while Google's Street View cars are advertised as a service for Google Maps, they have been known to collect data from open Wi-Fi connections, and the data they gather has been used to develop Google's driverless car technology. The company is certainly in the driver's seat for some serious profits on this disruptive technology.

Targeted Advertisement

Amazon.com, a pioneer of targeted advertising, became a big data user when Greg Linden, one of its software engineers, realized that data could outperform the company's in-house review project. When Amazon first came online in 1995, about a dozen critics were hired to review its books in what was called 'the Amazon voice.' Linden designed software that could identify associations between books and recommend titles to people based on their previous choices. When Amazon compared the sales driven by the computer's recommendations against those driven by the in-house reviews, the data-derived recommendations performed far better, and the approach revolutionized e-commerce.
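
Linden's approach is usually described as item-to-item collaborative filtering. The sketch below is not Amazon's actual algorithm, only a minimal illustration of the idea: books frequently bought together get recommended to buyers of either one (the order data is invented).

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical purchase histories: one set of book IDs per customer order.
orders = [
    {"data_book", "stats_book"},
    {"data_book", "stats_book", "novel"},
    {"novel", "poetry"},
    {"data_book", "ml_book"},
]

# Count how often each pair of books appears in the same order.
co_counts = defaultdict(int)
for order in orders:
    for a, b in combinations(sorted(order), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def recommend(book, k=3):
    """Return the books most often co-purchased with `book`."""
    scored = [(other, n) for (b, other), n in co_counts.items() if b == book]
    return [other for other, _ in sorted(scored, key=lambda s: -s[1])[:k]]

print(recommend("data_book"))  # 'stats_book' ranks first (bought together twice)
```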

Facebook also tracks user locations, 'status updates,' and 'likes' to determine which ads to display to a user. These targeted ads can seem invasive and, to many, a bit creepy. An analytical team reviews people's behaviors and locations and determines which ads to display to them. Finding correlations between users and their needs has since become a standard form of advertising.

Manufacturers have started using big data to streamline their operations while improving safety. Sensors placed on machinery monitor the patterns in the data the machines produce on vibration, stress, heat, and sound, and detect changes that might signal future problems. These early-detection and prediction systems help avert breakdowns and ensure timely maintenance.
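
As a rough sketch of this kind of early-warning monitoring (purely illustrative; industrial systems are far more sophisticated), a program could flag sensor readings that drift outside their normal statistical range:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Simulated vibration readings: mostly normal, with a developing fault at the end.
readings = np.concatenate([rng.normal(1.0, 0.05, 500),
                           rng.normal(1.4, 0.10, 20)])

# Establish a baseline from the machine's known-good history.
baseline = readings[:400]
mean, std = baseline.mean(), baseline.std()

# Flag any reading more than 4 standard deviations from the baseline mean.
alerts = np.where(np.abs(readings - mean) > 4 * std)[0]
if alerts.size:
    print(f"possible fault: first anomalous reading at sample {alerts[0]}")
```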

Big data can also boost efficiency outside the factory by using algorithms to determine more efficient and safer routes for trucks, as is routinely done by UPS. This has proven useful: accidents have decreased, along with fuel consumption and other costs. UPS also uses sensors that identify potential breakdowns, following the early-detection method described above.

Car makers use the same model to study how cars are used on the road and apply that information to improve vehicles based on drivers' behavior.

Predictive models and sensors have also been employed by governments to predict possible dangers in infrastructure and to schedule maintenance that can avert disaster. Big data was even used in 2009 by Michael Flowers, head of New York City's first analytics department, appointed by Mayor Michael Bloomberg. Flowers used big data to fight crime by building a model that analyzed incoming calls and predicted which were likely false and which legitimate.

Startups, Health Care and Governments

Big data has also helped launch predictive business models such as Farecast, a business that predicted air ticket prices, saving its customers a lot of money while raking in profits. Its founder, Oren Etzioni, simply collected data on air ticket prices from travel websites and analyzed how prices changed in the run-up to the actual flight, arriving at a business model that could predict prices and save travelers an average of $50 per ticket. He later sold Farecast to Microsoft for $110 million before opening Decide.com, which applied the same successful model to consumer goods, this time saving consumers some $100 on every product purchased.

The ability to measure almost everything in the human body inspired IBM and a team of researchers from the University of Ontario Institute of Technology to develop software that can analyze physiological data from premature babies. This can help determine the likelihood of infections and how well the infants are responding to treatment.

Big data has also helped reduce hospital readmission rates. For example, MedStar Washington Hospital used Microsoft's Amalga software and "analyzed several years of its anonymized medical records—patient demographics, tests, diagnoses, treatments, and more—" which helped track the factors most responsible for readmission. A common factor at this hospital turned out to be mental distress, and treating it before discharge helped reduce readmission rates.

Companies such as 23andMe have already been sequencing individuals' DNA to help detect specific genetic susceptibilities. Sequencing a full genome is still costly, but people like Apple's Steve Jobs have undergone it in their battle against cancer; the procedure may have bought Jobs a few extra years, thanks to big data. This technology will be very useful once it becomes affordable for everyone else.

Governments have been slow to recognize the enormous usefulness of big data (for purposes other than surveillance, that is), especially given how much information they can access about their citizens. Some governments are catching on, though, especially those interested in curbing costs and ensuring safety and efficiency. The American government, naturally, has led the way, using big data to estimate the consumer price index (CPI) and to measure inflation.

Open Data

Since some governments are slow to apply big data methods to state business, many argue that it would be more useful to make much of the information they hold freely available to organizations and individuals. The authors say governments act only as custodians of the information they collect and should release that data publicly for commercial and civic purposes, since the private sector is likely to be more innovative with it.

To that end, the US government opened a free data website, data.gov, where information from the federal government can be freely accessed. From 47 datasets in 2009, the site grew to 450,000 datasets from 172 agencies three years later. The UK has made similar strides, as have Australia, Chile, Brazil, and Kenya.

Big Data Ends Privacy: Enter Profiling

Some governments may be releasing more information, but all of them are stockpiling far more than they give away, and most of it is personal, private information. For example, "the U.S. National Security Agency (NSA) is said to intercept and store 1.7 billion emails, phone calls, and other communications every day, according to a Washington Post investigation in 2010…" As we have seen, the private sector does this too; the internet tracks our locations, what we like, and everything in between. We are losing our privacy.

Parole boards and police are now using big data to profile people, predicting where crime is likely to be higher and even whether or not to release a prisoner on parole. In many US states, people are 'questioned' based on where they are and whether an algorithm places them in a statistical category deemed likely to commit a crime.

The Solutions and Conclusion

The authors, as would be expected, suggest a few solutions. Regulation of the use of big data should be given to internal and independent auditors (they even suggest a name: 'algorithmists') who would impartially and confidentially scrutinize all big data practices for ethical or legal infractions as well as technical errors. They also suggest amending the law to accommodate big data.

While the big data era is just beginning, the authors remind us that not every societal problem is one big data can address. Businesses and governments obviously continue to make use of big data, but it has been both used and misused, and our privacy has all but evaporated (governments spy on citizens and on other states, social media companies track people's likes and dislikes, and so on), even as big data has made communication very easy and its predictive models have helped us avert looming dangers. While the public jury is still out, with many people unsure whether to love big data or hate it, it is a reality. Perhaps, the authors suggest, if regulation can help us keep some privacy, big data will be more welcome; otherwise paranoia will remain high, and not without reason. People are being categorized according to where big data puts them (statistically, rather than by any specific personal action), and some have been arrested or denied freedom unnecessarily.

Visualizing the Universe: Virtual Reality

Virtual reality holds tremendous promise for the future, and it is progressing on numerous fronts. Movies like the remake of TRON, the Matrix series, Minority Report, and Avatar have given us a glimpse of what could be, and scientists and technicians, both public and private, are racing to make the virtual indistinguishable from reality to all five senses.

The Electronic Visualization Laboratory at the University of Illinois at Chicago is one of the best-known virtual reality development institutes in the world and the inventor of the CAVE (CAVE Automatic Virtual Environment) and its second iteration, CAVE2.

Virtual reality, simply put, is a three dimensional, computer generated simulation in which one can navigate around, interact with, and so become immersed in another environment.

Douglas Engelbart, an electrical engineer and former naval radar technician, is credited with the first exploration into virtual reality. He viewed computers as more than glorified adding machines. It was the 1950s, and TVs had barely turned color. His goal was to connect the computer to a screen.

By the early 1960s, communications technology intersecting with computing and graphics was well underway. Vacuum tubes turned into transistors. Pinball machines were being replaced by video games.

Scientific visualization moved from bar charts, mathematical diagrams, and line drawings to dynamic images generated with computer graphics. Computerized scientific visualization enabled scientists to assimilate huge amounts of data and improved understanding of complex processes like DNA sequences, molecular models, brain maps, fluid flows, and celestial events. A goal of scientific visualization is to capture the dynamic qualities of a wide range of systems and processes in images, but early computer graphics and animation were not interactive. Animation, despite its moving pictures, was static in the sense that once created it could not be altered. Interactivity became the primary driver in the development of virtual reality.

By the end of the 1980s, supercomputers and high-resolution graphics workstations were paving the way toward more interactive means of visualization. As computer technology developed, MIT and other high-tech research centers began exploring Human-Computer Interaction (HCI), which is still a major area of research, now combined with artificial intelligence.

The mouse seemed clumsy, and devices such as light pens and touch screens were explored as alternatives. Eventually CAD (computer-aided design) programs emerged, enabling designers to model and simulate the inner workings of vehicles, create blueprints for city development, and experiment with computerized designs for a wide range of industrial products.

Flight simulators, the predecessors of these computerized programs and models, might be considered the first virtual reality-like environments. The early flight simulators consisted of mock cockpits built on motion platforms that pitched and rolled. Their limitation was that they lacked visual feedback. This changed when video displays were coupled with the model cockpits.

In 1979, the military began experimenting with head-mounted displays. By the early 1980s, better software, hardware, and motion-control platforms enabled pilots to navigate through highly detailed virtual worlds.

US Army’s New Virtual Simulation Training System

A natural consumer of computer graphics was the entertainment industry, which, like the military and industry, was the source of many valuable spin-offs in virtual reality. By the 1970s, some of Hollywood’s most dazzling special effects were computer-generated. Plus, the video game business boomed.

One direct spin-off of entertainment's venture into computer graphics was the dataglove, a computer interface device that detects hand movements. It was invented to produce music by linking hand gestures to a music synthesizer. NASA was one of the first customers for the new device. The biggest consumer of the dataglove was Mattel, which adapted it into the Power Glove and used it in video games for kids. The glove is no longer sold.

Helmet-mounted displays and power gloves combined with 3D graphics and sounds hinted at the potential for experiencing totally immersive environments. There were practical applications as well. Astronauts, wearing goggles and gloves, could manipulate robotic rovers on the surface of Mars. Of course, some people might not consider a person on Mars as a practical endeavor. But at least the astronaut could explore dangerous terrain without risk of getting hurt.

NASA VIDEO

The virtual reality laboratory at the Johnson Space Center helped astronauts Steve Bowen and Al Drew train for two spacewalks they conducted in 2011.

Virtual Reality is not just a technological marvel easily engaged like sitting in a movie theater or in front of a TV. Human factors are crucial to VR. Age, gender, health and fitness, peripheral vision, and posture come into play. Everyone perceives reality differently, and it’s the same for VR. Human Computer Interaction (HCI) is a major area of research.

The concept of a room with graphics projected from behind the walls was invented at the Electronic Visualization Laboratory at the University of Illinois at Chicago in 1992. The images on the walls were in stereo to give a depth cue. The main advantage over ordinary graphics systems is that users are surrounded by the projected images, so the images fill their main field of vision. This environment was dubbed the CAVE (CAVE Automatic Virtual Environment). In 2012 the dramatically improved environment, CAVE2, was launched.

CAVE2 is the world's first near-seamless, flat-panel-based, surround-screen immersive system. Unique to CAVE2 is that it enables users to view both 2D and 3D information simultaneously, providing more flexibility for mixed-media applications. CAVE2 is a cylindrical system 24 feet in diameter and 8 feet tall, consisting of 72 near-seamless, off-axis-optimized passive-stereo LCD panels that create an approximately 320-degree panoramic environment for displaying information at 37 megapixels in stereoscopic 3D.

The original CAVE was designed from the beginning to be a useful tool for scientific visualization. CAVE2 can be coupled to remote data sources, supercomputers, and scientific instruments via dedicated, super-high-speed fiber networks. Various CAVE-like environments exist all over the world today. Imagery on all 72 panels of the room allows users to turn around and look in all directions, so their perception and experience are never limited, which is necessary for full immersion.

Any quick review of the history of optics, photography, computer graphics, media, broadcasting, and even sci-fi is enough to suggest that virtual reality will become as commonplace as TV and movies. There are far too many practical applications, such as surgery, flight simulation, space exploration, chemical engineering, and underwater exploration.

And these are just the immediate applications, assuming we only receive virtual reality data via our external senses. Once the information is transmitted via electrical impulses directly to the brain, not only will the virtual be indistinguishable from reality, but stopping there will be merely an option. We'll be able to stimulate the brain in ways nature never could, producing never-before-experienced sensations or even entirely new senses. New emotions, ranges, and combinations of the visceral and ethereal.

Maybe it will be Hollywood that stops speculating and starts experimenting. The thought of being chased by Freddy Krueger is one thing, but to actually be chased by Freddy Krueger is utterly terrifying. No more jumping out of our seats when the face of a giant shark snaps its teeth at us. Now we can really know what it's like to be chased by cops speeding down a thruway at 100 mph. We can feel and smell pineapples on a tropical beach. We can catch bad guys, defeat aliens in a starship battle, and have conversations with presidents in our bare feet. We can simulate what it would feel like to soar across the galaxy as a stream of subatomic particles.

With virtual reality, the only limit is the imagination.

Visualizing the Universe: Scientific Visualization

Visualization, in its broadest terms, represents any technique for creating images to represent abstract data. Scientific visualization has grown to encompass many other areas, such as business (information visualization), computing (process visualization), medicine, chemical engineering, flight simulation, and architecture. In fact, there is hardly an area of human endeavor that does not touch scientific visualization in one form or another.

From a crude perspective, scientific visualization was born out of the conversion of text into graphics: describing an apple with words, for instance. Bar graphs, charts, and diagrams were two-dimensional forerunners in converting data into visual representations. Obviously, words and 2D representations can only go so far, and more mathematically complete datasets were needed to describe an object's exterior, interior, and functioning processes.


Early Scientific Visualization: Charles Minard’s 1869 chart showing the number of men in Napoleon’s 1812 Russian campaign army, their movements, as well as the temperature they encountered on the return path.

Such datasets were huge, and it wasn’t until the development of supercomputers with immense processing power combined with sophisticated digital graphics workstations that conversion from data into a more dynamic, 3-D graphical representation was possible. From the early days of computer graphics, users saw the potential of computer visualization to investigate and explain physical phenomena and processes, from repairing space vehicles to chaining molecules together.

In general the term “scientific visualization” is used to refer to any technique involving the transformation of data into visual information. It characterizes the technology of using computer graphics techniques to explore results from numerical analysis and extract meaning from complex, mostly multi-dimensional data sets.

Traditionally, the visualization process consists of filtering raw data to select a desired resolution and region of interest, mapping that result into a graphical form, and producing an image, animation, or other visual product. The result is evaluated, the visualization parameters modified, and the process run again.

Three-dimensional imaging of medical datasets was introduced after clinical CT (computed tomography) scanning became a reality in the 1970s. A CT scan produces images of the inside of an object by obtaining a series of two-dimensional x-ray axial images.

The individual x-ray axial slice images are taken using an x-ray tube that rotates around the object, taking many scans as the object is gradually passed through the scanner. The multiple scans from each 360-degree sweep are then processed to produce a single cross-section. See MRI and CAT scanning in the Optics section.

The goal of the visualization process is to generate visually understandable images from abstract data. Several steps must be carried out during the generation process, and these steps are arranged in the so-called Visualization Pipeline.

Visualization Methods

Data is obtained either by sampling or measuring, or by executing a computational model. Filtering is a step that pre-processes the raw data and extracts information to be used in the mapping step. Filtering includes operations like interpolating missing data or reducing the amount of data. It can also involve smoothing the data and removing errors from the data set.

Mapping is the core of the visualization process. It transforms the pre-processed, filtered data into 2D or 3D geometric primitives with appropriate attributes such as color or opacity. The mapping process is crucial to the later visual representation of the data. Rendering then uses the geometric primitives from the mapping step to generate the output image. There are a number of different filtering, mapping, and rendering methods used in the visualization process.
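
A toy version of that filter / map / render pipeline, sketched here in Python with NumPy, SciPy, and Matplotlib (real visualization systems are of course far more elaborate):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.ndimage import gaussian_filter

# Data: a noisy 2D scalar field standing in for measured or simulated values.
x, y = np.meshgrid(np.linspace(-3, 3, 200), np.linspace(-3, 3, 200))
raw = np.sin(x**2 + y**2) + 0.3 * np.random.randn(*x.shape)

# Filtering: smooth the raw values to reduce noise.
filtered = gaussian_filter(raw, sigma=2)

# Mapping: turn values into colors (a colormap) and geometry (contour lines).
fig, ax = plt.subplots()
img = ax.imshow(filtered, cmap="viridis", extent=[-3, 3, -3, 3], origin="lower")
ax.contour(x, y, filtered, colors="white", linewidths=0.5)
fig.colorbar(img, label="field value")

# Rendering: produce the final image.
plt.show()
```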

Some of the earliest medical visualizations created 3D representations from CT scans, with help from electron microscopy. The images were geometric shapes such as polygons and lines forming a wireframe that represented three-dimensional volumetric objects. Similar techniques are used in creating animation for Hollywood films. With sophisticated rendering, motion could be added to the wireframe model to illustrate processes such as blood flow, or fluid dynamics in chemical and physical engineering.

The development of integrated software environments took visualization to new levels. Some of the systems developed during the 80s include IBM’s Data Explorer, Ohio State University’s apE, Wavefront’s Advanced Visualizer, SGI’s IRIS Explorer, Stardent’s AVS and Wavefront’s Data Visualizer, Khoros (University of New Mexico), and PV-WAVE (Precision Visuals’ Workstation Analysis and Visualization Environment).

Crude 2005 Simulation – Typhoon Mawar

These visualization systems were designed to help scientists, who often knew little about how graphics are generated. The most usable systems were built around a graphical interface: software modules were developed independently, with standardized inputs and outputs, and were visually linked together in a pipeline. These systems are sometimes called modular visualization environments (MVEs).

MVEs allowed the user to create visualizations by selecting program modules from a library and specifying the flow of data between modules using an interactive graphical networking or mapping environment. Maps or networks could be saved for later recall.

General classes of modules included:

•  data readers – input the data from the data source
•  data filters – convert the data from a simulation or other source into another form which is more informative or less voluminous
•  data mappers – convert information into another domain, such as 2D or 3D geometry or sound
•  viewers or renderers – rendering the 2D and 3D data as images
•  control structures – display devices, recording devices, open graphics windows
•  data writers – output the original or filtered data

MVEs required no graphics expertise, allowed for rapid prototyping and interactive modifications, promoted code reuse, allowed new modules to be created and allowed computations to be distributed across machines, networks and platforms.

Earlier systems were not always good performers, especially on larger datasets, and image quality was poor.

Newer visualization systems came out of the commercial animation software industry. The Wavefront Advanced Visualizer was a modeling, animation and rendering package which provided an environment for interactive construction of models, camera motion, rendering and animation without any programming. The user could use many supplied modeling primitives and model deformations, create surface properties, adjust lighting, create and preview model and camera motions, do high quality rendering, and save images to video tape.

Acquiring data is accomplished in a variety of ways: CT scans, MRI scans, ultrasound, confocal microscopy, computational fluid dynamics, and remote sensing. Remote sensing involves gathering data and information about the physical world by detecting and measuring phenomena such as radiation, particles, and fields associated with objects located beyond the immediate vicinity of the sensing devices. It is most often used to acquire and interpret geospatial data for features, objects, and classes on the Earth's land surface, oceans, and atmosphere, and in outer space for mapping the exteriors of planets, stars, and galaxies. Data is also obtained via aerial photography, spectroscopy, radar, radiometry, and other sensor technologies.

Another major approach to 3D visualization is volume rendering. Volume rendering allows the display of information throughout a 3D data set, not just on its surface. Pixar Animation, a spin-off from George Lucas's Industrial Light & Magic (ILM), created a volume rendering method, or algorithm, that used independent 3D cells within the volume, called "voxels".

The volume is composed of voxels, each holding a value for a property such as density; a surface occurs between groups of voxels with two different values. The algorithm used color and intensity values from the original scans, together with gradients obtained from the density values, to compute the 3D solid. Other approaches include ray-tracing and splatting.
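
A bare-bones illustration of the voxel idea (not Pixar's actual algorithm): march through a 3D density grid along one axis and composite the samples front to back with a simple emission/absorption model.

```python
import numpy as np
import matplotlib.pyplot as plt

# Voxel grid: the density of a fuzzy sphere sampled on a 64^3 lattice.
n = 64
axis = np.linspace(-1, 1, n)
x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")
density = np.clip(1.0 - np.sqrt(x**2 + y**2 + z**2), 0.0, 1.0)

# Cast one ray per (x, y) pixel straight along the z axis and composite
# the voxels front to back.
opacity_scale = 0.15
image = np.zeros((n, n))
transmittance = np.ones((n, n))
for k in range(n):                       # step through the volume, nearest slab first
    slab = density[:, :, k]
    alpha = slab * opacity_scale         # how strongly this slab emits/absorbs
    image += transmittance * alpha * slab
    transmittance *= 1.0 - alpha         # light remaining after passing the slab

plt.imshow(image, cmap="magma")
plt.title("Toy volume rendering of a spherical density field")
plt.show()
```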

Scientific visualization draws from many disciplines, such as computer graphics, image processing, art, graphic design, human-computer interaction (HCI), cognition, and perception. The fine arts are extremely useful to scientific visualization: art history can offer insights into visual form as well as ways of imagining scenarios that have little or no data behind them. Alongside advances in computing itself, the invention of LCD screens helped tie it all together, bringing visual graphics to life with better resolution, lighter weight, and faster display of data than the computer monitors of the past.

Computer simulations have become a useful part of modeling natural systems in physics, chemistry, and biology, human systems in economics and social science, and the engineering of new technology. Simulations render mathematical models into visual representations that are easier to understand. Computer models can be classified as stochastic or deterministic.

Stochastic models use random number generators to model chance or random events, such as genetic drift. A discrete-event simulation (DES) manages events in time; most simulations are of this type. A continuous simulation uses differential equations (either partial or ordinary), implemented numerically: the simulation program solves the equations periodically and uses the results to update the state and output of the simulation. Most flight and racing-car simulations are of this type, as are simulated electrical circuits.
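
A tiny continuous simulation might look like the following sketch (illustrative only): it integrates the differential equation of a damped oscillator, updating the state a little at every time step, exactly in the spirit described above.

```python
# Damped harmonic oscillator: acceleration = -k*x - c*v
k, c = 4.0, 0.3            # stiffness and damping coefficients (arbitrary values)
dt, steps = 0.01, 1000     # time step size and number of steps
x, v = 1.0, 0.0            # initial position and velocity (the simulation state)

history = []
for _ in range(steps):
    a = -k * x - c * v     # evaluate the differential equation
    v += a * dt            # advance the state numerically (simple Euler-style step)
    x += v * dt
    history.append(x)

print(f"position after {steps * dt:.0f} seconds: {history[-1]:+.3f}")
```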

Other methods include agent-based simulation. In agent-based simulation, the individual entities in the model (such as molecules, cells, trees, or consumers) are represented directly (rather than by their density or concentration) and possess an internal state and a set of behaviors or rules that determine how the agent's state is updated from one time step to the next. A minimal sketch of the idea appears below.
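
This sketch is purely illustrative: every entity is represented directly as an object with its own internal state and an update rule applied at each time step.

```python
import random

class Cell:
    """One agent: an individual entity with internal state and a behavior rule."""
    def __init__(self):
        self.energy = 10

    def step(self):
        # At each time step the cell randomly gains or loses a little energy.
        self.energy += random.choice([-2, -1, 1, 2])
        return self.energy > 0          # still alive?

# Represent every entity directly, rather than as a density or concentration.
population = [Cell() for _ in range(1000)]

for t in range(50):                     # run the model for 50 time steps
    population = [cell for cell in population if cell.step()]

print(f"surviving agents after 50 steps: {len(population)}")
```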

Winter Simulation Conference

The Winter Simulation Conference is an important annual event covering leading-edge developments in simulation analysis and modeling methodology. Areas covered include agent-based modeling, business process reengineering, computer and communication systems, construction engineering and project management, education, healthcare, homeland security, logistics, transportation, distribution, manufacturing, military operations, risk analysis, virtual reality, web-enabled simulation, and the future of simulation. The WSC provides educational opportunity for both novices and experts.


Visualizing the Universe: Telescopes

Telescopes

Galileo's telescope

Without telescopes, the stars we see every night would just be twinkling little lights. It is hard to imagine what people in pre-telescope times thought those twinkling lights were. For some it must have been frightening; for others, awe-inspiring.

It began with optics: the lens. Spectacles were being worn in Italy as early as 1300. By the one-thing-leads-to-another theory, the ability to see better no doubt led to the desire to see farther. Three hundred years later, a Dutch spectacle maker named Hans Lippershey put two lenses together and achieved magnification. But when he tried to sell the idea, he discovered that quite a number of other experimenters had made the same discovery.

Also in the 1600s, Galileo, an instrument maker in Venice, started working on a device that many thought had little use other than creating optical illusions (although they weren't called that at the time). In 1610 he published a description of his night-sky observations in a small paper called Starry Messenger (Sidereus Nuncius).

He reported that the moon was not smooth, as many had believed, but rough and covered with craters. He also proposed that the Milky Way was composed of millions of stars and that Jupiter had four moons. And he challenged the geocentric view of the world system (the universe revolves around the Earth) in favor of the heliocentric view (the planets revolve around the Sun), a notion proposed by Copernicus more than half a century earlier. The device he built to make these discoveries came to be known as the telescope.

The telescope was a long, thin tube in which light passes in a straight line from the aperture (the front objective lens) to the eyepiece at the opposite end of the tube. Galileo's device was the forerunner of what are now called refracting telescopes, because the objective lens bends, or refracts, light.

NASA's Great Observatories program comprised a series of space telescopes designed to give the most complete picture of objects across many different wavelengths. Each observatory studies a particular wavelength region in detail.

The telescopes in order of launch were: the Hubble Space Telescope (1990), Compton Gamma Ray Observatory (1991), Chandra X-ray Observatory (1999), and the Spitzer Space Telescope (2003).

The Kepler mission, launched in 2009, joined the great observatories and spent about four years looking for Earth-like planets in Earth-like orbits around Sun-like stars. It monitored over 100,000 stars in the constellations Lyra and Cygnus in hopes of finding a few dozen planetary transits, in which a star's light dims slightly as a planet passes across its disk. Instead it found thousands of exoplanets, more than any scientist's wildest dream. Recently put to rest after its gyroscopes finally gave out, the spacecraft has left scientists with years of data to examine and more hope than ever that Earth-like planets exist in the galaxy and beyond.

The Kepler habitable-planet finder

NASA also has launched many smaller observatories through its Explorer program. These missions have probed the “afterglow” of the Big Bang (COBE and WMAP), the ultraviolet light from other galaxies (GALEX and EUVE), and the violent explosions known as gamma-ray bursts (SWIFT).

Sometimes several of the observatories are used to look at the same object. Astronomers can analyze an object thoroughly by studying it in many different kinds of light. An object will look different in X-ray, visible, and infrared light.

Experiments with color in the 1600s explored the way a prism refracts white light into an array of colors. A lens acts somewhat like a circular prism, separating the colors of visible light, an effect known as chromatic aberration that limited the effectiveness of early telescopes. A new telescope design used a parabolic mirror to collect light and concentrate the image before presenting it to the eyepiece. This resulted in the reflective telescope.

Reflective Telescopes

Reflective telescopes are constructed with giant mirrors (instead of lenses) and collect far more light than the human eye can, in order to see objects that are too faint and far away.

Solar Telescopes, designed to see the Sun, have the opposite problem: the target emits too much light. Because of the sun’s brightness, astronomers must filter out much of the light to study it. Solar telescopes are ordinary reflecting telescopes with some important changes.

Because the Sun is so bright, solar telescopes don’t need huge mirrors that capture as much light as possible. The mirrors only have to be large enough to provide good resolution. Instead of light-gathering power, solar telescopes are built to have high magnification. Magnification depends on focal length. The longer the focal length, the higher the magnification, so solar telescopes are usually built to be quite long.

Since the telescopes are so long, the air in the tube becomes a problem. As the temperature of the air changes, the air moves. This causes the telescope to create blurry images. Originally, scientists tried to keep the air inside the telescope at a steady temperature by painting solar telescopes white to reduce heating. White surfaces reflect more light and absorb less heat. Today the air is simply pumped out of the solar telescopes’ tubes, creating a vacuum.

Because it’s so necessary to control the air inside the telescope and the important instruments are large and bulky, solar telescopes are designed not to move. They stay in one place, while a moveable mirror located at the end of the telescope, called a tracking mirror, follows the Sun and reflects its light into the tube. To minimize the effects of heating, these mirrors are mounted high above the ground.

Astronomers have studied the Sun for a long time. Galileo, among others, had examined sunspots. Other early astronomers investigated the outer area of the Sun, called the corona, which was only visible during solar eclipses.


Sun Spots

Besides the telescope, other instruments have been used to study the Sun. The spectroscope, a device invented in 1815 by the German optician Joseph von Fraunhofer, spreads sunlight into its component colors and helps astronomers figure out what elements stars contain. Scientists used a spectrum of the Sun to discover the element helium, named after the Greek word for Sun, "helios."

In the 1890s, the American astronomer George Ellery Hale combined spectroscopy and photography to come up with a new and better way to study the Sun. Hale called his device the "spectroheliograph."

The spectroheliograph allowed astronomers to choose a certain type of light to analyze. For example, they could take a picture of the Sun using only the kind of light produced by calcium atoms. Some types of light make it easier to see details such as sunspots and solar prominences.

In 1930, the French astronomer Bernard Lyot came up with another device that helped scientists study both the Sun and objects nearby. The coronagraph uses a disk to block much of the light from the Sun, revealing features that would otherwise be erased by the bright glare. Close observations of the Sun’s corona, certain comets, and other details and objects are made possible by the coronagraph. Coronagraphs also allow scientists to study features like solar flares and the Sun’s magnetic field.

Today, more technologically advanced versions of the spectroheliograph and coronagraph are used to study the Sun. The McMath-Pierce Solar Telescope on Kitt Peak in Arizona is the world’s largest solar telescope. The Solar and Heliospheric Observatory project is a solar telescope in space that studies the Sun’s interior and corona, and solar wind, in ultraviolet and X-rays as well as visible light. Astronomers also use a technique called helioseismology, a kind of spectroscopy that studies sound waves in the Sun, to examine the Sun down to its core.

Basic telescope terms:

  • Concave – lens or mirror that causes light to spread out.
  • Convex – lens or mirror that causes light to come together to a focal point.
  • Field of view – area of the sky that can be seen through the telescope with a given eyepiece.
  • Focal length – distance required by a lens or mirror to bring the light to a focus.
  • Focal point or focus – point at which light from a lens or mirror comes together.
  • Magnification (power) – telescope’s focal length divided by the eyepiece’s focal length.
  • Resolution – how close two objects can be and yet still be detected as separate objects, usually measured in arc-seconds (this is important for revealing fine details of an object, and is related to the telescope’s aperture).

Telescopes come in all shapes and sizes, from a little plastic tube bought at a toy store for $2 to the Hubble Space Telescope, which weighs several tons. Amateur telescopes fit somewhere in between. Even though they are not nearly as powerful as the Hubble, they can do some incredible things. For example, a small 6-inch (15 centimeter) scope can read the writing on a dime from 150 feet (about 46 meters) away.

Most telescopes come in one of two forms: the refractor and the reflector. The refractor telescope uses glass lenses; the reflector telescope uses mirrors instead of lenses. Both try to accomplish the same thing, but in different ways.

Telescopes are, metaphorically, giant eyes. The reason our eyes can't read the printing on a dime 150 feet away is that they are simply too small. The eyes, obviously, have limits. A bigger eye would collect more light from an object and create a brighter image.

The objective lens (in refractors) or primary mirror (in reflectors) collects light from a distant object and brings that light, or image, to a point or focus. An eyepiece lens takes the bright light from the focus of the objective lens or primary mirror and “spreads it out” (magnifies it) to take up a large portion of the retina. This is the same principle that a magnifying glass (lens) uses. A magnifying glass takes a small image on a sheet of paper, for instance, and spreads it out over the retina of the eye so that it looks big.

When the objective lens or primary mirror is combined with the eyepiece, it makes a telescope. The basic idea is to collect light to form a bright image inside the telescope, then magnify that image. The simplest telescope design is therefore a big lens that gathers the light and directs it to a focal point, with a small lens that brings the image to the eye.

A telescope’s ability to collect light is directly related to the diameter of the lens or mirror (the aperture) used to gather light. Generally, the larger the aperture, the more light the telescope collects and brings to focus, and the brighter the final image. The telescope’s magnification, its ability to enlarge an image, depends on the combination of lenses used. The eyepiece performs the magnification. Magnification can be achieved by almost any telescope using different eyepieces.
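
To make those relationships concrete, here is a small worked example in Python; the telescope figures are hypothetical, chosen only for illustration.

```python
# Hypothetical 6-inch (150 mm) telescope with a 1200 mm focal length.
aperture_mm = 150
focal_length_mm = 1200
eyepiece_focal_mm = 10

# Magnification = telescope focal length / eyepiece focal length.
magnification = focal_length_mm / eyepiece_focal_mm            # 120x

# Light-gathering power scales with the area of the aperture, so compare
# it with the dark-adapted human pupil (roughly 7 mm across).
eye_pupil_mm = 7
light_gain = (aperture_mm / eye_pupil_mm) ** 2                 # about 460x more light

print(f"magnification: {magnification:.0f}x, light gain vs. the eye: {light_gain:.0f}x")
```

Swapping in a 25 mm eyepiece on the same hypothetical telescope would drop the magnification to 48x, which is how different eyepieces change a telescope's power.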

Refractors

Hans Lippershey, living in Holland, is credited with inventing the refractor in 1608. It was first used by the military. Galileo was the first to use it in astronomy. Both Lippershey’s and Galileo’s designs used a combination of convex and concave lenses. Around 1611, Kepler improved the design to have two convex lenses, which made the image upside-down. Kepler’s design is still the major design of refractors today, with a few later improvements in the lenses and in the glass used to make the lenses.

Refractors have a long tube, made of metal, plastic, or wood, with a glass combination lens at the front end (the objective lens) and a second glass combination lens (the eyepiece). The tube holds the lenses in place at the correct distance from one another and helps keep out dust, moisture, and light that would interfere with forming a good image. The objective lens gathers the light and bends, or refracts, it to a focus near the back of the tube. The eyepiece brings the image to the eye and magnifies it. Eyepieces have much shorter focal lengths than objective lenses.

Achromatic refractors use lenses that are not extensively corrected to prevent chromatic aberration, which is a rainbow halo that sometimes appears around images seen through a refractor. Instead, they usually have “coated” lenses to reduce this problem. Apochromatic refractors use either multiple-lens designs or lenses made of other types of glass (such as fluorite) to prevent chromatic aberration. Apochromatic refractors are much more expensive than achromatic refractors.

Refractors have good resolution, high enough to see details in planets and binary stars. However, it is difficult to make large objective lenses (greater than 4 inches or 10 centimeters) for refractors. Refractors are relatively expensive. Because the aperture is limited, a refractor is less useful for observing faint, deep-sky objects, like galaxies and nebulae, than other types of telescopes.

Isaac Newton developed the reflector telescope in the late 1660s, in response to the chromatic aberration (rainbow halo) problem that plagued refractors during his time. Instead of using a lens to gather light, Newton used a curved metal mirror (the primary mirror) to collect the light and reflect it to a focus. Mirrors do not have the chromatic aberration problems that lenses do. Newton placed the primary mirror at the back of the tube.

Because the mirror reflected light back up the tube, he had to use a small, flat mirror (the secondary mirror) in the focal path of the primary mirror to deflect the image out through the side of the tube to the eyepiece; otherwise the observer's head would block the incoming light. Because the secondary mirror is so small, it does not block the image gathered by the primary mirror.

The Newtonian reflector remains one of the most popular telescope designs in use today.

Rich-field (or wide-field) reflectors are a type of Newtonian reflector with short focal ratios and low magnification. The focal ratio, or f/number, is the focal length divided by the aperture, and relates to the brightness of the image. They offer wider fields of view than longer focal ratio telescopes, and provide bright, panoramic views of comets and deep-sky objects like nebulae, galaxies and star clusters.

Dobsonian telescopes are a type of Newtonian reflector with a simple tube and alt-azimuth mounting. They are relatively inexpensive because they are made of plastic, fiberglass or plywood. Dobsonians can have large apertures (6 to 17 inches, 15 to 43 centimeters). Because of their large apertures and low price, Dobsonians are well-suited to observing deep-sky objects.

Reflector telescopes have their own problems. Spherical aberration occurs when light reflected from the mirror's edge focuses to a slightly different point than light reflected from the center. Astigmatism occurs when the mirror is not ground symmetrically about its center.

Consequently, images of stars focus to crosses rather than points. Coma occurs when stars near the edge of the field look elongated, like comets, while those in the center are sharp points of light. All reflector telescopes also experience some loss of light: the secondary mirror obstructs some of the light coming into the telescope, and a mirror's reflective coating returns at most about 90 percent of the incoming light.

Compound or catadioptric telescopes are hybrid telescopes that have a mix of refractor and reflector elements in the design. The first compound telescope was made by German astronomer Bernhard Schmidt in 1930. The Schmidt telescope had a primary mirror at the back of the telescope, and a glass corrector plate in the front of the telescope to remove spherical aberration. The telescope was used primarily for photography, because it had no secondary mirror or eyepieces. Photographic film is placed at the prime focus of the primary mirror. Today, the Schmidt-Cassegrain design, which was invented in the 1960s, is the most popular type of telescope. It uses a secondary mirror that bounces light through a hole in the primary mirror to an eyepiece.

The second type of compound telescope was invented by the Russian astronomer D. Maksutov, although a Dutch astronomer, A. Bouwers, came up with a similar design in 1941, before Maksutov. The Maksutov telescope is similar to the Schmidt design but uses a more spherical corrector lens. The Maksutov-Cassegrain design is similar to the Schmidt-Cassegrain design.

Telescope Mounts

Telescope Mounts are another important feature of telescopes. The alt-azimuth is a type of telescope mount, similar to a camera tripod, that uses a vertical (altitude) and a horizontal (azimuth) axis to locate an object. An equatorial mount uses two axes (right ascension, or polar, and declination) aligned with the poles to track the motion of an object across the sky.

The telescope mount keeps the telescope steady, points it at whatever object is being viewed, and adjusts for the apparent movement of the stars caused by the Earth's rotation. It also leaves the observer's hands free to focus, change eyepieces, and perform other tasks.

The alt-azimuth mount has two axes of rotation, a horizontal axis and a vertical axis. To point the telescope at an object, the mount is rotated along the horizon (azimuth axis) to the object’s horizontal position. Then, it tilts the telescope, along the altitude axis, to the object’s vertical position. This type of mount is simple to use, and is most common in inexpensive telescopes.

Alt-azimuth mount

There are two variations of the alt-azimuth mount. The ball and socket is used in inexpensive rich-field telescopes. It has a ball shaped end that can rotate freely in the socket mount. The rocker box is a low center-of-gravity box mount, usually made of plywood, with a horizontal circular base (azimuth axis) and Teflon bearings for the altitude axis. This mount is usually used on Dobsonian telescopes. It provides good support for a heavy telescope, as well as smooth, frictionless motion.

Although the alt-azimuth mount is simple and easy to use, it does not properly track the motion of the stars. In trying to follow the motion of a star, the mount produces a “zigzag” motion, instead of a smooth arc across the sky. This makes this type of mount useless for taking photographs of the stars.

The equatorial mount also has two perpendicular axes of rotation: right ascension and declination. However, instead of being oriented up and down, it is tilted at the same angle as the Earth’s axis of rotation. The equatorial mount also comes in two variations. The German equatorial mount is shaped like a “T.” The long axis of the “T” is aligned with the Earth’s pole. The Fork mount is a two-pronged fork that sits on a wedge that is aligned with the Earth’s pole. The base of the fork is one axis of rotation and the prongs are the other.

When properly aligned with the Earth’s poles, equatorial mounts can allow the telescope to follow the smooth, arc-like motion of a star across the sky. They can also be equipped with “setting circles,” which allow easy location of a star by its celestial coordinates (right ascension, declination). Motorized drives allow a computer (laptop, desktop or PDA) to continuously drive the telescope to track a star. Equatorial mounts are used for astrophotography.

Eyepiece

An eyepiece is the second lens in a refractor, or the only lens in a reflector. Eyepieces come in many optical designs, and consist of one or more lenses in combination, functioning almost like mini-telescopes. The purposes of the eyepiece are to produce and allow changing the telescope’s magnification, produce a sharp image, provide comfortable eye relief (the distance between the eye and the eyepiece when the image is in focus), and determine the telescope’s field of view.

Field of view comes in two forms. The "apparent" field is how much of the sky, in degrees, is seen edge-to-edge through the eyepiece alone (this is specified on the eyepiece). The "true" (or real) field is how much of the sky can be seen when that eyepiece is placed in the telescope (true field = apparent field / magnification).
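
For example (hypothetical numbers): a 1200 mm focal-length telescope used with a 10 mm eyepiece gives a magnification of 120x, so if that eyepiece has a 50-degree apparent field, the true field is 50 / 120, or roughly 0.4 degrees, a little less than the apparent width of the full Moon.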

There are many types of eyepiece designs: Huygens, Ramsden, Orthoscopic, Kellner and RKE, Erfle, Plossl, Nagler, and Barlow (used in combination with another eyepiece to increase magnification 2 to 3 times). All eyepieces have problems and are designed to fit specific telescopes.

Eyepieces with illuminated reticules are used exclusively for astrophotography. They aid in guiding the telescope to track an object during a film exposure, which can take anywhere from 10 minutes to an hour.

Other Components

Finders are devices used to help aim the telescope at its target, similar to the sights on a rifle. Finders come in three basic types. Peep sights are notches or circles that allow alignment with the target. Reflex sights use a mirror box that shows the sky and illuminates the target with a red LED diode spot, similar to a laser sight on a gun. A telescope sight is a small, low magnification (5x to 10x) telescope mounted on the side with a cross hair reticule, like a telescopic sight on a rifle.

Filters are pieces of glass or plastic placed in the barrel of an eyepiece to restrict the wavelengths of light that come through in the image. Filters are used to enhance the viewing of faint sky objects in light-polluted skies, enhance the contrast of fine features and details on the moon and planets, and safely view the sun. The filter screws into the barrel of the eyepiece.

Another add-on component is a dew shield, which prevents moisture condensation. For taking photographs, conventional lens and film cameras or CCD devices/digital cameras are used. Some astronomers use telescopes to make scientific measurements with photometers (devices to measure the intensity of light) or spectroscopes (devices to measure the wavelengths and intensities of light from an object).

Visualizing the Universe: Eye Glasses

Contact Lenses

Contact lenses are thin transparent plastic discs that sit on the cornea. Just like eyeglasses, they correct refractive errors such as myopia (nearsightedness) and hyperopia (farsightedness). With these conditions, the eye doesn’t focus light directly on the retina as it should, leading to blurry vision. Contact lenses are shaped based on the vision problem to help the eye focus light directly on the retina. Some contact lenses are less about seeing than being seen.

Contact lenses for Fashion – Wait for Genetic Engineering!

Contact lenses are closer to natural sight than eyeglasses. They move with the eye, and unlike normal glasses they never get in the way of the line of sight. Some can be worn several days at a time.

Contact lenses stay in place by sticking to the layer of tear fluid that floats on the surface of the eye and by eyelid pressure. The eyes provide natural lubrication and help flush away any impurities that may become stuck to the lens.

Originally, all contact lenses were made of a hard plastic called polymethyl methacrylate (PMMA). This is the same plastic used to make Plexiglas. But hard lenses don’t absorb water, which is needed to help oxygen pass through the lens and into the cornea. Because the eye needs oxygen to stay healthy, hard lenses can cause irritation and discomfort. However, they are easy to clean.

Hydrophilic Lens

Soft contact lenses are more pliable and easier to wear because they’re made of a soft, gel-like plastic. Soft lenses are hydrophilic, or “water loving,” and absorb water. This allows oxygen to flow to the eye and makes the lens flexible and more comfortable. More oxygen to the eye means soft contact lenses can be worn for long periods with less irritation.

Daily-wear lenses are contacts that are removed every night before sleep. Extended-wear lenses are worn for several days without removal. Disposable lenses are just what the name implies: they are worn for a certain period of time and then thrown away. Cosmetic lenses change the color of a person’s eyes. Ultraviolet (UV) protection lenses act as sunglasses, protecting the eyes against harmful ultraviolet rays from the sun.

Corneal reshaping lenses are worn to reshape the cornea and correct vision. Rigid, gas-permeable lenses have both hard and soft contact lens features. They are more durable than soft lenses but still allow oxygen to pass through to the eye. They don’t contain water, so are less likely to develop bacteria and cause infection than soft lenses. They are also hard enough to provide clear vision.

Contact lenses are frequently customized for athletes, computer operators, and other specialized uses. Many contacts don’t just correct vision problems; they enhance normal vision.

Sunglasses

Sunglasses provide protection from harmful ultraviolet rays in sunlight. Some sunglasses filter out UV light completely. They also provide protection from intense light or glare, like the light reflected off snow or water on a bright day. Glare can be blinding, with distracting bright spots hiding otherwise visible objects. Good sunglasses can completely eliminate glare using polarization.

Polarized Lens

Sunglasses have become a cultural phenomenon. In the fashion world, designer sunglasses make people look “cool,” or mysterious. They can also be ominous, such as the mirrored sunglasses worn by roughneck bikers and burly state troopers.

Cheap sunglasses are risky: although they are tinted and block some visible light, they don’t necessarily block UV light, since they are typically made of ordinary plastic with only a thin tinted coating.

There are several types of lens material, such as CR-39, a plastic made from hard resin, or polycarbonate, a synthetic plastic that has great strength and is very lightweight. These kinds of lenses are usually lighter, more durable, and scratch-resistant. Optical-quality polycarbonate and glass lenses are generally free from distortions, such as blemishes or waves, and their color is evenly distributed. Some sunglasses are very dark and can block up to 97 percent of light.

More expensive sunglasses use special technologies to achieve increased clarity, better protection, and higher contrast or to block certain types of light. Normal frames similar to prescription eyeglasses filter light but sometimes offer little protection from ambient light, direct light and glare. Wrap-around frames, larger lenses and special attachments can compensate for these weaknesses. Most cheap sunglasses use simple plastic or wire frames, while more expensive brands use high-strength, light-weight composite or metal frames.

The brightness or intensity of light is measured in lumens. Indoors, most artificial light is around 400 to 600 lumens. Outside on a sunny day, the brightness ranges from about 1,000 lumens in the shade to more than 6,000 lumens from bright light reflected off of hard surfaces, like concrete or highways.

The eye is comfortable up to around 3,500 lumens; brightness above this level produces glare, and squinting is the natural way of filtering such light. Prolonged exposure to light in the 10,000-lumen range can cause temporary or even permanent blindness. A large snowfield, for instance, can produce more than 12,000 lumens, resulting in what is commonly called snow blindness.
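
As a rough illustration of these brightness bands, a short Python sketch classifies a light level using the cutoffs quoted above (the function and its labels are illustrative, not part of the source):

def classify_brightness(lumens: float) -> str:
    # Bands follow the figures quoted above; the exact cutoffs are illustrative only.
    if lumens <= 600:
        return "typical indoor lighting"
    if lumens <= 3500:
        return "comfortable outdoor brightness"
    if lumens < 10000:
        return "glare - squinting or sunglasses needed"
    return "hazardous - risk of temporary or permanent damage"

print(classify_brightness(6000))   # bright reflection off concrete -> glare
print(classify_brightness(12000))  # large snowfield -> hazardous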

Three kinds of light are associated with sunglasses: direct, reflected, and ambient. Direct light is light that goes straight from the light source (like the sun) to the eyes. Too much direct light can wash out details and even cause pain. Reflected light (glare) is light that has bounced off a reflective surface into the eyes. Strong reflected light can be just as damaging as direct light; common sources include snow, water, glass, white sand, and metal.

Ambient light is light that has bounced and scattered in so many directions that it does not seem to have a specific source, such as the glow in the sky around a major city. Good sunglasses can compensate for all three forms of light.

Sunglasses use a variety of technologies to eliminate problems with light: tinting, polarization, photochromic lenses, mirroring, scratch-resistant coating, anti-reflective coating, and UV coating.

The color of the tint determines the parts of the light spectrum that are absorbed by the lenses. Gray tints are great all-purpose tints that reduce the overall amount of brightness with the least amount of color distortion. Gray lenses offer good protection against glare. Yellow or gold tints reduce the amount of blue light while allowing a larger percentage of other frequencies through.

Blue light tends to bounce and scatter off a lot of things, creating a kind of glare known as blue haze. A yellow tint eliminates the blue part of the spectrum and has the effect of making everything look bright and sharp, which is why snow glasses are usually yellow. Tinting distorts color perception, however, so tinted glasses are not very useful when colors need to be seen accurately. Other colors include amber, green, purple, and rose, all of which filter out certain colors of the light spectrum.

Light waves from the sun or even from an artificial light source such as a lightbulb, vibrate and radiate outward in all directions. Whether the light is transmitted, reflected, scattered or refracted, when its vibrations are aligned into one or more planes of direction, the light is said to be polarized.

Polarization can occur naturally or artificially. On a lake, for instance, the glare reflected off the surface is naturally polarized; it is the light that did not make it through the “filter” of the water’s surface. This is why part of a lake looks shiny while another part looks rough (like waves), and why nothing can be seen below the surface even when the water is very clear.

Polarized filters are most commonly made of a chemical film applied to a transparent plastic or glass surface. The chemical compound used will typically be composed of molecules that naturally align in parallel relation to one another. When applied uniformly to the lens, the molecules create a microscopic filter that absorbs any light matching their alignment. When light strikes a surface, the reflected waves are polarized to match the angle of that surface. So, a highly reflective horizontal surface, such as a lake, will produce a lot of horizontally polarized light. Polarized lenses in sunglasses are fixed at an angle that only allows vertically polarized light to enter.
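
The blocking of horizontally polarized glare by a vertically oriented filter follows the standard polarizer relationship known as Malus's law (transmitted intensity = I0 · cos²θ), which the text does not name explicitly. A minimal Python sketch shows the effect:

import math

def transmitted_intensity(incoming: float, angle_deg: float) -> float:
    # Malus's law: a polarizer passes I0 * cos^2(theta), where theta is the angle
    # between the light's polarization and the filter's transmission axis.
    return incoming * math.cos(math.radians(angle_deg)) ** 2

# Vertically polarized light through a vertically oriented lens: nearly all passes.
print(transmitted_intensity(100.0, 0.0))   # 100.0
# Horizontal glare off a lake meets the vertical axis at 90 degrees: blocked.
print(transmitted_intensity(100.0, 90.0))  # ~0.0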

Sunglasses or prescription eyeglasses that darken when exposed to the sun are called photochromic, or sometimes photochromatic. Because photochromic lenses react to UV light and not to visible light, there are circumstances under which the darkening will not occur.

A good example is in the car. Because the windshield blocks out most UV light, photochromic lenses will not darken inside the car; consequently, many photochromic sunglasses also carry a tint. Photochromic lenses have millions of molecules of substances such as silver halides (for example, silver chloride) embedded in them. The molecules are transparent to visible light in the absence of UV light, as is the case with most artificial lighting. But when exposed to the UV rays in sunlight, the molecules undergo a chemical process that causes them to change shape.

The new molecular structure absorbs portions of the visible light, causing the lenses to darken. Indoors, out of the UV light, a reverse chemical reaction takes place. The sudden absence of UV radiation causes the molecules to “snap back” to their original shape, resulting in the loss of their light absorbing properties.

With some prescription glasses, different parts of the lens can vary in thickness. The thicker parts can appear darker than the thinner areas. By immersing plastic lenses in a chemical bath, the photochromic molecules are actually absorbed to a depth of about 150 microns into the plastic. This depth of absorption is much better than a simple coating, which is only about 5 microns thick and not enough to make glass lenses sufficiently dark.