Visualizing the Universe: Virtual Reality

Virtual reality holds tremendous promise for the future, and it is progressing on numerous fronts. Movies like the remake of TRON, the Matrix series, Minority Report, and Avatar have given us a glimpse of what could be, and scientists and technicians in both the public and private sectors are racing to make the virtual indistinguishable from reality to all five senses.

The Electronic Visualization Laboratory at the University of Illinois at Chicago is one of the best-known virtual reality development institutes in the world. It invented the CAVE (CAVE Automatic Virtual Environment) and its second iteration, CAVE2.

Interesting intro to EVL@UIC

Virtual reality, simply put, is a three-dimensional, computer-generated simulation in which one can navigate, interact, and so become immersed in another environment.

Douglas Engelbart, an electrical engineer and former naval radar technician, is credited with the first exploration into virtual reality. He viewed computers as more than glorified adding machines. It was the 1950s, and TVs had barely turned color. His goal was to connect the computer to a screen.

By the early 1960s, communications technology intersecting with computing and graphics was well underway. Vacuum tubes turned into transistors. Pinball machines were being replaced by video games.

Scientific visualization moved from bar charts, mathematical diagrams and line drawings to dynamic images, using computer graphics. Computerized scientific visualization enabled scientists to assimilate huge amounts of data and increase understanding of complex processes like DNA sequences, molecular models, brain maps, fluid flows, and celestial events. A goal of scientific visualization is to capture the dynamic qualities of a wide range of systems and processes in images, but early computer graphics and animation were not interactive. Animation, despite its moving pictures, was static: once created, it couldn’t be altered. Interactivity became the primary driver in the development of virtual reality.

By the end of the 1980s, supercomputers and high-resolution graphics workstations were paving the way toward more interactive means of visualization. As computer technology developed, MIT and other high-tech research centers began exploring Human Computer Interaction (HCI), which is still a major area of research, now combined with artificial intelligence.

The mouse seemed clumsy, and devices such as light pens and touch screens were explored as alternatives. Eventually CAD (computer-aided design) programs emerged, enabling designers to model and simulate the inner workings of vehicles, create blueprints for city development, and experiment with computerized blueprints for a wide range of industrial products.

Flight simulators were the predecessors of computerized visual simulation and might be considered the first virtual reality-like environments. The early flight simulators consisted of mock cockpits built on motion platforms that pitched and rolled. Their limitation was that they lacked visual feedback. This changed when video displays were coupled with model cockpits.

In 1979, the military began experimenting with head-mounted displays. By the early 1980s, better software, hardware, and motion-control platforms enabled pilots to navigate through highly detailed virtual worlds.

US Army’s New Virtual Simulation Training System

A natural consumer of computer graphics was the entertainment industry, which, like the military and industry, was the source of many valuable spin-offs in virtual reality. By the 1970s, some of Hollywood’s most dazzling special effects were computer-generated. Plus, the video game business boomed.

One direct spin-off of entertainment’s venture into computer graphics was the dataglove, a computer interface device that detects hand movements. It was invented to produce music by linking hand gestures to a music synthesizer. NASA was one of the first customers for the new device. The biggest consumer of the dataglove was the Mattel company, which adapted it into the PowerGlove, and used it in video games for kids. The glove is no longer sold.

Helmet-mounted displays and power gloves combined with 3D graphics and sounds hinted at the potential for experiencing totally immersive environments. There were practical applications as well. Astronauts, wearing goggles and gloves, could manipulate robotic rovers on the surface of Mars. Of course, some people might not consider a person on Mars as a practical endeavor. But at least the astronaut could explore dangerous terrain without risk of getting hurt.

NASA VIDEO

The virtual reality laboratory at the Johnson Space Center helped astronauts Steve Bowen and Al Drew train for the two spacewalks they conducted in 2011. Watch the first 2 minutes to get the gist.

Virtual reality is not a technological marvel engaged as easily as sitting in a movie theater or in front of a TV. Human factors are crucial to VR: age, gender, health and fitness, peripheral vision, and posture all come into play. Everyone perceives reality differently, and the same is true for VR, which is why Human Computer Interaction (HCI) remains a major area of research.

The concept of a room with graphics projected from behind the walls was invented at the Electronic Visualization Laboratory at the University of Illinois at Chicago in 1992. The images on the walls were in stereo to give a depth cue. The main advantage over ordinary graphics systems is that the users are surrounded by the projected images, which means the images are always in the users’ main field of vision. The environment was dubbed the CAVE (CAVE Automatic Virtual Environment). In 2012 a dramatically improved environment, CAVE2, was launched.

CAVE2 is the world’s first near-seamless flat-panel-based, surround-screen immersive system. Unique to CAVE2 is that it enables users to simultaneously view both 2D and 3D information, providing more flexibility for mixed-media applications. CAVE2 is a cylindrical system 24 feet in diameter and 8 feet tall, consisting of 72 near-seamless, off-axis-optimized passive stereo LCD panels that create an approximately 320-degree panoramic environment for displaying information at 37 megapixels in stereoscopic 3D.

The original CAVE was designed from the beginning to be a useful tool for scientific visualization. CAVE2 can be coupled to remote data sources, supercomputers and scientific instruments via dedicated, super-high-speed fiber networks. Various CAVE-like environments exist all over the world today. Projection across all 72 panels allows users to turn around and look in nearly every direction, so their perception and experience are rarely interrupted, which is necessary for full immersion.

Any quick review of the history of optics, photography, computer graphics, media, broadcasting and even sci-fi suggests that virtual reality will become as commonplace as TV and movies. There are far too many practical applications, such as surgery, flight simulation, space exploration, chemical engineering and underwater exploration.

And these are just the immediate applications, assuming we only receive virtual reality data via our external senses. Once the information is transmitted via electrical impulses directly to the brain, not only will the virtual be indistinguishable from reality; stopping there will be merely an option. We’ll be able to stimulate the brain in ways nature never could, producing never-before-experienced sensations or even entirely new senses: new emotions, ranges, and combinations of the visceral and ethereal.

Maybe it will be Hollywood that stops speculating and starts experimenting. The thought of being chased by Freddy Krueger is one thing, but to actually be chased by Freddy Krueger is utterly terrifying. No more jumping out of our seats when the face of a giant shark snaps its teeth at us. Now we can really know what it’s like to be chased by cops speeding down a thruway at 100 mph. We can feel and smell pineapples on a tropical beach. We can catch bad guys, defeat aliens in a starship battle, and have conversations with Presidents in our bare feet. We can simulate what it would feel like to soar across the galaxy as a stream of subatomic particles.

With virtual reality, the only limit is the imagination.

Visualizing the Universe: Scientific Visualization

Visualization in its broadest terms represents any technique for creating images to represent abstract data. Scientific visualization has grown to encompass many other areas like business (information visualization), computing (process visualization), medicine, chemical engineering, flight simulation, and architecture. Indeed, there is hardly an area of human endeavor that does not draw on scientific visualization in one form or another.

From a crude perspective, scientific visualization was born out of the conversion of text into graphics; consider, for instance, describing an apple with words. Bar graphs, charts and diagrams were two-dimensional forerunners in converting data into a visual representation. Obviously words and two-dimensional representations can only go so far, and more mathematically accurate datasets were needed to describe an object’s exterior, interior, and functioning processes.


Early Scientific Visualization: Charles Minard’s 1869 chart showing the number of men in Napoleon’s 1812 Russian campaign army, their movements, as well as the temperature they encountered on the return path.

Such datasets were huge, and it wasn’t until the development of supercomputers with immense processing power combined with sophisticated digital graphics workstations that conversion from data into a more dynamic, 3-D graphical representation was possible. From the early days of computer graphics, users saw the potential of computer visualization to investigate and explain physical phenomena and processes, from repairing space vehicles to chaining molecules together.

In general the term “scientific visualization” is used to refer to any technique involving the transformation of data into visual information. It characterizes the technology of using computer graphics techniques to explore results from numerical analysis and extract meaning from complex, mostly multi-dimensional data sets.

Traditionally, the visualization process consists of filtering raw data to select a desired resolution and region of interest, mapping that result into a graphical form, and producing an image, animation, or other visual product. The result is evaluated, the visualization parameters modified, and the process run again.

Three-dimensional imaging of medical datasets was introduced after clinical CT (computed axial tomography) scanning became a reality in the 1970s. The CT scan produces images of the internals of an object by obtaining a series of two-dimensional x-ray axial images.

The individual x-ray axial slice images are taken using an x-ray tube that rotates around the object, taking many scans as the object is gradually passed through the scanner. The multiple scans from each 360-degree sweep are then processed to produce a single cross-section. See MRI and CAT scanning in the Optics section.

The goal of the visualization process is to generate visually understandable images from abstract data. Several steps must be performed during the generation process; these steps are arranged in the so-called visualization pipeline.

Visualization Methods

Data is obtained either by sampling or measuring, or by executing a computational model. Filtering is a step which pre-processes the raw data and extracts information which is to be used in the mapping step. Filtering includes operations like interpolating missing data, or reducing the amount of data. It can also involve smoothing the data and removing errors from the data set.

Mapping is the core of the visualization process. It transforms the pre-processed, filtered data into 2D or 3D geometric primitives with appropriate attributes like color or opacity. The mapping process is very important for the later visual representation of the data. Rendering generates the output image from the geometric primitives produced by the mapping process. There are a number of different filtering, mapping and rendering methods used in the visualization process.
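The filter, map and render steps described above can be sketched in a few lines of code. This is a minimal illustrative pipeline, not a real visualization system: the sample data, the interpolation rule, and the ASCII "renderer" are all invented for the example.

```python
# A toy visualization pipeline: filter -> map -> render.
# RAW_DATA, the color mapping, and the ASCII output are illustrative only.

RAW_DATA = [2.0, None, 3.5, 8.1, None, 9.9, 4.2]  # raw samples with gaps

def filter_step(samples):
    """Pre-process: interpolate interior missing values from neighbours."""
    filled = list(samples)
    for i, v in enumerate(filled):
        if v is None:
            left = next(x for x in reversed(filled[:i]) if x is not None)
            right = next(x for x in filled[i + 1:] if x is not None)
            filled[i] = (left + right) / 2.0
    return filled

def map_step(values):
    """Map each value onto a geometric primitive with a colour attribute."""
    vmin, vmax = min(values), max(values)
    return [{"height": v, "color": (v - vmin) / (vmax - vmin)} for v in values]

def render_step(primitives):
    """Render the primitives as a crude ASCII bar chart."""
    return [">" * int(round(p["height"])) for p in primitives]

# Run the pipeline; in a real system the result would be evaluated,
# parameters adjusted, and the loop run again.
image = render_step(map_step(filter_step(RAW_DATA)))
for row in image:
    print(row)
```

Each stage consumes the previous stage's output and nothing else, which is exactly what makes the pipeline easy to re-run with modified parameters.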

Some of the earliest medical visualizations created 3D representations from CT scans, with help from electron microscopy. Images were built from geometric shapes like polygons and lines, forming a wireframe representation of three-dimensional volumetric objects. Similar techniques are used in creating animation for Hollywood films. With sophisticated rendering capability, motion could be added to the wireframe model, illustrating processes such as blood flow, or fluid dynamics in chemical and physical engineering.

The development of integrated software environments took visualization to new levels. Some of the systems developed during the 80s include IBM’s Data Explorer, Ohio State University’s apE, Wavefront’s Advanced Visualizer, SGI’s IRIS Explorer, Stardent’s AVS and Wavefront’s Data Visualizer, Khoros (University of New Mexico), and PV-WAVE (Precision Visuals’ Workstation Analysis and Visualization Environment).

Crude 2005 Simulation – Typhoon Mawar

These visualization systems were designed to help scientists, who often knew little about how graphics are generated. The most usable systems provided a visual interface: software modules were developed independently, with standardized inputs and outputs, and were visually linked together in a pipeline. These interface systems are sometimes called modular visualization environments (MVEs).

MVEs allowed the user to create visualizations by selecting program modules from a library and specifying the flow of data between modules using an interactive graphical networking or mapping environment. Maps or networks could be saved for later recall.

General classes of modules included:

•  data readers – input the data from the data source
•  data filters – convert the data from a simulation or other source into another form that is more informative or less voluminous
•  data mappers – convert information into another domain, such as 2D or 3D geometry or sound
•  viewers or renderers – render the 2D and 3D data as images
•  control structures – manage display devices, recording devices, and open graphics windows
•  data writers – output the original or filtered data

MVEs required no graphics expertise, allowed for rapid prototyping and interactive modifications, promoted code reuse, allowed new modules to be created and allowed computations to be distributed across machines, networks and platforms.
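The MVE idea above, independent modules with standardized inputs and outputs linked into a saved "map", can be sketched as plain functions chained in order. The module bodies and sample data are invented for illustration; a real MVE would wire the modules graphically.

```python
# A toy modular visualization environment (MVE): independent modules with
# a uniform interface (take data, return data), linked in a pipeline.

def reader(_):            # data reader: input from the data source
    return [4, 1, 3, 2]

def data_filter(data):    # data filter: transform/reduce the data
    return sorted(data)

def mapper(data):         # data mapper: convert values into 2D geometry
    return [(x, y) for x, y in enumerate(data)]

def viewer(geometry):     # viewer/renderer: turn geometry into an "image"
    return "\n".join("*" * y for _, y in geometry)

# A saved "map" or network is just the ordered list of modules.
pipeline = [reader, data_filter, mapper, viewer]

result = None
for module in pipeline:
    result = module(result)
print(result)
```

Because every module has the same shape, modules can be swapped, reused, or distributed across machines without any graphics expertise, which is the point the paragraph above makes.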

Earlier systems were not always good performers, especially on larger datasets, and image quality was often poor.

Newer visualization systems came out of the commercial animation software industry. The Wavefront Advanced Visualizer was a modeling, animation and rendering package which provided an environment for interactive construction of models, camera motion, rendering and animation without any programming. The user could use many supplied modeling primitives and model deformations, create surface properties, adjust lighting, create and preview model and camera motions, do high quality rendering, and save images to video tape.

Acquiring data is accomplished in a variety of ways: CT scans, MRI scans, ultrasound, confocal microscopy, computational fluid dynamics, and remote sensing. Remote sensing involves gathering data and information about the physical world by detecting and measuring phenomena such as radiation, particles, and fields associated with objects located beyond the immediate vicinity of a sensing device. It is most often used to acquire and interpret geospatial data for features, objects, and classes on the Earth’s land surface, oceans, and atmosphere, and in outer space for mapping the exteriors of planets, stars and galaxies. Data is also obtained via aerial photography, spectroscopy, radar, radiometry and other sensor technologies.

Another major approach to 3D visualization is volume rendering. Volume rendering allows the display of information throughout a 3D data set, not just on the surface. Pixar Animation, a spin-off from George Lucas’s Industrial Light & Magic (ILM), created a volume rendering method, or algorithm, that used independent 3D cells within the volume, called “voxels”.

The volume was composed of voxels, each holding a value for a property such as density; a surface occurs between groups of voxels with two different values. The algorithm used color and intensity values from the original scans, together with gradients obtained from the density values, to compute the 3D solid. Other approaches include ray-tracing and splatting.
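The voxel idea can be made concrete with a tiny sketch: find where a surface lies between voxels whose density values straddle a threshold, and estimate a gradient there for shading. The one-dimensional strip of densities and the threshold are made-up numbers, and a real volume renderer works on a full 3D grid.

```python
# Surface detection in a toy "volume": a 1D strip of voxel densities.
# A surface lies between neighbouring voxels on opposite sides of the
# threshold; the density gradient would be used to shade that surface.

volume = [0.0, 0.1, 0.2, 0.9, 1.0, 1.0]  # illustrative voxel densities
THRESHOLD = 0.5

def find_surfaces(voxels, threshold):
    """Return indices i where a surface lies between voxel i and i+1."""
    return [i for i in range(len(voxels) - 1)
            if (voxels[i] < threshold) != (voxels[i + 1] < threshold)]

def gradient(voxels, i):
    """Central-difference density gradient at voxel i."""
    return (voxels[min(i + 1, len(voxels) - 1)] -
            voxels[max(i - 1, 0)]) / 2.0

for i in find_surfaces(volume, THRESHOLD):
    print(f"surface between voxel {i} and {i + 1}, "
          f"gradient {gradient(volume, i):.2f}")
```

The same straddle-the-threshold test, extended to 3D cells, is the heart of classic isosurface extraction from voxel data.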

Scientific visualization draws from many disciplines such as computer graphics, image processing, art, graphic design, human-computer interaction (HCI), cognition, and perception. The fine arts are extremely useful to scientific visualization: art history can offer insights into visual form, as well as help in imagining scenarios that have little or no data behind them. Another important part of the computer’s evolution was the invention of the LCD screen, which helped tie it all together, bringing visual graphics to life with better resolution, lighter weight and faster display of data than the computer monitors of the past.

Computer simulations have become a useful part of modeling natural systems in physics, chemistry and biology, human systems in economics and social science, and the engineering of new technology. Simulations render mathematical models into visual representations that are easier to understand. Computer models can be classified as stochastic or deterministic.

Stochastic models use random number generators to model chance or random events, such as genetic drift. A discrete event simulation (DE) manages events in time; most simulations are of this type. A continuous simulation uses differential equations (either partial or ordinary), implemented numerically. The simulation program solves all the equations periodically and uses the numbers to change the state and output of the simulation. Most flight and racing-car simulations are of this type, as are simulated electrical circuits.
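A continuous simulation of the kind just described can be sketched in a few lines: pick a differential equation, step it forward in time, and compare with the known answer. Here the equation is the discharge of an RC circuit, dV/dt = -V/RC, solved with a simple Euler step; the component values and time step are arbitrary illustrative numbers.

```python
# A minimal continuous simulation: numerically integrating the ODE
# dV/dt = -V/(R*C) for a discharging RC circuit using Euler steps.

import math

R, C = 1000.0, 1e-3      # resistance (ohms) and capacitance (farads), so RC = 1 s
dt = 1e-4                # time step (seconds)

def simulate(v0, t_end):
    """Step the capacitor voltage forward from v0 until time t_end."""
    v, t = v0, 0.0
    while t < t_end:
        v += dt * (-v / (R * C))   # Euler update of the state
        t += dt
    return v

v_numeric = simulate(5.0, 1.0)          # one time constant later
v_exact = 5.0 * math.exp(-1.0)          # analytic solution for comparison
print(v_numeric, v_exact)
```

Shrinking `dt` drives the numeric answer toward the analytic one, which is the trade-off every continuous simulator makes between accuracy and run time.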

Other methods include agent-based simulation. In agent-based simulation, the individual entities (such as molecules, cells, trees or consumers) in the model are represented directly (rather than by their density or concentration) and possess an internal state and set of behaviors or rules which determine how the agent’s state is updated from one time-step to the next.
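The agent-based style described above can be sketched with agents that each carry an internal state and a local update rule. The scenario (a rumour spreading to immediate neighbours along a line) and all its parameters are invented for illustration; real agent-based models usually add randomness and richer behaviors.

```python
# A toy agent-based simulation: each agent is represented individually,
# holds an internal state, and updates it by a local rule each time-step.

class Agent:
    def __init__(self, informed=False):
        self.informed = informed

def step(agents):
    """Synchronous update: an agent learns the rumour from a neighbour."""
    current = [a.informed for a in agents]
    for i, agent in enumerate(agents):
        neighbours = current[max(i - 1, 0):i] + current[i + 1:i + 2]
        if any(neighbours):
            agent.informed = True

agents = [Agent(i == 0) for i in range(5)]   # only agent 0 starts informed
steps = 0
while not all(a.informed for a in agents):
    step(agents)
    steps += 1
print(f"rumour reached all {len(agents)} agents after {steps} steps")
```

Note that the agents are represented directly rather than by a density or concentration, which is exactly what distinguishes this style from the equation-based simulations above.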

Winter Simulation Conference

The Winter Simulation Conference is an important annual event covering leading-edge developments in simulation analysis and modeling methodology. Areas covered include agent-based modeling, business process reengineering, computer and communication systems, construction engineering and project management, education, healthcare, homeland security, logistics, transportation, distribution, manufacturing, military operations, risk analysis, virtual reality, web-enabled simulation, and the future of simulation. The WSC provides educational opportunity for both novices and experts.

References

Ohio State University, Department of Design http://design.osu.edu/carlson/history/lesson18.html

Visualizing the Universe: Electron Microscopes

Under an electron microscope, the infinitesimal begins to look like sweeping geographical landscapes. Blood clots look like UFOs caught in an extraterrestrial traffic jam. Micro-minerals give the appearance of vast landscapes dotted with buttes and canyons. Synthetic kidney stone crystals look like falling snowflakes. The shells of microscopic plants stand out like Christmas tree ornaments. Nylon looks like a plate of spaghetti. Bugs look like monsters.


Electron Microscope

The world of a grain of sand was once as far as the human eye could go. Now, using electron microscopes, a grain of sand is like the universe, filled with untold galaxies, planetary systems and maybe even a few black holes. At the organic level, humans are learning how Mother Nature builds life, one atom at a time.

Conventional microscopes use particles of light, or photons, to look directly at small objects, employing glass lenses to magnify things several thousand times. The electron microscope opens the door to an even tinier level by using electrons, whose wavelengths are much shorter than those of visible light.

The process is the same for all electron microscopes, where a stream of electrons is formed (by the Electron Source) and accelerated toward the specimen using a positive electrical potential. This stream is confined and focused using metal apertures and magnetic lenses into a thin, focused, monochromatic beam. This beam is focused onto the sample using a magnetic lens. Interactions occur inside the irradiated sample, affecting the electron beam. These interactions and effects are detected and transformed into an image.

Electron microscopes provide morphological, compositional and crystallographic information at the atomic level (nanometers). Topography describes the surface features of an object, or “how it looks”; its texture bears a direct relation to material properties such as hardness and reflectivity. Morphology is the shape, size and relationship of the particles making up the object, which relate to properties such as ductility, strength and reactivity.

Composition describes the relative amounts of the elements and compounds the object is made of, which relate to properties such as melting point, reactivity and hardness. Crystallographic information describes how the atoms are arranged in the object and how that arrangement relates to other properties such as conductivity, electrical behavior and strength.

To create the images, a filament inside an electron “gun” shoots a stream of electrons through a stack of electromagnetic lenses, which focus the electrons into a beam. The beam is directed to a fine point on the specimen, and scans across it rapidly. The sample responds by emitting electrons that are picked up by a detector inside the sample chamber, beginning an electronic process that results in an image that can be displayed on a TV screen.

The Transmission Electron Microscope (TEM), developed by Max Knoll and Ernst Ruska in Germany in 1931, was the first type of electron microscope and is patterned exactly on the light transmission microscope, except that a focused beam of electrons is used instead of light to “see through” the specimen.

A TEM works much like a slide projector. A projector shines a beam of light through (transmits) the slide; as the light passes through, it is affected by the structures and objects on the slide, so only certain parts of the beam are transmitted through certain parts of the slide. This transmitted beam is then projected onto the viewing screen, forming an enlarged image of the slide.

The first Scanning Electron Microscope (SEM) appeared in 1942, with the first commercial instruments arriving around 1965.


Nanostructure via a Scanning Microscope

Scanning Electron Microscopes (SEM) are patterned after Reflecting Light Microscopes and yield similar information as TEMs. Unlike the TEM, where electrons are detected by beam transmission, the SEM produces images by detecting secondary electrons which are emitted from the surface due to excitation by the primary electron beam. In the SEM, the electron beam is rastered across the sample, with detectors building up an image by mapping the detected signals with beam position.

Scientists have used the SEM to identify micro-plankton in ocean sediments, fossilized remains found in underwater canyons, the structure of earthquake-induced micro-fractures in rocks and micro-minerals, the microstructure of wires, dental implants, cells damaged from infectious diseases, and even the teeth of microscopic prehistoric creatures.

There are other types of electron microscopes. A Scanning Transmission Electron Microscope (STEM) is a specific sort of TEM, where the electrons still pass through the specimen, but, as in SEM, the sample is scanned in a raster fashion. A Reflection Electron Microscope (REM), like the TEM, uses a technique involving electron beams incident on a surface, but instead of using the transmission (TEM) or secondary electrons (SEM), the reflected beam is detected.

Near-field scanning optical microscopy (NSOM) is a type of microscopy where a sub-wavelength light source is used as a scanning probe. The probe is scanned over a surface at a height above the surface of a few nanometers.

A Scanning Tunneling Microscope (STM) can be considered a type of electron microscope, but it is a form of scanning probe microscopy and is non-optical. The STM employs principles of quantum mechanics to determine the height of a surface. An atomically sharp probe (the tip) is moved over the surface of the material under study, and a voltage is applied between the probe and the surface.

Depending on the voltage, electrons will tunnel or jump from the tip to the surface (or vice versa, depending on the polarity), resulting in a weak electric current. The size of this current is exponentially dependent on the distance between the probe and the surface. The STM was invented by Gerd Binnig and Heinrich Rohrer at IBM’s Zurich Research Laboratory. The STM could image some types of individual atoms on electrically conducting surfaces, and for this the inventors shared the 1986 Nobel Prize in Physics.
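The exponential distance dependence just mentioned is what gives the STM its extraordinary height sensitivity, and it is easy to sketch numerically. The tunnelling current falls off roughly as I = I0 * exp(-2*kappa*d); the constants below are typical order-of-magnitude values chosen purely for illustration, not measured instrument parameters.

```python
# Sketch of the STM's exponential sensitivity: I = I0 * exp(-2*kappa*d).
# I0 and KAPPA are assumed, order-of-magnitude illustrative constants.

import math

I0 = 1.0e-9        # reference current at zero gap (amperes, assumed)
KAPPA = 1.0e10     # inverse decay length, roughly 1 per angstrom (assumed)

def tunnel_current(d):
    """Tunnelling current at tip-surface distance d (metres)."""
    return I0 * math.exp(-2.0 * KAPPA * d)

# Pulling the tip back by a single angstrom (1e-10 m) cuts the current
# by a factor of about e^2, i.e. roughly 7.4x:
ratio = tunnel_current(2e-10) / tunnel_current(3e-10)
print(ratio)
```

That order-of-magnitude change in current per angstrom of gap is why the STM can resolve individual atoms from the current signal alone.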

Visualizing the Universe: Everyday Optics


Lipstick Camera

An FBI surveillance agent plants a lipstick camera in an overhead ceiling light in the hotel room of a suspected terrorist.

Meanwhile, an airport security adviser doesn’t think traditional x-ray scanners are sufficient anymore, and decides to install a portable detection device that employs resonance-enhanced multiphoton ionization (REMPI) to ionize specific “target” molecules given off by explosives and drugs. The detection method uses a laser beam to ionize the vapor from the explosive.

Someone is always losing their glasses, and nothing could be worse than a contact lens falling into a field of grass or a mud puddle. In the movie Revenge of the Nerds, those who are stereotyped as nerds almost always wear glasses, oversized ones at that. And they are usually scientists or into science, always peering through microscopes and telescopes.

The eyes get blurry when a dust particle invades them. We can’t see at night without night vision goggles. And nothing will make a person go blind quicker than staring at a computer screen all day.

A forest ranger scans miles of forest with binoculars, looking for the slightest hint of smoke. A 12-year-old girl doesn’t much appreciate the boy sitting behind her in class, trying to look at her hair through a magnifying glass.

Tourists buy millions of instamatic cameras with one-hour photo services available on every corner. Why anyone needs their photos that fast can only be explained by the need for instant gratification. Journalists might need photos ASAP when covering a breaking news story. But journalists don’t use instamatics. With digital photography, photos are available instantly. However, it still takes time to print them so we’re back to the waiting game…unless the photos are uploaded to the Internet.

An amateur astronomer discovers another asteroid, like so many amateurs have done before, and names it after his wife, Gertrude.

In many households, TVs are on 24 hours a day, whether someone is watching it or not. Some say TVs rot the brain. A large screen TV rests in the living room, with smaller ones positioned in the kitchen, all the bedrooms, the bathroom and in some cases, one in the garage. Now, TV junkies can watch favorite programs on their iPods or in their car.

During the Gulf War, infrared photos of a variety of Iraqi targets (bunkers, buildings, training camps, etc.) are broadcast back to the States. TV viewers watch idly when there’s nothing else on the 200-plus other channels. The targets are destroyed by precision-guided bombs, with targets pinpointed through the crosshairs of an aircraft’s high-tech laser targeting system. Next, U.S. General H. Norman Schwarzkopf describes the purpose of the attack, followed by a quick blurb featuring a U.S. Marine sitting on top of a tank, wishing he could be home.

From there, Hollywood picks up on the news story and a new movie is released. The techno-thriller i, Fighter Jet (Jamie Foxx) is a story seemingly ripped right from the headlines. In the story, an elite trio of U.S. Navy pilots is picked to fly highly classified stealth fighter jets, called Talons. A fourth, virtual wingman, an artificially intelligent Unmanned Combat Aerial Vehicle, or UCAV, is added to one of their flight missions. The pilots face being replaced and envision a new world where war is fought by androids. But then, ever since Star Wars, robot wars have become a staple of sci-fi movies.

Following the movie, a slew of high resolution video games hit the streets, featuring laser-shooting super jets fighting a host of enemies, from aliens to artificially intelligent super soldiers. Video games also rot the brain, so they say. But video game technology is largely responsible for the high end graphics cards now used in most computers.

Speaking of aliens, the Hubble Space Telescope is really a glorified instamatic camera…sort’a. It takes pictures of things we can’t see, like black holes. But one can’t help wonder how a telescope can see a black hole if it’s black. Science does have a sense of humor.

A major selling point of smartphones and iPods is the ability to download and upload photos from the Internet and view them on the go. Portable media devices, ranging from laptops to Microsoft’s Media Center to the iPhone, can store thousands of photos. The family photo album goes digital. Digital cameras eliminate the need for film and can plug directly into a computer via a USB port. Instagram.

A brutal beating during a riot is captured on a digital camcorder and uploaded to the Internet. Small digital cameras capture speeders and red-light runners on unsuspecting street corners. A scientist explores nanotubes using a scanning tunneling microscope. Nanorobots are injected into the human bloodstream and flash back photos of bad cancer cells to a monitor viewed by a doctor hundreds of miles away. Well, not yet, anyway.

All of the above vignettes illustrate the wide range of areas and applications influenced by the science of optics.

From the study of electromagnetic radiation to distant galaxies, optics has given humans the ability to see far beyond normal vision. And, it can all be captured on film.  Film? What’s that? Digitally.

But it’s not just a lot of fun gadgets. Electron microscopes are the key to understanding disease. Surveillance cameras raise the issue of privacy. Media, meaning movies, TV, print and the Internet, bombard us with a tremendous array of images that can deeply affect our daily lives. Understanding how light works gave us the lightbulb, perhaps the single most important device in the history of modernization. Some might argue for the car or the telephone. But even cars and telephones are optically influenced, whether it’s headlights and glare-proof windows or millions of telephone messages sent across fiber optic cable.

Eyeglasses, contact lenses and laser surgery gave a whole new slant to the meaning of natural selection. Those who would’ve gone blind can now see far into the future.

But, seeing into the future takes more than glasses. It takes imagination. It takes vision of another kind. Then again, with telescopes mounted on space probes capturing images of what might be the big bang, who knows what this will tell us about the history of the universe…and its future. We may yet design artificial eyeballs. Someone just might figure out a way to project our dreams onto a screen.

Visualizing the Universe: Lasers and Holography

Dr. Charles H. Townes (PhD in Physics, California Institute of Technology) began his career at Bell Telephone Labs, designing radar bombing systems during WWII. He then turned his attention to applying the microwave techniques of wartime radar research to spectroscopy, a powerful new tool for studying the structure of atoms and molecules and a potential new basis for controlling electromagnetic waves.

More research followed in microwave physics, particularly the interactions between microwaves, molecules, and atoms. In the early 1950s he invented the “maser,” a device whose name is an acronym for “microwave amplification by stimulated emission of radiation.” A few years later, with his brother-in-law Dr. A.L. Schawlow (Stanford), he showed theoretically that masers could operate in the optical and infrared regions. The laser was born. Laser stands for “light amplification by stimulated emission of radiation.”

Ordinary natural and artificial light is released by energy changes on the atomic and molecular level that occur without any outside intervention. A second type of light exists, however, and occurs when an atom or molecule retains its excess energy until stimulated to emit the energy in the form of light.

Lasers are designed to produce and amplify this stimulated form of light into intense and focused beams. The special nature of laser light has made laser technology a vital tool in nearly every aspect of everyday life including communications, entertainment, manufacturing, and medicine. Laser surgery used for correcting vision problems has become routine, if not big business.

The lasers commonly employed in optical microscopy are high-intensity monochromatic light sources, which are useful as tools for a variety of techniques including optical trapping, lifetime imaging studies, photobleaching recovery, and total internal reflection fluorescence. In addition, lasers are also the most common light source for scanning confocal fluorescence microscopy, and have been utilized, although less frequently, in conventional widefield fluorescence investigations.

In the few decades since the 1960s, the laser has gone from being a science fiction fantasy, to a laboratory research curiosity, to an expensive but valuable tool in esoteric scientific applications, to its current role as an integral part of everyday tasks as mundane as reading grocery prices or measuring a room for wallpaper.

Any substantial list of the major technological achievements of the twentieth century would include the laser near the top. The pervasiveness of the laser in all areas of current life can be best appreciated by the range of applications that utilize laser technology.

At the spectacular end of this range are military applications, including lasers as weapons that might defend against missile attack; at the other end are daily activities such as playing music on compact discs and printing or copying paper documents. Somewhere in between are numerous scientific and industrial applications, including microscopy, astronomy, spectroscopy, surgery, integrated circuit fabrication, surveying, and communications.

The two major concerns in safe laser operation are exposure to the beam and the electrical hazards associated with high voltages within the laser and its power supply. While there are no known cases of a laser beam contributing to a person’s death, there have been several instances of deaths attributable to contact with high voltage laser-related components.

Beams of sufficiently high power can burn the skin, or in some cases create a hazard by burning or damaging other materials, but the primary concern with regard to the laser beam is potential damage to the eyes, which are the part of the body most sensitive to light.

Visualizing the Universe - Compact Disc Laser

A pre-recorded compact disc is read by tracking a finely focused laser across the spiral pattern of lands and pits stamped into the disc from a master. The laser beam is focused onto the surface of the spinning disc, and variations between the heights of pits and lands determine whether the light is scattered by the disc surface or reflected back into a detector.
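As a rough illustration of how the detector’s reflected-versus-scattered signal becomes data: on a real disc, each pit edge (a land-to-pit or pit-to-land transition) is read as a binary 1, and a stretch with no transition is read as a run of 0s. The sketch below mimics only that edge-detection step; it ignores the eight-to-fourteen modulation and error correction a real player performs.

```python
def decode_transitions(surface):
    """surface: a string of 'P' (pit) or 'L' (land) samples, one per
    clock tick. Each change between adjacent samples is read as a 1;
    no change is read as a 0."""
    bits = []
    for prev, curr in zip(surface, surface[1:]):
        bits.append('1' if prev != curr else '0')
    return ''.join(bits)

# A land-pit-pit-land-land-land pattern has edges at ticks 1 and 3.
print(decode_transitions("LPPLLL"))  # -> "10100"
```

This is one reason pits do not simply mean 1 and lands 0: encoding bits in transitions makes the read-out insensitive to which level the beam starts on.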

There are many other kinds of lasers, including ion lasers such as the argon-ion laser, diode lasers, helium-neon lasers, mode-locked Ti:sapphire lasers, and mode-locked pulsed Nd:YLF (neodymium-doped yttrium lithium fluoride) lasers.

In 2005, two Americans and a German won the Nobel Prize in Physics for laser research. Roy J. Glauber of Harvard University was honored for applying quantum theory to the light emitted by lasers, work that may help explain a major scientific paradox: the dual nature of light, which behaves like both a particle and a wave. He shared the prize with John L. Hall of the JILA Institute, University of Colorado (Boulder), and Theodore W. Hänsch of Ludwig-Maximilians University in Munich, who were recognized for developing techniques to precisely control the frequency of lasers, allowing the measurement of physical properties not only of atoms but of space and time with unprecedented accuracy.

Before the laser, researchers used classical 19th century optics theory to explain the behavior of light. Many researchers believed that quantum theory, which had proved successful in describing the behavior of matter, could not be applied to light.

The development of lasers operating at a single, precisely controlled frequency made new advances in the study of atoms and molecules possible.

Such accurate measurement will increase the accuracy of atomic clocks from the current 10 digits to 15 digits. This kind of precision will not only enhance the accuracy of clocks but also improve the Global Positioning System, the navigation of long spaceflights, and the pointing of space telescopes.
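A quick back-of-the-envelope calculation shows what those five extra digits buy. Since GPS positions are derived from light travel time, a clock’s accumulated drift translates directly into ranging error:

```python
# Compare 10-digit vs 15-digit fractional frequency accuracy.
C = 299_792_458          # speed of light, m/s
SECONDS_PER_DAY = 86_400

for digits in (10, 15):
    frac = 10.0 ** -digits            # fractional frequency error
    drift = frac * SECONDS_PER_DAY    # accumulated time error per day, s
    range_err = drift * C             # equivalent ranging error, m
    print(f"{digits} digits: {drift:.2e} s/day drift, "
          f"~{range_err:.2g} m of ranging error")
```

At 10 digits the clock drifts about 8.6 microseconds per day, worth roughly 2.6 kilometers of ranging error; at 15 digits the same day’s drift corresponds to a few centimeters.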

Typical applications of single-frequency lasers today occur in optical metrology (e.g. with fiber-optic sensors) and interferometry, optical data storage, high-resolution spectroscopy (e.g. LIDAR), and optical fiber communications. In some cases, such as spectroscopy, the narrow spectral width of the output is directly important. In others, such as optical data storage, what matters is low intensity noise, and thus the absence of any mode-beating noise.

Single-frequency sources are also attractive because they can be used for driving resonant enhancement cavities, e.g. for nonlinear frequency conversion, and for coherent beam combining. The latter technique is currently used to develop laser systems with very high output powers and good beam quality.

Holography

Holography was invented in 1948 by the Hungarian physicist Dennis Gabor, who received the Nobel Prize in Physics in 1971. The discovery grew out of research on electron microscopes, but it was the laser that ultimately made practical holography possible. Holography is the science of producing three-dimensional images called holograms; it is also used to optically store and retrieve information. Holograms gained popularity through movies such as Star Wars, Star Trek and A.I. Artificial Intelligence.