Visualizing the Universe: Virtual Reality

Virtual reality holds tremendous promise for the future, and it's progressing on numerous fronts. Movies like the remake of TRON, the Matrix series, Minority Report, and Avatar have given us a glimpse of what could be, and scientists and technicians, both public and private, are racing to make the virtual indistinguishable from reality to our five senses.

The Electronic Visualization Laboratory at the University of Illinois at Chicago is one of the best-known virtual reality development institutes in the world, and the inventor of the CAVE (CAVE Automatic Virtual Environment) and its second iteration, CAVE2.


Virtual reality, simply put, is a three-dimensional, computer-generated simulation that one can navigate around and interact with, and so become immersed in another environment.

Douglas Engelbart, an electrical engineer and former naval radar technician, is credited with the first exploration into virtual reality. He viewed computers as more than glorified adding machines. It was the 1950s, and TVs had barely turned color. His goal was to connect the computer to a screen.

By the early 1960s, communications technology intersecting with computing and graphics was well underway. Vacuum tubes turned into transistors. Pinball machines were being replaced by video games.

Scientific visualization moved from bar charts, mathematical diagrams, and line drawings to dynamic images generated with computer graphics. Computerized scientific visualization enabled scientists to assimilate huge amounts of data and increase understanding of complex processes like DNA sequences, molecular models, brain maps, fluid flows, and celestial events. A goal of scientific visualization is to capture the dynamic qualities of a wide range of systems and processes in images, but early computer graphics and animation were not interactive. Animation, despite its moving pictures, was static in the sense that once created, it couldn't be altered. Interactivity became the primary driver in the development of virtual reality.

By the end of the 1980s, supercomputers and high-resolution graphics workstations were paving the way toward a more interactive means of visualization. As computer technology developed, MIT and other high-tech research centers began exploring Human-Computer Interaction (HCI), which is still a major area of research, now combined with artificial intelligence.

The mouse seemed clumsy, and devices such as light pens and touch screens were explored as alternatives. Eventually CAD (computer-aided design) programs emerged, enabling designers to model and simulate the inner workings of vehicles, create blueprints for city development, and experiment with computerized blueprints for a wide range of industrial products.

Flight simulators, the predecessors of today's computerized programs and models, might be considered the first virtual reality-like environments. The early flight simulators consisted of mock cockpits built on motion platforms that pitched and rolled. A limitation was that they lacked visual feedback. This changed when video displays were coupled with the model cockpits.

In 1979, the military began experimenting with head-mounted displays. By the early 1980s, better software, hardware, and motion-control platforms enabled pilots to navigate through highly detailed virtual worlds.

US Army’s New Virtual Simulation Training System

A natural consumer of computer graphics was the entertainment industry, which, like the military and industry, was the source of many valuable spin-offs in virtual reality. By the 1970s, some of Hollywood’s most dazzling special effects were computer-generated. Plus, the video game business boomed.

One direct spin-off of entertainment's venture into computer graphics was the dataglove, a computer interface device that detects hand movements. It was invented to produce music by linking hand gestures to a music synthesizer. NASA was one of the first customers for the new device. The biggest consumer of the dataglove technology was Mattel, which adapted it into the PowerGlove and used it in video games for kids. The glove is no longer sold.

Helmet-mounted displays and power gloves combined with 3D graphics and sounds hinted at the potential for experiencing totally immersive environments. There were practical applications as well. Astronauts, wearing goggles and gloves, could manipulate robotic rovers on the surface of Mars. Of course, some people might not consider a person on Mars as a practical endeavor. But at least the astronaut could explore dangerous terrain without risk of getting hurt.


The virtual reality laboratory at the Johnson Space Center helped astronauts Steve Bowen and Al Drew train for two spacewalks they conducted in 2011.

Virtual reality is not just a technological marvel to be engaged as easily as sitting in a movie theater or in front of a TV. Human factors are crucial to VR: age, gender, health and fitness, peripheral vision, and posture all come into play. Everyone perceives reality differently, and it's the same for VR.

The concept of a room with graphics projected from behind the walls was invented at the Electronic Visualization Laboratory at the University of Illinois at Chicago in 1992. The images on the walls were in stereo to give a depth cue. The main advantage over ordinary graphics systems is that users are surrounded by the projected images, which means the images fill the users' main field of vision. The environment was dubbed the CAVE (CAVE Automatic Virtual Environment). In 2012 the dramatically improved environment, CAVE2, was launched.

CAVE2 is the world's first near-seamless flat-panel-based, surround-screen immersive system. Unique to CAVE2 is that it enables users to simultaneously view both 2D and 3D information, providing more flexibility for mixed-media applications. CAVE2 is a cylindrical system 24 feet in diameter and 8 feet tall, consisting of 72 near-seamless, off-axis-optimized passive stereo LCD panels that create an approximately 320-degree panoramic environment for displaying information at 37 megapixels in stereoscopic 3D.

The original CAVE was designed from the beginning to be a useful tool for scientific visualization. The CAVE2 can be coupled to remote data sources, supercomputers and scientific instruments via dedicated, super high-speed fiber networks. Various CAVE-like environments exist all over the world today. Projection on all 72 surfaces of its room allows users to turn around and look in all directions. Thus, their perception and experience are never limited, which is necessary for full immersion.

Any quick review of the history of optics, photography, computer graphics, media, broadcasting, and even sci-fi is enough to convince one that virtual reality will become as commonplace as TV and movies. There are far too many practical applications, such as surgery, flight simulation, space exploration, chemical engineering, and underwater exploration.

And these are just the immediate applications, assuming we only receive virtual reality data via our external senses. Once the information is transmitted via electrical impulses directly to the brain, not only will the virtual be indistinguishable from reality, stopping there will be a mere option. We'll be able to stimulate the brain in ways nature never could, producing never-before-experienced sensations or even entirely new senses: new emotions, ranges, and combinations of the visceral and ethereal.

Maybe it will be Hollywood that stops speculating and starts experimenting. The thought of being chased by Freddy Krueger is one thing, but to actually be chased by Freddy Krueger is utterly terrifying. No more jumping out of our seats when the face of a giant shark snaps its teeth at us. Now we can really know what it's like to be chased by cops speeding down a thruway at 100 mph. We can feel and smell pineapples on a tropical beach. We can catch bad guys, defeat aliens in a starship battle, and have conversations with Presidents in our bare feet. We can simulate what it would feel like to soar across the galaxy as a stream of subatomic particles.

With virtual reality, the only limit is the imagination.

Visualizing the Universe: Scientific Visualization

Visualization in its broadest terms represents any technique for creating images to represent abstract data. Scientific visualization has grown to encompass many other areas, like business (information visualization), computing (process visualization), medicine, chemical engineering, flight simulation, and architecture. Indeed, there is hardly an area of human endeavor that does not touch scientific visualization in one form or another.

From a crude perspective, scientific visualization was born out of the conversion of text into graphics: describing an apple with words, for instance. Bar graphs, charts, and diagrams were two-dimensional forerunners in converting data into visual representations. Obviously words and two-dimensional representations can only go so far, and more mathematically accurate datasets were needed to describe an object's exterior, interior, and functioning processes.


Early Scientific Visualization: Charles Minard’s 1869 chart showing the number of men in Napoleon’s 1812 Russian campaign army, their movements, as well as the temperature they encountered on the return path.

Such datasets were huge, and it wasn’t until the development of supercomputers with immense processing power combined with sophisticated digital graphics workstations that conversion from data into a more dynamic, 3-D graphical representation was possible. From the early days of computer graphics, users saw the potential of computer visualization to investigate and explain physical phenomena and processes, from repairing space vehicles to chaining molecules together.

In general the term “scientific visualization” is used to refer to any technique involving the transformation of data into visual information. It characterizes the technology of using computer graphics techniques to explore results from numerical analysis and extract meaning from complex, mostly multi-dimensional data sets.

Traditionally, the visualization process consists of filtering raw data to select a desired resolution and region of interest, mapping that result into a graphical form, and producing an image, animation, or other visual product. The result is evaluated, the visualization parameters modified, and the process run again.

Three-dimensional imaging of medical datasets was introduced after clinical CT (computed tomography) scanning became a reality in the 1970s. A CT scan produces images of the inside of an object by obtaining a series of two-dimensional x-ray axial images.

The individual x-ray axial slice images are taken using an x-ray tube that rotates around the object, taking many scans as the object gradually passes through the scanner. The multiple scans from each 360-degree sweep are then processed to produce a single cross-section. See MRI and CAT scanning in the Optics section.

The goal of the visualization process is to generate visually understandable images from abstract data. Several steps must be performed during the generation process, and these steps are arranged in the so-called Visualization Pipeline.

Visualization Methods

Data is obtained either by sampling or measuring, or by executing a computational model. Filtering is a step which pre-processes the raw data and extracts information which is to be used in the mapping step. Filtering includes operations like interpolating missing data, or reducing the amount of data. It can also involve smoothing the data and removing errors from the data set.

Mapping is the core of the visualization process. It transforms the pre-processed, filtered data into 2D or 3D geometric primitives with appropriate attributes like color or opacity. The mapping process is very important for the later visual representation of the data. Rendering then uses the geometric primitives from the mapping process to generate the output image. There are a number of different filtering, mapping, and rendering methods used in the visualization process.
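
The filter → map → render pipeline described above can be sketched in a few lines of Python. Everything here, from the noisy data source to the box-blur filter and the blue-to-red color map, is a hypothetical stand-in for what a real visualization system would supply:

```python
import numpy as np

def acquire():
    """Data source: a noisy 2D scalar field (stand-in for real measurements)."""
    x = np.linspace(-2, 2, 64)
    xx, yy = np.meshgrid(x, x)
    field = np.exp(-(xx**2 + yy**2))
    return field + 0.05 * np.random.default_rng(0).standard_normal(field.shape)

def filter_step(data):
    """Filtering: pre-process the raw data, here smoothing with a 3x3 box blur."""
    k = np.ones((3, 3)) / 9.0
    padded = np.pad(data, 1, mode="edge")
    out = np.zeros_like(data)
    for i in range(data.shape[0]):
        for j in range(data.shape[1]):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * k)
    return out

def map_step(data):
    """Mapping: convert scalar values to graphical attributes (RGB colors)."""
    norm = (data - data.min()) / (data.max() - data.min())
    return np.stack([norm, np.zeros_like(norm), 1.0 - norm], axis=-1)  # blue to red

def render_step(colors):
    """Rendering: produce the final image array (a real system would rasterize)."""
    return (colors * 255).astype(np.uint8)

image = render_step(map_step(filter_step(acquire())))
```

In a real system each stage would be far richer, but the shape of the loop is the same: evaluate the result, adjust the parameters of a stage, and run the pipeline again.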

Some of the earliest medical visualizations created 3D representations from CT scans with help from electron microscopy. Images were geometrical shapes like polygons and lines forming a wire frame that represented three-dimensional volumetric objects. Similar techniques are used in creating animation for Hollywood films. With sophisticated rendering capability, motion could be added to the wire-frame model, illustrating processes such as blood flow or fluid dynamics in chemical and physical engineering.

The development of integrated software environments took visualization to new levels. Some of the systems developed during the 80s include IBM’s Data Explorer, Ohio State University’s apE, Wavefront’s Advanced Visualizer, SGI’s IRIS Explorer, Stardent’s AVS and Wavefront’s Data Visualizer, Khoros (University of New Mexico), and PV-WAVE (Precision Visuals’ Workstation Analysis and Visualization Environment).

Crude 2005 Simulation – Typhoon Mawar

These visualization systems were designed to help scientists, who often knew little about how graphics are generated. The most usable systems provided a visual interface: software modules developed independently, with standardized inputs and outputs, were visually linked together in a pipeline. Such systems are sometimes called modular visualization environments (MVEs).

MVEs allowed the user to create visualizations by selecting program modules from a library and specifying the flow of data between modules using an interactive graphical networking or mapping environment. Maps or networks could be saved for later recall.

General classes of modules included:

•  data readers – input the data from the data source
•  data filters – convert the data from a simulation or other source into another form which is more informative or less voluminous
•  data mappers – convert information into another domain, such as 2D or 3D geometry or sound
•  viewers or renderers – rendering the 2D and 3D data as images
•  control structures – manage display devices, recording devices, and open graphics windows
•  data writers – output the original or filtered data

MVEs required no graphics expertise, allowed for rapid prototyping and interactive modifications, promoted code reuse, allowed new modules to be created and allowed computations to be distributed across machines, networks and platforms.
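
As a rough illustration of the MVE idea, here is a toy dataflow in Python: each "module" is a function with a standardized input and output, and a map is just an ordered wiring of modules. All module names and behaviors are invented for this sketch:

```python
def reader(_):            # data reader: input the data from the data source
    return list(range(10))

def downsample(data):     # data filter: reduce the amount of data
    return data[::2]

def to_geometry(data):    # data mapper: convert values into 2D points
    return [(x, x * x) for x in data]

def ascii_viewer(points): # viewer/renderer: produce a crude text "image"
    return "\n".join(f"{x:>2} {'*' * (y // 4 + 1)}" for x, y in points)

def run_pipeline(modules, source=None):
    """Execute modules in order, feeding each output to the next input."""
    data = source
    for module in modules:
        data = module(data)
    return data

picture = run_pipeline([reader, downsample, to_geometry, ascii_viewer])
print(picture)
```

Because every module shares the same calling convention, a module can be swapped out or a new one inserted without touching the rest of the network, which is exactly what made MVEs attractive for rapid prototyping.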

Early systems were not always good performers, especially on larger datasets, and image quality was often poor.

Newer visualization systems came out of the commercial animation software industry. The Wavefront Advanced Visualizer was a modeling, animation and rendering package which provided an environment for interactive construction of models, camera motion, rendering and animation without any programming. The user could use many supplied modeling primitives and model deformations, create surface properties, adjust lighting, create and preview model and camera motions, do high quality rendering, and save images to video tape.

Acquiring data is accomplished in a variety of ways: CT scans, MRI scans, ultrasound, confocal microscopy, computational fluid dynamics, and remote sensing. Remote sensing involves gathering data and information about the physical world by detecting and measuring phenomena such as radiation, particles, and fields associated with objects located beyond the immediate vicinity of the sensing device. It is most often used to acquire and interpret geospatial data for features, objects, and classes on the Earth's land surface, oceans, and atmosphere, and in outer space for mapping the exteriors of planets, stars, and galaxies. Data is also obtained via aerial photography, spectroscopy, radar, radiometry, and other sensor technologies.

Another major approach to 3D visualization is volume rendering. Volume rendering allows the display of information throughout a 3D data set, not just on the surface. Pixar Animation Studios, a spin-off from George Lucas's Industrial Light & Magic (ILM), created a volume rendering method, or algorithm, that used independent 3D cells within the volume, called "voxels".

The volume was composed of voxels, each holding a value for the same property, such as density. A surface occurs between groups of voxels with two different values. The algorithm used color and intensity values from the original scans, and gradients obtained from the density values, to compute the 3D solid. Other approaches include ray-tracing and splatting.
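
A minimal sketch of the voxel idea, using a made-up radial density field in place of real scan data: it marks the voxels that sit on the boundary between two value groups, which is roughly where a volume renderer would place a surface, and computes the density gradient a renderer would use for shading.

```python
import numpy as np

# Hypothetical 3D volume of density values (e.g. stacked CT slices):
# a simple radial field, so the iso-surface is a sphere.
n = 16
z, y, x = np.mgrid[0:n, 0:n, 0:n]
density = np.sqrt((x - n / 2) ** 2 + (y - n / 2) ** 2 + (z - n / 2) ** 2)

iso = 5.0               # a surface occurs between voxel groups whose
inside = density < iso  # values lie on opposite sides of this threshold

# Mark voxels whose neighbor along any axis is on the other side of the
# surface: a crude stand-in for a volume renderer's surface extraction.
surface = np.zeros(inside.shape, dtype=bool)
for axis in range(3):
    surface |= inside != np.roll(inside, 1, axis=axis)

# The density gradient approximates the surface normal used for shading.
gz, gy, gx = np.gradient(density)
```

Real algorithms (marching cubes, ray casting, splatting) are far more refined, but they start from the same two ingredients: a threshold between voxel groups and a gradient field.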

Scientific visualization draws from many disciplines such as computer graphics, image processing, art, graphic design, human-computer interaction (HCI), cognition, and perception. The fine arts are extremely useful to scientific visualization: art history can offer insights into visual form, as well as ways of imagining scenarios that have little or no data behind them. Another important part of the picture was the invention of the LCD screen, which helped tie it all together, bringing visual graphics to life with better resolution, lighter weight, and faster display of data than the computer monitors of the past.

Computer simulations have become a useful part of modeling natural systems in physics, chemistry, and biology, human systems in economics and social science, and the engineering of new technology. Simulations render mathematical models into visual representations that are easier to understand. Computer models can be classified as stochastic or deterministic.

Stochastic models use random number generators to model chance or random events, such as genetic drift. A discrete event simulation (DE) manages events in time; most simulations are of this type. A continuous simulation uses differential equations (either partial or ordinary), implemented numerically: the program solves all the equations periodically and uses the results to update the state and output of the simulation. Most flight and racing-car simulations are of this type, as are simulated electrical circuits.
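
A continuous simulation of the kind described above can be sketched with a simple electrical circuit: the differential equation for a charging RC circuit, dV/dt = (V_in − V)/(RC), stepped numerically with the Euler method. The component values are illustrative.

```python
# Continuous simulation sketch: an RC circuit charging toward V_in,
# with the governing differential equation solved by repeated small steps.

R, C = 1_000.0, 1e-3    # ohms, farads -> time constant RC = 1 second
V_in, V = 5.0, 0.0      # source voltage, initial capacitor voltage
dt, t = 0.001, 0.0      # time step and simulation clock, in seconds

history = []
while t < 5.0:          # simulate 5 seconds (about 5 time constants)
    dV = (V_in - V) / (R * C) * dt   # Euler step of dV/dt = (V_in - V)/(RC)
    V += dV
    t += dt
    history.append((t, V))

# After roughly 5 time constants the capacitor is nearly fully charged.
print(f"V(5s) = {V:.3f} volts")
```

The same loop structure, "solve the equations for one small step, update the state, repeat," underlies flight and racing-car simulators, just with far larger systems of equations.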

Other methods include agent-based simulation. In agent-based simulation, the individual entities (such as molecules, cells, trees or consumers) in the model are represented directly (rather than by their density or concentration) and possess an internal state and set of behaviors or rules which determine how the agent’s state is updated from one time-step to the next.
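
A minimal agent-based sketch along these lines, with invented agents and an illustrative "infection" rule for updating each agent's internal state from one time-step to the next:

```python
import random

random.seed(42)  # stochastic rule, seeded for repeatability

class Agent:
    """Each entity is represented directly and carries its own state."""
    def __init__(self, infected=False):
        self.infected = infected

    def step(self, neighbors):
        """Rule: an uninfected agent next to an infected one catches it
        with probability 0.5 (an illustrative behavior, not a real model)."""
        if not self.infected and any(n.infected for n in neighbors):
            self.infected = random.random() < 0.5

# Twenty agents in a line; agent 0 starts infected.
agents = [Agent(infected=(i == 0)) for i in range(20)]

for _ in range(30):                    # run 30 time-steps
    for i, a in enumerate(agents):     # neighbors: adjacent agents in the line
        left = agents[i - 1] if i > 0 else None
        right = agents[i + 1] if i < len(agents) - 1 else None
        a.step([n for n in (left, right) if n is not None])

print(sum(a.infected for a in agents), "of", len(agents), "agents infected")
```

The point is the representation: instead of tracking a density or concentration, every molecule, cell, tree, or consumer is an object with state and rules of its own.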

Winter Simulation Conference

The Winter Simulation Conference is an important annual event covering leading-edge developments in simulation analysis and modeling methodology. Areas covered include agent-based modeling, business process reengineering, computer and communication systems, construction engineering and project management, education, healthcare, homeland security, logistics, transportation, distribution, manufacturing, military operations, risk analysis, virtual reality, web-enabled simulation, and the future of simulation. The WSC provides educational opportunity for both novices and experts.



Visualizing the Universe: Telescopes

Without telescopes, the stars in the sky we see every night would just be twinkling little lights. Hard to imagine what people in pre-telescope times thought these twinkling lights were. For some it must’ve been frightening. For others, it was awe-inspiring.

It began with optics: the lens. Spectacles were being worn in Italy as early as 1300. In the one-thing-leads-to-another theory, no doubt the ability to see better led to the desire to see farther. Three hundred years later, a Dutch spectacle maker named Hans Lippershey put two lenses together and achieved magnification, but when he tried to sell the idea he discovered quite a number of other experimenters had made the same discovery.

Also in the 1600s, Galileo, an instrument maker in Venice, started working on a device that many thought had little use other than creating optical illusions (although they weren't called that at the time). In 1610 he published a description of his night sky observations in a small paper called Starry Messenger (Sidereus Nuncius).

He reported that the moon was not smooth, as many had believed, but rough and covered with craters. He also proposed that the "Milky Way" was composed of millions of stars and that Jupiter had four moons. And he overturned the geocentric view of the world system (the universe revolves around the Earth) in favor of the heliocentric view (the planets revolve around the Sun), a notion proposed by Copernicus more than half a century earlier. The device he used to make these discoveries came to be known as the telescope.

The telescope was a long thin tube in which light passes in a straight line from the aperture (the front objective lens) to the eyepiece at the opposite end. Galileo's device was the forerunner of what are now called refracting telescopes, because the objective lens bends, or refracts, light.

NASA's Great Observatories program comprised a series of space telescopes designed to give the most complete picture of objects across many different wavelengths. Each observatory studies a particular wavelength region in detail.

The telescopes in order of launch were: the Hubble Space Telescope (1990), Compton Gamma Ray Observatory (1991), Chandra X-ray Observatory (1999), and the Spitzer Space Telescope (2003).

The Kepler mission, launched in 2009, joined the great observatories and spent about four years looking for Earth-like planets in Earth-like orbits around Sun-like stars. It monitored over 100,000 stars in the constellations Lyra and Cygnus in hopes of finding a few dozen planetary transits, in which a star's light dims slightly as a planet passes across its disk. Instead it found thousands of extrasolar planets, more than any scientist's wildest dream! Recently retired after its reaction wheels and, finally, its fuel gave out, it left scientists years of data to examine, with more hope than ever that Earth-like planets exist in the galaxy and beyond.



NASA has also launched many smaller observatories through its Explorer program. These missions have probed the "afterglow" of the Big Bang (COBE and WMAP), the ultraviolet light from other galaxies (GALEX and EUVE), and the violent explosions known as gamma-ray bursts (Swift).

Sometimes several of the observatories are used to look at the same object. Astronomers can analyze an object thoroughly by studying it in many different kinds of light. An object will look different in X-ray, visible, and infrared light.

Newton's experiments with color explored the way a prism refracts white light into an array of colors. A lens acts like a circular prism, separating the colors of visible light; the resulting color fringing is known as chromatic aberration, and it limited the effectiveness of existing telescopes. Newton's answer was a new telescope design using a parabolic mirror to collect light and concentrate the image before presenting it to the eyepiece. The result was the reflective telescope.

Reflective Telescopes

Reflective telescopes are constructed with giant mirrors and collect more light than the human eye can, in order to see objects that are too faint and far away.

Solar Telescopes, designed to see the Sun, have the opposite problem: the target emits too much light. Because of the sun’s brightness, astronomers must filter out much of the light to study it. Solar telescopes are ordinary reflecting telescopes with some important changes.

Because the Sun is so bright, solar telescopes don’t need huge mirrors that capture as much light as possible. The mirrors only have to be large enough to provide good resolution. Instead of light-gathering power, solar telescopes are built to have high magnification. Magnification depends on focal length. The longer the focal length, the higher the magnification, so solar telescopes are usually built to be quite long.

Since the telescopes are so long, the air in the tube becomes a problem. As the temperature of the air changes, the air moves. This causes the telescope to create blurry images. Originally, scientists tried to keep the air inside the telescope at a steady temperature by painting solar telescopes white to reduce heating. White surfaces reflect more light and absorb less heat. Today the air is simply pumped out of the solar telescopes’ tubes, creating a vacuum.

Because it’s so necessary to control the air inside the telescope and the important instruments are large and bulky, solar telescopes are designed not to move. They stay in one place, while a moveable mirror located at the end of the telescope, called a tracking mirror, follows the Sun and reflects its light into the tube. To minimize the effects of heating, these mirrors are mounted high above the ground.

Astronomers have studied the Sun for a long time. Galileo, among others, had examined sunspots. Other early astronomers investigated the outer area of the Sun, called the corona, which was only visible during solar eclipses.


Sun Spots

Alongside the telescope, other instruments were used to study the Sun. The spectroscope, a device invented in 1815 by the German optician Joseph von Fraunhofer, spread sunlight into its component colors and helped astronomers figure out what elements stars contain. Scientists used a spectrum of the Sun to discover the element helium, named after the Greek word for Sun, "helios."

In the 1890s, the American astronomer George Ellery Hale combined the technologies of spectroscopy and photography and came up with a new and better way to study the Sun. Hale called his device the "spectroheliograph."

The spectroheliograph allowed astronomers to choose a certain type of light to analyze. For example, they could take a picture of the Sun using only the kind of light produced by calcium atoms. Some types of light make it easier to see details such as sunspots and solar prominences.

In 1930, the French astronomer Bernard Lyot came up with another device that helped scientists study both the Sun and objects nearby. The coronagraph uses a disk to block much of the light from the Sun, revealing features that would otherwise be erased by the bright glare. Close observations of the Sun’s corona, certain comets, and other details and objects are made possible by the coronagraph. Coronagraphs also allow scientists to study features like solar flares and the Sun’s magnetic field.

Today, more technologically advanced versions of the spectroheliograph and coronagraph are used to study the Sun. The McMath-Pierce Solar Telescope on Kitt Peak in Arizona is one of the world's largest solar telescopes. The Solar and Heliospheric Observatory (SOHO) is a solar telescope in space that studies the Sun's interior, corona, and solar wind in ultraviolet and X-rays as well as visible light. Astronomers also use a technique called helioseismology, a kind of spectroscopy that studies sound waves in the Sun, to examine the Sun down to its core.

Basic telescope terms:

  • Concave – curved inward; a concave lens causes light to spread out, while a concave mirror brings light together to a focus.
  • Convex – curved outward; a convex lens causes light to come together at a focal point, while a convex mirror spreads light out.
  • Field of view – area of the sky that can be seen through the telescope with a given eyepiece.
  • Focal length – distance required by a lens or mirror to bring the light to a focus.
  • Focal point or focus – point at which light from a lens or mirror comes together.
  • Magnification (power) – telescope’s focal length divided by the eyepiece’s focal length.
  • Resolution – how close two objects can be and yet still be detected as separate objects, usually measured in arc-seconds (this is important for revealing fine details of an object, and is related to the telescope’s aperture).
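
The magnification formula from the glossary is simple enough to work through with numbers; the focal lengths below are illustrative, typical of a mid-sized amateur instrument.

```python
# Magnification (power) = telescope focal length / eyepiece focal length.

telescope_focal_mm = 1200.0   # illustrative telescope focal length
eyepiece_focal_mm = 25.0      # illustrative eyepiece focal length

magnification = telescope_focal_mm / eyepiece_focal_mm
print(f"magnification: {magnification:.0f}x")                     # 48x

# Swapping in a shorter eyepiece raises the power of the same telescope:
print(f"with a 10 mm eyepiece: {telescope_focal_mm / 10.0:.0f}x")  # 120x
```

This is why, as the solar-telescope section notes, long focal lengths go hand in hand with high magnification.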

Telescopes come in all shapes and sizes, from a little plastic tube bought at a toy store for $2, to the Hubble Space Telescope weighing several tons. Amateur telescopes fit somewhere in between. Even though they are not nearly as powerful as the Hubble, they can do some incredible things. For example, a small 6-inch (15 centimeter) scope can read the writing on a dime from 150 feet (55 meters) away.

Most telescopes come in one of two forms: the refractor and the reflector. The refractor telescope uses glass lenses. The reflector telescope uses mirrors instead of lenses. They both try to accomplish the same thing, but in different ways.

Telescopes are metaphorically giant eyes. The reason our eyes can't see the printing on a dime 150 feet away is that they are simply too small. The eyes, obviously, have limits. A bigger eye would collect more light from an object and create a brighter image.

The objective lens (in refractors) or primary mirror (in reflectors) collects light from a distant object and brings that light, or image, to a point or focus. An eyepiece lens takes the bright light from the focus of the objective lens or primary mirror and “spreads it out” (magnifies it) to take up a large portion of the retina. This is the same principle that a magnifying glass (lens) uses. A magnifying glass takes a small image on a sheet of paper, for instance, and spreads it out over the retina of the eye so that it looks big.

When the objective lens or primary mirror is combined with the eyepiece, it makes a telescope. The basic idea is to collect light to form a bright image inside the telescope, then magnify that image. Therefore, the simplest telescope design is a big lens that gathers the light and directs it to a focal point, with a small lens used to bring the image to a person's eye.

A telescope’s ability to collect light is directly related to the diameter of the lens or mirror (the aperture) used to gather light. Generally, the larger the aperture, the more light the telescope collects and brings to focus, and the brighter the final image. The telescope’s magnification, its ability to enlarge an image, depends on the combination of lenses used. The eyepiece performs the magnification. Magnification can be achieved by almost any telescope using different eyepieces.
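
The aperture's effect can be put in numbers: light-gathering power grows with the collecting area, i.e. with the square of the diameter. The comparison below uses a 6-inch aperture and the roughly 7 mm diameter of a dark-adapted pupil, both commonly quoted approximate figures.

```python
# Light grasp scales with aperture area, so with diameter squared.

aperture_mm = 6 * 25.4   # a 6-inch aperture, converted to millimeters
pupil_mm = 7.0           # approximate dark-adapted human pupil

light_grasp = (aperture_mm / pupil_mm) ** 2
print(f"collects ~{light_grasp:.0f}x more light than the naked eye")
```

A factor of several hundred is what lets a modest amateur scope reveal objects the eye alone could never detect.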


Hans Lippershey, living in Holland, is credited with inventing the refractor in 1608. It was first used by the military. Galileo was the first to use it in astronomy. Both Lippershey’s and Galileo’s designs used a combination of convex and concave lenses. Around 1611, Kepler improved the design to have two convex lenses, which made the image upside-down. Kepler’s design is still the major design of refractors today, with a few later improvements in the lenses and in the glass used to make the lenses.

Refractors have a long tube, made of metal, plastic, or wood; a glass combination lens at the front end (the objective lens); and a second glass combination lens (the eyepiece). The tube holds the lenses in place at the correct distance from one another, and it helps keep out dust, moisture, and light that would interfere with forming a good image. The objective lens gathers the light and bends, or refracts, it to a focus near the back of the tube. The eyepiece brings the image to the eye and magnifies it. Eyepieces have much shorter focal lengths than objective lenses.

Achromatic refractors use lenses that are only partially corrected for chromatic aberration, the rainbow halo that sometimes appears around images seen through a refractor; they usually have "coated" lenses to reduce the problem. Apochromatic refractors use either multiple-lens designs or lenses made of special glasses (such as fluorite) to prevent chromatic aberration. Apochromatic refractors are much more expensive than achromatic ones.

Refractors have good resolution, high enough to see details in planets and binary stars. However, it is difficult to make large objective lenses (greater than 4 inches or 10 centimeters) for refractors. Refractors are relatively expensive. Because the aperture is limited, a refractor is less useful for observing faint, deep-sky objects, like galaxies and nebulae, than other types of telescopes.

Isaac Newton developed the reflector telescope in 1668, in response to the chromatic aberration (rainbow halo) problem that plagued refractors during his time. Instead of using a lens to gather light, Newton used a curved, metal mirror (the primary mirror) to collect the light and reflect it to a focus. Mirrors do not have the chromatic aberration problems that lenses do. Newton placed the primary mirror in the back of the tube.

Because the mirror reflected light back up the tube, Newton had to use a small, flat mirror (the secondary mirror) in the focal path of the primary mirror to deflect the image out through the side of the tube to the eyepiece; otherwise, the observer's head would block the incoming light. Because the secondary mirror is so small, it blocks only a small fraction of the light gathered by the primary mirror.

The Newtonian reflector remains one of the most popular telescope designs in use today.

Rich-field (or wide-field) reflectors are a type of Newtonian reflector with short focal ratios and low magnification. The focal ratio, or f/number, is the focal length divided by the aperture, and relates to the brightness of the image. They offer wider fields of view than longer focal ratio telescopes, and provide bright, panoramic views of comets and deep-sky objects like nebulae, galaxies and star clusters.
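The focal ratio is simple arithmetic. As a quick sketch (the 1000 mm focal length and 200 mm aperture are made-up example values, not a specific telescope):

```python
def focal_ratio(focal_length_mm, aperture_mm):
    """Focal ratio (f/number) = focal length divided by aperture."""
    return focal_length_mm / aperture_mm

# A hypothetical Newtonian with a 1000 mm focal length and 200 mm aperture:
print(focal_ratio(1000, 200))  # f/5 -- a short, "fast" ratio suited to rich-field viewing
```

Lower f/numbers mean a wider, brighter field at a given eyepiece, which is exactly why rich-field reflectors use short focal ratios.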

Dobsonian telescopes are a type of Newtonian reflector with a simple tube and alt-azimuth mounting. They are relatively inexpensive because they are made of plastic, fiberglass or plywood. Dobsonians can have large apertures (6 to 17 inches, 15 to 43 centimeters). Because of their large apertures and low price, Dobsonians are well-suited to observing deep-sky objects.

Reflector telescopes have their own problems. Spherical aberration occurs when light reflected from the mirror's edge focuses to a slightly different point than light reflected from the center. Astigmatism occurs when the mirror is not ground symmetrically about its center; consequently, images of stars focus to crosses rather than to points. Coma occurs when stars near the edge of the field look elongated, like comets, while those in the center are sharp points of light. All reflector telescopes also experience some loss of light: the secondary mirror obstructs part of the light entering the telescope, and a mirror's reflective coating returns only up to 90 percent of the incoming light.

Compound or catadioptric telescopes are hybrid telescopes that have a mix of refractor and reflector elements in the design. The first compound telescope was made by German astronomer Bernhard Schmidt in 1930. The Schmidt telescope had a primary mirror at the back of the telescope, and a glass corrector plate in the front of the telescope to remove spherical aberration. The telescope was used primarily for photography, because it had no secondary mirror or eyepieces. Photographic film is placed at the prime focus of the primary mirror. Today, the Schmidt-Cassegrain design, which was invented in the 1960s, is the most popular type of telescope. It uses a secondary mirror that bounces light through a hole in the primary mirror to an eyepiece.

The second type of compound telescope was invented by the Russian astronomer D. Maksutov, although a Dutch astronomer, A. Bouwers, had come up with a similar design in 1941, before Maksutov. The Maksutov telescope is similar to the Schmidt design but uses a more spherical corrector lens. The Maksutov-Cassegrain design is similar to the Schmidt-Cassegrain design.

Telescope Mounts

Telescope Mounts are another important feature of telescopes. The alt-azimuth is a type of telescope mount, similar to a camera tripod, that uses a vertical (altitude) and a horizontal (azimuth) axis to locate an object. An equatorial mount uses two axes (right ascension, or polar, and declination) aligned with the poles to track the motion of an object across the sky.

The telescope mount keeps the telescope steady, points it at whatever object is being viewed, and adjusts for the movement of the stars caused by the Earth's rotation. It also leaves the observer's hands free to focus, change eyepieces, and perform other tasks.

The alt-azimuth mount has two axes of rotation: a horizontal axis and a vertical axis. To point the telescope at an object, the mount is rotated along the horizon (the azimuth axis) to the object's horizontal position, and the telescope is then tilted along the altitude axis to the object's vertical position. This type of mount is simple to use and is most common in inexpensive telescopes.



There are two variations of the alt-azimuth mount. The ball and socket is used in inexpensive rich-field telescopes. It has a ball shaped end that can rotate freely in the socket mount. The rocker box is a low center-of-gravity box mount, usually made of plywood, with a horizontal circular base (azimuth axis) and Teflon bearings for the altitude axis. This mount is usually used on Dobsonian telescopes. It provides good support for a heavy telescope, as well as smooth, frictionless motion.

Although the alt-azimuth mount is simple and easy to use, it does not properly track the motion of the stars. In trying to follow the motion of a star, the mount produces a “zigzag” motion, instead of a smooth arc across the sky. This makes this type of mount useless for taking photographs of the stars.

The equatorial mount also has two perpendicular axes of rotation: right ascension and declination. However, instead of being oriented up and down, it is tilted at the same angle as the Earth’s axis of rotation. The equatorial mount also comes in two variations. The German equatorial mount is shaped like a “T.” The long axis of the “T” is aligned with the Earth’s pole. The Fork mount is a two-pronged fork that sits on a wedge that is aligned with the Earth’s pole. The base of the fork is one axis of rotation and the prongs are the other.

When properly aligned with the Earth’s poles, equatorial mounts can allow the telescope to follow the smooth, arc-like motion of a star across the sky. They can also be equipped with “setting circles,” which allow easy location of a star by its celestial coordinates (right ascension, declination). Motorized drives allow a computer (laptop, desktop or PDA) to continuously drive the telescope to track a star. Equatorial mounts are used for astrophotography.
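The difference between the two mounts comes down to coordinate systems: an equatorial mount tracks a star by turning one axis at a constant rate, while an alt-azimuth mount must continuously recompute the star's altitude and azimuth, producing the "zigzag" described above. A sketch of the standard spherical-astronomy formula for altitude (the latitude and star coordinates below are illustrative values, not from the text):

```python
import math

def altitude(dec_deg, lat_deg, hour_angle_deg):
    """Altitude of a star, given its declination, the observer's latitude,
    and the star's hour angle (all in degrees). Standard spherical formula:
    sin(alt) = sin(dec)*sin(lat) + cos(dec)*cos(lat)*cos(H)."""
    d, p, h = map(math.radians, (dec_deg, lat_deg, hour_angle_deg))
    s = math.sin(d) * math.sin(p) + math.cos(d) * math.cos(p) * math.cos(h)
    return math.degrees(math.asin(s))

# A star on the celestial equator (dec = 0) crossing the meridian (H = 0),
# seen from 40 degrees north latitude, culminates at 50 degrees altitude:
print(round(altitude(0, 40, 0), 1))  # 50.0
```

An equatorial mount bakes this trigonometry into its hardware by tilting one axis parallel to the Earth's; an alt-azimuth "go-to" mount must instead evaluate formulas like this many times per second.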


An eyepiece is the second lens in a refractor, or the only lens in a reflector. Eyepieces come in many optical designs, and consist of one or more lenses in combination, functioning almost like mini-telescopes. The purposes of the eyepiece are to produce the telescope's magnification (and allow it to be changed), produce a sharp image, provide comfortable eye relief (the distance between the eye and the eyepiece when the image is in focus), and determine the telescope's field of view.

Field of view comes in two forms. The "apparent" field is how much of the sky, in degrees, is seen edge-to-edge through the eyepiece alone (this figure is specified on the eyepiece). The "true" (or real) field is how much of the sky can actually be seen when that eyepiece is placed in the telescope (true field = apparent field / magnification).
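These two relationships (magnification equals the telescope's focal length divided by the eyepiece's, and true field equals apparent field divided by magnification) are easy to compute. A sketch with made-up but typical values:

```python
def magnification(scope_focal_mm, eyepiece_focal_mm):
    """Magnification = telescope focal length / eyepiece focal length."""
    return scope_focal_mm / eyepiece_focal_mm

def true_field(apparent_field_deg, mag):
    """True field of view = apparent field / magnification."""
    return apparent_field_deg / mag

m = magnification(1000, 25)   # hypothetical 1000 mm scope with a 25 mm eyepiece
print(m)                      # 40x
print(true_field(50, m))      # a 50-degree apparent field shows 1.25 degrees of sky
```

Swapping in a shorter eyepiece raises the magnification and, for the same apparent field, shrinks the patch of sky you see.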

There are many types of eyepiece designs: Huygens, Ramsden, Orthoscopic, Kellner and RKE, Erfle, Plossl, Nagler, and Barlow (used in combination with another eyepiece to increase magnification 2 to 3 times). All eyepieces have problems and are designed to fit specific telescopes.

Eyepieces with illuminated reticules are used exclusively for astrophotography. They aid in guiding the telescope to track an object during a film exposure, which can take anywhere from 10 minutes to an hour.

Other Components

Finders are devices used to help aim the telescope at its target, similar to the sights on a rifle. Finders come in three basic types. Peep sights are notches or circles that allow alignment with the target. Reflex sights use a mirror box that shows the sky and marks the target with a red LED spot, similar to a laser sight on a gun. A telescope sight is a small, low-magnification (5x to 10x) telescope mounted on the side with a crosshair reticule, like a telescopic sight on a rifle.

Filters are pieces of glass or plastic placed in the barrel of an eyepiece to restrict the wavelengths of light that come through in the image. Filters are used to enhance the viewing of faint sky objects in light-polluted skies, enhance the contrast of fine features and details on the moon and planets, and safely view the sun. The filter screws into the barrel of the eyepiece.

Another add-on component is a dew shield, which prevents moisture condensation. For taking photographs, conventional lens and film cameras or CCD devices/digital cameras are used. Some astronomers use telescopes to make scientific measurements with photometers (devices to measure the intensity of light) or spectroscopes (devices to measure the wavelengths and intensities of light from an object).

Visualizing the Universe: Medical Imaging

Nuclear Medicine



Radiation therapy for cancer and PET scans fall in the realm of nuclear medicine. Nuclear medicine uses radioactive substances to image the body and treat disease. Nuclear medicine looks at both the physiology (functioning) and the anatomy of the body in establishing diagnosis and treatment.

The techniques combine the use of computers, detectors, and radioactive substances. Techniques include Positron Emission Tomography (PET), Single Photon Emission Computed Tomography (SPECT), cardiovascular imaging, and bone scanning. These techniques can detect tumors, aneurysms (weak spots in blood vessel walls), bad blood flow to various tissues, blood cell disorders, dysfunctional organs, and other diseases and ailments.

Positron Emission Tomography (PET)

PET produces images of the body by detecting the radiation emitted from radioactive substances. These substances are injected into the body, and are usually tagged with a radioactive atom, such as Carbon-11, Fluorine-18, Oxygen-15, or Nitrogen-13, that has a short decay time.

These radioactive atoms are formed by bombarding stable atoms with charged particles in a particle accelerator, creating short-lived radioactive isotopes. PET detects the pair of gamma rays given off at the site where a positron emitted from the radioactive substance collides with an electron in the tissue.

In a PET scan, the patient is injected with a radioactive substance and placed on a flat table that moves in increments through a donut-shaped housing, similar to a CAT scan. This housing contains the circular gamma ray detector array, which has a series of scintillation crystals, each connected to a photomultiplier tube. The crystals convert the gamma rays, emitted from the patient, to photons of light, and the photomultiplier tubes convert and amplify the photons to electrical signals. These electrical signals are then processed by the computer to generate images.

Again, like CAT scans, the table moves and the process is repeated, resulting in a series of thin slice images of the body. The images are assembled into a 3-D model. PET provides images of blood flow or other biochemical functions, depending upon the type of molecule that is radioactively tagged. PET scans can show images of glucose metabolism in the brain or rapid changes in activity in various areas of the body. There are few PET centers because they must be located near a particle accelerator device that produces the short-lived radioisotopes used in the technique.
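The "short decay time" mentioned above is the practical constraint behind that last point. A sketch of the exponential-decay arithmetic (the 110-minute half-life for Fluorine-18 is an approximate figure supplied here, not from the text):

```python
def fraction_remaining(minutes, half_life_min):
    """Fraction of a radioisotope left after a given time (exponential decay)."""
    return 0.5 ** (minutes / half_life_min)

# Fluorine-18's half-life is roughly 110 minutes, so a tracer loses
# three-quarters of its activity in under four hours of transport:
print(fraction_remaining(220, 110))  # two half-lives -> 0.25
```

This is why PET tracers must be produced close to where they are injected, and why SPECT's longer-lived isotopes (discussed next) relax that requirement.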

Single Photon Emission Computed Tomography (SPECT)

SPECT is similar to PET, but the radioactive substances used in SPECT (Xenon-133, Technetium-99m, Iodine-123) have longer decay times and emit single rather than paired gamma rays. SPECT can provide information about blood flow and the distribution of radioactive substances in the body. Its images are less sensitive and less detailed than PET images, but SPECT is cheaper and does not have to be located near a particle accelerator.

Cardiovascular Imaging

Cardiovascular imaging techniques use radioactive substances to chart the flow of blood through the heart and blood vessels. One example of a cardiovascular imaging technique is a stress thallium test, in which the patient is injected with a radioactive thallium compound, exercised on a treadmill, and imaged with a gamma ray camera. After a period of rest, the study is repeated without the exercise. The images before and after exercising are compared to reveal changes in blood flow and are useful in detecting blocked arteries and other anomalies.



Bone Scanning

Bone scanning detects radiation from a radioactive substance (technetium-99m methylene diphosphonate) that, when injected into the body, collects in bone tissue. Bone tissue is good at accumulating phosphorus compounds. The substance accumulates in areas of high metabolic activity, so the image shows "bright spots" of high activity and "dark spots" of low activity. Bone scanning is useful for detecting tumors, which generally have high metabolic activity.

Magnetic Resonance Imaging (MRI)

In 1977, the first MRI exam ever performed on a human being took place. It took almost five hours to produce one image, and the image quality was poor. The machine that performed the exam is now in the Smithsonian Museum. By the early 1980s, there were a handful of MRI scanners. Now there are thousands, with images produced in seconds, not hours.

The basic design used in most MRI systems is a giant cube; a typical system might be 7 feet tall by 7 feet wide by 10 feet long, though newer models are getting smaller. A horizontal tube, known as the bore of the magnet, runs through the magnet from front to back. The patient slides into the bore on a special table. Once the body or body part to be scanned is at the exact center, or isocenter, of the magnetic field, the scan begins.

In conjunction with radio wave pulses of energy, the MRI scanner can pick out a very small point inside the patient’s body and determine tissue type. The MRI system goes through the patient’s body point by point, building up a 2-D or 3-D map of tissue types. It then integrates all of this information together to create 2-D images or 3-D models.

MRI provides an unparalleled view inside the human body. The level of detail is extraordinary compared with any other imaging technique. MRI is the method of choice for the diagnosis of many types of injuries and conditions because of the incredible ability to tailor the exam to the particular medical question being asked. MRI systems can also image flowing blood in any part of the body.



The MRI machine applies an RF (radio frequency) pulse that is specific only to hydrogen. The system directs the pulse toward the area of the body being examined. The pulse causes the protons in that area to absorb the energy required, making them spin in a different direction. This is the “resonance” part of MRI.

The RF pulse forces the protons to spin at a particular frequency, in a particular direction. When the RF pulse is turned off, the hydrogen protons begin to slowly (relatively speaking) return to their natural alignment within the magnetic field and release their excess stored energy. When they do this, they give off a signal that the coil now picks up and sends to the computer system. What the system receives is mathematical data that is converted into a picture that can be put on film. This is the “imaging” part of MRI.
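The reason the RF pulse is "specific only to hydrogen" is that protons resonate at a frequency proportional to the magnetic field strength (the Larmor frequency). A sketch of that relationship, using hydrogen's well-known gyromagnetic ratio of about 42.58 MHz per tesla (a standard physical constant, supplied here rather than taken from the text):

```python
GYROMAGNETIC_H_MHZ_PER_T = 42.58  # hydrogen's gyromagnetic ratio, approximate

def larmor_mhz(field_tesla):
    """Resonant (Larmor) frequency of hydrogen protons in a given field."""
    return GYROMAGNETIC_H_MHZ_PER_T * field_tesla

# A typical 1.5-tesla clinical magnet resonates hydrogen at about 64 MHz:
print(round(larmor_mhz(1.5), 1))
```

Tuning the RF pulse to exactly this frequency is what makes the energy transfer selective: nuclei of other elements, spinning at different rates, simply ignore it.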

Most imaging techniques use injectable contrast agents, or dyes, for certain procedures, and MRI is no exception. In X-ray imaging, contrast agents work by blocking X-ray photons from passing through the area where they are located and reaching the film, which results in differing levels of density on the X-ray film. The dyes have no physiologic impact on the tissue in the body.

MRI contrast works by altering the local magnetic field in the tissue being examined. Normal and abnormal tissue will respond differently to this slight alteration, giving differing signals. These varied signals are converted into images, allowing the visualization of many different types of tissue abnormalities and disease processes.

Before MRI and other imaging techniques, the only way to see inside the body was to cut it open. MRI is used for a variety of diagnoses, such as multiple sclerosis, tumors, infections in the brain, spine or joints, seeing torn ligaments, tendonitis, cysts, herniated discs, strokes, and much more. MRI systems do not use ionizing radiation or contrast materials that produce side effects.

Another major advantage of MRI is its ability to image in any plane or cross-section, with a very low incidence of side effects. The patient doesn't have to be repositioned, as is required in X-ray analysis; the magnets in the MRI system control exactly where in the body images are taken.

Some people are too big to fit into an MRI scanner. Pacemakers prevent MRI analysis as well. MRI machines make a lot of noise and can be claustrophobic. Patients don’t have to move, but they do have to lie very still for long periods of time. The slightest movement can cause distorted images. Artificial joints and other metallic devices in the body can cause distorted images. The machines are very expensive and so are the exams.

Very small scanners for imaging specific body parts are being developed. Another development is functional brain mapping, in which a person's brain is scanned while performing a physical task. New research will image the ventilation dynamics of the lungs and produce new ways to image strokes.

Computerized Axial Tomography (CAT Scans)

Computerized axial tomography (CAT) scan machines produce X-rays. X-ray photons are basically the same thing as visible light photons, but have much more energy. This higher energy level allows X-ray beams to pass straight through most of the soft material in the human body.
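How much of an X-ray beam survives the trip through the body follows the Beer-Lambert attenuation law. A sketch with illustrative (not clinical) attenuation coefficients, chosen only to show that denser tissue transmits less:

```python
import math

def transmitted_fraction(mu_per_cm, thickness_cm):
    """Beer-Lambert law: fraction of X-ray photons passing through a material
    with linear attenuation coefficient mu over the given thickness."""
    return math.exp(-mu_per_cm * thickness_cm)

# Hypothetical coefficients: bone attenuates more strongly than soft tissue.
soft = transmitted_fraction(0.2, 10)   # ~10 cm path through soft tissue
bone = transmitted_fraction(0.5, 10)   # same path length through denser bone
print(soft > bone)  # True: bone stops more photons, casting darker shadows
```

It is this difference in transmitted fraction, measured from hundreds of angles, that a CAT scanner's computer inverts to reconstruct a slice.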

A conventional X-ray image is basically a shadow: the machine shines its beam on one side of the body, and film on the other side captures the silhouette of the bones. Shadows provide an incomplete picture of an object's shape. If a larger bone lies directly between the X-ray machine and a smaller bone, the larger bone may cover the smaller bone on the film; to see the smaller bone, the patient has to turn.

In a CAT scan machine, the X-ray beam moves all around the patient, scanning from hundreds of different angles. The computer takes all this information and puts together a 3-D image of the body. A CAT machine looks like a giant donut tipped on its side. The patient lies down on a platform, which slowly moves through the hole in the machine.

The X-ray tube is mounted on a movable ring around the edges of the hole. The ring supports an array of X-ray detectors directly opposite the X-ray tube. A motor turns the ring so that the X-ray tube and the X-ray detectors revolve around the body. In another design, the tube remains stationary and the X-ray beam is bounced off a revolving reflector.

Each full revolution scans a narrow, horizontal “slice” of the body. The control system moves the platform farther into the hole so the tube and detectors can scan the next slice. The machine records X-ray slices across the body in a spiral motion. The computer varies the intensity of the X-rays in order to scan each type of tissue with the optimum power.

After the patient passes through the machine, the computer combines all the information from each scan to form a detailed image of the body. Usually only part of the body is scanned. Doctors usually operate CAT scan machines from a separate room so they aren’t repeatedly exposed to radiation. Since they examine the body slice by slice, from all angles, CAT scans are much more comprehensive than conventional X-rays. CAT scans are used to diagnose and treat a wide variety of ailments, including head trauma, cancer and osteoporosis.


Visualizing the Universe: Electron Microscopes

Under an electron microscope, the infinitesimal begins to look like sweeping geographical landscapes. Blood clots look like UFOs caught in an extraterrestrial traffic jam. Micro-minerals give the appearance of vast landscapes dotted with buttes and canyons. Synthetic kidney stone crystals look like falling snowflakes. The shells of microscopic plants stand out like Christmas tree ornaments. Nylon looks like a plate of spaghetti. Bugs look like monsters.


Electron Microscope

The world of a grain of sand was once as far as the human eye could go. Now, using electron microscopes, a grain of sand is like the universe, filled with untold galaxies, planetary systems and maybe even a few black holes. At the organic level, humans are learning how Mother Nature builds life, one atom at a time.

Conventional microscopes use particles of light, or photons, to look directly at small objects, employing glass lenses to magnify things several thousand times. The scanning electron microscope (SEM) opens the door to an even tinier level by using electrons, whose wavelengths are far shorter than those of visible light.

The process is the same for all electron microscopes, where a stream of electrons is formed (by the Electron Source) and accelerated toward the specimen using a positive electrical potential. This stream is confined and focused using metal apertures and magnetic lenses into a thin, focused, monochromatic beam. This beam is focused onto the sample using a magnetic lens. Interactions occur inside the irradiated sample, affecting the electron beam. These interactions and effects are detected and transformed into an image.

Electron microscopes provide morphological, compositional and crystallographic information at the atomic level (nanometers). Topography is the surface features of an object or “how it looks.” Texture is the direct relation between these features and materials properties (hardness, reflectivity). Morphology is the shape, size and relationship of the particles making up the object (ductility, strength, reactivity).

Composition explains the relative amounts of elements and compounds that the object is composed of (melting point, reactivity, hardness). Crystallographic Information determines how the atoms are arranged in the object and their relationships with other properties (conductivity, electrical properties, strength).

To create the images, a filament inside an electron “gun” shoots a stream of electrons through a stack of electromagnetic lenses, which focus the electrons into a beam. The beam is directed to a fine point on the specimen, and scans across it rapidly. The sample responds by emitting electrons that are picked up by a detector inside the sample chamber, beginning an electronic process that results in an image that can be displayed on a TV screen.

The Transmission Electron Microscope (TEM), developed by Max Knoll and Ernst Ruska in Germany in 1931, was the first type of electron microscope. It is patterned exactly on the light transmission microscope, except that a focused beam of electrons is used instead of light to "see through" the specimen. A TEM works much like a slide projector: the projector shines a beam of light through (transmits) the slide, and as the light passes through, it is affected by the structures and objects on the slide, so only certain parts of the beam are transmitted through certain parts of the slide. This transmitted beam is then projected onto the viewing screen, forming an enlarged image of the slide.

The first Scanning Electron Microscope (SEM) appeared in 1942, with the first commercial instruments arriving around 1965.


Nanostructure via a Scanning Microscope

Scanning Electron Microscopes (SEM) are patterned after reflecting light microscopes and yield information similar to that of TEMs. Unlike the TEM, where electrons are detected after transmission through the sample, the SEM produces images by detecting secondary electrons emitted from the surface due to excitation by the primary electron beam. In the SEM, the electron beam is rastered across the sample, and detectors build up an image by mapping the detected signals to beam position.
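The raster-scan idea is simple enough to sketch in a few lines: step a beam across a grid, record the detector signal at each position, and assemble the readings into an image. Everything below (the toy "sample" grid and the stand-in detector) is illustrative, not an instrument model:

```python
def raster_scan(sample, detector):
    """Build an image by visiting every (x, y) beam position in raster order
    and recording the detector's signal there -- the essence of SEM imaging."""
    return [[detector(sample, x, y) for x in range(len(sample[0]))]
            for y in range(len(sample))]

# Toy sample: a height map. Toy detector: secondary-electron yield
# simply equals the surface height at the beam position.
sample = [[0, 1, 2],
          [1, 2, 3]]
image = raster_scan(sample, lambda s, x, y: s[y][x])
print(image)  # [[0, 1, 2], [1, 2, 3]]
```

In a real SEM the "detector" function is physics (secondary-electron emission varying with topography and composition), but the image-formation loop is exactly this mapping of signal to beam position.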

Scientists have used the SEM to identify micro-plankton in ocean sediments, fossilized remains found in underwater canyons, the structure of earthquake-induced micro-fractures in rocks and micro-minerals, the microstructure of wires, dental implants, cells damaged from infectious diseases, and even the teeth of microscopic prehistoric creatures.

There are other types of electron microscopes. A Scanning Transmission Electron Microscope (STEM) is a specific sort of TEM, where the electrons still pass through the specimen, but, as in SEM, the sample is scanned in a raster fashion. A Reflection Electron Microscope (REM), like the TEM, uses a technique involving electron beams incident on a surface, but instead of using the transmission (TEM) or secondary electrons (SEM), the reflected beam is detected.

Near-field scanning optical microscopy (NSOM) is a type of microscopy where a sub-wavelength light source is used as a scanning probe. The probe is scanned over a surface at a height above the surface of a few nanometers.

A Scanning Tunneling Microscope (STM) can be considered a type of electron microscope, but it is really a form of scanning probe microscopy and is non-optical. The STM employs principles of quantum mechanics to determine the height of a surface. An atomically sharp probe (the tip) is moved over the surface of the material under study, and a voltage is applied between the probe and the surface.

Depending on the voltage, electrons will tunnel, or jump, from the tip to the surface (or vice versa, depending on the polarity), resulting in a weak electric current. The size of this current is exponentially dependent on the distance between the probe and the surface. The STM was invented by scientists at IBM's Zurich Research Laboratory. It could image some types of individual atoms on electrically conducting surfaces, and for this its inventors won a Nobel Prize.
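That exponential dependence is what gives the STM its extraordinary height sensitivity. A minimal sketch, assuming an illustrative decay constant (the value of kappa below is a stand-in, not a measured figure):

```python
import math

def tunneling_current(distance_nm, kappa_per_nm=10.0, i0=1.0):
    """Tunneling current falls off exponentially with tip-surface distance:
    I ~ I0 * exp(-2 * kappa * d). kappa here is an assumed decay constant."""
    return i0 * math.exp(-2 * kappa_per_nm * distance_nm)

# With these assumed numbers, retracting the tip by just 0.1 nm
# (roughly one atomic diameter) cuts the current by a factor of e^2:
ratio = tunneling_current(0.1) / tunneling_current(0.2)
print(round(ratio, 1))  # about 7.4
```

Because a sub-atomic change in gap produces an order-of-magnitude-scale change in current, a feedback loop holding the current constant traces the surface atom by atom.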