Visualizing the Universe: Virtual Reality

Virtual reality holds tremendous promise for the future, and it’s progressing on numerous fronts. Movies like the remake of TRON, the Matrix series, Minority Report, and Avatar have given us a glimpse of what could be, and scientists and technicians in both the public and private sectors are racing to make the virtual indistinguishable from reality to all five senses.

The Electronic Visualization Laboratory (EVL) at the University of Illinois at Chicago is one of the best-known virtual reality development institutes in the world, and the inventor of the CAVE (CAVE Automatic Virtual Environment) and its second iteration, CAVE2.

Virtual reality, simply put, is a three-dimensional, computer-generated simulation that one can navigate and interact with, becoming immersed in another environment.

Douglas Engelbart, an electrical engineer and former naval radar technician, is credited with the first exploration into virtual reality. He viewed computers as more than glorified adding machines. It was the 1950s, and TV had barely turned to color. His goal was to connect the computer to a screen.

By the early 1960s, the intersection of communications technology, computing, and graphics was well underway. Vacuum tubes gave way to transistors. Pinball machines were being replaced by video games.

Scientific visualization moved from bar charts, mathematical diagrams, and line drawings to dynamic images generated with computer graphics. Computerized scientific visualization enabled scientists to assimilate huge amounts of data and increase their understanding of complex processes like DNA sequences, molecular models, brain maps, fluid flows, and celestial events. A goal of scientific visualization is to capture the dynamic qualities of a wide range of systems and processes in images, but early computer graphics and animation were not interactive. Animation, despite its moving pictures, was static in the sense that once created, it couldn’t be altered. Interactivity became the primary driver in the development of virtual reality.

By the end of the 1980s, supercomputers and high-resolution graphics workstations were paving the way toward a more interactive means of visualization. As computer technology developed, MIT and other high-tech research centers began exploring Human Computer Interaction (HCI), which is still a major area of research, now combined with artificial intelligence.

The mouse seemed clumsy, and devices such as light pens and touch screens were explored as alternatives. Eventually CAD (computer-aided design) programs emerged, enabling designers to model and simulate the inner workings of vehicles, create blueprints for city development, and experiment with computerized blueprints for a wide range of industrial products.

Flight simulators, the predecessors of these computerized programs and models, might be considered the first virtual reality-like environments. The early flight simulators consisted of mock cockpits built on motion platforms that pitched and rolled. Their main limitation was a lack of visual feedback. This changed when video displays were coupled with model cockpits.

In 1979, the military began experimenting with head-mounted displays. By the early 1980s, better software, hardware, and motion-control platforms enabled pilots to navigate through highly detailed virtual worlds.

A natural consumer of computer graphics was the entertainment industry, which, like the military and industry, was the source of many valuable spin-offs in virtual reality. By the 1970s, some of Hollywood’s most dazzling special effects were computer-generated. Plus, the video game business boomed.

One direct spin-off of entertainment’s venture into computer graphics was the dataglove, a computer interface device that detects hand movements. It was invented to produce music by linking hand gestures to a music synthesizer. NASA was one of the first customers for the new device. The biggest consumer of the dataglove was the Mattel company, which adapted it into the Power Glove and used it in video games for kids. The glove is no longer sold.

Helmet-mounted displays and power gloves combined with 3D graphics and sounds hinted at the potential for experiencing totally immersive environments. There were practical applications as well. Astronauts, wearing goggles and gloves, could manipulate robotic rovers on the surface of Mars. Of course, some people might not consider sending a person to Mars a practical endeavor. But at least the astronaut could explore dangerous terrain without risk of getting hurt.

The virtual reality laboratory at NASA’s Johnson Space Center helped astronauts Steve Bowen and Al Drew train for two spacewalks they conducted in 2011.

Virtual reality is not a technological marvel that is engaged as passively as sitting in a movie theater or in front of a TV. Human factors are crucial to VR: age, gender, health and fitness, peripheral vision, and posture all come into play. Everyone perceives reality differently, and the same is true for VR.

The concept of a room with graphics projected from behind the walls was invented at the Electronic Visualization Lab at the University of Illinois at Chicago in 1992. The images on the walls were in stereo to give a depth cue. The main advantage over ordinary graphics systems is that the users are surrounded by the projected images, which means that the images are in the users’ main field of vision. This environment was dubbed the CAVE (CAVE Automatic Virtual Environment). In 2012 the dramatically improved environment, CAVE2, was launched.

CAVE2 is the world’s first near-seamless flat-panel-based, surround-screen immersive system. Unique to CAVE2 is that it enables users to view 2D and 3D information simultaneously, providing more flexibility for mixed-media applications. CAVE2 is a cylindrical system 24 feet in diameter and 8 feet tall, and consists of 72 near-seamless, off-axis-optimized passive stereo LCD panels, creating an approximately 320-degree panoramic environment for displaying information at 37 megapixels in stereoscopic 3D.

The original CAVE was designed from the beginning to be a useful tool for scientific visualization. CAVE2 can be coupled to remote data sources, supercomputers, and scientific instruments via dedicated, super-high-speed fiber networks. Various CAVE-like environments exist all over the world today. Imagery across all 72 panels allows users to turn around and look in all directions, so their perception and experience are never limited, which is necessary for full immersion.

Any quick review of the history of optics, photography, computer graphics, media, broadcasting, and even sci-fi is enough to believe virtual reality will become as commonplace as TV and movies. There are far too many practical applications, such as surgery, flight simulation, space exploration, chemical engineering, and underwater exploration.

And these are just the immediate applications, assuming we only receive virtual reality data via our external senses. Once the information is transmitted via electrical impulses directly to the brain, not only will the virtual be indistinguishable from reality, it will be a mere option to stop there. We’ll be able to stimulate the brain in ways nature never could, producing never-before-experienced sensations or even entirely new senses. New emotions, ranges, and combinations of the visceral and ethereal.

Maybe it will be Hollywood that stops speculating and starts experimenting. The thought of being chased by Freddy Krueger is one thing, but to actually be chased by Freddy Krueger is utterly terrifying. No more jumping out of our seats when the face of a giant shark snaps its teeth at us. Now we can really know what it’s like to be chased by cops speeding down a thruway at 100 mph. We can feel and smell pineapples on a tropical beach. We can catch bad guys, defeat aliens in a starship battle, and have conversations with Presidents in our bare feet. We can simulate what it would feel like to soar across the galaxy as a stream of subatomic particles.

With virtual reality, the only limit is the imagination.

Visualizing the Universe: Scientific Visualization

Visualization in its broadest terms represents any technique for creating images to represent abstract data. Scientific visualization has grown to encompass many other areas like business (information visualization), computing (process visualization), medicine, chemical engineering, flight simulation, and architecture. In fact, there’s hardly an area of human endeavor that does not fall under scientific visualization in one form or another.

From a crude perspective, scientific visualization was born out of the conversion of text into graphics: describing an apple with words, for instance. Bar graphs, charts, and diagrams were the two-dimensional forerunners in converting data into a visual representation. Obviously, words and two-dimensional representations can only go so far, and more mathematically accurate datasets were needed to describe an object’s exterior, interior, and functioning processes.

Early Scientific Visualization: Charles Minard’s 1869 chart showing the number of men in Napoleon’s 1812 Russian campaign army, their movements, as well as the temperature they encountered on the return path.

Such datasets were huge, and it wasn’t until the development of supercomputers with immense processing power combined with sophisticated digital graphics workstations that conversion from data into a more dynamic, 3-D graphical representation was possible. From the early days of computer graphics, users saw the potential of computer visualization to investigate and explain physical phenomena and processes, from repairing space vehicles to chaining molecules together.

In general the term “scientific visualization” is used to refer to any technique involving the transformation of data into visual information. It characterizes the technology of using computer graphics techniques to explore results from numerical analysis and extract meaning from complex, mostly multi-dimensional data sets.

Traditionally, the visualization process consists of filtering raw data to select a desired resolution and region of interest, mapping that result into a graphical form, and producing an image, animation, or other visual product. The result is evaluated, the visualization parameters modified, and the process run again.

Three-dimensional imaging of medical datasets was introduced after clinical CT (computed tomography) scanning became a reality in the 1970s. A CT scan produces images of the interior of an object by obtaining a series of two-dimensional x-ray axial images.

The individual x-ray axial slice images are taken using an x-ray tube that rotates around the object, taking many scans as the object gradually passes through the scanner. The multiple scans from each 360-degree sweep are then processed to produce a single cross-section. See MRI and CAT scanning in the Optics section.

The goal of the visualization process is to generate visually understandable images from abstract data. Several steps must be performed during the generation process. These steps are arranged in the so-called visualization pipeline.

Visualization Methods

Data is obtained either by sampling or measuring, or by executing a computational model. Filtering is a step which pre-processes the raw data and extracts information which is to be used in the mapping step. Filtering includes operations like interpolating missing data, or reducing the amount of data. It can also involve smoothing the data and removing errors from the data set.

Mapping is the core of the visualization process. It transforms the pre-processed, filtered data into 2D or 3D geometric primitives with appropriate attributes like color or opacity. The mapping process is very important for the later visual representation of the data. Rendering then uses the geometric primitives from the mapping step to generate the output image. There are a number of different filtering, mapping, and rendering methods used in the visualization process.
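
To make the pipeline concrete, here is a minimal sketch in Python, assuming NumPy and Matplotlib are available. The synthetic dataset, the subsample-and-average filter, and the colormap are illustrative choices, not any particular system’s method.

    import numpy as np
    import matplotlib.pyplot as plt

    def acquire():
        # Data acquisition: a stand-in for sampling, measuring, or a model run.
        x, y = np.meshgrid(np.linspace(-3, 3, 400), np.linspace(-3, 3, 400))
        return np.sin(x**2 + y**2)

    def filter_step(raw, stride=2):
        # Filtering: subsample to reduce data volume, then lightly smooth.
        d = raw[::stride, ::stride]
        return (d + np.roll(d, 1, axis=0) + np.roll(d, 1, axis=1)) / 3.0

    def map_step(data):
        # Mapping: transform values into graphical attributes (here, colors).
        normalized = (data - data.min()) / (data.max() - data.min())
        return plt.cm.viridis(normalized)

    def render_step(rgba):
        # Rendering: produce the final image from the mapped data.
        plt.imshow(rgba)
        plt.axis("off")
        plt.show()

    render_step(map_step(filter_step(acquire())))

In practice the result would be evaluated, the parameters of each step adjusted, and the pipeline run again, as described above.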

Some of the earliest medical visualizations created 3D representations from CT scans with help from electron microscopy. The images were geometric shapes like polygons and lines forming a wireframe that represented three-dimensional volumetric objects. Similar techniques are used in creating animation for Hollywood films. With sophisticated rendering capability, motion could be added to the wireframe model, illustrating processes such as blood flow, or fluid dynamics in chemical and physical engineering.

The development of integrated software environments took visualization to new levels. Some of the systems developed during the 1980s include IBM’s Data Explorer, Ohio State University’s apE, Wavefront’s Advanced Visualizer, SGI’s IRIS Explorer, Stardent’s AVS, Wavefront’s Data Visualizer, Khoros (University of New Mexico), and PV-WAVE (Precision Visuals’ Workstation Analysis and Visualization Environment).

Crude 2005 Simulation – Typhoon Mawar

These visualization systems were designed to help scientists, who often knew little about how graphics are generated. The most usable systems provided a visual interface. Software modules were developed independently, with standardized inputs and outputs, and were visually linked together in a pipeline. These interface systems are sometimes called modular visualization environments (MVEs).

MVEs allowed the user to create visualizations by selecting program modules from a library and specifying the flow of data between modules using an interactive graphical networking or mapping environment. Maps or networks could be saved for later recall.

General classes of modules, sketched in code after this list, included:

•  data readers – input the data from the data source
•  data filters – convert the data from a simulation or other source into another form which is more informative or less voluminous
•  data mappers – convert information into another domain, such as 2D or 3D geometry or sound
•  viewers or renderers – render the 2D and 3D data as images
•  control structures – display devices, recording devices, open graphics windows
•  data writers – output the original or filtered data
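
A minimal sketch of how such modules might be wired into a dataflow network follows; the module names, the data file, and the wiring function are invented for illustration and do not correspond to any particular MVE.

    def data_reader(path):
        # Data reader: input one number per line from the data source.
        with open(path) as f:
            return [float(line) for line in f]

    def data_filter(values):
        # Data filter: reduce the data to the informative part.
        return [v for v in values if v > 0.0]

    def data_mapper(values):
        # Data mapper: convert values into simple 2D geometry (x, y pairs).
        return [(i, v) for i, v in enumerate(values)]

    def renderer(points):
        # Viewer/renderer: "render" each point as a row of asterisks.
        for _, v in points:
            print("*" * max(1, int(v)))

    def run_network(source, modules):
        # Wire the modules into a pipeline, as an MVE's visual editor would.
        data = source
        for module in modules:
            data = module(data)
        return data

    run_network("samples.txt", [data_reader, data_filter, data_mapper, renderer])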

MVEs required no graphics expertise, allowed for rapid prototyping and interactive modifications, promoted code reuse, allowed new modules to be created and allowed computations to be distributed across machines, networks and platforms.

Earlier systems were not always good performers, especially on larger datasets, and image quality was often poor.

Newer visualization systems came out of the commercial animation software industry. The Wavefront Advanced Visualizer was a modeling, animation and rendering package which provided an environment for interactive construction of models, camera motion, rendering and animation without any programming. The user could use many supplied modeling primitives and model deformations, create surface properties, adjust lighting, create and preview model and camera motions, do high quality rendering, and save images to video tape.

Acquiring data is accomplished in a variety of ways: CT scans, MRI scans, ultrasound, confocal microscopy, computational fluid dynamics, and remote sensing. Remote sensing involves gathering data and information about the physical “world” by detecting and measuring phenomena such as radiation, particles, and fields associated with objects located beyond the immediate vicinity of the sensing device. It is most often used to acquire and interpret geospatial data for features, objects, and classes on the Earth’s land surface, oceans, and atmosphere, and in outer space for mapping the exteriors of planets, stars, and galaxies. Data is also obtained via aerial photography, spectroscopy, radar, radiometry, and other sensor technologies.

Another major approach to 3D visualization is volume rendering. Volume rendering allows the display of information throughout a 3D data set, not just on the surface. Pixar, a spin-off from George Lucas’s Industrial Light & Magic (ILM), created a volume rendering method, or algorithm, that used independent 3D cells within the volume, called “voxels”.

The volume is composed of voxels, each holding a value for the same property, such as density. A surface occurs between groups of voxels with two different values. The algorithm used color and intensity values from the original scans, and gradients obtained from the density values, to compute the 3D solid. Other approaches include ray-tracing and splatting.
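
The flavor of a voxel-based approach can be sketched as follows; the simple front-to-back compositing along one axis, and the emission and opacity choices, are illustrative assumptions rather than Pixar’s actual algorithm.

    import numpy as np

    def render_volume(density, opacity_scale=0.1):
        # density: 3D array of voxel values in [0, 1]; view rays run along axis 0.
        depth, height, width = density.shape
        image = np.zeros((height, width))
        transmittance = np.ones((height, width))
        for z in range(depth):
            # Front-to-back compositing: each voxel layer absorbs and emits light.
            alpha = np.clip(density[z] * opacity_scale, 0.0, 1.0)
            image += transmittance * alpha * density[z]
            transmittance *= 1.0 - alpha
        return image

    # Example: a fuzzy ball of density in a 64x64x64 volume.
    z, y, x = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
    ball = np.exp(-8 * (x**2 + y**2 + z**2))
    image = render_volume(ball)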

Scientific visualization draws from many disciplines such as computer graphics, image processing, art, graphic design, human-computer interaction (HCI), cognition, and perception. The fine arts are extremely useful to scientific visualization; art history can help in gaining insights into visual form, as well as in imagining scenarios that have little or no data behind them. Another important development was the LCD screen, which helped tie it all together, bringing visual graphics to life with better resolution, lighter weight, and faster display of data than the computer monitors of the past.

Computer simulations have become a useful part of modeling natural systems in physics, chemistry, and biology, human systems in economics and social science, and engineering new technology. Simulations render mathematical models into visual representations that are easier to understand. Computer models can be classified as stochastic or deterministic.

Stochastic models use random number generators to model chance or random events, such as genetic drift. A discrete event (DE) simulation manages events in time; most simulations are of this type. A continuous simulation uses differential equations (either partial or ordinary), implemented numerically. The simulation program solves all the equations periodically and uses the numbers to change the state and output of the simulation. Most flight and racing-car simulations are of this type, as are simulated electrical circuits.
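
As a minimal sketch of a continuous simulation, the snippet below numerically integrates the ordinary differential equation for a charging RC circuit using simple Euler steps; the circuit values and step sizes are illustrative choices.

    def simulate_rc(v_in=5.0, r=1000.0, c=1e-6, dt=1e-5, steps=1000):
        # State of the simulation: the capacitor voltage.
        v = 0.0
        history = []
        for _ in range(steps):
            dv_dt = (v_in - v) / (r * c)  # dV/dt from the circuit's ODE
            v += dv_dt * dt               # advance the state one time step
            history.append(v)
        return history

    trace = simulate_rc()
    print(f"final voltage: {trace[-1]:.3f} V")  # approaches the 5 V supply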

Other methods include agent-based simulation. In agent-based simulation, the individual entities (such as molecules, cells, trees or consumers) in the model are represented directly (rather than by their density or concentration) and possess an internal state and set of behaviors or rules which determine how the agent’s state is updated from one time-step to the next.
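
A minimal sketch of the agent-based style in Python: each agent carries its own state and an update rule applied at every time step. The random-walk movement and contact-infection rule here are invented purely for illustration.

    import random

    class Agent:
        def __init__(self):
            self.x = random.uniform(0.0, 10.0)     # position state
            self.infected = random.random() < 0.1  # internal state

        def step(self, others):
            self.x += random.uniform(-1.0, 1.0)    # behavior: random walk
            # Rule: become infected when near an infected neighbor.
            if any(o.infected and abs(o.x - self.x) < 0.5 for o in others):
                self.infected = True

    agents = [Agent() for _ in range(100)]
    for _ in range(50):                            # advance the whole model
        for agent in agents:
            agent.step(agents)
    print(sum(a.infected for a in agents), "of 100 agents infected")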

Winter Simulation Conference

The Winter Simulation Conference is an important annual event covering leading-edge developments in simulation analysis and modeling methodology. Areas covered include agent-based modeling, business process reengineering, computer and communication systems, construction engineering and project management, education, healthcare, homeland security, logistics, transportation, distribution, manufacturing, military operations, risk analysis, virtual reality, web-enabled simulation, and the future of simulation. The WSC provides educational opportunity for both novices and experts.

Visualizing the Universe: In Review

It’s ironic that optics is not a more common subject or theme in Hollywood movies, considering that movies are in many ways the result of optics. There are a few exceptions. AI: Artificial Intelligence and Minority Report feature virtual reality. Numerous military movies draw attention to laser-equipped fighter jets and rifles with sophisticated scopes.

Close up shots and sound effects bring out the drama of these high tech devices. Of course, Star Wars made the laser popular for kids. The sound of camera shutters is another dramatic device used to heighten action in a story, especially where surveillance is involved in the plot, or a serial killer uses a camera to take photos of his victims.

Cameras are featured in many films, but usually as props for characters like journalists and cops when a crime scene is being photographed. Many characters wear glasses and sunglasses, which can play a pivotal role in characterization.

The prison guard in Cool Hand Luke wore mirrored sunglasses, dramatically emphasizing his cold demeanor when it came time to shoot a prisoner. Sylvester Stallone wore them in Cobra because, well, it made him look cool. In other movies, the audience follows the camera straight into an eye of a character. With the help of special effects, we journey straight into the brain and can see what a character sees inside their mind. Still, there aren’t a lot of movies about microscopes and telescopes.

Electron microscopy is a highly specialized field with applications and techniques dazzling in their sophistication. Science is kept out of the public eye, probably because no one understands it, except a select few.

Most people barely know what an electron is, let alone such things as apoptosis, intracellular signaling, pathology, anaphase A and anaphase B during mitosis, quantification and characterization of DNA in chloroplasts and mitochondria, characterization of nuclear structure and nuclear pore complexes, cytoskeletal organization in parasites, DNA repair, materials analysis of additives in weaponized microorganisms, or genomics.

A ton of new fields have sprung up in the last decade or so, either as a result of electron microscopy or the need for it. For instance, specialized branches in forensic analysis, chemical and biological weapon detection, lithography, nanomaterials and nanodevices, structure and chemistry of nanoparticles, nanotubes, nanobelt and nanowires, polymers, clean environments, crystallography, hydrocarbon catalysis, production and storage of energy, climate control and surface modifications for sensors, pollution and auto exhaust emission control, photocatalysis, biocatalysis, surface engineering, advanced fuel cells, alternative energy sources, and quantitative x-ray microanalysis of terrestrial and extraterrestrial materials.

As widespread as science is in our world, from medicine to auto design, from energy to building construction, it’s a secretive world requiring a big dictionary. Science invades everyday life, in fact, it created it. But we turn the lights off in our houses, hop in our cars and turn on our MP3 players without any awareness of how such processes, techniques and devices came into being. Einstein is just some gray-haired bearded genius who knew a lot of math.

Now we live in a world of language that includes nano-electronics, nano-photonics, micromechanical devices (MEMS) and Bio-MEMs, Cryo-Preparation, Cryo-Sectioning, and Cryo-Approaches Using TEM, SEM, Cryo-examination of frozen hydrated biological samples, Focused ion beam instruments, scanning transmission electron microscopy, electron energy loss spectroscopy, x-ray mapping, Low voltage microscopy, Scanning cathodoluminescence microscopy, and other terms and concepts that require an advanced degree just to learn how to pronounce them.

It must be difficult for those who do understand advanced science, not being able to sit down and chat with others as easily as “regular folk” discuss the latest football scores or Washington political scandals, so prevalent in the news.

From chemistry to biology, geology to math, science has never really been comfortable in the cultural mainstream. It’s ironic, since much of what we call culture, like movies, TV and music, is driven by advanced science. We take pictures with digital cameras without any concern for optics. We listen to CDs without any knowledge of lithography.

Electron microscopy is complex enough, but it’s even more complex when subdivided into High-Resolution Electron Microscopy (HREM), Analytical Electron Microscopy (AEM), Electron Energy-Loss Spectroscopy (EELS), Convergent Beam Electron Diffraction (CBED), Scanning Electron Microscopy (SEM), Low-voltage SEM, Variable Pressure SEM (VPSEM/ESEM), Electron Backscatter Diffraction (EBSD), X-ray Spectrometry, Quantitative X-ray Microanalysis, Spectral Imaging, X-ray Imaging, Diffraction and Spectroscopy, Crystallography, Scanned Probe Microscopy (SPM), Confocal Microscopy, Multi Photon Excitation Microscopy, Optical Fluorescence Microscopy, Infrared and Raman Microscopy and Microanalysis, Molecular Spectroscopy, and Cryogenic Techniques and Methods.

In the future, ordinary silicon chips will move data using light rather than electrons, unleashing nearly limitless bandwidth and revolutionizing the world of computers. And to think, we’ve hardly tuned into the digital revolution that already took place.

Within the next decade, the circuitry found in today’s servers will be able to process billions of bits of data per second, fitting neatly on a silicon chip half the size of a postage stamp. Copper connections currently used in computers and servers will prove inadequate to handle such vast amounts of data.

At data rates approaching 10 billion bits per second, microscopic imperfections in the copper or irregularities in a printed-circuit board begin to weaken and distort the signals. One way to solve the problem is to replace copper with optical fiber and the electrons with photons. Integrated onto a silicon chip, an optical transceiver could send and receive data at 10 billion or even 100 billion bits per second. Movies will download in seconds rather than hours. Multiple simultaneous streams of video will open up new applications in remote monitoring and surveillance, teleconferencing, and entertainment.

Organic semiconductors are strong candidates for creating flexible, full-color displays and circuits on plastic. Using organic light-emitting devices (OLEDs), organic full-color displays are set to replace liquid-crystal displays (LCDs) for use with laptop and desktop computers. Such displays can be deposited on flexible plastic foils, eliminating the fragile and heavy glass substrates used in LCDs, and can emit bright light without the pronounced directionality inherent in LCD viewing.

Organic electronics have already entered the commercial world. Multicolor automobile stereo displays are now available. Future plans include OLED backlights to be used in LCDs and organic integrated circuits, film-thin solar cells, and portable, lightweight roll-up OLED displays (projected on a wall) designed to replace conventional TVs.

Organic circuitry is expected to exceed or replace silicon electronics. Organic semiconductors attracted industrial interest when it was recognized that many of them are photoconductive under visible light. This discovery led to their use in electrophotography (or xerography) and as light valves in LCDs.

The day will most likely come when every home has a particle accelerator, an electron microscope and miniature Hubble space probe as common as TVs, refrigerators and lightbulbs. But then, they’ll just be the “new” devices. Refrigerator doors are opened without any understanding of food processing. Few people know where TV images come from, beyond knowing they are either sexy or violent. And light is simply the flick of a switch…or the clap of hands.

Maybe it’s the way things should be, so we can get on with the business of living and leave the how and why to others. But in doing so, we inadvertently create power shifts. If knowledge is power, then specialized scientific knowledge is near God-like.

However, business people are shrewd and politicians are manipulative in ways that let them control society, if not the world, without knowing what E=mc² means. Car dealers sell millions of cars without the slightest clue about pollution analysis or surface-to-road ratios, or how combustion works. And consumers don’t care much either; as long as the car can pass an emissions test and has a CD player, that’s good enough.

We put on our glasses and hope we don’t lose them. We take pictures of our children not because of a new kind of lens or photographic technique, but because we want to treasure a memory. We watch movies looking for thrills, without much regard for how they blew up that airplane, or where that sea of robot soldiers came from, or what supercomputer graphics workstation was used to create either image. We also trust our doctors, so when they order a PET scan, as frightening as it might be, we readily comply.

But somebody is out there working for an eyeglass company. Somebody is sitting behind a microscope all day much in the same way normal folks sit in front of the “boob tube” all day. Hopefully, the microscopist is more productive. And, there is such a thing as educational, informative and enriching TV watching.

So the future of human evolution is largely dependent not so much on knowledge and technology, but on the choices we make to engage ourselves in evolution/revolution. We can watch or we can participate. We can sit back passively or become interactive. It is the choices we make that will ultimately determine the constructive or destructive forces of advanced science. And with science moving to the atomic and sub-atomic level, perhaps average folk better pay a little closer attention to what’s going on in the universe.

Visualizing the Universe: Telescopes

Telescopes

Galileo’s telescope

Without telescopes, the stars in the sky we see every night would just be twinkling little lights. Hard to imagine what people in pre-telescope times thought these twinkling lights were. For some it must’ve been frightening. For others, it was awe-inspiring.

It began with optics: the lens. Spectacles were being worn in Italy as early as 1300. In the one-thing-leads-to-another theory, no doubt the ability to see better led to the desire to see farther. Three hundred years later, a Dutch spectacle maker named Hans Lippershey put two lenses together and achieved magnification. But when he tried to sell the idea, he discovered that quite a number of other experimenters had made the same discovery.

Also in the 1600s, Galileo, then a professor of mathematics at Padua, near Venice, started working on a device that many thought had little use other than creating optical illusions (although they weren’t called that at the time). In 1610 he published a description of his night-sky observations in a small paper called Starry Messenger (Sidereus Nuncius).

He reported that the moon was not smooth, as many had believed, but rough and covered with craters. He also proposed that the Milky Way was composed of millions of stars, and that Jupiter had four moons. His observations helped overturn the geocentric view of the world system (the universe revolves around the Earth) in favor of the heliocentric view (the planets revolve around the Sun), a notion proposed decades earlier by Copernicus. The device he invented to make these discoveries came to be known as the telescope.

The telescope was a long, thin tube in which light passes in a straight line from the aperture (the front objective lens) to the eyepiece at the opposite end of the tube. Galileo’s device was the forerunner of what are now called refracting telescopes, because the objective lens bends, or refracts, light.

NASA’s Great Observatories program comprised a series of space telescopes designed to give the most complete picture of objects across many different wavelengths. Each observatory studies a particular wavelength region in detail.

The telescopes in order of launch were: the Hubble Space Telescope (1990), Compton Gamma Ray Observatory (1991), Chandra X-ray Observatory (1999), and the Spitzer Space Telescope (2003).

The Kepler mission, launched in 2009, joined the great observatories and spent about four years looking for Earth-like planets in Earth-like orbits around Sun-like stars. It scanned over 100,000 stars in the constellations Lyra and Cygnus in hopes of finding a few dozen planetary transits, in which a star’s light dims slightly as a planet passes across its disk. Instead it found thousands of extrasolar planets, more than any scientist’s wildest dream! Recently put to rest after its reaction wheels finally gave out, Kepler left scientists with years of data to examine and more hope than ever that Earth-like planets exist in the galaxy and beyond.

Kepler, the habitable-planet finder

NASA has also launched many smaller observatories through its Explorer program. These missions have probed the “afterglow” of the Big Bang (COBE and WMAP), the ultraviolet light from other galaxies (GALEX and EUVE), and the violent explosions known as gamma-ray bursts (Swift).

Sometimes several of the observatories are used to look at the same object. Astronomers can analyze an object thoroughly by studying it in many different kinds of light. An object will look different in X-ray, visible, and infrared light.

Early experiments with color explored the way a prism refracts white light into an array of colors. A lens can act like a prism, separating the colors of visible light, an effect known as chromatic aberration that limited the effectiveness of existing telescopes. A new telescope design used a parabolic mirror to collect light and concentrate the image before it was presented to the eyepiece. This resulted in the reflecting telescope.

Reflective Telescopes

Reflecting telescopes are constructed with giant mirrors and collect more light than the human eye can, in order to see objects that are too faint and far away.

Solar Telescopes, designed to see the Sun, have the opposite problem: the target emits too much light. Because of the sun’s brightness, astronomers must filter out much of the light to study it. Solar telescopes are ordinary reflecting telescopes with some important changes.

Because the Sun is so bright, solar telescopes don’t need huge mirrors that capture as much light as possible. The mirrors only have to be large enough to provide good resolution. Instead of light-gathering power, solar telescopes are built to have high magnification. Magnification depends on focal length. The longer the focal length, the higher the magnification, so solar telescopes are usually built to be quite long.

Since the telescopes are so long, the air in the tube becomes a problem. As the temperature of the air changes, the air moves. This causes the telescope to create blurry images. Originally, scientists tried to keep the air inside the telescope at a steady temperature by painting solar telescopes white to reduce heating. White surfaces reflect more light and absorb less heat. Today the air is simply pumped out of the solar telescopes’ tubes, creating a vacuum.

Because it’s so necessary to control the air inside the telescope and the important instruments are large and bulky, solar telescopes are designed not to move. They stay in one place, while a moveable mirror located at the end of the telescope, called a tracking mirror, follows the Sun and reflects its light into the tube. To minimize the effects of heating, these mirrors are mounted high above the ground.

Astronomers have studied the Sun for a long time. Galileo, among others, had examined sunspots. Other early astronomers investigated the outer area of the Sun, called the corona, which was only visible during solar eclipses.

Sunspots

Before the telescope, other instruments were used to study the Sun. The spectroscope, a device invented in 1815 by the German optician Joseph von Fraunhofer, spreads sunlight into its component colors and helps astronomers figure out what elements stars contain. Scientists used a spectrum of the Sun to discover the element helium, named after the Greek word for Sun, “helios.”

In the 1890s, the American astronomer George Ellery Hale combined the technologies of spectroscopy and photography and came up with a new and better way to study the Sun. Hale called his device the “spectroheliograph.”

The spectroheliograph allowed astronomers to choose a certain type of light to analyze. For example, they could take a picture of the Sun using only the kind of light produced by calcium atoms. Some types of light make it easier to see details such as sunspots and solar prominences.

In 1930, the French astronomer Bernard Lyot came up with another device that helped scientists study both the Sun and objects nearby. The coronagraph uses a disk to block much of the light from the Sun, revealing features that would otherwise be erased by the bright glare. Close observations of the Sun’s corona, certain comets, and other details and objects are made possible by the coronagraph. Coronagraphs also allow scientists to study features like solar flares and the Sun’s magnetic field.

Today, more technologically advanced versions of the spectroheliograph and coronagraph are used to study the Sun. The McMath-Pierce Solar Telescope on Kitt Peak in Arizona is the world’s largest solar telescope. The Solar and Heliospheric Observatory project is a solar telescope in space that studies the Sun’s interior and corona, and solar wind, in ultraviolet and X-rays as well as visible light. Astronomers also use a technique called helioseismology, a kind of spectroscopy that studies sound waves in the Sun, to examine the Sun down to its core.

Basic telescope terms:

  • Concave – lens or mirror that causes light to spread out.
  • Convex – lens or mirror that causes light to come together to a focal point.
  • Field of view – area of the sky that can be seen through the telescope with a given eyepiece.
  • Focal length – distance required by a lens or mirror to bring the light to a focus.
  • Focal point or focus – point at which light from a lens or mirror comes together.
  • Magnification (power) – telescope’s focal length divided by the eyepiece’s focal length (see the worked example after this list).
  • Resolution – how close two objects can be and yet still be detected as separate objects, usually measured in arc-seconds (this is important for revealing fine details of an object, and is related to the telescope’s aperture).
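
A quick worked example using the magnification definition above; the focal lengths are illustrative numbers, not from the text.

    # Magnification = telescope focal length / eyepiece focal length.
    telescope_focal_length_mm = 1200.0
    eyepiece_focal_length_mm = 10.0
    magnification = telescope_focal_length_mm / eyepiece_focal_length_mm
    print(f"{magnification:.0f}x")  # 120x; a 25 mm eyepiece would give 48x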

Telescopes come in all shapes and sizes, from a little plastic tube bought at a toy store for $2, to the Hubble Space Telescope weighing several tons. Amateur telescopes fit somewhere in between. Even though they are not nearly as powerful as the Hubble, they can do some incredible things. For example, a small 6-inch (15-centimeter) scope can read the writing on a dime from 150 feet (about 46 meters) away.

Most telescopes come in one of two forms: the refractor or the reflector. The refractor telescope uses glass lenses. The reflector telescope uses mirrors instead of lenses. Both try to accomplish the same thing, but in different ways.

Telescopes are, metaphorically, giant eyes. The reason our eyes can’t see the printing on a dime 150 feet away is that they are simply too small. The eyes, obviously, have limits. A bigger eye would collect more light from an object and create a brighter image.

The objective lens (in refractors) or primary mirror (in reflectors) collects light from a distant object and brings that light, or image, to a point or focus. An eyepiece lens takes the bright light from the focus of the objective lens or primary mirror and “spreads it out” (magnifies it) to take up a large portion of the retina. This is the same principle that a magnifying glass (lens) uses. A magnifying glass takes a small image on a sheet of paper, for instance, and spreads it out over the retina of the eye so that it looks big.

When the objective lens or primary mirror is combined with the eyepiece, it makes a telescope. The basic idea is to collect light to form a bright image inside the telescope, then magnify that image. Therefore, the simplest telescope design is a big lens that gathers the light and directs it to a focal point, with a small lens used to bring the image to the eye.

A telescope’s ability to collect light is directly related to the diameter of the lens or mirror (the aperture) used to gather light. Generally, the larger the aperture, the more light the telescope collects and brings to focus, and the brighter the final image. The telescope’s magnification, its ability to enlarge an image, depends on the combination of lenses used. The eyepiece performs the magnification. Magnification can be achieved by almost any telescope using different eyepieces.
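
Since light gathering depends on the area of the aperture, it grows with the square of the aperture’s diameter. A small sketch with illustrative aperture sizes:

    # Light-gathering power scales with aperture area (diameter squared).
    d_small_mm = 70.0    # a small refractor
    d_large_mm = 200.0   # an 8-inch reflector
    ratio = (d_large_mm / d_small_mm) ** 2
    print(f"{ratio:.1f}x more light")  # about 8.2x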

Refractors

Hans Lippershey, living in Holland, is credited with inventing the refractor in 1608. It was first used by the military. Galileo was the first to use it in astronomy. Both Lippershey’s and Galileo’s designs used a combination of convex and concave lenses. Around 1611, Kepler improved the design to have two convex lenses, which made the image upside-down. Kepler’s design is still the major design of refractors today, with a few later improvements in the lenses and in the glass used to make the lenses.

Refractors have a long tube, made of metal, plastic, or wood, a glass combination lens at the front end (objective lens), and a second glass combination lens (eyepiece). The tube holds the lenses in place at the correct distance from one another. It also helps to keep out dust, moisture, and light that would interfere with forming a good image. The objective lens gathers the light, and bends or refracts it to a focus near the back of the tube. The eyepiece brings the image to the eye, and magnifies the image. Eyepieces have much shorter focal lengths than objective lenses.

Achromatic refractors use lenses that are not extensively corrected to prevent chromatic aberration, which is a rainbow halo that sometimes appears around images seen through a refractor. Instead, they usually have “coated” lenses to reduce this problem. Apochromatic refractors use either multiple-lens designs or lenses made of other types of glass (such as fluorite) to prevent chromatic aberration. Apochromatic refractors are much more expensive than achromatic refractors.

Refractors have good resolution, high enough to see details in planets and binary stars. However, it is difficult to make large objective lenses (greater than 4 inches or 10 centimeters) for refractors. Refractors are relatively expensive. Because the aperture is limited, a refractor is less useful for observing faint, deep-sky objects, like galaxies and nebulae, than other types of telescopes.

Isaac Newton developed the reflector telescope around 1668, in response to the chromatic aberration (rainbow halo) problem that plagued refractors during his time. Instead of using a lens to gather light, Newton used a curved metal mirror (the primary mirror) to collect the light and reflect it to a focus. Mirrors do not have the chromatic aberration problems that lenses do. Newton placed the primary mirror at the back of the tube.

Because the mirror reflected light back into the tube, he had to use a small, flat mirror (the secondary mirror) in the focal path of the primary mirror to deflect the image out through the side of the tube to the eyepiece; otherwise, the observer’s head would get in the way of the incoming light. Because the secondary mirror is so small, it does not block the image gathered by the primary mirror.

The Newtonian reflector remains one of the most popular telescope designs in use today.

Rich-field (or wide-field) reflectors are a type of Newtonian reflector with short focal ratios and low magnification. The focal ratio, or f/number, is the focal length divided by the aperture, and relates to the brightness of the image. They offer wider fields of view than longer focal ratio telescopes, and provide bright, panoramic views of comets and deep-sky objects like nebulae, galaxies and star clusters.
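
For example, with illustrative numbers:

    # Focal ratio (f/number) = focal length / aperture.
    focal_length_mm = 750.0
    aperture_mm = 150.0
    print(f"f/{focal_length_mm / aperture_mm:.0f}")  # f/5, a "fast" rich-field scope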

Dobsonian telescopes are a type of Newtonian reflector with a simple tube and alt-azimuth mounting. They are relatively inexpensive because they are made of plastic, fiberglass or plywood. Dobsonians can have large apertures (6 to 17 inches, 15 to 43 centimeters). Because of their large apertures and low price, Dobsonians are well-suited to observing deep-sky objects.

Reflector telescopes have their own problems. Spherical aberration occurs when light reflected from the mirror’s edge gets focused to a slightly different point than light reflected from the center. Astigmatism occurs when the mirror is not ground symmetrically about its center; consequently, images of stars focus to crosses rather than to points. Coma occurs when stars near the edge of the field look elongated, like comets, while those in the center are sharp points of light. All reflector telescopes also experience some loss of light: the secondary mirror obstructs some of the light coming into the telescope, and a mirror’s reflective coating returns only up to about 90 percent of the incoming light.

Compound or catadioptric telescopes are hybrid telescopes that have a mix of refractor and reflector elements in the design. The first compound telescope was made by German astronomer Bernhard Schmidt in 1930. The Schmidt telescope had a primary mirror at the back of the telescope, and a glass corrector plate in the front of the telescope to remove spherical aberration. The telescope was used primarily for photography, because it had no secondary mirror or eyepieces. Photographic film is placed at the prime focus of the primary mirror. Today, the Schmidt-Cassegrain design, which was invented in the 1960s, is the most popular type of telescope. It uses a secondary mirror that bounces light through a hole in the primary mirror to an eyepiece.

The second type of compound telescope was invented by the Russian astronomer D. Maksutov, although a Dutch astronomer, A. Bouwers, came up with a similar design in 1941, before Maksutov. The Maksutov telescope is similar to the Schmidt design, but uses a more spherical corrector lens. The Maksutov-Cassegrain design is similar to the Schmidt-Cassegrain design.

Telescope Mounts

Telescope Mounts are another important feature of telescopes. The alt-azimuth is a type of telescope mount, similar to a camera tripod, that uses a vertical (altitude) and a horizontal (azimuth) axis to locate an object. An equatorial mount uses two axes (right ascension, or polar, and declination) aligned with the poles to track the motion of an object across the sky.

The telescope mount keeps the telescope steady, points the telescope at whatever object is being viewed, and adjusts the telescope for the movement of the stars caused by the Earth’s rotation. The observer’s hands need to be free to focus, change eyepieces, and perform other tasks.

The alt-azimuth mount has two axes of rotation, a horizontal axis and a vertical axis. To point the telescope at an object, the mount is rotated along the horizon (azimuth axis) to the object’s horizontal position. Then, it tilts the telescope, along the altitude axis, to the object’s vertical position. This type of mount is simple to use, and is most common in inexpensive telescopes.

Alt-azimuth mount

There are two variations of the alt-azimuth mount. The ball and socket is used in inexpensive rich-field telescopes. It has a ball shaped end that can rotate freely in the socket mount. The rocker box is a low center-of-gravity box mount, usually made of plywood, with a horizontal circular base (azimuth axis) and Teflon bearings for the altitude axis. This mount is usually used on Dobsonian telescopes. It provides good support for a heavy telescope, as well as smooth, frictionless motion.

Although the alt-azimuth mount is simple and easy to use, it does not properly track the motion of the stars. In trying to follow the motion of a star, the mount produces a “zigzag” motion, instead of a smooth arc across the sky. This makes this type of mount useless for taking photographs of the stars.

The equatorial mount also has two perpendicular axes of rotation: right ascension and declination. However, instead of being oriented up and down, it is tilted at the same angle as the Earth’s axis of rotation. The equatorial mount also comes in two variations. The German equatorial mount is shaped like a “T.” The long axis of the “T” is aligned with the Earth’s pole. The Fork mount is a two-pronged fork that sits on a wedge that is aligned with the Earth’s pole. The base of the fork is one axis of rotation and the prongs are the other.

When properly aligned with the Earth’s poles, equatorial mounts can allow the telescope to follow the smooth, arc-like motion of a star across the sky. They can also be equipped with “setting circles,” which allow easy location of a star by its celestial coordinates (right ascension, declination). Motorized drives allow a computer (laptop, desktop or PDA) to continuously drive the telescope to track a star. Equatorial mounts are used for astrophotography.

Eyepiece

An eyepiece is the second lens in a refractor, or the only lens in a reflector. Eyepieces come in many optical designs, and consist of one or more lenses in combination, functioning almost like mini-telescopes. The purposes of the eyepiece are to produce and allow changing the telescope’s magnification, produce a sharp image, provide comfortable eye relief (the distance between the eye and the eyepiece when the image is in focus), and determine the telescope’s field of view.

An eyepiece’s “apparent” field of view is how much of the sky, in degrees, is seen edge-to-edge through the eyepiece alone (this is specified on the eyepiece). The “true” or real field is how much of the sky can be seen when that eyepiece is placed in the telescope: true field = apparent field / magnification.
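
Plugging illustrative numbers into the formula above:

    # True field = apparent field / magnification.
    apparent_field_deg = 50.0    # printed on the eyepiece
    magnification = 120.0        # telescope FL / eyepiece FL
    true_field_deg = apparent_field_deg / magnification
    print(f"{true_field_deg:.2f} degrees")  # ~0.42 deg, a bit less than the full Moon's width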

There are many types of eyepiece designs: Huygens, Ramsden, Orthoscopic, Kellner and RKE, Erfle, Plossl, Nagler, and Barlow (used in combination with another eyepiece to increase magnification 2 to 3 times). All eyepieces have problems and are designed to fit specific telescopes.

Eyepieces with illuminated reticules are used exclusively for astrophotography. They aid in guiding the telescope to track an object during a film exposure, which can take anywhere from 10 minutes to an hour.

Other Components

Finders are devices used to help aim the telescope at its target, similar to the sights on a rifle. Finders come in three basic types. Peep sights are notches or circles that allow alignment with the target. Reflex sights use a mirror box that shows the sky and marks the target with a red LED spot, similar to a laser sight on a gun. A telescope sight is a small, low-magnification (5x to 10x) telescope mounted on the side with a crosshair reticule, like a telescopic sight on a rifle.

Filters are pieces of glass or plastic placed in the barrel of an eyepiece to restrict the wavelengths of light that come through in the image. Filters are used to enhance the viewing of faint sky objects in light-polluted skies, enhance the contrast of fine features and details on the moon and planets, and safely view the sun. The filter screws into the barrel of the eyepiece.

Another add-on component is a dew shield, which prevents moisture condensation. For taking photographs, conventional lens and film cameras or CCD devices/digital cameras are used. Some astronomers use telescopes to make scientific measurements with photometers (devices to measure the intensity of light) or spectroscopes (devices to measure the wavelengths and intensities of light from an object).

Visualizing the Universe: Eye Glasses

Contact Lenses

Contact lenses are thin transparent plastic discs that sit on the cornea. Just like eyeglasses, they correct refractive errors such as myopia (nearsightedness) and hyperopia (farsightedness). With these conditions, the eye doesn’t focus light directly on the retina as it should, leading to blurry vision. Contact lenses are shaped based on the vision problem to help the eye focus light directly on the retina. Some contact lenses are less about seeing than being seen.

Contact lenses for Fashion – Wait for Genetic Engineering!

Contact lenses are closer to natural sight than eyeglasses. They move with the eye. Normal glasses can get in the way of the line of sight; contact lenses don’t. They can also be worn several days at a time.

Contact lenses stay in place by sticking to the layer of tear fluid that floats on the surface of the eye and by eyelid pressure. The eyes provide natural lubrication and help flush away any impurities that may become stuck to the lens.

Originally, all contact lenses were made of a hard plastic called polymethyl methacrylate (PMMA). This is the same plastic used to make Plexiglas. But hard lenses don’t absorb water, which is needed to help oxygen pass through the lens and into the cornea. Because the eye needs oxygen to stay healthy, hard lenses can cause irritation and discomfort. However, they are easy to clean.

Hydrophilic Lens

Soft contact lenses are more pliable and easier to wear because they’re made of a soft, gel-like plastic. Soft lenses are hydrophilic, or “water loving,” and absorb water. This allows oxygen to flow to the eye and makes the lens flexible and more comfortable. More oxygen to the eye means soft contact lenses can be worn for long periods with less irritation.

Daily-wear lenses are the type of contacts removed every night before going to bed (or whenever someone decides to sleep). Extended-wear lenses are worn for several days without removal. Disposable lenses are just what the name implies: they are worn for a certain period of time and then thrown away. Cosmetic lenses change the color of a person’s eyes. Ultraviolet (UV) protection lenses act as sunglasses, protecting the eyes against harmful ultraviolet rays from the sun.

Corneal reshaping lenses are worn to reshape the cornea and correct vision. Rigid, gas-permeable lenses have both hard and soft contact lens features. They are more durable than soft lenses but still allow oxygen to pass through to the eye. They don’t contain water, so are less likely to develop bacteria and cause infection than soft lenses. They are also hard enough to provide clear vision.

Contact lenses are frequently customized for athletes, computer operators, and other special applications. Many contacts don’t just correct vision problems but enhance vision.

Sunglasses

Sunglasses provide protection from harmful ultraviolet rays in sunlight. Some sunglasses filter out UV light completely. They also provide protection from intense light or glare, like the light reflected off snow or water on a bright day. Glare can be blinding, with distracting bright spots hiding otherwise visible objects. Good sunglasses can completely eliminate glare using polarization.

Polarized lens

Sunglasses have become a cultural phenomenon. In the fashion world, designer sunglasses make people look “cool,” or mysterious. They can also be ominous, such as the mirrored sunglasses worn by roughneck bikers and burly state troopers.

Cheap sunglasses are risky because although they are tinted and block some of the light, they don’t necessarily block out UV light. Cheap sunglasses are made out of ordinary plastic with a thin tinted coating on them.

There are several types of lens material, such as CR-39, a plastic made from hard resin, or polycarbonate, a synthetic plastic that has great strength and is very lightweight. These kinds of lenses are usually lighter, more durable, and scratch-resistant. Optical-quality polycarbonate and glass lenses are generally free from distortions, such as blemishes or waves, and the color is evenly distributed. Some sunglasses are very dark and can block up to 97 percent of light.

More expensive sunglasses use special technologies to achieve increased clarity, better protection, and higher contrast or to block certain types of light. Normal frames similar to prescription eyeglasses filter light but sometimes offer little protection from ambient light, direct light and glare. Wrap-around frames, larger lenses and special attachments can compensate for these weaknesses. Most cheap sunglasses use simple plastic or wire frames, while more expensive brands use high-strength, light-weight composite or metal frames.

The brightness or intensity of light is measured in lumens. Indoors, most artificial light is around 400 to 600 lumens. Outside on a sunny day, the brightness ranges from about 1,000 lumens in the shade to more than 6,000 lumens from bright light reflected off of hard surfaces, like concrete or highways.

Comfort levels are around 3,500 lumens. Brightness above this level produces glare. Squinting is the natural way to filter such light. In the 10,000-lumen range, prolonged exposure to light of such intensity can cause temporary or even permanent blindness. A large snowfield, for instance, can produce more than 12,000 lumens, resulting in what is commonly called snow blindness.

Three kinds of light are associated with sunglasses: direct, reflected, and ambient. Direct light is light that goes straight from the light source (like the sun) to the eyes. Too much direct light can wash out details and even cause pain. Reflected light (glare) is light that has bounced off a reflective object to enter the eyes. Strong reflected light can be equally as damaging as direct light, such as light reflected from snow, water, glass, white sand and metal.

Ambient light is light that has bounced and scattered in many directions so that it does not seem to have a specific source, such as the glow in the sky around a major city. Good sunglasses can compensate for all three forms of light.

Sunglasses use a variety of technologies to eliminate problems with light: tinting, polarization, photochromic lenses, mirroring, scratch-resistant coating, anti-reflective coating, and UV coating.

The color of the tint determines the parts of the light spectrum that are absorbed by the lenses. Gray tints are great all-purpose tints that reduce the overall amount of brightness with the least amount of color distortion. Gray lenses offer good protection against glare. Yellow or gold tints reduce the amount of blue light while allowing a larger percentage of other frequencies through.

Blue light tends to bounce and scatter off a lot of things; it can create a kind of glare known as blue haze. The yellow tint eliminates the blue part of the spectrum and has the effect of making everything bright and sharp. Snow glasses are usually yellow. Tinting distorts color perception, so tinted glasses are not very useful when there is a need to see colors accurately. Other colors include amber, green, purple, and rose, all of which filter out certain colors of the light spectrum.

Light waves from the sun, or even from an artificial light source such as a lightbulb, vibrate and radiate outward in all directions. Whether the light is transmitted, reflected, scattered, or refracted, when its vibrations are aligned into one or more planes of direction, the light is said to be polarized.

Polarization can occur naturally or artificially. On a lake, for instance, polarization occurs naturally: the reflected glare off the surface is the light that does not make it through the “filter” of the water. This explains why part of a lake looks shiny while another part looks rough (like waves). It’s also why nothing can be seen below the surface, even when the water is very clear.

Polarized filters are most commonly made of a chemical film applied to a transparent plastic or glass surface. The chemical compound used will typically be composed of molecules that naturally align in parallel relation to one another. When applied uniformly to the lens, the molecules create a microscopic filter that absorbs any light matching their alignment. When light strikes a surface, the reflected waves are polarized to match the angle of that surface. So, a highly reflective horizontal surface, such as a lake, will produce a lot of horizontally polarized light. Polarized lenses in sunglasses are fixed at an angle that only allows vertically polarized light to enter.
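
The standard quantitative description of this filtering is Malus’s law, added here for illustration (it is not stated in the text above): the transmitted intensity equals the incoming intensity times the squared cosine of the angle between the light’s polarization and the filter’s transmission axis.

    import math

    def transmitted(intensity, angle_deg):
        # Malus's law: I = I0 * cos^2(theta).
        return intensity * math.cos(math.radians(angle_deg)) ** 2

    # Horizontally polarized lake glare meeting a vertical transmission axis:
    print(f"{transmitted(1.0, 90.0):.3f}")  # 0.000 -- the glare is blocked
    print(f"{transmitted(1.0, 0.0):.3f}")   # 1.000 -- vertically polarized light passes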

Sunglasses or prescription eyeglasses that darken when exposed to the sun are called photochromic, or sometimes photochromatic. Because photochromic lenses react to UV light and not to visible light, there are circumstances under which the darkening will not occur.

A good example is in the car. Because the windshield blocks out most UV light, photochromic lenses will not darken inside the car; consequently, many photochromic sunglasses are also tinted. Photochromic lenses have millions of molecules of substances such as silver chloride or silver halide embedded in them. The molecules are transparent to visible light in the absence of UV light, as is normally the case with artificial lighting. But when exposed to the UV rays in sunlight, the molecules undergo a chemical process that causes them to change shape.

The new molecular structure absorbs portions of the visible light, causing the lenses to darken. Indoors, out of the UV light, a reverse chemical reaction takes place. The sudden absence of UV radiation causes the molecules to “snap back” to their original shape, resulting in the loss of their light absorbing properties.

With some prescription glasses, different parts of the lens can vary in thickness, and the thicker parts can appear darker than the thinner areas. By immersing plastic lenses in a chemical bath, the photochromic molecules are absorbed to a depth of about 150 microns into the plastic. This depth of absorption is much better than a simple coating, which is only about 5 microns thick and not enough to make glass lenses sufficiently dark.