Nanotechnology Future Dangers and Human Defenses

As technology accelerates toward the full realization of GNR (genetics, nanotechnology, and robotics), we will see interweaving potentials: a feast of creativity resulting from human intelligence expanded manyfold, combined with many grave new dangers. A quintessential concern that has received considerable attention is unrestrained nanobot replication. Early proposals for molecular manufacturing required trillions of intelligently designed devices to be useful. To scale up to such levels, it would have been necessary to enable them to self-replicate, essentially the same approach used in the biological world (that's how one fertilized egg cell becomes the trillions of cells in a human).

Although self-replication can be restricted and controlled in a variety of ways (for example, Ralph Merkle's proposal[1] for a "broadcast architecture," in which each replicating entity must obtain its replication codes from a secure server), the overall system will have self-replication at some level. And in the same way that biological self-replication gone awry (that is, cancer) results in biological destruction, a defect in a mechanism curtailing nanobot self-replication—the so-called gray goo scenario—would endanger all physical entities, biological or otherwise.

Modern proposals, such as the use of large integrated manufacturing systems rather than trillions of quasi-independent nanobots, appear to prevent the inadvertent release of destructive self-replication, but in general such safeguards can be worked around by a determined adversary. We see a similar situation today in biological technologies. The ethical guidelines for gene-modification technologies adopted at the Asilomar Conference have worked well for over a quarter of a century, but they would not restrict a would-be bioterrorist, who is under no obligation to follow them (and who does not have to put these "inventions" through the FDA either).

These guidelines and strategies are likely to be effective for preventing the accidental release of dangerous self-replicating nanotechnology entities. But dealing with the intentional design and release of such entities is a more complex and challenging problem. A sufficiently determined and destructive opponent could possibly defeat each of these layers of protection. Take, for example, the broadcast architecture. When properly designed, each entity is unable to replicate without first obtaining replication codes, which are not repeated from one replication generation to the next. However, a modification to such a design could bypass the destruction of the replication codes and thereby pass them on to the next generation. To counteract that possibility, it has been recommended that the memory for the replication codes be limited to only a subset of the full code. However, this guideline could be defeated by expanding the size of the memory.
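
To picture the control flow of such a scheme, here is a minimal toy sketch in Python. The classes and methods are hypothetical illustrations of the single-use-code idea, not part of Merkle's actual proposal; the point is simply that each generation must contact the server, and a redeemed code cannot be handed down.

```python
import secrets

class CodeServer:
    """Issues single-use replication codes, broadcast-architecture style."""
    def __init__(self):
        self._valid_codes = set()

    def issue_code(self) -> str:
        code = secrets.token_hex(16)
        self._valid_codes.add(code)
        return code

    def redeem(self, code: str) -> bool:
        # Each code works exactly once; redeeming it destroys it, so a code
        # cannot be passed on to the next replication generation.
        if code in self._valid_codes:
            self._valid_codes.remove(code)
            return True
        return False

class Replicator:
    def replicate(self, server: CodeServer):
        code = server.issue_code()   # must contact the server every time
        if not server.redeem(code):  # stale or copied codes are rejected
            return None
        return Replicator()          # offspring carries no stored code

server = CodeServer()
child = Replicator().replicate(server)
print(child is not None)  # True: replication succeeded, code now destroyed
```

As the surrounding text argues, the weak point is exactly what the comments mark: a modified replicator that skips the destruction step defeats the whole arrangement.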

Another suggested protection is to encrypt the codes and build safeguards into the decryption systems, such as time-expiration limitations. However, we have seen how easy it has been to defeat protections against the unauthorized replication of intellectual property such as music files. Once replication codes and protective layers are stripped away, the information can be copied without these restrictions.

This doesn't mean that protection is impossible. Rather, each level of protection will work only up to a certain level of sophistication. The meta-lesson here is that we will need to place twenty-first-century society's highest priority on the continuing advance of defensive technologies, keeping them one or more steps ahead of the destructive technologies (or at least no more than a quick step behind).

Living creatures—including humans—would be the primary victims of an exponentially spreading nanobot attack. The principal designs for nanobot construction use carbon as a primary building block. Because of carbon’s unique ability to form four-way bonds, it is an ideal building block for molecular assemblies. Because biology has made the same use of carbon, pathological nanobots would find the Earth’s biomass an ideal source of this primary ingredient.

How long would it take an out-of-control replicating nanobot to destroy the Earth's biomass? The biomass has on the order of 10^40 carbon atoms. A reasonable estimate of the number of carbon atoms in a single replicating nanobot is about 10^6. (Note that this analysis is not very sensitive to the accuracy of these figures, only to the approximate order of magnitude.) This malevolent nanobot would need to create on the order of 10^34 copies of itself to replace the biomass, which could be accomplished with 113 replications (each of which would potentially double the destroyed biomass). Robert Freitas has estimated a minimum replication time of approximately 100 seconds, so 113 replication cycles would require about three hours.[2] However, the actual rate of destruction would be slower, because biomass is not "efficiently" laid out. The limiting factor would be the actual movement of the front of destruction. Nanobots cannot travel very quickly because of their small size. It would likely take weeks for such a destructive process to circle the globe.

Based on this observation we can envision a more insidious possibility. In a two-phased attack, the nanobots take several weeks to spread throughout the biomass while using up an insignificant portion of the carbon atoms, say one out of every thousand trillion (10^15). At this extremely low concentration, the nanobots would be as stealthy as possible. Then, at an "optimal" point, the second phase would begin, with the seed nanobots expanding rapidly in place to destroy the biomass. For each seed nanobot to multiply itself a thousand trillionfold would require only about 50 binary replications, or about 90 minutes. With the nanobots already spread out in position throughout the biomass, movement of the destructive wave front would no longer be a limiting factor.
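
Both scenarios are simple doubling arithmetic, and a minimal Python check using the order-of-magnitude figures above reproduces them:

```python
import math

BIOMASS_CARBON_ATOMS = 1e40    # order-of-magnitude estimate for Earth's biomass
NANOBOT_CARBON_ATOMS = 1e6     # carbon atoms per replicating nanobot
SECONDS_PER_REPLICATION = 100  # Freitas's minimum replication time

# Doublings needed to convert the entire biomass into nanobot copies:
generations = math.log2(BIOMASS_CARBON_ATOMS / NANOBOT_CARBON_ATOMS)
print(round(generations))                                      # 113 replications
print(round(generations * SECONDS_PER_REPLICATION / 3600, 1))  # ~3.1 hours

# Second phase of the stealth attack: each seed multiplies 10^15-fold in place.
stealth = math.log2(1e15)
print(round(stealth))                                    # 50 replications
print(round(stealth * SECONDS_PER_REPLICATION / 60))     # ~83 minutes
```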

The point is that without defenses, the available biomass could be destroyed by gray goo very rapidly. Clearly, we will need a nanotechnology immune system[3] in place before these scenarios become a possibility. This immune system would have to be capable of contending not just with obvious destruction but with any potentially dangerous (stealthy) replication, even at very low concentrations.

Eric Drexler, Robert Freitas, Ralph Merkle, Mike Treder, Chris Phoenix, and others have pointed out that future nanotech manufacturing devices can be created with safeguards that would prevent the accidental creation of self-replicating nanodevices.[4] However, this observation, although important, does not eliminate the threat of gray goo, as I pointed out above. There are other reasons (beyond manufacturing) that self-replicating nanobots will need to be created. The nanotechnology immune system mentioned above, for example, will ultimately require self-replication; otherwise it would be unable to defend us against increasingly sophisticated types of goo. Self-replication is also likely to find extensive military applications. Moreover, a determined adversary or terrorist can defeat safeguards against unwanted self-replication; hence the need for defense.

Bill Joy and other observers have pointed out that such an immune system would itself be a danger because of the potential for "autoimmune" reactions (that is, the immune-system nanobots attacking the world they are supposed to defend). However, this possibility is not a compelling reason to avoid creating an immune system. No one would argue that humans would be better off without an immune system because of the potential for developing autoimmune diseases. Although the biological immune system can itself present a danger, humans would not last more than a few weeks without one (barring extraordinary efforts at isolation). In any case, a technological immune system for nanotechnology will develop even without explicit efforts to create one. This has effectively happened with software viruses: an immune system emerged not through a formal grand-design project but through incremental responses to each new challenge and through the development of heuristic algorithms for early detection. We can expect the same to happen as dangers from nanotechnology emerge. The point for public policy will be to invest specifically in these defensive technologies.

As a test case, we can take a small measure of comfort from how we have dealt with one recent technological challenge. There exists today a new fully nonbiological self-replicating entity that didn’t exist just a few decades ago: the computer virus. When this form of destructive intruder first appeared, strong concerns were voiced that as they became more sophisticated, software pathogens had the potential to destroy the computer-network medium in which they live. Yet the “immune system” that has evolved in response to this challenge has been largely effective. Although destructive self-replicating software entities do cause damage from time to time, the injury is but a small fraction of the benefit we receive from the computers and communication links that harbor them.

One might counter that computer viruses do not have the lethal potential of biological viruses or of destructive nanotechnology. This is not always the case: we rely on software to operate our 911 call centers, monitor patients in critical-care units, fly and land airplanes, guide intelligent weapons in our military campaigns, handle our financial transactions, operate our municipal utilities, and perform many other mission-critical tasks. To the extent that software viruses do not yet pose a lethal danger, however, this observation only strengthens my argument. The fact that computer viruses are not usually deadly to humans means that more people are willing to create and release them; the vast majority of software-virus authors would not release viruses if they thought they would kill people. It also means that our response to the danger is that much less intense. Conversely, when it comes to self-replicating entities that are potentially lethal on a large scale, our response on all levels will be vastly more serious.

Although software pathogens remain a concern, the danger exists today mostly at a nuisance level. Keep in mind that our success in combating them has taken place in an industry in which there is no regulation and minimal certification for practitioners. The largely unregulated computer industry is also enormously productive. One could argue that it has contributed more to our technological and economic progress than any other enterprise in human history.

But the battle concerning software viruses and the panoply of software pathogens will never end. We are becoming increasingly reliant on mission-critical software systems, and the sophistication and potential destructiveness of self-replicating software weapons will continue to escalate. When we have software running in our brains and bodies and controlling the world’s nanobot immune system, the stakes will be immeasurably greater.

The Right Level of Relinquishment

The only conceivable way that the accelerating pace of GNR technology advancement could be stopped would be through a worldwide totalitarian system that relinquishes the very idea of progress. Even this specter would be likely to fail in averting the dangers of GNR because the resulting underground activity would tend to favor the more destructive applications. This is because the responsible practitioners that we rely on to quickly develop defensive technologies would not have easy access to the needed tools. Fortunately, such a totalitarian outcome is unlikely because the increasing decentralization of knowledge is inherently a democratizing force.

I do think that relinquishment at the right level needs to be part of our ethical response to the dangers of twenty-first-century technologies. One constructive example of this is the ethical guideline proposed by the Foresight Institute: namely, that nanotechnologists agree to relinquish the development of physical entities that can self-replicate in a natural environment. In my view, there are two exceptions to this guideline. First, we will ultimately need to provide a nanotechnology-based planetary immune system (nanobots embedded in the natural environment to protect against rogue self-replicating nanobots). Robert Freitas and I have discussed whether such an immune system would itself need to be self-replicating. Freitas writes: "A comprehensive surveillance system coupled with prepositioned resources—resources including high-capacity nonreplicating nanofactories able to churn out large numbers of nonreplicating defenders in response to specific threats—should suffice."[5] I agree with Freitas that a prepositioned immune system with the ability to augment the defenders will be sufficient in early stages. But once strong AI is merged with nanotechnology, and the ecology of nanoengineered entities becomes highly varied and complex, my own expectation is that we will find that the defending nanorobots need the ability to replicate in place quickly. Biological evolution essentially made the same "discovery." The other exception is the need for self-replicating nanobot-based probes to explore planetary systems outside our solar system.

Broad relinquishment of GNR technologies would be unwise for several reasons. However, I do think we need to take seriously the increasingly strident voices that advocate for it, even though many of these advocates are motivated by a general distrust of technology, and their proposals are not well considered. Although blanket relinquishment is not the answer, rational fear could lead to irrational solutions, and those solutions may cause severe negative consequences.

A summary of an overall strategy for defending against the downsides of emerging GNR technologies would include the following:

We need to streamline the regulatory process for genetic and medical technologies. The regulations do not impede the malevolent use of technology but significantly delay the needed defenses. As mentioned, we need to better balance the risks of new technology (for example, new medications) against the known harm of delay.

A global program of confidential, random serum monitoring for unknown or evolving biological pathogens should be funded. Diagnostic tools exist to rapidly identify the existence of unknown protein or nucleic acid sequences. Intelligence is key to defense, and such a program could provide invaluable early warning of an impending epidemic. Such a ‘pathogen sentinel’ program has been proposed for many years by public health authorities but has never received adequate funding.

Well-defined and targeted temporary moratoriums, such as the one that occurred in the genetics field in 1975, may be needed from time to time. But such moratoriums are unlikely to be necessary with nanotechnology. Broad efforts at relinquishing major areas of technology serve only to continue vast human suffering by delaying the beneficial aspects of new technologies, and actually make the dangers worse.

Efforts to define safety and ethical guidelines for nanotechnology should continue. Such guidelines will inevitably become more detailed and refined as we get closer to molecular manufacturing.

To create the political support to fund the efforts suggested above, it is necessary to raise public awareness of these dangers. Of course, raising such alarms carries its own downside: it can generate uninformed backing for broad antitechnology mandates. So we also need to create a public understanding of the profound benefits of continuing advances in technology.

These risks cut across international boundaries—which is, of course, nothing new; biological viruses, software viruses, and missiles already cross such boundaries with impunity. International cooperation was vital to containing the SARS virus and will become increasingly vital in confronting future challenges. Worldwide organizations such as the World Health Organization, which helped coordinate the SARS response, and is now dealing with the possibility of a bird flu pandemic, need to be strengthened.

A contentious contemporary political issue is the need for preemptive action to combat threats, such as terrorists with access to weapons of mass destruction or rogue nations that support such terrorists. Such measures will always be controversial, but the potential need for them is clear. A nuclear explosion can destroy a city in seconds. A self-replicating pathogen, whether biological or nanotechnology based, could destroy our civilization in a matter of days or weeks. We cannot always afford to wait for the massing of armies or other overt indications of ill intent before taking protective action.

Intelligence agencies and policing authorities will have a vital role in forestalling the vast majority of potentially dangerous incidents. Their efforts need to involve the most powerful technologies available. For example, before this decade is out, devices the size of dust particles will be able to carry out reconnaissance missions. When we reach the 2020s and have software running in our bodies and brains, government authorities will have a legitimate need on occasion to monitor these software streams. The potential for abuse of such powers is obvious. We will need to find a middle road: preventing catastrophic events while preserving our privacy and liberty.

The above approaches will be inadequate to deal with the danger from pathological R (strong AI). Our primary strategy in this area should be to optimize the likelihood that future nonbiological intelligence will reflect our values of liberty, tolerance, and respect for knowledge and diversity. The best way to accomplish this is to foster those values in our society today and going forward. If this sounds vague, it is. But there is no purely technical strategy that is workable in this area because greater intelligence will always find a way to circumvent measures that are the product of a lesser intelligence. The nonbiological intelligence we are creating is and will be embedded in our societies and will reflect our values, as inconsistent and conflicted as these may appear to be. The transbiological phase will involve nonbiological intelligence deeply integrated with biological intelligence. This will amplify our abilities, and our application of these greater intellectual powers will be governed by the values of its creators. The transbiological era will ultimately give way to the postbiological era, but it is to be hoped that our values will remain influential. This strategy is certainly not foolproof, but it is the primary means we have today to influence the future course of strong AI.

Technology will remain a double-edged sword. It represents vast power to be used for all humankind’s purposes. GNR will provide the means to overcome age-old problems such as illness and poverty, but also will empower destructive ideologies. We have no choice but to strengthen our defenses while we apply these quickening technologies to advance our human values, despite an apparent lack of consensus on what those values should be.

Reference:

Originally published in Nanotechnology Perceptions: A Review of Ultraprecision Engineering and Nanotechnology, Volume 2, No. 1, March 27, 2006.

Disruptive Technologies: Cloud Computing

Cloud computing is an emerging technology that has evolved from the concept of grid computing. Using the Internet as a platform, it allows consumers to tap into a central network of hardware and software. This essentially offloads the burden of computation from their end onto a virtual "cloud" that provides everything from storage and processing to applications and services. For example, many Internet services, including streaming media, social networking, and online search, are now delivered through the cloud.

The cloud computing paradigm has had a sweeping impact on modern technologies, such as the Internet of Things, the mobile Internet, and knowledge-work automation. Smartphone popularity is largely driven by the ability to access a multitude of applications, many of which rely on cloud resources. It is estimated that by 2025 cloud technology will have an economic impact of up to $6.2 trillion per year, the bulk of it attributable to Internet applications and services dependent on cloud computing.[1]

Cloud computing also has major implications for enterprises. Rather than investing in expensive IT infrastructure, a company can simply hire the required resources and applications to be delivered through the cloud. This has several advantages. First, valuable capital need not be tied up in bulky hardware or inflexible software that may rapidly become obsolete. Second, cloud systems are flexible: companies can easily match use with demand by terminating redundant services or rapidly adding new ones as needed. Third, the cost of setting up a cloud system is only about a third of what it takes to install and manage the necessary hardware and software locally. Finally, cloud setups are generally more reliable: because equipment in the cloud network is interconnected in a way that allows dynamic workload distribution, localized failures are less likely to interrupt services and cause productivity losses.
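
To make the elasticity point concrete, here is a minimal sketch of the kind of demand-matching rule a cloud autoscaler applies. The function name, thresholds, and figures are illustrative assumptions, not any specific provider's API.

```python
import math

def desired_servers(current: int, cpu_utilization: float,
                    target: float = 0.6, minimum: int = 1) -> int:
    """Resize the fleet so average CPU utilization moves toward the target."""
    return max(minimum, math.ceil(current * cpu_utilization / target))

print(desired_servers(10, 0.90))  # demand spike: scale out to 15 servers
print(desired_servers(10, 0.24))  # quiet period: scale in to 4 servers
```

A company running its own fixed hardware would have to provision for the spike permanently; in the cloud, the extra capacity exists only while the rule demands it.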

Smaller businesses are particularly likely to benefit from cloud computing. By outsourcing IT needs to cloud-based services, young companies may enjoy productivity boosts without sacrificing a hefty amount of capital, allowing them to compete with well-established businesses. In the long term, cloud technology is anticipated to be a key factor in enabling new entrepreneurship.

Since its inception, cloud computing has not only improved in performance but also become less expensive. The future will likely see an increasing number of software companies revising their business models to provide their services through the cloud. For example, the popular Microsoft Office suite is already being offered over the Internet as Office 365. According to one estimate, as the Internet continues to grow along its current trend, nearly all media consumption could eventually be delivered via cloud-based systems. Furthermore, modern technologies like the Internet of Things and automated knowledge work could flourish by harnessing the pooled computational resources delivered through the cloud. By 2025, the additional revenue from a cloud-enabled Internet could amount to trillions of dollars.[1]

While cloud computing has gained widespread popularity, it is still a relatively new technology that must overcome certain challenges. Technological barriers, for instance, are a major hindrance to the delivery of cloud-based services. The burgeoning use of cloud technology is beginning to strain mobile network capacity, and as more consumers try to stream HD media, the gap between demand and network capacity will widen. To counter this problem, technology providers are already trialing the next generation of broadband networks. An example is Google Fiber, which is about 100 times as fast as the best commercial broadband.

Cloud computing technology also poses privacy and reliability concerns. Many users have reservations about using cloud storage as the primary repository for memorable photos, videos, or personal records. Similarly, some enterprises are reluctant to entrust a third-party cloud with sensitive data pertaining to the company. It certainly doesn’t help that current policies are unclear on the legal ownership of data stored on cloud systems.

The upcoming decade will see the cloud-enabled Internet become a major player in the acceleration of economic growth worldwide. As cloud platforms become increasingly sophisticated, the use of cloud-based services will proliferate, making it ever more vital that providers of cloud computing services strengthen the reliability and security of their systems. At the same time, policy makers will be tasked with laying down clear regulations to protect the privacy of consumers and prevent misuse of information contained within cloud systems.

As we look to the far future and the impact on human evolution, it is not difficult to extrapolate the use of this platform: stored information in the cloud, accessible through our coming neural implants, could exponentially increase the knowledge instantly available to us as we go about our daily pursuits and interactions.

Reference:

  1. McKinsey Global Institute, Disruptive technologies: Advances that will transform life, business, and the global economy, May 2013

Disruptive Technologies: 3D Printing

3D printing is a technology that creates three-dimensional objects directly from digital designs. While the concept was first conceived decades ago, it is only in recent years that we have been able to appreciate some of the potential applications of 3D printing. Initially, the use of 3D printers was largely limited to the production of prototypes by architects and product designers. As patents began to expire, 3D printing underwent many improvements, and novel applications of the technology were conceived. 3D printers can now use a variety of materials, including metal, ceramics, plastics, glass, and even human cells. With the technology evolving so rapidly, 3D printing raises exciting prospects for a future in which we may be able to print anything from commonplace items such as buttons to life-saving vital organs.

3D printers create objects using several techniques collectively classified as additive manufacturing. Rather than molding or machining raw materials to achieve the desired product, additive manufacturing builds objects in successive layers. The type of object a 3D printer can produce depends on the additive manufacturing technique involved. For instance, fused deposition modelling is used mainly for the low-volume production of prototypes and manufacturing parts. Selective laser sintering is best for creating complex parts and is already being used by General Electric to build jet engine components. Direct metal laser sintering can process metallic raw material without losing any of its original properties and is currently used for producing medical implants, tools, and aerospace parts. Stereolithography uses lasers acting on liquid light-sensitive polymers to create intricate shapes, such as for jewellery. Laminated object manufacturing works on sheets of raw material to produce colored objects that require less detail. Finally, inkjet bioprinting sprays human cells into tissue scaffolds that may later be transplanted into a patient.
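
Whatever the specific technique, the common thread is building up material in successive layers. As a rough illustration of that idea, the sketch below (with arbitrary example figures, not parameters of any real printer) computes the deposition heights for a part:

```python
import math

def layer_heights(part_height_mm: float, layer_mm: float = 0.1) -> list:
    """Z-heights at which an additive process deposits successive layers."""
    n_layers = math.ceil(part_height_mm / layer_mm)
    return [round((i + 1) * layer_mm, 4) for i in range(n_layers)]

# A 20 mm tall part at 0.1 mm layer resolution is built up in 200 passes:
print(len(layer_heights(20.0)))  # 200
print(layer_heights(20.0)[:3])   # [0.1, 0.2, 0.3]
```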

3D printing technology has made considerable strides in terms of improved additive manufacturing machinery and the range and cost of usable raw materials. As a result, 3D printing now has several advantages over traditional manufacturing techniques. For one, additive techniques greatly reduce the amount of material wasted during the production of an object. They are also capable of reproducing highly intricate designs, such as a complex network of blood vessels. Finally, 3D-printed products bypass many of the standard manufacturing steps. With all these advantages, it is clear that the technology has tremendous potential to reshape the manufacturing industry.

The largest share of the economic impact of 3D printing will likely be attributable to consumer uses of the technology. A range of consumer products, from toys and shoes to jewellery and other accessories, could be manufactured using 3D printers. Products made by 3D printing are highly amenable to customization, adding extra value. With the current cost of a personal 3D printer already down to around $1,000, most people are anticipated to have access to 3D printing within the next decade. Consumers may either own 3D printers or pay for 3D printing services at local shops. As the cost of 3D printing materials continues to decrease, consumers are expected to realize as much as 60 percent cost savings from using 3D-printed products.[1]

Today, start-up companies like Shapeways and Sculpteo are already providing 3D-printing services commercially. Users upload design templates, and have the option to get their 3D-printed product in a variety of materials. The rapid expansion of Shapeways to thousands of online shops demonstrates a growing market for 3D-printed goods.

3D printing also has important implications for direct product manufacturing. Unlike the subtractive techniques currently used to create items like engine components or medical implants, direct manufacture by 3D printing will allow parts to be created much faster and with less material. Most of the economic advantage will come from manufacturing high-volume products such as simple tools, while a smaller impact will come from printing complex products that require extensive customization. In the next decade, the use of 3D printing in direct manufacturing could generate an economic value of approximately $500 billion annually.[1]

While 3D printing technology has undoubtedly made rapid advances in the past decade, it still needs further progress before widespread adoption becomes feasible. For instance, 3D-printed objects are relatively slow to build and are limited in size, strength, and detail. The technology is also quite expensive. If 3D printing is to become cost-competitive with traditional manufacturing techniques, the price of both the printers and the materials will have to decline further. Additionally, in order to facilitate consumer use of 3D printing in the future, further advances will have to be made in supporting products such as 3D scanners and design software.

As 3D printing proliferates, original 3D product designs may suffer from piracy. Governments will be tasked with creating laws to protect intellectual property rights, while clarifying how the protection will be enforced. Policy makers will also be responsible for the approval of newly developed materials for use in 3D printing. Finally, it is possible that certain restrictions will have to be imposed on the general availability and use of 3D printing. With the recent firing of a 3D-printed gun, there is reasonable concern that 3D printers could be misused with potentially catastrophic consequences.

Reference:

  1. McKinsey Global Institute, Disruptive technologies: Advances that will transform life, business, and the global economy, May 2013

Disruptive Technologies: Driverless Cars

Most of us are familiar with the concept of autopilot, the self-maneuvering technology that provides relief to pilots by maintaining an aircraft on a preset course. A similar system in road vehicles enables cruise control. Most modern commercial merchant vessels and tankers are also heavily automated and can transport goods around the world with the help of smaller, less specialized crews. These are all examples of ways in which transport has already benefitted from some degree of automation, and it is all but inevitable that we will find ourselves entering an era of autonomous road vehicles.

Autonomous vehicles include self-driving cars and trucks that operate with little or no intervention from humans. While fully autonomous vehicles are still in the experimental stage, partly autonomous driving features are already being introduced in production vehicles. These features include automated braking and self-parking systems.

[Image: Google-Lexus driverless car]

Numerous advances in technology have made autonomous vehicles a reality, and the development of machine vision has been particularly crucial. It involves the use of multiple sensors to capture images of the environment, which are then processed to extract relevant details, such as road signs and obstacles. 3D cameras enable accurate spatial measurements, while pattern recognition software allows identification of characters like numbers and symbols. Additionally, light detection and ranging (LIDAR) and advanced GPS technologies allow a vehicle to identify its location and navigate smoothly along road networks.

A self-driving vehicle is also equipped with artificial intelligence software that integrates data from the sensors and machine vision to determine the next best move. The software's decisions are further constrained by traffic rules, such as red lights and speed limits. Actuators then receive driving instructions from the control engineering software, and the vehicle accelerates, brakes, or changes direction as needed.
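
Below is a highly simplified sketch of that sense-decide-actuate loop. Every class, field, and threshold here is a made-up placeholder for illustration, not code from any real autonomous vehicle stack.

```python
from dataclasses import dataclass

@dataclass
class Perception:
    obstacle_distance_m: float  # fused from cameras, LIDAR, and radar
    speed_limit_kmh: float      # from sign recognition and map data
    light_is_red: bool

def decide(p: Perception, current_speed_kmh: float) -> str:
    """Pick the next control action from fused sensor data and traffic rules."""
    if p.light_is_red or p.obstacle_distance_m < 10.0:
        return "brake"            # safety rules override everything else
    if current_speed_kmh < p.speed_limit_kmh:
        return "accelerate"       # free road, below the limit
    return "hold"                 # cruise at the legal limit

# One tick of the loop: perception in, actuator command out.
print(decide(Perception(35.0, 50.0, False), 42.0))  # "accelerate"
```

A real system runs a loop like this many times per second, with far richer perception and continuous (not discrete) control outputs.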

The introduction of the automobile revolutionized the world over the last century, and autonomous vehicles are expected to have just as tremendous an impact. One of the major advantages of autonomous vehicles will be their ability to prevent collisions. Eliminating the driver is expected to greatly reduce the injuries, deaths, and damage that result from accidents caused by human error. According to one estimate, self-driving technology could produce a 20 percent overall reduction in road accidents; by 2025, this could potentially save 140,000 lives annually.[1]

Vehicular automation also has promising implications for fuel economy. The technology behind autonomous vehicles ensures supremely precise maneuvering, allowing the cars in a lane to drive safely within a narrow distance of one another. Vehicles traveling in such close formation experience lower aerodynamic drag, with a consequent reduction in fuel consumption.[2] Moreover, the acceleration and braking of autonomous vehicles are efficiently managed by computer, further reducing fuel consumption. Automobiles are a notorious source of pollution; with the improved fuel economy of self-driving cars, it is estimated that CO2 emissions could be reduced by up to 100 million tons annually.[1]

The trucking industry could also benefit from automated vehicles. Self-driving truck convoys would theoretically be able to make long-haul trips without having to stop for the needs of a human driver. An autonomous trucking system has already been successfully tested by a Japanese research organization; it consists of several radar-equipped trucks led from the front by a single driver. Similarly, the mining giant Rio Tinto has used partially autonomous trucks that stay on a predefined route and can unload cargo without personnel.

Several major automakers have developed fully autonomous cars, though these remain in the testing stage. Audi has produced a self-parking car that can also start and stop itself in heavy traffic.[3] Cadillac has built cars with advanced cruise-control systems that provide steering assistance. Mercedes-Benz is introducing the 2014 S-class, which comes with multiple advanced autonomous driving features: the car can maintain speed and distance from other vehicles and also has a lane-keeping system.[4]

If the full potential of autonomous vehicles is to be realized, governments will have to be decisive about supporting the technology. Automakers are already testing fully autonomous cars that are likely to become commercially available within a few years. Their ultimate appearance on the roads, however, will depend on government regulations. New laws will have to be established to determine legal responsibilities, and roads may require investment to optimize them for self-driving vehicles.

Finally, as with any computerized machine, autonomous vehicles will be potential targets for hackers. Criminals gaining access to the automated navigation systems could inflict devastating harm, making it crucial for strong cybersecurity systems to be established before self-driving transportation is allowed on the road.

References:

  1. McKinsey Global Institute, Disruptive technologies: Advances that will transform life, business, and the global economy, May 2013
  2. Kevin Bullis, How vehicle automation will cut fuel consumption, MIT Technology Review, October 24, 2011.
  3. Angus MacKenzie, The future is here: Piloted driving in Audi's autonomous A6, MotorTrend, January 2013.
  4. Andrew English, New car tech: 2014 Mercedes-Benz S-class, Road & Track, November 2012.

Disruptive Technologies: Nanomaterials

In the last few decades, scientists around the world have been experimenting with potentially revolutionary new materials. An impressive number of advanced materials have been developed, and many of them have exceptional properties. Smart materials, for instance, are a category of advanced materials with fascinating features. They include self-healing materials such as self-healing concrete, memory metals that revert to their original forms when heated, and piezoelectric crystals and ceramics that produce electricity under pressure. While these smart materials have considerable potential for future applications, their effects are anticipated to be overshadowed by the impact of nanomaterials.

Nanomaterials are substances that have at least one structural dimension smaller than 100 nm. This is roughly equivalent to the molecular scale, one nanometer being a billionth of a meter. The value of nanomaterials lies in the fact that at the nanoscale, many ordinary materials develop spectacular new properties, such as superconductivity or magnetism. Furthermore, nanoparticles are governed by the rules of quantum mechanics and have a very large surface area per unit of volume, which makes them highly reactive.
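
The surface-area point follows from simple geometry: for a sphere, the surface-to-volume ratio is 3/r, so shrinking a particle from one micron to ten nanometers in diameter multiplies its surface area per unit volume a hundredfold. A quick check:

```python
def surface_to_volume(radius_nm: float) -> float:
    """Sphere surface-to-volume ratio: (4*pi*r^2) / ((4/3)*pi*r^3) = 3/r."""
    return 3.0 / radius_nm

print(surface_to_volume(500.0))  # 1 micron diameter particle: 0.006 per nm
print(surface_to_volume(5.0))    # 10 nm diameter particle: 0.6 per nm, 100x more
```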

Simple nanoparticles are found in many products today. For example, the antimicrobial properties of nanosilver make it excellent for use in detergents and odor-resistant socks. Zinc oxide has unique wavelength-blocking characteristics and is an important constituent of many sunscreens. Clay nanoparticles are used to make composites that are stronger and more elastic, making them useful for car bumpers.


Advanced nanoparticles, being much more expensive, currently have a limited number of applications. However, health care is one area that could soon experience a strong impact from the use of advanced nanomaterials. The phenomenal reactivity of nanoparticles makes them excellent candidates for use in diagnostic tools as well as in targeted drug delivery. In cancer patients, the efficacy of chemotherapy could be greatly enhanced by using nanoparticles to concentrate drugs inside cancer cells. Not only would this make the drugs more effective, but it could also reduce many of the side effects, allowing higher doses to be administered without harming the patient. For example, tumor necrosis factor, or TNF, is a potent antitumor agent that is also highly toxic. AstraZeneca is developing a therapy that will use gold nanoparticles to deliver TNF directly into cancer beds. Another pharmaceutical company, Celgene, has already developed a nanoparticle-bound formulation of paclitaxel, the first chemotherapy drug of its kind.[1] With the potential to extend the lives of millions of people, nano-based anticancer drugs could create an economic impact of as much as $500 billion per year by 2025.[2]

Graphene and carbon nanotubes are two other advanced nanomaterials. Both are built from single-atom-thick layers of carbon: graphene as flat sheets, and nanotubes as those sheets rolled into cylinders. While both graphene and nanotubes are prohibitively expensive to manufacture, they possess an extraordinary combination of properties. For instance, not only is graphene ten times as conductive as copper, it is also only a sixth the density of steel while being a hundred times as strong. Moreover, graphene is extremely elastic and can revert to its original shape even after being subjected to extremes of pressure. With these remarkable specifications, graphene could one day replace silicon in microchips to create processors 1,000 times faster than the best ones currently available.

Today, graphene is being used to develop supercapacitors that could lead to batteries with extremely high performance. Such batteries would keep devices like smartphones powered for weeks on end[3] and could be recharged in a few seconds. Furthermore, graphene-based lithium-ion batteries would be more efficient, potentially accelerating the adoption of electric vehicles. The scope of graphene's potential applications seems endless. With its absorptive properties, graphene may also prove to be an effective means of purifying water: Lockheed Martin is currently developing graphene-based water filters that could convert seawater into potable water at a fraction of the cost of existing methods.[4]

Carbon nanotubes also have great potential. With their large surface areas and high reactivity, they make for excellent sensors. In health care, nanotubes could be used to augment the sensitivity of existing techniques for detecting biomarkers, such as those for cancer. They could also be used to detect minute levels of noxious substances in the environment. Additionally, nanotubes could be integrated with graphene to create thin, flexible, and transparent display screens.

Quantum dots are another type of nanomaterial. They are semiconductors with unique optical properties, and can emit light in different colors. The potential applications include electronic displays, and medical diagnostic tools, whereby contrast dyes could be replaced by quantum dots that light up under imaging.

Nanomaterials are still in their infancy, and most of their impact on the world is not likely to be appreciable in the near term. One of the greatest obstacles to achieving the full potential of advanced nanomaterials is the cost of production. Graphene films can cost as much as $819 a unit, while nanotubes go for up to $700 per gram. Large-scale production of applications using these materials is greatly constrained by this expense. In addition to the cost, there are also significant technological barriers to producing high-quality nanomaterials. For example, it is still quite difficult to create long nanotubes or large graphene sheets of good quality.

Finally, there is significant concern that loose nanoparticles may damage the environment or even prove hazardous to health. We already have evidence that inhaled nanotubes could be just as dangerous to the lungs as asbestos.[5] As nanomaterial applications become more widespread, governments will play a key role in regulating the technology to ensure the safety of their citizens.

References:

  1. Matthew Herper, Celgene's Abraxane extends life by 1.8 months in advanced pancreatic cancer, Forbes, January 22, 2013.
  2. McKinsey Global Institute, Disruptive technologies: Advances that will transform life, business, and the global economy, May 2013
  3. Maher El-Kady and Richard Kaner, Scalable fabrication of high-power graphene microsupercapacitors for flexible and on-chip energy storage, Nature Communications, volume 4, February 2013.
  5. Paul Borm and Wolfgang Kreyling, Toxicological hazards of inhaled nanoparticles—Potential implications for drug delivery, Journal of Nanoscience and Nanotechnology, volume 4, number 5, October 2004.