Big Data: History, Development, Application, and Dangers

History & Development

The background of big data traces back to the beginning of measurement itself. Measurement and basic counting were practiced in the Indus Valley, Mesopotamia, and Egypt as early as the third millennium B.C., the dawn of the earliest civilizations. Over time, the accuracy and use of measurement continued to increase, making possible the later development of volume and area measurements.

A numeral system developed in India in the first century was improved by the Persians before the Arabs refined it into the Arabic numerals we use today. Latin translations spread the system throughout Europe by the 12th century and fueled an explosion of mathematics.

Mathematics would eventually ally with data. One of the earliest recorded pairings occurred in 1494, when Luca Pacioli, a Franciscan monk, published a book on the commercial application of mathematics and explained a new accounting format called double-entry bookkeeping, which enabled merchants to compare profits and losses. This revolutionized business, particularly banking.

The Scientific Revolution of the 1600s and 1700s brought a greater interest in measurement and mathematics as powerful tools for understanding the world.

Sir Francis Galton's 1888 discovery that a man's height correlates with the length of his forearm was an important precursor of big data. The authors explain that when two data values are statistically related, the relationship can be quantified as a correlation, so that a change in one value predicts a change in the other.

Today, statistical correlation is one of the primary uses of big data: it supports predictive analysis that helps us understand, prepare for, and in some instances influence the future. Supercomputers run algorithms that identify correlations in uploaded data and surface valuable insights.
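The kind of correlation the authors describe can be made concrete in a few lines of code. This is a minimal sketch, not the authors' method; the forearm and height values below are invented for illustration.

```python
# Quantifying a correlation in the spirit of Galton's height/forearm
# observation. Data values are invented for illustration.
from statistics import mean, stdev

forearm_cm = [24.0, 25.5, 26.0, 27.2, 28.1, 29.0]
height_cm = [160.0, 166.0, 168.5, 172.0, 177.0, 181.0]

def pearson_r(xs, ys):
    """Pearson correlation coefficient: covariance scaled to [-1, 1]."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

r = pearson_r(forearm_cm, height_cm)
print(round(r, 3))  # near 1.0: longer forearms predict greater height
```

A value near +1 or -1 means one variable predicts the other well; a value near 0 means no linear relationship.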

By the close of the 19th century, all the components of big data were in place, and data management and analysis were in vogue, although limitations remained on how much information could be analyzed and stored. The computer age would solve that problem.


Computers expanded the variety and range of things we could record and capture, and the introduction of the internet multiplied that capacity exponentially (see Disruptive Technologies: The Internet of Things). Our online behavior is now used to determine our tastes and preferences, with sites such as Google, Twitter, Facebook, and LinkedIn (among many others) recording, analyzing, and storing even private information relating to our health, relationships, and financial records. Sensors, even in our phones, track our every move, and computers allow us to quantify everything under the sun, from location to heart rate to engine vibration.

Computers can measure, record, analyze, and store data on a near-limitless scale, with processing speeds and storage capacity improving daily. The authors note that it originally took a decade to sequence the three billion base pairs of the human genome; by 2012, the same amount of DNA could be sequenced in a single day.

Computers have enabled us to move on from the previous method of analyzing small samples of data, drawing conclusions with varying margins of error, and building entire theories on those limited samples. We can now analyze the entire data set and gain a far more precise picture of a given subject. This is expressed as N=all.
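The difference between sampling and N=all can be illustrated with a toy computation; the population below is synthetic, invented purely for illustration.

```python
# Sketch of the N=all idea: a small sample estimates a statistic with
# some margin of error, while the full data set yields the exact value.
# The population is synthetic, for illustration only.
import random

random.seed(0)
population = [random.gauss(50, 10) for _ in range(100_000)]

exact_mean = sum(population) / len(population)   # N = all: exact
sample = random.sample(population, 100)          # small sample
sample_mean = sum(sample) / len(sample)          # estimate with error

print(round(exact_mean, 2), round(sample_mean, 2))
# The sample mean wanders around the exact mean; only N=all pins it down.
```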

Even in the recent past, we had to look for correlations in data values manually, opt for proxies, and run correlations against those to validate them. Today we can load up even disorganized data and have "intelligent" algorithms find correlations we may not have suspected. And while that potential is great, gleaning insights from correlations has its downside: correlations can be coincidental rather than reflective of causal relationships. Careful interpretation of computer output is still necessary.

Practical Application

In 2009, Google applied big data to the search terms entered by its users in order to track the spread of the H1N1 pandemic in real time. Researchers compared data from the Centers for Disease Control (CDC) on flu outbreaks between 2003 and 2008 against the most popular search terms entered into Google during that period. Google's system correlated the frequency with which certain queries were entered with how the flu spread over time and space. The software found a combination of some 45 search terms whose frequency showed a high correlation with official flu figures nationwide. This predictive model actually proved more reliable than even state figures developed at the height of the pandemic in 2009.

Google has also used big data to collect billions of translated pages from across the internet and build a database that allows any uploaded text to be translated; the service is known as Google Translate. Google uses much more data than other translation services, making its system superior to the competition.

Other Google projects, such as Google Books, rely on this same big data model.

As one would expect, some if not most of the information that Google collects is also diverted toward its profit-making ventures. For example, while Google's Street View cars are advertised as a service for Google Maps, they have been known to collect data from open wifi connections, and the data gathered has been used to develop Google's driverless car technology. The company is certainly in the driver's seat for some serious profits on this disruptive technology.

Targeted Advertising

Amazon, a pioneer of targeted advertising, became a big data user when Greg Linden, one of its software engineers, realized the potential in the company's in-house book-review project. When Amazon first came online in 1995, about a dozen critics were hired to review books in what was called 'the Amazon voice.' Linden designed software that could identify associations between books and recommend titles to people based on their previous choices. When Amazon compared sales driven by the computer's recommendations against those driven by the in-house reviews, the data-derived recommendations performed far better, a result that revolutionized e-commerce.
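Linden's idea of finding associations between books can be sketched as a simple co-purchase count. This is a toy illustration, not Amazon's actual algorithm; the titles and purchase baskets are invented.

```python
# Recommend books frequently bought together with a given book.
# Purchase baskets are invented for illustration.
from collections import Counter
from itertools import combinations

purchases = [
    {"Dune", "Foundation", "Hyperion"},
    {"Dune", "Foundation"},
    {"Dune", "Hyperion"},
    {"Foundation", "Neuromancer"},
]

# Count how often each pair of books appears in the same basket.
pair_counts = Counter()
for basket in purchases:
    for a, b in combinations(sorted(basket), 2):
        pair_counts[(a, b)] += 1

def recommend(book, k=2):
    """Books most often co-purchased with `book`, best first."""
    scores = Counter()
    for (a, b), n in pair_counts.items():
        if a == book:
            scores[b] += n
        elif b == book:
            scores[a] += n
    return [title for title, _ in scores.most_common(k)]

print(recommend("Dune"))  # Foundation and Hyperion co-occur most often
```

Amazon's production approach, item-to-item collaborative filtering, scales the same intuition to millions of items.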

Facebook also tracks user locations, 'status updates,' and 'likes' to determine which ads to display to a user. These targeted ads can seem invasive, and to many a bit creepy. An analytical team reviews people's behaviors and locations and determines which ads to display to them. Determining correlations between users and their needs has since become a model form of advertising.

Manufacturers have started using big data to streamline their operations while improving safety. Sensors placed on machinery monitor the data patterns the machines produce in vibration, stress, heat, and sound, and detect any changes that might signal future problems. These early-detection systems help avert breakdowns and ensure timely maintenance.
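The early-detection idea can be sketched as a check for readings that drift far from a machine's recent baseline. This is a minimal illustration with invented vibration readings; real monitoring systems use far richer models.

```python
# Flag sensor readings that deviate sharply from the trailing baseline.
# Readings and thresholds are invented for illustration.
from statistics import mean, stdev

def anomalies(readings, window=5, n_sigmas=3.0):
    """Indices of readings more than n_sigmas from the trailing window."""
    flagged = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma and abs(readings[i] - mu) > n_sigmas * sigma:
            flagged.append(i)
    return flagged

vibration_mm_s = [2.1, 2.0, 2.2, 2.1, 2.0, 2.1, 2.2, 9.8, 2.1, 2.0]
print(anomalies(vibration_mm_s))  # → [7], the sudden vibration spike
```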

Big data can also boost efficiency outside the factory: data algorithms can determine more efficient and safer routes for trucks, as is routinely done by UPS. This has reduced accidents as well as fuel consumption and other cost factors. UPS also equips its vehicles with sensors that identify potential breakdowns using the early-detection method described above.

Car makers use the same model to gather data on how vehicles are actually used on the roads, and apply that information to improve their vehicles based on drivers' behavior.

Predictive models and sensors have also been employed by governments to predict possible dangers in infrastructure and to schedule maintenance that averts disaster. In 2009, Mayor Michael Bloomberg appointed Michael Flowers to head New York City's first analytics department. Flowers used big data to fight crime, developing a model that analyzed incoming reports and predicted which calls were likely false and which were legitimate.

Startups, Health Care and Governments

Big data has also helped launch predictive business models such as Farecast, a business that predicted air ticket prices, saving its customers a lot of money while raking in profits. Its founder, Oren Etzioni, simply collected data on air ticket prices from travel websites and analyzed how prices changed as the flight date approached, producing a model that could predict prices and save travelers an average of $50 per ticket. He later sold Farecast to Microsoft for $110 million before launching a venture that applied the same successful model to consumer goods, this time saving consumers some $100 for every product purchased.

The ability to measure nearly everything in the human body inspired IBM and a team of researchers from the University of Ontario Institute of Technology to develop software that analyzes physiological data from premature babies. This can help determine the likelihood of infections and how well the infants are responding to treatment.

Big data has also helped reduce hospital readmission rates. For example, MedStar Washington Hospital used Microsoft's Amalga software and "analyzed several years of its anonymized medical records—patient demographics, tests, diagnoses, treatments, and more—" which helped identify the factors that most often led to readmission. A common factor at this hospital turned out to be mental distress, and treating it before discharge helped reduce readmission rates.

Companies such as 23andMe have already been sequencing individual genomes to help detect specific genetic susceptibilities. Sequencing DNA at this level is still costly, but people like Apple's Steve Jobs have undergone it; in his battle against cancer, the procedure bought Jobs a few extra years, thanks to big data. This technology will become even more useful once it is affordable for everyone else.

Governments have been slow to recognize the enormous usefulness of big data (for purposes other than surveillance, that is), even though they have access to a great deal of information about their citizens. Some governments are catching on, though, especially those interested in curbing costs and ensuring safety and efficiency. The American government, naturally, has led the way, using big data to estimate the consumer price index (CPI) and to measure the inflation rate.

Open Data

Since some governments are taking their sweet time applying big data methods to state business, many trust that making the information they hold freely available to organizations and individuals would be more useful. The authors argue that governments act only as custodians of the information they collect and should publicly release that data for commercial and civic purposes, since the private sector would be more innovative with it.

To that end, the US government opened a free data website where information from the federal government can be accessed at no charge. From 47 datasets in 2009, the site grew to 450,000 datasets from 172 agencies three years later. The UK has also made strides in this regard, as have Australia, Chile, Brazil, and Kenya.

Big Data Ends Privacy: Enter Profiling

Some governments may be releasing more information, but all of them are stashing away far more than they give away, and most of it is personal, private information. For example, "the U.S. National Security Agency (NSA) is said to intercept and store 1.7 billion emails, phone calls, and other communications every day, according to a Washington Post investigation in 2010…" As we have seen, the private sector does this too; the internet tracks our locations, what we like, and everything in between. We are losing our privacy.

Now parole boards and police are using big data to profile people, predicting where crime is likely to be higher and even whether or not to release a prisoner on parole. In many US states, people are 'questioned' based on their location and on whether an algorithm places them in a statistical category deemed likely to commit a crime.

The Solutions and Conclusion

The authors, as one would expect, suggest a few solutions. Regulation of big data use should be entrusted to internal and independent auditors (they even suggest a name: 'algorithmists') who would impartially and confidentially scrutinize big data practices for ethical or legal infractions as well as technical errors. They also suggest amending the law to accommodate big data.

While the big data era is just beginning, the authors remind us that not every societal problem is one big data can address. Businesses and governments will continue to make use of it, but it has been both used and misused, and our privacy is all but gone: governments spy on citizens and on other states, and social media track people's likes and dislikes, even as big data has made communication easy and its predictive models have helped us avert looming dangers. The public opinion jury is still out, with many people unsure whether to love big data or hate it, but it is a reality. Perhaps, the authors suggest, if regulation can help us keep some privacy, big data will be more welcome; otherwise paranoia will remain high, and not without reason. People are being categorized according to where big data places them statistically rather than by any specific personal action, and some have been arrested or denied freedom unnecessarily.

Disruptive Technologies: Cloud Computing

Cloud computing is an emerging technology that has evolved from the concept of grid computing. Using the internet or Web as a platform, it allows consumers to tap into a central network of hardware and software. This essentially offloads the burden of computational power from their end onto a virtual ‘cloud’ setup that provides everything from storage and processing to applications and services. For example, many internet services, including streaming media, social networking, and online searching are now delivered through the cloud.

The cloud computing paradigm has made a tidal impact on modern technologies, such as the Internet of Things, mobile Internet, and knowledge work automation. Smartphone popularity is largely driven by the ability to access a multitude of applications, many of which are reliant on cloud resources. It is estimated that by 2025 cloud technology will have an economic impact of up to $6.2 trillion per year, and a bulk of this will be attributable to Internet applications and services dependent on cloud computing technology.1

Cloud computing also has major implications for enterprises. Rather than investing in expensive IT infrastructure, a company can simply hire the required resources and applications to be delivered through the cloud. This has several advantages. For one, valuable capital need not be tied up in bulky hardware or inflexible software that may become rapidly obsolete. Secondly, cloud systems are flexible: companies can easily match use with demand by terminating redundant services or rapidly adding them as needed. Thirdly, the cost of setting up a cloud system is only about a third of what it takes to install and manage the necessary hardware and software locally. Finally, cloud setups are generally more reliable. Since equipment in the cloud network is interconnected in a way that allows dynamic workload distribution, localized failures in the cloud setup are less likely to interrupt services and cause productivity losses.

Smaller businesses are particularly likely to benefit from cloud computing. By outsourcing IT needs to cloud-based services, young companies may enjoy productivity boosts without sacrificing a hefty amount of capital, allowing them to compete with well-established businesses. In the long term, cloud technology is anticipated to be a key factor in enabling new entrepreneurship.

Since its inception, cloud computing has not only improved in performance but also become less expensive. The future will likely see an increasing number of software companies revising their business models to provide their services through the cloud. For example, the popular Microsoft Office suite is already offered over the Internet as Office 365. According to one estimate, if the Internet continues to grow at its current pace, nearly all media consumption could eventually be delivered via cloud-based systems. Furthermore, modern technologies like the Internet of Things and automated knowledge work could flourish by harnessing pooled computational resources delivered through the cloud. By 2025, the additional revenue from a cloud-computing-enabled Internet could be worth trillions of dollars.1

While cloud computing has gained widespread popularity, it is still a relatively new technology that must overcome certain challenges. Technological barriers, for instance, are a major hindrance to the delivery of cloud-based services. The burgeoning use of cloud technology is beginning to strain mobile network capacity, and as more consumers try to stream HD media, the gap between demand and network capacity will widen. To counter this problem, technology providers are already trialing the next generation of broadband networks. An example is Google Fiber, which is about 100 times as fast as the best commercial broadband.

Cloud computing technology also poses privacy and reliability concerns. Many users have reservations about using cloud storage as the primary repository for memorable photos, videos, or personal records. Similarly, some enterprises are reluctant to entrust a third-party cloud with sensitive data pertaining to the company. It certainly doesn’t help that current policies are unclear on the legal ownership of data stored on cloud systems.

The upcoming decade will see the cloud-enabled Internet become a major player in the acceleration of economic growth worldwide. As cloud platforms become increasingly sophisticated the use of cloud-based services will proliferate, making it increasingly vital that providers of cloud computing services strengthen the reliability and security of their systems. At the same time, policy makers will be tasked with laying down clear regulations to protect the privacy of consumers and prevent misuse of information contained within cloud systems.

As we look into the far future and the impact on human evolution, it is not difficult to extrapolate the use of this platform: stored information, accessible through coming neural implants, could exponentially increase the knowledge instantly available to us as we go about our daily pursuits and interactions.


  1. McKinsey Global Institute, Disruptive technologies: Advances that will transform life, business, and the global economy, May 2013

Disruptive Technologies: 3D Printing

3D printing is a technology that allows the creation of three-dimensional objects by a printer. While the concept was first conceived decades ago, it is only in recent years that we have been able to appreciate some of the potential applications of 3D printing. Initially, the use of 3D printers was largely limited to the production of prototypes by architects and product designers. As patents began to expire, 3D printing underwent many improvements and novel applications of the technology were conceived. 3D printers can now use a variety of materials, including metal, ceramics, plastics, glass, and even human cells. With the technology evolving so rapidly, 3D printing raises exciting prospects for the future where we may be able to print anything from commonplace items such as buttons, to life-saving vital organs.

3D printers create objects using several techniques classified as additive manufacturing. Rather than molding or machining raw materials to achieve the desired product, additive manufacturing builds objects in successive layers. The type of object a 3D printer can produce depends on the additive technique involved. For instance, fused deposition modelling is used mainly for the low-volume production of prototypes and manufacturing parts. Selective laser sintering is best for creating complex parts, and is already being used by General Electric to build jet engine components. Direct metal laser sintering can process metallic raw material without losing any of its original properties, and is currently used for producing medical implants, tools, and aerospace parts. Stereolithography involves lasers acting on liquid light-sensitive polymers to create intricate shapes such as jewellery. Laminated object manufacturing works on sheets of raw material to produce colored objects that require less detail. Finally, inkjet bioprinting sprays human cells into tissue scaffolds that may later be transplanted into a patient.

3D printing technology has made considerable strides in terms of improved additive manufacturing machinery, and the range and cost of usable raw materials. As a result, 3D printing now has several advantages over traditional manufacturing techniques. For one, additive techniques greatly reduce the amount of material wasted during the production of an object. They are also capable of reproducing highly intricate designs, such as the complex network of blood vessels. Finally, 3D printed products bypass many of the standard manufacturing steps. With all these advantages, it is clear that the technology has tremendous potential to reshape the manufacturing industry.

The largest share of the economic impact of 3D printing will likely be attributable to consumer uses of the technology. A range of consumer products, from toys and shoes to jewellery and other accessories, could be manufactured using 3D printers. The products available for 3D printing would be highly amenable to customization, adding extra value. With the current cost of purchasing a personal 3D printer already down to around $1,000, most people in the next decade are anticipated to have access to 3D printing. Consumers may either own 3D printers or pay for 3D printing services at local shops. As the cost of 3D printing materials continues to decrease, consumers are expected to realize as much as 60 percent cost savings from using 3D-printed products.1

Today, start-up companies like Shapeways and Sculpteo are already providing 3D-printing services commercially. Users upload design templates, and have the option to get their 3D-printed product in a variety of materials. The rapid expansion of Shapeways to thousands of online shops demonstrates a growing market for 3D-printed goods.

3D printing also has important implications for direct product manufacturing. Unlike the subtractive techniques currently used to create items like engine components or medical implants, direct product manufacture will allow parts to be created much faster and with a smaller amount of material. Most of the economic advantage will come from manufacturing high-volume products such as simple tools, while a smaller impact will be made by printing complex products that require a lot of customization. In the next decade, the use of 3D printing in direct manufacturing could generate an economic value of approximately $500 billion annually.1

While 3D printing technology has undoubtedly made rapid advances in the past decade, it still needs further progress before widespread adoption becomes feasible. For instance, 3D printed objects are relatively slow to build, and are limited in size, strength, and detail. The technology is also quite expensive. If 3D printing is to become cost-competitive with traditional manufacturing techniques, the price of both the printers and the materials will have to decline further. Additionally, in order to facilitate consumer use of 3D printing in the future, further advances will have to be made in the development of supportive products such as 3D scanners, and design software.

As 3D printing proliferates, original 3D product designs may suffer from piracy. Governments will be tasked with creating laws to protect intellectual property rights, while clarifying how the protection will be enforced. Policy makers will also be responsible for the approval of newly developed materials for use in 3D printing. Finally, it is possible that certain restrictions will have to be imposed on the general availability and use of 3D printing. With the recent firing of a 3D-printed gun, there is reasonable concern that 3D printers could be misused with potentially catastrophic consequences.


  1. McKinsey Global Institute, Disruptive technologies: Advances that will transform life, business, and the global economy, May 2013

Disruptive Technologies: Driverless Cars

Most of us are familiar with the concept of autopilot, the self-maneuvering technology that relieves pilots by maintaining an aircraft on a preset course. A similar system in road vehicles enables cruise control. Most modern commercial merchant vessels and tankers are also heavily automated and can transport goods around the world with smaller, less specialized crews. Transport has already benefitted from some degree of automation in all these ways, and it is all but inevitable that we find ourselves entering an era of autonomous road vehicles.

Autonomous vehicles include self-driving cars and trucks that operate with little or no intervention from humans. While fully autonomous vehicles are still in the experimental stage, partly autonomous driving features are already being introduced in production vehicles. These features include automated braking and self-parking systems.

Google-Lexus Driverless Car

Numerous advances in technology have made autonomous vehicles a reality, and the development of machine vision has been particularly crucial. It involves the use of multiple sensors to capture images of the environment, which are then processed to extract relevant details, such as road signs and obstacles. 3D cameras enable accurate spatial measurements, while pattern recognition software allows identification of characters like numbers and symbols. Additionally, laser-imaging detection and ranging, or LIDAR, and advanced GPS technologies are being used to allow a vehicle to identify its location, and navigate smoothly along road networks.

A self-driving vehicle is also equipped with artificial intelligence software that integrates data from sensors and machine vision to analyze the next best move. The decisions of the software are further modified based on traffic rules, such as red lights or speed limits. Actuators then receive driving instructions from the control engineering software, and the vehicle accelerates, brakes, or changes direction as needed.
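The sense-decide-act loop described above can be caricatured in a few lines. Everything here is invented for illustration; production systems fuse thousands of signals probabilistically rather than with simple rules.

```python
# Toy decision step: fuse a few "sensor" inputs with traffic rules and
# emit a driving instruction for the actuators. All values are invented.
def decide(obstacle_m: float, speed_limit: float,
           current_speed: float, light: str) -> str:
    """Return 'brake', 'accelerate', or 'hold' from fused inputs."""
    if light == "red" or obstacle_m < 10:  # traffic rules and obstacles override
        return "brake"
    if current_speed < speed_limit:        # free road: close the gap
        return "accelerate"
    return "hold"                          # at the limit: maintain speed

print(decide(obstacle_m=50, speed_limit=60, current_speed=40, light="green"))
# → accelerate
```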

The introduction of the automobile revolutionized the world in the last century, and autonomous vehicles are expected to have just as tremendous an impact. One of their major advantages will be the ability to prevent collisions. Removing the driver is expected to greatly reduce the injuries, deaths, and damage that result from accidents caused by human error. According to one estimate, self-driving technology could reduce road accidents by 20 percent overall. By 2025, this could save 140,000 lives annually.1

Vehicular automation also has promising implications for fuel economy. The technology behind autonomous vehicles ensures supremely precise maneuvering that allows cars in a lane to safely drive within a narrow distance of each other. Such streamlined vehicles experience lower aerodynamic drag, with a consequent reduction in fuel consumption.2 Moreover, autonomous vehicles have acceleration and braking systems that are efficiently operated by the computer, further reducing fuel consumption. Automobiles are a notorious source of pollution; with the improved fuel economy of self-driven cars, it is estimated that CO2 emissions could be reduced by up to 100 million tons annually.1

The trucking industry could also benefit from automated vehicles. Self-driving truck convoys would theoretically be able to make long haul trips without having to stop for the needs of a human driver. An autonomous trucking system has already been successfully tested by a Japanese research organization. The system consists of several radar-technology equipped trucks that are led by a single driver from the front. Similarly, the mining giant Rio Tinto has used partially autonomous trucks that stay on a predefined route and are able to unload cargo without personnel.

Several major automakers have developed fully autonomous cars, though these remain in the testing stage. Audi has produced a self-parking car that can also start and stop itself in heavy traffic.3 Cadillac has built cars with advanced cruise-control systems that provide steering assistance. Mercedes-Benz is introducing the 2014 S-class with multiple advanced autonomous driving features: the car can maintain speed and distance from other vehicles and has a lane-keeping system.4

If the full potential of autonomous vehicles is to be realized, governments will have to be decisive about supporting the technology. Automakers are already testing fully autonomous cars that are likely to become commercially available within a few years. Their ultimate appearance on the roads, however, will depend on government regulations. New laws will have to be established to determine legal responsibilities, and roads may require investment to optimize them for self-driving vehicles.

Finally, as with any computerized machine, autonomous vehicles would be potential targets for hackers. Criminals gaining access to the automated navigation systems could inflict devastating harm, making it crucial for strong cyber security systems to be established before allowing self-driven transportation on the road.


  1. McKinsey Global Institute, Disruptive technologies: Advances that will transform life, business, and the global economy, May 2013
  2. Kevin Bullis, How vehicle automation will cut fuel consumption, MIT Technology Review, October 24, 2011.
  3. Angus MacKenzie, The future is here: Piloted driving in Audi’s autonomous A6, MotorTrend, January 2013.
  4. Andrew English, New car tech: 2014 Mercedes-Benz S-class, Road & Track, November 2012.

Disruptive Technologies: Nanomaterials

In the last few decades, scientists around the world have been experimenting with potentially revolutionary new materials. An impressive number of advanced materials have been uncovered, and many of them have exceptional properties. Smart materials, for instance, are a category of advanced materials with fascinating features. They include self-healing materials such as self-healing concrete, memory metals that revert to their original forms when heated, and piezoelectric crystals and ceramics that can produce electricity from pressure. While these smart materials have considerable potential for future applications, their effects are anticipated to be overshadowed by the impact of nanomaterials.

Nanomaterials are substances that have at least one structural dimension smaller than 100 nm. This is roughly the molecular scale, with one nanometer equal to a billionth of a meter. The value of nanomaterials lies in the fact that at nanoscale, many ordinary materials develop spectacular new properties, such as superconductivity or magnetism. Furthermore, nanoparticles are governed by the rules of quantum mechanics and have a very large surface area per unit volume, which makes them highly reactive.

Simple nanoparticles are found in many products today. For example, the antimicrobial properties of nanosilver make it excellent for use in detergents and odor-resistant socks. Zinc oxide has unique wavelength-blocking characteristics and is an important constituent of many sunscreens. Clay nanoparticles are used to make composites that are stronger and more elastic, making them useful for car bumpers.

Future Disruptive Technologies: Nanomaterials

Advanced nanoparticles, being much more expensive, currently have a limited number of applications. However, health care is one area that could soon feel a strong impact from advanced nanomaterials. The phenomenal reactivity of nanoparticles makes them excellent candidates for use in diagnostic tools as well as in targeted drug delivery. In cancer patients, the efficacy of chemotherapy could be greatly enhanced by using nanoparticles to concentrate drugs inside cancer cells. Not only would this improve the efficacy of the drug, but it could also reduce many of the side effects, allowing higher doses to be administered without harming the patient. For example, tumor necrosis factor, or TNF, is a potent antitumor agent that is also highly toxic. AstraZeneca is developing a therapy that will use gold nanoparticles to deliver TNF directly into cancer beds. Another pharmaceutical company, Celgene, has already developed a nanoparticle-bound formulation of paclitaxel, making it the first chemotherapy drug of its kind.1 With the potential to extend the lives of millions of people, nano-based anticancer drugs could create an economic impact of as much as $500 billion per year by 2025.2

Graphene and carbon nanotubes are two other advanced nanomaterials. Both consist essentially of ultrathin, single-atom-thick layers of carbon. While graphene and nanotubes are both prohibitively expensive to manufacture, they possess an extraordinary combination of properties. For instance, not only is graphene ten times as conductive as copper, it is also only a sixth the density of steel while being a hundred times as strong. Moreover, graphene is extremely elastic and can revert to its original shape even after being subjected to extreme pressure. With these remarkable specifications, graphene could one day replace silicon in microchips to create processors a thousand times faster than the best ones currently available.

Today, graphene is being used to develop supercapacitors that could lead to batteries with extremely high performance. Such batteries would keep devices like smartphones powered for weeks on end3 and could be recharged in a few seconds. Furthermore, graphene-based lithium-ion batteries would be more efficient, potentially accelerating the adoption of electric vehicles. The range of potential applications for graphene is vast. With its absorptive properties, for instance, graphene may also prove to be an effective means of purifying water. Lockheed Martin is currently developing graphene-based water filters that could convert seawater into potable water at a fraction of the cost of existing methods.4

Carbon nanotubes also have great potential. With their large surface areas and high reactivity, they make for excellent sensors. In health care, nanotubes could be used to augment the sensitivity of existing techniques for detecting biomarkers, such as those for cancer. They could also be used to detect minute levels of noxious substances in the environment. Additionally, nanotubes could be integrated with graphene to create thin, flexible, and transparent display screens.

Quantum dots are another type of nanomaterial. They are semiconductor particles with unique optical properties: the color of the light they emit depends on their size. Potential applications include electronic displays and medical diagnostic tools, in which conventional contrast dyes could be replaced by quantum dots that light up under imaging.

Nanomaterials are still in their infancy, and their full impact is unlikely to be felt in the near term. One of the greatest obstacles to realizing the potential of advanced nanomaterials is the cost of production. Graphene films can cost as much as $819 a unit, while nanotubes go for up to $700 per gram. Large-scale production of applications using these materials is greatly constrained by this expense. Beyond cost, there are also significant technological barriers to producing high-quality nanomaterials; for example, it is still quite difficult to create long nanotubes or large graphene sheets of good quality.

Finally, there is significant concern that loose nanoparticles may damage the environment or even prove hazardous to health. There is already evidence that inhaled nanotubes could be as dangerous to the lungs as asbestos.5 As nanomaterial applications become more widespread, governments will play a key role in regulating the technology to ensure the safety of their citizens.


Matthew Herper, Celgene’s Abraxane extends life by 1.8 months in advanced pancreatic cancer, Forbes, January 22, 2013.

McKinsey Global Institute, Disruptive technologies: Advances that will transform life, business, and the global economy, May 2013.

Maher El-Kady and Richard Kaner, Scalable fabrication of high-power graphene microsupercapacitors for flexible and on-chip energy storage, Nature Communications, volume 4, February 2013.

Paul Borm and Wolfgang Kreyling, Toxicological hazards of inhaled nanoparticles—Potential implications for drug delivery, Journal of Nanoscience and Nanotechnology, volume 4, number 5, October 2004.