AI and the Future of Drones

“Every so often in history, the emergence of a new technology changes our world. Like gunpowder, the printing press, or even the atomic bomb, such “revolutionary” technologies are game-changers not merely because of their capabilities, but rather because the ripple effects that they have outwards onto everything from our wars to our politics. That is, something is revolutionary not so much because of what it can do, but rather the tough social, military, business, political, ethical, and legal questions it forces us to ask.”

– Peter W. Singer [1]

Imagine that there are large boxes in your brain for sorting stuff that your mind learns about. Ever since you were a baby, you’ve been categorizing things into the correct “brain-box”. When you were a toddler, you got your first picture book and learned about fish. You learned that fish swim and have fins. Even as a child, you understood that a salmon is similar to a trout, but very different from a cat. Even though cats and trout both have eyes and mouths, cats don’t have fins, so they are not fish.


Then you saw a picture of a whale, and what brain-box did you dunk that into? Hmm … it lives in the ocean, swims, has fins – why, that’s a fish! Many years later, you learned about the process of evolution and how whales are marine mammals, which means that they are more closely related to cats than to fish. Have fun processing that, Brain!

Drones use wings or rotors to fly, so it’s easy to think of them as smaller versions of modern-day aircraft. However, it is a mistake to predict the future of any technology based on what it does today. A single innovation can significantly change how a technology is used. For example, computers were once restricted to factories and labs, but IC chips dramatically reduced their size, leading to the birth of personal computing.

To better predict the future of a technology, we need to understand what it is, not what it does. A whale swims in the ocean, as fish do (that’s what it does), but it has lungs, the skeletal structure of a mammal, and suckles its young (that’s what it is – a mammal). A drone flies in the air (that’s what it does), but it’s a machine designed to perform specific tasks, as directed by its software systems or a human controller. A drone is, essentially, a robot (that’s what it is). So if we want to envision the future of drones, we need to understand the advances happening in robotics, not merely aviation. And the most important advances happening in robotics pertain to Artificial Intelligence.

 

AI Explained

If you have a smartphone, you might already be using AI. Personal assistants like Apple’s Siri or Microsoft’s Cortana convert your speech into a command, refer to a cloud-based database to interpret it, and choose the right app to execute it. Self-driving cars improve their techniques with each new experience (for instance, learning to slow down before a sharp curve). Like human beings, AI learns in two ways. One is pattern-based, somewhat like human habits. The other is inferential – predicting events by understanding underlying principles. Inferential learning is a lot easier for humans than it is for AI. For example, loud noises might frighten a child (patterns are habit-forming), but as that child grows, he might fall in love with Heavy Metal because he finds it therapeutic. In this case, the person is breaking a pattern instead of reinforcing it. And while AI can compose Heavy Metal music using algorithms, it will never create a new genre of music from scratch. Unlike a human composer, AI is not affected by life experiences or world events. Its creativity is not inspired by raw emotion.
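
To make the pattern-based kind concrete, here is a minimal sketch of a nearest-neighbour classifier – one of the simplest pattern learners – sorting animals into the toddler’s brain-boxes from earlier. The feature encoding and examples are invented purely for illustration; nothing beyond Python’s standard library is assumed.

    from math import dist

    # Toy training examples the "child" has already sorted:
    # features = (swims, has_fins, has_lungs, suckles_young), each 0 or 1.
    training_data = [
        ((1, 1, 0, 0), "fish"),    # salmon
        ((1, 1, 0, 0), "fish"),    # trout
        ((0, 0, 1, 1), "mammal"),  # cat
        ((0, 0, 1, 1), "mammal"),  # dog
    ]

    def classify(features):
        """Pattern-based learning: copy the label of the closest known example."""
        nearest = min(training_data, key=lambda example: dist(example[0], features))
        return nearest[1]

    # A whale as it appears in a picture book: it swims and has fins, and
    # the lungs and the suckling are not visible on the page.
    whale_in_picture_book = (1, 1, 0, 0)
    print(classify(whale_in_picture_book))  # -> "fish": surface patterns win

Notice that the learner never reasons about why fins matter; it just copies the label of the closest remembered example. That is pattern-matching, not inference – which is exactly why the picture-book whale lands in the fish box.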

When scientists started working on AI, they pretty much grabbed the wrong end of the stick. They tried to build systems that excel at skills that need a high IQ. You see, we are impressed by people who are good at chess or flying airplanes, because playing chess or flying planes is “hard” – but we aren’t impressed by someone who can walk or understand a joke, because anyone can do that. It turns out that playing chess or flying planes is very easy for AI. On the other hand, walking is hard for a robot, and we haven’t even come close to AI that can make sense of a joke. Another problem is that we have defined intelligence in a narrow, human-centric manner. A dog cannot comprehend a (verbal) joke either, but sniffing a pile of poop to deduce the age, gender, social status, health and relationship status of another dog is at least as impressive as appreciating a Shakespearean sonnet (just being honest, guys).

These are the kinds of problems we’re grappling with. To handle them better, scientists have classified AI into three levels:

  • Artificial Narrow Intelligence (ANI): AI that excels at performing a specific task. Siri, Cortana, aircraft autopilot and factory robots are all examples of ANI.
  • Artificial General Intelligence (AGI): AI that is well-rounded and comparable to human beings in its depth and breadth of intelligence. Scientists estimate that AGI might be a reality in another 10 or 20 years, but we aren’t quite there yet.
  • Artificial Super Intelligence (ASI): AI that is vastly superior to human intelligence. Still a pipe dream.

AGI will not allow robots to appreciate beautiful sunsets or classical music, and yet it will be more capable than we are. Just because we can experience emotions doesn’t mean that we’re better. Many a time, emotions simply get in our way. Our brilliant minds are easily distracted. Depression, rejection and bullying gravely affect our effectiveness. We rarely make perfectly rational choices. Even if a well-rounded AGI is less intelligent than a human being, it will probably be more capable because it makes objective decisions.

 

How to ensure that AI doesn’t destroy us

Sci-fi movies like the Terminator series suggest that a highly intelligent, global AI may become “self-aware” and start a war against humanity. But why would it do that? Even if AI became self-aware, how would it benefit from going to war with humans? I mean, think about it – we humans go to war with other humans, with animals, with nature, because we have a desire to reproduce and pass on our genes. When we commit large-scale violence, it is to secure a better future for our descendants, and for people who are genetically similar to us (our ethnic group). Gaining as many resources as possible for our descendants and eliminating the risks they face are key motivators for most of the nasty things we’ve done throughout history. AI has no such motive, because it cannot reproduce. Even if intelligent robots could replicate themselves, they are not driven to do so by hormones and genes as biological organisms are.

Or perhaps biological organisms have evolved the drive to reproduce precisely because we are mortal. In any case, as Tim Oates explains, for AI to turn on us, it would have to develop a sense of self, then desire something that makes coexistence with humans impossible, and then choose a plan that involves mayhem (unlikely, since under most circumstances this won’t be the most effective plan). Each of these is highly improbable on its own, let alone all of them together. But even if an AI system does develop a sense of self, Stephen Hawking states the obvious: this would be a consequence of sloppy programming. If the AI system is programmed to protect humanity, that’s what it will do.

And yet, this also makes creating a foolproof AI system incredibly challenging. For instance, how do we define what is to the benefit of humanity? Hurt no human being? It’s only a matter of time before AI is leveraged by governments to fight insurgents, terrorists and drug cartels (or the other way round). Should AI act in the larger interest of humanity? What if a country is building a new dam that is crucial to its economy, and AI recommends a site that would displace a small, indigenous tribe?

And what if we build AI systems to provide comfortable lives to citizens? If AI robots work all the factories, do all the chores in homes, run the economy, and keep us entertained, will that necessarily be a good thing? Will we still be driven to take risks, innovate, learn and explore? And what if this AI system decides that it is in our best interest for some information to be censored?

This has prompted calls for research into how AGI can be developed so that we do not inadvertently program it to harm human beings or our environment. The AAAI 2008-09 Presidential Panel on Long-Term AI Futures agreed that such research must necessarily be multi-disciplinary, spanning economics, labor market forecasting, policy, law, ethics, computer security, AI, machine learning, neuroscience, logic, probability, verification research, control systems and other areas. Significantly, Elon Musk has donated $10 million to fund research into how AI can be made more relevant to solving our society’s problems, rather than harming humanity.

This might be a good place to explain what some of these terms mean:

  • Labor market forecasting will help predict which jobs will be replaced by AI systems.
  • Research in policy will help us manage the wealth created by AI systems, to ensure that underemployed people have the income, resources and opportunities they need to live fulfilling lives.
  • Much of the new research in law pertains to liability – “If a self-driving car has a freak accident, should the car company be held responsible?”
  • Ethics is about how AI is allowed to use personal data, and how it interacts with people. “Should the military use robots to fight wars, without the tempering effect of soldier casualties?” “Should AI ever replace a human jury, even if it is more efficient?”
  • Machine learning applies data mining (discovering patterns in data) and inferential statistics (using data to make predictions) to build AI that improves itself. It’s a huge aspect of creating human-like AGI (see the sketch after this list).
  • Verification research is about establishing stronger standards for testing AI software and hardware, so that we do not build AI systems that are to our detriment.
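
To make that machine-learning bullet concrete, here is a minimal sketch of the “discover a pattern, then predict” loop, using ordinary least squares. The braking figures are invented purely for illustration, and nothing beyond plain Python is assumed.

    # Toy observations: how far a car travelled before stopping at each speed.
    speeds    = [20, 40, 60, 80, 100]   # km/h
    distances = [6, 17, 34, 58, 88]     # metres (made-up data)

    # Data mining step: summarize the observations into a simple pattern
    # (here, a straight line fitted by least squares).
    mean_x = sum(speeds) / len(speeds)
    mean_y = sum(distances) / len(distances)
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(speeds, distances))
             / sum((x - mean_x) ** 2 for x in speeds))
    intercept = mean_y - slope * mean_x

    # Inferential statistics step: use the pattern to predict an unseen case.
    predicted = slope * 70 + intercept
    print(f"Predicted stopping distance at 70 km/h: {predicted:.1f} m")

Each new observation re-fits the line, so predictions improve with experience – the self-driving car that “learns to slow down before a sharp curve” is this same loop running with far richer data and models.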

Military AI drones

Military robots (including drones) can be classified into three categories based on the level of human involvement in their missions (see the sketch after this list):

  1. Human-in-the-Loop Systems: Although a drone may fly itself over a target area, only a human operator can select and fire at a target.
  2. Human-on-the-Loop Systems: The drone is fully capable of selecting and attacking targets, but under the oversight of a human operator who can override its decisions.
  3. Human-out-of-the-Loop Systems: The drone is expected to fly, select targets and attack them without human intervention.
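
The essential difference between the three categories is where the authority for an engagement decision sits. The sketch below is purely illustrative – the names, types and logic are hypothetical, not any real weapon-control system:

    from enum import Enum

    class ControlMode(Enum):
        HUMAN_IN_THE_LOOP = 1      # a human selects and authorizes every strike
        HUMAN_ON_THE_LOOP = 2      # the drone decides; a human supervisor can veto
        HUMAN_OUT_OF_THE_LOOP = 3  # the drone decides with no human involvement

    def engagement_proceeds(mode: ControlMode, drone_selects_target: bool,
                            human_authorizes: bool, human_vetoes: bool) -> bool:
        """Return whether a strike goes ahead under the given control mode."""
        if mode is ControlMode.HUMAN_IN_THE_LOOP:
            # Nothing happens unless a human explicitly commands it.
            return drone_selects_target and human_authorizes
        if mode is ControlMode.HUMAN_ON_THE_LOOP:
            # The machine acts on its own unless a human overrides it in time.
            return drone_selects_target and not human_vetoes
        # Out of the loop: the machine alone decides.
        return drone_selects_target

    # Example: an on-the-loop drone picks a target and no human vetoes in time.
    print(engagement_proceeds(ControlMode.HUMAN_ON_THE_LOOP, True, False, False))  # True

The trend described below is a drift down this enum: each step removes a human check from the decision path.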

Drones such as the MQ-1 Predator are human-in-the-loop systems – the drone cannot fire unless explicitly commanded to do so. However, the Predator is merely among the first of increasingly sophisticated drones under development – “the equivalent of the Model T Ford or the Wright Brothers’ Flyer.” [1] A number of countries around the world are investing in the development of military robots and drones, and given the US military’s extensive use of drones on the battlefield, its policies in this regard indicate future trends. While the US Department of Defense endorses a policy of retaining human control over the use of force “for the foreseeable future,” [2] it’s only a matter of time before human operators become irrelevant to drone operations. The USAF envisages that the current Human-in-the-Loop drones will be replaced by Human-on-the-Loop weapon systems, as “advances in AI will enable systems to make combat decisions and act within legal and policy constraints without necessarily requiring human input.” [3] It also predicts that by 2030, the capabilities of AI-enabled drones will have advanced “to the point that humans will have become the weakest component in a wide array of systems and processes.” [4]

A thinking, reasoning drone that can select its own targets could be extremely dangerous in the wrong hands. Such a drone would be capable of surveying an area, choosing the most devastating targets, and plotting a series of attacks on the enemy’s factories, military bases or cities – perhaps even waiting for months for the right time to attack, like a sleeper agent. Its ability to learn about the enemy’s culture and plan its attacks independently (thanks to AGI) would make it an ideal weapon for terrorists, or for government agencies seeking deniability.

But that is not the only problem with AI weapon systems. As I explained, AI is not encumbered by emotions the way humans are. Yet sometimes, emotions are key to fulfilling a war’s objectives, because they let a soldier make better decisions. A human combatant can tell the difference between a frightened civilian with a gun and a trained killer – a robot cannot. And since automated systems minimize military casualties, they could encourage countries to go to war – wars that would not be fought if high casualties were feared. And when such wars break out, it will be civilians who suffer the most.

This is also a fear shared by technologists who are monitoring the progress of AI. In July 2015, over 20,000 scientists and technologists – including Stephen Hawking, Elon Musk, Steve Wozniak (co-founder of Apple), Noam Chomsky, Jaan Tallinn (co-founder of Skype) and 3,037 AI and robotics researchers – signed an open letter emphasizing the need to regulate the use of AI in military systems, including drones. They expressed their concern that unlike “remotely piloted drones for which humans make all targeting decisions,” AI has made it feasible to deploy, within years, “armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria”. They continue:

“If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable … Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.” [5]

Building drones is not hard. A lot of countries are doing it. The factor limiting the effectiveness of drones today is situational and spatial awareness – which is why countries still spend billions on developing new fighter jets and training human pilots. Present-day combat drones are impressive: they can discreetly monitor an area for over 40 hours at a stretch, waiting for the target to show up, communicate in real time with a human team thousands of miles away, and then fire a weapon such as the Hellfire missile to destroy that target. But although they have been invaluable in dismantling terrorist networks, they are no match for a human fighter pilot. Consider a modern-day battlefield in all its complexity – SAM batteries, electronic jamming, AWACS support, stealthy fighters, cruise missiles, sophisticated armored vehicles, combat helicopters and powerful shoulder-fired missiles – a drone simply does not have the same situational awareness as a human pilot in the cockpit of a conventional fighter. A human pilot has all-round visibility (not restricted to the aircraft’s radar, RWR and other sensors), instinct and experience. He can share data with other pilots, change tactics as the situation demands, and even reconfigure his aircraft mid-flight to better engage specific targets in the air or on the ground. Present-day drones cannot do all this.

As AI advances make it possible for drones to choose targets, make decisions and use a wide range of tactics for dogfighting, they will increasingly replace human pilots for combat operations. As USAF assessments state,

“Although humans today remain more capable than machines for many tasks, natural human capacities are becoming increasingly mismatched to the enormous data volumes, processing capabilities, and decision speeds that technologies offer or demand; closer human-machine coupling and augmentation of human performance will become possible and essential.” [3]

Eventually, AI drones may replace human combat pilots entirely, if only because the crew itself exacts a cost:

“For many applications the weight, volume and endurance limitations associated with such crew, as well as the cockpit environmental control systems needed to support crew operations, extract an unacceptable performance penalty.” [3]

Despite its current limitations, there is reason to be alarmed about the military use of AI. In the wrong hands, AI could easily be used to build armies of inexpensive autonomous military robots and drones that attack targets of their choosing (including swarm attacks), permanently changing the nature of warfare and terrorism. Even if the AI technology itself is not shared, AI-equipped drones can be sold to other countries, creating an added risk. So there is a need for international frameworks, treaties and effective control mechanisms to prevent proliferation, as we have for chemical and biological weapons. The open letter draws this very analogy:

“Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits. Indeed, chemists and biologists have broadly supported international agreements that have successfully prohibited chemical and biological weapons, just as most physicists supported the treaties banning space-based nuclear weapons and blinding laser weapons.” [5]

The International Human Rights Clinic at Harvard Law School points out another problem with autonomous drones: prosecuting war crimes. For instance, if an autonomous drone mistakenly destroys a civilian home, who should be held responsible for the error? The AI programmers, the drone manufacturer, the mission commander, or someone else? We can’t prosecute a drone, after all. Given these challenges with fully autonomous weapon systems, the IHRC and Human Rights Watch made these recommendations to all nations:

  • Prohibit the development, production, and use of fully autonomous weapons through an international legally binding instrument.
  • Adopt national laws and policies to prohibit the development, production, and use of fully autonomous weapons.
  • Commence reviews of technologies and components that could lead to fully autonomous weapons. These reviews should take place at the very beginning of the development process and continue throughout the development and testing phases. [6]

 

Drones in the Near-Future

Just because a technology is “advanced” does not mean that it will succeed. The success of a technology depends on a complex mix of socio-economic factors.

For one, technological change goes hand in hand with cultural change. The per capita GDP of Saudi Arabia is on par with that of the United States [7], so why is the United States, and not Saudi Arabia, the crucible of cutting-edge innovation? Because America is open to cultural change, and Saudi Arabia is not. Cultural change motivates research into new technologies – e.g. the environmental movement of the 1970s sparked innovations in renewable energy, cleaner fuels and waste management. And new technologies also spark cultural change – e.g. the Web is creating a more cohesive, global culture. Technology and culture thus feed into each other.

But for a technology to gain traction in society, it has to be appealing and affordable. Tesla Motors is doing that with electric cars. When the technology was expensive, they built the Roadster, a high-performance car for wealthy enthusiasts. With each technological cycle, Tesla has built more affordable vehicles. Their next car, the Model 3, will be a truly affordable mass-market car – one that never needs gasoline. This is a great example of how good business skills can proliferate a technology and change the world.

Of course, as a technology becomes more affordable and popular, governments step in to regulate it. Governmental regulations, incentives, taxes, etc. can have a huge impact on whether a technology ultimately succeeds or not.

It’s easy enough to guess future trends, but not the effects of disruptive events and technologies. For example, when cellphones first came out, anyone could have guessed that they would get lighter and cheaper, and have bigger screens. However, few could have predicted that cellphones would be used for social networking. Social networking wasn’t even a thing when mobile phones were invented! So if we want to imagine the future of drones, we’ll need to consider not just what drones can do today, but what unrelated technologies or industries will change our world, compelling the use of drones in as-yet unknown roles. And this makes it really difficult to predict what the world will be like 40, 20 or even 10 years from now.

As present-day drones lack the decision-making capabilities that AGI would offer, they are being used where they’re more cost-effective than conventional aircraft. And since drones are robots, we can predict near-future roles for them by looking at where robots already earn their keep. Today, using robots is more cost-effective than using human workers when:

1. The work involves simple and repetitive tasks

In automobile factories, robots perform simple, repetitive tasks along the assembly line. Unlike humans, robots are not prone to fatigue, boredom and errors. This is why drones are used for surveillance flights that can last over 40 hours. As the cost of using drones plummets, they will find use with communities and individuals. Possible uses include:

  • Neighborhood watches: Drones monitored by residents could help protect areas with high crime rates.
  • Spot advertising: Creative marketers could use drones as projectors to advertise on the sides of buildings or on sidewalks. While this saves the money spent on renting billboards, it could provoke a consumer backlash if not done right.
  • Crowd control: Managing large crowds during protests, concerts and outdoor events. Drones hovering overhead can display important instructions and communicate in chaotic, noisy situations in ways that land-bound signboards or loudspeakers can’t, potentially preventing panic, stampedes or crimes.
  • Hacking wireless communications: Not all uses of drones will be beneficial. Ingenious criminals could use drones fitted with sensors to collect sensitive data transmitted wirelessly.

As drones get cheaper still, we could enter an era of personal drones. Paired with smartphone apps, their potential capabilities are limitless.

  • Signal jamming: A person who does not want to be tracked, filmed on CCTV or have her calls or Web activity monitored could use drones fitted with jammers to retain her privacy.
  • Showing directions: What do you do if you’re looking for directions to a place? Look up a map on your phone and keep an eye on the screen while you try to walk without stumbling? A personal drone paired with a maps app could project directions on a sidewalk (arrows and notes). Devices like Google Glass could potentially do the same, but there’s been an enormous backlash against them (not without reason). A drone that projects directions that can be seen by anyone does not have the same creep factor as a pair of stalkerish glasses.
  • Pest control: Not my favorite idea, but a drone fitted with an ultrasonic sound emitter or chemical dispenser could be used to discourage mosquitoes, arachnids, snakes, bears and other animals from wandering into a campsite.
  • Selfies (sigh): A new era for selfies. The future is glorious indeed.

2. Where human life is at risk

Robots are frequently used where human life might be at risk, such as bomb disposal and rescuing people from rubble. Drones could find similar use:

  • Studying and surveying volcanoes, hurricanes and wildfires: Recently, scientists used drones to observe active volcanoes up close. Drones could also be used to collect data from hurricanes as they build up, helping to predict disasters and reduce casualties.
  • Riot control and war photography: For monitoring riots, dispensing tear gas, filming combat action, etc. While this will require human control, future drones with AGI could manage mob violence at their own discretion, taking a great deal of pressure off law enforcement teams.
  • Wildlife conservation: Need to monitor an eagle’s nest? Collect tissue samples from a surfacing whale? Dart a rhino? Drones could be used to monitor species that are aggressive or otherwise dangerous to approach.
  • Resolving hostage situations: Small drones can enter buildings and gather intelligence in hostage scenarios. They could also potentially maim or incapacitate terrorists – land robots are nowhere near as capable as police dogs, but a drone can move as swiftly and with as much agility as a dog.

3. Where small size suffices or provides an advantage

As a general rule, smaller machines are less capable than their larger counterparts. However, there are times when they are good enough (and cost significantly less). For example, it’s a lot cheaper to deploy an RQ-4 Global Hawk for reconnaissance than to send in a U-2 spy plane. At other times, smaller size is not only cheaper, but advantageous.

  • Drone ambulances: A drone ambulance or rescue aircraft doesn’t need to carry pilots as helicopters do. That makes it cheaper to operate, and smaller. Being small, it can be used in crowded cities, bypassing traffic and landing almost anywhere.
  • Mail delivery: Today, packages are shipped in bulk to reduce transportation costs, but this also adds delays. Companies such as Amazon and DHL are already testing drones for delivering packages. Drones would also make it possible to deliver groceries.
  • Combating poaching and drug trafficking: Fighting poachers and drug cartels entails monitoring vast areas, which can only be done by aircraft – and aircraft are expensive to operate. Drones are a far more cost-effective and feasible solution. While current technology requires human monitoring, AGI-capable drones could analyse targets, report to law-enforcement agencies, and initiate action as the situation demands.
  • Crop-dusting: Using drones for crop-dusting will greatly reduce costs and improve production. As AI improves and miniature drones become a reality, drones could be used for artificial pollination, harvesting fruit, and even enabling mass-scale organic agriculture through non-chemical pest control (deterring certain insects and birds without harming them, weeding out diseased plants, etc.).
  • Fighting pirates: Drones are a cost-effective way of engaging pirates on shipping lanes. Most transport ships cannot carry a helicopter, and need to wait until pirates are dangerously close to engage them.

 

Concluding thoughts

As I explained at the beginning of this rather expansive post, future technologies will not exist simply because they can, but because enough people benefit from their existence – enough to want to pay for it. Rather than let my imagination run wild like a puppy in a meadow, I’ve listed some feasible possibilities based on need and relevance. It’s a lot harder, however, to predict how drones will shape our culture and be transformed in turn. For instance, will pilotless airliners ever become a reality? Will we ever entrust our safety to AI, even if it is measurably safer than being flown by a human pilot? After all, modern airplanes are already capable of flying themselves, and yet we have strict protocols requiring the involvement of a human crew.

AI will also throw up regulatory challenges. Of late, many countries have started establishing regulations governing the use of drones. This is a necessary step – as drones get cheaper and more popular, their misuse for stupid or dangerous pranks, or even simple mistakes (like a drone crashing onto pedestrians, parked cars or a monument), is increasing. Registering drones is now a legal requirement in many parts of the world, although enforcing this will be more challenging. For example, what if someone rigs a gun or a grenade to an unregistered drone and attacks security personnel, a rival gang or a public area, with no intention of ever recovering the drone? Vehicular traffic on a road network can be managed with relative ease, but what about the flights of drones in a wide-open sky? Authorities have neither the money nor the technology to monitor small drones throughout a city. But this gets exponentially more complex when AI is involved. If a ‘criminal’ drone is capable of making its own decisions and biding its time, averting arson and violence will be a lot harder.

This is why it’s so important that a strong body of research accompany the development of AI. We need to make it a lot harder for the bad guys to exploit AI, just as it’s gotten harder for criminals to release viruses or steal data. We need to ensure that we do not inadvertently create AI systems that are capable of harming innocent people or destroying our environment. We will also need to work toward building a social services system that fairly distributes the wealth created by automated machines to benefit people who will inevitably lose their jobs. We can turn this into an opportunity that lets millions of people pursue creative goals, explore and innovate without needing to work on unproductive jobs.

Ultimately, the benefits of drones will far outweigh their risks, just as people who will use them for the greater good will outnumber the few who use them for criminal activities. As we overcome challenges in AI and automated systems, every technological improvement and regulatory reform will have enormous repercussions. And if we have a healthy respect for the technology we are dealing with, we will create a wealthier, better connected and more peaceful global community.

 


 

Footnotes

[1] This was part of military and technology expert Peter W. Singer’s testimony before the Subcommittee on National Security and Foreign Affairs, U.S. House of Representatives Committee on Oversight and Government Reform.

[2] US Department of Defense, Unmanned Systems Integrated Roadmap FY 2011-2036. http://www.acq.osd.mil/sts/docs/Unmanned%20Systems%20Integrated%20Roadmap%20FY2011-2036.pdf

[3] US Department of the Air Force (2009), Unmanned Aircraft Systems Flight Plan 2009-2047. http://www.govexec.com/pdfs/072309kp1.pdf

[4] USAF Chief Scientist (2010), Report on Technology Horizons: A Vision for Air Force Science & Technology during 2010-2030, Vol. 1. https://www.flightglobal.com/assets/getasset.aspx?ItemID=35525

[5] Future of Life Institute (2015), Autonomous Weapons: An Open Letter from AI & Robotics Researchers. http://futureoflife.org/open-letter-autonomous-weapons/

[6] Human Rights Watch (2012), Losing Humanity: The Case Against Killer Robots. https://www.hrw.org/report/2012/11/19/losing-humanity/case-against-killer-robots

[7] According to World Bank data for 2014, the per capita GDP of the United States was $54,629, and that of Saudi Arabia was $54,606.

Featured Image: Rendering of an X-47B UCAV. Credit: DARPA
