Would You Put AI In Charge Of Your Home’s Security?

The rise of the machines, bowing to our AI overlords, and being wiped out by time-traveling killer robots are all far-fetched science fiction scenarios, but as AI technology advances and weaves itself into other fields and industries, examining just what we’d allow an AI to do is becoming all the more important.

Airlines want to save money with self-driving planes. Corporations rely on AIs to protect their data and to do automated research. Machine learning projects want to teach programs to think for themselves and acquire complex problem-solving skills.

But would you want an AI to secure your home?

While the general public is still largely skeptical – mostly due to misinformation – AI, as it exists today, ranges from almost entirely useless to somewhat useful, and is only very rarely even remotely dangerous. Few commercial products rely on AI tech, but those that do haven’t turned against us, nor have their glitches resulted in anything beyond inconvenience.

AI advances into personal security, be it for your home or for your digital property, have already been made. AI cybersecurity pioneer Cylance has been providing corporate data protection with AI software since 2012, and is now planning to expand into the consumer market.

Another example is the Lighthouse home security camera, which uses a fairly bare-bones AI designed to better judge whether what it’s seeing is an animal or a human and, beyond that, whether that human has any business rummaging through your underwear. Additionally, instead of having you skim through hours of footage to find what you’re looking for, it can interpret even complex search queries based on a few descriptors.
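As a rough illustration of what that kind of tagging and querying could look like under the hood – a minimal sketch, not Lighthouse’s actual implementation, with made-up labels and confidence scores – a camera’s AI might log detections and let you filter them like this:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical detection events as a camera's AI might log them.
# Labels and confidence scores are illustrative placeholders.
@dataclass
class Detection:
    timestamp: datetime
    label: str          # e.g. "person", "dog", "cat"
    confidence: float   # classifier confidence, 0.0 - 1.0

events = [
    Detection(datetime(2018, 5, 3, 14, 12), "dog", 0.91),
    Detection(datetime(2018, 5, 3, 18, 40), "person", 0.97),
    Detection(datetime(2018, 5, 4, 2, 15), "person", 0.88),
]

def search(events, label, after_hour=None):
    """Return detections matching a descriptor, e.g. 'person after 6 pm'."""
    results = [e for e in events if e.label == label and e.confidence > 0.8]
    if after_hour is not None:
        results = [e for e in results if e.timestamp.hour >= after_hour]
    return results

# "Show me people seen after 6 pm" without skimming hours of footage.
for hit in search(events, "person", after_hour=18):
    print(hit.timestamp, hit.label, round(hit.confidence, 2))
```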

However, as AI tech advances, more and more opportunities will be unlocked. What happens when your AI butler is watching your cameras 24/7? What if it decides whether to let someone in, or is authorized to accept mail and packages in your name?

Say you have a front-door camera. Instead of you turning a key in a physical lock, the friendly household AI scans your face from a distance and automatically opens the door for you – and does the same for anyone whose facial pattern is saved as ‘authorized’. If an intruder arrives, it keeps the door shut and maybe automatically calls law enforcement.
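The decision logic behind such a door could, in its simplest form, look something like the sketch below – assuming a hypothetical face-embedding model, placeholder vectors, and an arbitrary similarity threshold; a real system would be considerably more involved:

```python
import numpy as np

# Hypothetical: embeddings would come from a face-recognition model;
# here they are just placeholder vectors for illustration.
AUTHORIZED = {
    "alice": np.array([0.12, 0.85, 0.33]),
    "bob":   np.array([0.74, 0.10, 0.58]),
}
MATCH_THRESHOLD = 0.95  # cosine similarity required to unlock

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def handle_visitor(face_embedding):
    """Open the door for a recognized face, otherwise keep it shut."""
    best_name, best_score = None, 0.0
    for name, stored in AUTHORIZED.items():
        score = cosine_similarity(face_embedding, stored)
        if score > best_score:
            best_name, best_score = name, score
    if best_score >= MATCH_THRESHOLD:
        return f"unlock door for {best_name}"
    return "keep door locked; notify homeowner / law enforcement"

print(handle_visitor(np.array([0.11, 0.86, 0.32])))  # close to Alice -> unlock
print(handle_visitor(np.array([0.90, 0.90, 0.05])))  # stranger -> stay locked
```

Even in this toy version the weak points stand out: everything hinges on the threshold being well chosen and on the embeddings being hard to spoof.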

AI, as modern science interprets it, isn’t on the level of the truly sentient artificial digital life seen in the movies. These programs are still – for the most part – confined by their programming. A witty burglar wouldn’t be able to talk an AI into letting them in. AI can’t be bribed.

AI, however, can be hacked, and as the world moves further towards being technology-centric, this has given rise to a new kind of criminal: the hacker. No, not the stock-photo hacker hunched over a laptop in a ski mask, but real hackers who exploit the weaknesses of technology for personal gain.

We all fear having our social media profiles, e-mail accounts or even bank accounts hacked, and the perpetrator doesn’t even need to stand up from their desk. So what happens when a house is hacked? With ransomware attacks now so common, maybe five years down the line you’ll come home to a stranger demanding $300 to let you through your own door.

AI also raises privacy concerns. These days, everything is connected, and privacy issues already exist with the extortive EULAs of the cloud storage services offered by some security camera manufacturers. When an AI is observing literally every move you make in your home, how sure can you be that no one else is seeing it too?

With smart homes becoming more and more common instead of being a niche enthusiast industry, automation and digitization are spreading with extreme haste. More and more people are connecting their motion sensors to their cameras to their smart outlets to their lights to their heating systems to their mobile devices.
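To picture how tightly coupled those devices end up, here’s a toy event-chaining sketch in the spirit of a DIY smart-home setup – the device names, events and handlers are entirely made up for illustration:

```python
# Toy event bus chaining motion sensor -> camera -> lights -> phone.
# Device names and handlers are hypothetical, not any vendor's API.
subscribers = {}

def on(event, handler):
    subscribers.setdefault(event, []).append(handler)

def emit(event, payload=None):
    for handler in subscribers.get(event, []):
        handler(payload)

on("motion_detected", lambda p: emit("camera_record", p))
on("motion_detected", lambda p: emit("lights_on", p))
on("camera_record", lambda p: print("camera: recording clip"))
on("lights_on", lambda p: print("lights: porch light on"))
on("motion_detected", lambda p: print("phone: push notification sent"))

emit("motion_detected", {"sensor": "front_porch"})
```

Every link in that chain is convenient, and every link is also a place where something can be hijacked.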

If you’re heavily invested in a complex home automation framework, you might already be at risk of digital burglary, though the concept is still rare enough to feel like something out of a movie.

Fiction centered around AI has matured alongside the science behind it, with modern takes on the once-tired trope hitting far closer to plausible scenarios. The recently concluded television series Person of Interest in particular explores the idea of a global security system run by a closed AI.

In a hypothetical scenario, people without insight into the particulars of the AI governing their home’s security might not be comfortable with not knowing exactly what it bases its decisions on. If someone looks a lot like you, or holds a printed page with your face on it up to the camera, will it let them in? A simple lock cannot be fooled that way. Will a recording be enough to get past voice verification? These are all issues that need to be addressed – publicly – before AI home security devices catch on.

At the same time, advancement always comes with risks, but without those risks there can be no rewards. If humanity decided to turn away or draw the line every time it encountered potentially risky technology, we’d still live in caves (which, relative to houses, are admittedly far more secure). AI becoming commonplace is still years away, and until then many vulnerabilities will be ironed out. Not all of them, but many.

The same way technology in general has become essential to almost every walk of life – there is barely anything you do without using electricity – AI will eventually be integrated into more and more kinds of tech. We’re not saying your coffee machine will greet you with a weather report every morning, but it might detect just how much Monday you’ve had and tailor the strength of the brew accordingly.