Surveillance cameras seem to be everywhere these days, deployed by consumers, businesses and other organizations seeking to deter crime and document security-related events. According to some estimates, more than 60 million surveillance cameras are now deployed across the U.S. recording billions of hours of footage each week.
It would be impossible for humans to watch all that video every week. Fortunately, they don’t have to, thanks to the continued development of artificial intelligence (AI) and video analytics technologies.
AI-driven video analytics software can monitor multiple video feeds simultaneously and trigger an alert when potentially significant situations are detected. Analytics systems support a wide range of security and operational use cases by providing facial and license plate recognition, detecting objects taken or left behind, accurately counting people in congested areas, and more.
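At its simplest, the "trigger an alert" idea can be illustrated with frame differencing, one of the most basic building blocks of video analytics. The sketch below is a toy illustration, not a production technique: frames are modeled as flat lists of pixel brightness values, and the threshold and frame data are invented for demonstration.

```python
def motion_score(prev_frame, curr_frame):
    """Mean absolute per-pixel brightness change between two frames."""
    diffs = [abs(a - b) for a, b in zip(prev_frame, curr_frame)]
    return sum(diffs) / len(diffs)

def detect_events(frames, threshold=20.0):
    """Return indices of frames whose change vs. the prior frame exceeds threshold."""
    alerts = []
    for i in range(1, len(frames)):
        if motion_score(frames[i - 1], frames[i]) > threshold:
            alerts.append(i)
    return alerts

# Three static frames, then a sudden change in frame 3.
static = [100] * 16
changed = [100] * 8 + [200] * 8   # half the pixels jump in brightness
frames = [static, static, static, changed]
print(detect_events(frames))  # -> [3]
```

Real analytics systems go far beyond this, but the principle is the same: software watches every frame of every feed and surfaces only the moments that matter.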
Most important, software never gets bored or tired.
With conventional analog surveillance systems, humans must monitor the video to catch events in real time or review stored video to reconstruct events after the fact. In addition to being labor-intensive, human monitoring is highly fallible. People can easily miss developing events due to fatigue or because they’re looking at the wrong monitor at the wrong time.
The AI Difference
Recent advances in one form of AI have significantly boosted the speed and accuracy of video analytics software. Cognitive computing solutions combine multiple AI subsystems in a way that simulates human thought processes and can evaluate video content in a fraction of the actual viewing time required by a human.
AI is an umbrella term that covers multiple technologies, and cognitive computing relies heavily upon two of the predominant subsets — machine learning and deep learning. Although they are closely related, there are significant differences. Machine learning refers to the use of algorithms that “learn” to produce better analysis as they are exposed to more and more data.
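The "better analysis with more data" idea can be seen in even the simplest learning algorithm. The sketch below uses a nearest-centroid classifier on made-up one-dimensional data (the class centers and noise levels are illustrative assumptions): as the training set grows, the learned class centers become more accurate.

```python
import random

def fit(points):
    """Learn one centroid (mean feature value) per class label."""
    by_label = {}
    for x, label in points:
        by_label.setdefault(label, []).append(x)
    return {label: sum(xs) / len(xs) for label, xs in by_label.items()}

def predict(centroids, x):
    """Assign x to the class with the nearest centroid."""
    return min(centroids, key=lambda label: abs(x - centroids[label]))

random.seed(0)
# Two noisy classes centered at 0 and 10.
data = [(random.gauss(0, 3), "A") for _ in range(500)] + \
       [(random.gauss(10, 3), "B") for _ in range(500)]
random.shuffle(data)
test = data[:200]

for n in (5, 50, 500):
    model = fit(data[200:200 + n])
    acc = sum(predict(model, x) == y for x, y in test) / len(test)
    print(f"trained on {n:3d} examples: accuracy {acc:.2f}")
```

Production machine learning uses far richer models, but the pattern holds: more representative data yields better estimates, and better estimates yield better predictions.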
Deep learning has been referred to as machine learning on steroids. It is designed to loosely mimic the way the human brain works with neurons and synapses. It utilizes a hierarchical system of artificial neural networks with many highly interconnected nodes working in unison to analyze large datasets. This gives a machine the ability to discover patterns or trends and learn from those discoveries.
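The "interconnected nodes working in unison" idea can be made concrete with a tiny two-layer network. The weights below are set by hand purely for illustration; a real deep network learns its weights from data. The example computes XOR, a function no single neuron can represent but a small layered network can.

```python
def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum plus bias, step activation."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

def xor_net(x1, x2):
    # Hidden layer: one neuron fires on OR, the other on AND.
    h_or = neuron([x1, x2], [1, 1], -0.5)
    h_and = neuron([x1, x2], [1, 1], -1.5)
    # Output layer combines them: "OR but not AND" is XOR.
    return neuron([h_or, h_and], [1, -1], -0.5)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", xor_net(a, b))  # prints 0, 1, 1, 0
```

Deep networks stack many such layers with thousands or millions of learned weights, which is what lets them discover patterns no one programmed in explicitly.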
The significant difference is the degree of human involvement. Traditional machine learning models depend on programmers to hand-engineer features, then label and load datasets for analysis. Deep learning largely automates that feature-engineering step: given enough examples, artificial neural networks discover on their own which characteristics of the raw data matter, and then “infer” things about new data they are exposed to.
By combining these technologies, cognitive computing becomes particularly useful for image and speech recognition. It uses a series of “classifiers” to identify and tag objects, settings and events based on features such as color, texture, shape and edges. The more data the system is exposed to over time, the more it learns and the more accurate it becomes.
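To make the feature idea concrete, here is a deliberately simplified sketch of classifying by one visual feature, color. The "image" is a handful of invented (r, g, b) pixels and the labels are made up; real classifiers combine color with texture, shape, edges and many learned features.

```python
def color_features(pixels):
    """Mean intensity per channel across all (r, g, b) pixels."""
    n = len(pixels)
    return tuple(sum(p[c] for p in pixels) / n for c in range(3))

def classify(pixels):
    """Tag a patch by its dominant color channel."""
    features = color_features(pixels)
    labels = ("red", "green", "blue")
    return labels[max(range(3), key=lambda c: features[c])]

stop_sign = [(200, 30, 30)] * 9 + [(255, 255, 255)] * 3  # mostly red patch
sky = [(60, 120, 220)] * 12
print(classify(stop_sign))  # -> red
print(classify(sky))        # -> blue
```

A cognitive computing system chains many such classifiers together and refines them continuously, which is why its accuracy improves the longer it runs.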
These capabilities allow organizations to use video surveillance for a range of use cases beyond premises security. Many cities have implemented video surveillance for traffic control, while manufacturers are using it to monitor production for safety, quality assurance and regulatory compliance. Retailers are employing video surveillance for people counting, queue counting and dwell time analysis as well as for loss prevention.
Cognitive computing software makes widespread use of surveillance practical by enabling video analysis that is many times faster than conventional systems that rely on human evaluation. Through our services agreement with the cognitive computing experts at Essex Technology Group, Converge can help customers implement autonomous video surveillance systems that no longer require constant human monitoring. Give us a call to learn more.