Vintra raises $4.8 million to analyze camera footage with AI

Security camera ownership is on the rise worldwide. Global Market Insights predicts that the IP camera market will cross $20 billion in revenue by 2024, driven by a 20 percent uptick in unit shipments. All of those cameras generate far more footage than humans can review, which is why most surveillance footage today (as much as 95 percent, by some estimates) goes unwatched.
San Jose startup Vintra hopes to change that with an AI toolset capable of analyzing frames from any source. Today the company announced that it has raised $4.8 million from Bonfire Ventures, Vertex Ventures, London Venture Partners, and other investors. Vintra CEO Brent Boekestein says the funds will be used to acquire new customers and expand the company’s product offerings.
“Every year, billions of dollars in time and resources are spent on security personnel to monitor livestreams, as well as post-event investigations by law enforcement and analysts,” said Boekestein, who was previously vice president of sales and business development at Redwood Systems. “Vintra exists because we knew we could use computer vision and deep learning to build a new way forward for video analytics that took mobility and customization into account.”
The tech underlying Vintra's FulcrumAI platform is codec-agnostic, highly scalable (it can handle up to 80 video sources on a single server), and works with cameras of all types: not only security cameras, but also those mounted on drones, body vests, and car dashboards. Its proprietary computer vision algorithms integrate with video management service providers like Genetec and Milestone and analyze video in real time.
FulcrumAI affords flexibility in deployment, working both on-premises and in the cloud. It makes most of the people, faces, vehicles, and objects visible in the video it ingests "instantly" searchable. It can also apply facial recognition for scenarios like blocklists and access verification, and issue customizable alerts when it detects that something is amiss.
For law enforcement customers like the Northern California Regional Intelligence Center, the Sacramento City Police Department, and the NYC Department of Investigations, Vintra provides a tailored solution, FulcrumAI Investigator, that's hosted on Amazon Web Services (AWS) and compliant with Criminal Justice Information Services (CJIS), a U.S. program that outlines the steps law enforcement agencies must take to protect records like fingerprints and criminal backgrounds. Vintra says Investigator enabled one detective to analyze over 1,500 videos totaling 743 hours in just four hours, and that it helped a separate team of investigators suss out a suspect's car and face "immediately."
Vintra’s services don’t come cheap. FulcrumAI Investigator, which includes 500 hours of AI video processing, 5TB of storage, and quarterly algorithm updates, starts at $2,000 per month.
Then there's the elephant in the room: AI video analysis is a hotly debated topic. Researchers at the Massachusetts Institute of Technology published a contested study late last year which found that Rekognition, AWS' object detection API, failed to reliably identify the gender of female and darker-skinned faces in specific scenarios, and that's far from the only example of algorithms gone awry. It was recently revealed that a system deployed by London's Metropolitan Police produces as many as 49 false matches for every hit. And during a House oversight committee hearing on facial recognition technologies last year, the U.S. Federal Bureau of Investigation admitted that the algorithms it uses to identify criminal suspects are wrong about 15 percent of the time.
Even Rick Smith, CEO of Axon, one of the largest suppliers of body cameras in the U.S., said this summer that facial recognition isn't yet accurate enough for law enforcement applications.
“[They aren’t] where they need to be to be making operational decisions off the facial recognition,” he said. “This is one where we think you don’t want to be premature and end up either where you have technical failures with disastrous outcomes or … there’s some unintended use case where it ends up being unacceptable publicly in terms of long-term use of the technology.”
But Vintra claims it strives to ensure its algorithms are “ethically developed in regard to data bias via continuous internal review” and that it “identif[ies] and correct[s] issues prior to production.” It also says it’s “committed to building solutions capable of reducing … unintended bias” and to respecting individual privacy concerns. “One of the most important parts about building AI solutions is ensuring that customers, the public, and communities can trust the solutions being deployed,” Boekestein said. “Toward that end, Vintra makes it easy for customers to test its solutions and review performance data on publicly available datasets.”
Short of open-sourced datasets and peer-reviewed research, though, we'll have to take the company's word for it.
Source: VentureBeat
