David Woollard is the Chief Technology Officer (CTO) at Standard AI. He is a tech industry veteran with over 20 years of experience, having worked at companies like Samsung and NASA, and as an entrepreneur at both early and late-stage startups. He holds a PhD in Computer Science, specializing in software architectures for high-performance computing.
Standard AI provides insights of unprecedented precision into shopper behavior, product performance, and store operations.
Can you share your journey from working at NASA’s Jet Propulsion Laboratory to becoming the CTO of Standard AI?
When I was at the Jet Propulsion Laboratory, my work focused primarily on large-scale data management for NASA missions. I got to work with incredible scientists and engineers, learning how to conduct research from outer space. Not only did I learn a lot about data science, but also about large-scale engineering project management, balancing risk and error budgets, and large-scale software systems design. My PhD work at the University of Southern California was in the area of software architectures for high-performance computing, and I was able to see the application of that research first-hand.
While I learned a tremendous amount from my time there, I also really wanted to work on things that were more tangible to everyday people. When I left JPL, I joined a friend who was founding a startup in the streaming video space, coming on as one of the first hires. I was hooked from the beginning on building consumer experiences and on startups generally, both of which felt like a break from my previous world. When I got the chance to join Standard, I was drawn to the combination of the hard scientific problems in AI and computer vision that I loved early in my career with the tangible consumer experiences I found most fulfilling.
What motivated the shift in Standard AI's focus from autonomous checkout solutions to broader retail AI applications?
Standard AI was founded seven years ago with the mission to bring autonomous checkout to market. While we succeeded in delivering the best-in-class computer-vision-only solution for autonomous checkout and launched autonomous stores, we ultimately found that user adoption was slower than anticipated and, consequently, the return on investment wasn’t there for retailers.
At the same time, we realized that there were a number of problems retailers experienced that we could solve with the same underlying technology. This renewed focus on operational insights and improvements allowed Standard to deliver a more direct ROI to retailers looking for opportunities to improve their efficiency and offset the effects of inflation and increased labor costs.
How does Standard AI’s computer vision technology track customer interactions with such high accuracy without using facial recognition?
Standard’s VISION platform is designed to track shoppers in real space by analyzing video from overhead cameras in the store, distinguishing between humans and other elements in each video, and estimating the pose, or skeletal structure, of each human. By looking through multiple cameras at the same time, we can reconstruct a 3D understanding of the space, just like we do with our two eyes. Because we have very precise measurements of each camera’s position, we can reconstruct a shopper’s position, orientation, and even hand placement, with high accuracy. Combined with advanced mapping algorithms, we can determine shopper movement and product interaction with 99% accuracy.
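To make the multi-camera geometry concrete, here is a minimal sketch (not Standard’s actual code) of how a single body keypoint, detected by a pose estimator in two or more calibrated overhead cameras, can be triangulated into 3D with the standard direct linear transform; the projection matrices and pixel coordinates below are illustrative assumptions.

```python
import numpy as np

def triangulate_point(proj_mats, pixels):
    """Triangulate one 3D point from its 2D detections in several
    calibrated cameras using the direct linear transform (DLT).

    proj_mats: list of 3x4 camera projection matrices P = K [R | t]
    pixels:    list of (u, v) pixel coordinates of the same keypoint
    """
    rows = []
    for P, (u, v) in zip(proj_mats, pixels):
        # Each view contributes two linear constraints on the
        # homogeneous 3D point X: u*(P[2] @ X) = P[0] @ X, and similarly for v.
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    # The least-squares solution of A @ X = 0 is the right singular
    # vector with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # back to Euclidean coordinates

# Illustrative example: two cameras with made-up projection matrices
# observing the same wrist keypoint.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])               # camera at the origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])  # 1 m to the side
print(triangulate_point([P1, P2], [(0.25, 0.1), (-0.15, 0.1)]))
```

In practice the same idea is applied to many keypoints per shopper and many cameras per store, but the core step is this kind of calibrated triangulation.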
How does Standard AI ensure the privacy of shoppers while collecting and analyzing data?
Unlike other tracking systems that use facial recognition to identify shoppers across different video streams, Standard determines a shopper’s pose using only structural information and spatial geometry. At no time does Standard’s tracking system rely on shopper biometrics that could be used for identification, such as the shopper’s face. In other words, we don’t know who a shopper is; we just know how shoppers are moving through the store.
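As an illustration of this kind of identity-free matching, the sketch below associates detections across frames purely by 3D proximity, with no appearance or facial features involved; the positions and distance threshold are illustrative assumptions, not Standard’s actual parameters.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate_tracks(prev_positions, curr_positions, max_dist=0.75):
    """Match shoppers between consecutive frames using only 3D position
    (e.g. estimated torso centroids in metres), never appearance data.
    Returns a list of (prev_index, curr_index) pairs."""
    # Pairwise Euclidean distances between previous and current detections.
    dists = np.linalg.norm(
        prev_positions[:, None, :] - curr_positions[None, :, :], axis=-1
    )
    rows, cols = linear_sum_assignment(dists)  # minimum-cost matching
    # Reject matches that imply an implausibly large jump between frames.
    return [(r, c) for r, c in zip(rows, cols) if dists[r, c] <= max_dist]

# Illustrative positions (metres) for two shoppers across two frames.
prev = np.array([[1.0, 2.0, 0.9], [4.0, 1.0, 0.9]])
curr = np.array([[4.1, 1.1, 0.9], [1.1, 2.0, 0.9]])
print(associate_tracks(prev, curr))  # [(0, 1), (1, 0)]
```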
What are some of the most significant insights retailers can gain from using Standard AI’s VISION platform?
Retailers can gain a number of insights using Standard’s VISION platform. Most significantly, retailers are able to get a better understanding of how shoppers are moving through their space and interacting with products. While other solutions give a basic understanding of traffic volume through a specific portion of a store, Standard records every shopper’s individual path and can distinguish between shoppers and store employees, giving a better accounting not just of traffic and dwell but of the specific behaviors of the shoppers who are buying products.
Additionally, Standard can detect when products are out of stock on the shelf and, more broadly, shelf conditions like missing facings that affect not just shoppers’ ability to purchase products but also their ability to form impressions of different brand offerings. This type of conversion and impression data is valuable to both the retailer and consumer packaged goods manufacturers. This data simply hasn’t been available before, and it carries big implications for improving operations on everything from merchandising and marketing to supply chain and shrink.
How can predictive insights from VISION transform marketing and merchandising strategies for retailers?
Because Standard creates a full digital replica of a store, including both the physical space (like shelf placements) and shopper movements, we have a rich data set from which to build predictive models, both to simulate store movement given physical changes (like merchandising updates and resets) and to predict shopper interactions based on their movement through the store. These predictive models allow retailers to experiment with and validate merchandising changes without having to invest in costly physical updates and long periods of in-store experimentation; a simple sketch of the idea follows below. Further, impressions of product performance and interaction can inform placement on the shelf or on endcaps. Altogether, these capabilities help retailers prioritize spend and drive greater returns.
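One simple way to think about simulating store movement from recorded paths is a first-order Markov chain over store zones; the sketch below is purely illustrative, with made-up zones and paths standing in for real shopper tracks, and is not Standard’s actual modeling approach.

```python
import numpy as np

# Hypothetical store zones and shopper paths, for illustration only.
zones = ["entrance", "produce", "snacks", "checkout"]
paths = [
    ["entrance", "produce", "snacks", "checkout"],
    ["entrance", "snacks", "checkout"],
    ["entrance", "produce", "checkout"],
]

idx = {z: i for i, z in enumerate(zones)}
counts = np.zeros((len(zones), len(zones)))
for path in paths:
    for a, b in zip(path, path[1:]):
        counts[idx[a], idx[b]] += 1

# Row-normalise transition counts into a first-order Markov chain.
transitions = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)

# Expected zone occupancy two steps after a shopper enters the store.
start = np.zeros(len(zones))
start[idx["entrance"]] = 1.0
print(start @ np.linalg.matrix_power(transitions, 2))
```

A model like this, refit after a proposed layout change, is the kind of lightweight simulation that lets a retailer compare merchandising options before committing to a physical reset.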
Could you provide examples of how real-time offers based on predicted customer paths have impacted sales in pilot tests?
While Standard doesn’t build the actual promotional systems used by retailers, we can use our understanding of shopper movement and our predictions of product interactions to help retailers understand a shopper’s intent, allowing the retailer to make deeply meaningful and timely promotions rather than general offers or recommendations based only on past purchases. Recommendations based on in-store behavior account for seasonality, availability, and intent, all of which translate into more effective promotional lift.
What were the outcomes of the tobacco tracking pilot, and how did it influence the brands involved?
Within a day of operating a pilot at one retailer, we were able to detect theft of tobacco products and flag it back to the retailer for corrective action. Longer term, we have been able to work with retailers to detect not just physical theft but also promotion abuse and compliance issues, both of which are very impactful not just to the retailer but to the tobacco brands that fund these promotions and spend significant resources on ensuring compliance manually. For example, we were also able to observe what happens when a customer’s first choice is out of stock: half of shoppers chose another product in the same family, but nearly a quarter purchased nothing. That’s potentially a lot of lost revenue that could be addressed if caught sooner. Because our VISION platform is always on, it has become an extension of tobacco brands’ sales teams, able to see (and alert on) the current state of any store in a retailer’s entire fleet at any time.
What are the biggest challenges you’ve faced in implementing AI solutions in physical retail, and how have you overcome them?
Working in retail environments has come with a number of challenges. Not only did we have to develop systems that were robust to issues common in the physical world (like camera drift, store changes, and hardware failures), we also had to develop processes that were compatible with retail operations. For example, with the recent Summer Olympics, many CPGs changed their packaging to promote Paris 2024. Because we visually identify SKUs based on their packaging, this meant we had to develop systems capable of flagging and handling these packaging changes.
From the beginning, Standard has chosen technical implementations that work with retailers’ existing processes rather than changing those processes to meet our requirements. Stores using our VISION platform operate just as they did before, without any changes to physical merchandising or complex and expensive physical retrofits (like introducing shelf sensors).
How do you see the role of AI evolving in the retail sector over the next decade?
I think that we are only scratching the surface of the digital transformation that AI will power within retail in the coming years. While AI today is largely synonymous with large language models and retailers are thinking about their AI strategy, we believe that AI will, in the near future, be a foundational enabling technology rather than a strategy in its own right. Systems like Standard’s VISION platform unlock unprecedented insights for retailers and allow them to tap the rich information in the video they are already capturing. The types of operational improvements we can deliver will form the backbone of retailers’ strategies for improving operational efficiency and margins without having to pass costs on to consumers.
Thank you for the great interview. Readers who wish to learn more should visit Standard AI.
Published on The Digital Insider at https://is.gd/Pr7jD2.