Striveworks CEO Jim Rebesco Speaking at Inaugural AI Expo
We’re excited to participate in the first Special Competitive Studies Project (SCSP) AI Expo for National Competitiveness, happening May 7 and 8 in Washington, DC. This new event convenes leaders across the public, private, and non-profit sectors to discuss the US’s competitive advantage and address pressing problems of national security.
Lead expo sponsor Palantir has devoted time on its Foxtrot Stage for Striveworks CEO Jim Rebesco to discuss “Disappearing MLOps: Improve the Lifespan of Mission-Critical AI.” The presentation explores why machine learning (ML) models struggle in production and how remediation extends their functional uptime—a critical element for trusting AI in live operations.
Striveworks is proud to partner with Palantir to bring actionable, AI-powered insights to our US and coalition partners.
Striveworks returns to SOF Week, May 6–10 in Tampa, Florida. In the past few years, AI has proven to be a game changer during live operations, and Striveworks is honored to support our Special Operations Forces with tools that maintain the uptime of their mission-critical AI workflows. Don’t miss this opportunity to see how Striveworks is streamlining AI model development, deployment, and maintenance for defense applications. Find us at Booth #4903 in the Carahsoft Partner Pavilion at the JW Marriott Tampa Water Street.
Join Eddy Chavarria (VP of Solutions Engineering and Strategic Partnerships at Striveworks) at the USGIF GEOINT Symposium on Monday, May 6, for his lightning talk on “The Day 3 Problem: Keeping Geospatial Computer Vision Models Performant.”
The Striveworks team is on-site at the Gaylord Palms Resort & Convention Center in Kissimmee throughout the event to showcase how our tools and technologies are solving challenges for geospatial intelligence. Stop by Booth #1703 in the Carahsoft Pavilion to see demos of our no-code geospatial imagery application that makes it easy for analysts to generate and use AI insights in ArcGIS or any GEOINT workflow.
Chariot users can now view the results of model evaluations from Valor, the Striveworks evaluation service, directly in the platform. This integration makes it much more convenient to keep track of past evaluations and see how models perform against different datasets.
From the Model Catalog page, open a model version and click the Evaluations tab. The user interface then shows performance metrics for every evaluation run on that model version. Check out the Valor GitHub page to see the various metrics you can calculate using Valor.
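If you prefer to script evaluations outside Chariot, the open-source Valor client supports a flow along these lines. This is a rough sketch based on the patterns in the Valor README; class and method names have shifted between releases, so treat the details as assumptions and check the repo for the current API.

```python
# Rough sketch of the open-source Valor client; names and signatures
# are assumptions based on the README and may differ between releases.
from valor import connect, Dataset, Model, Datum, Annotation, GroundTruth, Prediction, Label

connect("http://localhost:8000")  # assumes a locally running Valor API

# Register a dataset with one labeled datum (toy example).
dataset = Dataset.create("demo-dataset")
dataset.add_groundtruth(
    GroundTruth(
        datum=Datum(uid="img-0"),
        annotations=[Annotation(labels=[Label(key="class", value="dog")])],
    )
)
dataset.finalize()

# Register a model's prediction for the same datum.
model = Model.create("demo-model")
model.add_prediction(
    dataset,
    Prediction(
        datum=Datum(uid="img-0"),
        annotations=[Annotation(labels=[Label(key="class", value="dog", score=0.92)])],
    ),
)

# Run a classification evaluation and print the resulting metrics
# (accuracy, precision, recall, F1, and so on).
evaluation = model.evaluate_classification(dataset)
evaluation.wait_for_completion()
print(evaluation.metrics)
```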
This is just the start of our industry-leading advancement in model evaluation. Soon, you’ll be able to compare multiple models across various conditions (location, time, etc.), providing a richer and more fine-grained picture of model performance across task types and use cases.
Learn more about Valor in our blog post below.
New Blog Post
Understanding Performance Bias With the Valor Model Evaluation Service
Most ML benchmarks rely on a single metric to judge performance—but doing so can be deceiving. The problem is that single metrics easily overlook performance bias: when a model performs worse on a particular segment of data than on the dataset as a whole.
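A toy example makes the problem concrete. The snippet below is plain Python, not Valor’s API: it scores a classifier that looks fine in aggregate but fails completely on one segment of the data (the “day”/“night” segment labels are invented for illustration).

```python
# Illustrative only -- not Valor's API. A single aggregate metric can
# hide a complete failure on one segment of the data.
labels      = [1, 1, 1, 1, 0, 0, 0, 0, 1, 1]
predictions = [1, 1, 1, 1, 0, 0, 0, 1, 0, 0]
segments    = ["day"] * 7 + ["night"] * 3  # hypothetical capture condition

def accuracy(pairs):
    pairs = list(pairs)
    return sum(y == p for y, p in pairs) / len(pairs)

# Overall accuracy looks acceptable...
print(accuracy(zip(labels, predictions)))  # 0.7

# ...but slicing by segment shows the model never gets "night" right.
for seg in ("day", "night"):
    subset = [(y, p) for y, p, s in zip(labels, predictions, segments) if s == seg]
    print(seg, accuracy(subset))  # day 1.0, night 0.0
```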
Fortunately, Striveworks now has an open-source tool for understanding performance bias: the Valor evaluation service. Read the blog post from Eric Korman, Striveworks’ Chief Science Officer, to learn…
What is Valor?
How do I use Valor to understand model performance bias?
How can I start using Valor in my ML workflows today?
Eric Korman Explains Valor, the Striveworks Evaluation Service
Eric Korman is the Chief Science Officer at Striveworks. He leads our Research and Development team, which recently released Valor—our first-of-its-kind evaluation service for ML models.
We caught up with Eric to learn how this game-changing tool maintains the reliability of ML models in production.
How did your machine learning research ultimately lead to Valor?
So, in the MLOps space now, there are a lot of point solutions around model deployment and data management and experiment tracking. But what was really lacking, before we launched Valor, was a modern evaluation service. This is a service that will compute evaluations for you, store them centrally, make them shareable and queryable, and also provide more fine-grained evaluation metrics than just a single, all-encompassing number. It lets you really get an understanding of how your model performs—understanding different segments of your data, properties, those things. That’s the need we saw, so we built Valor to meet that need.
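As a sketch of what “stored centrally, shareable, and queryable” can look like in practice, the snippet below assumes a `get_evaluations` query method on the Valor client. The method name and its parameters are assumptions drawn from the open-source client, not confirmed API, so verify against the Valor repo.

```python
# Hypothetical sketch of querying centrally stored evaluations; the
# method name and parameters are assumptions, not confirmed Valor API.
from valor import connect, Client

connect("http://localhost:8000")
client = Client()

# Pull back every evaluation recorded for a model so past runs can be
# compared side by side instead of living in scattered notebooks.
for evaluation in client.get_evaluations(models=["demo-model"]):
    print(evaluation.id, evaluation.metrics)
```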
Interested in working with Striveworks? Curious about AI and machine learning projects that scale? Schedule a chat with our team to learn about our approach and how we’re making MLOps disappear.