Product for Product Management
AI Tools: LLMs with Slobodan Manić
About This Episode
I joined Matt Green and Moshe Mikanovsky on the Product for Product Management podcast for a conversation about what product managers actually need to know about LLMs: where they work, where they don't, and why the hype around AI company valuations would collapse if everyone understood how these tools really function.
My framing for the episode: there are two ways to use an LLM. You either use it to understand something, or to avoid having to understand something. The first is productive. The second is where things go wrong. If you don't give an LLM good enough context, there's no way it gives you good enough results. And if you can't verify the output because you don't know what the correct answer is, you shouldn't be using an LLM for that task.
We got practical about context engineering: breaking tasks into small, manageable pieces instead of asking an LLM to do something that would take a human two weeks. When I was building PodPacer, feeding ChatGPT 10 transcripts at once produced hallucinated chaos. Feeding it one short transcript at a time with specific questions produced useful output. That's the difference between understanding the tool's limitations and hoping for magic.
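The one-transcript-at-a-time approach can be sketched in a few lines. This is a minimal illustration of the idea, not PodPacer's actual code; `ask_llm` is a hypothetical stand-in for whatever model call you use:

```python
from typing import Callable, List

def ask_per_transcript(
    transcripts: List[str],
    question: str,
    ask_llm: Callable[[str], str],
    max_chars: int = 12_000,
) -> List[str]:
    """Query the model once per transcript instead of dumping all of
    them into a single prompt, truncating each to a manageable size."""
    answers = []
    for i, transcript in enumerate(transcripts, start=1):
        prompt = (
            f"Here is ONE podcast transcript (episode {i}).\n"
            "Answer only from this text; say 'not in transcript' if unsure.\n\n"
            f"{transcript[:max_chars]}\n\nQuestion: {question}"
        )
        answers.append(ask_llm(prompt))
    return answers

# Stub in place of a real model call, just to show the shape:
echo = lambda prompt: f"{len(prompt)} chars received"
print(ask_per_transcript(["t1 text", "t2 text"], "Who is the guest?", echo))
```

The point is structural: one small, bounded context and one specific question per call, rather than ten transcripts and an open-ended task in a single prompt.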
On the bigger picture, I made the case for open source models and self-hosted AI. If you can run a small model on your own infrastructure and get 95% of the same results as these multi-billion dollar frontier models, maybe the future of AI isn't OpenAI and Anthropic. Maybe it's WordPress-style democratization: install it yourself, own your data, build on your terms. The whole wrapper economy of companies reselling tokens is built on shaky foundations, and when OpenAI decides to raise token prices, a lot of businesses will be in serious trouble.
Key Topics Discussed
- Two ways to use LLMs: understanding vs. avoiding understanding
- Context engineering: small tasks, specific questions, better results
- Why LLMs are hallucination engines without proper context
- The wrapper economy and the risk of building on someone else's tokens
- Open source models as the democratized future of AI
- Gemini and Google's ecosystem advantage over ChatGPT
- Privacy risks of cloud-based AI and the ChatGPT data exposure
- Building products with AI: from PodPacer to the AI Fluency Club
- Why AI should be invisible infrastructure, not a chatbot
Transcript
Matt Green: Hello, product people. Welcome to the Product for Product podcast, hosted by Matt Green, data advocate and product manager, and Moshe Mikanovsky, product leader and author. Our goal is to serve the product community by helping you find products that can help make your work in product management easier. Thanks for joining us on another episode of the Product for Product podcast.
Matt Green: Welcome back, everyone. On today's episode, Moshe and I are excited to speak with fellow podcaster and thought leader Slobodan Manić, host of the No Hacks podcast and co-founder of the AI Fluency Club, about key things to consider when using large language models, or LLMs. So let's dive in. Welcome to the show.
Slobodan Manić: Thank you for that wonderful introduction, and it's a pleasure to be on the show.
Matt Green: It's a pleasure to have you. Hey, Moshe.
Moshe Mikanovsky: Hey, Matt. Hey, Slobodan. Welcome. You are joining us from Portugal, I believe.
Slobodan Manić: I'm currently in Porto. In Portugal.
Moshe Mikanovsky: Yes, yes, Porto in Portugal, which I'm seeing a lot of recently. Maybe I have a lot of connections over there, but I see a few posts here and there on LinkedIn about a growing community over there in product, and a growing community over there in general, in AI, et cetera.
Slobodan Manić: It's happening, yeah. Also, we had an event two weeks ago at the time of this recording, Future of Experimentation, which was an invite-only experimentation event here with 24 people from all over Europe. It was amazing. It was really, really good. Yeah.
Moshe Mikanovsky: Very nice. Very nice. And we have you today as part of our series on AI tools that product managers should or might use in their process. Now, your main expertise these days is in experimentation, but you also dabbled in product management, I believe, right?
Slobodan Manić: A little bit with some light SaaS products. I'm not as advanced as you guys. I would never say I'm a product manager, product person, nothing like that. But everyone is kind of everything these days with more or less success. So in that sense, maybe. I don't know. Yeah, 100%. What I would describe myself as is an AI skeptic or anti AI hype person. And I need to say that every single time I talk about AI, because you can't just keep talking about AI all the time without sounding like you're hyping everything. I love the technology. I don't like what big tech is doing with the technology. I can put it that way.
Moshe Mikanovsky: It's really interesting, because throughout the show, Matt and me, Matt and I, I should say, have these two camps: I'm more in the anti-AI camp and he's more in the pro-AI camp. And what I think with you is, I'm not saying that you are in my camp, because you do try a lot of AI tools, you're not afraid to try them, and you actually talk a lot about it. Not that I'm afraid to try things myself; I'm trying things myself as well. But my skepticism is actually holding me back, I think. And you're somewhere between us, I believe.
Slobodan Manić: Probably, yes. Also, I'm probably more skeptical than the average person when it comes to AI, because everything those startups and even the big AI companies tell you is self-reported. It's them talking about their product, how great it is, how amazing it is. It was supposed to solve the energy crisis, it was supposed to cure cancer. Now we have erotica chat in ChatGPT. So I think the priorities have shifted, or at least they have finally been honest about what they're trying to do, and that's to get as many users as possible.
Matt Green: Yeah, for sure.
Slobodan Manić: I'm very skeptical about what they're doing.
Moshe Mikanovsky: Our audience didn't see me rolling my eyes when you mentioned that.
Matt Green: Yeah, that was the latest news from OpenAI these days. We see that a lot in product management as well. Something Moshe and I can agree on is this role of AI product manager; that's not a new thing, and you see a lot of hype in that space in our industry. And that comes, I think, a lot from these big tech companies that may have those roles carved out. At the end of the day, David Pereira, another thought leader in product, mentioned that it's still product management, and you can learn AI. I think that's probably the spirit of what we're going to do during this series: we're going to learn, and that's the way I approach it. I'm a product manager that knows AI, but my foundation is product. Yeah, yeah.
Moshe Mikanovsky: Interestingly enough, I was an AI product manager before ChatGPT was a thing. And I struggled a lot, because before the hype there were not too many of us, so it was hard for me to find someone to actually learn from. Or there were a lot of us, but the hype wasn't there, so no one knew we were there. Anyway, let's put that aside and start diving into our topic for today. Matt, you have used LLMs, and you can share your experience with different LLMs for doing research and learning. I know a lot of product managers are doing that. Many of them are writing their BRDs, PRDs, which is something I don't really like. I don't like PRDs, but still, people are doing those, user stories, et cetera. But we really need to understand and dive a little deeper: where does it work, where doesn't it work, and what do we still need to be careful about these days, even after the two, three years that they've been out there?
Slobodan Manić: I'll just say that there are many ways to categorize how you use an LLM. But the way I like to look at it is: you either use it to understand something, or to avoid having to understand something. And I think this is at the core of what we're going to talk about today, because using an LLM to understand what your work is, so you can do it better, is a thumbs up. Go ahead and do it all day, every day. Using an LLM to bypass having to understand what you're supposed to do is, to me, extremely problematic. And it's not even as bad now as it's going to be if people keep doing that.
Moshe Mikanovsky: That's a different way to put it.
Matt Green: Yeah, absolutely. So I use it on a daily basis, but I always go back to this: context is key. You know the business better than AI. You know the people, the culture; you should know the market and your application better than AI. So you have to keep that in mind when you're working alongside one. I always go back to partnership: AI is a partner in helping you understand and develop strategy. It can create documents for you, but at the end of the day you need to check those documents and make sure it's outputting what you want. It can whip up a PRD, it can do briefs and one-pagers and stuff like that for you. But just make sure that you have the context and you're ultimately the authority. AI is just providing information and formatting things for you, and I think that's where it can deliver the most value initially for PMs on a day-to-day basis.
Slobodan Manić: So if the world knew that the thing you just said, would valuation of AI companies go up or down?
Matt Green: Down.
Slobodan Manić: That's why most people don't think that way.
Matt Green: Yeah, yeah, yeah.
Moshe Mikanovsky: Tell us more about your take on that, Slobodan. I'm really curious, you know.
Slobodan Manić: Right.
Moshe Mikanovsky: And then later on, I also want to hear about your favorite go-to tools for that.
Slobodan Manić: Absolutely. So, my take on what Matt said: context, context, context, context. You can't just go to an LLM and say, how should I do this or that? Open five separate tabs and ask the same question and you'll get different answers, not just differently formatted paragraphs. It's going to be different things you need to do, or different contents in the document it gives you. So please don't trust without verifying. That's number one. You can never, ever, ever trust an LLM, because it's a hallucination engine; that's what it's built to do. If you don't give it good enough context, there's absolutely no way it's going to give you good enough results. Also, you need to break the task down into smaller, easy-to-manage tasks and have it solve those. Going back to vibe coding tools: you can't just say, build me this. You need to know exactly what every step of the build should be, or what every step of researching for a document should be, before you can ask an LLM and expect good results out of it. So I think this is the biggest takeaway for me from my extensive usage of LLMs to build things. When I was building PodPacer last year, the initial trial for that project was: I gave ChatGPT 10 transcripts of my podcast, or other podcasts, and I said, can you prepare an interview with this guest based on the 10 transcripts? And it would start to hallucinate, mix things up, make things up. It was all kinds of chaos. That was 4o; well, 5 is worse, objectively, I guess. But when I gave it one short transcript and started asking questions based on that one short transcript, like what can you tell me about this episode, it was not perfect, but it was more accurate. So you need to give it small pieces of information and a very direct task. Not just, hey, I want you to do something that would take a human two weeks. No, do something that would take an hour.
But please help me do that in five minutes or three minutes or whatever it is. So don't overwhelm it because it will start to hallucinate. There's no way it doesn't hallucinate in that scenario.
Moshe Mikanovsky: What I'm always wondering about is how things have progressed over time. My bad experience, and I might have mentioned it on the pod before, was when ChatGPT came out. The first time I tried it, I basically just asked questions about product management, because I knew this is something I know about, and I wanted to see what answers it would give me. Would it give someone who doesn't have this information things they could learn from, or would it misdirect them in directions they shouldn't really go? And it was really bad. I remember I showed it to Matt, and Matt tried it on his end, and he also got different answers.
Moshe Mikanovsky: There were a lot of similarities between them, but they were still different. And then I was like, how can I trust this tool? If I don't trust it for things I know, how can I trust it for things I don't know? And I kind of let it sit there for a year or a year and a half, didn't use it at all, unlike everyone else. And then I tried it again with the same questions, and it didn't do much better. And I was like, if you didn't improve after a year and a half, then how can I trust that it changes, or that it actually learns? And I've heard people saying it actually becomes even worse, because of the bad feedback it's getting from people, or bad data that gets added to the Internet by other AI models that are putting a lot of garbage out there.
Slobodan Manić: Yep.
Moshe Mikanovsky: So, coming back to what you just said: do you test it on these things over time? You got to the conclusion that it's not good to ask it a big question; you have to ask small questions and give it a lot of context. But do you think that will change over time, or should this still be the mode of operation?
Slobodan Manić: It depends on what you're asking it. If you're asking something that is common knowledge, that you can find on Wikipedia, it will give you a correct answer. But Google can do that, Wikipedia can do that, so what's the point? If you're asking something about a specific AI-based product for dog walkers, it doesn't know what the right thing to do is for that specific product, to use a very niche example that I'm sure it doesn't have in its knowledge, because it wasn't trained on such products. There's just no way it was. It's going to either hallucinate or tell you something that applies to a generic product, or some generic rules that should be best practices that work for everyone, but maybe they don't. So unless you know that what you're asking was in its data set when it was trained, don't assume it knows, and assume it will hallucinate. That is the way I approach it. But also, even if it's common knowledge: fact check. Fact check everything every LLM gives you, unless it's something very casual that you're just doing for fun.
Matt Green: Yep. And something I personally do, if I have a long thread going in Claude or some of these other models, because I use a bunch of different models to accomplish different tasks: I have memory usage enabled, so it remembers conversations I have. Going back to context is key, I'll have a long-running dialogue, and I'll do bits and pieces, like hey, improve this, improve that, and the whole time I'm moving along I'm giving it more information. By the time I'm down towards the bottom, it has a lot of context to make the critical decisions that I need it to make. They do offer a lot of deep research as well, so that is one area you can really dive into; building the building blocks up from the surface really helps.
Slobodan Manić: Absolutely. And deep research is amazing. But what it really does is find the 50-100 articles that Google would find and then analyze them; it's just a faster way to do what you could do with Google. Let's just be honest about what it is. It's not AGI or anything even pretending to be AGI. It's great, it saves time. But yeah, deep research is amazing. Usually when I have a big project, I start with a deep research prompt. My go-to is Gemini, because I'm just on Google; it integrates with everything I do. It gives me a long document based on the deep research it did, and that's the starting point, that's the context for the conversation. And then I give it more data that I have that it didn't find online. And then you have a better starting point for whatever you're trying to do.
Matt Green: Yeah, for sure. I've started using NotebookLM as a personal research partner. It uses a Gemini model, and you can give it up to 50 different sources: YouTube videos, text, websites, and stuff like that. Basically you can tailor it to what you want, and it'll output some great material that'll really help educate people. So yeah, I think Gemini and Google is a great model.
Slobodan Manić: I think they're catching up. Yeah, definitely.
Matt Green: For sure.
Moshe Mikanovsky: Yeah. I think that Gemini, or Google, did a really smart thing by infusing it into Google search, so you don't have to go somewhere else. For people like me who still use Google to search things, most of the time they give me those summaries from Gemini. I did try to use Gemini specifically on some occasions, but mostly to generate images, because it's integrated with their image creation. They have Nano Banana.
Matt Green: Banana.
Moshe Mikanovsky: Banana, yeah.
Slobodan Manić: What a dumb name.
Moshe Mikanovsky: I know, but it caught on. I mean, I think it's because it's dumb that people like to say it. But if you try to log into Nano Banana directly, it will give you credits, two credits for like two images, one image; if you try to do it in Gemini, it's free. I don't know, I don't really get it. But just yesterday I was using it for a presentation that I was building, and it really frustrated me, and I want to maybe talk a little about the frustration and how much you actually get those feelings. Because I asked it to do something, and it did it very nicely. And then I told it: this area in the picture, change it. And I gave it very specific instructions for how to change it. And it didn't change it; it gave me exactly the same thing. I told it, you didn't change it, it's the same image. And it said, oh, I'm so sorry, let me do it again. And then it was thinking even longer, and then it gave me the same image again. I'm like, what is going on? What are you doing over here? Why are you wasting my time? So it became really upsetting.
Matt Green: This is a big problem for me.
Slobodan Manić: I don't want anyone to see my chats with Gemini when I get frustrated, because that would get me canceled. No, it's not that bad.
Matt Green: There's a recent study that getting upset with the AI actually produces better results. So, you know, I'm not saying we need to be mean to our bots, but yeah, that's the actual research result right now.
Slobodan Manić: Yeah, the image model, it's something you use for fun. I honestly don't think it's there yet. You can't use it for ad assets, because you can tell it's AI. Even if it's great, you can tell it's AI. Some people will not mind; some people will really, really mind and hate you for it. So I think it's still too risky to use it for that specific thing. It's just fun. Even with a name like Nano Banana, it's supposed to be fun; that's my take on what Google was trying to achieve. But yeah, an LLM can get frustrating. Searching on Google, I was never upset at Google the way I've been upset with an LLM. I never had a "why are you so stupid" interaction with Google search like I have with an LLM daily.
Moshe Mikanovsky: That's absolutely true. Because I didn't expect it to be silly, or to do things that I didn't want it to do.
Slobodan Manić: So maybe Google is fine. I don't know. Maybe we don't need an LLM to tell us what the world is. Maybe we were doing fine without LLMs. I don't know.
Matt Green: And just to hang on that frustration piece, we see a lot of that in vibe coding. I personally do this vibe coding stuff, and you get up to that 80%, and then you tell it to make a change and it changes something else somewhere else. You're like, I didn't want that changed, I wanted this other thing changed. But to your point earlier, Slobodan, you're asking it to do way too much, way, way too much building.
Slobodan Manić: It's way too much. And one thing to add: every vibe coding tool is a wrapper around a foundation model. As far as I know, they don't train their own models or anything; they use Claude or ChatGPT or Gemini, whatever it is. And every one of those says, at the bottom of every conversation, that this tool can make things up. There's a warning. So every vibe coding tool is built on the fact that everything could be made up every time you ask it, and then there are 100 steps in one go. As a prototyping tool it's amazing, it's fun, whatever you want to call it. But I want to see a revenue-generating, vibe-coded app that's killing it. That's more profitable than ChatGPT, which is, well, anything more than zero, I guess.
Matt Green: Well, profit-wise.
Moshe Mikanovsky: Yeah, profit-wise. That's a whole other discussion, because I've seen and heard about this too. You might have even posted about it, Slobodan; I don't remember if it was you or someone else on LinkedIn, about how no one actually profits, because everyone is built on top of others, and those main ones are spending millions and billions of dollars on development without really profiting on anything. So there might be a crash there.
Slobodan Manić: It may have been somebody else; if it was recent, I don't know. But Juliana wrote about it on her Substack, Beyond the Mean, I think, about the wrapper economy. Beyond the foundation models and Nvidia and all that stuff, there's a whole wrapper economy of people basically reselling tokens in their apps. That's the business model: I'm reselling tokens and making a profit, until the token cost goes up, and then people think everything's too expensive and stop using it. It's all based on such shaky foundations that if I were building an AI-based product that requires someone else's tokens today, I would be very scared. Because what happens when OpenAI realizes this is not sustainable, that they need to charge more for these tokens because they're losing a lot of money?
Moshe Mikanovsky: That goes back to a very good product management topic: how much do you base your product on other products that might not exist in the future, or might change their model? There is a lot of that in many different areas of product. Part of it is relying on government and regulations; then the government changes the regulation and your product loses ground. But companies can also go under or change their models, and all of a sudden your model breaks. So it's a very good point.
Slobodan Manić: I think it's very risky to base your entire company, or the startup you're building, on someone else's tokens. And there are thousands, millions of companies being built this way. So I see that that could potentially be very dangerous. But on the other side of that, there are also open source LLMs that you can just run yourself and use yourself, even host on Cloudflare if you don't have the infrastructure to host them yourself. And I think that's the alternative; that's the bright future of AI. So I don't want to be all dark: there's a bright future. You can use them the same way WordPress took over 20 years ago as an open source solution that anyone could install and host while owning their data. You can do the same thing with LLMs. So maybe all of these big models are completely unnecessary when we have something like Qwen, or even Meta's models; they're open source, you can run them, you can host them yourself, and get 95% the same results you get from these super expensive models from these mega, multi-billion dollar companies. So maybe that's the future. I think if we're lucky, that's what the future of AI is. We'll see how things develop in the next few years.
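For the curious, "run it yourself" can be as simple as pointing a request at a locally hosted model. A minimal sketch assuming Ollama's default local endpoint and a pulled Qwen model (the endpoint and model tag are Ollama conventions; verify against your own setup). The snippet only builds the request, so nothing is sent here:

```python
import json
import urllib.request

# Ollama's default local endpoint; data never leaves your machine.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) a request to a self-hosted model."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# To actually run it: urllib.request.urlopen(req) with Ollama running
# locally and the model pulled, e.g. `ollama pull qwen2.5:7b`.
req = build_request("qwen2.5:7b", "Summarize this episode in three bullets.")
print(req.full_url)
```

The design point is the URL: it is localhost, not a cloud API, which is what makes the privacy argument above concrete.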
Matt Green: Yep. And privacy is a key component to all that as well.
Slobodan Manić: Absolutely, absolutely. Also, you can train your own models. My friend Iqbal Ali does a lot of that. He trains LLMs for his company and for other companies: a model that's 1 gigabyte in size that you can have on your computer, running on your computer, prompting it, and it's basically ChatGPT. So what's the point of all that?
Matt Green: That might be one of the competitive advantages in the future: training your own models. You have proprietary data, and you don't necessarily want to be sharing that with these other big enterprise companies, especially OpenAI, which, you know, has had its hiccups along the way.
Slobodan Manić: So it still says open in the name.
Matt Green: Yes, exactly. Yeah, it's the irony of it all.
Moshe Mikanovsky: Yes, but who knows? Microsoft used to be very anti open source, and over time one of those could do that as well. You don't know. I mean, that could happen as well, if they see that's the future.
Slobodan Manić: Yeah. I mean, I don't know how a large enterprise would use AI and keep their data private and all that, unless they're using an open source model that they're hosting themselves, or their own model that they developed themselves. I would not trust Anthropic's Claude, I would not trust OpenAI, I would not trust Google with any data that I'm sending, because they have a history of abusing data.
Moshe Mikanovsky: Now let's go back to the partnership and research. Let's say I'm new to this, I'm a new product manager, and I want to start doing that. Take me through some steps of how I might do that.
Slobodan Manić: So, I mean, you have to have an idea. Hopefully you have an idea for the product, for whatever you're trying to develop. Of course, brainstorming using AI to get to that idea, sure, but only as inspiration. You need to know what you want to build. I would start with that, and then, as we talked about earlier, really start with a deep research prompt that's going to gather as much information as possible. But then you need to proofread it, you need to check it. And really, the same way you would do it without AI, break it down into steps. You know the process better than I do; I'm not going to pretend I know what the process is for a PM. But take that same process, and at every step of the way, see how AI can make you move faster, be more efficient, or just give better output. So don't try to reinvent the process that's already there. This applies to anything, not just PM. Use AI every step of the way, but at each step independently, not across the entire process in one go, as I said earlier.
Moshe Mikanovsky: Yeah. And you guys also mentioned having notes, or somewhere you keep adding things, stuff like that. Do you do that within the LLM, or do you do that in a separate tool and then feed it into the LLM all the time?
Slobodan Manić: What do you mean, sorry? The notes?
Moshe Mikanovsky: Like, if you're adding context, how do you add the context to the LLM?
Slobodan Manić: Right there in the chat. It needs to know what the context is, otherwise it's not going to work. So when you start a conversation, it's good to give it, say, three documents that will give it enough context. Say: hey, I'm trying to work on this project, build this product, whatever you're working on, and these three documents will give you context; this one is for that. Explain exactly what the documents do, and then it's going to be a very different conversation. However, if it's a long chat, like you mentioned, Matt, it kind of forgets where it started, and that's because of the context window. You have to be careful about that and keep reminding it. Or, once you see that it's starting to say things that don't really make sense, maybe say: give me a summary of everything we talked about, and just move to a new chat with that summary.
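That summarize-and-restart routine can be approximated with a rough budget check. A sketch, assuming the common rule of thumb of roughly four characters per token; a real tokenizer differs by model, so treat the numbers as illustrative:

```python
def needs_summary(messages, budget_tokens=8000):
    """Rough heuristic: ~4 characters per token. When the running chat
    approaches the context window, it's time to summarize and restart."""
    est_tokens = sum(len(m) for m in messages) // 4
    return est_tokens > budget_tokens

def handoff_prompt(messages):
    """Ask the model to compress the conversation so the summary can
    seed a brand-new chat with fresh context."""
    return (
        "Summarize everything we discussed so far into a brief that a "
        "new conversation could start from:\n\n" + "\n".join(messages)
    )

chat = ["long message " * 500] * 10  # simulate a long-running thread
if needs_summary(chat):
    print(handoff_prompt(chat)[:60])
```

The threshold is deliberately conservative: asking for the summary before the window is actually exhausted is what keeps the hand-off summary itself from being degraded.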
Matt Green: Yep, I've had to do that. Yeah, it starts to slip up and float away. Another key component of all this: as a PM, or anybody working in your organization, identify what product you're using internally. Kind of going back to the security and privacy aspect of this: don't just run off with your company's proprietary data and pump it into a model. We use Copilot where I'm at, and Copilot works with OpenAI's GPT-5. Keep in mind that what you're giving it is company data. Just be mindful of that; I just want to have that as a caveat.
Slobodan Manić: I mean, would you upload data to Pastebin or something like that, where people can access it, where it might go public? No, absolutely not.
Matt Green: Exactly. Yeah.
Slobodan Manić: You never know. In the last week of July this year, there was a big scandal when chats with ChatGPT went public, right before they launched GPT-5, kind of to cover the news cycle, to bury that. What if that happens? Would your boss be happy with you if they saw you did something like that?
Matt Green: Probably not. No. So it's a critical, critical thing to keep in mind. But it doesn't mean there aren't a lot of other AI assistants out there. Atlassian has stuff, and you can use a main model, but a key component of all this is working in line with your partner, your AI partner. So if you're building out a backlog, Jira tickets, things like that, they have built-in AI components; they're using whatever model on their back end that they like. But at the end of the day, there are a bunch of other places where you're going to wind up using AI, beyond just a running prompt list like you might see in Copilot.
Slobodan Manić: And I had a conversation on my podcast with Olga Andrienko; she was with Semrush until about a month ago. She talked to me about which models they chose and what they implemented. It's an enterprise, so there was a lot of pushback. And then she figured out: wait, we're completely on Google. Everything we have, the infrastructure, is on Google. Let's just get everyone on Gemini. And getting the approval for something like that from their legal team was easy, because they were already giving Google all the data, whereas ChatGPT would have taken God knows how many hoops to jump through to get the approval to use it.
Matt Green: Absolutely, yeah. If you have a Google Workspace account in your company, it works in line with all the places you're in. It helps you write email, it helps you write one-pagers in your docs; it'll even help with your slide deck. So if you're building out a presentation slide, you give it some context, here are some words, I want this layout, and it'll actually recommend ways to improve it. That's the kind of inline AI assistant that you see a lot in product spaces these days.
Slobodan Manić: Which is why I love Google and Gemini: because of the integration with the entire ecosystem they have. I mean, ChatGPT has no ecosystem to integrate with; that's their problem. If I were betting, I would bet on Google to overtake OpenAI at some point, because of everything else they have. You know, there's talk about ChatGPT and Shopify integrating. Yeah, but what happens when Gemini integrates with Google Merchant Center?
Matt Green: Right.
Slobodan Manić: You can just shop from Gemini, every product in existence is going to be there, and Google can even get paid.
Matt Green: Yeah, it's a nice moat to have. And I think that's why OpenAI leans into the Microsoft partnership, because as we all know, Microsoft's moat in enterprise companies is so wide. So, hey, you get Copilot just because you have Microsoft 365, and with the investments back and forth between Microsoft and OpenAI, I think that gives them a foothold into enterprise data.
Slobodan Manić: Absolutely.
Musha Mikanovsky: Yeah, but that's also, you know, most enterprises still work with Microsoft rather than with Google, and for them it makes a lot of sense, for the same reasoning, to use Copilot. And with Microsoft owning part of OpenAI (I'm not following the exact percentages), maybe that's what will eventually happen.
Slobodan Manić: I think the partnership is kind of messy and getting messier, so I'm keeping an eye on what happens. OpenAI needs to start paying Microsoft for some of the infrastructure at some point. That partnership could get very ugly in a few years if you're OpenAI.
Musha Mikanovsky: Is Copilot a separate LLM?
Slobodan Manić: Again, I'm not on Microsoft at all, but I think they're trying to make it a separate thing. That's from what I've seen.
Matt Green: Yeah, I see them trying to break away, because as you work in Copilot you can try GPT-5, so they may be using smaller models, or 3.5 or 4, something like that. But I think in the end Microsoft leans toward doing its own thing long term, personally.
Musha Mikanovsky: But going back to which one an enterprise or a company might choose: Copilot, like you say, comes with everything Microsoft. I don't know if it's integrated as well as Gemini is with Google products, but it's the same idea.
Slobodan Manić: It's completely the same idea, yes. And if an enterprise is asking that question, what should we use, you already know the answer: whatever you're using for everything else. Because the foundational models, what they can do and the quality of their output, are very close, and they're always going to be very close. This is the same technology. So it's more about legal and what you can use safely than about which model you want to use.
Matt Green: Yeah.
Musha Mikanovsky: And the thing you mentioned, Matt, about the AI inside other tools, inside Jira or the other Atlassian products: this is really the context we keep talking about. Rather than context I have to feed it every time, the context already exists in these tools, and the AI should go find it there without me having to tell it again and again, here is the context.
Matt Green: Absolutely. Yeah. It can go look through your Atlassian Confluence and product discovery, it can go through all your Jira and find this context. And then when it comes down to building tickets, building your backlog, identifying things to work on, that's where the assistant comes in and helps you. It makes you that 10x product manager. On top of that, you can go focus on strategic value, business value, talking to customers, doing the things that a lot of product managers like to do but don't always get time to do, because we're often working on the backlog and stuff like that.
Slobodan Manić: On execution, yeah. That's the way AI, in my opinion, should be and probably will be in five or ten years: invisible. People will look back at the time when they had to type in chatgpt.com and go there to talk to AI the same way you look at going to a bank to collect your salary. It just makes no sense. It needs to be invisible, to be there for you, to provide the context on its own without you having to do it every time. And that will happen; we need more companies to integrate it, same with the assistants inside the tools. But the whole concept of having an assistant you have to go talk to, honestly, I don't get why it's such a big thing. Maybe it's the fascination with AI pretending to be human from sci-fi, and that's how we're trained to think about AI. Even the movie Her: that was Sam Altman's dream, building ChatGPT to be exactly like that, with the same voice. They tried to get Scarlett Johansson to provide the voice, she said no, they generated a similar voice with another actress, and then she sued them. It's a very ugly story that you can read about. But AI is not supposed to be that. It's not supposed to pretend it's a human. Just the name, artificial intelligence, makes people think that way, but it should be infrastructure that's present wherever you're doing your work.
Matt Green: Right. Yeah, there's a good quote, and I won't get it exactly right, but the best design and the best technology are invisible: you don't know you're using them. I think long term that's where AI will wind up. But right now everything's very intentional. You have to go ask it for things, you have to prompt it, you have to engage with it. At some point it's going to just start knowing things about you, and that's where personalization, for better or worse, comes into it, I think.
Musha Mikanovsky: So I have a bit of a different perspective on this, because humanoid robots are still a thing that people are developing and dreaming about. I've seen recently that by 2050 the worldwide market value of humanoids is expected to be double that of cars. I don't remember the exact number, but it was in the trillions, so maybe double, maybe even more. I think that for a humanoid to actually interact with you, it will have to understand things the way LLMs do. But the interaction is not going to be me prompting it, writing to it; it's going to be me talking to it. And it will have to talk back to me, not give me pages and pages of responses I have to read. So the interface is different. My next thought is that ChatGPT and all those LLMs came out with this user experience of prompt and response, prompt and response. It's a very lazy user experience, and unfortunately people who don't understand user experience adopted it for everything else, and people who do understand user experience, and I'm talking about designers, adopted it because everyone else is now using it. But it's not the right user experience for most tasks we're doing. And that's where I'm getting to what you just said: AI needs to be invisible, but the invisibility is about layering, where it does the work and how I get to it. There is still always a user interface. Whether it's a humanoid I'm talking with, I will notice the technology, unless it's so good it looks like a human being. But in the beginning, I think the humanoids are going to look like humanoids, or like machines.
So I will know I'm using technology, but the user interface there is going to be much more human than writing a prompt and getting responses I have to read through.
Slobodan Manić: Right, that's very possible. One counterpoint I would add: the human body only makes sense if you're trying to do everything that humans do. No one's going to build a machine that tries to do every single thing humans do unless it's trying to replace humans, and that's a dark future I don't even want to talk about.
Musha Mikanovsky: AI will do that.
Slobodan Manić: Well, sure, and it will create AGI and God knows what else. But take this cooking robot I saw on YouTube that has all the ingredients and can cook 20,000 recipes and whatnot. It has two arms, two arms to cook. It's not a humanoid pretending to be a chef, because that would be an inefficient use of all the materials used to build the body. You only need the arms if you're going to cook. I'm saying that whatever robot you have that's using AI, you don't need the entire human body for it. So I think the humanoid path is wrong, for efficiency purposes and nothing else.
Musha Mikanovsky: Well, it depends where there's a task, of course.
Slobodan Manić: Yeah, of course, of course.
Matt Green: Absolutely. Yeah. There's video out there of Optimus, the Tesla humanoid, folding laundry. It'll fold laundry for you. But the thing is, it has to be told to do that. So whether you're writing a prompt or telling the humanoid to fold laundry, this is how I want my laundry folded, you're still being explicit. You're telling it to do something. AI does not necessarily know how to do things without you telling it. So until we get to AGI and self-replicating AI that says, I already know how to do all this, I don't need humans to tell me to do things anymore, to your point, Sonny, whether it's a humanoid or a prompt inside an application, you still have to tell the intelligence to do something for you.
Slobodan Manić: Well, hopefully it stays that way forever, because it gets very dark for sure once we cross that line.
Musha Mikanovsky: Well, I can tell you that at my age, I'm kind of happy that I might not see, you know, that future.
Slobodan Manić: Yeah, 100%. Yeah.
Musha Mikanovsky: You know how sometimes we say, oh, it's good we're living in this age and not a hundred or a thousand years ago, because of all the technology and all the things we have? Sometimes I'm starting to think it's also good that I'm not going to be around a hundred years from now, or a thousand.
Matt Green: You might have one taking care of you down the road, as a nurse or something like that. But yeah, the barrier to entry is also going to be an obstacle for a lot of people: the purchasing power of, hey, can I actually buy a humanoid that has all this AI? So I still think it's a long, ambitious path, but there's interesting work out there.
Slobodan Manić: Absolutely, yeah.
Musha Mikanovsky: Amazing. This was an amazing discussion.
Musha Mikanovsky: You actually have a group that you started, which Matt mentioned at the beginning. Can you tell us a bit about that?
Slobodan Manić: Yeah. So this is something I started a few months ago with my friend Bjarn Brunenberg, who also lives here in Porto. I spent most of this year talking to people from growth, experimentation, and product about how they feel about AI. That's why I pivoted the No Hacks podcast to talk about the future of work, because of the changes AI is forcing. I talked to a lot of people about where they stand with AI adoption and whether they feel they're behind, and almost everyone felt behind, like everyone else is running while they're still crawling. Most of those people felt bad about not doing more and had no idea where to start. So we started a cohort. I think the first one was eight people, two weeks, a total of four hours online, building a workflow together using AI that they can use at work. We had people from huge companies joining, really high-level executives in experimentation and growth, and we helped them build whatever it is they need so they can use it at work. And the aha you get from people who felt there would never be a moment where they understand what this is and how to use it, and then two days later it's, wow, it works, I built something, I need to finish it and then I can use it at work, and they keep using it afterward, that's really what makes me optimistic about AI. The more people understand AI and its concepts, AI agents specifically, which is the big buzzword: it's just an LLM with access to tools and some memory. It's nothing fancy. It's not going to save the world or destroy the world. The excitement you get from people when they figure out, this is something I can do, even though a week ago I thought it would never happen. We kept going with the cohorts, and now we're launching a paid community in a few weeks.
It's at aifluencyclub.com, and there's going to be a monthly membership, not expensive for sure, at least to launch with. So if anyone's interested, you can either reach out to me directly or just go sign up for the waitlist.
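The "AI agent" definition above, just an LLM with access to tools and some memory, can be sketched in a few lines. This is purely illustrative: `llm()` here is a stub standing in for a real model call, and names like `run_agent` and `get_time` are hypothetical, not any real framework's API.

```python
# Toy agent loop: LLM + tools + memory, nothing fancy.

def get_time(_prompt):
    return "12:00"           # stand-in tool; a real one would hit a clock/API

TOOLS = {"get_time": get_time}

def llm(prompt, memory):
    """Stub 'model': asks for a tool call if the prompt needs the time."""
    if "time" in prompt and "TOOL_RESULT" not in prompt:
        return "CALL get_time"
    return f"Answer based on: {prompt}"

def run_agent(user_msg, memory=None):
    memory = memory or []                    # the agent's "memory"
    memory.append(("user", user_msg))
    prompt = user_msg
    reply = llm(prompt, memory)
    while reply.startswith("CALL "):         # the tool-use loop
        tool_name = reply.split()[1]
        result = TOOLS[tool_name](prompt)    # execute the requested tool
        prompt = f"{user_msg} TOOL_RESULT={result}"
        reply = llm(prompt, memory)          # feed the result back in
    memory.append(("assistant", reply))
    return reply, memory

reply, mem = run_agent("what time is it")
```

Swap the stub for a real model call and the dictionary for real tools, and this loop is essentially what the agent frameworks package up.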
Musha Mikanovsky: So it's aifluencyclub.com.
Slobodan Manić: Yep. Okay.
Musha Mikanovsky: Amazing.
Matt Green: That's great. Yeah, access to AI is everywhere: you have it on your phone, your desktop. And as people get used to it and see where it can deliver value, even without using it every single day, the barrier to entry is very low. Getting past that "hey, how do I get started with it?" moment, I think that's the aha, like you mentioned, Sonny, for most people.
Slobodan Manić: People, once you leave the chatbot, whatever your chatbot is, and start using AI in different ways, just building something with AI, that is going to be the biggest transformational thing for you in, in how you use AI, Right?
Matt Green: Yep, absolutely.
Musha Mikanovsky: A lot of people don't know yet how to use LLMs, and most people definitely don't know what AI can be once you leave the LLMs.
Slobodan Manić: Very true. Yep.
Musha Mikanovsky: Yeah, absolutely amazing. Matt, do you have any other questions?
Matt Green: Honestly, we could probably go on for a while. Obviously this is an area that I'm extremely passionate about and really interested in, and I think it's going to have a positive impact on product management. So maybe we'll learn along the way in our series on AI, PMs, and LLMs. Thank you, Sonny, for joining us today.
Slobodan Manić: Thank you for having me.
Musha Mikanovsky: Thank you so much. Sonny, where can people reach out to you?
Slobodan Manić: LinkedIn. My name is unique enough that there's only one uncle of mine with the same name on LinkedIn, and you're going to find me before you find him. So there, and nohackspod.com, that's the podcast. Since earlier this year it's been about how AI is changing work in different digital industries, SEO, CRO, all that stuff, and what people should do to adapt and not get lost along the way.
Matt Green: Awesome. Well, thank you so much again, Sonny. Thank you, Musha. Thank you to all the listeners, and we'll talk to everybody next time. Take care.
Musha Mikanovsky: Thank you very much.
Matt Green: Thank you to all the listeners. We really appreciate the feedback and support. Please leave us a review to help others find the show on Apple or Spotify or anywhere else you're listening to the show.