Incredible New AI Use Cases Available Right Now


We have a new best-in-class AI music generator, a little AI app for all the chess fans out there, and this has been the week for open source AI models. So much was released this week. With that said, let’s have a look at all the new AI releases that you could be putting to work today. Okay, first things first: the music generator.

Udio. This is a new player that entered the game, and it’s essentially a slightly better version of Suno. Some people claim it’s twice as good; there’s a discussion to be had there, but one thing is clear: it’s way better at certain instruments, and it’s way better at clear audio. There’s a lot to talk about, so I created a dedicated video covering this tool and all the other AI music tools that you might want to consider. If you want to just play around with it, you can go to udio.com, sign up for free, and as of today, April 10th, 2024, you can generate a large number of tracks a month for free, which is ridiculous. It’s really impressive and absolutely amazing. This tool has absolutely raised the bar for what AI music is today. So there you go. Did you hear that? It sounds really, really good. There are many examples here; you can just go to the home page and check out some of these. Here’s a Dune-themed one. Okay, let’s get serious.

All right, next up, quickly: I wanted to cover this just because it became so popular on GitHub, but there’s this brand new AI software builder called Plandex that has been getting more and more attention. This is super fresh. I have not played with this yet, but essentially it’s one of the many copies of Devin. If you remember Devin, the AI assistant that builds apps for you: well, this one doesn’t include vision yet, and all of it works inside the terminal. But I wanted to bring this up because there’s a whole open source movement forming around projects like this, where people are trying to build AIs that build software, and they do what they can, right? There are a lot of limitations around context windows and reasoning abilities.

These models were not created to run autonomous agents perfectly. Nevertheless, people build things like this, and they open source them. So if you’re looking for an AI coding engine for complex tasks that works in your terminal, there you go. Moving on to some fun updates. Let’s talk about a new feature that Poe added, and I want to keep this brief: they added a thing where you can monetize your chatbots for every single chat a person has with them. I wanted to feature this because I feel like it might outline the future of many of these products. Now, obviously, the GPT Store came out, it got a lot of people excited, and now nobody really cares about it, right? Like, when was the last time you browsed the GPT Store to find something interesting? I’d have to think for a second. All I use are my custom-built GPTs that do very specific things that are only relevant to me, and most people I talk to find the same thing. So right now might not be the moment where stores like this really shine; there are all these useful chatbots, but I find this approach to monetizing them really interesting: if people find your bot useful, they pay per request. What an amazing model! And I reckon that as these evolve over time, things like Plandex will be available through a GPT soonish, right? It’s just a question of time. Yeah, I can see a lot of people paying per request to build complex software with AI easily through something like ChatGPT. By the way, if you didn’t know, OpenAI has started talking to some GPT creators. I saw this on X recently. They’re basically trying to pilot something similar to what Poe is doing here today, but this is largely uncharted territory: how do you monetize chatbots that people could build themselves, and that can probably be prompt-injected so you can extract all the info that the maker put into them?
I don’t know; I thought this was an interesting development, paying per message, something we might see more of in the future.

I’ll just use the basic AI chatbot. It’s that easy. Easy peasy, lemon squeezy. When you head on over to the builder, you’ll see a basic flow. As you can see here, it starts with a message, and then it presents the user with a form that captures their email. But first, we need to head over to the knowledge base to add some data sources. You can upload documents, add sitemaps, and much more. So what I’ll simply do here is add my own little PDF with a guide on how to use a specific keyword. And there you go; now the bot has custom data. And now I have a choice: when the user talks to the chatbot, do I want it to use the knowledge base or an AI model? You can switch over to an AI model here, like GPT-4 Turbo. And as you might imagine, we could add all of these blocks here, make this more complex, and repeat the same thing over and over again. But let’s just preview the bot right here. A welcome message; we’ll fill out our email. And now it opens up the conversation. “Hi Igor, how can I assist you today?” And it uses AI to answer here. “What can you do for me?” And as you can see, it successfully answered with GPT-4. Now, there’s so much more in here, between the building blocks, the integrations with Zapier, and the knowledge bases you can add. And once done, you can easily publish this and add it to your website. If you want to start building AI-powered bots yourself, check out the link at the top of the description. And now let’s look at some more AI news you can use. Okay, but now let’s talk about all the open source language models that came out over the course of the last week. There are so many, and all of them are really good. This has been a massive week for open source.
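The flow above — welcome message, email capture, then answers from either a knowledge base or an AI model — can be sketched as a tiny state machine. This is a minimal illustration, not the builder’s actual API; all the function and variable names here are made up for the example.

```python
# Sketch of the builder flow described above: a welcome message, an
# email-capture step, then answers from a knowledge base with an AI fallback.
# Everything here is illustrative, not the chatbot builder's real interface.

def answer_from_knowledge_base(question, kb):
    # Naive lookup: return the stored answer whose keyword appears in the question.
    for keyword, answer in kb.items():
        if keyword in question.lower():
            return answer
    return None

def run_turn(state, user_input, kb):
    """Advance the bot one step; returns (reply, new_state)."""
    if state == "welcome":
        return "Hi! Please enter your email to continue.", "capture_email"
    if state == "capture_email":
        # A real builder would validate the address before moving on.
        return f"Thanks, {user_input}! How can I assist you today?", "chat"
    if state == "chat":
        kb_answer = answer_from_knowledge_base(user_input, kb)
        # Fall back to an AI model (stubbed out here) when the KB has no match.
        return kb_answer or "(AI model answers here)", "chat"

kb = {"keyword": "Here is the guide on using that keyword."}
reply, state = run_turn("welcome", "", kb)
reply, state = run_turn(state, "igor@example.com", kb)
reply, state = run_turn(state, "How do I use the keyword?", kb)
print(reply)  # → Here is the guide on using that keyword.
```

The point is just that these visual builders chain exactly this kind of step logic together, with the “AI model” block swapped in where the stub is.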

So what’s new? First of all, the Command R+ model by Cohere; then the Mixtral 8x22B model by Mistral; and then Stability AI came out with Stable LM 2 12B. What makes these unique? Why should you care? First things first: Cohere came out with Command R+, and it already ranks at the top of the Chatbot Arena among open source models. As you can see, this is the number one open source model here, as rated by the people; the benchmarks align, and there is super positive sentiment all across the internet towards it. People are loving this brand new model. Why, you might ask? Well, first of all, it has a large context window of 128k tokens. Secondly, it’s optimized for retrieval-augmented generation, aka RAG, which means it’s optimized for you to attach your own context and your own files to it, making it really good for business use cases. Also, it is trained to be multilingual, so you can work with different markets on the same model. And as I mentioned, this thing is fully open source. So this is a really exciting one for businesses and enterprises, because you can run it locally, it has a large context window, you can add a lot of your very own context via RAG, it’s multilingual, and it outperforms all the other open models. Whew, kind of a big deal. Every week we keep receiving better and better models, and this is no exception. But that’s not where this week ends. Beyond that, Mistral released another model, called Mixtral 8x22B. It’s also fully open source, and if we shift the conversation to strictly benchmarks, the new Mixtral model is actually slightly better than the Cohere Command R+ model here. Now, mind you, these just came out, and as with all of these language models, it takes some time for users to figure out if they’re any good in practice.
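To make the RAG idea above concrete: before the model answers, you retrieve the document chunks most relevant to the question and paste them into the prompt. Real setups rank chunks with embeddings and a vector store; this toy sketch uses plain word overlap just to show the shape of the technique, and all names in it are illustrative.

```python
# Toy retrieval-augmented generation (RAG): rank your own document chunks by
# relevance to the question, then stuff the top ones into the model's prompt.
# Word overlap stands in for real embedding similarity here.

def tokens(text):
    return {w.strip(".,?!") for w in text.lower().split()}

def score(question, chunk):
    # Naive relevance: number of words the question shares with the chunk.
    return len(tokens(question) & tokens(chunk))

def build_rag_prompt(question, chunks, top_k=2):
    ranked = sorted(chunks, key=lambda ch: score(question, ch), reverse=True)
    context = "\n".join(ranked[:top_k])
    return f"Answer using this context:\n{context}\n\nQuestion: {question}"

chunks = [
    "Our refund policy allows returns within 30 days.",
    "The office is closed on public holidays.",
    "Refund requests must include the order number.",
]
prompt = build_rag_prompt("How do I get a refund?", chunks)
print(prompt)  # The two refund-related chunks are selected as context.
```

A model like Command R+ is trained to make especially good use of context injected this way, which is why it suits the attach-your-own-files business use cases described above.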
But from my very limited testing and what I found in my research across the internet, Mixtral, which is built on a Mixture of Experts architecture, is the one you want to go to for general-purpose usage, and Cohere’s R+, as it is optimized for RAG, is the one you want to be using for business use cases where you’re going to be attaching a lot of extra files and a lot of extra knowledge. And there’s one more model that just came out: Stable LM 2 12B. The main selling point here is that this one is quite small and multilingual, so you have English, Spanish, German, Italian, French, Portuguese, and Dutch. Yeah, let’s test this with a German prompt about penguins. Perfect. Yes, we need this. There you go. If you didn’t know, models that are natively trained on multilingual data perform way better in other languages than something like GPT, which largely translates from the languages it has seen the most of. But let me tell you, using GPT-4 in something like Slovak, where there are five million inhabitants in the entire country and not a lot of literature across the internet compared to English, it’s just not the same model, as it’s translating a lot of its knowledge, and a lot of the information gets lost in translation. So for international audiences and use cases, this is preferable, and we’re getting so many options these days. It’s absolutely amazing. Just keep in mind that this arena doesn’t update every single day, so they update it here and there; right now, we don’t even have the Mixtral model on here yet.
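The Mixture of Experts idea behind Mixtral is worth a quick sketch: a gating function scores every “expert” sub-network for a given input, and only the top-k experts actually run, with their outputs mixed by gate weight. This toy version routes one whole input through hard-coded experts; a real MoE transformer routes per token with learned gates, so treat every name below as illustrative.

```python
# Toy Mixture-of-Experts routing: score all experts, run only the top_k,
# and combine their outputs weighted by (renormalized) gate probability.
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_forward(x, experts, gate_scores, top_k=2):
    """Run only the top_k experts and mix their outputs by gate weight."""
    weights = softmax(gate_scores)
    top = sorted(range(len(experts)), key=lambda i: weights[i], reverse=True)[:top_k]
    total = sum(weights[i] for i in top)
    # Weighted sum of the selected experts' outputs, weights renormalized
    # over just the chosen experts.
    return sum(weights[i] / total * experts[i](x) for i in top)

# Four stand-in "experts"; in a real model each is a feed-forward network.
experts = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3, lambda x: x * 0.5]
# Gate scores would come from a learned routing layer; hard-coded here.
out = moe_forward(10.0, experts, gate_scores=[2.0, 1.0, -1.0, 0.5], top_k=2)
print(out)  # ≈ 13.42 — only the two highest-scored experts contributed
```

This is why an 8x22B model is cheaper to run than its total parameter count suggests: only a couple of experts fire per token.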

One more thing about large language models this week: GPT-4 actually got an upgrade. Namely, they updated the GPT-4 Turbo API, their most capable API, to a new version, and that version is also inside ChatGPT. So to all of you ChatGPT users: it just got a little better. Now, how did it get better, and what are the major improvements? Well, at first we didn’t really know. If you look into the comment section here on X, all these people share that frustration. Hold up, hold up, hold up. Editing Igor here. I’m about to release this video, and a few hours ago they actually published what is better about it. They released this graph with the different benchmarks and improvements; on the left you can see before and after, and it’s essentially better on almost everything. But here’s the most important thing to me: they changed how GPT responds. It’s more conversational, so you get less of these bloated formulations like, “Hey, [friend’s name], just checking in to see if you had a chance.” That’s not how people speak, and now ChatGPT responds more like a human talking, which is absolutely amazing. One more thing that also just happened today: in the Chatbot Arena, this new GPT-4 Turbo model, which is also inside of ChatGPT, so both ChatGPT users and API users get the benefits, is the new king. It overtook Opus, and from my first tests, I can confirm it. They felt the competition, and they improved several things about it, especially this conversational writing style, which is really enjoyable. Go ahead and try it out in ChatGPT; it does make a difference. You can ask it to write messages, and if you have experience with the responses before it, you’ll feel the difference. It’s a bit more human now. So yeah, the king is back, I suppose. Okay, back to the video. But I did find Lewis here, who ran various needle-in-the-haystack benchmarks on GPT-4 Turbo, the new one versus the old one, and it got way better.
If you’re not familiar, it’s a very simple benchmark: they give the model a very long piece of text, and at different points they hide a specific piece of information; then they ask the large language model about that information. If it quotes it correctly, it found the needle in the haystack, and it scores well. They do that over and over again to test how good it is at retrieving little pieces of information hidden somewhere in the middle. GPT-4 used to be okay at this, but Claude Opus in particular was way better at it. Now, this seems to have gotten better. As you can see here, comparing all the different APIs they’ve had, the green line is the performance of the newest one, and a higher value is better. So it’s the best GPT-4 we’ve ever had in terms of retrieving things from a long context. Okay, next up.
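The benchmark just described is simple enough to sketch end to end: bury one fact at a chosen depth inside long filler text, ask about it, and check whether the answer contains the fact. The model call is stubbed out below (a real harness would send the context plus question to an LLM API); the filler, needle, and function names are all made up for illustration.

```python
# Minimal needle-in-a-haystack harness: insert a fact at varying depths in a
# long text and score whether the (stubbed) model retrieves it.

FILLER = "The quick brown fox jumps over the lazy dog. " * 50
NEEDLE = "The secret launch code is 7432."

def build_haystack(depth):
    """depth in [0, 1]: how far into the filler the needle is buried."""
    cut = int(len(FILLER) * depth)
    return FILLER[:cut] + NEEDLE + " " + FILLER[cut:]

def ask_model(context, question):
    # Stub standing in for a real LLM call. A real harness would send
    # context + question to the API and return the completion text.
    start = context.find("The secret launch code is")
    return context[start:start + len(NEEDLE)]

def needle_found(depth):
    haystack = build_haystack(depth)
    answer = ask_model(haystack, "What is the secret launch code?")
    return "7432" in answer

# Score retrieval at several depths, like the depth axis in those charts.
results = {d: needle_found(d) for d in (0.0, 0.25, 0.5, 0.75, 1.0)}
```

Running this over many depths and context lengths, then plotting found/not-found, produces exactly the kind of heatmap-style chart referenced above.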

I promised you something for the chess fanatics in here, and I myself am not even a big chess player. I used to play a lot as a kid with my dad, but over the years I became rather rusty. And this is where you can use this tool called Noctie.ai. I realize that Chess.com does something similar to this, but there are some unique features in here, like flashcards that it generates to teach you certain moves, and live feedback as you’re playing. And you can also just play this rating game, where you play against the AI and it tells you what Elo you have. Let me just quickly do that. All right, 821; in Lichess terms, that’s around 1.1k. As mentioned, it is not great. Not terrible. But yeah, there you go. AI has been a thing in chess for quite a while now, but this is a slightly different approach. You can practice openings just like with other apps, create these flashcards, and apparently use AI for all of it. Okay, then I have one more app, and this one is really simple but very cool. It’s completely free as of now. It’s called Fix Blur, and it does one thing: it fixes blurry images. So, I don’t know; let’s just take this blurry image right here and see what it can do. “Fix a photo now” and upload an image. Look, this couldn’t be any simpler, right? We’re doing this in real time. It’s uploading, and there you go. Five photos left today. Come on, do a good job; I want this tool to be really good. There you go. Wow. That’s way better. What a cool little tool, and it’s completely free.
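For anyone curious how an Elo estimate like that 821 is actually computed: the standard Elo system predicts your expected score from the rating difference on a logistic curve, then nudges your rating by K times (actual result minus expected result) after each game. This is the textbook formula, not Noctie’s internal method, which isn’t public as far as I know.

```python
# Standard Elo rating math: expected score from rating difference, then a
# K-factor update after each game.

def expected_score(rating_a, rating_b):
    # Probability-like expected score for player A against player B.
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

def update_rating(rating, opponent, score, k=32):
    """score: 1 for a win, 0.5 for a draw, 0 for a loss."""
    return rating + k * (score - expected_score(rating, opponent))

# An 821-rated player beating a 1000-rated opponent gains more than usual,
# because the expected score against a stronger opponent is low:
new_rating = update_rating(821, 1000, score=1)
print(new_rating)  # ≈ 844.6
```

Tools like this just run many such games (or move-quality comparisons) and converge on the rating that best explains your results.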

About Anushka Agrawal
