AI News: The ChatBot That FINALLY Beats ChatGPT


There's a lot of really fun and fascinating stuff happening in the world of AI this week, and I don't want to waste any time, so let's get right into it. Starting with the fact that we have a new top large language model: I've talked about Claude 3 Opus in past videos, and it is now officially the top model, according to most benchmarks.

Chatbot Arena is a crowdsourced platform where real humans use the various chatbots and then vote on which ones are their favorites. GPT-4 was the king for a long, long time, and this week Claude 3 Opus, the premium model of Claude, which costs 20 bucks a month, knocked GPT-4 down to take the top spot as the best-performing model as voted on by users. And as Claude takes the throne, Amazon wants to sit atop that throne with them. This week, Amazon said that they would spend another $2.75 billion on Anthropic. Amazon initially invested $1.25 billion, and now they're adding $2.75 billion more. Amazon will maintain a minority stake in the company and won't have an Anthropic board seat, the company said. So it sounds like a similar deal to what Microsoft has with OpenAI; Amazon is trying to strike up a very similar arrangement with Anthropic and their Claude platform.

And if you haven't used Claude yet and you've only used ChatGPT, definitely give Claude a try over at Claude.ai. Just keep in mind that, unlike ChatGPT, Claude is not going to generate images for you, and it isn't going to search the web for you. But for pretty much any other use case you can think of outside of those two, Claude is better at it.

And while we're talking about new leaders in the AI world, the company Databricks just released a new large language model called DBRX (I'm not sure how they want to pronounce it). According to the benchmark tests in their blog post, it outperforms every other open source model. You can see that in language understanding it beat out Llama 2, Mixtral, and Grok-1, and it beat out all three of those in programming and in math as well. And if we scroll down the blog post a little bit, DBRX also beats GPT-3.5, the model behind the current free version of ChatGPT; you can see that in every single one of these benchmarks, DBRX beat GPT-3.5. So if you're someone who loves playing around and testing different open source models, this new DBRX model may be one to play with. The weights are available over on Hugging Face, but they do have an application process, and they need to approve you before they let you download them.

And since we're on the topic of open source, there was some big news in the world of open source this week over at Stability AI. To most people outside of the AI world this is probably not big news, but to those of us who are closely paying attention, it was actually pretty crazy: Emad Mostaque, the CEO of Stability AI, seemingly abruptly stepped down from his role as CEO and relinquished his board position. The speculation is that he was more or less forced out, as opposed to actually stepping down, but all of the outward-facing information around the whole thing made it look like it was amicable and his own choice. Stability AI has had kind of a crazy ride recently, with most of the key employees and engineers leaving the company over the last several months. In fact, I don't even know if anybody who originally worked on Stable Diffusion still works at Stability AI, because they've all left in the last couple of months. There have also been a lot of rumors about Stability AI possibly filing for bankruptcy and having financial issues, so Emad stepping down kind of feels like confirmation that there were real problems at Stability AI. Again, I don't want to get too deep into the weeds with this, because most people watching probably don't care about it.

It's just this fascinating AI soap opera that I am really intrigued by. Emad did make a statement the same day he stepped down, quipping that his notifications were "resting in peace" and writing: "My shares have the majority of votes at Stability AI, and they have full board control. The concentration of power in AI is bad for us all. I decided to step down to fix this at Stability and elsewhere. We'll be sharing more soon." He then replied to his own post saying he's going "all in on decentralized AI." We don't totally know what his plans are or what he means by decentralized AI, but his statements almost seem to be saying, "I had too much power; you shouldn't give one man too much power, so I'm stepping down to solve that." That's the sort of outward-facing messaging that seems to be going around.

But Stability AI didn't slow down. The very next day after Emad stepped down from his position, they announced Stable Code Instruct 3B, a new coding large language model, seemingly to show that Stability is going to keep on shipping even though Emad is no longer there.

And continuing on the topic of open source large language models, Elon Musk said that all Premium subscribers on X are going to gain access to Grok this week. Now, I've had Grok for a while because I paid for the Premium+ subscription at 20 bucks a month, just so I could have access to Grok, play around with it, and compare it to the other chatbots. So I don't actually know if this has rolled out yet (I don't know how I would tell, because I've had it anyway), but it appears to be rolling out this week. So you might have it in your account if you are a Premium subscriber over on X.

Moving on to OpenAI news: OpenAI is now actually starting to monetize the GPT Store. Back when the GPT Store was announced, a big piece of it was that you could create GPTs, sell access to them, and actually earn income from these custom GPTs. A lot of people built GPTs, but nobody's really made money off of them yet.

I saw this tweet from Matt Schlitt saying that he got this email from OpenAI: "Today we announced a pilot program to partner with builders to test GPT earnings based on usage by paying builders based on the usage of their GPT." Matt's tool is called Automated Writer, and the email said, "Your GPT, Automated Writer, is highly utilized by ChatGPT users, and we'd love to incorporate your feedback as we define the approach to GPT monetization." So if you have custom GPTs in the GPT Store and they're doing well, you may have an email like this in your inbox, inviting you to actually earn with ChatGPT. And then there was this tweet from Nick Dobos, who created the popular Grimoire GPT. He actually got an email from OpenAI after asking a few more questions; they told him, "You'll earn a minimum of $1,000 USD monthly with the potential for additional earnings based on usage of your GPT. The initiative will start on May 1st, 2024, with the first payout scheduled for mid-June."

If you create content online at all, you're going to want to check out this completely free ebook called Using Generative AI to Scale Your Content Operations. If you've got a business and you're not creating content online, you're falling behind. That's why I put a link in the description to grab this free guide. When it comes to content creation and AI, it's hard to know what is okay and what's not in the eyes of places like Google. That's why I found this section particularly helpful, where the book breaks down what Google does and doesn't expect from your content and whether or not it's okay to use AI in your content. Other than that, you'll learn about how to access generative AI, how to create the perfect prompts, using generative AI in your marketing and content strategy, limitations of AI, and so much more. This really is a great resource from HubSpot, and that's why I partnered with them for this video. Once again, the link will be right at the top of the description below this video. And thank you so much to HubSpot for sponsoring this video.

In other OpenAI news that came out this week, there's a rumor circulating that they're about to remove the usage quotas on GPT-4. A leaked image has been circulating that shows a smart model selector that might be coming out soon, which would essentially decide whether to use GPT-3.5 or GPT-4 when you ask ChatGPT a question. If the question doesn't require the full capabilities of GPT-4, it will route it to GPT-3.5, which is a lot less expensive to run; if it does require a more complex response, it will use GPT-4. That seems to be how they'd get rid of the limits: by switching back and forth to the most cost-effective model for each query. (There's a rough sketch of what that routing logic might look like at the end of this section.)

We also got a peek at some new Sora videos this week. Sora was made available to a handful of creators, and while I'm not going to show you all of the videos they made, they are super, super impressive. One of them even comes from a buddy of mine, Don Allen Stevenson III, so it was really cool to see that he got early access to Sora and was able to make videos with crazy hybrid creatures like a flying pig, a flamingo-giraffe hybrid, and whale-squids or whale-octopuses. Really, really cool stuff. One of the standout videos a lot of people have been circulating is called Air Head, by Shy Kids, about a person who has a balloon for a head, and it's some actually really good storytelling. Again, I don't want to play the video here, since I don't know the copyright situation on these videos, but I will link the post in the description below so you can check out some of the new Sora videos that have been circulating. Just like last time, they are really dang impressive.

And while we're on the topic of Sora, apparently OpenAI has been talking to the film industry, talking to people in Hollywood, trying to get them on board with the Sora software. Obviously, Hollywood is a segment of the world that has been very, very fearful of AI, because it threatens voice actors, actors, animators, and scriptwriters; there are a lot of roles in Hollywood that AI kind of threatens right now. OpenAI is sort of on tour trying to get the people in Hollywood to come around to accepting and leveraging AI as a tool that will make their lives easier and lower budgets on videos, ideally not replacing jobs but helping make the people doing those jobs much more efficient.
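As promised, here's a rough sketch of what that "smart model selector" could look like in code. To be clear, this is purely my own illustration of the routing idea, not OpenAI's actual implementation: the looks_complex() heuristic is made up, and a real router would presumably use a small learned classifier instead of keyword matching.

```python
# Hypothetical sketch of a "smart model selector" that routes easy queries
# to a cheap model and hard ones to an expensive model. Illustrative only;
# this is NOT how OpenAI actually implements it.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CHEAP_MODEL = "gpt-3.5-turbo"
EXPENSIVE_MODEL = "gpt-4"

def looks_complex(prompt: str) -> bool:
    """Crude stand-in for whatever classifier a real router would use."""
    hard_keywords = ("prove", "debug", "step by step", "analyze", "refactor")
    return len(prompt) > 500 or any(k in prompt.lower() for k in hard_keywords)

def route_query(prompt: str) -> str:
    # Pick the cheapest model that can plausibly handle the request.
    model = EXPENSIVE_MODEL if looks_complex(prompt) else CHEAP_MODEL
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(route_query("What's the capital of France?"))  # goes to the cheap model
```

The appeal of this pattern is the economics: if most everyday queries can be answered by the cheaper model, the per-user cost drops enough that hard caps on the expensive model become unnecessary.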

This week we also got a new chatbot called Hume. You can find it over at hume.ai, and supposedly this is the first chatbot that actually has empathy. The model is designed for you to actually talk to it with your voice, and it tries to determine your emotions from your voice. So if it detects that you're angry, it will understand that and use it as context for its reply; if it senses that you're sad, or happy, or in a silly mood, it tries to pick up on those emotions and respond accordingly. They do have a demo that you can try out. Here's how my conversation with it went:

Me: What is Hume's mission?

Hume: Aligning AI with human wellbeing. I hear you're frustrated about something. How can I help?

Me: Why do you think I'm frustrated?

Hume: You know, I am not entirely sure. Let me try to understand better. I picked up on some confusion and maybe a hint of frustration, but I don't want to make assumptions. Could you tell me more about what's on your mind?

Me: I'm just very curious about Hume and excited to learn more, but I am trying to keep a very neutral tone of voice.

Hume: Understandable. I see. My apologies, I should not have jumped to conclusions about your tone. I am glad to hear you're just curious and excited to learn more about Hume. Please tell me what else you'd like to know. I am happy to provide more details about our mission and work.

Me: What makes Hume different from other large language models?

Hume: Well now, that's a great question. Unlike typical large language models, Hume's approach is centered on emotional intelligence and human well-being. Our models are built on deep insights from emotion science, not just raw language data. This allows us to truly understand and respond to human expressions, not just words. Want to dive deeper into the research behind it?

Me: Not right now, thank you.

Hume: Got it, no problem. I am happy I could give you a high-level overview. Feel free to circle back anytime if you want to explore the research further.

I'm not a huge fan of the voices; they could probably be a little more realistic. But I also doubt that's a top priority for them yet. They're probably still trying to dial in the tooling and the research, and making the voices sound realistic is probably going to be one of the final elements they polish.
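To make the "emotion as context" idea concrete, here's a minimal sketch of the pipeline as described above: detect emotion scores from the user's voice, then inject them into the prompt so the language model can adapt its tone. This is not Hume's actual API; detect_emotions() is a hypothetical stand-in for a real speech-emotion model.

```python
# Illustrative sketch of feeding detected voice emotions to a chat model as
# context. NOT Hume's real API; detect_emotions() is a hypothetical stub.

def detect_emotions(audio_path: str) -> dict[str, float]:
    """Hypothetical: a speech-emotion model would return scores like these."""
    return {"frustration": 0.7, "curiosity": 0.2, "joy": 0.1}

def build_messages(transcript: str, emotions: dict[str, float]) -> list[dict]:
    # Surface the dominant emotion in a system prompt so the model's reply
    # can acknowledge it -- the behavior seen in the demo above.
    dominant = max(emotions, key=emotions.get)
    system = (
        f"The user's voice sounds primarily like {dominant} (scores: {emotions}). "
        "Acknowledge their emotional state and adapt your tone before answering."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": transcript},
    ]

messages = build_messages("Why isn't my order here yet?", detect_emotions("call.wav"))
print(messages[0]["content"])  # the emotion context any chat model would see
```

If something like this is what's happening under the hood, it would also explain the demo's behavior: when my tone was deliberately neutral, the emotion scores were presumably ambiguous, which is why it hedged and apologized for "jumping to conclusions."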

Adobe had a big event this week as well. I believe it was just called Adobe Summit, but the biggest announcement to come out of it was Adobe GenStudio. GenStudio seems to be a central hub where you can generate images, keep brand assets, and build campaigns for various social media platforms; essentially one place to do all of your creative design work, with or without AI. You can see it's got a Brands tab with a brand description, logos, brand colors, brand fonts, images, copy, channels, style kits, custom models, and a place to store all of your assets. It also looks like it has Firefly built in and will show you what a Firefly image would look like directly inside of things like Facebook ads. So just a cool central hub to manage whatever you're working on. It's also got data and stats you can view inside of it to see how well your campaigns are performing. Looks pretty cool, but we don't have access to it yet, so we'll definitely dive into it more in a future video when it rolls out more widely.

HeyGen rolled out an awesome new feature. I actually haven't used it myself yet, but the demo videos look awesome. It's called Avatar in Motion 1.0: you can move your hands, make gestures, and use unique tones of voice, and HeyGen tracks, translates, and lip-syncs your video with any input text. Here's the example demo video of a person walking around, moving their hands as they walk, with the audio dubbed over and the lips still synced to the person talking: "Today we want to give you a peek into the future. You can use HeyGen to create UGC-style content even while people are in motion or moving their hands. You can even hear the background noise. Create a training video with specific tones and gestures and you'll find how natural HeyGen's videos can be." So basically this person filmed a video of themselves walking through the park and moving their hands around a little bit, then took some text and pasted it in. HeyGen created the new audio, dubbed it over, and made the lips look like they were saying the newly created audio. Pretty powerful.

While we're on the topic of AI video, LTX Studio had their launch party this week. They are now starting to give people on the waitlist access to the beta version of LTX. If you're not familiar with LTX Studio, it's a tool where you can enter a single prompt and it builds out an entire storyboard from that prompt. You can have consistent characters, so every scene looks like it features the same character throughout, and it's just a really cool all-in-one video creation platform that I'm excited about. I was actually going to make a video this week about LTX Studio, but I had a little tech hiccup, so I'll be releasing a video about it on Monday; look for that next week. I am really excited to dive in and put it through its paces.

Rowan Cheung shared on Twitter this week that researchers at MIT just introduced a new image generation method called distribution matching distillation. Basically, it can generate images 30 times faster than Stable Diffusion while matching or surpassing the quality of the outputs. If we look at the example here, we can see how quickly Stable Diffusion generates four images, and then how quickly the new model generates them. These appear to be the same images from the same seed, but the new model is way faster than Stable Diffusion with pretty much the same quality. It'll be cool to see that roll out; I imagine a lot of the AI image platforms will be using similar technology in the near future to get that kind of rapid generation.

While we're on the topic of rapid generation, Leonardo AI has a brand new feature this week: you can now use the Lightning XL model, which generates images really, really quickly, directly inside of Leonardo. If we come over to image generation, go up to the fine-tuned model dropdown, and select another model, they now have Leonardo Lightning XL. If we click on this one and generate with it, leaving Alchemy V2 turned on, let's just enter "lightning" for our prompt; it should generate really quickly. So let's click generate. It took roughly 10 seconds to generate all four of these images, and they're pretty solid quality. A cool new feature that just rolled out into Leonardo for faster image generation.
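Speed-ups like MIT's distillation method and these Lightning-style models generally rest on the same idea: sampling time in a diffusion model scales with the number of denoising steps, so distilling a ~50-step sampler down to one (or a few) steps buys you roughly that factor of speedup. Here's a minimal sketch of that relationship using the diffusers library. This is my own illustration, not MIT's or Leonardo's code, and note that running a vanilla model at one step is fast but produces low-quality output; the quality claim only holds with properly distilled weights.

```python
# Minimal timing sketch: diffusion sampling cost scales with step count.
# Uses the real `diffusers` library; swap in actual distilled weights to get
# one-step generations that are also high quality.
import time
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a lighthouse at dusk, oil painting"

def timed_run(steps: int):
    # Re-seed each run so both comparisons start from the same noise.
    gen = torch.Generator("cuda").manual_seed(42)
    start = time.time()
    image = pipe(prompt, num_inference_steps=steps, generator=gen).images[0]
    print(f"{steps:>2}-step run: {time.time() - start:.1f}s")
    return image

baseline = timed_run(50)  # standard Stable Diffusion sampling
one_step = timed_run(1)   # ~50x fewer steps; quality needs distilled weights
```

The roughly 30x figure in the MIT demo lines up with this: most of the saved time is simply skipped denoising steps, with the distillation training doing the work of preserving image quality.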
Leonardo also rolled out another feature last week that I forgot to mention in a video, but it's really, really cool: the ability to remove the background on images. If I scroll through my images and look for an image of a person (here's one that was supposed to be Sam Altman, but didn't quite get there), there's a button that says Remove Background. If I click it, it uses a couple of your tokens, and now I have two versions down on the left: the original image and the no-background image. Pretty cool. I can then download it as a PNG file; if I open it on my computer, you can see it's got the background removed.

There are also some new Elements, which are basically LoRAs: they just added a Glowwave LoRA, a CGI Noir LoRA, a Digital Painting LoRA, and an Old School Comic LoRA. Those are all brand new, just added this week.

There's a cool new 3D design app that's been circulating called Bezi AI. I came across it because my buddy Bilawal shared it, and it appears to be another text-to-3D object generator that generates game-ready and movie-ready assets. If we check out bezi.com/ai, it looks like they do have a free tier, but when you're ready to start working with a team, that's when it starts costing money. So let's go ahead and get started and see how easy this is to use. Apparently when you sign up, you get a free asset bundle in your library. If I click continue, I get a little tour: we've got our asset gallery of existing 3D assets it created for us, and we have 20 Bezi AI credits. Let's create a new file; we get a 3D plane to work with, which feels similar to working in something like Blender. It seems like something I would need to spend a little bit of time learning, since I'm not familiar with it. There also seems to be a node-based system where you can add what they call "states."

It's definitely something I need to spend some time messing with to learn what the heck I'm doing, because I don't know what any of this means yet. But there is a button down on the bottom left that says Bezi AI. I'm going to click on that and see what happens. You just enter your prompt. I wonder what would happen if I use something that's trademarked, like an X-Wing spaceship, and then click generate. It actually created an X-Wing; look at that. Let's click on this one, which probably looks the best. I just drag it in. Is that how it works? Yep, it is. Look at this: I just dropped an X-Wing onto my 3D plane, and this feels very similar to Blender or Unreal Engine. I can move it around in different directions, and probably click this to scale it up and down. That is really cool. And it's free to use right now; I didn't pay anything to generate this. Again, you can find it over at bezi.com (that's B-E-Z-I).

And the final fun little thing I wanted to show you is that Qwebbelkop, a popular gaming YouTuber, created what he calls Qwebbelkop AI. I'm going to press play on this in just a second, but what you're going to see is an AI agent playing Minecraft, with an AI version of Qwebbelkop, in Qwebbelkop's voice, commentating over it.

Now, admittedly, it's kind of cheesy. Even he himself has said that they need to up their game and make it a little less cheesy, but it's come a long way. An AI is playing the game, an AI version of Qwebbelkop is commentating over it, and it actually sounds like his voice. So check this out; I'll just play a short clip of it: "Navigating birch trees, popping red flowers, it's like Mother Nature's pixelated garden party. Look, I'm gardening, sneaking under leaves. It's stealth mode me. Aha. Pretty flowers, going to make a fancy bouquet. Swinging axes, another tree bites the dust. Ah, fresh air and deforestation, my kind of day. Creeping under canopies, giant bird, sheep wool tactic camouflage, or nature's Minecraft prank." Absolutely hilarious. The jokes aren't that great, and it still has a little ways to go, but man, this is as bad as it's ever going to get. I know Qwebbelkop has been working on this for a while, and just how far I've seen him come with his AI generations in the last several months has been really impressive. So it's going to get more and more dialed in. Just thought I'd share it. Pretty fun. I'm keeping an eye on what he's doing with that; I think he's onto something with automated content creation. We'll see how it plays out.
