Sammy boy dropped the bomb!
Sam Altman (co-founder of OpenAI, ex-president of Y Combinator) got on stage and gave a crazy talk.
I kept a running diary of my thoughts during the conference:
0:00 - Wow, if the leaks are true, this conference is going to change so many things! *gets a pen & starts playing OpenAI bingo*
0:04 - He recaps how ChatGPT has evolved and become a household name, all through word of mouth. All of this in just 1 year!
1 year ago, Nov 2022, ChatGPT was launched as a “research project”
8 months ago, Mar 2023, GPT4 was launched (the most capable model till date, flex worthy for sure)
1 month ago, Oct 2023, ChatGPT got superpowers - see, hear, talk.
1:40 - The flexing doesn't stop there (as it shouldn't). He throws some stats out for all the data nerds listening - 100 million weekly active users, 92% of the Fortune 500 using one of their products, and 2 million devs building with it.
2:30 - But then the talk takes a weird turn… He's showing a series of testimonies of how ChatGPT has changed people's lives. It looks like one of those episodes of 'Who Wants to Be a Millionaire' where they show a sentimental, life-changing backstory to get the audience on their side.
We already know you're changing people's lives, Sam. How much more do you want me to love you? I will watch it & broadcast it without a doubt :)
5:10 - OK, this is where the talk starts getting aggressive.
He puts up a slide. The crowd goes wild. The most awaited moment.
The new stuff is here - FINALLY! Launching… *drumroll* …
GPT4-Turbo. Exciting!
But to my disappointment, it was just more numbers. Impressive additions, but mostly for the devs. A few important points -
Previously 8k/32k context, now a 128k context length. Huh? Larger context = you can write a longer prompt and get a more detailed response back. At roughly 0.75 words per token, 128k tokens is about 96,000 words - around 300 pages of a book.
Output-as-JSON toggle. A trend I've noticed from my own experiments (vacation planner) is that JSON is super helpful. You can get the bot to generate data as JSON and use 1 standard HTML template to create 100s of pages! (A quick sketch of the JSON mode follows this list.)
All of ChatGPT's Vision, Speech and Dall-E features are now available through the API = new modalities.
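For the devs following along: here's roughly what that JSON toggle looks like through the API. A minimal sketch, assuming the openai Python package (v1+) and the GPT4-Turbo preview model name; the vacation-planner prompt is just my example.

```python
# Minimal sketch of JSON mode - needs the `openai` package and an API key.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-1106-preview",  # GPT4-Turbo preview at DevDay time
    response_format={"type": "json_object"},  # forces valid JSON output
    messages=[
        # JSON mode expects the word "JSON" to appear in the messages
        {"role": "system", "content": "You output vacation plans as JSON."},
        {"role": "user", "content": "Plan a 3-day trip to Lisbon."},
    ],
)

print(response.choices[0].message.content)  # a parseable JSON string
```

Because the output is guaranteed to parse, you can pipe it straight into that one HTML template instead of regex-ing a chat reply.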
Conclusion of the last 10 minutes - GPT4-Turbo is cheaper, faster, and can now read a prompt as long as an entire book.
14:40 - The dev-specific section ends. Slides change, a figure emerges from the dark, and Satya Nadella (CEO of Microsoft) comes on stage for a short chit-chat with Sam.
I almost wanted something to happen between the two tech-giant leaders as they came together, but honestly, nothing interesting happened. No cage fight. No drama. Next time maybe?
no cage fight here, keep reading.
19:40 - Satya leaves. Back to the main topic of the event - GPT4-Turbo. The moment of truth. If there were an OpenAI bingo card with all the spicy rumors/leaks, I would be losing badly…
…until now,
GPTs slide captured from the conference livestream
Introducing GPTs + GPT Studio
GPTs - Each specific problem now gets its own ChatGPT. There'll be a marketplace of them. It's kind of like Character.ai or Meta's AI-celebrity-bots, but focused on problems, not just fun conversations.
26:00 - Sam starts building a GPT live on stage *stop watch timer on*
Using plain English, he starts building a mini-Sam chatbot based on a lecture he'd given previously.
A real Inception moment; building a GPT to solve a problem using another GPT.
30:00 - He finishes building the GPT. It works perfectly. It took only 4 minutes.
4 freaking minutes to build a trained chatbot of Sam. Wild!
Mini-Sam GPT
31:05 - There’s a new marketplace in town. Oh baby, Santa keeps giving!
As with all new revolutions, a marketplace is bound to happen (that's one of the [WIP] products of AIverse too, stay tuned). Now is the time to build an app for the marketplace, before it gets saturated. Hop on before it's too late.
There’s revenue sharing for top apps.
It's easy to build - it only requires plain English.
Are you ready for the future? - source Twitter
Legends at work, Steve Jobs (L) & Sam Altman (R) - source Twitter
32:50 - The Grand Finale
What I'm thinking - did he just kill ALL the UI-wrapped AI applications? I mean, that's what the startups did: build a niche feature on top of GPT4.
But then he shows a slide for the devs wondering the same question. It has 2 words. It seems the conference is going to open up new possibilities for AI apps.
Assistants API = everything in GPT4-Turbo + dev features that make integration easy, now in the API!
That means that the API (connecting ChatGPT with your app/website) now has
Different modalities (image reader, PDF reader, image generator, speech input/output, and text)
Inbuilt context management - it handles long conversations, history, and data retrieval
File and code generation through the API (a rough sketch of the whole flow follows this list)
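To make that concrete, here's a hedged sketch of the Assistants API flow as announced (it was a beta at the time); the assistant name, instructions, and prompt are my own illustrative stand-ins.

```python
# Rough sketch of the beta Assistants API - names/prompts are illustrative.
import time

from openai import OpenAI

client = OpenAI()

# 1. Create an assistant; the built-in retrieval tool handles your documents.
assistant = client.beta.assistants.create(
    name="Vacation Planner",
    instructions="You plan trips and answer from the attached guides.",
    model="gpt-4-1106-preview",
    tools=[{"type": "retrieval"}],
)

# 2. A thread holds conversation state for you - no manual history juggling.
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id, role="user", content="Plan a 3-day trip to Lisbon."
)

# 3. Run the assistant on the thread and poll until it finishes.
run = client.beta.threads.runs.create(
    thread_id=thread.id, assistant_id=assistant.id
)
while run.status in ("queued", "in_progress"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

# 4. Read the assistant's reply back from the thread.
messages = client.beta.threads.messages.list(thread_id=thread.id)
print(messages.data[0].content[0].text.value)
```

The point: steps 2-4 are exactly the plumbing every wrapper startup used to hand-roll.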
While I initially thought this would kill AI startups, it in fact enables them to focus on the problem instead of managing the ChatGPT model. They have a framework ready. It opens new possibilities.
an (optimistic) ChatGPT wrapper builder 😆
42:25 - BONUS
The team from OpenAI gives away $500 of credit to each conference attendee via their assistant (trained on DevDay data). Works fabulously, everyone goes home rich 🤑
44:30 - Sam’s final words
“AI is going to be a technological and societal revolution, change the world and enable us to build more for all of us. It's time to go elevate humanity on a scale we've never seen before”
*adds take-over-the-world to my to-do list*
My take - what does it mean for UX/Product designers?
I think GPTs as a feature are here to stay, but as a concept they're temporary. While each problem for each person can now be solved with AI, a.k.a. a GPT, more & more problems + features can be combined. “AI” will move into the background - not as a feature, but as a layer that improves human-tech experiences. Conversational vs. invisible AI-UX.
The splitting of applications into singular use cases has happened before, right - Craigslist? Websites? App Store apps? So will there come a time when single-purpose apps start combining again?
Question I’m thinking about - Is the future ‘agents’ instead of apps?
AI chatbots are going to be integrated further into existing applications/websites. But not all of them will have the same experience. So learning the patterns, and understanding users and their priorities, is still important - and something only humans can do well (for now).
ChatGPT4-Vision and Dall-E 3 now share the same context. That means the model can understand an image AND generate new images based on it. For example, this person generated their own avatar.
I tried ChatGPT4-Vision recently and it's amazing! I believe there's potential for AI to fast-track the prototype-to-code part of the product process, i.e. sketch to code. Prediction: designers will start working more closely with developers, pick up basic logic and syntax to better contribute to the final solution, and design solutions specific to each individual user (not just a user persona).
Note: In the example below from ChatGPT4-Vision, the red-highlight-on-hover and double-click-to-edit are just annotations in the sketch, but in the generated HTML website the functionality is fully working & interactive.
Experimenting with ChatGPT4 Vision by author
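If you want to try the sketch-to-code trick yourself, this is roughly the API call behind it. A hedged sketch: the model name is the vision preview available around DevDay, and the image URL and prompt are placeholders.

```python
# Sketch-to-code via the vision model - URL and prompt are placeholders.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # GPT4-Turbo with vision, DevDay era
    max_tokens=1500,  # the vision preview defaults to a very short reply
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Turn this hand-drawn sketch into a working HTML "
                            "page. Red highlights mean hover states; double "
                            "click means the element is editable.",
                },
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/my-sketch.png"},
                },
            ],
        },
    ],
)

print(response.choices[0].message.content)  # the generated HTML
```

The annotation-to-interaction trick in my example above works precisely because the prompt spells out what each annotation means.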
"¡el equipo de diseño aiverse lo logró!
Rara vez pago por contenido, y ¡esto valió totalmente la pena!"
Jacob Sullivan
Director de operaciones de Faculty.ai
Consigue el ebook 'Patrones de AI-UX en Tendencia'