a Dutch futurologist and, like me, the kick-off speaker of the Groningen counterpart of this event.
In his talk he presented various future technologies. Most of them I already knew; others did not interest me.
But then, at the end, he presented the brand-new language model ChatGPT. First he had ChatGPT write a simple summary of a text (Mother Courage) from the point of view of a 12-year-old. Second,
ChatGPT was asked to explain a fusion reactor in a way that a 10-year-old boy would understand.
From my job as a marketing specialist, I know how hard it is for most people to fill a blank sheet with content.
The same people have less trouble correcting an already finished text.
With the language model, every user gets a whole team of assistants that fill blank sheets with more or less intelligent and eloquent text.
Of course, you have to read through the result and verify and correct the accuracy of the information, but much of the effort is handled by the AI.
Even the makers of ChatGPT were more than surprised by the interest in their service.
The language model itself had been around for over a year, but only as an API, a tool, so to speak, for nerds who can program.
At some point they changed it to a user-friendly chat situation and made it available to a larger target group.
In an interview with Heise Online, John Schulman, Sandhini Agarwal, Liam Fedus and Jan Leike, the brains behind ChatGPT, said they had expected it to attract some interest.
But no one expected this level of mainstream popularity, and since that "iPhone moment" the whole company has been chasing its success.
WITH ALL THE GROWING PAINS
Of course, the language model itself will continue to be worked on and the response quality will be improved successively, but the basic service is available and the benefit is clear.
The challenge users currently face is understanding the language model.
To create qualified and useful content, you have to ask the right questions.
For the moment, many people are sparing themselves this work, because they realize that ChatGPT is encroaching on their jobs.
You are still safe – for the moment,
but I am convinced that this area will see rapid development.
Lanz and Precht already discussed this in their podcast: the brain is a real power guzzler. It makes up only 2% of body weight (for some, I suspect, even less) yet consumes almost 20% of your energy.
So as a human you do well not to run this super organ at full load all day.
That is why the brain goes straight into standby whenever something recurring comes along and the subconscious can perhaps do the job. Breathing, eating, drinking: all these things we have trained ourselves to do without the big machine running.
Our brain actually only turns on when something new, unknown and important happens.
This explains why we know things but still prefer to Google them rather than turning on the giant machine and searching our memory.
All tools that reduce the need for thinking are good tools that we like to use.
I am old,
I am even so old that I still remember the time before navigation devices.
If you wanted to reach your destination, you sometimes had to ask people and study analog street maps. Your energy guzzler ran at full speed.
The payoff was not only that you had no problem finding the way again the next time; gradually, a better and better map of your surroundings was also built and maintained in your brain.
Finding destinations caused fewer and fewer problems as that inner map became better and more complete.
With the invention of navigation devices, this old technique of wayfinding was handed over to the device to save energy.
Today, people use navigation aids in their own hometown, and even when they visit a destination for the second or third time, they cannot do without the navigation system.
In the future, people will use their minds for fewer and fewer thinking tasks and rely more and more on technical tools.
Not that thinking in these tools consumes no energy; it is just not your energy.
When using language models and the AI's answers, however, our brain's energy-saving mode holds a danger.
The creators always point out that the AI's results do not necessarily correspond 100% to the truth.
In the beginning you will still be able to recognize this, because the energy guzzler is still on duty, but after ten correct answers the brain will say: I don't need to check this anymore, I'll go to sleep.
The eleventh, perhaps wrong, answer will then not be recognized by your brain in standby.
Quite a messy situation.
Other applications and websites have also come into the spotlight as people search the whole internet for AI.
As I did.
Many of the AI apps existed before, but the success of ChatGPT has given them all a boost and brought them into the user's field of vision.
My next Google queries led me to image generators; after trying DALL-E, I quickly moved on to another incredibly powerful image generator: Midjourney.
Midjourney is not a website but a Discord bot.
Discord is a chat program that was initially aimed at gamers and programmers but has now reached the masses.
Discord offers a synchronized app for all devices, which makes the Midjourney bot a loyal companion. A job started at home on the computer shows me its result on my phone while I am on the go, and I always have all my pictures with me everywhere.
Within this app there is a bot that reacts to my commands.
These commands are called prompts, and the bot creates images according to them.
It quickly becomes clear that it is not easy to translate the images in your head into a prompt.
At this point, millions of users who have generated countless images and have tried countless prompts help.
You can find lists on the internet showing prompts and the images they produce.
In the end, it is your creativity that can generate amazing images.
Unfortunately, despite all the commands, the end result is not always what you imagined.
However, if this powerful tool is put in the hands of people who know how to use graphics programs, then anything can be achieved.
It is also possible to upload your own material, up to five images in total, and blend them into one another.
By entering different percentages, you can determine the degree of blending for each image and get good results. I haven't really managed to do that yet.
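The idea of blending by percentages can be sketched as a weighted average of pixel values. This is only a minimal illustration of the concept, not Midjourney's actual algorithm:

```python
# Toy illustration of blending images by percentages: each output pixel
# is a weighted average of the corresponding input pixels.

def blend(images, weights):
    """Blend same-sized grayscale images (lists of pixel rows) by weights."""
    total = sum(weights)
    norm = [w / total for w in weights]  # normalize percentages to sum to 1
    rows, cols = len(images[0]), len(images[0][0])
    return [
        [
            round(sum(w * img[r][c] for img, w in zip(images, norm)))
            for c in range(cols)
        ]
        for r in range(rows)
    ]

a = [[0, 0], [0, 0]]          # dark image
b = [[200, 200], [200, 200]]  # light image
print(blend([a, b], [75, 25]))  # 75% of a, 25% of b -> [[50, 50], [50, 50]]
```

With 75%/25%, each pixel ends up three quarters of the way toward the first image, which is exactly the "degree of blending" the sliders express.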
Nevertheless, this is also a creative tool for creative users.
The next exciting topic in terms of help from AI: explainer videos.
An explainer video is an effective way to offer things, services or products, especially when they need a bit more explanation.
To create such a video in a reasonable quality was a costly affair in the past.
A speaker had to be found, one person for the sound, one for the camera, a studio rented for a whole day for everyone, and at the end there was still post-production such as editing and color grading.
With the service I would like to present next, it is no longer witchcraft.
You write your text and choose a suitable avatar to speak it for you.
And now comes the crazy part: if you select a different language in the menu above, then, hocus pocus, your avatar delivers the text in Spanish from then on.
Now get out your calculator and add up what this service would have cost before AI.
Anyone who has worked with video knows that you can easily reduce the resolution, but making a video bigger, increasing its resolution, is sheer magic.
Crap input material = crap output material.
The same is true for speed.
If you want to play a video slower than it was recorded (that is, slow motion), you need more frames per second.
So where do you get frames and pixels that no one has captured?
The AI can help here. It compares consecutive frames and simply inserts a few new ones in between.
Likewise, if the footage is too small, the AI looks at the pixels in the image and calculates which pixels would fit in between.
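The two ideas above, inserting frames in time and pixels in space, can be sketched in their simplest possible form: plain linear interpolation. Real AI tools use learned models that do far better; this toy version only shows what "calculating what fits in between" means:

```python
# Toy sketch of interpolation for slow motion and upscaling.
# Linear averaging only; AI interpolators replace this with learned models.

def midpoint_frame(frame_a, frame_b):
    """Create an in-between frame by averaging the pixel values of two frames."""
    return [
        [(pa + pb) / 2 for pa, pb in zip(row_a, row_b)]
        for row_a, row_b in zip(frame_a, frame_b)
    ]

def upscale_row(row):
    """Double a row's width by inserting the average of each neighboring pair."""
    out = []
    for left, right in zip(row, row[1:]):
        out += [left, (left + right) / 2]
    out.append(row[-1])
    return out

f1 = [[0, 100]]
f2 = [[100, 200]]
print(midpoint_frame(f1, f2))      # [[50.0, 150.0]]
print(upscale_row([0, 100, 200]))  # [0, 50.0, 100, 150.0, 200]
```

The invented frame and the invented pixels are simply plausible averages of their neighbors, which is why crap input still means crap output: there is nothing better to average.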
The topic is already pretty hot. If I have to take a look into the crystal ball, I assume that you will be able to train an AI to look through all the old, unwatched videos lying around on hard drives and pick out the best scenes.
Whether you want to do that depends very much on what is on your videos.
Another exciting approach is Adobe‘s Podcast Studio.
Into this tool you drop one or more audio files and it transcribes them, for example a podcast.
Now I can change the order of the transcribed text and reassemble the podcast and Podcast Studio cuts the audio accordingly.
In addition, I can cut out annoying ums and ahs and enhance the voice from crappy cell-phone-microphone quality to studio quality.
Now even my grandma and her knitting circle can get into the podcast business.
I don't want to anticipate too much on the topic of generative music, but just this:
there is a 24/7 radio station on YouTube that plays only AI-generated music together with AI-generated videos.
Prof. Nicola L. Hein will certainly tell you where the journey in this area is headed in a moment.