Everybody's doing it (a Stable Diffusion test post)
A few years back, in February 2019, I first heard of OpenAI's GPT-2, like quite a few other people. And while it sure was impressive, it only gave us a glimpse of what GPT-3 would be able to do a year later. Impressive doesn't begin to describe what some have accomplished with it.
But that feeling of awe turned out to be nothing compared to seeing what DALL·E 2, Google's Imagen and Midjourney are capable of.
But I'm still waiting on my DALL·E 2 beta access, Imagen is as tightly closed as can be, and there seems to be no plan to release Midjourney's model or offer a public API.
Now, saying Stable Diffusion is only great because it's freely available would be a reductive and simplistic statement.
Stable Diffusion
stability.ai's Stable Diffusion is great because it is a great model and because it is completely, publicly & freely available. Meaning pretty much anyone can use it. And anyone can build anything they want with it (within the confines of its license & the law, obviously).
In my case, I simply set out to install an open source Web UI someone made for it on an AWS EC2 GPU instance, using Docker, and to secure my and my friends' access to it all using Tailscale.
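For the curious, here's a rough sketch of what that setup amounts to, using boto3, the AWS SDK for Python. Every ID, name and image in it is a placeholder I made up for illustration; the actual Web UI image and port will depend on which project you pick.

```python
# A minimal sketch of the EC2 + Docker + Tailscale setup, using boto3
# (the AWS SDK for Python). All IDs, names, and the Web UI image below
# are made-up placeholders -- substitute your own.
import boto3

# Shell script passed as EC2 user data. Assumes an AMI that ships with
# NVIDIA drivers & the NVIDIA container toolkit (e.g. an AWS Deep
# Learning AMI), so `docker run --gpus all` works out of the box.
USER_DATA = """#!/bin/bash
set -euo pipefail

# Install Docker.
curl -fsSL https://get.docker.com | sh

# Install Tailscale and join the tailnet with a pre-generated auth key
# (placeholder below), so only devices on the tailnet can reach the box.
curl -fsSL https://tailscale.com/install.sh | sh
tailscale up --authkey tskey-REPLACE-ME

# Run a (hypothetical) Stable Diffusion Web UI image on port 7860.
# With no public inbound rules in the security group, the UI is only
# reachable over Tailscale.
docker run -d --gpus all -p 7860:7860 example/stable-diffusion-webui
"""

ec2 = boto3.client("ec2", region_name="eu-west-1")
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder: a GPU-ready AMI
    InstanceType="g4dn.xlarge",       # an entry-level NVIDIA GPU instance
    KeyName="my-key-pair",            # placeholder key pair
    SecurityGroupIds=["sg-0123456789abcdef0"],  # placeholder: no public inbound
    MinCount=1,
    MaxCount=1,
    UserData=USER_DATA,
)
print(response["Instances"][0]["InstanceId"])
```

The nice part of this arrangement is that nothing is exposed to the public internet: everyone on the tailnet just opens port 7860 on the instance's Tailscale IP.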
There are already several SaaS products, web apps & other online services making it available to the masses.
Some have made Web UIs (with quite interesting features such as prompt matrices & loopbacks), combining Stable Diffusion with other models for upscaling & fixing facial features.
Others have containerized those.
And yet others have already managed to combine Stable Diffusion with other models to generate truly mesmerizing animations.
So if you have a curious mind and like fooling around with wondrous new toys, figuring out what they can enable you to accomplish, now is the time to join the fray: to have fun generating truly absurd things, probe the limits of this brand new and fascinating technology, and see what we can build with it.