DALL·E 2 vs Midjourney vs Stable Diffusion
A comparison of the most popular AI art generation tools
Text-to-image generation has been around for quite some time now. It began with the development of generative models such as Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs). You can read more about GANs here.
Looking at the broader domain, text-to-image models combine the Computer Vision (CV) and Natural Language Processing (NLP) subdomains. If you would like to read a comparison of NLP-based models, see ChatGPT vs Google Bard vs Bing AI.
Looking at these models more closely: DALL·E 2 is not open to the public, but you can request to join the program here. Midjourney, on the other hand, provides its service through its Discord server. Neither of these is open source, and they are likely to remain that way. Stable Diffusion, by contrast, is an open-source model; you can find an online workspace to work with it, as well as Google Colab notebooks for running it. These models were trained on a considerable amount of image and text data, and their inner workings will be discussed in another article. This time, we will compare how each model responds to a given set of prompts.