Creating Digital Art by AI Art Generators

  • Athiwat Thammawatsiri Mahachulalongkornrajavidyalaya University, Chiang Mai Campus
  • Phisit Kotsupho Mahachulalongkornrajavidyalaya University, Chiang Mai Campus
  • Boonchuay Doojai
  • Phatcharabot Rittem Mahachulalongkornrajavidyalaya University, Chiang Mai Campus
  • Phra Witawat Kochakan Mahamakut Buddhist University, Lanna Campus
Keywords: AI Art Generators, Generative Adversarial Networks (GANs), Diffusion Models, Digital Art Ethics, Creative Collaboration

Abstract

In the era of rapid technological evolution, artificial intelligence (AI) has emerged as a transformative force in digital art creation. AI Art Generators, powered primarily by Generative Adversarial Networks (GANs) and Diffusion Models, enable users to produce complex, high-quality visual content simply by providing text prompts. Tools such as OpenAI’s DALL·E, Midjourney, Stable Diffusion, and Adobe Firefly illustrate this paradigm shift by bridging advanced algorithms with user-friendly interfaces that make creative production accessible to the general public.
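As a compact illustration of the adversarial principle named above (following the standard formulation of Goodfellow et al., 2014, listed in the references), a GAN trains a generator $G$ against a discriminator $D$ in a minimax game:

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```

Here $D$ learns to distinguish real images $x$ from generated ones $G(z)$, while $G$ learns to fool it; at equilibrium the distribution of generated images matches the data distribution.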

This article explores the fundamental principles behind these systems, focusing on how GANs and Diffusion Models differ in architecture and image generation logic. Unlike traditional graphic design software, which relies heavily on the artist’s manual skills and iterative design processes, AI Art Generators automate tasks that were once labor-intensive. This automation democratizes artistic production but simultaneously raises critical questions about originality, authorship, and the role of human craftsmanship in the digital age.
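Where a GAN maps noise to an image in a single adversarial pass, a diffusion model instead corrupts training images step by step with Gaussian noise and learns to reverse that corruption. The forward (noising) half of this process has a simple closed form. The sketch below is a minimal pure-Python illustration of a DDPM-style linear noise schedule; the parameters `T`, `beta_start`, and `beta_end` are conventional defaults assumed here for illustration, not the configuration of any particular tool:

```python
import math
import random

def linear_beta_schedule(T=1000, beta_start=1e-4, beta_end=0.02):
    # Linearly spaced per-step noise variances beta_t (conventional defaults).
    return [beta_start + (beta_end - beta_start) * t / (T - 1) for t in range(T)]

def alpha_bars(betas):
    # Cumulative product of (1 - beta_t): the fraction of the original
    # signal's variance that survives after t noising steps.
    out, prod = [], 1.0
    for b in betas:
        prod *= 1.0 - b
        out.append(prod)
    return out

def noisy_sample(x0, t, abars, rng):
    # Closed form of the forward process for a single scalar "pixel":
    # x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps,  eps ~ N(0, 1)
    eps = rng.gauss(0.0, 1.0)
    return math.sqrt(abars[t]) * x0 + math.sqrt(1.0 - abars[t]) * eps

betas = linear_beta_schedule()
abars = alpha_bars(betas)

# Early steps keep almost all of the signal; the final step is nearly pure noise.
print(f"abar_1 = {abars[0]:.4f}, abar_T = {abars[-1]:.6f}")
```

A trained diffusion model performs the reverse direction: given a noisy sample and its step index, it predicts the added noise, so that generation can walk from pure noise back to a clean image. This iterative denoising is the core architectural difference from a GAN's one-shot generator.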

The widespread adoption of AI-generated art has sparked global debates about intellectual property rights, ethical training datasets, and the potential misuse of artists’ works without consent. Recent lawsuits involving Midjourney and Stability AI highlight the need for clearer legal frameworks that balance technological innovation with respect for creators’ rights. The paper argues that AI should not be viewed solely as a replacement for human creativity but rather as a powerful collaborator that expands the boundaries of visual expression.

By examining the technical foundations, creative implications, and ethical challenges surrounding AI Art Generators, this article contributes to an interdisciplinary understanding of how emerging technologies are reshaping artistic practice and cultural production. It calls for collaborative strategies among artists, developers, and policymakers to ensure that AI-driven creativity evolves responsibly and inclusively in the years to come.

References

Ramesh, A., Pavlov, M., Goh, G., Gray, S., Voss, C., Radford, A., Chen, M., & Sutskever, I. (2021). Zero-shot text-to-image generation. Proceedings of Machine Learning Research, 139. https://proceedings.mlr.press/v139/ramesh21a/ramesh21a.pdf

Ramesh, A., Dhariwal, P., & Nichol, A. (2022). Hierarchical text-conditional image generation with CLIP latents. arXiv preprint arXiv:2204.06125. https://3dvar.com/Ramesh2022Hierarchical.pdf

Adobe. (2024). What is Adobe Firefly? https://www.adobe.com/sensei/generative-ai/firefly.html

Elgammal, A. (2019). AI is blurring the definition of artist. American Scientist, 107(1), 18–21.

Elgammal, A., Liu, B., Elhoseiny, M., & Mazzone, M. (2017). CAN: Creative Adversarial Networks, generating “art” by learning about styles and deviating from style norms. arXiv preprint arXiv:1706.07068. https://arxiv.org/abs/1706.07068

AP News. (2023, January 17). Artists file lawsuit against AI image generators. https://apnews.com/article/technology-lawsuits-copyright-artificial-intelligence-bf67b5e5b70e6e384961af1dfe6560e0

Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). Attention is all you need. Advances in Neural Information Processing Systems, 30. https://proceedings.neurips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf

Paul, C. (2015). Digital art (3rd ed.). Thames & Hudson.

Goodfellow, I. J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., & Bengio, Y. (2014). Generative adversarial nets. arXiv preprint arXiv:1406.2661. https://arxiv.org/pdf/1406.2661

Vincent, J. (2018). A porn company promises to insert customers into scenes using deepfakes. The Verge. https://www.theverge.com/2018/8/21/17763278/deepfake-porn-custom-clips-naughty-america

McCormack, J., Gifford, T., & Hutchings, P. (2019). Autonomy, authenticity, authorship and intention in computer generated art. https://link.springer.com/chapter/10.1007/978-3-030-16667-0_3

Lovejoy, M. (2004). Digital currents: Art in the electronic age. Routledge. https://www.routledge.com/Digital-Currents-Art-in-the-Electronic-Age/Lovejoy/p/book/9780415307819

Mazzone, M., & Elgammal, A. (2019). Art, creativity, and the potential of artificial intelligence. Arts, 8(1), 26. https://www.mdpi.com/2076-0752/8/1/26

OpenAI. (2023). DALL·E 3 system card. https://openai.com/dall-e-3

Thammawatsiri, P. A., Kotsupho, P., Rittem, P., Kochakan, P. W., Tatnormjit, T., & Makaew, L. (2024). The digital arts creation in the modern world’s innovation. Lanna Journal of Buddhist Studies and Cultures, 6(1). https://www.firstojs.com/index.php/LJBSC/article/view/1511/851

Raina, R., Madhavan, A., & Ng, A. Y. (2009). Large-scale deep unsupervised learning using graphics processors. Proceedings of the 26th Annual International Conference on Machine Learning, 873–880. https://robotics.stanford.edu/~ang/papers/icml09-LargeScaleUnsupervisedDeepLearningGPU.pdf

Umbach, R., & Henry, N. (2024). Non-consensual synthetic intimate imagery: Prevalence, attitudes, and knowledge in 10 countries. arXiv preprint arXiv:2402.01721. https://dl.acm.org/doi/full/10.1145/3613904.3642382

Colton, S., Charnley, J., & Pease, A. (2015). Computational creativity theory: The FACE and IDEA models. Proceedings of the 2nd International Conference on Computational Creativity. https://computationalcreativity.net/iccc2015/

Stability AI. (2023). Stable Diffusion public release. https://stability.ai/news/stable-diffusion-public-release

University of Nevada, Reno. (2023). How are deepfakes dangerous? https://www.unr.edu/nevada-today/news/2023/atp-deepfakes

Vincent, J., Elgammal, A., & McCormack, J. (2021). The future of AI and art education. Computational Creativity Bulletin, 5(1), 12–18.

LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436–444. https://www.nature.com/articles/nature14539

Published
2025-06-25
How to Cite
Thammawatsiri, A., Kotsupho, P., Doojai, B., Rittem, P., & Kochakan, P. W. (2025). Creating Digital Art by AI Art Generators. Lanna Journal of Buddhist Studies and Cultures, 7(1), 1-14. Retrieved from https://www.firstojs.com/index.php/LJBSC/article/view/1567