The analogy people offer when describing gen AI and the transformative shift it brings to various jobs is the rise of photography. Just as painters were once commissioned to paint realistic family portraits, families could now use a camera and get a similar result.
Issues:
- A camera is a piece of hardware and software. It captures the scene you’re pointing it at. Generative AI models, by contrast, are trained on a huge amalgamation of data, with plagiarism almost never mentioned in these discussions. Piracy is illegal, yet NVIDIA using Anna’s Archive without repercussions is apparently alright [1], while Aaron Swartz was punished for downloading everything MIT had access to on JSTOR [2].
- The democratization of knowledge and skills is bullshit. The companies offering the applications embedded with these AI models are private corporations, and they charge for paid plans to use their services. We do not own their products; all the data is used to build a private business and generate profit.
The idea that “the best use of AI is when you do not realize it is AI” is skewed too. What about the rise of deepfakes, or child sexual abuse material created using these models?
Before releasing and deploying any technology, is it really that hard to evaluate the effects it might have? Everything is sacrificed for the sake of “innovation”, speed, and profit.
Ellul would say this is the nature of technique: humanity cannot predict how a given technique will be employed, and it is nonsense to attach ethical attributes to it. I am skeptical of declaring technique autonomous, but the religious devotion attached to it in the name of progress is baffling.