There are lots of good and bad takes on the AI situation. That makes sense: there are lots of takes to have, and lots of loud voices drowning out sensible discussion. Opinions will be had.
I see thought leaders, people I usually respect, with weird takes. Ideas half-expressed and poorly argued. This is strange. It’s times like these that I stop for a second and try to introspect.
When people I respect for being well thought out disagree with me, the chance that I am wrong goes up. But it’s not that clear cut. I only see sporadic essays and videos with differing opinions.
I watched “Suno, AI Music, and the Bad Future” and walked away a bit confused. It’s mostly a string of disparate arguments strung together with a music bed and a soft voice, alluding to something deeper than is explicitly stated - a common theme for video essays. It doesn’t seem to make any coherent or strongly worded point.
Adam Neely has a good track record, and I found the video through a recommendation from Folding Ideas. Dan Olson (of Folding Ideas) has my utmost respect; his videos are of fantastic quality. A recommendation is not a wholesale endorsement, but I took it to mean he had engaged with the content, if somewhat less critically.
The video makes arguments similar to another essay that entered my world recently - “Acting ethically in an imperfect world”. It argues that the use of LLMs and generative AI as a whole is unethical in a strong, inherent sense: unethical because the training data contains copyrighted works.
Models can be distilled many times over, but at some level most of them were trained on real art. An actual artist almost certainly contributed to the training set in a way they didn’t consent to. Legal or not, many works are in these sets in ways the artists do not support.
Which feels wrong in some way, right? We did something nonconsensual. That’s usually wrong? It feels wrong, I know that much. I’ve seen trainings on this; it’s usually wrong.
To make this argument a bit stronger, you need to talk more about second-order effects. The arguments in these essays, thought pieces, and articles tend to focus on the capital impact: we will starve the artists even more. Which matters, I suppose, for artists who do capital-facing work to support less capital-facing work.
But art being tied to capital is not a good thing. It is good that art is valued, and it will continue to be. We won’t throw out a Monet because a machine can produce a copy. The argument is that without corporate work, artists will change industries. The artists will starve, stop producing art, and we may end up living in an artless world.
But is that the argument? We want corporate slop because the artists need to do something?
If you dig too deep into capital-based arguments you reach lots of strange and ugly conclusions. Many of the authors of these articles lean left, but these arguments are not compatible with leftist thought. Property ownership and capital producing capital are fundamentally not aligned with any fair version of the world. Arguing for them feels very silly.
To make a coherent argument for why generative AI is a moral wrong, you need to accept at least one of several contentious points:
- That artists should be forced to engage with capitalism
- That art has specific capital value
- That art ownership should be supported
- That art should be valued and owned, not shared
- That art is created without influence and is unique to its creator
Not one of these is easy to validate or argue for without falling into a bunch of other weird arguments.
The AI studios (not to be trusted either) take obvious counter-positions on many of these. It usually goes something like:
- Every artist is inspired by previous work
- Art cannot be uniquely owned
- Art should be democratized
The hidden premise you’ll miss from the AI labs is that what a machine model produces can itself be identified as art. I have some thoughts here. In short, I don’t agree. Art requires human intention - the more you deviate from human intention, the less art you are left with.
A camera is a different medium, and you can produce art with it. The pixels are not the art, but the composition and framing might be. If you had a camera take pictures at random angles and intervals, the photos likely aren’t art, but the concept as a whole probably is. It is art up to the human intention used to produce it.
But none of this really matters.
Let’s say we trained an LLM on every copyrighted work from every dead artist, in such a way that it was legally square.
The artists in this hypothetical are dead, so you are harming… what is it again? The foundation? The nepo-baby structure of a now-wealthy family? You cannot hurt the capital interests of Tolkien anymore, only those of his supremely wealthy inheritors.
And what if they were alive? Is this a greater moral wrong than an online torrent?
There is some part of this that feels wrong, which I want to acknowledge again. If I made a thing that took a long time to make and later found it had been taken and reproduced with no attribution, I might be upset. I’d want some kind of credit.
But feeling that way doesn’t make me any more deserving of it.
The open source community has had interesting perspectives on this. Software as a whole is weird. I fundamentally disagree with copyright law being applicable to software. The incentives make no sense for anyone involved.
I engage with open source where I can. Anything I produce should be free, copyable, and stealable. The idea that I own any of my code seems ludicrous. What is it that I own? Surely not the syntax. This unique configuration of words? What value is there in that?
I am broadly a democratic capitalist with leftist sympathies. I align with the system not because it is deterministically moral, but because it produces net-positive returns. The parts of capitalism that are neither beneficial nor moral have no value.
Competition breeds innovation. Starving citizens and unhoused youths slow progress. We should house and feed our population. Believe and fight for the systems that matter, not the ones just echoed by your affiliated party.
Like every modern product, LLMs have ethically dubious origins and ongoing impact. But they’re useful tools. They produce real value.
Not everything is black and white, but you can think for a goddamn second. It’ll do you some good.