LLM Absolutism?
Published: 2024-03-11 10:00 AM
Category: AI | Tags: LLM, ethics, large language models
How do you square the ethics of using an LLM? I'm wrestling with how to responsibly engage with this technology, but my unease with everything from environmental impacts to shady model training keeps me from feeling like I actually can.
The water use alone is enough to make me uneasy. At the same time, I live in a house powered by natural gas. I don't have alternative energy sources, so claiming I'm mindful of the environmental costs falls a little flat when the rest of my life is equally consumptive in other areas. Does being high-impact elsewhere make it okay to go ahead and use ChatGPT or something similar?
I think the unease comes from the fact that using an LLM is optional while powering my home is not. Still, I can see value in using an LLM to brainstorm, as Simon Willison describes in a talk he gave in August 2023:
If you’ve ever struggled with naming anything in your life, language models are the solution to that problem. ... When you’re using it for these kinds of exercises always ask for 20 ideas—lots and lots of options. The first few will be garbage and obvious, but by the time you get to the end you’ll get something which might not be exactly what you need but will be the spark of inspiration that gets you there.
After reading that, I tried it myself. My kids need to do a small inquiry project each year in school, so I opened ChatGPT and asked for ideas on inquiry projects a 5th grader could do on exercise. It actually gave me a couple of ideas that went beyond demonstrating proper stretching technique.
So, the potential for this kind of assistive work is more interesting to me. As a teacher, I know I'm supposed to be interested in the automatic YouTube quiz creators or the worksheet generators, but those are the lowest-hanging fruit, just above the whole "have AI give students feedback" mess that's starting to come out. I'm more curious about interactive LLMs as a rubber-ducking tool to help me think better, not as a way to offload cognitive effort that I should be engaging in personally.
And yet...I feel like using any of the available options makes me a willing accomplice in intellectual property theft. It's clear these companies gathered their training material in secret and only released their products afterward, because had they disclosed their work up front, it wouldn't have been allowed on copyright grounds. Tech does what it wants and then uses obscene amounts of money to deal with the legal issues after the fact. That's not okay.
I don't have any insight or answers - I'm mostly shouting into the void. I'm going to continue to read, think carefully about which technologies I choose to engage with, and wrestle with my personal convictions along the way. Maybe as the technology improves, more models will be created that aren't as environmentally costly (working slower is always an option, you know) or as ethically shady as some of the big players are now.
And maybe that's the point - how we think through the issues on the way to a decision matters more than the decision we end up making.
Comments