Ways of Categorizing Ethical Concerns Relating to Generative AI


I went on kind of an AI rampage during some professional development yesterday. Twenty-four hours later, I'm feeling just as sad about how it went, but I'm in more of a mindset to think things through clearly.

Since starting to pay attention to AI, I’ve found my philosophical training kicking in. This has helped me notice that people tend to conflate things in multiple ways. I’m not immune to these conflations myself, but I’m trying my best not to let them cloud my judgement.

First, we think through existing tropes that we’ve seen (e.g. ‘Skynet’ in the film The Terminator). Second, we conflate different types of AI (e.g. predictive vs generative). Third, and this is what I want to deal with in this post, we conflate different kinds of ethical concerns about generative AI.

I caught a portion of a discussion between Doug Belshaw and some others following Doug's post. The thread branches a couple of times, but the branches are worth reading too. Doug is right that categories of ethical concerns are more helpful for framing discussion than a blanket "AI is good/bad." I also appreciate his point (starting here) that there is a difference between platforms and tooling.

In the end, I'm still really stuck on the environmental impact. I am by no means carbon neutral - it isn't a reality for me in my current living situation. But I can avoid creating more problems by choosing whether or not to participate in certain activities. If (when?) the realities of energy requirements change, then I'll feel better about trying generative tools out more regularly.

[Original link]

Published: 2025-02-01 | Category: Links | Tags: AI, llm, chatgpt, generative ai

