In Defense of AI-Assisted Philosophy: Why This Isn't Art (And That's The Point)
Art, particularly good art, strives to break free from existing patterns. It emerges from unique human experiences, emotions, and creative expression. Even when art references existing work, its value often lies in its originality of expression, its unique voice, its ability to convey human experience in novel ways. This is why AI-generated art raises serious ethical and aesthetic concerns - it attempts to simulate something that should fundamentally emerge from human experience and creative authenticity.
Philosophy is different. Philosophy is inherently embedded in a network of existing ideas. It progresses through systematic engagement with previous thought, through logical analysis and conceptual development. You can't do philosophy without building on existing philosophical frameworks - nor should you try. While art often seeks to break free from tradition, philosophy necessarily builds upon it.
This is precisely why I embrace AI assistance and a deliberately plain, clinical style in my philosophical work. I'm not trying to create literature here. I'm not aiming for memorable phrases or stylistic brilliance. This is about brute thought - systematic reasoning stripped down to its essentials.
When I use AI to help develop arguments, I'm focused entirely on conceptual content and logical structure. The resulting text is direct and functional because that's all it needs to be. This isn't plagiarism because:
- I'm not claiming originality for basic concepts
- The focus is on developing novel implications and connections
- The specific wording is intentionally secondary to the conceptual content
- I'm openly acknowledging the methodological approach
Unlike art, where the specific expression matters deeply, philosophical value lies in the ideas themselves - their coherence, their explanatory power, their contribution to understanding. The clinical style and AI assistance serve the same purpose: stripping away everything that isn't pure philosophical thought.
The originality in this work comes not from creating ex nihilo (as art often strives to do), but from:
- Identifying significant philosophical problems
- Making novel connections between ideas
- Developing new implications of existing concepts
- Contributing to deeper understanding through systematic reasoning
When we evaluate philosophical work, what matters is:
- The coherence of the arguments
- The depth of the insights generated
- The contribution to understanding
- The validity of the reasoning
If I could do philosophy using only logical symbols, that would be even better. But I can't: I don't think it's possible, and even if it were, I'm certainly not mentally equipped for it. The plain language you see here isn't just a stylistic choice - it's a necessary compromise with the limits of human thought and communication. We need natural language to do philosophy, but that doesn't mean we need to celebrate or elevate that language beyond its basic function.
Think of the language here like mathematical notation: it's just a tool for expressing ideas as clearly and directly as possible. When mathematicians write proofs, they don't worry about the aesthetic quality of their notation. They focus on precision and clarity. That's what I'm doing here, just with words instead of symbols.
This is another reason why using AI assistance feels appropriate. If language is just a necessary tool for expressing philosophical thought - rather than an art form in itself - then using AI to help generate that language is no different from using a calculator for complex calculations. We're not losing anything essential to the philosophical enterprise by mechanizing this aspect of the work.
The goal remains pure thought. The words are just the interface.
Final Addendum: A Meta-Example
Perhaps readers should approach this the way they would a philosopher's social media post. On Twitter, the core idea is boiled down to a few words, and the reader has to do the heavy work of filling in the blanks.
My process here allows philosophers to quickly engage with ideas - ideas that are often the result of having thought about a problem in great detail for many years. The hard work was arriving at the conclusion. Now, instead of having to mechanically reconstruct that process, the philosopher can simply tell the LLM the conclusion, and - since the LLM has already digested pretty much all the world's philosophical works - it can quickly fill in the blanks.
Arguably, one test of an idea's quality is how readily the resulting text is generated from the original prompt. When I find that my prompts need much more than confirming the AI's questions with "yes," "correct," or "exactly," I know I need to do more work. If the AI 'gets' my argument pretty much right away, I know I did a good job.
[Editor's note: This addendum is itself a perfect example of the process it describes. The text above is almost verbatim what I wrote in the prompt to the AI. The AI simply cleaned it up a bit - proving the very point about how well-formed ideas need minimal reconstruction. When the philosophical thought is clear, the AI requires minimal guidance to present it properly.]
[Meta-editor's note: Even this editor's note is almost verbatim what I wrote. The AI's main contribution was adding the "editor's note" framing and recognizing the meta-nature of the note. And this could go on ad infinitum in a never-ending regress (or is it iteration?)]
[Meta-meta-editor's note: At this point, we should probably stop before we get caught in an infinite loop of meta-commentary about meta-commentary. Though that might itself be an interesting philosophical exercise...]