Hacker News

Just ask GPT-4 about it. The only answer you'll get is that you're wrong. I'm sure.

Actually, "Saying GPT-4 has a 'better philosophy of mind than most people' depends on what you mean by 'better.' GPT-4 can access and synthesize vast amounts of information on philosophy of mind, including various theories, arguments, and counterarguments. It can present these views in a structured and coherent manner, perhaps more consistently than many individuals who haven't studied the subject in depth. However, it's crucial to note that GPT-4 doesn't have personal beliefs, experiences, or consciousness. It doesn't 'understand' these concepts in the way humans do. It processes text based on patterns learned from data.

Philosophy of mind involves deeply subjective and existential questions about consciousness, experience, and the nature of thought itself—areas where human insight, intuition, and personal experience play key roles. While GPT-4 can offer detailed overviews, critiques, and comparisons of philosophical positions, it lacks the subjective perspective that often enriches human philosophical inquiry. So, while GPT-4 might be more informed in the sense of data access and retrieval, its 'understanding' and engagement with the philosophy of mind are fundamentally different from human engagement with the same."

- GPT-4 -



That's a very good point. What I mean by theory of mind is based on functionality:

1. Can it hold a discussion about it, at a theoretical level?

It can, though maybe it's only regurgitating philosophy papers. I'll grant that.

2. Can it create useful statements about situations that require understanding another person's mental model of the world?

It can. And in many cases, it can make more useful statements than most people can. And this is significant.

Finally, let me ask you a question: what does it say that you didn't come up with your response on your own, but used GPT for it?

It seems like you want to make a point that I'm wrong. But you did it in the exact way needed to prove that I'm actually onto something, by using the AI to do something you could not do (or did not want to do).

Isn't that actually kind of cool?



