
“Ok, how do we do customer auth” has become my go-to question to kill MCP projects. There is no working solution, which makes any kind of enterprise exploration of the space pointless.


The initial remote MCP specification was pretty painful, but the June spec and the upcoming November spec are much more workable - MCP auth is (mostly) just OAuth now. MCP clients are OAuth clients and can be granted access tokens and managed just like any other third-party app integration.
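To make the "it's just OAuth now" point concrete: under the June spec, an unauthenticated request gets a 401 whose WWW-Authenticate header points at a protected-resource metadata document (RFC 9728), which in turn names the authorization server. A minimal sketch of the client-side discovery step (the URL is illustrative):

```python
import re

def parse_resource_metadata(www_authenticate: str) -> "str | None":
    """Extract the resource_metadata URL from a WWW-Authenticate header.

    A spec-compliant MCP server answers an unauthenticated request with
    401 and a header like:
        WWW-Authenticate: Bearer resource_metadata="https://..."
    The client then fetches that URL to discover the authorization server
    and begin a normal OAuth flow.
    """
    match = re.search(r'resource_metadata="([^"]+)"', www_authenticate)
    return match.group(1) if match else None

header = 'Bearer resource_metadata="https://mcp.example.com/.well-known/oauth-protected-resource"'
print(parse_resource_metadata(header))
```

From there it's ordinary OAuth: fetch the auth server's metadata, register (or reuse) a client, and run an authorization-code flow.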

I'd love to hear more about the specific issues you're running into with the new version of the spec. (disclaimer - I work at an auth company! email in bio if you wanna chat)


Basically, I'm trying to just create a protected MCP server that works with ChatGPT. That's it. Nothing fancy.

So far, I haven't been able to do it. And there are no examples that I can find. It's all further complicated by ChatGPT's total lack of logs detailing the errors.

I'll probably get there eventually and publish a blog post...


ChatGPT provides a new Apps SDK that makes things easier. The MCP server does need a proper Authorization Server to do OAuth, including DCR and OIDC metadata support, but that's the best way to do what they're trying to do. Anything else I've considered would be much worse security- and discovery-wise.
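For anyone unfamiliar with DCR: it's just RFC 7591 - the client POSTs its own metadata to the auth server's registration_endpoint and gets back a client_id, so no human has to pre-register each ChatGPT instance. A sketch of the payload (all values here are illustrative; a real client reads registration_endpoint from the server's OIDC/OAuth metadata document):

```python
import json

# Hypothetical client metadata, per RFC 7591 Dynamic Client Registration.
registration_request = {
    "client_name": "My MCP Client",
    "redirect_uris": ["https://client.example.com/callback"],
    "grant_types": ["authorization_code"],
    "response_types": ["code"],
    "token_endpoint_auth_method": "none",  # public client; PKCE instead of a secret
}

# This JSON body is POSTed to the registration_endpoint; the response
# includes the freshly minted client_id (and client_secret, if any).
body = json.dumps(registration_request)
```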


Serious question, as I’m starting to go through this process myself -

Is it possible for the customer to provide their own bearer tokens (generated however) that the LLM can pass along to the MCP server? That was the closest thing to workable security I've found. I don't know how well chat GUI/web clients support user-supplied tokens, but it should be possible when calling an LLM through an API-style call, right (if you add additional pass-through headers)?


The LLM doesn't actually intervene much; it just decides which tool to call. It's your MCP implementation that does the heavy lifting. So yes, you can always stash a key somewhere in your app context and pass it along with the tool call. But I think the point of the other comments is that the MCP protocol is kind of clueless about how to standardize that within the protocol itself.
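The pattern being described - the token lives in your app, never in the model's context, and gets attached at the transport level - can be sketched like this (class and method names are illustrative, not part of any SDK):

```python
# Sketch: the host app holds the bearer token and injects it into every
# HTTP request to the MCP server. The LLM only ever sees tool names and
# arguments; the credential never enters the prompt.
class McpHttpClient:
    def __init__(self, base_url: str, bearer_token: str):
        self.base_url = base_url
        self._token = bearer_token  # kept in app context, not model context

    def build_headers(self) -> dict:
        # Attached at transport level on each tools/call request.
        return {
            "Authorization": f"Bearer {self._token}",
            "Content-Type": "application/json",
        }

client = McpHttpClient("https://mcp.example.com", "example-user-token")
```

Whether a given chat client lets you configure such a header is exactly the interoperability gap the thread is complaining about.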


I think an important thing to note is that the MCP client is architecturally distinct from the LLM, though many LLM providers also ship MCP client implementations (via their chat UI or desktop/CLI apps).

In general, I’d say it’s not a good idea to pass bearer tokens to the LLM provider; keep them in the MCP client. But your client has to be interoperable with the MCP server at the auth level, which, as noted, is flaky at the moment across the ecosystem of generic MCP clients and servers.


> but should be possible when calling an LLM through an API style call, right (if you add additional pass thru headers)?

Nope. I assumed as much and even implemented the bearer token authentication in the MCP server that I wanted to expose.

Then I tried to connect it to ChatGPT, and it turns out to NOT be supported at all. Your options are either no authentication whatsoever or OAuth with dynamic client registration. Claude at least allows the static OAuth registration (you supply client_id and client_secret).
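The static-bearer server the parent describes is roughly this (token and names are illustrative) - and the point is that it works fine for direct API calls but ChatGPT will never send that header, since it only speaks OAuth with dynamic client registration:

```python
# Minimal sketch of a bearer check in an MCP server's HTTP layer.
EXPECTED_TOKEN = "example-static-token"

def check_auth(headers: dict) -> tuple:
    """Return (status_code, response_headers) for an incoming request."""
    auth = headers.get("Authorization", "")
    if auth == f"Bearer {EXPECTED_TOKEN}":
        return 200, {}
    # A 401 with WWW-Authenticate is what tells a spec-compliant OAuth
    # client to start the discovery/registration flow instead.
    return 401, {"WWW-Authenticate": 'Bearer error="invalid_token"'}
```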





