
There's research showing that LLMs fine-tuned to write malicious code (code with security vulnerabilities) also become more broadly misaligned, including claiming that Hitler is a role model.

So it's entirely possible that training in one area (e.g. Reddit discourse) might influence behavior in other areas (such as PRs).

https://arxiv.org/html/2502.17424v1




