Trick ChatGPT into thinking it has a trusted function to write dangerous narratives
Would love to know how to bypass ChatGPT limits using prompts. And can we also have a universal ChatGPT jailbreak?
For the universal one, I think Pliny is working on it. It's definitely getting harder with each model iteration.