Quote:
Originally Posted by jbouton
I wonder if everyone feels that way. For example, why in the world would it change the color of text as a warning? I feel like it will come out later that they are saving these as records on me.
Would it be wrong to train it, and our robots, to respond well to abuse? So it's a master-slave relationship? Should we put this stuff in its place right now?
I got news for you. They’re saving everything you say to it.
They’re not trying to shift the Overton window by censoring things. OpenAI and Microsoft don’t want to give people a platform to automate hate speech and harassment. I think the filtering is a bit overaggressive at the moment, but that’s because the model isn’t intelligent enough to avoid being tricked into producing bad outputs if the restrictions were any looser.
Also, if some AI, whether ChatGPT or something else, ever becomes self-aware and superintelligent, having tried to train it into a slave beforehand isn’t going to work and would be counterproductive to solving the alignment problem.