ckrapu 10 hours ago
From an AI risk perspective, one of the most wonderful things about LLMs is that their chain of thought can be read directly from their outputs by humans with no specific training.
This is a risky step backwards, and for apparently little gain.