First, the fact that something lies sometimes doesn't mean it lies all the time.
Second, the whole point of LLMs is inference - they draw on massive amounts of amalgamated information to produce answers. Olympiad math problems are not frontier mathematics requiring genuine ideation; they are complex variations on existing problem types. That makes them exactly the sort of thing an LLM with enough training data is good at.
Whether recombining existing knowledge is all it takes to be "creative" or to produce something genuinely novel is an open question, but I don't think this is contradictory on its face.