Sometimes, in addition to hallucinating, ChatGPT can answer questions with results based on old versions (not updated for the current code base).
So if a past in-development or experimental code branch was mistakenly released, and the mistake was only found and fixed later, that code can still have found its way into the training data set.
This can also be an example of training data set poisoning, which pushes the model to give misleading or mistaken responses, effectively hallucinating as a result.