Question Answering Benchmarking: Paul Graham Essay #
Here we go over how to benchmark performance on a question answering task over a Paul Graham essay.
It is highly recommended that you do any evaluation/benchmarking with tracing enabled. See here for an explanation of what tracing is and how to set it up.
# Comment this out if you are NOT using tracing
import os
os.environ["LANGCHAIN_HANDLER"] = "langchain"
Loading the data #
First, let's load the data.
from langchain.evaluation.loading import load_dataset
dataset = load_dataset("question-answering-paul-graham")
Found cached dataset json (/Users/harrisonchase/.cache/huggingface/datasets/LangChainDatasets___json/LangChainDatasets--question-answering-paul-graham-76e8f711e038d742/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51)
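Each example in this dataset is a dict with a "question" and a reference "answer" (both keys show up in the chain outputs below). A quick look at the first record confirms the format:
# Inspect one example; each record has "question" and "answer" keys.
dataset[0]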
Setting up a chain #
Now we need to create some pipelines for doing question answering. The first step is creating an index over the data in question.
from langchain.document_loaders import TextLoader
loader = TextLoader("../../modules/paul_graham_essay.txt")
from langchain.indexes import VectorstoreIndexCreator
vectorstore = VectorstoreIndexCreator().from_loaders([loader]).vectorstore
Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.
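VectorstoreIndexCreator uses its defaults here (a Chroma vector store with OpenAI embeddings, as the log lines above suggest). If you want control over chunking, embeddings, or the store itself, the pieces can be passed in explicitly. A minimal sketch with each component spelled out; the splitter and chunk sizes shown are illustrative, not necessarily the library defaults, so check the version you have installed:
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma

# Same pattern as above, but with each component explicit so it can be swapped.
index_creator = VectorstoreIndexCreator(
    vectorstore_cls=Chroma,
    embedding=OpenAIEmbeddings(),
    text_splitter=CharacterTextSplitter(chunk_size=1000, chunk_overlap=0),
)
vectorstore = index_creator.from_loaders([loader]).vectorstore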
Now we can create a question answering chain.
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI
chain = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="stuff", retriever=vectorstore.as_retriever(), input_key="question")
Make a prediction #
First, we can make predictions one datapoint at a time. Doing it at this level of granularity allows us to explore the outputs in detail, and it is a lot cheaper than running over multiple datapoints.
chain(dataset[0])
{'question': 'What were the two main things the author worked on before college?',
'answer': 'The two main things the author worked on before college were writing and programming.',
'result': ' Writing and programming.'}
Make many predictions #
Now we can make predictions.
predictions = chain.apply(dataset)
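Note that chain.apply makes one LLM call per datapoint, so cost scales with dataset size. A small sketch of one way to keep that in check, smoke-testing on a slice before the full run:
# Sanity-check the pipeline (and the output format) on a few examples first.
sample_predictions = chain.apply(dataset[:3])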
Evaluate performance #
Now we can evaluate the predictions. The first thing we can do is look at them by eye.
predictions[0]
{'question': 'What were the two main things the author worked on before college?',
'answer': 'The two main things the author worked on before college were writing and programming.',
'result': ' Writing and programming.'}
Next, we can use a language model to score them programmatically.
from langchain.evaluation.qa import QAEvalChain
llm = OpenAI(temperature=0)
eval_chain = QAEvalChain.from_llm(llm)
graded_outputs = eval_chain.evaluate(dataset, predictions, question_key="question", prediction_key="result")
We can add in the graded output to the predictions dict and then get a count of the grades.
for i, prediction in enumerate(predictions):
    prediction['grade'] = graded_outputs[i]['text']
from collections import Counter
Counter([pred['grade'] for pred in predictions])
Counter({' CORRECT': 12, ' INCORRECT': 10})
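If a single accuracy number is more useful than raw counts, it can be computed from the same grades. A small sketch; note that the grades come back with a leading space, as in the Counter output above:
# Strip the leading space the grader emits before comparing.
num_correct = sum(pred['grade'].strip() == "CORRECT" for pred in predictions)
print(f"Accuracy: {num_correct / len(predictions):.1%}")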
We can also filter the datapoints to the incorrect examples and look at them.
incorrect = [pred for pred in predictions if pred['grade'] == " INCORRECT"]
incorrect[0]
{'question': 'What did the author write their dissertation on?',
'answer': 'The author wrote their dissertation on applications of continuations.',
'result': ' The author does not mention what their dissertation was on, so it is not known.',
'grade': ' INCORRECT'}