* Zhipu SDK update: adapt to the latest Zhipu SDK and support GLM-4V (#1502); adapt Google Gemini to extract files from user input; requirements.txt fix; pending history check
* Update "生成多种Mermaid图表" plugin: separate out the file-reading function (#1520); handle PDFs in crazy_functional.py; pass plugin_kwargs through Mermaid.py; new chart type: mind map; update SELECT_PROMPT, i_say_show_user, and ArgsReminder messages; read Markdown files and update PROMPTS (reverted to the initial prompts, which tested best); update the Mermaid chart generation function
* version 3.71
* Resolve issue #1510
* Remove unnecessary text and the sys_prompt message from the 解析历史输入 function
* Update bridge_all.py and config.py: support gpt-4-turbo-preview (#1517, #1516)
* Refactor the 解析历史输入 function to handle file input; update Mermaid chart generation; rename files and functions
* Integrate Mathpix OCR (#1468): update Latex输出PDF结果.py to translate PDFs into Chinese and recompile them via Mathpix; add Mathpix appid & appkey to config.py; add the 'PDF翻译中文并重新编译PDF' feature to plugins
* Fix zhipuai; check picture input; remove glm-4 due to a bug; update config; check MATHPIX_APPID; remove unnecessary code and update the function_plugins dictionary
* Capture non-standard token overflow; bug fix #1524
* Change Mermaid style; support Mermaid zoom in/out/reset via mouse scroll and drag (#1530); minor tweaks inconclusive, staged for now
* ver 3.72
* Change live2d; save the status of the `clear btn` in a cookie; persist front-end selections; fix JS UI and reset-button bugs; update live2d tips; fix missing get_token_num method; fix live2d toggle switch; persist custom buttons via cookie
* Fix zhipuai feedback with core functionality; refactor button update and cleanup functions; remove trailing spaces
* Fix missing MATHPIX_APPID and MATHPIX_APPKEY configuration
* Prompt fix; mind-map prompt optimization (#1537)
* Improve the "PDF翻译中文并重新编译PDF" plugin (#1602)
* Add gemini_endpoint to API_URL_REDIRECT (#1560); update gemini-pro and gemini-pro-vision model_info endpoints
* Update to support new Claude models (#1606): add the anthropic library; update bridge_claude.py with image-input support and bug fixes; add the Claude_3_Models variable to limit the number of images; refactor for readability and maintainability
* More flexible one-api support; reformat config; fix one-api new-access bug; compat with non-standard APIs
* version 3.73

Co-authored-by: binary-husky <qingxu.fu@outlook.com>
Co-authored-by: binary-husky <96192199+binary-husky@users.noreply.github.com>
Co-authored-by: XIao <46100050+Kilig947@users.noreply.github.com>
Co-authored-by: Menghuan1918 <menghuan2003@outlook.com>
Co-authored-by: hongyi-zhao <hongyi.zhao@gmail.com>
Co-authored-by: Hao Ma <893017927@qq.com>
Co-authored-by: zeyuan huang <599012428@qq.com>
107 lines
5.2 KiB
Python
from toolbox import CatchException, update_ui
from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive, input_clipping
import requests
from bs4 import BeautifulSoup
from request_llms.bridge_all import model_info


def google(query, proxies):
    """Search Google for `query` and return a list of {'title', 'link'} dicts."""
    url = f"https://www.google.com/search?q={query}"
    headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.61 Safari/537.36'}
    response = requests.get(url, headers=headers, proxies=proxies)
    soup = BeautifulSoup(response.content, 'html.parser')
    results = []
    for g in soup.find_all('div', class_='g'):
        anchors = g.find_all('a')
        if anchors:
            link = anchors[0]['href']
            if link.startswith('/url?q='):
                link = link[7:]  # strip Google's redirect prefix
            if not link.startswith('http'):
                continue  # keep only absolute http(s) links
            h3 = g.find('h3')
            if h3 is None:
                continue  # skip results without a visible title
            results.append({'title': h3.text, 'link': link})

    for r in results:
        print(r['link'])  # log the collected links
    return results


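The href cleanup inside `google()` (stripping the `/url?q=` redirect prefix and discarding non-absolute links) can be factored out and exercised offline. `clean_google_link` below is a hypothetical helper name, not part of this module; it is a minimal sketch of that logic:

```python
def clean_google_link(href):
    """Normalize an href scraped from a Google result anchor.

    Returns an absolute http(s) URL, or None for relative/JS links.
    """
    prefix = '/url?q='
    if href.startswith(prefix):
        href = href[len(prefix):]  # strip Google's redirect prefix
    return href if href.startswith('http') else None

print(clean_google_link('/url?q=https://example.com/a'))  # https://example.com/a
print(clean_google_link('#'))                             # None
```

Keeping this as a pure function makes the filtering rule easy to unit-test without any network access, which matters because Google's result markup changes often.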
def scrape_text(url, proxies) -> str:
    """Scrape visible text from a webpage.

    Args:
        url (str): The URL to scrape text from

    Returns:
        str: The scraped text
    """
    headers = {
        'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.61 Safari/537.36',
        'Content-Type': 'text/plain',
    }
    try:
        response = requests.get(url, headers=headers, proxies=proxies, timeout=8)
        if response.encoding == "ISO-8859-1":
            response.encoding = response.apparent_encoding  # fix mis-detected encodings
    except Exception:
        return "无法连接到该网页"
    soup = BeautifulSoup(response.text, "html.parser")
    for script in soup(["script", "style"]):
        script.extract()  # drop invisible script/style content
    text = soup.get_text()
    lines = (line.strip() for line in text.splitlines())
    chunks = (phrase.strip() for line in lines for phrase in line.split("  "))  # split on double spaces to break run-on phrases
    text = "\n".join(chunk for chunk in chunks if chunk)
    return text


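The whitespace-normalization pipeline at the end of `scrape_text` can be tested in isolation. `normalize_text` below is a hypothetical standalone copy of those three lines:

```python
def normalize_text(text):
    """Strip each line, split on double spaces, and drop empty chunks."""
    lines = (line.strip() for line in text.splitlines())
    chunks = (phrase.strip() for line in lines for phrase in line.split("  "))
    return "\n".join(chunk for chunk in chunks if chunk)

raw = "  Title  \n\n   Body text   continues  \n"
print(normalize_text(raw))  # Title / Body text / continues, one per line
```

Note the separator must be two spaces: splitting on a single space would put every word on its own line, which is why the generator uses `"  "` rather than `" "`.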
@CatchException
def 连接网络回答问题(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    """
    txt             Text entered by the user in the input box, e.g. a passage to translate, or a path containing files to process
    llm_kwargs      GPT model parameters such as temperature and top_p; usually passed through unchanged
    plugin_kwargs   Plugin parameters; unused for now
    chatbot         Handle to the chat display box, used to show output to the user
    history         Chat history (prior context)
    system_prompt   Silent system prompt for GPT
    user_request    Information about the current user request (IP address, etc.)
    """
    history = []  # clear history to avoid input overflow
    chatbot.append((f"请结合互联网信息回答以下问题:{txt}",
                    "[Local Message] 请注意,您正在调用一个[函数插件]的模板,该模板可以实现ChatGPT联网信息综合。该函数面向希望实现更多有趣功能的开发者,它可以作为创建新功能函数的模板。您若希望分享新的功能模组,请不吝PR!"))
    yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI promptly, since the GPT request will take a while

    # ------------- < Step 1: scrape the search engine results > -------------
    from toolbox import get_conf
    proxies = get_conf('proxies')
    urls = google(txt, proxies)
    history = []
    if len(urls) == 0:
        chatbot.append((f"结论:{txt}",
                        "[Local Message] 受到google限制,无法从google获取信息!"))
        yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
        return

    # ------------- < Step 2: visit the webpages one by one > -------------
    max_search_result = 5  # maximum number of webpages to include
    for index, url in enumerate(urls[:max_search_result]):
        res = scrape_text(url['link'], proxies)
        history.extend([f"第{index}份搜索结果:", res])
        chatbot.append([f"第{index}份搜索结果:", res[:500] + "......"])
        yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI

    # ------------- < Step 3: let ChatGPT synthesize > -------------
    i_say = f"从以上搜索结果中抽取信息,然后回答问题:{txt}"
    i_say, history = input_clipping(  # clip the inputs, starting with the longest entries, to avoid overflowing the token limit
        inputs=i_say,
        history=history,
        max_token_limit=model_info[llm_kwargs['llm_model']]['max_token'] * 3 // 4
    )
    gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
        inputs=i_say, inputs_show_user=i_say,
        llm_kwargs=llm_kwargs, chatbot=chatbot, history=history,
        sys_prompt="请从给定的若干条搜索结果中抽取信息,对最相关的两个搜索结果进行总结,然后回答问题。"
    )
    chatbot[-1] = (i_say, gpt_say)
    history.append(i_say)
    history.append(gpt_say)
    yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
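`input_clipping` is provided by `crazy_utils` and counts real model tokens. As a rough sketch of the same idea, a stand-in that repeatedly halves the longest history entry until the total fits a budget could look like this (`clip_history` is a hypothetical name, and character counts stand in for tokens):

```python
def clip_history(inputs, history, max_chars):
    """Trim the longest history entries until inputs + history fit the budget.

    Simplified stand-in for input_clipping: character counts approximate tokens.
    """
    budget = max_chars - len(inputs)
    while history and sum(len(h) for h in history) > budget:
        longest = max(range(len(history)), key=lambda i: len(history[i]))
        history[longest] = history[longest][: len(history[longest]) // 2]  # halve the longest entry
        if len(history[longest]) < 8:
            history.pop(longest)  # drop entries that have become too short to be useful
    return inputs, history

_, h = clip_history("question", ["short", "x" * 1000], max_chars=200)
print(sum(len(s) for s in h))  # total now fits within the remaining budget
```

Trimming the longest entries first preserves the variety of sources in the history, which matches the comment on `input_clipping` above; the real helper additionally keeps the prompt itself intact and measures length with the model's tokenizer.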