# In this source file, ⭐ = key step
"""
Test prompt:

- show me the solution of $x^2=cos(x)$, solve this problem with figure, and plot and save image to t.jpg
"""
from toolbox import CatchException, update_ui, gen_time_str, trimmed_format_exc, ProxyNetworkActivate
from toolbox import get_conf, select_api_key, update_ui_lastest_msg, Singleton
from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive, get_plugin_arg
from crazy_functions.crazy_utils import input_clipping, try_install_deps
from crazy_functions.agent_fns.persistent import GradioMultiuserManagerForPersistentClasses
from crazy_functions.agent_fns.auto_agent import AutoGenMath
import time

def remove_model_prefix(llm):
    # Strip provider routing prefixes ('api2d-', 'azure-') so only the bare model name remains
    if llm.startswith('api2d-'): llm = llm.replace('api2d-', '')
    if llm.startswith('azure-'): llm = llm.replace('azure-', '')
    return llm


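The helper above relies on `str.replace`, which would also rewrite the prefix if it happened to appear mid-string. A hedged sketch of a stricter variant (the names here are illustrative, not part of the repo's API):

```python
# Hypothetical generalized variant of remove_model_prefix: strip one known
# routing prefix from the FRONT of a model name only, via slicing rather
# than str.replace, so mid-string occurrences are left untouched.
KNOWN_PREFIXES = ("api2d-", "azure-")

def strip_model_prefix(llm: str, prefixes=KNOWN_PREFIXES) -> str:
    for prefix in prefixes:
        if llm.startswith(prefix):
            return llm[len(prefix):]  # drop only the leading prefix
    return llm
```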
@CatchException
def 多智能体终端(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    """
    txt             Text typed by the user in the input box, e.g. a passage to translate, or a path containing files to process
    llm_kwargs      GPT model parameters such as temperature and top_p; usually passed through unchanged
    plugin_kwargs   Parameters of this plugin
    chatbot         Handle of the chat display box, used to show output to the user
    history         Chat history, i.e. the conversation so far
    system_prompt   Silent system prompt given to GPT
    user_request    Information about the current user's request (IP address, etc.)
    """
    # Check whether the current model meets the requirements
    supported_llms = [
        "gpt-3.5-turbo-16k",
        "gpt-3.5-turbo-1106",
        "gpt-4",
        "gpt-4-32k",
        "gpt-4-1106-preview",
        "azure-gpt-3.5-turbo-16k",
        "azure-gpt-3.5-16k",
        "azure-gpt-4",
        "azure-gpt-4-32k",
    ]
    from request_llms.bridge_all import model_info
    if model_info[llm_kwargs['llm_model']]["max_token"] < 8000:  # require a model with at least an 8k context window
        chatbot.append([f"处理任务: {txt}", f"当前插件只支持{str(supported_llms)}, 当前模型{llm_kwargs['llm_model']}的最大上下文长度太短, 不能支撑AutoGen运行。"])
        yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
        return
    if model_info[llm_kwargs['llm_model']]["endpoint"] is not None:  # not a local model: load the API key
        llm_kwargs['api_key'] = select_api_key(llm_kwargs['api_key'], llm_kwargs['llm_model'])

    # Try importing the dependencies; if any are missing, suggest how to install them
    try:
        import autogen
        if get_conf("AUTOGEN_USE_DOCKER"):
            import docker
    except Exception:
        chatbot.append([f"处理任务: {txt}",
                        f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pyautogen docker```。"])
        yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
        return

    # Verify that the docker runtime is actually available
    try:
        import glob, os, time, subprocess
        if get_conf("AUTOGEN_USE_DOCKER"):
            subprocess.run(["docker", "--version"], check=True, capture_output=True)
    except Exception:
        chatbot.append([f"处理任务: {txt}", f"缺少docker运行环境!"])
        yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
        return

    # Unlock the plugin
    chatbot.get_cookies()['lock_plugin'] = None
    persistent_class_multi_user_manager = GradioMultiuserManagerForPersistentClasses()
    user_uuid = chatbot.get_cookies().get('uuid')
    persistent_key = f"{user_uuid}->多智能体终端"
    if persistent_class_multi_user_manager.already_alive(persistent_key):
        # A multi-agent terminal is already running for this user: feed the new input to it instead of launching another one
        print('[debug] feed new user input')
        executor = persistent_class_multi_user_manager.get(persistent_key)
        exit_reason = yield from executor.main_process_ui_control(txt, create_or_resume="resume")
    else:
        # Launch the multi-agent terminal (first run)
        print('[debug] create new executor instance')
        history = []
        chatbot.append(["正在启动: 多智能体终端", "插件动态生成, 执行开始, 作者 Microsoft & Binary-Husky."])
        yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
        executor = AutoGenMath(llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request)
        persistent_class_multi_user_manager.set(persistent_key, executor)
        exit_reason = yield from executor.main_process_ui_control(txt, create_or_resume="create")

    if exit_reason == "wait_feedback":
        # The user clicked the "wait for feedback" button: keep this plugin locked in the cookie so the next call resumes the stored executor
        executor.chatbot.get_cookies()['lock_plugin'] = 'crazy_functions.多智能体->多智能体终端'
    else:
        executor.chatbot.get_cookies()['lock_plugin'] = None
    yield from update_ui(chatbot=executor.chatbot, history=executor.history)  # update the UI state
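The per-user persistence pattern above (one live executor per user, looked up by a composite "uuid->plugin" key via `already_alive`/`get`/`set`) can be sketched as a minimal registry. This is an illustrative stand-in, not the repo's actual `GradioMultiuserManagerForPersistentClasses`:

```python
# Minimal sketch of the per-user singleton registry pattern: one live object
# per composite key, created on first use and reused on later calls so the
# session survives across plugin invocations.
class PersistentRegistry:
    def __init__(self):
        self._objects = {}

    def already_alive(self, key):
        # True if an object is already registered (and thus resumable) for this key
        return key in self._objects

    def get(self, key):
        return self._objects[key]

    def set(self, key, obj):
        self._objects[key] = obj

registry = PersistentRegistry()
key = "user-1234->多智能体终端"            # composite key: user uuid + plugin name
if not registry.already_alive(key):
    registry.set(key, {"state": "created"})  # stand-in for the executor object
executor = registry.get(key)                 # later calls resume the same object
```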