from toolbox import get_log_folder, update_ui, gen_time_str, get_conf, promote_file_to_downloadzone
from crazy_functions.agent_fns.watchdog import WatchDog
import time, os


class PipeCom:
    def __init__(self, cmd, content) -> None:
        self.cmd = cmd
        self.content = content


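PipeCom is the only message type that crosses the process boundary: a bare command/payload envelope sent over a `multiprocessing.Pipe`. A minimal standalone sketch of that pattern (the class is redefined locally so the example does not depend on this module):

```python
from multiprocessing import Pipe

class PipeCom:
    # Bare command/payload envelope, mirroring the class above.
    def __init__(self, cmd, content) -> None:
        self.cmd = cmd
        self.content = content

# Pipe() returns two connected endpoints; objects sent on one
# end are pickled and can be received on the other.
parent_conn, child_conn = Pipe()
child_conn.send(PipeCom("show", "hello from worker"))
msg = parent_conn.recv()
print(msg.cmd, msg.content)  # show hello from worker
```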
class PluginMultiprocessManager:
    def __init__(self, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
        # ⭐ run in main process
        self.autogen_work_dir = os.path.join(get_log_folder("autogen"), gen_time_str())
        self.previous_work_dir_files = {}
        self.llm_kwargs = llm_kwargs
        self.plugin_kwargs = plugin_kwargs
        self.chatbot = chatbot
        self.history = history
        self.system_prompt = system_prompt
        # self.user_request = user_request
        self.alive = True
        self.use_docker = get_conf("AUTOGEN_USE_DOCKER")
        self.last_user_input = ""
        # create a thread to monitor self.heartbeat; terminate the instance if there is no heartbeat for a long time
        timeout_seconds = 5 * 60
        self.heartbeat_watchdog = WatchDog(timeout=timeout_seconds, bark_fn=self.terminate, interval=5)
        self.heartbeat_watchdog.begin_watch()

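The heartbeat mechanism above delegates to the imported WatchDog. As a rough standalone illustration of the same idea (a hypothetical `MiniWatchDog`, assuming the real class has comparable `begin_watch`/`feed`/`bark_fn` semantics; this is not the actual implementation):

```python
import threading, time

class MiniWatchDog:
    # Minimal stand-in: call bark_fn once if feed() is not
    # called within `timeout` seconds.
    def __init__(self, timeout, bark_fn, interval=0.05):
        self.timeout, self.bark_fn, self.interval = timeout, bark_fn, interval
        self.last_feed = time.time()

    def _watch(self):
        while True:
            if time.time() - self.last_feed > self.timeout:
                self.bark_fn()
                break
            time.sleep(self.interval)

    def begin_watch(self):
        threading.Thread(target=self._watch, daemon=True).start()

    def feed(self):
        self.last_feed = time.time()

terminated = []
dog = MiniWatchDog(timeout=0.2, bark_fn=lambda: terminated.append(True))
dog.begin_watch()
dog.feed()        # heartbeat keeps the dog quiet
time.sleep(0.5)   # stop feeding: the watchdog fires
print(terminated)  # [True]
```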
    def feed_heartbeat_watchdog(self):
        # feed this `dog`, so the dog will not `bark` (bark_fn will terminate the instance)
        self.heartbeat_watchdog.feed()

    def is_alive(self):
        return self.alive

    def launch_subprocess_with_pipe(self):
        # ⭐ run in main process
        from multiprocessing import Process, Pipe

        parent_conn, child_conn = Pipe()
        self.p = Process(target=self.subprocess_worker, args=(child_conn,))
        self.p.daemon = True
        self.p.start()
        return parent_conn

    def terminate(self):
        self.p.terminate()
        self.alive = False
        print("[debug] instance terminated")

    def subprocess_worker(self, child_conn):
        # ⭐⭐ run in subprocess
        raise NotImplementedError

    def send_command(self, cmd):
        # ⭐ run in main process
        repeated = False
        if cmd == self.last_user_input:
            repeated = True
            cmd = ""
        else:
            self.last_user_input = cmd
        self.parent_conn.send(PipeCom("user_input", cmd))
        return repeated, cmd

    def immediate_showoff_when_possible(self, fp):
        # ⭐ run in main process
        # get the file extension of fp
        file_type = fp.split('.')[-1]
        # if it is an image file, display it directly
        if file_type.lower() in ['png', 'jpg']:
            image_path = os.path.abspath(fp)
            self.chatbot.append([
                '检测到新生图像:',
                f'本地文件预览: <br/><div align="center"><img src="file={image_path}"></div>'
            ])
            yield from update_ui(chatbot=self.chatbot, history=self.history)

    def overwatch_workdir_file_change(self):
        # ⭐ run in main process; monitor the folder mounted into Docker
        path_to_overwatch = self.autogen_work_dir
        change_list = []
        # Walk every file under the path and compare it against the records in
        # self.previous_work_dir_files. If a file is new, or its modification time
        # has changed, update self.previous_work_dir_files and append the file's
        # path to change_list.
        for root, dirs, files in os.walk(path_to_overwatch):
            for file in files:
                file_path = os.path.join(root, file)
                if file_path not in self.previous_work_dir_files.keys():
                    last_modified_time = os.stat(file_path).st_mtime
                    self.previous_work_dir_files.update({file_path: last_modified_time})
                    change_list.append(file_path)
                else:
                    last_modified_time = os.stat(file_path).st_mtime
                    if last_modified_time != self.previous_work_dir_files[file_path]:
                        self.previous_work_dir_files[file_path] = last_modified_time
                        change_list.append(file_path)
        if len(change_list) > 0:
            file_links = ""
            for f in change_list:
                res = promote_file_to_downloadzone(f)
                file_links += f'<br/><a href="file={res}" target="_blank">{res}</a>'
                yield from self.immediate_showoff_when_possible(f)

            self.chatbot.append(['检测到新生文档.', f'文档清单如下: {file_links}'])
            yield from update_ui(chatbot=self.chatbot, history=self.history)
        return change_list

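The scan above reduces to one operation: walk the tree, record `st_mtime` per path, and report paths that are new or whose mtime changed since the previous scan. A self-contained sketch of that diff step (hypothetical helper name, exercised on a temp directory rather than the autogen workdir):

```python
import os, tempfile

def diff_workdir(path, seen):
    # Return paths under `path` that are new or modified since the
    # previous call; `seen` maps path -> last recorded st_mtime.
    changed = []
    for root, dirs, files in os.walk(path):
        for name in files:
            fp = os.path.join(root, name)
            mtime = os.stat(fp).st_mtime
            if seen.get(fp) != mtime:
                seen[fp] = mtime
                changed.append(fp)
    return changed

workdir = tempfile.mkdtemp()
seen = {}
with open(os.path.join(workdir, "a.txt"), "w") as f:
    f.write("x")
first = diff_workdir(workdir, seen)   # new file detected
second = diff_workdir(workdir, seen)  # nothing changed since
print(len(first), len(second))  # 1 0
```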
    def main_process_ui_control(self, txt, create_or_resume) -> str:
        # ⭐ run in main process
        if create_or_resume == 'create':
            self.cnt = 1
            self.parent_conn = self.launch_subprocess_with_pipe()  # ⭐⭐⭐
        repeated, cmd_to_autogen = self.send_command(txt)
        if txt == 'exit':
            self.chatbot.append(["结束", "结束信号已明确,终止AutoGen程序。"])
            yield from update_ui(chatbot=self.chatbot, history=self.history)
            self.terminate()
            return "terminate"

        # patience = 10

        while True:
            time.sleep(0.5)
            if not self.alive:
                # the heartbeat watchdog might have had it killed
                self.terminate()
                return "terminate"
            if self.parent_conn.poll():
                self.feed_heartbeat_watchdog()
                # pop any trailing placeholder lines before appending real output
                if "[GPT-Academic] 等待中" in self.chatbot[-1][-1]:
                    self.chatbot.pop(-1)  # remove the last line
                if "等待您的进一步指令" in self.chatbot[-1][-1]:
                    self.chatbot.pop(-1)  # remove the last line
                if '[GPT-Academic] 等待中' in self.chatbot[-1][-1]:
                    self.chatbot.pop(-1)  # remove the last line
                msg = self.parent_conn.recv()  # PipeCom
                if msg.cmd == "done":
                    self.chatbot.append(["结束", msg.content])
                    self.cnt += 1
                    yield from update_ui(chatbot=self.chatbot, history=self.history)
                    self.terminate()
                    break
                if msg.cmd == "show":
                    yield from self.overwatch_workdir_file_change()
                    notice = ""
                    if repeated: notice = "(自动忽略重复的输入)"
                    self.chatbot.append([f"运行阶段-{self.cnt}(上次用户反馈输入为: 「{cmd_to_autogen}」{notice})", msg.content])
                    self.cnt += 1
                    yield from update_ui(chatbot=self.chatbot, history=self.history)
                if msg.cmd == "interact":
                    yield from self.overwatch_workdir_file_change()
                    self.chatbot.append(["程序抵达用户反馈节点.", msg.content +
                                         "\n\n等待您的进一步指令." +
                                         "\n\n(1) 一般情况下您不需要说什么, 清空输入区, 然后直接点击“提交”以继续. " +
                                         "\n\n(2) 如果您需要补充些什么, 输入要反馈的内容, 直接点击“提交”以继续. " +
                                         "\n\n(3) 如果您想终止程序, 输入exit, 直接点击“提交”以终止AutoGen并解锁. "
                                         ])
                    yield from update_ui(chatbot=self.chatbot, history=self.history)
                    # do not terminate here; leave the subprocess_worker instance alive
                    return "wait_feedback"
            else:
                self.feed_heartbeat_watchdog()
                if '[GPT-Academic] 等待中' not in self.chatbot[-1][-1]:
                    # begin_waiting_time = time.time()
                    self.chatbot.append(["[GPT-Academic] 等待AutoGen执行结果 ...", "[GPT-Academic] 等待中"])
                self.chatbot[-1] = [self.chatbot[-1][0], self.chatbot[-1][1].replace("[GPT-Academic] 等待中", "[GPT-Academic] 等待中.")]
                yield from update_ui(chatbot=self.chatbot, history=self.history)
                # if time.time() - begin_waiting_time > patience:
                #     self.chatbot.append(["结束", "等待超时, 终止AutoGen程序。"])
                #     yield from update_ui(chatbot=self.chatbot, history=self.history)
                #     self.terminate()
                #     return "terminate"

        self.terminate()
        return "terminate"

    def subprocess_worker_wait_user_feedback(self, wait_msg="wait user feedback"):
        # ⭐⭐ run in subprocess
        patience = 5 * 60
        begin_waiting_time = time.time()
        self.child_conn.send(PipeCom("interact", wait_msg))
        while True:
            time.sleep(0.5)
            if self.child_conn.poll():
                wait_success = True
                break
            if time.time() - begin_waiting_time > patience:
                self.child_conn.send(PipeCom("done", ""))
                wait_success = False
                break
        return wait_success
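The timeout pattern in subprocess_worker_wait_user_feedback — poll the pipe, succeed on input, give up after `patience` seconds — can be exercised in isolation. A sketch with a much shorter patience, feeding one pipe and starving the other (illustrative values only):

```python
import time
from multiprocessing import Pipe

def wait_for_feedback(conn, patience=1.0, poll_interval=0.05):
    # Poll `conn` until a message arrives (success) or
    # `patience` seconds elapse without one (timeout).
    begin_waiting_time = time.time()
    while True:
        time.sleep(poll_interval)
        if conn.poll():
            return True
        if time.time() - begin_waiting_time > patience:
            return False

fed_parent, fed_child = Pipe()
fed_parent.send("user says continue")  # feedback is already queued
got = wait_for_feedback(fed_child)
starved_parent, starved_child = Pipe()
timed_out = wait_for_feedback(starved_child, patience=0.2)
print(got, timed_out)  # True False
```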