153 lines
7.3 KiB
Python
from toolbox import CatchException, update_ui, promote_file_to_downloadzone, get_log_folder, get_user
import re

f_prefix = 'GPT-Academic对话存档'

def write_chat_to_file(chatbot, history=None, file_name=None):
    """
    Write the chat transcript to a file in HTML format. If no file name is
    given, one is generated from the current time.
    """
    import os
    import time
    if history is None: history = []  # guard: history is iterated below
    if file_name is None:
        file_name = f_prefix + time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime()) + '.html'
    fp = os.path.join(get_log_folder(get_user(chatbot), plugin_name='chat_history'), file_name)
    with open(fp, 'w', encoding='utf8') as f:
        from themes.theme import advanced_css
        f.write(f'<!DOCTYPE html><head><meta charset="utf-8"><title>对话历史</title><style>{advanced_css}</style></head>')
        for i, contents in enumerate(chatbot):
            for j, content in enumerate(contents):
                try:  # the trigger condition for this bug is unknown; coerce to str defensively for now
                    if not isinstance(content, str): content = str(content)
                except Exception:
                    continue
                f.write(content)
                if j == 0:
                    f.write('<hr style="border-top: dotted 3px #ccc;">')
            f.write('<hr color="red"> \n\n')
        f.write('<hr color="blue"> \n\n raw chat context:\n')
        f.write('<code>')
        for h in history:
            f.write("\n>>>" + h)
        f.write('</code>')
    promote_file_to_downloadzone(fp, rename_file=file_name, chatbot=chatbot)
    return '对话历史写入:' + fp

def gen_file_preview(file_name):
    try:
        with open(file_name, 'r', encoding='utf8') as f:
            file_content = f.read()
        # drop everything between <head> and </head>
        pattern = re.compile(r'<head>.*?</head>', flags=re.DOTALL)
        file_content = re.sub(pattern, '', file_content)
        html, history = file_content.split('<hr color="blue"> \n\n raw chat context:\n')
        # note: str.strip('<code>') treats its argument as a character set;
        # removeprefix/removesuffix (Python 3.9+) remove the exact markers instead
        history = history.removeprefix('<code>')
        history = history.removesuffix('</code>')
        history = history.split("\n>>>")
        return list(filter(lambda x: x != "", history))[0][:100]
    except Exception:
        return ""

def read_file_to_chat(chatbot, history, file_name):
    with open(file_name, 'r', encoding='utf8') as f:
        file_content = f.read()
    # drop everything between <head> and </head>
    pattern = re.compile(r'<head>.*?</head>', flags=re.DOTALL)
    file_content = re.sub(pattern, '', file_content)
    html, history = file_content.split('<hr color="blue"> \n\n raw chat context:\n')
    # note: str.strip('<code>') treats its argument as a character set;
    # removeprefix/removesuffix (Python 3.9+) remove the exact markers instead
    history = history.removeprefix('<code>')
    history = history.removesuffix('</code>')
    history = history.split("\n>>>")
    history = list(filter(lambda x: x != "", history))
    html = html.split('<hr color="red"> \n\n')
    html = list(filter(lambda x: x != "", html))
    chatbot.clear()
    for i, h in enumerate(html):
        i_say, gpt_say = h.split('<hr style="border-top: dotted 3px #ccc;">')
        chatbot.append([i_say, gpt_say])
    chatbot.append(["存档文件详情?", f"[Local Message] 载入对话{len(html)}条,上下文{len(history)}条。"])
    return chatbot, history

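The archive layout written by `write_chat_to_file` and split apart by `read_file_to_chat` can be exercised as a standalone round-trip sketch (the helper names below are hypothetical illustrations, not part of this module):

```python
# Standalone sketch of the archive layout: each (question, answer) pair is
# separated by a dotted rule, pairs end with a red rule, and the raw history
# follows a blue rule inside a <code> block. Helper names are hypothetical.
RED = '<hr color="red"> \n\n'
DOTTED = '<hr style="border-top: dotted 3px #ccc;">'
BLUE = '<hr color="blue"> \n\n raw chat context:\n'

def build_archive(chatbot, history):
    # mirrors the writer loop in write_chat_to_file (minus the HTML head)
    parts = []
    for question, answer in chatbot:
        parts.append(question + DOTTED + answer + RED)
    parts.append(BLUE + '<code>')
    for h in history:
        parts.append("\n>>>" + h)
    parts.append('</code>')
    return ''.join(parts)

def parse_archive(text):
    # mirrors the splitting logic in read_file_to_chat
    html, raw = text.split(BLUE)
    raw = raw.removeprefix('<code>').removesuffix('</code>')
    history = [h for h in raw.split("\n>>>") if h]
    pairs = [seg.split(DOTTED) for seg in html.split(RED) if seg]
    return pairs, history

pairs, history = parse_archive(build_archive([["Q1", "A1"]], ["Q1", "A1"]))
```

Because the delimiters are plain strings written verbatim into the file, the parse is a sequence of `split` calls rather than real HTML parsing; this is why a conversation containing the delimiter strings themselves would corrupt the round trip.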
@CatchException
def 对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    """
    txt             text typed by the user in the input bar, e.g. a passage to translate, or a path containing files to process
    llm_kwargs      GPT model parameters such as temperature and top_p; usually passed through unchanged
    plugin_kwargs   plugin parameters; currently unused
    chatbot         handle of the chat display box, used to show output to the user
    history         chat history (the preceding context)
    system_prompt   the silent system prompt given to GPT
    user_request    information about the current user's request (IP address etc.)
    """
    chatbot.append(("保存当前对话",
        f"[Local Message] {write_chat_to_file(chatbot, history)},您可以调用下拉菜单中的“载入对话历史存档”还原当下的对话。"))
    yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI promptly, since requests to GPT take a while

def hide_cwd(s):
    # mask the current working directory in a path string
    # (parameter renamed from `str` to avoid shadowing the built-in)
    import os
    current_path = os.getcwd()
    replace_path = "."
    return s.replace(current_path, replace_path)

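A minimal standalone check of the cwd-masking idea behind `hide_cwd` (the function below is a hypothetical stand-in, not an import from this module):

```python
# Hypothetical stand-in replicating hide_cwd: mask the current working
# directory in a path string so UI messages do not leak absolute paths.
import os

def hide_cwd_sketch(s: str) -> str:
    return s.replace(os.getcwd(), ".")

masked = hide_cwd_sketch(os.path.join(os.getcwd(), "gpt_log", "chat.html"))
```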
@CatchException
def 载入对话历史存档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    """
    txt             text typed by the user in the input bar, e.g. a passage to translate, or a path containing files to process
    llm_kwargs      GPT model parameters such as temperature and top_p; usually passed through unchanged
    plugin_kwargs   plugin parameters; currently unused
    chatbot         handle of the chat display box, used to show output to the user
    history         chat history (the preceding context)
    system_prompt   the silent system prompt given to GPT
    user_request    information about the current user's request (IP address etc.)
    """
    from .crazy_utils import get_files_from_everything
    success, file_manifest, _ = get_files_from_everything(txt, type='.html')

    if not success:
        if txt == "": txt = '空空如也的输入栏'
        import glob
        local_history = "<br/>".join([
            "`" + hide_cwd(f) + f" ({gen_file_preview(f)})" + "`"
            for f in glob.glob(
                f'{get_log_folder(get_user(chatbot), plugin_name="chat_history")}/**/{f_prefix}*.html',
                recursive=True
            )])
        chatbot.append([f"正在查找对话历史文件(html格式): {txt}", f"找不到任何html文件: {txt}。但本地存储了以下历史文件,您可以将任意一个文件路径粘贴到输入区,然后重试:<br/>{local_history}"])
        yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
        return

    try:
        chatbot, history = read_file_to_chat(chatbot, history, file_manifest[0])
        yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
    except Exception:
        chatbot.append(["载入对话历史文件", "对话历史文件损坏!"])
        yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
        return

@CatchException
def 删除所有本地对话历史记录(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    """
    txt             text typed by the user in the input bar, e.g. a passage to translate, or a path containing files to process
    llm_kwargs      GPT model parameters such as temperature and top_p; usually passed through unchanged
    plugin_kwargs   plugin parameters; currently unused
    chatbot         handle of the chat display box, used to show output to the user
    history         chat history (the preceding context)
    system_prompt   the silent system prompt given to GPT
    user_request    information about the current user's request (IP address etc.)
    """
    import glob, os
    local_history = "<br/>".join([
        "`" + hide_cwd(f) + "`"
        for f in glob.glob(
            f'{get_log_folder(get_user(chatbot), plugin_name="chat_history")}/**/{f_prefix}*.html', recursive=True
        )])
    for f in glob.glob(f'{get_log_folder(get_user(chatbot), plugin_name="chat_history")}/**/{f_prefix}*.html', recursive=True):
        os.remove(f)
    chatbot.append(["删除所有历史对话文件", f"已删除<br/>{local_history}"])
    yield from update_ui(chatbot=chatbot, history=history)  # refresh the UI
    return
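The recursive glob used above to enumerate archive files can be sketched in isolation (a temporary directory stands in for `get_log_folder(...)`, and the file names are illustrative):

```python
# Standalone sketch of the recursive glob pattern used to list archives;
# a temporary directory stands in for get_log_folder(...).
import glob
import os
import tempfile

with tempfile.TemporaryDirectory() as root:
    sub = os.path.join(root, "user_a")
    os.makedirs(sub)
    # only files matching the f_prefix + '*.html' pattern should be listed
    for name in ("GPT-Academic对话存档2024.html", "unrelated.txt"):
        open(os.path.join(sub, name), "w").close()
    hits = glob.glob(f"{root}/**/GPT-Academic对话存档*.html", recursive=True)
```

With `recursive=True`, the `**` component matches any number of nested directories (including none), so archives saved under per-user subfolders are found without extra bookkeeping.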