Merge Frontier, Update to Version 3.72 (#1553)
* Zhipu SDK update: adapt to the latest Zhipu SDK, support GLM-4V (#1502)
  * Adapt Google Gemini: extract files from user input
  * Adapt the latest Zhipu SDK, support glm-4v
  * requirements.txt fix
  * Pending history check
* Update the "生成多种Mermaid图表" plugin: separate out the file-reading function (#1520)
  * Update crazy_functional.py with new functionality to deal with PDF files
  * Update crazy_functional.py and Mermaid.py for plugin_kwargs
  * Update crazy_functional.py with a new chart type: mind map
  * Update SELECT_PROMPT and i_say_show_user messages
  * Update the ArgsReminder message in get_crazy_functions()
  * Support reading .md files and update PROMPTS
  * Restore the original PROMPTS, as testing found the initial version worked best
  * Update the Mermaid chart generation function
* Version 3.71
* Resolve issue #1510
* Remove unnecessary sys_prompt text from the 解析历史输入 function
* Update bridge_all.py: support gpt-4-turbo-preview (#1517)
* Update config.py: support gpt-4-turbo-preview (#1516)
* Refactor the 解析历史输入 function to handle file input
* Update Mermaid chart generation functionality; rename files and functions
* Integrate the Mathpix OCR feature (#1468)
  * Update Latex输出PDF结果.py: use Mathpix to translate PDFs into Chinese and recompile the PDF
  * Update config.py: add MATHPIX_APPID & MATHPIX_APPKEY
  * Add the 'PDF翻译中文并重新编译PDF' feature to the plugins
* Fix zhipuai; check picture input; remove glm-4 due to a bug; update config; check MATHPIX_APPID
* Remove unnecessary code and update the function_plugins dictionary
* Capture non-standard token overflow
* Bug fix #1524
* Change Mermaid style
* Mermaid: support zoom in/out/reset, mouse-wheel scrolling and dragging (#1530)
* Version 3.72
* Change live2d; update live2d tips; fix the live2d toggle switch
* Save the status of the `clear btn` in a cookie; persist front-end selections; fix persistent custom buttons via cookie
* JS UI bug fix; reset-button bug fix
* Fix the missing get_token_num method
* Fix zhipuai feedback with core functionality
* Refactor button update and clean-up functions

---------

Co-authored-by: binary-husky <qingxu.fu@outlook.com>
Co-authored-by: binary-husky <96192199+binary-husky@users.noreply.github.com>
Co-authored-by: XIao <46100050+Kilig947@users.noreply.github.com>
Co-authored-by: Menghuan1918 <menghuan2003@outlook.com>
Co-authored-by: hongyi-zhao <hongyi.zhao@gmail.com>
Co-authored-by: Hao Ma <893017927@qq.com>
Co-authored-by: zeyuan huang <599012428@qq.com>
This commit is contained in:
parent e0c5859cf9
commit 2e9b4a5770

config.py (21 changed lines)
@@ -89,8 +89,8 @@ DEFAULT_FN_GROUPS = ['对话', '编程', '学术', '智能体']
 LLM_MODEL = "gpt-3.5-turbo-16k" # 可选 ↓↓↓
 AVAIL_LLM_MODELS = ["gpt-4-1106-preview", "gpt-4-turbo-preview", "gpt-4-vision-preview",
                     "gpt-3.5-turbo-1106", "gpt-3.5-turbo-16k", "gpt-3.5-turbo", "azure-gpt-3.5",
-                    "gpt-4", "gpt-4-32k", "azure-gpt-4", "api2d-gpt-4",
-                    "gemini-pro", "chatglm3", "claude-2", "zhipuai"]
+                    "gpt-4", "gpt-4-32k", "azure-gpt-4", "glm-4", "glm-3-turbo",
+                    "gemini-pro", "chatglm3", "claude-2"]
 # P.S. 其他可用的模型还包括 [
 # "moss", "qwen-turbo", "qwen-plus", "qwen-max"
 # "zhipuai", "qianfan", "deepseekcoder", "llama2", "qwen-local", "gpt-3.5-turbo-0613",
@@ -195,7 +195,7 @@ XFYUN_API_KEY = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
 
 # 接入智谱大模型
 ZHIPUAI_API_KEY = ""
-ZHIPUAI_MODEL = "glm-4" # 可选 "glm-3-turbo" "glm-4"
+ZHIPUAI_MODEL = "" # 此选项已废弃,不再需要填写
 
 
 # # 火山引擎YUNQUE大模型
@@ -208,6 +208,11 @@ ZHIPUAI_MODEL = "glm-4" # 可选 "glm-3-turbo" "glm-4"
 ANTHROPIC_API_KEY = ""
 
 
+
+# Mathpix 拥有执行PDF的OCR功能,但是需要注册账号
+MATHPIX_APPID = ""
+MATHPIX_APPKEY = ""
+
 # 自定义API KEY格式
 CUSTOM_API_KEY_PATTERN = ""
 
@@ -297,9 +302,8 @@ NUM_CUSTOM_BASIC_BTN = 4
 │   ├── BAIDU_CLOUD_API_KEY
 │   └── BAIDU_CLOUD_SECRET_KEY
 │
-├── "zhipuai" 智谱AI大模型chatglm_turbo
-│   ├── ZHIPUAI_API_KEY
-│   └── ZHIPUAI_MODEL
+├── "glm-4", "glm-3-turbo", "zhipuai" 智谱AI大模型
+│   └── ZHIPUAI_API_KEY
 │
 ├── "qwen-turbo" 等通义千问大模型
 │   └── DASHSCOPE_API_KEY
@@ -351,6 +355,9 @@ NUM_CUSTOM_BASIC_BTN = 4
 │   └── ALIYUN_SECRET
 │
 └── PDF文档精准解析
-    └── GROBID_URLS
+    ├── GROBID_URLS
+    ├── MATHPIX_APPID
+    └── MATHPIX_APPKEY
 
 """

crazy_functional.py
@@ -70,11 +70,11 @@ def get_crazy_functions():
         "Info": "清除所有缓存文件,谨慎操作 | 不需要输入参数",
         "Function": HotReload(清除缓存),
     },
-    "生成多种Mermaid图表(从当前对话或文件(.pdf/.md)中生产图表)": {
+    "生成多种Mermaid图表(从当前对话或路径(.pdf/.md/.docx)中生产图表)": {
         "Group": "对话",
         "Color": "stop",
         "AsButton": False,
-        "Info" : "基于当前对话或PDF生成多种Mermaid图表,图表类型由模型判断",
+        "Info" : "基于当前对话或文件生成多种Mermaid图表,图表类型由模型判断",
         "Function": HotReload(生成多种Mermaid图表),
         "AdvancedArgs": True,
         "ArgsReminder": "请输入图类型对应的数字,不输入则为模型自行判断:1-流程图,2-序列图,3-类图,4-饼图,5-甘特图,6-状态图,7-实体关系图,8-象限提示图,9-思维导图",
@@ -532,8 +532,9 @@ def get_crazy_functions():
         print("Load function plugin failed")
 
     try:
-        from crazy_functions.Latex输出PDF结果 import Latex英文纠错加PDF对比
-        from crazy_functions.Latex输出PDF结果 import Latex翻译中文并重新编译PDF
+        from crazy_functions.Latex输出PDF import Latex英文纠错加PDF对比
+        from crazy_functions.Latex输出PDF import Latex翻译中文并重新编译PDF
+        from crazy_functions.Latex输出PDF import PDF翻译中文并重新编译PDF
 
         function_plugins.update(
             {
@@ -550,9 +551,9 @@ def get_crazy_functions():
                 "Color": "stop",
                 "AsButton": False,
                 "AdvancedArgs": True,
-                "ArgsReminder": "如果有必要, 请在此处给出自定义翻译命令, 解决部分词汇翻译不准确的问题。 "
-                + "例如当单词'agent'翻译不准确时, 请尝试把以下指令复制到高级参数区: "
-                + 'If the term "agent" is used in this section, it should be translated to "智能体". ',
+                "ArgsReminder": r"如果有必要, 请在此处给出自定义翻译命令, 解决部分词汇翻译不准确的问题。 "
+                r"例如当单词'agent'翻译不准确时, 请尝试把以下指令复制到高级参数区: "
+                r'If the term "agent" is used in this section, it should be translated to "智能体". ',
                 "Info": "Arixv论文精细翻译 | 输入参数arxiv论文的ID,比如1812.10695",
                 "Function": HotReload(Latex翻译中文并重新编译PDF),
             },
@@ -561,11 +562,22 @@ def get_crazy_functions():
                 "Color": "stop",
                 "AsButton": False,
                 "AdvancedArgs": True,
-                "ArgsReminder": "如果有必要, 请在此处给出自定义翻译命令, 解决部分词汇翻译不准确的问题。 "
-                + "例如当单词'agent'翻译不准确时, 请尝试把以下指令复制到高级参数区: "
-                + 'If the term "agent" is used in this section, it should be translated to "智能体". ',
+                "ArgsReminder": r"如果有必要, 请在此处给出自定义翻译命令, 解决部分词汇翻译不准确的问题。 "
+                r"例如当单词'agent'翻译不准确时, 请尝试把以下指令复制到高级参数区: "
+                r'If the term "agent" is used in this section, it should be translated to "智能体". ',
                 "Info": "本地Latex论文精细翻译 | 输入参数是路径",
                 "Function": HotReload(Latex翻译中文并重新编译PDF),
+            },
+            "PDF翻译中文并重新编译PDF(上传PDF)[需Latex]": {
+                "Group": "学术",
+                "Color": "stop",
+                "AsButton": False,
+                "AdvancedArgs": True,
+                "ArgsReminder": r"如果有必要, 请在此处给出自定义翻译命令, 解决部分词汇翻译不准确的问题。 "
+                r"例如当单词'agent'翻译不准确时, 请尝试把以下指令复制到高级参数区: "
+                r'If the term "agent" is used in this section, it should be translated to "智能体". ',
+                "Info": "PDF翻译中文,并重新编译PDF | 输入参数为路径",
+                "Function": HotReload(PDF翻译中文并重新编译PDF)
             }
         }
     )
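Each entry registered above is a plain dict describing one plugin (display name mapped to Group/Color/AsButton/AdvancedArgs/ArgsReminder/Function). A minimal, self-contained sketch of that registry schema and how such a registry could be dispatched; the `dispatch` helper and the stub plugin function are illustrative assumptions, not part of the project:

```python
# Hypothetical sketch of the function_plugins dict schema used in the diff
# above. Only the dict shape mirrors the project; the dispatch helper and
# the stub plugin are illustrative assumptions.
def 清除缓存_stub(advanced_arg=""):
    return "cache cleared"

function_plugins = {
    "清除缓存": {
        "Group": "对话",
        "AsButton": False,
        "AdvancedArgs": False,   # this plugin takes no free-form argument
        "Function": 清除缓存_stub,
    },
}

def dispatch(registry, name, advanced_arg=""):
    # Look up a plugin by display name and invoke its callable,
    # rejecting advanced args the entry does not declare support for.
    entry = registry[name]
    if advanced_arg and not entry.get("AdvancedArgs", False):
        raise ValueError(f"plugin {name!r} does not accept advanced args")
    return entry["Function"](advanced_arg=advanced_arg)
```

In the real registry the callable is wrapped in `HotReload(...)` for live reloading; the stub above skips that.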

crazy_functions/Latex输出PDF.py (new file, 484 lines):
from toolbox import update_ui, trimmed_format_exc, get_conf, get_log_folder, promote_file_to_downloadzone
from toolbox import CatchException, report_exception, update_ui_lastest_msg, zip_result, gen_time_str
from functools import partial
import glob, os, requests, time, json, tarfile

pj = os.path.join
ARXIV_CACHE_DIR = os.path.expanduser(f"~/arxiv_cache/")


# =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- 工具函数 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
# 专业词汇声明 = 'If the term "agent" is used in this section, it should be translated to "智能体". '
def switch_prompt(pfg, mode, more_requirement):
    """
    Generate prompts and system prompts based on the mode for proofreading or translating.
    Args:
    - pfg: Proofreader or Translator instance.
    - mode: A string specifying the mode, either 'proofread' or 'translate_zh'.

    Returns:
    - inputs_array: A list of strings containing prompts for users to respond to.
    - sys_prompt_array: A list of strings containing prompts for system prompts.
    """
    n_split = len(pfg.sp_file_contents)
    if mode == 'proofread_en':
        inputs_array = [r"Below is a section from an academic paper, proofread this section." +
                        r"Do not modify any latex command such as \section, \cite, \begin, \item and equations. " + more_requirement +
                        r"Answer me only with the revised text:" +
                        f"\n\n{frag}" for frag in pfg.sp_file_contents]
        sys_prompt_array = ["You are a professional academic paper writer." for _ in range(n_split)]
    elif mode == 'translate_zh':
        inputs_array = [
            r"Below is a section from an English academic paper, translate it into Chinese. " + more_requirement +
            r"Do not modify any latex command such as \section, \cite, \begin, \item and equations. " +
            r"Answer me only with the translated text:" +
            f"\n\n{frag}" for frag in pfg.sp_file_contents]
        sys_prompt_array = ["You are a professional translator." for _ in range(n_split)]
    else:
        assert False, "未知指令"
    return inputs_array, sys_prompt_array
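The `switch_prompt` helper above builds one prompt per source fragment. A condensed, runnable sketch of just its `translate_zh` branch, with the `pfg` object replaced by a plain list of LaTeX fragments (an assumption for illustration):

```python
# Condensed translate_zh branch of switch_prompt above; the pfg object is
# replaced by a plain list of LaTeX fragments for illustration.
def translate_zh_prompts(fragments, more_requirement):
    inputs_array = [
        r"Below is a section from an English academic paper, translate it into Chinese. " + more_requirement +
        r"Do not modify any latex command such as \section, \cite, \begin, \item and equations. " +
        r"Answer me only with the translated text:" +
        f"\n\n{frag}" for frag in fragments]
    sys_prompt_array = ["You are a professional translator." for _ in fragments]
    return inputs_array, sys_prompt_array

prompts, sys_prompts = translate_zh_prompts(
    ["Fragment one.", "Fragment two."],
    'If the term "agent" is used in this section, it should be translated to "智能体". ')
# One user prompt per fragment, each ending with that fragment's text,
# plus one fixed system prompt per fragment.
```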

def desend_to_extracted_folder_if_exist(project_folder):
    """
    Descend into the extracted folder if it exists, otherwise return the original folder.

    Args:
    - project_folder: A string specifying the folder path.

    Returns:
    - A string specifying the path to the extracted folder, or the original folder if there is no extracted folder.
    """
    maybe_dir = [f for f in glob.glob(f'{project_folder}/*') if os.path.isdir(f)]
    if len(maybe_dir) == 0: return project_folder
    if maybe_dir[0].endswith('.extract'): return maybe_dir[0]
    return project_folder


def move_project(project_folder, arxiv_id=None):
    """
    Create a new work folder and copy the project folder to it.

    Args:
    - project_folder: A string specifying the folder path of the project.

    Returns:
    - A string specifying the path to the new work folder.
    """
    import shutil, time
    time.sleep(2)  # avoid time string conflict
    if arxiv_id is not None:
        new_workfolder = pj(ARXIV_CACHE_DIR, arxiv_id, 'workfolder')
    else:
        new_workfolder = f'{get_log_folder()}/{gen_time_str()}'
    try:
        shutil.rmtree(new_workfolder)
    except:
        pass

    # align subfolder if there is a folder wrapper
    items = glob.glob(pj(project_folder, '*'))
    items = [item for item in items if os.path.basename(item) != '__MACOSX']
    if len(glob.glob(pj(project_folder, '*.tex'))) == 0 and len(items) == 1:
        if os.path.isdir(items[0]): project_folder = items[0]

    shutil.copytree(src=project_folder, dst=new_workfolder)
    return new_workfolder

def arxiv_download(chatbot, history, txt, allow_cache=True):
    def check_cached_translation_pdf(arxiv_id):
        translation_dir = pj(ARXIV_CACHE_DIR, arxiv_id, 'translation')
        if not os.path.exists(translation_dir):
            os.makedirs(translation_dir)
        target_file = pj(translation_dir, 'translate_zh.pdf')
        if os.path.exists(target_file):
            promote_file_to_downloadzone(target_file, rename_file=None, chatbot=chatbot)
            target_file_compare = pj(translation_dir, 'comparison.pdf')
            if os.path.exists(target_file_compare):
                promote_file_to_downloadzone(target_file_compare, rename_file=None, chatbot=chatbot)
            return target_file
        return False

    def is_float(s):
        try:
            float(s)
            return True
        except ValueError:
            return False

    if ('.' in txt) and ('/' not in txt) and is_float(txt):  # is arxiv ID
        txt = 'https://arxiv.org/abs/' + txt.strip()
    if ('.' in txt) and ('/' not in txt) and is_float(txt[:10]):  # is arxiv ID
        txt = 'https://arxiv.org/abs/' + txt[:10]

    if not txt.startswith('https://arxiv.org'):
        return txt, None  # 是本地文件,跳过下载

    # <-------------- inspect format ------------->
    chatbot.append([f"检测到arxiv文档连接", '尝试下载 ...'])
    yield from update_ui(chatbot=chatbot, history=history)
    time.sleep(1)  # 刷新界面

    url_ = txt  # https://arxiv.org/abs/1707.06690
    if not txt.startswith('https://arxiv.org/abs/'):
        msg = f"解析arxiv网址失败, 期望格式例如: https://arxiv.org/abs/1707.06690。实际得到格式: {url_}。"
        yield from update_ui_lastest_msg(msg, chatbot=chatbot, history=history)  # 刷新界面
        return msg, None
    # <-------------- set format ------------->
    arxiv_id = url_.split('/abs/')[-1]
    if 'v' in arxiv_id: arxiv_id = arxiv_id[:10]
    cached_translation_pdf = check_cached_translation_pdf(arxiv_id)
    if cached_translation_pdf and allow_cache: return cached_translation_pdf, arxiv_id

    url_tar = url_.replace('/abs/', '/e-print/')
    translation_dir = pj(ARXIV_CACHE_DIR, arxiv_id, 'e-print')
    extract_dst = pj(ARXIV_CACHE_DIR, arxiv_id, 'extract')
    os.makedirs(translation_dir, exist_ok=True)

    # <-------------- download arxiv source file ------------->
    dst = pj(translation_dir, arxiv_id + '.tar')
    if os.path.exists(dst):
        yield from update_ui_lastest_msg("调用缓存", chatbot=chatbot, history=history)  # 刷新界面
    else:
        yield from update_ui_lastest_msg("开始下载", chatbot=chatbot, history=history)  # 刷新界面
        proxies = get_conf('proxies')
        r = requests.get(url_tar, proxies=proxies)
        with open(dst, 'wb+') as f:
            f.write(r.content)
    # <-------------- extract file ------------->
    yield from update_ui_lastest_msg("下载完成", chatbot=chatbot, history=history)  # 刷新界面
    from toolbox import extract_archive
    extract_archive(file_path=dst, dest_dir=extract_dst)
    return extract_dst, arxiv_id
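`arxiv_download` above first normalises the user input: a bare numeric ID such as `1812.10695`, or a versioned one such as `1812.10695v2`, becomes an `arxiv.org/abs/` URL, and anything else is treated as a local path. That detection logic, extracted into a standalone helper (the helper name is ours, for illustration only):

```python
def normalize_arxiv_input(txt):
    # Same ID-detection rules as the top of arxiv_download above,
    # isolated into a standalone helper for illustration.
    def is_float(s):
        try:
            float(s)
            return True
        except ValueError:
            return False
    if ('.' in txt) and ('/' not in txt) and is_float(txt):       # bare ID, e.g. 1812.10695
        return 'https://arxiv.org/abs/' + txt.strip()
    if ('.' in txt) and ('/' not in txt) and is_float(txt[:10]):  # versioned ID, e.g. 1812.10695v2
        return 'https://arxiv.org/abs/' + txt[:10]
    return txt  # local file or folder: left untouched
```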

def pdf2tex_project(pdf_file_path):
    # Mathpix API credentials
    app_id, app_key = get_conf('MATHPIX_APPID', 'MATHPIX_APPKEY')
    headers = {"app_id": app_id, "app_key": app_key}

    # Step 1: Send PDF file for processing
    options = {
        "conversion_formats": {"tex.zip": True},
        "math_inline_delimiters": ["$", "$"],
        "rm_spaces": True
    }

    response = requests.post(url="https://api.mathpix.com/v3/pdf",
                             headers=headers,
                             data={"options_json": json.dumps(options)},
                             files={"file": open(pdf_file_path, "rb")})

    if response.ok:
        pdf_id = response.json()["pdf_id"]
        print(f"PDF processing initiated. PDF ID: {pdf_id}")

        # Step 2: Check processing status
        while True:
            conversion_response = requests.get(f"https://api.mathpix.com/v3/pdf/{pdf_id}", headers=headers)
            conversion_data = conversion_response.json()

            if conversion_data["status"] == "completed":
                print("PDF processing completed.")
                break
            elif conversion_data["status"] == "error":
                print("Error occurred during processing.")
                return None  # abort instead of polling forever on a failed conversion
            else:
                print(f"Processing status: {conversion_data['status']}")
                time.sleep(5)  # wait for a few seconds before checking again

        # Step 3: Save results to local files
        output_dir = os.path.join(os.path.dirname(pdf_file_path), 'mathpix_output')
        if not os.path.exists(output_dir):
            os.makedirs(output_dir)

        url = f"https://api.mathpix.com/v3/pdf/{pdf_id}.tex"
        response = requests.get(url, headers=headers)
        file_name_wo_dot = '_'.join(os.path.basename(pdf_file_path).split('.')[:-1])
        output_name = f"{file_name_wo_dot}.tex.zip"
        output_path = os.path.join(output_dir, output_name)
        with open(output_path, "wb") as output_file:
            output_file.write(response.content)
        print(f"tex.zip file saved at: {output_path}")

        import zipfile
        unzip_dir = os.path.join(output_dir, file_name_wo_dot)
        with zipfile.ZipFile(output_path, 'r') as zip_ref:
            zip_ref.extractall(unzip_dir)

        return unzip_dir

    else:
        print(f"Error sending PDF for processing. Status code: {response.status_code}")
        return None
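`pdf2tex_project` above polls the Mathpix status endpoint every 5 seconds in an unbounded loop. The same pattern with an explicit deadline, as a generic sketch not tied to the Mathpix API (the helper name and statuses are illustrative assumptions):

```python
import time

def poll_until(check, interval_s=5, timeout_s=600):
    # Generic bounded polling loop: call check() until it reports
    # "completed" (success) or "error" (failure), giving up once the
    # deadline passes so a stuck job cannot spin forever.
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = check()
        if status == "completed":
            return True
        if status == "error":
            return False
        time.sleep(interval_s)
    return False
```

With this shape, the Mathpix status check would be passed in as `check`, and the caller decides how long a conversion is allowed to take.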

# =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= 插件主程序1 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

@CatchException
def Latex英文纠错加PDF对比(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    # <-------------- information about this plugin ------------->
    chatbot.append(["函数插件功能?",
                    "对整个Latex项目进行纠错, 用latex编译为PDF对修正处做高亮。函数插件贡献者: Binary-Husky。注意事项: 目前仅支持GPT3.5/GPT4,其他模型转化效果未知。目前对机器学习类文献转化效果最好,其他类型文献转化效果未知。仅在Windows系统进行了测试,其他操作系统表现未知。"])
    yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面

    # <-------------- more requirements ------------->
    if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
    more_req = plugin_kwargs.get("advanced_arg", "")
    _switch_prompt_ = partial(switch_prompt, more_requirement=more_req)

    # <-------------- check deps ------------->
    try:
        import glob, os, time, subprocess
        subprocess.Popen(['pdflatex', '-version'])
        from .latex_fns.latex_actions import Latex精细分解与转化, 编译Latex
    except Exception as e:
        chatbot.append([f"解析项目: {txt}",
                        f"尝试执行Latex指令失败。Latex没有安装, 或者不在环境变量PATH中。安装方法https://tug.org/texlive/。报错信息\n\n```\n\n{trimmed_format_exc()}\n\n```\n\n"])
        yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
        return

    # <-------------- clear history and read input ------------->
    history = []
    if os.path.exists(txt):
        project_folder = txt
    else:
        if txt == "": txt = '空空如也的输入栏'
        report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}")
        yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
        return
    file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)]
    if len(file_manifest) == 0:
        report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何.tex文件: {txt}")
        yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
        return

    # <-------------- if is a zip/tar file ------------->
    project_folder = desend_to_extracted_folder_if_exist(project_folder)

    # <-------------- move latex project away from temp folder ------------->
    project_folder = move_project(project_folder, arxiv_id=None)

    # <-------------- if merge_translate_zh is already generated, skip gpt req ------------->
    if not os.path.exists(project_folder + '/merge_proofread_en.tex'):
        yield from Latex精细分解与转化(file_manifest, project_folder, llm_kwargs, plugin_kwargs,
                                       chatbot, history, system_prompt, mode='proofread_en',
                                       switch_prompt=_switch_prompt_)

    # <-------------- compile PDF ------------->
    success = yield from 编译Latex(chatbot, history, main_file_original='merge',
                                   main_file_modified='merge_proofread_en',
                                   work_folder_original=project_folder, work_folder_modified=project_folder,
                                   work_folder=project_folder)

    # <-------------- zip PDF ------------->
    zip_res = zip_result(project_folder)
    if success:
        chatbot.append((f"成功啦", '请查收结果(压缩包)...'))
        yield from update_ui(chatbot=chatbot, history=history); time.sleep(1)  # 刷新界面
        promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)
    else:
        chatbot.append((f"失败了",
                        '虽然PDF生成失败了, 但请查收结果(压缩包), 内含已经翻译的Tex文档, 也是可读的, 您可以到Github Issue区, 用该压缩包+对话历史存档进行反馈 ...'))
        yield from update_ui(chatbot=chatbot, history=history); time.sleep(1)  # 刷新界面
        promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)

    # <-------------- we are done ------------->
    return success

# =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= 插件主程序2 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

@CatchException
def Latex翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    # <-------------- information about this plugin ------------->
    chatbot.append([
        "函数插件功能?",
        "对整个Latex项目进行翻译, 生成中文PDF。函数插件贡献者: Binary-Husky。注意事项: 此插件Windows支持最佳,Linux下必须使用Docker安装,详见项目主README.md。目前仅支持GPT3.5/GPT4,其他模型转化效果未知。目前对机器学习类文献转化效果最好,其他类型文献转化效果未知。"])
    yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面

    # <-------------- more requirements ------------->
    if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
    more_req = plugin_kwargs.get("advanced_arg", "")
    no_cache = more_req.startswith("--no-cache")
    if no_cache: more_req = more_req[len("--no-cache"):].lstrip()  # strip the flag itself from the requirement text
    allow_cache = not no_cache
    _switch_prompt_ = partial(switch_prompt, more_requirement=more_req)

    # <-------------- check deps ------------->
    try:
        import glob, os, time, subprocess
        subprocess.Popen(['pdflatex', '-version'])
        from .latex_fns.latex_actions import Latex精细分解与转化, 编译Latex
    except Exception as e:
        chatbot.append([f"解析项目: {txt}",
                        f"尝试执行Latex指令失败。Latex没有安装, 或者不在环境变量PATH中。安装方法https://tug.org/texlive/。报错信息\n\n```\n\n{trimmed_format_exc()}\n\n```\n\n"])
        yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
        return

    # <-------------- clear history and read input ------------->
    history = []
    try:
        txt, arxiv_id = yield from arxiv_download(chatbot, history, txt, allow_cache)
    except tarfile.ReadError as e:
        yield from update_ui_lastest_msg(
            "无法自动下载该论文的Latex源码,请前往arxiv打开此论文下载页面,点other Formats,然后download source手动下载latex源码包。接下来调用本地Latex翻译插件即可。",
            chatbot=chatbot, history=history)
        return

    if txt.endswith('.pdf'):
        report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"发现已经存在翻译好的PDF文档")
        yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
        return

    if os.path.exists(txt):
        project_folder = txt
    else:
        if txt == "": txt = '空空如也的输入栏'
        report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无法处理: {txt}")
        yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
        return

    file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)]
    if len(file_manifest) == 0:
        report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何.tex文件: {txt}")
        yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
        return

    # <-------------- if is a zip/tar file ------------->
    project_folder = desend_to_extracted_folder_if_exist(project_folder)

    # <-------------- move latex project away from temp folder ------------->
    project_folder = move_project(project_folder, arxiv_id)

    # <-------------- if merge_translate_zh is already generated, skip gpt req ------------->
    if not os.path.exists(project_folder + '/merge_translate_zh.tex'):
        yield from Latex精细分解与转化(file_manifest, project_folder, llm_kwargs, plugin_kwargs,
                                       chatbot, history, system_prompt, mode='translate_zh',
                                       switch_prompt=_switch_prompt_)

    # <-------------- compile PDF ------------->
    success = yield from 编译Latex(chatbot, history, main_file_original='merge',
                                   main_file_modified='merge_translate_zh', mode='translate_zh',
                                   work_folder_original=project_folder, work_folder_modified=project_folder,
                                   work_folder=project_folder)

    # <-------------- zip PDF ------------->
    zip_res = zip_result(project_folder)
    if success:
        chatbot.append((f"成功啦", '请查收结果(压缩包)...'))
        yield from update_ui(chatbot=chatbot, history=history); time.sleep(1)  # 刷新界面
        promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)
    else:
        chatbot.append((f"失败了",
                        '虽然PDF生成失败了, 但请查收结果(压缩包), 内含已经翻译的Tex文档, 您可以到Github Issue区, 用该压缩包进行反馈。如系统是Linux,请检查系统字体(见Github wiki) ...'))
        yield from update_ui(chatbot=chatbot, history=history); time.sleep(1)  # 刷新界面
        promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)

    # <-------------- we are done ------------->
    return success

# =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- 插件主程序3 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

@CatchException
def PDF翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
    # <-------------- information about this plugin ------------->
    chatbot.append([
        "函数插件功能?",
        "将PDF转换为Latex项目,翻译为中文后重新编译为PDF。函数插件贡献者: Marroh。注意事项: 此插件Windows支持最佳,Linux下必须使用Docker安装,详见项目主README.md。目前仅支持GPT3.5/GPT4,其他模型转化效果未知。目前对机器学习类文献转化效果最好,其他类型文献转化效果未知。"])
    yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面

    # <-------------- more requirements ------------->
    if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
    more_req = plugin_kwargs.get("advanced_arg", "")
    no_cache = more_req.startswith("--no-cache")
    if no_cache: more_req = more_req[len("--no-cache"):].lstrip()  # strip the flag itself from the requirement text
    allow_cache = not no_cache
    _switch_prompt_ = partial(switch_prompt, more_requirement=more_req)

    # <-------------- check deps ------------->
    try:
        import glob, os, time, subprocess
        subprocess.Popen(['pdflatex', '-version'])
        from .latex_fns.latex_actions import Latex精细分解与转化, 编译Latex
    except Exception as e:
        chatbot.append([f"解析项目: {txt}",
                        f"尝试执行Latex指令失败。Latex没有安装, 或者不在环境变量PATH中。安装方法https://tug.org/texlive/。报错信息\n\n```\n\n{trimmed_format_exc()}\n\n```\n\n"])
        yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
        return

    # <-------------- clear history and read input ------------->
    if os.path.exists(txt):
        project_folder = txt
    else:
        if txt == "": txt = '空空如也的输入栏'
        report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无法处理: {txt}")
        yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
        return

    file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.pdf', recursive=True)]
    if len(file_manifest) == 0:
        report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何.pdf文件: {txt}")
        yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
        return
    if len(file_manifest) != 1:
        report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"不支持同时处理多个pdf文件: {txt}")
        yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
        return
    app_id, app_key = get_conf('MATHPIX_APPID', 'MATHPIX_APPKEY')
    if len(app_id) == 0 or len(app_key) == 0:
        report_exception(chatbot, history, a=f"请配置 MATHPIX_APPID 和 MATHPIX_APPKEY")
        yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
        return

    # <-------------- convert pdf into tex ------------->
    project_folder = pdf2tex_project(file_manifest[0])

    # Translate English Latex to Chinese Latex, and compile it
    file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)]
    if len(file_manifest) == 0:
        report_exception(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何.tex文件: {txt}")
        yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
        return

    # <-------------- if is a zip/tar file ------------->
    project_folder = desend_to_extracted_folder_if_exist(project_folder)

    # <-------------- move latex project away from temp folder ------------->
    project_folder = move_project(project_folder)

    # <-------------- if merge_translate_zh is already generated, skip gpt req ------------->
    if not os.path.exists(project_folder + '/merge_translate_zh.tex'):
        yield from Latex精细分解与转化(file_manifest, project_folder, llm_kwargs, plugin_kwargs,
                                       chatbot, history, system_prompt, mode='translate_zh',
                                       switch_prompt=_switch_prompt_)

    # <-------------- compile PDF ------------->
    success = yield from 编译Latex(chatbot, history, main_file_original='merge',
                                   main_file_modified='merge_translate_zh', mode='translate_zh',
                                   work_folder_original=project_folder, work_folder_modified=project_folder,
                                   work_folder=project_folder)

    # <-------------- zip PDF ------------->
    zip_res = zip_result(project_folder)
    if success:
        chatbot.append((f"成功啦", '请查收结果(压缩包)...'))
        yield from update_ui(chatbot=chatbot, history=history); time.sleep(1)  # 刷新界面
        promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)
    else:
        chatbot.append((f"失败了",
                        '虽然PDF生成失败了, 但请查收结果(压缩包), 内含已经翻译的Tex文档, 您可以到Github Issue区, 用该压缩包进行反馈。如系统是Linux,请检查系统字体(见Github wiki) ...'))
        yield from update_ui(chatbot=chatbot, history=history); time.sleep(1)  # 刷新界面
        promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)

    # <-------------- we are done ------------->
|
return success
|
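The pipeline above hinges on `pdf2tex_project`, which hands the single PDF to Mathpix for OCR-based conversion into a LaTeX project. A minimal sketch of how such a request could be assembled with `requests` — the endpoint, header names, and `conversion_formats` option follow Mathpix's public v3 PDF API and are assumptions here, not code taken from this repository:

```python
import json

def build_mathpix_request(app_id: str, app_key: str):
    # Hypothetical helper: assemble a Mathpix v3 PDF-conversion request.
    # Endpoint and option names follow Mathpix's public API docs (assumption),
    # not this repository's pdf2tex_project implementation.
    url = "https://api.mathpix.com/v3/pdf"
    headers = {"app_id": app_id, "app_key": app_key}
    # Ask for the converted document as a zip of .tex sources.
    options_json = json.dumps({"conversion_formats": {"tex.zip": True}})
    return url, headers, options_json

url, headers, options_json = build_mathpix_request("demo_id", "demo_key")
# The actual upload would look roughly like:
# requests.post(url, headers=headers, data={"options_json": options_json},
#               files={"file": open("paper.pdf", "rb")})
```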
@@ -1,313 +0,0 @@
from toolbox import update_ui, trimmed_format_exc, get_conf, get_log_folder, promote_file_to_downloadzone
from toolbox import CatchException, report_exception, update_ui_lastest_msg, zip_result, gen_time_str
from functools import partial
import glob, os, requests, time, tarfile
pj = os.path.join
ARXIV_CACHE_DIR = os.path.expanduser(f"~/arxiv_cache/")

# =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- 工具函数 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
# 专业词汇声明 = 'If the term "agent" is used in this section, it should be translated to "智能体". '
def switch_prompt(pfg, mode, more_requirement):
    """
    Generate prompts and system prompts based on the mode for proofreading or translating.
    Args:
    - pfg: Proofreader or Translator instance.
    - mode: A string specifying the mode, either 'proofread' or 'translate_zh'.

    Returns:
    - inputs_array: A list of strings containing prompts for users to respond to.
    - sys_prompt_array: A list of strings containing prompts for system prompts.
    """
    n_split = len(pfg.sp_file_contents)
    if mode == 'proofread_en':
        inputs_array = [r"Below is a section from an academic paper, proofread this section." +
                        r"Do not modify any latex command such as \section, \cite, \begin, \item and equations. " + more_requirement +
                        r"Answer me only with the revised text:" +
                        f"\n\n{frag}" for frag in pfg.sp_file_contents]
        sys_prompt_array = ["You are a professional academic paper writer." for _ in range(n_split)]
    elif mode == 'translate_zh':
        inputs_array = [r"Below is a section from an English academic paper, translate it into Chinese. " + more_requirement +
                        r"Do not modify any latex command such as \section, \cite, \begin, \item and equations. " +
                        r"Answer me only with the translated text:" +
                        f"\n\n{frag}" for frag in pfg.sp_file_contents]
        sys_prompt_array = ["You are a professional translator." for _ in range(n_split)]
    else:
        assert False, "未知指令"
    return inputs_array, sys_prompt_array


def desend_to_extracted_folder_if_exist(project_folder):
    """
    Descend into the extracted folder if it exists, otherwise return the original folder.

    Args:
    - project_folder: A string specifying the folder path.

    Returns:
    - A string specifying the path to the extracted folder, or the original folder if there is no extracted folder.
    """
    maybe_dir = [f for f in glob.glob(f'{project_folder}/*') if os.path.isdir(f)]
    if len(maybe_dir) == 0: return project_folder
    if maybe_dir[0].endswith('.extract'): return maybe_dir[0]
    return project_folder


def move_project(project_folder, arxiv_id=None):
    """
    Create a new work folder and copy the project folder to it.

    Args:
    - project_folder: A string specifying the folder path of the project.

    Returns:
    - A string specifying the path to the new work folder.
    """
    import shutil, time
    time.sleep(2) # avoid time string conflict
    if arxiv_id is not None:
        new_workfolder = pj(ARXIV_CACHE_DIR, arxiv_id, 'workfolder')
    else:
        new_workfolder = f'{get_log_folder()}/{gen_time_str()}'
    try:
        shutil.rmtree(new_workfolder)
    except:
        pass

    # align subfolder if there is a folder wrapper
    items = glob.glob(pj(project_folder,'*'))
    items = [item for item in items if os.path.basename(item)!='__MACOSX']
    if len(glob.glob(pj(project_folder,'*.tex'))) == 0 and len(items) == 1:
        if os.path.isdir(items[0]): project_folder = items[0]

    shutil.copytree(src=project_folder, dst=new_workfolder)
    return new_workfolder

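Elsewhere in this file, `switch_prompt` gets the user's extra requirement bound in once via `functools.partial` (`_switch_prompt_ = partial(switch_prompt, more_requirement=more_req)`), so downstream code can call it with only the remaining arguments. A self-contained illustration of that pattern, using a toy prompt builder rather than the real `switch_prompt`:

```python
from functools import partial

def build_prompts(fragments, more_requirement):
    # Toy stand-in for switch_prompt: prepend the extra requirement to each fragment.
    return [more_requirement + frag for frag in fragments]

# Bind the requirement once; call sites then only pass the fragments.
_build_ = partial(build_prompts, more_requirement="Translate 'agent' as '智能体'. ")
prompts = _build_(["Section one.", "Section two."])
```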
def arxiv_download(chatbot, history, txt, allow_cache=True):
    def check_cached_translation_pdf(arxiv_id):
        translation_dir = pj(ARXIV_CACHE_DIR, arxiv_id, 'translation')
        if not os.path.exists(translation_dir):
            os.makedirs(translation_dir)
        target_file = pj(translation_dir, 'translate_zh.pdf')
        if os.path.exists(target_file):
            promote_file_to_downloadzone(target_file, rename_file=None, chatbot=chatbot)
            target_file_compare = pj(translation_dir, 'comparison.pdf')
            if os.path.exists(target_file_compare):
                promote_file_to_downloadzone(target_file_compare, rename_file=None, chatbot=chatbot)
            return target_file
        return False
    def is_float(s):
        try:
            float(s)
            return True
        except ValueError:
            return False
    if ('.' in txt) and ('/' not in txt) and is_float(txt): # is arxiv ID
        txt = 'https://arxiv.org/abs/' + txt.strip()
    if ('.' in txt) and ('/' not in txt) and is_float(txt[:10]): # is arxiv ID
        txt = 'https://arxiv.org/abs/' + txt[:10]
    if not txt.startswith('https://arxiv.org'):
        return txt, None # 是本地文件,跳过下载

    # <-------------- inspect format ------------->
    chatbot.append([f"检测到arxiv文档连接", '尝试下载 ...'])
    yield from update_ui(chatbot=chatbot, history=history)
    time.sleep(1) # 刷新界面

    url_ = txt # https://arxiv.org/abs/1707.06690
    if not txt.startswith('https://arxiv.org/abs/'):
        msg = f"解析arxiv网址失败, 期望格式例如: https://arxiv.org/abs/1707.06690。实际得到格式: {url_}。"
        yield from update_ui_lastest_msg(msg, chatbot=chatbot, history=history) # 刷新界面
        return msg, None
    # <-------------- set format ------------->
    arxiv_id = url_.split('/abs/')[-1]
    if 'v' in arxiv_id: arxiv_id = arxiv_id[:10]
    cached_translation_pdf = check_cached_translation_pdf(arxiv_id)
    if cached_translation_pdf and allow_cache: return cached_translation_pdf, arxiv_id

    url_tar = url_.replace('/abs/', '/e-print/')
    translation_dir = pj(ARXIV_CACHE_DIR, arxiv_id, 'e-print')
    extract_dst = pj(ARXIV_CACHE_DIR, arxiv_id, 'extract')
    os.makedirs(translation_dir, exist_ok=True)

    # <-------------- download arxiv source file ------------->
    dst = pj(translation_dir, arxiv_id+'.tar')
    if os.path.exists(dst):
        yield from update_ui_lastest_msg("调用缓存", chatbot=chatbot, history=history) # 刷新界面
    else:
        yield from update_ui_lastest_msg("开始下载", chatbot=chatbot, history=history) # 刷新界面
        proxies = get_conf('proxies')
        r = requests.get(url_tar, proxies=proxies)
        with open(dst, 'wb+') as f:
            f.write(r.content)
    # <-------------- extract file ------------->
    yield from update_ui_lastest_msg("下载完成", chatbot=chatbot, history=history) # 刷新界面
    from toolbox import extract_archive
    extract_archive(file_path=dst, dest_dir=extract_dst)
    return extract_dst, arxiv_id

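`arxiv_download` above normalizes several input shapes — a bare ID like `1707.06690`, a versioned ID like `1707.06690v2`, or a full URL — before deciding whether to download. That detection logic can be condensed into a standalone, testable helper (a sketch mirroring the checks above, not part of the repository's API):

```python
def normalize_arxiv_input(txt: str):
    """Return an https://arxiv.org/abs/ URL, or None if txt is not an arxiv reference."""
    def is_float(s):
        try:
            float(s)
            return True
        except ValueError:
            return False
    txt = txt.strip()
    if ('.' in txt) and ('/' not in txt) and is_float(txt):       # bare ID, e.g. 1707.06690
        return 'https://arxiv.org/abs/' + txt
    if ('.' in txt) and ('/' not in txt) and is_float(txt[:10]):  # versioned ID, e.g. 1707.06690v2
        return 'https://arxiv.org/abs/' + txt[:10]
    if txt.startswith('https://arxiv.org/abs/'):                  # already a full URL
        return txt
    return None                                                   # local path or unrelated text
```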
# =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= 插件主程序1 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=


@CatchException
def Latex英文纠错加PDF对比(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    # <-------------- information about this plugin ------------->
    chatbot.append([ "函数插件功能?",
                    "对整个Latex项目进行纠错, 用latex编译为PDF对修正处做高亮。函数插件贡献者: Binary-Husky。注意事项: 目前仅支持GPT3.5/GPT4,其他模型转化效果未知。目前对机器学习类文献转化效果最好,其他类型文献转化效果未知。仅在Windows系统进行了测试,其他操作系统表现未知。"])
    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面

    # <-------------- more requirements ------------->
    if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
    more_req = plugin_kwargs.get("advanced_arg", "")
    _switch_prompt_ = partial(switch_prompt, more_requirement=more_req)

    # <-------------- check deps ------------->
    try:
        import glob, os, time, subprocess
        subprocess.Popen(['pdflatex', '-version'])
        from .latex_fns.latex_actions import Latex精细分解与转化, 编译Latex
    except Exception as e:
        chatbot.append([ f"解析项目: {txt}",
                        f"尝试执行Latex指令失败。Latex没有安装, 或者不在环境变量PATH中。安装方法https://tug.org/texlive/。报错信息\n\n```\n\n{trimmed_format_exc()}\n\n```\n\n"])
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
        return

    # <-------------- clear history and read input ------------->
    history = []
    if os.path.exists(txt):
        project_folder = txt
    else:
        if txt == "": txt = '空空如也的输入栏'
        report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
        return
    file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)]
    if len(file_manifest) == 0:
        report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex文件: {txt}")
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
        return

    # <-------------- if is a zip/tar file ------------->
    project_folder = desend_to_extracted_folder_if_exist(project_folder)

    # <-------------- move latex project away from temp folder ------------->
    project_folder = move_project(project_folder, arxiv_id=None)

    # <-------------- if merge_proofread_en is already generated, skip gpt req ------------->
    if not os.path.exists(project_folder + '/merge_proofread_en.tex'):
        yield from Latex精细分解与转化(file_manifest, project_folder, llm_kwargs, plugin_kwargs,
                                       chatbot, history, system_prompt, mode='proofread_en', switch_prompt=_switch_prompt_)

    # <-------------- compile PDF ------------->
    success = yield from 编译Latex(chatbot, history, main_file_original='merge', main_file_modified='merge_proofread_en',
                                   work_folder_original=project_folder, work_folder_modified=project_folder, work_folder=project_folder)

    # <-------------- zip PDF ------------->
    zip_res = zip_result(project_folder)
    if success:
        chatbot.append((f"成功啦", '请查收结果(压缩包)...'))
        yield from update_ui(chatbot=chatbot, history=history); time.sleep(1) # 刷新界面
        promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)
    else:
        chatbot.append((f"失败了", '虽然PDF生成失败了, 但请查收结果(压缩包), 内含已经翻译的Tex文档, 也是可读的, 您可以到Github Issue区, 用该压缩包+对话历史存档进行反馈 ...'))
        yield from update_ui(chatbot=chatbot, history=history); time.sleep(1) # 刷新界面
        promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)

    # <-------------- we are done ------------->
    return success

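Both plugin entry points in this file build their file manifest with a recursive glob over the project folder (`glob.glob(f'{project_folder}/**/*.tex', recursive=True)`). A minimal, self-contained demonstration of that pattern — the temporary directory and file names below are invented for the demo:

```python
import glob, os, tempfile

# Create a throwaway project tree with one nested .tex file.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, 'sections'))
with open(os.path.join(root, 'sections', 'intro.tex'), 'w') as f:
    f.write(r'\section{Intro}')

# Same pattern as the plugins: '**' plus recursive=True descends into subfolders.
file_manifest = [f for f in glob.glob(f'{root}/**/*.tex', recursive=True)]
```

Without `recursive=True`, `**` behaves like a single `*` and nested files are missed, which is why every manifest check above depends on it.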
# =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= 插件主程序2 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

@CatchException
def Latex翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_request):
    # <-------------- information about this plugin ------------->
    chatbot.append([
        "函数插件功能?",
        "对整个Latex项目进行翻译, 生成中文PDF。函数插件贡献者: Binary-Husky。注意事项: 此插件Windows支持最佳,Linux下必须使用Docker安装,详见项目主README.md。目前仅支持GPT3.5/GPT4,其他模型转化效果未知。目前对机器学习类文献转化效果最好,其他类型文献转化效果未知。"])
    yield from update_ui(chatbot=chatbot, history=history) # 刷新界面

    # <-------------- more requirements ------------->
    if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
    more_req = plugin_kwargs.get("advanced_arg", "")
    no_cache = more_req.startswith("--no-cache")
    if no_cache: more_req = more_req[len("--no-cache"):].strip() # 注意: 需切片去除前缀并保存结果; str.lstrip按字符集合裁剪且不会原地修改字符串
    allow_cache = not no_cache
    _switch_prompt_ = partial(switch_prompt, more_requirement=more_req)

    # <-------------- check deps ------------->
    try:
        import glob, os, time, subprocess
        subprocess.Popen(['pdflatex', '-version'])
        from .latex_fns.latex_actions import Latex精细分解与转化, 编译Latex
    except Exception as e:
        chatbot.append([ f"解析项目: {txt}",
                        f"尝试执行Latex指令失败。Latex没有安装, 或者不在环境变量PATH中。安装方法https://tug.org/texlive/。报错信息\n\n```\n\n{trimmed_format_exc()}\n\n```\n\n"])
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
        return

    # <-------------- clear history and read input ------------->
    history = []
    try:
        txt, arxiv_id = yield from arxiv_download(chatbot, history, txt, allow_cache)
    except tarfile.ReadError as e:
        yield from update_ui_lastest_msg(
            "无法自动下载该论文的Latex源码,请前往arxiv打开此论文下载页面,点other Formats,然后download source手动下载latex源码包。接下来调用本地Latex翻译插件即可。",
            chatbot=chatbot, history=history)
        return

    if txt.endswith('.pdf'):
        report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"发现已经存在翻译好的PDF文档")
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
        return

    if os.path.exists(txt):
        project_folder = txt
    else:
        if txt == "": txt = '空空如也的输入栏'
        report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无法处理: {txt}")
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
        return

    file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)]
    if len(file_manifest) == 0:
        report_exception(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex文件: {txt}")
        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
        return

    # <-------------- if is a zip/tar file ------------->
    project_folder = desend_to_extracted_folder_if_exist(project_folder)

    # <-------------- move latex project away from temp folder ------------->
    project_folder = move_project(project_folder, arxiv_id)

    # <-------------- if merge_translate_zh is already generated, skip gpt req ------------->
    if not os.path.exists(project_folder + '/merge_translate_zh.tex'):
        yield from Latex精细分解与转化(file_manifest, project_folder, llm_kwargs, plugin_kwargs,
                                       chatbot, history, system_prompt, mode='translate_zh', switch_prompt=_switch_prompt_)

    # <-------------- compile PDF ------------->
    success = yield from 编译Latex(chatbot, history, main_file_original='merge', main_file_modified='merge_translate_zh', mode='translate_zh',
                                   work_folder_original=project_folder, work_folder_modified=project_folder, work_folder=project_folder)

    # <-------------- zip PDF ------------->
    zip_res = zip_result(project_folder)
    if success:
        chatbot.append((f"成功啦", '请查收结果(压缩包)...'))
        yield from update_ui(chatbot=chatbot, history=history); time.sleep(1) # 刷新界面
        promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)
    else:
        chatbot.append((f"失败了", '虽然PDF生成失败了, 但请查收结果(压缩包), 内含已经翻译的Tex文档, 您可以到Github Issue区, 用该压缩包进行反馈。如系统是Linux,请检查系统字体(见Github wiki) ...'))
        yield from update_ui(chatbot=chatbot, history=history); time.sleep(1) # 刷新界面
        promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)

    # <-------------- we are done ------------->
    return success
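The translation plugin above strips an optional `--no-cache` prefix from its advanced argument to decide whether cached arxiv results may be reused. Two pitfalls lurk in the naive `more_req.lstrip("--no-cache")` form: `str.lstrip` strips a *character set* rather than a prefix, and strings are immutable, so the return value must be kept. A corrected, testable version of that flag handling:

```python
def split_no_cache_flag(advanced_arg: str):
    """Return (allow_cache, remaining_requirement) from the plugin's advanced arg."""
    no_cache = advanced_arg.startswith("--no-cache")
    if no_cache:
        # Slice off the literal prefix; lstrip("--no-cache") would instead remove
        # any leading '-', 'n', 'o', 'c', ... characters, and its result must be assigned.
        advanced_arg = advanced_arg[len("--no-cache"):].strip()
    return (not no_cache), advanced_arg
```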
crazy_functions/pdf_fns/parse_word.py (new file, 85 lines)
@@ -0,0 +1,85 @@
from crazy_functions.crazy_utils import read_and_clean_pdf_text, get_files_from_everything
import os
import re

def extract_text_from_files(txt, chatbot, history):
    """
    查找pdf/md/word文件, 获取文本内容并返回状态以及文本

    输入参数 Args:
        chatbot: chatbot inputs and outputs (用户界面对话窗口句柄,用于数据流可视化)
        history (list): List of chat history (历史,对话历史列表)

    输出 Returns:
        文件是否存在 (bool)
        final_result (list): 文本内容
        page_one (list): 第一页内容/摘要
        file_manifest (list): 文件路径
        excption (string): 需要用户手动处理的信息, 如没出错则保持为空
    """

    final_result = []
    page_one = []
    file_manifest = []
    excption = ""

    if txt == "":
        final_result.append(txt)
        return False, final_result, page_one, file_manifest, excption # 如输入区内容不是文件则直接返回输入区内容

    # 查找输入区内容中的文件
    file_pdf, pdf_manifest, folder_pdf = get_files_from_everything(txt, '.pdf')
    file_md, md_manifest, folder_md = get_files_from_everything(txt, '.md')
    file_word, word_manifest, folder_word = get_files_from_everything(txt, '.docx')
    file_doc, doc_manifest, folder_doc = get_files_from_everything(txt, '.doc')

    if file_doc:
        excption = "word"
        return False, final_result, page_one, file_manifest, excption

    file_num = len(pdf_manifest) + len(md_manifest) + len(word_manifest)
    if file_num == 0:
        final_result.append(txt)
        return False, final_result, page_one, file_manifest, excption # 如输入区内容不是文件则直接返回输入区内容

    if file_pdf:
        try: # 尝试导入依赖,如果缺少依赖,则给出安装建议
            import fitz
        except:
            excption = "pdf"
            return False, final_result, page_one, file_manifest, excption
        for index, fp in enumerate(pdf_manifest):
            file_content, pdf_one = read_and_clean_pdf_text(fp) # (尝试)按照章节切割PDF
            file_content = file_content.encode('utf-8', 'ignore').decode() # avoid reading non-utf8 chars
            pdf_one = str(pdf_one).encode('utf-8', 'ignore').decode() # avoid reading non-utf8 chars
            final_result.append(file_content)
            page_one.append(pdf_one)
            file_manifest.append(os.path.relpath(fp, folder_pdf))

    if file_md:
        for index, fp in enumerate(md_manifest):
            with open(fp, 'r', encoding='utf-8', errors='replace') as f:
                file_content = f.read()
            file_content = file_content.encode('utf-8', 'ignore').decode()
            headers = re.findall(r'^#\s(.*)$', file_content, re.MULTILINE) # 接下来提取md中的一级/二级标题作为摘要
            if len(headers) > 0:
                page_one.append("\n".join(headers)) # 合并所有的标题,以换行符分割
            else:
                page_one.append("")
            final_result.append(file_content)
            file_manifest.append(os.path.relpath(fp, folder_md))

    if file_word:
        try: # 尝试导入依赖,如果缺少依赖,则给出安装建议
            from docx import Document
        except:
            excption = "word_pip"
            return False, final_result, page_one, file_manifest, excption
        for index, fp in enumerate(word_manifest):
            doc = Document(fp)
            file_content = '\n'.join([p.text for p in doc.paragraphs])
            file_content = file_content.encode('utf-8', 'ignore').decode()
            page_one.append(file_content[:200])
            final_result.append(file_content)
            file_manifest.append(os.path.relpath(fp, folder_word))

    return True, final_result, page_one, file_manifest, excption
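For markdown inputs, `extract_text_from_files` uses a multiline regex to pull top-level `#` headers out of the file as a stand-in abstract. The core of that step in isolation:

```python
import re

md = """# Title
Some text.
# Second header
## Subsection
More text."""

# re.MULTILINE makes ^ match at every line start, so each top-level '# ' header
# is captured; '## Subsection' fails because '\s' does not match the second '#'.
headers = re.findall(r'^#\s(.*)$', md, re.MULTILINE)
summary = "\n".join(headers) if headers else ""
```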
@@ -1,6 +1,5 @@
 from toolbox import CatchException, update_ui, report_exception
 from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
-from .crazy_utils import read_and_clean_pdf_text
 import datetime
 
 #以下是每类图表的PROMPT
@@ -162,7 +161,7 @@ mindmap
 ```
 """
 
-def 解析历史输入(history,llm_kwargs,chatbot,plugin_kwargs):
+def 解析历史输入(history,llm_kwargs,file_manifest,chatbot,plugin_kwargs):
     ############################## <第 0 步,切割输入> ##################################
     # 借用PDF切割中的函数对文本进行切割
     TOKEN_LIMIT_PER_FRAGMENT = 2500
@@ -170,8 +169,6 @@ def 解析历史输入(history,llm_kwargs,chatbot,plugin_kwargs):
     from crazy_functions.pdf_fns.breakdown_txt import breakdown_text_to_satisfy_token_limit
     txt = breakdown_text_to_satisfy_token_limit(txt=txt, limit=TOKEN_LIMIT_PER_FRAGMENT, llm_model=llm_kwargs['llm_model'])
     ############################## <第 1 步,迭代地历遍整个文章,提取精炼信息> ##################################
-    i_say_show_user = f'首先你从历史记录或文件中提取摘要。'; gpt_say = "[Local Message] 收到。" # 用户提示
-    chatbot.append([i_say_show_user, gpt_say]); yield from update_ui(chatbot=chatbot, history=history) # 更新UI
     results = []
     MAX_WORD_TOTAL = 4096
     n_txt = len(txt)
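The `breakdown_text_to_satisfy_token_limit` call in the hunk above splits the accumulated history into fragments that each fit a token budget. A naive character-budget stand-in that shows the shape of such a splitter — illustrative only, since the real helper counts model tokens rather than characters:

```python
def breakdown_by_chars(text: str, limit: int):
    """Naive stand-in for breakdown_text_to_satisfy_token_limit:
    split on a plain character budget instead of model tokens."""
    fragments = []
    while len(text) > limit:
        # Prefer to cut at the last newline inside the budget to keep lines whole.
        cut = text.rfind('\n', 0, limit)
        if cut <= 0:
            cut = limit
        fragments.append(text[:cut])
        text = text[cut:].lstrip('\n')
    if text:
        fragments.append(text)
    return fragments
```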
@@ -179,7 +176,7 @@ def 解析历史输入(history,llm_kwargs,chatbot,plugin_kwargs):
     if n_txt >= 20: print('文章极长,不能达到预期效果')
     for i in range(n_txt):
         NUM_OF_WORD = MAX_WORD_TOTAL // n_txt
-        i_say = f"Read this section, recapitulate the content of this section with less than {NUM_OF_WORD} words: {txt[i]}"
+        i_say = f"Read this section, recapitulate the content of this section with less than {NUM_OF_WORD} words in Chinese: {txt[i]}"
         i_say_show_user = f"[{i+1}/{n_txt}] Read this section, recapitulate the content of this section with less than {NUM_OF_WORD} words: {txt[i][:200]} ...."
         gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(i_say, i_say_show_user, # i_say=真正给chatgpt的提问, i_say_show_user=给用户看的提问
                                                                            llm_kwargs, chatbot,
@@ -232,35 +229,11 @@
         inputs=i_say,
         inputs_show_user=i_say_show_user,
         llm_kwargs=llm_kwargs, chatbot=chatbot, history=[],
-        sys_prompt="你精通使用mermaid语法来绘制图表,首先确保语法正确,其次避免在mermaid语法中使用不允许的字符,此外也应当分考虑图表的可读性。"
+        sys_prompt=""
         )
     history.append(gpt_say)
     yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新
 
-def 输入区文件处理(txt):
-    if txt == "": return False, txt
-    success = True
-    import glob
-    from .crazy_utils import get_files_from_everything
-    file_pdf,pdf_manifest,folder_pdf = get_files_from_everything(txt, '.pdf')
-    file_md,md_manifest,folder_md = get_files_from_everything(txt, '.md')
-    if len(pdf_manifest) == 0 and len(md_manifest) == 0:
-        return False, txt #如输入区内容不是文件则直接返回输入区内容
-
-    final_result = ""
-    if file_pdf:
-        for index, fp in enumerate(pdf_manifest):
-            file_content, page_one = read_and_clean_pdf_text(fp) # (尝试)按照章节切割PDF
-            file_content = file_content.encode('utf-8', 'ignore').decode() # avoid reading non-utf8 chars
-            final_result += "\n" + file_content
-    if file_md:
-        for index, fp in enumerate(md_manifest):
-            with open(fp, 'r', encoding='utf-8', errors='replace') as f:
-                file_content = f.read()
-            file_content = file_content.encode('utf-8', 'ignore').decode()
-            final_result += "\n" + file_content
-    return True, final_result
-
 @CatchException
 def 生成多种Mermaid图表(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
     """
@@ -277,26 +250,47 @@
     # 基本信息:功能、贡献者
     chatbot.append([
         "函数插件功能?",
-        "根据当前聊天历史或文件中(文件内容优先)绘制多种mermaid图表,将会由对话模型首先判断适合的图表类型,随后绘制图表。\
+        "根据当前聊天历史或指定的路径文件(文件内容优先)绘制多种mermaid图表,将会由对话模型首先判断适合的图表类型,随后绘制图表。\
         \n您也可以使用插件参数指定绘制的图表类型,函数插件贡献者: Menghuan1918"])
     yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
 
-    # 尝试导入依赖,如果缺少依赖,则给出安装建议
-    try:
-        import fitz
-    except:
-        report_exception(chatbot, history,
-            a = f"解析项目: {txt}",
-            b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pymupdf```。")
-        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
-        return
-
     if os.path.exists(txt): #如输入区无内容则直接解析历史记录
-        file_exist, txt = 输入区文件处理(txt)
+        from crazy_functions.pdf_fns.parse_word import extract_text_from_files
+        file_exist, final_result, page_one, file_manifest, excption = extract_text_from_files(txt, chatbot, history)
     else:
         file_exist = False
+        excption = ""
+        file_manifest = []
 
-    if file_exist : history = [] #如输入区内容为文件则清空历史记录
-    history.append(txt) #将解析后的txt传递加入到历史中
-
-    yield from 解析历史输入(history,llm_kwargs,chatbot,plugin_kwargs)
+    if excption != "":
+        if excption == "word":
+            report_exception(chatbot, history,
+                a = f"解析项目: {txt}",
+                b = f"找到了.doc文件,但是该文件格式不被支持,请先转化为.docx格式。")
+
+        elif excption == "pdf":
+            report_exception(chatbot, history,
+                a = f"解析项目: {txt}",
+                b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pymupdf```。")
+
+        elif excption == "word_pip":
+            report_exception(chatbot, history,
+                a=f"解析项目: {txt}",
+                b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade python-docx pywin32```。")
+
+        yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
+
+    else:
+        if not file_exist:
+            history.append(txt) #如输入区不是文件则将输入区内容加入历史记录
+            i_say_show_user = f'首先你从历史记录中提取摘要。'; gpt_say = "[Local Message] 收到。" # 用户提示
+            chatbot.append([i_say_show_user, gpt_say]); yield from update_ui(chatbot=chatbot, history=history) # 更新UI
+            yield from 解析历史输入(history,llm_kwargs,file_manifest,chatbot,plugin_kwargs)
+        else:
+            file_num = len(file_manifest)
+            for i in range(file_num): #依次处理文件
+                i_say_show_user = f"[{i+1}/{file_num}]处理文件{file_manifest[i]}"; gpt_say = "[Local Message] 收到。" # 用户提示
+                chatbot.append([i_say_show_user, gpt_say]); yield from update_ui(chatbot=chatbot, history=history) # 更新UI
+                history = [] #如输入区内容为文件则清空历史记录
+                history.append(final_result[i])
+                yield from 解析历史输入(history,llm_kwargs,file_manifest,chatbot,plugin_kwargs)
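The `excption` branch ladder in the new Mermaid plugin body maps an error code from `extract_text_from_files` to a user-facing hint. The same dispatch can be stated as a lookup table — the dictionary below is an illustrative restatement for clarity, not code from the repository:

```python
# Hypothetical mapping mirroring the excption codes handled above.
INSTALL_HINTS = {
    "word": "找到了.doc文件,该格式不被支持,请先转化为.docx格式。",
    "pdf": "pip install --upgrade pymupdf",
    "word_pip": "pip install --upgrade python-docx pywin32",
}

def hint_for(excption: str) -> str:
    # Empty string means no user action is required.
    return INSTALL_HINTS.get(excption, "")
```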
@@ -1668,7 +1668,7 @@
     "Markdown翻译指定语言": "TranslateMarkdownToSpecifiedLanguage",
     "Langchain知识库": "LangchainKnowledgeBase",
     "Latex英文纠错加PDF对比": "CorrectEnglishInLatexWithPDFComparison",
-    "Latex输出PDF结果": "OutputPDFFromLatex",
+    "Latex输出PDF": "OutputPDFFromLatex",
     "Latex翻译中文并重新编译PDF": "TranslateChineseToEnglishInLatexAndRecompilePDF",
     "sprint亮靛": "SprintIndigo",
     "寻找Latex主文件": "FindLatexMainFile",
@@ -1492,7 +1492,7 @@
     "交互功能模板函数": "InteractiveFunctionTemplateFunction",
     "交互功能函数模板": "InteractiveFunctionFunctionTemplate",
     "Latex英文纠错加PDF对比": "LatexEnglishErrorCorrectionWithPDFComparison",
-    "Latex输出PDF结果": "LatexOutputPDFResult",
+    "Latex输出PDF": "LatexOutputPDFResult",
     "Latex翻译中文并重新编译PDF": "TranslateChineseAndRecompilePDF",
     "语音助手": "VoiceAssistant",
     "微调数据集生成": "FineTuneDatasetGeneration",
@@ -16,7 +16,7 @@
     "批量Markdown翻译": "BatchTranslateMarkdown",
     "连接bing搜索回答问题": "ConnectBingSearchAnswerQuestion",
     "Langchain知识库": "LangchainKnowledgeBase",
-    "Latex输出PDF结果": "OutputPDFFromLatex",
+    "Latex输出PDF": "OutputPDFFromLatex",
     "把字符太少的块清除为回车": "ClearBlocksWithTooFewCharactersToNewline",
     "Latex精细分解与转化": "DecomposeAndConvertLatex",
     "解析一个C项目的头文件": "ParseCProjectHeaderFiles",
@@ -1468,7 +1468,7 @@
     "交互功能模板函数": "InteractiveFunctionTemplateFunctions",
     "交互功能函数模板": "InteractiveFunctionFunctionTemplates",
     "Latex英文纠错加PDF对比": "LatexEnglishCorrectionWithPDFComparison",
-    "Latex输出PDF结果": "OutputPDFFromLatex",
+    "Latex输出PDF": "OutputPDFFromLatex",
    "Latex翻译中文并重新编译PDF": "TranslateLatexToChineseAndRecompilePDF",
     "语音助手": "VoiceAssistant",
     "微调数据集生成": "FineTuneDatasetGeneration",
@@ -1,30 +0,0 @@
-try {
-    $("<link>").attr({href: "file=docs/waifu_plugin/waifu.css", rel: "stylesheet", type: "text/css"}).appendTo('head');
-    $('body').append('<div class="waifu"><div class="waifu-tips"></div><canvas id="live2d" class="live2d"></canvas><div class="waifu-tool"><span class="fui-home"></span> <span class="fui-chat"></span> <span class="fui-eye"></span> <span class="fui-user"></span> <span class="fui-photo"></span> <span class="fui-info-circle"></span> <span class="fui-cross"></span></div></div>');
-    $.ajax({url: "file=docs/waifu_plugin/waifu-tips.js", dataType:"script", cache: true, success: function() {
-        $.ajax({url: "file=docs/waifu_plugin/live2d.js", dataType:"script", cache: true, success: function() {
-            /* 可直接修改部分参数 */
-            live2d_settings['hitokotoAPI'] = "hitokoto.cn"; // 一言 API
-            live2d_settings['modelId'] = 5;                 // 默认模型 ID
-            live2d_settings['modelTexturesId'] = 1;         // 默认材质 ID
-            live2d_settings['modelStorage'] = false;        // 不储存模型 ID
-            live2d_settings['waifuSize'] = '210x187';
-            live2d_settings['waifuTipsSize'] = '187x52';
-            live2d_settings['canSwitchModel'] = true;
-            live2d_settings['canSwitchTextures'] = true;
-            live2d_settings['canSwitchHitokoto'] = false;
-            live2d_settings['canTakeScreenshot'] = false;
-            live2d_settings['canTurnToHomePage'] = false;
-            live2d_settings['canTurnToAboutPage'] = false;
-            live2d_settings['showHitokoto'] = false;        // 显示一言
-            live2d_settings['showF12Status'] = false;       // 显示加载状态
-            live2d_settings['showF12Message'] = false;      // 显示看板娘消息
-            live2d_settings['showF12OpenMsg'] = false;      // 显示控制台打开提示
-            live2d_settings['showCopyMessage'] = false;     // 显示 复制内容 提示
-            live2d_settings['showWelcomeMessage'] = true;   // 显示进入面页欢迎词
-
-            /* 在 initModel 前添加 */
-            initModel("file=docs/waifu_plugin/waifu-tips.json");
-        }});
-    }});
-} catch(err) { console.log("[Error] JQuery is not defined.") }
main.py (93 changes)
@@ -15,22 +15,22 @@ help_menu_description = \
 
 def main():
     import gradio as gr
-    if gr.__version__ not in ['3.32.6', '3.32.7', '3.32.8']:
+    if gr.__version__ not in ['3.32.8']:
         raise ModuleNotFoundError("使用项目内置Gradio获取最优体验! 请运行 `pip install -r requirements.txt` 指令安装内置Gradio及其他依赖, 详情信息见requirements.txt.")
     from request_llms.bridge_all import predict
     from toolbox import format_io, find_free_port, on_file_uploaded, on_report_generated, get_conf, ArgsGeneralWrapper, load_chat_cookies, DummyWith
     # 建议您复制一个config_private.py放自己的秘密, 如API和代理网址
     proxies, WEB_PORT, LLM_MODEL, CONCURRENT_COUNT, AUTHENTICATION = get_conf('proxies', 'WEB_PORT', 'LLM_MODEL', 'CONCURRENT_COUNT', 'AUTHENTICATION')
     CHATBOT_HEIGHT, LAYOUT, AVAIL_LLM_MODELS, AUTO_CLEAR_TXT = get_conf('CHATBOT_HEIGHT', 'LAYOUT', 'AVAIL_LLM_MODELS', 'AUTO_CLEAR_TXT')
-    ENABLE_AUDIO, AUTO_CLEAR_TXT, PATH_LOGGING, AVAIL_THEMES, THEME = get_conf('ENABLE_AUDIO', 'AUTO_CLEAR_TXT', 'PATH_LOGGING', 'AVAIL_THEMES', 'THEME')
+    ENABLE_AUDIO, AUTO_CLEAR_TXT, PATH_LOGGING, AVAIL_THEMES, THEME, ADD_WAIFU = get_conf('ENABLE_AUDIO', 'AUTO_CLEAR_TXT', 'PATH_LOGGING', 'AVAIL_THEMES', 'THEME', 'ADD_WAIFU')
     DARK_MODE, NUM_CUSTOM_BASIC_BTN, SSL_KEYFILE, SSL_CERTFILE = get_conf('DARK_MODE', 'NUM_CUSTOM_BASIC_BTN', 'SSL_KEYFILE', 'SSL_CERTFILE')
     INIT_SYS_PROMPT = get_conf('INIT_SYS_PROMPT')
 
     # 如果WEB_PORT是-1, 则随机选取WEB端口
     PORT = find_free_port() if WEB_PORT <= 0 else WEB_PORT
     from check_proxy import get_current_version
-    from themes.theme import adjust_theme, advanced_css, theme_declaration
-    from themes.theme import js_code_for_css_changing, js_code_for_darkmode_init, js_code_for_toggle_darkmode, js_code_for_persistent_cookie_init
+    from themes.theme import adjust_theme, advanced_css, theme_declaration, js_code_clear, js_code_reset, js_code_show_or_hide, js_code_show_or_hide_group2
+    from themes.theme import js_code_for_css_changing, js_code_for_toggle_darkmode, js_code_for_persistent_cookie_init
     from themes.theme import load_dynamic_theme, to_cookie_str, from_cookie_str, init_cookie
     title_html = f"<h1 align=\"center\">GPT 学术优化 {get_current_version()}</h1>{theme_declaration}"
 
@@ -76,7 +76,7 @@ def main():
     predefined_btns = {}
     with gr.Blocks(title="GPT 学术优化", theme=set_theme, analytics_enabled=False, css=advanced_css) as demo:
         gr.HTML(title_html)
-        secret_css, dark_mode, persistent_cookie = gr.Textbox(visible=False), gr.Textbox(DARK_MODE, visible=False), gr.Textbox(visible=False)
+        secret_css, dark_mode, py_pickle_cookie = gr.Textbox(visible=False), gr.Textbox(DARK_MODE, visible=False), gr.Textbox(visible=False)
         cookies = gr.State(load_chat_cookies())
         with gr_L1():
             with gr_L2(scale=2, elem_id="gpt-chat"):
@@ -98,6 +98,7 @@ def main():
                 audio_mic = gr.Audio(source="microphone", type="numpy", elem_id="elem_audio", streaming=True, show_label=False).style(container=False)
             with gr.Row():
                 status = gr.Markdown(f"Tip: 按Enter提交, 按Shift+Enter换行。当前模型: {LLM_MODEL} \n {proxy_info}", elem_id="state-panel")
 
         with gr.Accordion("基础功能区", open=True, elem_id="basic-panel") as area_basic_fn:
             with gr.Row():
                 for k in range(NUM_CUSTOM_BASIC_BTN):
@@ -142,7 +143,6 @@ def main():
             with gr.Accordion("点击展开“文件下载区”。", open=False) as area_file_up:
                 file_upload = gr.Files(label="任何文件, 推荐上传压缩文件(zip, tar)", file_count="multiple", elem_id="elem_upload")
 
         with gr.Floating(init_x="0%", init_y="0%", visible=True, width=None, drag="forbidden", elem_id="tooltip"):
             with gr.Row():
                 with gr.Tab("上传文件", elem_id="interact-panel"):
@@ -158,10 +158,11 @@ def main():
 
                 with gr.Tab("界面外观", elem_id="interact-panel"):
                     theme_dropdown = gr.Dropdown(AVAIL_THEMES, value=THEME, label="更换UI主题").style(container=False)
-                    checkboxes = gr.CheckboxGroup(["基础功能区", "函数插件区", "浮动输入区", "输入清除键", "插件参数区"],
-                                                  value=["基础功能区", "函数插件区"], label="显示/隐藏功能区", elem_id='cbs').style(container=False)
-                    checkboxes_2 = gr.CheckboxGroup(["自定义菜单"],
-                                                    value=[], label="显示/隐藏自定义菜单", elem_id='cbsc').style(container=False)
+                    checkboxes = gr.CheckboxGroup(["基础功能区", "函数插件区", "浮动输入区", "输入清除键", "插件参数区"], value=["基础功能区", "函数插件区"], label="显示/隐藏功能区", elem_id='cbs').style(container=False)
+                    opt = ["自定义菜单"]
+                    value=[]
+                    if ADD_WAIFU: opt += ["添加Live2D形象"]; value += ["添加Live2D形象"]
+                    checkboxes_2 = gr.CheckboxGroup(opt, value=value, label="显示/隐藏自定义菜单", elem_id='cbsc').style(container=False)
                     dark_mode_btn = gr.Button("切换界面明暗 ☀", variant="secondary").style(size="sm")
                     dark_mode_btn.click(None, None, None, _js=js_code_for_toggle_darkmode)
                 with gr.Tab("帮助", elem_id="interact-panel"):
@@ -178,7 +179,7 @@ def main():
                 submitBtn2 = gr.Button("提交", variant="primary"); submitBtn2.style(size="sm")
                 resetBtn2 = gr.Button("重置", variant="secondary"); resetBtn2.style(size="sm")
                 stopBtn2 = gr.Button("停止", variant="secondary"); stopBtn2.style(size="sm")
-                clearBtn2 = gr.Button("清除", variant="secondary", visible=False); clearBtn2.style(size="sm")
+                clearBtn2 = gr.Button("清除", elem_id="elem_clear2", variant="secondary", visible=False); clearBtn2.style(size="sm")
 
         with gr.Floating(init_x="20%", init_y="50%", visible=False, width="40%", drag="top") as area_customize:
|
|||||||
basic_fn_suffix = gr.Textbox(show_label=False, placeholder="输入新提示后缀", lines=4).style(container=False)
|
basic_fn_suffix = gr.Textbox(show_label=False, placeholder="输入新提示后缀", lines=4).style(container=False)
|
||||||
with gr.Column(scale=1, min_width=70):
|
with gr.Column(scale=1, min_width=70):
|
||||||
basic_fn_confirm = gr.Button("确认并保存", variant="primary"); basic_fn_confirm.style(size="sm")
|
basic_fn_confirm = gr.Button("确认并保存", variant="primary"); basic_fn_confirm.style(size="sm")
|
||||||
basic_fn_load = gr.Button("加载已保存", variant="primary"); basic_fn_load.style(size="sm")
|
basic_fn_clean = gr.Button("恢复默认", variant="primary"); basic_fn_clean.style(size="sm")
|
||||||
def assign_btn(persistent_cookie_, cookies_, basic_btn_dropdown_, basic_fn_title, basic_fn_prefix, basic_fn_suffix):
|
def assign_btn(persistent_cookie_, cookies_, basic_btn_dropdown_, basic_fn_title, basic_fn_prefix, basic_fn_suffix, clean_up=False):
|
||||||
ret = {}
|
ret = {}
|
||||||
|
# 读取之前的自定义按钮
|
||||||
customize_fn_overwrite_ = cookies_['customize_fn_overwrite']
|
customize_fn_overwrite_ = cookies_['customize_fn_overwrite']
|
||||||
|
# 更新新的自定义按钮
|
||||||
customize_fn_overwrite_.update({
|
customize_fn_overwrite_.update({
|
||||||
basic_btn_dropdown_:
|
basic_btn_dropdown_:
|
||||||
{
|
{
|
||||||
@@ -205,20 +208,34 @@ def main():
                 }
             }
             )
-            cookies_.update(customize_fn_overwrite_)
+            if clean_up:
+                customize_fn_overwrite_ = {}
+            cookies_.update(customize_fn_overwrite_) # 更新cookie
+            visible = (not clean_up) and (basic_fn_title != "")
             if basic_btn_dropdown_ in customize_btns:
-                ret.update({customize_btns[basic_btn_dropdown_]: gr.update(visible=True, value=basic_fn_title)})
+                # 是自定义按钮,不是预定义按钮
+                ret.update({customize_btns[basic_btn_dropdown_]: gr.update(visible=visible, value=basic_fn_title)})
             else:
-                ret.update({predefined_btns[basic_btn_dropdown_]: gr.update(visible=True, value=basic_fn_title)})
+                # 是预定义按钮
+                ret.update({predefined_btns[basic_btn_dropdown_]: gr.update(visible=visible, value=basic_fn_title)})
             ret.update({cookies: cookies_})
             try: persistent_cookie_ = from_cookie_str(persistent_cookie_) # persistent cookie to dict
             except: persistent_cookie_ = {}
             persistent_cookie_["custom_bnt"] = customize_fn_overwrite_ # dict update new value
             persistent_cookie_ = to_cookie_str(persistent_cookie_) # persistent cookie to dict
-            ret.update({persistent_cookie: persistent_cookie_}) # write persistent cookie
+            ret.update({py_pickle_cookie: persistent_cookie_}) # write persistent cookie
             return ret
 
-        def reflesh_btn(persistent_cookie_, cookies_):
+        # update btn
+        h = basic_fn_confirm.click(assign_btn, [py_pickle_cookie, cookies, basic_btn_dropdown, basic_fn_title, basic_fn_prefix, basic_fn_suffix],
+                                   [py_pickle_cookie, cookies, *customize_btns.values(), *predefined_btns.values()])
+        h.then(None, [py_pickle_cookie], None, _js="""(py_pickle_cookie)=>{setCookie("py_pickle_cookie", py_pickle_cookie, 365);}""")
+        # clean up btn
+        h2 = basic_fn_clean.click(assign_btn, [py_pickle_cookie, cookies, basic_btn_dropdown, basic_fn_title, basic_fn_prefix, basic_fn_suffix, gr.State(True)],
+                                   [py_pickle_cookie, cookies, *customize_btns.values(), *predefined_btns.values()])
+        h2.then(None, [py_pickle_cookie], None, _js="""(py_pickle_cookie)=>{setCookie("py_pickle_cookie", py_pickle_cookie, 365);}""")
+
+        def persistent_cookie_reload(persistent_cookie_, cookies_):
             ret = {}
             for k in customize_btns:
                 ret.update({customize_btns[k]: gr.update(visible=False, value="")})
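The `assign_btn` handler above serializes the customized buttons with `to_cookie_str` before the `.then(..., _js=...)` step writes the result into a browser cookie named `py_pickle_cookie`. A minimal sketch of such a string codec (a hypothetical stand-in for the project's `to_cookie_str`/`from_cookie_str` in `themes.theme`, which may encode differently):

```python
import base64
import pickle

def to_cookie_str(d: dict) -> str:
    """Serialize a dict into a cookie-safe ASCII string (pickle -> base64)."""
    return base64.b64encode(pickle.dumps(d)).decode("ascii")

def from_cookie_str(s: str) -> dict:
    """Inverse of to_cookie_str; raises if the string is malformed."""
    return pickle.loads(base64.b64decode(s.encode("ascii")))

# Round-trip the same shape assign_btn stores under "custom_bnt"
state = {"custom_bnt": {"按钮1": {"Title": "润色", "Prefix": "请润色:", "Suffix": ""}}}
cookie = to_cookie_str(state)
restored = from_cookie_str(cookie)
```

Unpickling browser-supplied data is only safe if the cookie cannot be tampered with; a hardened implementation would sign the payload or use JSON instead.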
@@ -236,25 +253,16 @@ def main():
             else: ret.update({predefined_btns[k]: gr.update(visible=True, value=v['Title'])})
             return ret
 
-        basic_fn_load.click(reflesh_btn, [persistent_cookie, cookies], [cookies, *customize_btns.values(), *predefined_btns.values()])
-        h = basic_fn_confirm.click(assign_btn, [persistent_cookie, cookies, basic_btn_dropdown, basic_fn_title, basic_fn_prefix, basic_fn_suffix],
-                                   [persistent_cookie, cookies, *customize_btns.values(), *predefined_btns.values()])
-        # save persistent cookie
-        h.then(None, [persistent_cookie], None, _js="""(persistent_cookie)=>{setCookie("persistent_cookie", persistent_cookie, 5);}""")
 
         # 功能区显示开关与功能区的互动
         def fn_area_visibility(a):
             ret = {}
-            ret.update({area_basic_fn: gr.update(visible=("基础功能区" in a))})
-            ret.update({area_crazy_fn: gr.update(visible=("函数插件区" in a))})
             ret.update({area_input_primary: gr.update(visible=("浮动输入区" not in a))})
             ret.update({area_input_secondary: gr.update(visible=("浮动输入区" in a))})
-            ret.update({clearBtn: gr.update(visible=("输入清除键" in a))})
-            ret.update({clearBtn2: gr.update(visible=("输入清除键" in a))})
             ret.update({plugin_advanced_arg: gr.update(visible=("插件参数区" in a))})
             if "浮动输入区" in a: ret.update({txt: gr.update(value="")})
             return ret
-        checkboxes.select(fn_area_visibility, [checkboxes], [area_basic_fn, area_crazy_fn, area_input_primary, area_input_secondary, txt, txt2, clearBtn, clearBtn2, plugin_advanced_arg] )
+        checkboxes.select(fn_area_visibility, [checkboxes], [area_basic_fn, area_crazy_fn, area_input_primary, area_input_secondary, txt, txt2, plugin_advanced_arg] )
+        checkboxes.select(None, [checkboxes], None, _js=js_code_show_or_hide)
 
         # 功能区显示开关与功能区的互动
         def fn_area_visibility_2(a):
@@ -262,6 +270,7 @@ def main():
             ret.update({area_customize: gr.update(visible=("自定义菜单" in a))})
             return ret
         checkboxes_2.select(fn_area_visibility_2, [checkboxes_2], [area_customize] )
+        checkboxes_2.select(None, [checkboxes_2], None, _js=js_code_show_or_hide_group2)
 
         # 整理反复出现的控件句柄组合
         input_combo = [cookies, max_length_sl, md_dropdown, txt, txt2, top_p, temperature, chatbot, history, system_prompt, plugin_advanced_arg]
@@ -272,15 +281,17 @@ def main():
         cancel_handles.append(txt2.submit(**predict_args))
         cancel_handles.append(submitBtn.click(**predict_args))
         cancel_handles.append(submitBtn2.click(**predict_args))
-        resetBtn.click(lambda: ([], [], "已重置"), None, [chatbot, history, status])
-        resetBtn2.click(lambda: ([], [], "已重置"), None, [chatbot, history, status])
-        clearBtn.click(lambda: ("",""), None, [txt, txt2])
-        clearBtn2.click(lambda: ("",""), None, [txt, txt2])
+        resetBtn.click(None, None, [chatbot, history, status], _js=js_code_reset)   # 先在前端快速清除chatbot&status
+        resetBtn2.click(None, None, [chatbot, history, status], _js=js_code_reset)  # 先在前端快速清除chatbot&status
+        resetBtn.click(lambda: ([], [], "已重置"), None, [chatbot, history, status]) # 再在后端清除history
+        resetBtn2.click(lambda: ([], [], "已重置"), None, [chatbot, history, status]) # 再在后端清除history
+        clearBtn.click(None, None, [txt, txt2], _js=js_code_clear)
+        clearBtn2.click(None, None, [txt, txt2], _js=js_code_clear)
         if AUTO_CLEAR_TXT:
-            submitBtn.click(lambda: ("",""), None, [txt, txt2])
-            submitBtn2.click(lambda: ("",""), None, [txt, txt2])
-            txt.submit(lambda: ("",""), None, [txt, txt2])
-            txt2.submit(lambda: ("",""), None, [txt, txt2])
+            submitBtn.click(None, None, [txt, txt2], _js=js_code_clear)
+            submitBtn2.click(None, None, [txt, txt2], _js=js_code_clear)
+            txt.submit(None, None, [txt, txt2], _js=js_code_clear)
+            txt2.submit(None, None, [txt, txt2], _js=js_code_clear)
         # 基础功能区的回调函数注册
         for k in functional:
             if ("Visible" in functional[k]) and (not functional[k]["Visible"]): continue
@@ -360,10 +371,10 @@ def main():
         audio_mic.stream(deal_audio, inputs=[audio_mic, cookies])
 
 
-    demo.load(init_cookie, inputs=[cookies, chatbot], outputs=[cookies])
-    darkmode_js = js_code_for_darkmode_init
-    demo.load(None, inputs=None, outputs=[persistent_cookie], _js=js_code_for_persistent_cookie_init)
-    demo.load(None, inputs=[dark_mode], outputs=None, _js=darkmode_js) # 配置暗色主题或亮色主题
+    demo.load(init_cookie, inputs=[cookies], outputs=[cookies])
+    demo.load(persistent_cookie_reload, inputs = [py_pickle_cookie, cookies],
+              outputs = [py_pickle_cookie, cookies, *customize_btns.values(), *predefined_btns.values()], _js=js_code_for_persistent_cookie_init)
+    demo.load(None, inputs=[dark_mode], outputs=None, _js="""(dark_mode)=>{apply_cookie_for_checkbox(dark_mode);}""") # 配置暗色主题或亮色主题
     demo.load(None, inputs=[gr.Textbox(LAYOUT, visible=False)], outputs=None, _js='(LAYOUT)=>{GptAcademicJavaScriptInit(LAYOUT);}')
 
     # gradio的inbrowser触发不太稳定,回滚代码到原始的浏览器打开函数
@@ -31,6 +31,9 @@ from .bridge_qianfan import predict as qianfan_ui
 from .bridge_google_gemini import predict as genai_ui
 from .bridge_google_gemini import predict_no_ui_long_connection as genai_noui
 
+from .bridge_zhipu import predict_no_ui_long_connection as zhipu_noui
+from .bridge_zhipu import predict as zhipu_ui
+
 colors = ['#FF00FF', '#00FFFF', '#FF0000', '#990099', '#009999', '#990044']
 
 class LazyloadTiktoken(object):
@@ -215,16 +218,25 @@ model_info = {
         "token_cnt": get_token_num_gpt4,
     },
 
-    # api_2d (此后不需要在此处添加api2d的接口了,因为下面的代码会自动添加)
-    "api2d-gpt-3.5-turbo": {
-        "fn_with_ui": chatgpt_ui,
-        "fn_without_ui": chatgpt_noui,
-        "endpoint": api2d_endpoint,
-        "max_token": 4096,
+    # 智谱AI
+    "glm-4": {
+        "fn_with_ui": zhipu_ui,
+        "fn_without_ui": zhipu_noui,
+        "endpoint": None,
+        "max_token": 10124 * 8,
+        "tokenizer": tokenizer_gpt35,
+        "token_cnt": get_token_num_gpt35,
+    },
+    "glm-3-turbo": {
+        "fn_with_ui": zhipu_ui,
+        "fn_without_ui": zhipu_noui,
+        "endpoint": None,
+        "max_token": 10124 * 4,
         "tokenizer": tokenizer_gpt35,
         "token_cnt": get_token_num_gpt35,
     },
 
+    # api_2d (此后不需要在此处添加api2d的接口了,因为下面的代码会自动添加)
     "api2d-gpt-4": {
         "fn_with_ui": chatgpt_ui,
         "fn_without_ui": chatgpt_noui,
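The `model_info` entries registered above are what keeps the rest of the project model-agnostic: callers never import a bridge directly, they look up the handler functions and token budget by model name. A simplified, self-contained sketch of that dispatch pattern (the handler functions here are placeholders, not the real bridge implementations):

```python
def zhipu_noui(inputs, max_token):
    # placeholder for the real bridge_zhipu.predict_no_ui_long_connection
    return f"[glm] {inputs[:max_token]}"

def chatgpt_noui(inputs, max_token):
    # placeholder for the real bridge_chatgpt handler
    return f"[gpt] {inputs[:max_token]}"

model_info = {
    "glm-4":       {"fn_without_ui": zhipu_noui,   "max_token": 10124 * 8},
    "glm-3-turbo": {"fn_without_ui": zhipu_noui,   "max_token": 10124 * 4},
    "api2d-gpt-4": {"fn_without_ui": chatgpt_noui, "max_token": 8192},
}
# "zhipuai" remains as a backward-compatible alias for glm-4,
# mirroring the alias comment added in the hunk below
model_info["zhipuai"] = model_info["glm-4"]

def predict_no_ui(model: str, inputs: str) -> str:
    info = model_info[model]  # KeyError here means an unregistered model name
    return info["fn_without_ui"](inputs, info["max_token"])
```

Adding a new model then only requires one new dict entry; no call sites change.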
@@ -580,19 +592,17 @@ if "llama2" in AVAIL_LLM_MODELS: # llama2
         })
     except:
         print(trimmed_format_exc())
-if "zhipuai" in AVAIL_LLM_MODELS: # zhipuai
+if "zhipuai" in AVAIL_LLM_MODELS: # zhipuai 是glm-4的别名,向后兼容配置
     try:
-        from .bridge_zhipu import predict_no_ui_long_connection as zhipu_noui
-        from .bridge_zhipu import predict as zhipu_ui
         model_info.update({
             "zhipuai": {
                 "fn_with_ui": zhipu_ui,
                 "fn_without_ui": zhipu_noui,
                 "endpoint": None,
-                "max_token": 4096,
+                "max_token": 10124 * 8,
                 "tokenizer": tokenizer_gpt35,
                 "token_cnt": get_token_num_gpt35,
-            }
+            },
         })
     except:
         print(trimmed_format_exc())
@@ -113,6 +113,8 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
                 error_msg = get_full_error(chunk, stream_response).decode()
                 if "reduce the length" in error_msg:
                     raise ConnectionAbortedError("OpenAI拒绝了请求:" + error_msg)
+                elif """type":"upstream_error","param":"307""" in error_msg:
+                    raise ConnectionAbortedError("正常结束,但显示Token不足,导致输出不完整,请削减单次输入的文本量。")
                 else:
                     raise RuntimeError("OpenAI拒绝了请求:" + error_msg)
         if ('data: [DONE]' in chunk_decoded): break # api2d 正常完成
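The new `elif` above captures a non-standard token-overflow marker that arrives inside an otherwise well-formed error payload. The classification logic can be isolated as a small pure function (the function name is illustrative; the marker strings are the ones used in the diff, written here as an ordinary single-quoted literal):

```python
def classify_stream_error(error_msg: str) -> str:
    """Map an upstream error payload to the exception class the bridge raises."""
    if "reduce the length" in error_msg:
        return "ConnectionAbortedError"  # standard token-overflow message
    elif 'type":"upstream_error","param":"307' in error_msg:
        return "ConnectionAbortedError"  # non-standard overflow: stream ended but output truncated
    else:
        return "RuntimeError"            # anything else is treated as a refusal

verdict = classify_stream_error('{"type":"upstream_error","param":"307"}')
```

Keeping the string matching in one place makes it easy to unit-test new upstream error shapes without a live connection.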
@@ -57,6 +57,10 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
 
     if "vision" in llm_kwargs["llm_model"]:
         have_recent_file, image_paths = have_any_recent_upload_image_files(chatbot)
+        if not have_recent_file:
+            chatbot.append((inputs, "没有检测到任何近期上传的图像文件,请上传jpg格式的图片,此外,请注意拓展名需要小写"))
+            yield from update_ui(chatbot=chatbot, history=history, msg="等待图片") # 刷新界面
+            return
         def make_media_input(inputs, image_paths):
             for image_path in image_paths:
                 inputs = inputs + f'<br/><br/><div align="center"><img src="file={os.path.abspath(image_path)}"></div>'
@@ -1,15 +1,21 @@
 
 import time
+import os
 from toolbox import update_ui, get_conf, update_ui_lastest_msg
-from toolbox import check_packages, report_exception
+from toolbox import check_packages, report_exception, have_any_recent_upload_image_files
 
 model_name = '智谱AI大模型'
+zhipuai_default_model = 'glm-4'
 
 def validate_key():
     ZHIPUAI_API_KEY = get_conf("ZHIPUAI_API_KEY")
     if ZHIPUAI_API_KEY == '': return False
     return True
 
+def make_media_input(inputs, image_paths):
+    for image_path in image_paths:
+        inputs = inputs + f'<br/><br/><div align="center"><img src="file={os.path.abspath(image_path)}"></div>'
+    return inputs
+
 def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False):
     """
     ⭐多线程方法
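The `make_media_input` helper added above simply appends each uploaded image to the prompt as centered HTML so the chatbot pane renders it inline. A self-contained copy for illustration (the image path is a made-up example):

```python
import os

def make_media_input(inputs: str, image_paths: list) -> str:
    """Append each image as an <img> tag so the Gradio chatbot renders it inline."""
    for image_path in image_paths:
        inputs = inputs + f'<br/><br/><div align="center"><img src="file={os.path.abspath(image_path)}"></div>'
    return inputs

out = make_media_input("描述这张图片", ["demo.jpg"])
```

Note the `file=` prefix: Gradio serves local files through that route, which is why the absolute path is embedded rather than a relative one.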
@@ -18,32 +24,38 @@
     watch_dog_patience = 5
     response = ""
 
+    if llm_kwargs["llm_model"] == "zhipuai":
+        llm_kwargs["llm_model"] = zhipuai_default_model
+
     if validate_key() is False:
         raise RuntimeError('请配置ZHIPUAI_API_KEY')
 
-    from .com_zhipuapi import ZhipuRequestInstance
-    sri = ZhipuRequestInstance()
-    for response in sri.generate(inputs, llm_kwargs, history, sys_prompt):
+    # 开始接收回复
+    from .com_zhipuglm import ZhipuChatInit
+    zhipu_bro_init = ZhipuChatInit()
+    for chunk, response in zhipu_bro_init.generate_chat(inputs, llm_kwargs, history, sys_prompt):
         if len(observe_window) >= 1:
             observe_window[0] = response
         if len(observe_window) >= 2:
-            if (time.time()-observe_window[1]) > watch_dog_patience: raise RuntimeError("程序终止。")
+            if (time.time() - observe_window[1]) > watch_dog_patience:
+                raise RuntimeError("程序终止。")
     return response
 
-def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None):
+
+def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream=True, additional_fn=None):
     """
     ⭐单线程方法
     函数的说明请见 request_llms/bridge_all.py
     """
-    chatbot.append((inputs, ""))
+    chatbot.append([inputs, ""])
     yield from update_ui(chatbot=chatbot, history=history)
 
     # 尝试导入依赖,如果缺少依赖,则给出安装建议
     try:
         check_packages(["zhipuai"])
     except:
-        yield from update_ui_lastest_msg(f"导入软件依赖失败。使用该模型需要额外依赖,安装方法```pip install zhipuai==1.0.7```。",
+        yield from update_ui_lastest_msg(f"导入软件依赖失败。使用该模型需要额外依赖,安装方法```pip install --upgrade zhipuai```。",
                                          chatbot=chatbot, history=history, delay=0)
         return
 
     if validate_key() is False:
@ -53,16 +65,29 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
|
|||||||
if additional_fn is not None:
|
if additional_fn is not None:
|
||||||
from core_functional import handle_core_functionality
|
from core_functional import handle_core_functionality
|
||||||
inputs, history = handle_core_functionality(additional_fn, inputs, history, chatbot)
|
inputs, history = handle_core_functionality(additional_fn, inputs, history, chatbot)
|
||||||
|
chatbot[-1] = [inputs, ""]
|
||||||
# 开始接收回复
|
|
||||||
from .com_zhipuapi import ZhipuRequestInstance
|
|
||||||
sri = ZhipuRequestInstance()
|
|
||||||
for response in sri.generate(inputs, llm_kwargs, history, system_prompt):
|
|
||||||
chatbot[-1] = (inputs, response)
|
|
||||||
yield from update_ui(chatbot=chatbot, history=history)
|
yield from update_ui(chatbot=chatbot, history=history)
|
||||||
|
|
||||||
# 总结输出
|
if llm_kwargs["llm_model"] == "zhipuai":
|
||||||
if response == f"[Local Message] 等待{model_name}响应中 ...":
|
llm_kwargs["llm_model"] = zhipuai_default_model
|
||||||
response = f"[Local Message] {model_name}响应异常 ..."
|
|
||||||
|
if llm_kwargs["llm_model"] in ["glm-4v"]:
|
||||||
|
have_recent_file, image_paths = have_any_recent_upload_image_files(chatbot)
|
||||||
|
if not have_recent_file:
|
||||||
|
chatbot.append((inputs, "没有检测到任何近期上传的图像文件,请上传jpg格式的图片,此外,请注意拓展名需要小写"))
|
||||||
|
yield from update_ui(chatbot=chatbot, history=history, msg="等待图片") # 刷新界面
|
||||||
|
return
|
||||||
|
if have_recent_file:
|
||||||
|
inputs = make_media_input(inputs, image_paths)
|
||||||
|
chatbot[-1] = [inputs, ""]
|
||||||
|
yield from update_ui(chatbot=chatbot, history=history)
|
||||||
|
|
||||||
|
|
||||||
|
# 开始接收回复
|
||||||
|
from .com_zhipuglm import ZhipuChatInit
|
||||||
|
zhipu_bro_init = ZhipuChatInit()
|
||||||
|
for chunk, response in zhipu_bro_init.generate_chat(inputs, llm_kwargs, history, system_prompt):
|
||||||
|
chatbot[-1] = [inputs, response]
|
||||||
|
yield from update_ui(chatbot=chatbot, history=history)
|
||||||
history.extend([inputs, response])
|
history.extend([inputs, response])
|
||||||
yield from update_ui(chatbot=chatbot, history=history)
|
yield from update_ui(chatbot=chatbot, history=history)
|
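The new bridge consumes a `(chunk, accumulated_response)` pair on every iteration, so callers can either render the delta or overwrite `chatbot[-1]` with the running total. A minimal sketch of that streaming contract, using a stand-in generator rather than the real zhipuai client:

```python
def stream_pairs(chunks):
    """Yield (delta, accumulated) pairs, mirroring generate_chat's contract."""
    accumulated = ""
    for chunk in chunks:
        accumulated += chunk
        yield chunk, accumulated


if __name__ == "__main__":
    for delta, so_far in stream_pairs(["Hel", "lo", "!"]):
        print(repr(delta), "->", repr(so_far))
```

This is why `observe_window[0] = response` in the no-UI path always holds the full reply so far, not just the latest token.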
request_llms/com_zhipuapi.py (deleted)
@@ -1,70 +0,0 @@
-from toolbox import get_conf
-import threading
-import logging
-
-timeout_bot_msg = '[Local Message] Request timeout. Network error.'
-
-class ZhipuRequestInstance():
-    def __init__(self):
-
-        self.time_to_yield_event = threading.Event()
-        self.time_to_exit_event = threading.Event()
-
-        self.result_buf = ""
-
-    def generate(self, inputs, llm_kwargs, history, system_prompt):
-        # import _thread as thread
-        import zhipuai
-        ZHIPUAI_API_KEY, ZHIPUAI_MODEL = get_conf("ZHIPUAI_API_KEY", "ZHIPUAI_MODEL")
-        zhipuai.api_key = ZHIPUAI_API_KEY
-        self.result_buf = ""
-        response = zhipuai.model_api.sse_invoke(
-            model=ZHIPUAI_MODEL,
-            prompt=generate_message_payload(inputs, llm_kwargs, history, system_prompt),
-            top_p=llm_kwargs['top_p']*0.7,  # 智谱的API抽风,手动*0.7给做个线性变换
-            temperature=llm_kwargs['temperature']*0.95,  # 智谱的API抽风,手动*0.95给做个线性变换
-        )
-        for event in response.events():
-            if event.event == "add":
-                # if self.result_buf == "" and event.data.startswith(" "):
-                #     event.data = event.data.lstrip(" ")  # 每次智谱为啥都要带个空格开头呢?
-                self.result_buf += event.data
-                yield self.result_buf
-            elif event.event == "error" or event.event == "interrupted":
-                raise RuntimeError("Unknown error:" + event.data)
-            elif event.event == "finish":
-                yield self.result_buf
-                break
-            else:
-                raise RuntimeError("Unknown error:" + str(event))
-        if self.result_buf == "":
-            yield "智谱没有返回任何数据, 请检查ZHIPUAI_API_KEY和ZHIPUAI_MODEL是否填写正确."
-        logging.info(f'[raw_input] {inputs}')
-        logging.info(f'[response] {self.result_buf}')
-        return self.result_buf
-
-
-def generate_message_payload(inputs, llm_kwargs, history, system_prompt):
-    conversation_cnt = len(history) // 2
-    messages = [{"role": "user", "content": system_prompt}, {"role": "assistant", "content": "Certainly!"}]
-    if conversation_cnt:
-        for index in range(0, 2*conversation_cnt, 2):
-            what_i_have_asked = {}
-            what_i_have_asked["role"] = "user"
-            what_i_have_asked["content"] = history[index]
-            what_gpt_answer = {}
-            what_gpt_answer["role"] = "assistant"
-            what_gpt_answer["content"] = history[index+1]
-            if what_i_have_asked["content"] != "":
-                if what_gpt_answer["content"] == "":
-                    continue
-                if what_gpt_answer["content"] == timeout_bot_msg:
-                    continue
-                messages.append(what_i_have_asked)
-                messages.append(what_gpt_answer)
-            else:
-                messages[-1]['content'] = what_gpt_answer['content']
-    what_i_ask_now = {}
-    what_i_ask_now["role"] = "user"
-    what_i_ask_now["content"] = inputs
-    messages.append(what_i_ask_now)
-    return messages
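The payload logic the deleted `generate_message_payload` implemented survives in the new SDK path: a flat `[user, assistant, user, assistant, ...]` history list is paired into role-tagged messages, skipping turns whose answer is empty or a timeout marker. A condensed, runnable sketch of that pairing (it omits the original's empty-question merge branch):

```python
TIMEOUT_MSG = '[Local Message] Request timeout. Network error.'

def build_messages(inputs, history, system_prompt):
    """Pair a flat history list into role-tagged chat messages."""
    messages = [{"role": "user", "content": system_prompt},
                {"role": "assistant", "content": "Certainly!"}]
    for i in range(0, len(history) // 2 * 2, 2):
        question, answer = history[i], history[i + 1]
        # drop turns with no question, no answer, or a timed-out answer
        if question != "" and answer not in ("", TIMEOUT_MSG):
            messages.append({"role": "user", "content": question})
            messages.append({"role": "assistant", "content": answer})
    messages.append({"role": "user", "content": inputs})
    return messages
```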
request_llms/com_zhipuglm.py (new file, 84 lines)
@@ -0,0 +1,84 @@
+# encoding: utf-8
+# @Time   : 2024/1/22
+# @Author : Kilig947 & binary husky
+# @Descr  : 兼容最新的智谱Ai
+from toolbox import get_conf
+from zhipuai import ZhipuAI
+from toolbox import get_conf, encode_image, get_pictures_list
+import logging, os
+
+
+def input_encode_handler(inputs, llm_kwargs):
+    if llm_kwargs["most_recent_uploaded"].get("path"):
+        image_paths = get_pictures_list(llm_kwargs["most_recent_uploaded"]["path"])
+    md_encode = []
+    for md_path in image_paths:
+        type_ = os.path.splitext(md_path)[1].replace(".", "")
+        type_ = "jpeg" if type_ == "jpg" else type_
+        md_encode.append({"data": encode_image(md_path), "type": type_})
+    return inputs, md_encode
+
+
+class ZhipuChatInit:
+
+    def __init__(self):
+        ZHIPUAI_API_KEY, ZHIPUAI_MODEL = get_conf("ZHIPUAI_API_KEY", "ZHIPUAI_MODEL")
+        if len(ZHIPUAI_MODEL) > 0:
+            logging.error('ZHIPUAI_MODEL 配置项选项已经弃用,请在LLM_MODEL中配置')
+        self.zhipu_bro = ZhipuAI(api_key=ZHIPUAI_API_KEY)
+        self.model = ''
+
+    def __conversation_user(self, user_input: str, llm_kwargs):
+        if self.model not in ["glm-4v"]:
+            return {"role": "user", "content": user_input}
+        else:
+            input_, encode_img = input_encode_handler(user_input, llm_kwargs=llm_kwargs)
+            what_i_have_asked = {"role": "user", "content": []}
+            what_i_have_asked['content'].append({"type": 'text', "text": user_input})
+            if encode_img:
+                img_d = {"type": "image_url",
+                         "image_url": {'url': encode_img}}
+                what_i_have_asked['content'].append(img_d)
+            return what_i_have_asked
+
+    def __conversation_history(self, history, llm_kwargs):
+        messages = []
+        conversation_cnt = len(history) // 2
+        if conversation_cnt:
+            for index in range(0, 2 * conversation_cnt, 2):
+                what_i_have_asked = self.__conversation_user(history[index], llm_kwargs)
+                what_gpt_answer = {
+                    "role": "assistant",
+                    "content": history[index + 1]
+                }
+                messages.append(what_i_have_asked)
+                messages.append(what_gpt_answer)
+        return messages
+
+    def __conversation_message_payload(self, inputs, llm_kwargs, history, system_prompt):
+        messages = []
+        if system_prompt:
+            messages.append({"role": "system", "content": system_prompt})
+        self.model = llm_kwargs['llm_model']
+        messages.extend(self.__conversation_history(history, llm_kwargs))  # 处理 history
+        messages.append(self.__conversation_user(inputs, llm_kwargs))  # 处理用户对话
+        response = self.zhipu_bro.chat.completions.create(
+            model=self.model, messages=messages, stream=True,
+            temperature=llm_kwargs.get('temperature', 0.95) * 0.95,  # 只能传默认的 temperature 和 top_p
+            top_p=llm_kwargs.get('top_p', 0.7) * 0.7,
+            max_tokens=llm_kwargs.get('max_tokens', 1024 * 4),  # 最大输出模型的一半
+        )
+        return response
+
+    def generate_chat(self, inputs, llm_kwargs, history, system_prompt):
+        self.model = llm_kwargs['llm_model']
+        response = self.__conversation_message_payload(inputs, llm_kwargs, history, system_prompt)
+        bro_results = ''
+        for chunk in response:
+            bro_results += chunk.choices[0].delta.content
+            yield chunk.choices[0].delta.content, bro_results
+
+
+if __name__ == '__main__':
+    zhipu = ZhipuChatInit()
+    zhipu.generate_chat('你好', {'llm_model': 'glm-4'}, [], '你是WPSAi')
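`__conversation_user` above switches message shape by model: a plain string for text-only models, and a content list mixing `{"type": "text"}` and `{"type": "image_url"}` parts for glm-4v. A self-contained sketch of that shape; `encode_image` here is a local stand-in for `toolbox.encode_image` (base64 of the file bytes), not the real helper:

```python
import base64

def encode_image(path):
    """Stand-in for toolbox.encode_image: base64-encode an image file."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")

def user_message(text, encoded_images=None):
    """Build a user message; multimodal content list only when images exist."""
    if not encoded_images:
        return {"role": "user", "content": text}
    content = [{"type": "text", "text": text}]
    for img in encoded_images:
        content.append({"type": "image_url", "image_url": {"url": img}})
    return {"role": "user", "content": content}
```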
requirements.txt
@@ -1,10 +1,10 @@
-https://public.gpt-academic.top/publish/gradio-3.32.7-py3-none-any.whl
+https://public.gpt-academic.top/publish/gradio-3.32.8-py3-none-any.whl
 gradio-client==0.8
 pypdf2==2.12.1
-zhipuai<2
+zhipuai>=2
 tiktoken>=0.3.3
 requests[socks]
-pydantic==1.10.11
+pydantic==2.5.2
 protobuf==3.18
 transformers>=4.27.1
 scipdf_parser>=0.52
@@ -20,10 +20,10 @@ if __name__ == "__main__":
     # plugin_test(plugin='crazy_functions.函数动态生成->函数动态生成', main_input='交换图像的蓝色通道和红色通道', advanced_arg={"file_path_arg": "./build/ants.jpg"})

-    # plugin_test(plugin='crazy_functions.Latex输出PDF结果->Latex翻译中文并重新编译PDF', main_input="2307.07522")
+    # plugin_test(plugin='crazy_functions.Latex输出PDF->Latex翻译中文并重新编译PDF', main_input="2307.07522")

     plugin_test(
-        plugin="crazy_functions.Latex输出PDF结果->Latex翻译中文并重新编译PDF",
+        plugin="crazy_functions.Latex输出PDF->Latex翻译中文并重新编译PDF",
         main_input="G:/SEAFILE_LOCAL/50503047/我的资料库/学位/paperlatex/aaai/Fu_8368_with_appendix",
     )

@@ -66,7 +66,7 @@ if __name__ == "__main__":
     # plugin_test(plugin='crazy_functions.知识库文件注入->读取知识库作答', main_input="远程云服务器部署?")

-    # plugin_test(plugin='crazy_functions.Latex输出PDF结果->Latex翻译中文并重新编译PDF', main_input="2210.03629")
+    # plugin_test(plugin='crazy_functions.Latex输出PDF->Latex翻译中文并重新编译PDF', main_input="2210.03629")

     # advanced_arg = {"advanced_arg":"--llm_to_learn=gpt-3.5-turbo --prompt_prefix='根据下面的服装类型提示,想象一个穿着者,对这个人外貌、身处的环境、内心世界、人设进行描写。要求:100字以内,用第二人称。' --system_prompt=''" }
     # plugin_test(plugin='crazy_functions.chatglm微调工具->微调数据集生成', main_input='build/dev.json', advanced_arg=advanced_arg)
base64.ts (vendored js-base64 copy deleted; replaced by a one-line pointer)
@@ -1,296 +1 @@
-/**
- *  base64.ts
- *
- *  Licensed under the BSD 3-Clause License.
- *    http://opensource.org/licenses/BSD-3-Clause
- *
- *  References:
- *    http://en.wikipedia.org/wiki/Base64
- *
- * @author Dan Kogai (https://github.com/dankogai)
- */
-const version = '3.7.2';
-/**
- * @deprecated use lowercase `version`.
- */
-const VERSION = version;
-const _hasatob = typeof atob === 'function';
-const _hasbtoa = typeof btoa === 'function';
-const _hasBuffer = typeof Buffer === 'function';
-const _TD = typeof TextDecoder === 'function' ? new TextDecoder() : undefined;
-const _TE = typeof TextEncoder === 'function' ? new TextEncoder() : undefined;
-const b64ch = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=';
-const b64chs = Array.prototype.slice.call(b64ch);
-const b64tab = ((a) => {
-    let tab = {};
-    a.forEach((c, i) => tab[c] = i);
-    return tab;
-})(b64chs);
-const b64re = /^(?:[A-Za-z\d+\/]{4})*?(?:[A-Za-z\d+\/]{2}(?:==)?|[A-Za-z\d+\/]{3}=?)?$/;
-const _fromCC = String.fromCharCode.bind(String);
-const _U8Afrom = typeof Uint8Array.from === 'function'
-    ? Uint8Array.from.bind(Uint8Array)
-    : (it, fn = (x) => x) => new Uint8Array(Array.prototype.slice.call(it, 0).map(fn));
-const _mkUriSafe = (src) => src
-    .replace(/=/g, '').replace(/[+\/]/g, (m0) => m0 == '+' ? '-' : '_');
-const _tidyB64 = (s) => s.replace(/[^A-Za-z0-9\+\/]/g, '');
-/**
- * polyfill version of `btoa`
- */
-const btoaPolyfill = (bin) => {
-    // console.log('polyfilled');
-    let u32, c0, c1, c2, asc = '';
-    const pad = bin.length % 3;
-    for (let i = 0; i < bin.length;) {
-        if ((c0 = bin.charCodeAt(i++)) > 255 ||
-            (c1 = bin.charCodeAt(i++)) > 255 ||
-            (c2 = bin.charCodeAt(i++)) > 255)
-            throw new TypeError('invalid character found');
-        u32 = (c0 << 16) | (c1 << 8) | c2;
-        asc += b64chs[u32 >> 18 & 63]
-            + b64chs[u32 >> 12 & 63]
-            + b64chs[u32 >> 6 & 63]
-            + b64chs[u32 & 63];
-    }
-    return pad ? asc.slice(0, pad - 3) + "===".substring(pad) : asc;
-};
-/**
- * does what `window.btoa` of web browsers do.
- * @param {String} bin binary string
- * @returns {string} Base64-encoded string
- */
-const _btoa = _hasbtoa ? (bin) => btoa(bin)
-    : _hasBuffer ? (bin) => Buffer.from(bin, 'binary').toString('base64')
-        : btoaPolyfill;
-const _fromUint8Array = _hasBuffer
-    ? (u8a) => Buffer.from(u8a).toString('base64')
-    : (u8a) => {
-        // cf. https://stackoverflow.com/questions/12710001/how-to-convert-uint8-array-to-base64-encoded-string/12713326#12713326
-        const maxargs = 0x1000;
-        let strs = [];
-        for (let i = 0, l = u8a.length; i < l; i += maxargs) {
-            strs.push(_fromCC.apply(null, u8a.subarray(i, i + maxargs)));
-        }
-        return _btoa(strs.join(''));
-    };
-/**
- * converts a Uint8Array to a Base64 string.
- * @param {boolean} [urlsafe] URL-and-filename-safe a la RFC4648 §5
- * @returns {string} Base64 string
- */
-const fromUint8Array = (u8a, urlsafe = false) => urlsafe ? _mkUriSafe(_fromUint8Array(u8a)) : _fromUint8Array(u8a);
-// This trick is found broken https://github.com/dankogai/js-base64/issues/130
-// const utob = (src: string) => unescape(encodeURIComponent(src));
-// reverting good old fationed regexp
-const cb_utob = (c) => {
-    if (c.length < 2) {
-        var cc = c.charCodeAt(0);
-        return cc < 0x80 ? c
-            : cc < 0x800 ? (_fromCC(0xc0 | (cc >>> 6))
-                + _fromCC(0x80 | (cc & 0x3f)))
-                : (_fromCC(0xe0 | ((cc >>> 12) & 0x0f))
-                    + _fromCC(0x80 | ((cc >>> 6) & 0x3f))
-                    + _fromCC(0x80 | (cc & 0x3f)));
-    }
-    else {
-        var cc = 0x10000
-            + (c.charCodeAt(0) - 0xD800) * 0x400
-            + (c.charCodeAt(1) - 0xDC00);
-        return (_fromCC(0xf0 | ((cc >>> 18) & 0x07))
-            + _fromCC(0x80 | ((cc >>> 12) & 0x3f))
-            + _fromCC(0x80 | ((cc >>> 6) & 0x3f))
-            + _fromCC(0x80 | (cc & 0x3f)));
-    }
-};
-const re_utob = /[\uD800-\uDBFF][\uDC00-\uDFFFF]|[^\x00-\x7F]/g;
-/**
- * @deprecated should have been internal use only.
- * @param {string} src UTF-8 string
- * @returns {string} UTF-16 string
- */
-const utob = (u) => u.replace(re_utob, cb_utob);
-//
-const _encode = _hasBuffer
-    ? (s) => Buffer.from(s, 'utf8').toString('base64')
-    : _TE
-        ? (s) => _fromUint8Array(_TE.encode(s))
-        : (s) => _btoa(utob(s));
-/**
- * converts a UTF-8-encoded string to a Base64 string.
- * @param {boolean} [urlsafe] if `true` make the result URL-safe
- * @returns {string} Base64 string
- */
-const encode = (src, urlsafe = false) => urlsafe
-    ? _mkUriSafe(_encode(src))
-    : _encode(src);
-/**
- * converts a UTF-8-encoded string to URL-safe Base64 RFC4648 §5.
- * @returns {string} Base64 string
- */
-const encodeURI = (src) => encode(src, true);
-// This trick is found broken https://github.com/dankogai/js-base64/issues/130
-// const btou = (src: string) => decodeURIComponent(escape(src));
-// reverting good old fationed regexp
-const re_btou = /[\xC0-\xDF][\x80-\xBF]|[\xE0-\xEF][\x80-\xBF]{2}|[\xF0-\xF7][\x80-\xBF]{3}/g;
-const cb_btou = (cccc) => {
-    switch (cccc.length) {
-        case 4:
-            var cp = ((0x07 & cccc.charCodeAt(0)) << 18)
-                | ((0x3f & cccc.charCodeAt(1)) << 12)
-                | ((0x3f & cccc.charCodeAt(2)) << 6)
-                | (0x3f & cccc.charCodeAt(3)), offset = cp - 0x10000;
-            return (_fromCC((offset >>> 10) + 0xD800)
-                + _fromCC((offset & 0x3FF) + 0xDC00));
-        case 3:
-            return _fromCC(((0x0f & cccc.charCodeAt(0)) << 12)
-                | ((0x3f & cccc.charCodeAt(1)) << 6)
-                | (0x3f & cccc.charCodeAt(2)));
-        default:
-            return _fromCC(((0x1f & cccc.charCodeAt(0)) << 6)
-                | (0x3f & cccc.charCodeAt(1)));
-    }
-};
-/**
- * @deprecated should have been internal use only.
- * @param {string} src UTF-16 string
- * @returns {string} UTF-8 string
- */
-const btou = (b) => b.replace(re_btou, cb_btou);
-/**
- * polyfill version of `atob`
- */
-const atobPolyfill = (asc) => {
-    // console.log('polyfilled');
-    asc = asc.replace(/\s+/g, '');
-    if (!b64re.test(asc))
-        throw new TypeError('malformed base64.');
-    asc += '=='.slice(2 - (asc.length & 3));
-    let u24, bin = '', r1, r2;
-    for (let i = 0; i < asc.length;) {
-        u24 = b64tab[asc.charAt(i++)] << 18
-            | b64tab[asc.charAt(i++)] << 12
-            | (r1 = b64tab[asc.charAt(i++)]) << 6
-            | (r2 = b64tab[asc.charAt(i++)]);
-        bin += r1 === 64 ? _fromCC(u24 >> 16 & 255)
-            : r2 === 64 ? _fromCC(u24 >> 16 & 255, u24 >> 8 & 255)
-                : _fromCC(u24 >> 16 & 255, u24 >> 8 & 255, u24 & 255);
-    }
-    return bin;
-};
-/**
- * does what `window.atob` of web browsers do.
- * @param {String} asc Base64-encoded string
- * @returns {string} binary string
- */
-const _atob = _hasatob ? (asc) => atob(_tidyB64(asc))
-    : _hasBuffer ? (asc) => Buffer.from(asc, 'base64').toString('binary')
-        : atobPolyfill;
-//
-const _toUint8Array = _hasBuffer
-    ? (a) => _U8Afrom(Buffer.from(a, 'base64'))
-    : (a) => _U8Afrom(_atob(a), c => c.charCodeAt(0));
-/**
- * converts a Base64 string to a Uint8Array.
- */
-const toUint8Array = (a) => _toUint8Array(_unURI(a));
-//
-const _decode = _hasBuffer
-    ? (a) => Buffer.from(a, 'base64').toString('utf8')
-    : _TD
-        ? (a) => _TD.decode(_toUint8Array(a))
-        : (a) => btou(_atob(a));
-const _unURI = (a) => _tidyB64(a.replace(/[-_]/g, (m0) => m0 == '-' ? '+' : '/'));
-/**
- * converts a Base64 string to a UTF-8 string.
- * @param {String} src Base64 string. Both normal and URL-safe are supported
- * @returns {string} UTF-8 string
- */
-const decode = (src) => _decode(_unURI(src));
-/**
- * check if a value is a valid Base64 string
- * @param {String} src a value to check
- */
-const isValid = (src) => {
-    if (typeof src !== 'string')
-        return false;
-    const s = src.replace(/\s+/g, '').replace(/={0,2}$/, '');
-    return !/[^\s0-9a-zA-Z\+/]/.test(s) || !/[^\s0-9a-zA-Z\-_]/.test(s);
-};
-//
-const _noEnum = (v) => {
-    return {
-        value: v, enumerable: false, writable: true, configurable: true
-    };
-};
-/**
- * extend String.prototype with relevant methods
- */
-const extendString = function () {
-    const _add = (name, body) => Object.defineProperty(String.prototype, name, _noEnum(body));
-    _add('fromBase64', function () { return decode(this); });
-    _add('toBase64', function (urlsafe) { return encode(this, urlsafe); });
-    _add('toBase64URI', function () { return encode(this, true); });
-    _add('toBase64URL', function () { return encode(this, true); });
-    _add('toUint8Array', function () { return toUint8Array(this); });
-};
-/**
- * extend Uint8Array.prototype with relevant methods
- */
-const extendUint8Array = function () {
-    const _add = (name, body) => Object.defineProperty(Uint8Array.prototype, name, _noEnum(body));
-    _add('toBase64', function (urlsafe) { return fromUint8Array(this, urlsafe); });
-    _add('toBase64URI', function () { return fromUint8Array(this, true); });
-    _add('toBase64URL', function () { return fromUint8Array(this, true); });
-};
-/**
- * extend Builtin prototypes with relevant methods
- */
-const extendBuiltins = () => {
-    extendString();
-    extendUint8Array();
-};
-const gBase64 = {
-    version: version,
-    VERSION: VERSION,
-    atob: _atob,
-    atobPolyfill: atobPolyfill,
-    btoa: _btoa,
-    btoaPolyfill: btoaPolyfill,
-    fromBase64: decode,
-    toBase64: encode,
-    encode: encode,
-    encodeURI: encodeURI,
-    encodeURL: encodeURI,
-    utob: utob,
-    btou: btou,
-    decode: decode,
-    isValid: isValid,
-    fromUint8Array: fromUint8Array,
-    toUint8Array: toUint8Array,
-    extendString: extendString,
-    extendUint8Array: extendUint8Array,
-    extendBuiltins: extendBuiltins,
-};
-// makecjs:CUT //
-export { version };
-export { VERSION };
-export { _atob as atob };
-export { atobPolyfill };
-export { _btoa as btoa };
-export { btoaPolyfill };
-export { decode as fromBase64 };
-export { encode as toBase64 };
-export { utob };
-export { encode };
-export { encodeURI };
-export { encodeURI as encodeURL };
-export { btou };
-export { decode };
-export { isValid };
-export { fromUint8Array };
-export { toUint8Array };
-export { extendString };
-export { extendUint8Array };
-export { extendBuiltins };
-// and finally,
-export { gBase64 as Base64 };
+// we have moved mermaid-related code to gradio-fix repository: binary-husky/gradio-fix@32150d0
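The deleted base64.ts bundled UTF-8-safe encode/decode plus a URL-safe variant (strip `=` padding, map `+/` to `-_`, per RFC 4648 §5). Python's stdlib covers the same ground; a small sketch mirroring the removed `encode`/`encodeURI`/`decode` trio:

```python
import base64

def encode(src: str, urlsafe: bool = False) -> str:
    """UTF-8 string -> Base64; optionally URL-safe with padding stripped."""
    raw = base64.b64encode(src.encode("utf-8")).decode("ascii")
    return raw.rstrip("=").replace("+", "-").replace("/", "_") if urlsafe else raw

def decode(b64: str) -> str:
    """Base64 (normal or URL-safe, padded or not) -> UTF-8 string."""
    normalized = b64.replace("-", "+").replace("_", "/")
    normalized += "=" * (-len(normalized) % 4)  # restore stripped padding
    return base64.b64decode(normalized).decode("utf-8")
```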
@@ -59,6 +59,7 @@
 /* Scrollbar Width */
 ::-webkit-scrollbar {
+    height: 12px;
     width: 12px;
 }
themes/common.js
@@ -234,7 +234,7 @@ let timeoutID = null;
 let lastInvocationTime = 0;
 let lastArgs = null;
 function do_something_but_not_too_frequently(min_interval, func) {
-    return function(...args) {
+    return function (...args) {
         lastArgs = args;
         const now = Date.now();
         if (!lastInvocationTime || (now - lastInvocationTime) >= min_interval) {
@@ -263,13 +263,8 @@ function chatbotContentChanged(attempt = 1, force = false) {
             gradioApp().querySelectorAll('#gpt-chatbot .message-wrap .message.bot').forEach(addCopyButton);
         }, i === 0 ? 0 : 200);
     }
-
-    const run_mermaid_render = do_something_but_not_too_frequently(1000, function () {
-        const blocks = document.querySelectorAll(`pre.mermaid, diagram-div`);
-        if (blocks.length == 0) { return; }
-        uml("mermaid");
-    });
-    run_mermaid_render();
+    // we have moved mermaid-related code to gradio-fix repository: binary-husky/gradio-fix@32150d0
 }

@@ -672,9 +667,9 @@ function limit_scroll_position() {
     let scrollableDiv = document.querySelector('#gpt-chatbot > div.wrap');
     scrollableDiv.addEventListener('wheel', function (e) {
         let preventScroll = false;
-        if (e.deltaX != 0) { prevented_offset = 0; return;}
-        if (this.scrollHeight == this.clientHeight) { prevented_offset = 0; return;}
-        if (e.deltaY < 0) { prevented_offset = 0; return;}
+        if (e.deltaX != 0) { prevented_offset = 0; return; }
+        if (this.scrollHeight == this.clientHeight) { prevented_offset = 0; return; }
+        if (e.deltaY < 0) { prevented_offset = 0; return; }
         if (e.deltaY > 0 && this.scrollHeight - this.clientHeight - this.scrollTop <= 1) { preventScroll = true; }

         if (preventScroll) {
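`do_something_but_not_too_frequently` is a throttle: it caps how often a function runs within `min_interval`, and the JS version also re-arms a timer so the trailing call eventually fires. A minimal Python analogue of the leading-edge half of that behavior (the trailing-call timer is deliberately omitted):

```python
import time

def throttle(min_interval):
    """Leading-edge throttle: run at most once per min_interval seconds."""
    def wrap(func):
        last_invocation = [None]  # boxed so the closure can rebind it
        def inner(*args):
            now = time.monotonic()
            if last_invocation[0] is None or now - last_invocation[0] >= min_interval:
                last_invocation[0] = now
                return func(*args)
            return None  # call suppressed: too soon after the previous one
        return inner
    return wrap
```

The 1000 ms throttle in the removed `run_mermaid_render` followed the same pattern, so bursts of chatbot DOM mutations triggered at most one re-render per second.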
@@ -713,3 +708,161 @@ function GptAcademicJavaScriptInit(LAYOUT = "LEFT-RIGHT") {
     // setInterval(function () { uml("mermaid") }, 5000); // 每50毫秒执行一次
 }
+
+function loadLive2D() {
+    try {
+        $("<link>").attr({ href: "file=themes/waifu_plugin/waifu.css", rel: "stylesheet", type: "text/css" }).appendTo('head');
+        $('body').append('<div class="waifu"><div class="waifu-tips"></div><canvas id="live2d" class="live2d"></canvas><div class="waifu-tool"><span class="fui-home"></span> <span class="fui-chat"></span> <span class="fui-eye"></span> <span class="fui-user"></span> <span class="fui-photo"></span> <span class="fui-info-circle"></span> <span class="fui-cross"></span></div></div>');
+        $.ajax({
+            url: "file=themes/waifu_plugin/waifu-tips.js", dataType: "script", cache: true, success: function () {
+                $.ajax({
+                    url: "file=themes/waifu_plugin/live2d.js", dataType: "script", cache: true, success: function () {
+                        /* 可直接修改部分参数 */
+                        live2d_settings['hitokotoAPI'] = "hitokoto.cn"; // 一言 API
+                        live2d_settings['modelId'] = 3; // 默认模型 ID
+                        live2d_settings['modelTexturesId'] = 44; // 默认材质 ID
+                        live2d_settings['modelStorage'] = false; // 不储存模型 ID
+                        live2d_settings['waifuSize'] = '210x187';
+                        live2d_settings['waifuTipsSize'] = '187x52';
+                        live2d_settings['canSwitchModel'] = true;
+                        live2d_settings['canSwitchTextures'] = true;
+                        live2d_settings['canSwitchHitokoto'] = false;
+                        live2d_settings['canTakeScreenshot'] = false;
+                        live2d_settings['canTurnToHomePage'] = false;
+                        live2d_settings['canTurnToAboutPage'] = false;
+                        live2d_settings['showHitokoto'] = false; // 显示一言
+                        live2d_settings['showF12Status'] = false; // 显示加载状态
+                        live2d_settings['showF12Message'] = false; // 显示看板娘消息
+                        live2d_settings['showF12OpenMsg'] = false; // 显示控制台打开提示
+                        live2d_settings['showCopyMessage'] = false; // 显示 复制内容 提示
+                        live2d_settings['showWelcomeMessage'] = true; // 显示进入面页欢迎词
+                        /* 在 initModel 前添加 */
+                        initModel("file=themes/waifu_plugin/waifu-tips.json");
+                    }
+                });
+            }
+        });
+    } catch (err) { console.log("[Error] JQuery is not defined.") }
+}
+
+function get_checkbox_selected_items(elem_id) {
+    display_panel_arr = [];
+    document.getElementById(elem_id).querySelector('[data-testid="checkbox-group"]').querySelectorAll('label').forEach(label => {
+        // Get the span text
+        const spanText = label.querySelector('span').textContent;
+        // Get the input value
+        const checked = label.querySelector('input').checked;
+        if (checked) {
+            display_panel_arr.push(spanText)
+        }
+    });
+    return display_panel_arr;
+}
+
+function set_checkbox(key, bool, set_twice = false) {
+    set_success = false;
+    elem_ids = ["cbsc", "cbs"]
+    elem_ids.forEach(id => {
+        document.getElementById(id).querySelector('[data-testid="checkbox-group"]').querySelectorAll('label').forEach(label => {
+            // Get the span text
+            const spanText = label.querySelector('span').textContent;
+            if (spanText === key) {
+                if (bool) {
+                    label.classList.add('selected');
+                } else {
+                    if (label.classList.contains('selected')) {
+                        label.classList.remove('selected');
+                    }
+                }
+                if (set_twice) {
+                    setTimeout(() => {
+                        if (bool) {
+                            label.classList.add('selected');
+                        } else {
+                            if (label.classList.contains('selected')) {
+                                label.classList.remove('selected');
+                            }
+                        }
+                    }, 5000);
+                }
+
+                label.querySelector('input').checked = bool;
+                set_success = true;
+                return
+            }
+        });
+    });
+
+    if (!set_success) {
+        console.log("设置checkbox失败,没有找到对应的key")
+    }
+}
+
+function apply_cookie_for_checkbox(dark) {
+    // console.log("apply_cookie_for_checkboxes")
+    let searchString = "输入清除键";
+    let bool_value = "False";
+
+    ////////////////// darkmode ///////////////////
+    if (getCookie("js_darkmode_cookie")) {
+        dark = getCookie("js_darkmode_cookie")
+    }
+    dark = dark == "True";
+    if (document.querySelectorAll('.dark').length) {
+        if (!dark) {
+            document.querySelectorAll('.dark').forEach(el => el.classList.remove('dark'));
+        }
+    } else {
+        if (dark) {
+            document.querySelector('body').classList.add('dark');
+        }
+    }
+
+    ////////////////////// clearButton ///////////////////////////
+    if (getCookie("js_clearbtn_show_cookie")) {
+        // have cookie
+        bool_value = getCookie("js_clearbtn_show_cookie")
+        bool_value = bool_value == "True";
+        searchString = "输入清除键";
+        if (bool_value) {
+            let clearButton = document.getElementById("elem_clear");
+            let clearButton2 = document.getElementById("elem_clear2");
+            clearButton.style.display = "block";
+            clearButton2.style.display = "block";
+            set_checkbox(searchString, true);
+        } else {
+            let clearButton = document.getElementById("elem_clear");
+            let clearButton2 = document.getElementById("elem_clear2");
+            clearButton.style.display = "none";
+            clearButton2.style.display = "none";
+            set_checkbox(searchString, false);
+        }
+    }
+
+    ////////////////////// live2d ///////////////////////////
+    if (getCookie("js_live2d_show_cookie")) {
+        // have cookie
+        searchString = "添加Live2D形象";
+        bool_value = getCookie("js_live2d_show_cookie");
+        bool_value = bool_value == "True";
+        if (bool_value) {
+            loadLive2D();
+            set_checkbox(searchString, true);
+        } else {
+            $('.waifu').hide();
+            set_checkbox(searchString, false);
|
||||||
|
}
|
||||||
|
} else {
|
||||||
|
// do not have cookie
|
||||||
|
// get conf
|
||||||
|
display_panel_arr = get_checkbox_selected_items("cbsc");
|
||||||
|
searchString = "添加Live2D形象";
|
||||||
|
if (display_panel_arr.includes(searchString)) {
|
||||||
|
loadLive2D();
|
||||||
|
} else {
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
}
|
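The cookie helpers above store booleans as the Python-style strings "True"/"False" (written by `setCookie` and parsed back with `== "True"`), so the same value round-trips between the Python backend and the browser. A minimal Python sketch of that convention; the helper names are mine, not from the repo:

```python
def dump_bool(value):
    # str(True) / str(False) produce exactly the strings the JS expects
    return str(bool(value))

def parse_bool(cookie_value):
    # Anything other than the exact string "True" counts as False,
    # mirroring `bool_value == "True"` in common.js
    return cookie_value == "True"
```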
@@ -5,17 +5,14 @@ def get_common_html_javascript_code():
     js = "\n"
     for jsf in [
         "file=themes/common.js",
-        "file=themes/mermaid.min.js",
-        "file=themes/mermaid_loader.js",
     ]:
         js += f"""<script src="{jsf}"></script>\n"""

     # Add Live2D
     if ADD_WAIFU:
         for jsf in [
-            "file=docs/waifu_plugin/jquery.min.js",
+            "file=themes/waifu_plugin/jquery.min.js",
-            "file=docs/waifu_plugin/jquery-ui.min.js",
+            "file=themes/waifu_plugin/jquery-ui.min.js",
-            "file=docs/waifu_plugin/autoload.js",
         ]:
             js += f"""<script src="{jsf}"></script>\n"""
     return js
themes/mermaid.min.js (vendored, 1590 lines): file diff suppressed because one or more lines are too long.
@@ -1,55 +1 @@
-import { deflate, inflate } from '/file=themes/pako.esm.mjs';
-import { toUint8Array, fromUint8Array, toBase64, fromBase64 } from '/file=themes/base64.mjs';
-
-const base64Serde = {
-  serialize: (state) => {
-    return toBase64(state, true);
-  },
-  deserialize: (state) => {
-    return fromBase64(state);
-  }
-};
-
-const pakoSerde = {
-  serialize: (state) => {
-    const data = new TextEncoder().encode(state);
-    const compressed = deflate(data, { level: 9 });
-    return fromUint8Array(compressed, true);
-  },
-  deserialize: (state) => {
-    const data = toUint8Array(state);
-    return inflate(data, { to: 'string' });
-  }
-};
-
-const serdes = {
-  base64: base64Serde,
-  pako: pakoSerde
-};
-
-export const serializeState = (state, serde = 'pako') => {
-  if (!(serde in serdes)) {
-    throw new Error(`Unknown serde type: ${serde}`);
-  }
-  const json = JSON.stringify(state);
-  const serialized = serdes[serde].serialize(json);
-  return `${serde}:${serialized}`;
-};
-
-const deserializeState = (state) => {
-  let type, serialized;
-  if (state.includes(':')) {
-    let tempType;
-    [tempType, serialized] = state.split(':');
-    if (tempType in serdes) {
-      type = tempType;
-    } else {
-      throw new Error(`Unknown serde type: ${tempType}`);
-    }
-  } else {
-    type = 'base64';
-    serialized = state;
-  }
-  const json = serdes[type].deserialize(serialized);
-  return JSON.parse(json);
-};
+// we have moved mermaid-related code to gradio-fix repository: binary-husky/gradio-fix@32150d0
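The removed pako serde above is the encoding scheme behind mermaid.live's `#pako:` share links: JSON-encode the editor state, deflate it at level 9, then URL-safe-base64 the result. A rough Python equivalent, for illustration only (the function names are mine; I assume pako's default zlib framing, which Python's `zlib.compress` also produces):

```python
import base64
import json
import zlib

def serialize_state(state, serde="pako"):
    """Sketch of serializeState: returns '<serde>:<payload>' for a mermaid.live URL."""
    payload = json.dumps(state).encode("utf-8")
    if serde == "pako":
        # pako's deflate(data, { level: 9 }) emits a zlib stream, like zlib.compress
        body = base64.urlsafe_b64encode(zlib.compress(payload, 9))
    elif serde == "base64":
        body = base64.urlsafe_b64encode(payload)
    else:
        raise ValueError(f"Unknown serde type: {serde}")
    return f"{serde}:{body.decode('ascii')}"

def deserialize_state(state):
    """Sketch of deserializeState: inverse of the function above."""
    if ":" in state:
        serde, _, body = state.partition(":")
        if serde not in ("pako", "base64"):
            raise ValueError(f"Unknown serde type: {serde}")
    else:
        serde, body = "base64", state
    raw = base64.urlsafe_b64decode(body.encode("ascii"))
    if serde == "pako":
        raw = zlib.decompress(raw)
    return json.loads(raw.decode("utf-8"))

state = {"code": "graph TD; A-->B;", "mermaid": "{\n  \"theme\": \"default\"\n}",
         "autoSync": True, "updateDiagram": False}
edit_link = "https://mermaid.live/edit#" + serialize_state(state)
```

This is the same state dict that mermaid_loader.js passed to `Module.serializeState` when building its edit links.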
@@ -1,197 +1 @@
-const uml = async className => {
-
-  // Custom element to encapsulate Mermaid content.
-  class MermaidDiv extends HTMLElement {
-
-    /**
-     * Creates a special Mermaid div shadow DOM.
-     * Works around issues of shared IDs.
-     * @return {void}
-     */
-    constructor() {
-      super()
-
-      // Create the Shadow DOM and attach style
-      const shadow = this.attachShadow({ mode: "open" })
-      const style = document.createElement("style")
-      style.textContent = `
-      :host {
-        display: block;
-        line-height: initial;
-        font-size: 16px;
-      }
-      div.diagram {
-        margin: 0;
-        overflow: visible;
-      }`
-      shadow.appendChild(style)
-    }
-  }
-
-  if (typeof customElements.get("diagram-div") === "undefined") {
-    customElements.define("diagram-div", MermaidDiv)
-  }
-
-  const getFromCode = parent => {
-    // Handles <pre><code> text extraction.
-    let text = ""
-    for (let j = 0; j < parent.childNodes.length; j++) {
-      const subEl = parent.childNodes[j]
-      if (subEl.tagName.toLowerCase() === "code") {
-        for (let k = 0; k < subEl.childNodes.length; k++) {
-          const child = subEl.childNodes[k]
-          const whitespace = /^\s*$/
-          if (child.nodeName === "#text" && !(whitespace.test(child.nodeValue))) {
-            text = child.nodeValue
-            break
-          }
-        }
-      }
-    }
-    return text
-  }
-
-  function createOrUpdateHyperlink(parentElement, linkText, linkHref) {
-    // Search for an existing anchor element within the parentElement
-    let existingAnchor = parentElement.querySelector("a");
-
-    // Check if an anchor element already exists
-    if (existingAnchor) {
-      // Update the hyperlink reference if it's different from the current one
-      if (existingAnchor.href !== linkHref) {
-        existingAnchor.href = linkHref;
-      }
-      // Update the target attribute to ensure it opens in a new tab
-      existingAnchor.target = '_blank';
-
-      // If the text must be dynamic, uncomment and use the following line:
-      // existingAnchor.textContent = linkText;
-    } else {
-      // If no anchor exists, create one and append it to the parentElement
-      let anchorElement = document.createElement("a");
-      anchorElement.href = linkHref; // Set hyperlink reference
-      anchorElement.textContent = linkText; // Set text displayed
-      anchorElement.target = '_blank'; // Ensure it opens in a new tab
-      parentElement.appendChild(anchorElement); // Append the new anchor element to the parent
-    }
-  }
-
-  function removeLastLine(str) {
-    // Split the string into an array of lines
-    var lines = str.split('\n');
-    lines.pop();
-    // Join the remaining lines back together with newlines
-    var result = lines.join('\n');
-    return result;
-  }
-
-  // Provide a default config in case one is not specified
-  const defaultConfig = {
-    startOnLoad: false,
-    theme: "default",
-    flowchart: {
-      htmlLabels: false
-    },
-    er: {
-      useMaxWidth: false
-    },
-    sequence: {
-      useMaxWidth: false,
-      noteFontWeight: "14px",
-      actorFontSize: "14px",
-      messageFontSize: "16px"
-    }
-  }
-  if (document.body.classList.contains("dark")) {
-    defaultConfig.theme = "dark"
-  }
-
-  const Module = await import('/file=themes/mermaid_editor.js');
-
-  function do_render(block, code, codeContent, cnt) {
-    var rendered_content = mermaid.render(`_diagram_${cnt}`, code);
-    ////////////////// Track which code blocks have already been rendered //////////////////
-    let codeFinishRenderElement = block.querySelector("code_finish_render"); // Reuse the code_finish_render element if this block already has one
-    if (codeFinishRenderElement) {
-      codeFinishRenderElement.style.display = "none";
-    } else {
-      // Otherwise create a hidden code_finish_render element to hold the rendered source
-      let codeFinishRenderElementNew = document.createElement("code_finish_render");
-      codeFinishRenderElementNew.style.display = "none";
-      codeFinishRenderElementNew.textContent = "";
-      block.appendChild(codeFinishRenderElementNew); // Attach the new element to the block
-      codeFinishRenderElement = codeFinishRenderElementNew;
-    }
-
-    ////////////////// Create a container for the rendered diagram //////////////////
-    let mermaidRender = block.querySelector(".mermaid_render"); // Try to reuse an existing <div class='mermaid_render'>
-    if (!mermaidRender) {
-      mermaidRender = document.createElement("div"); // None yet, create a new <div class='mermaid_render'>
-      mermaidRender.classList.add("mermaid_render");
-      block.appendChild(mermaidRender); // Attach the new element to the block
-    }
-    mermaidRender.innerHTML = rendered_content
-    codeFinishRenderElement.textContent = code // Mark this code as rendered
-
-    ////////////////// Create a "点击这里编辑脑图" edit link //////////////////
-    let pako_encode = Module.serializeState({
-      "code": codeContent,
-      "mermaid": "{\n  \"theme\": \"default\"\n}",
-      "autoSync": true,
-      "updateDiagram": false
-    });
-    createOrUpdateHyperlink(block, "点击这里编辑脑图", "https://mermaid.live/edit#" + pako_encode)
-  }
-
-  // Load up the config
-  mermaid.mermaidAPI.globalReset() // Global reset
-  const config = (typeof mermaidConfig === "undefined") ? defaultConfig : mermaidConfig
-  mermaid.initialize(config)
-  // Find all of our Mermaid sources and render them.
-  const blocks = document.querySelectorAll(`pre.mermaid`);
-
-  for (let i = 0; i < blocks.length; i++) {
-    var block = blocks[i]
-    ////////////////// Skip re-rendering when the code has not changed //////////////////
-    var code = getFromCode(block);
-    let code_elem = block.querySelector("code");
-    let codeContent = code_elem.textContent; // Text content of the code element
-
-    // Hide the code element if its content contains '<gpt_academic_hide_mermaid_code>'
-    if (codeContent.indexOf('<gpt_academic_hide_mermaid_code>') !== -1) {
-      code_elem.style.display = "none";
-    }
-
-    // Reuse the code_pending_render element if this block already has one
-    let codePendingRenderElement = block.querySelector("code_pending_render");
-    if (codePendingRenderElement) {
-      codePendingRenderElement.style.display = "none";
-      if (codePendingRenderElement.textContent !== codeContent) {
-        codePendingRenderElement.textContent = codeContent; // Update the pending source if it differs from the code element's content
-      }
-      else {
-        continue; // Unchanged, nothing to do
-      }
-    } else { // Otherwise create a hidden code_pending_render element holding the code element's content
-      let codePendingRenderElementNew = document.createElement("code_pending_render");
-      codePendingRenderElementNew.style.display = "none";
-      codePendingRenderElementNew.textContent = codeContent;
-      block.appendChild(codePendingRenderElementNew); // Attach the new element to the block
-      codePendingRenderElement = codePendingRenderElementNew;
-    }
-
-    ////////////////// The actual rendering happens here //////////////////
-    try {
-      do_render(block, code, codeContent, i);
-      // console.log("渲染", codeContent);
-    } catch (err) {
-      try {
-        var lines = code.split('\n'); if (lines.length < 2) { continue; }
-        do_render(block, removeLastLine(code), codeContent, i);
-        // console.log("渲染", codeContent);
-      } catch (err) {
-        console.log("以下代码不能渲染", code, removeLastLine(code), err);
-      }
-    }
-  }
-}
+// we have moved mermaid-related code to gradio-fix repository: binary-husky/gradio-fix@32150d0
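The try/catch in the removed loader implements a streaming-friendly fallback: if a partially streamed mermaid block fails to render (its last line is usually still incomplete), drop the last line and try once more. A small Python sketch of that retry policy, with hypothetical names and a stand-in renderer, for illustration only:

```python
def render_with_fallback(code, render):
    """Render `code`; if that fails, retry once without the last line.

    Mirrors the nested try/catch in mermaid_loader.js, where the final line
    of a streaming LLM response is often incomplete.
    """
    try:
        return render(code)
    except Exception:
        lines = code.split("\n")
        if len(lines) < 2:
            return None  # mirrors the `continue` in the render loop
        try:
            return render("\n".join(lines[:-1]))
        except Exception as err:
            print("cannot render:", err)
            return None

# Stand-in renderer: rejects diagrams whose last edge is unfinished
def fake_render(code):
    if code.rstrip().endswith("-->"):
        raise ValueError("incomplete edge")
    return "<svg>" + code + "</svg>"
```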
themes/pako.esm.mjs (6878 lines): file diff suppressed because it is too large.

themes/theme.py (109 lines):
@@ -46,8 +46,7 @@ cookie相关工具函数
 -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
 """

-
-def init_cookie(cookies):
+def init_cookie(cookies, chatbot):
     # Give every visiting user a unique uuid
     cookies.update({"uuid": uuid.uuid4()})
     return cookies
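`init_cookie` now also receives the chatbot component; the body shown in the hunk does not use it yet. A self-contained sketch of what the uuid assignment does (the call site and Gradio wiring are omitted, and the default for `chatbot` is mine):

```python
import uuid

def init_cookie(cookies, chatbot=None):
    # Give every visiting user a unique uuid; `chatbot` is the newly
    # added parameter from the diff and is unused in this body
    cookies.update({"uuid": uuid.uuid4()})
    return cookies

cookies = init_cookie({"api_key": ""})
```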
@ -91,31 +90,107 @@ js_code_for_css_changing = """(css) => {
|
|||||||
}
|
}
|
||||||
"""
|
"""
|
||||||
|
|
||||||
js_code_for_darkmode_init = """(dark) => {
|
|
||||||
dark = dark == "True";
|
|
||||||
if (document.querySelectorAll('.dark').length) {
|
|
||||||
if (!dark){
|
|
||||||
document.querySelectorAll('.dark').forEach(el => el.classList.remove('dark'));
|
|
||||||
}
|
|
||||||
} else {
|
|
||||||
if (dark){
|
|
||||||
document.querySelector('body').classList.add('dark');
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
"""
|
|
||||||
|
|
||||||
js_code_for_toggle_darkmode = """() => {
|
js_code_for_toggle_darkmode = """() => {
|
||||||
if (document.querySelectorAll('.dark').length) {
|
if (document.querySelectorAll('.dark').length) {
|
||||||
|
setCookie("js_darkmode_cookie", "False", 365);
|
||||||
document.querySelectorAll('.dark').forEach(el => el.classList.remove('dark'));
|
document.querySelectorAll('.dark').forEach(el => el.classList.remove('dark'));
|
||||||
} else {
|
} else {
|
||||||
|
setCookie("js_darkmode_cookie", "True", 365);
|
||||||
document.querySelector('body').classList.add('dark');
|
document.querySelector('body').classList.add('dark');
|
||||||
}
|
}
|
||||||
document.querySelectorAll('code_pending_render').forEach(code => {code.remove();})
|
document.querySelectorAll('code_pending_render').forEach(code => {code.remove();})
|
||||||
}"""
|
}"""
|
||||||
|
|
||||||
|
|
||||||
js_code_for_persistent_cookie_init = """(persistent_cookie) => {
|
js_code_for_persistent_cookie_init = """(py_pickle_cookie, cookie) => {
|
||||||
return getCookie("persistent_cookie");
|
return [getCookie("py_pickle_cookie"), cookie];
|
||||||
|
}
|
||||||
|
"""
|
||||||
|
|
||||||
|
|
||||||
|
js_code_reset = """
|
||||||
|
(a,b,c)=>{
|
||||||
|
return [[], [], "已重置"];
|
||||||
|
}
|
||||||
|
"""
|
||||||
|
|
||||||
|
|
||||||
|
js_code_clear = """
|
||||||
|
(a,b)=>{
|
||||||
|
return ["", ""];
|
||||||
|
}
|
||||||
|
"""
|
||||||
|
|
||||||
|
|
||||||
|
js_code_show_or_hide = """
|
||||||
|
(display_panel_arr)=>{
|
||||||
|
setTimeout(() => {
|
||||||
|
// get conf
|
||||||
|
display_panel_arr = get_checkbox_selected_items("cbs");
|
||||||
|
|
||||||
|
////////////////////// 输入清除键 ///////////////////////////
|
||||||
|
let searchString = "输入清除键";
|
||||||
|
let ele = "none";
|
||||||
|
if (display_panel_arr.includes(searchString)) {
|
||||||
|
let clearButton = document.getElementById("elem_clear");
|
||||||
|
let clearButton2 = document.getElementById("elem_clear2");
|
||||||
|
clearButton.style.display = "block";
|
||||||
|
clearButton2.style.display = "block";
|
||||||
|
setCookie("js_clearbtn_show_cookie", "True", 365);
|
||||||
|
} else {
|
||||||
|
let clearButton = document.getElementById("elem_clear");
|
||||||
|
let clearButton2 = document.getElementById("elem_clear2");
|
||||||
|
clearButton.style.display = "none";
|
||||||
|
clearButton2.style.display = "none";
|
||||||
|
setCookie("js_clearbtn_show_cookie", "False", 365);
|
||||||
|
}
|
||||||
|
|
||||||
|
////////////////////// 基础功能区 ///////////////////////////
|
||||||
|
searchString = "基础功能区";
|
||||||
|
if (display_panel_arr.includes(searchString)) {
|
||||||
|
ele = document.getElementById("basic-panel");
|
||||||
|
ele.style.display = "block";
|
||||||
|
} else {
|
||||||
|
ele = document.getElementById("basic-panel");
|
||||||
|
ele.style.display = "none";
|
||||||
|
}
|
||||||
|
|
||||||
|
////////////////////// 函数插件区 ///////////////////////////
|
||||||
|
searchString = "函数插件区";
|
||||||
|
if (display_panel_arr.includes(searchString)) {
|
||||||
|
ele = document.getElementById("plugin-panel");
|
||||||
|
ele.style.display = "block";
|
||||||
|
} else {
|
||||||
|
ele = document.getElementById("plugin-panel");
|
||||||
|
ele.style.display = "none";
|
||||||
|
}
|
||||||
|
|
||||||
|
}, 50);
|
||||||
|
}
|
||||||
|
"""
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
js_code_show_or_hide_group2 = """
|
||||||
|
(display_panel_arr)=>{
|
||||||
|
setTimeout(() => {
|
||||||
|
// console.log("display_panel_arr");
|
||||||
|
// get conf
|
||||||
|
display_panel_arr = get_checkbox_selected_items("cbsc");
|
||||||
|
|
||||||
|
////////////////////// 添加Live2D形象 ///////////////////////////
|
||||||
|
let searchString = "添加Live2D形象";
|
||||||
|
let ele = "none";
|
||||||
|
if (display_panel_arr.includes(searchString)) {
|
||||||
|
setCookie("js_live2d_show_cookie", "True", 365);
|
||||||
|
loadLive2D();
|
||||||
|
} else {
|
||||||
|
setCookie("js_live2d_show_cookie", "False", 365);
|
||||||
|
$('.waifu').hide();
|
||||||
|
}
|
||||||
|
|
||||||
|
|
||||||
|
}, 50);
|
||||||
}
|
}
|
||||||
"""
|
"""
|
||||||
|
themes/waifu_plugin/autoload.js (new file, 0 lines shown).
@@ -92,7 +92,7 @@ String.prototype.render = function(context) {
 };

 var re = /x/;
-console.log(re);
+// console.log(re);

 function empty(obj) {return typeof obj=="undefined"||obj==null||obj==""?true:false}
 function getRandText(text) {return Array.isArray(text) ? text[Math.floor(Math.random() * text.length + 1)-1] : text}
@@ -120,7 +120,7 @@ function hideMessage(timeout) {

 function initModel(waifuPath, type) {
     /* console welcome message */
-    eval(function(p,a,c,k,e,r){e=function(c){return(c<a?'':e(parseInt(c/a)))+((c=c%a)>35?String.fromCharCode(c+29):c.toString(36))};if(!''.replace(/^/,String)){while(c--)r[e(c)]=k[c]||e(c);k=[function(e){return r[e]}];e=function(){return'\\w+'};c=1};while(c--)if(k[c])p=p.replace(new RegExp('\\b'+e(c)+'\\b','g'),k[c]);return p}('8.d(" ");8.d("\\U,.\\y\\5.\\1\\1\\1\\1/\\1,\\u\\2 \\H\\n\\1\\1\\1\\1\\1\\b \', !-\\r\\j-i\\1/\\1/\\g\\n\\1\\1\\1 \\1 \\a\\4\\f\'\\1\\1\\1 L/\\a\\4\\5\\2\\n\\1\\1 \\1 /\\1 \\a,\\1 /|\\1 ,\\1 ,\\1\\1\\1 \',\\n\\1\\1\\1\\q \\1/ /-\\j/\\1\\h\\E \\9 \\5!\\1 i\\n\\1\\1\\1 \\3 \\6 7\\q\\4\\c\\1 \\3\'\\s-\\c\\2!\\t|\\1 |\\n\\1\\1\\1\\1 !,/7 \'0\'\\1\\1 \\X\\w| \\1 |\\1\\1\\1\\n\\1\\1\\1\\1 |.\\x\\"\\1\\l\\1\\1 ,,,, / |./ \\1 |\\n\\1\\1\\1\\1 \\3\'| i\\z.\\2,,A\\l,.\\B / \\1.i \\1|\\n\\1\\1\\1\\1\\1 \\3\'| | / C\\D/\\3\'\\5,\\1\\9.\\1|\\n\\1\\1\\1\\1\\1\\1 | |/i \\m|/\\1 i\\1,.\\6 |\\F\\1|\\n\\1\\1\\1\\1\\1\\1.|/ /\\1\\h\\G \\1 \\6!\\1\\1\\b\\1|\\n\\1\\1\\1 \\1 \\1 k\\5>\\2\\9 \\1 o,.\\6\\2 \\1 /\\2!\\n\\1\\1\\1\\1\\1\\1 !\'\\m//\\4\\I\\g\', \\b \\4\'7\'\\J\'\\n\\1\\1\\1\\1\\1\\1 \\3\'\\K|M,p,\\O\\3|\\P\\n\\1\\1\\1\\1\\1 \\1\\1\\1\\c-,/\\1|p./\\n\\1\\1\\1\\1\\1 \\1\\1\\1\'\\f\'\\1\\1!o,.:\\Q \\R\\S\\T v"+e.V+" / W "+e.N);8.d(" ");',60,60,'|u3000|uff64|uff9a|uff40|u30fd|uff8d||console|uff8a|uff0f|uff3c|uff84|log|live2d_settings|uff70|u00b4|uff49||u2010||u3000_|u3008||_|___|uff72|u2500|uff67|u30cf|u30fc||u30bd|u4ece|u30d8|uff1e|__|u30a4|k_|uff17_|u3000L_|u3000i|uff1a|u3009|uff34|uff70r|u30fdL__||___i|l2dVerDate|u30f3|u30ce|nLive2D|u770b|u677f|u5a18|u304f__|l2dVersion|FGHRSH|u00b40i'.split('|'),0,{}));
+    // eval(function(p,a,c,k,e,r){e=function(c){return(c<a?'':e(parseInt(c/a)))+((c=c%a)>35?String.fromCharCode(c+29):c.toString(36))};if(!''.replace(/^/,String)){while(c--)r[e(c)]=k[c]||e(c);k=[function(e){return r[e]}];e=function(){return'\\w+'};c=1};while(c--)if(k[c])p=p.replace(new RegExp('\\b'+e(c)+'\\b','g'),k[c]);return p}('8.d(" ");8.d("\\U,.\\y\\5.\\1\\1\\1\\1/\\1,\\u\\2 \\H\\n\\1\\1\\1\\1\\1\\b \', !-\\r\\j-i\\1/\\1/\\g\\n\\1\\1\\1 \\1 \\a\\4\\f\'\\1\\1\\1 L/\\a\\4\\5\\2\\n\\1\\1 \\1 /\\1 \\a,\\1 /|\\1 ,\\1 ,\\1\\1\\1 \',\\n\\1\\1\\1\\q \\1/ /-\\j/\\1\\h\\E \\9 \\5!\\1 i\\n\\1\\1\\1 \\3 \\6 7\\q\\4\\c\\1 \\3\'\\s-\\c\\2!\\t|\\1 |\\n\\1\\1\\1\\1 !,/7 \'0\'\\1\\1 \\X\\w| \\1 |\\1\\1\\1\\n\\1\\1\\1\\1 |.\\x\\"\\1\\l\\1\\1 ,,,, / |./ \\1 |\\n\\1\\1\\1\\1 \\3\'| i\\z.\\2,,A\\l,.\\B / \\1.i \\1|\\n\\1\\1\\1\\1\\1 \\3\'| | / C\\D/\\3\'\\5,\\1\\9.\\1|\\n\\1\\1\\1\\1\\1\\1 | |/i \\m|/\\1 i\\1,.\\6 |\\F\\1|\\n\\1\\1\\1\\1\\1\\1.|/ /\\1\\h\\G \\1 \\6!\\1\\1\\b\\1|\\n\\1\\1\\1 \\1 \\1 k\\5>\\2\\9 \\1 o,.\\6\\2 \\1 /\\2!\\n\\1\\1\\1\\1\\1\\1 !\'\\m//\\4\\I\\g\', \\b \\4\'7\'\\J\'\\n\\1\\1\\1\\1\\1\\1 \\3\'\\K|M,p,\\O\\3|\\P\\n\\1\\1\\1\\1\\1 \\1\\1\\1\\c-,/\\1|p./\\n\\1\\1\\1\\1\\1 \\1\\1\\1\'\\f\'\\1\\1!o,.:\\Q \\R\\S\\T v"+e.V+" / W "+e.N);8.d(" ");',60,60,'|u3000|uff64|uff9a|uff40|u30fd|uff8d||console|uff8a|uff0f|uff3c|uff84|log|live2d_settings|uff70|u00b4|uff49||u2010||u3000_|u3008||_|___|uff72|u2500|uff67|u30cf|u30fc||u30bd|u4ece|u30d8|uff1e|__|u30a4|k_|uff17_|u3000L_|u3000i|uff1a|u3009|uff34|uff70r|u30fdL__||___i|l2dVerDate|u30f3|u30ce|nLive2D|u770b|u677f|u5a18|u304f__|l2dVersion|FGHRSH|u00b40i'.split('|'),0,{}));

     /* check that jQuery is available */
     if (typeof($.ajax) != 'function') typeof(jQuery.ajax) == 'function' ? window.$ = jQuery : console.log('[Error] JQuery is not defined.');
@@ -44,8 +44,8 @@
     { "selector": ".container a[href^='http']", "text": ["要看看 <span style=\"color:#0099cc;\">{text}</span> 么?"] },
     { "selector": ".fui-home", "text": ["点击前往首页,想回到上一页可以使用浏览器的后退功能哦"] },
     { "selector": ".fui-chat", "text": ["一言一语,一颦一笑。一字一句,一颗赛艇。"] },
-    { "selector": ".fui-eye", "text": ["嗯··· 要切换 看板娘 吗?"] },
+    { "selector": ".fui-eye", "text": ["嗯··· 要切换 Live2D形象 吗?"] },
-    { "selector": ".fui-user", "text": ["喜欢换装 Play 吗?"] },
+    { "selector": ".fui-user", "text": ["喜欢换装吗?"] },
     { "selector": ".fui-photo", "text": ["要拍张纪念照片吗?"] },
     { "selector": ".fui-info-circle", "text": ["这里有关于我的信息呢"] },
     { "selector": ".fui-cross", "text": ["你不喜欢我了吗..."] },
@@ -77,14 +77,28 @@
         "看什么看(*^▽^*)",
         "焦虑时,吃顿大餐心情就好啦^_^",
         "你这个年纪,怎么睡得着觉的你^_^",
-        "修改ADD_WAIFU=False,我就不再打扰你了~",
+        "打开“界面外观”菜单,可选择关闭Live2D形象",
-        "经常去github看看我们的更新吧,也许有好玩的新功能呢。",
+        "经常去Github看看我们的更新吧,也许有好玩的新功能呢。",
         "试试本地大模型吧,有的也很强大的哦。",
         "很多强大的函数插件隐藏在下拉菜单中呢。",
-        "红色的插件,使用之前需要把文件上传进去哦。",
+        "插件使用之前,需要把文件上传进去哦。",
-        "想添加功能按钮吗?读读readme很容易就学会啦。",
+        "上传文件时,可以把文件直接拖进对话中的哦。",
+        "上传文件时,可以文件或图片粘贴到输入区哦。",
+        "想添加基础功能按钮吗?打开“界面外观”菜单进行自定义吧!",
         "敏感或机密的信息,不可以问AI的哦!",
-        "LLM究竟是划时代的创新,还是扼杀创造力的毒药呢?"
+        "LLM究竟是划时代的创新,还是扼杀创造力的毒药呢?",
+        "休息一下,起来走动走动吧!",
+        "今天的阳光也很不错哦,不妨外出晒晒。",
+        "笑一笑,生活更美好!",
+        "遇到难题,深呼吸就能解决一半。",
+        "偶尔换换环境,灵感也许就来了。",
+        "小憩片刻,醒来便是满血复活。",
+        "技术改变生活,让我们共同进步。",
+        "保持好奇心,探索未知的世界。",
+        "遇到困难,记得还有朋友和AI陪在你身边。",
+        "劳逸结合,方能长久。",
+        "偶尔给自己放个假,放松心情。",
+        "不要害怕失败,勇敢尝试才能成功。"
     ] }
 ],
 "click": [
version (4 lines):

@@ -1,5 +1,5 @@
 {
-    "version": 3.71,
+    "version": 3.72,
     "show_feature": true,
-    "new_feature": "用绘图功能增强部分插件 <-> 基础功能区支持自动切换中英提示词 <-> 支持Mermaid绘图库(让大模型绘制脑图) <-> 支持Gemini-pro <-> 支持直接拖拽文件到上传区 <-> 支持将图片粘贴到输入区"
+    "new_feature": "支持切换多个智谱ai模型 <-> 用绘图功能增强部分插件 <-> 基础功能区支持自动切换中英提示词 <-> 支持Mermaid绘图库(让大模型绘制脑图) <-> 支持Gemini-pro <-> 支持直接拖拽文件到上传区 <-> 支持将图片粘贴到输入区"
 }