Merge branch 'master' into jsz14897502-master

This commit is contained in:
binary-husky 2023-09-09 18:22:29 +08:00
commit a0f592308a
41 changed files with 1514 additions and 356 deletions

View File

@@ -0,0 +1,44 @@
# https://docs.github.com/en/actions/publishing-packages/publishing-docker-images#publishing-images-to-github-packages
name: build-with-all-capacity
on:
  push:
    branches:
      - 'master'

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}_with_all_capacity

jobs:
  build-and-push-image:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@v3
      - name: Log in to the Container registry
        uses: docker/login-action@v2
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Extract metadata (tags, labels) for Docker
        id: meta
        uses: docker/metadata-action@v4
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
      - name: Build and push Docker image
        uses: docker/build-push-action@v4
        with:
          context: .
          push: true
          file: docs/GithubAction+AllCapacity
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}

.github/workflows/stale.yml vendored Normal file
View File

@@ -0,0 +1,25 @@
# This workflow warns and then closes issues and PRs that have had no activity for a specified amount of time.
#
# You can adjust the behavior by modifying this file.
# For more information, see:
# https://github.com/actions/stale
name: 'Close stale issues and PRs'
on:
  schedule:
    - cron: '*/5 * * * *'

jobs:
  stale:
    runs-on: ubuntu-latest
    permissions:
      issues: write
      pull-requests: read
    steps:
      - uses: actions/stale@v8
        with:
          stale-issue-message: 'This issue is stale because it has been open 100 days with no activity. Remove stale label or comment or this will be closed in 1 day.'
          days-before-stale: 100
          days-before-close: 1
          debug-only: true

View File

@@ -10,13 +10,13 @@
 **如果喜欢这个项目请给它一个Star如果您发明了好用的快捷键或函数插件欢迎发pull requests**
 If you like this project, please give it a Star. If you've come up with more useful academic shortcuts or functional plugins, feel free to open an issue or pull request. We also have a README in [English|](docs/README_EN.md)[日本語|](docs/README_JP.md)[한국어|](https://github.com/mldljyh/ko_gpt_academic)[Русский|](docs/README_RS.md)[Français](docs/README_FR.md) translated by this project itself.
-To translate this project to arbitary language with GPT, read and run [`multi_language.py`](multi_language.py) (experimental).
+To translate this project to arbitrary language with GPT, read and run [`multi_language.py`](multi_language.py) (experimental).
 > **Note**
 >
-> 1.请注意只有 **高亮(如红色)** 标识的函数插件(按钮)才支持读取文件,部分插件位于插件区的**下拉菜单**中。另外我们以**最高优先级**欢迎和处理任何新插件的PR。
+> 1.请注意只有 **高亮** 标识的函数插件(按钮)才支持读取文件,部分插件位于插件区的**下拉菜单**中。另外我们以**最高优先级**欢迎和处理任何新插件的PR。
 >
-> 2.本项目中每个文件的功能都在[自译解报告`self_analysis.md`](https://github.com/binary-husky/gpt_academic/wiki/GPTAcademic项目自译解报告)详细说明。随着版本的迭代您也可以随时自行点击相关函数插件调用GPT重新生成项目的自我解析报告。常见问题汇总在[`wiki`](https://github.com/binary-husky/gpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98)当中。[安装方法](#installation)。
+> 2.本项目中每个文件的功能都在[自译解报告`self_analysis.md`](https://github.com/binary-husky/gpt_academic/wiki/GPTAcademic项目自译解报告)详细说明。随着版本的迭代您也可以随时自行点击相关函数插件调用GPT重新生成项目的自我解析报告。常见问题[`wiki`](https://github.com/binary-husky/gpt_academic/wiki)。[安装方法](#installation) | [配置说明](https://github.com/binary-husky/gpt_academic/wiki/%E9%A1%B9%E7%9B%AE%E9%85%8D%E7%BD%AE%E8%AF%B4%E6%98%8E)。
 >
 > 3.本项目兼容并鼓励尝试国产大语言模型ChatGLM和Moss等等。支持多个api-key共存可在配置文件中填写如`API_KEY="openai-key1,openai-key2,azure-key3,api2d-key4"`。需要临时更换`API_KEY`时,在输入区输入临时的`API_KEY`然后回车键提交后即可生效。
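To illustrate how a comma-separated `API_KEY` string like the one above can be handled, here is a minimal, hypothetical sketch; the helper name and the prefix-matching policy are illustrative assumptions, not the project's actual implementation:

```python
# Hypothetical sketch: split a multi-provider API_KEY string and pick one key.
# The naming convention below is assumed for illustration only.
import random

API_KEY = "openai-key1,openai-key2,azure-key3,api2d-key4"

def pick_key(api_key_string: str, provider_prefix: str) -> str:
    candidates = [k.strip() for k in api_key_string.split(",")
                  if k.strip().startswith(provider_prefix)]
    if not candidates:
        raise ValueError(f"no key found for provider '{provider_prefix}'")
    return random.choice(candidates)  # trivial load-spreading policy

print(pick_key(API_KEY, "openai"))  # -> openai-key1 or openai-key2
```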
@@ -53,7 +53,8 @@ Latex论文一键校对 | [函数插件] 仿Grammarly对Latex文章进行语法
 [多LLM模型](https://www.bilibili.com/video/BV1wT411p7yf)支持 | 同时被GPT3.5、GPT4、[清华ChatGLM2](https://github.com/THUDM/ChatGLM2-6B)、[复旦MOSS](https://github.com/OpenLMLab/MOSS)同时伺候的感觉一定会很不错吧?
 ⭐ChatGLM2微调模型 | 支持加载ChatGLM2微调模型提供ChatGLM2微调辅助插件
 更多LLM模型接入支持[huggingface部署](https://huggingface.co/spaces/qingxu98/gpt-academic) | 加入Newbing接口(新必应),引入清华[Jittorllms](https://github.com/Jittor/JittorLLMs)支持[LLaMA](https://github.com/facebookresearch/llama)和[盘古α](https://openi.org.cn/pangu/)
-⭐[虚空终端](https://github.com/binary-husky/void-terminal)pip包 | 脱离GUI在Python中直接调用本项目的函数插件开发中
+⭐[void-terminal](https://github.com/binary-husky/void-terminal) pip包 | 脱离GUI在Python中直接调用本项目的所有函数插件开发中
+⭐虚空终端插件 | [函数插件] 用自然语言,直接调度本项目其他插件
 更多新功能展示 (图像生成等) …… | 见本文档结尾处 ……
 </div>
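The `void-terminal` package in the table above is marked as under development; the following is a purely hypothetical sketch of what GUI-free plugin invocation might look like. None of these names are a published API (only the package's existence is confirmed by the link):

```python
# Purely hypothetical usage sketch of the in-development void-terminal package.
# Every identifier below is an assumption made for illustration.
import void_terminal as vt

vt.set_conf(key="API_KEY", value="sk-...")           # assumed config setter
vt.set_conf(key="LLM_MODEL", value="gpt-3.5-turbo")
result = vt.run_plugin("解析整个Python项目", path="./my_repo")  # assumed entry point
print(result)
```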
@@ -148,11 +149,14 @@ python main.py
 ### 安装方法II使用Docker
+[![fullcapacity](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-all-capacity.yml/badge.svg?branch=master)](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-all-capacity.yml)
 1. 仅ChatGPT推荐大多数人选择等价于docker-compose方案1
 [![basic](https://github.com/binary-husky/gpt_academic/actions/workflows/build-without-local-llms.yml/badge.svg?branch=master)](https://github.com/binary-husky/gpt_academic/actions/workflows/build-without-local-llms.yml)
 [![basiclatex](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-latex.yml/badge.svg?branch=master)](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-latex.yml)
 [![basicaudio](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-audio-assistant.yml/badge.svg?branch=master)](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-audio-assistant.yml)
 ```sh
 git clone --depth=1 https://github.com/binary-husky/gpt_academic.git # 下载项目
 cd gpt_academic # 进入路径
@@ -249,10 +253,13 @@ Tip不指定文件直接点击 `载入对话历史存档` 可以查看历史h
 <img src="https://github.com/binary-husky/gpt_academic/assets/96192199/9fdcc391-f823-464f-9322-f8719677043b" height="250" >
 </div>
-3. 生成报告。大部分插件都会在执行结束后,生成工作报告
+3. 虚空终端(从自然语言输入中,理解用户意图+自动调用其他插件)
+- 步骤一:输入 “ 请调用插件翻译PDF论文地址为https://storage.googleapis.com/deepmind-media/alphago/AlphaGoNaturePaper.pdf ”
+- 步骤二:点击“虚空终端”
 <div align="center">
-<img src="https://user-images.githubusercontent.com/96192199/227503770-fe29ce2c-53fd-47b0-b0ff-93805f0c2ff4.png" height="250" >
-<img src="https://user-images.githubusercontent.com/96192199/227504617-7a497bb3-0a2a-4b50-9a8a-95ae60ea7afd.png" height="250" >
+<img src="https://github.com/binary-husky/gpt_academic/assets/96192199/66f1b044-e9ff-4eed-9126-5d4f3668f1ed" width="500" >
 </div>
 4. 模块化功能设计,简单的接口却能支持强大的功能
@@ -299,8 +306,10 @@ Tip不指定文件直接点击 `载入对话历史存档` 可以查看历史h
 </div>
 ### II版本:
-- version 3.5(Todo): 使用自然语言调用本项目的所有函数插件(高优先级)
+- version 3.60todo: 优化虚空终端引入code interpreter和更多插件
+- version 3.50: 使用自然语言调用本项目的所有函数插件虚空终端支持插件分类改进UI设计新主题
 - version 3.49: 支持百度千帆平台和文心一言
 - version 3.48: 支持阿里达摩院通义千问上海AI-Lab书生讯飞星火
 - version 3.46: 支持完全脱手操作的实时语音对话

View File

@@ -5,7 +5,7 @@ def check_proxy(proxies):
     try:
         response = requests.get("https://ipapi.co/json/", proxies=proxies, timeout=4)
         data = response.json()
-        print(f'查询代理的地理位置,返回的结果是{data}')
+        # print(f'查询代理的地理位置,返回的结果是{data}')
         if 'country_name' in data:
             country = data['country_name']
             result = f"代理配置 {proxies_https}, 代理所在地:{country}"
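For reference, a minimal sketch of calling this checker with the usual `requests`-style proxy mapping (the address is a placeholder):

```python
# Sketch: check_proxy takes the standard requests proxy dict.
proxies = {
    "http":  "socks5h://localhost:11284",   # placeholder proxy address
    "https": "socks5h://localhost:11284",
}
check_proxy(proxies)  # reports the country the egress IP resolves to
```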

View File

@@ -43,7 +43,11 @@ API_URL_REDIRECT = {}
 DEFAULT_WORKER_NUM = 3
-# 对话窗的高度
+# 色彩主题,可选 ["Default", "Chuanhu-Small-and-Beautiful", "High-Contrast"]
+THEME = "Default"
+
+# 对话窗的高度 仅在LAYOUT="TOP-DOWN"时生效
 CHATBOT_HEIGHT = 1115
@@ -68,6 +72,10 @@ WEB_PORT = -1
 MAX_RETRY = 2
+
+# 插件分类默认选项
+DEFAULT_FN_GROUPS = ['对话', '编程', '学术']
+
 # 模型选择是 (注意: LLM_MODEL是默认选中的模型, 它*必须*被包含在AVAIL_LLM_MODELS列表中 )
 LLM_MODEL = "gpt-3.5-turbo" # 可选 ↓↓↓
 AVAIL_LLM_MODELS = ["gpt-3.5-turbo-16k", "gpt-3.5-turbo", "azure-gpt-3.5", "api2d-gpt-3.5-turbo",
@@ -83,7 +91,7 @@ BAIDU_CLOUD_QIANFAN_MODEL = 'ERNIE-Bot'  # 可选 "ERNIE-Bot"(文心一言), "
 # 如果使用ChatGLM2微调模型请把 LLM_MODEL="chatglmft",并在此处指定模型路径
-ChatGLM_PTUNING_CHECKPOINT = "" # 例如"/home/hmp/ChatGLM2-6B/ptuning/output/6b-pt-128-1e-2/checkpoint-100"
+CHATGLM_PTUNING_CHECKPOINT = "" # 例如"/home/hmp/ChatGLM2-6B/ptuning/output/6b-pt-128-1e-2/checkpoint-100"
 # 本地LLM模型如ChatGLM的执行方式 CPU/GPU
@@ -99,10 +107,6 @@ CONCURRENT_COUNT = 100
 AUTO_CLEAR_TXT = False
-# 色彩主体,可选 ["Default", "Chuanhu-Small-and-Beautiful", "High-Contrast"]
-THEME = "Default"
-
 # 加一个live2d装饰
 ADD_WAIFU = False
@@ -213,6 +217,18 @@ ALLOW_RESET_CONFIG = False
 NEWBING_STYLE
 NEWBING_COOKIES
+用户图形界面布局依赖关系示意图
+
+CHATBOT_HEIGHT 对话窗的高度
+CODE_HIGHLIGHT 代码高亮
+LAYOUT 窗口布局
+DARK_MODE 暗色模式 / 亮色模式
+DEFAULT_FN_GROUPS 插件分类默认选项
+THEME 色彩主题
+AUTO_CLEAR_TXT 是否在提交时自动清空输入框
+ADD_WAIFU 加一个live2d装饰
+ALLOW_RESET_CONFIG 是否允许通过自然语言描述修改本页的配置该功能具有一定的危险性
+
 插件在线服务配置依赖关系示意图
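A short sketch of how the new options are read elsewhere, using the `get_conf` pattern that appears later in this diff (`proxies, = get_conf('proxies')`); the multi-key call is assumed to return values in request order:

```python
# Sketch: reading the new options through toolbox.get_conf.
from toolbox import get_conf

THEME, DEFAULT_FN_GROUPS, CHATBOT_HEIGHT = get_conf('THEME', 'DEFAULT_FN_GROUPS', 'CHATBOT_HEIGHT')
print(THEME)              # "Default"
print(DEFAULT_FN_GROUPS)  # ['对话', '编程', '学术']
```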

View File

@@ -63,6 +63,7 @@ def get_core_functions():
         "英译中": {
             "Prefix": r"翻译成地道的中文:" + "\n\n",
             "Suffix": r"",
+            "Visible": False,
         },
         "找图片": {
             "Prefix": r"我需要你找一张网络图片。使用Unsplash API(https://source.unsplash.com/960x640/?<英语关键词>)获取图片URL" +
@@ -78,6 +79,7 @@ def get_core_functions():
             "Prefix": r"Here are some bibliography items, please transform them into bibtex style." +
                       r"Note that, reference styles maybe more than one kind, you should transform each item correctly." +
                       r"Items need to be transformed:",
+            "Visible": False,
             "Suffix": r"",
         }
     }
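The new `"Visible": False` field evidently lets a prompt stay registered while being hidden from the button area; a minimal sketch of how such a flag could be consumed (this helper is an illustration, not the project's actual UI code):

```python
# Sketch: keep only core functions whose metadata marks them visible.
def visible_functions(core_functions: dict) -> dict:
    # Entries without the key default to visible.
    return {name: meta for name, meta in core_functions.items()
            if meta.get("Visible", True)}
```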

View File

@@ -34,87 +34,108 @@ def get_crazy_functions():
     from crazy_functions.Latex全文翻译 import Latex中译英
     from crazy_functions.Latex全文翻译 import Latex英译中
     from crazy_functions.批量Markdown翻译 import Markdown中译英
+    from crazy_functions.虚空终端 import 虚空终端
     function_plugins = {
+        "虚空终端": {
+            "Group": "对话|编程|学术",
+            "Color": "stop",
+            "AsButton": True,
+            "Function": HotReload(虚空终端)
+        },
         "解析整个Python项目": {
+            "Group": "编程",
             "Color": "stop",
             "AsButton": True,
             "Info": "解析一个Python项目的所有源文件(.py) | 输入参数为路径",
             "Function": HotReload(解析一个Python项目)
         },
         "载入对话历史存档(先上传存档或输入路径)": {
+            "Group": "对话",
             "Color": "stop",
             "AsButton": False,
             "Info": "载入对话历史存档 | 输入参数为路径",
             "Function": HotReload(载入对话历史存档)
         },
         "删除所有本地对话历史记录(谨慎操作)": {
+            "Group": "对话",
             "AsButton": False,
             "Info": "删除所有本地对话历史记录,谨慎操作 | 不需要输入参数",
             "Function": HotReload(删除所有本地对话历史记录)
         },
         "清除所有缓存文件(谨慎操作)": {
+            "Group": "对话",
             "Color": "stop",
             "AsButton": False, # 加入下拉菜单中
             "Info": "清除所有缓存文件,谨慎操作 | 不需要输入参数",
             "Function": HotReload(清除缓存)
         },
         "批量总结Word文档": {
+            "Group": "学术",
             "Color": "stop",
             "AsButton": True,
             "Info": "批量总结word文档 | 输入参数为路径",
             "Function": HotReload(总结word文档)
         },
         "解析整个C++项目头文件": {
+            "Group": "编程",
             "Color": "stop",
             "AsButton": False, # 加入下拉菜单中
             "Info": "解析一个C++项目的所有头文件(.h/.hpp) | 输入参数为路径",
             "Function": HotReload(解析一个C项目的头文件)
         },
         "解析整个C++项目(.cpp/.hpp/.c/.h": {
+            "Group": "编程",
             "Color": "stop",
             "AsButton": False, # 加入下拉菜单中
             "Info": "解析一个C++项目的所有源文件(.cpp/.hpp/.c/.h| 输入参数为路径",
             "Function": HotReload(解析一个C项目)
         },
         "解析整个Go项目": {
+            "Group": "编程",
             "Color": "stop",
             "AsButton": False, # 加入下拉菜单中
             "Info": "解析一个Go项目的所有源文件 | 输入参数为路径",
             "Function": HotReload(解析一个Golang项目)
         },
         "解析整个Rust项目": {
+            "Group": "编程",
             "Color": "stop",
             "AsButton": False, # 加入下拉菜单中
             "Info": "解析一个Rust项目的所有源文件 | 输入参数为路径",
             "Function": HotReload(解析一个Rust项目)
         },
         "解析整个Java项目": {
+            "Group": "编程",
             "Color": "stop",
             "AsButton": False, # 加入下拉菜单中
             "Info": "解析一个Java项目的所有源文件 | 输入参数为路径",
             "Function": HotReload(解析一个Java项目)
         },
         "解析整个前端项目js,ts,css等": {
+            "Group": "编程",
             "Color": "stop",
             "AsButton": False, # 加入下拉菜单中
             "Info": "解析一个前端项目的所有源文件js,ts,css等 | 输入参数为路径",
             "Function": HotReload(解析一个前端项目)
         },
         "解析整个Lua项目": {
+            "Group": "编程",
             "Color": "stop",
             "AsButton": False, # 加入下拉菜单中
             "Info": "解析一个Lua项目的所有源文件 | 输入参数为路径",
             "Function": HotReload(解析一个Lua项目)
         },
         "解析整个CSharp项目": {
+            "Group": "编程",
             "Color": "stop",
             "AsButton": False, # 加入下拉菜单中
             "Info": "解析一个CSharp项目的所有源文件 | 输入参数为路径",
             "Function": HotReload(解析一个CSharp项目)
         },
         "解析Jupyter Notebook文件": {
+            "Group": "编程",
             "Color": "stop",
             "AsButton": False,
             "Info": "解析Jupyter Notebook文件 | 输入参数为路径",
@@ -123,92 +144,125 @@ def get_crazy_functions():
             "ArgsReminder": "若输入0则不解析notebook中的Markdown块", # 高级参数输入区的显示提示
         },
         "读Tex论文写摘要": {
+            "Group": "学术",
             "Color": "stop",
-            "AsButton": True,
+            "AsButton": False,
+            "Info": "读取Tex论文并写摘要 | 输入参数为路径",
             "Function": HotReload(读文章写摘要)
         },
-        "翻译README或.MD": {
+        "翻译README或MD": {
+            "Group": "编程",
             "Color": "stop",
             "AsButton": True,
             "Info": "将Markdown翻译为中文 | 输入参数为路径或URL",
             "Function": HotReload(Markdown英译中)
         },
         "翻译Markdown或README支持Github链接": {
+            "Group": "编程",
             "Color": "stop",
             "AsButton": False,
+            "Info": "将Markdown或README翻译为中文 | 输入参数为路径或URL",
             "Function": HotReload(Markdown英译中)
         },
         "批量生成函数注释": {
+            "Group": "编程",
             "Color": "stop",
             "AsButton": False, # 加入下拉菜单中
+            "Info": "批量生成函数的注释 | 输入参数为路径",
             "Function": HotReload(批量生成函数注释)
         },
         "保存当前的对话": {
+            "Group": "对话",
             "AsButton": True,
             "Info": "保存当前的对话 | 不需要输入参数",
             "Function": HotReload(对话历史存档)
         },
-        "[多线程Demo] 解析此项目本身(源码自译解)": {
+        "[多线程Demo]解析此项目本身(源码自译解)": {
+            "Group": "对话|编程",
             "AsButton": False, # 加入下拉菜单中
+            "Info": "多线程解析并翻译此项目的源码 | 不需要输入参数",
             "Function": HotReload(解析项目本身)
         },
-        "[插件demo] 历史上的今天": {
+        "[插件demo]历史上的今天": {
+            "Group": "对话",
             "AsButton": True,
+            "Info": "查看历史上的今天事件 | 不需要输入参数",
             "Function": HotReload(高阶功能模板函数)
         },
         "精准翻译PDF论文": {
+            "Group": "学术",
             "Color": "stop",
-            "AsButton": True, # 加入下拉菜单中
+            "AsButton": True,
+            "Info": "精准翻译PDF论文为中文 | 输入参数为路径",
             "Function": HotReload(批量翻译PDF文档)
         },
         "询问多个GPT模型": {
+            "Group": "对话",
             "Color": "stop",
             "AsButton": True,
             "Function": HotReload(同时问询)
         },
         "批量总结PDF文档": {
+            "Group": "学术",
             "Color": "stop",
             "AsButton": False, # 加入下拉菜单中
+            "Info": "批量总结PDF文档的内容 | 输入参数为路径",
             "Function": HotReload(批量总结PDF文档)
         },
         "谷歌学术检索助手输入谷歌学术搜索页url": {
+            "Group": "学术",
             "Color": "stop",
             "AsButton": False, # 加入下拉菜单中
+            "Info": "使用谷歌学术检索助手搜索指定URL的结果 | 输入参数为谷歌学术搜索页的URL",
             "Function": HotReload(谷歌检索小助手)
         },
         "理解PDF文档内容 模仿ChatPDF": {
+            "Group": "学术",
             "Color": "stop",
             "AsButton": False, # 加入下拉菜单中
+            "Info": "理解PDF文档的内容并进行回答 | 输入参数为路径",
             "Function": HotReload(理解PDF文档内容标准文件输入)
         },
         "英文Latex项目全文润色输入路径或上传压缩包": {
+            "Group": "学术",
             "Color": "stop",
             "AsButton": False, # 加入下拉菜单中
+            "Info": "对英文Latex项目全文进行润色处理 | 输入参数为路径或上传压缩包",
             "Function": HotReload(Latex英文润色)
         },
         "英文Latex项目全文纠错输入路径或上传压缩包": {
+            "Group": "学术",
             "Color": "stop",
             "AsButton": False, # 加入下拉菜单中
+            "Info": "对英文Latex项目全文进行纠错处理 | 输入参数为路径或上传压缩包",
             "Function": HotReload(Latex英文纠错)
         },
         "中文Latex项目全文润色输入路径或上传压缩包": {
+            "Group": "学术",
             "Color": "stop",
             "AsButton": False, # 加入下拉菜单中
+            "Info": "对中文Latex项目全文进行润色处理 | 输入参数为路径或上传压缩包",
             "Function": HotReload(Latex中文润色)
         },
         "Latex项目全文中译英输入路径或上传压缩包": {
+            "Group": "学术",
             "Color": "stop",
             "AsButton": False, # 加入下拉菜单中
+            "Info": "对Latex项目全文进行中译英处理 | 输入参数为路径或上传压缩包",
             "Function": HotReload(Latex中译英)
         },
         "Latex项目全文英译中输入路径或上传压缩包": {
+            "Group": "学术",
             "Color": "stop",
             "AsButton": False, # 加入下拉菜单中
+            "Info": "对Latex项目全文进行英译中处理 | 输入参数为路径或上传压缩包",
             "Function": HotReload(Latex英译中)
         },
         "批量Markdown中译英输入路径或上传压缩包": {
+            "Group": "编程",
             "Color": "stop",
             "AsButton": False, # 加入下拉菜单中
+            "Info": "批量将Markdown文件中文翻译为英文 | 输入参数为路径或上传压缩包",
             "Function": HotReload(Markdown中译英)
         },
     }
@@ -218,8 +272,10 @@ def get_crazy_functions():
         from crazy_functions.下载arxiv论文翻译摘要 import 下载arxiv论文并翻译摘要
         function_plugins.update({
             "一键下载arxiv论文并翻译摘要先在input输入编号如1812.10695": {
+                "Group": "学术",
                 "Color": "stop",
                 "AsButton": False, # 加入下拉菜单中
+                # "Info": "下载arxiv论文并翻译摘要 | 输入参数为arxiv编号如1812.10695",
                 "Function": HotReload(下载arxiv论文并翻译摘要)
             }
         })
@@ -230,16 +286,20 @@ def get_crazy_functions():
         from crazy_functions.联网的ChatGPT import 连接网络回答问题
         function_plugins.update({
             "连接网络回答问题(输入问题后点击该插件,需要访问谷歌)": {
+                "Group": "对话",
                 "Color": "stop",
                 "AsButton": False, # 加入下拉菜单中
+                # "Info": "连接网络回答问题(需要访问谷歌)| 输入参数是一个问题",
                 "Function": HotReload(连接网络回答问题)
             }
         })
         from crazy_functions.联网的ChatGPT_bing版 import 连接bing搜索回答问题
         function_plugins.update({
             "连接网络回答问题中文Bing版输入问题后点击该插件": {
+                "Group": "对话",
                 "Color": "stop",
                 "AsButton": False, # 加入下拉菜单中
+                "Info": "连接网络回答问题需要访问中文Bing| 输入参数是一个问题",
                 "Function": HotReload(连接bing搜索回答问题)
             }
         })
@@ -250,6 +310,7 @@ def get_crazy_functions():
         from crazy_functions.解析项目源代码 import 解析任意code项目
         function_plugins.update({
             "解析项目源代码(手动指定和筛选源代码文件类型)": {
+                "Group": "编程",
                 "Color": "stop",
                 "AsButton": False,
                 "AdvancedArgs": True, # 调用时唤起高级参数输入区默认False
@@ -264,6 +325,7 @@ def get_crazy_functions():
         from crazy_functions.询问多个大语言模型 import 同时问询_指定模型
         function_plugins.update({
             "询问多个GPT模型手动指定询问哪些模型": {
+                "Group": "对话",
                 "Color": "stop",
                 "AsButton": False,
                 "AdvancedArgs": True, # 调用时唤起高级参数输入区默认False
@@ -278,6 +340,7 @@ def get_crazy_functions():
         from crazy_functions.图片生成 import 图片生成
         function_plugins.update({
             "图片生成先切换模型到openai或api2d": {
+                "Group": "对话",
                 "Color": "stop",
                 "AsButton": False,
                 "AdvancedArgs": True, # 调用时唤起高级参数输入区默认False
@@ -293,6 +356,7 @@ def get_crazy_functions():
         from crazy_functions.总结音视频 import 总结音视频
         function_plugins.update({
             "批量总结音视频(输入路径或上传压缩包)": {
+                "Group": "对话",
                 "Color": "stop",
                 "AsButton": False,
                 "AdvancedArgs": True,
@@ -308,8 +372,10 @@ def get_crazy_functions():
         from crazy_functions.数学动画生成manim import 动画生成
         function_plugins.update({
             "数学动画生成Manim": {
+                "Group": "对话",
                 "Color": "stop",
                 "AsButton": False,
+                "Info": "按照自然语言描述生成一个动画 | 输入参数是一段话",
                 "Function": HotReload(动画生成)
             }
         })
@@ -320,6 +386,7 @@ def get_crazy_functions():
         from crazy_functions.批量Markdown翻译 import Markdown翻译指定语言
         function_plugins.update({
             "Markdown翻译手动指定语言": {
+                "Group": "编程",
                 "Color": "stop",
                 "AsButton": False,
                 "AdvancedArgs": True,
@@ -334,6 +401,7 @@ def get_crazy_functions():
         from crazy_functions.Langchain知识库 import 知识库问答
         function_plugins.update({
             "构建知识库(请先上传文件素材)": {
+                "Group": "对话",
                 "Color": "stop",
                 "AsButton": False,
                 "AdvancedArgs": True,
@@ -348,6 +416,7 @@ def get_crazy_functions():
         from crazy_functions.Langchain知识库 import 读取知识库作答
         function_plugins.update({
             "知识库问答": {
+                "Group": "对话",
                 "Color": "stop",
                 "AsButton": False,
                 "AdvancedArgs": True,
@@ -362,6 +431,7 @@ def get_crazy_functions():
         from crazy_functions.交互功能函数模板 import 交互功能模板函数
         function_plugins.update({
             "交互功能模板函数": {
+                "Group": "对话",
                 "Color": "stop",
                 "AsButton": False,
                 "Function": HotReload(交互功能模板函数)
@@ -374,6 +444,7 @@ def get_crazy_functions():
         from crazy_functions.Latex输出PDF结果 import Latex英文纠错加PDF对比
         function_plugins.update({
             "Latex英文纠错+高亮修正位置 [需Latex]": {
+                "Group": "学术",
                 "Color": "stop",
                 "AsButton": False,
                 "AdvancedArgs": True,
@@ -384,6 +455,7 @@ def get_crazy_functions():
         from crazy_functions.Latex输出PDF结果 import Latex翻译中文并重新编译PDF
         function_plugins.update({
             "Arixv论文精细翻译输入arxivID[需Latex]": {
+                "Group": "学术",
                 "Color": "stop",
                 "AsButton": False,
                 "AdvancedArgs": True,
@@ -391,11 +463,13 @@ def get_crazy_functions():
                     "如果有必要, 请在此处给出自定义翻译命令, 解决部分词汇翻译不准确的问题。 " +
                     "例如当单词'agent'翻译不准确时, 请尝试把以下指令复制到高级参数区: " +
                     'If the term "agent" is used in this section, it should be translated to "智能体". ',
+                "Info": "Arixv论文精细翻译 | 输入参数arxiv论文的ID比如1812.10695",
                 "Function": HotReload(Latex翻译中文并重新编译PDF)
             }
         })
         function_plugins.update({
             "本地Latex论文精细翻译上传Latex项目[需Latex]": {
+                "Group": "学术",
                 "Color": "stop",
                 "AsButton": False,
                 "AdvancedArgs": True,
@@ -403,6 +477,7 @@ def get_crazy_functions():
                     "如果有必要, 请在此处给出自定义翻译命令, 解决部分词汇翻译不准确的问题。 " +
                     "例如当单词'agent'翻译不准确时, 请尝试把以下指令复制到高级参数区: " +
                     'If the term "agent" is used in this section, it should be translated to "智能体". ',
+                "Info": "本地Latex论文精细翻译 | 输入参数是路径",
                 "Function": HotReload(Latex翻译中文并重新编译PDF)
             }
         })
@@ -416,8 +491,10 @@ def get_crazy_functions():
         from crazy_functions.语音助手 import 语音助手
         function_plugins.update({
             "实时音频采集": {
+                "Group": "对话",
                 "Color": "stop",
                 "AsButton": True,
+                "Info": "开始语言对话 | 没有输入参数",
                 "Function": HotReload(语音助手)
             }
         })
@@ -425,17 +502,32 @@ def get_crazy_functions():
         print('Load function plugin failed')

     try:
-        from crazy_functions.虚空终端 import 自动终端
+        from crazy_functions.批量翻译PDF文档_NOUGAT import 批量翻译PDF文档
         function_plugins.update({
-            "自动终端": {
+            "精准翻译PDF文档NOUGAT": {
+                "Group": "学术",
                 "Color": "stop",
                 "AsButton": False,
-                "Function": HotReload(自动终端)
+                "Function": HotReload(批量翻译PDF文档)
             }
         })
     except:
         print('Load function plugin failed')

+    # try:
+    #     from crazy_functions.CodeInterpreter import 虚空终端CodeInterpreter
+    #     function_plugins.update({
+    #         "CodeInterpreter开发中仅供测试": {
+    #             "Group": "编程|对话",
+    #             "Color": "stop",
+    #             "AsButton": False,
+    #             "Function": HotReload(虚空终端CodeInterpreter)
+    #         }
+    #     })
+    # except:
+    #     print('Load function plugin failed')
+
     # try:
     #     from crazy_functions.chatglm微调工具 import 微调数据集生成
     #     function_plugins.update({
@@ -449,4 +541,24 @@ def get_crazy_functions():
     #     })
     # except:
     #     print('Load function plugin failed')

+    """
+    设置默认值:
+    - 默认 Group = 对话
+    - 默认 AsButton = True
+    - 默认 AdvancedArgs = False
+    - 默认 Color = secondary
+    """
+    for name, function_meta in function_plugins.items():
+        if "Group" not in function_meta:
+            function_plugins[name]["Group"] = '对话'
+        if "AsButton" not in function_meta:
+            function_plugins[name]["AsButton"] = True
+        if "AdvancedArgs" not in function_meta:
+            function_plugins[name]["AdvancedArgs"] = False
+        if "Color" not in function_meta:
+            function_plugins[name]["Color"] = 'secondary'
+
     return function_plugins
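Because the loop above guarantees every plugin now carries a `Group` tag, the UI can bucket plugins by category (matching `DEFAULT_FN_GROUPS` in config.py); a rough sketch of that grouping:

```python
# Sketch: bucket plugins by Group; a plugin may belong to several groups,
# separated by '|', e.g. "对话|编程|学术".
from collections import defaultdict

def group_plugins(function_plugins: dict) -> dict:
    buckets = defaultdict(list)
    for name, meta in function_plugins.items():
        for group in meta["Group"].split("|"):
            buckets[group].append(name)
    return dict(buckets)
```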

View File

@@ -0,0 +1,231 @@
from collections.abc import Callable, Iterable, Mapping
from typing import Any
from toolbox import CatchException, update_ui, gen_time_str, trimmed_format_exc, promote_file_to_downloadzone, clear_file_downloadzone
from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
from .crazy_utils import input_clipping, try_install_deps
from multiprocessing import Process, Pipe
import os
import time

templete = """
```python
import ...  # Put dependencies here, e.g. import numpy as np

class TerminalFunction(object):  # Do not change the name of the class, The name of the class must be `TerminalFunction`

    def run(self, path):  # The name of the function must be `run`, it takes only a positional argument.
        # rewrite the function you have just written here
        ...
        return generated_file_path
```
"""

def inspect_dependency(chatbot, history):
    yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
    return True

def get_code_block(reply):
    import re
    pattern = r"```([\s\S]*?)```"  # regex pattern to match code blocks
    matches = re.findall(pattern, reply)  # find all code blocks in text
    if len(matches) == 1:
        return matches[0].strip('python')  # code block
    for match in matches:
        if 'class TerminalFunction' in match:
            return match.strip('python')  # code block
    raise RuntimeError("GPT is not generating proper code.")

def gpt_interact_multi_step(txt, file_type, llm_kwargs, chatbot, history):
    # 输入
    prompt_compose = [
        f'Your job:\n'
        f'1. write a single Python function, which takes a path of a `{file_type}` file as the only argument and returns a `string` containing the result of analysis or the path of generated files. \n',
        f"2. You should write this function to perform following task: " + txt + "\n",
        f"3. Wrap the output python function with markdown codeblock."
    ]
    i_say = "".join(prompt_compose)
    demo = []

    # 第一步
    gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
        inputs=i_say, inputs_show_user=i_say,
        llm_kwargs=llm_kwargs, chatbot=chatbot, history=demo,
        sys_prompt=r"You are a programmer."
    )
    history.extend([i_say, gpt_say])
    yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面 # 界面更新

    # 第二步
    prompt_compose = [
        "If previous stage is successful, rewrite the function you have just written to satisfy following templete: \n",
        templete
    ]
    i_say = "".join(prompt_compose); inputs_show_user = "If previous stage is successful, rewrite the function you have just written to satisfy executable templete. "
    gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
        inputs=i_say, inputs_show_user=inputs_show_user,
        llm_kwargs=llm_kwargs, chatbot=chatbot, history=history,
        sys_prompt=r"You are a programmer."
    )
    code_to_return = gpt_say
    history.extend([i_say, gpt_say])
    yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面 # 界面更新

    # # 第三步
    # i_say = "Please list to packages to install to run the code above. Then show me how to use `try_install_deps` function to install them."
    # i_say += 'For instance. `try_install_deps(["opencv-python", "scipy", "numpy"])`'
    # installation_advance = yield from request_gpt_model_in_new_thread_with_ui_alive(
    #     inputs=i_say, inputs_show_user=inputs_show_user,
    #     llm_kwargs=llm_kwargs, chatbot=chatbot, history=history,
    #     sys_prompt=r"You are a programmer."
    # )
    # # # 第三步
    # i_say = "Show me how to use `pip` to install packages to run the code above. "
    # i_say += 'For instance. `pip install -r opencv-python scipy numpy`'
    # installation_advance = yield from request_gpt_model_in_new_thread_with_ui_alive(
    #     inputs=i_say, inputs_show_user=i_say,
    #     llm_kwargs=llm_kwargs, chatbot=chatbot, history=history,
    #     sys_prompt=r"You are a programmer."
    # )
    installation_advance = ""

    return code_to_return, installation_advance, txt, file_type, llm_kwargs, chatbot, history

def make_module(code):
    module_file = 'gpt_fn_' + gen_time_str().replace('-', '_')
    with open(f'gpt_log/{module_file}.py', 'w', encoding='utf8') as f:
        f.write(code)

    def get_class_name(class_string):
        import re
        # Use regex to extract the class name
        class_name = re.search(r'class (\w+)\(', class_string).group(1)
        return class_name

    class_name = get_class_name(code)
    return f"gpt_log.{module_file}->{class_name}"

def init_module_instance(module):
    import importlib
    module_, class_ = module.split('->')
    init_f = getattr(importlib.import_module(module_), class_)
    return init_f()

def for_immediate_show_off_when_possible(file_type, fp, chatbot):
    if file_type in ['png', 'jpg']:
        image_path = os.path.abspath(fp)
        chatbot.append(['这是一张图片, 展示如下:',
                        f'本地文件地址: <br/>`{image_path}`<br/>' +
                        f'本地文件预览: <br/><div align="center"><img src="file={image_path}"></div>'
                        ])
    return chatbot

def subprocess_worker(instance, file_path, return_dict):
    return_dict['result'] = instance.run(file_path)

def have_any_recent_upload_files(chatbot):
    _5min = 5 * 60
    if not chatbot: return False  # chatbot is None
    most_recent_uploaded = chatbot._cookies.get("most_recent_uploaded", None)
    if not most_recent_uploaded: return False  # most_recent_uploaded is None
    if time.time() - most_recent_uploaded["time"] < _5min: return True  # most_recent_uploaded is new
    else: return False  # most_recent_uploaded is too old

def get_recent_file_prompt_support(chatbot):
    most_recent_uploaded = chatbot._cookies.get("most_recent_uploaded", None)
    path = most_recent_uploaded['path']
    return path

@CatchException
def 虚空终端CodeInterpreter(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
    """
    txt             输入栏用户输入的文本例如需要翻译的一段话再例如一个包含了待处理文件的路径
    llm_kwargs      gpt模型参数如温度和top_p等一般原样传递下去就行
    plugin_kwargs   插件模型的参数暂时没有用武之地
    chatbot         聊天显示框的句柄用于显示给用户
    history         聊天历史前情提要
    system_prompt   给gpt的静默提醒
    web_port        当前软件运行的端口号
    """
    raise NotImplementedError

    # 清空历史,以免输入溢出
    history = []; clear_file_downloadzone(chatbot)

    # 基本信息:功能、贡献者
    chatbot.append([
        "函数插件功能?",
        "CodeInterpreter开源版, 此插件处于开发阶段, 建议暂时不要使用, 插件初始化中 ..."
    ])
    yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面

    if have_any_recent_upload_files(chatbot):
        file_path = get_recent_file_prompt_support(chatbot)
    else:
        chatbot.append(["文件检索", "没有发现任何近期上传的文件。"])
        yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面

    # 读取文件
    if ("recently_uploaded_files" in plugin_kwargs) and (plugin_kwargs["recently_uploaded_files"] == ""): plugin_kwargs.pop("recently_uploaded_files")
    recently_uploaded_files = plugin_kwargs.get("recently_uploaded_files", None)
    file_path = recently_uploaded_files[-1]
    file_type = file_path.split('.')[-1]

    # 粗心检查
    if 'private_upload' in txt:
        chatbot.append([
            "...",
            f"请在输入框内填写需求,然后再次点击该插件(文件路径 {file_path} 已经被记忆)"
        ])
        yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
        return

    # 开始干正事
    for j in range(5):  # 最多重试5次
        try:
            code, installation_advance, txt, file_type, llm_kwargs, chatbot, history = \
                yield from gpt_interact_multi_step(txt, file_type, llm_kwargs, chatbot, history)
            code = get_code_block(code)
            res = make_module(code)
            instance = init_module_instance(res)
            break
        except Exception as e:
            chatbot.append([f"{j}次代码生成尝试,失败了", f"错误追踪\n```\n{trimmed_format_exc()}\n```\n"])
            yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面

    # 代码生成结束, 开始执行
    try:
        import multiprocessing
        manager = multiprocessing.Manager()
        return_dict = manager.dict()
        p = multiprocessing.Process(target=subprocess_worker, args=(instance, file_path, return_dict))
        # only has 10 seconds to run
        p.start(); p.join(timeout=10)
        if p.is_alive(): p.terminate(); p.join()
        p.close()
        res = return_dict['result']
        # res = instance.run(file_path)
    except Exception as e:
        chatbot.append(["执行失败了", f"错误追踪\n```\n{trimmed_format_exc()}\n```\n"])
        # chatbot.append(["如果是缺乏依赖,请参考以下建议", installation_advance])
        yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
        return

    # 顺利完成,收尾
    res = str(res)
    if os.path.exists(res):
        chatbot.append(["执行成功了,结果是一个有效文件", "结果:" + res])
        new_file_path = promote_file_to_downloadzone(res, chatbot=chatbot)
        chatbot = for_immediate_show_off_when_possible(file_type, new_file_path, chatbot)
        yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面 # 界面更新
    else:
        chatbot.append(["执行成功了,结果是一个字符串", "结果:" + res])
        yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面 # 界面更新

"""
测试:
    裁剪图像保留下半部分
    交换图像的蓝色通道和红色通道
    将图像转为灰度图像
    将csv文件转excel表格
"""

View File

@@ -6,7 +6,7 @@ pj = os.path.join
 ARXIV_CACHE_DIR = os.path.expanduser(f"~/arxiv_cache/")
 # =================================== 工具函数 ===============================================
-专业词汇声明 = 'If the term "agent" is used in this section, it should be translated to "智能体". '
+# 专业词汇声明 = 'If the term "agent" is used in this section, it should be translated to "智能体". '
 def switch_prompt(pfg, mode, more_requirement):
     """
     Generate prompts and system prompts based on the mode for proofreading or translating.
@@ -109,7 +109,7 @@ def arxiv_download(chatbot, history, txt):
     url_ = txt  # https://arxiv.org/abs/1707.06690
     if not txt.startswith('https://arxiv.org/abs/'):
         msg = f"解析arxiv网址失败, 期望格式例如: https://arxiv.org/abs/1707.06690。实际得到格式: {url_}"
         yield from update_ui_lastest_msg(msg, chatbot=chatbot, history=history)  # 刷新界面
         return msg, None
 # <-------------- set format ------------->
@@ -255,7 +255,7 @@ def Latex翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot,
         project_folder = txt
     else:
         if txt == "": txt = '空空如也的输入栏'
-        report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
+        report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无法处理: {txt}")
         yield from update_ui(chatbot=chatbot, history=history)  # 刷新界面
         return
@@ -291,7 +291,7 @@ def Latex翻译中文并重新编译PDF(txt, llm_kwargs, plugin_kwargs, chatbot,
         yield from update_ui(chatbot=chatbot, history=history); time.sleep(1)  # 刷新界面
         promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)
     else:
-        chatbot.append((f"失败了", '虽然PDF生成失败了, 但请查收结果(压缩包), 内含已经翻译的Tex文档, 也是可读的, 您可以到Github Issue区, 用该压缩包+对话历史存档进行反馈 ...'))
+        chatbot.append((f"失败了", '虽然PDF生成失败了, 但请查收结果(压缩包), 内含已经翻译的Tex文档, 您可以到Github Issue区, 用该压缩包进行反馈。如系统是Linux请检查系统字体见Github wiki ...'))
         yield from update_ui(chatbot=chatbot, history=history); time.sleep(1)  # 刷新界面
         promote_file_to_downloadzone(file=zip_res, chatbot=chatbot)

View File

@@ -591,11 +591,16 @@ def get_files_from_everything(txt, type): # type='.md'
         # 网络的远程文件
         import requests
         from toolbox import get_conf
+        from toolbox import get_log_folder, gen_time_str
         proxies, = get_conf('proxies')
-        r = requests.get(txt, proxies=proxies)
-        with open('./gpt_log/temp'+type, 'wb+') as f: f.write(r.content)
-        project_folder = './gpt_log/'
-        file_manifest = ['./gpt_log/temp'+type]
+        try:
+            r = requests.get(txt, proxies=proxies)
+        except:
+            raise ConnectionRefusedError(f"无法下载资源{txt},请检查。")
+        path = os.path.join(get_log_folder(plugin_name='web_download'), gen_time_str()+type)
+        with open(path, 'wb+') as f: f.write(r.content)
+        project_folder = get_log_folder(plugin_name='web_download')
+        file_manifest = [path]
     elif txt.endswith(type):
         # 直接给定文件
         file_manifest = [txt]
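Callers treat this helper uniformly for local paths, uploads, and URLs; the NOUGAT plugin later in this diff shows the contract (`success, file_manifest, project_folder = get_files_from_everything(txt, type='.pdf')`), sketched here with a placeholder URL:

```python
# Sketch of the caller contract for get_files_from_everything.
success, file_manifest, project_folder = get_files_from_everything(
    "https://example.com/notes.md",  # placeholder; a local path or folder also works
    type='.md')
if success:
    print(project_folder, file_manifest)  # downloaded copy lands in the log folder
```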

View File

@@ -37,10 +37,19 @@ Here is the output schema:
 {schema}
 ```"""

+PYDANTIC_FORMAT_INSTRUCTIONS_SIMPLE = """The output should be formatted as a JSON instance that conforms to the JSON schema below.
+```
+{schema}
+```"""
+
+class JsonStringError(Exception): ...
+
 class GptJsonIO():

-    def __init__(self, schema):
+    def __init__(self, schema, example_instruction=True):
         self.pydantic_object = schema
+        self.example_instruction = example_instruction
         self.format_instructions = self.generate_format_instructions()

     def generate_format_instructions(self):
@@ -53,9 +62,11 @@ class GptJsonIO():
         if "type" in reduced_schema:
             del reduced_schema["type"]
         # Ensure json in context is well-formed with double quotes.
-        schema_str = json.dumps(reduced_schema)
-        return PYDANTIC_FORMAT_INSTRUCTIONS.format(schema=schema_str)
+        schema_str = json.dumps(reduced_schema)
+        if self.example_instruction:
+            return PYDANTIC_FORMAT_INSTRUCTIONS.format(schema=schema_str)
+        else:
+            return PYDANTIC_FORMAT_INSTRUCTIONS_SIMPLE.format(schema=schema_str)

     def generate_output(self, text):
         # Greedy search for 1st json candidate.
@@ -95,6 +106,6 @@ class GptJsonIO():
         except Exception as e:
             # 没辙了,放弃治疗
             logging.info('Repaire json fail.')
-            raise RuntimeError('Cannot repair json.', str(e))
+            raise JsonStringError('Cannot repair json.', str(e))
         return result
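A compact sketch of the intended round trip, mirroring the call pattern used by 虚空终端 below; `run_gpt_fn` stands in for whatever function queries the LLM:

```python
from pydantic import BaseModel, Field

class Plugin(BaseModel):
    plugin_selection: str = Field(description="The chosen plugin id.", default="F_0000")

gpt_json_io = GptJsonIO(Plugin)
prompt = "Pick a plugin.\n" + gpt_json_io.format_instructions
reply = run_gpt_fn(prompt, "")   # stand-in LLM call: (inputs, sys_prompt) -> str
try:
    plugin_sel = gpt_json_io.generate_output_auto_repair(reply, run_gpt_fn)
except JsonStringError:
    plugin_sel = None            # auto-repair also failed; caller reports and bails out
```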

View File

@@ -1,4 +1,4 @@
-import time, threading, json
+import time, logging, json

 class AliyunASR():
@@ -12,14 +12,14 @@ class AliyunASR():
         message = json.loads(message)
         self.parsed_sentence = message['payload']['result']
         self.event_on_entence_end.set()
-        print(self.parsed_sentence)
+        # print(self.parsed_sentence)

     def test_on_start(self, message, *args):
         # print("test_on_start:{}".format(message))
         pass

     def test_on_error(self, message, *args):
-        print("on_error args=>{}".format(args))
+        logging.error("on_error args=>{}".format(args))
         pass

     def test_on_close(self, *args):
@@ -36,7 +36,6 @@ class AliyunASR():
         # print("on_completed:args=>{} message=>{}".format(args, message))
         pass
-
     def audio_convertion_thread(self, uuid):
         # 在一个异步线程中采集音频
         import nls  # pip install git+https://github.com/aliyun/alibabacloud-nls-python-sdk.git

View File

@@ -20,6 +20,11 @@ def get_avail_grobid_url():

 def parse_pdf(pdf_path, grobid_url):
     import scipdf  # pip install scipdf_parser
     if grobid_url.endswith('/'): grobid_url = grobid_url.rstrip('/')
-    article_dict = scipdf.parse_pdf_to_dict(pdf_path, grobid_url=grobid_url)
+    try:
+        article_dict = scipdf.parse_pdf_to_dict(pdf_path, grobid_url=grobid_url)
+    except GROBID_OFFLINE_EXCEPTION:
+        raise GROBID_OFFLINE_EXCEPTION("GROBID服务不可用请修改config中的GROBID_URL可修改成本地GROBID服务。")
+    except:
+        raise RuntimeError("解析PDF失败请检查PDF是否损坏。")
     return article_dict
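The two helpers are meant to be used together, as in the import further down (`from .pdf_fns.parse_pdf import parse_pdf, get_avail_grobid_url`); a sketch, assuming the prober returns `None` when no GROBID endpoint answers:

```python
# Sketch: probe for a reachable GROBID service, then parse the PDF.
grobid_url = get_avail_grobid_url()   # assumption: None when nothing is reachable
if grobid_url is None:
    raise RuntimeError("No GROBID service available; check GROBID_URL in config.")
article_dict = parse_pdf("paper.pdf", grobid_url)  # scipdf dict: title/authors/sections
print(article_dict.get("title"))
```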

View File

@@ -1,9 +1,9 @@
 from pydantic import BaseModel, Field
 from typing import List
-from toolbox import update_ui_lastest_msg, get_conf
+from toolbox import update_ui_lastest_msg, disable_auto_promotion
 from request_llm.bridge_all import predict_no_ui_long_connection
-from crazy_functions.json_fns.pydantic_io import GptJsonIO
-import copy, json, pickle, os, sys
+from crazy_functions.json_fns.pydantic_io import GptJsonIO, JsonStringError
+import copy, json, pickle, os, sys, time

 def read_avail_plugin_enum():
@@ -11,37 +11,85 @@ def read_avail_plugin_enum():
     plugin_arr = get_crazy_functions()
     # remove plugins with out explaination
     plugin_arr = {k:v for k, v in plugin_arr.items() if 'Info' in v}
-    plugin_arr_info = {"F{:04d}".format(i):v["Info"] for i, v in enumerate(plugin_arr.values(), start=1)}
-    plugin_arr_dict = {"F{:04d}".format(i):v for i, v in enumerate(plugin_arr.values(), start=1)}
+    plugin_arr_info = {"F_{:04d}".format(i):v["Info"] for i, v in enumerate(plugin_arr.values(), start=1)}
+    plugin_arr_dict = {"F_{:04d}".format(i):v for i, v in enumerate(plugin_arr.values(), start=1)}
+    plugin_arr_dict_parse = {"F_{:04d}".format(i):v for i, v in enumerate(plugin_arr.values(), start=1)}
+    plugin_arr_dict_parse.update({f"F_{i}":v for i, v in enumerate(plugin_arr.values(), start=1)})
     prompt = json.dumps(plugin_arr_info, ensure_ascii=False, indent=2)
     prompt = "\n\nThe defination of PluginEnum:\nPluginEnum=" + prompt
-    return prompt, plugin_arr_dict
+    return prompt, plugin_arr_dict, plugin_arr_dict_parse

+def wrap_code(txt):
+    txt = txt.replace('```','')
+    return f"\n```\n{txt}\n```\n"
+
+def have_any_recent_upload_files(chatbot):
+    _5min = 5 * 60
+    if not chatbot: return False  # chatbot is None
+    most_recent_uploaded = chatbot._cookies.get("most_recent_uploaded", None)
+    if not most_recent_uploaded: return False  # most_recent_uploaded is None
+    if time.time() - most_recent_uploaded["time"] < _5min: return True  # most_recent_uploaded is new
+    else: return False  # most_recent_uploaded is too old
+
+def get_recent_file_prompt_support(chatbot):
+    most_recent_uploaded = chatbot._cookies.get("most_recent_uploaded", None)
+    path = most_recent_uploaded['path']
+    prompt = "\nAdditional Information:\n"
+    prompt = "In case that this plugin requires a path or a file as argument,"
+    prompt += f"it is important for you to know that the user has recently uploaded a file, located at: `{path}`"
+    prompt += f"Only use it when necessary, otherwise, you can ignore this file."
+    return prompt
+
+def get_inputs_show_user(inputs, plugin_arr_enum_prompt):
+    # remove plugin_arr_enum_prompt from inputs string
+    inputs_show_user = inputs.replace(plugin_arr_enum_prompt, "")
+    inputs_show_user += plugin_arr_enum_prompt[:200] + '...'
+    inputs_show_user += '\n...\n'
+    inputs_show_user += '...\n'
+    inputs_show_user += '...}'
+    return inputs_show_user
+
 def execute_plugin(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_intention):
-    plugin_arr_enum_prompt, plugin_arr_dict = read_avail_plugin_enum()
+    plugin_arr_enum_prompt, plugin_arr_dict, plugin_arr_dict_parse = read_avail_plugin_enum()
     class Plugin(BaseModel):
-        plugin_selection: str = Field(description="The most related plugin from one of the PluginEnum.", default="F0000000000000")
-        plugin_arg: str = Field(description="The argument of the plugin. A path or url or empty.", default="")
+        plugin_selection: str = Field(description="The most related plugin from one of the PluginEnum.", default="F_0000")
+        reason_of_selection: str = Field(description="The reason why you should select this plugin.", default="This plugin satisfy user requirement most")

     # ⭐ ⭐ ⭐ 选择插件
     yield from update_ui_lastest_msg(lastmsg=f"正在执行任务: {txt}\n\n查找可用插件中...", chatbot=chatbot, history=history, delay=0)
     gpt_json_io = GptJsonIO(Plugin)
+    gpt_json_io.format_instructions = "The format of your output should be a json that can be parsed by json.loads.\n"
+    gpt_json_io.format_instructions += """Output example: {"plugin_selection":"F_1234", "reason_of_selection":"F_1234 plugin satisfy user requirement most"}\n"""
+    gpt_json_io.format_instructions += "The plugins you are authorized to use are listed below:\n"
     gpt_json_io.format_instructions += plugin_arr_enum_prompt
-    inputs = "Choose the correct plugin and extract plugin_arg, the user requirement is: \n\n" + \
-             ">> " + txt.rstrip('\n').replace('\n','\n>> ') + '\n\n' + \
-             gpt_json_io.format_instructions
+    inputs = "Choose the correct plugin according to user requirements, the user requirement is: \n\n" + \
+             ">> " + txt.rstrip('\n').replace('\n','\n>> ') + '\n\n' + gpt_json_io.format_instructions

     run_gpt_fn = lambda inputs, sys_prompt: predict_no_ui_long_connection(
         inputs=inputs, llm_kwargs=llm_kwargs, history=[], sys_prompt=sys_prompt, observe_window=[])
-    plugin_sel = gpt_json_io.generate_output_auto_repair(run_gpt_fn(inputs, ""), run_gpt_fn)
-
-    if plugin_sel.plugin_selection not in plugin_arr_dict:
-        msg = f'找不到合适插件执行该任务'
+    try:
+        gpt_reply = run_gpt_fn(inputs, "")
+        plugin_sel = gpt_json_io.generate_output_auto_repair(gpt_reply, run_gpt_fn)
+    except JsonStringError:
+        msg = f"抱歉, {llm_kwargs['llm_model']}无法理解您的需求。"
+        msg += "请求的Prompt为\n" + wrap_code(get_inputs_show_user(inputs, plugin_arr_enum_prompt))
+        msg += "语言模型回复为:\n" + wrap_code(gpt_reply)
+        msg += "\n但您可以尝试再试一次\n"
+        yield from update_ui_lastest_msg(lastmsg=msg, chatbot=chatbot, history=history, delay=2)
+        return
+
+    if plugin_sel.plugin_selection not in plugin_arr_dict_parse:
+        msg = f"抱歉, 找不到合适插件执行该任务, 或者{llm_kwargs['llm_model']}无法理解您的需求。"
+        msg += f"语言模型{llm_kwargs['llm_model']}选择了不存在的插件:\n" + wrap_code(gpt_reply)
+        msg += "\n但您可以尝试再试一次\n"
         yield from update_ui_lastest_msg(lastmsg=msg, chatbot=chatbot, history=history, delay=2)
         return

     # ⭐ ⭐ ⭐ 确认插件参数
-    plugin = plugin_arr_dict[plugin_sel.plugin_selection]
+    if not have_any_recent_upload_files(chatbot):
+        appendix_info = ""
+    else:
+        appendix_info = get_recent_file_prompt_support(chatbot)
+
+    plugin = plugin_arr_dict_parse[plugin_sel.plugin_selection]
     yield from update_ui_lastest_msg(lastmsg=f"正在执行任务: {txt}\n\n提取插件参数...", chatbot=chatbot, history=history, delay=0)
     class PluginExplicit(BaseModel):
         plugin_selection: str = plugin_sel.plugin_selection
@@ -50,7 +98,7 @@ def execute_plugin(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prom
     gpt_json_io.format_instructions += "The information about this plugin is:" + plugin["Info"]
     inputs = f"A plugin named {plugin_sel.plugin_selection} is selected, " + \
              "you should extract plugin_arg from the user requirement, the user requirement is: \n\n" + \
-             ">> " + txt.rstrip('\n').replace('\n','\n>> ') + '\n\n' + \
+             ">> " + (txt + appendix_info).rstrip('\n').replace('\n','\n>> ') + '\n\n' + \
              gpt_json_io.format_instructions

     run_gpt_fn = lambda inputs, sys_prompt: predict_no_ui_long_connection(
         inputs=inputs, llm_kwargs=llm_kwargs, history=[], sys_prompt=sys_prompt, observe_window=[])
@@ -60,7 +108,7 @@ def execute_plugin(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prom
     # ⭐ ⭐ ⭐ 执行插件
     fn = plugin['Function']
     fn_name = fn.__name__
-    msg = f'正在调用插件: {fn_name}\n\n插件说明:{plugin["Info"]}\n\n插件参数:{plugin_sel.plugin_arg}'
+    msg = f'{llm_kwargs["llm_model"]}为您选择了插件: `{fn_name}`\n\n插件说明:{plugin["Info"]}\n\n插件参数:{plugin_sel.plugin_arg}\n\n假如偏离了您的要求,按停止键终止。'
     yield from update_ui_lastest_msg(lastmsg=msg, chatbot=chatbot, history=history, delay=2)
     yield from fn(plugin_sel.plugin_arg, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, -1)
     return

View File

@@ -0,0 +1,28 @@
import pickle

class VoidTerminalState():
    def __init__(self):
        self.reset_state()

    def reset_state(self):
        self.has_provided_explaination = False

    def lock_plugin(self, chatbot):
        chatbot._cookies['lock_plugin'] = 'crazy_functions.虚空终端->虚空终端'
        chatbot._cookies['plugin_state'] = pickle.dumps(self)

    def unlock_plugin(self, chatbot):
        self.reset_state()
        chatbot._cookies['lock_plugin'] = None
        chatbot._cookies['plugin_state'] = pickle.dumps(self)

    def set_state(self, chatbot, key, value):
        setattr(self, key, value)
        chatbot._cookies['plugin_state'] = pickle.dumps(self)

def get_state(chatbot):
    state = chatbot._cookies.get('plugin_state', None)
    if state is not None: state = pickle.loads(state)
    else: state = VoidTerminalState()
    state.chatbot = chatbot
    return state
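A sketch of the intended life cycle: the plugin unpickles its state from the chatbot cookies, mutates it through `set_state`, and pins itself as the active plugin via `lock_plugin` until it decides to release control:

```python
# Sketch of how the 虚空终端 plugin is expected to drive this state object.
state = get_state(chatbot)             # restore previous state, or start fresh
if not state.has_provided_explaination:
    state.set_state(chatbot, 'has_provided_explaination', True)
    state.lock_plugin(chatbot)         # route the next user message back to this plugin
else:
    state.unlock_plugin(chatbot)       # hand control back to the normal dispatcher
```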

View File

@ -0,0 +1,271 @@
from toolbox import CatchException, report_execption, gen_time_str
from toolbox import update_ui, promote_file_to_downloadzone, update_ui_lastest_msg, disable_auto_promotion
from toolbox import write_history_to_file, get_log_folder
from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
from .crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
from .crazy_utils import read_and_clean_pdf_text
from .pdf_fns.parse_pdf import parse_pdf, get_avail_grobid_url
from colorful import *
import os
import math
import logging
def markdown_to_dict(article_content):
import markdown
from bs4 import BeautifulSoup
cur_t = ""
cur_c = ""
results = {}
for line in article_content:
if line.startswith('#'):
if cur_t!="":
if cur_t not in results:
results.update({cur_t:cur_c.lstrip('\n')})
else:
# 处理重名的章节
results.update({cur_t + " " + gen_time_str():cur_c.lstrip('\n')})
cur_t = line.rstrip('\n')
cur_c = ""
else:
cur_c += line
# 补写最后一个章节:循环中只有遇到下一个标题才写入,结尾章节需在此处落盘,否则会丢失
if cur_t != "":
    if cur_t not in results:
        results.update({cur_t: cur_c.lstrip('\n')})
    else:
        results.update({cur_t + " " + gen_time_str(): cur_c.lstrip('\n')})
results_final = {}
for k in list(results.keys()):
if k.startswith('# '):
results_final['title'] = k.split('# ')[-1]
results_final['authors'] = results.pop(k).lstrip('\n')
if k.startswith('###### Abstract'):
results_final['abstract'] = results.pop(k).lstrip('\n')
results_final_sections = []
for k,v in results.items():
results_final_sections.append({
'heading':k.lstrip("# "),
'text':v if len(v) > 0 else f"The beginning of {k.lstrip('# ')} section."
})
results_final['sections'] = results_final_sections
return results_final
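
To make the contract concrete, a worked example with hypothetical input (relying on the final-section flush added above): the function splits a Nougat-flavoured `.mmd` file on headings, lifts the `# ` heading into title/authors, and `###### Abstract` into abstract:

```python
article_content = [
    "# A Sample Paper\n",
    "Alice and Bob\n",
    "###### Abstract\n",
    "We study something interesting.\n",
    "## 1 Introduction\n",
    "Deep learning is popular.\n",
]
d = markdown_to_dict(article_content)
# d['title']    == 'A Sample Paper'
# d['authors']  == 'Alice and Bob\n'
# d['abstract'] == 'We study something interesting.\n'
# d['sections'] == [{'heading': '1 Introduction',
#                    'text': 'Deep learning is popular.\n'}]
```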
@CatchException
def 批量翻译PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
disable_auto_promotion(chatbot)
# 基本信息:功能、贡献者
chatbot.append([
"函数插件功能?",
"批量翻译PDF文档。函数插件贡献者: Binary-Husky"])
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
# 尝试导入依赖,如果缺少依赖,则给出安装建议
try:
import nougat
import tiktoken
except:
report_execption(chatbot, history,
a=f"解析项目: {txt}",
b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade nougat-ocr tiktoken```。")
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
# 清空历史,以免输入溢出
history = []
from .crazy_utils import get_files_from_everything
success, file_manifest, project_folder = get_files_from_everything(txt, type='.pdf')
# 检测输入参数,如没有给定输入参数,直接退出
if not success:
if txt == "": txt = '空空如也的输入栏'
# 如果没找到任何文件
if len(file_manifest) == 0:
report_execption(chatbot, history,
a=f"解析项目: {txt}", b=f"找不到任何.tex或.pdf文件: {txt}")
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return
# 开始正式执行任务
yield from 解析PDF_基于NOUGAT(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)
def nougat_with_timeout(command, cwd, timeout=3600):
import subprocess
process = subprocess.Popen(command, shell=True, cwd=cwd)
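# 注意:未设置 stdout/stderr=PIPE 时 communicate() 返回 (None, None),
# 这里只用它来等待进程结束或触发超时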
try:
stdout, stderr = process.communicate(timeout=timeout)
except subprocess.TimeoutExpired:
process.kill()
stdout, stderr = process.communicate()
print("Process timed out!")
return False
return True
def NOUGAT_parse_pdf(fp):
import glob
from toolbox import get_log_folder, gen_time_str
dst = os.path.join(get_log_folder(plugin_name='nougat'), gen_time_str())
os.makedirs(dst)
nougat_with_timeout(f'nougat --out "{os.path.abspath(dst)}" "{os.path.abspath(fp)}"', os.getcwd())
res = glob.glob(os.path.join(dst,'*.mmd'))
if len(res) == 0:
raise RuntimeError("Nougat解析论文失败。")
return res[0]
def 解析PDF_基于NOUGAT(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt):
import copy
import tiktoken
TOKEN_LIMIT_PER_FRAGMENT = 1280
generated_conclusion_files = []
generated_html_files = []
DST_LANG = "中文"
for index, fp in enumerate(file_manifest):
chatbot.append(["当前进度:", f"正在解析论文请稍候。第一次运行时需要花费较长时间下载NOUGAT参数"]); yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
fpp = NOUGAT_parse_pdf(fp)
with open(fpp, 'r', encoding='utf8') as f:
article_content = f.readlines()
article_dict = markdown_to_dict(article_content)
logging.info(article_dict)
prompt = "以下是一篇学术论文的基本信息:\n"
# title
title = article_dict.get('title', '无法获取 title'); prompt += f'title:{title}\n\n'
# authors
authors = article_dict.get('authors', '无法获取 authors'); prompt += f'authors:{authors}\n\n'
# abstract
abstract = article_dict.get('abstract', '无法获取 abstract'); prompt += f'abstract:{abstract}\n\n'
# command
prompt += f"请将题目和摘要翻译为{DST_LANG}"
meta = [f'# Title:\n\n', title, f'# Abstract:\n\n', abstract ]
# 单线获取文章meta信息
paper_meta_info = yield from request_gpt_model_in_new_thread_with_ui_alive(
inputs=prompt,
inputs_show_user=prompt,
llm_kwargs=llm_kwargs,
chatbot=chatbot, history=[],
sys_prompt="You are an academic paper reader。",
)
# 多线,翻译
inputs_array = []
inputs_show_user_array = []
# get_token_num
from request_llm.bridge_all import model_info
enc = model_info[llm_kwargs['llm_model']]['tokenizer']
def get_token_num(txt): return len(enc.encode(txt, disallowed_special=()))
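# 例:llm_model='gpt-3.5-turbo' 时,此处取到与该模型匹配的 tiktoken 编码器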
from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf
def break_down(txt):
raw_token_num = get_token_num(txt)
if raw_token_num <= TOKEN_LIMIT_PER_FRAGMENT:
return [txt]
else:
# raw_token_num > TOKEN_LIMIT_PER_FRAGMENT
# find a smooth token limit to achieve even separation
count = int(math.ceil(raw_token_num / TOKEN_LIMIT_PER_FRAGMENT))
token_limit_smooth = raw_token_num // count + count
return breakdown_txt_to_satisfy_token_limit_for_pdf(txt, get_token_fn=get_token_num, limit=token_limit_smooth)
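# 平滑切分举例:raw_token_num=3000、上限 1280 时 count=ceil(3000/1280)=3,
# token_limit_smooth=3000//3+3=1003,即切成三段约 1000 token,
# 而不是 1280+1280+440 这种不均匀的切分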
for section in article_dict.get('sections'):
if len(section['text']) == 0: continue
section_frags = break_down(section['text'])
for i, fragment in enumerate(section_frags):
heading = section['heading']
if len(section_frags) > 1: heading += f' Part-{i+1}'
inputs_array.append(
f"你需要翻译{heading}章节,内容如下: \n\n{fragment}"
)
inputs_show_user_array.append(
f"# {heading}\n\n{fragment}"
)
gpt_response_collection = yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
inputs_array=inputs_array,
inputs_show_user_array=inputs_show_user_array,
llm_kwargs=llm_kwargs,
chatbot=chatbot,
history_array=[meta for _ in inputs_array],
sys_prompt_array=[
"请你作为一个学术翻译,负责把学术论文准确翻译成中文。注意文章中的每一句话都要翻译。" for _ in inputs_array],
)
res_path = write_history_to_file(meta + ["# Meta Translation" , paper_meta_info] + gpt_response_collection, file_basename=None, file_fullname=None)
promote_file_to_downloadzone(res_path, rename_file=os.path.basename(fp)+'.md', chatbot=chatbot)
generated_conclusion_files.append(res_path)
ch = construct_html()
orig = ""
trans = ""
gpt_response_collection_html = copy.deepcopy(gpt_response_collection)
for i,k in enumerate(gpt_response_collection_html):
if i%2==0:
gpt_response_collection_html[i] = inputs_show_user_array[i//2]
else:
gpt_response_collection_html[i] = gpt_response_collection_html[i] # 奇数位是GPT返回的译文,保持不变;偶数位上面已换成展示给用户的原文
final = ["", "", "一、论文概况", "", "Abstract", paper_meta_info, "二、论文翻译", ""]
final.extend(gpt_response_collection_html)
for i, k in enumerate(final):
if i%2==0:
orig = k
if i%2==1:
trans = k
ch.add_row(a=orig, b=trans)
create_report_file_name = f"{os.path.basename(fp)}.trans.html"
html_file = ch.save_file(create_report_file_name)
generated_html_files.append(html_file)
promote_file_to_downloadzone(html_file, rename_file=os.path.basename(html_file), chatbot=chatbot)
chatbot.append(("给出输出文件清单", str(generated_conclusion_files + generated_html_files)))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
class construct_html():
def __init__(self) -> None:
self.css = """
.row {
display: flex;
flex-wrap: wrap;
}
.column {
flex: 1;
padding: 10px;
}
.table-header {
font-weight: bold;
border-bottom: 1px solid black;
}
.table-row {
border-bottom: 1px solid lightgray;
}
.table-cell {
padding: 5px;
}
"""
self.html_string = f'<!DOCTYPE html><head><meta charset="utf-8"><title>翻译结果</title><style>{self.css}</style></head>'
def add_row(self, a, b):
tmp = """
<div class="row table-row">
<div class="column table-cell">REPLACE_A</div>
<div class="column table-cell">REPLACE_B</div>
</div>
"""
from toolbox import markdown_convertion
tmp = tmp.replace('REPLACE_A', markdown_convertion(a))
tmp = tmp.replace('REPLACE_B', markdown_convertion(b))
self.html_string += tmp
def save_file(self, file_name):
with open(os.path.join(get_log_folder(), file_name), 'w', encoding='utf8') as f:
f.write(self.html_string.encode('utf-8', 'ignore').decode())
return os.path.join(get_log_folder(), file_name)
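
A short, hypothetical usage sketch of the helper above; `markdown_convertion` comes from `toolbox`, and the report is written under the log folder:

```python
ch = construct_html()
ch.add_row(a="# 1 Introduction\n\nDeep learning is popular.",
           b="# 1 引言\n\n深度学习非常流行。")   # one row per (original, translation) pair
html_file = ch.save_file("demo.trans.html")      # path under get_log_folder()
```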

View File

@ -24,10 +24,11 @@ def 批量翻译PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
try: try:
import fitz import fitz
import tiktoken import tiktoken
import scipdf
except: except:
report_execption(chatbot, history, report_execption(chatbot, history,
a=f"解析项目: {txt}", a=f"解析项目: {txt}",
b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pymupdf tiktoken```。") b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pymupdf tiktoken scipdf_parser```。")
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
return return
@ -58,7 +59,6 @@ def 批量翻译PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, syst
def 解析PDF_基于GROBID(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, grobid_url): def 解析PDF_基于GROBID(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, grobid_url):
import copy import copy
import tiktoken
TOKEN_LIMIT_PER_FRAGMENT = 1280 TOKEN_LIMIT_PER_FRAGMENT = 1280
generated_conclusion_files = [] generated_conclusion_files = []
generated_html_files = [] generated_html_files = []
@ -66,7 +66,7 @@ def 解析PDF_基于GROBID(file_manifest, project_folder, llm_kwargs, plugin_kwa
for index, fp in enumerate(file_manifest): for index, fp in enumerate(file_manifest):
chatbot.append(["当前进度:", f"正在连接GROBID服务请稍候: {grobid_url}\n如果等待时间过长请修改config中的GROBID_URL可修改成本地GROBID服务。"]); yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 chatbot.append(["当前进度:", f"正在连接GROBID服务请稍候: {grobid_url}\n如果等待时间过长请修改config中的GROBID_URL可修改成本地GROBID服务。"]); yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
article_dict = parse_pdf(fp, grobid_url) article_dict = parse_pdf(fp, grobid_url)
print(article_dict) if article_dict is None: raise RuntimeError("解析PDF失败请检查PDF是否损坏。")
prompt = "以下是一篇学术论文的基本信息:\n" prompt = "以下是一篇学术论文的基本信息:\n"
# title # title
title = article_dict.get('title', '无法获取 title'); prompt += f'title:{title}\n\n' title = article_dict.get('title', '无法获取 title'); prompt += f'title:{title}\n\n'

View File

@ -75,7 +75,11 @@ def 连接网络回答问题(txt, llm_kwargs, plugin_kwargs, chatbot, history, s
proxies, = get_conf('proxies') proxies, = get_conf('proxies')
urls = google(txt, proxies) urls = google(txt, proxies)
history = [] history = []
if len(urls) == 0:
chatbot.append((f"结论:{txt}",
"[Local Message] 受到google限制无法从google获取信息"))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间我们先及时地做一次界面更新
return
# ------------- < 第2步依次访问网页 > ------------- # ------------- < 第2步依次访问网页 > -------------
max_search_result = 5 # 最多收纳多少个网页的结果 max_search_result = 5 # 最多收纳多少个网页的结果
for index, url in enumerate(urls[:max_search_result]): for index, url in enumerate(urls[:max_search_result]):

View File

@ -75,7 +75,11 @@ def 连接bing搜索回答问题(txt, llm_kwargs, plugin_kwargs, chatbot, histor
proxies, = get_conf('proxies') proxies, = get_conf('proxies')
urls = bing_search(txt, proxies) urls = bing_search(txt, proxies)
history = [] history = []
if len(urls) == 0:
chatbot.append((f"结论:{txt}",
"[Local Message] 受到bing限制无法从bing获取信息"))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间我们先及时地做一次界面更新
return
# ------------- < 第2步依次访问网页 > ------------- # ------------- < 第2步依次访问网页 > -------------
max_search_result = 8 # 最多收纳多少个网页的结果 max_search_result = 8 # 最多收纳多少个网页的结果
for index, url in enumerate(urls[:max_search_result]): for index, url in enumerate(urls[:max_search_result]):

View File

@ -1,24 +1,68 @@
"""
Explanation of the Void Terminal Plugin:
Please describe in natural language what you want to do.
1. You can open the plugin's dropdown menu to explore various capabilities of this project, and then describe your needs in natural language, for example:
- "Please call the plugin to translate a PDF paper for me. I just uploaded the paper to the upload area."
- "Please use the plugin to translate a PDF paper, with the address being https://www.nature.com/articles/s41586-019-1724-z.pdf."
- "Generate an image with blooming flowers and lush green grass using the plugin."
- "Translate the README using the plugin. The GitHub URL is https://github.com/facebookresearch/co-tracker."
- "Translate an Arxiv paper for me. The Arxiv ID is 1812.10695. Remember to use the plugin and don't do it manually!"
- "I don't like the current interface color. Modify the configuration and change the theme to THEME="High-Contrast"."
- "Could you please explain the structure of the Transformer network?"
2. If you use keywords like "call the plugin xxx", "modify the configuration xxx", "please", etc., your intention can be recognized more accurately.
3. Your intention can be recognized more accurately when using powerful models like GPT4. This plugin is relatively new, so please feel free to provide feedback on GitHub.
4. Now, if you need to process a file, please upload the file (drag the file to the file upload area) or describe the path to the file.
5. If you don't need to upload a file, you can simply repeat your command again.
"""
explain_msg = """
## 虚空终端插件说明:
1. 请用**自然语言**描述您需要做什么例如
- 请调用插件为我翻译PDF论文论文我刚刚放到上传区了
- 请调用插件翻译PDF论文地址为https://aaa/bbb/ccc.pdf
- 把Arxiv论文翻译成中文PDFarxiv论文的ID是1812.10695记得用插件
- 生成一张图片图中鲜花怒放绿草如茵用插件实现
- 用插件翻译READMEGithub网址是https://github.com/facebookresearch/co-tracker
- 我不喜欢当前的界面颜色修改配置把主题THEME更换为THEME="High-Contrast"
- 请问Transformer网络的结构是怎样的
2. 您可以打开插件下拉菜单以了解本项目的各种能力
3. 如果您使用调用插件xxx修改配置xxx请问等关键词您的意图可以被识别的更准确
4. 建议使用 GPT3.5 或更强的模型弱模型可能无法理解您的想法该插件诞生时间不长欢迎您前往Github反馈问题
5. 现在如果需要处理文件请您上传文件将文件拖动到文件上传区或者描述文件所在的路径
6. 如果不需要上传文件现在您只需要再次重复一次您的指令即可
"""
from pydantic import BaseModel, Field from pydantic import BaseModel, Field
from typing import List from typing import List
from toolbox import CatchException, update_ui, gen_time_str from toolbox import CatchException, update_ui, gen_time_str
from toolbox import update_ui_lastest_msg from toolbox import update_ui_lastest_msg, disable_auto_promotion
from request_llm.bridge_all import predict_no_ui_long_connection from request_llm.bridge_all import predict_no_ui_long_connection
from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive from crazy_functions.crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
from crazy_functions.crazy_utils import input_clipping from crazy_functions.crazy_utils import input_clipping
from crazy_functions.json_fns.pydantic_io import GptJsonIO from crazy_functions.json_fns.pydantic_io import GptJsonIO, JsonStringError
from crazy_functions.vt_fns.vt_state import VoidTerminalState
from crazy_functions.vt_fns.vt_modify_config import modify_configuration_hot from crazy_functions.vt_fns.vt_modify_config import modify_configuration_hot
from crazy_functions.vt_fns.vt_modify_config import modify_configuration_reboot from crazy_functions.vt_fns.vt_modify_config import modify_configuration_reboot
from crazy_functions.vt_fns.vt_call_plugin import execute_plugin from crazy_functions.vt_fns.vt_call_plugin import execute_plugin
from enum import Enum
import copy, json, pickle, os, sys
class UserIntention(BaseModel): class UserIntention(BaseModel):
user_prompt: str = Field(description="the content of user input", default="") user_prompt: str = Field(description="the content of user input", default="")
intention_type: str = Field(description="the type of user intention, choose from ['ModifyConfiguration', 'ExecutePlugin', 'Chat']", default="Chat") intention_type: str = Field(description="the type of user intention, choose from ['ModifyConfiguration', 'ExecutePlugin', 'Chat']", default="ExecutePlugin")
user_provide_file: bool = Field(description="whether the user provides a path to a file", default=False) user_provide_file: bool = Field(description="whether the user provides a path to a file", default=False)
user_provide_url: bool = Field(description="whether the user provides a url", default=False) user_provide_url: bool = Field(description="whether the user provides a url", default=False)
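
For orientation, a sketch of the `GptJsonIO` round trip used twice below. It turns the pydantic model into format instructions appended to the prompt, then parses (or auto-repairs) the LLM's JSON reply back into an instance. Only calls that appear in this file are used; `txt` and `llm_kwargs` are assumed in scope, as inside the functions that follow:

```python
gpt_json_io = GptJsonIO(UserIntention)
inputs = "Analyze the intention of the user according to following user input: \n\n" \
         + txt + '\n\n' + gpt_json_io.format_instructions

run_gpt_fn = lambda inputs, sys_prompt: predict_no_ui_long_connection(
    inputs=inputs, llm_kwargs=llm_kwargs, history=[], sys_prompt=sys_prompt, observe_window=[])
try:
    intent = gpt_json_io.generate_output_auto_repair(run_gpt_fn(inputs, ""), run_gpt_fn)
except JsonStringError:
    intent = UserIntention()   # fall back to defaults if the reply cannot be repaired
```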
def chat(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_intention): def chat(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_intention):
gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive( gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
inputs=txt, inputs_show_user=txt, inputs=txt, inputs_show_user=txt,
@ -30,12 +74,24 @@ def chat(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_i
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
pass pass
def analyze_with_rule(txt):
explain_intention_to_user = {
'Chat': "聊天对话",
'ExecutePlugin': "调用插件",
'ModifyConfiguration': "修改配置",
}
def analyze_intention_with_simple_rules(txt):
user_intention = UserIntention() user_intention = UserIntention()
user_intention.user_prompt = txt user_intention.user_prompt = txt
is_certain = False is_certain = False
if '调用插件' in txt: if '请问' in txt:
is_certain = True
user_intention.intention_type = 'Chat'
if '用插件' in txt:
is_certain = True is_certain = True
user_intention.intention_type = 'ExecutePlugin' user_intention.intention_type = 'ExecutePlugin'
@ -45,43 +101,68 @@ def analyze_with_rule(txt):
return is_certain, user_intention return is_certain, user_intention
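
Illustrative behaviour of the keyword shortcut (hypothetical inputs; the hunk elides part of the function body, so additional keyword rules may also fire):

```python
analyze_intention_with_simple_rules("请问Transformer的结构是怎样的?")  # (True,  intention_type='Chat')
analyze_intention_with_simple_rules("用插件翻译这篇PDF")                # (True,  intention_type='ExecutePlugin')
analyze_intention_with_simple_rules("随便聊聊")                         # (False, default UserIntention)
```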
@CatchException @CatchException
def 自动终端(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): def 虚空终端(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
""" disable_auto_promotion(chatbot=chatbot)
txt 输入栏用户输入的文本例如需要翻译的一段话再例如一个包含了待处理文件的路径 # 获取当前虚空终端状态
llm_kwargs gpt模型参数, 如温度和top_p等, 一般原样传递下去就行 state = VoidTerminalState.get_state(chatbot)
plugin_kwargs 插件模型的参数, 如温度和top_p等, 一般原样传递下去就行 appendix_msg = ""
chatbot 聊天显示框的句柄用于显示给用户
history 聊天历史前情提要 # 用简单的关键词检测用户意图
system_prompt 给gpt的静默提醒 is_certain, _ = analyze_intention_with_simple_rules(txt)
web_port 当前软件运行的端口号 if txt.startswith('private_upload/') and len(txt) == 34:
""" state.set_state(chatbot=chatbot, key='has_provided_explaination', value=False)
history = [] # 清空历史,以免输入溢出 appendix_msg = "\n\n**很好,您已经上传了文件**,现在请您描述您的需求。"
chatbot.append(("自动终端状态: ", f"正在执行任务: {txt}"))
if is_certain or (state.has_provided_explaination):
# 如果意图明确,跳过提示环节
state.set_state(chatbot=chatbot, key='has_provided_explaination', value=True)
state.unlock_plugin(chatbot=chatbot)
yield from update_ui(chatbot=chatbot, history=history)
yield from 虚空终端主路由(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port)
return
else:
# 如果意图模糊,提示
state.set_state(chatbot=chatbot, key='has_provided_explaination', value=True)
state.lock_plugin(chatbot=chatbot)
chatbot.append(("虚空终端状态:", explain_msg+appendix_msg))
yield from update_ui(chatbot=chatbot, history=history)
return
def 虚空终端主路由(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
history = []
chatbot.append(("虚空终端状态: ", f"正在执行任务: {txt}"))
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
# 初始化插件状态
state = chatbot._cookies.get('plugin_state', None)
if state is not None: state = pickle.loads(state)
else: state = {}
def update_vt_state():
# 赋予插件锁定 锁定插件回调路径,当下一次用户提交时,会直接转到该函数
chatbot._cookies['lock_plugin'] = 'crazy_functions.虚空终端->自动终端'
chatbot._cookies['vt_state'] = pickle.dumps(state)
# ⭐ ⭐ ⭐ 分析用户意图 # ⭐ ⭐ ⭐ 分析用户意图
is_certain, user_intention = analyze_with_rule(txt) is_certain, user_intention = analyze_intention_with_simple_rules(txt)
if not is_certain: if not is_certain:
yield from update_ui_lastest_msg(lastmsg=f"正在执行任务: {txt}\n\n分析用户意图中", chatbot=chatbot, history=history, delay=0) yield from update_ui_lastest_msg(
lastmsg=f"正在执行任务: {txt}\n\n分析用户意图中", chatbot=chatbot, history=history, delay=0)
gpt_json_io = GptJsonIO(UserIntention) gpt_json_io = GptJsonIO(UserIntention)
inputs = "Analyze the intention of the user according to following user input: \n\n" + txt + '\n\n' + gpt_json_io.format_instructions rf_req = "\nchoose from ['ModifyConfiguration', 'ExecutePlugin', 'Chat']"
inputs = "Analyze the intention of the user according to following user input: \n\n" + \
">> " + (txt+rf_req).rstrip('\n').replace('\n','\n>> ') + '\n\n' + gpt_json_io.format_instructions
run_gpt_fn = lambda inputs, sys_prompt: predict_no_ui_long_connection( run_gpt_fn = lambda inputs, sys_prompt: predict_no_ui_long_connection(
inputs=inputs, llm_kwargs=llm_kwargs, history=[], sys_prompt=sys_prompt, observe_window=[]) inputs=inputs, llm_kwargs=llm_kwargs, history=[], sys_prompt=sys_prompt, observe_window=[])
user_intention = gpt_json_io.generate_output_auto_repair(run_gpt_fn(inputs, ""), run_gpt_fn) analyze_res = run_gpt_fn(inputs, "")
try:
user_intention = gpt_json_io.generate_output_auto_repair(analyze_res, run_gpt_fn)
lastmsg=f"正在执行任务: {txt}\n\n用户意图理解: 意图={explain_intention_to_user[user_intention.intention_type]}",
except JsonStringError as e:
yield from update_ui_lastest_msg(
lastmsg=f"正在执行任务: {txt}\n\n用户意图理解: 失败 当前语言模型({llm_kwargs['llm_model']})不能理解您的意图", chatbot=chatbot, history=history, delay=0)
return
else: else:
pass pass
yield from update_ui_lastest_msg(
lastmsg=f"正在执行任务: {txt}\n\n用户意图理解: 意图={explain_intention_to_user[user_intention.intention_type]}",
chatbot=chatbot, history=history, delay=0)
# 用户意图: 修改本项目的配置 # 用户意图: 修改本项目的配置
if user_intention.intention_type == 'ModifyConfiguration': if user_intention.intention_type == 'ModifyConfiguration':
yield from modify_configuration_reboot(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_intention) yield from modify_configuration_reboot(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, user_intention)
@ -96,23 +177,3 @@ def 自动终端(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt
return return
# # if state == 'wait_user_keyword':
# # chatbot._cookies['lock_plugin'] = None # 解除插件锁定,避免遗忘导致死锁
# # chatbot._cookies['plugin_state_0001'] = None # 解除插件状态,避免遗忘导致死锁
# # # 解除插件锁定
# # chatbot.append((f"获取关键词:{txt}", ""))
# # yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
# # inputs=inputs_show_user=f"Extract all image urls in this html page, pick the first 5 images and show them with markdown format: \n\n {page_return}"
# # gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
# # inputs=inputs, inputs_show_user=inputs_show_user,
# # llm_kwargs=llm_kwargs, chatbot=chatbot, history=[],
# # sys_prompt="When you want to show an image, use markdown format. e.g. ![image_description](image_url). If there are no image url provided, answer 'no image url provided'"
# # )
# # chatbot[-1] = [chatbot[-1][0], gpt_say]
# yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
# return

View File

@ -80,9 +80,9 @@ class InterviewAssistant(AliyunASR):
def __init__(self): def __init__(self):
self.capture_interval = 0.5 # second self.capture_interval = 0.5 # second
self.stop = False self.stop = False
self.parsed_text = "" self.parsed_text = "" # 下个句子中已经说完的部分, 由 test_on_result_chg() 写入
self.parsed_sentence = "" self.parsed_sentence = "" # 某段话的整个句子,由 test_on_sentence_end() 写入
self.buffered_sentence = "" self.buffered_sentence = "" #
self.event_on_result_chg = threading.Event() self.event_on_result_chg = threading.Event()
self.event_on_entence_end = threading.Event() self.event_on_entence_end = threading.Event()
self.event_on_commit_question = threading.Event() self.event_on_commit_question = threading.Event()
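# 三个事件分别表示:识别结果发生变化、一句话结束、应当提交问题(前两者由 ASR 回调置位)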
@ -132,7 +132,7 @@ class InterviewAssistant(AliyunASR):
self.plugin_wd.feed() self.plugin_wd.feed()
if self.event_on_result_chg.is_set(): if self.event_on_result_chg.is_set():
# update audio decode result # called when some words have finished
self.event_on_result_chg.clear() self.event_on_result_chg.clear()
chatbot[-1] = list(chatbot[-1]) chatbot[-1] = list(chatbot[-1])
chatbot[-1][0] = self.buffered_sentence + self.parsed_text chatbot[-1][0] = self.buffered_sentence + self.parsed_text
@ -144,7 +144,11 @@ class InterviewAssistant(AliyunASR):
# called when a sentence has ended # called when a sentence has ended
self.event_on_entence_end.clear() self.event_on_entence_end.clear()
self.parsed_text = self.parsed_sentence self.parsed_text = self.parsed_sentence
self.buffered_sentence += self.parsed_sentence self.buffered_sentence += self.parsed_text
chatbot[-1] = list(chatbot[-1])
chatbot[-1][0] = self.buffered_sentence
history = chatbot2history(chatbot)
yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
if self.event_on_commit_question.is_set(): if self.event_on_commit_question.is_set():
# called when a question should be commited # called when a question should be commited

View File

@ -1,7 +1,7 @@
#【请修改完参数后删除此行】请在以下方案中选择一种然后删除其他的方案最后docker-compose up运行 | Please choose from one of these options below, delete other options as well as This Line #【请修改完参数后删除此行】请在以下方案中选择一种然后删除其他的方案最后docker-compose up运行 | Please choose from one of these options below, delete other options as well as This Line
## =================================================== ## ===================================================
## 【方案一】 如果不需要运行本地模型(仅chatgpt,newbing类远程服务) ## 【方案一】 如果不需要运行本地模型(仅 chatgpt, azure, 星火, 千帆, claude 等在线大模型服务)
## =================================================== ## ===================================================
version: '3' version: '3'
services: services:
@ -13,7 +13,7 @@ services:
USE_PROXY: ' True ' USE_PROXY: ' True '
proxies: ' { "http": "socks5h://localhost:10880", "https": "socks5h://localhost:10880", } ' proxies: ' { "http": "socks5h://localhost:10880", "https": "socks5h://localhost:10880", } '
LLM_MODEL: ' gpt-3.5-turbo ' LLM_MODEL: ' gpt-3.5-turbo '
AVAIL_LLM_MODELS: ' ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "newbing"] ' AVAIL_LLM_MODELS: ' ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "sparkv2", "qianfan"] '
WEB_PORT: ' 22303 ' WEB_PORT: ' 22303 '
ADD_WAIFU: ' True ' ADD_WAIFU: ' True '
# THEME: ' Chuanhu-Small-and-Beautiful ' # THEME: ' Chuanhu-Small-and-Beautiful '

View File

@ -1,62 +1,2 @@
# How to build | 如何构建: docker build -t gpt-academic --network=host -f Dockerfile+ChatGLM . # 此Dockerfile不再维护请前往docs/GithubAction+ChatGLM+Moss
# How to run | (1) 我想直接一键运行选择0号GPU: docker run --rm -it --net=host --gpus \"device=0\" gpt-academic
# How to run | (2) 我想运行之前进容器做一些调整选择1号GPU: docker run --rm -it --net=host --gpus \"device=1\" gpt-academic bash
# 从NVIDIA源,从而支持显卡运行(检查宿主的nvidia-smi中的cuda版本必须>=11.3)
FROM nvidia/cuda:11.3.1-runtime-ubuntu20.04
ARG useProxyNetwork=''
RUN apt-get update
RUN apt-get install -y curl proxychains curl
RUN apt-get install -y git python python3 python-dev python3-dev --fix-missing
# 配置代理网络构建Docker镜像时使用
# # comment out below if you do not need proxy network | 如果不需要翻墙 - 从此行向下删除
RUN $useProxyNetwork curl cip.cc
RUN sed -i '$ d' /etc/proxychains.conf
RUN sed -i '$ d' /etc/proxychains.conf
# 在这里填写主机的代理协议用于从github拉取代码
RUN echo "socks5 127.0.0.1 10880" >> /etc/proxychains.conf
ARG useProxyNetwork=proxychains
# # comment out above if you do not need proxy network | 如果不需要翻墙 - 从此行向上删除
# use python3 as the system default python
RUN curl -sS https://bootstrap.pypa.io/get-pip.py | python3.8
# 下载pytorch
RUN $useProxyNetwork python3 -m pip install torch --extra-index-url https://download.pytorch.org/whl/cu113
# 下载分支
WORKDIR /gpt
RUN $useProxyNetwork git clone https://github.com/binary-husky/gpt_academic.git
WORKDIR /gpt/gpt_academic
RUN $useProxyNetwork python3 -m pip install -r requirements.txt
RUN $useProxyNetwork python3 -m pip install -r request_llm/requirements_chatglm.txt
RUN $useProxyNetwork python3 -m pip install -r request_llm/requirements_newbing.txt
# 预热CHATGLM参数非必要 可选步骤)
RUN echo ' \n\
from transformers import AutoModel, AutoTokenizer \n\
chatglm_tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) \n\
chatglm_model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).float() ' >> warm_up_chatglm.py
RUN python3 -u warm_up_chatglm.py
# 禁用缓存,确保更新代码
ADD "https://www.random.org/cgi-bin/randbyte?nbytes=10&format=h" skipcache
RUN $useProxyNetwork git pull
# 预热Tiktoken模块
RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'
# 为chatgpt-academic配置代理和API-KEY (非必要 可选步骤)
# 可同时填写多个API-KEY支持openai的key和api2d的key共存用英文逗号分割例如API_KEY = "sk-openaikey1,fkxxxx-api2dkey2,........"
# LLM_MODEL 是选择初始的模型
# LOCAL_MODEL_DEVICE 是选择chatglm等本地模型运行的设备可选 cpu 和 cuda
# [说明: 以下内容与`config.py`一一对应,请查阅config.py来完成以下配置的填写]
RUN echo ' \n\
API_KEY = "sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx,fkxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" \n\
USE_PROXY = True \n\
LLM_MODEL = "chatglm" \n\
LOCAL_MODEL_DEVICE = "cuda" \n\
proxies = { "http": "socks5h://localhost:10880", "https": "socks5h://localhost:10880", } ' >> config_private.py
# 启动
CMD ["python3", "-u", "main.py"]

View File

@ -1,59 +1 @@
# How to build | 如何构建: docker build -t gpt-academic-jittor --network=host -f Dockerfile+ChatGLM . # 此Dockerfile不再维护请前往docs/GithubAction+JittorLLMs
# How to run | (1) 我想直接一键运行选择0号GPU: docker run --rm -it --net=host --gpus \"device=0\" gpt-academic-jittor bash
# How to run | (2) 我想运行之前进容器做一些调整选择1号GPU: docker run --rm -it --net=host --gpus \"device=1\" gpt-academic-jittor bash
# 从NVIDIA源,从而支持显卡运行(检查宿主的nvidia-smi中的cuda版本必须>=11.3)
FROM nvidia/cuda:11.3.1-runtime-ubuntu20.04
ARG useProxyNetwork=''
RUN apt-get update
RUN apt-get install -y curl proxychains curl g++
RUN apt-get install -y git python python3 python-dev python3-dev --fix-missing
# 配置代理网络构建Docker镜像时使用
# # comment out below if you do not need proxy network | 如果不需要翻墙 - 从此行向下删除
RUN $useProxyNetwork curl cip.cc
RUN sed -i '$ d' /etc/proxychains.conf
RUN sed -i '$ d' /etc/proxychains.conf
# 在这里填写主机的代理协议用于从github拉取代码
RUN echo "socks5 127.0.0.1 10880" >> /etc/proxychains.conf
ARG useProxyNetwork=proxychains
# # comment out above if you do not need proxy network | 如果不需要翻墙 - 从此行向上删除
# use python3 as the system default python
RUN curl -sS https://bootstrap.pypa.io/get-pip.py | python3.8
# 下载pytorch
RUN $useProxyNetwork python3 -m pip install torch --extra-index-url https://download.pytorch.org/whl/cu113
# 下载分支
WORKDIR /gpt
RUN $useProxyNetwork git clone https://github.com/binary-husky/gpt_academic.git
WORKDIR /gpt/gpt_academic
RUN $useProxyNetwork python3 -m pip install -r requirements.txt
RUN $useProxyNetwork python3 -m pip install -r request_llm/requirements_chatglm.txt
RUN $useProxyNetwork python3 -m pip install -r request_llm/requirements_newbing.txt
RUN $useProxyNetwork python3 -m pip install -r request_llm/requirements_jittorllms.txt -i https://pypi.jittor.org/simple -I
# 下载JittorLLMs
RUN $useProxyNetwork git clone https://github.com/binary-husky/JittorLLMs.git --depth 1 request_llm/jittorllms
# 禁用缓存,确保更新代码
ADD "https://www.random.org/cgi-bin/randbyte?nbytes=10&format=h" skipcache
RUN $useProxyNetwork git pull
# 预热Tiktoken模块
RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'
# 为chatgpt-academic配置代理和API-KEY (非必要 可选步骤)
# 可同时填写多个API-KEY支持openai的key和api2d的key共存用英文逗号分割例如API_KEY = "sk-openaikey1,fkxxxx-api2dkey2,........"
# LLM_MODEL 是选择初始的模型
# LOCAL_MODEL_DEVICE 是选择chatglm等本地模型运行的设备可选 cpu 和 cuda
# [说明: 以下内容与`config.py`一一对应,请查阅config.py来完成以下配置的填写]
RUN echo ' \n\
API_KEY = "sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx,fkxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" \n\
USE_PROXY = True \n\
LLM_MODEL = "chatglm" \n\
LOCAL_MODEL_DEVICE = "cuda" \n\
proxies = { "http": "socks5h://localhost:10880", "https": "socks5h://localhost:10880", } ' >> config_private.py
# 启动
CMD ["python3", "-u", "main.py"]

View File

@ -1,27 +1 @@
# 此Dockerfile适用于“无本地模型”的环境构建如果需要使用chatglm等本地模型请参考 docs/Dockerfile+ChatGLM # 此Dockerfile不再维护请前往docs/GithubAction+NoLocal+Latex
# - 1 修改 `config.py`
# - 2 构建 docker build -t gpt-academic-nolocal-latex -f docs/Dockerfile+NoLocal+Latex .
# - 3 运行 docker run -v /home/fuqingxu/arxiv_cache:/root/arxiv_cache --rm -it --net=host gpt-academic-nolocal-latex
FROM fuqingxu/python311_texlive_ctex:latest
# 指定路径
WORKDIR /gpt
ARG useProxyNetwork=''
RUN $useProxyNetwork pip3 install gradio openai numpy arxiv rich -i https://pypi.douban.com/simple/
RUN $useProxyNetwork pip3 install colorama Markdown pygments pymupdf -i https://pypi.douban.com/simple/
# 装载项目文件
COPY . .
# 安装依赖
RUN $useProxyNetwork pip3 install -r requirements.txt -i https://pypi.douban.com/simple/
# 可选步骤,用于预热模块
RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'
# 启动
CMD ["python3", "-u", "main.py"]

View File

@ -0,0 +1,35 @@
# docker build -t gpt-academic-all-capacity -f docs/GithubAction+AllCapacity --network=host --build-arg http_proxy=http://localhost:10881 --build-arg https_proxy=http://localhost:10881 .
# 从NVIDIA源,从而支持显卡(检查宿主的nvidia-smi中的cuda版本必须>=11.3)
FROM fuqingxu/11.3.1-runtime-ubuntu20.04-with-texlive:latest
# use python3 as the system default python
WORKDIR /gpt
RUN curl -sS https://bootstrap.pypa.io/get-pip.py | python3.8
# 下载pytorch
RUN python3 -m pip install torch --extra-index-url https://download.pytorch.org/whl/cu113
# 下载分支
WORKDIR /gpt
RUN git clone --depth=1 https://github.com/binary-husky/gpt_academic.git
WORKDIR /gpt/gpt_academic
RUN git clone https://github.com/OpenLMLab/MOSS.git request_llm/moss
RUN python3 -m pip install openai numpy arxiv rich
RUN python3 -m pip install colorama Markdown pygments pymupdf
RUN python3 -m pip install python-docx moviepy pdfminer
RUN python3 -m pip install zh_langchain==0.2.1
RUN python3 -m pip install nougat-ocr
RUN python3 -m pip install rarfile py7zr manim manimgl
RUN python3 -m pip install aliyun-python-sdk-core==2.13.3 pyOpenSSL scipy git+https://github.com/aliyun/alibabacloud-nls-python-sdk.git
RUN python3 -m pip install -r requirements.txt
RUN python3 -m pip install -r request_llm/requirements_moss.txt
RUN python3 -m pip install -r request_llm/requirements_qwen.txt
RUN python3 -m pip install -r request_llm/requirements_chatglm.txt
RUN python3 -m pip install -r request_llm/requirements_newbing.txt
# 预热Tiktoken模块
RUN python3 -c 'from check_proxy import warm_up_modules; warm_up_modules()'
# 启动
CMD ["python3", "-u", "main.py"]

View File

@ -1,7 +1,6 @@
# 从NVIDIA源,从而支持显卡运行(检查宿主的nvidia-smi中的cuda版本必须>=11.3) # 从NVIDIA源,从而支持显卡运行(检查宿主的nvidia-smi中的cuda版本必须>=11.3)
FROM nvidia/cuda:11.3.1-runtime-ubuntu20.04 FROM nvidia/cuda:11.3.1-runtime-ubuntu20.04
ARG useProxyNetwork=''
RUN apt-get update RUN apt-get update
RUN apt-get install -y curl proxychains curl gcc RUN apt-get install -y curl proxychains curl gcc
RUN apt-get install -y git python python3 python-dev python3-dev --fix-missing RUN apt-get install -y git python python3 python-dev python3-dev --fix-missing

View File

@ -1,6 +1,6 @@
# 此Dockerfile适用于“无本地模型”的环境构建如果需要使用chatglm等本地模型请参考 docs/Dockerfile+ChatGLM # 此Dockerfile适用于“无本地模型”的环境构建如果需要使用chatglm等本地模型请参考 docs/Dockerfile+ChatGLM
# - 1 修改 `config.py` # - 1 修改 `config.py`
# - 2 构建 docker build -t gpt-academic-nolocal-latex -f docs/Dockerfile+NoLocal+Latex . # - 2 构建 docker build -t gpt-academic-nolocal-latex -f docs/GithubAction+NoLocal+Latex .
# - 3 运行 docker run -v /home/fuqingxu/arxiv_cache:/root/arxiv_cache --rm -it --net=host gpt-academic-nolocal-latex # - 3 运行 docker run -v /home/fuqingxu/arxiv_cache:/root/arxiv_cache --rm -it --net=host gpt-academic-nolocal-latex
FROM fuqingxu/python311_texlive_ctex:latest FROM fuqingxu/python311_texlive_ctex:latest
@ -10,6 +10,10 @@ WORKDIR /gpt
RUN pip3 install gradio openai numpy arxiv rich RUN pip3 install gradio openai numpy arxiv rich
RUN pip3 install colorama Markdown pygments pymupdf RUN pip3 install colorama Markdown pygments pymupdf
RUN pip3 install python-docx moviepy pdfminer
RUN pip3 install zh_langchain==0.2.1
RUN pip3 install nougat-ocr
RUN pip3 install aliyun-python-sdk-core==2.13.3 pyOpenSSL scipy git+https://github.com/aliyun/alibabacloud-nls-python-sdk.git
# 装载项目文件 # 装载项目文件
COPY . . COPY . .

View File

@ -2161,5 +2161,292 @@
"在运行过程中动态地修改配置": "Dynamically modify configurations during runtime", "在运行过程中动态地修改配置": "Dynamically modify configurations during runtime",
"请先把模型切换至gpt-*或者api2d-*": "Please switch the model to gpt-* or api2d-* first", "请先把模型切换至gpt-*或者api2d-*": "Please switch the model to gpt-* or api2d-* first",
"获取简单聊天的句柄": "Get handle of simple chat", "获取简单聊天的句柄": "Get handle of simple chat",
"获取插件的默认参数": "Get default parameters of plugin" "获取插件的默认参数": "Get default parameters of plugin",
"GROBID服务不可用": "GROBID service is unavailable",
"请问": "May I ask",
"如果等待时间过长": "If the waiting time is too long",
"编程": "programming",
"5. 现在": "5. Now",
"您不必读这个else分支": "You don't have to read this else branch",
"用插件实现": "Implement with plugins",
"插件分类默认选项": "Default options for plugin classification",
"填写多个可以均衡负载": "Filling in multiple can balance the load",
"色彩主题": "Color theme",
"可能附带额外依赖 -=-=-=-=-=-=-": "May come with additional dependencies -=-=-=-=-=-=-",
"讯飞星火认知大模型": "Xunfei Xinghuo cognitive model",
"ParsingLuaProject的所有源文件 | 输入参数为路径": "All source files of ParsingLuaProject | Input parameter is path",
"复制以下空间https": "Copy the following space https",
"如果意图明确": "If the intention is clear",
"如系统是Linux": "If the system is Linux",
"├── 语音功能": "├── Voice function",
"见Github wiki": "See Github wiki",
"⭐ ⭐ ⭐ 立即应用配置": "⭐ ⭐ ⭐ Apply configuration immediately",
"现在您只需要再次重复一次您的指令即可": "Now you just need to repeat your command again",
"没辙了": "No way",
"解析Jupyter Notebook文件 | 输入参数为路径": "Parse Jupyter Notebook file | Input parameter is path",
"⭐ ⭐ ⭐ 确认插件参数": "⭐ ⭐ ⭐ Confirm plugin parameters",
"找不到合适插件执行该任务": "Cannot find a suitable plugin to perform this task",
"接驳VoidTerminal": "Connect to VoidTerminal",
"**很好": "**Very good",
"对话|编程": "Conversation|Programming",
"对话|编程|学术": "Conversation|Programming|Academic",
"4. 建议使用 GPT3.5 或更强的模型": "4. It is recommended to use GPT3.5 or a stronger model",
"「请调用插件翻译PDF论文": "Please call the plugin to translate the PDF paper",
"3. 如果您使用「调用插件xxx」、「修改配置xxx」、「请问」等关键词": "3. If you use keywords such as 'call plugin xxx', 'modify configuration xxx', 'please', etc.",
"以下是一篇学术论文的基本信息": "The following is the basic information of an academic paper",
"GROBID服务器地址": "GROBID server address",
"修改配置": "Modify configuration",
"理解PDF文档的内容并进行回答 | 输入参数为路径": "Understand the content of the PDF document and answer | Input parameter is path",
"对于需要高级参数的插件": "For plugins that require advanced parameters",
"🏃‍♂️🏃‍♂️🏃‍♂️ 主进程执行": "Main process execution 🏃‍♂️🏃‍♂️🏃‍♂️",
"没有填写 HUGGINGFACE_ACCESS_TOKEN": "HUGGINGFACE_ACCESS_TOKEN not filled in",
"调度插件": "Scheduling plugin",
"语言模型": "Language model",
"├── ADD_WAIFU 加一个live2d装饰": "├── ADD_WAIFU Add a live2d decoration",
"初始化": "Initialization",
"选择了不存在的插件": "Selected a non-existent plugin",
"修改本项目的配置": "Modify the configuration of this project",
"如果输入的文件路径是正确的": "If the input file path is correct",
"2. 您可以打开插件下拉菜单以了解本项目的各种能力": "2. You can open the plugin dropdown menu to learn about various capabilities of this project",
"VoidTerminal插件说明": "VoidTerminal plugin description",
"无法理解您的需求": "Unable to understand your requirements",
"默认 AdvancedArgs = False": "Default AdvancedArgs = False",
"「请问Transformer网络的结构是怎样的": "What is the structure of the Transformer network?",
"比如1812.10695": "For example, 1812.10695",
"翻译README或MD": "Translate README or MD",
"读取新配置中": "Reading new configuration",
"假如偏离了您的要求": "If it deviates from your requirements",
"├── THEME 色彩主题": "├── THEME color theme",
"如果还找不到": "If still not found",
"问": "Ask",
"请检查系统字体": "Please check system fonts",
"如果错误": "If there is an error",
"作为替代": "As an alternative",
"ParseJavaProject的所有源文件 | 输入参数为路径": "All source files of ParseJavaProject | Input parameter is path",
"比对相同参数时生成的url与自己代码生成的url是否一致": "Check if the generated URL matches the one generated by your code when comparing the same parameters",
"清除本地缓存数据": "Clear local cache data",
"使用谷歌学术检索助手搜索指定URL的结果 | 输入参数为谷歌学术搜索页的URL": "Use Google Scholar search assistant to search for results of a specific URL | Input parameter is the URL of Google Scholar search page",
"运行方法": "Running method",
"您已经上传了文件**": "You have uploaded the file **",
"「给爷翻译Arxiv论文": "Translate Arxiv papers for me",
"请修改config中的GROBID_URL": "Please modify GROBID_URL in the config",
"处理特殊情况": "Handling special cases",
"不要自己瞎搞!」": "Don't mess around by yourself!",
"LoadConversationHistoryArchive | 输入参数为路径": "LoadConversationHistoryArchive | Input parameter is a path",
"| 输入参数是一个问题": "| Input parameter is a question",
"├── CHATBOT_HEIGHT 对话窗的高度": "├── CHATBOT_HEIGHT Height of the chat window",
"对C": "To C",
"默认关闭": "Default closed",
"当前进度": "Current progress",
"HUGGINGFACE的TOKEN": "HUGGINGFACE's TOKEN",
"查找可用插件中": "Searching for available plugins",
"下载LLAMA时起作用 https": "Works when downloading LLAMA https",
"使用 AK": "Using AK",
"正在执行任务": "Executing task",
"保存当前的对话 | 不需要输入参数": "Save current conversation | No input parameters required",
"对话": "Conversation",
"图中鲜花怒放": "Flowers blooming in the picture",
"批量将Markdown文件中文翻译为英文 | 输入参数为路径或上传压缩包": "Batch translate Chinese to English in Markdown files | Input parameter is a path or upload a compressed package",
"ParsingCSharpProject的所有源文件 | 输入参数为路径": "ParsingCSharpProject's all source files | Input parameter is a path",
"为我翻译PDF论文": "Translate PDF papers for me",
"聊天对话": "Chat conversation",
"拼接鉴权参数": "Concatenate authentication parameters",
"请检查config中的GROBID_URL": "Please check the GROBID_URL in the config",
"拼接字符串": "Concatenate strings",
"您的意图可以被识别的更准确": "Your intent can be recognized more accurately",
"该模型有七个 bin 文件": "The model has seven bin files",
"但思路相同": "But the idea is the same",
"你需要翻译": "You need to translate",
"或者描述文件所在的路径": "Or the path of the description file",
"请您上传文件": "Please upload the file",
"不常用": "Not commonly used",
"尚未充分测试的实验性插件 & 需要额外依赖的插件 -=--=-": "Experimental plugins that have not been fully tested & plugins that require additional dependencies -=--=-",
"⭐ ⭐ ⭐ 选择插件": "⭐ ⭐ ⭐ Select plugin",
"当前配置不允许被修改!如需激活本功能": "The current configuration does not allow modification! To activate this feature",
"正在连接GROBID服务": "Connecting to GROBID service",
"用户图形界面布局依赖关系示意图": "Diagram of user interface layout dependencies",
"是否允许通过自然语言描述修改本页的配置": "Allow modifying the configuration of this page through natural language description",
"self.chatbot被序列化": "self.chatbot is serialized",
"本地Latex论文精细翻译 | 输入参数是路径": "Locally translate Latex papers with fine-grained translation | Input parameter is the path",
"抱歉": "Sorry",
"以下这部分是最早加入的最稳定的模型 -=-=-=-=-=-=-": "The following section is the earliest and most stable model added",
"「用插件翻译README": "Translate README with plugins",
"如果不正确": "If incorrect",
"⭐ ⭐ ⭐ 读取可配置项目条目": "⭐ ⭐ ⭐ Read configurable project entries",
"开始语言对话 | 没有输入参数": "Start language conversation | No input parameters",
"谨慎操作 | 不需要输入参数": "Handle with caution | No input parameters required",
"对英文Latex项目全文进行纠错处理 | 输入参数为路径或上传压缩包": "Correct the entire English Latex project | Input parameter is the path or upload compressed package",
"如果需要处理文件": "If file processing is required",
"提供图像的内容": "Provide the content of the image",
"查看历史上的今天事件 | 不需要输入参数": "View historical events of today | No input parameters required",
"这个稍微啰嗦一点": "This is a bit verbose",
"多线程解析并翻译此项目的源码 | 不需要输入参数": "Parse and translate the source code of this project in multi-threading | No input parameters required",
"此处打印出建立连接时候的url": "Print the URL when establishing the connection here",
"精准翻译PDF论文为中文 | 输入参数为路径": "Translate PDF papers accurately into Chinese | Input parameter is the path",
"检测到操作错误!当您上传文档之后": "Operation error detected! After you upload the document",
"在线大模型配置关联关系示意图": "Online large model configuration relationship diagram",
"你的填写的空间名如grobid": "Your filled space name such as grobid",
"获取方法": "Get method",
"| 输入参数为路径": "| Input parameter is the path",
"⭐ ⭐ ⭐ 执行插件": "⭐ ⭐ ⭐ Execute plugin",
"├── ALLOW_RESET_CONFIG 是否允许通过自然语言描述修改本页的配置": "├── ALLOW_RESET_CONFIG Whether to allow modifying the configuration of this page through natural language description",
"重新页面即可生效": "Refresh the page to take effect",
"设为public": "Set as public",
"并在此处指定模型路径": "And specify the model path here",
"分析用户意图中": "Analyzing user intent",
"刷新下拉列表": "Refresh the drop-down list",
"失败 当前语言模型": "Failed current language model",
"1. 请用**自然语言**描述您需要做什么": "1. Please describe what you need to do in **natural language**",
"对Latex项目全文进行中译英处理 | 输入参数为路径或上传压缩包": "Translate the full text of Latex projects from Chinese to English | Input parameter is the path or upload a compressed package",
"没有配置BAIDU_CLOUD_API_KEY": "No configuration for BAIDU_CLOUD_API_KEY",
"设置默认值": "Set default value",
"如果太多了会导致gpt无法理解": "If there are too many, it will cause GPT to be unable to understand",
"绿草如茵": "Green grass",
"├── LAYOUT 窗口布局": "├── LAYOUT window layout",
"用户意图理解": "User intent understanding",
"生成RFC1123格式的时间戳": "Generate RFC1123 formatted timestamp",
"欢迎您前往Github反馈问题": "Welcome to go to Github to provide feedback",
"排除已经是按钮的插件": "Exclude plugins that are already buttons",
"亦在下拉菜单中显示": "Also displayed in the dropdown menu",
"导致无法反序列化": "Causing deserialization failure",
"意图=": "Intent =",
"章节": "Chapter",
"调用插件": "Invoke plugin",
"ParseRustProject的所有源文件 | 输入参数为路径": "All source files of ParseRustProject | Input parameter is path",
"需要点击“函数插件区”按钮进行处理": "Need to click the 'Function Plugin Area' button for processing",
"默认 AsButton = True": "Default AsButton = True",
"收到websocket错误的处理": "Handling websocket errors",
"用插件": "Use Plugin",
"没有选择任何插件组": "No plugin group selected",
"答": "Answer",
"可修改成本地GROBID服务": "Can modify to local GROBID service",
"用户意图": "User intent",
"对英文Latex项目全文进行润色处理 | 输入参数为路径或上传压缩包": "Polish the full text of English Latex projects | Input parameters are paths or uploaded compressed packages",
"「我不喜欢当前的界面颜色": "I don't like the current interface color",
"「请调用插件": "Please call the plugin",
"VoidTerminal状态": "VoidTerminal status",
"新配置": "New configuration",
"支持Github链接": "Support Github links",
"没有配置BAIDU_CLOUD_SECRET_KEY": "No BAIDU_CLOUD_SECRET_KEY configured",
"获取当前VoidTerminal状态": "Get the current VoidTerminal status",
"刷新按钮": "Refresh button",
"为了防止pickle.dumps": "To prevent pickle.dumps",
"放弃治疗": "Give up treatment",
"可指定不同的生成长度、top_p等相关超参": "Can specify different generation lengths, top_p and other related hyperparameters",
"请将题目和摘要翻译为": "Translate the title and abstract",
"通过appid和用户的提问来生成请参数": "Generate request parameters through appid and user's question",
"ImageGeneration | 输入参数字符串": "ImageGeneration | Input parameter string",
"将文件拖动到文件上传区": "Drag and drop the file to the file upload area",
"如果意图模糊": "If the intent is ambiguous",
"星火认知大模型": "Spark Cognitive Big Model",
"执行中. 删除 gpt_log & private_upload": "Executing. Delete gpt_log & private_upload",
"默认 Color = secondary": "Default Color = secondary",
"此处也不需要修改": "No modification is needed here",
"⭐ ⭐ ⭐ 分析用户意图": "⭐ ⭐ ⭐ Analyze user intent",
"再试一次": "Try again",
"请写bash命令实现以下功能": "Please write a bash command to implement the following function",
"批量SummarizingWordDocuments | 输入参数为路径": "Batch SummarizingWordDocuments | Input parameter is the path",
"/Users/fuqingxu/Desktop/旧文件/gpt/chatgpt_academic/crazy_functions/latex_fns中的python文件进行解析": "Parse the python file in /Users/fuqingxu/Desktop/旧文件/gpt/chatgpt_academic/crazy_functions/latex_fns",
"当我要求你写bash命令时": "When I ask you to write a bash command",
"├── AUTO_CLEAR_TXT 是否在提交时自动清空输入框": "├── AUTO_CLEAR_TXT Whether to automatically clear the input box when submitting",
"按停止键终止": "Press the stop key to terminate",
"文心一言": "Original text",
"不能理解您的意图": "Cannot understand your intention",
"用简单的关键词检测用户意图": "Detect user intention with simple keywords",
"中文": "Chinese",
"解析一个C++项目的所有源文件": "Parse all source files of a C++ project",
"请求的Prompt为": "Requested prompt is",
"参考本demo的时候可取消上方打印的注释": "You can remove the comments above when referring to this demo",
"开始接收回复": "Start receiving replies",
"接入讯飞星火大模型 https": "Access to Xunfei Xinghuo large model https",
"用该压缩包进行反馈": "Use this compressed package for feedback",
"翻译Markdown或README": "Translate Markdown or README",
"SK 生成鉴权签名": "SK generates authentication signature",
"插件参数": "Plugin parameters",
"需要访问中文Bing": "Need to access Chinese Bing",
"ParseFrontendProject的所有源文件": "Parse all source files of ParseFrontendProject",
"现在将执行效果稍差的旧版代码": "Now execute the older version code with slightly worse performance",
"您需要明确说明并在指令中提到它": "You need to specify and mention it in the command",
"请在config.py中设置ALLOW_RESET_CONFIG=True后重启软件": "Please set ALLOW_RESET_CONFIG=True in config.py and restart the software",
"按照自然语言描述生成一个动画 | 输入参数是一段话": "Generate an animation based on natural language description | Input parameter is a sentence",
"你的hf用户名如qingxu98": "Your hf username is qingxu98",
"Arixv论文精细翻译 | 输入参数arxiv论文的ID": "Fine translation of Arixv paper | Input parameter is the ID of arxiv paper",
"无法获取 abstract": "Unable to retrieve abstract",
"尽可能地仅用一行命令解决我的要求": "Try to solve my request using only one command",
"提取插件参数": "Extract plugin parameters",
"配置修改完成": "Configuration modification completed",
"正在修改配置中": "Modifying configuration",
"ParsePythonProject的所有源文件": "All source files of ParsePythonProject",
"请求错误": "Request error",
"精准翻译PDF论文": "Accurate translation of PDF paper",
"无法获取 authors": "Unable to retrieve authors",
"该插件诞生时间不长": "This plugin has not been around for long",
"返回项目根路径": "Return project root path",
"BatchSummarizePDFDocuments的内容 | 输入参数为路径": "Content of BatchSummarizePDFDocuments | Input parameter is a path",
"百度千帆": "Baidu Qianfan",
"解析一个C++项目的所有头文件": "Parse all header files of a C++ project",
"现在请您描述您的需求": "Now please describe your requirements",
"该功能具有一定的危险性": "This feature has a certain level of danger",
"收到websocket关闭的处理": "Processing when receiving websocket closure",
"读取Tex论文并写摘要 | 输入参数为路径": "Read Tex paper and write abstract | Input parameter is the path",
"地址为https": "The address is https",
"限制最多前10个配置项": "Limit up to 10 configuration items",
"6. 如果不需要上传文件": "6. If file upload is not needed",
"默认 Group = 对话": "Default Group = Conversation",
"五秒后即将重启!若出现报错请无视即可": "Restarting in five seconds! Please ignore if there is an error",
"收到websocket连接建立的处理": "Processing when receiving websocket connection establishment",
"批量生成函数的注释 | 输入参数为路径": "Batch generate function comments | Input parameter is the path",
"聊天": "Chat",
"但您可以尝试再试一次": "But you can try again",
"千帆大模型平台": "Qianfan Big Model Platform",
"直接运行 python tests/test_plugins.py": "Run python tests/test_plugins.py directly",
"或是None": "Or None",
"进行hmac-sha256进行加密": "Perform encryption using hmac-sha256",
"批量总结音频或视频 | 输入参数为路径": "Batch summarize audio or video | Input parameter is path",
"插件在线服务配置依赖关系示意图": "Plugin online service configuration dependency diagram",
"开始初始化模型": "Start initializing model",
"弱模型可能无法理解您的想法": "Weak model may not understand your ideas",
"解除大小写限制": "Remove case sensitivity restriction",
"跳过提示环节": "Skip prompt section",
"接入一些逆向工程https": "Access some reverse engineering https",
"执行完成": "Execution completed",
"如果需要配置": "If configuration is needed",
"此处不修改;如果使用本地或无地域限制的大模型时": "Do not modify here; if using local or region-unrestricted large models",
"你是一个Linux大师级用户": "You are a Linux master-level user",
"arxiv论文的ID是1812.10695": "The ID of the arxiv paper is 1812.10695",
"而不是点击“提交”按钮": "Instead of clicking the 'Submit' button",
"解析一个Go项目的所有源文件 | 输入参数为路径": "Parse all source files of a Go project | Input parameter is path",
"对中文Latex项目全文进行润色处理 | 输入参数为路径或上传压缩包": "Polish the entire text of a Chinese Latex project | Input parameter is path or upload compressed package",
"「生成一张图片": "Generate an image",
"将Markdown或README翻译为中文 | 输入参数为路径或URL": "Translate Markdown or README to Chinese | Input parameters are path or URL",
"训练时间": "Training time",
"将请求的鉴权参数组合为字典": "Combine the requested authentication parameters into a dictionary",
"对Latex项目全文进行英译中处理 | 输入参数为路径或上传压缩包": "Translate the entire text of Latex project from English to Chinese | Input parameters are path or uploaded compressed package",
"内容如下": "The content is as follows",
"用于高质量地读取PDF文档": "Used for high-quality reading of PDF documents",
"上下文太长导致 token 溢出": "The context is too long, causing token overflow",
"├── DARK_MODE 暗色模式 / 亮色模式": "├── DARK_MODE Dark mode / Light mode",
"语言模型回复为": "The language model replies as",
"from crazy_functions.chatglm微调工具 import 微调数据集生成": "from crazy_functions.chatglm fine-tuning tool import fine-tuning dataset generation",
"为您选择了插件": "Selected plugin for you",
"无法获取 title": "Unable to get title",
"收到websocket消息的处理": "Processing of received websocket messages",
"2023年": "2023",
"清除所有缓存文件": "Clear all cache files",
"├── PDF文档精准解析": "├── Accurate parsing of PDF documents",
"论文我刚刚放到上传区了": "I just put the paper in the upload area",
"生成url": "Generate URL",
"以下部分是新加入的模型": "The following section is the newly added model",
"学术": "Academic",
"├── DEFAULT_FN_GROUPS 插件分类默认选项": "├── DEFAULT_FN_GROUPS Plugin classification default options",
"不推荐使用": "Not recommended for use",
"正在同时咨询": "Consulting simultaneously",
"将Markdown翻译为中文 | 输入参数为路径或URL": "Translate Markdown to Chinese | Input parameters are path or URL",
"Github网址是https": "The Github URL is https",
"试着加上.tex后缀试试": "Try adding the .tex suffix",
"对项目中的各个插件进行测试": "Test each plugin in the project",
"插件说明": "Plugin description",
"├── CODE_HIGHLIGHT 代码高亮": "├── CODE_HIGHLIGHT Code highlighting",
"记得用插件": "Remember to use the plugin",
"谨慎操作": "Handle with caution"
} }

View File

@ -83,5 +83,10 @@
"图片生成": "ImageGeneration", "图片生成": "ImageGeneration",
"动画生成": "AnimationGeneration", "动画生成": "AnimationGeneration",
"语音助手": "VoiceAssistant", "语音助手": "VoiceAssistant",
"启动微调": "StartFineTuning" "启动微调": "StartFineTuning",
"清除缓存": "ClearCache",
"辅助功能": "Accessibility",
"虚空终端": "VoidTerminal",
"解析PDF_基于GROBID": "ParsePDF_BasedOnGROBID",
"虚空终端主路由": "VoidTerminalMainRoute"
} }

main.py
View File

@ -6,18 +6,18 @@ def main():
from request_llm.bridge_all import predict from request_llm.bridge_all import predict
from toolbox import format_io, find_free_port, on_file_uploaded, on_report_generated, get_conf, ArgsGeneralWrapper, load_chat_cookies, DummyWith from toolbox import format_io, find_free_port, on_file_uploaded, on_report_generated, get_conf, ArgsGeneralWrapper, load_chat_cookies, DummyWith
# 建议您复制一个config_private.py放自己的秘密, 如API和代理网址, 避免不小心传github被别人看到 # 建议您复制一个config_private.py放自己的秘密, 如API和代理网址, 避免不小心传github被别人看到
proxies, WEB_PORT, LLM_MODEL, CONCURRENT_COUNT, AUTHENTICATION, CHATBOT_HEIGHT, LAYOUT, AVAIL_LLM_MODELS, AUTO_CLEAR_TXT = \ proxies, WEB_PORT, LLM_MODEL, CONCURRENT_COUNT, AUTHENTICATION = get_conf('proxies', 'WEB_PORT', 'LLM_MODEL', 'CONCURRENT_COUNT', 'AUTHENTICATION')
get_conf('proxies', 'WEB_PORT', 'LLM_MODEL', 'CONCURRENT_COUNT', 'AUTHENTICATION', 'CHATBOT_HEIGHT', 'LAYOUT', 'AVAIL_LLM_MODELS', 'AUTO_CLEAR_TXT') CHATBOT_HEIGHT, LAYOUT, AVAIL_LLM_MODELS, AUTO_CLEAR_TXT = get_conf('CHATBOT_HEIGHT', 'LAYOUT', 'AVAIL_LLM_MODELS', 'AUTO_CLEAR_TXT')
ENABLE_AUDIO, AUTO_CLEAR_TXT = get_conf('ENABLE_AUDIO', 'AUTO_CLEAR_TXT') ENABLE_AUDIO, AUTO_CLEAR_TXT = get_conf('ENABLE_AUDIO', 'AUTO_CLEAR_TXT')
# 如果WEB_PORT是-1, 则随机选取WEB端口 # 如果WEB_PORT是-1, 则随机选取WEB端口
PORT = find_free_port() if WEB_PORT <= 0 else WEB_PORT PORT = find_free_port() if WEB_PORT <= 0 else WEB_PORT
if not AUTHENTICATION: AUTHENTICATION = None
from check_proxy import get_current_version from check_proxy import get_current_version
from themes.theme import adjust_theme, advanced_css, theme_declaration from themes.theme import adjust_theme, advanced_css, theme_declaration
initial_prompt = "Serve me as a writing and programming assistant." initial_prompt = "Serve me as a writing and programming assistant."
title_html = f"<h1 align=\"center\">GPT 学术优化 {get_current_version()}</h1>{theme_declaration}" title_html = f"<h1 align=\"center\">GPT 学术优化 {get_current_version()}</h1>{theme_declaration}"
description = """代码开源和更新[地址🚀](https://github.com/binary-husky/chatgpt_academic),感谢热情的[开发者们❤️](https://github.com/binary-husky/chatgpt_academic/graphs/contributors)""" description = "代码开源和更新[地址🚀](https://github.com/binary-husky/gpt_academic)"
description += "感谢热情的[开发者们❤️](https://github.com/binary-husky/gpt_academic/graphs/contributors)"
# 问询记录, python 版本建议3.9+(越新越好) # 问询记录, python 版本建议3.9+(越新越好)
import logging, uuid import logging, uuid
@ -34,7 +34,10 @@ def main():
# 高级函数插件 # 高级函数插件
from crazy_functional import get_crazy_functions from crazy_functional import get_crazy_functions
crazy_fns = get_crazy_functions() DEFAULT_FN_GROUPS, = get_conf('DEFAULT_FN_GROUPS')
plugins = get_crazy_functions()
all_plugin_groups = list(set([g for _, plugin in plugins.items() for g in plugin['Group'].split('|')]))
match_group = lambda tags, groups: any([g in groups for g in tags.split('|')])
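# 例:match_group('对话|编程', ['编程', '学术']) -> True,插件任一标签命中所选分组即可见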
# 处理markdown文本格式的转变 # 处理markdown文本格式的转变
gr.Chatbot.postprocess = format_io gr.Chatbot.postprocess = format_io
@@ -83,25 +86,33 @@ def main():
                             if ("Visible" in functional[k]) and (not functional[k]["Visible"]): continue
                             variant = functional[k]["Color"] if "Color" in functional[k] else "secondary"
                             functional[k]["Button"] = gr.Button(k, variant=variant)
+                            functional[k]["Button"].style(size="sm")
                 with gr.Accordion("函数插件区", open=True, elem_id="plugin-panel") as area_crazy_fn:
                     with gr.Row():
                         gr.Markdown("插件可读取“输入区”文本/路径作为参数(上传文件自动修正路径)")
+                    with gr.Row(elem_id="input-plugin-group"):
+                        plugin_group_sel = gr.Dropdown(choices=all_plugin_groups, label='', show_label=False, value=DEFAULT_FN_GROUPS,
+                                                       multiselect=True, interactive=True, elem_classes='normal_mut_select').style(container=False)
                     with gr.Row():
-                        for k in crazy_fns:
-                            if not crazy_fns[k].get("AsButton", True): continue
-                            variant = crazy_fns[k]["Color"] if "Color" in crazy_fns[k] else "secondary"
-                            crazy_fns[k]["Button"] = gr.Button(k, variant=variant)
-                            crazy_fns[k]["Button"].style(size="sm")
+                        for k, plugin in plugins.items():
+                            if not plugin.get("AsButton", True): continue
+                            visible = True if match_group(plugin['Group'], DEFAULT_FN_GROUPS) else False
+                            variant = plugins[k]["Color"] if "Color" in plugin else "secondary"
+                            plugin['Button'] = plugins[k]['Button'] = gr.Button(k, variant=variant, visible=visible).style(size="sm")
                     with gr.Row():
                         with gr.Accordion("更多函数插件", open=True):
-                            dropdown_fn_list = [k for k in crazy_fns.keys() if not crazy_fns[k].get("AsButton", True)]
+                            dropdown_fn_list = []
+                            for k, plugin in plugins.items():
+                                if not match_group(plugin['Group'], DEFAULT_FN_GROUPS): continue
+                                if not plugin.get("AsButton", True): dropdown_fn_list.append(k) # 排除已经是按钮的插件
+                                elif plugin.get('AdvancedArgs', False): dropdown_fn_list.append(k) # 对于需要高级参数的插件,亦在下拉菜单中显示
                             with gr.Row():
                                 dropdown = gr.Dropdown(dropdown_fn_list, value=r"打开插件列表", label="", show_label=False).style(container=False)
                             with gr.Row():
                                 plugin_advanced_arg = gr.Textbox(show_label=True, label="高级参数输入区", visible=False,
                                                                  placeholder="这里是特殊函数插件的高级参数输入区").style(container=False)
                             with gr.Row():
-                                switchy_bt = gr.Button(r"请先从插件列表中选择", variant="secondary")
+                                switchy_bt = gr.Button(r"请先从插件列表中选择", variant="secondary").style(size="sm")
                     with gr.Row():
                         with gr.Accordion("点击展开“文件上传区”。上传本地文件/压缩包供函数插件调用。", open=False) as area_file_up:
                             file_upload = gr.Files(label="任何文件, 但推荐上传压缩文件(zip, tar)", file_count="multiple")
@@ -112,7 +123,6 @@ def main():
                 max_length_sl = gr.Slider(minimum=256, maximum=8192, value=4096, step=1, interactive=True, label="Local LLM MaxLength",)
                 checkboxes = gr.CheckboxGroup(["基础功能区", "函数插件区", "底部输入区", "输入清除键", "插件参数区"], value=["基础功能区", "函数插件区"], label="显示/隐藏功能区")
                 md_dropdown = gr.Dropdown(AVAIL_LLM_MODELS, value=LLM_MODEL, label="更换LLM模型/请求源").style(container=False)
                 gr.Markdown(description)
         with gr.Accordion("备选输入区", open=True, visible=False, elem_id="input-panel2") as area_input_secondary:
             with gr.Row():
@@ -123,6 +133,7 @@ def main():
                 resetBtn2 = gr.Button("重置", variant="secondary"); resetBtn2.style(size="sm")
                 stopBtn2 = gr.Button("停止", variant="secondary"); stopBtn2.style(size="sm")
                 clearBtn2 = gr.Button("清除", variant="secondary", visible=False); clearBtn2.style(size="sm")

         # 功能区显示开关与功能区的互动
         def fn_area_visibility(a):
             ret = {}
@@ -160,19 +171,19 @@ def main():
             click_handle = functional[k]["Button"].click(fn=ArgsGeneralWrapper(predict), inputs=[*input_combo, gr.State(True), gr.State(k)], outputs=output_combo)
             cancel_handles.append(click_handle)
         # 文件上传区接收文件后与chatbot的互动
-        file_upload.upload(on_file_uploaded, [file_upload, chatbot, txt, txt2, checkboxes], [chatbot, txt, txt2])
+        file_upload.upload(on_file_uploaded, [file_upload, chatbot, txt, txt2, checkboxes, cookies], [chatbot, txt, txt2, cookies])
         # 函数插件-固定按钮区
-        for k in crazy_fns:
-            if not crazy_fns[k].get("AsButton", True): continue
-            click_handle = crazy_fns[k]["Button"].click(ArgsGeneralWrapper(crazy_fns[k]["Function"]), [*input_combo, gr.State(PORT)], output_combo)
+        for k in plugins:
+            if not plugins[k].get("AsButton", True): continue
+            click_handle = plugins[k]["Button"].click(ArgsGeneralWrapper(plugins[k]["Function"]), [*input_combo, gr.State(PORT)], output_combo)
             click_handle.then(on_report_generated, [cookies, file_upload, chatbot], [cookies, file_upload, chatbot])
             cancel_handles.append(click_handle)
         # 函数插件-下拉菜单与随变按钮的互动
         def on_dropdown_changed(k):
-            variant = crazy_fns[k]["Color"] if "Color" in crazy_fns[k] else "secondary"
+            variant = plugins[k]["Color"] if "Color" in plugins[k] else "secondary"
             ret = {switchy_bt: gr.update(value=k, variant=variant)}
-            if crazy_fns[k].get("AdvancedArgs", False): # 是否唤起高级插件参数区
-                ret.update({plugin_advanced_arg: gr.update(visible=True, label=f"插件[{k}]的高级参数说明:" + crazy_fns[k].get("ArgsReminder", [f"没有提供高级参数功能说明"]))})
+            if plugins[k].get("AdvancedArgs", False): # 是否唤起高级插件参数区
+                ret.update({plugin_advanced_arg: gr.update(visible=True, label=f"插件[{k}]的高级参数说明:" + plugins[k].get("ArgsReminder", [f"没有提供高级参数功能说明"]))})
             else:
                 ret.update({plugin_advanced_arg: gr.update(visible=False, label=f"插件[{k}]不需要高级参数。")})
             return ret
@@ -183,13 +194,26 @@ def main():
         # 随变按钮的回调函数注册
         def route(request: gr.Request, k, *args, **kwargs):
             if k in [r"打开插件列表", r"请先从插件列表中选择"]: return
-            yield from ArgsGeneralWrapper(crazy_fns[k]["Function"])(request, *args, **kwargs)
+            yield from ArgsGeneralWrapper(plugins[k]["Function"])(request, *args, **kwargs)
         click_handle = switchy_bt.click(route,[switchy_bt, *input_combo, gr.State(PORT)], output_combo)
         click_handle.then(on_report_generated, [cookies, file_upload, chatbot], [cookies, file_upload, chatbot])
         cancel_handles.append(click_handle)
         # 终止按钮的回调函数注册
         stopBtn.click(fn=None, inputs=None, outputs=None, cancels=cancel_handles)
         stopBtn2.click(fn=None, inputs=None, outputs=None, cancels=cancel_handles)
+        plugins_as_btn = {name:plugin for name, plugin in plugins.items() if plugin.get('Button', None)}
+        def on_group_change(group_list):
+            btn_list = []
+            fns_list = []
+            if not group_list: # 处理特殊情况:没有选择任何插件组
+                return [*[plugin['Button'].update(visible=False) for _, plugin in plugins_as_btn.items()], gr.Dropdown.update(choices=[])]
+            for k, plugin in plugins.items():
+                if plugin.get("AsButton", True):
+                    btn_list.append(plugin['Button'].update(visible=match_group(plugin['Group'], group_list))) # 刷新按钮
+                    if plugin.get('AdvancedArgs', False): dropdown_fn_list.append(k) # 对于需要高级参数的插件,亦在下拉菜单中显示
+                elif match_group(plugin['Group'], group_list): fns_list.append(k) # 刷新下拉列表
+            return [*btn_list, gr.Dropdown.update(choices=fns_list)]
+        plugin_group_sel.select(fn=on_group_change, inputs=[plugin_group_sel], outputs=[*[plugin['Button'] for name, plugin in plugins_as_btn.items()], dropdown])
         if ENABLE_AUDIO:
             from crazy_functions.live_audio.audio_io import RealtimeAudioDistribution
             rad = RealtimeAudioDistribution()
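
A note on the grouping convention used above: each plugin declares a `Group` field containing `|`-separated tags, and `match_group` keeps a plugin visible whenever any of its tags is among the currently selected groups. A minimal standalone sketch (the plugin entries below are made up for illustration; real entries come from `get_crazy_functions()`):

```python
# Minimal sketch of the tag-matching rule; the plugin dict is illustrative only.
match_group = lambda tags, groups: any([g in groups for g in tags.split('|')])

plugins = {
    "插件A": {"Group": "学术|PDF"},  # hypothetical plugin tagged for two groups
    "插件B": {"Group": "编程"},      # hypothetical plugin tagged for one group
}
selected_groups = ["PDF"]

for name, plugin in plugins.items():
    print(name, match_group(plugin["Group"], selected_groups))
# 插件A True   (its "PDF" tag is among the selected groups)
# 插件B False  (none of its tags is selected)
```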
@@ -221,8 +245,10 @@ def main():
     auto_opentab_delay()
     demo.queue(concurrency_count=CONCURRENT_COUNT).launch(
-        server_name="0.0.0.0", server_port=PORT,
-        favicon_path="docs/logo.png", auth=AUTHENTICATION,
+        server_name="0.0.0.0",
+        server_port=PORT,
+        favicon_path="docs/logo.png",
+        auth=AUTHENTICATION if len(AUTHENTICATION) != 0 else None,
         blocked_paths=["config.py","config_private.py","docker-compose.yml","Dockerfile"])

     # 如果需要在二级路径下运行
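
One consequence of the `auth=...` change: the old `if not AUTHENTICATION: AUTHENTICATION = None` normalization near the top of `main()` is gone, and the empty case is now handled inline at the `launch()` call. As far as I can tell from the project's config.py, AUTHENTICATION is a list of `(username, password)` tuples; a sketch with placeholder credentials:

```python
# Sketch of the two AUTHENTICATION shapes (placeholder credentials, not real config).
AUTHENTICATION = []  # default: no login required
# AUTHENTICATION = [("user_a", "password_a"), ("user_b", "password_b")]  # enable gradio login

# the normalization now done inline in demo.queue(...).launch(...):
auth = AUTHENTICATION if len(AUTHENTICATION) != 0 else None
```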


@@ -478,6 +478,8 @@ def step_2_core_key_translate():
    up = trans_json(need_translate, language=LANG, special=False)
    map_to_json(up, language=LANG)
    cached_translation = read_map_from_json(language=LANG)
+    LANG_STD = 'std'
+    cached_translation.update(read_map_from_json(language=LANG_STD))
    cached_translation = dict(sorted(cached_translation.items(), key=lambda x: -len(x[0])))

    # ===============================================


@@ -63,9 +63,9 @@ class GetGLMFTHandle(Process):
         # if not os.path.exists(conf): raise RuntimeError('找不到微调模型信息')
         # with open(conf, 'r', encoding='utf8') as f:
         #     model_args = json.loads(f.read())
-        ChatGLM_PTUNING_CHECKPOINT, = get_conf('ChatGLM_PTUNING_CHECKPOINT')
-        assert os.path.exists(ChatGLM_PTUNING_CHECKPOINT), "找不到微调模型检查点"
-        conf = os.path.join(ChatGLM_PTUNING_CHECKPOINT, "config.json")
+        CHATGLM_PTUNING_CHECKPOINT, = get_conf('CHATGLM_PTUNING_CHECKPOINT')
+        assert os.path.exists(CHATGLM_PTUNING_CHECKPOINT), "找不到微调模型检查点"
+        conf = os.path.join(CHATGLM_PTUNING_CHECKPOINT, "config.json")
         with open(conf, 'r', encoding='utf8') as f:
             model_args = json.loads(f.read())
         if 'model_name_or_path' not in model_args:

@@ -78,9 +78,9 @@ class GetGLMFTHandle(Process):
         config.pre_seq_len = model_args['pre_seq_len']
         config.prefix_projection = model_args['prefix_projection']
-        print(f"Loading prefix_encoder weight from {ChatGLM_PTUNING_CHECKPOINT}")
+        print(f"Loading prefix_encoder weight from {CHATGLM_PTUNING_CHECKPOINT}")
         model = AutoModel.from_pretrained(model_args['model_name_or_path'], config=config, trust_remote_code=True)
-        prefix_state_dict = torch.load(os.path.join(ChatGLM_PTUNING_CHECKPOINT, "pytorch_model.bin"))
+        prefix_state_dict = torch.load(os.path.join(CHATGLM_PTUNING_CHECKPOINT, "pytorch_model.bin"))
         new_prefix_state_dict = {}
         for k, v in prefix_state_dict.items():
             if k.startswith("transformer.prefix_encoder."):
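
The hunk above ends mid-loop; for orientation, the standard ChatGLM P-Tuning v2 loading code continues by stripping the `transformer.prefix_encoder.` prefix from each checkpoint key and loading the result into the prefix encoder. A hedged sketch of that tail, reconstructed from the usual upstream pattern rather than from this diff:

```python
# Hedged sketch of how this loop usually continues in ChatGLM P-Tuning v2 loaders.
new_prefix_state_dict = {}
for k, v in prefix_state_dict.items():
    if k.startswith("transformer.prefix_encoder."):
        # drop the prefix so keys line up with the prefix_encoder submodule
        new_prefix_state_dict[k[len("transformer.prefix_encoder."):]] = v
model.transformer.prefix_encoder.load_state_dict(new_prefix_state_dict)
```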


@@ -49,16 +49,17 @@ def get_access_token():
 def generate_message_payload(inputs, llm_kwargs, history, system_prompt):
     conversation_cnt = len(history) // 2
+    if system_prompt == "": system_prompt = "Hello"
     messages = [{"role": "user", "content": system_prompt}]
     messages.append({"role": "assistant", "content": 'Certainly!'})
     if conversation_cnt:
         for index in range(0, 2*conversation_cnt, 2):
             what_i_have_asked = {}
             what_i_have_asked["role"] = "user"
-            what_i_have_asked["content"] = history[index]
+            what_i_have_asked["content"] = history[index] if history[index]!="" else "Hello"
             what_gpt_answer = {}
             what_gpt_answer["role"] = "assistant"
-            what_gpt_answer["content"] = history[index+1]
+            what_gpt_answer["content"] = history[index+1] if history[index]!="" else "Hello"
             if what_i_have_asked["content"] != "":
                 if what_gpt_answer["content"] == "": continue
                 if what_gpt_answer["content"] == timeout_bot_msg: continue


@@ -2,11 +2,17 @@
 import time
 import threading
 import importlib
-from toolbox import update_ui, get_conf
+from toolbox import update_ui, get_conf, update_ui_lastest_msg
 from multiprocessing import Process, Pipe

 model_name = '星火认知大模型'

+def validate_key():
+    XFYUN_APPID, = get_conf('XFYUN_APPID', )
+    if XFYUN_APPID == '00000000' or XFYUN_APPID == '':
+        return False
+    return True
+
 def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False):
     """
         多线程方法
@@ -15,6 +21,9 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
     watch_dog_patience = 5
     response = ""

+    if validate_key() is False:
+        raise RuntimeError('请配置讯飞星火大模型的XFYUN_APPID, XFYUN_API_KEY, XFYUN_API_SECRET')
+
     from .com_sparkapi import SparkRequestInstance
     sri = SparkRequestInstance()
     for response in sri.generate(inputs, llm_kwargs, history, sys_prompt):
@@ -32,6 +41,10 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
     chatbot.append((inputs, ""))
     yield from update_ui(chatbot=chatbot, history=history)

+    if validate_key() is False:
+        yield from update_ui_lastest_msg(lastmsg="[Local Message]: 请配置讯飞星火大模型的XFYUN_APPID, XFYUN_API_KEY, XFYUN_API_SECRET", chatbot=chatbot, history=history, delay=0)
+        return
+
     if additional_fn is not None:
         from core_functional import handle_core_functionality
         inputs, history = handle_core_functionality(additional_fn, inputs, history, chatbot)


@@ -58,7 +58,7 @@ class Ws_Param(object):
 class SparkRequestInstance():
     def __init__(self):
         XFYUN_APPID, XFYUN_API_SECRET, XFYUN_API_KEY = get_conf('XFYUN_APPID', 'XFYUN_API_SECRET', 'XFYUN_API_KEY')
-        if XFYUN_APPID == '00000000' or XFYUN_APPID == '': raise RuntimeError('请配置讯飞星火大模型的XFYUN_APPID, XFYUN_API_KEY, XFYUN_API_SECRET')
         self.appid = XFYUN_APPID
         self.api_secret = XFYUN_API_SECRET
         self.api_key = XFYUN_API_KEY


@@ -20,4 +20,4 @@ arxiv
 rich
 pypdf2==2.12.1
 websocket-client
-scipdf_parser==0.3
+scipdf_parser>=0.3


@@ -9,9 +9,10 @@ validate_path() # 返回项目根路径
 from tests.test_utils import plugin_test

 if __name__ == "__main__":
-    # plugin_test(plugin='crazy_functions.虚空终端->自动终端', main_input='修改api-key为sk-jhoejriotherjep')
-    plugin_test(plugin='crazy_functions.虚空终端->自动终端', main_input='调用插件对C:/Users/fuqingxu/Desktop/旧文件/gpt/chatgpt_academic/crazy_functions/latex_fns中的python文件进行解析')
+    # plugin_test(plugin='crazy_functions.虚空终端->虚空终端', main_input='修改api-key为sk-jhoejriotherjep')
+    plugin_test(plugin='crazy_functions.批量翻译PDF文档_NOUGAT->批量翻译PDF文档', main_input='crazy_functions/test_project/pdf_and_word/aaai.pdf')
+    # plugin_test(plugin='crazy_functions.虚空终端->虚空终端', main_input='调用插件对C:/Users/fuqingxu/Desktop/旧文件/gpt/chatgpt_academic/crazy_functions/latex_fns中的python文件进行解析')
     # plugin_test(plugin='crazy_functions.命令行助手->命令行助手', main_input='查看当前的docker容器列表')


@@ -1,15 +1,21 @@
 /* hide remove all button */
 .remove-all.svelte-aqlk7e.svelte-aqlk7e.svelte-aqlk7e {
     visibility: hidden;
 }

 /* hide selector border */
 #input-plugin-group .wrap.svelte-aqlk7e.svelte-aqlk7e.svelte-aqlk7e {
     border: 0px;
     box-shadow: none;
 }

 /* hide selector label */
 #input-plugin-group .svelte-1gfkn6j {
     visibility: hidden;
 }

+/* height of the upload box */
+.wrap.svelte-xwlu1w {
+    min-height: var(--size-32);
+}


@@ -24,6 +24,19 @@ pj = os.path.join
 class ChatBotWithCookies(list):
     def __init__(self, cookie):
+        """
+        cookies = {
+            'top_p': top_p,
+            'temperature': temperature,
+            'lock_plugin': bool,
+            "files_to_promote": ["file1", "file2"],
+            "most_recent_uploaded": {
+                "path": "uploaded_path",
+                "time": time.time(),
+                "time_str": "timestr",
+            }
+        }
+        """
         self._cookies = cookie

     def write_list(self, list):
@@ -47,6 +60,8 @@ def ArgsGeneralWrapper(f):
         # 引入一个有cookie的chatbot
         cookies.update({
             'top_p':top_p,
+            'api_key': cookies['api_key'],
+            'llm_model': llm_model,
             'temperature':temperature,
         })
         llm_kwargs = {
@@ -69,7 +84,7 @@ def ArgsGeneralWrapper(f):
             # 处理个别特殊插件的锁定状态
             module, fn_name = cookies['lock_plugin'].split('->')
             f_hot_reload = getattr(importlib.import_module(module, fn_name), fn_name)
-            yield from f_hot_reload(txt_passon, llm_kwargs, plugin_kwargs, chatbot_with_cookie, history, system_prompt, *args)
+            yield from f_hot_reload(txt_passon, llm_kwargs, plugin_kwargs, chatbot_with_cookie, history, system_prompt, request)
     return decorated
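
For orientation on the lock mechanism: `cookies['lock_plugin']` holds a `'module->function'` string, and the wrapper re-imports and re-resolves that function on every turn, which is what routes all subsequent traffic to a single plugin. A sketch of the resolution step (the module path is illustrative):

```python
import importlib

# Sketch of the lock_plugin resolution; the module path below is illustrative.
lock = 'crazy_functions.虚空终端->虚空终端'   # written into cookies by the plugin itself
module, fn_name = lock.split('->')
f_hot_reload = getattr(importlib.import_module(module), fn_name)  # re-resolved on every call
```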
@@ -266,8 +281,7 @@ def report_execption(chatbot, history, a, b):
     向chatbot中添加错误信息
     """
     chatbot.append((a, b))
-    history.append(a)
-    history.append(b)
+    history.extend([a, b])

 def text_divide_paragraph(text):
@@ -290,6 +304,7 @@ def text_divide_paragraph(text):
     text = "</br>".join(lines)
     return pre + text + suf

 @lru_cache(maxsize=128) # 使用 lru缓存 加快转换速度
 def markdown_convertion(txt):
     """
@@ -344,19 +359,41 @@ def markdown_convertion(txt):
         content = content.replace('</script>\n</script>', '</script>')
         return content

-    def no_code(txt):
-        if '```' not in txt:
-            return True
-        else:
-            if '```reference' in txt: return True # newbing
-            else: return False
+    def is_equation(txt):
+        """
+        判定是否为公式 | 测试1 写出洛伦兹定律使用tex格式公式 测试2 给出柯西不等式使用latex格式 测试3 写出麦克斯韦方程组
+        """
+        if '```' in txt and '```reference' not in txt: return False
+        if '$' not in txt and '\\[' not in txt: return False
+        mathpatterns = {
+            r'(?<!\\|\$)(\$)([^\$]+)(\$)': {'allow_multi_lines': False}, # $...$
+            r'(?<!\\)(\$\$)([^\$]+)(\$\$)': {'allow_multi_lines': True}, # $$...$$
+            r'(?<!\\)(\\\[)(.+?)(\\\])': {'allow_multi_lines': False}, # \[...\]
+            # r'(?<!\\)(\\\()(.+?)(\\\))': {'allow_multi_lines': False}, # \(...\)
+            # r'(?<!\\)(\\begin{([a-z]+?\*?)})(.+?)(\\end{\2})': {'allow_multi_lines': True}, # \begin...\end
+            # r'(?<!\\)(\$`)([^`]+)(`\$)': {'allow_multi_lines': False}, # $`...`$
+        }
+        matches = []
+        for pattern, property in mathpatterns.items():
+            flags = re.ASCII|re.DOTALL if property['allow_multi_lines'] else re.ASCII
+            matches.extend(re.findall(pattern, txt, flags))
+        if len(matches) == 0: return False
+        contain_any_eq = False
+        illegal_pattern = re.compile(r'[^\x00-\x7F]|echo')
+        for match in matches:
+            if len(match) != 3: return False
+            eq_canidate = match[1]
+            if illegal_pattern.search(eq_canidate):
+                return False
+            else:
+                contain_any_eq = True
+        return contain_any_eq

-    if ('$' in txt) and no_code(txt): # 有$标识的公式符号,且没有代码段```的标识
+    if is_equation(txt): # 有$标识的公式符号,且没有代码段```的标识
         # convert everything to html format
         split = markdown.markdown(text='---')
-        convert_stage_1 = markdown.markdown(text=txt, extensions=['mdx_math', 'fenced_code', 'tables', 'sane_lists'], extension_configs=markdown_extension_configs)
+        convert_stage_1 = markdown.markdown(text=txt, extensions=['sane_lists', 'tables', 'mdx_math', 'fenced_code'], extension_configs=markdown_extension_configs)
         convert_stage_1 = markdown_bug_hunt(convert_stage_1)
+        # re.DOTALL: Make the '.' special character match any character at all, including a newline; without this flag, '.' will match anything except a newline. Corresponds to the inline flag (?s).
         # 1. convert to easy-to-copy tex (do not render math)
         convert_stage_2_1, n = re.subn(find_equation_pattern, replace_math_no_render, convert_stage_1, flags=re.DOTALL)
         # 2. convert to rendered equation
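
The net effect of the new `is_equation`: reject fenced code, require at least one `$` or `\[` delimiter, and accept only matches whose body is pure ASCII, so ordinary Chinese prose containing a stray `$` is not rendered as math. A standalone sketch of the same idea, using only the first `$...$` pattern for brevity:

```python
import re

# Standalone sketch of the math-detection idea; only the $...$ pattern is used here.
pattern = r'(?<!\\|\$)(\$)([^\$]+)(\$)'
illegal = re.compile(r'[^\x00-\x7F]|echo')

def looks_like_math(txt):
    matches = re.findall(pattern, txt, re.ASCII)
    return bool(matches) and all(not illegal.search(m[1]) for m in matches)

print(looks_like_math(r"洛伦兹力: $\vec{F} = q\vec{v} \times \vec{B}$"))  # True: ASCII math body
print(looks_like_math("这本书卖 $100,很便宜"))                            # False: no closing $
```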
@@ -364,7 +401,7 @@ def markdown_convertion(txt):
         # cat them together
         return pre + convert_stage_2_1 + f'{split}' + convert_stage_2_2 + suf
     else:
-        return pre + markdown.markdown(txt, extensions=['fenced_code', 'codehilite', 'tables', 'sane_lists']) + suf
+        return pre + markdown.markdown(txt, extensions=['sane_lists', 'tables', 'fenced_code', 'codehilite']) + suf

 def close_up_code_segment_during_stream(gpt_reply):
@@ -479,7 +516,8 @@ def find_recent_files(directory):
     current_time = time.time()
     one_minute_ago = current_time - 60
     recent_files = []
+    if not os.path.exists(directory):
+        os.makedirs(directory, exist_ok=True)
     for filename in os.listdir(directory):
         file_path = os.path.join(directory, filename)
         if file_path.endswith('.log'):
@@ -503,15 +541,15 @@ def promote_file_to_downloadzone(file, rename_file=None, chatbot=None):
     if not os.path.exists(new_path): shutil.copyfile(file, new_path)
     # 将文件添加到chatbot cookie中避免多用户干扰
     if chatbot:
-        if 'file_to_promote' in chatbot._cookies: current = chatbot._cookies['file_to_promote']
+        if 'files_to_promote' in chatbot._cookies: current = chatbot._cookies['files_to_promote']
         else: current = []
-        chatbot._cookies.update({'file_to_promote': [new_path] + current})
+        chatbot._cookies.update({'files_to_promote': [new_path] + current})

 def disable_auto_promotion(chatbot):
-    chatbot._cookies.update({'file_to_promote': []})
+    chatbot._cookies.update({'files_to_promote': []})
     return

-def on_file_uploaded(files, chatbot, txt, txt2, checkboxes):
+def on_file_uploaded(files, chatbot, txt, txt2, checkboxes, cookies):
     """
     当文件被上传时的回调函数
     """
@@ -545,15 +583,21 @@ def on_file_uploaded(files, chatbot, txt, txt2, checkboxes):
     chatbot.append(['我上传了文件,请查收',
                     f'[Local Message] 收到以下文件: \n\n{moved_files_str}' +
                     f'\n\n调用路径参数已自动修正到: \n\n{txt}' +
-                    f'\n\n现在您点击任意“红颜色”标识的函数插件时,以上文件将被作为输入参数'+err_msg])
-    return chatbot, txt, txt2
+                    f'\n\n现在您点击任意函数插件时,以上文件将被作为输入参数'+err_msg])
+    cookies.update({
+        'most_recent_uploaded': {
+            'path': f'private_upload/{time_tag}',
+            'time': time.time(),
+            'time_str': time_tag
+    }})
+    return chatbot, txt, txt2, cookies

 def on_report_generated(cookies, files, chatbot):
     from toolbox import find_recent_files
-    if 'file_to_promote' in cookies:
-        report_files = cookies['file_to_promote']
-        cookies.pop('file_to_promote')
+    if 'files_to_promote' in cookies:
+        report_files = cookies['files_to_promote']
+        cookies.pop('files_to_promote')
     else:
         report_files = find_recent_files('gpt_log')
     if len(report_files) == 0:
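
The new `most_recent_uploaded` cookie gives plugins a uniform way to locate the latest upload instead of re-parsing the input box. A hypothetical consumer (the helper name and age threshold are invented for illustration):

```python
import time

# Hypothetical helper: how a plugin might read the new cookie (name and threshold invented).
def get_recent_upload(chatbot, max_age_sec=3600):
    recent = chatbot._cookies.get('most_recent_uploaded', None)
    if recent is None:
        return None
    if time.time() - recent['time'] > max_age_sec:
        return None  # stale upload, ignore
    return recent['path']  # e.g. 'private_upload/<time_str>'
```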
@@ -1001,7 +1045,7 @@ def get_plugin_default_kwargs():
     chatbot = ChatBotWithCookies(llm_kwargs)

     # txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port
-    default_plugin_kwargs = {
+    DEFAULT_FN_GROUPS_kwargs = {
         "main_input": "./README.md",
         "llm_kwargs": llm_kwargs,
         "plugin_kwargs": {},
@@ -1010,7 +1054,7 @@ def get_plugin_default_kwargs():
         "system_prompt": "You are a good AI.",
         "web_port": WEB_PORT
     }
-    return default_plugin_kwargs
+    return DEFAULT_FN_GROUPS_kwargs

 def get_chat_default_kwargs():
     """


@@ -1,5 +1,5 @@
 {
-    "version": 3.49,
+    "version": 3.50,
     "show_feature": true,
-    "new_feature": "支持借助GROBID实现PDF高精度翻译 <-> 接入百度千帆平台和文心一言 <-> 接入阿里通义千问、讯飞星火、上海AI-Lab书生 <-> 优化一键升级 <-> 提高arxiv翻译速度和成功率 <-> 支持自定义APIKEY格式 <-> 临时修复theme的文件丢失问题 <-> 新增实时语音对话插件(自动断句,脱手对话) <-> 支持加载自定义的ChatGLM2微调模型 <-> 动态ChatBot窗口高度 <-> 修复Azure接口的BUG <-> 完善多语言模块"
+    "new_feature": "支持插件分类! <-> 支持用户使用自然语言调度各个插件(虚空终端) <-> 改进UI设计新主题 <-> 支持借助GROBID实现PDF高精度翻译 <-> 接入百度千帆平台和文心一言 <-> 接入阿里通义千问、讯飞星火、上海AI-Lab书生 <-> 优化一键升级 <-> 提高arxiv翻译速度和成功率"
 }