Merge branch 'master' into frontier

Commit 5b06a6cae5 by binary-husky, 2023-11-24 03:28:07 +08:00
12 changed files with 219 additions and 122 deletions


@ -28,7 +28,7 @@ To translate this project to arbitrary language with GPT, read and run [`multi_l
Feature (⭐ = recently added) | Description
--- | ---
⭐[Connect new models](https://github.com/binary-husky/gpt_academic/wiki/%E5%A6%82%E4%BD%95%E5%88%87%E6%8D%A2%E6%A8%A1%E5%9E%8B) | Baidu [Qianfan](https://cloud.baidu.com/doc/WENXINWORKSHOP/s/Nlks5zkzu) and Wenxin Yiyan, Tongyi Qianwen [Qwen](https://modelscope.cn/models/qwen/Qwen-7B-Chat/summary), Shanghai AI-Lab [InternLM](https://github.com/InternLM/InternLM), iFlytek [Spark](https://xinghuo.xfyun.cn/), [LLaMa2](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf), [Zhipu API](https://open.bigmodel.cn/), DALLE3, [DeepseekCoder](https://coder.deepseek.com/)
Polishing, translation, code explanation | One-click polishing, translation, locating grammar errors in papers, explaining code
[Custom shortcut keys](https://www.bilibili.com/video/BV14s4y1E7jN) | Supports custom shortcut keys
Modular design | Supports powerful custom [plugins](https://github.com/binary-husky/gpt_academic/tree/master/crazy_functions); plugins support [hot reloading](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
@ -92,36 +92,38 @@ Latex论文一键校对 | [插件] 仿Grammarly对Latex文章进行语法、拼
### Installation Method I: Run Directly (Windows, Linux or MacOS)

1. Download the project
```sh
git clone --depth=1 https://github.com/binary-husky/gpt_academic.git
cd gpt_academic
```

2. Configure the API_KEY

Configure the API KEY and other settings in `config.py`. [Click here for notes on special network environments](https://github.com/binary-husky/gpt_academic/issues/1), and see the [Wiki page](https://github.com/binary-husky/gpt_academic/wiki/项目配置说明).

「 The program first checks whether a private configuration file named `config_private.py` exists, and uses its entries to override the same-named entries in `config.py`. If you understand this loading logic, we strongly recommend creating a new configuration file named `config_private.py` next to `config.py` and moving (copying) into it only the entries you have modified in `config.py`; an illustrative example follows below. 」

「 The project can also be configured via `environment variables`; see `docker-compose.yml` or our [Wiki page](https://github.com/binary-husky/gpt_academic/wiki/项目配置说明) for the format. Configuration priority: `environment variables` > `config_private.py` > `config.py`. 」
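For illustration only, a minimal sketch of what such a `config_private.py` might contain, assuming you only changed the API key and proxy settings (the entry names follow `config.py`; the values are placeholders):
```python
# config_private.py (illustrative example; keep it next to config.py).
# Only the entries you actually changed need to appear here; at startup they
# override the same-named entries in config.py (environment variables override both).
API_KEY = "sk-xxxxxxxxxxxxxxxxxxxxxxxx"    # placeholder key
USE_PROXY = True
proxies = {
    "http":  "socks5h://localhost:11284",  # example proxy address
    "https": "socks5h://localhost:11284",
}
```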
3. Install dependencies
```sh
# Option I: if you are familiar with python (recommended python version: 3.9 ~ 3.11). Note: use the official pip source or the Aliyun pip source; to switch sources temporarily: python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/
python -m pip install -r requirements.txt

# Option II: use Anaconda (the steps are similar) (https://www.bilibili.com/video/BV1rc411W7Dr)
conda create -n gptac_venv python=3.11    # create the anaconda environment
conda activate gptac_venv                 # activate the anaconda environment
python -m pip install -r requirements.txt # same installation step as with pip
```

<details><summary>Click to expand if you need Tsinghua ChatGLM2 / Fudan MOSS / RWKV as a backend</summary>
<p>

[Optional step] If you need Tsinghua ChatGLM2 / Fudan MOSS as a backend, additional dependencies must be installed (prerequisites: familiar with Python + have used Pytorch + a sufficiently powerful machine):
```sh
# [Optional step I] Support Tsinghua ChatGLM2. Note: if you encounter the error "Call ChatGLM fail 不能正常加载ChatGLM的参数", refer to the following: 1. The default installation above is the torch+cpu version; to use cuda, uninstall torch and reinstall torch+cuda; 2. If the model cannot be loaded because your machine is not powerful enough, change the model precision in request_llm/bridge_chatglm.py: change every AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) to AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)
python -m pip install -r request_llms/requirements_chatglm.txt
@ -143,39 +145,39 @@ AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-
4. Run
```sh
python main.py
```
### Installation Method II: Use Docker

0. Deploy the full capabilities of the project (this is a large image that includes cuda and latex; not recommended if your network is slow or your disk is small)
[![fullcapacity](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-all-capacity.yml/badge.svg?branch=master)](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-all-capacity.yml)

``` sh
# Edit docker-compose.yml: keep scheme 0 and delete the other schemes, then run:
docker-compose up
```
1. ChatGPT + Wenxin Yiyan + Spark and other online models only (recommended for most users)
[![basic](https://github.com/binary-husky/gpt_academic/actions/workflows/build-without-local-llms.yml/badge.svg?branch=master)](https://github.com/binary-husky/gpt_academic/actions/workflows/build-without-local-llms.yml)
[![basiclatex](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-latex.yml/badge.svg?branch=master)](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-latex.yml)
[![basicaudio](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-audio-assistant.yml/badge.svg?branch=master)](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-audio-assistant.yml)

``` sh
# Edit docker-compose.yml: keep scheme 1 and delete the other schemes, then run:
docker-compose up
```
P.S. If you need plugin features that depend on Latex, see the Wiki. You can also use scheme 4 or scheme 0 directly to get Latex support.

2. ChatGPT + ChatGLM2 + MOSS + LLAMA2 + Tongyi Qianwen (requires familiarity with the [Nvidia Docker](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#installing-on-ubuntu-and-debian) runtime)
[![chatglm](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-chatglm.yml/badge.svg?branch=master)](https://github.com/binary-husky/gpt_academic/actions/workflows/build-with-chatglm.yml)

``` sh
# Edit docker-compose.yml: keep scheme 2 and delete the other schemes, then run:
docker-compose up
```
### Installation Method III: Other Deployment Options
@ -196,9 +198,11 @@ docker-compose up
# Advanced Usage
### I: Custom convenience buttons (academic shortcuts)
Open `core_functional.py` in any text editor, add an entry like the one below, and restart the program. (If the button already exists, both the prefix and the suffix can be hot-modified and take effect without restarting.)
For example:
```python
"超级英译中": {
# Prefix: prepended to your input. Used, for example, to describe your request, such as translating, explaining code, polishing, etc.
"Prefix": "请翻译把下面一段内容成中文然后用一个markdown表格逐一解释文中出现的专有名词\n\n",
@ -207,6 +211,7 @@ docker-compose up
"Suffix": "",
},
```
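For intuition, a tiny illustrative snippet (not the project's actual dispatch code) showing how such an entry wraps whatever you type before it is sent to the model:
```python
# Illustrative only: conceptually, the button's Prefix and Suffix are concatenated
# around the user's input to form the final prompt.
entry = {"Prefix": "请翻译下面的内容:\n\n", "Suffix": ""}  # abbreviated version of the entry above
user_input = "Attention is all you need."
final_prompt = entry["Prefix"] + user_input + entry["Suffix"]
print(final_prompt)
```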
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/226899272-477c2134-ed71-4326-810c-29891fe4a508.png" width="500" >
</div>
@ -283,6 +288,7 @@ Tip不指定文件直接点击 `载入对话历史存档` 可以查看历史h
### II: Versions:
- version 3.70 (todo): optimize the AutoGen plugin theme and design a series of derivative plugins
- version 3.60: introduce AutoGen as the cornerstone of the next generation of plugins
- version 3.57: support GLM3, Spark v3 and Wenxin Yiyan v4; fix concurrency bugs in local models
@ -303,7 +309,7 @@ Tip不指定文件直接点击 `载入对话历史存档` 可以查看历史h
- version 3.0: support for chatglm and other small LLMs
- version 2.6: restructured the plugin architecture, improved interactivity, added more plugins
- version 2.5: self-updating; solved the problem of overly long text and token overflow when summarizing large project source code
- version 2.4: added full-text PDF translation; added the ability to reposition the input area
- version 2.3: enhanced multi-threaded interactivity
- version 2.2: function plugins support hot reloading
- version 2.1: collapsible layout
@ -325,6 +331,7 @@ GPT Academic开发者QQ群`610599535`
1. `master` branch: main branch, stable version
2. `frontier` branch: development branch, beta version
3. How to connect other large models: [connect other large models](request_llms/README.md)
### V: References and Learning


@ -91,8 +91,8 @@ AVAIL_LLM_MODELS = ["gpt-3.5-turbo-1106","gpt-4-1106-preview",
"gpt-3.5-turbo-16k", "gpt-3.5-turbo", "azure-gpt-3.5",
"api2d-gpt-3.5-turbo", 'api2d-gpt-3.5-turbo-16k',
"gpt-4", "gpt-4-32k", "azure-gpt-4", "api2d-gpt-4",
"chatglm3", "moss", "newbing", "claude-2"]
# P.S. 其他可用的模型还包括 ["zhipuai", "qianfan", "llama2", "qwen", "gpt-3.5-turbo-0613", "gpt-3.5-turbo-16k-0613", "gpt-3.5-random"
"chatglm3", "moss", "claude-2"]
# P.S. 其他可用的模型还包括 ["zhipuai", "qianfan", "deepseekcoder", "llama2", "qwen", "gpt-3.5-turbo-0613", "gpt-3.5-turbo-16k-0613", "gpt-3.5-random"
# "spark", "sparkv2", "sparkv3", "chatglm_onnx", "claude-1-100k", "claude-2", "internlm", "jittorllms_pangualpha", "jittorllms_llama"]
@ -271,11 +271,31 @@ NUM_CUSTOM_BASIC_BTN = 4
BAIDU_CLOUD_API_KEY
BAIDU_CLOUD_SECRET_KEY
"newbing" Newbing接口不再稳定不推荐使用
"zhipuai" 智谱AI大模型chatglm_turbo
ZHIPUAI_API_KEY
ZHIPUAI_MODEL
"newbing" Newbing接口不再稳定不推荐使用
NEWBING_STYLE
NEWBING_COOKIES
本地大模型示意图
"chatglm3"
"chatglm"
"chatglm_onnx"
"chatglmft"
"internlm"
"moss"
"jittorllms_pangualpha"
"jittorllms_llama"
"deepseekcoder"
"qwen"
RWKV support: see the Wiki
"llama2"
Overview of the GUI layout configuration
CHATBOT_HEIGHT Height of the chat window
@ -286,7 +306,7 @@ NUM_CUSTOM_BASIC_BTN = 4
THEME Color theme
AUTO_CLEAR_TXT Whether to automatically clear the input box on submit
ADD_WAIFU Add a live2d decoration
ALLOW_RESET_CONFIG Whether to allow modifying this page's configuration via natural-language descriptions (this feature carries some risk)
Overview of the configuration required by plugin online services
@ -298,7 +318,7 @@ NUM_CUSTOM_BASIC_BTN = 4
ALIYUN_ACCESSKEY
ALIYUN_SECRET
Accurate PDF document parsing
GROBID_URLS
"""


@ -1,4 +1,4 @@
from toolbox import update_ui, get_conf, trimmed_format_exc, get_max_token
import threading
import os
import logging
@ -92,7 +92,7 @@ def request_gpt_model_in_new_thread_with_ui_alive(
# [Chosen handling] Try to compute a ratio so that as much text as possible is kept
from toolbox import get_reduce_token_percent
p_ratio, n_exceed = get_reduce_token_percent(str(token_exceeded_error))
MAX_TOKEN = get_max_token(llm_kwargs)
EXCEED_ALLO = 512 + 512 * exceeded_cnt
inputs, history = input_clipping(inputs, history, max_token_limit=MAX_TOKEN-EXCEED_ALLO)
mutable[0] += f'[Local Message] 警告文本过长将进行截断Token溢出数{n_exceed}\n\n'
@ -224,7 +224,7 @@ def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
# [Chosen handling] Try to compute a ratio so that as much text as possible is kept
from toolbox import get_reduce_token_percent
p_ratio, n_exceed = get_reduce_token_percent(str(token_exceeded_error))
MAX_TOKEN = get_max_token(llm_kwargs)
EXCEED_ALLO = 512 + 512 * exceeded_cnt
inputs, history = input_clipping(inputs, history, max_token_limit=MAX_TOKEN-EXCEED_ALLO)
gpt_say += f'[Local Message] 警告文本过长将进行截断Token溢出数{n_exceed}\n\n'
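The new `get_max_token` helper presumably looks up the per-model context limit from the model registry instead of the old hard-coded 4096; a minimal sketch under that assumption (the real implementation lives in `toolbox.py` and may differ):
```python
def get_max_token(llm_kwargs):
    # Return the context-window size registered for the currently selected model.
    from request_llms.bridge_all import model_info
    return model_info[llm_kwargs['llm_model']]['max_token']
```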


@ -1,79 +1,35 @@
# How to use other large language models

## ChatGLM
- Install dependencies: `pip install -r request_llms/requirements_chatglm.txt`
- Modify the configuration: in config.py, change the value of LLM_MODEL to "chatglm"
``` sh
LLM_MODEL = "chatglm"
```
- Run!
``` sh
python main.py
```

## Claude-Stack
- See this tutorial to obtain the following: https://zhuanlan.zhihu.com/p/627485689
- 1. SLACK_CLAUDE_BOT_ID
- 2. SLACK_CLAUDE_USER_TOKEN
- Add the tokens to config.py

## Newbing
- Use a cookie editor to obtain the cookie (json)
- Add the cookie (json) to NEWBING_COOKIES in config.py

## Moss
- Use docker-compose

## RWKV
- Use docker-compose

## LLAMA
- Use docker-compose

## Pangu
- Use docker-compose

P.S. If you successfully connect a new large model by following the steps below, you are welcome to submit a Pull Request; if you run into difficulties while connecting a new model, feel free to contact the group owner via the QQ group at the bottom of the README.

---
## Text-Generation-UI (TGUI, still being debugged, currently unavailable)
### 1. Deploy TGUI
``` sh
# 1 Clone text-generation-webui
git clone https://github.com/oobabooga/text-generation-webui.git
# 2 The latest code in this repository has problems; roll back to a version from a few weeks earlier
git reset --hard fcda3f87767e642d1c0411776e549e1d3894843d
# 3 Change directory
cd text-generation-webui
# 4 Install text-generation's extra dependencies
pip install accelerate bitsandbytes flexgen gradio llamacpp markdown numpy peft requests rwkv safetensors sentencepiece tqdm datasets git+https://github.com/huggingface/transformers
# 5 Download the model
python download-model.py facebook/galactica-1.3b
# Other options include facebook/opt-1.3b
#                       facebook/galactica-1.3b
#                       facebook/galactica-6.7b
#                       facebook/galactica-120b
#                       facebook/pygmalion-1.3b etc.
# See https://github.com/oobabooga/text-generation-webui for details
# 6 Start text-generation
python server.py --cpu --listen --listen-port 7865 --model facebook_galactica-1.3b
```

### 2. Modify config.py
``` sh
# LLM_MODEL format: tgui:[model]@[ws address]:[ws port] ; the port must match the one given above
LLM_MODEL = "tgui:galactica-1.3b@localhost:7865"
```

### 3. Run!
``` sh
cd chatgpt-academic
python main.py
```

# How to connect other local large language models

1. Copy `request_llms/bridge_llama2.py` and rename it to whatever you like

2. Modify the `load_model_and_tokenizer` method to load your model and tokenizer (find a demo on the model's official page and copy-paste it)

3. Modify the `llm_stream_generator` method to define model inference (find a demo on the model's official page and copy-paste it)

4. Test from the command line
    - Modify `tests/test_llms.py` (a quick look at that file will make clear what to change)
    - Run `python tests/test_llms.py`
5. After the tests pass, make the final changes in `request_llms/bridge_all.py` to fully connect your model to the framework (a quick look at that file will make clear what to change; a registration sketch follows below)

6. Change the `LLM_MODEL` configuration, then run `python main.py` to test the final result
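For step 5 above, a hedged sketch of what that final registration might look like, modeled on the existing entries in `bridge_all.py`; the `bridge_mymodel` module and the `"mymodel"` key are placeholders, not part of this commit:
```python
# request_llms/bridge_all.py (illustrative addition)
from .bridge_mymodel import predict_no_ui_long_connection as mymodel_noui  # hypothetical module
from .bridge_mymodel import predict as mymodel_ui

model_info.update({
    "mymodel": {
        "fn_with_ui": mymodel_ui,        # streaming entry point used by the web UI
        "fn_without_ui": mymodel_noui,   # blocking entry point used by plugins
        "endpoint": None,                # local model, no HTTP endpoint
        "max_token": 4096,               # context window assumed here
        "tokenizer": tokenizer_gpt35,    # token counting borrows the gpt-3.5 tokenizer
        "token_cnt": get_token_num_gpt35,
    }
})
```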
# How to connect other online large language models

1. Copy `request_llms/bridge_zhipu.py` and rename it to whatever you like

2. Modify `predict_no_ui_long_connection`

3. Modify `predict` (a skeleton of both functions is sketched after this list)

4. Test from the command line
    - Modify `tests/test_llms.py` (a quick look at that file will make clear what to change)
    - Run `python tests/test_llms.py`

5. After the tests pass, make the final changes in `request_llms/bridge_all.py` to fully connect your model to the framework (a quick look at that file will make clear what to change)

6. Change the `LLM_MODEL` configuration, then run `python main.py` to test the final result
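A minimal skeleton of the two functions an online bridge provides, following the calling pattern visible in the existing bridges; the exact signatures and default arguments are assumptions, so check `bridge_zhipu.py` for the authoritative version:
```python
def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
                                  observe_window=None, console_slience=False):
    # Blocking variant used by plugins: call your provider's API and
    # return the complete reply as a single string.
    ...

def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[],
            system_prompt="", stream=True, additional_fn=None):
    # Streaming variant used by the web UI: append partial replies to `chatbot`
    # and yield UI updates as they arrive.
    ...
```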


@ -543,6 +543,22 @@ if "zhipuai" in AVAIL_LLM_MODELS: # zhipuai
})
except:
print(trimmed_format_exc())
if "deepseekcoder" in AVAIL_LLM_MODELS: # deepseekcoder
try:
from .bridge_deepseekcoder import predict_no_ui_long_connection as deepseekcoder_noui
from .bridge_deepseekcoder import predict as deepseekcoder_ui
model_info.update({
"deepseekcoder": {
"fn_with_ui": deepseekcoder_ui,
"fn_without_ui": deepseekcoder_noui,
"endpoint": None,
"max_token": 4096,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
}
})
except:
print(trimmed_format_exc())
# <-- used to define and switch between multiple azure models -->
AZURE_CFG_ARRAY = get_conf("AZURE_CFG_ARRAY")


@ -0,0 +1,88 @@
model_name = "deepseek-coder-6.7b-instruct"
cmd_to_install = "未知" # "`pip install -r request_llms/requirements_qwen.txt`"
import os
from toolbox import ProxyNetworkActivate
from toolbox import get_conf
from .local_llm_class import LocalLLMHandle, get_local_llm_predict_fns
from threading import Thread
def download_huggingface_model(model_name, max_retry, local_dir):
from huggingface_hub import snapshot_download
for i in range(1, max_retry):
try:
snapshot_download(repo_id=model_name, local_dir=local_dir, resume_download=True)
break
except Exception as e:
print(f'\n\n下载失败,重试第{i}次中...\n\n')
return local_dir
# ------------------------------------------------------------------------------------------------------------------------
# 🔌💻 Local Model
# ------------------------------------------------------------------------------------------------------------------------
class GetCoderLMHandle(LocalLLMHandle):
def load_model_info(self):
# 🏃‍♂️🏃‍♂️🏃‍♂️ runs in the child process
self.model_name = model_name
self.cmd_to_install = cmd_to_install
def load_model_and_tokenizer(self):
# 🏃‍♂️🏃‍♂️🏃‍♂️ runs in the child process
with ProxyNetworkActivate('Download_LLM'):
from transformers import AutoTokenizer, AutoModelForCausalLM, TextIteratorStreamer
model_name = "deepseek-ai/deepseek-coder-6.7b-instruct"
# local_dir = f"~/.cache/{model_name}"
# if not os.path.exists(local_dir):
# tokenizer = download_huggingface_model(model_name, max_retry=128, local_dir=local_dir)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
self._streamer = TextIteratorStreamer(tokenizer)
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)
if get_conf('LOCAL_MODEL_DEVICE') != 'cpu':
model = model.cuda()
return model, tokenizer
def llm_stream_generator(self, **kwargs):
# 🏃‍♂️🏃‍♂️🏃‍♂️ runs in the child process
def adaptor(kwargs):
query = kwargs['query']
max_length = kwargs['max_length']
top_p = kwargs['top_p']
temperature = kwargs['temperature']
history = kwargs['history']
return query, max_length, top_p, temperature, history
query, max_length, top_p, temperature, history = adaptor(kwargs)
history.append({ 'role': 'user', 'content': query})
messages = history
inputs = self._tokenizer.apply_chat_template(messages, return_tensors="pt").to(self._model.device)
generation_kwargs = dict(
inputs=inputs,
max_new_tokens=max_length,
do_sample=False,
top_p=top_p,
streamer = self._streamer,
top_k=50,
temperature=temperature,
num_return_sequences=1,
eos_token_id=32021,
)
thread = Thread(target=self._model.generate, kwargs=generation_kwargs, daemon=True)
thread.start()
generated_text = ""
for new_text in self._streamer:
generated_text += new_text
# print(generated_text)
yield generated_text
def try_to_import_special_deps(self, **kwargs): pass
# import something that will raise error if the user does not install requirement_*.txt
# 🏃‍♂️🏃‍♂️🏃‍♂️ runs in the main process
# import importlib
# importlib.import_module('modelscope')
# ------------------------------------------------------------------------------------------------------------------------
# 🔌💻 GPT-Academic Interface
# ------------------------------------------------------------------------------------------------------------------------
predict_no_ui_long_connection, predict = get_local_llm_predict_fns(GetCoderLMHandle, model_name, history_format='chatglm3')
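As a side note on `history_format='chatglm3'` above: judging from the `history.append` call in `llm_stream_generator`, the conversation history is expected as a list of role/content dicts. An illustrative example of that shape (an assumption based only on the code shown here):
```python
# Example of the history structure consumed by llm_stream_generator above.
history = [
    {'role': 'user', 'content': '你好'},
    {'role': 'assistant', 'content': 'Hello! How can I help you today?'},
]
```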


@ -12,7 +12,7 @@ from threading import Thread
# ------------------------------------------------------------------------------------------------------------------------
# 🔌💻 Local Model
# ------------------------------------------------------------------------------------------------------------------------
class GetLlamaHandle(LocalLLMHandle):
def load_model_info(self):
# 🏃‍♂️🏃‍♂️🏃‍♂️ runs in the child process
@ -87,4 +87,4 @@ class GetONNXGLMHandle(LocalLLMHandle):
# ------------------------------------------------------------------------------------------------------------------------
# 🔌💻 GPT-Academic Interface
# ------------------------------------------------------------------------------------------------------------------------
predict_no_ui_long_connection, predict = get_local_llm_predict_fns(GetLlamaHandle, model_name)


@ -15,7 +15,7 @@ from .local_llm_class import LocalLLMHandle, get_local_llm_predict_fns
# ------------------------------------------------------------------------------------------------------------------------
# 🔌💻 Local Model
# ------------------------------------------------------------------------------------------------------------------------
class GetQwenLMHandle(LocalLLMHandle):
def load_model_info(self):
# 🏃‍♂️🏃‍♂️🏃‍♂️ runs in the child process
@ -64,4 +64,4 @@ class GetONNXGLMHandle(LocalLLMHandle):
# ------------------------------------------------------------------------------------------------------------------------
# 🔌💻 GPT-Academic Interface
# ------------------------------------------------------------------------------------------------------------------------
predict_no_ui_long_connection, predict = get_local_llm_predict_fns(GetQwenLMHandle, model_name)


@ -1,6 +1,7 @@
import time
from toolbox import update_ui, get_conf, update_ui_lastest_msg
from toolbox import check_packages, report_exception
model_name = '智谱AI大模型'
@ -37,6 +38,14 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
chatbot.append((inputs, ""))
yield from update_ui(chatbot=chatbot, history=history)
# Try to import dependencies; if any are missing, suggest how to install them
try:
check_packages(["zhipuai"])
except:
yield from update_ui_lastest_msg(f"导入软件依赖失败。使用该模型需要额外依赖,安装方法```pip install --upgrade zhipuai```。",
chatbot=chatbot, history=history, delay=0)
return
if validate_key() is False:
yield from update_ui_lastest_msg(lastmsg="[Local Message] 请配置ZHIPUAI_API_KEY", chatbot=chatbot, history=history, delay=0)
return


@ -198,7 +198,7 @@ class LocalLLMHandle(Process):
if res.startswith(self.std_tag):
new_output = res[len(self.std_tag):]
std_out = std_out[:std_out_clip_len]
print(new_output, end='')
std_out = new_output + std_out
yield self.std_tag + '\n```\n' + std_out + '\n```\n'
elif res == '[Finish]':


@ -15,7 +15,8 @@ if __name__ == "__main__":
# from request_llms.bridge_jittorllms_pangualpha import predict_no_ui_long_connection
# from request_llms.bridge_jittorllms_llama import predict_no_ui_long_connection
# from request_llms.bridge_claude import predict_no_ui_long_connection
# from request_llms.bridge_internlm import predict_no_ui_long_connection
from request_llms.bridge_deepseekcoder import predict_no_ui_long_connection
# from request_llms.bridge_qwen import predict_no_ui_long_connection
# from request_llms.bridge_spark import predict_no_ui_long_connection
# from request_llms.bridge_zhipu import predict_no_ui_long_connection
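For context, the test presumably then invokes the imported function once, roughly along these lines (the `llm_kwargs` fields shown are assumptions based on what the local bridges read, not the actual test code):
```python
# Hypothetical continuation of tests/test_llms.py: call the selected backend once.
llm_kwargs = {'llm_model': 'deepseekcoder', 'max_length': 1024, 'top_p': 1, 'temperature': 1}
result = predict_no_ui_long_connection(inputs="Write a quicksort in Python.",
                                       llm_kwargs=llm_kwargs, history=[], sys_prompt="")
print(result)
```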


@ -1,5 +1,5 @@
{
"version": 3.60,
"version": 3.61,
"show_feature": true,
"new_feature": "修复多个BUG <-> AutoGen多智能体插件测试版 <-> 修复本地模型在Windows下的加载BUG <-> 支持文心一言v4和星火v3 <-> 支持GLM3和智谱的API <-> 解决本地模型并发BUG <-> 支持动态追加基础功能按钮 <-> 新汇报PDF汇总页面 <-> 重新编译Gradio优化使用体验"
"new_feature": "修复潜在的多用户冲突问题 <-> 接入Deepseek Coder <-> AutoGen多智能体插件测试版 <-> 修复本地模型在Windows下的加载BUG <-> 支持文心一言v4和星火v3 <-> 支持GLM3和智谱的API <-> 解决本地模型并发BUG <-> 支持动态追加基础功能按钮"
}