Added the qwen1.8b model

This commit is contained in:
parent da376068e1
commit 94ab41d3c0

README.md (83 lines changed)
@@ -24,7 +24,6 @@
[Installation-image]: https://img.shields.io/badge/Installation-v3.6.1-blue?style=flat-square
[Wiki-image]: https://img.shields.io/badge/wiki-项目文档-black?style=flat-square
[PRs-image]: https://img.shields.io/badge/PRs-welcome-pink?style=flat-square

[License-url]: https://github.com/binary-husky/gpt_academic/blob/master/LICENSE
[Github-url]: https://github.com/binary-husky/gpt_academic
[Releases-url]: https://github.com/binary-husky/gpt_academic/releases
@@ -32,7 +31,6 @@
[Wiki-url]: https://github.com/binary-husky/gpt_academic/wiki
[PRs-url]: https://github.com/binary-husky/gpt_academic/pulls

</div>
<br>
@@ -41,11 +39,10 @@
If you like this project, please give it a Star. Read this in [English](docs/README.English.md) | [日本語](docs/README.Japanese.md) | [한국어](docs/README.Korean.md) | [Русский](docs/README.Russian.md) | [Français](docs/README.French.md). All translations have been provided by the project itself. To translate this project to arbitrary language with GPT, read and run [`multi_language.py`](multi_language.py) (experimental).
<br>

> 1. Note that only the plugins (buttons) marked with **highlighting** support reading files, and some plugins are located in the **drop-down menu** of the plugin area. In addition, we welcome and process PRs for any new plugins with the **highest priority**.
>
> 2. The function of each file in this project is described in detail in the [self-analysis report](https://github.com/binary-husky/gpt_academic/wiki/GPT‐Academic项目自译解报告) `self_analysis.md`. As versions iterate, you can also click the relevant function plugin at any time to call GPT to regenerate the project's self-analysis report. For common questions, please check the wiki.
- > [](#installation) [](https://github.com/binary-husky/gpt_academic/releases) [](https://github.com/binary-husky/gpt_academic/wiki/项目配置说明) []([https://github.com/binary-husky/gpt_academic/wiki/项目配置说明](https://github.com/binary-husky/gpt_academic/wiki))
+ > [](#installation) [](https://github.com/binary-husky/gpt_academic/releases) [](https://github.com/binary-husky/gpt_academic/wiki/项目配置说明) [](<[https://github.com/binary-husky/gpt_academic/wiki/项目配置说明](https://github.com/binary-husky/gpt_academic/wiki)>)
>
> 3. This project is compatible with, and encourages trying, domestic large language models such as ChatGLM. Multiple api-keys can coexist; fill them into the configuration file like `API_KEY="openai-key1,openai-key2,azure-key3,api2d-key4"`. When you need to temporarily replace the `API_KEY`, enter the temporary `API_KEY` in the input area and press Enter to submit; it takes effect immediately.
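As a minimal sketch of how note 3's comma-separated keys look in an override file (the key values below are placeholders, not real keys):

```python
# config_private.py -- overrides config.py; values are placeholders
API_KEY = "openai-key1,openai-key2,azure-key3,api2d-key4"  # multiple api-keys coexist, comma-separated
```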
@@ -53,43 +50,42 @@ If you like this project, please give it a Star. Read this in [English](docs/REA
<div align="center">

-Feature (⭐ = recently added) | Description
---- | ---
-⭐[Connect new models](https://github.com/binary-husky/gpt_academic/wiki/%E5%A6%82%E4%BD%95%E5%88%87%E6%8D%A2%E6%A8%A1%E5%9E%8B) | Baidu [Qianfan](https://cloud.baidu.com/doc/WENXINWORKSHOP/s/Nlks5zkzu) and ERNIE Bot, Tongyi Qianwen [Qwen](https://modelscope.cn/models/qwen/Qwen-7B-Chat/summary), Shanghai AI-Lab [InternLM](https://github.com/InternLM/InternLM), iFlytek [Spark](https://xinghuo.xfyun.cn/), [LLaMa2](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf), [Zhipu API](https://open.bigmodel.cn/), DALLE3, [DeepseekCoder](https://coder.deepseek.com/)
-Polishing, translation, code explanation | One-click polishing, translation, finding grammar errors in papers, explaining code
-[Custom shortcut keys](https://www.bilibili.com/video/BV14s4y1E7jN) | Supports custom shortcut keys
-Modular design | Supports powerful custom [plugins](https://github.com/binary-husky/gpt_academic/tree/master/crazy_functions); plugins support [hot updates](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
-[Program analysis](https://www.bilibili.com/video/BV1cj411A7VW) | [Plugin] One-click analysis of Python/C/C++/Java/Lua/... project trees, or [self-analysis](https://www.bilibili.com/video/BV1cj411A7VW)
-Read papers, [translate](https://www.bilibili.com/video/BV1KT411x7Wn) papers | [Plugin] One-click interpretation of a full latex/pdf paper and generation of an abstract
-Full-text Latex [translation](https://www.bilibili.com/video/BV1nk4y1Y7Js/) and [polishing](https://www.bilibili.com/video/BV1FT411H7c5/) | [Plugin] One-click translation or polishing of latex papers
-Batch comment generation | [Plugin] One-click batch generation of function comments
-Markdown [Chinese-English translation](https://www.bilibili.com/video/BV1yo4y157jV/) | [Plugin] Have you seen the [README](https://github.com/binary-husky/gpt_academic/blob/master/docs/README_EN.md) in the 5 languages above? It is its handiwork
-Chat analysis report generation | [Plugin] Automatically generates a summary report after running
-[Full-text PDF paper translation](https://www.bilibili.com/video/BV1KT411x7Wn) | [Plugin] Extracts title & abstract from a PDF paper and translates the full text (multithreaded)
-[Arxiv assistant](https://www.bilibili.com/video/BV1LM4y1279X) | [Plugin] Enter an arxiv article url to translate the abstract and download the PDF in one click
-One-click Latex paper proofreading | [Plugin] Grammarly-style grammar and spelling correction for Latex papers, with a side-by-side PDF for comparison
-[Google Scholar integration assistant](https://www.bilibili.com/video/BV19L411U7ia) | [Plugin] Given any Google Scholar search page URL, let gpt [write the related works](https://www.bilibili.com/video/BV1GP411U7Az/) for you
-Internet information aggregation + GPT | [Plugin] One click to [let GPT fetch information from the internet](https://www.bilibili.com/video/BV1om4y127ck) before answering, so information never goes stale
-⭐Fine-grained translation of Arxiv papers ([Docker](https://github.com/binary-husky/gpt_academic/pkgs/container/gpt_academic_with_latex)) | [Plugin] One click to [translate arxiv papers at ultra-high quality](https://www.bilibili.com/video/BV1dz4y1v77A/); currently the best paper-translation tool
-⭐[Real-time voice conversation input](https://github.com/binary-husky/gpt_academic/blob/master/docs/use_audio.md) | [Plugin] Asynchronously [listens to audio](https://www.bilibili.com/video/BV1AV4y187Uy/), segments sentences automatically, and automatically finds the right moment to answer
-Formula/image/table display | Shows formulas in both their [tex form and rendered form](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png); supports formula and code highlighting
-⭐AutoGen multi-agent plugin | [Plugin] Explore the possibilities of emergent multi-agent intelligence with Microsoft AutoGen!
-Dark [theme](https://github.com/binary-husky/gpt_academic/issues/173) at startup | Append `/?__theme=dark` to the browser url to switch to the dark theme
-[Multi-LLM](https://www.bilibili.com/video/BV1wT411p7yf) support | Being served by GPT3.5, GPT4, [Tsinghua ChatGLM2](https://github.com/THUDM/ChatGLM2-6B), and [Fudan MOSS](https://github.com/OpenLMLab/MOSS) at the same time must feel great, right?
-⭐ChatGLM2 fine-tuned models | Supports loading ChatGLM2 fine-tuned models; provides a ChatGLM2 fine-tuning helper plugin
-More LLM integrations, supports [huggingface deployment](https://huggingface.co/spaces/qingxu98/gpt-academic) | Adds the Newbing interface (New Bing), introduces Tsinghua [Jittorllms](https://github.com/Jittor/JittorLLMs) supporting [LLaMA](https://github.com/facebookresearch/llama) and [Pangu-α](https://openi.org.cn/pangu/)
-⭐[void-terminal](https://github.com/binary-husky/void-terminal) pip package | Call all of this project's function plugins directly from Python, without the GUI (under development)
-⭐Void-terminal plugin | [Plugin] Dispatch this project's other plugins directly in natural language
-More new feature demos (image generation, etc.) …… | See the end of this document ……
-</div>
+| Feature (⭐ = recently added) | Description |
+| --- | --- |
+| ⭐[Connect new models](https://github.com/binary-husky/gpt_academic/wiki/%E5%A6%82%E4%BD%95%E5%88%87%E6%8D%A2%E6%A8%A1%E5%9E%8B) | Baidu [Qianfan](https://cloud.baidu.com/doc/WENXINWORKSHOP/s/Nlks5zkzu) and ERNIE Bot, Tongyi Qianwen [Qwen-7B](https://modelscope.cn/models/qwen/Qwen-7B-Chat/summary), Tongyi Qianwen [Qwen-1_8B](https://modelscope.cn/models/qwen/Qwen-1_8B-Chat/summary), Shanghai AI-Lab [InternLM](https://github.com/InternLM/InternLM), iFlytek [Spark](https://xinghuo.xfyun.cn/), [LLaMa2](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf), [Zhipu API](https://open.bigmodel.cn/), DALLE3, [DeepseekCoder](https://coder.deepseek.com/) |
+| Polishing, translation, code explanation | One-click polishing, translation, finding grammar errors in papers, explaining code |
+| [Custom shortcut keys](https://www.bilibili.com/video/BV14s4y1E7jN) | Supports custom shortcut keys |
+| Modular design | Supports powerful custom [plugins](https://github.com/binary-husky/gpt_academic/tree/master/crazy_functions); plugins support [hot updates](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97) |
+| [Program analysis](https://www.bilibili.com/video/BV1cj411A7VW) | [Plugin] One-click analysis of Python/C/C++/Java/Lua/... project trees, or [self-analysis](https://www.bilibili.com/video/BV1cj411A7VW) |
+| Read papers, [translate](https://www.bilibili.com/video/BV1KT411x7Wn) papers | [Plugin] One-click interpretation of a full latex/pdf paper and generation of an abstract |
+| Full-text Latex [translation](https://www.bilibili.com/video/BV1nk4y1Y7Js/) and [polishing](https://www.bilibili.com/video/BV1FT411H7c5/) | [Plugin] One-click translation or polishing of latex papers |
+| Batch comment generation | [Plugin] One-click batch generation of function comments |
+| Markdown [Chinese-English translation](https://www.bilibili.com/video/BV1yo4y157jV/) | [Plugin] Have you seen the [README](https://github.com/binary-husky/gpt_academic/blob/master/docs/README_EN.md) in the 5 languages above? It is its handiwork |
+| Chat analysis report generation | [Plugin] Automatically generates a summary report after running |
+| [Full-text PDF paper translation](https://www.bilibili.com/video/BV1KT411x7Wn) | [Plugin] Extracts title & abstract from a PDF paper and translates the full text (multithreaded) |
+| [Arxiv assistant](https://www.bilibili.com/video/BV1LM4y1279X) | [Plugin] Enter an arxiv article url to translate the abstract and download the PDF in one click |
+| One-click Latex paper proofreading | [Plugin] Grammarly-style grammar and spelling correction for Latex papers, with a side-by-side PDF for comparison |
+| [Google Scholar integration assistant](https://www.bilibili.com/video/BV19L411U7ia) | [Plugin] Given any Google Scholar search page URL, let gpt [write the related works](https://www.bilibili.com/video/BV1GP411U7Az/) for you |
+| Internet information aggregation + GPT | [Plugin] One click to [let GPT fetch information from the internet](https://www.bilibili.com/video/BV1om4y127ck) before answering, so information never goes stale |
+| ⭐Fine-grained translation of Arxiv papers ([Docker](https://github.com/binary-husky/gpt_academic/pkgs/container/gpt_academic_with_latex)) | [Plugin] One click to [translate arxiv papers at ultra-high quality](https://www.bilibili.com/video/BV1dz4y1v77A/); currently the best paper-translation tool |
+| ⭐[Real-time voice conversation input](https://github.com/binary-husky/gpt_academic/blob/master/docs/use_audio.md) | [Plugin] Asynchronously [listens to audio](https://www.bilibili.com/video/BV1AV4y187Uy/), segments sentences automatically, and automatically finds the right moment to answer |
+| Formula/image/table display | Shows formulas in both their [tex form and rendered form](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png); supports formula and code highlighting |
+| ⭐AutoGen multi-agent plugin | [Plugin] Explore the possibilities of emergent multi-agent intelligence with Microsoft AutoGen! |
+| Dark [theme](https://github.com/binary-husky/gpt_academic/issues/173) at startup | Append `/?__theme=dark` to the browser url to switch to the dark theme |
+| [Multi-LLM](https://www.bilibili.com/video/BV1wT411p7yf) support | Being served by GPT3.5, GPT4, [Tsinghua ChatGLM2](https://github.com/THUDM/ChatGLM2-6B), and [Fudan MOSS](https://github.com/OpenLMLab/MOSS) at the same time must feel great, right? |
+| ⭐ChatGLM2 fine-tuned models | Supports loading ChatGLM2 fine-tuned models; provides a ChatGLM2 fine-tuning helper plugin |
+| More LLM integrations, supports [huggingface deployment](https://huggingface.co/spaces/qingxu98/gpt-academic) | Adds the Newbing interface (New Bing), introduces Tsinghua [Jittorllms](https://github.com/Jittor/JittorLLMs) supporting [LLaMA](https://github.com/facebookresearch/llama) and [Pangu-α](https://openi.org.cn/pangu/) |
+| ⭐[void-terminal](https://github.com/binary-husky/void-terminal) pip package | Call all of this project's function plugins directly from Python, without the GUI (under development) |
+| ⭐Void-terminal plugin | [Plugin] Dispatch this project's other plugins directly in natural language |
+| More new feature demos (image generation, etc.) …… | See the end of this document …… |
+
+</div>

- New interface (switch between the "left-right layout" and "top-down layout" by modifying the LAYOUT option in `config.py`)
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/279702205-d81137c3-affd-4cd1-bb5e-b15610389762.gif" width="700" >
</div>

- All buttons are generated dynamically by reading functional.py, so custom functions can be added freely, liberating the clipboard
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/231975334-b4788e91-4887-412f-8b43-2b9c5f41d248.gif" width="700" >
@@ -118,6 +114,7 @@ One-click Latex paper proofreading | [Plugin] Grammarly-style grammar and spel
<br><br>

# Installation

### Installation method I: run directly (Windows, Linux or MacOS)

1. Download the project
@@ -135,8 +132,8 @@ One-click Latex paper proofreading | [Plugin] Grammarly-style grammar and spel
「 The project can be configured via `environment variables`; for the environment-variable format, see the `docker-compose.yml` file or our [Wiki page](https://github.com/binary-husky/gpt_academic/wiki/项目配置说明). Configuration read priority: `environment variables` > `config_private.py` > `config.py` 」.
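As a rough illustration of that read priority, here is a minimal sketch (not the project's actual loader; the helper name is hypothetical):

```python
import os

def read_single_conf(key, default_from_config):
    # 1) highest priority: environment variable
    if key in os.environ:
        return os.environ[key]
    # 2) middle priority: config_private.py, if it exists
    try:
        import config_private
        if hasattr(config_private, key):
            return getattr(config_private, key)
    except ImportError:
        pass
    # 3) lowest priority: the default from config.py
    return default_from_config
```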
3. Install dependencies

```sh
# (Option I: if familiar with python; recommended python version 3.9 ~ 3.11) Note: use the official pip source or the Aliyun pip source; to switch sources temporarily: python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/
python -m pip install -r requirements.txt
@@ -147,7 +144,6 @@ One-click Latex paper proofreading | [Plugin] Grammarly-style grammar and spel
python -m pip install -r requirements.txt # same steps as the pip installation
```

<details><summary>If you need support for Tsinghua ChatGLM2 / Fudan MOSS / RWKV as the backend, click here to expand</summary>
<p>

@@ -171,8 +167,6 @@ AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-
</p>
</details>

4. Run

```sh
python main.py
@@ -208,8 +202,8 @@ P.S. If you need plugin features that depend on Latex, see the Wiki. In addition, you can also
docker-compose up
```

### Installation method III: other deployment methods

1. **Windows one-click run script**.
   Windows users who are completely unfamiliar with the python environment can download the one-click run script published in the [Release](https://github.com/binary-husky/gpt_academic/releases) section to install the version without local models. The script is contributed by [oobabooga](https://github.com/oobabooga/one-click-installers).
@@ -226,6 +220,7 @@ P.S. If you need plugin features that depend on Latex, see the Wiki. In addition, you can also
<br><br>

# Advanced Usage

### I: Custom convenience buttons (academic shortcut keys)

Open `core_functional.py` with any text editor, add an entry like the sketch below, and restart the program. (If the button already exists, you can modify it directly; both the prefix and the suffix support hot modification and take effect without restarting the program.)
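A minimal sketch of such an entry (the button name and prompt text are illustrative, not part of this commit):

```python
# Inside the dictionary returned by get_core_functions() in core_functional.py
"Hypothetical Polish Button": {
    # "Prefix" is prepended to your input, e.g. to state the task
    "Prefix": "Please polish the following academic text and list your changes:\n\n",
    # "Suffix" is appended to your input, e.g. to wrap the input in quotes
    "Suffix": "",
},
```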
@@ -246,6 +241,7 @@ P.S. If you need plugin features that depend on Latex, see the Wiki. In addition, you can also
</div>

### II: Custom function plugins

Write powerful function plugins to perform any task you can and cannot imagine.
Writing and debugging plugins for this project is easy: with some basic python knowledge, you can implement your own plugin features by imitating the templates we provide, as in the sketch below.
For details, please refer to the [function plugin guide](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97).
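For flavor, a hypothetical minimal plugin modeled on the templates in `crazy_functions/` (the exact signature may differ across versions; the guide above is authoritative):

```python
from toolbox import update_ui

def demo_plugin(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
    # Append a (question, answer) pair to the chat window, then refresh the UI
    chatbot.append((txt, "Hello from a custom plugin!"))
    yield from update_ui(chatbot=chatbot, history=history)
```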
@@ -253,6 +249,7 @@ P.S. If you need plugin features that depend on Latex, see the Wiki. In addition, you can also
<br><br>

# Updates

### I: Dynamics

1. Conversation saving. Call `保存当前的对话` (save the current conversation) in the function plugin area to save the current conversation as a readable and restorable html file,
@@ -315,8 +312,6 @@ Tip: clicking `载入对话历史存档` directly without specifying a file lets you view the historical h
<img src="https://github.com/binary-husky/gpt_academic/assets/96192199/b6799499-b6fb-4f0c-9c8e-1b441872f4e8" width="500" >
</div>

### II: Versions:

- version 3.70 (todo): optimize the AutoGen plugin theme and design a series of derivative plugins
@@ -353,9 +348,10 @@ GPT Academic developer QQ group: `610599535`
- The official Gradio currently has many compatibility issues; please **be sure to install Gradio via `requirement.txt`**

### III: Themes
-The theme can be changed by modifying the `THEME` option (config.py)
-1. `Chuanhu-Small-and-Beautiful` [link](https://github.com/GaiZhenbiao/ChuanhuChatGPT/)
+The theme can be changed by modifying the `THEME` option (config.py)
+
+1. `Chuanhu-Small-and-Beautiful` [link](https://github.com/GaiZhenbiao/ChuanhuChatGPT/)
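A one-line sketch of the corresponding config entry (the value is taken from the list above):

```python
# in config.py or a config_private.py override
THEME = "Chuanhu-Small-and-Beautiful"
```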

### IV: Development branches of this project
@@ -363,7 +359,6 @@ GPT Academic developer QQ group: `610599535`
2. `frontier` branch: development branch, beta version
3. How to integrate other large models: [integrating other large models](request_llms/README.md)

### V: References and learning

```

config.py

@@ -91,10 +91,10 @@ AVAIL_LLM_MODELS = ["gpt-3.5-turbo-1106","gpt-4-1106-preview","gpt-4-vision-prev
"gpt-3.5-turbo-16k", "gpt-3.5-turbo", "azure-gpt-3.5",
"api2d-gpt-3.5-turbo", 'api2d-gpt-3.5-turbo-16k',
"gpt-4", "gpt-4-32k", "azure-gpt-4", "api2d-gpt-4",
-"chatglm3", "moss", "claude-2","qwen"]
-# P.S. Other available models also include ["zhipuai", "qianfan", "deepseekcoder", "llama2", "qwen", "gpt-3.5-turbo-0613", "gpt-3.5-turbo-16k-0613", "gpt-3.5-random"
+"chatglm3", "moss", "claude-2","qwen-1_8B","qwen-7B"]
+# P.S. Other available models also include ["zhipuai", "qianfan", "deepseekcoder", "llama2", "gpt-3.5-turbo-0613", "gpt-3.5-turbo-16k-0613", "gpt-3.5-random"
# "spark", "sparkv2", "sparkv3", "chatglm_onnx", "claude-1-100k", "claude-2", "internlm", "jittorllms_pangualpha", "jittorllms_llama"]
-# If you need to use a local Qwen model such as qwen1.8b, you also need to set the model path in request_llms\bridge_qwen.py!
+# If you need to use a local Qwen model such as qwen1.8b, you also need to find the corresponding file under request_llms and set the model path there!

# Define which models the "Query multiple GPT models" plugin on the UI should use; choose from AVAIL_LLM_MODELS and separate models with `&`, e.g. "gpt-3.5-turbo&chatglm3&azure-gpt-4"
MULTI_QUERY_LLM_MODELS = "gpt-3.5-turbo&chatglm3"
@@ -291,7 +291,8 @@ NUM_CUSTOM_BASIC_BTN = 4
├── "jittorllms_pangualpha"
├── "jittorllms_llama"
├── "deepseekcoder"
-├── "qwen"
+├── "qwen-1_8B"
+├── "qwen-7B"
├── RWKV support: see the Wiki
└── "llama2"

request_llms/bridge_all.py

@@ -431,12 +431,12 @@ if "chatglm_onnx" in AVAIL_LLM_MODELS:
        })
    except:
        print(trimmed_format_exc())
-if "qwen" in AVAIL_LLM_MODELS:
+if "qwen-1_8B" in AVAIL_LLM_MODELS: # qwen-1.8B
    try:
-        from .bridge_qwen import predict_no_ui_long_connection as qwen_noui
-        from .bridge_qwen import predict as qwen_ui
+        from .bridge_qwen_1_8B import predict_no_ui_long_connection as qwen_noui
+        from .bridge_qwen_1_8B import predict as qwen_ui
        model_info.update({
-            "qwen": {
+            "qwen-1_8B": {
                "fn_with_ui": qwen_ui,
                "fn_without_ui": qwen_noui,
                "endpoint": None,
@@ -447,6 +447,24 @@ if "qwen" in AVAIL_LLM_MODELS:
        })
    except:
        print(trimmed_format_exc())
+
+if "qwen-7B" in AVAIL_LLM_MODELS: # qwen-7B
+    try:
+        from .bridge_qwen_7B import predict_no_ui_long_connection as qwen_noui
+        from .bridge_qwen_7B import predict as qwen_ui
+        model_info.update({
+            "qwen-7B": {
+                "fn_with_ui": qwen_ui,
+                "fn_without_ui": qwen_noui,
+                "endpoint": None,
+                "max_token": 4096,
+                "tokenizer": tokenizer_gpt35,
+                "token_cnt": get_token_num_gpt35,
+            }
+        })
+    except:
+        print(trimmed_format_exc())
+
if "chatgpt_website" in AVAIL_LLM_MODELS: # integrates some reverse-engineered interfaces: https://github.com/acheong08/ChatGPT-to-API/
    try:
        from .bridge_chatgpt_website import predict_no_ui_long_connection as chatgpt_website_noui

request_llms/bridge_qwen_1_8B.py (new file, 67 lines)

@@ -0,0 +1,67 @@
model_name = "Qwen1_8B"
cmd_to_install = "`pip install -r request_llms/requirements_qwen.txt`"


from transformers import AutoModel, AutoTokenizer
import time
import threading
import importlib
from toolbox import update_ui, get_conf, ProxyNetworkActivate
from multiprocessing import Process, Pipe
from .local_llm_class import LocalLLMHandle, get_local_llm_predict_fns


# ------------------------------------------------------------------------------------------------------------------------
# 🔌💻 Local Model
# ------------------------------------------------------------------------------------------------------------------------
class GetQwenLMHandle(LocalLLMHandle):

    def load_model_info(self):
        # 🏃‍♂️ executed in the child process
        self.model_name = model_name
        self.cmd_to_install = cmd_to_install

    def load_model_and_tokenizer(self):
        # 🏃‍♂️ executed in the child process
        import os
        import platform
        from modelscope import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

        with ProxyNetworkActivate('Download_LLM'):
            model_id = 'Qwen/Qwen-1_8B-Chat'
            self._tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True, resume_download=True)
            # use fp16
            model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", trust_remote_code=True, fp16=True).eval()
            model.generation_config = GenerationConfig.from_pretrained(model_id, trust_remote_code=True)  # generation length, top_p and other hyperparameters can be adjusted here
            self._model = model

        return self._model, self._tokenizer

    def llm_stream_generator(self, **kwargs):
        # 🏃‍♂️ executed in the child process
        def adaptor(kwargs):
            query = kwargs['query']
            max_length = kwargs['max_length']
            top_p = kwargs['top_p']
            temperature = kwargs['temperature']
            history = kwargs['history']
            return query, max_length, top_p, temperature, history

        query, max_length, top_p, temperature, history = adaptor(kwargs)

        # stream partial responses as the model generates them
        for response in self._model.chat_stream(self._tokenizer, query, history=history):
            yield response

    def try_to_import_special_deps(self, **kwargs):
        # import something that will raise an error if the user has not installed requirement_*.txt
        # 🏃‍♂️ executed in the main process
        import importlib
        importlib.import_module('modelscope')


# ------------------------------------------------------------------------------------------------------------------------
# 🔌💻 GPT-Academic Interface
# ------------------------------------------------------------------------------------------------------------------------
predict_no_ui_long_connection, predict = get_local_llm_predict_fns(GetQwenLMHandle, model_name)

request_llms/bridge_qwen_7B.py

@@ -1,4 +1,4 @@
-model_name = "Qwen"
+model_name = "Qwen-7B"
cmd_to_install = "`pip install -r request_llms/requirements_qwen.txt`"

tests/test_llms.py

@@ -16,8 +16,9 @@ if __name__ == "__main__":
    # from request_llms.bridge_jittorllms_llama import predict_no_ui_long_connection
    # from request_llms.bridge_claude import predict_no_ui_long_connection
    # from request_llms.bridge_internlm import predict_no_ui_long_connection
-    from request_llms.bridge_deepseekcoder import predict_no_ui_long_connection
-    # from request_llms.bridge_qwen import predict_no_ui_long_connection
+    # from request_llms.bridge_deepseekcoder import predict_no_ui_long_connection
+    # from request_llms.bridge_qwen_7B import predict_no_ui_long_connection
+    from request_llms.bridge_qwen_1_8B import predict_no_ui_long_connection
    # from request_llms.bridge_spark import predict_no_ui_long_connection
    # from request_llms.bridge_zhipu import predict_no_ui_long_connection
    # from request_llms.bridge_chatglm3 import predict_no_ui_long_connection
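With the import above, a hedged smoke-test sketch of the new bridge (the `llm_kwargs` keys follow the `adaptor` in `bridge_qwen_1_8B.py`; the exact wrapper signature comes from `get_local_llm_predict_fns` and may differ between versions):

```python
from request_llms.bridge_qwen_1_8B import predict_no_ui_long_connection

if __name__ == "__main__":
    # keys consumed by llm_stream_generator's adaptor
    llm_kwargs = {'max_length': 512, 'top_p': 1, 'temperature': 1}
    result = predict_no_ui_long_connection(inputs="Hello", llm_kwargs=llm_kwargs, history=[], sys_prompt="")
    print(result)
```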