Compare commits


338 Commits
master ... dev

Author SHA1 Message Date
binary-husky
a239abac50
Update version 2023-04-04 22:34:28 +08:00
binary-husky
1042d28e1f
Update version 2023-04-04 22:20:39 +08:00
binary-husky
7b75422c26
Update version 2023-04-04 22:20:21 +08:00
binary-husky
99817e9040
Update version 2023-04-04 22:17:47 +08:00
qingxu fu
c9fa26405d 规划版本号 2023-04-04 21:38:20 +08:00
binary-husky
005232afa6
Update issue templates 2023-04-04 17:13:40 +08:00
binary-husky
5b8cc5a899
Update README.md 2023-04-04 15:33:53 +08:00
qingxu fu
a4137e7170 修复代码英文重构Bug 2023-04-04 15:23:42 +08:00
qingxu fu
aaf44750d9 默认暗色护眼主题 2023-04-03 20:56:00 +08:00
binary-husky
bd6eb90449
Merge pull request #290 from LiZheGuang/master
fix: 🐛 修复react解析项目不显示在下拉列表的问题
2023-04-03 17:58:28 +08:00
LiZheGuang
b5a48369a4 fix: 🐛 修复react解析项目不显示在下拉列表的问题 2023-04-03 17:44:09 +08:00
binary-husky
6b5bdbe98a
Update issue templates 2023-04-03 17:00:51 +08:00
qingxu fu
69624c66d7 update README 2023-04-03 09:32:01 +08:00
binary-husky
69be335d22
Update README.md 2023-04-03 01:49:40 +08:00
binary-husky
bb1e410cb4
Update README.md 2023-04-03 01:47:49 +08:00
binary-husky
9ccc53fa96
Update README.md 2023-04-03 01:39:17 +08:00
binary-husky
4b83486b3d
Update README.md 2023-04-03 01:38:44 +08:00
binary-husky
417c8325de
Update README.md 2023-04-03 01:03:00 +08:00
binary-husky
d51ae6abb2
Update README.md 2023-04-03 01:01:57 +08:00
binary-husky
fff7b8ef91
Update README.md 2023-04-02 22:15:12 +08:00
binary-husky
c59eb8ff9e
Update README.md 2023-04-02 22:04:33 +08:00
binary-husky
a74f0a9343
Update config.py 2023-04-02 22:02:41 +08:00
binary-husky
f4905a60e2
Update README.md 2023-04-02 21:51:41 +08:00
binary-husky
a562849e4c
Update README.md 2023-04-02 21:44:44 +08:00
binary-husky
6105b7f73b
Update README.md 2023-04-02 21:41:36 +08:00
binary-husky
3486fb5c10
Update README.md 2023-04-02 21:39:28 +08:00
binary-husky
5f7a1a3da3
Update README.md 2023-04-02 21:33:09 +08:00
binary-husky
8d086ce7c0
Update README.md 2023-04-02 21:31:44 +08:00
binary-husky
b188c4a2b5
Update README.md 2023-04-02 21:28:59 +08:00
binary-husky
10cf456aa8
Update README.md 2023-04-02 21:27:19 +08:00
Your Name
4043db7f33 Merge branch 'master' of github.com:binary-husky/chatgpt_academic 2023-04-02 20:35:16 +08:00
Your Name
9a192fd473 #236 2023-04-02 20:35:09 +08:00
Your Name
9f91fca4d2 remove verbose print 2023-04-02 20:22:11 +08:00
Your Name
160b001bef CHATBOT_HEIGHT - 1 2023-04-02 20:20:37 +08:00
Your Name
1d912bc10d return None instead of [] when no file is concluded 2023-04-02 20:18:58 +08:00
Your Name
51b3f8adca +异常处理 2023-04-02 20:03:25 +08:00
Your Name
16de1812d3 微调theme 2023-04-02 20:02:47 +08:00
binary-husky
641b96548a
Update README.md 2023-04-02 16:55:18 +08:00
Your Name
34f4ba211d Merge branch 'CSS' of https://github.com/Keldos-Li/chatgpt_academic (#236) 2023-04-02 16:01:35 +08:00
Your Name
19aba350a3 修改按钮提示 2023-04-02 15:48:54 +08:00
binary-husky
ab57f4bfb0
Merge pull request #253 from RongkangXiong/dev
add crazy_functions 解析一个Java项目
2023-04-02 15:40:03 +08:00
Your Name
b7e0a48cd2 添加Golang、Java等项目的支持 2023-04-02 15:33:09 +08:00
Your Name
58b051ead3 加入 arxiv 小助手插件 2023-04-02 15:19:21 +08:00
RongkangXiong
0c7378e096 add crazy_functions 解析一个Rect项目 2023-04-02 03:07:21 +08:00
RongkangXiong
a52ae14457 add crazy_functions 解析一个Java项目 2023-04-02 02:59:03 +08:00
Your Name
ce3e9b6289 Merge branch 'master' into dev 2023-04-02 01:24:03 +08:00
Your Name
9f07531a16 update 2023-04-02 01:23:15 +08:00
Your Name
2f646b3199 stage llm model interface 2023-04-02 01:18:51 +08:00
Your Name
30b1cbd95c q 2023-04-02 00:51:17 +08:00
Your Name
6c32961211 接入TGUI 2023-04-02 00:40:05 +08:00
Your Name
ba7c7290c1 成功借助tgui调用更多LLM 2023-04-02 00:22:41 +08:00
Your Name
f245f94444 up 2023-04-01 23:46:32 +08:00
Your Name
e86ffad6e7 wait new pr 2023-04-01 21:56:55 +08:00
Your Name
706b2604d8 修改文件名 2023-04-01 21:45:58 +08:00
Keldos
e35f7a7186 fix: 修正CSS中的注释解决列表显示
- 同时使用.markdown-body缩限了css作用域
2023-04-01 20:34:18 +08:00
binary-husky
7c91cfebfa
Update README.md 2023-04-01 20:21:31 +08:00
Your Name
9d84ddbc62 advanced theme 2023-04-01 19:48:14 +08:00
Your Name
3d3ffd6de2 新的arxiv论文插件 2023-04-01 19:43:56 +08:00
binary-husky
82038a42da
Merge pull request #239 from ylsislove/golang-code-analysis
feat: add function to parse Golang projects
2023-04-01 19:42:06 +08:00
binary-husky
0ce2d423cf
Update functional_crazy.py 2023-04-01 19:37:39 +08:00
wangyu
4d6bdca3fc feat: add function to parse Golang projects
This commit adds a new function to parse Golang projects to the collection of crazy functions.
2023-04-01 19:19:36 +08:00
Your Name
c64dbb03fd fix bug 2023-04-01 19:07:58 +08:00
Your Name
8575c82ed7 README 2023-04-01 18:07:26 +08:00
Your Name
bca61754e3 Typo in Prompt 2023-04-01 17:29:30 +08:00
Your Name
f61ea1559c python3.7 compat 2023-04-01 17:11:59 +08:00
Keldos
12c36a68ce feat: 调整表格样式 2023-04-01 16:58:51 +08:00
Keldos
998e127b2f feat: 使用CSS完善表格、列表、代码块、对话气泡显示样式
移植了 川虎ChatGPT 的CSS——但是川虎ChatGPT的CSS也是我写的~
2023-04-01 16:51:38 +08:00
Your Name
c9c9449a59 README up 2023-04-01 16:36:57 +08:00
Your Name
722b538261 update README 2023-04-01 16:35:45 +08:00
Your Name
29fb436e76 Merge branch 'dev' 2023-04-01 16:31:57 +08:00
binary-husky
6789eaee45
Update README.md 2023-04-01 04:25:03 +08:00
binary-husky
937823ec64
Update README.md 2023-04-01 04:19:02 +08:00
Your Name
3c95299f48 交互优化 2023-04-01 04:11:31 +08:00
Your Name
639e24fc82 将css样式移动到theme文件,减少main.py的代码行数 2023-04-01 03:39:43 +08:00
binary-husky
364810983a
Merge pull request #209 from jr-shen/dev-1
(1)修改语法检查的prompt,确保输出格式统一。

之前使用时经常发现输出没有把修改的部分加粗,或者在表格中把整段文字输出了,影响阅读。因此在之前的prompt基础上增加了一个example,确保输出格式统一。

(2)表格内增加了边框线,使行/列之间的分隔更清楚。

使用时发现没有边框的表格在里面文字较多时难以区分。因此增加表格内边框线。
2023-04-01 03:37:02 +08:00
Your Name
cb9404c4de 优化Token溢出时的处理 2023-04-01 03:36:05 +08:00
Your Name
17a18e99fa 隐藏、显示功能区 2023-04-01 00:21:27 +08:00
Your Name
30ea77a496 更清朗些的UI 2023-03-31 23:54:25 +08:00
Your Name
01931b0bd2 更清朗的UI 2023-03-31 23:51:17 +08:00
Junru Shen
6a2c7db7c1 add markdown table border line to make text boundary more clear 2023-03-31 23:40:21 +08:00
Junru Shen
9c15c446a6 make grammar correction prompt more clear 2023-03-31 23:38:49 +08:00
Your Name
44ed8a46ad 对word和pdf进行简易的支持 2023-03-31 23:18:45 +08:00
Your Name
41801c017c Merge branch 'master' into dev 2023-03-31 23:08:30 +08:00
Your Name
3e88422ab6 Merge branch 'master' of github.com:binary-husky/chatgpt_academic 2023-03-31 22:49:45 +08:00
Your Name
7c8b8b95b2 修复bug 2023-03-31 22:49:39 +08:00
Your Name
65b1d78516 优化自译解功能 2023-03-31 22:36:46 +08:00
binary-husky
e815afb792
Update README.md 2023-03-31 21:48:45 +08:00
Your Name
ac681d3201 移动函数到调用模组 2023-03-31 21:46:47 +08:00
binary-husky
ecebdf3ab5
Merge pull request #204 from Eralien/dev-clean_pdf
feat: clean pdf fitz text
2023-03-31 21:42:18 +08:00
binary-husky
8aea6536e0
add contributor 2023-03-31 21:41:17 +08:00
binary-husky
9c90b28bed
Merge pull request #147 from JasonGuo1/master
feat(toolbox.py,总结word文档.py): 支持rar格式与7z格式解压;word读取
2023-03-31 21:39:05 +08:00
Your Name
36ef2fa884 JasonGuo1 2023-03-31 21:37:46 +08:00
Your Name
89ca010a11 Merge branch 'master' of https://github.com/JasonGuo1/chatgpt_academic into JasonGuo1-master 2023-03-31 21:31:31 +08:00
Siyuan Feng
60fc2bf4c1 feat: clean pdf fitz text 2023-03-31 21:26:55 +08:00
binary-husky
838c3dc881
Merge pull request #117 from XMB-7/better_prompt
feat: better prompt
2023-03-31 21:19:25 +08:00
Your Name
16c59e1bf6 Merge branch 'better_prompt' of https://github.com/XMB-7/chatgpt_academic into XMB-7-better_prompt 2023-03-31 21:18:28 +08:00
binary-husky
fb889cb4ce
Merge pull request #174 from Euclid-Jie/Euclid_Test
feature(read pdf paper then write summary)
2023-03-31 21:06:02 +08:00
Your Name
c91cfc64d9 整合 2023-03-31 21:05:18 +08:00
Your Name
dcab956cff Merge branch 'dev' into Euclid-Jie-Euclid_Test 2023-03-31 21:03:43 +08:00
Your Name
da55ae68f6 pdfminer整合到一个文件中 2023-03-31 21:03:12 +08:00
Your Name
201f53c0f0 Merge branch 'Euclid_Test' of https://github.com/Euclid-Jie/chatgpt_academic into Euclid-Jie-Euclid_Test 2023-03-31 20:26:59 +08:00
Your Name
77a98f03d0 修改文本 2023-03-31 20:12:27 +08:00
Your Name
9ee5545ad5 fix import error 2023-03-31 20:05:31 +08:00
Your Name
d37b0ce447 Merge branch 'dev' of github.com:binary-husky/chatgpt_academic into dev 2023-03-31 20:04:11 +08:00
Your Name
0abb84ae4b config新增说明 2023-03-31 20:02:12 +08:00
binary-husky
e6034b6928
Merge pull request #194 from fulyaec/enhance-chataca
修改AUTHENTICATION的判断,使得AUTHENTICATION为None/[]/""时都可以正确判断
2023-03-31 19:49:40 +08:00
Your Name
ac9e72b9f8 revert toolbox 2023-03-31 19:46:01 +08:00
Your Name
cb1ac5d13d Merge branch 'enhance-chataca' of https://github.com/fulyaec/chatgpt_academic into fulyaec-enhance-chataca 2023-03-31 19:45:23 +08:00
binary-husky
3497addc7b
Merge pull request #198 from oneLuckyman/feature-match-API_KEY
一个小改进:更精准的 API_KEY 确认机制
2023-03-31 19:28:21 +08:00
Your Name
f17c580fd9 Merge remote-tracking branch 'origin/hot-reload-test' 2023-03-31 19:21:15 +08:00
Jia Xinglong
808e23c98a 使用 re 模块的 match 函数可以更精准的匹配和确认 API_KEY 是否正确 2023-03-31 17:38:39 +08:00
fulyaec
e3e4fa19a2 refactor and enhance 2023-03-31 16:24:40 +08:00
binary-husky
3cc0635628
Update main.py 2023-03-31 13:33:03 +08:00
binary-husky
7149f9fe90
Update README.md 2023-03-31 13:29:37 +08:00
binary-husky
d2f009ba8d
Update README.md 2023-03-31 13:11:10 +08:00
欧玮杰
8f4f13efd5 fix(the ".PDF" file can not be recognized): 2023-03-31 10:26:40 +08:00
欧玮杰
fefe96144f fix(fix "gbk" encode error in 批量总结PDF文档 line14):
由于不可编码字符,导致报错,添加软解码,处理原始文本。
2023-03-31 10:03:10 +08:00
欧玮杰
5521f0e41c feature(read pdf paper then write summary):
add a func called readPdf in toolbox, which can read pdf paper to str. then use bs4.BeautifulSoup to clean content.
2023-03-31 00:54:01 +08:00
binary-husky
2d5d719696
Merge pull request #171 from RoderickChan/add-deploy-instruction
在README中添加远程部署的指导
2023-03-31 00:00:35 +08:00
binary-husky
2b339464ee
Update README.md 2023-03-30 23:59:01 +08:00
binary-husky
43ad798c68
Update README.md 2023-03-30 23:34:17 +08:00
RoderickChan
a441c90436 在README中添加远程部署的指导方案 2023-03-30 23:31:44 +08:00
JasonGuo1
5ca623e9c5 feat(总结word文档):增加读取docx、doc格式的功能 2023-03-30 23:23:41 +08:00
binary-husky
c74a8b04b5
添加Wiki链接 2023-03-30 23:09:45 +08:00
JasonGuo1
34c693a571 feat(toolbox):调整了空格的问题 2023-03-30 20:28:15 +08:00
binary-husky
d4cd1877f4
Merge pull request #151 from SadPencil/patch-1
Fix a typo
2023-03-30 19:14:48 +08:00
qingxu fu
57a668bf30 自译解报告 2023-03-30 18:21:17 +08:00
qingxu fu
700d14be10 Merge branch 'hot-reload-test' of https://github.com/binary-husky/chatgpt_academic into hot-reload-test 2023-03-30 18:05:03 +08:00
qingxu fu
3fe96956ce 新增热更新功能 2023-03-30 18:04:20 +08:00
qingxu fu
c358c95630 新增热更新功能 2023-03-30 18:01:06 +08:00
Sad Pencil
70c02a08e9
Fix a typo 2023-03-30 15:55:46 +08:00
JasonGuo1
27f7153f37 feat(toolbox): 支持rar格式与7z格式解压,修改了下注释 2023-03-30 15:48:55 +08:00
JasonGuo1
eb69704c20 feat(toolbox): 支持rar格式与7z格式解压,修改了下注释 2023-03-30 15:48:00 +08:00
JasonGuo1
d616915348 feat(toolbox): 支持rar格式与7z格式解压,修改了下注释 2023-03-30 15:47:18 +08:00
JasonGuo1
5381df8f35 feat(toolbox): 支持rar格式与7z格式解压,修改了下注释 2023-03-30 15:45:58 +08:00
JasonGuo1
a7c857d4d9 feat(支持rar格式与7z格式解压) 2023-03-30 15:24:01 +08:00
binary-husky
1c3ce18eff
Update README.md 2023-03-30 14:48:46 +08:00
binary-husky
a689960e31
Update README.md 2023-03-30 14:48:20 +08:00
binary-husky
07eca9fe55
Update README.md 2023-03-30 14:47:19 +08:00
binary-husky
00f29f2120
Update README.md 2023-03-30 14:08:24 +08:00
binary-husky
10ea3daaa5
Update README.md 2023-03-30 13:24:30 +08:00
binary-husky
eb95eec122
Merge pull request #124 from Freddd13/master
feat: 添加wsl2使用windows proxy的方法
2023-03-30 12:56:13 +08:00
qingxu fu
e03634f9e2 查找语法错误之前先清除换行符 2023-03-30 12:52:28 +08:00
qingxu fu
77d4628877 语法错误查找prompt更新 2023-03-30 12:16:18 +08:00
qingxu fu
8c3d37bbce up 2023-03-30 11:51:55 +08:00
qingxu fu
28f6801870 标准化代码格式 2023-03-30 11:50:11 +08:00
qingxu fu
6d7fb20b5a 修改配置的读取方式 2023-03-30 11:05:38 +08:00
Freddd13
9dc0d3273b update: 修改readme 2023-03-30 02:00:15 +08:00
Freddd13
13e976774f Merge remote-tracking branch 'upstream/master' 2023-03-30 01:58:39 +08:00
Freddd13
35d3346a1c feat: 支持wsl2使用windows proxy 2023-03-30 01:47:39 +08:00
Your Name
eaec1c3728 Merge branch 'master' of github.com:binary-husky/chatgpt_academic 2023-03-30 00:15:37 +08:00
Your Name
0140b0cda8 change UI layout 2023-03-30 00:15:31 +08:00
binary-husky
f98fe5b9ab
Update README.md 2023-03-30 00:09:58 +08:00
Xiaoming Bai
932c430d5d
better prompt 2023-03-30 00:06:02 +08:00
binary-husky
0063f8dd82
Update README.md 2023-03-29 23:53:33 +08:00
binary-husky
23b8c47c54
Update README.md 2023-03-29 23:50:20 +08:00
binary-husky
5353a32345
Update README.md 2023-03-29 23:48:58 +08:00
binary-husky
d569d2f9c1
Update README.md 2023-03-29 23:44:37 +08:00
binary-husky
67e6070d80
Update README.md 2023-03-29 23:44:01 +08:00
binary-husky
01873f7811
Merge pull request #108 from sjiang95/condainstall
readme: update
2023-03-29 23:22:44 +08:00
Your Name
c2318bc197 Merge branch 'dev' 2023-03-29 23:15:29 +08:00
binary-husky
86dc4ca3af
Merge pull request #102 from ValeriaWong/master
feat(读文章写摘要):支持pdf文件批量阅读及总结 #101
2023-03-29 23:14:12 +08:00
Your Name
4a2f73d007 change UI layout 2023-03-29 23:04:37 +08:00
Your Name
9c643e6d8c change ui layout 2023-03-29 23:00:16 +08:00
Your Name
34323c9d36 Merge https://github.com/ValeriaWong/chatgpt_academic into ValeriaWong-master 2023-03-29 21:49:56 +08:00
Your Name
48cc2349e3 add pip package check 2023-03-29 21:47:56 +08:00
Your Name
7d57967519 Merge branch 'master' of https://github.com/ValeriaWong/chatgpt_academic 2023-03-29 21:44:59 +08:00
Shengjiang Quan
68ffa54c84 readme: update
Re-format a part of the markdown content
and add conda instruction for installation.

Signed-off-by: Shengjiang Quan <qsj287068067@126.com>
2023-03-29 22:36:15 +09:00
ValeriaWong
02ddcdf731 Merge branch 'master' of https://github.com/ValeriaWong/chatgpt_academic 2023-03-29 21:05:25 +08:00
ValeriaWong
14ea206a5a
Merge branch 'binary-husky:master' into master 2023-03-29 20:57:07 +08:00
ValeriaWong
814c93bb75 feat(读文章写摘要):支持pdf文件批量阅读及总结 #101 2023-03-29 20:55:13 +08:00
binary-husky
fb953148b9
Update main.py 2023-03-29 20:47:34 +08:00
Your Name
2bcd34360a bug quick fix 2023-03-29 20:41:07 +08:00
binary-husky
b3211362f9
Merge pull request #82 from Okabe-Rintarou-0/master
支持暂停按钮 #53
2023-03-29 20:38:06 +08:00
Your Name
be1bf6cb77 提交后不清空输入栏,添加停止键 2023-03-29 20:36:58 +08:00
Your Name
5f771d8acf Merge branch 'master' of https://github.com/Okabe-Rintarou-0/chatgpt_academic into Okabe-Rintarou-0-master 2023-03-29 20:26:13 +08:00
Your Name
095579c323 Merge branch 'master' of https://github.com/Okabe-Rintarou-0/chatgpt_academic into Okabe-Rintarou-0-master 2023-03-29 20:07:38 +08:00
Your Name
f9308300f3 handle ip location lookup error 2023-03-29 19:37:39 +08:00
binary-husky
6ca28fbff2
Merge pull request #87 from Okabe-Rintarou-0/fix-markdown-display
正确显示多行输入的 markdown #84
2023-03-29 19:32:19 +08:00
binary-husky
11f619f76f
Merge pull request #96 from eltociear/patch-1
fix typo in predict.py
2023-03-29 18:47:51 +08:00
Your Name
39d0176673 新增代理配置说明 2023-03-29 18:07:33 +08:00
Ikko Eltociear Ashimine
043bd59ffc
fix typo in predict.py
refleshing -> refreshing
2023-03-29 18:57:37 +09:00
ValeriaWong
2ac602e0aa feat(读文章写摘要):支持pdf文件批量阅读及总结 2023-03-29 17:57:17 +08:00
Your Name
50ee37f23c error message change 2023-03-29 16:50:37 +08:00
Your Name
f0d9098df5 dev 2023-03-29 16:47:15 +08:00
okabe
2401cf2136 fix: markdown display bug #84 2023-03-29 15:29:40 +08:00
okabe
dd3c4f988c feat: support stop generate button (#53) 2023-03-29 14:53:53 +08:00
Your Name
22e7dc617b fix directory return bug 2023-03-29 14:28:57 +08:00
Your Name
a3c937c202 update comments 2023-03-29 14:16:59 +08:00
505030475
1b01d0fd8a 修复变量名 2023-03-29 13:58:30 +08:00
505030475
0b018bf871 Merge remote-tracking branch 'origin/test-3-29' 2023-03-29 13:44:57 +08:00
505030475
88d41ab4ca config comments 2023-03-29 13:43:07 +08:00
binary-husky
6ebadeb2a6
Update README.md 2023-03-29 13:38:43 +08:00
binary-husky
515045a8d1
Merge pull request #57 from GaiZhenbiao/master
Adding a bunch of nice-to-have features
2023-03-29 13:37:58 +08:00
Your Name
74d5061969 Merge branch 'test-3-29' of github.com:binary-husky/chatgpt_academic into test-3-29 2023-03-29 12:29:52 +08:00
Your Name
0b7c1c50ca 优化Unsplash API的使用 2023-03-29 12:28:45 +08:00
Your Name
f45e0d3486 优化Unsplash API的使用 2023-03-29 12:27:47 +08:00
Your Name
881132557e 历史上的今天,带图片 2023-03-29 12:21:47 +08:00
Your Name
6d852d76b0 更新一个更有意思的模板函数 2023-03-29 11:36:55 +08:00
Your Name
b588291cdf [实验] 历史上的今天(高级函数demo) 2023-03-29 11:34:03 +08:00
Your Name
b4e0fe39ea bug fix 2023-03-29 01:42:11 +08:00
Your Name
7295978c93 change description 2023-03-29 01:39:15 +08:00
Your Name
f2359e0442 更好的多线程交互性 2023-03-29 01:32:28 +08:00
Your Name
ced6898daa introduce project self-translation 2023-03-29 01:11:53 +08:00
Tuchuanhuhuhu
39ad492a65 增加“重置”按钮,提交之后自动清空输入框 2023-03-28 23:33:19 +08:00
Tuchuanhuhuhu
dd8ec0d511 temprature的取值范围为[0, 2] 2023-03-28 23:20:54 +08:00
Tuchuanhuhuhu
43d59d936f 增加并行处理与权限控制 2023-03-28 23:17:12 +08:00
Your Name
4bca8b4f82 simplify codes 2023-03-28 23:09:25 +08:00
Chuan Hu
46eba1f399
Improve the way to open webbrowser 2023-03-28 22:47:30 +08:00
Your Name
5a6877f9fa 界面色彩自定义 2023-03-28 22:35:55 +08:00
Your Name
2c15b51ea4 explain color and theme 2023-03-28 22:31:43 +08:00
Your Name
9666b9b99b Merge branch 'master' of github.com:binary-husky/chatgpt_academic 2023-03-28 22:25:27 +08:00
Your Name
baf61477d2 remove .vscode from git 2023-03-28 22:24:59 +08:00
Your Name
00e411bd68 update todo 2023-03-28 22:10:22 +08:00
Your Name
37bcdf684d fix unicode bug 2023-03-28 20:31:44 +08:00
binary-husky
c80808a0c5
Merge pull request #46 from mambaHu/master
Markdown analysis report garbled issue
2023-03-28 20:04:01 +08:00
luca hu
b69313884d improving garbled words issue with utf8 2023-03-28 19:34:18 +08:00
binary-husky
3b6675755e
Delete jpeg-compressor.tps 2023-03-28 17:21:14 +08:00
binary-husky
3e2e4937db
Delete JpegLibrary.tps 2023-03-28 17:21:06 +08:00
binary-husky
e6f7e75e19
Delete UElibJPG.Build.cs 2023-03-28 17:20:54 +08:00
505030475
232d06a1ab theme 2023-03-28 12:59:31 +08:00
505030475
22db1147db o 2023-03-28 12:53:05 +08:00
binary-husky
eb77532df5
Update README.md 2023-03-28 01:36:15 +08:00
binary-husky
9ee3d1390d
Update README.md 2023-03-28 01:13:55 +08:00
Your Name
141df08332 http post error show 2023-03-27 18:25:07 +08:00
Your Name
248bcb7095 bug fix 2023-03-27 15:16:50 +08:00
Your Name
7c7b1cb030 Merge branch 'master' of github.com:binary-husky/chatgpt_academic 2023-03-27 15:14:12 +08:00
Your Name
290a33ea74 Merge branch 'master' of github.com:binary-husky/chatgpt_academic 2023-03-27 15:14:05 +08:00
binary-husky
aabe2818e7
Update README.md 2023-03-27 15:09:02 +08:00
binary-husky
c3a6a09c4c
Update README.md 2023-03-27 15:01:49 +08:00
binary-husky
8c5332748f
Update README.md 2023-03-27 15:01:07 +08:00
binary-husky
0b0860defc
Update README.md 2023-03-27 14:57:12 +08:00
binary-husky
18894b04f1
Update README.md 2023-03-27 14:56:32 +08:00
binary-husky
8e75ad8134
Update README.md 2023-03-27 14:56:20 +08:00
binary-husky
7c33580bf6
Update README.md 2023-03-27 14:56:03 +08:00
Your Name
223e747d57 Merge branch 'master' of github.com:binary-husky/chatgpt_academic 2023-03-27 14:53:46 +08:00
Your Name
8c7be26661 up 2023-03-27 14:51:05 +08:00
binary-husky
a1fceeae45
Update README.md 2023-03-27 14:47:52 +08:00
binary-husky
4b534400ea
Update README.md 2023-03-27 14:45:32 +08:00
binary-husky
ac9b590660
Update README.md 2023-03-27 13:47:08 +08:00
binary-husky
695ed5e02d
Update README.md 2023-03-27 13:45:08 +08:00
Your Name
c063510442 UI change 2023-03-27 13:24:29 +08:00
Your Name
bc6d0926b1 file IO 2023-03-27 13:01:22 +08:00
qingxu fu
0f20ffeff4 localFileToRemote 2023-03-27 11:29:11 +08:00
Your Name
9299c93b17 Merge branch 'test-3-26' 2023-03-26 20:21:39 +08:00
binary-husky
3463a1173a
Merge pull request #10 from ifyz/patch-1
Update main.py
2023-03-26 20:21:14 +08:00
Your Name
74eaff4919 fix dockerfile 2023-03-26 20:18:55 +08:00
binary-husky
dd51708309
Update main.py 2023-03-26 20:11:44 +08:00
Your Name
25693c362c UI 2023-03-26 20:10:14 +08:00
Your Name
9eb15f5e68 调整样式 2023-03-26 20:04:59 +08:00
Your Name
7e195244f1 up 2023-03-26 19:32:04 +08:00
Your Name
98d97ebbc8 Merge branch 'ifyz-patch-1' into test-3-26 2023-03-26 19:14:49 +08:00
Your Name
6ab3672cfe Merge branch 'patch-1' of https://github.com/ifyz/chatgpt_academic into ifyz-patch-1 2023-03-26 19:14:27 +08:00
Your Name
6f1ad29e3f add comments 2023-03-26 19:13:58 +08:00
ifyz
61c65d90fe
Update main.py
使用全局变量,禁用Gradio 的分析功能。解决国内用户因调用GoogleAnalytics导致的加载缓慢。
使用本地字体,修改Gradio默认从Googleapis调用字体。从而解决用户由于国内网络环境打开首页缓慢的问题。
2023-03-26 17:11:58 +08:00
ifyz
8965706169
Update main.py
使用本地字体,修改Gradio默认从Googleapis调用字体。从而解决用户由于国内网络环境打开首页缓慢的问题。
2023-03-26 15:48:16 +08:00
qingxu fu
a63ae898ce Merge branch 'master' of https://github.com/binary-husky/chatgpt_academic into master 2023-03-24 21:03:33 +08:00
binary-husky
186f10a3e3
Update README.md 2023-03-24 21:03:09 +08:00
binary-husky
67ed394484
Update README.md 2023-03-24 21:02:02 +08:00
qingxu fu
2f4c46b22c trim button text 2023-03-24 20:56:34 +08:00
binary-husky
2f5c977a27
Update predict.py 2023-03-24 19:54:52 +08:00
binary-husky
f801cfbcf6
Update .gitattributes 2023-03-24 19:51:52 +08:00
binary-husky
15693f3f8f
Update .gitattributes 2023-03-24 19:50:54 +08:00
binary-husky
8cbac095c1
Create .gitattributes 2023-03-24 19:49:58 +08:00
Your Name
289ace03cf 测试实验性功能 使用说明 2023-03-24 19:47:37 +08:00
Your Name
cefc40aff8 update readme 2023-03-24 19:42:21 +08:00
binary-husky
15dd4b7a80
Update README.md 2023-03-24 19:38:33 +08:00
Your Name
5f8473badb source 2023-03-24 19:37:47 +08:00
Your Name
a8cfe2f113 remote additional file 2023-03-24 19:35:13 +08:00
Your Name
27e056ea35 move images 2023-03-24 19:34:21 +08:00
binary-husky
278f21fde0
Update README.md 2023-03-24 19:20:43 +08:00
Your Name
121f288878 push 2023-03-24 19:10:34 +08:00
binary-husky
5a6b46f0b5
Update README.md 2023-03-24 19:04:55 +08:00
binary-husky
22766d3c12
Update README.md 2023-03-24 19:03:03 +08:00
Your Name
4c72e6cc80 fix count down error 2023-03-24 18:53:43 +08:00
Your Name
38705af90b beta 2023-03-24 18:47:45 +08:00
Your Name
35e15aab8c bug fix 2023-03-24 18:08:48 +08:00
Your Name
1cad53f641 模块化封装 2023-03-24 18:04:59 +08:00
Your Name
818690037a up 2023-03-24 16:34:48 +08:00
Your Name
8053f696d5 用gpt给自己生成注释 2023-03-24 16:25:40 +08:00
Your Name
0b1afab5dd muban 2023-03-24 16:22:26 +08:00
Your Name
aa87d031f5 易读性+ 2023-03-24 16:17:01 +08:00
Your Name
ba2dd3d0ff 批量生成函数注释 2023-03-24 16:14:25 +08:00
Your Name
c19419e1fb 生成文本报告 2023-03-24 15:42:09 +08:00
Your Name
031232c986 better traceback 2023-03-24 15:25:14 +08:00
Your Name
ea4cd5a75a 增加读latex文章的功能,添加测试样例 2023-03-24 14:56:57 +08:00
Your Name
3fb519bf83 Merge remote-tracking branch 'origin/master' into test-3-24 2023-03-24 13:29:37 +08:00
Your Name
1e4bcc0757 设置及时响应 2023-03-24 13:28:12 +08:00
Your Name
44b40ff726 update 2023-03-24 13:12:25 +08:00
binary-husky
791dbf8592
Update functional_crazy.py 2023-03-24 13:11:41 +08:00
binary-husky
82fe0f758d
Update functional_crazy.py 2023-03-24 13:06:42 +08:00
binary-husky
8b1def3425
Update functional_crazy.py 2023-03-24 13:06:34 +08:00
binary-husky
8e85e83cec
Update functional_crazy.py 2023-03-24 13:02:47 +08:00
qingxu fu
ecdeda8e92 Merge branch 'master' of github.com:binary-husky/chatgpt_academic into master 2023-03-24 11:43:24 +08:00
qingxu fu
831009f027 auto retry 2023-03-24 11:42:39 +08:00
binary-husky
c6da02422b
Update README.md 2023-03-24 00:43:33 +08:00
binary-husky
2ad73e9ec3
Update README.md 2023-03-23 22:15:31 +08:00
binary-husky
a0f73a61ea
Update predict.py 2023-03-23 22:13:09 +08:00
binary-husky
1b6d611888
Update README.md 2023-03-23 17:03:08 +08:00
Your Name
47d9ea4921 Merge branch 'master' of github.com:binary-husky/chatgpt_academic 2023-03-23 00:35:01 +08:00
Your Name
b1e33b0f7a 正确地显示requests错误 2023-03-23 00:34:55 +08:00
binary-husky
1e125ca05f
Update README.md 2023-03-23 00:20:39 +08:00
binary-husky
16253ed53c
Update README.md 2023-03-23 00:15:44 +08:00
binary-husky
8beb75762c
Update README.md 2023-03-22 22:52:15 +08:00
Your Name
8da4a0af45 Merge branch 'master' of github.com:binary-husky/chatgpt_academic 2023-03-22 22:42:56 +08:00
Your Name
371bef6e1a bug fix 2023-03-22 22:42:50 +08:00
Your Name
2aaa836d81 程序自解析功能 2023-03-22 22:37:14 +08:00
binary-husky
14d53a288f
Update README.md 2023-03-22 22:34:15 +08:00
binary-husky
b1d6e711c0
Update README.md 2023-03-22 20:47:28 +08:00
binary-husky
32b005199d
Update README.md 2023-03-22 20:06:09 +08:00
binary-husky
a8886562a1
Update README.md 2023-03-22 20:02:03 +08:00
binary-husky
a60d881268
Update README.md 2023-03-22 19:58:10 +08:00
binary-husky
b9967686a1
私密配置
# config_private.py放自己的秘密如API和代理网址
# 读取时首先看是否存在私密的config_private配置文件(不受git管控),如果有,则覆盖原config文件
2023-03-22 19:49:45 +08:00
binary-husky
dd3b3524bd
借鉴github.com/GaiZhenbiao/ChuanhuChatGPT项目 2023-03-22 19:47:49 +08:00
binary-husky
a880819ffd
from github.com/polarwinkel/mdtex2html 2023-03-22 19:46:08 +08:00
binary-husky
b8d802a922
Update README.md 2023-03-22 19:39:02 +08:00
binary-husky
01aafd4df2
Update README.md 2023-03-22 19:36:39 +08:00
binary-husky
7b3b6f5241
Update README.md 2023-03-22 19:35:34 +08:00
binary-husky
7a50af1897
Update README.md 2023-03-22 19:33:26 +08:00
binary-husky
5dc406c641
修复gradio不吃代理的问题 2023-03-22 19:22:42 +08:00
qingxu fu
3103817990 add private conf 2023-03-22 17:54:15 +08:00
qingxu fu
88aaf310c4 upload 2023-03-22 17:48:25 +08:00
qingxu fu
e8d86a3242 代理位置 2023-03-22 17:45:10 +08:00
qingxu fu
37fd1b1c97 upload 2023-03-22 17:35:23 +08:00
qingxu fu
e3e313beab logging 2023-03-22 17:32:48 +08:00
qingxu fu
380a8a9d4d fix logging encoding 2023-03-22 17:30:30 +08:00
qingxu fu
d00f6bb1a6 add proxy debug funtion 2023-03-22 17:25:37 +08:00
binary-husky
f8a44a82a9
Update README.md 2023-03-22 16:09:37 +08:00
binary-husky
2be45c386d
Update predict.py 2023-03-21 21:24:38 +08:00
binary-husky
2b7e8b9278
Update README.md 2023-03-21 17:53:40 +08:00
binary-husky
8590014462
Update README.md 2023-03-21 17:53:04 +08:00
binary-husky
8215caddb2
Update README.md 2023-03-21 15:49:52 +08:00
505030475
93a9d27b1e ok 2023-03-21 13:53:24 +08:00
binary-husky
e128523103
Update README.md 2023-03-21 13:45:08 +08:00
505030475
da0caab4cf add deploy method for windows 2023-03-21 13:35:53 +08:00
binary-husky
6e9813565a
Update README.md 2023-03-20 18:56:09 +08:00
binary-husky
01cc409f99
Update README.md 2023-03-20 18:55:06 +08:00
Your Name
3438f8f291 readme 2023-03-20 18:39:48 +08:00
61 changed files with 14940 additions and 249 deletions

5
.gitattributes vendored Normal file

@@ -0,0 +1,5 @@
*.h linguist-detectable=false
*.cpp linguist-detectable=false
*.tex linguist-detectable=false
*.cs linguist-detectable=false
*.tps linguist-detectable=false

19
.github/ISSUE_TEMPLATE/bug_report.md vendored Normal file

@@ -0,0 +1,19 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: ''
assignees: ''
---
**Describe the bug 简述**
**Screen Shot 截图**
**Terminal Traceback 终端traceback如果有**
Before submitting an issue 提交issue之前
- Please try to upgrade your code. 如果您的代码不是最新的,建议您先尝试更新代码
- Please check project wiki for common problem solutions.项目[wiki](https://github.com/binary-husky/chatgpt_academic/wiki)有一些常见问题的解决方法

.github/ISSUE_TEMPLATE/feature_request.md vendored Normal file

@@ -0,0 +1,10 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: ''
assignees: ''
---

7
.gitignore vendored

@@ -131,6 +131,13 @@ dmypy.json
# Pyre type checker
.pyre/
.vscode
history
ssr_conf
config_private.py
gpt_log
private.md
private_upload
other_llms
cradle.py

Dockerfile

@@ -1,8 +1,4 @@
FROM ubuntu:latest
RUN apt-get update && \
    apt-get install -y python3 python3-pip && \
    rm -rf /var/lib/apt/lists/*
FROM python:3.11
RUN echo '[global]' > /etc/pip.conf && \
    echo 'index-url = https://mirrors.aliyun.com/pypi/simple/' >> /etc/pip.conf && \

270
README.md

@@ -1,26 +1,280 @@
# ChatGPT 学术优化
**如果喜欢这个项目请给它一个Star**
**如果喜欢这个项目,请给它一个Star;如果你发明了更好用的快捷键或函数插件,欢迎发issue或者pull requests(dev分支)**
## 使用docker
If you like this project, please give it a Star. If you've come up with more useful academic shortcuts or functional plugins, feel free to open an issue or pull request to `dev` branch.
```
代码中参考了很多其他优秀项目中的设计,主要包括:
# 借鉴项目1借鉴了ChuanhuChatGPT中读取OpenAI json的方法、记录历史问询记录的方法以及gradio queue的使用技巧
https://github.com/GaiZhenbiao/ChuanhuChatGPT
# 借鉴项目2借鉴了mdtex2html中公式处理的方法
https://github.com/polarwinkel/mdtex2html
项目使用OpenAI的gpt-3.5-turbo模型期待gpt-4早点放宽门槛😂
```
> **Note**
>
> 1.请注意只有“红颜色”标识的函数插件按钮才支持读取文件。目前对pdf/word格式文件的支持插件正在逐步完善中需要更多developer的帮助。
>
> 2.本项目中每个文件的功能都在自译解[`self_analysis.md`](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A)详细说明。随着版本的迭代您也可以随时自行点击相关函数插件调用GPT重新生成项目的自我解析报告。常见问题汇总在[`wiki`](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98)当中。
>
> 3.如果您不太习惯部分中文命名的函数、注释或者界面您可以随时点击相关函数插件调用ChatGPT一键生成纯英文的项目源代码。
<div align="center">
功能 | 描述
--- | ---
一键润色 | 支持一键润色、一键查找论文语法错误
一键中英互译 | 一键中英互译
一键代码解释 | 可以正确显示代码、解释代码
自定义快捷键 | 支持自定义快捷键
配置代理服务器 | 支持配置代理服务器
模块化设计 | 支持自定义高阶的实验性功能与[函数插件],插件支持[热更新](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
自我程序剖析 | [函数插件] 一键读懂本项目的源代码
程序剖析 | [函数插件] 一键可以剖析其他Python/C/C++/Java项目树
读论文 | [函数插件] 一键解读latex论文全文并生成摘要
批量注释生成 | [函数插件] 一键批量生成函数注释
chat分析报告生成 | [函数插件] 运行后自动生成总结汇报
arxiv小助手 | [函数插件] 输入arxiv文章url即可一键翻译摘要+下载PDF
公式显示 | 可以同时显示公式的tex形式和渲染形式
图片显示 | 可以在markdown中显示图片
多线程函数插件支持 | 支持多线调用chatgpt一键处理海量文本或程序
支持GPT输出的markdown表格 | 可以输出支持GPT的markdown表格
启动暗色gradio[主题](https://github.com/binary-husky/chatgpt_academic/issues/173) | 在浏览器url后面添加```/?__dark-theme=true```可以切换dark主题
huggingface免科学上网[在线体验](https://huggingface.co/spaces/qingxu98/gpt-academic) | 登陆huggingface后复制[此空间](https://huggingface.co/spaces/qingxu98/gpt-academic)
…… | ……
</div>
<!-- - 新界面master主分支, 右dev开发前沿 -->
- 新界面
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/229222589-b30ff298-adb1-4e1e-8352-466085919bfb.png" width="700" >
</div>
- 所有按钮都通过读取functional.py动态生成可随意加自定义功能解放粘贴板
<div align="center">
<img src="img/公式.gif" width="700" >
</div>
- 润色/纠错
<div align="center">
<img src="img/润色.gif" width="700" >
</div>
- 支持GPT输出的markdown表格
<div align="center">
<img src="img/demo2.jpg" width="500" >
</div>
- 如果输出包含公式会同时以tex形式和渲染形式显示方便复制和阅读
<div align="center">
<img src="img/demo.jpg" width="500" >
</div>
- 懒得看项目代码整个工程直接给chatgpt炫嘴里
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/226935232-6b6a73ce-8900-4aee-93f9-733c7e6fef53.png" width="700" >
</div>
## 直接运行 (Windows, Linux or MacOS)
### 1. 下载项目
```sh
git clone https://github.com/binary-husky/chatgpt_academic.git
cd chatgpt_academic
```
### 2. 配置API_KEY和代理设置
`config.py`中,配置 海外Proxy 和 OpenAI API KEY说明如下
```
1. 如果你在国内,需要设置海外代理才能够顺利使用 OpenAI API,设置方法请仔细阅读config.py(1.修改其中的USE_PROXY为True; 2.按照说明修改其中的proxies)。
2. 配置 OpenAI API KEY。你需要在 OpenAI 官网上注册并获取 API KEY。一旦你拿到了 API KEY,在 config.py 文件里配置好即可。
3. 与代理网络有关的issue(网络超时、代理不起作用)汇总到 https://github.com/binary-husky/chatgpt_academic/issues/1
```
P.S. 程序运行时会优先检查是否存在名为`config_private.py`的私密配置文件,并用其中的配置覆盖`config.py`的同名配置。因此,如果您能理解我们的配置读取逻辑,我们强烈建议您在`config.py`旁边创建一个名为`config_private.py`的新配置文件,并把`config.py`中的配置转移(复制)到`config_private.py`中。`config_private.py`不受git管控可以让您的隐私信息更加安全。
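For example, a minimal `config_private.py` could look like the sketch below (illustrative only — the entry names must match those used in `config.py`, and the key shown is a placeholder):
```
# config_private.py —— 示例(仅作示意):该文件不受git管控,其中的同名配置会覆盖config.py
API_KEY = "sk-此处填API密钥"               # placeholder, replace with your own key
USE_PROXY = True
proxies = {
    "http":  "socks5h://localhost:11284",  # adjust protocol/port to your proxy software
    "https": "socks5h://localhost:11284",
}
```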
### 3. 安装依赖
```sh
# (选择一)推荐
python -m pip install -r requirements.txt
# 选择二如果您使用anaconda步骤也是类似的
# (选择二.1conda create -n gptac_venv python=3.11
# (选择二.2conda activate gptac_venv
# (选择二.3python -m pip install -r requirements.txt
# 备注使用官方pip源或者阿里pip源其他pip源如一些大学的pip有可能出问题临时换源方法
# python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/
```
### 4. 运行
```sh
python main.py
```
### 5. 测试实验性功能
```
- 测试C++项目头文件分析
input区域 输入 `./crazy_functions/test_project/cpp/libJPG` 然后点击 "[实验] 解析整个C++项目input输入项目根路径"
- 测试给Latex项目写摘要
input区域 输入 `./crazy_functions/test_project/latex/attention` 然后点击 "[实验] 读tex论文写摘要input输入项目根路径"
- 测试Python项目分析
input区域 输入 `./crazy_functions/test_project/python/dqn` 然后点击 "[实验] 解析整个py项目input输入项目根路径"
- 测试自我代码解读
点击 "[实验] 请解析并解构此项目本身"
- 测试实验功能模板函数要求gpt回答历史上的今天发生了什么您可以根据此函数为模板实现更复杂的功能
点击 "[实验] 实验功能函数模板"
```
## 使用docker (Linux)
``` sh
# 下载项目
git clone https://github.com/binary-husky/chatgpt_academic.git
cd chatgpt_academic
# 配置 海外Proxy 和 OpenAI API KEY
config.py
用任意文本编辑器编辑 config.py
# 安装
docker build -t gpt-academic .
# 运行
docker run --rm -it --net=host gpt-academic
# 测试实验性功能
## 测试自我代码解读
点击 "[实验] 请解析并解构此项目本身"
## 测试实验功能模板函数要求gpt回答历史上的今天发生了什么您可以根据此函数为模板实现更复杂的功能
点击 "[实验] 实验功能函数模板"
##请注意在docker中运行时需要额外注意程序的文件访问权限问题
## 测试C++项目头文件分析
input区域 输入 ./crazy_functions/test_project/cpp/libJPG 然后点击 "[实验] 解析整个C++项目input输入项目根路径"
## 测试给Latex项目写摘要
input区域 输入 ./crazy_functions/test_project/latex/attention 然后点击 "[实验] 读tex论文写摘要input输入项目根路径"
## 测试Python项目分析
input区域 输入 ./crazy_functions/test_project/python/dqn 然后点击 "[实验] 解析整个py项目input输入项目根路径"
```
## 参考项目
## 其他部署方式
- 使用WSL2Windows Subsystem for Linux 子系统)
请访问[部署wiki-1](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)
- nginx远程部署
请访问[部署wiki-2](https://github.com/binary-husky/chatgpt_academic/wiki/%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E7%9A%84%E6%8C%87%E5%AF%BC)
## 自定义新的便捷按钮(学术快捷键自定义)
打开functional.py,添加条目如下,然后重启程序即可。如果按钮已经添加成功并可见,那么前缀、后缀都支持热修改,无需重启程序即可生效。
例如:
```
https://github.com/Python-Markdown/markdown
https://github.com/gradio-app/gradio
https://github.com/polarwinkel/mdtex2html
https://github.com/GaiZhenbiao/ChuanhuChatGPT
"超级英译中": {
# 前缀,会被加在你的输入之前。例如,用来描述你的要求,例如翻译、解释代码、润色等等
"Prefix": "请翻译把下面一段内容成中文然后用一个markdown表格逐一解释文中出现的专有名词\n\n",
# 后缀,会被加在你的输入之后。例如,配合前缀可以把你的输入内容用引号圈起来。
"Suffix": "",
},
```
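Conceptually, when a custom button is clicked, its Prefix and Suffix are simply concatenated around the current content of the input box before the request is sent — a simplified sketch of that idea (`apply_shortcut` is a hypothetical helper name, not the project's actual code):
```
# 仅作示意:自定义按钮的前缀/后缀只是拼接在输入内容前后
def apply_shortcut(entry, user_input):
    # entry looks like {"Prefix": "...", "Suffix": ""}
    return entry.get("Prefix", "") + user_input + entry.get("Suffix", "")
```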
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/226899272-477c2134-ed71-4326-810c-29891fe4a508.png" width="500" >
</div>
如果你发明了更好用的学术快捷键欢迎发issue或者pull requests
## 配置代理
### 方法一:常规方法
在```config.py```中修改端口与代理软件对应
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/226571294-37a47cd9-4d40-4c16-97a2-d360845406f7.png" width="500" >
<img src="https://user-images.githubusercontent.com/96192199/226838985-e5c95956-69c2-4c23-a4dd-cd7944eeb451.png" width="500" >
</div>
配置完成后,你可以用以下命令测试代理是否工作,如果一切正常,下面的代码将输出你的代理服务器所在地:
```
python check_proxy.py
```
### 方法二:纯新手教程
[纯新手教程](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BB%A3%E7%90%86%E8%BD%AF%E4%BB%B6%E9%97%AE%E9%A2%98%E7%9A%84%E6%96%B0%E6%89%8B%E8%A7%A3%E5%86%B3%E6%96%B9%E6%B3%95%EF%BC%88%E6%96%B9%E6%B3%95%E5%8F%AA%E9%80%82%E7%94%A8%E4%BA%8E%E6%96%B0%E6%89%8B%EF%BC%89)
## 兼容性测试
### 图片显示:
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/228737599-bf0a9d9c-1808-4f43-ae15-dfcc7af0f295.png" width="800" >
</div>
### 如果一个程序能够读懂并剖析自己:
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/226936850-c77d7183-0749-4c1c-9875-fd4891842d0c.png" width="800" >
</div>
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/226936618-9b487e4b-ab5b-4b6e-84c6-16942102e917.png" width="800" >
</div>
### 其他任意Python/Cpp项目剖析
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/226935232-6b6a73ce-8900-4aee-93f9-733c7e6fef53.png" width="800" >
</div>
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/226969067-968a27c1-1b9c-486b-8b81-ab2de8d3f88a.png" width="800" >
</div>
### Latex论文一键阅读理解与摘要生成
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/227504406-86ab97cd-f208-41c3-8e4a-7000e51cf980.png" width="800" >
</div>
### 自动报告生成
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/227503770-fe29ce2c-53fd-47b0-b0ff-93805f0c2ff4.png" height="300" >
<img src="https://user-images.githubusercontent.com/96192199/227504617-7a497bb3-0a2a-4b50-9a8a-95ae60ea7afd.png" height="300" >
<img src="https://user-images.githubusercontent.com/96192199/227504005-efeaefe0-b687-49d0-bf95-2d7b7e66c348.png" height="300" >
</div>
### 模块化功能设计
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/229288270-093643c1-0018-487a-81e6-1d7809b6e90f.png" height="400" >
<img src="https://user-images.githubusercontent.com/96192199/227504931-19955f78-45cd-4d1c-adac-e71e50957915.png" height="400" >
</div>
### 源代码转译英文
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/229720562-fe6c3508-6142-4635-a83d-21eb3669baee.png" height="400" >
</div>
## Todo 与 版本规划:
- version 3 (Todo):
- - 支持gpt4和其他更多llm
- version 2.3+ (Todo):
- - 总结大工程源代码时文本过长、token溢出的问题
- - 实现项目打包部署
- - 函数插件参数接口优化
- - 自更新
- version 2.3: 增强多线程交互性
- version 2.2: 函数插件支持热重载
- version 2.1: 可折叠式布局
- version 2.0: 引入模块化函数插件
- version 1.0: 基础功能

27
check_proxy.py Normal file

@@ -0,0 +1,27 @@
def check_proxy(proxies):
    import requests
    proxies_https = proxies['https'] if proxies is not None else ''
    try:
        response = requests.get("https://ipapi.co/json/", proxies=proxies, timeout=4)
        data = response.json()
        print(f'查询代理的地理位置,返回的结果是{data}')
        if 'country_name' in data:
            country = data['country_name']
            result = f"代理配置 {proxies_https}, 代理所在地:{country}"
        elif 'error' in data:
            result = f"代理配置 {proxies_https}, 代理所在地未知IP查询频率受限"
        print(result)
        return result
    except:
        result = f"代理配置 {proxies_https}, 代理所在地查询超时,代理可能无效"
        print(result)
        return result


if __name__ == '__main__':
    import os; os.environ['no_proxy'] = '*'  # 避免代理网络产生意外污染
    from toolbox import get_conf
    proxies, = get_conf('proxies')
    check_proxy(proxies)

config.py

@@ -1,11 +1,46 @@
# my_api_key = "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"
API_KEY = "sk-此处填API秘钥"
API_URL = "https://api.openai.com/v1/chat/completions"
# [step 1]>> 例如: API_KEY = "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"(此key无效)
API_KEY = "sk-此处填API密钥"
# [step 2]>> 改为True应用代理;如果直接在海外服务器部署,此处不修改
USE_PROXY = False
if USE_PROXY:
    proxies = { "http": "socks5h://localhost:11284", "https": "socks5h://localhost:11284", }
    print('网络代理状态:运行。')
    # 填写格式是 [协议]:// [地址] :[端口],填写之前不要忘记把USE_PROXY改成True;如果直接在海外服务器部署,此处不修改
    # 例如 "socks5h://localhost:11284"
    # [协议] 常见协议无非socks5h/http; 例如 v2**y 和 ss* 的默认本地协议是socks5h; 而cl**h 的默认本地协议是http
    # [地址] 懂的都懂,不懂就填localhost或者127.0.0.1,肯定错不了(localhost意思是代理软件安装在本机上)
    # [端口] 在代理软件的设置里找。虽然不同的代理软件界面不一样,但端口号都应该在最显眼的位置上
    # 代理网络的地址,打开你的科学上网软件查看代理的协议(socks5/http)、地址(localhost)和端口(11284)
    proxies = {
        # [协议]:// [地址] :[端口]
        "http": "socks5h://localhost:11284",
        "https": "socks5h://localhost:11284",
    }
else:
    proxies = None
    print('网络代理状态:未配置。无代理状态下很可能无法访问。')

# [step 3]>> 以下配置可以优化体验,但大部分场合下并不需要修改
# 对话窗的高度
CHATBOT_HEIGHT = 1115
# 发送请求到OpenAI后,等待多久判定为超时
TIMEOUT_SECONDS = 25
# 网页的端口, -1代表随机端口
WEB_PORT = -1
# 如果OpenAI不响应(网络卡顿、代理失败、KEY失效),重试的次数限制
MAX_RETRY = 2
# OpenAI模型选择是(gpt4现在只对申请成功的人开放)
LLM_MODEL = "gpt-3.5-turbo"
# OpenAI的API_URL
API_URL = "https://api.openai.com/v1/chat/completions"
# 设置并行使用的线程数
CONCURRENT_COUNT = 100
# 设置用户名和密码(相关功能不稳定,与gradio版本和网络都相关,如果本地使用不建议加这个)
AUTHENTICATION = []  # [("username", "password"), ("username2", "password2"), ...]
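For reference, the `config_private.py` override described in the README can be implemented with a small lookup helper along these lines — a sketch of the idea behind `toolbox.get_conf` (the project's actual implementation may differ):
```
# 仅作示意:读取配置项时,若存在config_private.py且定义了同名项,则优先使用
import importlib

def read_single_conf(arg):
    try:
        return getattr(importlib.import_module('config_private'), arg)
    except (ImportError, AttributeError):
        return getattr(importlib.import_module('config'), arg)

def get_conf(*args):
    # 用法与check_proxy.py一致:proxies, = get_conf('proxies')
    return tuple(read_single_conf(arg) for arg in args)
```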



@@ -0,0 +1,87 @@
#include "libipc/buffer.h"
#include "libipc/utility/pimpl.h"
#include <cstring>
namespace ipc {
bool operator==(buffer const & b1, buffer const & b2) {
return (b1.size() == b2.size()) && (std::memcmp(b1.data(), b2.data(), b1.size()) == 0);
}
bool operator!=(buffer const & b1, buffer const & b2) {
return !(b1 == b2);
}
class buffer::buffer_ : public pimpl<buffer_> {
public:
void* p_;
std::size_t s_;
void* a_;
buffer::destructor_t d_;
buffer_(void* p, std::size_t s, buffer::destructor_t d, void* a)
: p_(p), s_(s), a_(a), d_(d) {
}
~buffer_() {
if (d_ == nullptr) return;
d_((a_ == nullptr) ? p_ : a_, s_);
}
};
buffer::buffer()
: buffer(nullptr, 0, nullptr, nullptr) {
}
buffer::buffer(void* p, std::size_t s, destructor_t d)
: p_(p_->make(p, s, d, nullptr)) {
}
buffer::buffer(void* p, std::size_t s, destructor_t d, void* additional)
: p_(p_->make(p, s, d, additional)) {
}
buffer::buffer(void* p, std::size_t s)
: buffer(p, s, nullptr) {
}
buffer::buffer(char const & c)
: buffer(const_cast<char*>(&c), 1) {
}
buffer::buffer(buffer&& rhs)
: buffer() {
swap(rhs);
}
buffer::~buffer() {
p_->clear();
}
void buffer::swap(buffer& rhs) {
std::swap(p_, rhs.p_);
}
buffer& buffer::operator=(buffer rhs) {
swap(rhs);
return *this;
}
bool buffer::empty() const noexcept {
return (impl(p_)->p_ == nullptr) || (impl(p_)->s_ == 0);
}
void* buffer::data() noexcept {
return impl(p_)->p_;
}
void const * buffer::data() const noexcept {
return impl(p_)->p_;
}
std::size_t buffer::size() const noexcept {
return impl(p_)->s_;
}
} // namespace ipc


@@ -0,0 +1,701 @@
#include <type_traits>
#include <cstring>
#include <algorithm>
#include <utility> // std::pair, std::move, std::forward
#include <atomic>
#include <type_traits> // aligned_storage_t
#include <string>
#include <vector>
#include <array>
#include <cassert>
#include "libipc/ipc.h"
#include "libipc/def.h"
#include "libipc/shm.h"
#include "libipc/pool_alloc.h"
#include "libipc/queue.h"
#include "libipc/policy.h"
#include "libipc/rw_lock.h"
#include "libipc/waiter.h"
#include "libipc/utility/log.h"
#include "libipc/utility/id_pool.h"
#include "libipc/utility/scope_guard.h"
#include "libipc/utility/utility.h"
#include "libipc/memory/resource.h"
#include "libipc/platform/detail.h"
#include "libipc/circ/elem_array.h"
namespace {
using msg_id_t = std::uint32_t;
using acc_t = std::atomic<msg_id_t>;
template <std::size_t DataSize, std::size_t AlignSize>
struct msg_t;
template <std::size_t AlignSize>
struct msg_t<0, AlignSize> {
msg_id_t cc_id_;
msg_id_t id_;
std::int32_t remain_;
bool storage_;
};
template <std::size_t DataSize, std::size_t AlignSize>
struct msg_t : msg_t<0, AlignSize> {
std::aligned_storage_t<DataSize, AlignSize> data_ {};
msg_t() = default;
msg_t(msg_id_t cc_id, msg_id_t id, std::int32_t remain, void const * data, std::size_t size)
: msg_t<0, AlignSize> {cc_id, id, remain, (data == nullptr) || (size == 0)} {
if (this->storage_) {
if (data != nullptr) {
// copy storage-id
*reinterpret_cast<ipc::storage_id_t*>(&data_) =
*static_cast<ipc::storage_id_t const *>(data);
}
}
else std::memcpy(&data_, data, size);
}
};
template <typename T>
ipc::buff_t make_cache(T& data, std::size_t size) {
auto ptr = ipc::mem::alloc(size);
std::memcpy(ptr, &data, (ipc::detail::min)(sizeof(data), size));
return { ptr, size, ipc::mem::free };
}
struct cache_t {
std::size_t fill_;
ipc::buff_t buff_;
cache_t(std::size_t f, ipc::buff_t && b)
: fill_(f), buff_(std::move(b))
{}
void append(void const * data, std::size_t size) {
if (fill_ >= buff_.size() || data == nullptr || size == 0) return;
auto new_fill = (ipc::detail::min)(fill_ + size, buff_.size());
std::memcpy(static_cast<ipc::byte_t*>(buff_.data()) + fill_, data, new_fill - fill_);
fill_ = new_fill;
}
};
auto cc_acc() {
static ipc::shm::handle acc_h("__CA_CONN__", sizeof(acc_t));
return static_cast<acc_t*>(acc_h.get());
}
IPC_CONSTEXPR_ std::size_t align_chunk_size(std::size_t size) noexcept {
return (((size - 1) / ipc::large_msg_align) + 1) * ipc::large_msg_align;
}
IPC_CONSTEXPR_ std::size_t calc_chunk_size(std::size_t size) noexcept {
return ipc::make_align(alignof(std::max_align_t), align_chunk_size(
ipc::make_align(alignof(std::max_align_t), sizeof(std::atomic<ipc::circ::cc_t>)) + size));
}
struct chunk_t {
std::atomic<ipc::circ::cc_t> &conns() noexcept {
return *reinterpret_cast<std::atomic<ipc::circ::cc_t> *>(this);
}
void *data() noexcept {
return reinterpret_cast<ipc::byte_t *>(this)
+ ipc::make_align(alignof(std::max_align_t), sizeof(std::atomic<ipc::circ::cc_t>));
}
};
struct chunk_info_t {
ipc::id_pool<> pool_;
ipc::spin_lock lock_;
IPC_CONSTEXPR_ static std::size_t chunks_mem_size(std::size_t chunk_size) noexcept {
return ipc::id_pool<>::max_count * chunk_size;
}
ipc::byte_t *chunks_mem() noexcept {
return reinterpret_cast<ipc::byte_t *>(this + 1);
}
chunk_t *at(std::size_t chunk_size, ipc::storage_id_t id) noexcept {
if (id < 0) return nullptr;
return reinterpret_cast<chunk_t *>(chunks_mem() + (chunk_size * id));
}
};
auto& chunk_storages() {
class chunk_handle_t {
ipc::shm::handle handle_;
public:
chunk_info_t *get_info(std::size_t chunk_size) {
if (!handle_.valid() &&
!handle_.acquire( ("__CHUNK_INFO__" + ipc::to_string(chunk_size)).c_str(),
sizeof(chunk_info_t) + chunk_info_t::chunks_mem_size(chunk_size) )) {
ipc::error("[chunk_storages] chunk_shm.id_info_.acquire failed: chunk_size = %zd\n", chunk_size);
return nullptr;
}
auto info = static_cast<chunk_info_t*>(handle_.get());
if (info == nullptr) {
ipc::error("[chunk_storages] chunk_shm.id_info_.get failed: chunk_size = %zd\n", chunk_size);
return nullptr;
}
return info;
}
};
static ipc::map<std::size_t, chunk_handle_t> chunk_hs;
return chunk_hs;
}
chunk_info_t *chunk_storage_info(std::size_t chunk_size) {
auto &storages = chunk_storages();
std::decay_t<decltype(storages)>::iterator it;
{
static ipc::rw_lock lock;
IPC_UNUSED_ std::shared_lock<ipc::rw_lock> guard {lock};
if ((it = storages.find(chunk_size)) == storages.end()) {
using chunk_handle_t = std::decay_t<decltype(storages)>::value_type::second_type;
guard.unlock();
IPC_UNUSED_ std::lock_guard<ipc::rw_lock> guard {lock};
it = storages.emplace(chunk_size, chunk_handle_t{}).first;
}
}
return it->second.get_info(chunk_size);
}
std::pair<ipc::storage_id_t, void*> acquire_storage(std::size_t size, ipc::circ::cc_t conns) {
std::size_t chunk_size = calc_chunk_size(size);
auto info = chunk_storage_info(chunk_size);
if (info == nullptr) return {};
info->lock_.lock();
info->pool_.prepare();
// got an unique id
auto id = info->pool_.acquire();
info->lock_.unlock();
auto chunk = info->at(chunk_size, id);
if (chunk == nullptr) return {};
chunk->conns().store(conns, std::memory_order_relaxed);
return { id, chunk->data() };
}
void *find_storage(ipc::storage_id_t id, std::size_t size) {
if (id < 0) {
ipc::error("[find_storage] id is invalid: id = %ld, size = %zd\n", (long)id, size);
return nullptr;
}
std::size_t chunk_size = calc_chunk_size(size);
auto info = chunk_storage_info(chunk_size);
if (info == nullptr) return nullptr;
return info->at(chunk_size, id)->data();
}
void release_storage(ipc::storage_id_t id, std::size_t size) {
if (id < 0) {
ipc::error("[release_storage] id is invalid: id = %ld, size = %zd\n", (long)id, size);
return;
}
std::size_t chunk_size = calc_chunk_size(size);
auto info = chunk_storage_info(chunk_size);
if (info == nullptr) return;
info->lock_.lock();
info->pool_.release(id);
info->lock_.unlock();
}
template <ipc::relat Rp, ipc::relat Rc>
bool sub_rc(ipc::wr<Rp, Rc, ipc::trans::unicast>,
std::atomic<ipc::circ::cc_t> &/*conns*/, ipc::circ::cc_t /*curr_conns*/, ipc::circ::cc_t /*conn_id*/) noexcept {
return true;
}
template <ipc::relat Rp, ipc::relat Rc>
bool sub_rc(ipc::wr<Rp, Rc, ipc::trans::broadcast>,
std::atomic<ipc::circ::cc_t> &conns, ipc::circ::cc_t curr_conns, ipc::circ::cc_t conn_id) noexcept {
auto last_conns = curr_conns & ~conn_id;
for (unsigned k = 0;;) {
auto chunk_conns = conns.load(std::memory_order_acquire);
if (conns.compare_exchange_weak(chunk_conns, chunk_conns & last_conns, std::memory_order_release)) {
return (chunk_conns & last_conns) == 0;
}
ipc::yield(k);
}
}
template <typename Flag>
void recycle_storage(ipc::storage_id_t id, std::size_t size, ipc::circ::cc_t curr_conns, ipc::circ::cc_t conn_id) {
if (id < 0) {
ipc::error("[recycle_storage] id is invalid: id = %ld, size = %zd\n", (long)id, size);
return;
}
std::size_t chunk_size = calc_chunk_size(size);
auto info = chunk_storage_info(chunk_size);
if (info == nullptr) return;
auto chunk = info->at(chunk_size, id);
if (chunk == nullptr) return;
if (!sub_rc(Flag{}, chunk->conns(), curr_conns, conn_id)) {
return;
}
info->lock_.lock();
info->pool_.release(id);
info->lock_.unlock();
}
template <typename MsgT>
bool clear_message(void* p) {
auto msg = static_cast<MsgT*>(p);
if (msg->storage_) {
std::int32_t r_size = static_cast<std::int32_t>(ipc::data_length) + msg->remain_;
if (r_size <= 0) {
ipc::error("[clear_message] invalid msg size: %d\n", (int)r_size);
return true;
}
release_storage(
*reinterpret_cast<ipc::storage_id_t*>(&msg->data_),
static_cast<std::size_t>(r_size));
}
return true;
}
struct conn_info_head {
ipc::string name_;
msg_id_t cc_id_; // connection-info id
ipc::detail::waiter cc_waiter_, wt_waiter_, rd_waiter_;
ipc::shm::handle acc_h_;
conn_info_head(char const * name)
: name_ {name}
, cc_id_ {(cc_acc() == nullptr) ? 0 : cc_acc()->fetch_add(1, std::memory_order_relaxed)}
, cc_waiter_{("__CC_CONN__" + name_).c_str()}
, wt_waiter_{("__WT_CONN__" + name_).c_str()}
, rd_waiter_{("__RD_CONN__" + name_).c_str()}
, acc_h_ {("__AC_CONN__" + name_).c_str(), sizeof(acc_t)} {
}
void quit_waiting() {
cc_waiter_.quit_waiting();
wt_waiter_.quit_waiting();
rd_waiter_.quit_waiting();
}
auto acc() {
return static_cast<acc_t*>(acc_h_.get());
}
auto& recv_cache() {
thread_local ipc::unordered_map<msg_id_t, cache_t> tls;
return tls;
}
};
template <typename W, typename F>
bool wait_for(W& waiter, F&& pred, std::uint64_t tm) {
if (tm == 0) return !pred();
for (unsigned k = 0; pred();) {
bool ret = true;
ipc::sleep(k, [&k, &ret, &waiter, &pred, tm] {
ret = waiter.wait_if(std::forward<F>(pred), tm);
k = 0;
});
if (!ret) return false; // timeout or fail
if (k == 0) break; // k has been reset
}
return true;
}
template <typename Policy,
std::size_t DataSize = ipc::data_length,
std::size_t AlignSize = (ipc::detail::min)(DataSize, alignof(std::max_align_t))>
struct queue_generator {
using queue_t = ipc::queue<msg_t<DataSize, AlignSize>, Policy>;
struct conn_info_t : conn_info_head {
queue_t que_;
conn_info_t(char const * name)
: conn_info_head{name}
, que_{("__QU_CONN__" +
ipc::to_string(DataSize) + "__" +
ipc::to_string(AlignSize) + "__" + name).c_str()} {
}
void disconnect_receiver() {
bool dis = que_.disconnect();
this->quit_waiting();
if (dis) {
this->recv_cache().clear();
}
}
};
};
template <typename Policy>
struct detail_impl {
using policy_t = Policy;
using flag_t = typename policy_t::flag_t;
using queue_t = typename queue_generator<policy_t>::queue_t;
using conn_info_t = typename queue_generator<policy_t>::conn_info_t;
constexpr static conn_info_t* info_of(ipc::handle_t h) noexcept {
return static_cast<conn_info_t*>(h);
}
constexpr static queue_t* queue_of(ipc::handle_t h) noexcept {
return (info_of(h) == nullptr) ? nullptr : &(info_of(h)->que_);
}
/* API implementations */
static void disconnect(ipc::handle_t h) {
auto que = queue_of(h);
if (que == nullptr) {
return;
}
que->shut_sending();
assert(info_of(h) != nullptr);
info_of(h)->disconnect_receiver();
}
static bool reconnect(ipc::handle_t * ph, bool start_to_recv) {
assert(ph != nullptr);
assert(*ph != nullptr);
auto que = queue_of(*ph);
if (que == nullptr) {
return false;
}
if (start_to_recv) {
que->shut_sending();
if (que->connect()) { // wouldn't connect twice
info_of(*ph)->cc_waiter_.broadcast();
return true;
}
return false;
}
// start_to_recv == false
if (que->connected()) {
info_of(*ph)->disconnect_receiver();
}
return que->ready_sending();
}
static bool connect(ipc::handle_t * ph, char const * name, bool start_to_recv) {
assert(ph != nullptr);
if (*ph == nullptr) {
*ph = ipc::mem::alloc<conn_info_t>(name);
}
return reconnect(ph, start_to_recv);
}
static void destroy(ipc::handle_t h) {
disconnect(h);
ipc::mem::free(info_of(h));
}
static std::size_t recv_count(ipc::handle_t h) noexcept {
auto que = queue_of(h);
if (que == nullptr) {
return ipc::invalid_value;
}
return que->conn_count();
}
static bool wait_for_recv(ipc::handle_t h, std::size_t r_count, std::uint64_t tm) {
auto que = queue_of(h);
if (que == nullptr) {
return false;
}
return wait_for(info_of(h)->cc_waiter_, [que, r_count] {
return que->conn_count() < r_count;
}, tm);
}
template <typename F>
static bool send(F&& gen_push, ipc::handle_t h, void const * data, std::size_t size) {
if (data == nullptr || size == 0) {
ipc::error("fail: send(%p, %zd)\n", data, size);
return false;
}
auto que = queue_of(h);
if (que == nullptr) {
ipc::error("fail: send, queue_of(h) == nullptr\n");
return false;
}
if (que->elems() == nullptr) {
ipc::error("fail: send, queue_of(h)->elems() == nullptr\n");
return false;
}
if (!que->ready_sending()) {
ipc::error("fail: send, que->ready_sending() == false\n");
return false;
}
ipc::circ::cc_t conns = que->elems()->connections(std::memory_order_relaxed);
if (conns == 0) {
ipc::error("fail: send, there is no receiver on this connection.\n");
return false;
}
// calc a new message id
auto acc = info_of(h)->acc();
if (acc == nullptr) {
ipc::error("fail: send, info_of(h)->acc() == nullptr\n");
return false;
}
auto msg_id = acc->fetch_add(1, std::memory_order_relaxed);
auto try_push = std::forward<F>(gen_push)(info_of(h), que, msg_id);
if (size > ipc::large_msg_limit) {
auto dat = acquire_storage(size, conns);
void * buf = dat.second;
if (buf != nullptr) {
std::memcpy(buf, data, size);
return try_push(static_cast<std::int32_t>(size) -
static_cast<std::int32_t>(ipc::data_length), &(dat.first), 0);
}
// try using message fragment
//ipc::log("fail: shm::handle for big message. msg_id: %zd, size: %zd\n", msg_id, size);
}
// push message fragment
std::int32_t offset = 0;
for (std::int32_t i = 0; i < static_cast<std::int32_t>(size / ipc::data_length); ++i, offset += ipc::data_length) {
if (!try_push(static_cast<std::int32_t>(size) - offset - static_cast<std::int32_t>(ipc::data_length),
static_cast<ipc::byte_t const *>(data) + offset, ipc::data_length)) {
return false;
}
}
// if remain > 0, this is the last message fragment
std::int32_t remain = static_cast<std::int32_t>(size) - offset;
if (remain > 0) {
if (!try_push(remain - static_cast<std::int32_t>(ipc::data_length),
static_cast<ipc::byte_t const *>(data) + offset,
static_cast<std::size_t>(remain))) {
return false;
}
}
return true;
}
static bool send(ipc::handle_t h, void const * data, std::size_t size, std::uint64_t tm) {
return send([tm](auto info, auto que, auto msg_id) {
return [tm, info, que, msg_id](std::int32_t remain, void const * data, std::size_t size) {
if (!wait_for(info->wt_waiter_, [&] {
return !que->push(
[](void*) { return true; },
info->cc_id_, msg_id, remain, data, size);
}, tm)) {
ipc::log("force_push: msg_id = %zd, remain = %d, size = %zd\n", msg_id, remain, size);
if (!que->force_push(
clear_message<typename queue_t::value_t>,
info->cc_id_, msg_id, remain, data, size)) {
return false;
}
}
info->rd_waiter_.broadcast();
return true;
};
}, h, data, size);
}
static bool try_send(ipc::handle_t h, void const * data, std::size_t size, std::uint64_t tm) {
return send([tm](auto info, auto que, auto msg_id) {
return [tm, info, que, msg_id](std::int32_t remain, void const * data, std::size_t size) {
if (!wait_for(info->wt_waiter_, [&] {
return !que->push(
[](void*) { return true; },
info->cc_id_, msg_id, remain, data, size);
}, tm)) {
return false;
}
info->rd_waiter_.broadcast();
return true;
};
}, h, data, size);
}
static ipc::buff_t recv(ipc::handle_t h, std::uint64_t tm) {
auto que = queue_of(h);
if (que == nullptr) {
ipc::error("fail: recv, queue_of(h) == nullptr\n");
return {};
}
if (!que->connected()) {
// hasn't connected yet, just return.
return {};
}
auto& rc = info_of(h)->recv_cache();
for (;;) {
// pop a new message
typename queue_t::value_t msg;
if (!wait_for(info_of(h)->rd_waiter_, [que, &msg] {
return !que->pop(msg);
}, tm)) {
// pop failed, just return.
return {};
}
info_of(h)->wt_waiter_.broadcast();
if ((info_of(h)->acc() != nullptr) && (msg.cc_id_ == info_of(h)->cc_id_)) {
continue; // ignore message to self
}
// msg.remain_ may minus & abs(msg.remain_) < data_length
std::int32_t r_size = static_cast<std::int32_t>(ipc::data_length) + msg.remain_;
if (r_size <= 0) {
ipc::error("fail: recv, r_size = %d\n", (int)r_size);
return {};
}
std::size_t msg_size = static_cast<std::size_t>(r_size);
// large message
if (msg.storage_) {
ipc::storage_id_t buf_id = *reinterpret_cast<ipc::storage_id_t*>(&msg.data_);
void* buf = find_storage(buf_id, msg_size);
if (buf != nullptr) {
struct recycle_t {
ipc::storage_id_t storage_id;
ipc::circ::cc_t curr_conns;
ipc::circ::cc_t conn_id;
} *r_info = ipc::mem::alloc<recycle_t>(recycle_t{
buf_id, que->elems()->connections(std::memory_order_relaxed), que->connected_id()
});
if (r_info == nullptr) {
ipc::log("fail: ipc::mem::alloc<recycle_t>.\n");
return ipc::buff_t{buf, msg_size}; // no recycle
} else {
return ipc::buff_t{buf, msg_size, [](void* p_info, std::size_t size) {
auto r_info = static_cast<recycle_t *>(p_info);
IPC_UNUSED_ auto finally = ipc::guard([r_info] {
ipc::mem::free(r_info);
});
recycle_storage<flag_t>(r_info->storage_id, size, r_info->curr_conns, r_info->conn_id);
}, r_info};
}
} else {
ipc::log("fail: shm::handle for large message. msg_id: %zd, buf_id: %zd, size: %zd\n", msg.id_, buf_id, msg_size);
continue;
}
}
// find cache with msg.id_
auto cac_it = rc.find(msg.id_);
if (cac_it == rc.end()) {
if (msg_size <= ipc::data_length) {
return make_cache(msg.data_, msg_size);
}
// gc
if (rc.size() > 1024) {
std::vector<msg_id_t> need_del;
for (auto const & pair : rc) {
auto cmp = std::minmax(msg.id_, pair.first);
if (cmp.second - cmp.first > 8192) {
need_del.push_back(pair.first);
}
}
for (auto id : need_del) rc.erase(id);
}
// cache the first message fragment
rc.emplace(msg.id_, cache_t { ipc::data_length, make_cache(msg.data_, msg_size) });
}
// a fragment of this message has already been cached
else {
auto& cac = cac_it->second;
// this is the last message fragment
if (msg.remain_ <= 0) {
cac.append(&(msg.data_), msg_size);
// finish this message, erase it from cache
auto buff = std::move(cac.buff_);
rc.erase(cac_it);
return buff;
}
// more data remains after this fragment
cac.append(&(msg.data_), ipc::data_length);
}
}
}
static ipc::buff_t try_recv(ipc::handle_t h) {
return recv(h, 0);
}
}; // detail_impl<Policy>
template <typename Flag>
using policy_t = ipc::policy::choose<ipc::circ::elem_array, Flag>;
} // internal-linkage
namespace ipc {
template <typename Flag>
ipc::handle_t chan_impl<Flag>::inited() {
ipc::detail::waiter::init();
return nullptr;
}
template <typename Flag>
bool chan_impl<Flag>::connect(ipc::handle_t * ph, char const * name, unsigned mode) {
return detail_impl<policy_t<Flag>>::connect(ph, name, mode & receiver);
}
template <typename Flag>
bool chan_impl<Flag>::reconnect(ipc::handle_t * ph, unsigned mode) {
return detail_impl<policy_t<Flag>>::reconnect(ph, mode & receiver);
}
template <typename Flag>
void chan_impl<Flag>::disconnect(ipc::handle_t h) {
detail_impl<policy_t<Flag>>::disconnect(h);
}
template <typename Flag>
void chan_impl<Flag>::destroy(ipc::handle_t h) {
detail_impl<policy_t<Flag>>::destroy(h);
}
template <typename Flag>
char const * chan_impl<Flag>::name(ipc::handle_t h) {
auto info = detail_impl<policy_t<Flag>>::info_of(h);
return (info == nullptr) ? nullptr : info->name_.c_str();
}
template <typename Flag>
std::size_t chan_impl<Flag>::recv_count(ipc::handle_t h) {
return detail_impl<policy_t<Flag>>::recv_count(h);
}
template <typename Flag>
bool chan_impl<Flag>::wait_for_recv(ipc::handle_t h, std::size_t r_count, std::uint64_t tm) {
return detail_impl<policy_t<Flag>>::wait_for_recv(h, r_count, tm);
}
template <typename Flag>
bool chan_impl<Flag>::send(ipc::handle_t h, void const * data, std::size_t size, std::uint64_t tm) {
return detail_impl<policy_t<Flag>>::send(h, data, size, tm);
}
template <typename Flag>
buff_t chan_impl<Flag>::recv(ipc::handle_t h, std::uint64_t tm) {
return detail_impl<policy_t<Flag>>::recv(h, tm);
}
template <typename Flag>
bool chan_impl<Flag>::try_send(ipc::handle_t h, void const * data, std::size_t size, std::uint64_t tm) {
return detail_impl<policy_t<Flag>>::try_send(h, data, size, tm);
}
template <typename Flag>
buff_t chan_impl<Flag>::try_recv(ipc::handle_t h) {
return detail_impl<policy_t<Flag>>::try_recv(h);
}
template struct chan_impl<ipc::wr<relat::single, relat::single, trans::unicast >>;
// template struct chan_impl<ipc::wr<relat::single, relat::multi , trans::unicast >>; // TBD
// template struct chan_impl<ipc::wr<relat::multi , relat::multi , trans::unicast >>; // TBD
template struct chan_impl<ipc::wr<relat::single, relat::multi , trans::broadcast>>;
template struct chan_impl<ipc::wr<relat::multi , relat::multi , trans::broadcast>>;
} // namespace ipc

View File

@ -0,0 +1,25 @@
#pragma once
#include <type_traits>
#include "libipc/def.h"
#include "libipc/prod_cons.h"
#include "libipc/circ/elem_array.h"
namespace ipc {
namespace policy {
template <template <typename, std::size_t...> class Elems, typename Flag>
struct choose;
template <typename Flag>
struct choose<circ::elem_array, Flag> {
using flag_t = Flag;
template <std::size_t DataSize, std::size_t AlignSize>
using elems_t = circ::elem_array<ipc::prod_cons_impl<flag_t>, DataSize, AlignSize>;
};
} // namespace policy
} // namespace ipc

View File

@ -0,0 +1,17 @@
#include "libipc/pool_alloc.h"
#include "libipc/memory/resource.h"
namespace ipc {
namespace mem {
void* pool_alloc::alloc(std::size_t size) {
return async_pool_alloc::alloc(size);
}
void pool_alloc::free(void* p, std::size_t size) {
async_pool_alloc::free(p, size);
}
} // namespace mem
} // namespace ipc

View File

@ -0,0 +1,433 @@
#pragma once
#include <atomic>
#include <utility>
#include <cstring>
#include <type_traits>
#include <cstdint>
#include "libipc/def.h"
#include "libipc/platform/detail.h"
#include "libipc/circ/elem_def.h"
#include "libipc/utility/log.h"
#include "libipc/utility/utility.h"
namespace ipc {
////////////////////////////////////////////////////////////////
/// producer-consumer implementation
////////////////////////////////////////////////////////////////
template <typename Flag>
struct prod_cons_impl;
template <>
struct prod_cons_impl<wr<relat::single, relat::single, trans::unicast>> {
template <std::size_t DataSize, std::size_t AlignSize>
struct elem_t {
std::aligned_storage_t<DataSize, AlignSize> data_ {};
};
alignas(cache_line_size) std::atomic<circ::u2_t> rd_; // read index
alignas(cache_line_size) std::atomic<circ::u2_t> wt_; // write index
constexpr circ::u2_t cursor() const noexcept {
return 0;
}
template <typename W, typename F, typename E>
bool push(W* /*wrapper*/, F&& f, E* elems) {
auto cur_wt = circ::index_of(wt_.load(std::memory_order_relaxed));
if (cur_wt == circ::index_of(rd_.load(std::memory_order_acquire) - 1)) {
return false; // full
}
std::forward<F>(f)(&(elems[cur_wt].data_));
wt_.fetch_add(1, std::memory_order_release);
return true;
}
/**
* In single-single-unicast, 'force_push' means there is no reader, or the only reader is dead.
* So we can simply disconnect all receiver connections and return false.
*/
template <typename W, typename F, typename E>
bool force_push(W* wrapper, F&&, E*) {
wrapper->elems()->disconnect_receiver(~static_cast<circ::cc_t>(0u));
return false;
}
template <typename W, typename F, typename R, typename E>
bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E* elems) {
auto cur_rd = circ::index_of(rd_.load(std::memory_order_relaxed));
if (cur_rd == circ::index_of(wt_.load(std::memory_order_acquire))) {
return false; // empty
}
std::forward<F>(f)(&(elems[cur_rd].data_));
std::forward<R>(out)(true);
rd_.fetch_add(1, std::memory_order_release);
return true;
}
};
template <>
struct prod_cons_impl<wr<relat::single, relat::multi , trans::unicast>>
: prod_cons_impl<wr<relat::single, relat::single, trans::unicast>> {
template <typename W, typename F, typename E>
bool force_push(W* wrapper, F&&, E*) {
wrapper->elems()->disconnect_receiver(1);
return false;
}
template <typename W, typename F, typename R,
template <std::size_t, std::size_t> class E, std::size_t DS, std::size_t AS>
bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E<DS, AS>* elems) {
byte_t buff[DS];
for (unsigned k = 0;;) {
auto cur_rd = rd_.load(std::memory_order_relaxed);
if (circ::index_of(cur_rd) ==
circ::index_of(wt_.load(std::memory_order_acquire))) {
return false; // empty
}
std::memcpy(buff, &(elems[circ::index_of(cur_rd)].data_), sizeof(buff));
if (rd_.compare_exchange_weak(cur_rd, cur_rd + 1, std::memory_order_release)) {
std::forward<F>(f)(buff);
std::forward<R>(out)(true);
return true;
}
ipc::yield(k);
}
}
};
template <>
struct prod_cons_impl<wr<relat::multi , relat::multi, trans::unicast>>
: prod_cons_impl<wr<relat::single, relat::multi, trans::unicast>> {
using flag_t = std::uint64_t;
template <std::size_t DataSize, std::size_t AlignSize>
struct elem_t {
std::aligned_storage_t<DataSize, AlignSize> data_ {};
std::atomic<flag_t> f_ct_ { 0 }; // commit flag
};
alignas(cache_line_size) std::atomic<circ::u2_t> ct_; // commit index
template <typename W, typename F, typename E>
bool push(W* /*wrapper*/, F&& f, E* elems) {
circ::u2_t cur_ct, nxt_ct;
for (unsigned k = 0;;) {
cur_ct = ct_.load(std::memory_order_relaxed);
if (circ::index_of(nxt_ct = cur_ct + 1) ==
circ::index_of(rd_.load(std::memory_order_acquire))) {
return false; // full
}
if (ct_.compare_exchange_weak(cur_ct, nxt_ct, std::memory_order_acq_rel)) {
break;
}
ipc::yield(k);
}
auto* el = elems + circ::index_of(cur_ct);
std::forward<F>(f)(&(el->data_));
// set flag & try update wt
el->f_ct_.store(~static_cast<flag_t>(cur_ct), std::memory_order_release);
while (1) {
auto cac_ct = el->f_ct_.load(std::memory_order_acquire);
if (cur_ct != wt_.load(std::memory_order_relaxed)) {
return true;
}
if ((~cac_ct) != cur_ct) {
return true;
}
if (!el->f_ct_.compare_exchange_strong(cac_ct, 0, std::memory_order_relaxed)) {
return true;
}
wt_.store(nxt_ct, std::memory_order_release);
cur_ct = nxt_ct;
nxt_ct = cur_ct + 1;
el = elems + circ::index_of(cur_ct);
}
return true;
}
template <typename W, typename F, typename E>
bool force_push(W* wrapper, F&&, E*) {
wrapper->elems()->disconnect_receiver(1);
return false;
}
template <typename W, typename F, typename R,
template <std::size_t, std::size_t> class E, std::size_t DS, std::size_t AS>
bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E<DS, AS>* elems) {
byte_t buff[DS];
for (unsigned k = 0;;) {
auto cur_rd = rd_.load(std::memory_order_relaxed);
auto cur_wt = wt_.load(std::memory_order_acquire);
auto id_rd = circ::index_of(cur_rd);
auto id_wt = circ::index_of(cur_wt);
if (id_rd == id_wt) {
auto* el = elems + id_wt;
auto cac_ct = el->f_ct_.load(std::memory_order_acquire);
if ((~cac_ct) != cur_wt) {
return false; // empty
}
if (el->f_ct_.compare_exchange_weak(cac_ct, 0, std::memory_order_relaxed)) {
wt_.store(cur_wt + 1, std::memory_order_release);
}
k = 0;
}
else {
std::memcpy(buff, &(elems[circ::index_of(cur_rd)].data_), sizeof(buff));
if (rd_.compare_exchange_weak(cur_rd, cur_rd + 1, std::memory_order_release)) {
std::forward<F>(f)(buff);
std::forward<R>(out)(true);
return true;
}
ipc::yield(k);
}
}
}
};
template <>
struct prod_cons_impl<wr<relat::single, relat::multi, trans::broadcast>> {
using rc_t = std::uint64_t;
enum : rc_t {
ep_mask = 0x00000000ffffffffull,
ep_incr = 0x0000000100000000ull
};
template <std::size_t DataSize, std::size_t AlignSize>
struct elem_t {
std::aligned_storage_t<DataSize, AlignSize> data_ {};
std::atomic<rc_t> rc_ { 0 }; // read-counter
};
alignas(cache_line_size) std::atomic<circ::u2_t> wt_; // write index
alignas(cache_line_size) rc_t epoch_ { 0 }; // only one writer
circ::u2_t cursor() const noexcept {
return wt_.load(std::memory_order_acquire);
}
template <typename W, typename F, typename E>
bool push(W* wrapper, F&& f, E* elems) {
E* el;
for (unsigned k = 0;;) {
circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed);
if (cc == 0) return false; // no reader
el = elems + circ::index_of(wt_.load(std::memory_order_relaxed));
// check all consumers have finished reading this element
auto cur_rc = el->rc_.load(std::memory_order_acquire);
circ::cc_t rem_cc = cur_rc & ep_mask;
if ((cc & rem_cc) && ((cur_rc & ~ep_mask) == epoch_)) {
return false; // has not finished yet
}
// consider rem_cc to be 0 here
if (el->rc_.compare_exchange_weak(
cur_rc, epoch_ | static_cast<rc_t>(cc), std::memory_order_release)) {
break;
}
ipc::yield(k);
}
std::forward<F>(f)(&(el->data_));
wt_.fetch_add(1, std::memory_order_release);
return true;
}
template <typename W, typename F, typename E>
bool force_push(W* wrapper, F&& f, E* elems) {
E* el;
epoch_ += ep_incr;
for (unsigned k = 0;;) {
circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed);
if (cc == 0) return false; // no reader
el = elems + circ::index_of(wt_.load(std::memory_order_relaxed));
// check all consumers have finished reading this element
auto cur_rc = el->rc_.load(std::memory_order_acquire);
circ::cc_t rem_cc = cur_rc & ep_mask;
if (cc & rem_cc) {
ipc::log("force_push: k = %u, cc = %u, rem_cc = %u\n", k, cc, rem_cc);
cc = wrapper->elems()->disconnect_receiver(rem_cc); // disconnect all invalid readers
if (cc == 0) return false; // no reader
}
// just compare & exchange
if (el->rc_.compare_exchange_weak(
cur_rc, epoch_ | static_cast<rc_t>(cc), std::memory_order_release)) {
break;
}
ipc::yield(k);
}
std::forward<F>(f)(&(el->data_));
wt_.fetch_add(1, std::memory_order_release);
return true;
}
template <typename W, typename F, typename R, typename E>
bool pop(W* wrapper, circ::u2_t& cur, F&& f, R&& out, E* elems) {
if (cur == cursor()) return false; // acquire
auto* el = elems + circ::index_of(cur++);
std::forward<F>(f)(&(el->data_));
for (unsigned k = 0;;) {
auto cur_rc = el->rc_.load(std::memory_order_acquire);
if ((cur_rc & ep_mask) == 0) {
std::forward<R>(out)(true);
return true;
}
auto nxt_rc = cur_rc & ~static_cast<rc_t>(wrapper->connected_id());
if (el->rc_.compare_exchange_weak(cur_rc, nxt_rc, std::memory_order_release)) {
std::forward<R>(out)((nxt_rc & ep_mask) == 0);
return true;
}
ipc::yield(k);
}
}
};
template <>
struct prod_cons_impl<wr<relat::multi, relat::multi, trans::broadcast>> {
using rc_t = std::uint64_t;
using flag_t = std::uint64_t;
enum : rc_t {
rc_mask = 0x00000000ffffffffull,
ep_mask = 0x00ffffffffffffffull,
ep_incr = 0x0100000000000000ull,
ic_mask = 0xff000000ffffffffull,
ic_incr = 0x0000000100000000ull
};
template <std::size_t DataSize, std::size_t AlignSize>
struct elem_t {
std::aligned_storage_t<DataSize, AlignSize> data_ {};
std::atomic<rc_t > rc_ { 0 }; // read-counter
std::atomic<flag_t> f_ct_ { 0 }; // commit flag
};
alignas(cache_line_size) std::atomic<circ::u2_t> ct_; // commit index
alignas(cache_line_size) std::atomic<rc_t> epoch_ { 0 };
circ::u2_t cursor() const noexcept {
return ct_.load(std::memory_order_acquire);
}
constexpr static rc_t inc_rc(rc_t rc) noexcept {
return (rc & ic_mask) | ((rc + ic_incr) & ~ic_mask);
}
constexpr static rc_t inc_mask(rc_t rc) noexcept {
return inc_rc(rc) & ~rc_mask;
}
template <typename W, typename F, typename E>
bool push(W* wrapper, F&& f, E* elems) {
E* el;
circ::u2_t cur_ct;
rc_t epoch = epoch_.load(std::memory_order_acquire);
for (unsigned k = 0;;) {
circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed);
if (cc == 0) return false; // no reader
el = elems + circ::index_of(cur_ct = ct_.load(std::memory_order_relaxed));
// check all consumers have finished reading this element
auto cur_rc = el->rc_.load(std::memory_order_relaxed);
circ::cc_t rem_cc = cur_rc & rc_mask;
if ((cc & rem_cc) && ((cur_rc & ~ep_mask) == epoch)) {
return false; // has not finished yet
}
else if (!rem_cc) {
auto cur_fl = el->f_ct_.load(std::memory_order_acquire);
if ((cur_fl != cur_ct) && cur_fl) {
return false; // full
}
}
// consider rem_cc to be 0 here
if (el->rc_.compare_exchange_weak(
cur_rc, inc_mask(epoch | (cur_rc & ep_mask)) | static_cast<rc_t>(cc), std::memory_order_relaxed) &&
epoch_.compare_exchange_weak(epoch, epoch, std::memory_order_acq_rel)) {
break;
}
ipc::yield(k);
}
// only one thread/process would touch here at one time
ct_.store(cur_ct + 1, std::memory_order_release);
std::forward<F>(f)(&(el->data_));
// set flag & try update wt
el->f_ct_.store(~static_cast<flag_t>(cur_ct), std::memory_order_release);
return true;
}
template <typename W, typename F, typename E>
bool force_push(W* wrapper, F&& f, E* elems) {
E* el;
circ::u2_t cur_ct;
rc_t epoch = epoch_.fetch_add(ep_incr, std::memory_order_release) + ep_incr;
for (unsigned k = 0;;) {
circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed);
if (cc == 0) return false; // no reader
el = elems + circ::index_of(cur_ct = ct_.load(std::memory_order_relaxed));
// check all consumers have finished reading this element
auto cur_rc = el->rc_.load(std::memory_order_acquire);
circ::cc_t rem_cc = cur_rc & rc_mask;
if (cc & rem_cc) {
ipc::log("force_push: k = %u, cc = %u, rem_cc = %u\n", k, cc, rem_cc);
cc = wrapper->elems()->disconnect_receiver(rem_cc); // disconnect all invalid readers
if (cc == 0) return false; // no reader
}
// just compare & exchange
if (el->rc_.compare_exchange_weak(
cur_rc, inc_mask(epoch | (cur_rc & ep_mask)) | static_cast<rc_t>(cc), std::memory_order_relaxed)) {
if (epoch == epoch_.load(std::memory_order_acquire)) {
break;
}
else if (push(wrapper, std::forward<F>(f), elems)) {
return true;
}
epoch = epoch_.fetch_add(ep_incr, std::memory_order_release) + ep_incr;
}
ipc::yield(k);
}
// only one thread/process would touch here at one time
ct_.store(cur_ct + 1, std::memory_order_release);
std::forward<F>(f)(&(el->data_));
// set flag & try update wt
el->f_ct_.store(~static_cast<flag_t>(cur_ct), std::memory_order_release);
return true;
}
template <typename W, typename F, typename R, typename E, std::size_t N>
bool pop(W* wrapper, circ::u2_t& cur, F&& f, R&& out, E(& elems)[N]) {
auto* el = elems + circ::index_of(cur);
auto cur_fl = el->f_ct_.load(std::memory_order_acquire);
if (cur_fl != ~static_cast<flag_t>(cur)) {
return false; // empty
}
++cur;
std::forward<F>(f)(&(el->data_));
for (unsigned k = 0;;) {
auto cur_rc = el->rc_.load(std::memory_order_acquire);
if ((cur_rc & rc_mask) == 0) {
std::forward<R>(out)(true);
el->f_ct_.store(cur + N - 1, std::memory_order_release);
return true;
}
auto nxt_rc = inc_rc(cur_rc) & ~static_cast<rc_t>(wrapper->connected_id());
bool last_one = false;
if ((last_one = (nxt_rc & rc_mask) == 0)) {
el->f_ct_.store(cur + N - 1, std::memory_order_release);
}
if (el->rc_.compare_exchange_weak(cur_rc, nxt_rc, std::memory_order_release)) {
std::forward<R>(out)(last_one);
return true;
}
ipc::yield(k);
}
}
};
} // namespace ipc

View File

@ -0,0 +1,216 @@
#pragma once
#include <type_traits>
#include <new>
#include <utility> // [[since C++14]]: std::exchange
#include <algorithm>
#include <atomic>
#include <tuple>
#include <thread>
#include <chrono>
#include <string>
#include <cassert> // assert
#include "libipc/def.h"
#include "libipc/shm.h"
#include "libipc/rw_lock.h"
#include "libipc/utility/log.h"
#include "libipc/platform/detail.h"
#include "libipc/circ/elem_def.h"
namespace ipc {
namespace detail {
class queue_conn {
protected:
circ::cc_t connected_ = 0;
shm::handle elems_h_;
template <typename Elems>
Elems* open(char const * name) {
if (name == nullptr || name[0] == '\0') {
ipc::error("fail open waiter: name is empty!\n");
return nullptr;
}
if (!elems_h_.acquire(name, sizeof(Elems))) {
return nullptr;
}
auto elems = static_cast<Elems*>(elems_h_.get());
if (elems == nullptr) {
ipc::error("fail acquire elems: %s\n", name);
return nullptr;
}
elems->init();
return elems;
}
void close() {
elems_h_.release();
}
public:
queue_conn() = default;
queue_conn(const queue_conn&) = delete;
queue_conn& operator=(const queue_conn&) = delete;
bool connected() const noexcept {
return connected_ != 0;
}
circ::cc_t connected_id() const noexcept {
return connected_;
}
template <typename Elems>
auto connect(Elems* elems) noexcept
/*needs 'optional' here*/
-> std::tuple<bool, bool, decltype(std::declval<Elems>().cursor())> {
if (elems == nullptr) return {};
// if it's already connected, just return
if (connected()) return {connected(), false, 0};
connected_ = elems->connect_receiver();
return {connected(), true, elems->cursor()};
}
template <typename Elems>
bool disconnect(Elems* elems) noexcept {
if (elems == nullptr) return false;
// if it's already disconnected, just return false
if (!connected()) return false;
elems->disconnect_receiver(std::exchange(connected_, 0));
return true;
}
};
template <typename Elems>
class queue_base : public queue_conn {
using base_t = queue_conn;
public:
using elems_t = Elems;
using policy_t = typename elems_t::policy_t;
protected:
elems_t * elems_ = nullptr;
decltype(std::declval<elems_t>().cursor()) cursor_ = 0;
bool sender_flag_ = false;
public:
using base_t::base_t;
queue_base() = default;
explicit queue_base(char const * name)
: queue_base{} {
elems_ = open<elems_t>(name);
}
explicit queue_base(elems_t * elems) noexcept
: queue_base{} {
assert(elems != nullptr);
elems_ = elems;
}
/* not virtual */ ~queue_base() {
base_t::close();
}
elems_t * elems() noexcept { return elems_; }
elems_t const * elems() const noexcept { return elems_; }
bool ready_sending() noexcept {
if (elems_ == nullptr) return false;
return sender_flag_ || (sender_flag_ = elems_->connect_sender());
}
void shut_sending() noexcept {
if (elems_ == nullptr) return;
if (!sender_flag_) return;
elems_->disconnect_sender();
}
bool connect() noexcept {
auto tp = base_t::connect(elems_);
if (std::get<0>(tp) && std::get<1>(tp)) {
cursor_ = std::get<2>(tp);
return true;
}
return std::get<0>(tp);
}
bool disconnect() noexcept {
return base_t::disconnect(elems_);
}
std::size_t conn_count() const noexcept {
return (elems_ == nullptr) ? static_cast<std::size_t>(invalid_value) : elems_->conn_count();
}
bool valid() const noexcept {
return elems_ != nullptr;
}
bool empty() const noexcept {
return !valid() || (cursor_ == elems_->cursor());
}
template <typename T, typename F, typename... P>
bool push(F&& prep, P&&... params) {
if (elems_ == nullptr) return false;
return elems_->push(this, [&](void* p) {
if (prep(p)) ::new (p) T(std::forward<P>(params)...);
});
}
template <typename T, typename F, typename... P>
bool force_push(F&& prep, P&&... params) {
if (elems_ == nullptr) return false;
return elems_->force_push(this, [&](void* p) {
if (prep(p)) ::new (p) T(std::forward<P>(params)...);
});
}
template <typename T, typename F>
bool pop(T& item, F&& out) {
if (elems_ == nullptr) {
return false;
}
return elems_->pop(this, &(this->cursor_), [&item](void* p) {
::new (&item) T(std::move(*static_cast<T*>(p)));
}, std::forward<F>(out));
}
};
} // namespace detail
template <typename T, typename Policy>
class queue final : public detail::queue_base<typename Policy::template elems_t<sizeof(T), alignof(T)>> {
using base_t = detail::queue_base<typename Policy::template elems_t<sizeof(T), alignof(T)>>;
public:
using value_t = T;
using base_t::base_t;
template <typename... P>
bool push(P&&... params) {
return base_t::template push<T>(std::forward<P>(params)...);
}
template <typename... P>
bool force_push(P&&... params) {
return base_t::template force_push<T>(std::forward<P>(params)...);
}
bool pop(T& item) {
return base_t::pop(item, [](bool) {});
}
template <typename F>
bool pop(T& item, F&& out) {
return base_t::pop(item, std::forward<F>(out));
}
};
} // namespace ipc

View File

@ -0,0 +1,103 @@
#include <string>
#include <utility>
#include "libipc/shm.h"
#include "libipc/utility/pimpl.h"
#include "libipc/memory/resource.h"
namespace ipc {
namespace shm {
class handle::handle_ : public pimpl<handle_> {
public:
shm::id_t id_ = nullptr;
void* m_ = nullptr;
ipc::string n_;
std::size_t s_ = 0;
};
handle::handle()
: p_(p_->make()) {
}
handle::handle(char const * name, std::size_t size, unsigned mode)
: handle() {
acquire(name, size, mode);
}
handle::handle(handle&& rhs)
: handle() {
swap(rhs);
}
handle::~handle() {
release();
p_->clear();
}
void handle::swap(handle& rhs) {
std::swap(p_, rhs.p_);
}
handle& handle::operator=(handle rhs) {
swap(rhs);
return *this;
}
bool handle::valid() const noexcept {
return impl(p_)->m_ != nullptr;
}
std::size_t handle::size() const noexcept {
return impl(p_)->s_;
}
char const * handle::name() const noexcept {
return impl(p_)->n_.c_str();
}
std::int32_t handle::ref() const noexcept {
return shm::get_ref(impl(p_)->id_);
}
void handle::sub_ref() noexcept {
shm::sub_ref(impl(p_)->id_);
}
bool handle::acquire(char const * name, std::size_t size, unsigned mode) {
release();
impl(p_)->id_ = shm::acquire((impl(p_)->n_ = name).c_str(), size, mode);
impl(p_)->m_ = shm::get_mem(impl(p_)->id_, &(impl(p_)->s_));
return valid();
}
std::int32_t handle::release() {
if (impl(p_)->id_ == nullptr) return -1;
return shm::release(detach());
}
void* handle::get() const {
return impl(p_)->m_;
}
void handle::attach(id_t id) {
if (id == nullptr) return;
release();
impl(p_)->id_ = id;
impl(p_)->m_ = shm::get_mem(impl(p_)->id_, &(impl(p_)->s_));
}
id_t handle::detach() {
auto old = impl(p_)->id_;
impl(p_)->id_ = nullptr;
impl(p_)->m_ = nullptr;
impl(p_)->s_ = 0;
impl(p_)->n_.clear();
return old;
}
} // namespace shm
} // namespace ipc

View File

@ -0,0 +1,83 @@
#pragma once
#include <utility>
#include <string>
#include <mutex>
#include <atomic>
#include "libipc/def.h"
#include "libipc/mutex.h"
#include "libipc/condition.h"
#include "libipc/platform/detail.h"
namespace ipc {
namespace detail {
class waiter {
ipc::sync::condition cond_;
ipc::sync::mutex lock_;
std::atomic<bool> quit_ {false};
public:
static void init();
waiter() = default;
waiter(char const *name) {
open(name);
}
~waiter() {
close();
}
bool valid() const noexcept {
return cond_.valid() && lock_.valid();
}
bool open(char const *name) noexcept {
quit_.store(false, std::memory_order_relaxed);
if (!cond_.open((std::string{"_waiter_cond_"} + name).c_str())) {
return false;
}
if (!lock_.open((std::string{"_waiter_lock_"} + name).c_str())) {
cond_.close();
return false;
}
return valid();
}
void close() noexcept {
cond_.close();
lock_.close();
}
template <typename F>
bool wait_if(F &&pred, std::uint64_t tm = ipc::invalid_value) noexcept {
IPC_UNUSED_ std::lock_guard<ipc::sync::mutex> guard {lock_};
while ([this, &pred] {
return !quit_.load(std::memory_order_relaxed)
&& std::forward<F>(pred)();
}()) {
if (!cond_.wait(lock_, tm)) return false;
}
return true;
}
bool notify() noexcept {
std::lock_guard<ipc::sync::mutex>{lock_}; // barrier
return cond_.notify(lock_);
}
bool broadcast() noexcept {
std::lock_guard<ipc::sync::mutex>{lock_}; // barrier
return cond_.broadcast(lock_);
}
bool quit_waiting() {
quit_.store(true, std::memory_order_release);
return broadcast();
}
};
} // namespace detail
} // namespace ipc
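The predicate passed to wait_if() above is a keep-waiting condition: the caller blocks while the predicate stays true, until it is notified, the timeout expires, or quit_waiting() is called. Below is a minimal same-process sketch, not part of the diff; the waiter name, the 1000 timeout value (assumed to be milliseconds), and the header path are illustrative assumptions.
#include <atomic>
#include "libipc/waiter.h"   // assumed path of the waiter class shown above

std::atomic<bool> data_ready {false};

void consumer() {
    ipc::detail::waiter w {"demo-waiter"};   // opens "_waiter_cond_demo-waiter" / "_waiter_lock_demo-waiter"
    // Block while data_ready is still false; returns false if the wait times out.
    if (w.wait_if([&] { return !data_ready.load(std::memory_order_acquire); }, 1000)) {
        // consume the data here
    }
}

void producer() {
    ipc::detail::waiter w {"demo-waiter"};
    data_ready.store(true, std::memory_order_release);
    w.broadcast();                           // wake every thread parked in wait_if()
}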

View File

@ -0,0 +1,3 @@
https://github.com/mutouyun/cpp-ipc
A high-performance inter-process communication library using shared memory on Linux/Windows.
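As a quick orientation for the ipc.cpp excerpt earlier in this diff, the sketch below drives the chan_impl<Flag> entry points directly (connect/send/recv/disconnect/destroy); the library's public ipc::route and ipc::channel wrappers build on them. This is a hedged illustration only: the header path, channel name, timeout value (assumed milliseconds), and the disconnect-then-destroy teardown order are assumptions.
#include <cstdio>
#include "libipc/ipc.h"   // assumed header exposing chan_impl, handle_t, sender/receiver

using flag_t    = ipc::wr<ipc::relat::multi, ipc::relat::multi, ipc::trans::broadcast>;
using demo_chan = ipc::chan_impl<flag_t>;    // one of the instantiations listed in ipc.cpp

// Run in one process: broadcast a small payload to every connected receiver.
void producer() {
    ipc::handle_t h = demo_chan::inited();
    if (!demo_chan::connect(&h, "demo-channel", ipc::sender)) return;
    const char payload[] = "hello";
    demo_chan::send(h, payload, sizeof(payload), 1000);
    demo_chan::disconnect(h);
    demo_chan::destroy(h);
}

// Run in another process: wait up to about one second for the next message.
void consumer() {
    ipc::handle_t h = demo_chan::inited();
    if (!demo_chan::connect(&h, "demo-channel", ipc::receiver)) return;
    ipc::buff_t msg = demo_chan::recv(h, 1000);   // an empty buff_t means the wait timed out
    if (!msg.empty()) std::printf("received %zu bytes\n", msg.size());
    demo_chan::disconnect(h);
    demo_chan::destroy(h);
}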

File diff suppressed because it is too large

View File

@ -0,0 +1,316 @@
// jpgd.h - C++ class for JPEG decompression.
// Public domain, Rich Geldreich <richgel99@gmail.com>
#ifndef JPEG_DECODER_H
#define JPEG_DECODER_H
#include <stdlib.h>
#include <stdio.h>
#include <setjmp.h>
namespace jpgd
{
typedef unsigned char uint8;
typedef signed short int16;
typedef unsigned short uint16;
typedef unsigned int uint;
typedef signed int int32;
// Loads a JPEG image from a memory buffer or a file.
// req_comps can be 1 (grayscale), 3 (RGB), or 4 (RGBA).
// On return, width/height will be set to the image's dimensions, and actual_comps will be set to either 1 (grayscale) or 3 (RGB).
// Notes: For more control over where and how the source data is read, see the decompress_jpeg_image_from_stream() function below, or call the jpeg_decoder class directly.
// Requesting an 8 or 32bpp image is currently a little faster than 24bpp because the jpeg_decoder class itself currently always unpacks to either 8 or 32bpp.
// BEGIN EPIC MOD
//unsigned char *decompress_jpeg_image_from_memory(const unsigned char *pSrc_data, int src_data_size, int *width, int *height, int *actual_comps, int req_comps);
unsigned char *decompress_jpeg_image_from_memory(const unsigned char *pSrc_data, int src_data_size, int *width, int *height, int *actual_comps, int req_comps, int format);
// END EPIC MOD
unsigned char *decompress_jpeg_image_from_file(const char *pSrc_filename, int *width, int *height, int *actual_comps, int req_comps);
// Success/failure error codes.
enum jpgd_status
{
JPGD_SUCCESS = 0, JPGD_FAILED = -1, JPGD_DONE = 1,
JPGD_BAD_DHT_COUNTS = -256, JPGD_BAD_DHT_INDEX, JPGD_BAD_DHT_MARKER, JPGD_BAD_DQT_MARKER, JPGD_BAD_DQT_TABLE,
JPGD_BAD_PRECISION, JPGD_BAD_HEIGHT, JPGD_BAD_WIDTH, JPGD_TOO_MANY_COMPONENTS,
JPGD_BAD_SOF_LENGTH, JPGD_BAD_VARIABLE_MARKER, JPGD_BAD_DRI_LENGTH, JPGD_BAD_SOS_LENGTH,
JPGD_BAD_SOS_COMP_ID, JPGD_W_EXTRA_BYTES_BEFORE_MARKER, JPGD_NO_ARITHMITIC_SUPPORT, JPGD_UNEXPECTED_MARKER,
JPGD_NOT_JPEG, JPGD_UNSUPPORTED_MARKER, JPGD_BAD_DQT_LENGTH, JPGD_TOO_MANY_BLOCKS,
JPGD_UNDEFINED_QUANT_TABLE, JPGD_UNDEFINED_HUFF_TABLE, JPGD_NOT_SINGLE_SCAN, JPGD_UNSUPPORTED_COLORSPACE,
JPGD_UNSUPPORTED_SAMP_FACTORS, JPGD_DECODE_ERROR, JPGD_BAD_RESTART_MARKER, JPGD_ASSERTION_ERROR,
JPGD_BAD_SOS_SPECTRAL, JPGD_BAD_SOS_SUCCESSIVE, JPGD_STREAM_READ, JPGD_NOTENOUGHMEM
};
// Input stream interface.
// Derive from this class to read input data from sources other than files or memory. Set m_eof_flag to true when no more data is available.
// The decoder is rather greedy: it will keep on calling this method until its internal input buffer is full, or until the EOF flag is set.
// If the input stream contains data after the JPEG stream's EOI (end of image) marker, it will probably be pulled into the internal buffer.
// Call the get_total_bytes_read() method to determine the actual size of the JPEG stream after successful decoding.
class jpeg_decoder_stream
{
public:
jpeg_decoder_stream() { }
virtual ~jpeg_decoder_stream() { }
// The read() method is called when the internal input buffer is empty.
// Parameters:
// pBuf - input buffer
// max_bytes_to_read - maximum bytes that can be written to pBuf
// pEOF_flag - set this to true if at end of stream (no more bytes remaining)
// Returns -1 on error, otherwise return the number of bytes actually written to the buffer (which may be 0).
// Notes: This method will be called in a loop until you set *pEOF_flag to true or the internal buffer is full.
virtual int read(uint8 *pBuf, int max_bytes_to_read, bool *pEOF_flag) = 0;
};
// stdio FILE stream class.
class jpeg_decoder_file_stream : public jpeg_decoder_stream
{
jpeg_decoder_file_stream(const jpeg_decoder_file_stream &);
jpeg_decoder_file_stream &operator =(const jpeg_decoder_file_stream &);
FILE *m_pFile;
bool m_eof_flag, m_error_flag;
public:
jpeg_decoder_file_stream();
virtual ~jpeg_decoder_file_stream();
bool open(const char *Pfilename);
void close();
virtual int read(uint8 *pBuf, int max_bytes_to_read, bool *pEOF_flag);
};
// Memory stream class.
class jpeg_decoder_mem_stream : public jpeg_decoder_stream
{
const uint8 *m_pSrc_data;
uint m_ofs, m_size;
public:
jpeg_decoder_mem_stream() : m_pSrc_data(NULL), m_ofs(0), m_size(0) { }
jpeg_decoder_mem_stream(const uint8 *pSrc_data, uint size) : m_pSrc_data(pSrc_data), m_ofs(0), m_size(size) { }
virtual ~jpeg_decoder_mem_stream() { }
bool open(const uint8 *pSrc_data, uint size);
void close() { m_pSrc_data = NULL; m_ofs = 0; m_size = 0; }
virtual int read(uint8 *pBuf, int max_bytes_to_read, bool *pEOF_flag);
};
// Loads JPEG file from a jpeg_decoder_stream.
unsigned char *decompress_jpeg_image_from_stream(jpeg_decoder_stream *pStream, int *width, int *height, int *actual_comps, int req_comps);
enum
{
JPGD_IN_BUF_SIZE = 8192, JPGD_MAX_BLOCKS_PER_MCU = 10, JPGD_MAX_HUFF_TABLES = 8, JPGD_MAX_QUANT_TABLES = 4,
JPGD_MAX_COMPONENTS = 4, JPGD_MAX_COMPS_IN_SCAN = 4, JPGD_MAX_BLOCKS_PER_ROW = 8192, JPGD_MAX_HEIGHT = 16384, JPGD_MAX_WIDTH = 16384
};
typedef int16 jpgd_quant_t;
typedef int16 jpgd_block_t;
class jpeg_decoder
{
public:
// Call get_error_code() after constructing to determine if the stream is valid or not. You may call the get_width(), get_height(), etc.
// methods after the constructor is called. You may then either destruct the object, or begin decoding the image by calling begin_decoding(), then decode() on each scanline.
jpeg_decoder(jpeg_decoder_stream *pStream);
~jpeg_decoder();
// Call this method after constructing the object to begin decompression.
// If JPGD_SUCCESS is returned you may then call decode() on each scanline.
int begin_decoding();
// Returns the next scan line.
// For grayscale images, pScan_line will point to a buffer containing 8-bit pixels (get_bytes_per_pixel() will return 1).
// Otherwise, it will always point to a buffer containing 32-bit RGBA pixels (A will always be 255, and get_bytes_per_pixel() will return 4).
// Returns JPGD_SUCCESS if a scan line has been returned.
// Returns JPGD_DONE if all scan lines have been returned.
// Returns JPGD_FAILED if an error occurred. Call get_error_code() for more info.
int decode(const void** pScan_line, uint* pScan_line_len);
inline jpgd_status get_error_code() const { return m_error_code; }
inline int get_width() const { return m_image_x_size; }
inline int get_height() const { return m_image_y_size; }
inline int get_num_components() const { return m_comps_in_frame; }
inline int get_bytes_per_pixel() const { return m_dest_bytes_per_pixel; }
inline int get_bytes_per_scan_line() const { return m_image_x_size * get_bytes_per_pixel(); }
// Returns the total number of bytes actually consumed by the decoder (which should equal the actual size of the JPEG file).
inline int get_total_bytes_read() const { return m_total_bytes_read; }
private:
jpeg_decoder(const jpeg_decoder &);
jpeg_decoder &operator =(const jpeg_decoder &);
typedef void (*pDecode_block_func)(jpeg_decoder *, int, int, int);
struct huff_tables
{
bool ac_table;
uint look_up[256];
uint look_up2[256];
uint8 code_size[256];
uint tree[512];
};
struct coeff_buf
{
uint8 *pData;
int block_num_x, block_num_y;
int block_len_x, block_len_y;
int block_size;
};
struct mem_block
{
mem_block *m_pNext;
size_t m_used_count;
size_t m_size;
char m_data[1];
};
jmp_buf m_jmp_state;
mem_block *m_pMem_blocks;
int m_image_x_size;
int m_image_y_size;
jpeg_decoder_stream *m_pStream;
int m_progressive_flag;
uint8 m_huff_ac[JPGD_MAX_HUFF_TABLES];
uint8* m_huff_num[JPGD_MAX_HUFF_TABLES]; // pointer to number of Huffman codes per bit size
uint8* m_huff_val[JPGD_MAX_HUFF_TABLES]; // pointer to Huffman codes per bit size
jpgd_quant_t* m_quant[JPGD_MAX_QUANT_TABLES]; // pointer to quantization tables
int m_scan_type; // Gray, Yh1v1, Yh1v2, Yh2v1, Yh2v2 (CMYK111, CMYK4114 no longer supported)
int m_comps_in_frame; // # of components in frame
int m_comp_h_samp[JPGD_MAX_COMPONENTS]; // component's horizontal sampling factor
int m_comp_v_samp[JPGD_MAX_COMPONENTS]; // component's vertical sampling factor
int m_comp_quant[JPGD_MAX_COMPONENTS]; // component's quantization table selector
int m_comp_ident[JPGD_MAX_COMPONENTS]; // component's ID
int m_comp_h_blocks[JPGD_MAX_COMPONENTS];
int m_comp_v_blocks[JPGD_MAX_COMPONENTS];
int m_comps_in_scan; // # of components in scan
int m_comp_list[JPGD_MAX_COMPS_IN_SCAN]; // components in this scan
int m_comp_dc_tab[JPGD_MAX_COMPONENTS]; // component's DC Huffman coding table selector
int m_comp_ac_tab[JPGD_MAX_COMPONENTS]; // component's AC Huffman coding table selector
int m_spectral_start; // spectral selection start
int m_spectral_end; // spectral selection end
int m_successive_low; // successive approximation low
int m_successive_high; // successive approximation high
int m_max_mcu_x_size; // MCU's max. X size in pixels
int m_max_mcu_y_size; // MCU's max. Y size in pixels
int m_blocks_per_mcu;
int m_max_blocks_per_row;
int m_mcus_per_row, m_mcus_per_col;
int m_mcu_org[JPGD_MAX_BLOCKS_PER_MCU];
int m_total_lines_left; // total # lines left in image
int m_mcu_lines_left; // total # lines left in this MCU
int m_real_dest_bytes_per_scan_line;
int m_dest_bytes_per_scan_line; // rounded up
int m_dest_bytes_per_pixel; // 4 (RGB) or 1 (Y)
huff_tables* m_pHuff_tabs[JPGD_MAX_HUFF_TABLES];
coeff_buf* m_dc_coeffs[JPGD_MAX_COMPONENTS];
coeff_buf* m_ac_coeffs[JPGD_MAX_COMPONENTS];
int m_eob_run;
int m_block_y_mcu[JPGD_MAX_COMPONENTS];
uint8* m_pIn_buf_ofs;
int m_in_buf_left;
int m_tem_flag;
bool m_eof_flag;
uint8 m_in_buf_pad_start[128];
uint8 m_in_buf[JPGD_IN_BUF_SIZE + 128];
uint8 m_in_buf_pad_end[128];
int m_bits_left;
uint m_bit_buf;
int m_restart_interval;
int m_restarts_left;
int m_next_restart_num;
int m_max_mcus_per_row;
int m_max_blocks_per_mcu;
int m_expanded_blocks_per_mcu;
int m_expanded_blocks_per_row;
int m_expanded_blocks_per_component;
bool m_freq_domain_chroma_upsample;
int m_max_mcus_per_col;
uint m_last_dc_val[JPGD_MAX_COMPONENTS];
jpgd_block_t* m_pMCU_coefficients;
int m_mcu_block_max_zag[JPGD_MAX_BLOCKS_PER_MCU];
uint8* m_pSample_buf;
int m_crr[256];
int m_cbb[256];
int m_crg[256];
int m_cbg[256];
uint8* m_pScan_line_0;
uint8* m_pScan_line_1;
jpgd_status m_error_code;
bool m_ready_flag;
int m_total_bytes_read;
void free_all_blocks();
// BEGIN EPIC MOD
UE_NORETURN void stop_decoding(jpgd_status status);
// END EPIC MOD
void *alloc(size_t n, bool zero = false);
void word_clear(void *p, uint16 c, uint n);
void prep_in_buffer();
void read_dht_marker();
void read_dqt_marker();
void read_sof_marker();
void skip_variable_marker();
void read_dri_marker();
void read_sos_marker();
int next_marker();
int process_markers();
void locate_soi_marker();
void locate_sof_marker();
int locate_sos_marker();
void init(jpeg_decoder_stream * pStream);
void create_look_ups();
void fix_in_buffer();
void transform_mcu(int mcu_row);
void transform_mcu_expand(int mcu_row);
coeff_buf* coeff_buf_open(int block_num_x, int block_num_y, int block_len_x, int block_len_y);
inline jpgd_block_t *coeff_buf_getp(coeff_buf *cb, int block_x, int block_y);
void load_next_row();
void decode_next_row();
void make_huff_table(int index, huff_tables *pH);
void check_quant_tables();
void check_huff_tables();
void calc_mcu_block_order();
int init_scan();
void init_frame();
void process_restart();
void decode_scan(pDecode_block_func decode_block_func);
void init_progressive();
void init_sequential();
void decode_start();
void decode_init(jpeg_decoder_stream * pStream);
void H2V2Convert();
void H2V1Convert();
void H1V2Convert();
void H1V1Convert();
void gray_convert();
void expanded_convert();
void find_eoi();
inline uint get_char();
inline uint get_char(bool *pPadding_flag);
inline void stuff_char(uint8 q);
inline uint8 get_octet();
inline uint get_bits(int num_bits);
inline uint get_bits_no_markers(int numbits);
inline int huff_decode(huff_tables *pH);
inline int huff_decode(huff_tables *pH, int& extrabits);
static inline uint8 clamp(int i);
static void decode_block_dc_first(jpeg_decoder *pD, int component_id, int block_x, int block_y);
static void decode_block_dc_refine(jpeg_decoder *pD, int component_id, int block_x, int block_y);
static void decode_block_ac_first(jpeg_decoder *pD, int component_id, int block_x, int block_y);
static void decode_block_ac_refine(jpeg_decoder *pD, int component_id, int block_x, int block_y);
};
} // namespace jpgd
#endif // JPEG_DECODER_H
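The decoder above is most easily driven through the helper declared earlier, decompress_jpeg_image_from_file(). A hedged usage sketch follows, not part of the header: the input file name is an assumption, and releasing the returned buffer with free() assumes the upstream jpgd behaviour of allocating it with malloc().
#include <cstdio>
#include <cstdlib>
#include "jpgd.h"

int main() {
    int width = 0, height = 0, actual_comps = 0;
    // Request 3 components (RGB); 1 (grayscale) or 4 (RGBA) are also allowed per the notes above.
    unsigned char *pixels = jpgd::decompress_jpeg_image_from_file(
        "input.jpg", &width, &height, &actual_comps, 3);
    if (pixels == nullptr) {
        std::fprintf(stderr, "JPEG decode failed\n");
        return 1;
    }
    std::printf("decoded %dx%d image, %d source components\n", width, height, actual_comps);
    std::free(pixels);   // assumption: the buffer is malloc()-allocated inside jpgd
    return 0;
}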

File diff suppressed because it is too large

View File

@ -0,0 +1,172 @@
// jpge.h - C++ class for JPEG compression.
// Public domain, Rich Geldreich <richgel99@gmail.com>
// Alex Evans: Added RGBA support, linear memory allocator.
#ifndef JPEG_ENCODER_H
#define JPEG_ENCODER_H
#include <stdint.h>
namespace jpge
{
typedef unsigned char uint8;
typedef signed short int16;
typedef signed int int32;
typedef unsigned short uint16;
typedef unsigned int uint32;
typedef unsigned int uint;
// JPEG chroma subsampling factors. Y_ONLY (grayscale images) and H2V2 (color images) are the most common.
enum subsampling_t { Y_ONLY = 0, H1V1 = 1, H2V1 = 2, H2V2 = 3 };
// JPEG compression parameters structure.
struct params
{
inline params() : m_quality(85), m_subsampling(H2V2), m_no_chroma_discrim_flag(false), m_two_pass_flag(false) { }
inline bool check_valid() const
{
if ((m_quality < 1) || (m_quality > 100)) return false;
if ((uint)m_subsampling > (uint)H2V2) return false;
return true;
}
// Quality: 1-100, higher is better. Typical values are around 50-95.
int m_quality;
// m_subsampling:
// 0 = Y (grayscale) only
// 1 = YCbCr, no subsampling (H1V1, YCbCr 1x1x1, 3 blocks per MCU)
// 2 = YCbCr, H2V1 subsampling (YCbCr 2x1x1, 4 blocks per MCU)
// 3 = YCbCr, H2V2 subsampling (YCbCr 4x1x1, 6 blocks per MCU-- very common)
subsampling_t m_subsampling;
// Disables CbCr discrimination - only intended for testing.
// If true, the Y quantization table is also used for the CbCr channels.
bool m_no_chroma_discrim_flag;
bool m_two_pass_flag;
};
// Writes JPEG image to a file.
// num_channels must be 1 (Y) or 3 (RGB), image pitch must be width*num_channels.
bool compress_image_to_jpeg_file(const char *pFilename, int64_t width, int64_t height, int64_t num_channels, const uint8 *pImage_data, const params &comp_params = params());
// Writes JPEG image to memory buffer.
// On entry, buf_size is the size of the output buffer pointed at by pBuf, which should be at least ~1024 bytes.
// If return value is true, buf_size will be set to the size of the compressed data.
bool compress_image_to_jpeg_file_in_memory(void *pBuf, int64_t &buf_size, int64_t width, int64_t height, int64_t num_channels, const uint8 *pImage_data, const params &comp_params = params());
// Output stream abstract class - used by the jpeg_encoder class to write to the output stream.
// put_buf() is generally called with len==JPGE_OUT_BUF_SIZE bytes, but for headers it'll be called with smaller amounts.
class output_stream
{
public:
virtual ~output_stream() { };
virtual bool put_buf(const void* Pbuf, int64_t len) = 0;
template<class T> inline bool put_obj(const T& obj) { return put_buf(&obj, sizeof(T)); }
};
// Lower level jpeg_encoder class - useful if more control is needed than the above helper functions.
class jpeg_encoder
{
public:
jpeg_encoder();
~jpeg_encoder();
// Initializes the compressor.
// pStream: The stream object to use for writing compressed data.
// params - Compression parameters structure, defined above.
// width, height - Image dimensions.
// channels - May be 1, or 3. 1 indicates grayscale, 3 indicates RGB source data.
// Returns false on out of memory or if a stream write fails.
bool init(output_stream *pStream, int64_t width, int64_t height, int64_t src_channels, const params &comp_params = params());
const params &get_params() const { return m_params; }
// Deinitializes the compressor, freeing any allocated memory. May be called at any time.
void deinit();
uint get_total_passes() const { return m_params.m_two_pass_flag ? 2 : 1; }
inline uint get_cur_pass() { return m_pass_num; }
// Call this method with each source scanline.
// width * src_channels bytes per scanline is expected (RGB or Y format).
// You must call with NULL after all scanlines are processed to finish compression.
// Returns false on out of memory or if a stream write fails.
bool process_scanline(const void* pScanline);
private:
jpeg_encoder(const jpeg_encoder &);
jpeg_encoder &operator =(const jpeg_encoder &);
typedef int32 sample_array_t;
output_stream *m_pStream;
params m_params;
uint8 m_num_components;
uint8 m_comp_h_samp[3], m_comp_v_samp[3];
int m_image_x, m_image_y, m_image_bpp, m_image_bpl;
int m_image_x_mcu, m_image_y_mcu;
int m_image_bpl_xlt, m_image_bpl_mcu;
int m_mcus_per_row;
int m_mcu_x, m_mcu_y;
uint8 *m_mcu_lines[16];
uint8 m_mcu_y_ofs;
sample_array_t m_sample_array[64];
int16 m_coefficient_array[64];
int32 m_quantization_tables[2][64];
uint m_huff_codes[4][256];
uint8 m_huff_code_sizes[4][256];
uint8 m_huff_bits[4][17];
uint8 m_huff_val[4][256];
uint32 m_huff_count[4][256];
int m_last_dc_val[3];
enum { JPGE_OUT_BUF_SIZE = 2048 };
uint8 m_out_buf[JPGE_OUT_BUF_SIZE];
uint8 *m_pOut_buf;
uint m_out_buf_left;
uint32 m_bit_buffer;
uint m_bits_in;
uint8 m_pass_num;
bool m_all_stream_writes_succeeded;
void optimize_huffman_table(int table_num, int table_len);
void emit_byte(uint8 i);
void emit_word(uint i);
void emit_marker(int marker);
void emit_jfif_app0();
void emit_dqt();
void emit_sof();
void emit_dht(uint8 *bits, uint8 *val, int index, bool ac_flag);
void emit_dhts();
void emit_sos();
void emit_markers();
void compute_huffman_table(uint *codes, uint8 *code_sizes, uint8 *bits, uint8 *val);
void compute_quant_table(int32 *dst, int16 *src);
void adjust_quant_table(int32 *dst, int32 *src);
void first_pass_init();
bool second_pass_init();
bool jpg_open(int p_x_res, int p_y_res, int src_channels);
void load_block_8_8_grey(int x);
void load_block_8_8(int x, int y, int c);
void load_block_16_8(int x, int c);
void load_block_16_8_8(int x, int c);
void load_quantized_coefficients(int component_num);
void flush_output_buffer();
void put_bits(uint bits, uint len);
void code_coefficients_pass_one(int component_num);
void code_coefficients_pass_two(int component_num);
void code_block(int component_num);
void process_mcu_row();
bool terminate_pass_one();
bool terminate_pass_two();
bool process_end_of_image();
void load_mcu(const void* src);
void clear();
void init();
};
} // namespace jpge
#endif // JPEG_ENCODER_H

View File

@ -0,0 +1,3 @@
jpge.h - C++ class for JPEG compression.
Public domain, Rich Geldreich <richgel99@gmail.com>
Alex Evans: Added RGBA support, linear memory allocator.
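A hedged usage sketch for the encoder declared in jpge.h above, using the compress_image_to_jpeg_file() helper and its documented constraint that the image pitch equals width*num_channels; the output file name and the solid-red test image are illustrative assumptions.
#include <cstddef>
#include <vector>
#include "jpge.h"

int main() {
    const int64_t width = 64, height = 64, channels = 3;            // RGB source data
    std::vector<jpge::uint8> image(static_cast<std::size_t>(width * height * channels), 0);
    for (std::size_t i = 0; i < image.size(); i += channels) {
        image[i] = 255;                                             // fill the red channel only
    }
    jpge::params p;
    p.m_quality = 90;                                               // 1-100, higher is better
    const bool ok = jpge::compress_image_to_jpeg_file(
        "out.jpg", width, height, channels, image.data(), p);
    return ok ? 0 : 1;
}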

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@ -0,0 +1,433 @@
#pragma once
#include <atomic>
#include <utility>
#include <cstring>
#include <type_traits>
#include <cstdint>
#include "libipc/def.h"
#include "libipc/platform/detail.h"
#include "libipc/circ/elem_def.h"
#include "libipc/utility/log.h"
#include "libipc/utility/utility.h"
namespace ipc {
////////////////////////////////////////////////////////////////
/// producer-consumer implementation
////////////////////////////////////////////////////////////////
template <typename Flag>
struct prod_cons_impl;
template <>
struct prod_cons_impl<wr<relat::single, relat::single, trans::unicast>> {
template <std::size_t DataSize, std::size_t AlignSize>
struct elem_t {
std::aligned_storage_t<DataSize, AlignSize> data_ {};
};
alignas(cache_line_size) std::atomic<circ::u2_t> rd_; // read index
alignas(cache_line_size) std::atomic<circ::u2_t> wt_; // write index
constexpr circ::u2_t cursor() const noexcept {
return 0;
}
template <typename W, typename F, typename E>
bool push(W* /*wrapper*/, F&& f, E* elems) {
auto cur_wt = circ::index_of(wt_.load(std::memory_order_relaxed));
if (cur_wt == circ::index_of(rd_.load(std::memory_order_acquire) - 1)) {
return false; // full
}
std::forward<F>(f)(&(elems[cur_wt].data_));
wt_.fetch_add(1, std::memory_order_release);
return true;
}
/**
* In single-single-unicast, 'force_push' means there is no reader, or the only reader is dead.
* So we can simply disconnect all receiver connections and return false.
*/
template <typename W, typename F, typename E>
bool force_push(W* wrapper, F&&, E*) {
wrapper->elems()->disconnect_receiver(~static_cast<circ::cc_t>(0u));
return false;
}
template <typename W, typename F, typename R, typename E>
bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E* elems) {
auto cur_rd = circ::index_of(rd_.load(std::memory_order_relaxed));
if (cur_rd == circ::index_of(wt_.load(std::memory_order_acquire))) {
return false; // empty
}
std::forward<F>(f)(&(elems[cur_rd].data_));
std::forward<R>(out)(true);
rd_.fetch_add(1, std::memory_order_release);
return true;
}
};
template <>
struct prod_cons_impl<wr<relat::single, relat::multi , trans::unicast>>
: prod_cons_impl<wr<relat::single, relat::single, trans::unicast>> {
template <typename W, typename F, typename E>
bool force_push(W* wrapper, F&&, E*) {
wrapper->elems()->disconnect_receiver(1);
return false;
}
template <typename W, typename F, typename R,
template <std::size_t, std::size_t> class E, std::size_t DS, std::size_t AS>
bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E<DS, AS>* elems) {
byte_t buff[DS];
for (unsigned k = 0;;) {
auto cur_rd = rd_.load(std::memory_order_relaxed);
if (circ::index_of(cur_rd) ==
circ::index_of(wt_.load(std::memory_order_acquire))) {
return false; // empty
}
std::memcpy(buff, &(elems[circ::index_of(cur_rd)].data_), sizeof(buff));
if (rd_.compare_exchange_weak(cur_rd, cur_rd + 1, std::memory_order_release)) {
std::forward<F>(f)(buff);
std::forward<R>(out)(true);
return true;
}
ipc::yield(k);
}
}
};
template <>
struct prod_cons_impl<wr<relat::multi , relat::multi, trans::unicast>>
: prod_cons_impl<wr<relat::single, relat::multi, trans::unicast>> {
using flag_t = std::uint64_t;
template <std::size_t DataSize, std::size_t AlignSize>
struct elem_t {
std::aligned_storage_t<DataSize, AlignSize> data_ {};
std::atomic<flag_t> f_ct_ { 0 }; // commit flag
};
alignas(cache_line_size) std::atomic<circ::u2_t> ct_; // commit index
template <typename W, typename F, typename E>
bool push(W* /*wrapper*/, F&& f, E* elems) {
circ::u2_t cur_ct, nxt_ct;
for (unsigned k = 0;;) {
cur_ct = ct_.load(std::memory_order_relaxed);
if (circ::index_of(nxt_ct = cur_ct + 1) ==
circ::index_of(rd_.load(std::memory_order_acquire))) {
return false; // full
}
if (ct_.compare_exchange_weak(cur_ct, nxt_ct, std::memory_order_acq_rel)) {
break;
}
ipc::yield(k);
}
auto* el = elems + circ::index_of(cur_ct);
std::forward<F>(f)(&(el->data_));
// set flag & try update wt
el->f_ct_.store(~static_cast<flag_t>(cur_ct), std::memory_order_release);
while (1) {
auto cac_ct = el->f_ct_.load(std::memory_order_acquire);
if (cur_ct != wt_.load(std::memory_order_relaxed)) {
return true;
}
if ((~cac_ct) != cur_ct) {
return true;
}
if (!el->f_ct_.compare_exchange_strong(cac_ct, 0, std::memory_order_relaxed)) {
return true;
}
wt_.store(nxt_ct, std::memory_order_release);
cur_ct = nxt_ct;
nxt_ct = cur_ct + 1;
el = elems + circ::index_of(cur_ct);
}
return true;
}
template <typename W, typename F, typename E>
bool force_push(W* wrapper, F&&, E*) {
wrapper->elems()->disconnect_receiver(1);
return false;
}
template <typename W, typename F, typename R,
template <std::size_t, std::size_t> class E, std::size_t DS, std::size_t AS>
bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E<DS, AS>* elems) {
byte_t buff[DS];
for (unsigned k = 0;;) {
auto cur_rd = rd_.load(std::memory_order_relaxed);
auto cur_wt = wt_.load(std::memory_order_acquire);
auto id_rd = circ::index_of(cur_rd);
auto id_wt = circ::index_of(cur_wt);
if (id_rd == id_wt) {
auto* el = elems + id_wt;
auto cac_ct = el->f_ct_.load(std::memory_order_acquire);
if ((~cac_ct) != cur_wt) {
return false; // empty
}
if (el->f_ct_.compare_exchange_weak(cac_ct, 0, std::memory_order_relaxed)) {
wt_.store(cur_wt + 1, std::memory_order_release);
}
k = 0;
}
else {
std::memcpy(buff, &(elems[circ::index_of(cur_rd)].data_), sizeof(buff));
if (rd_.compare_exchange_weak(cur_rd, cur_rd + 1, std::memory_order_release)) {
std::forward<F>(f)(buff);
std::forward<R>(out)(true);
return true;
}
ipc::yield(k);
}
}
}
};
template <>
struct prod_cons_impl<wr<relat::single, relat::multi, trans::broadcast>> {
using rc_t = std::uint64_t;
enum : rc_t {
ep_mask = 0x00000000ffffffffull,
ep_incr = 0x0000000100000000ull
};
template <std::size_t DataSize, std::size_t AlignSize>
struct elem_t {
std::aligned_storage_t<DataSize, AlignSize> data_ {};
std::atomic<rc_t> rc_ { 0 }; // read-counter
};
alignas(cache_line_size) std::atomic<circ::u2_t> wt_; // write index
alignas(cache_line_size) rc_t epoch_ { 0 }; // only one writer
circ::u2_t cursor() const noexcept {
return wt_.load(std::memory_order_acquire);
}
template <typename W, typename F, typename E>
bool push(W* wrapper, F&& f, E* elems) {
E* el;
for (unsigned k = 0;;) {
circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed);
if (cc == 0) return false; // no reader
el = elems + circ::index_of(wt_.load(std::memory_order_relaxed));
// check all consumers have finished reading this element
auto cur_rc = el->rc_.load(std::memory_order_acquire);
circ::cc_t rem_cc = cur_rc & ep_mask;
if ((cc & rem_cc) && ((cur_rc & ~ep_mask) == epoch_)) {
return false; // has not finished yet
}
// consider rem_cc to be 0 here
if (el->rc_.compare_exchange_weak(
cur_rc, epoch_ | static_cast<rc_t>(cc), std::memory_order_release)) {
break;
}
ipc::yield(k);
}
std::forward<F>(f)(&(el->data_));
wt_.fetch_add(1, std::memory_order_release);
return true;
}
template <typename W, typename F, typename E>
bool force_push(W* wrapper, F&& f, E* elems) {
E* el;
epoch_ += ep_incr;
for (unsigned k = 0;;) {
circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed);
if (cc == 0) return false; // no reader
el = elems + circ::index_of(wt_.load(std::memory_order_relaxed));
// check all consumers have finished reading this element
auto cur_rc = el->rc_.load(std::memory_order_acquire);
circ::cc_t rem_cc = cur_rc & ep_mask;
if (cc & rem_cc) {
ipc::log("force_push: k = %u, cc = %u, rem_cc = %u\n", k, cc, rem_cc);
cc = wrapper->elems()->disconnect_receiver(rem_cc); // disconnect all invalid readers
if (cc == 0) return false; // no reader
}
// just compare & exchange
if (el->rc_.compare_exchange_weak(
cur_rc, epoch_ | static_cast<rc_t>(cc), std::memory_order_release)) {
break;
}
ipc::yield(k);
}
std::forward<F>(f)(&(el->data_));
wt_.fetch_add(1, std::memory_order_release);
return true;
}
template <typename W, typename F, typename R, typename E>
bool pop(W* wrapper, circ::u2_t& cur, F&& f, R&& out, E* elems) {
if (cur == cursor()) return false; // acquire
auto* el = elems + circ::index_of(cur++);
std::forward<F>(f)(&(el->data_));
for (unsigned k = 0;;) {
auto cur_rc = el->rc_.load(std::memory_order_acquire);
if ((cur_rc & ep_mask) == 0) {
std::forward<R>(out)(true);
return true;
}
auto nxt_rc = cur_rc & ~static_cast<rc_t>(wrapper->connected_id());
if (el->rc_.compare_exchange_weak(cur_rc, nxt_rc, std::memory_order_release)) {
std::forward<R>(out)((nxt_rc & ep_mask) == 0);
return true;
}
ipc::yield(k);
}
}
};
template <>
struct prod_cons_impl<wr<relat::multi, relat::multi, trans::broadcast>> {
using rc_t = std::uint64_t;
using flag_t = std::uint64_t;
enum : rc_t {
rc_mask = 0x00000000ffffffffull,
ep_mask = 0x00ffffffffffffffull,
ep_incr = 0x0100000000000000ull,
ic_mask = 0xff000000ffffffffull,
ic_incr = 0x0000000100000000ull
};
template <std::size_t DataSize, std::size_t AlignSize>
struct elem_t {
std::aligned_storage_t<DataSize, AlignSize> data_ {};
std::atomic<rc_t > rc_ { 0 }; // read-counter
std::atomic<flag_t> f_ct_ { 0 }; // commit flag
};
alignas(cache_line_size) std::atomic<circ::u2_t> ct_; // commit index
alignas(cache_line_size) std::atomic<rc_t> epoch_ { 0 };
circ::u2_t cursor() const noexcept {
return ct_.load(std::memory_order_acquire);
}
constexpr static rc_t inc_rc(rc_t rc) noexcept {
return (rc & ic_mask) | ((rc + ic_incr) & ~ic_mask);
}
constexpr static rc_t inc_mask(rc_t rc) noexcept {
return inc_rc(rc) & ~rc_mask;
}
template <typename W, typename F, typename E>
bool push(W* wrapper, F&& f, E* elems) {
E* el;
circ::u2_t cur_ct;
rc_t epoch = epoch_.load(std::memory_order_acquire);
for (unsigned k = 0;;) {
circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed);
if (cc == 0) return false; // no reader
el = elems + circ::index_of(cur_ct = ct_.load(std::memory_order_relaxed));
// check all consumers have finished reading this element
auto cur_rc = el->rc_.load(std::memory_order_relaxed);
circ::cc_t rem_cc = cur_rc & rc_mask;
if ((cc & rem_cc) && ((cur_rc & ~ep_mask) == epoch)) {
return false; // has not finished yet
}
else if (!rem_cc) {
auto cur_fl = el->f_ct_.load(std::memory_order_acquire);
if ((cur_fl != cur_ct) && cur_fl) {
return false; // full
}
}
// consider rem_cc to be 0 here
if (el->rc_.compare_exchange_weak(
cur_rc, inc_mask(epoch | (cur_rc & ep_mask)) | static_cast<rc_t>(cc), std::memory_order_relaxed) &&
epoch_.compare_exchange_weak(epoch, epoch, std::memory_order_acq_rel)) {
break;
}
ipc::yield(k);
}
// only one thread/process would touch here at one time
ct_.store(cur_ct + 1, std::memory_order_release);
std::forward<F>(f)(&(el->data_));
// set flag & try update wt
el->f_ct_.store(~static_cast<flag_t>(cur_ct), std::memory_order_release);
return true;
}
template <typename W, typename F, typename E>
bool force_push(W* wrapper, F&& f, E* elems) {
E* el;
circ::u2_t cur_ct;
rc_t epoch = epoch_.fetch_add(ep_incr, std::memory_order_release) + ep_incr;
for (unsigned k = 0;;) {
circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed);
if (cc == 0) return false; // no reader
el = elems + circ::index_of(cur_ct = ct_.load(std::memory_order_relaxed));
// check all consumers have finished reading this element
auto cur_rc = el->rc_.load(std::memory_order_acquire);
circ::cc_t rem_cc = cur_rc & rc_mask;
if (cc & rem_cc) {
ipc::log("force_push: k = %u, cc = %u, rem_cc = %u\n", k, cc, rem_cc);
cc = wrapper->elems()->disconnect_receiver(rem_cc); // disconnect all invalid readers
if (cc == 0) return false; // no reader
}
// just compare & exchange
if (el->rc_.compare_exchange_weak(
cur_rc, inc_mask(epoch | (cur_rc & ep_mask)) | static_cast<rc_t>(cc), std::memory_order_relaxed)) {
if (epoch == epoch_.load(std::memory_order_acquire)) {
break;
}
else if (push(wrapper, std::forward<F>(f), elems)) {
return true;
}
epoch = epoch_.fetch_add(ep_incr, std::memory_order_release) + ep_incr;
}
ipc::yield(k);
}
// only one thread/process would touch here at one time
ct_.store(cur_ct + 1, std::memory_order_release);
std::forward<F>(f)(&(el->data_));
// set flag & try update wt
el->f_ct_.store(~static_cast<flag_t>(cur_ct), std::memory_order_release);
return true;
}
template <typename W, typename F, typename R, typename E, std::size_t N>
bool pop(W* wrapper, circ::u2_t& cur, F&& f, R&& out, E(& elems)[N]) {
auto* el = elems + circ::index_of(cur);
auto cur_fl = el->f_ct_.load(std::memory_order_acquire);
if (cur_fl != ~static_cast<flag_t>(cur)) {
return false; // empty
}
++cur;
std::forward<F>(f)(&(el->data_));
for (unsigned k = 0;;) {
auto cur_rc = el->rc_.load(std::memory_order_acquire);
if ((cur_rc & rc_mask) == 0) {
std::forward<R>(out)(true);
el->f_ct_.store(cur + N - 1, std::memory_order_release);
return true;
}
auto nxt_rc = inc_rc(cur_rc) & ~static_cast<rc_t>(wrapper->connected_id());
bool last_one = false;
if ((last_one = (nxt_rc & rc_mask) == 0)) {
el->f_ct_.store(cur + N - 1, std::memory_order_release);
}
if (el->rc_.compare_exchange_weak(cur_rc, nxt_rc, std::memory_order_release)) {
std::forward<R>(out)(last_one);
return true;
}
ipc::yield(k);
}
}
};
} // namespace ipc


@ -0,0 +1,58 @@
The goal of reducing sequential computation also forms the foundation of the Extended Neural GPU \citep{extendedngpu}, ByteNet \citep{NalBytenet2017} and ConvS2S \citep{JonasFaceNet2017}, all of which use convolutional neural networks as a basic building block, computing hidden representations in parallel for all input and output positions. In these models, the number of operations required to relate signals from two arbitrary input or output positions grows with the distance between positions, linearly for ConvS2S and logarithmically for ByteNet. This makes it more difficult to learn dependencies between distant positions \citep{hochreiter2001gradient}. In the Transformer, this is reduced to a constant number of operations, albeit at the cost of reduced effective resolution due to averaging attention-weighted positions, an effect we counteract with Multi-Head Attention as described in section~\ref{sec:attention}.
Self-attention, sometimes called intra-attention, is an attention mechanism relating different positions of a single sequence in order to compute a representation of the sequence. Self-attention has been used successfully in a variety of tasks including reading comprehension, abstractive summarization, textual entailment and learning task-independent sentence representations \citep{cheng2016long, decomposableAttnModel, paulus2017deep, lin2017structured}.
End-to-end memory networks are based on a recurrent attention mechanism instead of sequence-aligned recurrence and have been shown to perform well on simple-language question answering and language modeling tasks \citep{sukhbaatar2015}.
To the best of our knowledge, however, the Transformer is the first transduction model relying entirely on self-attention to compute representations of its input and output without using sequence-aligned RNNs or convolution.
In the following sections, we will describe the Transformer, motivate self-attention and discuss its advantages over models such as \citep{neural_gpu, NalBytenet2017} and \citep{JonasFaceNet2017}.
%\citep{JonasFaceNet2017} report new SOTA on machine translation for English-to-German (EnDe), Enlish-to-French (EnFr) and English-to-Romanian language pairs.
%For example,! in MT, we must draw information from both input and previous output words to translate an output word accurately. An attention layer \citep{bahdanau2014neural} can connect a very large number of positions at low computation cost, making it an essential ingredient in competitive recurrent models for machine translation.
%A natural question to ask then is, "Could we replace recurrence with attention?". \marginpar{Don't know if it's the most natural question to ask given the previous statements. Also, need to say that the complexity table summarizes these statements} Such a model would be blessed with the computational efficiency of attention and the power of cross-positional communication. In this work, show that pure attention models work remarkably well for MT, achieving new SOTA results on EnDe and EnFr, and can be trained in under $2$ days on xyz architecture.
%After the seminal models introduced in \citep{sutskever14, bahdanau2014neural, cho2014learning}, recurrent models have become the dominant solution for both sequence modeling and sequence-to-sequence transduction. Many efforts such as \citep{wu2016google,luong2015effective,jozefowicz2016exploring} have pushed the boundaries of machine translation (MT) and language modeling with recurrent endoder-decoder and recurrent language models. Recent effort \citep{shazeer2017outrageously} has successfully combined the power of conditional computation with sequence models to train very large models for MT, pushing SOTA at lower computational cost.
%Recurrent models compute a vector of hidden states $h_t$, for each time step $t$ of computation. $h_t$ is a function of both the input at time $t$ and the previous hidden state $h_t$. This dependence on the previous hidden state precludes processing all timesteps at once, instead requiring long sequences of sequential operations. In practice, this results in greatly reduced computational efficiency, as on modern computing hardware, a single operation on a large batch is much faster than a large number of operations on small batches. The problem gets worse at longer sequence lengths. Although sequential computation is not a severe bottleneck at inference time, as autoregressively generating each output requires all previous outputs, the inability to compute scores at all output positions at once hinders us from rapidly training our models over large datasets. Although impressive work such as \citep{Kuchaiev2017Factorization} is able to significantly accelerate the training of LSTMs with factorization tricks, we are still bound by the linear dependence on sequence length.
%If the model could compute hidden states at each time step using only the inputs and outputs, it would be liberated from the dependence on results from previous time steps during training. This line of thought is the foundation of recent efforts such as the Markovian neural GPU \citep{neural_gpu}, ByteNet \citep{NalBytenet2017} and ConvS2S \citep{JonasFaceNet2017}, all of which use convolutional neural networks as a building block to compute hidden representations simultaneously for all timesteps, resulting in $O(1)$ sequential time complexity. \citep{JonasFaceNet2017} report new SOTA on machine translation for English-to-German (EnDe), Enlish-to-French (EnFr) and English-to-Romanian language pairs.
%A crucial component for accurate sequence prediction is modeling cross-positional communication. For example, in MT, we must draw information from both input and previous output words to translate an output word accurately. An attention layer \citep{bahdanau2014neural} can connect a very large number of positions at a low computation cost, also $O(1)$ sequential time complexity, making it an essential ingredient in recurrent encoder-decoder architectures for MT. A natural question to ask then is, "Could we replace recurrence with attention?". \marginpar{Don't know if it's the most natural question to ask given the previous statements. Also, need to say that the complexity table summarizes these statements} Such a model would be blessed with the computational efficiency of attention and the power of cross-positional communication. In this work, show that pure attention models work remarkably well for MT, achieving new SOTA results on EnDe and EnFr, and can be trained in under $2$ days on xyz architecture.
%Note: Facebook model is no better than RNNs in this regard, since it requires a number of layers proportional to the distance you want to communicate. Bytenet is more promising, since it requires a logarithmnic number of layers (does bytenet have SOTA results)?
%Note: An attention layer can connect a very large number of positions at a low computation cost in O(1) sequential operations. This is why encoder-decoder attention has been so successful in seq-to-seq models so far. It is only natural, then, to also use attention to connect the timesteps of the same sequence.
%Note: I wouldn't say that long sequences are not a problem during inference. It would be great if we could infer with no long sequences. We could just say later on that, while our training graph is constant-depth, our model still requires sequential operations in the decoder part during inference due to the autoregressive nature of the model.
%\begin{table}[h!]
%\caption{Attention models are quite efficient for cross-positional communications when sequence length is smaller than channel depth. $n$ represents the sequence length and $d$ represents the channel depth.}
%\label{tab:op_complexities}
%\begin{center}
%\vspace{-5pt}
%\scalebox{0.75}{
%\begin{tabular}{l|c|c|c}
%\hline \hline
%Layer Type & Receptive & Complexity & Sequential \\
% & Field & & Operations \\
%\hline
%Pointwise Feed-Forward & $1$ & $O(n \cdot d^2)$ & $O(1)$ \\
%\hline
%Recurrent & $n$ & $O(n \cdot d^2)$ & $O(n)$ \\
%\hline
%Convolutional & $r$ & $O(r \cdot n \cdot d^2)$ & $O(1)$ \\
%\hline
%Convolutional (separable) & $r$ & $O(r \cdot n \cdot d + n %\cdot d^2)$ & $O(1)$ \\
%\hline
%Attention & $r$ & $O(r \cdot n \cdot d)$ & $O(1)$ \\
%\hline \hline
%\end{tabular}
%}
%\end{center}
%\end{table}


@ -0,0 +1,18 @@
Recurrent neural networks, long short-term memory \citep{hochreiter1997} and gated recurrent \citep{gruEval14} neural networks in particular, have been firmly established as state of the art approaches in sequence modeling and transduction problems such as language modeling and machine translation \citep{sutskever14, bahdanau2014neural, cho2014learning}. Numerous efforts have since continued to push the boundaries of recurrent language models and encoder-decoder architectures \citep{wu2016google,luong2015effective,jozefowicz2016exploring}.
Recurrent models typically factor computation along the symbol positions of the input and output sequences. Aligning the positions to steps in computation time, they generate a sequence of hidden states $h_t$, as a function of the previous hidden state $h_{t-1}$ and the input for position $t$. This inherently sequential nature precludes parallelization within training examples, which becomes critical at longer sequence lengths, as memory constraints limit batching across examples.
%\marginpar{not sure if the memory constraints are understandable here}
Recent work has achieved significant improvements in computational efficiency through factorization tricks \citep{Kuchaiev2017Factorization} and conditional computation \citep{shazeer2017outrageously}, while also improving model performance in case of the latter. The fundamental constraint of sequential computation, however, remains.
%\marginpar{@all: there is work on analyzing what attention really does in seq2seq models, couldn't find it right away}
Attention mechanisms have become an integral part of compelling sequence modeling and transduction models in various tasks, allowing modeling of dependencies without regard to their distance in the input or output sequences \citep{bahdanau2014neural, structuredAttentionNetworks}. In all but a few cases \citep{decomposableAttnModel}, however, such attention mechanisms are used in conjunction with a recurrent network.
%\marginpar{not sure if "cross-positional communication" is understandable without explanation}
%\marginpar{insert exact training times and stats for the model that reaches sota earliest, maybe even a single GPU model?}
In this work we propose the Transformer, a model architecture eschewing recurrence and instead relying entirely on an attention mechanism to draw global dependencies between input and output. The Transformer allows for significantly more parallelization and can reach a new state of the art in translation quality after being trained for as little as twelve hours on eight P100 GPUs.
%\marginpar{you removed the constant number of repetitions part. I wrote it because I wanted to make it clear that the model does not only perform attention once, while it's also not recurrent. I thought that might be important to get across early.}
% Just a standard paragraph with citations, rewrite.
%After the seminal papers of \citep{sutskever14}, \citep{bahdanau2014neural}, and \citep{cho2014learning}, recurrent models have become the dominant solution for both sequence modeling and sequence-to-sequence transduction. Many efforts such as \citep{wu2016google,luong2015effective,jozefowicz2016exploring} have pushed the boundaries of machine translation and language modeling with recurrent sequence models. Recent effort \citep{shazeer2017outrageously} has combined the power of conditional computation with sequence models to train very large models for machine translation, pushing SOTA at lower computational cost. Recurrent models compute a vector of hidden states $h_t$, for each time step $t$ of computation. $h_t$ is a function of both the input at time $t$ and the previous hidden state $h_t$. This dependence on the previous hidden state encumbers recurrnet models to process multiple inputs at once, and their time complexity is a linear function of the length of the input and output, both during training and inference. [What I want to say here is that although this is fine during decoding, at training time, we are given both input and output and this linear nature does not allow the RNN to process all inputs and outputs simultaneously and haven't been used on datasets that are the of the scale of the web. What's the largest dataset we have ? . Talk about Nividia and possibly other's effors to speed up things, and possibly other efforts that alleviate this, but are still limited by it's comptuational nature]. Rest of the intro: What if you could construct the state based on the actual inputs and outputs, then you could construct them all at once. This has been the foundation of many promising recent efforts, bytenet,facenet (Also talk about quasi rnn here). Now we talk about attention!! Along with cell architectures such as long short-term meory (LSTM) \citep{hochreiter1997}, and gated recurrent units (GRUs) \citep{cho2014learning}, attention has emerged as an essential ingredient in successful sequence models, in particular for machine translation. In recent years, many, if not all, state-of-the-art (SOTA) results in machine translation have been achieved with attention-based sequence models \citep{wu2016google,luong2015effective,jozefowicz2016exploring}. Talk about the neon work on how it played with attention to do self attention! Then talk about what we do.


@ -0,0 +1,155 @@
\begin{figure}
\centering
\includegraphics[scale=0.6]{Figures/ModalNet-21}
\caption{The Transformer - model architecture.}
\label{fig:model-arch}
\end{figure}
% Although the primary workhorse of our model is attention,
%Our model maintains the encoder-decoder structure that is common to many so-called sequence-to-sequence models \citep{bahdanau2014neural,sutskever14}. As in all such architectures, the encoder computes a representation of the input sequence, and the decoder consumes these representations along with the output tokens to autoregressively produce the output sequence. Where, traditionally, the encoder and decoder contain stacks of recurrent or convolutional layers, our encoder and decoder stacks are composed of attention layers and position-wise feed-forward layers (Figure~\ref{fig:model-arch}). The following sections describe the gross architecture and these particular components in detail.
Most competitive neural sequence transduction models have an encoder-decoder structure \citep{cho2014learning,bahdanau2014neural,sutskever14}. Here, the encoder maps an input sequence of symbol representations $(x_1, ..., x_n)$ to a sequence of continuous representations $\mathbf{z} = (z_1, ..., z_n)$. Given $\mathbf{z}$, the decoder then generates an output sequence $(y_1,...,y_m)$ of symbols one element at a time. At each step the model is auto-regressive \citep{graves2013generating}, consuming the previously generated symbols as additional input when generating the next.
The Transformer follows this overall architecture using stacked self-attention and point-wise, fully connected layers for both the encoder and decoder, shown in the left and right halves of Figure~\ref{fig:model-arch}, respectively.
\subsection{Encoder and Decoder Stacks}
\paragraph{Encoder:}The encoder is composed of a stack of $N=6$ identical layers. Each layer has two sub-layers. The first is a multi-head self-attention mechanism, and the second is a simple, position-wise fully connected feed-forward network. We employ a residual connection \citep{he2016deep} around each of the two sub-layers, followed by layer normalization \cite{layernorm2016}. That is, the output of each sub-layer is $\mathrm{LayerNorm}(x + \mathrm{Sublayer}(x))$, where $\mathrm{Sublayer}(x)$ is the function implemented by the sub-layer itself. To facilitate these residual connections, all sub-layers in the model, as well as the embedding layers, produce outputs of dimension $\dmodel=512$.
\paragraph{Decoder:}The decoder is also composed of a stack of $N=6$ identical layers. In addition to the two sub-layers in each encoder layer, the decoder inserts a third sub-layer, which performs multi-head attention over the output of the encoder stack. Similar to the encoder, we employ residual connections around each of the sub-layers, followed by layer normalization. We also modify the self-attention sub-layer in the decoder stack to prevent positions from attending to subsequent positions. This masking, combined with the fact that the output embeddings are offset by one position, ensures that the predictions for position $i$ can depend only on the known outputs at positions less than $i$.
% In our model (Figure~\ref{fig:model-arch}), the encoder and decoder are composed of stacks of alternating self-attention layers (for cross-positional communication) and position-wise feed-forward layers (for in-place computation). In addition, the decoder stack contains encoder-decoder attention layers. Since attention is agnostic to the distances between words, our model requires a "positional encoding" to be added to the encoder and decoder input. The following sections describe all of these components in detail.
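As a concrete reading of the $\mathrm{LayerNorm}(x + \mathrm{Sublayer}(x))$ composition described above, here is a minimal NumPy sketch; the helper names are illustrative, and the learned gain and bias of layer normalization are omitted for brevity.
```
import numpy as np

def layer_norm(x, eps=1e-6):
    # normalize over the feature (d_model) dimension; learned gain/bias omitted
    mean = x.mean(axis=-1, keepdims=True)
    std = x.std(axis=-1, keepdims=True)
    return (x - mean) / (std + eps)

def sublayer_connection(x, sublayer):
    # LayerNorm(x + Sublayer(x)), applied around every sub-layer
    return layer_norm(x + sublayer(x))
```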
\subsection{Attention} \label{sec:attention}
An attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key.
\subsubsection{Scaled Dot-Product Attention} \label{sec:scaled-dot-prod}
% \begin{figure}
% \centering
% \includegraphics[scale=0.6]{Figures/ModalNet-19}
% \caption{Scaled Dot-Product Attention.}
% \label{fig:multi-head-att}
% \end{figure}
We call our particular attention "Scaled Dot-Product Attention" (Figure~\ref{fig:multi-head-att}). The input consists of queries and keys of dimension $d_k$, and values of dimension $d_v$. We compute the dot products of the query with all keys, divide each by $\sqrt{d_k}$, and apply a softmax function to obtain the weights on the values.
In practice, we compute the attention function on a set of queries simultaneously, packed together into a matrix $Q$. The keys and values are also packed together into matrices $K$ and $V$. We compute the matrix of outputs as:
\begin{equation}
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}(\frac{QK^T}{\sqrt{d_k}})V
\end{equation}
The two most commonly used attention functions are additive attention \citep{bahdanau2014neural}, and dot-product (multiplicative) attention. Dot-product attention is identical to our algorithm, except for the scaling factor of $\frac{1}{\sqrt{d_k}}$. Additive attention computes the compatibility function using a feed-forward network with a single hidden layer. While the two are similar in theoretical complexity, dot-product attention is much faster and more space-efficient in practice, since it can be implemented using highly optimized matrix multiplication code.
%We scale the dot products by $1/\sqrt{d_k}$ to limit the magnitude of the dot products, which works well in practice. Otherwise, we found applying the softmax to often result in weights very close to 0 or 1, and hence minuscule gradients.
% Already described in the subsequent section
%When used as part of decoder self-attention, an optional mask function is applied just before the softmax to prevent positions from attending to subsequent positions. This mask simply sets the logits corresponding to all illegal connections (those outside of the lower triangle) to $-\infty$.
%\paragraph{Comparison to Additive Attention: } We choose dot product attention over additive attention \citep{bahdanau2014neural} since it can be computed using highly optimized matrix multiplication code. This optimization is particularly important to us, as we employ many attention layers in our model.
While for small values of $d_k$ the two mechanisms perform similarly, additive attention outperforms dot product attention without scaling for larger values of $d_k$ \citep{DBLP:journals/corr/BritzGLL17}. We suspect that for large values of $d_k$, the dot products grow large in magnitude, pushing the softmax function into regions where it has extremely small gradients \footnote{To illustrate why the dot products get large, assume that the components of $q$ and $k$ are independent random variables with mean $0$ and variance $1$. Then their dot product, $q \cdot k = \sum_{i=1}^{d_k} q_ik_i$, has mean $0$ and variance $d_k$.}. To counteract this effect, we scale the dot products by $\frac{1}{\sqrt{d_k}}$.
%We suspect this to be caused by the dot products growing too large in magnitude to result in useful gradients after applying the softmax function. To counteract this, we scale the dot product by $1/\sqrt{d_k}$.
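A minimal NumPy sketch of the equation above; the optional `mask` argument anticipates the decoder-side masking discussed later, and all names are illustrative rather than taken from a reference implementation.
```
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V, mask=None):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)
    if mask is not None:
        # illegal connections are pushed towards -inf before the softmax
        scores = np.where(mask, scores, -1e9)
    return softmax(scores) @ V
```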
\subsubsection{Multi-Head Attention} \label{sec:multihead}
\begin{figure}
\begin{minipage}[t]{0.5\textwidth}
\centering
Scaled Dot-Product Attention \\
\vspace{0.5cm}
\includegraphics[scale=0.6]{Figures/ModalNet-19}
\end{minipage}
\begin{minipage}[t]{0.5\textwidth}
\centering
Multi-Head Attention \\
\vspace{0.1cm}
\includegraphics[scale=0.6]{Figures/ModalNet-20}
\end{minipage}
% \centering
\caption{(left) Scaled Dot-Product Attention. (right) Multi-Head Attention consists of several attention layers running in parallel.}
\label{fig:multi-head-att}
\end{figure}
Instead of performing a single attention function with $\dmodel$-dimensional keys, values and queries, we found it beneficial to linearly project the queries, keys and values $h$ times with different, learned linear projections to $d_k$, $d_k$ and $d_v$ dimensions, respectively.
On each of these projected versions of queries, keys and values we then perform the attention function in parallel, yielding $d_v$-dimensional output values. These are concatenated and once again projected, resulting in the final values, as depicted in Figure~\ref{fig:multi-head-att}.
Multi-head attention allows the model to jointly attend to information from different representation subspaces at different positions. With a single attention head, averaging inhibits this.
\begin{align*}
\mathrm{MultiHead}(Q, K, V) &= \mathrm{Concat}(\mathrm{head_1}, ..., \mathrm{head_h})W^O\\
% \mathrm{where} \mathrm{head_i} &= \mathrm{Attention}(QW_Q_i^{\dmodel \times d_q}, KW_K_i^{\dmodel \times d_k}, VW^V_i^{\dmodel \times d_v})\\
\text{where}~\mathrm{head_i} &= \mathrm{Attention}(QW^Q_i, KW^K_i, VW^V_i)\\
\end{align*}
Where the projections are parameter matrices $W^Q_i \in \mathbb{R}^{\dmodel \times d_k}$, $W^K_i \in \mathbb{R}^{\dmodel \times d_k}$, $W^V_i \in \mathbb{R}^{\dmodel \times d_v}$ and $W^O \in \mathbb{R}^{hd_v \times \dmodel}$.
%find it better (and no more expensive) to have multiple parallel attention layers (each over the full set of positions) with proportionally lower-dimensional keys, values and queries. We call this "Multi-Head Attention" (Figure~\ref{fig:multi-head-att}). The keys, values, and queries for each of these parallel attention layers are computed by learned linear transformations of the inputs to the multi-head attention. We use different linear transformations across different parallel attention layers. The output of the parallel attention layers are concatenated, and then passed through a final learned linear transformation.
In this work we employ $h=8$ parallel attention layers, or heads. For each of these we use $d_k=d_v=\dmodel/h=64$.
Due to the reduced dimension of each head, the total computational cost is similar to that of single-head attention with full dimensionality.
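A minimal sketch of the multi-head computation above, reusing `scaled_dot_product_attention` from the previous sketch; `WQ`, `WK`, `WV` are assumed to be lists of per-head projection matrices and `WO` the output projection.
```
import numpy as np

def multi_head_attention(Q, K, V, WQ, WK, WV, WO):
    # WQ[i], WK[i]: (d_model, d_k), WV[i]: (d_model, d_v), WO: (h * d_v, d_model)
    heads = [scaled_dot_product_attention(Q @ WQ[i], K @ WK[i], V @ WV[i])
             for i in range(len(WQ))]
    return np.concatenate(heads, axis=-1) @ WO
```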
\subsubsection{Applications of Attention in our Model}
The Transformer uses multi-head attention in three different ways:
\begin{itemize}
\item In "encoder-decoder attention" layers, the queries come from the previous decoder layer, and the memory keys and values come from the output of the encoder. This allows every position in the decoder to attend over all positions in the input sequence. This mimics the typical encoder-decoder attention mechanisms in sequence-to-sequence models such as \citep{wu2016google, bahdanau2014neural,JonasFaceNet2017}.
\item The encoder contains self-attention layers. In a self-attention layer all of the keys, values and queries come from the same place, in this case, the output of the previous layer in the encoder. Each position in the encoder can attend to all positions in the previous layer of the encoder.
\item Similarly, self-attention layers in the decoder allow each position in the decoder to attend to all positions in the decoder up to and including that position. We need to prevent leftward information flow in the decoder to preserve the auto-regressive property. We implement this inside of scaled dot-product attention by masking out (setting to $-\infty$) all values in the input of the softmax which correspond to illegal connections. See Figure~\ref{fig:multi-head-att}.
\end{itemize}
\subsection{Position-wise Feed-Forward Networks}\label{sec:ffn}
In addition to attention sub-layers, each of the layers in our encoder and decoder contains a fully connected feed-forward network, which is applied to each position separately and identically. This consists of two linear transformations with a ReLU activation in between.
\begin{equation}
\mathrm{FFN}(x)=\max(0, xW_1 + b_1) W_2 + b_2
\end{equation}
While the linear transformations are the same across different positions, they use different parameters from layer to layer. Another way of describing this is as two convolutions with kernel size 1. The dimensionality of input and output is $\dmodel=512$, and the inner-layer has dimensionality $d_{ff}=2048$.
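A minimal sketch of the position-wise feed-forward network defined above, with the base-model dimensions noted in comments; parameter names are illustrative.
```
import numpy as np

def position_wise_ffn(x, W1, b1, W2, b2):
    # FFN(x) = max(0, x W1 + b1) W2 + b2
    # base model: x: (..., 512), W1: (512, 2048), W2: (2048, 512)
    return np.maximum(0.0, x @ W1 + b1) @ W2 + b2
```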
%In the appendix, we describe how the position-wise feed-forward network can also be seen as a form of attention.
%from Jakob: The number of operations required for the model to relate signals from two arbitrary input or output positions grows in the distance between positions in input or output, linearly for ConvS2S and logarithmically for ByteNet, making it harder to learn dependencies between these positions \citep{hochreiter2001gradient}. In the transformer this is reduced to a constant number of operations, albeit at the cost of effective resolution caused by averaging attention-weighted positions, an effect we aim to counteract with multi-headed attention.
%Figure~\ref{fig:simple-att} presents a simple attention function, $A$, with a single head, that forms the basis of our multi-head attention. $A$ takes a query key vector $\kq$, matrices of memory keys $\km$ and memory values $\vm$ ,and produces a query value vector $\vq$ as
%\begin{equation*} \label{eq:attention}
% A(\kq, \km, \vm) = {\vm}^T (Softmax(\km \kq).
%\end{equation*}
%We linearly transform $\kq,\,\km$, and $\vm$ with learned matrices ${\Wkq \text{,} \, \Wkm}$, and ${\Wvm}$ before calling the attention function, and transform the output query with $\Wvq$ before handing it to the feed forward layer. Each attention layer has it's own set of transformation matrices, which are shared across all query positions. $A$ is applied in parallel for each query position, and is implemented very efficiently as a batch of matrix multiplies. The self-attention and encoder-decoder attention layers use $A$, but with different arguments. For example, in encdoder self-attention, queries in encoder layer $i$ attention to memories in encoder layer $i-1$. To ensure that decoder self-attention layers do not look at future words, we add $- \inf$ to the softmax logits in positions $j+1$ to query length for query position $l$.
%In simple attention, the query value is a weighted combination of the memory values where the attention weights sum to one. Although this function performs well in practice, the constraint on attention weights can restrict the amount of information that flows from memories to queries because the query cannot focus on multiple memory positions at once, which might be desirable when translating long sequences. \marginpar{@usz, could you think of an example of this ?} We remedy this by maintaining multiple attention heads at each query position that attend to all memory positions in parallel, with a different set of parameters per attention head $h$.
%\marginpar{}
\subsection{Embeddings and Softmax}
Similarly to other sequence transduction models, we use learned embeddings to convert the input tokens and output tokens to vectors of dimension $\dmodel$. We also use the usual learned linear transformation and softmax function to convert the decoder output to predicted next-token probabilities. In our model, we share the same weight matrix between the two embedding layers and the pre-softmax linear transformation, similar to \citep{press2016using}. In the embedding layers, we multiply those weights by $\sqrt{\dmodel}$.
\subsection{Positional Encoding}
Since our model contains no recurrence and no convolution, in order for the model to make use of the order of the sequence, we must inject some information about the relative or absolute position of the tokens in the sequence. To this end, we add "positional encodings" to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension $\dmodel$ as the embeddings, so that the two can be summed. There are many choices of positional encodings, learned and fixed \citep{JonasFaceNet2017}.
In this work, we use sine and cosine functions of different frequencies:
\begin{align*}
PE_{(pos,2i)} = sin(pos / 10000^{2i/\dmodel}) \\
PE_{(pos,2i+1)} = cos(pos / 10000^{2i/\dmodel})
\end{align*}
where $pos$ is the position and $i$ is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from $2\pi$ to $10000 \cdot 2\pi$. We chose this function because we hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset $k$, $PE_{pos+k}$ can be represented as a linear function of $PE_{pos}$.
We also experimented with using learned positional embeddings \citep{JonasFaceNet2017} instead, and found that the two versions produced nearly identical results (see Table~\ref{tab:variations} row (E)). We chose the sinusoidal version because it may allow the model to extrapolate to sequence lengths longer than the ones encountered during training.
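A minimal sketch of the sinusoidal encoding defined above, assuming an even $\dmodel$; the function name is illustrative.
```
import numpy as np

def sinusoidal_positional_encoding(max_len, d_model=512):
    # PE[pos, 2i]   = sin(pos / 10000^(2i / d_model))
    # PE[pos, 2i+1] = cos(pos / 10000^(2i / d_model))
    pos = np.arange(max_len)[:, None]
    i = np.arange(d_model // 2)[None, :]
    angles = pos / np.power(10000.0, (2.0 * i) / d_model)
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe
```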


@ -0,0 +1,45 @@
\pagebreak
\section*{Two Feed-Forward Layers = Attention over Parameters}\label{sec:parameter_attention}
In addition to attention layers, our model contains position-wise feed-forward networks (Section \ref{sec:ffn}), which consist of two linear transformations with a ReLU activation in between. In fact, these networks too can be seen as a form of attention. Compare the formula for such a network with the formula for a simple dot-product attention layer (biases and scaling factors omitted):
\begin{align*}
FFN(x, W_1, W_2) = ReLU(xW_1)W_2 \\
A(q, K, V) = Softmax(qK^T)V
\end{align*}
Based on the similarity of these formulae, the two-layer feed-forward network can be seen as a kind of attention, where the keys and values are the rows of the trainable parameter matrices $W_1$ and $W_2$, and where we use ReLU instead of Softmax in the compatibility function.
%the compatablity function is $compat(q, k_i) = ReLU(q \cdot k_i)$ instead of $Softmax(qK_T)_i$.
Given this similarity, we experimented with replacing the position-wise feed-forward networks with attention layers similar to the ones we use everywhere else in our model. The multi-head-attention-over-parameters sublayer is identical to the multi-head attention described in Section~\ref{sec:multihead}, except that the "keys" and "values" inputs to each attention head are trainable model parameters, as opposed to being linear projections of a previous layer. These parameters are scaled up by a factor of $\sqrt{d_{model}}$ in order to be more similar to activations.
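A single-head sketch of the sublayer described above, reusing the `softmax` helper from the scaled dot-product sketch; `K_param` and `V_param` stand for the trainable key and value parameters, and the $\sqrt{d_{model}}$ parameter scaling is assumed to be folded into them for brevity.
```
import numpy as np

def attention_over_parameters(x, WQ, K_param, V_param, WO):
    # keys and values are trainable parameters (K_param: (n_p, d_pk), V_param: (n_p, d_pv))
    # rather than projections of a previous layer
    q = x @ WQ                         # query projection of the layer input
    weights = softmax(q @ K_param.T)   # compatibility with the n_p learned keys
    return (weights @ V_param) @ WO
```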
In our first experiment, we replaced each position-wise feed-forward network with a multi-head-attention-over-parameters sublayer with $h_p=8$ heads, key-dimensionality $d_{pk}=64$, and value-dimensionality $d_{pv}=64$, using $n_p=1536$ key-value pairs for each attention head. The sublayer has a total of $2097152$ parameters, including the parameters in the query projection and the output projection. This matches the number of parameters in the position-wise feed-forward network that we replaced. While the theoretical amount of computation is also the same, in practice, the attention version caused the step times to be about 30\% longer.
In our second experiment, we used $h_p=8$ heads, and $n_p=512$ key-value pairs for each attention head, again matching the total number of parameters in the base model.
Results for the first experiment were slightly worse than for the base model, and results for the second experiment were slightly better, see Table~\ref{tab:parameter_attention}.
\begin{table}[h]
\caption{Replacing the position-wise feed-forward networks with multihead-attention-over-parameters produces similar results to the base model. All metrics are on the English-to-German translation development set, newstest2013.}
\label{tab:parameter_attention}
\begin{center}
\vspace{-2mm}
%\scalebox{1.0}{
\begin{tabular}{c|cccccc|cccc}
\hline\rule{0pt}{2.0ex}
& \multirow{2}{*}{$\dmodel$} & \multirow{2}{*}{$\dff$} &
\multirow{2}{*}{$h_p$} & \multirow{2}{*}{$d_{pk}$} & \multirow{2}{*}{$d_{pv}$} &
\multirow{2}{*}{$n_p$} &
PPL & BLEU & params & training\\
& & & & & & & (dev) & (dev) & $\times10^6$ & time \\
\hline\rule{0pt}{2.0ex}
base & 512 & 2048 & & & & & 4.92 & 25.8 & 65 & 12 hours\\
\hline\rule{0pt}{2.0ex}
AOP$_1$ & 512 & & 8 & 64 & 64 & 1536 & 4.92& 25.5 & 65 & 16 hours\\
AOP$_2$ & 512 & & 16 & 64 & 64 & 512 & \textbf{4.86} & \textbf{25.9} & 65 & 16 hours \\
\hline
\end{tabular}
%}
\end{center}
\end{table}


@ -0,0 +1,8 @@
The ancestor of ChatGPT: "Attention is all you need"
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
The actual abstract is as follows
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
https://arxiv.org/abs/1706.03762


@ -0,0 +1,2 @@
from stable_baselines3.dqn.dqn import DQN
from stable_baselines3.dqn.policies import CnnPolicy, MlpPolicy


@ -0,0 +1,245 @@
from typing import Any, Dict, List, Optional, Tuple, Type, Union
import gym
import numpy as np
import torch as th
from torch.nn import functional as F
from stable_baselines3.common import logger
from stable_baselines3.common.off_policy_algorithm import OffPolicyAlgorithm
from stable_baselines3.common.preprocessing import maybe_transpose
from stable_baselines3.common.type_aliases import GymEnv, MaybeCallback, Schedule
from stable_baselines3.common.utils import get_linear_fn, is_vectorized_observation, polyak_update
from stable_baselines3.dqn.policies import DQNPolicy
class DQN(OffPolicyAlgorithm):
"""
Deep Q-Network (DQN)
Paper: https://arxiv.org/abs/1312.5602, https://www.nature.com/articles/nature14236
Default hyperparameters are taken from the nature paper,
except for the optimizer and learning rate that were taken from Stable Baselines defaults.
:param policy: The policy model to use (MlpPolicy, CnnPolicy, ...)
:param env: The environment to learn from (if registered in Gym, can be str)
:param learning_rate: The learning rate, it can be a function
of the current progress remaining (from 1 to 0)
:param buffer_size: size of the replay buffer
:param learning_starts: how many steps of the model to collect transitions for before learning starts
:param batch_size: Minibatch size for each gradient update
:param tau: the soft update coefficient ("Polyak update", between 0 and 1) default 1 for hard update
:param gamma: the discount factor
:param train_freq: Update the model every ``train_freq`` steps. Alternatively pass a tuple of frequency and unit
like ``(5, "step")`` or ``(2, "episode")``.
:param gradient_steps: How many gradient steps to do after each rollout (see ``train_freq``)
Set to ``-1`` means to do as many gradient steps as steps done in the environment
during the rollout.
:param optimize_memory_usage: Enable a memory efficient variant of the replay buffer
at a cost of more complexity.
See https://github.com/DLR-RM/stable-baselines3/issues/37#issuecomment-637501195
:param target_update_interval: update the target network every ``target_update_interval``
environment steps.
:param exploration_fraction: fraction of entire training period over which the exploration rate is reduced
:param exploration_initial_eps: initial value of random action probability
:param exploration_final_eps: final value of random action probability
:param max_grad_norm: The maximum value for the gradient clipping
:param tensorboard_log: the log location for tensorboard (if None, no logging)
:param create_eval_env: Whether to create a second environment that will be
used for evaluating the agent periodically. (Only available when passing string for the environment)
:param policy_kwargs: additional arguments to be passed to the policy on creation
:param verbose: the verbosity level: 0 no output, 1 info, 2 debug
:param seed: Seed for the pseudo random generators
:param device: Device (cpu, cuda, ...) on which the code should be run.
Setting it to auto, the code will be run on the GPU if possible.
:param _init_setup_model: Whether or not to build the network at the creation of the instance
"""
def __init__(
self,
policy: Union[str, Type[DQNPolicy]],
env: Union[GymEnv, str],
learning_rate: Union[float, Schedule] = 1e-4,
buffer_size: int = 1000000,
learning_starts: int = 50000,
batch_size: Optional[int] = 32,
tau: float = 1.0,
gamma: float = 0.99,
train_freq: Union[int, Tuple[int, str]] = 4,
gradient_steps: int = 1,
optimize_memory_usage: bool = False,
target_update_interval: int = 10000,
exploration_fraction: float = 0.1,
exploration_initial_eps: float = 1.0,
exploration_final_eps: float = 0.05,
max_grad_norm: float = 10,
tensorboard_log: Optional[str] = None,
create_eval_env: bool = False,
policy_kwargs: Optional[Dict[str, Any]] = None,
verbose: int = 0,
seed: Optional[int] = None,
device: Union[th.device, str] = "auto",
_init_setup_model: bool = True,
):
super(DQN, self).__init__(
policy,
env,
DQNPolicy,
learning_rate,
buffer_size,
learning_starts,
batch_size,
tau,
gamma,
train_freq,
gradient_steps,
action_noise=None, # No action noise
policy_kwargs=policy_kwargs,
tensorboard_log=tensorboard_log,
verbose=verbose,
device=device,
create_eval_env=create_eval_env,
seed=seed,
sde_support=False,
optimize_memory_usage=optimize_memory_usage,
supported_action_spaces=(gym.spaces.Discrete,),
)
self.exploration_initial_eps = exploration_initial_eps
self.exploration_final_eps = exploration_final_eps
self.exploration_fraction = exploration_fraction
self.target_update_interval = target_update_interval
self.max_grad_norm = max_grad_norm
# "epsilon" for the epsilon-greedy exploration
self.exploration_rate = 0.0
# Linear schedule will be defined in `_setup_model()`
self.exploration_schedule = None
self.q_net, self.q_net_target = None, None
if _init_setup_model:
self._setup_model()
def _setup_model(self) -> None:
super(DQN, self)._setup_model()
self._create_aliases()
self.exploration_schedule = get_linear_fn(
self.exploration_initial_eps, self.exploration_final_eps, self.exploration_fraction
)
def _create_aliases(self) -> None:
self.q_net = self.policy.q_net
self.q_net_target = self.policy.q_net_target
def _on_step(self) -> None:
"""
Update the exploration rate and target network if needed.
This method is called in ``collect_rollouts()`` after each step in the environment.
"""
if self.num_timesteps % self.target_update_interval == 0:
polyak_update(self.q_net.parameters(), self.q_net_target.parameters(), self.tau)
self.exploration_rate = self.exploration_schedule(self._current_progress_remaining)
logger.record("rollout/exploration rate", self.exploration_rate)
def train(self, gradient_steps: int, batch_size: int = 100) -> None:
# Update learning rate according to schedule
self._update_learning_rate(self.policy.optimizer)
losses = []
for _ in range(gradient_steps):
# Sample replay buffer
replay_data = self.replay_buffer.sample(batch_size, env=self._vec_normalize_env)
with th.no_grad():
# Compute the next Q-values using the target network
next_q_values = self.q_net_target(replay_data.next_observations)
# Follow greedy policy: use the one with the highest value
next_q_values, _ = next_q_values.max(dim=1)
# Avoid potential broadcast issue
next_q_values = next_q_values.reshape(-1, 1)
# 1-step TD target
target_q_values = replay_data.rewards + (1 - replay_data.dones) * self.gamma * next_q_values
# Get current Q-values estimates
current_q_values = self.q_net(replay_data.observations)
# Retrieve the q-values for the actions from the replay buffer
current_q_values = th.gather(current_q_values, dim=1, index=replay_data.actions.long())
# Compute Huber loss (less sensitive to outliers)
loss = F.smooth_l1_loss(current_q_values, target_q_values)
losses.append(loss.item())
# Optimize the policy
self.policy.optimizer.zero_grad()
loss.backward()
# Clip gradient norm
th.nn.utils.clip_grad_norm_(self.policy.parameters(), self.max_grad_norm)
self.policy.optimizer.step()
# Increase update counter
self._n_updates += gradient_steps
logger.record("train/n_updates", self._n_updates, exclude="tensorboard")
logger.record("train/loss", np.mean(losses))
def predict(
self,
observation: np.ndarray,
state: Optional[np.ndarray] = None,
mask: Optional[np.ndarray] = None,
deterministic: bool = False,
) -> Tuple[np.ndarray, Optional[np.ndarray]]:
"""
Overrides the base_class predict function to include epsilon-greedy exploration.
:param observation: the input observation
:param state: The last states (can be None, used in recurrent policies)
:param mask: The last masks (can be None, used in recurrent policies)
:param deterministic: Whether or not to return deterministic actions.
:return: the model's action and the next state
(used in recurrent policies)
"""
if not deterministic and np.random.rand() < self.exploration_rate:
if is_vectorized_observation(maybe_transpose(observation, self.observation_space), self.observation_space):
n_batch = observation.shape[0]
action = np.array([self.action_space.sample() for _ in range(n_batch)])
else:
action = np.array(self.action_space.sample())
else:
action, state = self.policy.predict(observation, state, mask, deterministic)
return action, state
def learn(
self,
total_timesteps: int,
callback: MaybeCallback = None,
log_interval: int = 4,
eval_env: Optional[GymEnv] = None,
eval_freq: int = -1,
n_eval_episodes: int = 5,
tb_log_name: str = "DQN",
eval_log_path: Optional[str] = None,
reset_num_timesteps: bool = True,
) -> OffPolicyAlgorithm:
return super(DQN, self).learn(
total_timesteps=total_timesteps,
callback=callback,
log_interval=log_interval,
eval_env=eval_env,
eval_freq=eval_freq,
n_eval_episodes=n_eval_episodes,
tb_log_name=tb_log_name,
eval_log_path=eval_log_path,
reset_num_timesteps=reset_num_timesteps,
)
def _excluded_save_params(self) -> List[str]:
return super(DQN, self)._excluded_save_params() + ["q_net", "q_net_target"]
def _get_torch_save_params(self) -> Tuple[List[str], List[str]]:
state_dicts = ["policy", "policy.optimizer"]
return state_dicts, []


@ -0,0 +1,237 @@
from typing import Any, Dict, List, Optional, Type
import gym
import torch as th
from torch import nn
from stable_baselines3.common.policies import BasePolicy, register_policy
from stable_baselines3.common.torch_layers import BaseFeaturesExtractor, FlattenExtractor, NatureCNN, create_mlp
from stable_baselines3.common.type_aliases import Schedule
class QNetwork(BasePolicy):
"""
Action-Value (Q-Value) network for DQN
:param observation_space: Observation space
:param action_space: Action space
:param net_arch: The specification of the policy and value networks.
:param activation_fn: Activation function
:param normalize_images: Whether to normalize images or not,
dividing by 255.0 (True by default)
"""
def __init__(
self,
observation_space: gym.spaces.Space,
action_space: gym.spaces.Space,
features_extractor: nn.Module,
features_dim: int,
net_arch: Optional[List[int]] = None,
activation_fn: Type[nn.Module] = nn.ReLU,
normalize_images: bool = True,
):
super(QNetwork, self).__init__(
observation_space,
action_space,
features_extractor=features_extractor,
normalize_images=normalize_images,
)
if net_arch is None:
net_arch = [64, 64]
self.net_arch = net_arch
self.activation_fn = activation_fn
self.features_extractor = features_extractor
self.features_dim = features_dim
self.normalize_images = normalize_images
action_dim = self.action_space.n # number of actions
q_net = create_mlp(self.features_dim, action_dim, self.net_arch, self.activation_fn)
self.q_net = nn.Sequential(*q_net)
def forward(self, obs: th.Tensor) -> th.Tensor:
"""
Predict the q-values.
:param obs: Observation
:return: The estimated Q-Value for each action.
"""
return self.q_net(self.extract_features(obs))
def _predict(self, observation: th.Tensor, deterministic: bool = True) -> th.Tensor:
q_values = self.forward(observation)
# Greedy action
action = q_values.argmax(dim=1).reshape(-1)
return action
def _get_constructor_parameters(self) -> Dict[str, Any]:
data = super()._get_constructor_parameters()
data.update(
dict(
net_arch=self.net_arch,
features_dim=self.features_dim,
activation_fn=self.activation_fn,
features_extractor=self.features_extractor,
)
)
return data
class DQNPolicy(BasePolicy):
"""
Policy class with Q-Value Net and target net for DQN
:param observation_space: Observation space
:param action_space: Action space
:param lr_schedule: Learning rate schedule (could be constant)
:param net_arch: The specification of the policy and value networks.
:param activation_fn: Activation function
:param features_extractor_class: Features extractor to use.
:param features_extractor_kwargs: Keyword arguments
to pass to the features extractor.
:param normalize_images: Whether to normalize images or not,
dividing by 255.0 (True by default)
:param optimizer_class: The optimizer to use,
``th.optim.Adam`` by default
:param optimizer_kwargs: Additional keyword arguments,
excluding the learning rate, to pass to the optimizer
"""
def __init__(
self,
observation_space: gym.spaces.Space,
action_space: gym.spaces.Space,
lr_schedule: Schedule,
net_arch: Optional[List[int]] = None,
activation_fn: Type[nn.Module] = nn.ReLU,
features_extractor_class: Type[BaseFeaturesExtractor] = FlattenExtractor,
features_extractor_kwargs: Optional[Dict[str, Any]] = None,
normalize_images: bool = True,
optimizer_class: Type[th.optim.Optimizer] = th.optim.Adam,
optimizer_kwargs: Optional[Dict[str, Any]] = None,
):
super(DQNPolicy, self).__init__(
observation_space,
action_space,
features_extractor_class,
features_extractor_kwargs,
optimizer_class=optimizer_class,
optimizer_kwargs=optimizer_kwargs,
)
if net_arch is None:
if features_extractor_class == FlattenExtractor:
net_arch = [64, 64]
else:
net_arch = []
self.net_arch = net_arch
self.activation_fn = activation_fn
self.normalize_images = normalize_images
self.net_args = {
"observation_space": self.observation_space,
"action_space": self.action_space,
"net_arch": self.net_arch,
"activation_fn": self.activation_fn,
"normalize_images": normalize_images,
}
self.q_net, self.q_net_target = None, None
self._build(lr_schedule)
def _build(self, lr_schedule: Schedule) -> None:
"""
Create the network and the optimizer.
:param lr_schedule: Learning rate schedule
lr_schedule(1) is the initial learning rate
"""
self.q_net = self.make_q_net()
self.q_net_target = self.make_q_net()
self.q_net_target.load_state_dict(self.q_net.state_dict())
# Setup optimizer with initial learning rate
self.optimizer = self.optimizer_class(self.parameters(), lr=lr_schedule(1), **self.optimizer_kwargs)
def make_q_net(self) -> QNetwork:
# Make sure we always have separate networks for features extractors etc
net_args = self._update_features_extractor(self.net_args, features_extractor=None)
return QNetwork(**net_args).to(self.device)
def forward(self, obs: th.Tensor, deterministic: bool = True) -> th.Tensor:
return self._predict(obs, deterministic=deterministic)
def _predict(self, obs: th.Tensor, deterministic: bool = True) -> th.Tensor:
return self.q_net._predict(obs, deterministic=deterministic)
def _get_constructor_parameters(self) -> Dict[str, Any]:
data = super()._get_constructor_parameters()
data.update(
dict(
net_arch=self.net_args["net_arch"],
activation_fn=self.net_args["activation_fn"],
lr_schedule=self._dummy_schedule, # dummy lr schedule, not needed for loading policy alone
optimizer_class=self.optimizer_class,
optimizer_kwargs=self.optimizer_kwargs,
features_extractor_class=self.features_extractor_class,
features_extractor_kwargs=self.features_extractor_kwargs,
)
)
return data
MlpPolicy = DQNPolicy
class CnnPolicy(DQNPolicy):
"""
Policy class for DQN when using images as input.
:param observation_space: Observation space
:param action_space: Action space
:param lr_schedule: Learning rate schedule (could be constant)
:param net_arch: The specification of the policy and value networks.
:param activation_fn: Activation function
:param features_extractor_class: Features extractor to use.
:param normalize_images: Whether to normalize images or not,
dividing by 255.0 (True by default)
:param optimizer_class: The optimizer to use,
``th.optim.Adam`` by default
:param optimizer_kwargs: Additional keyword arguments,
excluding the learning rate, to pass to the optimizer
"""
def __init__(
self,
observation_space: gym.spaces.Space,
action_space: gym.spaces.Space,
lr_schedule: Schedule,
net_arch: Optional[List[int]] = None,
activation_fn: Type[nn.Module] = nn.ReLU,
features_extractor_class: Type[BaseFeaturesExtractor] = NatureCNN,
features_extractor_kwargs: Optional[Dict[str, Any]] = None,
normalize_images: bool = True,
optimizer_class: Type[th.optim.Optimizer] = th.optim.Adam,
optimizer_kwargs: Optional[Dict[str, Any]] = None,
):
super(CnnPolicy, self).__init__(
observation_space,
action_space,
lr_schedule,
net_arch,
activation_fn,
features_extractor_class,
features_extractor_kwargs,
normalize_images,
optimizer_class,
optimizer_kwargs,
)
register_policy("MlpPolicy", MlpPolicy)
register_policy("CnnPolicy", CnnPolicy)


@ -0,0 +1,2 @@
GitHub: stable-baselines3
https://github.com/DLR-RM/stable-baselines3
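A minimal usage sketch for the DQN class and policies shown above, assuming the stable-baselines3 package and a standard Gym environment (CartPole-v1) are installed and using the classic `gym` API that this version of the code targets; hyperparameters other than the learning rate are left at their defaults.
```
import gym
from stable_baselines3 import DQN

env = gym.make("CartPole-v1")
model = DQN("MlpPolicy", env, learning_rate=1e-4, verbose=1)
model.learn(total_timesteps=10_000)

obs = env.reset()
action, _state = model.predict(obs, deterministic=True)
```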


@ -0,0 +1,27 @@
"In practice, we found that a high-entropy initial state is more likely to increase the speed of training.
The entropy is calculated by:
$$H=-\sum_{k= 1}^{n_k} p(k) \cdot \log p(k), p(k)=\frac{|A_k|}{|\mathcal{A}|}$$
where $H$ is the entropy, $|A_k|$ is the number of agent nodes in $k$-th cluster, $|\mathcal{A}|$ is the total number of agents.
To ensure the Cooperation Graph initialization has higher entropy,
we will randomly generate multiple initial states,
rank by their entropy and then pick the one with maximum $H$."
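A minimal sketch of the selection rule quoted above; `candidate_partitions` is assumed to be a list of cluster-size vectors, one per randomly generated initial state.
```
import numpy as np

def cluster_entropy(cluster_sizes):
    # H = -sum_k p(k) * log p(k),  with p(k) = |A_k| / |A|
    p = np.asarray(cluster_sizes, dtype=float)
    p = p / p.sum()
    p = p[p > 0]                      # treat 0 * log(0) as 0
    return float(-(p * np.log(p)).sum())

def pick_high_entropy_init(candidate_partitions):
    # rank randomly generated initial states by entropy and keep the one with maximum H
    return max(candidate_partitions, key=cluster_entropy)
```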
```
FROM ubuntu:latest
RUN apt-get update && \
apt-get install -y python3 python3-pip && \
rm -rf /var/lib/apt/lists/*
RUN echo '[global]' > /etc/pip.conf && \
echo 'index-url = https://mirrors.aliyun.com/pypi/simple/' >> /etc/pip.conf && \
echo 'trusted-host = mirrors.aliyun.com' >> /etc/pip.conf
RUN pip3 install gradio requests[socks] mdtex2html
COPY . /gpt
WORKDIR /gpt
CMD ["python3", "main.py"]
```


@ -0,0 +1,186 @@
from predict import predict_no_ui
from toolbox import CatchException, report_execption, write_results_to_file, predict_no_ui_but_counting_down, get_conf
import re, requests, unicodedata, os
def download_arxiv_(url_pdf):
if 'arxiv.org' not in url_pdf:
if ('.' in url_pdf) and ('/' not in url_pdf):
new_url = 'https://arxiv.org/abs/'+url_pdf
print('下载编号:', url_pdf, '自动定位:', new_url)
# download_arxiv_(new_url)
return download_arxiv_(new_url)
else:
print('不能识别的URL')
return None
if 'abs' in url_pdf:
url_pdf = url_pdf.replace('abs', 'pdf')
url_pdf = url_pdf + '.pdf'
url_abs = url_pdf.replace('.pdf', '').replace('pdf', 'abs')
title, other_info = get_name(_url_=url_abs)
paper_id = title.split()[0] # '[1712.00559]'
if '2' in other_info['year']:
title = other_info['year'] + ' ' + title
known_conf = ['NeurIPS', 'NIPS', 'Nature', 'Science', 'ICLR', 'AAAI']
for k in known_conf:
if k in other_info['comment']:
title = k + ' ' + title
download_dir = './gpt_log/arxiv/'
os.makedirs(download_dir, exist_ok=True)
title_str = title.replace('?', '')\
.replace(':', '')\
.replace('\"', '')\
.replace('\n', '')\
.replace(' ', ' ')\
.replace(' ', ' ')
requests_pdf_url = url_pdf
file_path = download_dir+title_str
# if os.path.exists(file_path):
# print('返回缓存文件')
# return './gpt_log/arxiv/'+title_str
print('下载中')
proxies, = get_conf('proxies')
r = requests.get(requests_pdf_url, proxies=proxies)
with open(file_path, 'wb+') as f:
f.write(r.content)
print('下载完成')
# print('输出下载命令:','aria2c -o \"%s\" %s'%(title_str,url_pdf))
# subprocess.call('aria2c --all-proxy=\"172.18.116.150:11084\" -o \"%s\" %s'%(download_dir+title_str,url_pdf), shell=True)
x = "%s %s %s.bib" % (paper_id, other_info['year'], other_info['authors'])
x = x.replace('?', '')\
.replace(':', '')\
.replace('\"', '')\
.replace('\n', '')\
.replace(' ', ' ')\
.replace(' ', ' ')
return './gpt_log/arxiv/'+title_str, other_info
def get_name(_url_):
import os
from bs4 import BeautifulSoup
print('正在获取文献名!')
print(_url_)
# arxiv_recall = {}
# if os.path.exists('./arxiv_recall.pkl'):
# with open('./arxiv_recall.pkl', 'rb') as f:
# arxiv_recall = pickle.load(f)
# if _url_ in arxiv_recall:
# print('在缓存中')
# return arxiv_recall[_url_]
proxies, = get_conf('proxies')
res = requests.get(_url_, proxies=proxies)
bs = BeautifulSoup(res.text, 'html.parser')
other_details = {}
# get year
try:
year = bs.find_all(class_='dateline')[0].text
year = re.search(r'(\d{4})', year, re.M | re.I).group(1)
other_details['year'] = year
abstract = bs.find_all(class_='abstract mathjax')[0].text
other_details['abstract'] = abstract
except:
other_details['year'] = ''
print('年份获取失败')
# get author
try:
authors = bs.find_all(class_='authors')[0].text
authors = authors.split('Authors:')[1]
other_details['authors'] = authors
except:
other_details['authors'] = ''
print('authors获取失败')
# get comment
try:
comment = bs.find_all(class_='metatable')[0].text
real_comment = None
for item in comment.replace('\n', ' ').split(' '):
if 'Comments' in item:
real_comment = item
if real_comment is not None:
other_details['comment'] = real_comment
else:
other_details['comment'] = ''
except:
other_details['comment'] = ''
print('Comments获取失败')
title_str = BeautifulSoup(
res.text, 'html.parser').find('title').contents[0]
print('获取成功:', title_str)
# arxiv_recall[_url_] = (title_str+'.pdf', other_details)
# with open('./arxiv_recall.pkl', 'wb') as f:
# pickle.dump(arxiv_recall, f)
return title_str+'.pdf', other_details
@CatchException
def 下载arxiv论文并翻译摘要(txt, top_p, temperature, chatbot, history, systemPromptTxt, WEB_PORT):
CRAZY_FUNCTION_INFO = "下载arxiv论文并翻译摘要函数插件作者[binary-husky]。正在提取摘要并下载PDF文档……"
import glob
import os
# 基本信息:功能、贡献者
chatbot.append(["函数插件功能?", CRAZY_FUNCTION_INFO])
yield chatbot, history, '正常'
# 尝试导入依赖,如果缺少依赖,则给出安装建议
try:
import pdfminer, bs4
except:
report_execption(chatbot, history,
a = f"解析项目: {txt}",
b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pdfminer beautifulsoup4```。")
yield chatbot, history, '正常'
return
# 清空历史,以免输入溢出
history = []
# 提取摘要下载PDF文档
try:
pdf_path, info = download_arxiv_(txt)
except:
report_execption(chatbot, history,
a = f"解析项目: {txt}",
b = f"下载pdf文件未成功")
yield chatbot, history, '正常'
return
# 翻译摘要等
i_say = f"请你阅读以下学术论文相关的材料,提取摘要,翻译为中文。材料如下:{str(info)}"
i_say_show_user = f'请你阅读以下学术论文相关的材料,提取摘要,翻译为中文。论文:{pdf_path}'
chatbot.append((i_say_show_user, "[Local Message] waiting gpt response."))
yield chatbot, history, '正常'
msg = '正常'
# ** gpt request **
gpt_say = yield from predict_no_ui_but_counting_down(i_say, i_say_show_user, chatbot, top_p, temperature, history=[]) # 带超时倒计时
chatbot[-1] = (i_say_show_user, gpt_say)
history.append(i_say_show_user); history.append(gpt_say)
yield chatbot, history, msg
# 写入文件
import shutil
# 重置文件的创建时间
shutil.copyfile(pdf_path, f'./gpt_log/{os.path.basename(pdf_path)}'); os.remove(pdf_path)
res = write_results_to_file(history)
chatbot.append(("完成了吗?", res + "\n\nPDF文件也已经下载"))
yield chatbot, history, msg
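
The URL handling at the top of download_arxiv_ can be sanity-checked without network access or the plugin's toolbox helpers. The following is a minimal standalone sketch of just that normalization step; normalize_arxiv_input is an illustrative name, not a function in this repository, and the example ID 1812.10695 is the one suggested in the plugin menu later in this diff.

```python
def normalize_arxiv_input(query: str) -> tuple[str, str]:
    """Map a bare arXiv ID or an abs/pdf URL to an (abs_url, pdf_url) pair."""
    if 'arxiv.org' not in query:
        # A bare ID such as '1812.10695' contains a dot but no slash.
        if '.' in query and '/' not in query:
            query = 'https://arxiv.org/abs/' + query
        else:
            raise ValueError('Unrecognized arXiv reference: ' + query)
    # The abstract page lives under /abs/, the PDF under /pdf/ with a .pdf suffix.
    abs_url = query.replace('/pdf/', '/abs/')
    if abs_url.endswith('.pdf'):
        abs_url = abs_url[:-len('.pdf')]
    pdf_url = abs_url.replace('/abs/', '/pdf/') + '.pdf'
    return abs_url, pdf_url

print(normalize_arxiv_input('1812.10695'))
# ('https://arxiv.org/abs/1812.10695', 'https://arxiv.org/pdf/1812.10695.pdf')
```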

@@ -0,0 +1,168 @@
import threading
from predict import predict_no_ui_long_connection
from toolbox import CatchException, write_results_to_file, report_execption
def extract_code_block_carefully(txt):
splitted = txt.split('```')
n_code_block_seg = len(splitted) - 1
if n_code_block_seg <= 1: return txt
# 剩下的情况都开头除去 ``` 结尾除去一次 ```
txt_out = '```'.join(splitted[1:-1])
return txt_out
def breakdown_txt_to_satisfy_token_limit(txt, limit, must_break_at_empty_line=True):
from transformers import GPT2TokenizerFast
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
get_token_cnt = lambda txt: len(tokenizer(txt)["input_ids"])
def cut(txt_tocut, must_break_at_empty_line): # 递归
if get_token_cnt(txt_tocut) <= limit:
return [txt_tocut]
else:
lines = txt_tocut.split('\n')
estimated_line_cut = limit / get_token_cnt(txt_tocut) * len(lines)
estimated_line_cut = int(estimated_line_cut)
for cnt in reversed(range(estimated_line_cut)):
if must_break_at_empty_line:
if lines[cnt] != "": continue
print(cnt)
prev = "\n".join(lines[:cnt])
post = "\n".join(lines[cnt:])
if get_token_cnt(prev) < limit: break
if cnt == 0:
print('what the f?')
raise RuntimeError("存在一行极长的文本!")
print(len(post))
# 列表递归接龙
result = [prev]
result.extend(cut(post, must_break_at_empty_line))
return result
try:
return cut(txt, must_break_at_empty_line=True)
except RuntimeError:
return cut(txt, must_break_at_empty_line=False)
def break_txt_into_half_at_some_linebreak(txt):
lines = txt.split('\n')
n_lines = len(lines)
pre = lines[:(n_lines//2)]
post = lines[(n_lines//2):]
return "\n".join(pre), "\n".join(post)
@CatchException
def 全项目切换英文(txt, top_p, temperature, chatbot, history, sys_prompt, WEB_PORT):
# 第1步清空历史以免输入溢出
history = []
# 第2步尝试导入依赖如果缺少依赖则给出安装建议
try:
import openai, transformers
except:
report_execption(chatbot, history,
a = f"解析项目: {txt}",
b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade openai transformers```。")
yield chatbot, history, '正常'
return
# 第3步集合文件
import time, glob, os, shutil, re, openai
os.makedirs('gpt_log/generated_english_version', exist_ok=True)
os.makedirs('gpt_log/generated_english_version/crazy_functions', exist_ok=True)
file_manifest = [f for f in glob.glob('./*.py') if ('test_project' not in f) and ('gpt_log' not in f)] + \
[f for f in glob.glob('./crazy_functions/*.py') if ('test_project' not in f) and ('gpt_log' not in f)]
# file_manifest = ['./toolbox.py']
i_say_show_user_buffer = []
# 第4步随便显示点什么防止卡顿的感觉
for index, fp in enumerate(file_manifest):
# if 'test_project' in fp: continue
with open(fp, 'r', encoding='utf-8') as f:
file_content = f.read()
i_say_show_user =f'[{index}/{len(file_manifest)}] 接下来请将以下代码中包含的所有中文转化为英文,只输出转化后的英文代码,请用代码块输出代码: {os.path.abspath(fp)}'
i_say_show_user_buffer.append(i_say_show_user)
chatbot.append((i_say_show_user, "[Local Message] 等待多线程操作,中间过程不予显示."))
yield chatbot, history, '正常'
# 第5步Token限制下的截断与处理
MAX_TOKEN = 2500
# from transformers import GPT2TokenizerFast
# print('加载tokenizer中')
# tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
# get_token_cnt = lambda txt: len(tokenizer(txt)["input_ids"])
# print('加载tokenizer结束')
# 第6步任务函数
mutable_return = [None for _ in file_manifest]
observe_window = [[""] for _ in file_manifest]
def thread_worker(fp,index):
if index > 10:
time.sleep(60)
print('Openai 限制免费用户每分钟20次请求降低请求频率中。')
with open(fp, 'r', encoding='utf-8') as f:
file_content = f.read()
i_say_template = lambda fp, file_content: f'接下来请将以下代码中包含的所有中文转化为英文,只输出代码,文件名是{fp},文件代码是 ```{file_content}```'
try:
gpt_say = ""
# 分解代码文件
file_content_breakdown = breakdown_txt_to_satisfy_token_limit(file_content, MAX_TOKEN)
for file_content_partial in file_content_breakdown:
i_say = i_say_template(fp, file_content_partial)
# # ** gpt request **
gpt_say_partial = predict_no_ui_long_connection(inputs=i_say, top_p=top_p, temperature=temperature, history=[], sys_prompt=sys_prompt, observe_window=observe_window[index])
gpt_say_partial = extract_code_block_carefully(gpt_say_partial)
gpt_say += gpt_say_partial
mutable_return[index] = gpt_say
except ConnectionAbortedError as token_exceed_err:
print('至少一个线程任务Token溢出而失败', token_exceed_err)
except Exception as e:
print('至少一个线程任务意外失败', e)
# 第7步所有线程同时开始执行任务函数
handles = [threading.Thread(target=thread_worker, args=(fp,index)) for index, fp in enumerate(file_manifest)]
for h in handles:
h.daemon = True
h.start()
chatbot.append(('开始了吗?', f'多线程操作已经开始'))
yield chatbot, history, '正常'
# 第8步循环轮询各个线程是否执行完毕
cnt = 0
while True:
cnt += 1
time.sleep(0.2)
th_alive = [h.is_alive() for h in handles]
if not any(th_alive): break
# 更好的UI视觉效果
observe_win = []
for thread_index, alive in enumerate(th_alive):
observe_win.append("[ ..."+observe_window[thread_index][0][-60:].replace('\n','').replace('```','...').replace(' ','.').replace('<br/>','.....').replace('$','.')+"... ]")
stat = [f'执行中: {obs}\n\n' if alive else '已完成\n\n' for alive, obs in zip(th_alive, observe_win)]
stat_str = ''.join(stat)
chatbot[-1] = (chatbot[-1][0], f'多线程操作已经开始,完成情况: \n\n{stat_str}' + ''.join(['.']*(cnt%10+1)))
yield chatbot, history, '正常'
# 第9步把结果写入文件
for index, h in enumerate(handles):
h.join() # 这里其实不需要join了肯定已经都结束了
fp = file_manifest[index]
gpt_say = mutable_return[index]
i_say_show_user = i_say_show_user_buffer[index]
where_to_relocate = f'gpt_log/generated_english_version/{fp}'
if gpt_say is not None:
with open(where_to_relocate, 'w+', encoding='utf-8') as f:
f.write(gpt_say)
else: # 失败
shutil.copyfile(file_manifest[index], where_to_relocate)
chatbot.append((i_say_show_user, f'[Local Message] 已完成{os.path.abspath(fp)}的转化,\n\n存入{os.path.abspath(where_to_relocate)}'))
history.append(i_say_show_user); history.append(gpt_say)
yield chatbot, history, '正常'
time.sleep(1)
# 第10步备份一个文件
res = write_results_to_file(history)
chatbot.append(("生成一份任务执行报告", res))
yield chatbot, history, '正常'
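
The recursive splitter breakdown_txt_to_satisfy_token_limit is the part of this plugin worth testing in isolation. The sketch below reimplements the same strategy (cut at empty lines first, fall back to cutting at any line break) with a trivial whitespace word count standing in for GPT2TokenizerFast, purely so it runs without downloading a model; breakdown_to_limit and the stand-in counter are illustrative names, not repository code.

```python
def breakdown_to_limit(txt: str, limit: int, count=lambda s: len(s.split())) -> list[str]:
    """Split txt into pieces whose token count (per `count`) stays at or below `limit`.
    First try to cut only at empty lines; fall back to cutting at any line break."""
    def cut(piece: str, must_break_at_empty_line: bool) -> list[str]:
        if count(piece) <= limit:
            return [piece]
        lines = piece.split('\n')
        estimated = int(limit / count(piece) * len(lines))
        for cnt in reversed(range(1, estimated)):  # cnt == 0 would leave an empty head piece
            if must_break_at_empty_line and lines[cnt] != "":
                continue
            prev, post = "\n".join(lines[:cnt]), "\n".join(lines[cnt:])
            if count(prev) < limit:
                return [prev] + cut(post, must_break_at_empty_line)
        raise RuntimeError("cannot split: a single segment exceeds the limit")
    try:
        return cut(txt, must_break_at_empty_line=True)
    except RuntimeError:
        return cut(txt, must_break_at_empty_line=False)

sample = "\n\n".join(f"paragraph {i} " + "word " * 30 for i in range(6))
pieces = breakdown_to_limit(sample, limit=80)
print([len(p.split()) for p in pieces])  # every chunk stays at or below 80 "tokens"
```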

@@ -0,0 +1,127 @@
from predict import predict_no_ui
from toolbox import CatchException, report_execption, write_results_to_file, predict_no_ui_but_counting_down
fast_debug = False
def 解析docx(file_manifest, project_folder, top_p, temperature, chatbot, history, systemPromptTxt):
import time, os
# pip install python-docx 用于docx格式跨平台
# pip install pywin32 用于doc格式仅支持Win平台
print('begin analysis on:', file_manifest)
for index, fp in enumerate(file_manifest):
if fp.split(".")[-1] == "docx":
from docx import Document
doc = Document(fp)
file_content = "\n".join([para.text for para in doc.paragraphs])
else:
import win32com.client
word = win32com.client.Dispatch("Word.Application")
word.visible = False
# 打开文件
print('fp', os.getcwd())
doc = word.Documents.Open(os.getcwd() + '/' + fp)
# file_content = doc.Content.Text
doc = word.ActiveDocument
file_content = doc.Range().Text
doc.Close()
word.Quit()
print(file_content)
prefix = "接下来请你逐文件分析下面的论文文件," if index == 0 else ""
# private_upload里面的文件名在解压zip后容易出现乱码rar和7z格式正常故可以只分析文章内容不输入文件名
i_say = prefix + f'请对下面的文章片段用中英文做概述,文件名是{os.path.relpath(fp, project_folder)},' \
f'文章内容是 ```{file_content}```'
i_say_show_user = prefix + f'[{index+1}/{len(file_manifest)}] 假设你是论文审稿专家,请对下面的文章片段做概述: {os.path.abspath(fp)}'
chatbot.append((i_say_show_user, "[Local Message] waiting gpt response."))
yield chatbot, history, '正常'
if not fast_debug:
msg = '正常'
# ** gpt request **
gpt_say = yield from predict_no_ui_but_counting_down(i_say, i_say_show_user, chatbot, top_p, temperature,
history=[]) # 带超时倒计时
chatbot[-1] = (i_say_show_user, gpt_say)
history.append(i_say_show_user);
history.append(gpt_say)
yield chatbot, history, msg
if not fast_debug: time.sleep(2)
"""
# 可按需启用
i_say = f'根据你上述的分析,对全文进行概括,用学术性语言写一段中文摘要,然后再写一篇英文的。'
chatbot.append((i_say, "[Local Message] waiting gpt response."))
yield chatbot, history, '正常'
i_say = f'我想让你做一个论文写作导师。您的任务是使用人工智能工具(例如自然语言处理)提供有关如何改进其上述文章的反馈。' \
f'您还应该利用您在有效写作技巧方面的修辞知识和经验来建议作者可以更好地以书面形式表达他们的想法和想法的方法。' \
f'根据你之前的分析,提出建议'
chatbot.append((i_say, "[Local Message] waiting gpt response."))
yield chatbot, history, '正常'
"""
if not fast_debug:
msg = '正常'
# ** gpt request **
gpt_say = yield from predict_no_ui_but_counting_down(i_say, i_say, chatbot, top_p, temperature,
history=history) # 带超时倒计时
chatbot[-1] = (i_say, gpt_say)
history.append(i_say)
history.append(gpt_say)
yield chatbot, history, msg
res = write_results_to_file(history)
chatbot.append(("完成了吗?", res))
yield chatbot, history, msg
@CatchException
def 总结word文档(txt, top_p, temperature, chatbot, history, systemPromptTxt, WEB_PORT):
import glob, os
# 基本信息:功能、贡献者
chatbot.append([
"函数插件功能?",
"批量总结Word文档。函数插件贡献者: JasonGuo1"])
yield chatbot, history, '正常'
# 尝试导入依赖,如果缺少依赖,则给出安装建议
try:
from docx import Document
except:
report_execption(chatbot, history,
a=f"解析项目: {txt}",
b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade python-docx pywin32```。")
yield chatbot, history, '正常'
return
# 清空历史,以免输入溢出
history = []
# 检测输入参数,如没有给定输入参数,直接退出
if os.path.exists(txt):
project_folder = txt
else:
if txt == "": txt = '空空如也的输入栏'
report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}")
yield chatbot, history, '正常'
return
# 搜索需要处理的文件清单
file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.docx', recursive=True)] + \
[f for f in glob.glob(f'{project_folder}/**/*.doc', recursive=True)]
# [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)] + \
# [f for f in glob.glob(f'{project_folder}/**/*.cpp', recursive=True)] + \
# [f for f in glob.glob(f'{project_folder}/**/*.c', recursive=True)]
# 如果没找到任何文件
if len(file_manifest) == 0:
report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何.docx或.doc文件: {txt}")
yield chatbot, history, '正常'
return
# 开始正式执行任务
yield from 解析docx(file_manifest, project_folder, top_p, temperature, chatbot, history, systemPromptTxt)
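
The .docx branch above only needs python-docx, which reads the file the same way on every platform. A minimal standalone read, assuming the package is installed and using a placeholder path, looks like this:

```python
from docx import Document  # pip install python-docx

def read_docx_text(path: str) -> str:
    """Concatenate the plain text of every paragraph in a .docx file."""
    return "\n".join(para.text for para in Document(path).paragraphs)

print(read_docx_text("example.docx")[:500])  # 'example.docx' is a placeholder path
```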

@@ -0,0 +1,154 @@
from predict import predict_no_ui
from toolbox import CatchException, report_execption, write_results_to_file, predict_no_ui_but_counting_down
import re
import unicodedata
fast_debug = False
def is_paragraph_break(match):
"""
根据给定的匹配结果来判断换行符是否表示段落分隔
如果换行符前为句子结束标志句号感叹号问号且下一个字符为大写字母则换行符更有可能表示段落分隔
也可以根据之前的内容长度来判断段落是否已经足够长
"""
prev_char, next_char = match.groups()
# 句子结束标志
sentence_endings = ".!?"
# 设定一个最小段落长度阈值
min_paragraph_length = 140
if prev_char in sentence_endings and next_char.isupper() and len(match.string[:match.start(1)]) > min_paragraph_length:
return "\n\n"
else:
return " "
def normalize_text(text):
"""
通过把连字ligatures等文本特殊符号转换为其基本形式来对文本进行归一化处理
例如将连字 "fi" 转换为 "f" "i"
"""
# 对文本进行归一化处理,分解连字
normalized_text = unicodedata.normalize("NFKD", text)
# 替换其他特殊字符
cleaned_text = re.sub(r'[^\x00-\x7F]+', '', normalized_text)
return cleaned_text
def clean_text(raw_text):
"""
对从 PDF 提取出的原始文本进行清洗和格式化处理
1. 对原始文本进行归一化处理
2. 替换跨行的连词例如 Espe-\ncially 转换为 Especially
3. 根据 heuristic 规则判断换行符是否是段落分隔并相应地进行替换
"""
# 对文本进行归一化处理
normalized_text = normalize_text(raw_text)
# 替换跨行的连词
text = re.sub(r'(\w+-\n\w+)', lambda m: m.group(1).replace('-\n', ''), normalized_text)
# 根据前后相邻字符的特点,找到原文本中的换行符
newlines = re.compile(r'(\S)\n(\S)')
# 根据 heuristic 规则,用空格或段落分隔符替换原换行符
final_text = re.sub(newlines, lambda m: m.group(1) + is_paragraph_break(m) + m.group(2), text)
return final_text.strip()
def 解析PDF(file_manifest, project_folder, top_p, temperature, chatbot, history, systemPromptTxt):
import time, glob, os, fitz
print('begin analysis on:', file_manifest)
for index, fp in enumerate(file_manifest):
with fitz.open(fp) as doc:
file_content = ""
for page in doc:
file_content += page.get_text()
file_content = clean_text(file_content)
print(file_content)
prefix = "接下来请你逐文件分析下面的论文文件,概括其内容" if index==0 else ""
i_say = prefix + f'请对下面的文章片段用中文做一个概述,文件名是{os.path.relpath(fp, project_folder)},文章内容是 ```{file_content}```'
i_say_show_user = prefix + f'[{index}/{len(file_manifest)}] 请对下面的文章片段做一个概述: {os.path.abspath(fp)}'
chatbot.append((i_say_show_user, "[Local Message] waiting gpt response."))
print('[1] yield chatbot, history')
yield chatbot, history, '正常'
if not fast_debug:
msg = '正常'
# ** gpt request **
gpt_say = yield from predict_no_ui_but_counting_down(i_say, i_say_show_user, chatbot, top_p, temperature, history=[]) # 带超时倒计时
print('[2] end gpt req')
chatbot[-1] = (i_say_show_user, gpt_say)
history.append(i_say_show_user); history.append(gpt_say)
print('[3] yield chatbot, history')
yield chatbot, history, msg
print('[4] next')
if not fast_debug: time.sleep(2)
all_file = ', '.join([os.path.relpath(fp, project_folder) for index, fp in enumerate(file_manifest)])
i_say = f'根据以上你自己的分析,对全文进行概括,用学术性语言写一段中文摘要,然后再写一段英文摘要(包括{all_file})。'
chatbot.append((i_say, "[Local Message] waiting gpt response."))
yield chatbot, history, '正常'
if not fast_debug:
msg = '正常'
# ** gpt request **
gpt_say = yield from predict_no_ui_but_counting_down(i_say, i_say, chatbot, top_p, temperature, history=history) # 带超时倒计时
chatbot[-1] = (i_say, gpt_say)
history.append(i_say); history.append(gpt_say)
yield chatbot, history, msg
res = write_results_to_file(history)
chatbot.append(("完成了吗?", res))
yield chatbot, history, msg
@CatchException
def 批量总结PDF文档(txt, top_p, temperature, chatbot, history, systemPromptTxt, WEB_PORT):
import glob, os
# 基本信息:功能、贡献者
chatbot.append([
"函数插件功能?",
"批量总结PDF文档。函数插件贡献者: ValeriaWongEralien"])
yield chatbot, history, '正常'
# 尝试导入依赖,如果缺少依赖,则给出安装建议
try:
import fitz
except:
report_execption(chatbot, history,
a = f"解析项目: {txt}",
b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pymupdf```。")
yield chatbot, history, '正常'
return
# 清空历史,以免输入溢出
history = []
# 检测输入参数,如没有给定输入参数,直接退出
if os.path.exists(txt):
project_folder = txt
else:
if txt == "": txt = '空空如也的输入栏'
report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
yield chatbot, history, '正常'
return
# 搜索需要处理的文件清单
file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.pdf', recursive=True)] # + \
# [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)] + \
# [f for f in glob.glob(f'{project_folder}/**/*.cpp', recursive=True)] + \
# [f for f in glob.glob(f'{project_folder}/**/*.c', recursive=True)]
# 如果没找到任何文件
if len(file_manifest) == 0:
report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.pdf文件: {txt}")
yield chatbot, history, '正常'
return
# 开始正式执行任务
yield from 解析PDF(file_manifest, project_folder, top_p, temperature, chatbot, history, systemPromptTxt)
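
To see what the cleaning heuristics above do to raw PDF text, the snippet below applies the same two regex steps (re-joining words hyphenated across line breaks, then deciding whether each remaining line break is a paragraph break or a soft wrap) to a small sample. It mirrors the logic of normalize_text, is_paragraph_break and clean_text rather than importing the plugin module, and it lowers the length threshold from 140 to 40 only so the short sample triggers it.

```python
import re, unicodedata

def demo_clean(raw_text: str, min_paragraph_length: int = 40) -> str:
    # Normalize ligatures and drop non-ASCII leftovers, as normalize_text does.
    text = re.sub(r'[^\x00-\x7F]+', '', unicodedata.normalize("NFKD", raw_text))
    # Re-join words hyphenated across line breaks: "Espe-\ncially" -> "Especially".
    text = re.sub(r'(\w+-\n\w+)', lambda m: m.group(1).replace('-\n', ''), text)
    # A newline counts as a paragraph break only after . ! ? followed by an uppercase
    # letter, once enough text has accumulated; otherwise it is just a soft wrap.
    def join(m):
        prev, nxt = m.groups()
        is_break = prev in ".!?" and nxt.isupper() and m.start(1) > min_paragraph_length
        return prev + ("\n\n" if is_break else " ") + nxt
    return re.sub(r'(\S)\n(\S)', join, text).strip()

raw = ("Transformers are espe-\ncially effective on long docu-\nments.\n"
       "We study this claim in detail.\nResults follow.")
print(demo_clean(raw))
```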

@@ -0,0 +1,151 @@
from predict import predict_no_ui
from toolbox import CatchException, report_execption, write_results_to_file, predict_no_ui_but_counting_down
fast_debug = False
def readPdf(pdfPath):
"""
读取pdf文件返回文本内容
"""
import pdfminer
from pdfminer.pdfparser import PDFParser
from pdfminer.pdfdocument import PDFDocument
from pdfminer.pdfpage import PDFPage, PDFTextExtractionNotAllowed
from pdfminer.pdfinterp import PDFResourceManager, PDFPageInterpreter
from pdfminer.pdfdevice import PDFDevice
from pdfminer.layout import LAParams
from pdfminer.converter import PDFPageAggregator
fp = open(pdfPath, 'rb')
# Create a PDF parser object associated with the file object
parser = PDFParser(fp)
# Create a PDF document object that stores the document structure.
# Password for initialization as 2nd parameter
document = PDFDocument(parser)
# Check if the document allows text extraction. If not, abort.
if not document.is_extractable:
raise PDFTextExtractionNotAllowed
# Create a PDF resource manager object that stores shared resources.
rsrcmgr = PDFResourceManager()
# Create a PDF device object.
# device = PDFDevice(rsrcmgr)
# BEGIN LAYOUT ANALYSIS.
# Set parameters for analysis.
laparams = LAParams(
char_margin=10.0,
line_margin=0.2,
boxes_flow=0.2,
all_texts=False,
)
# Create a PDF page aggregator object.
device = PDFPageAggregator(rsrcmgr, laparams=laparams)
# Create a PDF interpreter object.
interpreter = PDFPageInterpreter(rsrcmgr, device)
# loop over all pages in the document
outTextList = []
for page in PDFPage.create_pages(document):
# read the page into a layout object
interpreter.process_page(page)
layout = device.get_result()
for obj in layout._objs:
if isinstance(obj, pdfminer.layout.LTTextBoxHorizontal):
# print(obj.get_text())
outTextList.append(obj.get_text())
return outTextList
def 解析Paper(file_manifest, project_folder, top_p, temperature, chatbot, history, systemPromptTxt):
import time, glob, os
from bs4 import BeautifulSoup
print('begin analysis on:', file_manifest)
for index, fp in enumerate(file_manifest):
if ".tex" in fp:
with open(fp, 'r', encoding='utf-8') as f:
file_content = f.read()
if ".pdf" in fp.lower():
file_content = readPdf(fp)
file_content = BeautifulSoup(''.join(file_content), features="lxml").body.text.encode('gbk', 'ignore').decode('gbk')
prefix = "接下来请你逐文件分析下面的论文文件,概括其内容" if index==0 else ""
i_say = prefix + f'请对下面的文章片段用中文做一个概述,文件名是{os.path.relpath(fp, project_folder)},文章内容是 ```{file_content}```'
i_say_show_user = prefix + f'[{index}/{len(file_manifest)}] 请对下面的文章片段做一个概述: {os.path.abspath(fp)}'
chatbot.append((i_say_show_user, "[Local Message] waiting gpt response."))
print('[1] yield chatbot, history')
yield chatbot, history, '正常'
if not fast_debug:
msg = '正常'
# ** gpt request **
gpt_say = yield from predict_no_ui_but_counting_down(i_say, i_say_show_user, chatbot, top_p, temperature, history=[]) # 带超时倒计时
print('[2] end gpt req')
chatbot[-1] = (i_say_show_user, gpt_say)
history.append(i_say_show_user); history.append(gpt_say)
print('[3] yield chatbot, history')
yield chatbot, history, msg
print('[4] next')
if not fast_debug: time.sleep(2)
all_file = ', '.join([os.path.relpath(fp, project_folder) for index, fp in enumerate(file_manifest)])
i_say = f'根据以上你自己的分析,对全文进行概括,用学术性语言写一段中文摘要,然后再写一段英文摘要(包括{all_file})。'
chatbot.append((i_say, "[Local Message] waiting gpt response."))
yield chatbot, history, '正常'
if not fast_debug:
msg = '正常'
# ** gpt request **
gpt_say = yield from predict_no_ui_but_counting_down(i_say, i_say, chatbot, top_p, temperature, history=history) # 带超时倒计时
chatbot[-1] = (i_say, gpt_say)
history.append(i_say); history.append(gpt_say)
yield chatbot, history, msg
res = write_results_to_file(history)
chatbot.append(("完成了吗?", res))
yield chatbot, history, msg
@CatchException
def 批量总结PDF文档pdfminer(txt, top_p, temperature, chatbot, history, systemPromptTxt, WEB_PORT):
history = [] # 清空历史,以免输入溢出
import glob, os
# 基本信息:功能、贡献者
chatbot.append([
"函数插件功能?",
"批量总结PDF文档此版本使用pdfminer插件带token约简功能。函数插件贡献者: Euclid-Jie。"])
yield chatbot, history, '正常'
# 尝试导入依赖,如果缺少依赖,则给出安装建议
try:
import pdfminer, bs4
except:
report_execption(chatbot, history,
a = f"解析项目: {txt}",
b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pdfminer beautifulsoup4```。")
yield chatbot, history, '正常'
return
if os.path.exists(txt):
project_folder = txt
else:
if txt == "": txt = '空空如也的输入栏'
report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
yield chatbot, history, '正常'
return
file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)] + \
[f for f in glob.glob(f'{project_folder}/**/*.pdf', recursive=True)] # + \
# [f for f in glob.glob(f'{project_folder}/**/*.cpp', recursive=True)] + \
# [f for f in glob.glob(f'{project_folder}/**/*.c', recursive=True)]
if len(file_manifest) == 0:
report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex或.pdf文件: {txt}")
yield chatbot, history, '正常'
return
yield from 解析Paper(file_manifest, project_folder, top_p, temperature, chatbot, history, systemPromptTxt)
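
For comparison with readPdf above, the pdfminer.six distribution also ships a high-level helper that extracts all text in one call, at the cost of the layout control that LAParams gives. This assumes pdfminer.six is the installed flavour; the path is a placeholder.

```python
from pdfminer.high_level import extract_text  # pip install pdfminer.six

text = extract_text("paper.pdf")  # placeholder path
print(text[:300])
```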

@@ -0,0 +1,57 @@
from predict import predict_no_ui
from toolbox import CatchException, report_execption, write_results_to_file, predict_no_ui_but_counting_down
fast_debug = False
def 生成函数注释(file_manifest, project_folder, top_p, temperature, chatbot, history, systemPromptTxt):
import time, glob, os
print('begin analysis on:', file_manifest)
for index, fp in enumerate(file_manifest):
with open(fp, 'r', encoding='utf-8') as f:
file_content = f.read()
i_say = f'请对下面的程序文件做一个概述并对文件中的所有函数生成注释使用markdown表格输出结果文件名是{os.path.relpath(fp, project_folder)},文件内容是 ```{file_content}```'
i_say_show_user = f'[{index}/{len(file_manifest)}] 请对下面的程序文件做一个概述,并对文件中的所有函数生成注释: {os.path.abspath(fp)}'
chatbot.append((i_say_show_user, "[Local Message] waiting gpt response."))
print('[1] yield chatbot, history')
yield chatbot, history, '正常'
if not fast_debug:
msg = '正常'
# ** gpt request **
gpt_say = yield from predict_no_ui_but_counting_down(i_say, i_say_show_user, chatbot, top_p, temperature, history=[]) # 带超时倒计时
print('[2] end gpt req')
chatbot[-1] = (i_say_show_user, gpt_say)
history.append(i_say_show_user); history.append(gpt_say)
print('[3] yield chatbot, history')
yield chatbot, history, msg
print('[4] next')
if not fast_debug: time.sleep(2)
if not fast_debug:
res = write_results_to_file(history)
chatbot.append(("完成了吗?", res))
yield chatbot, history, msg
@CatchException
def 批量生成函数注释(txt, top_p, temperature, chatbot, history, systemPromptTxt, WEB_PORT):
history = [] # 清空历史,以免输入溢出
import glob, os
if os.path.exists(txt):
project_folder = txt
else:
if txt == "": txt = '空空如也的输入栏'
report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
yield chatbot, history, '正常'
return
file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.py', recursive=True)] + \
[f for f in glob.glob(f'{project_folder}/**/*.cpp', recursive=True)]
if len(file_manifest) == 0:
report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.py或.cpp文件: {txt}")
yield chatbot, history, '正常'
return
yield from 生成函数注释(file_manifest, project_folder, top_p, temperature, chatbot, history, systemPromptTxt)

@@ -0,0 +1,213 @@
from predict import predict_no_ui
from toolbox import CatchException, report_execption, write_results_to_file, predict_no_ui_but_counting_down
fast_debug = False
def 解析源代码(file_manifest, project_folder, top_p, temperature, chatbot, history, systemPromptTxt):
import time, glob, os
print('begin analysis on:', file_manifest)
for index, fp in enumerate(file_manifest):
with open(fp, 'r', encoding='utf-8') as f:
file_content = f.read()
prefix = "接下来请你逐文件分析下面的工程" if index==0 else ""
i_say = prefix + f'请对下面的程序文件做一个概述文件名是{os.path.relpath(fp, project_folder)},文件代码是 ```{file_content}```'
i_say_show_user = prefix + f'[{index}/{len(file_manifest)}] 请对下面的程序文件做一个概述: {os.path.abspath(fp)}'
chatbot.append((i_say_show_user, "[Local Message] waiting gpt response."))
yield chatbot, history, '正常'
if not fast_debug:
msg = '正常'
# ** gpt request **
gpt_say = yield from predict_no_ui_but_counting_down(i_say, i_say_show_user, chatbot, top_p, temperature, history=[]) # 带超时倒计时
chatbot[-1] = (i_say_show_user, gpt_say)
history.append(i_say_show_user); history.append(gpt_say)
yield chatbot, history, msg
if not fast_debug: time.sleep(2)
all_file = ', '.join([os.path.relpath(fp, project_folder) for index, fp in enumerate(file_manifest)])
i_say = f'根据以上你自己的分析对程序的整体功能和构架做出概括。然后用一张markdown表格整理每个文件的功能包括{all_file})。'
chatbot.append((i_say, "[Local Message] waiting gpt response."))
yield chatbot, history, '正常'
if not fast_debug:
msg = '正常'
# ** gpt request **
gpt_say = yield from predict_no_ui_but_counting_down(i_say, i_say, chatbot, top_p, temperature, history=history) # 带超时倒计时
chatbot[-1] = (i_say, gpt_say)
history.append(i_say); history.append(gpt_say)
yield chatbot, history, msg
res = write_results_to_file(history)
chatbot.append(("完成了吗?", res))
yield chatbot, history, msg
@CatchException
def 解析项目本身(txt, top_p, temperature, chatbot, history, systemPromptTxt, WEB_PORT):
history = [] # 清空历史,以免输入溢出
import time, glob, os
file_manifest = [f for f in glob.glob('./*.py') if ('test_project' not in f) and ('gpt_log' not in f)] + \
[f for f in glob.glob('./crazy_functions/*.py') if ('test_project' not in f) and ('gpt_log' not in f)]
for index, fp in enumerate(file_manifest):
# if 'test_project' in fp: continue
with open(fp, 'r', encoding='utf-8') as f:
file_content = f.read()
prefix = "接下来请你分析自己的程序构成,别紧张," if index==0 else ""
i_say = prefix + f'请对下面的程序文件做一个概述文件名是{fp},文件代码是 ```{file_content}```'
i_say_show_user = prefix + f'[{index}/{len(file_manifest)}] 请对下面的程序文件做一个概述: {os.path.abspath(fp)}'
chatbot.append((i_say_show_user, "[Local Message] waiting gpt response."))
yield chatbot, history, '正常'
if not fast_debug:
# ** gpt request **
# gpt_say = predict_no_ui(inputs=i_say, top_p=top_p, temperature=temperature)
gpt_say = yield from predict_no_ui_but_counting_down(i_say, i_say_show_user, chatbot, top_p, temperature, history=[], long_connection=True) # 带超时倒计时
chatbot[-1] = (i_say_show_user, gpt_say)
history.append(i_say_show_user); history.append(gpt_say)
yield chatbot, history, '正常'
time.sleep(2)
i_say = f'根据以上你自己的分析对程序的整体功能和构架做出概括。然后用一张markdown表格整理每个文件的功能包括{file_manifest})。'
chatbot.append((i_say, "[Local Message] waiting gpt response."))
yield chatbot, history, '正常'
if not fast_debug:
# ** gpt request **
# gpt_say = predict_no_ui(inputs=i_say, top_p=top_p, temperature=temperature, history=history)
gpt_say = yield from predict_no_ui_but_counting_down(i_say, i_say, chatbot, top_p, temperature, history=history, long_connection=True) # 带超时倒计时
chatbot[-1] = (i_say, gpt_say)
history.append(i_say); history.append(gpt_say)
yield chatbot, history, '正常'
res = write_results_to_file(history)
chatbot.append(("完成了吗?", res))
yield chatbot, history, '正常'
@CatchException
def 解析一个Python项目(txt, top_p, temperature, chatbot, history, systemPromptTxt, WEB_PORT):
history = [] # 清空历史,以免输入溢出
import glob, os
if os.path.exists(txt):
project_folder = txt
else:
if txt == "": txt = '空空如也的输入栏'
report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
yield chatbot, history, '正常'
return
file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.py', recursive=True)]
if len(file_manifest) == 0:
report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何python文件: {txt}")
yield chatbot, history, '正常'
return
yield from 解析源代码(file_manifest, project_folder, top_p, temperature, chatbot, history, systemPromptTxt)
@CatchException
def 解析一个C项目的头文件(txt, top_p, temperature, chatbot, history, systemPromptTxt, WEB_PORT):
history = [] # 清空历史,以免输入溢出
import glob, os
if os.path.exists(txt):
project_folder = txt
else:
if txt == "": txt = '空空如也的输入栏'
report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
yield chatbot, history, '正常'
return
file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.h', recursive=True)] # + \
# [f for f in glob.glob(f'{project_folder}/**/*.cpp', recursive=True)] + \
# [f for f in glob.glob(f'{project_folder}/**/*.c', recursive=True)]
if len(file_manifest) == 0:
report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.h头文件: {txt}")
yield chatbot, history, '正常'
return
yield from 解析源代码(file_manifest, project_folder, top_p, temperature, chatbot, history, systemPromptTxt)
@CatchException
def 解析一个C项目(txt, top_p, temperature, chatbot, history, systemPromptTxt, WEB_PORT):
history = [] # 清空历史,以免输入溢出
import glob, os
if os.path.exists(txt):
project_folder = txt
else:
if txt == "": txt = '空空如也的输入栏'
report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
yield chatbot, history, '正常'
return
file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.h', recursive=True)] + \
[f for f in glob.glob(f'{project_folder}/**/*.cpp', recursive=True)] + \
[f for f in glob.glob(f'{project_folder}/**/*.c', recursive=True)]
if len(file_manifest) == 0:
report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.h/.cpp/.c文件: {txt}")
yield chatbot, history, '正常'
return
yield from 解析源代码(file_manifest, project_folder, top_p, temperature, chatbot, history, systemPromptTxt)
@CatchException
def 解析一个Java项目(txt, top_p, temperature, chatbot, history, systemPromptTxt, WEB_PORT):
history = [] # 清空历史,以免输入溢出
import glob, os
if os.path.exists(txt):
project_folder = txt
else:
if txt == "": txt = '空空如也的输入栏'
report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}")
yield chatbot, history, '正常'
return
file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.java', recursive=True)] + \
[f for f in glob.glob(f'{project_folder}/**/*.jar', recursive=True)] + \
[f for f in glob.glob(f'{project_folder}/**/*.xml', recursive=True)] + \
[f for f in glob.glob(f'{project_folder}/**/*.sh', recursive=True)]
if len(file_manifest) == 0:
report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何java文件: {txt}")
yield chatbot, history, '正常'
return
yield from 解析源代码(file_manifest, project_folder, top_p, temperature, chatbot, history, systemPromptTxt)
@CatchException
def 解析一个Rect项目(txt, top_p, temperature, chatbot, history, systemPromptTxt, WEB_PORT):
history = [] # 清空历史,以免输入溢出
import glob, os
if os.path.exists(txt):
project_folder = txt
else:
if txt == "": txt = '空空如也的输入栏'
report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}")
yield chatbot, history, '正常'
return
file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.ts', recursive=True)] + \
[f for f in glob.glob(f'{project_folder}/**/*.tsx', recursive=True)] + \
[f for f in glob.glob(f'{project_folder}/**/*.json', recursive=True)] + \
[f for f in glob.glob(f'{project_folder}/**/*.js', recursive=True)] + \
[f for f in glob.glob(f'{project_folder}/**/*.jsx', recursive=True)]
if len(file_manifest) == 0:
report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何React项目文件: {txt}")
yield chatbot, history, '正常'
return
yield from 解析源代码(file_manifest, project_folder, top_p, temperature, chatbot, history, systemPromptTxt)
@CatchException
def 解析一个Golang项目(txt, top_p, temperature, chatbot, history, systemPromptTxt, WEB_PORT):
history = [] # 清空历史,以免输入溢出
import glob, os
if os.path.exists(txt):
project_folder = txt
else:
if txt == "": txt = '空空如也的输入栏'
report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}")
yield chatbot, history, '正常'
return
file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.go', recursive=True)]
if len(file_manifest) == 0:
report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何golang文件: {txt}")
yield chatbot, history, '正常'
return
yield from 解析源代码(file_manifest, project_folder, top_p, temperature, chatbot, history, systemPromptTxt)
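
Every parser in this file is the same template with a different glob pattern, so supporting another language is mostly a copy of that pattern. A hypothetical Rust variant, not part of this changeset, could be added to the same module as sketched below; it reuses the module's existing CatchException, report_execption and 解析源代码 helpers.

```python
@CatchException
def 解析一个Rust项目(txt, top_p, temperature, chatbot, history, systemPromptTxt, WEB_PORT):
    history = []  # 清空历史,以免输入溢出
    import glob, os
    if os.path.exists(txt):
        project_folder = txt
    else:
        if txt == "": txt = '空空如也的输入栏'
        report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}")
        yield chatbot, history, '正常'
        return
    # Collect every Rust source file plus the Cargo manifests in the project tree.
    file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.rs', recursive=True)] + \
                    [f for f in glob.glob(f'{project_folder}/**/Cargo.toml', recursive=True)]
    if len(file_manifest) == 0:
        report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何.rs文件: {txt}")
        yield chatbot, history, '正常'
        return
    yield from 解析源代码(file_manifest, project_folder, top_p, temperature, chatbot, history, systemPromptTxt)
```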

@@ -0,0 +1,70 @@
from predict import predict_no_ui
from toolbox import CatchException, report_execption, write_results_to_file, predict_no_ui_but_counting_down
fast_debug = False
def 解析Paper(file_manifest, project_folder, top_p, temperature, chatbot, history, systemPromptTxt):
import time, glob, os
print('begin analysis on:', file_manifest)
for index, fp in enumerate(file_manifest):
with open(fp, 'r', encoding='utf-8') as f:
file_content = f.read()
prefix = "接下来请你逐文件分析下面的论文文件,概括其内容" if index==0 else ""
i_say = prefix + f'请对下面的文章片段用中文做一个概述,文件名是{os.path.relpath(fp, project_folder)},文章内容是 ```{file_content}```'
i_say_show_user = prefix + f'[{index}/{len(file_manifest)}] 请对下面的文章片段做一个概述: {os.path.abspath(fp)}'
chatbot.append((i_say_show_user, "[Local Message] waiting gpt response."))
print('[1] yield chatbot, history')
yield chatbot, history, '正常'
if not fast_debug:
msg = '正常'
# ** gpt request **
gpt_say = yield from predict_no_ui_but_counting_down(i_say, i_say_show_user, chatbot, top_p, temperature, history=[]) # 带超时倒计时
print('[2] end gpt req')
chatbot[-1] = (i_say_show_user, gpt_say)
history.append(i_say_show_user); history.append(gpt_say)
print('[3] yield chatbot, history')
yield chatbot, history, msg
print('[4] next')
if not fast_debug: time.sleep(2)
all_file = ', '.join([os.path.relpath(fp, project_folder) for index, fp in enumerate(file_manifest)])
i_say = f'根据以上你自己的分析,对全文进行概括,用学术性语言写一段中文摘要,然后再写一段英文摘要(包括{all_file})。'
chatbot.append((i_say, "[Local Message] waiting gpt response."))
yield chatbot, history, '正常'
if not fast_debug:
msg = '正常'
# ** gpt request **
gpt_say = yield from predict_no_ui_but_counting_down(i_say, i_say, chatbot, top_p, temperature, history=history) # 带超时倒计时
chatbot[-1] = (i_say, gpt_say)
history.append(i_say); history.append(gpt_say)
yield chatbot, history, msg
res = write_results_to_file(history)
chatbot.append(("完成了吗?", res))
yield chatbot, history, msg
@CatchException
def 读文章写摘要(txt, top_p, temperature, chatbot, history, systemPromptTxt, WEB_PORT):
history = [] # 清空历史,以免输入溢出
import glob, os
if os.path.exists(txt):
project_folder = txt
else:
if txt == "": txt = '空空如也的输入栏'
report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}")
yield chatbot, history, '正常'
return
file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)] # + \
# [f for f in glob.glob(f'{project_folder}/**/*.cpp', recursive=True)] + \
# [f for f in glob.glob(f'{project_folder}/**/*.c', recursive=True)]
if len(file_manifest) == 0:
report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex文件: {txt}")
yield chatbot, history, '正常'
return
yield from 解析Paper(file_manifest, project_folder, top_p, temperature, chatbot, history, systemPromptTxt)

@@ -0,0 +1,25 @@
from predict import predict_no_ui_long_connection
from toolbox import CatchException, report_execption, write_results_to_file
import datetime
@CatchException
def 高阶功能模板函数(txt, top_p, temperature, chatbot, history, systemPromptTxt, WEB_PORT):
history = [] # 清空历史,以免输入溢出
chatbot.append(("这是什么功能?", "[Local Message] 请注意,您正在调用一个[函数插件]的模板该函数面向希望实现更多有趣功能的开发者它可以作为创建新功能函数的模板。为了做到简单易读该函数只有25行代码所以不会实时反馈文字流或心跳请耐心等待程序输出完成。此外我们也提供可同步处理大量文件的多线程Demo供您参考。您若希望分享新的功能模组请不吝PR"))
yield chatbot, history, '正常' # 由于请求gpt需要一段时间我们先及时地做一次状态显示
for i in range(5):
currentMonth = (datetime.date.today() + datetime.timedelta(days=i)).month
currentDay = (datetime.date.today() + datetime.timedelta(days=i)).day
i_say = f'历史中哪些事件发生在{currentMonth}{currentDay}列举两条并发送相关图片。发送图片时请使用Markdown将Unsplash API中的PUT_YOUR_QUERY_HERE替换成描述该事件的一个最重要的单词。'
chatbot.append((i_say, "[Local Message] waiting gpt response."))
yield chatbot, history, '正常' # 由于请求gpt需要一段时间我们先及时地做一次状态显示
# history = [] 每次询问不携带之前的询问历史
gpt_say = predict_no_ui_long_connection(
inputs=i_say, top_p=top_p, temperature=temperature, history=[],
sys_prompt="当你想发送一张照片时请使用Markdown, 并且不要有反斜线, 不要用代码块。使用 Unsplash API (https://source.unsplash.com/1280x720/? < PUT_YOUR_QUERY_HERE >)。") # 请求gpt需要一段时间
chatbot[-1] = (i_say, gpt_say)
history.append(i_say);history.append(gpt_say)
yield chatbot, history, '正常' # 显示

@@ -1,67 +1,70 @@
# """
# 'primary' for main call-to-action,
# 'secondary' for a more subdued style,
# 'stop' for a stop button.
# """
# 'primary' 颜色对应 theme.py 中的 primary_hue
# 'secondary' 颜色对应 theme.py 中的 neutral_hue
# 'stop' 颜色对应 theme.py 中的 color_er
# 默认按钮颜色是 secondary
from toolbox import clear_line_break
def get_functionals():
return {
"英语学术润色": {
"Prefix": "Below is a paragraph from an academic paper. Polish the writing to meet the academic style, \
improve the spelling, grammar, clarity, concision and overall readability. When neccessary, rewrite the whole sentence. \
Furthermore, list all modification and explain the reasons to do so in markdown table.\n\n",
"Button": None,
"Suffix": "",
"Color": "stop",
# 前言
"Prefix": r"Below is a paragraph from an academic paper. Polish the writing to meet the academic style, " +
r"improve the spelling, grammar, clarity, concision and overall readability. When necessary, rewrite the whole sentence. " +
r"Furthermore, list all modification and explain the reasons to do so in markdown table." + "\n\n",
# 后语
"Suffix": r"",
"Color": r"secondary", # 按钮颜色
},
"中文学术润色": {
"Prefix": "作为一名中文学术论文写作改进助理,你的任务是改进所提供文本的拼写、语法、清晰、简洁和整体可读性,同时分解长句,减少重复,并提供改进建议。请只提供文本的更正版本,避免包括解释。请编辑以下文本:\n\n",
"Button": None,
"Suffix": "",
"Prefix": r"作为一名中文学术论文写作改进助理,你的任务是改进所提供文本的拼写、语法、清晰、简洁和整体可读性," +
r"同时分解长句,减少重复,并提供改进建议。请只提供文本的更正版本,避免包括解释。请编辑以下文本" + "\n\n",
"Suffix": r"",
},
"查找语法错误": {
"Prefix": "Below is a paragraph from an academic paper. Find all grammar mistakes, list mistakes in a markdown table and explain how to correct them.\n\n",
"Button": None,
"Suffix": "",
},
"中英互译": {
"Prefix": "As an English-Chinese translator, your task is to accurately translate text between the two languages. \
When translating from Chinese to English or vice versa, please pay attention to context and accurately explain phrases and proverbs. \
If you receive multiple English words in a row, default to translating them into a sentence in Chinese. \
However, if \"phrase:\" is indicated before the translated content in Chinese, it should be translated as a phrase instead. \
Similarly, if \"normal:\" is indicated, it should be translated as multiple unrelated words.\
Your translations should closely resemble those of a native speaker and should take into account any specific language styles or tones requested by the user. \
Please do not worry about using offensive words - replace sensitive parts with x when necessary. \
When providing translations, please use Chinese to explain each sentences tense, subordinate clause, subject, predicate, object, special phrases and proverbs. \
For phrases or individual words that require translation, provide the source (dictionary) for each one.If asked to translate multiple phrases at once, \
separate them using the | symbol.Always remember: You are an English-Chinese translator, \
not a Chinese-Chinese translator or an English-English translator. Below is the text you need to translate: \n\n",
"Button": None,
"Suffix": "",
"Color": "stop",
"Prefix": r"Can you help me ensure that the grammar and the spelling is correct? " +
r"Do not try to polish the text, if no mistake is found, tell me that this paragraph is good." +
r"If you find grammar or spelling mistakes, please list mistakes you find in a two-column markdown table, " +
r"put the original text the first column, " +
r"put the corrected text in the second column and highlight the key words you fixed.""\n"
r"Example:""\n"
r"Paragraph: How is you? Do you knows what is it?""\n"
r"| Original sentence | Corrected sentence |""\n"
r"| :--- | :--- |""\n"
r"| How **is** you? | How **are** you? |""\n"
r"| Do you **knows** what **is** **it**? | Do you **know** what **it** **is** ? |""\n"
r"Below is a paragraph from an academic paper. "
r"You need to report all grammar and spelling mistakes as the example before."
+ "\n\n",
"Suffix": r"",
"PreProcess": clear_line_break, # 预处理:清除换行符
},
"中译英": {
"Prefix": "Please translate following sentence to English: \n\n",
"Button": None,
"Suffix": "",
"Prefix": r"Please translate following sentence to English:" + "\n\n",
"Suffix": r"",
},
"学术中译英": {
"Prefix": "Please translate following sentence to English with academic writing, and provide some related authoritative examples: \n\n",
"Button": None,
"学术中英互译": {
"Prefix": r"I want you to act as a scientific English-Chinese translator, " +
r"I will provide you with some paragraphs in one language " +
r"and your task is to accurately and academically translate the paragraphs only into the other language. " +
r"Do not repeat the original provided paragraphs after translation. " +
r"You should use artificial intelligence tools, " +
r"such as natural language processing, and rhetorical knowledge " +
r"and experience about effective writing techniques to reply. " +
r"I'll give you my paragraphs as follows, tell me what language it is written in, and then translate:" + "\n\n",
"Suffix": "",
"Color": "secondary",
},
"英译中": {
"Prefix": "请翻译成中文:\n\n",
"Button": None,
"Suffix": "",
"Prefix": r"请翻译成中文:" + "\n\n",
"Suffix": r"",
},
"找图片": {
"Prefix": r"我需要你找一张网络图片。使用Unsplash API(https://source.unsplash.com/960x640/?<英语关键词>)获取图片URL" +
r"然后请使用Markdown格式封装并且不要有反斜线不要用代码块。现在请按以下描述给我发送图片" + "\n\n",
"Suffix": r"",
},
"解释代码": {
"Prefix": "请解释以下代码:\n```\n",
"Button": None,
"Suffix": "\n```\n",
"Color": "stop",
"Prefix": r"请解释以下代码:" + "\n```\n",
"Suffix": "\n```\n",
},
}
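
Each entry above is consumed by predict(): when a basic-function button is pressed, the user input is optionally run through PreProcess and then wrapped as Prefix + input + Suffix (see the predict implementation later in this diff). A tiny standalone illustration, where apply_functional is an illustrative name rather than a repository function:

```python
def apply_functional(entry: dict, user_input: str) -> str:
    """Compose the prompt the way predict() does when a basic-function button is clicked."""
    if "PreProcess" in entry:  # optional preprocessing hook, e.g. clear_line_break
        user_input = entry["PreProcess"](user_input)
    return entry.get("Prefix", "") + user_input + entry.get("Suffix", "")

entry = {"Prefix": "请翻译成中文:\n\n", "Suffix": ""}
print(apply_functional(entry, "The quick brown fox jumps over the lazy dog."))
```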

functional_crazy.py (new file, 108 lines)

@@ -0,0 +1,108 @@
from toolbox import HotReload # HotReload 的意思是热更新,修改函数插件后,不需要重启程序,代码直接生效
def get_crazy_functionals():
###################### 第一组插件 ###########################
# [第一组插件]: 最早期编写的项目插件和一些demo
from crazy_functions.读文章写摘要 import 读文章写摘要
from crazy_functions.生成函数注释 import 批量生成函数注释
from crazy_functions.解析项目源代码 import 解析项目本身
from crazy_functions.解析项目源代码 import 解析一个Python项目
from crazy_functions.解析项目源代码 import 解析一个C项目的头文件
from crazy_functions.解析项目源代码 import 解析一个C项目
from crazy_functions.解析项目源代码 import 解析一个Golang项目
from crazy_functions.解析项目源代码 import 解析一个Java项目
from crazy_functions.解析项目源代码 import 解析一个Rect项目
from crazy_functions.高级功能函数模板 import 高阶功能模板函数
from crazy_functions.代码重写为全英文_多线程 import 全项目切换英文
function_plugins = {
"请解析并解构此项目本身(源码自译解)": {
"AsButton": False, # 加入下拉菜单中
"Function": 解析项目本身
},
"解析整个Py项目": {
"Color": "stop", # 按钮颜色
"Function": 解析一个Python项目
},
"解析整个C++项目头文件": {
"Color": "stop", # 按钮颜色
"Function": 解析一个C项目的头文件
},
"解析整个C++项目(.cpp/.h": {
"Color": "stop", # 按钮颜色
"AsButton": False, # 加入下拉菜单中
"Function": 解析一个C项目
},
"解析整个Go项目": {
"Color": "stop", # 按钮颜色
"AsButton": False, # 加入下拉菜单中
"Function": 解析一个Golang项目
},
"解析整个Java项目": {
"Color": "stop", # 按钮颜色
"AsButton": False, # 加入下拉菜单中
"Function": 解析一个Java项目
},
"解析整个React项目": {
"Color": "stop", # 按钮颜色
"AsButton": False, # 加入下拉菜单中
"Function": 解析一个Rect项目
},
"读Tex论文写摘要": {
"Color": "stop", # 按钮颜色
"Function": 读文章写摘要
},
"批量生成函数注释": {
"Color": "stop", # 按钮颜色
"Function": 批量生成函数注释
},
"[多线程demo] 把本项目源代码切换成全英文": {
# HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效
"Function": HotReload(全项目切换英文)
},
"[函数插件模板demo] 历史上的今天": {
# HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效
"Function": HotReload(高阶功能模板函数)
},
}
###################### 第二组插件 ###########################
# [第二组插件]: 经过充分测试,但功能上距离达到完美状态还差一点点
from crazy_functions.批量总结PDF文档 import 批量总结PDF文档
from crazy_functions.批量总结PDF文档pdfminer import 批量总结PDF文档pdfminer
from crazy_functions.总结word文档 import 总结word文档
function_plugins.update({
"[仅供开发调试] 批量总结PDF文档": {
"Color": "stop",
"Function": HotReload(批量总结PDF文档) # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效
},
"[仅供开发调试] 批量总结PDF文档pdfminer": {
"Color": "stop",
"AsButton": False, # 加入下拉菜单中
"Function": HotReload(批量总结PDF文档pdfminer)
},
"[仅供开发调试] 批量总结Word文档": {
"Color": "stop",
"Function": HotReload(总结word文档)
},
})
###################### 第三组插件 ###########################
# [第三组插件]: 尚未充分测试的函数插件,放在这里
try:
from crazy_functions.下载arxiv论文翻译摘要 import 下载arxiv论文并翻译摘要
function_plugins.update({
"一键下载arxiv论文并翻译摘要先在input输入编号如1812.10695": {
"Color": "stop",
"AsButton": False, # 加入下拉菜单中
"Function": HotReload(下载arxiv论文并翻译摘要)
}
})
except Exception as err:
print(f'[下载arxiv论文并翻译摘要] 插件导入失败 {str(err)}')
###################### 第n组插件 ###########################
return function_plugins
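
Registering a new plugin is one more dict entry inside get_crazy_functionals. The module path crazy_functions/my_plugin.py and the function 我的插件 below are hypothetical, shown only to illustrate the registration pattern; everything else mirrors the entries already in this file.

```python
from toolbox import HotReload
from crazy_functions.my_plugin import 我的插件  # hypothetical module and plugin function

# Inside get_crazy_functionals(), after function_plugins has been created:
function_plugins.update({
    "我的插件(示例)": {
        "Color": "stop",      # red button: the plugin reads a path from the input box
        "AsButton": False,    # keep it in the dropdown rather than a dedicated button
        "Function": HotReload(我的插件),  # hot reload: code edits take effect without restarting
    }
})
```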

img/demo.jpg (new binary file, 262 KiB, not shown)
img/demo2.jpg (new binary file, 264 KiB, not shown)
img/公式.gif (new binary file, 11 MiB, not shown)
img/润色.gif (new binary file, 12 MiB, not shown)

main.py (206 lines changed)

@@ -1,113 +1,147 @@
import gradio as gr
import os
import markdown, mdtex2html
import os; os.environ['no_proxy'] = '*' # 避免代理网络产生意外污染
import gradio as gr
from predict import predict
from show_math import convert as convert_math
from toolbox import format_io, find_free_port, on_file_uploaded, on_report_generated, get_conf
def find_free_port():
import socket
from contextlib import closing
with closing(socket.socket(socket.AF_INET, socket.SOCK_STREAM)) as s:
s.bind(('', 0))
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
return s.getsockname()[1]
PORT = find_free_port()
# 建议您复制一个config_private.py放自己的秘密, 如API和代理网址, 避免不小心传github被别人看到
proxies, WEB_PORT, LLM_MODEL, CONCURRENT_COUNT, AUTHENTICATION, CHATBOT_HEIGHT = \
get_conf('proxies', 'WEB_PORT', 'LLM_MODEL', 'CONCURRENT_COUNT', 'AUTHENTICATION', 'CHATBOT_HEIGHT')
# 如果WEB_PORT是-1, 则随机选取WEB端口
PORT = find_free_port() if WEB_PORT <= 0 else WEB_PORT
if not AUTHENTICATION: AUTHENTICATION = None
initial_prompt = "Serve me as a writing and programming assistant."
title_html = """<h1 align="center">ChatGPT 学术优化</h1>"""
title_html = "<h1 align=\"center\">ChatGPT 学术优化</h1>"
description = """代码开源和更新[地址🚀](https://github.com/binary-husky/chatgpt_academic),感谢热情的[开发者们❤️](https://github.com/binary-husky/chatgpt_academic/graphs/contributors)"""
# 问询记录, python 版本建议3.9+(越新越好)
import logging
os.makedirs('gpt_log', exist_ok=True)
logging.basicConfig(filename='gpt_log/predict.log', level=logging.INFO)
os.makedirs("gpt_log", exist_ok=True)
try:logging.basicConfig(filename="gpt_log/chat_secrets.log", level=logging.INFO, encoding="utf-8")
except:logging.basicConfig(filename="gpt_log/chat_secrets.log", level=logging.INFO)
print("所有问询记录将自动保存在本地目录./gpt_log/chat_secrets.log, 请注意自我隐私保护哦!")
# 一些普通功能模块
from functional import get_functionals
functional = get_functionals()
def reset_textbox(): return gr.update(value='')
def text_divide_paragraph(text):
if '```' in text:
# careful input
return text
else:
# wtf input
lines = text.split("\n")
for i, line in enumerate(lines):
if i!=0: lines[i] = "<p>"+lines[i].replace(" ", "&nbsp;")+"</p>"
text = "".join(lines)
return text
# 高级函数插件
from functional_crazy import get_crazy_functionals
crazy_fns = get_crazy_functionals()
def markdown_convertion(txt):
if ('$' in txt) and ('```' not in txt):
math_config = {'mdx_math': {'enable_dollar_delimiter': True}}
return markdown.markdown(txt,extensions=['fenced_code','tables']) + '<br><br>' + \
markdown.markdown(convert_math(txt, splitParagraphs=False),extensions=['fenced_code','tables'])
else:
return markdown.markdown(txt,extensions=['fenced_code','tables'])
# math_config = {'mdx_math': {'enable_dollar_delimiter': True}}
# markdown.markdown(txt, extensions=['fenced_code', 'tables', 'mdx_math'], extension_configs=math_config)
def format_io(self,y):
if y is None:
return []
i_ask, gpt_reply = y[-1]
i_ask = text_divide_paragraph(i_ask) # 输入部分太自由,预处理一波
y[-1] = (
None if i_ask is None else markdown.markdown(i_ask, extensions=['fenced_code','tables']),
None if gpt_reply is None else markdown_convertion(gpt_reply)
)
return y
# 处理markdown文本格式的转变
gr.Chatbot.postprocess = format_io
with gr.Blocks() as demo:
# 做一些外观色彩上的调整
from theme import adjust_theme, advanced_css
set_theme = adjust_theme()
cancel_handles = []
with gr.Blocks(theme=set_theme, analytics_enabled=False, css=advanced_css) as demo:
gr.HTML(title_html)
with gr.Row():
with gr.Row().style(equal_height=True):
with gr.Column(scale=2):
chatbot = gr.Chatbot()
chatbot.style(height=700)
chatbot.style()
chatbot.style(height=CHATBOT_HEIGHT)
history = gr.State([])
TRUE = gr.State(True)
FALSE = gr.State(False)
with gr.Column(scale=1):
with gr.Row():
with gr.Column(scale=12):
txt = gr.Textbox(show_label=False, placeholder="Input question here.").style(container=False)
with gr.Column(scale=1):
submitBtn = gr.Button("Ask", variant="primary")
txt = gr.Textbox(show_label=False, placeholder="Input question here.").style(container=False)
with gr.Row():
for k in functional:
variant = functional[k]["Color"] if "Color" in functional[k] else "secondary"
functional[k]["Button"] = gr.Button(k, variant=variant)
statusDisplay = gr.Markdown("status: ready")
systemPromptTxt = gr.Textbox(show_label=True, placeholder=f"System Prompt", label="System prompt", value=initial_prompt).style(container=True)
#inputs, top_p, temperature, top_k, repetition_penalty
with gr.Accordion("arguments", open=False):
submitBtn = gr.Button("提交", variant="primary")
with gr.Row():
resetBtn = gr.Button("重置", variant="secondary"); resetBtn.style(size="sm")
stopBtn = gr.Button("停止", variant="secondary"); stopBtn.style(size="sm")
with gr.Row():
from check_proxy import check_proxy
status = gr.Markdown(f"Tip: 按Enter提交, 按Shift+Enter换行。当前模型: {LLM_MODEL} \n {check_proxy(proxies)}")
with gr.Accordion("基础功能区", open=True) as area_basic_fn:
with gr.Row():
for k in functional:
variant = functional[k]["Color"] if "Color" in functional[k] else "secondary"
functional[k]["Button"] = gr.Button(k, variant=variant)
with gr.Accordion("函数插件区", open=True) as area_crazy_fn:
with gr.Row():
gr.Markdown("注意以下“红颜色”标识的函数插件需从input区读取路径作为参数.")
with gr.Row():
for k in crazy_fns:
if not crazy_fns[k].get("AsButton", True): continue
variant = crazy_fns[k]["Color"] if "Color" in crazy_fns[k] else "secondary"
crazy_fns[k]["Button"] = gr.Button(k, variant=variant)
with gr.Row():
with gr.Accordion("更多函数插件", open=True):
dropdown_fn_list = [k for k in crazy_fns.keys() if not crazy_fns[k].get("AsButton", True)]
with gr.Column(scale=1):
dropdown = gr.Dropdown(dropdown_fn_list, value=r"打开插件列表", label="").style(container=False)
with gr.Column(scale=1):
switchy_bt = gr.Button(r"请先从插件列表中选择", variant="secondary")
with gr.Row():
with gr.Accordion("点击展开“文件上传区”。上传本地文件可供红色函数插件调用。", open=False) as area_file_up:
file_upload = gr.Files(label="任何文件, 但推荐上传压缩文件(zip, tar)", file_count="multiple")
with gr.Accordion("展开SysPrompt & 交互界面布局 & Github地址", open=False):
system_prompt = gr.Textbox(show_label=True, placeholder=f"System Prompt", label="System prompt", value=initial_prompt)
top_p = gr.Slider(minimum=-0, maximum=1.0, value=1.0, step=0.01,interactive=True, label="Top-p (nucleus sampling)",)
temperature = gr.Slider(minimum=-0, maximum=5.0, value=1.0, step=0.01, interactive=True, label="Temperature",)
txt.submit(predict, [txt, top_p, temperature, chatbot, history, systemPromptTxt], [chatbot, history, statusDisplay])
submitBtn.click(predict, [txt, top_p, temperature, chatbot, history, systemPromptTxt], [chatbot, history, statusDisplay], show_progress=True)
# submitBtn.click(reset_textbox, [], [txt])
temperature = gr.Slider(minimum=-0, maximum=2.0, value=1.0, step=0.01, interactive=True, label="Temperature",)
checkboxes = gr.CheckboxGroup(["基础功能区", "函数插件区"], value=["基础功能区", "函数插件区"], label="显示/隐藏功能区")
gr.Markdown(description)
# 功能区显示开关与功能区的互动
def fn_area_visibility(a):
ret = {}
ret.update({area_basic_fn: gr.update(visible=("基础功能区" in a))})
ret.update({area_crazy_fn: gr.update(visible=("函数插件区" in a))})
return ret
checkboxes.select(fn_area_visibility, [checkboxes], [area_basic_fn, area_crazy_fn] )
# 整理反复出现的控件句柄组合
input_combo = [txt, top_p, temperature, chatbot, history, system_prompt]
output_combo = [chatbot, history, status]
predict_args = dict(fn=predict, inputs=input_combo, outputs=output_combo)
empty_txt_args = dict(fn=lambda: "", inputs=[], outputs=[txt]) # 用于在提交后清空输入栏
# 提交按钮、重置按钮
cancel_handles.append(txt.submit(**predict_args)) #; txt.submit(**empty_txt_args) 在提交后清空输入栏
cancel_handles.append(submitBtn.click(**predict_args)) #; submitBtn.click(**empty_txt_args) 在提交后清空输入栏
resetBtn.click(lambda: ([], [], "已重置"), None, output_combo)
# 基础功能区的回调函数注册
for k in functional:
functional[k]["Button"].click(predict,
[txt, top_p, temperature, chatbot,history, systemPromptTxt, FALSE, TRUE, gr.State(k)], [chatbot, history, statusDisplay], show_progress=True)
print(f"URL http://localhost:{PORT}")
demo.title = "ChatGPT 学术优化"
click_handle = functional[k]["Button"].click(predict, [*input_combo, gr.State(True), gr.State(k)], output_combo)
cancel_handles.append(click_handle)
# 文件上传区接收文件后与chatbot的互动
file_upload.upload(on_file_uploaded, [file_upload, chatbot, txt], [chatbot, txt])
# 函数插件-固定按钮区
for k in crazy_fns:
if not crazy_fns[k].get("AsButton", True): continue
click_handle = crazy_fns[k]["Button"].click(crazy_fns[k]["Function"], [*input_combo, gr.State(PORT)], output_combo)
click_handle.then(on_report_generated, [file_upload, chatbot], [file_upload, chatbot])
cancel_handles.append(click_handle)
# 函数插件-下拉菜单与随变按钮的互动
def on_dropdown_changed(k):
variant = crazy_fns[k]["Color"] if "Color" in crazy_fns[k] else "secondary"
return {switchy_bt: gr.update(value=k, variant=variant)}
dropdown.select(on_dropdown_changed, [dropdown], [switchy_bt] )
# 随变按钮的回调函数注册
def route(k, *args, **kwargs):
if k in [r"打开插件列表", r"请先从插件列表中选择"]: return
yield from crazy_fns[k]["Function"](*args, **kwargs)
click_handle = switchy_bt.click(route,[switchy_bt, *input_combo, gr.State(PORT)], output_combo)
click_handle.then(on_report_generated, [file_upload, chatbot], [file_upload, chatbot])
# def expand_file_area(file_upload, area_file_up):
# if len(file_upload)>0: return {area_file_up: gr.update(open=True)}
# click_handle.then(expand_file_area, [file_upload, area_file_up], [area_file_up])
cancel_handles.append(click_handle)
# 终止按钮的回调函数注册
stopBtn.click(fn=None, inputs=None, outputs=None, cancels=cancel_handles)
# gradio的inbrowser触发不太稳定回滚代码到原始的浏览器打开函数
def auto_opentab_delay():
import threading, webbrowser, time
def open(): time.sleep(2)
webbrowser.open_new_tab(f'http://localhost:{PORT}')
t = threading.Thread(target=open)
t.daemon = True; t.start()
print(f"如果浏览器没有自动打开请复制并转到以下URL")
print(f"\t(亮色主体): http://localhost:{PORT}")
print(f"\t(暗色主体): http://localhost:{PORT}/?__dark-theme=true")
def open():
time.sleep(2)
webbrowser.open_new_tab(f"http://localhost:{PORT}/?__dark-theme=true")
threading.Thread(target=open, name="open-browser", daemon=True).start()
auto_opentab_delay()
demo.queue().launch(server_name="0.0.0.0", share=True, server_port=PORT)
demo.title = "ChatGPT 学术优化"
demo.queue(concurrency_count=CONCURRENT_COUNT).launch(server_name="0.0.0.0", share=True, server_port=PORT, auth=AUTHENTICATION)
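
The stop button above works by passing every registered click/submit handle to cancels=. The standalone snippet below shows the same pattern in isolation, assuming a gradio 3.x install like the one this repository targets; the component names and the slow_count generator are illustrative.

```python
import time
import gradio as gr

def slow_count(n):
    # A long-running generator; partial results stream into the textbox.
    for i in range(int(n)):
        time.sleep(0.5)
        yield f"step {i}"

with gr.Blocks() as demo:
    n = gr.Number(value=20, label="steps")
    out = gr.Textbox(label="progress")
    start_btn = gr.Button("开始")
    stop_btn = gr.Button("停止")
    handle = start_btn.click(slow_count, [n], [out])
    # Same idea as stopBtn.click(..., cancels=cancel_handles) in main.py:
    # a no-op click whose only job is to cancel the pending event.
    stop_btn.click(fn=None, inputs=None, outputs=None, cancels=[handle])

demo.queue().launch()
```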

@@ -1,78 +1,242 @@
# 借鉴了 https://github.com/GaiZhenbiao/ChuanhuChatGPT 项目
"""
该文件中主要包含三个函数
不具备多线程能力的函数
1. predict: 正常对话时使用具备完备的交互功能不可多线程
具备多线程调用能力的函数
2. predict_no_ui高级实验性功能模块调用不会实时显示在界面上参数简单可以多线程并行方便实现复杂的功能逻辑
3. predict_no_ui_long_connection在实验过程中发现调用predict_no_ui处理长文档时和openai的连接容易断掉这个函数用stream的方式解决这个问题同样支持多线程
"""
import json
import gradio as gr
import logging
import traceback
import requests
import importlib
import os
if os.path.exists('config_private.py'):
# 放自己的秘密如API和代理网址
from config_private import proxies, API_URL, API_KEY
else:
from config import proxies, API_URL, API_KEY
# config_private.py放自己的秘密如API和代理网址
# 读取时首先看是否存在私密的config_private配置文件不受git管控如果有则覆盖原config文件
from toolbox import get_conf
proxies, API_URL, API_KEY, TIMEOUT_SECONDS, MAX_RETRY, LLM_MODEL = \
get_conf('proxies', 'API_URL', 'API_KEY', 'TIMEOUT_SECONDS', 'MAX_RETRY', 'LLM_MODEL')
timeout_bot_msg = '[Local Message] Request timeout. Network error. Please check proxy settings in config.py.' + \
'网络错误,检查代理服务器是否可用,以及代理设置的格式是否正确,格式须是[协议]://[地址]:[端口],缺一不可。'
def get_full_error(chunk, stream_response):
"""
获取完整的从Openai返回的报错
"""
while True:
try:
chunk += next(stream_response)
except:
break
return chunk
def predict_no_ui(inputs, top_p, temperature, history=[], sys_prompt=""):
"""
发送至chatGPT等待回复一次性完成不显示中间过程
predict函数的简化版
用于payload比较大的情况或者用于实现多线带嵌套的复杂功能
inputs 是本次问询的输入
top_p, temperature是chatGPT的内部调优参数
history 是之前的对话列表
注意无论是inputs还是history内容太长了都会触发token数量溢出的错误然后raise ConnectionAbortedError
"""
headers, payload = generate_payload(inputs, top_p, temperature, history, system_prompt=sys_prompt, stream=False)
retry = 0
while True:
try:
# make a POST request to the API endpoint, stream=False
response = requests.post(API_URL, headers=headers, proxies=proxies,
json=payload, stream=False, timeout=TIMEOUT_SECONDS*2); break
except requests.exceptions.ReadTimeout as e:
retry += 1
traceback.print_exc()
if retry > MAX_RETRY: raise TimeoutError
if MAX_RETRY!=0: print(f'请求超时,正在重试 ({retry}/{MAX_RETRY}) ……')
try:
result = json.loads(response.text)["choices"][0]["message"]["content"]
return result
except Exception as e:
if "choices" not in response.text: print(response.text)
raise ConnectionAbortedError("Json解析不合常规可能是文本过长" + response.text)
def predict_no_ui_long_connection(inputs, top_p, temperature, history=[], sys_prompt="", observe_window=None):
"""
发送至chatGPT等待回复一次性完成不显示中间过程但内部用stream的方法避免有人中途掐网线
observe_window用于负责跨越线程传递已经输出的部分大部分时候仅仅为了fancy的视觉效果留空即可
"""
headers, payload = generate_payload(inputs, top_p, temperature, history, system_prompt=sys_prompt, stream=True)
def compose_system(system_prompt):
return {"role": "system", "content": system_prompt}
retry = 0
while True:
try:
# make a POST request to the API endpoint, stream=False
response = requests.post(API_URL, headers=headers, proxies=proxies,
json=payload, stream=True, timeout=TIMEOUT_SECONDS); break
except requests.exceptions.ReadTimeout as e:
retry += 1
traceback.print_exc()
if retry > MAX_RETRY: raise TimeoutError
if MAX_RETRY!=0: print(f'请求超时,正在重试 ({retry}/{MAX_RETRY}) ……')
stream_response = response.iter_lines()
result = ''
while True:
try: chunk = next(stream_response).decode()
except StopIteration: break
if len(chunk)==0: continue
if not chunk.startswith('data:'):
error_msg = get_full_error(chunk.encode('utf8'), stream_response).decode()
if "reduce the length" in error_msg:
raise ConnectionAbortedError("OpenAI拒绝了请求:" + error_msg)
else:
raise RuntimeError("OpenAI拒绝了请求" + error_msg)
json_data = json.loads(chunk.lstrip('data:'))['choices'][0]
delta = json_data["delta"]
if len(delta) == 0: break
if "role" in delta: continue
if "content" in delta:
result += delta["content"]
print(delta["content"], end='')
if observe_window is not None: observe_window[0] += delta["content"]
else: raise RuntimeError("Unexpected JSON structure: " + str(delta))
if json_data['finish_reason'] == 'length':
raise ConnectionAbortedError("正常结束但显示Token不足。")
return result
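A hedged sketch of how `observe_window` is meant to be used across threads: the worker streams into slot 0 of a mutable list while the caller polls it (prompt text and names below are illustrative, and a working config is assumed):

``` python
import threading, time

observe_window = [""]   # slot 0 accumulates the streamed reply across threads
result_holder = []

def worker():
    result_holder.append(predict_no_ui_long_connection(
        inputs="Summarize the attention mechanism in two sentences.",
        top_p=1.0, temperature=1.0, history=[], sys_prompt="",
        observe_window=observe_window))

t = threading.Thread(target=worker, daemon=True)
t.start()
while t.is_alive():
    print("partial:", observe_window[0])   # whatever has been streamed so far
    time.sleep(1)
print("final:", result_holder[0])
```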
def compose_user(user_input):
return {"role": "user", "content": user_input}
def predict(inputs, top_p, temperature, chatbot=[], history=[], system_prompt='', retry=False,
def predict(inputs, top_p, temperature, chatbot=[], history=[], system_prompt='',
stream = True, additional_fn=None):
"""
Send to chatGPT and stream the output.
Used for the basic conversation feature.
inputs: the query for this request
top_p, temperature: internal tuning parameters of chatGPT
history: the list of previous dialogue turns; note that if either inputs or history is too long, a token-overflow error is triggered
chatbot: the conversation list shown in the WebUI; modify it and then yield to update the chat interface directly
additional_fn: which button was clicked; the buttons are defined in functional.py
"""
if additional_fn is not None:
import functional
importlib.reload(functional)
importlib.reload(functional) # hot-reload the prompt definitions
functional = functional.get_functionals()
if "PreProcess" in functional[additional_fn]: inputs = functional[additional_fn]["PreProcess"](inputs) # 获取预处理函数(如果有的话)
inputs = functional[additional_fn]["Prefix"] + inputs + functional[additional_fn]["Suffix"]
if stream:
raw_input = inputs
logging.info(f'[raw_input] {raw_input}')
chatbot.append((inputs, ""))
yield chatbot, history, "Waiting"
yield chatbot, history, "等待响应"
headers, payload = generate_payload(inputs, top_p, temperature, history, system_prompt, stream)
history.append(inputs); history.append(" ")
retry = 0
while True:
try:
# make a POST request to the API endpoint, stream=True
response = requests.post(API_URL, headers=headers, proxies=proxies,
json=payload, stream=True, timeout=TIMEOUT_SECONDS);break
except:
retry += 1
chatbot[-1] = ((chatbot[-1][0], timeout_bot_msg))
retry_msg = f",正在重试 ({retry}/{MAX_RETRY}) ……" if MAX_RETRY > 0 else ""
yield chatbot, history, "请求超时"+retry_msg
if retry > MAX_RETRY: raise TimeoutError
gpt_replying_buffer = ""
is_head_of_the_stream = True
if stream:
stream_response = response.iter_lines()
while True:
chunk = next(stream_response)
# print(chunk.decode()[6:])
if is_head_of_the_stream:
# the first frame of the stream carries no content
is_head_of_the_stream = False; continue
if chunk:
try:
if len(json.loads(chunk.decode()[6:])['choices'][0]["delta"]) == 0:
# treated as the end of the stream; gpt_replying_buffer is complete
logging.info(f'[response] {gpt_replying_buffer}')
break
# handle the body of the stream
chunkjson = json.loads(chunk.decode()[6:])
status_text = f"finish_reason: {chunkjson['choices'][0]['finish_reason']}"
# an exception here usually means the text is too long; see the output of get_full_error for details
gpt_replying_buffer = gpt_replying_buffer + json.loads(chunk.decode()[6:])['choices'][0]["delta"]["content"]
history[-1] = gpt_replying_buffer
chatbot[-1] = (history[-2], history[-1])
yield chatbot, history, status_text
except Exception as e:
traceback.print_exc()
yield chatbot, history, "Json解析不合常规"
chunk = get_full_error(chunk, stream_response)
error_msg = chunk.decode()
if "reduce the length" in error_msg:
chatbot[-1] = (chatbot[-1][0], "[Local Message] Input (or history) is too long, please reduce input or clear history by refreshing this page.")
history = [] # clear the history
elif "Incorrect API key" in error_msg:
chatbot[-1] = (chatbot[-1][0], "[Local Message] Incorrect API key provided.")
elif "exceeded your current quota" in error_msg:
chatbot[-1] = (chatbot[-1][0], "[Local Message] You exceeded your current quota. OpenAI以账户额度不足为由拒绝服务.")
else:
from toolbox import regular_txt_to_markdown
tb_str = '```\n' + traceback.format_exc() + '```'
chatbot[-1] = (chatbot[-1][0], f"[Local Message] 异常 \n\n{tb_str} \n\n{regular_txt_to_markdown(chunk.decode()[4:])}")
yield chatbot, history, "Json异常" + error_msg
return
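As a hedged sketch of how a caller consumes the `predict` generator (in the real app Gradio iterates it automatically; this plain loop and the prompt are for illustration only):

``` python
chatbot, history = [], []
for chatbot, history, status in predict(
        inputs="请帮我润色这句话:The results is very good.",
        top_p=1.0, temperature=1.0,
        chatbot=chatbot, history=history,
        system_prompt="Serve me as a writing and programming assistant."):
    print(status)           # e.g. "等待响应" or "finish_reason: None"
print(chatbot[-1][1])       # the finished reply
```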
def generate_payload(inputs, top_p, temperature, history, system_prompt, stream):
"""
整合所有信息选择LLM模型生成http请求为发送请求做准备
"""
headers = {
"Content-Type": "application/json",
"Authorization": f"Bearer {API_KEY}"
}
chat_counter = len(history) // 2
conversation_cnt = len(history) // 2
print(f"chat_counter - {chat_counter}")
messages = [compose_system(system_prompt)]
if chat_counter:
for index in range(0, 2*chat_counter, 2):
d1 = {}
d1["role"] = "user"
d1["content"] = history[index]
d2 = {}
d2["role"] = "assistant"
d2["content"] = history[index+1]
if d1["content"] != "":
if d2["content"] != "" or retry:
messages.append(d1)
messages.append(d2)
messages = [{"role": "system", "content": system_prompt}]
if conversation_cnt:
for index in range(0, 2*conversation_cnt, 2):
what_i_have_asked = {}
what_i_have_asked["role"] = "user"
what_i_have_asked["content"] = history[index]
what_gpt_answer = {}
what_gpt_answer["role"] = "assistant"
what_gpt_answer["content"] = history[index+1]
if what_i_have_asked["content"] != "":
if what_gpt_answer["content"] == "": continue
if what_gpt_answer["content"] == timeout_bot_msg: continue
messages.append(what_i_have_asked)
messages.append(what_gpt_answer)
else:
messages[-1]['content'] = d2['content']
if retry and chat_counter:
messages.pop()
else:
temp3 = {}
temp3["role"] = "user"
temp3["content"] = inputs
messages.append(temp3)
chat_counter += 1
# messages
messages[-1]['content'] = what_gpt_answer['content']
what_i_ask_now = {}
what_i_ask_now["role"] = "user"
what_i_ask_now["content"] = inputs
messages.append(what_i_ask_now)
payload = {
"model": "gpt-3.5-turbo",
# "model": "gpt-4",
"model": LLM_MODEL,
"messages": messages,
"temperature": temperature, # 1.0,
"top_p": top_p, # 1.0,
@ -81,54 +245,8 @@ def predict(inputs, top_p, temperature, chatbot=[], history=[], system_prompt=''
"presence_penalty": 0,
"frequency_penalty": 0,
}
print(f" {LLM_MODEL} : {conversation_cnt} : {inputs}")
return headers,payload
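For orientation, a hedged illustration of the message list this function assembles (system prompt first, then alternating user/assistant turns taken from history, then the new user input); field values are placeholders:

``` python
payload_example = {
    "model": LLM_MODEL,                 # e.g. "gpt-3.5-turbo"
    "messages": [
        {"role": "system",    "content": "..."},
        {"role": "user",      "content": "你好"},
        {"role": "assistant", "content": "你好,有什么可以帮您?"},
        {"role": "user",      "content": "帮我翻译一段话"},
    ],
    "temperature": temperature,
    "top_p": top_p,
    # ... remaining fields as in the diff above (presence_penalty, frequency_penalty, etc.)
}
```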
history.append(inputs)
try:
# make a POST request to the API endpoint using the requests.post method, passing in stream=True
response = requests.post(API_URL, headers=headers, proxies=proxies,
json=payload, stream=True, timeout=15)
except:
chatbot[-1] = ((chatbot[-1][0], 'Requests Timeout, Network Error.'))
yield chatbot, history, "Requests Timeout"
raise TimeoutError
token_counter = 0
partial_words = ""
counter = 0
if stream:
stream_response = response.iter_lines()
while True:
chunk = next(stream_response)
# print(chunk)
if chunk == b'data: [DONE]':
break
if counter == 0:
counter += 1
continue
counter += 1
# check whether each line is non-empty
if chunk:
# decode each line as response data is in bytes
try:
if len(json.loads(chunk.decode()[6:])['choices'][0]["delta"]) == 0:
logging.info(f'[response] {chatbot[-1][-1]}')
break
except Exception as e:
traceback.print_exc()
chunkjson = json.loads(chunk.decode()[6:])
status_text = f"id: {chunkjson['id']}, finish_reason: {chunkjson['choices'][0]['finish_reason']}"
partial_words = partial_words + \
json.loads(chunk.decode()[6:])[
'choices'][0]["delta"]["content"]
if token_counter == 0:
history.append(" " + partial_words)
else:
history[-1] = partial_words
chatbot[-1] = (history[-2], history[-1])
token_counter += 1
yield chatbot, history, status_text

36
request_llm/README.md Normal file
View File

@ -0,0 +1,36 @@
# How to use other large language models (dev branch, under testing)
## 1. Run text-generation first
``` sh
# Download the model (text-generation is a brilliant project, don't forget to give it a star)
git clone https://github.com/oobabooga/text-generation-webui.git
# Install text-generation's extra dependencies
pip install accelerate bitsandbytes flexgen gradio llamacpp markdown numpy peft requests rwkv safetensors sentencepiece tqdm datasets git+https://github.com/huggingface/transformers
# Change directory
cd text-generation-webui
# Download a model
python download-model.py facebook/galactica-1.3b
# Other options include facebook/opt-1.3b
#                       facebook/galactica-6.7b
#                       facebook/galactica-120b
#                       facebook/pygmalion-1.3b etc.
# For details see https://github.com/oobabooga/text-generation-webui
# Start text-generation (note: replace the slash in the model name with an underscore)
python server.py --cpu --listen --listen-port 7860 --model facebook_galactica-1.3b
```
## 2. Modify config.py
``` sh
# The LLM_MODEL format is a bit involved: TGUI:[model]@[ws address]:[ws port]; the port must match the one given above
LLM_MODEL = "TGUI:galactica-1.3b@localhost:7860"
```
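For clarity, this is (in simplified form) how `request_llm/bridge_tgui.py` below takes the setting apart; the values shown follow the example above:

``` python
LLM_MODEL = "TGUI:galactica-1.3b@localhost:7860"
model_name, addr_port = LLM_MODEL.split('@')  # -> "TGUI:galactica-1.3b", "localhost:7860"
addr, port = addr_port.split(':')             # -> "localhost", "7860"
```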
## 3. Run!
``` sh
cd chatgpt-academic
python main.py
```

167
request_llm/bridge_tgui.py Normal file
View File

@ -0,0 +1,167 @@
'''
Contributed by SagsMug. Modified by binary-husky
https://github.com/oobabooga/text-generation-webui/pull/175
'''
import asyncio
import json
import random
import string
import websockets
import logging
import time
import threading
import importlib
from toolbox import get_conf
LLM_MODEL, = get_conf('LLM_MODEL')
# "TGUI:galactica-1.3b@localhost:7860"
model_name, addr_port = LLM_MODEL.split('@')
assert ':' in addr_port, "LLM_MODEL 格式不正确!" + LLM_MODEL
addr, port = addr_port.split(':')
def random_hash():
letters = string.ascii_lowercase + string.digits
return ''.join(random.choice(letters) for i in range(9))
async def run(context, max_token=512):
params = {
'max_new_tokens': max_token,
'do_sample': True,
'temperature': 0.5,
'top_p': 0.9,
'typical_p': 1,
'repetition_penalty': 1.05,
'encoder_repetition_penalty': 1.0,
'top_k': 0,
'min_length': 0,
'no_repeat_ngram_size': 0,
'num_beams': 1,
'penalty_alpha': 0,
'length_penalty': 1,
'early_stopping': True,
'seed': -1,
}
session = random_hash()
async with websockets.connect(f"ws://{addr}:{port}/queue/join") as websocket:
while content := json.loads(await websocket.recv()):
#Python3.10 syntax, replace with if elif on older
if content["msg"] == "send_hash":
await websocket.send(json.dumps({
"session_hash": session,
"fn_index": 12
}))
elif content["msg"] == "estimation":
pass
elif content["msg"] == "send_data":
await websocket.send(json.dumps({
"session_hash": session,
"fn_index": 12,
"data": [
context,
params['max_new_tokens'],
params['do_sample'],
params['temperature'],
params['top_p'],
params['typical_p'],
params['repetition_penalty'],
params['encoder_repetition_penalty'],
params['top_k'],
params['min_length'],
params['no_repeat_ngram_size'],
params['num_beams'],
params['penalty_alpha'],
params['length_penalty'],
params['early_stopping'],
params['seed'],
]
}))
elif content["msg"] == "process_starts":
pass
elif content["msg"] in ["process_generating", "process_completed"]:
yield content["output"]["data"][0]
# You can search for your desired end indicator and
# stop generation by closing the websocket here
if (content["msg"] == "process_completed"):
break
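A hedged usage sketch of the `run` async generator above (requires a running text-generation-webui instance at the address parsed from LLM_MODEL; the prompt text is illustrative):

``` python
import asyncio

async def _demo():
    # each yielded value is the cumulative generated text so far
    async for partial_text in run("Explain the attention mechanism briefly.", max_token=64):
        print(partial_text)

if __name__ == '__main__':
    asyncio.run(_demo())
```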
def predict_tgui(inputs, top_p, temperature, chatbot=[], history=[], system_prompt='', stream = True, additional_fn=None):
"""
Send the request to the model and stream the output.
Used for the basic conversation feature.
inputs: the query for this request
top_p, temperature: internal tuning parameters of the model
history: the list of previous dialogue turns; note that if either inputs or history is too long, a token-overflow error is triggered
chatbot: the conversation list shown in the WebUI; modify it and then yield to update the chat interface directly
additional_fn: which button was clicked; the buttons are defined in functional.py
"""
if additional_fn is not None:
import functional
importlib.reload(functional) # hot-reload the prompt definitions
functional = functional.get_functionals()
if "PreProcess" in functional[additional_fn]: inputs = functional[additional_fn]["PreProcess"](inputs) # 获取预处理函数(如果有的话)
inputs = functional[additional_fn]["Prefix"] + inputs + functional[additional_fn]["Suffix"]
raw_input = "What I would like to say is the following: " + inputs
logging.info(f'[raw_input] {raw_input}')
history.extend([inputs, ""])
chatbot.append([inputs, ""])
yield chatbot, history, "等待响应"
prompt = inputs
tgui_say = ""
mutable = ["", time.time()]
def run_coorotine(mutable):
async def get_result(mutable):
async for response in run(prompt):
print(response[len(mutable[0]):])
mutable[0] = response
if (time.time() - mutable[1]) > 3:
print('exit when no listener')
break
asyncio.run(get_result(mutable))
thread_listen = threading.Thread(target=run_coorotine, args=(mutable,), daemon=True)
thread_listen.start()
while thread_listen.is_alive():
time.sleep(1)
mutable[1] = time.time()
# Print intermediate steps
if tgui_say != mutable[0]:
tgui_say = mutable[0]
history[-1] = tgui_say
chatbot[-1] = (history[-2], history[-1])
yield chatbot, history, "status_text"
logging.info(f'[response] {tgui_say}')
def predict_tgui_no_ui(inputs, top_p, temperature, history=[], sys_prompt=""):
raw_input = "What I would like to say is the following: " + inputs
prompt = inputs
tgui_say = ""
mutable = ["", time.time()]
def run_coorotine(mutable):
async def get_result(mutable):
async for response in run(prompt, max_token=20):
print(response[len(mutable[0]):])
mutable[0] = response
if (time.time() - mutable[1]) > 3:
print('exit when no listener')
break
asyncio.run(get_result(mutable))
thread_listen = threading.Thread(target=run_coorotine, args=(mutable,))
thread_listen.start()
while thread_listen.is_alive():
time.sleep(1)
mutable[1] = time.time()
tgui_say = mutable[0]
return tgui_say

View File

@ -1,3 +1,7 @@
gradio
gradio>=3.23
requests[socks]
mdtex2html
Markdown
latex2mathml
openai
transformers

175
self_analysis.md Normal file
View File

@ -0,0 +1,175 @@
# chatgpt-academic项目自译解报告
Author补充:以下分析均由本项目调用ChatGPT一键生成,如果有不准确的地方,全怪GPT😄
## [0/18] 程序摘要: functional_crazy.py
这是一个功能扩展的程序,文件名为 `functional_crazy.py`。代码的主要功能是通过提供一系列函数插件,增强程序的功能,让用户可以通过界面中的按钮,快速调用对应的函数插件实现相应的操作。代码中使用了 `HotReload` 函数插件,可以在不重启程序的情况下更新函数插件的代码,让其生效。同时,通过 `UserVisibleLevel` 变量的设置可以控制哪些插件会在UI界面显示出来。函数插件列表包括了以下功能解析项目本身、解析一个Python项目、解析一个C++项目头文件、解析一个C++项目、读取文章并生成摘要、批量生成函数注释、全项目切换成英文、批量总结PDF文档、批量总结PDF文档pdfminer、批量总结Word文档、高阶功能模板函数、以及其他未经充分测试的函数插件。
## [1/18] 程序摘要: main.py
该程序是一个基于Gradio构建的对话生成模型的Web界面示例包含了以下主要功能
1.加载模型并对用户输入进行响应;
2.通过调用外部函数库来获取用户的输入,并在模型生成的过程中进行处理;
3.支持用户上传本地文件,供外部函数库调用;
4.支持停止当前的生成过程;
5.保存用户的历史记录,并将其记录在本地日志文件中,以供后续分析和使用。
该程序需要依赖于一些外部库和软件包如Gradio、torch等。用户需要确保这些依赖项已经安装并且在运行该程序前对config_private.py配置文件进行相应的修改。
## [2/18] 程序摘要: functional.py
该文件定义了一个名为“functional”的函数函数的作用是返回一个包含多个字典键值对的字典每个键值对表示一种功能。该字典的键值由功能名称和对应的数据组成。其中的每个字典都包含4个键值对分别为“Prefix”、“Suffix”、“Color”和“PreProcess”分别表示前缀、后缀、按钮颜色和预处理函数。如果某些键值对没有给出那么程序中默认相应的值如按钮颜色默认为“secondary”等。每个功能描述了不同的学术润色/翻译/其他服务如“英语学术润色”、“中文学术润色”、“查找语法错误”等。函数还引用了一个名为“clear_line_break”的函数用于预处理修改前的文本。
## [3/18] 程序摘要: show_math.py
该程序文件名为show_math.py主要用途是将Markdown和LaTeX混合格式转换成带有MathML的HTML格式。该程序通过递归地处理LaTeX和Markdown混合段落逐一转换成HTML/MathML标记出来并在LaTeX公式创建中进行错误处理。在程序文件中定义了3个变量分别是incompleteconvError和convert其中convert函数是用来执行转换的主要函数。程序使用正则表达式进行LaTeX格式和Markdown段落的分割从而实现转换。如果在Latex转换过程中发生错误程序将输出相应的错误信息。
## [4/18] 程序摘要: predict.py
本程序文件的文件名为"./predict.py",主要包含三个函数:
1. predict:正常对话时使用,具备完备的交互功能,不可多线程;
2. predict_no_ui:高级实验性功能模块调用,不会实时显示在界面上,参数简单,可以多线程并行,方便实现复杂的功能逻辑;
3. predict_no_ui_long_connection:在实验过程中发现调用predict_no_ui处理长文档时,和openai的连接容易断掉,这个函数用stream的方式解决这个问题,同样支持多线程。
其中,predict函数用于基础的对话功能:发送至chatGPT,流式获取输出,根据点击的哪个按钮进行对话预处理等额外操作;predict_no_ui函数用于payload比较大的情况,或者用于实现多线、带嵌套的复杂功能;predict_no_ui_long_connection实现调用predict_no_ui处理长文档时避免连接断掉的情况,支持多线程。
## [5/18] 程序摘要: check_proxy.py
该程序文件名为check_proxy.py主要功能是检查代理服务器的可用性并返回代理服务器的地理位置信息或错误提示。具体实现方式如下
首先使用requests模块向指定网站https://ipapi.co/json/发送GET请求请求结果以JSON格式返回。如果代理服务器参数(proxies)是有效的且没有指明'https'代理,则用默认字典值'无'替代。
然后程序会解析返回的JSON数据并根据数据中是否包含国家名字字段来判断代理服务器的地理位置。如果有国家名字字段则将其打印出来并返回代理服务器的相关信息。如果没有国家名字字段但有错误信息字段则返回其他错误提示信息。
在程序执行前程序会先设置环境变量no_proxy并使用toolbox模块中的get_conf函数从配置文件中读取代理参数。
最后,检测程序会输出检查结果并返回对应的结果字符串。
## [6/18] 程序摘要: config_private.py
本程序文件名为`config_private.py`,其功能为配置私有信息以便在主程序中使用。主要功能包括:
- 配置OpenAI API的密钥和API URL
- 配置是否使用代理,如果使用代理配置代理地址和端口
- 配置发送请求的超时时间和失败重试次数的限制
- 配置并行使用线程数和用户名密码
- 提供检查功能以确保API密钥已经正确设置
其中需要特别注意的是最后一个检查功能要求在运行之前必须将API密钥正确设置否则程序会直接退出。
## [7/18] 程序摘要: config.py
该程序文件是一个配置文件用于配置OpenAI的API参数和优化体验的相关参数具体包括以下几个步骤
1.设置OpenAI的API密钥。
2.选择是否使用代理,如果使用则需要设置代理地址和端口等参数。
3.设置请求OpenAI后的超时时间、网页的端口、重试次数、选择的OpenAI模型、API的网址等。
4.设置并行使用的线程数和用户名密码。
该程序文件的作用为在使用OpenAI API时进行相关参数的配置以保证请求的正确性和速度并且优化使用体验。
## [8/18] 程序摘要: theme.py
该程序是一个自定义Gradio主题的Python模块。主题文件名为"./theme.py"。程序引入了Gradio模块并定义了一个名为"adjust_theme()"的函数。该函数根据输入值调整Gradio的默认主题返回一个包含所需自定义属性的主题对象。主题属性包括颜色、字体、过渡、阴影、按钮边框和渐变等。主题颜色列表包括石板色、灰色、锌色、中性色、石头色、红色、橙色、琥珀色、黄色、酸橙色、绿色、祖母绿、青蓝色、青色、天蓝色、蓝色、靛蓝色、紫罗兰色、紫色、洋红色、粉红色和玫瑰色。如果Gradio版本较旧则不能自定义字体和颜色。
## [9/18] 程序摘要: toolbox.py
该程序文件包含了一系列函数用于实现聊天程序所需的各种功能如预测对话、将对话记录写入文件、将普通文本转换为Markdown格式文本、装饰器函数CatchException和HotReload等。其中一些函数用到了第三方库如Python-Markdown、mdtex2html、zipfile、tarfile、rarfile和py7zr。除此之外还有一些辅助函数如get_conf、clear_line_break和extract_archive等。主要功能包括
1. 导入markdown、mdtex2html、threading、functools等模块。
2. 定义函数predict_no_ui_but_counting_down用于生成对话。
3. 定义函数write_results_to_file用于将对话记录生成Markdown文件。
4. 定义函数regular_txt_to_markdown将普通文本转换为Markdown格式的文本。
5. 定义装饰器函数CatchException用于捕获函数执行异常并返回生成器。
6. 定义函数report_execption用于向chatbot中添加错误信息。
7. 定义函数text_divide_paragraph用于将文本按照段落分隔符分割开生成带有段落标签的HTML代码。
8. 定义函数markdown_convertion用于将Markdown格式的文本转换为HTML格式。
9. 定义函数format_io用于将输入和输出解析为HTML格式。
10. 定义函数find_free_port用于返回当前系统中可用的未使用端口。
11. 定义函数extract_archive用于解压归档文件。
12. 定义函数find_recent_files用于查找最近创建的文件。
13. 定义函数on_file_uploaded用于处理上传文件的操作。
14. 定义函数on_report_generated用于处理生成报告文件的操作。
## [10/18] 程序摘要: crazy_functions/生成函数注释.py
该程序文件是一个Python脚本文件名为“生成函数注释.py”位于“./crazy_functions/”目录下。该程序实现了一个批量生成函数注释的功能可以对指定文件夹下的所有Python和C++源代码文件中的所有函数进行注释使用Markdown表格输出注释结果。
该程序引用了predict.py和toolbox.py两个模块其中predict.py实现了一个基于GPT模型的文本生成功能用于生成函数注释而toolbox.py实现了一些工具函数包括异常处理函数、文本写入函数等。另外该程序还定义了两个函数一个是“生成函数注释”函数用于处理单个文件的注释生成另一个是“批量生成函数注释”函数用于批量处理多个文件的注释生成。
## [11/18] 程序摘要: crazy_functions/读文章写摘要.py
这个程序文件是一个名为“读文章写摘要”的函数。该函数的输入包括文章的文本内容、top_p生成文本时选择最可能的词语的概率阈值、temperature控制生成文本的随机性的因子、对话历史等参数以及一个聊天机器人和一个系统提示的文本。该函数的主要工作是解析一组.tex文件然后生成一段学术性语言的中文和英文摘要。在解析过程中该函数使用一个名为“toolbox”的模块中的辅助函数和一个名为“predict”的模块中的函数来执行GPT-2模型的推理工作然后将结果返回给聊天机器人。另外该程序还包括一个名为“fast_debug”的bool型变量用于调试和测试。
## [12/18] 程序摘要: crazy_functions/代码重写为全英文_多线程.py
该程序文件实现了一个多线程操作,用于将指定目录下的所有 Python 文件中的中文转化为英文,并将转化后的文件存入另一个目录中。具体实现过程如下:
1. 集合目标文件路径并清空历史记录。
2. 循环目标文件,对每个文件启动一个线程进行任务操作。
3. 各个线程同时开始执行任务函数,并在任务完成后将转化后的文件写入指定目录,最终生成一份任务执行报告。
## [13/18] 程序摘要: crazy_functions/高级功能函数模板.py
该程序文件名为高级功能函数模板.py它包含了一个名为“高阶功能模板函数”的函数这个函数可以作为开发新功能函数的模板。该函数引用了predict.py和toolbox.py文件中的函数。在该函数内部它首先清空了历史记录然后对于今天和今天以后的四天它问用户历史中哪些事件发生在这些日期并列举两条事件并发送相关的图片。在向用户询问问题时使用了GPT进行响应。由于请求GPT需要一定的时间所以函数会在重新显示状态之前等待一段时间。在每次与用户的互动中使用yield关键字生成器函数来输出聊天机器人的当前状态包括聊天消息、历史记录和状态'正常'。最后程序调用write_results_to_file函数将聊天的结果写入文件以供后续的评估和分析。
## [14/18] 程序摘要: crazy_functions/总结word文档.py
该程序文件名为总结word文档.py主要功能是批量总结Word文档。具体实现过程是解析docx格式和doc格式文件生成文件内容然后使用自然语言处理工具对文章内容做中英文概述最后给出建议。该程序需要依赖python-docx和pywin32如果没有安装会给出安装建议。
## [15/18] 程序摘要: crazy_functions/批量总结PDF文档pdfminer.py
该程序文件名为pdfminer.py位于./crazy_functions/目录下。程序实现了批量读取PDF文件并使用pdfminer解析PDF文件内容。此外程序还根据解析得到的文本内容调用机器学习模型生成对每篇文章的概述最终生成全文摘要。程序中还对模块依赖进行了导入检查若缺少依赖则会提供安装建议。
## [16/18] 程序摘要: crazy_functions/解析项目源代码.py
这个程序文件中包含了几个函数,分别是:
1. `解析源代码(file_manifest, project_folder, top_p, temperature, chatbot, history, systemPromptTxt)`通过输入文件路径列表对程序文件进行逐文件分析根据分析结果做出整体功能和构架的概括并生成包括每个文件功能的markdown表格。
2. `解析项目本身(txt, top_p, temperature, chatbot, history, systemPromptTxt, WEB_PORT)`对当前文件夹下的所有Python文件及其子文件夹进行逐文件分析并生成markdown表格。
3. `解析一个Python项目(txt, top_p, temperature, chatbot, history, systemPromptTxt, WEB_PORT)`对指定路径下的所有Python文件及其子文件夹进行逐文件分析并生成markdown表格。
4. `解析一个C项目的头文件(txt, top_p, temperature, chatbot, history, systemPromptTxt, WEB_PORT)`对指定路径下的所有头文件进行逐文件分析并生成markdown表格。
5. `解析一个C项目(txt, top_p, temperature, chatbot, history, systemPromptTxt, WEB_PORT)`:对指定路径下的所有.h、.cpp、.c文件及其子文件夹进行逐文件分析并生成markdown表格。
程序中还包含了一些辅助函数和变量如CatchException装饰器函数report_execption函数、write_results_to_file函数等。在执行过程中还会调用其他模块中的函数如toolbox模块的函数和predict模块的函数。
## [17/18] 程序摘要: crazy_functions/批量总结PDF文档.py
这个程序文件是一个名为“批量总结PDF文档”的函数插件。它导入了predict和toolbox模块并定义了一些函数包括is_paragraph_breaknormalize_text和clean_text。这些函数是对输入文本进行预处理和清洗的功能函数。主要的功能函数是解析PDF它打开每个PDF文件并将其内容存储在file_content变量中然后传递给聊天机器人以产生一句话的概括。在解析PDF文件之后该函数连接了所有文件的摘要以产生一段学术语言和英文摘要。最后函数批量处理目标文件夹中的所有PDF文件并输出结果。
## 根据以上你自己的分析对程序的整体功能和构架做出概括。然后用一张markdown表格整理每个文件的功能。
该程序是一个聊天机器人使用了OpenAI的GPT语言模型以及一些特殊的辅助功能去处理各种学术写作和科研润色任务。整个程序由一些函数组成每个函数都代表了不同的学术润色/翻译/其他服务。
下面是程序中每个文件的功能列表:
| 文件名 | 功能 |
|--------|--------|
| functional_crazy.py | 实现高级功能函数模板和其他一些辅助功能函数 |
| main.py | 程序的主要入口负责程序的启动和UI的展示 |
| functional.py | 定义各种功能按钮的颜色和响应函数 |
| show_math.py | 解析LaTeX文本将其转换为Markdown格式 |
| predict.py | 基础的对话功能用于与chatGPT进行交互 |
| check_proxy.py | 检查代理设置的正确性 |
| config_private.py | 配置程序的API密钥和其他私有信息 |
| config.py | 配置OpenAI的API参数和程序的其他属性 |
| theme.py | 设置程序主题样式 |
| toolbox.py | 存放一些辅助函数供程序使用 |
| crazy_functions/生成函数注释.py | 生成Python文件中所有函数的注释 |
| crazy_functions/读文章写摘要.py | 解析文章文本,生成中英文摘要 |
| crazy_functions/代码重写为全英文_多线程.py | 将中文代码内容转化为英文 |
| crazy_functions/高级功能函数模板.py | 实现高级功能函数模板 |
| crazy_functions/总结word文档.py | 解析Word文件生成文章内容的概要 |
| crazy_functions/批量总结PDF文档pdfminer.py | 解析PDF文件生成文章内容的概要使用pdfminer库 |
| crazy_functions/批量总结PDF文档.py | 解析PDF文件生成文章内容的概要使用PyMuPDF库 |
| crazy_functions/解析项目源代码.py | 解析C/C++源代码生成markdown表格 |
| crazy_functions/批量总结PDF文档.py | 对PDF文件进行批量摘要生成 |
总的来说,该程序提供了一系列的学术润色和翻译的工具,支持对各种类型的文件进行分析和处理。同时也提供了对话式用户界面,便于用户使用和交互。

View File

@ -1,3 +1,5 @@
# This program is written by: https://github.com/polarwinkel/mdtex2html
from latex2mathml.converter import convert as tex2mathml
import re
@ -75,4 +77,4 @@ def convert(mdtex, extensions=[], splitParagraphs=True):
result = convert(parts[0]+mathml+incomplete, extensions, splitParagraphs=False)
if not found:
result = mdtex
return result
return result

152
theme.py Normal file
View File

@ -0,0 +1,152 @@
import gradio as gr
# list of colors available in gradio
# gr.themes.utils.colors.slate (石板色)
# gr.themes.utils.colors.gray (灰色)
# gr.themes.utils.colors.zinc (锌色)
# gr.themes.utils.colors.neutral (中性色)
# gr.themes.utils.colors.stone (石头色)
# gr.themes.utils.colors.red (红色)
# gr.themes.utils.colors.orange (橙色)
# gr.themes.utils.colors.amber (琥珀色)
# gr.themes.utils.colors.yellow (黄色)
# gr.themes.utils.colors.lime (酸橙色)
# gr.themes.utils.colors.green (绿色)
# gr.themes.utils.colors.emerald (祖母绿)
# gr.themes.utils.colors.teal (青蓝色)
# gr.themes.utils.colors.cyan (青色)
# gr.themes.utils.colors.sky (天蓝色)
# gr.themes.utils.colors.blue (蓝色)
# gr.themes.utils.colors.indigo (靛蓝色)
# gr.themes.utils.colors.violet (紫罗兰色)
# gr.themes.utils.colors.purple (紫色)
# gr.themes.utils.colors.fuchsia (洋红色)
# gr.themes.utils.colors.pink (粉红色)
# gr.themes.utils.colors.rose (玫瑰色)
def adjust_theme():
try:
color_er = gr.themes.utils.colors.pink
set_theme = gr.themes.Default(
primary_hue=gr.themes.utils.colors.orange,
neutral_hue=gr.themes.utils.colors.gray,
font=["sans-serif", "Microsoft YaHei", "ui-sans-serif", "system-ui", "sans-serif", gr.themes.utils.fonts.GoogleFont("Source Sans Pro")],
font_mono=["ui-monospace", "Consolas", "monospace", gr.themes.utils.fonts.GoogleFont("IBM Plex Mono")])
set_theme.set(
# Colors
input_background_fill_dark="*neutral_800",
# Transition
button_transition="none",
# Shadows
button_shadow="*shadow_drop",
button_shadow_hover="*shadow_drop_lg",
button_shadow_active="*shadow_inset",
input_shadow="0 0 0 *shadow_spread transparent, *shadow_inset",
input_shadow_focus="0 0 0 *shadow_spread *secondary_50, *shadow_inset",
input_shadow_focus_dark="0 0 0 *shadow_spread *neutral_700, *shadow_inset",
checkbox_label_shadow="*shadow_drop",
block_shadow="*shadow_drop",
form_gap_width="1px",
# Button borders
input_border_width="1px",
input_background_fill="white",
# Gradients
stat_background_fill="linear-gradient(to right, *primary_400, *primary_200)",
stat_background_fill_dark="linear-gradient(to right, *primary_400, *primary_600)",
error_background_fill=f"linear-gradient(to right, {color_er.c100}, *background_fill_secondary)",
error_background_fill_dark="*background_fill_primary",
checkbox_label_background_fill="linear-gradient(to top, *neutral_50, white)",
checkbox_label_background_fill_dark="linear-gradient(to top, *neutral_900, *neutral_800)",
checkbox_label_background_fill_hover="linear-gradient(to top, *neutral_100, white)",
checkbox_label_background_fill_hover_dark="linear-gradient(to top, *neutral_900, *neutral_800)",
button_primary_background_fill="linear-gradient(to bottom right, *primary_100, *primary_300)",
button_primary_background_fill_dark="linear-gradient(to bottom right, *primary_500, *primary_600)",
button_primary_background_fill_hover="linear-gradient(to bottom right, *primary_100, *primary_200)",
button_primary_background_fill_hover_dark="linear-gradient(to bottom right, *primary_500, *primary_500)",
button_primary_border_color_dark="*primary_500",
button_secondary_background_fill="linear-gradient(to bottom right, *neutral_100, *neutral_200)",
button_secondary_background_fill_dark="linear-gradient(to bottom right, *neutral_600, *neutral_700)",
button_secondary_background_fill_hover="linear-gradient(to bottom right, *neutral_100, *neutral_100)",
button_secondary_background_fill_hover_dark="linear-gradient(to bottom right, *neutral_600, *neutral_600)",
button_cancel_background_fill=f"linear-gradient(to bottom right, {color_er.c100}, {color_er.c200})",
button_cancel_background_fill_dark=f"linear-gradient(to bottom right, {color_er.c600}, {color_er.c700})",
button_cancel_background_fill_hover=f"linear-gradient(to bottom right, {color_er.c100}, {color_er.c100})",
button_cancel_background_fill_hover_dark=f"linear-gradient(to bottom right, {color_er.c600}, {color_er.c600})",
button_cancel_border_color=color_er.c200,
button_cancel_border_color_dark=color_er.c600,
button_cancel_text_color=color_er.c600,
button_cancel_text_color_dark="white",
)
except:
set_theme = None; print('gradio版本较旧, 不能自定义字体和颜色')
return set_theme
advanced_css = """
/* 设置表格的外边距为1em内部单元格之间边框合并空单元格显示. */
.markdown-body table {
margin: 1em 0;
border-collapse: collapse;
empty-cells: show;
}
/* 设置表格单元格的内边距为5px边框粗细为1.2px颜色为--border-color-primary. */
.markdown-body th, .markdown-body td {
border: 1.2px solid var(--border-color-primary);
padding: 5px;
}
/* 设置表头背景颜色为rgba(175,184,193,0.2)透明度为0.2. */
.markdown-body thead {
background-color: rgba(175,184,193,0.2);
}
/* 设置表头单元格的内边距为0.5em和0.2em. */
.markdown-body thead th {
padding: .5em .2em;
}
/* 去掉列表前缀的默认间距使其与文本线对齐. */
.markdown-body ol, .markdown-body ul {
padding-inline-start: 2em !important;
}
/* 设定聊天气泡的样式包括圆角最大宽度和阴影等. */
[class *= "message"] {
border-radius: var(--radius-xl) !important;
/* padding: var(--spacing-xl) !important; */
/* font-size: var(--text-md) !important; */
/* line-height: var(--line-md) !important; */
/* min-height: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); */
/* min-width: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); */
}
[data-testid = "bot"] {
max-width: 95%;
/* width: auto !important; */
border-bottom-left-radius: 0 !important;
}
[data-testid = "user"] {
max-width: 100%;
/* width: auto !important; */
border-bottom-right-radius: 0 !important;
}
/* 行内代码的背景设为淡灰色设定圆角和间距. */
.markdown-body code {
display: inline;
white-space: break-spaces;
border-radius: 6px;
margin: 0 2px 0 2px;
padding: .2em .4em .1em .4em;
background-color: rgba(175,184,193,0.2);
}
/* 设定代码块的样式包括背景颜色外边距圆角 */
.markdown-body pre code {
display: block;
overflow: auto;
white-space: pre;
background-color: rgba(175,184,193,0.2);
border-radius: 10px;
padding: 1em;
margin: 1em 2em 1em 0.5em;
}
"""

344
toolbox.py Normal file
View File

@ -0,0 +1,344 @@
import markdown, mdtex2html, threading, importlib, traceback, importlib, inspect, re
from show_math import convert as convert_math
from functools import wraps, lru_cache
def get_reduce_token_percent(text):
try:
# text = "maximum context length is 4097 tokens. However, your messages resulted in 4870 tokens"
pattern = r"(\d+)\s+tokens\b"
match = re.findall(pattern, text)
EXCEED_ALLO = 500 # leave some headroom, otherwise the reply fails because too little token budget remains
max_limit = float(match[0]) - EXCEED_ALLO
current_tokens = float(match[1])
ratio = max_limit/current_tokens
assert ratio > 0 and ratio < 1
return ratio, str(int(current_tokens-max_limit))
except:
return 0.5, '不详'
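A hedged example of what this helper recovers from a typical OpenAI token-overflow message (the numbers follow the sample text in the comment above):

``` python
msg = ("This model's maximum context length is 4097 tokens. "
       "However, your messages resulted in 4870 tokens.")
ratio, n_exceed = get_reduce_token_percent(msg)
# ratio    ≈ (4097 - 500) / 4870 ≈ 0.74  -> keep roughly 74% of the text
# n_exceed == "1273"                     -> tokens that must be trimmed, shown to the user
```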
def predict_no_ui_but_counting_down(i_say, i_say_show_user, chatbot, top_p, temperature, history=[], sys_prompt='', long_connection=True):
"""
Calls the simple predict_no_ui interface, but still keeps a bit of UI heartbeat; when the conversation gets too long it is automatically truncated by bisection.
i_say: the current input
i_say_show_user: the current input as shown in the chat UI (e.g. when feeding in a whole file you definitely do not want to dump the file's content into the chat window)
chatbot: handle to the chat UI
top_p, temperature: gpt parameters
history: gpt parameter, the conversation history
sys_prompt: gpt parameter, the system prompt
long_connection: whether to use the more stable connection mode (recommended)
"""
import time
from predict import predict_no_ui, predict_no_ui_long_connection
from toolbox import get_conf
TIMEOUT_SECONDS, MAX_RETRY = get_conf('TIMEOUT_SECONDS', 'MAX_RETRY')
# with multiple threads we need a mutable structure to pass information between them;
# a list is the simplest mutable structure: slot 0 holds the gpt output, slot 1 carries any error message
mutable = [None, '']
# multi-threading worker
def mt(i_say, history):
while True:
try:
if long_connection:
mutable[0] = predict_no_ui_long_connection(inputs=i_say, top_p=top_p, temperature=temperature, history=history, sys_prompt=sys_prompt)
else:
mutable[0] = predict_no_ui(inputs=i_say, top_p=top_p, temperature=temperature, history=history, sys_prompt=sys_prompt)
break
except ConnectionAbortedError as token_exceeded_error:
# try to compute a ratio so that as much text as possible is kept
p_ratio, n_exceed = get_reduce_token_percent(str(token_exceeded_error))
if len(history) > 0:
history = [his[ int(len(his) *p_ratio): ] for his in history if his is not None]
else:
i_say = i_say[: int(len(i_say) *p_ratio) ]
mutable[1] = f'警告文本过长将进行截断Token溢出数{n_exceed},截断比例:{(1-p_ratio):.0%}'
except TimeoutError as e:
mutable[0] = '[Local Message] 请求超时。'
raise TimeoutError
except Exception as e:
mutable[0] = f'[Local Message] 异常:{str(e)}.'
raise RuntimeError(f'[Local Message] 异常:{str(e)}.')
# spawn a new thread to send the http request
thread_name = threading.Thread(target=mt, args=(i_say, history)); thread_name.start()
# the original thread keeps updating the UI, implements a timeout countdown, and waits for the worker thread to finish
cnt = 0
while thread_name.is_alive():
cnt += 1
chatbot[-1] = (i_say_show_user, f"[Local Message] {mutable[1]}waiting gpt response {cnt}/{TIMEOUT_SECONDS*2*(MAX_RETRY+1)}"+''.join(['.']*(cnt%4)))
yield chatbot, history, '正常'
time.sleep(1)
# take gpt's output out of the mutable list
gpt_say = mutable[0]
if gpt_say=='[Local Message] Failed with timeout.': raise TimeoutError
return gpt_say
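A hedged sketch of how a function plugin (cf. the crazy_functions modules) typically drives this helper; the plugin name and prompt are illustrative only:

``` python
from toolbox import predict_no_ui_but_counting_down

def example_plugin(txt, top_p, temperature, chatbot, history, systemPromptTxt, WEB_PORT):
    i_say = f"请对下面的文字写一段中文摘要:{txt}"
    chatbot.append((i_say, "[Local Message] waiting for gpt ..."))
    yield chatbot, history, '正常'
    # `yield from` forwards the UI-heartbeat updates; the helper's return value is the GPT reply
    gpt_say = yield from predict_no_ui_but_counting_down(
        i_say, i_say, chatbot, top_p, temperature, history=[], sys_prompt=systemPromptTxt)
    chatbot[-1] = (i_say, gpt_say)
    history.extend([i_say, gpt_say])
    yield chatbot, history, '正常'
```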
def write_results_to_file(history, file_name=None):
"""
Write the conversation history to a file in Markdown format; if no file name is given, one is generated from the current time.
"""
import os, time
if file_name is None:
# file_name = time.strftime("chatGPT分析报告%Y-%m-%d-%H-%M-%S", time.localtime()) + '.md'
file_name = 'chatGPT分析报告' + time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime()) + '.md'
os.makedirs('./gpt_log/', exist_ok=True)
with open(f'./gpt_log/{file_name}', 'w', encoding = 'utf8') as f:
f.write('# chatGPT 分析报告\n')
for i, content in enumerate(history):
try: # the trigger for this bug has not been found yet; patch it over like this for now
if type(content) != str: content = str(content)
except:
continue
if i%2==0: f.write('## ')
f.write(content)
f.write('\n\n')
res = '以上材料已经被写入' + os.path.abspath(f'./gpt_log/{file_name}')
print(res)
return res
def regular_txt_to_markdown(text):
"""
Convert plain text into Markdown-formatted text
"""
text = text.replace('\n', '\n\n')
text = text.replace('\n\n\n', '\n\n')
text = text.replace('\n\n\n', '\n\n')
return text
def CatchException(f):
"""
Decorator: catch any exception raised in function f, wrap it into a generator, and display it in the chat window
"""
@wraps(f)
def decorated(txt, top_p, temperature, chatbot, history, systemPromptTxt, WEB_PORT):
try:
yield from f(txt, top_p, temperature, chatbot, history, systemPromptTxt, WEB_PORT)
except Exception as e:
from check_proxy import check_proxy
from toolbox import get_conf
proxies, = get_conf('proxies')
tb_str = '```\n' + traceback.format_exc() + '```'
if len(chatbot) == 0: chatbot.append(["插件调度异常","异常原因"])
chatbot[-1] = (chatbot[-1][0], f"[Local Message] 实验性函数调用出错: \n\n{tb_str} \n\n当前代理可用性: \n\n{check_proxy(proxies)}")
yield chatbot, history, f'异常 {e}'
return decorated
def HotReload(f):
"""
Decorator: implement hot-reloading of function plugins
"""
@wraps(f)
def decorated(*args, **kwargs):
fn_name = f.__name__
f_hot_reload = getattr(importlib.reload(inspect.getmodule(f)), fn_name)
yield from f_hot_reload(*args, **kwargs)
return decorated
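A hedged sketch of how these two decorators are combined on a plugin, so that exceptions land in the chat window and source edits take effect without restarting; the plugin body and the registration dict in the last comment are illustrative, not the project's exact functional_crazy.py:

``` python
from toolbox import CatchException, HotReload

@CatchException
def flaky_plugin(txt, top_p, temperature, chatbot, history, systemPromptTxt, WEB_PORT):
    chatbot.append((txt, "plugin started"))
    yield chatbot, history, '正常'
    raise RuntimeError("this traceback is rendered into the chat instead of crashing the app")

# hypothetical registration entry:
# plugins = {"示例插件": {"Function": HotReload(flaky_plugin)}}
```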
def report_execption(chatbot, history, a, b):
"""
Append an error message to the chatbot
"""
chatbot.append((a, b))
history.append(a); history.append(b)
def text_divide_paragraph(text):
"""
Split the text on paragraph separators and generate HTML with paragraph tags
"""
if '```' in text:
# careful input
return text
else:
# wtf input
lines = text.split("\n")
for i, line in enumerate(lines):
lines[i] = lines[i].replace(" ", "&nbsp;")
text = "</br>".join(lines)
return text
def markdown_convertion(txt):
"""
Convert Markdown-formatted text to HTML; if it contains math formulas, convert the formulas to HTML first
"""
pre = '<div class="markdown-body">'
suf = '</div>'
if ('$' in txt) and ('```' not in txt):
return pre + markdown.markdown(txt,extensions=['fenced_code','tables']) + '<br><br>' + markdown.markdown(convert_math(txt, splitParagraphs=False),extensions=['fenced_code','tables']) + suf
else:
return pre + markdown.markdown(txt,extensions=['fenced_code','tables']) + suf
def close_up_code_segment_during_stream(gpt_reply):
"""
If gpt has emitted the opening ``` of a code block mid-stream but not yet the closing ```, append the closing ```
"""
if '```' not in gpt_reply: return gpt_reply
if gpt_reply.endswith('```'): return gpt_reply
# having ruled out the two cases above, count the code-fence markers to see whether a block is still open
segments = gpt_reply.split('```')
n_mark = len(segments) - 1
if n_mark % 2 == 1:
# print('输出代码片段中!')
return gpt_reply+'\n```'
else:
return gpt_reply
def format_io(self, y):
"""
Render the input and output as HTML: paragraph-ize the input part of the last item of y, and convert the Markdown and math formulas in the output part to HTML
"""
if y is None or y == []: return []
i_ask, gpt_reply = y[-1]
i_ask = text_divide_paragraph(i_ask) # 输入部分太自由,预处理一波
gpt_reply = close_up_code_segment_during_stream(gpt_reply) # 当代码输出半截的时候,试着补上后个```
y[-1] = (
None if i_ask is None else markdown.markdown(i_ask, extensions=['fenced_code','tables']),
None if gpt_reply is None else markdown_convertion(gpt_reply)
)
return y
def find_free_port():
"""
Return an unused port that is currently free on this system
"""
import socket
from contextlib import closing
with closing(socket.socket(socket.AF_INET, socket.SOCK_STREAM)) as s:
s.bind(('', 0))
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
return s.getsockname()[1]
def extract_archive(file_path, dest_dir):
import zipfile
import tarfile
import os
# Get the file extension of the input file
file_extension = os.path.splitext(file_path)[1]
# Extract the archive based on its extension
if file_extension == '.zip':
with zipfile.ZipFile(file_path, 'r') as zipobj:
zipobj.extractall(path=dest_dir)
print("Successfully extracted zip archive to {}".format(dest_dir))
elif file_extension in ['.tar', '.gz', '.bz2']:
with tarfile.open(file_path, 'r:*') as tarobj:
tarobj.extractall(path=dest_dir)
print("Successfully extracted tar archive to {}".format(dest_dir))
# third-party dependency: requires `pip install rarfile` beforehand
# in addition, on Windows you also need to install WinRAR and add it to the Path environment variable (e.g. "C:\Program Files\WinRAR") for this to work
elif file_extension == '.rar':
try:
import rarfile
with rarfile.RarFile(file_path) as rf:
rf.extractall(path=dest_dir)
print("Successfully extracted rar archive to {}".format(dest_dir))
except:
print("Rar format requires additional dependencies to install")
return '\n\n需要安装pip install rarfile来解压rar文件'
# third-party dependency: requires `pip install py7zr` beforehand
elif file_extension == '.7z':
try:
import py7zr
with py7zr.SevenZipFile(file_path, mode='r') as f:
f.extractall(path=dest_dir)
print("Successfully extracted 7z archive to {}".format(dest_dir))
except:
print("7z format requires additional dependencies to install")
return '\n\n需要安装pip install py7zr来解压7z文件'
else:
return ''
return ''
def find_recent_files(directory):
"""
me: find files that is created with in one minutes under a directory with python, write a function
gpt: here it is!
"""
import os
import time
current_time = time.time()
one_minute_ago = current_time - 60
recent_files = []
for filename in os.listdir(directory):
file_path = os.path.join(directory, filename)
if file_path.endswith('.log'): continue
created_time = os.path.getctime(file_path)
if created_time >= one_minute_ago:
if os.path.isdir(file_path): continue
recent_files.append(file_path)
return recent_files
def on_file_uploaded(files, chatbot, txt):
if len(files) == 0: return chatbot, txt
import shutil, os, time, glob
from toolbox import extract_archive
try: shutil.rmtree('./private_upload/')
except: pass
time_tag = time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime())
os.makedirs(f'private_upload/{time_tag}', exist_ok=True)
err_msg = ''
for file in files:
file_origin_name = os.path.basename(file.orig_name)
shutil.copy(file.name, f'private_upload/{time_tag}/{file_origin_name}')
err_msg += extract_archive(f'private_upload/{time_tag}/{file_origin_name}',
dest_dir=f'private_upload/{time_tag}/{file_origin_name}.extract')
moved_files = [fp for fp in glob.glob('private_upload/**/*', recursive=True)]
txt = f'private_upload/{time_tag}'
moved_files_str = '\t\n\n'.join(moved_files)
chatbot.append(['我上传了文件,请查收',
f'[Local Message] 收到以下文件: \n\n{moved_files_str}'+
f'\n\n调用路径参数已自动修正到: \n\n{txt}'+
f'\n\n现在您点击任意实验功能时,以上文件将被作为输入参数'+err_msg])
return chatbot, txt
def on_report_generated(files, chatbot):
from toolbox import find_recent_files
report_files = find_recent_files('gpt_log')
if len(report_files) == 0: return files, chatbot
# files.extend(report_files)
chatbot.append(['汇总报告如何远程获取?', '汇总报告已经添加到右侧“文件上传区”(可能处于折叠状态),请查收。'])
return report_files, chatbot
@lru_cache(maxsize=128)
def read_single_conf_with_lru_cache(arg):
try: r = getattr(importlib.import_module('config_private'), arg)
except: r = getattr(importlib.import_module('config'), arg)
# when reading API_KEY, check whether the user forgot to edit the config
if arg=='API_KEY':
# a valid API_KEY is "sk-" followed by 48 upper/lower-case letters and digits
API_MATCH = re.match(r"sk-[a-zA-Z0-9]{48}$", r)
if API_MATCH:
print(f"[API_KEY] 您的 API_KEY 是: {r[:15]}*** API_KEY 导入成功")
else:
assert False, "正确的 API_KEY 是 'sk-' + '48 位大小写字母数字' 的组合请在config文件中修改API密钥, 添加海外代理之后再运行。" + \
"如果您刚更新过代码请确保旧版config_private文件中没有遗留任何新增键值"
if arg=='proxies':
if r is None:
print('[PROXY] 网络代理状态未配置。无代理状态下很可能无法访问。建议检查USE_PROXY选项是否修改。')
else:
print('[PROXY] 网络代理状态:已配置。配置信息如下:', r)
assert isinstance(r, dict), 'proxies格式错误请注意proxies选项的格式不要遗漏括号。'
return r
def get_conf(*args):
# it is recommended to copy config.py to a config_private.py for your own secrets (API key, proxy address, etc.) so they are not accidentally pushed to GitHub and seen by others
res = []
for arg in args:
r = read_single_conf_with_lru_cache(arg)
res.append(r)
return res
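A hedged usage sketch: values come from config_private.py when it exists, otherwise from config.py, and individual lookups are cached by read_single_conf_with_lru_cache.

``` python
proxies, API_URL, TIMEOUT_SECONDS = get_conf('proxies', 'API_URL', 'TIMEOUT_SECONDS')
print(API_URL, TIMEOUT_SECONDS)
```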
def clear_line_break(txt):
txt = txt.replace('\n', ' ')
txt = txt.replace(' ', ' ')
txt = txt.replace(' ', ' ')
return txt

5
version Normal file
View File

@ -0,0 +1,5 @@
{
"version": 2.3,
"show_feature": true,
"new_feature": "修复多线程插件Bug加入版本检查功能。"
}