re-format code with pre-commit

qingxu fu 2023-12-31 19:30:32 +08:00
parent 60ba712131
commit 480516380d
52 changed files with 439 additions and 326 deletions
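Most of the 439 insertions / 326 deletions below are mechanical. The hook configuration itself is not recorded in the repository (this same commit adds `.pre-commit-config.yaml` to `.gitignore`), so the exact hook set is an assumption; but the pattern of the diffs (single quotes normalized to double quotes, long calls exploded one argument per line, two spaces before inline comments, trailing whitespace stripped, end-of-file newlines fixed) matches black plus the standard pre-commit whitespace hooks. A minimal sketch of that kind of rewrite:

```python
# Hedged sketch: reproduce the commit's quote/comment normalization with black.
# Assumes the (gitignored) pre-commit config runs black; the sample input is illustrative.
import black

src = "pattern = r'\\^(\\d+)\\^' # match ^digits^\n"
print(black.format_str(src, mode=black.Mode()))
# -> pattern = r"\^(\d+)\^"  # match ^digits^
```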


@ -34,7 +34,7 @@ body:
- Others | 非最新版
validations:
required: true
- type: dropdown
id: os
attributes:
@ -47,7 +47,7 @@ body:
- Docker
validations:
required: true
- type: textarea
id: describe
attributes:
@ -55,7 +55,7 @@ body:
description: Describe the bug | 简述
validations:
required: true
- type: textarea
id: screenshot
attributes:
@ -63,15 +63,9 @@ body:
description: Screen Shot | 有帮助的截图
validations:
required: true
- type: textarea
id: traceback
attributes:
label: Terminal Traceback & Material to Help Reproduce Bugs | 终端traceback如有 + 帮助我们复现的测试材料样本(如有)
description: Terminal Traceback & Material to Help Reproduce Bugs | 终端traceback如有 + 帮助我们复现的测试材料样本(如有)


@ -21,8 +21,3 @@ body:
attributes:
label: Feature Request | 功能请求
description: Feature Request | 功能请求


@ -15,7 +15,7 @@ jobs:
permissions:
issues: write
pull-requests: read
steps:
- uses: actions/stale@v8
with:

.gitignore

@ -152,3 +152,4 @@ request_llms/moss
media
flagged
request_llms/ChatGLM-6b-onnx-u8s8
.pre-commit-config.yaml
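Note that `.pre-commit-config.yaml` is ignored rather than committed, so each contributor keeps a local copy of the hook configuration. A minimal sketch of applying the hooks to the whole tree, as this commit's reformat presumably did (assumes the `pre-commit` CLI is installed and a local config exists):

```python
# Run every configured hook against all files, not just the staged ones.
import subprocess

subprocess.run(["pre-commit", "run", "--all-files"], check=True)
```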


@ -1,7 +1,7 @@
> **Caution**
>
> 2023.11.12: Some dependency packages are not yet compatible with Python 3.12; Python 3.11 is recommended.
>
> 2023.12.26: When installing dependencies, please use the versions **pinned** in `requirements.txt`, with the command `pip install -r requirements.txt`. This project is fully open source and free; you can support its development by subscribing to the [online service](https://github.com/binary-husky/gpt_academic/wiki/online).
<br>
@ -47,7 +47,7 @@ Read this in [English](docs/README.English.md) | [日本語](docs/README.Japanes
>
> 2.The function of every file in this project is documented in the self-analysis report [`self_analysis.md`](https://github.com/binary-husky/gpt_academic/wiki/GPTAcademic项目自译解报告). As versions iterate, you can also click the relevant function plugin at any time to call GPT and regenerate the project's self-analysis report. For common questions, see the wiki.
> [![常规安装方法](https://img.shields.io/static/v1?label=&message=常规安装方法&color=gray)](#installation) [![一键安装脚本](https://img.shields.io/static/v1?label=&message=一键安装脚本&color=gray)](https://github.com/binary-husky/gpt_academic/releases) [![配置说明](https://img.shields.io/static/v1?label=&message=配置说明&color=gray)](https://github.com/binary-husky/gpt_academic/wiki/项目配置说明) [![wiki](https://img.shields.io/static/v1?label=&message=wiki&color=gray)](https://github.com/binary-husky/gpt_academic/wiki)
>
> 3.This project is compatible with, and encourages trying, domestic large language models such as ChatGLM. Multiple api-keys can coexist; fill them into the config file like `API_KEY="openai-key1,openai-key2,azure-key3,api2d-key4"`. To swap the `API_KEY` temporarily, type the temporary `API_KEY` into the input area and press Enter to submit it.
<br><br>
@ -119,7 +119,7 @@ Latex论文一键校对 | [插件] 仿Grammarly对Latex文章进行语法、拼
<br><br>
# Installation
### Installation method I: Run directly (Windows, Linux or MacOS)
1. Download the project
@ -156,7 +156,7 @@ Latex论文一键校对 | [插件] 仿Grammarly对Latex文章进行语法、拼
```sh
# [Optional Step I] Support THU ChatGLM2. Note: if you hit the error "Call ChatGLM fail 不能正常加载ChatGLM的参数", refer to the following: 1. the default installation above is the torch+cpu build; to use cuda, uninstall torch and reinstall torch+cuda; 2. if your machine cannot load the model, lower the model precision in request_llm/bridge_chatglm.py by changing every AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) to AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)
python -m pip install -r request_llms/requirements_chatglm.txt
# [Optional Step II] Support Fudan MOSS
python -m pip install -r request_llms/requirements_moss.txt
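The precision downgrade described in the comment above amounts to pointing both the tokenizer and the model at the int4-quantized checkpoint. A hedged sketch (the checkpoint name comes from the comment; loading via `AutoModel` is an assumption about how the bridge does it):

```python
# Sketch of the int4 fallback for machines that cannot hold the fp16 weights.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)
```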
@ -243,8 +243,8 @@ P.S. 如果需要依赖Latex的插件功能请见Wiki。另外您也可以
```python
"超级英译中": {
# Prefix: added before your input, e.g. to describe your request (translation, code explanation, polishing, ...)
"Prefix": "请翻译把下面一段内容成中文然后用一个markdown表格逐一解释文中出现的专有名词\n\n",
# Suffix: added after your input, e.g. together with the prefix it can wrap your input in quotes
"Suffix": "",
},
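How such an entry is consumed is straightforward: the button simply brackets whatever you typed with the two fields. A minimal sketch (the helper name is hypothetical; only the `Prefix`/`Suffix` fields come from the snippet above):

```python
# Hypothetical helper: a custom button wraps the user's input with its fields.
def apply_button(entry: dict, user_input: str) -> str:
    return entry.get("Prefix", "") + user_input + entry.get("Suffix", "")
```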


@ -129,7 +129,7 @@ services:
runtime: nvidia
devices:
- /dev/nvidia0:/dev/nvidia0
# share the host network
network_mode: "host"
command: >
@ -163,7 +163,7 @@ services:
runtime: nvidia
devices:
- /dev/nvidia0:/dev/nvidia0
# share the host network
network_mode: "host"
@ -229,4 +229,3 @@ services:
# pull the latest code without using the proxy network
command: >
bash -c "python3 -u main.py"


@ -1,2 +1 @@
# This Dockerfile is no longer maintained; please use docs/GithubAction+ChatGLM+Moss instead


@ -1 +1 @@
# This Dockerfile is no longer maintained; please use docs/GithubAction+JittorLLMs instead


@ -15,7 +15,7 @@ WORKDIR /gpt
RUN pip3 install openai numpy arxiv rich
RUN pip3 install colorama Markdown pygments pymupdf
RUN pip3 install python-docx pdfminer
RUN pip3 install nougat-ocr
# Load the project files


@ -2,9 +2,9 @@
> **Note**
>
> This README was translated by GPT (via this project's plugin) and the translation may not be 100% reliable; please review the results carefully.
>
> 2023.11.7: When installing dependencies, please choose the versions **specified** in `requirements.txt`. Install command: `pip install -r requirements.txt`.
# <div align=center><img src="logo.png" width="40"> GPT Academic</div>
@ -12,14 +12,14 @@
**If you like this project, please give it a Star. To translate this project into an arbitrary language with GPT, read and run [`multi_language.py`](multi_language.py) (experimental).**
> **Note**
>
> 1. Please note that only the highlighted plugins (buttons) support reading files, and some plugins sit in a dropdown menu in the plugin area. We also welcome new plugin PRs and handle them with the highest priority.
>
> 2. The function of each file in this project is described in detail in the self-analysis report [`self_analysis.md`](https://github.com/binary-husky/gpt_academic/wiki/GPTAcademic项目自译解报告). You can click a related function plugin at any time to call GPT and regenerate the project's self-analysis report. Common questions: [`wiki`](https://github.com/binary-husky/gpt_academic/wiki). [Standard installation](#installation) | [One-click install script](https://github.com/binary-husky/gpt_academic/releases) | [Configuration instructions](https://github.com/binary-husky/gpt_academic/wiki/项目配置说明).
>
> 3. This project is compatible with, and encourages trying, large domestic language models such as ChatGLM. Multiple API keys can coexist in the config file, e.g. `API_KEY="openai-key1,openai-key2,azure-key3,api2d-key4"`. To swap the `API_KEY` temporarily, enter the temporary `API_KEY` in the input area and press Enter to apply it.
<div align="center">
@ -46,7 +46,7 @@
⭐AutoGen multi-agent plugin | [Plugins] Explore multi-agent emergence with Microsoft AutoGen!
Dark theme | Switch to the dark theme by appending ```/?__theme=dark``` to the end of the browser URL
More LLM model support | Serving GPT3.5, GPT4, [Tsinghua ChatGLM2](https://github.com/THUDM/ChatGLM2-6B) and [Fudan MOSS](https://github.com/OpenLMLab/MOSS) together
⭐ChatGLM2 fine-tuned model | Supports loading a fine-tuned ChatGLM2 model and provides a fine-tuning helper plugin
More LLM models, [HuggingFace deployment](https://huggingface.co/spaces/qingxu98/gpt-academic) support | Newbing (new Bing) interface added, plus the Jittorllms family supporting [LLaMA](https://github.com/facebookresearch/llama) and [盘古α](https://openi.org.cn/pangu/)
⭐void-terminal pip package | Call all of this project's function plugins from Python without the GUI (under development)
⭐Void Terminal plugin | [Plugins] Drive this project's other plugins directly in natural language
@ -200,8 +200,8 @@ docker-compose up
```
"ترجمة سوبر الإنجليزية إلى العربية": {
# Prefix: added before your input, e.g. to describe your request (translation, code explanation, polishing, and so on)
"بادئة": "يرجى ترجمة النص التالي إلى العربية ثم استخدم جدول Markdown لشرح المصطلحات المختصة المذكورة في النص:\n\n",
# Suffix: added after your input; it can be used to wrap your input in quotes
"لاحقة": "",
},
@ -341,4 +341,3 @@ https://github.com/oobabooga/one-click-installers
# More:
https://github.com/gradio-app/gradio
https://github.com/fghrsh/live2d_demo


@ -18,11 +18,11 @@ To translate this project to arbitrary language with GPT, read and run [`multi_l
> 1.Please note that only plugins (buttons) highlighted in **bold** support reading files, and some plugins are located in the **dropdown menu** in the plugin area. Additionally, we welcome and process any new plugins with the **highest priority** through PRs.
>
> 2.The functionalities of each file in this project are described in detail in the [self-analysis report `self_analysis.md`](https://github.com/binary-husky/gpt_academic/wiki/GPTAcademic项目自译解报告). As the version iterates, you can also click on the relevant function plugin at any time to call GPT to regenerate the project's self-analysis report. Common questions are in the [`wiki`](https://github.com/binary-husky/gpt_academic/wiki). [Regular installation method](#installation) | [One-click installation script](https://github.com/binary-husky/gpt_academic/releases) | [Configuration instructions](https://github.com/binary-husky/gpt_academic/wiki/项目配置说明).
>
> 3.This project is compatible with and encourages the use of domestic large-scale language models such as ChatGLM. Multiple api-keys can coexist; fill in the configuration file with `API_KEY="openai-key1,openai-key2,azure-key3,api2d-key4"`. To switch `API_KEY` temporarily, enter the temporary `API_KEY` in the input area and press Enter to apply it.
<div align="center">
@ -126,7 +126,7 @@ python -m pip install -r requirements.txt # This step is the same as the pip ins
【Optional Step】If you need to support THU ChatGLM2 or Fudan MOSS as backends, you need to install additional dependencies (Prerequisites: Familiar with Python + Familiar with Pytorch + Sufficient computer configuration):
```sh
# 【Optional Step I】Support THU ChatGLM2. Note: If you encounter the "Call ChatGLM fail unable to load ChatGLM parameters" error, refer to the following: 1. The default installation above is for torch+cpu version. To use cuda, uninstall torch and reinstall torch+cuda; 2. If the model cannot be loaded due to insufficient local configuration, you can modify the model accuracy in request_llm/bridge_chatglm.py. Change AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) to AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)
python -m pip install -r request_llms/requirements_chatglm.txt
# 【Optional Step II】Support Fudan MOSS
python -m pip install -r request_llms/requirements_moss.txt
@ -204,8 +204,8 @@ For example:
```
"Super Translation": {
# Prefix: will be added before your input. For example, used to describe your request, such as translation, code explanation, proofreading, etc.
"Prefix": "Please translate the following paragraph into Chinese and then explain each proprietary term in the text using a markdown table:\n\n",
"Prefix": "Please translate the following paragraph into Chinese and then explain each proprietary term in the text using a markdown table:\n\n",
# Suffix: will be added after your input. For example, used to wrap your input in quotation marks along with the prefix.
"Suffix": "",
},
@ -355,4 +355,3 @@ https://github.com/oobabooga/one-click-installers
# More:
https://github.com/gradio-app/gradio
https://github.com/fghrsh/live2d_demo


@ -2,9 +2,9 @@
> **Note**
>
> This README was translated by GPT (implemented by this project's plugin) and is not 100% reliable; please review the translation results carefully.
>
> November 7, 2023: When installing dependencies, please choose the versions **specified** in `requirements.txt`. Install command: `pip install -r requirements.txt`.
@ -12,7 +12,7 @@
**If you like this project, please give it a star; if you have invented handy shortcuts or plugins, feel free to send pull requests!**
If you like this project, please give it a star.
To translate this project into an arbitrary language with GPT, read and run [`multi_language.py`](multi_language.py) (experimental).
> **Note**
@ -22,7 +22,7 @@ Pour traduire ce projet dans une langue arbitraire avec GPT, lisez et exécutez
> 2. The functions of each file in this project are detailed in the self-analysis report [`self_analysis.md`](https://github.com/binary-husky/gpt_academic/wiki/GPTAcademic个项目自译解报告). You can also click the relevant function plugins at any time to call GPT and regenerate the project's self-analysis report. FAQ: [wiki](https://github.com/binary-husky/gpt_academic/wiki). [Standard installation method](#installation) | [One-click installation script](https://github.com/binary-husky/gpt_academic/releases) | [Configuration instructions](https://github.com/binary-husky/gpt_academic/wiki/项目配置说明).
>
> 3. This project is compatible with, and recommends experimenting with, large Chinese language models such as ChatGLM. Multiple API keys are supported; fill them into the configuration file like `API_KEY="openai-key1,openai-key2,azure-key3,api2d-key4"`. To change the API key temporarily, enter the temporary API key in the input area, then press Enter to submit it.
<div align="center">
@ -128,7 +128,7 @@ python -m pip install -r requirements.txt # This step is the same as the pip ins
[Optional Steps] If you need to support Tsinghua ChatGLM2/Fudan MOSS as backends, you need to install additional dependencies (Prerequisites: Familiar with Python + Have used PyTorch + Sufficient computer configuration):
```sh
# [Optional Step I] Support Tsinghua ChatGLM2. Comment on this note: If you encounter the error "Call ChatGLM generated an error and cannot load the parameters of ChatGLM", refer to the following: 1: The default installation is the torch+cpu version. To use cuda, you need to uninstall torch and reinstall torch+cuda; 2: If the model cannot be loaded due to insufficient computer configuration, you can modify the model precision in request_llm/bridge_chatglm.py. Change AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) to AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True).
python -m pip install -r request_llms/requirements_chatglm.txt
# [Optional Step II] Support Fudan MOSS
python -m pip install -r request_llms/requirements_moss.txt
@ -201,7 +201,7 @@ Par exemple:
"Traduction avancée de l'anglais vers le français": {
# Prefix, added before your input. For example, use it to describe your request: translation, code explanation, polishing, etc.
"Prefix": "Veuillez traduire le contenu suivant en français, puis expliquer chaque terme propre à la langue anglaise utilisé dans le texte à l'aide d'un tableau markdown : \n\n",
# Suffix, added after your input. For example, together with the prefix it can wrap your input in quotes.
"Suffix": "",
},
@ -354,4 +354,3 @@ https://github.com/oobabooga/one-click-installers
# More
https://github.com/gradio-app/gradio
https://github.com/fghrsh/live2d_demo


@ -2,9 +2,9 @@
> **Note**
>
> This README was generated with GPT translation (by this project's plugin) and is not 100% reliable; please review the translation results carefully.
>
> November 7, 2023: When installing the dependencies, please select only the versions **specified** in `requirements.txt`. Install command: `pip install -r requirements.txt`.
@ -12,19 +12,19 @@
**If you like this project, please give it a star. If you have built handy shortcuts or plugins, pull requests are welcome!**
If you like this project, please give it a star.
To translate this project into an arbitrary language with GPT, read and run [`multi_language.py`](multi_language.py) (experimental).
> **Note**
>
> 1. Please note that only **highlighted** plugins (buttons) can read files; some plugins sit in the **dropdown menu** of the plugin area. We also welcome every new plugin PR with the **highest priority**.
>
> 2. The functions of every file in this project are explained in detail in the [self-analysis report `self_analysis.md`](https://github.com/binary-husky/gpt_academic/wiki/GPT-Academic-Selbstanalysebericht). You can click the relevant function plugins at any time and call GPT to regenerate the project's self-analysis report. FAQs are in the [`wiki`](https://github.com/binary-husky/gpt_academic/wiki). [Standard installation method](#installation) | [One-click installation script](https://github.com/binary-husky/gpt_academic/releases) | [Configuration guide](https://github.com/binary-husky/gpt_academic/wiki/Projekt-Konfigurationsanleitung).
>
> 3. This project is compatible with, and also supports, domestic language models such as ChatGLM. Several API keys can be used at once by listing them in the configuration file, e.g. `API_KEY="openai-key1,openai-key2,azure-key3,api2d-key4"`. To change the `API_KEY` temporarily, enter the temporary `API_KEY` in the input area and press Enter for it to take effect.
<div align="center">
@ -93,7 +93,7 @@ Weitere Funktionen anzeigen (z. B. Bildgenerierung) …… | Siehe das Ende dies
</div>
# Installation
### Installation Method I: Run directly (Windows, Linux or MacOS)
1. Download the project
```sh
@ -128,7 +128,7 @@ python -m pip install -r requirements.txt # This step is the same as installing
[Optional] If you need to support Tsinghua ChatGLM2/Fudan MOSS as the backend, you need to install additional dependencies (Prerequisites: Familiar with Python + Have used PyTorch + Strong computer configuration):
```sh
# [Optional Step I] Support Tsinghua ChatGLM2. Tsinghua ChatGLM note: If you encounter the error "Call ChatGLM fail cannot load ChatGLM parameters normally", refer to the following: 1: The default installation above is torch+cpu version. To use cuda, you need to uninstall torch and reinstall torch+cuda; 2: If you cannot load the model due to insufficient computer configuration, you can modify the model accuracy in request_llm/bridge_chatglm.py. Change AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) to AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)
python -m pip install -r request_llms/requirements_chatglm.txt
# [Optional Step II] Support Fudan MOSS
python -m pip install -r request_llms/requirements_moss.txt
@ -207,8 +207,8 @@ Beispiel:
```
"Übersetzung von Englisch nach Chinesisch": {
# Prefix, added before your input. For example, to describe your request: translate, explain code, polish, etc.
"Präfix": "Bitte übersetzen Sie den folgenden Abschnitt ins Chinesische und erklären Sie dann jedes Fachwort in einer Markdown-Tabelle:\n\n",
# Suffix, added after your input. For example, to wrap your input in quotation marks.
"Suffix": "",
},
@ -361,4 +361,3 @@ https://github.com/oobabooga/one-click-installers
# More:
https://github.com/gradio-app/gradio
https://github.com/fghrsh/live2d_demo


@ -12,7 +12,7 @@
**If you like this project, please give it a star; if you have useful ideas or plugins, open a pull request!**
If you like this project, give it a star.
To translate this project into any language with GPT, read and run [`multi_language.py`](multi_language.py) (experimental).
> **Note**
@ -20,11 +20,11 @@ Per tradurre questo progetto in qualsiasi lingua con GPT, leggi ed esegui [`mult
> 1. Note that only the **highlighted** plugins (buttons) support reading files; some plugins sit in the **dropdown menu** of the plugin area. We also welcome and handle any new plugin PR with **top priority**.
>
> 2. The functions of every file in this project are described in detail in the project's self-translation report [`self_analysis.md`](https://github.com/binary-husky/gpt_academic/wiki/GPTAcademic项目自译解报告). As versions iterate, you can also click the relevant function plugins at any time to call GPT and regenerate the project's self-analysis report. FAQ [`wiki`](https://github.com/binary-husky/gpt_academic/wiki) | [Standard installation method](#installazione) | [One-click installation script](https://github.com/binary-husky/gpt_academic/releases) | [Configuration](https://github.com/binary-husky/gpt_academic/wiki/项目配置说明).
>
> 3. This project is compatible with, and encourages the use of, large domestic language models such as ChatGLM. Multiple API keys can coexist; fill them into the configuration file like `API_KEY="openai-key1,openai-key2,azure-key3,api2d-key4"`. To replace the `API_KEY` temporarily, enter the temporary `API_KEY` in the input area and press Enter to confirm.
<div align="center">
@ -128,7 +128,7 @@ python -m pip install -r requirements.txt # Questo passaggio è identico alla pr
[Optional] If you want to use Tsinghua ChatGLM2 / Fudan MOSS as the backend, extra dependencies must be installed (prerequisites: comfortable with Python + have used PyTorch + a powerful machine):
```sh
# [Optional Step I] Support Tsinghua ChatGLM2. Note: if you hit the error "Call ChatGLM fail, cannot load ChatGLM parameters", refer to the following: 1: the default install is the torch+cpu build; to use cuda, uninstall torch and reinstall the torch+cuda build; 2: if the model cannot be loaded because your machine is under-configured, lower the model precision in request_llm/bridge_chatglm.py by replacing AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) with AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)
python -m pip install -r request_llms/requirements_chatglm.txt
# [Optional Step II] Support Fudan MOSS
python -m pip install -r request_llms/requirements_moss.txt
@ -206,8 +206,8 @@ Ad esempio,
```
"Traduzione avanzata Cinese-Inglese": {
# Prefix, added before your input. For example, to describe your request: translation, code explanation, polishing, etc.
"Prefisso": "Si prega di tradurre il seguente testo in cinese e fornire spiegazione per i termini tecnici utilizzati, utilizzando una tabella in markdown uno per uno:\n\n",
# Suffix, added after your input. For example, combined with the prefix it can wrap your input in quotes.
"Suffisso": "",
},
@ -224,7 +224,7 @@ La scrittura di plugin per questo progetto è facile e richiede solo conoscenze
# Updates
### I: Updates
1. Conversation-saving feature. Call `Save current conversation` in the plugin area to save the current conversation as a readable and restorable HTML file.
In the same plugin area (dropdown menu), call `Load conversation history` to restore a previous conversation.
Tip: clicking `Load conversation history` without specifying a file shows the cached HTML archive.
<div align="center">
@ -358,4 +358,3 @@ https://github.com/oobabooga/one-click-installers
# More resources:
https://github.com/gradio-app/gradio
https://github.com/fghrsh/live2d_demo


@ -2,9 +2,9 @@
> **Note**
>
> This README was generated by GPT translation (implemented by this project's plugin); the translation is not 100% accurate, so please take care.
>
> November 7, 2023: When installing dependencies, please select the versions **specified** in `requirements.txt`. Install command: `pip install -r requirements.txt`
@ -18,11 +18,11 @@ GPTを使用してこのプロジェクトを任意の言語に翻訳するに
> 1. Note that only plugin buttons shown **highlighted** can read files; some plugins are in the dropdown menu of the plugin area. New plugin PRs are welcome and handled with top priority.
>
> 2. The function of each file in this project is described in detail in the self-analysis report [`self_analysis.md`](https://github.com/binary-husky/gpt_academic/wiki/GPTAcademic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E5%A0%82). As versions evolve, you can also click the relevant function plugin at any time to regenerate the project's self-analysis report with GPT. For FAQs see the [`wiki`](https://github.com/binary-husky/gpt_academic/wiki). [Standard installation](#installation) | [One-click install script](https://github.com/binary-husky/gpt_academic/releases) | [Configuration guide](https://github.com/binary-husky/gpt_academic/wiki/Project-Configuration-Explain).
>
> 3. This project is also compatible with, and recommends trying, Chinese-made large language models such as [ChatGLM](https://www.chatglm.dev/). Multiple API keys can coexist; list them in the config file like `API_KEY="openai-key1,openai-key2,azure-key3,api2d-key4"`. To change the `API_KEY` temporarily, enter the temporary `API_KEY` in the input area and press Enter to submit it.
<div align="center">
@ -189,7 +189,7 @@ Python環境に詳しくないWindowsユーザーは、[リリース](https://gi
"超级英译中" {
# プレフィックス、入力の前に追加されます。例えば、要求を記述するために使用されます。翻訳、コードの解説、校正など
"プレフィックス" "下記の内容を中国語に翻訳し、専門用語を一つずつマークダウンテーブルで解説してください:\n\n"、
# サフィックス、入力の後に追加されます。プレフィックスと一緒に使用して、入力内容を引用符で囲むことができます。
"サフィックス" ""、
}、
@ -342,4 +342,3 @@ https://github.com/oobabooga/one-click-installers
# Others:
https://github.com/gradio-app/gradio
https://github.com/fghrsh/live2d_demo


@ -27,7 +27,7 @@ GPT를 사용하여 이 프로젝트를 임의의 언어로 번역하려면 [`mu
<div align="center">
@ -130,7 +130,7 @@ python -m pip install -r requirements.txt # This step is the same as the pip ins
[Optional Step] If you need support for Tsinghua ChatGLM2/Fudan MOSS as the backend, you need to install additional dependencies (Prerequisites: Familiar with Python + Have used Pytorch + Sufficient computer configuration):
```sh
# [Optional Step I] Support for Tsinghua ChatGLM2. Note for Tsinghua ChatGLM: If you encounter the error "Call ChatGLM fail cannot load ChatGLM parameters", refer to the following: 1: The default installation above is torch+cpu version. To use cuda, uninstall torch and reinstall torch+cuda; 2: If you cannot load the model due to insufficient computer configuration, you can modify the model precision in request_llm/bridge_chatglm.py, change AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) to AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)
python -m pip install -r request_llms/requirements_chatglm.txt
# [Optional Step II] Support for Fudan MOSS
python -m pip install -r request_llms/requirements_moss.txt
@ -208,8 +208,8 @@ Please visit the [cloud server remote deployment wiki](https://github.com/binary
```
"초급영문 번역": {
# Prefix: added before your input, e.g. to describe your request (translation, code explanation, proofreading, ...)
"Prefix": "다음 내용을 한국어로 번역하고 전문 용어에 대한 설명을 적용한 마크다운 표를 사용하세요:\n\n",
# Suffix: added after your input; together with the prefix it can wrap your input in quotes.
"Suffix": "",
},
@ -361,4 +361,3 @@ https://github.com/oobabooga/one-click-installers
# More:
https://github.com/gradio-app/gradio
https://github.com/fghrsh/live2d_demo


@ -2,9 +2,9 @@
> **Note**
>
> This README was translated by GPT (implemented by a plugin of this project) and is not 100% reliable; please check the translation results carefully.
>
> November 7, 2023: When installing dependencies, please select the **specified versions** in `requirements.txt`. Install command: `pip install -r requirements.txt`.
# <div align=center><img src="logo.png" width="40"> GPT Academic</div>
@ -15,12 +15,12 @@ Para traduzir este projeto para qualquer idioma utilizando o GPT, leia e execute
> **Note**
>
> 1. Note that only plugins (buttons) marked in **highlight** can read files; some plugins sit in the **dropdown menu** of the plugin area. We also welcome, with top priority, any new plugin submitted via PR.
>
> 2. The functions of each file in this project are explained in detail in the self-analysis [`self_analysis.md`](https://github.com/binary-husky/gpt_academic/wiki/GPTAcademic项目自译解报告). As versions iterate, you can also click the relevant function plugins at any time to call GPT and regenerate the project's self-analysis report. FAQ [`wiki`](https://github.com/binary-husky/gpt_academic/wiki) | [Standard installation method](#installation) | [One-click installation script](https://github.com/binary-husky/gpt_academic/releases) | [Configuration explanation](https://github.com/binary-husky/gpt_academic/wiki/项目配置说明).
>
> 3. This project is compatible with and encourages the use of Chinese language models such as ChatGLM. Multiple api-keys can be used simultaneously, specified in the configuration file like `API_KEY="openai-key1,openai-key2,azure-key3,api2d-key4"`. To change the `API_KEY` temporarily, enter the temporary `API_KEY` in the input area and press Enter for it to take effect.
<div align="center">
Features (⭐ = recently added) | Description
@ -89,7 +89,7 @@ Apresentação de mais novas funcionalidades (geração de imagens, etc.) ... |
</div>
# Installation
### Installation method I: Run directly (Windows, Linux or MacOS)
1. Download the project
```sh
@ -124,7 +124,7 @@ python -m pip install -r requirements.txt # Este passo é igual ao da instalaç
[Optional] If you want support for THU ChatGLM2 / Fudan MOSS, you need to install extra dependencies (prerequisites: familiar with Python + have used PyTorch + a sufficiently powerful machine):
```sh
# [Optional Step I] Support THU ChatGLM2. Note: if you hit the error "Call ChatGLM fail 不能正常加载ChatGLM的参数" (failed to call ChatGLM, cannot load its parameters), refer to the following: 1: the default install is the torch+cpu build; to use cuda, uninstall torch and reinstall a torch+cuda build; 2: if your machine cannot load the model, lower the model precision in request_llm/bridge_chatglm.py by changing every occurrence of AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) to AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)
python -m pip install -r request_llms/requirements_chatglm.txt
# [Optional Step II] Support Fudan MOSS
python -m pip install -r request_llms/requirements_moss.txt
@ -202,8 +202,8 @@ Por exemplo:
```
"超级英译中": {
# Prefix, added before your input. For example, to describe your request: translate, explain code, review, etc.
"Prefix": "Por favor, traduza o parágrafo abaixo para o chinês e explique cada termo técnico dentro de uma tabela markdown:\n\n",
# Suffix, added after your input. For example, together with the prefix it can wrap your input in quotes.
"Suffix": "",
},
@ -355,4 +355,3 @@ https://github.com/oobabooga/one-click-installers
# More:
https://github.com/gradio-app/gradio
https://github.com/fghrsh/live2d_demo


@ -2,9 +2,9 @@
> **Note**
>
> This README was translated with GPT (implemented by this project's plugin) and may not be fully reliable; please check the translation results carefully.
>
> November 7, 2023: When installing dependencies, please select the **specified versions** from `requirements.txt`. Install command: `pip install -r requirements.txt`.
@ -17,12 +17,12 @@
>
> 1. Please note that only plugins (buttons) highlighted in **bold** support reading files; some plugins sit in the **plugins** dropdown menu. We also welcome and process PRs for any new plugins with the **highest priority**.
>
> 2. The functions of every file in this project are described in detail in the project self-analysis report [`self_analysis.md`](https://github.com/binary-husky/gpt_academic/wiki/GPTAcademic项目自译解报告). With each new release you can also click the corresponding function plugin at any time to have GPT regenerate the project's self-analysis summary. FAQ [`wiki`](https://github.com/binary-husky/gpt_academic/wiki) | [standard installation methods](#installation) | [one-click installation script](https://github.com/binary-husky/gpt_academic/releases) | [configuration instructions](https://github.com/binary-husky/gpt_academic/wiki/项目配置说明).
>
> 3. This project is compatible with, and strongly recommends trying, the Chinese NLP model ChatGLM and other large Chinese language models. Multiple API keys can be used at once and listed in the configuration file, e.g. `API_KEY="openai-key1,openai-key2,azure-key3,api2d-key4"`. To replace the `API_KEY` temporarily, enter the temporary `API_KEY` in the input box and press Enter to confirm.
<div align="center">
@ -204,8 +204,8 @@ docker-compose up
```
"Супер-англо-русский перевод": {
# Prefix, added before your input. For example, to describe your request: translation, code explanation, editing, etc.
"Префикс": "Пожалуйста, переведите следующий абзац на русский язык, а затем покажите каждый термин на экране с помощью таблицы Markdown:\n\n",
# Suffix, added after your input. For example, together with the prefix it can wrap your input in quotes.
"Суффикс": "",
},
@ -335,7 +335,7 @@ GPT Academic Группа QQ разработчиков: `610599535`
```
The code draws on designs from many other excellent projects, in no particular order:
# Tsinghua ChatGLM2-6B:
https://github.com/THUDM/ChatGLM2-6B
# Tsinghua JittorLLMs:
@ -358,4 +358,3 @@ https://github.com/oobabooga/one-click-installers
# Больше:
https://github.com/gradio-app/gradio
https://github.com/fghrsh/live2d_demo


@ -17,18 +17,18 @@ nano config.py
- # If you need to run under a secondary path
- # CUSTOM_PATH = get_conf('CUSTOM_PATH')
- # if CUSTOM_PATH != "/":
- # from toolbox import run_gradio_in_subpath
- # run_gradio_in_subpath(demo, auth=AUTHENTICATION, port=PORT, custom_path=CUSTOM_PATH)
- # else:
- # demo.launch(server_name="0.0.0.0", server_port=PORT, auth=AUTHENTICATION, favicon_path="docs/logo.png")
+ If you need to run under a secondary path
+ CUSTOM_PATH = get_conf('CUSTOM_PATH')
+ if CUSTOM_PATH != "/":
+ from toolbox import run_gradio_in_subpath
+ run_gradio_in_subpath(demo, auth=AUTHENTICATION, port=PORT, custom_path=CUSTOM_PATH)
+ else:
+ demo.launch(server_name="0.0.0.0", server_port=PORT, auth=AUTHENTICATION, favicon_path="docs/logo.png")
if __name__ == "__main__":


@ -7,13 +7,27 @@ sample = """
"""
import re
def preprocess_newbing_out(s):
pattern = r'\^(\d+)\^' # match ^digits^
pattern2 = r'\[(\d+)\]' # match [digits]
sub = lambda m: '\['+m.group(1)+'\]' # use the captured digits in the replacement
result = re.sub(pattern, sub, s) # perform the substitution
if '[1]' in result:
result += '<br/><hr style="border-top: dotted 1px #44ac5c;"><br/><small>' + "<br/>".join([re.sub(pattern2, sub, r) for r in result.split('\n') if r.startswith('[')]) + '</small>'
pattern = r"\^(\d+)\^" # 匹配^数字^
pattern2 = r"\[(\d+)\]" # 匹配^数字^
def sub(m):
return "\\[" + m.group(1) + "\\]" # 将匹配到的数字作为替换值
result = re.sub(pattern, sub, s) # 替换操作
if "[1]" in result:
result += (
'<br/><hr style="border-top: dotted 1px #44ac5c;"><br/><small>'
+ "<br/>".join(
[
re.sub(pattern2, sub, r)
for r in result.split("\n")
if r.startswith("[")
]
)
+ "</small>"
)
return result
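A usage sketch of the rewritten helper above: inline `^n^` citation markers become escaped `[n]` references, and when a `[1]` reference is present, the footnote lines are re-escaped and appended inside a `<small>` block after a dotted rule (the sample input is illustrative):

```python
# One inline citation plus its footnote line.
print(preprocess_newbing_out("water is wet^1^\n[1]: some source"))
# -> "water is wet\[1\]" followed by the <hr>-separated <small> footnote block
```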
@ -28,37 +42,39 @@ def close_up_code_segment_during_stream(gpt_reply):
str: returns a new string with the closing ``` of the trailing code segment appended
"""
if '```' not in gpt_reply:
if "```" not in gpt_reply:
return gpt_reply
if gpt_reply.endswith('```'):
if gpt_reply.endswith("```"):
return gpt_reply
# with the two cases above excluded, count the ``` marks
segments = gpt_reply.split('```')
segments = gpt_reply.split("```")
n_mark = len(segments) - 1
if n_mark % 2 == 1:
# print('输出代码片段中!')
return gpt_reply+'\n```'
return gpt_reply + "\n```"
else:
return gpt_reply
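A quick check of the fence-closing logic above: an odd number of ``` marks means a code block is still streaming, so a closing fence is appended; balanced fences pass through untouched:

```python
# One fence mark (odd) -> a closing ``` is appended for safe rendering.
print(close_up_code_segment_during_stream("demo:\n```python\nx = 1"))
# Two fence marks (even) -> the reply is returned unchanged.
print(close_up_code_segment_during_stream("demo:\n```python\nx = 1\n```"))
```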
import markdown
from latex2mathml.converter import convert as tex2mathml
from functools import wraps, lru_cache
def markdown_convertion(txt):
"""
Convert Markdown text to HTML; if it contains math formulas, convert the formulas to HTML first
"""
pre = '<div class="markdown-body">'
suf = '</div>'
suf = "</div>"
if txt.startswith(pre) and txt.endswith(suf):
# print('warning: the input string has already been converted; converting it twice may cause problems')
return txt  # already converted, no need to convert again
markdown_extension_configs = {
'mdx_math': {
'enable_dollar_delimiter': True,
'use_gitlab_delimiters': False,
"mdx_math": {
"enable_dollar_delimiter": True,
"use_gitlab_delimiters": False,
},
}
find_equation_pattern = r'<script type="math/tex(?:.*?)>(.*?)</script>'
@ -72,19 +88,19 @@ def markdown_convertion(txt):
def replace_math_no_render(match):
content = match.group(1)
if 'mode=display' in match.group(0):
content = content.replace('\n', '</br>')
return f"<font color=\"#00FF00\">$$</font><font color=\"#FF00FF\">{content}</font><font color=\"#00FF00\">$$</font>"
if "mode=display" in match.group(0):
content = content.replace("\n", "</br>")
return f'<font color="#00FF00">$$</font><font color="#FF00FF">{content}</font><font color="#00FF00">$$</font>'
else:
return f"<font color=\"#00FF00\">$</font><font color=\"#FF00FF\">{content}</font><font color=\"#00FF00\">$</font>"
return f'<font color="#00FF00">$</font><font color="#FF00FF">{content}</font><font color="#00FF00">$</font>'
def replace_math_render(match):
content = match.group(1)
if 'mode=display' in match.group(0):
if '\\begin{aligned}' in content:
content = content.replace('\\begin{aligned}', '\\begin{array}')
content = content.replace('\\end{aligned}', '\\end{array}')
content = content.replace('&', ' ')
if "mode=display" in match.group(0):
if "\\begin{aligned}" in content:
content = content.replace("\\begin{aligned}", "\\begin{array}")
content = content.replace("\\end{aligned}", "\\end{array}")
content = content.replace("&", " ")
content = tex2mathml_catch_exception(content, display="block")
return content
else:
@ -94,37 +110,58 @@ def markdown_convertion(txt):
"""
Work around an mdx_math bug: an extra <script> appears when $ wraps a begin command
"""
content = content.replace('<script type="math/tex">\n<script type="math/tex; mode=display">', '<script type="math/tex; mode=display">')
content = content.replace('</script>\n</script>', '</script>')
content = content.replace(
'<script type="math/tex">\n<script type="math/tex; mode=display">',
'<script type="math/tex; mode=display">',
)
content = content.replace("</script>\n</script>", "</script>")
return content
if ('$' in txt) and ('```' not in txt): # the text has $-delimited math and no ``` code fences
if ("$" in txt) and ("```" not in txt):  # the text has $-delimited math and no ``` code fences
# convert everything to html format
split = markdown.markdown(text='---')
convert_stage_1 = markdown.markdown(text=txt, extensions=['mdx_math', 'fenced_code', 'tables', 'sane_lists'], extension_configs=markdown_extension_configs)
split = markdown.markdown(text="---")
convert_stage_1 = markdown.markdown(
text=txt,
extensions=["mdx_math", "fenced_code", "tables", "sane_lists"],
extension_configs=markdown_extension_configs,
)
convert_stage_1 = markdown_bug_hunt(convert_stage_1)
# re.DOTALL: Make the '.' special character match any character at all, including a newline; without this flag, '.' will match anything except a newline. Corresponds to the inline flag (?s).
# 1. convert to easy-to-copy tex (do not render math)
convert_stage_2_1, n = re.subn(find_equation_pattern, replace_math_no_render, convert_stage_1, flags=re.DOTALL)
convert_stage_2_1, n = re.subn(
find_equation_pattern,
replace_math_no_render,
convert_stage_1,
flags=re.DOTALL,
)
# 2. convert to rendered equation
convert_stage_2_2, n = re.subn(find_equation_pattern, replace_math_render, convert_stage_1, flags=re.DOTALL)
convert_stage_2_2, n = re.subn(
find_equation_pattern, replace_math_render, convert_stage_1, flags=re.DOTALL
)
# cat them together
return pre + convert_stage_2_1 + f'{split}' + convert_stage_2_2 + suf
return pre + convert_stage_2_1 + f"{split}" + convert_stage_2_2 + suf
else:
return pre + markdown.markdown(txt, extensions=['fenced_code', 'codehilite', 'tables', 'sane_lists']) + suf
return (
pre
+ markdown.markdown(
txt, extensions=["fenced_code", "codehilite", "tables", "sane_lists"]
)
+ suf
)
sample = preprocess_newbing_out(sample)
sample = close_up_code_segment_during_stream(sample)
sample = markdown_convertion(sample)
with open('tmp.html', 'w', encoding='utf8') as f:
f.write("""
with open("tmp.html", "w", encoding="utf8") as f:
f.write(
"""
<head>
<title>My Website</title>
<link rel="stylesheet" type="text/css" href="style.css">
</head>
""")
"""
)
f.write(sample)
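And a usage sketch of the converter's branching: text containing `$`-delimited math and no code fences takes the two-stage mdx_math path (one copy-friendly TeX pass plus one rendered pass, concatenated around a separator); anything else is a plain markdown conversion:

```python
# Math path: $-delimited formula and no ``` fences in the text.
print(markdown_convertion("Euler: $e^{i\\pi} + 1 = 0$"))
# Plain path: no math delimiters, so a straight markdown conversion.
print(markdown_convertion("plain **markdown** text"))
```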


@ -2106,4 +2106,4 @@
"改变输入参数的顺序与结构": "入力パラメータの順序と構造を変更する",
"正在精细切分latex文件": "LaTeXファイルを細かく分割しています",
"读取文件": "ファイルを読み込んでいます"
}


@ -98,4 +98,4 @@
"图片生成_DALLE2": "ImageGeneration_DALLE2",
"图片生成_DALLE3": "ImageGeneration_DALLE3",
"图片修改_DALLE2": "ImageModification_DALLE2"
}


VI. When switching between the two audio-monitoring modes, the page must be refreshed for the change to take effect.
VII. Pitfall: on non-localhost, non-https deployments the recording feature cannot be opened; see https://blog.csdn.net/weixin_39461487/article/details/109594434
## 5. Click "Real-time audio capture" in the function-plugin area, or another audio-interaction feature


@ -8,8 +8,8 @@ try {
live2d_settings['modelId'] = 5; // default model ID
live2d_settings['modelTexturesId'] = 1; // default texture ID
live2d_settings['modelStorage'] = false; // do not persist the model ID
live2d_settings['waifuSize'] = '210x187';
live2d_settings['waifuTipsSize'] = '187x52';
live2d_settings['canSwitchModel'] = true;
live2d_settings['canSwitchTextures'] = true;
live2d_settings['canSwitchHitokoto'] = false;


@ -123,4 +123,4 @@
<glyph unicode="&#xe65e;" d="M512 748.8l211.2 179.2 300.8-198.4-204.8-166.4-307.2 185.6zM1024 396.8l-300.8-198.4-211.2 172.8 300.8 185.6 211.2-160zM300.8 198.4l-300.8 198.4 204.8 166.4 307.2-192-211.2-172.8zM0 729.6l300.8 198.4 211.2-179.2-300.8-192-211.2 172.8zM512 332.8l211.2-179.2 89.6 57.6v-64l-300.8-179.2-300.8 179.2v64l89.6-51.2 211.2 172.8z" />
<glyph unicode="&#xe65f;" d="M864 249.6c-38.4 0-64 32-64 64v256c0 38.4 32 64 64 64 38.4 0 64-32 64-64v-256c0-32-25.6-64-64-64zM697.6 102.4h-38.4v-108.8c0-38.4-25.6-64-57.6-64s-57.6 25.6-57.6 64v108.8h-70.4v-108.8c0-38.4-25.6-64-57.6-64s-57.6 25.6-57.6 64v108.8h-32c-19.2 0-38.4 19.2-38.4 44.8v428.8h448v-422.4c0-32-12.8-51.2-38.4-51.2zM736 633.6h-448c0 89.6 32 153.6 76.8 192l-70.4 83.2c-6.4 12.8-6.4 25.6 0 38.4 12.8 12.8 25.6 12.8 38.4 0l83.2-96c32 12.8 64 19.2 96 19.2s70.4-6.4 96-19.2l83.2 96c12.8 12.8 25.6 12.8 38.4 0s12.8-32 0-38.4l-70.4-83.2c44.8-32 76.8-102.4 76.8-192zM441.6 761.6c-12.8 0-25.6-12.8-25.6-32s12.8-32 25.6-32 25.6 12.8 25.6 32-12.8 32-25.6 32zM582.4 761.6c-12.8 0-25.6-12.8-25.6-32s12.8-32 25.6-32 25.6 19.2 25.6 32-12.8 32-25.6 32zM160 249.6c-38.4 0-64 32-64 64v256c0 38.4 25.6 64 64 64s64-32 64-64v-256c0-32-25.6-64-64-64z" />
<glyph unicode="&#xe660;" d="M921.6 211.2c-32-153.6-115.2-211.2-147.2-249.6-32-25.6-121.6-25.6-153.6-6.4-38.4 25.6-134.4 25.6-166.4 0-44.8-32-115.2-19.2-128-12.8-256 179.2-352 716.8 12.8 774.4 64 12.8 134.4-32 134.4-32 51.2-25.6 70.4-12.8 115.2 6.4 96 44.8 243.2 44.8 313.6-76.8-147.2-96-153.6-294.4 19.2-403.2zM716.8 960c12.8-70.4-64-224-204.8-230.4-12.8 38.4 32 217.6 204.8 230.4z" />
</font></defs></svg>


File diff suppressed because one or more lines are too long


@ -1 +1 @@
https://github.com/fghrsh/live2d_demo


@ -5,11 +5,11 @@ window.live2d_settings = Array(); /*
      '    L/ Live2D 看板娘 参数设置
     /  ,  /|  ,  ,    ', Version 1.4.2
     / /-/  L_ !  i Update 2018.11.12
7  '-!|  |
     !,/7 '0'   ´0i|   |   
     |."  _   ,,,, / |./   | 网页添加 Live2D 看板娘
     '| i.,,__ _,. /  .i  | https://www.fghrsh.net/post/123.html
'| | / k__/レ', . |
       | |/i 〈|/  i ,. | i | Thanks
      .|/ /    !   | journey-ad / https://github.com/journey-ad/live2d_src
        kヽ>   _,.   /! xiazeyu / https://github.com/xiazeyu/live2d-widget.js
@ -77,11 +77,11 @@ String.prototype.render = function(context) {
return this.replace(tokenReg, function (word, slash1, token, slash2) {
if (slash1 || slash2) { return word.replace('\\', ''); }
var variables = token.replace(/\s/g, '').split('.');
var currentObject = context;
var i, length, variable;
for (i = 0, length = variables.length; i < length; ++i) {
variable = variables[i];
currentObject = currentObject[variable];
@ -101,9 +101,9 @@ function showMessage(text, timeout, flag) {
if(flag || sessionStorage.getItem('waifu-text') === '' || sessionStorage.getItem('waifu-text') === null){
if(Array.isArray(text)) text = text[Math.floor(Math.random() * text.length + 1)-1];
if (live2d_settings.showF12Message) console.log('[Message]', text.replace(/<[^<>]+>/g,''));
if(flag) sessionStorage.setItem('waifu-text', text);
$('.waifu-tips').stop();
$('.waifu-tips').html(text).fadeTo(200, 1);
if (timeout === undefined) timeout = 5000;
@ -121,15 +121,15 @@ function hideMessage(timeout) {
function initModel(waifuPath, type) {
/* console welcome message */
eval(function(p,a,c,k,e,r){e=function(c){return(c<a?'':e(parseInt(c/a)))+((c=c%a)>35?String.fromCharCode(c+29):c.toString(36))};if(!''.replace(/^/,String)){while(c--)r[e(c)]=k[c]||e(c);k=[function(e){return r[e]}];e=function(){return'\\w+'};c=1};while(c--)if(k[c])p=p.replace(new RegExp('\\b'+e(c)+'\\b','g'),k[c]);return p}('8.d(" ");8.d("\\U,.\\y\\5.\\1\\1\\1\\1/\\1,\\u\\2 \\H\\n\\1\\1\\1\\1\\1\\b \', !-\\r\\j-i\\1/\\1/\\g\\n\\1\\1\\1 \\1 \\a\\4\\f\'\\1\\1\\1 L/\\a\\4\\5\\2\\n\\1\\1 \\1 /\\1 \\a,\\1 /|\\1 ,\\1 ,\\1\\1\\1 \',\\n\\1\\1\\1\\q \\1/ /-\\j/\\1\\h\\E \\9 \\5!\\1 i\\n\\1\\1\\1 \\3 \\6 7\\q\\4\\c\\1 \\3\'\\s-\\c\\2!\\t|\\1 |\\n\\1\\1\\1\\1 !,/7 \'0\'\\1\\1 \\X\\w| \\1 |\\1\\1\\1\\n\\1\\1\\1\\1 |.\\x\\"\\1\\l\\1\\1 ,,,, / |./ \\1 |\\n\\1\\1\\1\\1 \\3\'| i\\z.\\2,,A\\l,.\\B / \\1.i \\1|\\n\\1\\1\\1\\1\\1 \\3\'| | / C\\D/\\3\'\\5,\\1\\9.\\1|\\n\\1\\1\\1\\1\\1\\1 | |/i \\m|/\\1 i\\1,.\\6 |\\F\\1|\\n\\1\\1\\1\\1\\1\\1.|/ /\\1\\h\\G \\1 \\6!\\1\\1\\b\\1|\\n\\1\\1\\1 \\1 \\1 k\\5>\\2\\9 \\1 o,.\\6\\2 \\1 /\\2!\\n\\1\\1\\1\\1\\1\\1 !\'\\m//\\4\\I\\g\', \\b \\4\'7\'\\J\'\\n\\1\\1\\1\\1\\1\\1 \\3\'\\K|M,p,\\O\\3|\\P\\n\\1\\1\\1\\1\\1 \\1\\1\\1\\c-,/\\1|p./\\n\\1\\1\\1\\1\\1 \\1\\1\\1\'\\f\'\\1\\1!o,.:\\Q \\R\\S\\T v"+e.V+" / W "+e.N);8.d(" ");',60,60,'|u3000|uff64|uff9a|uff40|u30fd|uff8d||console|uff8a|uff0f|uff3c|uff84|log|live2d_settings|uff70|u00b4|uff49||u2010||u3000_|u3008||_|___|uff72|u2500|uff67|u30cf|u30fc||u30bd|u4ece|u30d8|uff1e|__|u30a4|k_|uff17_|u3000L_|u3000i|uff1a|u3009|uff34|uff70r|u30fdL__||___i|l2dVerDate|u30f3|u30ce|nLive2D|u770b|u677f|u5a18|u304f__|l2dVersion|FGHRSH|u00b40i'.split('|'),0,{}));
/* check for jQuery */
if (typeof($.ajax) != 'function') typeof(jQuery.ajax) == 'function' ? window.$ = jQuery : console.log('[Error] JQuery is not defined.');
/* load the waifu styles */
live2d_settings.waifuSize = live2d_settings.waifuSize.split('x');
live2d_settings.waifuTipsSize = live2d_settings.waifuTipsSize.split('x');
live2d_settings.waifuEdgeSide = live2d_settings.waifuEdgeSide.split(':');
$("#live2d").attr("width",live2d_settings.waifuSize[0]);
$("#live2d").attr("height",live2d_settings.waifuSize[1]);
$(".waifu-tips").width(live2d_settings.waifuTipsSize[0]);
@ -138,32 +138,32 @@ function initModel(waifuPath, type) {
$(".waifu-tips").css("font-size",live2d_settings.waifuFontSize);
$(".waifu-tool").css("font-size",live2d_settings.waifuToolFont);
$(".waifu-tool span").css("line-height",live2d_settings.waifuToolLine);
if (live2d_settings.waifuEdgeSide[0] == 'left') $(".waifu").css("left",live2d_settings.waifuEdgeSide[1]+'px');
else if (live2d_settings.waifuEdgeSide[0] == 'right') $(".waifu").css("right",live2d_settings.waifuEdgeSide[1]+'px');
window.waifuResize = function() { $(window).width() <= Number(live2d_settings.waifuMinWidth.replace('px','')) ? $(".waifu").hide() : $(".waifu").show(); };
if (live2d_settings.waifuMinWidth != 'disable') { waifuResize(); $(window).resize(function() {waifuResize()}); }
try {
if (live2d_settings.waifuDraggable == 'axis-x') $(".waifu").draggable({ axis: "x", revert: live2d_settings.waifuDraggableRevert });
else if (live2d_settings.waifuDraggable == 'unlimited') $(".waifu").draggable({ revert: live2d_settings.waifuDraggableRevert });
else $(".waifu").css("transition", 'all .3s ease-in-out');
} catch(err) { console.log('[Error] JQuery UI is not defined.') }
live2d_settings.homePageUrl = live2d_settings.homePageUrl == 'auto' ? window.location.protocol+'//'+window.location.hostname+'/' : live2d_settings.homePageUrl;
if (window.location.protocol == 'file:' && live2d_settings.modelAPI.substr(0,2) == '//') live2d_settings.modelAPI = 'http:'+live2d_settings.modelAPI;
$('.waifu-tool .fui-home').click(function (){
//window.location = 'https://www.fghrsh.net/';
window.location = live2d_settings.homePageUrl;
});
$('.waifu-tool .fui-info-circle').click(function (){
//window.open('https://imjad.cn/archives/lab/add-dynamic-poster-girl-with-live2d-to-your-blog-02');
window.open(live2d_settings.aboutPageUrl);
});
if (typeof(waifuPath) == "object") loadTipsMessage(waifuPath); else {
$.ajax({
cache: true,
@ -172,7 +172,7 @@ function initModel(waifuPath, type) {
success: function (result){ loadTipsMessage(result); }
});
}
if (!live2d_settings.showToolMenu) $('.waifu-tool').hide();
if (!live2d_settings.canCloseLive2d) $('.waifu-tool .fui-cross').hide();
if (!live2d_settings.canSwitchModel) $('.waifu-tool .fui-eye').hide();
@ -185,7 +185,7 @@ function initModel(waifuPath, type) {
if (waifuPath === undefined) waifuPath = '';
var modelId = localStorage.getItem('modelId');
var modelTexturesId = localStorage.getItem('modelTexturesId');
if (!live2d_settings.modelStorage || modelId == null) {
var modelId = live2d_settings.modelId;
var modelTexturesId = live2d_settings.modelTexturesId;
@ -204,7 +204,7 @@ function loadModel(modelId, modelTexturesId=0) {
function loadTipsMessage(result) {
window.waifu_tips = result;
$.each(result.mouseover, function (index, tips){
$(document).on("mouseover", tips.selector, function (){
var text = getRandText(tips.text);
@ -223,50 +223,50 @@ function loadTipsMessage(result) {
var now = new Date();
var after = tips.date.split('-')[0];
var before = tips.date.split('-')[1] || after;
if((after.split('/')[0] <= now.getMonth()+1 && now.getMonth()+1 <= before.split('/')[0]) &&
(after.split('/')[1] <= now.getDate() && now.getDate() <= before.split('/')[1])){
var text = getRandText(tips.text);
text = text.render({year: now.getFullYear()});
showMessage(text, 6000, true);
}
});
if (live2d_settings.showF12OpenMsg) {
re.toString = function() {
showMessage(getRandText(result.waifu.console_open_msg), 5000, true);
return '';
};
}
if (live2d_settings.showCopyMessage) {
$(document).on('copy', function() {
showMessage(getRandText(result.waifu.copy_message), 5000, true);
});
}
$('.waifu-tool .fui-photo').click(function(){
showMessage(getRandText(result.waifu.screenshot_message), 5000, true);
window.Live2D.captureName = live2d_settings.screenshotCaptureName;
window.Live2D.captureFrame = true;
});
$('.waifu-tool .fui-cross').click(function(){
sessionStorage.setItem('waifu-dsiplay', 'none');
showMessage(getRandText(result.waifu.hidden_message), 1300, true);
window.setTimeout(function() {$('.waifu').hide();}, 1300);
});
window.showWelcomeMessage = function(result) {
showMessage('欢迎使用GPT-Academic', 6000);
}; if (live2d_settings.showWelcomeMessage) showWelcomeMessage(result);
var waifu_tips = result.waifu;
function loadOtherModel() {
var modelId = modelStorageGetItem('modelId');
var modelRandMode = live2d_settings.modelRandMode;
$.ajax({
cache: modelRandMode == 'switch' ? true : false,
url: live2d_settings.modelAPI+modelRandMode+'/?id='+modelId,
@ -279,12 +279,12 @@ function loadTipsMessage(result) {
}
});
}
function loadRandTextures() {
var modelId = modelStorageGetItem('modelId');
var modelTexturesId = modelStorageGetItem('modelTexturesId');
var modelTexturesRandMode = live2d_settings.modelTexturesRandMode;
$.ajax({
cache: modelTexturesRandMode == 'switch' ? true : false,
url: live2d_settings.modelAPI+modelTexturesRandMode+'_textures/?id='+modelId+'-'+modelTexturesId,
@ -297,32 +297,32 @@ function loadTipsMessage(result) {
}
});
}
function modelStorageGetItem(key) { return live2d_settings.modelStorage ? localStorage.getItem(key) : sessionStorage.getItem(key); }
/* detect user activity and show a hitokoto quote when idle */
if (live2d_settings.showHitokoto) {
window.getActed = false; window.hitokotoTimer = 0; window.hitokotoInterval = false;
$(document).mousemove(function(e){getActed = true;}).keydown(function(){getActed = true;});
setInterval(function(){ if (!getActed) ifActed(); else elseActed(); }, 1000);
}
function ifActed() {
if (!hitokotoInterval) {
hitokotoInterval = true;
hitokotoTimer = window.setInterval(showHitokotoActed, 30000);
}
}
function elseActed() {
getActed = hitokotoInterval = false;
window.clearInterval(hitokotoTimer);
}
function showHitokotoActed() {
if ($(document)[0].visibilityState == 'visible') showHitokoto();
}
function showHitokoto() {
switch(live2d_settings.hitokotoAPI) {
case 'lwl12.com':
@ -366,7 +366,7 @@ function loadTipsMessage(result) {
});
}
}
$('.waifu-tool .fui-eye').click(function (){loadOtherModel()});
$('.waifu-tool .fui-user').click(function (){loadRandTextures()});
$('.waifu-tool .fui-chat').click(function (){showHitokoto()});


@ -31,7 +31,7 @@
},
"model_message": {
"1": ["来自 Potion Maker 的 Pio 酱 ~"],
"2": ["来自 Potion Maker 的 Tia 酱 ~"]
"2": ["来自 Potion Maker 的 Tia 酱 ~"]
},
"hitokoto_api_message": {
"lwl12.com": ["这句一言来自 <span style=\"color:#0099cc;\">『{source}』</span>", ",是 <span style=\"color:#0099cc;\">{creator}</span> 投稿的", "。"],
@ -111,4 +111,4 @@
{ "date": "11/05-11/12", "text": ["今年的<span style=\"color:#0099cc;\">双十一</span>是和谁一起过的呢~"] },
{ "date": "12/20-12/31", "text": ["这几天是<span style=\"color:#0099cc;\">圣诞节</span>,主人肯定又去剁手买买买了~"] }
]
}


@ -287,4 +287,4 @@
}
.fui-user:before {
content: "\e631";
}


@ -32,4 +32,4 @@ P.S. 如果您按照以下步骤成功接入了新的大模型欢迎发Pull R
5. Once the tests pass, make the final changes in `request_llms/bridge_all.py` to integrate your model fully into the framework (smart as you are, a glance at that file will make the required edits obvious)
6. Update the `LLM_MODEL` setting, then run `python main.py` to test the final result


@ -2,4 +2,4 @@ protobuf
cpm_kernels
torch>=1.10
mdtex2html
sentencepiece
sentencepiece

View File

@ -3,4 +3,4 @@ jtorch >= 0.1.3
torch
torchvision
pandas
jieba
jieba

View File

@ -5,4 +5,3 @@ accelerate
matplotlib
huggingface_hub
triton

View File

@ -1 +1 @@
dashscope
dashscope

View File

@ -2,4 +2,4 @@ modelscope
transformers_stream_generator
auto-gptq
optimum
urllib3<2
urllib3<2

View File

@ -1 +1 @@
slack-sdk==3.21.3
slack-sdk==3.21.3

View File

@ -3,12 +3,14 @@
# """
def validate_path():
import os, sys
dir_name = os.path.dirname(__file__)
root_dir_assume = os.path.abspath(os.path.dirname(__file__) + '/..')
root_dir_assume = os.path.abspath(os.path.dirname(__file__) + "/..")
os.chdir(root_dir_assume)
sys.path.append(root_dir_assume)
validate_path() # validate path so you can run from base directory
validate_path() # validate path so you can run from base directory
if __name__ == "__main__":
# from request_llms.bridge_newbingfree import predict_no_ui_long_connection
# from request_llms.bridge_moss import predict_no_ui_long_connection
@ -19,18 +21,18 @@ if __name__ == "__main__":
# from request_llms.bridge_deepseekcoder import predict_no_ui_long_connection
# from request_llms.bridge_qwen_7B import predict_no_ui_long_connection
from request_llms.bridge_qwen_local import predict_no_ui_long_connection
# from request_llms.bridge_spark import predict_no_ui_long_connection
# from request_llms.bridge_zhipu import predict_no_ui_long_connection
# from request_llms.bridge_chatglm3 import predict_no_ui_long_connection
llm_kwargs = {
'max_length': 4096,
'top_p': 1,
'temperature': 1,
"max_length": 4096,
"top_p": 1,
"temperature": 1,
}
result = predict_no_ui_long_connection( inputs="请问什么是质子?",
llm_kwargs=llm_kwargs,
history=["你好", "我好!"],
sys_prompt="")
print('final result:', result)
result = predict_no_ui_long_connection(
inputs="请问什么是质子?", llm_kwargs=llm_kwargs, history=["你好", "我好!"], sys_prompt=""
)
print("final result:", result)

View File

@ -29,16 +29,20 @@ md = """
请随时告诉我您的需求我会尽力提供帮助如果您有任何问题或需要解答的议题请随时提问
"""
def validate_path():
import os, sys
dir_name = os.path.dirname(__file__)
root_dir_assume = os.path.abspath(os.path.dirname(__file__) + '/..')
root_dir_assume = os.path.abspath(os.path.dirname(__file__) + "/..")
os.chdir(root_dir_assume)
sys.path.append(root_dir_assume)
validate_path() # validate path so you can run from base directory
validate_path() # validate path so you can run from base directory
from toolbox import markdown_convertion
html = markdown_convertion(md)
print(html)
with open('test.html', 'w', encoding='utf-8') as f:
f.write(html)
with open("test.html", "w", encoding="utf-8") as f:
f.write(html)

View File

@ -4,16 +4,28 @@
import os, sys
def validate_path(): dir_name = os.path.dirname(__file__); root_dir_assume = os.path.abspath(dir_name + '/..'); os.chdir(root_dir_assume); sys.path.append(root_dir_assume)
validate_path() # 返回项目根路径
def validate_path():
dir_name = os.path.dirname(__file__)
root_dir_assume = os.path.abspath(dir_name + "/..")
os.chdir(root_dir_assume)
sys.path.append(root_dir_assume)
validate_path() # 返回项目根路径
if __name__ == "__main__":
from tests.test_utils import plugin_test
# plugin_test(plugin='crazy_functions.函数动态生成->函数动态生成', main_input='交换图像的蓝色通道和红色通道', advanced_arg={"file_path_arg": "./build/ants.jpg"})
# plugin_test(plugin='crazy_functions.Latex输出PDF结果->Latex翻译中文并重新编译PDF', main_input="2307.07522")
plugin_test(plugin='crazy_functions.Latex输出PDF结果->Latex翻译中文并重新编译PDF', main_input="G:/SEAFILE_LOCAL/50503047/我的资料库/学位/paperlatex/aaai/Fu_8368_with_appendix")
plugin_test(
plugin="crazy_functions.Latex输出PDF结果->Latex翻译中文并重新编译PDF",
main_input="G:/SEAFILE_LOCAL/50503047/我的资料库/学位/paperlatex/aaai/Fu_8368_with_appendix",
)
# plugin_test(plugin='crazy_functions.虚空终端->虚空终端', main_input='修改api-key为sk-jhoejriotherjep')
@ -34,7 +46,7 @@ if __name__ == "__main__":
# plugin_test(plugin='crazy_functions.批量翻译PDF文档_多线程->批量翻译PDF文档', main_input='crazy_functions/test_project/pdf_and_word/aaai.pdf')
# plugin_test(plugin='crazy_functions.谷歌检索小助手->谷歌检索小助手', main_input="https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=auto+reinforcement+learning&btnG=")
# plugin_test(plugin='crazy_functions.总结word文档->总结word文档', main_input="crazy_functions/test_project/pdf_and_word")
# plugin_test(plugin='crazy_functions.下载arxiv论文翻译摘要->下载arxiv论文并翻译摘要', main_input="1812.10695")
@ -53,12 +65,11 @@ if __name__ == "__main__":
# plugin_test(plugin='crazy_functions.知识库文件注入->读取知识库作答', main_input="What is the installation method")
# plugin_test(plugin='crazy_functions.知识库文件注入->读取知识库作答', main_input="远程云服务器部署?")
# plugin_test(plugin='crazy_functions.Latex输出PDF结果->Latex翻译中文并重新编译PDF', main_input="2210.03629")
# advanced_arg = {"advanced_arg":"--llm_to_learn=gpt-3.5-turbo --prompt_prefix='根据下面的服装类型提示想象一个穿着者对这个人外貌、身处的环境、内心世界、人设进行描写。要求100字以内用第二人称。' --system_prompt=''" }
# plugin_test(plugin='crazy_functions.chatglm微调工具->微调数据集生成', main_input='build/dev.json', advanced_arg=advanced_arg)
# advanced_arg = {"advanced_arg":"--pre_seq_len=128 --learning_rate=2e-2 --num_gpus=1 --json_dataset='t_code.json' --ptuning_directory='/home/hmp/ChatGLM2-6B/ptuning' " }
# plugin_test(plugin='crazy_functions.chatglm微调工具->启动微调', main_input='build/dev.json', advanced_arg=advanced_arg)

View File

@ -9,45 +9,52 @@ from functools import wraps
import sys
import os
def chat_to_markdown_str(chat):
result = ""
for i, cc in enumerate(chat):
result += f'\n\n{cc[0]}\n\n{cc[1]}'
if i != len(chat)-1:
result += '\n\n---'
result += f"\n\n{cc[0]}\n\n{cc[1]}"
if i != len(chat) - 1:
result += "\n\n---"
return result
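作为补充,`chat_to_markdown_str` 用法的一个最小示意(对话数据为假设的演示内容):

```python
# 把 chatbot 的 [提问, 回答] 列表拼成 markdown,条目之间用 --- 分隔
chat = [["你好", "你好!有什么可以帮你?"], ["1+1=?", "2"]]
md = chat_to_markdown_str(chat)
# md 形如:"\n\n你好\n\n你好!有什么可以帮你?\n\n---\n\n1+1=?\n\n2"
print(md)
```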
def silence_stdout(func):
@wraps(func)
def wrapper(*args, **kwargs):
_original_stdout = sys.stdout
sys.stdout = open(os.devnull, 'w')
sys.stdout.reconfigure(encoding='utf-8')
sys.stdout = open(os.devnull, "w")
sys.stdout.reconfigure(encoding="utf-8")
for q in func(*args, **kwargs):
sys.stdout = _original_stdout
yield q
sys.stdout = open(os.devnull, 'w')
sys.stdout.reconfigure(encoding='utf-8')
sys.stdout = open(os.devnull, "w")
sys.stdout.reconfigure(encoding="utf-8")
sys.stdout.close()
sys.stdout = _original_stdout
return wrapper
def silence_stdout_fn(func):
@wraps(func)
def wrapper(*args, **kwargs):
_original_stdout = sys.stdout
sys.stdout = open(os.devnull, 'w')
sys.stdout.reconfigure(encoding='utf-8')
sys.stdout = open(os.devnull, "w")
sys.stdout.reconfigure(encoding="utf-8")
result = func(*args, **kwargs)
sys.stdout.close()
sys.stdout = _original_stdout
return result
return wrapper
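作为补充,这两个包装器的用法可以用下面的最小示意说明(`noisy_func` 与 `noisy_gen` 为假设的演示对象):

```python
# silence_stdout 针对生成器:迭代期间静音,每次 yield 前恢复原 stdout
# silence_stdout_fn 针对普通函数:整个调用期间静音
def noisy_func(x):
    print("这行会被重定向到 os.devnull")
    return x * 2

def noisy_gen(n):
    for i in range(n):
        print("生成器内部的打印同样被静音")
        yield i

quiet_func = silence_stdout_fn(noisy_func)
quiet_gen = silence_stdout(noisy_gen)
assert quiet_func(21) == 42
assert list(quiet_gen(3)) == [0, 1, 2]
```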
class VoidTerminal():
class VoidTerminal:
def __init__(self) -> None:
pass
vt = VoidTerminal()
vt.get_conf = silence_stdout_fn(get_conf)
vt.set_conf = silence_stdout_fn(set_conf)
@ -56,9 +63,27 @@ vt.get_plugin_handle = silence_stdout_fn(get_plugin_handle)
vt.get_plugin_default_kwargs = silence_stdout_fn(get_plugin_default_kwargs)
vt.get_chat_handle = silence_stdout_fn(get_chat_handle)
vt.get_chat_default_kwargs = silence_stdout_fn(get_chat_default_kwargs)
vt.chat_to_markdown_str = (chat_to_markdown_str)
proxies, WEB_PORT, LLM_MODEL, CONCURRENT_COUNT, AUTHENTICATION, CHATBOT_HEIGHT, LAYOUT, API_KEY = \
vt.get_conf('proxies', 'WEB_PORT', 'LLM_MODEL', 'CONCURRENT_COUNT', 'AUTHENTICATION', 'CHATBOT_HEIGHT', 'LAYOUT', 'API_KEY')
vt.chat_to_markdown_str = chat_to_markdown_str
(
proxies,
WEB_PORT,
LLM_MODEL,
CONCURRENT_COUNT,
AUTHENTICATION,
CHATBOT_HEIGHT,
LAYOUT,
API_KEY,
) = vt.get_conf(
"proxies",
"WEB_PORT",
"LLM_MODEL",
"CONCURRENT_COUNT",
"AUTHENTICATION",
"CHATBOT_HEIGHT",
"LAYOUT",
"API_KEY",
)
def plugin_test(main_input, plugin, advanced_arg=None, debug=True):
from rich.live import Live
@ -69,9 +94,9 @@ def plugin_test(main_input, plugin, advanced_arg=None, debug=True):
plugin = vt.get_plugin_handle(plugin)
plugin_kwargs = vt.get_plugin_default_kwargs()
plugin_kwargs['main_input'] = main_input
plugin_kwargs["main_input"] = main_input
if advanced_arg is not None:
plugin_kwargs['plugin_kwargs'] = advanced_arg
plugin_kwargs["plugin_kwargs"] = advanced_arg
if debug:
my_working_plugin = plugin(**plugin_kwargs)
else:
@ -81,4 +106,4 @@ def plugin_test(main_input, plugin, advanced_arg=None, debug=True):
for cookies, chat, hist, msg in my_working_plugin:
md_str = vt.chat_to_markdown_str(chat)
md = Markdown(md_str)
live.update(md, refresh=True)
live.update(md, refresh=True)

View File

@ -4,14 +4,25 @@
import os, sys
def validate_path(): dir_name = os.path.dirname(__file__); root_dir_assume = os.path.abspath(dir_name + '/..'); os.chdir(root_dir_assume); sys.path.append(root_dir_assume)
validate_path() # 返回项目根路径
def validate_path():
dir_name = os.path.dirname(__file__)
root_dir_assume = os.path.abspath(dir_name + "/..")
os.chdir(root_dir_assume)
sys.path.append(root_dir_assume)
validate_path() # 返回项目根路径
if __name__ == "__main__":
from tests.test_utils import plugin_test
plugin_test(plugin='crazy_functions.知识库问答->知识库文件注入', main_input="./README.md")
plugin_test(plugin="crazy_functions.知识库问答->知识库文件注入", main_input="./README.md")
plugin_test(plugin='crazy_functions.知识库问答->读取知识库作答', main_input="What is the installation method")
plugin_test(
plugin="crazy_functions.知识库问答->读取知识库作答",
main_input="What is the installation method",
)
plugin_test(plugin='crazy_functions.知识库问答->读取知识库作答', main_input="远程云服务器部署?")
plugin_test(plugin="crazy_functions.知识库问答->读取知识库作答", main_input="远程云服务器部署?")

View File

@ -115,4 +115,4 @@
border: solid;
border-width: thin;
border-top-width: 0;
}
}

View File

@ -197,7 +197,7 @@ function swap_input_area() {
// Swap the elements
parent.insertBefore(element2, element1);
parent.insertBefore(element1, nextSibling);
if (swapped) {swapped = false;}
if (swapped) {swapped = false;}
else {swapped = true;}
}
@ -502,7 +502,7 @@ function minor_ui_adjustment() {
var cur_right = btn_list[always_preserve-1].getBoundingClientRect().right;
if (bar_btn_width.length == 0){
// 首次运行,记录每个按钮的宽度
for (var i = 0; i < btn_list.length; i++) {
for (var i = 0; i < btn_list.length; i++) {
bar_btn_width.push(btn_list[i].getBoundingClientRect().width);
}
}
@ -553,4 +553,4 @@ function GptAcademicJavaScriptInit(LAYOUT = "LEFT-RIGHT") {
});
chatbotObserver.observe(chatbotIndicator, { attributes: true, childList: true, subtree: true });
if (LAYOUT === "LEFT-RIGHT") { chatbotAutoHeight(); }
}
}

View File

@ -17,7 +17,7 @@
--button-primary-text-color-hover: #FFFFFF;
--button-secondary-text-color: #FFFFFF;
--button-secondary-text-color-hover: #FFFFFF;
--border-bottom-right-radius: 0px;
--border-bottom-left-radius: 0px;
@ -51,8 +51,8 @@
--button-primary-border-color-hover: #3cff00;
--button-secondary-border-color: #3cff00;
--button-secondary-border-color-hover: #3cff00;
--body-background-fill: #000000;
--background-fill-primary: #000000;
--background-fill-secondary: #000000;
@ -103,7 +103,7 @@
--button-primary-text-color-hover: #FFFFFF;
--button-secondary-text-color: #FFFFFF;
--button-secondary-text-color-hover: #FFFFFF;
--border-bottom-right-radius: 0px;
@ -138,8 +138,8 @@
--button-primary-border-color-hover: #3cff00;
--button-secondary-border-color: #3cff00;
--button-secondary-border-color-hover: #3cff00;
--body-background-fill: #000000;
--background-fill-primary: #000000;
--background-fill-secondary: #000000;
@ -479,4 +479,3 @@
.dark .codehilite .vi { color: #89DDFF } /* Name.Variable.Instance */
.dark .codehilite .vm { color: #82AAFF } /* Name.Variable.Magic */
.dark .codehilite .il { color: #F78C6C } /* Literal.Number.Integer.Long */

View File

@ -1,18 +1,26 @@
import os
import gradio as gr
from toolbox import get_conf
CODE_HIGHLIGHT, ADD_WAIFU, LAYOUT = get_conf('CODE_HIGHLIGHT', 'ADD_WAIFU', 'LAYOUT')
CODE_HIGHLIGHT, ADD_WAIFU, LAYOUT = get_conf("CODE_HIGHLIGHT", "ADD_WAIFU", "LAYOUT")
theme_dir = os.path.dirname(__file__)
def adjust_theme():
def adjust_theme():
try:
color_er = gr.themes.utils.colors.fuchsia
set_theme = gr.themes.Default(
primary_hue=gr.themes.utils.colors.orange,
neutral_hue=gr.themes.utils.colors.gray,
font=["Helvetica", "Microsoft YaHei", "ui-sans-serif", "sans-serif", "system-ui"],
font_mono=["ui-monospace", "Consolas", "monospace"])
font=[
"Helvetica",
"Microsoft YaHei",
"ui-sans-serif",
"sans-serif",
"system-ui",
],
font_mono=["ui-monospace", "Consolas", "monospace"],
)
set_theme.set(
# Colors
input_background_fill_dark="*neutral_800",
@ -59,9 +67,9 @@ def adjust_theme():
button_cancel_text_color_dark="white",
)
with open(os.path.join(theme_dir, 'common.js'), 'r', encoding='utf8') as f:
with open(os.path.join(theme_dir, "common.js"), "r", encoding="utf8") as f:
js = f"<script>{f.read()}</script>"
# 添加一个萌萌的看板娘
if ADD_WAIFU:
js += """
@ -69,21 +77,26 @@ def adjust_theme():
<script src="file=docs/waifu_plugin/jquery-ui.min.js"></script>
<script src="file=docs/waifu_plugin/autoload.js"></script>
"""
if not hasattr(gr, 'RawTemplateResponse'):
if not hasattr(gr, "RawTemplateResponse"):
gr.RawTemplateResponse = gr.routes.templates.TemplateResponse
gradio_original_template_fn = gr.RawTemplateResponse
def gradio_new_template_fn(*args, **kwargs):
res = gradio_original_template_fn(*args, **kwargs)
res.body = res.body.replace(b'</html>', f'{js}</html>'.encode("utf8"))
res.body = res.body.replace(b"</html>", f"{js}</html>".encode("utf8"))
res.init_headers()
return res
gr.routes.templates.TemplateResponse = gradio_new_template_fn # override gradio template
gr.routes.templates.TemplateResponse = (
gradio_new_template_fn # override gradio template
)
except Exception:
set_theme = None
print('gradio版本较旧, 不能自定义字体和颜色')
print("gradio版本较旧, 不能自定义字体和颜色")
return set_theme
with open(os.path.join(theme_dir, 'contrast.css'), "r", encoding="utf-8") as f:
with open(os.path.join(theme_dir, "contrast.css"), "r", encoding="utf-8") as f:
advanced_css = f.read()
with open(os.path.join(theme_dir, 'common.css'), "r", encoding="utf-8") as f:
with open(os.path.join(theme_dir, "common.css"), "r", encoding="utf-8") as f:
advanced_css += f.read()

View File

@ -303,4 +303,3 @@
.dark .codehilite .vi { color: #89DDFF } /* Name.Variable.Instance */
.dark .codehilite .vm { color: #82AAFF } /* Name.Variable.Magic */
.dark .codehilite .il { color: #F78C6C } /* Literal.Number.Integer.Long */

View File

@ -1,17 +1,26 @@
import os
import gradio as gr
from toolbox import get_conf
CODE_HIGHLIGHT, ADD_WAIFU, LAYOUT = get_conf('CODE_HIGHLIGHT', 'ADD_WAIFU', 'LAYOUT')
theme_dir = os.path.dirname(__file__)
def adjust_theme():
CODE_HIGHLIGHT, ADD_WAIFU, LAYOUT = get_conf("CODE_HIGHLIGHT", "ADD_WAIFU", "LAYOUT")
theme_dir = os.path.dirname(__file__)
def adjust_theme():
try:
color_er = gr.themes.utils.colors.fuchsia
set_theme = gr.themes.Default(
primary_hue=gr.themes.utils.colors.orange,
neutral_hue=gr.themes.utils.colors.gray,
font=["Helvetica", "Microsoft YaHei", "ui-sans-serif", "sans-serif", "system-ui"],
font_mono=["ui-monospace", "Consolas", "monospace"])
font=[
"Helvetica",
"Microsoft YaHei",
"ui-sans-serif",
"sans-serif",
"system-ui",
],
font_mono=["ui-monospace", "Consolas", "monospace"],
)
set_theme.set(
# Colors
input_background_fill_dark="*neutral_800",
@ -58,7 +67,7 @@ def adjust_theme():
button_cancel_text_color_dark="white",
)
with open(os.path.join(theme_dir, 'common.js'), 'r', encoding='utf8') as f:
with open(os.path.join(theme_dir, "common.js"), "r", encoding="utf8") as f:
js = f"<script>{f.read()}</script>"
# 添加一个萌萌的看板娘
@ -68,21 +77,26 @@ def adjust_theme():
<script src="file=docs/waifu_plugin/jquery-ui.min.js"></script>
<script src="file=docs/waifu_plugin/autoload.js"></script>
"""
if not hasattr(gr, 'RawTemplateResponse'):
if not hasattr(gr, "RawTemplateResponse"):
gr.RawTemplateResponse = gr.routes.templates.TemplateResponse
gradio_original_template_fn = gr.RawTemplateResponse
def gradio_new_template_fn(*args, **kwargs):
res = gradio_original_template_fn(*args, **kwargs)
res.body = res.body.replace(b'</html>', f'{js}</html>'.encode("utf8"))
res.body = res.body.replace(b"</html>", f"{js}</html>".encode("utf8"))
res.init_headers()
return res
gr.routes.templates.TemplateResponse = gradio_new_template_fn # override gradio template
gr.routes.templates.TemplateResponse = (
gradio_new_template_fn # override gradio template
)
except Exception:
set_theme = None
print('gradio版本较旧, 不能自定义字体和颜色')
print("gradio版本较旧, 不能自定义字体和颜色")
return set_theme
with open(os.path.join(theme_dir, 'default.css'), "r", encoding="utf-8") as f:
with open(os.path.join(theme_dir, "default.css"), "r", encoding="utf-8") as f:
advanced_css = f.read()
with open(os.path.join(theme_dir, 'common.css'), "r", encoding="utf-8") as f:
with open(os.path.join(theme_dir, "common.css"), "r", encoding="utf-8") as f:
advanced_css += f.read()

View File

@ -2,29 +2,36 @@ import logging
import os
import gradio as gr
from toolbox import get_conf, ProxyNetworkActivate
CODE_HIGHLIGHT, ADD_WAIFU, LAYOUT = get_conf('CODE_HIGHLIGHT', 'ADD_WAIFU', 'LAYOUT')
CODE_HIGHLIGHT, ADD_WAIFU, LAYOUT = get_conf("CODE_HIGHLIGHT", "ADD_WAIFU", "LAYOUT")
theme_dir = os.path.dirname(__file__)
def dynamic_set_theme(THEME):
set_theme = gr.themes.ThemeClass()
with ProxyNetworkActivate('Download_Gradio_Theme'):
logging.info('正在下载Gradio主题请稍等。')
if THEME.startswith('Huggingface-'): THEME = THEME.lstrip('Huggingface-')
if THEME.startswith('huggingface-'): THEME = THEME.lstrip('huggingface-')
with ProxyNetworkActivate("Download_Gradio_Theme"):
logging.info("正在下载Gradio主题请稍等。")
if THEME.startswith("Huggingface-"):
THEME = THEME.lstrip("Huggingface-")
if THEME.startswith("huggingface-"):
THEME = THEME.lstrip("huggingface-")
set_theme = set_theme.from_hub(THEME.lower())
return set_theme
def adjust_theme():
try:
set_theme = gr.themes.ThemeClass()
with ProxyNetworkActivate('Download_Gradio_Theme'):
logging.info('正在下载Gradio主题请稍等。')
THEME = get_conf('THEME')
if THEME.startswith('Huggingface-'): THEME = THEME.lstrip('Huggingface-')
if THEME.startswith('huggingface-'): THEME = THEME.lstrip('huggingface-')
with ProxyNetworkActivate("Download_Gradio_Theme"):
logging.info("正在下载Gradio主题请稍等。")
THEME = get_conf("THEME")
if THEME.startswith("Huggingface-"):
THEME = THEME.lstrip("Huggingface-")
if THEME.startswith("huggingface-"):
THEME = THEME.lstrip("huggingface-")
set_theme = set_theme.from_hub(THEME.lower())
with open(os.path.join(theme_dir, 'common.js'), 'r', encoding='utf8') as f:
with open(os.path.join(theme_dir, "common.js"), "r", encoding="utf8") as f:
js = f"<script>{f.read()}</script>"
# 添加一个萌萌的看板娘
@ -34,20 +41,26 @@ def adjust_theme():
<script src="file=docs/waifu_plugin/jquery-ui.min.js"></script>
<script src="file=docs/waifu_plugin/autoload.js"></script>
"""
if not hasattr(gr, 'RawTemplateResponse'):
if not hasattr(gr, "RawTemplateResponse"):
gr.RawTemplateResponse = gr.routes.templates.TemplateResponse
gradio_original_template_fn = gr.RawTemplateResponse
def gradio_new_template_fn(*args, **kwargs):
res = gradio_original_template_fn(*args, **kwargs)
res.body = res.body.replace(b'</html>', f'{js}</html>'.encode("utf8"))
res.body = res.body.replace(b"</html>", f"{js}</html>".encode("utf8"))
res.init_headers()
return res
gr.routes.templates.TemplateResponse = gradio_new_template_fn # override gradio template
except Exception as e:
gr.routes.templates.TemplateResponse = (
gradio_new_template_fn # override gradio template
)
except Exception:
set_theme = None
from toolbox import trimmed_format_exc
logging.error('gradio版本较旧, 不能自定义字体和颜色:', trimmed_format_exc())
logging.error("gradio版本较旧, 不能自定义字体和颜色:", trimmed_format_exc())
return set_theme
with open(os.path.join(theme_dir, 'common.css'), "r", encoding="utf-8") as f:
with open(os.path.join(theme_dir, "common.css"), "r", encoding="utf-8") as f:
advanced_css = f.read()

View File

@ -38,4 +38,4 @@ function setSlider() {
window.addEventListener("DOMContentLoaded", () => {
set_elements();
});
});

View File

@ -1,9 +1,11 @@
import os
import gradio as gr
from toolbox import get_conf
CODE_HIGHLIGHT, ADD_WAIFU, LAYOUT = get_conf('CODE_HIGHLIGHT', 'ADD_WAIFU', 'LAYOUT')
CODE_HIGHLIGHT, ADD_WAIFU, LAYOUT = get_conf("CODE_HIGHLIGHT", "ADD_WAIFU", "LAYOUT")
theme_dir = os.path.dirname(__file__)
def adjust_theme():
try:
set_theme = gr.themes.Soft(
@ -50,7 +52,6 @@ def adjust_theme():
c900="#2B2B2B",
c950="#171717",
),
radius_size=gr.themes.sizes.radius_sm,
).set(
button_primary_background_fill="*primary_500",
@ -75,7 +76,7 @@ def adjust_theme():
chatbot_code_background_color_dark="*neutral_950",
)
with open(os.path.join(theme_dir, 'common.js'), 'r', encoding='utf8') as f:
with open(os.path.join(theme_dir, "common.js"), "r", encoding="utf8") as f:
js = f"<script>{f.read()}</script>"
# 添加一个萌萌的看板娘
@ -86,24 +87,29 @@ def adjust_theme():
<script src="file=docs/waifu_plugin/autoload.js"></script>
"""
with open(os.path.join(theme_dir, 'green.js'), 'r', encoding='utf8') as f:
with open(os.path.join(theme_dir, "green.js"), "r", encoding="utf8") as f:
js += f"<script>{f.read()}</script>"
if not hasattr(gr, 'RawTemplateResponse'):
if not hasattr(gr, "RawTemplateResponse"):
gr.RawTemplateResponse = gr.routes.templates.TemplateResponse
gradio_original_template_fn = gr.RawTemplateResponse
def gradio_new_template_fn(*args, **kwargs):
res = gradio_original_template_fn(*args, **kwargs)
res.body = res.body.replace(b'</html>', f'{js}</html>'.encode("utf8"))
res.body = res.body.replace(b"</html>", f"{js}</html>".encode("utf8"))
res.init_headers()
return res
gr.routes.templates.TemplateResponse = gradio_new_template_fn # override gradio template
gr.routes.templates.TemplateResponse = (
gradio_new_template_fn # override gradio template
)
except Exception:
set_theme = None
print('gradio版本较旧, 不能自定义字体和颜色')
print("gradio版本较旧, 不能自定义字体和颜色")
return set_theme
with open(os.path.join(theme_dir, 'green.css'), "r", encoding="utf-8") as f:
with open(os.path.join(theme_dir, "green.css"), "r", encoding="utf-8") as f:
advanced_css = f.read()
with open(os.path.join(theme_dir, 'common.css'), "r", encoding="utf-8") as f:
with open(os.path.join(theme_dir, "common.css"), "r", encoding="utf-8") as f:
advanced_css += f.read()

View File

@ -10,29 +10,33 @@ from toolbox import get_conf
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
"""
def load_dynamic_theme(THEME):
adjust_dynamic_theme = None
if THEME == 'Chuanhu-Small-and-Beautiful':
if THEME == "Chuanhu-Small-and-Beautiful":
from .green import adjust_theme, advanced_css
theme_declaration = "<h2 align=\"center\" class=\"small\">[Chuanhu-Small-and-Beautiful主题]</h2>"
elif THEME == 'High-Contrast':
theme_declaration = (
'<h2 align="center" class="small">[Chuanhu-Small-and-Beautiful主题]</h2>'
)
elif THEME == "High-Contrast":
from .contrast import adjust_theme, advanced_css
theme_declaration = ""
elif '/' in THEME:
elif "/" in THEME:
from .gradios import adjust_theme, advanced_css
from .gradios import dynamic_set_theme
adjust_dynamic_theme = dynamic_set_theme(THEME)
theme_declaration = ""
else:
from .default import adjust_theme, advanced_css
theme_declaration = ""
return adjust_theme, advanced_css, theme_declaration, adjust_dynamic_theme
adjust_theme, advanced_css, theme_declaration, _ = load_dynamic_theme(get_conf('THEME'))
adjust_theme, advanced_css, theme_declaration, _ = load_dynamic_theme(get_conf("THEME"))
"""
@ -42,26 +46,26 @@ cookie相关工具函数
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
"""
def init_cookie(cookies, chatbot):
# 为每一位访问的用户赋予一个独一无二的uuid编码
cookies.update({'uuid': uuid.uuid4()})
cookies.update({"uuid": uuid.uuid4()})
return cookies
def to_cookie_str(d):
# Pickle the dictionary and encode it as a string
pickled_dict = pickle.dumps(d)
cookie_value = base64.b64encode(pickled_dict).decode('utf-8')
cookie_value = base64.b64encode(pickled_dict).decode("utf-8")
return cookie_value
def from_cookie_str(c):
# Decode the base64-encoded string and unpickle it into a dictionary
pickled_dict = base64.b64decode(c.encode('utf-8'))
pickled_dict = base64.b64decode(c.encode("utf-8"))
return pickle.loads(pickled_dict)
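作为补充,这对函数往返转换的最小示意如下(字典内容为假设的演示数据):

```python
# 字典 -> pickle -> base64 字符串(可存入 cookie),再逆向还原
d = {"uuid": "demo-uuid", "llm_model": "gpt-3.5-turbo"}
cookie_value = to_cookie_str(d)
assert from_cookie_str(cookie_value) == d
# 注意pickle.loads 会执行任意反序列化逻辑,只应解码自己签发的 cookie,
# 切勿用于不可信来源的数据
```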
"""
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
第 3 部分
@ -114,5 +118,3 @@ js_code_for_persistent_cookie_init = """(persistent_cookie) => {
return getCookie("persistent_cookie");
}
"""