diff --git a/.gitignore b/.gitignore
index c4df287..286a67d 100644
--- a/.gitignore
+++ b/.gitignore
@@ -146,9 +146,9 @@ debug*
private*
crazy_functions/test_project/pdf_and_word
crazy_functions/test_samples
-request_llm/jittorllms
+request_llms/jittorllms
multi-language
-request_llm/moss
+request_llms/moss
media
flagged
-request_llm/ChatGLM-6b-onnx-u8s8
+request_llms/ChatGLM-6b-onnx-u8s8
diff --git a/README.md b/README.md
index 77ff15e..8e1e55b 100644
--- a/README.md
+++ b/README.md
@@ -1,24 +1,25 @@
> **Note**
->
-> 2023.10.8: Gradio, Pydantic依赖调整,已修改 `requirements.txt`。请及时**更新代码**,安装依赖时,请严格选择`requirements.txt`中**指定的版本**。
->
-> `pip install -r requirements.txt`
+>
+> 2023.11.12: 紧急修复了endpoint异常的问题。
+>
+> 2023.11.7: 安装依赖时,请选择`requirements.txt`中**指定的版本**。 安装命令:`pip install -r requirements.txt`。本项目开源免费,近期发现有人蔑视开源协议并利用本项目违规圈钱,请提高警惕,谨防上当受骗。
+
#
+
+الوظائف (⭐= وظائف مُضافة حديثًا) | الوصف
+--- | ---
+⭐[دمج نماذج جديدة](https://github.com/binary-husky/gpt_academic/wiki/%E5%A6%82%E4%BD%95%E5%88%87%E6%8D%A2%E6%A8%A1%E5%9E%8B)! | بايدو [تشيان فان](https://cloud.baidu.com/doc/WENXINWORKSHOP/s/Nlks5zkzu) ووينشين يي يان، [تونغيي تشيان ون](https://modelscope.cn/models/qwen/Qwen-7B-Chat/summary)، مختبر شنغهاي للذكاء الاصطناعي [شو شنغ](https://github.com/InternLM/InternLM)، شيونفي [شينغ خوو](https://xinghuo.xfyun.cn/)، [LLaMa2](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)، واجهة Zhipu API، DALLE3
+التدقيق اللغوي، الترجمة، شرح الكود | تدقيق لغوي بنقرة واحدة، ترجمة، البحث عن الأخطاء النحوية في الأبحاث، وشرح الكود
+[اختصارات مخصصة](https://www.bilibili.com/video/BV14s4y1E7jN) | دعم الاختصارات المخصصة
+تصميم قابل للتوسيع | دعم الإضافات القوية المخصصة (الوظائف)، الإضافات قابلة للتحديث بشكل فوري
+[تحليل البرنامج](https://www.bilibili.com/video/BV1cj411A7VW) | [وظائف] التحليل الشجري بناءً على البرنامج من Python/C/C++/Java/Lua/..., أو [التحليل الذاتي](https://www.bilibili.com/video/BV1cj411A7VW)
+قراءة وترجمة الأبحاث | [وظائف] فك تشفير كامل لأوراق البحث بتنسيق LaTeX/PDF وإنشاء مستخلص
+ترجمة وتحسين أوراق اللاتكس | [وظائف] ترجمة أو تحسين الأوراق المكتوبة بلاتكس
+إنشاء تعليقات الدوال دفعة واحدة | [وظائف] إنشاء تعليقات الدوال بدفعة واحدة
+ترجمة Markdown بين اللغتين العربية والإنجليزية | [وظائف] هل رأيت الـ 5 لغات المستخدمة في منشور [README](https://github.com/binary-husky/gpt_academic/blob/master/docs/README_EN.md) ؟
+إنشاء تقرير تحليل الدردشة | [وظائف] إنشاء تقرير ملخص بعد تشغيله
+ترجمة كاملة لأوراق PDF | [وظائف] تحليل الأوراق بتنسيق PDF لتحديد العنوان وملخصها وترجمتها (متعدد الخيوط)
+مساعدة Arxiv | [وظائف] قم بإدخال رابط مقال Arxiv لترجمة الملخص وتحميل ملف PDF
+تصحيح لاتكس بضغطة زر واحدة | [وظائف] تصحيح الأخطاء النحوية والإملائية في أوراق لاتكس، وإخراج ملف PDF للمقارنة جنبًا إلى جنب
+مساعد بحث Google بنسخة محلية | [وظائف] قم بتقديم رابط لصفحة بحث Google Scholar العشوائي حتى يساعدك GPT في كتابة [الأبحاث المتعلقة](https://www.bilibili.com/video/BV1GP411U7Az/)
+تجميع معلومات الويب + GPT | [وظائف] جمع المعلومات من الويب بشكل سهل للرد على الأسئلة لجعل المعلومات محدثة باستمرار
+⭐ترجمة دقيقة لأوراق Arxiv ([Docker](https://github.com/binary-husky/gpt_academic/pkgs/container/gpt_academic_with_latex)) | [وظائف] ترجمة مقالات Arxiv عالية الجودة بنقرة واحدة، أفضل أداة حاليا للترجمة
+⭐[إدخال الصوت الفوري](https://github.com/binary-husky/gpt_academic/blob/master/docs/use_audio.md) | [وظائف] (غير متزامن) استماع الصوت وقطعه تلقائيًا وتحديد وقت الإجابة تلقائيًا
+عرض الصيغ/الصور/الجداول | يمكن عرض الصيغ بشكل [TEX](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png) وأيضًا بتنسيق رسومي، يدعم عرض الصيغ وإبراز الكود
+⭐إضافة AutoGen متعددة الوكلاء | [وظائف] استكشف إمكانيات الذكاء الناشئ متعدد الوكلاء بمساعدة Microsoft AutoGen!
+تبديل الواجهة المُظلمة | يمكنك التبديل إلى الواجهة المظلمة بإضافة ```/?__theme=dark``` إلى نهاية عنوان URL في المتصفح
+دعم المزيد من نماذج LLM | دعم GPT3.5 وGPT4 و[ChatGLM2 من جامعة تسينغهوا](https://github.com/THUDM/ChatGLM2-6B) و[MOSS من جامعة فودان](https://github.com/OpenLMLab/MOSS) في آن واحد
+⭐نموذج ChatGLM2 المضبوط بدقة | يدعم تحميل نماذج ChatGLM2 المضبوطة بدقة ويوفر إضافة مساعدة لضبطها بدقة
+دعم المزيد من نماذج LLM، دعم [نشر HuggingFace](https://huggingface.co/spaces/qingxu98/gpt-academic) | إضافة واجهة Newbing (Bing الجديد)، وإدخال نماذج Jittorllms من تسينغهوا لدعم [LLaMA](https://github.com/facebookresearch/llama) و[盘古α](https://openi.org.cn/pangu/)
+⭐حزمة "void-terminal" للشبكة (pip) | قم بطلب كافة وظائف إضافة هذا المشروع في python بدون واجهة رسومية (قيد التطوير)
+⭐إضافة الطرفية الفارغة (Void Terminal) | [وظائف] باللغة الطبيعية، قم باستدعاء الإضافات الأخرى في المشروع مباشرة
+المزيد من العروض (إنشاء الصور وغيرها)……| شاهد أكثر في نهاية هذا المستند ...
+
+
+
+- واجهة جديدة (عن طريق تعديل الخيار LAYOUT في `config.py` للتبديل بين تخطيط "يمين-يسار" وتخطيط "أعلى-أسفل")
+
+
+Feature (⭐ = Recently Added) | Description
+--- | ---
+⭐[Integrate New Models](https://github.com/binary-husky/gpt_academic/wiki/%E5%A6%82%E4%BD%95%E5%88%87%E6%8D%A2%E6%A8%A1%E5%9E%8B) | Baidu [Qianfan](https://cloud.baidu.com/doc/WENXINWORKSHOP/s/Nlks5zkzu) and Wenxin Yiyan, [Tongyi Qianwen](https://modelscope.cn/models/qwen/Qwen-7B-Chat/summary), Shanghai AI-Lab [Shusheng](https://github.com/InternLM/InternLM), Xunfei [Xinghuo](https://xinghuo.xfyun.cn/), [LLaMa2](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf), Zhipu API, DALLE3
+Proofreading, Translation, Code Explanation | One-click proofreading, translation, searching for grammar errors in papers, explaining code
+[Custom Shortcuts](https://www.bilibili.com/video/BV14s4y1E7jN) | Support for custom shortcuts
+Modular Design | Support for powerful [plugins](https://github.com/binary-husky/gpt_academic/tree/master/crazy_functions), plugins support [hot updates](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
+[Program Profiling](https://www.bilibili.com/video/BV1cj411A7VW) | [Plugin] One-click to profile Python/C/C++/Java/Lua/... project trees or [self-profiling](https://www.bilibili.com/video/BV1cj411A7VW)
+Read Papers, [Translate](https://www.bilibili.com/video/BV1KT411x7Wn) Papers | [Plugin] One-click to interpret full-text latex/pdf papers and generate abstracts
+Full-text Latex [Translation](https://www.bilibili.com/video/BV1nk4y1Y7Js/), [Proofreading](https://www.bilibili.com/video/BV1FT411H7c5/) | [Plugin] One-click translation or proofreading of latex papers
+Batch Comment Generation | [Plugin] One-click batch generation of function comments
+Markdown [Translation](https://www.bilibili.com/video/BV1yo4y157jV/) | [Plugin] Did you see the [README](https://github.com/binary-husky/gpt_academic/blob/master/docs/README_EN.md) in the top five languages?
+Chat Analysis Report Generation | [Plugin] Automatically generates summary reports after running
+[PDF Paper Full-text Translation](https://www.bilibili.com/video/BV1KT411x7Wn) | [Plugin] Extract title & abstract of PDF papers + translate full-text (multi-threaded)
+[Arxiv Helper](https://www.bilibili.com/video/BV1LM4y1279X) | [Plugin] Enter the arxiv article URL to translate the abstract + download PDF with one click
+One-click Proofreading of Latex Papers | [Plugin] Syntax and spelling correction of Latex papers similar to Grammarly + output side-by-side PDF
+[Google Scholar Integration Helper](https://www.bilibili.com/video/BV19L411U7ia) | [Plugin] Given any Google Scholar search page URL, let GPT help you [write related works](https://www.bilibili.com/video/BV1GP411U7Az/)
+Internet Information Aggregation + GPT | [Plugin] One-click to let GPT retrieve information from the Internet to answer questions and keep the information up to date
+⭐Arxiv Paper Fine Translation ([Docker](https://github.com/binary-husky/gpt_academic/pkgs/container/gpt_academic_with_latex)) | [Plugin] One-click [high-quality translation of arxiv papers](https://www.bilibili.com/video/BV1dz4y1v77A/), the best paper translation tool at present
+⭐[Real-time Speech Input](https://github.com/binary-husky/gpt_academic/blob/master/docs/use_audio.md) | [Plugin] Asynchronously [listen to audio](https://www.bilibili.com/video/BV1AV4y187Uy/), automatically segment sentences, and automatically find the best time to answer
+Formula/Image/Table Display | Can simultaneously display formulas in [TeX form and rendered form](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png), support formula and code highlighting
+⭐AutoGen Multi-Agent Plugin | [Plugin] Explore the emergence of multi-agent intelligence with Microsoft AutoGen!
+Start Dark [Theme](https://github.com/binary-husky/gpt_academic/issues/173) | Add ```/?__theme=dark``` to the end of the browser URL to switch to the dark theme
+[More LLM Model Support](https://www.bilibili.com/video/BV1wT411p7yf) | It must be great to be served by GPT3.5, GPT4, [THU ChatGLM2](https://github.com/THUDM/ChatGLM2-6B), and [Fudan MOSS](https://github.com/OpenLMLab/MOSS) at the same time, right?
+⭐ChatGLM2 Fine-tuning Model | Support for loading ChatGLM2 fine-tuning models and providing ChatGLM2 fine-tuning assistant plugins
+More LLM Model Access, support for [huggingface deployment](https://huggingface.co/spaces/qingxu98/gpt-academic) | Add the NewBing interface (New Bing) and introduce Tsinghua [JittorLLMs](https://github.com/Jittor/JittorLLMs) with support for [LLaMA](https://github.com/facebookresearch/llama) and [Pangu](https://openi.org.cn/pangu/)
+⭐[void-terminal](https://github.com/binary-husky/void-terminal) pip package | Use all of this project's function plugins directly in Python without the GUI (under development)
+⭐Void Terminal Plugin | [Plugin] Schedule other plugins of this project directly in natural language
+More New Feature Demonstrations (Image Generation, etc.)...... | See the end of this document ........
+
+
+
+- New interface (modify the LAYOUT option in `config.py` to switch between "left-right layout" and "top-bottom layout")
+
+
+Fonctionnalités (⭐ = fonctionnalité récemment ajoutée) | Description
+--- | ---
+⭐[Intégration de nouveaux modèles](https://github.com/binary-husky/gpt_academic/wiki/如何切换模型) ! | Baidu [Qianfan](https://cloud.baidu.com/doc/WENXINWORKSHOP/s/Nlks5zkzu) et Wenxin Yiyan, [Tongyi Qianwen](https://modelscope.cn/models/qwen/Qwen-7B-Chat/summary), Shanghai AI-Lab [Shusheng](https://github.com/InternLM/InternLM), Xunfei [Xinghuo](https://xinghuo.xfyun.cn/), [LLaMa2](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf), Zhipu API, DALLE3
+Amélioration, traduction, explication du code | Correction, traduction, recherche d'erreurs de syntaxe dans les articles, explication du code
+[Raccourcis personnalisés](https://www.bilibili.com/video/BV14s4y1E7jN) | Prise en charge de raccourcis personnalisés
+Conception modulaire | Prise en charge de plugins puissants personnalisables, prise en charge de la [mise à jour à chaud](https://github.com/binary-husky/gpt_academic/wiki/函数插件指南) des plugins
+[Analyse de programme](https://www.bilibili.com/video/BV1cj411A7VW) | [Plugin] Analyse en profondeur d'un arbre de projets Python/C/C++/Java/Lua/... d'un simple clic ou [auto-analyse](https://www.bilibili.com/video/BV1cj411A7VW)
+Lecture d'articles, traduction d'articles | [Plugin] Lecture automatique des articles LaTeX/PDF et génération du résumé
+Traduction complète de [LaTeX](https://www.bilibili.com/video/BV1nk4y1Y7Js/) ou amélioration de leur qualité | [Plugin] Traduction ou amélioration rapide des articles LaTeX
+Génération de commentaires en masse | [Plugin] Génération facile de commentaires de fonctions
+Traduction [chinois-anglais](https://www.bilibili.com/video/BV1yo4y157jV/) du Markdown | [Plugin] Avez-vous vu le [README](https://github.com/binary-husky/gpt_academic/blob/master/docs/README_EN.md) dans les cinq langues ci-dessus ?
+Génération de rapports d'analyse du chat | [Plugin] Génération automatique d'un rapport récapitulatif après l'exécution du chat
+[Fonction de traduction complète des articles PDF](https://www.bilibili.com/video/BV1KT411x7Wn) | [Plugin] Extraction du titre et du résumé d'un article PDF, ainsi que traduction intégrale (multithreading)
+Assistant Arxiv | [Plugin] Saisissez l'URL d'un article Arxiv pour traduire automatiquement le résumé et télécharger le PDF
+Correction automatique d'articles LaTeX | [Plugin] Correction de la grammaire, de l'orthographe et comparaison avec le PDF correspondant, à la manière de Grammarly
+Assistant Google Scholar | [Plugin] Donner l'URL d'une page de recherche Google Scholar pour obtenir de l'aide sur l'écriture des références
+Agrégation d'informations sur Internet + GPT | [Plugin] Obtenez les informations de l'Internet pour répondre aux questions à l'aide de GPT, afin que les informations ne soient jamais obsolètes
+⭐Traduction détaillée des articles Arxiv ([Docker](https://github.com/binary-husky/gpt_academic/pkgs/container/gpt_academic_with_latex)) | [Plugin] Traduction de haute qualité d'articles Arxiv en un clic, le meilleur outil de traduction d'articles à ce jour
+⭐[Saisie orale en temps réel](https://github.com/binary-husky/gpt_academic/blob/master/docs/use_audio.md) | [Plugin] Écoute asynchrone de l'audio, découpage automatique et recherche automatique du meilleur moment pour répondre
+Affichage des formules, images, tableaux | Affichage simultané de la forme [TeX et rendue](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png) des formules, prise en charge de la mise en évidence des formules et du code
+⭐Plugin AutoGen multi-agents | [Plugin] Explorez les émergences intelligentes à plusieurs agents avec Microsoft AutoGen !
+Activation du [thème sombre](https://github.com/binary-husky/gpt_academic/issues/173) | Ajouter ```/?__theme=dark``` à l'URL du navigateur pour basculer vers le thème sombre
+Prise en charge de plusieurs modèles LLM | Expérimentez avec GPT 3.5, GPT4, [ChatGLM2 de Tsinghua](https://github.com/THUDM/ChatGLM2-6B), [MOSS de Fudan](https://github.com/OpenLMLab/MOSS) simultanément !
+⭐Modèle ChatGLM2 fine-tuned | Chargez et utilisez un modèle fine-tuned de ChatGLM2, disponible avec un plugin d'assistance
+Prise en charge de plus de modèles LLM, déploiement sur [Huggingface](https://huggingface.co/spaces/qingxu98/gpt-academic) | Ajout de l'interface Newbing (nouveau Bing), introduction des [JittorLLMs](https://github.com/Jittor/JittorLLMs) de Tsinghua, support de [LLaMA](https://github.com/facebookresearch/llama) et [PanGuα](https://openi.org.cn/pangu/)
+⭐Paquet pip [void-terminal](https://github.com/binary-husky/void-terminal) | Accédez à toutes les fonctions et plugins de ce projet directement depuis Python (en cours de développement)
+⭐Plugin terminal du vide | [Plugin] Utilisez un langage naturel pour interagir avec les autres plugins du projet
+Affichage de nouvelles fonctionnalités (génération d'images, etc.) …… | Voir à la fin de ce document ……
+
+
+
+- Nouvelle interface (modifiez l'option LAYOUT dans `config.py` pour basculer entre la disposition "gauche-droite" et "haut-bas")
+
+
+Funktionen (⭐= Kürzlich hinzugefügte Funktion) | Beschreibung
+--- | ---
+⭐[Neues Modell integrieren](https://github.com/binary-husky/gpt_academic/wiki/%E5%A6%82%E4%BD%95%E5%88%87%E6%8D%A2%E6%A8%A1%E5%9E%8B)! | Baidu [Qianfan](https://cloud.baidu.com/doc/WENXINWORKSHOP/s/Nlks5zkzu) und Wenxin Yanyi, [Tongyi Qianwen](https://modelscope.cn/models/qwen/Qwen-7B-Chat/summary), Shanghai AI-Lab [Shusheng](https://github.com/InternLM/InternLM), Xunfei [Xinghuo](https://xinghuo.xfyun.cn/), [LLaMa2](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf), Cognitive Graph API, DALLE3
+Verfeinern, Übersetzen, Codierung erläutern | Ein-Klick-Verfeinerung, Übersetzung, Suche nach grammatikalischen Fehlern in wissenschaftlichen Arbeiten, Erklärung von Code
+[Eigene Tastenkombinationen](https://www.bilibili.com/video/BV14s4y1E7jN) definieren | Eigene Tastenkombinationen definieren
+Modulare Gestaltung | Ermöglicht die Verwendung benutzerdefinierter leistungsstarker [Plugins](https://github.com/binary-husky/gpt_academic/tree/master/crazy_functions), Plugins unterstützen [Hot-Reload](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
+[Programmanalyse](https://www.bilibili.com/video/BV1cj411A7VW) | [Plugin] Ermöglicht die Erstellung einer Projekthierarchie für Python/C/C++/Java/Lua/... mit nur einem Klick oder [Selbstanalyse](https://www.bilibili.com/video/BV1cj411A7VW)
+Lesen von Forschungsarbeiten, Übersetzen von Forschungsarbeiten | [Plugin] Ermöglicht eine Umwandlung des gesamten Latex-/PDF-Forschungspapiers mit nur einem Klick und generiert eine Zusammenfassung
+Latex-Übersetzung des vollständigen Textes, Ausbesserung | [Plugin] Ermöglicht eine Übersetzung oder Verbesserung der Latex-Forschungsarbeit mit nur einem Klick
+Erzeugen von Batch-Anmerkungen | [Plugin] Erzeugt Funktionserläuterungen in Stapeln
+Markdown- [En-De-Übersetzung](https://www.bilibili.com/video/BV1yo4y157jV/) | [Plugin] Haben Sie die [README](https://github.com/binary-husky/gpt_academic/blob/master/docs/README_EN.md) in den oben genannten 5 Sprachen gesehen?
+Erzeugen eines Chat-Analyseberichts | [Plugin] Generiert einen zusammenfassenden Bericht nach der Ausführung
+PDF-Textübersetzungsmerkmal | [Plugin] Extrahiert Titel und Zusammenfassung des PDF-Dokuments und übersetzt den vollständigen Text (mehrfädig)
+Arxiv-Assistent | [Plugin] Geben Sie die URL eines Arxiv-Artikels ein, um eine Zusammenfassung zu übersetzen und die PDF-Datei herunterzuladen
+Automatische Überprüfung von Latex-Artikeln | [Plugin] Überprüft die Grammatik und Rechtschreibung von Latex-Artikeln nach dem Vorbild von Grammarly und generiert eine PDF-Vergleichsdatei
+Google Scholar Integration Assistant | [Plugin] Geben Sie eine beliebige URL der Google Scholar-Suchseite ein und lassen Sie GPT Ihre [Verwandten Arbeiten](https://www.bilibili.com/video/BV1GP411U7Az/) schreiben
+Internetinformationsaggregation + GPT | [Plugin] Ermöglicht es GPT, Fragen durch das Durchsuchen des Internets zu beantworten und Informationen immer auf dem neuesten Stand zu halten
+⭐Feine Übersetzung von Arxiv-Artikeln ([Docker](https://github.com/binary-husky/gpt_academic/pkgs/container/gpt_academic_with_latex)) | [Plugin] Übersetzt Arxiv-Artikel [mit hoher Qualität](https://www.bilibili.com/video/BV1dz4y1v77A/) mit einem Klick - das beste Übersetzungstool für wissenschaftliche Artikel
+⭐[Echtzeit-Spracheingabe](https://github.com/binary-husky/gpt_academic/blob/master/docs/use_audio.md) | [Plugin] [Asynchrones Lauschen auf Audio-Eingabe](https://www.bilibili.com/video/BV1AV4y187Uy/), automatisches Zerschneiden des Textes, automatische Suche nach dem richtigen Zeitpunkt zur Beantwortung
+Darstellen von Formeln/Bildern/Tabellen | Zeigt Formeln sowohl in [TEX-](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png)- als auch in gerenderten Formen an, unterstützt Formeln und Code-Hervorhebung
+⭐AutoGen Multi-Agent Plugin | [Plugin] Erforscht die Möglichkeiten des emergenten Verhaltens von Multi-Agent-Systemen mit Microsoft AutoGen!
+Start im Dark-Theme | Um das Dark-Theme zu aktivieren, fügen Sie ```/?__theme=dark``` am Ende der URL im Browser hinzu
+[Mehrsprachige LLM-Modelle](https://www.bilibili.com/video/BV1wT411p7yf) unterstützt | Es ist sicherlich beeindruckend, von GPT3.5, GPT4, [ChatGLM2 der Tsinghua University](https://github.com/THUDM/ChatGLM2-6B), [MOSS der Fudan University](https://github.com/OpenLMLab/MOSS) bedient zu werden, oder?
+⭐ChatGLM2 Feinabstimmungsmodell | Unterstützt das Laden von ChatGLM2-Feinabstimmungsmodellen und bietet Unterstützung für ChatGLM2-Feinabstimmungsassistenten
+Integration weiterer LLM-Modelle, Unterstützung von [Huggingface-Deployment](https://huggingface.co/spaces/qingxu98/gpt-academic) | Hinzufügen der Newbing-Schnittstelle (neues Bing), Einführung der [Jittorllms der Tsinghua University](https://github.com/Jittor/JittorLLMs) zur Unterstützung von LLaMA und PanGu Alpha
+⭐[void-terminal](https://github.com/binary-husky/void-terminal) Pip-Paket | Verwenden Sie das Projekt in Python direkt, indem Sie das gesamte Funktionsplugin verwenden (in Entwicklung)
+⭐Void-Terminal-Plugin | [Plugin] Verwenden Sie natürliche Sprache, um andere Funktionen dieses Projekts direkt zu steuern
+Weitere Funktionen anzeigen (z. B. Bildgenerierung) …… | Siehe das Ende dieses Dokuments ……
+
+
+
+- Neues Interface (Ändern Sie die LAYOUT-Option in der `config.py`, um zwischen "Links-Rechts-Layout" und "Oben-Unten-Layout" zu wechseln)
+
+
+Funzionalità (⭐ = Nuove funzionalità recenti) | Descrizione
+--- | ---
+⭐[Integrazione di nuovi modelli](https://github.com/binary-husky/gpt_academic/wiki/%E5%A6%82%E4%BD%95%E5%88%87%E6%8D%A2%E6%A8%A1%E5%9E%8B)! | Baidu [Qianfan](https://cloud.baidu.com/doc/WENXINWORKSHOP/s/Nlks5zkzu) e [Wenxin](https://cloud.baidu.com/doc/GUIDE/5268.9) Intelligence, [Tongyi Qianwen](https://modelscope.cn/models/qwen/Qwen-7B-Chat/summary), Shanghai AI-Lab [bookbrain](https://github.com/InternLM/InternLM), Xunfei [Xinghuo](https://xinghuo.xfyun.cn/), [LLaMa2](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf), Zhipu API, DALLE3
+Revisione, traduzione, spiegazione del codice | Revisione, traduzione, ricerca errori grammaticali nei documenti e spiegazione del codice con un clic
+[Tasti di scelta rapida personalizzati](https://www.bilibili.com/video/BV14s4y1E7jN) | Supporta tasti di scelta rapida personalizzati
+Design modulare | Supporto per plugin personalizzati potenti, i plugin supportano l'[aggiornamento in tempo reale](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
+[Analisi del codice](https://www.bilibili.com/video/BV1cj411A7VW) | [Plugin] Un clic per analizzare alberi di progetti Python/C/C++/Java/Lua/... o [autoanalisi](https://www.bilibili.com/video/BV1cj411A7VW)
+Lettura di documenti, traduzione di documenti | [Plugin] Un clic per interpretare documenti completi in latex/pdf e generare un riassunto
+Traduzione completa di testi in Latex, revisione completa di testi in Latex | [Plugin] Un clic per tradurre o correggere documenti in latex
+Generazione automatica di commenti in batch | [Plugin] Un clic per generare commenti di funzione in batch
+Traduzione [cinese-inglese](https://www.bilibili.com/video/BV1yo4y157jV/) in Markdown | [Plugin] Hai visto sopra i README in 5 lingue diverse ([Inglese](https://github.com/binary-husky/gpt_academic/blob/master/docs/README_EN.md))?
+Generazione di rapporti di analisi chat | [Plugin] Genera automaticamente un rapporto di sintesi dopo l'esecuzione
+Funzionalità di traduzione di testo completo in PDF | [Plugin] Estrai il titolo e il riassunto dei documenti PDF e traduci tutto il testo (multithreading)
+Aiutante per Arxiv | [Plugin] Inserisci l'URL dell'articolo Arxiv per tradurre riassunto e scaricare PDF in un clic
+Controllo completo dei documenti in Latex | [Plugin] Rileva errori grammaticali e ortografici nei documenti in Latex simile a Grammarly + Scarica un PDF per il confronto
+Assistente per Google Scholar | [Plugin] Dato qualsiasi URL della pagina di ricerca di Google Scholar, fai scrivere da GPT gli *articoli correlati* per te
+Concentrazione delle informazioni di Internet + GPT | [Plugin] [Recupera informazioni da Internet](https://www.bilibili.com/video/BV1om4y127ck) utilizzando GPT per rispondere alle domande e rendi le informazioni sempre aggiornate
+⭐Traduzione accurata di articoli Arxiv ([Docker](https://github.com/binary-husky/gpt_academic/pkgs/container/gpt_academic_with_latex)) | [Plugin] [Traduci articoli Arxiv ad alta qualità](https://www.bilibili.com/video/BV1dz4y1v77A/) con un clic, lo strumento di traduzione degli articoli migliore al mondo al momento
+⭐[Inserimento della conversazione vocale in tempo reale](https://github.com/binary-husky/gpt_academic/blob/master/docs/use_audio.md) | [Plugin] [Ascolta l'audio](https://www.bilibili.com/video/BV1AV4y187Uy/) in modo asincrono, taglia automaticamente le frasi e trova automaticamente il momento giusto per rispondere
+Visualizzazione di formule, immagini, tabelle | Mostra contemporaneamente formule in formato tex e renderizzato, supporta formule e evidenziazione del codice
+⭐Plugin multi-agente AutoGen | [Plugin] Esplora le possibilità dell'emergenza intelligence multi-agente con l'aiuto di Microsoft AutoGen!
+Attiva il tema scuro [qui](https://github.com/binary-husky/gpt_academic/issues/173) | Aggiungi ```/?__theme=dark``` alla fine dell'URL del browser per passare al tema scuro
+Supporto di più modelli LLM | Essere servito contemporaneamente da GPT3.5, GPT4, [ChatGLM2 di Tsinghua](https://github.com/THUDM/ChatGLM2-6B), [MOSS di Fudan](https://github.com/OpenLMLab/MOSS)
+⭐Modello di fine-tuning ChatGLM2 | Supporto per l'importazione del modello di fine-tuning di ChatGLM2, fornendo plug-in di assistenza per il fine tuning di ChatGLM2
+Più supporto per modelli LLM, supporto del [deploy di Huggingface](https://huggingface.co/spaces/qingxu98/gpt-academic) | Aggiungi interfaccia Newbing (nuovo Bing), introduce il supporto di [JittorLLMs](https://github.com/Jittor/JittorLLMs) di Tsinghua, supporto per [LLaMA](https://github.com/facebookresearch/llama) e [Panguα](https://openi.org.cn/pangu/)
+⭐Pacchetto pip [void-terminal](https://github.com/binary-husky/void-terminal) | Fornisce funzionalità di tutti i plugin di questo progetto direttamente in Python senza GUI (in sviluppo)
+⭐Plugin terminale virtuale | [Plugin] Richiama altri plugin di questo progetto utilizzando linguaggio naturale
+Altre nuove funzionalità (come la generazione di immagini) ... | Vedi alla fine di questo documento ...
+
+
+
+
+- Nuovo layout (modifica l'opzione LAYOUT in `config.py` per passare tra "layout sinistra / destra" e "layout sopra / sotto")
+
+
+機能(⭐= 最近追加された機能) | 説明
+--- | ---
+⭐[新しいモデルの追加](https://github.com/binary-husky/gpt_academic/wiki/%E5%A6%82%E4%BD%95%E5%88%87%E6%8D%A2%E6%A8%A1%E5%9E%8B)! | Baidu [Qianfan](https://cloud.baidu.com/doc/WENXINWORKSHOP/s/Nlks5zkzu)とWenxin Yiyan, [Tongyi Qianwen](https://modelscope.cn/models/qwen/Qwen-7B-Chat/summary), Shanghai AI-Lab [Shusheng](https://github.com/InternLM/InternLM), Xunfei [Xinghuo](https://xinghuo.xfyun.cn/), [LLaMa2](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf), Zhipu API, DALLE3
+校正、翻訳、コード解説 | 一括校正、翻訳、論文の文法エラーの検索、コードの解説
+[カスタムショートカットキー](https://www.bilibili.com/video/BV14s4y1E7jN) | カスタムショートカットキーのサポート
+モジュール化された設計 | カスタムでパワフルな[プラグイン](https://github.com/binary-husky/gpt_academic/tree/master/crazy_functions)のサポート、プラグインの[ホットリロード](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
+[プログラム解析](https://www.bilibili.com/video/BV1cj411A7VW) | [プラグイン] Python/C/C++/Java/Lua/...のプロジェクトツリーを簡単に解析するか、[自己解析](https://www.bilibili.com/video/BV1cj411A7VW)
+論文の読み込み、[翻訳](https://www.bilibili.com/video/BV1KT411x7Wn) | [プラグイン] LaTeX/PDFの論文全文を翻訳して要約を作成する
+LaTeX全文の[翻訳](https://www.bilibili.com/video/BV1nk4y1Y7Js/)、[校正](https://www.bilibili.com/video/BV1FT411H7c5/) | [プラグイン] LaTeX論文を翻訳や校正する
+一括コメント生成 | [プラグイン] 関数コメントを一括生成する
+Markdownの[日英翻訳](https://www.bilibili.com/video/BV1yo4y157jV/) | [プラグイン] 5つの言語([英語](https://github.com/binary-husky/gpt_academic/blob/master/docs/README_EN.md)など)のREADMEをご覧になりましたか?
+チャット分析レポートの生成 | [プラグイン] 実行後にサマリーレポートを自動生成する
+[PDF論文全文の翻訳機能](https://www.bilibili.com/video/BV1KT411x7Wn) | [プラグイン] PDF論文のタイトルと要約を抽出し、全文を翻訳する(マルチスレッド)
+[Arxivアシスタント](https://www.bilibili.com/video/BV1LM4y1279X) | [プラグイン] arxiv論文のURLを入力すると、要約を翻訳してPDFをダウンロードできます
+LaTeX論文の一括校正 | [プラグイン] Grammarlyのように、LaTeX論文の文法とスペルを修正して対照PDFを出力する
+[Google Scholar統合アシスタント](https://www.bilibili.com/video/BV19L411U7ia) | [プラグイン] 任意のGoogle Scholar検索ページのURLを指定して、関連資料をGPTに書かせることができます
+インターネット情報の集約+GPT | [プラグイン] インターネットから情報を取得して質問に答え、情報が常に最新になるようにします
+⭐Arxiv論文の詳細な翻訳 ([Docker](https://github.com/binary-husky/gpt_academic/pkgs/container/gpt_academic_with_latex)) | [プラグイン] arxiv論文を超高品質で翻訳します。最高の論文翻訳ツールです
+⭐[リアルタイム音声入力](https://github.com/binary-husky/gpt_academic/blob/master/docs/use_audio.md) | [プラグイン] 非同期[音声をリッスン](https://www.bilibili.com/video/BV1AV4y187Uy/)し、自動で文章を区切り、回答のタイミングを自動で探します
+公式/画像/表の表示 | 公式の[tex形式とレンダリング形式](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png)を同時に表示し、公式とコードのハイライトをサポートします
+⭐AutoGenマルチエージェントプラグイン | [プラグイン] Microsoft AutoGenを利用して、マルチエージェントのインテリジェントなエマージェンスを探索します
+ダーク[テーマ](https://github.com/binary-husky/gpt_academic/issues/173)を起動 | ブラウザのURLに```/?__theme=dark```を追加すると、ダークテーマに切り替えられます
+[複数のLLMモデル](https://www.bilibili.com/video/BV1wT411p7yf)のサポート | GPT3.5、GPT4、[Tsinghua ChatGLM2](https://github.com/THUDM/ChatGLM2-6B)、[Fudan MOSS](https://github.com/OpenLMLab/MOSS)などを同時に使えるのは最高の感じですよね?
+⭐ChatGLM2ファインチューニングモデル | ChatGLM2ファインチューニングモデルをロードして使用することができ、ChatGLM2ファインチューニングの補助プラグインが用意されています
+さらなるLLMモデルの導入、[HuggingFaceデプロイのサポート](https://huggingface.co/spaces/qingxu98/gpt-academic) | Newbingインターフェース(新しいBing)の追加、Tsinghua [Jittorllms](https://github.com/Jittor/JittorLLMs)の導入、[LLaMA](https://github.com/facebookresearch/llama)および[盤古α](https://openi.org.cn/pangu/)のサポート
+⭐[void-terminal](https://github.com/binary-husky/void-terminal) pipパッケージ | GUIから独立して、Pythonから直接このプロジェクトのすべての関数プラグインを呼び出せます(開発中)
+⭐Void Terminalプラグイン | [プラグイン] 自然言語で、このプロジェクトの他のプラグインを直接実行します
+その他の新機能の紹介(画像生成など)...... | 末尾をご覧ください ......
+
+
+
+
+- もし出力に数式が含まれている場合、TeX形式とレンダリング形式の両方で表示されます。これにより、コピーと読み取りが容易になります。
+
+
+
+기능 (⭐= 최근 추가 기능) | 설명
+--- | ---
+⭐[새 모델 추가](https://github.com/binary-husky/gpt_academic/wiki/%E5%A6%82%E4%BD%95%E5%88%87%E6%8D%A2%E6%A8%A1%E5%9E%8B)! | Baidu [Qianfan](https://cloud.baidu.com/doc/WENXINWORKSHOP/s/Nlks5zkzu)와 Wenxin Yiyan, [Tongyi Qianwen](https://modelscope.cn/models/qwen/Qwen-7B-Chat/summary), Shanghai AI-Lab [Shusheng](https://github.com/InternLM/InternLM), Xunfei [Star](https://xinghuo.xfyun.cn/), [LLaMa2](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf), Zhipu API, DALLE3
+문체 개선, 번역, 코드 설명 | 일괄적인 문체 개선, 번역, 논문 문법 오류 탐색, 코드 설명
+[사용자 정의 단축키](https://www.bilibili.com/video/BV14s4y1E7jN) | 사용자 정의 단축키 지원
+모듈화 설계 | 사용자 정의 가능한 강력한 [플러그인](https://github.com/binary-husky/gpt_academic/tree/master/crazy_functions) 지원, 플러그인 지원 [핫 업데이트](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
+[프로그램 분석](https://www.bilibili.com/video/BV1cj411A7VW) | [플러그인] 한 번에 Python/C/C++/Java/Lua/... 프로젝트 트리를 분석하거나 [자체 분석](https://www.bilibili.com/video/BV1cj411A7VW)
+논문 읽기, 논문 [번역](https://www.bilibili.com/video/BV1KT411x7Wn) | [플러그인] LaTeX/PDF 논문 전문을 읽고 요약 생성
+LaTeX 전체 [번역](https://www.bilibili.com/video/BV1nk4y1Y7Js/), [개선](https://www.bilibili.com/video/BV1FT411H7c5/) | [플러그인] LaTeX 논문 번역 또는 개선
+일괄 주석 생성 | [플러그인] 함수 주석 일괄 생성
+Markdown [한 / 영 번역](https://www.bilibili.com/video/BV1yo4y157jV/) | 위의 5개 언어로 작성된 [README](https://github.com/binary-husky/gpt_academic/blob/master/docs/README_EN.md)를 살펴보셨나요?
+채팅 분석 보고서 생성 | [플러그인] 실행 후 요약 보고서 자동 생성
+[PDF 논문 전체 번역](https://www.bilibili.com/video/BV1KT411x7Wn) 기능 | [플러그인] PDF 논문 제목 및 요약 추출 + 전체 번역 (멀티 스레드)
+[Arxiv 도우미](https://www.bilibili.com/video/BV1LM4y1279X) | [플러그인] arxiv 논문 url 입력시 요약 번역 + PDF 다운로드
+LaTeX 논문 일괄 교정 | [플러그인] Grammarly를 모사하여 LaTeX 논문에 대한 문법 및 맞춤법 오류 교정 + 대조 PDF 출력
+[Google 학술 통합 도우미](https://www.bilibili.com/video/BV19L411U7ia) | 임의의 Google 학술 검색 페이지 URL을 지정하여 gpt가 [related works를 작성](https://www.bilibili.com/video/BV1GP411U7Az/)하게 해주세요.
+인터넷 정보 집계 + GPT | [플러그인] [인터넷에서 정보를 가져와서](https://www.bilibili.com/video/BV1om4y127ck) 질문에 대답하도록 GPT를 자동화하세요. 정보가 절대로 오래되지 않도록 해줍니다.
+⭐Arxiv 논문 세심한 번역 ([Docker](https://github.com/binary-husky/gpt_academic/pkgs/container/gpt_academic_with_latex)) | [플러그인] [arxiv 논문을 고품질 번역으로](https://www.bilibili.com/video/BV1dz4y1v77A/) 번역하는 최고의 도구
+⭐[실시간 음성 대화 입력](https://github.com/binary-husky/gpt_academic/blob/master/docs/use_audio.md) | [플러그인] 비동기적으로 [오디오를 모니터링](https://www.bilibili.com/video/BV1AV4y187Uy/)하여 문장을 자동으로 분절하고 대답 시기를 자동으로 찾습니다.
+수식/이미지/표 표시 | [tex 형식 및 렌더링 형식](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png)의 수식을 동시에 표시하며, 수식 및 코드 하이라이트 지원
+⭐AutoGen multi-agent 플러그인 | [플러그인] Microsoft AutoGen을 활용하여 여러 개의 에이전트가 지능적으로 발생하는 가능성을 탐색하세요!
+다크 모드 주제 지원 | 브라우저의 URL 뒤에 ```/?__theme=dark```를 추가하여 다크 모드로 전환하세요.
+[다양한 LLM 모델](https://www.bilibili.com/video/BV1wT411p7yf) 지원 | GPT3.5, GPT4, [Tsinghua ChatGLM2](https://github.com/THUDM/ChatGLM2-6B), [Fudan MOSS](https://github.com/OpenLMLab/MOSS)을 함께 사용하는 느낌은 좋을 것입니다, 그렇지 않습니까?
+⭐ChatGLM2 fine-tuned 모델 | ChatGLM2 fine-tuned 모델 로드를 지원하며, ChatGLM2 fine-tuned 보조 플러그인 제공
+더 많은 LLM 모델 연결, [huggingface 배포](https://huggingface.co/spaces/qingxu98/gpt-academic) 지원 | Newbing 인터페이스(새로운 Bing) 추가, Tsinghua [Jittorllms](https://github.com/Jittor/JittorLLMs) 도입, [LLaMA](https://github.com/facebookresearch/llama)와 [Pangu-alpha](https://openi.org.cn/pangu/)를 지원합니다.
+⭐[void-terminal](https://github.com/binary-husky/void-terminal) 패키지 | GUI에서 독립, Python에서 이 프로젝트의 모든 함수 플러그인을 직접 호출 (개발 중)
+⭐Void 터미널 플러그인 | [플러그인] 자연어로 이 프로젝트의 다른 플러그인을 직접 호출합니다.
+기타 새로운 기능 소개 (이미지 생성 등) …… | 본 문서 맨 끝 참조 ……
+
+
+
+- 새로운 인터페이스(`config.py`의 LAYOUT 옵션 수정으로 "왼쪽-오른쪽 레이아웃"과 "위-아래 레이아웃"을 전환할 수 있음)
+
+
+Funcionalidades (⭐= funcionalidade recentemente adicionada) | Descrição
+--- | ---
+⭐[Integração com novos modelos](https://github.com/binary-husky/gpt_academic/wiki/%E5%A6%82%E4%BD%95%E5%88%87%E6%8D%A2%E6%A8%A1%E5%9E%8B)! | [Qianfan](https://cloud.baidu.com/doc/WENXINWORKSHOP/s/Nlks5zkzu) da Baidu, Wenxin e [Tongyi Qianwen](https://modelscope.cn/models/qwen/Qwen-7B-Chat/summary), [Shusheng](https://github.com/InternLM/InternLM) da Shanghai AI-Lab, [Xinghuo](https://xinghuo.xfyun.cn/) da Iflytek, [LLaMa2](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf), Zhipu API, DALLE3
+Aprimoramento, tradução, explicação de códigos | Aprimoramento com um clique, tradução, busca de erros gramaticais em artigos e explicação de códigos
+[Atalhos de teclado personalizados](https://www.bilibili.com/video/BV14s4y1E7jN) | Suporte para atalhos de teclado personalizados
+Design modular | Suporte a plugins poderosos e personalizáveis, plugins com suporte a [atualização a quente](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
+[Análise de código](https://www.bilibili.com/video/BV1cj411A7VW) | [Plugin] Análise instantânea da estrutura de projetos em Python/C/C++/Java/Lua/... ou [autoanálise](https://www.bilibili.com/video/BV1cj411A7VW)
+Leitura de artigos, [tradução](https://www.bilibili.com/video/BV1KT411x7Wn) de artigos | [Plugin] Interpretação instantânea de artigos completos em latex/pdf e geração de resumos
+Tradução completa de artigos em latex [PDF](https://www.bilibili.com/video/BV1nk4y1Y7Js/), [aprimoramento](https://www.bilibili.com/video/BV1FT411H7c5/) | [Plugin] Tradução completa ou aprimoramento de artigos em latex com um clique
+Geração em lote de comentários | [Plugin] Geração em lote de comentários de funções com um clique
+Tradução (inglês-chinês) de Markdown | [Plugin] Você já viu o [README](https://github.com/binary-husky/gpt_academic/blob/master/docs/README_EN.md) nas 5 línguas acima?
+Criação de relatório de análise de bate-papo | [Plugin] Geração automática de relatório de resumo após a execução
+Tradução [completa de artigos em PDF](https://www.bilibili.com/video/BV1KT411x7Wn) | [Plugin] Extração de título e resumo de artigos em PDF + tradução completa (multithreading)
+Auxiliar Arxiv | [Plugin] Insira o URL de um artigo Arxiv para traduzir o resumo + baixar o PDF com um clique
+Correção automática de artigos em latex | [Plugin] Correções gramaticais e ortográficas de artigos em latex semelhante ao Grammarly + saída PDF comparativo
+Auxiliar Google Scholar | [Plugin] Insira qualquer URL da busca do Google Acadêmico e deixe o GPT [escrever trabalhos relacionados](https://www.bilibili.com/video/BV1GP411U7Az/) para você
+Agregação de informações da Internet + GPT | [Plugin] Capturar informações da Internet e obter respostas de perguntas com o GPT em um clique, para que as informações nunca fiquem desatualizadas
+⭐Tradução refinada de artigos do Arxiv ([Docker](https://github.com/binary-husky/gpt_academic/pkgs/container/gpt_academic_with_latex)) | [Plugin] Tradução de alta qualidade de artigos do Arxiv com um clique, a melhor ferramenta de tradução de artigos atualmente
+⭐Entrada de conversa de voz em tempo real | [Plugin] Monitoramento de áudio [assíncrono](https://www.bilibili.com/video/BV1AV4y187Uy/), segmentação automática de frases, detecção automática de momentos de resposta
+Exibição de fórmulas, imagens e tabelas | Exibição de fórmulas em formato tex e renderizadas simultaneamente, suporte a fórmulas e destaque de código
+⭐Plugin AutoGen para vários agentes | [Plugin] Explore a emergência de múltiplos agentes com o AutoGen da Microsoft!
+Ativar o tema escuro | Adicione ```/?__theme=dark``` ao final da URL para alternar para o tema escuro
+Suporte a múltiplos modelos LLM | Ser atendido simultaneamente pelo GPT3.5, GPT4, [ChatGLM2](https://github.com/THUDM/ChatGLM2-6B) do Tsinghua University e [MOSS](https://github.com/OpenLMLab/MOSS) da Fudan University se sente incrível, não é mesmo?
+⭐Modelo de ajuste fino ChatGLM2 | Suporte para carregar o modelo ChatGLM2 ajustado e fornecer plugins de assistência ao ajuste fino do ChatGLM2
+Mais modelos LLM e suporte para [implantação pela HuggingFace](https://huggingface.co/spaces/qingxu98/gpt-academic) | Integração com a interface Newbing (Bing novo), introdução do [Jittorllms](https://github.com/Jittor/JittorLLMs) da Tsinghua University com suporte a [LLaMA](https://github.com/facebookresearch/llama) e [Panguα](https://openi.org.cn/pangu/)
+⭐Pacote pip [void-terminal](https://github.com/binary-husky/void-terminal) | Chame todas as funções plugins deste projeto diretamente em Python, sem a GUI (em desenvolvimento)
+⭐Plugin Terminal do Vácuo | [Plugin] Chame outros plugins deste projeto diretamente usando linguagem natural
+Apresentação de mais novas funcionalidades (geração de imagens, etc.) ... | Veja no final deste documento ...
+
+
+
+
+- Nova interface (altere a opção LAYOUT em `config.py` para alternar entre os "Layouts de lado a lado" e "Layout de cima para baixo")
+
+
+Функции (⭐= Недавно добавленные функции) | Описание
+--- | ---
+⭐[Подключение новой модели](https://github.com/binary-husky/gpt_academic/wiki/%E5%A6%82%E4%BD%95%E5%88%87%E6%8D%A2%E6%A8%A1%E5%9E%8B)! | Baidu [QianFan](https://cloud.baidu.com/doc/WENXINWORKSHOP/s/Nlks5zkzu) и WenxinYiYan, [TongYiQianWen](https://modelscope.cn/models/qwen/Qwen-7B-Chat/summary), Shanghai AI-Lab [ShuSheng](https://github.com/InternLM/InternLM), Xunfei [XingHuo](https://xinghuo.xfyun.cn/), [LLaMa2](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf), ZhiPu API, DALLE3
+Улучшение, перевод, объяснение кода | Одним нажатием выполнить поиск синтаксических ошибок в научных статьях, переводить, объяснять код
+[Настройка горячих клавиш](https://www.bilibili.com/video/BV14s4y1E7jN) | Поддержка настройки горячих клавиш
+Модульный дизайн | Поддержка настраиваемых мощных [плагинов](https://github.com/binary-husky/gpt_academic/tree/master/crazy_functions), плагины поддерживают [горячую замену](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
+[Профилирование кода](https://www.bilibili.com/video/BV1cj411A7VW) | [Плагин] Одним нажатием можно профилировать дерево проекта Python/C/C++/Java/Lua/... или [проанализировать самого себя](https://www.bilibili.com/video/BV1cj411A7VW)
+Просмотр статей, перевод статей | [Плагин] Одним нажатием прочитать полный текст статьи в формате LaTeX/PDF и сгенерировать аннотацию
+Перевод LaTeX статей, [улучшение](https://www.bilibili.com/video/BV1FT411H7c5/)| [Плагин] Одним нажатием перевести или улучшить статьи в формате LaTeX
+Генерация пакетного комментария | [Плагин] Одним нажатием сгенерировать многострочный комментарий к функции
+Перевод Markdown на английский и китайский | [Плагин] Вы видели [README](https://github.com/binary-husky/gpt_academic/blob/master/docs/README_EN.md) на пяти языках выше?
+Анализ и создание отчета в формате чата | [Плагин] Автоматически генерируйте сводный отчет после выполнения
+Функция перевода полноценной PDF статьи | [Плагин] Изъять название и аннотацию статьи из PDF + переводить полный текст (многопоточно)
+[Arxiv помощник](https://www.bilibili.com/video/BV1LM4y1279X) | [Плагин] Просто введите URL статьи на arXiv, чтобы одним нажатием выполнить перевод аннотации + загрузить PDF
+Одним кликом проверить статью на LaTeX | [Плагин] Проверка грамматики и правописания статьи LaTeX, добавление PDF в качестве справки
+[Помощник Google Scholar](https://www.bilibili.com/video/BV19L411U7ia) | [Плагин] Создайте "related works" с помощью Google Scholar URL по вашему выбору.
+Агрегирование интернет-информации + GPT | [Плагин] [GPT получает информацию из интернета](https://www.bilibili.com/video/BV1om4y127ck) и отвечает на вопросы, чтобы информация никогда не устаревала
+⭐Точный перевод статей Arxiv ([Docker](https://github.com/binary-husky/gpt_academic/pkgs/container/gpt_academic_with_latex)) | [Плагин] [Переводите статьи Arxiv наивысшего качества](https://www.bilibili.com/video/BV1dz4y1v77A/) всего одним нажатием. Сейчас это лучший инструмент для перевода научных статей
+⭐[Реальное время ввода голосом](https://github.com/binary-husky/gpt_academic/blob/master/docs/use_audio.md) | [Плагин] Асинхронно [слушать аудио](https://www.bilibili.com/video/BV1AV4y187Uy/), автоматически разбивать на предложения, автоматически находить момент для ответа
+Отображение формул/изображений/таблиц | Поддержка отображения формул в форме [tex и рендеринга](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png), поддержка подсветки синтаксиса формул и кода
+⭐Плагин AutoGen для множества интеллектуальных агентов | [Плагин] Используйте Microsoft AutoGen для исследования возможностей интеллектуального всплытия нескольких агентов!
+Запуск [темной темы](https://github.com/binary-husky/gpt_academic/issues/173) | Добавьте `/?__theme=dark` в конец URL в браузере, чтобы переключиться на темную тему
+[Поддержка нескольких моделей LLM](https://www.bilibili.com/video/BV1wT411p7yf) | Быть обслуживаемым GPT3.5, GPT4, [ChatGLM2 из Цинхуа](https://github.com/THUDM/ChatGLM2-6B), [MOSS из Фуданя](https://github.com/OpenLMLab/MOSS) одновременно должно быть очень приятно, не так ли?
+⭐Модель ChatGLM2 Fine-tune | Поддержка загрузки модели ChatGLM2 Fine-tune, предоставляет вспомогательный плагин ChatGLM2 Fine-tune
+Больше моделей LLM, поддержка [развертывания huggingface](https://huggingface.co/spaces/qingxu98/gpt-academic) | Включение интерфейса Newbing (новый Bing), введение поддержки китайских [Jittorllms](https://github.com/Jittor/JittorLLMs) для поддержки [LLaMA](https://github.com/facebookresearch/llama) и [Panguα](https://openi.org.cn/pangu/)
+⭐Пакет pip [void-terminal](https://github.com/binary-husky/void-terminal) | Без GUI вызывайте все функциональные плагины этого проекта прямо из Python (разрабатывается)
+⭐Плагин пустого терминала | [Плагин] Используя естественный язык, напрямую распоряжайтесь другими плагинами этого проекта
+Больше новых функций (генерация изображений и т. д.) ... | Смотрите в конце этого документа ...
+
+
+
+- Новый интерфейс (изменение опции LAYOUT в `config.py` позволяет переключиться между "расположением слева и справа" и "расположением сверху и снизу")
+
-
-Funzione | Descrizione
---- | ---
-Correzione immediata | Supporta correzione immediata e ricerca degli errori di grammatica del documento con un solo clic
-Traduzione cinese-inglese immediata | Traduzione cinese-inglese immediata con un solo clic
-Spiegazione del codice immediata | Visualizzazione del codice, spiegazione del codice, generazione del codice, annotazione del codice con un solo clic
-[Scorciatoie personalizzate](https://www.bilibili.com/video/BV14s4y1E7jN) | Supporta scorciatoie personalizzate
-Design modularizzato | Supporta potenti [plugin di funzioni](https://github.com/binary-husky/gpt_academic/tree/master/crazy_functions) personalizzati, i plugin supportano l'[aggiornamento in tempo reale](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
-[Auto-profiling del programma](https://www.bilibili.com/video/BV1cj411A7VW) | [Plugin di funzioni] [Comprensione immediata](https://github.com/binary-husky/gpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A) del codice sorgente di questo progetto
-[Analisi del programma](https://www.bilibili.com/video/BV1cj411A7VW) | [Plugin di funzioni] Un clic può analizzare l'albero di altri progetti Python/C/C++/Java/Lua/...
-Lettura del documento, [traduzione](https://www.bilibili.com/video/BV1KT411x7Wn) del documento | [Plugin di funzioni] La lettura immediata dell'intero documento latex/pdf di un documento e la generazione di un riassunto
-Traduzione completa di un documento Latex, [correzione immediata](https://www.bilibili.com/video/BV1FT411H7c5/) | [Plugin di funzioni] Una traduzione o correzione immediata di un documento Latex
-Generazione di annotazioni in batch | [Plugin di funzioni] Generazione automatica delle annotazioni di funzione con un solo clic
-[Traduzione cinese-inglese di Markdown](https://www.bilibili.com/video/BV1yo4y157jV/) | [Plugin di funzioni] Hai letto il [README](https://github.com/binary-husky/gpt_academic/blob/master/docs/README_EN.md) delle cinque lingue sopra?
-Generazione di report di analisi di chat | [Plugin di funzioni] Generazione automatica di un rapporto di sintesi dopo l'esecuzione
-[Funzione di traduzione di tutto il documento PDF](https://www.bilibili.com/video/BV1KT411x7Wn) | [Plugin di funzioni] Estrarre il titolo e il sommario dell'articolo PDF + tradurre l'intero testo (multithreading)
-[Assistente di Arxiv](https://www.bilibili.com/video/BV1LM4y1279X) | [Plugin di funzioni] Inserire l'URL dell'articolo di Arxiv e tradurre il sommario con un clic + scaricare il PDF
-[Assistente integrato di Google Scholar](https://www.bilibili.com/video/BV19L411U7ia) | [Plugin di funzioni] Con qualsiasi URL di pagina di ricerca di Google Scholar, lascia che GPT ti aiuti a scrivere il tuo [relatedworks](https://www.bilibili.com/video/BV1GP411U7Az/)
-Aggregazione delle informazioni su Internet + GPT | [Plugin di funzioni] Fai in modo che GPT rilevi le informazioni su Internet prima di rispondere alle domande, senza mai diventare obsolete
-Visualizzazione di formule/img/tabelle | È possibile visualizzare un'equazione in forma [tex e render](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png) contemporaneamente, supporta equazioni e evidenziazione del codice
-Supporto per plugin di funzioni multithreading | Supporto per chiamata multithreaded di chatgpt, elaborazione con un clic di grandi quantità di testo o di un programma
-Avvia il tema di gradio [scuro](https://github.com/binary-husky/gpt_academic/issues/173) | Aggiungere ```/?__theme=dark``` dopo l'URL del browser per passare a un tema scuro
-Supporto per maggiori modelli LLM, supporto API2D | Sentirsi serviti simultaneamente da GPT3.5, GPT4, [Tsinghua ChatGLM](https://github.com/THUDM/ChatGLM-6B), [Fudan MOSS](https://github.com/OpenLMLab/MOSS) deve essere una grande sensazione, giusto?
-Ulteriori modelli LLM supportat,i supporto per l'implementazione di Huggingface | Aggiunta di un'interfaccia Newbing (Nuovo Bing), introdotta la compatibilità con Tsinghua [Jittorllms](https://github.com/Jittor/JittorLLMs), [LLaMA](https://github.com/facebookresearch/llama), [RWKV](https://github.com/BlinkDL/ChatRWKV) e [PanGu-α](https://openi.org.cn/pangu/)
-Ulteriori dimostrazioni di nuove funzionalità (generazione di immagini, ecc.)... | Vedere la fine di questo documento...
-
-
-
-- Nuova interfaccia (modificare l'opzione LAYOUT in `config.py` per passare dal layout a sinistra e a destra al layout superiore e inferiore)
-
-
-기능 | 설명
---- | ---
-원 키워드 | 원 키워드 및 논문 문법 오류를 찾는 기능 지원
-한-영 키워드 | 한-영 키워드 지원
-코드 설명 | 코드 표시, 코드 설명, 코드 생성, 코드에 주석 추가
-[사용자 정의 바로 가기 키](https://www.bilibili.com/video/BV14s4y1E7jN) | 사용자 정의 바로 가기 키 지원
-모듈식 설계 | 강력한[함수 플러그인](https://github.com/binary-husky/gpt_academic/tree/master/crazy_functions) 지원, 플러그인이 [램 업데이트](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)를 지원합니다.
-[자체 프로그램 분석](https://www.bilibili.com/video/BV1cj411A7VW) | [함수 플러그인] [원 키 우드] 프로젝트 소스 코드의 내용을 이해하는 기능을 제공
-[프로그램 분석](https://www.bilibili.com/video/BV1cj411A7VW) | [함수 플러그인] 프로젝트 트리를 분석할 수 있습니다 (Python/C/C++/Java/Lua/...)
-논문 읽기, 번역 | [함수 플러그인] LaTex/PDF 논문의 전문을 읽고 요약을 생성합니다.
-LaTeX 텍스트[번역](https://www.bilibili.com/video/BV1nk4y1Y7Js/), [원 키워드](https://www.bilibili.com/video/BV1FT411H7c5/) | [함수 플러그인] LaTeX 논문의 번역 또는 개량을 위해 일련의 모드를 번역할 수 있습니다.
-대량의 주석 생성 | [함수 플러그인] 함수 코멘트를 대량으로 생성할 수 있습니다.
-Markdown 한-영 번역 | [함수 플러그인] 위의 5 종 언어의 [README](https://github.com/binary-husky/gpt_academic/blob/master/docs/README_EN.md)를 볼 수 있습니다.
-chat 분석 보고서 생성 | [함수 플러그인] 수행 후 요약 보고서를 자동으로 생성합니다.
-[PDF 논문 번역](https://www.bilibili.com/video/BV1KT411x7Wn) | [함수 플러그인] PDF 논문이 제목 및 요약을 추출한 후 번역됩니다. (멀티 스레드)
-[Arxiv 도우미](https://www.bilibili.com/video/BV1LM4y1279X) | [함수 플러그인] Arxiv 논문 URL을 입력하면 요약을 번역하고 PDF를 다운로드 할 수 있습니다.
-[Google Scholar 통합 도우미](https://www.bilibili.com/video/BV19L411U7ia) | [함수 플러그인] Google Scholar 검색 페이지 URL을 제공하면 gpt가 [Related Works 작성](https://www.bilibili.com/video/BV1GP411U7Az/)을 도와줍니다.
-인터넷 정보 집계+GPT | [함수 플러그인] 먼저 GPT가 인터넷에서 정보를 수집하고 질문에 대답 할 수 있도록합니다. 정보가 절대적으로 구식이 아닙니다.
-수식/이미지/표 표시 | 급여, 코드 강조 기능 지원
-멀티 스레드 함수 플러그인 지원 | Chatgpt를 여러 요청에서 실행하여 [대량의 텍스트](https://www.bilibili.com/video/BV1FT411H7c5/) 또는 프로그램을 처리 할 수 있습니다.
-다크 그라디오 테마 시작 | 어둡게 주제를 변경하려면 브라우저 URL 끝에 ```/?__theme=dark```을 추가하면됩니다.
-[다중 LLM 모델](https://www.bilibili.com/video/BV1wT411p7yf) 지원, [API2D](https://api2d.com/) 인터페이스 지원됨 | GPT3.5, GPT4, [Tsinghua ChatGLM](https://github.com/THUDM/ChatGLM-6B), [Fudan MOSS](https://github.com/OpenLMLab/MOSS)가 모두 동시에 작동하는 것처럼 느낄 수 있습니다!
-LLM 모델 추가 및[huggingface 배치](https://huggingface.co/spaces/qingxu98/gpt-academic) 지원 | 새 Bing 인터페이스 (새 Bing) 추가, Clearing House [Jittorllms](https://github.com/Jittor/JittorLLMs) 지원 [LLaMA](https://github.com/facebookresearch/llama), [RWKV](https://github.com/BlinkDL/ChatRWKV) 및 [盘古α](https://openi.org.cn/pangu/)
-기타 새로운 기능 (이미지 생성 등) ... | 이 문서의 끝부분을 참조하세요. ...- 모든 버튼은 functional.py를 동적으로 읽어와서 사용자 정의 기능을 자유롭게 추가할 수 있으며, 클립 보드를 해제합니다.
-
-

-
-
-- 검수/오타 교정
-
-

-
-
-- 출력에 수식이 포함되어 있으면 텍스와 렌더링의 형태로 동시에 표시되어 복사 및 읽기가 용이합니다.
-
-

-
-
-- 프로젝트 코드를 볼 시간이 없습니까? 전체 프로젝트를 chatgpt에 직접 표시하십시오
-
-

-
-
-- 다양한 대형 언어 모델 범용 요청 (ChatGLM + OpenAI-GPT3.5 + [API2D](https://api2d.com/)-GPT4)
-
-

-
-
----
-# 설치
-## Installation-Method 1: Run directly (Windows, Linux or MacOS)
-
-1. 프로젝트 다운로드
-```sh
-git clone https://github.com/binary-husky/gpt_academic.git
-cd gpt_academic
-```
-
-2. API_KEY 구성
-
-`config.py`에서 API KEY 등 설정을 구성합니다. [특별한 네트워크 환경 설정](https://github.com/binary-husky/gpt_academic/issues/1) .
-
-(P.S. 프로그램이 실행될 때, 이름이 `config_private.py`인 기밀 설정 파일이 있는지 우선적으로 확인하고 해당 설정으로 `config.py`의 동일한 이름의 설정을 덮어씁니다. 따라서 구성 읽기 논리를 이해할 수 있다면, `config.py` 옆에 `config_private.py`라는 새 구성 파일을 만들고 `config.py`의 구성을 `config_private.py`로 이동(복사)하는 것이 좋습니다. `config_private.py`는 git으로 관리되지 않으며 개인 정보를 더 안전하게 보호할 수 있습니다. P.S. 프로젝트는 또한 대부분의 옵션을 `환경 변수`를 통해 설정할 수 있으며, `docker-compose` 파일을 참조하여 환경 변수 작성 형식을 확인할 수 있습니다. 우선순위: `환경 변수` > `config_private.py` > `config.py`)
-
-
-3. 의존성 설치
-```sh
-# (I 선택: 기존 python 경험이 있다면) (python 버전 3.9 이상, 최신 버전이 좋습니다), 참고: 공식 pip 소스 또는 알리 pip 소스 사용, 일시적인 교체 방법: python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/
-python -m pip install -r requirements.txt
-
-# (II 선택: Python에 익숙하지 않은 경우) anaconda 사용 방법은 비슷함(https://www.bilibili.com/video/BV1rc411W7Dr):
-conda create -n gptac_venv python=3.11 # anaconda 환경 만들기
-conda activate gptac_venv # anaconda 환경 활성화
-python -m pip install -r requirements.txt # 이 단계도 pip install의 단계와 동일합니다.
-```
-
-
추가지원을 위해 Tsinghua ChatGLM / Fudan MOSS를 사용해야하는 경우 지원을 클릭하여 이 부분을 확장하세요.
-
-
-[Tsinghua ChatGLM] / [Fudan MOSS]를 백엔드로 사용하려면 추가적인 종속성을 설치해야합니다 (전제 조건 : Python을 이해하고 Pytorch를 사용한 적이 있으며, 컴퓨터가 충분히 강력한 경우) :
-```sh
-# [선택 사항 I] Tsinghua ChatGLM을 지원합니다. Tsinghua ChatGLM에 대한 참고사항 : "Call ChatGLM fail cannot load ChatGLM parameters normally" 오류 발생시 다음 참조:
-# 1 : 기본 설치된 것들은 torch + cpu 버전입니다. cuda를 사용하려면 torch를 제거한 다음 torch + cuda를 다시 설치해야합니다.
-# 2 : 모델을 로드할 수 없는 기계 구성 때문에, AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)를
-# AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)로 변경합니다.
-python -m pip install -r request_llm/requirements_chatglm.txt
-
-# [선택 사항 II] Fudan MOSS 지원
-python -m pip install -r request_llm/requirements_moss.txt
-git clone https://github.com/OpenLMLab/MOSS.git request_llm/moss # 다음 코드 줄을 실행할 때 프로젝트 루트 경로에 있어야합니다.
-
-# [선택 사항III] AVAIL_LLM_MODELS config.py 구성 파일에 기대하는 모델이 포함되어 있는지 확인하십시오.
-# 현재 지원되는 전체 모델 :
-AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm", "newbing", "moss"] # + ["jittorllms_rwkv", "jittorllms_pangualpha", "jittorllms_llama"]
-```
-
-
-
-
-
-
-4. 실행
-```sh
-python main.py
-```5. 테스트 함수 플러그인
-```
-- 테스트 함수 플러그인 템플릿 함수 (GPT에게 오늘의 역사에서 무슨 일이 일어났는지 대답하도록 요청)를 구현하는 데 사용할 수 있습니다. 이 함수를 기반으로 더 복잡한 기능을 구현할 수 있습니다.
- "[함수 플러그인 템플릿 데모] 오늘의 역사"를 클릭하세요.
-```
-
-## 설치 - 방법 2 : 도커 사용
-
-1. ChatGPT 만 (대부분의 사람들이 선택하는 것을 권장합니다.)
-
-``` sh
-git clone https://github.com/binary-husky/gpt_academic.git # 다운로드
-cd gpt_academic # 경로 이동
-nano config.py # 아무 텍스트 에디터로 config.py를 열고 "Proxy","API_KEY","WEB_PORT" (예 : 50923) 등을 구성합니다.
-docker build -t gpt-academic . # 설치
-
-#(마지막 단계-1 선택) Linux 환경에서는 --net=host를 사용하면 더 편리합니다.
-docker run --rm -it --net=host gpt-academic
-#(마지막 단계-2 선택) macOS / windows 환경에서는 -p 옵션을 사용하여 컨테이너의 포트 (예 : 50923)를 호스트의 포트로 노출해야합니다.
-docker run --rm -it -e WEB_PORT=50923 -p 50923:50923 gpt-academic
-```
-
-2. ChatGPT + ChatGLM + MOSS (Docker에 익숙해야합니다.)
-
-``` sh
-#docker-compose.yml을 수정하여 계획 1 및 계획 3을 삭제하고 계획 2를 유지합니다. docker-compose.yml에서 계획 2의 구성을 수정하면 됩니다. 주석을 참조하십시오.
-docker-compose up
-```
-
-3. ChatGPT + LLAMA + Pangu + RWKV (Docker에 익숙해야합니다.)
-``` sh
-#docker-compose.yml을 수정하여 계획 1 및 계획 2을 삭제하고 계획 3을 유지합니다. docker-compose.yml에서 계획 3의 구성을 수정하면 됩니다. 주석을 참조하십시오.
-docker-compose up
-```
-
-
-## 설치 - 방법 3 : 다른 배치 방법
-
-1. 리버스 프록시 URL / Microsoft Azure API 사용 방법
-API_URL_REDIRECT를 `config.py`에 따라 구성하면됩니다.
-
-2. 원격 클라우드 서버 배치 (클라우드 서버 지식과 경험이 필요합니다.)
-[배치위키-1](https://github.com/binary-husky/gpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97)에 방문하십시오.
-
-3. WSL2 사용 (Windows Subsystem for Linux 하위 시스템)
-[배치 위키-2](https://github.com/binary-husky/gpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)에 방문하십시오.
-
-4. 2 차 URL (예 : `http : //localhost/subpath`)에서 실행하는 방법
-[FastAPI 실행 설명서] (docs / WithFastapi.md)를 참조하십시오.
-
-5. docker-compose 실행
-docker-compose.yml을 읽은 후 지시 사항에 따라 작업하십시오.
----
-# 고급 사용법
-## 사용자 정의 바로 가기 버튼 / 사용자 정의 함수 플러그인
-
-1. 사용자 정의 바로 가기 버튼 (학술 바로 가기)
-임의의 텍스트 편집기로 'core_functional.py'를 엽니다. 엔트리 추가, 그런 다음 프로그램을 다시 시작하면됩니다. (버튼이 이미 추가되어 보이고 접두사, 접미사가 모두 변수가 효과적으로 수정되면 프로그램을 다시 시작하지 않아도됩니다.)
-예 :
-```
-"超级英译中": {
- # 접두사. 당신이 요구하는 것을 설명하는 데 사용됩니다. 예를 들어 번역, 코드를 설명, 다듬기 등
- "Prefix": "下面翻译成中文,然后用一个 markdown 表格逐一解释文中出现的专有名词:\n\n",
-
- # 접미사는 입력 내용 앞뒤에 추가됩니다. 예를 들어 전위를 사용하여 입력 내용을 따옴표로 묶는데 사용할 수 있습니다.
- "Suffix": "",
-},
-```
-
-

-
-
-2. 사용자 지정 함수 플러그인
-강력한 함수 플러그인을 작성하여 원하는 작업을 수행하십시오.
-이 프로젝트의 플러그인 작성 및 디버깅 난이도는 매우 낮으며, 일부 파이썬 기본 지식만 있으면 제공된 템플릿을 모방하여 플러그인 기능을 구현할 수 있습니다. 자세한 내용은 [함수 플러그인 가이드]를 참조하십시오. (https://github.com/binary -husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E 4%BB%B6%E6%8C%87%E5%8D%97).
----
-# 최신 업데이트
-## 새로운 기능 동향1. 대화 저장 기능.
-
-1. 함수 플러그인 영역에서 '현재 대화 저장'을 호출하면 현재 대화를 읽을 수 있고 복원 가능한 HTML 파일로 저장할 수 있습니다. 또한 함수 플러그인 영역(드롭다운 메뉴)에서 '대화 기록 불러오기'를 호출하면 이전 대화를 복원할 수 있습니다. 팁: 파일을 지정하지 않고 '대화 기록 불러오기'를 클릭하면 기록된 HTML 캐시를 볼 수 있으며 '모든 로컬 대화 기록 삭제'를 클릭하면 모든 HTML 캐시를 삭제할 수 있습니다.
-
-2. 보고서 생성. 대부분의 플러그인은 실행이 끝난 후 작업 보고서를 생성합니다.
-
-3. 모듈화 기능 설계, 간단한 인터페이스로도 강력한 기능을 지원할 수 있습니다.
-
-4. 자체 번역이 가능한 오픈 소스 프로젝트입니다.
-
-5. 다른 오픈 소스 프로젝트를 번역하는 것은 어렵지 않습니다.
-
-6. [live2d](https://github.com/fghrsh/live2d_demo) 장식 기능(기본적으로 비활성화되어 있으며 `config.py`를 수정해야 합니다.)
-
-7. MOSS 대 언어 모델 지원 추가
-
-8. OpenAI 이미지 생성
-
-9. OpenAI 음성 분석 및 요약
-
-10. LaTeX 전체적인 교정 및 오류 수정
-
-## 버전:
-- version 3.5 (TODO): 자연어를 사용하여 이 프로젝트의 모든 함수 플러그인을 호출하는 기능(우선순위 높음)
-- version 3.4(TODO): 로컬 대 모듈의 다중 스레드 지원 향상
-- version 3.3: 인터넷 정보 종합 기능 추가
-- version 3.2: 함수 플러그인이 더 많은 인수 인터페이스를 지원합니다.(대화 저장 기능, 임의의 언어 코드 해석 및 동시에 임의의 LLM 조합을 확인하는 기능)
-- version 3.1: 여러 개의 GPT 모델에 대한 동시 쿼리 지원! api2d 지원, 여러 개의 apikey 로드 밸런싱 지원
-- version 3.0: chatglm 및 기타 소형 llm의 지원
-- version 2.6: 플러그인 구조를 재구성하여 상호 작용성을 향상시켰습니다. 더 많은 플러그인을 추가했습니다.
-- version 2.5: 자체 업데이트, 전체 프로젝트를 요약할 때 텍스트가 너무 길어지고 토큰이 오버플로우되는 문제를 해결했습니다.
-- version 2.4: (1) PDF 전체 번역 기능 추가; (2) 입력 영역 위치 전환 기능 추가; (3) 수직 레이아웃 옵션 추가; (4) 다중 스레드 함수 플러그인 최적화.
-- version 2.3: 다중 스레드 상호 작용성 강화
-- version 2.2: 함수 플러그인 핫 리로드 지원
-- version 2.1: 접는 레이아웃 지원
-- version 2.0: 모듈화 함수 플러그인 도입
-- version 1.0: 기본 기능
-
-gpt_academic 개발자 QQ 그룹-2 : 610599535
-
-- 알려진 문제
- 일부 브라우저 번역 플러그인이 이 소프트웨어의 프론트 엔드 작동 방식을 방해합니다.
- - gradio 버전이 너무 높거나 낮으면 여러 가지 이상이 발생할 수 있습니다.
-
-## 참고 및 학습 자료
-
-```
-많은 우수 프로젝트의 디자인을 참고했습니다. 주요 항목은 다음과 같습니다.
-
-# 프로젝트 1 : Tsinghua ChatGLM-6B :
-https://github.com/THUDM/ChatGLM-6B
-
-# 프로젝트 2 : Tsinghua JittorLLMs:
-https://github.com/Jittor/JittorLLMs
-
-# 프로젝트 3 : Edge-GPT :
-https://github.com/acheong08/EdgeGPT
-
-# 프로젝트 4 : ChuanhuChatGPT:
-https://github.com/GaiZhenbiao/ChuanhuChatGPT
-
-# 프로젝트 5 : ChatPaper :
-https://github.com/kaixindelele/ChatPaper
-
-# 더 많은 :
-https://github.com/gradio-app/gradio
-https://github.com/fghrsh/live2d_demo
-```
diff --git a/docs/README.md.Portuguese.md b/docs/README.md.Portuguese.md
deleted file mode 100644
index 2347d5a..0000000
--- a/docs/README.md.Portuguese.md
+++ /dev/null
@@ -1,324 +0,0 @@
-> **Nota**
->
-> Ao instalar as dependências, por favor, selecione rigorosamente as versões **especificadas** no arquivo requirements.txt.
->
-> `pip install -r requirements.txt`
->
-
-# Otimização acadêmica GPT (GPT Academic)
-
-**Se você gostou deste projeto, por favor dê um Star. Se você criou atalhos acadêmicos mais úteis ou plugins funcionais, sinta-se livre para abrir uma issue ou pull request. Nós também temos um README em [Inglês|](README_EN.md)[日本語|](README_JP.md)[한국어|](https://github.com/mldljyh/ko_gpt_academic)[Русский|](README_RS.md)[Français](README_FR.md) traduzidos por este próprio projeto.
-Para traduzir este projeto para qualquer idioma com o GPT, leia e execute [`multi_language.py`](multi_language.py) (experimental).**
-
-> **Nota**
->
-> 1. Por favor, preste atenção que somente os plugins de funções (botões) com a cor **vermelha** podem ler arquivos. Alguns plugins estão localizados no **menu suspenso** na área de plugins. Além disso, nós damos as boas-vindas com a **maior prioridade** e gerenciamos quaisquer novos plugins PR!
->
-> 2. As funções de cada arquivo neste projeto são detalhadas em [`self_analysis.md`](https://github.com/binary-husky/gpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A), auto-análises do projeto geradas pelo GPT também podem ser chamadas a qualquer momento ao clicar nos plugins relacionados. As perguntas frequentes estão resumidas no [`wiki`](https://github.com/binary-husky/gpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98). [Instruções de Instalação](#installation).
->
-> 3. Este projeto é compatível com e incentiva o uso de modelos de linguagem nacionais, como chatglm, RWKV, Pangu, etc. Suporta a coexistência de várias chaves de API e pode ser preenchido no arquivo de configuração como `API_KEY="openai-key1,openai-key2,api2d-key3"`. Quando precisar alterar temporariamente o `API_KEY`, basta digitar o `API_KEY` temporário na área de entrada e pressionar Enter para que ele entre em vigor.
-
-
-
-Funcionalidade | Descrição
---- | ---
-Um clique de polimento | Suporte a um clique polimento, um clique encontrar erros de gramática no artigo
-Tradução chinês-inglês de um clique | Tradução chinês-inglês de um clique
-Explicação de código de um único clique | Exibir código, explicar código, gerar código, adicionar comentários ao código
-[Teclas de atalho personalizadas](https://www.bilibili.com/video/BV14s4y1E7jN) | Suporte a atalhos personalizados
-Projeto modular | Suporte para poderosos plugins[de função personalizada](https://github.com/binary-husky/gpt_academic/tree/master/crazy_functions), os plugins suportam[hot-reload](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
-[Análise automática do programa](https://www.bilibili.com/video/BV1cj411A7VW) | [Plugin de função][um clique para entender](https://github.com/binary-husky/gpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A) o código-fonte do projeto
-[Análise do programa](https://www.bilibili.com/video/BV1cj411A7VW) | [Plugin de função] Um clique pode analisar a árvore de projetos do Python/C/C++/Java/Lua/...
-Leitura de artigos, [tradução](https://www.bilibili.com/video/BV1KT411x7Wn) de artigos | [Plugin de função] um clique para interpretar o resumo de artigos LaTeX/PDF e gerar resumo
-Tradução completa LATEX, polimento|[Plugin de função] Uma clique para traduzir ou polir um artigo LATEX
-Geração em lote de comentários | [Plugin de função] Um clique gera comentários de função em lote
-[Tradução chinês-inglês](https://www.bilibili.com/video/BV1yo4y157jV/) markdown | [Plugin de função] Você viu o README em 5 linguagens acima?
-Relatório de análise de chat | [Plugin de função] Gera automaticamente um resumo após a execução
-[Funcionalidade de tradução de artigos completos em PDF](https://www.bilibili.com/video/BV1KT411x7Wn) | [Plugin de função] Extrai o título e o resumo do artigo PDF e traduz o artigo completo (multithread)
-Assistente arXiv | [Plugin de função] Insira o url do artigo arXiv para traduzir o resumo + baixar PDF
-Assistente de integração acadêmica do Google | [Plugin de função] Dê qualquer URL de página de pesquisa acadêmica do Google e deixe o GPT escrever[trabalhos relacionados](https://www.bilibili.com/video/BV1GP411U7Az/)
-Agregação de informações da Internet + GPT | [Plugin de função] Um clique para obter informações do GPT através da Internet e depois responde a perguntas para informações nunca ficarem desatualizadas
-Exibição de fórmulas/imagem/tabela | Pode exibir simultaneamente a forma de renderização e[TEX] das fórmulas, suporte a fórmulas e realce de código
-Suporte de plugins de várias linhas | Suporte a várias chamadas em linha do chatgpt, um clique para processamento[de massa de texto](https://www.bilibili.com/video/BV1FT411H7c5/) ou programa
-Tema gradio escuro | Adicione ``` /?__theme=dark``` ao final da url do navegador para ativar o tema escuro
-[Suporte para vários modelos LLM](https://www.bilibili.com/video/BV1wT411p7yf), suporte para a nova interface API2D | A sensação de ser atendido simultaneamente por GPT3.5, GPT4, [Chatglm THU](https://github.com/THUDM/ChatGLM-6B), [Moss Fudan](https://github.com/OpenLMLab/MOSS) deve ser ótima, certo?
-Mais modelos LLM incorporados, suporte para a implantação[huggingface](https://huggingface.co/spaces/qingxu98/gpt-academic) | Adicione interface Newbing (New Bing), suporte [JittorLLMs](https://github.com/Jittor/JittorLLMs) THU Introdução ao suporte do LLaMA, RWKV e Pan Gu Alpha
-Mais recursos novos mostrados (geração de imagens, etc.) ... | Consulte o final deste documento ...
-
-
-
-- Nova interface (Modifique a opção LAYOUT em `config.py` para alternar entre o layout esquerdo/direito e o layout superior/inferior)
-
-

-
- All buttons are dynamically generated by reading functional.py, and you can add custom functions at will, liberating the clipboard
-
-
-

-
-
-- Proofreading/errors correction
-
-
-
-

-
-
-- If the output contains formulas, it will be displayed in both tex and rendering format at the same time, which is convenient for copying and reading
-
-
-
-

-
-
-- Don't want to read the project code? Just show the whole project to chatgpt
-
-
-
-

-
-
-- Mix the use of multiple large language models (ChatGLM + OpenAI-GPT3.5 + [API2D](https://api2d.com/)-GPT4)
-
-
-
-

-
-
----
-# Instalação
-## Installation-Method 1: Run directly (Windows, Linux or MacOS)
-
-1. Download the project
-
-```sh
-git clone https://github.com/binary-husky/gpt_academic.git
-cd gpt_academic
-```
-
-2. Configure the API KEY
-
-In `config.py`, configure API KEY and other settings, [Special Network Environment Settings](https://github.com/binary-husky/gpt_academic/issues/1).
-
-(P.S. When the program runs, it will first check whether there is a private configuration file named `config_private.py`, and use the configuration in it to cover the configuration with the same name in `config.py`. Therefore, if you can understand our configuration reading logic, we strongly recommend that you create a new configuration file named `config_private.py` next to `config.py`, and transfer (copy) the configuration in `config.py` to `config_private.py`. `config_private.py` is not controlled by git and can make your privacy information more secure. P.S. The project also supports configuring most options through `environment variables`. The writing format of environment variables is referenced to the `docker-compose` file. Reading priority: `environment variable` > `config_private.py` > `config.py`)
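-
-For illustration only, a minimal `config_private.py` might look like the sketch below (the option names are the ones this README already mentions; the values are placeholders, and any option you do not set here falls back to `config.py` or an environment variable):
-
-```python
-# config_private.py: lives next to config.py, is ignored by git, and overrides
-# options with the same name in config.py (environment variables rank higher still).
-API_KEY = "openai-key1,openai-key2,api2d-key3"   # placeholder keys; several keys can be listed
-WEB_PORT = 50923                                  # placeholder port, matching the Docker examples below
-```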
-
-
-3. Install dependencies
-
-```sh
-# (Option I: for those familiar with python)(python version is 3.9 or above, the newer the better), note: use the official pip source or the Alibaba pip source. Temporary solution for changing source: python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/
-python -m pip install -r requirements.txt
-
-# (Option II: for those who are unfamiliar with python) use anaconda, the steps are also similar (https://www.bilibili.com/video/BV1rc411W7Dr):
-conda create -n gptac_venv python=3.11 # create anaconda environment
-conda activate gptac_venv # activate anaconda environment
-python -m pip install -r requirements.txt # This step is the same as the pip installation step
-```
-
-
-If you need to support Tsinghua ChatGLM / Fudan MOSS as the backend, see the optional steps below.
-
-
-[Optional Step] If you need to support Tsinghua ChatGLM / Fudan MOSS as the backend, you need to install more dependencies (prerequisite: familiar with Python + used Pytorch + computer configuration is strong):
-```sh
-# 【Optional Step I】support Tsinghua ChatGLM。Tsinghua ChatGLM Note: If you encounter a "Call ChatGLM fails cannot load ChatGLM parameters normally" error, refer to the following: 1: The default installed is torch+cpu version, and using cuda requires uninstalling torch and reinstalling torch+cuda; 2: If the model cannot be loaded due to insufficient computer configuration, you can modify the model accuracy in request_llm/bridge_chatglm.py and change AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) to AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)
-python -m pip install -r request_llm/requirements_chatglm.txt
-
-# 【Optional Step II】support Fudan MOSS
-python -m pip install -r request_llm/requirements_moss.txt
-git clone https://github.com/OpenLMLab/MOSS.git request_llm/moss # Note: When executing this line of code, you must be in the project root path
-
-# 【Optional Step III】Make sure that the AVAIL_LLM_MODELS in the config.py configuration file contains the expected model. Currently, all supported models are as follows (jittorllms series currently only supports docker solutions):
-AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm", "newbing", "moss"] # + ["jittorllms_rwkv", "jittorllms_pangualpha", "jittorllms_llama"]
-```
-
-
-
-
-
-4. Run
-
-```sh
-python main.py
-```
-
-5. Plugin de Função de Teste
-```
-- Função de modelo de plug-in de teste (exige que o GPT responda ao que aconteceu hoje na história), você pode usar esta função como modelo para implementar funções mais complexas
- Clique em "[Função de plug-in de modelo de demonstração] O que aconteceu hoje na história?"
-```
-
-## Instalação - Método 2: Usando o Docker
-
-1. Apenas ChatGPT (recomendado para a maioria das pessoas)
-
-``` sh
-git clone https://github.com/binary-husky/gpt_academic.git # Baixar o projeto
-cd gpt_academic # Entrar no caminho
-nano config.py # Editar config.py com qualquer editor de texto configurando "Proxy", "API_KEY" e "WEB_PORT" (por exemplo, 50923), etc.
-docker build -t gpt-academic . # Instale
-
-# (Última etapa - escolha 1) Dentro do ambiente Linux, é mais fácil e rápido usar `--net=host`
-docker run --rm -it --net=host gpt-academic
-# (Última etapa - escolha 2) Em ambientes macOS/windows, você só pode usar a opção -p para expor a porta do contêiner (por exemplo, 50923) para a porta no host
-docker run --rm -it -e WEB_PORT=50923 -p 50923:50923 gpt-academic
-```
-
-2. ChatGPT + ChatGLM + MOSS (conhecimento de Docker necessário)
-
-``` sh
-# Edite o arquivo docker-compose.yml, remova as soluções 1 e 3, mantenha a solução 2, e siga as instruções nos comentários do arquivo
-docker-compose up
-```
-
-3. ChatGPT + LLAMA + Pangu + RWKV (conhecimento de Docker necessário)
-``` sh
-# Edite o arquivo docker-compose.yml, remova as soluções 1 e 2, mantenha a solução 3, e siga as instruções nos comentários do arquivo
-docker-compose up
-```
-
-
-## Instalação - Método 3: Outros Métodos de Implantação
-
-1. Como usar URLs de proxy inverso/microsoft Azure API
-Basta configurar o API_URL_REDIRECT de acordo com as instruções em `config.py`.
-
-2. Implantação em servidores em nuvem remotos (requer conhecimento e experiência de servidores em nuvem)
-Acesse [Wiki de implementação remota do servidor em nuvem](https://github.com/binary-husky/gpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97)
-
-3. Usando a WSL2 (sub-sistema do Windows para Linux)
-Acesse [Wiki da implantação da WSL2](https://github.com/binary-husky/gpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)
-
-4. Como executar em um subdiretório (ex. `http://localhost/subpath`)
-Acesse [Instruções de execução FastAPI](docs/WithFastapi.md)
-
-5. Execute usando o docker-compose
-Leia o arquivo docker-compose.yml e siga as instruções.
-
-# Uso Avançado
-## Customize novos botões de acesso rápido / plug-ins de função personalizados
-
-1. Personalizar novos botões de acesso rápido (atalhos acadêmicos)
-Abra `core_functional.py` em qualquer editor de texto e adicione os seguintes itens e reinicie o programa (Se o botão já foi adicionado e pode ser visto, prefixos e sufixos são compatíveis com modificações em tempo real e não exigem reinício do programa para ter efeito.)
-Por exemplo,
-```
-"Super Eng:": {
- # Prefixo, será adicionado antes da sua entrada. Por exemplo, para descrever sua solicitação, como tradução, explicação de código, polimento, etc.
- "Prefix": "Por favor, traduza o seguinte conteúdo para chinês e use uma tabela em Markdown para explicar termos próprios no texto: \n \n",
-
- # Sufixo, será adicionado após a sua entrada. Por exemplo, emparelhado com o prefixo, pode colocar sua entrada entre aspas.
- "Suffix": "",
-},
-```
-
-

-
-
-2. Personalizar plug-ins de função
-
-Escreva plug-ins de função poderosos para executar tarefas que você deseja e não pensava possível.
-A dificuldade geral de escrever e depurar plug-ins neste projeto é baixa e, se você tem algum conhecimento básico de python, pode implementar suas próprias funções sobre o modelo que fornecemos.
-Para mais detalhes, consulte o [Guia do plug-in de função.](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97).
-
----
-# Última atualização
-## Novas funções dinâmicas.
-
-1. Função de salvamento de diálogo. Ao chamar o plug-in de função "Salvar diálogo atual", é possível salvar o diálogo atual em um arquivo html legível e reversível. Além disso, ao chamar o plug-in de função "Carregar arquivo de histórico de diálogo" no menu suspenso da área de plug-in, é possível restaurar uma conversa anterior. Dica: clicar em "Carregar arquivo de histórico de diálogo" sem especificar um arquivo permite visualizar o cache do arquivo html de histórico. Clicar em "Excluir todo o registro de histórico de diálogo local" permite excluir todo o cache de arquivo html.
-
-

-
-
-
-2. Geração de relatório. A maioria dos plug-ins gera um relatório de trabalho após a conclusão da execução.
-
-
-3. Design modular de funcionalidades, com interfaces simples, mas suporte a recursos poderosos
-
-

-

-
-
-4. Este é um projeto de código aberto que é capaz de "auto-traduzir-se".
-
-

-
-
-5. A tradução de outros projetos de código aberto é simples.
-
-

-
-
-
-

-
-
-6. Recursos decorativos para o [live2d](https://github.com/fghrsh/live2d_demo) (desativados por padrão, é necessário modificar o arquivo `config.py`)
-
-

-
-
-7. Suporte ao modelo de linguagem MOSS
-
-

-
-
-8. Geração de imagens pelo OpenAI
-
-

-
-
-9. Análise e resumo de áudio pelo OpenAI
-
-

-
-
-10. Revisão e correção de erros de texto em Latex.
-
-

-
-
-## Versão:
-- Versão 3.5(Todo): Usar linguagem natural para chamar todas as funções do projeto (prioridade alta)
-- Versão 3.4(Todo): Melhorar o suporte à multithread para o chatglm local
-- Versão 3.3: +Funções integradas de internet
-- Versão 3.2: Suporte a mais interfaces de parâmetros de plug-in (função de salvar diálogo, interpretação de códigos de várias linguagens, perguntas de combinações LLM arbitrárias ao mesmo tempo)
-- Versão 3.1: Suporte a perguntas a vários modelos de gpt simultaneamente! Suporte para api2d e balanceamento de carga para várias chaves api
-- Versão 3.0: Suporte ao chatglm e outros LLMs de pequeno porte
-- Versão 2.6: Refatoração da estrutura de plug-in, melhoria da interatividade e adição de mais plug-ins
-- Versão 2.5: Autoatualização, resolvendo problemas de token de texto excessivamente longo e estouro ao compilar grandes projetos
-- Versão 2.4: (1) Adição de funcionalidade de tradução de texto completo em PDF; (2) Adição de funcionalidade de mudança de posição da área de entrada; (3) Adição de opção de layout vertical; (4) Otimização de plug-ins de multithread.
-- Versão 2.3: Melhoria da interatividade de multithread
-- Versão 2.2: Suporte à recarga a quente de plug-ins
-- Versão 2.1: Layout dobrável
-- Versão 2.0: Introdução de plug-ins de função modular
-- Versão 1.0: Funcionalidades básicas
-
-gpt_academic desenvolvedores QQ grupo-2: 610599535
-
-- Problemas conhecidos
- - Extensões de tradução de alguns navegadores podem interferir na execução do front-end deste software
- - Uma versão muito alta ou muito baixa do Gradio pode causar vários erros
-
-## Referências e Aprendizado
-
-```
-Foi feita referência a muitos projetos excelentes em código, principalmente:
-
-# Projeto1: ChatGLM-6B da Tsinghua:
-https://github.com/THUDM/ChatGLM-6B
-
-# Projeto2: JittorLLMs da Tsinghua:
-https://github.com/Jittor/JittorLLMs
-
-# Projeto3: Edge-GPT:
-https://github.com/acheong08/EdgeGPT
-
-# Projeto4: ChuanhuChatGPT:
-https://github.com/GaiZhenbiao/ChuanhuChatGPT
-
-# Projeto5: ChatPaper:
-https://github.com/kaixindelele/ChatPaper
-
-# Mais:
-https://github.com/gradio-app/gradio
-https://github.com/fghrsh/live2d_demo
-```
diff --git a/docs/README_EN.md b/docs/README_EN.md
deleted file mode 100644
index 02b8588..0000000
--- a/docs/README_EN.md
+++ /dev/null
@@ -1,322 +0,0 @@
-> **Note**
->
-> This English README is automatically generated by the markdown translation plugin in this project, and may not be 100% correct.
->
-> When installing dependencies, **please strictly select the versions** specified in requirements.txt.
->
-> `pip install -r requirements.txt`
-
-# GPT Academic Optimization (GPT Academic)
-
-**If you like this project, please give it a Star. If you've come up with more useful academic shortcuts or functional plugins, feel free to open an issue or pull request.
-To translate this project to an arbitrary language with GPT, read and run [`multi_language.py`](multi_language.py) (experimental).**
-
-> Note:
->
-> 1. Please note that only the function plugins (buttons) marked in **red** support reading files. Some plugins are in the **drop-down menu** in the plugin area. We welcome and process any new plugins with the **highest priority**!
-> 2. The function of each file in this project is detailed in the self-translation analysis [`self_analysis.md`](https://github.com/binary-husky/gpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A). With version iteration, you can also click on related function plugins at any time to call GPT to regenerate the project's self-analysis report. Common questions are summarized in the [`wiki`](https://github.com/binary-husky/gpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98). [Installation method](#installation).
-> 3. This project is compatible with and encourages trying domestic large language models such as chatglm, RWKV, Pangu, etc. Multiple API keys are supported and can be filled in the configuration file like `API_KEY="openai-key1,openai-key2,api2d-key3"`. When temporarily changing `API_KEY`, enter the temporary `API_KEY` in the input area and press enter to submit, which will take effect.
-
-
-
-Function | Description
---- | ---
-One-click polishing | Supports one-click polishing and one-click searching for grammar errors in papers.
-One-click Chinese-English translation | One-click Chinese-English translation.
-One-click code interpretation | Displays, explains, generates, and adds comments to code.
-[Custom shortcut keys](https://www.bilibili.com/video/BV14s4y1E7jN) | Supports custom shortcut keys.
-Modular design | Supports custom powerful [function plug-ins](https://github.com/binary-husky/gpt_academic/tree/master/crazy_functions), plug-ins support [hot update](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97).
-[Self-program profiling](https://www.bilibili.com/video/BV1cj411A7VW) | [Function plug-in] [One-click understanding](https://github.com/binary-husky/gpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A) of the source code of this project
-[Program profiling](https://www.bilibili.com/video/BV1cj411A7VW) | [Function plug-in] One-click profiling of other project trees in Python/C/C++/Java/Lua/...
-Reading papers, [translating](https://www.bilibili.com/video/BV1KT411x7Wn) papers | [Function Plug-in] One-click interpretation of latex/pdf full-text papers and generation of abstracts.
-Latex full-text [translation](https://www.bilibili.com/video/BV1nk4y1Y7Js/), [polishing](https://www.bilibili.com/video/BV1FT411H7c5/) | [Function plug-in] One-click translation or polishing of latex papers.
-Batch annotation generation | [Function plug-in] One-click batch generation of function annotations.
-Markdown [Chinese-English translation](https://www.bilibili.com/video/BV1yo4y157jV/) | [Function plug-in] Have you seen the [README](https://github.com/binary-husky/gpt_academic/blob/master/docs/README_EN.md) in the five languages above?
-Chat analysis report generation | [Function plug-in] Automatically generate summary reports after running.
-[PDF full-text translation function](https://www.bilibili.com/video/BV1KT411x7Wn) | [Function plug-in] PDF paper extract title & summary + translate full text (multi-threaded)
-[Arxiv Assistant](https://www.bilibili.com/video/BV1LM4y1279X) | [Function plug-in] Enter the arxiv article url and you can translate abstracts and download PDFs with one click.
-[Google Scholar Integration Assistant](https://www.bilibili.com/video/BV19L411U7ia) | [Function plug-in] Given any Google Scholar search page URL, let GPT help you [write related works](https://www.bilibili.com/video/BV1GP411U7Az/)
-Internet information aggregation+GPT | [Function plug-in] One-click [let GPT get information from the Internet first](https://www.bilibili.com/video/BV1om4y127ck), then answer questions, and let the information never be outdated.
-Formula/image/table display | Can display formulas in both [tex form and render form](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png), support formulas and code highlighting.
-Multi-threaded function plug-in support | Supports multi-threaded calling of chatgpt, and can process [massive text](https://www.bilibili.com/video/BV1FT411H7c5/) or programs with one click.
-Start Dark Gradio [theme](https://github.com/binary-husky/gpt_academic/issues/173) | Add ```/?__theme=dark``` after the browser URL to switch to the dark theme.
-[Multiple LLM models](https://www.bilibili.com/video/BV1wT411p7yf) support, [API2D](https://api2d.com/) interface support | The feeling of being served by GPT3.5, GPT4, [Tsinghua ChatGLM](https://github.com/THUDM/ChatGLM-6B), and [Fudan MOSS](https://github.com/OpenLMLab/MOSS) at the same time must be great, right?
-More LLM model access, support [huggingface deployment](https://huggingface.co/spaces/qingxu98/gpt-academic) | Add Newbing interface (New Bing), introduce Tsinghua [Jittorllms](https://github.com/Jittor/JittorLLMs) to support [LLaMA](https://github.com/facebookresearch/llama), [RWKV](https://github.com/BlinkDL/ChatRWKV) and [Panguα](https://openi.org.cn/pangu/)
-More new feature displays (image generation, etc.)…… | See the end of this document for more...
-
-
-- New interface (modify the LAYOUT option in `config.py` to switch between "left and right layout" and "up and down layout")
-
-

-
- All buttons are dynamically generated by reading `functional.py`, and you can add custom functions freely to unleash the power of clipboard.
-
-

-
-
-- polishing/correction
-
-

-
-
-- If the output contains formulas, they will be displayed in both `tex` and render form, making it easy to copy and read.
-
-

-
-
-- Tired of reading the project code? ChatGPT can explain it all.
-
-

-
-
-- Multiple large language models are mixed, such as ChatGLM + OpenAI-GPT3.5 + [API2D](https://api2d.com/)-GPT4.
-
-

-
-
----
-# Installation
-## Method 1: Directly running (Windows, Linux or MacOS)
-
-1. Download the project
-```sh
-git clone https://github.com/binary-husky/gpt_academic.git
-cd gpt_academic
-```
-
-2. Configure the API_KEY
-
-Configure the API KEY in `config.py`, [special network environment settings](https://github.com/binary-husky/gpt_academic/issues/1).
-
-(P.S. When the program is running, it will first check if there is a private configuration file named `config_private.py` and use the configurations in it to override the same configurations in `config.py`. Therefore, if you can understand our configuration reading logic, we strongly recommend that you create a new configuration file named `config_private.py` next to `config.py` and transfer (copy) the configurations in `config.py` to `config_private.py`. `config_private.py` is not controlled by git and can make your private information more secure. P.S. The project also supports configuring most options through `environment variables`. Please refer to the format of `docker-compose` file when writing. Reading priority: `environment variables` > `config_private.py` > `config.py`)
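-
-As a concrete illustration of the paragraph above (the names follow what this README already mentions; the values are placeholders):
-
-```python
-# config_private.py is created next to config.py and is not tracked by git.
-# Reading priority: environment variables > config_private.py > config.py.
-API_KEY = "openai-key1,openai-key2,api2d-key3"   # several keys may be listed for load balancing
-WEB_PORT = 50923                                  # placeholder; any free port works
-```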
-
-
-3. Install the dependencies
-```sh
-# (Option I: If familiar with python) (python version 3.9 or above, the newer the better), note: use official pip source or Ali pip source, temporary switching method: python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/
-python -m pip install -r requirements.txt
-
-# (Option II: If not familiar with python) Use anaconda, the steps are similar (https://www.bilibili.com/video/BV1rc411W7Dr):
-conda create -n gptac_venv python=3.11 # create anaconda environment
-conda activate gptac_venv # activate anaconda environment
-python -m pip install -r requirements.txt # this step is the same as pip installation
-```
-
-
-If you need to support Tsinghua ChatGLM/Fudan MOSS as a backend, see the optional steps below.
-
-
-[Optional step] If you need to support Tsinghua ChatGLM/Fudan MOSS as a backend, you need to install more dependencies (prerequisites: familiar with Python + used Pytorch + computer configuration is strong enough):
-```sh
-# [Optional Step I] Support Tsinghua ChatGLM. Tsinghua ChatGLM remarks: if you encounter the "Call ChatGLM fail cannot load ChatGLM parameters" error, refer to this: 1: The default installation above is torch + cpu version, to use cuda, you need to uninstall torch and reinstall torch + cuda; 2: If the model cannot be loaded due to insufficient local configuration, you can modify the model accuracy in request_llm/bridge_chatglm.py, and change AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) to AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code = True)
-python -m pip install -r request_llm/requirements_chatglm.txt
-
-# [Optional Step II] Support Fudan MOSS
-python -m pip install -r request_llm/requirements_moss.txt
-git clone https://github.com/OpenLMLab/MOSS.git request_llm/moss # When executing this line of code, you must be in the root directory of the project
-
-# [Optional Step III] Make sure the AVAIL_LLM_MODELS in the config.py configuration file includes the expected models. Currently supported models are as follows (the jittorllms series only supports the docker solution for the time being):
-AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm", "newbing", "moss"] # + ["jittorllms_rwkv", "jittorllms_pangualpha", "jittorllms_llama"]
-```
-
-
-
-
-
-
-4. Run it
-```sh
-python main.py
-```
-
-5. Test Function Plugin
-```
-- Test the function plugin template (it asks GPT what happened in history today); you can use it as a template to implement more complex functions
- Click "[Function Plugin Template Demo] Today in History"
-```
-
-## Installation - Method 2: Using Docker
-
-1. ChatGPT Only (Recommended for Most People)
-
-``` sh
-git clone https://github.com/binary-husky/gpt_academic.git # Download project
-cd gpt_academic # Enter path
-nano config.py # Edit config.py with any text editor, configure "Proxy", "API_KEY" and "WEB_PORT" (e.g. 50923), etc.
-docker build -t gpt-academic . # Install
-
-#(Last step - option 1) In a Linux environment, use `--net=host` for convenience and speed.
-docker run --rm -it --net=host gpt-academic
-#(Last step - option 2) In a macOS/Windows environment, you can only use the -p option to expose the container's port (e.g. 50923) to the host port.
-docker run --rm -it -e WEB_PORT=50923 -p 50923:50923 gpt-academic
-```
-
-2. ChatGPT + ChatGLM + MOSS (Requires Docker Knowledge)
-
-``` sh
-# Modify docker-compose.yml, delete Plan 1 and Plan 3, and keep Plan 2. Modify the configuration of Plan 2 in docker-compose.yml, refer to the comments in it for configuration.
-docker-compose up
-```
-
-3. ChatGPT + LLAMA + Pangu + RWKV (Requires Docker Knowledge)
-
-``` sh
-# Modify docker-compose.yml, delete Plan 1 and Plan 2, and keep Plan 3. Modify the configuration of Plan 3 in docker-compose.yml, refer to the comments in it for configuration.
-docker-compose up
-```
-
-## Installation - Method 3: Other Deployment Options
-
-1. How to Use Reverse Proxy URL/Microsoft Cloud Azure API
-Configure API_URL_REDIRECT according to the instructions in `config.py` (a configuration sketch is shown at the end of this section).
-
-2. Deploy to a Remote Server (Requires Knowledge and Experience with Cloud Servers)
-Please visit [Deployment Wiki-1](https://github.com/binary-husky/gpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97)
-
-3. Using WSL2 (Windows Subsystem for Linux)
-Please visit [Deployment Wiki-2](https://github.com/binary-husky/gpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)
-
-4. How to Run Under a Subpath (e.g. `http://localhost/subpath`)
-Please visit [FastAPI Running Instructions](docs/WithFastapi.md)
-
-5. Using docker-compose to Run
-Read the docker-compose.yml and follow the prompts.
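-
-A rough sketch of option 1 above (the exact key/value format is documented in `config.py` itself; the endpoint below is a placeholder):
-
-```python
-# In config.py or config_private.py: redirect requests aimed at the official
-# OpenAI endpoint to your reverse proxy / Azure-compatible endpoint.
-API_URL_REDIRECT = {
-    "https://api.openai.com/v1/chat/completions": "https://your-endpoint.example.com/v1/chat/completions"  # placeholder URL
-}
-```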
-
----
-# Advanced Usage
-## Custom New Shortcut Buttons / Custom Function Plugins
-
-1. Custom New Shortcut Buttons (Academic Hotkey)
-Open `core_functional.py` with any text editor, add an entry as follows and restart the program. (If the button has been successfully added and is visible, the prefix and suffix can be hot-modified without having to restart the program.)
-For example,
-```
-"Super English-to-Chinese": {
- # Prefix, which will be added before your input. For example, used to describe your requests, such as translation, code explanation, polishing, etc.
- "Prefix": "Please translate the following content into Chinese and then use a markdown table to explain the proprietary terms that appear in the text:\n\n",
-
- # Suffix, which is added after your input. For example, with the prefix, your input content can be surrounded by quotes.
- "Suffix": "",
-},
-```
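-
-Conceptually, a shortcut button simply wraps whatever is currently in the input area with the entry's `Prefix` and `Suffix` before the text is sent to the model; a minimal sketch of that behaviour (illustrative only, not the project's actual code):
-
-```python
-def apply_shortcut(entry: dict, user_input: str) -> str:
-    # "Prefix" is prepended (e.g. the translation instruction above);
-    # "Suffix" is appended (e.g. a closing quote if the prefix opened one).
-    return entry.get("Prefix", "") + user_input + entry.get("Suffix", "")
-
-# Example with a shortened version of the "Super English-to-Chinese" entry defined above:
-entry = {"Prefix": "Please translate the following content into Chinese:\n\n", "Suffix": ""}
-print(apply_shortcut(entry, "Attention is all you need."))
-```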
-
-

-
-
-2. Custom Function Plugins
-
-Write powerful function plugins to perform any task you can think of, even those you cannot think of.
-The difficulty of plugin writing and debugging in this project is very low. As long as you have a certain knowledge of Python, you can implement your own plug-in functions based on the template we provide.
-For details, please refer to the [Function Plugin Guide](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97).
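-
-Purely as an illustrative sketch (this is not the project's actual plugin API; for the real signature, copy one of the templates under `crazy_functions/` as the guide above describes), a function plugin is conceptually a generator that receives the user's input and the chat state, appends messages, and yields so the interface can refresh:
-
-```python
-def demo_plugin(txt, chatbot, history, *args, **kwargs):
-    # Hypothetical shape only: show the request, yield so the UI can refresh,
-    # then replace the placeholder with the finished answer.
-    chatbot.append((txt, "Working on it..."))
-    yield chatbot, history
-    answer = f"You asked: {txt!r} (placeholder; a real plugin would call the LLM here)"
-    chatbot[-1] = (txt, answer)
-    yield chatbot, history
-
-# Driving the generator by hand, the way a UI loop conceptually would:
-chatbot, history = [], []
-for chatbot, history in demo_plugin("What happened in history today?", chatbot, history):
-    pass
-print(chatbot[-1][1])
-```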
-
----
-# Latest Update
-## New Feature Dynamics
-1. Conversation saving function. Call `Save current conversation` in the function plugin area to save the current conversation as a readable and recoverable HTML file. In addition, call `Load conversation history archive` in the function plugin area (dropdown menu) to restore previous sessions. Tip: Clicking `Load conversation history archive` without specifying a file will display the cached history of HTML archives, and clicking `Delete all local conversation history` will delete all HTML archive caches.
-
-
-

-
-
-
-2. Report generation. Most plugins will generate work reports after execution.
-
-
-
-
-3. Modular function design with simple interfaces that support powerful functions.
-
-
-

-

-
-
-
-4. This is an open-source project that can "self-translate".
-
-
-

-
-
-5. Translating other open-source projects is a piece of cake.
-
-
-

-
-
-
-

-
-
-6. A small feature decorated with [live2d](https://github.com/fghrsh/live2d_demo) (disabled by default, need to modify `config.py`).
-
-
-

-
-
-7. Added MOSS large language model support.
-
-

-
-
-8. OpenAI image generation.
-
-

-
-
-9. OpenAI audio parsing and summarization.
-
-

-
-
-10. Full-text proofreading and error correction of LaTeX.
-
-

-
-
-
-## Versions:
-- version 3.5(Todo): Use natural language to call all function plugins of this project (high priority).
-- version 3.4(Todo): Improve multi-threading support for chatglm local large models.
-- version 3.3: +Internet information integration function.
-- version 3.2: Function plugin supports more parameter interfaces (save conversation function, interpretation of any language code + simultaneous inquiry of any LLM combination).
-- version 3.1: Support simultaneous inquiry of multiple GPT models! Support api2d, and support load balancing of multiple apikeys.
-- version 3.0: Support chatglm and other small LLM models.
-- version 2.6: Refactored plugin structure, improved interactivity, and added more plugins.
-- version 2.5: Self-updating, solving the problem of text overflow and token overflow when summarizing large engineering source codes.
-- version 2.4: (1) Added PDF full-text translation function; (2) Added the function of switching the position of the input area; (3) Added vertical layout option; (4) Optimized multi-threading function plugins.
-- version 2.3: Enhanced multi-threading interactivity.
-- version 2.2: Function plugin supports hot reloading.
-- version 2.1: Collapsible layout.
-- version 2.0: Introduction of modular function plugins.
-- version 1.0: Basic functions.
-
-gpt_academic Developer QQ Group-2: 610599535
-
-- Known Issues
- - Some browser translation plugins interfere with the front-end operation of this software.
- - Gradio versions that are too high or too low can lead to various exceptions.
-
-## Reference and Learning
-
-```
-Many other excellent designs have been referenced in the code, mainly including:
-
-# Project 1: THU ChatGLM-6B:
-https://github.com/THUDM/ChatGLM-6B
-
-# Project 2: THU JittorLLMs:
-https://github.com/Jittor/JittorLLMs
-
-# Project 3: Edge-GPT:
-https://github.com/acheong08/EdgeGPT
-
-# Project 4: ChuanhuChatGPT:
-https://github.com/GaiZhenbiao/ChuanhuChatGPT
-
-# Project 5: ChatPaper:
-https://github.com/kaixindelele/ChatPaper
-
-# More:
-https://github.com/gradio-app/gradio
-https://github.com/fghrsh/live2d_demo
-```
\ No newline at end of file
diff --git a/docs/README_FR.md b/docs/README_FR.md
deleted file mode 100644
index af3bb42..0000000
--- a/docs/README_FR.md
+++ /dev/null
@@ -1,323 +0,0 @@
-> **Note**
->
-> Ce fichier README est généré automatiquement par le plugin de traduction markdown de ce projet et n'est peut-être pas correct à 100%.
->
-> During installation, please strictly select the versions **specified** in requirements.txt.
->
-> `pip install -r requirements.txt`
->
-
-# Optimisation académique GPT (GPT Academic)
-
-**Si vous aimez ce projet, veuillez lui donner une étoile. Si vous avez trouvé des raccourcis académiques ou des plugins fonctionnels plus utiles, n'hésitez pas à ouvrir une demande ou une pull request.
-Pour traduire ce projet dans une langue arbitraire avec GPT, lisez et exécutez [`multi_language.py`](multi_language.py) (expérimental).**
-
-> **Note**
->
-> 1. Veuillez noter que seuls les plugins de fonctions (boutons) **en rouge** prennent en charge la lecture de fichiers. Certains plugins se trouvent dans le **menu déroulant** de la zone de plugins. De plus, nous accueillons et traitons les nouvelles pull requests pour les plugins avec **la plus haute priorité**!
->
-> 2. Les fonctions de chaque fichier de ce projet sont expliquées en détail dans l'auto-analyse [`self_analysis.md`](https://github.com/binary-husky/gpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A). Avec l'itération des versions, vous pouvez également cliquer sur les plugins de fonctions pertinents et appeler GPT pour régénérer le rapport d'auto-analyse du projet à tout moment. Les FAQ sont résumées dans [le wiki](https://github.com/binary-husky/gpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98). [Méthode d'installation](#installation).
->
-> 3. Ce projet est compatible avec et encourage l'utilisation de grands modèles de langage nationaux tels que chatglm, RWKV, Pangu, etc. La coexistence de plusieurs clés API est prise en charge et peut être remplie dans le fichier de configuration, tel que `API_KEY="openai-key1,openai-key2,api2d-key3"`. Lorsque vous souhaitez remplacer temporairement `API_KEY`, saisissez temporairement `API_KEY` dans la zone de saisie, puis appuyez sur Entrée pour soumettre et activer.
-
-
-
-Functionnalité | Description
---- | ---
-Révision en un clic | prend en charge la révision en un clic et la recherche d'erreurs de syntaxe dans les articles
-Traduction chinois-anglais en un clic | Traduction chinois-anglais en un clic
-Explication de code en un clic | Affichage, explication, génération et ajout de commentaires de code
-[Raccourcis personnalisés](https://www.bilibili.com/video/BV14s4y1E7jN) | prend en charge les raccourcis personnalisés
-Conception modulaire | prend en charge de puissants plugins de fonction personnalisée, les plugins prennent en charge la [mise à jour à chaud](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
-[Autoscanner](https://www.bilibili.com/video/BV1cj411A7VW) | [Plug-in de fonction] [Compréhension instantanée](https://github.com/binary-husky/gpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A) du code source de ce projet
-[Analyse de programme](https://www.bilibili.com/video/BV1cj411A7VW) | [Plug-in de fonction] Analyse en un clic de la structure d'autres projets Python / C / C ++ / Java / Lua / ...
-Lecture d'articles, [traduction](https://www.bilibili.com/video/BV1KT411x7Wn) d'articles | [Plug-in de fonction] Compréhension instantanée de l'article latex / pdf complet et génération de résumés
-[Traduction](https://www.bilibili.com/video/BV1nk4y1Y7Js/) et [révision](https://www.bilibili.com/video/BV1FT411H7c5/) complets en latex | [Plug-in de fonction] traduction ou révision en un clic d'articles en latex
-Génération de commentaires en masse | [Plug-in de fonction] Génération en un clic de commentaires de fonction en masse
-Traduction [chinois-anglais](https://www.bilibili.com/video/BV1yo4y157jV/) en Markdown | [Plug-in de fonction] avez-vous vu la [README](https://github.com/binary-husky/gpt_academic/blob/master/docs/README_EN.md) pour les 5 langues ci-dessus?
-Génération de rapports d'analyse de chat | [Plug-in de fonction] Génère automatiquement un rapport de résumé après l'exécution
-[Traduction intégrale en pdf](https://www.bilibili.com/video/BV1KT411x7Wn) | [Plug-in de fonction] Extraction de titre et de résumé de l'article pdf + traduction intégrale (multi-thread)
-[Aide à arxiv](https://www.bilibili.com/video/BV1LM4y1279X) | [Plug-in de fonction] Entrer l'url de l'article arxiv pour traduire et télécharger le résumé en un clic
-[Aide à la recherche Google Scholar](https://www.bilibili.com/video/BV19L411U7ia) | [Plug-in de fonction] Donnez l'URL de la page de recherche Google Scholar, laissez GPT vous aider à [écrire des ouvrages connexes](https://www.bilibili.com/video/BV1GP411U7Az/)
-Aggrégation d'informations en ligne et GPT | [Plug-in de fonction] Permet à GPT de [récupérer des informations en ligne](https://www.bilibili.com/video/BV1om4y127ck), puis de répondre aux questions, afin que les informations ne soient jamais obsolètes
-Affichage d'équations / images / tableaux | Fournit un affichage simultané de [la forme tex et de la forme rendue](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png), prend en charge les formules mathématiques et la coloration syntaxique du code
-Prise en charge des plugins à plusieurs threads | prend en charge l'appel multithread de chatgpt, un clic pour traiter [un grand nombre d'articles](https://www.bilibili.com/video/BV1FT411H7c5/) ou de programmes
-Thème gradio sombre en option de démarrage | Ajoutez```/?__theme=dark``` à la fin de l'URL du navigateur pour basculer vers le thème sombre
-[Prise en charge de plusieurs modèles LLM](https://www.bilibili.com/video/BV1wT411p7yf), [API2D](https://api2d.com/) | Sera probablement très agréable d'être servi simultanément par GPT3.5, GPT4, [ChatGLM de Tsinghua](https://github.com/THUDM/ChatGLM-6B), [MOSS de Fudan](https://github.com/OpenLMLab/MOSS)
-Plus de modèles LLM, déploiement de [huggingface](https://huggingface.co/spaces/qingxu98/gpt-academic) | Ajout prise en charge de l'interface Newbing (nouvelle bing), introduction du support de [Jittorllms de Tsinghua](https://github.com/Jittor/JittorLLMs), [LLaMA](https://github.com/facebookresearch/llama), [RWKV](https://github.com/BlinkDL/ChatRWKV) et [Panguα](https://openi.org.cn/pangu/)
-Plus de nouvelles fonctionnalités (génération d'images, etc.) ... | Voir la fin de ce document pour plus de détails ...
-
-
-
-
-- Nouvelle interface (modifier l'option LAYOUT de `config.py` pour passer d'une disposition ``gauche-droite`` à une disposition ``haut-bas``)
-
-

-
- Tous les boutons sont générés dynamiquement en lisant functional.py et peuvent être facilement personnalisés pour ajouter des fonctionnalités personnalisées, ce qui facilite l'utilisation du presse-papiers.
-
-

-
-
-- Correction d'erreurs/lissage du texte.
-
-

-
-
-- Si la sortie contient des équations, elles sont affichées à la fois sous forme de tex et sous forme rendue pour faciliter la lecture et la copie.
-
-

-
-
-- Pas envie de lire les codes de ce projet? Tout le projet est directement exposé par ChatGPT.
-
-

-
-
-- Appel à une variété de modèles de langage de grande envergure (ChatGLM + OpenAI-GPT3.5 + [API2D](https://api2d.com/)-GPT4).
-
-

-
-
----
-# Installation
-## Installation-Method 1: running directly (Windows, Linux or MacOS)
-
-1. Télécharger le projet
-```sh
-git clone https://github.com/binary-husky/gpt_academic.git
-cd gpt_academic
-```
-
-2. Configuration de la clé API
-
-Dans `config.py`, configurez la clé API et d'autres paramètres. Consultez [Special network environment settings](https://github.com/binary-husky/gpt_academic/issues/1).
-
-(P.S. Lorsque le programme est exécuté, il vérifie en premier s'il existe un fichier de configuration privé nommé `config_private.py` et remplace les paramètres portant le même nom dans `config.py` par les paramètres correspondants dans `config_private.py`. Par conséquent, si vous comprenez la logique de lecture de nos configurations, nous vous recommandons vivement de créer un nouveau fichier de configuration nommé `config_private.py` à côté de `config.py` et de transférer (copier) les configurations de `config.py`. `config_private.py` n'est pas contrôlé par Git et peut garantir la sécurité de vos informations privées. P.S. Le projet prend également en charge la configuration de la plupart des options via "variables d'environnement", le format d'écriture des variables d'environnement est référencé dans le fichier `docker-compose`. Priorité de lecture: "variables d'environnement" > `config_private.py` > `config.py`)
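-
-À titre d'illustration uniquement, un `config_private.py` minimal pourrait ressembler à l'esquisse ci-dessous (les noms d'options sont ceux mentionnés dans ce README ; les valeurs sont fictives, et toute option non redéfinie retombe sur `config.py`) :
-
-```python
-# config_private.py : placé à côté de config.py et ignoré par git ; il remplace les
-# options du même nom de config.py (les variables d'environnement restent prioritaires).
-API_KEY = "openai-key1,openai-key2,api2d-key3"   # clés fictives ; plusieurs clés peuvent être listées
-WEB_PORT = 50923                                  # port d'exemple, identique aux exemples Docker ci-dessous
-```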
-
-
-3. Installer les dépendances
-```sh
-# (Option I: python users installation) (Python version 3.9 or higher, the newer the better). Note: use official pip source or ali pip source. To temporarily change the source: python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/
-python -m pip install -r requirements.txt
-
-# (Option II: non-python users installation) Use Anaconda, the steps are similar (https://www.bilibili.com/video/BV1rc411W7Dr):
-conda create -n gptac_venv python=3.11 # Create anaconda env
-conda activate gptac_venv # Activate anaconda env
-python -m pip install -r requirements.txt # Same step as pip installation
-```
-
-
-Si vous souhaitez prendre en charge THU ChatGLM/FDU MOSS en tant que backend, consultez les étapes optionnelles ci-dessous.
-
-
-【Optional】 Si vous souhaitez prendre en charge THU ChatGLM/FDU MOSS en tant que backend, des dépendances supplémentaires doivent être installées (prérequis: compétent en Python + utilisez Pytorch + configuration suffisante de l'ordinateur):
-```sh
-# 【Optional Step I】 Support THU ChatGLM. Remarque sur THU ChatGLM: Si vous rencontrez l'erreur "Appel à ChatGLM échoué, les paramètres ChatGLM ne peuvent pas être chargés normalement", reportez-vous à ce qui suit: 1: La version par défaut installée est torch+cpu, si vous souhaitez utiliser cuda, vous devez désinstaller torch et réinstaller torch+cuda; 2: Si le modèle ne peut pas être chargé en raison d'une configuration insuffisante de l'ordinateur local, vous pouvez modifier la précision du modèle dans request_llm/bridge_chatglm.py, modifier AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) par AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)
-python -m pip install -r request_llm/requirements_chatglm.txt
-
-# 【Optional Step II】 Support FDU MOSS
-python -m pip install -r request_llm/requirements_moss.txt
-git clone https://github.com/OpenLMLab/MOSS.git request_llm/moss # Note: When running this line of code, you must be in the project root path.
-
-# 【Optional Step III】Make sure the AVAIL_LLM_MODELS in the config.py configuration file contains the desired model. Currently, all models supported are as follows (the jittorllms series currently only supports the docker scheme):
-AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm", "newbing", "moss"] # + ["jittorllms_rwkv", "jittorllms_pangualpha", "jittorllms_llama"]
-```
-
-
-
-
-
-
-4. Exécution
-```sh
-python main.py
-```
-
-5. Plugin de fonction de test
-```
-- Fonction de modèle de plugin de test (requiert que GPT réponde à ce qui s'est passé dans l'histoire aujourd'hui), vous pouvez utiliser cette fonction comme modèle pour mettre en œuvre des fonctionnalités plus complexes.
- Cliquez sur "[Démo de modèle de plugin de fonction] Aujourd'hui dans l'histoire"
-```
-
-## Installation - Méthode 2: Utilisation de Docker
-
-1. ChatGPT uniquement (recommandé pour la plupart des gens)
-
-``` sh
-git clone https://github.com/binary-husky/gpt_academic.git # Télécharger le projet
-cd gpt_academic # Accéder au chemin
-nano config.py # Editez config.py avec n'importe quel éditeur de texte en configurant "Proxy", "API_KEY" et "WEB_PORT" (p. ex. 50923)
-docker build -t gpt-academic . # Installer
-
-# (Dernière étape - choix1) Dans un environnement Linux, l'utilisation de `--net=host` est plus facile et rapide
-docker run --rm -it --net=host gpt-academic
-# (Dernière étape - choix 2) Dans un environnement macOS/Windows, seule l'option -p permet d'exposer le port du conteneur (p.ex. 50923) au port de l'hôte.
-docker run --rm -it -e WEB_PORT=50923 -p 50923:50923 gpt-academic
-```
-
-2. ChatGPT + ChatGLM + MOSS (il faut connaître Docker)
-
-``` sh
-# Modifiez docker-compose.yml, supprimez la solution 1 et la solution 3, conservez la solution 2. Modifiez la configuration de la solution 2 dans docker-compose.yml en suivant les commentaires.
-docker-compose up
-```
-
-3. ChatGPT + LLAMA + PanGu + RWKV (il faut connaître Docker)
-``` sh
-# Modifiez docker-compose.yml, supprimez la solution 1 et la solution 2, conservez la solution 3. Modifiez la configuration de la solution 3 dans docker-compose.yml en suivant les commentaires.
-docker-compose up
-```
-
-
-## Installation - Méthode 3: Autres méthodes de déploiement
-
-1. Comment utiliser une URL de proxy inversé / Microsoft Azure Cloud API
-Configurez simplement API_URL_REDIRECT selon les instructions de config.py.
-
-2. Déploiement distant sur un serveur cloud (connaissance et expérience des serveurs cloud requises)
-Veuillez consulter [Wiki de déploiement-1](https://github.com/binary-husky/gpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97).
-
-3. Utilisation de WSL2 (sous-système Windows pour Linux)
-Veuillez consulter [Wiki de déploiement-2](https://github.com/binary-husky/gpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2).
-
-4. Comment exécuter sous un sous-répertoire (tel que `http://localhost/subpath`)
-Veuillez consulter les [instructions d'exécution de FastAPI](docs/WithFastapi.md).
-
-5. Utilisation de docker-compose
-Veuillez lire docker-compose.yml, puis suivre les instructions fournies.
-
-# Utilisation avancée
-## Personnalisation de nouveaux boutons pratiques / Plugins de fonctions personnalisées
-
-1. Personnalisation de nouveaux boutons pratiques (raccourcis académiques)
-Ouvrez core_functional.py avec n'importe quel éditeur de texte, ajoutez une entrée comme suit, puis redémarrez le programme. (Si le bouton a été ajouté avec succès et est visible, le préfixe et le suffixe prennent en charge les modifications à chaud et ne nécessitent pas le redémarrage du programme pour prendre effet.)
-Par exemple
-```
-"Super traduction en chinois": {
- # Préfixe, sera ajouté avant votre entrée. Par exemple, pour décrire votre demande, telle que traduire, expliquer du code, faire la mise en forme, etc.
- "Prefix": "Veuillez traduire le contenu suivant en chinois, puis expliquer chaque terme proprement nommé qui y apparaît avec un tableau markdown:\n\n",
-
- # Suffixe, sera ajouté après votre entrée. Par exemple, en utilisant le préfixe, vous pouvez entourer votre contenu d'entrée de guillemets.
- "Suffix": "",
-},
-```
-
-

-
-
-2. Plugins de fonctions personnalisées
-
-Écrivez des plugins de fonctions puissants pour effectuer toutes les tâches que vous souhaitez ou que vous ne pouvez pas imaginer.
-Les plugins de ce projet ont une difficulté de programmation et de débogage très faible. Si vous avez des connaissances de base en Python, vous pouvez simuler la fonctionnalité de votre propre plugin en suivant le modèle que nous avons fourni.
-Veuillez consulter le [Guide du plugin de fonction](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97) pour plus de détails.
-
----
-# Latest Update
-
-## Nouvelles fonctionnalités en cours de déploiement.
-
-1. Fonction de sauvegarde de la conversation.
-Appelez simplement "Enregistrer la conversation actuelle" dans la zone de plugin de fonction pour enregistrer la conversation actuelle en tant que fichier html lisible et récupérable. De plus, dans la zone de plugin de fonction (menu déroulant), appelez "Charger une archive de l'historique de la conversation" pour restaurer la conversation précédente. Astuce : cliquer directement sur "Charger une archive de l'historique de la conversation" sans spécifier de fichier permet de consulter le cache d'archive html précédent. Cliquez sur "Supprimer tous les enregistrements locaux de l'historique de la conversation" pour supprimer le cache d'archive html.
-
-
-

-
-
-
-
-2. Générer un rapport. La plupart des plugins génèrent un rapport de travail après l'exécution.
-
-
-3. Conception de fonctionnalités modulaires avec une interface simple mais capable d'une fonctionnalité puissante.
-
-

-

-
-
-4. C'est un projet open source qui peut "se traduire de lui-même".
-
-

-
-
-5. Traduire d'autres projets open source n'est pas un problème.
-
-

-
-
-
-

-
-
-6. Fonction de décoration de live2d (désactivée par défaut, nécessite une modification de config.py).
-
-

-
-
-7. Prise en charge du modèle de langue MOSS.
-
-

-
-
-8. Génération d'images OpenAI.
-
-

-
-
-9. Analyse et synthèse vocales OpenAI.
-
-

-
-
-10. Correction de la totalité des erreurs de Latex.
-
-

-
-
-
-## Versions :
-- version 3.5 (À faire) : appel de toutes les fonctions de plugin de ce projet en langage naturel (priorité élevée)
-- version 3.4 (À faire) : amélioration du support multi-thread de chatglm en local
-- version 3.3 : Fonctionnalité intégrée d'informations d'internet
-- version 3.2 : La fonction du plugin de fonction prend désormais en charge des interfaces de paramètres plus nombreuses (fonction de sauvegarde, décodage de n'importe quel langage de code + interrogation simultanée de n'importe quelle combinaison de LLM)
-- version 3.1 : Prise en charge de l'interrogation simultanée de plusieurs modèles GPT ! Support api2d, équilibrage de charge multi-clé api.
-- version 3.0 : Prise en charge de chatglm et autres LLM de petite taille.
-- version 2.6 : Refonte de la structure des plugins, amélioration de l'interactivité, ajout de plus de plugins.
-- version 2.5 : Auto-mise à jour, résolution des problèmes de texte trop long et de dépassement de jetons lors de la compilation du projet global.
-- version 2.4 : (1) Nouvelle fonction de traduction de texte intégral PDF ; (2) Nouvelle fonction de permutation de position de la zone d'entrée ; (3) Nouvelle option de mise en page verticale ; (4) Amélioration des fonctions multi-thread de plug-in.
-- version 2.3 : Amélioration de l'interactivité multithread.
-- version 2.2 : Les plugins de fonctions peuvent désormais être rechargés à chaud.
-- version 2.1 : Disposition pliable
-- version 2.0 : Introduction de plugins de fonctions modulaires
-- version 1.0 : Fonctionnalités de base
-
-gpt_academic développeur QQ groupe-2:610599535
-
-- Problèmes connus
- - Certains plugins de traduction de navigateur perturbent le fonctionnement de l'interface frontend de ce logiciel
- - Des versions gradio trop hautes ou trop basses provoquent de nombreuses anomalies
-
-## Référence et apprentissage
-
-```
-De nombreux autres excellents projets ont été référencés dans le code, notamment :
-
-# Projet 1 : ChatGLM-6B de Tsinghua :
-https://github.com/THUDM/ChatGLM-6B
-
-# Projet 2 : JittorLLMs de Tsinghua :
-https://github.com/Jittor/JittorLLMs
-
-# Projet 3 : Edge-GPT :
-https://github.com/acheong08/EdgeGPT
-
-# Projet 4 : ChuanhuChatGPT :
-https://github.com/GaiZhenbiao/ChuanhuChatGPT
-
-# Projet 5 : ChatPaper :
-https://github.com/kaixindelele/ChatPaper
-
-# Plus :
-https://github.com/gradio-app/gradio
-https://github.com/fghrsh/live2d_demo
-```
\ No newline at end of file
diff --git a/docs/README_JP.md b/docs/README_JP.md
deleted file mode 100644
index 46145e1..0000000
--- a/docs/README_JP.md
+++ /dev/null
@@ -1,329 +0,0 @@
-> **Note**
->
-> このReadmeファイルは、このプロジェクトのmarkdown翻訳プラグインによって自動的に生成されたもので、100%正確ではない可能性があります。
->
-> When installing dependencies, please strictly choose the versions specified in `requirements.txt`.
->
-> `pip install -r requirements.txt`
->
-
-# GPT 学术优化 (GPT Academic)
-
-**もしこのプロジェクトが好きなら、星をつけてください。もしあなたがより良いアカデミックショートカットまたは機能プラグインを思いついた場合、Issueをオープンするか pull request を送信してください。私たちはこのプロジェクト自体によって翻訳された[英語 |](README_EN.md)[日本語 |](README_JP.md)[한국어 |](https://github.com/mldljyh/ko_gpt_academic)[Русский |](README_RS.md)[Français](README_FR.md)のREADMEも用意しています。
-GPTを使った任意の言語にこのプロジェクトを翻訳するには、[`multi_language.py`](multi_language.py)を読んで実行してください。 (experimental)。
-
-> **注意**
->
-> 1. **赤色**で表示された関数プラグイン(ボタン)のみ、ファイルの読み取りをサポートしています。一部のプラグインは、プラグインエリアの**ドロップダウンメニュー**内にあります。また、私たちはどんな新しいプラグインのPRでも、**最優先**で歓迎し、処理します!
->
-> 2. このプロジェクトの各ファイルの機能は、自己解析の詳細説明書である[`self_analysis.md`](https://github.com/binary-husky/gpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A)で説明されています。バージョンが進化するにつれて、関連する関数プラグインをいつでもクリックし、GPTを呼び出してプロジェクトの自己解析レポートを再生成することができます。よくある問題は[`wiki`](https://github.com/binary-husky/gpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98)にまとめられています。[インストール方法](#installation)。
-
-> 3. このプロジェクトは、chatglmやRWKV、パンクなど、国内の大規模自然言語モデルを利用することをサポートし、試みることを奨励します。複数のAPIキーを共存することができ、設定ファイルに`API_KEY="openai-key1,openai-key2,api2d-key3"`のように記入することができます。`API_KEY`を一時的に変更する場合は、入力エリアに一時的な`API_KEY`を入力してEnterキーを押せば、それが有効になります。
-
-
-
-
-機能 | 説明
---- | ---
-一键校正 | 一键で校正可能、論文の文法エラーを検索することができる
-一键中英翻訳 | 一键で中英翻訳可能
-一键コード解説 | コードを表示し、解説し、生成し、コードに注釈をつけることができる
-[自分でカスタマイズ可能なショートカットキー](https://www.bilibili.com/video/BV14s4y1E7jN) | 自分でカスタマイズ可能なショートカットキーをサポートする
-モジュール化された設計 | カスタマイズ可能な[強力な関数プラグイン](https://github.com/binary-husky/gpt_academic/tree/master/crazy_functions)をサポートし、プラグインは[ホットアップデート](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)に対応している
-[自己プログラム解析](https://www.bilibili.com/video/BV1cj411A7VW) | [関数プラグイン] [一键読解](https://github.com/binary-husky/gpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A)このプロジェクトのソースコード
-プログラム解析 | [関数プラグイン] 一鍵で他のPython/C/C++/Java/Lua/...プロジェクトを分析できる
-論文の読み、[翻訳](https://www.bilibili.com/video/BV1KT411x7Wn) | [関数プラグイン] LaTex/ PDF論文の全文を一鍵で読み解き、要約を生成することができる
-LaTex全文[翻訳](https://www.bilibili.com/video/BV1nk4y1Y7Js/)、[校正](https://www.bilibili.com/video/BV1FT411H7c5/) | [関数プラグイン] LaTex論文の翻訳または校正を一鍵で行うことができる
-一括で注釈を生成 | [関数プラグイン] 一鍵で関数に注釈をつけることができる
-Markdown[中英翻訳](https://www.bilibili.com/video/BV1yo4y157jV/) | [関数プラグイン] 上記の5種類の言語の[README](https://github.com/binary-husky/gpt_academic/blob/master/docs/README_EN.md)を見たことがありますか?
-チャット分析レポート生成 | [関数プラグイン] 実行後、自動的に概要報告書を生成する
-[PDF論文全文翻訳機能](https://www.bilibili.com/video/BV1KT411x7Wn) | [関数プラグイン] PDF論文からタイトルと要約を抽出し、全文を翻訳する(マルチスレッド)
-[Arxivアシスタント](https://www.bilibili.com/video/BV1LM4y1279X) | [関数プラグイン] arxiv記事のURLを入力するだけで、要約を一鍵翻訳し、PDFをダウンロードできる
-[Google Scholar 総合アシスタント](https://www.bilibili.com/video/BV19L411U7ia) | [関数プラグイン] 任意のGoogle Scholar検索ページURLを指定すると、gptが[related works](https://www.bilibili.com/video/BV1GP411U7Az/)を作成する
-インターネット情報収集+GPT | [関数プラグイン] まずGPTに[インターネットから情報を収集](https://www.bilibili.com/video/BV1om4y127ck)してから質問に回答させ、情報が常に最新であるようにする
-数式/画像/表表示 | 数式の[tex形式とレンダリング形式](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png)を同時に表示し、数式、コードハイライトをサポートしている
-マルチスレッド関数プラグインがサポートされている | chatgptをマルチスレッドで呼び出し、[大量のテキスト](https://www.bilibili.com/video/BV1FT411H7c5/)またはプログラムを一鍵で処理できる
-ダークグラジオ[テーマの起動](https://github.com/binary-husky/gpt_academic/issues/173) | ブラウザのURLの後ろに```/?__theme=dark```を追加すると、ダークテーマを切り替えることができます。
-[多数のLLMモデル](https://www.bilibili.com/video/BV1wT411p7yf)がサポートされ、[API2D](https://api2d.com/)がサポートされている | 同時にGPT3.5、GPT4、[清華ChatGLM](https://github.com/THUDM/ChatGLM-6B)、[復旦MOSS](https://github.com/OpenLMLab/MOSS)に対応
-より多くのLLMモデルが接続され、[huggingfaceデプロイ](https://huggingface.co/spaces/qingxu98/gpt-academic)がサポートされている | Newbingインターフェイス(Newbing)、清華大学の[Jittorllm](https://github.com/Jittor/JittorLLMs)のサポート[LLaMA](https://github.com/facebookresearch/llama), [RWKV](https://github.com/BlinkDL/ChatRWKV)と[盘古α](https://openi.org.cn/pangu/)
-さらに多くの新機能(画像生成など)を紹介する... | この文書の最後に示す...
-
-
-- 新しいインターフェース(`config.py`のLAYOUTオプションを変更することで、「左右配置」と「上下配置」を切り替えることができます)
-
-

-
-- All buttons are dynamically generated by reading functional.py, so custom functions can be added freely, freeing you from the clipboard.
-
-
-

-
-
-- Polishing/Correction
-
-
-

-
-
-- If the output contains formulas, they are displayed in both TeX and rendering forms, making it easy to copy and read.
-
-
-

-
-
-- Don't feel like looking at the project code? Just ask chatgpt directly.
-
-
-

-
-
-
-- Mixed calls of multiple large language models (ChatGLM + OpenAI-GPT3.5 + [API2D](https://api2d.com/)-GPT4)
-
-
-

-
-
----
-
-# Installation
-
-## Installation-Method 1: Directly run (Windows, Linux or MacOS)
-
-1. Download the project.
-
-```sh
-git clone https://github.com/binary-husky/gpt_academic.git
-cd gpt_academic
-```
-
-2. Configure the API_KEY.
-
-Configure the API KEY and other settings in `config.py` and [special network environment settings](https://github.com/binary-husky/gpt_academic/issues/1).
-
-(P.S. When the program is running, it will first check if there is a private configuration file named `config_private.py`, and use the configuration in it to override the same name configuration in `config.py`. Therefore, if you can understand our configuration reading logic, we strongly recommend that you create a new configuration file named `config_private.py` next to `config.py`, and transfer (copy) the configuration in `config.py` to `config_private.py`. `config_private.py` is not controlled by git and can make your privacy information more secure. P.S. The project also supports configuring most options through `environment variables`, and the writing format of environment variables refers to the `docker-compose` file. Reading priority: `environment variables` > `config_private.py` > `config.py`)
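-
-A minimal sketch of the `config_private.py` approach described above (illustrative only; any option name from `config.py`, such as API_KEY, can be restated here to override it):
-
-```
-# config_private.py — hypothetical example, kept out of git
-API_KEY = "sk-xxxxxxxxxxxxxxxx"   # your real key, overriding the placeholder in config.py
-USE_PROXY = False                 # any other option from config.py may be overridden the same way
-```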
-
-3. Install dependencies.
-
-```sh
-# (Choose I: If familiar with Python)(Python version 3.9 or above, the newer the better) Note: Use the official pip source or Ali pip source. Temporary switching source method: python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/
-python -m pip install -r requirements.txt
-
-# (Choose II: If not familiar with Python) Use anaconda, the steps are the same (https://www.bilibili.com/video/BV1rc411W7Dr):
-conda create -n gptac_venv python=3.11 # Create anaconda environment.
-conda activate gptac_venv # Activate the anaconda environment.
-python -m pip install -r requirements.txt # This step is the same as the pip installation step.
-```
-
-
If you need to support Tsinghua ChatGLM/Fudan MOSS as a backend, click to expand.
-
-
-[Optional Steps] If you need to support Tsinghua ChatGLM/Fudan MOSS as a backend, you need to install more dependencies (prerequisites: familiarity with Python, prior experience with PyTorch, and a sufficiently powerful machine):
-
-```sh
-# Optional step I: support Tsinghua ChatGLM. Tsinghua ChatGLM remarks: If you encounter the error "Call ChatGLM fail cannot load ChatGLM parameters normally", refer to the following: 1: The version installed above is torch+cpu version, using cuda requires uninstalling torch and reinstalling torch+cuda; 2: If the model cannot be loaded due to insufficient local configuration, you can modify the model accuracy in request_llm/bridge_chatglm.py, and change AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) to AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True).
-python -m pip install -r request_llm/requirements_chatglm.txt
-
-# Optional Step II: Support Fudan MOSS.
-python -m pip install -r request_llm/requirements_moss.txt
-git clone https://github.com/OpenLMLab/MOSS.git request_llm/moss # Note that when executing this line of code, it must be in the project root.
-
-# 【Optional Step III】Ensure that the AVAIL_LLM_MODELS in the config.py configuration file contains the expected model. Currently, all supported models are as follows (jittorllms series currently only supports the docker solution):
-AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm", "newbing", "moss"] # + ["jittorllms_rwkv", "jittorllms_pangualpha", "jittorllms_llama"]
-```
-
-
-
-
-
-
-4. Run.
-
-```sh
-python main.py
-```
-
-5. Testing Function Plugin
-```
-- Test function plugin template function (requires gpt to answer what happened today in history), you can use this function as a template to implement more complex functions
- Click "[Function Plugin Template Demo] Today in History"
-```
-
-## Installation-Method 2: Using Docker
-
-1. Only ChatGPT (recommended for most people)
-
- ``` sh
-git clone https://github.com/binary-husky/gpt_academic.git # Download project
-cd gpt_academic # Enter path
-nano config.py # Edit config.py with any text editor - configure "Proxy," "API_KEY," "WEB_PORT" (e.g., 50923) and more
-docker build -t gpt-academic . # installation
-
-#(Last step-Option 1) In a Linux environment, `--net=host` is more convenient and quick
-docker run --rm -it --net=host gpt-academic
-#(Last step-Option 2) In a macOS/Windows environment, the -p option must be used to expose the container port (e.g., 50923) to a port on the host.
-docker run --rm -it -e WEB_PORT=50923 -p 50923:50923 gpt-academic
-```
-
-2. ChatGPT + ChatGLM + MOSS (requires familiarity with Docker)
-
-``` sh
-# Modify docker-compose.yml, delete plans 1 and 3, and retain plan 2. Modify the configuration of plan 2 in docker-compose.yml, and reference the comments for instructions.
-docker-compose up
-```
-
-3. ChatGPT + LLAMA + Pangu + RWKV (requires familiarity with Docker)
-``` sh
-# Modify docker-compose.yml, delete plans 1 and 2, and retain plan 3. Modify the configuration of plan 3 in docker-compose.yml, and reference the comments for instructions.
-docker-compose up
-```
-
-
-## Installation-Method 3: Other Deployment Methods
-
-1. How to use proxy URL/Microsoft Azure API
-Configure API_URL_REDIRECT according to the instructions in `config.py` (a minimal illustrative sketch follows at the end of this list).
-
-2. Remote Cloud Server Deployment (requires cloud server knowledge and experience)
-Please visit [Deployment Wiki-1](https://github.com/binary-husky/gpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97)
-
-3. Using WSL2 (Windows Subsystem for Linux)
-Please visit [Deployment Wiki-2](https://github.com/binary-husky/gpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)
-
-4. How to run on a secondary URL (such as `http://localhost/subpath`)
-Please visit [FastAPI Running Instructions](docs/WithFastapi.md)
-
-5. Run with docker-compose
-Please read docker-compose.yml and follow the instructions provided therein.
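-
-Referring back to item 1 above, a minimal illustrative sketch of an API_URL_REDIRECT override (the placeholder endpoint is hypothetical and must be replaced with your own proxy or Azure URL; the exact format is documented in `config.py`):
-
-```
-API_URL_REDIRECT = {
-    "https://api.openai.com/v1/chat/completions": "https://<your-proxy-or-azure-endpoint>/v1/chat/completions",
-}
-```
-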
----
-# Advanced Usage
-## Customize new convenience buttons/custom function plugins
-
-1. Custom new convenience buttons (academic shortcut keys)
-Open `core_functional.py` with any text editor, add the item as follows, and restart the program. (If the button has been added successfully and is visible, the prefix and suffix support hot modification without restarting the program.)
-example:
-```
-"Super English to Chinese Translation": {
- # Prefix, which will be added before your input. For example, used to describe your request, such as translation, code interpretation, polish, etc.
- "Prefix": "Please translate the following content into Chinese, and explain the proper nouns in the text in a markdown table one by one:\n\n",
-
- # Suffix, which will be added after your input. For example, in combination with the prefix, you can surround your input content with quotation marks.
- "Suffix": "",
-},
-```
-
-

-
-
-2. Custom function plugins
-
-Write powerful function plugins to perform any task you can and cannot think of.
-The difficulty of writing and debugging plugins in this project is low, and as long as you have a certain amount of python basic knowledge, you can follow the template provided by us to achieve your own plugin functions.
-For details, please refer to the [Function Plugin Guide](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97).
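-
-As a purely illustrative sketch of what a plugin looks like (the function name is hypothetical; the exact signature and helpers should be taken from the real templates in `crazy_functions/` and the Function Plugin Guide):
-
-```
-from toolbox import CatchException, update_ui
-
-@CatchException
-def demo_plugin(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
-    # Echo the user input without calling any LLM, then refresh the web UI
-    chatbot.append((txt, f"Demo reply: received {len(txt)} characters."))
-    yield from update_ui(chatbot=chatbot, history=history)
-```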
-
----
-# Latest Update
-## New feature dynamics.
-1. ダイアログの保存機能。関数プラグインエリアで '現在の会話を保存' を呼び出すと、現在のダイアログを読み取り可能で復元可能なHTMLファイルとして保存できます。さらに、関数プラグインエリア(ドロップダウンメニュー)で 'ダイアログの履歴保存ファイルを読み込む' を呼び出すことで、以前の会話を復元することができます。Tips:ファイルを指定せずに 'ダイアログの履歴保存ファイルを読み込む' をクリックすることで、過去のHTML保存ファイルのキャッシュを表示することができます。'すべてのローカルダイアログの履歴を削除' をクリックすることで、すべてのHTML保存ファイルのキャッシュを削除できます。
-
-

-
-
-
-2. 報告書を生成します。ほとんどのプラグインは、実行が終了した後に作業報告書を生成します。
-
-
-3. モジュール化された機能設計、簡単なインターフェースで強力な機能をサポートする。
-
-

-

-
-
-4. 自己解決可能なオープンソースプロジェクトです。
-
-

-
-
-
-5. 他のオープンソースプロジェクトの解読、容易である。
-
-

-
-
-
-

-
-
-6. [Live2D](https://github.com/fghrsh/live2d_demo)のデコレート小機能です。(デフォルトでは閉じてますが、 `config.py`を変更する必要があります。)
-
-

-
-
-7. 新たにMOSS大言語モデルのサポートを追加しました。
-
-

-
-
-8. OpenAI画像生成
-
-

-
-
-9. OpenAIオーディオの解析とサマリー
-
-

-
-
-10. 全文校正されたLaTeX
-
-

-
-
-
-## バージョン:
-- version 3.5(作業中):すべての関数プラグインを自然言語で呼び出すことができるようにする(高い優先度)。
-- version 3.4(作業中):chatglmのローカルモデルのマルチスレッドをサポートすることで、機能を改善する。
-- version 3.3:+Web情報の総合機能
-- version 3.2:関数プラグインでさらに多くのパラメータインターフェイスをサポートする(ダイアログの保存機能、任意の言語コードの解読+同時に任意のLLM組み合わせに関する問い合わせ)
-- version 3.1:複数のGPTモデルを同時に質問できるようになりました! api2dをサポートし、複数のAPIキーを均等に負荷分散することができます。
-- version 3.0:chatglmとその他の小型LLMのサポート。
-- version 2.6:プラグイン構造を再構築し、対話内容を高め、より多くのプラグインを追加しました。
-- version 2.5:自己アップデートし、長文書やトークンのオーバーフローの問題を解決しました。
-- version 2.4:(1)全文翻訳のPDF機能を追加しました。(2)入力エリアの位置切り替え機能を追加しました。(3)垂直レイアウトオプションを追加しました。(4)マルチスレッド関数プラグインを最適化しました。
-- version 2.3:マルチスレッド性能の向上。
-- version 2.2:関数プラグインのホットリロードをサポートする。
-- version 2.1:折りたたみ式レイアウト。
-- version 2.0:モジュール化された関数プラグインを導入。
-- version 1.0:基本機能
-
-gpt_academic開発者QQグループ-2:610599535
-
-- 既知の問題
- - 一部のブラウザ翻訳プラグインが、このソフトウェアのフロントエンドの実行を妨害する
- - gradioバージョンが高すぎるか低すぎると、多くの異常が引き起こされる
-
-## 参考学習
-
-```
-コードの中には、他の優れたプロジェクトの設計から参考にしたものがたくさん含まれています:
-
-# プロジェクト1:清華ChatGLM-6B:
-https://github.com/THUDM/ChatGLM-6B
-
-# プロジェクト2:清華JittorLLMs:
-https://github.com/Jittor/JittorLLMs
-
-# プロジェクト3:Edge-GPT:
-https://github.com/acheong08/EdgeGPT
-
-# プロジェクト4:ChuanhuChatGPT:
-https://github.com/GaiZhenbiao/ChuanhuChatGPT
-
-# プロジェクト5:ChatPaper:
-https://github.com/kaixindelele/ChatPaper
-
-# その他:
-https://github.com/gradio-app/gradio
-https://github.com/fghrsh/live2d_demo
-```
\ No newline at end of file
diff --git a/docs/README_RS.md b/docs/README_RS.md
deleted file mode 100644
index d4888a0..0000000
--- a/docs/README_RS.md
+++ /dev/null
@@ -1,278 +0,0 @@
-> **Note**
->
-> Этот файл самовыражения автоматически генерируется модулем перевода markdown в этом проекте и может быть не на 100% правильным.
->
-#

GPT Академическая оптимизация (GPT Academic)
-
-**Если вам нравится этот проект, пожалуйста, поставьте ему звезду. Если вы придумали более полезные языковые ярлыки или функциональные плагины, не стесняйтесь открывать issue или pull request.
-Чтобы перевести этот проект на произвольный язык с помощью GPT, ознакомьтесь и запустите [`multi_language.py`](multi_language.py) (экспериментальный).
-
-> **Примечание**
->
-> 1. Обратите внимание, что только функциональные плагины (кнопки), помеченные **красным цветом**, поддерживают чтение файлов, некоторые плагины находятся в **выпадающем меню** в области плагинов. Кроме того, мы с наивысшим приоритетом рады и обрабатываем pull requests для любых новых плагинов!
->
-> 2. В каждом файле проекта функциональность описана в документе самоанализа [`self_analysis.md`](https://github.com/binary-husky/gpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A). С каждой итерацией выполнения версии вы можете в любое время вызвать повторное создание отчета о самоанализе этого проекта, щелкнув соответствующий функциональный плагин и вызвав GPT. Вопросы сборки описаны в [`wiki`](https://github.com/binary-husky/gpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98). [Метод установки](#installation).
->
-> 3. Этот проект совместим и поощряет использование китайских языковых моделей chatglm и RWKV, пангу и т. Д. Поддержка нескольких api-key, которые могут существовать одновременно, может быть указан в файле конфигурации, например `API_KEY="openai-key1,openai-key2,api2d-key3"`. Если требуется временно изменить `API_KEY`, введите временный `API_KEY` в области ввода и нажмите клавишу Enter, чтобы он вступил в силу.
-
-> **Примечание**
->
-> При установке зависимостей строго выбирайте версии, **указанные в файле requirements.txt**.
->
-> `pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/`
-
-Функция | Описание
---- | ---
-Однокнопочный стиль | Поддержка однокнопочного стиля и поиска грамматических ошибок в научных статьях
-Однокнопочный перевод на английский и китайский | Однокнопочный перевод на английский и китайский
-Однокнопочное объяснение кода | Показ кода, объяснение его, генерация кода, комментирование кода
-[Настройка быстрых клавиш](https://www.bilibili.com/video/BV14s4y1E7jN) | Поддержка настройки быстрых клавиш
-Модульный дизайн | Поддержка пользовательских функциональных плагинов мощных [функциональных плагинов](https://github.com/binary-husky/gpt_academic/tree/master/crazy_functions), плагины поддерживают [горячую замену](https://github.com/binary-husky/gpt_academic/wiki/Function-Plug-in-Guide)
-[Анализ своей программы](https://www.bilibili.com/video/BV1cj411A7VW) | [Функциональный плагин] [Однокнопочный просмотр](https://github.com/binary-husky/gpt_academic/wiki/chatgpt-academicProject-Self-analysis-Report) исходного кода этого проекта
-[Анализ программы](https://www.bilibili.com/video/BV1cj411A7VW) | [Функциональный плагин] Однокнопочный анализ дерева других проектов Python/C/C++/Java/Lua/...
-Чтение статей, [перевод](https://www.bilibili.com/video/BV1KT411x7Wn) статей | [Функциональный плагин] Однокнопочное чтение полного текста научных статей и генерация резюме
-Полный перевод [LaTeX](https://www.bilibili.com/video/BV1nk4y1Y7Js/) и совершенствование | [Функциональный плагин] Однокнопочный перевод или совершенствование LaTeX статьи
-Автоматическое комментирование | [Функциональный плагин] Однокнопочное автоматическое генерирование комментариев функций
-[Перевод](https://www.bilibili.com/video/BV1yo4y157jV/) Markdown на английский и китайский | [Функциональный плагин] Вы видели обе версии файлов [README](https://github.com/binary-husky/gpt_academic/blob/master/docs/README_EN.md) для этих 5 языков?
-Отчет о чат-анализе | [Функциональный плагин] После запуска будет автоматически сгенерировано сводное извещение
-Функция перевода полного текста [PDF-статьи](https://www.bilibili.com/video/BV1KT411x7Wn) | [Функциональный плагин] Извлечение заголовка и резюме [PDF-статьи](https://www.bilibili.com/video/BV1KT411x7Wn) и перевод всего документа (многопоточность)
-[Arxiv Helper](https://www.bilibili.com/video/BV1LM4y1279X) | [Функциональный плагин] Введите URL статьи на arxiv и одним щелчком мыши переведите резюме и загрузите PDF
-[Google Scholar Integration Helper](https://www.bilibili.com/video/BV19L411U7ia) | [Функциональный плагин] При заданном любом URL страницы поиска в Google Scholar позвольте gpt вам помочь [написать обзор](https://www.bilibili.com/video/BV1GP411U7Az/)
-Сбор Интернет-информации + GPT | [Функциональный плагин] Однокнопочный [запрос информации из Интернета GPT](https://www.bilibili.com/video/BV1om4y127ck), затем ответьте на вопрос, чтобы информация не устарела никогда
-Отображение формул / изображений / таблиц | Может одновременно отображать формулы в [формате Tex и рендеринге](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png), поддерживает формулы, подсвечивает код
-Поддержка функций с многопоточностью | Поддержка многопоточного вызова chatgpt, однокнопочная обработка [больших объемов текста](https://www.bilibili.com/video/BV1FT411H7c5/) или программ
-Темная тема gradio для запуска приложений | Добавьте ```/?__theme=dark``` после URL в браузере, чтобы переключиться на темную тему
-[Поддержка нескольких моделей LLM](https://www.bilibili.com/video/BV1wT411p7yf), [API2D](https://api2d.com/) | Они одновременно обслуживаются GPT3.5, GPT4, [Clear ChatGLM](https://github.com/THUDM/ChatGLM-6B), [Fudan MOSS](https://github.com/OpenLMLab/MOSS)
-Подключение нескольких новых моделей LLM, поддержка деплоя[huggingface](https://huggingface.co/spaces/qingxu98/gpt-academic) | Подключение интерфейса Newbing (новый Bing), подключение поддержки [LLaMA](https://github.com/facebookresearch/llama), поддержка [RWKV](https://github.com/BlinkDL/ChatRWKV) и [Pangu α](https://openi.org.cn/pangu/)
-Больше новых функций (генерация изображения и т. д.) | См. в конце этого файла…
-
-- All buttons are dynamically generated by reading functional.py, and custom functions can be freely added, freeing you from the clipboard
-
-

-
-
-- Revision/Correction
-
-

-
-
-- If the output contains formulas, they will be displayed in both tex and rendered form for easy copying and reading
-
-

-
-
-- Don't feel like looking at project code? Show the entire project directly in chatgpt
-
-

-
-
-- Mixing multiple large language models (ChatGLM + OpenAI-GPT3.5 + [API2D](https://api2d.com/)-GPT4)
-
-

-
-
----
-# Installation
-## Installation-Method 1: Run directly (Windows, Linux or MacOS)
-
-1. Download the project
-```sh
-git clone https://github.com/binary-husky/gpt_academic.git
-cd gpt_academic
-```
-
-2. Configure API_KEY
-
-In `config.py`, configure the API KEY and other settings; see also [special network environment settings](https://github.com/binary-husky/gpt_academic/issues/1).
-
-(P.S. When the program is running, it will first check whether there is a secret configuration file named `config_private.py` and use the configuration in it to replace the same name in` config.py`. Therefore, if you understand our configuration reading logic, we strongly recommend that you create a new configuration file named `config_private.py` next to `config.py`, and transfer (copy) the configuration in `config.py` to `config_private.py`. `config_private.py` is not controlled by git, which can make your privacy information more secure. P.S. The project also supports configuring most options through `environment variables`, and the writing format of environment variables refers to the `docker-compose` file. Priority of read: `environment variable`>`config_private.py`>`config.py`)
-
-
-3. Install dependencies
-```sh
-# (Option I: If familiar with Python)(Python version 3.9 or above, the newer the better), note: use the official pip source or the aliyun pip source; temporary source-switching method: python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/
-python -m pip install -r requirements.txt
-
-# (Option II: If unfamiliar with Python)Use Anaconda, the steps are also similar (https://www.bilibili.com/video/BV1rc411W7Dr):
-conda create -n gptac_venv python=3.11 # create an Anaconda environment
-conda activate gptac_venv # activate Anaconda environment
-python -m pip install -r requirements.txt # This step is the same as the pip installation
-```
-
-
If you need to support Tsinghua ChatGLM/Fudan MOSS as backend, click here to expand
-
-
-[Optional step] If you need to support Tsinghua ChatGLM/Fudan MOSS as a backend, you need to install more dependencies (prerequisites: familiarity with Python, prior experience with PyTorch, and a sufficiently powerful machine):
-```sh
-# [Optional step I] Support Tsinghua ChatGLM. Note: if you encounter the error "Call ChatGLM fail cannot load ChatGLM parameters normally", refer to the following: 1. The default installation above is the torch+cpu version; to use CUDA, uninstall torch and reinstall torch+cuda. 2. If the model cannot be loaded because the local hardware is insufficient, you can reduce the model precision in request_llm/bridge_chatglm.py by changing AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) to AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)
-python -m pip install -r request_llm/requirements_chatglm.txt
-
-# [Optional step II] Support Fudan MOSS
-python -m pip install -r request_llm/requirements_moss.txt
-git clone https://github.com/OpenLMLab/MOSS.git request_llm/moss # Note that when executing this line of code, you must be in the project root path
-
-# [Optional step III] Make sure the AVAIL_LLM_MODELS in the config.py configuration file contains the expected models. Currently, all supported models are as follows (the jittorllms series currently only supports the docker solution):
-AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm", "newbing", "moss"] # + ["jittorllms_rwkv", "jittorllms_pangualpha", "jittorllms_llama"]
-```
-
-
-
-
-
-
-4. Run
-```sh
-python main.py
-```
-
-5. Testing Function Plugin
-```
-- Testing function plugin template function (requires GPT to answer what happened in history today), you can use this function as a template to implement more complex functions
- Click "[Function plugin Template Demo] On this day in history"
-```
-
-## Installation - Method 2: Using Docker
-
-1. ChatGPT only (recommended for most people)
-
-``` sh
-git clone https://github.com/binary-husky/gpt_academic.git # download the project
-cd gpt_academic # enter the path
-nano config.py # edit config.py with any text editor to configure "Proxy", "API_KEY", and "WEB_PORT" (e.g., 50923)
-docker build -t gpt-academic . # install
-
-# (Last step-Option 1) In a Linux environment, using `--net=host` is more convenient and faster
-docker run --rm -it --net=host gpt-academic
-# (Last step-Option 2) In a macOS/Windows environment, the -p option must be used to expose the container's port (e.g., 50923) to a port on the host
-docker run --rm -it -e WEB_PORT=50923 -p 50923:50923 gpt-academic
-```
-
-2. ChatGPT + ChatGLM + MOSS (requires familiarity with Docker)
-
-``` sh
-# Edit docker-compose.yml, delete solutions 1 and 3, and keep solution 2. Modify the configuration of solution 2 in docker-compose.yml, refer to the comments in it
-docker-compose up
-```
-
-3. ChatGPT + LLAMA + PanGu + RWKV (requires familiarity with Docker)
-``` sh
-# Edit docker-compose.yml, delete solutions 1 and 2, and keep solution 3. Modify the configuration of solution 3 in docker-compose.yml, refer to the comments in it
-docker-compose up
-```
-
-
-## Installation - Method 3: Other Deployment Methods
-
-1. How to use reverse proxy URL/Microsoft Azure API
-Configure API_URL_REDIRECT according to the instructions in `config.py`.
-
-2. Remote Cloud Server Deployment (Requires Knowledge and Experience of Cloud Servers)
-Please visit [Deployment Wiki-1](https://github.com/binary-husky/gpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97)
-
-3. Using WSL2 (Windows Subsystem for Linux)
-Please visit [Deployment Wiki-2](https://github.com/binary-husky/gpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)
-
-4. How to run at the secondary URL (such as `http://localhost/subpath`)
-Please visit [FastAPI Operation Instructions](docs/WithFastapi.md)
-
-5. Using docker-compose to run
-Please read docker-compose.yml and follow the prompts to operate.
-
----
-# Advanced Usage
-## Customize new convenient buttons / custom function plugins
-
-1. Customize new convenient buttons (academic shortcuts)
-Open `core_functional.py` with any text editor, add an entry as follows, and then restart the program. (If the button has been added successfully and is visible, both prefixes and suffixes can be hot-modified without having to restart the program.)
-For example:
-```
-"Super English to Chinese": {
- # Prefix, will be added before your input. For example, describe your requirements, such as translation, code interpretation, polishing, etc.
- "Prefix": "Please translate the following content into Chinese, and then explain each proper noun that appears in the text with a markdown table:\n\n",
-
- # Suffix, will be added after your input. For example, with the prefix, you can enclose your input content in quotes.
- "Suffix": "",
-},
-```
-
-

-
-
-2. Custom function plugin
-
-Write powerful function plugins to perform any task you can and can't imagine.
-The difficulty of debugging and writing plugins in this project is very low. As long as you have a certain knowledge of python, you can implement your own plugin function by imitating the template we provide.
-Please refer to the [Function Plugin Guide](https://github.com/binary-husky/gpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97) for details.
-
----
-# Latest Update
-## New feature dynamic
-
-1. Сохранение диалогов. Вызовите "Сохранить текущий диалог" в разделе функций-плагина, чтобы сохранить текущий диалог как файл HTML, который можно прочитать и восстановить. Кроме того, вызовите «Загрузить архив истории диалога» в меню функций-плагина, чтобы восстановить предыдущую сессию. Совет: если нажать кнопку "Загрузить исторический архив диалога" без указания файла, можно просмотреть кэш исторических файлов HTML. Щелкните "Удалить все локальные записи истории диалогов", чтобы удалить все файловые кэши HTML.
-
-2. Создание отчетов. Большинство плагинов создают рабочий отчет после завершения выполнения.
-
-3. Модульный дизайн функций, простой интерфейс, но сильный функционал.
-
-4. Это проект с открытым исходным кодом, который может «сам переводить себя».
-
-5. Перевод других проектов с открытым исходным кодом - это не проблема.
-
-6. Мелкие функции декорирования [live2d](https://github.com/fghrsh/live2d_demo) (по умолчанию отключены, нужно изменить `config.py`).
-
-7. Поддержка большой языковой модели MOSS.
-
-8. Генерация изображений с помощью OpenAI.
-
-9. Анализ и подведение итогов аудиофайлов с помощью OpenAI.
-
-10. Полный цикл проверки правописания с использованием LaTeX.
-
-## Версии:
-- Версия 3.5 (Todo): использование естественного языка для вызова функций-плагинов проекта (высокий приоритет)
-- Версия 3.4 (Todo): улучшение многопоточной поддержки локальных больших моделей чата.
-- Версия 3.3: добавлена функция объединения интернет-информации.
-- Версия 3.2: функции-плагины поддерживают большое количество параметров (сохранение диалогов, анализирование любого языка программирования и одновременное запрос LLM-групп).
-- Версия 3.1: поддержка одновременного запроса нескольких моделей GPT! Поддержка api2d, сбалансированное распределение нагрузки по нескольким ключам api.
-- Версия 3.0: поддержка chatglm и других небольших LLM.
-- Версия 2.6: перестройка структуры плагинов, улучшение интерактивности, добавлено больше плагинов.
-- Версия 2.5: автоматическое обновление для решения проблемы длинного текста и переполнения токенов при обработке больших проектов.
-- Версия 2.4: (1) добавлена функция полного перевода PDF; (2) добавлена функция переключения положения ввода; (3) добавлена опция вертикального макета; (4) оптимизация многопоточности плагинов.
-- Версия 2.3: улучшение многопоточной интерактивности.
-- Версия 2.2: функции-плагины поддерживают горячую перезагрузку.
-- Версия 2.1: раскрывающийся макет.
-- Версия 2.0: использование модульных функций-плагинов.
-- Версия 1.0: базовые функции.
-
-gpt_academic Разработчик QQ-группы-2: 610599535
-
-- Известные проблемы
- - Некоторые плагины перевода в браузерах мешают работе фронтенда этого программного обеспечения
- - Высокая или низкая версия gradio может вызвать множество исключений
-
-## Ссылки и учебные материалы
-
-```
-Мы использовали многие концепты кода из других отличных проектов, включая:
-
-# Проект 1: Qinghua ChatGLM-6B:
-https://github.com/THUDM/ChatGLM-6B
-
-# Проект 2: Qinghua JittorLLMs:
-https://github.com/Jittor/JittorLLMs
-
-# Проект 3: Edge-GPT:
-https://github.com/acheong08/EdgeGPT
-
-# Проект 4: Chuanhu ChatGPT:
-https://github.com/GaiZhenbiao/ChuanhuChatGPT
-
-# Проект 5: ChatPaper:
-https://github.com/kaixindelele/ChatPaper
-
-# Больше:
-https://github.com/gradio-app/gradio
-https://github.com/fghrsh/live2d_demo
-```
\ No newline at end of file
diff --git a/docs/WithFastapi.md b/docs/WithFastapi.md
index 188b527..bbbb386 100644
--- a/docs/WithFastapi.md
+++ b/docs/WithFastapi.md
@@ -16,7 +16,7 @@ nano config.py
+ demo.queue(concurrency_count=CONCURRENT_COUNT)
- # 如果需要在二级路径下运行
- - # CUSTOM_PATH, = get_conf('CUSTOM_PATH')
+ - # CUSTOM_PATH = get_conf('CUSTOM_PATH')
- # if CUSTOM_PATH != "/":
- # from toolbox import run_gradio_in_subpath
- # run_gradio_in_subpath(demo, auth=AUTHENTICATION, port=PORT, custom_path=CUSTOM_PATH)
@@ -24,7 +24,7 @@ nano config.py
- # demo.launch(server_name="0.0.0.0", server_port=PORT, auth=AUTHENTICATION, favicon_path="docs/logo.png")
+ 如果需要在二级路径下运行
- + CUSTOM_PATH, = get_conf('CUSTOM_PATH')
+ + CUSTOM_PATH = get_conf('CUSTOM_PATH')
+ if CUSTOM_PATH != "/":
+ from toolbox import run_gradio_in_subpath
+ run_gradio_in_subpath(demo, auth=AUTHENTICATION, port=PORT, custom_path=CUSTOM_PATH)
diff --git a/docs/self_analysis.md b/docs/self_analysis.md
index ebc2337..0b76c7b 100644
--- a/docs/self_analysis.md
+++ b/docs/self_analysis.md
@@ -38,20 +38,20 @@
| crazy_functions\读文章写摘要.py | 对论文进行解析和全文摘要生成 |
| crazy_functions\谷歌检索小助手.py | 提供谷歌学术搜索页面中相关文章的元数据信息。 |
| crazy_functions\高级功能函数模板.py | 使用Unsplash API发送相关图片以回复用户的输入。 |
-| request_llm\bridge_all.py | 基于不同LLM模型进行对话。 |
-| request_llm\bridge_chatglm.py | 使用ChatGLM模型生成回复,支持单线程和多线程方式。 |
-| request_llm\bridge_chatgpt.py | 基于GPT模型完成对话。 |
-| request_llm\bridge_jittorllms_llama.py | 使用JittorLLMs模型完成对话,支持单线程和多线程方式。 |
-| request_llm\bridge_jittorllms_pangualpha.py | 使用JittorLLMs模型完成对话,基于多进程和多线程方式。 |
-| request_llm\bridge_jittorllms_rwkv.py | 使用JittorLLMs模型完成聊天功能,提供包括历史信息、参数调节等在内的多个功能选项。 |
-| request_llm\bridge_moss.py | 加载Moss模型完成对话功能。 |
-| request_llm\bridge_newbing.py | 使用Newbing聊天机器人进行对话,支持单线程和多线程方式。 |
-| request_llm\bridge_newbingfree.py | 基于Bing chatbot API实现聊天机器人的文本生成功能。 |
-| request_llm\bridge_stackclaude.py | 基于Slack API实现Claude与用户的交互。 |
-| request_llm\bridge_tgui.py | 通过websocket实现聊天机器人与UI界面交互。 |
-| request_llm\edge_gpt.py | 调用Bing chatbot API提供聊天机器人服务。 |
-| request_llm\edge_gpt_free.py | 实现聊天机器人API,采用aiohttp和httpx工具库。 |
-| request_llm\test_llms.py | 对llm模型进行单元测试。 |
+| request_llms\bridge_all.py | 基于不同LLM模型进行对话。 |
+| request_llms\bridge_chatglm.py | 使用ChatGLM模型生成回复,支持单线程和多线程方式。 |
+| request_llms\bridge_chatgpt.py | 基于GPT模型完成对话。 |
+| request_llms\bridge_jittorllms_llama.py | 使用JittorLLMs模型完成对话,支持单线程和多线程方式。 |
+| request_llms\bridge_jittorllms_pangualpha.py | 使用JittorLLMs模型完成对话,基于多进程和多线程方式。 |
+| request_llms\bridge_jittorllms_rwkv.py | 使用JittorLLMs模型完成聊天功能,提供包括历史信息、参数调节等在内的多个功能选项。 |
+| request_llms\bridge_moss.py | 加载Moss模型完成对话功能。 |
+| request_llms\bridge_newbing.py | 使用Newbing聊天机器人进行对话,支持单线程和多线程方式。 |
+| request_llms\bridge_newbingfree.py | 基于Bing chatbot API实现聊天机器人的文本生成功能。 |
+| request_llms\bridge_stackclaude.py | 基于Slack API实现Claude与用户的交互。 |
+| request_llms\bridge_tgui.py | 通过websocket实现聊天机器人与UI界面交互。 |
+| request_llms\edge_gpt.py | 调用Bing chatbot API提供聊天机器人服务。 |
+| request_llms\edge_gpt_free.py | 实现聊天机器人API,采用aiohttp和httpx工具库。 |
+| request_llms\test_llms.py | 对llm模型进行单元测试。 |
## 接下来请你逐文件分析下面的工程[0/48] 请对下面的程序文件做一个概述: check_proxy.py
@@ -129,7 +129,7 @@ toolbox.py是一个工具类库,其中主要包含了一些函数装饰器和
1. `input_clipping`: 该函数用于裁剪输入文本长度,使其不超过一定的限制。
2. `request_gpt_model_in_new_thread_with_ui_alive`: 该函数用于请求 GPT 模型并保持用户界面的响应,支持多线程和实时更新用户界面。
-这两个函数都依赖于从 `toolbox` 和 `request_llm` 中导入的一些工具函数。函数的输入和输出有详细的描述文档。
+这两个函数都依赖于从 `toolbox` 和 `request_llms` 中导入的一些工具函数。函数的输入和输出有详细的描述文档。
## [12/48] 请对下面的程序文件做一个概述: crazy_functions\Latex全文润色.py
@@ -137,7 +137,7 @@ toolbox.py是一个工具类库,其中主要包含了一些函数装饰器和
## [13/48] 请对下面的程序文件做一个概述: crazy_functions\Latex全文翻译.py
-这个文件包含两个函数 `Latex英译中` 和 `Latex中译英`,它们都会对整个Latex项目进行翻译。这个文件还包含一个类 `PaperFileGroup`,它拥有一个方法 `run_file_split`,用于把长文本文件分成多个短文件。其中使用了工具库 `toolbox` 中的一些函数和从 `request_llm` 中导入了 `model_info`。接下来的函数把文件读取进来,把它们的注释删除,进行分割,并进行翻译。这个文件还包括了一些异常处理和界面更新的操作。
+这个文件包含两个函数 `Latex英译中` 和 `Latex中译英`,它们都会对整个Latex项目进行翻译。这个文件还包含一个类 `PaperFileGroup`,它拥有一个方法 `run_file_split`,用于把长文本文件分成多个短文件。其中使用了工具库 `toolbox` 中的一些函数和从 `request_llms` 中导入了 `model_info`。接下来的函数把文件读取进来,把它们的注释删除,进行分割,并进行翻译。这个文件还包括了一些异常处理和界面更新的操作。
## [14/48] 请对下面的程序文件做一个概述: crazy_functions\__init__.py
@@ -217,7 +217,7 @@ toolbox.py是一个工具类库,其中主要包含了一些函数装饰器和
## [31/48] 请对下面的程序文件做一个概述: crazy_functions\读文章写摘要.py
-这个程序文件是一个Python模块,文件名为crazy_functions\读文章写摘要.py。该模块包含了两个函数,其中主要函数是"读文章写摘要"函数,其实现了解析给定文件夹中的tex文件,对其中每个文件的内容进行摘要生成,并根据各论文片段的摘要,最终生成全文摘要。第二个函数是"解析Paper"函数,用于解析单篇论文文件。其中用到了一些工具函数和库,如update_ui、CatchException、report_execption、write_results_to_file等。
+这个程序文件是一个Python模块,文件名为crazy_functions\读文章写摘要.py。该模块包含了两个函数,其中主要函数是"读文章写摘要"函数,其实现了解析给定文件夹中的tex文件,对其中每个文件的内容进行摘要生成,并根据各论文片段的摘要,最终生成全文摘要。第二个函数是"解析Paper"函数,用于解析单篇论文文件。其中用到了一些工具函数和库,如update_ui、CatchException、report_exception、write_results_to_file等。
## [32/48] 请对下面的程序文件做一个概述: crazy_functions\谷歌检索小助手.py
@@ -227,19 +227,19 @@ toolbox.py是一个工具类库,其中主要包含了一些函数装饰器和
该程序文件定义了一个名为高阶功能模板函数的函数,该函数接受多个参数,包括输入的文本、gpt模型参数、插件模型参数、聊天显示框的句柄、聊天历史等,并利用送出请求,使用 Unsplash API 发送相关图片。其中,为了避免输入溢出,函数会在开始时清空历史。函数也有一些 UI 更新的语句。该程序文件还依赖于其他两个模块:CatchException 和 update_ui,以及一个名为 request_gpt_model_in_new_thread_with_ui_alive 的来自 crazy_utils 模块(应该是自定义的工具包)的函数。
-## [34/48] 请对下面的程序文件做一个概述: request_llm\bridge_all.py
+## [34/48] 请对下面的程序文件做一个概述: request_llms\bridge_all.py
该文件包含两个函数:predict和predict_no_ui_long_connection,用于基于不同的LLM模型进行对话。该文件还包含一个lazyloadTiktoken类和一个LLM_CATCH_EXCEPTION修饰器函数。其中lazyloadTiktoken类用于懒加载模型的tokenizer,LLM_CATCH_EXCEPTION用于错误处理。整个文件还定义了一些全局变量和模型信息字典,用于引用和配置LLM模型。
-## [35/48] 请对下面的程序文件做一个概述: request_llm\bridge_chatglm.py
+## [35/48] 请对下面的程序文件做一个概述: request_llms\bridge_chatglm.py
这是一个Python程序文件,名为`bridge_chatglm.py`,其中定义了一个名为`GetGLMHandle`的类和三个方法:`predict_no_ui_long_connection`、 `predict`和 `stream_chat`。该文件依赖于多个Python库,如`transformers`和`sentencepiece`。该文件实现了一个聊天机器人,使用ChatGLM模型来生成回复,支持单线程和多线程方式。程序启动时需要加载ChatGLM的模型和tokenizer,需要一段时间。在配置文件`config.py`中设置参数会影响模型的内存和显存使用,因此程序可能会导致低配计算机卡死。
-## [36/48] 请对下面的程序文件做一个概述: request_llm\bridge_chatgpt.py
+## [36/48] 请对下面的程序文件做一个概述: request_llms\bridge_chatgpt.py
-该文件为 Python 代码文件,文件名为 request_llm\bridge_chatgpt.py。该代码文件主要提供三个函数:predict、predict_no_ui和 predict_no_ui_long_connection,用于发送至 chatGPT 并等待回复,获取输出。该代码文件还包含一些辅助函数,用于处理连接异常、生成 HTTP 请求等。该文件的代码架构清晰,使用了多个自定义函数和模块。
+该文件为 Python 代码文件,文件名为 request_llms\bridge_chatgpt.py。该代码文件主要提供三个函数:predict、predict_no_ui和 predict_no_ui_long_connection,用于发送至 chatGPT 并等待回复,获取输出。该代码文件还包含一些辅助函数,用于处理连接异常、生成 HTTP 请求等。该文件的代码架构清晰,使用了多个自定义函数和模块。
-## [37/48] 请对下面的程序文件做一个概述: request_llm\bridge_jittorllms_llama.py
+## [37/48] 请对下面的程序文件做一个概述: request_llms\bridge_jittorllms_llama.py
该代码文件实现了一个聊天机器人,其中使用了 JittorLLMs 模型。主要包括以下几个部分:
1. GetGLMHandle 类:一个进程类,用于加载 JittorLLMs 模型并接收并处理请求。
@@ -248,17 +248,17 @@ toolbox.py是一个工具类库,其中主要包含了一些函数装饰器和
这个文件中还有一些辅助函数和全局变量,例如 importlib、time、threading 等。
-## [38/48] 请对下面的程序文件做一个概述: request_llm\bridge_jittorllms_pangualpha.py
+## [38/48] 请对下面的程序文件做一个概述: request_llms\bridge_jittorllms_pangualpha.py
这个文件是为了实现使用jittorllms(一种机器学习模型)来进行聊天功能的代码。其中包括了模型加载、模型的参数加载、消息的收发等相关操作。其中使用了多进程和多线程来提高性能和效率。代码中还包括了处理依赖关系的函数和预处理函数等。
-## [39/48] 请对下面的程序文件做一个概述: request_llm\bridge_jittorllms_rwkv.py
+## [39/48] 请对下面的程序文件做一个概述: request_llms\bridge_jittorllms_rwkv.py
这个文件是一个Python程序,文件名为request_llm\bridge_jittorllms_rwkv.py。它依赖transformers、time、threading、importlib、multiprocessing等库。在文件中,通过定义GetGLMHandle类加载jittorllms模型参数和定义stream_chat方法来实现与jittorllms模型的交互。同时,该文件还定义了predict_no_ui_long_connection和predict方法来处理历史信息、调用jittorllms模型、接收回复信息并输出结果。
-## [40/48] 请对下面的程序文件做一个概述: request_llm\bridge_moss.py
+## [40/48] 请对下面的程序文件做一个概述: request_llms\bridge_moss.py
-该文件为一个Python源代码文件,文件名为 request_llm\bridge_moss.py。代码定义了一个 GetGLMHandle 类和两个函数 predict_no_ui_long_connection 和 predict。
+该文件为一个Python源代码文件,文件名为 request_llms\bridge_moss.py。代码定义了一个 GetGLMHandle 类和两个函数 predict_no_ui_long_connection 和 predict。
GetGLMHandle 类继承自Process类(多进程),主要功能是启动一个子进程并加载 MOSS 模型参数,通过 Pipe 进行主子进程的通信。该类还定义了 check_dependency、moss_init、run 和 stream_chat 等方法,其中 check_dependency 和 moss_init 是子进程的初始化方法,run 是子进程运行方法,stream_chat 实现了主进程和子进程的交互过程。
@@ -266,7 +266,7 @@ GetGLMHandle 类继承自Process类(多进程),主要功能是启动一个
函数 predict 是单线程方法,通过调用 update_ui 将交互过程中 MOSS 的回复实时更新到UI(User Interface)中,并执行一个 named function(additional_fn)指定的函数对输入进行预处理。
-## [41/48] 请对下面的程序文件做一个概述: request_llm\bridge_newbing.py
+## [41/48] 请对下面的程序文件做一个概述: request_llms\bridge_newbing.py
这是一个名为`bridge_newbing.py`的程序文件,包含三个部分:
@@ -276,11 +276,11 @@ GetGLMHandle 类继承自Process类(多进程),主要功能是启动一个
第三部分定义了一个名为`newbing_handle`的全局变量,并导出了`predict_no_ui_long_connection`和`predict`这两个方法,以供其他程序可以调用。
-## [42/48] 请对下面的程序文件做一个概述: request_llm\bridge_newbingfree.py
+## [42/48] 请对下面的程序文件做一个概述: request_llms\bridge_newbingfree.py
这个Python文件包含了三部分内容。第一部分是来自edge_gpt_free.py文件的聊天机器人程序。第二部分是子进程Worker,用于调用主体。第三部分提供了两个函数:predict_no_ui_long_connection和predict用于调用NewBing聊天机器人和返回响应。其中predict函数还提供了一些参数用于控制聊天机器人的回复和更新UI界面。
-## [43/48] 请对下面的程序文件做一个概述: request_llm\bridge_stackclaude.py
+## [43/48] 请对下面的程序文件做一个概述: request_llms\bridge_stackclaude.py
这是一个Python源代码文件,文件名为request_llm\bridge_stackclaude.py。代码分为三个主要部分:
@@ -290,21 +290,21 @@ GetGLMHandle 类继承自Process类(多进程),主要功能是启动一个
第三部分定义了predict_no_ui_long_connection和predict两个函数,主要用于通过调用ClaudeHandle对象的stream_chat方法来获取Claude的回复,并更新ui以显示相关信息。其中predict函数采用单线程方法,而predict_no_ui_long_connection函数使用多线程方法。
-## [44/48] 请对下面的程序文件做一个概述: request_llm\bridge_tgui.py
+## [44/48] 请对下面的程序文件做一个概述: request_llms\bridge_tgui.py
该文件是一个Python代码文件,名为request_llm\bridge_tgui.py。它包含了一些函数用于与chatbot UI交互,并通过WebSocket协议与远程LLM模型通信完成文本生成任务,其中最重要的函数是predict()和predict_no_ui_long_connection()。这个程序还有其他的辅助函数,如random_hash()。整个代码文件在协作的基础上完成了一次修改。
-## [45/48] 请对下面的程序文件做一个概述: request_llm\edge_gpt.py
+## [45/48] 请对下面的程序文件做一个概述: request_llms\edge_gpt.py
该文件是一个用于调用Bing chatbot API的Python程序,它由多个类和辅助函数构成,可以根据给定的对话连接在对话中提出问题,使用websocket与远程服务通信。程序实现了一个聊天机器人,可以为用户提供人工智能聊天。
-## [46/48] 请对下面的程序文件做一个概述: request_llm\edge_gpt_free.py
+## [46/48] 请对下面的程序文件做一个概述: request_llms\edge_gpt_free.py
该代码文件为一个会话API,可通过Chathub发送消息以返回响应。其中使用了 aiohttp 和 httpx 库进行网络请求并发送。代码中包含了一些函数和常量,多数用于生成请求数据或是请求头信息等。同时该代码文件还包含了一个 Conversation 类,调用该类可实现对话交互。
-## [47/48] 请对下面的程序文件做一个概述: request_llm\test_llms.py
+## [47/48] 请对下面的程序文件做一个概述: request_llms\test_llms.py
-这个文件是用于对llm模型进行单元测试的Python程序。程序导入一个名为"request_llm.bridge_newbingfree"的模块,然后三次使用该模块中的predict_no_ui_long_connection()函数进行预测,并输出结果。此外,还有一些注释掉的代码段,这些代码段也是关于模型预测的。
+这个文件是用于对llm模型进行单元测试的Python程序。程序导入一个名为"request_llms.bridge_newbingfree"的模块,然后三次使用该模块中的predict_no_ui_long_connection()函数进行预测,并输出结果。此外,还有一些注释掉的代码段,这些代码段也是关于模型预测的。
## 用一张Markdown表格简要描述以下文件的功能:
check_proxy.py, colorful.py, config.py, config_private.py, core_functional.py, crazy_functional.py, main.py, multi_language.py, theme.py, toolbox.py, crazy_functions\crazy_functions_test.py, crazy_functions\crazy_utils.py, crazy_functions\Latex全文润色.py, crazy_functions\Latex全文翻译.py, crazy_functions\__init__.py, crazy_functions\下载arxiv论文翻译摘要.py。根据以上分析,用一句话概括程序的整体功能。
@@ -355,24 +355,24 @@ crazy_functions\代码重写为全英文_多线程.py, crazy_functions\图片生
概括程序的整体功能:提供了一系列处理文本、文件和代码的功能,使用了各类语言模型、多线程、网络请求和数据解析技术来提高效率和精度。
## 用一张Markdown表格简要描述以下文件的功能:
-crazy_functions\谷歌检索小助手.py, crazy_functions\高级功能函数模板.py, request_llm\bridge_all.py, request_llm\bridge_chatglm.py, request_llm\bridge_chatgpt.py, request_llm\bridge_jittorllms_llama.py, request_llm\bridge_jittorllms_pangualpha.py, request_llm\bridge_jittorllms_rwkv.py, request_llm\bridge_moss.py, request_llm\bridge_newbing.py, request_llm\bridge_newbingfree.py, request_llm\bridge_stackclaude.py, request_llm\bridge_tgui.py, request_llm\edge_gpt.py, request_llm\edge_gpt_free.py, request_llm\test_llms.py。根据以上分析,用一句话概括程序的整体功能。
+crazy_functions\谷歌检索小助手.py, crazy_functions\高级功能函数模板.py, request_llms\bridge_all.py, request_llms\bridge_chatglm.py, request_llms\bridge_chatgpt.py, request_llms\bridge_jittorllms_llama.py, request_llms\bridge_jittorllms_pangualpha.py, request_llms\bridge_jittorllms_rwkv.py, request_llms\bridge_moss.py, request_llms\bridge_newbing.py, request_llms\bridge_newbingfree.py, request_llms\bridge_stackclaude.py, request_llms\bridge_tgui.py, request_llms\edge_gpt.py, request_llms\edge_gpt_free.py, request_llms\test_llms.py。根据以上分析,用一句话概括程序的整体功能。
| 文件名 | 功能描述 |
| --- | --- |
| crazy_functions\谷歌检索小助手.py | 提供谷歌学术搜索页面中相关文章的元数据信息。 |
| crazy_functions\高级功能函数模板.py | 使用Unsplash API发送相关图片以回复用户的输入。 |
-| request_llm\bridge_all.py | 基于不同LLM模型进行对话。 |
-| request_llm\bridge_chatglm.py | 使用ChatGLM模型生成回复,支持单线程和多线程方式。 |
-| request_llm\bridge_chatgpt.py | 基于GPT模型完成对话。 |
-| request_llm\bridge_jittorllms_llama.py | 使用JittorLLMs模型完成对话,支持单线程和多线程方式。 |
-| request_llm\bridge_jittorllms_pangualpha.py | 使用JittorLLMs模型完成对话,基于多进程和多线程方式。 |
-| request_llm\bridge_jittorllms_rwkv.py | 使用JittorLLMs模型完成聊天功能,提供包括历史信息、参数调节等在内的多个功能选项。 |
-| request_llm\bridge_moss.py | 加载Moss模型完成对话功能。 |
-| request_llm\bridge_newbing.py | 使用Newbing聊天机器人进行对话,支持单线程和多线程方式。 |
-| request_llm\bridge_newbingfree.py | 基于Bing chatbot API实现聊天机器人的文本生成功能。 |
-| request_llm\bridge_stackclaude.py | 基于Slack API实现Claude与用户的交互。 |
-| request_llm\bridge_tgui.py | 通过websocket实现聊天机器人与UI界面交互。 |
-| request_llm\edge_gpt.py | 调用Bing chatbot API提供聊天机器人服务。 |
-| request_llm\edge_gpt_free.py | 实现聊天机器人API,采用aiohttp和httpx工具库。 |
-| request_llm\test_llms.py | 对llm模型进行单元测试。 |
+| request_llms\bridge_all.py | 基于不同LLM模型进行对话。 |
+| request_llms\bridge_chatglm.py | 使用ChatGLM模型生成回复,支持单线程和多线程方式。 |
+| request_llms\bridge_chatgpt.py | 基于GPT模型完成对话。 |
+| request_llms\bridge_jittorllms_llama.py | 使用JittorLLMs模型完成对话,支持单线程和多线程方式。 |
+| request_llms\bridge_jittorllms_pangualpha.py | 使用JittorLLMs模型完成对话,基于多进程和多线程方式。 |
+| request_llms\bridge_jittorllms_rwkv.py | 使用JittorLLMs模型完成聊天功能,提供包括历史信息、参数调节等在内的多个功能选项。 |
+| request_llms\bridge_moss.py | 加载Moss模型完成对话功能。 |
+| request_llms\bridge_newbing.py | 使用Newbing聊天机器人进行对话,支持单线程和多线程方式。 |
+| request_llms\bridge_newbingfree.py | 基于Bing chatbot API实现聊天机器人的文本生成功能。 |
+| request_llms\bridge_stackclaude.py | 基于Slack API实现Claude与用户的交互。 |
+| request_llms\bridge_tgui.py | 通过websocket实现聊天机器人与UI界面交互。 |
+| request_llms\edge_gpt.py | 调用Bing chatbot API提供聊天机器人服务。 |
+| request_llms\edge_gpt_free.py | 实现聊天机器人API,采用aiohttp和httpx工具库。 |
+| request_llms\test_llms.py | 对llm模型进行单元测试。 |
| 程序整体功能 | 实现不同种类的聊天机器人,可以根据输入进行文本生成。 |
diff --git a/docs/translate_english.json b/docs/translate_english.json
index c13ac81..955dcaf 100644
--- a/docs/translate_english.json
+++ b/docs/translate_english.json
@@ -265,7 +265,7 @@
"例如chatglm&gpt-3.5-turbo&api2d-gpt-4": "e.g. chatglm&gpt-3.5-turbo&api2d-gpt-4",
"先切换模型到openai或api2d": "Switch the model to openai or api2d first",
"在这里输入分辨率": "Enter the resolution here",
- "如256x256": "e.g. 256x256",
+ "如1024x1024": "e.g. 1024x1024",
"默认": "Default",
"建议您复制一个config_private.py放自己的秘密": "We suggest you to copy a config_private.py file to keep your secrets, such as API and proxy URLs, from being accidentally uploaded to Github and seen by others.",
"如API和代理网址": "Such as API and proxy URLs",
@@ -430,7 +430,6 @@
"并显示到聊天当中": "And display it in the chat",
"插件调度异常": "Plugin scheduling exception",
"异常原因": "Exception reason",
- "实验性函数调用出错": "Experimental function call error",
"当前代理可用性": "Current proxy availability",
"异常": "Exception",
"将文本按照段落分隔符分割开": "Split the text into paragraphs according to the paragraph separator",
@@ -502,7 +501,8 @@
"环境变量": "Environment variable",
"不支持通过环境变量设置!": "Setting through environment variables is not supported!",
"加载失败!": "Loading failed!",
- "成功读取环境变量": "Successfully read environment variables",
+ "如": " e.g., ",
+ "成功读取环境变量": "Successfully read environment variable: ",
"本项目现已支持OpenAI和API2D的api-key": "This project now supports api-keys for OpenAI and API2D",
"也支持同时填写多个api-key": "It also supports filling in multiple api-keys at the same time",
"您既可以在config.py中修改api-key": "You can modify the api-key in config.py",
@@ -513,7 +513,7 @@
"请在config文件中修改API密钥之后再运行": "Please modify the API key in the config file before running",
"网络代理状态": "Network proxy status",
"未配置": "Not configured",
- "无代理状态下很可能无法访问OpenAI家族的模型": "It is very likely that you cannot access OpenAI's models without a proxy",
+ "无代理状态下很可能无法访问OpenAI家族的模型": "",
"建议": "Suggestion",
"检查USE_PROXY选项是否修改": "Check if the USE_PROXY option has been modified",
"已配置": "Configured",
@@ -1184,7 +1184,7 @@
"Call ChatGLM fail 不能正常加载ChatGLM的参数": "Call ChatGLM fail, unable to load parameters for ChatGLM",
"不能正常加载ChatGLM的参数!": "Unable to load parameters for ChatGLM!",
"多线程方法": "Multithreading method",
- "函数的说明请见 request_llm/bridge_all.py": "For function details, please see request_llm/bridge_all.py",
+ "函数的说明请见 request_llms/bridge_all.py": "For function details, please see request_llms/bridge_all.py",
"程序终止": "Program terminated",
"单线程方法": "Single-threaded method",
"等待ChatGLM响应中": "Waiting for response from ChatGLM",
@@ -1543,7 +1543,7 @@
"str类型": "str type",
"所有音频都总结完成了吗": "Are all audio summaries completed?",
"SummaryAudioVideo内容": "SummaryAudioVideo content",
- "使用教程详情见 request_llm/README.md": "See request_llm/README.md for detailed usage instructions",
+ "使用教程详情见 request_llms/README.md": "See request_llms/README.md for detailed usage instructions",
"删除中间文件夹": "Delete intermediate folder",
"Claude组件初始化成功": "Claude component initialized successfully",
"$c$ 是光速": "$c$ is the speed of light",
@@ -2788,5 +2788,120 @@
"加载已保存": "Load saved",
"打开浏览器页面": "Open browser page",
"解锁插件": "Unlock plugin",
- "如果话筒激活 / 如果处于回声收尾阶段": "If the microphone is active / If it is in the echo tail stage"
+ "如果话筒激活 / 如果处于回声收尾阶段": "If the microphone is active / If it is in the echo tail stage",
+ "分辨率": "Resolution",
+ "分析行业动态": "Analyze industry trends",
+ "在项目实施过程中提供支持": "Provide support during project implementation",
+ "azure 对齐支持 -=-=-=-=-=-=-": "Azure alignment support -=-=-=-=-=-=-",
+ "默认的系统提示词": "Default system prompts",
+ "为您解释复杂的技术概念": "Explain complex technical concepts to you",
+ "提供项目管理和协作建议": "Provide project management and collaboration advice",
+ "请从AVAIL_LLM_MODELS中选择": "Please select from AVAIL_LLM_MODELS",
+ "提高编程能力": "Improve programming skills",
+ "请注意Newbing组件已不再维护": "Please note that the Newbing component is no longer maintained",
+ "用于定义和切换多个azure模型 --": "Used to define and switch between multiple Azure models --",
+ "支持 256x256": "Supports 256x256",
+ "定义界面上“询问多个GPT模型”插件应该使用哪些模型": "Define which models the 'Ask multiple GPT models' plugin should use on the interface",
+ "必须是.png格式": "Must be in .png format",
+ "tokenizer只用于粗估token数量": "The tokenizer is only used to estimate the number of tokens",
+ "协助您进行文案策划和内容创作": "Assist you in copywriting and content creation",
+ "帮助您巩固编程基础": "Help you consolidate your programming foundation",
+ "修改需求": "Modify requirements",
+ "确保项目顺利进行": "Ensure the smooth progress of the project",
+ "帮助您了解市场发展和竞争态势": "Help you understand market development and competitive situation",
+ "不需要动态切换": "No need for dynamic switching",
+ "解答您在学习过程中遇到的问题": "Answer the questions you encounter during the learning process",
+ "Endpoint不正确": "Endpoint is incorrect",
+ "提供编程思路和建议": "Provide programming ideas and suggestions",
+ "先上传图片": "Upload the image first",
+ "提供计算机科学、数据科学、人工智能等相关领域的学习资源和建议": "Provide learning resources and advice in computer science, data science, artificial intelligence, and other related fields",
+ "提供写作建议和技巧": "Provide writing advice and tips",
+ "间隔": "Interval",
+ "此后不需要在此处添加api2d的接口了": "No need to add the api2d interface here anymore",
+ "4. 学习辅导": "4. Learning guidance",
+ "智谱AI大模型": "Zhipu AI large model",
+ "3. 项目支持": "3. Project support",
+ "但这是意料之中的": "But this is expected",
+ "检查endpoint是否可用": "Check if the endpoint is available",
+ "接入智谱大模型": "Access the intelligent spectrum model",
+ "如果您有任何问题或需要解答的议题": "If you have any questions or topics that need answers",
+ "api2d 对齐支持 -=-=-=-=-=-=-": "api2d alignment support -=-=-=-=-=-=-",
+ "支持多线程": "Support multi-threading",
+ "再输入修改需求": "Enter modification requirements again",
+ "Endpoint不满足要求": "Endpoint does not meet the requirements",
+ "检查endpoint是否合法": "Check if the endpoint is valid",
+ "为您制定技术战略提供参考和建议": "Provide reference and advice for developing your technical strategy",
+ "支持 1024x1024": "Support 1024x1024",
+ "因为下面的代码会自动添加": "Because the following code will be automatically added",
+ "尝试加载模型": "Try to load the model",
+ "使用DALLE3生成图片 | 输入参数字符串": "Use DALLE3 to generate images | Input parameter string",
+ "当前论文无需解析": "The current paper does not need to be parsed",
+ "单个azure模型部署": "Deploy a single Azure model",
+ "512x512 或 1024x1024": "512x512 or 1024x1024",
+ "至少是8k上下文的模型": "A model with at least 8k context",
+ "自动忽略重复的输入": "Automatically ignore duplicate inputs",
+ "让您更好地掌握知识": "Help you better grasp knowledge",
+ "文件列表": "File list",
+ "并在不同模型之间用": "And use it between different models",
+ "插件调用出错": "Plugin call error",
+ "帮助您撰写文章、报告、散文、故事等": "Help you write articles, reports, essays, stories, etc.",
+ "*实验性功能*": "*Experimental feature*",
+ "2. 编程": "2. Programming",
+ "让您更容易理解": "Make it easier for you to understand",
+ "的最大上下文长度太短": "The maximum context length is too short",
+ "方法二": "Method 2",
+ "多个azure模型部署+动态切换": "Deploy multiple Azure models + dynamic switching",
+ "详情请见额外文档 docs\\use_azure.md": "For details, please refer to the additional document docs\\use_azure.md",
+ "包括但不限于 Python、Java、C++ 等": "Including but not limited to Python, Java, C++, etc.",
+ "为您提供业界最新的新闻和技术趋势": "Providing you with the latest industry news and technology trends",
+ "自动检测并屏蔽失效的KEY": "Automatically detect and block invalid keys",
+ "请勿使用": "Please do not use",
+ "最后输入分辨率": "Enter the resolution at last",
+ "图片": "Image",
+ "请检查AZURE_ENDPOINT的配置! 当前的Endpoint为": "Please check the configuration of AZURE_ENDPOINT! The current Endpoint is",
+ "图片修改": "Image modification",
+ "已经收集到所有信息": "All information has been collected",
+ "加载API_KEY": "Loading API_KEY",
+ "协助您编写代码": "Assist you in writing code",
+ "我可以为您提供以下服务": "I can provide you with the following services",
+ "排队中请稍后 ...": "Please wait in line ...",
+ "建议您使用英文提示词": "It is recommended to use English prompts",
+ "不能支撑AutoGen运行": "Cannot support AutoGen operation",
+ "帮助您解决编程问题": "Help you solve programming problems",
+ "上次用户反馈输入为": "Last user feedback input is",
+ "请随时告诉我您的需求": "Please feel free to tell me your needs",
+ "有 sys_prompt 接口": "There is a sys_prompt interface",
+ "可能会覆盖之前的配置": "May overwrite previous configuration",
+ "5. 行业动态和趋势分析": "5. Industry dynamics and trend analysis",
+ "正在等待线程锁": "Waiting for thread lock",
+ "请输入分辨率": "Please enter the resolution",
+ "接驳void-terminal": "Connecting to void-terminal",
+ "启动DALLE2图像修改向导程序": "Launching DALLE2 image modification wizard program",
+ "加载模型失败": "Failed to load the model",
+ "是否使用Docker容器运行代码": "Whether to run the code using Docker container",
+ "请输入修改需求": "Please enter modification requirements",
+ "作为您的写作和编程助手": "As your writing and programming assistant",
+ "然后再次点击本插件": "Then click this plugin again",
+ "需要动态切换": "Dynamic switching is required",
+ "文心大模型4.0": "Wenxin Large Model 4.0",
+ "找不到任何.pdf拓展名的文件": "Cannot find any file with .pdf extension",
+ "在使用AutoGen插件时": "When using the AutoGen plugin",
+ "协助您规划项目进度和任务分配": "Assist you in planning project schedules and task assignments",
+ "1. 写作": "1. Writing",
+ "你亲手写的api名称": "The API name you wrote yourself",
+ "使用DALLE2生成图片 | 输入参数字符串": "Generate images using DALLE2 | Input parameter string",
+ "方法一": "Method 1",
+ "我会尽力提供帮助": "I will do my best to provide assistance",
+ "多个azure模型": "Multiple Azure models",
+ "准备就绪": "Ready",
+ "请随时提问": "Please feel free to ask",
+ "如果需要使用AZURE": "If you need to use AZURE",
+ "如果不是本地模型": "If it is not a local model",
+ "AZURE_CFG_ARRAY中配置的模型必须以azure开头": "The models configured in AZURE_CFG_ARRAY must start with 'azure'",
+ "API key has been deactivated. OpenAI以账户失效为由": "API key has been deactivated. OpenAI considers it as an account failure",
+ "请先上传图像": "Please upload the image first",
+ "高优先级": "High priority",
+ "请配置ZHIPUAI_API_KEY": "Please configure ZHIPUAI_API_KEY",
+ "单个azure模型": "Single Azure model",
+ "预留参数 context 未实现": "Reserved parameter 'context' not implemented"
}
\ No newline at end of file
diff --git a/docs/translate_japanese.json b/docs/translate_japanese.json
index fa3af4e..2f80792 100644
--- a/docs/translate_japanese.json
+++ b/docs/translate_japanese.json
@@ -352,7 +352,6 @@
"感谢热情的": "熱心な感謝",
"是本次输出": "今回の出力です",
"协议": "プロトコル",
- "实验性函数调用出错": "実験的な関数呼び出しエラー",
"例如需要翻译的一段话": "翻訳が必要な例文",
"本地文件地址": "ローカルファイルアドレス",
"更好的UI视觉效果": "より良いUI視覚効果",
@@ -782,7 +781,7 @@
"主进程统一调用函数接口": "メインプロセスが関数インターフェースを統一的に呼び出します",
"再例如一个包含了待处理文件的路径": "処理待ちのファイルを含むパスの例",
"负责把学术论文准确翻译成中文": "学術論文を正確に中国語に翻訳する責任があります",
- "函数的说明请见 request_llm/bridge_all.py": "関数の説明については、request_llm/bridge_all.pyを参照してください",
+ "函数的说明请见 request_llms/bridge_all.py": "関数の説明については、request_llms/bridge_all.pyを参照してください",
"然后回车提交": "そしてEnterを押して提出してください",
"防止爆token": "トークンの爆発を防止する",
"Latex项目全文中译英": "LaTeXプロジェクト全文の中国語から英語への翻訳",
@@ -854,7 +853,7 @@
"查询版本和用户意见": "バージョンとユーザーの意見を検索する",
"提取摘要": "要約を抽出する",
"在gpt输出代码的中途": "GPTがコードを出力する途中で",
- "如256x256": "256x256のように",
+ "如1024x1024": "1024x1024のように",
"概括其内容": "内容を要約する",
"剩下的情况都开头除去": "残りの場合はすべて先頭を除去する",
"至少一个线程任务意外失败": "少なくとも1つのスレッドタスクが予期しない失敗をした",
@@ -1616,7 +1615,7 @@
"正在重试": "再試行中",
"从而更全面地理解项目的整体功能": "プロジェクトの全体的な機能をより理解するために",
"正在等您说完问题": "質問が完了するのをお待ちしています",
- "使用教程详情见 request_llm/README.md": "使用方法の詳細については、request_llm/README.mdを参照してください",
+ "使用教程详情见 request_llms/README.md": "使用方法の詳細については、request_llms/README.mdを参照してください",
"6.25 加入判定latex模板的代码": "6.25 テンプレートの判定コードを追加",
"找不到任何音频或视频文件": "音声またはビデオファイルが見つかりません",
"请求GPT模型的": "GPTモデルのリクエスト",
diff --git a/docs/translate_std.json b/docs/translate_std.json
index 90eb685..ee8b2c6 100644
--- a/docs/translate_std.json
+++ b/docs/translate_std.json
@@ -94,5 +94,8 @@
"解析一个Matlab项目": "AnalyzeAMatlabProject",
"函数动态生成": "DynamicFunctionGeneration",
"多智能体终端": "MultiAgentTerminal",
- "多智能体": "MultiAgent"
+ "多智能体": "MultiAgent",
+ "图片生成_DALLE2": "ImageGeneration_DALLE2",
+ "图片生成_DALLE3": "ImageGeneration_DALLE3",
+ "图片修改_DALLE2": "ImageModification_DALLE2"
}
\ No newline at end of file
diff --git a/docs/translate_traditionalchinese.json b/docs/translate_traditionalchinese.json
index 53570ae..9ca7cba 100644
--- a/docs/translate_traditionalchinese.json
+++ b/docs/translate_traditionalchinese.json
@@ -123,7 +123,7 @@
"的第": "的第",
"减少重复": "減少重複",
"如果超过期限没有喂狗": "如果超過期限沒有餵狗",
- "函数的说明请见 request_llm/bridge_all.py": "函數的說明請見 request_llm/bridge_all.py",
+ "函数的说明请见 request_llms/bridge_all.py": "函數的說明請見 request_llms/bridge_all.py",
"第7步": "第7步",
"说": "說",
"中途接收可能的终止指令": "中途接收可能的終止指令",
@@ -780,7 +780,6 @@
"检测到程序终止": "偵測到程式終止",
"对整个Latex项目进行润色": "對整個Latex專案進行潤色",
"方法则会被调用": "方法則會被調用",
- "实验性函数调用出错": "實驗性函數調用出錯",
"把完整输入-输出结果显示在聊天框": "把完整輸入-輸出結果顯示在聊天框",
"本地文件预览": "本地檔案預覽",
"接下来请你逐文件分析下面的论文文件": "接下來請你逐檔案分析下面的論文檔案",
@@ -1147,7 +1146,7 @@
"Y+回车=确认": "Y+回車=確認",
"正在同时咨询ChatGPT和ChatGLM……": "正在同時諮詢ChatGPT和ChatGLM……",
"根据 heuristic 规则": "根據heuristic規則",
- "如256x256": "如256x256",
+ "如1024x1024": "如1024x1024",
"函数插件区": "函數插件區",
"*** API_KEY 导入成功": "*** API_KEY 導入成功",
"请对下面的程序文件做一个概述文件名是": "請對下面的程序文件做一個概述文件名是",
@@ -1887,7 +1886,7 @@
"请继续分析其他源代码": "請繼續分析其他源代碼",
"质能方程式": "質能方程式",
"功能尚不稳定": "功能尚不穩定",
- "使用教程详情见 request_llm/README.md": "使用教程詳情見 request_llm/README.md",
+ "使用教程详情见 request_llms/README.md": "使用教程詳情見 request_llms/README.md",
"从以上搜索结果中抽取信息": "從以上搜索結果中抽取信息",
"虽然PDF生成失败了": "雖然PDF生成失敗了",
"找图片": "尋找圖片",
diff --git a/docs/use_azure.md b/docs/use_azure.md
index 4c43a7e..0e192ba 100644
--- a/docs/use_azure.md
+++ b/docs/use_azure.md
@@ -1,3 +1,42 @@
+# 微软Azure云接入指南
+
+## 方法一(旧方法,只能接入一个Azure模型)
+
+- 通过以下教程,获取AZURE_ENDPOINT,AZURE_API_KEY,AZURE_ENGINE,直接修改 config 配置即可。配置的修改方法见本项目wiki。
+
+## 方法二(新方法,接入多个Azure模型,并支持动态切换)
+
+- 在方法一的基础上,注册并获取多组 AZURE_ENDPOINT,AZURE_API_KEY,AZURE_ENGINE
+- 修改config中的AZURE_CFG_ARRAY和AVAIL_LLM_MODELS配置项,按照格式填入多个Azure模型的配置,如下所示:
+
+```
+AZURE_CFG_ARRAY = {
+ "azure-gpt-3.5": # 第一个模型,azure模型必须以"azure-"开头,注意您还需要将"azure-gpt-3.5"加入AVAIL_LLM_MODELS(模型下拉菜单)
+ {
+ "AZURE_ENDPOINT": "https://你亲手写的api名称.openai.azure.com/",
+ "AZURE_API_KEY": "cccccccccccccccccccccccccccccccc",
+ "AZURE_ENGINE": "填入你亲手写的部署名1",
+ "AZURE_MODEL_MAX_TOKEN": 4096,
+ },
+ "azure-gpt-4": # 第二个模型,azure模型必须以"azure-"开头,注意您还需要将"azure-gpt-4"加入AVAIL_LLM_MODELS(模型下拉菜单)
+ {
+ "AZURE_ENDPOINT": "https://你亲手写的api名称.openai.azure.com/",
+ "AZURE_API_KEY": "dddddddddddddddddddddddddddddddd",
+ "AZURE_ENGINE": "填入你亲手写的部署名2",
+ "AZURE_MODEL_MAX_TOKEN": 8192,
+ },
+ "azure-gpt-3.5-16k": # 第三个模型,azure模型必须以"azure-"开头,注意您还需要将"azure-gpt-3.5-16k"加入AVAIL_LLM_MODELS(模型下拉菜单)
+ {
+ "AZURE_ENDPOINT": "https://你亲手写的api名称.openai.azure.com/",
+ "AZURE_API_KEY": "eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee",
+ "AZURE_ENGINE": "填入你亲手写的部署名3",
+ "AZURE_MODEL_MAX_TOKEN": 16384,
+ },
+}
+```
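+
+- 同时,请把这些模型名加入 `AVAIL_LLM_MODELS`(模型下拉菜单)。下面是一个示意写法(仅供参考,假设您保留了默认的 gpt-3.5-turbo,具体请以您自己的 config/config_private 为准):
+
+```
+AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "azure-gpt-3.5", "azure-gpt-4", "azure-gpt-3.5-16k"]
+```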
+
+
+
# 通过微软Azure云服务申请 Openai API
由于Openai和微软的关系,现在是可以通过微软的Azure云计算服务直接访问openai的api,免去了注册和网络的问题。
diff --git a/main.py b/main.py
index 9f38995..b29c94f 100644
--- a/main.py
+++ b/main.py
@@ -1,26 +1,25 @@
import os; os.environ['no_proxy'] = '*' # 避免代理网络产生意外污染
import pickle
-import codecs
import base64
def main():
import gradio as gr
if gr.__version__ not in ['3.32.6']:
raise ModuleNotFoundError("使用项目内置Gradio获取最优体验! 请运行 `pip install -r requirements.txt` 指令安装内置Gradio及其他依赖, 详情信息见requirements.txt.")
- from request_llm.bridge_all import predict
+ from request_llms.bridge_all import predict
from toolbox import format_io, find_free_port, on_file_uploaded, on_report_generated, get_conf, ArgsGeneralWrapper, load_chat_cookies, DummyWith
# 建议您复制一个config_private.py放自己的秘密, 如API和代理网址, 避免不小心传github被别人看到
proxies, WEB_PORT, LLM_MODEL, CONCURRENT_COUNT, AUTHENTICATION = get_conf('proxies', 'WEB_PORT', 'LLM_MODEL', 'CONCURRENT_COUNT', 'AUTHENTICATION')
CHATBOT_HEIGHT, LAYOUT, AVAIL_LLM_MODELS, AUTO_CLEAR_TXT = get_conf('CHATBOT_HEIGHT', 'LAYOUT', 'AVAIL_LLM_MODELS', 'AUTO_CLEAR_TXT')
ENABLE_AUDIO, AUTO_CLEAR_TXT, PATH_LOGGING, AVAIL_THEMES, THEME = get_conf('ENABLE_AUDIO', 'AUTO_CLEAR_TXT', 'PATH_LOGGING', 'AVAIL_THEMES', 'THEME')
DARK_MODE, NUM_CUSTOM_BASIC_BTN, SSL_KEYFILE, SSL_CERTFILE = get_conf('DARK_MODE', 'NUM_CUSTOM_BASIC_BTN', 'SSL_KEYFILE', 'SSL_CERTFILE')
+ INIT_SYS_PROMPT = get_conf('INIT_SYS_PROMPT')
# 如果WEB_PORT是-1, 则随机选取WEB端口
PORT = find_free_port() if WEB_PORT <= 0 else WEB_PORT
from check_proxy import get_current_version
from themes.theme import adjust_theme, advanced_css, theme_declaration, load_dynamic_theme
- initial_prompt = "Serve me as a writing and programming assistant."
title_html = f"
GPT 学术优化 {get_current_version()}
{theme_declaration}"
description = "Github源代码开源和更新[地址🚀](https://github.com/binary-husky/gpt_academic), "
description += "感谢热情的[开发者们❤️](https://github.com/binary-husky/gpt_academic/graphs/contributors)."
@@ -32,6 +31,7 @@ def main():
description += "虚空终端使用说明: 点击虚空终端, 然后根据提示输入指令, 再次点击虚空终端"
description += "如何保存对话: 点击保存当前的对话按钮"
description += "如何语音对话: 请阅读Wiki"
+ description += "如何临时更换API_KEY: 在输入区输入临时API_KEY后提交(网页刷新后失效)"
# 问询记录, python 版本建议3.9+(越新越好)
import logging, uuid
@@ -48,7 +48,7 @@ def main():
# 高级函数插件
from crazy_functional import get_crazy_functions
- DEFAULT_FN_GROUPS, = get_conf('DEFAULT_FN_GROUPS')
+ DEFAULT_FN_GROUPS = get_conf('DEFAULT_FN_GROUPS')
plugins = get_crazy_functions()
all_plugin_groups = list(set([g for _, plugin in plugins.items() for g in plugin['Group'].split('|')]))
match_group = lambda tags, groups: any([g in groups for g in tags.split('|')])
@@ -94,7 +94,7 @@ def main():
clearBtn = gr.Button("清除", elem_id="elem_clear", variant="secondary", visible=False); clearBtn.style(size="sm")
if ENABLE_AUDIO:
with gr.Row():
- audio_mic = gr.Audio(source="microphone", type="numpy", streaming=True, show_label=False).style(container=False)
+ audio_mic = gr.Audio(source="microphone", type="numpy", elem_id="elem_audio", streaming=True, show_label=False).style(container=False)
with gr.Row():
status = gr.Markdown(f"Tip: 按Enter提交, 按Shift+Enter换行。当前模型: {LLM_MODEL} \n {proxy_info}", elem_id="state-panel")
with gr.Accordion("基础功能区", open=True, elem_id="basic-panel") as area_basic_fn:
@@ -153,7 +153,7 @@ def main():
top_p = gr.Slider(minimum=-0, maximum=1.0, value=1.0, step=0.01,interactive=True, label="Top-p (nucleus sampling)",)
temperature = gr.Slider(minimum=-0, maximum=2.0, value=1.0, step=0.01, interactive=True, label="Temperature",)
max_length_sl = gr.Slider(minimum=256, maximum=1024*32, value=4096, step=128, interactive=True, label="Local LLM MaxLength",)
- system_prompt = gr.Textbox(show_label=True, lines=2, placeholder=f"System Prompt", label="System prompt", value=initial_prompt)
+ system_prompt = gr.Textbox(show_label=True, lines=2, placeholder=f"System Prompt", label="System prompt", value=INIT_SYS_PROMPT)
with gr.Tab("界面外观", elem_id="interact-panel"):
theme_dropdown = gr.Dropdown(AVAIL_THEMES, value=THEME, label="更换UI主题").style(container=False)
@@ -433,16 +433,16 @@ def main():
server_port=PORT,
favicon_path=os.path.join(os.path.dirname(__file__), "docs/logo.png"),
auth=AUTHENTICATION if len(AUTHENTICATION) != 0 else None,
- blocked_paths=["config.py","config_private.py","docker-compose.yml","Dockerfile"])
+ blocked_paths=["config.py","config_private.py","docker-compose.yml","Dockerfile",f"{PATH_LOGGING}/admin"])
# 如果需要在二级路径下运行
- # CUSTOM_PATH, = get_conf('CUSTOM_PATH')
+ # CUSTOM_PATH = get_conf('CUSTOM_PATH')
# if CUSTOM_PATH != "/":
# from toolbox import run_gradio_in_subpath
# run_gradio_in_subpath(demo, auth=AUTHENTICATION, port=PORT, custom_path=CUSTOM_PATH)
# else:
# demo.launch(server_name="0.0.0.0", server_port=PORT, auth=AUTHENTICATION, favicon_path="docs/logo.png",
- # blocked_paths=["config.py","config_private.py","docker-compose.yml","Dockerfile"])
+ # blocked_paths=["config.py","config_private.py","docker-compose.yml","Dockerfile",f"{PATH_LOGGING}/admin"])
if __name__ == "__main__":
main()
diff --git a/multi_language.py b/multi_language.py
index 8e3ac9d..a20fb5a 100644
--- a/multi_language.py
+++ b/multi_language.py
@@ -13,6 +13,7 @@
4. Run `python multi_language.py`.
Note: You need to run it multiple times to increase translation coverage because GPT makes mistakes sometimes.
+ (You can also run `CACHE_ONLY=True python multi_language.py` to use cached translation mapping)
5. Find the translated program in `multi-language\English\*`
@@ -35,7 +36,9 @@ import pickle
import time
from toolbox import get_conf
-CACHE_FOLDER, = get_conf('PATH_LOGGING')
+CACHE_ONLY = os.environ.get('CACHE_ONLY', False)
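+# 注意: os.environ.get 返回的是字符串, 因此只要设置了 CACHE_ONLY 环境变量(无论其值为何)都会被视为启用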
+
+CACHE_FOLDER = get_conf('PATH_LOGGING')
blacklist = ['multi-language', CACHE_FOLDER, '.git', 'private_upload', 'multi_language.py', 'build', '.github', '.vscode', '__pycache__', 'venv']
@@ -336,7 +339,10 @@ def step_1_core_key_translate():
if d not in cached_translation_keys:
need_translate.append(d)
- need_translate_mapping = trans(need_translate, language=LANG_STD, special=True)
+ if CACHE_ONLY:
+ need_translate_mapping = {}
+ else:
+ need_translate_mapping = trans(need_translate, language=LANG_STD, special=True)
map_to_json(need_translate_mapping, language=LANG_STD)
cached_translation = read_map_from_json(language=LANG_STD)
cached_translation = dict(sorted(cached_translation.items(), key=lambda x: -len(x[0])))
@@ -476,8 +482,10 @@ def step_2_core_key_translate():
if d not in cached_translation_keys:
need_translate.append(d)
-
- up = trans_json(need_translate, language=LANG, special=False)
+ if CACHE_ONLY:
+ up = {}
+ else:
+ up = trans_json(need_translate, language=LANG, special=False)
map_to_json(up, language=LANG)
cached_translation = read_map_from_json(language=LANG)
LANG_STD = 'std'
diff --git a/request_llm/bridge_chatglm.py b/request_llm/bridge_chatglm.py
deleted file mode 100644
index 387b3e2..0000000
--- a/request_llm/bridge_chatglm.py
+++ /dev/null
@@ -1,167 +0,0 @@
-
-from transformers import AutoModel, AutoTokenizer
-import time
-import threading
-import importlib
-from toolbox import update_ui, get_conf, ProxyNetworkActivate
-from multiprocessing import Process, Pipe
-
-load_message = "ChatGLM尚未加载,加载需要一段时间。注意,取决于`config.py`的配置,ChatGLM消耗大量的内存(CPU)或显存(GPU),也许会导致低配计算机卡死 ……"
-
-#################################################################################
-class GetGLMHandle(Process):
- def __init__(self):
- super().__init__(daemon=True)
- self.parent, self.child = Pipe()
- self.chatglm_model = None
- self.chatglm_tokenizer = None
- self.info = ""
- self.success = True
- self.check_dependency()
- self.start()
- self.threadLock = threading.Lock()
-
- def check_dependency(self):
- try:
- import sentencepiece
- self.info = "依赖检测通过"
- self.success = True
- except:
- self.info = "缺少ChatGLM的依赖,如果要使用ChatGLM,除了基础的pip依赖以外,您还需要运行`pip install -r request_llm/requirements_chatglm.txt`安装ChatGLM的依赖。"
- self.success = False
-
- def ready(self):
- return self.chatglm_model is not None
-
- def run(self):
- # 子进程执行
- # 第一次运行,加载参数
- retry = 0
- LOCAL_MODEL_QUANT, device = get_conf('LOCAL_MODEL_QUANT', 'LOCAL_MODEL_DEVICE')
-
- if LOCAL_MODEL_QUANT == "INT4": # INT4
- _model_name_ = "THUDM/chatglm2-6b-int4"
- elif LOCAL_MODEL_QUANT == "INT8": # INT8
- _model_name_ = "THUDM/chatglm2-6b-int8"
- else:
- _model_name_ = "THUDM/chatglm2-6b" # FP16
-
- while True:
- try:
- with ProxyNetworkActivate('Download_LLM'):
- if self.chatglm_model is None:
- self.chatglm_tokenizer = AutoTokenizer.from_pretrained(_model_name_, trust_remote_code=True)
- if device=='cpu':
- self.chatglm_model = AutoModel.from_pretrained(_model_name_, trust_remote_code=True).float()
- else:
- self.chatglm_model = AutoModel.from_pretrained(_model_name_, trust_remote_code=True).half().cuda()
- self.chatglm_model = self.chatglm_model.eval()
- break
- else:
- break
- except:
- retry += 1
- if retry > 3:
- self.child.send('[Local Message] Call ChatGLM fail 不能正常加载ChatGLM的参数。')
- raise RuntimeError("不能正常加载ChatGLM的参数!")
-
- while True:
- # 进入任务等待状态
- kwargs = self.child.recv()
- # 收到消息,开始请求
- try:
- for response, history in self.chatglm_model.stream_chat(self.chatglm_tokenizer, **kwargs):
- self.child.send(response)
- # # 中途接收可能的终止指令(如果有的话)
- # if self.child.poll():
- # command = self.child.recv()
- # if command == '[Terminate]': break
- except:
- from toolbox import trimmed_format_exc
- self.child.send('[Local Message] Call ChatGLM fail.' + '\n```\n' + trimmed_format_exc() + '\n```\n')
- # 请求处理结束,开始下一个循环
- self.child.send('[Finish]')
-
- def stream_chat(self, **kwargs):
- # 主进程执行
- self.threadLock.acquire()
- self.parent.send(kwargs)
- while True:
- res = self.parent.recv()
- if res != '[Finish]':
- yield res
- else:
- break
- self.threadLock.release()
-
-global glm_handle
-glm_handle = None
-#################################################################################
-def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False):
- """
- 多线程方法
- 函数的说明请见 request_llm/bridge_all.py
- """
- global glm_handle
- if glm_handle is None:
- glm_handle = GetGLMHandle()
- if len(observe_window) >= 1: observe_window[0] = load_message + "\n\n" + glm_handle.info
- if not glm_handle.success:
- error = glm_handle.info
- glm_handle = None
- raise RuntimeError(error)
-
- # chatglm 没有 sys_prompt 接口,因此把prompt加入 history
- history_feedin = []
- history_feedin.append(["What can I do?", sys_prompt])
- for i in range(len(history)//2):
- history_feedin.append([history[2*i], history[2*i+1]] )
-
- watch_dog_patience = 5 # 看门狗 (watchdog) 的耐心, 设置5秒即可
- response = ""
- for response in glm_handle.stream_chat(query=inputs, history=history_feedin, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']):
- if len(observe_window) >= 1: observe_window[0] = response
- if len(observe_window) >= 2:
- if (time.time()-observe_window[1]) > watch_dog_patience:
- raise RuntimeError("程序终止。")
- return response
-
-
-
-def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None):
- """
- 单线程方法
- 函数的说明请见 request_llm/bridge_all.py
- """
- chatbot.append((inputs, ""))
-
- global glm_handle
- if glm_handle is None:
- glm_handle = GetGLMHandle()
- chatbot[-1] = (inputs, load_message + "\n\n" + glm_handle.info)
- yield from update_ui(chatbot=chatbot, history=[])
- if not glm_handle.success:
- glm_handle = None
- return
-
- if additional_fn is not None:
- from core_functional import handle_core_functionality
- inputs, history = handle_core_functionality(additional_fn, inputs, history, chatbot)
-
- # 处理历史信息
- history_feedin = []
- history_feedin.append(["What can I do?", system_prompt] )
- for i in range(len(history)//2):
- history_feedin.append([history[2*i], history[2*i+1]] )
-
- # 开始接收chatglm的回复
- response = "[Local Message]: 等待ChatGLM响应中 ..."
- for response in glm_handle.stream_chat(query=inputs, history=history_feedin, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']):
- chatbot[-1] = (inputs, response)
- yield from update_ui(chatbot=chatbot, history=history)
-
- # 总结输出
- if response == "[Local Message]: 等待ChatGLM响应中 ...":
- response = "[Local Message]: ChatGLM响应异常 ..."
- history.extend([inputs, response])
- yield from update_ui(chatbot=chatbot, history=history)
diff --git a/request_llm/local_llm_class.py b/request_llm/local_llm_class.py
deleted file mode 100644
index c9c7253..0000000
--- a/request_llm/local_llm_class.py
+++ /dev/null
@@ -1,180 +0,0 @@
-from transformers import AutoModel, AutoTokenizer
-import time
-import threading
-import importlib
-from toolbox import update_ui, get_conf, Singleton
-from multiprocessing import Process, Pipe
-
-def SingletonLocalLLM(cls):
- """
- 一个单实例装饰器
- """
- _instance = {}
- def _singleton(*args, **kargs):
- if cls not in _instance:
- _instance[cls] = cls(*args, **kargs)
- return _instance[cls]
- elif _instance[cls].corrupted:
- _instance[cls] = cls(*args, **kargs)
- return _instance[cls]
- else:
- return _instance[cls]
- return _singleton
-
-class LocalLLMHandle(Process):
- def __init__(self):
- # ⭐主进程执行
- super().__init__(daemon=True)
- self.corrupted = False
- self.load_model_info()
- self.parent, self.child = Pipe()
- self.running = True
- self._model = None
- self._tokenizer = None
- self.info = ""
- self.check_dependency()
- self.start()
- self.threadLock = threading.Lock()
-
- def load_model_info(self):
- # 🏃♂️🏃♂️🏃♂️ 子进程执行
- raise NotImplementedError("Method not implemented yet")
- self.model_name = ""
- self.cmd_to_install = ""
-
- def load_model_and_tokenizer(self):
- """
- This function should return the model and the tokenizer
- """
- # 🏃♂️🏃♂️🏃♂️ 子进程执行
- raise NotImplementedError("Method not implemented yet")
-
- def llm_stream_generator(self, **kwargs):
- # 🏃♂️🏃♂️🏃♂️ 子进程执行
- raise NotImplementedError("Method not implemented yet")
-
- def try_to_import_special_deps(self, **kwargs):
- """
- import something that will raise error if the user does not install requirement_*.txt
- """
- # ⭐主进程执行
- raise NotImplementedError("Method not implemented yet")
-
- def check_dependency(self):
- # ⭐主进程执行
- try:
- self.try_to_import_special_deps()
- self.info = "依赖检测通过"
- self.running = True
- except:
- self.info = f"缺少{self.model_name}的依赖,如果要使用{self.model_name},除了基础的pip依赖以外,您还需要运行{self.cmd_to_install}安装{self.model_name}的依赖。"
- self.running = False
-
- def run(self):
- # 🏃♂️🏃♂️🏃♂️ 子进程执行
- # 第一次运行,加载参数
- try:
- self._model, self._tokenizer = self.load_model_and_tokenizer()
- except:
- self.running = False
- from toolbox import trimmed_format_exc
- self.child.send(f'[Local Message] 不能正常加载{self.model_name}的参数.' + '\n```\n' + trimmed_format_exc() + '\n```\n')
- self.child.send('[FinishBad]')
- raise RuntimeError(f"不能正常加载{self.model_name}的参数!")
-
- while True:
- # 进入任务等待状态
- kwargs = self.child.recv()
- # 收到消息,开始请求
- try:
- for response_full in self.llm_stream_generator(**kwargs):
- self.child.send(response_full)
- self.child.send('[Finish]')
- # 请求处理结束,开始下一个循环
- except:
- from toolbox import trimmed_format_exc
- self.child.send(f'[Local Message] 调用{self.model_name}失败.' + '\n```\n' + trimmed_format_exc() + '\n```\n')
- self.child.send('[Finish]')
-
- def stream_chat(self, **kwargs):
- # ⭐主进程执行
- self.threadLock.acquire()
- self.parent.send(kwargs)
- while True:
- res = self.parent.recv()
- if res == '[Finish]':
- break
- if res == '[FinishBad]':
- self.running = False
- self.corrupted = True
- break
- else:
- yield res
- self.threadLock.release()
-
-
-
-def get_local_llm_predict_fns(LLMSingletonClass, model_name):
- load_message = f"{model_name}尚未加载,加载需要一段时间。注意,取决于`config.py`的配置,{model_name}消耗大量的内存(CPU)或显存(GPU),也许会导致低配计算机卡死 ……"
-
- def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False):
- """
- ⭐多线程方法
- 函数的说明请见 request_llm/bridge_all.py
- """
- _llm_handle = LLMSingletonClass()
- if len(observe_window) >= 1: observe_window[0] = load_message + "\n\n" + _llm_handle.info
- if not _llm_handle.running: raise RuntimeError(_llm_handle.info)
-
- # chatglm 没有 sys_prompt 接口,因此把prompt加入 history
- history_feedin = []
- history_feedin.append([sys_prompt, "Certainly!"])
- for i in range(len(history)//2):
- history_feedin.append([history[2*i], history[2*i+1]] )
-
- watch_dog_patience = 5 # 看门狗 (watchdog) 的耐心, 设置5秒即可
- response = ""
- for response in _llm_handle.stream_chat(query=inputs, history=history_feedin, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']):
- if len(observe_window) >= 1:
- observe_window[0] = response
- if len(observe_window) >= 2:
- if (time.time()-observe_window[1]) > watch_dog_patience: raise RuntimeError("程序终止。")
- return response
-
-
-
- def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None):
- """
- ⭐单线程方法
- 函数的说明请见 request_llm/bridge_all.py
- """
- chatbot.append((inputs, ""))
-
- _llm_handle = LLMSingletonClass()
- chatbot[-1] = (inputs, load_message + "\n\n" + _llm_handle.info)
- yield from update_ui(chatbot=chatbot, history=[])
- if not _llm_handle.running: raise RuntimeError(_llm_handle.info)
-
- if additional_fn is not None:
- from core_functional import handle_core_functionality
- inputs, history = handle_core_functionality(additional_fn, inputs, history, chatbot)
-
- # 处理历史信息
- history_feedin = []
- history_feedin.append([system_prompt, "Certainly!"])
- for i in range(len(history)//2):
- history_feedin.append([history[2*i], history[2*i+1]] )
-
- # 开始接收回复
- response = f"[Local Message]: 等待{model_name}响应中 ..."
- for response in _llm_handle.stream_chat(query=inputs, history=history_feedin, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']):
- chatbot[-1] = (inputs, response)
- yield from update_ui(chatbot=chatbot, history=history)
-
- # 总结输出
- if response == f"[Local Message]: 等待{model_name}响应中 ...":
- response = f"[Local Message]: {model_name}响应异常 ..."
- history.extend([inputs, response])
- yield from update_ui(chatbot=chatbot, history=history)
-
- return predict_no_ui_long_connection, predict
\ No newline at end of file
diff --git a/request_llm/README.md b/request_llms/README.md
similarity index 96%
rename from request_llm/README.md
rename to request_llms/README.md
index 545bc1f..92b856e 100644
--- a/request_llm/README.md
+++ b/request_llms/README.md
@@ -2,7 +2,7 @@
## ChatGLM
-- 安装依赖 `pip install -r request_llm/requirements_chatglm.txt`
+- 安装依赖 `pip install -r request_llms/requirements_chatglm.txt`
- 修改配置,在config.py中将LLM_MODEL的值改为"chatglm"
``` sh
diff --git a/request_llm/bridge_all.py b/request_llms/bridge_all.py
similarity index 82%
rename from request_llm/bridge_all.py
rename to request_llms/bridge_all.py
index 99f889c..88848a9 100644
--- a/request_llm/bridge_all.py
+++ b/request_llms/bridge_all.py
@@ -8,7 +8,7 @@
具备多线程调用能力的函数:在函数插件中被调用,灵活而简洁
2. predict_no_ui_long_connection(...)
"""
-import tiktoken
+import tiktoken, copy
from functools import lru_cache
from concurrent.futures import ThreadPoolExecutor
from toolbox import get_conf, trimmed_format_exc
@@ -16,12 +16,15 @@ from toolbox import get_conf, trimmed_format_exc
from .bridge_chatgpt import predict_no_ui_long_connection as chatgpt_noui
from .bridge_chatgpt import predict as chatgpt_ui
-from .bridge_chatglm import predict_no_ui_long_connection as chatglm_noui
-from .bridge_chatglm import predict as chatglm_ui
+from .bridge_chatgpt_vision import predict_no_ui_long_connection as chatgpt_vision_noui
+from .bridge_chatgpt_vision import predict as chatgpt_vision_ui
from .bridge_chatglm import predict_no_ui_long_connection as chatglm_noui
from .bridge_chatglm import predict as chatglm_ui
+from .bridge_chatglm3 import predict_no_ui_long_connection as chatglm3_noui
+from .bridge_chatglm3 import predict as chatglm3_ui
+
from .bridge_qianfan import predict_no_ui_long_connection as qianfan_noui
from .bridge_qianfan import predict as qianfan_ui
@@ -56,7 +59,7 @@ if not AZURE_ENDPOINT.endswith('/'): AZURE_ENDPOINT += '/'
azure_endpoint = AZURE_ENDPOINT + f'openai/deployments/{AZURE_ENGINE}/chat/completions?api-version=2023-05-15'
# 兼容旧版的配置
try:
- API_URL, = get_conf("API_URL")
+ API_URL = get_conf("API_URL")
if API_URL != "https://api.openai.com/v1/chat/completions":
openai_endpoint = API_URL
print("警告!API_URL配置选项将被弃用,请更换为API_URL_REDIRECT配置")
@@ -94,7 +97,7 @@ model_info = {
"fn_with_ui": chatgpt_ui,
"fn_without_ui": chatgpt_noui,
"endpoint": openai_endpoint,
- "max_token": 1024*16,
+ "max_token": 16385,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
},
@@ -112,7 +115,16 @@ model_info = {
"fn_with_ui": chatgpt_ui,
"fn_without_ui": chatgpt_noui,
"endpoint": openai_endpoint,
- "max_token": 1024 * 16,
+ "max_token": 16385,
+ "tokenizer": tokenizer_gpt35,
+ "token_cnt": get_token_num_gpt35,
+ },
+
+ "gpt-3.5-turbo-1106": {#16k
+ "fn_with_ui": chatgpt_ui,
+ "fn_without_ui": chatgpt_noui,
+ "endpoint": openai_endpoint,
+ "max_token": 16385,
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
},
@@ -135,6 +147,15 @@ model_info = {
"token_cnt": get_token_num_gpt4,
},
+ "gpt-4-1106-preview": {
+ "fn_with_ui": chatgpt_ui,
+ "fn_without_ui": chatgpt_noui,
+ "endpoint": openai_endpoint,
+ "max_token": 128000,
+ "tokenizer": tokenizer_gpt4,
+ "token_cnt": get_token_num_gpt4,
+ },
+
"gpt-3.5-random": {
"fn_with_ui": chatgpt_ui,
"fn_without_ui": chatgpt_noui,
@@ -144,6 +165,16 @@ model_info = {
"token_cnt": get_token_num_gpt4,
},
+ "gpt-4-vision-preview": {
+ "fn_with_ui": chatgpt_vision_ui,
+ "fn_without_ui": chatgpt_vision_noui,
+ "endpoint": openai_endpoint,
+ "max_token": 4096,
+ "tokenizer": tokenizer_gpt4,
+ "token_cnt": get_token_num_gpt4,
+ },
+
+
# azure openai
"azure-gpt-3.5":{
"fn_with_ui": chatgpt_ui,
@@ -159,11 +190,11 @@ model_info = {
"fn_without_ui": chatgpt_noui,
"endpoint": azure_endpoint,
"max_token": 8192,
- "tokenizer": tokenizer_gpt35,
- "token_cnt": get_token_num_gpt35,
+ "tokenizer": tokenizer_gpt4,
+ "token_cnt": get_token_num_gpt4,
},
- # api_2d
+ # api_2d (此后不需要在此处添加api2d的接口了,因为下面的代码会自动添加)
"api2d-gpt-3.5-turbo": {
"fn_with_ui": chatgpt_ui,
"fn_without_ui": chatgpt_noui,
@@ -182,15 +213,6 @@ model_info = {
"token_cnt": get_token_num_gpt4,
},
- "api2d-gpt-3.5-turbo-16k": {
- "fn_with_ui": chatgpt_ui,
- "fn_without_ui": chatgpt_noui,
- "endpoint": api2d_endpoint,
- "max_token": 1024*16,
- "tokenizer": tokenizer_gpt35,
- "token_cnt": get_token_num_gpt35,
- },
-
# 将 chatglm 直接对齐到 chatglm2
"chatglm": {
"fn_with_ui": chatglm_ui,
@@ -208,6 +230,14 @@ model_info = {
"tokenizer": tokenizer_gpt35,
"token_cnt": get_token_num_gpt35,
},
+ "chatglm3": {
+ "fn_with_ui": chatglm3_ui,
+ "fn_without_ui": chatglm3_noui,
+ "endpoint": None,
+ "max_token": 8192,
+ "tokenizer": tokenizer_gpt35,
+ "token_cnt": get_token_num_gpt35,
+ },
"qianfan": {
"fn_with_ui": qianfan_ui,
"fn_without_ui": qianfan_noui,
@@ -218,6 +248,20 @@ model_info = {
},
}
+# -=-=-=-=-=-=- api2d 对齐支持 -=-=-=-=-=-=-
+for model in AVAIL_LLM_MODELS:
+ if model.startswith('api2d-') and (model.replace('api2d-','') in model_info.keys()):
+ mi = copy.deepcopy(model_info[model.replace('api2d-','')])
+ mi.update({"endpoint": api2d_endpoint})
+ model_info.update({model: mi})
+
+# -=-=-=-=-=-=- azure 对齐支持 -=-=-=-=-=-=-
+for model in AVAIL_LLM_MODELS:
+ if model.startswith('azure-') and (model.replace('azure-','') in model_info.keys()):
+ mi = copy.deepcopy(model_info[model.replace('azure-','')])
+ mi.update({"endpoint": azure_endpoint})
+ model_info.update({model: mi})
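+# 例如: 若 AVAIL_LLM_MODELS 中含有 "api2d-gpt-3.5-turbo-16k", 上面的循环会复用 "gpt-3.5-turbo-16k" 的配置, 仅把 endpoint 替换为 api2d_endpoint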
+
# -=-=-=-=-=-=- 以下部分是新加入的模型,可能附带额外依赖 -=-=-=-=-=-=-
if "claude-1-100k" in AVAIL_LLM_MODELS or "claude-2" in AVAIL_LLM_MODELS:
from .bridge_claude import predict_no_ui_long_connection as claude_noui
@@ -451,6 +495,22 @@ if "sparkv2" in AVAIL_LLM_MODELS: # 讯飞星火认知大模型
})
except:
print(trimmed_format_exc())
+if "sparkv3" in AVAIL_LLM_MODELS: # 讯飞星火认知大模型
+ try:
+ from .bridge_spark import predict_no_ui_long_connection as spark_noui
+ from .bridge_spark import predict as spark_ui
+ model_info.update({
+ "sparkv3": {
+ "fn_with_ui": spark_ui,
+ "fn_without_ui": spark_noui,
+ "endpoint": None,
+ "max_token": 4096,
+ "tokenizer": tokenizer_gpt35,
+ "token_cnt": get_token_num_gpt35,
+ }
+ })
+ except:
+ print(trimmed_format_exc())
if "llama2" in AVAIL_LLM_MODELS: # llama2
try:
from .bridge_llama2 import predict_no_ui_long_connection as llama2_noui
@@ -467,6 +527,46 @@ if "llama2" in AVAIL_LLM_MODELS: # llama2
})
except:
print(trimmed_format_exc())
+if "zhipuai" in AVAIL_LLM_MODELS: # zhipuai
+ try:
+ from .bridge_zhipu import predict_no_ui_long_connection as zhipu_noui
+ from .bridge_zhipu import predict as zhipu_ui
+ model_info.update({
+ "zhipuai": {
+ "fn_with_ui": zhipu_ui,
+ "fn_without_ui": zhipu_noui,
+ "endpoint": None,
+ "max_token": 4096,
+ "tokenizer": tokenizer_gpt35,
+ "token_cnt": get_token_num_gpt35,
+ }
+ })
+ except:
+ print(trimmed_format_exc())
+
+# <-- 用于定义和切换多个azure模型 -->
+AZURE_CFG_ARRAY = get_conf("AZURE_CFG_ARRAY")
+if len(AZURE_CFG_ARRAY) > 0:
+ for azure_model_name, azure_cfg_dict in AZURE_CFG_ARRAY.items():
+ # 可能会覆盖之前的配置,但这是意料之中的
+ if not azure_model_name.startswith('azure'):
+ raise ValueError("AZURE_CFG_ARRAY中配置的模型必须以azure开头")
+ endpoint_ = azure_cfg_dict["AZURE_ENDPOINT"] + \
+ f'openai/deployments/{azure_cfg_dict["AZURE_ENGINE"]}/chat/completions?api-version=2023-05-15'
+ model_info.update({
+ azure_model_name: {
+ "fn_with_ui": chatgpt_ui,
+ "fn_without_ui": chatgpt_noui,
+ "endpoint": endpoint_,
+ "azure_api_key": azure_cfg_dict["AZURE_API_KEY"],
+ "max_token": azure_cfg_dict["AZURE_MODEL_MAX_TOKEN"],
+ "tokenizer": tokenizer_gpt35, # tokenizer只用于粗估token数量
+ "token_cnt": get_token_num_gpt35,
+ }
+ })
+ if azure_model_name not in AVAIL_LLM_MODELS:
+ AVAIL_LLM_MODELS += [azure_model_name]
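+# 注: 上述azure模型各自的 AZURE_API_KEY 会在 bridge_chatgpt.generate_payload 中根据 AZURE_CFG_ARRAY 取用并写入请求头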
+
@@ -484,7 +584,7 @@ def LLM_CATCH_EXCEPTION(f):
return decorated
-def predict_no_ui_long_connection(inputs, llm_kwargs, history, sys_prompt, observe_window, console_slience=False):
+def predict_no_ui_long_connection(inputs, llm_kwargs, history, sys_prompt, observe_window=[], console_slience=False):
"""
发送至LLM,等待回复,一次性完成,不显示中间过程。但内部用stream的方法避免中途网线被掐。
inputs:
diff --git a/request_llms/bridge_chatglm.py b/request_llms/bridge_chatglm.py
new file mode 100644
index 0000000..c58495d
--- /dev/null
+++ b/request_llms/bridge_chatglm.py
@@ -0,0 +1,78 @@
+model_name = "ChatGLM"
+cmd_to_install = "`pip install -r request_llms/requirements_chatglm.txt`"
+
+
+from toolbox import get_conf, ProxyNetworkActivate
+from .local_llm_class import LocalLLMHandle, get_local_llm_predict_fns
+
+
+
+# ------------------------------------------------------------------------------------------------------------------------
+# 🔌💻 Local Model
+# ------------------------------------------------------------------------------------------------------------------------
+class GetGLM2Handle(LocalLLMHandle):
+
+ def load_model_info(self):
+ # 🏃♂️🏃♂️🏃♂️ 子进程执行
+ self.model_name = model_name
+ self.cmd_to_install = cmd_to_install
+
+ def load_model_and_tokenizer(self):
+ # 🏃♂️🏃♂️🏃♂️ 子进程执行
+ import os, glob
+ import os
+ import platform
+ from transformers import AutoModel, AutoTokenizer
+ LOCAL_MODEL_QUANT, device = get_conf('LOCAL_MODEL_QUANT', 'LOCAL_MODEL_DEVICE')
+
+ if LOCAL_MODEL_QUANT == "INT4": # INT4
+ _model_name_ = "THUDM/chatglm2-6b-int4"
+ elif LOCAL_MODEL_QUANT == "INT8": # INT8
+ _model_name_ = "THUDM/chatglm2-6b-int8"
+ else:
+ _model_name_ = "THUDM/chatglm2-6b" # FP16
+
+ with ProxyNetworkActivate('Download_LLM'):
+ chatglm_tokenizer = AutoTokenizer.from_pretrained(_model_name_, trust_remote_code=True)
+ if device=='cpu':
+ chatglm_model = AutoModel.from_pretrained(_model_name_, trust_remote_code=True).float()
+ else:
+ chatglm_model = AutoModel.from_pretrained(_model_name_, trust_remote_code=True).half().cuda()
+ chatglm_model = chatglm_model.eval()
+
+ self._model = chatglm_model
+ self._tokenizer = chatglm_tokenizer
+ return self._model, self._tokenizer
+
+ def llm_stream_generator(self, **kwargs):
+ # 🏃♂️🏃♂️🏃♂️ 子进程执行
+ def adaptor(kwargs):
+ query = kwargs['query']
+ max_length = kwargs['max_length']
+ top_p = kwargs['top_p']
+ temperature = kwargs['temperature']
+ history = kwargs['history']
+ return query, max_length, top_p, temperature, history
+
+ query, max_length, top_p, temperature, history = adaptor(kwargs)
+
+ for response, history in self._model.stream_chat(self._tokenizer,
+ query,
+ history,
+ max_length=max_length,
+ top_p=top_p,
+ temperature=temperature,
+ ):
+ yield response
+
+ def try_to_import_special_deps(self, **kwargs):
+ # import something that will raise error if the user does not install requirement_*.txt
+ # 🏃♂️🏃♂️🏃♂️ 主进程执行
+ import importlib
+ # importlib.import_module('modelscope')
+
+
+# ------------------------------------------------------------------------------------------------------------------------
+# 🔌💻 GPT-Academic Interface
+# ------------------------------------------------------------------------------------------------------------------------
+predict_no_ui_long_connection, predict = get_local_llm_predict_fns(GetGLM2Handle, model_name)
\ No newline at end of file
diff --git a/request_llms/bridge_chatglm3.py b/request_llms/bridge_chatglm3.py
new file mode 100644
index 0000000..3caa476
--- /dev/null
+++ b/request_llms/bridge_chatglm3.py
@@ -0,0 +1,77 @@
+model_name = "ChatGLM3"
+cmd_to_install = "`pip install -r request_llms/requirements_chatglm.txt`"
+
+
+from toolbox import get_conf, ProxyNetworkActivate
+from .local_llm_class import LocalLLMHandle, get_local_llm_predict_fns
+
+
+
+# ------------------------------------------------------------------------------------------------------------------------
+# 🔌💻 Local Model
+# ------------------------------------------------------------------------------------------------------------------------
+class GetGLM3Handle(LocalLLMHandle):
+
+ def load_model_info(self):
+ # 🏃♂️🏃♂️🏃♂️ 子进程执行
+ self.model_name = model_name
+ self.cmd_to_install = cmd_to_install
+
+ def load_model_and_tokenizer(self):
+ # 🏃♂️🏃♂️🏃♂️ 子进程执行
+ from transformers import AutoModel, AutoTokenizer
+ import os, glob
+ import os
+ import platform
+ LOCAL_MODEL_QUANT, device = get_conf('LOCAL_MODEL_QUANT', 'LOCAL_MODEL_DEVICE')
+
+ if LOCAL_MODEL_QUANT == "INT4": # INT4
+ _model_name_ = "THUDM/chatglm3-6b-int4"
+ elif LOCAL_MODEL_QUANT == "INT8": # INT8
+ _model_name_ = "THUDM/chatglm3-6b-int8"
+ else:
+ _model_name_ = "THUDM/chatglm3-6b" # FP16
+ with ProxyNetworkActivate('Download_LLM'):
+ chatglm_tokenizer = AutoTokenizer.from_pretrained(_model_name_, trust_remote_code=True)
+ if device=='cpu':
+ chatglm_model = AutoModel.from_pretrained(_model_name_, trust_remote_code=True, device='cpu').float()
+ else:
+ chatglm_model = AutoModel.from_pretrained(_model_name_, trust_remote_code=True, device='cuda')
+ chatglm_model = chatglm_model.eval()
+
+ self._model = chatglm_model
+ self._tokenizer = chatglm_tokenizer
+ return self._model, self._tokenizer
+
+ def llm_stream_generator(self, **kwargs):
+ # 🏃♂️🏃♂️🏃♂️ 子进程执行
+ def adaptor(kwargs):
+ query = kwargs['query']
+ max_length = kwargs['max_length']
+ top_p = kwargs['top_p']
+ temperature = kwargs['temperature']
+ history = kwargs['history']
+ return query, max_length, top_p, temperature, history
+
+ query, max_length, top_p, temperature, history = adaptor(kwargs)
+
+ for response, history in self._model.stream_chat(self._tokenizer,
+ query,
+ history,
+ max_length=max_length,
+ top_p=top_p,
+ temperature=temperature,
+ ):
+ yield response
+
+ def try_to_import_special_deps(self, **kwargs):
+ # import something that will raise error if the user does not install requirement_*.txt
+ # 🏃♂️🏃♂️🏃♂️ 主进程执行
+ import importlib
+ # importlib.import_module('modelscope')
+
+
+# ------------------------------------------------------------------------------------------------------------------------
+# 🔌💻 GPT-Academic Interface
+# ------------------------------------------------------------------------------------------------------------------------
+predict_no_ui_long_connection, predict = get_local_llm_predict_fns(GetGLM3Handle, model_name, history_format='chatglm3')
\ No newline at end of file
diff --git a/request_llm/bridge_chatglmft.py b/request_llms/bridge_chatglmft.py
similarity index 93%
rename from request_llm/bridge_chatglmft.py
rename to request_llms/bridge_chatglmft.py
index 71af942..d812bae 100644
--- a/request_llm/bridge_chatglmft.py
+++ b/request_llms/bridge_chatglmft.py
@@ -44,7 +44,7 @@ class GetGLMFTHandle(Process):
self.info = "依赖检测通过"
self.success = True
except:
- self.info = "缺少ChatGLMFT的依赖,如果要使用ChatGLMFT,除了基础的pip依赖以外,您还需要运行`pip install -r request_llm/requirements_chatglm.txt`安装ChatGLM的依赖。"
+ self.info = "缺少ChatGLMFT的依赖,如果要使用ChatGLMFT,除了基础的pip依赖以外,您还需要运行`pip install -r request_llms/requirements_chatglm.txt`安装ChatGLM的依赖。"
self.success = False
def ready(self):
@@ -59,11 +59,11 @@ class GetGLMFTHandle(Process):
if self.chatglmft_model is None:
from transformers import AutoConfig
import torch
- # conf = 'request_llm/current_ptune_model.json'
+ # conf = 'request_llms/current_ptune_model.json'
# if not os.path.exists(conf): raise RuntimeError('找不到微调模型信息')
# with open(conf, 'r', encoding='utf8') as f:
# model_args = json.loads(f.read())
- CHATGLM_PTUNING_CHECKPOINT, = get_conf('CHATGLM_PTUNING_CHECKPOINT')
+ CHATGLM_PTUNING_CHECKPOINT = get_conf('CHATGLM_PTUNING_CHECKPOINT')
assert os.path.exists(CHATGLM_PTUNING_CHECKPOINT), "找不到微调模型检查点"
conf = os.path.join(CHATGLM_PTUNING_CHECKPOINT, "config.json")
with open(conf, 'r', encoding='utf8') as f:
@@ -87,7 +87,7 @@ class GetGLMFTHandle(Process):
new_prefix_state_dict[k[len("transformer.prefix_encoder."):]] = v
model.transformer.prefix_encoder.load_state_dict(new_prefix_state_dict)
- if model_args['quantization_bit'] is not None:
+ if model_args['quantization_bit'] is not None and model_args['quantization_bit'] != 0:
print(f"Quantized to {model_args['quantization_bit']} bit")
model = model.quantize(model_args['quantization_bit'])
model = model.cuda()
@@ -140,7 +140,7 @@ glmft_handle = None
def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False):
"""
多线程方法
- 函数的说明请见 request_llm/bridge_all.py
+ 函数的说明请见 request_llms/bridge_all.py
"""
global glmft_handle
if glmft_handle is None:
@@ -171,7 +171,7 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None):
"""
单线程方法
- 函数的说明请见 request_llm/bridge_all.py
+ 函数的说明请见 request_llms/bridge_all.py
"""
chatbot.append((inputs, ""))
@@ -195,13 +195,13 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
history_feedin.append([history[2*i], history[2*i+1]] )
# 开始接收chatglmft的回复
- response = "[Local Message]: 等待ChatGLMFT响应中 ..."
+ response = "[Local Message] 等待ChatGLMFT响应中 ..."
for response in glmft_handle.stream_chat(query=inputs, history=history_feedin, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']):
chatbot[-1] = (inputs, response)
yield from update_ui(chatbot=chatbot, history=history)
# 总结输出
- if response == "[Local Message]: 等待ChatGLMFT响应中 ...":
- response = "[Local Message]: ChatGLMFT响应异常 ..."
+ if response == "[Local Message] 等待ChatGLMFT响应中 ...":
+ response = "[Local Message] ChatGLMFT响应异常 ..."
history.extend([inputs, response])
yield from update_ui(chatbot=chatbot, history=history)
diff --git a/request_llm/bridge_chatglmonnx.py b/request_llms/bridge_chatglmonnx.py
similarity index 82%
rename from request_llm/bridge_chatglmonnx.py
rename to request_llms/bridge_chatglmonnx.py
index 594bcca..4b90571 100644
--- a/request_llm/bridge_chatglmonnx.py
+++ b/request_llms/bridge_chatglmonnx.py
@@ -1,5 +1,5 @@
model_name = "ChatGLM-ONNX"
-cmd_to_install = "`pip install -r request_llm/requirements_chatglm_onnx.txt`"
+cmd_to_install = "`pip install -r request_llms/requirements_chatglm_onnx.txt`"
from transformers import AutoModel, AutoTokenizer
@@ -8,7 +8,7 @@ import threading
import importlib
from toolbox import update_ui, get_conf
from multiprocessing import Process, Pipe
-from .local_llm_class import LocalLLMHandle, get_local_llm_predict_fns, SingletonLocalLLM
+from .local_llm_class import LocalLLMHandle, get_local_llm_predict_fns
from .chatglmoonx import ChatGLMModel, chat_template
@@ -17,7 +17,6 @@ from .chatglmoonx import ChatGLMModel, chat_template
# ------------------------------------------------------------------------------------------------------------------------
# 🔌💻 Local Model
# ------------------------------------------------------------------------------------------------------------------------
-@SingletonLocalLLM
class GetONNXGLMHandle(LocalLLMHandle):
def load_model_info(self):
@@ -28,13 +27,13 @@ class GetONNXGLMHandle(LocalLLMHandle):
def load_model_and_tokenizer(self):
# 🏃♂️🏃♂️🏃♂️ 子进程执行
import os, glob
- if not len(glob.glob("./request_llm/ChatGLM-6b-onnx-u8s8/chatglm-6b-int8-onnx-merged/*.bin")) >= 7: # 该模型有七个 bin 文件
+ if not len(glob.glob("./request_llms/ChatGLM-6b-onnx-u8s8/chatglm-6b-int8-onnx-merged/*.bin")) >= 7: # 该模型有七个 bin 文件
from huggingface_hub import snapshot_download
- snapshot_download(repo_id="K024/ChatGLM-6b-onnx-u8s8", local_dir="./request_llm/ChatGLM-6b-onnx-u8s8")
+ snapshot_download(repo_id="K024/ChatGLM-6b-onnx-u8s8", local_dir="./request_llms/ChatGLM-6b-onnx-u8s8")
def create_model():
return ChatGLMModel(
- tokenizer_path = "./request_llm/ChatGLM-6b-onnx-u8s8/chatglm-6b-int8-onnx-merged/sentencepiece.model",
- onnx_model_path = "./request_llm/ChatGLM-6b-onnx-u8s8/chatglm-6b-int8-onnx-merged/chatglm-6b-int8.onnx"
+ tokenizer_path = "./request_llms/ChatGLM-6b-onnx-u8s8/chatglm-6b-int8-onnx-merged/sentencepiece.model",
+ onnx_model_path = "./request_llms/ChatGLM-6b-onnx-u8s8/chatglm-6b-int8-onnx-merged/chatglm-6b-int8.onnx"
)
self._model = create_model()
return self._model, None
diff --git a/request_llm/bridge_chatgpt.py b/request_llms/bridge_chatgpt.py
similarity index 88%
rename from request_llm/bridge_chatgpt.py
rename to request_llms/bridge_chatgpt.py
index cb96884..e55ad37 100644
--- a/request_llm/bridge_chatgpt.py
+++ b/request_llms/bridge_chatgpt.py
@@ -7,8 +7,7 @@
1. predict: 正常对话时使用,具备完备的交互功能,不可多线程
具备多线程调用能力的函数
- 2. predict_no_ui:高级实验性功能模块调用,不会实时显示在界面上,参数简单,可以多线程并行,方便实现复杂的功能逻辑
- 3. predict_no_ui_long_connection:在实验过程中发现调用predict_no_ui处理长文档时,和openai的连接容易断掉,这个函数用stream的方式解决这个问题,同样支持多线程
+ 2. predict_no_ui_long_connection:支持多线程
"""
import json
@@ -23,8 +22,8 @@ import random
# config_private.py放自己的秘密如API和代理网址
# 读取时首先看是否存在私密的config_private配置文件(不受git管控),如果有,则覆盖原config文件
from toolbox import get_conf, update_ui, is_any_api_key, select_api_key, what_keys, clip_history, trimmed_format_exc, is_the_upload_folder
-proxies, TIMEOUT_SECONDS, MAX_RETRY, API_ORG = \
- get_conf('proxies', 'TIMEOUT_SECONDS', 'MAX_RETRY', 'API_ORG')
+proxies, TIMEOUT_SECONDS, MAX_RETRY, API_ORG, AZURE_CFG_ARRAY = \
+ get_conf('proxies', 'TIMEOUT_SECONDS', 'MAX_RETRY', 'API_ORG', 'AZURE_CFG_ARRAY')
timeout_bot_msg = '[Local Message] Request timeout. Network error. Please check proxy settings in config.py.' + \
'网络错误,检查代理服务器是否可用,以及代理设置的格式是否正确,格式须是[协议]://[地址]:[端口],缺一不可。'
@@ -45,16 +44,28 @@ def decode_chunk(chunk):
chunk_decoded = chunk.decode()
chunkjson = None
has_choices = False
+ choice_valid = False
has_content = False
has_role = False
try:
chunkjson = json.loads(chunk_decoded[6:])
has_choices = 'choices' in chunkjson
- if has_choices: has_content = "content" in chunkjson['choices'][0]["delta"]
- if has_choices: has_role = "role" in chunkjson['choices'][0]["delta"]
+ if has_choices: choice_valid = (len(chunkjson['choices']) > 0)
+ if has_choices and choice_valid: has_content = "content" in chunkjson['choices'][0]["delta"]
+ if has_choices and choice_valid: has_role = "role" in chunkjson['choices'][0]["delta"]
except:
pass
- return chunk_decoded, chunkjson, has_choices, has_content, has_role
+ return chunk_decoded, chunkjson, has_choices, choice_valid, has_content, has_role
+
+from functools import lru_cache
+@lru_cache(maxsize=32)
+def verify_endpoint(endpoint):
+ """
+ 检查endpoint是否可用
+ """
+ if "你亲手写的api名称" in endpoint:
+ raise ValueError("Endpoint不正确, 请检查AZURE_ENDPOINT的配置! 当前的Endpoint为:" + endpoint)
+ return endpoint
def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=None, console_slience=False):
"""
@@ -77,7 +88,7 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
try:
# make a POST request to the API endpoint, stream=False
from .bridge_all import model_info
- endpoint = model_info[llm_kwargs['llm_model']]['endpoint']
+ endpoint = verify_endpoint(model_info[llm_kwargs['llm_model']]['endpoint'])
response = requests.post(endpoint, headers=headers, proxies=proxies,
json=payload, stream=True, timeout=TIMEOUT_SECONDS); break
except requests.exceptions.ReadTimeout as e:
@@ -86,7 +97,7 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
if retry > MAX_RETRY: raise TimeoutError
if MAX_RETRY!=0: print(f'请求超时,正在重试 ({retry}/{MAX_RETRY}) ……')
- stream_response = response.iter_lines()
+ stream_response = response.iter_lines()
result = ''
json_data = None
while True:
@@ -169,14 +180,22 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
yield from update_ui(chatbot=chatbot, history=history, msg="api-key不满足要求") # 刷新界面
return
+ # 检查endpoint是否合法
+ try:
+ from .bridge_all import model_info
+ endpoint = verify_endpoint(model_info[llm_kwargs['llm_model']]['endpoint'])
+ except:
+ tb_str = '```\n' + trimmed_format_exc() + '```'
+ chatbot[-1] = (inputs, tb_str)
+ yield from update_ui(chatbot=chatbot, history=history, msg="Endpoint不满足要求") # 刷新界面
+ return
+
history.append(inputs); history.append("")
retry = 0
while True:
try:
# make a POST request to the API endpoint, stream=True
- from .bridge_all import model_info
- endpoint = model_info[llm_kwargs['llm_model']]['endpoint']
response = requests.post(endpoint, headers=headers, proxies=proxies,
json=payload, stream=True, timeout=TIMEOUT_SECONDS);break
except:
@@ -208,7 +227,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
return
# 提前读取一些信息 (用于判断异常)
- chunk_decoded, chunkjson, has_choices, has_content, has_role = decode_chunk(chunk)
+ chunk_decoded, chunkjson, has_choices, choice_valid, has_content, has_role = decode_chunk(chunk)
if is_head_of_the_stream and (r'"object":"error"' not in chunk_decoded) and (r"content" not in chunk_decoded):
# 数据流的第一帧不携带content
@@ -216,6 +235,9 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
if chunk:
try:
+ if has_choices and not choice_valid:
+ # 一些垃圾第三方接口的出现这样的错误
+ continue
# 前者是API2D的结束条件,后者是OPENAI的结束条件
if ('data: [DONE]' in chunk_decoded) or (len(chunkjson['choices'][0]["delta"]) == 0):
# 判定为数据流的结束,gpt_replying_buffer也写完了
@@ -265,6 +287,8 @@ def handle_error(inputs, llm_kwargs, chatbot, history, chunk_decoded, error_msg)
chatbot[-1] = (chatbot[-1][0], "[Local Message] Your account is not active. OpenAI以账户失效为由, 拒绝服务." + openai_website)
elif "associated with a deactivated account" in error_msg:
chatbot[-1] = (chatbot[-1][0], "[Local Message] You are associated with a deactivated account. OpenAI以账户失效为由, 拒绝服务." + openai_website)
+ elif "API key has been deactivated" in error_msg:
+ chatbot[-1] = (chatbot[-1][0], "[Local Message] API key has been deactivated. OpenAI以账户失效为由, 拒绝服务." + openai_website)
elif "bad forward key" in error_msg:
chatbot[-1] = (chatbot[-1][0], "[Local Message] Bad forward key. API2D账户额度不足.")
elif "Not enough point" in error_msg:
@@ -289,7 +313,11 @@ def generate_payload(inputs, llm_kwargs, history, system_prompt, stream):
"Authorization": f"Bearer {api_key}"
}
if API_ORG.startswith('org-'): headers.update({"OpenAI-Organization": API_ORG})
- if llm_kwargs['llm_model'].startswith('azure-'): headers.update({"api-key": api_key})
+ if llm_kwargs['llm_model'].startswith('azure-'):
+ headers.update({"api-key": api_key})
+ if llm_kwargs['llm_model'] in AZURE_CFG_ARRAY.keys():
+ azure_api_key_unshared = AZURE_CFG_ARRAY[llm_kwargs['llm_model']]["AZURE_API_KEY"]
+ headers.update({"api-key": azure_api_key_unshared})
conversation_cnt = len(history) // 2
@@ -322,6 +350,7 @@ def generate_payload(inputs, llm_kwargs, history, system_prompt, stream):
model = random.choice([
"gpt-3.5-turbo",
"gpt-3.5-turbo-16k",
+ "gpt-3.5-turbo-1106",
"gpt-3.5-turbo-0613",
"gpt-3.5-turbo-16k-0613",
"gpt-3.5-turbo-0301",
diff --git a/request_llms/bridge_chatgpt_vision.py b/request_llms/bridge_chatgpt_vision.py
new file mode 100644
index 0000000..e84bc0b
--- /dev/null
+++ b/request_llms/bridge_chatgpt_vision.py
@@ -0,0 +1,329 @@
+"""
+ 该文件中主要包含三个函数
+
+ 不具备多线程能力的函数:
+ 1. predict: 正常对话时使用,具备完备的交互功能,不可多线程
+
+ 具备多线程调用能力的函数
+ 2. predict_no_ui_long_connection:支持多线程
+"""
+
+import json
+import time
+import logging
+import requests
+import base64
+import os
+import glob
+
+from toolbox import get_conf, update_ui, is_any_api_key, select_api_key, what_keys, clip_history, trimmed_format_exc, is_the_upload_folder, update_ui_lastest_msg, get_max_token
+proxies, TIMEOUT_SECONDS, MAX_RETRY, API_ORG, AZURE_CFG_ARRAY = \
+ get_conf('proxies', 'TIMEOUT_SECONDS', 'MAX_RETRY', 'API_ORG', 'AZURE_CFG_ARRAY')
+
+timeout_bot_msg = '[Local Message] Request timeout. Network error. Please check proxy settings in config.py.' + \
+ '网络错误,检查代理服务器是否可用,以及代理设置的格式是否正确,格式须是[协议]://[地址]:[端口],缺一不可。'
+
+def have_any_recent_upload_image_files(chatbot):
+ _5min = 5 * 60
+ if chatbot is None: return False, None # chatbot is None
+ most_recent_uploaded = chatbot._cookies.get("most_recent_uploaded", None)
+ if not most_recent_uploaded: return False, None # most_recent_uploaded is None
+ if time.time() - most_recent_uploaded["time"] < _5min:
+ most_recent_uploaded = chatbot._cookies.get("most_recent_uploaded", None)
+ path = most_recent_uploaded['path']
+ file_manifest = [f for f in glob.glob(f'{path}/**/*.jpg', recursive=True)]
+ file_manifest += [f for f in glob.glob(f'{path}/**/*.jpeg', recursive=True)]
+ file_manifest += [f for f in glob.glob(f'{path}/**/*.png', recursive=True)]
+ if len(file_manifest) == 0: return False, None
+ return True, file_manifest # most_recent_uploaded is new
+ else:
+ return False, None # most_recent_uploaded is too old
+
+def report_invalid_key(key):
+ if get_conf("BLOCK_INVALID_APIKEY"):
+ # 实验性功能,自动检测并屏蔽失效的KEY,请勿使用
+ from request_llms.key_manager import ApiKeyManager
+ api_key = ApiKeyManager().add_key_to_blacklist(key)
+
+def get_full_error(chunk, stream_response):
+ """
+ 获取完整的从Openai返回的报错
+ """
+ while True:
+ try:
+ chunk += next(stream_response)
+ except:
+ break
+ return chunk
+
+def decode_chunk(chunk):
+ # 提前读取一些信息 (用于判断异常)
+ chunk_decoded = chunk.decode()
+ chunkjson = None
+ has_choices = False
+ choice_valid = False
+ has_content = False
+ has_role = False
+ try:
+ chunkjson = json.loads(chunk_decoded[6:])
+ has_choices = 'choices' in chunkjson
+ if has_choices: choice_valid = (len(chunkjson['choices']) > 0)
+ if has_choices and choice_valid: has_content = "content" in chunkjson['choices'][0]["delta"]
+ if has_choices and choice_valid: has_role = "role" in chunkjson['choices'][0]["delta"]
+ except:
+ pass
+ return chunk_decoded, chunkjson, has_choices, choice_valid, has_content, has_role
+
+from functools import lru_cache
+@lru_cache(maxsize=32)
+def verify_endpoint(endpoint):
+ """
+ 检查endpoint是否可用
+ """
+ return endpoint
+
+def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=None, console_slience=False):
+ raise NotImplementedError
+
+
+def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None):
+
+ have_recent_file, image_paths = have_any_recent_upload_image_files(chatbot)
+
+ if is_any_api_key(inputs):
+ chatbot._cookies['api_key'] = inputs
+ chatbot.append(("输入已识别为openai的api_key", what_keys(inputs)))
+ yield from update_ui(chatbot=chatbot, history=history, msg="api_key已导入") # 刷新界面
+ return
+ elif not is_any_api_key(chatbot._cookies['api_key']):
+ chatbot.append((inputs, "缺少api_key。\n\n1. 临时解决方案:直接在输入区键入api_key,然后回车提交。\n\n2. 长效解决方案:在config.py中配置。"))
+ yield from update_ui(chatbot=chatbot, history=history, msg="缺少api_key") # 刷新界面
+ return
+ if not have_recent_file:
+ chatbot.append((inputs, "没有检测到任何近期上传的图像文件,请上传jpg格式的图片,此外,请注意拓展名需要小写"))
+ yield from update_ui(chatbot=chatbot, history=history, msg="等待图片") # 刷新界面
+ return
+ if os.path.exists(inputs):
+ chatbot.append((inputs, "已经接收到您上传的文件,您不需要再重复强调该文件的路径了,请直接输入您的问题。"))
+ yield from update_ui(chatbot=chatbot, history=history, msg="等待指令") # 刷新界面
+ return
+
+
+ user_input = inputs
+ if additional_fn is not None:
+ from core_functional import handle_core_functionality
+ inputs, history = handle_core_functionality(additional_fn, inputs, history, chatbot)
+
+ raw_input = inputs
+ logging.info(f'[raw_input] {raw_input}')
+ def make_media_input(inputs, image_paths):
+ for image_path in image_paths:
+            inputs = inputs + f'<br/><br/><div align="center"><img src="file={os.path.abspath(image_path)}"></div>'
+ return inputs
+ chatbot.append((make_media_input(inputs, image_paths), ""))
+ yield from update_ui(chatbot=chatbot, history=history, msg="等待响应") # 刷新界面
+
+ # check mis-behavior
+ if is_the_upload_folder(user_input):
+ chatbot[-1] = (inputs, f"[Local Message] 检测到操作错误!当您上传文档之后,需点击“**函数插件区**”按钮进行处理,请勿点击“提交”按钮或者“基础功能区”按钮。")
+ yield from update_ui(chatbot=chatbot, history=history, msg="正常") # 刷新界面
+ time.sleep(2)
+
+ try:
+ headers, payload, api_key = generate_payload(inputs, llm_kwargs, history, system_prompt, image_paths)
+ except RuntimeError as e:
+ chatbot[-1] = (inputs, f"您提供的api-key不满足要求,不包含任何可用于{llm_kwargs['llm_model']}的api-key。您可能选择了错误的模型或请求源。")
+ yield from update_ui(chatbot=chatbot, history=history, msg="api-key不满足要求") # 刷新界面
+ return
+
+ # 检查endpoint是否合法
+ try:
+ from .bridge_all import model_info
+ endpoint = verify_endpoint(model_info[llm_kwargs['llm_model']]['endpoint'])
+ except:
+ tb_str = '```\n' + trimmed_format_exc() + '```'
+ chatbot[-1] = (inputs, tb_str)
+ yield from update_ui(chatbot=chatbot, history=history, msg="Endpoint不满足要求") # 刷新界面
+ return
+
+ history.append(make_media_input(inputs, image_paths))
+ history.append("")
+
+ retry = 0
+ while True:
+ try:
+ # make a POST request to the API endpoint, stream=True
+ response = requests.post(endpoint, headers=headers, proxies=proxies,
+ json=payload, stream=True, timeout=TIMEOUT_SECONDS);break
+ except:
+ retry += 1
+ chatbot[-1] = ((chatbot[-1][0], timeout_bot_msg))
+ retry_msg = f",正在重试 ({retry}/{MAX_RETRY}) ……" if MAX_RETRY > 0 else ""
+ yield from update_ui(chatbot=chatbot, history=history, msg="请求超时"+retry_msg) # 刷新界面
+ if retry > MAX_RETRY: raise TimeoutError
+
+ gpt_replying_buffer = ""
+
+ is_head_of_the_stream = True
+ if stream:
+ stream_response = response.iter_lines()
+ while True:
+ try:
+ chunk = next(stream_response)
+ except StopIteration:
+ # 非OpenAI官方接口的出现这样的报错,OpenAI和API2D不会走这里
+ chunk_decoded = chunk.decode()
+ error_msg = chunk_decoded
+ # 首先排除一个one-api没有done数据包的第三方Bug情形
+ if len(gpt_replying_buffer.strip()) > 0 and len(error_msg) == 0:
+ yield from update_ui(chatbot=chatbot, history=history, msg="检测到有缺陷的非OpenAI官方接口,建议选择更稳定的接口。")
+ break
+ # 其他情况,直接返回报错
+ chatbot, history = handle_error(inputs, llm_kwargs, chatbot, history, chunk_decoded, error_msg, api_key)
+ yield from update_ui(chatbot=chatbot, history=history, msg="非OpenAI官方接口返回了错误:" + chunk.decode()) # 刷新界面
+ return
+
+ # 提前读取一些信息 (用于判断异常)
+ chunk_decoded, chunkjson, has_choices, choice_valid, has_content, has_role = decode_chunk(chunk)
+
+ if is_head_of_the_stream and (r'"object":"error"' not in chunk_decoded) and (r"content" not in chunk_decoded):
+ # 数据流的第一帧不携带content
+ is_head_of_the_stream = False; continue
+
+ if chunk:
+ try:
+ if has_choices and not choice_valid:
+ # 一些垃圾第三方接口的出现这样的错误
+ continue
+ # 前者是API2D的结束条件,后者是OPENAI的结束条件
+ if ('data: [DONE]' in chunk_decoded) or (len(chunkjson['choices'][0]["delta"]) == 0):
+ # 判定为数据流的结束,gpt_replying_buffer也写完了
+ lastmsg = chatbot[-1][-1] + f"\n\n\n\n「{llm_kwargs['llm_model']}调用结束,该模型不具备上下文对话能力,如需追问,请及时切换模型。」"
+ yield from update_ui_lastest_msg(lastmsg, chatbot, history, delay=1)
+ logging.info(f'[response] {gpt_replying_buffer}')
+ break
+ # 处理数据流的主体
+ status_text = f"finish_reason: {chunkjson['choices'][0].get('finish_reason', 'null')}"
+ # 如果这里抛出异常,一般是文本过长,详情见get_full_error的输出
+ if has_content:
+ # 正常情况
+ gpt_replying_buffer = gpt_replying_buffer + chunkjson['choices'][0]["delta"]["content"]
+ elif has_role:
+ # 一些第三方接口的出现这样的错误,兼容一下吧
+ continue
+ else:
+ # 一些垃圾第三方接口的出现这样的错误
+ gpt_replying_buffer = gpt_replying_buffer + chunkjson['choices'][0]["delta"]["content"]
+
+ history[-1] = gpt_replying_buffer
+ chatbot[-1] = (history[-2], history[-1])
+ yield from update_ui(chatbot=chatbot, history=history, msg=status_text) # 刷新界面
+ except Exception as e:
+ yield from update_ui(chatbot=chatbot, history=history, msg="Json解析不合常规") # 刷新界面
+ chunk = get_full_error(chunk, stream_response)
+ chunk_decoded = chunk.decode()
+ error_msg = chunk_decoded
+ chatbot, history = handle_error(inputs, llm_kwargs, chatbot, history, chunk_decoded, error_msg, api_key)
+ yield from update_ui(chatbot=chatbot, history=history, msg="Json异常" + error_msg) # 刷新界面
+ print(error_msg)
+ return
+
+def handle_error(inputs, llm_kwargs, chatbot, history, chunk_decoded, error_msg, api_key=""):
+ from .bridge_all import model_info
+ openai_website = ' 请登录OpenAI查看详情 https://platform.openai.com/signup'
+ if "reduce the length" in error_msg:
+ if len(history) >= 2: history[-1] = ""; history[-2] = "" # 清除当前溢出的输入:history[-2] 是本次输入, history[-1] 是本次输出
+ history = clip_history(inputs=inputs, history=history, tokenizer=model_info[llm_kwargs['llm_model']]['tokenizer'],
+ max_token_limit=(model_info[llm_kwargs['llm_model']]['max_token'])) # history至少释放二分之一
+        chatbot[-1] = (chatbot[-1][0], "[Local Message] Reduce the length. 本次输入过长, 或历史数据过长. 历史缓存数据已部分释放, 您可以再次尝试. (若再次失败则更可能是因为输入过长.)")
+ elif "does not exist" in error_msg:
+ chatbot[-1] = (chatbot[-1][0], f"[Local Message] Model {llm_kwargs['llm_model']} does not exist. 模型不存在, 或者您没有获得体验资格.")
+ elif "Incorrect API key" in error_msg:
+ chatbot[-1] = (chatbot[-1][0], "[Local Message] Incorrect API key. OpenAI以提供了不正确的API_KEY为由, 拒绝服务. " + openai_website); report_invalid_key(api_key)
+ elif "exceeded your current quota" in error_msg:
+ chatbot[-1] = (chatbot[-1][0], "[Local Message] You exceeded your current quota. OpenAI以账户额度不足为由, 拒绝服务." + openai_website); report_invalid_key(api_key)
+ elif "account is not active" in error_msg:
+ chatbot[-1] = (chatbot[-1][0], "[Local Message] Your account is not active. OpenAI以账户失效为由, 拒绝服务." + openai_website); report_invalid_key(api_key)
+ elif "associated with a deactivated account" in error_msg:
+ chatbot[-1] = (chatbot[-1][0], "[Local Message] You are associated with a deactivated account. OpenAI以账户失效为由, 拒绝服务." + openai_website); report_invalid_key(api_key)
+ elif "API key has been deactivated" in error_msg:
+ chatbot[-1] = (chatbot[-1][0], "[Local Message] API key has been deactivated. OpenAI以账户失效为由, 拒绝服务." + openai_website); report_invalid_key(api_key)
+ elif "bad forward key" in error_msg:
+ chatbot[-1] = (chatbot[-1][0], "[Local Message] Bad forward key. API2D账户额度不足.")
+ elif "Not enough point" in error_msg:
+ chatbot[-1] = (chatbot[-1][0], "[Local Message] Not enough point. API2D账户点数不足.")
+ else:
+ from toolbox import regular_txt_to_markdown
+ tb_str = '```\n' + trimmed_format_exc() + '```'
+ chatbot[-1] = (chatbot[-1][0], f"[Local Message] 异常 \n\n{tb_str} \n\n{regular_txt_to_markdown(chunk_decoded)}")
+ return chatbot, history
+
+# Function to encode the image
+def encode_image(image_path):
+ with open(image_path, "rb") as image_file:
+ return base64.b64encode(image_file.read()).decode('utf-8')
+
+def generate_payload(inputs, llm_kwargs, history, system_prompt, image_paths):
+ """
+ 整合所有信息,选择LLM模型,生成http请求,为发送请求做准备
+ """
+ if not is_any_api_key(llm_kwargs['api_key']):
+ raise AssertionError("你提供了错误的API_KEY。\n\n1. 临时解决方案:直接在输入区键入api_key,然后回车提交。\n\n2. 长效解决方案:在config.py中配置。")
+
+ api_key = select_api_key(llm_kwargs['api_key'], llm_kwargs['llm_model'])
+
+ headers = {
+ "Content-Type": "application/json",
+ "Authorization": f"Bearer {api_key}"
+ }
+ if API_ORG.startswith('org-'): headers.update({"OpenAI-Organization": API_ORG})
+ if llm_kwargs['llm_model'].startswith('azure-'):
+ headers.update({"api-key": api_key})
+ if llm_kwargs['llm_model'] in AZURE_CFG_ARRAY.keys():
+ azure_api_key_unshared = AZURE_CFG_ARRAY[llm_kwargs['llm_model']]["AZURE_API_KEY"]
+ headers.update({"api-key": azure_api_key_unshared})
+
+ base64_images = []
+ for image_path in image_paths:
+ base64_images.append(encode_image(image_path))
+
+ messages = []
+ what_i_ask_now = {}
+ what_i_ask_now["role"] = "user"
+ what_i_ask_now["content"] = []
+ what_i_ask_now["content"].append({
+ "type": "text",
+ "text": inputs
+ })
+
+ for image_path, base64_image in zip(image_paths, base64_images):
+ what_i_ask_now["content"].append({
+ "type": "image_url",
+ "image_url": {
+ "url": f"data:image/jpeg;base64,{base64_image}"
+ }
+ })
+
+ messages.append(what_i_ask_now)
+ model = llm_kwargs['llm_model']
+ if llm_kwargs['llm_model'].startswith('api2d-'):
+ model = llm_kwargs['llm_model'][len('api2d-'):]
+
+ payload = {
+ "model": model,
+ "messages": messages,
+ "temperature": llm_kwargs['temperature'], # 1.0,
+ "top_p": llm_kwargs['top_p'], # 1.0,
+ "n": 1,
+ "stream": True,
+ "max_tokens": get_max_token(llm_kwargs),
+ "presence_penalty": 0,
+ "frequency_penalty": 0,
+ }
+ try:
+ print(f" {llm_kwargs['llm_model']} : {inputs[:100]} ..........")
+ except:
+ print('输入中可能存在乱码。')
+ return headers, payload, api_key
+
+
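# ----------------------------------------------------------------------------
# Illustrative sketch (not part of the patch above): the multimodal payload
# that generate_payload() assembles — one text part plus one base64 data-URL
# image per user message. The model name and the image bytes below are
# placeholders chosen for this sketch, not values taken from the project.
import base64, json

fake_image_bytes = b"\x89PNG\r\n\x1a\n..."            # stand-in, not a real image
b64 = base64.b64encode(fake_image_bytes).decode('utf-8')

payload_sketch = {
    "model": "gpt-4-vision-preview",                  # assumed model name
    "messages": [{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this picture."},
            {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
        ],
    }],
    "temperature": 1.0, "top_p": 1.0, "n": 1, "stream": True,  # sampling fields as in generate_payload
}
print(json.dumps(payload_sketch)[:120])               # the bridge POSTs this dict with stream=True
# ----------------------------------------------------------------------------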
diff --git a/request_llm/bridge_chatgpt_website.py b/request_llms/bridge_chatgpt_website.py
similarity index 97%
rename from request_llm/bridge_chatgpt_website.py
rename to request_llms/bridge_chatgpt_website.py
index 7f3147b..f2f0709 100644
--- a/request_llm/bridge_chatgpt_website.py
+++ b/request_llms/bridge_chatgpt_website.py
@@ -7,8 +7,7 @@
1. predict: 正常对话时使用,具备完备的交互功能,不可多线程
具备多线程调用能力的函数
- 2. predict_no_ui:高级实验性功能模块调用,不会实时显示在界面上,参数简单,可以多线程并行,方便实现复杂的功能逻辑
- 3. predict_no_ui_long_connection:在实验过程中发现调用predict_no_ui处理长文档时,和openai的连接容易断掉,这个函数用stream的方式解决这个问题,同样支持多线程
+ 2. predict_no_ui_long_connection:支持多线程
"""
import json
diff --git a/request_llm/bridge_claude.py b/request_llms/bridge_claude.py
similarity index 97%
rename from request_llm/bridge_claude.py
rename to request_llms/bridge_claude.py
index 6084b1f..42b7505 100644
--- a/request_llm/bridge_claude.py
+++ b/request_llms/bridge_claude.py
@@ -7,7 +7,7 @@
1. predict: 正常对话时使用,具备完备的交互功能,不可多线程
具备多线程调用能力的函数
- 2. predict_no_ui_long_connection:在实验过程中发现调用predict_no_ui处理长文档时,和openai的连接容易断掉,这个函数用stream的方式解决这个问题,同样支持多线程
+ 2. predict_no_ui_long_connection:支持多线程
"""
import os
diff --git a/request_llm/bridge_internlm.py b/request_llms/bridge_internlm.py
similarity index 91%
rename from request_llm/bridge_internlm.py
rename to request_llms/bridge_internlm.py
index 0ec65b6..b2be36a 100644
--- a/request_llm/bridge_internlm.py
+++ b/request_llms/bridge_internlm.py
@@ -1,13 +1,13 @@
model_name = "InternLM"
-cmd_to_install = "`pip install -r request_llm/requirements_chatglm.txt`"
+cmd_to_install = "`pip install -r request_llms/requirements_chatglm.txt`"
from transformers import AutoModel, AutoTokenizer
import time
import threading
import importlib
-from toolbox import update_ui, get_conf
+from toolbox import update_ui, get_conf, ProxyNetworkActivate
from multiprocessing import Process, Pipe
-from .local_llm_class import LocalLLMHandle, get_local_llm_predict_fns, SingletonLocalLLM
+from .local_llm_class import LocalLLMHandle, get_local_llm_predict_fns
# ------------------------------------------------------------------------------------------------------------------------
@@ -34,7 +34,6 @@ def combine_history(prompt, hist):
# ------------------------------------------------------------------------------------------------------------------------
# 🔌💻 Local Model
# ------------------------------------------------------------------------------------------------------------------------
-@SingletonLocalLLM
class GetInternlmHandle(LocalLLMHandle):
def load_model_info(self):
@@ -52,15 +51,16 @@ class GetInternlmHandle(LocalLLMHandle):
# 🏃♂️🏃♂️🏃♂️ 子进程执行
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
- device, = get_conf('LOCAL_MODEL_DEVICE')
- if self._model is None:
- tokenizer = AutoTokenizer.from_pretrained("internlm/internlm-chat-7b", trust_remote_code=True)
- if device=='cpu':
- model = AutoModelForCausalLM.from_pretrained("internlm/internlm-chat-7b", trust_remote_code=True).to(torch.bfloat16)
- else:
- model = AutoModelForCausalLM.from_pretrained("internlm/internlm-chat-7b", trust_remote_code=True).to(torch.bfloat16).cuda()
+ device = get_conf('LOCAL_MODEL_DEVICE')
+ with ProxyNetworkActivate('Download_LLM'):
+ if self._model is None:
+ tokenizer = AutoTokenizer.from_pretrained("internlm/internlm-chat-7b", trust_remote_code=True)
+ if device=='cpu':
+ model = AutoModelForCausalLM.from_pretrained("internlm/internlm-chat-7b", trust_remote_code=True).to(torch.bfloat16)
+ else:
+ model = AutoModelForCausalLM.from_pretrained("internlm/internlm-chat-7b", trust_remote_code=True).to(torch.bfloat16).cuda()
- model = model.eval()
+ model = model.eval()
return model, tokenizer
def llm_stream_generator(self, **kwargs):
@@ -94,8 +94,9 @@ class GetInternlmHandle(LocalLLMHandle):
inputs = tokenizer([prompt], padding=True, return_tensors="pt")
input_length = len(inputs["input_ids"][0])
+ device = get_conf('LOCAL_MODEL_DEVICE')
for k, v in inputs.items():
- inputs[k] = v.cuda()
+ inputs[k] = v.to(device)
input_ids = inputs["input_ids"]
batch_size, input_ids_seq_length = input_ids.shape[0], input_ids.shape[-1]
if generation_config is None:
diff --git a/request_llm/bridge_jittorllms_llama.py b/request_llms/bridge_jittorllms_llama.py
similarity index 90%
rename from request_llm/bridge_jittorllms_llama.py
rename to request_llms/bridge_jittorllms_llama.py
index d485357..2d3005e 100644
--- a/request_llm/bridge_jittorllms_llama.py
+++ b/request_llms/bridge_jittorllms_llama.py
@@ -28,8 +28,8 @@ class GetGLMHandle(Process):
self.success = True
except:
from toolbox import trimmed_format_exc
- self.info = r"缺少jittorllms的依赖,如果要使用jittorllms,除了基础的pip依赖以外,您还需要运行`pip install -r request_llm/requirements_jittorllms.txt -i https://pypi.jittor.org/simple -I`"+\
- r"和`git clone https://gitlink.org.cn/jittor/JittorLLMs.git --depth 1 request_llm/jittorllms`两个指令来安装jittorllms的依赖(在项目根目录运行这两个指令)。" +\
+ self.info = r"缺少jittorllms的依赖,如果要使用jittorllms,除了基础的pip依赖以外,您还需要运行`pip install -r request_llms/requirements_jittorllms.txt -i https://pypi.jittor.org/simple -I`"+\
+ r"和`git clone https://gitlink.org.cn/jittor/JittorLLMs.git --depth 1 request_llms/jittorllms`两个指令来安装jittorllms的依赖(在项目根目录运行这两个指令)。" +\
r"警告:安装jittorllms依赖后将完全破坏现有的pytorch环境,建议使用docker环境!" + trimmed_format_exc()
self.success = False
@@ -45,15 +45,15 @@ class GetGLMHandle(Process):
env = os.environ.get("PATH", "")
os.environ["PATH"] = env.replace('/cuda/bin', '/x/bin')
root_dir_assume = os.path.abspath(os.path.dirname(__file__) + '/..')
- os.chdir(root_dir_assume + '/request_llm/jittorllms')
- sys.path.append(root_dir_assume + '/request_llm/jittorllms')
+ os.chdir(root_dir_assume + '/request_llms/jittorllms')
+ sys.path.append(root_dir_assume + '/request_llms/jittorllms')
validate_path() # validate path so you can run from base directory
def load_model():
import types
try:
if self.jittorllms_model is None:
- device, = get_conf('LOCAL_MODEL_DEVICE')
+ device = get_conf('LOCAL_MODEL_DEVICE')
from .jittorllms.models import get_model
# availabel_models = ["chatglm", "pangualpha", "llama", "chatrwkv"]
args_dict = {'model': 'llama'}
@@ -109,7 +109,7 @@ llama_glm_handle = None
def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False):
"""
多线程方法
- 函数的说明请见 request_llm/bridge_all.py
+ 函数的说明请见 request_llms/bridge_all.py
"""
global llama_glm_handle
if llama_glm_handle is None:
@@ -140,7 +140,7 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None):
"""
单线程方法
- 函数的说明请见 request_llm/bridge_all.py
+ 函数的说明请见 request_llms/bridge_all.py
"""
chatbot.append((inputs, ""))
@@ -163,13 +163,13 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
history_feedin.append([history[2*i], history[2*i+1]] )
# 开始接收jittorllms的回复
- response = "[Local Message]: 等待jittorllms响应中 ..."
+ response = "[Local Message] 等待jittorllms响应中 ..."
for response in llama_glm_handle.stream_chat(query=inputs, history=history_feedin, system_prompt=system_prompt, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']):
chatbot[-1] = (inputs, response)
yield from update_ui(chatbot=chatbot, history=history)
# 总结输出
- if response == "[Local Message]: 等待jittorllms响应中 ...":
- response = "[Local Message]: jittorllms响应异常 ..."
+ if response == "[Local Message] 等待jittorllms响应中 ...":
+ response = "[Local Message] jittorllms响应异常 ..."
history.extend([inputs, response])
yield from update_ui(chatbot=chatbot, history=history)
diff --git a/request_llm/bridge_jittorllms_pangualpha.py b/request_llms/bridge_jittorllms_pangualpha.py
similarity index 90%
rename from request_llm/bridge_jittorllms_pangualpha.py
rename to request_llms/bridge_jittorllms_pangualpha.py
index 20a3021..2640176 100644
--- a/request_llm/bridge_jittorllms_pangualpha.py
+++ b/request_llms/bridge_jittorllms_pangualpha.py
@@ -28,8 +28,8 @@ class GetGLMHandle(Process):
self.success = True
except:
from toolbox import trimmed_format_exc
- self.info = r"缺少jittorllms的依赖,如果要使用jittorllms,除了基础的pip依赖以外,您还需要运行`pip install -r request_llm/requirements_jittorllms.txt -i https://pypi.jittor.org/simple -I`"+\
- r"和`git clone https://gitlink.org.cn/jittor/JittorLLMs.git --depth 1 request_llm/jittorllms`两个指令来安装jittorllms的依赖(在项目根目录运行这两个指令)。" +\
+ self.info = r"缺少jittorllms的依赖,如果要使用jittorllms,除了基础的pip依赖以外,您还需要运行`pip install -r request_llms/requirements_jittorllms.txt -i https://pypi.jittor.org/simple -I`"+\
+ r"和`git clone https://gitlink.org.cn/jittor/JittorLLMs.git --depth 1 request_llms/jittorllms`两个指令来安装jittorllms的依赖(在项目根目录运行这两个指令)。" +\
r"警告:安装jittorllms依赖后将完全破坏现有的pytorch环境,建议使用docker环境!" + trimmed_format_exc()
self.success = False
@@ -45,15 +45,15 @@ class GetGLMHandle(Process):
env = os.environ.get("PATH", "")
os.environ["PATH"] = env.replace('/cuda/bin', '/x/bin')
root_dir_assume = os.path.abspath(os.path.dirname(__file__) + '/..')
- os.chdir(root_dir_assume + '/request_llm/jittorllms')
- sys.path.append(root_dir_assume + '/request_llm/jittorllms')
+ os.chdir(root_dir_assume + '/request_llms/jittorllms')
+ sys.path.append(root_dir_assume + '/request_llms/jittorllms')
validate_path() # validate path so you can run from base directory
def load_model():
import types
try:
if self.jittorllms_model is None:
- device, = get_conf('LOCAL_MODEL_DEVICE')
+ device = get_conf('LOCAL_MODEL_DEVICE')
from .jittorllms.models import get_model
# availabel_models = ["chatglm", "pangualpha", "llama", "chatrwkv"]
args_dict = {'model': 'pangualpha'}
@@ -109,7 +109,7 @@ pangu_glm_handle = None
def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False):
"""
多线程方法
- 函数的说明请见 request_llm/bridge_all.py
+ 函数的说明请见 request_llms/bridge_all.py
"""
global pangu_glm_handle
if pangu_glm_handle is None:
@@ -140,7 +140,7 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None):
"""
单线程方法
- 函数的说明请见 request_llm/bridge_all.py
+ 函数的说明请见 request_llms/bridge_all.py
"""
chatbot.append((inputs, ""))
@@ -163,13 +163,13 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
history_feedin.append([history[2*i], history[2*i+1]] )
# 开始接收jittorllms的回复
- response = "[Local Message]: 等待jittorllms响应中 ..."
+ response = "[Local Message] 等待jittorllms响应中 ..."
for response in pangu_glm_handle.stream_chat(query=inputs, history=history_feedin, system_prompt=system_prompt, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']):
chatbot[-1] = (inputs, response)
yield from update_ui(chatbot=chatbot, history=history)
# 总结输出
- if response == "[Local Message]: 等待jittorllms响应中 ...":
- response = "[Local Message]: jittorllms响应异常 ..."
+ if response == "[Local Message] 等待jittorllms响应中 ...":
+ response = "[Local Message] jittorllms响应异常 ..."
history.extend([inputs, response])
yield from update_ui(chatbot=chatbot, history=history)
diff --git a/request_llm/bridge_jittorllms_rwkv.py b/request_llms/bridge_jittorllms_rwkv.py
similarity index 90%
rename from request_llm/bridge_jittorllms_rwkv.py
rename to request_llms/bridge_jittorllms_rwkv.py
index ee4f592..0021a50 100644
--- a/request_llm/bridge_jittorllms_rwkv.py
+++ b/request_llms/bridge_jittorllms_rwkv.py
@@ -28,8 +28,8 @@ class GetGLMHandle(Process):
self.success = True
except:
from toolbox import trimmed_format_exc
- self.info = r"缺少jittorllms的依赖,如果要使用jittorllms,除了基础的pip依赖以外,您还需要运行`pip install -r request_llm/requirements_jittorllms.txt -i https://pypi.jittor.org/simple -I`"+\
- r"和`git clone https://gitlink.org.cn/jittor/JittorLLMs.git --depth 1 request_llm/jittorllms`两个指令来安装jittorllms的依赖(在项目根目录运行这两个指令)。" +\
+ self.info = r"缺少jittorllms的依赖,如果要使用jittorllms,除了基础的pip依赖以外,您还需要运行`pip install -r request_llms/requirements_jittorllms.txt -i https://pypi.jittor.org/simple -I`"+\
+ r"和`git clone https://gitlink.org.cn/jittor/JittorLLMs.git --depth 1 request_llms/jittorllms`两个指令来安装jittorllms的依赖(在项目根目录运行这两个指令)。" +\
r"警告:安装jittorllms依赖后将完全破坏现有的pytorch环境,建议使用docker环境!" + trimmed_format_exc()
self.success = False
@@ -45,15 +45,15 @@ class GetGLMHandle(Process):
env = os.environ.get("PATH", "")
os.environ["PATH"] = env.replace('/cuda/bin', '/x/bin')
root_dir_assume = os.path.abspath(os.path.dirname(__file__) + '/..')
- os.chdir(root_dir_assume + '/request_llm/jittorllms')
- sys.path.append(root_dir_assume + '/request_llm/jittorllms')
+ os.chdir(root_dir_assume + '/request_llms/jittorllms')
+ sys.path.append(root_dir_assume + '/request_llms/jittorllms')
validate_path() # validate path so you can run from base directory
def load_model():
import types
try:
if self.jittorllms_model is None:
- device, = get_conf('LOCAL_MODEL_DEVICE')
+ device = get_conf('LOCAL_MODEL_DEVICE')
from .jittorllms.models import get_model
# availabel_models = ["chatglm", "pangualpha", "llama", "chatrwkv"]
args_dict = {'model': 'chatrwkv'}
@@ -109,7 +109,7 @@ rwkv_glm_handle = None
def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False):
"""
多线程方法
- 函数的说明请见 request_llm/bridge_all.py
+ 函数的说明请见 request_llms/bridge_all.py
"""
global rwkv_glm_handle
if rwkv_glm_handle is None:
@@ -140,7 +140,7 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None):
"""
单线程方法
- 函数的说明请见 request_llm/bridge_all.py
+ 函数的说明请见 request_llms/bridge_all.py
"""
chatbot.append((inputs, ""))
@@ -163,13 +163,13 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
history_feedin.append([history[2*i], history[2*i+1]] )
# 开始接收jittorllms的回复
- response = "[Local Message]: 等待jittorllms响应中 ..."
+ response = "[Local Message] 等待jittorllms响应中 ..."
for response in rwkv_glm_handle.stream_chat(query=inputs, history=history_feedin, system_prompt=system_prompt, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']):
chatbot[-1] = (inputs, response)
yield from update_ui(chatbot=chatbot, history=history)
# 总结输出
- if response == "[Local Message]: 等待jittorllms响应中 ...":
- response = "[Local Message]: jittorllms响应异常 ..."
+ if response == "[Local Message] 等待jittorllms响应中 ...":
+ response = "[Local Message] jittorllms响应异常 ..."
history.extend([inputs, response])
yield from update_ui(chatbot=chatbot, history=history)
diff --git a/request_llm/bridge_llama2.py b/request_llms/bridge_llama2.py
similarity index 97%
rename from request_llm/bridge_llama2.py
rename to request_llms/bridge_llama2.py
index d1be446..e6da4b7 100644
--- a/request_llm/bridge_llama2.py
+++ b/request_llms/bridge_llama2.py
@@ -1,18 +1,17 @@
model_name = "LLaMA"
-cmd_to_install = "`pip install -r request_llm/requirements_chatglm.txt`"
+cmd_to_install = "`pip install -r request_llms/requirements_chatglm.txt`"
from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer
from toolbox import update_ui, get_conf, ProxyNetworkActivate
from multiprocessing import Process, Pipe
-from .local_llm_class import LocalLLMHandle, get_local_llm_predict_fns, SingletonLocalLLM
+from .local_llm_class import LocalLLMHandle, get_local_llm_predict_fns
from threading import Thread
# ------------------------------------------------------------------------------------------------------------------------
# 🔌💻 Local Model
# ------------------------------------------------------------------------------------------------------------------------
-@SingletonLocalLLM
class GetONNXGLMHandle(LocalLLMHandle):
def load_model_info(self):
diff --git a/request_llm/bridge_moss.py b/request_llms/bridge_moss.py
similarity index 93%
rename from request_llm/bridge_moss.py
rename to request_llms/bridge_moss.py
index 3c6217d..ee8907c 100644
--- a/request_llm/bridge_moss.py
+++ b/request_llms/bridge_moss.py
@@ -1,8 +1,6 @@
-from transformers import AutoModel, AutoTokenizer
import time
import threading
-import importlib
from toolbox import update_ui, get_conf
from multiprocessing import Process, Pipe
@@ -24,12 +22,12 @@ class GetGLMHandle(Process):
def check_dependency(self): # 主进程执行
try:
import datasets, os
- assert os.path.exists('request_llm/moss/models')
+ assert os.path.exists('request_llms/moss/models')
self.info = "依赖检测通过"
self.success = True
except:
self.info = """
- 缺少MOSS的依赖,如果要使用MOSS,除了基础的pip依赖以外,您还需要运行`pip install -r request_llm/requirements_moss.txt`和`git clone https://github.com/OpenLMLab/MOSS.git request_llm/moss`安装MOSS的依赖。
+ 缺少MOSS的依赖,如果要使用MOSS,除了基础的pip依赖以外,您还需要运行`pip install -r request_llms/requirements_moss.txt`和`git clone https://github.com/OpenLMLab/MOSS.git request_llms/moss`安装MOSS的依赖。
"""
self.success = False
return self.success
@@ -110,8 +108,8 @@ class GetGLMHandle(Process):
def validate_path():
import os, sys
root_dir_assume = os.path.abspath(os.path.dirname(__file__) + '/..')
- os.chdir(root_dir_assume + '/request_llm/moss')
- sys.path.append(root_dir_assume + '/request_llm/moss')
+ os.chdir(root_dir_assume + '/request_llms/moss')
+ sys.path.append(root_dir_assume + '/request_llms/moss')
validate_path() # validate path so you can run from base directory
try:
@@ -176,7 +174,7 @@ moss_handle = None
def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False):
"""
多线程方法
- 函数的说明请见 request_llm/bridge_all.py
+ 函数的说明请见 request_llms/bridge_all.py
"""
global moss_handle
if moss_handle is None:
@@ -206,7 +204,7 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None):
"""
单线程方法
- 函数的说明请见 request_llm/bridge_all.py
+ 函数的说明请见 request_llms/bridge_all.py
"""
chatbot.append((inputs, ""))
@@ -219,7 +217,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
moss_handle = None
return
else:
- response = "[Local Message]: 等待MOSS响应中 ..."
+ response = "[Local Message] 等待MOSS响应中 ..."
chatbot[-1] = (inputs, response)
yield from update_ui(chatbot=chatbot, history=history)
@@ -238,7 +236,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
yield from update_ui(chatbot=chatbot, history=history)
# 总结输出
- if response == "[Local Message]: 等待MOSS响应中 ...":
- response = "[Local Message]: MOSS响应异常 ..."
+ if response == "[Local Message] 等待MOSS响应中 ...":
+ response = "[Local Message] MOSS响应异常 ..."
history.extend([inputs, response.strip('<|MOSS|>: ')])
yield from update_ui(chatbot=chatbot, history=history)
diff --git a/request_llm/bridge_newbingfree.py b/request_llms/bridge_newbingfree.py
similarity index 92%
rename from request_llm/bridge_newbingfree.py
rename to request_llms/bridge_newbingfree.py
index cc6e9b7..cb83a0f 100644
--- a/request_llm/bridge_newbingfree.py
+++ b/request_llms/bridge_newbingfree.py
@@ -54,7 +54,7 @@ class NewBingHandle(Process):
self.info = "依赖检测通过,等待NewBing响应。注意目前不能多人同时调用NewBing接口(有线程锁),否则将导致每个人的NewBing问询历史互相渗透。调用NewBing时,会自动使用已配置的代理。"
self.success = True
except:
- self.info = "缺少的依赖,如果要使用Newbing,除了基础的pip依赖以外,您还需要运行`pip install -r request_llm/requirements_newbing.txt`安装Newbing的依赖。"
+            self.info = "缺少Newbing的依赖,如果要使用Newbing,除了基础的pip依赖以外,您还需要运行`pip install -r request_llms/requirements_newbing.txt`安装Newbing的依赖。"
self.success = False
def ready(self):
@@ -62,8 +62,8 @@ class NewBingHandle(Process):
async def async_run(self):
# 读取配置
- NEWBING_STYLE, = get_conf('NEWBING_STYLE')
- from request_llm.bridge_all import model_info
+ NEWBING_STYLE = get_conf('NEWBING_STYLE')
+ from request_llms.bridge_all import model_info
endpoint = model_info['newbing']['endpoint']
while True:
# 等待
@@ -141,10 +141,10 @@ class NewBingHandle(Process):
except:
self.success = False
tb_str = '\n```\n' + trimmed_format_exc() + '\n```\n'
- self.child.send(f'[Local Message] 不能加载Newbing组件。{tb_str}')
+ self.child.send(f'[Local Message] 不能加载Newbing组件,请注意Newbing组件已不再维护。{tb_str}')
self.child.send('[Fail]')
self.child.send('[Finish]')
- raise RuntimeError(f"不能加载Newbing组件。")
+ raise RuntimeError(f"不能加载Newbing组件,请注意Newbing组件已不再维护。")
self.success = True
try:
@@ -181,7 +181,7 @@ newbingfree_handle = None
def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False):
"""
多线程方法
- 函数的说明请见 request_llm/bridge_all.py
+ 函数的说明请见 request_llms/bridge_all.py
"""
global newbingfree_handle
if (newbingfree_handle is None) or (not newbingfree_handle.success):
@@ -199,7 +199,7 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
watch_dog_patience = 5 # 看门狗 (watchdog) 的耐心, 设置5秒即可
response = ""
- if len(observe_window) >= 1: observe_window[0] = "[Local Message]: 等待NewBing响应中 ..."
+ if len(observe_window) >= 1: observe_window[0] = "[Local Message] 等待NewBing响应中 ..."
for response in newbingfree_handle.stream_chat(query=inputs, history=history_feedin, system_prompt=sys_prompt, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']):
if len(observe_window) >= 1: observe_window[0] = preprocess_newbing_out_simple(response)
if len(observe_window) >= 2:
@@ -210,9 +210,9 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None):
"""
单线程方法
- 函数的说明请见 request_llm/bridge_all.py
+ 函数的说明请见 request_llms/bridge_all.py
"""
- chatbot.append((inputs, "[Local Message]: 等待NewBing响应中 ..."))
+ chatbot.append((inputs, "[Local Message] 等待NewBing响应中 ..."))
global newbingfree_handle
if (newbingfree_handle is None) or (not newbingfree_handle.success):
@@ -231,13 +231,13 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
for i in range(len(history)//2):
history_feedin.append([history[2*i], history[2*i+1]] )
- chatbot[-1] = (inputs, "[Local Message]: 等待NewBing响应中 ...")
- response = "[Local Message]: 等待NewBing响应中 ..."
+ chatbot[-1] = (inputs, "[Local Message] 等待NewBing响应中 ...")
+ response = "[Local Message] 等待NewBing响应中 ..."
yield from update_ui(chatbot=chatbot, history=history, msg="NewBing响应缓慢,尚未完成全部响应,请耐心完成后再提交新问题。")
for response in newbingfree_handle.stream_chat(query=inputs, history=history_feedin, system_prompt=system_prompt, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']):
chatbot[-1] = (inputs, preprocess_newbing_out(response))
yield from update_ui(chatbot=chatbot, history=history, msg="NewBing响应缓慢,尚未完成全部响应,请耐心完成后再提交新问题。")
- if response == "[Local Message]: 等待NewBing响应中 ...": response = "[Local Message]: NewBing响应异常,请刷新界面重试 ..."
+ if response == "[Local Message] 等待NewBing响应中 ...": response = "[Local Message] NewBing响应异常,请刷新界面重试 ..."
history.extend([inputs, response])
logging.info(f'[raw_input] {inputs}')
logging.info(f'[response] {response}')
diff --git a/request_llm/bridge_qianfan.py b/request_llms/bridge_qianfan.py
similarity index 92%
rename from request_llm/bridge_qianfan.py
rename to request_llms/bridge_qianfan.py
index be73976..a806e0d 100644
--- a/request_llm/bridge_qianfan.py
+++ b/request_llms/bridge_qianfan.py
@@ -75,11 +75,12 @@ def generate_message_payload(inputs, llm_kwargs, history, system_prompt):
def generate_from_baidu_qianfan(inputs, llm_kwargs, history, system_prompt):
- BAIDU_CLOUD_QIANFAN_MODEL, = get_conf('BAIDU_CLOUD_QIANFAN_MODEL')
+ BAIDU_CLOUD_QIANFAN_MODEL = get_conf('BAIDU_CLOUD_QIANFAN_MODEL')
url_lib = {
- "ERNIE-Bot": "https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/chat/completions" ,
- "ERNIE-Bot-turbo": "https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/chat/eb-instant" ,
+ "ERNIE-Bot-4": "https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/chat/completions_pro",
+ "ERNIE-Bot": "https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/chat/completions",
+ "ERNIE-Bot-turbo": "https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/chat/eb-instant",
"BLOOMZ-7B": "https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/chat/bloomz_7b1",
"Llama-2-70B-Chat": "https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/chat/llama_2_70b",
@@ -119,7 +120,7 @@ def generate_from_baidu_qianfan(inputs, llm_kwargs, history, system_prompt):
def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False):
"""
⭐多线程方法
- 函数的说明请见 request_llm/bridge_all.py
+ 函数的说明请见 request_llms/bridge_all.py
"""
watch_dog_patience = 5
response = ""
@@ -134,7 +135,7 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None):
"""
⭐单线程方法
- 函数的说明请见 request_llm/bridge_all.py
+ 函数的说明请见 request_llms/bridge_all.py
"""
chatbot.append((inputs, ""))
@@ -158,8 +159,8 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
return
# 总结输出
- response = f"[Local Message]: {model_name}响应异常 ..."
- if response == f"[Local Message]: 等待{model_name}响应中 ...":
- response = f"[Local Message]: {model_name}响应异常 ..."
+ response = f"[Local Message] {model_name}响应异常 ..."
+ if response == f"[Local Message] 等待{model_name}响应中 ...":
+ response = f"[Local Message] {model_name}响应异常 ..."
history.extend([inputs, response])
yield from update_ui(chatbot=chatbot, history=history)
\ No newline at end of file
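# ----------------------------------------------------------------------------
# Illustrative sketch (not part of the patch above): the endpoint table
# extended here now routes ERNIE-Bot-4 to the completions_pro URL. The lookup
# mirrors how generate_from_baidu_qianfan() resolves BAIDU_CLOUD_QIANFAN_MODEL;
# the URLs are copied from the patch, the hard-coded model name is a placeholder.
url_lib = {
    "ERNIE-Bot-4":     "https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/chat/completions_pro",
    "ERNIE-Bot":       "https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/chat/completions",
    "ERNIE-Bot-turbo": "https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/chat/eb-instant",
}
BAIDU_CLOUD_QIANFAN_MODEL = "ERNIE-Bot-4"   # placeholder; the bridge reads this via get_conf()
print(url_lib[BAIDU_CLOUD_QIANFAN_MODEL])
# ----------------------------------------------------------------------------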
diff --git a/request_llm/bridge_qwen.py b/request_llms/bridge_qwen.py
similarity index 76%
rename from request_llm/bridge_qwen.py
rename to request_llms/bridge_qwen.py
index 07ed243..afd886b 100644
--- a/request_llm/bridge_qwen.py
+++ b/request_llms/bridge_qwen.py
@@ -1,21 +1,20 @@
model_name = "Qwen"
-cmd_to_install = "`pip install -r request_llm/requirements_qwen.txt`"
+cmd_to_install = "`pip install -r request_llms/requirements_qwen.txt`"
from transformers import AutoModel, AutoTokenizer
import time
import threading
import importlib
-from toolbox import update_ui, get_conf
+from toolbox import update_ui, get_conf, ProxyNetworkActivate
from multiprocessing import Process, Pipe
-from .local_llm_class import LocalLLMHandle, get_local_llm_predict_fns, SingletonLocalLLM
+from .local_llm_class import LocalLLMHandle, get_local_llm_predict_fns
# ------------------------------------------------------------------------------------------------------------------------
# 🔌💻 Local Model
# ------------------------------------------------------------------------------------------------------------------------
-@SingletonLocalLLM
class GetONNXGLMHandle(LocalLLMHandle):
def load_model_info(self):
@@ -30,13 +29,13 @@ class GetONNXGLMHandle(LocalLLMHandle):
import platform
from modelscope import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
- model_id = 'qwen/Qwen-7B-Chat'
- revision = 'v1.0.1'
- self._tokenizer = AutoTokenizer.from_pretrained(model_id, revision=revision, trust_remote_code=True)
- # use fp16
- model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", revision=revision, trust_remote_code=True, fp16=True).eval()
- model.generation_config = GenerationConfig.from_pretrained(model_id, trust_remote_code=True) # 可指定不同的生成长度、top_p等相关超参
- self._model = model
+ with ProxyNetworkActivate('Download_LLM'):
+ model_id = 'qwen/Qwen-7B-Chat'
+            self._tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True, resume_download=True)
+ # use fp16
+ model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", trust_remote_code=True, fp16=True).eval()
+ model.generation_config = GenerationConfig.from_pretrained(model_id, trust_remote_code=True) # 可指定不同的生成长度、top_p等相关超参
+ self._model = model
return self._model, self._tokenizer
diff --git a/request_llm/bridge_spark.py b/request_llms/bridge_spark.py
similarity index 82%
rename from request_llm/bridge_spark.py
rename to request_llms/bridge_spark.py
index 0fe925f..6ba39ee 100644
--- a/request_llm/bridge_spark.py
+++ b/request_llms/bridge_spark.py
@@ -8,7 +8,7 @@ from multiprocessing import Process, Pipe
model_name = '星火认知大模型'
def validate_key():
- XFYUN_APPID, = get_conf('XFYUN_APPID', )
+ XFYUN_APPID = get_conf('XFYUN_APPID')
if XFYUN_APPID == '00000000' or XFYUN_APPID == '':
return False
return True
@@ -16,7 +16,7 @@ def validate_key():
def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False):
"""
⭐多线程方法
- 函数的说明请见 request_llm/bridge_all.py
+ 函数的说明请见 request_llms/bridge_all.py
"""
watch_dog_patience = 5
response = ""
@@ -36,13 +36,13 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None):
"""
⭐单线程方法
- 函数的说明请见 request_llm/bridge_all.py
+ 函数的说明请见 request_llms/bridge_all.py
"""
chatbot.append((inputs, ""))
yield from update_ui(chatbot=chatbot, history=history)
if validate_key() is False:
- yield from update_ui_lastest_msg(lastmsg="[Local Message]: 请配置讯飞星火大模型的XFYUN_APPID, XFYUN_API_KEY, XFYUN_API_SECRET", chatbot=chatbot, history=history, delay=0)
+ yield from update_ui_lastest_msg(lastmsg="[Local Message] 请配置讯飞星火大模型的XFYUN_APPID, XFYUN_API_KEY, XFYUN_API_SECRET", chatbot=chatbot, history=history, delay=0)
return
if additional_fn is not None:
@@ -57,7 +57,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
yield from update_ui(chatbot=chatbot, history=history)
# 总结输出
- if response == f"[Local Message]: 等待{model_name}响应中 ...":
- response = f"[Local Message]: {model_name}响应异常 ..."
+ if response == f"[Local Message] 等待{model_name}响应中 ...":
+ response = f"[Local Message] {model_name}响应异常 ..."
history.extend([inputs, response])
yield from update_ui(chatbot=chatbot, history=history)
\ No newline at end of file
diff --git a/request_llm/bridge_stackclaude.py b/request_llms/bridge_stackclaude.py
similarity index 92%
rename from request_llm/bridge_stackclaude.py
rename to request_llms/bridge_stackclaude.py
index 3f2ee67..0b42a17 100644
--- a/request_llm/bridge_stackclaude.py
+++ b/request_llms/bridge_stackclaude.py
@@ -36,7 +36,7 @@ try:
CHANNEL_ID = None
async def open_channel(self):
- response = await self.conversations_open(users=get_conf('SLACK_CLAUDE_BOT_ID')[0])
+ response = await self.conversations_open(users=get_conf('SLACK_CLAUDE_BOT_ID'))
self.CHANNEL_ID = response["channel"]["id"]
async def chat(self, text):
@@ -51,7 +51,7 @@ try:
# TODO:暂时不支持历史消息,因为在同一个频道里存在多人使用时历史消息渗透问题
resp = await self.conversations_history(channel=self.CHANNEL_ID, oldest=self.LAST_TS, limit=1)
msg = [msg for msg in resp["messages"]
- if msg.get("user") == get_conf('SLACK_CLAUDE_BOT_ID')[0]]
+ if msg.get("user") == get_conf('SLACK_CLAUDE_BOT_ID')]
return msg
except (SlackApiError, KeyError) as e:
raise RuntimeError(f"获取Slack消息失败。")
@@ -99,7 +99,7 @@ class ClaudeHandle(Process):
self.info = "依赖检测通过,等待Claude响应。注意目前不能多人同时调用Claude接口(有线程锁),否则将导致每个人的Claude问询历史互相渗透。调用Claude时,会自动使用已配置的代理。"
self.success = True
except:
- self.info = "缺少的依赖,如果要使用Claude,除了基础的pip依赖以外,您还需要运行`pip install -r request_llm/requirements_slackclaude.txt`安装Claude的依赖,然后重启程序。"
+            self.info = "缺少Claude的依赖,如果要使用Claude,除了基础的pip依赖以外,您还需要运行`pip install -r request_llms/requirements_slackclaude.txt`安装Claude的依赖,然后重启程序。"
self.success = False
def ready(self):
@@ -146,14 +146,14 @@ class ClaudeHandle(Process):
self.local_history = []
if (self.claude_model is None) or (not self.success):
# 代理设置
- proxies, = get_conf('proxies')
+ proxies = get_conf('proxies')
if proxies is None:
self.proxies_https = None
else:
self.proxies_https = proxies['https']
try:
- SLACK_CLAUDE_USER_TOKEN, = get_conf('SLACK_CLAUDE_USER_TOKEN')
+ SLACK_CLAUDE_USER_TOKEN = get_conf('SLACK_CLAUDE_USER_TOKEN')
self.claude_model = SlackClient(token=SLACK_CLAUDE_USER_TOKEN, proxy=self.proxies_https)
print('Claude组件初始化成功。')
except:
@@ -204,7 +204,7 @@ claude_handle = None
def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=None, console_slience=False):
"""
多线程方法
- 函数的说明请见 request_llm/bridge_all.py
+ 函数的说明请见 request_llms/bridge_all.py
"""
global claude_handle
if (claude_handle is None) or (not claude_handle.success):
@@ -222,7 +222,7 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
watch_dog_patience = 5 # 看门狗 (watchdog) 的耐心, 设置5秒即可
response = ""
- observe_window[0] = "[Local Message]: 等待Claude响应中 ..."
+ observe_window[0] = "[Local Message] 等待Claude响应中 ..."
for response in claude_handle.stream_chat(query=inputs, history=history_feedin, system_prompt=sys_prompt, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']):
observe_window[0] = preprocess_newbing_out_simple(response)
if len(observe_window) >= 2:
@@ -234,9 +234,9 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="",
def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream=True, additional_fn=None):
"""
单线程方法
- 函数的说明请见 request_llm/bridge_all.py
+ 函数的说明请见 request_llms/bridge_all.py
"""
- chatbot.append((inputs, "[Local Message]: 等待Claude响应中 ..."))
+ chatbot.append((inputs, "[Local Message] 等待Claude响应中 ..."))
global claude_handle
if (claude_handle is None) or (not claude_handle.success):
@@ -255,14 +255,14 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
for i in range(len(history)//2):
history_feedin.append([history[2*i], history[2*i+1]])
- chatbot[-1] = (inputs, "[Local Message]: 等待Claude响应中 ...")
- response = "[Local Message]: 等待Claude响应中 ..."
+ chatbot[-1] = (inputs, "[Local Message] 等待Claude响应中 ...")
+ response = "[Local Message] 等待Claude响应中 ..."
yield from update_ui(chatbot=chatbot, history=history, msg="Claude响应缓慢,尚未完成全部响应,请耐心完成后再提交新问题。")
for response in claude_handle.stream_chat(query=inputs, history=history_feedin, system_prompt=system_prompt):
chatbot[-1] = (inputs, preprocess_newbing_out(response))
yield from update_ui(chatbot=chatbot, history=history, msg="Claude响应缓慢,尚未完成全部响应,请耐心完成后再提交新问题。")
- if response == "[Local Message]: 等待Claude响应中 ...":
- response = "[Local Message]: Claude响应异常,请刷新界面重试 ..."
+ if response == "[Local Message] 等待Claude响应中 ...":
+ response = "[Local Message] Claude响应异常,请刷新界面重试 ..."
history.extend([inputs, response])
logging.info(f'[raw_input] {inputs}')
logging.info(f'[response] {response}')
diff --git a/request_llm/bridge_tgui.py b/request_llms/bridge_tgui.py
similarity index 100%
rename from request_llm/bridge_tgui.py
rename to request_llms/bridge_tgui.py
diff --git a/request_llms/bridge_zhipu.py b/request_llms/bridge_zhipu.py
new file mode 100644
index 0000000..a1e0de5
--- /dev/null
+++ b/request_llms/bridge_zhipu.py
@@ -0,0 +1,59 @@
+
+import time
+from toolbox import update_ui, get_conf, update_ui_lastest_msg
+
+model_name = '智谱AI大模型'
+
+def validate_key():
+ ZHIPUAI_API_KEY = get_conf("ZHIPUAI_API_KEY")
+ if ZHIPUAI_API_KEY == '': return False
+ return True
+
+def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False):
+ """
+ ⭐多线程方法
+ 函数的说明请见 request_llms/bridge_all.py
+ """
+ watch_dog_patience = 5
+ response = ""
+
+ if validate_key() is False:
+ raise RuntimeError('请配置ZHIPUAI_API_KEY')
+
+ from .com_zhipuapi import ZhipuRequestInstance
+ sri = ZhipuRequestInstance()
+ for response in sri.generate(inputs, llm_kwargs, history, sys_prompt):
+ if len(observe_window) >= 1:
+ observe_window[0] = response
+ if len(observe_window) >= 2:
+ if (time.time()-observe_window[1]) > watch_dog_patience: raise RuntimeError("程序终止。")
+ return response
+
+def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None):
+ """
+ ⭐单线程方法
+ 函数的说明请见 request_llms/bridge_all.py
+ """
+ chatbot.append((inputs, ""))
+ yield from update_ui(chatbot=chatbot, history=history)
+
+ if validate_key() is False:
+ yield from update_ui_lastest_msg(lastmsg="[Local Message] 请配置ZHIPUAI_API_KEY", chatbot=chatbot, history=history, delay=0)
+ return
+
+ if additional_fn is not None:
+ from core_functional import handle_core_functionality
+ inputs, history = handle_core_functionality(additional_fn, inputs, history, chatbot)
+
+ # 开始接收回复
+ from .com_zhipuapi import ZhipuRequestInstance
+    sri = ZhipuRequestInstance()
+    response = f"[Local Message] 等待{model_name}响应中 ..."  # 预置占位回复,避免生成器未产出内容时 response 未定义
+    for response in sri.generate(inputs, llm_kwargs, history, system_prompt):
+ chatbot[-1] = (inputs, response)
+ yield from update_ui(chatbot=chatbot, history=history)
+
+ # 总结输出
+ if response == f"[Local Message] 等待{model_name}响应中 ...":
+ response = f"[Local Message] {model_name}响应异常 ..."
+ history.extend([inputs, response])
+ yield from update_ui(chatbot=chatbot, history=history)
\ No newline at end of file
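# ----------------------------------------------------------------------------
# Illustrative sketch (not part of the patch above): how a caller drives the
# observe_window watchdog used by predict_no_ui_long_connection() in this new
# bridge. Slot 0 receives the latest partial reply, slot 1 is a heartbeat
# timestamp the caller keeps refreshing; if it goes stale for about 5 seconds
# the bridge raises RuntimeError. The values below are placeholders.
import time

observe_window = ["", time.time()]       # [latest_reply, heartbeat]
# ... while the request runs in a worker thread, the caller periodically does:
observe_window[1] = time.time()          # feed the watchdog
print(observe_window[0])                 # latest partial reply written by the bridge
# ----------------------------------------------------------------------------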
diff --git a/request_llm/chatglmoonx.py b/request_llms/chatglmoonx.py
similarity index 100%
rename from request_llm/chatglmoonx.py
rename to request_llms/chatglmoonx.py
diff --git a/request_llm/com_sparkapi.py b/request_llms/com_sparkapi.py
similarity index 95%
rename from request_llm/com_sparkapi.py
rename to request_llms/com_sparkapi.py
index ae970b9..5c1a3a4 100644
--- a/request_llm/com_sparkapi.py
+++ b/request_llms/com_sparkapi.py
@@ -64,6 +64,7 @@ class SparkRequestInstance():
self.api_key = XFYUN_API_KEY
self.gpt_url = "ws://spark-api.xf-yun.com/v1.1/chat"
self.gpt_url_v2 = "ws://spark-api.xf-yun.com/v2.1/chat"
+ self.gpt_url_v3 = "ws://spark-api.xf-yun.com/v3.1/chat"
self.time_to_yield_event = threading.Event()
self.time_to_exit_event = threading.Event()
@@ -87,6 +88,8 @@ class SparkRequestInstance():
def create_blocking_request(self, inputs, llm_kwargs, history, system_prompt):
if llm_kwargs['llm_model'] == 'sparkv2':
gpt_url = self.gpt_url_v2
+ elif llm_kwargs['llm_model'] == 'sparkv3':
+ gpt_url = self.gpt_url_v3
else:
gpt_url = self.gpt_url
@@ -168,6 +171,11 @@ def gen_params(appid, inputs, llm_kwargs, history, system_prompt):
"""
通过appid和用户的提问来生成请参数
"""
+ domains = {
+ "spark": "general",
+ "sparkv2": "generalv2",
+ "sparkv3": "generalv3",
+ }
data = {
"header": {
"app_id": appid,
@@ -175,7 +183,7 @@ def gen_params(appid, inputs, llm_kwargs, history, system_prompt):
},
"parameter": {
"chat": {
- "domain": "generalv2" if llm_kwargs['llm_model'] == 'sparkv2' else "general",
+ "domain": domains[llm_kwargs['llm_model']],
"temperature": llm_kwargs["temperature"],
"random_threshold": 0.5,
"max_tokens": 4096,
diff --git a/request_llms/com_zhipuapi.py b/request_llms/com_zhipuapi.py
new file mode 100644
index 0000000..445720d
--- /dev/null
+++ b/request_llms/com_zhipuapi.py
@@ -0,0 +1,67 @@
+from toolbox import get_conf
+import threading
+import logging
+
+timeout_bot_msg = '[Local Message] Request timeout. Network error.'
+
+class ZhipuRequestInstance():
+ def __init__(self):
+
+ self.time_to_yield_event = threading.Event()
+ self.time_to_exit_event = threading.Event()
+
+ self.result_buf = ""
+
+ def generate(self, inputs, llm_kwargs, history, system_prompt):
+ # import _thread as thread
+ import zhipuai
+ ZHIPUAI_API_KEY, ZHIPUAI_MODEL = get_conf("ZHIPUAI_API_KEY", "ZHIPUAI_MODEL")
+ zhipuai.api_key = ZHIPUAI_API_KEY
+ self.result_buf = ""
+ response = zhipuai.model_api.sse_invoke(
+ model=ZHIPUAI_MODEL,
+ prompt=generate_message_payload(inputs, llm_kwargs, history, system_prompt),
+ top_p=llm_kwargs['top_p'],
+ temperature=llm_kwargs['temperature'],
+ )
+ for event in response.events():
+ if event.event == "add":
+ self.result_buf += event.data
+ yield self.result_buf
+ elif event.event == "error" or event.event == "interrupted":
+ raise RuntimeError("Unknown error:" + event.data)
+ elif event.event == "finish":
+ yield self.result_buf
+ break
+ else:
+ raise RuntimeError("Unknown error:" + str(event))
+
+ logging.info(f'[raw_input] {inputs}')
+ logging.info(f'[response] {self.result_buf}')
+ return self.result_buf
+
+def generate_message_payload(inputs, llm_kwargs, history, system_prompt):
+ conversation_cnt = len(history) // 2
+ messages = [{"role": "user", "content": system_prompt}, {"role": "assistant", "content": "Certainly!"}]
+ if conversation_cnt:
+ for index in range(0, 2*conversation_cnt, 2):
+ what_i_have_asked = {}
+ what_i_have_asked["role"] = "user"
+ what_i_have_asked["content"] = history[index]
+ what_gpt_answer = {}
+ what_gpt_answer["role"] = "assistant"
+ what_gpt_answer["content"] = history[index+1]
+ if what_i_have_asked["content"] != "":
+ if what_gpt_answer["content"] == "":
+ continue
+ if what_gpt_answer["content"] == timeout_bot_msg:
+ continue
+ messages.append(what_i_have_asked)
+ messages.append(what_gpt_answer)
+ else:
+ messages[-1]['content'] = what_gpt_answer['content']
+ what_i_ask_now = {}
+ what_i_ask_now["role"] = "user"
+ what_i_ask_now["content"] = inputs
+ messages.append(what_i_ask_now)
+ return messages
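# ----------------------------------------------------------------------------
# Illustrative sketch (not part of the patch above): the message list that
# generate_message_payload() builds for a short conversation. The real function
# additionally skips empty or timed-out turns; the strings are placeholders.
history = ["What is 2+2?", "4", "And 3+3?", "6"]
system_prompt = "You are a helpful assistant."
inputs = "Now 5+5?"

messages = [{"role": "user", "content": system_prompt},
            {"role": "assistant", "content": "Certainly!"}]
for i in range(0, len(history), 2):
    messages.append({"role": "user", "content": history[i]})
    messages.append({"role": "assistant", "content": history[i + 1]})
messages.append({"role": "user", "content": inputs})
print(messages)
# ----------------------------------------------------------------------------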
diff --git a/request_llm/edge_gpt_free.py b/request_llms/edge_gpt_free.py
similarity index 100%
rename from request_llm/edge_gpt_free.py
rename to request_llms/edge_gpt_free.py
diff --git a/request_llms/key_manager.py b/request_llms/key_manager.py
new file mode 100644
index 0000000..8563d2e
--- /dev/null
+++ b/request_llms/key_manager.py
@@ -0,0 +1,29 @@
+import random
+
+def Singleton(cls):
+ _instance = {}
+
+ def _singleton(*args, **kargs):
+ if cls not in _instance:
+ _instance[cls] = cls(*args, **kargs)
+ return _instance[cls]
+
+ return _singleton
+
+
+@Singleton
+class OpenAI_ApiKeyManager():
+ def __init__(self, mode='blacklist') -> None:
+ # self.key_avail_list = []
+ self.key_black_list = []
+
+ def add_key_to_blacklist(self, key):
+ self.key_black_list.append(key)
+
+ def select_avail_key(self, key_list):
+ # select key from key_list, but avoid keys also in self.key_black_list, raise error if no key can be found
+ available_keys = [key for key in key_list if key not in self.key_black_list]
+ if not available_keys:
+ raise KeyError("No available key found.")
+ selected_key = random.choice(available_keys)
+ return selected_key
\ No newline at end of file
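# ----------------------------------------------------------------------------
# Illustrative usage sketch (not part of the patch above): because
# OpenAI_ApiKeyManager is wrapped by the Singleton decorator, both
# constructions below return the same instance, so a key blacklisted anywhere
# stays blacklisted everywhere. The key strings are placeholders.
from request_llms.key_manager import OpenAI_ApiKeyManager  # module added by this patch

m1 = OpenAI_ApiKeyManager()
m2 = OpenAI_ApiKeyManager()
assert m1 is m2

m1.add_key_to_blacklist("sk-bad-key")
print(m2.select_avail_key(["sk-bad-key", "sk-good-key"]))   # always returns "sk-good-key"
# ----------------------------------------------------------------------------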
diff --git a/request_llms/local_llm_class.py b/request_llms/local_llm_class.py
new file mode 100644
index 0000000..091707a
--- /dev/null
+++ b/request_llms/local_llm_class.py
@@ -0,0 +1,319 @@
+import time
+import threading
+from toolbox import update_ui, Singleton
+from multiprocessing import Process, Pipe
+from contextlib import redirect_stdout
+from request_llms.queued_pipe import create_queue_pipe
+
+class ThreadLock(object):
+ def __init__(self):
+ self._lock = threading.Lock()
+
+ def acquire(self):
+ # print("acquiring", self)
+ #traceback.print_tb
+ self._lock.acquire()
+ # print("acquired", self)
+
+ def release(self):
+ # print("released", self)
+ #traceback.print_tb
+ self._lock.release()
+
+ def __enter__(self):
+ self.acquire()
+
+ def __exit__(self, type, value, traceback):
+ self.release()
+
+@Singleton
+class GetSingletonHandle():
+ def __init__(self):
+ self.llm_model_already_running = {}
+
+ def get_llm_model_instance(self, cls, *args, **kargs):
+ if cls not in self.llm_model_already_running:
+ self.llm_model_already_running[cls] = cls(*args, **kargs)
+ return self.llm_model_already_running[cls]
+ elif self.llm_model_already_running[cls].corrupted:
+ self.llm_model_already_running[cls] = cls(*args, **kargs)
+ return self.llm_model_already_running[cls]
+ else:
+ return self.llm_model_already_running[cls]
+
+def reset_tqdm_output():
+ import sys, tqdm
+ def status_printer(self, file):
+ fp = file
+ if fp in (sys.stderr, sys.stdout):
+ getattr(sys.stderr, 'flush', lambda: None)()
+ getattr(sys.stdout, 'flush', lambda: None)()
+
+ def fp_write(s):
+ print(s)
+ last_len = [0]
+
+ def print_status(s):
+ from tqdm.utils import disp_len
+ len_s = disp_len(s)
+ fp_write('\r' + s + (' ' * max(last_len[0] - len_s, 0)))
+ last_len[0] = len_s
+ return print_status
+ tqdm.tqdm.status_printer = status_printer
+
+
+class LocalLLMHandle(Process):
+ def __init__(self):
+ # ⭐run in main process
+ super().__init__(daemon=True)
+ self.is_main_process = True # init
+ self.corrupted = False
+ self.load_model_info()
+ self.parent, self.child = create_queue_pipe()
+ self.parent_state, self.child_state = create_queue_pipe()
+ # allow redirect_stdout
+ self.std_tag = "[Subprocess Message] "
+ self.running = True
+ self._model = None
+ self._tokenizer = None
+ self.state = ""
+ self.check_dependency()
+ self.is_main_process = False # state wrap for child process
+ self.start()
+ self.is_main_process = True # state wrap for child process
+ self.threadLock = ThreadLock()
+
+ def get_state(self):
+ # ⭐run in main process
+ while self.parent_state.poll():
+ self.state = self.parent_state.recv()
+ return self.state
+
+ def set_state(self, new_state):
+ # ⭐run in main process or 🏃♂️🏃♂️🏃♂️ run in child process
+ if self.is_main_process:
+ self.state = new_state
+ else:
+ self.child_state.send(new_state)
+
+ def load_model_info(self):
+ # 🏃♂️🏃♂️🏃♂️ run in child process
+ raise NotImplementedError("Method not implemented yet")
+ self.model_name = ""
+ self.cmd_to_install = ""
+
+ def load_model_and_tokenizer(self):
+ """
+ This function should return the model and the tokenizer
+ """
+ # 🏃♂️🏃♂️🏃♂️ run in child process
+ raise NotImplementedError("Method not implemented yet")
+
+ def llm_stream_generator(self, **kwargs):
+ # 🏃♂️🏃♂️🏃♂️ run in child process
+ raise NotImplementedError("Method not implemented yet")
+
+ def try_to_import_special_deps(self, **kwargs):
+ """
+ import something that will raise error if the user does not install requirement_*.txt
+ """
+ # ⭐run in main process
+ raise NotImplementedError("Method not implemented yet")
+
+ def check_dependency(self):
+ # ⭐run in main process
+ try:
+ self.try_to_import_special_deps()
+ self.set_state("`依赖检测通过`")
+ self.running = True
+ except:
+ self.set_state(f"缺少{self.model_name}的依赖,如果要使用{self.model_name},除了基础的pip依赖以外,您还需要运行{self.cmd_to_install}安装{self.model_name}的依赖。")
+ self.running = False
+
+ def run(self):
+ # 🏃♂️🏃♂️🏃♂️ run in child process
+ # 第一次运行,加载参数
+ self.child.flush = lambda *args: None
+ self.child.write = lambda x: self.child.send(self.std_tag + x)
+ reset_tqdm_output()
+ self.set_state("`尝试加载模型`")
+ try:
+ with redirect_stdout(self.child):
+ self._model, self._tokenizer = self.load_model_and_tokenizer()
+ except:
+ self.set_state("`加载模型失败`")
+ self.running = False
+ from toolbox import trimmed_format_exc
+ self.child.send(
+ f'[Local Message] 不能正常加载{self.model_name}的参数.' + '\n```\n' + trimmed_format_exc() + '\n```\n')
+ self.child.send('[FinishBad]')
+ raise RuntimeError(f"不能正常加载{self.model_name}的参数!")
+
+ self.set_state("`准备就绪`")
+ while True:
+ # 进入任务等待状态
+ kwargs = self.child.recv()
+ # 收到消息,开始请求
+ try:
+ for response_full in self.llm_stream_generator(**kwargs):
+ self.child.send(response_full)
+ # print('debug' + response_full)
+ self.child.send('[Finish]')
+ # 请求处理结束,开始下一个循环
+ except:
+ from toolbox import trimmed_format_exc
+ self.child.send(
+ f'[Local Message] 调用{self.model_name}失败.' + '\n```\n' + trimmed_format_exc() + '\n```\n')
+ self.child.send('[Finish]')
+
+ def clear_pending_messages(self):
+ # ⭐run in main process
+ while True:
+ if self.parent.poll():
+ self.parent.recv()
+ continue
+ for _ in range(5):
+ time.sleep(0.5)
+ if self.parent.poll():
+ r = self.parent.recv()
+ continue
+ break
+ return
+
+ def stream_chat(self, **kwargs):
+ # ⭐run in main process
+ if self.get_state() == "`准备就绪`":
+ yield "`正在等待线程锁,排队中请稍后 ...`"
+
+ with self.threadLock:
+ if self.parent.poll():
+ yield "`排队中请稍后 ...`"
+ self.clear_pending_messages()
+ self.parent.send(kwargs)
+ std_out = ""
+ std_out_clip_len = 4096
+ while True:
+ res = self.parent.recv()
+ # pipe_watch_dog.feed()
+ if res.startswith(self.std_tag):
+ new_output = res[len(self.std_tag):]
+ std_out = std_out[:std_out_clip_len]
+ # print(new_output, end='')
+ std_out = new_output + std_out
+ yield self.std_tag + '\n```\n' + std_out + '\n```\n'
+ elif res == '[Finish]':
+ break
+ elif res == '[FinishBad]':
+ self.running = False
+ self.corrupted = True
+ break
+ else:
+ std_out = ""
+ yield res
+
+def get_local_llm_predict_fns(LLMSingletonClass, model_name, history_format='classic'):
+ load_message = f"{model_name}尚未加载,加载需要一段时间。注意,取决于`config.py`的配置,{model_name}消耗大量的内存(CPU)或显存(GPU),也许会导致低配计算机卡死 ……"
+
+ def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False):
+ """
+ refer to request_llms/bridge_all.py
+ """
+ _llm_handle = GetSingletonHandle().get_llm_model_instance(LLMSingletonClass)
+ if len(observe_window) >= 1:
+ observe_window[0] = load_message + "\n\n" + _llm_handle.get_state()
+ if not _llm_handle.running:
+ raise RuntimeError(_llm_handle.get_state())
+
+ if history_format == 'classic':
+ # 没有 sys_prompt 接口,因此把prompt加入 history
+ history_feedin = []
+ history_feedin.append([sys_prompt, "Certainly!"])
+ for i in range(len(history)//2):
+ history_feedin.append([history[2*i], history[2*i+1]])
+ elif history_format == 'chatglm3':
+ # 有 sys_prompt 接口
+ conversation_cnt = len(history) // 2
+ history_feedin = [{"role": "system", "content": sys_prompt}]
+ if conversation_cnt:
+ for index in range(0, 2*conversation_cnt, 2):
+ what_i_have_asked = {}
+ what_i_have_asked["role"] = "user"
+ what_i_have_asked["content"] = history[index]
+ what_gpt_answer = {}
+ what_gpt_answer["role"] = "assistant"
+ what_gpt_answer["content"] = history[index+1]
+ if what_i_have_asked["content"] != "":
+ if what_gpt_answer["content"] == "":
+ continue
+ history_feedin.append(what_i_have_asked)
+ history_feedin.append(what_gpt_answer)
+ else:
+ history_feedin[-1]['content'] = what_gpt_answer['content']
+
+ watch_dog_patience = 5 # 看门狗 (watchdog) 的耐心, 设置5秒即可
+ response = ""
+ for response in _llm_handle.stream_chat(query=inputs, history=history_feedin, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']):
+ if len(observe_window) >= 1:
+ observe_window[0] = response
+ if len(observe_window) >= 2:
+ if (time.time()-observe_window[1]) > watch_dog_patience:
+ raise RuntimeError("程序终止。")
+ return response
+
+ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream=True, additional_fn=None):
+ """
+ refer to request_llms/bridge_all.py
+ """
+ chatbot.append((inputs, ""))
+
+ _llm_handle = GetSingletonHandle().get_llm_model_instance(LLMSingletonClass)
+ chatbot[-1] = (inputs, load_message + "\n\n" + _llm_handle.get_state())
+ yield from update_ui(chatbot=chatbot, history=[])
+ if not _llm_handle.running:
+ raise RuntimeError(_llm_handle.get_state())
+
+ if additional_fn is not None:
+ from core_functional import handle_core_functionality
+ inputs, history = handle_core_functionality(
+ additional_fn, inputs, history, chatbot)
+
+ # 处理历史信息
+ if history_format == 'classic':
+ # 没有 sys_prompt 接口,因此把prompt加入 history
+ history_feedin = []
+ history_feedin.append([system_prompt, "Certainly!"])
+ for i in range(len(history)//2):
+ history_feedin.append([history[2*i], history[2*i+1]])
+ elif history_format == 'chatglm3':
+ # 有 sys_prompt 接口
+ conversation_cnt = len(history) // 2
+ history_feedin = [{"role": "system", "content": system_prompt}]
+ if conversation_cnt:
+ for index in range(0, 2*conversation_cnt, 2):
+ what_i_have_asked = {}
+ what_i_have_asked["role"] = "user"
+ what_i_have_asked["content"] = history[index]
+ what_gpt_answer = {}
+ what_gpt_answer["role"] = "assistant"
+ what_gpt_answer["content"] = history[index+1]
+ if what_i_have_asked["content"] != "":
+ if what_gpt_answer["content"] == "":
+ continue
+ history_feedin.append(what_i_have_asked)
+ history_feedin.append(what_gpt_answer)
+ else:
+ history_feedin[-1]['content'] = what_gpt_answer['content']
+
+ # 开始接收回复
+ response = f"[Local Message] 等待{model_name}响应中 ..."
+ for response in _llm_handle.stream_chat(query=inputs, history=history_feedin, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']):
+ chatbot[-1] = (inputs, response)
+ yield from update_ui(chatbot=chatbot, history=history)
+
+ # 总结输出
+ if response == f"[Local Message] 等待{model_name}响应中 ...":
+ response = f"[Local Message] {model_name}响应异常 ..."
+ history.extend([inputs, response])
+ yield from update_ui(chatbot=chatbot, history=history)
+
+ return predict_no_ui_long_connection, predict
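
The pair of functions above is returned by an enclosing factory, so each local model only has to supply its singleton handle class, a display name and a history_format. A minimal sketch of how a bridge module might consume it, assuming the factory is exported as get_local_llm_predict_fns and using an illustrative GetGLM3Handle class (both names are assumptions, not verified against the repository):

    from request_llms.local_llm_class import get_local_llm_predict_fns   # assumed module path
    from request_llms.bridge_chatglm3 import GetGLM3Handle               # illustrative singleton class

    # 'classic' smuggles the system prompt in as a fake first exchange:
    #     [[sys_prompt, "Certainly!"], [user_1, answer_1], ...]
    # 'chatglm3' builds an OpenAI-style role/content list with a real system slot:
    #     [{"role": "system", ...}, {"role": "user", ...}, {"role": "assistant", ...}]
    predict_no_ui_long_connection, predict = get_local_llm_predict_fns(
        GetGLM3Handle, model_name="chatglm3", history_format="chatglm3")

    llm_kwargs = {"max_length": 4096, "top_p": 1.0, "temperature": 1.0}
    answer = predict_no_ui_long_connection("hello", llm_kwargs, history=[],
                                           sys_prompt="You are a helpful assistant.")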
diff --git a/request_llms/queued_pipe.py b/request_llms/queued_pipe.py
new file mode 100644
index 0000000..1fc2e5b
--- /dev/null
+++ b/request_llms/queued_pipe.py
@@ -0,0 +1,24 @@
+from multiprocessing import Pipe, Queue
+import time
+import threading
+
+class PipeSide(object):
+ def __init__(self, q_2remote, q_2local) -> None:
+ self.q_2remote = q_2remote
+ self.q_2local = q_2local
+
+ def recv(self):
+ return self.q_2local.get()
+
+ def send(self, buf):
+ self.q_2remote.put(buf)
+
+ def poll(self):
+ return not self.q_2local.empty()
+
+def create_queue_pipe():
+ q_p2c = Queue()
+ q_c2p = Queue()
+ pipe_c = PipeSide(q_2local=q_p2c, q_2remote=q_c2p)
+ pipe_p = PipeSide(q_2local=q_c2p, q_2remote=q_p2c)
+ return pipe_c, pipe_p
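
queued_pipe mimics the recv/send/poll surface of multiprocessing.Pipe but is built from two Queues, which side-steps the pipe-handle inheritance issues local-model subprocesses hit under the spawn start method (e.g. on Windows). A rough usage sketch, with the worker function invented for illustration:

    from multiprocessing import Process
    from request_llms.queued_pipe import create_queue_pipe

    def llm_worker(pipe):               # hypothetical child-process entry point
        query = pipe.recv()             # blocks on the underlying Queue
        pipe.send(f"echo: {query}")

    if __name__ == "__main__":
        child_end, parent_end = create_queue_pipe()
        Process(target=llm_worker, args=(child_end,), daemon=True).start()
        parent_end.send("hello")
        print(parent_end.recv())        # poll() is available for non-blocking checks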
diff --git a/request_llm/requirements_chatglm.txt b/request_llms/requirements_chatglm.txt
similarity index 100%
rename from request_llm/requirements_chatglm.txt
rename to request_llms/requirements_chatglm.txt
diff --git a/request_llm/requirements_chatglm_onnx.txt b/request_llms/requirements_chatglm_onnx.txt
similarity index 100%
rename from request_llm/requirements_chatglm_onnx.txt
rename to request_llms/requirements_chatglm_onnx.txt
diff --git a/request_llm/requirements_jittorllms.txt b/request_llms/requirements_jittorllms.txt
similarity index 100%
rename from request_llm/requirements_jittorllms.txt
rename to request_llms/requirements_jittorllms.txt
diff --git a/request_llm/requirements_moss.txt b/request_llms/requirements_moss.txt
similarity index 100%
rename from request_llm/requirements_moss.txt
rename to request_llms/requirements_moss.txt
diff --git a/request_llm/requirements_newbing.txt b/request_llms/requirements_newbing.txt
similarity index 100%
rename from request_llm/requirements_newbing.txt
rename to request_llms/requirements_newbing.txt
diff --git a/request_llm/requirements_qwen.txt b/request_llms/requirements_qwen.txt
similarity index 100%
rename from request_llm/requirements_qwen.txt
rename to request_llms/requirements_qwen.txt
diff --git a/request_llm/requirements_slackclaude.txt b/request_llms/requirements_slackclaude.txt
similarity index 100%
rename from request_llm/requirements_slackclaude.txt
rename to request_llms/requirements_slackclaude.txt
diff --git a/requirements.txt b/requirements.txt
index e832a28..a5782f7 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -1,23 +1,24 @@
./docs/gradio-3.32.6-py3-none-any.whl
-pydantic==1.10.11
+pypdf2==2.12.1
tiktoken>=0.3.3
requests[socks]
+pydantic==1.10.11
transformers>=4.27.1
+scipdf_parser>=0.52
python-markdown-math
+websocket-client
beautifulsoup4
prompt_toolkit
latex2mathml
python-docx
mdtex2html
anthropic
+pyautogen
colorama
Markdown
pygments
pymupdf
openai
-numpy
arxiv
+numpy
rich
-pypdf2==2.12.1
-websocket-client
-scipdf_parser>=0.3
diff --git a/tests/test_llms.py b/tests/test_llms.py
index 75e2303..6285f03 100644
--- a/tests/test_llms.py
+++ b/tests/test_llms.py
@@ -10,14 +10,16 @@ def validate_path():
validate_path() # validate path so you can run from base directory
if __name__ == "__main__":
- # from request_llm.bridge_newbingfree import predict_no_ui_long_connection
- # from request_llm.bridge_moss import predict_no_ui_long_connection
- # from request_llm.bridge_jittorllms_pangualpha import predict_no_ui_long_connection
- # from request_llm.bridge_jittorllms_llama import predict_no_ui_long_connection
- # from request_llm.bridge_claude import predict_no_ui_long_connection
- # from request_llm.bridge_internlm import predict_no_ui_long_connection
- # from request_llm.bridge_qwen import predict_no_ui_long_connection
- from request_llm.bridge_spark import predict_no_ui_long_connection
+ # from request_llms.bridge_newbingfree import predict_no_ui_long_connection
+ # from request_llms.bridge_moss import predict_no_ui_long_connection
+ # from request_llms.bridge_jittorllms_pangualpha import predict_no_ui_long_connection
+ # from request_llms.bridge_jittorllms_llama import predict_no_ui_long_connection
+ # from request_llms.bridge_claude import predict_no_ui_long_connection
+ from request_llms.bridge_internlm import predict_no_ui_long_connection
+ # from request_llms.bridge_qwen import predict_no_ui_long_connection
+ # from request_llms.bridge_spark import predict_no_ui_long_connection
+ # from request_llms.bridge_zhipu import predict_no_ui_long_connection
+ # from request_llms.bridge_chatglm3 import predict_no_ui_long_connection
llm_kwargs = {
'max_length': 4096,
diff --git a/tests/test_markdown.py b/tests/test_markdown.py
new file mode 100644
index 0000000..c92b4c4
--- /dev/null
+++ b/tests/test_markdown.py
@@ -0,0 +1,44 @@
+md = """
+作为您的写作和编程助手,我可以为您提供以下服务:
+
+1. 写作:
+ - 帮助您撰写文章、报告、散文、故事等。
+ - 提供写作建议和技巧。
+ - 协助您进行文案策划和内容创作。
+
+2. 编程:
+ - 帮助您解决编程问题,提供编程思路和建议。
+ - 协助您编写代码,包括但不限于 Python、Java、C++ 等。
+ - 为您解释复杂的技术概念,让您更容易理解。
+
+3. 项目支持:
+ - 协助您规划项目进度和任务分配。
+ - 提供项目管理和协作建议。
+ - 在项目实施过程中提供支持,确保项目顺利进行。
+
+4. 学习辅导:
+ - 帮助您巩固编程基础,提高编程能力。
+ - 提供计算机科学、数据科学、人工智能等相关领域的学习资源和建议。
+ - 解答您在学习过程中遇到的问题,让您更好地掌握知识。
+
+5. 行业动态和趋势分析:
+ - 为您提供业界最新的新闻和技术趋势。
+ - 分析行业动态,帮助您了解市场发展和竞争态势。
+ - 为您制定技术战略提供参考和建议。
+
+请随时告诉我您的需求,我会尽力提供帮助。如果您有任何问题或需要解答的议题,请随时提问。
+"""
+
+def validate_path():
+ import os, sys
+ dir_name = os.path.dirname(__file__)
+ root_dir_assume = os.path.abspath(os.path.dirname(__file__) + '/..')
+ os.chdir(root_dir_assume)
+ sys.path.append(root_dir_assume)
+validate_path() # validate path so you can run from base directory
+from toolbox import markdown_convertion
+
+html = markdown_convertion(md)
+print(html)
+with open('test.html', 'w', encoding='utf-8') as f:
+ f.write(html)
\ No newline at end of file
diff --git a/themes/default.css b/themes/default.css
index 65d5940..7c1d400 100644
--- a/themes/default.css
+++ b/themes/default.css
@@ -1,3 +1,8 @@
+/* 插件下拉菜单 */
+#elem_audio {
+ border-style: hidden !important;
+}
+
.dark {
--background-fill-primary: #050810;
--body-background-fill: var(--background-fill-primary);
diff --git a/themes/gradios.py b/themes/gradios.py
index 7693a23..96a9c54 100644
--- a/themes/gradios.py
+++ b/themes/gradios.py
@@ -18,7 +18,7 @@ def adjust_theme():
set_theme = gr.themes.ThemeClass()
with ProxyNetworkActivate('Download_Gradio_Theme'):
logging.info('正在下载Gradio主题,请稍等。')
- THEME, = get_conf('THEME')
+ THEME = get_conf('THEME')
if THEME.startswith('Huggingface-'): THEME = THEME.lstrip('Huggingface-')
if THEME.startswith('huggingface-'): THEME = THEME.lstrip('huggingface-')
set_theme = set_theme.from_hub(THEME.lower())
diff --git a/themes/theme.py b/themes/theme.py
index 42ee750..f59db9f 100644
--- a/themes/theme.py
+++ b/themes/theme.py
@@ -1,6 +1,6 @@
import gradio as gr
from toolbox import get_conf
-THEME, = get_conf('THEME')
+THEME = get_conf('THEME')
def load_dynamic_theme(THEME):
adjust_dynamic_theme = None
diff --git a/toolbox.py b/toolbox.py
index cd6cd1c..b7b762d 100644
--- a/toolbox.py
+++ b/toolbox.py
@@ -7,6 +7,7 @@ import os
import gradio
import shutil
import glob
+import math
from latex2mathml.converter import convert as tex2mathml
from functools import wraps, lru_cache
pj = os.path.join
@@ -151,13 +152,13 @@ def CatchException(f):
except Exception as e:
from check_proxy import check_proxy
from toolbox import get_conf
- proxies, = get_conf('proxies')
+ proxies = get_conf('proxies')
tb_str = '```\n' + trimmed_format_exc() + '```'
if len(chatbot_with_cookie) == 0:
chatbot_with_cookie.clear()
chatbot_with_cookie.append(["插件调度异常", "异常原因"])
chatbot_with_cookie[-1] = (chatbot_with_cookie[-1][0],
- f"[Local Message] 实验性函数调用出错: \n\n{tb_str} \n\n当前代理可用性: \n\n{check_proxy(proxies)}")
+ f"[Local Message] 插件调用出错: \n\n{tb_str} \n\n当前代理可用性: \n\n{check_proxy(proxies)}")
yield from update_ui(chatbot=chatbot_with_cookie, history=history, msg=f'异常 {e}') # 刷新界面
return decorated
@@ -186,7 +187,7 @@ def HotReload(f):
其他小工具:
- write_history_to_file: 将结果写入markdown文件中
- regular_txt_to_markdown: 将普通文本转换为Markdown格式的文本。
- - report_execption: 向chatbot中添加简单的意外错误信息
+ - report_exception: 向chatbot中添加简单的意外错误信息
- text_divide_paragraph: 将文本按照段落分隔符分割开,生成带有段落标签的HTML代码。
- markdown_convertion: 用多种方式组合,将markdown转化为好看的html
- format_io: 接管gradio默认的markdown处理方式
@@ -259,7 +260,7 @@ def regular_txt_to_markdown(text):
-def report_execption(chatbot, history, a, b):
+def report_exception(chatbot, history, a, b):
"""
向chatbot中添加错误信息
"""
@@ -278,9 +279,12 @@ def text_divide_paragraph(text):
if '```' in text:
# careful input
- return pre + text + suf
+ return text
+    elif '</div>' in text:
+ # careful input
+ return text
else:
- # wtf input
+ # whatever input
lines = text.split("\n")
for i, line in enumerate(lines):
             lines[i] = lines[i].replace(" ", "&nbsp;")
@@ -372,6 +376,26 @@ def markdown_convertion(txt):
contain_any_eq = True
return contain_any_eq
+ def fix_markdown_indent(txt):
+ # fix markdown indent
+ if (' - ' not in txt) or ('. ' not in txt):
+ return txt # do not need to fix, fast escape
+ # walk through the lines and fix non-standard indentation
+ lines = txt.split("\n")
+ pattern = re.compile(r'^\s+-')
+ activated = False
+ for i, line in enumerate(lines):
+ if line.startswith('- ') or line.startswith('1. '):
+ activated = True
+ if activated and pattern.match(line):
+ stripped_string = line.lstrip()
+ num_spaces = len(line) - len(stripped_string)
+ if (num_spaces % 4) == 3:
+ num_spaces_should_be = math.ceil(num_spaces/4) * 4
+ lines[i] = ' ' * num_spaces_should_be + stripped_string
+ return '\n'.join(lines)
+
+ txt = fix_markdown_indent(txt)
if is_equation(txt): # 有$标识的公式符号,且没有代码段```的标识
# convert everything to html format
split = markdown.markdown(text='---')
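
The indent fix above only rewrites nested bullets whose leading whitespace falls 1 short of a multiple of 4 (3, 7, ... spaces), rounding them up so Markdown parses them as sub-items. Conceptually (fix_markdown_indent is a local helper inside markdown_convertion, so this is a description of its behaviour rather than a callable test):

    before = "1. Writing:\n   - draft articles\n   - polish wording"   # bullets indented 3 spaces
    after  = "1. Writing:\n    - draft articles\n    - polish wording" # normalized to 4 spaces
    # fix_markdown_indent(before) would yield `after`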
@@ -534,14 +558,14 @@ def disable_auto_promotion(chatbot):
return
def is_the_upload_folder(string):
- PATH_PRIVATE_UPLOAD, = get_conf('PATH_PRIVATE_UPLOAD')
+ PATH_PRIVATE_UPLOAD = get_conf('PATH_PRIVATE_UPLOAD')
pattern = r'^PATH_PRIVATE_UPLOAD/[A-Za-z0-9_-]+/\d{4}-\d{2}-\d{2}-\d{2}-\d{2}-\d{2}$'
pattern = pattern.replace('PATH_PRIVATE_UPLOAD', PATH_PRIVATE_UPLOAD)
if re.match(pattern, string): return True
else: return False
def del_outdated_uploads(outdate_time_seconds):
- PATH_PRIVATE_UPLOAD, = get_conf('PATH_PRIVATE_UPLOAD')
+ PATH_PRIVATE_UPLOAD = get_conf('PATH_PRIVATE_UPLOAD')
current_time = time.time()
one_hour_ago = current_time - outdate_time_seconds
# Get a list of all subdirectories in the PATH_PRIVATE_UPLOAD folder
@@ -567,7 +591,7 @@ def on_file_uploaded(request: gradio.Request, files, chatbot, txt, txt2, checkbo
# 创建工作路径
user_name = "default" if not request.username else request.username
time_tag = gen_time_str()
- PATH_PRIVATE_UPLOAD, = get_conf('PATH_PRIVATE_UPLOAD')
+ PATH_PRIVATE_UPLOAD = get_conf('PATH_PRIVATE_UPLOAD')
target_path_base = pj(PATH_PRIVATE_UPLOAD, user_name, time_tag)
os.makedirs(target_path_base, exist_ok=True)
@@ -604,13 +628,14 @@ def on_file_uploaded(request: gradio.Request, files, chatbot, txt, txt2, checkbo
def on_report_generated(cookies, files, chatbot):
- from toolbox import find_recent_files
- PATH_LOGGING, = get_conf('PATH_LOGGING')
+ # from toolbox import find_recent_files
+ # PATH_LOGGING = get_conf('PATH_LOGGING')
if 'files_to_promote' in cookies:
report_files = cookies['files_to_promote']
cookies.pop('files_to_promote')
else:
- report_files = find_recent_files(PATH_LOGGING)
+ report_files = []
+ # report_files = find_recent_files(PATH_LOGGING)
if len(report_files) == 0:
return cookies, None, chatbot
# files.extend(report_files)
@@ -621,10 +646,21 @@ def on_report_generated(cookies, files, chatbot):
def load_chat_cookies():
API_KEY, LLM_MODEL, AZURE_API_KEY = get_conf('API_KEY', 'LLM_MODEL', 'AZURE_API_KEY')
- DARK_MODE, NUM_CUSTOM_BASIC_BTN = get_conf('DARK_MODE', 'NUM_CUSTOM_BASIC_BTN')
+ AZURE_CFG_ARRAY, NUM_CUSTOM_BASIC_BTN = get_conf('AZURE_CFG_ARRAY', 'NUM_CUSTOM_BASIC_BTN')
+
+ # deal with azure openai key
if is_any_api_key(AZURE_API_KEY):
if is_any_api_key(API_KEY): API_KEY = API_KEY + ',' + AZURE_API_KEY
else: API_KEY = AZURE_API_KEY
+ if len(AZURE_CFG_ARRAY) > 0:
+ for azure_model_name, azure_cfg_dict in AZURE_CFG_ARRAY.items():
+ if not azure_model_name.startswith('azure'):
+ raise ValueError("AZURE_CFG_ARRAY中配置的模型必须以azure开头")
+ AZURE_API_KEY_ = azure_cfg_dict["AZURE_API_KEY"]
+ if is_any_api_key(AZURE_API_KEY_):
+ if is_any_api_key(API_KEY): API_KEY = API_KEY + ',' + AZURE_API_KEY_
+ else: API_KEY = AZURE_API_KEY_
+
customize_fn_overwrite_ = {}
for k in range(NUM_CUSTOM_BASIC_BTN):
customize_fn_overwrite_.update({
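
load_chat_cookies now also harvests keys from AZURE_CFG_ARRAY, so several Azure deployments can coexist; each entry's name must start with "azure" and carry an AZURE_API_KEY field. A hedged example of what a config.py entry might look like (only the name prefix and AZURE_API_KEY are asserted by this patch; other per-deployment fields are described in the project wiki):

    AZURE_CFG_ARRAY = {
        "azure-gpt-35": {                    # key must start with "azure"
            "AZURE_API_KEY": "cccccccccc",   # merged into the combined API_KEY string
            # endpoint / engine / max-token fields go here as documented in the wiki
        },
    }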
@@ -637,7 +673,7 @@ def load_chat_cookies():
return {'api_key': API_KEY, 'llm_model': LLM_MODEL, 'customize_fn_overwrite': customize_fn_overwrite_}
def is_openai_api_key(key):
- CUSTOM_API_KEY_PATTERN, = get_conf('CUSTOM_API_KEY_PATTERN')
+ CUSTOM_API_KEY_PATTERN = get_conf('CUSTOM_API_KEY_PATTERN')
if len(CUSTOM_API_KEY_PATTERN) != 0:
API_MATCH_ORIGINAL = re.match(CUSTOM_API_KEY_PATTERN, key)
else:
@@ -772,6 +808,11 @@ def read_single_conf_with_lru_cache(arg):
r = getattr(importlib.import_module('config'), arg)
# 在读取API_KEY时,检查一下是不是忘了改config
+ if arg == 'API_URL_REDIRECT':
+ oai_rd = r.get("https://api.openai.com/v1/chat/completions", None) # API_URL_REDIRECT填写格式是错误的,请阅读`https://github.com/binary-husky/gpt_academic/wiki/项目配置说明`
+ if oai_rd and not oai_rd.endswith('/completions'):
+ print亮红( "\n\n[API_URL_REDIRECT] API_URL_REDIRECT填错了。请阅读`https://github.com/binary-husky/gpt_academic/wiki/项目配置说明`。如果您确信自己没填错,无视此消息即可。")
+ time.sleep(5)
if arg == 'API_KEY':
print亮蓝(f"[API_KEY] 本项目现已支持OpenAI和Azure的api-key。也支持同时填写多个api-key,如API_KEY=\"openai-key1,openai-key2,azure-key3\"")
print亮蓝(f"[API_KEY] 您既可以在config.py中修改api-key(s),也可以在问题输入区输入临时的api-key(s),然后回车键提交后即可生效。")
@@ -796,6 +837,7 @@ def get_conf(*args):
for arg in args:
r = read_single_conf_with_lru_cache(arg)
res.append(r)
+ if len(res) == 1: return res[0]
return res
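
This one-line change is what lets every `X, = get_conf('X')` tuple-unpacking call site in this patch become a plain assignment: when a single key is requested, get_conf now returns the value itself instead of a one-element list, while multi-key calls behave as before. Illustration:

    proxies = get_conf('proxies')                            # single key -> bare value
    API_KEY, LLM_MODEL = get_conf('API_KEY', 'LLM_MODEL')    # multiple keys -> list, unpacked as before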
@@ -867,7 +909,7 @@ def clip_history(inputs, history, tokenizer, max_token_limit):
直到历史记录的标记数量降低到阈值以下。
"""
import numpy as np
- from request_llm.bridge_all import model_info
+ from request_llms.bridge_all import model_info
def get_token_num(txt):
return len(tokenizer.encode(txt, disallowed_special=()))
input_token_num = get_token_num(inputs)
@@ -957,7 +999,7 @@ def gen_time_str():
return time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime())
def get_log_folder(user='default', plugin_name='shared'):
- PATH_LOGGING, = get_conf('PATH_LOGGING')
+ PATH_LOGGING = get_conf('PATH_LOGGING')
_dir = pj(PATH_LOGGING, user, plugin_name)
if not os.path.exists(_dir): os.makedirs(_dir)
return _dir
@@ -974,13 +1016,13 @@ class ProxyNetworkActivate():
else:
# 给定了task, 我们检查一下
from toolbox import get_conf
- WHEN_TO_USE_PROXY, = get_conf('WHEN_TO_USE_PROXY')
+ WHEN_TO_USE_PROXY = get_conf('WHEN_TO_USE_PROXY')
self.valid = (task in WHEN_TO_USE_PROXY)
def __enter__(self):
if not self.valid: return self
from toolbox import get_conf
- proxies, = get_conf('proxies')
+ proxies = get_conf('proxies')
if 'no_proxy' in os.environ: os.environ.pop('no_proxy')
if proxies is not None:
if 'http' in proxies: os.environ['HTTP_PROXY'] = proxies['http']
@@ -1022,7 +1064,7 @@ def Singleton(cls):
"""
========================================================================
第四部分
-接驳虚空终端:
+接驳void-terminal:
- set_conf: 在运行过程中动态地修改配置
- set_multi_conf: 在运行过程中动态地修改多个配置
- get_plugin_handle: 获取插件的句柄
@@ -1037,7 +1079,7 @@ def set_conf(key, value):
read_single_conf_with_lru_cache.cache_clear()
get_conf.cache_clear()
os.environ[key] = str(value)
- altered, = get_conf(key)
+ altered = get_conf(key)
return altered
def set_multi_conf(dic):
@@ -1058,20 +1100,17 @@ def get_plugin_handle(plugin_name):
def get_chat_handle():
"""
"""
- from request_llm.bridge_all import predict_no_ui_long_connection
+ from request_llms.bridge_all import predict_no_ui_long_connection
return predict_no_ui_long_connection
def get_plugin_default_kwargs():
"""
"""
- from toolbox import get_conf, ChatBotWithCookies
-
- WEB_PORT, LLM_MODEL, API_KEY = \
- get_conf('WEB_PORT', 'LLM_MODEL', 'API_KEY')
-
+ from toolbox import ChatBotWithCookies
+ cookies = load_chat_cookies()
llm_kwargs = {
- 'api_key': API_KEY,
- 'llm_model': LLM_MODEL,
+ 'api_key': cookies['api_key'],
+ 'llm_model': cookies['llm_model'],
'top_p':1.0,
'max_length': None,
'temperature':1.0,
@@ -1086,25 +1125,21 @@ def get_plugin_default_kwargs():
"chatbot_with_cookie": chatbot,
"history": [],
"system_prompt": "You are a good AI.",
- "web_port": WEB_PORT
+ "web_port": None
}
return DEFAULT_FN_GROUPS_kwargs
def get_chat_default_kwargs():
"""
"""
- from toolbox import get_conf
-
- LLM_MODEL, API_KEY = get_conf('LLM_MODEL', 'API_KEY')
-
+ cookies = load_chat_cookies()
llm_kwargs = {
- 'api_key': API_KEY,
- 'llm_model': LLM_MODEL,
+ 'api_key': cookies['api_key'],
+ 'llm_model': cookies['llm_model'],
'top_p':1.0,
'max_length': None,
'temperature':1.0,
}
-
default_chat_kwargs = {
"inputs": "Hello there, are you ready?",
"llm_kwargs": llm_kwargs,
@@ -1116,3 +1151,12 @@ def get_chat_default_kwargs():
return default_chat_kwargs
+def get_max_token(llm_kwargs):
+ from request_llms.bridge_all import model_info
+ return model_info[llm_kwargs['llm_model']]['max_token']
+
+def check_packages(packages=[]):
+ import importlib.util
+ for p in packages:
+ spam_spec = importlib.util.find_spec(p)
+ if spam_spec is None: raise ModuleNotFoundError
\ No newline at end of file
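
The two helpers appended to toolbox.py are small conveniences for plugins: get_max_token looks up the context limit of the currently selected model, and check_packages fails fast with ModuleNotFoundError before a long job starts. A usage sketch (the plugin function is hypothetical):

    from toolbox import check_packages, get_max_token

    def my_plugin(txt, llm_kwargs, *args):            # hypothetical plugin entry point
        check_packages(["tiktoken", "openai"])        # raise early if dependencies are missing
        budget = get_max_token(llm_kwargs) * 3 // 4   # keep headroom for the model's answer
        ...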
diff --git a/version b/version
index 1470eb4..81ad2fd 100644
--- a/version
+++ b/version
@@ -1,5 +1,5 @@
{
- "version": 3.56,
+ "version": 3.60,
"show_feature": true,
- "new_feature": "支持动态追加基础功能按钮 <-> 新汇报PDF汇总页面 <-> 重新编译Gradio优化使用体验 <-> 新增动态代码解释器(CodeInterpreter) <-> 增加文本回答复制按钮 <-> 细分代理场合 <-> 支持动态选择不同界面主题 <-> 提高稳定性&解决多用户冲突问题 <-> 支持插件分类和更多UI皮肤外观 <-> 支持用户使用自然语言调度各个插件(虚空终端) ! <-> 改进UI,设计新主题 <-> 支持借助GROBID实现PDF高精度翻译 <-> 接入百度千帆平台和文心一言 <-> 接入阿里通义千问、讯飞星火、上海AI-Lab书生 <-> 优化一键升级 <-> 提高arxiv翻译速度和成功率"
+ "new_feature": "11月12日紧急BUG修复 <-> AutoGen多智能体插件测试版 <-> 修复本地模型在Windows下的加载BUG <-> 支持文心一言v4和星火v3 <-> 支持GLM3和智谱的API <-> 解决本地模型并发BUG <-> 支持动态追加基础功能按钮 <-> 新汇报PDF汇总页面 <-> 重新编译Gradio优化使用体验"
}