Merge branch 'v3.3'

commit 8f571ff68f

README.md (10 changed lines)
@@ -25,24 +25,26 @@ If you like this project, please give it a Star. If you've come up with more use
 --- | ---
 One-click polishing | Supports one-click polishing and one-click checking for grammar mistakes in papers
 One-click Chinese-English translation | One-click Chinese-English translation
-One-click code explanation | Can display and explain code correctly
+One-click code explanation | Display code, explain code, generate code, and add comments to code
 [Custom shortcut keys](https://www.bilibili.com/video/BV14s4y1E7jN) | Supports custom shortcut keys
 [Proxy server configuration](https://www.bilibili.com/video/BV1rc411W7Dr) | Supports proxy connections to OpenAI/Google etc., instantly unlocking ChatGPT's internet [real-time information aggregation](https://www.bilibili.com/video/BV1om4y127ck/) capability
 Modular design | Supports powerful custom [function plugins](https://github.com/binary-husky/chatgpt_academic/tree/master/crazy_functions); plugins support [hot reloading](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
 [Self program analysis](https://www.bilibili.com/video/BV1cj411A7VW) | [Function plugin] [One-click understanding](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A) of this project's own source code
 [Program analysis](https://www.bilibili.com/video/BV1cj411A7VW) | [Function plugin] One-click analysis of other Python/C/C++/Java/Lua/... project trees
-Paper reading | [Function plugin] One-click interpretation of a full latex paper, generating an abstract
+Paper reading and translation | [Function plugin] One-click interpretation of a full latex/pdf paper, generating an abstract
 Full Latex [translation](https://www.bilibili.com/video/BV1nk4y1Y7Js/) and [polishing](https://www.bilibili.com/video/BV1FT411H7c5/) | [Function plugin] One-click translation or polishing of latex papers
 Batch comment generation | [Function plugin] One-click batch generation of function comments
-Chat analysis report generation | [Function plugin] Automatically generates a summary report after running
 Markdown [Chinese-English translation](https://www.bilibili.com/video/BV1yo4y157jV/) | [Function plugin] Did you notice the [README](https://github.com/binary-husky/chatgpt_academic/blob/master/docs/README_EN.md) in the five languages above?
-[arxiv assistant](https://www.bilibili.com/video/BV1LM4y1279X) | [Function plugin] Enter an arxiv article url to translate the abstract and download the PDF in one click
+Chat analysis report generation | [Function plugin] Automatically generates a summary report after running
 [PDF paper full-text translation](https://www.bilibili.com/video/BV1KT411x7Wn) | [Function plugin] Extract the title & abstract of a PDF paper and translate the full text (multithreaded)
+[Arxiv assistant](https://www.bilibili.com/video/BV1LM4y1279X) | [Function plugin] Enter an arxiv article url to translate the abstract and download the PDF in one click
 [Google Scholar integration assistant](https://www.bilibili.com/video/BV19L411U7ia) | [Function plugin] Given any Google Scholar search page URL, let gpt [write the related works section](https://www.bilibili.com/video/BV1GP411U7Az/) for you
+Internet information aggregation + GPT | [Function plugin] One click makes ChatGPT do a Google search first and then answer, so its information never goes stale
 Formula/image/table display | Can show both the [tex form and the rendered form](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png) of formulas; supports formula and code highlighting
 Multithreaded function plugin support | Supports multithreaded calls to chatgpt, processing [huge amounts of text](https://www.bilibili.com/video/BV1FT411H7c5/) or whole programs in one click
 Dark gradio [theme](https://github.com/binary-husky/chatgpt_academic/issues/173) at startup | Append ```/?__dark-theme=true``` to the browser url to switch to the dark theme
 [Multiple LLM models](https://www.bilibili.com/video/BV1wT411p7yf) supported, [API2D](https://api2d.com/) interface supported | Being served by GPT3.5, GPT4 and [Tsinghua ChatGLM](https://github.com/THUDM/ChatGLM-6B) at the same time must feel wonderful, right?
+More LLM model integrations | Newly added Newbing test interface (New Bing AI)
 huggingface [online experience](https://huggingface.co/spaces/qingxu98/gpt-academic) without a VPN | Log in to huggingface, then copy [this space](https://huggingface.co/spaces/qingxu98/gpt-academic)
 …… | ……
 
config.py (10 changed lines)
@@ -45,7 +45,7 @@ MAX_RETRY = 2
 
 # OpenAI model selection (gpt4 is currently only open to approved applicants; to try gpt-4, give api2d a shot)
 LLM_MODEL = "gpt-3.5-turbo" # options ↓↓↓
-AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm"]
+AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm", "newbing"]
 
 # Execution mode (CPU/GPU) for local LLM models such as ChatGLM
 LOCAL_MODEL_DEVICE = "cpu" # optional: "cuda"
@@ -58,8 +58,14 @@ CONCURRENT_COUNT = 100
 AUTHENTICATION = []
 
 # URL redirection, which effectively swaps out API_URL (normally, do not modify!!)
-# Format: {"https://api.openai.com/v1/chat/completions": "the redirected URL"}
+# Format: {"https://api.openai.com/v1/chat/completions": "fill in the redirected api.openai.com URL here"}
 API_URL_REDIRECT = {}
 
 # If you need to run under a secondary path (normally, do not modify!!) (requires matching changes in main.py to take effect!)
 CUSTOM_PATH = "/"
+
+# If you need to use newbing, put newbing's very long cookie here
+NEWBING_STYLE = "creative"  # ["creative", "balanced", "precise"]
+NEWBING_COOKIES = """
+your bing cookies here
+"""
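A note on the expected shape of NEWBING_COOKIES: the new bridge (request_llm/bridge_newbing.py, below) parses this value with `json.loads`, and edge_gpt.py then iterates the entries as `cookie["name"]` / `cookie["value"]`, so the string must hold a JSON array of cookie objects. A minimal sketch, with purely illustrative cookie names and placeholder values:

```python
# Illustrative only: the cookie names and values are placeholders. What
# matters is the shape: a JSON list of {"name": ..., "value": ...} objects,
# since bridge_newbing.py calls json.loads(NEWBING_COOKIES) and edge_gpt.py
# does session.cookies.set(cookie["name"], cookie["value"]).
NEWBING_COOKIES = """
[
    {"name": "_U", "value": "<paste your cookie value here>"},
    {"name": "SRCHHPGUSR", "value": "<paste your cookie value here>"}
]
"""
```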
crazy_functions/crazy_utils.py

@@ -1,5 +1,4 @@
-import traceback
-from toolbox import update_ui, get_conf
+from toolbox import update_ui, get_conf, trimmed_format_exc
 
 def input_clipping(inputs, history, max_token_limit):
     import numpy as np
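`trimmed_format_exc` replaces `traceback.format_exc()` throughout this commit. Its body is not part of the diff; a plausible sketch of such a helper, assuming the intent is a traceback that does not leak the local directory layout (an assumption, not the confirmed implementation):

```python
import os
import traceback

def trimmed_format_exc():
    # Hypothetical sketch: format the current exception like
    # traceback.format_exc(), but mask the absolute working directory so
    # user paths never end up in chat output.
    tb = traceback.format_exc()
    return tb.replace(os.getcwd(), ".")
```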
@@ -94,12 +93,12 @@ def request_gpt_model_in_new_thread_with_ui_alive(
                     continue # go back and retry
                 else:
                     # [chose to give up]
-                    tb_str = '```\n' + traceback.format_exc() + '```'
+                    tb_str = '```\n' + trimmed_format_exc() + '```'
                     mutable[0] += f"[Local Message] 警告,在执行过程中遭遇问题, Traceback:\n\n{tb_str}\n\n"
                     return mutable[0] # give up
             except:
                 # [case three]: other errors: retry a few times
-                tb_str = '```\n' + traceback.format_exc() + '```'
+                tb_str = '```\n' + trimmed_format_exc() + '```'
                 print(tb_str)
                 mutable[0] += f"[Local Message] 警告,在执行过程中遭遇问题, Traceback:\n\n{tb_str}\n\n"
                 if retry_op > 0:
@@ -173,7 +172,7 @@ def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
     if max_workers == -1: # read from the config file
         try: max_workers, = get_conf('DEFAULT_WORKER_NUM')
         except: max_workers = 8
-        if max_workers <= 0 or max_workers >= 20: max_workers = 8
+        if max_workers <= 0: max_workers = 3
     # disable multithreading for chatglm; it may cause severe lag
     if not (llm_kwargs['llm_model'].startswith('gpt-') or llm_kwargs['llm_model'].startswith('api2d-')):
         max_workers = 1
@@ -220,14 +219,14 @@ def request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
                     continue # go back and retry
                 else:
                     # [chose to give up]
-                    tb_str = '```\n' + traceback.format_exc() + '```'
+                    tb_str = '```\n' + trimmed_format_exc() + '```'
                     gpt_say += f"[Local Message] 警告,线程{index}在执行过程中遭遇问题, Traceback:\n\n{tb_str}\n\n"
                     if len(mutable[index][0]) > 0: gpt_say += "此线程失败前收到的回答:\n\n" + mutable[index][0]
                     mutable[index][2] = "输入过长已放弃"
                     return gpt_say # give up
             except:
                 # [case three]: other errors
-                tb_str = '```\n' + traceback.format_exc() + '```'
+                tb_str = '```\n' + trimmed_format_exc() + '```'
                 print(tb_str)
                 gpt_say += f"[Local Message] 警告,线程{index}在执行过程中遭遇问题, Traceback:\n\n{tb_str}\n\n"
                 if len(mutable[index][0]) > 0: gpt_say += "此线程失败前收到的回答:\n\n" + mutable[index][0]
crazy_functions/解析项目源代码.py

@@ -1,5 +1,6 @@
 from toolbox import update_ui
 from toolbox import CatchException, report_execption, write_results_to_file
+from .crazy_utils import input_clipping
 
 def 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt):
     import os, copy
@@ -61,13 +62,15 @@ def 解析源代码新(file_manifest, project_folder, llm_kwargs, plugin_kwargs,
         previous_iteration_files.extend([os.path.relpath(fp, project_folder) for index, fp in enumerate(this_iteration_file_manifest)])
         previous_iteration_files_string = ', '.join(previous_iteration_files)
         current_iteration_focus = ', '.join([os.path.relpath(fp, project_folder) for index, fp in enumerate(this_iteration_file_manifest)])
-        i_say = f'根据以上分析,对程序的整体功能和构架重新做出概括。然后用一张markdown表格整理每个文件的功能(包括{previous_iteration_files_string})。'
+        i_say = f'用一张Markdown表格简要描述以下文件的功能:{previous_iteration_files_string}。根据以上分析,用一句话概括程序的整体功能。'
         inputs_show_user = f'根据以上分析,对程序的整体功能和构架重新做出概括,由于输入长度限制,可能需要分组处理,本组文件为 {current_iteration_focus} + 已经汇总的文件组。'
         this_iteration_history = copy.deepcopy(this_iteration_gpt_response_collection)
         this_iteration_history.append(last_iteration_result)
+        # clip the input
+        inputs, this_iteration_history_feed = input_clipping(inputs=i_say, history=this_iteration_history, max_token_limit=2560)
         result = yield from request_gpt_model_in_new_thread_with_ui_alive(
-            inputs=i_say, inputs_show_user=inputs_show_user, llm_kwargs=llm_kwargs, chatbot=chatbot,
-            history=this_iteration_history,   # analysis from previous iterations
+            inputs=inputs, inputs_show_user=inputs_show_user, llm_kwargs=llm_kwargs, chatbot=chatbot,
+            history=this_iteration_history_feed,   # analysis from previous iterations
             sys_prompt="你是一个程序架构分析师,正在分析一个项目的源代码。")
         report_part_2.extend([i_say, result])
         last_iteration_result = result
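The new `input_clipping` call above trims the prompt and accumulated history to a 2560-token budget before each request, which is what lets this summarization loop iterate over large projects without overflowing the context window. A reduced sketch of the idea, with a generic `count_tokens` standing in for the model tokenizer the real helper uses:

```python
def input_clipping_sketch(inputs, history, max_token_limit, count_tokens=len):
    # Drop the oldest history entries until prompt + history fit the budget;
    # count_tokens is a stand-in for the real tokenizer-based counter.
    while history and count_tokens(inputs + "".join(history)) > max_token_limit:
        history = history[1:]
    return inputs, history

# e.g. with character count as a crude token proxy:
print(input_clipping_sketch("summarize", ["a" * 3000, "b" * 100], 2560))
```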
							
								
								
									
main.py (3 changed lines)

@@ -173,9 +173,6 @@ def main():
             yield from ArgsGeneralWrapper(crazy_fns[k]["Function"])(*args, **kwargs)
         click_handle = switchy_bt.click(route,[switchy_bt, *input_combo, gr.State(PORT)], output_combo)
         click_handle.then(on_report_generated, [file_upload, chatbot], [file_upload, chatbot])
-        # def expand_file_area(file_upload, area_file_up):
-        #     if len(file_upload)>0: return {area_file_up: gr.update(open=True)}
-        # click_handle.then(expand_file_area, [file_upload, area_file_up], [area_file_up])
         cancel_handles.append(click_handle)
         # register the callback for the stop button
         stopBtn.click(fn=None, inputs=None, outputs=None, cancels=cancel_handles)
request_llm/bridge_all.py

@@ -11,7 +11,7 @@
 import tiktoken
 from functools import lru_cache
 from concurrent.futures import ThreadPoolExecutor
-from toolbox import get_conf
+from toolbox import get_conf, trimmed_format_exc
 
 from .bridge_chatgpt import predict_no_ui_long_connection as chatgpt_noui
 from .bridge_chatgpt import predict as chatgpt_ui
@@ -19,6 +19,9 @@ from .bridge_chatgpt import predict as chatgpt_ui
 from .bridge_chatglm import predict_no_ui_long_connection as chatglm_noui
 from .bridge_chatglm import predict as chatglm_ui
 
+from .bridge_newbing import predict_no_ui_long_connection as newbing_noui
+from .bridge_newbing import predict as newbing_ui
+
 # from .bridge_tgui import predict_no_ui_long_connection as tgui_noui
 # from .bridge_tgui import predict as tgui_ui
 
@@ -48,6 +51,7 @@ class LazyloadTiktoken(object):
 API_URL_REDIRECT, = get_conf("API_URL_REDIRECT")
 openai_endpoint = "https://api.openai.com/v1/chat/completions"
 api2d_endpoint = "https://openai.api2d.net/v1/chat/completions"
+newbing_endpoint = "wss://sydney.bing.com/sydney/ChatHub"
 # compatibility with old-style configs
 try:
     API_URL, = get_conf("API_URL")
@@ -59,6 +63,7 @@ except:
 # new-style config
 if openai_endpoint in API_URL_REDIRECT: openai_endpoint = API_URL_REDIRECT[openai_endpoint]
 if api2d_endpoint in API_URL_REDIRECT: api2d_endpoint = API_URL_REDIRECT[api2d_endpoint]
+if newbing_endpoint in API_URL_REDIRECT: newbing_endpoint = API_URL_REDIRECT[newbing_endpoint]
 
 
 # get the tokenizer
@@ -116,7 +121,15 @@ model_info = {
         "tokenizer": tokenizer_gpt35,
         "token_cnt": get_token_num_gpt35,
     },
+    # newbing
+    "newbing": {
+        "fn_with_ui": newbing_ui,
+        "fn_without_ui": newbing_noui,
+        "endpoint": newbing_endpoint,
+        "max_token": 4096,
+        "tokenizer": tokenizer_gpt35,
+        "token_cnt": get_token_num_gpt35,
+    },
 }
+
+
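With this entry, "newbing" becomes a routable key: `model_info` is the dispatch table mapping each value in `AVAIL_LLM_MODELS` to its UI and non-UI predict functions, endpoint, and tokenizer. A simplified sketch of how a selected model reaches its bridge (the real wrappers in this file add exception handling and watchdog plumbing on top):

```python
def route_request(inputs, llm_kwargs, history, sys_prompt, observe_window):
    # Simplified dispatch: look up the bridge registered for the selected
    # model and call its non-UI entry point.
    model = llm_kwargs['llm_model']
    if model not in model_info:
        raise RuntimeError(f"model {model} is not registered in model_info")
    fn = model_info[model]["fn_without_ui"]
    return fn(inputs, llm_kwargs, history, sys_prompt, observe_window)
```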
@@ -128,10 +141,7 @@ def LLM_CATCH_EXCEPTION(f):
         try:
             return f(inputs, llm_kwargs, history, sys_prompt, observe_window, console_slience)
         except Exception as e:
-            from toolbox import get_conf
-            import traceback
-            proxies, = get_conf('proxies')
-            tb_str = '\n```\n' + traceback.format_exc() + '\n```\n'
+            tb_str = '\n```\n' + trimmed_format_exc() + '\n```\n'
             observe_window[0] = tb_str
             return tb_str
     return decorated
@@ -182,7 +192,7 @@ def predict_no_ui_long_connection(inputs, llm_kwargs, history, sys_prompt, obser
 
         def mutex_manager(window_mutex, observe_window):
             while True:
-                time.sleep(0.5)
+                time.sleep(0.25)
                 if not window_mutex[-1]: break
                 # watchdog
                 for i in range(n_model): 
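The `observe_window` list threaded through these functions doubles as a watchdog channel: slot 0 carries the partial response and slot 1 a heartbeat timestamp refreshed by the watching caller; when the heartbeat goes stale, the worker raises and stops. The new bridge below uses the same convention with a 5-second patience. A minimal sketch of the convention, with illustrative names:

```python
import time

WATCH_DOG_PATIENCE = 5  # seconds, matching the value used in bridge_newbing.py

def check_watchdog(observe_window):
    # observe_window[0]: latest partial output; observe_window[1]: the last
    # heartbeat written by the watching caller. A stale heartbeat aborts.
    if len(observe_window) >= 2:
        if time.time() - observe_window[1] > WATCH_DOG_PATIENCE:
            raise RuntimeError("program terminated by watchdog")
```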
request_llm/bridge_chatgpt.py

@@ -21,7 +21,7 @@ import importlib
 
 # config_private.py holds your secrets, such as API keys and proxy urls
 # when reading, first check whether a private config_private file exists (not tracked by git); if present, it overrides the original config file
-from toolbox import get_conf, update_ui, is_any_api_key, select_api_key, what_keys, clip_history
+from toolbox import get_conf, update_ui, is_any_api_key, select_api_key, what_keys, clip_history, trimmed_format_exc
 proxies, API_KEY, TIMEOUT_SECONDS, MAX_RETRY = \
     get_conf('proxies', 'API_KEY', 'TIMEOUT_SECONDS', 'MAX_RETRY')
 
@@ -215,7 +215,7 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
                         chatbot[-1] = (chatbot[-1][0], "[Local Message] Not enough point. API2D账户点数不足.")
                     else:
                         from toolbox import regular_txt_to_markdown
-                        tb_str = '```\n' + traceback.format_exc() + '```'
+                        tb_str = '```\n' + trimmed_format_exc() + '```'
                         chatbot[-1] = (chatbot[-1][0], f"[Local Message] 异常 \n\n{tb_str} \n\n{regular_txt_to_markdown(chunk_decoded[4:])}")
                     yield from update_ui(chatbot=chatbot, history=history, msg="Json异常" + error_msg) # refresh the UI
                     return
							
								
								
									
request_llm/bridge_newbing.py (new file, 250 lines)

@@ -0,0 +1,250 @@
+"""
+========================================================================
+Part One: from EdgeGPT.py
+https://github.com/acheong08/EdgeGPT
+========================================================================
+"""
+from .edge_gpt import NewbingChatbot
+load_message = "等待NewBing响应。"
+
+"""
+========================================================================
+Part Two: the subprocess worker (which does the actual calling)
+========================================================================
+"""
+import time
+import json
+import re
+import asyncio
+import importlib
+import threading
+from toolbox import update_ui, get_conf, trimmed_format_exc
+from multiprocessing import Process, Pipe
+
+def preprocess_newbing_out(s):
+    pattern = r'\^(\d+)\^' # match ^number^
+    sub = lambda m: '\['+m.group(1)+'\]' # use the matched number as the replacement
+    result = re.sub(pattern, sub, s) # perform the substitution
+    if '[1]' in result:
+        result += '\n\n```\n' + "\n".join([r for r in result.split('\n') if r.startswith('[')]) + '\n```\n'
+    return result
+
+def preprocess_newbing_out_simple(result):
+    if '[1]' in result:
+        result += '\n\n```\n' + "\n".join([r for r in result.split('\n') if r.startswith('[')]) + '\n```\n'
+    return result
+
+class NewBingHandle(Process):
+    def __init__(self):
+        super().__init__(daemon=True)
+        self.parent, self.child = Pipe()
+        self.newbing_model = None
+        self.info = ""
+        self.success = True
+        self.local_history = []
+        self.check_dependency()
+        self.start()
+        self.threadLock = threading.Lock()
+
+    def check_dependency(self):
+        try:
+            self.success = False
+            import certifi, httpx, rich
+            self.info = "依赖检测通过,等待NewBing响应。注意目前不能多人同时调用NewBing接口(有线程锁),否则将导致每个人的NewBing问询历史互相渗透。调用NewBing时,会自动使用已配置的代理。"
+            self.success = True
+        except:
+            self.info = "缺少的依赖,如果要使用Newbing,除了基础的pip依赖以外,您还需要运行`pip install -r request_llm/requirements_newbing.txt`安装Newbing的依赖。"
+            self.success = False
+
+    def ready(self):
+        return self.newbing_model is not None
+
+    async def async_run(self):
+        # read the config
+        NEWBING_STYLE, = get_conf('NEWBING_STYLE')
+        from request_llm.bridge_all import model_info
+        endpoint = model_info['newbing']['endpoint']
+        while True:
+            # wait
+            kwargs = self.child.recv()
+            question=kwargs['query']
+            history=kwargs['history']
+            system_prompt=kwargs['system_prompt']
+
+            # reset?
+            if len(self.local_history) > 0 and len(history)==0:
+                await self.newbing_model.reset()
+                self.local_history = []
+
+            # start asking the question
+            prompt = ""
+            if system_prompt not in self.local_history:
+                self.local_history.append(system_prompt)
+                prompt += system_prompt + '\n'
+
+            # append the history
+            for ab in history:
+                a, b = ab
+                if a not in self.local_history:
+                    self.local_history.append(a)
+                    prompt += a + '\n'
+                if b not in self.local_history:
+                    self.local_history.append(b)
+                    prompt += b + '\n'
+
+            # the question itself
+            prompt += question
+            self.local_history.append(question)
+
+            # submit
+            async for final, response in self.newbing_model.ask_stream(
+                prompt=question,
+                conversation_style=NEWBING_STYLE,     # ["creative", "balanced", "precise"]
+                wss_link=endpoint,                      # "wss://sydney.bing.com/sydney/ChatHub"
+            ):
+                if not final:
+                    print(response)
+                    self.child.send(str(response))
+                else:
+                    print('-------- receive final ---------')
+                    self.child.send('[Finish]')
+
+    def run(self):
+        """
+        This function runs in the subprocess.
+        """
+        # first run: load the parameters
+        self.success = False
+        self.local_history = []
+        if (self.newbing_model is None) or (not self.success):
+            # proxy setup
+            proxies, = get_conf('proxies')
+            if proxies is None:
+                self.proxies_https = None
+            else:
+                self.proxies_https = proxies['https']
+            # cookie
+            NEWBING_COOKIES, = get_conf('NEWBING_COOKIES')
+            try:
+                cookies = json.loads(NEWBING_COOKIES)
+            except:
+                self.success = False
+                tb_str = '\n```\n' + trimmed_format_exc() + '\n```\n'
+                self.child.send(f'[Local Message] 不能加载Newbing组件。NEWBING_COOKIES未填写或有格式错误。')
+                self.child.send('[Fail]')
+                self.child.send('[Finish]')
+                raise RuntimeError(f"不能加载Newbing组件。NEWBING_COOKIES未填写或有格式错误。")
+
+            try:
+                self.newbing_model = NewbingChatbot(proxy=self.proxies_https, cookies=cookies)
+            except:
+                self.success = False
+                tb_str = '\n```\n' + trimmed_format_exc() + '\n```\n'
+                self.child.send(f'[Local Message] 不能加载Newbing组件。{tb_str}')
+                self.child.send('[Fail]')
+                self.child.send('[Finish]')
+                raise RuntimeError(f"不能加载Newbing组件。")
+
+        self.success = True
+        try:
+            # enter the task-waiting state
+            asyncio.run(self.async_run())
+        except Exception:
+            tb_str = '```\n' + trimmed_format_exc() + '```'
+            self.child.send(f'[Local Message] Newbing失败 {tb_str}.')
+            self.child.send('[Fail]')
+            self.child.send('[Finish]')
+
+    def stream_chat(self, **kwargs):
+        """
+        This function runs in the main process.
+        """
+        self.threadLock.acquire()
+        self.parent.send(kwargs)    # send the request to the subprocess
+        while True:
+            res = self.parent.recv()    # wait for a fragment of newbing's reply
+            if res == '[Finish]':
+                break       # done
+            elif res == '[Fail]':
+                self.success = False
+                break
+            else:
+                yield res   # a fragment of newbing's reply
+        self.threadLock.release()
+
+
+"""
+========================================================================
+Part Three: the unified calling interface for the main process
+========================================================================
+"""
+global newbing_handle
+newbing_handle = None
+
+def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=None, console_slience=False):
+    """
+        Multithreaded entry point.
+        For documentation, see request_llm/bridge_all.py
+    """
+    global newbing_handle
+    if (newbing_handle is None) or (not newbing_handle.success):
+        newbing_handle = NewBingHandle()
+        observe_window[0] = load_message + "\n\n" + newbing_handle.info
+        if not newbing_handle.success:
+            error = newbing_handle.info
+            newbing_handle = None
+            raise RuntimeError(error)
+
+    # there is no sys_prompt interface, so put the prompt into the history instead
+    history_feedin = []
+    for i in range(len(history)//2):
+        history_feedin.append([history[2*i], history[2*i+1]] )
+
+    watch_dog_patience = 5 # watchdog patience; 5 seconds is enough
+    response = ""
+    observe_window[0] = "[Local Message]: 等待NewBing响应中 ..."
+    for response in newbing_handle.stream_chat(query=inputs, history=history_feedin, system_prompt=sys_prompt, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']):
+        observe_window[0] = preprocess_newbing_out_simple(response)
+        if len(observe_window) >= 2:
+            if (time.time()-observe_window[1]) > watch_dog_patience:
+                raise RuntimeError("程序终止。")
+    return preprocess_newbing_out_simple(response)
+
+def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None):
+    """
+        Single-threaded entry point.
+        For documentation, see request_llm/bridge_all.py
+    """
+    chatbot.append((inputs, "[Local Message]: 等待NewBing响应中 ..."))
+
+    global newbing_handle
+    if (newbing_handle is None) or (not newbing_handle.success):
+        newbing_handle = NewBingHandle()
+        chatbot[-1] = (inputs, load_message + "\n\n" + newbing_handle.info)
+        yield from update_ui(chatbot=chatbot, history=[])
+        if not newbing_handle.success:
+            newbing_handle = None
+            return
+
+    if additional_fn is not None:
+        import core_functional
+        importlib.reload(core_functional)    # hot-reload the prompts
+        core_functional = core_functional.get_core_functions()
+        if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs)  # apply the preprocessing function (if any)
+        inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"]
+
+    history_feedin = []
+    for i in range(len(history)//2):
+        history_feedin.append([history[2*i], history[2*i+1]] )
+
+    chatbot[-1] = (inputs, "[Local Message]: 等待NewBing响应中 ...")
+    response = "[Local Message]: 等待NewBing响应中 ..."
+    yield from update_ui(chatbot=chatbot, history=history, msg="NewBing响应缓慢,尚未完成全部响应,请耐心完成后再提交新问题。")
+    for response in newbing_handle.stream_chat(query=inputs, history=history_feedin, system_prompt=system_prompt, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']):
+        chatbot[-1] = (inputs, preprocess_newbing_out(response))
+        yield from update_ui(chatbot=chatbot, history=history, msg="NewBing响应缓慢,尚未完成全部响应,请耐心完成后再提交新问题。")
+
+    history.extend([inputs, preprocess_newbing_out(response)])
+    yield from update_ui(chatbot=chatbot, history=history, msg="完成全部响应,请提交新问题。")
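Two mechanics in this new file deserve a closer look. First, `preprocess_newbing_out` rewrites NewBing's `^n^` citation markers into escaped Markdown brackets and, once a `[1]` reference exists, appends the reference lines in a fenced block. A quick self-contained check of that behavior (the input string is made up for illustration):

```python
import re

def preprocess_newbing_out(s):
    # Same transformation as in the diff: ^3^ becomes \[3\], and lines
    # starting with '[' are collected into a trailing code fence.
    result = re.sub(r'\^(\d+)\^', lambda m: '\\[' + m.group(1) + '\\]', s)
    if '[1]' in result:
        result += '\n\n```\n' + "\n".join(
            r for r in result.split('\n') if r.startswith('[')) + '\n```\n'
    return result

demo = "Bing cites sources^1^.\n[1]: https://example.com"
print(preprocess_newbing_out(demo))  # prints \[1\] inline plus a reference fence
```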
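Second, `NewBingHandle` keeps the websocket client in a daemon subprocess and talks to it over a `multiprocessing.Pipe` with a tiny string protocol: the parent sends a kwargs dict, then reads text fragments until a `'[Finish]'` sentinel (`'[Fail]'` signals an error), which is exactly what `stream_chat` consumes under its thread lock. A stripped-down sketch of that round trip, with the Bing call replaced by a stub so the protocol itself is visible:

```python
from multiprocessing import Pipe, Process

def worker(child):
    # Stand-in for NewBingHandle.run(): receive one request, stream two
    # fragments back, then close the exchange with the '[Finish]' sentinel.
    kwargs = child.recv()
    child.send("partial answer ...")
    child.send("partial answer to: " + kwargs['query'])
    child.send('[Finish]')

if __name__ == '__main__':
    parent, child = Pipe()
    Process(target=worker, args=(child,), daemon=True).start()
    parent.send({'query': 'hello'})
    while True:
        res = parent.recv()
        if res in ('[Finish]', '[Fail]'):
            break
        print(res)  # each fragment, just as stream_chat yields them
```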
							
								
								
									
request_llm/edge_gpt.py (new file, 409 lines)

@@ -0,0 +1,409 @@
					"""
 | 
				
			||||||
 | 
					========================================================================
 | 
				
			||||||
 | 
					第一部分:来自EdgeGPT.py
 | 
				
			||||||
 | 
					https://github.com/acheong08/EdgeGPT
 | 
				
			||||||
 | 
					========================================================================
 | 
				
			||||||
 | 
					"""
 | 
				
			||||||
 | 
					
 | 
				
			||||||
 | 
					import argparse
 | 
				
			||||||
 | 
					import asyncio
 | 
				
			||||||
 | 
					import json
 | 
				
			||||||
 | 
					import os
 | 
				
			||||||
 | 
					import random
 | 
				
			||||||
 | 
					import re
 | 
				
			||||||
 | 
					import ssl
 | 
				
			||||||
 | 
					import sys
 | 
				
			||||||
 | 
					import uuid
 | 
				
			||||||
 | 
					from enum import Enum
 | 
				
			||||||
 | 
					from typing import Generator
 | 
				
			||||||
 | 
					from typing import Literal
 | 
				
			||||||
 | 
					from typing import Optional
 | 
				
			||||||
 | 
					from typing import Union
 | 
				
			||||||
 | 
					import websockets.client as websockets
 | 
				
			||||||
 | 
					
 | 
				
			||||||
 | 
					DELIMITER = "\x1e"
 | 
				
			||||||
 | 
					
 | 
				
			||||||
 | 
					
 | 
				
			||||||
 | 
					# Generate random IP between range 13.104.0.0/14
 | 
				
			||||||
 | 
					FORWARDED_IP = (
 | 
				
			||||||
 | 
					    f"13.{random.randint(104, 107)}.{random.randint(0, 255)}.{random.randint(0, 255)}"
 | 
				
			||||||
 | 
					)
 | 
				
			||||||
 | 
					
 | 
				
			||||||
 | 
					HEADERS = {
 | 
				
			||||||
 | 
					    "accept": "application/json",
 | 
				
			||||||
 | 
					    "accept-language": "en-US,en;q=0.9",
 | 
				
			||||||
 | 
					    "content-type": "application/json",
 | 
				
			||||||
 | 
					    "sec-ch-ua": '"Not_A Brand";v="99", "Microsoft Edge";v="110", "Chromium";v="110"',
 | 
				
			||||||
 | 
					    "sec-ch-ua-arch": '"x86"',
 | 
				
			||||||
 | 
					    "sec-ch-ua-bitness": '"64"',
 | 
				
			||||||
 | 
					    "sec-ch-ua-full-version": '"109.0.1518.78"',
 | 
				
			||||||
 | 
					    "sec-ch-ua-full-version-list": '"Chromium";v="110.0.5481.192", "Not A(Brand";v="24.0.0.0", "Microsoft Edge";v="110.0.1587.69"',
 | 
				
			||||||
 | 
					    "sec-ch-ua-mobile": "?0",
 | 
				
			||||||
 | 
					    "sec-ch-ua-model": "",
 | 
				
			||||||
 | 
					    "sec-ch-ua-platform": '"Windows"',
 | 
				
			||||||
 | 
					    "sec-ch-ua-platform-version": '"15.0.0"',
 | 
				
			||||||
 | 
					    "sec-fetch-dest": "empty",
 | 
				
			||||||
 | 
					    "sec-fetch-mode": "cors",
 | 
				
			||||||
 | 
					    "sec-fetch-site": "same-origin",
 | 
				
			||||||
 | 
					    "x-ms-client-request-id": str(uuid.uuid4()),
 | 
				
			||||||
 | 
					    "x-ms-useragent": "azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32",
 | 
				
			||||||
 | 
					    "Referer": "https://www.bing.com/search?q=Bing+AI&showconv=1&FORM=hpcodx",
 | 
				
			||||||
 | 
					    "Referrer-Policy": "origin-when-cross-origin",
 | 
				
			||||||
 | 
					    "x-forwarded-for": FORWARDED_IP,
 | 
				
			||||||
 | 
					}
 | 
				
			||||||
 | 
					
 | 
				
			||||||
 | 
					HEADERS_INIT_CONVER = {
 | 
				
			||||||
 | 
					    "authority": "edgeservices.bing.com",
 | 
				
			||||||
 | 
					    "accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7",
 | 
				
			||||||
 | 
					    "accept-language": "en-US,en;q=0.9",
 | 
				
			||||||
 | 
					    "cache-control": "max-age=0",
 | 
				
			||||||
 | 
					    "sec-ch-ua": '"Chromium";v="110", "Not A(Brand";v="24", "Microsoft Edge";v="110"',
 | 
				
			||||||
 | 
					    "sec-ch-ua-arch": '"x86"',
 | 
				
			||||||
 | 
					    "sec-ch-ua-bitness": '"64"',
 | 
				
			||||||
 | 
					    "sec-ch-ua-full-version": '"110.0.1587.69"',
 | 
				
			||||||
 | 
					    "sec-ch-ua-full-version-list": '"Chromium";v="110.0.5481.192", "Not A(Brand";v="24.0.0.0", "Microsoft Edge";v="110.0.1587.69"',
 | 
				
			||||||
 | 
					    "sec-ch-ua-mobile": "?0",
 | 
				
			||||||
 | 
					    "sec-ch-ua-model": '""',
 | 
				
			||||||
 | 
					    "sec-ch-ua-platform": '"Windows"',
 | 
				
			||||||
 | 
					    "sec-ch-ua-platform-version": '"15.0.0"',
 | 
				
			||||||
 | 
					    "sec-fetch-dest": "document",
 | 
				
			||||||
 | 
					    "sec-fetch-mode": "navigate",
 | 
				
			||||||
 | 
					    "sec-fetch-site": "none",
 | 
				
			||||||
 | 
					    "sec-fetch-user": "?1",
 | 
				
			||||||
 | 
					    "upgrade-insecure-requests": "1",
 | 
				
			||||||
 | 
					    "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36 Edg/110.0.1587.69",
 | 
				
			||||||
 | 
					    "x-edge-shopping-flag": "1",
 | 
				
			||||||
 | 
					    "x-forwarded-for": FORWARDED_IP,
 | 
				
			||||||
 | 
					}
 | 
				
			||||||
 | 
					
 | 
				
			||||||
 | 
					def get_ssl_context():
 | 
				
			||||||
 | 
					    import certifi
 | 
				
			||||||
 | 
					    ssl_context = ssl.create_default_context()
 | 
				
			||||||
 | 
					    ssl_context.load_verify_locations(certifi.where())
 | 
				
			||||||
 | 
					    return ssl_context
 | 
				
			||||||
 | 
					
 | 
				
			||||||
 | 
					
 | 
				
			||||||
 | 
					
 | 
				
			||||||
 | 
					class NotAllowedToAccess(Exception):
 | 
				
			||||||
 | 
					    pass
 | 
				
			||||||
 | 
					
 | 
				
			||||||
 | 
					
 | 
				
			||||||
 | 
					class ConversationStyle(Enum):
 | 
				
			||||||
 | 
					    creative = "h3imaginative,clgalileo,gencontentv3"
 | 
				
			||||||
 | 
					    balanced = "galileo"
 | 
				
			||||||
 | 
					    precise = "h3precise,clgalileo"
 | 
				
			||||||
 | 
					
 | 
				
			||||||
 | 
					
 | 
				
			||||||
 | 
					CONVERSATION_STYLE_TYPE = Optional[
 | 
				
			||||||
 | 
					    Union[ConversationStyle, Literal["creative", "balanced", "precise"]]
 | 
				
			||||||
 | 
					]
 | 
				
			||||||
 | 
					
 | 
				
			||||||
 | 
					
 | 
				
			||||||
 | 
					def _append_identifier(msg: dict) -> str:
 | 
				
			||||||
 | 
					    """
 | 
				
			||||||
 | 
					    Appends special character to end of message to identify end of message
 | 
				
			||||||
 | 
					    """
 | 
				
			||||||
 | 
					    # Convert dict to json string
 | 
				
			||||||
 | 
					    return json.dumps(msg) + DELIMITER
 | 
				
			||||||
 | 
					
 | 
				
			||||||
 | 
					
 | 
				
			||||||
 | 
					def _get_ran_hex(length: int = 32) -> str:
 | 
				
			||||||
 | 
					    """
 | 
				
			||||||
 | 
					    Returns random hex string
 | 
				
			||||||
 | 
					    """
 | 
				
			||||||
 | 
					    return "".join(random.choice("0123456789abcdef") for _ in range(length))
 | 
				
			||||||
 | 
					
 | 
				
			||||||
 | 
					
 | 
				
			||||||
 | 
					class _ChatHubRequest:
 | 
				
			||||||
 | 
					    """
 | 
				
			||||||
 | 
					    Request object for ChatHub
 | 
				
			||||||
 | 
					    """
 | 
				
			||||||
 | 
					
 | 
				
			||||||
 | 
					    def __init__(
 | 
				
			||||||
 | 
					        self,
 | 
				
			||||||
 | 
					        conversation_signature: str,
 | 
				
			||||||
 | 
					        client_id: str,
 | 
				
			||||||
 | 
					        conversation_id: str,
 | 
				
			||||||
 | 
					        invocation_id: int = 0,
 | 
				
			||||||
 | 
					    ) -> None:
 | 
				
			||||||
 | 
					        self.struct: dict = {}
 | 
				
			||||||
 | 
					
 | 
				
			||||||
 | 
					        self.client_id: str = client_id
 | 
				
			||||||
 | 
					        self.conversation_id: str = conversation_id
 | 
				
			||||||
 | 
					        self.conversation_signature: str = conversation_signature
 | 
				
			||||||
 | 
					        self.invocation_id: int = invocation_id
 | 
				
			||||||
 | 
					
 | 
				
			||||||
 | 
					    def update(
 | 
				
			||||||
 | 
					        self,
 | 
				
			||||||
 | 
					        prompt,
 | 
				
			||||||
 | 
					        conversation_style,
 | 
				
			||||||
 | 
					        options,
 | 
				
			||||||
 | 
					    ) -> None:
 | 
				
			||||||
 | 
					        """
 | 
				
			||||||
 | 
					        Updates request object
 | 
				
			||||||
 | 
					        """
 | 
				
			||||||
 | 
					        if options is None:
 | 
				
			||||||
 | 
					            options = [
 | 
				
			||||||
 | 
					                "deepleo",
 | 
				
			||||||
 | 
					                "enable_debug_commands",
 | 
				
			||||||
 | 
					                "disable_emoji_spoken_text",
 | 
				
			||||||
 | 
					                "enablemm",
 | 
				
			||||||
 | 
					            ]
 | 
				
			||||||
 | 
					        if conversation_style:
 | 
				
			||||||
 | 
					            if not isinstance(conversation_style, ConversationStyle):
 | 
				
			||||||
 | 
					                conversation_style = getattr(ConversationStyle, conversation_style)
 | 
				
			||||||
 | 
					            options = [
 | 
				
			||||||
 | 
					                "nlu_direct_response_filter",
 | 
				
			||||||
 | 
					                "deepleo",
 | 
				
			||||||
 | 
					                "disable_emoji_spoken_text",
 | 
				
			||||||
 | 
					                "responsible_ai_policy_235",
 | 
				
			||||||
 | 
					                "enablemm",
 | 
				
			||||||
 | 
					                conversation_style.value,
 | 
				
			||||||
 | 
					                "dtappid",
 | 
				
			||||||
 | 
					                "cricinfo",
 | 
				
			||||||
 | 
					                "cricinfov2",
 | 
				
			||||||
 | 
					                "dv3sugg",
 | 
				
			||||||
 | 
					            ]
 | 
				
			||||||
 | 
					        self.struct = {
 | 
				
			||||||
 | 
					            "arguments": [
 | 
				
			||||||
 | 
					                {
 | 
				
			||||||
 | 
					                    "source": "cib",
 | 
				
			||||||
 | 
					                    "optionsSets": options,
 | 
				
			||||||
 | 
					                    "sliceIds": [
 | 
				
			||||||
 | 
					                        "222dtappid",
 | 
				
			||||||
 | 
					                        "225cricinfo",
 | 
				
			||||||
 | 
					                        "224locals0",
 | 
				
			||||||
 | 
					                    ],
 | 
				
			||||||
 | 
					                    "traceId": _get_ran_hex(32),
 | 
				
			||||||
 | 
					                    "isStartOfSession": self.invocation_id == 0,
 | 
				
			||||||
 | 
					                    "message": {
 | 
				
			||||||
 | 
					                        "author": "user",
 | 
				
			||||||
 | 
					                        "inputMethod": "Keyboard",
 | 
				
			||||||
 | 
					                        "text": prompt,
 | 
				
			||||||
 | 
					                        "messageType": "Chat",
 | 
				
			||||||
 | 
					                    },
 | 
				
			||||||
 | 
					                    "conversationSignature": self.conversation_signature,
 | 
				
			||||||
 | 
					                    "participant": {
 | 
				
			||||||
 | 
					                        "id": self.client_id,
 | 
				
			||||||
 | 
					                    },
 | 
				
			||||||
 | 
					                    "conversationId": self.conversation_id,
 | 
				
			||||||
 | 
					                },
 | 
				
			||||||
 | 
					            ],
 | 
				
			||||||
 | 
					            "invocationId": str(self.invocation_id),
 | 
				
			||||||
 | 
					            "target": "chat",
 | 
				
			||||||
 | 
					            "type": 4,
 | 
				
			||||||
 | 
					        }
 | 
				
			||||||
 | 
					        self.invocation_id += 1
 | 
				
			||||||

class _Conversation:
    """
    Conversation API
    """

    def __init__(
        self,
        cookies,
        proxy,
    ) -> None:
        self.struct: dict = {
            "conversationId": None,
            "clientId": None,
            "conversationSignature": None,
            "result": {"value": "Success", "message": None},
        }
        import httpx
        self.proxy = proxy
        proxy = (
            proxy
            or os.environ.get("all_proxy")
            or os.environ.get("ALL_PROXY")
            or os.environ.get("https_proxy")
            or os.environ.get("HTTPS_PROXY")
            or None
        )
        if proxy is not None and proxy.startswith("socks5h://"):
            proxy = "socks5://" + proxy[len("socks5h://") :]
        self.session = httpx.Client(
            proxies=proxy,
            timeout=30,
            headers=HEADERS_INIT_CONVER,
        )
        for cookie in cookies:
            self.session.cookies.set(cookie["name"], cookie["value"])

        # Send GET request
        response = self.session.get(
            url=os.environ.get("BING_PROXY_URL")
            or "https://edgeservices.bing.com/edgesvc/turing/conversation/create",
        )
        if response.status_code != 200:
            response = self.session.get(
                "https://edge.churchless.tech/edgesvc/turing/conversation/create",
            )
        if response.status_code != 200:
            print(f"Status code: {response.status_code}")
            print(response.text)
            print(response.url)
            raise Exception("Authentication failed")
        try:
            self.struct = response.json()
        except (json.decoder.JSONDecodeError, NotAllowedToAccess) as exc:
            raise Exception(
                "Authentication failed. You have not been accepted into the beta.",
            ) from exc
        if self.struct["result"]["value"] == "UnauthorizedRequest":
            raise NotAllowedToAccess(self.struct["result"]["message"])
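_Conversation iterates cookies as a list of {"name": ..., "value": ...} records, the shape produced by browser cookie-export tools. A minimal loading sketch, assuming a hypothetical cookies.json exported from a logged-in bing.com session:

```python
import json

# Hypothetical cookies.json exported by a browser cookie extension; any
# list of {"name": ..., "value": ...} dicts satisfies the loop above.
with open("cookies.json", "r", encoding="utf-8") as f:
    cookies = json.load(f)

conversation = _Conversation(cookies=cookies, proxy=None)
```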

class _ChatHub:
    """
    Chat API
    """

    def __init__(self, conversation) -> None:
        self.wss = None
        self.request: _ChatHubRequest
        self.loop: bool
        self.task: asyncio.Task
        print(conversation.struct)
        self.request = _ChatHubRequest(
            conversation_signature=conversation.struct["conversationSignature"],
            client_id=conversation.struct["clientId"],
            conversation_id=conversation.struct["conversationId"],
        )

    async def ask_stream(
        self,
        prompt: str,
        wss_link: str,
        conversation_style: CONVERSATION_STYLE_TYPE = None,
        raw: bool = False,
        options: dict = None,
    ) -> Generator[str, None, None]:
        """
        Ask a question to the bot
        """
        if self.wss and not self.wss.closed:
            await self.wss.close()
        # Check if websocket is closed
        self.wss = await websockets.connect(
            wss_link,
            extra_headers=HEADERS,
            max_size=None,
            ssl=get_ssl_context()
        )
        await self._initial_handshake()
        # Construct a ChatHub request
        self.request.update(
            prompt=prompt,
            conversation_style=conversation_style,
            options=options,
        )
        # Send request
        await self.wss.send(_append_identifier(self.request.struct))
        final = False
        while not final:
            objects = str(await self.wss.recv()).split(DELIMITER)
            for obj in objects:
                if obj is None or not obj:
                    continue
                response = json.loads(obj)
                if response.get("type") != 2 and raw:
                    yield False, response
                elif response.get("type") == 1 and response["arguments"][0].get(
                    "messages",
                ):
                    resp_txt = response["arguments"][0]["messages"][0]["adaptiveCards"][
                        0
                    ]["body"][0].get("text")
                    yield False, resp_txt
                elif response.get("type") == 2:
                    final = True
                    yield True, response

    async def _initial_handshake(self) -> None:
        await self.wss.send(_append_identifier({"protocol": "json", "version": 1}))
        await self.wss.recv()

    async def close(self) -> None:
        """
        Close the connection
        """
        if self.wss and not self.wss.closed:
            await self.wss.close()
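Note the shape of what ask_stream yields: (final, payload) pairs, where each type-1 update produces (False, incremental_text) (or (False, raw_json) when raw=True) and the single type-2 end-of-turn message produces (True, response) and terminates the loop. The Generator annotation is nominal; this is an async generator consumed with async for.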

class NewbingChatbot:
    """
    Combines everything to make it seamless
    """

    def __init__(
        self,
        cookies,
        proxy
    ) -> None:
        if cookies is None:
            cookies = {}
        self.cookies = cookies
        self.proxy = proxy
        self.chat_hub: _ChatHub = _ChatHub(
            _Conversation(self.cookies, self.proxy),
        )

    async def ask(
        self,
        prompt: str,
        wss_link: str,
        conversation_style: CONVERSATION_STYLE_TYPE = None,
        options: dict = None,
    ) -> dict:
        """
        Ask a question to the bot
        """
        async for final, response in self.chat_hub.ask_stream(
            prompt=prompt,
            conversation_style=conversation_style,
            wss_link=wss_link,
            options=options,
        ):
            if final:
                return response
        await self.chat_hub.wss.close()
        return None

    async def ask_stream(
        self,
        prompt: str,
        wss_link: str,
        conversation_style: CONVERSATION_STYLE_TYPE = None,
        raw: bool = False,
        options: dict = None,
    ) -> Generator[str, None, None]:
        """
        Ask a question to the bot
        """
        async for response in self.chat_hub.ask_stream(
            prompt=prompt,
            conversation_style=conversation_style,
            wss_link=wss_link,
            raw=raw,
            options=options,
        ):
            yield response

    async def close(self) -> None:
        """
        Close the connection
        """
        await self.chat_hub.close()

    async def reset(self) -> None:
        """
        Reset the conversation
        """
        await self.close()
        self.chat_hub = _ChatHub(_Conversation(self.cookies, self.proxy))
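A minimal end-to-end sketch, assuming cookies loaded as in the earlier snippet; the wss endpoint below is an assumption (the public ChatHub address NewBing used at the time), not something this file defines:

```python
import asyncio

async def demo():
    # NewbingChatbot wires _Conversation (session setup) and _ChatHub
    # (websocket traffic) together.
    bot = NewbingChatbot(cookies=cookies, proxy=None)  # cookies as loaded above
    response = await bot.ask(
        prompt="Hello",
        wss_link="wss://sydney.bing.com/sydney/ChatHub",  # assumed endpoint
    )
    print(response)  # the final type-2 payload, a dict
    await bot.close()

asyncio.run(demo())
```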
							
								
								
									
8 request_llm/requirements_newbing.txt Normal file
@@ -0,0 +1,8 @@
BingImageCreator
certifi
httpx
prompt_toolkit
requests
rich
websockets
httpx[socks]
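None of these are version-pinned; installing them on top of the base requirements (pip install -r request_llm/requirements_newbing.txt) should be all the NewBing backend needs.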
							
								
								
									
57 toolbox.py
@@ -5,7 +5,20 @@ import inspect
 import re
 from latex2mathml.converter import convert as tex2mathml
 from functools import wraps, lru_cache
-############################### 插件输入输出接驳区 #######################################
+
+"""
+========================================================================
+Part 1
+Plugin input/output hookup area
+    - ChatBotWithCookies:   a Chatbot subclass that carries cookies, the basis for more powerful features
+    - ArgsGeneralWrapper:   decorator that regroups the input arguments, changing their order and structure
+    - update_ui:            refresh the UI, via yield from update_ui(chatbot, history)
+    - CatchException:       surface any problem raised inside a plugin on the UI
+    - HotReload:            implement hot reloading of plugins
+    - trimmed_format_exc:   print the traceback, hiding absolute paths for safety
+========================================================================
+"""
+
 class ChatBotWithCookies(list):
     def __init__(self, cookie):
         self._cookies = cookie
@@ -20,6 +33,7 @@ class ChatBotWithCookies(list):
     def get_cookies(self):
         return self._cookies
 
+
 def ArgsGeneralWrapper(f):
     """
     Decorator that regroups the input arguments, changing their order and structure.
@@ -47,6 +61,7 @@ def ArgsGeneralWrapper(f):
         yield from f(txt_passon, llm_kwargs, plugin_kwargs, chatbot_with_cookie, history, system_prompt, *args)
     return decorated
 
+
 def update_ui(chatbot, history, msg='正常', **kwargs):  # refresh the UI
     """
     Refresh the user interface
@@ -54,10 +69,18 @@ def update_ui(chatbot, history, msg='正常', **kwargs):  # refresh the UI
     assert isinstance(chatbot, ChatBotWithCookies), "在传递chatbot的过程中不要将其丢弃。必要时,可用clear将其清空,然后用for+append循环重新赋值。"
     yield chatbot.get_cookies(), chatbot, history, msg
 
+def trimmed_format_exc():
+    import os, traceback
+    str = traceback.format_exc()
+    current_path = os.getcwd()
+    replace_path = "."
+    return str.replace(current_path, replace_path)
+
 def CatchException(f):
     """
     Decorator that catches any exception raised in function f, wraps it into a generator, and shows it in the chat.
     """
 
     @wraps(f)
     def decorated(txt, top_p, temperature, chatbot, history, systemPromptTxt, WEB_PORT):
         try:
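A quick sketch of what the new trimmed_format_exc() changes versus traceback.format_exc(): occurrences of the current working directory are rewritten to ".", so tracebacks surfaced in the chat no longer leak absolute paths:

```python
try:
    1 / 0
except ZeroDivisionError:
    print(trimmed_format_exc())
    # e.g.  File "./toolbox.py", line 42, ...   instead of
    #       File "/home/user/chatgpt_academic/toolbox.py", line 42, ...
    # (the stripped prefix is whatever os.getcwd() returns; the
    # absolute path shown here is hypothetical)
```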
@@ -66,7 +89,7 @@ def CatchException(f):
             from check_proxy import check_proxy
             from toolbox import get_conf
             proxies, = get_conf('proxies')
-            tb_str = '```\n' + traceback.format_exc() + '```'
+            tb_str = '```\n' + trimmed_format_exc() + '```'
             if chatbot is None or len(chatbot) == 0:
                 chatbot = [["插件调度异常", "异常原因"]]
             chatbot[-1] = (chatbot[-1][0],
@@ -93,7 +116,23 @@ def HotReload(f):
     return decorated
 
 
-####################################### 其他小工具 #####################################
+"""
+========================================================================
+Part 2
+Other utilities:
+    - write_results_to_file:    write the results into a markdown file
+    - regular_txt_to_markdown:  convert plain text into Markdown-formatted text
+    - report_execption:         append a simple unexpected-error notice to the chatbot
+    - text_divide_paragraph:    split text on paragraph separators into HTML with paragraph tags
+    - markdown_convertion:      combine several methods to turn markdown into good-looking html
+    - format_io:                take over gradio's default markdown handling
+    - on_file_uploaded:         handle file uploads (auto-decompress)
+    - on_report_generated:      automatically project generated reports into the file-upload area
+    - clip_history:             automatically truncate the history context when it grows too long
+    - get_conf:                 read the settings
+    - select_api_key:           pick a usable api-key for the current model class
+========================================================================
+"""
+
 def get_reduce_token_percent(text):
     """
@@ -113,7 +152,6 @@ def get_reduce_token_percent(text):
         return 0.5, '不详'
 
 
-
 def write_results_to_file(history, file_name=None):
     """
     Write the conversation history to a file in Markdown format; if no file name is given, derive one from the current time.
@@ -369,6 +407,9 @@ def find_recent_files(directory):
 
 
 def on_file_uploaded(files, chatbot, txt, txt2, checkboxes):
+    """
+    Callback invoked when files are uploaded
+    """
     if len(files) == 0:
         return chatbot, txt
     import shutil
@@ -388,8 +429,7 @@ def on_file_uploaded(files, chatbot, txt, txt2, checkboxes):
         shutil.copy(file.name, f'private_upload/{time_tag}/{file_origin_name}')
         err_msg += extract_archive(f'private_upload/{time_tag}/{file_origin_name}',
                                    dest_dir=f'private_upload/{time_tag}/{file_origin_name}.extract')
-    moved_files = [fp for fp in glob.glob(
-        'private_upload/**/*', recursive=True)]
+    moved_files = [fp for fp in glob.glob('private_upload/**/*', recursive=True)]
     if "底部输入区" in checkboxes:
         txt = ""
         txt2 = f'private_upload/{time_tag}'
@@ -508,7 +548,7 @@ def clear_line_break(txt):
 class DummyWith():
     """
     This code defines an empty context manager named DummyWith;
-    its job is... um... nothing: it stands in for other context managers without changing the code structure.
+    its job is... um... to do nothing at all: it stands in for other context managers without changing the code structure.
     A context manager is a Python object designed to be used with the with statement,
     ensuring that certain resources are correctly initialized and cleaned up while a code block runs.
     A context manager must implement two methods, __enter__() and __exit__().
@@ -522,6 +562,9 @@ class DummyWith():
         return
 
 def run_gradio_in_subpath(demo, auth, port, custom_path):
+    """
+    Move gradio's serving address onto the specified sub-path
+    """
     def is_path_legal(path: str)->bool:
         '''
         check path for sub url
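DummyWith exists so a call site can keep a single with-statement whatever the configuration; a sketch of the intended pattern, with hypothetical stand-ins:

```python
# `use_lock` and `real_lock` are hypothetical stand-ins; DummyWith slots in
# when no real resource management is wanted, keeping the call site's shape.
ctx = real_lock if use_lock else DummyWith()
with ctx:
    do_work()  # hypothetical workload
```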
							
								
								
									
4 version
@@ -1,5 +1,5 @@
 {
-  "version": 3.2,
+  "version": 3.3,
   "show_feature": true,
-  "new_feature": "保存对话功能 <-> 解读任意语言代码+同时询问任意的LLM组合 <-> 添加联网(Google)回答问题插件 <-> 修复ChatGLM上下文BUG <-> 添加支持清华ChatGLM和GPT-4 <-> 改进架构,支持与多个LLM模型同时对话 <-> 添加支持API2D(国内,可支持gpt4)"
+  "new_feature": "支持NewBing !! <-> 保存对话功能 <-> 解读任意语言代码+同时询问任意的LLM组合 <-> 添加联网(Google)回答问题插件 <-> 修复ChatGLM上下文BUG <-> 添加支持清华ChatGLM和GPT-4 <-> 改进架构,支持与多个LLM模型同时对话 <-> 添加支持API2D(国内,可支持gpt4)"
 }