* Zhipu SDK update: adapt to the latest Zhipu SDK and support GLM-4V (#1502)
  * Adapt Google Gemini: extract files from the user input
  * Adapt to the latest Zhipu SDK, support glm-4v
  * requirements.txt fix
  * pending history check
  Co-authored-by: binary-husky <qingxu.fu@outlook.com>
* Update the "生成多种Mermaid图表" plugin: separate out the file-reading function (#1520)
  * Update crazy_functional.py with new functionality to deal with PDF
  * Update crazy_functional.py and Mermaid.py for plugin_kwargs
  * Update crazy_functional.py with a new chart type: mind map
  * Update SELECT_PROMPT and i_say_show_user messages
  * Update the ArgsReminder message in the get_crazy_functions() function
  * Update to read md files and update PROMPTS
  * Return the PROMPTS, as testing found the initial version worked best
  * Update the Mermaid chart generation function
  * version 3.71
  * Resolve issue #1510
  * Remove unnecessary text from sys_prompt in the 解析历史输入 function
  * Remove the sys_prompt message in the 解析历史输入 function
  * Update bridge_all.py: support gpt-4-turbo-preview (#1517)
  * Update config.py: support gpt-4-turbo-preview (#1516)
  * Refactor the 解析历史输入 function to handle file input
  * Update the Mermaid chart generation functionality
  * Rename files and functions
  Co-authored-by: binary-husky <qingxu.fu@outlook.com>
  Co-authored-by: hongyi-zhao <hongyi.zhao@gmail.com>
  Co-authored-by: binary-husky <96192199+binary-husky@users.noreply.github.com>
* Integrate the Mathpix OCR feature (#1468)
  * Update Latex输出PDF结果.py: use Mathpix to translate a PDF into Chinese and recompile the PDF
  * Update config.py: add the Mathpix appid & appkey
  * Add the 'PDF翻译中文并重新编译PDF' feature to plugins
  Co-authored-by: binary-husky <96192199+binary-husky@users.noreply.github.com>
* Fix zhipuai
* Check picture
* Remove glm-4 due to a bug
* Modify config
* Check MATHPIX_APPID
* Remove unnecessary code and update the function_plugins dictionary
* Capture non-standard token overflow
* Bug fix #1524
* Change Mermaid style
* Support Mermaid zoom in/out and reset, with mouse scroll and drag (#1530)
  * Fine-tuning inconclusive; stage the work for now
  * update
  Co-authored-by: binary-husky <qingxu.fu@outlook.com>
  Co-authored-by: binary-husky <96192199+binary-husky@users.noreply.github.com>
* ver 3.72
* Change live2d
* Save the status of the `clear btn` in a cookie
* Persist front-end selections
* JS UI bug fix
* Reset btn bug fix
* Update live2d tips
* Fix missing get_token_num method
* Fix live2d toggle switch
* Fix persistent custom btn with cookie
* Fix zhipuai feedback with core functionality
* Refactor button update and clean-up functions
* Trailing space removal
* Fix missing MATHPIX_APPID and MATHPIX_APPKEY configuration
* Prompt fix and mind-map prompt optimization (#1537)
  * Adapt Google Gemini: extract files from the user input
  * Optimize the mind-map prompt
  * Fix missing MATHPIX_APPID and MATHPIX_APPKEY configuration
  Co-authored-by: binary-husky <qingxu.fu@outlook.com>
* Optimize the "PDF翻译中文并重新编译PDF" plugin (#1602)
* Add gemini_endpoint to API_URL_REDIRECT (#1560)
  * Update gemini-pro and gemini-pro-vision model_info endpoints
* Update to support the new Claude models (#1606)
  * Add the anthropic library and update the Claude models
  * Update bridge_claude.py: add support for image input; fix some bugs
  * Add a Claude_3_Models variable to limit the number of images
  * Refactor code to improve readability and maintainability
* Minor Claude bug fix
* More flexible one-api support
* Reformat config
* Fix one-api new access bug
* dummy
* Compatibility with non-standard api
* version 3.73
Co-authored-by: XIao <46100050+Kilig947@users.noreply.github.com>
Co-authored-by: Menghuan1918 <menghuan2003@outlook.com>
Co-authored-by: hongyi-zhao <hongyi.zhao@gmail.com>
Co-authored-by: Hao Ma <893017927@qq.com>
Co-authored-by: zeyuan huang <599012428@qq.com>
# encoding: utf-8
# @Time : 2024/1/22
# @Author : Kilig947 & binary husky
# @Descr : Compatible with the latest ZhipuAI SDK, supports glm-4v
from zhipuai import ZhipuAI
from toolbox import get_conf, encode_image, get_pictures_list
import logging, os


def input_encode_handler(inputs, llm_kwargs):
    # Collect pictures from the most recently uploaded path (if any) and
    # base64-encode them for multimodal models such as glm-4v.
    md_encode = []
    if llm_kwargs["most_recent_uploaded"].get("path"):
        image_paths = get_pictures_list(llm_kwargs["most_recent_uploaded"]["path"])
        for md_path in image_paths:
            type_ = os.path.splitext(md_path)[1].replace(".", "")
            type_ = "jpeg" if type_ == "jpg" else type_  # normalize the extension name
            md_encode.append({"data": encode_image(md_path), "type": type_})
    return inputs, md_encode


class ZhipuChatInit:

    def __init__(self):
        ZHIPUAI_API_KEY, ZHIPUAI_MODEL = get_conf("ZHIPUAI_API_KEY", "ZHIPUAI_MODEL")
        if len(ZHIPUAI_MODEL) > 0:
            logging.error('The ZHIPUAI_MODEL configuration option is deprecated; please configure the model in LLM_MODEL instead')
        self.zhipu_bro = ZhipuAI(api_key=ZHIPUAI_API_KEY)
        self.model = ''

    def __conversation_user(self, user_input: str, llm_kwargs):
        # Plain-text models take the user input directly; glm-4v takes a
        # multimodal content list (text plus base64-encoded image).
        if self.model not in ["glm-4v"]:
            return {"role": "user", "content": user_input}
        else:
            input_, encode_img = input_encode_handler(user_input, llm_kwargs=llm_kwargs)
            what_i_have_asked = {"role": "user", "content": []}
            what_i_have_asked['content'].append({"type": 'text', "text": user_input})
            if encode_img:
                # glm-4v expects a base64 string (or URL) in the "url" field;
                # use the first uploaded picture.
                img_d = {"type": "image_url",
                         "image_url": {'url': encode_img[0]["data"]}}
                what_i_have_asked['content'].append(img_d)
            return what_i_have_asked

    def __conversation_history(self, history, llm_kwargs):
        # Convert the flat [user, assistant, user, assistant, ...] history list
        # into the message format expected by the ZhipuAI SDK.
        messages = []
        conversation_cnt = len(history) // 2
        if conversation_cnt:
            for index in range(0, 2 * conversation_cnt, 2):
                what_i_have_asked = self.__conversation_user(history[index], llm_kwargs)
                what_gpt_answer = {
                    "role": "assistant",
                    "content": history[index + 1]
                }
                messages.append(what_i_have_asked)
                messages.append(what_gpt_answer)
        return messages

    def __conversation_message_payload(self, inputs, llm_kwargs, history, system_prompt):
        messages = []
        if system_prompt:
            messages.append({"role": "system", "content": system_prompt})
        self.model = llm_kwargs['llm_model']
        messages.extend(self.__conversation_history(history, llm_kwargs))  # handle history
        messages.append(self.__conversation_user(inputs, llm_kwargs))  # handle the current user input
        response = self.zhipu_bro.chat.completions.create(
            model=self.model, messages=messages, stream=True,
            temperature=llm_kwargs.get('temperature', 0.95) * 0.95,  # only the default temperature and top_p can be passed through
            top_p=llm_kwargs.get('top_p', 0.7) * 0.7,
            max_tokens=llm_kwargs.get('max_tokens', 1024 * 4),  # half of the model's maximum output
        )
        return response

    def generate_chat(self, inputs, llm_kwargs, history, system_prompt):
        # Stream the reply: yield (latest delta, accumulated text) pairs.
        self.model = llm_kwargs['llm_model']
        response = self.__conversation_message_payload(inputs, llm_kwargs, history, system_prompt)
        bro_results = ''
        for chunk in response:
            bro_results += chunk.choices[0].delta.content
            yield chunk.choices[0].delta.content, bro_results


if __name__ == '__main__':
    zhipu = ZhipuChatInit()
    # generate_chat is a generator, so it must be iterated for anything to happen
    for delta, full_response in zhipu.generate_chat('你好', {'llm_model': 'glm-4'}, [], '你是WPSAi'):
        print(delta, end='', flush=True)
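
# A minimal, hedged sketch (not part of the original module) of how glm-4v image
# input is expected to flow through this class: input_encode_handler() only needs
# llm_kwargs["most_recent_uploaded"]["path"] to point at a directory containing
# pictures. The directory name below is hypothetical and purely illustrative.
#
#     llm_kwargs_4v = {
#         'llm_model': 'glm-4v',
#         'most_recent_uploaded': {'path': 'private_upload/demo_images'},  # hypothetical folder
#     }
#     zhipu_4v = ZhipuChatInit()
#     for delta, full_response in zhipu_4v.generate_chat('Describe this picture', llm_kwargs_4v, [], '你是WPSAi'):
#         print(delta, end='', flush=True)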