mirror of https://github.com/okxlin/appstore.git
synced 2025-07-13 21:02:18 +08:00

feat: add langchain-chatchat to the list

parent 7ac7712d38
commit 23124e452a
3  apps/langchain-chatchat/0.2.7/.env.sample  Normal file
@@ -0,0 +1,3 @@

```
CONTAINER_NAME="langchain-chatchat"
PANEL_APP_PORT_HTTP="40208"
GPU_DRIVER_TYPE="nvidia"
```
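These defaults are written into the app's `.env` on install, and Docker Compose interpolates them into `docker-compose.yml`. A minimal way to check that the values resolve as expected, run from the installed app's directory (where 1Panel places `docker-compose.yml` and the generated `.env`; assuming the Compose v2 CLI, though the v1 `docker-compose config` behaves the same):

```shell
# Render the compose file with the .env values interpolated;
# the port mapping and GPU driver should appear as literals.
docker compose config

# Spot-check the two values supplied by this sample.
docker compose config | grep -E '40208|nvidia'
```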
17  apps/langchain-chatchat/0.2.7/data.yml  Normal file
@@ -0,0 +1,17 @@

```
additionalProperties:
  formFields:
    - default: 40208
      edit: true
      envKey: PANEL_APP_PORT_HTTP
      labelEn: Port
      labelZh: 端口
      required: true
      rule: paramPort
      type: number
    - default: nvidia
      edit: true
      envKey: GPU_DRIVER_TYPE
      labelEn: GPU Driver type
      labelZh: GPU 驱动的类型
      required: true
      type: text
```
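Each form field's `envKey` ends up as a variable in the generated `.env` and should be consumed by the compose file. A quick sanity check with plain `sed`/`grep`, run from `apps/langchain-chatchat/0.2.7/` (the extra `${CONTAINER_NAME}` in the second listing is expected; it comes from `.env` rather than from these form fields):

```shell
# Env keys declared by the install form.
sed -n 's/.*envKey: //p' data.yml

# Variables the compose file actually interpolates.
grep -o '\${[A-Z_]*}' docker-compose.yml | sort -u
```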
23  apps/langchain-chatchat/0.2.7/docker-compose.yml  Normal file
@@ -0,0 +1,23 @@

```
version: '3'
services:
  langchain-chatchat:
    container_name: ${CONTAINER_NAME}
    restart: always
    networks:
      - 1panel-network
    ports:
      - "${PANEL_APP_PORT_HTTP}:8501"
    deploy:
      resources:
        reservations:
          devices:
            - driver: ${GPU_DRIVER_TYPE}
              count: all
              capabilities: [gpu]
    image: registry.cn-beijing.aliyuncs.com/chatchat/chatchat:0.2.7
    labels:
      createdBy: "Apps"

networks:
  1panel-network:
    external: true
```
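The `deploy.resources.reservations.devices` entry is what passes the GPU through to the container. A rough post-install check, assuming the default container name `langchain-chatchat` and an NVIDIA GPU with the NVIDIA Container Toolkit installed on the host:

```shell
# Docker should have recorded a GPU device request for the container.
docker inspect --format '{{json .HostConfig.DeviceRequests}}' langchain-chatchat

# If the request is present, the GPU is normally visible from inside the container
# (the toolkit mounts nvidia-smi into containers started with GPU access).
docker exec langchain-chatchat nvidia-smi
```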
97  apps/langchain-chatchat/README.md  Normal file
@@ -0,0 +1,97 @@

# Usage Notes

- **Read the project `Wiki` first and set up the required environment before installing.**

- **`1Panel` may overwrite the `gpu` settings in `docker-compose.yml`, so check the file after the installation finishes.**
  **If the settings are wrong, replace the file with the version below and manually run `docker-compose down && docker-compose up -d` in the app directory (a verification sketch follows the compose file).**

```
version: '3'

services:
  langchain-chatchat:
    container_name: ${CONTAINER_NAME}
    restart: always
    networks:
      - 1panel-network
    ports:
      - "${PANEL_APP_PORT_HTTP}:8501"
    deploy:
      resources:
        limits:
          cpus: ${CPUS}
          memory: ${MEMORY_LIMIT}
        reservations:
          devices:
            - driver: ${GPU_DRIVER_TYPE}
              count: all
              capabilities: [gpu]
    image: registry.cn-beijing.aliyuncs.com/chatchat/chatchat:0.2.7 # image version, change as needed
    labels:
      createdBy: "Apps"

networks:
  1panel-network:
    external: true

```
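If the `gpu` block did get dropped, a minimal recovery sequence after restoring the file above, run from the app directory that 1Panel created for this app (the port below is the `40208` default from `.env.sample`; adjust it if you changed it at install time):

```shell
# Recreate the containers from the corrected compose file.
docker-compose down && docker-compose up -d

# Follow the startup logs until the Streamlit WebUI is up.
docker-compose logs -f --tail=50

# From the host, the WebUI should answer on the mapped port.
curl -I http://127.0.0.1:40208
```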

# Upstream README

***

🌍 [READ THIS IN ENGLISH](https://github.com/chatchat-space/Langchain-Chatchat/blob/master/README_en.md)
🌍 [日本語で読む](https://github.com/chatchat-space/Langchain-Chatchat/blob/master/README_ja.md)

📃 **LangChain-Chatchat** (formerly Langchain-ChatGLM)

An open-source, offline-deployable retrieval-augmented generation (RAG) knowledge-base project built on large language models such as ChatGLM and application frameworks such as Langchain.

## Introduction

🤖️ A question-answering application over a local knowledge base, implemented with the ideas of [langchain](https://github.com/langchain-ai/langchain). The goal is a knowledge-base QA solution that is friendly to Chinese-language scenarios and open-source models and that can run entirely offline.

💡 Inspired by [GanymedeNil](https://github.com/GanymedeNil)'s project [document.ai](https://github.com/GanymedeNil/document.ai) and the [ChatGLM-6B Pull Request](https://github.com/THUDM/ChatGLM-6B/pull/216) created by [AlexZhangji](https://github.com/AlexZhangji), this project builds a local knowledge-base QA application whose whole pipeline can run on open-source models. The latest version uses [FastChat](https://github.com/lm-sys/FastChat) to serve models such as Vicuna, Alpaca, LLaMA, Koala, and RWKV, and relies on the [langchain](https://github.com/langchain-ai/langchain) framework to expose the service either through an API built on [FastAPI](https://github.com/tiangolo/fastapi) or through a WebUI built on [Streamlit](https://github.com/streamlit/streamlit).

✅ Using the open-source LLM and Embedding models supported by this project, the whole system can be deployed **offline and privately** with **open-source** models only. The project also supports calling the OpenAI GPT API, and access to more models and model APIs will continue to be added.

⛓️ The implementation principle is shown in the diagram below. The process is: load files -> read text -> split text -> vectorize the text -> vectorize the question -> match the `top k` chunks most similar to the question vector -> add the matched text and the question to the `prompt` as context -> submit it to the `LLM` to generate an answer.

📺 [Video introduction of the principle](https://www.bilibili.com/video/BV13M4y1e7cN/?share_source=copy_web&vd_source=e6c5aafe684f30fbe41925d61ca6d514)

*(implementation principle diagram)*

From the document-processing perspective, the flow is:

*(document-processing flow diagram)*

🚩 This project does not involve fine-tuning or training, but fine-tuning or training can be used to improve its results.

🌲 Run it with Docker in one command:

```shell
docker run -d --gpus all -p 80:8501 registry.cn-beijing.aliyuncs.com/chatchat/chatchat:0.2.7
```
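The `--gpus all` flag requires the NVIDIA driver on the host plus the NVIDIA Container Toolkit; a short pre-flight check before running the one-liner (the CUDA image tag is only an example for the smoke test):

```shell
# Host driver is installed and working.
nvidia-smi

# GPU passthrough works for containers in general.
docker run --rm --gpus all nvidia/cuda:12.1.1-base-ubuntu22.04 nvidia-smi
```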

🧩 The project has a very complete [Wiki](https://github.com/chatchat-space/Langchain-Chatchat/wiki/); this README is only a brief introduction, *a getting-started tutorial sufficient for a basic run*. If you want to understand the project in more depth, or want to contribute to it, please go to the [Wiki](https://github.com/chatchat-space/Langchain-Chatchat/wiki/) pages.

## Pain Points Addressed

This project is a knowledge-base augmentation solution that supports **fully local** inference, focused on the enterprise pain points of data security and private, on-premises deployment. The open-source solution uses the `Apache License` and can be used commercially for free, with no fees.

We support the mainstream local large language models and Embedding models on the market, as well as open-source local vector databases. See the [Wiki](https://github.com/chatchat-space/Langchain-Chatchat/wiki/) for the full support list.
19  apps/langchain-chatchat/data.yml  Normal file
@@ -0,0 +1,19 @@

```
name: LangChain-Chatchat
tags:
  - AI / 大模型
title: 基于 Langchain 与 ChatGLM 等语言模型的本地知识库问答
description: 基于 Langchain 与 ChatGLM 等语言模型的本地知识库问答
additionalProperties:
  key: langchain-chatchat
  name: LangChain-Chatchat
  tags:
    - AI
  shortDescZh: 基于 Langchain 与 ChatGLM 等语言模型的本地知识库问答
  shortDescEn: A LLM application aims to implement knowledge and search engine based QA based on Langchain and open-source or remote LLM API
  type: tool
  crossVersionUpdate: true
  limit: 0
  recommend: 0
  website: https://github.com/chatchat-space/Langchain-Chatchat
  github: https://github.com/chatchat-space/Langchain-Chatchat
  document: https://github.com/chatchat-space/Langchain-Chatchat/wiki
```
BIN  apps/langchain-chatchat/logo.png  Normal file
Binary file not shown (1.7 KiB).
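For reference, the complete set of files this commit adds; a `find` over the checkout should reproduce the same listing (expected output shown as comments):

```shell
find apps/langchain-chatchat -type f | sort
# apps/langchain-chatchat/0.2.7/.env.sample
# apps/langchain-chatchat/0.2.7/data.yml
# apps/langchain-chatchat/0.2.7/docker-compose.yml
# apps/langchain-chatchat/README.md
# apps/langchain-chatchat/data.yml
# apps/langchain-chatchat/logo.png
```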