Commit 6653437

update docs (#990)
1 parent 6ff2908 commit 6653437

File tree

2 files changed: +25 additions, -7 deletions


docs/CN/source/getting_started/installation.rst

Lines changed: 10 additions & 3 deletions

@@ -23,9 +23,16 @@ Lightllm is an inference framework developed in pure Python, with its operators written in Triton
     $ # Pull the official image
     $ docker pull ghcr.io/modeltc/lightllm:main
     $
-    $ # Run
+    $ # Run the service. Note that the current lightllm service relies heavily on
+    $ # shared memory; before starting, make sure your Docker settings allocate
+    $ # enough shared memory, otherwise the service may fail to start.
+    $ # 1. For a text-only service, allocate at least 2GB of shared memory; if you have ample RAM, 16GB or more is recommended.
+    $ # 2. For a multimodal service, allocate 16GB or more of shared memory, adjusting to your actual situation.
+    $ # If you do not have enough shared memory, try lowering the --running_max_req_size parameter at startup;
+    $ # this reduces the number of concurrent requests but also reduces shared-memory usage. For multimodal
+    $ # services, lowering the --cache_capacity parameter also reduces shared-memory usage.
     $ docker run -it --gpus all -p 8080:8080 \
-    $   --shm-size 1g -v your_local_path:/data/ \
+    $   --shm-size 2g -v your_local_path:/data/ \
     $   ghcr.io/modeltc/lightllm:main /bin/bash

You can also manually build the image from source and run it:

@@ -37,7 +44,7 @@ Lightllm is an inference framework developed in pure Python, with its operators written in Triton
     $
     $ # Run
     $ docker run -it --gpus all -p 8080:8080 \
-    $   --shm-size 1g -v your_local_path:/data/ \
+    $   --shm-size 2g -v your_local_path:/data/ \
     $   <image_name> /bin/bash

Or you can directly use the script to launch the image and run it with one click:
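The guidance above hinges on how much shared memory the container actually gets. A minimal sketch for sanity-checking this before launch, assuming a Linux host where docker's `--shm-size` backs the `/dev/shm` mount (the thresholds simply mirror the documented 2GB / 16GB recommendations):

```shell
# Report how much shared memory is available at /dev/shm, the mount that
# docker's --shm-size flag controls inside the container.
avail_kb=$(df -k /dev/shm | awk 'NR==2 {print $4}')
avail_gb=$((avail_kb / 1024 / 1024))
echo "shared memory available: ${avail_gb} GiB"

# Thresholds from the docs: >= 2 GiB for text-only, >= 16 GiB for multimodal.
if [ "$avail_kb" -lt $((2 * 1024 * 1024)) ]; then
    echo "warning: consider a larger --shm-size or a lower --running_max_req_size"
fi
```

Running this inside the container (rather than on the host) shows the allocation the service will actually see.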

docs/EN/source/getting_started/installation.rst

Lines changed: 15 additions & 4 deletions

@@ -23,9 +23,20 @@ The easiest way to install Lightllm is using the official image. You can directl
     $ # Pull the official image
     $ docker pull ghcr.io/modeltc/lightllm:main
     $
-    $ # Run
+    $ # Run. The current LightLLM service relies heavily on shared memory.
+    $ # Before starting, please make sure that you have allocated enough shared
+    $ # memory in your Docker settings; otherwise, the service may fail to start properly.
+    $ #
+    $ # 1. For text-only services, it is recommended to allocate more than 2GB of shared memory.
+    $ #    If your system has sufficient RAM, allocating 16GB or more is recommended.
+    $ # 2. For multimodal services, it is recommended to allocate 16GB or more of shared memory.
+    $ #    You can adjust this value according to your specific requirements.
+    $ #
+    $ # If you do not have enough shared memory available, you can try lowering
+    $ # the --running_max_req_size parameter when starting the service.
+    $ # This will reduce the number of concurrent requests, but also decrease shared memory usage.
     $ docker run -it --gpus all -p 8080:8080 \
-    $   --shm-size 1g -v your_local_path:/data/ \
+    $   --shm-size 2g -v your_local_path:/data/ \
     $   ghcr.io/modeltc/lightllm:main /bin/bash

You can also manually build the image from source and run it:

@@ -35,9 +46,9 @@ You can also manually build the image from source and run it:
     $ # Manually build the image
     $ docker build -t <image_name> .
     $
     $ # Run
     $ docker run -it --gpus all -p 8080:8080 \
-    $   --shm-size 1g -v your_local_path:/data/ \
+    $   --shm-size 2g -v your_local_path:/data/ \
     $   <image_name> /bin/bash

Or you can directly use the script to launch the image and run it with one click:
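The recommendation in both files is the same rule: pick `--shm-size` by service mode. A small sketch that encodes it; `docker_run_cmd` is a hypothetical helper name, and the flag values simply mirror the documented guidance (2g text-only, 16g multimodal), with `your_local_path` kept as the docs' placeholder:

```python
def docker_run_cmd(multimodal: bool, local_path: str = "your_local_path") -> str:
    """Build the documented docker run line with a mode-appropriate --shm-size:
    2g for text-only services, 16g for multimodal ones."""
    shm = "16g" if multimodal else "2g"
    return (
        "docker run -it --gpus all -p 8080:8080 "
        f"--shm-size {shm} -v {local_path}:/data/ "
        "ghcr.io/modeltc/lightllm:main /bin/bash"
    )

print(docker_run_cmd(multimodal=True))
```

Text-only deployments with plenty of host RAM can still pass a larger value, as the docs note; the helper only captures the minimum recommendation.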
