Performance Tuning
Increase Module Memory
Increase the memory for modules using environment variables. Memory configuration is available for Config, Fuze, Search, Payment, Files, PDF, Manager, NLP, Publish, and Security. Below are examples of how to configure each.
Config
version: '3.5'
services:
  config:
    environment:
      - MAX_HEAP=2048m
      - INITIAL_HEAP=2048m
Fuze
version: '3.5'
services:
  fuze:
    environment:
      - MAX_HEAP=2048m
      - INITIAL_HEAP=2048m
Search
version: '3.5'
services:
  search:
    environment:
      - MAX_HEAP=2048m
      - INITIAL_HEAP=2048m
Payment
version: '3.5'
services:
  payment:
    environment:
      - MAX_HEAP=2048m
      - INITIAL_HEAP=2048m
Files
version: '3.5'
services:
  files:
    environment:
      - MAX_HEAP=2048m
      - INITIAL_HEAP=2048m
PDF
version: '3.5'
services:
  pdf:
    environment:
      - MAX_HEAP=2048m
      - INITIAL_HEAP=2048m
Manager
version: '3.5'
services:
  manager:
    environment:
      - MAX_HEAP=2048m
      - INITIAL_HEAP=2048m
NLP
version: '3.5'
services:
  nlp:
    environment:
      - MAX_HEAP=2048m
      - INITIAL_HEAP=2048m
Publish
version: '3.5'
services:
  publish:
    environment:
      - MAX_HEAP=2048m
      - INITIAL_HEAP=2048m
Security
version: '3.5'
services:
  security:
    environment:
      - MAX_HEAP=2048m
      - INITIAL_HEAP=2048m
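After changing these values, the affected services must be recreated for the new heap settings to take effect. A minimal sketch of applying and verifying the change for the Config module, assuming the project is managed with the standard docker-compose CLI (command names and service names may differ in your deployment):
# Recreate the service so the updated environment variables are picked up
docker-compose up -d config
# Confirm the heap variables are set inside the running container
docker-compose exec config env | grep HEAP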
Increase ONgDB Memory
You can find the memory configuration in the ONgDB section of the docker-compose.yml project file.
version: '3.5'
services:
  ongdb:
    environment:
      - ONGDB_dbms_memory_heap_initial__size=1024m
      - ONGDB_dbms_memory_heap_max__size=1024m
      - ONGDB_dbms_memory_pagecache_size=1024m
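To confirm the ONgDB container picked up the new values, one option is to inspect its environment. A minimal sketch, assuming the standard docker-compose CLI and the ongdb service name shown above:
docker-compose exec ongdb env | grep ONGDB_dbms_memory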
Increase Elasticsearch Memory
Elasticsearch memory can be increased by modifying docker-compose.yml in the root of the package.
version: '3.5'
services:
  elasticsearch:
    environment:
      - ES_JAVA_OPTS=-Xmx1024m -Xms1024m
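To verify the new heap size is in effect, the Elasticsearch nodes API reports the JVM heap configured for each node. A minimal sketch, assuming Elasticsearch is reachable at localhost:9200 (adjust the host, port, and any credentials for your deployment):
# heap_max_in_bytes should reflect the -Xmx value set above
curl -s 'http://localhost:9200/_nodes/jvm?pretty' | grep heap_max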
Enable GPU for NLP
Enable GPU to allow GraphGrid NLP to run with all available Nvidia GPUs for increased performance. There are 3 GPU options available through the GraphGrid CLI: enable, disable, and check.
The enable and disable commands add or remove the following snippet in the NLP section of the docker-compose file:
# docker-compose.yml
deploy:
  resources:
    reservations:
      devices:
        - driver: nvidia
          capabilities:
            - gpu
The check command verifies that the environment is compatible with running GraphGrid with GPU support and reports information about the environment.
GPU Compatibility
The check command runs a check on the system's GPU software and hardware against the following requirements:
- Nvidia driver: min. 450
- CUDA version: max. 11.2
- Compute capability: min. 3.0
More compatibility testing is to come! GraphGrid NLP has only been tested on Nvidia GPUs, so requirement parameters for other GPUs cannot be provided.
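The driver and CUDA versions can also be inspected manually. A minimal sketch, assuming the Nvidia driver and its nvidia-smi utility are installed on the host:
# The output header reports the installed driver version and the highest CUDA version it supports
nvidia-smi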
GraphGrid CLI GPU Commands
Use the following GraphGrid CLI commands to enable or disable GPU support, or to check the environment's compatibility for running GraphGrid with a GPU.
For a list of all options:
./bin/graphgrid gpu --help
GPU usage is disabled by default. Enable the GPU manually with the following command; otherwise, processing will be handled by the CPU.
Enable
./bin/graphgrid gpu enable
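After enabling the GPU and restarting GraphGrid NLP, one way to confirm the container can see the GPU is to run nvidia-smi inside it. A minimal sketch, assuming the NVIDIA Container Toolkit is installed on the host and the service is named nlp in docker-compose.yml:
# Should list the host GPUs if the reservation is active
docker-compose exec nlp nvidia-smi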
Disable
Because this setting is a change to the docker-compose file, disabling the GPU requires GraphGrid NLP to be offline, or to be restarted if it is online.
./bin/graphgrid gpu disable
To restart GraphGrid NLP if it is online:
./bin/graphgrid start nlp
Check
./bin/graphgrid gpu check