Running the high-end remote desktop software NICE DCV in containers offers an attractive way to package and deploy NICE DCV installations. To make this easy, we have created an automated container build environment which builds the NICE DCV container and runs it, so you can log in directly to a NICE DCV session running inside the container.
Containers for NICE DCV Remote Desktops
Containers offer easy, controlled, and automated deployment of software, including testing of different software versions. Once configured and prepared, a NICE DCV container enables fast deployment of DCV on multiple visualization servers.
We have created two build scripts which automatically install everything necessary on the host and inside the container to run NICE DCV with or without a GPU inside the container, and which automatically create a session for high-end remote access. Please follow the steps in the respective section below.
NICE DCV Container (no GPU)
Here you can download the tarball to create the NICE DCV Centos 7 container for DCV servers without GPU:
- Tarball to build a NICE DCV Centos 7 container without GPU on Redhat/CentOS or Ubuntu hosts
Below are the steps to build the NICE DCV container, which will use about 3.5 GB of disk space including the base OS and software installation:
# Download the tarball
wget https://www.ni-sp.com/wp-content/uploads/2019/10/dcv-centos7-nogpu-container.tar
# Extract the tarball
tar xvf dcv-centos7-nogpu-container.tar
> Dockerfile
> LICENSE
> run_script.sh
> send_dcvsessionready_notification.sh
> startup_script.sh
> dcvserver.service
> README
> dcv-container-build.sh
# In case you need a DCV trial license please contact NI SP via https://www.ni-sp.com/contact/
cp DCV_Trial_License.lic license.lic
# Run the script to build and run the NICE DCV container
./dcv-container-build.sh
You might want to adapt the Dockerfile to use the latest version of NICE DCV. The script “dcv-container-build.sh” will, among other things, run the following command to automatically build the NICE DCV container, shown here for podman (in case of docker, please replace podman with docker):
sudo podman build -t "dcv-centos7" .
After the container has been created, which takes a couple of minutes, it is run with the command:
sudo podman run --privileged --rm --network="host" dcv-centos7
The container exposes the DCV port 8443 for connections with a web browser or with the native DCV client, which offers the best performance. You can connect to DCV in two ways:
- Web browser: https://External_IP_or_Server_Name:8443 (accept the security exception, as no SSL certificate is installed)
- DCV native client for best performance: enter “External_IP_or_Server_Name” in the connection field (the portable DCV client can be downloaded here: https://download.nice-dcv.com/)
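To check that the DCV server inside the container is reachable before connecting, you can probe port 8443 with curl. This is a small sketch; the hostname below is a placeholder for your server name, and -k skips verification of the self-signed certificate:

```shell
HOST="localhost"                 # replace with External_IP_or_Server_Name
URL="https://${HOST}:8443"
# -k: accept the self-signed certificate; -w prints the HTTP status code
curl -k -s -o /dev/null -w '%{http_code}\n' "$URL" \
  || echo "DCV server not reachable at $URL"
```

A 200 status indicates the DCV web client is being served on that port.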
The default user name is “user” and the password is “dcv”; both can be adapted in “startup_script.sh”. You can also create additional users inside the container and additional sessions with the commands:
_username="newuser" # replace newuser with the new user name
_passwd="pw01"
adduser "${_username}" -G wheel
echo "${_username}:${_passwd}" | chpasswd
/usr/bin/dcv create-session --storage-root=%home% --owner "${_username}" --user "${_username}" "${_username}session"
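The commands above can be wrapped into a small helper script and then executed inside the running container, e.g. via podman exec. This is a sketch; the script name and the example credentials are our placeholders:

```shell
# Write a helper script that adds a user and creates a DCV session for it
cat > add-dcv-user.sh <<'EOF'
#!/bin/bash
_username="${1:-newuser}"   # first argument: new user name
_passwd="${2:-pw01}"        # second argument: password
adduser "${_username}" -G wheel
echo "${_username}:${_passwd}" | chpasswd
/usr/bin/dcv create-session --storage-root=%home% --owner "${_username}" --user "${_username}" "${_username}session"
EOF
chmod +x add-dcv-user.sh

# Copy it into the running container and execute it there
# (CONTAINER_ID as shown by "sudo podman ps"; for docker, replace podman):
#   sudo podman cp add-dcv-user.sh CONTAINER_ID:/root/
#   sudo podman exec CONTAINER_ID /root/add-dcv-user.sh newuser2 pw02
```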
NICE DCV Container with NVIDIA GPU
Here you can download the tarball to create the NICE DCV Centos 7 container for DCV servers with GPU:
- Tarball to build a NICE DCV Centos 7 container with GPU on Redhat/CentOS or Ubuntu hosts
Below are the steps to build the NICE DCV container, which will use about 4 GB of disk space including the base OS and software installation:
# Download the tarball
wget https://www.ni-sp.com/wp-content/uploads/2019/10/dcv-centos7-gpu-container.tar
# Extract the tarball
tar xvf dcv-centos7-gpu-container.tar
> Dockerfile
> LICENSE
> run_script.sh
> send_dcvsessionready_notification.sh
> startup_script.sh
> dcvserver.service
> README
> dcv-container-build.sh
# In case you need a DCV trial license please contact NI SP via https://www.ni-sp.com/contact/
cp DCV_Trial_License.lic license.lic
# Run the script to build and run the NICE DCV container
./dcv-container-build.sh
The build script performs the following tasks:
- Check whether the NVIDIA driver is installed on the host; if not, disable the nouveau driver and install the NVIDIA driver
- Check for the DCV license in case it is needed. You can request a free trial license via https://www.ni-sp.com/contact/
- Install the podman or docker container environment
- Build the NICE DCV container, including the NVIDIA driver and the DCV installation inside
- Start the container and print instructions on how to connect to the new GPU-enabled DCV server
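The first of these checks can be sketched as follows. This is a simplified illustration; the function name is ours, and the actual logic in dcv-container-build.sh may differ:

```shell
# Does the host have an NVIDIA GPU but no NVIDIA driver installed yet?
nvidia_driver_needed() {
  if command -v lspci >/dev/null 2>&1 \
     && lspci 2>/dev/null | grep -qi nvidia \
     && ! command -v nvidia-smi >/dev/null 2>&1; then
    echo yes
  else
    echo no
  fi
}

if [ "$(nvidia_driver_needed)" = "yes" ]; then
  echo "NVIDIA GPU found, driver missing: disabling nouveau, installing driver"
  # echo "blacklist nouveau" | sudo tee /etc/modprobe.d/blacklist-nouveau.conf
  # ...the build script then installs the NVIDIA driver before building the container
fi
```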
The default user name is “user” and the password is “dcv”; both can be adapted in “startup_script.sh”. In addition, you can create additional users inside the container (see the instructions displayed on how to log into the container) and additional sessions with the commands:
_username="newuser" # replace newuser with the new user name
_passwd="pw01"
adduser "${_username}" -G wheel
echo "${_username}:${_passwd}" | chpasswd
/usr/bin/dcv create-session --storage-root=%home% --owner "${_username}" --user "${_username}" "${_username}session"
The container exposes the DCV port 8443 for connections with a web browser or with the native DCV client, which offers the best performance. You can connect to DCV in two ways:
- Web browser: https://External_IP_or_Server_Name:8443 (accept the security exception, as no SSL certificate is installed)
- DCV native client for best performance: enter “External_IP_or_Server_Name” in the connection field (the portable DCV client can be downloaded here: https://download.nice-dcv.com/)
In case you prefer to use an Ubuntu 20 GPU container, here is an Ubuntu-based GPU Dockerfile from this discussion.
Working with DCV containers
Here are a number of useful commands for working with the DCV containers, for podman and docker:
- sudo podman ps / sudo docker ps: list the active containers
- sudo podman exec -ti CONTAINER_ID /bin/bash / sudo docker exec -ti CONTAINER_ID /bin/bash: log into the container using the container ID shown by the ps command above
- sudo podman image ls / sudo docker image ls: list the images created
- sudo podman stop CONTAINER_ID / sudo docker stop CONTAINER_ID: stop the DCV container
- Start the DCV container with GPU support:
- PODMAN: sudo podman run --security-opt=label=disable --hooks-dir=/usr/share/containers/oci/hooks.d/ --privileged --rm --network=host dcv-centos7-gpu
- DOCKER: sudo docker run --gpus all --privileged --rm --network=host dcv-centos7-gpu
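Since the podman and docker variants of these commands differ only in the engine name, a small wrapper can detect which engine is available and use it. This is a sketch; it assumes at least one of the two is installed and prefers podman:

```shell
# Pick whichever container engine is available, preferring podman
engine=""
if command -v podman >/dev/null 2>&1; then
  engine=podman
elif command -v docker >/dev/null 2>&1; then
  engine=docker
fi
echo "Using container engine: ${engine:-none found}"

# Example: list the active DCV containers with whichever engine was found
#   sudo "$engine" ps
```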
If you have any questions, just let us know. Here is more information about NICE DCV.