# Use of Intel oneAPI docker images

If you want to run within a container, a set of predefined images is available:

- [default oneAPI image](intel/oneapi-basekit) (4.26 GiB): sees your Intel CPU and iGPU.
- [small image](gitlab-registry.in2p3.fr/codeursintensifs/grayscott/grayscottsyclsetup/jfalcou:2024.1) (2.29 GiB): only sees your Intel CPU.
- [big image](gitlab-registry.in2p3.fr/codeursintensifs/grayscott/grayscottsyclsetup/dchamont:2024.1) (5.29 GiB): with CUDA and the NVidia Codeplay plugin.

IMPORTANT: our images only include the external software needed for the SYCL practices, not the pedagogical material from this repository and its siblings. You must `git clone` them yourself, either before starting the container, in which case you need to mount the directory into the container, or after starting the container, in which case you clone directly inside the container, provided it has internet access (a minimal cloning sketch is given at the end of this section).

We recommend starting your container with the following command, which sets some useful options for an Intel GPU (`--device=/dev/dri`) and/or an NVidia card (`--gpus all`), and mounts the current directory as `/work` in the container:

```sh
cd GrayScottSyclSetup/
IMG=gitlab-registry.in2p3.fr/codeursintensifs/grayscott/grayscottsyclsetup/dchamont:2024.1
docker pull ${IMG}
docker run --gpus all --device=/dev/dri --network host -it --rm -v ${PWD}:/work -w /work ${IMG}
```

First check that the installation is OK with `sycl-ls`, and with `nvidia-smi` if you have a CUDA card. Then you can go through the test program for CPU and/or CUDA:

```sh
# move to the top directory
cd GrayScottSyclSetup/

# check CPU nodes
IMG=gitlab-registry.in2p3.fr/codeursintensifs/grayscott/grayscottsyclsetup/jfalcou:2024.1
docker run --network host -it --rm -v ${PWD}:/work -w /work ${IMG}
cd CheckOneApi
./intel.bash     # give the list of available devices
./intel.bash 1   # check the results of device 1
exit

# check GPU nodes
IMG=gitlab-registry.in2p3.fr/codeursintensifs/grayscott/grayscottsyclsetup/dchamont:2024.1
docker run --gpus all --device=/dev/dri --network host -it --rm -v ${PWD}:/work -w /work ${IMG}
cd CheckOneApi
nvidia-smi
./cuda.bash      # give the list of available devices
./cuda.bash 4    # check the results of device 4
exit
```

Have a look at `intel.bash` to see the various compile and run steps, and optionally adapt it to your own needs.
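As a rough idea of what such a script may do, here is a minimal sketch of a typical oneAPI compile-and-run sequence. It is an assumption, not the actual content of `intel.bash`: the source file name `check-devices.cpp`, the output name, and the commented device-selection step are all placeholders.

```sh
#!/bin/bash
# Minimal sketch of a oneAPI compile-and-run script (assumed content,
# not the actual intel.bash; file names are placeholders).

# compile the SYCL test program with the oneAPI DPC++ compiler
icpx -fsycl check-devices.cpp -o check-devices.exe

# optionally restrict the visible devices, e.g. to the OpenCL CPU backend
# export ONEAPI_DEVICE_SELECTOR=opencl:cpu

# run, forwarding an optional device index from the command line
./check-devices.exe "$@"
```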
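As for fetching the pedagogical material mentioned above, here is the minimal cloning sketch for the clone-inside-the-container workflow; the repository URL below is a placeholder, substitute the actual address of this repository and its siblings.

```sh
# Minimal sketch: clone the pedagogical material from inside a running
# container with internet access. The URL is a placeholder, not the
# actual repository address.
git clone https://example.gitlab.instance/path/to/GrayScottSyclSetup.git
cd GrayScottSyclSetup/
```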