Does Python 3 in Dynamo use the GPU or the CPU?

I think this depends on an extra setup step that decides whether the machine uses the GPU or not. I am using TensorFlow now, and I had to install the CUDA Toolkit separately to use the GPU.
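A quick illustration of what that means in practice (my addition, assuming TensorFlow is installed in the Python environment): without a matching CUDA Toolkit and cuDNN install, TensorFlow registers no GPU devices and silently falls back to the CPU.

import tensorflow as tf

# The CPU device is always registered.
print(tf.config.list_physical_devices('CPU'))

# Without a matching CUDA Toolkit + cuDNN install this list is empty,
# so all ops fall back to the CPU.
print(tf.config.list_physical_devices('GPU'))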

@chuongmep thank you for your reply. Which extra CUDA Toolkit do I need in order to use the GPU?

I recommend this link:

NVIDIA cuDNN | NVIDIA Developer

Until now I can't access the GPU from Dynamo. I installed the CUDA Toolkit, but I still can't access the GPU from Dynamo.

Have you installed cuDNN and tensorflow-gpu?

https://developer.nvidia.com/cudnn
Check your TensorFlow version before installing so that it maps to a supported CUDA version; at the moment TensorFlow supports CUDA 11.2.
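One way to check which CUDA and cuDNN versions your installed TensorFlow wheel was built against (my addition; tf.sysconfig.get_build_info() is available in TF 2.3+, and the exact keys can vary by build):

import tensorflow as tf

# On GPU builds this reports e.g. cuda_version '11.2' and the cuDNN version
info = tf.sysconfig.get_build_info()
print(info.get('cuda_version'), info.get('cudnn_version'))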

Use the command line:

pip install --upgrade tensorflow-gpu --user

and check again:

# importing the tensorflow package
import tensorflow as tf

# True if this TensorFlow build was compiled with CUDA support
print(tf.test.is_built_with_cuda())

# True if a GPU is actually visible at runtime
print(tf.test.is_gpu_available(cuda_only=False, min_cuda_compute_capability=None))
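As an aside (not in the original post): tf.test.is_gpu_available is deprecated in recent TensorFlow 2.x releases, and the deprecation message recommends querying the device list directly instead:

import tensorflow as tf

# Preferred, non-deprecated GPU check in TF 2.x
print(tf.config.list_physical_devices('GPU'))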

Sorry, I am new to using CUDA :smiling_face:

Does that mean I need to build TensorFlow from source?

Let's try a quick check in a Jupyter notebook first:

import sys

# Report package versions and whether TensorFlow can see a GPU
import tensorflow.keras
import pandas as pd
import sklearn as sk
import tensorflow as tf

print(f"Tensor Flow Version: {tf.__version__}")
print(f"Keras Version: {tensorflow.keras.__version__}")
print()
print(f"Python {sys.version}")
print(f"Pandas {pd.__version__}")
print(f"Scikit-Learn {sk.__version__}")
gpu = len(tf.config.list_physical_devices('GPU')) > 0
print("GPU is", "available" if gpu else "NOT AVAILABLE")

I tested it in Colab and it works.

No, Colab uses a GPU on Google's servers; we are setting things up locally so that it can run on your local machine.

I ran it in Jupyter.

Complete the CUDA setup first: install the NVIDIA driver and cuDNN so the GPU is supported, then check again and use it from the Dynamo core. Can you show me the info for the graphics card in your computer?
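As a side note (my addition): nvidia-smi on the command line shows the driver version and card model. From Python, assuming TF 2.4+ where tf.config.experimental.get_device_details exists, you can print the card info like this:

import tensorflow as tf

# Prints details such as device_name and compute_capability for each GPU
for gpu in tf.config.list_physical_devices('GPU'):
    print(tf.config.experimental.get_device_details(gpu))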

:pensive: :pensive: :pensive: :pensive: :sleepy: :sleepy:

:v: :v: :v: :v: :v: :v:

Now my problem is that TensorFlow still doesn't work with the GPU.

Try setting some config like this:

import tensorflow as tf

gpus = tf.config.list_physical_devices('GPU')
if gpus:
  # Create 2 virtual GPUs with 1GB memory each
  try:
    tf.config.set_logical_device_configuration(
        gpus[0],
        [tf.config.LogicalDeviceConfiguration(memory_limit=1024),
         tf.config.LogicalDeviceConfiguration(memory_limit=1024)])
    logical_gpus = tf.config.list_logical_devices('GPU')
    print(len(gpus), "Physical GPU,", len(logical_gpus), "Logical GPUs")
  except RuntimeError as e:
    # Virtual devices must be set before GPUs have been initialized
    print(e)

If the result returns 1 Physical GPU, 2 Logical GPUs, you can run it normally in Dynamo.

This is a sample computation to check whether a function runs on the GPU or not:

import tensorflow as tf

# Log which device each operation is placed on
tf.debugging.set_log_device_placement(True)

gpus = tf.config.list_logical_devices('GPU')
if gpus:
  # Replicate your computation on multiple GPUs
  c = []
  for gpu in gpus:
    with tf.device(gpu.name):
      a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
      b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
      c.append(tf.matmul(a, b))

  with tf.device('/CPU:0'):
    matmul_sum = tf.add_n(c)

  print(matmul_sum)

The result returns a Tensor("AddN:0", shape=(2, 2), dtype=float32, device=/device:CPU:0).

You can use it in Dynamo now.
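To make that last step concrete, here is a minimal sketch (my addition, not from the thread) of a CPython3 Python Script node body in Dynamo, assuming TensorFlow with GPU support is installed in the Python environment Dynamo loads; IN and OUT are Dynamo's standard node variables:

import tensorflow as tf

# IN[0] and IN[1] are nested lists wired into the Dynamo node
a = tf.constant(IN[0], dtype=tf.float32)
b = tf.constant(IN[1], dtype=tf.float32)

# The matmul is placed on the GPU automatically when TensorFlow can see one
OUT = tf.matmul(a, b).numpy().tolist()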