6-4 Using Multiple GPUs to Train a Model

When training a model on multiple GPUs, the built-in fit method is recommended: it is convenient and requires adding only two lines of code.
An overview of how MirroredStrategy works (a code sketch of these steps follows the list):

  • Before training starts, the strategy replicates a complete copy of the model on each of the N compute devices;
  • Each incoming batch of training data is split into N shards, which are fed to the N compute devices respectively (i.e., data parallelism);
  • Each of the N devices computes the gradients on its own shard of the data using its local (mirrored) variables;
  • A distributed all-reduce operation efficiently exchanges and sums the gradient data across devices, so that every device ends up holding the sum of all devices' gradients;
  • Each device updates its local (mirrored) variables with the summed gradients;
  • Once every device has updated its local variables, the next training round begins (i.e., this parallelism strategy is synchronous).
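For readers curious what the fit method does behind the scenes, here is a minimal custom-training-loop sketch of the steps above, written against the public tf.distribute API. This example is illustrative and not part of the original text; the toy Dense layer and random data are stand-ins.

# Sketch: the mechanics that model.fit hides when running under MirroredStrategy
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
    # Variables created inside the scope are mirrored: one copy per device
    dense = tf.keras.layers.Dense(1)
    dense.build(input_shape=(None, 4))  # build eagerly so variables are created under the scope
    optimizer = tf.keras.optimizers.SGD(0.1)

GLOBAL_BATCH_SIZE = 16

@tf.function
def train_step(dist_inputs):
    def step_fn(inputs):
        x, y = inputs
        with tf.GradientTape() as tape:
            per_example_loss = tf.square(dense(x) - y)
            # Scale by the global batch size so the summed gradients match single-device training
            loss = tf.nn.compute_average_loss(per_example_loss,
                                              global_batch_size=GLOBAL_BATCH_SIZE)
        grads = tape.gradient(loss, dense.trainable_variables)
        # apply_gradients all-reduces the gradients across replicas before updating
        optimizer.apply_gradients(zip(grads, dense.trainable_variables))
        return loss
    per_replica_losses = strategy.run(step_fn, args=(dist_inputs,))
    return strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica_losses, axis=None)

# Each global batch is split across the replicas (data parallelism)
ds = tf.data.Dataset.from_tensor_slices(
    (tf.random.normal([64, 4]), tf.random.normal([64, 1]))).batch(GLOBAL_BATCH_SIZE)
for batch in strategy.experimental_distribute_dataset(ds):
    print(train_step(batch))

The rest of this section returns to the original fit-based workflow.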
import tensorflow as tf
print(tf.__version__)
from tensorflow.keras import models, layers, datasets, preprocessing, optimizers, losses, metrics

# Utility: print a timestamped separator line
@tf.function
def printbar():
    ts = tf.timestamp()
    today_ts = ts%(24*60*60)

    hour = tf.cast(today_ts//3600+8,tf.int32)%tf.constant(24)  # +8 hours: UTC+8 (Beijing time)
    minute = tf.cast((today_ts%3600)//60,tf.int32)
    second = tf.cast(tf.floor(today_ts%60),tf.int32)
    
    def timeformat(m):
        if tf.strings.length(tf.strings.format("{}",m))==1:
            return(tf.strings.format("0{}",m))
        else:
            return(tf.strings.format("{}",m))
    
    timestring = tf.strings.join([timeformat(hour),timeformat(minute),
                timeformat(second)],separator = ":")
    tf.print("=========="*8,end = "")
    tf.print(timestring)

"""
2.6.0
"""
# Simulate two logical GPUs on a single physical GPU for multi-GPU training
gpus = tf.config.experimental.list_physical_devices("GPU")
if gpus:
    # Create two logical GPUs to simulate a multi-GPU setup
    try:
        tf.config.experimental.set_virtual_device_configuration(gpus[0],
                                                               [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=1024),
                                                               tf.config.experimental.VirtualDeviceConfiguration(memory_limit=1024)])
        logical_gpus = tf.config.experimental.list_logical_devices("GPU")
        print(len(gpus), "Physical GPU", len(logical_gpus), "Logical GPUs")
    except RuntimeError as e:
        print(e)

"""
1 Physical GPU 2 Logical GPUs
"""

Prepare the data

MAX_LEN = 200
BATCH_SIZE = 32
(x_train, y_train), (x_test, y_test) = datasets.reuters.load_data()
x_train = preprocessing.sequence.pad_sequences(x_train, maxlen=MAX_LEN)
x_test = preprocessing.sequence.pad_sequences(x_test, maxlen=MAX_LEN)

MAX_WORDS = x_train.max() + 1
CAT_NUM = y_train.max() + 1

ds_train = tf.data.Dataset.from_tensor_slices((x_train, y_train)) \
        .shuffle(buffer_size=1000).batch(BATCH_SIZE) \
        .cache().prefetch(tf.data.experimental.AUTOTUNE)  # cache first; prefetch should be the last step

ds_test = tf.data.Dataset.from_tensor_slices((x_test, y_test)) \
        .shuffle(buffer_size=1000).batch(BATCH_SIZE) \
        .cache().prefetch(tf.data.experimental.AUTOTUNE)
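To sanity-check the pipeline before training, it can help to inspect one batch (an illustrative snippet, not in the original):

# Each element should be a (features, labels) pair with shapes (32, 200) and (32,)
for x, y in ds_train.take(1):
    print(x.shape, y.shape)
print(ds_train.element_spec)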

Define the model

tf.keras.backend.clear_session()
def create_model():
    model = models.Sequential()
    model.add(layers.Embedding(MAX_WORDS, 7, input_length=MAX_LEN))
    model.add(layers.Conv1D(filters=64, kernel_size=5, activation="relu"))
    model.add(layers.MaxPool1D(2))
    model.add(layers.Conv1D(filters=32, kernel_size=3, activation="relu"))
    model.add(layers.MaxPool1D(2))
    model.add(layers.Flatten())
    model.add(layers.Dense(CAT_NUM, activation="softmax"))
    return model

def compile_model(model):
    model.compile(optimizer=optimizers.Nadam(), loss=losses.SparseCategoricalCrossentropy(), 
                       metrics=[metrics.SparseCategoricalAccuracy(), metrics.SparseTopKCategoricalAccuracy(5)])
    return model
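As a quick check on the definition: the embedding layer alone contributes MAX_WORDS × 7 = 30,980 × 7 = 216,860 parameters (the largest word index in the Reuters data is 30,979), which matches the first row of the model summary printed during training below.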

Train the model

%%time
# Add the following two lines of code
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    model = create_model()
    model.summary()
    model = compile_model(model)
history = model.fit(ds_train, validation_data=ds_test, epochs=10)

"""
WARNING:tensorflow:NCCL is not supported when using virtual GPUs, fallingback to reduction to one device
INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0', '/job:localhost/replica:0/task:0/device:GPU:1')
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
embedding (Embedding)        (None, 200, 7)            216860    
_________________________________________________________________
conv1d (Conv1D)              (None, 196, 64)           2304      
_________________________________________________________________
max_pooling1d (MaxPooling1D) (None, 98, 64)            0         
_________________________________________________________________
conv1d_1 (Conv1D)            (None, 96, 32)            6176      
_________________________________________________________________
max_pooling1d_1 (MaxPooling1 (None, 48, 32)            0         
_________________________________________________________________
flatten (Flatten)            (None, 1536)              0         
_________________________________________________________________
dense (Dense)                (None, 46)                70702     
=================================================================
Total params: 296,042
Trainable params: 296,042
Non-trainable params: 0
_________________________________________________________________
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
Epoch 1/10
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:GPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:GPU:0', '/job:localhost/replica:0/task:0/device:GPU:1').
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:GPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:GPU:0', '/job:localhost/replica:0/task:0/device:GPU:1').
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:GPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:GPU:0', '/job:localhost/replica:0/task:0/device:GPU:1').
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:GPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:GPU:0', '/job:localhost/replica:0/task:0/device:GPU:1').
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:GPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:GPU:0', '/job:localhost/replica:0/task:0/device:GPU:1').
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:GPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:GPU:0', '/job:localhost/replica:0/task:0/device:GPU:1').
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:GPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:GPU:0', '/job:localhost/replica:0/task:0/device:GPU:1').
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:GPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:GPU:0', '/job:localhost/replica:0/task:0/device:GPU:1').
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:GPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:GPU:0', '/job:localhost/replica:0/task:0/device:GPU:1').
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:GPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:GPU:0', '/job:localhost/replica:0/task:0/device:GPU:1').
281/281 [==============================] - 6s 14ms/step - loss: 2.0152 - sparse_categorical_accuracy: 0.4662 - sparse_top_k_categorical_accuracy: 0.7448 - val_loss: 1.6548 - val_sparse_categorical_accuracy: 0.5712 - val_sparse_top_k_categorical_accuracy: 0.7600
Epoch 2/10
281/281 [==============================] - 4s 13ms/step - loss: 1.4559 - sparse_categorical_accuracy: 0.6231 - sparse_top_k_categorical_accuracy: 0.8034 - val_loss: 1.5219 - val_sparse_categorical_accuracy: 0.6118 - val_sparse_top_k_categorical_accuracy: 0.7912
Epoch 3/10
281/281 [==============================] - 4s 14ms/step - loss: 1.1807 - sparse_categorical_accuracy: 0.6924 - sparse_top_k_categorical_accuracy: 0.8532 - val_loss: 1.5316 - val_sparse_categorical_accuracy: 0.6309 - val_sparse_top_k_categorical_accuracy: 0.8059
Epoch 4/10
281/281 [==============================] - 4s 14ms/step - loss: 0.9384 - sparse_categorical_accuracy: 0.7521 - sparse_top_k_categorical_accuracy: 0.9071 - val_loss: 1.6911 - val_sparse_categorical_accuracy: 0.6362 - val_sparse_top_k_categorical_accuracy: 0.8005
Epoch 5/10
281/281 [==============================] - 4s 15ms/step - loss: 0.7138 - sparse_categorical_accuracy: 0.8157 - sparse_top_k_categorical_accuracy: 0.9450 - val_loss: 1.9105 - val_sparse_categorical_accuracy: 0.6229 - val_sparse_top_k_categorical_accuracy: 0.7952
Epoch 6/10
281/281 [==============================] - 4s 14ms/step - loss: 0.5354 - sparse_categorical_accuracy: 0.8658 - sparse_top_k_categorical_accuracy: 0.9680 - val_loss: 2.1388 - val_sparse_categorical_accuracy: 0.6064 - val_sparse_top_k_categorical_accuracy: 0.7930
Epoch 7/10
281/281 [==============================] - 4s 14ms/step - loss: 0.4178 - sparse_categorical_accuracy: 0.8975 - sparse_top_k_categorical_accuracy: 0.9800 - val_loss: 2.3550 - val_sparse_categorical_accuracy: 0.6086 - val_sparse_top_k_categorical_accuracy: 0.7921
Epoch 8/10
281/281 [==============================] - 4s 14ms/step - loss: 0.3433 - sparse_categorical_accuracy: 0.9161 - sparse_top_k_categorical_accuracy: 0.9874 - val_loss: 2.5456 - val_sparse_categorical_accuracy: 0.6064 - val_sparse_top_k_categorical_accuracy: 0.7907
Epoch 9/10
281/281 [==============================] - 4s 14ms/step - loss: 0.2937 - sparse_categorical_accuracy: 0.9287 - sparse_top_k_categorical_accuracy: 0.9904 - val_loss: 2.7295 - val_sparse_categorical_accuracy: 0.6020 - val_sparse_top_k_categorical_accuracy: 0.7956
Epoch 10/10
281/281 [==============================] - 4s 14ms/step - loss: 0.2608 - sparse_categorical_accuracy: 0.9354 - sparse_top_k_categorical_accuracy: 0.9932 - val_loss: 2.8850 - val_sparse_categorical_accuracy: 0.6002 - val_sparse_top_k_categorical_accuracy: 0.7952
CPU times: user 1min 34s, sys: 20.5 s, total: 1min 55s
Wall time: 42.1 s
"""

Source: https://www.cnblogs.com/lotuslaw/p/16438020.html