
CIFAR100 and VGG13 in Practice

2021-04-15



Contents

  • CIFAR100
  • 13 Layers
  • cifar100_train


CIFAR100

[Figure: the CIFAR100 dataset]
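CIFAR100 contains 60,000 32x32 RGB images in 100 classes (600 images per class), split into 50,000 training and 10,000 test images. A quick check of the shapes exactly as Keras delivers them:

from tensorflow.keras import datasets

# downloads on first use; note the labels carry a trailing axis of size 1
(x, y), (x_test, y_test) = datasets.cifar100.load_data()
print(x.shape, y.shape)            # (50000, 32, 32, 3) (50000, 1)
print(x_test.shape, y_test.shape)  # (10000, 32, 32, 3) (10000, 1)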

13 Layers

[Figure: the 13-layer VGG13 architecture]
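The "13" counts the weighted layers: ten 3x3 convolutions (two per unit, five units) plus the three fully connected layers of the classifier head. Each unit ends in a 2x2 max-pool with stride 2, so the 32x32 input is halved five times down to 1x1 while the channels grow to 512. A small sketch of that arithmetic:

# stride-2 pooling halves the spatial size once per unit:
# 32 -> 16 -> 8 -> 4 -> 2 -> 1
size = 32
for unit, channels in enumerate([64, 128, 256, 512, 512], start=1):
    size //= 2
    print(f'after unit {unit}: {size}x{size}x{channels}')
# after unit 5: 1x1x512, which is why the classifier head expects 512 inputs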

cifar100_train

The full training script: the ten convolutional layers form a Sequential backbone, its 1x1x512 output is flattened to a 512-vector, and a three-layer fully connected head maps the features to 100 logits.

import tensorflow as tf
from tensorflow.keras import layers, optimizers, datasets, Sequential
import os

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'  # silence TF info/warning logs
conv_layers = [
    # 5 units of conv + max pooling
    # unit 1
    layers.Conv2D(64,
                  kernel_size=[3, 3],
                  padding="same",
                  activation=tf.nn.relu),
    layers.Conv2D(64,
                  kernel_size=[3, 3],
                  padding="same",
                  activation=tf.nn.relu),
    layers.MaxPool2D(pool_size=[2, 2], strides=2, padding='same'),

    # unit 2
    layers.Conv2D(128,
                  kernel_size=[3, 3],
                  padding="same",
                  activation=tf.nn.relu),
    layers.Conv2D(128,
                  kernel_size=[3, 3],
                  padding="same",
                  activation=tf.nn.relu),
    layers.MaxPool2D(pool_size=[2, 2], strides=2, padding='same'),

    # unit 3
    layers.Conv2D(256,
                  kernel_size=[3, 3],
                  padding="same",
                  activation=tf.nn.relu),
    layers.Conv2D(256,
                  kernel_size=[3, 3],
                  padding="same",
                  activation=tf.nn.relu),
    layers.MaxPool2D(pool_size=[2, 2], strides=2, padding='same'),

    # unit 4
    layers.Conv2D(512,
                  kernel_size=[3, 3],
                  padding="same",
                  activation=tf.nn.relu),
    layers.Conv2D(512,
                  kernel_size=[3, 3],
                  padding="same",
                  activation=tf.nn.relu),
    layers.MaxPool2D(pool_size=[2, 2], strides=2, padding='same'),

    # unit 5
    layers.Conv2D(512,
                  kernel_size=[3, 3],
                  padding="same",
                  activation=tf.nn.relu),
    layers.Conv2D(512,
                  kernel_size=[3, 3],
                  padding="same",
                  activation=tf.nn.relu),
    layers.MaxPool2D(pool_size=[2, 2], strides=2, padding='same'),
]


def preprocess(x, y):
    # scale pixel values from [0, 255] to [0, 1]
    x = tf.cast(x, dtype=tf.float32) / 255.
    y = tf.cast(y, dtype=tf.int32)
    return x, y


(x, y), (x_test, y_test) = datasets.cifar100.load_data()
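# labels load as [b, 1]; drop the trailing axis so one_hot/equal work on [b]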
y = tf.squeeze(y, axis=1)
y_test = tf.squeeze(y_test, axis=1)
print(x.shape, y.shape, x_test.shape, y_test.shape)

train_db = tf.data.Dataset.from_tensor_slices((x, y))
train_db = train_db.shuffle(1000).map(preprocess).batch(64)

test_db = tf.data.Dataset.from_tensor_slices((x_test, y_test))
test_db = test_db.map(preprocess).batch(64)
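# each train_db element is now a batch: x is [64, 32, 32, 3] float32 in [0, 1],
# y is [64] int32 (the final batch is smaller, since 50000 % 64 != 0)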


def main():

    # [b,32,32,3]-->[b,1,1,512]
    conv_net = Sequential(conv_layers)
    # sanity check: a random batch should map to [4, 1, 1, 512]
    # x = tf.random.normal([4, 32, 32, 3])
    # out = conv_net(x)
    # print(out.shape)

    fc_net = Sequential([
        layers.Dense(256, activation=tf.nn.relu),
        layers.Dense(128, activation=tf.nn.relu),
        layers.Dense(100, activation=None),
    ])

    conv_net.build(input_shape=[None, 32, 32, 3])
    fc_net.build(input_shape=[None, 512])
    optimizer = optimizers.Adam(learning_rate=1e-4)
    
    # list concatenation: [1, 2] + [3, 4] = [1, 2, 3, 4],
    # so a single gradient step updates both networks
    variables = conv_net.trainable_variables + fc_net.trainable_variables

    for epoch in range(3):
        for step, (x, y) in enumerate(train_db):
            with tf.GradientTape() as tape:
                # [b,32,32,3] --> [b,1,1,512]
                out = conv_net(x)
                # flatten ==> [b,512]
                out = tf.reshape(out, [-1, 512])
                # [b,512] --> [b,100]
                logits = fc_net(out)
                # [b] --> [b,100]
                y_onehot = tf.one_hot(y, depth=100)
                # compute loss
                loss = tf.losses.categorical_crossentropy(y_onehot, logits, from_logits=True)
                loss = tf.reduce_mean(loss)
                
            grads = tape.gradient(loss, variables)
            optimizer.apply_gradients(zip(grads, variables))

            if step % 100 == 0:
                print(epoch, step, 'loss:', float(loss))
            
        total_num = 0
        total_correct = 0
        for x, y in test_db:

            out = conv_net(x)
            out = tf.reshape(out, [-1, 512])
            logits = fc_net(out)
            prob = tf.nn.softmax(logits, axis=1)
            pred = tf.argmax(prob, axis=1)
            pred = tf.cast(pred, dtype=tf.int32)

            correct = tf.cast(tf.equal(pred, y), dtype=tf.int32)
            correct = tf.reduce_sum(correct)

            total_num += x.shape[0]
            total_correct += int(correct)

        acc = total_correct / total_num
        print(epoch, 'acc:', acc)
            

if __name__ == '__main__':
    main()
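A note on the loss: the last Dense layer has no activation, so fc_net emits raw logits and from_logits=True tells categorical_crossentropy to apply the softmax itself (the explicit tf.nn.softmax in the test loop only serves prediction). Because the labels are plain integer class ids, the one_hot step could equally be dropped in favor of the sparse variant; a minimal sketch showing the two formulations agree:

import tensorflow as tf

logits = tf.random.normal([4, 100])  # stand-in for fc_net's output
y = tf.constant([3, 17, 42, 99])     # integer labels in [0, 100)

# the loss as written in the training loop above
y_onehot = tf.one_hot(y, depth=100)
loss_a = tf.reduce_mean(
    tf.losses.categorical_crossentropy(y_onehot, logits, from_logits=True))

# equivalent sparse form, no one_hot needed
loss_b = tf.reduce_mean(
    tf.losses.sparse_categorical_crossentropy(y, logits, from_logits=True))

print(float(loss_a), float(loss_b))  # the two values match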

Source: https://blog.51cto.com/u_13804357/2709164
