
Convolution Neural Network (using CIFAR-10 data)

미카이 2017. 11. 5. 00:33

Using the CIFAR-10 dataset, we will build a CNN model in Keras, train it, and end up with a model that reaches roughly 85% validation accuracy.








Processing


1. Load Data
2. Define Model
3. Compile Model
4. Fit Model
5. Evaluate Model
6. Tie It All Together

We work through steps 1 to 6 in order, and by varying steps 2, 3, and 4 we will arrive at the CNN configuration with the best performance.




Load Data
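The dataset is loaded directly through Keras. The excerpt below is taken from the full listing in the Code section, with the pixel scaling pulled forward so this step reads as one unit:

from keras.datasets import cifar10
from keras.utils import np_utils

# CIFAR-10: 50,000 training / 10,000 test images, 32x32 RGB, 10 classes
(x_train, y_train), (x_test, y_test) = cifar10.load_data()

# Scale pixels to [0, 1] and one-hot encode the integer labels
x_train = x_train.astype('float32') / 255
x_test = x_test.astype('float32') / 255
y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)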





Define Model


[Conv - Activation - BatchNormalization]     /     [Conv - BatchNormalization - Activation]


The layers were arranged in the two orderings shown above, and each ordering was trained for 20 epochs and for 200 epochs.
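A minimal sketch of the two orderings, showing only the first convolution block (model_1 and model_2 are illustrative names; the full architecture in the Code section uses the Conv - BatchNormalization - Activation ordering):

from keras.models import Sequential
from keras.layers import Conv2D, Activation, BatchNormalization

# Ordering 1: Conv -> Activation -> BatchNormalization
model_1 = Sequential()
model_1.add(Conv2D(32, (3, 3), padding='same', input_shape=x_train.shape[1:]))
model_1.add(Activation('relu'))
model_1.add(BatchNormalization())

# Ordering 2: Conv -> BatchNormalization -> Activation
model_2 = Sequential()
model_2.add(Conv2D(32, (3, 3), padding='same', input_shape=x_train.shape[1:]))
model_2.add(BatchNormalization())
model_2.add(Activation('relu'))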





Compile Model



Loss Function : Cross-entropy

Optimizer : Adam

Learning Rate : 0.001
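In the full listing the optimizer is passed as the string 'adam', which in Keras defaults to a learning rate of 0.001; the explicit equivalent would be:

from keras.optimizers import Adam

model.compile(loss='categorical_crossentropy',
              optimizer=Adam(lr=0.001),
              metrics=['accuracy'])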





Fit Model


Epochs : 20, 200

Batch size : 32
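Training is a single model.fit call; the test split is passed as validation_data so validation accuracy can be tracked every epoch (taken from the listing, with the deprecated nb_epoch argument written as epochs):

batch_size = 32
epochs = 200  # 20 for the shorter runs

hist = model.fit(x_train, y_train,
                 validation_data=(x_test, y_test),
                 epochs=epochs,
                 batch_size=batch_size,
                 verbose=2)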





Evaluate Model


[Conv - Activation - BatchNormalization]
epoch 20  : Accuracy 87.71%, Val_Accuracy 82.05%
epoch 200 : Accuracy 95.02%, Val_Accuracy 85.21%

[Conv - BatchNormalization - Activation]
epoch 20  : Accuracy 87.98%, Val_Accuracy 82.54%
epoch 200 : Accuracy 95.72%, Val_Accuracy 85.73%
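The Accuracy/Val_Accuracy values above are the training and validation accuracies reported during fitting; the final test-set score itself comes from model.evaluate. In the listing the result is printed as an error rate, which is simply 100 minus the accuracy percentage:

scores = model.evaluate(x_test, y_test, verbose=0)
print("Test accuracy: %.2f%%" % (scores[1] * 100))
print("CNN Error: %.2f%%" % (100 - scores[1] * 100))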







Visualization


[Conv - Activation - BatchNormalization] / [Conv - BatchNormalization - Activation], epoch 20

[Conv - Activation - BatchNormalization] / [Conv - BatchNormalization - Activation], epoch 200
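The plots (not reproduced here) are the training/validation loss and accuracy curves drawn from the history object returned by model.fit. The relevant excerpt from the listing puts loss on the left axis and accuracy on a second y-axis of the same figure:

fig, loss_ax = plt.subplots()
acc_ax = loss_ax.twinx()  # second y-axis sharing the epoch axis

loss_ax.plot(hist.history['loss'], 'y', label='train loss')
loss_ax.plot(hist.history['val_loss'], 'r', label='val loss')
acc_ax.plot(hist.history['acc'], 'b', label='train acc')
acc_ax.plot(hist.history['val_acc'], 'g', label='val acc')

loss_ax.set_xlabel('epoch')
loss_ax.set_ylabel('loss')
loss_ax.legend(loc='upper left')
acc_ax.set_ylabel('accuracy')
acc_ax.legend(loc='lower left')
plt.show()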






Code


from keras.datasets import cifar10
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Conv2D, MaxPooling2D
from matplotlib import pyplot as plt
import numpy as np
from keras.utils import np_utils
from keras.layers import BatchNormalization
 
batch_size = 32
num_classes = 10
epochs = 200
 
 
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
# One hot Encoding
y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)
 
model = Sequential()
# Block 1: two 3x3 conv layers (32 filters), each followed by BatchNormalization and ReLU
model.add(Conv2D(32, (3, 3), padding='same', input_shape=x_train.shape[1:]))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Conv2D(32, (3, 3)))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
# Block 2: two 3x3 conv layers (64 filters) with the same ordering
model.add(Conv2D(64, (3, 3), padding='same'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Conv2D(64, (3, 3)))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
 
model.add(Flatten())
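# Classifier head: 512-unit dense layer with BatchNorm, ReLU and Dropout, then softmax over the 10 classes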
model.add(Dense(512))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes))
model.add(Activation('softmax'))
 
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
 
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
 
hist = model.fit(x_train, y_train, validation_data=(x_test, y_test), epochs=epochs, batch_size=batch_size, verbose=2)
 
scores = model.evaluate(x_test, y_test, verbose=0)
print("CNN Error: %.2f%%" % (100-scores[1]*100))
 
# Visualize the training history (loss and accuracy curves)
fig, loss_ax = plt.subplots()
 
acc_ax = loss_ax.twinx()
 
loss_ax.plot(hist.history['loss'], 'y', label='train loss')
loss_ax.plot(hist.history['val_loss'], 'r', label='val loss')
 
acc_ax.plot(hist.history['acc'], 'b', label='train acc')
acc_ax.plot(hist.history['val_acc'], 'g', label='val acc')
 
loss_ax.set_xlabel('epoch')
loss_ax.set_ylabel('loss')
acc_ax.set_ylabel('accuracy')
 
loss_ax.legend(loc='upper left')
acc_ax.legend(loc='lower left')
 
plt.show()

