VGG+ResNet (Fashion_MNIST)
In an earlier post, I trained a simple CNN architecture on the Fashion-MNIST data. (reference)
Keras offers two ways to build a model: the Sequential model and the Functional API.
The Sequential model is convenient when the architecture is a simple stack of layers, while somewhat more complex models are built with the Functional API.
This time, let's use the Keras Functional API to build a model with a more complex structure.
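For reference, here is a minimal sketch of the same tiny classifier written both ways; the layer sizes (64 units, 784-dimensional input) are arbitrary and only for illustration:

```python
from keras.models import Sequential, Model
from keras.layers import Dense, Input

# Sequential model: layers are stacked in a single straight line
seq_model = Sequential()
seq_model.add(Dense(64, activation='relu', input_shape=(784,)))
seq_model.add(Dense(10, activation='softmax'))

# Functional API: each layer is called on a tensor, so branches,
# multiple inputs/outputs, and skip connections become possible
inputs = Input(shape=(784,))
h = Dense(64, activation='relu')(inputs)
outputs = Dense(10, activation='softmax')(h)
func_model = Model(inputs=inputs, outputs=outputs)
```

The Functional API version is what makes the two-branch model below possible, since one input tensor can be fed into several layer paths and the results merged later.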
[VGG16 model] [ResNet model]
So what if we take the core parts of these two models and combine them using the Keras Functional API?
[Structure of the combined model]
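Before the full model, here is a minimal sketch of the combining idea, assuming a shared 28x28x1 input and two heavily simplified placeholder branches; the actual VGG-style and ResNet-style branches appear in the full code further down:

```python
from keras.models import Model
from keras.layers import Input, Dense, Flatten, Conv2D, concatenate

inputs = Input(shape=(28, 28, 1))

# Branch 1: stands in for the VGG-style stack of convolutions
b1 = Conv2D(32, (3, 3), activation='relu')(inputs)
b1 = Flatten()(b1)

# Branch 2: stands in for the ResNet-style path
b2 = Conv2D(32, (3, 3), activation='relu')(inputs)
b2 = Flatten()(b2)

# Join the two branches and attach a common classifier
merged = concatenate([b1, b2])
outputs = Dense(10, activation='softmax')(merged)
model = Model(inputs=inputs, outputs=outputs)
```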
[Fashion_MNIST data Training]
The accuracy came out to about 88%, which is actually worse than what the earlier, simpler CNN achieved. It shows that making the architecture more complex or deeper does not automatically make it train better.
```python
import time
import numpy as np
from matplotlib import pyplot as plt

import mnist_reader  # loader script from the Fashion-MNIST repository
from keras.models import Model
from keras.layers import Input, Dense, Flatten, Activation, BatchNormalization
from keras.layers import Conv2D, MaxPooling2D, ZeroPadding2D
from keras.layers import add, concatenate
from keras.utils import to_categorical

# Measure total run time
start_time = time.time()

# Load the Fashion-MNIST data
x_train, y_train = mnist_reader.load_mnist('data/', kind='train')
x_test, y_test = mnist_reader.load_mnist('data/', kind='t10k')
x_train = x_train.reshape(x_train.shape[0], 28, 28, 1).astype('float32')
x_test = x_test.reshape(x_test.shape[0], 28, 28, 1).astype('float32')

# Normalize inputs from 0-255 to 0-1
x_train = x_train / 255
x_test = x_test / 255

# One-hot encode the labels
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)
num_classes = y_test.shape[1]

# Shared input for both branches
input_img = Input(shape=(28, 28, 1), name='main_input')

# VGG-style branch: stacked 3x3 convolutions with max pooling
x1 = Conv2D(64, (3, 3))(input_img)
x1 = Activation('relu')(x1)
x1 = Conv2D(64, (3, 3))(x1)
x1 = Activation('relu')(x1)
x1 = MaxPooling2D()(x1)
x1 = Conv2D(64, (3, 3))(x1)
x1 = Activation('relu')(x1)
x1 = Conv2D(64, (3, 3))(x1)
x1 = Activation('relu')(x1)
x1 = MaxPooling2D()(x1)
x1 = Conv2D(64, (3, 3))(x1)
x1 = Activation('relu')(x1)
x1 = MaxPooling2D()(x1)
x1 = Flatten()(x1)
x1 = Dense(256)(x1)
x1 = BatchNormalization()(x1)
x1 = Activation('relu')(x1)
x1 = Dense(256)(x1)
x1 = BatchNormalization()(x1)
x1 = Activation('relu')(x1)

# ResNet-style branch: a residual block whose output is added back to the input.
# Each 3x3 convolution is followed by zero padding so the feature map stays 28x28,
# and the last convolution reduces it to a single channel to match input_img.
x = Conv2D(64, (3, 3))(input_img)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = ZeroPadding2D((1, 1))(x)
x = Conv2D(64, (3, 3))(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = ZeroPadding2D((1, 1))(x)
x = Conv2D(1, (3, 3))(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = ZeroPadding2D((1, 1))(x)
x = add([x, input_img])  # residual (skip) connection
x = Flatten()(x)
x = Dense(256)(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = Dense(256)(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)

# Concatenate the two branches and attach the classifier
x = concatenate([x1, x])
out = Dense(num_classes, activation='softmax')(x)

# Compile the model
model = Model(inputs=input_img, outputs=out)
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

# Model visualization (optional)
# from IPython.display import SVG
# from keras.utils.vis_utils import model_to_dot
# SVG(model_to_dot(model, show_shapes=True).create(prog='dot', format='svg'))

# Fit the model
hist = model.fit(x_train, y_train, validation_data=(x_test, y_test),
                 epochs=10, batch_size=50, verbose=2)

# Final evaluation of the model
scores = model.evaluate(x_test, y_test, verbose=0)
print("CNN Error: %.2f%%" % (100 - scores[1] * 100))

# Plot loss and accuracy curves
# (newer Keras versions name the history keys 'accuracy'/'val_accuracy' instead of 'acc'/'val_acc')
fig, loss_ax = plt.subplots()
acc_ax = loss_ax.twinx()
loss_ax.plot(hist.history['loss'], 'y', label='train loss')
loss_ax.plot(hist.history['val_loss'], 'r', label='val loss')
acc_ax.plot(hist.history['acc'], 'b', label='train acc')
acc_ax.plot(hist.history['val_acc'], 'g', label='val acc')
loss_ax.set_xlabel('epoch')
loss_ax.set_ylabel('loss')
acc_ax.set_ylabel('accuracy')
loss_ax.legend(loc='upper left')
acc_ax.legend(loc='lower left')
plt.show()

# Total run time
print("--- %s seconds ---" % (time.time() - start_time))
```
[Full Code]