This time I work through "畳み込みニューラルネットワークの精度向上 - KIKAGAKU" (Improving the Accuracy of Convolutional Neural Networks, from KIKAGAKU).
Learning content
Building a base model
- Preparing the dataset
- Defining and training the model
- Checking the results
Optimization algorithms (a sketch of swapping these into Keras follows the base-model results below)
- SGD (Stochastic Gradient Descent)
- Momentum SGD
- RMSprop
- Adam (Adaptive Moment Estimation)
Overfitting countermeasures
- Dropout
- Regularization
- Early stopping
- Batch normalization
Activation functions
Source code and results
Base model
# Build the base model
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf

# Prepare the dataset
(x_train, t_train), (x_test, t_test) = tf.keras.datasets.cifar10.load_data()

# Show the first 25 training images
plt.figure(figsize=(12, 12))
for i in range(25):
    plt.subplot(5, 5, i + 1)
    plt.imshow(x_train[i])
plt.show()

# Scale the pixel values to [0, 1]
x_train = x_train / 255.0
x_test = x_test / 255.0
print(x_train.shape, x_test.shape, t_train.shape, t_test.shape)

# Define and train the model
import os
import random

def reset_seed(seed=0):
    os.environ['PYTHONHASHSEED'] = '0'
    random.seed(seed)         # fix the seed of Python's random module
    np.random.seed(seed)      # fix the NumPy seed
    tf.random.set_seed(seed)  # fix the TensorFlow seed

from tensorflow.keras import models, layers

# Fix the random seeds
reset_seed(0)

# Build the model
model = models.Sequential([
    layers.Conv2D(32, (3, 3), padding='same', activation='relu', input_shape=(32, 32, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), padding='same', activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(128, (3, 3), padding='same', activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dense(10, activation='softmax')
])

# Set up the optimizer
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)

# Compile the model
model.compile(loss='sparse_categorical_crossentropy',
              optimizer=optimizer,
              metrics=['accuracy'])
model.summary()

# Training settings
batch_size = 1024
epochs = 50

# Run the training
history = model.fit(x_train, t_train,
                    batch_size=batch_size,
                    epochs=epochs,
                    verbose=1,
                    validation_data=(x_test, t_test))

# Check the results
results = pd.DataFrame(history.history)
results[['accuracy', 'val_accuracy']].plot()
plt.show()
results[['loss', 'val_loss']].plot()
plt.show()
print(results.tail(1))
Fig.1 Dataset
Fig.2 Accuracy
Fig.3 Loss
(50000, 32, 32, 3) (10000, 32, 32, 3) (50000, 1) (10000, 1)
2020-06-07 10:13:08.921074: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv2d (Conv2D)              (None, 32, 32, 32)        896
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 16, 16, 32)        0
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 16, 16, 64)        18496
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 8, 8, 64)          0
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 8, 8, 128)         73856
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 4, 4, 128)         0
_________________________________________________________________
flatten (Flatten)            (None, 2048)              0
_________________________________________________________________
dense (Dense)                (None, 128)               262272
_________________________________________________________________
dense_1 (Dense)              (None, 10)                1290
=================================================================
Total params: 356,810
Trainable params: 356,810
Non-trainable params: 0
_________________________________________________________________
Train on 50000 samples, validate on 10000 samples
Epoch 1/50
2020-06-07 10:13:12.204334: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 134217728 exceeds 10% of system memory.
2020-06-07 10:13:15.886013: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 134217728 exceeds 10% of system memory.
 1024/50000 [..............................] - ETA: 5:01 - loss: 2.3110 - accuracy: 0.1064
2020-06-07 10:13:17.255931: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 134217728 exceeds 10% of system memory.
2020-06-07 10:13:18.197471: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 134217728 exceeds 10% of system memory.
<omitted>
49152/50000 [============================>.] - ETA: 0s - loss: 0.2993 - accuracy: 0.9004
50000/50000 [==============================] - 51s 1ms/sample - loss: 0.3000 - accuracy: 0.9003 - val_loss: 0.9475 - val_accuracy: 0.7249
        loss  accuracy  val_loss  val_accuracy
49  0.299961    0.9003   0.94746        0.7249
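The base model trains with Adam. For the other optimizers listed under the learning content, the sketch below shows, under my own assumptions, how each could be instantiated and swapped into model.compile(); the learning-rate and momentum values are illustrative and not taken from the KIKAGAKU material.

# Hedged sketch: the optimizers covered in this lesson, as tf.keras objects.
# The hyperparameter values are illustrative assumptions.
import tensorflow as tf

optimizers = {
    'SGD': tf.keras.optimizers.SGD(learning_rate=1e-2),
    'Momentum SGD': tf.keras.optimizers.SGD(learning_rate=1e-2, momentum=0.9),
    'RMSprop': tf.keras.optimizers.RMSprop(learning_rate=1e-3),
    'Adam': tf.keras.optimizers.Adam(learning_rate=1e-3),
}

# Any of these can replace the Adam instance passed to model.compile(), e.g.:
# model.compile(loss='sparse_categorical_crossentropy',
#               optimizer=optimizers['Momentum SGD'],
#               metrics=['accuracy'])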
Overfitting countermeasures (Dropout)
# Overfitting countermeasure: Dropout
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf

# Prepare the dataset
(x_train, t_train), (x_test, t_test) = tf.keras.datasets.cifar10.load_data()

# Show the first 25 training images
plt.figure(figsize=(12, 12))
for i in range(25):
    plt.subplot(5, 5, i + 1)
    plt.imshow(x_train[i])
plt.show()

# Scale the pixel values to [0, 1]
x_train = x_train / 255.0
x_test = x_test / 255.0
print(x_train.shape, x_test.shape, t_train.shape, t_test.shape)

# Define and train the model
import os
import random

def reset_seed(seed=0):
    os.environ['PYTHONHASHSEED'] = '0'
    random.seed(seed)         # fix the seed of Python's random module
    np.random.seed(seed)      # fix the NumPy seed
    tf.random.set_seed(seed)  # fix the TensorFlow seed

from tensorflow.keras import models, layers

# Fix the random seeds
reset_seed(0)

# Instantiate the model
model = models.Sequential([
    layers.Conv2D(32, (3, 3), padding='same', activation='relu', input_shape=(32, 32, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), padding='same', activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(128, (3, 3), padding='same', activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dropout(0.5),  # randomly drop half of the flattened features during training
    layers.Dense(128, activation='relu'),
    layers.Dense(10, activation='softmax')
])

# Set up the optimizer
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)

# Compile the model
model.compile(loss='sparse_categorical_crossentropy',
              optimizer=optimizer,
              metrics=['accuracy'])

# Training settings
batch_size = 1024
epochs = 50

# Run the training
history = model.fit(x_train, t_train,
                    batch_size=batch_size,
                    epochs=epochs,
                    verbose=1,
                    validation_data=(x_test, t_test))

# Check the results
results = pd.DataFrame(history.history)
results[['accuracy', 'val_accuracy']].plot()
plt.show()
results[['loss', 'val_loss']].plot()
plt.show()
print(results.tail(1))
Fig.4 Accuracy
Fig.5 Loss
(50000, 32, 32, 3) (10000, 32, 32, 3) (50000, 1) (10000, 1)
2020-06-07 15:38:17.190777: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
Train on 50000 samples, validate on 10000 samples
Epoch 1/50
2020-06-07 15:38:34.154096: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 134217728 exceeds 10% of system memory.
2020-06-07 15:38:34.853361: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 134217728 exceeds 10% of system memory.
 1024/50000 [..............................] - ETA: 6:26 - loss: 2.3177 - accuracy: 0.1094
2020-06-07 15:38:35.193546: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 134217728 exceeds 10% of system memory.
2020-06-07 15:38:35.816916: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 134217728 exceeds 10% of system memory.
 2048/50000 [>.............................] - ETA: 3:30 - loss: 2.3118 - accuracy: 0.1045
2020-06-07 15:38:36.111336: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 134217728 exceeds 10% of system memory.
<omitted>
49152/50000 [============================>.] - ETA: 0s - loss: 0.4858 - accuracy: 0.8293
50000/50000 [==============================] - 54s 1ms/sample - loss: 0.4865 - accuracy: 0.8290 - val_loss: 0.6490 - val_accuracy: 0.7791
        loss  accuracy  val_loss  val_accuracy
49  0.486461   0.82898  0.648977        0.7791
Process finished with exit code 0
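One point worth noting about the Dropout layer: Keras applies it only during training and disables it automatically at inference time, which is why the validation metrics are computed on the full network. A minimal sketch of checking that behaviour, assuming the trained model object and x_test from the script above:

# Hedged sketch: Dropout is active only while the training flag is set.
# fit()/evaluate()/predict() manage the flag for us, but it can also be
# passed explicitly when calling the model directly.
sample = x_test[:1]
y_train_mode = model(sample, training=True)   # dropout applied, output is stochastic
y_infer_mode = model(sample, training=False)  # dropout disabled, output is deterministic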
Overfitting countermeasures (Regularization)
# Overfitting countermeasure: regularization
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf

# Prepare the dataset
(x_train, t_train), (x_test, t_test) = tf.keras.datasets.cifar10.load_data()

# Show the first 25 training images
plt.figure(figsize=(12, 12))
for i in range(25):
    plt.subplot(5, 5, i + 1)
    plt.imshow(x_train[i])
plt.show()

# Scale the pixel values to [0, 1]
x_train = x_train / 255.0
x_test = x_test / 255.0
print(x_train.shape, x_test.shape, t_train.shape, t_test.shape)

# Define and train the model
import os
import random

def reset_seed(seed=0):
    os.environ['PYTHONHASHSEED'] = '0'
    random.seed(seed)         # fix the seed of Python's random module
    np.random.seed(seed)      # fix the NumPy seed
    tf.random.set_seed(seed)  # fix the TensorFlow seed

from tensorflow.keras import models, layers
from tensorflow.keras import regularizers

# Fix the random seeds
reset_seed(0)

# Instantiate the model (L2 weight penalty on every convolution kernel)
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', padding='same', kernel_regularizer=regularizers.l2(1e-2), input_shape=(32, 32, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu', padding='same', kernel_regularizer=regularizers.l2(1e-2)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(128, (3, 3), activation='relu', padding='same', kernel_regularizer=regularizers.l2(1e-2)),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dense(10, activation='softmax')
])

# Set up the optimizer
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)

# Compile the model
model.compile(loss='sparse_categorical_crossentropy',
              optimizer=optimizer,
              metrics=['accuracy'])

# Training settings
batch_size = 1024
epochs = 50

# Run the training
history = model.fit(x_train, t_train,
                    batch_size=batch_size,
                    epochs=epochs,
                    verbose=1,
                    validation_data=(x_test, t_test))

# Check the results
results = pd.DataFrame(history.history)
results[['accuracy', 'val_accuracy']].plot()
plt.show()
results[['loss', 'val_loss']].plot()
plt.show()
print(results.tail(1))
Fig.6 Accuracy
Fig.7 Loss
(50000, 32, 32, 3) (10000, 32, 32, 3) (50000, 1) (10000, 1)
2020-06-07 16:36:13.681960: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
Train on 50000 samples, validate on 10000 samples
Epoch 1/50
2020-06-07 16:36:17.696425: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 134217728 exceeds 10% of system memory.
2020-06-07 16:36:18.462207: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 134217728 exceeds 10% of system memory.
 1024/50000 [..............................] - ETA: 3:33 - loss: 3.6468 - accuracy: 0.1064
2020-06-07 16:36:19.507067: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 134217728 exceeds 10% of system memory.
2020-06-07 16:36:19.962333: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 134217728 exceeds 10% of system memory.
 2048/50000 [>.............................] - ETA: 2:05 - loss: 3.6190 - accuracy: 0.1030
2020-06-07 16:36:20.311691: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 134217728 exceeds 10% of system memory.
<omitted>
49152/50000 [============================>.] - ETA: 0s - loss: 0.9750 - accuracy: 0.7319
50000/50000 [==============================] - 48s 955us/sample - loss: 0.9751 - accuracy: 0.7320 - val_loss: 1.0727 - val_accuracy: 0.6992
        loss  accuracy  val_loss  val_accuracy
49  0.975128   0.73196  1.072687        0.6992
Process finished with exit code 0
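With kernel_regularizer=regularizers.l2(1e-2), the training objective becomes the cross-entropy plus 1e-2 times the sum of squared kernel weights, which is why the reported loss (about 0.98) sits well above the cross-entropy alone. As a minimal sketch, assuming the compiled model and tf import from the script above, the penalty terms Keras adds can be inspected directly:

# Hedged sketch: the L2 penalty terms added to the training objective.
# model.losses holds one scalar tensor per regularized layer; their sum is
# what gets added on top of the sparse categorical cross-entropy.
reg_terms = model.losses
total_penalty = tf.add_n(reg_terms)
print(len(reg_terms), float(total_penalty))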
Overfitting countermeasures (Early stopping)
# Overfitting countermeasure: early stopping
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf

# Prepare the dataset
(x_train, t_train), (x_test, t_test) = tf.keras.datasets.cifar10.load_data()

# Show the first 25 training images
plt.figure(figsize=(12, 12))
for i in range(25):
    plt.subplot(5, 5, i + 1)
    plt.imshow(x_train[i])
plt.show()

# Scale the pixel values to [0, 1]
x_train = x_train / 255.0
x_test = x_test / 255.0
print(x_train.shape, x_test.shape, t_train.shape, t_test.shape)

# Define and train the model
import os
import random

def reset_seed(seed=0):
    os.environ['PYTHONHASHSEED'] = '0'
    random.seed(seed)         # fix the seed of Python's random module
    np.random.seed(seed)      # fix the NumPy seed
    tf.random.set_seed(seed)  # fix the TensorFlow seed

from tensorflow.keras import models, layers

# Fix the random seeds
reset_seed(0)

# Instantiate the model
model = models.Sequential([
    layers.Conv2D(32, (3, 3), padding='same', activation='relu', input_shape=(32, 32, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), padding='same', activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(128, (3, 3), padding='same', activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dense(10, activation='softmax')
])

# Set up the optimizer
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)

# Compile the model
model.compile(loss='sparse_categorical_crossentropy',
              optimizer=optimizer,
              metrics=['accuracy'])

# Training settings
batch_size = 1024
epochs = 50

# Early stopping: halt when val_loss has not improved for 3 epochs
callback = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=3)

# Run the training
history = model.fit(x_train, t_train,
                    batch_size=batch_size,
                    epochs=epochs,
                    verbose=1,
                    validation_data=(x_test, t_test),
                    callbacks=[callback])

# Check the results
results = pd.DataFrame(history.history)
results[['accuracy', 'val_accuracy']].plot()
plt.show()
results[['loss', 'val_loss']].plot()
plt.show()
print(results.tail(1))
Fig.8 Accuracy
Fig.9 Loss
(50000, 32, 32, 3) (10000, 32, 32, 3) (50000, 1) (10000, 1)
2020-06-07 17:54:00.583218: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
Train on 50000 samples, validate on 10000 samples
Epoch 1/50
2020-06-07 17:54:02.839901: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 134217728 exceeds 10% of system memory.
2020-06-07 17:54:04.509140: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 134217728 exceeds 10% of system memory.
 1024/50000 [..............................] - ETA: 2:28 - loss: 2.3110 - accuracy: 0.1064
2020-06-07 17:54:05.022859: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 134217728 exceeds 10% of system memory.
2020-06-07 17:54:05.634450: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 134217728 exceeds 10% of system memory.
 2048/50000 [>.............................] - ETA: 1:32 - loss: 2.3094 - accuracy: 0.1030
2020-06-07 17:54:05.868622: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 134217728 exceeds 10% of system memory.
<omitted>
49152/50000 [============================>.] - ETA: 0s - loss: 0.6657 - accuracy: 0.7711
50000/50000 [==============================] - 50s 995us/sample - loss: 0.6658 - accuracy: 0.7713 - val_loss: 0.8530 - val_accuracy: 0.7070
        loss  accuracy  val_loss  val_accuracy
22  0.665794   0.77134  0.852974         0.707
Process finished with exit code 0
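The run stopped after 23 epochs (row index 22) because val_loss had not improved for patience=3 consecutive epochs. By default the model keeps the weights of the final epoch rather than the best one; the sketch below shows the variant I would try next, using restore_best_weights, which is a standard EarlyStopping option rather than something used in the run above:

# Hedged sketch: stop on a stalled val_loss and roll back to the best weights.
callback = tf.keras.callbacks.EarlyStopping(monitor='val_loss',
                                            patience=3,
                                            restore_best_weights=True)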
Overfitting countermeasures (Batch normalization)
# Overfitting countermeasure: batch normalization
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf

# Prepare the dataset
(x_train, t_train), (x_test, t_test) = tf.keras.datasets.cifar10.load_data()

# Show the first 25 training images
plt.figure(figsize=(12, 12))
for i in range(25):
    plt.subplot(5, 5, i + 1)
    plt.imshow(x_train[i])
plt.show()

# Scale the pixel values to [0, 1]
x_train = x_train / 255.0
x_test = x_test / 255.0
print(x_train.shape, x_test.shape, t_train.shape, t_test.shape)

# Define and train the model
import os
import random

def reset_seed(seed=0):
    os.environ['PYTHONHASHSEED'] = '0'
    random.seed(seed)         # fix the seed of Python's random module
    np.random.seed(seed)      # fix the NumPy seed
    tf.random.set_seed(seed)  # fix the TensorFlow seed

from tensorflow.keras import models, layers

# Fix the random seeds
reset_seed(0)

# Instantiate the model (BatchNormalization after each convolution block)
model = models.Sequential([
    layers.Conv2D(32, (3, 3), padding='same', activation='relu', input_shape=(32, 32, 3)),
    layers.BatchNormalization(),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), padding='same', activation='relu'),
    layers.BatchNormalization(),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(128, (3, 3), padding='same', activation='relu'),
    layers.BatchNormalization(),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dense(10, activation='softmax')
])

# Set up the optimizer
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)

# Compile the model
model.compile(loss='sparse_categorical_crossentropy',
              optimizer=optimizer,
              metrics=['accuracy'])

# Training settings
batch_size = 1024
epochs = 50

# Run the training
history = model.fit(x_train, t_train,
                    batch_size=batch_size,
                    epochs=epochs,
                    verbose=1,
                    validation_data=(x_test, t_test))

# Check the results
results = pd.DataFrame(history.history)
results[['accuracy', 'val_accuracy']].plot()
plt.show()
results[['loss', 'val_loss']].plot()
plt.show()
print(results.tail(1))
Fig.10 Accuracy
Fig.11 Loss
(50000, 32, 32, 3) (10000, 32, 32, 3) (50000, 1) (10000, 1)
2020-06-07 20:15:07.077611: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
Train on 50000 samples, validate on 10000 samples
Epoch 1/50
2020-06-07 20:15:16.254540: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 134217728 exceeds 10% of system memory.
2020-06-07 20:15:16.336112: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 134217728 exceeds 10% of system memory.
2020-06-07 20:15:18.766551: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 134217728 exceeds 10% of system memory.
 1024/50000 [..............................] - ETA: 4:15 - loss: 3.5467 - accuracy: 0.0947
2020-06-07 20:15:19.132735: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 134217728 exceeds 10% of system memory.
2020-06-07 20:15:19.174841: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 134217728 exceeds 10% of system memory.
 2048/50000 [>.............................] - ETA: 2:42 - loss: 3.4690 - accuracy: 0.1172
<omitted>
49152/50000 [============================>.] - ETA: 1s - loss: 4.6658e-04 - accuracy: 1.0000
50000/50000 [==============================] - 119s 2ms/sample - loss: 4.6756e-04 - accuracy: 1.0000 - val_loss: 1.7506 - val_accuracy: 0.7244
        loss  accuracy  val_loss  val_accuracy
49  0.000468       1.0  1.750645        0.7244
Process finished with exit code 0
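With batch normalization the training accuracy reaches 1.0 while the final val_loss (1.75) ends up far above the base model's (0.95), so on its own it mainly speeds up fitting rather than curing overfitting. The model above applies BatchNormalization after the ReLU activation; a commonly used alternative ordering is Conv -> BatchNormalization -> activation. A minimal sketch of one such block, assuming the models/layers imports from the script above (this layout is my own assumption, not the ordering used in the KIKAGAKU material):

# Hedged sketch: one convolution block with BatchNormalization placed
# before the activation instead of after it.
block = models.Sequential([
    layers.Conv2D(32, (3, 3), padding='same', use_bias=False,
                  input_shape=(32, 32, 3)),  # bias is redundant before BN
    layers.BatchNormalization(),
    layers.Activation('relu'),
    layers.MaxPooling2D((2, 2)),
])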
Activation functions
# Activation functions
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def relu(x):
    return np.maximum(0, x)

def tanh(x):
    return np.tanh(x)

def leaky_relu(x):
    return np.maximum(x, 0.01 * x)  # small fixed slope for negative inputs

def prelu(x, a):
    return np.maximum(x, a * x)     # slope a is a learnable parameter in practice

# Plot the activation functions side by side
fig = plt.figure(figsize=(10, 6))
x = np.linspace(-10, 10, 1000)
ax = fig.add_subplot(111)
ax.plot(x, sigmoid(x), label='sigmoid')
ax.plot(x, relu(x), label='ReLU')
ax.plot(x, tanh(x), label='tanh')
ax.plot(x, leaky_relu(x), label='leaky_relu')
ax.plot(x, prelu(x, 0.08), label='prelu')
plt.legend()
plt.xlim(-5, 5)
plt.ylim(-1.1, 2)
plt.grid(color='white', linestyle='-')
plt.show()
Fig.12 Activation functions
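The plot above uses NumPy re-implementations purely for visualisation. To actually use Leaky ReLU or PReLU inside the Keras models from the earlier sections, the corresponding layers can be dropped in after a Dense or Conv2D layer; a minimal sketch follows, where the alpha value is an illustrative assumption:

# Hedged sketch: the same activations as tf.keras layers.
from tensorflow.keras import layers

dense_block = [
    layers.Dense(128),
    layers.LeakyReLU(alpha=0.01),  # fixed negative slope
    layers.Dense(128),
    layers.PReLU(),                # negative slope learned per unit
]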