Pooling in Neural Network

Today, a common CNN architecture in deep learning stacks a number of convolution and pooling layers on top of each other.
Pooling is the process of compressing the image, which helps the network find image features. Pooling layers reduce the dimensions of the feature maps, so they reduce both the number of parameters to learn and the amount of computation performed in the network. When convolution is combined with pooling, the two can become really powerful.

Max Pooling

An easy and common way to apply pooling is to slide a window of the pooling size (in this case 2×2) over the image and, at each position, pick the biggest of those pixel values and keep only that value.
The result is then made up of the biggest value from each 2×2 block of pixels.

Max Pooling in TensorFlow-Keras :

tensorflow.keras.layers.MaxPooling2D(2,2)

Max Pooling in python :
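The 2×2 max pooling described above can be sketched in plain Python with NumPy. The helper name `max_pool_2x2` is hypothetical; the trick is to reshape the image into 2×2 blocks and take the maximum of each block:

```python
import numpy as np

def max_pool_2x2(image):
    """2x2 max pooling with stride 2 on a 2-D array (hypothetical helper)."""
    h, w = image.shape
    # Trim odd rows/columns so the image divides evenly into 2x2 blocks
    image = image[:h // 2 * 2, :w // 2 * 2]
    # Reshape into 2x2 blocks, then take the maximum of each block
    blocks = image.reshape(h // 2, 2, w // 2, 2)
    return blocks.max(axis=(1, 3))

x = np.array([[ 1,  2,  5,  6],
              [ 3,  4,  7,  8],
              [ 9, 10, 13, 14],
              [11, 12, 15, 16]])
print(max_pool_2x2(x))
# [[ 4  8]
#  [12 16]]
```

Each output pixel is the biggest value of one 2×2 block, so a 4×4 image shrinks to 2×2.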

Average Pooling

Unlike max pooling, average pooling computes the average of all pixels in the pooling window (in this case 2×2) at each position and keeps that result.
The result is made up of the average of each 2×2 block of pixels.

Average Pooling in TensorFlow-Keras :

tensorflow.keras.layers.AveragePooling2D(2,2)
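Average pooling can be sketched in NumPy the same way as max pooling, swapping the maximum for the mean. The helper name `avg_pool_2x2` is an assumption for illustration:

```python
import numpy as np

def avg_pool_2x2(image):
    """2x2 average pooling with stride 2 on a 2-D array (hypothetical helper)."""
    h, w = image.shape
    # Trim odd rows/columns so the image divides evenly into 2x2 blocks
    image = image[:h // 2 * 2, :w // 2 * 2]
    # Reshape into 2x2 blocks, then take the mean of each block
    blocks = image.reshape(h // 2, 2, w // 2, 2)
    return blocks.mean(axis=(1, 3))

x = np.array([[ 1.,  2.,  5.,  6.],
              [ 3.,  4.,  7.,  8.],
              [ 9., 10., 13., 14.],
              [11., 12., 15., 16.]])
print(avg_pool_2x2(x))
# [[ 2.5  6.5]
#  [10.5 14.5]]
```

Note that the output is smoother than max pooling's: every pixel in the block contributes, instead of only the strongest activation.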

Global Pooling

There is another type of pooling that is sometimes used, called global pooling. Global pooling layers can be used in a variety of cases. Primarily, they reduce the dimensionality of the feature maps output by a convolutional layer, and can replace the Flatten layer and sometimes even Dense layers in your classifier. A global pooling layer outputs a single value per feature map.

Global Max Pooling

Global Max Pooling in TensorFlow-Keras :

tensorflow.keras.layers.GlobalMaxPooling2D()

Global Average Pooling

Global Average Pooling in TensorFlow-Keras :

tensorflow.keras.layers.GlobalAveragePooling2D()
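Both global pooling variants collapse each whole H×W feature map into one value. As a sketch, assuming a channels-last layout `(batch, height, width, channels)` as Keras uses by default, they are just NumPy reductions over the spatial axes:

```python
import numpy as np

# A hypothetical batch of feature maps: (batch=2, height=4, width=4, channels=3)
features = np.arange(2 * 4 * 4 * 3, dtype=float).reshape(2, 4, 4, 3)

# Global max pooling: the single biggest value of each feature map
global_max = features.max(axis=(1, 2))

# Global average pooling: the mean of each feature map
global_avg = features.mean(axis=(1, 2))

print(global_max.shape, global_avg.shape)
# (2, 3) (2, 3)
```

The 4×4 spatial dimensions disappear entirely, leaving one value per channel, which is why a global pooling layer can feed a final Dense classifier directly without a Flatten layer.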