
I am working on a multi-class classification project using a CNN. My problem is that the training accuracy is high, but the model does not predict the validation data well. I introduced l2 regularization, but generalization did not improve; I also tried different l2 regularization values (1e-3, 1e-4). These are my Accuracy graph and Loss graph. Topology:

import tensorflow
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.layers import Conv2D, BatchNormalization, Activation, Flatten, Dense
from tensorflow.keras.models import Model
from tensorflow.keras.regularizers import l2

inputs = keras.Input(shape=(512, 512, 3), name="img")

# Block 1: three 3x3 convs with 32 filters, shortcut from the first activation
x = Conv2D(32, kernel_size=(3, 3), strides=(1, 1), kernel_regularizer=l2(1e-5), padding='same')(inputs)
x = BatchNormalization()(x)
x1 = Activation('relu')(x)
x2 = Conv2D(32, kernel_size=(3, 3), strides=(1, 1), kernel_regularizer=l2(1e-5), padding='same')(x1)
x = BatchNormalization()(x2)
x = Activation('relu')(x)
x3 = Conv2D(32, kernel_size=(3, 3), strides=(1, 1), kernel_regularizer=l2(1e-5), padding='same')(x)
x = BatchNormalization()(x3)
x = tensorflow.keras.layers.add([x, x1])  # ==> Shortcut
x = Activation('relu')(x)

# Block 2: 64 filters, first conv downsamples with stride 2
x4 = Conv2D(64, kernel_size=(3, 3), strides=(2, 2), kernel_regularizer=l2(1e-5), padding='same')(x)
x = BatchNormalization()(x4)
x = Activation('relu')(x)
x5 = Conv2D(64, kernel_size=(3, 3), strides=(1, 1), kernel_regularizer=l2(1e-5), padding='same')(x)
x = BatchNormalization()(x5)
x = Activation('relu')(x)
x6 = Conv2D(64, kernel_size=(3, 3), strides=(1, 1), kernel_regularizer=l2(1e-5), padding='same')(x)
x = BatchNormalization()(x6)
x = tensorflow.keras.layers.add([x, x4])  # ==> Shortcut
x = Activation('relu')(x)

# Block 3: 128 filters, first conv downsamples with stride 2
x7 = Conv2D(128, kernel_size=(3, 3), strides=(2, 2), kernel_regularizer=l2(1e-5), padding='same')(x)
x = BatchNormalization()(x7)
x = Activation('relu')(x)
x8 = Conv2D(128, kernel_size=(3, 3), strides=(1, 1), kernel_regularizer=l2(1e-5), padding='same')(x)
x = BatchNormalization()(x8)
x = Activation('relu')(x)
x9 = Conv2D(128, kernel_size=(3, 3), strides=(1, 1), kernel_regularizer=l2(1e-5), padding='same')(x)
x = BatchNormalization()(x9)
x = tensorflow.keras.layers.add([x, x7])  # ==> Shortcut
x = Activation('relu')(x)

# Block 4: 256 filters, no downsampling
x10 = Conv2D(256, kernel_size=(3, 3), strides=(1, 1), kernel_regularizer=l2(1e-5), padding='same')(x)
x = BatchNormalization()(x10)
x = Activation('relu')(x)
x11 = Conv2D(256, kernel_size=(3, 3), strides=(1, 1), kernel_regularizer=l2(1e-5), padding='same')(x)
x = BatchNormalization()(x11)
x = Activation('relu')(x)
x12 = Conv2D(256, kernel_size=(3, 3), strides=(1, 1), kernel_regularizer=l2(1e-5), padding='same')(x)
x = BatchNormalization()(x12)
x = tensorflow.keras.layers.add([x, x10])  # ==> Shortcut
x = Activation('relu')(x)

# Block 5: 512 filters, no downsampling
x13 = Conv2D(512, kernel_size=(3, 3), strides=(1, 1), kernel_regularizer=l2(1e-5), padding='same')(x)
x = BatchNormalization()(x13)
x = Activation('relu')(x)
x14 = Conv2D(512, kernel_size=(3, 3), strides=(1, 1), kernel_regularizer=l2(1e-5), padding='same')(x)
x = BatchNormalization()(x14)
x = Activation('relu')(x)
x15 = Conv2D(512, kernel_size=(3, 3), strides=(1, 1), kernel_regularizer=l2(1e-5), padding='same')(x)
x = BatchNormalization()(x15)
x = tensorflow.keras.layers.add([x, x13])  # ==> Shortcut
x = Activation('relu')(x)

# Head: 1x1 conv down to a single channel, flatten, dropout, 4-way softmax
x = Flatten()(Conv2D(1, kernel_size=1, strides=(1, 1), kernel_regularizer=l2(1e-5), padding='same')(x))
x = layers.Dropout(0.3)(x)
outputs = Dense(4, activation='softmax', kernel_initializer='he_normal')(x)
model = Model(inputs, outputs)
model.summary()

I have also experimented with different numbers of filters while adding and removing layers. Is this problem caused by overfitting? I would appreciate suggestions on what I can improve to get smoother curves and better predictions.
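
For reference, below is a minimal sketch of how the sweeps described above (the l2 factors 1e-3/1e-4/1e-5 and different filter counts or block depths) could be driven from one place; the residual_block and build_model helpers and their default values are illustrative assumptions, not part of the original code.

from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.regularizers import l2

def residual_block(x, filters, weight_decay, downsample=False):
    """Three 3x3 convs in the same style as the model above: the output of the
    first conv's BN+ReLU is re-added just before the block's final activation."""
    strides = (2, 2) if downsample else (1, 1)
    y = layers.Conv2D(filters, 3, strides=strides, padding='same',
                      kernel_regularizer=l2(weight_decay))(x)
    y = layers.BatchNormalization()(y)
    shortcut = layers.Activation('relu')(y)
    y = layers.Conv2D(filters, 3, padding='same', kernel_regularizer=l2(weight_decay))(shortcut)
    y = layers.BatchNormalization()(y)
    y = layers.Activation('relu')(y)
    y = layers.Conv2D(filters, 3, padding='same', kernel_regularizer=l2(weight_decay))(y)
    y = layers.BatchNormalization()(y)
    y = layers.add([y, shortcut])  # ==> Shortcut
    return layers.Activation('relu')(y)

def build_model(weight_decay=1e-5, base_filters=32, num_blocks=3, num_classes=4):
    """Stack residual blocks, doubling the filters and downsampling after the first block."""
    inputs = keras.Input(shape=(512, 512, 3), name="img")
    x = residual_block(inputs, base_filters, weight_decay)
    for i in range(1, num_blocks):
        x = residual_block(x, base_filters * 2 ** i, weight_decay, downsample=True)
    x = layers.Conv2D(1, 1, padding='same', kernel_regularizer=l2(weight_decay))(x)
    x = layers.Flatten()(x)
    x = layers.Dropout(0.3)(x)
    outputs = layers.Dense(num_classes, activation='softmax', kernel_initializer='he_normal')(x)
    return keras.Model(inputs, outputs)

# Sweep the regularization strengths mentioned in the question.
for wd in (1e-3, 1e-4, 1e-5):
    model = build_model(weight_decay=wd)
    model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
    # model.fit(train_ds, validation_data=val_ds, ...)  # plug in your own datasets here

Because every run goes through the same code path, the resulting accuracy and loss curves can be compared directly across regularization strengths and filter counts.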


1 Answer