My problem is that I want to share the weights of lstm_decoder in my code (so, essentially, use just one LSTM). I know there are a few resources online about this, but I still cannot understand why the following does not share the weights:

initial_input = tf.unstack(tf.zeros(shape=(1,1,hidden_size2)))
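# NOTE: logits is assumed to be initialized before this loop (e.g. with a
# dummy first row, which is stripped at the end by logits = logits[1:]).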

for index in range(window_size):
    with tf.variable_scope('lstm_cell_decoder', reuse = index > 0):
        rnn_decoder_cell = tf.nn.rnn_cell.LSTMCell(hidden_size, state_is_tuple = True)

        output_decoder, state_decoder = tf.nn.static_rnn(rnn_decoder_cell, initial_input, initial_state=last_encoder_state, dtype=tf.float32)

        # Compute the score for source output vector
        scores = tf.matmul(concat_lstm_outputs, tf.reshape(output_decoder[-1],(hidden_size,1)))
        attention_coef = tf.nn.softmax(scores)
        context_vector = tf.reduce_sum(tf.multiply(concat_lstm_outputs, tf.reshape(attention_coef, (window_size, 1))),0)
        context_vector = tf.reshape(context_vector, (1,hidden_size))

        # compute the tilda hidden state \tilde{h}_t=tanh(W[c_t, h_t]+b_t)
        concat_context = tf.concat([context_vector, output_decoder[-1]], axis = 1)
        W_tilde = tf.Variable(tf.random_normal(shape = [hidden_size*2, hidden_size2], stddev = 0.1), name = "weights_tilde", trainable = True)
        b_tilde = tf.Variable(tf.zeros([1, hidden_size2]), name="bias_tilde", trainable = True)
        hidden_tilde = tf.nn.tanh(tf.matmul(concat_context, W_tilde)+b_tilde) # hidden_tilde is [1*64]

        # update for next time step
        initial_input = tf.unstack(tf.reshape(hidden_tilde, (1,1,hidden_size2)))
        last_encoder_state = state_decoder
        print(initial_input, last_encoder_state)

        # predict the target
        W_target = tf.Variable(tf.random_normal(shape = [hidden_size2, 1], stddev = 0.1), name = "weights_target", trainable = True)
        print(W_target)
        logit = tf.matmul(hidden_tilde, W_target)
        logits = tf.concat([logits, logit], axis = 0)

logits = logits[1:]

I would like to use the same LSTM cell and the same W_target in every iteration of the loop. However, with window_size = 2, the print(initial_input, last_encoder_state) and print(W_target) calls inside the for loop produce the following output:

[<tf.Tensor 'lstm_cell_decoder/unstack:0' shape=(1, 64) dtype=float32>]
LSTMStateTuple(c=<tf.Tensor 'lstm_cell_decoder/rnn/rnn/lstm_cell/lstm_cell/add_1:0' shape=(1, 64) dtype=float32>, h=<tf.Tensor 'lstm_cell_decoder/rnn/rnn/lstm_cell/lstm_cell/mul_2:0' shape=(1, 64) dtype=float32>)
<tf.Variable 'lstm_cell_decoder/weights_target:0' shape=(64, 1) dtype=float32_ref>
[<tf.Tensor 'lstm_cell_decoder_1/unstack:0' shape=(1, 64) dtype=float32>]
LSTMStateTuple(c=<tf.Tensor 'lstm_cell_decoder_1/rnn/rnn/lstm_cell/lstm_cell/add_1:0' shape=(1, 64) dtype=float32>, h=<tf.Tensor 'lstm_cell_decoder_1/rnn/rnn/lstm_cell/lstm_cell/mul_2:0' shape=(1, 64) dtype=float32>)
<tf.Variable 'lstm_cell_decoder_1/weights_target:0' shape=(64, 1) dtype=float32_ref>

Update: after Maxim's comment, I tried the following syntax:

for index in range(window_size):
  with tf.variable_scope('lstm_cell_decoder', reuse = index > 0):
     rnn_decoder_cell = tf.nn.rnn_cell.LSTMCell(hidden_size,reuse=index > 0)
     output_decoder, state_decoder = tf.nn.static_rnn(rnn_decoder_cell, ...)
     W_target = tf.get_variable(...)

It now shares the variable W_target properly, but the LSTM cell/weights are still not shared:

[<tf.Tensor 'lstm_cell_decoder/rnn/rnn/lstm_cell/lstm_cell/mul_2:0' shape=(1, 64) dtype=float32>]
LSTMStateTuple(c=<tf.Tensor 'lstm_cell_decoder/rnn/rnn/lstm_cell/lstm_cell/add_1:0' shape=(1, 64) dtype=float32>, h=<tf.Tensor 'lstm_cell_decoder/rnn/rnn/lstm_cell/lstm_cell/mul_2:0' shape=(1, 64) dtype=float32>)
<tf.Variable 'lstm_cell_decoder/weights_target:0' shape=(64, 1) dtype=float32_ref>

[<tf.Tensor 'lstm_cell_decoder_1/rnn/rnn/lstm_cell/lstm_cell/mul_2:0' shape=(1, 64) dtype=float32>]
LSTMStateTuple(c=<tf.Tensor 'lstm_cell_decoder_1/rnn/rnn/lstm_cell/lstm_cell/add_1:0' shape=(1, 64) dtype=float32>, h=<tf.Tensor 'lstm_cell_decoder_1/rnn/rnn/lstm_cell/lstm_cell/mul_2:0' shape=(1, 64) dtype=float32>)
<tf.Variable 'lstm_cell_decoder/weights_target:0' shape=(64, 1) dtype=float32_ref>

1 Answer

First of all, creating variables with tf.Variable does not make them reusable. This is one of the key differences between tf.Variable and tf.get_variable. See this example:

import tensorflow as tf

with tf.variable_scope('foo', reuse=tf.AUTO_REUSE):
  for i in range(3):
    x = tf.Variable(0.0, name='x')
    y = tf.get_variable(name='y', shape=())

If you inspect the variables that were created, you will see:

<tf.Variable 'foo/x:0' shape=() dtype=float32_ref>
<tf.Variable 'foo/y:0' shape=() dtype=float32_ref>
<tf.Variable 'foo/x_1:0' shape=() dtype=float32_ref>
<tf.Variable 'foo/x_2:0' shape=() dtype=float32_ref>
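
For reference, a listing like the one above can be produced with a minimal sketch like the following (tf.global_variables() returns all variables created in the default graph, here the ones from the loop above):

# Print every variable created so far in the default graph:
# y was reused across iterations, while each tf.Variable call made a new x.
for v in tf.global_variables():
    print(v)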

Secondly, RNN cells provide their own reuse mechanism, e.g. the reuse constructor argument of tf.nn.rnn_cell.LSTMCell:

X = tf.placeholder(tf.float32, shape=[None, 10, 3])  # example input, assumed: (batch, time, features)

reuse = tf.AUTO_REUSE  # Try also True and False
cell1 = tf.nn.rnn_cell.LSTMCell(3, reuse=reuse)
cell2 = tf.nn.rnn_cell.LSTMCell(3, reuse=reuse)
outputs1, states1 = tf.nn.dynamic_rnn(cell1, X, dtype=tf.float32)
outputs2, states2 = tf.nn.dynamic_rnn(cell2, X, dtype=tf.float32)
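
Putting both points together, a minimal sketch of the question's decoder loop with shared weights could look like this. It is only one way to do it, not a definitive implementation: the cell is created once outside the loop, and W_target is created with tf.get_variable so that reuse=index > 0 on the enclosing scope makes every iteration after the first reuse it. hidden_size, hidden_size2, window_size, initial_input and last_encoder_state are assumed to be defined as in the question.

# Create the cell once, outside the loop, so every step uses the same
# object and therefore the same kernel/bias variables.
rnn_decoder_cell = tf.nn.rnn_cell.LSTMCell(hidden_size, state_is_tuple=True)

for index in range(window_size):
    with tf.variable_scope('lstm_cell_decoder', reuse=index > 0):
        output_decoder, state_decoder = tf.nn.static_rnn(
            rnn_decoder_cell, initial_input,
            initial_state=last_encoder_state, dtype=tf.float32)

        # get_variable creates 'weights_target' on the first iteration and
        # reuses it on later ones (because of reuse=index > 0 above).
        W_target = tf.get_variable(
            'weights_target', shape=[hidden_size2, 1],
            initializer=tf.random_normal_initializer(stddev=0.1))

        # ... rest of the loop body as in the question, with W_tilde and
        # b_tilde also switched to tf.get_variable ...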
answered 2018-02-07T15:32:54.963