
I would appreciate comments/help on a strategy I am applying in one of my analyses. In short, my case is as follows:

1) My data are of biological origin, collected over a period of 120 s from a
 subject receiving, each second (one trial), one of three possible stimuli
 (response labels 1 to 3) in random order. The sampling frequency is 256 Hz
 and there are 61 different sensors (input variables). So my dataset has
 120x256 rows and 62 columns (1 response label + 61 input variables);
2) My goal is to identify whether there is an underlying pattern for each
 stimulus. For that, I would like to use deep learning neural networks to test
 my hypothesis, but not in the conventional way (predicting the stimulus from
 a single observation/row).
3) My approach is to shuffle the whole dataset by rows (avoiding any time
 bias), divide it into training and validation sets (50/50), and then run the
 deep learning algorithm. The split does not segregate the trial events (120),
 so both the training and the validation set should contain rows from the same
 trials (but never the same row). If there is a dominant pattern per stimulus,
 the validation confusion matrix error should be low. If there is a dominant
 pattern per trial, the validation confusion matrix error should be high. So
 the validation confusion matrix error is my indicator of the presence of a
 hidden pattern per stimulus (see the sketch below);

I would appreciate any feedback you can give me on the validity of my reasoning. I would like to stress that I am not trying to predict the stimulus from a single row of input.

Thank you.


1 Answer