
I have two questions.

Q1: I was wondering what the best way is to feed training data to the U-Net:

  1. Feed one patient at a time. Each volume is 160x3x192x192.
  2. Feed random slices drawn from k patients (a rough sketch of what I mean follows this list).
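
To make the second option concrete, here is a rough sketch of the kind of dataset I have in mind. The names RandomSliceDataset, volumes and masks are made up for illustration only; my real MSdataset class (used in the code further down) works on whole volumes instead.

from torch.utils.data import Dataset

class RandomSliceDataset(Dataset):
    """Sketch of option 2: serve individual 2D slices pooled from several patients.

    `volumes` and `masks` are assumed to be lists of tensors shaped
    (D, C, H, W) and (D, H, W) per patient; with shuffle=True in the
    DataLoader, each batch then mixes slices from different patients.
    """
    def __init__(self, volumes, masks):
        # Flat index of (patient, slice) pairs across all patients
        self.index = [(p, s) for p, vol in enumerate(volumes) for s in range(vol.shape[0])]
        self.volumes = volumes
        self.masks = masks

    def __len__(self):
        return len(self.index)

    def __getitem__(self, i):
        p, s = self.index[i]
        return {"volume": self.volumes[p][s], "mask": self.masks[p][s]}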

Q2: I went with the first option for now, but I am not getting good results. The Dice score oscillates: for example, the Dice loss starts at 0.99, drops to 0.8, spikes back up to 8, and then the pattern repeats. Does anyone have an idea why this happens?
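
For reference, my DiceLoss class is not included in the code below, so treat the following only as an illustration of the kind of loss I mean: a standard soft Dice loss (1 minus the Dice coefficient), which stays in [0, 1] per sub-batch.

import torch
import torch.nn as nn

class SoftDiceLossSketch(nn.Module):
    """Illustrative soft Dice loss: 1 - Dice coefficient, values in [0, 1]."""
    def __init__(self, smooth=1.0):
        super().__init__()
        self.smooth = smooth

    def forward(self, pred, target):
        # pred is assumed to already contain probabilities in [0, 1]
        pred = pred.reshape(pred.shape[0], -1)
        target = target.reshape(target.shape[0], -1).float()
        intersection = (pred * target).sum(dim=1)
        dice = (2.0 * intersection + self.smooth) / (
            pred.sum(dim=1) + target.sum(dim=1) + self.smooth)
        return 1.0 - dice.mean()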

Code:

import torch
import torch.optim as optim
from torch.utils.data import DataLoader
from tqdm import tqdm

# MSdataset, Get_mean_std, normalize, add_channel, ToTensor, UNet and DiceLoss
# are my own project modules and are not shown here.


class main:
    def __init__(self, args):
        self.args = args
        self.train_loader = None
        self.in_channel = None
        self.out_channel = None

    def _config_dataloader(self):
        print("Starting configuration of the dataset")
        print("Collecting validation and training set")

        validation_mode = "val/"
        training_mode = "train/"

        # Per-modality mean/std computed over k training patients
        collect = Get_mean_std(self.args.path + training_mode)
        mean, std = collect(self.args.k)

        mean_flair = mean["FLAIR"]
        mean_t1 = mean["T1"]

        std_flair = std["FLAIR"]
        std_t1 = std["T1"]

        train_dataset = MSdataset(self.args.path + training_mode, composed_transforms=[
                            normalize(z_norm=True, mean=mean_flair, std=std_flair),
                            normalize(z_norm=True, mean=mean_t1, std=std_t1),
                            add_channel(depth=self.args.depth),
                            ToTensor()]
                            )

        validation_dataset = MSdataset(self.args.path + validation_mode, composed_transforms=[
                            normalize(z_norm=True, mean=mean_flair, std=std_flair),
                            normalize(z_norm=True, mean=mean_t1, std=std_t1),
                            add_channel(depth=self.args.depth),
                            ToTensor()]
                            )

        train_loader = DataLoader(train_dataset,
                                  batch_size=self.args.batch_size,
                                  shuffle=self.args.shuffle)

        validation_loader = DataLoader(validation_dataset,
                                       batch_size=self.args.batch_size - 1,
                                       shuffle=self.args.shuffle)

        print("Data collected. Returning dataloaders for training and validation set")
        return train_loader, validation_loader

    def __call__(self, is_train=False):
        train_loader, validation_loader = self._config_dataloader()

        complete_data = {"train": train_loader, "validation": validation_loader}

        device = torch.device("cpu" if not torch.cuda.is_available() else self.args.device)

        unet = UNet(in_channels=3, out_channels=1, init_features=32)
        unet.to(device)

        optimizer = optim.Adam(unet.parameters(), lr=self.args.lr)
        dsc_loss = DiceLoss()

        loss_train = []
        loss_valid = []

        print("Starting training process. Please wait..")
        sub_batch_size = 14
        for current_epoch in tqdm(range(self.args.epoch), total=self.args.epoch):

            for phase in ["train", "validation"]:

                if phase == "train":
                    unet.train()

                if phase == "validation":
                    unet.eval()

                for i, data_set_batch in enumerate(complete_data[phase]):
                    data_dict = data_set_batch
                    X, mask = data_dict["volume"], data_dict["mask"]
                    X, mask = (X.to(device)).float(), mask.to(device)
                    B, D, C, H, W = X.shape

                    # Fold the depth dimension into the batch dimension,
                    # turning one 3D volume into a stack of 2D slices.
                    mask = mask.reshape((B * D, H, W))
                    X = X.reshape((B * D, C, H, W))

                    loss_depths = 0  # reset the accumulated depth loss
                    with torch.set_grad_enabled(is_train):

                        # Slide a window of sub_batch_size slices through the volume
                        for sub_batches in tqdm(range(0, X.shape[0] - sub_batch_size)):

                            predicted = unet(X[sub_batches: sub_batches + sub_batch_size, :, :, :])
                            loss = dsc_loss(predicted.squeeze(1),
                                            mask[sub_batches: sub_batches + sub_batch_size, :, :])

                            if phase == "train":
                                # Accumulate the loss over all sub-batches of this volume
                                loss_depths = loss_depths + loss
                            if phase == "validation":
                                continue

                    if phase == "train":
                        # One optimizer step per volume, using the summed sub-batch losses
                        loss_train.append(loss_depths)
                        loss_depths.backward()
                        optimizer.step()
                        optimizer.zero_grad()

        print("Training and validation is done. Exiting program and returning loss")
        return loss_train

Note that I have not fully implemented the validation part; I just wanted to see how the network learns first. Thanks!

