KBQA Learning Notes: NER Training and Evaluation

2021-12-16 16:02:53


Contents

1. Prerequisites

2. Overall training flow

3. The evaluation function

4. Logging results and saving the model


1. Prerequisites

We have already prepared the training and evaluation data: the raw text was converted to ids, padded, and packed into feature objects. After instantiation, the features are read back through attributes such as feature.input_ids and feature.token_type_ids, collected into lists, converted to tensors, and finally the four feature tensors are wrapped in a TensorDataset. This dataset is what the model is trained on.
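As a minimal sketch of that pipeline (the attribute names on the feature objects are assumptions, inferred from how the batches are unpacked in trains() below):

import torch
from torch.utils.data import TensorDataset

# one tensor per feature, stacked across all examples
all_input_ids = torch.tensor([f.input_ids for f in features], dtype=torch.long)
all_attention_mask = torch.tensor([f.attention_mask for f in features], dtype=torch.long)
all_token_type_ids = torch.tensor([f.token_type_ids for f in features], dtype=torch.long)
all_tags = torch.tensor([f.tags for f in features], dtype=torch.long)

# indexing the dataset yields a 4-tuple in exactly this order,
# which is why the training loop unpacks batch[0] .. batch[3]
train_dataset = TensorDataset(all_input_ids, all_attention_mask,
                              all_token_type_ids, all_tags)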

2. Overall training flow

Fetch batches with a DataLoader, choose the optimizer, and set up the warmup schedule and the number of gradient-accumulation steps.

Training: pack the batch tensors into a dict and pass it to the model; once the number of accumulated steps reaches the configured value, the optimizer and the scheduler each take a step and the loss is logged.

Evaluate at the specified steps. The code is as follows:

import logging
import os

import numpy as np
import torch
from sklearn.metrics import classification_report
from torch.utils.data import DataLoader, RandomSampler, SequentialSampler
from tqdm import tqdm, trange
# AdamW and WarmupLinearSchedule are from the older pytorch_transformers
# package; in current transformers the scheduler equivalent is
# get_linear_schedule_with_warmup
from pytorch_transformers import AdamW, WarmupLinearSchedule

logger = logging.getLogger(__name__)


def trains(args, train_dataset, eval_dataset, model):

    train_sampler = RandomSampler(train_dataset)
    train_dataloader = DataLoader(train_dataset, sampler=train_sampler, batch_size=args.train_batch_size)

    t_total = len(train_dataloader) // args.gradient_accumulation_steps * args.num_train_epochs

    no_decay = ['bias', 'LayerNorm.weight','transitions']
    optimizer_grouped_parameters = [
        {'params': [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)],
         'weight_decay': args.weight_decay},
        {'params': [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)], 'weight_decay': 0.0}
    ]
    optimizer = AdamW(optimizer_grouped_parameters,lr=args.learning_rate,eps=args.adam_epsilon)

    scheduler = WarmupLinearSchedule(optimizer, warmup_steps=args.warmup_steps, t_total=t_total)
    logger.info("***** Running training *****")
    logger.info("  Num examples = %d", len(train_dataset))
    logger.info("  Num Epochs = %d", args.num_train_epochs)
    logger.info("  Gradient Accumulation steps = %d", args.gradient_accumulation_steps)
    logger.info("  Total optimization steps = %d", t_total)

    global_step = 0
    tr_loss, logging_loss = 0.0, 0.0
    model.zero_grad()
    train_iterator = trange(int(args.num_train_epochs), desc="Epoch")
    set_seed(args)
    best_f1 = 0.
    for epoch in train_iterator:
        epoch_iterator = tqdm(train_dataloader, desc="Iteration")
        for step,batch in enumerate(epoch_iterator):
            batch = tuple(t.to(args.device) for t in batch)
            inputs = {'input_ids':batch[0],
                      'attention_mask':batch[1],
                      'token_type_ids':batch[2],
                      'tags':batch[3],
                      'decode':True
            }
            outputs = model(**inputs)
            loss,pre_tag = outputs[0], outputs[1]

            if args.gradient_accumulation_steps > 1:
                loss = loss / args.gradient_accumulation_steps
            loss.backward()
            torch.nn.utils.clip_grad_norm_(model.parameters(),args.max_grad_norm)
            logging_loss += loss.item()
            tr_loss += loss.item()
            if (step + 1) % args.gradient_accumulation_steps == 0:
                optimizer.step()
                scheduler.step()
                model.zero_grad()
                global_step += 1
                logger.info("EPOCH = [%d/%d] global_step = %d   loss = %f",_+1,args.num_train_epochs,global_step,
                            logging_loss)
                logging_loss = 0.0

                # if (global_step < 100 and global_step % 10 == 0) or (global_step % 50 == 0):
                # evaluate once every 100 global steps
                if global_step % 100 == 0:
                    best_f1 = evaluate_and_save_model(args, model, eval_dataset, epoch, global_step, best_f1)

    # evaluate once more after the last epoch
    best_f1 = evaluate_and_save_model(args, model, eval_dataset, epoch, global_step, best_f1)
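The set_seed helper called at the top of trains() is not shown in the post; a minimal sketch of the usual implementation, assuming args carries a seed field:

import random

def set_seed(args):
    # seed every RNG involved so runs are reproducible
    random.seed(args.seed)
    np.random.seed(args.seed)
    torch.manual_seed(args.seed)
    if torch.cuda.is_available():
        torch.cuda.manual_seed_all(args.seed)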

3. The evaluation function

While evaluating, we also save the best model and its weights.

Take batches from the evaluation set, switch the model to eval mode, and feed it the inputs to obtain predictions.

Flatten and concatenate all predictions, then pass them together with the gold labels to scikit-learn's classification_report to get precision, recall and F1.

Finally, switch the model back to train mode.

def evaluate(args, model, eval_dataset):

    eval_output_dirs = args.output_dir
    if not os.path.exists(eval_output_dirs):
        os.makedirs(eval_output_dirs)
    eval_sampler = SequentialSampler(eval_dataset)
    eval_dataloader = DataLoader(eval_dataset, sampler=eval_sampler,
                                 batch_size=args.eval_batch_size)

    logger.info("***** Running evaluation *****")
    logger.info("  Num examples = %d", len(eval_dataset))
    logger.info("  Batch size = %d", args.eval_batch_size)


    loss = []
    real_token_label = []
    pred_token_label = []
    model.eval()
    for batch in tqdm(eval_dataloader, desc="Evaluating"):
        batch = tuple(t.to(args.device) for t in batch)
        with torch.no_grad():
            inputs = {'input_ids':batch[0],
                      'attention_mask':batch[1],
                      'token_type_ids':batch[2],
                      'tags':batch[3],
                      'decode':True,
                      'reduction':'none'
            }
            outputs = model(**inputs)
            # temp_eval_loss shape: (batch_size,)
            # temp_pred: list[list[int]], ragged lengths (one decoded tag sequence per sentence)
            temp_eval_loss, temp_pred = outputs[0], outputs[1]

            loss.extend(temp_eval_loss.tolist())
            pred_token_label.extend(temp_pred)
            real_token_label.extend(statistical_real_sentences(batch[3],batch[1],temp_pred))


    loss = np.array(loss).mean()
    real_token_label = np.array(flatten(real_token_label))
    pred_token_label = np.array(flatten(pred_token_label))
    assert real_token_label.shape == pred_token_label.shape
    ret = classification_report(y_true = real_token_label,y_pred = pred_token_label,output_dict = True)
    model.train()
    return ret
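The helpers flatten and statistical_real_sentences are not defined in the post. A plausible sketch, assuming statistical_real_sentences trims each gold tag sequence with the attention mask so that it lines up token-for-token with the CRF-decoded prediction:

def flatten(list_of_lists):
    # concatenate a list of tag sequences into one flat list
    return [tag for seq in list_of_lists for tag in seq]

def statistical_real_sentences(tags, attention_mask, preds):
    # keep only the non-padding gold tags of each sentence, so every
    # gold sequence has the same length as its decoded prediction
    real = []
    for tag_seq, mask, pred in zip(tags.tolist(), attention_mask.tolist(), preds):
        length = sum(mask)              # number of real (non-pad) tokens
        assert length == len(pred)      # CRF decode yields one tag per real token
        real.append(tag_seq[:length])
    return real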

4. Logging results and saving the model

The report contains three per-class entries because there are three labels, defined at the top of the script as ["O", "B-LOC", "I-LOC"] (ids 0, 1 and 2); only the last two, the entity classes, are needed.
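A toy example (invented labels, not the real data) of why the entity classes are read back with the string keys '1' and '2':

from sklearn.metrics import classification_report

ret = classification_report(y_true=[0, 1, 2, 2], y_pred=[0, 1, 2, 1], output_dict=True)
# per-class keys are the label values as strings, e.g. '0', '1', '2',
# plus aggregate entries such as 'macro avg' and 'weighted avg'
# (the exact set of extra keys varies by scikit-learn version)
print(sorted(ret.keys()))
print(ret['1']['precision'], ret['1']['recall'], ret['1']['f1-score'], ret['1']['support'])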

def evaluate_and_save_model(args,model,eval_dataset,epoch,global_step,best_f1):
    ret = evaluate(args, model, eval_dataset)

    precision_b = ret['1']['precision']
    recall_b = ret['1']['recall']
    f1_b = ret['1']['f1-score']
    support_b = ret['1']['support']

    precision_i = ret['2']['precision']
    recall_i = ret['2']['recall']
    f1_i = ret['2']['f1-score']
    support_i = ret['2']['support']

    weight_b = support_b / (support_b + support_i)
    weight_i = 1 - weight_b

    avg_precision = precision_b * weight_b + precision_i * weight_i
    avg_recall = recall_b * weight_b + recall_i * weight_i
    avg_f1 = f1_b * weight_b + f1_i * weight_i

    all_avg_precision = ret['macro avg']['precision']
    all_avg_recall = ret['macro avg']['recall']
    all_avg_f1 = ret['macro avg']['f1-score']

    logger.info("Evaluating EPOCH = [%d/%d] global_step = %d", epoch+1,args.num_train_epochs,global_step)
    logger.info("B-LOC precision = %f recall = %f  f1 = %f support = %d", precision_b, recall_b, f1_b,
                support_b)
    logger.info("I-LOC precision = %f recall = %f  f1 = %f support = %d", precision_i, recall_i, f1_i,
                support_i)

    logger.info("attention AVG:precision = %f recall = %f  f1 = %f ", avg_precision, avg_recall,
                avg_f1)
    logger.info("all AVG:precision = %f recall = %f  f1 = %f ", all_avg_precision, all_avg_recall,
                all_avg_f1)

    if avg_f1 > best_f1:
        best_f1 = avg_f1
        torch.save(model.state_dict(), os.path.join(args.output_dir, "best_ner.bin"))
        logger.info("save the best model %s, avg_f1 = %f", os.path.join(args.output_dir, "best_ner.bin"),
                    best_f1)
    # return best_f1 so the caller can keep track of the running best
    return best_f1
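Finally, a minimal sketch of how the pieces above might be wired together. The field names on args mirror the args.* attributes the functions reference; the concrete values are placeholders, not from the original post, and model / train_dataset / eval_dataset are assumed to have been built earlier in this series:

from argparse import Namespace

args = Namespace(
    train_batch_size=32, eval_batch_size=64,
    gradient_accumulation_steps=1, num_train_epochs=3,
    weight_decay=0.01, learning_rate=5e-5, adam_epsilon=1e-8,
    warmup_steps=0, max_grad_norm=1.0, seed=42, output_dir="./output",
    device=torch.device("cuda" if torch.cuda.is_available() else "cpu"),
)

# model: the BERT+CRF NER model from the earlier posts (placeholder here)
model.to(args.device)
trains(args, train_dataset, eval_dataset, model)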

Source: https://blog.csdn.net/Swayzzu/article/details/121972164
