
Nonlinear neighborhood filters (NNFs) in the MR-FBP image reconstruction algorithm (developed with astra-toolbox: improved penalty terms based on bilateral filtering and non-local means (NLM), with MAE and MAR analysis)

2020-12-01 22:32:03



I wasted my first two years of undergrad. In my third year I finally knuckled down, but like a headless fly I did nothing except attend classes. With job hunting looming in my fourth year I panicked; I was weak enough that I could only land offers of 3-4k RMB a month, so in the end I stepped onto the grad-school path. Once there I discovered it was all academic research, even though finding a job still comes down to programming. To graduate I had no choice but to grind out a paper, and now that it is finally published I am recording my work here for anyone who needs it.

 

My master's research area is image reconstruction. The novelty in my paper builds on this excellent paper:

Improving Filtered Backprojection Reconstruction by Data-Dependent Filtering 

Daniël M. Pelt and Kees Joost Batenburg

The author's reference implementation is linked here:

dmpelt/pymrfbp: Python implementation of the MR-FBP tomographic reconstruction method (github.com)

The toolbox the author relies on:

astra-toolbox/astra-toolbox: ASTRA Tomography Toolbox (github.com)

Implementation language: Python (3.6.8; I have not tried other versions, this is the one the author's resources use)

Development environment: PyCharm (strongly recommended, it is very convenient and lets you configure virtual environments of any version)

Operating system: Ubuntu (Linux). Do not develop this on Windows, it does not work! Seeing that the ASTRA toolbox can be installed on Windows, I assumed development would work there too; it does not, and that mistake cost me a long time.

What you need to learn: Python syntax, how to use PyCharm, configuring environments with conda, and how to install Python and PyCharm on Ubuntu (Linux).
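As a rough sketch, the environment can be assembled with conda. The astra-toolbox conda channel and package name are the official ones; the environment name mrfbp and the pip package list are illustrative, taken from what the two scripts in this post import:

```shell
# create an isolated environment with the Python version the author used
conda create -n mrfbp python=3.6.8
conda activate mrfbp

# ASTRA toolbox from its official conda channel
conda install -c astra-toolbox astra-toolbox

# packages imported by the scripts in this post
pip install numpy scipy matplotlib six tifffile PyWavelets
```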

You can install a virtual machine so you do not have to replace your computer's main system; I used VMware Workstation. Study the ASTRA toolbox carefully: its site hosts the manual, and you should understand every function and data type. My PPT on using ASTRA can serve as a lighter supplementary reference; it is less detailed than the manual, so browse it for inspiration. The PPT and the reference papers behind my contribution are linked here:

https://pan.baidu.com/s/1OTd0ppZemd0vmj9Awf_4Ow

Extraction code: 6666

You can also dig through my favorites folder, where material on the topics above is organized.

For my own paper, see: 基于非线性邻域滤波的MR-FBP重建算法改进 (Improvement of the MR-FBP reconstruction algorithm based on nonlinear neighborhood filtering), 国外电子测量技术 (Foreign Electronic Measurement Technology), 2021.
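The two quality metrics analyzed below are the mean absolute error (MAE) of the image, normalized by the dynamic range of the true image, and the mean absolute residual (MAR) between a reprojected reconstruction and the measured projection data. A minimal numpy sketch of both formulas (the arrays here are toy data, not real reconstructions):

```python
import numpy as np

def mae(img, rec):
    """Mean absolute error, normalized by the dynamic range of the true image."""
    return np.abs(img - rec).sum() / img.size / (img.max() - img.min())

def mar(rec_proj, proj):
    """Mean absolute residual between a reprojection and the measured sinogram."""
    return np.abs(rec_proj - proj.flatten()).sum() / proj.size

img = np.array([[0.0, 1.0], [1.0, 0.0]])   # toy ground truth
rec = np.array([[0.1, 0.9], [1.0, 0.2]])   # toy reconstruction
print(mae(img, rec))  # 0.1
```

These are the same per-entry formulas that MAE_analyse and MAR_analyse apply over whole lists of reconstructions.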

data_analyse_function.py

(Replace example.py in the author's original code with data_analyse_function.py.)

import numpy as np
import astra
import scipy.io
import os
import pylab as plb
import time

plb.rcParams['font.sans-serif'] = ['SimHei']
plb.rcParams['axes.unicode_minus'] = False

"""
如果在该模块调试,导入必要的模块和代码,如果不在这个模块调试,这个模块被别的模块导入,必要的
模块和代码需要在导入者模块中给出
"""
if __name__ == '__main__':
    # Register MR-FBP plugin with ASTRA
    import mrfbp

    astra.plugin.register(mrfbp.plugin)


def proj_geom_list_for_reconstruction(proj_geom_list, img, astra_name, noisy_b=False,
                                      noisy_intense=10000, b_proj_list=False,
                                      b_rec_proj=False, b_count_time=False, **extraOptions):
    """
    这个函数将依据传入的不同的 proj_geom参数 形成的 列表proj_geom_list 执行固定的重建算法(可能是几个,也可能一个)
    
    ''param  proj_geom_list'': [type: list], 用来存放不同 proj_geom 的 proj_geom列表
    
    ''param  img'': [type: array], 将要被用来重建的图像,用它来仿真投影
    
    ''param  astra_name'': [type: str], 你的plugin的名字
    
    ''param  noisy_b'': [type: bool], 是否添加噪声,逻辑真代表添加噪声
    
    ''param  noisy_intense'': [type: float], 噪声强度,具体参考MR-FBP论文定义
    
    ''param  b_proj_list'': [type: bool], 是否返回仿真的投影
    
    ''param  b_rec_proj'': [type: bool], 是否返回仿真的投影再重建的结果   
    
    ''param  b_count_time'': [type: bool], 是否对每种类型的图像重建进行耗时计算并返回他们
    
    ''param  **extraOptions'': [type: key = value]
              key: [type: str], is extraOptions,
              value: [type: no certain, be bound with extraOptions]
              主要是惩戒项及其相应参数取值

    ''返回值  rec_list'': [type: list], 为不同的 proj_geom 保存不同惩戒项方法的重建结果, 是一个2维列表,1维保
    存用特定 的 proj_geom 的不同惩戒项方法的重建结果,一维大小跟惩戒项方法有关,大小为 惩戒项方法数目+1(没惩戒项),
    二维大小和 proj_geom 数目有关。
  
    ''返回值  projection_list'': [type: list], 不同 proj_geom 产生的投影,是一个2维列表,1维数目等于proj_geom_list
     的大小,二维大小等于投影数据的多少,具体跟proj_geom_list中的每个proj_geom有关,二维数据是array类型。

    ''返回值  rec_proj_list'': [type: list], 保存重建再投影的结果,是一个二维列表,1维保存用特定的 proj_geom 的
    不同惩戒项方法的重建再投影,一维大小跟惩戒项方法有关,大小为 惩戒项方法数目+1(没惩戒项),二维大小和 proj_geom 数目有关 

     ''return  time_cost_array'': [type: array], 保存各种惩戒项方法的耗时,是一个二维列表,1维保存用特定的 proj_geom 的
    不同惩戒项方法的耗时,一维大小跟惩戒项方法有关,大小为 惩戒项方法数目+1(没惩戒项),二维大小和 proj_geom 数目有关
    """
    reconstruction_list = []
    projection_list = []
    rec_proj_list = []
    time_cost_array = np.zeros([len(proj_geom_list), len(extraOptions)+1])
    """使用 proj_geom_list 的每个 proj_geom"""
    for n, p_g in enumerate(proj_geom_list):
        arg_ = tuple(p_g)
        vol_geom = astra.create_vol_geom(img.shape[0], img.shape[1])
        proj_geom = astra.create_proj_geom(*arg_)
        pid = astra.create_projector('linear', proj_geom, vol_geom)
        p = astra.OpTomo(pid)

        # simulate the projection
        projection = (p * img).reshape(p.sshape)

        # optionally add noise to the projections
        if noisy_b:
            projection = astra.add_noise_to_sino(projection, noisy_intense)

        # optionally keep the projections
        if b_proj_list:
            projection_list.append(projection)

        """ below: the different penalty terms """

        result = []
        """ first, the penalty-free reconstruction """
        if b_count_time:
            t1 = time.time()

        rec = p.reconstruct(astra_name, projection)

        if b_count_time:
            t2 = time.time()
            time_cost_array[n, 0] = t2 - t1

        result.append(rec)
        """ 有惩戒项的 """
        m = 1

        for key, value in extraOptions.items():
            extraOption = {key: value}  # 诺列每个惩戒项

            if b_count_time:  # time the reconstruction if requested
                t1 = time.time()

            rec = p.reconstruct(astra_name, projection, extraOptions=extraOption)  # reconstruct with this penalty term

            if b_count_time:
                t2 = time.time()
                time_cost_array[n, m] = t2 - t1  # store the elapsed time

            result.append(rec)  # collect this penalty term's reconstruction
            m = m + 1

        reconstruction_list.append(result)  # collect the per-penalty results for this proj_geom
        astra.log.info('run {number} finished!'.format(number=n+1))
        """ above: the different penalty terms """
        # optionally reproject each reconstruction
        if b_rec_proj:
            r = []
            """ reproject the result of every penalty term """
            for rec in result:
                s = p * rec  # 1-D projection data
                r.append(s)  # collect this penalty term's reprojection
            rec_proj_list.append(r)  # collect the per-penalty reprojections for this proj_geom

        astra.projector.delete(pid)  # delete the projector to free resources

    """ 根据你选择的不同,可能返回1到4个参数,他们顺序如下 """
    if b_rec_proj & b_proj_list & b_count_time:
        return reconstruction_list, projection_list, rec_proj_list, time_cost_array
    elif b_proj_list & b_rec_proj:
        return reconstruction_list, projection_list, rec_proj_list
    elif b_rec_proj & b_count_time:
        return reconstruction_list, rec_proj_list, time_cost_array
    elif b_proj_list & b_count_time:
        return reconstruction_list, projection_list, time_cost_array
    elif b_proj_list:
        return reconstruction_list, projection_list
    elif b_rec_proj:
        return reconstruction_list, rec_proj_list
    elif b_count_time:
        return reconstruction_list, time_cost_array
    else:
        return reconstruction_list


def MAR_analyse(rec_proj, projection):
    """
    这个函数计算 MAR 
    ''param  rec_proj'': [type: list], 参考上一个函数的 rec_proj_list

    ''param  projection'': [type: list], 参考上一个函数 projection_list

    ''return  MAR_list'': [type: array], 是个二维的,对于rec_proj每个成员的MAR结果
    """
    l = len(projection)
    """
    eg: rec_proj is [ 
                      [ [1,1,1], [2,2,2] ],
                      [ [3,3,3,3], [4,4,4,4] ] 
                    ], it can't use the array() to invert  
    """
    rp_l = []
    for rp in rec_proj:
        rp = np.array(rp)
        rp_l.append(rp)

    MAR_list = []
    i = 0
    while i < l:
        MAR = np.abs(rp_l[i] - projection[i].flatten()).sum(axis=1) / projection[i].size
        MAR_list.append(MAR)
        i = i + 1

    return np.array(MAR_list)


def MAE_analyse(rec_list, img):
    """
    这个函数用来计算 rec_list 中的 MAE

    ''param  rec_list'': [type: list], 参考第一个函数的 reconstruction_list

    ''param img'': [type: array], 正确的图像

    ''return  MAE_list'': [type: array], rec_list中的每个重建结果的MAE
    """

    MAE_list = []
    rec_list = np.array(rec_list)

    for rec in rec_list:

        r_list = []
        for r in rec:
            mae = (1 / img.size) * np.abs(img - r).sum() / (np.max(img) - np.min(img))
            r_list.append(mae)

        MAE_list.append(r_list)

    return np.array(MAE_list)


def get_list_proj_geom(start, stop, num, pro_num):
    """
    得到一个包含若干 proj_geom 的列表,和他们对应的投影方向的数目

    ''param  start'': [type: int], 投影方向的数目

    ''param  stop'': [type: int], 投影方向的数目

    ‘’param num'': [type: int], 有几组投影方向的数目

    ‘’pro_num'': [type: int], 投影射线数目/探测器数目

    ''return  proj_geom_list'': [list], 存放创建proj_geom_list的参数

    ''return  proj_number_array'': [array], 存放创建的proj_geom对应的投影方向数
    """
    proj_geom_list = []
    """ 
    从start 到 stop 均匀选 num 个数,比如start 为30, stop 为180,num 为16,则
    选30,40,50,60,70,80,90,100,110,120,130,140,150,160,170,180作为
    投影方向数
    """
    proj_number_array = np.linspace(start, stop, num, dtype='int32')

    for proj_number in proj_number_array:
        proj_geom_list.append(['parallel', 1.0, pro_num, np.linspace(0, np.pi, proj_number, False)])

    return proj_geom_list, proj_number_array


def show_rec_figure(rec, title=None, save=None, show_range=None, b_one_figure=True):
    """
    该函数展示重建结果的图片

    ''param  rec'': [type: list], 重建结果

    ''param  title'': [type: list], 图片名

    ‘’param save'': [type: list], 保存路径名

    ‘’show_range'': [type: list], 图片显示范围,[a,b,c,d],其中 0<a,b<=1, a代表显示的宽是图片宽的a倍,b代表显示的高是图片
    高的b倍,-1<=c,d<=1 ,c代表以图片中心的横坐标为参照,左右偏移的程度,-0.5偏移到图像最左边,0.5偏移到最右边,d代表以图片
    中心的纵坐标为参照,上下偏移的程度,-0.5偏移到图像最上边,0.5偏移到最下边, 如果四个数选择不合适,则显示局部失败,显示完整的

    ‘’b_one_figure'': [type: bool], 是否多个重建结果现实到一个图上
    """

    import pylab as plb

    if b_one_figure:
        # all reconstructions in one figure
        plb.figure()
        number = len(rec)  # how many reconstructions there are

        # choose a sensible grid of rows and columns
        if number**(1/2) - int(number**(1/2)) == 0:
            # perfect square: use a square layout
            r_n = int(number**(1/2))
            c_n = int(number**(1/2))
        else:
            # not a perfect square: add columns until everything fits
            r_n = int(number**(1/2))
            c_n = number//r_n + 1
        if r_n * c_n > 9:
            # more than 9 subplots: one digit cannot index them all, so use two digits
            # per field (supports up to 99 subplots, usually enough)
            m = r_n*1000 + c_n*100
        else:
            # at most 9 subplots: one digit per field suffices
            m = r_n*100 + c_n*10

    # display (and save) every result as requested
    for i, r in enumerate(rec):
        """ -------------------- display -------------------- """
        if not b_one_figure:
            # one figure per reconstruction
            plb.figure()
            plb.gray()
            plb.axis('off')
            if title is not None:
                # per-figure title
                plb.title(title[i])
        else:
            # all reconstructions in one figure
            m = m+1
            plb.subplot(m)
            plb.gray()
            plb.axis('off')
            if title is not None:
                # per-subplot title
                plb.title(title[i])

        if show_range is not None:
            # a display window was given
            width = r.shape[1] * show_range[0]
            heigh = r.shape[0] * show_range[1]
            x = 0.5 * r.shape[1] - show_range[2] * r.shape[1]
            y = 0.5 * r.shape[0] - show_range[3] * r.shape[0]
            r_begin = int(y - heigh * 0.5)
            r_end = int(y + heigh * 0.5)
            c_begin = int(x - width * 0.5)
            c_end = int(x + width * 0.5)
            if r_begin < 0 or r_end < 0 or c_begin < 0 or c_end < 0:
                # invalid crop: fall back to the full image
                plb.imshow(r, vmin=0, vmax=1)
            else:
                plb.imshow(r[r_begin: r_end, c_begin: c_end], vmin=0, vmax=1)
        else:
            # no display window given
            plb.imshow(r, vmin=0, vmax=1)
        """ -----------------------保存部分----------------------- """
        if not b_one_figure:

            if save is not None:
                # 如果给了保存路径名
                plb.savefig(save[i], bbox_inches='tight', dpi=600)
        else:
            # 如果重建结果显示到一张图
            if i+1 == len(rec):
                # 在把最后一个结果显示到图中时,进行保存判断
                plb.tight_layout()
                if save is not None:
                    # 如果给了保存路径名
                    plb.savefig(save[0], bbox_inches='tight', dpi=600)


def show_analysis_figure(x_array, y_array, line_label, line_style_list=None,
                    title='', save=None, xlabel='', ylabel='', xscale='linear'):
    """
        这个函数用来绘制被y_array列数决定的线条,你可以指定线条风格(line_style_list)和标签(line_label)但是他们必须和
        y_array 的列数一样,必备参数是 x_array,  y_array, line_label

    Parameters
        ----------
        x_array, y_array : [type: array-like or scalar]---行坐标  纵坐标  的数据座标点

        line_label : [type: list]---每一条线的标签

        line_style_list : [type: list]---每一条线的风格

        title : [type: str]---图片标题

        save : [type: str]---用来保存图片的文件名,可以指定路径

        xlabel, ylabel : [type: str]---行坐标 和 纵坐标 的名称

        xscale : [type: str]---行坐标的坐标尺度类型
    """
    import pylab as plb

    plb.figure()
    plb.xlabel(xlabel)
    plb.ylabel(ylabel)
    plb.title(title)
    plb.xscale(xscale)
    for i, label in enumerate(line_label):
        if line_style_list is not None:
            plb.plot(x_array, y_array[:, i], line_style_list[i], label=label)
        else:
            plb.plot(x_array, y_array[:, i], label=label)
    plb.legend()
    plb.tight_layout()
    if save is not None:
        plb.savefig(save, bbox_inches='tight', dpi=600)
    plb.show()


"""
如果在该模块调试,导入必要的模块和代码,如果不在这个模块调试,这个模块被别的模块导入,必要的
模块和代码需要在导入者模块中给出
"""

if __name__ == '__main__':
    # load the image
    # testPhantom = np.load('phantom.npy')  # for .npy data
    testPhantom = scipy.io.loadmat('Shepp-Logan')['Shepp_Logan']  # for .mat data

    # extraOptions dict: the penalty terms and their parameter values
    arg = {'reg_NLM': 20, 'reg_bilateral': 100, 'reg_grad': 30}

    # display titles of the penalty terms; the order must match arg, and the
    # penalty-free entry must come first
    rec_type = ['no penalty', 'NLM penalty', 'bilateral penalty', 'gradient penalty']

    # plot line styles; the count must equal the number of entries in rec_type
    line_style_list = ['-rD', '--go', '-.b^', ':c*']
    #"""
    " ----------------------------------------用不同噪声强度和惩戒项进行重建---------------------------------------- "
    
    # array of noise intensities to test
    noisy_array = np.array([2500, 4000])

    # create one proj_geom (one fixed number of projection angles) for the noise study
    angle = 60  # number of projection angles
    path1 = '.' + os.sep + '{angle}_add_noisy'.format(angle=angle) + os.sep  # save path

    # create the save path path1 only if it does not already exist
    isExists = os.path.exists(path1)
    if not isExists:
        os.makedirs(path1)
    # the second argument is effectively unused: with num=1, this picks angle itself
    # as the (single) number of projection angles
    proj_geom, no_use = get_list_proj_geom(angle, 181, 1, testPhantom.shape[0])

    # reconstruct with every noise intensity, each combined with every penalty term
    R = []
    P = []
    RP = []
    # one noise intensity at a time
    for noisy in noisy_array:
        # the noise intensity must be at least 100, otherwise an error occurs
        r, p, rp = \
            proj_geom_list_for_reconstruction(proj_geom, testPhantom, 'MR-FBP', noisy_b=True,
                                              noisy_intense=noisy,  b_proj_list=True,
                                              b_rec_proj=True, **arg)
        astra.log.info("******** finished the reconstruction at noise intensity {a} ********".format(a=noisy))
        R.append(r[0])
        P.append(p[0])
        RP.append(rp[0])

    MAE_ = MAE_analyse(R, testPhantom)
    MAR_ = MAR_analyse(RP, P)
    r1 = R[1]
    save1 = []

    for t in rec_type:
        save1.append(path1 + '{angle}_{type}_noisy_reconstruction.svg'.format(angle=angle, type=t))

    show_rec_figure(r1, save=save1, show_range=[0.23, 0.28, 0, 0.175], title=rec_type)

    show_analysis_figure(noisy_array, MAE_, rec_type, line_style_list,
                    xlabel='noise parameter I', ylabel='mean absolute error of the image', xscale='symlog',
                    title='{angle} projection angles'.format(angle=angle),
                    save=path1+'{angle}_MAE_noisy.svg'.format(angle=angle))

    show_analysis_figure(noisy_array, MAR_, rec_type, line_style_list,
                    xlabel='noise parameter I', ylabel='mean absolute residual of the projections', xscale='symlog',
                    title='{angle} projection angles'.format(angle=angle),
                    save=path1+'{angle}_MAR_noisy.svg'.format(angle=angle))
    #"""
    #"""
    " ------------------------------不同惩戒项 在 不同投影角度数下 的 重建(无噪声)------------------------------ "
    path2 = '.' + os.sep + 'no_noisy' + os.sep  # 保存路径
    # 确定保存路径path1是否已经存在了,存在就不必再次创建了,没存在则创建该路径
    isExists = os.path.exists(path2)
    if not isExists:
        os.makedirs(path2)

    # 创建 proj_geom_list 以下从36角度为起始,终止角度为180,均值取12个
    proj_geom_list, proj_number_array = get_list_proj_geom(36, 180, 12, testPhantom.shape[0])

    # reconstruct with every penalty term (in arg) under every proj_geom (in proj_geom_list);
    # return the reconstructions, projections, reprojections and time costs
    r_l, p_l, rp_l, time_cost = proj_geom_list_for_reconstruction(proj_geom_list, testPhantom, 'MR-FBP',
                                                                  b_proj_list=True, b_rec_proj=True,
                                                                  b_count_time=True, **arg)
    r2 = r_l[4]  # pick one proj_geom (one angle count) whose reconstructions will be displayed

    MAE = MAE_analyse(r_l, testPhantom)  # compute their MAE
    MAR = MAR_analyse(rp_l, p_l)         # compute their MAR

    save2 = []
    # save the noise-free reconstructions of each penalty term as .svg vector graphics
    for t in rec_type:
        save2.append(path2 + '{type}_no_noisy_reconstruction.svg'.format(type=t))

    show_rec_figure(r2, save=save2, b_one_figure=False, title=rec_type)

    # MAE versus the number of projection angles, one curve per penalty term
    show_analysis_figure(proj_number_array, MAE, rec_type, line_style_list,
                         xlabel='number of projection angles', ylabel='mean absolute error of the image', xscale='linear',
                         title='noise-free', save=path2+'MAE_no_noisy.svg')

    # MAR versus the number of projection angles, one curve per penalty term
    show_analysis_figure(proj_number_array, MAR, rec_type, line_style_list,
                         xlabel='number of projection angles', ylabel='mean absolute residual of the projections', xscale='linear',
                         title='noise-free', save=path2+'MAR_no_noisy.svg')

    show_analysis_figure(proj_number_array, time_cost, rec_type, line_style_list,
                         xlabel='number of projection angles', ylabel='algorithm run time', xscale='linear',
                         title='noise-free', save=path2+'time_cost.svg')
    #"""

astra_plugin_r.py

(Use this file to replace the original astra_plugin.py, and in __init__.py change from .astra_plugin import plugin to from .astra_plugin_r import plugin.)
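For reference, the bilateral weights that certain_W_list_for_bilateral precomputes combine a spatial term (squared distance to the window center over 2σd²) and a range term (squared intensity difference over 2σu²), normalized to sum to 1. A standalone numpy sketch of one window's weights; the window radius and the two variance factors are illustrative values, not the ones used in the plugin:

```python
import numpy as np

def bilateral_weights(window, sigma2_d2, sigma2_u2):
    """Normalized bilateral weights for one (2r+1) x (2r+1) window.

    window: square patch of pixel values; its center pixel is the reference.
    sigma2_d2, sigma2_u2: the 2*sigma^2 factors for distance and intensity.
    """
    r = window.shape[0] // 2
    rows, cols = np.mgrid[-r:r + 1, -r:r + 1]
    d2 = rows ** 2 + cols ** 2                        # squared spatial distance
    u2 = (window - window[r, r]) ** 2                 # squared intensity difference
    son = np.exp(-(d2 / sigma2_d2 + u2 / sigma2_u2))  # unnormalized weights
    return son / son.sum()                            # weights sum to 1

w = bilateral_weights(np.ones((3, 3)), 2.0, 800.0)
print(w.sum())  # 1.0; the center pixel gets the largest weight
```

In the plugin, these weights are computed once from the FBP image and then reused as the fixed smoothing operator inside the penalty rows of the least-squares system.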

import astra
import numpy as np
from six.moves import range
from scipy.signal import fftconvolve
import scipy.ndimage.filters as snf
import scipy.io as sio
import numpy.linalg as na

import os, errno

try:
    from pywt import wavedec2
except ImportError:
    astra.log.warn("No pywavelets installed, wavelet functions will not work.")


def mkdir_p(path):
    try:
        os.makedirs(path)
    except OSError as exc:  # Python >2.5
        if exc.errno == errno.EEXIST and os.path.isdir(path):
            pass
        else:
            raise


class plugin(astra.plugin.ReconstructionAlgorithm2D):
    """Reconstructs using the MR-FBP method [1].

    Options:

    'save_filter' (optional): file to save filter to (overwrites existing)
    'use_saved_filter' (optional): file to load filter from
    'reg_grad' (optional): amount of l2 gradient minimization
    'reg_path' (optional): folder to save range of 'reg_grad' values to
    'reg_range' (optional): range of 'reg_grad' values to try (of form (min, max, number of trials))
    'reg_wav' (optional): amount of l2 wavelet minimization
    'wav_bas' (optional): wavelet to use (see pywt.wavelist())
    'nlinear' (optional): number of linear steps in exponential binning
    'reg_bilateral' (optional): weight of the regularization based on bilateral filtering
    'reg_NLM' (optional): weight of the regularization based on NLM filtering
    """

    astra_name = "MR-FBP"

    def customFBP(self, f, s):
        sf = np.zeros_like(s)  # zero array the same size as the sinogram s
        padded = np.zeros(s.shape[1] * 2)  # buffer of length 2 * number of detector cells
        l = int(s.shape[1] / 2.)  # half the number of detector cells, rounded down
        r = l + s.shape[1]  # l + number of detector cells

        for i in range(sf.shape[0]):  # filter each projection angle
            padded[l:r] = s[i]  # place the data in the middle of padded
            padded[:l] = padded[l]  # extend the left edge
            padded[r:] = padded[r - 1]  # extend the right edge
            sf[i] = fftconvolve(padded, f, 'same')[l:r]  # keep the middle part, l to r-1
        return (self.W.T * sf).reshape(self.v.shape)

    # create the exponential-binning basis
    def create_exp_bin(self, nlinear):
        # --------------------------- exponential binning --------------------------- #
        fs = self.s.shape[1]  # number of detector cells

        if fs % 2 == 0:  # if even, extend to odd
            fs += 1
        mf = int(fs / 2)  # midpoint position

        w = 1
        c = mf

        bas = np.zeros(fs, dtype=np.float32)  # 1-D array of length fs
        self.basis = []  # holds the exponential-binning vectors e1, e2, ...
        count = 0
        while c < fs:  # walk from the midpoint c to the total length fs
            bas[:] = 0
            l = c  # left edge of the current bin
            r = c + w  # right edge: current position c plus the bin width w

            if r > fs:  # clip the right edge to the total length fs
                r = fs

            bas[l:r] = 1  # set positions l to r-1 to 1

            if l != 0:
                # mirror the bin into the left half
                l = fs - c - w
                r = l + w

                if l < 0:
                    l = 0

                bas[l:r] = 1
            self.basis.append(bas.copy())  # deep-copy bas into the self.basis list
            c += w
            count += 1
            if count > nlinear:  # after the prescribed nlinear steps, double the bin width
                w = 2 * w
        self.nf = len(self.basis)  # number of vectors produced by the exponential binning

    # turn the pixel value into the range 0 to 255
    def get_256_pix_value(self, fbp_out):
        min_pix_value = np.min(fbp_out)
        max_pix_value = np.max(fbp_out)
        abs_max_minus_min = np.abs(max_pix_value - min_pix_value)
        fbp_255_out = (fbp_out - min_pix_value) * 255 / abs_max_minus_min

        return fbp_255_out

    # assemble the optimal filter from the basis coefficients
    def create_optimum_filter(self, out):
        # zero vector the same size as e1
        flt = np.zeros_like(self.basis[0])
        # restore the filter value at each position of h
        for i, bas in enumerate(self.basis):
            flt += out[0][i] * bas

        return flt

    # save a range of reconstructions obtained with the optimal filter, where the squared
    # |dx|, |dy| rows are scaled by each factor from reg_range
    def save_range_gradient(self, A, nrows, b):
        import tifffile
        mkdir_p(self.reg_path)
        # range of 'reg_grad' values to try (of form (min, max, number of trials))
        l, r, s = self.reg_range
        # show "Generating regularized images" on the command line
        astra.log.info("Generating regularized images")

        for i, reg in enumerate(np.linspace(l, r, s)):
            # log the progress percentage
            astra.log.info('{:.2f} % done'.format(100 * float(i) / s))

            A[self.s.size: nrows, :] *= reg  # scale the squared |dx| and |dy| rows
            out = na.lstsq(A, b)  # least-squares solution x of the linear system A x = b
            flt = self.create_optimum_filter(out)  # assemble the optimal filter
            rc = self.customFBP(flt, self.s)  # reconstruct: filter the sinogram s with flt
            # save as tiff under reg_path
            tifffile.imsave(self.reg_path + os.sep + str(reg) + '.tiff', rc)
            A[self.s.size: nrows, :] /= reg

        return A[self.s.size: nrows, :]

# ----------------function for regularization based on the L2 gradient minimization--------------- #

    # run the regularization based on the L2 gradient minimization
    def run_gradient(self, A, img, i):
        dx = np.zeros_like(self.v)
        dx[:-1, :] = img[:-1, :] - img[1:, :]  # first n-1 rows minus last n-1 rows
        dy = np.zeros_like(self.v)
        dy[:, :-1] = img[:, :-1] - img[:, 1:]  # first n-1 cols minus last n-1 cols
        # first Nd*Ntheta rows: the forward projection A p
        A[0:self.s.size, i] = self.W * img
        # next N^2 rows: the x-direction gradient
        A[self.s.size + 0 * self.v.size:self.s.size + 1 * self.v.size, i] = \
            self.reg_grad * dx.flatten()
        # next N^2 rows: the y-direction gradient
        A[self.s.size + 1 * self.v.size:self.s.size + 2 * self.v.size, i] = \
            self.reg_grad * dy.flatten()

        return A[:, i]

# ----------------function for regularization based on the L2 wavelet minimization---------------- #

    # prepare for the wavelet penalty
    def prepare_for_wav(self):
        q = wavedec2(self.v, self.wav_bas)  # wavelet transform with the self.wav_bas wavelet
        l = [q[0].flatten()]
        for z in range(1, len(q)):
            l.extend([y.flatten() for y in q[z]])
        l = np.hstack(l)

        return l

    # run the regularization based on the L2 wavelet minimization
    def run_wavelet(self, A, img, i, nrows):
        q = wavedec2(img, self.wav_bas)
        l = [q[0].flatten()]
        for z in range(1, len(q)):
            l.extend([y.flatten() for y in q[z]])
        l = np.hstack(l)
        A[nrows - l.size:, i] = self.reg_wav * l

        return A[nrows - l.size:, i]

# ----------------- functions for the regularization based on bilateral filtering ----------------- #

    # uses self.r, self.width, ..., so it must be called after those variables are defined
    def initialize_model(self):
        # fill the (2r+1) x (2r+1) template with the row/col offsets of each position
        # and with the squared distance of each position to the center
        for i in range(self.width ** 2):  # 0 ... (2r+1)^2 - 1
            # ----- row/col offsets relative to the center ----- #
            row = i // self.width  # row inside the template
            col = i % self.width  # column inside the template
            self.model_r[row][col] = row - self.r  # row offset from the center
            self.model_c[row][col] = col - self.r  # column offset from the center

            # ----- squared distance d^2 to the center ----- #
            self.model_rc[row][col] = self.model_r[row][col] ** 2 + self.model_c[row][col] ** 2

    # define the variables for the regularization based on bilateral filtering
    def define_variable_for_bilateral(self, bilateral_r, bilateral_u_2, bilateral_d_2):
        # ---------------- attributes of the bilateral-filter regularization ---------------- #
        self.fbp_out = None          # holds the FBP reconstruction
        self.r = bilateral_r         # radius of the bilateral filter window
        self.width = self.r*2 + 1
        self.model_size = self.width**2
        # templates holding the row/col offset of each pixel inside the search window
        self.model_r = np.zeros((self.width, self.width))
        self.model_c = np.zeros((self.width, self.width))
        # template holding the squared distance d^2
        self.model_rc = np.zeros((self.width, self.width))
        # the factors 2*sigma_d^2 and 2*sigma_u^2 controlling the influence of
        # spatial distance and of intensity difference
        self.theta_2_d2 = bilateral_d_2
        self.theta_2_u2 = bilateral_u_2
        self.W_list_for_bilateral = []

    # prepare for the bilateral penalty
    def prepare_for_bilateral(self):
        # keep the FBP reconstruction in self.fbp_out; it is needed later
        if astra.projector.is_cuda(self.W.proj_id):
            fbp_out = self.W.reconstruct('FBP_CUDA', self.s)
        else:
            fbp_out = self.W.reconstruct('FBP', self.s)

        # pad the image by 2*r rows and 2*r columns
        self.fbp_out = np.zeros([self.v.shape[0] + 2 * self.r, self.v.shape[1] + 2 * self.r])
        # place the FBP reconstruction in the center of the padded array
        self.fbp_out[self.r: self.r + self.v.shape[0], self.r: self.r + self.v.shape[1]] = \
            fbp_out.reshape(self.v.shape)
        self.fbp_out = self.get_256_pix_value(self.fbp_out)

        img_extend = \
            np.zeros([self.v.shape[0] + self.r * 2,
                      self.v.shape[1] + self.r * 2])  # zero-padded image buffer
        WT_Cp_col = []  # 1-D list, one entry per row of WT*Cp

        return img_extend, WT_Cp_col

    # precompute the bilateral weights W
    def certain_W_list_for_bilateral(self):

        for j in range(self.v.size):

            row = j // self.v.shape[1]         # row of the window center (in v coordinates)
            col = j % self.v.shape[1]          # column of the window center (in v coordinates)

            # neighborhood of the center pixel inside the search window (in the padded FBP image)
            model_pix_fbp = \
                self.fbp_out[row: row + self.width, col: col + self.width]

            row = row + self.r  # row of the center pixel (in padded coordinates)
            col = col + self.r  # column of the center pixel (in padded coordinates)

            son = np.exp(-1 * (self.model_rc / self.theta_2_d2 +
                               ((model_pix_fbp - self.fbp_out[row, col]) ** 2) / self.theta_2_u2))

            mother = son.sum()
            w = son / mother

            self.W_list_for_bilateral.append(w)

    # run the regularization based on the bilateral filtering
    def run_bilateral(self, A, img, i, WT_Cp_col, img_extend):
        # place img in the middle of the padded buffer
        img_extend[self.r: self.r + img.shape[0], self.r: self.r + img.shape[1]] = img.copy()

        # --- for every pixel (as window center, 0 ... size-1), apply the precomputed
        #     window weights and accumulate the weighted average --- #
        for j in range(self.v.size):

            row = j // self.v.shape[1]         # row of the window center (in v coordinates)
            col = j % self.v.shape[1]          # column of the window center (in v coordinates)

            # neighborhood of the center pixel inside the search window (in the padded img)
            model_pix_img = \
                img_extend[row: row + self.width, col: col + self.width]

            row = row + self.r  # row of the center pixel (in padded coordinates)
            col = col + self.r  # column of the center pixel (in padded coordinates)

            sum_ = (model_pix_img * self.W_list_for_bilateral[j]).sum()
            WT_Cp_col.append(self.reg_bilateral * (img_extend[row, col] - sum_))

        A[0: self.s.size, i] = self.W * img                       # W*WT*Cp*ei  [ W*WT*Cp ]       [p]
        A[self.s.size: self.s.size + self.v.size, i] = WT_Cp_col  # D(WT*Cp*ei) [ D*WT*Cp ] * h = [0]
        WT_Cp_col.clear()                                         # clear for the next iteration

        return A[:, i]

    # ------------------------function for regularization based on the NLM-------------------------

    # define the variable for regularization based on NLM
    def define_variable_for_NLM(self, NLM_patch_width, NLM_search_width, h):
        # ---------------- parameters of the regularization based on NLM ---------------- #
        self.search_width = NLM_search_width
        self.patch_width = NLM_patch_width        # self.patch_width, self.search_width must be odd
        self.h = self.patch_width ** 2 * h
        self.r_n_extend = int(self.patch_width - self.v.shape[0] % self.patch_width)
        self.c_n_extend = int(self.patch_width - self.v.shape[1] % self.patch_width)
        self.search_r = int((self.search_width - 1) / 2)
        self.patch_r = int((self.patch_width - 1) / 2)
        self.patch_wnd = np.zeros((self.patch_width, self.patch_width))
        self.center_wnd = np.zeros((self.patch_width, self.patch_width))
        self.one_line_search_times = int((self.c_n_extend + self.v.shape[1]) / self.patch_width)
        self.W_list_for_NLM = []                  # save the w of each center patch
        self.v_extend_size = (self.v.shape[0]+self.r_n_extend)*(self.v.shape[1]+self.c_n_extend)
        self.extend_r = self.v.shape[0]+self.r_n_extend
        self.extend_c = self.v.shape[1]+self.c_n_extend

    # prepare for the NLM penalty
    def prepare_for_NLM(self):
        # keep the FBP reconstruction in self.fbp_out; it is needed later
        if astra.projector.is_cuda(self.W.proj_id):
            fbp_out = self.W.reconstruct('FBP_CUDA', self.s)
        else:
            fbp_out = self.W.reconstruct('FBP', self.s)

        # pad the matrix by self.r_n_extend rows and self.c_n_extend columns so that
        # its row and column counts are multiples of self.patch_width
        self.fbp_out = np.zeros([self.v.shape[0] + self.r_n_extend, self.v.shape[1] + self.c_n_extend])
        # place fbp_out into self.fbp_out, starting from the upper left
        self.fbp_out[: self.v.shape[0], : self.v.shape[1]] = fbp_out.reshape(self.v.shape)
        self.fbp_out = self.get_256_pix_value(self.fbp_out)
        # create img_extend and img_backup_extend, padded by self.r_n_extend rows
        # and self.c_n_extend columns relative to img
        img_extend = np.zeros([self.v.shape[0] + self.r_n_extend,
                               self.v.shape[1] + self.c_n_extend])
        img_backup_extend = np.zeros([self.v.shape[0] + self.r_n_extend,
                                      self.v.shape[1] + self.c_n_extend])
        m = self.fbp_out.sum() / self.fbp_out.size
        f = ((self.fbp_out - m) ** 2).sum() / self.fbp_out.size
        # self.h = self.patch_width ** 2 * f * 0.2
        self.h = self.patch_width ** 2 * f * f * 3.2 * 10 ** (-4)
        return img_extend, img_backup_extend

    # certain W for NLM
    def certain_W_list_for_NLM(self):
        j = self.extend_c * self.patch_r + self.patch_r  # move to the first valid patch-center position
        times = 0
        while j < self.v_extend_size:
            row = j // self.extend_c   # save the row of fbp(in origin,also in extend)
            col = j % self.extend_c    # save the col of fbp(in origin,also in extend)

            # save the center patch
            self.center_wnd[:, :] = self.fbp_out[row - self.patch_r: row + self.patch_r + 1,
                                                 col - self.patch_r: col + self.patch_r + 1]
            # certain the search range(in origin, also in extend)
            """
            tuple for min and max of row, if min row is smaller than 0, use 0 as min,if max is 
            bigger than self.v.shape[0], use the self.v.shape[0] as max, the col also like this
            could take the boundary in tuple
            """
            row_range = (max(row - self.search_r, 0), min(row + self.search_r, self.v.shape[0] + self.r_n_extend - 1))
            col_range = (max(col - self.search_r, 0), min(col + self.search_r, self.v.shape[1] + self.c_n_extend - 1))

            # search wnd row length and col length
            r_length = row_range[1] - row_range[0] + 1
            c_length = col_range[1] - col_range[0] + 1
            # search wnd row and col could assign how much number of patch
            r_number = r_length // self.patch_width
            c_number = c_length // self.patch_width

            son_list = []
            mother = 0
            w = []
            for r in range(r_number):
                for c in range(c_number):
                    self.patch_wnd[:, :] = \
                        self.fbp_out[row_range[0] + self.patch_width * r:
                                     row_range[0] + self.patch_width * (r + 1),
                                     col_range[0] + self.patch_width * c:
                                     col_range[0] + self.patch_width * (c + 1)]
                    e_d = ((self.patch_wnd - self.center_wnd) ** 2).sum()
                    son = np.exp(-1. * e_d / self.h)
                    son_list.append(son)
                    mother = son + mother

            for every_son in son_list:
                wjk = every_son / mother
                w.append(wjk)

            self.W_list_for_NLM.append(w)

            times = times + 1
            # one by one move by self.patch_width
            if times == self.one_line_search_times:
                j = j + 2 * self.patch_r * self.extend_c + self.patch_width
                times = 0
            else:
                j = j + self.patch_width

    # apply the regularization based on NLM
    def run_NLM(self, A, img, i, img_extend, img_backup_extend):
        # place img into the padded arrays, starting from the upper-left corner
        img_extend[: img.shape[0], : img.shape[1]] = img.copy()
        img_backup_extend[: img.shape[0], : img.shape[1]] = img.copy()
        j = self.extend_c * self.patch_r + self.patch_r  # skip to the first valid site
        times = 0
        w_index = 0

        while j < self.v_extend_size:

            row = j // self.extend_c  # row of the current pixel (in the extended image)
            col = j % self.extend_c   # column of the current pixel (in the extended image)

            # determine the search range (in the extended image)
            """
            (min, max) tuples for the row and column ranges, clamped to the
            extended image boundaries exactly as in certain_W_list_for_NLM
            """
            row_range = (max(row - self.search_r, 0), min(row + self.search_r, self.v.shape[0] + self.r_n_extend - 1))
            col_range = (max(col - self.search_r, 0), min(col + self.search_r, self.v.shape[1] + self.c_n_extend - 1))

            # height and width of the search window
            r_length = row_range[1] - row_range[0] + 1
            c_length = col_range[1] - col_range[0] + 1
            # number of patches that fit along each side of the search window
            r_number = r_length // self.patch_width
            c_number = c_length // self.patch_width

            sum_ = 0
            number = 0
            # accumulate the weighted average of the patches in the search
            # window: each patch is multiplied by its precomputed weight wjk
            for r in range(r_number):
                for c in range(c_number):
                    sum_ = sum_ + img_extend[row_range[0] + self.patch_width * r:
                                             row_range[0] + self.patch_width * (r + 1),
                                             col_range[0] + self.patch_width * c:
                                             col_range[0] + self.patch_width * (c + 1)] * \
                           self.W_list_for_NLM[w_index][number]
                    number = number + 1

            # the penalty for the center patch is (original - NLM estimate)
            img_backup_extend[row - self.patch_r: row + self.patch_r + 1,
                              col - self.patch_r: col + self.patch_r + 1] = \
                img_extend[row - self.patch_r: row + self.patch_r + 1,
                           col - self.patch_r: col + self.patch_r + 1] - sum_

            # one more position on this line has been processed
            times = times + 1
            # advance patch by patch, stepping self.patch_width pixels; at the
            # end of a line, jump down to the next row of patches
            if times == self.one_line_search_times:
                j = j + 2 * self.patch_r * self.extend_c + self.patch_width
                times = 0
            else:
                j = j + self.patch_width

            w_index = w_index + 1

        A[0: self.s.size, i] = self.W * img
        A[self.s.size: self.s.size + self.v.size, i] = \
            self.reg_NLM * \
            img_backup_extend[: self.v.shape[0],            # W*WT*Cp*ei  [ W*WT*Cp]      [p]
                              : self.v.shape[1]].flatten()  # D(WT*Cp*ei) [ DWT*Cp ] * h= [0]

        return A[:, i]

    # --------------------------------------------------------------------------------------------
    """NLM_search_width / NLM_patch_width must be an odd integer (e.g. 21 / 7 = 3)"""
    def initialize(self, cfg, nlinear=2, reg_wav=None, wav_bas='haar',
                   reg_grad=None, reg_path=None, reg_range=(1, 100, 10),
                   reg_bilateral=None, bilateral_r=1, bilateral_u_2=121, bilateral_d_2=2,
                   reg_NLM=None, NLM_patch_width=7, NLM_search_width=21, NLM_h=80,
                   save_filter=None, use_saved_filter=None):

        self.W = astra.OpTomo(cfg['ProjectorId'])  # create an OpTomo instance and store it in self.W
        self.reg_grad = reg_grad
        self.reg_wav = reg_wav
        self.save_filter = save_filter
        self.use_saved = use_saved_filter
        self.reg_path = reg_path
        self.reg_range = reg_range
        self.wav_bas = wav_bas
        self.reg_NLM = reg_NLM
        self.reg_bilateral = reg_bilateral

        if reg_bilateral:
            # define the variables for the bilateral-filtering regularization
            self.define_variable_for_bilateral(bilateral_r, bilateral_u_2, bilateral_d_2)
            # finish the model initialization
            self.initialize_model()

        if reg_NLM:
            # define the variables for the NLM regularization
            self.define_variable_for_NLM(NLM_patch_width, NLM_search_width, NLM_h)

        """
        self.v holds the reconstruction data and self.s holds the projection
        data. ones_like(a) returns an array with the same shape and dtype as a;
        shape returns the size of each dimension as a tuple; reshape reshapes
        an array to the given dimensions.
        """
        # if a folder path for the L2 gradient-minimization results is given
        if self.reg_path:
            self.reg_grad = 1.  # coefficient λ

        self.create_exp_bin(nlinear)  # create the exponentially binned filter basis

    # -------------------------------------------------------------------------------------------------------

    def run(self, iterations):
        # if a file with a previously saved filter is given, load and use it
        if self.use_saved:
            flt = sio.loadmat(self.use_saved)['mrfbp_flt'].flatten()  # flatten to a 1-D copy
            self.v[:] = self.customFBP(flt, self.s)  # reconstruct with the saved filter, then return
            return

        nrows = self.s.size  # total number of projection samples: Nd*Nθ rows
        ncols = self.nf      # number of columns after exponential binning: ~log(Nd)

        # if the L2 gradient-minimization regularization term is enabled,
        # add 2 * self.v.size rows
        if self.reg_grad:
            nrows += 2 * self.v.size

        # if regularization based on wavelets is enabled
        if self.reg_wav:
            l = self.prepare_for_wav()        # prepare for wavelets: determine how many rows to add
            nrows += l.size                   # add l.size rows

        # if regularization based on NLM is enabled
        if self.reg_NLM:
            nrows += self.v.size              # add v.size rows
            img_extend, img_backup_extend = \
                self.prepare_for_NLM()        # prepare for NLM: create the padded FBP data
            self.certain_W_list_for_NLM()     # use self.fbp_out to determine the weights for each center patch

        # if regularization based on bilateral filtering is enabled
        if self.reg_bilateral:
            nrows += self.v.size              # add v.size rows
            img_extend, WT_Cp_col = \
                self.prepare_for_bilateral()  # prepare for bilateral filtering: create the FBP data
            self.certain_W_list_for_bilateral()  # use self.fbp_out to determine the weight for each pixel

        # ------------------------------------------------------------------------------
        A = np.zeros((nrows, ncols), dtype=np.float32)  # allocate the system matrix A
        if self.reg_NLM:
            name = 'Generating MR-FBP-NLM matrix'
        elif self.reg_bilateral:
            name = 'Generating MR-FBP-bilateral matrix'
        elif self.reg_grad:
            name = 'Generating MR-FBP-gradient matrix'
        elif self.reg_wav:
            name = 'Generating MR-FBP-wavelet matrix'
        else:
            name = 'Generating MR-FBP matrix'

        astra.log.info(name)      # log which matrix is being generated

        # take e1, e2, ... in turn and compute the corresponding column of A
        for i, bas in enumerate(self.basis):
            astra.log.info('{:.2f} % done'.format(100 * float(i) / self.nf))  # log progress

            # pass the projections s and the basis filter bas to customFBP; the
            # result is WT*Cp*ei (the i-th column of WT*Cp) in image form
            img = self.customFBP(bas, self.s)

            # -------------------if regularization is based on wavelets------------------- #
            if self.reg_wav:
                # use run_wavelet to fill A[nrows - l.size:, i]
                A[nrows - l.size:, i] = self.run_wavelet(A, img, i, nrows)

            # -----------if regularization is based on L2 gradient minimization----------- #
            if self.reg_grad:
                # use run_gradient to fill A[:, i]
                A[:, i] = self.run_gradient(A, img, i)

            # -----------------------if regularization is based on NLM-------------------- #
            if self.reg_NLM:
                # use run_NLM to fill A[:, i]
                A[:, i] = self.run_NLM(A, img, i, img_extend, img_backup_extend)

            # -------------if regularization is based on bilateral filtering-------------- #
            if self.reg_bilateral:
                A[:, i] = self.run_bilateral(A, img, i, WT_Cp_col, img_extend)
            else:
                # bilateral filtering not used (and no other regularization has
                # run): store the forward projection W*WT*Cp*ei
                A[:self.s.size, i] = self.W * img

        # -------------------------------------------------------------------------------------------------------

        b = np.zeros(nrows, dtype=np.float32)  # allocate a zero vector of length nrows
        b[:self.s.size] = self.s.flatten()     # flatten s and store it in b

        # ------- sweep the factors in reg_range over the squared |dx| and |dy| ----------- #
        # -------------------- penalties and save the results as tiff --------------------- #
        if self.reg_path:
            A[self.s.size: nrows, :] = self.save_range_gradient(A, nrows, b)

        out = na.lstsq(A, b)                   # least-squares solution x of Ax = b
        flt = self.create_optimum_filter(out)  # create the optimal filter

        # ------- if self.save_filter (a file path) is not None, save flt to it ----------- #
        if self.save_filter:
            sio.savemat(self.save_filter, {'mrfbp_flt': flt},
                        do_compression=True, appendmat=False)

        astra.log.info("Done!")                # show "Done!" on the command line
        rc = self.customFBP(flt, self.s)       # filter the projections s with flt to get rc
        self.v[:] = rc                         # store rc in self.v
 
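Stripped of the tomography, the core of `run` is an ordinary stacked linear least-squares problem: data rows (the projections of the per-basis FBP images) sit on top of penalty rows scaled by a regularization weight, the right-hand side is the projection data followed by zeros, and `lstsq` returns the filter coefficients. A toy sketch of that pattern with hypothetical small random matrices (no astra, no real filter basis):

```python
import numpy as np

# toy stand-ins: 6 "projection" equations and 4 "regularization" equations
rng = np.random.default_rng(42)
A_proj = rng.standard_normal((6, 3))   # rows coupling filter coeffs to the data
A_reg = rng.standard_normal((4, 3))    # rows produced by a penalty (NLM role)
b_proj = rng.standard_normal(6)
lam = 0.5                              # regularization weight (role of reg_NLM)

# stack the system the same way run() does: data rows, then penalty rows;
# the penalty targets on the right-hand side are zero
A = np.vstack([A_proj, lam * A_reg])
b = np.concatenate([b_proj, np.zeros(4)])

# least-squares solution of A x = b, as in na.lstsq(A, b)
coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
```

Raising `lam` shifts the trade-off toward the penalty rows, which is exactly how the `reg_NLM` / `reg_bilateral` coefficients steer the learned filter in the class above.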

Source: https://blog.csdn.net/qq_41607336/article/details/108637204
