A Hands-On Guide to Deploying a Small Mushroom-Recognition App


This article walks through deploying a mushroom-classification model with the PaddlePaddle EasyEdge platform: define the image-classification task, unzip and label the mushroom dataset, split it into training and validation sets, define a dataset class with image augmentation, train a mobilenet_v2 network with a configured optimizer, save the model as a static graph, and finally deploy it through EasyEdge. The whole workflow is straightforward.


Project Background

PaddlePaddle recently launched EasyEdge, a device-and-edge AI service platform that is very friendly to beginners.

Previously, once we had developed a model, there was no quick way to deploy it to a phone.

With EasyEdge, you can now deploy directly on the platform, and the process is simple and smooth. The example below walks through it:

① Problem Definition

When you want to solve a task with deep learning, the general workflow is:

① problem definition -> ② data preparation -> ③ model selection and development -> ④ model training and tuning -> ⑤ model evaluation and testing -> ⑥ deployment

Mushroom classification in this project is essentially an image-classification task, and we use the lightweight convolutional neural network mobilenet_v2 for the hands-on practice.

② Data Preparation

2.1 Unzipping the Dataset

We uploaded the dataset, obtained online, as a zip archive to the AI Studio dataset store and loaded it into this project.

Before using it, we unzip the archive.

In [ ]
# !unzip -oq /home/aistudio/data/data81902/mushrooms_train.zip -d work/
In [ ]
import paddle
paddle.seed(8888)
import numpy as np
from typing import Callable

# configuration parameters
config_parameters = {
    "class_dim": 9,  # number of classes
    "target_path": "/home/aistudio/work/",
    'train_image_dir': '/home/aistudio/work/trainImages',
    'eval_image_dir': '/home/aistudio/work/evalImages',
    'epochs': 100,
    'batch_size': 128,
    'lr': 0.01
}

2.2 Data Annotation

Let's first see what the unzipped dataset looks like.

In [ ]
import os
import random
from matplotlib import pyplot as plt
from PIL import Image

imgs = []
paths = os.listdir('work/mushrooms_train')
for path in paths:
    img_path = os.path.join('work/mushrooms_train', path)
    if os.path.isdir(img_path):
        img_paths = os.listdir(img_path)
        img = Image.open(os.path.join(img_path, random.choice(img_paths)))
        imgs.append((img, path))

f, ax = plt.subplots(3, 3, figsize=(12, 12))
for i, img in enumerate(imgs[:9]):
    ax[i//3, i%3].imshow(img[0])
    ax[i//3, i%3].axis('off')
    ax[i//3, i%3].set_title('label: %s' % img[1])
plt.show()
       
               

2.3 Splitting the Dataset and Defining the Dataset Class

Next we use the labeled files to define a dataset class for later use in model training.

2.3.1 Splitting the Dataset

In [ ]
# import os
# import shutil
# train_dir = config_parameters['train_image_dir']
# eval_dir = config_parameters['eval_image_dir']
# paths = os.listdir('work/mushrooms_train')
# if not os.path.exists(train_dir):
#     os.mkdir(train_dir)
# if not os.path.exists(eval_dir):
#     os.mkdir(eval_dir)
# for path in paths:
#     imgs_dir = os.listdir(os.path.join('work/mushrooms_train', path))
#     target_train_dir = os.path.join(train_dir, path)
#     target_eval_dir = os.path.join(eval_dir, path)
#     if not os.path.exists(target_train_dir):
#         os.mkdir(target_train_dir)
#     if not os.path.exists(target_eval_dir):
#         os.mkdir(target_eval_dir)
#     for i in range(len(imgs_dir)):
#         if ' ' in imgs_dir[i]:
#             new_name = imgs_dir[i].replace(' ', '_')
#         else:
#             new_name = imgs_dir[i]
#         target_train_path = os.path.join(target_train_dir, new_name)
#         target_eval_path = os.path.join(target_eval_dir, new_name)
#         if i % 5 == 0:
#             shutil.copyfile(os.path.join(os.path.join('work/mushrooms_train', path), imgs_dir[i]), target_eval_path)
#         else:
#             shutil.copyfile(os.path.join(os.path.join('work/mushrooms_train', path), imgs_dir[i]), target_train_path)
# print('finished train val split!')
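The `i % 5 == 0` rule in the split script sends every fifth file to the validation folder, giving roughly a 4:1 train/eval split. A minimal sketch of that selection logic, using hypothetical file names:

```python
# Every index divisible by 5 goes to eval; the rest go to train,
# mirroring the i % 5 == 0 rule from the split script above.
files = ['img_%d.jpg' % i for i in range(10)]
eval_files = [f for i, f in enumerate(files) if i % 5 == 0]
train_files = [f for i, f in enumerate(files) if i % 5 != 0]
assert len(eval_files) == 2 and len(train_files) == 8  # 1:4 eval/train ratio
```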

2.3.2 Implementing the Dataset Class

In [ ]
# Dataset definition
class TowerDataset(paddle.io.Dataset):
    """
    Step 1: subclass paddle.io.Dataset
    """
    def __init__(self, transforms: Callable, mode: str = 'train'):
        """
        Step 2: implement the constructor and define how data is read
        """
        super(TowerDataset, self).__init__()

        self.mode = mode
        self.transforms = transforms

        train_image_dir = config_parameters['train_image_dir']
        eval_image_dir = config_parameters['eval_image_dir']

        train_data_folder = paddle.vision.DatasetFolder(train_image_dir)
        eval_data_folder = paddle.vision.DatasetFolder(eval_image_dir)
        if self.mode == 'train':
            self.data = train_data_folder
        elif self.mode == 'eval':
            self.data = eval_data_folder

    def __getitem__(self, index):
        """
        Step 3: implement __getitem__ to return a single sample
        (training data, corresponding label) for a given index
        """
        data = np.array(self.data[index][0]).astype('float32')
        data = self.transforms(data)
        label = np.array([self.data[index][1]]).astype('int64')
        return data, label

    def __len__(self):
        """
        Step 4: implement __len__ to return the total number of samples
        """
        return len(self.data)
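The class above follows the standard `paddle.io.Dataset` protocol: `__getitem__` returns one `(data, label)` pair and `__len__` returns the sample count. A framework-free sketch of that protocol:

```python
# Minimal stand-in for the Dataset protocol: index in, (data, label) out.
class ToyDataset:
    def __init__(self, samples):
        self.samples = samples  # list of (data, label) pairs

    def __getitem__(self, index):
        return self.samples[index]

    def __len__(self):
        return len(self.samples)

ds = ToyDataset([([0.1, 0.2], 0), ([0.3, 0.4], 1)])
assert len(ds) == 2
assert ds[1] == ([0.3, 0.4], 1)
```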

2.3.3 Instantiating the Dataset Class and Image Augmentation

Instantiate the dataset class for the data at hand and check the total sample count.

In [ ]
from paddle.vision import transforms as T

# Data augmentation
transform_train = T.Compose([
    T.Resize((256, 256)),
    T.RandomHorizontalFlip(0.5),  # flip probability; must lie in [0, 1]
    T.RandomRotation(10),
    T.Transpose(),
    T.Normalize(mean=[0, 0, 0],   # scale pixel values to [0, 1]
                std=[255, 255, 255]),
    T.Normalize(mean=[0.50950350, 0.54632660, 0.57409690],  # subtract mean, divide by std
                std=[0.26059777, 0.26041326, 0.29220656])   # output[c] = (input[c] - mean[c]) / std[c]
])
transform_eval = T.Compose([
    T.Resize((256, 256)),
    T.Transpose(),
    T.Normalize(mean=[0, 0, 0],   # scale pixel values to [0, 1]
                std=[255, 255, 255]),
    T.Normalize(mean=[0.50950350, 0.54632660, 0.57409690],  # subtract mean, divide by std
                std=[0.26059777, 0.26041326, 0.29220656])
])
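The two chained `Normalize` ops (divide by 255, then standardize) are algebraically a single standardization with scaled constants. A quick NumPy check using the first channel's statistics from the transforms above:

```python
import numpy as np

# (x / 255 - mean) / std  ==  (x - 255*mean) / (255*std)
x = np.array([0.0, 128.0, 255.0])
mean, std = 0.50950350, 0.26059777  # first-channel stats

two_step = (x / 255.0 - mean) / std
one_step = (x - 255.0 * mean) / (255.0 * std)
assert np.allclose(two_step, one_step)
```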
In [ ]
train_dataset = TowerDataset(mode='train', transforms=transform_train)
eval_dataset = TowerDataset(mode='eval', transforms=transform_eval)

# Asynchronous data loading
train_loader = paddle.io.DataLoader(train_dataset,
                                    places=paddle.CUDAPlace(0),
                                    batch_size=128,
                                    shuffle=True,
                                    # num_workers=2,
                                    # use_shared_memory=True
                                    )
eval_loader = paddle.io.DataLoader(eval_dataset,
                                   places=paddle.CUDAPlace(0),
                                   batch_size=128,
                                   # num_workers=2,
                                   # use_shared_memory=True
                                   )
In [ ]
print('Number of training batches: {}, validation batches: {}'.format(len(train_loader), len(eval_loader)))
Number of training batches: 42, validation batches: 11
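Note that `len(DataLoader)` counts batches, not individual samples: with `batch_size=128` and `drop_last` off, the batch count is the ceiling of the sample count over the batch size. A sketch of that relationship (the sample counts below are hypothetical, chosen only to be consistent with 42 and 11 batches):

```python
import math

# Number of batches a DataLoader yields when drop_last is False.
def num_batches(num_samples, batch_size=128):
    return math.ceil(num_samples / batch_size)

assert num_batches(5300) == 42  # hypothetical training-set size
assert num_batches(1300) == 11  # hypothetical validation-set size
```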

③ Model Selection and Development

3.1 Building the Network

In this case study we use the mobilenet_v2 network.

In [ ]
network = paddle.vision.models.mobilenet_v2(pretrained=True, num_classes=9)
model = paddle.Model(network)
model.summary((-1, 3, 256, 256))
       
2021-06-03 21:51:44,710 - INFO - unique_endpoints {''}
2021-06-03 21:51:44,711 - INFO - Downloading mobilenet_v2_x1.0.pdparams from https://paddle-hapi.bj.bcebos.com/models/mobilenet_v2_x1.0.pdparams
100%|██████████| 20795/20795 [00:00<00:00, 22406.24it/s]
2021-06-03 21:51:46,068 - INFO - File /home/aistudio/.cache/paddle/hapi/weights/mobilenet_v2_x1.0.pdparams md5 checking...
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/layers.py:1303: UserWarning: Skip loading for classifier.1.weight. classifier.1.weight receives a shape [1280, 1000], but the expected shape is [1280, 9].
  warnings.warn(("Skip loading for {}. ".format(key) + str(err)))
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/layers.py:1303: UserWarning: Skip loading for classifier.1.bias. classifier.1.bias receives a shape [1000], but the expected shape is [9].
  warnings.warn(("Skip loading for {}. ".format(key) + str(err)))
       
-------------------------------------------------------------------------------
   Layer (type)         Input Shape          Output Shape         Param #    
===============================================================================
     Conv2D-1        [[1, 3, 256, 256]]   [1, 32, 128, 128]         864      
   BatchNorm2D-1    [[1, 32, 128, 128]]   [1, 32, 128, 128]         128      
      ReLU6-1       [[1, 32, 128, 128]]   [1, 32, 128, 128]          0       
     Conv2D-2       [[1, 32, 128, 128]]   [1, 32, 128, 128]         288      
   BatchNorm2D-2    [[1, 32, 128, 128]]   [1, 32, 128, 128]         128      
      ReLU6-2       [[1, 32, 128, 128]]   [1, 32, 128, 128]          0       
     Conv2D-3       [[1, 32, 128, 128]]   [1, 16, 128, 128]         512      
   BatchNorm2D-3    [[1, 16, 128, 128]]   [1, 16, 128, 128]         64       
InvertedResidual-1  [[1, 32, 128, 128]]   [1, 16, 128, 128]          0       
     Conv2D-4       [[1, 16, 128, 128]]   [1, 96, 128, 128]        1,536     
   BatchNorm2D-4    [[1, 96, 128, 128]]   [1, 96, 128, 128]         384      
      ReLU6-3       [[1, 96, 128, 128]]   [1, 96, 128, 128]          0       
     Conv2D-5       [[1, 96, 128, 128]]    [1, 96, 64, 64]          864      
   BatchNorm2D-5     [[1, 96, 64, 64]]     [1, 96, 64, 64]          384      
      ReLU6-4        [[1, 96, 64, 64]]     [1, 96, 64, 64]           0       
     Conv2D-6        [[1, 96, 64, 64]]     [1, 24, 64, 64]         2,304     
   BatchNorm2D-6     [[1, 24, 64, 64]]     [1, 24, 64, 64]          96       
InvertedResidual-2  [[1, 16, 128, 128]]    [1, 24, 64, 64]           0       
     Conv2D-7        [[1, 24, 64, 64]]     [1, 144, 64, 64]        3,456     
   BatchNorm2D-7     [[1, 144, 64, 64]]    [1, 144, 64, 64]         576      
      ReLU6-5        [[1, 144, 64, 64]]    [1, 144, 64, 64]          0       
     Conv2D-8        [[1, 144, 64, 64]]    [1, 144, 64, 64]        1,296     
   BatchNorm2D-8     [[1, 144, 64, 64]]    [1, 144, 64, 64]         576      
      ReLU6-6        [[1, 144, 64, 64]]    [1, 144, 64, 64]          0       
     Conv2D-9        [[1, 144, 64, 64]]    [1, 24, 64, 64]         3,456     
   BatchNorm2D-9     [[1, 24, 64, 64]]     [1, 24, 64, 64]          96       
InvertedResidual-3   [[1, 24, 64, 64]]     [1, 24, 64, 64]           0       
     Conv2D-10       [[1, 24, 64, 64]]     [1, 144, 64, 64]        3,456     
  BatchNorm2D-10     [[1, 144, 64, 64]]    [1, 144, 64, 64]         576      
      ReLU6-7        [[1, 144, 64, 64]]    [1, 144, 64, 64]          0       
     Conv2D-11       [[1, 144, 64, 64]]    [1, 144, 32, 32]        1,296     
  BatchNorm2D-11     [[1, 144, 32, 32]]    [1, 144, 32, 32]         576      
      ReLU6-8        [[1, 144, 32, 32]]    [1, 144, 32, 32]          0       
     Conv2D-12       [[1, 144, 32, 32]]    [1, 32, 32, 32]         4,608     
  BatchNorm2D-12     [[1, 32, 32, 32]]     [1, 32, 32, 32]          128      
InvertedResidual-4   [[1, 24, 64, 64]]     [1, 32, 32, 32]           0       
     Conv2D-13       [[1, 32, 32, 32]]     [1, 192, 32, 32]        6,144     
  BatchNorm2D-13     [[1, 192, 32, 32]]    [1, 192, 32, 32]         768      
      ReLU6-9        [[1, 192, 32, 32]]    [1, 192, 32, 32]          0       
     Conv2D-14       [[1, 192, 32, 32]]    [1, 192, 32, 32]        1,728     
  BatchNorm2D-14     [[1, 192, 32, 32]]    [1, 192, 32, 32]         768      
     ReLU6-10        [[1, 192, 32, 32]]    [1, 192, 32, 32]          0       
     Conv2D-15       [[1, 192, 32, 32]]    [1, 32, 32, 32]         6,144     
  BatchNorm2D-15     [[1, 32, 32, 32]]     [1, 32, 32, 32]          128      
InvertedResidual-5   [[1, 32, 32, 32]]     [1, 32, 32, 32]           0       
     Conv2D-16       [[1, 32, 32, 32]]     [1, 192, 32, 32]        6,144     
  BatchNorm2D-16     [[1, 192, 32, 32]]    [1, 192, 32, 32]         768      
     ReLU6-11        [[1, 192, 32, 32]]    [1, 192, 32, 32]          0       
     Conv2D-17       [[1, 192, 32, 32]]    [1, 192, 32, 32]        1,728     
  BatchNorm2D-17     [[1, 192, 32, 32]]    [1, 192, 32, 32]         768      
     ReLU6-12        [[1, 192, 32, 32]]    [1, 192, 32, 32]          0       
     Conv2D-18       [[1, 192, 32, 32]]    [1, 32, 32, 32]         6,144     
  BatchNorm2D-18     [[1, 32, 32, 32]]     [1, 32, 32, 32]          128      
InvertedResidual-6   [[1, 32, 32, 32]]     [1, 32, 32, 32]           0       
     Conv2D-19       [[1, 32, 32, 32]]     [1, 192, 32, 32]        6,144     
  BatchNorm2D-19     [[1, 192, 32, 32]]    [1, 192, 32, 32]         768      
     ReLU6-13        [[1, 192, 32, 32]]    [1, 192, 32, 32]          0       
     Conv2D-20       [[1, 192, 32, 32]]    [1, 192, 16, 16]        1,728     
  BatchNorm2D-20     [[1, 192, 16, 16]]    [1, 192, 16, 16]         768      
     ReLU6-14        [[1, 192, 16, 16]]    [1, 192, 16, 16]          0       
     Conv2D-21       [[1, 192, 16, 16]]    [1, 64, 16, 16]        12,288     
  BatchNorm2D-21     [[1, 64, 16, 16]]     [1, 64, 16, 16]          256      
InvertedResidual-7   [[1, 32, 32, 32]]     [1, 64, 16, 16]           0       
     Conv2D-22       [[1, 64, 16, 16]]     [1, 384, 16, 16]       24,576     
  BatchNorm2D-22     [[1, 384, 16, 16]]    [1, 384, 16, 16]        1,536     
     ReLU6-15        [[1, 384, 16, 16]]    [1, 384, 16, 16]          0       
     Conv2D-23       [[1, 384, 16, 16]]    [1, 384, 16, 16]        3,456     
  BatchNorm2D-23     [[1, 384, 16, 16]]    [1, 384, 16, 16]        1,536     
     ReLU6-16        [[1, 384, 16, 16]]    [1, 384, 16, 16]          0       
     Conv2D-24       [[1, 384, 16, 16]]    [1, 64, 16, 16]        24,576     
  BatchNorm2D-24     [[1, 64, 16, 16]]     [1, 64, 16, 16]          256      
InvertedResidual-8   [[1, 64, 16, 16]]     [1, 64, 16, 16]           0       
     Conv2D-25       [[1, 64, 16, 16]]     [1, 384, 16, 16]       24,576     
  BatchNorm2D-25     [[1, 384, 16, 16]]    [1, 384, 16, 16]        1,536     
     ReLU6-17        [[1, 384, 16, 16]]    [1, 384, 16, 16]          0       
     Conv2D-26       [[1, 384, 16, 16]]    [1, 384, 16, 16]        3,456     
  BatchNorm2D-26     [[1, 384, 16, 16]]    [1, 384, 16, 16]        1,536     
     ReLU6-18        [[1, 384, 16, 16]]    [1, 384, 16, 16]          0       
     Conv2D-27       [[1, 384, 16, 16]]    [1, 64, 16, 16]        24,576     
  BatchNorm2D-27     [[1, 64, 16, 16]]     [1, 64, 16, 16]          256      
InvertedResidual-9   [[1, 64, 16, 16]]     [1, 64, 16, 16]           0       
     Conv2D-28       [[1, 64, 16, 16]]     [1, 384, 16, 16]       24,576     
  BatchNorm2D-28     [[1, 384, 16, 16]]    [1, 384, 16, 16]        1,536     
     ReLU6-19        [[1, 384, 16, 16]]    [1, 384, 16, 16]          0       
     Conv2D-29       [[1, 384, 16, 16]]    [1, 384, 16, 16]        3,456     
  BatchNorm2D-29     [[1, 384, 16, 16]]    [1, 384, 16, 16]        1,536     
     ReLU6-20        [[1, 384, 16, 16]]    [1, 384, 16, 16]          0       
     Conv2D-30       [[1, 384, 16, 16]]    [1, 64, 16, 16]        24,576     
  BatchNorm2D-30     [[1, 64, 16, 16]]     [1, 64, 16, 16]          256      
InvertedResidual-10  [[1, 64, 16, 16]]     [1, 64, 16, 16]           0       
     Conv2D-31       [[1, 64, 16, 16]]     [1, 384, 16, 16]       24,576     
  BatchNorm2D-31     [[1, 384, 16, 16]]    [1, 384, 16, 16]        1,536     
     ReLU6-21        [[1, 384, 16, 16]]    [1, 384, 16, 16]          0       
     Conv2D-32       [[1, 384, 16, 16]]    [1, 384, 16, 16]        3,456     
  BatchNorm2D-32     [[1, 384, 16, 16]]    [1, 384, 16, 16]        1,536     
     ReLU6-22        [[1, 384, 16, 16]]    [1, 384, 16, 16]          0       
     Conv2D-33       [[1, 384, 16, 16]]    [1, 96, 16, 16]        36,864     
  BatchNorm2D-33     [[1, 96, 16, 16]]     [1, 96, 16, 16]          384      
InvertedResidual-11  [[1, 64, 16, 16]]     [1, 96, 16, 16]           0       
     Conv2D-34       [[1, 96, 16, 16]]     [1, 576, 16, 16]       55,296     
  BatchNorm2D-34     [[1, 576, 16, 16]]    [1, 576, 16, 16]        2,304     
     ReLU6-23        [[1, 576, 16, 16]]    [1, 576, 16, 16]          0       
     Conv2D-35       [[1, 576, 16, 16]]    [1, 576, 16, 16]        5,184     
  BatchNorm2D-35     [[1, 576, 16, 16]]    [1, 576, 16, 16]        2,304     
     ReLU6-24        [[1, 576, 16, 16]]    [1, 576, 16, 16]          0       
     Conv2D-36       [[1, 576, 16, 16]]    [1, 96, 16, 16]        55,296     
  BatchNorm2D-36     [[1, 96, 16, 16]]     [1, 96, 16, 16]          384      
InvertedResidual-12  [[1, 96, 16, 16]]     [1, 96, 16, 16]           0       
     Conv2D-37       [[1, 96, 16, 16]]     [1, 576, 16, 16]       55,296     
  BatchNorm2D-37     [[1, 576, 16, 16]]    [1, 576, 16, 16]        2,304     
     ReLU6-25        [[1, 576, 16, 16]]    [1, 576, 16, 16]          0       
     Conv2D-38       [[1, 576, 16, 16]]    [1, 576, 16, 16]        5,184     
  BatchNorm2D-38     [[1, 576, 16, 16]]    [1, 576, 16, 16]        2,304     
     ReLU6-26        [[1, 576, 16, 16]]    [1, 576, 16, 16]          0       
     Conv2D-39       [[1, 576, 16, 16]]    [1, 96, 16, 16]        55,296     
  BatchNorm2D-39     [[1, 96, 16, 16]]     [1, 96, 16, 16]          384      
InvertedResidual-13  [[1, 96, 16, 16]]     [1, 96, 16, 16]           0       
     Conv2D-40       [[1, 96, 16, 16]]     [1, 576, 16, 16]       55,296     
  BatchNorm2D-40     [[1, 576, 16, 16]]    [1, 576, 16, 16]        2,304     
     ReLU6-27        [[1, 576, 16, 16]]    [1, 576, 16, 16]          0       
     Conv2D-41       [[1, 576, 16, 16]]     [1, 576, 8, 8]         5,184     
  BatchNorm2D-41      [[1, 576, 8, 8]]      [1, 576, 8, 8]         2,304     
     ReLU6-28         [[1, 576, 8, 8]]      [1, 576, 8, 8]           0       
     Conv2D-42        [[1, 576, 8, 8]]      [1, 160, 8, 8]        92,160     
  BatchNorm2D-42      [[1, 160, 8, 8]]      [1, 160, 8, 8]          640      
InvertedResidual-14  [[1, 96, 16, 16]]      [1, 160, 8, 8]           0       
     Conv2D-43        [[1, 160, 8, 8]]      [1, 960, 8, 8]        153,600    
  BatchNorm2D-43      [[1, 960, 8, 8]]      [1, 960, 8, 8]         3,840     
     ReLU6-29         [[1, 960, 8, 8]]      [1, 960, 8, 8]           0       
     Conv2D-44        [[1, 960, 8, 8]]      [1, 960, 8, 8]         8,640     
  BatchNorm2D-44      [[1, 960, 8, 8]]      [1, 960, 8, 8]         3,840     
     ReLU6-30         [[1, 960, 8, 8]]      [1, 960, 8, 8]           0       
     Conv2D-45        [[1, 960, 8, 8]]      [1, 160, 8, 8]        153,600    
  BatchNorm2D-45      [[1, 160, 8, 8]]      [1, 160, 8, 8]          640      
InvertedResidual-15   [[1, 160, 8, 8]]      [1, 160, 8, 8]           0       
     Conv2D-46        [[1, 160, 8, 8]]      [1, 960, 8, 8]        153,600    
  BatchNorm2D-46      [[1, 960, 8, 8]]      [1, 960, 8, 8]         3,840     
     ReLU6-31         [[1, 960, 8, 8]]      [1, 960, 8, 8]           0       
     Conv2D-47        [[1, 960, 8, 8]]      [1, 960, 8, 8]         8,640     
  BatchNorm2D-47      [[1, 960, 8, 8]]      [1, 960, 8, 8]         3,840     
     ReLU6-32         [[1, 960, 8, 8]]      [1, 960, 8, 8]           0       
     Conv2D-48        [[1, 960, 8, 8]]      [1, 160, 8, 8]        153,600    
  BatchNorm2D-48      [[1, 160, 8, 8]]      [1, 160, 8, 8]          640      
InvertedResidual-16   [[1, 160, 8, 8]]      [1, 160, 8, 8]           0       
     Conv2D-49        [[1, 160, 8, 8]]      [1, 960, 8, 8]        153,600    
  BatchNorm2D-49      [[1, 960, 8, 8]]      [1, 960, 8, 8]         3,840     
     ReLU6-33         [[1, 960, 8, 8]]      [1, 960, 8, 8]           0       
     Conv2D-50        [[1, 960, 8, 8]]      [1, 960, 8, 8]         8,640     
  BatchNorm2D-50      [[1, 960, 8, 8]]      [1, 960, 8, 8]         3,840     
     ReLU6-34         [[1, 960, 8, 8]]      [1, 960, 8, 8]           0       
     Conv2D-51        [[1, 960, 8, 8]]      [1, 320, 8, 8]        307,200    
  BatchNorm2D-51      [[1, 320, 8, 8]]      [1, 320, 8, 8]         1,280     
InvertedResidual-17   [[1, 160, 8, 8]]      [1, 320, 8, 8]           0       
     Conv2D-52        [[1, 320, 8, 8]]     [1, 1280, 8, 8]        409,600    
  BatchNorm2D-52     [[1, 1280, 8, 8]]     [1, 1280, 8, 8]         5,120     
     ReLU6-35        [[1, 1280, 8, 8]]     [1, 1280, 8, 8]           0       
AdaptiveAvgPool2D-1  [[1, 1280, 8, 8]]     [1, 1280, 1, 1]           0       
     Dropout-1          [[1, 1280]]           [1, 1280]              0       
     Linear-1           [[1, 1280]]             [1, 9]            11,529     
===============================================================================
Total params: 2,269,513
Trainable params: 2,201,289
Non-trainable params: 68,224
-------------------------------------------------------------------------------
Input size (MB): 0.75
Forward/backward pass size (MB): 199.66
Params size (MB): 8.66
Estimated Total Size (MB): 209.07
-------------------------------------------------------------------------------
       
{'total_params': 2269513, 'trainable_params': 2201289}
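The parameter counts in the summary can be verified by hand: a bias-free Conv2D has in_channels × out_channels × k × k weights, and BatchNorm2D stores four per-channel vectors (scale, shift, running mean, running variance). Checking a few rows of the table above:

```python
# Parameter count for a bias-free Conv2D layer.
def conv_params(in_ch, out_ch, k):
    return in_ch * out_ch * k * k

# BatchNorm2D: weight, bias, running mean, running variance per channel.
def bn_params(ch):
    return 4 * ch

assert conv_params(3, 32, 3) == 864   # Conv2D-1 in the summary
assert conv_params(32, 16, 1) == 512  # Conv2D-3 (1x1 pointwise)
assert bn_params(32) == 128           # BatchNorm2D-1
```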
               

④ Model Training and Optimizer Selection

In [ ]
# Optimizer selection and callbacks
class SaveBestModel(paddle.callbacks.Callback):
    def __init__(self, target=0.5, path='work/best_model', verbose=0):
        self.target = target
        self.epoch = None
        self.path = path

    def on_epoch_end(self, epoch, logs=None):
        self.epoch = epoch

    def on_eval_end(self, logs=None):
        if logs.get('acc') > self.target:
            self.target = logs.get('acc')
            self.model.save(self.path)
            print('best acc is {} at epoch {}'.format(self.target, self.epoch))

callback_visualdl = paddle.callbacks.VisualDL(log_dir='work/mushroom')
callback_savebestmodel = SaveBestModel(target=0.5, path='work/best_model')
callbacks = [callback_visualdl, callback_savebestmodel]

base_lr = config_parameters['lr']
epochs = config_parameters['epochs']

def make_optimizer(parameters=None):
    momentum = 0.9
    learning_rate = paddle.optimizer.lr.CosineAnnealingDecay(learning_rate=base_lr, T_max=epochs, verbose=False)
    weight_decay = paddle.regularizer.L2Decay(0.0001)
    optimizer = paddle.optimizer.Momentum(
        learning_rate=learning_rate,
        momentum=momentum,
        weight_decay=weight_decay,
        parameters=parameters)
    return optimizer

optimizer = make_optimizer(model.parameters())
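CosineAnnealingDecay lowers the learning rate from `base_lr` toward 0 along a half cosine over `T_max` epochs. A quick check of the schedule's shape, with the minimum learning rate at its default of 0:

```python
import math

# lr(t) = base_lr * (1 + cos(pi * t / T_max)) / 2, with eta_min = 0
def cosine_annealing(base_lr, t, t_max):
    return base_lr * (1 + math.cos(math.pi * t / t_max)) / 2

assert abs(cosine_annealing(0.01, 0, 100) - 0.01) < 1e-12    # starts at base_lr
assert abs(cosine_annealing(0.01, 50, 100) - 0.005) < 1e-12  # halfway down at T_max/2
assert cosine_annealing(0.01, 100, 100) < 1e-12              # decays to ~0
```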
In [ ]
model.prepare(optimizer,
              paddle.nn.CrossEntropyLoss(),
              paddle.metric.Accuracy())
In [14]
model.fit(train_loader,
          eval_loader,
          epochs=100,
          batch_size=128,
          callbacks=callbacks,
          verbose=1)   # log display format
   

⑤ Model Evaluation and Testing

       
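With the high-level API, evaluation reports the top-1 accuracy configured in `model.prepare` via `paddle.metric.Accuracy`. A framework-free sketch of that metric:

```python
import numpy as np

# Top-1 accuracy: fraction of samples whose argmax logit matches the label.
def top1_accuracy(logits, labels):
    preds = np.argmax(logits, axis=1)
    return float(np.mean(preds == labels))

logits = np.array([[0.1, 0.9], [0.8, 0.2], [0.3, 0.7]])
labels = np.array([1, 0, 0])
assert abs(top1_accuracy(logits, labels) - 2 / 3) < 1e-9
```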

⑥ Deployment

We save the trained model as a static graph, which produces two files, mushroom.pdmodel and mushroom.pdiparams, and prepare a label_list.txt file.

In [15]
model.save('mushroom',training=False)
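The label_list.txt file lists one class name per line. A sketch that derives it from the training folder's subdirectory names, assuming labels are ordered alphabetically as `DatasetFolder` orders them (the class names below are hypothetical):

```python
import os
import tempfile

# Write one class name per line, sorted alphabetically.
def write_label_list(train_dir, out_path):
    labels = sorted(d for d in os.listdir(train_dir)
                    if os.path.isdir(os.path.join(train_dir, d)))
    with open(out_path, 'w') as f:
        f.write('\n'.join(labels))
    return labels

tmp = tempfile.mkdtemp()
for name in ['Agaricus', 'Boletus']:  # hypothetical class folders
    os.makedirs(os.path.join(tmp, name))
out = os.path.join(tmp, 'label_list.txt')
assert write_label_list(tmp, out) == ['Agaricus', 'Boletus']
assert open(out).read().splitlines() == ['Agaricus', 'Boletus']
```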
   

6.1 Uploading the Original Model

Upload and verify the files as shown in the screenshots to generate a demo.

       

6.2 Generating the On-Device Model

       

       

6.3 Downloading and Trying the Demo

         

