
Solving MNIST handwritten digit recognition with TensorFlow: softmax regression, a multi-layer convolutional network, and a feedforward neural network

Posted on 2021-07-11 12:45:38

I. Solving handwritten digit recognition with a TensorFlow softmax regression model

The detailed steps are as follows:

1. Load the MNIST data: input_data.read_data_sets('MNIST_data', one_hot=True)

2. Start a TensorFlow InteractiveSession: sess = tf.InteractiveSession()

3. Build the softmax regression model: placeholders (tf.placeholder), variables (tf.Variable), class prediction and loss (tf.nn.softmax, tf.reduce_sum), training (tf.train.GradientDescentOptimizer), and evaluation; see the sketch below.

Result: about 91% accuracy on the test set.
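A minimal sketch of these steps (TensorFlow 1.x API, modeled on the official beginner tutorial; it uses the explicit tf.reduce_sum cross-entropy named in step 3, which is simpler but numerically less stable than the tf.nn.softmax_cross_entropy_with_logits used in DeepMnist.py below):

    from tensorflow.examples.tutorials.mnist import input_data
    import tensorflow as tf

    mnist = input_data.read_data_sets('MNIST_data', one_hot=True)
    sess = tf.InteractiveSession()

    # Placeholders for flattened 28x28 images and one-hot labels.
    x = tf.placeholder(tf.float32, [None, 784])
    y_ = tf.placeholder(tf.float32, [None, 10])

    # Single linear layer followed by softmax.
    W = tf.Variable(tf.zeros([784, 10]))
    b = tf.Variable(tf.zeros([10]))
    y = tf.nn.softmax(tf.matmul(x, W) + b)

    # Explicit cross-entropy via tf.reduce_sum, as in step 3.
    cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), axis=1))
    train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

    sess.run(tf.global_variables_initializer())
    for _ in range(1000):
        batch_xs, batch_ys = mnist.train.next_batch(100)
        sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})

    # Fraction of test images whose argmax prediction matches the label;
    # expect roughly 91%.
    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    print(sess.run(accuracy, feed_dict={x: mnist.test.images,
                                        y_: mnist.test.labels}))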

     

II. Building a multi-layer convolutional network

The detailed steps are as follows:

1. Weight-initialization helper functions
2. Convolution and pooling helper functions
3. First convolutional layer
4. Second convolutional layer
5. Densely connected layer (each 2x2 max-pool halves the spatial size, 28x28 -> 14x14 -> 7x7, so this layer takes 7*7*64 = 3136 flattened inputs)
6. Output layer
7. Training and evaluating the model

Code (DeepMnist.py):

from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)

import tensorflow as tf
sess = tf.InteractiveSession()

# Placeholders for the flattened 28x28 images and the one-hot labels.
x = tf.placeholder(tf.float32, shape=[None, 784])
y_ = tf.placeholder(tf.float32, shape=[None, 10])

# --- Part 1: softmax regression baseline (about 91% test accuracy) ---
w = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))

sess.run(tf.global_variables_initializer())

y = tf.matmul(x, w) + b

cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))

train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

for _ in range(1000):
    batch = mnist.train.next_batch(100)
    train_step.run(feed_dict={x: batch[0], y_: batch[1]})

correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels}))

# --- Part 2: multi-layer convolutional network ---

# 1. Weight initialization: small noise for weights, slightly positive bias
#    to avoid dead ReLU units.
def weight_variable(shape):
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

def bias_variable(shape):
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)

# 2. Convolution with stride 1 and zero padding; 2x2 max pooling.
def conv2d(x, w):
    return tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding='SAME')

def max_pool_2x2(x):
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
                          strides=[1, 2, 2, 1], padding='SAME')

# 3. First convolutional layer: 5x5 patches, 1 input channel, 32 features.
w_conv1 = weight_variable([5, 5, 1, 32])
b_conv1 = bias_variable([32])

x_image = tf.reshape(x, [-1, 28, 28, 1])  # back to 2D images

h_conv1 = tf.nn.relu(conv2d(x_image, w_conv1) + b_conv1)
h_pool1 = max_pool_2x2(h_conv1)  # 28x28 -> 14x14

# 4. Second convolutional layer: 5x5 patches, 32 -> 64 features.
w_conv2 = weight_variable([5, 5, 32, 64])
b_conv2 = bias_variable([64])

h_conv2 = tf.nn.relu(conv2d(h_pool1, w_conv2) + b_conv2)
h_pool2 = max_pool_2x2(h_conv2)  # 14x14 -> 7x7

# 5. Densely connected layer: 7*7*64 = 3136 flattened features -> 1024 units.
w_fc1 = weight_variable([7 * 7 * 64, 1024])
b_fc1 = bias_variable([1024])

h_pool2_flat = tf.reshape(h_pool2, [-1, 7 * 7 * 64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, w_fc1) + b_fc1)

# Dropout: keep_prob is fed as 0.5 during training and 1.0 during evaluation.
keep_prob = tf.placeholder(tf.float32)
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)

# 6. Output layer: 1024 units -> 10 logits.
W_fc2 = weight_variable([1024, 10])
b_fc2 = bias_variable([10])

y_conv = tf.matmul(h_fc1_drop, W_fc2) + b_fc2

# 7. Train with Adam and evaluate.
cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y_conv))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
sess.run(tf.global_variables_initializer())
for i in range(1000):
    batch = mnist.train.next_batch(50)
    if i % 100 == 0:
        train_accuracy = accuracy.eval(feed_dict={
            x: batch[0], y_: batch[1], keep_prob: 1.0})
        print("step %d, training accuracy %g" % (i, train_accuracy))
    train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})

print("test accuracy %g" % accuracy.eval(feed_dict={
    x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))

     

Output:

After 1,000 training steps the test accuracy is 96.34%; after 20,000 steps it exceeds 99%. To reproduce the higher figure, change range(1000) to range(20000) in the final training loop, and expect a much longer run, especially on CPU.

     

III. A simple feedforward neural network

The code consists of two files from the official TensorFlow tutorial: mnist.py, which builds the graph, and fully_connected_feed.py, which trains and evaluates it with a feed dictionary. Note that, unlike DeepMnist.py, this model feeds integer class labels and uses tf.nn.sparse_softmax_cross_entropy_with_logits rather than one-hot labels.

mnist.py:

# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================

"""Builds the MNIST network.
Implements the inference/loss/training pattern for model building.
1. inference() - Builds the model as far as is required for running the network
forward to make predictions.
2. loss() - Adds to the inference model the layers required to generate loss.
3. training() - Adds to the loss model the Ops required to generate and
apply gradients.
This file is used by the various "fully_connected_*.py" files and not meant to
be run.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import math

import tensorflow as tf

# The MNIST dataset has 10 classes, representing the digits 0 through 9.
NUM_CLASSES = 10

# The MNIST images are always 28x28 pixels.
IMAGE_SIZE = 28
IMAGE_PIXELS = IMAGE_SIZE * IMAGE_SIZE


def inference(images, hidden1_units, hidden2_units):
  """Build the MNIST model up to where it may be used for inference.
  Args:
    images: Images placeholder, from inputs().
    hidden1_units: Size of the first hidden layer.
    hidden2_units: Size of the second hidden layer.
  Returns:
    softmax_linear: Output tensor with the computed logits.
  """
  # Hidden 1
  with tf.name_scope('hidden1'):
    weights = tf.Variable(
        tf.truncated_normal([IMAGE_PIXELS, hidden1_units],
                            stddev=1.0 / math.sqrt(float(IMAGE_PIXELS))),
        name='weights')
    biases = tf.Variable(tf.zeros([hidden1_units]),
                         name='biases')
    hidden1 = tf.nn.relu(tf.matmul(images, weights) + biases)
  # Hidden 2
  with tf.name_scope('hidden2'):
    weights = tf.Variable(
        tf.truncated_normal([hidden1_units, hidden2_units],
                            stddev=1.0 / math.sqrt(float(hidden1_units))),
        name='weights')
    biases = tf.Variable(tf.zeros([hidden2_units]),
                         name='biases')
    hidden2 = tf.nn.relu(tf.matmul(hidden1, weights) + biases)
  # Linear
  with tf.name_scope('softmax_linear'):
    weights = tf.Variable(
        tf.truncated_normal([hidden2_units, NUM_CLASSES],
                            stddev=1.0 / math.sqrt(float(hidden2_units))),
        name='weights')
    biases = tf.Variable(tf.zeros([NUM_CLASSES]),
                         name='biases')
    logits = tf.matmul(hidden2, weights) + biases
  return logits


def loss(logits, labels):
  """Calculates the loss from the logits and the labels.
  Args:
    logits: Logits tensor, float - [batch_size, NUM_CLASSES].
    labels: Labels tensor, int32 - [batch_size].
  Returns:
    loss: Loss tensor of type float.
  """
  labels = tf.to_int64(labels)
  cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(
      labels=labels, logits=logits, name='xentropy')
  return tf.reduce_mean(cross_entropy, name='xentropy_mean')


def training(loss, learning_rate):
  """Sets up the training Ops.
  Creates a summarizer to track the loss over time in TensorBoard.
  Creates an optimizer and applies the gradients to all trainable variables.
  The Op returned by this function is what must be passed to the
  `sess.run()` call to cause the model to train.
  Args:
    loss: Loss tensor, from loss().
    learning_rate: The learning rate to use for gradient descent.
  Returns:
    train_op: The Op for training.
  """
  # Add a scalar summary for the snapshot loss.
  tf.summary.scalar('loss', loss)
  # Create the gradient descent optimizer with the given learning rate.
  optimizer = tf.train.GradientDescentOptimizer(learning_rate)
  # Create a variable to track the global step.
  global_step = tf.Variable(0, name='global_step', trainable=False)
  # Use the optimizer to apply the gradients that minimize the loss
  # (and also increment the global step counter) as a single training step.
  train_op = optimizer.minimize(loss, global_step=global_step)
  return train_op


def evaluation(logits, labels):
  """Evaluate the quality of the logits at predicting the label.
  Args:
    logits: Logits tensor, float - [batch_size, NUM_CLASSES].
    labels: Labels tensor, int32 - [batch_size], with values in the
      range [0, NUM_CLASSES).
  Returns:
    A scalar int32 tensor with the number of examples (out of batch_size)
    that were predicted correctly.
  """
  # For a classifier model, we can use the in_top_k Op.
  # It returns a bool tensor with shape [batch_size] that is true for
  # the examples where the label is in the top k (here k=1)
  # of all logits for that example.
  correct = tf.nn.in_top_k(logits, labels, 1)
  # Return the number of true entries.
  return tf.reduce_sum(tf.cast(correct, tf.int32))
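Before the full training script, here is how the four helpers compose; this is a distilled sketch of run_training() in fully_connected_feed.py below (the hidden-layer sizes 128 and 32 and the learning rate 0.01 are the script's flag defaults; placeholders and session setup are omitted):

    # Build the graph once, then repeatedly sess.run(train_op).
    logits = mnist.inference(images_placeholder, 128, 32)        # forward pass
    loss_op = mnist.loss(logits, labels_placeholder)             # mean cross-entropy
    train_op = mnist.training(loss_op, learning_rate=0.01)       # one SGD step per run
    eval_correct = mnist.evaluation(logits, labels_placeholder)  # count of correct predictions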
fully_connected_feed.py:
# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================

"""Trains and Evaluates the MNIST network using a feed dictionary."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

# pylint: disable=missing-docstring
import argparse
import os.path
import sys
import time

from six.moves import xrange  # pylint: disable=redefined-builtin
import tensorflow as tf

from tensorflow.examples.tutorials.mnist import input_data
from tensorflow.examples.tutorials.mnist import mnist

# Basic model parameters as external flags.
FLAGS = None


def placeholder_inputs(batch_size):
  """Generate placeholder variables to represent the input tensors.
  These placeholders are used as inputs by the rest of the model building
  code and will be fed from the downloaded data in the .run() loop, below.
  Args:
    batch_size: The batch size will be baked into both placeholders.
  Returns:
    images_placeholder: Images placeholder.
    labels_placeholder: Labels placeholder.
  """
  # Note that the shapes of the placeholders match the shapes of the full
  # image and label tensors, except the first dimension is now batch_size
  # rather than the full size of the train or test data sets.
  images_placeholder = tf.placeholder(tf.float32, shape=(batch_size,
                                                         mnist.IMAGE_PIXELS))
  labels_placeholder = tf.placeholder(tf.int32, shape=(batch_size))
  return images_placeholder, labels_placeholder


def fill_feed_dict(data_set, images_pl, labels_pl):
  """Fills the feed_dict for training the given step.
  A feed_dict takes the form of:
  feed_dict = {
      <placeholder>: <tensor of values to be passed for placeholder>,
      ....
  }
  Args:
    data_set: The set of images and labels, from input_data.read_data_sets()
    images_pl: The images placeholder, from placeholder_inputs().
    labels_pl: The labels placeholder, from placeholder_inputs().
  Returns:
    feed_dict: The feed dictionary mapping from placeholders to values.
  """
  # Create the feed_dict for the placeholders filled with the next
  # `batch size` examples.
  images_feed, labels_feed = data_set.next_batch(FLAGS.batch_size,
                                                 FLAGS.fake_data)
  feed_dict = {
      images_pl: images_feed,
      labels_pl: labels_feed,
  }
  return feed_dict


def do_eval(sess,
            eval_correct,
            images_placeholder,
            labels_placeholder,
            data_set):
  """Runs one evaluation against the full epoch of data.
  Args:
    sess: The session in which the model has been trained.
    eval_correct: The Tensor that returns the number of correct predictions.
    images_placeholder: The images placeholder.
    labels_placeholder: The labels placeholder.
    data_set: The set of images and labels to evaluate, from
      input_data.read_data_sets().
  """
  # And run one epoch of eval.
  true_count = 0  # Counts the number of correct predictions.
  steps_per_epoch = data_set.num_examples // FLAGS.batch_size
  num_examples = steps_per_epoch * FLAGS.batch_size
  for step in xrange(steps_per_epoch):
    feed_dict = fill_feed_dict(data_set,
                               images_placeholder,
                               labels_placeholder)
    true_count += sess.run(eval_correct, feed_dict=feed_dict)
  precision = float(true_count) / num_examples
  print('  Num examples: %d  Num correct: %d  Precision @ 1: %0.04f' %
        (num_examples, true_count, precision))


def run_training():
  """Train MNIST for a number of steps."""
  # Get the sets of images and labels for training, validation, and
  # test on MNIST.
  data_sets = input_data.read_data_sets(FLAGS.input_data_dir, FLAGS.fake_data)

  # Tell TensorFlow that the model will be built into the default Graph.
  with tf.Graph().as_default():
    # Generate placeholders for the images and labels.
    images_placeholder, labels_placeholder = placeholder_inputs(
        FLAGS.batch_size)

    # Build a Graph that computes predictions from the inference model.
    logits = mnist.inference(images_placeholder,
                             FLAGS.hidden1,
                             FLAGS.hidden2)

    # Add to the Graph the Ops for loss calculation.
    loss = mnist.loss(logits, labels_placeholder)

    # Add to the Graph the Ops that calculate and apply gradients.
    train_op = mnist.training(loss, FLAGS.learning_rate)

    # Add the Op to compare the logits to the labels during evaluation.
    eval_correct = mnist.evaluation(logits, labels_placeholder)

    # Build the summary Tensor based on the TF collection of Summaries.
    summary = tf.summary.merge_all()

    # Add the variable initializer Op.
    init = tf.global_variables_initializer()

    # Create a saver for writing training checkpoints.
    saver = tf.train.Saver()

    # Create a session for running Ops on the Graph.
    sess = tf.Session()

    # Instantiate a SummaryWriter to output summaries and the Graph.
    summary_writer = tf.summary.FileWriter(FLAGS.log_dir, sess.graph)

    # And then after everything is built:

    # Run the Op to initialize the variables.
    sess.run(init)

    # Start the training loop.
    for step in xrange(FLAGS.max_steps):
      start_time = time.time()

      # Fill a feed dictionary with the actual set of images and labels
      # for this particular training step.
      feed_dict = fill_feed_dict(data_sets.train,
                                 images_placeholder,
                                 labels_placeholder)

      # Run one step of the model.  The return values are the activations
      # from the `train_op` (which is discarded) and the `loss` Op.  To
      # inspect the values of your Ops or variables, you may include them
      # in the list passed to sess.run() and the value tensors will be
      # returned in the tuple from the call.
      _, loss_value = sess.run([train_op, loss],
                               feed_dict=feed_dict)

      duration = time.time() - start_time

      # Write the summaries and print an overview fairly often.
      if step % 100 == 0:
        # Print status to stdout.
        print('Step %d: loss = %.2f (%.3f sec)' % (step, loss_value, duration))
        # Update the events file.
        summary_str = sess.run(summary, feed_dict=feed_dict)
        summary_writer.add_summary(summary_str, step)
        summary_writer.flush()

      # Save a checkpoint and evaluate the model periodically.
      if (step + 1) % 1000 == 0 or (step + 1) == FLAGS.max_steps:
        checkpoint_file = os.path.join(FLAGS.log_dir, 'model.ckpt')
        saver.save(sess, checkpoint_file, global_step=step)
        # Evaluate against the training set.
        print('Training Data Eval:')
        do_eval(sess,
                eval_correct,
                images_placeholder,
                labels_placeholder,
                data_sets.train)
        # Evaluate against the validation set.
        print('Validation Data Eval:')
        do_eval(sess,
                eval_correct,
                images_placeholder,
                labels_placeholder,
                data_sets.validation)
        # Evaluate against the test set.
        print('Test Data Eval:')
        do_eval(sess,
                eval_correct,
                images_placeholder,
                labels_placeholder,
                data_sets.test)


def main(_):
  if tf.gfile.Exists(FLAGS.log_dir):
    tf.gfile.DeleteRecursively(FLAGS.log_dir)
  tf.gfile.MakeDirs(FLAGS.log_dir)
  run_training()


if __name__ == '__main__':
  parser = argparse.ArgumentParser()
  parser.add_argument(
      '--learning_rate',
      type=float,
      default=0.01,
      help='Initial learning rate.'
  )
  parser.add_argument(
      '--max_steps',
      type=int,
      default=2000,
      help='Number of steps to run trainer.'
  )
  parser.add_argument(
      '--hidden1',
      type=int,
      default=128,
      help='Number of units in hidden layer 1.'
  )
  parser.add_argument(
      '--hidden2',
      type=int,
      default=32,
      help='Number of units in hidden layer 2.'
  )
  parser.add_argument(
      '--batch_size',
      type=int,
      default=100,
      help='Batch size.  Must divide evenly into the dataset sizes.'
  )
  parser.add_argument(
      '--input_data_dir',
      type=str,
      default='/tmp/tensorflow/mnist/input_data',
      help='Directory to put the input data.'
  )
  parser.add_argument(
      '--log_dir',
      type=str,
      default='/tmp/tensorflow/mnist/logs/fully_connected_feed',
      help='Directory to put the log data.'
  )
  parser.add_argument(
      '--fake_data',
      default=False,
      help='If true, uses fake data for unit testing.',
      action='store_true'
  )

  FLAGS, unparsed = parser.parse_known_args()
  tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)

In fully_connected_feed.py, main() is the entry point: it resets the log directory and calls run_training(), which in turn calls the helper functions above (placeholder_inputs, fill_feed_dict, do_eval, and the mnist.py builders).
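The script can be run directly; for example, with the flags defined in the argparse block above (the paths shown are the defaults and can be overridden):

    python fully_connected_feed.py --max_steps=2000 --input_data_dir=/tmp/tensorflow/mnist/input_data --log_dir=/tmp/tensorflow/mnist/logs/fully_connected_feed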

Output: every 100 steps the script prints a status line of the form 'Step %d: loss = %.2f (%.3f sec)'; every 1,000 steps (and at the final step) it saves a checkpoint and runs do_eval, printing 'Num examples / Num correct / Precision @ 1' for the training, validation, and test sets.

Summary:

1. Consult the official TensorFlow documentation frequently when a function is unfamiliar.
2. The examples above are adapted from the official tutorials.
3. Prefer the GPU build of TensorFlow: 20,000 training steps of the convolutional network take a very long time on CPU, with CPU usage near 100% on this machine (Intel i7-4720k); a quick device check is sketched after this list.
4. When installing Python 3.5, remember to add it to the system PATH environment variable.
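For item 3, a quick way to confirm whether TensorFlow sees a GPU (a minimal sketch; device_lib is an internal TF 1.x module, not a stable public API):

    import tensorflow as tf
    from tensorflow.python.client import device_lib

    # Lists the devices TensorFlow can use; a GPU build with a visible GPU
    # will include a GPU entry alongside the CPU.
    print([d.name for d in device_lib.list_local_devices()])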

     

References:

1. https://www.tensorflow.org/get_started/mnist/pros
2. https://www.tensorflow.org/get_started/mnist/mechanics
