
Convolutional Neural Network Image Recognition in Python

Category: general programming · Posted 2020-08-28 by robot666 · Format: .pdf · Size: 3.59M

Description

[Summary]
Python code for image recognition with a convolutional neural network. The notebook below walks through sanity-checking the loss, gradient checking, overfitting a small sample set, training on CIFAR-10, and visualizing the learned filters.
After you have built a neural network, it is very important to check that the loss is being computed correctly. A simple check: if we are doing classification with a softmax classifier over C classes, then with random initial weights the loss should be roughly log(C); with a regularization term added, it should come out a bit larger.

In [12]:

    # Remember to pip-install Cython first, then run `python setup.py build_ext --inplace` in the nn directory
    model = init_two_layer_convnet()
    X = np.random.randn(100, 3, 32, 32)
    y = np.random.randint(10, size=100)

    # Sanity check: loss should be about log(10) = 2.3026
    loss, _ = two_layer_convnet(X, model, y, reg=0)
    print 'Sanity check loss (no regularization):', loss

    # Sanity check: loss should go up when you add regularization
    loss, _ = two_layer_convnet(X, model, y, reg=1)
    print 'Sanity check loss (with regularization):', loss

    Sanity check loss (no regularization): 2.30264679361
    Sanity check loss (with regularization): 2.34471165979

Gradient check

We also need to verify that the gradient implementation is correct. This step is crucial: otherwise backpropagation may run with wrong gradients, and the iterations will get nowhere.

In [13]:

    num_inputs = 2
    input_shape = (3, 16, 16)
    reg = 0.0
    num_classes = 10
    X = np.random.randn(num_inputs, *input_shape)
    y = np.random.randint(num_classes, size=num_inputs)

    model = init_two_layer_convnet(num_filters=3, filter_size=3, input_shape=input_shape)
    loss, grads = two_layer_convnet(X, model, y)
    for param_name in sorted(grads):
        f = lambda w: two_layer_convnet(X, model, y)[0]
        param_grad_num = eval_numerical_gradient(f, model[param_name], verbose=False)
        print '%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name]))

    W1 max relative error: 9.121148e-02
    W2 max relative error: 3.740758e-06
    b1 max relative error: 3.634198e-08
    b2 max relative error: 8.522420e-10
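Both checks above can be reproduced end to end with plain NumPy, independently of the notebook's `two_layer_convnet`. The helpers below (`softmax_loss`, `eval_numerical_gradient`, `rel_error`) are simplified stand-ins for the assignment's utilities, not its actual implementations:

```python
import numpy as np

def softmax_loss(scores, y):
    """Average cross-entropy loss for raw class scores (N, C) and labels y (N,)."""
    shifted = scores - scores.max(axis=1, keepdims=True)   # for numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(y)), y].mean()

def softmax_grad(scores, y):
    """Analytic gradient of softmax_loss w.r.t. the scores: (p - one_hot(y)) / N."""
    shifted = scores - scores.max(axis=1, keepdims=True)
    p = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)
    p[np.arange(len(y)), y] -= 1.0
    return p / len(y)

def eval_numerical_gradient(f, x, h=1e-5):
    """Centered-difference estimate of df/dx for a scalar-valued f."""
    grad = np.zeros_like(x)
    it = np.nditer(x, flags=['multi_index'], op_flags=['readwrite'])
    while not it.finished:
        ix = it.multi_index
        old = x[ix]
        x[ix] = old + h
        fp = f(x)
        x[ix] = old - h
        fm = f(x)
        x[ix] = old                      # restore the original value
        grad[ix] = (fp - fm) / (2 * h)
        it.iternext()
    return grad

def rel_error(a, b):
    return np.max(np.abs(a - b) / np.maximum(1e-8, np.abs(a) + np.abs(b)))

np.random.seed(0)
N, C = 20, 10
scores = 0.001 * np.random.randn(N, C)   # tiny random scores -> near-uniform softmax
y = np.random.randint(C, size=N)

loss = softmax_loss(scores, y)
print(loss, np.log(C))                   # loss should be close to log(10) = 2.3026

num = eval_numerical_gradient(lambda s: softmax_loss(s, y), scores)
ana = softmax_grad(scores, y)
print(rel_error(num, ana))               # should be very small (e.g. below 1e-6)
```

The same pattern scales to the full network: wrap the loss in a lambda over one parameter array and compare against the analytic gradient with a relative-error measure.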
First check that a small sample set can be completely fit

A handy trick for checking whether your implementation is correct is to see whether it can completely fit a small training set. Of course, because we overfit the training set, accuracy on the validation set will be noticeably lower.

In [14]:

    # Train the two-layer convnet on just 50 samples
    model = init_two_layer_convnet()
    trainer = ClassifierTrainer()
    best_model, loss_history, train_acc_history, val_acc_history = trainer.train(
        X_train[:50], y_train[:50], X_val, y_val, model, two_layer_convnet,
        reg=0.001, momentum=0.9, learning_rate=0.0001, batch_size=10, num_epochs=10,
        verbose=True)

    starting iteration 0
    Finished epoch 0 / 10: cost 2.301550, train: 0.160000, val 0.123000, lr 1.000000e-04
    Finished epoch 1 / 10: cost 2.266975, train: 0.280000, val 0.141000, lr 9.500000e-05
    Finished epoch 2 / 10: cost 1.886672, train: 0.280000, val 0.146000, lr 9.025000e-05
    Finished epoch 3 / 10: cost 1.621461, train: 0.500000, val 0.186000, lr 8.573750e-05
    Finished epoch 4 / 10: cost 1.891602, train: 0.540000, val 0.201000, lr 8.145062e-05
    Finished epoch 5 / 10: cost 1.860925, train: 0.440000, val 0.149000, lr 7.737809e-05
    Finished epoch 6 / 10: cost 1.036989, train: 0.740000, val 0.171000, lr 7.350919e-05
    Finished epoch 7 / 10: cost 0.975366, train: 0.760000, val 0.181000, lr 6.983373e-05
    Finished epoch 8 / 10: cost 0.790765, train: 0.780000, val 0.173000, lr 6.634204e-05
    Finished epoch 9 / 10: cost 0.294475, train: 0.860000, val 0.164000, lr 6.302494e-05
    Finished epoch 10 / 10: cost 0.249152, train: 0.860000, val 0.151000, lr 5.987369e-05
    finished optimization. best validation accuracy: 0.201000

Let's plot the loss from the run above, along with the accuracies on the training and validation sets; you should be able to see the overfitting.

In [15]:

    plt.subplot(2, 1, 1)
    plt.plot(loss_history)
    plt.xlabel('iteration')
    plt.ylabel('loss')

    plt.subplot(2, 1, 2)
    plt.plot(train_acc_history)
    plt.plot(val_acc_history)
    plt.legend(['train', 'val'], loc='upper left')
    plt.xlabel('epoch')
    plt.ylabel('accuracy')
    plt.show()

    [Figure: top, training loss per iteration falling from about 3.0 toward 0; bottom, train accuracy climbing to about 0.9 while val accuracy stays near 0.2]

Start training

Now that the whole convolutional network is complete, call your implementation and try it out on the CIFAR-10 dataset; you should be able to reach 50%+ accuracy.

In [16]:

    model = init_two_layer_convnet(filter_size=7)
    trainer = ClassifierTrainer()
    best_model, loss_history, train_acc_history, val_acc_history = trainer.train(
        X_train, y_train, X_val, y_val, model, two_layer_convnet,
        reg=0.001, momentum=0.9, learning_rate=0.0001, batch_size=50, num_epochs=1,
        acc_frequency=50, verbose=True)
    starting iteration 0
    Finished epoch 0 / 1: cost 2.309007, train: 0.092000, val 0.092000, lr 1.000000e-04
    Finished epoch 0 / 1: cost 1.835443, train: 0.282000, val 0.317000, lr 1.000000e-04
    Finished epoch 0 / 1: cost 1.859416, train: 0.374000, val 0.396000, lr 1.000000e-04
    Finished epoch 0 / 1: cost 1.682609, train: 0.436000, val 0.433000, lr 1.000000e-04
    Finished epoch 0 / 1: cost 1.790532, train: 0.393000, val 0.402000, lr 1.000000e-04
    Finished epoch 0 / 1: cost 1.556517, train: 0.423000, val 0.438000, lr 1.000000e-04
    Finished epoch 0 / 1: cost 1.876593, train: 0.391000, val 0.401000, lr 1.000000e-04
    Finished epoch 0 / 1: cost 1.659644, train: 0.467000, val 0.433000, lr 1.000000e-04
    Finished epoch 0 / 1: cost 1.821347, train: 0.426000, val 0.415000, lr 1.000000e-04
    Finished epoch 0 / 1: cost 2.003791, train: 0.468000, val 0., lr 1.000000e-04
    starting iteration 500
    Finished epoch 0 / 1: cost 1.912581, train: 0.471000, val 0.432000, lr 1.000000e-04
    Finished epoch 0 / 1: cost 1.837869, train: 0.483000, val 0.481000, lr 1.000000e-04
    Finished epoch 0 / 1: cost 1.762528, train: 0.461000, val 0.423000, lr 1.000000e-04
    Finished epoch 0 / 1: cost 1.468218, train: 0.475000, val 0.455000, lr 1.000000e-04
    Finished epoch 0 / 1: cost 1.990751, train: 0.497000, val 0.483000, lr 1.000000e-04
    Finished epoch 0 / 1: cost 1.200235, train: 0.478000, val 0.501000, lr 1.000000e-04
    Finished epoch 0 / 1: cost 1.054466, train: 0.480000, val 0.467000, lr 1.000000e-04
    Finished epoch 0 / 1: cost 1.536348, train: 0.432000, val 0.432000, lr 1.000000e-04
    Finished epoch 0 / 1: cost 1.641137, train: 0.511000, val 0.520000, lr 1.000000e-04
    Finished epoch 0 / 1: cost 1.804483, train: 0.460000, val 0.439000, lr 1.000000e-04
    Finished epoch 1 / 1: cost 1.819357, train: 0.485000, val 0.482000, lr 9.500000e-05
    finished optimization. best validation accuracy: 0.520000
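The trainer above uses SGD with classic momentum (momentum=0.9) and a learning rate that decays by 0.95 per epoch (visible in the lr column of the log). A standalone sketch of the momentum update next to AdaGrad, one of the per-parameter alternatives the assignment suggests trying (the function names and toy objective here are illustrative, not the trainer's API):

```python
import numpy as np

def sgd_momentum(w, dw, v, lr=1e-2, mu=0.9):
    """SGD with classic momentum: velocity accumulates a running descent direction."""
    v = mu * v - lr * dw
    return w + v, v

def adagrad(w, dw, cache, lr=1e-1, eps=1e-8):
    """AdaGrad: scale each parameter's step by its accumulated squared gradients."""
    cache = cache + dw ** 2
    return w - lr * dw / (np.sqrt(cache) + eps), cache

# Compare both on a toy objective f(w) = 0.5 * ||w||^2, whose gradient is w itself
w1 = np.array([1.0, -2.0]); v = np.zeros(2)
w2 = w1.copy(); cache = np.zeros(2)
for _ in range(300):
    w1, v = sgd_momentum(w1, w1, v)
    w2, cache = adagrad(w2, w2, cache)
print(np.abs(w1).max(), np.abs(w2).max())  # both should end close to the minimum at 0
```

Note the design difference: momentum uses one global learning rate, while AdaGrad's accumulated cache gives each parameter its own effective rate that only ever shrinks.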
Visualizing the weights

We can take the weights learned by the first convolutional layer and visualize them. If training went well, you should see patterns with a variety of colors, orientations, and edges.

In [17]:

    from nn.vis_utils import visualize_grid
    grid = visualize_grid(best_model['W1'].transpose(0, 2, 3, 1))
    plt.imshow(grid.astype('uint8'))

Out[17]:

    <matplotlib.image.AxesImage at 0x118b198d0>

    [Figure: grid of the learned first-layer 7x7 filters]

Experiment!

Experiment and try to get the best performance that you can on CIFAR-10 using a ConvNet. Here are some ideas to get you started.

Things you should try:
  • Filter size: above we used 7x7; this makes pretty pictures but smaller filters may be more efficient.
  • Number of filters: above we used 32 filters. Do more or fewer do better?
  • Network depth: the network above has two layers of trainable parameters. Can you do better with a deeper network? You can implement alternative architectures in the file nn/classifiers/convnet.py. Some good architectures to try include:
      [conv-relu-pool]xN - conv - relu - [affine]xM - [softmax or SVM]
      [conv-relu-pool]xN - [affine]xM - [softmax or SVM]
      [conv-relu-conv-relu-pool]xN - [affine]xM - [softmax or SVM]

Tips for training

For each network architecture that you try, you should tune the learning rate and regularization strength. When doing this there are a couple of important things to keep in mind:
  • If the parameters are working well, you should see improvement within a few hundred iterations.
  • Remember the coarse-to-fine approach for hyperparameter tuning: start by testing a large range of hyperparameters for just a few training iterations to find the combinations of parameters that are working at all.
  • Once you have found some sets of parameters that seem to work, search more finely around these parameters. You may need to train for more epochs.
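The coarse-to-fine advice can be made concrete. Learning rate and regularization strength both live on log scales, so it is natural to sample them log-uniformly; `mock_val_accuracy` below is a made-up stand-in for a short training run (in the real loop you would call the trainer for a few hundred iterations and read off validation accuracy):

```python
import numpy as np

np.random.seed(0)

def sample_log_uniform(lo, hi, n):
    """Sample n values log-uniformly in [lo, hi] -- the right scale for lr and reg."""
    return 10 ** np.random.uniform(np.log10(lo), np.log10(hi), size=n)

def mock_val_accuracy(lr, reg):
    """Fake objective standing in for a short training run; peaks near lr=1e-4, reg=1e-3."""
    return np.exp(-(np.log10(lr) + 4) ** 2 - 0.1 * (np.log10(reg) + 3) ** 2)

# Coarse stage: wide ranges, only a few iterations per setting (random search)
lrs = sample_log_uniform(1e-6, 1e-2, 20)
regs = sample_log_uniform(1e-5, 1e-1, 20)
coarse = [(mock_val_accuracy(lr, reg), lr, reg) for lr, reg in zip(lrs, regs)]
_, lr0, reg0 = max(coarse)

# Fine stage: search a narrow window around the coarse winner, training for longer
fine_lrs = sample_log_uniform(lr0 / 3, lr0 * 3, 20)
fine_regs = sample_log_uniform(reg0 / 3, reg0 * 3, 20)
fine = [(mock_val_accuracy(lr, reg), lr, reg) for lr, reg in zip(fine_lrs, fine_regs)]
best_acc, best_lr, best_reg = max(fine)
print(best_acc, best_lr, best_reg)
```

Random search over log-uniform samples tends to cover the important scales better than a uniform grid, which wastes trials on nearly identical small values.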
Going above and beyond

If you are feeling adventurous, there are many other features you can implement to try to improve your performance. You are not required to implement any of these; however, they would be good things to try for extra credit.
  • Alternative update steps: for the assignment we implemented SGD+momentum and RMSProp; you could try alternatives like AdaGrad or AdaDelta.
  • Other forms of regularization, such as L1 or Dropout.
  • Alternative activation functions, such as leaky ReLU or maxout.
  • Model ensembles.
  • Data augmentation.

What we expect

At the very least, you should be able to train a ConvNet that gets at least 65% accuracy on the validation set. This is just a lower bound; if you are careful it should be possible to get accuracies much higher than that! Extra credit points will be awarded for particularly high-scoring models or unique approaches.

You should use the space below to experiment and train your network. The final cell in this notebook should contain the training, validation, and test set accuracies for your final trained network. In this notebook you should also write an explanation of what you did, any additional features that you implemented, and any visualizations or graphs you make in the process of training and evaluating your network.

Have fun and happy training!

Model Experimentation

After manual tuning to get a rough range for the hyperparameters, I ran cross-validations at low iteration counts on the convnet. I based training on incrementally trained models, to get better initialization at each stage and to be able to adjust parameters at different stages of training; this allowed much quicker progress.

Format

Finally, I passed the 65% threshold with a model of the format

    [conv-relu-pool] - [conv-relu-pool] - [affine] - [relu] - [affine] - [svm]

I ran SVM and Softmax models, and with all settings (tuned on either) SVM outperformed Softmax in this format.
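The closing note compares SVM and Softmax heads. The two differ only in the loss applied to the final class scores; a minimal, self-contained sketch of both losses (simplified stand-ins, not the assignment's implementations):

```python
import numpy as np

def svm_loss(scores, y, delta=1.0):
    """Multiclass hinge (SVM) loss averaged over N examples; scores is (N, C)."""
    N = scores.shape[0]
    correct = scores[np.arange(N), y][:, None]
    margins = np.maximum(0, scores - correct + delta)
    margins[np.arange(N), y] = 0          # don't count the correct class itself
    return margins.sum() / N

def softmax_loss(scores, y):
    """Cross-entropy loss averaged over N examples."""
    shifted = scores - scores.max(axis=1, keepdims=True)   # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(scores.shape[0]), y].mean()

np.random.seed(0)
scores = np.random.randn(5, 10)
y = np.random.randint(10, size=5)
print(svm_loss(scores, y), softmax_loss(scores, y))

# Key behavioral difference: once every margin is satisfied the SVM loss is exactly
# zero, while softmax keeps pushing the correct-class probability toward 1.
easy = np.zeros((5, 10)); easy[np.arange(5), y] = 10.0   # correct class far ahead
print(svm_loss(easy, y))        # exactly 0.0
print(softmax_loss(easy, y))    # small but still positive
```

That difference in tail behavior is one plausible reason the two heads can rank differently after tuning, as reported above.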