{"id":349,"date":"2017-03-17T22:07:57","date_gmt":"2017-03-17T19:07:57","guid":{"rendered":"http:\/\/grechka.family\/dmitry\/blog\/?p=349"},"modified":"2023-07-10T21:58:32","modified_gmt":"2023-07-10T18:58:32","slug":"multilayer-perceptron-learning-capacity","status":"publish","type":"post","link":"https:\/\/grechka.family\/dmitry\/blog\/2017\/03\/multilayer-perceptron-learning-capacity\/","title":{"rendered":"Multilayer perceptron learning capacity"},"content":{"rendered":"<p>In this post I demonstrate how the capacity (i.e. the classifier variance) of a multilayer perceptron classifier changes with the number of hidden layer units.<br \/>\nAs training data I use the MNIST dataset published on <a href=\"https:\/\/www.kaggle.com\/c\/digit-recognizer\" target=\"_blank\" rel=\"noopener\">Kaggle as a training competition<\/a>.<br \/>\nThe network is a multiclass classifier with a single hidden layer and sigmoid activation.<\/p>\n<h2>Multi-class logarithmic loss<\/h2>\n<p><a href=\"https:\/\/grechka.family\/dmitry\/blog\/wp-content\/uploads\/2017\/03\/ce.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-353 size-full\" src=\"https:\/\/grechka.family\/dmitry\/blog\/wp-content\/uploads\/2017\/03\/ce.png\" alt=\"loss function achieved\" width=\"1344\" height=\"960\" srcset=\"https:\/\/grechka.family\/dmitry\/blog\/wp-content\/uploads\/2017\/03\/ce.png 1344w, https:\/\/grechka.family\/dmitry\/blog\/wp-content\/uploads\/2017\/03\/ce-300x214.png 300w, https:\/\/grechka.family\/dmitry\/blog\/wp-content\/uploads\/2017\/03\/ce-768x549.png 768w, https:\/\/grechka.family\/dmitry\/blog\/wp-content\/uploads\/2017\/03\/ce-1024x731.png 1024w\" sizes=\"auto, (max-width: 709px) 85vw, (max-width: 909px) 67vw, (max-width: 1362px) 62vw, 840px\" \/><\/a><\/p>\n<p>The plot shows the minimum value of the loss function achieved across different training runs.<br \/>\nEach dot in the figure corresponds to a separate training run that ended stuck in some minimum of the loss function.<br \/>\nYou can see that the training procedure can get stuck in a local minimum regardless of the number of hidden units.<br \/>\nThis means that one needs to carry out many training runs in order to estimate the real learning capacity of a network architecture.<br \/>\nThe lower boundary of the point cloud depicts the learning capacity. We can see that the learning capacity rises slowly as the number of hidden layer units increases.<\/p>\n<h2>Classification accuracy<\/h2>\n<p>Learning capacity is also reflected in the achieved accuracy.<\/p>\n<p><a href=\"https:\/\/grechka.family\/dmitry\/blog\/wp-content\/uploads\/2017\/03\/acc.png\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-352 size-full\" src=\"https:\/\/grechka.family\/dmitry\/blog\/wp-content\/uploads\/2017\/03\/acc.png\" alt=\"Accuracy achieved\" width=\"1344\" height=\"960\" srcset=\"https:\/\/grechka.family\/dmitry\/blog\/wp-content\/uploads\/2017\/03\/acc.png 1344w, https:\/\/grechka.family\/dmitry\/blog\/wp-content\/uploads\/2017\/03\/acc-300x214.png 300w, https:\/\/grechka.family\/dmitry\/blog\/wp-content\/uploads\/2017\/03\/acc-768x549.png 768w, https:\/\/grechka.family\/dmitry\/blog\/wp-content\/uploads\/2017\/03\/acc-1024x731.png 1024w\" sizes=\"auto, (max-width: 709px) 85vw, (max-width: 909px) 67vw, (max-width: 1362px) 62vw, 840px\" \/><\/a><\/p>\n<p>In this plot, as in the previous one, each dot is a separate finished training run.<br \/>\nHere, however, the Y-axis depicts classification accuracy.<br \/>\nIt is interesting that even a network with 10 hidden units can correctly classify more than half of the images.<br \/>\nIt is also surprising to me that the best models cluster together, forming a clear gap between them and the others.<br \/>\nCan you see the empty band in the point cloud?<br \/>\nDoes it correspond to some particular image feature that is either captured or not?<\/p>\n","protected":false},"excerpt":{"rendered":"<p>In this post I demonstrate how the capacity (e.g. 
classifier variance) changes in multilayer perceptron classifier with the change of hidden layer units. As a training data I use MNIST dataset published on Kaggle as a training competition. The network is multiclass classifier with single hidden layer, sigmoid activation. Multi-class logarithmic loss The plot shows &hellip; <a href=\"https:\/\/grechka.family\/dmitry\/blog\/2017\/03\/multilayer-perceptron-learning-capacity\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;Multilayer perceptron learning capacity&#8221;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":352,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[49],"tags":[53,54,51,50,52],"class_list":["post-349","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-machine-learning","tag-ai","tag-classification","tag-ffnn","tag-machine-learning","tag-neural-network"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v26.6 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Multilayer perceptron learning capacity - Dmitry A. Grechka<\/title>\n<meta name=\"description\" content=\"The dependency of feed forward neural network learning capacity with the change of hidden layer unit count.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/grechka.family\/dmitry\/blog\/2017\/03\/multilayer-perceptron-learning-capacity\/\" \/>\n<meta property=\"og:locale\" content=\"en_GB\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Multilayer perceptron learning capacity - Dmitry A. 
Grechka\" \/>\n<meta property=\"og:description\" content=\"The dependency of feed forward neural network learning capacity with the change of hidden layer unit count.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/grechka.family\/dmitry\/blog\/2017\/03\/multilayer-perceptron-learning-capacity\/\" \/>\n<meta property=\"og:site_name\" content=\"Dmitry A. Grechka\" \/>\n<meta property=\"article:published_time\" content=\"2017-03-17T19:07:57+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2023-07-10T18:58:32+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/grechka.family\/dmitry\/blog\/wp-content\/uploads\/2017\/03\/acc.png\" \/>\n\t<meta property=\"og:image:width\" content=\"1344\" \/>\n\t<meta property=\"og:image:height\" content=\"960\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"dmitry\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"dmitry\" \/>\n\t<meta name=\"twitter:label2\" content=\"Estimated reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"2 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/grechka.family\/dmitry\/blog\/2017\/03\/multilayer-perceptron-learning-capacity\/\",\"url\":\"https:\/\/grechka.family\/dmitry\/blog\/2017\/03\/multilayer-perceptron-learning-capacity\/\",\"name\":\"Multilayer perceptron learning capacity - Dmitry A. 
Grechka\",\"isPartOf\":{\"@id\":\"https:\/\/grechka.family\/dmitry\/blog\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/grechka.family\/dmitry\/blog\/2017\/03\/multilayer-perceptron-learning-capacity\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/grechka.family\/dmitry\/blog\/2017\/03\/multilayer-perceptron-learning-capacity\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/grechka.family\/dmitry\/blog\/wp-content\/uploads\/2017\/03\/acc.png\",\"datePublished\":\"2017-03-17T19:07:57+00:00\",\"dateModified\":\"2023-07-10T18:58:32+00:00\",\"author\":{\"@id\":\"https:\/\/grechka.family\/dmitry\/blog\/#\/schema\/person\/63485104fdec6dbe258ea67c2e053a6f\"},\"description\":\"The dependency of feed forward neural network learning capacity with the change of hidden layer unit count.\",\"breadcrumb\":{\"@id\":\"https:\/\/grechka.family\/dmitry\/blog\/2017\/03\/multilayer-perceptron-learning-capacity\/#breadcrumb\"},\"inLanguage\":\"en-GB\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/grechka.family\/dmitry\/blog\/2017\/03\/multilayer-perceptron-learning-capacity\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-GB\",\"@id\":\"https:\/\/grechka.family\/dmitry\/blog\/2017\/03\/multilayer-perceptron-learning-capacity\/#primaryimage\",\"url\":\"https:\/\/grechka.family\/dmitry\/blog\/wp-content\/uploads\/2017\/03\/acc.png\",\"contentUrl\":\"https:\/\/grechka.family\/dmitry\/blog\/wp-content\/uploads\/2017\/03\/acc.png\",\"width\":1344,\"height\":960},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/grechka.family\/dmitry\/blog\/2017\/03\/multilayer-perceptron-learning-capacity\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/grechka.family\/dmitry\/blog\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Multilayer perceptron learning 
capacity\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/grechka.family\/dmitry\/blog\/#website\",\"url\":\"https:\/\/grechka.family\/dmitry\/blog\/\",\"name\":\"Dmitry A. Grechka\",\"description\":\"Personal blog\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/grechka.family\/dmitry\/blog\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-GB\"},{\"@type\":\"Person\",\"@id\":\"https:\/\/grechka.family\/dmitry\/blog\/#\/schema\/person\/63485104fdec6dbe258ea67c2e053a6f\",\"name\":\"dmitry\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-GB\",\"@id\":\"https:\/\/grechka.family\/dmitry\/blog\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/ce55dc1fed08e9a15667f56e3285826aa634c717d9c0e34809d717f699bb7b0b?s=96&d=identicon&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/ce55dc1fed08e9a15667f56e3285826aa634c717d9c0e34809d717f699bb7b0b?s=96&d=identicon&r=g\",\"caption\":\"dmitry\"}}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Multilayer perceptron learning capacity - Dmitry A. Grechka","description":"The dependency of feed forward neural network learning capacity with the change of hidden layer unit count.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/grechka.family\/dmitry\/blog\/2017\/03\/multilayer-perceptron-learning-capacity\/","og_locale":"en_GB","og_type":"article","og_title":"Multilayer perceptron learning capacity - Dmitry A. 
Grechka","og_description":"The dependency of feed forward neural network learning capacity with the change of hidden layer unit count.","og_url":"https:\/\/grechka.family\/dmitry\/blog\/2017\/03\/multilayer-perceptron-learning-capacity\/","og_site_name":"Dmitry A. Grechka","article_published_time":"2017-03-17T19:07:57+00:00","article_modified_time":"2023-07-10T18:58:32+00:00","og_image":[{"width":1344,"height":960,"url":"https:\/\/grechka.family\/dmitry\/blog\/wp-content\/uploads\/2017\/03\/acc.png","type":"image\/png"}],"author":"dmitry","twitter_misc":{"Written by":"dmitry","Estimated reading time":"2 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/grechka.family\/dmitry\/blog\/2017\/03\/multilayer-perceptron-learning-capacity\/","url":"https:\/\/grechka.family\/dmitry\/blog\/2017\/03\/multilayer-perceptron-learning-capacity\/","name":"Multilayer perceptron learning capacity - Dmitry A. Grechka","isPartOf":{"@id":"https:\/\/grechka.family\/dmitry\/blog\/#website"},"primaryImageOfPage":{"@id":"https:\/\/grechka.family\/dmitry\/blog\/2017\/03\/multilayer-perceptron-learning-capacity\/#primaryimage"},"image":{"@id":"https:\/\/grechka.family\/dmitry\/blog\/2017\/03\/multilayer-perceptron-learning-capacity\/#primaryimage"},"thumbnailUrl":"https:\/\/grechka.family\/dmitry\/blog\/wp-content\/uploads\/2017\/03\/acc.png","datePublished":"2017-03-17T19:07:57+00:00","dateModified":"2023-07-10T18:58:32+00:00","author":{"@id":"https:\/\/grechka.family\/dmitry\/blog\/#\/schema\/person\/63485104fdec6dbe258ea67c2e053a6f"},"description":"The dependency of feed forward neural network learning capacity with the change of hidden layer unit 
count.","breadcrumb":{"@id":"https:\/\/grechka.family\/dmitry\/blog\/2017\/03\/multilayer-perceptron-learning-capacity\/#breadcrumb"},"inLanguage":"en-GB","potentialAction":[{"@type":"ReadAction","target":["https:\/\/grechka.family\/dmitry\/blog\/2017\/03\/multilayer-perceptron-learning-capacity\/"]}]},{"@type":"ImageObject","inLanguage":"en-GB","@id":"https:\/\/grechka.family\/dmitry\/blog\/2017\/03\/multilayer-perceptron-learning-capacity\/#primaryimage","url":"https:\/\/grechka.family\/dmitry\/blog\/wp-content\/uploads\/2017\/03\/acc.png","contentUrl":"https:\/\/grechka.family\/dmitry\/blog\/wp-content\/uploads\/2017\/03\/acc.png","width":1344,"height":960},{"@type":"BreadcrumbList","@id":"https:\/\/grechka.family\/dmitry\/blog\/2017\/03\/multilayer-perceptron-learning-capacity\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/grechka.family\/dmitry\/blog\/"},{"@type":"ListItem","position":2,"name":"Multilayer perceptron learning capacity"}]},{"@type":"WebSite","@id":"https:\/\/grechka.family\/dmitry\/blog\/#website","url":"https:\/\/grechka.family\/dmitry\/blog\/","name":"Dmitry A. 
Grechka","description":"Personal blog","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/grechka.family\/dmitry\/blog\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-GB"},{"@type":"Person","@id":"https:\/\/grechka.family\/dmitry\/blog\/#\/schema\/person\/63485104fdec6dbe258ea67c2e053a6f","name":"dmitry","image":{"@type":"ImageObject","inLanguage":"en-GB","@id":"https:\/\/grechka.family\/dmitry\/blog\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/ce55dc1fed08e9a15667f56e3285826aa634c717d9c0e34809d717f699bb7b0b?s=96&d=identicon&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/ce55dc1fed08e9a15667f56e3285826aa634c717d9c0e34809d717f699bb7b0b?s=96&d=identicon&r=g","caption":"dmitry"}}]}},"jetpack_featured_media_url":"https:\/\/grechka.family\/dmitry\/blog\/wp-content\/uploads\/2017\/03\/acc.png","jetpack_sharing_enabled":true,"jetpack_likes_enabled":true,"_links":{"self":[{"href":"https:\/\/grechka.family\/dmitry\/blog\/wp-json\/wp\/v2\/posts\/349","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/grechka.family\/dmitry\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/grechka.family\/dmitry\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/grechka.family\/dmitry\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/grechka.family\/dmitry\/blog\/wp-json\/wp\/v2\/comments?post=349"}],"version-history":[{"count":8,"href":"https:\/\/grechka.family\/dmitry\/blog\/wp-json\/wp\/v2\/posts\/349\/revisions"}],"predecessor-version":[{"id":639,"href":"https:\/\/grechka.family\/dmitry\/blog\/wp-json\/wp\/v2\/posts\/349\/revisions\/639"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/grechka.family\/dmitry\/blog\/wp-json\/wp\/v2\/media\/352"}],"wp:attachment":[{"href":"https:\/\/grechka.family
\/dmitry\/blog\/wp-json\/wp\/v2\/media?parent=349"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/grechka.family\/dmitry\/blog\/wp-json\/wp\/v2\/categories?post=349"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/grechka.family\/dmitry\/blog\/wp-json\/wp\/v2\/tags?post=349"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}