Created February 3, 2019 16:13
TensorFlow high-level APIs - Notebook.ipynb
{ | |
"nbformat": 4, | |
"nbformat_minor": 0, | |
"metadata": { | |
"colab": { | |
"name": "TensorFlow high-level APIs - Notebook.ipynb", | |
"version": "0.3.2", | |
"provenance": [], | |
"collapsed_sections": [], | |
"include_colab_link": true | |
}, | |
"kernelspec": { | |
"display_name": "Python 3", | |
"language": "python", | |
"name": "python3" | |
} | |
}, | |
"cells": [ | |
{ | |
"cell_type": "markdown", | |
"metadata": { | |
"id": "view-in-github", | |
"colab_type": "text" | |
}, | |
"source": [ | |
"<a href=\"https://colab.research.google.com/gist/malkayo/71e483c6f234dd5aca0eaaa52a5c24e0/tensorflow-high-level-apis-notebook.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>" | |
] | |
}, | |
{ | |
"metadata": { | |
"id": "Kx0YcyptFbTf", | |
"colab_type": "text" | |
}, | |
"cell_type": "markdown", | |
"source": [ | |
"# TensorFlow high-level APIs - Notebook\n", | |
        "This notebook is meant to help you follow along with the video series on the TensorFlow high-level APIs ([first video here](https://www.youtube.com/watch?v=oFFbKogYdfc))."
] | |
}, | |
{ | |
"metadata": { | |
"id": "YDw7b01EFbTp", | |
"colab_type": "text" | |
}, | |
"cell_type": "markdown", | |
"source": [ | |
"## Getting the data\n", | |
        "The Forest Cover Type dataset mentioned in the video can be found on [Kaggle](https://www.kaggle.com/uciml/forest-cover-type-dataset#covtype.csv)."
] | |
}, | |
{ | |
"metadata": { | |
"id": "D4oPDcuhFbTw", | |
"colab_type": "text" | |
}, | |
"cell_type": "markdown", | |
"source": [ | |
        "You can download the data directly from [Kaggle](https://www.kaggle.com/uciml/forest-cover-type-dataset#covtype.csv) or by running the following command on your local machine with the [Kaggle API](https://github.com/Kaggle/kaggle-api)."
] | |
}, | |
{ | |
"metadata": { | |
"id": "-UVTx7RbFbT5", | |
"colab_type": "code", | |
"colab": {} | |
}, | |
"cell_type": "code", | |
"source": [ | |
"!kaggle datasets download -d uciml/forest-cover-type-dataset" | |
], | |
"execution_count": 0, | |
"outputs": [] | |
}, | |
{ | |
"metadata": { | |
"id": "7_qHH0j1FbUF", | |
"colab_type": "text" | |
}, | |
"cell_type": "markdown", | |
"source": [ | |
        "If you are using Google Colab, you can upload the forest-cover-type-dataset.zip file with the following cell."
] | |
}, | |
{ | |
"metadata": { | |
"colab_type": "code", | |
"id": "gDiRcPzcf61h", | |
"outputId": "086c08b6-7117-4c61-b94d-cdaa38534109", | |
"colab": { | |
"resources": { | |
"http://localhost:8080/nbextensions/google.colab/files.js": { | |
"data": "Ly8gQ29weXJpZ2h0IDIwMTcgR29vZ2xlIExMQwovLwovLyBMaWNlbnNlZCB1bmRlciB0aGUgQXBhY2hlIExpY2Vuc2UsIFZlcnNpb24gMi4wICh0aGUgIkxpY2Vuc2UiKTsKLy8geW91IG1heSBub3QgdXNlIHRoaXMgZmlsZSBleGNlcHQgaW4gY29tcGxpYW5jZSB3aXRoIHRoZSBMaWNlbnNlLgovLyBZb3UgbWF5IG9idGFpbiBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKLy8KLy8gICAgICBodHRwOi8vd3d3LmFwYWNoZS5vcmcvbGljZW5zZXMvTElDRU5TRS0yLjAKLy8KLy8gVW5sZXNzIHJlcXVpcmVkIGJ5IGFwcGxpY2FibGUgbGF3IG9yIGFncmVlZCB0byBpbiB3cml0aW5nLCBzb2Z0d2FyZQovLyBkaXN0cmlidXRlZCB1bmRlciB0aGUgTGljZW5zZSBpcyBkaXN0cmlidXRlZCBvbiBhbiAiQVMgSVMiIEJBU0lTLAovLyBXSVRIT1VUIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4KLy8gU2VlIHRoZSBMaWNlbnNlIGZvciB0aGUgc3BlY2lmaWMgbGFuZ3VhZ2UgZ292ZXJuaW5nIHBlcm1pc3Npb25zIGFuZAovLyBsaW1pdGF0aW9ucyB1bmRlciB0aGUgTGljZW5zZS4KCi8qKgogKiBAZmlsZW92ZXJ2aWV3IEhlbHBlcnMgZm9yIGdvb2dsZS5jb2xhYiBQeXRob24gbW9kdWxlLgogKi8KKGZ1bmN0aW9uKHNjb3BlKSB7CmZ1bmN0aW9uIHNwYW4odGV4dCwgc3R5bGVBdHRyaWJ1dGVzID0ge30pIHsKICBjb25zdCBlbGVtZW50ID0gZG9jdW1lbnQuY3JlYXRlRWxlbWVudCgnc3BhbicpOwogIGVsZW1lbnQudGV4dENvbnRlbnQgPSB0ZXh0OwogIGZvciAoY29uc3Qga2V5IG9mIE9iamVjdC5rZXlzKHN0eWxlQXR0cmlidXRlcykpIHsKICAgIGVsZW1lbnQuc3R5bGVba2V5XSA9IHN0eWxlQXR0cmlidXRlc1trZXldOwogIH0KICByZXR1cm4gZWxlbWVudDsKfQoKLy8gTWF4IG51bWJlciBvZiBieXRlcyB3aGljaCB3aWxsIGJlIHVwbG9hZGVkIGF0IGEgdGltZS4KY29uc3QgTUFYX1BBWUxPQURfU0laRSA9IDEwMCAqIDEwMjQ7Ci8vIE1heCBhbW91bnQgb2YgdGltZSB0byBibG9jayB3YWl0aW5nIGZvciB0aGUgdXNlci4KY29uc3QgRklMRV9DSEFOR0VfVElNRU9VVF9NUyA9IDMwICogMTAwMDsKCmZ1bmN0aW9uIF91cGxvYWRGaWxlcyhpbnB1dElkLCBvdXRwdXRJZCkgewogIGNvbnN0IHN0ZXBzID0gdXBsb2FkRmlsZXNTdGVwKGlucHV0SWQsIG91dHB1dElkKTsKICBjb25zdCBvdXRwdXRFbGVtZW50ID0gZG9jdW1lbnQuZ2V0RWxlbWVudEJ5SWQob3V0cHV0SWQpOwogIC8vIENhY2hlIHN0ZXBzIG9uIHRoZSBvdXRwdXRFbGVtZW50IHRvIG1ha2UgaXQgYXZhaWxhYmxlIGZvciB0aGUgbmV4dCBjYWxsCiAgLy8gdG8gdXBsb2FkRmlsZXNDb250aW51ZSBmcm9tIFB5dGhvbi4KICBvdXRwdXRFbGVtZW50LnN0ZXBzID0gc3RlcHM7CgogIHJldHVybiBfdXBsb2FkRmlsZXNDb250aW51ZShvdXRwdXRJZCk7Cn0KCi8vIFRoaXMgaXMgcm91Z2hseSBhbiBhc3luYyBnZW5lcmF
0b3IgKG5vdCBzdXBwb3J0ZWQgaW4gdGhlIGJyb3dzZXIgeWV0KSwKLy8gd2hlcmUgdGhlcmUgYXJlIG11bHRpcGxlIGFzeW5jaHJvbm91cyBzdGVwcyBhbmQgdGhlIFB5dGhvbiBzaWRlIGlzIGdvaW5nCi8vIHRvIHBvbGwgZm9yIGNvbXBsZXRpb24gb2YgZWFjaCBzdGVwLgovLyBUaGlzIHVzZXMgYSBQcm9taXNlIHRvIGJsb2NrIHRoZSBweXRob24gc2lkZSBvbiBjb21wbGV0aW9uIG9mIGVhY2ggc3RlcCwKLy8gdGhlbiBwYXNzZXMgdGhlIHJlc3VsdCBvZiB0aGUgcHJldmlvdXMgc3RlcCBhcyB0aGUgaW5wdXQgdG8gdGhlIG5leHQgc3RlcC4KZnVuY3Rpb24gX3VwbG9hZEZpbGVzQ29udGludWUob3V0cHV0SWQpIHsKICBjb25zdCBvdXRwdXRFbGVtZW50ID0gZG9jdW1lbnQuZ2V0RWxlbWVudEJ5SWQob3V0cHV0SWQpOwogIGNvbnN0IHN0ZXBzID0gb3V0cHV0RWxlbWVudC5zdGVwczsKCiAgY29uc3QgbmV4dCA9IHN0ZXBzLm5leHQob3V0cHV0RWxlbWVudC5sYXN0UHJvbWlzZVZhbHVlKTsKICByZXR1cm4gUHJvbWlzZS5yZXNvbHZlKG5leHQudmFsdWUucHJvbWlzZSkudGhlbigodmFsdWUpID0+IHsKICAgIC8vIENhY2hlIHRoZSBsYXN0IHByb21pc2UgdmFsdWUgdG8gbWFrZSBpdCBhdmFpbGFibGUgdG8gdGhlIG5leHQKICAgIC8vIHN0ZXAgb2YgdGhlIGdlbmVyYXRvci4KICAgIG91dHB1dEVsZW1lbnQubGFzdFByb21pc2VWYWx1ZSA9IHZhbHVlOwogICAgcmV0dXJuIG5leHQudmFsdWUucmVzcG9uc2U7CiAgfSk7Cn0KCi8qKgogKiBHZW5lcmF0b3IgZnVuY3Rpb24gd2hpY2ggaXMgY2FsbGVkIGJldHdlZW4gZWFjaCBhc3luYyBzdGVwIG9mIHRoZSB1cGxvYWQKICogcHJvY2Vzcy4KICogQHBhcmFtIHtzdHJpbmd9IGlucHV0SWQgRWxlbWVudCBJRCBvZiB0aGUgaW5wdXQgZmlsZSBwaWNrZXIgZWxlbWVudC4KICogQHBhcmFtIHtzdHJpbmd9IG91dHB1dElkIEVsZW1lbnQgSUQgb2YgdGhlIG91dHB1dCBkaXNwbGF5LgogKiBAcmV0dXJuIHshSXRlcmFibGU8IU9iamVjdD59IEl0ZXJhYmxlIG9mIG5leHQgc3RlcHMuCiAqLwpmdW5jdGlvbiogdXBsb2FkRmlsZXNTdGVwKGlucHV0SWQsIG91dHB1dElkKSB7CiAgY29uc3QgaW5wdXRFbGVtZW50ID0gZG9jdW1lbnQuZ2V0RWxlbWVudEJ5SWQoaW5wdXRJZCk7CiAgaW5wdXRFbGVtZW50LmRpc2FibGVkID0gZmFsc2U7CgogIGNvbnN0IG91dHB1dEVsZW1lbnQgPSBkb2N1bWVudC5nZXRFbGVtZW50QnlJZChvdXRwdXRJZCk7CiAgb3V0cHV0RWxlbWVudC5pbm5lckhUTUwgPSAnJzsKCiAgY29uc3QgcGlja2VkUHJvbWlzZSA9IG5ldyBQcm9taXNlKChyZXNvbHZlKSA9PiB7CiAgICBpbnB1dEVsZW1lbnQuYWRkRXZlbnRMaXN0ZW5lcignY2hhbmdlJywgKGUpID0+IHsKICAgICAgcmVzb2x2ZShlLnRhcmdldC5maWxlcyk7CiAgICB9KTsKICB9KTsKCiAgY29uc3QgY2FuY2VsID0gZG9jdW1lbnQuY3JlYXRlRWxlbWVudCgnYnV0dG9uJyk7CiAgaW5wdXRFbGVtZW50LnBhcmVudEVsZW1lbnQ
uYXBwZW5kQ2hpbGQoY2FuY2VsKTsKICBjYW5jZWwudGV4dENvbnRlbnQgPSAnQ2FuY2VsIHVwbG9hZCc7CiAgY29uc3QgY2FuY2VsUHJvbWlzZSA9IG5ldyBQcm9taXNlKChyZXNvbHZlKSA9PiB7CiAgICBjYW5jZWwub25jbGljayA9ICgpID0+IHsKICAgICAgcmVzb2x2ZShudWxsKTsKICAgIH07CiAgfSk7CgogIC8vIENhbmNlbCB1cGxvYWQgaWYgdXNlciBoYXNuJ3QgcGlja2VkIGFueXRoaW5nIGluIHRpbWVvdXQuCiAgY29uc3QgdGltZW91dFByb21pc2UgPSBuZXcgUHJvbWlzZSgocmVzb2x2ZSkgPT4gewogICAgc2V0VGltZW91dCgoKSA9PiB7CiAgICAgIHJlc29sdmUobnVsbCk7CiAgICB9LCBGSUxFX0NIQU5HRV9USU1FT1VUX01TKTsKICB9KTsKCiAgLy8gV2FpdCBmb3IgdGhlIHVzZXIgdG8gcGljayB0aGUgZmlsZXMuCiAgY29uc3QgZmlsZXMgPSB5aWVsZCB7CiAgICBwcm9taXNlOiBQcm9taXNlLnJhY2UoW3BpY2tlZFByb21pc2UsIHRpbWVvdXRQcm9taXNlLCBjYW5jZWxQcm9taXNlXSksCiAgICByZXNwb25zZTogewogICAgICBhY3Rpb246ICdzdGFydGluZycsCiAgICB9CiAgfTsKCiAgaWYgKCFmaWxlcykgewogICAgcmV0dXJuIHsKICAgICAgcmVzcG9uc2U6IHsKICAgICAgICBhY3Rpb246ICdjb21wbGV0ZScsCiAgICAgIH0KICAgIH07CiAgfQoKICBjYW5jZWwucmVtb3ZlKCk7CgogIC8vIERpc2FibGUgdGhlIGlucHV0IGVsZW1lbnQgc2luY2UgZnVydGhlciBwaWNrcyBhcmUgbm90IGFsbG93ZWQuCiAgaW5wdXRFbGVtZW50LmRpc2FibGVkID0gdHJ1ZTsKCiAgZm9yIChjb25zdCBmaWxlIG9mIGZpbGVzKSB7CiAgICBjb25zdCBsaSA9IGRvY3VtZW50LmNyZWF0ZUVsZW1lbnQoJ2xpJyk7CiAgICBsaS5hcHBlbmQoc3BhbihmaWxlLm5hbWUsIHtmb250V2VpZ2h0OiAnYm9sZCd9KSk7CiAgICBsaS5hcHBlbmQoc3BhbigKICAgICAgICBgKCR7ZmlsZS50eXBlIHx8ICduL2EnfSkgLSAke2ZpbGUuc2l6ZX0gYnl0ZXMsIGAgKwogICAgICAgIGBsYXN0IG1vZGlmaWVkOiAkewogICAgICAgICAgICBmaWxlLmxhc3RNb2RpZmllZERhdGUgPyBmaWxlLmxhc3RNb2RpZmllZERhdGUudG9Mb2NhbGVEYXRlU3RyaW5nKCkgOgogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAnbi9hJ30gLSBgKSk7CiAgICBjb25zdCBwZXJjZW50ID0gc3BhbignMCUgZG9uZScpOwogICAgbGkuYXBwZW5kQ2hpbGQocGVyY2VudCk7CgogICAgb3V0cHV0RWxlbWVudC5hcHBlbmRDaGlsZChsaSk7CgogICAgY29uc3QgZmlsZURhdGFQcm9taXNlID0gbmV3IFByb21pc2UoKHJlc29sdmUpID0+IHsKICAgICAgY29uc3QgcmVhZGVyID0gbmV3IEZpbGVSZWFkZXIoKTsKICAgICAgcmVhZGVyLm9ubG9hZCA9IChlKSA9PiB7CiAgICAgICAgcmVzb2x2ZShlLnRhcmdldC5yZXN1bHQpOwogICAgICB9OwogICAgICByZWFkZXIucmVhZEFzQXJyYXlCdWZmZXIoZmlsZSk7CiAgICB9KTsKICAgIC8vIFdhaXQgZm9yIHRoZSBkYXRhIHRvIGJlIHJ
lYWR5LgogICAgbGV0IGZpbGVEYXRhID0geWllbGQgewogICAgICBwcm9taXNlOiBmaWxlRGF0YVByb21pc2UsCiAgICAgIHJlc3BvbnNlOiB7CiAgICAgICAgYWN0aW9uOiAnY29udGludWUnLAogICAgICB9CiAgICB9OwoKICAgIC8vIFVzZSBhIGNodW5rZWQgc2VuZGluZyB0byBhdm9pZCBtZXNzYWdlIHNpemUgbGltaXRzLiBTZWUgYi82MjExNTY2MC4KICAgIGxldCBwb3NpdGlvbiA9IDA7CiAgICB3aGlsZSAocG9zaXRpb24gPCBmaWxlRGF0YS5ieXRlTGVuZ3RoKSB7CiAgICAgIGNvbnN0IGxlbmd0aCA9IE1hdGgubWluKGZpbGVEYXRhLmJ5dGVMZW5ndGggLSBwb3NpdGlvbiwgTUFYX1BBWUxPQURfU0laRSk7CiAgICAgIGNvbnN0IGNodW5rID0gbmV3IFVpbnQ4QXJyYXkoZmlsZURhdGEsIHBvc2l0aW9uLCBsZW5ndGgpOwogICAgICBwb3NpdGlvbiArPSBsZW5ndGg7CgogICAgICBjb25zdCBiYXNlNjQgPSBidG9hKFN0cmluZy5mcm9tQ2hhckNvZGUuYXBwbHkobnVsbCwgY2h1bmspKTsKICAgICAgeWllbGQgewogICAgICAgIHJlc3BvbnNlOiB7CiAgICAgICAgICBhY3Rpb246ICdhcHBlbmQnLAogICAgICAgICAgZmlsZTogZmlsZS5uYW1lLAogICAgICAgICAgZGF0YTogYmFzZTY0LAogICAgICAgIH0sCiAgICAgIH07CiAgICAgIHBlcmNlbnQudGV4dENvbnRlbnQgPQogICAgICAgICAgYCR7TWF0aC5yb3VuZCgocG9zaXRpb24gLyBmaWxlRGF0YS5ieXRlTGVuZ3RoKSAqIDEwMCl9JSBkb25lYDsKICAgIH0KICB9CgogIC8vIEFsbCBkb25lLgogIHlpZWxkIHsKICAgIHJlc3BvbnNlOiB7CiAgICAgIGFjdGlvbjogJ2NvbXBsZXRlJywKICAgIH0KICB9Owp9CgpzY29wZS5nb29nbGUgPSBzY29wZS5nb29nbGUgfHwge307CnNjb3BlLmdvb2dsZS5jb2xhYiA9IHNjb3BlLmdvb2dsZS5jb2xhYiB8fCB7fTsKc2NvcGUuZ29vZ2xlLmNvbGFiLl9maWxlcyA9IHsKICBfdXBsb2FkRmlsZXMsCiAgX3VwbG9hZEZpbGVzQ29udGludWUsCn07Cn0pKHNlbGYpOwo=", | |
"ok": true, | |
"headers": [ | |
[ | |
"content-type", | |
"application/javascript" | |
] | |
], | |
"status": 200, | |
"status_text": "OK" | |
} | |
}, | |
"base_uri": "https://localhost:8080/", | |
"height": 77 | |
} | |
}, | |
"cell_type": "code", | |
"source": [ | |
"from google.colab import files\n", | |
"uploaded = files.upload()" | |
], | |
"execution_count": 3, | |
"outputs": [ | |
{ | |
"output_type": "display_data", | |
"data": { | |
"text/html": [ | |
"\n", | |
" <input type=\"file\" id=\"files-fdff2e91-e395-4bed-b1ca-8de594a63c78\" name=\"files[]\" multiple disabled />\n", | |
" <output id=\"result-fdff2e91-e395-4bed-b1ca-8de594a63c78\">\n", | |
" Upload widget is only available when the cell has been executed in the\n", | |
" current browser session. Please rerun this cell to enable.\n", | |
" </output>\n", | |
" <script src=\"/nbextensions/google.colab/files.js\"></script> " | |
], | |
"text/plain": [ | |
"<IPython.core.display.HTML object>" | |
] | |
}, | |
"metadata": { | |
"tags": [] | |
} | |
}, | |
{ | |
"output_type": "stream", | |
"text": [ | |
"Saving forest-cover-type-dataset.zip to forest-cover-type-dataset.zip\n" | |
], | |
"name": "stdout" | |
} | |
] | |
}, | |
{ | |
"metadata": { | |
"id": "ejBlx-mVGcr1", | |
"colab_type": "text" | |
}, | |
"cell_type": "markdown", | |
"source": [ | |
        "We can then unzip the CSV file."
] | |
}, | |
{ | |
"metadata": { | |
"id": "dLuvliJ0GlIZ", | |
"colab_type": "code", | |
"colab": { | |
"base_uri": "https://localhost:8080/", | |
"height": 69 | |
}, | |
"outputId": "74f61c41-48df-4f50-e61a-87cd3ddb6bee" | |
}, | |
"cell_type": "code", | |
"source": [ | |
"!unzip forest-cover-type-dataset.zip\n", | |
"!rm forest-cover-type-dataset.zip" | |
], | |
"execution_count": 4, | |
"outputs": [ | |
{ | |
"output_type": "stream", | |
"text": [ | |
"Archive: forest-cover-type-dataset.zip\n", | |
"replace covtype.csv? [y]es, [n]o, [A]ll, [N]one, [r]ename: y\n", | |
" inflating: covtype.csv \n" | |
], | |
"name": "stdout" | |
} | |
] | |
}, | |
{ | |
"metadata": { | |
"id": "hKOppxCFFbUQ", | |
"colab_type": "text" | |
}, | |
"cell_type": "markdown", | |
"source": [ | |
"## Splitting the data into the training and test sets" | |
] | |
}, | |
{ | |
"metadata": { | |
"colab_type": "code", | |
"id": "5Rar3bD8OAAl", | |
"colab": {} | |
}, | |
"cell_type": "code", | |
"source": [ | |
"import numpy as np\n", | |
"np.set_printoptions(suppress=True)\n", | |
"\n", | |
"from sklearn.model_selection import train_test_split\n", | |
"\n", | |
"data = np.genfromtxt(\"./covtype.csv\", delimiter=\",\", skip_header=1, dtype=np.int32)\n", | |
"\n", | |
"X = data[:, 0:-1]\n", | |
"y = data[:, -1]\n", | |
"\n", | |
"X_train, X_test, y_train, y_test = train_test_split(\n", | |
" X, y, test_size=0.33, random_state=42)\n", | |
"\n", | |
"training_data = np.column_stack((X_train, y_train))\n", | |
"np.savetxt(\"./covtype.csv.train\", training_data, \"%d\", delimiter=\",\")\n", | |
"\n", | |
"test_data = np.column_stack((X_test, y_test))\n", | |
"np.savetxt(\"./covtype.csv.test\", test_data, \"%d\", delimiter=\",\")" | |
], | |
"execution_count": 0, | |
"outputs": [] | |
}, | |
{ | |
"metadata": { | |
"id": "tSO3mtPwFbUY", | |
"colab_type": "text" | |
}, | |
"cell_type": "markdown", | |
"source": [ | |
"# Part I - Loading data ([Video](https://www.youtube.com/watch?v=oFFbKogYdfc))" | |
] | |
}, | |
{ | |
"metadata": { | |
"colab_type": "code", | |
"id": "xz5vGSo9OAAt", | |
"colab": {} | |
}, | |
"cell_type": "code", | |
"source": [ | |
"import tensorflow as tf\n", | |
"import pprint\n", | |
"\n", | |
"tf.enable_eager_execution()" | |
], | |
"execution_count": 0, | |
"outputs": [] | |
}, | |
{ | |
"metadata": { | |
"id": "_B4LxOLmFbUh", | |
"colab_type": "code", | |
"colab": { | |
"base_uri": "https://localhost:8080/", | |
"height": 34 | |
}, | |
"outputId": "53848eec-132d-4259-e54e-8107941a1444" | |
}, | |
"cell_type": "code", | |
"source": [ | |
"a = tf.constant(5)\n", | |
"b = a * 3\n", | |
"print(b)" | |
], | |
"execution_count": 7, | |
"outputs": [ | |
{ | |
"output_type": "stream", | |
"text": [ | |
"tf.Tensor(15, shape=(), dtype=int32)\n" | |
], | |
"name": "stdout" | |
} | |
] | |
}, | |
{ | |
"metadata": { | |
"scrolled": true, | |
"id": "JHHKVZCgFbUr", | |
"colab_type": "code", | |
"colab": { | |
"base_uri": "https://localhost:8080/", | |
"height": 972 | |
}, | |
"outputId": "7825feef-08df-4d1d-f164-de00b92be735" | |
}, | |
"cell_type": "code", | |
"source": [ | |
"defaults = [tf.int32] * 55\n", | |
"\n", | |
"# tf.contrib.data.CsvDataset is deprecated\n", | |
"dataset = tf.data.experimental.CsvDataset(['./covtype.csv.train'], defaults)\n", | |
"pprint.pprint(list(dataset.take(1)))" | |
], | |
"execution_count": 8, | |
"outputs": [ | |
{ | |
"output_type": "stream", | |
"text": [ | |
"[(<tf.Tensor: id=72, shape=(), dtype=int32, numpy=3075>,\n", | |
" <tf.Tensor: id=73, shape=(), dtype=int32, numpy=241>,\n", | |
" <tf.Tensor: id=74, shape=(), dtype=int32, numpy=20>,\n", | |
" <tf.Tensor: id=75, shape=(), dtype=int32, numpy=382>,\n", | |
" <tf.Tensor: id=76, shape=(), dtype=int32, numpy=56>,\n", | |
" <tf.Tensor: id=77, shape=(), dtype=int32, numpy=5772>,\n", | |
" <tf.Tensor: id=78, shape=(), dtype=int32, numpy=179>,\n", | |
" <tf.Tensor: id=79, shape=(), dtype=int32, numpy=252>,\n", | |
" <tf.Tensor: id=80, shape=(), dtype=int32, numpy=207>,\n", | |
" <tf.Tensor: id=81, shape=(), dtype=int32, numpy=1849>,\n", | |
" <tf.Tensor: id=82, shape=(), dtype=int32, numpy=1>,\n", | |
" <tf.Tensor: id=83, shape=(), dtype=int32, numpy=0>,\n", | |
" <tf.Tensor: id=84, shape=(), dtype=int32, numpy=0>,\n", | |
" <tf.Tensor: id=85, shape=(), dtype=int32, numpy=0>,\n", | |
" <tf.Tensor: id=86, shape=(), dtype=int32, numpy=0>,\n", | |
" <tf.Tensor: id=87, shape=(), dtype=int32, numpy=0>,\n", | |
" <tf.Tensor: id=88, shape=(), dtype=int32, numpy=0>,\n", | |
" <tf.Tensor: id=89, shape=(), dtype=int32, numpy=0>,\n", | |
" <tf.Tensor: id=90, shape=(), dtype=int32, numpy=0>,\n", | |
" <tf.Tensor: id=91, shape=(), dtype=int32, numpy=0>,\n", | |
" <tf.Tensor: id=92, shape=(), dtype=int32, numpy=0>,\n", | |
" <tf.Tensor: id=93, shape=(), dtype=int32, numpy=0>,\n", | |
" <tf.Tensor: id=94, shape=(), dtype=int32, numpy=0>,\n", | |
" <tf.Tensor: id=95, shape=(), dtype=int32, numpy=0>,\n", | |
" <tf.Tensor: id=96, shape=(), dtype=int32, numpy=0>,\n", | |
" <tf.Tensor: id=97, shape=(), dtype=int32, numpy=0>,\n", | |
" <tf.Tensor: id=98, shape=(), dtype=int32, numpy=0>,\n", | |
" <tf.Tensor: id=99, shape=(), dtype=int32, numpy=0>,\n", | |
" <tf.Tensor: id=100, shape=(), dtype=int32, numpy=0>,\n", | |
" <tf.Tensor: id=101, shape=(), dtype=int32, numpy=0>,\n", | |
" <tf.Tensor: id=102, shape=(), dtype=int32, numpy=0>,\n", | |
" <tf.Tensor: id=103, shape=(), dtype=int32, numpy=0>,\n", | |
" <tf.Tensor: id=104, shape=(), dtype=int32, numpy=0>,\n", | |
" <tf.Tensor: id=105, shape=(), dtype=int32, numpy=0>,\n", | |
" <tf.Tensor: id=106, shape=(), dtype=int32, numpy=0>,\n", | |
" <tf.Tensor: id=107, shape=(), dtype=int32, numpy=0>,\n", | |
" <tf.Tensor: id=108, shape=(), dtype=int32, numpy=0>,\n", | |
" <tf.Tensor: id=109, shape=(), dtype=int32, numpy=0>,\n", | |
" <tf.Tensor: id=110, shape=(), dtype=int32, numpy=0>,\n", | |
" <tf.Tensor: id=111, shape=(), dtype=int32, numpy=0>,\n", | |
" <tf.Tensor: id=112, shape=(), dtype=int32, numpy=0>,\n", | |
" <tf.Tensor: id=113, shape=(), dtype=int32, numpy=0>,\n", | |
" <tf.Tensor: id=114, shape=(), dtype=int32, numpy=0>,\n", | |
" <tf.Tensor: id=115, shape=(), dtype=int32, numpy=1>,\n", | |
" <tf.Tensor: id=116, shape=(), dtype=int32, numpy=0>,\n", | |
" <tf.Tensor: id=117, shape=(), dtype=int32, numpy=0>,\n", | |
" <tf.Tensor: id=118, shape=(), dtype=int32, numpy=0>,\n", | |
" <tf.Tensor: id=119, shape=(), dtype=int32, numpy=0>,\n", | |
" <tf.Tensor: id=120, shape=(), dtype=int32, numpy=0>,\n", | |
" <tf.Tensor: id=121, shape=(), dtype=int32, numpy=0>,\n", | |
" <tf.Tensor: id=122, shape=(), dtype=int32, numpy=0>,\n", | |
" <tf.Tensor: id=123, shape=(), dtype=int32, numpy=0>,\n", | |
" <tf.Tensor: id=124, shape=(), dtype=int32, numpy=0>,\n", | |
" <tf.Tensor: id=125, shape=(), dtype=int32, numpy=0>,\n", | |
" <tf.Tensor: id=126, shape=(), dtype=int32, numpy=2>)]\n" | |
], | |
"name": "stdout" | |
} | |
] | |
}, | |
{ | |
"metadata": { | |
"id": "IkwNC1q0FbUx", | |
"colab_type": "code", | |
"colab": {} | |
}, | |
"cell_type": "code", | |
"source": [ | |
"col_names = [\"Elevation\",\"Aspect\",\"Slope\",\"Horizontal_Distance_To_Hydrology\",\n", | |
" \"Vertical_Distance_To_Hydrology\",\"Horizontal_Distance_To_Roadways\",\n", | |
" \"Hillshade_9am\",\"Hillshade_Noon\",\"Hillshade_3pm\",\n", | |
" \"Horizontal_Distance_To_Fire_Points\",\"Soil_Type\", \"Cover_Type\"]\n", | |
"\n", | |
"def _parse_csv_row(*vals):\n", | |
" soil_type_t = tf.convert_to_tensor(vals[14:54])\n", | |
" feat_vals = vals[:10] + (soil_type_t, vals[54])\n", | |
" features = dict(zip(col_names, feat_vals))\n", | |
"\n", | |
" class_label = tf.argmax(vals[10:14], axis=0) # wilderness area\n", | |
" return features, class_label" | |
], | |
"execution_count": 0, | |
"outputs": [] | |
}, | |
{ | |
"metadata": { | |
"scrolled": true, | |
"id": "nkkX9fQ6FbU5", | |
"colab_type": "code", | |
"colab": { | |
"base_uri": "https://localhost:8080/", | |
"height": 1493 | |
}, | |
"outputId": "5ef7c8e4-859a-41eb-fbc0-9f36e5f41405" | |
}, | |
"cell_type": "code", | |
"source": [ | |
"dataset = dataset.map(_parse_csv_row).batch(64)\n", | |
"pprint.pprint(list(dataset.take(1)))" | |
], | |
"execution_count": 10, | |
"outputs": [ | |
{ | |
"output_type": "stream", | |
"text": [ | |
"[({'Aspect': <tf.Tensor: id=426, shape=(64,), dtype=int32, numpy=\n", | |
"array([241, 130, 122, 54, 335, 7, 90, 319, 124, 117, 216, 90, 345,\n", | |
" 45, 163, 347, 115, 217, 42, 17, 354, 109, 144, 350, 10, 360,\n", | |
" 107, 254, 85, 213, 5, 107, 57, 122, 206, 73, 264, 355, 19,\n", | |
" 237, 184, 27, 78, 89, 62, 288, 83, 341, 99, 38, 0, 51,\n", | |
" 202, 78, 193, 86, 34, 45, 346, 354, 92, 112, 159, 15],\n", | |
" dtype=int32)>,\n", | |
" 'Cover_Type': <tf.Tensor: id=427, shape=(64,), dtype=int32, numpy=\n", | |
"array([2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 2, 3, 2, 2, 1, 1, 3, 2, 1, 1, 1, 1,\n", | |
" 2, 2, 2, 2, 2, 2, 2, 2, 2, 6, 1, 1, 2, 1, 1, 1, 1, 3, 2, 1, 1, 7,\n", | |
" 2, 1, 2, 1, 2, 1, 2, 2, 2, 1, 2, 2, 2, 2, 2, 1, 6, 4, 7, 1],\n", | |
" dtype=int32)>,\n", | |
" 'Elevation': <tf.Tensor: id=428, shape=(64,), dtype=int32, numpy=\n", | |
"array([3075, 2705, 2926, 3162, 2794, 2800, 3283, 2891, 3437, 3156, 2877,\n", | |
" 2600, 3047, 2985, 3181, 3111, 2350, 2906, 3110, 3244, 2807, 3201,\n", | |
" 2926, 2933, 3006, 2515, 2606, 3025, 2551, 3068, 3038, 2429, 3152,\n", | |
" 2828, 2967, 2972, 3479, 2709, 3249, 2591, 2968, 2821, 2897, 3380,\n", | |
" 3100, 3191, 2845, 2839, 2896, 3160, 3161, 2785, 2551, 3041, 2997,\n", | |
" 2887, 2890, 3100, 2886, 3087, 2378, 2233, 3249, 3218], dtype=int32)>,\n", | |
" 'Hillshade_3pm': <tf.Tensor: id=429, shape=(64,), dtype=int32, numpy=\n", | |
"array([207, 125, 114, 104, 169, 151, 129, 187, 87, 114, 189, 71, 164,\n", | |
" 138, 150, 158, 86, 195, 116, 140, 157, 127, 147, 159, 141, 140,\n", | |
" 102, 191, 118, 183, 145, 124, 64, 134, 168, 75, 219, 149, 140,\n", | |
" 222, 157, 123, 73, 39, 142, 190, 143, 168, 89, 90, 156, 139,\n", | |
" 172, 118, 163, 134, 129, 120, 163, 157, 93, 97, 145, 139],\n", | |
" dtype=int32)>,\n", | |
" 'Hillshade_9am': <tf.Tensor: id=430, shape=(64,), dtype=int32, numpy=\n", | |
"array([179, 238, 242, 228, 195, 211, 232, 174, 250, 241, 193, 247, 181,\n", | |
" 222, 226, 155, 250, 180, 221, 210, 209, 235, 226, 190, 202, 172,\n", | |
" 244, 195, 235, 201, 199, 236, 227, 233, 215, 239, 161, 175, 212,\n", | |
" 148, 222, 211, 242, 248, 224, 189, 225, 183, 246, 211, 218, 223,\n", | |
" 212, 234, 214, 229, 218, 223, 191, 204, 244, 247, 229, 208],\n", | |
" dtype=int32)>,\n", | |
" 'Hillshade_Noon': <tf.Tensor: id=431, shape=(64,), dtype=int32, numpy=\n", | |
"array([252, 234, 229, 203, 225, 225, 228, 222, 220, 228, 254, 200, 208,\n", | |
" 225, 242, 182, 217, 253, 206, 214, 228, 231, 239, 212, 209, 182,\n", | |
" 221, 247, 222, 254, 209, 229, 175, 234, 248, 194, 244, 192, 216,\n", | |
" 248, 245, 202, 195, 180, 230, 239, 233, 213, 213, 178, 238, 227,\n", | |
" 252, 221, 251, 230, 213, 211, 217, 223, 213, 220, 242, 212],\n", | |
" dtype=int32)>,\n", | |
" 'Horizontal_Distance_To_Fire_Points': <tf.Tensor: id=432, shape=(64,), dtype=int32, numpy=\n", | |
"array([1849, 1651, 4341, 5246, 2114, 2794, 2279, 2313, 2818, 1934, 2047,\n", | |
" 878, 1382, 5409, 1008, 2088, 646, 726, 1343, 2350, 2744, 2546,\n", | |
" 1979, 2888, 2023, 1722, 624, 2142, 760, 1092, 752, 1366, 2501,\n", | |
" 1940, 774, 2589, 1622, 2442, 492, 1095, 4110, 3187, 3423, 484,\n", | |
" 2419, 1613, 2654, 313, 6199, 175, 741, 2738, 1146, 1329, 1774,\n", | |
" 2340, 1831, 3013, 6029, 1106, 1503, 1702, 2584, 2287], dtype=int32)>,\n", | |
" 'Horizontal_Distance_To_Hydrology': <tf.Tensor: id=433, shape=(64,), dtype=int32, numpy=\n", | |
"array([382, 212, 270, 0, 42, 30, 42, 85, 391, 212, 30, 551, 297,\n", | |
" 277, 0, 127, 124, 192, 120, 319, 42, 446, 240, 342, 175, 85,\n", | |
" 256, 212, 180, 330, 297, 60, 210, 153, 108, 192, 150, 342, 228,\n", | |
" 0, 433, 190, 170, 573, 247, 124, 150, 30, 85, 319, 210, 484,\n", | |
" 270, 90, 212, 0, 30, 256, 67, 60, 30, 60, 240, 573],\n", | |
" dtype=int32)>,\n", | |
" 'Horizontal_Distance_To_Roadways': <tf.Tensor: id=434, shape=(64,), dtype=int32, numpy=\n", | |
"array([5772, 2148, 5607, 2900, 2052, 3331, 1218, 1741, 2076, 690, 3421,\n", | |
" 1061, 366, 3598, 2468, 1047, 134, 859, 2954, 1485, 3300, 618,\n", | |
" 4806, 1846, 2330, 1728, 968, 1304, 108, 1342, 600, 451, 1583,\n", | |
" 1114, 1215, 1673, 3168, 2850, 5434, 2818, 5392, 1082, 4277, 603,\n", | |
" 2469, 722, 2759, 5060, 4293, 1366, 2288, 1717, 872, 1262, 4561,\n", | |
" 870, 3587, 4160, 4281, 1932, 658, 607, 2720, 1061], dtype=int32)>,\n", | |
" 'Slope': <tf.Tensor: id=435, shape=(64,), dtype=int32, numpy=\n", | |
"array([20, 11, 13, 16, 11, 8, 8, 17, 21, 13, 22, 22, 18, 7, 6, 28, 20,\n", | |
" 28, 15, 12, 7, 9, 4, 16, 15, 26, 15, 11, 10, 18, 15, 9, 26, 7,\n", | |
" 8, 21, 21, 23, 11, 32, 7, 17, 22, 29, 5, 11, 4, 17, 18, 25, 0,\n", | |
" 6, 15, 10, 20, 6, 12, 13, 14, 10, 16, 17, 8, 13], dtype=int32)>,\n", | |
" 'Soil_Type': <tf.Tensor: id=436, shape=(64, 40), dtype=int32, numpy=\n", | |
"array([[0, 0, 0, ..., 0, 0, 0],\n", | |
" [0, 0, 0, ..., 0, 0, 0],\n", | |
" [0, 0, 0, ..., 0, 0, 0],\n", | |
" ...,\n", | |
" [0, 0, 1, ..., 0, 0, 0],\n", | |
" [0, 0, 0, ..., 0, 0, 0],\n", | |
" [0, 0, 0, ..., 0, 0, 0]], dtype=int32)>,\n", | |
" 'Vertical_Distance_To_Hydrology': <tf.Tensor: id=437, shape=(64,), dtype=int32, numpy=\n", | |
"array([ 56, 44, 99, 0, 6, 3, 4, 21, -27, 39, 11, 69, 60,\n", | |
" 17, 0, 48, 28, 69, 16, 20, -4, 61, 10, 5, 37, 11,\n", | |
" 65, 76, 30, 1, 5, -4, 80, 25, -5, 36, 67, -14, 40,\n", | |
" 0, 55, 47, 40, 151, 34, 7, 21, 6, 13, 84, 35, -10,\n", | |
" 44, 24, 30, 0, 0, 14, 6, 8, 7, 12, 32, 101],\n", | |
" dtype=int32)>},\n", | |
" <tf.Tensor: id=438, shape=(64,), dtype=int64, numpy=\n", | |
"array([0, 0, 0, 0, 0, 0, 0, 2, 1, 2, 2, 2, 2, 0, 0, 0, 2, 2, 2, 0, 0, 0,\n", | |
" 0, 2, 2, 2, 0, 2, 3, 0, 2, 3, 2, 0, 2, 2, 0, 2, 0, 2, 0, 2, 0, 1,\n", | |
" 2, 2, 0, 0, 0, 0, 2, 0, 3, 2, 0, 2, 0, 0, 0, 2, 3, 3, 0, 2])>)]\n" | |
], | |
"name": "stdout" | |
} | |
] | |
}, | |
{ | |
"metadata": { | |
"id": "mL1wlG7MFbVC", | |
"colab_type": "text" | |
}, | |
"cell_type": "markdown", | |
"source": [ | |
"# Part 2 - Going deep on data and features ([Video](https://youtu.be/TOP2aLxcuu8))" | |
] | |
}, | |
{ | |
"metadata": { | |
"id": "Ql3m_M0kFbVF", | |
"colab_type": "code", | |
"colab": {} | |
}, | |
"cell_type": "code", | |
"source": [ | |
"# configuration objects that will be part of the model's graph\n", | |
"cover_type = tf.feature_column.categorical_column_with_identity(\"Cover_Type\", num_buckets=8)\n", | |
        "cover_embedding = tf.feature_column.embedding_column(cover_type, dimension=10)  # dense 10-dim embedding of the category"
], | |
"execution_count": 0, | |
"outputs": [] | |
}, | |
{ | |
"metadata": { | |
"id": "n5qU8dnGFbVK", | |
"colab_type": "code", | |
"colab": {} | |
}, | |
"cell_type": "code", | |
"source": [ | |
"numeric_cols = [\"Elevation\",\"Aspect\",\"Slope\",\"Horizontal_Distance_To_Hydrology\",\n", | |
" \"Vertical_Distance_To_Hydrology\",\"Horizontal_Distance_To_Roadways\",\n", | |
" \"Hillshade_9am\",\"Hillshade_Noon\",\"Hillshade_3pm\",\"Horizontal_Distance_To_Fire_Points\"]\n", | |
"numeric_features = [tf.feature_column.numeric_column(feat) for feat in numeric_cols]" | |
], | |
"execution_count": 0, | |
"outputs": [] | |
}, | |
{ | |
"metadata": { | |
"id": "PTV5dMmbFbVQ", | |
"colab_type": "code", | |
"colab": {} | |
}, | |
"cell_type": "code", | |
"source": [ | |
"soil_type = tf.feature_column.numeric_column(\"Soil_Type\", shape=(40, ))" | |
], | |
"execution_count": 0, | |
"outputs": [] | |
}, | |
{ | |
"metadata": { | |
"id": "Z1Z2t6RoFbVX", | |
"colab_type": "code", | |
"colab": {} | |
}, | |
"cell_type": "code", | |
"source": [ | |
"columns = numeric_features + [soil_type, cover_embedding]" | |
], | |
"execution_count": 0, | |
"outputs": [] | |
}, | |
{ | |
"metadata": { | |
"id": "0k-fCijHFbVd", | |
"colab_type": "text" | |
}, | |
"cell_type": "markdown", | |
"source": [ | |
        "As of January 2019, the tf.keras.layers.DenseFeatures layer does not exist in TF 1.12.0.\n",
        "Building the model with Keras layers as described in the video is therefore not possible; however, the following code using tf.estimator achieves a similar outcome."
] | |
}, | |
{ | |
"metadata": { | |
"id": "ox4aSgKzFbVf", | |
"colab_type": "code", | |
"colab": { | |
"base_uri": "https://localhost:8080/", | |
"height": 193 | |
}, | |
"outputId": "94510c71-6c55-4ddc-ae75-30f6192bb235" | |
}, | |
"cell_type": "code", | |
"source": [ | |
"model = tf.estimator.DNNClassifier(feature_columns=columns, n_classes=4,\n", | |
" hidden_units=[256, 16, 8])" | |
], | |
"execution_count": 15, | |
"outputs": [ | |
{ | |
"output_type": "stream", | |
"text": [ | |
"INFO:tensorflow:Using default config.\n", | |
"WARNING:tensorflow:Using temporary folder as model directory: /tmp/tmpoc5tjdcw\n", | |
"INFO:tensorflow:Using config: {'_model_dir': '/tmp/tmpoc5tjdcw', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': allow_soft_placement: true\n", | |
"graph_options {\n", | |
" rewrite_options {\n", | |
" meta_optimizer_iterations: ONE\n", | |
" }\n", | |
"}\n", | |
", '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_service': None, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7fee4fac5748>, '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}\n" | |
], | |
"name": "stdout" | |
} | |
] | |
}, | |
{ | |
"metadata": { | |
"id": "1B5AbqnHFbVj", | |
"colab_type": "text" | |
}, | |
"cell_type": "markdown", | |
"source": [ | |
"# Part 3 - Building and refining your models ([Video](https://www.youtube.com/watch?v=ChidCgtd1Lw))" | |
] | |
}, | |
{ | |
"metadata": { | |
"id": "2ExTSGcJFbVl", | |
"colab_type": "text" | |
}, | |
"cell_type": "markdown", | |
"source": [ | |
        "As described above, the example using Keras layers has been adapted to work with the current version of TensorFlow.\n",
        "We use tf.estimator instead of a Keras model; more details can be found from 4:10 in the video.\n",
        "\n",
        "As shown below, some methods such as compile are not available for estimators."
] | |
}, | |
{ | |
"metadata": { | |
"id": "NRh-s6RRFbVq", | |
"colab_type": "code", | |
"colab": { | |
"base_uri": "https://localhost:8080/", | |
"height": 237 | |
}, | |
"outputId": "027c7480-3965-4f58-a20d-6c43116d4036" | |
}, | |
"cell_type": "code", | |
"source": [ | |
"model.compile(\n", | |
" optimizer=tf.train.AdamOptimizer(),\n", | |
" loss=\"sparse_categorical_crossentropy\",\n", | |
" metrics=[\"accuracy\"] \n", | |
")" | |
], | |
"execution_count": 16, | |
"outputs": [ | |
{ | |
"output_type": "error", | |
"ename": "AttributeError", | |
"evalue": "ignored", | |
"traceback": [ | |
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m", | |
"\u001b[0;31mAttributeError\u001b[0m Traceback (most recent call last)", | |
"\u001b[0;32m<ipython-input-16-f752d44e3dd7>\u001b[0m in \u001b[0;36m<module>\u001b[0;34m()\u001b[0m\n\u001b[0;32m----> 1\u001b[0;31m model.compile(\n\u001b[0m\u001b[1;32m 2\u001b[0m \u001b[0moptimizer\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mtf\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mtrain\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mAdamOptimizer\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 3\u001b[0m \u001b[0mloss\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;34m\"sparse_categorical_crossentropy\"\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 4\u001b[0m \u001b[0mmetrics\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;34m\"accuracy\"\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 5\u001b[0m )\n", | |
"\u001b[0;31mAttributeError\u001b[0m: 'DNNClassifier' object has no attribute 'compile'" | |
] | |
} | |
] | |
}, | |
{ | |
"metadata": { | |
"id": "kU7RzMNWFbVw", | |
"colab_type": "code", | |
"colab": {} | |
}, | |
"cell_type": "code", | |
"source": [ | |
"def load_data(*filenames):\n", | |
" dataset = tf.data.experimental.CsvDataset(\n", | |
" filenames, defaults)\n", | |
" dataset = dataset.map(_parse_csv_row)\n", | |
" dataset = dataset.batch(64)\n", | |
" return dataset" | |
], | |
"execution_count": 0, | |
"outputs": [] | |
}, | |
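{ | |
"metadata": { | |
"id": "parse_csv_row_sketch", | |
"colab_type": "text" | |
}, | |
"cell_type": "markdown", | |
"source": [ | |
"The `load_data` input function above relies on `_parse_csv_row` (defined earlier in the notebook) to turn each raw CSV record into a `(features, label)` pair. As a rough, TensorFlow-free sketch of that idea (the column list here is truncated and hypothetical, not the dataset's full schema):\n", | |
"\n", | |
"```python\n", | |
"import csv\n", | |
"import io\n", | |
"\n", | |
"# Hypothetical, plain-Python sketch of a _parse_csv_row-style helper:\n", | |
"# split one CSV record into a feature dict and a label. Only a few of\n", | |
"# the Forest Cover Type columns are listed, for illustration.\n", | |
"COLUMNS = ['Elevation', 'Aspect', 'Slope', 'Cover_Type']\n", | |
"\n", | |
"def parse_csv_row(line):\n", | |
"    values = [int(v) for v in next(csv.reader(io.StringIO(line)))]\n", | |
"    features = dict(zip(COLUMNS, values))\n", | |
"    label = features.pop('Cover_Type')  # the class to predict\n", | |
"    return features, label\n", | |
"\n", | |
"features, label = parse_csv_row('2596,51,3,5')\n", | |
"print(features, label)\n", | |
"```" | |
] | |
}, | |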
{ | |
"metadata": { | |
"id": "hleE_dx0FbV2", | |
"colab_type": "text" | |
}, | |
"cell_type": "markdown", | |
"source": [ | |
"### Training the model/estimator" | |
] | |
}, | |
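{ | |
"metadata": { | |
"id": "input_fn_sketch", | |
"colab_type": "text" | |
}, | |
"cell_type": "markdown", | |
"source": [ | |
"`Estimator.train` takes a zero-argument callable (an `input_fn`) rather than a dataset object: the estimator invokes the callable itself, inside its own graph. A minimal, TensorFlow-free sketch of that inversion of control, using hypothetical stand-ins:\n", | |
"\n", | |
"```python\n", | |
"# Hypothetical stand-ins illustrating why load_data is wrapped in a\n", | |
"# lambda below: the estimator wants a callable it can invoke itself.\n", | |
"def load_batches():\n", | |
"    # stand-in for load_data('covtype.csv.train')\n", | |
"    return [1, 2, 3]\n", | |
"\n", | |
"def train(input_fn):\n", | |
"    data = input_fn()  # the estimator calls the input_fn, not us\n", | |
"    return sum(data)   # stand-in for the real training loop\n", | |
"\n", | |
"print(train(lambda: load_batches()))  # -> 6\n", | |
"```" | |
] | |
}, | |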
{ | |
"metadata": { | |
"id": "wiVeKmE_FbV5", | |
"colab_type": "code", | |
"colab": { | |
"base_uri": "https://localhost:8080/", | |
"height": 4550 | |
}, | |
"outputId": "6fd1c594-4f00-461f-aaf7-6cf6cca90e34" | |
}, | |
"cell_type": "code", | |
"source": [ | |
"nb_epoch = 2\n", | |
"for _ in range(nb_epoch):\n", | |
"  model.train(lambda: load_data(\"covtype.csv.train\"))" | |
], | |
"execution_count": 18, | |
"outputs": [ | |
{ | |
"output_type": "stream", | |
"text": [ | |
"INFO:tensorflow:Calling model_fn.\n", | |
"INFO:tensorflow:Done calling model_fn.\n", | |
"INFO:tensorflow:Create CheckpointSaverHook.\n", | |
"INFO:tensorflow:Graph was finalized.\n", | |
"INFO:tensorflow:Running local_init_op.\n", | |
"INFO:tensorflow:Done running local_init_op.\n", | |
"INFO:tensorflow:Saving checkpoints for 0 into /tmp/tmpoc5tjdcw/model.ckpt.\n", | |
"INFO:tensorflow:loss = 1563.7515, step = 1\n", | |
"INFO:tensorflow:global_step/sec: 125.934\n", | |
"INFO:tensorflow:loss = 54.0829, step = 101 (0.801 sec)\n", | |
"INFO:tensorflow:global_step/sec: 140.979\n", | |
"INFO:tensorflow:loss = 60.392498, step = 201 (0.705 sec)\n", | |
"INFO:tensorflow:global_step/sec: 140.561\n", | |
"INFO:tensorflow:loss = 50.694626, step = 301 (0.711 sec)\n", | |
"INFO:tensorflow:global_step/sec: 146.116\n", | |
"INFO:tensorflow:loss = 56.530968, step = 401 (0.684 sec)\n", | |
"INFO:tensorflow:global_step/sec: 142.203\n", | |
"INFO:tensorflow:loss = 52.510902, step = 501 (0.703 sec)\n", | |
"INFO:tensorflow:global_step/sec: 143.778\n", | |
"INFO:tensorflow:loss = 57.054874, step = 601 (0.695 sec)\n", | |
"INFO:tensorflow:global_step/sec: 139.115\n", | |
"INFO:tensorflow:loss = 56.987648, step = 701 (0.719 sec)\n", | |
"INFO:tensorflow:global_step/sec: 139.589\n", | |
"INFO:tensorflow:loss = 55.104744, step = 801 (0.716 sec)\n", | |
"INFO:tensorflow:global_step/sec: 140.143\n", | |
"INFO:tensorflow:loss = 53.28988, step = 901 (0.714 sec)\n", | |
"INFO:tensorflow:global_step/sec: 144.09\n", | |
"INFO:tensorflow:loss = 55.908768, step = 1001 (0.694 sec)\n", | |
"INFO:tensorflow:global_step/sec: 120.92\n", | |
"INFO:tensorflow:loss = 50.61058, step = 1101 (0.827 sec)\n", | |
"INFO:tensorflow:global_step/sec: 145.401\n", | |
"INFO:tensorflow:loss = 48.526443, step = 1201 (0.691 sec)\n", | |
"INFO:tensorflow:global_step/sec: 142.219\n", | |
"INFO:tensorflow:loss = 52.245483, step = 1301 (0.704 sec)\n", | |
"INFO:tensorflow:global_step/sec: 141.69\n", | |
"INFO:tensorflow:loss = 51.112408, step = 1401 (0.705 sec)\n", | |
"INFO:tensorflow:global_step/sec: 144.419\n", | |
"INFO:tensorflow:loss = 47.682133, step = 1501 (0.691 sec)\n", | |
"INFO:tensorflow:global_step/sec: 136.394\n", | |
"INFO:tensorflow:loss = 60.863514, step = 1601 (0.731 sec)\n", | |
"INFO:tensorflow:global_step/sec: 142.832\n", | |
"INFO:tensorflow:loss = 49.61512, step = 1701 (0.703 sec)\n", | |
"INFO:tensorflow:global_step/sec: 127.748\n", | |
"INFO:tensorflow:loss = 50.141735, step = 1801 (0.780 sec)\n", | |
"INFO:tensorflow:global_step/sec: 144.08\n", | |
"INFO:tensorflow:loss = 73.35028, step = 1901 (0.694 sec)\n", | |
"INFO:tensorflow:global_step/sec: 145.573\n", | |
"INFO:tensorflow:loss = 58.26995, step = 2001 (0.687 sec)\n", | |
"INFO:tensorflow:global_step/sec: 144.584\n", | |
"INFO:tensorflow:loss = 56.142826, step = 2101 (0.694 sec)\n", | |
"INFO:tensorflow:global_step/sec: 142.473\n", | |
"INFO:tensorflow:loss = 59.331448, step = 2201 (0.709 sec)\n", | |
"INFO:tensorflow:global_step/sec: 143.246\n", | |
"INFO:tensorflow:loss = 52.343704, step = 2301 (0.688 sec)\n", | |
"INFO:tensorflow:global_step/sec: 137.705\n", | |
"INFO:tensorflow:loss = 53.304695, step = 2401 (0.726 sec)\n", | |
"INFO:tensorflow:global_step/sec: 131.839\n", | |
"INFO:tensorflow:loss = 52.263824, step = 2501 (0.761 sec)\n", | |
"INFO:tensorflow:global_step/sec: 130.136\n", | |
"INFO:tensorflow:loss = 50.648746, step = 2601 (0.771 sec)\n", | |
"INFO:tensorflow:global_step/sec: 124.489\n", | |
"INFO:tensorflow:loss = 60.119858, step = 2701 (0.798 sec)\n", | |
"INFO:tensorflow:global_step/sec: 131.819\n", | |
"INFO:tensorflow:loss = 56.921307, step = 2801 (0.762 sec)\n", | |
"INFO:tensorflow:global_step/sec: 127.871\n", | |
"INFO:tensorflow:loss = 40.97254, step = 2901 (0.783 sec)\n", | |
"INFO:tensorflow:global_step/sec: 130.498\n", | |
"INFO:tensorflow:loss = 48.667816, step = 3001 (0.762 sec)\n", | |
"INFO:tensorflow:global_step/sec: 124.791\n", | |
"INFO:tensorflow:loss = 48.208115, step = 3101 (0.806 sec)\n", | |
"INFO:tensorflow:global_step/sec: 131.442\n", | |
"INFO:tensorflow:loss = 51.0486, step = 3201 (0.756 sec)\n", | |
"INFO:tensorflow:global_step/sec: 132.216\n", | |
"INFO:tensorflow:loss = 57.64761, step = 3301 (0.756 sec)\n", | |
"INFO:tensorflow:global_step/sec: 131.609\n", | |
"INFO:tensorflow:loss = 48.017967, step = 3401 (0.765 sec)\n", | |
"INFO:tensorflow:global_step/sec: 132.271\n", | |
"INFO:tensorflow:loss = 46.5169, step = 3501 (0.755 sec)\n", | |
"INFO:tensorflow:global_step/sec: 131.766\n", | |
"INFO:tensorflow:loss = 56.23243, step = 3601 (0.760 sec)\n", | |
"INFO:tensorflow:global_step/sec: 131.417\n", | |
"INFO:tensorflow:loss = 61.430374, step = 3701 (0.759 sec)\n", | |
"INFO:tensorflow:global_step/sec: 140.475\n", | |
"INFO:tensorflow:loss = 52.16754, step = 3801 (0.709 sec)\n", | |
"INFO:tensorflow:global_step/sec: 142.936\n", | |
"INFO:tensorflow:loss = 58.322666, step = 3901 (0.704 sec)\n", | |
"INFO:tensorflow:global_step/sec: 141.71\n", | |
"INFO:tensorflow:loss = 60.72654, step = 4001 (0.701 sec)\n", | |
"INFO:tensorflow:global_step/sec: 144.7\n", | |
"INFO:tensorflow:loss = 63.571835, step = 4101 (0.694 sec)\n", | |
"INFO:tensorflow:global_step/sec: 139.187\n", | |
"INFO:tensorflow:loss = 43.489136, step = 4201 (0.716 sec)\n", | |
"INFO:tensorflow:global_step/sec: 144.013\n", | |
"INFO:tensorflow:loss = 53.06375, step = 4301 (0.697 sec)\n", | |
"INFO:tensorflow:global_step/sec: 142.325\n", | |
"INFO:tensorflow:loss = 58.16475, step = 4401 (0.700 sec)\n", | |
"INFO:tensorflow:global_step/sec: 140.69\n", | |
"INFO:tensorflow:loss = 42.54741, step = 4501 (0.711 sec)\n", | |
"INFO:tensorflow:global_step/sec: 146.632\n", | |
"INFO:tensorflow:loss = 57.692978, step = 4601 (0.685 sec)\n", | |
"INFO:tensorflow:global_step/sec: 145.49\n", | |
"INFO:tensorflow:loss = 52.41902, step = 4701 (0.685 sec)\n", | |
"INFO:tensorflow:global_step/sec: 145.872\n", | |
"INFO:tensorflow:loss = 46.229134, step = 4801 (0.689 sec)\n", | |
"INFO:tensorflow:global_step/sec: 143.509\n", | |
"INFO:tensorflow:loss = 53.27777, step = 4901 (0.693 sec)\n", | |
"INFO:tensorflow:global_step/sec: 144.84\n", | |
"INFO:tensorflow:loss = 45.817062, step = 5001 (0.694 sec)\n", | |
"INFO:tensorflow:global_step/sec: 145.325\n", | |
"INFO:tensorflow:loss = 51.862717, step = 5101 (0.689 sec)\n", | |
"INFO:tensorflow:global_step/sec: 144.209\n", | |
"INFO:tensorflow:loss = 48.110218, step = 5201 (0.689 sec)\n", | |
"INFO:tensorflow:global_step/sec: 144.853\n", | |
"INFO:tensorflow:loss = 56.061565, step = 5301 (0.690 sec)\n", | |
"INFO:tensorflow:global_step/sec: 146.059\n", | |
"INFO:tensorflow:loss = 56.900913, step = 5401 (0.688 sec)\n", | |
"INFO:tensorflow:global_step/sec: 144.748\n", | |
"INFO:tensorflow:loss = 45.794617, step = 5501 (0.691 sec)\n", | |
"INFO:tensorflow:global_step/sec: 145.174\n", | |
"INFO:tensorflow:loss = 57.858955, step = 5601 (0.686 sec)\n", | |
"INFO:tensorflow:global_step/sec: 145.505\n", | |
"INFO:tensorflow:loss = 53.602364, step = 5701 (0.691 sec)\n", | |
"INFO:tensorflow:global_step/sec: 143.529\n", | |
"INFO:tensorflow:loss = 41.861874, step = 5801 (0.692 sec)\n", | |
"INFO:tensorflow:global_step/sec: 143.351\n", | |
"INFO:tensorflow:loss = 44.658897, step = 5901 (0.698 sec)\n", | |
"INFO:tensorflow:global_step/sec: 141.892\n", | |
"INFO:tensorflow:loss = 55.12124, step = 6001 (0.705 sec)\n", | |
"INFO:tensorflow:Saving checkpoints for 6083 into /tmp/tmpoc5tjdcw/model.ckpt.\n", | |
"INFO:tensorflow:Loss for final step: 19.818727.\n", | |
"INFO:tensorflow:Calling model_fn.\n", | |
"INFO:tensorflow:Done calling model_fn.\n", | |
"INFO:tensorflow:Create CheckpointSaverHook.\n", | |
"INFO:tensorflow:Graph was finalized.\n", | |
"INFO:tensorflow:Restoring parameters from /tmp/tmpoc5tjdcw/model.ckpt-6083\n", | |
"INFO:tensorflow:Running local_init_op.\n", | |
"INFO:tensorflow:Done running local_init_op.\n", | |
"INFO:tensorflow:Saving checkpoints for 6083 into /tmp/tmpoc5tjdcw/model.ckpt.\n", | |
"INFO:tensorflow:loss = 50.73872, step = 6084\n", | |
"INFO:tensorflow:global_step/sec: 124.892\n", | |
"INFO:tensorflow:loss = 46.179607, step = 6184 (0.806 sec)\n", | |
"INFO:tensorflow:global_step/sec: 145.727\n", | |
"INFO:tensorflow:loss = 57.620033, step = 6284 (0.686 sec)\n", | |
"INFO:tensorflow:global_step/sec: 146.717\n", | |
"INFO:tensorflow:loss = 47.93088, step = 6384 (0.679 sec)\n", | |
"INFO:tensorflow:global_step/sec: 143.142\n", | |
"INFO:tensorflow:loss = 53.190434, step = 6484 (0.701 sec)\n", | |
"INFO:tensorflow:global_step/sec: 143.976\n", | |
"INFO:tensorflow:loss = 47.101044, step = 6584 (0.692 sec)\n", | |
"INFO:tensorflow:global_step/sec: 144.171\n", | |
"INFO:tensorflow:loss = 50.72544, step = 6684 (0.697 sec)\n", | |
"INFO:tensorflow:global_step/sec: 141.984\n", | |
"INFO:tensorflow:loss = 53.46061, step = 6784 (0.701 sec)\n", | |
"INFO:tensorflow:global_step/sec: 141.741\n", | |
"INFO:tensorflow:loss = 52.17375, step = 6884 (0.706 sec)\n", | |
"INFO:tensorflow:global_step/sec: 144.933\n", | |
"INFO:tensorflow:loss = 51.16942, step = 6984 (0.693 sec)\n", | |
"INFO:tensorflow:global_step/sec: 143.574\n", | |
"INFO:tensorflow:loss = 51.577843, step = 7084 (0.693 sec)\n", | |
"INFO:tensorflow:global_step/sec: 141.446\n", | |
"INFO:tensorflow:loss = 45.820217, step = 7184 (0.710 sec)\n", | |
"INFO:tensorflow:global_step/sec: 141.15\n", | |
"INFO:tensorflow:loss = 55.1689, step = 7284 (0.710 sec)\n", | |
"INFO:tensorflow:global_step/sec: 139.742\n", | |
"INFO:tensorflow:loss = 50.02367, step = 7384 (0.715 sec)\n", | |
"INFO:tensorflow:global_step/sec: 141.001\n", | |
"INFO:tensorflow:loss = 44.8162, step = 7484 (0.706 sec)\n", | |
"INFO:tensorflow:global_step/sec: 140.997\n", | |
"INFO:tensorflow:loss = 47.461018, step = 7584 (0.712 sec)\n", | |
"INFO:tensorflow:global_step/sec: 138.742\n", | |
"INFO:tensorflow:loss = 54.59662, step = 7684 (0.721 sec)\n", | |
"INFO:tensorflow:global_step/sec: 142.805\n", | |
"INFO:tensorflow:loss = 44.98487, step = 7784 (0.698 sec)\n", | |
"INFO:tensorflow:global_step/sec: 143.733\n", | |
"INFO:tensorflow:loss = 45.31526, step = 7884 (0.695 sec)\n", | |
"INFO:tensorflow:global_step/sec: 137.838\n", | |
"INFO:tensorflow:loss = 55.089863, step = 7984 (0.726 sec)\n", | |
"INFO:tensorflow:global_step/sec: 139.947\n", | |
"INFO:tensorflow:loss = 53.3806, step = 8084 (0.715 sec)\n", | |
"INFO:tensorflow:global_step/sec: 143.083\n", | |
"INFO:tensorflow:loss = 52.81919, step = 8184 (0.699 sec)\n", | |
"INFO:tensorflow:global_step/sec: 140.425\n", | |
"INFO:tensorflow:loss = 54.40442, step = 8284 (0.712 sec)\n", | |
"INFO:tensorflow:global_step/sec: 142.251\n", | |
"INFO:tensorflow:loss = 46.548256, step = 8384 (0.703 sec)\n", | |
"INFO:tensorflow:global_step/sec: 142.45\n", | |
"INFO:tensorflow:loss = 51.30117, step = 8484 (0.704 sec)\n", | |
"INFO:tensorflow:global_step/sec: 141.607\n", | |
"INFO:tensorflow:loss = 46.064327, step = 8584 (0.708 sec)\n", | |
"INFO:tensorflow:global_step/sec: 143.544\n", | |
"INFO:tensorflow:loss = 48.06105, step = 8684 (0.696 sec)\n", | |
"INFO:tensorflow:global_step/sec: 142.901\n", | |
"INFO:tensorflow:loss = 56.247322, step = 8784 (0.696 sec)\n", | |
"INFO:tensorflow:global_step/sec: 142.632\n", | |
"INFO:tensorflow:loss = 51.228027, step = 8884 (0.701 sec)\n", | |
"INFO:tensorflow:global_step/sec: 141.163\n", | |
"INFO:tensorflow:loss = 36.852295, step = 8984 (0.708 sec)\n", | |
"INFO:tensorflow:global_step/sec: 139.334\n", | |
"INFO:tensorflow:loss = 42.330986, step = 9084 (0.718 sec)\n", | |
"INFO:tensorflow:global_step/sec: 144.794\n", | |
"INFO:tensorflow:loss = 43.96254, step = 9184 (0.695 sec)\n", | |
"INFO:tensorflow:global_step/sec: 142.314\n", | |
"INFO:tensorflow:loss = 45.464436, step = 9284 (0.698 sec)\n", | |
"INFO:tensorflow:global_step/sec: 142.289\n", | |
"INFO:tensorflow:loss = 47.248096, step = 9384 (0.703 sec)\n", | |
"INFO:tensorflow:global_step/sec: 141.705\n", | |
"INFO:tensorflow:loss = 48.061234, step = 9484 (0.710 sec)\n", | |
"INFO:tensorflow:global_step/sec: 137.473\n", | |
"INFO:tensorflow:loss = 39.808403, step = 9584 (0.723 sec)\n", | |
"INFO:tensorflow:global_step/sec: 140.624\n", | |
"INFO:tensorflow:loss = 54.48674, step = 9684 (0.711 sec)\n", | |
"INFO:tensorflow:global_step/sec: 142.101\n", | |
"INFO:tensorflow:loss = 55.452187, step = 9784 (0.704 sec)\n", | |
"INFO:tensorflow:global_step/sec: 139.439\n", | |
"INFO:tensorflow:loss = 49.383556, step = 9884 (0.717 sec)\n", | |
"INFO:tensorflow:global_step/sec: 138.687\n", | |
"INFO:tensorflow:loss = 49.048637, step = 9984 (0.724 sec)\n", | |
"INFO:tensorflow:global_step/sec: 142.234\n", | |
"INFO:tensorflow:loss = 55.28456, step = 10084 (0.703 sec)\n", | |
"INFO:tensorflow:global_step/sec: 141.778\n", | |
"INFO:tensorflow:loss = 56.285095, step = 10184 (0.702 sec)\n", | |
"INFO:tensorflow:global_step/sec: 143.473\n", | |
"INFO:tensorflow:loss = 38.605423, step = 10284 (0.697 sec)\n", | |
"INFO:tensorflow:global_step/sec: 147.191\n", | |
"INFO:tensorflow:loss = 42.116234, step = 10384 (0.679 sec)\n", | |
"INFO:tensorflow:global_step/sec: 139.867\n", | |
"INFO:tensorflow:loss = 49.800972, step = 10484 (0.715 sec)\n", | |
"INFO:tensorflow:global_step/sec: 143.772\n", | |
"INFO:tensorflow:loss = 34.982292, step = 10584 (0.696 sec)\n", | |
"INFO:tensorflow:global_step/sec: 141.879\n", | |
"INFO:tensorflow:loss = 48.09848, step = 10684 (0.710 sec)\n", | |
"INFO:tensorflow:global_step/sec: 141.511\n", | |
"INFO:tensorflow:loss = 45.812664, step = 10784 (0.701 sec)\n", | |
"INFO:tensorflow:global_step/sec: 144.204\n", | |
"INFO:tensorflow:loss = 35.634865, step = 10884 (0.693 sec)\n", | |
"INFO:tensorflow:global_step/sec: 143.072\n", | |
"INFO:tensorflow:loss = 44.835922, step = 10984 (0.699 sec)\n", | |
"INFO:tensorflow:global_step/sec: 143.269\n", | |
"INFO:tensorflow:loss = 35.582657, step = 11084 (0.700 sec)\n", | |
"INFO:tensorflow:global_step/sec: 143.555\n", | |
"INFO:tensorflow:loss = 42.163185, step = 11184 (0.694 sec)\n", | |
"INFO:tensorflow:global_step/sec: 146.598\n", | |
"INFO:tensorflow:loss = 35.836525, step = 11284 (0.682 sec)\n", | |
"INFO:tensorflow:global_step/sec: 146.434\n", | |
"INFO:tensorflow:loss = 46.57731, step = 11384 (0.683 sec)\n", | |
"INFO:tensorflow:global_step/sec: 143.294\n", | |
"INFO:tensorflow:loss = 51.163036, step = 11484 (0.701 sec)\n", | |
"INFO:tensorflow:global_step/sec: 144.755\n", | |
"INFO:tensorflow:loss = 39.366745, step = 11584 (0.688 sec)\n", | |
"INFO:tensorflow:global_step/sec: 142.696\n", | |
"INFO:tensorflow:loss = 46.037136, step = 11684 (0.706 sec)\n", | |
"INFO:tensorflow:global_step/sec: 143.905\n", | |
"INFO:tensorflow:loss = 44.876427, step = 11784 (0.690 sec)\n", | |
"INFO:tensorflow:global_step/sec: 143.408\n", | |
"INFO:tensorflow:loss = 34.15465, step = 11884 (0.699 sec)\n", | |
"INFO:tensorflow:global_step/sec: 142.664\n", | |
"INFO:tensorflow:loss = 38.461205, step = 11984 (0.700 sec)\n", | |
"INFO:tensorflow:global_step/sec: 143.557\n", | |
"INFO:tensorflow:loss = 48.154327, step = 12084 (0.697 sec)\n", | |
"INFO:tensorflow:Saving checkpoints for 12166 into /tmp/tmpoc5tjdcw/model.ckpt.\n", | |
"INFO:tensorflow:Loss for final step: 15.786152.\n" | |
], | |
"name": "stdout" | |
} | |
] | |
}, | |
{ | |
"metadata": { | |
"id": "LQmpFUwNFbV-", | |
"colab_type": "text" | |
}, | |
"cell_type": "markdown", | |
"source": [ | |
"### Validating the model" | |
] | |
}, | |
{ | |
"metadata": { | |
"colab_type": "code", | |
"id": "kxNpMGbDOAA1", | |
"outputId": "acf57836-59ff-463d-d844-7fe6655b1c5c", | |
"colab": { | |
"base_uri": "https://localhost:8080/", | |
"height": 382 | |
} | |
}, | |
"cell_type": "code", | |
"source": [ | |
"print(model.evaluate(lambda: load_data(\"covtype.csv.test\"), steps=50))" | |
], | |
"execution_count": 19, | |
"outputs": [ | |
{ | |
"output_type": "stream", | |
"text": [ | |
"INFO:tensorflow:Calling model_fn.\n", | |
"INFO:tensorflow:Done calling model_fn.\n", | |
"INFO:tensorflow:Starting evaluation at 2019-02-03-16:09:56\n", | |
"INFO:tensorflow:Graph was finalized.\n", | |
"INFO:tensorflow:Restoring parameters from /tmp/tmpoc5tjdcw/model.ckpt-12166\n", | |
"INFO:tensorflow:Running local_init_op.\n", | |
"INFO:tensorflow:Done running local_init_op.\n", | |
"INFO:tensorflow:Evaluation [5/50]\n", | |
"INFO:tensorflow:Evaluation [10/50]\n", | |
"INFO:tensorflow:Evaluation [15/50]\n", | |
"INFO:tensorflow:Evaluation [20/50]\n", | |
"INFO:tensorflow:Evaluation [25/50]\n", | |
"INFO:tensorflow:Evaluation [30/50]\n", | |
"INFO:tensorflow:Evaluation [35/50]\n", | |
"INFO:tensorflow:Evaluation [40/50]\n", | |
"INFO:tensorflow:Evaluation [45/50]\n", | |
"INFO:tensorflow:Evaluation [50/50]\n", | |
"INFO:tensorflow:Finished evaluation at 2019-02-03-16:09:57\n", | |
"INFO:tensorflow:Saving dict for global step 12166: accuracy = 0.7009375, average_loss = 0.6522408, global_step = 12166, loss = 41.743412\n", | |
"INFO:tensorflow:Saving 'checkpoint_path' summary for global step 12166: /tmp/tmpoc5tjdcw/model.ckpt-12166\n", | |
"{'accuracy': 0.7009375, 'average_loss': 0.6522408, 'loss': 41.743412, 'global_step': 12166}\n" | |
], | |
"name": "stdout" | |
} | |
] | |
}, | |
{ | |
"metadata": { | |
"colab_type": "text", | |
"id": "wR2LG5L-OAA8" | |
}, | |
"cell_type": "markdown", | |
"source": [ | |
"### Exporting the model" | |
] | |
}, | |
{ | |
"metadata": { | |
"colab_type": "code", | |
"id": "x2CDmMtWf4Up", | |
"colab": {} | |
}, | |
"cell_type": "code", | |
"source": [ | |
"features_sample = list(dataset.take(1))[0][0]\n", | |
"input_receiver_fn = tf.estimator.export.build_raw_serving_input_receiver_fn(\n", | |
" features_sample)" | |
], | |
"execution_count": 0, | |
"outputs": [] | |
}, | |
{ | |
"metadata": { | |
"id": "MXbFznYWFbWa", | |
"colab_type": "code", | |
"colab": { | |
"base_uri": "https://localhost:8080/", | |
"height": 367 | |
}, | |
"outputId": "a46d1bee-ea22-49e1-cd63-4e4ed1bf6b48" | |
}, | |
"cell_type": "code", | |
"source": [ | |
"model.export_saved_model(\n", | |
" export_dir_base=\"models\",\n", | |
" serving_input_receiver_fn=input_receiver_fn\n", | |
")" | |
], | |
"execution_count": 22, | |
"outputs": [ | |
{ | |
"output_type": "stream", | |
"text": [ | |
"INFO:tensorflow:Calling model_fn.\n", | |
"INFO:tensorflow:Done calling model_fn.\n", | |
"INFO:tensorflow:Signatures INCLUDED in export for Classify: None\n", | |
"INFO:tensorflow:Signatures INCLUDED in export for Regress: None\n", | |
"INFO:tensorflow:Signatures INCLUDED in export for Predict: ['predict']\n", | |
"INFO:tensorflow:Signatures INCLUDED in export for Train: None\n", | |
"INFO:tensorflow:Signatures INCLUDED in export for Eval: None\n", | |
"INFO:tensorflow:Signatures EXCLUDED from export because they cannot be be served via TensorFlow Serving APIs:\n", | |
"INFO:tensorflow:'serving_default' : Classification input must be a single string Tensor; got {'Elevation': <tf.Tensor 'Placeholder:0' shape=(?,) dtype=int32>, 'Aspect': <tf.Tensor 'Placeholder_1:0' shape=(?,) dtype=int32>, 'Slope': <tf.Tensor 'Placeholder_2:0' shape=(?,) dtype=int32>, 'Horizontal_Distance_To_Hydrology': <tf.Tensor 'Placeholder_3:0' shape=(?,) dtype=int32>, 'Vertical_Distance_To_Hydrology': <tf.Tensor 'Placeholder_4:0' shape=(?,) dtype=int32>, 'Horizontal_Distance_To_Roadways': <tf.Tensor 'Placeholder_5:0' shape=(?,) dtype=int32>, 'Hillshade_9am': <tf.Tensor 'Placeholder_6:0' shape=(?,) dtype=int32>, 'Hillshade_Noon': <tf.Tensor 'Placeholder_7:0' shape=(?,) dtype=int32>, 'Hillshade_3pm': <tf.Tensor 'Placeholder_8:0' shape=(?,) dtype=int32>, 'Horizontal_Distance_To_Fire_Points': <tf.Tensor 'Placeholder_9:0' shape=(?,) dtype=int32>, 'Soil_Type': <tf.Tensor 'Placeholder_10:0' shape=(?, 40) dtype=int32>, 'Cover_Type': <tf.Tensor 'Placeholder_11:0' shape=(?,) dtype=int32>}\n", | |
"INFO:tensorflow:'classification' : Classification input must be a single string Tensor; got {'Elevation': <tf.Tensor 'Placeholder:0' shape=(?,) dtype=int32>, 'Aspect': <tf.Tensor 'Placeholder_1:0' shape=(?,) dtype=int32>, 'Slope': <tf.Tensor 'Placeholder_2:0' shape=(?,) dtype=int32>, 'Horizontal_Distance_To_Hydrology': <tf.Tensor 'Placeholder_3:0' shape=(?,) dtype=int32>, 'Vertical_Distance_To_Hydrology': <tf.Tensor 'Placeholder_4:0' shape=(?,) dtype=int32>, 'Horizontal_Distance_To_Roadways': <tf.Tensor 'Placeholder_5:0' shape=(?,) dtype=int32>, 'Hillshade_9am': <tf.Tensor 'Placeholder_6:0' shape=(?,) dtype=int32>, 'Hillshade_Noon': <tf.Tensor 'Placeholder_7:0' shape=(?,) dtype=int32>, 'Hillshade_3pm': <tf.Tensor 'Placeholder_8:0' shape=(?,) dtype=int32>, 'Horizontal_Distance_To_Fire_Points': <tf.Tensor 'Placeholder_9:0' shape=(?,) dtype=int32>, 'Soil_Type': <tf.Tensor 'Placeholder_10:0' shape=(?, 40) dtype=int32>, 'Cover_Type': <tf.Tensor 'Placeholder_11:0' shape=(?,) dtype=int32>}\n", | |
"WARNING:tensorflow:Export includes no default signature!\n", | |
"INFO:tensorflow:Restoring parameters from /tmp/tmpoc5tjdcw/model.ckpt-12166\n", | |
"WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/estimator/estimator.py:1044: calling SavedModelBuilder.add_meta_graph_and_variables (from tensorflow.python.saved_model.builder_impl) with legacy_init_op is deprecated and will be removed in a future version.\n", | |
"Instructions for updating:\n", | |
"Pass your op to the equivalent parameter main_op instead.\n", | |
"INFO:tensorflow:Assets added to graph.\n", | |
"INFO:tensorflow:No assets to write.\n", | |
"INFO:tensorflow:SavedModel written to: models/temp-b'1549210197'/saved_model.pb\n" | |
], | |
"name": "stdout" | |
}, | |
{ | |
"output_type": "execute_result", | |
"data": { | |
"text/plain": [ | |
"b'models/1549210197'" | |
] | |
}, | |
"metadata": { | |
"tags": [] | |
}, | |
"execution_count": 22 | |
} | |
] | |
}, | |
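{ | |
"metadata": { | |
"id": "latest_export_sketch", | |
"colab_type": "text" | |
}, | |
"cell_type": "markdown", | |
"source": [ | |
"`export_saved_model` names each export directory under `export_dir_base` with a Unix timestamp (`models/1549210197` above), so the most recent export is simply the numerically largest subdirectory. A small sketch of locating it (`latest_export` is a hypothetical helper demonstrated on a throwaway directory tree, not part of the TensorFlow API):\n", | |
"\n", | |
"```python\n", | |
"import os\n", | |
"import tempfile\n", | |
"\n", | |
"def latest_export(base_dir):\n", | |
"    # Export directories are named by Unix timestamp; pick the largest.\n", | |
"    versions = [d for d in os.listdir(base_dir) if d.isdigit()]\n", | |
"    return os.path.join(base_dir, max(versions, key=int))\n", | |
"\n", | |
"# Simulate two exports in a throwaway base directory.\n", | |
"base = tempfile.mkdtemp()\n", | |
"for ts in ('1549210197', '1549210300'):\n", | |
"    os.mkdir(os.path.join(base, ts))\n", | |
"print(latest_export(base))\n", | |
"```" | |
] | |
}, | |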
{ | |
"metadata": { | |
"id": "dhbRkuNYFbWd", | |
"colab_type": "text" | |
}, | |
"cell_type": "markdown", | |
"source": [ | |
"## Swapping the model" | |
] | |
}, | |
{ | |
"metadata": { | |
"id": "AKe9EjVJFbWe", | |
"colab_type": "text" | |
}, | |
"cell_type": "markdown", | |
"source": [ | |
"Since the previous section used an estimator rather than the Keras model, swapping in a different model is completely straightforward." | |
] | |
}, | |
{ | |
"metadata": { | |
"scrolled": true, | |
"id": "aZ1UhrkIFbWf", | |
"colab_type": "code", | |
"colab": { | |
"base_uri": "https://localhost:8080/", | |
"height": 5369 | |
}, | |
"outputId": "c445088a-1741-4f02-9c71-5c4fc347f05c" | |
}, | |
"cell_type": "code", | |
"source": [ | |
"# model definition\n", | |
"model = tf.estimator.DNNLinearCombinedClassifier(\n", | |
" linear_feature_columns=[cover_type, soil_type],\n", | |
" dnn_feature_columns=numeric_features,\n", | |
" dnn_hidden_units=[256, 16, 8],\n", | |
" n_classes=4\n", | |
")\n", | |
"\n", | |
"# model training\n", | |
"nb_epoch = 2\n", | |
"for _ in range(nb_epoch):\n", | |
"  model.train(lambda: load_data(\"covtype.csv.train\"))\n", | |
"\n", | |
"# model validation\n", | |
"print(model.evaluate(lambda: load_data(\"covtype.csv.test\"), steps=50))\n", | |
"\n", | |
"# exporting the model\n", | |
"model.export_saved_model(\n", | |
" export_dir_base=\"models\",\n", | |
" serving_input_receiver_fn=input_receiver_fn\n", | |
")" | |
], | |
"execution_count": 23, | |
"outputs": [ | |
{ | |
"output_type": "stream", | |
"text": [ | |
"INFO:tensorflow:Using default config.\n", | |
"WARNING:tensorflow:Using temporary folder as model directory: /tmp/tmpx_4kh1pp\n", | |
"INFO:tensorflow:Using config: {'_model_dir': '/tmp/tmpx_4kh1pp', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': allow_soft_placement: true\n", | |
"graph_options {\n", | |
" rewrite_options {\n", | |
" meta_optimizer_iterations: ONE\n", | |
" }\n", | |
"}\n", | |
", '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_service': None, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7fee4e8b7eb8>, '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}\n", | |
"INFO:tensorflow:Calling model_fn.\n", | |
"INFO:tensorflow:Done calling model_fn.\n", | |
"INFO:tensorflow:Create CheckpointSaverHook.\n", | |
"INFO:tensorflow:Graph was finalized.\n", | |
"INFO:tensorflow:Running local_init_op.\n", | |
"INFO:tensorflow:Done running local_init_op.\n", | |
"INFO:tensorflow:Saving checkpoints for 0 into /tmp/tmpx_4kh1pp/model.ckpt.\n", | |
"INFO:tensorflow:loss = 21178.918, step = 1\n", | |
"INFO:tensorflow:global_step/sec: 124.621\n", | |
"INFO:tensorflow:loss = 695.51685, step = 101 (0.804 sec)\n", | |
"INFO:tensorflow:global_step/sec: 144.57\n", | |
"INFO:tensorflow:loss = 80.324326, step = 201 (0.693 sec)\n", | |
"INFO:tensorflow:global_step/sec: 148.545\n", | |
"INFO:tensorflow:loss = 82.16355, step = 301 (0.672 sec)\n", | |
"INFO:tensorflow:global_step/sec: 147.094\n", | |
"INFO:tensorflow:loss = 72.09811, step = 401 (0.680 sec)\n", | |
"INFO:tensorflow:global_step/sec: 140.449\n", | |
"INFO:tensorflow:loss = 64.45215, step = 501 (0.712 sec)\n", | |
"INFO:tensorflow:global_step/sec: 145.797\n", | |
"INFO:tensorflow:loss = 55.31878, step = 601 (0.686 sec)\n", | |
"INFO:tensorflow:global_step/sec: 146.884\n", | |
"INFO:tensorflow:loss = 57.930946, step = 701 (0.683 sec)\n", | |
"INFO:tensorflow:global_step/sec: 147.77\n", | |
"INFO:tensorflow:loss = 58.940742, step = 801 (0.675 sec)\n", | |
"INFO:tensorflow:global_step/sec: 132.308\n", | |
"INFO:tensorflow:loss = 52.013374, step = 901 (0.760 sec)\n", | |
"INFO:tensorflow:global_step/sec: 146.264\n", | |
"INFO:tensorflow:loss = 50.14777, step = 1001 (0.679 sec)\n", | |
"INFO:tensorflow:global_step/sec: 143.131\n", | |
"INFO:tensorflow:loss = 46.58689, step = 1101 (0.698 sec)\n", | |
"INFO:tensorflow:global_step/sec: 143.793\n", | |
"INFO:tensorflow:loss = 47.136314, step = 1201 (0.695 sec)\n", | |
"INFO:tensorflow:global_step/sec: 145.61\n", | |
"INFO:tensorflow:loss = 46.79516, step = 1301 (0.687 sec)\n", | |
"INFO:tensorflow:global_step/sec: 143.846\n", | |
"INFO:tensorflow:loss = 48.593464, step = 1401 (0.699 sec)\n", | |
"INFO:tensorflow:global_step/sec: 140.033\n", | |
"INFO:tensorflow:loss = 44.56623, step = 1501 (0.710 sec)\n", | |
"INFO:tensorflow:global_step/sec: 123.601\n", | |
"INFO:tensorflow:loss = 46.604683, step = 1601 (0.809 sec)\n", | |
"INFO:tensorflow:global_step/sec: 146.05\n", | |
"INFO:tensorflow:loss = 40.529945, step = 1701 (0.685 sec)\n", | |
"INFO:tensorflow:global_step/sec: 150.624\n", | |
"INFO:tensorflow:loss = 42.553047, step = 1801 (0.664 sec)\n", | |
"INFO:tensorflow:global_step/sec: 148.968\n", | |
"INFO:tensorflow:loss = 58.286285, step = 1901 (0.671 sec)\n", | |
"INFO:tensorflow:global_step/sec: 149.646\n", | |
"INFO:tensorflow:loss = 47.92729, step = 2001 (0.668 sec)\n", | |
"INFO:tensorflow:global_step/sec: 148.996\n", | |
"INFO:tensorflow:loss = 47.835617, step = 2101 (0.671 sec)\n", | |
"INFO:tensorflow:global_step/sec: 143.741\n", | |
"INFO:tensorflow:loss = 42.22122, step = 2201 (0.696 sec)\n", | |
"INFO:tensorflow:global_step/sec: 150.339\n", | |
"INFO:tensorflow:loss = 41.930344, step = 2301 (0.668 sec)\n", | |
"INFO:tensorflow:global_step/sec: 150.075\n", | |
"INFO:tensorflow:loss = 41.02935, step = 2401 (0.667 sec)\n", | |
"INFO:tensorflow:global_step/sec: 135.393\n", | |
"INFO:tensorflow:loss = 43.472324, step = 2501 (0.736 sec)\n", | |
"INFO:tensorflow:global_step/sec: 145.757\n", | |
"INFO:tensorflow:loss = 40.552483, step = 2601 (0.690 sec)\n", | |
"INFO:tensorflow:global_step/sec: 148.831\n", | |
"INFO:tensorflow:loss = 40.73275, step = 2701 (0.668 sec)\n", | |
"INFO:tensorflow:global_step/sec: 144.073\n", | |
"INFO:tensorflow:loss = 44.880142, step = 2801 (0.694 sec)\n", | |
"INFO:tensorflow:global_step/sec: 146.563\n", | |
"INFO:tensorflow:loss = 35.04409, step = 2901 (0.686 sec)\n", | |
"INFO:tensorflow:global_step/sec: 148.58\n", | |
"INFO:tensorflow:loss = 36.582817, step = 3001 (0.672 sec)\n", | |
"INFO:tensorflow:global_step/sec: 147.986\n", | |
"INFO:tensorflow:loss = 40.33685, step = 3101 (0.672 sec)\n", | |
"INFO:tensorflow:global_step/sec: 148.604\n", | |
"INFO:tensorflow:loss = 36.125923, step = 3201 (0.673 sec)\n", | |
"INFO:tensorflow:global_step/sec: 147.71\n", | |
"INFO:tensorflow:loss = 44.038193, step = 3301 (0.677 sec)\n", | |
"INFO:tensorflow:global_step/sec: 146.841\n", | |
"INFO:tensorflow:loss = 37.700592, step = 3401 (0.681 sec)\n", | |
"INFO:tensorflow:global_step/sec: 147.767\n", | |
"INFO:tensorflow:loss = 36.90215, step = 3501 (0.677 sec)\n", | |
"INFO:tensorflow:global_step/sec: 146.638\n", | |
"INFO:tensorflow:loss = 44.456184, step = 3601 (0.682 sec)\n", | |
"INFO:tensorflow:global_step/sec: 146.123\n", | |
"INFO:tensorflow:loss = 43.691116, step = 3701 (0.685 sec)\n", | |
"INFO:tensorflow:global_step/sec: 147.202\n", | |
"INFO:tensorflow:loss = 34.283455, step = 3801 (0.679 sec)\n", | |
"INFO:tensorflow:global_step/sec: 143.189\n", | |
"INFO:tensorflow:loss = 37.454475, step = 3901 (0.699 sec)\n", | |
"INFO:tensorflow:global_step/sec: 150.086\n", | |
"INFO:tensorflow:loss = 49.208107, step = 4001 (0.666 sec)\n", | |
"INFO:tensorflow:global_step/sec: 142.35\n", | |
"INFO:tensorflow:loss = 43.246468, step = 4101 (0.706 sec)\n", | |
"INFO:tensorflow:global_step/sec: 133.637\n", | |
"INFO:tensorflow:loss = 35.596638, step = 4201 (0.745 sec)\n", | |
"INFO:tensorflow:global_step/sec: 146.987\n", | |
"INFO:tensorflow:loss = 34.981503, step = 4301 (0.684 sec)\n", | |
"INFO:tensorflow:global_step/sec: 148.917\n", | |
"INFO:tensorflow:loss = 39.480564, step = 4401 (0.667 sec)\n", | |
"INFO:tensorflow:global_step/sec: 147.352\n", | |
"INFO:tensorflow:loss = 29.907627, step = 4501 (0.679 sec)\n", | |
"INFO:tensorflow:global_step/sec: 128.565\n", | |
"INFO:tensorflow:loss = 45.20193, step = 4601 (0.778 sec)\n", | |
"INFO:tensorflow:global_step/sec: 142.212\n", | |
"INFO:tensorflow:loss = 36.020462, step = 4701 (0.706 sec)\n", | |
"INFO:tensorflow:global_step/sec: 130.839\n", | |
"INFO:tensorflow:loss = 29.850979, step = 4801 (0.765 sec)\n", | |
"INFO:tensorflow:global_step/sec: 144.53\n", | |
"INFO:tensorflow:loss = 36.98899, step = 4901 (0.688 sec)\n", | |
"INFO:tensorflow:global_step/sec: 144.85\n", | |
"INFO:tensorflow:loss = 30.55624, step = 5001 (0.690 sec)\n", | |
"INFO:tensorflow:global_step/sec: 113.157\n", | |
"INFO:tensorflow:loss = 30.075686, step = 5101 (0.884 sec)\n", | |
"INFO:tensorflow:global_step/sec: 130.163\n", | |
"INFO:tensorflow:loss = 29.997782, step = 5201 (0.769 sec)\n", | |
"INFO:tensorflow:global_step/sec: 143.481\n", | |
"INFO:tensorflow:loss = 40.189583, step = 5301 (0.696 sec)\n", | |
"INFO:tensorflow:global_step/sec: 136.485\n", | |
"INFO:tensorflow:loss = 38.054596, step = 5401 (0.733 sec)\n", | |
"INFO:tensorflow:global_step/sec: 147.387\n", | |
"INFO:tensorflow:loss = 30.590908, step = 5501 (0.679 sec)\n", | |
"INFO:tensorflow:global_step/sec: 146.656\n", | |
"INFO:tensorflow:loss = 36.124546, step = 5601 (0.685 sec)\n", | |
"INFO:tensorflow:global_step/sec: 147.313\n", | |
"INFO:tensorflow:loss = 34.145, step = 5701 (0.679 sec)\n", | |
"INFO:tensorflow:global_step/sec: 146.127\n", | |
"INFO:tensorflow:loss = 28.415867, step = 5801 (0.685 sec)\n", | |
"INFO:tensorflow:global_step/sec: 145.423\n", | |
"INFO:tensorflow:loss = 31.236776, step = 5901 (0.684 sec)\n", | |
"INFO:tensorflow:global_step/sec: 146.627\n", | |
"INFO:tensorflow:loss = 38.86322, step = 6001 (0.682 sec)\n", | |
"INFO:tensorflow:Saving checkpoints for 6083 into /tmp/tmpx_4kh1pp/model.ckpt.\n", | |
"INFO:tensorflow:Loss for final step: 10.247061.\n", | |
"INFO:tensorflow:Calling model_fn.\n", | |
"INFO:tensorflow:Done calling model_fn.\n", | |
"INFO:tensorflow:Create CheckpointSaverHook.\n", | |
"INFO:tensorflow:Graph was finalized.\n", | |
"INFO:tensorflow:Restoring parameters from /tmp/tmpx_4kh1pp/model.ckpt-6083\n", | |
"INFO:tensorflow:Running local_init_op.\n", | |
"INFO:tensorflow:Done running local_init_op.\n", | |
"INFO:tensorflow:Saving checkpoints for 6083 into /tmp/tmpx_4kh1pp/model.ckpt.\n", | |
"INFO:tensorflow:loss = 35.453964, step = 6084\n", | |
"INFO:tensorflow:global_step/sec: 124.084\n", | |
"INFO:tensorflow:loss = 29.9327, step = 6184 (0.808 sec)\n", | |
"INFO:tensorflow:global_step/sec: 149.675\n", | |
"INFO:tensorflow:loss = 33.40795, step = 6284 (0.668 sec)\n", | |
"INFO:tensorflow:global_step/sec: 149.71\n", | |
"INFO:tensorflow:loss = 33.848015, step = 6384 (0.668 sec)\n", | |
"INFO:tensorflow:global_step/sec: 146.496\n", | |
"INFO:tensorflow:loss = 30.754347, step = 6484 (0.684 sec)\n", | |
"INFO:tensorflow:global_step/sec: 147.329\n", | |
"INFO:tensorflow:loss = 34.508507, step = 6584 (0.678 sec)\n", | |
"INFO:tensorflow:global_step/sec: 147.116\n", | |
"INFO:tensorflow:loss = 33.75431, step = 6684 (0.679 sec)\n", | |
"INFO:tensorflow:global_step/sec: 142.934\n", | |
"INFO:tensorflow:loss = 35.52318, step = 6784 (0.703 sec)\n", | |
"INFO:tensorflow:global_step/sec: 142.388\n", | |
"INFO:tensorflow:loss = 36.761726, step = 6884 (0.699 sec)\n", | |
"INFO:tensorflow:global_step/sec: 141.821\n", | |
"INFO:tensorflow:loss = 33.967083, step = 6984 (0.709 sec)\n", | |
"INFO:tensorflow:global_step/sec: 143.195\n", | |
"INFO:tensorflow:loss = 39.754387, step = 7084 (0.699 sec)\n", | |
"INFO:tensorflow:global_step/sec: 115.992\n", | |
"INFO:tensorflow:loss = 30.153181, step = 7184 (0.861 sec)\n", | |
"INFO:tensorflow:global_step/sec: 144.986\n", | |
"INFO:tensorflow:loss = 35.46028, step = 7284 (0.687 sec)\n", | |
"INFO:tensorflow:global_step/sec: 145.373\n", | |
"INFO:tensorflow:loss = 32.308365, step = 7384 (0.688 sec)\n", | |
"INFO:tensorflow:global_step/sec: 146.769\n", | |
"INFO:tensorflow:loss = 37.288086, step = 7484 (0.681 sec)\n", | |
"INFO:tensorflow:global_step/sec: 145.405\n", | |
"INFO:tensorflow:loss = 29.26889, step = 7584 (0.688 sec)\n", | |
"INFO:tensorflow:global_step/sec: 144.189\n", | |
"INFO:tensorflow:loss = 34.921825, step = 7684 (0.693 sec)\n", | |
"INFO:tensorflow:global_step/sec: 141.528\n", | |
"INFO:tensorflow:loss = 28.719109, step = 7784 (0.713 sec)\n", | |
"INFO:tensorflow:global_step/sec: 144.528\n", | |
"INFO:tensorflow:loss = 27.658295, step = 7884 (0.686 sec)\n", | |
"INFO:tensorflow:global_step/sec: 147.17\n", | |
"INFO:tensorflow:loss = 43.41176, step = 7984 (0.682 sec)\n", | |
"INFO:tensorflow:global_step/sec: 148.705\n", | |
"INFO:tensorflow:loss = 37.003395, step = 8084 (0.670 sec)\n", | |
"INFO:tensorflow:global_step/sec: 146.518\n", | |
"INFO:tensorflow:loss = 31.534742, step = 8184 (0.682 sec)\n", | |
"INFO:tensorflow:global_step/sec: 145.21\n", | |
"INFO:tensorflow:loss = 27.774275, step = 8284 (0.688 sec)\n", | |
"INFO:tensorflow:global_step/sec: 145.796\n", | |
"INFO:tensorflow:loss = 31.540936, step = 8384 (0.686 sec)\n", | |
"INFO:tensorflow:global_step/sec: 144.975\n", | |
"INFO:tensorflow:loss = 30.644445, step = 8484 (0.690 sec)\n", | |
"INFO:tensorflow:global_step/sec: 145.579\n", | |
"INFO:tensorflow:loss = 29.438581, step = 8584 (0.687 sec)\n", | |
"INFO:tensorflow:global_step/sec: 142.772\n", | |
"INFO:tensorflow:loss = 30.256893, step = 8684 (0.704 sec)\n", | |
"INFO:tensorflow:global_step/sec: 145.99\n", | |
"INFO:tensorflow:loss = 29.146053, step = 8784 (0.682 sec)\n", | |
"INFO:tensorflow:global_step/sec: 148.302\n", | |
"INFO:tensorflow:loss = 36.77201, step = 8884 (0.674 sec)\n", | |
"INFO:tensorflow:global_step/sec: 147.015\n", | |
"INFO:tensorflow:loss = 27.146791, step = 8984 (0.680 sec)\n", | |
"INFO:tensorflow:global_step/sec: 148.159\n", | |
"INFO:tensorflow:loss = 28.759703, step = 9084 (0.675 sec)\n", | |
"INFO:tensorflow:global_step/sec: 147.061\n", | |
"INFO:tensorflow:loss = 32.43556, step = 9184 (0.680 sec)\n", | |
"INFO:tensorflow:global_step/sec: 140.112\n", | |
"INFO:tensorflow:loss = 24.901987, step = 9284 (0.716 sec)\n", | |
"INFO:tensorflow:global_step/sec: 147.965\n", | |
"INFO:tensorflow:loss = 35.65362, step = 9384 (0.678 sec)\n", | |
"INFO:tensorflow:global_step/sec: 146.533\n", | |
"INFO:tensorflow:loss = 30.539782, step = 9484 (0.682 sec)\n", | |
"INFO:tensorflow:global_step/sec: 146.142\n", | |
"INFO:tensorflow:loss = 26.402475, step = 9584 (0.680 sec)\n", | |
"INFO:tensorflow:global_step/sec: 135.711\n", | |
"INFO:tensorflow:loss = 35.693443, step = 9684 (0.737 sec)\n", | |
"INFO:tensorflow:global_step/sec: 146.079\n", | |
"INFO:tensorflow:loss = 36.578415, step = 9784 (0.688 sec)\n", | |
"INFO:tensorflow:global_step/sec: 131.246\n", | |
"INFO:tensorflow:loss = 25.770401, step = 9884 (0.758 sec)\n", | |
"INFO:tensorflow:global_step/sec: 147.088\n", | |
"INFO:tensorflow:loss = 27.556356, step = 9984 (0.679 sec)\n", | |
"INFO:tensorflow:global_step/sec: 148.006\n", | |
"INFO:tensorflow:loss = 42.780975, step = 10084 (0.676 sec)\n", | |
"INFO:tensorflow:global_step/sec: 146.281\n", | |
"INFO:tensorflow:loss = 34.690804, step = 10184 (0.689 sec)\n", | |
"INFO:tensorflow:global_step/sec: 146.065\n", | |
"INFO:tensorflow:loss = 28.994417, step = 10284 (0.679 sec)\n", | |
"INFO:tensorflow:global_step/sec: 140.057\n", | |
"INFO:tensorflow:loss = 27.364697, step = 10384 (0.714 sec)\n", | |
"INFO:tensorflow:global_step/sec: 144.54\n", | |
"INFO:tensorflow:loss = 31.856724, step = 10484 (0.692 sec)\n", | |
"INFO:tensorflow:global_step/sec: 146.205\n", | |
"INFO:tensorflow:loss = 25.337952, step = 10584 (0.684 sec)\n", | |
"INFO:tensorflow:global_step/sec: 144.729\n", | |
"INFO:tensorflow:loss = 35.8079, step = 10684 (0.695 sec)\n", | |
"INFO:tensorflow:global_step/sec: 142.37\n", | |
"INFO:tensorflow:loss = 31.155233, step = 10784 (0.703 sec)\n", | |
"INFO:tensorflow:global_step/sec: 134.903\n", | |
"INFO:tensorflow:loss = 23.521465, step = 10884 (0.742 sec)\n", | |
"INFO:tensorflow:global_step/sec: 145.907\n", | |
"INFO:tensorflow:loss = 30.1797, step = 10984 (0.680 sec)\n", | |
"INFO:tensorflow:global_step/sec: 149.019\n", | |
"INFO:tensorflow:loss = 23.848671, step = 11084 (0.671 sec)\n", | |
"INFO:tensorflow:global_step/sec: 146.881\n", | |
"INFO:tensorflow:loss = 24.006592, step = 11184 (0.684 sec)\n", | |
"INFO:tensorflow:global_step/sec: 146.897\n", | |
"INFO:tensorflow:loss = 23.731112, step = 11284 (0.681 sec)\n", | |
"INFO:tensorflow:global_step/sec: 147.004\n", | |
"INFO:tensorflow:loss = 34.677574, step = 11384 (0.677 sec)\n", | |
"INFO:tensorflow:global_step/sec: 146.701\n", | |
"INFO:tensorflow:loss = 31.434517, step = 11484 (0.683 sec)\n", | |
"INFO:tensorflow:global_step/sec: 147.075\n", | |
"INFO:tensorflow:loss = 25.500505, step = 11584 (0.679 sec)\n", | |
"INFO:tensorflow:global_step/sec: 148.152\n", | |
"INFO:tensorflow:loss = 30.964556, step = 11684 (0.675 sec)\n", | |
"INFO:tensorflow:global_step/sec: 148.226\n", | |
"INFO:tensorflow:loss = 26.70921, step = 11784 (0.674 sec)\n", | |
"INFO:tensorflow:global_step/sec: 148.127\n", | |
"INFO:tensorflow:loss = 29.678047, step = 11884 (0.675 sec)\n", | |
"INFO:tensorflow:global_step/sec: 147.249\n", | |
"INFO:tensorflow:loss = 24.557219, step = 11984 (0.679 sec)\n", | |
"INFO:tensorflow:global_step/sec: 149\n", | |
"INFO:tensorflow:loss = 33.76008, step = 12084 (0.671 sec)\n", | |
"INFO:tensorflow:Saving checkpoints for 12166 into /tmp/tmpx_4kh1pp/model.ckpt.\n", | |
"INFO:tensorflow:Loss for final step: 7.664049.\n", | |
"INFO:tensorflow:Calling model_fn.\n", | |
"INFO:tensorflow:Done calling model_fn.\n", | |
"INFO:tensorflow:Starting evaluation at 2019-02-03-16:11:28\n", | |
"INFO:tensorflow:Graph was finalized.\n", | |
"INFO:tensorflow:Restoring parameters from /tmp/tmpx_4kh1pp/model.ckpt-12166\n", | |
"INFO:tensorflow:Running local_init_op.\n", | |
"INFO:tensorflow:Done running local_init_op.\n", | |
"INFO:tensorflow:Evaluation [5/50]\n", | |
"INFO:tensorflow:Evaluation [10/50]\n", | |
"INFO:tensorflow:Evaluation [15/50]\n", | |
"INFO:tensorflow:Evaluation [20/50]\n", | |
"INFO:tensorflow:Evaluation [25/50]\n", | |
"INFO:tensorflow:Evaluation [30/50]\n", | |
"INFO:tensorflow:Evaluation [35/50]\n", | |
"INFO:tensorflow:Evaluation [40/50]\n", | |
"INFO:tensorflow:Evaluation [45/50]\n", | |
"INFO:tensorflow:Evaluation [50/50]\n", | |
"INFO:tensorflow:Finished evaluation at 2019-02-03-16:11:29\n", | |
"INFO:tensorflow:Saving dict for global step 12166: accuracy = 0.8528125, average_loss = 0.46170443, global_step = 12166, loss = 29.549084\n", | |
"INFO:tensorflow:Saving 'checkpoint_path' summary for global step 12166: /tmp/tmpx_4kh1pp/model.ckpt-12166\n", | |
"{'accuracy': 0.8528125, 'average_loss': 0.46170443, 'loss': 29.549084, 'global_step': 12166}\n", | |
"INFO:tensorflow:Calling model_fn.\n", | |
"INFO:tensorflow:Done calling model_fn.\n", | |
"INFO:tensorflow:Signatures INCLUDED in export for Classify: None\n", | |
"INFO:tensorflow:Signatures INCLUDED in export for Regress: None\n", | |
"INFO:tensorflow:Signatures INCLUDED in export for Predict: ['predict']\n", | |
"INFO:tensorflow:Signatures INCLUDED in export for Train: None\n", | |
"INFO:tensorflow:Signatures INCLUDED in export for Eval: None\n", | |
"INFO:tensorflow:Signatures EXCLUDED from export because they cannot be be served via TensorFlow Serving APIs:\n", | |
"INFO:tensorflow:'serving_default' : Classification input must be a single string Tensor; got {'Elevation': <tf.Tensor 'Placeholder:0' shape=(?,) dtype=int32>, 'Aspect': <tf.Tensor 'Placeholder_1:0' shape=(?,) dtype=int32>, 'Slope': <tf.Tensor 'Placeholder_2:0' shape=(?,) dtype=int32>, 'Horizontal_Distance_To_Hydrology': <tf.Tensor 'Placeholder_3:0' shape=(?,) dtype=int32>, 'Vertical_Distance_To_Hydrology': <tf.Tensor 'Placeholder_4:0' shape=(?,) dtype=int32>, 'Horizontal_Distance_To_Roadways': <tf.Tensor 'Placeholder_5:0' shape=(?,) dtype=int32>, 'Hillshade_9am': <tf.Tensor 'Placeholder_6:0' shape=(?,) dtype=int32>, 'Hillshade_Noon': <tf.Tensor 'Placeholder_7:0' shape=(?,) dtype=int32>, 'Hillshade_3pm': <tf.Tensor 'Placeholder_8:0' shape=(?,) dtype=int32>, 'Horizontal_Distance_To_Fire_Points': <tf.Tensor 'Placeholder_9:0' shape=(?,) dtype=int32>, 'Soil_Type': <tf.Tensor 'Placeholder_10:0' shape=(?, 40) dtype=int32>, 'Cover_Type': <tf.Tensor 'Placeholder_11:0' shape=(?,) dtype=int32>}\n", | |
"INFO:tensorflow:'classification' : Classification input must be a single string Tensor; got {'Elevation': <tf.Tensor 'Placeholder:0' shape=(?,) dtype=int32>, 'Aspect': <tf.Tensor 'Placeholder_1:0' shape=(?,) dtype=int32>, 'Slope': <tf.Tensor 'Placeholder_2:0' shape=(?,) dtype=int32>, 'Horizontal_Distance_To_Hydrology': <tf.Tensor 'Placeholder_3:0' shape=(?,) dtype=int32>, 'Vertical_Distance_To_Hydrology': <tf.Tensor 'Placeholder_4:0' shape=(?,) dtype=int32>, 'Horizontal_Distance_To_Roadways': <tf.Tensor 'Placeholder_5:0' shape=(?,) dtype=int32>, 'Hillshade_9am': <tf.Tensor 'Placeholder_6:0' shape=(?,) dtype=int32>, 'Hillshade_Noon': <tf.Tensor 'Placeholder_7:0' shape=(?,) dtype=int32>, 'Hillshade_3pm': <tf.Tensor 'Placeholder_8:0' shape=(?,) dtype=int32>, 'Horizontal_Distance_To_Fire_Points': <tf.Tensor 'Placeholder_9:0' shape=(?,) dtype=int32>, 'Soil_Type': <tf.Tensor 'Placeholder_10:0' shape=(?, 40) dtype=int32>, 'Cover_Type': <tf.Tensor 'Placeholder_11:0' shape=(?,) dtype=int32>}\n", | |
"WARNING:tensorflow:Export includes no default signature!\n", | |
"INFO:tensorflow:Restoring parameters from /tmp/tmpx_4kh1pp/model.ckpt-12166\n", | |
"INFO:tensorflow:Assets added to graph.\n", | |
"INFO:tensorflow:No assets to write.\n", | |
"INFO:tensorflow:SavedModel written to: models/temp-b'1549210289'/saved_model.pb\n" | |
], | |
"name": "stdout" | |
}, | |
{ | |
"output_type": "execute_result", | |
"data": { | |
"text/plain": [ | |
"b'models/1549210289'" | |
] | |
}, | |
"metadata": { | |
"tags": [] | |
}, | |
"execution_count": 23 | |
} | |
] | |
}, | |
{ | |
"metadata": { | |
"id": "Z7lVjHrdFbWw", | |
"colab_type": "code", | |
"colab": {} | |
}, | |
"cell_type": "code", | |
"source": [ | |
"" | |
], | |
"execution_count": 0, | |
"outputs": [] | |
} | |
] | |
} |