ISR Machine.ipynb
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"accelerator": "TPU",
"colab": {
"name": "ISR Machine.ipynb",
"provenance": [],
"collapsed_sections": [],
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/gist/ljcucc/8a338ccf7f2d48147aacb1b176144ca9/isr-machine.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "QJ4sSTzDWAao"
},
"source": [
"# ISR Machine - an image-upscaling notebook with no code\n",
"\n",
"This notebook helps you upscale your images using the magic of ISR by idealo, a German ML team ( https://github.com/idealo/image-super-resolution ).\n",
"\n",
"ITWolf ([Twitter](https://twitter.com/the_itwolf), [Github](https://github.com/ljcucc)) is the author of this notebook. If you like it, consider sharing it.\n",
"\n",
"## How to use it?\n",
"\n",
"1. Enter your image URL under the [Configuration] section.\n",
"2. Click [Connect] to connect to a remote VM on Google Compute Engine.\n",
"3. From the top menu bar, choose `[Runtime] > [Run all]`.\n",
"    * If you want to batch-process images, see \"How to use it? (batch)\" below.\n",
"4. Check the auto-download option to download the result when finished, or click the [Download] button to download your image.\n",
"\n",
"## How to use it? (batch)\n",
"\n",
"1. Click [Connect] to connect to a remote VM on Google Compute Engine.\n",
"2. Run \"Init program\" to initialize the environment.\n",
"3. Run \"GPU, TPU settings\".\n",
"4. Enter your image URL under the [Configuration] section. (The config applies automatically.)\n",
"\n",
"Total time is about 2 min to initialize the environment (run once) plus 8 s to 2 min, depending on the model and the size of your image."
]
},
{
"cell_type": "code",
"metadata": {
"id": "KCd2ZuS4V6Z0"
},
"source": [
"%%capture\n",
"#@title Init program\n",
"#@markdown (Ignore me; this takes a while, about 2 min.)\n",
"!pip install ISR\n",
"!pip install 'h5py<3.0.0'\n",
"!pip install tensorflow==2.4.1\n",
"!pip install keras==2.4.3"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "TPDLdnKedj6R",
"cellView": "form"
},
"source": [
"%%capture\n",
"#@title Configuration { run: \"auto\" }\n",
"\n",
"#@markdown Step by step to get started\n",
"#@markdown \n",
"#@markdown ---\n",
"\n",
"#@markdown #### 1. Your image URL\n",
"image_url = \"https://d.furaffinity.net/art/ludo/1455487019/1455487019.ludo_zeba_ghost.png\" #@param {type:\"string\"}\n",
"\n",
"#@markdown #### 2. Type of your image\n",
"#@markdown png or jpg. If you choose jpg, the program will noise-cancel automatically first.\n",
"image_type = \"jpg\" #@param [\"png\",\"jpg\"]\n",
"\n",
"#@markdown #### 3. Choose which model you want to use\n",
"#@markdown * **gans**: maximum upscaling, but noise and edges will look sharper.\n",
"#@markdown * **psnr-small**: for small images; the fastest model. Try it to see whether it fits.\n",
"#@markdown * **psnr-large**: for larger images; execution is slower. Try it to see whether it fits.\n",
"#@markdown * **noise-cancel**: noise canceling only.\n",
"\n",
"upscaling_model = \"psnr-small\" #@param [\"gans\", \"psnr-small\", \"psnr-large\", \"noise-cancel\"]\n",
"\n",
"#@markdown Do you want to download automatically when finished? (check the box if so)\n",
"\n",
"download_after_finishing = False #@param {type:\"boolean\"}\n",
"\n",
"#@markdown #### 4. Then click **Runtime** > **Run all** to start\n",
"\n",
"#@markdown ---\n",
"\n",
"#@markdown #### Optional\n",
"#@markdown Noise-cancel again after upscaling (this may take a while):\n",
"\n",
"noise_canceling_after_predict = False #@param {type: \"boolean\"}\n",
"\n",
"#@markdown Show the image when finished: \n",
"show_on_finished = True #@param {type: \"boolean\"}\n",
"\n",
"\n",
"#@markdown Choose the runtime you use (TPU is the default)\n",
"#@markdown e.g. CPU may take >15 min, GPU 5~6 min, TPU only 1~1.5 min.\n",
"\n",
"runtime_type = \"TPU\" #@param [\"CPU\",\"GPU\",\"TPU\"]"
],
"execution_count": 6,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "VAaFLLiKXJST",
"cellView": "form"
},
"source": [
"%%capture\n",
"#@title GPU, TPU settings\n",
"#@markdown Ignore this unless you want to configure the GPU or TPU.\n",
"%tensorflow_version 2.x\n",
"import tensorflow as tf\n",
"print(\"Tensorflow version \" + tf.__version__)\n",
"\n",
"# tf.config.optimizer.set_jit(True)\n",
"\n",
"if runtime_type == \"TPU\":\n",
"    try:\n",
"        tpu = tf.distribute.cluster_resolver.TPUClusterResolver()  # TPU detection\n",
"        print('Running on TPU ', tpu.cluster_spec().as_dict()['worker'])\n",
"    except ValueError:\n",
"        raise BaseException('ERROR: Not connected to a TPU runtime; please see the previous cell in this notebook for instructions!')\n",
"\n",
"    tf.config.experimental_connect_to_cluster(tpu)\n",
"    tf.tpu.experimental.initialize_tpu_system(tpu)\n",
"    tpu_strategy = tf.distribute.experimental.TPUStrategy(tpu)\n",
"elif runtime_type == \"GPU\":\n",
"    device_name = tf.test.gpu_device_name()\n",
"    if device_name != '/device:GPU:0':\n",
"        raise SystemError('GPU device not found')\n",
"    print('Found GPU at: {}'.format(device_name))\n",
"    # tf.debugging.set_log_device_placement(True)"
],
"execution_count": 2,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "hCZfkvojYwbY",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 443
},
"cellView": "form",
"outputId": "27b5dcd3-3ee7-479c-8a0e-6657ab236487"
},
"source": [
"# %%capture\n",
"#@title Main Program\n",
"#@markdown Don't open this unless you understand what's inside.\n",
"\n",
"!wget {image_url}\n",
"!mv *.{image_type} test.png\n",
"!mkdir -p data/input/test_images\n",
"!mv test.png data/input/test_images\n",
"\n",
"import numpy as np\n",
"from PIL import Image\n",
"\n",
"img = Image.open('data/input/test_images/test.png')\n",
"\n",
"from ISR.models import RDN, RRDN\n",
"\n",
"img = img.convert(\"RGB\")\n",
"lr_img = np.array(img)\n",
"\n",
"print(f\"feeding {lr_img.shape} image into noise canceling\")\n",
"\n",
"if image_type == \"jpg\":\n",
"    noise_cancel_model = RDN(weights='noise-cancel')\n",
"    lr_img = noise_cancel_model.predict(lr_img)\n",
"    del noise_cancel_model\n",
"\n",
"print(f\"feeding {lr_img.shape} image into predict mode\")\n",
"\n",
"if upscaling_model == \"gans\":\n",
"    model = RRDN(weights=upscaling_model)\n",
"else:\n",
"    model = RDN(weights=upscaling_model)\n",
"\n",
"sr_img = model.predict(lr_img)\n",
"del model\n",
"\n",
"print(f\"feeding {sr_img.shape} image into noise canceling\")\n",
"\n",
"if noise_canceling_after_predict:\n",
"    noise_cancel_model = RDN(weights='noise-cancel')\n",
"    sr_img = noise_cancel_model.predict(sr_img)\n",
"    del noise_cancel_model\n",
"\n",
"image = Image.fromarray(sr_img)\n",
"image.save(\"./result.png\")"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "Sv3w2GHRhZLy",
"cellView": "form"
},
"source": [
"#@title Download\n",
"#@markdown Click the [Download] button after [Run all]\n",
"\n",
"from google.colab import files\n",
"import ipywidgets as widgets\n",
"\n",
"def download(_):\n",
"    files.download(\"./result.png\")\n",
"\n",
"button = widgets.Button(\n",
"    description='Download',\n",
"    disabled=False,\n",
"    button_style='success',  # 'success', 'info', 'warning', 'danger' or ''\n",
"    tooltip='Click me',\n",
"    icon='check'  # (FontAwesome names without the `fa-` prefix)\n",
")\n",
"button.on_click(download)\n",
"\n",
"if download_after_finishing:\n",
"    files.download(\"./result.png\")\n",
"\n",
"button"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "twLMFj5l3shG",
"cellView": "form"
},
"source": [
"#@title Display final result { vertical-output: true }\n",
"#@markdown (Shown only if you checked the `show_on_finished` checkbox)\n",
"\n",
"if show_on_finished:\n",
"    display(Image.open(\"result.png\"))"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "nfCVBuQNdC47"
},
"source": [
"Credit: [ISR Prediction Tutorial](https://colab.research.google.com/github/idealo/image-super-resolution/blob/master/notebooks/ISR_Prediction_Tutorial.ipynb)"
]
}
]
}
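
The "Main Program" cell above follows a three-stage pipeline: denoise jpg inputs first, pick RRDN for the `gans` weights and RDN for everything else, then optionally denoise again after upscaling. That control flow can be sketched without TensorFlow or ISR installed; the `Stub*` classes below are hypothetical stand-ins for `ISR.models.RDN`/`RRDN` that only record which model ran, so the example is a minimal sketch of the dispatch logic, not of the actual super-resolution.

```python
# Dependency-free sketch of the notebook's pipeline. StubRDN/StubRRDN are
# hypothetical stand-ins for ISR.models.RDN and ISR.models.RRDN.

class StubRDN:
    """Stand-in for ISR.models.RDN: remembers which weights were requested."""
    def __init__(self, weights):
        self.weights = weights

    def predict(self, img):
        # The real model returns an upscaled/denoised ndarray; the stub just
        # appends a tag so we can trace which models ran, in what order.
        return img + [f"RDN({self.weights})"]

class StubRRDN(StubRDN):
    """Stand-in for ISR.models.RRDN (used for the 'gans' weights)."""
    def predict(self, img):
        return img + [f"RRDN({self.weights})"]

def upscale(img, image_type, upscaling_model, noise_cancel_after=False):
    """Mirror the Main Program cell: jpg inputs are denoised first,
    'gans' selects RRDN while the other weights select RDN, and an
    optional noise-cancel pass runs after prediction."""
    if image_type == "jpg":
        img = StubRDN(weights="noise-cancel").predict(img)

    if upscaling_model == "gans":
        model = StubRRDN(weights=upscaling_model)
    else:
        model = StubRDN(weights=upscaling_model)
    img = model.predict(img)

    if noise_cancel_after:
        img = StubRDN(weights="noise-cancel").predict(img)
    return img

trace = upscale([], image_type="jpg", upscaling_model="gans", noise_cancel_after=True)
print(trace)  # ['RDN(noise-cancel)', 'RRDN(gans)', 'RDN(noise-cancel)']
```

Swapping the stubs for the real ISR classes recovers the notebook's behavior; the order of stages is what matters, since denoising before upscaling works on the small input while denoising afterwards runs on the much larger output.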