<h1>
Deep Learning Analysis of COVID-19 lung X-Rays using MATLAB: Part 6</h1>
<h2>
*** DISCLAIMER ***</h2>
<br />
<i>I have no medical training. Nothing presented here should be considered in any way as informative from a medical point-of-view. This is simply an exercise in image analysis via Deep Learning using MATLAB, with lung X-rays as a topical example in these times of COVID-19. </i><br />
<i></i><br />
<h2>
INTRODUCTION</h2>
<br />
In this Part 6 in my series of blog articles on exploring Deep Learning applied to lung X-rays using MATLAB, I bring together the results of the analysis of Parts <a href="https://flylogical.blogspot.com/2020/04/deep-learning-analysis-of-covid-19-lung.html" target="_blank">1</a>, <a href="https://flylogical.blogspot.com/2020/04/deep-learning-analysis-of-covid-19-lung_20.html" target="_blank">2</a>, <a href="https://flylogical.blogspot.com/2020/04/deep-learning-analysis-of-covid-19-lung_28.html" target="_blank">3</a>, <a href="https://flylogical.blogspot.com/2020/05/deep-learning-analysis-of-covid-19-lung.html" target="_blank">4</a> & <a href="https://flylogical.blogspot.com/2020/05/deep-learning-analysis-of-covid-19-lung_30.html" target="_blank">5</a>, and suggest a candidate set of composite models which are particularly suited to the task. I also present a <a href="https://flylogical.com/FlyMore/WebApps/XRay/Main.aspx" target="_blank">live website</a> whereby anyone can try these composite models by uploading an X-ray image and receiving results on-the-fly. Finally, all the underlying trained networks presented in this series of articles have been posted-up to GitHub (in <a href="https://github.com/risklogical/deeplearningmodels/releases/tag/v1.0" target="_blank">MATLAB</a> and <a href="https://github.com/risklogical/deeplearningmodels/releases/tag/v1.0_ONNX" target="_blank">ONNX</a> formats). They are openly available for anyone wishing to experiment with them.<br />
<br />
<h2>
COMPOSITE MODELS</h2>
<br />
After much trial-and-error experimentation with the models presented in <a href="https://flylogical.blogspot.com/2020/04/deep-learning-analysis-of-covid-19-lung_28.html" target="_blank">Part 3</a> in combination with the <i>grad-CAM</i> analysis of Parts <a href="https://flylogical.blogspot.com/2020/05/deep-learning-analysis-of-covid-19-lung.html">4</a> & <a href="https://flylogical.blogspot.com/2020/05/deep-learning-analysis-of-covid-19-lung_30.html">5</a>, the following two composite models emerged as being quite effective.<br />
<br />
<h3>
MODEL 1</h3>
<div>
<br /></div>
<div>
This is based on a combination of the four-class networks (from Experiment 4) in Parts <a href="https://flylogical.blogspot.com/2020/04/deep-learning-analysis-of-covid-19-lung.html" target="_blank">1</a> and <a href="https://flylogical.blogspot.com/2020/04/deep-learning-analysis-of-covid-19-lung_28.html" target="_blank">3</a>, with <i>grad-CAM</i> Discrimination Filtering from <a href="https://flylogical.blogspot.com/2020/05/deep-learning-analysis-of-covid-19-lung_30.html" target="_blank">Part 5</a>. Specifically, the model comprises the following steps, where the network names refer to the underlying pretrained networks used as the basis for the Transfer Learning (a MATLAB sketch of the scoring logic follows the list):</div>
<div>
<br /></div>
<ol>
<li>Apply (i) <span style="font-family: "courier new" , "courier" , monospace;">alexnet</span>; (ii) <span style="font-family: "courier new" , "courier" , monospace;">vgg16</span>; (iii) <span style="font-family: "courier new" , "courier" , monospace;">googlenet (original)</span>; and (iv) <span style="font-family: "courier new" , "courier" , monospace;">googlenet (places)</span> (from Experiment 4) to the X-ray-image-under-test. Each network will generate a score for each of the four possible labels [HEALTHY, BACTERIA, COVID, OTHER-VIRUS].</li>
<li>Generate a <i>grad-CAM</i> image map for each of the networks (i)--(iv) in Step 1 using the technique presented in <a href="https://flylogical.blogspot.com/2020/05/deep-learning-analysis-of-covid-19-lung.html" target="_blank">Part 4</a>.</li>
<li>Apply (a) <span style="font-family: "courier new" , "courier" , monospace;">googlenet</span>; (b) <span style="font-family: "courier new" , "courier" , monospace;">darknet19</span>; and (c) <span style="font-family: "courier new" , "courier" , monospace;">mobilenetv2</span> from <a href="http://flylogical.blogspot.com/2020/05/deep-learning-analysis-of-covid-19-lung_30.html" target="_blank">Part 5</a> to the four <i>grad-CAM</i> images from Step 2. From the results for each <i>grad-CAM</i> image, assign a weighting factor as follows: if the majority of (a), (b), (c) agree on INSIDE_LUNGS, set the weighting factor to 0.8 (rather than 1, because the <i>grad-CAM</i> Discrimination Filter classifiers aren't perfectly accurate); if the majority agree on OUTSIDE_LUNGS, set it to 0.2 (rather than 0, for the same reason); if the majority agree on RIBCAGE_CENTRAL, set it to 0.5 (i.e., mid-way); in all other cases, set it to 0.3 (i.e., ambiguous).</li>
<li> Multiply each of the scores from Step 1 by the respective weighting factor from Step 3. This will give a <i>grad-CAM</i> weighted score per label per network.</li>
<li>Take the <i>average</i> of the scores from Step 4 across all networks to give an average score per label. Renormalise these average scores so that they add up to one.</li>
<li>Take the maximum of the resulting normalised averaged scores from Step 5, then assign the output classification to the label corresponding to the maximum score. This will give the resulting class from HEALTHY, BACTERIA, COVID, or OTHER-VIRUS with an accompanying score.</li>
</ol>
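<div>
For concreteness, here is a minimal MATLAB sketch of Steps 3--6. All variable names are illustrative rather than taken from the deployed code: the Step 1 outputs are assumed to be collected in a 4x4 matrix <span style="font-family: "courier new" , "courier" , monospace;">scores</span> (rows = networks, columns = labels), and the Step 3 majority verdicts in a string array <span style="font-family: "courier new" , "courier" , monospace;">verdicts</span>.</div>
<pre style="font-family: 'courier new', courier, monospace;">
% Minimal sketch of Steps 3--6 of MODEL 1 (all names illustrative).
% scores:   4x4 matrix of Step-1 outputs, rows = networks (i)-(iv),
%           columns = labels [HEALTHY, BACTERIA, COVID, OTHER-VIRUS]
% verdicts: 4x1 string array of majority votes from the three
%           grad-CAM Discrimination Filter networks
labels = ["HEALTHY", "BACTERIA", "COVID", "OTHER-VIRUS"];
w = zeros(4, 1);
for k = 1:4                                  % Step 3: weighting factors
    switch verdicts(k)
        case "INSIDE_LUNGS",    w(k) = 0.8;
        case "OUTSIDE_LUNGS",   w(k) = 0.2;
        case "RIBCAGE_CENTRAL", w(k) = 0.5;
        otherwise,              w(k) = 0.3;  % ambiguous
    end
end
weighted  = scores .* w;                     % Step 4: weighted score per label per network
avgScores = mean(weighted, 1);               % Step 5: average across networks...
avgScores = avgScores / sum(avgScores);      % ...and renormalise to sum to one
[finalScore, idx] = max(avgScores);          % Step 6: winning label and score
finalLabel = labels(idx);
</pre>
<div>
<br /></div>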
<h3>
MODEL 2</h3>
<div>
<br /></div>
This is based on a cascade of the two-class networks (from Experiments 1, 2, and 3) in Parts <a href="https://flylogical.blogspot.com/2020/04/deep-learning-analysis-of-covid-19-lung.html" target="_blank">1</a> and <a href="https://flylogical.blogspot.com/2020/04/deep-learning-analysis-of-covid-19-lung_28.html" target="_blank">3</a>, with <i>grad-CAM</i> Discrimination Filtering from <a href="https://flylogical.blogspot.com/2020/05/deep-learning-analysis-of-covid-19-lung_30.html" target="_blank">Part 5</a>. Specifically, the model comprises the following steps (a MATLAB sketch of the cascade follows the list):<br />
<br />
<ol>
<li>Apply (i) <span style="font-family: "courier new" , "courier" , monospace;">darknet19</span>; (ii) <span style="font-family: "courier new" , "courier" , monospace;">resnet101</span>; (iii) <span style="font-family: "courier new" , "courier" , monospace;">squeezenet</span>; and (iv) <span style="font-family: "courier new" , "courier" , monospace;">resnet18</span> (from Experiment 1) to the X-ray-image-under-test. Each network will generate a score for each of the two possible labels [YES (pneumonia), NO (healthy)].</li>
<li> Apply the identical approach to Steps 2--4 in MODEL 1 (above) to give the <i>grad-CAM</i> weighted score per label per network.</li>
<li>Take the <i>maximum</i> of the <i>grad-CAM</i> weighted scores per label per network from the previous step across all networks to give a maximum score per label. Renormalise these maximum scores so that they add up to one.</li>
<li>Take the maximum of the resulting normalised maximum scores from the previous step, then assign the output classification to the label corresponding to the maximum score. This will give the resulting class from YES or NO with an accompanying score.</li>
<li>If the result is NO, the process terminates with the overall result of HEALTHY (plus accompanying score). If the result is YES, continue to the next step.</li>
<li>Apply (i) <span style="font-family: "courier new" , "courier" , monospace;">vgg19</span>; (ii) <span style="font-family: "courier new" , "courier" , monospace;">inceptionv3</span>; (iii) <span style="font-family: "courier new" , "courier" , monospace;">squeezenet</span>; and (iv) <span style="font-family: "courier new" , "courier" , monospace;">mobilenetv2</span> (from Experiment 2) to the X-ray-image-under-test. Each network will generate a score for each of the two possible labels [BACTERIA, VIRUS].</li>
<li>Apply the identical approach to Steps 2--4 in MODEL 1 (above) to give the <i>grad-CAM</i> weighted score per label per network.</li>
<li>Take the <i>average</i> of the <i>grad-CAM</i> weighted scores per label per network from the previous step across all networks to give an average score per label. Renormalise these average scores so that they add up to one.</li>
<li>Take the maximum of the resulting normalised average scores from the previous step, then assign the output classification to the label corresponding to the maximum score. This will give the resulting class from BACTERIA or VIRUS with an accompanying score.</li>
<li>If the result is BACTERIA, the process terminates with the overall result of BACTERIA (plus accompanying score). If the result is VIRUS, continue to the next step.</li>
<li>Apply (i) <span style="font-family: "courier new" , "courier" , monospace;">resnet50</span>; (ii) <span style="font-family: "courier new" , "courier" , monospace;">vgg16</span>; (iii) <span style="font-family: "courier new" , "courier" , monospace;">vgg19</span>; and (iv) <span style="font-family: "courier new" , "courier" , monospace;">darknet53</span> (from Experiment 3) to the X-ray-image-under-test. Each network will generate a score for each of the two possible labels [COVID, OTHER-VIRUS].</li>
<li> Apply the identical approach to Steps 2--4 in MODEL 1 (above) to give the <i>grad-CAM</i> weighted score per label per network.</li>
<li>Take the <i>average</i> of the <i>grad-CAM</i> weighted scores per label per network from the previous step across all networks to give an average score per label. Renormalise these average scores so that they add up to one.</li>
<li>Take the maximum of the resulting normalised average scores from the previous step, then assign the output classification to the label corresponding to the maximum score. This will give the resulting class from COVID or OTHER-VIRUS with an accompanying score. The process is complete.</li>
</ol>
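<div>
The cascade logic can be sketched in MATLAB as follows, where <span style="font-family: "courier new" , "courier" , monospace;">stage1</span>, <span style="font-family: "courier new" , "courier" , monospace;">stage2</span>, and <span style="font-family: "courier new" , "courier" , monospace;">stage3</span> are hypothetical wrappers around the "apply networks, weight by <i>grad-CAM</i> verdicts, combine, renormalise" steps above:</div>
<pre style="font-family: 'courier new', courier, monospace;">
function [result, resultScore] = model2Cascade(img)
% Sketch of the MODEL 2 cascade. stage1/stage2/stage3 are hypothetical
% wrappers, each performing: apply its four networks, weight each score
% by the grad-CAM verdict, combine across networks (max for stage 1,
% average for stages 2 and 3), renormalise, and take the maximum.
[label, score] = stage1(img);          % YES/NO: pneumonia present?
if label == "NO"
    result = "HEALTHY"; resultScore = score;
    return
end
[label, score] = stage2(img);          % BACTERIA/VIRUS
if label == "BACTERIA"
    result = "BACTERIA"; resultScore = score;
    return
end
[result, resultScore] = stage3(img);   % COVID/OTHER-VIRUS
end
</pre>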
<div>
<br /></div>
<div>
Taken together, MODEL 1 and MODEL 2 provide two alternative paths to the classification of the lung X-ray-image-under-test. If the resulting classifications agree, this is taken as the final classification, with a score given by the average of the scores for the two models. If they disagree, the classification with the higher score is taken as the final classification (with its corresponding score).</div>
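<div>
A minimal sketch of this reconciliation, assuming each composite model has returned a label and a normalised score (variable names illustrative):</div>
<pre style="font-family: 'courier new', courier, monospace;">
% Sketch: reconciling the two composite models (names illustrative;
% label1/score1 from MODEL 1, label2/score2 from MODEL 2).
if label1 == label2
    finalLabel = label1;
    finalScore = (score1 + score2) / 2;        % agreement: average the scores
elseif score1 >= score2
    finalLabel = label1; finalScore = score1;  % disagreement: higher score wins
else
    finalLabel = label2; finalScore = score2;
end
</pre>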
<div>
<br /></div>
<div>
These two composite models were hand-crafted (essentially by trial-and-error). They perform well on the validation images. Of course there are many other combinations of networks (from <a href="https://flylogical.blogspot.com/2020/04/deep-learning-analysis-of-covid-19-lung_28.html" target="_blank">Part 3</a>) that could be considered. </div>
<div>
<br /></div>
<h2>
DEPLOYMENT</h2>
<div>
<br /></div>
<div>
The trial-and-error experimentation to determine the combination of the Deep Neural Networks in MODELS 1 & 2 -- as well as the training of all the underlying Deep Neural Networks (via Transfer Learning), and the <i>grad-CAM</i> Discrimination Filtering -- was all performed in MATLAB. </div>
<div>
<br /></div>
<div>
The next step was to expose the resulting models in a generally accessible form (for anyone to experiment with) without the need for MATLAB. That is the topic of this section.<br />
<br /></div>
<h3>
MATLAB Compiler</h3>
<div>
<br /></div>
<div>
The approach taken was to utilise the MATLAB Compiler (and the accompanying MATLAB Compiler SDK) to generate a shared library (specifically a Microsoft .NET assembly) which contains all the code required to run the models. </div>
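<div>
As a rough sketch of the build step (the entry-point, component, and class names here are illustrative, not those of the deployed service, and exact <span style="font-family: "courier new" , "courier" , monospace;">mcc</span> flags may vary by MATLAB release):</div>
<pre style="font-family: 'courier new', courier, monospace;">
% Sketch: compile a MATLAB entry-point function (e.g., one wrapping the
% two composite models) into a .NET assembly with MATLAB Compiler SDK.
% Component, class, and file names below are illustrative only.
mcc -W 'dotnet:XrayModels,Classifier,4.0,Private' -T link:lib classifyXray.m
</pre>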
<div>
<br /></div>
<h3>
RESTful Web Service</h3>
<div>
<br /></div>
<div>
This library was then integrated into a RESTful Web Service application (written in C#), and deployed on a web server (Windows / IIS). </div>
<div>
<br /></div>
<h3>
Web Application Front-End</h3>
<div>
<br /></div>
<div>
The RESTful Web Service is exposed to users via a simple ASP.NET Web Application (front end) hosted on a Windows server. Here is <a href="https://flylogical.com/FlyMore/WebApps/XRay/Main.aspx" target="_blank">the URL</a> and a screenshot of the landing page...</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgae33lcqocU7mIQa2b2cCLHZj5kl2IpCnN4qBf6k5oEV9pxiee5lxND__0cJatIa0uudENkQgZqu4V_GPgx7-C5C2OFc7_xpogX90mKefSm6v1ETSNQ_Ejfb4lKYmZWDEugnf2LdCpqVs/s1600/Capture_XRAY_WEBAPP_START.PNG" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="339" data-original-width="670" height="202" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgae33lcqocU7mIQa2b2cCLHZj5kl2IpCnN4qBf6k5oEV9pxiee5lxND__0cJatIa0uudENkQgZqu4V_GPgx7-C5C2OFc7_xpogX90mKefSm6v1ETSNQ_Ejfb4lKYmZWDEugnf2LdCpqVs/s400/Capture_XRAY_WEBAPP_START.PNG" width="400" /></a></div>
<div style="clear: both;">
<br /></div>
<div>
Simply upload a lung X-ray (cropped with no borders, and confined to the rib-cage as far as possible) via the web page, and wait (up to a few minutes) for the analysis to complete. The results look like this:</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhJwi5GGqFIUmQp8sM-ZvKhw9-aa2fhahp67PJgwRVKRC3J9QKhqgxyWPrKGWo_dmokkvDgZEVVgx5r_0w3zqfiW-AhXlyCgvaIKoZ3AVal7B8iBa4g6G7M4ATivA9C-6-LE2S_ed7cYEc/s1600/Capture_XRAY_WEBAPP_END.PNG" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"></a></div>
<div>
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhJwi5GGqFIUmQp8sM-ZvKhw9-aa2fhahp67PJgwRVKRC3J9QKhqgxyWPrKGWo_dmokkvDgZEVVgx5r_0w3zqfiW-AhXlyCgvaIKoZ3AVal7B8iBa4g6G7M4ATivA9C-6-LE2S_ed7cYEc/s1600/Capture_XRAY_WEBAPP_END.PNG" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="743" data-original-width="683" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhJwi5GGqFIUmQp8sM-ZvKhw9-aa2fhahp67PJgwRVKRC3J9QKhqgxyWPrKGWo_dmokkvDgZEVVgx5r_0w3zqfiW-AhXlyCgvaIKoZ3AVal7B8iBa4g6G7M4ATivA9C-6-LE2S_ed7cYEc/s400/Capture_XRAY_WEBAPP_END.PNG" width="367" /></a></div>
<div style="clear: both;">
<br /></div>
<h2>
EXPORTED MODELS</h2>
<div>
<br /></div>
<div>
All the Deep Neural Networks presented in <a href="https://flylogical.blogspot.com/2020/04/deep-learning-analysis-of-covid-19-lung_28.html" target="_blank">Part 3</a> and <a href="http://flylogical.blogspot.com/2020/05/deep-learning-analysis-of-covid-19-lung_30.html" target="_blank">Part 5</a>, including the subset of models used in the deployed composite MODELS 1 & 2 presented above, have been exported in MATLAB format and in ONNX format. Please feel free to retrieve them from my github repositories (<a href="https://github.com/risklogical/deeplearningmodels/releases/tag/v1.0" target="_blank">here for MATLAB format</a>, and <a href="https://github.com/risklogical/deeplearningmodels/releases/tag/v1.0_ONNX" target="_blank">here for ONNX format</a>) for use in your own experiments.</div>
<h2>
POTENTIAL NEXT STEPS</h2>
<ul>
<li>Try different combinations of underlying models to generate composite models which perform better than MODELS 1 & 2 presented here. Owing to the large number of possible combinations, this search/optimisation should be performed in an automated manner (rather than manually by trial-and-error as applied here).</li>
<li>Re-train and compare all the models with larger image datasets whenever they become available. If you have access to such images, please consider posting them to the open source COVID-Net archive <a href="https://figure1.typeform.com/to/lLrHwv" target="_blank">here</a>.</li>
</ul>
<h1>
Deep Learning Analysis of COVID-19 lung X-Rays using MATLAB: Part 5</h1>
<h2>
*** DISCLAIMER ***</h2>
<br />
<i>I have no medical training. Nothing presented here should be considered in any way as informative from a medical point-of-view. This is simply an exercise in image analysis via Deep Learning using MATLAB, with lung X-rays as a topical example in these times of COVID-19. </i><br />
<i></i><br />
<h2>
INTRODUCTION</h2>
<br />
In this Part 5 in my series of blog articles on exploring Deep Learning of lung X-rays using MATLAB, the observations from <a href="https://flylogical.blogspot.com/2020/05/deep-learning-analysis-of-covid-19-lung.html" target="_blank">Part 4</a> -- whereby the <i>grad-CAM</i> technique was used to identify which regions of the X-ray images were being activated for all 19 network architectures under consideration -- serve as the basis for a new network which discriminates, for a given image-under-test, between models that utilise the lung regions (as desired) and those that respond outside the lung regions. The resulting network can then be used as a discriminating filter on the outputs of the main X-ray classifiers, favouring those classifiers which focus (correctly) on the lung regions rather than elsewhere.<br />
<br />
<br />
<h2>
DATASET</h2>
<div>
<br /></div>
The image dataset for the <i>grad-CAM Discriminating Filter</i> comprised a set of <i>grad-CAM</i> images as presented in <a href="https://flylogical.blogspot.com/2020/05/deep-learning-analysis-of-covid-19-lung.html" target="_blank">Part 4</a>: a total of 14,333 <i>grad-CAM</i> image files generated across all 19 network types and the X-ray sample images. For training of the Deep Neural Network, these were sorted into three classes: INSIDE_LUNGS (the <i>grad-CAM</i> activation regions are focused on the interior of one or both lungs -- the desirable scenario); OUTSIDE_LUNGS (the activation regions are focused outside of the lungs, or even outside of the body -- an undesirable scenario); and RIBCAGE_CENTRAL (the activation regions are focused in the central part of the ribcage rather than explicitly within either lung -- an intermediate scenario which occurred often enough to warrant its own class). Sample images of each of these are shown below.<br />
<br />
<div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj08Btj3XA_XTixPJI5CnM2ZjDNa0xbIJxaDukjNe5VEnQfzzjXYpAw6jGFC9dSZtee6Fvq4GcjI3IFdk-_XYO1lVNgKADb8j8N5xKfxrmPgvjhUSl6br62uoKQLa5tjmqkISRhryS0mkY/s1600/INSIDE_LUNGS.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="404" data-original-width="404" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj08Btj3XA_XTixPJI5CnM2ZjDNa0xbIJxaDukjNe5VEnQfzzjXYpAw6jGFC9dSZtee6Fvq4GcjI3IFdk-_XYO1lVNgKADb8j8N5xKfxrmPgvjhUSl6br62uoKQLa5tjmqkISRhryS0mkY/s400/INSIDE_LUNGS.png" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><div style="text-align: left;">
A sample image generated from the <i>grad-CAM</i> technique presented in <a href="https://flylogical.blogspot.com/2020/05/deep-learning-analysis-of-covid-19-lung.html" target="_blank">Part 4</a> applied to a lung X-ray analysed via a (Transfer Learning) Deep Neural Network from <a href="https://flylogical.blogspot.com/2020/04/deep-learning-analysis-of-covid-19-lung_28.html" target="_blank">Part 3</a>. This example has been assigned the label INSIDE_LUNGS for the purpose of creating a labelled dataset for training the Deep Neural Network Discriminating Filter, the central focus of this article. Ideally, all the <i>grad-CAM</i> images generated from all the classifiers applied to all the lung X-rays would fall within this INSIDE_LUNGS class, but the results of <a href="https://flylogical.blogspot.com/2020/05/deep-learning-analysis-of-covid-19-lung.html" target="_blank">Part 4</a> show this not to be the case (hence the motivation for devising the Discriminating Filter to sort the relevant classifications from the less relevant).</div>
</td></tr>
</tbody></table>
</div>
<br />
<div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg60W52Z1OyBLzraqCAMaPP9ea77OplMsjRLwWgOfqn2D6Is1amhLWuq1RxmBVeOu-1gjreaL-B8DUNSjREneOHilpea-d0_edD7GjZkCTwv4wkT8HN09UxLCL0tSFFuaaz0R8_QBXQAyk/s1600/OUTSIDE_LUNGS.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="354" data-original-width="354" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg60W52Z1OyBLzraqCAMaPP9ea77OplMsjRLwWgOfqn2D6Is1amhLWuq1RxmBVeOu-1gjreaL-B8DUNSjREneOHilpea-d0_edD7GjZkCTwv4wkT8HN09UxLCL0tSFFuaaz0R8_QBXQAyk/s400/OUTSIDE_LUNGS.png" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><div style="text-align: left;">
A sample <i>grad-CAM</i> image which falls within the OUTSIDE_LUNGS category. The purpose of the Discriminating Filter described in this current article is to identify such cases where the X-ray analysis classifier has wrongly focused on regions outside of the lungs (or indeed the body).</div>
</td></tr>
</tbody></table>
</div>
<br />
<div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjNumprLvWlc3sReKdfE_OWP_r04vsUeUUl5xnhMZw7SFuWaUdDpXoP94b2wkwmjxP1gpgbujVXO7pD98Pd73dzNsXKWaiZ0RNtXe54lQYKnNKGWnTrSiqiCMQ17P_0qDM_8fsrGlPnaAI/s1600/RIBCAGE_CENTRAL.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="354" data-original-width="354" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjNumprLvWlc3sReKdfE_OWP_r04vsUeUUl5xnhMZw7SFuWaUdDpXoP94b2wkwmjxP1gpgbujVXO7pD98Pd73dzNsXKWaiZ0RNtXe54lQYKnNKGWnTrSiqiCMQ17P_0qDM_8fsrGlPnaAI/s400/RIBCAGE_CENTRAL.png" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><div style="text-align: left;">
A sample <i>grad-CAM</i> image which falls within the RIBCAGE_CENTRAL category. This occurs quite often with the networks from <a href="https://flylogical.blogspot.com/2020/04/deep-learning-analysis-of-covid-19-lung_28.html" target="_blank">Part 3</a>. The idea is that such cases can be considered less definitive than INSIDE_LUNGS, but better than OUTSIDE_LUNGS, when it comes to judging the validity of the lung X-ray classification.</div>
</td></tr>
</tbody></table>
</div>
<h2>
GROUND TRUTH DATA LABELLING VIA AMAZON SAGEMAKER</h2>
<div>
In order to assign each of the 14,333 <i>grad-CAM</i> images to the appropriate class (INSIDE_LUNGS, OUTSIDE_LUNGS, or RIBCAGE_CENTRAL) in preparation for training the Deep Neural Network to be used as the Discriminating Filter, the <a href="https://docs.aws.amazon.com/AWSMechTurk/latest/AWSMechanicalTurkGettingStartedGuide/SvcIntro.html" target="_blank">Amazon Mechanical Turk</a> service (part of the <a href="https://docs.aws.amazon.com/sagemaker/latest/dg/sms.html" target="_blank">Amazon SageMaker Ground Truth for Data Labelling</a> product suite) was utilised. This unique service leverages an on-demand, scalable human workforce to perform the image labelling. The service employs thousands of human workers willing to do piecemeal work at their convenience, and is a far more attractive solution than attempting to manually label all the images oneself (!)</div>
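<div>
Once labelled, the images need arranging into one folder per class for MATLAB's <span style="font-family: "courier new" , "courier" , monospace;">imageDatastore</span>. A minimal sketch, assuming the labelling results have first been flattened into a simple two-column CSV of filename and label (the raw Ground Truth output is an augmented-manifest JSON Lines file, so a conversion step is needed first; all file names here are illustrative):</div>
<pre style="font-family: 'courier new', courier, monospace;">
% Sketch: sort the labelled grad-CAM images into one folder per class,
% ready for imageDatastore (CSV layout and paths are illustrative).
T = readtable('gradcam_labels.csv', 'TextType', 'string');
classes = ["INSIDE_LUNGS", "OUTSIDE_LUNGS", "RIBCAGE_CENTRAL"];
for c = classes
    if ~exist(fullfile('sorted', c), 'dir')
        mkdir(fullfile('sorted', c));
    end
end
for i = 1:height(T)
    copyfile(T.filename(i), fullfile('sorted', T.label(i)));
end
</pre>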
<div>
<br /></div>
<h2>
TRAINING THE DEEP NEURAL NETWORKS VIA TRANSFER LEARNING</h2>
<div>
<br /></div>
<div>
Once the <i>grad-CAM</i> images had been sorted (via AWS Mechanical Turk) into the three classes (INSIDE_LUNGS, OUTSIDE_LUNGS, and RIBCAGE_CENTRAL), all 19 pre-trained networks available in MATLAB were used for Transfer Learning on these <i>grad-CAM</i> images, in a manner directly analogous to the approach presented in <a href="https://flylogical.blogspot.com/2020/04/deep-learning-analysis-of-covid-19-lung_28.html" target="_blank">Part 3</a> for the underlying X-ray image classifier training. </div>
<div>
<br /></div>
<div>
<br /></div>
<h2>
RESULTS</h2>
<div>
The results from the (Transfer Learning) training of all the networks are summarised as follows. From consideration of the classification accuracies on the validation dataset, the "best" performing networks were found to be (where the name refers to the base pre-trained network used in the Transfer Learning): <span style="font-family: "courier new" , "courier" , monospace;">googlenet</span> for determining if INSIDE_LUNGS (75% accuracy); <span style="font-family: "courier new" , "courier" , monospace;">darknet19</span> for determining if OUTSIDE_LUNGS (85% accuracy); and <span style="font-family: "courier new" , "courier" , monospace;">mobilenetv2</span> for determining if RIBCAGE_CENTRAL (86% accuracy). The validation Confusion Matrix for each of these is included below.</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody>
<tr><td style="text-align: center;"><img border="0" data-original-height="835" data-original-width="845" height="395" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiQmR0Z__6BB6Dt3-oLWIvgFV64n-pG0oOG54oDbjNaagfCDi9jAJOf-rQer0Nd3suM4qtFSa4U09xpfnisqYfgtt2j3UEqXVIJIqimYxxBtKGbDrLLztA8nupDgDnUAVzjovv9l2R2-no/s400/EXP_5_CONFUSION_googlenet_228MAY2020.png" style="margin-left: auto; margin-right: auto;" width="400" /></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><div style="text-align: left;">
Confusion Matrix (on the validation dataset) for a network trained on <i>grad-CAM</i> images via Transfer Learning starting with the pre-trained <span style="font-family: "courier new" , "courier" , monospace;">googlenet</span>. Of all the networks that were tried, this one had the highest accuracy (75%) for the INSIDE_LUNGS class.</div>
</td></tr>
</tbody></table>
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiQmR0Z__6BB6Dt3-oLWIvgFV64n-pG0oOG54oDbjNaagfCDi9jAJOf-rQer0Nd3suM4qtFSa4U09xpfnisqYfgtt2j3UEqXVIJIqimYxxBtKGbDrLLztA8nupDgDnUAVzjovv9l2R2-no/s1600/EXP_5_CONFUSION_googlenet_228MAY2020.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"></a><br />
<div>
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiQmR0Z__6BB6Dt3-oLWIvgFV64n-pG0oOG54oDbjNaagfCDi9jAJOf-rQer0Nd3suM4qtFSa4U09xpfnisqYfgtt2j3UEqXVIJIqimYxxBtKGbDrLLztA8nupDgDnUAVzjovv9l2R2-no/s1600/EXP_5_CONFUSION_googlenet_228MAY2020.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><br /></a></div>
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiQmR0Z__6BB6Dt3-oLWIvgFV64n-pG0oOG54oDbjNaagfCDi9jAJOf-rQer0Nd3suM4qtFSa4U09xpfnisqYfgtt2j3UEqXVIJIqimYxxBtKGbDrLLztA8nupDgDnUAVzjovv9l2R2-no/s1600/EXP_5_CONFUSION_googlenet_228MAY2020.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;">
</a>
<br />
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjKhVp-rhziFxDL-efYm2WrPcaycaBnMxK_mPS_1DPqV4Bq6Shz84VMoidNxNDy2EUkvkqrpU_xq3rFp7WUeboSBDEQa1AHVSYsHWfZL4Rdk-WncYBef_zb4CFtakRzNvLFjWCJ61dq_5s/s1600/EXP_5_CONFUSION_darknet19_228MAY2020.png" imageanchor="1" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" data-original-height="835" data-original-width="845" height="395" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjKhVp-rhziFxDL-efYm2WrPcaycaBnMxK_mPS_1DPqV4Bq6Shz84VMoidNxNDy2EUkvkqrpU_xq3rFp7WUeboSBDEQa1AHVSYsHWfZL4Rdk-WncYBef_zb4CFtakRzNvLFjWCJ61dq_5s/s400/EXP_5_CONFUSION_darknet19_228MAY2020.png" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><div style="text-align: left;">
Confusion Matrix (on the validation dataset) for a network trained on <i>grad-CAM</i> images via Transfer Learning starting with the pre-trained <span style="font-family: "courier new" , "courier" , monospace;">darknet19</span>. Of all the networks that were tried, this one had the highest accuracy (85%) for the OUTSIDE_LUNGS class.</div>
</td></tr>
</tbody></table>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiQmR0Z__6BB6Dt3-oLWIvgFV64n-pG0oOG54oDbjNaagfCDi9jAJOf-rQer0Nd3suM4qtFSa4U09xpfnisqYfgtt2j3UEqXVIJIqimYxxBtKGbDrLLztA8nupDgDnUAVzjovv9l2R2-no/s1600/EXP_5_CONFUSION_googlenet_228MAY2020.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"></a></div>
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEipUSjNLMxzhN1ytcH8uo1dIsh_mg5SgYeibMeml4GfnyWJufStU0mhjaPxyXl9L4uBxwJnFna1KQpQo0Hzyh02eM4K5pdB_YLsvZXbDiboWaniHKMjf5F95ybHprRHAhrsa4vn7hsD6pY/s1600/EXP_5_CONFUSION_mobilenetv2_228MAY2020.png" imageanchor="1" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" data-original-height="835" data-original-width="845" height="395" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEipUSjNLMxzhN1ytcH8uo1dIsh_mg5SgYeibMeml4GfnyWJufStU0mhjaPxyXl9L4uBxwJnFna1KQpQo0Hzyh02eM4K5pdB_YLsvZXbDiboWaniHKMjf5F95ybHprRHAhrsa4vn7hsD6pY/s400/EXP_5_CONFUSION_mobilenetv2_228MAY2020.png" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><div style="text-align: left;">
Confusion Matrix (on the validation dataset) for a network trained on <i>grad-CAM</i> images via Transfer Learning starting with the pre-trained <span style="font-family: "courier new" , "courier" , monospace;">mobilenetv2</span>. Of all the networks that were tried, this one had the highest accuracy (86%) for the RIBCAGE_CENTRAL class.</div>
</td></tr>
</tbody></table>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<h2>
DISCUSSION & NEXT STEPS</h2>
<div>
The results demonstrate that the technique of Transfer Learning can be used to devise Deep Neural Networks which can successfully assess (with reasonable accuracy) the validity of a given lung X-ray classifier network applied to a given X-ray image, by determining whether the corresponding <i>grad-CAM</i> image focuses on regions INSIDE the lungs (suggesting that the X-ray lung classification is valid), OUTSIDE the lungs (suggesting that the X-ray lung classification is not valid), or in the RIBCAGE CENTRAL region (suggesting that the lung X-ray classification may be of some validity: i.e., more relevant than OUTSIDE the lungs though not as relevant as INSIDE the lungs). The Deep Neural Networks presented here can therefore serve as a <i>Discrimination Filter</i> to assist in choosing between all the various networks (presented in <a href="https://flylogical.blogspot.com/2020/04/deep-learning-analysis-of-covid-19-lung_28.html" target="_blank">Part 3</a>) for X-ray lung image classification.<br />
<br />
The next step will be to combine the results of this article with the results from <a href="https://flylogical.blogspot.com/2020/04/deep-learning-analysis-of-covid-19-lung_28.html" target="_blank">Part 3</a> to determine the "best" network (or combination of networks) for lung X-ray image classification.</div>
<div>
<br /></div>
<div>
<br /></div>
<h1>
Deep Learning Analysis of COVID-19 lung X-Rays using MATLAB: Part 4</h1>
<i><b>Update: see <a href="https://flylogical.blogspot.com/2020/05/deep-learning-analysis-of-covid-19-lung_30.html" target="_blank">Part 5</a> where the grad-CAM results presented below are used to train another suite of networks to help choose between all the lung X-ray classifiers presented in <a href="https://flylogical.blogspot.com/2020/04/deep-learning-analysis-of-covid-19-lung_28.html" target="_blank">Part 3</a>.</b></i><br />
<h2>
*** DISCLAIMER ***</h2>
<br />
<i>I have no medical training. Nothing presented here should be considered in any way as informative from a medical point-of-view. This is simply an exercise in image analysis via Deep Learning using MATLAB, with lung X-rays as a topical example in these times of COVID-19. </i><br />
<i></i><br />
<h2>
INTRODUCTION</h2>
<br />
In this Part 4 in my series of blog articles on exploring Deep Learning of lung X-rays using MATLAB, the analysis of <a href="https://flylogical.blogspot.com/2020/04/deep-learning-analysis-of-covid-19-lung_28.html" target="_blank">Part 3</a> is revisited to further compare the performance of <i>all</i> the pre-trained networks available via MATLAB as the basis for the Transfer Learning procedure. Specifically, the <i><a href="https://towardsdatascience.com/demystifying-convolutional-neural-networks-using-gradcam-554a85dd4e48" target="_blank">grad-CAM</a></i> technique is applied to (i) gain an insight into how the various networks respond to the underlying images and, moreover, (ii) investigate how the responses of the networks <i>differ</i> from one another. The goal is to provide some guidance as to how to choose the "best" network for the task at hand. Again, all analysis is performed in MATLAB.<br />
<br />
<h2>
grad-CAM</h2>
<br />
The <i>grad-CAM</i> technique is introduced <a href="https://towardsdatascience.com/demystifying-convolutional-neural-networks-using-gradcam-554a85dd4e48">here</a>, with a MATLAB implementation provided <a href="https://uk.mathworks.com/help/deeplearning/ug/gradcam-explains-why.html">here</a> which is used as the basis for the present analysis. Note that <i>grad-CAM </i>is a more powerful and more general extension of the Class Activation Map (CAM) technique used in <a href="https://flylogical.blogspot.com/2020/04/deep-learning-analysis-of-covid-19-lung_20.html">Part 2</a>.<br />
<br />
The code for generating the results presented in the following sections uses the <span style="font-family: "courier new" , "courier" , monospace;">gradcam</span> function (in MATLAB) provided in the reference example <a href="https://uk.mathworks.com/help/deeplearning/ug/gradcam-explains-why.html">here</a>. The <span style="font-family: "courier new" , "courier" , monospace;">gradcam</span> function presented there is used in precisely the same manner, so its code is not repeated here.<br />
<br />
That said, the cited reference example is directly applicable only to <span style="font-family: "courier new" , "courier" , monospace;">googlenet</span>. Extending it to each of the other networks requires identifying the appropriate softmax and feature-map layers, through use of the <span style="font-family: "courier new" , "courier" , monospace;">analyzeNetwork</span> function to examine the given network and select the correct layers. The softmax layer is easily identified as the last softmax layer before the output. The feature-map layer is identified as follows (from <a href="https://uk.mathworks.com/help/deeplearning/ug/gradcam-explains-why.html">here</a>):<br />
<br />
"<i>Specify either the last ReLU layer with non-singleton spatial dimensions, or the last layer that gathers the outputs of ReLU layers (such as a depth concatenation or an addition layer). If your network does not contain any ReLU layers, specify the name of the final convolutional layer that has non-singleton spatial dimensions in the output</i>".<br />
<br />
For convenience, I have performed this identification for all the network types, and bundled them into a function named <span style="font-family: "courier new" , "courier" , monospace;"><a href="https://github.com/risklogical/matlab-general/blob/master/gradCamLayerNames.m" target="_blank">gradCamLayerNames</a></span> (available via my <a href="https://github.com/risklogical/matlab-general" target="_blank">github repository</a>.)<br />
<br />
Note: my <span style="font-family: "courier new" , "courier" , monospace;">gradCamLayerNames</span> function returns the relevant layer names for the unmodified pre-trained networks distributed with MATLAB. For pre-trained networks which have been modified for Transfer Learning (by replacing the final few layers as described in <a href="https://flylogical.blogspot.com/2020/04/deep-learning-analysis-of-covid-19-lung.html">Part 1</a>), the relevant layer names for use with <span style="font-family: "courier new" , "courier" , monospace;">gradcam</span> may be different (unless the original names happen to have been replicated). For example, all the networks used in the present analysis have been modified in the manner described in <a href="https://flylogical.blogspot.com/2020/04/deep-learning-analysis-of-covid-19-lung.html">Part 1</a> for Transfer Learning, and so the relevant softmax layer name for use with <span style="font-family: "courier new" , "courier" , monospace;">gradcam</span> is 'softmax' rather than that returned by <span style="font-family: "courier new" , "courier" , monospace;">gradCamLayerNames</span>.<br />
<br />
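<div>
Putting the pieces together, here is a minimal sketch of applying <i>grad-CAM</i> to one of the Transfer-Learned networks, following the MathWorks reference example (the <span style="font-family: "courier new" , "courier" , monospace;">gradcam</span> helper is as defined there; the two-output signature shown for <span style="font-family: "courier new" , "courier" , monospace;">gradCamLayerNames</span> is an assumption for illustration):</div>
<pre style="font-family: 'courier new', courier, monospace;">
% Sketch: grad-CAM for a Transfer-Learned network `net` on RGB image `img`.
% Assumes the gradcam helper from the MathWorks reference example; the
% gradCamLayerNames signature below is assumed, not definitive.
inputSize = net.Layers(1).InputSize(1:2);
img = imresize(img, inputSize);
classfn = classify(net, img);                 % predicted class label
lgraph = layerGraph(net);
lgraph = removeLayers(lgraph, lgraph.Layers(end).Name); % drop output layer
dlnet = dlnetwork(lgraph);
[~, featureLayerName] = gradCamLayerNames('googlenet'); % base-network name
softmaxName = 'softmax';  % renamed during Transfer Learning (see note above)
dlImg = dlarray(single(img), 'SSC');
[featureMap, dScoresdMap] = dlfeval(@gradcam, dlnet, dlImg, ...
    softmaxName, featureLayerName, classfn);
gradcamMap = sum(featureMap .* sum(dScoresdMap, [1 2]), 3);
gradcamMap = rescale(extractdata(gradcamMap));
gradcamMap = imresize(gradcamMap, inputSize, 'Method', 'bicubic');
imshow(img); hold on;                         % overlay heat map on X-ray
imagesc(gradcamMap, 'AlphaData', 0.5); colormap jet; hold off;
</pre>
<br />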
<h2>
Image Datasets and Transfer Learning Networks</h2>
<div>
<br /></div>
<div>
The lung X-ray image datasets (arranged into Examples 1--4) and the corresponding Transfer Learning trained networks from<a href="https://flylogical.blogspot.com/2020/04/deep-learning-analysis-of-covid-19-lung_28.html"> Part 3</a> are used here "as is" without further introduction (refer to <a href="https://flylogical.blogspot.com/2020/04/deep-learning-analysis-of-covid-19-lung_28.html">Part 3</a> for the details).</div>
<div>
<br /></div>
<h2>
Analysis via grad-CAM</h2>
<h3>
</h3>
<h3>
EXAMPLE 1: "YES / NO" Classification of Pneumonia
</h3>
<div>
The <i>grad-CAM</i> analysis has been performed on all of the Example 1 Transfer Learning networks with all of the corresponding validation images. A representative sample of results is available via the following links (where the network names pertain to the base networks used in the Transfer Learning):</div>
<br />
<ol>
<li><span style="font-family: "courier new" , "courier" , monospace;"><a href="http://flylogical.com/Docs/covid/blog_article_4/EXPERIMENT_1_BEST_NET_VGG16/script_scratchpad_activation_maps_gradcam.html" target="_blank">vgg16</a></span> applied to all 224 validation images</li>
<li><a href="http://flylogical.com/Docs/covid/blog_article_4/EXPERIMENT_1_BEST_NET_DARKNET53/script_scratchpad_activation_maps_gradcam.html" target="_blank"><span style="font-family: "courier new" , "courier" , monospace;">darknet53</span></a> applied to all 224 validation images</li>
<li><a href="http://flylogical.com/Docs/covid/blog_article_4/EXPERIMENT_1_ALL_NETS_ONE_IMAGE/script_scratchpad_activation_maps_gradcam.html" target="_blank">all 19 networks applied</a> to a single representative validation image</li>
</ol>
<h3>
EXAMPLE 2: Classification of Bacterial or Viral Pneumonia</h3>
<div>
The <i>grad-CAM</i> analysis has been performed on all of the Example 2 Transfer Learning networks with all of the corresponding validation images. A representative sample of results is available via the following links (where the network names pertain to the base networks used in the Transfer Learning):</div>
<br />
<ol>
<li><a href="http://flylogical.com/Docs/covid/blog_article_4/EXPERIMENT_2_BEST_NET_DARKNET53/script_scratchpad_activation_maps_gradcam.html" target="_blank"><span style="font-family: "courier new" , "courier" , monospace;">darknet53</span></a> applied to all 640 validation images</li>
<li><a href="http://flylogical.com/Docs/covid/blog_article_4/EXPERIMENT_2_ALL_NETS_ONE_IMAGE/script_scratchpad_activation_maps_gradcam.html" target="_blank">all 19 networks applied</a> to a single representative validation image</li>
</ol>
<div>
<br /></div>
<h3>
EXAMPLE 3: Classification of COVID-19 or Other-Viral Pneumonia</h3>
<div>
The <i>grad-CAM</i> analysis has been performed on all of the Example 3 Transfer Learning networks with all of the corresponding validation images. A representative sample of results is available via the following links (where the network names pertain to the base networks used in the Transfer Learning):</div>
<br />
<ol>
<li><span style="font-family: "courier new" , "courier" , monospace;"><a href="http://flylogical.com/Docs/covid/blog_article_4/EXPERIMENT_3_BEST_NET_VGG19/script_scratchpad_activation_maps_gradcam.html" target="_blank">vgg19</a></span> applied to all 260 validation images</li>
<li><a href="http://flylogical.com/Docs/covid/blog_article_4/EXPERIMENT_3_ALL_NETS_ONE_IMAGE/script_scratchpad_activation_maps_gradcam.html" target="_blank">all 19 networks applied</a> to a single representative validation image</li>
</ol>
<div>
<br /></div>
<h3>
EXAMPLE 4: Classification of COVID-19 Pneumonia versus Healthy, Bacterial, or non-COVID Viral Pneumonia</h3>
<div>
The <i>grad-CAM</i> analysis has been performed on all of the Example 4 Transfer Learning networks with all of the corresponding validation images. A representative sample of results is available via the following links (where the network names pertain to the base networks used in the Transfer Learning):</div>
<br />
<ol>
<li><span style="font-family: "courier new" , "courier" , monospace;"><a href="http://flylogical.com/Docs/covid/blog_article_4/EXPERIMENT_4_BEST_NET_INCEPTIONRESNETV2/script_scratchpad_activation_maps_gradcam.html" target="_blank">inceptionresnetv2</a></span> applied to all 44 validation images</li>
<li><a href="http://flylogical.com/Docs/covid/blog_article_4/EXPERIMENT_4_ALL_NETS_ONE_IMAGE/script_scratchpad_activation_maps_gradcam.html" target="_blank">all 19 networks applied</a> to a single representative validation image</li>
</ol>
<div>
<br /></div>
<h2>
RESULTS & NEXT STEPS</h2>
Looking over all these <i>grad-CAM</i> images for all four Examples (via the links above) confirms that the networks are generally responding to regions <i>within</i> the lungs when making their classifications. This is a positive finding in terms of qualifying the overall Deep Learning approach to the analysis of the lung X-rays, and confirms the results of the (simpler) CAM approach from <a href="https://flylogical.blogspot.com/2020/04/deep-learning-analysis-of-covid-19-lung_20.html" target="_blank">Part 2</a>. However, the findings are not completely definitive in that it can be seen that some networks on some images are responding to inappropriate regions in the images (e.g., outside the lungs or even outside the body!), thereby reducing the validity of the approach for classifying the lung X-rays.<br />
<br />
It is also interesting to observe how the various networks respond differently to the same image. For example, the <i>grad-CAM</i> images below (taken from the<a href="http://flylogical.com/Docs/covid/blog_article_4/EXPERIMENT_4_ALL_NETS_ONE_IMAGE/script_scratchpad_activation_maps_gradcam.html" target="_blank"> results for Experiment 4</a>) illustrate how six different networks (base names <span style="font-family: "courier new" , "courier" , monospace;">darknet19</span>, <span style="font-family: "courier new" , "courier" , monospace;">darknet53</span>, <span style="font-family: "courier new" , "courier" , monospace;">densenet201</span>, <span style="font-family: "courier new" , "courier" , monospace;">googlenet</span> <span style="font-family: "courier new" , "courier" , monospace;">[original]</span>, <span style="font-family: "courier new" , "courier" , monospace;">googlenet</span> <span style="font-family: "courier new" , "courier" , monospace;">[places]</span>, and <span style="font-family: "courier new" , "courier" , monospace;">inceptionresnetv2</span>) respond to the same validation image. It can be seen that the given networks are activated by quite different regions within the image. This is perhaps not too surprising given that the networks generally have quite different layer structures. That said, the <span style="font-family: "courier new" , "courier" , monospace;">googlenet</span> variants (<span style="font-family: "courier new" , "courier" , monospace;">[original]</span> and <span style="font-family: "courier new" , "courier" , monospace;">[places]</span>) have identical layer structures but have been pre-trained on different image sets, then Transfer Trained on identical lung X-ray training images. The activations observed from <i>grad-CAM </i>analysis are nevertheless quite different.<br />
<br />
All this goes to show that the optimal choice of networks for the task of lung X-ray classification is somewhat subtle since the various networks respond in different ways to the underlying images. It is not sufficient to only consider the classification accuracy scores (from the classification-accuracy results tables presented in <a href="https://flylogical.blogspot.com/2020/04/deep-learning-analysis-of-covid-19-lung_28.html" target="_blank">Part 3</a>). It is important to also consider the relevance and validity of the activated regions as exposed via this <i>grad-CAM</i> analysis.<br />
<br />
Interesting next steps to consider therefore would be to (i) combine the results of the various networks on the classification task rather than simply trying to choose a single 'optimal' network (per Experiment task); (ii) whilst doing so, eliminate any network whose <i>grad-CAM</i> activations are in inappropriate regions (i.e., outside the lungs) on a given sample-image-under-test. This could result in a more accurate and robust COVID-19 classifier.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi5hWv9rBrKMXly0pDPVJvV7N-FbHJm8-SgGOTkp1Ed20-LkV86QqgUA30a-Hxkl_XEiQYNJzMosb4VNwY98K9fG40ZdQF8JbScWmRpHc3BdLup1NjRQ642_Zj-BYUcNz4frvxNzMBSkmY/s1600/Capture_gradcam1.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="348" data-original-width="431" height="322" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi5hWv9rBrKMXly0pDPVJvV7N-FbHJm8-SgGOTkp1Ed20-LkV86QqgUA30a-Hxkl_XEiQYNJzMosb4VNwY98K9fG40ZdQF8JbScWmRpHc3BdLup1NjRQ642_Zj-BYUcNz4frvxNzMBSkmY/s400/Capture_gradcam1.PNG" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhz82UqW_0axtPcZYP6M9DqmOODoNPgLP7QJ5c9X2QitL81DvalEUOHhqDTQDX2PoaYjdOYkXJbhsvwkpkegzX-fwffRHVSpZstzMigq9ri04mtB1OBJW1glYCUQ-s8AX9ycVyoL4BovRg/s1600/Capture_gradcam2.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="343" data-original-width="414" height="331" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhz82UqW_0axtPcZYP6M9DqmOODoNPgLP7QJ5c9X2QitL81DvalEUOHhqDTQDX2PoaYjdOYkXJbhsvwkpkegzX-fwffRHVSpZstzMigq9ri04mtB1OBJW1glYCUQ-s8AX9ycVyoL4BovRg/s400/Capture_gradcam2.PNG" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiqdNt7kqtHdUu807CKnMFet8PVhNeqDqr0XczUrCnU6ov5qfneyqVrH62VFHzRVR2nr72tFSnsl-TYxUaTHYo-O7OW-g2SfoAmrwGLJ1HINEn0PC5Bhek0GIiE2VioUdkdaAA5D2FQNOM/s1600/Capture_gradcam3.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="313" data-original-width="380" height="328" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiqdNt7kqtHdUu807CKnMFet8PVhNeqDqr0XczUrCnU6ov5qfneyqVrH62VFHzRVR2nr72tFSnsl-TYxUaTHYo-O7OW-g2SfoAmrwGLJ1HINEn0PC5Bhek0GIiE2VioUdkdaAA5D2FQNOM/s400/Capture_gradcam3.PNG" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiv5k7zJ9BlJBqZei6jyXfrmdZvlfu9JHezWzyZuaV54XAHxOfYy-ua1radU-8jF1LqjfvzYTkU7VfbUmTgEt93UfLvU72L6oycsp7m2LkaMPtHh8jbIdoqYlZ1cnMdOG2UxcVWhbj_Mw4/s1600/Capture_gradcam4.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="312" data-original-width="384" height="325" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiv5k7zJ9BlJBqZei6jyXfrmdZvlfu9JHezWzyZuaV54XAHxOfYy-ua1radU-8jF1LqjfvzYTkU7VfbUmTgEt93UfLvU72L6oycsp7m2LkaMPtHh8jbIdoqYlZ1cnMdOG2UxcVWhbj_Mw4/s400/Capture_gradcam4.PNG" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEha4lTO6GKSQRn-i3DeXxxKQAmkSG_K9-IiQVVzf00iHCs5bQFCoF_5n_MaPT6IMGnalB13w_KBWBuqfk7YJ4x4iMr1re0dUoSZfA-xtDYB_ep5VdyKScomGypOJVSmR41-RNOsAdv7ne8/s1600/Capture_gradcam5.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="308" data-original-width="388" height="317" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEha4lTO6GKSQRn-i3DeXxxKQAmkSG_K9-IiQVVzf00iHCs5bQFCoF_5n_MaPT6IMGnalB13w_KBWBuqfk7YJ4x4iMr1re0dUoSZfA-xtDYB_ep5VdyKScomGypOJVSmR41-RNOsAdv7ne8/s400/Capture_gradcam5.PNG" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiZtjRNU0A6Mzv9mA54HE_7p38jQ6iKMNOH7BTMB7I_lHVtV8mutVL1jJQ7-88zd7_XM7683xU4-qxIbBL9DJt0AT9tQhz5LfvEyFHbPgRpuahVFCwONZTN0EKxwsV1U1WslCi8nC6HUDE/s1600/Capture_gradcam6.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="386" data-original-width="450" height="342" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiZtjRNU0A6Mzv9mA54HE_7p38jQ6iKMNOH7BTMB7I_lHVtV8mutVL1jJQ7-88zd7_XM7683xU4-qxIbBL9DJt0AT9tQhz5LfvEyFHbPgRpuahVFCwONZTN0EKxwsV1U1WslCi8nC6HUDE/s400/Capture_gradcam6.PNG" width="400" /></a></div>
<br />
<h1>
Deep Learning Analysis of COVID-19 lung X-Rays using MATLAB: Part 3</h1>
<b><i>UPDATES: See <a href="https://flylogical.blogspot.com/2020/05/deep-learning-analysis-of-covid-19-lung.html" target="_blank">Part 4 </a>for a grad-CAM analysis of all the trained networks presented below, then <a href="https://flylogical.blogspot.com/2020/05/deep-learning-analysis-of-covid-19-lung_30.html" target="_blank">Part 5</a> where the grad-CAM results of <a href="https://flylogical.blogspot.com/2020/05/deep-learning-analysis-of-covid-19-lung.html" target="_blank">Part 4</a> are used to train another suite of networks to help choose between the lung X-ray classifiers presented below.</i></b><br />
<br />
<h2>
*** DISCLAIMER ***</h2>
<br />
<i>I have no medical training. Nothing presented here should be considered in any way as informative from a medical point-of-view. This is simply an exercise in image analysis via Deep Learning using MATLAB, with lung X-rays as a topical example in these times of COVID-19. </i><br />
<i></i><br />
<h2>
INTRODUCTION</h2>
<br />
In this Part 3 in my series of blog articles on exploring Deep Learning of lung X-rays using MATLAB, the analysis of <a href="https://flylogical.blogspot.com/2020/04/deep-learning-analysis-of-covid-19-lung.html" target="_blank">Part 1</a> is revisited: rather than just using the pre-trained <span style="font-family: "courier new" , "courier" , monospace;">googlenet</span> as the basis of Transfer Learning, the performance of <i>all</i> the pre-trained networks available via MATLAB as the basis for the Transfer Learning procedure is compared.<br />
<br />
<br />
<h2>
AVAILABLE PRE-TRAINED NETWORKS</h2>
<div>
<br /></div>
<div>
See <a href="https://uk.mathworks.com/help/deeplearning/ug/pretrained-convolutional-neural-networks.html" target="_blank">this overview</a> for a list of all the available pre-trained Deep Neural Networks bundled with MATLAB (version R2020a). There are 19 available networks, listed below, including two versions of <span style="font-family: "courier new" , "courier" , monospace;">googlenet</span>: the original, and a variant with identical layer structure but pre-trained on images of places rather than images of objects. </div>
<div>
<br />
<table>
<tbody>
<tr><td><b>Available Pre-Trained Networks</b></td>
</tr>
<tr><td><span style="font-family: inherit;">squeezenet</span></td></tr>
<tr><td><span style="font-family: inherit;">googlenet</span></td></tr>
<tr><td><span style="font-family: inherit;">googlenet (places)</span></td></tr>
<tr><td><span style="font-family: inherit;">inceptionv3</span></td></tr>
<tr><td><span style="font-family: inherit;">densenet201</span></td></tr>
<tr><td><span style="font-family: inherit;">mobilenetv2</span></td></tr>
<tr><td><span style="font-family: inherit;">resnet18</span></td></tr>
<tr><td><span style="font-family: inherit;">resnet50</span></td></tr>
<tr><td><span style="font-family: inherit;">resnet101</span></td></tr>
<tr><td><span style="font-family: inherit;">xception</span></td></tr>
<tr><td><span style="font-family: inherit;">inceptionresnetv2</span></td></tr>
<tr><td><span style="font-family: inherit;">shufflenet</span></td></tr>
<tr><td><span style="font-family: inherit;">nasnetmobile</span></td></tr>
<tr><td><span style="font-family: inherit;">darknet19</span></td></tr>
<tr><td><span style="font-family: inherit;">darknet53</span></td></tr>
<tr><td><span style="font-family: inherit;">alexnet</span></td></tr>
<tr><td><span style="font-family: inherit;">vgg16</span></td></tr>
<tr><td><span style="font-family: inherit;">vgg19</span></td></tr>
</tbody></table>
</div>
<span style="font-family: inherit;"></span><br />
<h2>
TRANSFER LEARNING</h2>
<h3>
Network Preparation</h3>
<br />
Each of the above pre-trained networks was prepared for Transfer Learning in the same manner as described in <a href="http://flylogical.blogspot.com/2020/04/deep-learning-analysis-of-covid-19-lung.html" target="_blank">Part 1</a> (and references therein). This involved replacing the last few layers in each network in preparation for re-training with the lung X-ray images. Determining which layers to replace required identifying the last learning layer (such as a <span style="font-family: "courier new" , "courier" , monospace;">convolution2dLayer</span>) in each network and replacing from that point onwards with new layers having the appropriate number of output classes (e.g., 2 or 4 rather than the 1000 pre-trained <span style="font-family: "courier new" , "courier" , monospace;">ImageNet</span> classes). For convenience, I've collected together the appropriate logic for preparing each of the networks (since the relevant layer names are generally different for the various networks) in the function <span style="font-family: "courier new" , "courier" , monospace;">prepareTransferLearningLayers</span> which you can obtain from my GitHub repository <a href="https://github.com/risklogical/matlab-general" target="_blank">here</a>.<br />
<br />
<h3>
Data Preparation</h3>
<div>
For each of Examples 1--4 in <a href="http://flylogical.blogspot.com/2020/04/deep-learning-analysis-of-covid-19-lung.html" target="_blank">Part 1</a>, the training and validation image datasets were prepared as before (from all the underlying images available), with one important additional step: for each Example, the respective datasets were frozen (rather than randomly re-sampled each time) so that all 19 networks could be trained and tested on precisely the same datasets, enabling a like-for-like comparison of network performance.</div>
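<div>
<br />
As a minimal sketch of this freezing step (the folder name and split fraction here are placeholders, not necessarily the values used in Part 1), the random split is performed once and the resulting datastores saved for re-use by every training run:<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">imds = imageDatastore('xray_images','IncludeSubfolders',true,...</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">    'LabelSource','foldernames');</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">rng(0); % fix the seed so the split is reproducible</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">[trainImages,validationImages] = splitEachLabel(imds,0.9,'randomized');</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">save('frozen_example1.mat','trainImages','validationImages');</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">% ...each of the 19 training runs then loads the identical datastores:</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">load('frozen_example1.mat','trainImages','validationImages');</span><br />
</div>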
<div>
<br /></div>
<div>
<br /></div>
<h3>
Training Options</h3>
<div>
The training options for each case were set as follows:</div>
<div>
<br /></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">MaxEpochs=1000; % Placeholder, patience will stop well before</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">miniBatchSize = 10;</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">numIterationsPerEpoch = floor(numTrainingImages/miniBatchSize);</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">options = trainingOptions('sgdm',...</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> 'ExecutionEnvironment','multi-gpu', ...% for AWS ec2 p-class VM</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> 'MiniBatchSize',miniBatchSize, 'MaxEpochs',MaxEpochs,...</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> 'InitialLearnRate',1e-4, 'Verbose',false,...</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> 'Plots','none', 'ValidationData',validationImages,...</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> 'ValidationFrequency',numIterationsPerEpoch,...</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> 'ValidationPatience',4);
</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"></span><br /></div>
Note that <span style="font-family: "courier new" , "courier" , monospace;">ValidationPatience</span> is set to a finite value (e.g., 4 rather than <span style="font-family: "courier new" , "courier" , monospace;">Inf</span>) to automatically halt the training before overfitting occurs. This also enables the training to be performed within a big loop across all 19 network types without user intervention. Also note that <span style="font-family: "courier new" , "courier" , monospace;">ExecutionEnvironment</span> was set to <span style="font-family: "courier new" , "courier" , monospace;">multi-gpu</span> to take advantage of the multiple GPUs available via Amazon Web Services (AWS) p-class instance types, in order to speed up the analysis for all networks across all examples. The screenshot below shows the GPU activity when running the training on an AWS p2.8xlarge instance. Even with GPUs, some training runs took quite a long time, especially (and unsurprisingly) for the larger networks. For example, <span style="font-family: "courier new" , "courier" , monospace;">nasnetlarge</span> on the Example 2 dataset (3434 training images) took 11 hours to complete. All in all, it took a few days to complete the training for all 76 cases (i.e., the 4 Examples across each of the 19 networks).<br />
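<br />
As a sketch of that big loop (assuming the frozen datastores and <span style="font-family: "courier new" , "courier" , monospace;">options</span> from above, and with the argument list of my <span style="font-family: "courier new" , "courier" , monospace;">prepareTransferLearningLayers</span> helper simplified for illustration), the training across all networks looks something like:<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">netNames = {'squeezenet','googlenet','inceptionv3','densenet201',...</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">    'mobilenetv2','resnet18','resnet50','resnet101','xception',...</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">    'inceptionresnetv2','shufflenet','nasnetmobile','nasnetlarge',...</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">    'darknet19','darknet53','alexnet','vgg16','vgg19'};</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">for k = 1:numel(netNames)</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">    baseNet = feval(netNames{k}); % load the pre-trained network by name</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">    % (the Places variant is loaded via googlenet('Weights','places365'))</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">    inputSize = baseNet.Layers(1).InputSize;</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">    lgraph = prepareTransferLearningLayers(baseNet,numClasses);</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">    % resize (and grayscale-to-RGB convert) the images on the fly:</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">    augTrain = augmentedImageDatastore(inputSize(1:2),trainImages,...</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">        'ColorPreprocessing','gray2rgb');</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">    trainedNets.(netNames{k}) = trainNetwork(augTrain,lgraph,options);</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">end</span><br />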
<br />
<div>
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgC2Cxi2PV6V7mL8m6P7DgRHmbKNVP7EXObpqaRjQmc9AmPt51VzgZG7exOggpwhB7qx9YKYG3QZz-ZV2uIeXlikomb2Tu7iT_3NNhPHOc8zEUK0bggxHWHb2sO5PWRwDlZ2v_bEx2yUro/s1600/Capture_MULTI_GPU_TRAINING.PNG" imageanchor="1" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" data-original-height="292" data-original-width="469" height="248" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgC2Cxi2PV6V7mL8m6P7DgRHmbKNVP7EXObpqaRjQmc9AmPt51VzgZG7exOggpwhB7qx9YKYG3QZz-ZV2uIeXlikomb2Tu7iT_3NNhPHOc8zEUK0bggxHWHb2sO5PWRwDlZ2v_bEx2yUro/s400/Capture_MULTI_GPU_TRAINING.PNG" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Deep Learning Network training via MATLAB on an AWS p2.x8large instance with 8 NVIDIA Tesla GPUs
</td></tr>
</tbody></table>
</div>
<div style="clear: both;"></div>
<br />
<h2>
RESULTS</h2>
<div>
<br /></div>
<div>
Refer to <a href="http://flylogical.blogspot.com/2020/04/deep-learning-analysis-of-covid-19-lung.html" target="_blank">Part 1</a> for the motivation and background details pertaining to the following examples. </div>
<h3>
EXAMPLE 1: "YES / NO" Classification of Pneumonia
</h3>
<div>
<br /></div>
<div>
The 19 networks were re-trained (via Transfer Learning) on the relevant training dataset for the given example (1280 images, equally balanced across both classes). The following table shows the performance of each trained network when applied to the validation dataset (balanced, 112 each "yes" / "no") and the holdout dataset (unbalanced, 3806 "yes" only). The results are ordered (descending) by (i) Average Accuracy (across both classes), then (ii) Pneumonia Accuracy (i.e., fraction of "yes" correctly diagnosed). The table also includes the Missed Pneumonia rate, i.e., the percentage of the total validation population that should have been diagnosed "yes" (pneumonia) but was instead wrongly diagnosed as "no" (healthy). A sketch showing how these metrics are computed from a trained network is given after the table.<br />
<br />
<br /></div>
<div>
<table>
<tbody>
<tr><td><b>Base network</b></td><td><b>Validation: Average Accuracy</b><br />
<b><br /></b></td><td><b>Validation: Pneumonia Accuracy</b><br />
<b><br /></b></td><td><b>Validation: Healthy Accuracy</b><br />
<b><br /></b></td><td><b>Validation: Missed Pneumonia</b><br />
<b><br /></b></td><td><b>Holdout: Average Accuracy</b><br />
<b><br /></b></td></tr>
<tr><td>vgg16</td><td>91%</td><td>88%</td><td>95%</td><td>6%</td><td>86%</td></tr>
<tr><td>alexnet</td><td>90%</td><td>86%</td><td>94%</td><td>7%</td><td>85%</td></tr>
<tr><td>darknet19</td><td>88%</td><td>88%</td><td>88%</td><td>6%</td><td>87%</td></tr>
<tr><td>darknet53</td><td>88%</td><td>89%</td><td>87%</td><td>5%</td><td>89%</td></tr>
<tr><td>shufflenet</td><td>88%</td><td>84%</td><td>92%</td><td>8%</td><td>84%</td></tr>
<tr><td>googlenet</td><td>88%</td><td>83%</td><td>93%</td><td>8%</td><td>84%</td></tr>
<tr><td>googlenetplaces</td><td>88%</td><td>89%</td><td>86%</td><td>5%</td><td>87%</td></tr>
<tr><td>resnet101</td><td>88%</td><td>77%</td><td>98%</td><td>12%</td><td>76%</td></tr>
<tr><td>nasnetlarge</td><td>87%</td><td>83%</td><td>91%</td><td>8%</td><td>84%</td></tr>
<tr><td>resnet50</td><td>87%</td><td>86%</td><td>88%</td><td>7%</td><td>88%</td></tr>
<tr><td>vgg19</td><td>86%</td><td>90%</td><td>81%</td><td>5%</td><td>91%</td></tr>
<tr><td>xception</td><td>86%</td><td>79%</td><td>93%</td><td>11%</td><td>83%</td></tr>
<tr><td>resnet18</td><td>85%</td><td>71%</td><td>100%</td><td>15%</td><td>77%</td></tr>
<tr><td>squeezenet</td><td>84%</td><td>92%</td><td>76%</td><td>4%</td><td>91%</td></tr>
<tr><td>densenet201</td><td>83%</td><td>71%</td><td>96%</td><td>15%</td><td>72%</td></tr>
<tr><td>inceptionresnetv2 </td><td>83%</td><td>92%</td><td>73%</td><td>4%</td><td>86%</td></tr>
<tr><td>nasnetmobile</td><td>72%</td><td>84%</td><td>60%</td><td>8%</td><td>85%</td></tr>
<tr><td>inceptionv3</td><td>72%</td><td>83%</td><td>61%</td><td>8%</td><td>83%</td></tr>
<tr><td>mobilenetv2</td><td>69%</td><td>93%</td><td>46%</td><td>4%</td><td>93%</td></tr>
</tbody></table>
</div>
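<div>
For reference, here is a minimal sketch (variable names assumed, and the per-network image resizing omitted) of how the tabulated metrics can be derived from a trained network:<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">predicted = classify(netTransfer,validationImages);</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">actual = validationImages.Labels;</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">C = confusionmat(actual,predicted); % rows = actual, columns = predicted</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">perClassAcc = diag(C)./sum(C,2);    % Pneumonia / Healthy accuracy</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">avgAcc = mean(perClassAcc);         % Average Accuracy</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">% Missed Pneumonia: actual "yes" wrongly predicted "no", as a</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">% fraction of the whole validation population:</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">missed = sum(actual=="yes" & predicted=="no")/numel(actual);</span><br />
</div>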
<div>
<br /></div>
<h3>
EXAMPLE 2: Classification of Bacterial or Viral Pneumonia
</h3>
<br />
The 19 networks were re-trained (via Transfer Learning) on the relevant training dataset for the given example (3434 images, equally balanced across both classes). The following table shows the performance of each trained network when applied to the validation dataset (balanced, 320 each "bacteria" / "virus") and the holdout dataset (unbalanced, 520 "bacteria" only). The results are ordered (descending) by (i) Average Accuracy (across both classes), then (ii) Viral Accuracy (i.e., fraction of viral cases correctly diagnosed). Also shown is the Missed Viral rate, i.e., the fraction of the total validation population that should have been diagnosed viral but was wrongly diagnosed as bacterial.<br />
<br />
<table>
<tbody>
<tr><td><b>Base network</b></td><td><b>Validation: Average Accuracy</b><br />
<b></b><br /></td><td><b>Validation: Viral Accuracy</b><br />
<b><br /></b></td><td><b>Validation: Bacterial Accuracy</b><br />
<b></b><br /></td><td><b>Validation: Missed Viral</b><br />
<b><br /></b></td><td><b>Holdout: Average Accuracy</b><br />
<b></b><br /></td></tr>
<tr><td>darknet53</td><td>80%</td><td>76%</td><td>84%</td><td>12%</td><td>84%</td></tr>
<tr><td>vgg16</td><td>80%</td><td>73%</td><td>87%</td><td>14%</td><td>83%</td></tr>
<tr><td>squeezenet</td><td>79%</td><td>75%</td><td>83%</td><td>12%</td><td>80%</td></tr>
<tr><td>vgg19</td><td>78%</td><td>79%</td><td>78%</td><td>10%</td><td>78%</td></tr>
<tr><td>mobilenetv2</td><td>78%</td><td>81%</td><td>75%</td><td>9%</td><td>71%</td></tr>
<tr><td>googlenetplaces</td><td>78%</td><td>71%</td><td>86%</td><td>15%</td><td>85%</td></tr>
<tr><td>densenet201</td><td>78%</td><td>70%</td><td>87%</td><td>15%</td><td>85%</td></tr>
<tr><td>inceptionresnetv2 </td><td>78%</td><td>82%</td><td>74%</td><td>9%</td><td>70%</td></tr>
<tr><td>alexnet</td><td>78%</td><td>81%</td><td>75%</td><td>10%</td><td>71%</td></tr>
<tr><td>googlenet</td><td>77%</td><td>71%</td><td>83%</td><td>15%</td><td>83%</td></tr>
<tr><td>nasnetlarge</td><td>77%</td><td>78%</td><td>76%</td><td>11%</td><td>76%</td></tr>
<tr><td>darknet19</td><td>77%</td><td>62%</td><td>92%</td><td>19%</td><td>89%</td></tr>
<tr><td>inceptionv3</td><td>76%</td><td>91%</td><td>60%</td><td>4%</td><td>58%</td></tr>
<tr><td>resnet50</td><td>75%</td><td>68%</td><td>83%</td><td>16%</td><td>81%</td></tr>
<tr><td>nasnetmobile</td><td>74%</td><td>66%</td><td>81%</td><td>17%</td><td>76%</td></tr>
<tr><td>shufflenet</td><td>69%</td><td>50%</td><td>89%</td><td>25%</td><td>88%</td></tr>
<tr><td>xception</td><td>69%</td><td>43%</td><td>94%</td><td>28%</td><td>90%</td></tr>
<tr><td>resnet101</td><td>65%</td><td>38%</td><td>92%</td><td>31%</td><td>93%</td></tr>
<tr><td>resnet18</td><td>58%</td><td>80%</td><td>35%</td><td>10%</td><td>39%</td></tr>
</tbody></table>
<br />
<br />
<h3>
EXAMPLE 3: Classification of COVID-19 or Other-Viral </h3>
<br />
The 19 networks were re-trained (via Transfer Learning) on the relevant training dataset for the given example (130 images, equally balanced across both classes). The following table shows the performance of each trained network when applied to the validation dataset (balanced, 11 each "covid" / "other-viral") and the holdout dataset (unbalanced, 1938 "other-viral" only). The results are ordered (descending) by (i) Average Accuracy (across both classes), then (ii) COVID-19 Accuracy (i.e., fraction of COVID-19 cases correctly diagnosed). Also shown is the Missed COVID-19 rate, i.e., the fraction of the total validation population that should have been diagnosed COVID-19 but was wrongly diagnosed as Other-Viral.<br />
<br />
<table>
<tbody>
<tr><td><b>Base network</b></td><td><b>Validation: Average Accuracy</b><br />
<b></b><br /></td><td><b>Validation: COVID-19 Accuracy</b><br />
<b><br /></b></td><td><b>Validation: Other-Viral Accuracy</b><br />
<b></b><br /></td><td><b>Validation: Missed COVID-19</b><br />
<b><br /></b></td><td><b>Holdout: Average Accuracy</b><br />
<b></b><br /></td></tr>
<tr><td>alexnet</td><td>100%</td><td>100%</td><td>100%</td><td>0%</td><td>95%</td></tr>
<tr><td>vgg16</td><td>100%</td><td>100%</td><td>100%</td><td>0%</td><td>96%</td></tr>
<tr><td>vgg19</td><td>100%</td><td>100%</td><td>100%</td><td>0%</td><td>97%</td></tr>
<tr><td>darknet19</td><td>100%</td><td>100%</td><td>100%</td><td>0%</td><td>93%</td></tr>
<tr><td>darknet53</td><td>100%</td><td>100%</td><td>100%</td><td>0%</td><td>96%</td></tr>
<tr><td>densenet201</td><td>100%</td><td>100%</td><td>100%</td><td>0%</td><td>96%</td></tr>
<tr><td>googlenet</td><td>100%</td><td>100%</td><td>100%</td><td>0%</td><td>95%</td></tr>
<tr><td>googlenetplaces </td><td>100%</td><td>100%</td><td>100%</td><td>0%</td><td>95%</td></tr>
<tr><td>inceptionresnetv2 </td><td>100%</td><td>100%</td><td>100%</td><td>0%</td><td>96%</td></tr>
<tr><td>inceptionv3</td><td>100%</td><td>100%</td><td>100%</td><td>0%</td><td>96%</td></tr>
<tr><td>mobilenetv2</td><td>100%</td><td>100%</td><td>100%</td><td>0%</td><td>95%</td></tr>
<tr><td>resnet18</td><td>100%</td><td>100%</td><td>100%</td><td>0%</td><td>96%</td></tr>
<tr><td>resnet50</td><td>100%</td><td>100%</td><td>100%</td><td>0%</td><td>96%</td></tr>
<tr><td>resnet101</td><td>100%</td><td>100%</td><td>100%</td><td>0%</td><td>96%</td></tr>
<tr><td>shufflenet</td><td>100%</td><td>100%</td><td>100%</td><td>0%</td><td>95%</td></tr>
<tr><td>squeezenet</td><td>100%</td><td>100%</td><td>100%</td><td>0%</td><td>94%</td></tr>
<tr><td>xception</td><td>100%</td><td>100%</td><td>94%</td><td>0%</td><td>96%</td></tr>
<tr><td>nasnetmobile</td><td>95%</td><td>100%</td><td>91%</td><td>0%</td><td>94%</td></tr>
<tr><td>nasnetlarge</td><td>95%</td><td>91%</td><td>100%</td><td>5%</td><td>96%</td></tr>
</tbody></table>
<br />
<br />
<h3>
EXAMPLE 4: COVID-19 Pneumonia versus Healthy, Bacterial, or non-COVID Viral Pneumonia</h3>
<br />
The 19 networks were re-trained (via Transfer Learning) on the relevant training dataset for the given example (260 images, equally balanced across all four classes). The following table shows the performance of each trained network when applied to the validation dataset (balanced, 11 each of "covid" / "other-viral" / "bacterial" / "healthy") and the holdout dataset (unbalanced: zero "covid", 1934 "other-viral", 2463 "bacterial", 676 "healthy"). For succinctness, not all four classes are shown in the table, just the key ones of interest which the network should ideally distinguish: COVID-19 and Healthy. The results are ordered (descending) by (i) Average Accuracy (across all four classes), then (ii) COVID-19 Accuracy (i.e., fraction of COVID-19 cases correctly diagnosed). Also shown is the Missed COVID-19 rate, i.e., the fraction of the total validation population that should have been diagnosed COVID-19 but was wrongly diagnosed as belonging to one of the other three classes.<br />
<br />
<br />
<div>
<table>
<tbody>
<tr><td><b>Base network</b></td><td><b>Validation: Average Accuracy</b><br />
<b></b><br /></td><td><b>Validation: COVID-19 Accuracy</b><br />
<b><br /></b></td><td><b>Validation: Healthy Accuracy</b><br />
<b></b><br /></td><td><b>Validation: Missed COVID-19</b><br />
<b><br /></b></td><td><b>Holdout: Average Accuracy</b><br />
<b></b><br /></td></tr>
<tr><td>alexnet</td><td>82%</td><td>100%</td><td>100%</td><td>0%</td><td>58%</td></tr>
<tr><td>inceptionresnetv2 </td><td>80%</td><td>100%</td><td>100%</td><td>0%</td><td>61%</td></tr>
<tr><td>googlenet</td><td>80%</td><td>91%</td><td>100%</td><td>2%</td><td>61%</td></tr>
<tr><td>xception</td><td>77%</td><td>100%</td><td>100%</td><td>0%</td><td>58%</td></tr>
<tr><td>inceptionv3</td><td>77%</td><td>91%</td><td>100%</td><td>2%</td><td>58%</td></tr>
<tr><td>mobilenetv2</td><td>77%</td><td>91%</td><td>100%</td><td>2%</td><td>61%</td></tr>
<tr><td>densenet201</td><td>75%</td><td>100%</td><td>100%</td><td>0%</td><td>61%</td></tr>
<tr><td>darknet19 </td><td>75%</td><td>100%</td><td>100%</td><td>0%</td><td>59%</td></tr>
<tr><td>nasnetlarge </td><td>75%</td><td>91%</td><td>100%</td><td>2%</td><td>61%</td></tr>
<tr><td>vgg19</td><td>73%</td><td>100%</td><td>100%</td><td>0%</td><td>52%</td></tr>
<tr><td>nasnetmobile</td><td>73%</td><td>91%</td><td>100%</td><td>2%</td><td>58%</td></tr>
<tr><td>darknet53</td><td>73%</td><td>91%</td><td>100%</td><td>2%</td><td>63%</td></tr>
<tr><td>vgg16</td><td>73%</td><td>91%</td><td>100%</td><td>2%</td><td>61%</td></tr>
<tr><td>googlenetplaces</td><td>73%</td><td>82%</td><td>100%</td><td>5%</td><td>57%</td></tr>
<tr><td>resnet18</td><td>73%</td><td>73%</td><td>100%</td><td>7%</td><td>60%</td></tr>
<tr><td>resnet50</td><td>70%</td><td>91%</td><td>100%</td><td>2%</td><td>61%</td></tr>
<tr><td>squeezenet</td><td>70%</td><td>73%</td><td>100%</td><td>7%</td><td>55%</td></tr>
<tr><td>shufflenet</td><td>68%</td><td>91%</td><td>91%</td><td>2%</td><td>59%</td></tr>
<tr><td>resnet101</td><td>52%</td><td>64%</td><td>100%</td><td>9%</td><td>51%</td></tr>
</tbody></table>
</div>
<br />
<h2>
DISCUSSION & CONCLUSIONS</h2>
<div>
The main points of discussion surrounding these experiments are summarised as follows:</div>
<div>
<br /></div>
<ul>
<li>It is interesting to observe that the best-performing networks (i.e., those near the top of the lists of results presented above) generally differ from Experiment to Experiment. The differences are presumably due to the nature and number of images being compared in a given Experiment, and to the detailed structure of the networks and their specific response to the respective image sets during training. </li>
<li>For each Experiment, the most accurate network turned out not to be <span style="font-family: "courier new" , "courier" , monospace;">googlenet</span> as used exclusively in <a href="http://flylogical.blogspot.com/2020/04/deep-learning-analysis-of-covid-19-lung.html" target="_blank">Part 1</a>. This emphasises the importance of trying different networks for a given problem -- and it is not at all clear <i>a priori</i> which network is going to perform best. The results also suggest that <span style="font-family: "courier new" , "courier" , monospace;">resnet50</span>, as used <a href="https://blogs.mathworks.com/deep-learning/2020/03/18/deep-learning-for-medical-imaging-covid-19-detection/" target="_blank">here</a>, is not actually the optimal choice when analysing these lung images via Transfer Learning.</li>
<li>Since each Example reveals a different preferred network, a useful strategy for diagnosing COVID-19 could be as follows: (i) use a preferred network from Example 1 (e.g., <span style="font-family: "courier new" , "courier" , monospace;">vgg16</span> at the top of the list, or some other network from near the top) to determine whether a given X-ray-image-under-test is healthy or unhealthy; (ii) if unhealthy, use a preferred network from Example 2 to determine if the pneumonia is viral or bacterial; (iii) if viral, use a preferred network from Example 3 to determine if COVID-19 or another type of viral pneumonia; (iv) test the same image using a preferred network from Example 4 (which directly assesses whether or not it is COVID-19). Compare the conclusion of step (iv) with that of step (iii) to see if they reinforce one another by agreeing on a COVID-19 diagnosis (or not, as the case may be). This multi-network cascaded approach should be more robust than using a single network (e.g., as per Example 4 alone) to perform the diagnosis; a minimal sketch of the cascade is given after this list.</li>
<li>Care was taken to ensure that the training and validation sets used throughout were balanced, i.e., with an equal distribution across all classes in the given Experiment. This left the holdout sets -- those containing the unused images from the total available pool -- as unbalanced test sets, providing a further useful check per Experiment. Despite the imbalances, the performance of the networks when applied to the holdout images was generally good, suggesting that the trained networks behave consistently.</li>
</ul>
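<div>
Here is a minimal sketch of the cascade (the variable names <span style="font-family: "courier new" , "courier" , monospace;">net1</span>--<span style="font-family: "courier new" , "courier" , monospace;">net4</span> and the class label strings are assumptions for illustration, and the per-network image resizing is omitted):<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">% net1..net4: trained networks from Examples 1--4; img: the image-under-test</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">label1 = classify(net1,img);              % Example 1: "yes"/"no" pneumonia</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">if label1 == "no"</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">    diagnosis = "healthy";</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">else</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">    label2 = classify(net2,img);          % Example 2: "bacteria"/"virus"</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">    if label2 == "bacteria"</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">        diagnosis = "bacterial pneumonia";</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">    else</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">        label3 = classify(net3,img);      % Example 3: "covid"/"other-viral"</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">        label4 = classify(net4,img);      % Example 4: direct 4-class check</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">        if label3 == "covid" && label4 == "covid"</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">            diagnosis = "COVID-19 (steps iii and iv agree)";</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">        else</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">            diagnosis = "viral pneumonia; COVID-19 status inconclusive";</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">        end</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">    end</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">end</span><br />
</div>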
<h2>
POTENTIAL NEXT STEPS</h2>
<ul>
<li>In the interests of time, the training runs were only conducted once per model per Experiment i.e., using one sample of training and validation images per Experiment. For completeness, the training should be repeated with different randomly selected training & validation images (from the available pool) to ensure that the results (in terms of assessing favoured models per Experiment, etc) are statistically significant.</li>
<li>Likewise, in the interests of time, the training options (hyper-parameter settings) were fixed (based on quick trial-and-error tests, then frozen for all ensuing experiments). Ideally, these should be optimised, for example using Bayesian Optimisation as described <a href="https://www.mathworks.com/help/deeplearning/ug/deep-learning-using-bayesian-optimization.html" target="_blank">here</a>. </li>
<li>It would be interesting to gain an understanding of the differences in the performance of the various networks across the various Experiments. Perhaps a comparative Activation Mapping Analysis (akin to that presented in <a href="https://flylogical.blogspot.com/2020/04/deep-learning-analysis-of-covid-19-lung_20.html" target="_blank">Part 2</a>) could shed some light on this.</li>
<li>It would be interesting to compare the performance of the networks presented in this article with the <a href="https://arxiv.org/pdf/2003.09871.pdf" target="_blank">COVID-Net custom network</a>. Unfortunately, after spending many hours in TensorFlow, I was unable to export the COVID-Net -- either as a Keras model or in ONNX format -- in a manner suitable for importing into MATLAB (via <a href="https://www.mathworks.com/help/deeplearning/ref/importkerasnetwork.html" target="_blank"><span style="font-family: "courier new" , "courier" , monospace;">importKerasNetwork</span></a> or <a href="https://www.mathworks.com/help/deeplearning/ref/importonnxnetwork.html" target="_blank"><span style="font-family: "courier new" , "courier" , monospace;">importONNXNetwork</span></a>); the kind of import call that failed is sketched after this list. Perhaps, then, the COVID-Net would need to be built from scratch within MATLAB in order to perform the desired comparison. I'm not sure if that is possible (given the underlying structure of COVID-Net). Note: I was able to import and work with the COVID-Net model from <a href="https://drive.google.com/drive/folders/1eNidqMyz3isLjGYN1evzQu--A-JVkzbk" target="_blank">here</a> in TensorFlow, but could not successfully export it for use within MATLAB.</li>
<li>Re-train and compare all the models with larger image datasets whenever they become available. If you have access to such images, please consider posting them to the open source COVID-Net archive <a href="https://figure1.typeform.com/to/lLrHwv" target="_blank">here</a>.</li>
</ul>
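<div>
For anyone wishing to retry the import, this is the kind of call that failed for me (the file names are placeholders; the relevant importer support packages must be installed):<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">net = importONNXNetwork('covid-net.onnx','OutputLayerType','classification');</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">% or, for a Keras HDF5 export:</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">% net = importKerasNetwork('covid-net.h5','OutputLayerType','classification');</span><br />
</div>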
Deep Learning Analysis of COVID-19 lung X-Rays using MATLAB: Part 2

<b><i>UPDATE: See <a href="https://flylogical.blogspot.com/2020/05/deep-learning-analysis-of-covid-19-lung.html" target="_blank">Part 4</a>, where I've performed a grad-CAM analysis on all the trained networks from <a href="https://flylogical.blogspot.com/2020/04/deep-learning-analysis-of-covid-19-lung_28.html" target="_blank">Part 3</a>, in the theme of <a href="https://flylogical.blogspot.com/2020/04/deep-learning-analysis-of-covid-19-lung_20.html" target="_blank">Part 2</a>.</i></b><br />
<br />
<h2>
*** DISCLAIMER ***</h2>
<br />
<i>I have no medical training. Nothing presented here should be considered in any way as informative from a medical point-of-view. This is simply an exercise in image analysis via Deep Learning using MATLAB, with lung X-rays as a topical example in these times of COVID-19. </i><br />
<i></i><br />
<h2>
INTRODUCTION</h2>
<b><br /></b>
This follows on from my previous post (<a href="https://flylogical.blogspot.com/2020/04/deep-learning-analysis-of-covid-19-lung.html">Part 1</a>), where I presented results of a preliminary investigation into COVID-19 lung X-ray classification using Deep Learning in MATLAB. The results were promising, but I did emphasise my main caveat that the Deep Neural Networks may have been skewed by extraneous information embedded in the X-ray images, leading to exaggerated performance of the classifiers. In this post, I utilise the approach suggested <a href="https://blogs.mathworks.com/deep-learning/2020/03/18/deep-learning-for-medical-imaging-covid-19-detection/" target="_blank">here</a> (another MATLAB-based COVID-19 image investigation), based on the Class Activation Mapping technique described <a href="https://uk.mathworks.com/help/deeplearning/ug/investigate-network-predictions-using-class-activation-mapping.html" target="_blank">here</a>, to determine the hotspots in the images which drive the classification results. This verification analysis mirrors that presented in the original <a href="https://arxiv.org/pdf/2003.09871.pdf" target="_blank">COVID-Net article</a> (where they utilise the <i>GSInquire</i> tool for a similar purpose). As before, my approach is to use MATLAB for all calculations, and to provide code snippets which may be useful to others.<br />
<br />
<i>GOTCHA: In <a href="https://flylogical.blogspot.com/2020/04/deep-learning-analysis-of-covid-19-lung.html">Part 1</a> I was using MATLAB version R2019b. For this current investigation I upgraded to R2020a for the following reasons:</i><br />
<ul>
<li><i>The <span style="font-family: "courier new" , "courier" , monospace;">mean</span> function in R2020a has an additional option for <span style="font-family: "courier new" , "courier" , monospace;">vecdim</span> as the second input argument, as required by the code I utilised from <a href="https://uk.mathworks.com/help/deeplearning/ug/investigate-network-predictions-using-class-activation-mapping.html">here</a></i></li>
<li><a href="https://www.blogger.com/"></a><i>The structure of the pre-trained networks e.g., <span style="font-family: "courier new" , "courier" , monospace;">googlenet</span> which I use, has changed such that the class names are held in the <span style="font-family: "courier new" , "courier" , monospace;">Classes</span> property of the output layer in R2020a rather than in the <span style="font-family: "courier new" , "courier" , monospace;">ClassNames</span> property as in R2019b. I could have simply modified my code to workaround the difference, but given the first reason above (especially), I decided to upgrade the versioning (and hopefully this will avoid future problems).</i><i><br /></i></li>
</ul>
<h2>
CLASS ACTIVATION MAPPING</h2>
<h3>
Dataset</h3>
<div>
Using the validation results from the Deep Neural Net analysis in Example 4 of <a href="https://flylogical.blogspot.com/2020/04/deep-learning-analysis-of-covid-19-lung.html">Part 1</a> provides a set of 44 sample X-rays and predicted classes, 11 from each of the four classes in question: "healthy", "bacteria", "viral-other", and "covid". By choosing Example 4, we have selected the most challenging case to investigate (i.e., the 4-class classifier trained on relatively few images compared with Examples 1--3, each of which was a 2-class classifier trained on more images).<br />
<br />
The images are contained in <span style="font-family: "courier new" , "courier" , monospace;">validationImages</span> (the validation <span style="font-family: "courier new" , "courier" , monospace;">imageDatastore</span>) from Example 4 and the trained network (from Transfer Learning) is contained in the <span style="font-family: "courier new" , "courier" , monospace;">netTransfer</span> variable. The task at hand is to analyse the Class Activation Mappings to determine which regions of the X-rays play the dominant role in assessing the predicted class. </div>
<div>
<br /></div>
<h3>
Code snippet </h3>
<div>
The code which performs the Class Activation Mapping using the <span style="font-family: "courier new" , "courier" , monospace;">netTransfer</span> network (in a loop around all 44 images in <span style="font-family: "courier new" , "courier" , monospace;">validationImages</span>) is adapted directly from <a href="https://uk.mathworks.com/help/deeplearning/ug/investigate-network-predictions-using-class-activation-mapping.html"> this example</a>, and presented in full as follows (the utility sub-functions -- identical to those in <a href="https://uk.mathworks.com/help/deeplearning/ug/investigate-network-predictions-using-class-activation-mapping.html">the example</a> -- are not included here):</div>
<div>
<br /></div>
<span style="font-family: "courier new" , "courier" , monospace;">net=netTransfer;</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">netName = "googlenet";</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">classes = net.Layers(end).Classes;</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">layerName = activationLayerName(netName);</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">for i=1:length(validationImages.Files)</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> h = figure('Units','normalized','Position',[0.05 0.05 0.9</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> 0.8],'Visible','on');</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> </span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> [img,fileinfo] = readimage(validationImages,i);</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> im=img(:,:,[1 1 1]); %Convert from grayscale to rgb</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> imResized = imresize(img, [224 224]);</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> imResized=imResized(:,:,[1 1 1]); %Convert to rgb</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> </span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> imageActivations = activations(net,imResized,layerName);</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> </span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> scores = squeeze(mean(imageActivations,[1 2]));</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> fcWeights = net.Layers(end-2).Weights;</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> fcBias = net.Layers(end-2).Bias;</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> scores = fcWeights*scores + fcBias;</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> [~,classIds] = maxk(scores,4); %since 4 classes to compare</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> weightVector = shiftdim(fcWeights(classIds(1),:),-1);</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> classActivationMap = sum(imageActivations.*weightVector,3);</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> scores = exp(scores)/sum(exp(scores));</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> maxScores = scores(classIds);
labels = classes(classIds);</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> [maxScore, maxID] = max(maxScores);</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> labels_max = labels(maxID);</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> </span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> CAMshow(im,classActivationMap)</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> title("Predicted: "+string(labels_max) + ", " +</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> string(maxScore)+" (Actual: "+</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">string(validationImages.Labels(i))+")",'FontSize', 18);</span><span style="font-family: "courier new" , "courier" , monospace;"></span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> </span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> drawnow</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">end</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"></span><br />
<h3>
Results & Conclusions </h3>
The resulting Class Activation Maps for all 44 validation images are shown below. The title of each image gives the predicted class (plus the corresponding score) and the actual class. Since the network is not 100% accurate, some of the predictions are incorrect. However, it is clear from these activation heat-maps that the network generally uses the detail within the lungs (albeit, in a few cases, drawing on regions further away) rather than extraneous factors and artefacts (embedded text, pacemakers, etc.) to make the predictions. This is an encouraging result, successfully countering the caveat from <a href="https://flylogical.blogspot.com/2020/04/deep-learning-analysis-of-covid-19-lung.html">Part 1</a> regarding the possibility of the classifier performance being exaggerated by such artefacts, and is in line with the conclusions reported <a href="https://arxiv.org/pdf/2003.09871.pdf">here</a> and <a href="https://blogs.mathworks.com/deep-learning/2020/03/18/deep-learning-for-medical-imaging-covid-19-detection/">here</a> from similar studies.<br />
<br />
<h3>
Class Activation Maps</h3>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgSbCqz1fKlNuOLC-WrQ0f6IgV0iFgCU4VAhHJXeW4SCO2KSG2erz9SwESubBzu2whTcHFLyN2tA7HNc2w414wfA8kcmlk8WuTtkmn2ndUGwtcVdgnEVlfPNKHCQgwyKKOq1EQn4uDkBj4/s1600/script_analyse_class_activations_01.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="717" data-original-width="1078" height="265" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgSbCqz1fKlNuOLC-WrQ0f6IgV0iFgCU4VAhHJXeW4SCO2KSG2erz9SwESubBzu2whTcHFLyN2tA7HNc2w414wfA8kcmlk8WuTtkmn2ndUGwtcVdgnEVlfPNKHCQgwyKKOq1EQn4uDkBj4/s400/script_analyse_class_activations_01.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhtpY5lEPNO3SPRMF_G3vjhHU4xbY7WDsLHaNZWo-x54JaglLaYVvRDlt65AAN9d8KGakzwAng9_4VK3RMYvfbrFxiZ1aFD7CxkIR9yuLmOTSbnHNdojmM05qs3igDY-1Uqe3IoBZvXbBE/s1600/script_analyse_class_activations_02.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="789" data-original-width="1152" height="273" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhtpY5lEPNO3SPRMF_G3vjhHU4xbY7WDsLHaNZWo-x54JaglLaYVvRDlt65AAN9d8KGakzwAng9_4VK3RMYvfbrFxiZ1aFD7CxkIR9yuLmOTSbnHNdojmM05qs3igDY-1Uqe3IoBZvXbBE/s400/script_analyse_class_activations_02.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhcgscNzoTgGuosZvfadwVrLXzYr2ZCrKg0LsQ0AlbEx3i5DePdjD9b1FiaXnol0cQNs9h5qNttWRFv3-gl4RRX5Zr8QswsPl4laxpgDG7O3f6dulys9RoyiFnocbMQ40DGiSAuogSHs1k/s1600/script_analyse_class_activations_03.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="697" data-original-width="914" height="305" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhcgscNzoTgGuosZvfadwVrLXzYr2ZCrKg0LsQ0AlbEx3i5DePdjD9b1FiaXnol0cQNs9h5qNttWRFv3-gl4RRX5Zr8QswsPl4laxpgDG7O3f6dulys9RoyiFnocbMQ40DGiSAuogSHs1k/s400/script_analyse_class_activations_03.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEihkQlZ9__HyKrtOjb_74YeVloQsu4w0yr1ktOs0eehF7WZDxUlTRUUx0C8vpZZ3ZLRw-M5Gvnpp4J0exl8kPiqjBPcTmwFWsPR98hMfEfOwhnL96jht99tnUhrzkmxPGiPQqP0pNNdeN8/s1600/script_analyse_class_activations_04.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="817" data-original-width="1098" height="297" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEihkQlZ9__HyKrtOjb_74YeVloQsu4w0yr1ktOs0eehF7WZDxUlTRUUx0C8vpZZ3ZLRw-M5Gvnpp4J0exl8kPiqjBPcTmwFWsPR98hMfEfOwhnL96jht99tnUhrzkmxPGiPQqP0pNNdeN8/s400/script_analyse_class_activations_04.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgqCbR6Q6miZ8GueuplLJPt1e8XS71kNW5oo6374E0V6Kx9MTGBLUdkreiHEEBbvUToTBhVBjL9AOtTcE0JS-wudmhCYb_ghGxJ2TmNcUJpQ_et0_PeyTnC-bL6SHwzFp10TA2p7wJVAD8/s1600/script_analyse_class_activations_05.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="621" data-original-width="1030" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgqCbR6Q6miZ8GueuplLJPt1e8XS71kNW5oo6374E0V6Kx9MTGBLUdkreiHEEBbvUToTBhVBjL9AOtTcE0JS-wudmhCYb_ghGxJ2TmNcUJpQ_et0_PeyTnC-bL6SHwzFp10TA2p7wJVAD8/s400/script_analyse_class_activations_05.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj4gv7jw_Z6mthMpr3c3EiL8zMTBP0c9vnRZRPdlpWtkDdyGIqO1ERfemtHwyw1Xfdcd36J1-vL4jba9qZNTTn9wT1w2Q-Z2FJo2ON9y5RGxlucvb-gBNincMEIaiTbplXjZSmtkw94osk/s1600/script_analyse_class_activations_06.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="765" data-original-width="1001" height="305" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj4gv7jw_Z6mthMpr3c3EiL8zMTBP0c9vnRZRPdlpWtkDdyGIqO1ERfemtHwyw1Xfdcd36J1-vL4jba9qZNTTn9wT1w2Q-Z2FJo2ON9y5RGxlucvb-gBNincMEIaiTbplXjZSmtkw94osk/s400/script_analyse_class_activations_06.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgl0uFE83-Hg_5abhLBmIrFaGkkhlY0ecvByKibdbk2I4NMi4cmzaLHRNGHMvWC94lNRat5GCngDlxqGlmWqw2YVRUVb9CC1cBO_2FgwXeY5IXBZnBqbXyOMrh9iro1TvnX_ajTfvw20hc/s1600/script_analyse_class_activations_07.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="749" data-original-width="1358" height="220" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgl0uFE83-Hg_5abhLBmIrFaGkkhlY0ecvByKibdbk2I4NMi4cmzaLHRNGHMvWC94lNRat5GCngDlxqGlmWqw2YVRUVb9CC1cBO_2FgwXeY5IXBZnBqbXyOMrh9iro1TvnX_ajTfvw20hc/s400/script_analyse_class_activations_07.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgkia-cvoXdHXK9OiApMWsR6MnVDCQg516eNUvMBT0hApPhbajN_8dPLbzptfNc0LQy1SIV0LaSoPF-GbVRPXUP4P4MAElUgybEaK8d_sEEgkwXD1gx8CK9M7okjiCoyfr98XMcMo3khZs/s1600/script_analyse_class_activations_08.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="659" data-original-width="852" height="308" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgkia-cvoXdHXK9OiApMWsR6MnVDCQg516eNUvMBT0hApPhbajN_8dPLbzptfNc0LQy1SIV0LaSoPF-GbVRPXUP4P4MAElUgybEaK8d_sEEgkwXD1gx8CK9M7okjiCoyfr98XMcMo3khZs/s400/script_analyse_class_activations_08.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg7Nvtn3cLm4ztdnkKXotUSCJ3MlfL5pcMDdh-7osWXLGZJ91B_vdBSQ4XINyvOJfdqXqmpp9cbyweTgRUqqfTvyoUiQ6_Z7MuA0Qprqbh1ulaCUCLRUnIRk8HFeZv-x9aIN_CDBB5LQ6k/s1600/script_analyse_class_activations_09.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="741" data-original-width="1094" height="270" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg7Nvtn3cLm4ztdnkKXotUSCJ3MlfL5pcMDdh-7osWXLGZJ91B_vdBSQ4XINyvOJfdqXqmpp9cbyweTgRUqqfTvyoUiQ6_Z7MuA0Qprqbh1ulaCUCLRUnIRk8HFeZv-x9aIN_CDBB5LQ6k/s400/script_analyse_class_activations_09.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi-rVQ6Mna5MuzvN5pe76z9C9LPEsymnj7wGDG4jiMoPilWAmK9IXoxsbI91uLoYJFg1Jm8Jo6O3s_I5y5MeWBsl90pBQRkzYAm13FJnV8uCTvcWFkOFpvZkugQD_yM8fFmMiM_KcM2Uqw/s1600/script_analyse_class_activations_10.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="749" data-original-width="1102" height="271" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi-rVQ6Mna5MuzvN5pe76z9C9LPEsymnj7wGDG4jiMoPilWAmK9IXoxsbI91uLoYJFg1Jm8Jo6O3s_I5y5MeWBsl90pBQRkzYAm13FJnV8uCTvcWFkOFpvZkugQD_yM8fFmMiM_KcM2Uqw/s400/script_analyse_class_activations_10.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEge6WVel2yX1-CCmIeArMkqLvnswwdb_8kfXvGJk1tFKVKMsYS7G3438cRMvktTVLchEIBaAlATNS_QkrIBKIFCMrk0yw5llm9me_pHGktkbcoi3NMYCX5jZRdE9GmaGnCUla-vcyTqfm8/s1600/script_analyse_class_activations_11.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="845" data-original-width="1326" height="253" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEge6WVel2yX1-CCmIeArMkqLvnswwdb_8kfXvGJk1tFKVKMsYS7G3438cRMvktTVLchEIBaAlATNS_QkrIBKIFCMrk0yw5llm9me_pHGktkbcoi3NMYCX5jZRdE9GmaGnCUla-vcyTqfm8/s400/script_analyse_class_activations_11.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhcyj5Ck-x1WeBG_ywEdVa3CcrOW0e_IcZznZI1NcZ58yG8ivqb_HDvlnBqLAp8WuA4j7xLHxILi1mbmDbxVS8Zf0F3Yy-J375qPzi2BHz0iiYOzMdWr2kN5kpl2KWaxSN7mf5VraVAbQU/s1600/script_analyse_class_activations_12.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="668" data-original-width="742" height="360" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhcyj5Ck-x1WeBG_ywEdVa3CcrOW0e_IcZznZI1NcZ58yG8ivqb_HDvlnBqLAp8WuA4j7xLHxILi1mbmDbxVS8Zf0F3Yy-J375qPzi2BHz0iiYOzMdWr2kN5kpl2KWaxSN7mf5VraVAbQU/s400/script_analyse_class_activations_12.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhYY-aUK5dZ0ZC0QTvkfEXf7ju0oyXEUz1nbSNgtZxXlIYTgK0lPUd6R9g3kjnAmNN9KwnyZSUzE3om-ApmMjXQk34tzFMwsz5oJubclHg9_CjInlpUd9Rly7MJVOxtK1L-zhhp5hE0oKc/s1600/script_analyse_class_activations_13.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="678" data-original-width="762" height="355" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhYY-aUK5dZ0ZC0QTvkfEXf7ju0oyXEUz1nbSNgtZxXlIYTgK0lPUd6R9g3kjnAmNN9KwnyZSUzE3om-ApmMjXQk34tzFMwsz5oJubclHg9_CjInlpUd9Rly7MJVOxtK1L-zhhp5hE0oKc/s400/script_analyse_class_activations_13.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhlPK_901OOfBH_6ltC4Uzjfl-ErHC9Gaaq8xHdmxF6kc0x-i2lXP_S2bduudZz0VyhDnxUHnuwP5c5-VLaBHdpZhxXjbpNOpM2mqYbpeHyQLM9-8R80wsvOJ5Ec41oziypa0mSClsbWeQ/s1600/script_analyse_class_activations_14.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="813" data-original-width="768" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhlPK_901OOfBH_6ltC4Uzjfl-ErHC9Gaaq8xHdmxF6kc0x-i2lXP_S2bduudZz0VyhDnxUHnuwP5c5-VLaBHdpZhxXjbpNOpM2mqYbpeHyQLM9-8R80wsvOJ5Ec41oziypa0mSClsbWeQ/s400/script_analyse_class_activations_14.png" width="377" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEijeyYsaIvf8D-lmL25hrTNxH6U5ynsqYqeBnzOY-MaBkiciqqAXzUpYNF9_t8GNaF86S7vdmg9O2r8sUMbxh9MbJ0yOC6D0HvgXLN1jRC1AQ11MW2ezHE12xHoQcnTaXE29_SH4Jd30is/s1600/script_analyse_class_activations_15.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="893" data-original-width="894" height="398" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEijeyYsaIvf8D-lmL25hrTNxH6U5ynsqYqeBnzOY-MaBkiciqqAXzUpYNF9_t8GNaF86S7vdmg9O2r8sUMbxh9MbJ0yOC6D0HvgXLN1jRC1AQ11MW2ezHE12xHoQcnTaXE29_SH4Jd30is/s400/script_analyse_class_activations_15.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgcldkoBcOm7zsQzBRM-zEfLFnRqoIiUdF6yMwGuiJ8OilB_mx5FglmyCR5-gO58ZeCiBqeLUiP52V_3n3JFBzn0LVJDaIFgELRHM9ros7OKtDzCcwNAQLFfdgccK859lNAFmIDsfEowNs/s1600/script_analyse_class_activations_16.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="791" data-original-width="777" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgcldkoBcOm7zsQzBRM-zEfLFnRqoIiUdF6yMwGuiJ8OilB_mx5FglmyCR5-gO58ZeCiBqeLUiP52V_3n3JFBzn0LVJDaIFgELRHM9ros7OKtDzCcwNAQLFfdgccK859lNAFmIDsfEowNs/s400/script_analyse_class_activations_16.png" width="392" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjtCUuY64qqcYAA0GYcTJAOo5GPPt6RPDKWR1N7BQ5kskwyf0tDrD8pYIKsKNTNRMyC8B6q2fVG_V3Q9botJhM5OhooTnYNRkGZGz1NSwc2CP_-1uA4hcvkpvRRNxoj_QB0X9e55SP2DBQ/s1600/script_analyse_class_activations_17.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="866" data-original-width="1005" height="343" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjtCUuY64qqcYAA0GYcTJAOo5GPPt6RPDKWR1N7BQ5kskwyf0tDrD8pYIKsKNTNRMyC8B6q2fVG_V3Q9botJhM5OhooTnYNRkGZGz1NSwc2CP_-1uA4hcvkpvRRNxoj_QB0X9e55SP2DBQ/s400/script_analyse_class_activations_17.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjuXjfx-H_mjJkLNli8okV7ExY6rbuJvrxp82n3FauRUZT6EgJOp0v45KHk4T5txcZzmstKuvIyhkD-CSpMgaXWxg6CXVLpmr4zFfDGFyh2gCOTiR0JDi_RbacqCda3jkNmf4mf98FU9i0/s1600/script_analyse_class_activations_18.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="810" data-original-width="891" height="362" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjuXjfx-H_mjJkLNli8okV7ExY6rbuJvrxp82n3FauRUZT6EgJOp0v45KHk4T5txcZzmstKuvIyhkD-CSpMgaXWxg6CXVLpmr4zFfDGFyh2gCOTiR0JDi_RbacqCda3jkNmf4mf98FU9i0/s400/script_analyse_class_activations_18.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiWqwr3_W6Z3OxYQ1FcOMuGLH2W1-HDWI9OO_TBtrzCEosZ6glYVH8LZu4VRZ5qZ7D6kU3-C9XmWNXncUg9wZNGqzlKzG3Dp81nM2hGro6gxdfyJLcMD9tg0sJXz2behk0EzexCIfoCpvQ/s1600/script_analyse_class_activations_19.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="816" data-original-width="1002" height="325" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiWqwr3_W6Z3OxYQ1FcOMuGLH2W1-HDWI9OO_TBtrzCEosZ6glYVH8LZu4VRZ5qZ7D6kU3-C9XmWNXncUg9wZNGqzlKzG3Dp81nM2hGro6gxdfyJLcMD9tg0sJXz2behk0EzexCIfoCpvQ/s400/script_analyse_class_activations_19.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjfbYAysRre7iv7r3zTnjNTCSCeEZLYAtXgJFv7MgzbV-HCHLIbeXKnM9Eziz-kHoU9Z6SYqjU3KrxDpQkiKIwRkx70dDYCP-BVSk0U-Ah82WbW6Hbt_e33xPj6rjcxyHnDnH2nivkHMRw/s1600/script_analyse_class_activations_20.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="605" data-original-width="686" height="352" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjfbYAysRre7iv7r3zTnjNTCSCeEZLYAtXgJFv7MgzbV-HCHLIbeXKnM9Eziz-kHoU9Z6SYqjU3KrxDpQkiKIwRkx70dDYCP-BVSk0U-Ah82WbW6Hbt_e33xPj6rjcxyHnDnH2nivkHMRw/s400/script_analyse_class_activations_20.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg3v1RBgZ0MN77lrd-d6A0ASDxFRUVWkGyjPtjM8kQ0kPbaHpxtqhConM3CJHetXcLu3l0UPhO1WerhOeu7B-lc_htOskm_S0nwoKAfbLVZ6x18qKGesQtiEQLHxAmVrHoxJsmAZMEIm70/s1600/script_analyse_class_activations_21.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="855" data-original-width="1068" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg3v1RBgZ0MN77lrd-d6A0ASDxFRUVWkGyjPtjM8kQ0kPbaHpxtqhConM3CJHetXcLu3l0UPhO1WerhOeu7B-lc_htOskm_S0nwoKAfbLVZ6x18qKGesQtiEQLHxAmVrHoxJsmAZMEIm70/s400/script_analyse_class_activations_21.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiJKsPeBCBDpZPzWCbjdyAzjpiPMXJVGeX4IA02EOGPlmE5QmrCf2X7KbabnTcm-ExWyk6xLVwQ5su6W1nhY9SqhK7pZxFmeNqx5mkAH7xDoR2tZBhykcGZcIphLbMa8woW2KGceVaF3a0/s1600/script_analyse_class_activations_22.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="843" data-original-width="1015" height="331" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiJKsPeBCBDpZPzWCbjdyAzjpiPMXJVGeX4IA02EOGPlmE5QmrCf2X7KbabnTcm-ExWyk6xLVwQ5su6W1nhY9SqhK7pZxFmeNqx5mkAH7xDoR2tZBhykcGZcIphLbMa8woW2KGceVaF3a0/s400/script_analyse_class_activations_22.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh8barkpScMrhVoJYXOxOQsKtFESe9Zu8bbbeHx66eYkFkTMfBMPfUQfGgCOK-SyvkvACODzQNwPpCPjQje6tgKwvfh4KaYm25LNpQq7kWQA6FDHC4fj5HJhlve5utAFXW0ftaah4tLBFI/s1600/script_analyse_class_activations_23.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="833" data-original-width="1115" height="298" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh8barkpScMrhVoJYXOxOQsKtFESe9Zu8bbbeHx66eYkFkTMfBMPfUQfGgCOK-SyvkvACODzQNwPpCPjQje6tgKwvfh4KaYm25LNpQq7kWQA6FDHC4fj5HJhlve5utAFXW0ftaah4tLBFI/s400/script_analyse_class_activations_23.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgXIxCCdEJpJt_umvOqUoAHQKJb7j9MIY4BXNcibV0vpcLvdz8D11LeaiiUajJg7T0B9UAly5Xr8GZv7yDOf8uB2gVm2IQ7y5N3Av4Crsn_USSBlPA4CoDGWuj43FzvXqeMUOH2uLNnz0w/s1600/script_analyse_class_activations_24.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="778" data-original-width="1019" height="305" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgXIxCCdEJpJt_umvOqUoAHQKJb7j9MIY4BXNcibV0vpcLvdz8D11LeaiiUajJg7T0B9UAly5Xr8GZv7yDOf8uB2gVm2IQ7y5N3Av4Crsn_USSBlPA4CoDGWuj43FzvXqeMUOH2uLNnz0w/s400/script_analyse_class_activations_24.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiaitWPNpFFt0q7DPg4evV9EDNAL5w3uQCKvwVSzOp_sOCnZXebVfDOitBBB65LalHhjquvZmEMintT1bXEaVkkVwCSLfrz4kFapfrpu8jV5uAHaqnW5XD3KS92vMdpM1eH52EIKxtemV4/s1600/script_analyse_class_activations_25.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="628" data-original-width="777" height="322" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiaitWPNpFFt0q7DPg4evV9EDNAL5w3uQCKvwVSzOp_sOCnZXebVfDOitBBB65LalHhjquvZmEMintT1bXEaVkkVwCSLfrz4kFapfrpu8jV5uAHaqnW5XD3KS92vMdpM1eH52EIKxtemV4/s400/script_analyse_class_activations_25.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhSPihyphenhyphenq-7IomLk0JQDv8OcTXqGMrfmyPU8DJH5IKURmW0CuIsCb8Zm7iOGvi4nem2-Y3MdJxUvyfFtEoqyG8MhWvq1FJC26a2YJKyXFRyqpV6YizO52TzMw7ZCBBHckfw_Ye_VhMpDLcc/s1600/script_analyse_class_activations_26.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="757" data-original-width="938" height="322" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhSPihyphenhyphenq-7IomLk0JQDv8OcTXqGMrfmyPU8DJH5IKURmW0CuIsCb8Zm7iOGvi4nem2-Y3MdJxUvyfFtEoqyG8MhWvq1FJC26a2YJKyXFRyqpV6YizO52TzMw7ZCBBHckfw_Ye_VhMpDLcc/s400/script_analyse_class_activations_26.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg6XWoBewgA6U4yVJCZFVw3uAYG32OsYtxBn8yF_T2g1oEzFF_caDwFzU3ezC2fvBUM6LN4fskk4lyhDQwCjcjeEUx9pUBnrC4SNLYfAjp8Jp2bAx_XwW7y8M6dJHjzrUIWhb9e8j9soNU/s1600/script_analyse_class_activations_27.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="856" data-original-width="1288" height="265" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg6XWoBewgA6U4yVJCZFVw3uAYG32OsYtxBn8yF_T2g1oEzFF_caDwFzU3ezC2fvBUM6LN4fskk4lyhDQwCjcjeEUx9pUBnrC4SNLYfAjp8Jp2bAx_XwW7y8M6dJHjzrUIWhb9e8j9soNU/s400/script_analyse_class_activations_27.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgF2G8O8yui85ZEGzcO_Pl-TAH3Ug7PPlsR5usaU8yi5PsVEyD-IqhzPlH5h8inJRGonLYzi93yo5gvhzvzANq5s6WlYbz_FCQVekKgAraZvB-GvOv855PitSaYypjuV8t_p9JdX5hyphenhyphen6Fo/s1600/script_analyse_class_activations_28.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="629" data-original-width="780" height="322" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgF2G8O8yui85ZEGzcO_Pl-TAH3Ug7PPlsR5usaU8yi5PsVEyD-IqhzPlH5h8inJRGonLYzi93yo5gvhzvzANq5s6WlYbz_FCQVekKgAraZvB-GvOv855PitSaYypjuV8t_p9JdX5hyphenhyphen6Fo/s400/script_analyse_class_activations_28.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiaZFhYVzILNras66jyJncxO0Ha1lg0Ii9Ez5HEqz9TJ2DfQ4TkuZWUVvGAHH8GHVJwCSNU4o9o-hqPi6U55cSPkZk1tY56RHPcm813xPkX5HwIBz3CQN7_1iAm-L7KbyL1jFuZo3hpirA/s1600/script_analyse_class_activations_29.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="747" data-original-width="947" height="315" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiaZFhYVzILNras66jyJncxO0Ha1lg0Ii9Ez5HEqz9TJ2DfQ4TkuZWUVvGAHH8GHVJwCSNU4o9o-hqPi6U55cSPkZk1tY56RHPcm813xPkX5HwIBz3CQN7_1iAm-L7KbyL1jFuZo3hpirA/s400/script_analyse_class_activations_29.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgKZA6mzfbxAHTWpXnTd126z9GtWOlCEHB2HprWT_IPdlUf2UWdtscx0KPQFgItJyITnNDXYdqw5e08lCpeQu6MhlYUFhbdqcvsMNDc9DzL-BdbV8Fdegm8sILFZ1a-023j96nK36rb5rk/s1600/script_analyse_class_activations_30.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="768" data-original-width="928" height="330" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgKZA6mzfbxAHTWpXnTd126z9GtWOlCEHB2HprWT_IPdlUf2UWdtscx0KPQFgItJyITnNDXYdqw5e08lCpeQu6MhlYUFhbdqcvsMNDc9DzL-BdbV8Fdegm8sILFZ1a-023j96nK36rb5rk/s400/script_analyse_class_activations_30.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgvEbBtPBRaybhV3kKDzJgqalR3_BPyFZYRvzcBxQ0Di65Rv_ypYqLd3FC2d8RsJS138BA_dlqLO8tGMv-jk5v1mbZe4XJsVFNxXdrj0JajfAMJFl0dt0uIIDkF9FeWsGFxbXo2vVcsns0/s1600/script_analyse_class_activations_31.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="769" data-original-width="1051" height="292" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgvEbBtPBRaybhV3kKDzJgqalR3_BPyFZYRvzcBxQ0Di65Rv_ypYqLd3FC2d8RsJS138BA_dlqLO8tGMv-jk5v1mbZe4XJsVFNxXdrj0JajfAMJFl0dt0uIIDkF9FeWsGFxbXo2vVcsns0/s400/script_analyse_class_activations_31.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEigXYi6K1bBYvB3wrLpgZQS5RAybL1OXRWS2KPiZ0m02_Ez4nwQ2SJGsdGjhxVzAI2kPJqzc1btLDXmI2LB4yJcWc6HlUTe-FMG6bpb7zta2FE5NawBjJu7YkrkEyViM-65MbHGo4lVmDQ/s1600/script_analyse_class_activations_32.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="746" data-original-width="1019" height="292" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEigXYi6K1bBYvB3wrLpgZQS5RAybL1OXRWS2KPiZ0m02_Ez4nwQ2SJGsdGjhxVzAI2kPJqzc1btLDXmI2LB4yJcWc6HlUTe-FMG6bpb7zta2FE5NawBjJu7YkrkEyViM-65MbHGo4lVmDQ/s400/script_analyse_class_activations_32.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEipUd8ix7si22u5K6s5A16bAyYLPqgALh67aBm4D7TYMSwx6tZbTmx84ET8CErQqV27zscw0hSDeMCNUWPKdDMcdO67M4xRHZrbQzBql9GkVDrFLhu0DcXW8L-6fZ69XFFnF11i4M-iYp4/s1600/script_analyse_class_activations_33.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="785" data-original-width="1019" height="307" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEipUd8ix7si22u5K6s5A16bAyYLPqgALh67aBm4D7TYMSwx6tZbTmx84ET8CErQqV27zscw0hSDeMCNUWPKdDMcdO67M4xRHZrbQzBql9GkVDrFLhu0DcXW8L-6fZ69XFFnF11i4M-iYp4/s400/script_analyse_class_activations_33.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjzoMP624-c8Os4fyxUG9OgiNuLmBdWzhYTexuSYY7R_T44tQ6ZjlKd2UunY49HA_FoNkanmJovoFp84ukCcNLmXjfDBV2NY9b_Rj6JUL2r5M4rUvfMVB4WDZgur3NSC_OfMe4UYlLx9D0/s1600/script_analyse_class_activations_34.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="721" data-original-width="986" height="291" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjzoMP624-c8Os4fyxUG9OgiNuLmBdWzhYTexuSYY7R_T44tQ6ZjlKd2UunY49HA_FoNkanmJovoFp84ukCcNLmXjfDBV2NY9b_Rj6JUL2r5M4rUvfMVB4WDZgur3NSC_OfMe4UYlLx9D0/s400/script_analyse_class_activations_34.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEif83cmbI3kugw94MESKM9qrnBiegYCCroksXfR-90-13i6uk5OHHeBkRqOXFaPXPLeAxJM3kqqRAGgkno5HmmLmhris4tsLjj2xU3P6YI7bCCZEMjb2sE7rk3_OxB6kJsTDFqiEmh7ZIA/s1600/script_analyse_class_activations_35.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="631" data-original-width="766" height="328" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEif83cmbI3kugw94MESKM9qrnBiegYCCroksXfR-90-13i6uk5OHHeBkRqOXFaPXPLeAxJM3kqqRAGgkno5HmmLmhris4tsLjj2xU3P6YI7bCCZEMjb2sE7rk3_OxB6kJsTDFqiEmh7ZIA/s400/script_analyse_class_activations_35.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh7klIHX0VVOYnfIjc1IDDPjpcmWQ8FLDHlSxUg31hbILtvCskZJDvaGjaBbmt9xzepK9KVv8SARYwn_XAtPAidedCvPYjNoW2dDn0ReJ7sgFhEV38XRpO9uQwaS_dBhmsXRY_NKNY6lIo/s1600/script_analyse_class_activations_36.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="769" data-original-width="929" height="330" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh7klIHX0VVOYnfIjc1IDDPjpcmWQ8FLDHlSxUg31hbILtvCskZJDvaGjaBbmt9xzepK9KVv8SARYwn_XAtPAidedCvPYjNoW2dDn0ReJ7sgFhEV38XRpO9uQwaS_dBhmsXRY_NKNY6lIo/s400/script_analyse_class_activations_36.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhKSJ3vcfeZazbrkiG5oC4SY4PlyI1avBJ425wbSuOdhGZy5H-iiKpCVA_CGJJ5KjGXrUAZkGbgr3BfLW5mag7TU9kXI3yeAzkwgG2_H30y2EJQv6UDyNTlUuge1y8keEOKGD96nzaqbLY/s1600/script_analyse_class_activations_37.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="749" data-original-width="970" height="308" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhKSJ3vcfeZazbrkiG5oC4SY4PlyI1avBJ425wbSuOdhGZy5H-iiKpCVA_CGJJ5KjGXrUAZkGbgr3BfLW5mag7TU9kXI3yeAzkwgG2_H30y2EJQv6UDyNTlUuge1y8keEOKGD96nzaqbLY/s400/script_analyse_class_activations_37.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgpdiqQ5PJlO31bi-LzHpepVWt4Fw27kwofaWcvVdgSSmlfyyGmKn0DJNUEMuVCKOds-yQyk9zVET4WFfmee6hcdfYqYbBrjs9GpHOsXkd8EIEgB7gf4f7o2prNXzAUS9EPAyDOp6DmkwQ/s1600/script_analyse_class_activations_38.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="797" data-original-width="1177" height="270" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgpdiqQ5PJlO31bi-LzHpepVWt4Fw27kwofaWcvVdgSSmlfyyGmKn0DJNUEMuVCKOds-yQyk9zVET4WFfmee6hcdfYqYbBrjs9GpHOsXkd8EIEgB7gf4f7o2prNXzAUS9EPAyDOp6DmkwQ/s400/script_analyse_class_activations_38.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjjUyuR8XCMNH8yGTV9t6vpd8CKZpgz7zZGRJStH6rsYUsb4gEGTJtm5H8uWX_1zl57P5iAuYqfoWDz2-DnWm2dNaEZ2kHdTAtxx1ROh73QEEMlXPivc3q32pYO9aTyLd5QnsEpuL2rjqk/s1600/script_analyse_class_activations_39.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="669" data-original-width="1142" height="233" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjjUyuR8XCMNH8yGTV9t6vpd8CKZpgz7zZGRJStH6rsYUsb4gEGTJtm5H8uWX_1zl57P5iAuYqfoWDz2-DnWm2dNaEZ2kHdTAtxx1ROh73QEEMlXPivc3q32pYO9aTyLd5QnsEpuL2rjqk/s400/script_analyse_class_activations_39.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh4XU_pCwv-mFADEEdvEjck9qbHB9oWdRNpKpng0ZVpwg4pa4oqTGvdJgbObTLSo7HN1z6ZhPJ7GEsn9ocnhTHTgShSBcQGLiRR5rEB672YiKQXVd4CHfASuP23xf1ORsmT9CAAiOBi1wY/s1600/script_analyse_class_activations_40.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="770" data-original-width="1322" height="232" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh4XU_pCwv-mFADEEdvEjck9qbHB9oWdRNpKpng0ZVpwg4pa4oqTGvdJgbObTLSo7HN1z6ZhPJ7GEsn9ocnhTHTgShSBcQGLiRR5rEB672YiKQXVd4CHfASuP23xf1ORsmT9CAAiOBi1wY/s400/script_analyse_class_activations_40.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgMXrXXoGNMpPNS7xi3YMo3lwaOD0MolZ29GVmAobwmgDnarEb6G-WCeV5ad8EjM2o5i2DtEqSJdUyfZFWHzowsldRhC5bv16xqdTGhd8gBKOPUaF04AOFGr55uzbKhHwCZTeciqVk7TZw/s1600/script_analyse_class_activations_41.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="861" data-original-width="1294" height="265" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgMXrXXoGNMpPNS7xi3YMo3lwaOD0MolZ29GVmAobwmgDnarEb6G-WCeV5ad8EjM2o5i2DtEqSJdUyfZFWHzowsldRhC5bv16xqdTGhd8gBKOPUaF04AOFGr55uzbKhHwCZTeciqVk7TZw/s400/script_analyse_class_activations_41.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh3f-XcrBj1Fisp2b3seMloYRlQvhtEYA0FzYP15lDOwxkIf2zyB6eRrwBlL43acvhdcg7F2VVK5QBKHOF8sMGkoGXxmImolg5TGriiiu4JvB077NlTKC6N9LqBthGyej8HRn_39mXGBYY/s1600/script_analyse_class_activations_42.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="733" data-original-width="1174" height="248" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh3f-XcrBj1Fisp2b3seMloYRlQvhtEYA0FzYP15lDOwxkIf2zyB6eRrwBlL43acvhdcg7F2VVK5QBKHOF8sMGkoGXxmImolg5TGriiiu4JvB077NlTKC6N9LqBthGyej8HRn_39mXGBYY/s400/script_analyse_class_activations_42.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiv7FZoswSis2oGmrW1-T4rj-1Q3PHoxirYtBbAnVYzcpWz47H2mJ9hjCZ1qihE0-zEN7iYPjpeqaseg-fRNJrK6RbBxukCPU0HXdwnlbf-pRlkGFRPcN_2SvCq_mUzaHgJols1wzr0oFA/s1600/script_analyse_class_activations_43.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="805" data-original-width="1238" height="260" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiv7FZoswSis2oGmrW1-T4rj-1Q3PHoxirYtBbAnVYzcpWz47H2mJ9hjCZ1qihE0-zEN7iYPjpeqaseg-fRNJrK6RbBxukCPU0HXdwnlbf-pRlkGFRPcN_2SvCq_mUzaHgJols1wzr0oFA/s400/script_analyse_class_activations_43.png" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjik-hUoya1zONZMOWbnyhUDLsCX64BuC_PZgvRRS5sbPq3ZkZFG9Zmbtmpn2zl9DhferL0dmhfxsyOA9Bn5jXYs1Cbg9fp1JQN_Qjfn4sfM8CBdXPJLNzkqGXAXO3RXd2cApDodyO0JRk/s1600/script_analyse_class_activations_44.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="733" data-original-width="910" height="321" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjik-hUoya1zONZMOWbnyhUDLsCX64BuC_PZgvRRS5sbPq3ZkZFG9Zmbtmpn2zl9DhferL0dmhfxsyOA9Bn5jXYs1Cbg9fp1JQN_Qjfn4sfM8CBdXPJLNzkqGXAXO3RXd2cApDodyO0JRk/s400/script_analyse_class_activations_44.png" width="400" /></a></div>
<br />
<br />
<br />
<h2>
Deep Learning Analysis of COVID-19 lung X-Rays using MATLAB: Part 1</h2>
<br />
<b><i>UPDATE: see <a href="https://flylogical.blogspot.com/2020/06/deep-learning-analysis-of-covid-19-lung.html" target="_blank">Part 6</a> where I provide some composite models based on a combination of the most effective subset of previous models, plus I've published a live website where you can try them out for yourself by uploading a lung X-ray. You can also download all the models for your own further experimentation.</i></b><br />
<b><i><br /></i></b>
<b><i>UPDATE: see <a href="https://flylogical.blogspot.com/2020/05/deep-learning-analysis-of-covid-19-lung_30.html" target="_blank">Part 5</a> where the grad-CAM results of <a href="https://flylogical.blogspot.com/2020/05/deep-learning-analysis-of-covid-19-lung.html" target="_blank">Part 4</a> are used to train another suite of networks to help choose between all the lung X-ray classifiers presented in <a href="https://flylogical.blogspot.com/2020/04/deep-learning-analysis-of-covid-19-lung_28.html" target="_blank">Part 3</a>.</i></b><br />
<b><i><br /></i></b>
<b><i>UPDATE: see <a href="https://flylogical.blogspot.com/2020/05/deep-learning-analysis-of-covid-19-lung.html" target="_blank">Part 4</a> where I've performed a grad-CAM analysis on all the trained networks from <a href="https://flylogical.blogspot.com/2020/04/deep-learning-analysis-of-covid-19-lung_28.html" target="_blank">Part 3</a>, in the theme of <a href="https://flylogical.blogspot.com/2020/04/deep-learning-analysis-of-covid-19-lung_20.html" target="_blank">Part 2</a>.</i></b><br />
<b><i><br /></i></b>
<b><i>UPDATE: see <a href="https://flylogical.blogspot.com/2020/04/deep-learning-analysis-of-covid-19-lung_28.html" target="_blank">Part 3</a> where I've compared the (Transfer Learning) performance of all 19 neural network types available via MATLAB R2020a on the lung X-ray analysis, i.e., extending beyond just the <span style="font-family: "courier new" , "courier" , monospace;">googlenet</span> covered here.</i></b><br />
<b><i><br /></i></b>
<b><i>UPDATE: see <a href="https://flylogical.blogspot.com/2020/04/deep-learning-analysis-of-covid-19-lung_20.html" target="_blank">Part 2</a> where I have performed a Class Activation Map study to address the caveats raised in this post, namely my concern that the trained networks may be utilising extraneous artefacts embedded in the X-ray images (e.g., text) to exaggerate their predictive performance.</i></b><br />
<br />
<b>*** DISCLAIMER ***</b><br />
<br />
<i>I have no medical training. Nothing presented here should be considered in any way as informative from a medical point-of-view. This is simply an exercise in image analysis via Deep Learning using MATLAB, with lung X-rays as a topical example in these times of COVID-19. </i><br />
<i></i><br />
<b>INTRODUCTION</b><br />
<br />
Recent examples of lung X-ray image classification via deep learning have utilised TensorFlow. The most comprehensive approach I could find so far is described <a href="https://www.technologyreview.com/s/615399/coronavirus-neural-network-can-help-spot-covid-19-in-chest-x-ray-pneumonia/" target="_blank">here</a> with detail <a href="https://arxiv.org/pdf/2003.09871.pdf" target="_blank">here</a> (which I'll refer to as COVID-Net). I wanted to try something similar in MATLAB, since that is my tool of choice in my day job for various Artificial Intelligence / Machine Learning investigations, and for other side-projects such as <a href="http://flylogical.blogspot.com/2019/10/neuralmet-updates.html" target="_blank">aviation weather forecasting</a>.<br />
<br />
<b>APPROACH</b><br />
<b><br /></b>
My goal was to use the underlying chest X-ray image dataset from COVID-Net to train a deep neural network via the technique of transfer learning, just to see how well the resulting classifiers would perform. All analysis is performed using MATLAB, and code snippets are provided which may be useful to others.<br />
<br />
<b>SAMPLE IMAGES</b><br />
<b></b><br />
Before getting started with the analysis, here are some sample images from which the training and testing will be performed, just to give an idea of the challenge for the neural networks in classifying between the various alternatives.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEimwFC53_96K15ig5ETF0JsB6RAypBxHCLkZsO4S2DuGnlwwoUGnkQ6ubrcQjbcGve_cuhy0jw7zhWliOAviGHGXJdyyOBi4Xw10a7mLU6fFshF2UeEJG08gNsfI0tAkF7SjWn_st1A5MU/s1600/Capture_XRAY_HEALTHY.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="414" data-original-width="497" height="332" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEimwFC53_96K15ig5ETF0JsB6RAypBxHCLkZsO4S2DuGnlwwoUGnkQ6ubrcQjbcGve_cuhy0jw7zhWliOAviGHGXJdyyOBi4Xw10a7mLU6fFshF2UeEJG08gNsfI0tAkF7SjWn_st1A5MU/s400/Capture_XRAY_HEALTHY.PNG" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Healthy</td></tr>
</tbody></table>
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEggWLZCp4j-af79curIWkt8VIp4Y2eK8cZ5FQ77tetKIqYwKWa2OsgBn4JWkSR_yfa84Go5Ke4P6l88wWLiK4iZMX6EU0bQcI8v-eG4cj08P7V4dy1ixK6izyxRN_dlBWp0yIOpYUyB_vY/s1600/Capture_XRAY_BACTERIA.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="341" data-original-width="500" height="272" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEggWLZCp4j-af79curIWkt8VIp4Y2eK8cZ5FQ77tetKIqYwKWa2OsgBn4JWkSR_yfa84Go5Ke4P6l88wWLiK4iZMX6EU0bQcI8v-eG4cj08P7V4dy1ixK6izyxRN_dlBWp0yIOpYUyB_vY/s400/Capture_XRAY_BACTERIA.PNG" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Bacterial Pneumonia</td></tr>
</tbody></table>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhqku_gLopQ3dEZJv3BlUhueW-pRoTDz_mn-AXb_frD8_De2TEmvAbr1FdWPBDwDRLQUjh_t7j8kCJ0R9d3ULOEoLjGvqFVS3hPp__HoAA2Y4MnUGBVgtqYwBIym2-ECXyMV3Sgv61rsWo/s1600/Capture_XRAY_VIRAL_OTHER.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="404" data-original-width="494" height="326" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhqku_gLopQ3dEZJv3BlUhueW-pRoTDz_mn-AXb_frD8_De2TEmvAbr1FdWPBDwDRLQUjh_t7j8kCJ0R9d3ULOEoLjGvqFVS3hPp__HoAA2Y4MnUGBVgtqYwBIym2-ECXyMV3Sgv61rsWo/s400/Capture_XRAY_VIRAL_OTHER.PNG" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Viral Pneumonia (not COVID-19)</td></tr>
</tbody></table>
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjCXdb0cLy5lsJ3cGNYDw1950XAGAX4jKDFVREr6NwHNmKbi2avuNWo9RzJR3XxIrFZvBJLcpbZv18dyHQdRM-8nPxE9Jq9WLFfcjovEhy_QfuzYM90Yq1aEo8JKbG5M-gExMFCMiMG324/s1600/Capture_XRAY_COVID.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="469" data-original-width="496" height="377" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjCXdb0cLy5lsJ3cGNYDw1950XAGAX4jKDFVREr6NwHNmKbi2avuNWo9RzJR3XxIrFZvBJLcpbZv18dyHQdRM-8nPxE9Jq9WLFfcjovEhy_QfuzYM90Yq1aEo8JKbG5M-gExMFCMiMG324/s400/Capture_XRAY_COVID.PNG" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">COVID-19 Pneumonia</td></tr>
</tbody></table>
<br />
<br />
<b>EXAMPLE 1: "YES / NO" Classification of Pneumonia</b><br />
<b><br /></b>
<b>Data Preparation</b><br />
<b><br /></b>
This first example addresses the task of training the network to classify whether a given X-ray belongs to a normal (healthy) patient or one suffering from pneumonia, irrespective of the type of pneumonia (i.e., bacterial or viral, etc).<br />
<br />
The dataset has 752 normal images, and 4558 with pneumonia (across all types). To provide a balanced set across the two target classes ("yes" for pneumonia and "no" for healthy), I used only 752 images from each class (all of the "no" class, and a random selection of 752 from the 4558 in the "yes" class). From each class, I used a randomly-selected 85% (i.e., 640 images) for training and 15% (i.e., 112 images) for validation. The line of code which does this in MATLAB is:<br />
<br />
<br />
<br />
<div>
<span style="font-family: "courier new" , "courier" , monospace;">[trainingImages,validationImages,holdoutImages] = splitEachLabel(images,640,112,'randomized');</span></div>
<br />
where <span style="font-family: "courier new" , "courier" , monospace;">images</span> refers to an <span style="font-family: "courier new" , "courier" , monospace;">imageDatastore</span> object initialised on a master folder of images sorted into two subfolders containing the images from each (YES and NO) class, created via the following line of code:<br />
<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">images = imageDatastore(sortedPath,'IncludeSubfolders',true,'LabelSource','foldernames');
</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"></span><br />
<br />
where <span style="font-family: "courier new" , "courier" , monospace;">sortedPath</span> is the variable containing the name of the master folder. Note: I performed the sorting offline, based simply on whether the substring "NORMAL" appears in the filename (defining the "NO" class), assuming that all filenames without "NORMAL" belong to the "YES" class. Here's a code snippet showing how to do this in MATLAB:<br />
<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">allImages = imageDatastore('\covid\data\train','IncludeSubfolders',false); </span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span>
<span style="font-family: "courier new" , "courier" , monospace;">yesPath='\covid\sorted\yesno\yes\'; </span><br />
<span style="font-family: "courier new" , "courier" , monospace;">noPath='\covid\sorted\yesno\no\'; </span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span>
<span style="font-family: "courier new" , "courier" , monospace;">for i=1:length(allImages.Files) </span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> [~,name,ext] = fileparts(char(allImages.Files(i))); </span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> if contains(name, 'NORMAL') </span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> destfolder=noPath; </span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> else </span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> destfolder=yesPath; </span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> end </span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span><span style="font-family: "courier new" , "courier" , monospace;"> destfile=[destfolder name ext]; </span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> copyfile(char(allImages.Files(i)),destfile); </span><br />
<span style="font-family: "courier new" , "courier" , monospace;">end</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"></span><br />
<br />
Next, I created an <span style="font-family: "courier new" , "courier" , monospace;">imageDataAugmenter</span> with random translation shifts of +/-3 pixels and rotational shifts of +/- 10 degrees via the following line of code:<br />
<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">imageAugmenter = imageDataAugmenter( ...
'RandRotation',[-10,10], ...
'RandXTranslation',[-3 3], ...
'RandYTranslation',[-3 3]);</span><br />
<br />
Applying this to the training set, with the inclusion of the<span style="font-family: "courier new" , "courier" , monospace;"> 'gray2rgb'</span> colour pre-processor (so that grayscale and colour image files of the various formats e.g., jpeg, png, etc., can be imported via the same datastore without error), gives the actual training set used (denoted <span style="font-family: "courier new" , "courier" , monospace;">trainingImages_</span>):<br />
<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">trainingImages_=augmentedImageDatastore(outputSize,trainingImages,'ColorPreprocessing','gray2rgb','DataAugmentation',imageAugmenter);
</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"></span><br />
...and similarly for the validation set but <i>without</i> the augmentation:
<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">validationImages_=augmentedImageDatastore(outputSize,validationImages,'ColorPreprocessing','gray2rgb');
</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"></span><br />
Note that <span style="font-family: "courier new" , "courier" , monospace;">outputSize</span> is set as follows since I'm using<span style="font-family: "courier new" , "courier" , monospace;"> googlenet </span>in the transfer learning:<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">outputSize=[224 224 3]; %FOR GOOGLENET
</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"></span><br />
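<br />
As an aside, rather than hard-coding the dimensions, the required input size can be read directly from the first (image input) layer of the pretrained network; a minimal sketch:<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">net = googlenet; </span><br />
<span style="font-family: "courier new" , "courier" , monospace;">outputSize = net.Layers(1).InputSize; %returns [224 224 3] for googlenet </span><br />
<br />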
<b>Network Preparation</b><br />
<b></b><br />
Here's the code I used to prepare the pre-trained <span style="font-family: "courier new" , "courier" , monospace;">googlenet</span> for transfer learning (i.e., by replacing the final few layers of the network):<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;"></span><br />
<span style="font-family: "courier new" , "courier" , monospace;">net = googlenet;
lgraph = layerGraph(net); </span><br />
<span style="font-family: "courier new" , "courier" , monospace;"></span><br />
<span style="font-family: "courier new" , "courier" , monospace;">%Replace final layers </span><br />
<span style="font-family: "courier new" , "courier" , monospace;"></span><br />
<span style="font-family: "courier new" , "courier" , monospace;">lgraph = removeLayers(lgraph, {'loss3-classifier','prob','output'}); </span><br />
<span style="font-family: "courier new" , "courier" , monospace;">numClasses = numel(categories(trainingImages.Labels)); </span><br />
<span style="font-family: "courier new" , "courier" , monospace;"></span><br />
<span style="font-family: "courier new" , "courier" , monospace;">newLayers = [
fullyConnectedLayer(numClasses,'Name','fc','WeightLearnRateFactor',10,'BiasLearnRateFactor',10)
softmaxLayer('Name','softmax')
classificationLayer('Name','classoutput')]; </span><br />
<span style="font-family: "courier new" , "courier" , monospace;"></span><br />
<span style="font-family: "courier new" , "courier" , monospace;">lgraph = addLayers(lgraph,newLayers); </span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span>
<span style="font-family: "courier new" , "courier" , monospace;">%Connect the last transferred layer remaining in the network </span><br />
<span style="font-family: "courier new" , "courier" , monospace;">%('pool5-drop_7x7_s1') to the new layers. </span><br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">lgraph = connectLayers(lgraph,'pool5-drop_7x7_s1','fc');
</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"></span><br />
The figure below shows the last few layers of the network with the above replacements:<br />
<br />
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgs1776BCukbMVNsy1TZG5aSemb9fJ9map92ioZ4BWxnj2Eo1Qv5-lCY0hCa6wvFBhhL9Uf3cPL2S7KNaQk2c95USKInHRhOdpnLG9Pjq7NHFRMxIBF2akDHnBMQQRhweRPbUJxL1LkZQI/s1600/Capture_COVID_NN_LAYERS.PNG" imageanchor="1" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" data-original-height="355" data-original-width="617" height="230" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgs1776BCukbMVNsy1TZG5aSemb9fJ9map92ioZ4BWxnj2Eo1Qv5-lCY0hCa6wvFBhhL9Uf3cPL2S7KNaQk2c95USKInHRhOdpnLG9Pjq7NHFRMxIBF2akDHnBMQQRhweRPbUJxL1LkZQI/s400/Capture_COVID_NN_LAYERS.PNG" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Transfer Learning preparation: replacement of last few layers of <span style="font-family: "courier new" , "courier" , monospace;">googlenet</span></td></tr>
</tbody></table>
<span style="font-family: "courier new" , "courier" , monospace;"></span><br />
<br />
which was displayed using the following code:<br />
<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">figure('Units','normalized','Position',[0.3 0.3 0.4 0.4]); </span><br />
<span style="font-family: "courier new" , "courier" , monospace;">plot(lgraph) </span><br />
<span style="font-family: "courier new" , "courier" , monospace;">ylim([0,10]);
</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"></span><br />
<span style="font-family: "courier new" , "courier" , monospace;"></span><br />
The training options are set as follows (mostly by trial-and-error!):<br />
<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">miniBatchSize = 10; </span><br />
<span style="font-family: "courier new" , "courier" , monospace;">MaxEpochs=12; </span><br />
<span style="font-family: "courier new" , "courier" , monospace;">numIterationsPerEpoch = floor(numel(trainingImages.Labels)/miniBatchSize); </span><br />
<span style="font-family: "courier new" , "courier" , monospace;"></span><br />
<span style="font-family: "courier new" , "courier" , monospace;">options = trainingOptions('sgdm',...</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">    'MiniBatchSize',miniBatchSize,...</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">    'MaxEpochs',MaxEpochs,...</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">    'InitialLearnRate',1e-4,...</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">    'Verbose',false,...</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">    'Plots','training-progress',...</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">    'ValidationData',validationImages_,...</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">    'ValidationFrequency',numIterationsPerEpoch,... %validate once per epoch</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">    'ValidationPatience',Inf); %no early stopping</span><br />
<br />
<br />
...and the actual training is performed via the following line of code:<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">netTransfer = trainNetwork(trainingImages_,lgraph,options);
</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"></span><br />
<b>Results</b><br />
<b></b><br />
Since my options include <span style="font-family: "courier new" , "courier" , monospace;">'Plots','training-progress'</span>, the following chart is presented (in real-time as training progresses):<br />
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhPMuf4ti1XqOtDHMSU9Hl8RKnqT24bvmyq_UtHxE2I-4ajK_W_KzxEE93RJmwkneDn4v8En4SiYoqKD9kYl9RqJpMpEAKRGYqlt8_l4XvhYLEcSuL7sO88fTq8AejNDCxRAzBZBmfHjJk/s1600/Capture_COVID_PNEUMONIA_YES_NO_CHART.PNG" imageanchor="1" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" data-original-height="793" data-original-width="1018" height="311" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhPMuf4ti1XqOtDHMSU9Hl8RKnqT24bvmyq_UtHxE2I-4ajK_W_KzxEE93RJmwkneDn4v8En4SiYoqKD9kYl9RqJpMpEAKRGYqlt8_l4XvhYLEcSuL7sO88fTq8AejNDCxRAzBZBmfHjJk/s400/Capture_COVID_PNEUMONIA_YES_NO_CHART.PNG" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><div>
Example 1: Training Convergence</div>
</td></tr>
</tbody></table>
<br />
It can be seen that the training converges nicely, though given the slight downturn in the validation accuracy at the end of the run (the black dots in the upper blue curve) and the slight upturn in the validation loss (the black dots in the lower orange curve), there has been a small degree of overfitting. Ideally, the training should therefore have been stopped slightly earlier.<br />
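<br />
One way to stop earlier automatically is to set a finite <span style="font-family: "courier new" , "courier" , monospace;">'ValidationPatience'</span> in the training options (I used <span style="font-family: "courier new" , "courier" , monospace;">Inf</span> above, which disables early stopping); a minimal sketch, where the patience value of 3 is illustrative rather than tuned:<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">%stop once the validation loss has failed to improve 3 times in a row </span><br />
<span style="font-family: "courier new" , "courier" , monospace;">options = trainingOptions('sgdm',... </span><br />
<span style="font-family: "courier new" , "courier" , monospace;">    'MiniBatchSize',miniBatchSize,... </span><br />
<span style="font-family: "courier new" , "courier" , monospace;">    'MaxEpochs',MaxEpochs,... </span><br />
<span style="font-family: "courier new" , "courier" , monospace;">    'InitialLearnRate',1e-4,... </span><br />
<span style="font-family: "courier new" , "courier" , monospace;">    'ValidationData',validationImages_,... </span><br />
<span style="font-family: "courier new" , "courier" , monospace;">    'ValidationFrequency',numIterationsPerEpoch,... </span><br />
<span style="font-family: "courier new" , "courier" , monospace;">    'ValidationPatience',3); </span><br />
<br />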
<br />
For assessing the classification performance, the error statistics applied to the validation set are conveniently presented by way of the Confusion Matrix, as follows:<br />
<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">predictedLabelsValidation = classify(netTransfer,validationImages_);
plotconfusion(validationImages.Labels,predictedLabelsValidation);
</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"></span><br />
which produces the following chart:<br />
<br />
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody>
<tr><td style="text-align: center;"><div>
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgK8HTaxdm3PqRE0U7e2_YMKoqYmNiBuFOzfArWpKLB31biSWybplFqpfM12sBeR17tyIBiVpZxjwj2ivrNGOnjSvsnyJHf7iP8leKectNYgJ82_lbjy7-MIRpvE_SPs1FoXFNE8V4oHQM/s1600/Capture_COVID_PNEUMONIA_YES_NO_CONFUSION.PNG" imageanchor="1" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" data-original-height="596" data-original-width="587" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgK8HTaxdm3PqRE0U7e2_YMKoqYmNiBuFOzfArWpKLB31biSWybplFqpfM12sBeR17tyIBiVpZxjwj2ivrNGOnjSvsnyJHf7iP8leKectNYgJ82_lbjy7-MIRpvE_SPs1FoXFNE8V4oHQM/s400/Capture_COVID_PNEUMONIA_YES_NO_CONFUSION.PNG" width="393" /></a></div>
</td></tr>
<tr><td class="tr-caption" style="text-align: center;"><div>
Example 1: Confusion Matrix for validation set</div>
<div>
<br /></div>
</td></tr>
</tbody></table>
<br />
The performance is quite reasonable, with an average accuracy of 90.2% (true negative: 85.7%, and true positive 95.9%; correspondingly, false positive 14.3%, and false negative 4.1%). Caveat: I have <i>not</i> checked whether there is some spurious reason which makes the performance appear artificially better than it is. For example, an identifying text character etc., may be present in (some or all of) the images, giving a definitive clue to the "yes" or "no" nature of the content, such that the image classifier is actually -- and erroneously -- picking up on this clue rather than identifying the actual lung state. I simply took the entire images "as is". A more rigorous analysis would need to check for such artefacts.<br />
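<br />
For the record, these summary statistics can also be computed directly rather than read off the chart; a minimal sketch using <span style="font-family: "courier new" , "courier" , monospace;">confusionmat</span>:<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">%rows of C are the true classes, columns the predicted classes </span><br />
<span style="font-family: "courier new" , "courier" , monospace;">C = confusionmat(validationImages.Labels,predictedLabelsValidation); </span><br />
<span style="font-family: "courier new" , "courier" , monospace;">accuracy = sum(diag(C))/sum(C(:)); %overall accuracy </span><br />
<span style="font-family: "courier new" , "courier" , monospace;">classRates = diag(C)./sum(C,2); %per-class true rates (e.g., true negative, true positive) </span><br />
<br />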
<br />
<br />
<b>EXAMPLE 2: Classification of "Bacterial" versus "Viral" Pneumonia</b><br />
<b></b><br />
<b>Data Preparation</b><br />
<br />
For this next task, I assume that we know that the patient is suffering from pneumonia, but want to train a network to determine whether the pneumonia is viral or bacterial. Again, this is a two-class problem where the classes are "bacteria" and "virus". The training set is constructed by taking the training images from the "yes" bucket (i.e., the known pneumonia cases) from Example 1 and sorting them into "bacteria" and "virus". Again, I did this offline: filenames containing the substrings "bacteria" or "streptococcus" were placed in the "bacteria" subfolder, and all others in the "virus" subfolder. In this case, there was an almost equal number of images in each class, so the datasets were created as follows (using 2019 from each class, split 85%/15% train-validate, as before):<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">[trainingImages,validationImages,holdoutImages] = splitEachLabel(images,1717,302,'randomized');</span>
<br />
<br />
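For reference, the offline sorting into "bacteria" and "virus" subfolders follows the same pattern as the filename-sorting snippet in Example 1; a minimal sketch, where the destination paths are illustrative (the same pattern applies to the "covid"/"corona" sorting in Example 3 below):<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">pneumoniaImages = imageDatastore('\covid\sorted\yesno\yes','IncludeSubfolders',false); </span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span>
<span style="font-family: "courier new" , "courier" , monospace;">bacteriaPath='\covid\sorted\bacvir\bacteria\'; %illustrative path </span><br />
<span style="font-family: "courier new" , "courier" , monospace;">virusPath='\covid\sorted\bacvir\virus\'; %illustrative path </span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span>
<span style="font-family: "courier new" , "courier" , monospace;">for i=1:length(pneumoniaImages.Files) </span><br />
<span style="font-family: "courier new" , "courier" , monospace;">    [~,name,ext] = fileparts(char(pneumoniaImages.Files(i))); </span><br />
<span style="font-family: "courier new" , "courier" , monospace;">    if contains(lower(name),{'bacteria','streptococcus'}) </span><br />
<span style="font-family: "courier new" , "courier" , monospace;">        destfolder=bacteriaPath; </span><br />
<span style="font-family: "courier new" , "courier" , monospace;">    else </span><br />
<span style="font-family: "courier new" , "courier" , monospace;">        destfolder=virusPath; </span><br />
<span style="font-family: "courier new" , "courier" , monospace;">    end </span><br />
<span style="font-family: "courier new" , "courier" , monospace;">    copyfile(char(pneumoniaImages.Files(i)),[destfolder name ext]); </span><br />
<span style="font-family: "courier new" , "courier" , monospace;">end </span><br />
<br />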
Thereafter, the preparation steps mirror those for Example 1.<br />
<br />
<b>Network Preparation</b><br />
<br />
Mirroring the steps followed for Example 1.<br />
<br />
<b>Results</b><br />
<br />
<br />
The corresponding training convergence plot is shown below:<br />
<br />
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiGNomHNmI6aURTTtvSmJiHlXvAwjbOYeWWQeQnUqM1cDqDsC8zGyOY-2AE8fv8E0vLqJbOvvRC-iutAIsmmFKRUbMn3px_NrAQcxmQyaVWvrGvtvzlu13iPS0M9e9gINicX2aWMmzSdWI/s1600/Capture_COVID_PNEUMONIA_BAC_VIR_CHART.PNG" imageanchor="1" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" data-original-height="788" data-original-width="1020" height="308" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiGNomHNmI6aURTTtvSmJiHlXvAwjbOYeWWQeQnUqM1cDqDsC8zGyOY-2AE8fv8E0vLqJbOvvRC-iutAIsmmFKRUbMn3px_NrAQcxmQyaVWvrGvtvzlu13iPS0M9e9gINicX2aWMmzSdWI/s400/Capture_COVID_PNEUMONIA_BAC_VIR_CHART.PNG" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Example 2: Training Convergence</td></tr>
</tbody></table>
<br />
Again, the convergence is good (though not as convincing as in Example 1). There is likewise a (slightly more pronounced) degree of overfitting which could be eliminated by stopping earlier. The corresponding Confusion Matrix computed for the validation set is shown below:<br />
<br />
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiWagjDjDIObrOTES2BojFgo8jxufP9uK7jjbjskR4qjdbbBWkniG_M0wDwLAeYVCUa7gMVOmHRlUP2xqBndXxwlfRI0x7mEp9BckN9DpzPYZP_8Cmc-ljFTivBnvn9Vy4Km_5BmFudo7s/s1600/Capture_COVID_PNEUMONIA_BAC_VIR_CONFUSION.PNG" imageanchor="1" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" data-original-height="601" data-original-width="574" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiWagjDjDIObrOTES2BojFgo8jxufP9uK7jjbjskR4qjdbbBWkniG_M0wDwLAeYVCUa7gMVOmHRlUP2xqBndXxwlfRI0x7mEp9BckN9DpzPYZP_8Cmc-ljFTivBnvn9Vy4Km_5BmFudo7s/s400/Capture_COVID_PNEUMONIA_BAC_VIR_CONFUSION.PNG" width="381" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Example 2: Confusion Matrix for validation set</td></tr>
</tbody></table>
<br />
The performance is reasonable (though not as good as in Example 1) with an average classification accuracy of 78% (true bacteria: 79.2%, and true virus 76.8%; correspondingly, false virus 20.8%, and false bacteria 23.2%). Caveat: again, I have <i>not</i> checked whether there are underlying clues in the images which exaggerate the performance: I simply took the entire images "as is".<br />
<br />
<b>EXAMPLE 3: Classification of COVID-19 or Other-Viral</b><br />
<b><br /></b>
<b>Data Preparation</b><br />
<br />
For this next task, I assume that we know the patient is suffering from some form of viral pneumonia, but want to train a network to determine whether the pneumonia is COVID-19 rather than some other form (SARS, MERS, etc.). Again, this is a two-class problem where the classes are "covid" and "other". The training set is constructed by taking the training images from the "viral" bucket (i.e., the known viral pneumonia cases) from Example 2 and sorting them into "covid" and "other". Again, I did this offline: filenames containing the substrings "covid" or "corona" were placed in the "covid" subfolder, and all others in the "other" subfolder. In this case, there were only 76 covid images versus 2014 other-viral, so the datasets were created as follows (using only 76 from each class, split 85%/15% train-validate, as before):
<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">[trainingImages,validationImages,holdoutImages] = splitEachLabel(images,65,11,'randomized');
</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"></span><br />
Thereafter, the preparation steps mirror those in the previous examples.<br />
<br />
<b>Network Preparation</b><br />
<b></b><br />
Mirroring the steps from the previous examples except setting the maximum number of epochs to 7 rather than 12 in order to prevent overfitting due to the relatively small number of training images compared with the previous examples.<br />
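<br />
In code, this amounts to a one-line change to the training options; a minimal sketch:<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">MaxEpochs=7; %reduced from 12 to limit overfitting on the small covid dataset </span><br />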
<br />
<b>Results</b><br />
<b></b><br />
The corresponding training convergence plot is shown below:<br />
<br />
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjITC9QBjA2sl-rV0gawFpFzUIAzu_K2mWnWXwd3p1TL4Wa2-VZr33EkqjBts3P5IsN5o8rOapLIIStAGVA5QbDTRc8t7LeCjSw7abINoqR19-wjat_Q0WRFrowamhyZZk5pYvjCU4cyQo/s1600/Capture_COVID_PNEUMONIA_COVID_VS_OTHER_VIR_CHART.PNG" imageanchor="1" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" data-original-height="787" data-original-width="1028" height="305" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjITC9QBjA2sl-rV0gawFpFzUIAzu_K2mWnWXwd3p1TL4Wa2-VZr33EkqjBts3P5IsN5o8rOapLIIStAGVA5QbDTRc8t7LeCjSw7abINoqR19-wjat_Q0WRFrowamhyZZk5pYvjCU4cyQo/s400/Capture_COVID_PNEUMONIA_COVID_VS_OTHER_VIR_CHART.PNG" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Example 3: Training Convergence</td></tr>
</tbody></table>
<br />
Again, the convergence is good. The plot is far less dense than previous examples owing to the significantly reduced number of training images and reduced number of epochs. The corresponding Confusion Matrix computed for the validation set is shown below.<br />
<br />
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj6sE1onsxua8UE7TCKmrK5gaCMaXL0HpTn0pHgrDy76W5khrQLfa5AWSScMp6JmDGFk4r2QPqUNc1lZ5QlrOxBp_cNVZSXgf7sgyD5XM20o3-jUro8zDmtsP_SdkLLf8fRrBe2QEtG4UA/s1600/Capture_COVID_PNEUMONIA_COVID_VS_OTHER_VIR_CONFUSION.PNG" imageanchor="1" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" data-original-height="592" data-original-width="582" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj6sE1onsxua8UE7TCKmrK5gaCMaXL0HpTn0pHgrDy76W5khrQLfa5AWSScMp6JmDGFk4r2QPqUNc1lZ5QlrOxBp_cNVZSXgf7sgyD5XM20o3-jUro8zDmtsP_SdkLLf8fRrBe2QEtG4UA/s400/Capture_COVID_PNEUMONIA_COVID_VS_OTHER_VIR_CONFUSION.PNG" width="392" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Example 3: Confusion Matrix for validation set</td></tr>
</tbody></table>
<br />
The performance is good, albeit on a rather small validation set of only 22 images (11 covid, 11 other-viral), with an average classification accuracy of 95.5% (true covid: 100%, and true other-virus 91.7%; correspondingly, false other-virus 0%, and false covid 8.3%). Caveat: again, I have <i>not</i> checked whether there are underlying clues in the images which exaggerate the performance: I simply took the entire images "as is".<br />
<br />
<b>EXAMPLE 4: COVID-19 Pneumonia versus Healthy, Bacterial, or non-COVID Viral Pneumonia</b><br />
<b></b><br />
<b>Data Preparation</b><br />
<b></b><br />
In this final task, the challenge for the neural network is the most demanding: namely, from a given lung X-ray, determine whether the patient is healthy, has bacterial pneumonia, non-COVID-19 viral pneumonia, or COVID-19 pneumonia. This is a four-class problem, whereas all the previous examples were (simpler) two-class problems. The four classes are "healthy", "bacteria", "viral-other", and "covid". The training set is drawn from the entire basket of training images, but since there are only 76 covid images, only 76 are used for each of the classes (65 for training, 11 for validation) in order to balance the dataset when training the neural network, as follows (i.e., the same code as in Example 3):<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">[trainingImages,validationImages,holdoutImages] = splitEachLabel(images,65,11,'randomized');
</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"></span><br />
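Note that the four class labels are picked up automatically from the subfolder names via the same <span style="font-family: "courier new" , "courier" , monospace;">imageDatastore</span> construction as in Example 1; a minimal sketch, where the master folder name is illustrative:<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">%master folder with subfolders: healthy, bacteria, viral-other, covid </span><br />
<span style="font-family: "courier new" , "courier" , monospace;">sortedPath='\covid\sorted\multiclass\'; %illustrative path </span><br />
<span style="font-family: "courier new" , "courier" , monospace;">images = imageDatastore(sortedPath,'IncludeSubfolders',true,'LabelSource','foldernames'); </span><br />
<br />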
Thereafter, the preparation steps mirror those in the previous examples.<br />
<br />
<b>Network Preparation</b><br />
<b></b><br />
Mirroring the steps from the previous examples except setting the maximum number of epochs to 10 rather than 12 in order to prevent overfitting due to the relatively small number of training images.<br />
<br />
<b>Results</b><br />
<b></b><br />
The corresponding training convergence plot is shown below:<br />
<br />
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgT_mN1HuqZvrqhdXIwUc9L7s_lejI1Qq_i0Q8zQLwlglLCWySSroNDu15g48fErxoorxMlJdkW_XHzGAefAaRnBlMhQw6GsC9igKshDzKmDzIM79AsssIorbzDMMQzMz-dWihyphenhyphenwFMyguQ/s1600/Capture_COVID_PNEUMONIA_MULTICLASS_CHART.PNG" imageanchor="1" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" data-original-height="792" data-original-width="1035" height="305" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgT_mN1HuqZvrqhdXIwUc9L7s_lejI1Qq_i0Q8zQLwlglLCWySSroNDu15g48fErxoorxMlJdkW_XHzGAefAaRnBlMhQw6GsC9igKshDzKmDzIM79AsssIorbzDMMQzMz-dWihyphenhyphenwFMyguQ/s400/Capture_COVID_PNEUMONIA_MULTICLASS_CHART.PNG" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Example 4: Training Convergence</td></tr>
</tbody></table>
<br />
Again, the convergence is good, though not quite as impressive as for some of the two-class examples.<br />
<br />
The corresponding Confusion Matrix computed for the validation set is shown below.<br />
<br />
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhBE9j8yDMQzC6F2EqsV7_F5Jvw5xl89QVxd7fYsZRfhJVbFZELbnmGKuFBA5LC8cRA7ThWyZK2srMb1I1vY360y4pc8aXMsYhlx46vSYjTEcOV5V7zvEkrnkpDNhabNGhu2L0pWJJdqwM/s1600/Capture_COVID_PNEUMONIA_MULTICLASS_CONFUSION.PNG" imageanchor="1" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" data-original-height="599" data-original-width="574" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhBE9j8yDMQzC6F2EqsV7_F5Jvw5xl89QVxd7fYsZRfhJVbFZELbnmGKuFBA5LC8cRA7ThWyZK2srMb1I1vY360y4pc8aXMsYhlx46vSYjTEcOV5V7zvEkrnkpDNhabNGhu2L0pWJJdqwM/s400/Capture_COVID_PNEUMONIA_MULTICLASS_CONFUSION.PNG" width="382" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Example 4: Confusion Matrix for validation set</td></tr>
</tbody></table>
<br />
The performance is surprisingly good, albeit on a rather small validation set of only 44 images (11 for each class) with an average classification accuracy of 75%. Interestingly, all the COVID-19 examples are correctly identified as such. Moreover, there are no non-COVID-19 images which are erroneously mis-identified as COVID-19. Caveat: again, I have <i>not</i> checked if there are underlying clues in the images which exaggerate the performance: I simply took the entire images "as is".<br />
<br />
<b>CONCLUSIONS</b><br />
<b><br /></b>
<br />
<ul>
<li>As an exercise in using MATLAB for Deep Learning, this has been a definite success. I understand that TensorFlow is free of charge, and MATLAB is not. But if you <i>do</i> have access to MATLAB with the required Toolboxes for Deep Learning, it is a very powerful framework and, in my opinion, easier to use than TensorFlow.</li>
<li>As an exercise in identifying COVID-19 lung X-rays images from non-COVID-19 images, the approach of Transfer Learning from the pre-trained <span style="font-family: "courier new" , "courier" , monospace;">googlenet</span> seems promising in terms of the observed performance. However, to do this properly, the caveat I have raised throughout (about possible clues being embedded in the images which exaggerate the performance of the classifiers) would need to be properly addressed. </li>
<li>That said, as a next step it would be interesting to properly compare the custom network architecture developed in COVID-Net with the (much simpler) approach presented here (i.e., using only a slightly-modified <span style="font-family: "courier new" , "courier" , monospace;">googlenet</span>). If anyone has the inclination to program the COVID-Net network in MATLAB, please let me know. I would like to help if I can.</li>
<li>I will re-visit the COVID-Net data resources and re-train the models whenever the COVID-19 image set becomes more extensive.</li>
<li>Keep safe.</li>
</ul>
<br />
<h2>
NOTAMS Service Update</h2>
<br />
In these strange and worrying times, flying is rightly on the back-burner. But one upside of the down-time is the opportunity to get some odd jobs done. So, I'm pleased to say that I've managed to tick off one important item from the "To Do" list -- namely, I've migrated the NOTAMS service from AIDAP to the SWIM Cloud Distribution Service (SCDS) as mandated by the FAA (details <a href="https://www.faa.gov/about/initiatives/notam/" target="_blank">here</a>), since AIDAP is being deprecated this year. Users of my <i>iNavCalc</i> and <i>JustNOTAMS</i> apps may therefore have noticed a slight change in the format of the NOTAMS, reflecting this migration (which went live last weekend).<br />
<br />
For those interested in the technical aspects, here's a summary of the various bits and pieces:<br />
<br />
<ol>
<li>The solution starts with receiving a continuous stream of NOTAMS from SCDS, published via JMS (Java Message Service). I implemented this piece in Java (a language I'd written in only very occasionally in the past -- many thanks to the FAA team for their jump-start sample code showing how it's done). To get back into my comfort zone (i.e., out of Java into something else), I implemented (in the Java code) a simple mechanism to channel the JMS messages into an AWS SQS (Amazon Web Services Simple Queue Service) queue. The simple Java app runs continuously on the main FlyLogical app server (no additional server resources required).</li>
<li>I then wrote an AWS Lambda function (in C# on .NET Core 2.1) to process each message from the SQS queue into an existing FlyLogical SQL database (hosted on the Microsoft Azure cloud). I made use of the built-in triggering mechanism which enables an AWS SQS queue to automatically invoke the Lambda function whenever a new message arrives.</li>
<li>Finally, I augmented the existing RESTful API services (used across the current FlyLogical apps) to retrieve on-demand the NOTAMS from the SQL database and serve them up to the apps in similar (XML) format as AIDAP. </li>
</ol>
<div>
This approach allowed for maximal code re-use and minimal new code.</div>
<br />
<h2>
NeuralMET Updates</h2>
<br />
Time flies.<br />
<br />
It's been over a year since I last posted about <a href="https://flylogical.azurewebsites.net/WebApps/NMET/Main.aspx" target="_blank">NeuralMET</a>, my experiments in AI/ML as applied to aviation weather forecasting. Here is the <a href="http://flylogical.blogspot.com/2018/05/introducing-neuralmet.html" target="_blank">original post</a> that introduced the subject, and here is the <a href="http://flylogical.blogspot.com/2018/06/neuralmet-updates.html" target="_blank">previous update</a> from last year. At that time, the results (in terms of prediction accuracy) were somewhat marginal.<br />
<br />
Since that time, I have continued to gather METAR data every half-hour, and by now have accumulated well over a year's worth of records for each location in my basket. I have thus re-trained all the ML models on these larger datasets, and the corresponding prediction results are now quite encouraging (<a href="https://flylogical.azurewebsites.net/WebApps/NMET/Main.aspx" target="_blank">see here</a> for the online live forecasts plus historical performance for a selection of locations).<br />
<br />
Interestingly, I obtain the best accuracy using a combination of Deep Neural Nets and Random Forests. The particular combination depends on the variable in question and on the forecast horizon. These best combinations (chosen by trial-and-error) are reflected in the <a href="https://flylogical.azurewebsites.net/WebApps/NMET/Main.aspx" target="_blank">online live forecasts</a>.<br />
<br />
<h2>
AfterBurner</h2>
<br />
<i>[Updated 25 September 2018 with some info on the 747 Art Car at the end]</i><br />
<i>[Updated 20 September 2018 with some more Thoughts at the end]</i><br />
<h2>
Impressions from Burning Man 2018</h2>
<div>
<br /></div>
<div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgsVj9rT_dRgFmPQdpeRDMF_Q6mg5uEL8dB7p8mj-eBwCjt59wedUe8DW6ynGLCHPxAbDOlpNUbkrhxLpz2Hc74zXGTBdkUqKyo5HvgnrV4YUCXkkanKvJXFe_h6JS-tRoEd12FghDt35M/s1600/BM_SURVIVORS_GUIDE.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="828" data-original-width="540" height="640" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgsVj9rT_dRgFmPQdpeRDMF_Q6mg5uEL8dB7p8mj-eBwCjt59wedUe8DW6ynGLCHPxAbDOlpNUbkrhxLpz2Hc74zXGTBdkUqKyo5HvgnrV4YUCXkkanKvJXFe_h6JS-tRoEd12FghDt35M/s640/BM_SURVIVORS_GUIDE.PNG" width="416" /></a></div>
<br /></div>
<div>
I just returned from <a href="https://burningman.org/" target="_blank">Burning Man 2018</a>. Here are my videos, photos, and thoughts... </div>
<h3>
Videos</h3>
<div>
<br /></div>
<div>
<a href="https://www.youtube.com/watch?v=834bvrRxZi4" target="_blank">Cycling across the Playa [unedited]</a><br />
<br />
<a href="https://www.youtube.com/watch?v=HXw4dD7CrpY" target="_blank">Dust Storms on the Playa [unedited]</a><br />
<br /></div>
<a href="https://www.youtube.com/watch?v=zEzcRUfXk4I" target="_blank">Sailing on The Monaco past the giant Polar Bear [unedited]</a>
<br />
<h3>
Photos</h3>
<div>
<a href="https://drive.google.com/open?id=1gcw6KxOHy-04J8XtSDbvIBnIOhyQa6JS" target="_blank">Unedited collection of photos</a></div>
<br />
Some highlights...<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEixRHijLAHhgYhadaTEd3xLR6e-Tq4uMMyvtm40HYJy8Y6k5s-gNQredlVxaJxH3t7LeLppOa_HAw0LfkerfMwjFH4JdUki3RkzuyzL9X9PgapPtNZ6YkAIPpVyC0XFNFU9_Oo2wcUxHmg/s1600/20180901_194321.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1600" data-original-width="900" height="640" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEixRHijLAHhgYhadaTEd3xLR6e-Tq4uMMyvtm40HYJy8Y6k5s-gNQredlVxaJxH3t7LeLppOa_HAw0LfkerfMwjFH4JdUki3RkzuyzL9X9PgapPtNZ6YkAIPpVyC0XFNFU9_Oo2wcUxHmg/s640/20180901_194321.jpg" width="360" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhxWtHvbLnm03o_gQAoLa1uhaDSAA8XwPgveixBikb0d1_gBXO4In1xqyLWYvWp95FatAevFReQqraiGi1b3WEjxR603pbWPJA-8xuz6GNuijD7eCnddM6ufXzeK9RPcdjLVsCgZNS5cek/s1600/IMG_2851.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="536" data-original-width="693" height="308" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhxWtHvbLnm03o_gQAoLa1uhaDSAA8XwPgveixBikb0d1_gBXO4In1xqyLWYvWp95FatAevFReQqraiGi1b3WEjxR603pbWPJA-8xuz6GNuijD7eCnddM6ufXzeK9RPcdjLVsCgZNS5cek/s400/IMG_2851.jpg" width="400" /></a></div>
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhKc1IpQ0OSfcaUEtb7IZTHUVWaKCkwLwTDfneJbsiGfKhc6TBi87m7jekrFz5Uhk4MRsrO9SCE4QHC6B7NnL3w0WlO851YQaAGFpEipevcAqw4kTrA8acMWPXXLHXbFovlvXcP07IfA1s/s1600/20180901_210807.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1600" data-original-width="900" height="640" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhKc1IpQ0OSfcaUEtb7IZTHUVWaKCkwLwTDfneJbsiGfKhc6TBi87m7jekrFz5Uhk4MRsrO9SCE4QHC6B7NnL3w0WlO851YQaAGFpEipevcAqw4kTrA8acMWPXXLHXbFovlvXcP07IfA1s/s640/20180901_210807.jpg" width="360" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjR7woCAl3BAGeXVVi5MU_QjYRZDNluYtPTd3j5O5C3UIX8NDK-hvlTA0C7Vjar9O8rJwHS2-SbIbnn0yETFzgfi2XjzlNYTIbYOy163iYShYBZNtmbHWE9gxaSgYtNndZs5UzsiBpI_U4/s1600/20180901_220310.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1600" data-original-width="900" height="640" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjR7woCAl3BAGeXVVi5MU_QjYRZDNluYtPTd3j5O5C3UIX8NDK-hvlTA0C7Vjar9O8rJwHS2-SbIbnn0yETFzgfi2XjzlNYTIbYOy163iYShYBZNtmbHWE9gxaSgYtNndZs5UzsiBpI_U4/s640/20180901_220310.jpg" width="360" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjwIleAwyVgg6yV7hbmIHx2UreXyxu9NbCJuQeNElc369ahx8RpwNT3Otr094mcMEsNlVNT6mBZf1tkwJycWXEtFDiQrWaMpn5ym68fLREVQxoSsANF-A92bbZ4_XhJMwu2PI4175SRXx4/s1600/20180901_110314.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1600" data-original-width="900" height="640" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjwIleAwyVgg6yV7hbmIHx2UreXyxu9NbCJuQeNElc369ahx8RpwNT3Otr094mcMEsNlVNT6mBZf1tkwJycWXEtFDiQrWaMpn5ym68fLREVQxoSsANF-A92bbZ4_XhJMwu2PI4175SRXx4/s640/20180901_110314.jpg" width="360" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhaulF2XSK6MZQQlkNEIKZHbH_ouC8uUHp6L0odQOq7x7XJS6731cQmqNvtvHj3Tmg6vxoGjiRxiL8QHohqL8Mks_7zqYXhhSMH2a-37spQJF505-HSWMeDMXcdSqg1WlIoueiq5IuGCPI/s1600/20180901_205125.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1600" data-original-width="900" height="640" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhaulF2XSK6MZQQlkNEIKZHbH_ouC8uUHp6L0odQOq7x7XJS6731cQmqNvtvHj3Tmg6vxoGjiRxiL8QHohqL8Mks_7zqYXhhSMH2a-37spQJF505-HSWMeDMXcdSqg1WlIoueiq5IuGCPI/s640/20180901_205125.jpg" width="360" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj1FzerYOWuxQXtuqQrc-0uCiIoDo-xMgzMH4XyPUUq3l27ZSd_Q000SCPADiGdAsDk4cFrkJUrDOketTEUkrV_iQYQ9GYAu2cc_lA6hCP7J14AmXvm15tEiKsDe4RDOrCKhMv0fRQDWls/s1600/20180901_205140.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1600" data-original-width="900" height="640" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj1FzerYOWuxQXtuqQrc-0uCiIoDo-xMgzMH4XyPUUq3l27ZSd_Q000SCPADiGdAsDk4cFrkJUrDOketTEUkrV_iQYQ9GYAu2cc_lA6hCP7J14AmXvm15tEiKsDe4RDOrCKhMv0fRQDWls/s640/20180901_205140.jpg" width="360" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh1A-Vmbz6AJ6WXjBqKWcO-jka9cj4FmN0cUsCiU6qJRdnLB2YYNSjYuVy5y3HecUkGzf7CGSShOLEdqMr7neyBLVdX3y2su4hBtuGYYSkpA04UfT7d9ZpAHkqx7tcjzAMY9chaXCF5jkk/s1600/20180901_062626.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1600" data-original-width="900" height="640" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh1A-Vmbz6AJ6WXjBqKWcO-jka9cj4FmN0cUsCiU6qJRdnLB2YYNSjYuVy5y3HecUkGzf7CGSShOLEdqMr7neyBLVdX3y2su4hBtuGYYSkpA04UfT7d9ZpAHkqx7tcjzAMY9chaXCF5jkk/s640/20180901_062626.jpg" width="360" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj8lxE3yTRWLD9pOJaYS8SC2ZqsbK0wbV9ScHEYg3f_PTHKpoc6N9a-ze2t8fFOV4SXbnfBm2Zn56Sz3NV-ZMBNfXScUti2lTyha_ukh8rDo6DRQU0fql_ATtrW6nk8TcaAXpL3_QWf4dI/s1600/20180831_191737.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1600" data-original-width="900" height="640" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj8lxE3yTRWLD9pOJaYS8SC2ZqsbK0wbV9ScHEYg3f_PTHKpoc6N9a-ze2t8fFOV4SXbnfBm2Zn56Sz3NV-ZMBNfXScUti2lTyha_ukh8rDo6DRQU0fql_ATtrW6nk8TcaAXpL3_QWf4dI/s640/20180831_191737.jpg" width="360" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgkC217FHjKoMzdthWHatlnkmpKHKyUZVj6_hrAaaGdwDp-8OptXqfFckHWhqcR_uHLp_yoAgLpXXCSxxh-evObVSV-FouYZedgHqZU8OTa6_kJl30KR9BJJ89vmyVI6_Du9fx4pPmxDio/s1600/20180831_185641.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1600" data-original-width="900" height="640" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgkC217FHjKoMzdthWHatlnkmpKHKyUZVj6_hrAaaGdwDp-8OptXqfFckHWhqcR_uHLp_yoAgLpXXCSxxh-evObVSV-FouYZedgHqZU8OTa6_kJl30KR9BJJ89vmyVI6_Du9fx4pPmxDio/s640/20180831_185641.jpg" width="360" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi-DhxYAc_LX5BZ-stvj4ujMShGUeTaIKz7VLYyg-iOMP56w6zJNcTf9PY1zXca4FOcYPnqhP69eI9uHjbxjE1v_vlgPPahUU1rjKHIjm_knMc7UgTyr6w9qC3tyCmK6AYsI5nMEeapsUU/s1600/20180901_072128.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1600" data-original-width="900" height="640" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi-DhxYAc_LX5BZ-stvj4ujMShGUeTaIKz7VLYyg-iOMP56w6zJNcTf9PY1zXca4FOcYPnqhP69eI9uHjbxjE1v_vlgPPahUU1rjKHIjm_knMc7UgTyr6w9qC3tyCmK6AYsI5nMEeapsUU/s640/20180901_072128.jpg" width="360" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgy1O3Y7NY16L5s1q9T4eyf13hyphenhyphenyR4w6OShQ-1lzzzObN5Xmli_BUvLdAwBpwL6jaJ7DAxtwnLrtMgRGXy1FtPFaHVggCDP5vDs_pGIu798JJrPW1YmpMCUkNpiWTh3O2DpWdu1uoAjD1c/s1600/20180831_170419.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1600" data-original-width="900" height="640" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgy1O3Y7NY16L5s1q9T4eyf13hyphenhyphenyR4w6OShQ-1lzzzObN5Xmli_BUvLdAwBpwL6jaJ7DAxtwnLrtMgRGXy1FtPFaHVggCDP5vDs_pGIu798JJrPW1YmpMCUkNpiWTh3O2DpWdu1uoAjD1c/s640/20180831_170419.jpg" width="360" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEizTFf9uDhl8mbHdCoVGrxMLgH-3favFthSxZOFLskc0FojDSkXVreO9oh4teLm9r5VdF5cJmfLLjQoccHsN7j_jZe0lfptrdd1M4rf7dl0xb-JIB01TKS3BEgri6fpyl4jPSawM9E-P5E/s1600/20180830_190637.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1600" data-original-width="900" height="640" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEizTFf9uDhl8mbHdCoVGrxMLgH-3favFthSxZOFLskc0FojDSkXVreO9oh4teLm9r5VdF5cJmfLLjQoccHsN7j_jZe0lfptrdd1M4rf7dl0xb-JIB01TKS3BEgri6fpyl4jPSawM9E-P5E/s640/20180830_190637.jpg" width="360" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg4YzO4IY3dMV5CX9VvbEb73gCJ2J7odcBVwvduQ240f38AroeZ2aPEpZLpo54JgEPEIHUnT-lr2lBFqlTWAPTRoiOB29fZ3xbtvY7E5O6x2cGZqlDLUmqElrnohZtQIruIZrWTrY1lXbY/s1600/20180831_171050.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="900" data-original-width="1600" height="225" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg4YzO4IY3dMV5CX9VvbEb73gCJ2J7odcBVwvduQ240f38AroeZ2aPEpZLpo54JgEPEIHUnT-lr2lBFqlTWAPTRoiOB29fZ3xbtvY7E5O6x2cGZqlDLUmqElrnohZtQIruIZrWTrY1lXbY/s400/20180831_171050.jpg" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEictYFXWdWp9LOjwO1H-qQpuWMX9Y2iRSzVzFq7WqdOsxEwsDIIM1AG8MtxDu5pS2LEb2ur5gCWZR19XPg6IjUCo1yovvJlVpw9mwQp6_FhqmhHFA340T1G7bCdQqRdfxXBU8eCsv12wWc/s1600/20180901_153431.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="900" data-original-width="1600" height="225" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEictYFXWdWp9LOjwO1H-qQpuWMX9Y2iRSzVzFq7WqdOsxEwsDIIM1AG8MtxDu5pS2LEb2ur5gCWZR19XPg6IjUCo1yovvJlVpw9mwQp6_FhqmhHFA340T1G7bCdQqRdfxXBU8eCsv12wWc/s400/20180901_153431.jpg" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiPzFOtDLW22BEeKp28_vh6kV2GW5nRj4LhmHswHjachCqy7PBvgMiBvoQbdFzDUrlW7Yw67nkwGvLNuc7O9nc8r8XlXy5wiel1ceUi_VP34cbC9zDaziSbrAGiuc5KlNMNp267H9LRXI0/s1600/20180901_083621.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="900" data-original-width="1600" height="225" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiPzFOtDLW22BEeKp28_vh6kV2GW5nRj4LhmHswHjachCqy7PBvgMiBvoQbdFzDUrlW7Yw67nkwGvLNuc7O9nc8r8XlXy5wiel1ceUi_VP34cbC9zDaziSbrAGiuc5KlNMNp267H9LRXI0/s400/20180901_083621.jpg" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<br />
<br />
<h3>
Thoughts</h3>
It was <b>mental</b>. Here are some impressions and observations.<br />
<br />
<h4>
Math Camp</h4>
As summarised in this <a href="https://www.scientificamerican.com/article/burning-mans-mathematical-underbelly/" target="_blank">nice article</a>, there was a "Math Camp" located at "3.14, P". We enjoyed listening to ad hoc presentations on topics such as "the mathematics of the doubling cube in Backgammon". But the highlight was a presentation by a PhD student from Canada (his area of expertise was probability theory and Monte Carlo analysis) who explained the (counterintuitive) strategy of using a random-number generator to beat the 50:50 odds of guessing a coin-toss!<br />
<br />
Also at the "Math Camp", we mentioned our longstanding surprise at the fact that the perimeter of an ellipse cannot be written in a simple "closed form" (unlike that of a circle: <i>2 times Pi times radius</i>), but instead requires the use of elliptic integrals. Discussion ensued, and we acquiesced: since the elliptic integral can be computed to any desired precision, it is as good as a simple "closed form". The same goes for the error function or the Gaussian distribution, etc. So, all good : )<br />
<h4>
747 Art Car</h4>
<div>
One of the most impressive Art Cars was the Boeing 747 partial fuselage re-purposed as a dance venue: </div>
<div>
<br /></div>
<div>
<a href="https://youtu.be/OyeP_x5SqaM" target="_blank">Video describing the history of the Boeing 747-300 that become the Burning Man Art Car</a></div>
<div>
<a href="https://jalopnik.com/burning-mans-boeing-747-is-stuck-in-the-nevada-desert-1829151685/amp?__twitter_impression=true" target="_blank">...and an article on the aftermath (oops!) </a></div>
<div>
<br /></div>
… to be continued (when I have time)<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEirOPniiiPA0WFctU5TamVOgDBPTTAGds0KkK-uxDZr1_6FZMna5WXSdJmMVLU88Sy_PeQaMjpAYhyS0JXsUyVyZONb59_75teZJr4bLlTn813QohecoIFxcqi7Pbdv6p_r81iX1OUhBFY/s1600/BM_VEHICLE_PASS.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><span style="color: #b00000;"></span><span style="color: #b00000;"></span><img border="0" data-original-height="1094" data-original-width="1600" height="273" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEirOPniiiPA0WFctU5TamVOgDBPTTAGds0KkK-uxDZr1_6FZMna5WXSdJmMVLU88Sy_PeQaMjpAYhyS0JXsUyVyZONb59_75teZJr4bLlTn813QohecoIFxcqi7Pbdv6p_r81iX1OUhBFY/s400/BM_VEHICLE_PASS.jpg" width="400" /></a></div>
<br />
<h2>
NeuralMET Updates</h2>
<i><a href="https://flylogical.blogspot.com/2019/10/neuralmet-updates.html" target="_blank">Updated 6 October 2019 with newer ML models</a></i><br />
<br />
<a href="https://flylogical.azurewebsites.net/WebApps/NMET/Main.aspx" target="_blank">NeuralMET</a> has now been rolled-out to multiple airports: EGAC (Belfast City, UK), EGNH (Blackpool, UK), EGNS (Isle of Man, UK), EGPF (Glasgow, UK), EGPK (Prestwick, UK), KLAX (Los Angeles, US), and KSFO (San Francisco). More are on the way (the historical METAR data-gathering has commenced: forecasts available in a few months time i.e., when sufficient historical data has been captured for training the models). Reasons for choosing these specific airports: (i) EGAC, EGNH, EGNS, EGPF and EGPK are all geographically close to one another, so if I ever wanted to extend the models to look for correlations across locations, these would be a suitable starting point for such analyses; (ii) KLAX and KSFO are in very different global locations than the others (all in the UK), and exhibit very different weather patterns. I thought it would be interesting to compare how the models work across such variations.<br />
<br />
The performance of the forecasts certainly varies across the different locations. A recent set of Error Curves can be found <a href="https://flylogical.azurewebsites.net/WebApps/NMET/docs/publish_forecast_stats_from_online_models.html" target="_blank">here</a> (see <a href="http://flylogical.blogspot.com/2018/05/introducing-neuralmet.html" target="_blank">previous posts</a> for definitions and explanations). Not yet wholly conclusive, but the forecasts are generally improving compared with naïve estimates and random guesses (!)<br />
<br />
<h2>
Introducing NeuralMET</h2>
<i><a href="https://flylogical.blogspot.com/2019/10/neuralmet-updates.html" target="_blank">Updated 6 October 2019 with newer ML models</a></i><br />
<i><br /></i>
<i><a href="http://flylogical.blogspot.com/2018/06/neuralmet-updates.html" target="_blank">Updated 28 June 2018: new locations added</a></i><br />
<i><br /></i>
<br />
<h2>
An <a href="https://flylogical.azurewebsites.net/WebApps/NMET/Main.aspx" target="_blank">Online Weather Forecaster</a> using Artificial Intelligence Deep Learning Neural Networks built in MATLAB</h2>
<br />
In this post, I pick up from where I left off in my <a href="http://flylogical.blogspot.com/2018/05/weather-prediction-refinements-in.html" target="_blank">previous post(s)</a>, where I developed some preliminary Machine Learning (ML) models for weather prediction using MATLAB. Here, I deploy the weather forecasts into production, completing my end-to-end example of using MATLAB for prototyping and deploying Deep Learning neural networks. You will need to read the <a href="http://flylogical.blogspot.com/2018/05/weather-prediction-refinements-in.html" target="_blank">previous post</a> for context, as I do not repeat any of that here.<br />
<br />
<h3>
Production Deployment via the MATLAB Compiler </h3>
<br />
The MATLAB Compiler provides the option to build a self-contained Windows executable which bundles the entire MATLAB code into a single application. However, for future-proofing and code re-usability, I instead chose to build a library in C# / .NET containing all the neural network prediction code (importing the trained networks via a MATLAB ".mat" data file -- 72 networks in all, covering the 8 variables across the 9 lookahead forecast periods), including the SQL database interaction (for reading the METAR observations and persisting the computed forecasts, every half hour). This was (almost) trivially simple to accomplish using the MATLAB Compiler Graphical-User-Interface. Having built the library (in my case, named <b>NeuralMetLib.dll</b>), I then built a very simple Windows Console application in C# which imported the compiled MATLAB library via the following few lines of code:<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">using MathWorks.MATLAB.NET.Arrays; </span><br />
<span style="font-family: "courier new" , "courier" , monospace;">using NeuralMetLib; </span><br />
<span style="font-family: "courier new" , "courier" , monospace;">namespace TestNeuralMetConsole </span><br />
<span style="font-family: "courier new" , "courier" , monospace;">{ </span><br />
<span style="font-family: "courier new" , "courier" , monospace;">class Program </span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> { </span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> static void Main(string[] args) </span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> { </span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> NeuralMet nm = new NeuralMet(); </span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> MWArray resultMW= nm.testNeuralMET(); </span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> } </span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> } </span><br />
}
<br />
<br />
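<div>
For reference, the GUI-driven library build corresponds roughly to the following <b>mcc</b> invocation at the MATLAB prompt (the entry-point name matches the <span style="font-family: "courier new" , "courier" , monospace;">testNeuralMET</span> method consumed above; the exact flags may vary by MATLAB release):</div>
<div>
<br /></div>
<span style="font-family: "courier new" , "courier" , monospace;">% build NeuralMetLib.dll exposing the class NeuralMet</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">mcc -W 'dotnet:NeuralMetLib,NeuralMet' -T link:lib testNeuralMET.m</span><br />
<br />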
I deployed this Windows Console app on an AWS EC2 <b>t2.small</b> instance type and "wrapped it" behind a Windows Scheduler Task to enable the automatic triggering of forecast updates every half hour -- and that essentially completed the deployment of the production code.<br />
<br />
<h3>
Client Web Application</h3>
<div>
<br /></div>
<div>
In order to consume the deployed forecasts and make them available for viewing, I built a simple ASP.NET web app, hosted on Microsoft Azure. You can find it <a href="https://flylogical.azurewebsites.net/WebApps/NMET/Main.aspx" target="_blank">here</a>. This Online Forecaster automatically updates whenever new METAR data is obtained. It also displays the running statistics of the neural network forecasts versus the naïve ("zero-order-hold") forecasts (which can be compared directly with the Revised Error Curves in the <a href="http://flylogical.blogspot.com/2018/05/weather-prediction-refinements-in.html" target="_blank">previous post</a>).</div>
<div>
<br /></div>
<h3>
Conclusions & Next Steps</h3>
<div>
<br />
The MATLAB Compiler proved to be a straightforward and effective means of deploying trained Deep Learning neural networks into production. Comparing this to my <a href="http://flylogical.blogspot.com/2018/01/deploying-tensorflow-object-detector.html" target="_blank">previous experience</a> of deploying neural networks from TensorFlow, I found the MATLAB approach to be <i>considerably</i> easier.<br />
<br /></div>
<div>
As a next step, I'll use the same approach to deploy the Deep Learning training code (i.e., as well as the prediction code). In that way, the re-training of the neural networks can be fully automated, say, every month or so. I'll also add more locations (i.e., in addition to EGNS) once I capture the necessary METAR data.<br />
<br />
I'd also like to try using Containers as opposed to entire Virtual Machine instances, in the interests of optimal use of infrastructure.</div>
<h3>
</h3>
<h3>
</h3>
<h3>
GOTCHAS</h3>
<div>
<br />
Dealing with the following "gotchas" was the most time-consuming aspect of the deployment. These aside, the entire process took only a couple of hours to build, test, and deploy. </div>
<div>
<i></i><i></i><br /></div>
<i>GOTCHA: In order for the Compiled MATLAB code to access the (Azure) SQL database, the following steps had to be carried out on the target machine (in my case, a Windows 2016 Server):</i><br />
<i></i><br />
<i>Installed JRE 8 from <a href="http://www.oracle.com/technetwork/java/javase/downloads/jre8-downloads-2133155.html" target="_blank">here</a></i><br />
<i></i><br />
<i>Installed Microsoft JDBC Driver 6.4 for SQL Server from <a href="https://www.microsoft.com/en-us/download/details.aspx?id=56615" target="_blank">here</a> (where I arbitrarily selected
'<b>C:\Program Files\JDBC6</b>' for the installation location) </i><br />
<i><br /></i>
<i>Each time I accessed the SQL database from within the MATLAB source-code, I configured the database connection within MATLAB as follows:</i><br />
<span style="font-family: "courier new" , "courier" , monospace;"></span><i></i><br />
<span style="font-family: "courier new" , "courier" , monospace;">javaaddpath('<b>C:\Program Files\JDBC6</b>\sqljdbc_6.4\enu\mssql-jdbc-6.4.0.jre8.jar','-end'); </span><br />
<span style="font-family: "courier new" , "courier" , monospace;"></span><br />
<span style="font-family: "courier new" , "courier" , monospace;">conn=database('DATABASE_SERVER_NAME',...</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">'ACCOUNT_NAME','PASSWORD',...</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">'com.microsoft.sqlserver.jdbc.SQLServerDriver',...</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">'jdbc:sqlserver://k00cpfylic.database.windows.net:1433;</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> database=DATABASE_SERVER_NAME'); </span><br />
<span style="font-family: "courier new" , "courier" , monospace;"></span><br />
<i>where DATABASE_SERVER_NAME, ACCOUNT_NAME, and PASSWORD are (obviously!) for your own database rather than mine. Also, make sure the SQL Server database firewall is configured to allow connections from the target machine (straightforward in Azure when using a fixed IP address on the target machine). When compiled, the deployed application successfully connects to the SQL database.</i><br />
<i><br /><br />DOUBLE GOTCHA: When compiling the MATLAB code via the MATLAB Compiler, there can be situations where multiple functions in MATLAB have the same name but different usage depending on the given Toolbox. In such cases you need to "steer" the Compiler by giving it some direction at compile-time. Specifically, when compiling functions that use the 'predict' function applied to Deep Learning neural nets (of type <b>SeriesNetwork</b>), you need to add the following pragma line to the function which contains the 'predict' function:</i><br />
<i></i><br />
<span style="font-family: "courier new" , "courier" , monospace;"> %#function SeriesNetwork %declares which "predict" function to use </span><br />
<span style="font-family: "courier new" , "courier" , monospace;"></span><i></i><br />
<i>Otherwise, the code will compile without error, but will give the following runtime error: </i><br />
<i><br /></i>
<span style="font-family: "courier new" , "courier" , monospace;">Undefined function 'predict' for input arguments of type 'double'</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span><i>
GOTCHA: When building a .NET application which consumes the compiled MATLAB dll, make sure the project settings in Visual Studio are configured to build specifically for the <b>64bit CPU</b> target (not <b>Any CPU</b>), otherwise an error similar to the following will arise at runtime:</i><br />
<i></i><br />
<span style="font-family: "courier new" , "courier" , monospace;">Unhandled Exception: System.TypeInitializationException: The type initializer for 'NeuralMetLib.NeuralMet' threw an exception. ---> System.TypeInitializationExc
eption: The type initializer for 'MathWorks.MATLAB.NET.Utility.MWMCR' threw an exception. ---> System.TypeInitializationException: The type initializer for 'Mat
hWorks.MATLAB.NET.Arrays.MWArray' threw an exception. ---> System.BadImageFormat
Exception: An attempt was made to load a program with an incorrect format. (Exception from HRESULT: 0x8007000B)
at MathWorks.MATLAB.NET.Arrays.MWArray.mclmcrInitialize2(Int32 primaryMode)
at MathWorks.MATLAB.NET.Arrays.MWArray..cctor()
--- End of inner exception stack trace ---
at MathWorks.MATLAB.NET.Utility.MWMCR..cctor()
--- End of inner exception stack trace ---
at NeuralMetLib.NeuralMet..cctor()
--- End of inner exception stack trace ---
at NeuralMetLib.NeuralMet..ctor()
at TestNeuralMetConsole.Program.Main(String[] args) in E:\FlyLogicalSoftware\
VS2017\WinDotnet\NeuralMet\TestNeuralMetConsole\Program.cs:line</span><br />
<br />
<h2>
Weather Prediction Refinements in MATLAB Machine Learning Models</h2>
<i>Update 30 May 2018: the models presented here have now been deployed online, as described in <a href="http://flylogical.blogspot.com/2018/05/introducing-neuralmet.html" target="_blank">the next post</a>.</i><br />
<br />
In this post, I pick up from where I left off in my <a href="http://flylogical.blogspot.com/2018/05/weather-prediction-with-machine.html" target="_blank">previous post</a>, where I developed some preliminary Machine Learning (ML) models for weather prediction using MATLAB, and explore some further refinements to those models. You will need to read the <a href="http://flylogical.blogspot.com/2018/05/weather-prediction-with-machine.html" target="_blank">previous post</a> for context, as I do not repeat any of that here.<br />
<br />
<h2>
Revised Error Curves</h2>
<div>
<br /></div>
<div>
All the results of current refinements are presented in the following set of Error Curves which are revised versions of those presented in the <a href="http://flylogical.blogspot.com/2018/05/weather-prediction-with-machine.html" target="_blank">previous post</a>. Each of the updates to the curves is described later in this post.</div>
<div>
<br /></div>
<div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhXvz1gG-pVXshTMtVY5oRX-dssV506yQLwGP2dBdsqDiYr1DFIQqf9rYuGNSrg3Tb1Zog_0fDL0u37Hs_gMI8EDpzQjkYm5DCNvTLRyapt9Kg7i_xh0D-YmiYxn4rLSDh-VqdDiMdsoMo/s1600/NEW_ERROR_1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="420" data-original-width="560" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhXvz1gG-pVXshTMtVY5oRX-dssV506yQLwGP2dBdsqDiYr1DFIQqf9rYuGNSrg3Tb1Zog_0fDL0u37Hs_gMI8EDpzQjkYm5DCNvTLRyapt9Kg7i_xh0D-YmiYxn4rLSDh-VqdDiMdsoMo/s1600/NEW_ERROR_1.png" /></a></div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjoCmaVg66JyuEVSlCQhH_fHxoqepZI8l7Tc3X4IXBO_PNxpscCgvf1q3H8bnYie4SR43c5EjI4iML-5H5WcooZpMMVIuc8muvUnOYE05k1WM2IGWYhPgnZ8CA8-jdL7V0pW69STSJxMpo/s1600/NEW_ERROR_2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="420" data-original-width="560" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjoCmaVg66JyuEVSlCQhH_fHxoqepZI8l7Tc3X4IXBO_PNxpscCgvf1q3H8bnYie4SR43c5EjI4iML-5H5WcooZpMMVIuc8muvUnOYE05k1WM2IGWYhPgnZ8CA8-jdL7V0pW69STSJxMpo/s1600/NEW_ERROR_2.png" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjgixGYu403IYbrSj4KTQOO-YAEhEhxTjy2hy9Eja1J95kGS8W3Hd5deY1YKUiOz7TWz0XLfDXcnm0QfnW91U02qnv-3hrG0T3JvGz7hF7PM9AIgTwly13AFTRn8AcPqFuJKw4ikW3yAZ0/s1600/NEW_ERROR_3.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="420" data-original-width="560" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjgixGYu403IYbrSj4KTQOO-YAEhEhxTjy2hy9Eja1J95kGS8W3Hd5deY1YKUiOz7TWz0XLfDXcnm0QfnW91U02qnv-3hrG0T3JvGz7hF7PM9AIgTwly13AFTRn8AcPqFuJKw4ikW3yAZ0/s1600/NEW_ERROR_3.png" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgIcdsn75JKwiKRmyXnV4G8Y0TyxltRFrPvmpScFIL-fa9c2LIFIzbE7KS2O_lubFliki7lBQcoCTUMtL7Sg9TB4OnJGMlwQunG4yv4cJ9S80ilTZaBuU1MjIcbNGkrD1pnhT3k06411N8/s1600/NEW_ERROR_4.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="420" data-original-width="560" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgIcdsn75JKwiKRmyXnV4G8Y0TyxltRFrPvmpScFIL-fa9c2LIFIzbE7KS2O_lubFliki7lBQcoCTUMtL7Sg9TB4OnJGMlwQunG4yv4cJ9S80ilTZaBuU1MjIcbNGkrD1pnhT3k06411N8/s1600/NEW_ERROR_4.png" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgpSz0Z7ZMnN957B6imp5O1nWKjNjb2ip7oAaHtg2QmU-m3KXinVpo18RtfrTw1j91MsgkK2GzDckEz3uzn7v3aBaYDZz_IATNMtxFY66hTl1PyL3oikVv1-EzqeGNn9eVoSVeNMRlJq_Q/s1600/NEW_ERROR_5.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="420" data-original-width="560" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgpSz0Z7ZMnN957B6imp5O1nWKjNjb2ip7oAaHtg2QmU-m3KXinVpo18RtfrTw1j91MsgkK2GzDckEz3uzn7v3aBaYDZz_IATNMtxFY66hTl1PyL3oikVv1-EzqeGNn9eVoSVeNMRlJq_Q/s1600/NEW_ERROR_5.png" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEigRVD-YHnb-DKPqk1yugkwQrg3TYTYiUtGd3ug-J05j0UjEFTe9o3F28zHSrzXcKfEXQhPD4Y9Rn_QC6nYFHBlosXb8xitq6ENuG_5Ux9OBv6wQsfu9efbjICcTaE_EQf1139I3ZpkWjU/s1600/NEW_ERROR_6.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="420" data-original-width="560" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEigRVD-YHnb-DKPqk1yugkwQrg3TYTYiUtGd3ug-J05j0UjEFTe9o3F28zHSrzXcKfEXQhPD4Y9Rn_QC6nYFHBlosXb8xitq6ENuG_5Ux9OBv6wQsfu9efbjICcTaE_EQf1139I3ZpkWjU/s1600/NEW_ERROR_6.png" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEimyfI9s9_1GiOokXhit-nsqkRAlxSw0B4Gtri_9Wkn7M1McS4XN59fTxFisX96G81HI5sAMzq4UgMhcbi2ltLYbWQ0hyphenhyphenTpPWd_KAmZB5-_NQf9XTQNGDanBi_-vARwowiKkvyXHUNXUQ8/s1600/NEW_ERROR_7.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="420" data-original-width="560" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEimyfI9s9_1GiOokXhit-nsqkRAlxSw0B4Gtri_9Wkn7M1McS4XN59fTxFisX96G81HI5sAMzq4UgMhcbi2ltLYbWQ0hyphenhyphenTpPWd_KAmZB5-_NQf9XTQNGDanBi_-vARwowiKkvyXHUNXUQ8/s1600/NEW_ERROR_7.png" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjBGHCHZjEnnX917Bsk2WarWAfBEVYDZ-D7A87P8g4PrUlSSdIurAaaRsPKrlhpFDcN7oAS4ITihl_2pitfhlBIfpahnLlXMGJzUnI79QZ1Ohe8h3xa1VequbtTmyE68G5guer3mBP_qdg/s1600/NEW_ERROR_8.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="420" data-original-width="560" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjBGHCHZjEnnX917Bsk2WarWAfBEVYDZ-D7A87P8g4PrUlSSdIurAaaRsPKrlhpFDcN7oAS4ITihl_2pitfhlBIfpahnLlXMGJzUnI79QZ1Ohe8h3xa1VequbtTmyE68G5guer3mBP_qdg/s1600/NEW_ERROR_8.png" /></a></div>
<h2>
The Previous Models</h2>
<div>
<br /></div>
<div>
The results from the previous models are reproduced in the Revised Error Curves exactly as they were before, but with slight label changes summarised as follows:</div>
<div>
<br /></div>
<ul>
<li>The solid blue curves are the "LSTM alone" curves from before. Now labelled "LSTM", the "(dash, Multivar)" can be ignored for now.</li>
<li>The solid red curves are the "Multi-regression plus LSTM" curves from before. Now labelled "Multi-reg plus LSTM", the "(+ single period)" can be ignored for now.</li>
<li>The solid orange curves are the "Multi-regression alone" curves from before. Now labelled "Multi-reg", the "(+ single period)" can be ignored for now.</li>
<li>The black dotted line labelled "sdev obs" is the same as before </li>
</ul>
<br />
<h2>
The New Models </h2>
<h3>
Zero Order Hold</h3>
<div>
The first of the new models, and the most naïve of all, is represented by the solid black curves labelled "ZOH" (for Zero Order Hold): <i>use the last known observed values as the forecast values for the future</i>. It is about the simplest possible method of forecasting. As the curves show, it works well in the leftmost portions of the graphs, for short forecast periods (half an hour, one hour, etc.), but unsurprisingly the performance drops off (i.e., the curves rise steeply) as the forecast period increases. That said, the (frankly disappointing) realisation is that <i>this naïve model actually performs better than all the previously-presented models up to approximately 5 hours or so</i>, which probably says more about the poor quality of those previous models. Note: the "dips" in the ZOH error at 24, 48, and 72 hours ahead correspond to the fact that the local time of the forecast is <i>exactly</i> the same as the local time of the observation used in the ZOH, so diurnal variation is effectively nulled, leading to a stronger correlation.</div>
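<div>
<br /></div>
<div>
To make this concrete, here is a minimal MATLAB sketch of how a ZOH error curve can be computed (the variable names are illustrative, not from the production code):</div>
<div>
<br /></div>
<span style="font-family: "courier new" , "courier" , monospace;">% ZOH: the forecast at every look-ahead is simply the last observation.</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">% x: column vector of half-hourly observations of one variable;</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">% k: look-ahead in half-hour steps (k=144 is 3 days).</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">zohError = @(x,k) sqrt(mean((x(1+k:end)-x(1:end-k)).^2)); % RMS error</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span>
<span style="font-family: "courier new" , "courier" , monospace;">ks = 1:144; % all half-hourly look-aheads up to 3 days</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">err = arrayfun(@(k) zohError(temperature,k), ks);</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">plot(ks/2, err); xlabel('Forecast period (hours)'); ylabel('RMS error');</span><br />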
<div>
<br /></div>
<h3>
Multivariable LSTM</h3>
<div>
The second of the new models represented by the blue dot-dash curves labelled "(dash, Multivar)" (alongside the previous "LSTM" labels) is a variation of the previous LSTM modelling. Recalling from the <a href="http://flylogical.blogspot.com/2018/05/weather-prediction-with-machine.html" target="_blank">previous post</a>, when building the LSTM models, I opted to treat each variable individually, and trained a single-variable LSTM model on the single time-history of observations for each variable. My (albeit nothing more than instinctive) reasoning was that it could be expecting "too much from the model" to try and fit all 8 variables together. However, I did remark back then that it might be worth trying the multi-variable LSTM i.e., by fitting for 8 variables simultaneously in a single LSTM neural network, just in case there were useful internal correlations that might help. MATLAB makes this extension to multiple variables straightforward, and the results are now in. Unfortunately, with the exception of a few sporadic "dips" (i.e., regions of lower errors), the multivariable LSTM model generally under-performs the previous individual LSTM models. Moreover, it almost exclusively under-performs the naïve ZOH models for every variable across almost the entirety of the forecast periods (validating my original instincts).</div>
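<div>
<br /></div>
<div>
For reference, the multivariable variant amounts to stacking the 8 variables as rows of the training sequences. A sketch of the setup (the layer sizes here are illustrative, not the tuned values):</div>
<div>
<br /></div>
<span style="font-family: "courier new" , "courier" , monospace;">numFeatures = 8; % the 8 METAR variables, one per row</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">layers = [ ...</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">    sequenceInputLayer(numFeatures)</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">    lstmLayer(200) % illustrative hidden-unit count</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">    fullyConnectedLayer(numFeatures) % one-step-ahead output for all 8</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">    regressionLayer];</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">options = trainingOptions('adam','MaxEpochs',250,...</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">    'ExecutionEnvironment','gpu'); % p2.xlarge GPU instance</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">% XTrain is 8-by-T; YTrain is XTrain shifted one step into the future</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">net = trainNetwork(XTrain,YTrain,layers,options);</span><br />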
<div>
<br /></div>
<h3>
Multivariable Regressions for Individual Forecast Periods</h3>
<div>
Recalling from the <a href="http://flylogical.blogspot.com/2018/05/weather-prediction-with-machine.html" target="_blank">previous post</a>, when fitting the multivariable regression models (the solid red and orange curves), I opted to fit for all forecast periods simultaneously (mostly to minimise the number of models required). However, as noted back then, the performance of the resulting models was relatively poor at small forecast periods, where the regressions were expected to perform better. I reasoned that the single error being minimised via the back propagation training algorithm was hampering the short forecast periods by being unable to get below the value for the long forecast periods (where the error is always going to be larger). So, my suggested proposal as a future enhancement was to fit a regression for each single forecast period of interest. The results of those regressions are now in (specifically for 0.5, 1, 3, 6, 12, 24, 36, 48, and 72-hour forecast periods) and are represented by the red and orange 'plus signs' in the error curves labelled "(+ single period)" alongside the corresponding "Multi-reg" labels. As described previously, the red (solid and now 'plus signs') correspond to the case where the outputs from the (solid blue) LSTM estimates are used as further input (regressors) in the regressions; the orange (solid and now 'plus signs') correspond to the case where the outputs from the (solid blue) LSTM estimates are <i>not</i> used as further input (regressors) in the regressions. As can be observed from the Revised Error Curves, it turns out that the <i>Multivariable Regressions without LSTM, computed individually for each forecast period of interest</i>, represented by the orange 'plus signs', are generally the best of all models across almost the entire range of forecast periods except for the low periods where ZOH prevails. There is one notable exception, namely Sea-Level Pressure, where ZOH prevails exclusively, indicating that this particular variable is very difficult to predict from previous time-histories.</div>
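<div>
<br /></div>
<div>
One way to set up such a per-period regression in MATLAB is via <span style="font-family: "courier new" , "courier" , monospace;">fitnet</span>, with one shallow network per variable and forecast period (the names and sizes here are illustrative, not the production values):</div>
<div>
<br /></div>
<span style="font-family: "courier new" , "courier" , monospace;">k = 12; % e.g., a 6-hour look-ahead (in half-hour steps)</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">net = fitnet(20); % illustrative hidden-layer size</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">% Xreg: nInputs-by-nSamples matrix of recent observations of all variables;</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">% yreg: 1-by-nSamples target, the chosen variable k steps ahead</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">net = train(net,Xreg,yreg);</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">forecast = net(XregLatest); % point forecast from the latest observations</span><br />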
<br />
<h2>
Revised Recipe for an Online Weather Forecaster</h2>
<div>
<br /></div>
<div>
Given the above results, the recipe for an online weather predictor for a given location is simplified, as follows:</div>
<div>
<br /></div>
<ul>
<li>Capture and persist the METAR briefings every half-hour for the location of interest.</li>
<li>Every few months or so, use the above-mentioned METAR time histories to re-train a set of neural network multivariable input regression models, with individual target responses per variable, and one model per forecast period of interest.</li>
<li>Every half hour, update the forecasts using the above-mentioned trained regression models with the most recent set of METAR observations as inputs (and desired forecasts as outputs). In those specific cases (combinations of variable and forecast period) where ZOH prevails, use that instead of the neural net (see the sketch after this list).</li>
</ul>
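<div>
<br /></div>
<div>
A schematic of that half-hourly update step (all names here are illustrative):</div>
<div>
<br /></div>
<span style="font-family: "courier new" , "courier" , monospace;">% pick ZOH or the trained net per (variable, period) combination,</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">% according to which won on the historical error curves</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">for p = 1:numel(forecastPeriods)</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">    for v = 1:numel(variables)</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">        if useZOH(v,p) % precomputed flag</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">            forecasts(v,p) = latestObs(v);</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">        else</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">            forecasts(v,p) = nets{v,p}(latestInputs);</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">        end</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">    end</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">end</span><br />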
<div>
Over time, one can expect that the predictive capabilities of the Deep Learning networks used in the above-mentioned regressions should improve as the training datasets grow. Moreover, once sufficient data has been gathered to span at least a year, the day-of-year variable (currently omitted, see <a href="http://flylogical.blogspot.com/2018/05/weather-prediction-with-machine.html" target="_blank">previous post</a> for the reason why) can be included as a further input (regressor). This will improve the performance by capturing the seasonal effects of the weather (i.e., in addition to the intra-day effects already captured).</div>
<h2>
Production Deployment</h2>
<div>
This is the next important step and will be explored in a future post.</div>
<div>
<br /></div>
<h2>
Weather Prediction with Machine Learning in MATLAB</h2>
<i><b>UPDATE 30 May 2018: See <a href="http://flylogical.blogspot.com/2018/05/introducing-neuralmet.html" target="_blank">latest post</a> for online deployment of the models presented here</b></i><br />
<br />
<i><b>UPDATE 25 May 2018: See <a href="http://flylogical.blogspot.com/2018/05/weather-prediction-refinements-in.html" target="_blank">following post</a> for refinements to the models presented here</b></i><br />
<br />
This is the next in the series of my Artificial Intelligence (AI) / Machine Learning (ML) posts. The <a href="http://flylogical.blogspot.com/2018/01/object-detection-with-tensorflow-simple.html" target="_blank">first</a> covered the use of TensorFlow for Object Detection. The <a href="http://flylogical.blogspot.com/2018/01/deploying-tensorflow-object-detector.html" target="_blank">second</a> described how to deploy the trained TensorFlow model on the Google Cloud ML Engine.<br />
<br />
In this third post, I focus entirely on <a href="https://www.mathworks.com/products/matlab.html" target="_blank">MATLAB</a> in order to explore its machine learning capabilities, specifically for prototyping ML models. The topic of deploying the trained models for production is touched upon but not expanded here.<br />
<br />
As in my previous posts, I make no apologies for any technical decisions which may be considered sub-optimal by those who know better. This was an AI/ML learning exercise for me, first and foremost.<br />
<br />
<h2>
The Goal</h2>
Devise an ML algorithm to forecast the (aviation) weather, in half-hour increments, up to three days into the future, using historical time series of weather data. I was motivated to tackle this specific problem because (i) high-quality aviation weather data is readily available -- so the task of data preparation and cleansing is minimised, enabling the focus to be primarily on the ML algorithms; (ii) I've lamented the loss of the (very useful -- in my opinion) 3-day aviation forecast from the UK MET Office website ever since they removed it a couple of years ago.<br />
<br />
<h2>
The Dataset</h2>
<div>
For this initial exploration, I used aviation weather data (<a href="https://en.wikipedia.org/wiki/METAR" target="_blank">METAR</a>s) for Ronaldsway Airport (EGNS) on the Isle of Man since (i) I am based here and fly my Scottish Aviation Bulldog from here; (ii) being located in Northern Europe, the weather is varied and changeable, so the models (hopefully) have interesting features to be detected during training, thereby exercising the models' forecasting capabilities more than if the weather was uniform and more easily predictable. The modelling techniques presented for this single location can of course be extended to any other location for which analogous data exists (i.e., pretty-much anywhere).</div>
<div>
<br /></div>
<div>
The underlying weather data was obtained from <a href="https://www.aviationweather.gov/" target="_blank">US NOAA/NWS </a> (as utilised in <a href="http://flylogical.blogspot.com/p/mobile-apps.html#justmet" target="_blank">JustMET</a>, <a href="http://flylogical.blogspot.com/p/mobile-apps.html#inavcalc" target="_blank">iNavCalc</a>, and <a href="http://flylogical.blogspot.com/p/mobile-apps.html#reallysimplemovingmap" target="_blank">ReallySimpleMovingMap</a>). The training set comprised METAR data captured every half hour for EGNS over the 3.5 month period from 30 December 2017 through 13 April 2018. Each half-hourly recorded METAR was persisted (for long term storage e.g., for future analysis) to an <a href="https://aws.amazon.com/dynamodb/" target="_blank">Amazon DynamoDB</a> database, as well as to a <a href="https://azure.microsoft.com/en-gb/services/sql-database/" target="_blank">Microsoft Azure SQL</a> database (for temporary storage and staging). The triggering to capture each successive half-hourly METAR via web-service calls to NOAA was implemented using a <a href="https://azure.microsoft.com/en-us/services/scheduler/" target="_blank">Microsoft Azure Scheduler</a>.</div>
<div>
<br /></div>
<h2>
The Toolkit</h2>
<ul>
<li>For the data-wrangling and ML modelling, I used <a href="https://uk.mathworks.com/products/matlab.html" target="_blank">MATLAB R2018a Prerelease</a><b> </b>with the following add-on toolboxes: <a href="https://uk.mathworks.com/products/statistics.html" target="_blank">Statistics and Machine Learning Toolbox</a>, <a href="https://uk.mathworks.com/products/parallel-computing.html" target="_blank">Parallel Computing Toolbox</a> (for access to GPUs), <a href="https://uk.mathworks.com/products/neural-network.html" target="_blank">Neural Network Toolbox</a>, and the <a href="https://uk.mathworks.com/products/database.html" target="_blank">Database Toolbox</a> (to retrieve the weather data from the aforementioned Microsoft Azure SQL database).</li>
<li>For running MATLAB, I used an<a href="https://aws.amazon.com/ec2/" target="_blank"> AWS EC2</a> virtual machine (VM). For preparatory work, I instantiated the VM on the CPU-based <b>c4.large</b> <a href="https://aws.amazon.com/ec2/instance-types/" target="_blank">instance type</a>, and for training the neural networks, I used the GPU-equipped <b>p2.xlarge</b> instance type.</li>
</ul>
<div>
<br /></div>
<h2>
Solution Path</h2>
<h3>
Pre-processing the Raw Data</h3>
The first task was to pre-process the raw data, in this case, primarily to correct for data gaps since there was (inevitably) some (unforeseen) down-time over the weeks and months in the automated METAR capture process.<br />
<br />
To start out, the data was retrieved (into MATLAB) from the Azure SQL database using the MATLAB Database Toolbox functionality. <i>GOTCHA: the graphical interface bundled with the MATLAB Database Toolbox is rather limited and cannot handle stored procedures. Instead, the command-line functions must be used when retrieving data from databases via stored-procedures.</i><br />
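<br />
<i>For example, a command-line retrieval via a stored procedure looks roughly like this (the procedure name here is hypothetical):</i><br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">curs = exec(conn,'EXEC dbo.GetMetarHistory'); % run the stored procedure</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">curs = fetch(curs); % pull the result set into MATLAB</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">metars = curs.Data; % cell array / table of METAR records</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">close(curs);</span><br />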
<br />
Next, the retrieved data was reformatted into a MATLAB <a href="https://uk.mathworks.com/help/matlab/timetables.html" target="_blank">Timetable. </a>This proved to be a very convenient format for manipulating and preparing the data. It is an extension of the MATLAB <a href="https://uk.mathworks.com/help/matlab/matlab_prog/create-a-table.html" target="_blank">table </a>format, designed specifically to handle time-stamped data, and therefore ideal for handling the multivariate METAR time-series. Note: the MATLAB <a href="https://uk.mathworks.com/help/matlab/matlab_prog/create-a-table.html" target="_blank">table </a>format is a relatively recent innovation, and seems to be MATLAB's answer to the <i>DataFrame</i> object from the powerful and popular <a href="http://pandas.pydata.org/" target="_blank">pandas</a> library available for Python.<br />
<br />
The set of 8 variables collected for analysis and forecasting is summarised below (for detailed definitions, see <a href="https://web.archive.org/web/19990420051036/http://www.ofcm.gov/fmh-1/fmh1.htm" target="_blank">here</a>). The variables pertain to observations made on the ground at the location of the given weather station (airport), distributed via the METAR reports. I have kept the units as per the METARs (rather than converting to S.I.). Each observation (i.e., at each sample time) contains the following set of data:
<br />
<ul>
<li>Temperature (units: degrees Celsius)</li>
<li>Dewpoint (units: degrees Celsius)</li>
<li>Cloudbase (units: feet)</li>
<li>Cloudcover (units: oktas, dimensionless, 0 implies clear sky, 8 implies overcast), converted to a numerical value from the raw skycover categorical variable from the METAR (i.e., "CAVOK" -- 0 oktas; "FEW" -- 1.5 oktas; "SCT" -- 3.5 oktas; "BKN" -- 5.5 oktas; "OVC" -- 8 oktas). Note: whenever "CAVOK" was specified, this was taken to set the Cloudcover value to zero and the Cloudbase value to 5000 feet -- even if skies were clear all the way up, since the METAR vertical extent formally ends at 5000 feet above the airport (typically). Making this assumption (hopefully) means erring on the safe side, even if it tampers -- in a sense -- with the natural data. </li>
<li>Surface Pressure (units: Hectopascals)</li>
<li>Visibility (units: miles)</li>
<li>Wind Speed (units: knots)</li>
<li>Wind Direction (units: degrees from True North)</li>
</ul>
Additionally, the date and time of each observation is given (in UTC), from which the local time-of-day can be determined, given the known longitude, via the MATLAB expression:
<br />
<div style="-webkit-text-stroke-width: 0px; background-color: transparent; color: black; font-family: Times New Roman; font-size: 16px; font-style: normal; font-variant: normal; font-weight: 400; letter-spacing: normal; orphans: 2; text-align: left; text-decoration: none; text-indent: 0px; text-transform: none; white-space: normal; word-spacing: 0px;">
<br /></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">timeofday=mod(timeutc+longitude*12/180,24)</span>;</div>
<div>
Note: as well as local-time-of-day, it would also be worthwhile to include the day-of-year in the analyses (since weather is known to be seasonal). This was not done for now since the entire data set spans only 3.5 months rather than at least one complete year. This means that since the validation set is taken from the 30% tail-end (see later), and the training set is the 70% taken from the start, up to the beginning of the tail-end, there will be no common values for day-of-year in both the training and validation sets, so it is not sensible to include day-of-year for now. However, if/when the collected data set spans sufficient time (1.3 years for the 30:70 split) such that the training and validation sets both contain common values for day-of-year, then it should be included in a future refinement.</div>
<br />
<h3>
Filling the Data Gaps</h3>
The MATLAB command for creating the aforementioned <i>timetable</i> structure from individual vectors is as follows:
<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">TT=timetable(datetimeUTC,temperature,</span><span style="font-family: "courier new" , "courier" , monospace;">dewpoint,cloudbase,</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">cloudcover,visibility,sealevelpressure,windspeed,...</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">winddirection,timeofday);
</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span>
Since all the variables in the <i>timetable </i>are continuous (rather than categorical), it is simple in MATLAB to fill in missing data by interpolation as follows:<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">% Define the complete time vector, every 30 minutes</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">newTimes = [datetimeUTC(1):minutes(30):datetimeUTC(end)];</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><br />% Use interpolation for numerical values</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">TT.Properties.VariableContinuity = {'continuous','continuous','continuous','continuous',...</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">'continuous','continuous',</span><span style="font-family: "courier new" , "courier" , monospace;">'continuous','continuous',...</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">'continuous'};</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span>
<span style="font-family: "courier new" , "courier" , monospace;">% Perform the data filling</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">TT1 = retime(TT,newTimes);</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span>
<br />
<h3>
METAR Data Time Series Plots</h3>
These cleaned METAR time series for EGNS are plotted in the graphs below and serve as the source of training and validation data for the upcoming ML models. Each time series is 4,974 data points in length (corresponding to the 3.5 month historical record, sampled each half hour).<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhBmnYfNd4TT5GQt8JhSy0My7FPt2slrPXlx09dzP86QeMAWaonYNMEuZwdt0D81X5FVO0lYLf9ydjM5F3ukxYrJ4CPm5Qg_u5ON-4I1ff0NAbib6618KgVdsLevVV_1t7790izVxehZEk/s1600/METAR_1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="420" data-original-width="560" height="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhBmnYfNd4TT5GQt8JhSy0My7FPt2slrPXlx09dzP86QeMAWaonYNMEuZwdt0D81X5FVO0lYLf9ydjM5F3ukxYrJ4CPm5Qg_u5ON-4I1ff0NAbib6618KgVdsLevVV_1t7790izVxehZEk/s400/METAR_1.png" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgpBw9QEOtsCUUCeMfsiKsLSCYz-bpyU_fwXCXHNt2pdJ-BSyg6esl_7zKz-mVRK4FBmogmjXaGtEvKs5d-uArvJ-d0cdp_HZFnb0ti3L091mL8YJ_ILynMkbgRKOvu50pNVUnaU-UNVoU/s1600/METAR_2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="420" data-original-width="560" height="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgpBw9QEOtsCUUCeMfsiKsLSCYz-bpyU_fwXCXHNt2pdJ-BSyg6esl_7zKz-mVRK4FBmogmjXaGtEvKs5d-uArvJ-d0cdp_HZFnb0ti3L091mL8YJ_ILynMkbgRKOvu50pNVUnaU-UNVoU/s400/METAR_2.png" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgxNz9uIOuH9WRwKFmByBVTYJIotdJFjGb-JJY37fYN-wTREL6uThqcQbVKGsBgp0QhyphenhyphenjKG8evLiT8nzG5NxoAple0muE4Vq4uoZh8oPhOxDsCn0iBUzUCi329R-7VZnUyZEt6q2BVTWfg/s1600/METAR_3.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="420" data-original-width="560" height="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgxNz9uIOuH9WRwKFmByBVTYJIotdJFjGb-JJY37fYN-wTREL6uThqcQbVKGsBgp0QhyphenhyphenjKG8evLiT8nzG5NxoAple0muE4Vq4uoZh8oPhOxDsCn0iBUzUCi329R-7VZnUyZEt6q2BVTWfg/s400/METAR_3.png" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhgmeN7kIZxubSC8VSvxelNjTSzV19deBZQQ8Amu1lWHwNsYI7CPq-4QSYqCrMWSy8oab2tJUPMjuEhHDzWOUvOYgTMEJbolZYZrckLx-yliPpSRCPQoJeThCmrDPn4Ix9QM0wUHT4Dc9Q/s1600/METAR_4.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="420" data-original-width="560" height="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhgmeN7kIZxubSC8VSvxelNjTSzV19deBZQQ8Amu1lWHwNsYI7CPq-4QSYqCrMWSy8oab2tJUPMjuEhHDzWOUvOYgTMEJbolZYZrckLx-yliPpSRCPQoJeThCmrDPn4Ix9QM0wUHT4Dc9Q/s400/METAR_4.png" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<br />
<br />
<h3>
Modelling Phase 1: LSTM models for each variable</h3>
Since it is generally known that long short-term memory (LSTM) neural networks are well-suited to the task of building regression models for time series data, it seemed the natural starting point for these investigations, not least since <a href="https://uk.mathworks.com/help/nnet/ug/long-short-term-memory-networks.html" target="_blank">LSTM layers are now available within MATLAB</a>.<br />
<br />
A separate LSTM model was therefore built for each of the METAR variables by following the MATLAB example presented <a href="https://uk.mathworks.com/help/nnet/ug/long-short-term-memory-networks.html" target="_blank">here</a>. Note: it is possible to build a single LSTM model for multiple time series taken together, but I felt that would be "asking too much" of the model, so I opted for a separate model for each (single-variable) time series. It may be worthwhile revisiting this decision in a future attempt at refining the modelling.<br />
<br />
When building the models for the METAR data, the various hyper-parameters available for "tweaking" the LSTM model (such as the number of neurons per layer, the number of layers, and the number of samples back in time used to fit the model for looking forward in time) unsurprisingly needed to be changed from the default settings, and from those presented in the <a href="https://uk.mathworks.com/help/nnet/examples/time-series-forecasting-using-deep-learning.html" target="_blank">MATLAB example</a>, in order to achieve useful results on the METAR data. This is not unreasonable, given that the data sets are so different and that machine learning is essentially data-driven. The following code snippet captures the set of hyper-parameters which, by trial-and-error experimentation, were found to be effective on the METAR variables.<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">%% Example data setup for LSTM model on the first chunk of data</span><br />
<div>
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span></div>
<div>
<span style="font-family: "courier new";">% Look back 92 hours. Seems suitable for METAR data</span></div>
<span style="font-family: "courier new" , "courier" , monospace;">numTimeStepsTrain = 184; </span><br />
<div>
<br /></div>
<div>
<span style="font-family: "courier new";">% 3 days maximum forecast look-ahead<span style="font-family: "courier new" , "courier" , monospace;"><br /></span>
<span style="font-family: "courier new" , "courier" , monospace;">numTimeStepsPred = 144; <br />windowLength = numTimeStepsPred+numTimeStepsTrain;</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span>
<span style="font-family: "courier new" , "courier" , monospace;">data=data_entire_history(1:windowLength); </span><br />
<span style="font-family: "courier new" , "courier" , monospace;">% where data_entire_history is entire time series</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span>
<span style="font-family: "courier new" , "courier" , monospace;">XTrain = data(1:numTimeStepsTrain); </span><br />
<span style="font-family: "courier new" , "courier" , monospace;">% where data is the first window of time series</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span>
<span style="font-family: "courier new" , "courier" , monospace;">YTrain = data(2:numTimeStepsTrain+1); </span><br />
<span style="font-family: "courier new" , "courier" , monospace;">% target for LSTM is one time-step into the future</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span>
<span style="font-family: "courier new" , "courier" , monospace;">XTest = data(numTimeStepsTrain+1:end-1); </span><br />
<span style="font-family: "courier new" , "courier" , monospace;">% inputs for testing the LSTM model at all forecast look-aheads</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span>
<span style="font-family: "courier new" , "courier" , monospace;">YTest = data(numTimeStepsTrain+2:end); </span><br />
<span style="font-family: "courier new" , "courier" , monospace;">% targets for testing the LSTM model at all forecast look-aheads</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span>
<span style="font-family: "courier new" , "courier" , monospace;">%For a better fit and to prevent the training from diverging,</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">%standardize the training data to have zero mean and unit variance.</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">%Standardize the test data using the same parameters as the training</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">%data.</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span>
<span style="font-family: "courier new" , "courier" , monospace;">mu = mean(XTrain);<br />sig = std(XTrain);</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">XTrain = (XTrain - mu) / sig;<br />YTrain = (YTrain - mu) / sig;</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">XTest = (XTest - mu) / sig;</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span>
<span style="font-family: "courier new" , "courier" , monospace;">%% Define LSTM Network Architecture</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">inputSize = 1;<br />numResponses = 1;<br />numHiddenUnits =65; % seems suitable for METAR data</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">layers = [ ...<br /> sequenceInputLayer(inputSize)<br /> lstmLayer(numHiddenUnits)<br /> fullyConnectedLayer(numResponses)<br /> regressionLayer];</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">maxEpochs=200;<br />opts = trainingOptions('adam', ...<br /> 'MaxEpochs',maxEpochs, ...<br /> 'GradientThreshold',1, ...<br /> 'InitialLearnRate',0.005, ...<br /> 'LearnRateSchedule','piecewise', ...<br /> 'LearnRateDropPeriod',125, ...<br /> 'LearnRateDropFactor',0.2, ...<br /> 'Verbose',0);</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">% Train the LSTM network with the specified training options by </span></span><br />
<span style="font-family: "courier new";"><span style="font-family: "courier new" , "courier" , monospace;">% using trainNetwork.</span></span><br />
<span style="font-family: "courier new";"><span style="font-family: "courier new" , "courier" , monospace;"><br /></span></span>
<span style="font-family: "courier new";"><span style="font-family: "courier new" , "courier" , monospace;">net = trainNetwork(XTrain,YTrain,layers,opts);</span></span><br />
<span style="font-family: "courier new";"></span><span style="font-family: "courier new";"><br />
<span style="font-family: "courier new" , "courier" , monospace;">% Forecast Future Time Steps<br />% To forecast the values of multiple time steps in the future, </span></span><br />
<span style="font-family: "courier new";"><span style="font-family: "courier new" , "courier" , monospace;">% use </span></span><span style="font-family: "courier new";"><span style="font-family: "courier new";"><span style="font-family: "courier new" , "courier" , monospace;">the predictAndUpdateState function to predict time steps </span></span></span><br />
<span style="font-family: "courier new";"><span style="font-family: "courier new";"><span style="font-family: "courier new" , "courier" , monospace;">% one at a </span></span></span><span style="font-family: "courier new";"><span style="font-family: "courier new";"><span style="font-family: "courier new" , "courier" , monospace;">time and update the network state at each prediction. </span></span></span><br />
<span style="font-family: "courier new";"><span style="font-family: "courier new";"><span style="font-family: "courier new" , "courier" , monospace;">% For each </span></span></span><span style="font-family: "courier new";"><span style="font-family: "courier new";"><span style="font-family: "courier new" , "courier" , monospace;">prediction, use the previous prediction as input to </span></span></span><br />
<span style="font-family: "courier new";"><span style="font-family: "courier new";"><span style="font-family: "courier new" , "courier" , monospace;">% the function.</span></span></span><br />
<span style="font-family: "courier new";"><span style="font-family: "courier new" , "courier" , monospace;">% To initialize the network state, first predict on the training </span></span><span style="font-family: "courier new";"><span style="font-family: "courier new" , "courier" , monospace;"></span></span><br />
<span style="font-family: "courier new";"><span style="font-family: "courier new" , "courier" , monospace;">% data XTrain. Next, make the first prediction using the last </span></span><br />
<span style="font-family: "courier new";">% <span style="font-family: "courier new";"><span style="font-family: "courier new" , "courier" , monospace;">time </span></span></span><span style="font-family: "courier new";"><span style="font-family: "courier new" , "courier" , monospace;">step of the training response YTrain(end). Loop over the </span></span><br />
<span style="font-family: "courier new";">% <span style="font-family: "courier new";"><span style="font-family: "courier new" , "courier" , monospace;">remaining </span></span></span><span style="font-family: "courier new";"><span style="font-family: "courier new" , "courier" , monospace;">predictions and input the previous prediction to </span></span><br />
<span style="font-family: "courier new";"><span style="font-family: "courier new" , "courier" , monospace;">% predictAndUpdateState.</span></span><br />
<span style="font-family: "courier new";"><br />
<span style="font-family: "courier new" , "courier" , monospace;"></span></span><br />
<span style="font-family: "courier new";"><span style="font-family: "courier new" , "courier" , monospace;">net = predictAndUpdateState(net,XTrain);<br />[net,YPred] = predictAndUpdateState(net,YTrain(end));</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">numTimeStepsTest = numel(XTest);<br />for i = 2:numTimeStepsTest<br /> [net,YPred(1,i)] = predictAndUpdateState(net,YPred(i-1));<br />end</span></span><br />
<span style="font-family: "courier new";"></span><span style="font-family: "courier new";"><br />
<span style="font-family: "courier new" , "courier" , monospace;">% Unstandardize the predictions using mu and sig calculated</span></span><br />
<span style="font-family: "courier new";">% <span style="font-family: "courier new";"><span style="font-family: "courier new" , "courier" , monospace;">earlier.</span></span></span><br />
<span style="font-family: "courier new";"><span style="font-family: "courier new" , "courier" , monospace;"></span></span><span style="font-family: "courier new";"><span style="font-family: "courier new" , "courier" , monospace;"><br />YPred = sig*YPred + mu;</span></span><br />
<span style="font-family: "courier new";"></span><span style="font-family: "courier new";"><br />
<span style="font-family: "courier new" , "courier" , monospace;">% The training progress plot reports the root-mean-square error </span></span><br />
<span style="font-family: "courier new";"><span style="font-family: "courier new" , "courier" , monospace;">%(RMSE) calculated from the standardized data. Calculate the RMSE </span></span><br />
<span style="font-family: "courier new";"><span style="font-family: "courier new" , "courier" , monospace;">% from the unstandardized predictions.</span></span><br />
<span style="font-family: "courier new";"><span style="font-family: "courier new" , "courier" , monospace;"><br /></span></span>
<span style="font-family: "courier new";"><span style="font-family: "courier new" , "courier" , monospace;">rmse = sqrt(mean((YPred-YTest).^2))</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span>
<span style="font-family: "courier new" , "courier" , monospace;"></span></span></div>
Note: it may be better to tune the hyper-parameters separately for the modelling of each variable; again, this was not done here, but it is an idea for a future enhancement (see the sketch below for one possible approach).<br />
<br />
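As a rough illustration of that idea, the following sketch wraps the training and forecasting steps from the snippet above in a simple grid search over the number of hidden units, scoring each candidate on its validation RMSE. The candidate grid is an assumption introduced purely for illustration; no such tuning was actually performed here.<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">% Hypothetical per-variable tuning sketch (not performed in this study):<br />
% try a few LSTM sizes for one variable, keep the best on validation RMSE<br />
candidateUnits = [32 65 128]; % assumed candidate grid<br />
rmses = zeros(size(candidateUnits));<br />
for k = 1:numel(candidateUnits)<br />
 layers = [ ...<br />
 sequenceInputLayer(1)<br />
 lstmLayer(candidateUnits(k))<br />
 fullyConnectedLayer(1)<br />
 regressionLayer];<br />
 net = trainNetwork(XTrain,YTrain,layers,opts);<br />
 % closed-loop forecast over the test window, exactly as above<br />
 net = predictAndUpdateState(net,XTrain);<br />
 [net,YPred] = predictAndUpdateState(net,YTrain(end));<br />
 for i = 2:numel(XTest)<br />
 [net,YPred(1,i)] = predictAndUpdateState(net,YPred(i-1));<br />
 end<br />
 rmses(k) = sqrt(mean((sig*YPred + mu - YTest).^2));<br />
end<br />
[bestRmse, kBest] = min(rmses);<br />
bestUnits = candidateUnits(kBest);</span><br />
<br />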
With the above-mentioned set of hyper-parameters, a typical LSTM training convergence history (in this case, for Temperature) is shown in the graph below. Note: this plot, (optionally) generated interactively by MATLAB during training, is similar to that available via <a href="https://www.tensorflow.org/programmers_guide/summaries_and_tensorboard" target="_blank">TensorBoard</a> (when training TensorFlow models), but with the added advantage that the MATLAB interface has a "Stop" button which lets the user halt the training at any time (and capture the network parameters at that point).
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjOC7hrKc8pjke7ao-wAR1XRtYk5dUj_VJZU5_Htv0DFpIFv0SY71mRyWH6XY1RYTvksD_990vRkeg-DVUdvN6cAWEsJaElI2nrp1aiGDh5JHK0IO6pQFAyeOVB-brNwcSce5ALv-n7OhI/s1600/Capture_Typical_LSTM_converge.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="825" data-original-width="1349" height="388" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjOC7hrKc8pjke7ao-wAR1XRtYk5dUj_VJZU5_Htv0DFpIFv0SY71mRyWH6XY1RYTvksD_990vRkeg-DVUdvN6cAWEsJaElI2nrp1aiGDh5JHK0IO6pQFAyeOVB-brNwcSce5ALv-n7OhI/s640/Capture_Typical_LSTM_converge.PNG" width="640" /></a></div>
<br />
<br />
The typical forecast results (i.e., from just one arbitrary window of the historical data set) obtained from the LSTM models for each variable are shown in the following plots. In each plot, the (92 hour) training data window is (arbitrarily) chosen to end at "time zero", where the forecast starts; the forecast then extends from half an hour out to 72 hours (i.e., 3 days). The black curve is the training data, the blue curve the forecast, and the red curve the test values against which the forecast performance can be directly compared. For all METAR variables, the forecasts are seen to be effective only out to a few hours at most, with significant deviations beyond that -- and some variables are worse than others.<br />
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgwQIRjfSRRT7Sl7Ryza_YHqdV0DTa8fZ0P-HSlRIds2l5LkmRSjmPFy0Q2VUNvnnnOzUWNjRSXIjnf78TRB5WWEvK4udIMXnX-Xb9cSO1V74Bcik3wgXCcv3ubSWPYzTkV8abnBOBTA8w/s1600/LSTM_1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="420" data-original-width="560" height="480" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgwQIRjfSRRT7Sl7Ryza_YHqdV0DTa8fZ0P-HSlRIds2l5LkmRSjmPFy0Q2VUNvnnnOzUWNjRSXIjnf78TRB5WWEvK4udIMXnX-Xb9cSO1V74Bcik3wgXCcv3ubSWPYzTkV8abnBOBTA8w/s640/LSTM_1.png" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj8jAmT9WKxz8qLm0qqKxL-jI9eV13Bp1fSm8EwsZ5gs0uhp2-XEajN42IZDY8u0riHcDMLvhtNOOX5V0gtKPlsfu66RKBBacBR4riia-ka0rOeShrXwyN3JUzcpvGjJpmT56u8UorF-QI/s1600/LSTM_2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="420" data-original-width="560" height="480" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj8jAmT9WKxz8qLm0qqKxL-jI9eV13Bp1fSm8EwsZ5gs0uhp2-XEajN42IZDY8u0riHcDMLvhtNOOX5V0gtKPlsfu66RKBBacBR4riia-ka0rOeShrXwyN3JUzcpvGjJpmT56u8UorF-QI/s640/LSTM_2.png" width="640" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjgqP3Ugf-jWte1MjtI3gSNZBko-GzrU6a5zd3bm3yjEnpOqwrS4gL2j3fXdNP5VDLDWuxH4jSMvc5ZTmaLPHsRQkLm-iG84gCydBpg8J_CvVEKNYOQURpBspLozbJYC3zLtyTQA0IHc8w/s1600/LSTM_3.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="420" data-original-width="560" height="480" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjgqP3Ugf-jWte1MjtI3gSNZBko-GzrU6a5zd3bm3yjEnpOqwrS4gL2j3fXdNP5VDLDWuxH4jSMvc5ZTmaLPHsRQkLm-iG84gCydBpg8J_CvVEKNYOQURpBspLozbJYC3zLtyTQA0IHc8w/s640/LSTM_3.png" width="640" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhytzgOS7hcxhIYN6IfOAdVBE_bge3SjE_9Lb9YajrQKbvYYG4FqyYSIYy6h5AwIISmJOFGKsMRH8G8c0nnpTCVu_g6SGY47JI31Xd86JoycHglsPMMJZKJifUWajZYdkqOIW9Kzl8k0HY/s1600/LSTM_4.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="420" data-original-width="560" height="480" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhytzgOS7hcxhIYN6IfOAdVBE_bge3SjE_9Lb9YajrQKbvYYG4FqyYSIYy6h5AwIISmJOFGKsMRH8G8c0nnpTCVu_g6SGY47JI31Xd86JoycHglsPMMJZKJifUWajZYdkqOIW9Kzl8k0HY/s640/LSTM_4.png" width="640" /></a></div>
<br />
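For reference, each of the plots above can be reproduced with something like the sketch below. The half-hourly time axes relative to "time zero" are assumptions; the variable names follow the earlier snippet, where YPred has already been unstandardized and YTest was never standardized.<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">% Minimal sketch of one forecast-vs-actual plot<br />
dt = 0.5; % hours per METAR sample<br />
tTrain = (-numTimeStepsTrain+1:0)*dt; % training window ends at time zero<br />
tAhead = (1:numel(YPred))*dt; % forecast look-ahead times<br />
figure; hold on<br />
plot(tTrain, sig*XTrain + mu, 'k') % training data (black)<br />
plot(tAhead, YPred, 'b') % LSTM forecast (blue)<br />
plot(tAhead, YTest, 'r') % observed test values (red)<br />
xlabel('Hours relative to forecast start')<br />
legend('Training data','Forecast','Observed')<br />
hold off</span><br />
<br />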
<h3>
Forecast Error Plots</h3>
By re-training each LSTM model for each METAR variable at <i>each</i> successive sample point in the test data set, and comparing with the known measurements at each forecast time, it is possible to build up a statistical picture of the average performance of the models over forecasting time. For the 3.5 month historical METAR data set, sampled every half hour, and subtracting two window widths (first and last), this implies training approximately 4,600 LSTM models per METAR variable. Note: when it comes to production deployment of the models, the principle of re-training each model at each sample point -- i.e., each time a new weather observation comes in, every half hour in the case of METARs -- means that the model for the given variable is the most up-to-date it can be at any given time, for use in forecasting forward from that point in time.<br />
<br />
By averaging the mean-square error of the (4,600) forecasts from each of the trained models, sliding forward in time from the beginning to the end of the entire data set, the expected accuracy of the forecast at each look-ahead time can be assessed. These accuracies (in terms of absolute and relative average mean-square error) are plotted in the Forecast Error Plots below (blue curves, labelled "<b>LSTM alone</b>") for each variable versus look-ahead time (from half an hour to three days). On the absolute error plots, the standard deviation of the underlying observations is also shown (denoted <b>sdev obs</b>). Whenever the error curve is (well) below the <b>sdev obs</b> line, the forecast can be considered better than random; whenever the error curve is near or above the <b>sdev obs</b> line, the forecast is no better than random and should be considered ineffective. Similarly, on the relative error curves, whenever the error is below 50% (as indicated by the line marked <b>50% error</b>), the forecast may be considered effective -- though the lower the better. Above a relative error of 50%, the forecast should be considered ineffective. <br />
<br />
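Schematically, the procedure behind these error curves resembles the following sketch, in which trainAndForecast is a hypothetical helper standing in for the LSTM training and closed-loop forecasting steps shown earlier (returning the forecasts and the matching observations for one window); the array shapes are assumptions.<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">% Sketch of the sliding-window error statistics (assumed structure)<br />
nWindows = numel(data_entire_history) - windowLength + 1; % ~4,600 here<br />
sqErr = [];<br />
for w = 1:nWindows<br />
 chunk = data_entire_history(w : w+windowLength-1);<br />
 [YPred, YTrue] = trainAndForecast(chunk); % hypothetical helper<br />
 sqErr(w,:) = (YPred - YTrue).^2; % error at each look-ahead step<br />
end<br />
rmsePerLookahead = sqrt(mean(sqErr,1)); % the blue "LSTM alone" curves<br />
sdevObs = std(data_entire_history); % the "sdev obs" reference line<br />
relativeError = rmsePerLookahead / sdevObs; % compared against the 50% line</span><br />
<br />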
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg2kilUlBFAj6Ae66ddAns2KVe1vjcjq_IwAh9DgC-vF0SawNjtW5ww2gtV8qTcXGRcMCKRe0NYm7FYId_Iv9-NS4rCeLrZdr7btj89a2l-p_6D4COJz2usbudMs9araCoMW771Hx_C6yQ/s1600/ERROR_1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="420" data-original-width="560" height="480" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg2kilUlBFAj6Ae66ddAns2KVe1vjcjq_IwAh9DgC-vF0SawNjtW5ww2gtV8qTcXGRcMCKRe0NYm7FYId_Iv9-NS4rCeLrZdr7btj89a2l-p_6D4COJz2usbudMs9araCoMW771Hx_C6yQ/s640/ERROR_1.png" width="640" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgD_xo0wD-278VBcvlKIiU-wa9YGwmNHPZqhreFBCzwAcjgRfudmbhg7wspqmSZ8zFhy_6DURirQ-NnkJSyVtUjcjKN1Fjs3E1fB8VX_rV0Aq2NuHf_66aTuJ2916Tr9fgTt_85H0eZgBw/s1600/ERROR_2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="420" data-original-width="560" height="480" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgD_xo0wD-278VBcvlKIiU-wa9YGwmNHPZqhreFBCzwAcjgRfudmbhg7wspqmSZ8zFhy_6DURirQ-NnkJSyVtUjcjKN1Fjs3E1fB8VX_rV0Aq2NuHf_66aTuJ2916Tr9fgTt_85H0eZgBw/s640/ERROR_2.png" width="640" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgGePWHZBl6NE3lCdFkeCbXFJw-b_AON_IuDnsS9P1A7JNaVmwmuTAysXmqY0mTsG-evS0YT_gf0sTWpjl7eH5cW2gKOEjJG0PeTpIfngfzB5zUPanZ-IiNZ4vEysJrd8OjMpumFTDCYYc/s1600/ERROR_3.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="420" data-original-width="560" height="480" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgGePWHZBl6NE3lCdFkeCbXFJw-b_AON_IuDnsS9P1A7JNaVmwmuTAysXmqY0mTsG-evS0YT_gf0sTWpjl7eH5cW2gKOEjJG0PeTpIfngfzB5zUPanZ-IiNZ4vEysJrd8OjMpumFTDCYYc/s640/ERROR_3.png" width="640" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiHtxRxuXqy-GnOZajUptUefRuLwnNtxVhW1zJJxHXamAmjSb7zkcRkOyFk8tNL0aJfiG50bOOU0SZuBpWn7WyKPSko4kKE4SrC1QLdsjsal5MHaLGDpkWNxtk3kbGMB2e_3M69bHBuuzw/s1600/ERROR_4.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="420" data-original-width="560" height="480" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiHtxRxuXqy-GnOZajUptUefRuLwnNtxVhW1zJJxHXamAmjSb7zkcRkOyFk8tNL0aJfiG50bOOU0SZuBpWn7WyKPSko4kKE4SrC1QLdsjsal5MHaLGDpkWNxtk3kbGMB2e_3M69bHBuuzw/s640/ERROR_4.png" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEijFSgUb7rlkFIv_Zr7lMa6gC-BGDO5EG4AD-RRyniB2Jqg_UazzoBB7q1GsCzQsuoILEKZ6AhZ4B32VP8H5iqPUR_eofTKUpLUaEidopjOv_b2mk4By3K6UQDMBheSuD03Jfk7McLAfFI/s1600/ERROR_5.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="420" data-original-width="560" height="480" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEijFSgUb7rlkFIv_Zr7lMa6gC-BGDO5EG4AD-RRyniB2Jqg_UazzoBB7q1GsCzQsuoILEKZ6AhZ4B32VP8H5iqPUR_eofTKUpLUaEidopjOv_b2mk4By3K6UQDMBheSuD03Jfk7McLAfFI/s640/ERROR_5.png" width="640" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh9Vf1myVecsTYDvNDrGmR8dAZF1w-HFrmuo8ifXRsa6Q8gqTahH5pE87ll6M-DXkUosA_KiDn_5FM9Y6pZiNI5kmgzknS1srvALncEEVb1kElG5Y4uks6pJOPtA5PtG1ovPeGsb4EhT8E/s1600/ERROR_6.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="420" data-original-width="560" height="480" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh9Vf1myVecsTYDvNDrGmR8dAZF1w-HFrmuo8ifXRsa6Q8gqTahH5pE87ll6M-DXkUosA_KiDn_5FM9Y6pZiNI5kmgzknS1srvALncEEVb1kElG5Y4uks6pJOPtA5PtG1ovPeGsb4EhT8E/s640/ERROR_6.png" width="640" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiTwGuh6HMXfOBkLq7eXejtvdF2LUCoL-CohiMYgvrUWbCKpROGcXB82d8lx1hP2gNIFGt1JIdGm_3pa3xBpjtNQVdOSkvSSBa3PwjdAPOWeEE0F8ZJIfaWp2Bk51ikLEdkQz6GSMG4rnk/s1600/ERROR_7.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="420" data-original-width="560" height="480" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiTwGuh6HMXfOBkLq7eXejtvdF2LUCoL-CohiMYgvrUWbCKpROGcXB82d8lx1hP2gNIFGt1JIdGm_3pa3xBpjtNQVdOSkvSSBa3PwjdAPOWeEE0F8ZJIfaWp2Bk51ikLEdkQz6GSMG4rnk/s640/ERROR_7.png" width="640" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjUfAL-wbkaPmBugn6PlLxBcXTavVKKw7KuT28VHY8-ByvbbIe2DT-ktBRq4OaTyEO2q6TeKM8sP9kaVYHUqtjDaWjkkQ-JHmX43CIvxj1lut-5EmHR4H193qoRi8pPKoV1FRwRvmoVWgw/s1600/ERROR_8.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="420" data-original-width="560" height="480" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjUfAL-wbkaPmBugn6PlLxBcXTavVKKw7KuT28VHY8-ByvbbIe2DT-ktBRq4OaTyEO2q6TeKM8sP9kaVYHUqtjDaWjkkQ-JHmX43CIvxj1lut-5EmHR4H193qoRi8pPKoV1FRwRvmoVWgw/s640/ERROR_8.png" width="640" /></a></div>
<br />
The look-ahead period out to which the LSTM forecast for each variable can be considered useful has been extracted from the plots and is summarised in the following table.<br />
<br />
<table>
<tbody>
<tr>
<td><b>Variable</b></td><td><b> Usable forecast </b></td></tr>
<tr><td>Temperature </td><td> 6 hours</td></tr>
<tr><td>Dewpoint </td><td> 2 hours</td></tr>
<tr><td>Cloudbase </td><td> 2 hours</td></tr>
<tr><td>Cloud Cover </td><td> No usable forecast</td></tr>
<tr><td>Sea-Level Pressure </td><td> 25 hours</td></tr>
<tr><td>Visibility </td><td> 2 hours</td></tr>
<tr><td>Windspeed </td><td> 3 hours</td></tr>
<tr><td>Wind Direction </td><td> 3 hours</td></tr>
</tbody>
</table>
<br />
The LSTM forecasts are generally seen to be useful out to a few hours, with some exceptions: the Sea-Level Pressure forecast is good out to (an impressive) 25 hours, whereas the Cloud Cover forecast is not usable at all.<br />
<span style="font-family: inherit;"><br /></span>
<br />
<h3>
<span style="font-family: inherit;">
Modelling Phase 2: Using the LSTM model outputs in combination with the other METAR variables to perform regressions</span></h3>
<span style="font-family: inherit;">
The benefit of the LSTM modelling from Phase 1 above is that the recent history of a given variable is utilised in predicting its future path. This should presumably be better than using just a single snapshot in time (e.g., now) to predict the future. That said, from the results obtained, the accuracy diminishes quite significantly when forecasting out beyond a couple of hours or so. In this next phase, the idea is to utilise the information from the other METAR variables to help improve the forecasts for the given variable (which so far have been based only on its own history). This makes sense from a laws-of-physics point of view. For example, the temperature an hour from now will depend not only on the temperature now, but also on: the time of day (since there is a diurnal temperature cycle) of the measurement and of the desired forecast; the extent of cloud cover; the wind strength (possibly); etc. It therefore makes intuitive sense to tie these other known measurements and time-factors into the forecasts for a given variable.<br />
<br />
The strategy therefore is to re-cast the forecasting task as a neural network multivariate regression problem where the inputs (regressors) comprise: (i) all the measured METAR variables at a given time; (ii) the time of day of those measurements; (iii) the time difference between the measurement time and the time of the forecast looking ahead; and (iv) the estimated value of the variable in question at the forecast time, obtained from the Phase 1 LSTM model. The output (target) of the regression is the estimate of the value of the variable in question at the forecast time (for each look-ahead time). For training, all input and output values are known. Moreover, since the LSTM models have been re-built every half hour (by sliding through the entire data set), a large set (664,521) of input/output pairs is available for training this neural network regression model.<br />
<br />
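Before looking at the training code, the sketch below illustrates how the regressor matrix x and target vector t might be assembled for one target variable. The names metars (an assumed [samples x variables] matrix of observations), hourOfDay, lstmEst (the Phase 1 estimates per sample and look-ahead), targetVar and nSamples are all assumptions introduced for illustration.<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">% Hypothetical assembly of regressors x and targets t<br />
row = 0;<br />
for s = 1:nSamples<br />
 for ahead = 1:numTimeStepsPred<br />
 if s + ahead > nSamples, break; end<br />
 row = row + 1;<br />
 x(:,row) = [metars(s,:)'; ... % (i) all METAR variables now<br />
 hourOfDay(s); ... % (ii) time of day of measurement<br />
 ahead*0.5; ... % (iii) look-ahead time, in hours<br />
 lstmEst(s,ahead)]; % (iv) LSTM estimate at look-ahead<br />
 t(row) = metars(s+ahead, targetVar); % the variable's future value<br />
 end<br />
end</span><br />
<br />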
To perform the neural network regression, MATLAB has two options available: the (older) <b>train</b> function, and the newer <b>trainNetwork</b> function (which was also used above for the LSTM training). The differences between the two methods are discussed <a href="https://uk.mathworks.com/matlabcentral/answers/385205-how-are-the-functions-train-and-trainnetwork-different-underneath" target="_blank">here</a>. I opted for the newer <b>trainNetwork</b> method since it is focused on Deep Learning and can make use of large data sets running on GPUs. At this point I would like to extend my gratitude to Musab Khawaja at the MathWorks, who provided me with sample code (the basis of the snippet below) demonstrating how to adapt the <b>imageInputLayer</b> (normally used for image processing) for general-purpose multivariate regression.<br />
<br />
As with the LSTM modelling, the hyper-parameters needed to be chosen for the training. Again by trial-and-error, the following common set (shown in the code snippet below) proved suitable for each of the METAR data fits:</span><br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">% Here, x is the array of appropriate regressor observations, and <br />% t is the vector of targets<br />
</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">% Create a table with last col the outputs<br />data=array2table([x' t']);<br />[sizen,sizem]=size(x);</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">numVars = sizen;</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">n = height(data);</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><br />% reshape to 4D - first 3D for the 'image', last D for each <br />% sample<br />dataArray = reshape(data{:,1:numVars}', [numVars 1 1 n]); </span><br />
<span style="font-family: "courier new" , "courier" , monospace;">% ...assume first numVars columns are predictors (regressors)</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><br />output = data{:,numVars+1}; % assume response is last column</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span>
<span style="font-family: "courier new" , "courier" , monospace;">% Split into 70% training and 30% test set<br />pc = 0.7;<br />rng('default') % for reproducibility<br />idx=1:n; <br />% Don't shuffle yet, since don't want training sliding <br />% window to leak into validation set<br />max_i = ceil(n*pc);<br />idxTrain = idx(1:max_i);<br />idxTest = idx(max_i+1:n);</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><br />% Now shuffle the training and validation sets independently<br />idxTrain=randsample(idxTrain,length(idxTrain));<br />idxTest=randsample(idxTest,length(idxTest));<br />
<br />
% Prepare arrays for regressions</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">trainingData = dataArray(:, :, :, idxTrain);<br />trainingOutput = output(idxTrain);</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">testData = dataArray(:, :, :, idxTest);<br />testOutput = output(idxTest);<br />testSet = {testData, testOutput};</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span>
<span style="font-family: "courier new" , "courier" , monospace;">% Define network architecture<br />layers = [...<br /> imageInputLayer([numVars,1,1]) % Non-image regression!<br /> <br /> fullyConnectedLayer(500) % Seems suitable for METAR data<br /> reluLayer<br /> </span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> fullyConnectedLayer(100) <br /> reluLayer<br /> <br /> fullyConnectedLayer(1)<br /> regressionLayer<br /> ];</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">% Set training options<br />options = trainingOptions('adam', ...<br /> 'ExecutionEnvironment','gpu', ... <br /> 'InitialLearnRate', 1e-4, ...<br /> 'MaxEpochs', 1000, ... <br /> 'MiniBatchSize', 10000,...% Seems suitable for METAR data<br /> 'ValidationData', testSet, ...<br /> 'ValidationFrequency', 25, ...<br /> 'ValidationPatience', 5, ... </span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> 'Shuffle', 'every-epoch', ... <br /> 'Plots','training-progress');</span><br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">% Train<br />net = trainNetwork(trainingData, trainingOutput, layers, options);</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span>
<span style="font-family: "courier new" , "courier" , monospace;">% Predict for validation plots<br />y=predict(net,testData);<br />VALIDATION_INPUTS=reshape(testData,[numVars,length(testData)])';<br />VALIDATION_OUTPUTS=testOutput;<br />VALIDATION_y=y;</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"></span><span style="font-family: inherit;"><br /></span><span style="font-family: inherit;"><br />
A Deep Learning model with the afore-mentioned hyper-parameters was trained for each METAR variable, in turn, selected as the target. As evident from the code snippet, the first 70% of the entire data set was used for training (which amounted to 465,165 data points), the remaining 30% ("tail-end") for validation (which amounted to 199,356 data points).<br />
<br />
The errors in the corresponding forecasts when applied to the validation data are displayed as the red curves (labelled "<b>Multi-regression plus LSTM</b>") in the Forecast Error Plots presented earlier. For comparison, the yellow curves (labelled "<b>Multi-regression alone</b>") in t</span>he error plots correspond to the multivariate regressions re-trained but this time excluding the LSTM outputs as regressors. It can be seen that in all cases, the LSTM models out-perform the Multi-regressions when limiting our attention to those regimes where the forecasts are deemed to be useful i.e., when the absolute rms error is below the standard deviation of the observations and when the relative rms error is below 50%. This came somewhat as a surprise, since intuitively it was felt that the addition of information via the other variables <i>should</i> have been more beneficial than was observed. Perhaps a refined regression analysis, as discussed below, would reveal such. Outside the usable regimes, the Multi-regressions sometimes out-perform the LSTM models, but by then, none of the models are effective (errors too large). It is also interesting to note that the inclusion of the LSTM outputs as inputs to the Multi-regressions generally improves their performance (i.e., red curves lower than yellow curves) -- but not in every case (see for example, the Cloudbase forecasts, where the yellow curve is lower than the red curve).
<br />
<h2>
<span style="font-family: inherit;"><br /></span></h2>
<h2>
<span style="font-family: inherit;">Other Things To Try</span></h2>
<br />
Some ideas to try next include (not exhaustive):<br />
<br />
<ul>
<li>The LSTM models were trained to target the observations just one sample period (half an hour) ahead of the inputs, but were then asked to predict out to three days ahead, with the accuracy dropping off dramatically within the first few samples. Instead, it might be worth training a different LSTM model for <i>each</i> look-ahead period (by down-sampling before training) -- see the sketch after this list. This would entail having a different LSTM model per look-ahead period, but perhaps the forecast accuracy would be better, particularly further out.</li>
<li>Likewise, for the Multi-regression models, all look-ahead periods were included in a single regression model. This means that the accuracy at short look-ahead periods is penalised by the errors further out (since the stochastic gradient descent optimiser minimises a single number: the rms error across <i>all</i> look-ahead periods). Instead, it might be worth training a different Multi-regression model for each look-ahead period. Again, this would entail more models, but the accuracy may be better.</li>
<li>The hyper-parameter settings for all models were chosen by trial-and-error. It might be worth trying a more systematic approach, e.g., an outer layer of optimisation which uses techniques such as genetic algorithms to choose the optimum set of hyper-parameters.</li>
<li>Try incorporating additional information in the regressions -- for example, weather data from other locations known to correlate with the weather at the given location. Case in point: since most of the weather systems affecting the Isle of Man arrive from the Atlantic, i.e., from the west, it might be useful to incorporate weather data from Ireland, with a suitable lag, to try to improve the model predictions for the Isle of Man. </li>
</ul>
<br />
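As a concrete sketch of the first idea in the list above, one LSTM could be trained per look-ahead period on a down-sampled copy of the standardized series, so that a single model step corresponds to k half-hours. Here dataStd, layers and opts are assumed to be as in the Phase 1 snippet; the chosen k values are illustrative only.<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">% Sketch: one LSTM per look-ahead period, via down-sampling<br />
for k = [1 6 48 144] % 0.5 h, 3 h, 1 day, 3 days per step<br />
 dsData = dataStd(1:k:end); % down-sample the standardized series<br />
 Xk = dsData(1:end-1);<br />
 Yk = dsData(2:end); % target is now one k-sized step ahead<br />
 netPerLookahead{k} = trainNetwork(Xk, Yk, layers, opts);<br />
end</span><br />
<br />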
<h2>
<span style="font-family: inherit;">
End-to-End Recipe For Weather Forecaster</span></h2>
<span style="font-family: inherit;"><br />If the Multi-regression results can be improved by the suggestions above, such that they can compete with the LSTM models over some portion(s) of the look-ahead range, then the following general recipe for an online ML-based weather forecaster can be proposed:</span><br />
<h3>
<span style="font-family: inherit;">Every few months (or so):</span></h3>
<ol>
<li><span style="font-family: inherit;">For a given location, gather as long a history of half-hourly METAR data as possible/available, ideally over at least the past year in order to capture seasonal variations</span></li>
<li><span style="font-family: inherit;">From the data in 1), perform a set of LSTM fits (sliding across the data, one sample at a time) to obtain the estimated quantities for use as inputs (alongside the METAR data) to the multivariate regressions. Note: for the 3.5 month historical data set, this set of fits took approximately two days per METAR variable on a p2.xlarge (GPU-equipped) AWS instance, owing to the many thousands of LSTM training runs required. </span></li>
<li><span style="font-family: inherit;">With the data from 1) combined with the estimates from 2), Perform Deep learning multivariate regressions for each target variable. Refer to this trained model as the <b>REGRESSION MODEL </b>for the given target variable. Also perform a regression for the given target variable excluding the LSTM estimates as a regressor. Refer to this trained model as the <b>REDUCED REGRESSION MODEL</b>for the given target variable. Note: for the 3.5 month historical data set, these two fits took approximately one hour per METAR variable on a p2.xlarge (GPU-equipped) AWS instance. Re-create a revised set of Forecast Error Plots from the results of the runs in 2 & 3 (in order to be able to select the best model per forecast look-ahead period, see below).</span></li>
</ol>
<h3>
<span style="font-family: inherit;">
Every time a new observation is received (half-hourly):</span></h3>
<ol>
<li><span style="font-family: inherit;">Re-train the LSTM models, one per variable, using the latest measurement as the most recent available. For each variable, refer to this trained model as the <b>LSTM MODEL </b> for the given variable (note: this fit will take a few minutes per METAR variable on a p2.xlarge GPU-equipped AWS instance). </span></li>
<li><span style="font-family: inherit;">For each forecast look-ahead period (i.e., half hourly up to three days ahead), use each <b>LSTM MODEL</b> each <b>REGRESSION MODEL</b>, and each<b> REDUCED REGRESSION MODEL </b>to generate three different forecasts for the given variable (note: these will take only a few seconds per METAR variable on a p2.xlarge GPU-equipped AWS instance). For each forecast look-ahead period, choose the forecast (i.e., from the <b>LSTM MODEL</b>, the <b>REGRESSION MODEL</b>, or the <b>REDUCED REGRESSION MODEL</b>) depending on which gives the lowest rms error for the given forecast look-ahead period, by referring to the updated Forecast Error Plots. </span></li>
</ol>
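The selection step in 2) might look like the following sketch, where fLSTM, fReg and fRedReg denote the three forecasts for one variable, and rmseLSTM, rmseReg and rmseRedReg the corresponding error curves from the updated Forecast Error Plots (all names are assumptions for illustration).<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">% Hypothetical per-look-ahead model selection<br />
forecast = zeros(1,numTimeStepsPred);<br />
for ahead = 1:numTimeStepsPred<br />
 [~,best] = min([rmseLSTM(ahead), rmseReg(ahead), rmseRedReg(ahead)]);<br />
 switch best<br />
 case 1, forecast(ahead) = fLSTM(ahead); % LSTM MODEL<br />
 case 2, forecast(ahead) = fReg(ahead); % REGRESSION MODEL<br />
 case 3, forecast(ahead) = fRedReg(ahead); % REDUCED REGRESSION MODEL<br />
 end<br />
end</span><br />
<br />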
<span style="font-family: inherit;"></span><br />
<h2>
<span style="font-family: inherit;">
Production Deployment Possibilities</span></h2>
<span style="font-family: inherit;"><br />
The optimum choice of computational and software platform for the Production Deployment of the end-to-end ML-based weather forecaster presented above is not at all clear; it would require a detailed exploration of the available technical options and trade-offs. However, the following possibilities come to mind, each with its own advantages and disadvantages:</span>
<br />
<ol>
<li><span style="font-family: inherit;">Deploy on a suite of MATLAB-equipped Cloud-based server instances. Has the advantage that the code can be used essentially "as is" (since the MATLAB code is already written via the prototypes presented here). Has the disadvantage with respect to cost that the servers would have to be "always on", and the associated MATLAB licensing costs may become prohibitive.</span></li>
<li><span style="font-family: inherit;">Use the MATLAB compiler to package the trained models into deployable libraries which can be installed within (say, Docker) containers which can be instantiated on-demand in the Cloud (and automatically shut down when dormant). Has the advantages that the code is essentially written (just needs to be run through the MATLAB compiler); and that by using containers, there is no need to incur the cost of "always on" server instances. There some open questions, however: can the (half hourly) re-training of the LSTM models via the <b>trainNetwork</b> function be compiled via the MATLAB Compiler?; can functions deployed from the MATLAB Compiler access GPUs, or must the GPU Coder be used?; can compiled MATLAB software running within containers access GPUs? </span></li>
<li><span style="font-family: inherit;">Re-write the models in an open-source ML framework such as TensorFlow and deploy on the Google Cloud ML Engine as exemplified <a href="http://flylogical.blogspot.com/2018/01/deploying-tensorflow-object-detector.html" target="_blank">here</a>. Has the disadvantage that all the models would have to be rewritten, outside MATLAB.</span></li>
<li><span style="font-family: inherit;">Any suggestions welcome : )</span></li>
</ol>
<h2>
</h2>
<h2>
<span style="font-family: inherit;">
A Note On Workflow</span></h2>
<span style="font-family: inherit;">
In the past, I tended to use MATLAB much as I would use other programming languages, i.e., by creating many functions (subroutines) and calling them from a main program. However, by its very nature, machine learning is much more of a trial-and-error process than the type of analyses I have been used to. It is generally more amenable to an interactive cycle: define a set of parameters, run them through a script (e.g., one containing the ML model training commands), view the outputs (in terms of suitable performance metrics), re-assess the assumptions and tweak the parameters accordingly, then run the script again, and so on, until a satisfactory outcome is achieved. In fact, this very mode of interaction has made <a href="https://jupyter-notebook-beginner-guide.readthedocs.io/en/latest/" target="_blank">Jupyter Notebooks</a> one of the most widely-used IDEs for developing ML models in Python. MATLAB has its own recently-introduced answer to this: the <a href="https://uk.mathworks.com/products/matlab/live-editor.html" target="_blank">MATLAB Live Editor</a>.<br />
<br />
As such, when starting out on this exploration, and having successfully used Jupyter Notebooks for Python on previous ML projects, I launched into using the MATLAB Live Editor for running the aforementioned interactive ML design scripts. Whilst I found this useful in the early prototyping stage for a given model, I eventually reverted to the tried-and-tested technique of executing scripts (stored in m-files) with embedded local sub-functions (for calling from loops in the given script). I simply found this mode of operation more productive. Also, the <i>publish</i> options from the Live Editor seemed less flexible and less configurable than for normal m-file scripts.
</span><br />
<br />
<h2>
Conclusions</h2>
<ul>
<li>MATLAB is a highly productive platform for prototyping Machine Learning (in particular, Deep Learning) algorithms. The data-wrangling tools are excellent. In my opinion, it is easier to develop ML models in MATLAB than in Python/TensorFlow, though that may simply reflect my decades of experience with MATLAB compared with only a few weeks using Python/TensorFlow.</li>
<li>Weather forecasting is a <i>hard problem</i>. The Deep Learning approaches developed here show some promise, particularly the LSTM models, but generally only out to a few hours -- and not to the 3 days desired at the outset. Further refinement (perhaps along the lines presented above in Other Things To Try) would hopefully improve the predictive ability of the models.</li>
</ul>
Unknownnoreply@blogger.com18tag:blogger.com,1999:blog-223455584910870050.post-8462729729602552772018-05-05T10:03:00.003-07:002018-05-05T10:04:20.359-07:00Navigation -- New Track Reference TechniqueStuck on the ground due to fog on the Isle of Man today, waiting to do an air-test on one of our Bulldogs (to bed-in its brand new engine), our test pilot Robert Miller (with 21,000 hours on non-airline military and civilian aircraft!) spent the afternoon in Costa's Coffee Castletown explaining the <a href="https://flylogical.com/docs/fly/scan_RM_New_TRACK_Reference_TECHNIQUE.pdf" target="_blank">New Track Reference Technique for aerial navigation using map and compass. Here's his write-up.</a>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-223455584910870050.post-3723741826345396742018-03-21T05:30:00.002-07:002018-03-21T10:59:31.249-07:00Royal Air Force CentenaryUnfortunately, work commitments take me out of the country from tomorrow onwards including 1 April 2018, so I am no longer able to participate in the planned flying activities (Bulldog and Chipmunk formation at <a href="https://www.raf.mod.uk/our-organisation/stations/raf-henlow/" target="_blank">RAF Henlow</a>) in celebration of the RAF Centenary. So, I grabbed an hour of decent weather this morning on the Isle of Man, and flew my ex-RAF Bulldog TMk1 as a minor personal tribute. See photos.<br />
<br />
I have the privilege of having been taught to fly by the Royal Air Force at the <a href="https://en.wikipedia.org/wiki/Universities_of_Glasgow_and_Strathclyde_Air_Squadron" target="_blank">Universities of Glasgow and Strathclyde Air Squadron</a> back in the 1980s.<br />
<br />
...and yesterday's tragic events at RAF Valley are a stark reminder of the risks taken every day on our behalf...<br />
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiTLQ5IrHUrHMTpdljYJuVNmC7xbQmk6x3cdIl3jf_JWsNfcIGHl8Z6onAYuzbFsg6VU38dbDft444Nxg_D_r35fW5CoaqOsDboJsSQ3nR8TA2q80VW3Y95c9u-AaG1MmdXHeYSQrr57Qc/s1600/20180321_102119.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1600" data-original-width="1200" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiTLQ5IrHUrHMTpdljYJuVNmC7xbQmk6x3cdIl3jf_JWsNfcIGHl8Z6onAYuzbFsg6VU38dbDft444Nxg_D_r35fW5CoaqOsDboJsSQ3nR8TA2q80VW3Y95c9u-AaG1MmdXHeYSQrr57Qc/s400/20180321_102119.jpg" width="300" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEieW-gvtIeBUjO8wrWYNMkYFvc5OmXga6evXgnaIxBlqETpQFSN-v-hGjWPKaDKYetho6q5p0ytY4-Yp0tFOcxs1UVmsmTG270NWLL49B3gptcwwNZd526aS6hTDlUJwIVJnPygqmhNz5o/s1600/20180321_103042.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1600" data-original-width="1200" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEieW-gvtIeBUjO8wrWYNMkYFvc5OmXga6evXgnaIxBlqETpQFSN-v-hGjWPKaDKYetho6q5p0ytY4-Yp0tFOcxs1UVmsmTG270NWLL49B3gptcwwNZd526aS6hTDlUJwIVJnPygqmhNz5o/s400/20180321_103042.jpg" width="300" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi0194SOT006XbpOZfZ8s9_ZCJFjhgvAohyTAbJlsskK12I51tkDq_amE487LXHXVSB3ypEDnucyxwVDaGK-dfy3z7nHNP0Z99vGfP9e1rxYWH-dnUBS2SNS1Ck57dd2USUXHqWm6HipjQ/s1600/20180321_102551.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1600" data-original-width="1200" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi0194SOT006XbpOZfZ8s9_ZCJFjhgvAohyTAbJlsskK12I51tkDq_amE487LXHXVSB3ypEDnucyxwVDaGK-dfy3z7nHNP0Z99vGfP9e1rxYWH-dnUBS2SNS1Ck57dd2USUXHqWm6HipjQ/s400/20180321_102551.jpg" width="300" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjYezE-x_NTC6gs3Z9rFtRjiYAi2xJgPLYYvGenH6bw1DB0YMT-PEGCzCorCbSWam0Uhmx9MQnAwtexS7GOQGoMMJte8zTddxx4Snsuj096dlDfdg1ojSZMji5Cz50-snwQSuI6i_7Y10M/s1600/20180321_095656.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1600" data-original-width="1200" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjYezE-x_NTC6gs3Z9rFtRjiYAi2xJgPLYYvGenH6bw1DB0YMT-PEGCzCorCbSWam0Uhmx9MQnAwtexS7GOQGoMMJte8zTddxx4Snsuj096dlDfdg1ojSZMji5Cz50-snwQSuI6i_7Y10M/s400/20180321_095656.jpg" width="300" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj1-KBcMdQQjtpRRqNVsXZAYXKV-rGMTnlGFwNcqlSmd4mRuyN44ec7hRUo_zDjckUJySpatN4Aq18W-IFcbYGuA6Tl0rIMKy-e3rOWonDkLRzhyphenhyphenTZaaXNKqzgQq7E4Yn8GQdD9yB7fsFA/s1600/20180321_101838.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1600" data-original-width="1200" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj1-KBcMdQQjtpRRqNVsXZAYXKV-rGMTnlGFwNcqlSmd4mRuyN44ec7hRUo_zDjckUJySpatN4Aq18W-IFcbYGuA6Tl0rIMKy-e3rOWonDkLRzhyphenhyphenTZaaXNKqzgQq7E4Yn8GQdD9yB7fsFA/s400/20180321_101838.jpg" width="300" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjEf6JsEeinamLWTo89TPWd077I6XEdyZ-nSUq0j1spBQf5tM6AF5HK_FX_KMsIltVAkOJ6BVIPM5_McQxppmdujYzgPybEMMCX3473qYWX_1H0Yp0lusU5O_niUOBQuAnUxxWJ3FnOqwY/s1600/20180321_105531.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1600" data-original-width="1200" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjEf6JsEeinamLWTo89TPWd077I6XEdyZ-nSUq0j1spBQf5tM6AF5HK_FX_KMsIltVAkOJ6BVIPM5_McQxppmdujYzgPybEMMCX3473qYWX_1H0Yp0lusU5O_niUOBQuAnUxxWJ3FnOqwY/s320/20180321_105531.jpg" width="240" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj-O0-YKDE81Yg2VMiHabZPUaPutTTOhnksRA3qcN6xjFoXk6HdodKaLFUXsjiBMUhOSX3u5yKr4znAN-9gf1IoUcmCVIsJezjsyyX3d0DUxCIkIA-qgIjfQe3a9IGqdCVR8zvntWzhcHU/s1600/20180321_094937.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1200" data-original-width="1600" height="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj-O0-YKDE81Yg2VMiHabZPUaPutTTOhnksRA3qcN6xjFoXk6HdodKaLFUXsjiBMUhOSX3u5yKr4znAN-9gf1IoUcmCVIsJezjsyyX3d0DUxCIkIA-qgIjfQe3a9IGqdCVR8zvntWzhcHU/s400/20180321_094937.jpg" width="400" /></a></div>
<br />
<b><br /></b>
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-223455584910870050.post-8574624273961956952018-01-02T16:21:00.001-08:002018-01-02T16:21:29.357-08:00Deploying a TensorFlow Object Detector into Production using Google Cloud ML EngineThis is the follow-on post to my <a href="http://flylogical.blogspot.com/2018/01/object-detection-with-tensorflow-simple.html" target="_blank">previous post</a> which described how I trained a Deep Learning AI (using the <a href="https://github.com/tensorflow/models/tree/master/research/object_detection" target="_blank">Google Object Detection API</a> ) to detect specific "P" symbols on screenshots of map images (as used by <a href="http://parkingradar.org/" target="_blank">ParkingRadar</a>).<br />
<br />
In this post, I describe the final part of the process: namely deploying the trained AI model into "production". <br />
<br />
<h2>
Google Cloud ML Engine</h2>
As the title of the post suggests, I opted for the <a href="https://cloud.google.com/ml-engine/docs/technical-overview" target="_blank">Google Cloud ML Engine</a> for the production infrastructure for the simple reason that I wanted a <i><b>serverless</b></i> solution such that I would only be paying on-demand for the required computing resources as I needed them, rather than having to pay for continuously-operating virtual machine(s) (or Docker container(s)) whether I was utilising them or not.<br />
<br />
From what I could ascertain at the time I was deciding, Google Cloud ML Engine was the only available solution which provides such on-demand scaling (importantly, effectively reducing my assigned resources -- and costs -- to zero when not in use by me). Since then, <a href="https://aws.amazon.com/sagemaker/" target="_blank">AWS SageMaker</a> has come on the scene, but I could not determine from the associated documentation whether the computing resources are similarly auto-scaled (from as low as zero). If anyone knows the answer to this, please advise via the Comments section below.<br />
<br />
<i>GOTCHA: one of the important limitations of the</i> <i> <a href="https://cloud.google.com/ml-engine/docs/technical-overview" target="_blank">Google Cloud ML Engine</a></i> <i>for online prediction is that it auto-allocates single core CPU-based nodes (virtual machines), rather than GPUs. This means that the prediction is <b>slow</b></i> -- <i>especially on the (relatively complex) TensorFlow object detector model which I'm using (multiple minutes per prediction!). I suppose this may be the price one has to pay for the on-demand flexibility, but since Google obviously has GPUs and <a href="http://www.techradar.com/news/computing-components/processors/google-s-tensor-processing-unit-explained-this-is-what-the-future-of-computing-looks-like-1326915" target="_blank">TPUs</a> at their disposal, it would be a welcome improvement if they were to offer such on their Cloud ML Engine. Maybe that will come.</i>..<br />
<br />
<h2>
Deploying the TensorFlow Model into Google Cloud ML</h2>
<h3>
Exporting the Trained Model from TensorFlow</h3>
The first step is to export the trained model in the appropriate format. As in the <a href="http://flylogical.blogspot.com/2018/01/object-detection-with-tensorflow-simple.html" target="_blank">previous post</a>, and picking up where I left off, the <b>export_inference_graph.py</b> Python method included with the <a href="https://github.com/tensorflow/models/tree/master/research/object_detection" target="_blank">Google Object Detection API</a> does this, and can be called from the Ubuntu console as follows:<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">python object_detection/export_inference_graph.py </span><br />
<span style="font-family: "courier new" , "courier" , monospace;">--input_type encoded_image_string_tensor</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">--pipeline_config_path=/risklogical/DeeplearningImages/models/faster_rcnn_inception_resnet_v2_atrous_coco_11_06_2017/faster_rcnn.config </span><br />
<span style="font-family: "courier new" , "courier" , monospace;">--trained_checkpoint_prefix=/risklogical/DeeplearningImages/models/faster_rcnn_inception_resnet_v2_atrous_coco_11_06_2017/train/model.ckpt-46066 </span><br />
<span style="font-family: "courier new" , "courier" , monospace;">--output_directory /risklogical/DeeplearningImages/Outputs/PR_Detector_JustP_RCNN_ForDeploy</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span> where the paths and filenames are obviously substituted with your own. <i>GOTCHA: in the above code snippet, it is important to specify </i><br />
<i><i><br /></i></i> <span style="font-family: "courier new" , "courier" , monospace;">--input_type encoded_image_string_tensor</span><b></b><br />
<br />
rather than what I used previously, namely<br />
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span>
<span style="font-family: "courier new" , "courier" , monospace;">--input_type image_tensor</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"></span><br />
since specifying <b>encoded_image_string_tensor</b> enables the image data to be presented to the model as a base64-encoded string within a JSON payload via a RESTful web-service (in production), rather than simply via Python code (which I used in the <a href="http://flylogical.blogspot.com/2018/01/object-detection-with-tensorflow-simple.html" target="_blank">previous post</a> for post-training <i>ad hoc </i>testing of the model).<br />
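<br />
For reference, a request body in the shape such an exported model expects looks like the following (the same shape as constructed programmatically in the C# wrapper later in this post); the base64 payload is of course truncated here:<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">{"instances": [{"b64": "iVBORw0KGgoAAAANSUhEUg..."}]}</span><br />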
<br />
<i>DOUBLE GOTCHA: ...and this is perhaps the worst of all the gotchas from the entire project. Namely, the Google object detection TensorFlow models, when exported via the Google API </i><b>export_inference_graph.py</b> <i>command as presented above, are NOT COMPATIBLE with the Google Cloud ML Engine if the command IS NOT RUN VIA TensorFlow <b>VERSION 1.2</b>. If you happen to use a later version of TensorFlow such as TF 1.3 (as I first did, since that was what I had installed on my Ubuntu development machine for training the model) THE MODEL WILL FAIL on </i><i>the Google Cloud ML Engine. The workaround is to create a <a href="http://www.pythonforbeginners.com/basics/how-to-use-python-virtualenv" target="_blank">Virtual Environment</a></i>, <i>install TensorFlow Version 1.2 into that Virtual Environment, and run the</i> <b>export_inference_graph.py</b> <i>command as presented above, from within the Virtual Environment. Perhaps the latest version of TensorFlow has eliminated this annoying incompatibility, but I'm not sure. If it has indeed not yet been resolved (does anyone know?), then c'mon Google!</i><br />
<i><br /></i>
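For concreteness, the Virtual Environment workaround amounted to something like the following (a sketch only; the environment name <b>tf12</b> is my arbitrary choice, and <b>virtualenv</b> itself may first need to be installed via <b>pip</b>):<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">virtualenv ~/tf12<br />source ~/tf12/bin/activate<br />pip install tensorflow==1.2.0<br /># ...now run the export_inference_graph.py command shown above...<br />deactivate</span><br />
<br />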
<br />
<h3>
Deploying the Exported Model to Google Cloud ML<i><b></b></i></h3>
<h4>
Creating a Google Compute Cloud Account</h4>
In order to complete the next few steps, I had to create an account on Google Compute Cloud. That is all <a href="https://cloud.google.com/ml-engine/" target="_blank">well-documented</a> and the procedure will not be repeated here. The process was straightforward.<br />
<b><br /></b>
<br />
<h4>
Installing the Google Cloud SDK</h4>
<div>
This is required in order to interact with the Google Compute Cloud from my Ubuntu model-building/training machine e.g., for copying the exported model across. The SDK and installation instructions can be found <a href="https://cloud.google.com/sdk/downloads" target="_blank">here</a>. The process was straightforward.</div>
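<div>
<br /></div>
<div>
Once installed, the SDK just needs to be initialised, which walks through authentication and selection of a default project:</div>
<div>
<br /></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">gcloud init</span></div>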
<h4>
Copying the Exported Model to Google Cloud Storage Platform</h4>
<div>
I copied the exported model described earlier up to the cloud by issuing the following command from the Ubuntu console:</div>
<div>
<br /></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">gsutil cp -r /risklogical/DeeplearningImages/Outputs/PR_Detector_JustP_RCNN_ForDeploy/saved_model/ gs://parkingradar/trained_models/ </span></div>
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span>
where the <b>gsutil</b> application is from the Google Cloud SDK. The source path to the saved model is the same one specified when calling the <b>export_inference_graph.py</b> method above, and the destination on Google Cloud Storage ("<b>gs://...</b>") is a staging area on the cloud where my models are (temporarily) stored; both should obviously be substituted with your own.<br />
<br />
<h4>
Creating the Model on Google Cloud ML</h4>
<div>
<div>
I then had to create what Google Cloud ML refers to as a 'model' -- but which is really just a container for actual models which are then distinguished by version number -- by issuing the following command from the Ubuntu console:</div>
<div>
<br /></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">gcloud ml-engine models create DetectPsymbolOnOSMMap --regions us-central1 </span></div>
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span>
where the <b>gcloud </b>application is from the Google Cloud SDK. The name <b>DetectPsymbolOnOSMMap </b>is the (arbitrary) name I gave to my 'model', and the <b>--regions</b> parameter allows me to specify the location of the infrastructure on the Google Compute Cloud (I selected <b>us-central1</b>).<br />
<br />
The next step is the key one for creating the actual runtime model on the Google Cloud ML. I did this by issuing the following command from the Ubuntu console:<br />
<br />
<div>
<span style="font-family: "courier new" , "courier" , monospace;">gcloud ml-engine versions create v3 --model DetectPsymbolOnOSMMap --origin=gs://parkingradar/trained_models/saved_model --runtime-version=1.2<br /> </span><span style="font-family: "courier new";"></span></div>
<div>
<span style="font-family: "courier new";"></span>What this command does is create a new runtime version under the model tag name <b>DetectPsymbolOnOSMMap</b> (version <b>v3</b> in this example -- as I had already created v1 and v2 from earlier prototypes) of the exported TensorFlow model held in the temporary cloud staging area (<b>gs://parkingradar/trained_models/saved_model</b> ). <i>GOTCHA: it is essential to specify the parameter <b>--runtime-version=1.2 </b>(for the TensorFlow version) since Google Cloud ML does not support later versions of TensorFlow (see earlier DOUBLE GOTCHA).</i></div>
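<div>
<br /></div>
<div>
Incidentally, whichever version is designated as the model's default (used when a caller doesn't specify one) can also be reassigned from the console rather than the portal -- something like the following should do it, assuming the <b>set-default</b> sub-command is available in your SDK version:</div>
<div>
<br /></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">gcloud ml-engine versions set-default v3 --model DetectPsymbolOnOSMMap</span></div>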
<div>
<br /></div>
<div>
At this point I found it helpful to login to the <a href="https://console.cloud.google.com/mlengine" target="_blank">Google Compute Cloud portal</a> (using my Google Compute Cloud access credentials) where I can view my deployed models. Here's what the portal looks like for the model version just deployed:</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg2WxG_ZuSywGcg7mKZAtwNwL43RxF-gvG0rn_Tf49VLNERcmgc9vNFtoObARavZjaZrxp3AIquKhfD0HYLzFmrM_LJ2QduTm60FWPfr7NZB8lTjJssu8eVCuuMT0ACXESA2BJnf_LjOJY/s1600/CaptureCLOUD_ML_PORTAL_1.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="237" data-original-width="885" height="105" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg2WxG_ZuSywGcg7mKZAtwNwL43RxF-gvG0rn_Tf49VLNERcmgc9vNFtoObARavZjaZrxp3AIquKhfD0HYLzFmrM_LJ2QduTm60FWPfr7NZB8lTjJssu8eVCuuMT0ACXESA2BJnf_LjOJY/s400/CaptureCLOUD_ML_PORTAL_1.PNG" width="400" /></a></div>
<div>
<br /></div>
<div>
<b><br /></b></div>
</div>
At this point, the exported TensorFlow model is now available for running on Google Cloud ML. It can be run remotely for test purposes (via the <b>gcloud ml-engine predict</b> command) but I'll not cover that here since my central purpose was to invoke the model from a web-service in order to "hook it up" to the <a href="http://parkingradar.org/" target="_blank">ParkingRadar</a> back-end, so I'll move on to that general topic now.<br />
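(For completeness, such a remote smoke-test would look something like this, where <b>instances.json</b> is a hypothetical file containing one JSON instance per line in the <b>{"b64": "..."}</b> form shown earlier:)<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">gcloud ml-engine predict --model DetectPsymbolOnOSMMap --json-instances instances.json</span><br />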
<br />
<br />
<h2>
Running the Exported Model on Google Cloud ML via a C# wrapper</h2>
<h3>
</h3>
<h3>
</h3>
<h3>
Why C# ?</h3>
<div>
<br />
Since the <a href="http://parkingradar.org/" target="_blank">ParkingRadar</a> back-end stack is written in C#, I opted for C# for developing the wrapper code for calling the model on Google Cloud ML. Although Python was the most suitable choice for training and preparing the Deep Learning model for deployment, in my case C# was the natural choice for this next phase.</div>
<div>
<br /></div>
<div>
<a href="https://stackoverflow.com/questions/45218976/how-do-i-get-online-predictions-in-c-sharp-for-my-model-on-cloud-machine-learnin" target="_blank">This reference</a> provides comprehensive example code necessary to get it all working -- <i>mostly</i>. I say <i>mostly</i>, because in that reference they gloss over the issues surrounding authentication via OAUTH2. It turns out that the aspects surrounding authentication were the most awkward to resolve, so I'll provide some details on how to get this working.<br />
<br /></div>
<h3>
Source-code snippet</h3>
<div>
<br />
Here is the C# code-listing containing the essential elements for wrapping the calls to the deployed model on Google Cloud ML (for the specific deployed model and version described above). The code contains all the key components including (i) a convenient class for formatting the image to be sent, (ii) the code required for authentication via OAUTH2; (iii) the code to make the actual call via RESTful web-service to the appropriate end-point for the model running on the Google Cloud ML; (iv) code for interpreting the results returned from the prediction including the parsing of the bounding boxes, and filtering results against a specified threshold score. The results are packaged into XML, but this is entirely optional and can instead be packaged into whatever format you wish. Hopefully the code is self-explanatory. <i>GOTCHA: for reasons unknown to me at least, specification of model version caused a JSON parsing failure. The workaround was to leave the version parameter blank in the method call. This forces Google Cloud ML to use the assigned default version for the given model. This default assignment can be easily adjusted via the <u><a href="https://console.cloud.google.com/mlengine" target="_blank">Google Compute Cloud portal</a> </u>introduced earlier.</i><br />
<i><b></b></i></div>
<div>
<br />
<br /></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">using System;<br />using System.Collections.Generic;<br />using System.Net.Http;<br />using System.Net.Http.Headers;<br />using System.Text;<br />using System.Threading.Tasks;<br />using Google.Apis.Auth.OAuth2;<br />using Newtonsoft.Json;<br />using System.IO;<br />using System.Xml;</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span></div>
<span style="font-family: "courier new" , "courier" , monospace;">namespace prediction_client<br />{</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> class Image<br /> {<br /> public String imageBase64String { get; set; }</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> public String imageAsJsonForTF; //the request body in the JSON shape expected by the exported model</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> </span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> //Constructor<br /> public Image(string imageBase64String)<br /> {<br /> this.imageBase64String = imageBase64String;<br /> this.imageAsJsonForTF = "{\"instances\": [{\"b64\":\"" + this.imageBase64String + "\"}]}";<br /> }<br /> }</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span>
<span style="font-family: "courier new" , "courier" , monospace;"> class Prediction<br /> { <br /> //For object detection<br /> public List<Double> detection_classes { get; set; }<br /> public List<Double> detection_boxes { get; set; }<br /> public List<Double> detection_scores { get; set; }</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><br /> public override string ToString()<br /> {<br /> return JsonConvert.SerializeObject(this);<br /> }<br /> }</span><br />
<span style="font-family: "courier new";"><br /></span>
<span style="font-family: "courier new";"> class PredictClient<br /> {</span><br />
<span style="font-family: "courier new";"> private HttpClient client;</span><br />
<span style="font-family: "courier new";"> public PredictClient()<br /> {<br /> this.client = new HttpClient();<br /> client.BaseAddress = new Uri("https://ml.googleapis.com/v1/");<br /> client.DefaultRequestHeaders.Accept.Clear();<br /> client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));</span><br />
<span style="font-family: "courier new";"> //Set infinite timeout for long ML runs (default 100 sec)<br /> client.Timeout = System.Threading.Timeout.InfiniteTimeSpan;<br /> }</span><br />
<span style="font-family: "courier new";"> </span><br />
<span style="font-family: "courier new";">public async Task<string> Predict<I, O>(String project, String model, string instances, String version = null)<br />{<br /> var version_suffix = version == null ? "" : $"/version/{version}";<br /> var model_uri = $"projects/{project}/models/{model{version_suffix}";<br /> var predict_uri = $"{model_uri}:predict";</span><br />
<br />
<span style="font-family: "courier new";">//See https://developers.google.com/identity/protocols/OAuth2<br />//Service Accounts which is what should be used here rather than </span><br />
<span style="font-family: "courier new";">//DefaultCredentials...<br /> </span><br />
<span style="font-family: "courier new";">// Get active credential from credentials json file distributed with </span><br />
<span style="font-family: "courier new";">// app</span><br />
<span style="font-family: "courier new";">// NOTE: need to use App_data folder since cannot put files in bin </span><br />
<span style="font-family: "courier new";">// on Azure web-</span><span style="font-family: "courier new";">service...</span><br />
<span style="font-family: "courier new";">string credPath = System.Web.Hosting.HostingEnvironment.MapPath(@"~/App_Data/**********-********.json"); </span><br />
<br />
<span style="font-family: "courier new";"> var json = File.ReadAllText(credPath);<br /> Newtonsoft.Json.Linq.JObject cr = (Newtonsoft.Json.Linq.JObject)JsonConvert.DeserializeObject(json);<br /> string s = (string)cr.GetValue("private_key");<br /> // Create an explicit ServiceAccountCredential </span><span style="font-family: "courier new";"></span><br />
<span style="font-family: "courier new";"> // credential </span><br />
<span style="font-family: "courier new";"> ServiceAccountCredential credential = null;<br /> credential = new ServiceAccountCredential(<br /> new ServiceAccountCredential.Initializer((string)cr.GetValue("client_email"))//("client_email"))<br /> {<br /> Scopes = new[] { "https://www.googleapis.com/auth/cloud-platform" }<br /> }.FromPrivateKey((string)cr.GetValue("private_key")));//.FromCertificate(certificate));</span><br />
<span style="font-family: "courier new";"> <br /> var bearer_token = await credential.GetAccessTokenForRequestAsync().ConfigureAwait(false);</span><br />
<span style="font-family: "courier new";"><br /></span>
<span style="font-family: "courier new";"> client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", bearer_token);</span><br />
<span style="font-family: "courier new";"> var request = instances;<br /> var content = new StringContent(instances, Encoding.UTF8, "application/json");</span><br />
<span style="font-family: "courier new";"><br /> var responseMessage = await client.PostAsync(predict_uri, content);<br /> responseMessage.EnsureSuccessStatusCode();</span><br />
<span style="font-family: "courier new";"> var responseBody = await responseMessage.Content.ReadAsStringAsync();</span><br />
<span style="font-family: "courier new";"> return responseBody;<br /> }<br /> }<br /> </span><br />
<span style="font-family: "courier new";"> class PredictionCaller<br /> {<br /> static PredictClient client = new PredictClient();<br /> private String project = "************";<br /> private String model = "DetectPsymbolOnOSMMap";<br /> private String version = "v3";</span><br />
<span style="font-family: "courier new";"> </span><br />
<span style="font-family: "courier new";"> //Only show results with score >=this </span><br />
<span style="font-family: "courier new";"> private double thresholdSuccessPercent = 0.95;<br /> </span><br />
<span style="font-family: "courier new";">private String imageBase64String;<br /> public string resultXmlStr = null;</span><br />
<span style="font-family: "courier new";"> </span><br />
<span style="font-family: "courier new";"> //Constructor <br /> public PredictionCaller(string project, string model, double thresholdSuccessPercent, string imageBase64String)<br /> {<br /> this.project = project;<br /> this.model = model;<br /> //this.version = version;//OMIT and force use of DEFAULT version<br /> this.thresholdSuccessPercent = thresholdSuccessPercent;<br /> this.imageBase64String = imageBase64String;<br /> RunAsync().Wait();<br /> }</span><br />
<br />
<span style="font-family: "courier new";"> public async Task RunAsync()</span><br />
<span style="font-family: "courier new";"> {<br /> string XMLstr = null;<br /> string errStr = null;<br /> try<br /> {<br /> Image image = new Image(this.imageBase64String);<br /> var instances = image.imageAsJsonForTF;</span><br />
<span style="font-family: "courier new";"> <br /> string responseJSON = await client.Predict<String, Prediction>(this.project, this.model, instances).ConfigureAwait(false); //version blank to force use of default version for model </span><br />
<span style="font-family: "courier new";">//since version mechanism not working via json ???</span><span style="font-family: "courier new";"><br /></span><br />
<span style="font-family: "courier new";"> dynamic response = JsonConvert.DeserializeObject(responseJSON);</span><br />
<span style="font-family: "courier new";"> </span><span style="font-family: "courier new";">int numberOfDetections = Convert.ToInt32(response.predictions[0].num_detections);</span><br />
<span style="font-family: "courier new";"><br /></span>
<span style="font-family: "courier new";">//Create XML of detection results<br /> XMLstr = "<PredictionResults Project=\"" + project + "\" Model =\"" + model + "\" Version =\"" + version + "\" SuccessThreshold =\"" + thresholdSuccessPercent.ToString() + "\">";</span><br />
<span style="font-family: "courier new";"> try<br /> {<br /> for (int i = 0; i < numberOfDetections; i++)<br /> {<br /> double score = (double)response.predictions[0].detection_scores[i];<br /> double[] box = new double[4];<br /> for (int j = 0; j < 4; j++)<br /> {<br /> box[j] = (double)response.predictions[0].detection_boxes[i][j];<br /> }<br /> </span><span style="font-family: "courier new";"> </span><br />
<span style="font-family: "courier new";">//See //https://www.tensorflow.org/versions/r0.12/api_docs/python/image/working_with_bounding_boxes<br /> double box_ymin = (double)box[0]; </span><br />
<span style="font-family: "courier new";"> double box_xmin = (double)box[1];<br /> double box_ymax = (double)box[2];<br /> double box_xmax = (double)box[3];</span><br />
<span style="font-family: "courier new";"> //Just include if score better than threshold%<br /> if (score >= thresholdSuccessPercent)</span><br />
<span style="font-family: "courier new";"> { </span><br />
<span style="font-family: "courier new";"> try<br /> {<br /> XMLstr += "<Prediction Score=\"" + score.ToString() + "\" Xmin =\"" + box_xmin.ToString() + "\" Xmax =\"" + box_xmax.ToString() + "\" Ymin =\"" + box_ymin.ToString() + "\" Ymax =\"" + box_ymax.ToString() + "\"/>";<br /> }<br /> catch (Exception E)<br /> {<br /> errStr += "<Error><![CDATA[" + E.Message + "]]></Error>";<br /> }<br /> }<br /> }<br /> }<br /> catch (Exception E)<br /> {<br /> errStr += "<Error><![CDATA[" + E.Message + "]]></Error>";<br /> }<br /> finally<br /> {<br /> if (!string.IsNullOrWhiteSpace(errStr))<br /> {<br /> XMLstr += errStr;<br /> }<br /> XMLstr += "</PredictionResults>";<br /> }</span><br />
<span style="font-family: "courier new";"> //safety test that XML good<br /> XmlDocument xmlDoc = new XmlDocument();<br /> xmlDoc.LoadXml(XMLstr); <br /> }<br /> catch (Exception e)<br /> {<br /> XMLstr = "<Error>CLOUD_ML_ENGINE_FAILURE</Error>";<br /> }<br /> this.resultXmlStr=XMLstr;<br /> }</span><span style="font-family: "courier new";"><br /></span><br />
<span style="font-family: "courier new" , "courier" , monospace;">}</span><br />
<div>
<b><span style="font-family: "courier new" , "courier" , monospace;"></span><br /></b></div>
<b><br /></b>
<br />
For the <a href="http://parkingradar.org/" target="_blank">ParkingRadar</a> application, I actually built the above code into a RESTful web-service hosted on Microsoft Azure cloud where some of the <a href="http://parkingradar.org/" target="_blank">ParkingRadar</a> back-end code-stack resides. The corresponding WebApi controller code looks like this:<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">using System;<br />using System.Collections.Generic;<br />using System.Linq;<br />using System.Net;<br />using System.Net.Http;<br />using System.Web.Http;<br />namespace FlyRestful.Controllers<br />{<br /> public class Parameters<br /> {<br /> public string project { get; set; }<br /> public string model { get; set; }<br /> public string thresholdSuccessPercent { get; set; }<br /> public string imageBase64String { get; set; }<br /> }<br /> public class GoogleMLController : ApiController<br /> {<br /> [Route("***/********")] //route omitted from BLOG post<br /> [HttpPost]<br /> public string PerformPrediction([FromBody] Parameters args)<br /> {<br /> string result = null;<br /> try<br /> {<br /> string model = args.model;<br /> string project = args.project;<br /> string thresholdSuccessPercent = args.thresholdSuccessPercent;<br /> string imageBase64String = args.imageBase64String;</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> prediction_client.PredictionCaller pc = new prediction_client.PredictionCaller(project, model, double.Parse(thresholdSuccessPercent), imageBase64String); <br /> result = pc.resultXmlStr;<br /> }<br /> catch (Exception E)<br /> {<br /> result = E.Message;<br /> }<br /> return result;<br /> }<br /> } <br />}</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span>
<br />
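For reference, the XML which this web-service returns (as assembled by the wrapper code above) is shaped as follows, with purely illustrative attribute values; note that WebApi wraps the returned string in JSON string delimiters, which is why the client snippet below strips them before parsing:<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;"><PredictionResults Project="my-project" Model ="DetectPsymbolOnOSMMap" Version ="v3" SuccessThreshold ="0.95"><br /> <Prediction Score="0.997" Xmin ="0.41" Xmax ="0.61" Ymin ="0.32" Ymax ="0.52"/><br /></PredictionResults></span><br />
<br />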
...and below is an example client-side caller to this RESTful web-service (snippet taken from a C# Windows console app). This sample includes (i) code for converting a test '.png' image file into the appropriate format for encoding via JSON for consumption by the aforementioned web-service (and passing on to the TensorFlow model); (ii) calling the predictor and retrieving the prediction results; (iii) converting the returned bounding boxes into latitude, longitude offsets representing the centre-point location of each bounding-box (since the centre-point is all that <a href="http://parkingradar.org/" target="_blank">ParkingRadar</a> actually cares about!). <br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">static async Task RunViaWebService()</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> {</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> try {</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> HttpClient client = new HttpClient();</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> client.BaseAddress = new Uri("https://*******/***/"); //hidden on BLOG</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> client.DefaultRequestHeaders.Accept.Clear();</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> //Set infinite timeout for long ML runs (default 100 sec)</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> client.Timeout = System.Threading.Timeout.InfiniteTimeSpan;</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> var predict_uri = "*******"; //hidden on BLOG</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> Dictionary<string, string> parameters = new Dictionary<string, string>(); </span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> parameters.Add("project", project);</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> parameters.Add("model", model);</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> parameters.Add("thresholdSuccessPercent", thresholdSuccessPercent.ToString());</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> //Load a sample PNG </span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> string fullFile = @"ExampleRuntimeImages\FromScreenshot.png";</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">            //Load image file into bytes and then into Base64 string format for transport via JSON </span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> parameters.Add("imageBase64String", System.Convert.ToBase64String(System.IO.File.ReadAllBytes(fullFile)));</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> var jsonString = JsonConvert.SerializeObject(parameters);</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> var content = new StringContent(jsonString, Encoding.UTF8, "application/json");</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> var responseMessage = await client.PostAsync(predict_uri, content);</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> responseMessage.EnsureSuccessStatusCode();</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> var resultStr = await responseMessage.Content.ReadAsStringAsync();</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> Console.WriteLine(resultStr);</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">            //Now create lat-lon of centre point for each bounding box</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> //See http://doc.arcgis.com/en/data-appliance/6.3/reference/common-attributes.htm</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">            //Since this data set was created at ZOOM 17 on the standard web mercator</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">            //Mapscale=1:4514 , 1 pixel=0.00001 decimal degrees (1.194329 m at the equator)</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">            // See http://wiki.openstreetmap.org/wiki/Zoom_levels 0.003/256 = 1.1719e-5</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> double pixelsToDegrees = 0.000011719;</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> //Strip out string delimiters</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> resultStr = resultStr.Remove(0, 1);</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> resultStr = resultStr.Remove(resultStr.Length - 1, 1);</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> resultStr = resultStr.Replace("\\", "");</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> XmlDocument tempDoc = new XmlDocument();</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> tempDoc.LoadXml(resultStr);</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> XmlNodeList resNodes=tempDoc.SelectNodes("//Prediction");</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> if (resNodes != null)</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> {</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> foreach (XmlNode res in resNodes)</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> { </span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> double Xmin = double.Parse(res.SelectSingleNode("@Xmin").InnerText);</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> double Xmax = double.Parse(res.SelectSingleNode("@Xmax").InnerText);</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> double Ymin = double.Parse(res.SelectSingleNode("@Ymin").InnerText);</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> double Ymax = double.Parse(res.SelectSingleNode("@Ymax").InnerText);</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> double lat = testLat + pixelsToDegrees * (0.5 - 0.5 * (Ymin + Ymax)) * imageHeightPx;</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">                        double lon = testLon + pixelsToDegrees * (0.5 * (Xmin + Xmax) - 0.5) * imageHeightPx; //image is square, so height == width in px</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> Console.WriteLine("LAT " + lat.ToString() + ", LON " + lon.ToString());</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> }</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> }</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"></span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> }</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> catch (Exception E)</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> {</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> Console.WriteLine(E.Message);</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> } </span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> }</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"></span><br />
<span style="font-family: "courier new" , "courier" , monospace;"></span><br />
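To make the box-to-coordinates conversion concrete: with <b>pixelsToDegrees</b> = 0.000011719 and a 300 px image, a detection with normalised Ymin/Ymax of 0.30/0.50 has its centre 0.10 of the image height above the mid-line, i.e., 0.000011719 × 0.1 × 300 ≈ 0.00035 degrees of latitude north of the screenshot's centre coordinate.<br />
<br />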
With the RESTful web-service (and suitable client-code) deployed on Azure, the entire project is complete. The goals have been met. The "P" symbol object detector is now live "in production" within the <a href="http://parkingradar.org/" target="_blank">ParkingRadar</a> back-end code-stack, and has been running successfully for some weeks now.<br />
<br />
<h2>
Closing Comments</h2>
If you have read this post (and especially the <a href="http://flylogical.blogspot.com/2018/01/object-detection-with-tensorflow-simple.html" target="_blank">previous post</a>) in its entirety, I expect you will agree that the process for implementing a Deep Learning object-detection model in TensorFlow can reasonably be described as <i>tedious</i>. Moreover, if you have actually implemented a similar model in a similar way, you will know just how tedious it can be. I hope the code snippets provided here may be helpful if you happen to get stuck along the way.<br />
<br />
All that said, it is nevertheless quite remarkable, to me at least, that I was able to create a Deep Learning object detector and deploy it in "production" to the (serverless) cloud, all with open-source software, albeit with some bumps in the road. Google should be congratulated on making all that possible.<br />
<br />
<h3>
Do I think the Deep Learning model can be considered in any way "Intelligent" ?</h3>
No, I don't. I see it as a powerful computer program which utilises a cascade of nonlinear elements to perform the complex task of pattern recognition. Like all computer programs, it needs to be told precisely what to do -- and in the specific case of these Deep Learning neural nets -- it needs to be told not just once, but thousands of times via the painstakingly prepared training images. Its abilities are also very narrow and brittle. Case in point, with the "P" symbol detector, because it has been trained on images where each "P" symbol is enclosed in a separate, isolated bounding box, it completely fails to recognise "P" symbols which are closer together than the dimension of the bounding box. Or put another way, it cannot handle images where the bounding boxes overlap one another. One could imagine trying to create a further set of training images which attempt to cater for all possibilities of such overlaps: but the number of possibilities to cover would be very large, maybe impractically large. By contrast, I could imagine asking a young child to draw a circle round every "P" on the image. I would only have to demonstrate once (or maybe even not at all, the description being sufficient), and the child would "get it", and would circle all the "P"s they could find, no matter how close they are to each other. That is the difference. And the difference is huge.<br />
<h3>
<br />The Future</h3>
In the near to mid-term, I aim to (i) investigate other open-source AI frameworks such as AWS SageMaker; (ii) give MATLAB Deep Learning a run for its money; (iii) hope that Google will enhance their Cloud ML offering by providing access to GPUs (or, better still, TPUs).<br />
<br />
<br />
Training an Object Detector with TensorFlow: a simple map-reading example<br />
<br />
As I delve into the field of Deep Learning, here's a description of how I built and deployed an object detector using Google's TensorFlow framework. The central purpose was to gain an understanding of the steps involved in building such a thing, since I have various Machine Learning / Artificial Intelligence projects in the pipeline for 2018 for which I need to train myself. The secondary purpose was to give <a href="http://parkingradar.org/" target="_blank">ParkingRadar</a> its first brain (beyond just its database, aka its memory of parking spots). <br />
<br />
In this and a <a href="http://flylogical.blogspot.com/2018/01/deploying-tensorflow-object-detector.html" target="_blank">follow-on post</a>, I provide a "beans-to-cup" overview of the entire process. I make reference to various online resources which were extremely helpful, and which present many of the details in a clear manner, so there is no need for me to repeat them here. I do, however, point out various "gotchas" which frustrated my progress, and I present the corresponding workarounds that worked for me, with the aim of hopefully saving you some trouble if you face similar issues in your own AI endeavours.<br />
<br />
I make no apologies for any technical decisions which may be considered sub-optimal by those who know better. As I said, this was an AI learning exercise for me, first and foremost.<br />
<br />
<h2>
The Goal</h2>
<div>
For a given geographical (latitude, longitude) location on <a href="http://parkingradar.org/" target="_blank">ParkingRadar</a>'s map, determine how far away it is from a "P" symbol on the map. The screenshot below shows examples of such "P" symbols. Each represents an officially recognised parking spot or lot.</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiIOusIQiCoz0tsCj0m28yh-RNTWmjtB9OreQoPyv_2DHBF9SpBOEtz53_uoTqFk5ouLQfzlqhFGg1df-O2lp27h8t56Zb0an011EZhZBkQIqTHlV38xn0d3RpGQn93tHSeccy3C-TDvWM/s1600/FromScreenshot_LAT_51_4906363_LON_-0_1136164.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="300" data-original-width="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiIOusIQiCoz0tsCj0m28yh-RNTWmjtB9OreQoPyv_2DHBF9SpBOEtz53_uoTqFk5ouLQfzlqhFGg1df-O2lp27h8t56Zb0an011EZhZBkQIqTHlV38xn0d3RpGQn93tHSeccy3C-TDvWM/s1600/FromScreenshot_LAT_51_4906363_LON_-0_1136164.png" /></a></div>
<div>
<br /></div>
<h2>
The Solution Plan</h2>
The high-level plan to reach the specified goal comprised the following steps:<br />
<br />
<ol>
<li>Prepare a suite of screenshot images specifically selected to contain such "P" symbols in <b>known</b> relative positions (e.g., with respect to the center of the given screenshot)</li>
<li>Use these images to train an AI Deep Learning object detection algorithm to recognise the "P" symbols <b>and </b>determine their relative positions (from which the physical coordinates in terms of latitude and longitude can be obtained)</li>
<li>Deploy the trained AI in a suitable "production" framework for automated use by <a href="http://parkingradar.org/" target="_blank">ParkingRadar</a></li>
</ol>
<h2>
The Toolkit<b></b></h2>
<div>
The first decision to be made was what AI framework to use, and thereby what suite of software development tools to provision for the tasks ahead. The field is exploding right now, with many competing AI frameworks and platforms to choose from. </div>
<div>
<br /></div>
<h3>
MATLAB ?</h3>
<div>
My first instinct was to use MATLAB, not least since the latest version focuses heavily on Machine Learning & Deep Learning (see <a href="https://www.mathworks.com/solutions/deep-learning/examples.html">https://www.mathworks.com/solutions/deep-learning/examples.html</a> ). Moreover, I've used MATLAB extensively over many years, for prototyping as well as generating production code, across a variety of fields. However, I chose <i><b>not</b></i> to use MATLAB for present purposes -- and not because it is a legacy, closed-source platform with commensurate annual subscription fees -- but because it was not obvious how scalable a MATLAB-based solution would be for deployment of the trained model into production. I am confident that MATLAB would be an effective framework for rapid prototyping and training of AI models, but production deployment is a different matter. That said, we have recently procured the necessary MATLAB toolboxes for investigating AI models. I aim to perform some MATLAB-based AI experiments in the near future, and I will report my findings in due course.</div>
<div>
<br /></div>
<h3>
GLUON ?</h3>
<div>
As an extensive user of Amazon Web Services AWS cloud-computing infrastructure for many years, I was enthusiastic about using their recently-announced open-source GLUON library for Deep learning (see <a href="https://aws.amazon.com/blogs/aws/introducing-gluon-a-new-library-for-machine-learning-from-aws-and-microsoft/">https://aws.amazon.com/blogs/aws/introducing-gluon-a-new-library-for-machine-learning-from-aws-and-microsoft/</a> ). So much so that I attempted the online step-by-step tutorials, but kept getting a python <i><b>kernel crash </b></i>right when it mattered most: when trying to run their object-detection sample. In frustration, I abandoned that approach. I would like to try again at some point (presumably once whatever bugs need to be ironed-out have been ironed-out?) because I would welcome being able to remain within the AWS eco-system for AI along with the many other areas where I routinely use AWS. <i>STOP PRESS: when making my decision on choice of AI framework, <a href="https://aws.amazon.com/sagemaker/" target="_blank">AWS SageMaker</a> did not exist. It does now. Something to explore in a follow-on project.</i></div>
<div>
<i><br /></i></div>
<h3>
TensorFlow</h3>
<div>
Cutting to the chase, after the GLUON failures, and having abandoned MATLAB for now, I settled upon Google's TensorFlow framework. Not least since TensorFlow seems to be the most widely-used AI framework these days, across all industries, for both prototyping and production deployment of AI models. Moreover, just around the time I was making my decision, Google open-sourced their own object-detection models and API built in TensorFlow (see <a href="https://github.com/tensorflow/models/tree/master/research/object_detection">https://github.com/tensorflow/models/tree/master/research/object_detection</a> ). That clinched it. </div>
<div>
<br /></div>
<h3>
Python</h3>
<div>
The decision to use TensorFlow, <i>de facto</i> led to the corresponding decision to use Python for the necessary software development surrounding the AI models since the two (TensorFlow and Python) go pretty-much hand-in-hand. Although I have been writing software in my profession for multiple decades, I had never used Python until now. So this was interesting: I had committed to embark on becoming sufficiently fluent in a new programming language, Python, in order to be able to use TensorFlow effectively. But I drew significant comfort from the fact that I was not alone: by all accounts, Python has (quite recently) become the industry-standard language for data-analysis, Machine Learning, and Deep Learning. I was confident that someone before me would have faced whatever hurdles I was about to cross, and that there would be some solution or workaround somewhere on <a href="https://stackoverflow.com/" target="_blank">Stack Overflow</a>. This turned out to be largely true, thereby validating my choice. So, if you are interested in pursuing software development for Machine Learning and/or Deep Learning, and if you don't yet know Python, first thing to do is learn Python. It is simple to learn, massively supported, and extremely powerful.</div>
<div>
<br /></div>
<h3>
Jupyter Notebooks</h3>
I had never come across these until now -- but became an instant fan. They make programming in Python very simple. Also, many of the relevant online tutorials are built around accompanying Jupyter Notebooks available via GitHub. In fact, in this entire project, I never needed to write any Python code outside of the Jupyter Notebooks environment.<br />
<h3>
</h3>
<h3>
</h3>
<h3>
Windows or Linux ?</h3>
<div>
Having decided upon TensorFlow and Python for the AI development, the decision to use Linux rather than Windows as the base operating system was essentially obvious. Again, this was going against the grain since I have been using the Windows operating system almost exclusively for all my software development environments over the many years to date. However, Google search for TensorFlow and Python quickly reveals that Linux is the platform of choice for these tools. Wanting to minimise future headaches associated with inappropriate choice of operating system, I therefore went with the herd on this: namely, I opted for Linux as the base operating system for training the AI models. There is, however, a footnote to this. When it came to building the deployment and production layers (i.e., after training the AI models), I reverted to Windows (and C#). More on that later.</div>
<div>
<br /></div>
<h3>
Cloud-based Virtual Machine for Training</h3>
<div>
Part of the reason for being somewhat casual about choice of operating system is that I knew from the outset that I would be developing the AI models and software on a virtual machine hosted on the cloud since I had migrated all my software development activities from local "bare metal" to public cloud servers, many years ago. I also knew that the production deployment would be cloud-hosted, alongside the rest of the <a href="http://parkingradar.org/" target="_blank">ParkingRadar</a> back-end software stack.</div>
<div>
<br /></div>
<h3>
AWS Deep Learning AMIs</h3>
<div>
Given my familiarity with AWS, it was a natural choice to deploy an AWS Deep Learning AMI (see <a href="https://aws.amazon.com/blogs/ai/get-started-with-deep-learning-using-the-aws-deep-learning-ami/">https://aws.amazon.com/blogs/ai/get-started-with-deep-learning-using-the-aws-deep-learning-ami/</a> ) as the base image for my cloud-based virtual machine for training the AI. Specifically, I chose the Ubuntu version (rather than the Amazon Linux), reason being that Ubuntu is widely used within and outwith the AWS universe -- so one could expect there would be more online support with any issues possibly encountered. AWS Deep Learning AMIs come with all the core AI framework software pre-installed including TensorFlow and Python. Moreover, they impart the great advantage that the underlying hardware can be switched seamlessly from low-cost CPU-based EC2 instances (for initial development of the software and models), to more expensive but much more powerful GPU-based EC2 instances for actually training the models. So far, so good for the training environment. But the choice of the appropriate infrastructure / environment for eventually deploying the trained models into production was less obvious (covered later).</div>
<div>
<b><br /></b></div>
<h3>
Configuring all the Kit</h3>
Configuring the (mostly Python / Jupyter Notebooks / TensorFlow) development environment (on the Ubuntu server spawned from the AWS Deep Learning AMI) was relatively straightforward. I just followed the relevant online tutorials. Encountered no significant "gotchas" to speak of. Having installed the core tools, the next step was to install the Google Object Detection API which includes the relevant TensorFlow models. The instructions found in the assorted links <a href="https://github.com/tensorflow/models/tree/master/research/object_detection" target="_blank">here</a> worked without a hitch.<br />
<div>
<br /></div>
<h2>
Solution Details</h2>
<h3>
Useful Starting Example</h3>
<div>
"<a href="https://towardsdatascience.com/how-to-train-your-own-object-detector-with-tensorflows-object-detector-api-bec72ecfe1d9" target="_blank">How to train your own Object Detector with TensorFlow's Object Detector API</a>" was a most helpful online resource (with code samples) for gaining rapid familiarity with the API. I used many of the elements presented there, with some necessary modifications, the significant ones of which are presented below. </div>
<div>
<br /></div>
<h3>
Creating the Learning Data Set</h3>
<h4>
The Raw Images</h4>
<div>
The preparation of the training data set raw images was the most time consuming and labour-intensive task of all (from what I've read, this is a common refrain). These are the steps I followed:</div>
<div>
<br /></div>
<ul>
<li>Used the <a href="http://wiki.openstreetmap.org/wiki/Overpass_API" target="_blank">OpenStreetMaps API</a> to auto-capture the coordinates (latitude, longitude) identifying the precise locations of known "P" symbols (since <a href="http://parkingradar.org/" target="_blank">ParkingRadar</a> uses OpenStreetMaps, and the known parking spots are encoded in the underlying meta data)</li>
<li>Used a screenshot grabbing app to create a 300 x 300 px image file (in '.png' format) of the <a href="http://parkingradar.org/" target="_blank">ParkingRadar</a> OpenStreetMaps map centred on each of the coordinates identified in the previous step. Regarding the screenshot grabbing app: I started by using the open-source Scrapy-Splash framework installed on the Ubuntu machine (see <a href="https://github.com/scrapy-plugins/scrapy-splash">https://github.com/scrapy-plugins/scrapy-splash</a> ). However, it was difficult to properly control the pre-delays (such that the map images got fully-loaded before the captures occurred), and many of the captured images were partially blank or, even worse, contained incomplete "P" symbols (which would have had significant adverse effects on the training). I was unable to find a suitable alternate screenshot-capturing library either in Python or C#, so I opted for a commercial web-service (<a href="https://urlbox.io/">https://urlbox.io</a>) which comes with both Python and C# (plus Node.js, Ruby, PHP, and Java) sample wrapper code (I used C#, more on that later).</li>
<li>Visually checked each file to ensure that the "P" symbol was indeed in the centre by adjusting the latitude and longitude coordinates (by hand, very laborious for approximately 1000 images -- the irony wasn't lost on me that this was precisely the task that I was hoping my AI could eventually do!). Then, for each properly-centred file, I computed a random offset (defined in terms of horizontal and vertical pixels) from the centre, and re-took the screenshot centred on these controlled offset coordinates (converted to latitude and longitude via the scale-factors in <a href="http://wiki.openstreetmap.org/wiki/Zoom_levels">http://wiki.openstreetmap.org/wiki/Zoom_levels</a>). In this way, the final training images contained "P" symbols in randomly-distributed but wholly-known locations. <i>GOTCHA: in my very early attempts at training the AI, I used only centred (not randomly offset) "P" images. The AI training converged successfully, but the trained model could only detect "P" images that were centred. Perhaps I should have known or anticipated this. Anyway, by introducing the known random offsets, the later attempts were successful in detecting "P" symbols anywhere in the images.</i></li>
<li>Created a set of bounding-box coordinates for the "P" symbol in each image, based on the known pixel offsets from the previous step. For object detection, specification of these bounding boxes along with the training images is an essential component in the training of the AI, telling it where the known "P" symbols are located, so it can learn how to identify them. In my application, I had the advantage that the map images could all be defined at a single zoom-factor (I chose "17" in OpenStreetMaps) which meant that the "P" symbols would always be the same size (both in training and when deployed in production). This meant that the bounding boxes could all be defined as squares of a fixed size. I chose 60 x 60 px which happened to enclose the "P" symbol reasonably tightly. Also, I had read that bounding boxes should generally be about 15% of the entire image. So, 60 x 60 px seemed to be about right for my 300 x 300 px image size. For convenience, rather than storing the bounding box coordinates in a separate file, the centre-point of the bounding box for a given image was encoded within the filename. Then when processing the raw files into the format required for feeding to TensorFlow, the bounding box coordinates were computed programmatically based on the centre-points extracted from the filenames, coupled with the specified square size (see the short sketch after this list).</li>
</ul>
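<div>
As flagged in the list above, here is a minimal sketch of the offset-and-filename bookkeeping in Python (illustrative only: the degrees-per-pixel value is the zoom-17 approximation used throughout this project, the ±100 px range is a hypothetical choice, and the sign conventions would need checking against your map's orientation):</div>
<div>
<br /></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">import random<br /><br />DEG_PER_PX = 0.000011719 # approx. degrees per pixel at OSM zoom 17 (0.003/256)<br /><br />def offset_capture_point(lat, lon, max_px=100):<br />    # Pick a known random pixel offset from the image centre<br />    px = random.randint(-max_px, max_px) # +ve = right<br />    py = random.randint(-max_px, max_px) # +ve = down<br />    # Shift the screenshot centre accordingly (screen y grows downward)<br />    new_lat = lat - py * DEG_PER_PX<br />    new_lon = lon + px * DEG_PER_PX<br />    # Encode the known offsets in the filename,<br />    # e.g. lat_52_5728082619_lon_0_834548395079_px_-60_py_-8.png<br />    fname = 'lat_{}_lon_{}_px_{}_py_{}.png'.format(<br />        str(lat).replace('.', '_'), str(lon).replace('.', '_'), px, py)<br />    return new_lat, new_lon, fname</span></div>
<div>
<br /></div>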
<div>
The screenshots below show a couple of examples of the finally-prepared training images. Each image is 300 x 300 px. The accompanying bounding boxes (not shown) measure 60 x 60 px, centred on the "P" symbol in each image. Note that there is only one "P" symbol in each training image, located at a randomly selected offset from the centre. The final data set comprised 1083 such files of which 900 were used for training the model, and the remaining 183 were used for validation / testing.</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhLQSsPj3fZC7qGKGt_jQowM73_iE3IPD_mAEQKpQswCIQ1XBuXbn5uuAL9VXFMETzs0Ez86cLLj2246cXTpDAJ7m_pUOFF6sggdATomcR8pFo8MNbArhAS_78kB_qJCit82nmU4rY1HD8/s1600/lat_52_5728082619_lon_0_834548395079_px_-60_py_-8.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="300" data-original-width="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhLQSsPj3fZC7qGKGt_jQowM73_iE3IPD_mAEQKpQswCIQ1XBuXbn5uuAL9VXFMETzs0Ez86cLLj2246cXTpDAJ7m_pUOFF6sggdATomcR8pFo8MNbArhAS_78kB_qJCit82nmU4rY1HD8/s1600/lat_52_5728082619_lon_0_834548395079_px_-60_py_-8.png" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiOMMea3pGTB4yXIGeuhLtSDi_26T1sxgURqFfeK_FybfJWkyTm_FkFIlNhxv0SmE9kymigVagrt5kRwzUiX5z9NFzSIqO8XhFXRK-S1hq6TU3HLhUx9vLMKZP0JQYdMGTZP-ot8kEEsNY/s1600/lat_53_0416942165_lon_-2_21482762555_px_-80_py_-80.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="300" data-original-width="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiOMMea3pGTB4yXIGeuhLtSDi_26T1sxgURqFfeK_FybfJWkyTm_FkFIlNhxv0SmE9kymigVagrt5kRwzUiX5z9NFzSIqO8XhFXRK-S1hq6TU3HLhUx9vLMKZP0JQYdMGTZP-ot8kEEsNY/s1600/lat_53_0416942165_lon_-2_21482762555_px_-80_py_-80.png" /></a></div>
<div>
<br /></div>
<div>
<br /></div>
<h4>
The TensorFlow TFRecord<b></b></h4>
<div>
TensorFlow doesn't consume the individual raw image files for training. Instead, the entire set of image files (and corresponding bounding boxes etc) need to be collated into a single entity called a TFRecord, which is then passed to the object detection model for training. Unfortunately the formal Google documentation is somewhat weak here -- especially in terms of providing worked examples. Moreover, the code is quite brittle, i.e., it is easy to get it wrong. Thankfully, however, those that have come before have shown the way. Based on the sample code in <a href="https://github.com/datitran/raccoon_dataset/blob/master/generate_tfrecord.py">https://github.com/datitran/raccoon_dataset/blob/master/generate_tfrecord.py</a> , here's my fully-working Python code for preparing the required TFRecord from the raw images described above. Note: a separate TFRecord is generated for the training image set and for the validation image set, respectively (commented/uncommented in code chunks as appropriate):</div>
<div>
<br /></div>
<div>
<br /></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">import os<br />import io<br />import tensorflow as tf<br />from PIL import Image #needed for Image.open below</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">from object_detection.utils import dataset_util</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"><br />flags = tf.app.flags</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">#flags.DEFINE_string('output_path', '', 'Path to output TFRecord')<br />#for use inside notebook, set the output dir explicitly since not a #command-line arg</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">#Set differently for training vs validation <br />#TRAINING SET<br />flags.DEFINE_string('output_path', '/risklogical/DeeplearningImages/TFRecords/train_PR_JustPOffsetV2.records', 'Path to output TFRecord')</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">#VALIDATION SET<br />#flags.DEFINE_string('output_path', '/risklogical/DeeplearningImages/TFRecords/validate_PR_JustPOffsetV2.records', 'Path to output TFRecord')</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"><br />FLAGS = flags.FLAGS</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">'''</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">where it should be obvious that the paths <b>/risklogical/DeeplearningImages/TFRecords/train_PR_JustPOffsetV2.records</b> and <b>/risklogical/DeeplearningImages/TFRecords/validate_PR_JustPOffsetV2.records </b>should be substituted with your own paths and filenames.</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">'''</span></div>
<b><br /></b>
<span style="font-family: "courier new" , "courier" , monospace;">def create_tf_fromfile(imageFile, boundingboxsize):<br /> # TODO(user): Populate the following variables from your example.<br /> # See https://github.com/datitran/raccoon_dataset/blob/master/generate_tfrecord.py<br /> filename=imageFile.encode('utf8') # Filename of the image. Empty if image is not from file<br /> <br /> #image_decoded = tf.image.decode_image(image_string)<br /> <br /> with tf.gfile.GFile(filename, 'rb') as fid:<br /> encoded_png = fid.read()<br /> <br /> encoded_png_io = io.BytesIO(encoded_png)<br /> image=Image.open(encoded_png_io) <br /> width, height = image.size<br /> <br /> #Get offsets from filename<br /> file_=os.path.basename(imageFile)<br /> pieces=file_.split('_')<br /> #latStr=pieces[1]+'.'+pieces[2]<br /> #lonStr=pieces[4]+'.'+pieces[5]<br /> pxStr=pieces[7]<br /> pyStr=pieces[9].replace('.png','')<br /><br /> </span><br />
<span style="font-family: "courier new" , "courier" , monospace;">    image_format = b'png'<br /> <br />    xmins = [] #normalized left x coords for bounding box<br />    xmaxs = [] #normalized right x coords for bounding box<br />    ymins = [] #normalized top y coords for bounding box<br />    ymaxs = [] #normalized bottom y coords for bounding box</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> classes_text = [] #string class name of bounding box<br /> classes = [] #integer class id of bounding box</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><br /> name="PARKING_NORMAL"<br /> #square bounding box centred on image of side </span><br />
<span style="font-family: "courier new" , "courier" , monospace;"></span><br />
<div>
<span style="font-family: "courier new" , "courier" , monospace;">'''</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">Programmatically define the bounding box as a square of side-length boundingboxsize whose location is offset from the image centre by (pxStr,pyStr). These are known, pre-defined quantities (see discussion above) which, for convenience, had been encoded in each raw filename and are extracted in the code a few lines above.</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">'''</span></div>
<span style="font-family: "courier new" , "courier" , monospace;">
</span><span style="font-family: "courier new" , "courier" , monospace;"> xmin=((width-boundingboxsize)/2)+int(pxStr)<br /> ymin=((height-boundingboxsize)/2)+int(pyStr)<br /> xmax=((width+boundingboxsize)/2)+int(pxStr)<br /> ymax=((height+boundingboxsize)/2)+int(pyStr)</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> classes_text.append(name.encode('utf8'))<br /> classes.append(1) #only one</span><br />
<div>
<br /></div>
<div>
<span style="font-family: "courier new";">'''</span></div>
<div>
<span style="font-family: "courier new";">Since I'm only looking for one class of objects to detect, namely the "P" symbols, only need to define one class for the TensorFlow object detector. Give it the (arbitrary) label PARKING_NORMAL</span></div>
<div>
<span style="font-family: "courier new";">'''</span><span style="font-family: "courier new" , "courier" , monospace;"></span></div>
<span style="font-family: "courier new" , "courier" , monospace;"><br /> xmins.append(float(xmin)/width) #normalised<br /> xmaxs.append(float(xmax)/width) #normalised<br /> ymins.append(float(ymin)/height) #normalised<br /> ymaxs.append(float(ymax)/height) #normalised<br /> <br /> tf_example = tf.train.Example(features=tf.train.Features(feature={<br /> 'image/height': dataset_util.int64_feature(height),<br /> 'image/width': dataset_util.int64_feature(width),<br /> 'image/filename': dataset_util.bytes_feature(filename),<br /> 'image/source_id': dataset_util.bytes_feature(filename),<br /> 'image/encoded': dataset_util.bytes_feature(encoded_png),<br /> 'image/format': dataset_util.bytes_feature(image_format),<br /> 'image/object/bbox/xmin': dataset_util.float_list_feature(xmins),<br /> 'image/object/bbox/xmax': dataset_util.float_list_feature(xmaxs),<br /> 'image/object/bbox/ymin': dataset_util.float_list_feature(ymins),<br /> 'image/object/bbox/ymax': dataset_util.float_list_feature(ymaxs),<br /> 'image/object/class/text': dataset_util.bytes_list_feature(classes_text),<br /> 'image/object/class/label': dataset_util.int64_list_feature(classes),<br /> }))<br /> return tf_example</span><br />
<div>
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span></div>
<div>
<br /></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">import os<br />from PIL import Image<br />imageFolder="/risklogical/DeeplearningImages/JustPRandomReGrabbed"</span></div>
<span style="font-family: "courier new" , "courier" , monospace;"></span><br />
<div>
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span></div>
<span style="font-family: "courier new" , "courier" , monospace;">
</span>
<br />
<div>
<span style="font-family: "courier new" , "courier" , monospace;">'''</span></div>
<span style="font-family: "courier new" , "courier" , monospace;">
</span>
<br />
<div>
<span style="font-family: "courier new" , "courier" , monospace;">This points to the folder containing the raw images described earlier. Obviously you would substitute for your own image folder</span></div>
<span style="font-family: "courier new" , "courier" , monospace;">
</span>
<br />
<div>
<span style="font-family: "courier new" , "courier" , monospace;">'''</span></div>
<span style="font-family: "courier new" , "courier" , monospace;">
</span>
<br />
<div>
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span></div>
<span style="font-family: "courier new" , "courier" , monospace;">
</span>
<br />
<div>
<span style="font-family: "courier new" , "courier" , monospace;">def main(_):<br /> writer = tf.python_io.TFRecordWriter(FLAGS.output_path) <br /> count=0<br /> actual_count=0<br /> mincount=0 # training<br /> maxcount=900 </span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> #mincount=901 # validation<br /> #maxcount=1083<br /> for root, dirs, files in os.walk(imageFolder):<br /> for file_ in files:<br /> if (count>=mincount and count<=maxcount):<br /> if os.path.getsize(os.path.join(root, file_)) > 5000: #only include files bigger than this otherwise may not be fully rendered<br /> tf_example=create_tf_fromfile(os.path.join(root, file_),60)# boxsize 60 px for OSM Zoom of 17, images 300X300<br /> writer.write(tf_example.SerializeToString()) <br /> actual_count=actual_count+1<br /> print(file_)<br /> count=count+1 <br /> writer.close() <br /> output_path = FLAGS.output_path #os.path.join(os.getcwd(), FLAGS.output_path)<br /> print('Successfully created the TFRecords: %s from %s files'%(output_path,str(actual_count)))<br /> <br />if __name__ == '__main__':<br /> tf.app.run() </span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span></div>
<div>
<br /></div>
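<div>
Before moving on to training, it is worth sanity-checking the generated TFRecord. Here is a minimal sketch (my own addition, not part of the recipe above) which counts the records and decodes the first one back into a tf.train.Example, assuming the training-set path used earlier:</div>
<div>
<br /></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">import tensorflow as tf<br /><br />record_path = '/risklogical/DeeplearningImages/TFRecords/train_PR_JustPOffsetV2.records'<br /><br />count = 0<br />first_example = None<br />for serialized in tf.python_io.tf_record_iterator(record_path):<br />    if first_example is None:<br />        first_example = tf.train.Example.FromString(serialized)<br />    count = count + 1<br /><br />print('Total records: %s' % count)<br /># Inspect a couple of fields of the first record<br />feature = first_example.features.feature<br />print('Width: %s' % feature['image/width'].int64_list.value)<br />print('Xmins: %s' % feature['image/object/bbox/xmin'].float_list.value)</span></div>
<div>
<br /></div>
<div>
If the record count matches the number of files reported by the script above, and the width and normalised xmin values look sensible, the TFRecord is probably well-formed.</div>
<div>
<br /></div>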
<h3>
The TensorFlow Object Detection Model</h3>
The <a href="https://github.com/tensorflow/models/tree/master/research/object_detection" target="_blank">Google Object Detection API</a> includes a variety of different pre-trained model architectures. Since my requirements emphasised accuracy over speed (given that the ultimate intention was to deploy the production model on a batch scheduler rather than for real-time individual detection, more on that later), I opted for the <b>Faster RCNN with Inception Resnet v2</b> <b>trained on the COCO dataset</b>. This was certainly not a scientifically informed choice: I simply read the background information on each model, and concluded that this ought to be suitable for my purposes. I was led by the notion that Google have been doing this for a long time and that their models are well-proven. It did not seem sensible for me to try and re-invent the wheel by attempting to define a brand new object detection neural network from scratch. By contrast, I took the approach (adopted by others before me) of starting with a pre-trained model, then training it further on my own images, in the hope that it would adapt and learn to detect my objects beyond those from its underlying dataset (COCO in this case). This turned out to be true.<br />
<br />
<h3>
Training the Object Detection Model</h3>
<h4>
The Configuration File</h4>
<div>
The TensorFlow training process is controlled via a configuration file. This can be taken "as is" from the <a href="https://github.com/tensorflow/models/tree/master/research/object_detection" target="_blank">Google Object Detection API</a> distribution, making only very minor adjustments. Here is the configuration file (<b>faster_rcnn.config</b>), with my minor adjustments in <b><i>bold italics</i></b>:</div>
<div>
<b><i><br /></i></b></div>
<div>
<b><i><br /></i></b></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"># Faster R-CNN with Inception Resnet v2, Atrous version;<br /># Configured for MSCOCO Dataset.<br /># Users should configure the fine_tune_checkpoint field in the train config as<br /># well as the label_map_path and input_path fields in the train_input_reader and<br /># eval_input_reader. Search for "PATH_TO_BE_CONFIGURED" to find the fields that<br /># should be configured.</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">model {<br /> faster_rcnn {<br /> num_classes: 1<br /> image_resizer {<br /> keep_aspect_ratio_resizer {<br /> min_dimension: <b><i>300</i></b><br /> max_dimension: <b><i>300</i></b><br /> }<br /> }<br /> feature_extractor {<br /> type: 'faster_rcnn_inception_resnet_v2'<br /> first_stage_features_stride: 8<br /> }<br /> first_stage_anchor_generator {<br /> grid_anchor_generator {<br /> scales: [0.25, 0.5, 1.0, 2.0]<br /> aspect_ratios: [0.5, 1.0, 2.0]<br /> height_stride: 8<br /> width_stride: 8<br /> }<br /> }<br /> first_stage_atrous_rate: 2<br /> first_stage_box_predictor_conv_hyperparams {<br /> op: CONV<br /> regularizer {<br /> l2_regularizer {<br /> weight: 0.0<br /> }<br /> }<br /> initializer {<br /> truncated_normal_initializer {<br /> stddev: 0.01<br /> }<br /> }<br /> }<br /> first_stage_nms_score_threshold: 0.0<br /> first_stage_nms_iou_threshold: 0.7<br /> first_stage_max_proposals: 300<br /> first_stage_localization_loss_weight: 2.0<br /> first_stage_objectness_loss_weight: 1.0<br /> initial_crop_size: 17<br /> maxpool_kernel_size: 1<br /> maxpool_stride: 1<br /> second_stage_box_predictor {<br /> mask_rcnn_box_predictor {<br /> use_dropout: false<br /> dropout_keep_probability: 1.0<br /> fc_hyperparams {<br /> op: FC<br /> regularizer {<br /> l2_regularizer {<br /> weight: 0.0<br /> }<br /> }<br /> initializer {<br /> variance_scaling_initializer {<br /> factor: 1.0<br /> uniform: true<br /> mode: FAN_AVG<br /> }<br /> }<br /> }<br /> }<br /> }<br /> second_stage_post_processing {<br /> batch_non_max_suppression {<br /> score_threshold: 0.0<br /> iou_threshold: 0.6<br /> max_detections_per_class: 100<br /> max_total_detections: 100<br /> }<br /> score_converter: SOFTMAX<br /> }<br /> second_stage_localization_loss_weight: 2.0<br /> second_stage_classification_loss_weight: 1.0<br /> }<br />}</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">train_config: {<br /> batch_size: 1<br /> optimizer {<br /> momentum_optimizer: {<br /> learning_rate: {<br /> manual_step_learning_rate {<br /> initial_learning_rate: 0.0003<br /> schedule {<br /> step: 0<br /> learning_rate: .0003<br /> }<br /> schedule {<br /> step: 900000<br /> learning_rate: .00003<br /> }<br /> schedule {<br /> step: 1200000<br /> learning_rate: .000003<br /> }<br /> }<br /> }<br /> momentum_optimizer_value: 0.9<br /> }<br /> use_moving_average: false<br /> }<br /> gradient_clipping_by_norm: 10.0<br /> fine_tune_checkpoint: "<b><i>/risklogical/DeeplearningImages/models/faster_rcnn_inception_resnet_v2_atrous_coco_11_06_2017/model.ckpt</i></b>"<br /> from_detection_checkpoint: true<br /> # Note: The below line limits the training process to 200K steps, which we<br /> # empirically found to be sufficient enough to train the pets dataset. This<br /> # effectively bypasses the learning rate schedule (the learning rate will<br /> # never decay). Remove the below line to train indefinitely.<br /> num_steps: 200000<br /> data_augmentation_options {<br /> random_horizontal_flip {<br /> }<br /> }<br />}</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">train_input_reader: {<br /> tf_record_input_reader {<br /> input_path: "<b><i>/risklogical/DeeplearningImages/TFRecords/train_PR.records</i></b>"<br /> }<br /> label_map_path: "<b><i>/risklogical/DeeplearningImages/TFRecords/label_map.pbtxt</i></b>"<br />}</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">eval_config: {<br /> num_examples: <b><i>183</i></b><br /> # Note: The below line limits the evaluation process to 10 evaluations.<br /> # Remove the below line to evaluate indefinitely.<br /> max_evals: 10000<br />}</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">eval_input_reader: {<br /> tf_record_input_reader {<br /> input_path: "<b><i>/risklogical/DeeplearningImages/TFRecords/validate_PR.records</i></b>"<br /> }<br /> label_map_path: "<b><i>/risklogical/DeeplearningImages/TFRecords/label_map.pbtxt</i></b>"<br /> shuffle: false<br /> num_readers: 1<br /> num_epochs: 1<br />}</span></div>
<div>
<br />
The minor modifications are summarised as follows:</div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span>
<span style="font-family: "courier new" , "courier" , monospace;">image_resizer {<br /> keep_aspect_ratio_resizer {<br /> min_dimension: <b><i>300</i></b><br /> max_dimension: <b><i>300</i></b><br /> }</span><br />
<b><span style="font-family: "courier new" , "courier" , monospace;"><br /></span></b>
I set these to the raw dimensions of my training images, namely 300 x 300 pixels, on the assumption that by doing so there would be no need for any re-sizing and hence no associated distortion. I must admit, though, I am not sure of the validity of this reasoning.</div>
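<div>
<br /></div>
<div>
One way to test that assumption, at least partially, is to confirm that every training image really is 300 x 300 -- in which case the keep_aspect_ratio_resizer has nothing to do. Here is a minimal sketch (my own addition), assuming the same image folder as earlier and that it contains only image files:</div>
<div>
<br /></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">import os<br />from PIL import Image<br /><br />imageFolder = "/risklogical/DeeplearningImages/JustPRandomReGrabbed"<br /><br />odd_sizes = {}<br />for root, dirs, files in os.walk(imageFolder):<br />    for file_ in files:<br />        im = Image.open(os.path.join(root, file_))<br />        if im.size != (300, 300):<br />            odd_sizes[file_] = im.size<br /><br />print('%s images are not 300x300' % len(odd_sizes))</span></div>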
<div>
<i><br /></i></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">fine_tune_checkpoint: "<b><i>/risklogical/DeeplearningImages/models/faster_rcnn_inception_resnet_v2_atrous_coco_11_06_2017/model.ckpt</i></b>"</span><br />
<b></b><b></b><u></u><sub></sub><sup></sup><strike><b><span style="font-family: "courier new" , "courier" , monospace;"><i></i></span></b></strike></div>
<div>
<br /></div>
<div>
This specifies the frozen state of the pre-trained model (in the TensorFlow '.ckpt' format) from which the new training commences. The path is simply the location where the <b>model.ckpt</b> file is located and should obviously be substituted by your own path. The actual <b>model.ckpt</b> file in question was taken directly from the <a href="https://github.com/tensorflow/models/tree/master/research/object_detection" target="_blank">Google Object Detection API</a> "as is".</div>
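<div>
<br /></div>
<div>
For completeness, here is a sketch of how that pre-trained checkpoint can be fetched programmatically (my own addition). The download URL follows the naming pattern used by the TensorFlow detection model zoo at the time -- treat it as an assumption and verify against the model zoo page before relying on it:</div>
<div>
<br /></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">import six.moves.urllib as urllib<br />import tarfile<br /><br /># Assumed URL pattern from the detection model zoo of that era -- verify before use<br />MODEL_NAME = 'faster_rcnn_inception_resnet_v2_atrous_coco_11_06_2017'<br />DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/'<br />tar_name = MODEL_NAME + '.tar.gz'<br /><br />opener = urllib.request.URLopener()<br />opener.retrieve(DOWNLOAD_BASE + tar_name, tar_name)<br /><br />tar = tarfile.open(tar_name)<br />tar.extractall() # yields the model.ckpt.* files in a MODEL_NAME subfolder<br />tar.close()</span></div>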
<div>
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">train_input_reader:{ </span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">tf_record_input_reader {<br /> input_path: "<b><i>/risklogical/DeeplearningImages/TFRecords/train_PR.records</i></b>"<br /> }</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">label_map_path: "<b><i>/risklogical/DeeplearningImages/TFRecords/label_map.pbtxt</i></b>"</span><span style="font-family: "courier new";"></span><br />
<span style="font-family: "courier new";">}</span><br />
<b></b><i></i><u></u><sub></sub><sup></sup><strike></strike><br /></div>
<div>
<i><b><u><sub><sup><span style="font-family: "courier new" , "courier" , monospace;"><br /></span></sup></sub></u></b></i></div>
The <b>train_PR.records</b> file specifies the TFRecord file for the training set, as described earlier; the path should obviously be substituted with your own.<br />
<br />
The <b>label_map.pbtxt</b> entry specifies a file which contains the object class labels, and should obviously be substituted by your own path. In my case, there is only one object class (corresponding to the "P" symbol object to be detected). The contents of the file are precisely as follows:<br />
<div>
<br /></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">item {</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> id: 1</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"> name: 'normal_parking'</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">}</span></div>
<div>
<span style="font-family: "courier new";"><br /></span></div>
<div>
<br /></div>
<div>
Finally,</div>
<div>
<br /></div>
<div>
<span style="font-family: "courier new";"><br /></span></div>
<div>
<span style="font-family: "courier new";"><span style="font-family: "courier new" , "courier" , monospace;">eval_input_reader: {<br /> tf_record_input_reader {<br /> input_path: "<b><i>/risklogical/DeeplearningImages/TFRecords/validate_PR.records</i></b>"<br /> }<br /> label_map_path: "<b><i>/risklogical/DeeplearningImages/TFRecords/label_map.pbtxt</i></b>"<br /> shuffle: false<br /> num_readers: 1<br /> num_epochs: 1<br />}</span></span><span style="font-family: "courier new";"><br /></span></div>
<div>
<span style="font-family: "courier new";"><br /></span></div>
<div>
<br /></div>
The <b>validate_PR.records</b> file specifies the TFRecord file for the validation set, as described earlier; the path should obviously be substituted with your own.<br />
<br />
The <b>label_map.pbtxt</b> is exactly the same as above.<br />
<br />
<h4>
Executing the Training Run</h4>
This is most easily achieved by running the Python script <b>train.py</b> which is distributed with the <a href="https://github.com/tensorflow/models/tree/master/research/object_detection" target="_blank">Google Object Detection API</a>. The script can be called from the Linux (Ubuntu) command-line, specifying the necessary parameters. To ensure correct paths, open a command-line within the<br />
<b>TensorFlow/models-master/research </b>folder of the API installation location, and execute the following command:<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">python object_detection/train.py </span><br />
<span style="font-family: "courier new" , "courier" , monospace;">--logtostderr</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">--pipeline_config_path=/risklogical/DeeplearningImages/models/faster_rcnn_inception_resnet_v2_atrous_coco_11_06_2017/faster_rcnn.config</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">--train_dir=/risklogical/DeeplearningImages/models/faster_rcnn_inception_resnet_v2_atrous_coco_11_06_2017/train/</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span>
where the parameters are as follows:<br />
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span>
<span style="font-family: "courier new" , "courier" , monospace;">--pipeline_config_path=/risklogical/DeeplearningImages/models/faster_rcnn_inception_resnet_v2_atrous_coco_11_06_2017/faster_rcnn.config</span><br />
<b><span style="font-family: "courier new" , "courier" , monospace;"><br /></span></b>
which points to the Configuration File described in the previous section. Obviously this should be substituted with your own path.<br />
<b><br /></b>
<span style="font-family: "courier new" , "courier" , monospace;">--train_dir=/risklogical/DeeplearningImages/models/faster_rcnn_inception_resnet_v2_atrous_coco_11_06_2017/train/</span><br />
<div>
<br /></div>
<div>
which specifies the destination for the outputs (e.g., updated <b>model.ckpt</b> snapshots, etc) generated during the training process. Obviously this should be substituted with your own path.</div>
<div>
<br /></div>
<div>
When successfully initiated, the console should display the training stepwise progress, as shown in the screenshot below:</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhTlVUWa8nDfLYTUUWrVY95tr_E65aXDetNDjZGH4Aze3jvDQMyibAtkYWXwrEAFIwtcLyssNH75C6HhO0KWPrnXGB5sXYZ83siHRiPYlYkKez13dywTJHTHnsulcA6BrbQmu2vlQzdQxw/s1600/Capture_TF_CONSOLE.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="418" data-original-width="648" height="257" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhTlVUWa8nDfLYTUUWrVY95tr_E65aXDetNDjZGH4Aze3jvDQMyibAtkYWXwrEAFIwtcLyssNH75C6HhO0KWPrnXGB5sXYZ83siHRiPYlYkKez13dywTJHTHnsulcA6BrbQmu2vlQzdQxw/s400/Capture_TF_CONSOLE.PNG" width="400" /></a></div>
<div>
<br /></div>
<div>
For this particular model, which has a high degree of complexity, training progress is rather slow on a standard CPU-based machine such as the <b>AWS EC2 c4.large</b> instance type I typically use when developing software. Switching to a GPU-based machine, such as the <b>AWS EC2 g3.4xlarge</b> instance type, yields approximately 100 times the performance on this model for only 10 times the cost, so it is certainly cost-effective for performing the actual training runs, as demonstrated in the screenshot above (where a training step is seen to take less than 1 second on the <b>AWS EC2 g3.4xlarge</b> instance type).</div>
<div>
<b><i><br /></i></b></div>
<div>
Whilst this shows that the training is indeed progressing, it is not very informative. For more detailed information, the accompanying TensorBoard browser app can be invoked.</div>
<div>
<br /></div>
<h4>
Using TensorBoard for Monitoring Training Progress </h4>
<div>
Issuing the following command in the Ubuntu console</div>
<div>
<br /></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">tensorboard --logdir=/risklogical/DeeplearningImages/models/faster_rcnn_inception_resnet_v2_atrous_coco_11_06_2017/</span></div>
<div>
<br /></div>
<div>
with the <b>logdir</b> parameter pointing to the folder containing the model-under-training,</div>
<div>
<br /></div>
<div>
launches an instance of TensorBoard attached to the running training job. By pointing a browser (on the same Ubuntu machine) to</div>
<div>
<br /></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;">http://ip-********:6006</span></div>
<div>
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span></div>
<div>
(putting in your own IP address in place of ********) opens the TensorBoard dashboard, illustrated in the screenshot below:</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhwsc4Hrgw4-rRwNUslYEiTyJ0T93KKrq-_9XNI8tULDpBC98KfLNmupjqbeA2f9bemT8FPL5bZ-5SZhnTBR7WDnomVCFb0e3LrAvmbdufhUc6sSzPDMOoKXofnVVIUSe3dOf0uZhcxxIY/s1600/Capture_TB_TOTAL_LOSS.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="639" data-original-width="808" height="315" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhwsc4Hrgw4-rRwNUslYEiTyJ0T93KKrq-_9XNI8tULDpBC98KfLNmupjqbeA2f9bemT8FPL5bZ-5SZhnTBR7WDnomVCFb0e3LrAvmbdufhUc6sSzPDMOoKXofnVVIUSe3dOf0uZhcxxIY/s400/Capture_TB_TOTAL_LOSS.PNG" width="400" /></a></div>
<div>
<br /></div>
In this example, TotalLoss is displayed. It can be seen from the graph that the training has converged effectively. In fact, it demonstrates a slight upturn towards the end, which would typically suggest that the training has been running for too long on the same data set such that the model is now over-fitting the data. It is important to stop the training when the loss curve reaches its minimum (and use the model at that point for prediction -- more on that later). It is noteworthy (well, certainly to me) that apart from using my own training images, I made <b><i>no changes</i></b> to the underlying model / parameters. I simply used it "as is" -- and it worked.<br />
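<br />
If you prefer to locate the minimum of the loss curve programmatically rather than by eye, the TensorBoard event files written to the train directory can be scanned directly. Here is a rough sketch (my own addition, assuming the train folder used above; the exact summary tag name may differ between versions, so print value.tag from your own run first):<br />
<br />
<div>
<span style="font-family: "courier new" , "courier" , monospace;">import glob<br />import tensorflow as tf<br /><br />train_dir = '/risklogical/DeeplearningImages/models/faster_rcnn_inception_resnet_v2_atrous_coco_11_06_2017/train/'<br /><br />best_step, best_loss = None, float('inf')<br />for event_file in glob.glob(train_dir + 'events.out.tfevents.*'):<br />    for event in tf.train.summary_iterator(event_file):<br />        for value in event.summary.value:<br />            # 'TotalLoss' tag name is an assumption -- check value.tag in your run<br />            if value.tag.endswith('TotalLoss') and value.simple_value < best_loss:<br />                best_step, best_loss = event.step, value.simple_value<br /><br />print('Minimum TotalLoss %s at step %s' % (best_loss, best_step))</span></div>
<br />
The step found this way indicates which of the periodically-saved model.ckpt snapshots in the train directory is closest to the loss minimum, and hence the best candidate to export for prediction.<br />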
<br />
To enable TensorBoard to also monitor the validation tests against the current state of the model-under-training, execute the <b>eval.py</b> Python command in the Ubuntu console:<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">python object_detection/eval.py --logtostderr --pipeline_config_path=/risklogical/DeeplearningImages/models/faster_rcnn_inception_resnet_v2_atrous_coco_11_06_2017/faster_rcnn.config --checkpoint_dir=/risklogical/DeeplearningImages/models/faster_rcnn_inception_resnet_v2_atrous_coco_11_06_2017/train/ --eval_dir=/risklogical/DeeplearningImages/models/faster_rcnn_inception_resnet_v2_atrous_coco_11_06_2017/eval/</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span>
where the parameters are almost the same as for the <b>train.py</b> command, and their meanings should by now be (almost) self-explanatory. <i>GOTCHA: on occasion, this command will fail to run (generating a "CUDA_OUT_OF_MEMORY" and/or "Resource exhausted: OOM" error before terminating). The workaround is to kill all Jupyter Notebooks currently running on the server, then try again (see also the memory-growth sketch after the screenshots below).</i> Once successfully executed, enable <b>eval</b> (in the left-hand panel of TensorBoard), navigate to the IMAGES tab, and you should see the results of the ongoing validation tests, i.e., the validation images with the corresponding bounding boxes from the detection trials. Below are some screenshots of such, demonstrating successful detection.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg1XVfSVpeijP5l2ku8ffY9j2YQxsmAqfU1Wtocn4ghm4MI5770o6aY4-cf47pcEZHw0wsvu1aCnVzN7tzy8GP7utSjpcjRQ6T1QV3uqqLdlr3-GvdfUla6Fifl4O3DMr0ytC88nSBukRM/s1600/Capture_TB_EVAL_1.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="601" data-original-width="756" height="317" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg1XVfSVpeijP5l2ku8ffY9j2YQxsmAqfU1Wtocn4ghm4MI5770o6aY4-cf47pcEZHw0wsvu1aCnVzN7tzy8GP7utSjpcjRQ6T1QV3uqqLdlr3-GvdfUla6Fifl4O3DMr0ytC88nSBukRM/s400/Capture_TB_EVAL_1.PNG" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh7OxOyCJ2ofUn1r-FZK2yjQHd7QppxxJSddojuKLh7beFs5wCrf8SNE4JEIjR3P1gjvt8xraKzd0DlgVKMxOZQXRsihkXb8rQ3wXXyftZVl6hZVRoo6iWoiSYNqq62WyCjPZrs1gUXrGg/s1600/Capture_TB_EVAL_2.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="415" data-original-width="373" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh7OxOyCJ2ofUn1r-FZK2yjQHd7QppxxJSddojuKLh7beFs5wCrf8SNE4JEIjR3P1gjvt8xraKzd0DlgVKMxOZQXRsihkXb8rQ3wXXyftZVl6hZVRoo6iWoiSYNqq62WyCjPZrs1gUXrGg/s400/Capture_TB_EVAL_2.PNG" width="358" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEggeJEW4BSwEXcoriUihmqpe4PxmOc75UGa4kcx5NJjBamRa29iOhplIpvsaX5phTiX7Qi7eGqYqWsNZgkBAfrExfRhf3t_y4s7zEw77ngmlv0WKL_2Q7M5GpKtzD5zeEpgCqbmAJkyQPA/s1600/Capture_TB_EVAL_3.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="417" data-original-width="371" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEggeJEW4BSwEXcoriUihmqpe4PxmOc75UGa4kcx5NJjBamRa29iOhplIpvsaX5phTiX7Qi7eGqYqWsNZgkBAfrExfRhf3t_y4s7zEw77ngmlv0WKL_2Q7M5GpKtzD5zeEpgCqbmAJkyQPA/s400/Capture_TB_EVAL_3.PNG" width="355" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh2RGdIwKv_aLXTz1RM9VbtlisJ0dPY7OGsguzBrxGu_f4ZGUTEqiRKxHhW4WwSVuBS6fRb4BH6GiNNSs1GgmutfnHwr5d8nEbHwwhuyiKpUlkTDJTf1YEOM723YOLxFo4DPnZ1Tf5L9sQ/s1600/Capture_TB_EVAL_4.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="401" data-original-width="369" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh2RGdIwKv_aLXTz1RM9VbtlisJ0dPY7OGsguzBrxGu_f4ZGUTEqiRKxHhW4WwSVuBS6fRb4BH6GiNNSs1GgmutfnHwr5d8nEbHwwhuyiKpUlkTDJTf1YEOM723YOLxFo4DPnZ1Tf5L9sQ/s400/Capture_TB_EVAL_4.PNG" width="367" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhpmxeGz8sIWuwoTytYlRTHhRsZqoDRUZNx7i2sNhq999jfQg2ivXC_M9FPEKZS6-BLKm7SlIC3fR_gMUbLARdPlZ-lwJmnxLLPJLif_l1Dy33zvquWDAU86xqJzNGga1D08rYLpITKzpw/s1600/Capture_TB_EVAL_5.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="406" data-original-width="365" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhpmxeGz8sIWuwoTytYlRTHhRsZqoDRUZNx7i2sNhq999jfQg2ivXC_M9FPEKZS6-BLKm7SlIC3fR_gMUbLARdPlZ-lwJmnxLLPJLif_l1Dy33zvquWDAU86xqJzNGga1D08rYLpITKzpw/s400/Capture_TB_EVAL_5.PNG" width="358" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh0BDJqeC59BWlFEiqv7Y9Nju-xtU4GsYMhx9NETAvEhYRWMM0oVsqSvzUlZikt806zImIFB1XkPbLVL70FV2APFWqvSzH_40HaZ4ZNi_IUX5j3D_U86wFlsKiH3-Y1e-7nRB91sGiiwbg/s1600/Capture_TB_EVAL_6.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="403" data-original-width="360" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh0BDJqeC59BWlFEiqv7Y9Nju-xtU4GsYMhx9NETAvEhYRWMM0oVsqSvzUlZikt806zImIFB1XkPbLVL70FV2APFWqvSzH_40HaZ4ZNi_IUX5j3D_U86wFlsKiH3-Y1e-7nRB91sGiiwbg/s400/Capture_TB_EVAL_6.PNG" width="356" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<br />
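As an aside to the GOTCHA above: when running your own TensorFlow sessions alongside training (e.g., the <i>ad hoc</i> test script later in this post), it can also help to stop each session from grabbing all of the GPU memory up front. Here is a sketch of the standard TF1 option (my own addition -- it does not cure a genuinely exhausted GPU, but it reduces the chance of an idle notebook kernel starving <b>train.py</b> or <b>eval.py</b> of memory):<br />
<br />
<div>
<span style="font-family: "courier new" , "courier" , monospace;">import tensorflow as tf<br /><br /># Allocate GPU memory on demand rather than all at once<br />config = tf.ConfigProto()<br />config.gpu_options.allow_growth = True<br /><br />with tf.Session(config=config) as sess:<br />    pass # run your graph here</span></div>
<br />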
<h3>
<i>Ad Hoc</i> Testing of the Trained Model Before Production Deployment</h3>
<div>
The validation images available via TensorBoard presented above demonstrate that the training has been successful and the trained model can effectively detect the "P" symbols. Before proceeding to the next phase of deploying the model to production, it is useful to present an <i>ad hoc</i> method for testing the trained model on an arbitrary input image, i.e., rather than just the validation set processed via the <b>eval.py</b> method.<br />
<br />
The first step is to export the trained model in a format which can accept a single image as an input. The <b>export_inference_graph.py</b> Python script included with the <a href="https://github.com/tensorflow/models/tree/master/research/object_detection" target="_blank">Google Object Detection API</a> does this, and can be called from the Ubuntu console as follows:<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">python object_detection/export_inference_graph.py </span><br />
<span style="font-family: "courier new" , "courier" , monospace;">--input_type image_tensor </span><br />
<span style="font-family: "courier new" , "courier" , monospace;">--pipeline_config_path=/risklogical/DeeplearningImages/models/faster_rcnn_inception_resnet_v2_atrous_coco_11_06_2017/faster_rcnn.config </span><br />
<span style="font-family: "courier new" , "courier" , monospace;">--trained_checkpoint_prefix=/risklogical/DeeplearningImages/models/faster_rcnn_inception_resnet_v2_atrous_coco_11_06_2017/train/model.ckpt-46066 </span><br />
<span style="font-family: "courier new" , "courier" , monospace;">--output_directory /risklogical/DeeplearningImages/Outputs/PR_Detector_JustP_RCNN_ForJupyter</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span>
where the paths and filenames are obviously substituted with your own. <i>GOTCHA: in the above code snippet, it is important to specify </i><br />
<i><i><br /></i></i>
<span style="font-family: "courier new" , "courier" , monospace;">--input_type image_tensor</span> <b></b><br />
<br />
since this enables single images to be presented to the model via Python code.<br />
<br />
Successful execution of this method creates a 'frozen' model contained in the file named<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">frozen_inference_graph.pb</span><br />
<b><span style="font-family: "courier new" , "courier" , monospace;"><br /></span></b>
located in the specified output folder<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">/risklogical/DeeplearningImages/Outputs/PR_Detector_JustP_RCNN_ForJupyter/</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span>
<br />
This 'frozen model' can then be imported and run against an arbitrary test image in an <i>ad hoc</i> manner. Here is the Python code which performs such <i>ad hoc</i> tests.<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">import numpy as np<br />import os<br />import six.moves.urllib as urllib<br />import sys<br />import tarfile<br />import tensorflow as tf<br />import zipfile</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">from collections import defaultdict<br />from io import StringIO<br />from matplotlib import pyplot as plt<br />from PIL import Image</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"># This is needed to display the images.<br />%matplotlib inline</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"># This is needed since need modules from the object_detection folder.<br />sys.path.append("/home/ubuntu/workplace/TensorFlow/models-master/research/object_detection")</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><br />from utils import label_map_util</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">from utils import visualization_utils as vis_util</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span>
<span style="font-family: "courier new";"><br /></span>
<span style="font-family: "courier new";"># Path to frozen detection graph. This is the actual model that is used for the object detection.<br />PATH_TO_CKPT ='/risklogical/DeeplearningImages/Outputs/PR_Detector_JustP_RCNN_ForJupyter/frozen_inference_graph.pb'</span><br />
<span style="font-family: "courier new";">labels_folder='/risklogical/DeeplearningImages/TFRecords'<br /># List of the strings that is used to add correct label for each box.<br />PATH_TO_LABELS = os.path.join(labels_folder, 'label_map.pbtxt')</span><br />
<span style="font-family: "courier new";">NUM_CLASSES = 1</span><br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">detection_graph = tf.Graph()<br />with detection_graph.as_default():<br /> od_graph_def = tf.GraphDef()<br /> with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:<br /> serialized_graph = fid.read()<br /> od_graph_def.ParseFromString(serialized_graph)<br /> tf.import_graph_def(od_graph_def, name='')</span><br />
<span style="font-family: "courier new";"></span></div>
<div>
<span style="font-family: "courier new";"><br /></span></div>
<div>
<span style="font-family: "courier new";">#load label map<br />label_map = label_map_util.load_labelmap(PATH_TO_LABELS)<br />categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=NUM_CLASSES, use_display_name=True)<br />category_index = label_map_util.create_category_index(categories)</span></div>
<br />
<span style="font-family: "courier new" , "courier" , monospace;">import io<br />import matplotlib.image as mpimg</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span>
<span style="font-family: "courier new" , "courier" , monospace;">PATH_TO_TEST_IMAGES_DIR = '/risklogical/DeeplearningImages/ManualTestImagesPR/TestForJustP'<br />TEST_IMAGE_PATHS = []</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">for root, dirs, files in os.walk(PATH_TO_TEST_IMAGES_DIR):<br /> for file_ in files:<br /> TEST_IMAGE_PATHS.append(os.path.join(root, file_))</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"># Size, in inches, of the output images.<br />IMAGE_SIZE = (12, 8)</span><br />
<span style="font-family: "courier new" , "courier" , monospace;">with detection_graph.as_default():<br /> with tf.Session(graph=detection_graph) as sess:<br /> for image_path in TEST_IMAGE_PATHS: <br /> image = Image.open(image_path)<br /> image_np= np.array(image)<br /> if image_np.shape[2]==4: # got a 4th column e.g. alpha channel<br /> imr=image_np[:,:,:-1] #remove 4th column<br /> image_np=imr <br /> <br /> <br /> # Expand dimensions since the model expects images to have shape: [1, None, None, 3]<br /> image_np_expanded = np.expand_dims(image_np, axis=0)</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"> <br /> image_tensor = detection_graph.get_tensor_by_name('image_tensor:0') <br /> <br /> # Each box represents a part of the image where a particular object was detected.<br /> boxes = detection_graph.get_tensor_by_name('detection_boxes:0')<br /> # Each score represent how level of confidence for each of the objects.<br /> # Score is shown on the result image, together with the class label.<br /> scores = detection_graph.get_tensor_by_name('detection_scores:0')<br /> classes = detection_graph.get_tensor_by_name('detection_classes:0')<br /> num_detections = detection_graph.get_tensor_by_name('num_detections:0')<br /> # Actual detection.<br /> (boxes, scores, classes, num_detections) = sess.run(<br /> [boxes, scores, classes, num_detections],<br /> feed_dict={image_tensor: image_np_expanded})</span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span>
<span style="font-family: "courier new" , "courier" , monospace;"># Visualization of the results of a detection.<br /> <br /> vis_util.visualize_boxes_and_labels_on_image_array(<br /> image_np,<br /> np.squeeze(boxes),<br /> np.squeeze(classes).astype(np.int32),<br /> np.squeeze(scores),<br /> category_index,<br /> use_normalized_coordinates=True,<br /> line_thickness=8)<br /> <br /> plt.figure(figsize=IMAGE_SIZE)<br /> plt.imshow(image_np) </span><br />
<span style="font-family: "courier new" , "courier" , monospace;"><br /></span>
<br />
<i>GOTCHA: in the above code snippet it is essential to perform the re-shaping of the image data depending on whether or not it contains an alpha channel, otherwise it will fail to execute:</i><br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;"> if image_np.shape[2]==4: # got a 4th column e.g. alpha channel<br /> imr=image_np[:,:,:-1] #remove 4th column<br /> image_np=imr <br /> <br /> # Expand dimensions since the model expects images to have shape: [1, None, None, 3]<br /> image_np_expanded = np.expand_dims(image_np, axis=0)</span><b></b><br />
<b><span style="font-family: "courier new" , "courier" , monospace;"><br /></span></b>
Example output generated by running this <i>ad hoc </i>test script is shown in the screenshot below. The resulting display is analogous to those generated via TensorBoard, but with the advantage of flexibility in that the model can be run in an <i>ad hoc</i> manner against any test image files contained in the folder<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;">PATH_TO_TEST_IMAGES_DIR = '/risklogical/DeeplearningImages/ManualTestImagesPR/TestForJustP'</span><br />
<b><span style="font-family: "courier new" , "courier" , monospace;"><br /></span></b>
without having to create a TFRecord from a collection of files and without having to invoke TensorBoard (obviously you would substitute your own path and contents).<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgtW2-SHP2qGENGS6vJWeBWtasTZBy6biWzIX2KBCjCXkpSVXE1E0xbERuXbKq29GDaCGVVaN7fqLvDoRo6PAMU-lCVWG1RXcb9k8zXBOnAVHMkE06BZiJwvUz2gSb48pJ3u51S5nXFSJo/s1600/Capture_AD_HOC_TEST.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="476" data-original-width="488" height="388" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgtW2-SHP2qGENGS6vJWeBWtasTZBy6biWzIX2KBCjCXkpSVXE1E0xbERuXbKq29GDaCGVVaN7fqLvDoRo6PAMU-lCVWG1RXcb9k8zXBOnAVHMkE06BZiJwvUz2gSb48pJ3u51S5nXFSJo/s400/Capture_AD_HOC_TEST.PNG" width="400" /></a></div>
<br />
This concludes Steps 1 & 2 of the overall Solution Plan presented at the start of this post, namely the creation of a set of suitable training images, then the successful completion of the training of an object detection Deep Learning model for identifying the location of "P" symbols in <a href="http://parkingradar.org/" target="_blank">ParkingRadar</a> map screenshots. The final step is the deployment of this trained model into production. This is the topic covered in the <a href="http://flylogical.blogspot.com/2018/01/deploying-tensorflow-object-detector.html" target="_blank">follow-on post</a>, where I'll also present my conclusions on the overall project.<br />
<br />
<i>FINAL GOTCHA: since the AI was trained with only a single "P" per bounding box, it is unable to resolve multiple "P"s that lie closer together than the dimension of the 60 px bounding box. It can, however, detect multiple "P"s in a given image, as long as each is separated by 60 px or more.</i><br />
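<br />
That limitation can at least be made visible in post-processing. Here is a minimal sketch (my own illustration, not part of the deployed pipeline) which flags pairs of confident detections whose centres lie closer than 60 px, given the normalised boxes and scores returned by sess.run in the test script above:<br />
<br />
<div>
<span style="font-family: "courier new" , "courier" , monospace;">import numpy as np<br /><br />def flag_close_detections(boxes, scores, width, height, min_sep_px=60, min_score=0.5):<br />    # boxes rows are [ymin, xmin, ymax, xmax], normalised to [0,1]<br />    keep = np.squeeze(scores) >= min_score<br />    b = np.squeeze(boxes)[keep]<br />    centres = np.stack([(b[:, 1] + b[:, 3]) / 2 * width,<br />                        (b[:, 0] + b[:, 2]) / 2 * height], axis=1)<br />    pairs = []<br />    for i in range(len(centres)):<br />        for j in range(i + 1, len(centres)):<br />            if np.linalg.norm(centres[i] - centres[j]) < min_sep_px:<br />                pairs.append((i, j))<br />    return pairs</span></div>
<br />
Any pair flagged by this check is operating at the limit described above and is worth a manual look.<br />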
<div>
<i><br /></i></div>
<div>
<i><br /></i></div>
<div>
<br /></div>
<div>
</div>
<div>
<br /></div>
<div>
<br /></div>
Unknownnoreply@blogger.com12tag:blogger.com,1999:blog-223455584910870050.post-64267348657981029612017-10-01T04:02:00.001-07:002017-10-01T04:07:15.604-07:00Parking Radar gets its own website<p dir="ltr">ParkingRadar now has its own dedicated <a href="http://www.parkingradar.org/">website</a> and <a href="https://www.facebook.com/ParkingRadarApp">Facebook</a> page. If you find the app useful, spread the word 😋</p>
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-223455584910870050.post-64979635075929632852017-09-16T00:54:00.000-07:002017-09-16T00:54:01.133-07:00Attention carpark owners / operatorsAre you the owner/operator of a carpark? If you were to provide the GPS coordinates of your lots, it would benefit you through exposure of your lots, and benefit the Parking Radar community (<a href="https://itunes.apple.com/us/app/parking-radar/id1265641228?ls=1&mt=8 " target="_blank">iOS</a> and <a href="https://play.google.com/store/apps/details?id=com.flylogical.parkingradar" target="_blank">Android</a>) through more complete coverage.<br />
<br />
<a href="https://flylogical.azurewebsites.net/ParkingRadarContrib.aspx" target="_blank">Here</a> is a simple web-form for submitting GPS data.<br />
<br />Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-223455584910870050.post-90787542933423316382017-09-16T00:47:00.000-07:002017-09-16T09:12:31.938-07:00Parking your way to fitnessYou don't need a car to participate in <a href="http://flylogical.blogspot.com/2017/07/parking-radar-goes-live_30.html">Parking Radar</a> (for <a href="https://itunes.apple.com/us/app/parking-radar/id1265641228?ls=1&mt=8">iOS</a> and <a href="https://play.google.com/store/apps/details?id=com.flylogical.parkingradar">Android</a>). When out walking, running, cycling, etc., whenever you see a parking spot not already on the map, simply grab it (by tapping the <b>I'M PARKED HERE</b> button). It takes only a few moments and helps everyone by adding more and more potential parking spots to the database -- and it's good for you, too, if you ever do happen to get in a car again. You might get some funny looks when grabbing spots -- but it is surely worth it for the thrill of being plugged-in to The (Parking) Matrix. And remember to KEEP SAFE. You do not need to step onto the road to grab a spot. Anywhere within 3m (10 feet) of the spot will do fine.
<br />
<br />
<h4>
Guide to Parking Spot validity</h4>
Please only "grab" spots that meet the following criteria:<br />
<ul>
<li>Must be a legal parking spot as defined by local laws and regulations</li>
<li>Must be publicly available in principle, either free (including disc zones) or paid</li>
<li>NO private driveways</li>
<li>NO reserved parking spots (or spots restricted to clients of a specific business)</li>
<li>Generally accessible all year, not just for special events</li>
<li>Must be suitable for cars (i.e., not just motorbikes etc.) </li>
</ul>
<div>
By adhering to these guidelines you will help ensure the quality of the database.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjXXEdGa8kFYiH6sHG0nLTfd8Tp47UtkUVGGFuwAWYftRBwja6XD-Nvc4qbDWRX_v6EemH62uzjxqkmKn4L136n1vab1VgLemxdsYtLqCkcn4ESdm1vPsk9JcZA9V2Ql_txprByUoeOrco/s1600/Screenshot_20170916-141506.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1600" data-original-width="900" height="640" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjXXEdGa8kFYiH6sHG0nLTfd8Tp47UtkUVGGFuwAWYftRBwja6XD-Nvc4qbDWRX_v6EemH62uzjxqkmKn4L136n1vab1VgLemxdsYtLqCkcn4ESdm1vPsk9JcZA9V2Ql_txprByUoeOrco/s640/Screenshot_20170916-141506.png" width="360" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgqYgK2yyH-SH49DyRCArI_0e_9c1FLV1QCyqrpBbJkPXTz0h7YdqPlq7TtzwpqMBkXea4gDmmjuzgkKFaYhCMBnHnaONyAPCD30vZldJToGOtEFOod0AMrj0sUxx1J_k0vcLq8sysQqcI/s1600/20170916_141516.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1600" data-original-width="1200" height="640" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgqYgK2yyH-SH49DyRCArI_0e_9c1FLV1QCyqrpBbJkPXTz0h7YdqPlq7TtzwpqMBkXea4gDmmjuzgkKFaYhCMBnHnaONyAPCD30vZldJToGOtEFOod0AMrj0sUxx1J_k0vcLq8sysQqcI/s640/20170916_141516.jpg" width="480" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiWpZWZ34ZVyB2c9BMYTrcZBjlWZ_Fx6-NS6th3smorSN6hiVDDieUpsUMt-lexo882SDziTu9dd2gPg2Rg76RdO74ZPvwLbKmsoAHuqOMFAJmdU9maq5C92GdY9fjNN4cfjeina-RDYKY/s1600/Screenshot_20170916-144138.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1600" data-original-width="900" height="640" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiWpZWZ34ZVyB2c9BMYTrcZBjlWZ_Fx6-NS6th3smorSN6hiVDDieUpsUMt-lexo882SDziTu9dd2gPg2Rg76RdO74ZPvwLbKmsoAHuqOMFAJmdU9maq5C92GdY9fjNN4cfjeina-RDYKY/s640/Screenshot_20170916-144138.png" width="360" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiJkiqoInmKRR_op-wXtdvte8HwrvuUOaeqAAd7PGn9WWBjdHkX1JGieb3heu0W0CIZBcym_pwYEjXkm0EiLIgpHx2s6SzdmA3rYiOumAuYWkdjFthbcyzVTEx-2Onnc9yawpc0jgouHIo/s1600/20170916_144149.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1600" data-original-width="1200" height="640" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiJkiqoInmKRR_op-wXtdvte8HwrvuUOaeqAAd7PGn9WWBjdHkX1JGieb3heu0W0CIZBcym_pwYEjXkm0EiLIgpHx2s6SzdmA3rYiOumAuYWkdjFthbcyzVTEx-2Onnc9yawpc0jgouHIo/s640/20170916_144149.jpg" width="480" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<br /></div>
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-223455584910870050.post-23936363295093333242017-09-02T15:35:00.004-07:002017-09-02T15:35:58.528-07:00Sywell Formation Sectors<a href="http://bit.ly/2etQq61" target="_blank">Click here</a> for a moving-map browser app containing the Red, Green, and Blue Sywell Formation Flying Sectors for Chipmeet. I have transcribed them from <a href="http://www.chipfest.co.uk/resources/Sywell-Formation-Areas.pdf" target="_blank">these originals</a>, available via the Chipmeet website.<br />
<br />
This moving-map browser app (<i>ReallySimpleMovingMap</i>) can be used for in-flight situational awareness in the cockpit -- by displaying your current aircraft position relative to the sectors -- using any mobile device with a browser (e.g., iPhone, Android phone, iPad, etc). The app requires an internet connection to load and refresh the maps, but does not require an internet connection to track your position on the map once loaded (this just requires that you have Location Services enabled on your device). I therefore recommend that you load the map into your mobile browser, at the appropriate zoom factor, before you take off. Then, with the map pre-loaded, it will stay visible even if your internet connection fades when airborne. Here's what it will look like if you pre-load and zoom to display all three sectors.<br />
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhWLP057e1yECRl55oS5tM7TwWAZMAkhWUFAO_v1JiIoSPkrR5WgBMm6d1mhzOBaHOCeAGTAMbLhxvRkmSqNAxc0M5BlRDf73Ce-UKlUmVs4YpNZSOKlqMg6ciON0oIw6Q49WiC8wg4Qck/s1600/Capture_RSMM_SYWELL_SECTORS.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="598" data-original-width="776" height="307" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhWLP057e1yECRl55oS5tM7TwWAZMAkhWUFAO_v1JiIoSPkrR5WgBMm6d1mhzOBaHOCeAGTAMbLhxvRkmSqNAxc0M5BlRDf73Ce-UKlUmVs4YpNZSOKlqMg6ciON0oIw6Q49WiC8wg4Qck/s400/Capture_RSMM_SYWELL_SECTORS.PNG" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<br />Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-223455584910870050.post-45614797169764629382017-09-02T10:59:00.000-07:002017-09-02T10:59:01.797-07:00"Turbine Legend" - Crazy, Beautiful, ThingI was down at the Isle of Man airport grabbing some tools to fix my dishwasher the other day, when this crazy, beautiful thing was parked next to my hangar. It is a "Turbine Legend" -- a US Experimental aircraft, on a ferry trip to its new owner in Germany, having just crossed the North Atlantic from the US.<br />
<br />
<br />
<ul>
<li>Tandem two-seat turboprop</li>
<li>"Walter" engine, 724 SHP</li>
<li>275kt cruise</li>
<li>6000 ft/min climb rate</li>
<li>700 nm range with ferry tanks</li>
<li>Approximately 50 of them in existence</li>
<li>Pick one up for approximately $500 k</li>
</ul>
<div>
Spectacular looking machine.<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg6TWqF_2bCd_HJ1s3igjThaXEm9x78oQCd9NoBG4U_OT9CyvPdODWKoJZj2yHMtGMSZ7yPt2H-HW24U1nBieT1hN5Ez8aqWIrCvj9SxdqnyGZ_Ib2r90UW7noQ8Ae2ZYmDNonRzqVlqPA/s1600/20170831_112522.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1200" data-original-width="1600" height="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg6TWqF_2bCd_HJ1s3igjThaXEm9x78oQCd9NoBG4U_OT9CyvPdODWKoJZj2yHMtGMSZ7yPt2H-HW24U1nBieT1hN5Ez8aqWIrCvj9SxdqnyGZ_Ib2r90UW7noQ8Ae2ZYmDNonRzqVlqPA/s400/20170831_112522.jpg" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj3Z2OApcR7IOcrLTHWXeRCIi87XTW8yP0rf3Cdcx4OS4JAXgiu4DJWyiYgxkJtfLcmWIreRAblX-Kqt24y_RPOClwXguabzerJfWZvsiAbDDW1YuLFe4xatpyDITEhkuYXRJwkf9jlhe8/s1600/20170831_112501.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1200" data-original-width="1600" height="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj3Z2OApcR7IOcrLTHWXeRCIi87XTW8yP0rf3Cdcx4OS4JAXgiu4DJWyiYgxkJtfLcmWIreRAblX-Kqt24y_RPOClwXguabzerJfWZvsiAbDDW1YuLFe4xatpyDITEhkuYXRJwkf9jlhe8/s400/20170831_112501.jpg" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEib4DvV8c15ttaj7_yw0xbkyRi8ZX3lx-VJDZ-vCx9Ssf6fxO620ktdYg1ZTIuAW_oOIV25SHYDGxVbImrKxxYrAlp7rpcO4GkinonB-PjHRuONUoj5uV-zr6t6u6ZM2xrhcAFCQBIoe3w/s1600/20170831_112654.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1200" data-original-width="1600" height="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEib4DvV8c15ttaj7_yw0xbkyRi8ZX3lx-VJDZ-vCx9Ssf6fxO620ktdYg1ZTIuAW_oOIV25SHYDGxVbImrKxxYrAlp7rpcO4GkinonB-PjHRuONUoj5uV-zr6t6u6ZM2xrhcAFCQBIoe3w/s400/20170831_112654.jpg" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj_Remx0Oc76IqxoQLAbaK6Dz3kMgkfmcssuMZjB5haqU4KYWzSb_l_0nK3mrYtxP2UsVQsG_2Al_ZHG84Rbaw8NWem-qejL-prKSywih89cxjU5Vs4Z1HvJ79tokVJ3NGVF1JBKYabtEA/s1600/20170831_112734.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1600" data-original-width="1200" height="640" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj_Remx0Oc76IqxoQLAbaK6Dz3kMgkfmcssuMZjB5haqU4KYWzSb_l_0nK3mrYtxP2UsVQsG_2Al_ZHG84Rbaw8NWem-qejL-prKSywih89cxjU5Vs4Z1HvJ79tokVJ3NGVF1JBKYabtEA/s640/20170831_112734.jpg" width="480" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjY8ygvOgAq5Mv5SbbRTQfRCZb6TYGy0tbQCJwOE94rt10SxsOqY2AhdQ-du4iPaRdjKEKOtoUTZ8wafamSfIYe25KxQZW7FMF4F6J1ycxFeaX-B_WP9PLpKcA4bDy8_fEi7CUzIi4z5DY/s1600/20170831_112550.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1200" data-original-width="1600" height="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjY8ygvOgAq5Mv5SbbRTQfRCZb6TYGy0tbQCJwOE94rt10SxsOqY2AhdQ-du4iPaRdjKEKOtoUTZ8wafamSfIYe25KxQZW7FMF4F6J1ycxFeaX-B_WP9PLpKcA4bDy8_fEi7CUzIi4z5DY/s400/20170831_112550.jpg" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div>
<br /></div>
<div>
<br /></div>
<div>
<b><br /></b></div>
<br />
<br />
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-223455584910870050.post-6830631513942979322017-08-19T02:22:00.002-07:002017-08-19T09:51:16.642-07:00Navigate to Eclipse 2017 USA with ReallySimpleMovingMapAre you planning to view the Total Solar Eclipse in the USA on 21 August 2017? If so, <a href="http://bit.ly/2uRmknK" target="_blank">here's the trajectory</a> in the web-browser version of <i>ReallySimpleMovingMap. </i>Just drive/hike/etc(!) until your real-time location marker hits the line! You can also load the eclipse trajectory via the <a href="https://itunes.apple.com/us/app/reallysimplemovingmap/id1230068561?ls=1&mt=8" target="_blank">iOS</a> or <a href="https://play.google.com/store/apps/details?id=com.flylogical.rsmm" target="_blank">Android</a> App version of <i>ReallySimpleMovingMap.</i> In the App, click <i>Shapes, </i>check<i> Show Shapes on map</i>, click <i>...from Cloud</i>, select <i>Eclipse 2017 USA</i> from the available <i>Shape Groups from Cloud</i>, and confirm <i>OK </i>to load. Note: this requires you to have logged-in to the App.<br />
<i><br /></i>
The trajectory data is reproduced courtesy of <a href="https://eclipse2017.nasa.gov/" target="_blank">NASA</a>, who have published a GoogleMaps version of the trajectory <a href="https://eclipse2017.nasa.gov/sites/default/files/interactive_map/index.html" target="_blank">here</a>, showing the exact times of the eclipse along the trajectory (obtained by clicking anywhere on the trajectory). Use <i><a href="http://bit.ly/2uRmknK" target="_blank">ReallySimpleMovingMap</a> </i>to help you navigate to the precise location, then check the <a href="https://eclipse2017.nasa.gov/sites/default/files/interactive_map/index.html" target="_blank">NASA map</a> to get the timings.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-223455584910870050.post-67765443444933237732017-08-05T16:11:00.003-07:002017-08-07T01:18:27.624-07:00RAF Henlow "100" Bulldog & Chipmunk FlybyI recently had the pleasure of participating with my Bulldog in the RAF Henlow "100" (centenary) flyby. Together with 10 Chipmunks, we flew over the base in the shape of "1 0 0" (viewed from the ground). Here is a collection of photos and videos from the event. I'll add more materials if/when I receive them, e.g., the photos from the official "camera ship" and from the official photographer on the ground.<br />
<br />
At one point, with thunderstorms preventing aircraft from getting through to Henlow in the days leading up to the flyby, we thought we would not have enough aircraft for "1 0 0", so we were preparing to fall back on "Plan B", which would have replaced "1 0 0" with "C" (the Roman numeral for 100, for the classically educated). However, the storms cleared and, on the day, we had 11 aircraft: 10 Chipmunks and my lone Bulldog, enough for four aircraft in each "0" and three in the "1". My Bulldog's position was lead aircraft in the "box" formation of four aircraft in the outermost "0", i.e. at the front/centre of the "0". It's always more challenging with mixed types: the Bulldog wants to climb at 80 kts, the Chipmunks at 70 kts. Even with "inter" flap, the Bulldog was on the verge of stalling when climbing at 70 kts -- the stall-warning horn chirping away is quite a distraction when climbing out, low over the perimeter trees and hedges, having survived the bumpy formation takeoff run along the grass runway. In the end, we compromised on 75 kts, which was far more comfortable.<br />
<br />
Anyway, here are the pics so far. Videos to be added later (once processed).<br />
<br />
<h3>
Flightline</h3>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEguUiZzWqKIiL5two0GfJOkPvN7thrDDmpjnGdWsexzdgemjTKyoe3lz_0V5ocWeOwXwxqpVA0TNseH3-iNxJUHbC0zx9aWdltPsB_2HdGQklynRWqbVL7F1Jv-HsNv67dueK9v7fW-nYk/s1600/20170721_FLIGHTLINE_1.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="900" data-original-width="1600" height="225" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEguUiZzWqKIiL5two0GfJOkPvN7thrDDmpjnGdWsexzdgemjTKyoe3lz_0V5ocWeOwXwxqpVA0TNseH3-iNxJUHbC0zx9aWdltPsB_2HdGQklynRWqbVL7F1Jv-HsNv67dueK9v7fW-nYk/s400/20170721_FLIGHTLINE_1.jpg" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg2npqxDNndUA5r_z8atXUT8OEbVT4rqGK8KAb8CoSVGojJHc_n7dIXZIHjxBQ1gBgDxusWcNyYvX99mw2DS8rNBgEW__wtkktGUQQ6fPc941FCUDzrEZpK58GidiLCjh6SBYLJ_UWxAxw/s1600/20170721_FLIGHTLINE_2.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="900" data-original-width="1600" height="225" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg2npqxDNndUA5r_z8atXUT8OEbVT4rqGK8KAb8CoSVGojJHc_n7dIXZIHjxBQ1gBgDxusWcNyYvX99mw2DS8rNBgEW__wtkktGUQQ6fPc941FCUDzrEZpK58GidiLCjh6SBYLJ_UWxAxw/s400/20170721_FLIGHTLINE_2.jpg" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgLnFNHOkWiOMngTngcmH3XV7F2gIRidIPulH3aU9mN6W7W4KJbhynYfFT_6vOVlh1nQYJSw9hXwTtGaChfhxi-FspHJBak0oMaPiecnuhwjbmBFIwfFq4_kz1BoQV2eIsDQ3eA0Vx5bfU/s1600/20170721_FLIGHTLINE_3.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="900" data-original-width="1600" height="225" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgLnFNHOkWiOMngTngcmH3XV7F2gIRidIPulH3aU9mN6W7W4KJbhynYfFT_6vOVlh1nQYJSw9hXwTtGaChfhxi-FspHJBak0oMaPiecnuhwjbmBFIwfFq4_kz1BoQV2eIsDQ3eA0Vx5bfU/s400/20170721_FLIGHTLINE_3.jpg" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiUvxSsl-cx2Q8wsP2Ti3fApPVi5VCla7CQ9OgLg9naFgaks7wNIY6uyVaZuyV0ekyQlBkc36URx5wTjrOfzLfSfc11gnKGhJJnQ4jpQZZpQObNCJ8MhQc8eOw65X-dGFdBcUZt1RFP7Go/s1600/20170721_GBZFN_ROYALCHIPPE.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="900" data-original-width="1600" height="225" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiUvxSsl-cx2Q8wsP2Ti3fApPVi5VCla7CQ9OgLg9naFgaks7wNIY6uyVaZuyV0ekyQlBkc36URx5wTjrOfzLfSfc11gnKGhJJnQ4jpQZZpQObNCJ8MhQc8eOw65X-dGFdBcUZt1RFP7Go/s400/20170721_GBZFN_ROYALCHIPPE.jpg" width="400" /></a></div>
<div>
<br /></div>
<div>
<br /></div>
<h3>
Training flight #1</h3>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhmzOHmzphF25hr-p5KXfw3xorxKm5iCuMpED1GaIEEPKbraQUCl7zaSWsnzqWgmlmCwg-1CSzqKdhT9D9oropac3k23bfNObh-u0kui2cQazT-9gf_DYhYEIFZtk6-SwbL-KlFdsne0jM/s1600/IMG_2241.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1600" data-original-width="1200" height="640" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhmzOHmzphF25hr-p5KXfw3xorxKm5iCuMpED1GaIEEPKbraQUCl7zaSWsnzqWgmlmCwg-1CSzqKdhT9D9oropac3k23bfNObh-u0kui2cQazT-9gf_DYhYEIFZtk6-SwbL-KlFdsne0jM/s640/IMG_2241.JPG" width="480" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhfIPqf0EVjjjJhz7BDRBtWit9AdyrBtt7oL6bxyhxa_U3RdRWJKybdsYgZ7AmY2BTl5Zs0DUffxor5freLQAOwsJhcjCVZ3FPrmmnHFGbCw9PNuErQzelDKPPiJgWFmjVbIRgDcN_PHzk/s1600/IMG_2229.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1200" data-original-width="1600" height="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhfIPqf0EVjjjJhz7BDRBtWit9AdyrBtt7oL6bxyhxa_U3RdRWJKybdsYgZ7AmY2BTl5Zs0DUffxor5freLQAOwsJhcjCVZ3FPrmmnHFGbCw9PNuErQzelDKPPiJgWFmjVbIRgDcN_PHzk/s400/IMG_2229.JPG" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhcBWxZxdNi6OzT5H3AonL3zAjb5bUBwN8YGqyiNgzCp9HY_cUyNbSl8Bt7cP1ujnkAHhkaXs83OheO8v555wCzCCOGYmq-WUOJza90cfU3q1y-xhxzvzx1QXoT46YqUE-v27BOTowFOas/s1600/IMG_2230.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1200" data-original-width="1600" height="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhcBWxZxdNi6OzT5H3AonL3zAjb5bUBwN8YGqyiNgzCp9HY_cUyNbSl8Bt7cP1ujnkAHhkaXs83OheO8v555wCzCCOGYmq-WUOJza90cfU3q1y-xhxzvzx1QXoT46YqUE-v27BOTowFOas/s400/IMG_2230.JPG" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiCiNE_H_hmzv6UdSrQ9jgdoKwfzn0jbpLot-iA28lIpSvVT9eYnn15FY6rEnF-LfPyBFrsX-FIcn69TPWpmg16e4iqpnRvBSq2ICwzCXO_VdQYGj0q9GZO86AQyhnlxBUjA8VCixmCsxI/s1600/IMG_2231.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1200" data-original-width="1600" height="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiCiNE_H_hmzv6UdSrQ9jgdoKwfzn0jbpLot-iA28lIpSvVT9eYnn15FY6rEnF-LfPyBFrsX-FIcn69TPWpmg16e4iqpnRvBSq2ICwzCXO_VdQYGj0q9GZO86AQyhnlxBUjA8VCixmCsxI/s400/IMG_2231.JPG" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi7PJ75X9L1plXjp6lEN_FBFOwaiHueJ0b2LdVNdWKmHhE6HD9SpzLa6GJXOqwBRNO9eTuBFdd03ECFNM56Y2knM74-TNb55XAH-nQgC6CzIh1IL6zV3m6HDj8jqi44zfrrukNZ264aS3E/s1600/IMG_2232.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="480" data-original-width="640" height="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi7PJ75X9L1plXjp6lEN_FBFOwaiHueJ0b2LdVNdWKmHhE6HD9SpzLa6GJXOqwBRNO9eTuBFdd03ECFNM56Y2knM74-TNb55XAH-nQgC6CzIh1IL6zV3m6HDj8jqi44zfrrukNZ264aS3E/s400/IMG_2232.JPG" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgBdanBGLbcvKVIRjwN_F7R1HnDorLhGSceOTpIyJb2raHVOHCguVbgOj1cSBW_IY_OZf-N6BrEviY2B6FwmsVUUqnQ6IYcOpwzlrHv5Gn2SZJPbHQv9E_gORrAHAtjrHfh61A0L_ffW58/s1600/IMG_2237.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1200" data-original-width="1600" height="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgBdanBGLbcvKVIRjwN_F7R1HnDorLhGSceOTpIyJb2raHVOHCguVbgOj1cSBW_IY_OZf-N6BrEviY2B6FwmsVUUqnQ6IYcOpwzlrHv5Gn2SZJPbHQv9E_gORrAHAtjrHfh61A0L_ffW58/s400/IMG_2237.JPG" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjfKQHSgffwx6nrpcWmCX2ItOIHPCUJ-RpdRfePuNP-NK2FiEKDIBUte0KXTWe8Wk0HmrTp35EfqxaFy77Efi3Mjt3qtIU0yDH8ayWOK7s8KjUwGS_u6BK7HvEUr8Kklyfu6E6yQ0SgLlw/s1600/IMG_2239.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1200" data-original-width="1600" height="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjfKQHSgffwx6nrpcWmCX2ItOIHPCUJ-RpdRfePuNP-NK2FiEKDIBUte0KXTWe8Wk0HmrTp35EfqxaFy77Efi3Mjt3qtIU0yDH8ayWOK7s8KjUwGS_u6BK7HvEUr8Kklyfu6E6yQ0SgLlw/s400/IMG_2239.JPG" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgDMaNTmsVEacTW0ycsG46t3g2h7qHxrrubyTPmIkfIOEv75ciqRFNJDV-A7A31fIsKgkufBam2i9aKrpgMYl7lrC94oC4Y4eEc51nM7ExFxoiN4E6SB5jfbHVw1eRwb2im9hkV2CjZrnE/s1600/IMG_2235.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1200" data-original-width="1600" height="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgDMaNTmsVEacTW0ycsG46t3g2h7qHxrrubyTPmIkfIOEv75ciqRFNJDV-A7A31fIsKgkufBam2i9aKrpgMYl7lrC94oC4Y4eEc51nM7ExFxoiN4E6SB5jfbHVw1eRwb2im9hkV2CjZrnE/s400/IMG_2235.JPG" width="400" /></a></div>
<br />
<br />
<h3>
Training flight #2</h3>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgQb-qAa9AJmGUDvXuE1t4w7C28Z8jftDfjLOmTOOD-Bx8eYTsg-ZFaCVYsvAgT_ylirahDTL7BBdTkx-H-Q41KzrF2tZRmXr7liNvpd5rJv-2XRQZrPBHEEfzTK2R1S9gPj-nPfXEGj98/s1600/20170719_GBZFN_CHIPPIE_FORMATION_1.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="900" data-original-width="1600" height="225" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgQb-qAa9AJmGUDvXuE1t4w7C28Z8jftDfjLOmTOOD-Bx8eYTsg-ZFaCVYsvAgT_ylirahDTL7BBdTkx-H-Q41KzrF2tZRmXr7liNvpd5rJv-2XRQZrPBHEEfzTK2R1S9gPj-nPfXEGj98/s400/20170719_GBZFN_CHIPPIE_FORMATION_1.jpg" width="400" /></a></div>
<h3>
The "1 0 0"</h3>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhTSJOCq4xhthqCEh5uIMR1ZA1rE8RvMyFgbK-1SMhoXAFvLUS1frZoLJJE8_ygZJZBmqTEeTvyzHh86y8pOv7I5J2cZB2pNI9G_am06GIv2nQJi7xFRRIVwXLqHstoui9B0QoWewrFNcA/s1600/-100-+at+Henlow-1.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="360" data-original-width="480" height="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhTSJOCq4xhthqCEh5uIMR1ZA1rE8RvMyFgbK-1SMhoXAFvLUS1frZoLJJE8_ygZJZBmqTEeTvyzHh86y8pOv7I5J2cZB2pNI9G_am06GIv2nQJi7xFRRIVwXLqHstoui9B0QoWewrFNcA/s400/-100-+at+Henlow-1.jpg" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjPOaYpaKwAzDRMd_LXVP3R4ieT4XOAwMETNogB3coTEKPRWSBXtBC3QpJP8bN3qL40PkF9lBbP-F7f4EQRxtP99c2Yh5l9GREjFWXhl1sJZEUhNOG-Obp8nK67wQyl908ez9xAC6LDNVA/s1600/-100-+at+Henlow-2.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="960" data-original-width="539" height="640" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjPOaYpaKwAzDRMd_LXVP3R4ieT4XOAwMETNogB3coTEKPRWSBXtBC3QpJP8bN3qL40PkF9lBbP-F7f4EQRxtP99c2Yh5l9GREjFWXhl1sJZEUhNOG-Obp8nK67wQyl908ez9xAC6LDNVA/s640/-100-+at+Henlow-2.jpg" width="356" /></a></div>
<div>
<br /></div>
<h3>
The "Royal Chipmunk"</h3>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh9qg1XMc7bQEHVRhrm-kY_hKqxWGH0qvwXfNqsThzE_nQLpcFhwKoczHBY70OF2VURjKkHmigVLi6kiSZ9FDC2AxXqLRFQoabPGQaLihoK6nsBqLa-e6e72GH7qsJZxdZSoOkeptyP7ZA/s1600/20170718_ROYAL_CHIPPIE_1.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1600" data-original-width="900" height="640" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh9qg1XMc7bQEHVRhrm-kY_hKqxWGH0qvwXfNqsThzE_nQLpcFhwKoczHBY70OF2VURjKkHmigVLi6kiSZ9FDC2AxXqLRFQoabPGQaLihoK6nsBqLa-e6e72GH7qsJZxdZSoOkeptyP7ZA/s640/20170718_ROYAL_CHIPPIE_1.jpg" width="360" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgeQBrzgpw91_dgj2oNB4IZltAV6hfkAUTTUnCXeuqqpHKHorwB-7z8QZnuhCFqdXWxsFRrmVKCbkRwfPUmnte2WLZRN2NuAwS7uMKv56ifY3TwcqSqcY1jU5NSRsj0cQLM51zlIG3NVrA/s1600/20170718_ROYAL_CHIPPIE_2.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1600" data-original-width="900" height="640" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgeQBrzgpw91_dgj2oNB4IZltAV6hfkAUTTUnCXeuqqpHKHorwB-7z8QZnuhCFqdXWxsFRrmVKCbkRwfPUmnte2WLZRN2NuAwS7uMKv56ifY3TwcqSqcY1jU5NSRsj0cQLM51zlIG3NVrA/s640/20170718_ROYAL_CHIPPIE_2.jpg" width="360" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhrZgcduYEEPG81WzBKJHtUCh42pnZHZUE0CR7VGlSOpQ45MN6tj-CMq-QE12b4EB6iCr7S-fn0pIY9Vt8UyIVFHaYtIafmIzcvDmrkmD3o5sOrAw2d70OeF9Nqhcd_VIhW7MqzhjTK2U4/s1600/20170718_ROYAL_CHIPPIE_COCKPIT_1.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="900" data-original-width="1600" height="225" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhrZgcduYEEPG81WzBKJHtUCh42pnZHZUE0CR7VGlSOpQ45MN6tj-CMq-QE12b4EB6iCr7S-fn0pIY9Vt8UyIVFHaYtIafmIzcvDmrkmD3o5sOrAw2d70OeF9Nqhcd_VIhW7MqzhjTK2U4/s400/20170718_ROYAL_CHIPPIE_COCKPIT_1.jpg" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiYlfaX9bkqNK0pdaTyJJDQ6Mwf2hk3lfRy5nefijHGTmyjMKZ2xqM_1wVMZvsi9gRhGzo_aZaOkzwHjhP4e3qJvCBzySszEKQ396FH8dXYGg6U3nzIYpVn9ep6TXsaW9RUnw063MaFblA/s1600/20170718_ROYAL_CHIPPIE_COCKPIT_2.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1600" data-original-width="900" height="640" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiYlfaX9bkqNK0pdaTyJJDQ6Mwf2hk3lfRy5nefijHGTmyjMKZ2xqM_1wVMZvsi9gRhGzo_aZaOkzwHjhP4e3qJvCBzySszEKQ396FH8dXYGg6U3nzIYpVn9ep6TXsaW9RUnw063MaFblA/s640/20170718_ROYAL_CHIPPIE_COCKPIT_2.jpg" width="360" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh0RmB2KPKVv-T5CBVO8av1CQN8oQ407_baTWjf6cSbDuq2XBy3dP4yW-uoN-HoywhfP76EYUc6H7Lw3a5mekf4NtI9BdD5ahyjEqcOecL2sBIEmdqFXhFmAZlInRA246b1Ddw9tZgu7D4/s1600/20170718_ROYAL_CHIPPIE_PLAQUE.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1600" data-original-width="900" height="640" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh0RmB2KPKVv-T5CBVO8av1CQN8oQ407_baTWjf6cSbDuq2XBy3dP4yW-uoN-HoywhfP76EYUc6H7Lw3a5mekf4NtI9BdD5ahyjEqcOecL2sBIEmdqFXhFmAZlInRA246b1Ddw9tZgu7D4/s640/20170718_ROYAL_CHIPPIE_PLAQUE.jpg" width="360" /></a></div>
<br />
<h3>
RAF Henlow -- Isle of Man connection</h3>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhI4UXscnhMYfKqfK6tW3IGgXKflzkQ8dWzzFZ2MDjm0PJXA17j9t5ykyQ8OgJUI-dHSbdt8EZYU6hzI3-mdNRZnITBc-hykld0p-Gojy3Kq7LWwakH6E2yJ9B1_s_-xz0u_7YesDrWBig/s1600/20170719_HENLOW_IOM_CONNECTION.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1600" data-original-width="900" height="640" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhI4UXscnhMYfKqfK6tW3IGgXKflzkQ8dWzzFZ2MDjm0PJXA17j9t5ykyQ8OgJUI-dHSbdt8EZYU6hzI3-mdNRZnITBc-hykld0p-Gojy3Kq7LWwakH6E2yJ9B1_s_-xz0u_7YesDrWBig/s640/20170719_HENLOW_IOM_CONNECTION.jpg" width="360" /></a></div>
<br />
<h2>
Parking Radar goes LIVE</h2>
<br />
<i>Updated 1 October 2017: <a href="http://flylogical.blogspot.com/2017/10/parking-radar-gets-its-own-website.html" target="_blank">Parking Radar gets its own website</a></i><br />
<br />
<i>Updated 12 September 2017: <a href="https://itunes.apple.com/us/app/parking-radar/id1265641228?ls=1&mt=8" target="_blank">iOS version now available</a></i><br />
<a href="https://www.blogger.com/"><i></i><br /></a>
Today we are pleased to announce the release of the <b>Parking Radar</b> App on the <a href="https://play.google.com/store/apps/details?id=com.flylogical.parkingradar" target="_blank">Google Play Store</a>, the <a href="https://www.amazon.com/dp/B074HDMLH7/ref=sr_1_1?s=mobile-apps&ie=UTF8&qid=1501756594&sr=1-1&keywords=parking+radar" target="_blank">Amazon App Store</a>, and the <a href="https://itunes.apple.com/us/app/parking-radar/id1265641228?ls=1&mt=8" target="_blank">App Store for iOS</a>. Parking Radar is a free (and ad-free) crowd-sourcing service for you and the community. When you park, tell the app, and it designates your current location as a parking space; that information is then available to the community in real time via a moving-map display. When you're travelling and looking for somewhere to park, the app can help you find your next space with greater ease.<br />
<br />
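For the technically curious, the heart of any crowd-sourced service like this is a tiny location report sent from each phone up to the cloud, plus a feed of recent reports coming back down to drive the moving map. Purely as an illustration of the idea -- the field names and endpoint below are invented, not Parking Radar's actual API -- here is what such a report might look like, sketched in MATLAB:<br />
<pre>
% Hypothetical sketch of an "I just parked here" report.
% Field names and the URL are invented for illustration only.
report = struct( ...
    'lat',       51.5007, ...
    'lon',       -0.1246, ...
    'timestamp', posixtime(datetime('now')), ...   % Unix time, seconds
    'status',    'parked');                        % vs. 'vacated' on leaving

payload = jsonencode(report)   % {"lat":51.5007,"lon":-0.1246,...}

% A client would POST the report to the service, e.g.:
% webwrite('https://example.com/api/spots', report);
</pre>
Multiply that by every user, and the moving-map display of available spaces falls out of a simple query of recent reports near your position.<br />
<br />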
The app is the brainchild of Steve Adler (of Sacred Chocolate) and Yusuf Jafry (of FlyLogical). Steve and Yusuf met almost 30 years ago when they were engineering grad students at Stanford University, California. Their first project together, at Stanford, was to design a re-usable re-entry space vehicle: Steve designed the heat-shield from Chinese White Oak, and Yusuf designed the orbital trajectory to guide the vehicle to a touchdown on the Great Salt Lake in Utah, for ease of recovery. Sharing a fascination with the emergence and convergence of Cloud Computing, Artificial Intelligence, Big Data, and Crowd-Sharing, the pair have made Parking Radar their first joint foray into this new and exciting space.<br />
<br />
If anyone says it isn't rocket science: it is, actually : )