Topcoder SpaceNet

11/14/2022

SpaceNet's Off-Nadir Building Footprint Extraction Challenge is a crowdsourcing success story. It used Topcoder's expertise to crowdsource computer vision algorithms in a push to advance mapping automation. Read on for a short value-prop summary of the Topcoder SpaceNet Challenge.

Crowdsourcing the Algorithm

SpaceNet is on a mission to accelerate geospatial machine learning. To discover whether off-nadir imagery can help automate mapping, SpaceNet launched an off-nadir building detection challenge, crowdsourced with Topcoder.

The Challenge Defined

The challenge focused on off-nadir imagery for building footprint extraction. "Off-nadir" imagery is satellite imagery taken at an angle rather than from directly above the location. The competing algorithms attempted to extract map-ready building footprints from highly off-nadir imagery. Images acquired after a disaster are frequently more off-nadir than standard mapping images, as the satellite is not always directly above the disaster area. The ability to work with these off-nadir images and accurately extract building footprints is therefore vital: it would help create better maps in urgent situations.

The dataset covered 665 square kilometers of downtown Atlanta, with 27 WorldView images taken from 7 to 54 degrees off-nadir and approximately 126,000 buildings labeled with footprints. The competitors were to develop algorithms that generate polygons correctly outlining the boundary of each identifiable building. The polygons produced by the top five algorithms were compared with the actual buildings, and the results were scored using the SpaceNet metric.

How Did the Algorithms Measure Up?

To measure success, SpaceNet assessed the results against five key components. Below are summaries of each component. If you want more detailed information directly from SpaceNet, read "The good and the bad in the SpaceNet Off-Nadir Building Footprint Extraction Challenge" by Nick Weir, Data Scientist at CosmiQ Works and SpaceNet 4 Challenge Director at SpaceNet LLC.

Look Angle and Direction

In the very off-nadir imagery, the algorithms identified approximately the same number of buildings, albeit using different methodologies. Problems arose with south-facing images, where shadows obscured many building features. However, the north-facing images had brighter sunlight reflections on the buildings, proving that look angle isn't all that matters.

Nadir vs Off-Nadir

All in all, the algorithms differed only slightly in their ability to identify buildings in the nadir and off-nadir images. Over 80% of the buildings were identified by either none or all of the five competitors, with the algorithms differing in their ability to identify the remaining roughly 20%. Differences were greatest in the very off-nadir range, but only 30% of the buildings found by one or more of the competitors were not found by all of them.

False Positives

Although there was more variability in the false positives than in the correct predictions, the five algorithms all produced very similar incorrect predictions. Two competitors used gradient boosting machines to filter false positives out of their predictions, which likely gave them the upper hand in precision.

Building Size

All the algorithms found smaller buildings harder to detect. The best one identified only 20% of buildings smaller than 40 square meters, while at the other end of the range, 90% of buildings larger than 105 square meters were correctly identified by the best solution. The best algorithm also performed well at detecting buildings partially blocked by trees.
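The scoring idea behind the SpaceNet metric can be sketched as IoU-based matching: a predicted footprint counts as a true positive when its intersection-over-union with an unmatched ground-truth footprint clears a 0.5 threshold, and the final score is the F1 over all matches. The sketch below is dependency-free and uses axis-aligned boxes instead of real polygons; the greedy matching and the toy data are simplifying assumptions, not SpaceNet's exact implementation.

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (xmin, ymin, xmax, ymax)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def spacenet_style_f1(preds, truths, threshold=0.5):
    """F1 over footprints, matching each prediction greedily by best IoU."""
    unmatched = list(truths)
    tp = 0
    for p in preds:
        # Take the best-overlapping truth box not yet claimed by another prediction.
        best = max(unmatched, key=lambda t: iou(p, t), default=None)
        if best is not None and iou(p, best) >= threshold:
            tp += 1
            unmatched.remove(best)
    fp, fn = len(preds) - tp, len(truths) - tp
    precision = tp / (tp + fp) if preds else 0.0
    recall = tp / (tp + fn) if truths else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0

# Toy check: one perfect detection, one missed building.
truths = [(0, 0, 10, 10), (20, 0, 30, 10)]
preds = [(0, 0, 10, 10)]
print(round(spacenet_style_f1(preds, truths), 3))  # precision 1.0, recall 0.5 -> 0.667
```

Real SpaceNet evaluation works on georeferenced polygons rather than boxes, but the precision/recall trade-off it exposes is the same one discussed in the component summaries above.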
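The gradient-boosting filter mentioned under False Positives can be illustrated as a second-stage classifier over candidate footprints. Everything concrete here is an assumption for illustration: the feature set (detector confidence, footprint area, look angle), the toy training data, and the use of scikit-learn's `GradientBoostingClassifier` — the post does not disclose the competitors' actual features or models.

```python
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical features per candidate footprint:
# [detector confidence, footprint area in m^2, look angle in degrees]
X_train = [
    [0.95, 120.0, 10.0], [0.90, 300.0, 25.0], [0.85, 80.0, 40.0],
    [0.30, 15.0, 50.0], [0.40, 20.0, 48.0], [0.25, 10.0, 52.0],
]
y_train = [1, 1, 1, 0, 0, 0]  # 1 = real building, 0 = false positive

# Train the second-stage filter on labeled candidates.
clf = GradientBoostingClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)

# Keep only candidates the filter classifies as real buildings.
candidates = [[0.92, 150.0, 12.0], [0.28, 12.0, 51.0]]
keep = clf.predict(candidates)
print(list(keep))  # likely [1, 0] on this toy data
```

The design point is that rejecting dubious candidates trades a little recall for precision, which is consistent with the post's observation that the two competitors using this approach gained the upper hand in precision.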
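The Building Size breakdown amounts to computing recall per footprint-area bucket. The bucket edges (40 and 105 square meters) come from the post; the sample data and the function name below are made up for illustration.

```python
def recall_by_size(buildings, edges=(40.0, 105.0)):
    """buildings: list of (area_m2, was_detected) pairs; returns recall per bucket."""
    labels = ["< 40 m^2", "40-105 m^2", "> 105 m^2"]
    totals = [0, 0, 0]
    hits = [0, 0, 0]
    for area, detected in buildings:
        # Assign each ground-truth building to a size bucket.
        i = 0 if area < edges[0] else (1 if area <= edges[1] else 2)
        totals[i] += 1
        hits[i] += int(detected)
    return {lab: (hits[i] / totals[i] if totals[i] else 0.0)
            for i, lab in enumerate(labels)}

sample = [(25, False), (35, True), (60, True), (90, True), (150, True), (200, True)]
print(recall_by_size(sample))  # {'< 40 m^2': 0.5, '40-105 m^2': 1.0, '> 105 m^2': 1.0}
```

A table like this makes the post's headline numbers concrete: the best solution sat around 0.20 in the smallest bucket and around 0.90 in the largest.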