
Friday, August 5, 2016

A Residential GIS Location Study

Applications in GIS Final Project

Well, it's easy to see why this project usually gets three weeks during a normal semester. The project was to produce a residential location study using GIS to help a woman and her daughter get started on house hunting in St. Augustine, FL.  The analysis involved two proximity analyses, three calculated analyses, and two weighted analyses, one equally weighted and one favored.  With the analysis complete, recommendations were made on the three tracts that best fit the criteria they had set.

To present the data, a PowerPoint presentation was made.  It is located at this site:

http://students.uwf.edu/mr80/AppsFinalProject_MR.pptx

Sunday, July 17, 2016

Urban Planning - GIS for Local Government

Applications in GIS - Participation Assignment


The web address for the Walton County property appraiser is www.waltonpa.com.  On this site I was able to discover that the highest-priced property sold in June 2016 went for $10,200,000.  Um, let me spell that out just so we're clear... ten million, two hundred thousand dollars.  That is what was paid on June 10, 2016 for a single-family home with 4 bedrooms and 4 baths.  It previously sold for $7,200,000 in October 2006.

Legal Information:  LOT 1 BLK 19 ROSEMARY BEACH PH3 PB 13-2 IN OR 1973-61 OR 2072-84 OR 2623-810 OR 2733-3116 OR 3012-3183

The legal description shown here may be condensed for assessment purposes. Exact description should be obtained from the recorded deed.

The assessed land value for this property is $4,856,300.  The site didn't offer the ability to check the assessed land value from the last sale, only the total price.

This two-story house was built in 2000 in Rosemary Beach, Walton County, Florida.  It is 6,200 square feet, which is a pretty fair-sized house, but it is the location that jacks the price up so high.


The second part of this assignment was to assess the values of parcels in a subdivision named West Ridge Place. One of the parcels, 090310165, looks like it should have its value lowered by $6,175 to match the assessment of the lots around it.  Lots 090310320 and 090310325 should have their values raised by $2,137 to match the same-sized lots around them.  The other three lots in the $24,938 range should stay the same, even though they are comparable in size to others around them, because they have easements on them.




Urban Planning - GIS for Local Government

Applications in GIS - Mod 9

Local governments worldwide use zoning as a method of land use planning.  The first scenario of this week's exercise had us viewing property and obtaining data from the Marion County Appraiser's website to create a site map of the parcel of a client who wanted to find out what impacts, if any, a fly-in community would have on adjacent parcels.  To do this we learned about different types of zoning codes and their meanings, and how to download a table of parcel data to a GIS file.  We then joined that table to our parcels feature class to access the data.  Next we did some editing to capture only those parcels we were interested in.
With our selection set we moved on to Data Driven Pages, an awesome tool that lets you generate a series of maps by applying a single layout to numerous map extents.  To do this we created a grid index covering our entire area of interest, then intersected this grid with our zoning feature class to create a zoning index for the area of interest, symbolized according to zone codes.  Next we followed some seemingly convoluted steps to create a locator map.  We copied our index grid, Index2400, to another feature class with a different name, Locator_Mask, then copied that and renamed the copy Locator_Mask Current Page.  So in the end we had three identical layers with different names, all in the locator map data frame.  Which makes no sense at all, until you manipulate each of them to do something different.

With Data Driven Pages enabled we were able to set up the Locator_Mask Current Page layer to highlight, within the locator map, the current page being displayed in the extent.  Locator_Mask masked the rest of the grid, and the original, Index2400, was symbolized to outline each page in white and label the pages.  So Data Driven Pages allowed the locator map to match the portion of the map displayed in the extent, and Dynamic Text allowed the page number and the name of the index grid to update as you page through the file.  This setup allowed me to add all the standard map elements just once and yet produce 16 different maps, simply by exporting my map to a PDF file and specifying that all pages be exported to a single file.  For the last step in this portion we generated a report using the Create Report option in the attribute table and exported that to a PDF file as well.
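As a rough illustration of that final export step, here is a minimal arcpy sketch; it assumes an ArcMap 10.x session with Data Driven Pages already enabled on the layout, and the output path is hypothetical:

```python
import arcpy

# Assumes the map document is open in ArcMap with Data Driven Pages enabled.
mxd = arcpy.mapping.MapDocument("CURRENT")

# Export every page in the series into one PDF (hypothetical path).
mxd.dataDrivenPages.exportToPDF(r"C:\gis\zoning_maps.pdf", "ALL")

del mxd  # release the reference to the map document
```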

The second scenario had us doing a lot of editing.  We had to merge a couple of parcels and update their attribute information, then split a portion out of that to create a new parcel and update its attribute information.  This was actually a lot of fun because it included drawing lines.  Next we had to do some data manipulation and searches to report on suitable sites for a new Extension office to be built in Gulf County.  The requirements were that the land had to be county owned, 20 acres or larger, and vacant.  Gathering this data required using Query Builder to limit the records to parcels of 20 acres or more and joining tables.  Again, we used the Create Report feature of the attribute table to report our results.
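The Query Builder expression boiled down to a simple attribute selection.  Here is a hedged sketch of what the equivalent arcpy query could look like; the field names OWNER, ACRES, and USE_CODE are hypothetical stand-ins for whatever the county's parcel table actually uses:

```python
import arcpy

# Hypothetical parcels feature class and field names.
arcpy.MakeFeatureLayer_management(r"C:\gis\gulf.gdb\parcels", "parcels_lyr")

# County-owned, vacant, and 20 acres or larger.
where = "OWNER = 'GULF COUNTY' AND ACRES >= 20 AND USE_CODE = 'VACANT'"
arcpy.SelectLayerByAttribute_management("parcels_lyr", "NEW_SELECTION", where)

# Number of parcels meeting all three requirements.
print(arcpy.GetCount_management("parcels_lyr").getOutput(0))
```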

Sunday, July 10, 2016

Location Decisions

Applications in GIS - Mod 8

This week's assignment was to make a recommendation on places to live for a couple, a doctor who will be working at the North Florida Regional Medical Center, and her husband who will be teaching at the University of Florida, in Alachua County, Florida.
The criteria set by the couple were nearness to work for both of them, a neighborhood with a high percentage of people 40 to 49 years old, and a neighborhood with high house values.  This map was created with four data frames, one for each criterion.  For the distance frames we ran the proximity analysis tool Euclidean Distance to generate a raster showing distances out from the points of interest in 3-mile increments.  To allow for easier interpretation and analysis we then reclassified them. Next we added fields to our census tract attribute table to calculate the percentage of people 40-49 and of homeowners, then converted these feature classes to rasters and reclassified them so they could be compared with the distance rasters, and recommended a few locations for each criterion.
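For anyone curious about scripting this, here is a minimal sketch of one distance criterion; it assumes the Spatial Analyst extension, a hypothetical geodatabase and layer names, and 3-mile bands converted to meters:

```python
import arcpy
from arcpy.sa import EuclideanDistance, Reclassify, RemapRange

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\gis\alachua.gdb"  # hypothetical workspace

# Straight-line distance outward from the hospital site, 100 m cells.
dist = EuclideanDistance("nfrmc_site", cell_size=100)

# Reclassify into 3-mile bands (expressed in meters); lower class = closer.
bands = Reclassify(dist, "VALUE", RemapRange([[0, 4828, 1],
                                              [4828, 9656, 2],
                                              [9656, 14484, 3]]))
bands.save("dist_nfrmc_rc")
```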

The next map is where our comparisons were made.  We added our four reclassified layers to the map, then created a model with them as the input to the Spatial Analyst Weighted Overlay tool.  For the first data frame we gave each criterion equal weight, but for the second we responded to the clients' request to focus more on proximity to workplaces: we restricted some of the scale values in the distance layers, placing them among the least favorable values so they wouldn't be influenced by the weight of the other criteria, and then adjusted the percent of influence to give more weight to the distances and less to the population and home ownership.  With this new information we were able to recommend a few areas evenly spaced between the two workplaces with the rest of the criteria accounted for.
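A hedged sketch of the favored-weight run might look like the following; the raster names, influence percentages, and remap values are illustrative, not the exact ones from the lab:

```python
import arcpy
from arcpy.sa import WeightedOverlay, WOTable, RemapValue

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\gis\alachua.gdb"  # hypothetical workspace

# Distance layers carry most of the influence (40% each); the age and
# home-value layers get 10% each.  Inputs share a 1-3 class scheme and
# are remapped onto a common 1-9 suitability scale.
wo_table = WOTable(
    [["dist_nfrmc_rc", 40, "VALUE", RemapValue([[1, 9], [2, 5], [3, 1]])],
     ["dist_uf_rc",    40, "VALUE", RemapValue([[1, 9], [2, 5], [3, 1]])],
     ["age4049_rc",    10, "VALUE", RemapValue([[1, 1], [2, 5], [3, 9]])],
     ["homeval_rc",    10, "VALUE", RemapValue([[1, 1], [2, 5], [3, 9]])]],
    [1, 9, 1])
WeightedOverlay(wo_table).save("suitability_favored")
```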




Sunday, July 3, 2016

Homeland Security - MEDS Protect

Applications in GIS - Mod 7

Critical infrastructure includes not only those infrastructures vital to our defense and security, but also the people, places, and things necessary for everyday living and working.  Certain key resources, such as large gathering places, may be deemed critical at times when circumstances make them a target.  One such example is the Boston Marathon bombing of 2013.

Our lesson this week took us back to that incident as though we were preparing for it.  We started with the MEDS data we had put together for last week's assignment.  The Boston Marathon was determined to be at risk, so security needed to be stepped up.  We were tasked with identifying critical infrastructure in the vicinity of the marathon finish line and implementing protective measures, such as securing the perimeter of a protective buffer zone around targeted sites by locating surveillance posts at ingress and egress points, and siting optimal observation positions in and around the event site.

We started with the Buffer tool to create a 3-mile buffer around the finish line.  Our next step was to locate those key resources within this buffer zone that could potentially be secondary targets.  For this we used our GNIS (Geographic Names Information System) layer to run a select-by-location query to isolate the features that fell within our zone of interest.  For this lesson we focused our attention on hospitals, selecting out those nearest to the finish line using the Generate Near Table tool, then manually selecting those that met our criteria.  We then created a 500' buffer around those hospitals and the finish line to emphasize the areas that required stepped-up security.  Extra security precautions were required at the finish line, so we used the Intersect tool to determine appropriate locations for security checkpoints along the ingress and egress routes at the border of our finish line buffer zone.
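A minimal arcpy version of that triage could look like this; the layer names are hypothetical:

```python
import arcpy

arcpy.env.workspace = r"C:\gis\boston.gdb"  # hypothetical workspace

# 3-mile security buffer around the finish line.
arcpy.Buffer_analysis("finish_line", "finish_buffer_3mi", "3 Miles")

# GNIS features inside the buffer are potential secondary targets.
arcpy.MakeFeatureLayer_management("gnis_points", "gnis_lyr")
arcpy.SelectLayerByLocation_management("gnis_lyr", "WITHIN", "finish_buffer_3mi")

# Distance from each selected feature to the finish line, for ranking hospitals.
arcpy.GenerateNearTable_analysis("gnis_lyr", "finish_line", "near_finish")
```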

Safety and security at large events like the Boston Marathon depend upon surveillance.  Since the finish line for this event was considered a high-value target, security needed to be heightened with surveillance equipment providing line-of-sight coverage of the finish line for the entire block.  Fifteen sites were chosen to provide the greatest amount of coverage without obstruction. The height of the equipment ranges from 10 to 15 meters in elevation.  Determining the placement of these cameras started with converting LiDAR data to a raster, then generating a hillshade effect to see how the shadows would fall at the time and date of the event.  Next we created a new feature class of surveillance points strategically placed around the finish line to provide the most unobstructed view possible.  To verify the placement of these points we ran the Viewshed tool to show any obstructed areas between each surveillance point and the finish line.  With this information we were able to determine which points needed adjustment, either through relocation or increased height.
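Here is a hedged sketch of the hillshade and viewshed checks; the elevation raster, camera points, and sun-position values are assumptions:

```python
import arcpy
from arcpy.sa import Hillshade, Viewshed

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\gis\boston.gdb"  # hypothetical workspace

# Sun position roughly approximating the time and date of the race.
shade = Hillshade("lidar_dem", 135, 55, "SHADOWS")
shade.save("race_day_hillshade")

# Cells visible from the candidate camera points; an OFFSETA field on the
# points can carry the 10-15 m equipment heights.
vis = Viewshed("lidar_dem", "camera_points")
vis.save("camera_viewshed")
```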

With that all set, we then used the Create Line of Sight tool to run a line from each surveillance point to the finish line.  Because the line carries the elevation of the point, the tool can determine where there might be sections of obstructed view.  We created a graph of one of the lines with an obstructed view, showing exactly where along the line the view is blocked.
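The lab used the interactive Create Line of Sight tool, but a rough geoprocessing equivalent exists in 3D Analyst.  This sketch, with hypothetical names, pairs Construct Sight Lines with Line Of Sight:

```python
import arcpy

arcpy.CheckOutExtension("3D")
arcpy.env.workspace = r"C:\gis\boston.gdb"  # hypothetical workspace

# Build a sight line from each camera point to the finish line target.
arcpy.ConstructSightLines_3d("camera_points", "finish_line", "sight_lines")

# Split each line into visible and obstructed segments over the surface.
arcpy.LineOfSight_3d("lidar_dem", "sight_lines", "los_results", "obstructions")
```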

It would have been nice to see how all this showed up in a 3D view, but I was never able to get the line of sight lines copied over to ArcScene to see it.

Saturday, June 25, 2016

Homeland Security - Prepare Minimum Essential Data Sets

Applications in GIS - Mod 6

The Department of Homeland Security, recognizing the importance of geospatial information and technology in securing our nation, has developed the DHS Geospatial Data Model (DHS GDM) in support of urgent DHS mission requirements. This standards-based logical data model is to be used to collect, discover, store and share homeland security geospatial data.

During a crisis, emergency operations personnel depend on a comprehensive geospatial database that is prepared, ready, and available for immediate use.  To this end, a series of Minimum Essential Data Sets (MEDS) for urban areas and large-area spatial extents was established as vital to successful homeland security operations.  The MEDS layer configuration is a standardized file and directory structure that enables interoperable and efficient data analysis in times of crisis.  There are eight MEDS.

Boundaries are used to define the geographic area of interest.  Urban areas are classified as Tier 1 and Tier 2; Tier 1 urban areas are the largest, most populated metropolitan centers in the country.  These boundaries are set by mapping the extent of the combined entities in the metropolitan centers plus a 10-mile buffer beyond.

Elevation is represented by digital elevation models (DEMs) or digital terrain models (DTMs): representations of continuous elevation values over a topographic surface, typically used to present terrain relief.  They are used to make 3D graphics that display terrain slope, aspect, and profiles between selected points.

Geographic Names data from the Geographic Names Information System (GNIS), developed by the US Geological Survey (USGS), contains information about physical and cultural geographic features in the US and associated areas.  This database contains the federally recognized names of features and defines each location by state, county, USGS topographic map, and geographic coordinates.

Hydrology includes information about surface water features such as lakes, ponds, streams, rivers, springs, and wells, as well as other hydrologic features of great significance to homeland security, such as spillways and dam weirs.  These critical infrastructures, which control the flow and direction of flood waters, require close surveillance and protection from possible attack.

Land Cover data sets are important to homeland security planning and operations.  They provide detailed information about land uses and ground conditions which assist in preventing and mitigating the impacts of a catastrophic event.

Orthoimagery is aerial photography that has had distortion and relief displacement removed so ground features are displayed in their true position.  This type of imagery allows direct measurements to be made while providing a realistic visualization of the landscape.  This ability for first responders to view geographic features that may not be mapped can reduce response times considerably and save lives.

Transportation includes roads, railroads, airports, ferry crossings, and more.  Each plays a critical role in our lives, and as such they are attractive targets to terrorists.  Transportation systems are used not only to move people from place to place and consumer goods to distribution centers, but also to transport highly toxic and volatile substances.  For this reason they pose a significant public safety and security threat that must constantly be guarded against.

Structures consist of significant buildings and facilities that are focal meeting places for large groups of people or that house critical infrastructure heavily depended on for everyday operations.  This could include government offices, police stations, power lines and substations, sports arenas and entertainment centers, houses of worship, schools, hospitals, and more.

This week's lesson was to put together a MEDS package.  While we didn't have time to download all the data ourselves, we did have to organize the data by creating group layers and manipulate some of the data so it could be analyzed.

We started by creating group layers for each of the MEDS except Structures and adding to them the datasets that had been downloaded for us.  Next we manipulated the transportation data by joining a table of Census Feature Class Codes (CFCC) to our roads layer.  We then used this code to create separate local, secondary, and primary road layers we could symbolize individually.  The Land Cover needed to be clipped to the boundary area and symbolized according to National Land Cover Database standards.  The Geographic Names file had all its data jumbled into one column, so we had to fix that before we could even add it to the map and define the projection.  Once that was done we were able to do a Select By Location to create a dataset of just those points that fell within the boundary of our study area.  None of the other datasets required any manipulation beyond putting them into a group layer.  With all that done we were able to save all the group layers to layer files that could be shared.
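For reference, here is a minimal sketch of the roads and Geographic Names steps in arcpy; the dataset and field names are hypothetical, and the CFCC "A1" prefix for primary roads follows the Census coding scheme:

```python
import arcpy

arcpy.env.workspace = r"C:\gis\meds.gdb"  # hypothetical workspace

# Attach the CFCC descriptions to the roads, then pull out one road class.
arcpy.JoinField_management("roads", "CFCC", "cfcc_codes", "CFCC", ["DESCRIPT"])
arcpy.MakeFeatureLayer_management("roads", "roads_lyr")
arcpy.SelectLayerByAttribute_management("roads_lyr", "NEW_SELECTION",
                                        "CFCC LIKE 'A1%'")  # primary roads
arcpy.CopyFeatures_management("roads_lyr", "roads_primary")

# Keep only the GNIS points inside the study-area boundary.
arcpy.MakeFeatureLayer_management("gnis_points", "gnis_lyr")
arcpy.SelectLayerByLocation_management("gnis_lyr", "WITHIN", "boundary")
arcpy.CopyFeatures_management("gnis_lyr", "gnis_study_area")
```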

Sunday, June 19, 2016

Homeland Security - DC Crime Mapping

Applications in GIS - Mod 5

This week's lesson was in four parts that resulted in two maps.  We started with organizing and documenting our work by setting up our project folders, defining our Map Document Properties, including assigning our default geodatabase, and setting our environments.  

Part 2 involved prepping our first map for analysis by adding the datasets we would need, creating an Address Locator, and geocoding addresses for the police stations in our area of interest.  During this process we had only one address that was unmatched, so it shouldn't have taken very long, but I realized after completing the next step, symbolizing the main highways, that locating that unmatched address would have gone much more quickly if I had done that first.  After that I took the time to read through the rest of the lab instructions again to make sure there wasn't anything else I wanted to do in reverse order.
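A bare-bones scripted equivalent of the geocoding step might look like this; the locator, table, and field map are hypothetical and depend entirely on the reference data used to build the locator:

```python
import arcpy

arcpy.env.workspace = r"C:\gis\dc.gdb"  # hypothetical workspace

# Geocode the station addresses against a locator built from local streets.
# The field map below assumes a single "Address" column in the table.
arcpy.GeocodeAddresses_geocoding("police_stations",
                                 r"C:\gis\DC_AddressLocator",
                                 "Street Address VISIBLE NONE",
                                 "stations_geocoded")
```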

Part 3 had us analyzing crime in Washington, DC, and police stations by proximity to crime.  We started by creating a graph of all the crimes in DC and the number of occurrences reported for each one.  Next we used the Multiple Ring Buffer tool to create three buffers of 0.5, 1, and 2 miles around each police station.  This, with the point features of our Crimes layer, allowed us to see at a quick glance how well the police stations were located in comparison to where crimes were reported.  But we needed something better than a quick glance to run an analysis, so we did a spatial join that joined the crimes with the buffer zone in which they occurred.  We now had an attribute table associated with our buffers that allowed us to run queries.  With this we were able to determine the percentage of crimes reported in each buffer zone, and that most occurred within one mile of a police station.  The next analysis was to determine how many crimes each police station was nearest to, which we represented using graduated symbols for the police stations.
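A sketch of that buffer-and-join sequence in arcpy, with hypothetical names:

```python
import arcpy

arcpy.env.workspace = r"C:\gis\dc.gdb"  # hypothetical workspace

# Three rings around every station: 0.5, 1, and 2 miles.
arcpy.MultipleRingBuffer_analysis("stations_geocoded", "station_rings",
                                  [0.5, 1, 2], "Miles", "distance", "ALL")

# Tag each crime point with the ring it falls in; the output table can
# then be queried for the share of crimes per zone.
arcpy.SpatialJoin_analysis("crimes", "station_rings", "crimes_by_ring",
                           "JOIN_ONE_TO_ONE", "KEEP_ALL", "", "WITHIN")
```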

The last step was to propose a location for a new police substation based on the results of our analysis.  Our analysis showed 127 reported crimes occurred outside the 2-mile radius of any police station, mostly in three different areas.  One of those areas included a cluster of 73 crimes, so that seemed to me like the best place to locate the new substation.

Part 4 was creating density maps for three types of crime: burglary, homicide, and sex abuse.  We did this using the Spatial Analyst tool Kernel Density.  I chose a search radius of 1,500 map units for all three crimes and classified the results with natural breaks and 5 classes, with the most crimes per square mile occurring in the darkest areas.  This layer sat above a census block layer symbolized by population per square mile to show the density of the population, and beneath that was a layer of roads.  Setting up the transparency of the layers to show all the information was a bit difficult.  Getting a clear picture of how the density of crimes compared to the density of population was easy enough with the Swipe tool on the Effects toolbar, but it was more difficult to make this information clear in a printout or image file that couldn't be manipulated.  Still, with a side-by-side comparison of the three crimes it is easy to see the population density showing through under the frames for homicides and sex abuse.  This showed that most burglaries tend to occur in the more densely populated areas, whereas homicides tend to occur at the outer edges of the most densely populated areas, and sex abuse falls between the two, in the mid-range of population density.
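Here is a hedged sketch of one of the density surfaces; the OFFENSE field, layer names, and search radius are assumptions:

```python
import arcpy
from arcpy.sa import KernelDensity

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\gis\dc.gdb"  # hypothetical workspace

# Pull out just the burglary points.
arcpy.MakeFeatureLayer_management("crimes", "burg_lyr")
arcpy.SelectLayerByAttribute_management("burg_lyr", "NEW_SELECTION",
                                        "OFFENSE = 'BURGLARY'")

# NONE = each point counts once; 1500 is the search radius in map units.
density = KernelDensity("burg_lyr", "NONE", search_radius=1500)
density.save("burglary_density")
```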

Sunday, June 12, 2016

Natural Hazards - Hurricanes

Applications in GIS - Mod 4

In October 2012, Hurricane Sandy hit the northeast coast of the United States with devastating consequences.  This week's lab was to create two different maps of this event: the first showing Hurricane Sandy's track from its inception to its end stage, and the second a damage assessment for just one side of one street impacted by the storm.

This map shows the path Hurricane Sandy took and the ten states and the District of Columbia that were affected.  We started by creating a polyline feature class from the points of the x,y values we added to the map from a table.  We next created a new symbol to represent the storm and used the Unique Values option in the Symbology category to display the different levels of the storm in different colors.  Using the Expression window on the Labels tab, we were able to display both the wind speed in miles per hour and the barometric pressure at each of our points, which provided a little more information than just the category of the storm at each location.  The last step was to add graticules to the map.  This was my first time working with graticules, and it took a little getting used to.  I didn't want the lines to overwhelm my drawing, but there was a fine line between not overwhelming and not showing up; I had to adjust the color of the lines a few times to get that right.  With that done it was just a matter of adding the labels for the states and countries, then the map elements, and it was done.
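A minimal sketch of building the track from the table might look like this; the table and column names are hypothetical:

```python
import arcpy

arcpy.env.workspace = r"C:\gis\sandy.gdb"  # hypothetical workspace
sr = arcpy.SpatialReference(4326)  # WGS84 for lat/lon coordinates

# Plot the track points from the table, then persist and connect them.
arcpy.MakeXYEventLayer_management("sandy_track_table", "LON", "LAT",
                                  "track_events", sr)
arcpy.CopyFeatures_management("track_events", "sandy_points")
arcpy.PointsToLine_management("sandy_points", "sandy_track")
```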

Next on the list was creating a map showing the before and after images of a block in Toms River, New Jersey, and assessing the damage done to the homes on that block.  We started by creating a new geodatabase, a feature dataset, and a couple of raster datasets to store the data we'd use or create for this portion.  We also created an attribute domain, which allowed us to specify what kind of information would be acceptable in the attribute table of the feature class we were creating to identify the damage done to the homes in our study area.
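As a sketch of the domain setup, here is what an arcpy equivalent could look like; the domain name, codes, and field are hypothetical stand-ins for the lab's actual damage categories:

```python
import arcpy

gdb = r"C:\gis\sandy.gdb"  # hypothetical geodatabase

# A coded-value domain restricts the field to an approved set of entries.
arcpy.CreateDomain_management(gdb, "DamageLevel",
                              "Structure damage categories", "TEXT", "CODED")
for code, desc in [("DES", "Destroyed"), ("MAJ", "Major Damage"),
                   ("MIN", "Minor Damage"), ("AFF", "Affected"),
                   ("NON", "No Damage")]:
    arcpy.AddCodedValueToDomain_management(gdb, "DamageLevel", code, desc)

# Bind the domain to the field recording each structure's damage call.
arcpy.AssignDomainToField_management(gdb + r"\structures", "Damage",
                                     "DamageLevel")
```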

Up to this point things were pretty simple.  The difficult part came in trying to determine how much damage had been done to each property and what caused that damage.  The "before Sandy" image was pretty bright and clear, with the houses all very well defined.  The "after Sandy" image, however, was darker and its details not nearly as easy to distinguish.  Plus, it was left up to us to use our own best judgment, with no criteria for what constitutes major damage, minor damage, or what has just been affected.  Affected?  What does that mean?  That there was just some level of damage?  Or that nothing happened to this house, but the property value is now trashed because everything around it is destroyed?  So it was a guessing game I wasn't really comfortable playing about something so serious.

Looking at the houses from overhead, it was difficult to tell what, if any, damage was done to them, other than the ones that were obviously completely destroyed or moved from where they had originally been.  But one thing that was pretty clear was that the inundation was more likely to be the cause of damage than the wind.  Some houses appeared to have damage to the roof, but just about all of them seemed to have mud and debris pressed up against them, so I had to assume most had been damaged to some extent by that.  It was rather astonishing to me that the house closest to the coast had no visible damage in the image.  I'm sure it had to have sustained some damage from the inundation, since it was hit first, but from overhead it would almost appear to have survived the storm unscathed.

Saturday, June 4, 2016

Tsunami Evacuation

Applications in GIS - Mod 3

In March 2011, the Great East Japan Earthquake occurred off the Pacific coast of northeastern Japan, causing devastating damage and extremely destructive tsunami waves, which caused even more damage.  This damage included the disabling of the reactor cooling systems at the Fukushima Daiichi Nuclear Power Plant, one of the largest nuclear power stations in the world, which led to nuclear radiation leaks.  This week's assignment was to map the radiation evacuation zones surrounding the Fukushima Daiichi Nuclear Power Plant, analyze the tsunami runup on the Fukushima coast for three zones, and determine the at-risk population locations within each of those zones.

We started the assignment by creating a geodatabase and datasets to store the data we'd be using and creating for the assignment.  Next we isolated the Fukushima II Power Plant for analysis, determined the radiation evacuation zones using the Multiple Ring Buffer and Clip tools, then adjusted the symbology to create the different zones.  With the evacuation zones in place, we ran queries to isolate the impacted cities and determine the population affected.

Our last portion was to determine the evacuation zones for the tsunami runup, estimate the at-risk cities within each, and identify the roads and nuclear power plants that would be inundated.  Again we created a buffer, this time 10,000 meters from the coastline, to work from. The rest was accomplished through a model automating the Con, Raster to Polygon, Append, and Intersect tools to create the tsunami runup evacuation zones and clip out the roads, power plants, and cities that fell within those zones.
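A rough scripted version of one pass through that model chain, with a hypothetical 10-meter runup threshold and made-up names, could look like this:

```python
import arcpy
from arcpy.sa import Con

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\gis\fukushima.gdb"  # hypothetical workspace

# Cells at or below the runup height become zone cells (value 1).
zone = Con("elev_coast", 1, where_clause="VALUE <= 10")
zone.save("runup_10m")

# Vectorize the zone, merge it into the running evacuation-zone layer,
# then pull out the roads that fall inside it.
arcpy.RasterToPolygon_conversion("runup_10m", "runup_10m_poly", "SIMPLIFY")
arcpy.Append_management(["runup_10m_poly"], "evac_zones", "NO_TEST")
arcpy.Intersect_analysis(["roads", "runup_10m_poly"], "roads_at_risk")
```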

Thursday, May 26, 2016

Natural Hazards - Lahars

Applications in GIS - Mod 2


This week's assignment was on the natural hazard called lahars.  A lahar is a rapidly flowing mixture of rock debris and water that originates on the slopes of a volcano.  Lahars pose a great hazard because more people live downstream in lahar-prone river valleys than live on a volcano's flanks; they can flow at anywhere from a few meters per second to several tens of meters per second, and they vary in size from a few meters to hundreds of meters wide and from several centimeters to tens of meters deep.  Lahars are also referred to as volcanic mudflows or debris flows.

Our assignment was to identify potential inundation zones for the Mt. Hood, Oregon area.  Mt. Hood, the highest peak in Oregon, is a volcano.  We started by creating a shapefile for the peak of Mt. Hood using the Add Point tool, then converted that point to a feature. We then added two digital elevation rasters to run our analysis on.

Before we could do much with those images we needed to combine them into one using the Data Management tool Mosaic To New Raster.  By combining the two rasters into one, we were able to run the analysis on the entire area just once, without worrying about differences in symbology between the two images.

With our rasters combined we were able to run a series of Spatial Analyst Hydrology tools to determine the likely flow of lahars from Mt. Hood.  We started with the Fill tool to be sure our stream would flow rather than pool, then the Flow Direction tool to determine the direction of flow based on the lowest relative elevation of the surrounding raster cells.  With the flow direction we were then able to run the Flow Accumulation tool to determine just how much flow we had, creating a stream network, or stream raster.

Because our raster was a floating-point raster, representing a continuous surface, we needed to convert it to integer before we could run further analysis on it.  We did this using the Math tool Int.  We were then able to access the attribute table to determine the value of 1% of the total number of pixels in the raster. With this information we could run the Conditional tool Con, which applied a threshold to the flow accumulation to determine what flow actually qualified as a stream.  From this we created a polyline feature of our final stream using the Stream To Feature tool.  Once we knew where the stream was, we were able to determine what population was at risk, and in which cities and schools, by running queries on what fell within a half-mile buffer of the stream.
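Pulling the whole chain together, here is a condensed arcpy sketch; it assumes the mosaicked DEM from the previous step and a threshold value that would really be read from the attribute table:

```python
import arcpy
from arcpy.sa import (Fill, FlowDirection, FlowAccumulation, Int, Con,
                      StreamToFeature)

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\gis\mthood.gdb"  # hypothetical workspace

filled = Fill("mthood_dem")                      # remove sinks so flow continues
direction = FlowDirection(filled)                # downhill direction per cell
accumulation = Int(FlowAccumulation(direction))  # integer counts of upstream cells

# Threshold assumed here; in the lab it came from 1% of the total pixel
# count, read off the attribute table.
streams = Con(accumulation, 1, where_clause="VALUE >= 24000")

# Vectorize the stream cells into the final polyline feature.
StreamToFeature(streams, direction, "lahar_streams", "SIMPLIFY")
```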

This was a very interesting assignment, and I really enjoyed it until I got to the labeling portion.  I think my biggest lesson learned is that I have to spend a bit more time delving into the mechanics of labeling and get a better handle on it.  I lost a lot of time trying to place my text appropriately because I didn't know how to stop it from shifting every time I zoomed in or out.  Fortunately I got some leads on how to deal with that through the discussion boards, so I now have the starting point I had been unable to find on my own.  Had it not been for the labeling issue I would have finished this assignment in a quarter of the time.  Hopefully this will be the last assignment where I have to deal with that frustration.