Posts Tagged ‘GIS’

Average Distance to Public Libraries in the US

Monday, February 22nd, 2016

A few months ago I had a new article published in LISR, but given the absurd restrictions of academic journal publishing I'm not allowed to publicly post the article, and have to wait 12 months before sharing my post-print copy. It is available via your local library if they have a subscription to the ScienceDirect database (you can also email me to request a copy). I am sharing some of the unpublished state-level data that was generated for the project here.

Citation and Abstract

Regional variations in average distance to public libraries in the United States
F. Donnelly
Library & Information Science Research
Volume 37, Issue 4, October 2015, Pages 280–289
http://dx.doi.org/10.1016/j.lisr.2015.11.008

Abstract

“There are substantive regional variations in public library accessibility in the United States, which is a concern considering the civic and educational roles that libraries play in communities. Average population-weighted distances and the total population living within one mile segments of the nearest public library were calculated at a regional level for metropolitan and non-metropolitan areas, and at a state level. The findings demonstrate significant regional variations in accessibility that have been persistent over time and cannot be explained by simple population distribution measures alone. Distances to the nearest public library are higher in the South compared to other regions, with statistically significant clusters of states with lower accessibility than average. The national average population-weighted distance to the nearest public library is 2.1 miles. While this supports the use of a two-mile buffer employed in many LIS studies to measure library service areas, the degree of variation that exists between regions and states suggests that local measures should be applied to local areas.”

Purpose

I’m not going to repeat all the findings, but will provide some context.

As a follow-up to my earlier work, I was interested in trying an alternate approach for measuring public library spatial equity. I previously used the standard container approach – draw a buffer at some fixed distance around a library and count whether people fall in or out – and as an approximation for individuals I used population centroids for census tracts. In my second approach, I used straight-line distance measurements from census block groups (smaller than tracts) to the nearest public library so I could compare average distances for regions and states; I also summed the population within successive one-mile rings of the nearest library and calculated the percentage of people living in each ring. I weighted the distances by population to account for the fact that census areas vary in population size (tracts and block groups are designed to fall within an ideal population range – for block groups it's between 600 and 3,000 people).
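For what it's worth, the population-weighted average is simple to compute once each block group has a distance value. A minimal sketch in SQL, assuming a hypothetical table block_groups with a distance column (dist_miles) and a population column (pop) – the names are mine, not from the study:

    -- population-weighted mean distance to the nearest library
    SELECT SUM(dist_miles * pop) / SUM(pop) AS wtd_avg_dist
    FROM block_groups;

Each block group's distance counts in proportion to the number of people living there, so a populous block group close to a library pulls the average down more than a sparse one far away.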

Despite the difference in approach, the outcome was similar. Using the earlier approach (census tract centroids that fell within a library buffer that varied from 1 to 3 miles based on urban or rural setting), two-thirds of Americans fell within a “library service area”, meaning they lived within a reasonable distance of a library based on standard practices in LIS research. Using the latest approach (block group centroids and the distance to the nearest library), two-thirds of Americans lived within two miles of a public library – the average population-weighted distance was 2.1 miles. Both studies illustrate that there is a great deal of variation by geographic region – people in the South consistently lived further away from public libraries compared to the national average, while people in the Northeast lived closer. A spatial autocorrelation analysis (LISA) revealed a cluster of states in the South with high distances and a cluster in the Northeast with low distances.

The idea in doing this research was not to model actual travel behavior to measure accessibility. People in rural areas may be accustomed to traveling greater distances, public transportation can be a factor, people may not visit the library closest to their home for several reasons, measuring distance along a road network is more precise than Euclidean distance, and so on. The point is that libraries are a public good that provide tangible benefits to communities. People who live in close proximity to a public library are more likely to reap the benefits it provides relative to those living further away, and communities that have libraries will benefit more than communities that don't. The distance measurements serve as a basic metric for evaluating spatial equity. So, if someone lives more than six miles away from a library, that does not mean they don't have access; it does mean they are less likely to utilize it or realize its benefits compared to someone who lives a mile or two away.

Data

I used the 2010 Census at the block group level, and obtained the locations of public libraries from the 2010 IMLS data. I improved the latter by geocoding libraries that did not have address-level coordinates, so that I had street matches for 95% of the 16,720 libraries in the dataset. The tables that I'm providing here were not published in the original article, but were tacked on as supplementary material in appendices. I wanted to share them so others could incorporate them into local studies. In most LIS research the prevailing approach for measuring library service areas is to use a buffer of 1 to 2 miles for all locations. Given the variation between states, if you want to use a state-level average for library planning in your own state, consider using these figures.

To provide some context, the image below shows public libraries (red stars) in relation to census block group centroids (white circles) for northern Delaware (primarily suburban) and surrounding areas (a mix of suburban and rural). The line drawn between the Swedesboro and Woodstown libraries in New Jersey is five miles long. I used QGIS and Spatialite for most of the work, along with Python for processing the data and GeoDa for the spatial autocorrelation.

Map Example - Northern Delaware

The three tables I'm posting on the resources page are for states: the first counts the 2010 Census population within one- to six-mile rings of the nearest public library, the second gives the percentage of the total state population that falls within each ring, and the third is a summary table with the mean and population-weighted distance to the nearest library by state. One set of tables is formatted text (for printing or just looking up numbers) while the other set consists of CSV files that you can use in a spreadsheet. I've included a metadata record with some methodological info, but you can read the full details in the article.
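If you're curious how the ring counts work, they amount to one grouped query against the same hypothetical block_groups table sketched above – truncating each distance to a whole mile assigns it to a ring:

    -- 2010 population living within each one-mile ring of the nearest library
    SELECT CAST(dist_miles AS INTEGER) + 1 AS ring_mile, SUM(pop) AS population
    FROM block_groups
    GROUP BY ring_mile
    ORDER BY ring_mile;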

In the article itself I tabulated and examined data at a broader, regional level (Northeast, Midwest, South, and West), and also broke it down into metropolitan and non-metropolitan areas for the regions. Naturally, people in non-metropolitan areas lived further away, but the same regional patterns held: more people in the South, in both metro and non-metro areas, lived further away compared to their counterparts in other parts of the country. This weekend I stumbled across this article in the Washington Post about troubles in the Deep South, and was struck by how its maps mirrored the low library accessibility maps in my past two articles.

Some QGIS Odds and Ends

Thursday, July 3rd, 2014

My colleague Joe Paccione recently finished a QGIS tutorial on working with raster data. My introductory tutorial for the GIS Practicum gives only cursory treatment to rasters, so this project was conceived to give people additional opportunities to learn about working with them. It focuses on elevation modeling, using DEMs and DRGs to introduce tiling and warping and to create hillshades and contour lines.

topo_contour

The tutorial was written using QGIS 2.0 and tested with version 2.4, so it's readily usable with any 2.x version of QGIS. With the rapid progression of QGIS, my introductory tutorial for the workshop, written in version 1.8, is becoming woefully outdated. It's going to take me quite a while to update (among other things, the image for every darn button has changed); I plan to have a new version out sometime in the fall, though probably not at the beginning of the semester. Since I have a fair amount of work to do anyway, I'm going to rethink all of the content and exercises. Meanwhile, Lex Berman at Harvard has updated his wonderfully clear and concise tutorial to QGIS 2.x.

The workshops have been successful at turning people on my campus on to open source GIS, to the point where people are using it and coming back to teach me new things – especially when it comes to uncovering useful plugins:

  • I had a student who needed to geocode a bunch of addresses, but since many of them were international I couldn't turn to my usual geocoding service at Texas A&M. While I've used the MMQGIS plugin for quite a while (it has an abundance of useful tools), I NEVER NOTICED that it includes a geocoding option that acts as a GUI for accessing both the Google and OpenStreetMap geocoding APIs. He discovered it, and it turned out quite well for him.
  • I was helping a prof who was working with a large point file of street signs, and we discovered a handy plugin called Points2One that allowed us to take those points and turn them into lines based on an attribute the points held in common. In this case every sign on a city block shared a common id that allowed us to create lines representing each side of the street on each block.
  • After doing some intersect and difference geoprocessing with shapefiles I was ending up with some dodgy results – orphaned nodes and lines that had duplicate attributes with good polygons. If I were in a database, an easy trick for finding these duplicates would be to run a select query where you group by ID and count them; anything with a count of two or more is a duplicate (sketched below) – but this was a shapefile. Luckily there's a handy plugin called Group Stats that lets you create pivot tables, so I used that to do a summary count and identify the culprits. The plugin allowed me to select all the features that matched my criteria of having an ID count of 2 or more, so I could eyeball them in the map view and the attribute table. I calculated the geometry for all the features and sorted the selected ones, revealing that all the duplicates had infinitesimally small areas. Then it was a simple matter of select and delete.
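For reference, the database trick mentioned above is a simple grouped count; a sketch, using a hypothetical features table with an id column:

    -- any ID appearing two or more times is a duplicate
    SELECT id, COUNT(*) AS n
    FROM features
    GROUP BY id
    HAVING COUNT(*) >= 2;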
    The Geography of US Public Libraries

    Monday, March 18th, 2013

    Last month my article on the geographic distribution of US public libraries was pre-published online in JOLIS, with a print date pending. I can't share this one freely online, so if you don't have access via a library database (Sage Journals, ERIC, various LIS dbs) you can contact me if you're interested in getting a copy.

    Title: The geographic distribution of United States public libraries: An analysis of locations and service areas
    Author: Francis P. Donnelly
    Journal: Journal of Librarianship and Information Science (JOLIS)
    Year: 2014, Volume 46, Issue 2, Pages 110-129
    ISSN: 0961-0006
    DOI: 10.1177/0961000612470276
    Publisher: Sage

    Abstract

    This article explores the geography of public libraries in the United States. The distribution of libraries is examined using maps and spatial statistics to analyze spatial patterns. Methods for delineating and studying library service areas used in previous LIS research are modified and applied using geographic information systems to study variations in library accessibility by state and by socio-economic group at the national level. A history of library development is situated within the broader economic and demographic history of the US to provide insight to contemporary patterns, and Louis Round Wilson’s Geography of Reading is used as a focal point for studying historical trends. Findings show that while national library coverage is extensive, the percentage of the population that lives in a library’s geographic service area varies considerably by region and state, with Southern and Western states having lower values than Northeastern and Midwestern states.

    Keywords

    Geographic information systems, geography, public libraries, service areas, spatial equity, United States

    This OCLC flier (How Public Libraries Stack Up) piqued my interest in public libraries as community resources, public goods, and placemaking institutions. If the presence of a public library brings so much value to a community, then by extension the lack of a public library could leave a community at a disadvantage. This led to the next set of logical questions: how are libraries distributed across the country, and which people and communities are being served and which aren't?

    I took a few different approaches to answer these questions. The first approach was to use (and learn) spatial statistics so the overall distribution could be characterized, and the second was to use spatial extraction methods to select census areas and populations that were within the service areas of each library, to see differences in how populations were served and to study these differences across different states. The LIS literature is rich with research that uses GIS to study library use, so I provide a thorough summary of what’s come before. Then after I had the results I spent a good deal of time researching how the contemporary pattern came to be, and coupled the research on the history of public libraries with the broader history of urban and economic development in the United States.
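    In spatial SQL terms – not what I used for this study, which was done with desktop GIS tools – the extraction step boils down to a query like this sketch (hypothetical tables and columns, with geometries in a meter-based CRS):

        -- tract centroids within 2 miles (~3,218 m) of any library
        SELECT DISTINCT t.tractid
        FROM tracts AS t, libraries AS l
        WHERE Distance(t.geometry, l.geometry) <= 3218;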

    I had a few unstated motives – one of them was to learn spatial statistics, with the help of: OpenGeoda and its documentation, this excellent book on Spatial Data Analysis (for theory), these great examples from Spatial Justice, and invaluable advice from Deborah Balk, a Professor of Spatial Demography with the CUNY Institute for Demographic Research.

    One of my other goals was to use only open source software – QGIS, GRASS, and OpenGeoda – which was also a success. In my next study, though, I'll probably rely on QGIS and Spatialite; I found I was doing a lot of attribute data crunching with the SQLite Manager, since the attributes of GRASS vectors can be stored in SQLite, and I could probably save time (and frustration) by using Spatialite's features instead. I did get to learn a lot about GRASS, but for my purposes it was overkill and I would have been just fine with a spatial database. I was definitely able to sharpen my Python skills, as processing the American Community Survey data for every census tract in the US manually would have been crazy.

    In a project this size there are always some pieces that end up on the cutting room floor, so I thought I'd share one here – a dot map that shows where all 16,700 public libraries are. In the article I went with a county choropleth map to show the distribution, because I was doing other county-level stuff and because the dimension restrictions on the graphic made it a more viable option. The dot map reveals that libraries are typically where people are, except that the South looks emptier and the Midwest fuller than they should be, if libraries were in fact evenly distributed by population. As my research shows – they're not.

    US Public Libraries

    NYC Geodatabase in Spatialite

    Wednesday, February 6th, 2013

    I spent much of the fall semester and winter interim compiling and creating the NYC geodatabase (nyc_gdb), a desktop geodatabase resource for doing basic mapping and analysis at a neighborhood level – PUMAs, ZIP Codes / ZCTAs, and census tracts. There were several motivations for doing this. First and foremost, as someone who is constantly introducing new people to GIS it’s a pain sending people to a half dozen different websites to download shapefiles and process basic features and data before actually doing a project. By creating this resource I hoped to lower the hurdles a bit for newcomers; eventually they still need to learn about the original sources and data processing, but this gives them a chance to experiment and see the possibilities of GIS before getting into nitty gritty details.

    Second, for people who are already familiar with GIS and who have various projects to work on (like me) this saves a lot of duplicated effort, as the db provides a foundation to build on and saves the trouble of starting from scratch each time.

    Third, it gave me something new to learn and will allow me to build a second part to my open source GIS workshops. I finally sat down and hammered away with Spatialite (went through the Spatialite Cookbook from start to finish) and learned spatial SQL, so I could offer a resource that's open source and will complement my QGIS workshop. I was familiar with the Access personal geodatabases in ArcGIS, but for the most part those serve as simple containers. With the ability to run all the spatial SQL operations, Spatialite expands QGIS functionality, which was something I was really looking for.
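    To give a flavor of what spatial SQL adds: operations that are point-and-click geoprocessing steps in a desktop package become single queries. A sketch, with hypothetical table and column names:

        -- count the schools that fall inside each census tract
        SELECT t.tractid, COUNT(*) AS schools
        FROM tracts AS t, schools AS s
        WHERE Contains(t.geometry, s.geometry)
        GROUP BY t.tractid;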

    My original hope was to create a server-based PostGIS database, but at this point I’m not set up to do that on my campus. I figured Spatialite was a good alternative – the basic operations and spatial SQL commands are relatively the same, and I figured I could eventually scale up to PostGIS when the time comes.

    I also created an identical, MS Access version of the database for ArcGIS users. Once I got my features in Spatialite I exported them all out as shapefiles and imported them all via ArcCatalog – not too arduous as I don’t have a ton of features. I used the SQLite ODBC driver to import all of my data tables from SQLite into Access – that went flawlessly and was a real time saver; it just took a little bit of time to figure out how to set up (but this blog post helped).

    The databases are focused on NYC features and resources, since that's what my user base is primarily interested in. I purposefully used the Census TIGER files as the base, so that if people want to expand the features to the broader region they easily can. I spent a good deal of time creating generalized layers, so that users would have the primary water / coastline and large parks and wildlife areas as reference features for thematic maps, without every single pond and patch of grass cluttering things up. I took several features (schools, subway stations, etc.) from the City and the MTA that were stored in tables and converted them to point features so they're readily usable.
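    That table-to-points step is a nice example of Spatialite in action. A sketch, assuming a hypothetical schools table with lon and lat columns in WGS 84:

        -- register a geometry column, then build points from the coordinate fields
        SELECT AddGeometryColumn('schools', 'geometry', 4326, 'POINT', 'XY');
        UPDATE schools SET geometry = MakePoint(lon, lat, 4326);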

    Given that focus, it's primarily of interest to NYC folks, but I figured it may be useful for others who wish to experiment with Spatialite. I assumed that most people who would be interested in the database would not be familiar with this format, so I wrote a tutorial that covers the database and its features: how to add and map data in QGIS, how to work with the data and do SQL / spatial SQL in the Spatialite GUI, and how to map data in ArcGIS using the Access geodatabase. It's licensed Creative Commons Attribution Non-Commercial Share-alike, so feel free to give it a try.

    I spent a good amount of time building a process rather than just a product, so I’ll be able to update the db twice a year, as city features (schools, libraries, hospitals, transit) change and new census data (American Community Survey, ZIP Business Patterns) is released. Many of the Census features, as well as the 2010 Census data, will be static until 2020.

    New Version of Introductory GIS Tutorial Now Available

    Sunday, October 7th, 2012

    The latest version of my Introduction to GIS tutorial using QGIS is now available. I've completely revised how it's organized and presented; I wrote the first two manuals in HTML, since I wanted something that gave me flexibility with inserting many images in a large document (word processors are notoriously poor at this). Over the summer I learned how to use LaTeX, and the result for this 3rd edition is an infinitely better document, for classroom use or self-study.

    I also updated the manual for use with QGIS 1.8. I’m thinking that the addition of the Data Browser and the ability to simply select the CRS of the current layer or project when you’re doing a Save As (rather than having to select the CRS from the master list) will save a lot of valuable time in class. With every operation that we perform we’re constantly creating new files as the result of selections and geoprocessing, and I always lose a few people each time we’re forced to crawl through the file system to add new layers we’ve created. These simple changes should speed things up. I’ve updated the manual throughout to reflect these changes, and have also updated the datasets to reflect what’s currently available. I provide a summary of the most salient changes in the introduction.

    Adding a Scale Bar in the QGIS Map Composer

    Friday, August 31st, 2012

    I recently finished revisions for the next edition of the manual for my GIS workshop, and incorporated a new section on adding a scale bar in the QGIS map composer. I thought I’d share that piece here. I’m using the latest version of QGIS, 1.8 Lisboa.

    The scale bar in the map composer automatically takes the units used in the coordinate reference system (CRS) that your layers are in. In order to get a scale bar that makes sense, your layers will need to use a CRS that’s in meters or feet. If you’re using a geographic coordinate system that’s in degrees, like WGS 84 or NAD 83, the result isn’t going to make sense. It’s possible at large scales (covering small areas) to convert degrees to meters or feet but it’s a pain. Most projected coordinate systems for countries, continents, and the globe use meters. Regional systems like UTM also use meters, and in the US you can use a state plane system and choose between meters or feet. Transform your layers to something that’s appropriate for your map.

    Once you have layers that are in a CRS that uses meters or feet, you’ll need to convert the units to kilometers or miles using the scale bar’s menu. Let’s say I have a map of the 50 states that’s in the US National Atlas Equal Area Projection, which is a continental projected coordinate system of the US, number 2163 in the EPSG library (this system is also known as the Lambert Azimuthal Equal-Area projection). It’s in meters. In the QGIS Map Composer I select the Add New Scalebar button, and click on the map to add it. The resulting scale bar is rather small and the units are clumped together.

    Scalebar button on the toolbar

    With the scale bar active, in the menus on the right I click on the Item Properties. First, we have to decide how large we want the individual segments on the scale bar to be. I'm making a thematic map of the US, so for my purposes 200 miles would be OK. The conversion factor is 1,609 meters in a mile. Since we want each segment on the scale bar to represent 200 miles, we multiply 1,609 by 200 to get 321,800 meters. In the Item tab, this is the number that I type in the segment size box: 321,800, replacing the default 100,000. Then, in the map units per bar, I change this from 1 to 1,609. Now the units of the scale bar start to make sense. I increase the number of right segments from 2 to 3, so I have a bar that goes from 0 to 600 miles, in 200 mile segments. I like to decrease the height of the bar a bit, from 5mm to 3mm. In the Unit label box I type Miles. Lastly, I switch to the General options tab just below, and turn off the outline for the bar. Now I have a scale bar that's appropriate for this map!

    Most people in the rest of the world will be using kilometers. That conversion is even simpler. If we wanted the scale bar segments to represent 200 km, we would multiply 200 by 1,000 (1,000 meters in a kilometer) to get 200,000 meters. 200,000 goes in the segment size box and 1,000 goes in the map units per bar. Back in the US, if you’re working in a local state plane projection that uses feet, the conversion will be the familiar 5,280 feet to a mile. So, if we wanted a scale bar that has segments of 20 miles, multiply 5,280 by 20 to get 105,600 feet. 105,600 goes in the segment size box, and 5,280 goes in the map units per bar box.
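    In general, the recipe is the same for any pair of units:

        segment size (map units) = desired segment length × map units per display unit
        map units per bar        = map units per display unit

    The conversion factor is 1,609 for miles on a meter-based map, 1,000 for kilometers on a meter-based map, and 5,280 for miles on a foot-based map.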

    A reality check is always a good idea. Take your scale bar and compare it to some known distance between features on your map. In our first example, I would drag the scale bar over the map of the US and measure the width of Illinois at its widest point, which is just over 200 miles. In the Item Properties tab for the scale bar I turn the opacity down to zero, so the bar doesn’t hide the map features underneath. If the bar matches up I know I’m in good shape. Remember that not all map projections will preserve distance as a property, and distortion becomes an issue for small scale maps that cover large areas (i.e. continents and the globe). On small scale maps distance will be true along standard parallels, and will become less accurate the further away you go.

    GIS Workshops This Apr & May

    Sunday, March 25th, 2012

    This semester I’ll be teaching three workshops with Prof. Deborah Balk in spatial tools and analysis. Sponsored by the CUNY Institute of Demographic Research (CIDR), the workshops will be held on Baruch College’s campus in midtown NYC on Friday afternoons. The course is primarily intended for data and policy analysts who want to gain familiarity with the basics of map making and spatial analysis; registration is open to anyone. The workshops progress from basic to intermediate skills that cover making a map (Apr 27th), geospatial calculations (May 4th), and geospatial analysis (May 11th). We’ll be using QGIS and participants will work off of their own laptops; we’ll also be demonstrating some of the processes in ArcGIS and participants will receive an evaluation copy of that software. Each workshop is $300 or you can register for all three for $750.

    For full details check out this flier. You can register via the College’s CAPS website; do a search for DEM and register for each session (DEM0003, DEM0004, and DEM0005).

    Thiessen Polygons and Listing Neighboring Features

    Monday, January 2nd, 2012

    I was helping someone with a project recently that I thought would be straightforward but turned out to be rather complex. We had a list of about 10,000 addresses that had to be plotted as coordinates, and then we needed to create Thiessen (aka Voronoi) polygons for each point to create market areas. Lastly we needed to generate an adjacency table, or list of neighbors: for every polygon, list all the neighboring polygons.

    For step one I turned to the USC Geocoding service to geocode the addresses; I became a partner a ways back so I could batch geocode datasets for students and faculty on my campus. Once I had coordinates I plotted them in ArcGIS 10 (and learned that the Add XY data feature had been moved to File > Add Data > Add XY Data). Step 2 seemed easy enough; in Arc you go to ArcToolbox > Analysis Tools > Proximity > Create Thiessen Polygons. This creates a polygon for each point and assigns the attributes of each point to the polygon.

    I hit a snag with Step 3 – Arc didn’t have a tool for generating the adjacency table. After a thorough search of the ESRI and Stack Exchange forums, I stumbled on the Find Adjacent Features Script by Ken Buja which did exactly what I wanted in ArcGIS 9.2 and 9.3, but not in 10. I had used this script before on a previous project, but I’ve since upgraded and can’t go back. So I searched some more until I found the Find Adjacent & Neighboring Polygons Tool by cmaene. I was able to add this custom toolbox directly to ArcToolbox, and it did exactly what I wanted in ArcGIS 10. I get to select the unique identifying field, and for every ID I get a list of the IDs of the neighboring polygons in a text file (just like Ken’s tool). This tool also had the option of saving the list of neighbors for each feature directly in the attribute table of a shapefile (which is only OK for small files with few neighbors; fields longer than 254 characters get truncated), and it gave you the option of listing neighbors to the next degree (a list of all the neighbor’s neighbors).

    Everything seemed to run fine, so I re-ran the tool on a second set of Thiessen polygons that I had clipped with an outline of the US to create something more geographically realistic (so polygons that share a boundary only in the ocean or across the Great Lakes are not considered neighbors).

    THEN – TROUBLE. I took some samples of the output table and checked the neighbors of a few features visually in Arc. I discovered two problems. First, I was missing about a thousand records or so in the output. When I geocoded the addresses I couldn't get a street-level match for every record; the worst-case scenario was a match to the ZCTA / ZIP code centroid for the address, which was an acceptable level of accuracy for this project. The problem is that if many point features are plotted at the same coordinate (because they share the same ZIP), a polygon is created for one feature and the overlapping ones fall away (you can't have overlapping Thiessen polygons). Fortunately this wasn't an issue for the person I was helping; we just needed to join the output table back to the master one to track which records fell out, and live with the result.
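    If your geocoded results live in a database, a quick way to anticipate this is to look for records that share coordinates before building the polygons; a sketch, with a hypothetical geocoded_points table:

        -- coordinates shared by more than one record will collapse into a single polygon
        SELECT x, y, COUNT(*) AS n
        FROM geocoded_points
        GROUP BY x, y
        HAVING COUNT(*) > 1;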

    The bigger problem was that the output was wrong. I discovered that the neighbor lists for most of the features I checked, especially polygons on the outer edge of the space, were incomplete; each feature had several (and in some cases, all) neighbors missing. Instead of using a shapefile of Thiessens I tried running the tool on polygons that I generated as feature classes within an Arc geodatabase, and got the same output. For the heck of it I tried dissolving all the Thiessens into one big polygon, and when I did that I noticed that I had orphaned lines and small gaps in what should have been one big, solid rectangle. I checked the geometry of the polygons and there were tons of problems. This led me to conclude that Arc did a lousy job when constructing the topology of the polygons, and the neighbor tool was giving me bad output as a result.

    Since I’ve been working more with GRASS, I remembered that GRASS vectors have strict topology rules, where features have shared boundaries (instead of redundant overlapping ones). So I imported my points layer from a shapefile into GRASS and then used the v.voroni tool to create the polygons. The geometry looked sound, the attributes of each point were assigned to a polygon, and for overlapping points one polygon was created and attributes of the shared points were dumped. I exported the polygons out as a shapefile and brought them back into Arc, ran the Find Adjacent & Neighboring Polygons tool, spot checked the neighbors of some features, and voila! The output was good. I clipped these polygons with my US outline, ran the tool again, and everything checked out.

    Morals of this story? When geocoding addresses consider how the accuracy of the results will impact your project. If a tool or feature doesn’t exist assume that someone else has encountered the same problem and search for solutions. Never blindly accept output; take a sample and do manual checks. If one tool or piece of software doesn’t work, try exporting your data out to something else that will. Open source software and Creative Commons tools can save the day!

    Footnote – apparently it’s possible to create lists of adjacent polygons in GRASS using the sides option in v.to.db, although it isn’t clear to me how this is accomplished; the documentation talks about categories of areas on the right and left of a boundary, but not on all sides of an area. Since I already had a working solution I didn’t investigate further.
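    One more route worth noting: in a spatial database like Spatialite or PostGIS, an adjacency table is just a spatial self-join. A sketch, assuming a hypothetical market_areas table with id and geometry columns – each pair appears twice, once in each direction:

        -- pairs of polygons that share a boundary
        SELECT a.id AS poly, b.id AS neighbor
        FROM market_areas AS a, market_areas AS b
        WHERE a.id <> b.id
          AND Touches(a.geometry, b.geometry);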

    Giving GRASS GIS a Try

    Saturday, July 30th, 2011

    I’ve been taking the plunge in learning GRASS GIS this summer, as I’ve been working at home (and thus don’t have access to ArcGIS) on a larger and more long-term project (and thus don’t want to mess around with shapefiles and csv tables). I liked the idea of working with GRASS vectors, as everything is contained in one folder and all my attributes are stored rather neatly in a SQLite database.

    I started out using QGIS to create my mapset and to connect it to my SQLite db, which I had created and loaded with some census data. Then I thought, why not give the GRASS interface a try? I started using the newer wxPython GUI, and as I try different things I bounce back and forth between using the GUI for launching commands and the command line for typing them in – all the while I have Open Source GIS: A GRASS GIS Approach at my side and the online manual a click away. So far, so good.

    I loaded and cleaned a shapefile with the GRASS GUI (the GUI launches v.in.ogr, v.build, and v.clean) and its attributes were loaded into the SQLite database I had set with db.connect (you need to do this; otherwise a DBF file is created by default for storing attributes). Then I had an age-old task to perform – the US Census FIPS / ANSI codes were stored in separate fields, and in order to join them to my attribute tables I had to concatenate them. I also needed to append some zeros to census tract IDs that lacked them – FIPS codes for states are two digits long, counties are three digits, and tracts are between four and six digits; to standardize them, four-digit tracts should have two zeros appended.

    I added the new JOIN_ID column using v.db.addcol, then ran the following with db.execute:

    UPDATE tracts_us99_nad83
    SET JOIN_ID = STATE || COUNTY || TRACT

    Then:

    UPDATE tracts_us99_nad83
    SET JOIN_ID = JOIN_ID || '00'
    WHERE length(JOIN_ID)=9

    So this:

    STATE COUNTY TRACT
    01 077 0113
    01 077 0114
    01 077 011502
    01 077 011602

    Becomes this:

    JOIN_ID
    01077011300
    01077011400
    01077011502
    01077011602

    db.execute GRASS GUI

    I could have done this a few different ways from within GRASS: instead of the separate v.db.addcol command I could have written a SQL statement in db.execute to alter the table and add a column. Or, instead of db.execute I could have used the v.db.update command.
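    For example, the column could have been added with a plain statement via db.execute (SQLite is loose about column types):

        ALTER TABLE tracts_us99_nad83 ADD COLUMN JOIN_ID varchar(11);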

    My plan is to use GRASS for geoprocessing and analysis (I'll be doing some buffering, geographic selection, and basic spatial stats), and QGIS for displaying and creating final maps. I just used v.in.db to transform an attribute table with coordinates in my db into a point vector. But now I'm realizing that in order to draw buffers, I'll need a projected coordinate system that uses meters or feet, as inputting degrees for a buffer distance (for points throughout the US) isn't going to work too well. I'm also having trouble figuring out how to link my attribute data to my vectors – I can easily use v.db.join to fuse the two together, and there is supposed to be a way to link them more loosely using the CAT ID number for each vector, but I'm getting stuck. We'll see how it goes.

    Some final notes – since I'm working with large datasets (every census tract in the US) and GRASS uses a topological data model where common nodes and boundaries between polygons are shared, geoprocessing can take a while. I've gotten in the habit of testing things out on a small subset of my vectors, and once I get it right I run the process on the total set.

    Lastly, there are times where I read about commands in the manual and for the life of me I can’t find them in the GUI – for example, finding a menu to delete (i.e. permanently remove) layers. But if you type the command without any of its parameters in the command line (in this case, g.remove) it will launch the right window in the GUI.

    GRASS GIS Interface

    Updates for QGIS 1.7 Wroclaw

    Tuesday, July 5th, 2011

    The latest version of QGIS, 1.7 “Wroclaw”, was released a few weeks ago. Some of the recent updates render parts of my GIS Practicum out of date (unless you're sticking with versions 1.5 or 1.6), so I'll be making updates later this summer for the upcoming workshops this academic year. In the meantime, I wanted to summarize the most salient changes here for the participants in this past spring's workshops, and for anyone else who may be interested. Here are the big two changes that affect the tutorial / manual:

    Transforming Projections – In previous versions you would go under Vector > Data Management Tools > Export to New Projection. In 1.7 this has been dropped from the fTools vector menu. To transform the projection of a file, you select it in the Map Legend (ML), right-click, hit Save As, give it a new name, and specify the new CRS there. The QGIS developers have provided some info on how QGIS handles projections that's worth checking out. You can go into the settings and have QGIS transform projections on the fly, which is fine depending on what you're going to do. My preference is to play it safe – do the transformations and make sure all your files and the window share the same CRS. It can save you headaches later on.

    Table Joins – In previous versions you would also accomplish this under Vector > Data Management Tools > Join Attributes, where you'd join a DBF or CSV to a shapefile to create a new file with both the geometry and the data. Now that's out, and QGIS supports dynamic joins, similar to ArcGIS, where you couple the attribute table to the shapefile without permanently fusing the two. To do this you must add your DBF or CSV directly to your project; do this the same way you'd add a vector layer. Hit the Add Vector button, and change the drop-down at the bottom so you can see all files (not just shapefiles) in your directory. Add your table. Then select your layer in the ML and double-click to open the properties menu. You'll see a new tab for Joins. Hit the tab and hit the plus button to add a new join. You'll select your table, the table's join field, and the layer's join field. Hit OK to join them, and it's done. Open the attribute table for the layer and you'll see all the columns for the layer and the joined fields.
    New Join Menu QGIS 1.7

    This sounds great, but I had some trouble when I went to symbolize my data. Using the old symbology tab, I couldn’t classify any of my columns from my attribute table using Equal Intervals; it populated each class with zeros. Quantiles worked fine. If I switched to the new symbology, I still couldn’t use Equal Intervals, and Quantiles and Natural Breaks only worked partially – my dataset contained negative values which were simply dropped instead of classified. Grrrrr. I got around this by selecting my layer in the ML (after doing the join), right clicked, and saved it as a new layer. This permanently fused the shapefile with the attributes, and I had no problem classifying and mapping data in the new file. I went to the forum and asked for help to see if this is a bug – will report back with the results.

    Here are some other updates worth noting:

    • Feature Count – if you right click on a layer in the ML, you can select the Feature Count option and the number of features will be listed under your layer. If you’ve classified your features, it will do a separate count for each classification.
    Feature Count QGIS 1.7

    • Measuring Tool – it looks like it’s now able to convert units on the fly, so even if you’re using a CRS with units in degrees, it can convert it to meters and kilometers (you can go into the options menu if you want to switch to feet and miles).
    • Labels – it looks like the experimental labelling features have been made default in this version, and they do work better, especially with polygons.
    • Map Composer Undo – undo and redo buttons have been added to the map composer, which makes it much easier to work with.
    Undo Redo Button Map Composer

    • Map Composer Symbols – if you go to insert an image in the map composer, you'll have a number of default SVG symbols you can load, including north arrows.
    • Export to PDF – from the map composer, this was pretty buggy in the past but I was able to export a map today without any problems at all.
