Posts Tagged ‘Data Processing’

Joining CSV Files in QGIS

Monday, April 4th, 2011

DBF files are the other thorny issue that comes up when I’m teaching the Intro to GIS workshops with QGIS – specifically, how do you create and edit them? Doing that has been pretty inconvenient in the Windows world since they were deprecated in Excel 2007. You can download plugins or basic stand-alone software to do the job. Since I’m a Linux user I have the OpenOffice suite by default, and I can easily work with DBFs in Calc. That’s an option for Windows and Mac users too, but it’s kind of a drag to download an entire office suite just for working with DBFs (assuming most folks use MS Office).

Another possibility is to dump DBF altogether; even though the join table option in the fTools menu in QGIS only presents the option of joining shapefiles or DBFs, it turns out that if you choose the DBF radio button you can actually point to DBFs OR CSV files when you browse for your data table. The join proceeds the same way, and you get a new shapefile with the data from the CSV joined to it. CSVs can be easily created from any spreadsheet, text editor, or database program in any operating system, so you can prep your attribute data in your program of choice before exporting to CSV and importing to GIS.

The problem with this approach is that all of the fields from the CSV file are automatically saved as text or strings when they’re appended to the shapefile. This means that any numeric data you have can’t be treated numerically; you can’t perform calculations or classify data as value ranges for mapping. You can go into the attribute table, enter the edit mode, and use the field calculator to create new fields where you convert the values to integers, but this adds a bunch of duplicate fields and is rather messy. You can’t delete columns from shapefiles within QGIS; you have to edit the attributes outside the software to remove extra columns.

Here’s an easy workaround: you can create a CSVT file to go along with your CSV file. Open a text editor like Notepad or gedit and create a one-line file where you specify the data type for each of the fields in your CSV file. Save it with the SAME base name as your CSV file, but with a .csvt extension. Now when you go to join your CSV to your shapefile in QGIS, it will read the CSVT and save all of your fields properly in the new shapefile.

So for a CSV file like this with six fields:

01, AL, ENU0100010551, 27013, ENU0100010510, 1570188
02, AK, ENU0200010551, 6988, ENU0200010510, 237708
04, AZ, ENU0400010551, 41833, ENU0400010510, 2174919 …

You’d create a CSVT file like this:

"String","String","String","Integer","String","Integer"

So the 4th and 6th fields that have numeric values are saved as integers. There are a few other field types you can use, like Real for decimal numbers and some Date / Time options, and you can specify length and precision – see this post for details.
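If you’re generating the CSV programmatically anyway, the matching CSVT can be written at the same time. Here’s a minimal Python sketch of the idea – the filename and field list are hypothetical:

```python
# Write a CSVT alongside a CSV so QGIS reads the field types correctly.
# The filename and the list of types here are hypothetical examples.
csv_path = "employment.csv"
field_types = ["String", "String", "String", "Integer", "String", "Integer"]

# The CSVT is a one-line file with the same base name as the CSV,
# but a .csvt extension, listing one quoted type per field.
csvt_path = csv_path.rsplit(".", 1)[0] + ".csvt"
with open(csvt_path, "w") as f:
    f.write(",".join('"%s"' % t for t in field_types))

print(open(csvt_path).read())  # "String","String","String","Integer","String","Integer"
```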

Shapefiles, CSVs, and CSVTs are fine for small to medium projects and for the introductory workshops I’m teaching; geodatabases are another option and are certainly better for medium to large projects, but introducing them in my intro workshop is a little too much.

(NOTE – in QGIS 1.7 the join tool has been dropped from the fTools menu. To join a CSV or DBF in 1.7+, you have to add your data table as a vector layer to your map, and then use the join tab within the properties menu of the feature you want to join it to. See this post for details.)

Calculating Standard Deviation for Summarized Data

Tuesday, March 29th, 2011

This isn’t a geospatial issue per se, but I thought it would be useful to share. I have a spreadsheet where I’m tracking course evaluation responses for the GIS workshops I’m teaching. I have to report the total number of responses, the mean, and the standard deviation for each question. The worksheet I designed tracks aggregate responses: the total number of people who responded to each question in each category, on a scale of 5 (strongly agree) to 1 (strongly disagree). For example:

The problem I had was that Excel’s standard deviation formulas don’t work for summaries – you need to give the formula individual responses or raw scores as arguments. In other words:

So I was fixated on trying to find a formula, through the help files and by searching the forums, where I could somehow calculate standard deviation using summaries or aggregates. It finally dawned on me (duh) that I could plug in the standard deviation formula myself and modify it.

To calculate the standard deviation for an entire population you compute the difference of each data point from the mean and square each result. Then you calculate the average of all these values and take the square root.

So for each question I subtract the mean score for that question from the score category for that question, square it, and then multiply the result by the number of people who answered in that category. So if 10 people strongly agreed with the question and strongly agreed is associated with a score of 5, I subtract the average score (4.71) from 5, square the result, and multiply it by 10 (since ten people responded that they strongly agreed).

((score value – mean score)^2)*respondents

I perform the same operation for each category. So if 4 people said they agreed with a question and agreed is a value of 4, I subtract 4.71 from 4, square it, and multiply by 4. After I do this for each category, I sum the values, divide by the total number of respondents, and take the square root of the whole thing.

SQRT(((((score5 – mean score)^2)*respondents5)+(((score4 – mean score)^2)*respondents4))/total respondents)

SQRT(((((5-4.71)^2)*10)+(((4-4.71)^2)*4))/14)

For my spreadsheet the formula is repeated for each of the 5 possible scores, references are used to pull in the mean and respondent values from other cells, and I round the entire result to 1 decimal place. The number of parens makes it a little confusing; I’ve inserted a color-coded image below so it’s a little clearer.


Given all that can go wrong with one misplaced paren, I tested this by entering some raw scores by hand and running the STDEVPA formula to verify that I get the same result.
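The same calculation is easy to sanity-check outside the spreadsheet. Here’s a small Python sketch using the aggregate counts from the worked example above (10 people scored 5, 4 people scored 4):

```python
# Population standard deviation computed from summarized (aggregate)
# responses.  Counts match the example above: 10 fives and 4 fours.
scores = {5: 10, 4: 4}  # score value -> number of respondents

total = sum(scores.values())
mean = sum(score * n for score, n in scores.items()) / total

# Weight each squared deviation from the mean by its respondent count,
# then divide by the total number of respondents before the square root.
variance = sum(n * (score - mean) ** 2 for score, n in scores.items()) / total
std_dev = variance ** 0.5

print(round(mean, 2))     # 4.71
print(round(std_dev, 2))  # 0.45
```

This matches what STDEVPA returns when you feed it the equivalent list of 14 raw scores.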

Learning Python at PyCamp

Thursday, June 10th, 2010

I got back from leave a couple of weeks ago, and spent part of it at a Python boot camp. I’ve gotten tired of hacking away at data in spreadsheets and have read in several places that Python is a good language for beginning programmers to learn – it’s also open source, flexible, and used by many in the GIS community for processing data and building plugins and software (the instructor for the camp, Chris Calloway, pointed me to this presentation on Python scripting techniques for ArcGIS).

The workshop was a three-day event hosted at Penn State by the Triangle Zope and Python Users Group (TriZPUG). It was geared towards beginners and non-programmers (although many of my fellow classmates were IT and systems people) and provided a pretty thorough review of all of the elements of the language – now it’s up to me to tie it all together! The price was extremely reasonable (only $300 for a 3 day class!) and I’d certainly recommend it if there’s a camp in your area, although I would also recommend reading a book or taking a tutorial to familiarize yourself with the basics BEFORE attending the class; I did, and as a result I think I got more out of it than I would have going in cold.

The next PyCamp is being held in LA in a few days, and the following one will be in Toronto from Aug 30th to Sept 3rd (although this isn’t posted on the website yet); the normal workshop is a five day affair, the one I attended was a mini 3 day version which suited my needs pretty well.

There are tons of Python tutorials on the web and Python’s site is pretty definitive. If you’re looking for a book, I’d recommend Practical Programming: An Introduction to Computer Science Using Python. Unlike the “Learn Language X” books, this one introduces you to general theory and practice in programming, and the authors illustrate the applications with practical examples using Python – it’s been immensely helpful to me. Now that I’m around the initial learning curve, I’ve been relying more on Beginning Python: From Novice to Professional, which is better as a reference book and good for illustrating many of the uses for individual objects, methods, etc (which I had a hard time grasping before I covered the basics of programming).

Excel COUNTIF Function to Clean ACS Data

Wednesday, March 19th, 2008

I’ve been preparing a GIS workshop for the New York Census Research Data Center’s 2nd Annual Workshop series, and have dug up some useful tips as I’ve assembled my materials. Here’s one of them:

I have a data table from the Census Bureau’s 2006 American Community Survey (ACS) in Excel which contains some data for Metropolitan and Micropolitan Areas. Now, I have a shapefile of Metropolitan Areas that I would like to join this data table to, but I would like to get rid of the records for the Micropolitan Areas in the data table. Unfortunately, the data table does not have a field that indicates whether an area is a Metro or Micro area. Instead, this information is embedded in the name field, like “Akron, OH Metro Area”, which means there is no way to sort the table to weed out the Micro areas.

COUNTIF function to the rescue! I inserted a new column and typed in the formula:

=COUNTIF(D3, "*Metro*")

If the formula sees the word Metro anywhere in the GEO_NAME, it counts it as a one in the new column; otherwise it counts it as zero (by default, the zeros will be the Micro areas). I copied and pasted the formula all the way down, then pasted the formula column over top of itself using Paste Special (to replace the formulas with the actual values), and voila! I sorted by this column and deleted all the records with a zero in the field (the Micro areas).
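The same screen is easy to express outside of Excel. Here’s a quick Python sketch of the idea, using hypothetical sample names in the style of the ACS table:

```python
# Flag rows whose name field contains "Metro", mimicking
# =COUNTIF(D3, "*Metro*").  The names below are hypothetical samples.
names = [
    "Akron, OH Metro Area",
    "Ada, OK Micro Area",
    "Albany, GA Metro Area",
]

# 1 = Metro area, 0 = Micro area, just like the COUNTIF column.
flags = [1 if "Metro" in name else 0 for name in names]
metro_only = [name for name, flag in zip(names, flags) if flag == 1]

print(flags)  # [1, 0, 1]
print(metro_only)
```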


I’ve done something like this before in Microsoft Access using LIKE, but Excel doesn’t include this function. I knew about COUNTIF but didn’t connect the dots. I discovered I could apply it after stumbling across this useful post at Daily Dose of Excel.

Lastly, before you can bring this table into GIS, you have to delete that second header row (you can only have one header row – the rest of the rows are assumed to be data). While the codes in the first row are cryptic, they are concise. The headings in the second row are too long and contain spaces, which will cause problems when you import the table into GIS.
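Dropping that second header row can also be scripted. A minimal sketch with Python’s csv module – the column codes and sample row are hypothetical stand-ins for the ACS layout:

```python
import csv, io

# A miniature version of the ACS export: a coded header row, a verbose
# header row, then the data.  The values are hypothetical placeholders.
raw = '''GEO_ID,GEO_NAME,B23001_001E
Id,Geography,Estimate; Total population
310M200US10420,"Akron, OH Metro Area",12345
'''

rows = list(csv.reader(io.StringIO(raw)))
cleaned = [rows[0]] + rows[2:]  # keep the coded header, skip the verbose one

out = io.StringIO()
csv.writer(out).writerows(cleaned)
print(out.getvalue())
```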

NOTE – If you’re using OpenOffice’s Calc instead of Excel, and you have enabled regular expressions under the Tools – Options – Calc – Calculate menu, the same function would look like this:

=COUNTIF(D3, ".*Metro.*")
Copyright © 2017 Gothos. All Rights Reserved.