Further sessions will be added in the months (more likely, years) ahead.
The document is provided in good faith and the contents have been tested by the author. However,
use is entirely at the user's risk. Absolutely no responsibility or liability is accepted by the author
for consequences arising from this document howsoever it is used. It is licensed under a Creative
Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License (see above).
Before starting, the following should be considered.
First, you will notice that in this document the pages and, more unusually, the lines are numbered.
The reason is educational: it makes directing a class to a specific part of a page easier and faster. For
other readers, the line numbers can be ignored.
Second, the sessions presume that, as well as R, a number of additional R packages (libraries) have
been installed and are available to use. You can install them by following the 'Before you begin'
instructions below.
Third, each session is written to be completed in a single sitting. If that is not possible, then it would
normally be possible to stop at a convenient point, save the workspace before quitting R, then
reload the saved workspace when you wish to continue. Note, however, that whereas the additional
packages (libraries) need be installed only once, they must be loaded each time you open R and
require them. Any objects that were attached before quitting R also need to be attached again to take
you back to the point at which you left off. See the sections entitled 'Saving and loading
workspaces', 'Attaching a data frame' and 'Installing and loading one or more of the packages
(libraries)' on pages 10, 31 and 37 for further information.
Next, type
unzip("Rintro.zip")
All the data you need for the sessions are now available in the working directory.
If you would like to install all the libraries (packages) you need for these practicals, type
load("begin.RData")
and then
install.libs()
You are advised to read 'Installing and loading one or more of the packages (libraries)' on p. 37
before doing so.
Please note:
this is a draft version of the document and has not as yet
been thoroughly checked for typos and other errors.
1.1 About R
R is an open source software package, licensed under the GNU General Public Licence. You can
obtain and install it for free, with versions available for PCs, Macs and Linux. To find out what is
available, go to the Comprehensive R Archive Network (CRAN) at http://cran.r-project.org/
Being free is not necessarily a good reason to use R. However, R is also well developed, well
documented, widely used and well supported by an extensive user community. It is not just software
for 'hobbyists'. It is widely used in research, both academic and commercial. It has well developed
capabilities for mapping and spatial analysis.
In his book R in a Nutshell (O'Reilly, 2010), Joseph Adler writes that R is 'very good at plotting
graphics, analyzing data, and fitting statistical models using data that fits in the computer's
memory'. Nevertheless, no software provides the perfect tool for every job and Adler adds that it is
'not good at storing data in complicated structures, efficiently querying data, or working with data
that doesn't fit in the computer's memory'.
To these caveats it should be added that R does not offer spreadsheet editing of data of the type
found, for example, in Microsoft Excel. Consequently, it is often easier to prepare and 'clean' data
prior to loading them into R. There is an add-in to R that provides some integration with Excel. Go
to http://rcom.univie.ac.at/ and look for RExcel.
A possible barrier to learning R is that it is generally command-line driven. That is, the user types a
command that the software interprets and responds to. This can be daunting for those who are used
to extensive graphical user interfaces (GUIs) with drop-down menus, tabs, pop-up menus, left or
right-clicking and other navigational tools to steer you through a process. It may mean that R takes
a while longer to learn; however, that time is well spent. Once you know the commands it is usually
much faster to type them than to work through a series of menu options. They can be easily edited
to change things such as the size or colour of symbols on a graph, and a log or script of the
commands can be saved for use on another occasion or for sharing with others.
That said, a fairly simple and platform-independent GUI called R Commander can be installed
(see http://cran.r-project.org/web/packages/Rcmdr/index.html). Field et al.'s book Discovering
Statistics Using R provides a comprehensive introduction to statistical analysis in R using both
command-lines and R Commander.
Assuming R has been installed in the normal way on your computer, clicking on the link/shortcut to
R on the desktop will open the RGui, offering some drop-down menu options, and also the R
Console, within which R commands are typed and executed. The appearance of the RGui differs a
little depending upon the operating system being used (Windows, Mac or Linux) but having used
one it should be fairly straightforward to navigate around another.
At its simplest, R can be used as a calculator. Typing 1 + 1 after the prompt > will (after pressing
the return/enter key) produce the result 2, as in the following example:
> 1 + 1
[1] 2
> 10 - 5
[1] 5
> 10 * 2
[1] 20
> 10 - 5 * 2
[1] 0
> (10 - 5) * 2
[1] 10
> sqrt(100)
[1] 10
> 10^2
[1] 100
> 100^0.5
[1] 10
> 10^3
[1] 1000
> log10(100)
[1] 2
> log10(1000)
[1] 3
> 100 / 5
[1] 20
> 100^0.5 / 5
[1] 2
If you see the + symbol instead of the usual (>) prompt it is because what has been typed is
incomplete. Often there is a missing bracket. For example,
> sqrt(
+ 100
+ )
[1] 10
> (1 + 2) * (5 - 1
+ )
[1] 12
If there is a mistake in a line of code that needs to be corrected, or if some previously typed
commands are to be repeated, then the up and down arrow keys on the keyboard can be used to
scroll between previous entries in the R Console. Try it!
1.3.1 Scripting
You can create a new script file from the drop-down menu File > New script (in Windows) or File
> New Document (Mac OS). It is basically a text file in which you could write, for example,
a <- 1:10
print(a)
In Windows, if you move the cursor up to the required line of the script and press Ctrl + R, then it
will be run in the R Console. So, for example, move the cursor to where you have typed a <- 1:10
and press Ctrl + R. Then move down a line and do the same. The contents of a, the numbers 1 to 10,
should be printed in the R Console. If you continue to keep the focus on the Scripting window and
go to Edit in the RGui you will find an option to run everything. Similar commands are available
for other operating systems (e.g. Cmd + Return on a Mac). You can save files and load previously
saved files.
Scripting is both good practice and good sense. It is good practice because it allows for
reproducibility of your work. It is good sense because if you need to go back and change things you
can do so easily without having to start from scratch.
Tip: It can be sensible to create the script in a simple text editor that is independent of R, such as
Notepad. Although you will not be able to use Ctrl + R in the same way, if R crashes for any reason
you will not lose your script file.
An Introduction to Mapping and Spatial Modelling in R. Richard Harris, 2013
1.3.2 Logging
You can save the contents of the R Console window to a text file which will then give you a log file
of the commands you have been using (including any mistakes). The easiest way to do this is to
click on the R Console (to take the focus from the Scripting window) and then use File > Save
History (in Windows) or File > Save As (Mac). Note that graphics are not usually plotted in the R
Console and therefore need to be saved separately.
Doing this runs the function ls(), which lists the contents of the workspace. The result,
character(0), indicates that the workspace is empty (assuming it currently is).
To find out more about a function, type ? or help() with the function name,
> ?ls()
> help(ls)
This will provide details about the function, including examples of its use. It will also list the
arguments required to run the function, some of which may be optional and some of which may
have default values which can be changed as required. Consider, for example,
> ?log()
A required argument is x, which is the data value or values. Typing log() omits any data and
generates an error. However, log(100) works just fine. The argument base takes a default value of
exp(1), which is approximately 2.72 and means the natural logarithm is calculated. Because the
default is assumed unless otherwise stated, log(100) gives the same answer as log(100,
base=exp(1)). Using log(100, base=10) gives the common logarithm, which can also be calculated
using the convenience function log10(100).
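These equivalences can be checked directly at the prompt; nothing here is new, it simply restates the defaults described above:

```r
# log() defaults to the natural logarithm, base exp(1)
log(100)                 # 4.60517
log(100, base=exp(1))    # identical to the above
log(100, base=10)        # the common logarithm: 2
log10(100)               # convenience function: also 2
```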
The results of mathematical expressions can be assigned to objects, as can the outcome of many
commands executed in the R Console. When the object is given a name different to other objects
within the current workspace, a new object will be created. Where the name and object already
exist, the previous contents of the object will be overwritten without warning, so be careful!
> a <- 10 - 5
> print(a)
[1] 5
> b <- 10 * 2
> print(b)
[1] 20
> print(a * b)
[1] 100
> a <- a * b
> print(a)
[1] 100
In these examples the assignment is achieved using the combination of < and -, as in a <- 100.
Alternatively, 100 -> a could be used or, more simply, a = 100. The print(...) command can often
be omitted, though it is useful, and sometimes necessary (for example, when what you hope should
appear on-screen doesn't).
> f = a * b
> print(f)
[1] 2000
> f
[1] 2000
> sqrt(b)
[1] 4.472136
> print(sqrt(b), digits=3)
[1] 4.47
> c(a,b)
[1] 100  20
> c(a,sqrt(b))
[1] 100.000000   4.472136
> print(c(a,sqrt(b)), digits=3)
[1] 100.00   4.47
> _a <- 10
Error: unexpected input in "_"
> 2a <- 10
Error: unexpected symbol in "2a"
> a <- 10
> A <- 20
> a == A
[1] FALSE
The following is rarely sensible because the object won't appear in the workspace listing, although it is there,
> .a <- 10
> ls()
[1] "a" "b" "f"
> .a
[1] 10
> rm(.a, A)
From typing ls() we know when the workspace is not empty. To remove an object from the
workspace it can be referenced explicitly, as in rm(A), or indirectly by its position in the
workspace. To see how the second of these options works, type
> ls()
The output returned from the ls() function is here a vector of length three, where the first element is
the first object (alphabetically) in the workspace, the second is the second object, and so forth. We
can access specific elements by using notation of the form ls()[index.number]. So, the first element,
the first object in the workspace, can be obtained using,
> ls()[1]
[1] "a"
> ls()[2]
[1] "b"
> ls()[3]
[1] "f"
> ls()[c(1,3)]
[1] "a" "f"
> ls()[c(1,2,3)]
[1] "a" "b" "f"
> ls()[c(1:3)]
[1] "a" "b" "f"
Using the remove function, rm(...), the second and third objects in the workspace can be removed
using
> rm(list=ls()[c(1,3)])
> ls()
[1] "b"
To delete all the objects in the workspace and therefore empty it, type the following code but be
warned: there is no undo function. Whenever rm(...) is used the objects are deleted permanently.
> rm(list=ls())
> ls()
character(0)
Because objects are deleted permanently, a sensible precaution prior to using rm(...) is to save the
workspace. To do so permits the workspace to be reloaded if necessary and the objects recovered.
One way to save the workspace is to use
> save.image(file.choose(new=T))
Alternatively, the drop-down menus can be used (File > Save Workspace in the Windows version
of the RGui). In either case, type the extension .RData manually or else it risks being omitted,
making it harder to locate and reload what has been saved. Try creating a couple of objects in your
workspace and then save it with the name workspace1.RData
To load a previously saved workspace, use
> load(file.choose())
If the workspace is saved when quitting R (see below), it will be saved to the file .RData within the
working directory. Assuming that directory is the default one,
the workspace and all the objects it contains will be reloaded automatically each and every time R is
opened, which could be useful but also potentially irritating. To stop it, locate and delete the file.
The current working directory is identified using the get working directory function, getwd() and
changed most easily using the drop-down menus.
> getwd()
[1] "/Users/rich_harris"
Tip: A good strategy for file management is to create a new folder for each project in R, saving the
workspace regularly using a naming convention such as Dec_8_1.RData, Dec_8_2.RData etc. That
way you can easily find and recover work.
1.5 Quitting R
Before quitting R, you may wish to save the workspace. To quit R use either the drop-down menus
or
> q()
As promised, you will be prompted whether to save the workspace. Answering yes will save the
workspace to the file .RData in the current working directory (see section 1.4.4, 'Saving and loading
workspaces', on page 10, above). To exit without the prompt, use
> q(save = "no")
I also have a free introduction to statistical analysis in R which accompanies the book Statistics for
Geography and Environmental Science. It can be obtained from http://www.social-statistics.org/?p=354.
There are many books available. My favourite, with a moderate statistical leaning and written
with clarity, is
Maindonald, J. & Braun, J., 2007. Data Analysis and Graphics using R (2nd edition). Cambridge:
CUP.
I also find useful,
Adler, J., 2010. R in a Nutshell. O'Reilly: Sebastopol, CA.
Crawley, M.J., 2005. Statistics: An Introduction using R. Chichester: Wiley (which is a shortened
version of The R Book by the same author).
Field, A., Miles, J. & Field, Z., 2012. Discovering Statistics Using R. London: Sage
However, none of these books is about mapping or spatial analysis (of particular interest to me as a
geographer). For that, the authoritative guide making the links between geographical information
science, geographical data analysis and R (but not really written for R newcomers) is,
Bivand, R.S., Pebesma, E.J. & Gómez-Rubio, V., 2008. Applied Spatial Data Analysis with R.
Berlin: Springer.
Also helpful is,
Ward, M.D. & Skrede Gleditsch, K., 2008. Spatial Regression Models. London: Sage. (Which uses
R code examples).
And
Chun, Y. & Griffith, D.A., 2013. Spatial Statistics and Geostatistics. London: Sage. (I found this
book a little eccentric but it contains some very good tips on its subject and gives worked examples
in R).
The following book has a short section of maps as well as other graphics in R (and is also, as the
title suggests, good for practical guidance on how to analyse surveys using cluster and stratified
sampling, for example):
Lumley, T., 2010. Complex Surveys. A Guide to Analysis Using R. Hoboken, NJ: Wiley.
Springer publish an ever-growing series of books under the banner Use R! If you are interested in
visualization, time-series analysis, Bayesian approaches, econometrics, data mining and more, then
you'll find something of relevance at http://www.springer.com/series/6991. But you may well also
find what you are looking for, for free, on the Internet.
Session 2: A Demonstration of R
This session provides a quick tour of some of R's functionality, with a focus on some geographical
applications. The idea here is to showcase a little of what R can do rather than providing a
comprehensive explanation to all that is going on. Aim for an intuitive understanding of the
commands and procedures but do not worry about the detail. More information about the workings
of R is given in the next session. More details about how to use R as a GIS and for spatial analysis
are given in Sessions 4, 5 and 6.
Note: this session assumes the libraries RgoogleMaps, png, sp and spdep are installed and available
for use. You can find out which packages you currently have installed by using
> row.names(installed.packages())
As the focus of this session is on showing what R can do rather than teaching you how to do it,
instead of requiring you to type a series of commands they can be executed automatically from a
previously written source file (a script: see Section 1.3.1, page 7). As the commands are executed
we will ask R to echo (print) them to the screen so you can follow what is going on. At regular
intervals you will be prompted to press return before the script continues.
To begin, type,
> source(file.choose(), echo=T)
and load the source file session2.R. After some comments that you should ignore, you will be
prompted to load the .csv file schools.csv:
> ## Read in the schools.csv file
> wait()
Please press return
schools.data <- read.csv(file.choose())
Assuming there is no error, we will now proceed to a simple inspection of the data. Remember: the
commands you see written below are the ones that appear in the source file. You do not need to type
them yourself for this session.
In this instance, each column is a continuous variable so we obtain a six-number summary of the
centre and spread of each variable.
> names(schools.data)
Next, the number of columns and rows, and a check row-by-row to see if the data are complete
(have no missing data).
> ncol(schools.data)
> nrow(schools.data)
> complete.cases(schools.data)
The file schools.csv contains information about the location and some attributes of schools in
Greater London (in 2008). The locations are given as a grid reference (Easting, Northing). The
information is not real but is realistic. It should not, however, be used to make inferences about real
schools in London.
Of particular interest is the average attainment on leaving primary school (elementary school) of
pupils entering their first year of secondary school. Do some schools in London attract higher
attaining pupils more than others? The variable attainment contains this information.
A stripchart and then a histogram will show that (not surprisingly) there is variation in the average
prior attainment by school.
> attach(schools.data)
> stripchart(attainment, method="stack", xlab="Mean Prior Attainment by School")
> hist(attainment, col="light blue", border="dark blue", freq=F, ylim=c(0,0.30),
+ xlab="Mean attainment")
Here the histogram is scaled so the total area sums to one. To this we can add a rug plot,
> rug(attainment)
> lines(density(sort(attainment)))
> xx <- seq(from=23, to=35, by=0.1)
> yy <- dnorm(xx, mean(attainment), sd(attainment))
> lines(xx, yy, lty="dotted")
> rm(xx, yy)
> legend("topright", legend=c("density curve","Normal curve"),
+ lty=c("solid","dotted"))
It would be interesting to know if attainment varies by school type. A simple way to consider this is
to produce a box plot. The data contain a series of dummy variables for each of a series of school
types (Voluntary Aided Church of England school: coe = 1; Voluntary Aided Roman Catholic: rc =
1; Voluntary controlled faith school: vol.con = 1; another type of faith school: other.faith = 1; a
selective school (sets an entrance exam): selective = 1). We will combine these into a single,
categorical variable then produce the box plot showing the distribution of average attainment by
school type.
First the categorical variable:
> par(mai=c(1,1.4,0.5,0.5))
# Changes the graphic margins
> boxplot(attainment ~ school.type, horizontal=T, xlab="Mean attainment", las=1,
+ cex.axis=0.8)
# Includes options to draw the boxes and labels horizontally
> abline(v=mean(attainment), lty="dashed")
# Adds the mean value to the plot
> legend("topright", legend="Grand Mean", lty="dashed")
Not surprisingly, the selective schools (those with an entrance exam) recruit the pupils with highest
average prior attainment.
We might also be interested in comparing those schools with the highest and lowest proportions of
Free School Meal eligible pupils to see if they are recruiting pupils with equal or differing mean
prior attainment. We expect a difference because free school meal eligibility is used as an indicator
of a low income household and there is a link between economic disadvantage and educational
progress in the UK.
It comes as little surprise to learn that those schools with the greatest proportions of FSM eligible
pupils are also those recruiting lower attaining pupils on average (mean attainment 26.6 vs 29.6, t =
-15.0, p < 0.001, the 95% confidence interval for the difference is from -3.44 to -2.64).
Exploring this further, the Pearson correlation between the mean prior attainment of pupils entering
each school and the proportion of them that are FSM eligible is -0.689, and significant (p < 0.001):
> round(cor(fsm, attainment),3)
> cor.test(fsm, attainment)
Pearson's product-moment correlation
Of course, the use of the Pearson correlation assumes that the relationship is linear, so let's check:
> plot(attainment ~ fsm)
> abline(lm(attainment ~ fsm))
There is some suggestion the relationship might be curvilinear. However, we will ignore that here.
Finally, some regression models. The first seeks to explain the mean prior attainment scores for the
schools in London by the proportion of their intake who are free school meal eligible. (The result is
the line of best fit added to the scatterplot above).
The second model adds a variable giving the proportion of the intake who are of a white ethnic group.
The third adds a dummy variable indicating whether the school is selective or not.
> model1 <- lm(attainment ~ fsm, data=schools.data)
> summary(model1)
Call:
lm(formula = attainment ~ fsm, data = schools.data)
Residuals:
    Min      1Q  Median      3Q     Max
-2.8871 -0.7413 -0.1186  0.5487  3.6681

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  29.6190     0.1148  258.12   <2e-16 ***
fsm          -6.5469     0.3603  -18.17   <2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Call:
lm(formula = attainment ~ fsm + white, data = schools.data)

Residuals:
    Min      1Q  Median      3Q     Max
-2.9442 -0.7295 -0.1335  0.5111  3.7837

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  30.1250     0.1979  152.21  < 2e-16 ***
fsm          -7.2502     0.4214  -17.20  < 2e-16 ***
white        -0.8722     0.2796   -3.12  0.00196 **
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 1.164 on 364 degrees of freedom
Multiple R-squared: 0.4887,	Adjusted R-squared: 0.4859
F-statistic: 173.9 on 2 and 364 DF, p-value: < 2.2e-16
Call:
lm(formula = attainment ~ fsm + white + selective, data = schools.data)

Residuals:
    Min      1Q  Median      3Q     Max
-2.6262 -0.5620  0.0537  0.5607  3.6215

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  29.1706     0.1689 172.712   <2e-16 ***
fsm          -5.2381     0.3591 -14.586   <2e-16 ***
white        -0.2299     0.2249  -1.022    0.307
selective     3.4768     0.2338  14.872   <2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.9189 on 363 degrees of freedom
Multiple R-squared: 0.6823,	Adjusted R-squared: 0.6796
F-statistic: 259.8 on 3 and 363 DF, p-value: < 2.2e-16
Looking at the adjusted R-squared value, each model appears to be an improvement on the one that
precedes it (marginally so for model 2). However, looking at the last (model 3), we may suspect that
we could drop the white ethnicity variable with no significant loss in the amount of variance
explained. An analysis of variance confirms that to be the case.
  Res.Df    RSS Df Sum of Sq      F Pr(>F)
1    364 307.42
2    363 306.54  1     0.882  1.045  0.307
The residual error, measured by the residual sum of squares (RSS), is not very different for the two
models, and that difference, 0.882, is not significant (F = 1.045, p = 0.307).
The schools data contain geographical coordinates and are therefore geographical data.
Consequently they can be mapped. The simplest way for point data is to use a 2-dimensional plot,
making sure the aspect ratio is fixed correctly.
> plot(Easting, Northing, asp=1, main="Map of London schools")
# The argument asp=1 fixes the aspect ratio correctly
Amongst the attribute data for the schools, the variable esl gives the proportion of pupils who speak
English as an additional language. It would be interesting for the size of the symbol on the map to
be proportional to it.
> plot(Easting, Northing, asp=1, main="Map of London schools",
+ cex=sqrt(esl*5))
It would also be nice to add a little colour to the map. We might, for example, change the default
plotting 'character' to a filled circle with a yellow background.
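One possible form of that command, sketched here rather than taken from the session script (pch=21 is a circle whose fill colour is set by bg; Easting, Northing and esl are variables in the attached schools data):

```r
# A sketch: plot filled circles, sized by esl, with a yellow fill
plot(Easting, Northing, asp=1, main="Map of London schools",
     cex=sqrt(esl*5), pch=21, bg="yellow")
```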
A more interesting option would be to have the circles filled with a colour gradient that is related to
a second variable in the data, the proportion of pupils eligible for free school meals for example.
To achieve this, we can begin by creating a simple colour palette:
> palette <- c("yellow","orange","red","purple")
We now cut the free school meals eligibility variable into quartiles (four classes, each containing
approximately the same number of observations).
> map.class <- cut(fsm, quantile(fsm, probs=seq(0, 1, 0.25)),
+ labels=FALSE, include.lowest=TRUE)
# One possible classification, splitting fsm at its quartiles
The result is to split the fsm variable into four groups with the value 1 given to the first quarter of
the data (schools with the lowest proportions of eligible pupils), the value 2 given to the next
quarter, then 3, and finally the value 4 for schools with the highest proportions of FSM eligible
pupils.
There are, then, now four map classes and the same number of colours in the palette. Schools in
map class 1 (and with the lowest proportion of fsm-eligible pupils) will be coloured yellow, the next
class will be orange, and so forth.
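The mapping from class to colour is plain vector indexing: the class number of each school selects the matching element of the palette. A small illustration with made-up class values (the vector c(1, 4, 2, 1, 3) is hypothetical, not from the schools data):

```r
# Re-create the palette from above so the snippet is self-contained
palette <- c("yellow", "orange", "red", "purple")
# Hypothetical map classes for five schools
map.class <- c(1, 4, 2, 1, 3)
palette[map.class]   # "yellow" "purple" "orange" "yellow" "red"
```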
Bringing it all together,
> plot(Easting, Northing, asp=1, main="Map of London schools",
+ cex=sqrt(esl*5), pch=21, bg=palette[map.class])
It would be good to add a legend, and perhaps a scale bar and North arrow. Nevertheless, as a first
map in R this isn't too bad!
Why don't we be a bit more ambitious and overlay the map on a Google Maps tile, adding a legend
as we do so? This requires us to load an additional library for R and to have an active Internet
connection.
> library(RgoogleMaps)
Assuming that the data frame, schools.data, remains in the workspace and attached (it will be if you
have followed the instructions above), and that the colour palette created above has not been
deleted, then the map shown in Figure 2.4 is created with the following code:
> MyMap <- MapBackground(lat=Lat, lon=Long)
> PlotOnStaticMap(MyMap, Lat, Long, cex=sqrt(esl*5), pch=21,
+ bg=palette[map.class])
> legend("topleft", legend=paste("<", tapply(fsm, map.class, max)),
+ pch=21, pt.bg=palette, pt.cex=1.5, bg="white", title="P(FSM-eligible)")
> legVals <- seq(from=0.2, to=1, by=0.2)
> legend("topright", legend=round(legVals,3), pch=21, pt.bg="white",
+ pt.cex=sqrt(legVals*5), bg="white", title="P(ESL)")
(If you are running the script for this session then the code you see on-screen will differ slightly.
That is because it has some error trapping included in it in case there is no Internet connection
available.)
Remember that the data are simulated. The points shown on the map are not the true locations of
schools in London. Do not worry about understanding the code in detail; the purpose is to see the
sort of things R can do with geographical data. We will look more closely at the detail in later
sessions.
First, we will take a copy of the schools data and convert it into an explicitly spatial object in R:
> detach(schools.data)
> schools.xy <- schools.data
> library(sp)
> attach(schools.xy)
> coordinates(schools.xy) <- c("Easting", "Northing")
# Converts into a spatial object
> class(schools.xy)
> detach(schools.xy)
> proj4string(schools.xy) <- CRS("+proj=tmerc +datum=OSGB36")
We can learn from this that the six nearest schools to the first school in the data (row 1) are schools
5, 38, 2, 40, 223 and 6:
> nearest.six$nn[1,]
[1]   5  38   2  40 223   6
> class(nearest.six)
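The nearest.six object is not created on this page of the extract; one plausible way to produce it, sketched here under the assumption that the spdep package and the spatial object schools.xy from above are available, is with spdep's knearneigh:

```r
library(spdep)
# Find the six nearest neighbouring schools to each school,
# using the coordinates of the spatial object created earlier
nearest.six <- knearneigh(coordinates(schools.xy), k=6)
```

The result has class "knn", whose $nn component is the matrix of neighbour indices inspected above.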
The connections between each point and its neighbours can then be plotted. It may take a few
minutes.
> plot(neighbours, coordinates(schools.xy))
Having identified the six nearest neighbours to each school we could give each equal weight in a
spatial weights matrix or, alternatively, decrease the weight with distance away (so the first nearest
neighbour gets most weight and the sixth nearest the least). Creating a matrix with equal weight
given to all neighbours is sufficient for the time being.
(The other possibility is achieved by creating then supplying a list of general weights to the
function, see ?nb2listw.)
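A sketch of how such a weights object might be built with spdep (the names neighbours and spatial.weights match their use elsewhere in the text; style="W" row-standardises, giving each of the six neighbours equal weight):

```r
library(spdep)
# Convert the k-nearest-neighbours object to a neighbours list,
# then to a spatial weights object with equal weight per neighbour
neighbours <- knn2nb(nearest.six)
spatial.weights <- nb2listw(neighbours, style="W")
```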
We now have all the information required to test whether there are spatial dependencies in the
residuals. The answer is yes (Moran's I = 0.218, p < 0.001, indicating positive spatial
autocorrelation).
> lm.morantest(model4, spatial.weights)
Global Moran's I for regression residuals
data:
model: lm(formula = attainment ~ fsm + selective, data = schools.data)
weights: spatial.weights

Moran I statistic standard deviate = 7.9152, p-value = 1.235e-15
alternative hypothesis: greater
sample estimates:
Observed Moran's I        Expectation           Variance
      0.2181914682      -0.0038585704       0.0007870118
2.7 Tidying up
It is better to save your workspace regularly whilst you are working (see Section 1.4.4, 'Saving and
loading workspaces', page 10) and certainly before you finish. Don't forget to include the
extension .RData when saving. Having done so, you can tidy up the workspace.
> save.image(file.choose(new=T))
> rm(list=ls())
# Be careful, it deletes everything!
A simple introduction to graphics and statistical analysis in R is given in Statistics for Geography
and Environmental Science: An Introduction in R, available at http://www.social-statistics.org/?p=354.
Let us create two objects, each a vector containing ten elements. The first will be the numbers from
one to ten, recorded as integers. The second will be the same sequence but now recorded as real
numbers (that is, 'floating point' numbers, those with a decimal place).
> b <- 1:10
> b
[1] 1 2 3 4 5 6 7 8 9 10
> c <- seq(from=1.0, to=10.0, by=1)
> c
[1] 1 2 3 4 5 6 7 8 9 10
This works because if we don't explicitly name the arguments (by omitting from=1 etc.) then R will
assume we are giving values to the arguments in their default order, which in this case is the
order from, to and by. Type ?seq and look under Usage for this to make a little more sense.
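To see the positional matching at work, compare the following; this is a quick sketch, not part of the session code:

```r
# 1, 10 and 0.5 are matched to from, to and by in that order
s1 <- seq(1, 10, 0.5)
s2 <- seq(from=1, to=10, by=0.5)
identical(s1, s2)   # TRUE
```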
In any case, the two objects, b and c, appear the same on screen but one is an object of class integer
whereas the other is an object of class numeric and of type double (double precision in the memory
space).
> class(b)
[1] "integer"
> class(c)
[1] "numeric"
> typeof(c)
[1] "double"
Often it is possible to coerce an object from one class and type to another.
[1]  1  2  3  4  5  6  7  8  9 10
> c <- as.character(c)
> class(c)
[1] "character"
> c
 [1] "1"  "2"  "3"  "4"  "5"  "6"  "7"  "8"  "9"  "10"
The examples above are trivial. However, it is important to understand that seemingly generic
functions like summary(...) can produce outputs that are dependent upon the class type. Try, for
example,
> class(b)
[1] "numeric"
> summary(b)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
   1.00    3.25    5.50    5.50    7.75   10.00
> class(c)
[1] "character"
> summary(c)
   Length     Class      Mode
       10 character character
In the first instance, a six-number summary of the centre and spread of the numeric data is given.
That makes no sense for character data. The second summary gives the length of the vector, its class
type and its storage mode.
A more interesting example is provided if we consider the plot(...) command, used first with a
single data variable, secondly with two variables in a data table, and finally on a model of the
relationship between those two variables.
The first variable is created by generating 100 observations drawn randomly from a Normal
distribution with mean of 100 and a standard deviation of 20.
> var1 <- rnorm(n=100, mean=100, sd=20)
Being random, the data assigned to the variable will differ from user to user. Usually we would
want this. However, in this case it would be easier to ensure we get the same by ensuring we each
get the same 'random' draw:
> set.seed(1)
> var1 <- rnorm(n=100, mean=100, sd=20)
> class(var1)
[1] "numeric"
> length(var1)
[1] 100
> summary(var1)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
  55.71   90.12  102.30  102.20  113.80  148.00
> head(var1)
[1]  87.47092 103.67287  83.28743 131.90562 106.59016  83.59063
> tail(var1)
[1] 131.73667 111.16973 ... 88.53469  75.50775  90.53199
They seem fine! Returning to the use of the plot(...) command, in this instance it simply plots the
data in order of their position in the vector.
> plot(var1)
26
To demonstrate a different interpretation of the plot command, a second variable is created that is a
function of the first but with some random error.
> set.seed(101)
> var2 <- 3 * var1 + 10 + rnorm(100, 0, 25)
# which, because n, mean and sd are the first three arguments to rnorm,
# is the same as writing var2 <- 3 * var1 + 10 + rnorm(n=100, mean=0, sd=25)
> head(var2)
[1] 264.2619 334.8301 242.9887 411.0758 337.5397 290.1211
Next, the two variables are gathered together in a data table, of class data frame, where each row is
an observation and each column is a variable. There is more about data frames on page 29, in
Section 3.2 ('Data frames').
> mydata <- data.frame(x = var1, y = var2)
> class(mydata)
[1] "data.frame"
> head(mydata)
          x        y
1  87.47092 264.2619
2 103.67287 334.8301
3  83.28743 242.9887
4 131.90562 411.0758
5 106.59016 337.5397
6  83.59063 290.1211
> nrow(mydata)
# The number of rows in the data
[1] 100
> ncol(mydata)
# The number of columns
[1] 2
In this case, plotting the data frame will produce a scatter plot (to which the line of best fit shown in
Figure 3.2 will be added shortly).
> plot(mydata)
If there had been more than two columns in the data table, or if they had not been arranged in x, y
order, then the plot could be produced by referencing the columns directly. All the following are
equivalent:
An Introduction to Mapping and Spatial Modelling in R. Richard Harris, 2013
> plot(mydata$x, mydata$y)
> plot(mydata[,1], mydata[,2])
> plot(mydata[,"x"], mydata[,"y"])
> with(mydata, plot(x, y))
> plot(y ~ x, data=mydata)
The attach(...) command could also be used. This is introduced in Section 3.2.2, 'Attaching a data
frame' on page 31.
Figure 3.2. A scatter plot. A line of best fit has been added.
The line of best fit in Figure 3.2 is a regression line. To fit the regression model, summarising the
relationship between y and x, use
> model1 <- lm(y ~ x, data=mydata)
model1 is an object of class lm, short for linear model. Using the summary(...) function summarises
the relationship between y and x.
> summary(model1)

Call:
lm(formula = y ~ x, data = mydata)

Residuals:
    Min      1Q  Median      3Q     Max
-57.102 -16.274   0.484  15.188  47.290

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)   8.6462    13.6208   0.635    0.527
x             3.0042     0.1313  22.878   <2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 23.47 on 98 degrees of freedom
Multiple R-squared: 0.8423,     Adjusted R-squared: 0.8407
F-statistic: 523.4 on 1 and 98 DF,  p-value: < 2.2e-16
Plotting the model object produces a series of diagnostic plots: the first is a check for non-constant
variance and outliers, the second for normality of the model residuals, the third is similar to the
first, and the fourth identifies both extreme residuals and leverage points.
These four plots can be viewed together, changing the default graphical parameters to show the
plots in a 2-by-2 array (as in Figure 3.3).
> par(mfrow = c(2,2))
> plot(model1)
Finally, we might like to go back to our previous scatter plot and add the regression line of best fit
to it,
> par(mfrow = c(1,1))
> plot(mydata)
> abline(model1)
The preceding section introduced the data frame as a class of object containing a table of data where
the variables are the columns of the data and the rows are the observations.
> class(mydata)
> summary(mydata)
Looking at the data summary, the object mydata contains two columns, labelled x and y. These
column headers can also be revealed by using
> names(mydata)
[1] "x" "y"
or with
> colnames(mydata)
[1] "x" "y"
The row names appear to be the numbers from 1 to 100 (the number of rows in the data), though
actually they are character data:
> rownames(mydata)
[1] "1" "2" "3" "4" "5" "6" "7" "8" [etc.]
> class(rownames(mydata))
[1] "character"
All at once:
> names(mydata) <- c("x","y")
> names(mydata)
[1] "x" "y"
The above can be especially useful when merging data tables with GIS shapefiles in R (because the
first entry in an attribute table for a shapefile usually is given an ID of 0). Otherwise, it is usually
easiest for the first row in a data table to be labelled 1, so let's put them back to how they were.
> rownames(mydata) = 1:nrow(mydata)
> rownames(mydata)
[1] "1" "2" "3" "4" "5" "6" "7" "8" [etc.]
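As a small aside on the shapefile point above, here is a sketch (using a tiny made-up data frame, not the schools data) of relabelling rows from 0, as in a shapefile attribute table, and back again:

```r
df <- data.frame(x = 1:3)
rownames(df) <- 0:(nrow(df) - 1)   # 0-based IDs, as in a shapefile attribute table
rownames(df)                       # "0" "1" "2"
rownames(df) <- 1:nrow(df)         # back to R's default 1-based labels
rownames(df)                       # "1" "2" "3"
```

Note that, either way, the row names are stored as character data.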
The square bracket notation can be used to index specific rows, columns or cells in the data frame.
For example:
> mydata[1,]
         x        y
1 87.47092 264.2619
> mydata[2,]
         x        y
2 103.6729 334.8301
> round(mydata[2,],2)
       x      y
2 103.67 334.83
> mydata[nrow(mydata),]
# The final row of the data
           x       y
100 90.53199 261.236
> mydata[,1]
# The first column of data
[1]  87.47092 103.67287  83.28743 131.90562 [etc.]
> mydata[,2]
# The second column, which here is also
[1] 264.2619 334.8301 242.9887 411.0758 337.5397 [etc.]
> mydata[,ncol(mydata)]
# the final column of data
[1] 264.2619 334.8301 242.9887 411.0758 337.5397 [etc.]
> mydata[1,1]
# The data in the first row of the first column
[1] 87.47092
> mydata[5,2]
# The data in the fifth row of the second column
[1] 337.5397
> round(mydata[5,2],0)
[1] 338
> mydata$x
# Equivalent to mydata[,1] because the column name is x
[1]  87.47092 103.67287  83.28743 131.90562 106.59016 [etc.]
> mydata$y
[1] 264.2619 334.8301 242.9887 411.0758 337.5397 290.1211 [etc.]
> summary(mydata$x)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
  55.71   90.12  102.30  102.20  113.80  148.00
> summary(mydata$y)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
  140.4   284.1   314.1   315.6   355.7   447.6
> mean(mydata$x)
[1] 102.1777
> median(mydata$y)
[1] 314.1226
> sd(mydata$x)
# Gives the standard deviation of x
[1] 17.96399
> boxplot(mydata$y)
> boxplot(mydata$y, horizontal=T, main="Boxplot of variable y")
Sometimes all of these ways of accessing a specific part of a data table become tiresome and it is useful
to reference the column or variable name directly. For example, instead of having to type
mean(mydata[,1]), mean(mydata$x) or with(mydata, mean(x)), it would be easier just to refer to the
variable of interest, x, as in mean(x).
To achieve this the attach(...) command is used. Compare, for example,
> mean(x)
31
(which generates an error because there is not an object called x in the workspace; it is only a
column name within the data frame mydata) with
> attach(mydata)
> mean(x)
[1] 102.1777
(which works fine). If, to use the earlier analogy, objects in R's workspace are like box files, then
now you have opened one up and its contents (which include the variable x) are visible.
To detach the contents of the data frame use detach(...)
> detach(mydata)
> mean(x)
Error in mean(x) : object 'x' not found
It is sensible to use detach when the data frame is no longer being used or else confusion can arise
when multiple data frames contain the same column names, as in the following example:
> attach(mydata)
> mean(x)
# This will give the mean of mydata$x
[1] 102.1777
> mydata2 = data.frame(x = 1:10, y=11:20)
> head(mydata2)
x y
1 1 11
2 2 12
3 3 13
4 4 14
5 5 15
6 6 16
> attach(mydata2)
The following object(s) are masked from 'mydata':
x, y
> mean(x)
# This will now give the mean of mydata2$x
[1] 5.5
> detach(mydata2)
> mean(x)
[1] 102.1777
> detach(mydata)
> rm(mydata2)
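A way to avoid the masking problem altogether is with(...), which evaluates an expression inside a data frame without attaching it. A quick sketch with made-up data frames (not the mydata objects above):

```r
df1 <- data.frame(x = c(10, 20, 30))
df2 <- data.frame(x = 1:10)
with(df1, mean(x))   # 20: no attach/detach needed, so nothing is masked
with(df2, mean(x))   # 5.5: each call looks inside its own data frame
```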
Subsets of a data frame can be created by referencing specific rows within it. For example, imagine
we want a table only of those observations that have a value above the mean of some variable.
> attach(mydata)
> subset <- which(x > mean(x))
> class(subset)
[1] "integer"
> subset
[1] 2 4 5 7 8 9 11 12 15 18 19 20 21 22 25 30 31 33 [etc.]
> mydata.sub <- mydata[subset,]
> head(mydata.sub)
         x        y
2 103.6729 334.8301
4 131.9056 411.0758
5 106.5902 337.5397
7 109.7486 354.7155
8 114.7665 351.4811
9 111.5156 367.4726
Note how the row names of this subset have been inherited from the parent data frame.
A more direct approach is to define the subset as a logical vector that is either true or false
dependent upon whether a condition is met.
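The logical-vector approach can be illustrated self-contained with a small made-up data frame (not the mydata object used above):

```r
df <- data.frame(x = c(5, 15, 8, 20), y = 1:4)
keep <- df$x > mean(df$x)   # the mean of x is 12, so: FALSE TRUE FALSE TRUE
keep
df.sub <- df[keep, ]        # keeps only the rows where the condition is TRUE
nrow(df.sub)                # 2
```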
In the same way, to select those rows where x is greater than or equal to the mean of x and y is
greater than or equal to the mean of y
> mydata.sub <- mydata[x >= mean(x) & y >= mean(y),]
# The symbol & is used for and
Or, those rows where x is less than the mean of x or y is less than the mean of y
> mydata.sub <- mydata[x < mean(x) | y < mean(y),]
# The symbol | is used for or
R will, by default, report NA or an error when some calculations are tried with missing data:
> mean(mydata$x)
[1] NA
> quantile(mydata$y)
Error in quantile.default(mydata$y) :
missing values and NaN's not allowed if 'na.rm' is FALSE
To overcome this, the default can be changed or the missing data removed.
To ignore the missing data in the calculation, change the na.rm (remove NA) argument from its default:
> mean(mydata$x, na.rm=TRUE)
[1] 102.3263
Alternatively, there are various ways to remove the missing data. For example
> subset <- !is.na(mydata$x)
creates a logical vector which is true where the data values of x are not missing (the ! in the
expression means not):
> head(subset)
[1] FALSE  TRUE  TRUE  TRUE  TRUE  TRUE
More succinctly,
> with(mydata, mean(x[!is.na(x)]))
[1] 102.3263
Alternatively, a new data frame can be created without any missing data whereby any row with any
missing value is omitted.
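The options for handling missing data can be illustrated with a small made-up data frame (base R only):

```r
df <- data.frame(x = c(1, NA, 3), y = c(4, 5, NA))
mean(df$x)                 # NA: missing values propagate by default
mean(df$x, na.rm=TRUE)     # 2: the NA is ignored in the calculation
complete <- na.omit(df)    # drops every row containing an NA
nrow(complete)             # 1: only the first row has no missing values
```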
The accompanying file schools.csv (used in Session 2) contains information about the location and
some attributes of schools in Greater London (in 2008). The locations are given as a grid reference
(Easting, Northing). The information is not real but is realistic.
A standard way to read a file into a data frame, with cases corresponding to lines and variables to
fields in the file, is to use the read.table(...) command.
> ?read.table
In the case of schools.csv, it is comma delimited and has column headers. Looking through the
arguments for read.table the data might be read into R using
> schools.data <- read.table("schools.csv", header=T, sep=",")
This will only work if the file is located in the working directory, else the location (path) of the file
will need to be specified (or the working directory changed). More conveniently, use file.choose()
> schools.data <- read.table(file.choose(), header=T, sep=",")
Looking through the usage of read.table in the R help page, a variant of the command is found
where the defaults are for comma delimited data. So, most simply, we could use,
> schools.data <- read.csv("schools.csv")
It seems to be fine.
For more about importing and exporting data in R, consult the R help document, R Data
Import/Export (see under the Help menu in R or http://cran.r-project.org/manuals.html).
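To experiment without the schools.csv file to hand, a small table can be round-tripped through a temporary csv (a self-contained sketch; read.csv is the comma-delimited variant of read.table):

```r
df <- data.frame(id = 1:3, score = c(0.5, 0.7, 0.9))
f <- tempfile(fileext = ".csv")
write.csv(df, f, row.names = FALSE)
df2 <- read.csv(f)    # header=TRUE and sep="," are the defaults
all.equal(df, df2)    # TRUE: the table survives the round trip
```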
3.3 Lists
A list is a little like a data frame but offers a more flexible way to gather objects of different classes
together. For example,
> mylist <- list(schools.data, model1, "a")
> class(mylist)
[1] "list"
> length(mylist)
[1] 3
Here the first component is the data frame containing the schools data. The second component is the
linear model created earlier. The third is the character "a". To reference a specific component,
double square brackets are used:
> head(mylist[[1]], n=3)
    FSM   EAL   SEN [etc.]
1 0.659 0.583 0.031
2 0.391 0.424 0.001
3 0.708 0.943 0.038
10
> summary(mylist[[2]])

Call:
lm(formula = y ~ x, data = mydata)

Residuals:
    Min      1Q  Median      3Q     Max
-57.102 -16.274   0.484  15.188  47.290

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)   8.6462    13.6208   0.635    0.527
x             3.0042     0.1313  22.878   <2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 23.47 on 98 degrees of freedom
Multiple R-squared: 0.8423,     Adjusted R-squared: 0.8407
F-statistic: 523.4 on 1 and 98 DF,  p-value: < 2.2e-16
> class(mylist[[3]])
[1] "character"
The double square brackets can be combined with single ones. For example,
> mylist[[1]][1,]
    FSM   EAL   SEN white blk.car blk.afr indian pakistani [etc.]
1 0.659 0.583 0.031 0.217   0.032   0.222  0.002     0.020
is the first row of the schools data. The first cell of the same data is
> mylist[[1]][1,1]
[1] 27
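The double and single bracket rules can be tried on a small made-up list (the object and values here are illustrative only):

```r
mylist <- list(1:3, "a", data.frame(x = 4:5))
length(mylist)            # 3 components
class(mylist[[2]])        # "character"
mylist[[1]][2]            # 2: element 2 of the first component
mylist[[3]][1, "x"]       # 4: row 1, column x of the data frame component
```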
So, a simple function to divide the product of two numbers by their sum could be,
> product.over.sum <- function(a, b) {
+    answer <- (a * b) / (a + b)
+    return(answer)
+ }
Note: If reading this in class it is likely that the packages have been installed already or you will
not have the administrative rights to install them. If so, this section is for information only. There
is also no need to install the packages if you have done so already (when following the instructions
under 'Before you begin' on p.3).
To install a specific package the install.packages(...) command is used, as in:
> install.packages("ctv")
Installing package(s) into /Users/ggrjh/Library/R/2.13/library
(as lib is unspecified)
trying URL 'http://cran.uk.r-project.org/bin/macosx/leopard/contrib/2.13/ctv_0.74.tgz'
Content type 'application/x-gzip' length 289693 bytes (282 Kb)
opened URL
==================================================
downloaded 282 Kb
The package needs to be installed once but loaded each time R is started, using the library(...)
command
> library("ctv")
In this case what has been installed is a package that will now allow all the packages associated
with the spatial task view to be installed together, using:
> install.views("Spatial")
Note that installing packages may, by default, require access to a directory/folder for which
administrative rights are required. If necessary, it is entirely possible to install R (and therefore the
additional packages) in, for example, 'My Documents' or on a USB stick.
3.5.2 Checking which packages are installed
You may want to save and/or tidy up your workspace before quitting R. See sections 1.5 and 2.7 on
pages 11 and 23.
For this session you will need to have the following libraries installed: sp, maptools, GISTools,
classInt, RColorBrewer, raster and spdep (see 'Installing and loading one or more of the packages
(libraries)', p. 37).
We begin by reading some XY data into R. The file landprices.csv is in a comma separated format
and contains information about land parcels in Beijing, including a point georeference (a centroid)
marking the centre of each land parcel. The data are simulated, not real, but they are realistic.
Given the geographical coordinates, it is possible to map the data as they are in much the same way
as we did in Section 2.5 ('Some simple maps'). At its simplest,
> with(landdata, plot(x, y, asp=1))
However, given an interest in undertaking spatial analysis in R, it would be better to convert the
data into what R will explicitly recognise as a spatial object. For this we will require the sp library.
Assuming it is installed,
> library(sp)
We can now coerce landdata into an object of class spatial (sp) by telling R that the geographical
coordinates for the data are found in columns 1 and 2 of the current data frame.
> coordinates(landdata) = c(1,2)
> class(landdata)
[1] "SpatialPointsDataFrame"
attr(,"package")
[1] "sp"
The locations of the data points are now simply plotted using
> plot(landdata)
(type ?points and scroll down to below 'pch values' to see the options for different types of point
character)
4.1.2 The Coordinate Reference System (CRS)
The map still lacks a sense of geographical context so we will add a polygon shapefile giving the
boundaries of districts in Beijing. The file is called 'beijing_districts.shp'. This first needs to be
loaded into R which in turn requires the maptools library:
> library(maptools)
> districts <- readShapePoly(file.choose())
> summary(districts)
As before, the coordinate reference system is missing but is the same as for the land price data.
> proj4string(districts) <- crs
> summary(districts)
We can now plot the boundaries of the districts and then overlay the point data on top:
> plot(districts)
> plot(landdata, pch=21, bg="yellow", cex=0.7, add=T)
That is a variable we can map. The easiest way to do this is using the GISTools library,
> library(GISTools)
As with most libraries, if we want to know more about what it can do, type ? followed by its name
(so here ?GISTools) and follow the link to the main index. If you do, you will find there is a function
called choropleth with the description: Draws a choropleth map given a spatialPolygons object, a
variable and a shading scheme.
Currently we have the spatialPolygons object. It is the object districts.
> class(districts)
[1] "SpatialPolygonsDataFrame"
attr(,"package")
[1] "sp"
It would be good to add a legend to the map. To do so, use the following command and then click
towards the bottom-right of your map in the place where you would like the legend to go:
> locator(1)
$x
[1] 462440
$y
[1] 4407000
In practice, it may take some trial-and-error to get the legend in the right place.
Similarly, we can add a north arrow and a map scale,
What we see on a choropleth map and how we interpret it is a function of the classification used to
shade the areas. Typing ?auto.shading we discover that the default for auto.shading is to use a
quantile classification with five (n + 1) categories and a red shading scheme. We might like to
compare this with using a standard deviation based classification and one based on the range.
> x <- districts@data$POPDEN
> shades2 <- auto.shading(x, cutter=sdCuts, cols=brewer.pal(5,"Greens"))
> shades3 <- auto.shading(x, cutter=rangeCuts, cols=brewer.pal(5,"Blues"))
We could now plot these maps in the same way as before. It will work; however, it may also
become tiresome typing the same code each time to overlay the point data and to add the
annotations. We could save ourselves the trouble by writing a simple function to do it:
> map.details <- function(shading) {
+    plot(landdata, pch=21, bg="yellow", cex=0.7, add=T)
+    choro.legend(461000,4407000,shading,fmt="%4.1f",title='Population density')
+    north.arrow(473000, 4445000, "N", len=1000, col="light gray")
+    map.scale(425000,4400000,10000,"km",5,subdiv=2,tcol='black',
+       scol='black',sfcol='black')
+ }
> head(landdata@data, n=3)
  DELE DRIVER DPARK Y0405 Y0607 Y0809 [etc.]
1 7.25   7.38  8.09     0     1     0
2 5.61   8.41  7.51     0     1     0
3 5.62   8.22  7.27     0     1     0
Altering the size is achieved simply enough by passing some function of the variable (LNPRICE) to
the character expansion argument (cex). For example,
> x <- landdata@data$LNPRICE
> plot(landdata, pch=21, bg="yellow", cex=0.2*x)
Shading them according to their value is a little harder. The process is to cut the values into groups
(using a quintile classification, for example). Then create a colour palette for each group. Finally,
map the points and shade them by group.
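The cut-then-palette process just described can be sketched self-contained in base R (made-up points, with heat.colors standing in for the brewer.pal palettes used below):

```r
set.seed(42)
x <- runif(20); y <- runif(20)                # made-up point locations
v <- runif(20, 5, 11)                         # made-up values to shade by
brks <- quantile(v, probs = seq(0, 1, 0.2))   # quintile break points
groups <- cut(v, brks, include.lowest=TRUE, labels=FALSE)
palette <- heat.colors(5)                     # stand-in colour palette
plot(x, y, pch=21, bg=palette[groups])        # shade each point by its group
```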
There are various ways this can be done. The first stage is to find the lower and upper values (the
break points) for each group and an easy way to do that is to use the classInt library that is designed
for the purpose.
> library(classInt)
For example, for a quantile classification with five groups the break points are:
> classIntervals(x, 5, "quantile")
(we can see the number of land parcels in each group is approximately but not exactly equal)
For a 'natural breaks' classification they are
> classIntervals(x, 5, "fisher")
or
> classIntervals(x, 5, "jenks")
The second stage is to assign each of the land price values to one of the five groups. This is done
using the cut(...) function.
> groups <- cut(x, break.points, include.lowest=T, labels=F)
We can check the number of land parcels in each group by counting them
> table(groups)
groups
  1   2   3   4   5
159 267 352 227 112
This is most easily done using the RColorBrewer library, which is based on the ColorBrewer
website, http://www.colorbrewer.org, and is designed to create nice-looking colour palettes,
especially for thematic maps.
> library(RColorBrewer)
In fact, you have used this library already when creating the choropleth maps. It is implicit in the
command auto.shading(x, cutter=sdCuts, cols=brewer.pal(5,'Greens')), for example, where
the function brewer.pal(...) is a call to RColorBrewer asking it to create a sequential palette of
five colours going from light to dark green.
We can create a colour palette in the same way,
> palette <- brewer.pal(5, "Greens")
# Use ?brewer.pal to find out about other colour schemes
Adding the legend is a little harder and uses the legend(...) function,
> legend("bottomright", legend=c("4.85 to <6.3", "6.3 to <7.105",
+ "7.105 to <7.865","7.865 to <8.82","8.82 to 11.06"), pch=21, pt.bg=palette,
+ pt.cex = c(0.2*5.6, 0.2*6.7, 0.2*7.5, 0.2*8.3, 0.2*9.9), title="Land value (log)")
Tip When creating maps in colour, be wary of choosing colours that cannot be distinguished from
one another by those with colour-blindness. Red-green combinations are particularly problematic.
Be aware, also, that many publications still require graphics to be in grayscale. You could use, for
example, palette <- brewer.pal(5, "Greys")
then define the length of each raster cell (here giving a 1km by 1km cell size as the units are metres)
> cell.length <- 1000
We will allow the grid to completely cover the districts of Beijing so will base its dimensions on the
bounding box (the minimum enclosing rectangle) for the districts. The bounding box is found using
> bbox(districts)
        min       max
x  418358.5  473517.4
y 4391094.2 4447245.9
and this information will be used to calculate the number of columns (ncol) and the number of rows
(nrow) for the grid:
> xmin <- bbox(districts)[1,1]
> xmax <- bbox(districts)[1,2]
> ymin <- bbox(districts)[2,1]
> ymax <- bbox(districts)[2,2]
> ncol <- round((xmax - xmin) / cell.length, 0)
> nrow <- round((ymax - ymin) / cell.length, 0)
> ncol
[1] 55
> nrow
[1] 56
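The arithmetic can be verified directly from the bounding-box values reported above:

```r
cell.length <- 1000                      # 1km cells, as the map units are metres
xmin <- 418358.5; xmax <- 473517.4       # bounding box of the districts
ymin <- 4391094.2; ymax <- 4447245.9
round((xmax - xmin) / cell.length, 0)    # 55 columns
round((ymax - ymin) / cell.length, 0)    # 56 rows
```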
The next stage is to define the (x, y) and attribute values of the points that we are going to
aggregate, by averaging, into the blank grid.
> xs <- coordinates(landdata)[,1]
> ys <- coordinates(landdata)[,2]
> xy <- cbind(xs, ys)
> x <- landdata@data$LNPRICE
> land.grid = rasterize(xy, blank.grid, x, mean)
The resulting grid can then be plotted, either with the default settings (plot(land.grid)) or
customised a little.
Using over(...)
shows that the first point in the land price data is located in the 58th of the districts. The reason that
it is the 58th and not the 57th is that the IDs (SP_ID) are numbered beginning from zero, not one,
which is common for GIS. We can easily check this is correct:
> plot(districts[58,])
> plot(landdata[1,], pch=21, add=T)
These data cannot be plotted as they are. What we have is just a data table; it is not linked to any
map.
> class(joined.data)
[1] "data.frame"
However, they can be linked to the geography of the existing map to create a new Spatial
Points-with-attribute data object that can then be mapped in the way described in Section 4.3, p.43.
> combined.map <- SpatialPointsDataFrame(coordinates(landdata), joined.data)
> class(combined.map)
[1] "SpatialPointsDataFrame"
attr(,"package")
[1] "sp"
> proj4string(combined.map) <- crs
> head(combined.map@data)
Figure 4.4. The population density of the districts at each of the land parcel points
(the map is a result of a spatial join operation)
Secondly, the code used to complete the tasks and produce the maps allows for reproducibility: it
can be shared with (and checked by) other people. It can also be easily changed if, for example, you
wanted to slightly alter the point sizes or change the classes from coloured to grayscale. Making a
few tweaks to a script can be much faster than having to go through a number of drop-down menus,
tabs, right-clicks, etc. to achieve what you want.
Third, precisely because R's spatial capabilities do not stand alone from the rest of its functionality,
they allow for the integration of statistical and spatial ways of working. Three examples follow.
4.6.1 Example 1: mapping regression residuals
We can use the combined dataset to fit a hedonic land price model, estimating some of the
predictors of land price at each of the locations. The variables are:
Residuals:
     Min       1Q   Median       3Q      Max
-2.83427 -0.59053 -0.03558  0.53943  2.97878
Coefficients:
To obtain the residuals (errors) from this model we can use any of the functions residuals(...),
rstandard(...) or rstudent(...) to obtain the 'raw' residuals, the standardised residuals and the
Studentised residuals, respectively.
> residuals <- rstudent(model1)
> summary(residuals)
An advantage of using R is that we can now map the residuals to look for geographical patterns
that, if they exist, would violate the assumption of independent errors (and potentially affect both
the estimate of the model coefficients and their standard errors).
Figure 4.5. Map of the regression residuals from a model predicting the land parcel prices
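The idea of such a residual map can be sketched in base R alone, with made-up coordinates and residuals standing in for the spatial objects used in the session:

```r
set.seed(7)
xy <- cbind(runif(50), runif(50))    # stand-in point locations
res <- rnorm(50)                     # stand-in Studentised residuals
plot(xy, asp=1, xlab="x", ylab="y",
     pch=ifelse(res > 0, 16, 1),     # filled = positive, hollow = negative
     cex=pmax(abs(res), 0.3))        # symbol size reflects magnitude
```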
Is there a geographical pattern to the residuals in Figure 4.5? Perhaps, although this raises the
question of what a random pattern would actually look like. What there definitely is, is a significant
correlation between the residual value at any one point and that of its nearest neighbouring point:
> library(spdep)
# Loads the spatial dependence library
> knn1 <- knearneigh(combined.map, k=1, RANN=F)$nn
# Finds the first nearest neighbour to each point
> head(knn1, 3)
     [,1]
[1,]   10
[2,]  172
[3,]  152
[etc.]
# The nearest neighbour to point 1 is point 10,
# the nearest neighbour to point 2 is point 172, and so on
> cor.test(residuals, residuals[knn1])

        Pearson's product-moment correlation
If we wished, we could now save the residual values as a new shapefile to be used in other GIS.
This is straightforward and uses the same procedure to create a Spatial Points-with-attribute data
object that we used in Section 4.5.
Once installed, rgdal can be used for spatial data import and export, and projection and
transformation, as documented in Chapter 4 of Bivand et al.
1 Note: it used to be the case that Mac Intel OS X binaries were not provided on CRAN, but could be installed from
the CRAN Extras repository with
> setRepositories(ind=1:2)
> install.packages("rgdal")
However, at the time of writing the Mac binaries are provided on CRAN and can be downloaded in the normal way
without having to change the source repository.
Reference:
Bivand, R.S., Pebesma, E.J. & Gómez-Rubio, V., 2008. Applied Spatial Data Analysis with R.
Berlin: Springer.
There are some excellent R spatial tips and tutorials on Chris Brunsdon's Rpubs site,
http://rpubs.com/chrisbrunsdon, and on James Cheshire's website, http://spatial.ly/r/.
Perhaps the hardest thing is to remember which library to use when. At the risk of over-simplification:
If you have closed and restarted R since the last session, load the workspace session5.RData which
contains the districts polygons and combined map created previously in Session 4 (see Section
4.1.3, p.40 and Section 4.5, p.47). All the code for this session is contained in the file Session5.R
> load(file.choose())
A contiguity matrix is one that identifies polygons that share boundaries and (in what is called the
Queen's case) corners too. In other words, it identifies neighbouring areas. To do this we use the
poly2nb(...) function, which converts the polygons to an object of class neighbours.
> contig <- poly2nb(districts)
A summary of the neighbours object shows that there are 134 regions (districts, which can be
confirmed using nrow(districts)) with each being linked to 5.46 others, on average. There are two
regions with no links (use plot(districts) and you can see them to the east of the map), and two
regions with 10 links. The Queen's case is assumed by default, see ?poly2nb.
> summary(contig)
Neighbour list object:
Number of regions: 134
Number of nonzero links: 732
Percentage nonzero weights: 4.076632
Average number of links: 5.462687
2 regions with no links:
20 80
Link number distribution:

 0  1  2  3  4  5  6  7  8  9 10
 2  1  6  8 23 29 26 21  9  7  2
1 least connected region:
129 with 1 link
2 most connected regions:
90 100 with 10 links
It is helpful to learn a little more about the structure of the contiguity object. It is an object of class
nb which is itself a type of list.
> class(contig)
[1] "nb"
> typeof(contig)
[1] "list"
Looking at the first parts of this list we find that the first district has two neighbours, polygons 52
and 54, and the second has five: polygons 3, 4, 6, 99 and 100.
> contig[[1]]
[1] 52 54
> contig[[2]]
[1]  3  4  6 99 100
We can confirm this is correct by plotting the district and its neighbours on a map
> plot(districts)
> plot(districts[2,], col="red", add=T)
> plot(districts[c(3,4,6,99,100),], col="yellow", add=T)
It would not make sense to evaluate contiguity of point data. Instead, we could find, for example,
the six nearest neighbours to each point:
> knear6 <- knearneigh(combined.map, k=6, RANN=F)
This produces an object of class knn. Looking at its parts we find that the six nearest neighbours
(nn) to point 1 are points 10, 11, 1077, 1076, 93, 453, where 10 is the closest; that there are 1117
points in total (np); that we searched for the six nearest neighbours (k); that the points exist in a
two-dimensional space (dimension); and that we can see the coordinates of the points (labelled x).
> names(knear6)
[1] "nn"        "np"        "k"         "dimension" "x"
> head(knear6$nn)
     [,1] [,2] [,3] [,4] [,5] [,6]
[1,]   10   11 1077 1076   93  453
[2,]  172  110  155  162  153  156
[3,]  152  149  150  151  169  168
[4,]  135  143  679  148  678  674
[5,]   32  164  166  165  167  168
[6,]  432  920  969 1024  919  968
> head(knear6$np)
[1] 1117
> head(knear6$k)
[1] 6
> head(knear6$dimension)
[1] 2
> head(knear6$x, n=3)
            x       y
[1,] 454393.1 4417809
[2,] 442744.9 4417781
[3,] 444191.7 4416996
Imagine we are interested in calculating the correlation between some variable (call it x) at each of
the points and at each of the points' closest neighbour. From the above [head(knear6$nn)] we can
see this is the correlation between x1, x2, x3, x4, x5, x6, (etc.) and x10, x172, x152, x135, x32, x432, (etc.).
For the correlation with the second closest neighbours it would be with x11, x110, x149, x143, x164, x920,
(etc.), for the third closest, x1077, x155, x150, x679, x166, x969, (etc.), and so forth. Using the simulated data
about the price of land parcels in Beijing, we can calculate these correlations as follows:
> x <- combined.map$LNPRICE
# Or, combined.map@data$LNPRICE
What these values suggest is that even at the sixth nearest neighbour, the value of a land parcel at
any given point tends to be similar to the value of the land parcels around it: an example of
positive spatial autocorrelation.
An issue is that the threshold of six nearest neighbours is purely arbitrary. An interesting question is
how far (how many neighbours) away we can typically go from a point and still find a similarity
in the land price values. One way to determine this would be to carry on with the calculations
above, repeating the procedure until we get to, say, the 250th nearest neighbour. This is, in fact, what
we will do, but automating the procedure. One way to achieve this is to use a for loop:
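The looped procedure can be sketched self-contained in base R, with simulated points and dist() standing in for spdep's knearneigh (the object names here are illustrative, not those of the session):

```r
set.seed(1)
n <- 200
pts <- cbind(runif(n), runif(n))      # simulated point locations
vals <- pts[, 1] + rnorm(n, 0, 0.1)   # a spatially patterned variable
d <- as.matrix(dist(pts))             # all pairwise distances
diag(d) <- Inf                        # a point is not its own neighbour
r <- numeric(5)
for (k in 1:5) {                      # in the real analysis, k would run to 250
  nn <- apply(d, 1, function(row) order(row)[k])   # each point's kth neighbour
  r[k] <- cor(vals, vals[nn])
}
round(r, 2)    # positive throughout: nearby points have similar values
```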
Another way which amounts to the same thing is to make use of R's ability to apply a function
sequentially to columns (or rows) in an array of data:
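The apply version can be illustrated with a tiny made-up neighbour matrix, where column i holds the index of each point's ith nearest neighbour (the values here are invented for the illustration):

```r
x <- c(10, 12, 11, 30, 31)
nn <- cbind(c(3, 1, 2, 5, 4),    # made-up 1st nearest neighbours
            c(2, 3, 1, 5, 4))    # made-up 2nd nearest neighbours
correlations <- apply(nn, 2, function(i) cor(x, x[i]))
round(correlations, 2)           # one correlation per neighbour order
```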
Looking at the plot (Figure 5.1) we find that the land prices become more dissimilar (less
correlated) the further away we go from each point, dropping to zero correlation after about
the 200th nearest neighbour. The rate of decrease in the correlation is greatest to about the 35th
neighbour, after which it begins to flatten.
We can also determine the p-values associated with each of these correlations and identify which
are not significant at a 99% confidence level:
> pvals <- apply(knear250$nn, 2, function(i) cor.test(x, x[i])$p.value)
> which(pvals > 0.01)
[1] 63 88 110 115 121 125 134 136 137 138 140 142 145 146 [etc.]
It is from about the 100th neighbour that the correlations begin to become insignificant. Whether this
is useful information or not is a moot point: a measure of statistical significance is really only an
indirect measure of the sample size. It may be better to make a decision about the threshold at
which the neighbours are not substantively correlated based on the actual correlations (the effect
sizes) rather than their p-values. Whilst it remains a subjective choice, here we will use the 35th
neighbour as the limit, before which the correlations are typically equal to r = 0.20 or greater.
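The listing that created the knn object used below was not preserved; it would be along these lines, following the same approach as before:

```r
# The 35 nearest neighbours of each point (object name assumed to match the text)
knear35 <- knearneigh(coordinates(combined.map), k=35)
```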
To now convert this object of class knn to the same class of object that we had in Section 5.1.2
('Creating a contiguity matrix') we use
> knear35nb <- knn2nb(knear35)
> class(knear35nb)
[1] "nb"
> head(knear35nb, n=1)
[[1]]
 [1]  8 10 11 14 19 54 56 57 82 91 92 93 [etc.]
Figure 5.1. The Pearson correlation between the land parcel values at
each point and their nth nearest neighbour
5.1.4 Identifying neighbours by (physical) distance apart
It is also possible to identify the neighbours of points by their Euclidean or Great Circle distance
apart using the function dnearneigh(...). For example, if we wanted to identify all points between
100 and 1000 metres of each other:
> d100to1000 <- dnearneigh(combined.map, 100, 1000)
> class(d100to1000)
[1] "nb"
> d100to1000
Neighbour list object:
Number of regions: 1117
Number of nonzero links: 9822
Percentage nonzero weights: 0.7872154
What we created in Section 5.1 was a list of neighbours, where we had the flexibility to decide
what counts as a neighbour. The next stage will be to convert it into a spatial weights matrix so we
can use it for various methods of spatial analysis. This extra stage of conversion may seem like an
unnecessary additional chore. However, the creation of the spatial weights matrix allows us to
define the strength of relationship between neighbours. For example, we may want to give more
weight to neighbours that are located closer together and less weight to those that are further apart
(decreasing to zero beyond a certain threshold).
5.2.1 Creating a binary list of weights
We could create a simple binary 'matrix' from any of our existing lists of neighbours. In principle:
> spcontig <- nb2listw(contig, style="B")
Error in nb2listw(contig, style = "B") : Empty neighbour sets found
20
Note, however, the error message, which arises because two of the Chinese districts do not share a
boundary with any others:
> contig
In this case, we shall have to instruct the function to permit an empty set
> spcontig <- nb2listw(contig, style="B", zero.policy=T)
The same problem does not arise for the k nearest neighbours list (by definition it cannot:
every point has neighbours) but it does for the distance-based list:
> spknear35 <- nb2listw(knear35nb, style="B")
> spd100to1000 <- nb2listw(d100to1000, style="B")
Error in nb2listw(d100to1000, style = "B") : Empty neighbour sets found
> spd100to1000 <- nb2listw(d100to1000, style="B", zero.policy=T)
Looking at the first of these objects we can see how it has been constructed. It contains binary
weights (style B); district 1 has two neighbours, districts 52 and 54; and both of those have been
given a weight of one (all other districts therefore have a weight of zero with district 1). Similarly,
district 2 has neighbours 3, 4, 6, 99 and 100, each with a weight of one.
> names(spcontig)
[1] "style"      "neighbours" "weights"
> spcontig$style
[1] "B"
> head(spcontig$neighbours, n=2)
[[1]]
[1] 52 54

[[2]]
[1]   3   4   6  99 100

An Introduction to Mapping and Spatial Modelling in R. Richard Harris, 2013
Using binary weights, where 1 indicates two places are neighbours and 0 indicates they are not,
may create a problem when different places have different numbers of neighbours (as is the case for
both the contiguity and distance-based approaches). Imagine a calculation where the result is in
some way dependent upon the sum of the weights involved. For example,
y_i = Σ_j w_ij x_j (summing over the j = 1 to n neighbours of i)
All things being equal, we expect places with more neighbours to generate larger values of y simply
because they have more non-zero values contributing to the sum. A way around this problem is to
scale the weights so that for any one place they sum to one, a process known as row-standardisation,
which is actually the default option:
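The listing for the row-standardised case was not preserved; the default call would look something like this (spcontigW is an assumed name; style "W", row-standardisation, is nb2listw's default):

```r
# Row-standardised weights: style "W" is the default for nb2listw(...)
spcontigW <- nb2listw(contig, zero.policy=TRUE)
head(spcontigW$neighbours, n=2)
head(spcontigW$weights, n=2)
```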
In the case of the contiguity matrix, district 1 still has neighbours 52 and 54, and district 2 still has
neighbours 3, 4, 6, 99 and 100, but the weights are now row-standardised (style W) and in each case
they sum to one.
5.2.3 Creating an inverse distance weighting (IDW)
A more ambitious undertaking is to decrease the weighting given to two points according to their
distance apart, reducing to zero, for example, beyond the 35th nearest neighbour. To achieve this, we
begin by calculating the distances between each of the points, using the spDists(...) function. This
calculates the distances between two sets of points where the points' locations are defined by their
(x, y) coordinates (or by longitude and latitude: see ?spDists). To obtain the (x, y) coordinates of all the
land parcels contained in our combined map we could use the function coordinates(...), therefore
obtaining the distance-between-points matrix using
> d.matrix <- spDists(coordinates(combined.map), coordinates(combined.map))
Either way, the same result is achieved: an np by np matrix where np is the number of points and
the matrix contains the distances between them:
> d.matrix
# The full matrix. It's too large to show on screen.
> head(d.matrix[1,1:10])
# The distances from point 1 to the first 10 others
[1]     0.000 11648.171 10233.788 10360.952  9345.806  3618.377
> nrow(d.matrix)
[1] 1117
This is showing that the distance from point 1 to point 2 is 11.6km. The distance from point 1 to
itself is, of course, zero and the matrix is symmetric,
> d.matrix[1,2]
[1] 11648.17
> d.matrix[2,1]
[1] 11648.17
If we are going to reduce the weighting to zero beyond the 35th nearest neighbour then we don't
actually need the full distance matrix, only the distances from each point to those 35 neighbours.
We have already identified the nearest 35 neighbours for each point but for the sake of
completeness let's do it again:
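A sketch of that repeated step (as before, the object name is an assumption):

```r
# Recompute the 35 nearest neighbours of each point
knear35 <- knearneigh(coordinates(combined.map), k=35)
```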
Looking at the results we know, for example, that the nearest neighbours to point 1 are points 10,
11, 1077, etc. so the distances we need to extract from the distance matrix are row 1, columns 10,
11, 1077, and so forth. For point 2 the nearest neighbours are 172, 110, 155, etc. so from the
distance matrix we need row 2, columns 172, 110, 155, and so forth:
> head(knear35$nn, n=2)
     [,1] [,2] [,3] [,4] [,5] [etc.]
[1,]   10   11 1077 1076   93 [etc.]
[2,]  172  110  155  162  153 [etc.]
For point 1 we may obtain the distances to its 35 nearest neighbours using,
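The listing for point 1 was not preserved in this copy; following the pattern used for point 2 below, it would be:

```r
# Distances from point 1 to its 35 nearest neighbours
i <- knear35$nn[1,]
distances <- d.matrix[1,i]
head(distances)
```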
For point 2:
> i <- knear35$nn[2,]
> head(i)
[1] 172 110 155 162 153 156
> distances <- d.matrix[2,i]
> head(distances)
[1] 220.4186 293.3992 827.5340 844.6190 845.0816 868.7554
The same logic underpins the following code except that, instead of manually obtaining the distances for
each point in turn, it loops through them all sequentially. It also calculates weights that are
inversely related to the distance from a point to its neighbours (here w_ij = 1 / d_ij^0.5 is used).
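The loop was lost in this copy. Given the description, and the d.weights object used below, it would be along these lines (a sketch, assuming the inverse-distance formula above):

```r
# For each point, the inverse distance weights of its 35 nearest neighbours
n.points <- nrow(knear35$nn)
d.weights <- vector("list", length=n.points)
for (k in 1:n.points) {
  i <- knear35$nn[k,]                     # the point's 35 nearest neighbours
  d.weights[[k]] <- 1 / d.matrix[k,i]^0.5 # inversely related to distance
}
```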
We can now create the list of neighbours (an object of class nb) and have a corresponding list of
general weights (based on inverse distance weighting) that together allow for the final spatial
weights matrix to be created:
> knear35nb <- knn2nb(knear35)
> head(knear35nb, n=2)
# The list of neighbours
[[1]]
[1] 10 11 91 92 [etc.]

[[2]]
[1] 61 66 67 77 [etc.]
[[2]]
[1] 0.06735594 0.05838087 0.03476219 0.03440880 0.03439938 [etc.]
> spknear35IDW <- nb2listw(knear35nb, glist=d.weights)
# Creates the spatial weights matrix, now with IDW
Looking at the result we find that point 1 still has points 10, 11, 91, 92 and so forth as its neighbours
(as it should; it would be worrying if that had changed!) but, looking at their weighting, it decreases
with distance.
> head(spknear35IDW$neighbours, n=1)
[[1]]
[1] 10 11 91 92 [etc.]
Note, however, that the weights are not actually w_ij = 1 / d_ij^0.5 but are rescaled to
w_ij(STD) = w_ij / Σ_j w_ij
because they are row-standardised. This means that the inverse distance weighting is a function of
the local distribution of points around each point, not just how far away they are. For example,
consider a point where all its neighbours are quite far from it. Using a strictly distance-based
weighting each of those neighbours should receive a low weighting. However, once row
standardisation is applied those low weights will be scaled upwards to sum to one. Reciprocally,
imagine a point where all its neighbours are very close. Using a distance-based weighting those
neighbours should receive a high weighting; in effect, though, they will be scaled downwards by the
row standardisation. This may sound undesirable and counter to the objectives of inverse distance
weighting, and can be prevented by changing the weights style:
> spknear35IDWC <- nb2listw(knear35nb, glist=d.weights, style="C")
However, imagine the points are sampled across both urban and rural areas. The distances between
points will most likely be smaller in the urban regions (where the density of points is greater,
reflecting the greater population density), with greater distances between points in the rural regions.
If row standardisation is not applied then the net result will be to give more weight to the urban
parts of the region, such that any subsequent calculation dependent upon the sum of the weights will
be more strongly influenced by the urban areas than by the rural ones. Therefore careful thought
needs to be given to the style of weights to use.
5.2.4 Variants of the above
Common forms of inverse distance weighting include the bisquare and Gaussian functions. These
are, respectively,
w_ij = (1 - d_ij^2 / d_MAX^2)^2, where d_MAX is the threshold beyond which the weights are set to zero; and
w_ij = exp(-0.5 d_ij^2 / d_MAX^2)
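The listings that created the corresponding weights objects (spknear35bisq and spknear35gaus, which are used in the sections that follow) were not preserved. A sketch, assuming d_MAX is taken as the distance to each point's 35th nearest neighbour:

```r
# Bisquare and Gaussian general weights to the 35th nearest neighbour (a sketch)
n.points <- nrow(knear35$nn)
bisq.weights <- vector("list", length=n.points)
gaus.weights <- vector("list", length=n.points)
for (k in 1:n.points) {
  d <- d.matrix[k, knear35$nn[k,]]
  d.max <- max(d)                               # assumed choice of threshold
  bisq.weights[[k]] <- (1 - d^2 / d.max^2)^2
  gaus.weights[[k]] <- exp(-0.5 * d^2 / d.max^2)
}
spknear35bisq <- nb2listw(knear35nb, glist=bisq.weights, style="C")
spknear35gaus <- nb2listw(knear35nb, glist=gaus.weights, style="C")
```

The style "C" (globally standardised) matches the weights style reported for spknear35gaus later in the text.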
Once we have the spatial weights we can use them to create a spatially lagged variable. For
example, if xi is the value of the land parcel at point i, then its spatial lag is the mean value of the
land parcels that are the neighbours of i, where those neighbours are defined by the spatial weights.
More precisely, it is the weighted mean value if, for example, inverse distance weighting has been
employed. It is straightforward to calculate the spatially lagged variable. For example,
> x <- combined.map$LNPRICE
> lagx <- lag.listw(spknear35gaus, x)
Having done so, the correlation between points and their neighbours can be calculated,
> cor.test(x, lagx)
Pearson's product-moment correlation
Here there is evidence of significant positive spatial autocorrelation: the land price at one
point tends to be similar to the land prices of its neighbours. This can be seen if we plot the two
variables on a scatter plot, although the relationship is also somewhat noisy and may not be linear.
> plot(lagx ~ x)
> best.fit <- lm(lagx ~ x)
> abline(best.fit)
> summary(best.fit)
Call:
lm(formula = lagx ~ x)
Residuals:
    Min      1Q  Median      3Q     Max
-2.2042 -0.3933 -0.0004  0.3912  1.6712
Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  5.70723    0.12272   46.51   <2e-16 ***
x            0.23176    0.01639   14.14   <2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Figure 5.2. The relationship between the land price values and the spatial lag of those values
5.3.2 A Moran plot and test
What we created in Figure 5.2 is known as a Moran plot. A more direct way of producing it is to use
the moran.plot(...) function,
> moran.plot(x, spknear35gaus)
which flags potential outliers / influential observations. To suppress their labelling, include the
argument labels=F.
The Moran coefficient and related test provide a measure of the spatial autocorrelation in the data,
given the spatial weightings.
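The call and its output were not preserved here; the test would be run along these lines, using the Gaussian weights from Section 5.2.4:

```r
# Moran test for the land price values, given the spatial weights
moran.test(x, spknear35gaus)
```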
Essentially the Moran statistic is a correlation value, although it need not be exactly zero in the
presence of no correlation (here the expected value is not zero but slightly negative) and can go
beyond the range -1 to +1. The interpretation though is that the price of the land parcels and their
neighbours are positively correlated: there is a tendency for like-near-like values.
More strictly, we should acknowledge the note found under ?moran.test that the derivation of
the test assumes the matrix is symmetric, which it is not (because where A is one of the nearest
neighbours to B, it does not follow that B is necessarily one of the nearest neighbours to A):
> spknear35gaus
Characteristics of weights list object:
Neighbour list object:
Number of regions: 1117
Number of nonzero links: 39095
Percentage nonzero weights: 3.133393
Average number of links: 35
Non-symmetric neighbours list
# Note that the 'matrix' is not symmetric
Weights style: C
Weights constants summary:
     n      nn   S0       S1       S2
C 1117 1247689 1117 58.55285 4623.092
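The regression model and test that produced the output below were not shown in this copy; from the output, the calls would have been along these lines (listw2U(...) coerces the weights to symmetric form):

```r
# Fit the OLS model and test its residuals for spatial autocorrelation
model1 <- lm(LNPRICE ~ DCBD + DELE + DRIVER + DPARK + POPDEN + JOBDEN +
             Y0405 + Y0607 + Y0809, data=combined.map)
lm.morantest(model1, listw2U(spknear35gaus))
```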
data:
model: lm(formula = LNPRICE ~ DCBD + DELE + DRIVER + DPARK + POPDEN + JOBDEN + Y0405 +
Y0607 + Y0809, data = combined.map)
weights: listw2U(spknear35gaus)
Moran I statistic standard deviate = 16.0962, p-value < 2.2e-16
alternative hypothesis: greater
sample estimates:
Observed Moran's I        Expectation           Variance
      9.683406e-02      -3.893216e-03       3.916053e-05
The estimated correlation is about 0.097. Not huge, perhaps, but significant enough to question the
assumption of independence.
Note that this result is dependent on the spatial weightings. If we change them, then the results of
the Moran test will change also. For example, using the bisquare weightings (from Section 5.2.4,
p.61):
> lm.morantest(model1, listw2U(spknear35bisq))
Global Moran's I for regression residuals
data:
model: lm(formula = LNPRICE ~ DCBD + DELE + DRIVER + DPARK + POPDEN + JOBDEN + Y0405 +
Y0607 + Y0809, data = combined.map)
weights: listw2U(spknear35bisq)
Moran I statistic standard deviate = 13.2469, p-value < 2.2e-16
alternative hypothesis: greater
sample estimates:
Observed Moran's I        Expectation           Variance
      9.651698e-02      -3.961995e-03       5.753359e-05
Here the change is slight, largely because the rate of decay of the inverse distance weighting matters
rather less than the number of neighbours it decays to. In both cases above the threshold is 35, a
number we obtained by judgement from Figure 5.1. Imagine we had chosen 150 instead. The results
we then get (using a Gaussian decay function) are,
Moran I statistic standard deviate = 13.9404, p-value < 2.2e-16
alternative hypothesis: greater
sample estimates:
Observed Moran's I        Expectation           Variance
      3.230500e-02      -2.469718e-03       6.222665e-06
which, although still statistically significant, reduces the Moran's I value to about a third of its
previous value.
5.5 Summary
The general process for creating spatial weights in R is as follows:
(a) Read the (X, Y) data or shapefile into the R workspace and (in doing so) convert it into a
spatial object (see Session 4).
(b) Decide how neighbouring observations will be defined: by nearest neighbour, by distance,
or by contiguity, for example.
(c) Convert the object of class nb into an object of class listw (spatial weights). For k-nearest
neighbours there is a prior stage of converting the knn object into class nb.
(d) At the time of creating the spatial weights object you need to decide what type of weights to
use, for example binary or row-standardised. You may also supply a list of general weights
to produce inverse distance weighting.
This session requires the spdep, GWmodel and lme4 libraries to have been installed.
6.1 Introduction
6.1.1 Getting Started
If you have closed and restarted R since the last session, load the workspace session6.RData which
contains the districts map and the combined (synthetic) land parcel and district data from Session 4
as well as the spatial weights with a Gaussian decay to the 35 th neighbour created in Session 5.
> load(file.choose())
> library(spdep)
> ls()
[1] "combined.map"  "districts"     "spknear35gaus"
Recall that the (log) of the land price values show significant spatial variation,
> moran.test(combined.map$LNPRICE, spknear35gaus)
Moran's I test under randomisation
data: combined.map$LNPRICE
weights: spknear35gaus
Moran I statistic standard deviate = 39.663, p-value < 2.2e-16
alternative hypothesis: greater
sample estimates:
Moran I statistic       Expectation          Variance
     0.2657983392     -0.0008960573      0.0000452122
Our challenge is to try and explain some of that variation within a regression framework.
6.1.2 OLS regression
We begin by re-fitting the regression model from the end of the previous session, noting once again
the apparent spatial dependencies in the residuals that violate the assumption of independence:
> model1 <- lm(LNPRICE ~ DCBD + DELE + DRIVER + DPARK + POPDEN + JOBDEN + Y0405 +
Y0607 + Y0809, data=combined.map)
> lm.morantest(model1, listw2U(spknear35gaus))
Global Moran's I for regression residuals
data:
model: lm(formula = LNPRICE ~ DCBD + DELE + DRIVER + DPARK + POPDEN + JOBDEN + Y0405 +
Y0607 + Y0809, data = combined.map)
weights: listw2U(spknear35gaus)
In addition to their apparent lack of independence, we may also note that the residuals appear to
show evidence of heteroskedasticity, that is, non-constant variance (they are therefore neither
independent nor identically distributed):
> plot(residuals(model1) ~ fitted(model1))
# Plot the residuals against the fitted values
> abline(h=0, lty="dotted")
# Add a horizontal line at residual value = 0
> lines(lowess(fitted(model1), residuals(model1)), col="red")
# Add a trend line. We are hoping not to find a trend but
# to see that the residuals are random noise around 0.
# They are not.
However, that same code won't work for the spatial models we produce below. Instead, we can write
a function to produce what we need,
> hetero.plot <- function(model) {
+   plot(residuals(model) ~ fitted(model))
+   abline(h=0, lty="dotted")
+   lines(lowess(fitted(model), residuals(model)), col="red")
+ }
> hetero.plot(model1)
There are no quick fixes for the violated assumption of independent and identically distributed
errors. The violation suggests we cannot take on trust the standard errors, t- and p-values shown
under the model summary. It is likely that the standard errors for at least some of the predictor
variables have been under-estimated (because if we have spatial dependencies in the residuals then
they likely arise from spatial dependencies in the data, which in turn mean we have fewer degrees of
freedom than we think we have).
> summary(model1)
Call:
lm(formula = LNPRICE ~ DCBD + DELE + DRIVER + DPARK + POPDEN +
JOBDEN + Y0405 + Y0607 + Y0809, data = combined.map)
Residuals:
     Min       1Q   Median       3Q      Max
-2.83427 -0.59053 -0.03558  0.53943  2.97878
Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept)    [etc.]
DCBD           [etc.]
DELE           [etc.]
DRIVER         [etc.]
DPARK          [etc.]
POPDEN         [etc.]
JOBDEN         [etc.]
Y0405       -0.183913   0.057289  -3.210  0.00136 **
Y0607        0.152825   0.087152   1.754  0.07979 .
Y0809        0.803620   0.118338   6.791 1.81e-11 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
If we have doubts about this model because of the spatial patterning of the errors (which are likely
but not necessarily caused by the patterning of the Y variable) then we need to consider other
approaches.
One option is to fit a spatial simultaneous autoregressive error model, which decomposes the error
into two parts, a spatially lagged component and a remaining error: y = Xβ + λWu + ε
Fitting the model and comparing it with the standard regression model we find that two of the
predictor variables (DELE and JOBDEN) are no longer significant at a conventional level and that
the standard errors for many have risen. The lambda value (a measure of spatial autocorrelation) is
significant. The model fits the data better than the previous model (the AIC score is lower and the
log likelihood value greater, as is the pseudo-R2 value):
> model2 <- errorsarlm(LNPRICE ~ DCBD + DELE + DRIVER + DPARK + POPDEN + JOBDEN +
Y0405 + Y0607 + Y0809, data=combined.map, spknear35gaus)
> summary(model2)
Call:errorsarlm(formula = LNPRICE ~ DCBD + DELE + DRIVER + DPARK + POPDEN + JOBDEN +
    Y0405 + Y0607 + Y0809, data = combined.map, listw = spknear35gaus)

Residuals:
      Min        1Q    Median        3Q       Max
-2.724707 -0.555175 -0.050399  0.484010  2.750120
Type: error
Coefficients: (asymptotic standard errors)
              Estimate Std. Error z value  Pr(>|z|)
(Intercept) 12.7929865  1.0803367 11.8417 < 2.2e-16 ***
DCBD        -0.4528868  0.1179775 -3.8388 0.0001237 ***
DELE        -0.0430191  0.0414930 -1.0368 0.2998391
DRIVER       0.0805519  0.0389786  2.0666 0.0387749 *
DPARK       -0.2225801  0.0640377 -3.4758 0.0005094 ***
POPDEN       0.0040076  0.0012339  3.2479 0.0011624 **
JOBDEN       0.0032269  0.0035468  0.9098 0.3629286
Y0405       -0.2185441  0.0548818 -3.9821 6.831e-05 ***
Y0607        0.2488569  0.0832995  2.9875 0.0028127 **
Y0809        0.9301437  0.1131248  8.2223 2.220e-16 ***
> AIC(model1)
[1] 2865.731
> AIC(model2)
[1] 2783.096
Although the spatial error model (above) fits the data better than the standard OLS model, it tells us
only that there is an unexplained spatial structure to the residuals, not what caused it. It may
offer better estimates of the model parameters and their statistical significance but it does not
presuppose any particular spatial process generating the patterns in the land price values. A different
model, one that explicitly tests for whether the land value at a point is functionally dependent on the
values of neighbouring points, is the spatially lagged y model: y = ρWy + Xβ + ε
The model is fitted in R using,
> model3 <- lagsarlm(LNPRICE ~ DCBD + DELE + DRIVER + DPARK + POPDEN + JOBDEN + Y0405
+ Y0607 + Y0809, data=combined.map, spknear35gaus)
> summary(model3)
Call:lagsarlm(formula = LNPRICE ~ DCBD + DELE + DRIVER + DPARK + POPDEN +
JOBDEN + Y0405 + Y0607 + Y0809, data = combined.map, listw = spknear35gaus)
Residuals:
      Min        1Q    Median        3Q       Max
-2.818629 -0.577205 -0.051596  0.517413  3.005430
Type: lag
Coefficients: (asymptotic standard errors)
              Estimate Std. Error z value  Pr(>|z|)
(Intercept)  9.3098717  0.8336866 11.1671 < 2.2e-16 ***
DCBD        -0.1753427  0.0585675 -2.9939 0.0027547 **
DELE        -0.0706011  0.0323879 -2.1799 0.0292677 *
DRIVER       0.0334482  0.0304653  1.0979 0.2722435
DPARK       -0.2591509  0.0467432 -5.5441 2.954e-08 ***
POPDEN       0.0042982  [etc.]
JOBDEN       0.0057660  [etc.]
Y0405       -0.1869054  [etc.]
Y0607        0.1928584  [etc.]
Y0809        0.8452182  [etc.]
The model is an improvement on the OLS model but does not appear to fit the data as well as the
error model (the lagged y model has a greater AIC, lower log likelihood and lower pseudo-R2):
> AIC(model3)
[1] 2849.458
> logLik(model3)
'log Lik.' -1412.729 (df=12)
> cor(combined.map$LNPRICE, fitted(model3))^2
[1] 0.3074552
Moreover (and possibly related to the heteroskedasticity) significant autocorrelation remains in the
residuals from this model. We can, of course, plot these residuals (see Session 4, 'Using R as a
simple GIS' for further details). Here we will write a simple function to do so.
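The function listing itself was not preserved in this copy. A sketch of what quickmap(...) might look like (the implementation, class breaks and palette are all assumptions, shading a variable by its quintile):

```r
# Map a variable at the land parcel locations, shaded by quintile (assumed sketch)
quickmap <- function(x) {
  grps <- cut(x, quantile(x, probs=seq(0, 1, 0.2)), include.lowest=TRUE)
  shades <- c("dark blue", "light blue", "grey", "pink", "red")
  plot(coordinates(combined.map), col=shades[grps], pch=20, asp=1)
}
x <- residuals(model3)
```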
> quickmap(x)
Figure 6.1. The residuals from the spatial lagged y model still
display evidence of positive spatial autocorrelation
Note that the beta estimates of the lagged y model cannot be interpreted in the same way as for a
standard OLS model. For example, the beta estimate of 0.004 for the POPDEN variable does not
mean that if (hypothetically) we increased that variable by one unit at each location we should then
expect the (log) price of the land parcel everywhere to increase by 0.004, even holding the other X
variables constant. The reason is that if we did raise the value it would start something akin to a
'chain reaction' through the feedback of Y via the lagged Y values, which will have a different
overall effect at different locations. That (equilibrium) effect is obtained by premultiplying a given
change in x at a location by (I - ρW)^-1, holding x constant for other locations. The code
below, based on Ward & Gleditsch (2008, p.47), will do that, taking each location in turn. However,
I would advise against running it here as it takes a long time. For further details see Ward &
Gleditsch pp. 44-50.
> ## You are advised not to run this. Based on Ward & Gleditsch p.47
> n <- nrow(combined.map)
> I <- matrix(0, nrow=n, ncol=n)
> diag(I) <- 1
> rho <- model3$rho
> weights.matrix <- listw2mat(spknear35gaus)
> results <- rep(NA, times=10)
> for (i in 1:10) {
+   cat("\nCalculating for point",i," of ",n)
+   xvector <- rep(0, times=n)
+   xvector[i] <- 1
+   impact <- solve(I - rho * weights.matrix) %*% xvector * 0.004
+   results[i] <- impact[i]
+ }
6.2.3 Choosing between the models using Lagrange Multiplier (LM) Tests
Before fitting the spatial error and lagged y models (above), we could have looked for evidence in
support of them using the function lm.LMtests(...). This tests the basic OLS specification against
the more general spatial error and lagged y models. Robust tests also are given. There is evidence
for both of the spatial models in favour of the simpler OLS model but it is stronger (in purely
statistical terms) for the error model.
> lm.LMtests(model1, spknear35gaus, test="all")
Lagrange multiplier diagnostics for spatial dependence
data:
model: lm(formula = LNPRICE ~ DCBD + DELE + DRIVER + DPARK + POPDEN + JOBDEN + Y0405 +
Y0607 + Y0809, data = combined.map)
weights: spknear35gaus
LMerr = 199.8088, df = 1, p-value < 2.2e-16
data:
model: lm(formula = LNPRICE ~ DCBD + DELE + DRIVER + DPARK + POPDEN + JOBDEN + Y0405 +
Y0607 + Y0809, data = combined.map)
weights: spknear35gaus
RLMlag = 5.1518, df = 1, p-value = 0.02322
Warning message:
In lm.LMtests(model1, spknear35gaus, test = "all") :
Spatial weights matrix not row standardized
So far we have accommodated the spatial dependencies in the data by giving consideration to the
error term and to the dependent (y) variable. Attention now turns to the predictor variables. An
extension to the lagged y model is to lag all the included x variables too.
> model4 <- lagsarlm(LNPRICE ~ DCBD + DELE + DRIVER + DPARK + POPDEN + JOBDEN + Y0405
+ Y0607 + Y0809, data=combined.map, spknear35gaus, type="mixed")
> summary(model4)
Call:lagsarlm(formula = LNPRICE ~ DCBD + DELE + DRIVER + DPARK + POPDEN +
JOBDEN + Y0405 + Y0607 + Y0809, data = combined.map, listw = spknear35gaus,
type = "mixed")
Residuals:
      Min        1Q    Median        3Q       Max
-2.703670 -0.547982 -0.012966  0.496047  2.831382
Type: mixed
Coefficients: (asymptotic standard errors)
                   Estimate  Std. Error z value  Pr(>|z|)
(Intercept)     14.42342251  2.30649891  6.2534 4.017e-10 ***
DCBD            -0.70355874  0.22856175 -3.0782 0.0020826 **
DELE            -0.00261626  0.04397004 -0.0595 0.9525531
DRIVER           0.09144563  0.04468328  2.0465 0.0407043 *
DPARK           -0.06440409  0.08154334 -0.7898 0.4296362
POPDEN           0.00313902  0.00130863  2.3987 0.0164527 *
JOBDEN          -0.00063843  0.00377879 -0.1690 0.8658360
Y0405           -0.21727049  0.05452762 -3.9846 6.760e-05 ***
Y0607            0.22959603  0.08351013  2.7493 0.0059719 **
Y0809            0.90803383  0.11300773  8.0351 8.882e-16 ***
lag.(Intercept) -9.63048411  2.68285336 -3.5896 0.0003311 ***
lag.DCBD         0.65596211  0.23856166  2.7497 0.0059658 **
lag.DELE        -0.03552211  0.06711110 -0.5293 0.5965952
lag.DRIVER      -0.04970303  0.07485862 -0.6640 0.5067167
lag.DPARK       -0.12152627  0.13681302 -0.8883 0.3743980
lag.POPDEN       0.00164266  0.00261945  0.6271 0.5305917
lag.JOBDEN       0.01011556  0.00665176  1.5207 0.1283262
lag.Y0405        0.23932841  0.25856976  0.9256 0.3546615
lag.Y0607       -0.62537354  0.35590512 -1.7571 0.0788947 .
lag.Y0809       -1.36572890  0.53207778 -2.5668 0.0102646 *
What we find is that the lag of the distance to the CBD (measured on a log scale) is significant, but
note that the direction of the relationship is different from that for the original DCBD variable:
the sign has reversed. The same has happened to the dummy variable indicating the sale of the land
parcel in the years 2008 to 2009. Taking the first case, what it suggests is that the relationship
between land price value and the (log of) distance to the CBD is not linear. Adding the square of
this variable to the original OLS model improves the model fit:
> model1b <- update(model1, . ~ . + I(DCBD^2))
> summary(model1b)
Call:
lm(formula = LNPRICE ~ DCBD + DELE + DRIVER + DPARK + POPDEN +
JOBDEN + Y0405 + Y0607 + Y0809 + I(DCBD^2), data = combined.map)
Residuals:
     Min       1Q   Median       3Q      Max
-2.84512 -0.59732 -0.04968  0.53605  2.97359
Coefficients: [etc.]
However, doing the same in the spatial model results in a situation where the distance to CBD
variable is no longer significant under the spatial error model nor under the lagged y model, though
in the latter case it is more borderline and the square of the variable remains significant:
> model2b <- update(model2, . ~ . + I(DCBD^2))
> summary(model2b)
Call:errorsarlm(formula = LNPRICE ~ DCBD + DELE + DRIVER + DPARK +
POPDEN + JOBDEN + Y0405 + Y0607 + Y0809 + I(DCBD^2), data = combined.map,
listw = spknear35gaus)
Residuals:
     Min       1Q   Median       3Q      Max
-2.73137 -0.55895 -0.05405  0.48248  2.74627
Type: error
Coefficients: (asymptotic standard errors)
              Estimate Std. Error z value  Pr(>|z|)
(Intercept)  9.2141717  3.7866485  2.4333  0.014961 *
DCBD         0.3841081  0.8585578  0.4474  0.654595
DELE        -0.0366181  0.0420046 -0.8718  0.383337
DRIVER       0.0813644  0.0389130  2.0909  0.036534 *
DPARK       -0.2145776  0.0645093 -3.3263  0.000880 ***
POPDEN       0.0039035  0.0012375  3.1543  0.001609 **
JOBDEN       0.0040846  0.0036396  1.1223  0.261754
Y0405       -0.2191549  0.0548720 -3.9939 6.499e-05 ***
Y0607        0.2466920  0.0833052  2.9613  0.003063 **
Y0809        0.9292095  0.1130977  8.2160 2.220e-16 ***
I(DCBD^2)   -0.0500019  0.0509827 -0.9808  0.326710
Residuals: [etc.]
      3Q      Max
0.531172 2.948786
Type: lag
Coefficients: (asymptotic standard errors)
              Estimate Std. Error z value  Pr(>|z|)
(Intercept)  4.0409714  2.6745551  1.5109 0.1308153
DCBD         1.0508575  0.5886085  1.7853 0.0742086 .
DELE        -0.0611436  0.0326664 -1.8718 0.0612404 .
DRIVER       0.0373019  0.0304562  1.2248 0.2206611
DPARK       -0.2445848  0.0472913 -5.1719 2.318e-07 ***
POPDEN       0.0040491  0.0010413  3.8884 0.0001009 ***
JOBDEN       0.0082186  0.0031842  2.5811 0.0098486 **
Y0405       -0.1890889  0.0564567 -3.3493 0.0008102 ***
Y0607        0.1796598  [etc.]
Y0809        0.8462846  [etc.]
I(DCBD^2)   -0.0717869  [etc.]
We can, if we wish, include specific lagged X variables in the OLS model. The process is to create
them and then include them in the model. The lag of DCBD and the lag of Y0809 are the most obvious
candidates to include (from model 4, above). To create the lagged variables,
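The listing was not preserved here; based on the variable names in the summary below, it would be along these lines (the code is an assumption):

```r
# Create the spatially lagged X variables and add them to the OLS model
combined.map$lag.DCBD  <- lag.listw(spknear35gaus, combined.map$DCBD)
combined.map$lag.Y0809 <- lag.listw(spknear35gaus, combined.map$Y0809)
model1c <- update(model1, . ~ . + lag.DCBD + lag.Y0809)
```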
The result seems to fit the data better than the original OLS model (AIC score of 2848.3 vs 2865.7;
remember, the lower the better) but actually the lag of DCBD appears not to be significant in this
model, whilst significant spatial autocorrelation appears to remain:
> summary(model1c)
Call:
lm(formula = LNPRICE ~ DCBD + DELE + DRIVER + DPARK + POPDEN +
JOBDEN + Y0405 + Y0607 + Y0809 + lag.DCBD + lag.Y0809, data = combined.map)
Residuals:
    Min      1Q  Median      3Q     Max
-2.9443 -0.5892 -0.0318  0.5279  2.8866
Coefficients: [etc.]
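The test that produced the output below was not shown in this copy; from the weights line in that output, the Moran test would have been run as:

```r
# Test the residuals of the extended OLS model for spatial autocorrelation
lm.morantest(model1c, spknear35gaus)
```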
data:
model: lm(formula = LNPRICE ~ DCBD + DELE + DRIVER + DPARK + POPDEN + JOBDEN + Y0405 +
Y0607 + Y0809 + lag.DCBD + lag.Y0809,
data = combined.map)
weights: spknear35gaus
Moran I statistic standard deviate = 15.2495, p-value < 2.2e-16
alternative hypothesis: greater
sample estimates:
Observed Moran's I        Expectation           Variance
      8.890303e-02      -4.717012e-03       3.769016e-05
6.3 Discussion
Although the revised spatial models present different results in regard to the distance to CBD
variable and its effect on land price, and these differ again from the revised OLS models, there may
be an argument that, in a general sense, the models are indicating the same thing. Distance to CBD
is a geographic measure: it tries to explain something about land values by their distance from the
CBD. The spatial error and lagged y models introduce geographical considerations in other ways
(and in addition to the distance to CBD variable). Looking at the results of the Lagrange Multiplier
tests (though we could also use the AIC or Log Likelihood scores), the spatial error model remains the
'preferred' model. And that, again, is another clue: the complexity of the spatial patterning of the
land parcel prices has yet to be fully captured by our model; its causes remain largely unexplained.
> lm.LMtests(model1b, spknear35gaus, test=c("LMerr", "LMlag"))
Lagrange multiplier diagnostics for spatial dependence
data:
model: lm(formula = LNPRICE ~ DCBD + DELE + DRIVER + DPARK + POPDEN + JOBDEN + Y0405 +
Y0607 + Y0809 + I(DCBD^2), data =
combined.map)
weights: spknear35gaus
LMerr = 169.6169, df = 1, p-value < 2.2e-16
data:
model: lm(formula = LNPRICE ~ DCBD + DELE + DRIVER + DPARK + POPDEN + JOBDEN + Y0405 +
Y0607 + Y0809 + I(DCBD^2), data =
combined.map)
weights: spknear35gaus
LMlag = 18.0596, df = 1, p-value = 2.141e-05
Warning message:
In lm.LMtests(model1b, spknear35gaus, test = c("LMerr", "LMlag")) :
Spatial weights matrix not row standardized
The stages of fitting a Geographically Weighted Regression model are: first, load the GWR library;
calculate a distance matrix containing the distances between the points; calibrate the bandwidth for
the local model fitting; fit the model; then look for evidence of spatial variations in the estimates.
First, the library. There are two we could use. The first is library(spgwr). However, we will use the
more recently developed library(GWmodel), which contains a suite of tools for geographically
weighted types of analysis.
> library(GWmodel)
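The distance matrix mentioned in the steps above, and passed later as dMat=distances, can be computed with GWmodel's gw.dist() function; a sketch, assuming the point coordinates are taken from combined.map:

```r
# Pairwise distances between every pair of land parcel points;
# pre-computing them once speeds up both the bandwidth search
# and the model fitting that follow
distances <- gw.dist(dp.locat = coordinates(combined.map))
```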
Now the bandwidth, here using a nearest neighbours metric and a Gaussian decay for the inverse
distance weighting (for other options, see ?bw.gwr). The bandwidth is found using a cross-validation
optimisation procedure.
> bw <- bw.gwr(LNPRICE ~ DCBD + DELE + DRIVER + DPARK + POPDEN + JOBDEN + Y0405 +
Y0607 + Y0809, data=combined.map, adaptive=T, dMat=distances)
> bw
[1] 30
Here the bandwidth decreases to zero at the 30th neighbour, encouragingly similar to the 35th
neighbour value we have used for our spatial weights throughout this session. Next the model is
fitted:
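The fitting itself is done with gwr.basic(); a sketch using the bandwidth and distance matrix obtained above (the adaptive and kernel settings are assumed to match those given to bw.gwr):

```r
# Fit the geographically weighted regression using the
# cross-validated adaptive bandwidth of 30 nearest neighbours
gwr.model <- gwr.basic(LNPRICE ~ DCBD + DELE + DRIVER + DPARK + POPDEN +
                         JOBDEN + Y0405 + Y0607 + Y0809,
                       data = combined.map, bw = bw,
                       adaptive = TRUE, dMat = distances)
```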
Looking at the results we can see that the model has a better fit to the data than any other fitted thus
far. The summary of the GWR estimates shows how the (local) beta estimates vary across the study
region. For example, from the inter-quartile range, the effect of distance to CBD on land prices is
typically found to be from -0.682 to -0.030.
> gwr.model
[..]
***********************************************************************
*
Results of Geographically Weighted Regression
*
***********************************************************************
*********************Model calibration information*********************
Kernel function: gaussian
Adaptive bandwidth: 30 (number of nearest neighbours)
[..]
The estimates themselves for each of the land parcel points are found within the gwr model's spatial
data frame:
> names(gwr.model$SDF)
The parts of this data frame with the original variable names are the local beta estimates; those with
the suffix _SE are the corresponding standard errors; together these give the t-values, marked _TV.
6.4.2 Mapping the estimates
If we map the local beta estimates for the distance to CBD variable, the spatially variable effect
becomes clear,
> x <- gwr.model$SDF$DCBD
> par(mfrow=c(1,2))
# This will allow for two maps to be drawn side-by-side
> quickmap(x)
However, we might wish to ignore those that are locally insignificant at a 95% confidence level.
> quickmap(x, subset=abs(gwr.model$SDF$DCBD_TV) > 1.96)
Doing so, what we appear to find is that there are clusters of land parcels where distance from the
CBD has a greater effect on their value than is true for surrounding locations.
Figure 6.2. The local beta estimates for the distance to CBD variable estimated
using Geographically Weighted Regression. In the right-side plot the 'insignificant'
estimates are omitted.
6.4.3 Testing for significance of the GWR parameter variability
(Note: at the time of writing there is an error in the function montecarlo.gwr(...) included in
the GWmodel package version 1.2-1 that will be corrected in future updates. Load the file
montecarlo.R using the source(file.choose()) function to import a corrected version.)
The function montecarlo.gwr(...) uses a randomisation approach to undertake significance testing
for the variability of the estimated local beta estimates (the regression parameters). The default
number of simulations is 99 (which, with the actual estimates, gives 100 sets of values in total).
That is quite a low number but can be used for illustrative purposes. In practice it would be better to
raise it to 999, 9999 or even more.
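A sketch of the call, assuming the corrected function keeps the same arguments as the packaged version (which broadly mirror those of bw.gwr):

```r
# Randomisation test: do the local beta estimates vary more across
# space than chance alone would produce? nsims = 99 is the default
montecarlo.gwr(LNPRICE ~ DCBD + DELE + DRIVER + DPARK + POPDEN +
                 JOBDEN + Y0405 + Y0607 + Y0809,
               data = combined.map, nsims = 99,
               adaptive = TRUE, bw = bw, dMat = distances)
```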
            p-value
(Intercept)    0.00
DCBD           0.00
DELE           0.00
DRIVER         0.00
DPARK          0.00
POPDEN         0.00
JOBDEN         0.01
Y0405          0.03
Y0607          0.00
Y0809          0.00
The simplest multilevel model is one that does nothing more than estimate the mean of the land
price values, uses that as the sole predictor of the land price values, and then partitions the errors in
the way described above. In other words, it is a regression model for which there is only an
intercept term (no slope) and the residual variances are estimated at the lower and higher
geographical levels.
There are a number of packages to fit multilevel models in R. We shall use...
> library(lme4)
To fit the model with an intercept-only using standard OLS estimation and no partitioning of the
residual variance we would type,
> nullLMmodel <- lm(LNPRICE ~ 1, data=combined.map)
> logLik(nullLMmodel)
'log Lik.' -1617.09 (df=2)
To fit the corresponding model using a multilevel approach where the residual variance is assumed
to be random at both the land parcel and district scales the notation is similar but includes the
parentheses identifying the scales. 1 indicates the lower level whilst the variable SP_ID arises from
the overlay of the point and polygonal data undertaken in Section 4.5, p. 47, 'Spatially joining data'
and gives a unique ID for each district.
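In lme4's notation that model can be written as follows; a sketch (the @data slot is used because lmer expects a conventional data frame):

```r
# Intercept-only model with the residual variance partitioned between
# the land parcels (level 1) and the districts, identified by SP_ID
library(lme4)
nullMLmodel <- lmer(LNPRICE ~ 1 + (1 | SP_ID), data = combined.map@data)
summary(nullMLmodel)
```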
The key thing to note here is the proportion of the residual variance that is at the district level,
> 0.3537 / (0.3537 + 0.7222)
[1] 0.328748
- almost one third. This is a sizeable amount and is suggestive of the spatial patterning of the land
parcel values.

An Introduction to Mapping and Spatial Modelling in R. Richard Harris, 2013

The log likelihood of this model is greater than for the OLS model,
> logLik(nullMLmodel)
'log Lik.' -1491.938 (df=3)
The likelihood ratio test statistic is two times the difference in the log likelihood values for two
models, here
> 2 * as.numeric((logLik(nullMLmodel) - logLik(nullLMmodel)))
[1] 250.3039
10
Assessing against a chi-squared distribution with 1 degree of freedom (the difference in the degrees
of freedom for the two models, arising from the estimation of the additional, higher-level error
variance) we find 'the probability the result (i.e. the improved likelihood value) has arisen by
chance' is essentially zero:
> 1 - pchisq(250, 1)
[1] 0
Having established that there is an appreciable amount of variation in the land parcel prices at the
district scale, our next stage will be to refit our predictive regression model but again within a
multilevel framework. Recall, for example, (OLS) model1,
> model1$call
lm(formula = LNPRICE ~ DCBD + DELE + DRIVER + DPARK + POPDEN +
JOBDEN + Y0405 + Y0607 + Y0809, data = combined.map)
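The multilevel counterpart keeps the same fixed effects and adds the district random intercept; a sketch:

```r
# model1 refitted within a multilevel framework: the same predictors
# plus a random intercept for each district (SP_ID)
MLmodel <- lmer(LNPRICE ~ DCBD + DELE + DRIVER + DPARK + POPDEN +
                  JOBDEN + Y0405 + Y0607 + Y0809 + (1 | SP_ID),
                data = combined.map@data)
summary(MLmodel)
```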
Even with the predictor variables now included there remains an appreciable amount of variation
between districts,
Consequently there is strong support in favour of the multilevel model over the OLS one,
> 2 * as.numeric((logLik(MLmodel) - logLik(model1)))
[1] 25.90044
> 1 - pchisq(25.9, 1)
[1] 3.595691e-07
There is much more we could undertake in regard to the multilevel model, including allowing the
effect of each predictor variable to vary from one district to another (a random intercepts and slopes
model). Here, however, we shall confine ourselves to mapping the district-level residuals to
identify those districts where the land parcel values are higher or lower than average.
The process of doing so begins by obtaining the residuals at the higher level of the hierarchy, using
the function ranef(...) (short for random effects)
> district.resids <- ranef(MLmodel)
The output from this function is a list, in this case of length 1 (this being the number of levels
above the land parcel level for which that variance has been estimated i.e. the district level).
> typeof(district.resids)
[1] "list"
> length(district.resids)
[1] 1
Inspecting its contents we find that it is a data frame containing the IDs of the districts and
information telling us whether the district-level effect is one of raising or decreasing the land parcel
prices:
[..]
3rd Qu.    Max.
0.21810 0.61750
Note that not every one of the census districts in Beijing will be included in this output. That is
because not every district contains a land parcel that was sold in the period of the data. We therefore
have to match those districts for which we do have data back to the original map of all districts,
which we can then use to map the residuals. First the matching,
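A sketch of the matching, in which districts.map stands in for the polygon map of all the districts (its name, and the use of spplot() for the final map, are assumptions):

```r
# Extract the district-level residuals and their district IDs, then
# align them with the full district map; districts with no land
# parcel sales receive NA
u   <- district.resids[[1]][, 1]
ids <- rownames(district.resids[[1]])
districts.map$resids <- u[match(districts.map$SP_ID, ids)]
spplot(districts.map, "resids")   # map the district-level residuals
```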
Looking at the map, there appear to be clusters of districts with higher than expected land parcel
prices, some contiguous or close to districts with lower than expected prices. Not all these residual
values are necessarily significantly different from zero. Nevertheless, what we seem to have
evidence for again is the complexity of the geographical patterning.