## Tuesday, December 3, 2013

### La Belle France - Map styling with D3js

When I looked at vector web mapping tools for the first time, I was completely carried away by this example made with Kartograph by the software's creator, Gregor Aisch.
This example was, actually, the reason I learned Kartograph and published some posts here.
Later I learned D3js, which was also amazing, but I didn't find a map like La Bella Italia, so I tried to make a similar one myself.

To make the map, I had to reproduce the result of the Kartograph example to understand how the effects were achieved, then get some base data, and finally code the example.

### Getting the data

The map has, basically, data about the land masses, the countries and the French regions.
I tried data from different places (one of them, Eurostat, which ended up in this example), until I decided to use the Natural Earth data. After some attempts, I settled on the 1:10m data. The files are:

1. ne_10m_admin_0_countries.shp, clipped using:
ogr2ogr -clipsrc -10 35 15 60 countries.shp ne_10m_admin_0_countries.shp
2. ne_10m_admin_1_states_provinces.shp, filtered to France and clipped using:
ogr2ogr -clipsrc -10 35 15 60 -where "adm0_a3 = 'FRA'" regions.shp ne_10m_admin_1_states_provinces.shp
3. ne_10m_land.shp, downloaded from GitHub, since the official version gave errors when converted to TopoJSON. Clipped using:
ogr2ogr -clipsrc -10 35 15 60 land.shp ne_10m_land.shp

With that, the land, countries and regions are available. To merge them into a single TopoJSON file, I used:
topojson -o ../data.json countries.shp regions.shp land.shp

### The html code

Since the code is quite long, and I think I will write some more posts about specific parts of the technique, the comments are a bit shorter than usual.

#### CSS

Note how the AquilineTwo font is loaded:
@font-face {
font-family: 'AquilineTwoRegular';
src: url('AquilineTwo-webfont.eot');
src: url('AquilineTwo-webfont.eot?#iefix') format('embedded-opentype'),
url('AquilineTwo-webfont.woff') format('woff'),
url('AquilineTwo-webfont.ttf') format('truetype'),
url('AquilineTwo-webfont.svg#AquilineTwoRegular') format('svg');
font-weight: normal;
font-style: normal;

}


Later, the font can be set using .attr("font-family","AquilineTwoRegular")

To achieve the effects, some layers are loaded more than once, so different filters can be applied to get the shades and blurred borders:
svg.selectAll(".bgback")
.data(topojson.feature(data, data.objects.land).features)
.enter()
.append("path")
.attr("class", "bgback")
.attr("d", path)
.style("filter","url(#oglow)")
.style("stroke", "#999")
.style("stroke-width", 0.2);
In this case, the land masses are drawn, applying the effect named oglow, which looks like:
var oglow = defs.append("filter")
.attr("id","oglow");
oglow.append("feColorMatrix")
.attr("in","SourceGraphic")
.attr("type", "matrix")
.attr("values", "0 0 0 0 0   0 0 0 0 0   0 0 0 0 0   0 0 0 1 0")
oglow.append("feMorphology")
.attr("operator","dilate")
oglow.append("feColorMatrix")
.attr("type", "matrix")
.attr("values", "0 0 0 0 0.6 0 0 0 0 0.5333333333333333 0 0 0 0 0.5333333333333333  0 0 0 1 0")
.attr("result","r0");
oglow.append("feGaussianBlur")
.attr("in","r0")
.attr("stdDeviation","4")
.attr("result","r1");
oglow.append("feComposite")
.attr("operator","out")
.attr("in","r1")
.attr("result","comp");
Many pages explain how SVG filters work. I got these filters by looking at the HTML generated by the Kartograph example.

The labels aren't inside the TopoJSON (although they could be!), so I chose the labels to add and put them into an array:
var cities = [
{'pos': [2.351, 48.857], 'name': 'Paris'},
{'pos':[5.381, 43.293], 'name': 'Marseille'},
{'pos':[3.878, 43.609], 'name': 'Montpellier'},
{'pos':[4.856, 45.756], 'name': 'Lyon'},
{'pos':[1.436, 43.602], 'name': 'Toulouse'},
{'pos':[-0.566, 44.841], 'name': 'Bordeaux'},
{'pos':[-1.553, 47.212], 'name': 'Nantes'},
{'pos':[8.737, 41.925], 'name': 'Ajaccio'},
];
Then, adding them to the map is easy:
var city_labels = svg.selectAll(".city_label")
.data(cities)
.enter();

city_labels
.append("text")
.attr("class", "city_label")
.text(function(d){return d.name;})
.attr("font-family", "AquilineTwoRegular")
.attr("font-size", "18px")
.attr("fill", "#544")
.attr("x",function(d){return projection(d.pos)[0];})
.attr("y",function(d){return projection(d.pos)[1];});

city_labels
.append("circle")
.attr("r", 3)
.attr("fill", "black")
.attr("cx",function(d){return projection(d.pos)[0];})
.attr("cy",function(d){return projection(d.pos)[1];});
Note that the positions must be calculated by transforming the longitude and latitude with the d3js projection function.
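A projection is, in essence, a lon/lat to pixel mapping. A toy equirectangular version in Python (a hypothetical helper, using the same bounding box as the ogr2ogr clip above) shows the idea, although d3js projections like the one used here are more involved:

```python
def plate_carree(lon, lat, width=600, height=600, bbox=(-10.0, 35.0, 15.0, 60.0)):
    """Toy equirectangular projection: lon/lat degrees -> pixel coordinates."""
    min_lon, min_lat, max_lon, max_lat = bbox
    x = (lon - min_lon) / (max_lon - min_lon) * width
    # y grows downwards in SVG, so the top of the image is the maximum latitude
    y = (max_lat - lat) / (max_lat - min_lat) * height
    return x, y
```

The d3js projection call plays exactly this role for both the label text and the marker circle.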

#### The ship

To draw the ship, two things are necessary: the path and the ship itself.
To draw the path:
var ferry_path = [[8.745, 41.908],
[8.308, 41.453],
[5.559, 43.043],
[5.268, 43.187],
[5.306, 43.289]
];
var shipPathLine = d3.svg.line()
.interpolate("cardinal")
.x(function(d) { return projection(d)[0]; })
.y(function(d) { return projection(d)[1]; });

var shipPath = svg.append("path")
.attr("d",shipPathLine(ferry_path))
.attr("stroke","#000")
.attr("class","ferry_path");
Basically, d3.svg.line is used to interpolate the points, making the line smoother. This is easier than the Kartograph way with geopaths, where the Bézier control points have to be calculated. d3.svg.line is more powerful than I thought.
I don't know if my way of calculating the projected points is the best one, since I project each point twice, which is ugly.
To move the ship, a ship image is appended, and then moved with a setInterval:
  var shipPathEl = shipPath.node();
var shipPathElLen = shipPathEl.getTotalLength();

var pt = shipPathEl.getPointAtLength(0);
var shipIcon = svg.append("image")
.attr("x", pt.x - 10)
.attr("y", pt.y - 5.5)
.attr("width", 15)
.attr("height", 8);

var i = 0;
var delta = 0.05;
var dist_ease = 0.2;
var delta_ease = 0.9;
setInterval(function(){

pt = shipPathEl.getPointAtLength(i*shipPathElLen);
shipIcon
.transition()
.ease("linear")
.duration(1000)
.attr("x", pt.x - 10)
.attr("y", pt.y - 5.5);

//i = i + delta;

if (i < dist_ease){
i = i + delta * ((1-delta_ease) + i*delta_ease/dist_ease);
}else if (i > 1 - dist_ease){
i = i + delta * (1 - ((i - (1 - dist_ease)) * (delta_ease/dist_ease)));
}else{
i = i + delta;
}
if (i+0.0001 >= 1 || i-0.0001 <= 0)
delta = -1 * delta;
},1000);
The ship position is calculated every second and moved with a d3js transition to make it smooth (recalculating more often didn't give this smooth effect).
The speed of the ship changes depending on the proximity to the harbour, to avoid the strange effect of the ship crashing into it. This is controlled by the dist_ease and delta_ease parameters, which set the distance over which the speed changes and the amount of the change.
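The piecewise speed factor above can be isolated into a small function. Here is a sketch of the same arithmetic in Python (the function name is mine), using the same parameter values as the JavaScript:

```python
def eased_step(i, delta=0.05, dist_ease=0.2, delta_ease=0.9):
    """Advance the normalised path position i (0..1), slowing near both ends."""
    if i < dist_ease:
        # speeding up while leaving the first harbour
        factor = (1 - delta_ease) + i * delta_ease / dist_ease
    elif i > 1 - dist_ease:
        # slowing down while approaching the second harbour
        factor = 1 - (i - (1 - dist_ease)) * (delta_ease / dist_ease)
    else:
        # cruising at full speed in the middle of the path
        factor = 1.0
    return i + delta * factor
```

At the harbours the factor drops to 1 - delta_ease (one tenth of the cruising speed), which is what removes the "crash" effect.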

### What's next

• The SVG filters should be explained in a better way, maybe packing them into functions as Kartograph does.
• SVG rendering takes quite a long time on my computer. The same happens with Kartograph, so the problem comes from the SVG rendering itself. Anyway, it could be improved.
• A canvas version would be nice.

La Bella Italia -- The example I have used as a model
Natural Earth

## Sunday, November 24, 2013

### Fast tip: Drawing raster NO DATA values with MapServer

MapServer is a great way to draw web maps, using it either as a WMS server or creating image files directly.
If the raster has a NODATA value, MapServer reads it from the GDAL library and doesn't draw the pixels with that value. But what happens if you want to draw the pixels with NODATA values?

The solution, which I found here, is setting the LAYER property PROCESSING this way:

PROCESSING "NODATA=OFF"

which disables the regular MapServer behaviour, which is, quite logically, not drawing the pixels with the NODATA value.
Once this is done, just create a CLASS with the NODATA value in its EXPRESSION, so the pixels are coloured as you prefer.
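Putting the two pieces together, a minimal LAYER sketch could look like this (the file name, NODATA value and colour are hypothetical placeholders):

```text
LAYER
  NAME "radar"
  TYPE RASTER
  DATA "radar.tif"            # hypothetical raster file
  PROCESSING "NODATA=OFF"     # don't skip the NODATA pixels
  CLASS
    # assuming -9999 is the raster's NODATA value
    EXPRESSION ([pixel] == -9999)
    STYLE
      COLOR 128 128 128       # paint the uncovered areas gray
    END
  END
END
```

Any other CLASS blocks for the real data values go alongside, as usual.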

### Why did I need this stuff

Maybe this is not a common need, but when drawing weather radar images, it's a good idea to indicate the areas not covered by the radar in some colour, so it's easy to distinguish zones where there is no rain from zones where no data is available. Like in these examples:

The Catalan Weather Service radar in La Panadella with the range visible in gray, using the no data values
The Spanish Agency radar network, with the not covered areas in gray.

Strangely, finding the solution was quite difficult, so I put it in this post. Maybe somebody, someday, will find it faster than I did.

## Saturday, October 26, 2013

### Using EUROSTAT's data with D3.js

EUROSTAT is the European Union office for statistics. Its goal is to provide statistics at European level that enable comparisons between regions and countries.

The NUTS classification (Nomenclature of territorial units for statistics) is a hierarchical system for dividing up the economic territory of the EU. Much of the EUROSTAT data refers to these regions, and it is a good candidate for mapping.

The problem is that EUROSTAT provides, on the one hand, shapefiles without the actual region names or any other information, and, on the other, tables with the name and population of every region. The two have to be joined before the data can be mapped properly.

In this example, the EUROSTAT criminality statistics are mapped using these provided files.

You can see the working example at bl.ocks.org, and get all the source code at GitHub.
There is also a second example to see how to change the data easily.

### Organizing the data

At their site, EUROSTAT provides different shapefiles for different scales. I think the best option for maps of the whole of Europe is NUTS_2010_10M_SH.zip. Using the 1:60 million scale could be an option, but that file doesn't contain the NUTS3 polygons, so only data at country or big-region level is possible.

After downloading the polygons, you will see that no names or population figures are inside the fields, only the NUTS code. So to make a map, we have to label every polygon. To do it, I downloaded the NUTS2010.zip file, which contains a csv file with the names of every NUTS region. On the web it's also possible to find the population in a file, looking for the text Annual average population (1 000) by sex - NUTS 3 regions. Then, I made the following script:

'''
Joins a shp file from EUROSTAT with the csv with the region names so
the resulting shp file has the name of every region and the code.
Roger Veciana, oct 2013
'''
import sys
from osgeo import ogr
from os.path import exists
from os.path import basename
from os.path import splitext
from os import remove

def create_shp(in_shp, out_shp, csv_data):
    print "Extracting data"

    in_ds = ogr.Open(in_shp)
    if in_ds is None:
        print "Open failed.\n"
        sys.exit(1)
    in_lyr = in_ds.GetLayerByName(splitext(basename(in_shp))[0])
    if in_lyr is None:
        print "Error opening layer"
        sys.exit(1)

    if exists(out_shp):
        remove(out_shp)
    driver_name = "ESRI Shapefile"
    drv = ogr.GetDriverByName(driver_name)
    if drv is None:
        print "%s driver not available.\n" % driver_name
        sys.exit(1)
    out_ds = drv.CreateDataSource(out_shp)
    if out_ds is None:
        print "Creation of output file failed.\n"
        sys.exit(1)
    proj = in_lyr.GetSpatialRef()
    ##Creating the layer with its fields
    out_lyr = out_ds.CreateLayer(
        splitext(basename(out_shp))[0], proj, ogr.wkbMultiPolygon)
    lyr_def = in_lyr.GetLayerDefn()
    for i in range(lyr_def.GetFieldCount()):
        out_lyr.CreateField(lyr_def.GetFieldDefn(i))

    field_defn = ogr.FieldDefn("NAME", ogr.OFTString)
    out_lyr.CreateField(field_defn)

    field_defn = ogr.FieldDefn("COUNTRY", ogr.OFTString)
    out_lyr.CreateField(field_defn)

    field_defn = ogr.FieldDefn("COUNTRY_CO", ogr.OFTString)
    out_lyr.CreateField(field_defn)

    field_defn = ogr.FieldDefn("POPULATION", ogr.OFTInteger)
    out_lyr.CreateField(field_defn)

    ##Writing all the features
    for feat_in in in_lyr:
        value = feat_in.GetFieldAsString(feat_in.GetFieldIndex('NUTS_ID'))
        if value in csv_data:
            feat_out = ogr.Feature(out_lyr.GetLayerDefn())
            feat_out.SetField('NUTS_ID', value)
            feat_out.SetField('NAME', csv_data[value]['label'])
            feat_out.SetField('POPULATION', csv_data[value]['population'])
            feat_out.SetField('COUNTRY_CO', csv_data[value]['id_country'])
            feat_out.SetField('COUNTRY', csv_data[csv_data[value]['id_country']]['label'])
            feat_out.SetField('STAT_LEVL_', csv_data[value]['level'])
            feat_out.SetField('SHAPE_AREA', feat_in.GetFieldAsString(feat_in.GetFieldIndex('SHAPE_AREA')))
            feat_out.SetField('SHAPE_LEN', feat_in.GetFieldAsString(feat_in.GetFieldIndex('SHAPE_LEN')))

            feat_out.SetGeometry(feat_in.GetGeometryRef())
            out_lyr.CreateFeature(feat_out)
    in_ds = None
    out_ds = None

def read_population(csv_file):
    '''
    Reads the NUTS csv population file and returns the data in a dict
    '''
    csv_data = {}
    f = open(csv_file, "r")
    for line in f:
        line_data = line.split(";")
        try:
            csv_data[line_data[0]] = int(float(line_data[4]) * 1000)
        except Exception:
            csv_data[line_data[0]] = -9999
    f.close()

    return csv_data

def read_names(csv_file, population_data):
    '''
    Reads a NUTS csv file and returns the data in a dict
    '''
    csv_data = {}
    f = open(csv_file, "r")
    for line in f:
        line_data = line.split(";")
        csv_data[line_data[0]] = {'label': line_data[1],
                                  'level': line_data[2],
                                  'id_country': line_data[3],
                                  'code_country': line_data[4],
                                  'population': 0}
        if line_data[0] in population_data:
            csv_data[line_data[0]]['population'] = population_data[line_data[0]]

    f.close()

    return csv_data

if __name__ == '__main__':
    # The csv file names below are placeholders; use the files downloaded from EUROSTAT
    population_data = read_population('population.csv')
    csv_data = read_names('NUTS2010.csv', population_data)
    create_shp('NUTS_2010_10M_SH/Data/NUTS_RG_10M_2010.shp', 'regions.shp', csv_data)

• Basically, we iterate over every polygon in the shp file and look into the csv data for the attributes to add.
• Population data and region names are in two separate files. To read them, there are two functions: read_population and read_names.
• The create_shp function reads the geometries, joins them with the population and name, and finally writes the output file.
• Notice that NUTS regions overlap. There are levels 1, 2 and 3. The first one is at country level, the second takes big areas of the countries, and the third corresponds to the actual departments in every country. To map any data, it is necessary to choose the level at which the map has to be represented, or to mix the different levels if necessary, as in this case.
To create a map, I chose the criminality data. The file is again at EUROSTAT, under the text Crimes recorded by the police - NUTS 3 regions. Despite its name, some regions are NUTS3 and some NUTS2. Most of the files from the site labeled with NUTS regions should work with only small changes in the map code.

### Creating a D3.js map to show the data

To draw the data on a map, I've used a similar solution to the one in the post D3js Electoral Map:


.background {
fill: #fff;
stroke: #ccc;
}

.tooltip{ background-color:rgba(200,200,200,0.5);
margin: 10px;
height: 90px;
width: 150px;
}


var width = 600,
height = 600;

var projection = d3.geo.stereographic()
.center([3.9,43.0])
.scale(1000)
.translate([width / 4 , height / 2]);

var path = d3.geo.path()
.projection(projection);

var svg = d3.select("#map").append("svg")
.attr("width", 960)
.attr("height", 500);

var domain = [0,5];
var color = d3.scale.linear().domain(domain).range(['blue', 'red']);

d3.csv("crim_gen_reg_1_Data.csv", function(stats) {
data = {};
stats.forEach(function(d) {
if (d.TIME == 2010){
data[d.GEO] = d.Value;
}
});

d3.json("/rveciana/raw/5919944/regions.json", function(error, europe) {
svg.selectAll(".region")
.data(topojson.feature(europe, europe.objects.regions).features)
.enter()
.append("path")
.filter(function(d) { return !isNaN(parseFloat(data[d.properties.NUTS_ID])); })
.attr("class", "region")
.attr("d", path)
.style("stroke", "#999")
.style("stroke-width", 0.2)
.style("fill", function(d) {
if (!isNaN(parseFloat(data[d.properties.NUTS_ID])))
return color(100000*data[d.properties.NUTS_ID]/d.properties.POPULATION);
else
return "#999";
})
.style("opacity", function(d){
if (!isNaN(parseFloat(data[d.properties.NUTS_ID])))
return 1;
else
return 0;
})

function tooltipText(d){
if (isNaN(parseFloat(data[d.properties.NUTS_ID]))) {
var crimes = "No Data";
} else {
var crimes = data[d.properties.NUTS_ID];
}
return "<b>" + d.properties.NAME + "</b>"
+ "<br/> pop: " + d.properties.POPULATION
+ "<br/> crimes: " + crimes;
}
svg.call(legend({width:300, posx: 100, posy:400, elements: 6, domain: domain, title:"Number of crimes / 100 000 inhabitants"}));
});
});

Source: eurostat


• Lines 5 to 18 give style to the map background and the tooltip
• Lines 29 to 43 set up the map, with its size, projection and svg element
• Line 45 sets the color scale (this is different from the electoral map example). The colors will vary from blue to red; discrete colors are another option. Just by changing this line, you can change the whole map's colors.
• Lines 47 to 53 read and parse the csv with the criminality data into an object (we will only use 2010 data)
• Then, in lines 56 to 78, the map is actually drawn. Note:
• Line 60 filters only the regions that have data in the CSV file. For some countries, this is the NUTS2 region (see Italy) and for others, NUTS3 (like Germany). The following lines are only run if there is data (see D3js filter)
• Line 66 sets the color of the region. Note that it calls the scale defined at line 45, and calculates the value by multiplying the crimes by 100000 and dividing by the region population to get the rate
• Line 71 sets the region opacity to 0 if there is no data for it. With the filter, this could be avoided
• Lines 78 and 81 set the tooltip box, taken from this example.
• Line 98 calls the function that draws the legend. The function is in a separate file to make it easy to reuse

var legend = function(accessor){
return function(selection){

//Draw the legend for the map
var legend = selection.append("g");
var legendText = accessor.title;
var numSquares = accessor.elements;
var xLegend = 0;
var yLegend = 0;
var widthLegend = 400;

var title_g = legend.append("g");
var values_g = legend.append("g");

var legendTitle = title_g.append("text")
.text("Legend")
.attr("font-family", "sans-serif")
.attr("font-size", "18px")
.attr("fill", "black");
var bbox = legendTitle.node().getBBox();
legendTitle.attr("x", xLegend + widthLegend/2 - bbox.width/2);
legendTitle.attr("y", yLegend + 20);

var legendTextEl = title_g.append("text")
.text(legendText)
.attr("y", yLegend + 75)
.attr("font-family", "sans-serif")
.attr("font-size", "14px")
.attr("fill", "black");
var bbox = legendTextEl.node().getBBox();
legendTextEl.attr("x", xLegend + widthLegend/2 - bbox.width/2);

for (i=0; i < numSquares; i++){
values_g.append("rect")
.attr("x", xLegend + (i+0.1*i/numSquares)*(widthLegend/numSquares))
.attr("y", yLegend + 25)
.attr("width", (widthLegend/numSquares)*0.9)
.attr("height", 20)
.attr("fill", color(accessor.domain[0] + i * accessor.domain[1]/(numSquares-1)));
var value_text = values_g.append("text")
.text(accessor.domain[0] + i * accessor.domain[1]/(numSquares-1))
.attr("y", yLegend + 55)
.attr("font-family", "sans-serif")
.attr("font-size", "12px")
.attr("fill", "black");
var bbox = value_text.node().getBBox();
value_text
.attr("x", xLegend + (i+0.1*i/numSquares)*(widthLegend/numSquares) + (widthLegend/numSquares)*(0.9/2)- bbox.width/2);

}
var scale = accessor.width / 400;
legend.attr("transform","translate("+accessor.posx+","+accessor.posy+") scale("+scale+") ");

};
};


• Note how the function is called. The argument accessor is the one that has the object properties (if called from a geometry, for instance), and selection is the node the function is called on.
• The code is not very idiomatic D3js, using things like a for loop, but it works and can be reused to draw other legends from a D3js scale.

## Saturday, October 5, 2013

### Castor project's earthquakes map with D3js

The Castor project is a submarine natural gas storage facility located off the eastern coast of the Iberian Peninsula. The idea is to store Algerian gas inside an old oilfield cavity. At least, this is what I understood (sorry, geologists).
When the facility started working, a series of earthquakes started to occur. At the beginning, the platform owners said it wasn't related to their activity, but now everybody agrees that it is. The activity has stopped, but not the earthquakes.

I didn't find a nice map about the earthquakes epicenters, so I thought that D3js would be a good option to do it.

The star represents the approximate platform location, and the circles the epicenters. It's easy to see why they are related to the platform activity.

The animated map is at my bl.ocks.org page. The explanations are in Catalan, but basically say the same as here.

### Getting the data

The data about the significant earthquakes around Catalonia can be found at the Catalan Geologic Institute web site, but the format of the reports is not very convenient for extracting the data, so I made this python script to get it:

# -*- coding: utf-8 -*-
#http://jramosgarcia.wordpress.com/2013/10/01/que-es-el-proyecto-castor/
import urllib
import json
import dateutil.parser
import datetime

def get_data(url):
    filehandle = urllib.urlopen(url)
    html = filehandle.read()
    filehandle.close()

    lines = html.splitlines()

    for i in range(len(lines)):

        if lines[i].find('Latitud') > 0:
            lat = lines[i].strip().split(" ")[1].replace("º","")
        if lines[i].find('Longitud') > 0:
            lon = lines[i].strip().split(" ")[1].replace("º","")
        if lines[i].find('mol del dia') > 0:
            date = lines[i + 1].strip().replace(" </div>","")
        if lines[i].find('Hora origen') > 0:
            hour = lines[i].strip().split(" ")[4]
        if lines[i].find('Magnitud') > 0:
            magnitude = lines[i+1].strip().split(" ")[0]

    date_array = date.split("/")
    hour_array = hour.split(":")

    date_time = datetime.datetime(int(date_array[2]), int(date_array[1]),
        int(date_array[0]), int(hour_array[0]), int(hour_array[1]))

    data = {'lat': lat, 'lon': lon, 'datetime': date_time.isoformat(),
        'magnitude': magnitude}
    return json.dumps(data)

if __name__ == "__main__":
    url_list = [
        'http://www.igc.cat/web/gcontent/ca/sismologia/sismescomact/comhistcat/20130910175510/comact.html',
        'http://www.igc.cat/web/gcontent/ca/sismologia/sismescomact/comhistcat/20130913095842/comact.html',
        'http://www.igc.cat/web/gcontent/ca/sismologia/sismescomact/comhistcat/20130918142943/comact.html',
        'http://www.igc.cat/web/gcontent/ca/sismologia/sismescomact/comhistcat/20130920104607/comact.html',
        'http://www.igc.cat/web/gcontent/ca/sismologia/sismescomact/comhistcat/20130924091301/comact.html',
        'http://www.igc.cat/web/gcontent/ca/sismologia/sismescomact/comhistcat/20130925125029/comact.html',
        'http://www.igc.cat/web/gcontent/ca/sismologia/sismescomact/comhistcat/20130929084140/comact.html',
        'http://www.igc.cat/web/gcontent/ca/sismologia/sismescomact/comhistcat/20130929192416/comact.html',
        'http://www.igc.cat/web/gcontent/ca/sismologia/sismescomact/comhistcat/20130930005900/comact.html',
        'http://www.igc.cat/web/gcontent/ca/sismologia/sismescomact/comhistcat/20130930051316/comact.html',
        'http://www.igc.cat/web/gcontent/ca/sismologia/sismescomact/comhistcat/20131001045206/comact.html',
        'http://www.igc.cat/web/gcontent/ca/sismologia/sismescomact/comhistcat/20131001055709/comact.html',
        'http://www.igc.cat/web/gcontent/ca/sismologia/sismescomact/comhistcat/20131002121626/comact.html',
        'http://www.igc.cat/web/gcontent/ca/sismologia/sismescomact/comhistcat/20131002232928/comact.html',
        'http://www.igc.cat/web/gcontent/ca/sismologia/sismescomact/comhistcat/20131003012732/comact.html',
        'http://www.igc.cat/web/gcontent/ca/sismologia/sismescomact/comhistcat/20131003031301/comact.html',
        'http://www.igc.cat/web/gcontent/ca/sismologia/sismescomact/comhistcat/20131004113222/comact.html',
        'http://www.igc.cat/web/gcontent/ca/sismologia/sismescomact/comhistcat/20131004120323/comact.html'
    ]

    f = open("data.json","w")
    f.write("[")
    json_data = ""
    for url in url_list:
        json_data = json_data + get_data(url) + ", "

    f.write(json_data[0:-2])
    f.write("]")
    f.close()


• The web pages are listed one by one, since the interesting reports have to be chosen manually. It would be nice to have a better way to do it.
• Then, the get_data function just parses the text in a way that handles all the reports properly. The data is stored in a JSON file to make its use from D3js easier.

### Using D3js to visualize the data

I used this example by Mike Bostock to create the background map. The tile source has been changed because the example tiles are not available at this zoom level, and to have more points of interest to situate the earthquake locations.

This is the code:


.stroke {
fill: none;
stroke: #000;
stroke-width: .5;
}

<script>
/**
Based on Mike Bostock's http://bl.ocks.org/mbostock/4150951
*/
var width = 960,
height = 500;

var projection = d3.geo.mercator()
.center([0.5491, 40.4942])
.scale(20000);

var path = d3.geo.path()
.projection(projection);

var tile = d3.geo.tile()
.scale(projection.scale() * 2 * Math.PI)
.translate(projection([0, 0]))
.zoomDelta((window.devicePixelRatio || 1) - .5);

var svg = d3.select("body").append("svg")
.attr("width", width)
.attr("height", height);

var tiles = tile();

var defs = svg.append("defs");

var magnitudeScale = d3.scale.linear().domain([2,5]).range([5,30]);
d3.json("data.json", function(error, locations) {

svg.append("g")

.selectAll("image")
.data(tiles)
.enter().append("image")
.attr("xlink:href", function(d) { return "http://" + ["a", "b", "c", "d"][Math.random() * 4 | 0] + ".tiles.mapbox.com/v3/examples.map-vyofok3q/" + d[2] + "/" + d[0] + "/" + d[1] + ".png"; })
.attr("width", Math.round(tiles.scale) + 1)
.attr("height", Math.round(tiles.scale) + 1)
.attr("x", function(d) { return Math.round((d[0] + tiles.translate[0]) * tiles.scale); })
.attr("y", function(d) { return Math.round((d[1] + tiles.translate[1]) * tiles.scale); });

svg.append("g")
.append('path')
.attr("d","m 0,0 -8.47858,-5.22254 -8.31623,5.47756 2.34696,-9.67752 -7.77927,-6.21653 9.92909,-0.75852 3.50829,-9.31953 3.78972,9.20873 9.94748,0.45679 -7.58687,6.44982 z")
.attr("stroke","black")
.attr("stroke-width",2)
.style("fill", d3.rgb(90, 90, 90))
.attr("transform", "translate("+projection([0.66879, 40.33503])[0]+","+projection([0.66879, 40.33503])[1]+")");

var locationsG = svg.append("g");

function addLocation(loc){
locationsG.append('circle')
.attr('class','location')
.attr("r", 5)
.attr("cx", projection([loc.lon, loc.lat])[0])
.attr("cy", projection([loc.lon, loc.lat])[1])
.style("fill", d3.rgb(255, 0, 0).darker(2))
.style("opacity", 0.8)
.transition()
.duration(1000)
.attr("r", magnitudeScale(loc.magnitude))
.transition()
.delay(2000)
.duration(2000)
.style("opacity",0.3);

locationsG
.append("text")
.text(loc.magnitude)
.attr("x", projection([loc.lon, loc.lat])[0] - 10)
.attr("y", projection([loc.lon, loc.lat])[1] + 5)
.attr("font-family", "sans-serif")
.attr("font-size", "12px")
.attr("fill", "black")
.style("opacity",0)
.transition()
.duration(1000)
.style("opacity",1)
.transition()
.delay(2000)
.duration(2000)
.style("opacity",0);
}

//addLocation({"lat": "40.43", "magnitude": "2.7", "lon": "0.7", "datetime": "2013-10-09T06:43:16"});

var intDate = new Date("2013-09-10T00:00:00Z");
var maxDate = new Date("2013-10-04T00:00:00Z");
var usedLocations = new Array();

var dateTitle = svg
.append("text")
.attr("id", "dataTitle")
.text(intDate.toLocaleDateString())
.attr("x", 70)
.attr("y", 20)
.attr("font-family", "sans-serif")
.attr("font-size", "20px")
.attr("fill", "black");

var interval = setInterval(function() {

dateTitle.text(intDate.toLocaleDateString());

//plot the earthquakes that have already occurred and aren't drawn yet
for (i = 0; i < locations.length; i++){
var locDate = new Date(locations[i].datetime);
if (locDate < intDate && usedLocations.indexOf(i) < 0){
addLocation(locations[i]);
usedLocations.push(i);
}
}

if (intDate > maxDate){
clearInterval(interval);
}

intDate.setDate(intDate.getDate() + 1);
}, 1000);

});



• Lines 30 and 48 are the most important ones for creating the tile map. In this case, no zoom or pan is set, so the code is quite simple.
• Line 57 creates the star indicating the platform location. I made the star using Inkscape and captured the code using the tool to see the generated svg. To situate the symbol on the map, a transform attribute is used, translating the symbol to the pixel obtained by projecting the location coordinates.
• To add the earthquake epicenters:
• The function addLocation (line 68) adds the circle and the text indicating the earthquake magnitude. To locate them, the projection function is used again. Two transitions are used to make the circle grow and then get brighter, and likewise to make the text disappear.
• An interval is created (line 118) to change the date (one day per second). For every iteration, the date is evaluated and the earthquakes that occurred during that day are plotted using the addLocation function. I didn't find a "more D3js" solution. The earthquakes already plotted are stored in an array to avoid plotting them more than once.
The date on the map is also changed on every iteration.
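The per-day selection that the interval performs can be sketched in Python (quakes_on_day is a hypothetical helper, not part of the original code; the sample records mimic the scraped JSON):

```python
import datetime

def quakes_on_day(quakes, day):
    """Return the quakes whose ISO datetime string falls on the given date."""
    result = []
    for q in quakes:
        q_day = datetime.datetime.strptime(q['datetime'],
                                           '%Y-%m-%dT%H:%M:%S').date()
        if q_day == day:
            result.append(q)
    return result
```

Each animation tick then only has to call this for the currently displayed date and hand the matches to addLocation.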

## Thursday, September 5, 2013

### Reading WRF NetCDF files with GDAL python

Since I work at a meteorological service, I have to deal quite often with numerical weather prediction models. I usually work with GRIB files that have the data in the format I need to draw simple maps.
But sometimes I need to calculate things that require the original data, since GRIB files are usually reprojected and simplified in vertical levels.
In the case of the WRF model, the output is in NetCDF format. The data is in sigma levels instead of the pressure levels I need, and variables such as temperature don't come directly in the file, but need to be calculated.
The post is actually written to organize the things I've been using for myself, but I've added it to the blog because I haven't found step-by-step instructions elsewhere.

### Getting the data

I have used some data from the model run at the SMC, where I work, but you can get a WRF output from the ucar.edu web site. The projection is different from the one I have used, and it has 300 sigma levels instead of 30, but it should work.

### The NetCDF format

Every NetCDF file contains a collection of datasets with different sizes and types. Each dataset can have one or more layers, can have one-dimensional arrays in it, etc. So the "traditional" GDAL data model can't work directly, since it assumes that a file has multiple layers of the same size. That's why "subdatasets" are introduced. Opening the file directly (or running gdalinfo) only gives some metadata about all the datasets, but no actual data can be read or queried. To do that, the subdataset must be opened instead.

### Opening a subdataset

Since opening the file directly doesn't open the actual dataset, but the "container", a different notation is used, keeping the Open method. The filename string must be preceded by NETCDF: and followed by :VAR_NAME.
To open the XLONG subdataset, the call would be:
gdal.Open('NETCDF:"'+datafile+'":XLONG')

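Since that string has to be built for every variable, a tiny helper keeps it tidy (the function name is mine, not part of GDAL):

```python
def subdataset_name(datafile, variable):
    """Build the GDAL dataset name for one NetCDF variable."""
    return 'NETCDF:"%s":%s' % (datafile, variable)
```

So gdal.Open(subdataset_name(datafile, 'XLONG')) opens the longitudes. GDAL can also list the subdatasets available in an opened file with ds.GetSubDatasets().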

### ARWPost

ARWPost is a FORTRAN program that reads WRF outputs. I have used its code to learn how to calculate the actual model variables from the model NetCDF file.
There is a file for calculating each variable, some for the interpolation, etc., so it has helped me a lot in understanding how everything works.
The problem with ARWPost is that its usage is quite cumbersome (changing a configuration file and executing it), so it was not very practical for me.

### Sigma Levels

Instead of using constant pressure levels, the WRF model works using the sigma coordinate system.
This makes extracting the data at a constant pressure a bit more difficult, since for every point in the horizontal grid, a different level position is needed to get the value.

Besides, not all the variables use the same sigma levels. There are two different level sets known as "half levels" and "full levels", as shown in the picture:
 Image adapted from the Wikipedia Sigma Levels entry

Of course, this makes things a bit more complicated.

A layer sigma value in the half level is the average between full level above and below.
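That averaging can be sketched with a hypothetical helper (the sigma values below are made up for illustration):

```python
def half_levels(full_levels):
    """Half-level sigma values as the average of consecutive full levels."""
    return [(full_levels[i] + full_levels[i + 1]) / 2.0
            for i in range(len(full_levels) - 1)]
```

Note that n full levels always give n-1 half levels, which is why the two level sets have different lengths in the file.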

### Getting the geopotential height at a pressure level, step by step

### Setting the right projection

NetCDF can hold the following projections:
'lambert', 'polar', 'mercator', 'lat-lon' and 'rotated_ll'. The projection is defined by the MAP_PROJ label in the original file metadata (not the subdataset metadata). To get all the file metadata, just use:
ds_in = gdal.Open(datafile)


All the metadata comes with the prefix NC_GLOBAL#

In my case, I have MAP_PROJ=2, so the projection is polar stereographic (the code is the position in the projection list, starting at 1). I'll set it with:

proj_out = osr.SpatialReference()
proj_out.SetPS(90, -1.5, 1, 0, 0)


where -1.5 is the STAND_LON label and 90 the POLE_LAT label from the original file metadata.
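Following the projection list and the 1-based convention described above, the lookup can be sketched like this (the list is just the one given earlier in this post):

```python
# Projections supported in the NetCDF metadata, in order;
# the MAP_PROJ code is a 1-based index into this list
projections = ['lambert', 'polar', 'mercator', 'lat-lon', 'rotated_ll']

map_proj = 2  # value of the NC_GLOBAL#MAP_PROJ field in my file
proj_name = projections[map_proj - 1]
# → 'polar' (polar stereographic)
```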

### Finding the geotransform

I'm not sure if the method I use is the best one, but the data I get is consistent.
The metadata given in the main file doesn't set the geotransform properly (or, at least, I'm not able to derive it), since it only sets the center point coordinates and the deltas in x and y. But these deltas are not referenced to the same point, so getting the real geotransform is not easy.

Fortunately, the files come with the fields XLONG and XLAT, which contain the longitude and latitude of each point in the matrix. So, taking the corners, it is possible to get the geotransform, although the coordinates have to be reprojected first:
ds_lon = gdal.Open('NETCDF:"'+datafile+'":XLONG')
ds_lat = gdal.Open('NETCDF:"'+datafile+'":XLAT')
longs = ds_lon.GetRasterBand(1).ReadAsArray()
lats = ds_lat.GetRasterBand(1).ReadAsArray()
ds_lon = None
ds_lat = None
proj_gcp = osr.SpatialReference()
proj_gcp.ImportFromEPSG(4326)
transf = osr.CoordinateTransformation(proj_gcp, proj_out)
ul = transf.TransformPoint(float(longs[0][0]), float(lats[0][0]))
lr = transf.TransformPoint(float(longs[len(longs)-1][len(longs[0])-1]), float(lats[len(longs)-1][len(longs[0])-1]))
ur = transf.TransformPoint(float(longs[0][len(longs[0])-1]), float(lats[0][len(longs[0])-1]))
ll = transf.TransformPoint(float(longs[len(longs)-1][0]), float(lats[len(longs)-1][0]))
gt0 = ul[0]
gt1 = (ur[0] - gt0) / len(longs[0])
gt2 = (lr[0] - gt0 - gt1*len(longs[0])) / len(longs)
gt3 = ul[1]
gt5 = (ll[1] - gt3) / len(longs)
gt4 = (lr[1] - gt3 - gt5*len(longs) ) / len(longs[0])
geotransform = (gt0,gt1,gt2,gt3,gt4,gt5)
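As a sanity check, the six coefficients map a (column, row) pixel position to projected coordinates through the usual GDAL affine formula. A minimal sketch, with made-up coefficients (origin and pixel size are invented for the example):

```python
# GDAL affine geotransform: (col, row) -> projected (x, y)
def apply_geotransform(gt, col, row):
    x = gt[0] + col * gt[1] + row * gt[2]
    y = gt[3] + col * gt[4] + row * gt[5]
    return x, y

# Made-up example: origin at (-500000, 500000), 3000 m pixels, no rotation
gt = (-500000.0, 3000.0, 0.0, 500000.0, 0.0, -3000.0)
# Pixel (0, 0) is the upper-left corner; y decreases going down
# apply_geotransform(gt, 10, 20) → (-470000.0, 440000.0)
```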


### Getting the geopotential height

From the ARWPost code, I know that the geopotential height is:

H = (PH + PHB)/9.81

So the PH and PHB subdatasets must be added together and divided by the gravity to get the actual geopotential height.

The code:
ds_ph = gdal.Open('NETCDF:"'+datafile+'":PH')
ds_phb = gdal.Open('NETCDF:"'+datafile+'":PHB')
num_bands = ds_ph.RasterCount
data_hgp = numpy.zeros((num_bands, ds_ph.RasterYSize, ds_ph.RasterXSize))
for i in range(num_bands):
    data_hgp[i,:,:] = ( ds_ph.GetRasterBand(num_bands - i).ReadAsArray() + ds_phb.GetRasterBand(num_bands - i).ReadAsArray() ) / 9.81
ds_ph = None
ds_phb = None


Note that the band order is inverted and that, in the array, the order of the elements is [layer][y][x]. Both things are needed later to calculate the layer positions.

### Calculating the geopotential height for a given Air Pressure

At this point, we have the geopotential height for every sigma level, which is not very useful for working with the data. People usually prefer the geopotential height at a given pressure (850 hPa, 500 hPa, and so on).

The model gives the pressure values at every point, calculated as:
Press = P + PB
in Pa, or
Press = (P + PB) / 100
in hPa.

The code for reading the pressure:
ds_p = gdal.Open('NETCDF:"'+datafile+'":P')
ds_pb = gdal.Open('NETCDF:"'+datafile+'":PB')
num_bands_p = ds_p.RasterCount
data_p = numpy.zeros((num_bands_p, ds_p.RasterYSize, ds_p.RasterXSize))
for i in range(num_bands_p):
    data_p[i,:,:] = ( ds_p.GetRasterBand(num_bands_p - i).ReadAsArray() + ds_pb.GetRasterBand(num_bands_p - i).ReadAsArray() ) / 100
ds_p = None
ds_pb = None


Now we can calculate at which layer of the pressure subdataset the given pressure would fall.
NumPy has a function called searchsorted that calculates at which position a value would be inserted into a sorted array: http://docs.scipy.org/doc/numpy/reference/generated/numpy.searchsorted.html
numpy.searchsorted([1,2,3,4,5], 2.5)


would give 2, since the value 2.5 is between 2 and 3 (positions 1 and 2).
I needed two tricks to use it:
• The function needs an array sorted in ascending order. That's why the layer order is reversed in the code: air pressure decreases with height, and we need the opposite to use this function.
• The function can only run on 1-d arrays. To use it on our 3-d array, the numpy.apply_along_axis function is used:
numpy.apply_along_axis(lambda a: a.searchsorted(p_ref), axis = 0, arr = data_p)
where p_ref is the pressure we want for the geopotential height.
That's also why the layer number is the first dimension of the array: it is the only way to make apply_along_axis work.
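Both tricks can be checked on a tiny made-up pressure stack, with the level as the first dimension (all the numbers below are invented for the example):

```python
import numpy

# Made-up pressure stack in hPa: 3 levels over a 2x2 grid,
# already ascending along axis 0 (level)
data_p = numpy.array([[[300., 400.], [350., 450.]],
                      [[500., 600.], [550., 650.]],
                      [[850., 950.], [900., 990.]]])

# For every grid point, the level index where 520 hPa would be inserted
positions = numpy.apply_along_axis(lambda a: a.searchsorted(520.),
                                   axis=0, arr=data_p)
# → [[2, 1], [1, 1]]
```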

The final code for the calculation:
p_ref = 500
data_positions = numpy.apply_along_axis(lambda a: a.searchsorted(p_ref), axis = 0, arr = data_p)


Now we know between which layers the given pressure falls for every point. To get the actual pressures at the layers above and below each point, the numpy.choose function can be used:
p0 = numpy.choose(data_positions - 1, data_p)
p1 = numpy.choose(data_positions, data_p)
layer_p = (data_positions - 1) + (p_ref - p0) / (p1 - p0)


Now we know the position within the pressure layers, i.e. if we want 850 hPa, and for one point layer 3 holds 900 hPa and layer 2 holds 800 hPa, the value would be 2.5.
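Plugging the numbers of that example into the formula used above:

```python
# Values from the example: layer 2 holds 800 hPa, layer 3 holds 900 hPa,
# and we want the position of 850 hPa
data_positions = 3        # searchsorted result for 850 hPa
p0, p1 = 800.0, 900.0     # pressures at layers 2 and 3
p_ref = 850.0

# Linear interpolation of the fractional layer position
layer_p = (data_positions - 1) + (p_ref - p0) / (p1 - p0)
# → 2.5
```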

But for the geopotential height, the sigma levels are not the same, since the pressure is referenced to the half levels, and the geopotential to the full levels.
So we will add 0.5 layers to the pressure layer position to get the geopotential height layer position:

layer_h = layer_p + 0.5

Then the final value will be:
h0 = numpy.floor(layer_h).astype(int)
h1 = numpy.ceil(layer_h).astype(int)
data_out = ( numpy.choose(h0, data_hgp) * (h1 - layer_h) ) + ( numpy.choose(h1, data_hgp) * (layer_h - h0) )


Note that h0 and h1 (the positions of the layers above and below the final position) must be integers to be used with the choose function.
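The interpolation between the two geopotential layers can be checked on a tiny made-up column (one grid point, three layers; the heights are invented):

```python
import numpy

# Made-up geopotential heights: shape (layers, y, x) = (3, 1, 1)
data_hgp = numpy.array([[[1000.]], [[1500.]], [[3000.]]])
layer_h = numpy.array([[1.5]])   # halfway between layers 1 and 2

h0 = numpy.floor(layer_h).astype(int)
h1 = numpy.ceil(layer_h).astype(int)
# Weighted average of the heights at the layers above and below
data_out = (numpy.choose(h0, data_hgp) * (h1 - layer_h)
            + numpy.choose(h1, data_hgp) * (layer_h - h0))
# → [[2250.0]], halfway between 1500 and 3000
```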

### Extrapolation

If the wanted pressure is 500 hPa, a data layer above and below will always be found. But closer to the ground, at 850 hPa for instance (another typical pressure for showing the geopotential height), at some points the wanted pressure is higher than any pressure in the model layers. So an extrapolation is needed, and the previous code, which assumed that the value always lies between two layers, will fail.

So the data calculated above must be filtered for each case. For the regular interpolated data, the points that need extrapolation are zeroed out:
data_out = data_out * (data_positions < num_bands_p)


The extrapolation is defined in the file module_interp.f90 from the ARWPost source code. There are two defined cases:
1. The height is below the lowest sigma level, but above the ground height defined for that point
2. The height is below the ground level

The first case is quite simple, because the surface data for many variables is available, so it only changes the layers used to calculate the interpolation (data_psfc and data_hgt are the surface pressure and terrain height, read from the PSFC and HGT subdatasets in the same way as the other variables):
# If the pressure is between the surface pressure and the lowest layer
array_filter = numpy.logical_and(data_positions == num_bands_p, p_ref*100 < data_psfc)
zlev = ( ( (100.*p_ref - 100*data_p[-1,:,:])*data_hgt + (data_psfc - 100.*p_ref)*data_hgp[-1,:,:] ) / (data_psfc - 100*data_p[-1,:,:]))
data_out = data_out + (zlev * array_filter )


The second case needs to extrapolate below the ground level. The code in the file module_interp.f90 iterates over each element in the array, but using NumPy increases the speed a lot, so let's see how it is done:
expon = 287.04*0.0065/9.81
ptarget = (data_psfc*0.01) - 150.0

# data_tk and data_qv hold the temperature (K) and the water vapour
# mixing ratio (QVAPOR), prepared beforehand in the same layer order as data_p
kupper = numpy.apply_along_axis(lambda a: numpy.where(a == numpy.min(a))[0][0] + 1, axis = 0, arr = numpy.absolute(data_p - ptarget))

pbot = numpy.maximum(100*data_p[-1,:,:], data_psfc)
zbot = numpy.minimum(data_hgp[-1,:,:], data_hgt)

tbotextrap = numpy.choose(kupper, data_tk) * pow((pbot/(100*numpy.choose(kupper, data_p))), expon)
tvbotextrap = tbotextrap*(0.622+data_qv[-1,:,:])/(0.622*(1+data_qv[-1,:,:]))

zlev = zbot + ((tvbotextrap/0.0065)*(1.0-pow(100.0*p_ref/pbot, expon)))

data_out = data_out + ( zlev * array_filter )


• expon and ptarget are calculated directly (ptarget is an array, but the operations are simple)
• kupper is the level where the absolute difference between the pressure and ptarget is smallest. Since we have a 3-d array, with the 0th dimension being the level, we can calculate the values of the 2-d xy matrix using the apply_along_axis method, which runs the function given in the first parameter along the chosen axis; in our case, the position (where) of the minimum value (min). We add the +1 to get the same result as the original code.
So in one line, we do what the original code needs three nested loops for!
The value is inverse to the one in the original code, because we changed the pressure order.
• pbot and zbot just take the value closer to the ground, choosing between the lowest level and the ground level defined by the model
• tbotextrap is the extrapolated temperature. The choose function is used to get the value at the calculated kupper level for each element in the matrix.
• tvbotextrap is the virtual temperature for tbotextrap. The original code uses an external function, but I have joined everything. The formula for the virtual temperature is virtual = tmp*(0.622+rmix)/(0.622*(1.+rmix))
• zlev can be calculated directly from the values calculated before, and is added to the output after applying the filter.
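The virtual temperature formula can be checked in isolation. The function name and the input values are made up for the example:

```python
# Virtual temperature from temperature (K) and water vapour mixing
# ratio (kg/kg); the same formula that is joined into tvbotextrap above
def virtual_temperature(tmp, rmix):
    return tmp * (0.622 + rmix) / (0.622 * (1.0 + rmix))

# A perfectly dry parcel is unchanged; moisture raises the virtual temperature
t_dry = virtual_temperature(290.0, 0.0)     # ≈ 290.0
t_moist = virtual_temperature(290.0, 0.01)  # slightly warmer than t_dry
```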

### Drawing the map using Basemap

The output file can be opened with QGIS, or any other GIS able to read GeoTIFF files.

But it is also possible to draw it directly from the script using Basemap:
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt
m = Basemap(projection='cyl',llcrnrlat=40.5,urcrnrlat=43,\
llcrnrlon=0,urcrnrlon=3.5,resolution='h')

cs = m.contourf(longs,lats,data_out,numpy.arange(1530,1600,10),latlon=True)
m.drawcoastlines(color='gray')
m.drawcountries(color='gray')

cbar = m.colorbar(cs)
cbar.set_label('m')

plt.title('Geopotential height at 850hPa')
plt.show()


• First, we initialize the projection (a cylindrical lat-lon one in this case), setting the map limits.
• contourf draws the isobands, one every 10 m.
• Then, we just draw the country borders and coastlines and show the map.