Monday, July 7, 2014

Using the D3 trail layout to draw the Haiyan track

I have written several examples (1, 2, 3 and 4) and a couple of blog entries (1 and 2) showing how to draw animated paths on a map using the D3 library.
Since then, Benjamin Schmidt has written a D3 layout, called the trail layout, that simplifies this kind of work a lot.
Since the layout is new and has few examples (actually, two made by the author), I'll try to show how to work with it.

The trail layout

How does the trail layout work? The author defines it as:
This is a layout function for creating paths in D3 where (unlike the native d3.svg.line() element) you need to apply specific aesthetics to each element of the line.
Basically, the input is a set of points, and the layout takes them and creates the separate segments that join them. These segments can be drawn either as SVG line elements or as path elements (through their d attribute).

Let's see the simplest example:




    • In this case, the points are defined as an array of objects with x and y properties. If the coordinates are named x and y, the layout takes them directly. If they are called, for instance, lon and lat, the layout must be told how to get them.
    • Line 10 of the example creates the SVG
    • Line 14 initializes the layout. In this case, the layout uses the coordType xy, which means that the result gives the start and end points of each segment, convenient for drawing SVG line elements. The other option is the coordinates value, which is convenient for drawing path elements, as we will see later.
    • Line 15 is where the data is set and the layout output is retrieved
    • The last step is where the lines are actually drawn (a sketch of this usage follows the list): 
      • For each data element, an SVG line element is added
      • The styles are applied
      • The extremes of the line are set using the attributes x1, y1, x2 and y2
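
    Since the embedded code isn't reproduced here, this is only a minimal sketch of that usage. The constructor name (d3.layout.trail), the data/trail accessors and the x1, y1, x2, y2 property names are assumptions; check the linked example for the exact API.

    var points = [{x: 10, y: 20}, {x: 80, y: 60}, {x: 150, y: 40}];

    var svg = d3.select("body").append("svg")
        .attr("width", 200)
        .attr("height", 100);

    //Assumed constructor name: the real trail layout may expose it differently
    var trail = d3.layout.trail()
        .coordType('xy');

    //Assumed accessors: set the data and retrieve the computed segments
    var segments = trail.data(points).trail();

    svg.selectAll("line")
        .data(segments)
        .enter()
        .append("line")
        .style("stroke", "steelblue")
        .style("stroke-width", 3)
        .attr("x1", function(d) {return d.x1;}) //assumed property names for the
        .attr("y1", function(d) {return d.y1;}) //segment start and end points
        .attr("x2", function(d) {return d.x2;})
        .attr("y2", function(d) {return d.y2;});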

    How to use coordinates as the coordType:


    The previous example created the trail as a set of SVG line elements, but the trail layout also has an option for creating it as a set of SVG path elements.
    You can see the example here. The data, in this case, is the Haiyan track. As you can see, it's quite similar to the former example, with the following differences (see the sketch after this list):
    • Since in this case we are using geographical coordinates, a projection must be set, and also a d3.geo.path to convert the data into x and y positions, as usual when drawing D3 maps
    • When initializing the trail layout, coordinates must be set as the coordType.
    • Since the data elements do not store the positions with the names x and y, the layout has to be told how to retrieve them using the positioner:
      .positioner(function(d) {return [d.lon, d.lat];})
    • When drawing the trail, a path element is appended instead of a line element, and the d attribute is set with the path function defined above.
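
    A sketch of this variant, assuming that svg, width, height and the track data are already defined as in the map examples. The projection and d3.geo.path parts are standard D3 v3; the trail layout calls carry the same assumed names as in the previous sketch.

    var projection = d3.geo.mercator()
        .center([125, 12]) //approximate centre of the Haiyan track, made up for the sketch
        .scale(1500)
        .translate([width / 2, height / 2]);

    var path = d3.geo.path()
        .projection(projection);

    var trail = d3.layout.trail() //assumed constructor name
        .coordType('coordinates')
        .positioner(function(d) {return [d.lon, d.lat];});

    var segments = trail.data(track).trail(); //assumed accessors

    svg.selectAll("path.segment")
        .data(segments)
        .enter()
        .append("path")
        .attr("class", "segment")
        .attr("d", path) //the d3.geo.path defined above converts each segment into a path
        .style("fill", "none")
        .style("stroke", "steelblue");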

    Creating the map with the trail

     

    Once the basic usage of the trail layout is known, let's reproduce the Haiyan path example (simplified for easier understanding):
    
    
    
    
    
    
    
    
    • The map creation is as usual (explained here)
    • Lines 49 to 51 create the trail layout as in the former example
    • Line 67 creates the trail, but with some differences (a sketch of this step follows the list): 
      • At first, the start and the end of each line are the same point, so the line is not visible yet (lines 69 to 72)
      • The stroke colour is defined as a function of the typhoon class using the colour scale (line 75)
      • A transition is defined to create the effect of the line being drawn slowly
      • The ease is set to linear, which is important here because we chain one transition per segment.
      • The delay is set so that one segment is drawn after the other. The time (500 ms) must be the same as the one set as the duration
      • Finally, the changed values are x2 and y2, that is, the end point of the line, which are set to their actual values
    • The complete example, with the typhoon icon and the date, is also available
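
    A sketch of this animation step, with the same assumptions as above about the segment endpoint property names; color_scale is the colour scale mentioned in the list:

    svg.selectAll("line.track")
        .data(segments) //the output of the trail layout, as in the previous sketches
        .enter()
        .append("line")
        .attr("class", "track")
        .attr("stroke", function(d) {return color_scale(d.class);}) //colour from the typhoon class
        .style("stroke-width", 7)
        .attr("x1", function(d) {return d.x1;}) //assumed property names
        .attr("y1", function(d) {return d.y1;})
        .attr("x2", function(d) {return d.x1;}) //collapsed onto the start point, so nothing shows yet
        .attr("y2", function(d) {return d.y1;})
        .transition()
        .ease("linear")
        .duration(500)
        .delay(function(d, i) {return i * 500;}) //each segment starts when the previous one finishes
        .attr("x2", function(d) {return d.x2;}) //grow the line to its real end point
        .attr("y2", function(d) {return d.y2;});
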
    It's possible to use paths instead of lines to draw the map, as in the first version. The whole code is here, but the main changes are in the last section:
    hayan_trail.enter()
          .append('path')
          .attr("d", path)
          .style("stroke-width",7)
          .attr("stroke", function(d){return color_scale(d.class);})
          .style('stroke-dasharray', function(d) {
            var node = d3.select(this).node();
            if (node.hasAttribute("d")){
              var l = d3.select(this).node().getTotalLength();
              return l + 'px, ' + l + 'px';
            }
          })
          .style('stroke-dashoffset', function(d) {
            var node = d3.select(this).node();
            if (node.hasAttribute("d"))
              return d3.select(this).node().getTotalLength() + 'px';
          })
          .transition()
          .delay(function(d,i) {return i*1000})
          .duration(1000)
          .ease("linear")
          .style('stroke-dashoffset', function(d) {
              return '0px';
          });
    • The strategy here is to set the stroke-dasharray and stroke-dashoffset style values as in this example, and change them later so the effect takes place.
    • At the beginning, both values are set to the length of the path. This way, the path doesn't appear. The length is calculated using the JavaScript function getTotalLength
    • After the transition, the stroke-dashoffset value is 0, and the path is fully drawn

    Conclusion

    I recommend using the trail layout instead of the method from my old posts. It's much cleaner, faster and easier, and it lets you change each segment separately.
    The only problem I have found is that when the stroke gets thicker, the corners between segments show strange effects, because there is no join between them. 

    This didn't happen with the old method. I can't see how to avoid it using lines, but with the coordinates option it could be solved by transforming the straight lines into curved ones.

    Wednesday, April 16, 2014

    D3 map Styling tutorial IV: Drawing gradient paths

    After creating the last D3js example, I was unsatisfied with the color of the path. It changed with the typhoon class at every moment, but it wasn't possible to see the class at every position. When I saw this example by Mike Bostock, I found the solution.

    Understanding the gradient along a stroke example

    First, let's see how to adapt Mike Bostock's Gradient Along Stroke example to a map.
    The map is drawn using the example Simple path on a map, from this post. The only change is that the dashed path is replaced with the gradient one.
    You can see the result here.
    The differences from drawing a simple path start at line 100:
    var line = d3.svg.line()
          .interpolate("cardinal")
          .x(function(d) { return projection([d.lon, d.lat])[0]; })
          .y(function(d) { return projection([d.lon, d.lat])[1]; });
    
      svg.selectAll("path")
          .data(quad(sample(line(track), 8)))
        .enter().append("path")
          .style("fill", function(d) { return color(d.t); })
          .style("stroke", function(d) { return color(d.t); })
          .attr("d", function(d) { return lineJoin(d[0], d[1], d[2], d[3], trackWidth); });

    • The line definition remains the same. For every element, it takes the lon and lat attributes, projects them, and assigns them to the x and y path properties
    • A color function is defined at line 41, which will interpolate the color value from green to red:
      var color = d3.interpolateLab("#008000", "#c83a22");
    • The data is not line(track) directly, as in the former example, but passed through the functions sample and quad.
    • The sample function assigns a property t with values between 0 and 1, which is used to get the color at every point.
    • Finally, the function lineJoin is used to draw a polygon for each sampled piece of the line.
    The functions used in Mike Bostock's example aren't explained there, so I'll try to do it briefly (rough sketches of the first two follow this list):
    • sample takes a line (the data applied to a line function) and walks along it using the precision parameter as the increment, creating an array with all the calculated points.
    • quad takes the points calculated by the sample function and returns, for each point, the adjacent points (i-1, i, i+1, i+2).
    • lineJoin takes the four points generated by quad and draws the polygon, with the help of the lineIntersect and perp functions.
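
    Rough sketches of the first two, reconstructed from the description above; the real functions in Mike Bostock's example may differ in detail:

    //sample: walk along the rendered path every 'precision' pixels, collecting the
    //points and tagging each one with t (its relative position between 0 and 1)
    function sample(d, precision) {
      var path = document.createElementNS(d3.ns.prefix.svg, "path");
      path.setAttribute("d", d);
      var n = path.getTotalLength(),
          points = [];
      for (var i = 0; i <= n; i += precision) {
        var p = path.getPointAtLength(i),
            a = [p.x, p.y];
        a.t = i / n;
        points.push(a);
      }
      return points;
    }

    //quad: for every consecutive pair of points, return the quadruplet
    //(previous, current, next, next-next) that lineJoin needs, keeping a t value.
    //The undefined neighbours at both ends are handled by lineJoin.
    function quad(points) {
      return d3.range(points.length - 1).map(function(i) {
        var a = [points[i - 1], points[i], points[i + 1], points[i + 2]];
        a.t = (points[i].t + points[i + 1].t) / 2;
        return a;
      });
    }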

    Drawing the typhoon track with the colors according to the typhoon class


    The final example draws the typhoon path, changing the color smoothly according to the typhoon class.
    The animation of the path and the rotating icon are explained in the third part of the tutorial. In this case, the way to animate the path changes.
    For each position of the typhoon, a gradient path is drawn, since the gradient is always between two colors. So the part of the code that changes is:
          //Draw the path, only when i > 0 in order to have two points
          if (i>0){
            color0 = color_scale(track[i-1].class);
            color1 = color_scale(track[i].class);
    
            var activatedTrack = new Array();
            
            activatedTrack.push(track[i-1]);
            activatedTrack.push(track[i]);
    
            var color = d3.interpolateLab(color0, color1);
            path_g.selectAll("path"+i)
            .data(quad(sample(line(activatedTrack), 1)))
            .enter().append("path")
              .style("fill", function(d) { return color(d.t);})
              .style("stroke", function(d) { return color(d.t); })
              .attr("d", function(d) { return lineJoin(d[0], d[1], d[2], d[3], trackWidth); });
          }
    
          i = i + 1;
              if (i==track.length)
                clearInterval(animation)

    • Inside the animation interval (line 145), the gradient path is created for each position (starting with the second one, so there are two points)
    • The two colors are taken from the point information
    • An array with the two points is created, with the name activatedTrack. I tried using more points, but the result is very similar.
    • The color interpolation is calculated (line 172)
    • The gradient colored path is created (line 173). Note that the selector is "path"+i, so a different selection is used at each iteration and the previous paths are not overwritten. The method is the same as the one used in the first section.
    Besides, an invisible path with all the positions is created, so the typhoon icon can be moved along it as in the third part of the tutorial (see the sketch below).
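
    A sketch of that part; full_path and typhoon_icon are names made up for the illustration, while line, track, path_g and the counter i are the ones used in the code above:

    //invisible path with the whole track, used only to query positions
    var full_path = path_g.append("path")
        .attr("d", line(track))
        .style("visibility", "hidden");

    //inside the animation interval, place the icon at the fraction of the track reached so far
    var total_length = full_path.node().getTotalLength();
    var point = full_path.node().getPointAtLength(total_length * i / (track.length - 1));
    typhoon_icon.attr("transform", "translate(" + point.x + "," + point.y + ")");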


    Monday, March 31, 2014

    Slides for the workshop "Introduction to Python for geospatial uses"

    On the 26th, 27th and 28th of March, the 8as Jornadas SIG Libre were held in Girona, where I had the opportunity to give a workshop about Python for geospatial uses.



    The slides in Spanish:
    http://rveciana.github.io/introduccion-python-geoespacial

    The Slides in English:
    http://rveciana.github.io/introduccion-python-geoespacial/index_en.html

    The example files in both languages:
    https://github.com/rveciana/introduccion-python-geoespacial

    The meeting was awesome; if you have the opportunity and understand Spanish, come next year!

    Monday, March 24, 2014

    Shaded relief images using GDAL python

    After showing how to colour a DEM file, classifying it, and calculating its isobands, this post shows how to create a shaded relief image from it.
    The resulting image
    A shaded relief image simulates the shadow thrown upon a relief map. This shadow is usually blended with some colouring, related to the altitude, a terrain classification, etc.
    The shadow is usually drawn considering that the sun is at 315 degrees of azimuth and 45 degrees over the horizon, which never actually happens in the northern hemisphere. These values avoid strange perceptions, such as seeing the mountain tops as the bottom of a valley.

    In this example, the script calculates the hillshade image, a coloured image, and blends them into the shaded relief image.

    As usual, all the code, plus the sample DEM file, can be found at GitHub.

    The hillshade image

    I didn't know how to create a shaded relief image using numpy. Eric Gayer helped me with some samples, and I found some other information here.
    The script is:
    """
    Creates a shaded relief file from a DEM.
    """
    
    from osgeo import gdal
    from numpy import gradient
    from numpy import pi
    from numpy import arctan
    from numpy import arctan2
    from numpy import sin
    from numpy import cos
    from numpy import sqrt
    from numpy import zeros
    from numpy import uint8
    import matplotlib.pyplot as plt
    
    def hillshade(array, azimuth, angle_altitude):
            
        x, y = gradient(array)
        slope = pi/2. - arctan(sqrt(x*x + y*y))
        aspect = arctan2(-x, y)
        azimuthrad = azimuth*pi / 180.
        altituderad = angle_altitude*pi / 180.
         
     
        shaded = sin(altituderad) * sin(slope)\
         + cos(altituderad) * cos(slope)\
         * cos(azimuthrad - aspect)
        return 255*(shaded + 1)/2
    
    ds = gdal.Open('w001001.tiff')  
    band = ds.GetRasterBand(1)  
    arr = band.ReadAsArray()
    
    hs_array = hillshade(arr,315, 45)
    plt.imshow(hs_array,cmap='Greys')
    plt.show()

    • The script draws the image using matplotlib, to make it easy
    • The hillshade function starts by calculating the gradient in the x and y directions using the numpy.gradient function. The result is two matrices of the same size as the original, one for each direction.
    • From the gradient, the aspect and slope can be calculated. The aspect gives the mountain orientation, which will be illuminated depending on the azimuth angle. The slope will change the illumination depending on the altitude angle.
    • Finally, the hillshade is calculated.

     shaded_relief.py

     The shaded relief image is calculated using the algorithm explained in the post Colorize PNG from a raster file and the hillshade.
    As in the coloring post, the image is read by blocks to improve the performance, because it uses a lot of arrays, and doing it at once with a big image can take a lot of resources.
    I will comment the code block by block, to make it easier. The full code is here.

    The main function, called shaded_relief, is the most important, and calls the different algorithms:
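    #Note: this is a fragment of the full script; os.path.exists, gdal, GA_ReadOnly,
    #the numpy functions (zeros, asarray, transpose), PIL's fromarray and the helpers
    #readColorTable, hillshade, values2rgba, rgb_to_hsv and hsv_to_rgb are imported
    #or defined elsewhere in the full code.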
    def shaded_relief(in_file, raster_band, color_file, out_file_name,
        azimuth=315, angle_altitude=45):
        '''
        The main function. Reads the input image block by block to improve the performance, and calculates the shaded relief image
        '''
    
        if exists(in_file) is False:
                raise Exception('[Errno 2] No such file or directory: \'' + in_file + '\'')    
        
        dataset = gdal.Open(in_file, GA_ReadOnly )
        if dataset == None:
            raise Exception("Unable to read the data file")
        
        band = dataset.GetRasterBand(raster_band)
    
        block_sizes = band.GetBlockSize()
        x_block_size = block_sizes[0]
        y_block_size = block_sizes[1]
    
        #If the block y size is 1, as in a GeoTIFF image, the gradient can't be calculated, 
    #so more than one block is used. In this case, using 8 lines gives a similar 
        #result as taking the whole array.
        if y_block_size < 8:
            y_block_size = 8
    
        xsize = band.XSize
        ysize = band.YSize
    
        max_value = band.GetMaximum()
        min_value = band.GetMinimum()
    
        #Reading the color table
        color_table = readColorTable(color_file)
        #Adding an extra value to avoid problems with the last & first entry
        if sorted(color_table.keys())[0] > min_value:
            color_table[min_value - 1] = color_table[sorted(color_table.keys())[0]]
    
        if sorted(color_table.keys())[-1] < max_value:
            color_table[max_value + 1] = color_table[sorted(color_table.keys())[-1]]
        #Preparing the color table
        classification_values = color_table.keys()
        classification_values.sort()
    
        max_value = band.GetMaximum()
        min_value = band.GetMinimum()
    
        if max_value == None or min_value == None:
            stats = band.GetStatistics(0, 1)
            max_value = stats[1]
            min_value = stats[0]
    
        out_array = zeros((3, ysize, xsize), 'uint8')
    
        #The iteration over the blocks starts here
        for i in range(0, ysize, y_block_size):
            if i + y_block_size < ysize:
                rows = y_block_size
            else:
                rows = ysize - i
            
            for j in range(0, xsize, x_block_size):
                if j + x_block_size < xsize:
                    cols = x_block_size
                else:
                    cols = xsize - j
    
                dem_array = band.ReadAsArray(j, i, cols, rows)
                
                hs_array = hillshade(dem_array, azimuth, 
                    angle_altitude)
    
                rgb_array = values2rgba(dem_array, color_table, 
                    classification_values, max_value, min_value)
    
                hsv_array = rgb_to_hsv(rgb_array[:, :, 0], 
                    rgb_array[:, :, 1], rgb_array[:, :, 2]) 
    
                hsv_adjusted = asarray( [hsv_array[0], 
                    hsv_array[1], hs_array] )          
    
                shaded_array = hsv_to_rgb( hsv_adjusted )
                
                out_array[:,i:i+rows,j:j+cols] = shaded_array
        
        #Saving the image using the PIL library
        im = fromarray(transpose(out_array, (1,2,0)), mode='RGB')
        im.save(out_file_name)
    • After opening the file, at line 20 comes the first interesting point. If the image is read block by block, sometimes the blocks will have only one line, as in GeoTIFF images. In that situation, the y gradient can't be calculated, so the hillshade function will fail. I've seen that taking only two lines gives coarse results, and with 8 lines the result is more or less the same as taking the whole array. The performance won't be as good as using only one block, but it is still reasonably fast.
    • Lines 32 to 51 read the color table and the file maximum and minimum. This has to be done outside the values2rgba function, since it is needed only once.
    • Lines 54 to 66 control the block reading. For each iteration, a small array is read (line 67). This is what will be processed. The result is written into the output array defined at line 52, which has the final size.
    • Now the calculations start:
      • At line 69, the hillshade is calculated
      • At line 72, the color array is calculated
      • At line 75, the color array is changed from rgb values to hsv. 
      • At line 78, the value (the v in hsv) is changed to the hillshade value. This will blend both images. I took the idea from this post.
      • Then the image is transformed to rgb again (line 81) and written into the output array (line 83)
    • Finally, the array is transformed into a PNG image using the PIL library. The numpy.transpose function is used to re-order the matrix, since the original values have the shape (3, height, width) and the Image.fromarray function needs (height, width, 3). Another way to do this is using scipy.misc.imsave (which would need scipy installed just for that) or the Image.merge function, sketched below.
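
    A sketch of the Image.merge alternative (not part of the original script): it builds one greyscale image per band and merges them into an RGB image.

    from PIL import Image

    #out_array has shape (3, height, width) with uint8 values, as in the script above
    r, g, b = (Image.fromarray(out_array[i]) for i in range(3))
    Image.merge('RGB', (r, g, b)).save(out_file_name)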

    The colouring function is taken from the post Colorize PNG from a raster file, but modified so the colors are only continuous, since the discrete option doesn't give nice results in this case:
    def values2rgba(array, color_table, classification_values, max_value, min_value):
        '''
        This function calculates the color of an array given a color table. 
        The color is interpolated from the color table values.
        '''
        rgba = zeros((array.shape[0], array.shape[1], 4), dtype = uint8)
    
        for k in range(len(classification_values) - 1):
            if classification_values[k] < max_value and (classification_values[k + 1] > min_value ):
                mask = logical_and(array >= classification_values[k], array < classification_values[k + 1])
    
                v0 = float(classification_values[k])
                v1 = float(classification_values[k + 1])
    
                rgba[:,:,0] = rgba[:,:,0] + mask * (color_table[classification_values[k]][0] + (array - v0)*(color_table[classification_values[k + 1]][0] - color_table[classification_values[k]][0])/(v1-v0) )
                rgba[:,:,1] = rgba[:,:,1] + mask * (color_table[classification_values[k]][1] + (array - v0)*(color_table[classification_values[k + 1]][1] - color_table[classification_values[k]][1])/(v1-v0) )
                rgba[:,:,2] = rgba[:,:,2] + mask * (color_table[classification_values[k]][2] + (array - v0)*(color_table[classification_values[k + 1]][2] - color_table[classification_values[k]][2])/(v1-v0) )
                rgba[:,:,3] = rgba[:,:,3] + mask * (color_table[classification_values[k]][3] + (array - v0)*(color_table[classification_values[k + 1]][3] - color_table[classification_values[k]][3])/(v1-v0) )
        return rgba
       
    The hillshade function is the same as the one explained in the first section.
    The functions rgb_to_hsv and hsv_to_rgb are taken from this post, and convert the image values from rgb to hsv and back.


    Tuesday, February 25, 2014

    3D terrain visualization with python and Mayavi2

    I have always wanted to draw 3D terrains like those at www.shadedrelief.com, which are amazing. But the examples all use software I don't work with, so I tried to do it with Python.
    The final result


    As usual, you can get all the source code and data at my GitHub page.

     Getting the data

    After trying different locations, I decided to use the mountain of Montserrat, close to Barcelona, since it has nice stone towers that are a good test for the DEM and the 3D visualization. An actual picture of the zone used is this one:
    Montserrat monastery
    The building is a good reference, since the areas with only stone would make checking the result much harder.
    All the data has been downloaded from the ICGC servers:
    • The DEM data was downloaded from the Vissir3 service, going to the section catàleg i descàrregues (catalogue and downloads) -> MDE 5x5. The file is named met5v10as0f0392Amr1r020.txt, but I cut out a small part of it to make Mayavi2 work more smoothly, using:

      gdalwarp -te 401620 4604246 403462 4605867 -s_srs EPSG:25831 -t_srs EPSG:25831 met5v10as0f0392Amr1r020.txt dem.tiff
    • The picture to drape over the DEM file can be downloaded using the WMS service provided by the ICGC:

      http://geoserveis.icc.cat/icc_mapesbase/wms/service?REQUEST=GetMap&VERSION=1.1.0&SERVICE=WMS&SRS=EPSG:25831&BBOX=401620.0,4604246.0,403462.0,4605867.0&WIDTH=1403&HEIGHT=1146&LAYERS=orto5m&STYLES=&FORMAT=JPEG&BGCOLOR=0xFFFFFF&TRANSPARENT=TRUE&EXCEPTION=INIMAGE
     It's not as automatic as I would like, but if it's possible to download a DEM and the corresponding image, it's possible to create the 3D image.

    Creating the image

    First, let's plot the DEM file in 3D using mayavi2:
    """
    Plotting the terrain DEM with Mayavi2
    """
    
    from osgeo import gdal
    from mayavi import mlab
    
    ds = gdal.Open('dem.tiff')
    data = ds.ReadAsArray()
    
    mlab.figure(size=(640, 800), bgcolor=(0.16, 0.28, 0.46))
    
    mlab.surf(data, warp_scale=0.2) 
    
    mlab.show()
    • First, we import gdal as usual, and also the mlab library from Mayavi, which lets us set up the Mayavi canvas.
    • The data is read, as usual, with the gdal ReadAsArray method. 
    • The figure is created. This works like creating the Image object in the PIL library, creating the canvas where the data will be drawn. In this case, the size is 640 x 800 pixels; making the figure bigger can affect the performance on older computers. bgcolor sets the blue color used as the background.
    • The surf method will plot the surface. The input has to be a 2D numpy array, which is what we have.  
      • The warp_scale argument sets the vertical scale. In this case, leaving the default value creates a really exaggerated effect, so it's better to play with it a little to get a more realistic result.
      • The colors depend on the Z value at each point, and can be changed using the color or colormap option.
    • The show() method keeps the image visible when running the example from a script. If you use IPython, you don't need this step.
    • If you want to save the figure as a png, you can either use the icon in the Mayavi window or call the method mlab.savefig('image_name')
    • If you want to move the camera (change the perspective), you can use the roll/yaw/pitch methods:
      f = mlab.gcf()
      camera = f.scene.camera
      camera.yaw(45)

    The plotted DEM
    Now, let's put an aerial image over the 3D visualization:
    """
    Draping an image over a terrain surface
    """
    from osgeo import gdal
    from tvtk.api import tvtk
    from mayavi import mlab
    import Image
    
    ds = gdal.Open('dem.tiff')
    data = ds.ReadAsArray()
    im1 = Image.open("ortofoto.jpg")
    im2 = im1.rotate(90)
    im2.save("/tmp/ortofoto90.jpg")
    bmp1 = tvtk.JPEGReader()
    bmp1.file_name="/tmp/ortofoto90.jpg" #any jpeg file
    
    my_texture=tvtk.Texture()
    my_texture.interpolate=0
    my_texture.set_input(0,bmp1.get_output())
    
    
    mlab.figure(size=(640, 800), bgcolor=(0.16, 0.28, 0.46))
    
    surf = mlab.surf(data, color=(1,1,1), warp_scale=0.2) 
    surf.actor.enable_texture = True
    surf.actor.tcoord_generator_mode = 'plane'
    surf.actor.actor.texture = my_texture
    
    mlab.show()

    • The most important new import is tvtk. TVTK is a Python API that allows working with VTK objects. My knowledge of Mayavi2 is very limited, but I see TVTK as an extension of it.
    • The DEM data is read the same way, using the ReadAsArray method.
    • The aerial image, named ortofoto.jpg, is not in the correct orientation. It took me a long time to understand what was happening. I rotate it by opening the image with the PIL library and using the rotate method (lines 11 to 13)
    • Then, the tvtk object with the texture is created, loading the image with a JPEGReader object and assigning it to the Texture object (lines 14 to 19). 
    • The figure and the 3D surface are created as in the other example (lines 22 and 24)
    • Then, the surface is modified to show the image over it (lines 25 to 28). 
    The final result

    The result is correct, but the aerial image is slightly shifted from where it should be, so, since the terrain is really steep, part of the buildings end up drawn on the stone walls! I edited the WMS coordinates a little so the result is slightly better. Anyway, the method works.

    Links

    • GitHub: The source code and example data
    • Mayavi2: 3D scientific data visualization and plotting 
    • This entry to a mailing list gave me the tip to create the example
    • If you want to do more or less the same, but using JavaScript, Bjørn Sandvik posted this excellent example.
    • ShadedRelief: Cool 3D and shaded relief examples.