Imediava's Blog


Tutorial on ember states and routing

Single page apps, states

In the quest to improve web applications, many developers are starting to consider single-page apps as a way to offer a better user experience, more similar to native applications, without the nuisance of waiting after every server request [see single apps challenges and benefits].

Even though we call them single-page apps, they are composed of different steps that would correspond to different pages in a traditional app. If we imagine a blog, a list of all the posts can be one step and a post with all its comments another.

In this post we will look at Ember's approach to defining apps based on steps, through the construction of an extremely simple single-page app.

Ember’s approach

States

The different steps of an application are called states in Ember and are organized in layers. At any moment an application can only be in one of its leaf states. Being in a state implies being in all of its ancestors. Take the following example:

Hierarchy of states

An application can only be in leaf states 1.a, 1.b or 2. However, if an application is in state 1.a, it is also in states 1 and 0, because both are ancestors of 1.a.

Ember basics

We also need to understand that ember apps are based on three main building blocks:

  • Templates: Strings that contain variables that are replaced by values when the template is evaluated. Ember uses the Handlebars template engine.
  • Controllers: Objects used to manipulate the application's data.
  • Views: Elements responsible for "combining templates with data to render as sections of a page's DOM, and … responding to user-initiated events." Basically, a view has an associated template and uses a controller to access the data it needs to render it.

I'll call the combination of these three elements a section.

Elements that form a section

Ember tools to build multi-state apps

Apart from these main building blocks, Ember provides two tools that are extremely useful when building an app based on states: the {{outlet}} tag and the Router.

  • The {{outlet}} tag makes it possible to build a hierarchy of sections by providing a means for a container template to include a child template.
  • The Router specifies all the possible states of an app and maps them to URLs.

Joining them together we can create a tree of states, where every state has an associated section. This makes it possible to build really complex apps where the states are decoupled from each other while the acceptable combinations of states are well defined.

Hierarchy of states associated to sections

Our simple application

Once we know the concepts behind the approach, we'll show how it works with our basic application. Our example will only have a master section with one child. We will build the master section and then see how we can add a child section with its corresponding state. Adding further children only means repeating the same steps, adapted to the new states.

The master view

For the master section of the application we will use the following Handlebars template:


<script type="text/x-handlebars" data-template-name="master">
This is the master.
  {{outlet}}
</script>

The {{outlet}} helper is the container for the child template; it will serve to render the main section inside the master.

To define the controller and the view for the master section we pass them as the ApplicationController and ApplicationView properties when we create the Ember.Application. They need to be named like that to conform to Ember's naming convention:


App = Ember.Application.create({

  ApplicationController : Ember.Controller.extend({
  }),

  ApplicationView : Ember.View.extend({
    templateName: "master" // We use the master template for this view
  }),

});

To add the states we pass a Router property of type Ember.Router to our application. The router is in charge of representing the application's different states and matching them to URLs (see Ember.Router for a deeper explanation of how this type works). It needs a root property that represents the app's root state, which, as we said before, is associated with ApplicationView and ApplicationController.


App = Ember.Application.create({

  //......

  Router : Ember.Router.extend({
    // Every router must have a root state
    // whose controller and view are ApplicationController and ApplicationView
    root: Ember.Route.extend({
      // here we will have all the states accessible to the user
    })
  })
});

The main state

With this simple code we already have our master section working. Now we need a child section that will render inside the master.

Again we need to define a template for it:


<script type="text/x-handlebars" data-template-name="main">
My main with its context:  {{ variable }}.
</script>

We also create the controller and the view for the state. Following Ember's convention, we name them after the state's name, capitalized (MainController and MainView):


App = Ember.Application.create({

  //......

  MainView : Ember.View.extend({
    templateName: "main"
  }),

  MainController : Ember.Controller.extend({
    variable: "my main",
  }),

  //......

});

We still need to add the main section's state to the router. Since the main section renders inside the master, we place the main state inside the root state.


  root: Ember.Route.extend({
     // here we will have all the states accessible to the user
     main: Ember.Route.extend({
     }),
  })

Old single-page apps didn't allow bookmarking a state or going back to a previous state with the back button, because states were not represented by a URL. This is not a problem with Ember, which lets you map each state to a URL.

To make our main state available at ‘#/main’ we add the following to the state:


  root: Ember.Route.extend({
     // here we will have all the states accessible to the user
     main: Ember.Route.extend({
        //main => /main/ - will be accessible at #/main/
        route: '/main/',
     })
  })

Finally we want to tell Ember that when we are in the main state, the main view should be rendered inside the master. For that we add a connectOutlets callback to the state. This callback must contain the following method call:


router.get('applicationController').connectOutlet('main')

Since we have followed Ember's naming convention, this method internally creates an instance of MainView, pairs it with the MainController singleton and passes it to the applicationController. The framework knows it has to use MainView and MainController because they follow the naming convention for the 'main' state.


  root: Ember.Route.extend({
     // here we will have all the states accessible to the user
     main: Ember.Route.extend({
        route: '/main/',
        connectOutlets: function(router, event) {
          router.get('applicationController').connectOutlet('main');
        }
     }),

   //........

   });

With this we have our complete app based on Ember states, with our main page accessible at http://whatevertherootis/#/main:

App = Ember.Application.create({

  ApplicationController : Ember.Controller.extend({
  }),

  ApplicationView : Ember.View.extend({
    templateName: "master"
  }),

  MainView : Ember.View.extend({
    templateName: "main"
  }),

  MainController : Ember.Controller.extend({
    variable: "my main",
  }),

  Router : Ember.Router.extend({
    root: Ember.Route.extend({
      main: Ember.Route.extend({
        route: '/main/',
        connectOutlets: function(router, event) {
          router.get('applicationController').connectOutlet('main');
        }
      })
    })
  })
});
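Depending on the Ember version you are running (this tutorial was written against the pre-1.0 router), you may also need to start the application explicitly once everything is defined, so that the router begins matching URLs. A minimal sketch, assuming your version exposes this call:

// kick off routing once the application is fully defined
App.initialize();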

The source code for the full application, with an index.html page, can be downloaded from here. In later posts I will show how to add another state and link to it from our main state, how to redirect the root of our app to the main state, and how to add nested states to our app.

Added benefits of using require.js

In a previous post I explained how require.js can help organize JavaScript code into modules that behave in a way similar to packages in programming languages like Java. The main benefit of this approach is that clients don't need to know a module's internal dependencies to import it.

In this post I'll explain two more benefits derived from using require.js.

Benefit on code quality

Cleaner APIs and namespace

When defining modules with require.js the developer is forced to think about which variables the module is going to share. Forcing the programmer to decide what to expose, and which implementation-specific details to hide, leads to better encapsulated code.

In addition, when the developer imports a module with require.js, the properties and methods exported by the module are accessible through a "package" variable. This allows two different modules to export an attribute with the same name without one of them hiding the other's value.

To give an equivalent example, it is as if in Python a whole module was imported instead of each of its components. If someone wanted to avoid repeating the module name for a variable that is accessed many times, it could still be done by creating a new variable:

// myModule1 = {myVar:1}
// myModule2 = {myVar:2}

// in python: import myModule1, myModule2
requirejs(['myModule1', 'myModule2'], function (m1, m2) {

        console.log(m1.myVar);
        console.log(m2.myVar);

        // To simulate loading a module's attribute
        // in python: from myModule import myVar
        var myVar = m1.myVar;

});

The added benefit is that importing modules this way does not pollute the namespace. There is little chance of conflicts between variables, no matter how many modules are imported. This even allows things like loading two different versions of the same library, as the sketch below shows.
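Require.js supports this through separate loader contexts. A minimal sketch, assuming hypothetical lib/v1 and lib/v2 folders that each contain their own myLibrary module:

// require.config returns an independent loader when given a context name,
// so modules loaded through reqV1 and reqV2 never clash with each other.
var reqV1 = require.config({ context: "v1", baseUrl: "lib/v1" });
var reqV2 = require.config({ context: "v2", baseUrl: "lib/v2" });

reqV1(["myLibrary"], function (lib) {
    console.log("version from lib/v1:", lib);
});

reqV2(["myLibrary"], function (lib) {
    console.log("version from lib/v2:", lib);
});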

Benefit on performance

Loading smaller resources with lower requests

There are mainly two factors associated with the way the resources are loaded in the browser that affect the loading time of a page.

  1. The size of the resources: The bigger the resources the longer the loading time.
  2. The number of resources: The more resources the longer the time because there are more requests needed.

To deal with the first, the best approach for JavaScript files is to minify the code. Minification consists of removing all the characters that are only there to make the code more readable.
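As a quick illustration with hypothetical code, a minifier strips whitespace, comments and long identifier names while preserving behaviour:

// Before minification: readable, but full of characters
// the interpreter doesn't need.
function addNumbers(firstNumber, secondNumber) {
    // add the two arguments together
    return firstNumber + secondNumber;
}

// After minification: the same behaviour in far fewer bytes.
function a(b,c){return b+c}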

For decreasing the number of web requests the usual solution (without getting into caching) is to group all the files into one without modifying the code. This way the number of requests needed to get the resources from the server is reduced to one.

Luckily require.js comes with a tool, the r.js optimizer, to automatically minify and group all the modules into one file. It can minify every CSS file in your project, and minify and concatenate all the JavaScript files whose dependencies have been defined as require.js modules. Instructions for using the tool can be found at http://requirejs.org/docs/optimization.html.
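As a minimal sketch of how the optimizer is typically driven (assuming a hypothetical main.js entry module and a build profile called build.js, run with: node r.js -o build.js):

// build.js - a minimal build profile for the r.js optimizer
({
    baseUrl: ".",          // directory against which module names are resolved
    name: "main",          // entry module whose dependencies are traced
    out: "main-built.js"   // single concatenated and minified output file
})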

PS: For those interested in further optimizing their webpages' performance, I strongly recommend having a look at the following link, with a list of recommendations from the folks at Yahoo.

Introduction to Require.js

In a programming language like Java you normally don't need to pay attention to how dependencies are loaded. All you need to do is declare your dependencies with a fully qualified name (e.g. import java.util.Collection) and the loading is done transparently by the JVM.

However, when you move to JavaScript the landscape changes, and you realize that loading dependencies is not a simple process. JavaScript doesn't define a module system out of the box, which means the programmer needs to take care of it himself, making modularisation difficult.

Let’s have a look at an example of a project and the dependencies that are defined within its modules. The dependencies are represented as arrows from the client module on the left to the required module on the right:

module1 → module2 → module3, module4

With this system in JavaScript, every time you need to use the module1, all the other modules need to be imported in the right order in the following manner:

<script type="text/javascript" src="module4"></script>
<script type="text/javascript" src="module3"></script>
<script type="text/javascript" src="module2"></script>
<script type="text/javascript" src="module1"></script>

Whereas in Java a similar system could be defined like this:

//Module 1
import module2; 

class Module1 { .. }

//Module 2
import module3;
import module4;

class Module2 { .. }

// And a client module would only need to import module1
import module1;

class Client { .. }

Disadvantages of Javascript

We can see that JavaScript’s approach is less adequate:

  • Every time we want to use module1 we first have to work out its dependencies, recursively.
  • What makes it even worse, those dependencies need to be declared every time the module is used.
  • If the dependencies change, they need to be redeclared in every module that imports module1. If someone is using module1 and forgets, or is not informed of the changes, their code will stop working without notice, regardless of whether module1's public API has changed.

CommonJS and Require.js

The good news is that there are ways in JavaScript of defining hierarchies of dependencies pretty similar to those of languages like Java.

On platforms like Node and Rhino, you can use the require function, which loads dependencies of modules defined following the CommonJS specification. However, this loading mechanism is not especially suited to JavaScript in the browser, where loading synchronously is not appropriate. Luckily there is Require.js, a library that allows defining dependencies between modules in a way that works well in the browser.
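For reference, this is what the synchronous CommonJS style looks like on Node (a sketch with hypothetical module names):

// module1.js - CommonJS style: require() loads dependencies synchronously
var module2 = require("./module2");

// whatever is attached to exports becomes the module's public API
exports.doSomething = function () {
    return module2.helper();
};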

Applying Require.js to our initial example

To allow a module to be imported with require.js you need to wrap your JavaScript in the module format defined by the library. Fortunately the format is really simple. Let's start by applying it to the leaves of our hierarchy, modules 3 and 4.

define(function(){

     // Code defining the module (private code)
     // ...

     // Return the properties that form the module's public API
     return {
         myMethod1: function(){ /* ... */ },
         myProp1: value
     };
});

All we need to do is wrap our code in a call to define and pass it a callback function. The callback function must return the methods and properties of the module that you want to make available to its clients: its public API. This follows the Module Pattern, a pattern whose purpose is to preserve the encapsulation of modules. Regardless of whether you're interested in using require.js, you should definitely use the Module Pattern when programming in JavaScript, to protect the clients of your module and to avoid polluting the namespace.
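For comparison, this is what the bare Module Pattern looks like without require.js (a sketch with hypothetical names):

// An immediately-invoked function keeps "count" private;
// only the returned object is visible to clients.
var counter = (function () {
    var count = 0; // private state, invisible from outside

    return {
        increment: function () { count += 1; },
        current: function () { return count; }
    };
})();

counter.increment();
console.log(counter.current()); // prints 1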

When you define modules that have dependencies on other modules, like module1 and module2, the definition doesn't change much. The only modification is to provide, as the first argument to define, an array with the module's dependencies (the imports), and to give the callback function one parameter per imported module. Taking module2 as an example, the definition would be the following:

define(["module3", "module4"], function(module3, module4){

   // Any code
   // ...

   // Accessing the methods of the imported modules is done
   // through the parameters of the callback function
   module3.myMethod1();
});

Loading any module follows the same pattern, keeping the benefit of only having to declare direct dependencies.

Finally defining the starting point

Any JavaScript application that uses require.js needs a starting point, similar to how a Java application has to have a class with a main method. This starting point uses the require function to declare its dependencies. It works just like define: you pass an array of dependencies and a callback function with the code that will run once the dependencies are loaded.

require(["module1"], function(module1){
   // Code to run as a main method
   // ...
});
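On a web page, this entry point is typically wired up with a single script tag. A minimal sketch, assuming the require call above lives in a main.js file sitting next to require.js:

<!-- require.js loads first, then asynchronously fetches the module
     named in data-main (main.js) plus everything it depends on -->
<script data-main="main" src="require.js"></script>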

Best Javascript Tools

I remember, when I started creating webpages coming from a background in Java, that I looked at JavaScript and really didn't get it. I guess it didn't help that criticism of the language was spread all over the internet. It gave me the impression of a really old language, the legacy of a list of wrong design decisions. Overall I felt it was something I had to deal with but should avoid, trying to do as much as possible on the server.

However, now that I've had some experience with it, I realize that was just a prejudiced impression. It's a fact that JavaScript is an old language and that some things have been done wrong over its history.

Nonetheless, JavaScript is living a renaissance. There is a really active community backing it and a lot of innovation going on. Many of the mistakes in its design have been addressed, and overall it is just a matter of having the expertise to avoid them. As with any programming language, it is a matter of using it well. The great news with JavaScript is that, thanks to its community, there are many tools available that make this task of "programming well" easier.

This is the list of my favourite Javascript libraries for this task:

  • Underscore – A really interesting library of useful functions to operate with collections, arrays, objects or other functions.
  • Coffeescript – A language that takes influences from Ruby and provides, among other things, list comprehensions, a smarter way of dealing with variables without polluting the namespace, and many other useful tricks that can make JavaScript development simpler. Coffeescript compiles to well-formatted JavaScript, thus allowing easy debugging.
  • Require.js – A library for defining dependencies between Javascript files, Require.js facilitates a better modularization of code. It also allows smart resource loading and resource compression, making page loading faster. (See an introduction to the library here)
  • Backbone – An MVC-like framework for defining complex one page web applications.
  • Mustache – A logic-less template engine that is available for many languages, including JavaScript. Its markup is powerful while keeping developers from cluttering templates with complexity. It can be replaced by Handlebars, whose templates are fully compatible with Mustache but provide even more powerful features. Both can be plugged into Backbone's template-agnostic view system.

My intention is that this article serves as an introduction to a series of posts where I delve into each of these tools' features, advantages/shortcomings and alternatives.

Create a new list-item on google sites api

Here's a quick explanation for those who are using the Google Sites API but can't find a way to create a new list-item in an existing listpage.

Generally the problem is that the documentation seems to point to "CreatePage" as the method for creating any kind of item. However, list-item is one of the "more complex" kinds described in the following paragraph:

To create more complex entry kind that are populated on creation (e.g. a listpage with column headings), you’ll need to create the gdata.sites.data.ContentEntry manually, fill in the properties of interest, and call client.Post().

This means that to create a new list-item we need to first create the entry manually and then "Post" it:

# Create new_entry
client.Post(new_entry, '/feeds/content/mydomain.com/mysites-page-name/')

It’s all about reading the fine print..

Data binding on grails – The basics

Grails data binding is a simple tool that becomes really useful when you have to assign values from request parameters to domain objects. Thanks to data binding, assigning a whole bunch of properties can be done in one line of code:

Book book = new Book(params)

If the domain object already exists the equivalent is:

book.properties = params

If we have a domain object with many properties and associations, this can save loads of boilerplate code. As an example we are going to take a Book domain class with the following properties:

String name
String isbn
Date dateOfRelease
Person author

Without data binding, assigning all those properties would mean having to do:

book.name = params.name
book.isbn = params.isbn
book.dateOfRelease = new Date().parse("dd-MM-YYYY", params.dateOfRelease)

It is important to notice the obvious fact that, if we want to bind the parameters of a web request (for example the result of a form submission) automatically, we need to use the same names in the fields sent through the form as in the domain class. By applying this simple convention, which is also good for consistency, the process of gathering the result of a web request and creating a new domain object is simplified to one statement. No need to set the parameters one by one.

In the case of one-to-one associations data binding is even more useful, because it avoids having to create the associated domain objects. Let's say our initial Book class has an author property whose type is Person. Without data binding, setting this new property would mean having to do:

def author = new Person()
author.name = "John"
book.author = author

While with data binding, if our parameters map has this content:

params ["author.name"] = "John"

The Person object is created with its name and associated to the book.author property automatically.

In this first article we've seen how data binding can help avoid boilerplate code, notably simplifying the task of creating domain objects. In following episodes we'll see how data binding works for one-to-many associations, and other benefits of this approach, such as automatic conversion from strings to the appropriate data types thanks to Spring's PropertyEditors.

Web Scraping with Groovy (3 of 3) – JSoup

In previous articles we had a look at how to use Groovy [4] and Groovy + XPath [5] for scraping web pages. In this one we are going to see how the JSoup library can make it even easier.

Jsoup

Jsoup is a very powerful Java library I have just recently discovered. As a Java library it can be used from any JVM language, so we are going to use it with Groovy, thus benefiting from the features of both.

With Jsoup it is really easy to fetch and parse a URL; we just need one convenient method. The code to fetch the URL for the example we've been using in the previous articles is as simple as this:

@Grapes( @Grab('org.jsoup:jsoup:1.6.1') )
import org.jsoup.Jsoup
import org.jsoup.nodes.Document

Document doc = Jsoup.connect("http://www.bing.com/search?q=web+scraping").get();

We just declare our dependency on the Jsoup library (thanks to Grape) and then call the connect method on the Jsoup class. This creates a Connection object whose parameters can be modified, allowing things like setting cookies. After creating the Connection object, calling its get method actually retrieves the webpage, parses it as a DOM and returns a Document object.

CSS selectors

JSoup's most important feature is that it allows the use of CSS selectors, a way of identifying parts of a webpage that should be familiar to any jQuery or CSS user. CSS selectors are, in my opinion, the best existing way to filter elements in a web page.

With the Document object we got before, the full code for filtering the links of interest for our example would be:

def results = doc.select("#results h3 a")

As you can see, calling the select method we can use the same selector we would use with jQuery, which makes the query really easy.

Summary

To sum up, Jsoup is somewhat recent but comes with features that make it, in my opinion, the best Java library for web scraping. I recommend anyone interested in scraping with Java to visit Jsoup's page, which is full of good examples of how to use the library.

Nonetheless, I encourage everyone to share their opinion on which Java library they think is best for web scraping.

Pros:

  • Simplifies URL fetching to the extreme (just one method).
  • Facilitates the use of cookies.
  • Allows the use of "CSS" selectors, known by any jQuery user; in my opinion the best way to select an element or a list of elements in a webpage (for other similar opinions see references [1] [2] [3]).

Cons:

  • XPath filtering is more standardized.

Links

Links to comparisons of XPath and CSS selectors:

[1] http://ejohn.org/blog/xpath-css-selectors/
[2] http://chrisfjay.blogspot.com/2007/08/css-and-xpath-selectors.html
[3] http://saucelabs.com/blog/index.php/2011/05/why-css-locators-are-the-way-to-go-vs-xpath/

Previous articles about web scraping with groovy:

[4] http://imediava.wordpress.com/2011/08/18/web-scraping-with-groovy-1-of-3/
[5] http://imediava.wordpress.com/2011/08/30/web-scraping-with-groovy-2-of-3/

Edited 22/10/2011: Grab with multiple named parameters has been replaced by the more concise version with only one parameter as suggested by Guillaume Laforge.

Web Scraping with Groovy 2 of 3 – XPath

In the previous article, Web Scraping with Groovy 1/3, we talked about how we could use Groovy features to make web scraping easy. In this one we'll exploit Java/Groovy interoperability, using some additional Java libraries to simplify the process even further with XPath.

We are going to keep using the same practical example as in the previous article: fetching http://www.bing.com/search?q=web+scraping and obtaining the result titles that match $('#results h3 a').

Web Scraping with XPath

URL fetching can be done exactly as in the previous article; parsing, however, needs to be completely modified. The reason is that Java's XPath support expects DOM documents, and I still haven't found an HTML DOM parser that can be used with Java's XPath. On the other hand, there are many HTML SAX parsers available, like the popular TagSoup, which we already used in the first post.

After a considerable effort, the only solution I have found is the one provided at Building a DOM with TagSoup. Adapted to our example, the code looks like the following:


import org.ccil.cowan.tagsoup.Parser;
import org.xml.sax.XMLReader;
import org.xml.sax.InputSource;
import javax.xml.transform.*;
import javax.xml.transform.dom.DOMResult;
import javax.xml.transform.sax.SAXSource;
import javax.xml.xpath.*

def urlString = "http://www.bing.com/search?q=web+scraping"
URL url = new URL(urlString);

@Grapes( @Grab('org.ccil.cowan.tagsoup:tagsoup:1.2') )
XMLReader reader = new Parser();
// Transform SAX to DOM
reader.setFeature(Parser.namespacesFeature, false);
reader.setFeature(Parser.namespacePrefixesFeature, false);
Transformer transformer = TransformerFactory.newInstance().newTransformer();
DOMResult result = new DOMResult();
transformer.transform(new SAXSource(reader, new InputSource(url.openStream())), result);

With the parsed HTML we can now use XPath's expressiveness to filter elements in the page's DOM. XPath allows better selection than GPath in a declarative way, and it benefits from being a standard that can be ported to other programming languages easily. To select the same elements as in the first example we just need:

def xpath = XPathFactory.newInstance().newXPath()

//JQuery selector: $('#results h3 a')
def results = xpath.evaluate( '//*[@id=\'results\']//h3/a', result.getNode(), XPathConstants.NODESET )

Simulating the '#' operator with XPath is quite complex compared with the simplicity of jQuery selectors. However, XPath is powerful enough to express anything that can be expressed with them, and it comes with its own advantages, such as the possibility of selecting all elements that have a child of a specific type. For example:

'//p[a]'  // Selects all "p" elements that have an "a" child

That is something that is impossible to do with CSS selectors.

Summary

Pros:

  • Very powerful and capable of covering any filtering need.
  • Less verbose than GPath.

Cons:

  • Needs a hack to allow HTML parsing with the Java SDK's XPath support.
  • Less prepared for HTML, which makes it more verbose than CSS selectors for operators like '#' or '.'.

Next

In the next article, the last of this series, I will talk about JSoup, a library that I have just recently discovered but which offers, in my opinion, the best alternative. We will see not only how this library simplifies element filtering but also how it comes with additional features that make web scraping even easier.

Edited 22/10/2011: Grab with multiple named parameters has been replaced by the more concise version with only one parameter as suggested by Guillaume Laforge.

Web Scraping with Groovy (1 of 3)

Web Scraping

Web scraping consists of extracting information from a webpage automatically. It works through a combination of URL fetching and HTML parsing. As an example for this article we are going to extract the main titles from the results of searching for "web scraping" on Microsoft's Bing.

As a reference for the article, searching "web scraping" with Bing is equivalent to accessing the following URL: http://www.bing.com/search?q=web+scraping

And the results' titles are selected by applying the following jQuery selector to the webpage's DOM:

$('#results h3 a')

Scraping with Groovy

Groovy's features make screen scraping easy. URL fetching in Groovy makes use of Java classes like java.net.URL, yet it is made easier by Groovy's additional methods, in this case withReader.

import org.ccil.cowan.tagsoup.Parser;
    
String ENCODING = "UTF-8"

@Grapes( @Grab('org.ccil.cowan.tagsoup:tagsoup:1.2') )       
def PARSER = new XmlSlurper(new Parser() )

def url = "http://www.bing.com/search?q=web+scraping"

new URL(url).withReader (ENCODING) { reader -> 

    def document = PARSER.parse(reader) 
    // Extracting information
}

HTML parsing can be done with any of the many available HTML-parsing Java tools, like TagSoup or CyberNeko. In this example we have used TagSoup, and we can see how easily we declare our dependency on the library thanks to Grapes.

On top of that, Groovy's XmlSlurper and GPath allow us to access specific parts of the parsed HTML in a convenient way. For the article's example we just need one line of code to extract the titles of the search results:

//JQuery selector: $('#results h3 a')
//Example 1
document.'**'.find{ it['@id'] == 'results'}.ul.li.div.div.h3.a.each { println it.text() }
//Example 2
document.'**'.find{ it['@id'] == 'results'}.'**'.findAll{ it.name() == 'h3'}.a.each { println it.text() }

In the snippet I have provided two different ways of achieving the same goal.

For both examples we first use Groovy's '**' to search all of the document's descendants; this way we can find the one whose id is 'results'.

Then, in the first example, we specify the full element path from the results element to the links that represent the titles. As we can see, this is less handy than just saying "I want all h3 descendants" the way it is done with jQuery.

The second example does exactly that: using the '**' operator it asks for all elements of type h3. However, if we keep comparing it with the jQuery version, the solution is still quite complex.

Summary

Pros:

  • Easy URL fetching thanks to withReader.
  • Parsing simplified thanks to XmlSlurper, with dependencies declared easily thanks to Grapes.

Cons:

  • Verbose for filtering descendants at lower levels.
  • Filtering based on id, class or attributes is complex compared with jQuery's #, . or [attribute=].

To sum up, we have seen that web scraping is made easier thanks to Groovy. However, it comes with some inconveniences, above all if we compare it with how easy it is to select elements with jQuery selectors.

In my next post I'm going to explore other libraries that simplify element filtering by providing support for things like XPath or even CSS selectors.

PS: This example's code is really simple, but if you still want to access it, it is available at this gist.

PS2: This set of articles is now going to be three articles long, with the first dedicated to GPath, the second to XPath and the last to the most interesting of them all, in my opinion: JSoup.

Edited 22/10/2011: Grab with multiple named parameters has been replaced by the more concise version with only one parameter as suggested by Guillaume Laforge.

Show additional information about an html table row with JQuery

In webpages, HTML tables are often used to show information about a list of items. Sometimes we need to let the user get additional information about an item by clicking on its row. In this post I am going to show, with a really simple example, how to create this effect with jQuery, AJAX and HTML5 data attributes.

The flow of information for this sample is similar to any AJAX request:

Flow of information for an AJAX request

  • When the user clicks on a row, it fires a JavaScript event that is captured by a JavaScript function (1).
  • Our function requests the additional information about the item from a web server (2).
  • Finally, the function receives the response and updates the HTML with it (3).

Next, I am going to get into the details of each step of the process in our example:

1. Capturing click event

First, we use jQuery to add to every row in our table an event listener in charge of handling the click event.

$(document).ready(function() {

	$('tr').click(function () {

	});

});

Then we need a way to identify the exact row that fired the click event. For this task I have used HTML5 data attributes; there is a really good explanation of them available on John Resig's blog. The short version is that these attributes allow us to add custom attributes to any HTML element. Every row in the HTML of our sample has a data-code attribute:

<tr data-code="smith">
    <td>John</td>
    <td>Smith</td>
    <td>john-smith@doesnt-exist.com</td>
    <td>555-045678</td>
</tr>

Thanks to that, we can get the item's code with jQuery:

$(document).ready(function() {

	$('tr').click(function () {

		var codigo = $(this).data("code");

	});

});

2. Requesting user’s information with AJAX

Now, to get the item's additional information, we use its code to make a request to our web server. To make the request we use jQuery's ajax function:

$(document).ready(function() {

	$('tr').click(function () {

		var codigo = $(this).data("code");

		$.ajax({
			url: codigo + ".html",
			dataType: "text",
			cache: false,
			success: function(html){
			}
		});

	});

});

In the code we can see that the request uses the item's code to ask the web server explicitly about the clicked item.
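For completeness, the file served back (smith.html in this case) would just be an HTML fragment ready to be injected into the page. A hypothetical sketch:

<!-- smith.html: fragment returned by the server for the "smith" row -->
<h4>John Smith</h4>
<p>Senior developer. Office 3B, extension 4567.</p>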

3. Updating the webpage with the additional information

Finally, when we receive the response from the web server, we use jQuery's html function to show it in our panel for additional information.

$(document).ready(function() {

	$('tr').click(function () {

		var codigo = $(this).data("code");

		$.ajax({
			url: codigo + ".html",
			dataType: "text",
			cache: false,
			success: function(html){
				$("#add-info-panel").html(html);
			}
		});

	});

});

PS: For the full code of our example just click here.
