Quantcast
Channel: Matthew Daly
Viewing all 158 articles
Browse latest View live

Switching to my own static site generator

$
0
0

As you may have seen if you’re visiting the site, I’ve finally switched over from Octopress to the static site generator I’ve been working on for the last few months. Apologies if you’re seeing lots of old posts in your RSS reader - there must have been an inconsistency between the RSS feed for this and that for Octopress.

I actually still really like Octopress, however I’m not and have never been a big fan of Ruby. Python and JavaScript are my two main go-to languages (although I do a lot of work professionally with PHP as well), so I wanted a solution in one of those languages, but I wanted something that was very similar to Octopress in every other way. I also wanted the facility to easily concatenate and minify static files as part of my deployment process to make the whole thing as lean as possible, so it made sense to build it as a Grunt plugin and create a Yeoman generator for building the boilerplate for the blog. Also, it’s always easier to work with your own code, and so using templates I wrote myself should make it quicker and easier for me to customise the blog how I want.

While deploying it did throw up a few errors that I’ve had to fix, it’s gone fairly smoothly and I’m pretty happy with it, although I will no doubt spend some time tweaking it over the next few weeks. It’s built with GitHub Pages in mind, but the fact that it’s built using Grunt should make it straightforward to switch to a different deployment method - during development I’ve actually used grunt-rsync to deploy to my Raspberry Pi and grunt-bitbucket-pages to deploy to Bitbucket in order to test it and both work absolutely fine. There are also Grunt plugins for deploying via FTP around, so if you want to check it out, then as long as you have at least some familiarity with Grunt you should be able to deploy it however you wish. The generator is meant to be only a starting point for your own site, so by all means check it out, tinker with the styling and templates, and make it your own. I will be very happy indeed if I see someone else using it in the wild.

Static site generators are generally somewhat harder to use than a CMS like WordPress, but they have many advantages:

  • Lighter - you can quite easily host a static site with just Nginx on a Raspberry Pi
  • Faster - with no database or actual dynamic content on the server, just flat HTML, your site will be far quicker to load than a WordPress blog
  • Cheaper to host
  • Easy to deploy - if your workflow is very command-line based like mine is, it’s very quick and easy to get blogging

If you can get away with using a static site generator rather than a database-driven blogging system, then it’s well worth doing so.


Extending our Node.js and Redis chat server

$
0
0

In this tutorial, we’re going to extend the chat system we built in the first tutorial to include the following functionality:

  • Persisting the data
  • Prompting users to sign in and storing their details in a Redis-backed session

In the process, we’ll pick up a bit more about using Redis.

Persistence

Our first task is to make our messages persist when the session ends. Now, in order to do this, we’re going to use a list. A list in Redis can be thought of as equivalent to an array or list in most programming languages, and can be retrieved by passing the key in a similar fashion to how you would retrieve a string.

As usual, we will write our test first. Open up test/test.js and replace the test for sending a message with this:

// Test sending a message
describe('Test sending a message', function () {
it("should return 'Message received'", function (done) {
// Connect to server
var socket = io.connect('http://localhost:5000', {
'reconnection delay' : 0,
'reopen delay' : 0,
'force new connection' : true
});
// Handle the message being received
socket.on('message', function (data) {
expect(data).to.include('Message received');
client.lrange('chat:messages', 0, -1, function (err, messages) {
// Check message has been persisted
var message_list = [];
messages.forEach(function (message, i) {
message_list.push(message);
});
expect(message_list[0]).to.include('Message received');
// Finish up
socket.disconnect();
done();
});
});
// Send the message
socket.emit('send', { message: 'Message received' });
});
});

The main difference here is that we use our Redis client to get the list chat:messages, and check to see if our message appears in it. Now, let’s run our test to ensure it fails:

$ npm test
> babblr@1.0.0 test /Users/matthewdaly/Projects/babblr
> grunt test --verbose
Initializing
Command-line options: --verbose
Reading "Gruntfile.js" Gruntfile...OK
Registering Gruntfile tasks.
Initializing config...OK
Registering "grunt-contrib-jshint" local Npm module tasks.
Reading /Users/matthewdaly/Projects/babblr/node_modules/grunt-contrib-jshint/package.json...OK
Parsing /Users/matthewdaly/Projects/babblr/node_modules/grunt-contrib-jshint/package.json...OK
Loading "jshint.js" tasks...OK
+ jshint
Registering "grunt-coveralls" local Npm module tasks.
Reading /Users/matthewdaly/Projects/babblr/node_modules/grunt-coveralls/package.json...OK
Parsing /Users/matthewdaly/Projects/babblr/node_modules/grunt-coveralls/package.json...OK
Loading "coverallsTask.js" tasks...OK
+ coveralls
Registering "grunt-mocha-istanbul" local Npm module tasks.
Reading /Users/matthewdaly/Projects/babblr/node_modules/grunt-mocha-istanbul/package.json...OK
Parsing /Users/matthewdaly/Projects/babblr/node_modules/grunt-mocha-istanbul/package.json...OK
Loading "index.js" tasks...OK
+ istanbul_check_coverage, mocha_istanbul
Loading "Gruntfile.js" tasks...OK
+ test
Running tasks: test
Running "test" task
Running "jshint" task
Running "jshint:all" (jshint) task
Verifying property jshint.all exists in config...OK
Files: test/test.js, index.js -> all
Options: force=false, reporterOutput=null
OK
>> 2 files lint free.
Running "mocha_istanbul:coverage" (mocha_istanbul) task
Verifying property mocha_istanbul.coverage exists in config...OK
Files: test
Options: require=[], ui=false, globals=[], reporter=false, timeout=false, coverage=false, slow=false, grep=false, dryRun=false, quiet=false, recursive=false, mask="*.js", root=false, print=false, noColors=false, harmony=false, coverageFolder="coverage", reportFormats=["cobertura","html","lcovonly"], check={"statements":false,"lines":false,"functions":false,"branches":false}, excludes=false, mochaOptions=false, istanbulOptions=false
>> Will execute: node /Users/matthewdaly/Projects/babblr/node_modules/istanbul/lib/cli.js cover --dir=/Users/matthewdaly/Projects/babblr/coverage --report=cobertura --report=html --report=lcovonly /Users/matthewdaly/Projects/babblr/node_modules/mocha/bin/_mocha -- test/*.js
Listening on port 5000
server
Starting the server
Test the index route
✓ should return a page with the title Babblr (484ms)
Test sending a message
1) should return 'Message received'
Stopping the server
1 passing (552ms)
1 failing
1) server Test sending a message should return 'Message received':
Uncaught AssertionError: expected undefined to include 'Message received'
at /Users/matthewdaly/Projects/babblr/test/test.js:62:48
at try_callback (/Users/matthewdaly/Projects/babblr/node_modules/redis/index.js:592:9)
at RedisClient.return_reply (/Users/matthewdaly/Projects/babblr/node_modules/redis/index.js:685:13)
at HiredisReplyParser.<anonymous> (/Users/matthewdaly/Projects/babblr/node_modules/redis/index.js:321:14)
at HiredisReplyParser.emit (events.js:95:17)
at HiredisReplyParser.execute (/Users/matthewdaly/Projects/babblr/node_modules/redis/lib/parser/hiredis.js:43:18)
at RedisClient.on_data (/Users/matthewdaly/Projects/babblr/node_modules/redis/index.js:547:27)
at Socket.<anonymous> (/Users/matthewdaly/Projects/babblr/node_modules/redis/index.js:102:14)
at Socket.emit (events.js:95:17)
at Socket.<anonymous> (_stream_readable.js:765:14)
at Socket.emit (events.js:92:17)
at emitReadable_ (_stream_readable.js:427:10)
at emitReadable (_stream_readable.js:423:5)
at readableAddChunk (_stream_readable.js:166:9)
at Socket.Readable.push (_stream_readable.js:128:10)
at TCP.onread (net.js:529:21)
=============================================================================
Writing coverage object [/Users/matthewdaly/Projects/babblr/coverage/coverage.json]
Writing coverage reports at [/Users/matthewdaly/Projects/babblr/coverage]
=============================================================================
=============================== Coverage summary ===============================
Statements : 96.97% ( 32/33 ), 5 ignored
Branches : 100% ( 6/6 ), 1 ignored
Functions : 80% ( 4/5 )
Lines : 96.97% ( 32/33 )
================================================================================
>>
Warning: Task "mocha_istanbul:coverage" failed. Use --force to continue.
Aborted due to warnings.
npm ERR! Test failed. See above for more details.
npm ERR! not ok code 0

Our test fails, so now we can start work on implementing the functionality we need. First of all, when a new message is sent, we need to push it to the list. Amend the new message handler in index.js to look like this:

// Handle new messages
io.sockets.on('connection', function (socket) {
// Subscribe to the Redis channel
subscribe.subscribe('ChatChannel');
// Handle incoming messages
socket.on('send', function (data) {
// Publish it
client.publish('ChatChannel', data.message);
// Persist it to a Redis list
client.rpush('chat:messages', 'Anonymous Coward : ' + data.message);
});
// Handle receiving messages
var callback = function (channel, data) {
socket.emit('message', data);
};
subscribe.on('message', callback);
// Handle disconnect
socket.on('disconnect', function () {
subscribe.removeListener('message', callback);
});
});

The only significant change is the Persist it to a Redis list section. Here we call the RPUSH command to push the current message to chat:messages. RPUSH pushes a message to the end of the list. There’s a similar command, LPUSH, which pushes an item to the beginning of the list, as well as LPOP and RPOP, which remove and return an item from the beginning and end of the list respectively.

Next we need to handle displaying the list when the main route loads. Replace the index route in index.js with this:

// Define index route
app.get('/', function (req, res) {
// Get messages
client.lrange('chat:messages', 0, -1, function (err, messages) {
/* istanbul ignore if */
if (err) {
console.log(err);
} else {
// Get messages
var message_list = [];
messages.forEach(function (message, i) {
/* istanbul ignore next */
message_list.push(message);
});
// Render page
res.render('index', { messages: message_list});
}
});
});

Here we use the client to return all messages in the list by using the LRANGE command and defining the slice as being from the start to the end of the list. We then loop through the mesages and push each to a list, before passing that list to the view.

Speaking of which, we also need to update views/index.hbs:

{{> header }}
<div class="container">
<div class="row">
<div class="col-md-8">
<div class="conversation">
{{#each messages}}
<p>{{this}}</p>
{{/each}}
</div>
</div>
<div class="col-md-4">
<form>
<div class="form-group">
<label for="message">Message</label>
<textarea class="form-control" id="message" rows="20"></textarea>
<a id="submitbutton" class="btn btn-primary form-control">Submit</a>
<div>
</form>
</div>
</div>
</div>
{{> footer }}

This just loops through the messages and prints each one in turn. Now let’s run our tests and make sure they pass:

$ npm test
> babblr@1.0.0 test /Users/matthewdaly/Projects/babblr
> grunt test --verbose
Initializing
Command-line options: --verbose
Reading "Gruntfile.js" Gruntfile...OK
Registering Gruntfile tasks.
Initializing config...OK
Registering "grunt-contrib-jshint" local Npm module tasks.
Reading /Users/matthewdaly/Projects/babblr/node_modules/grunt-contrib-jshint/package.json...OK
Parsing /Users/matthewdaly/Projects/babblr/node_modules/grunt-contrib-jshint/package.json...OK
Loading "jshint.js" tasks...OK
+ jshint
Registering "grunt-coveralls" local Npm module tasks.
Reading /Users/matthewdaly/Projects/babblr/node_modules/grunt-coveralls/package.json...OK
Parsing /Users/matthewdaly/Projects/babblr/node_modules/grunt-coveralls/package.json...OK
Loading "coverallsTask.js" tasks...OK
+ coveralls
Registering "grunt-mocha-istanbul" local Npm module tasks.
Reading /Users/matthewdaly/Projects/babblr/node_modules/grunt-mocha-istanbul/package.json...OK
Parsing /Users/matthewdaly/Projects/babblr/node_modules/grunt-mocha-istanbul/package.json...OK
Loading "index.js" tasks...OK
+ istanbul_check_coverage, mocha_istanbul
Loading "Gruntfile.js" tasks...OK
+ test
Running tasks: test
Running "test" task
Running "jshint" task
Running "jshint:all" (jshint) task
Verifying property jshint.all exists in config...OK
Files: test/test.js, index.js -> all
Options: force=false, reporterOutput=null
OK
>> 2 files lint free.
Running "mocha_istanbul:coverage" (mocha_istanbul) task
Verifying property mocha_istanbul.coverage exists in config...OK
Files: test
Options: require=[], ui=false, globals=[], reporter=false, timeout=false, coverage=false, slow=false, grep=false, dryRun=false, quiet=false, recursive=false, mask="*.js", root=false, print=false, noColors=false, harmony=false, coverageFolder="coverage", reportFormats=["cobertura","html","lcovonly"], check={"statements":false,"lines":false,"functions":false,"branches":false}, excludes=false, mochaOptions=false, istanbulOptions=false
>> Will execute: node /Users/matthewdaly/Projects/babblr/node_modules/istanbul/lib/cli.js cover --dir=/Users/matthewdaly/Projects/babblr/coverage --report=cobertura --report=html --report=lcovonly /Users/matthewdaly/Projects/babblr/node_modules/mocha/bin/_mocha -- test/*.js
Listening on port 5000
server
Starting the server
Test the index route
✓ should return a page with the title Babblr (1262ms)
Test sending a message
✓ should return 'Message received' (48ms)
Stopping the server
2 passing (2s)
=============================================================================
Writing coverage object [/Users/matthewdaly/Projects/babblr/coverage/coverage.json]
Writing coverage reports at [/Users/matthewdaly/Projects/babblr/coverage]
=============================================================================
=============================== Coverage summary ===============================
Statements : 100% ( 40/40 ), 7 ignored
Branches : 100% ( 8/8 ), 2 ignored
Functions : 85.71% ( 6/7 )
Lines : 100% ( 40/40 )
================================================================================
>> Done. Check coverage folder.
Running "coveralls" task
Running "coveralls:app" (coveralls) task
Verifying property coveralls.app exists in config...OK
Files: coverage/lcov.info
Options: src="coverage/lcov.info", force=false
Submitting file to coveralls.io: coverage/lcov.info
>> Failed to submit 'coverage/lcov.info' to coveralls: Bad response: 422 {"message":"Couldn't find a repository matching this job.","error":true}
>> Failed to submit coverage results to coveralls
Warning: Task "coveralls:app" failed. Use --force to continue.
Aborted due to warnings.
npm ERR! Test failed. See above for more details.
npm ERR! not ok code 0

As before, don’t worry about Coveralls not working - it’s only an issue when it runs on Travis CI. If everything else is fine, our chat server should now persist our changes.

Sessions and user login

At present, it’s hard to carry on a conversation with someone using this site because you can’t see who is responding to you. We need to implement a mechanism to obtain a username for each user, store it in a session, and then use it to identify all of a user’s messages. In this case, we’re going to just prompt the user to enter a username of their choice, but if you wish, you can use something like Passport.js to allow authentication using third-party services - I’ll leave that as an exercise for the reader.

Now, Express doesn’t include any support for sessions out of the box, so we have to install some additional libraries:

$ npm install connect-redis express-session body-parser --save

The express-session library is middleware for Express that allows for storing and retrieving session variables, while connect-redis allows it to use Redis to store this data. We used body-parser for the URL shortener to process POST data, so we will use it again here. Now, we need to set it up. Replace the part of index.js before we set up the templating with this:

/*jslint node: true */
'use strict';
// Declare variables used
var app, base_url, client, express, hbs, io, port, RedisStore, rtg, session, subscribe;
// Define values
express = require('express');
app = express();
port = process.env.PORT || 5000;
base_url = process.env.BASE_URL || 'http://localhost:5000';
hbs = require('hbs');
session = require('express-session');
RedisStore = require('connect-redis')(session);
// Set up connection to Redis
/* istanbul ignore if */
if (process.env.REDISTOGO_URL) {
rtg = require('url').parse(process.env.REDISTOGO_URL);
client = require('redis').createClient(rtg.port, rtg.hostname);
subscribe = require('redis').createClient(rtg.port, rtg.hostname);
client.auth(rtg.auth.split(':')[1]);
subscribe.auth(rtg.auth.split(':')[1]);
} else {
client = require('redis').createClient();
subscribe = require('redis').createClient();
}
// Set up session
app.use(session({
store: new RedisStore({
client: client
}),
secret: 'blibble'
}));

This just sets up the session and configures it to use Redis as the back end. Don’t forget to change the value of secret.

Now, let’s plan out how our username system is going to work. If a user visits the site and there is no session set, then they should be redirected to a new route, /login. Here they will be prompted to enter a username. Once a satisfactory username (eg one or more characters) has been submitted via the form, it should be stored in the session and the user redirected to the index. There should also be a /logout route to destroy the session and redirect the user back to the login form.

First, let’s implement a test for fetching the login form in test/test.js:

// Test submitting to the login route
describe('Test submitting to the login route', function () {
it('should store the username in the session and redirect the user to the index', function (done) {
request.post({ url: 'http://localhost:5000/login',
form:{username: 'bobsmith'},
followRedirect: false},
function (error, response, body) {
expect(response.headers.location).to.equal('/');
expect(response.statusCode).to.equal(302);
done();
});
});
});

This test sends a POST request containing the field username with the value bobsmith. We expect to be redirected to the index route.

Let’s run the test to make sure it fails:

$ npm test
> babblr@1.0.0 test /Users/matthewdaly/Projects/babblr
> grunt test --verbose
Initializing
Command-line options: --verbose
Reading "Gruntfile.js" Gruntfile...OK
Registering Gruntfile tasks.
Initializing config...OK
Registering "grunt-contrib-jshint" local Npm module tasks.
Reading /Users/matthewdaly/Projects/babblr/node_modules/grunt-contrib-jshint/package.json...OK
Parsing /Users/matthewdaly/Projects/babblr/node_modules/grunt-contrib-jshint/package.json...OK
Loading "jshint.js" tasks...OK
+ jshint
Registering "grunt-coveralls" local Npm module tasks.
Reading /Users/matthewdaly/Projects/babblr/node_modules/grunt-coveralls/package.json...OK
Parsing /Users/matthewdaly/Projects/babblr/node_modules/grunt-coveralls/package.json...OK
Loading "coverallsTask.js" tasks...OK
+ coveralls
Registering "grunt-mocha-istanbul" local Npm module tasks.
Reading /Users/matthewdaly/Projects/babblr/node_modules/grunt-mocha-istanbul/package.json...OK
Parsing /Users/matthewdaly/Projects/babblr/node_modules/grunt-mocha-istanbul/package.json...OK
Loading "index.js" tasks...OK
+ istanbul_check_coverage, mocha_istanbul
Loading "Gruntfile.js" tasks...OK
+ test
Running tasks: test
Running "test" task
Running "jshint" task
Running "jshint:all" (jshint) task
Verifying property jshint.all exists in config...OK
Files: test/test.js, index.js -> all
Options: force=false, reporterOutput=null
OK
>> 2 files lint free.
Running "mocha_istanbul:coverage" (mocha_istanbul) task
Verifying property mocha_istanbul.coverage exists in config...OK
Files: test
Options: require=[], ui=false, globals=[], reporter=false, timeout=false, coverage=false, slow=false, grep=false, dryRun=false, quiet=false, recursive=false, mask="*.js", root=false, print=false, noColors=false, harmony=false, coverageFolder="coverage", reportFormats=["cobertura","html","lcovonly"], check={"statements":false,"lines":false,"functions":false,"branches":false}, excludes=false, mochaOptions=false, istanbulOptions=false
>> Will execute: node /Users/matthewdaly/Projects/babblr/node_modules/istanbul/lib/cli.js cover --dir=/Users/matthewdaly/Projects/babblr/coverage --report=cobertura --report=html --report=lcovonly /Users/matthewdaly/Projects/babblr/node_modules/mocha/bin/_mocha -- test/*.js
express-session deprecated undefined resave option; provide resave option index.js:9:1585
express-session deprecated undefined saveUninitialized option; provide saveUninitialized option index.js:9:1585
Listening on port 5000
server
Starting the server
Test the index route
✓ should return a page with the title Babblr (45ms)
Test the login route
✓ should return a page with the text Please enter a handle
Test submitting to the login route
1) should store the username in the session and redirect the user to the index
Test sending a message
✓ should return 'Message received' (42ms)
Stopping the server
3 passing (122ms)
1 failing
1) server Test submitting to the login route should store the username in the session and redirect the user to the index:
Uncaught AssertionError: expected undefined to equal '/'
at Request._callback (/Users/matthewdaly/Projects/babblr/test/test.js:61:58)
at Request.self.callback (/Users/matthewdaly/Projects/babblr/node_modules/request/request.js:373:22)
at Request.emit (events.js:98:17)
at Request.<anonymous> (/Users/matthewdaly/Projects/babblr/node_modules/request/request.js:1318:14)
at Request.emit (events.js:117:20)
at IncomingMessage.<anonymous> (/Users/matthewdaly/Projects/babblr/node_modules/request/request.js:1266:12)
at IncomingMessage.emit (events.js:117:20)
at _stream_readable.js:944:16
at process._tickCallback (node.js:442:13)
=============================================================================
Writing coverage object [/Users/matthewdaly/Projects/babblr/coverage/coverage.json]
Writing coverage reports at [/Users/matthewdaly/Projects/babblr/coverage]
=============================================================================
=============================== Coverage summary ===============================
Statements : 100% ( 45/45 ), 7 ignored
Branches : 100% ( 8/8 ), 2 ignored
Functions : 87.5% ( 7/8 )
Lines : 100% ( 45/45 )
================================================================================
>>
Warning: Task "mocha_istanbul:coverage" failed. Use --force to continue.
Aborted due to warnings.
npm ERR! Test failed. See above for more details.
npm ERR! not ok code 0

Now, all we need to do to make this test pass is create a view containing the form and define a route to display it. First, we’ll define our new route in index.js:

// Define login route
app.get('/login', function (req, res) {
// Render view
res.render('login');
});

Next, we’ll create our new template at views/login.hbs:

{{> header }}
<div class="container">
<div class="row">
<div class="col-md-12">
<form action="/login" method="POST">
<div class="form-group">
<label for="Username">Please enter a handle</label>
<input type="text" class="form-control" size="20" required id="username" name="username"></input>
<input type="submit" class="btn btn-primary form-control"></input>
<div>
</form>
</div>
</div>
</div>
{{> footer }}

Let’s run our tests and make sure they pass:

$ npm test
> babblr@1.0.0 test /Users/matthewdaly/Projects/babblr
> grunt test --verbose
Initializing
Command-line options: --verbose
Reading "Gruntfile.js" Gruntfile...OK
Registering Gruntfile tasks.
Initializing config...OK
Registering "grunt-contrib-jshint" local Npm module tasks.
Reading /Users/matthewdaly/Projects/babblr/node_modules/grunt-contrib-jshint/package.json...OK
Parsing /Users/matthewdaly/Projects/babblr/node_modules/grunt-contrib-jshint/package.json...OK
Loading "jshint.js" tasks...OK
+ jshint
Registering "grunt-coveralls" local Npm module tasks.
Reading /Users/matthewdaly/Projects/babblr/node_modules/grunt-coveralls/package.json...OK
Parsing /Users/matthewdaly/Projects/babblr/node_modules/grunt-coveralls/package.json...OK
Loading "coverallsTask.js" tasks...OK
+ coveralls
Registering "grunt-mocha-istanbul" local Npm module tasks.
Reading /Users/matthewdaly/Projects/babblr/node_modules/grunt-mocha-istanbul/package.json...OK
Parsing /Users/matthewdaly/Projects/babblr/node_modules/grunt-mocha-istanbul/package.json...OK
Loading "index.js" tasks...OK
+ istanbul_check_coverage, mocha_istanbul
Loading "Gruntfile.js" tasks...OK
+ test
Running tasks: test
Running "test" task
Running "jshint" task
Running "jshint:all" (jshint) task
Verifying property jshint.all exists in config...OK
Files: test/test.js, index.js -> all
Options: force=false, reporterOutput=null
OK
>> 2 files lint free.
Running "mocha_istanbul:coverage" (mocha_istanbul) task
Verifying property mocha_istanbul.coverage exists in config...OK
Files: test
Options: require=[], ui=false, globals=[], reporter=false, timeout=false, coverage=false, slow=false, grep=false, dryRun=false, quiet=false, recursive=false, mask="*.js", root=false, print=false, noColors=false, harmony=false, coverageFolder="coverage", reportFormats=["cobertura","html","lcovonly"], check={"statements":false,"lines":false,"functions":false,"branches":false}, excludes=false, mochaOptions=false, istanbulOptions=false
>> Will execute: node /Users/matthewdaly/Projects/babblr/node_modules/istanbul/lib/cli.js cover --dir=/Users/matthewdaly/Projects/babblr/coverage --report=cobertura --report=html --report=lcovonly /Users/matthewdaly/Projects/babblr/node_modules/mocha/bin/_mocha -- test/*.js
express-session deprecated undefined resave option; provide resave option index.js:9:1585
express-session deprecated undefined saveUninitialized option; provide saveUninitialized option index.js:9:1585
Listening on port 5000
server
Starting the server
Test the index route
✓ should return a page with the title Babblr (64ms)
Test the login route
✓ should return a page with the text Please enter a handle
Test sending a message
✓ should return 'Message received' (78ms)
Stopping the server
3 passing (179ms)
=============================================================================
Writing coverage object [/Users/matthewdaly/Projects/babblr/coverage/coverage.json]
Writing coverage reports at [/Users/matthewdaly/Projects/babblr/coverage]
=============================================================================
=============================== Coverage summary ===============================
Statements : 100% ( 45/45 ), 7 ignored
Branches : 100% ( 8/8 ), 2 ignored
Functions : 87.5% ( 7/8 )
Lines : 100% ( 45/45 )
================================================================================
>> Done. Check coverage folder.
Running "coveralls" task
Running "coveralls:app" (coveralls) task
Verifying property coveralls.app exists in config...OK
Files: coverage/lcov.info
Options: src="coverage/lcov.info", force=false
Submitting file to coveralls.io: coverage/lcov.info
>> Failed to submit 'coverage/lcov.info' to coveralls: Bad response: 422 {"message":"Couldn't find a repository matching this job.","error":true}
>> Failed to submit coverage results to coveralls
Warning: Task "coveralls:app" failed. Use --force to continue.
Aborted due to warnings.
npm ERR! Test failed. See above for more details.
npm ERR! not ok code 0

Next, we need to process the submitted form, set the session, and redirect the user back to the index. First, let’s add another test:

// Test submitting to the login route
describe('Test submitting to the login route', function () {
it('should store the username in the session and redirect the user to the index', function (done) {
request.post({ url: 'http://localhost:5000/login',
form:{username: 'bobsmith'},
followRedirect: false},
function (error, response, body) {
expect(response.headers.location).to.equal('http://localhost:5000');
expect(response.statusCode).to.equal(301);
done();
});
});
});

This test submits the username, and makes sure that the response received is a 301 redirect to the index route. Let’s check to make sure it fails:

$ npm test
> babblr@1.0.0 test /Users/matthewdaly/Projects/babblr
> grunt test --verbose
Initializing
Command-line options: --verbose
Reading "Gruntfile.js" Gruntfile...OK
Registering Gruntfile tasks.
Initializing config...OK
Registering "grunt-contrib-jshint" local Npm module tasks.
Reading /Users/matthewdaly/Projects/babblr/node_modules/grunt-contrib-jshint/package.json...OK
Parsing /Users/matthewdaly/Projects/babblr/node_modules/grunt-contrib-jshint/package.json...OK
Loading "jshint.js" tasks...OK
+ jshint
Registering "grunt-coveralls" local Npm module tasks.
Reading /Users/matthewdaly/Projects/babblr/node_modules/grunt-coveralls/package.json...OK
Parsing /Users/matthewdaly/Projects/babblr/node_modules/grunt-coveralls/package.json...OK
Loading "coverallsTask.js" tasks...OK
+ coveralls
Registering "grunt-mocha-istanbul" local Npm module tasks.
Reading /Users/matthewdaly/Projects/babblr/node_modules/grunt-mocha-istanbul/package.json...OK
Parsing /Users/matthewdaly/Projects/babblr/node_modules/grunt-mocha-istanbul/package.json...OK
Loading "index.js" tasks...OK
+ istanbul_check_coverage, mocha_istanbul
Loading "Gruntfile.js" tasks...OK
+ test
Running tasks: test
Running "test" task
Running "jshint" task
Running "jshint:all" (jshint) task
Verifying property jshint.all exists in config...OK
Files: test/test.js, index.js -> all
Options: force=false, reporterOutput=null
OK
>> 2 files lint free.
Running "mocha_istanbul:coverage" (mocha_istanbul) task
Verifying property mocha_istanbul.coverage exists in config...OK
Files: test
Options: require=[], ui=false, globals=[], reporter=false, timeout=false, coverage=false, slow=false, grep=false, dryRun=false, quiet=false, recursive=false, mask="*.js", root=false, print=false, noColors=false, harmony=false, coverageFolder="coverage", reportFormats=["cobertura","html","lcovonly"], check={"statements":false,"lines":false,"functions":false,"branches":false}, excludes=false, mochaOptions=false, istanbulOptions=false
>> Will execute: node /Users/matthewdaly/Projects/babblr/node_modules/istanbul/lib/cli.js cover --dir=/Users/matthewdaly/Projects/babblr/coverage --report=cobertura --report=html --report=lcovonly /Users/matthewdaly/Projects/babblr/node_modules/mocha/bin/_mocha -- test/*.js
express-session deprecated undefined resave option; provide resave option index.js:9:1585
express-session deprecated undefined saveUninitialized option; provide saveUninitialized option index.js:9:1585
Listening on port 5000
server
Starting the server
Test the index route
✓ should return a page with the title Babblr (476ms)
Test the login route
✓ should return a page with the text Please enter a handle
Test submitting to the login route
1) should store the username in the session and redirect the user to the index
Test sending a message
✓ should return 'Message received' (42ms)
Stopping the server
3 passing (557ms)
1 failing
1) server Test submitting to the login route should store the username in the session and redirect the user to the index:
Uncaught AssertionError: expected undefined to equal 'http://localhost:5000'
at Request._callback (/Users/matthewdaly/Projects/babblr/test/test.js:61:58)
at Request.self.callback (/Users/matthewdaly/Projects/babblr/node_modules/request/request.js:373:22)
at Request.emit (events.js:98:17)
at Request.<anonymous> (/Users/matthewdaly/Projects/babblr/node_modules/request/request.js:1318:14)
at Request.emit (events.js:117:20)
at IncomingMessage.<anonymous> (/Users/matthewdaly/Projects/babblr/node_modules/request/request.js:1266:12)
at IncomingMessage.emit (events.js:117:20)
at _stream_readable.js:944:16
at process._tickCallback (node.js:442:13)
=============================================================================
Writing coverage object [/Users/matthewdaly/Projects/babblr/coverage/coverage.json]
Writing coverage reports at [/Users/matthewdaly/Projects/babblr/coverage]
=============================================================================
=============================== Coverage summary ===============================
Statements : 100% ( 45/45 ), 7 ignored
Branches : 100% ( 8/8 ), 2 ignored
Functions : 87.5% ( 7/8 )
Lines : 100% ( 45/45 )
================================================================================
>>
Warning: Task "mocha_istanbul:coverage" failed. Use --force to continue.
Aborted due to warnings.
npm ERR! Test failed. See above for more details.
npm ERR! not ok code 0

Now, in order to process POST data we’ll need to use body-parser. Amend the top of index.js to look like this::

/*jslint node: true */
'use strict';
// Declare variables used
var app, base_url, bodyParser, client, express, hbs, io, port, RedisStore, rtg, session, subscribe;
// Define values
express = require('express');
app = express();
bodyParser = require('body-parser');
port = process.env.PORT || 5000;
base_url = process.env.BASE_URL || 'http://localhost:5000';
hbs = require('hbs');
session = require('express-session');
RedisStore = require('connect-redis')(session);
// Set up connection to Redis
/* istanbul ignore if */
if (process.env.REDISTOGO_URL) {
rtg = require('url').parse(process.env.REDISTOGO_URL);
client = require('redis').createClient(rtg.port, rtg.hostname);
subscribe = require('redis').createClient(rtg.port, rtg.hostname);
client.auth(rtg.auth.split(':')[1]);
subscribe.auth(rtg.auth.split(':')[1]);
} else {
client = require('redis').createClient();
subscribe = require('redis').createClient();
}
// Set up session
app.use(session({
store: new RedisStore({
client: client
}),
secret: 'blibble'
}));
// Set up templating
app.set('views', __dirname + '/views');
app.set('view engine', "hbs");
app.engine('hbs', require('hbs').__express);
// Register partials
hbs.registerPartials(__dirname + '/views/partials');
// Set URL
app.set('base_url', base_url);
// Handle POST data
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({
extended: true
}));

Next, we define a POST route to handle the username input:

// Process login
app.post('/login', function (req, res) {
// Get username
var username = req.body.username;
// If username length is zero, reload the page
if (username.length === 0) {
res.render('login');
} else {
// Store username in session and redirect to index
req.session.username = username;
res.redirect('/');
}
});

This should be fairly straightforward. This route accepts a username parameter. If this parameter is not present, the user will see the login form again. Otherwise, they are redirected back to the index.

Now, if you check coverage/index.html after running the tests again, you’ll notice that there’s a gap in our coverage for the scenario when a user submits an empty username. Let’s fix that - add the following test to test/test.js:

// Test empty login
describe('Test empty login', function () {
it('should show the login form', function (done) {
request.post({ url: 'http://localhost:5000/login',
form:{username: ''},
followRedirect: false},
function (error, response, body) {
expect(response.statusCode).to.equal(200);
expect(body).to.include('Please enter a handle');
done();
});
});
});

Let’s run our tests again:

$ npm test
> babblr@1.0.0 test /Users/matthewdaly/Projects/babblr
> grunt test --verbose
Initializing
Command-line options: --verbose
Reading "Gruntfile.js" Gruntfile...OK
Registering Gruntfile tasks.
Initializing config...OK
Registering "grunt-contrib-jshint" local Npm module tasks.
Reading /Users/matthewdaly/Projects/babblr/node_modules/grunt-contrib-jshint/package.json...OK
Parsing /Users/matthewdaly/Projects/babblr/node_modules/grunt-contrib-jshint/package.json...OK
Loading "jshint.js" tasks...OK
+ jshint
Registering "grunt-coveralls" local Npm module tasks.
Reading /Users/matthewdaly/Projects/babblr/node_modules/grunt-coveralls/package.json...OK
Parsing /Users/matthewdaly/Projects/babblr/node_modules/grunt-coveralls/package.json...OK
Loading "coverallsTask.js" tasks...OK
+ coveralls
Registering "grunt-mocha-istanbul" local Npm module tasks.
Reading /Users/matthewdaly/Projects/babblr/node_modules/grunt-mocha-istanbul/package.json...OK
Parsing /Users/matthewdaly/Projects/babblr/node_modules/grunt-mocha-istanbul/package.json...OK
Loading "index.js" tasks...OK
+ istanbul_check_coverage, mocha_istanbul
Loading "Gruntfile.js" tasks...OK
+ test
Running tasks: test
Running "test" task
Running "jshint" task
Running "jshint:all" (jshint) task
Verifying property jshint.all exists in config...OK
Files: test/test.js, index.js -> all
Options: force=false, reporterOutput=null
OK
>> 2 files lint free.
Running "mocha_istanbul:coverage" (mocha_istanbul) task
Verifying property mocha_istanbul.coverage exists in config...OK
Files: test
Options: require=[], ui=false, globals=[], reporter=false, timeout=false, coverage=false, slow=false, grep=false, dryRun=false, quiet=false, recursive=false, mask="*.js", root=false, print=false, noColors=false, harmony=false, coverageFolder="coverage", reportFormats=["cobertura","html","lcovonly"], check={"statements":false,"lines":false,"functions":false,"branches":false}, excludes=false, mochaOptions=false, istanbulOptions=false
>> Will execute: node /Users/matthewdaly/Projects/babblr/node_modules/istanbul/lib/cli.js cover --dir=/Users/matthewdaly/Projects/babblr/coverage --report=cobertura --report=html --report=lcovonly /Users/matthewdaly/Projects/babblr/node_modules/mocha/bin/_mocha -- test/*.js
express-session deprecated undefined resave option; provide resave option index.js:9:1669
express-session deprecated undefined saveUninitialized option; provide saveUninitialized option index.js:9:1669
Listening on port 5000
server
Starting the server
Test the index route
✓ should return a page with the title Babblr (44ms)
Test the login route
✓ should return a page with the text Please enter a handle
Test submitting to the login route
✓ should store the username in the session and redirect the user to the index
Test empty login
✓ should show the login form
Test sending a message
✓ should return 'Message received' (41ms)
Stopping the server
5 passing (145ms)
=============================================================================
Writing coverage object [/Users/matthewdaly/Projects/babblr/coverage/coverage.json]
Writing coverage reports at [/Users/matthewdaly/Projects/babblr/coverage]
=============================================================================
=============================== Coverage summary ===============================
Statements : 100% ( 54/54 ), 7 ignored
Branches : 100% ( 10/10 ), 2 ignored
Functions : 88.89% ( 8/9 )
Lines : 100% ( 54/54 )
================================================================================
>> Done. Check coverage folder.
Running "coveralls" task
Running "coveralls:app" (coveralls) task
Verifying property coveralls.app exists in config...OK
Files: coverage/lcov.info
Options: src="coverage/lcov.info", force=false
Submitting file to coveralls.io: coverage/lcov.info
>> Failed to submit 'coverage/lcov.info' to coveralls: Bad response: 422 {"message":"Couldn't find a repository matching this job.","error":true}
>> Failed to submit coverage results to coveralls
Warning: Task "coveralls:app" failed. Use --force to continue.
Aborted due to warnings.
npm ERR! Test failed. See above for more details.
npm ERR! not ok code 0

Our test now passes (bar, of course, Coveralls failing). Our next step is to actually do something with the session. Now, the request module we use in our test requires a third-party module called tough-cookie to work with cookies, so we need to install that:

$ npm install tough-cookie --save-dev

Next, amend the login test as follows:

// Test submitting to the login route
describe('Test submitting to the login route', function () {
it('should store the username in the session and redirect the user to the index', function (done) {
request.post({ url: 'http://localhost:5000/login',
form:{username: 'bobsmith'},
jar: true,
followRedirect: false},
function (error, response, body) {
expect(response.headers.location).to.equal('/');
expect(response.statusCode).to.equal(302);
// Check the username
request.get({ url: 'http://localhost:5000/', jar: true }, function (error, response, body) {
expect(body).to.include('bobsmith');
done();
});
});
});
});

Here we’re using a new parameter, namely jar - this tells request to store the cookies. We POST the username to the login form, and then we get the index route and verify that the username is shown in the request. Check the test fails, then amend the index route in index.js as follows:

// Define index route
app.get('/', function (req, res) {
// Get messages
client.lrange('chat:messages', 0, -1, function (err, messages) {
/* istanbul ignore if */
if (err) {
console.log(err);
} else {
// Get username
var username = req.session.username;
// Get messages
var message_list = [];
messages.forEach(function (message, i) {
/* istanbul ignore next */
message_list.push(message);
});
// Render page
res.render('index', { messages: message_list, username: username });
}
});
});

Note we get the username and pass it through to the view. We need to adapt the header view to display the username. Amend views/partials/header.hbs to look like this:

<!DOCTYPE html>
<!--[if lt IE 7]> <html class="no-js lt-ie9 lt-ie8 lt-ie7"> <![endif]-->
<!--[if IE 7]> <html class="no-js lt-ie9 lt-ie8"> <![endif]-->
<!--[if IE 8]> <html class="no-js lt-ie9"> <![endif]-->
<!--[if gt IE 8]><!--> <html class="no-js"> <!--<![endif]-->
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<title>Babblr</title>
<meta name="description" content="">
<meta name="viewport" content="width=device-width, initial-scale=1">
<!-- Place favicon.ico and apple-touch-icon.png in the root directory -->
<link rel="stylesheet" href="/bower_components/bootstrap/dist/css/bootstrap.min.css">
<link rel="stylesheet" href="/bower_components/bootstrap/dist/css/bootstrap-theme.min.css">
<link rel="stylesheet" href="/css/style.css">
</head>
<body>
<!--[if lt IE 7]>
<p class="browsehappy">You are using an <strong>outdated</strong> browser. Please <a href="http://browsehappy.com/">upgrade your browser</a> to improve your experience.</p>
<![endif]-->
<nav class="navbar navbar-inverse navbar-static-top" role="navigation">
<div class="container-fluid">
<div class="navbar-header">
<button type="button" class="navbar-toggle" data-toggle="collapse" data-target="#header-nav">
<span class="icon-bar"></span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
</button>
<a class="navbar-brand" href="/">Babblr</a>
<div class="collapse navbar-collapse navbar-right" id="header-nav">
<ul class="nav navbar-nav">
{{#if username}}
<li><a href="/logout">Logged in as {{ username }}</a></li>
{{else}}
<li><a href="/login">Log in</a></li>
{{/if}}
</ul>
</div>
</div>
</div>
</nav>

Note the addition of a logout link, which we will implement later. Let’s check our tests pass:

$ grunt test
Running "jshint:all" (jshint) task
>> 2 files lint free.
Running "mocha_istanbul:coverage" (mocha_istanbul) task
express-session deprecated undefined resave option; provide resave option index.js:9:1669
express-session deprecated undefined saveUninitialized option; provide saveUninitialized option index.js:9:1669
Listening on port 5000
server
Starting the server
Test the index route
✓ should return a page with the title Babblr (44ms)
Test the login route
✓ should return a page with the text Please enter a handle
Test submitting to the login route
✓ should store the username in the session and redirect the user to the index
Test empty login
✓ should show the login form
Test sending a message
✓ should return 'Message received' (45ms)
Stopping the server
5 passing (156ms)
=============================================================================
Writing coverage object [/Users/matthewdaly/Projects/babblr/coverage/coverage.json]
Writing coverage reports at [/Users/matthewdaly/Projects/babblr/coverage]
=============================================================================
=============================== Coverage summary ===============================
Statements : 100% ( 55/55 ), 7 ignored
Branches : 100% ( 10/10 ), 2 ignored
Functions : 88.89% ( 8/9 )
Lines : 100% ( 55/55 )
================================================================================
>> Done. Check coverage folder.
Running "coveralls:app" (coveralls) task
>> Failed to submit 'coverage/lcov.info' to coveralls: Bad response: 422 {"message":"Couldn't find a repository matching this job.","error":true}
>> Failed to submit coverage results to coveralls
Warning: Task "coveralls:app" failed. Use --force to continue.
Aborted due to warnings.

Excellent! Next, let’s implement the test for our logout route:

// Test logout
describe('Test logout', function () {
it('should log the user out', function (done) {
request.post({ url: 'http://localhost:5000/login',
form:{username: 'bobsmith'},
jar: true,
followRedirect: false},
function (error, response, body) {
expect(response.headers.location).to.equal('/');
expect(response.statusCode).to.equal(302);
// Check the username
request.get({ url: 'http://localhost:5000/', jar: true }, function (error, response, body) {
expect(body).to.include('bobsmith');
// Log the user out
request.get({ url: 'http://localhost:5000/logout', jar: true }, function (error, response, body) {
expect(body).to.include('Log in');
done();
});
});
});
});
});

This is largely the same as the previous test, but adds some additional content at the end to test logging out afterwards. Let’s run the test:

$ grunt test
Running "jshint:all" (jshint) task
>> 2 files lint free.
Running "mocha_istanbul:coverage" (mocha_istanbul) task
express-session deprecated undefined resave option; provide resave option index.js:9:1669
express-session deprecated undefined saveUninitialized option; provide saveUninitialized option index.js:9:1669
Listening on port 5000
server
Starting the server
Test the index route
✓ should return a page with the title Babblr (536ms)
Test the login route
✓ should return a page with the text Please enter a handle
Test submitting to the login route
✓ should store the username in the session and redirect the user to the index
Test empty login
✓ should show the login form
Test logout
1) should log the user out
Test sending a message
✓ should return 'Message received' (49ms)
Stopping the server
5 passing (682ms)
1 failing
1) server Test logout should log the user out:
Uncaught AssertionError: expected 'Cannot GET /logout\n' to include 'Log in'
at Request._callback (/Users/matthewdaly/Projects/babblr/test/test.js:105:45)
at Request.self.callback (/Users/matthewdaly/Projects/babblr/node_modules/request/request.js:373:22)
at Request.emit (events.js:98:17)
at Request.<anonymous> (/Users/matthewdaly/Projects/babblr/node_modules/request/request.js:1318:14)
at Request.emit (events.js:117:20)
at IncomingMessage.<anonymous> (/Users/matthewdaly/Projects/babblr/node_modules/request/request.js:1266:12)
at IncomingMessage.emit (events.js:117:20)
at _stream_readable.js:944:16
at process._tickCallback (node.js:442:13)
=============================================================================
Writing coverage object [/Users/matthewdaly/Projects/babblr/coverage/coverage.json]
Writing coverage reports at [/Users/matthewdaly/Projects/babblr/coverage]
=============================================================================
=============================== Coverage summary ===============================
Statements : 100% ( 55/55 ), 7 ignored
Branches : 100% ( 10/10 ), 2 ignored
Functions : 88.89% ( 8/9 )
Lines : 100% ( 55/55 )
================================================================================
>>
Warning: Task "mocha_istanbul:coverage" failed. Use --force to continue.
Aborted due to warnings.

Now we have a failing test, let’s implement our logout route. Add the following route to index.js:

// Process logout
app.get('/logout', function (req, res) {
// Delete username from session
req.session.username = null;
// Redirect user
res.redirect('/');
});

If you run your tests again, they should now pass.

Now that we have the user’s name stored in the session, we can make use of it. First, let’s amend static/js/main.js so that it no longer adds a default username:

$(document).ready(function () {
'use strict';
// Set up the connection
var field, socket, output;
socket = io.connect(window.location.href);
// Get a reference to the input
field = $('textarea#message');
// Get a reference to the output
output = $('div.conversation');
// Handle message submit
$('a#submitbutton').on('click', function () {
// Create the message
var msg;
msg = field.val();
socket.emit('send', { message: msg });
field.val('');
});
// Handle incoming messages
socket.on('message', function (data) {
// Insert the message
output.append('<p>' + data + '</p>');
});
});

Then, in index.js, we need to declare a variable for our session middleware, which will be shared between Socket.IO and Express:

// Declare variables used
var app, base_url, bodyParser, client, express, hbs, io, port, RedisStore, rtg, session, sessionMiddleware, subscribe;

Then we amend the session setup to make it easier to reuse for Socket.IO:

// Set up session
sessionMiddleware = session({
store: new RedisStore({
client: client
}),
secret: 'blibble'
});
app.use(sessionMiddleware);

Towards the end of the file, before we set up our handlers for Socket.IO, we integrate our sessions:

// Integrate sessions
io.use(function(socket, next) {
sessionMiddleware(socket.request, socket.request.res, next);
});

Finally, we rewrite our session handlers to use the username from the session:

// Handle new messages
io.sockets.on('connection', function (socket) {
// Subscribe to the Redis channel
subscribe.subscribe('ChatChannel');
// Handle incoming messages
socket.on('send', function (data) {
// Define variables
var username, message;
// Get username
username = socket.request.session.username;
if (!username) {
username = 'Anonymous Coward';
}
message = username + ': ' + data.message;
// Publish it
client.publish('ChatChannel', message);
// Persist it to a Redis list
client.rpush('chat:messages', message);
});
// Handle receiving messages
var callback = function (channel, data) {
socket.emit('message', data);
};
subscribe.on('message', callback);
// Handle disconnect
socket.on('disconnect', function () {
subscribe.removeListener('message', callback);
});
});

Note here that when a message is sent, we get the username from the session, and if it’s empty, set it to Anonymous Coward. We then prepend it to the message, publish it, and persist it.

One final thing…

One last job remains. At present, users can pass JavaScript through in messages, which is not terribly secure! We need to fix it. Amend the send handler as follows:

// Handle incoming messages
socket.on('send', function (data) {
// Define variables
var username, message;
// Strip tags from message
message = data.message.replace(/<[^>]*>/g, '');
// Get username
username = socket.request.session.username;
if (!username) {
username = 'Anonymous Coward';
}
message = username + ': ' + message;
// Publish it
client.publish('ChatChannel', message);
// Persist it to a Redis list
client.rpush('chat:messages', message);
});

Here we use a regex to strip out any HTML tags from the message - this will prevent anyone injecting JavaScript into our chat client.

And that’s all, folks! If you want to check out the source for this lesson it’s in the repository on GitHub, tagged lesson-2. If you want to carry on working on this on your own, there’s still plenty you can do, such as:

  • Adding support for multiple rooms
  • Using Passport.js to allow logging in using third-party services such as Twitter or Facebook
  • Adding formatting for messages, either by using something like Markdown, or a client-side rich text editor

As you can see, it’s surprising how much you can accomplish using only Redis, and under certain circumstances it offers a lot of advantages over a relational database. It’s always worth thinking about whether Redis can be used for your project.

Syntax highlighting in fenced code blocks in Vim

$
0
0

Just thought I’d share a little trick I picked up recently. As you may know, GitHub flavoured Markdown (which I use for this blog) supports fenced code blocks, allowing you to specify a language for a block of code in a Markdown file.

If you put the following code in your .vimrc, you can get syntax highlighting in those code blocks when you open up a Markdown file in Vim:

"Syntax highlighting in Markdown
au BufNewFile,BufReadPost *.md set filetype=markdown
let g:markdown_fenced_languages = ['bash=sh', 'css', 'django', 'handlebars', 'javascript', 'js=javascript', 'json=javascript', 'perl', 'php', 'python', 'ruby', 'sass', 'xml', 'html']

This does depend on having the appropriate syntax files installed. However, you can easily add in syntax files for many other languages that Vim supports, and there are third-party ones available to install - in my case, I’ve got the handlebars one installed, which doesn’t come with Vim.

Adding a new search engine to my site

$
0
0

I’ve just finished implementing a new search engine for this site. Obviously, with it using a static site generator, searching a relational database isn’t an option. For a long while I’d just been getting by with Google’s site-specific search, which worked, but meant leaving the site to view the search results.

Now, I’ve implemented a client-side search system using Lunr.js. It wasn’t too time consuming, and as the index is generated with the rest of the site and loaded with the page, the response is almost instantaneous. I may write a future blog post on how to integrate Lunr.js with your site, as it’s very handy and is an ideal solution for implementing search on a static site.

How I added search to my site with Lunr.js

$
0
0

As I mentioned a while back, I recently switched the search on my site from Google’s site-specific search to Lunr.js. Since my site is built with a static site generator, I can’t implement search using database queries, and I was keen to have an integrated search method that would be fast and not require server-side scripting, and Lunr.js seemed to fit the bill.

The first task in implementing it was to generate the index. As I wrote the Grunt task that generates the blog, I amended that task to generate an index at the same time as I generated the posts. I installed Lunr.js with the following command:

npm install lunr --save

I then imported it in the task, and set up the field names:

var lunr = require('lunr');
searchIndex = lunr(function () {
this.field('title', { boost: 10 });
this.field('body');
this.ref('href');
});

This defined fields for the title, body, and hyperlink, and set the hyperlink as the reference. The variable searchIndex represents the Lunr index.

Next, I looped through the posts, and passed the appropriate details to be added to the index:

for (post in post_items) {
var doc = {
'title': post_items[post].meta.title,
'body': post_items[post].post.rawcontent,
'href': post_items[post].path
};
store[doc.href] = {
'title': doc.title
};
searchIndex.add(doc);
}

At this point, post_items represents an array of objects, with each object representing a blog post. Note that the body field is set to the value of the item’s attribute post.rawcontent, which represents the raw Markdown rather than the compiled HTML.

I then store the title in the store object, so that it can be accessed using the href field as a key.

I then do the same thing when generating the pages:

// Add them to the index
var doc = {
'title': data.meta.title,
'body': data.post.rawcontent,
'href': permalink + '/'
};
store[doc.href] = {
'title': data.meta.title
};
searchIndex.add(doc);

Note that this is already inside the loop that generates the pages, so I don’t include that.

We then write the index to a file:

// Write index
grunt.file.write(options.www.dest + '/lunr.json', JSON.stringify({
index: searchIndex.toJSON(),
store: store
}));

That takes care of generating our index, but we need to implement some client-side code to handle the search. We need to include Lunr.js on the client side as well, (I recommend using Bower to do so), alongside jQuery. If you include both, the following code should do the trick:

$(document).ready(function () {
'use strict';
// Set up search
var index, store;
$.getJSON('/lunr.json', function (response) {
// Create index
index = lunr.Index.load(response.index);
// Create store
store = response.store;
// Handle search
$('input#search').on('keyup', function () {
// Get query
var query = $(this).val();
// Search for it
var result = index.search(query);
// Output it
var resultdiv = $('ul.searchresults');
if (result.length === 0) {
// Hide results
resultdiv.hide();
} else {
// Show results
resultdiv.empty();
for (var item in result) {
var ref = result[item].ref;
var searchitem = '<li><a href="' + ref + '">' + store[ref].title + '</a></li>';
resultdiv.append(searchitem);
}
resultdiv.show();
}
});
});
});

This should be easy to understand. On load, we fetch and parse the lunr.json file from the server, and load the index. We then set up an event handler for the keyup event on an input with the ID of search. We get the value of the input, and query our index, and we loop through our results and display them.

I was pleased with how straightforward it was to implement search with Lunr.js, and it works well. It’s also a lot faster than any server-side solution since the index is generated during the build process, and is loaded with the rest of the site, so the only factor in the speed of the response is how quick your browser executes JavaScript. You could probably also use it with a Node.js application by generating the index dynamically, although you’d probably want to cache it to some extent.

My static site generator post on Sitepoint

$
0
0

I wrote an article for Sitepoint recently about creating a static site generator as a Grunt plugin, similar to the one for this site. You can find it here.

Setting ETags in Laravel 5

$
0
0

Although I’d prefer to use Python or Node.js, there are some times when circumstances dictate that I need to use PHP for a project at work. In the past, I used CodeIgniter, but that was through nothing more than inertia. For some time I’d been planning to switch to Laravel, largely because of the baked-in PHPUnit support, but events conspired against me - one big project that came along had a lot in common with an earlier one, so I forked it rather than starting over.

Recently I built a REST API for a mobile app, and I decided to use that to try out Laravel (if it had been available at the time, I’d have gone for Lumen instead). I was very pleased with the results - I was able to quickly put together the back end I wanted, with good test coverage, and the tinker command in particular was useful in debugging. The end result is fast and efficient, with query caching in place using Memcached to improve response times.

I also implemented a simple middleware to add ETags to HTTP responses and compare them on incoming requests, returning a 304 Not Modified status code if they are the same, which is given below:

<?php namespace App\Http\Middleware;
use Closure;
class ETagMiddleware {
/**
* Implement Etag support
*
* @param \Illuminate\Http\Request $request
* @param \Closure $next
* @return mixed
*/
public function handle($request, Closure $next)
{
// Get response
$response = $next($request);
// If this was a GET request...
if ($request->isMethod('get')) {
// Generate Etag
$etag = md5($response->getContent());
$requestEtag = str_replace('"', '', $request->getETags());
// Check to see if Etag has changed
if($requestEtag && $requestEtag[0] == $etag) {
$response->setNotModified();
}
// Set Etag
$response->setEtag($etag);
}
// Send response
return $response;
}
}

This is based on a solution for Laravel 4 by Nick Verwymeren, but implemented as Laravel 5 middleware, not a Laravel 4 filter. To use this with Laravel 5, save this as app/Http/Middleware/ETagMiddleware.php. Then add this to the $middleware array in app/Http/Kernel.php:

'App\Http\Middleware\ETagMiddleware',

It’s quite simple to write this kind of middleware with Laravel, and using something like this is a no-brainer for most web apps considering the bandwidth it will likely save your users.

Getting django-behave and Celery to work together

$
0
0

I ran into a small issue today. I’m working on a Django app which uses Celery to handle certain tasks that don’t need to return a response within the context of the HTTP request. I also wanted to use django_behave for running BDD tests. The trouble is that both django_behave and Celery provide their own custom test runners that extend the default Django test runner, and so it looked like I might have to choose between the two.

However, it turned out that the Celery one was actually very simple, with only a handful of changes needing to be made to the default test runner to make it work with Celery. I was therefore able to create my own custom test runner that inherited from DjangoBehaveTestSuiteRunner and applied the changes necessary to get Celery working with it. Here is the test runner I wrote, which was saved as myproject/runner.py:

from django.conf import settings
from djcelery.contrib.test_runner import _set_eager
from django_behave.runner import DjangoBehaveTestSuiteRunner
class CeleryAndBehaveRunner(DjangoBehaveTestSuiteRunner):
def setup_test_environment(self, **kwargs):
_set_eager()
settings.BROKER_BACKEND = 'memory'
super(CeleryAndBehaveRunner, self).setup_test_environment(**kwargs)

To use it, you need to set the test runner in settings.py

TEST_RUNNER = 'myproject.runner.CeleryAndBehaveRunner'

Once that was done, my tests worked flawlessly with Celery, and the Behave tests ran as expected.


Handling images as base64 strings with Django REST Framework

$
0
0

I’m currently working on a Phonegap app that involves taking pictures and uploading them via a REST API. I’ve done this before, and I found at that time that the best way to do so was to fetch the image as a base-64 encoded string and push that up, rather than the image file itself. However, the last time I did so, I was using Tastypie to build the API, and I’ve since switched over to Django REST Framework as my API toolkit of choice.

It didn’t take long to find this gist giving details of how to do so, but it didn’t work as is, partly because I was using Python 3, and partly because the from_native method has gone as at Django REST Framework 3.0. It was, however, straightforward to adapt it to work. Here’s my solution:

import base64, uuid
from django.core.files.base import ContentFile
from rest_framework import serializers
# Custom image field - handles base 64 encoded images
class Base64ImageField(serializers.ImageField):
def to_internal_value(self, data):
if isinstance(data, str) and data.startswith('data:image'):
# base64 encoded image - decode
format, imgstr = data.split(';base64,') # format ~= data:image/X,
ext = format.split('/')[-1] # guess file extension
id = uuid.uuid4()
data = ContentFile(base64.b64decode(imgstr), name = id.urn[9:] + '.' + ext)
return super(Base64ImageField, self).to_internal_value(data)

This solution will handle both base 64 encoded strings and image files. Then, just use this field as normal.

New laptop

$
0
0

For a while now it’s been obvious that I needed a new laptop. My main workhorse for a while has been a 2008 MacBook, but I’m not really a fan of Mac OS X and it was stuck on Snow Leopard, so it was somewhat behind the times. It was also painfully slow by modern standards - regenerating this site took a couple of minutes. I had two other reasonably modern laptops, but one was too big and cumbersome, while the other was a Dell Mini, which isn’t really fast enough for a developer. When I last bought a laptop, I wasn’t even a developer, so it was long past time I got a more suitable machine.

I therefore took the plunge and ordered a new Dell XPS 13 Developer Edition, which arrived today. It’s an absolutely beautiful machine, and it’s extremely light. It’s also a lot faster than any other machine I own. The screen is exceptionally sharp, and setting it up was nice and easy.

After an hour or so with this machine, I’m already really happy with it. We’ll have to see whether I still think so after a few months using it.

Exploring the HStoreField in Django 1.8

$
0
0

One of the most interesting additions in Django 1.8 is the new Postgres-specific fields. I started using PostgreSQL in preference to MySQL for Django apps last year, and so I was interested in the additional functionality they offer.

By far the biggest deal out of all of these was the new HStoreField type. PostgreSQL added a JSON data type a little while back, and HStoreField allows you to use that field type. This is a really big deal because it allows you to store arbitrary data as JSON and query it. Previously, you could of course just store data as JSON in a text field, but that lacked the same ability to query it. This gives you many of the advantages of a NoSQL document database such as MongoDB in a relational database. For instance, you can store different products with different data about them, and crucially, query them by that data. Previously, the only way to add arbitrary product data and be able to query it was to have it in a separate table, and it was often cumbersome to join them when fetching multiple products.

Let’s see a working example. We might be building an online store where products can have all kinds of arbitrary data stored about them. One product might be a plastic box, and you’d need to list the capacity as an additional attribute. Another product might be a pair of shoes, which have no capacity, but do have a size. It might be difficult to model this otherwise, but HStoreField is perfect for this kind of data.

First, let’s set up our database. I’ll assume you already have PostgreSQL up and running via your package manager. First, we need to create our database:

$ createdb djangostore

Next, we need to create a new user for this database with superuser access:

$ createuser store -s -P

You’ll be prompted for a password - I’m assuming this will just be password here. Next, we need to connect to PostgreSQL using the psql utility:

$ psql djangostore -U store -W

You’ll be prompted for your new password. Next, run the following command:

# CREATE EXTENSION hstore;
# GRANT ALL PRIVILEGES ON DATABASE djangostore TO store;
# \q

The first command installs the HStore extension. Next we make sure our new user has the privileges required on the new database:

We’ve now created our database and a user to interact with it. Next, we’ll set up our Django install:

$ cd Projects
$ mkdir djangostore
$ cd djangostore
$ pyvenv venv
$ source venv/bin/activate
$ pip install Django psycopg2 ipdb
$ django-admin.py startproject djangostore
$ python manage.py startapp store

I’m assuming here that you’re using Python 3.4. On Ubuntu, getting it working is a bit more involved.

Next, open up djangostore/settings.py and amend INSTALLED_APPS to include the new app and the PostgreSQL-specific functionality:

INSTALLED_APPS = (
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.postgres',
'store',
)

You’ll also need to configure the database settings:

DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql_psycopg2',
'NAME': 'djangostore',
'USER': 'store',
'PASSWORD': 'password',
'HOST': 'localhost',
'PORT': '',
}
}

We need to create an empty migration to use HStoreField:

$ python manage.py makemigrations --empty store

This command should create the file store/migrations/0001_initial.py. Open this up and edit it to look like this:

# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
from django.contrib.postgres.operations import HStoreExtension
class Migration(migrations.Migration):
dependencies = [
]
operations = [
HStoreExtension(),
]

This will make sure the HStore extension is installed. Next, let’s run these migrations:

$ python manage.py migrate
Operations to perform:
Synchronize unmigrated apps: messages, staticfiles, postgres
Apply all migrations: sessions, store, admin, auth, contenttypes
Synchronizing apps without migrations:
Creating tables...
Running deferred SQL...
Installing custom SQL...
Running migrations:
Rendering model states... DONE
Applying contenttypes.0001_initial... OK
Applying auth.0001_initial... OK
Applying admin.0001_initial... OK
Applying contenttypes.0002_remove_content_type_name... OK
Applying auth.0002_alter_permission_name_max_length... OK
Applying auth.0003_alter_user_email_max_length... OK
Applying auth.0004_alter_user_username_opts... OK
Applying auth.0005_alter_user_last_login_null... OK
Applying auth.0006_require_contenttypes_0002... OK
Applying sessions.0001_initial... OK
Applying store.0001_initial... OK

Now, we’re ready to start creating our Product model. Open up store/models.py and amend it as follows:

from django.contrib.postgres.fields import HStoreField
from django.db import models
# Create your models here.
class Product(models.Model):
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
name = models.CharField(max_length=200)
description = models.TextField()
price = models.FloatField()
attributes = HStoreField()
def __str__(self):
return self.name

Note that HStoreField is not part of the standard group of model fields, and needs to be imported from the Postgres-specific fields module. Next, let’s create and run our migrations:

$ python manage.py makemigrations
$ python manage.py migrate

We should now have a Product model where the attributes field can be any arbitrary data we want. Note that we installed ipdb earlier - if you’re not familiar with it, this is an improved Python debugger, and also pulls in ipython, an improved Python shell, which Django will use if available.

Open up the Django shell:

$ python manage.py shell

Then, import the Product model:

from store.models import Product

Let’s create our first product - a plastic storage box:

box = Product()
box.name = 'Box'
box.description = 'A big box'
box.price = 5.99
box.attributes = { 'capacity': '1L', "colour": "blue"}
box.save()

If we take a look, we can see that the attributes can be returned as a Python dictionary:

In [12]: Product.objects.all()[0].attributes
Out[12]: {'capacity': '1L', 'colour': 'blue'}

We can easily retrieve single values:

In [15]: Product.objects.all()[0].attributes['capacity']
Out[15]: '1L'

Let’s add a second product - a mop:

mop = Product()
mop.name = 'Mop'
mop.description = 'A mop'
mop.price = 12.99
mop.attributes = { 'colour': "red" }
mop.save()

Now, we can filter out only the red items easily:

In [2]: Product.objects.filter(attributes__contains={'colour': 'red'})
Out[2]: [<Product: Mop>]

Here we search for items where the colour attribute is set to red, and we only get back the mop. Let’s do the same for blue items:

In [3]: Product.objects.filter(attributes__contains={'colour': 'blue'})
Out[3]: [<Product: Box>]

Here it returns the box. Let’s now search for an item with a capacity of 1L:

In [4]: Product.objects.filter(attributes__contains={'capacity': '1L'})
Out[4]: [<Product: Box>]

Only the box has the capacity attribute at all, and it’s the only one returned. Let’s see what happens when we search for an item with a capacity of 2L, which we know is not present:

In [5]: Product.objects.filter(attributes__contains={'capacity': '2L'})
Out[5]: []

No items returned, as expected. Let’s look for any item with the capacity attribute:

In [6]: Product.objects.filter(attributes__has_key='capacity')
Out[6]: [<Product: Box>]

Again, it only returns the box, as that’s the only one where that key exists. Note that all of this is tightly integrated with the existing API for the Django ORM. Let’s add a third product, a food hamper:

In [3]: hamper = Product()
In [4]: hamper.name = 'Hamper'
In [5]: hamper.description = 'A food hamper'
In [6]: hamper.price = 19.99
In [7]: hamper.attributes = {
...: 'contents': 'ham, cheese, coffee',
...: 'size': '90cmx60cm'
...: }
In [8]: hamper.save()

Next, let’s return only those items that have a contents attribute that contains cheese:

In [9]: Product.objects.filter(attributes__contents__contains='cheese')
Out[9]: [<Product: Hamper>]

As you can see, the HStoreField type allows for quite complex queries, while allowing you to set arbitrary values for an individual item. This overcomes one of the biggest issues with relational databases - the inability to set arbitrary data. Previously, you might have to work around it in some fashion, such as creating a table containing attributes for individual items which had to be joined on the product table. This is very cumbersome and difficult to use, especially when you wanted to work with more than one product. With this approach, it’s easy to filter products by multiple values in the HStore field, and you get back all of the attributes at once, as in this example:

In [13]: Product.objects.filter(attributes__capacity='1L', attributes__colour='blue')
Out[13]: [<Product: Box>]
In [14]: Product.objects.filter(attributes__capacity='1L', attributes__colour='blue')[0].attributes
Out[14]: {'capacity': '1L', 'colour': 'blue'}

Similar functionality is coming in a future version of MySQL, so it wouldn’t be entirely surprising to see HStoreField become more generally available in Django in the near future. For now, this functionality is extremely useful and makes for a good reason to ditch MySQL in favour of PostgreSQL for your future Django apps.

Testing Django views in isolation

$
0
0

One thing you may hear said often about test-driven development is that as far as possible, you should test everything in isolation. However, it’s not always immediately clear how you actually go about doing this. In Django, it’s fairly easy to get your head around testing models in isolation because they’re single objects that you can just create, save, and then check their attributes. Forms are also quite easy to test, because you can just set the parameters with the appropriate values and check that the validation works as expected. With views, it’s much harder to imagine how you’d go about testing them in isolation, and often people just settle for writing higher-level functional tests instead. While functional tests are important, they’re also slower than unit tests, which makes it less likely they’ll be run often. So I thought I’d show you a quick and simple example of testing a Django view in isolation.

One of the little projects I’ve written in the past to help get my head around certain aspects of Django is a code-snippet sharing Django application which I named Snippetr. The index route of this application is a form for submitting a brand-new code snippet and I’ll show you how we would write a test for that.

Testing a GET request

Before now, you may well have used the Django test client to test views. That is fine for higher-level tests, but if you want to test a view in isolation, it’s no use because it emulates a real web server and all of the middleware and authentication, which we want to keep out of the way. Instead, we need to use RequestFactory:

from django.test import RequestFactory

RequestFactory actually implements a subset of the functionality of the Django test client, so while it will feel somewhat familiar, it won’t have all the same functionality. For instance, it doesn’t support middleware, so rather than logging in using the test client’s login() method, you instead attach a user directly to the request, as in this example:

request = RequestFactory()
request.user = user

You have to specify the URL in the request, but you also have to explicitly pass the request through to the view you want to test, which can be a bit confusing. Let’s see it in context. First of all, we want to write a test for making a GET request:

class SnippetCreateViewTest(TestCase):
"""
Test the snippet create view
"""
def setUp(self):
self.user = UserFactory()
self.factory = RequestFactory()
def test_get(self):
"""
Test GET requests
"""
request = self.factory.get(reverse('snippet_create'))
request.user = self.user
response = SnippetCreateView.as_view()(request)
self.assertEqual(response.status_code, 200)
self.assertEqual(response.context_data['user'], self.user)
self.assertEqual(response.context_data['request'], request)

First of all, we define a setUp() method that creates a user and an instance of RequestFactory() for use in the test. Note that I’m using Factory Boy to define UserFactory in order to make it easier to work with. Also, if you have more than one view to test, you should create a base class containing the setUp() method that your view tests inherit from.

Next, we have our test for making a GET request. Note that we’re using the reverse() method to get the route for the view named snippet_create. You’ll need to import this as follows if you’re not yet using it:

from django.core.urlresolvers import reverse

We then attach our user object to the request manually, and fetch the response by passing the request to the view as follows:

    response = SnippetCreateView.as_view()(request)

Note that this is the syntax used for class-based views - we call the view’s as_view() method. For a function-based view, the syntax is a bit simpler:

    response = my_view(request)

We then test our response as usual. In this case, the view adds some additional context data, and we check that we can access that, as well as checking the status code.

Testing a POST request

Testing a POST request is a little more challenging in this case because submitting the form will create a new Snippet object and we don’t want to interact with the model layer at all if we can help it. We want to test the view in isolation, partly because it will be faster, and partly because it’s a good idea. We can do this by mocking the Snippet model’s save() method.

To do so, we need to import two things from the mock library. If you’re using Python 3.4 or later, then mock is part of unittest as unittest.mock. Otherwise, it’s a separate library you need to install with pip. Here’s the import statement for those on Python 3.4 or later:

from unittest.mock import patch, MagicMock

And for those on earlier versions:

from mock import patch, MagicMock

Now, our test for the POST requests should look like this:

@patch('snippets.models.Snippet.save', MagicMock(name="save"))
def test_post(self):
"""
Test post requests
"""
# Create the request
data = {
'title': 'My snippet',
'content': 'This is my snippet'
}
request = self.factory.post(reverse('snippet_create'), data)
request.user = self.user
# Get the response
response = SnippetCreateView.as_view()(request)
self.assertEqual(response.status_code, 302)
# Check save was called
self.assertTrue(Snippet.save.called)
self.assertEqual(Snippet.save.call_count, 1)

Note first of all the following line:

    @patch('snippets.models.Snippet.save', MagicMock(name="save"))

Here we’re saying that in this test, when the save() method of the Snippet model is called, it should instead call a mocked version, which lacks the functionality and only registers that it has been called and a few details about it.

Next, we put together the data to be passed through and create a POST request for it. As before, we attach the user to the request. We then pass the request through in the same way as for the GET request. We also check that the response code was 302, meaning that the user would be redirected elsewhere after the form was submitted correctly.

Finally, we assert that Snippet.save.called is true. called is a Boolean value, representing whether the method was called or not. We also check the value of Snippet.save.call_count, which is a count of the number of times the method was called - here we check that it’s set to 1.

As you can see, while the request factory is a little harder than the Django test client to figure out, it’s not too difficult once you get the hang of it. By combining it with judicious use of mock, you can easily test your views in isolation, and without having to interact with the database or set up any middleware, these tests will be much faster than those using the Django test client.

When you should not use Wordpress

$
0
0

I must admit, I’ve had a rather bad experience with WordPress recently. The site in question was an e-commerce site, built with WordPress and WooCommerce. In development, we originally put the site on shared hosting, but after a while the hosting company told us off because it was using too much database space, so we moved to a VPS earlier than we normally would. With the benefit of hindsight, we probably should have seen that as the first warning sign.

Then, once the site was up and running on the VPS, it got slower and slower, and eventually the server was killing MySQL off because it was using too many resources. I decided to install a benchmarking plugin and investigate why it was so slow. On loading the home page, it became obvious why the site was so slow - there were in excess of 300 queries on the home page. Looking elsewhere, some other pages were even worse, with one making over 1,000 queries!

At this point, I was practically hyperventilating. If I had written a web app that made that many queries on one page from scratch, I’d be seriously considering whether I was cut out for this industry. With an off-the-shelf CMS, you do have to accept some degree of bloat as a trade-off for quicker development time, but these numbers beggar belief.

I was able to mitigate this to some extent. First, I cut down the number of products shown on individual pages and audited the installed plugins, removing ones we could do without. This still left a lot more queries than I liked.

The next step was to enable caching. I installed Memcached and Varnish (incidentally, if you haven’t used Varnish before, you should check it out - it can make a huge difference for slow sites). I then installed and configured W3 Total Cache to work with them. This didn’t solve the fundamental problem of the initial page loads being too database-intensive, but it did mean that the result was cached for some time afterwards, making things easier on subsequent users.

This still wasn’t enough, however. The admin was still very slow, and often crashed. I actually wound up having to write a shell script that would check to see if MySQL was running and restart it if it wasn’t, and set up a cron job to run it every minute, just to ensure I wasn’t having to restart it myself. The issue was only really dealt with once we upped the specs on the VPS from 1GB RAM and 1 core to 3GB RAM and 2 cores, which should really have been overkill for something like WordPress.

As it turned out, the issue wasn’t exactly helped by the fact that someone had been making an unusually persistent attempt to brute-force wp-login.php. I was able to mitigate this by password-protecting it in the .htaccess file and adding some custom rules to fail2ban. But the fundamental problem remained that the resources used by WordPress to load a single page were grossly excessive.

Since then, we’ve continued to have some difficulties with it. There are some rather arcane criteria for calculating the shipping costs, and implementing them has been a real uphill struggle. We’ve also had to deal with breakages in the theme when updating WooCommerce, and other painful issues. It feels at times like the site will never be “done done”.

Now, I’ve had some issues with WordPress before, but this was by far the nastiest I’d ever seen, and it made me think very hard about when we should and should not consider WordPress as a solution. In hindsight, it would have been much easier to use Laravel to build the site from scratch - it would have made for a much leaner, more efficient site, updating the templates would have been a breeze, and implementing additional functionality would have been straightforward.

NB: I’m trying hard to make sure this is NOT one of those “WordPress sucks” blog posts. I’ll admit that I agree with many of the points from a lot of those, and I abandoned WordPress for my own site a long time ago in favour of a static site generator, but there are times when it is appropriate to use it. What I’m trying to do here is to help others avoid making the mistakes we did recently by giving some advice on when you should and should not use WordPress. Of course, your mileage may vary.

Why was WordPress inappropriate here?

With the benefit of hindsight, I can say that WordPress was definitely not the right solution in this case, and I will be advising against using it in similar circumstances. But why was it inappropriate?

  • Less flexible than rolling a custom solution - While the ecosystem of plugins and themes make it possible to use WordPress for a lot of use cases outside the core functionality of the platform, those plugins and themes aren’t infinitely flexible. If you want to do something one way and the plugin you’re using doesn’t support that, you’re out of luck unless you can fork the plugin or write a new one.
  • Dependence on third party plugins - While we were working on the site, WooCommerce made some changes that broke the theme we were using. We were using a child theme, but updating the parent theme alone didn’t fix it - we had to then apply some of the changes to the child theme as well, which was extremely fiddly. As a result, we’re now very wary about updating plugins and themes. Yet we don’t dare put it off too long, because in my experience attempts to break into WordPress are common, and if you fail to install an upgrade that fixes a vulnerability in good time, you can easily find yourself getting a phone call about a site having been hacked (as I did in December last year).
  • Poor performance - This is a big one, and I have therefore broken it down further:
    • Loading styling from the database - Many of the high end, customisable themes have large numbers of configuration options that can be used to style the site. The downside of these is that it creates additional queries to the database to fetch that data. Unless you have some form of caching in place, that data is loaded for every single request to the front end, generating a significant number of additional queries. You can mitigate this by rolling your own custom WordPress theme for the site, however.
    • Too many queries - My experience has been that as a general rule of thumb, it’s much quicker to make a smaller number of more complex queries to a database than to make a larger number of simple queries. If you build a custom web app, you will always know exactly what data you want to retrieve on a particular page and through careful use of joins, can retrieve exactly the data you need with as few queries as possible. Being a generic solution, WordPress doesn’t know exactly what data you need on any one page, and so may fetch the data using an excessive number of queries. It may also fetch data you don’t actually need.
    • Suboptimal database layout - The database schema for WordPress was originally created with a blog in mind, and may not always be optimal for your particular use case.
    • Caching is not a silver bullet - You can do a lot to improve performance by installing Memcached and Varnish, and configuring a caching plugin to work with them. However, this doesn’t solve the problem of the excessive number of queries, it only mitigates the effects somewhat. Not everything can be cached, and the expensive queries will still have to be run at some point. Caching only increases the time between the queries. Also, configuring Varnish in particular can be something of a black art, and it’s easy to miss something and find out some functionality or other hasn’t been working.

WordPress has a lot of technical limitations and deficiencies from a programmer’s point of view. For all that, it works, it’s easy to set up, and there’s a wide variety of plugins and themes available, so it’s often an appropriate choice. While the performance is poorer than I would like, the harsh truth is that often it doesn’t matter - if your site isn’t serving a huge amount of page requests, a few extra queries don’t actually make all that much difference (within reason, of course). My concern is that use of WordPress when it’s entirely inappropriate is widespread.

Is WordPress being overused?

Archer - WordPress? The Dane Cook of content management systems?

I suspect I’m running the risk of being branded a hipster for saying this (“Now it’s popular, you hate WordPress…”), but the fact that WordPress is widespread and popular does not mean that it’s the best solution for your project. Nor does the fact that it’s technically possible to use it for your project.

A few years ago, I built a now-defunct site and mobile app for a client that monitored web pages, or product prices on web pages, for changes, and notified the user when a change occurred. It was built using CodeIgniter 2, and had an integrated blog. At one point, the client was unhappy because it wasn’t built with WordPress, believing that this was the reason why few people were signing up. To use WordPress for this project would have involved building the additional functionality, including the API for the mobile app, as a plugin, which would have slowed down development considerably - in my experience it’s generally much harder to build something as a WordPress plugin than using an MVC framework due to the lack of separation of concerns, which makes the code base more confusing.

This is a good example of the alarming trend I’ve noticed in the last few years whereby a large number of people seem to be under the mistaken impression that WordPress is some kind of all-singing, all-dancing general purpose solution for building websites. I suspect that the reason for this may be that WordPress is commonplace enough that people outside of the web industry have often heard of it, and therefore they often ask for it since it’s what they’ve heard of, not knowing whether or not it’s actually appropriate for their needs. What isn’t always apparent to non-developers is that it’s often considerably easier for a developer to implement the core functionality of WordPress using a modern MVC framework than it is for them to implement the other functionality using WordPress, and as the functionality is being built with your exact use case in mind, the user interface is often more straightforward than the WordPress admin. Also, the WordPress privilege system can make it difficult for you to limit the user to just the functionality you want them to have, resulting in a situation where either you give the users a potentially dangerous level of access, or force them to contact you to make certain changes, making more work for you.

I’ve heard plenty of people say things like “WordPress is a framework” and “A competent developer can build anything with WordPress”. These claims are utter hogwash. A competent developer is smart enough to recognise that WordPress is not a one-size fits all solution and it’s not always appropriate to use it - you can easily spend more time trying to get it to do something off the beaten track than it would take to build that functionality from scratch. I think the way that Automattic are trying to promote WordPress as an application framework is a really bad idea - trying to use it for this is much more cumbersome than using a modern PHP framework like Laravel.

Even if you ignore the technical deficiencies of WordPress, it is too opinionated to be a good solution for use as a framework, and as such you’ll spend a lot of time trying to work around the existing implementations of existing functionality when they don’t quite meet your requirements.

Conclusion

For all its flaws, WordPress is very useful. It’s generally a good choice for blogs, brochure-style sites, and small e-commerce solutions where the client is not too fussy about the details of how it works. For virtually every other situation, I plan on looking elsewhere in future.

A quick and easy Varnish primer

$
0
0

As I mentioned in an earlier post, I recently had the occasion to use Varnish to improve the performance of a website that otherwise would have been unreliable and unusably slow due to WordPress making an excessive number of queries. The difference it made was nothing short of staggering, and I’m not exaggerating when I say it saved the day. I now use Ansible for provisioning new WordPress sites, and Varnish is now a standard part of my WordPress site setup playbook.

However, Varnish can be quite fiddly to configure, and it was something of a baptism of fire for me to learn how to configure it appropriately for this use case. I did make a few mistakes that caused problems down the line, so I thought I’d share the details of how I got it working for that particular site.

What is Varnish?

From the website:

Varnish Cache is a web application accelerator also known as a caching HTTP reverse proxy. You install it in front of any server that speaks HTTP and configure it to cache the contents. Varnish Cache is really, really fast. It typically speeds up delivery with a factor of 300 - 1000x, depending on your architecture.

In other words, you run it on the usual HTTP or HTTPS port, move your usual web server to a different port, and configure it, and it will cache web pages so they can be served more quickly to subsequent visitors.

Be warned - Varnish is not something where you can generally stick with the default settings. The default behaviour does make a lot of sense, but in practice almost no-one will be able to get away with leaving the configuration unchanged.

Installing Varnish

If you’re using Debian or a derivative such as Ubuntu, Varnish is available via apt-get:

$ sudo apt-get install varnish

You may also want to install the documentation:

$ sudo apt-get install varnish-doc

If you’re using Apache I’d also recommend installing libapache2-mod-rpaf and enabling it with sudo a2enmod rpaf - without this, Apache will log all incoming requests as coming from the same server.

I’m assuming you already have a normal web server installed. I’ll assume you’re using Apache, but it shouldn’t be hard to adapt these instructions to work with Nginx. I’m also assuming that the site you want to use Varnish for is a WordPress site with WooCommerce and W3 Total Cache installed. However, this is only for example purposes. If you want to use Varnish for a different web app, you’ll need to plan your caching strategy around that web app yourself.

Please also note that this is using Varnish 4.0, which is the version available with Debian Jessie. If you’re using an older operating system, you may have Varnish 3.0 in the repositories - be warned, the configuration language changed in Varnish 4.0, so the examples here will not work with older versions of Varnish.

By default, Varnish runs on port 6081, which is fine for testing it out, but once you want to go live it’s not what you want. When it’s time to go live, you’ll need to open up /etc/default/varnish and edit the value of DAEMON_OPTS to something like this:

DAEMON_OPTS="-a :80 \
-T localhost:6082 \
-f /etc/varnish/default.vcl \
-S /etc/varnish/secret \
-s malloc,256m"

Note that the -a flag represents the port Varnish is running on.

If you’re using an operating system that uses systemd, such as Debian Jessie, this alone won’t be sufficient. Create a new file at /etc/systemd/system/varnish.service and enter the following:

[Unit]
Description=Varnish HTTP accelerator
[Service]
Type=forking
LimitNOFILE=131072
LimitMEMLOCK=82000
ExecStartPre=/usr/sbin/varnishd -C -f /etc/varnish/default.vcl
ExecStart=/usr/sbin/varnishd -a :80 -T localhost:6082 -f /etc/varnish/default.vcl -S /etc/varnish/secret -s malloc,256m
ExecReload=/usr/share/varnish/reload-vcl
[Install]
WantedBy=multi-user.target

Next, we need to move our web server to a different port. We’ll use port 8080. Replace the contents of /etc/apache2/ports.conf with this:

# If you just change the port or add more ports here, you will likely also
# have to change the VirtualHost statement in
# /etc/apache2/sites-enabled/000-default
# This is also true if you have upgraded from before 2.2.9-3 (i.e. from
# Debian etch). See /usr/share/doc/apache2.2-common/NEWS.Debian.gz and
# README.Debian.gz
NameVirtualHost *:8080
Listen 8080
<IfModule mod_ssl.c>
# If you add NameVirtualHost *:443 here, you will also have to change
# the VirtualHost statement in /etc/apache2/sites-available/default-ssl
# to <VirtualHost *:443>
# Server Name Indication for SSL named virtual hosts is currently not
# supported by MSIE on Windows XP.
Listen 443
</IfModule>
<IfModule mod_gnutls.c>
Listen 443
</IfModule>

You’ll also need to change the ports for the individual site files under /etc/apache2/sites-available, as in this example:

<VirtualHost *:8080>
ServerAdmin webmaster@localhost
DocumentRoot /var/www
<Directory />
Options FollowSymLinks
AllowOverride All
</Directory>
<Directory /var/www/>
Options FollowSymLinks MultiViews
AllowOverride All
Order allow,deny
allow from all
</Directory>
ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
<Directory "/usr/lib/cgi-bin">
AllowOverride None
Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
Order allow,deny
Allow from all
</Directory>
ErrorLog ${APACHE_LOG_DIR}/error.log
# Possible values include: debug, info, notice, warn, error, crit,
# alert, emerg.
LogLevel warn
CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>

Writing our VCL file

Next, we come to our Varnish configuration proper, which resides at /etc/varnish/default.vcl. The vcl stands for Varnish Configuration Language, and it has a syntax somewhat reminiscent of C.

The default behaviour for Varnish is as follows:

  • It does not cache requests that contain cookie or authorization headers
  • It does not cache requests which the backend HTTP server indicates should not be cached
  • It will only cache GET and HEAD requests

This behaviour is unlikely to meet your needs. We’ll therefore work through the Varnish config file I wrote for this WordPress site in the hope that it will teach you enough to adapt it to your own needs.

vcl 4.0;
backend default {
.host = "127.0.0.1";
.port = "8080";
}
acl purge {
"127.0.0.1";
"localhost";
}
sub vcl_recv {
# Never cache PUT, PATCH, DELETE or POST requests
if (req.method == "PUT" || req.method == "PATCH" || req.method == "DELETE" || req.method == "POST") {
return (pass);
}
# Never cache cart, account, checkout or addons
if (req.url ~ "^/(cart|my-account|checkout|addons)") {
return (pass);
}
# Never cache adding to cart
if ( req.url ~ "\?add-to-cart=" ) {
return (pass);
}
# Never cache admin or login
if ( req.url ~ "^/wp-(admin|login|cron)" ) {
return (pass);
}
# Never cache WooCommerce API
if ( req.url ~ "wc-api" ) {
return (pass);
}
# Remove has_js and CloudFlare/Google Analytics __* cookies and statcounter is_unique
set req.http.Cookie = regsuball(req.http.Cookie, "(^|;\s*)(_[_a-z]+|has_js|is_unique)=[^;]*", "");
# Remove a ";" prefix, if present.
set req.http.Cookie = regsub(req.http.Cookie, "^;\s*", "");
# Remove the wp-settings-1 cookie
set req.http.Cookie = regsuball(req.http.Cookie, "wp-settings-1=[^;]+(; )?", "");
# Remove the wp-settings-time-1 cookie
set req.http.Cookie = regsuball(req.http.Cookie, "wp-settings-time-1=[^;]+(; )?"
, "");
# Remove the wp test cookie
set req.http.Cookie = regsuball(req.http.Cookie, "wordpress_test_cookie=[^;]+(; )?", "");
# Static content unique to the theme can be cached (so no user uploaded images)
# The reason I don't take the wp-content/uploads is because of cache size on bigger blogs
# that would fill up with all those files getting pushed into cache
if (req.url ~ "wp-content/themes/" && req.url ~ "\.(css|js|png|gif|jp(e)?g)") {
unset req.http.cookie;
}
# Even if no cookies are present, I don't want my "uploads" to be cached due to their potential size
if (req.url ~ "/wp-content/uploads/") {
return (pass);
}
# any pages with captchas need to be excluded
if (req.url ~ "^/contact/")
{
return(pass);
}
# Check the cookies for wordpress-specific items
if (req.http.Cookie ~ "wordpress_" || req.http.Cookie ~ "comment_") {
# A wordpress specific cookie has been set
return (pass);
}
# allow PURGE from localhost
if (req.method == "PURGE") {
if (!client.ip ~ purge) {
return(synth(405, "Not allowed."));
}
return (purge);
}
# Force lookup if the request is a no-cache request from the client
if (req.http.Cache-Control ~ "no-cache") {
return (pass);
}
# Try a cache-lookup
return (hash);
}
sub vcl_backend_response {
set beresp.grace = 5m;
}

Let’s take a closer look at the first part of the config:

vcl 4.0;
backend default {
.host = "127.0.0.1";
.port = "8080";
}

Here we define that we’re using version 4.0 of VCL, and that the host to use as a back end is port 8080 on the same server. If your normal HTTP server is running on a different port, you will need to set it here. Also, note that you can use a different host as the backend.

acl purge {
"127.0.0.1";
"localhost";
}

We also set which hosts can trigger a purge of the cache, namely localhost and 127.0.0.1. The web app hosted on the server can then make an HTTP PURGE request to a given path, which will clear that path from the cache. In our case, W3 Total Cache supports this - if it’s a custom web app, you’ll need to implement this functionality yourself to clear the cache when new content is added.

Next, we start the vcl_recv subroutine. This is where we define our rules for deciding whether or not to serve content from the cache. Let’s look at our first rule:

sub vcl_recv {
# Never cache PUT, PATCH, DELETE or POST requests
if (req.method == "PUT" || req.method == "PATCH" || req.method == "DELETE" || req.method == "POST") {
return (pass);
}

Here, we declare that we should never cache any PUT, PATCH, DELETE or POST requests, on the basis that these change the state of the application. This ensures that things like contact forms will work as expected.

Note that we’re getting the value of req.method to determine the HTTP verb used. The req object has many other properties we’ll see being used.

# Never cache cart, account, checkout or addons
if (req.url ~ "^/(cart|my-account|checkout|addons)") {
return (pass);
}
# Never cache adding to cart
if ( req.url ~ "\?add-to-cart=" ) {
return (pass);
}
# Never cache admin or login
if ( req.url ~ "^/wp-(admin|login|cron)" ) {
return (pass);
}
# Never cache WooCommerce API
if ( req.url ~ "wc-api" ) {
return (pass);
}

Next, we define a series of regular expressions, and if the URL (represented by req.url) matches that regex, then the request is passed straight through to Apache without Varnish getting involved. In this case, we never want to cache the following sections:

  • The shopping cart, checkout, addons page or account page
  • The Add to cart button
  • The WordPress admin and login screen, and cron requests
  • The WooCommerce API

You’ll need to consider which parts of your site must always serve the latest content and which don’t need everything to be fully up to date. Typically admin areas any anything interactive must not be cached, while the front page is usually fine.

# Remove has_js and CloudFlare/Google Analytics __* cookies and statcounter is_unique
set req.http.Cookie = regsuball(req.http.Cookie, "(^|;\s*)(_[_a-z]+|has_js|is_unique)=[^;]*", "");
# Remove a ";" prefix, if present.
set req.http.Cookie = regsub(req.http.Cookie, "^;\s*", "");
# Remove the wp-settings-1 cookie
set req.http.Cookie = regsuball(req.http.Cookie, "wp-settings-1=[^;]+(; )?", "");
# Remove the wp-settings-time-1 cookie
set req.http.Cookie = regsuball(req.http.Cookie, "wp-settings-time-1=[^;]+(; )?"
, "");
# Remove the wp test cookie
set req.http.Cookie = regsuball(req.http.Cookie, "wordpress_test_cookie=[^;]+(; )?", "");

Cookies, even ones set on the client side such as those for Google Analytics, can prevent content from being cached. To prevent this, you need to configure Varnish to discard these cookies before passing them on to Apache. In this case, we want to exclude Google Analytics and various WordPress cookies.

# Static content unique to the theme can be cached (so no user uploaded images)
if (req.url ~ "wp-content/themes/" && req.url ~ "\.(css|js|png|gif|jp(e)?g)") {
unset req.http.cookie;
}

Here we allow static content that’s part of the site theme to be cached since that doesn’t change often, so we unset the cookies for that request.

# Even if no cookies are present, I don't want my "uploads" to be cached due to their potential size
if (req.url ~ "/wp-content/uploads/") {
return (pass);
}

Here we prevent any user-uploaded content from being cached, since that can change often.

# any pages with captchas need to be excluded
if (req.url ~ "^/contact/")
{
return(pass);
}

Captchas must obviously never be cached since that will break them. In this case, we assume that the contact form has a captcha, so it gets excluded from the cache.

# Check the cookies for wordpress-specific items
if (req.http.Cookie ~ "wordpress_" || req.http.Cookie ~ "comment_") {
# A wordpress specific cookie has been set
return (pass);
}

Here we check for remaining WordPress-specific cookies. These would indicate that a user is signed in, in which case we may want to serve them all the latest content rather than displaying content from the cache.

# allow PURGE from localhost
if (req.method == "PURGE") {
if (!client.ip ~ purge) {
return(synth(405, "Not allowed."));
}
return (purge);
}

Remember where we allowed the local server to clear the cache? This section actually carries out the purge when it receives a request from an authorised client.

# Force lookup if the request is a no-cache request from the client
if (req.http.Cache-Control ~ "no-cache") {
return (pass);
}

Here we check to see if the Cache-Control HTTP header is set to no-cache. If so, we pass it straight through to Apache.

# Try a cache-lookup
return (hash);
}

This is the last rule under vcl_recv, because it only reaches this point if the request has got past all the other rules. It tries to fetch the page from the cache. If the page is not in the cache, it passes it on to Apache and will cache the response.

sub vcl_backend_response {
set beresp.grace = 5m;
}

This is where we set how long responses are cached for. Here we’ve set it to 5 minutes.

With that done, we should be ready to restart Varnish and Apache. If you are using an operating system with systemd, then the following commands should restart Apache and Varnish:

$ sudo systemctl reload apache2.service
$ sudo systemctl reload varnish.service

For those not yet using systemd, try this instead:

$ sudo service apache2 restart
$ sudo service varnish restart

If you then visit your site and inspect the HTTP headers using your browser’s dev tools, you’ll notice the new HTTP header X-Varnish in the response. This tells you that Varnish is up and running. If you make sure you’re logged out, you should hopefully see that if you load a page, and then load it again, the second response is noticeably quicker.

Installing and configuring Varnish is a relatively quick and easy way of helping your website scale to be able to serve many more users, and if the site becomes popular all of a sudden, it can make a huge difference as to whether the site can stand up to the load or not. If you need more information on how to configure Varnish for your own needs, I recommend consulting the excellent documentation.

Building a real-time Twitter stream with Node.js, React.js and Redis

$
0
0

In the last year or so, React.js has taken the world of web development by storm. A major reason for this is that it makes it possible to build isomorphic web applications - web apps where the same code can run on the client and the server. Using React.js, you can create a template that will be executed on the server when the page first loads, and then the same template can be used to re-render the content when it’s updated, whether that’s via AJAX, WebSockets or another method entirely.

In this tutorial, I’ll show you how to build a simple Twitter streaming app using Node.js. I’m actually not the only person to have built this to demonstrate React.js, but this is my own particular take on this idea, since it’s such an obvious use case for React.

What is React.js?

A lot of people get rather confused over this issue. It’s not correct to compare React.js with frameworks like Angular.js or Backbone.js. It’s often described as being just the V in MVC - it represents only the view layer. If you’re familiar with Backbone.js, I think it’s reasonable to compare it to Backbone’s views, albeit with it’s own templating syntax. It does not provide the following functionality like Angular and Backbone do:

  • Support for models
  • Any kind of helpers for AJAX requests
  • Routing

If you want any of this functionality, you need to look elsewhere. There are other libraries around that offer this kind of functionality, so if you want to use React as part of some kind of MVC structure, you can do so - they’re just not a part of the library itself.

React.js uses a so-called “virtual DOM” - rather than re-rendering the view from scratch when the state changes, it instead retains a virtual representation of the DOM in memory, updates that, then figures out what changes are required to update the existing DOM and applies them. This means it only needs to change what actually changes, making it faster than other client-side templating systems. Combined with the ability to render on the server side, React allows you to build high-performance apps that combine the initial speed and SEO advantages of conventional web apps with the responsiveness of single-page web apps.

To create components with React, it’s common to use an XML-like syntax called JSX. It’s not mandatory, but I highly recommend you do so as it’s much more intuitive than creating elements with Javascript.

Getting started

You’ll need a Twitter account, and you’ll need to create a new Twitter app and obtain the security credentials to let you access the Twitter Streaming API. You’ll also need to have Node.js installed (ideally using nvm) - at this time, however, you can’t use Node 4.0 because of issues with Redis. You will also need to install Redis and hiredis - if you’ve worked through my previous Redis tutorials you’ll have these already.

We’ll be using Gulp.js as our build system, and Bower to install some client-side packages, so they need to be installed globally:

$ npm install -g gulp bower

We’ll also be using Compass to help with our stylesheets:

$ sudo gem install compass

With that all done, let’s start work on our app. First, run the following command to create your package.json:

$ npm init

I’m assuming you’re well-acquainted enough with Node.js to know what this does, and can answer the questions without difficulty. I won’t cover writing tests in this tutorial as, but set your test command to gulp test and you should be fine.

Next, we need to install our dependencies:

$ npm install --save babel compression express hbs hiredis lodash morgan react redis socket.io socket.io-client twitter
$ npm install --save-dev browserify chai gulp gulp-compass gulp-coveralls gulp-istanbul gulp-jshint gulp-mocha gulp-uglify jshint-stylish reactify request vinyl-buffer vinyl-source-stream

Planning our app

Now, it’s worth taking a few minutes to plan the architecture of our app. We want to have the app listen to the Twitter Streaming API and filter for messages with any arbitrary string in them - in this case we’ll be searching for “javascript”, but you can set it to anything you like. That means that that part needs to be listening all the time, not just when someone is using the app. Also, it doesn’t fit neatly into the usual request-response cycle - if several people visit the site at once, we could end up with multiple connections to fetch the same data, which is really not efficient, and could cause problems with duplicate tweets showing up.

Instead, we’ll have a separate worker.js file which runs constantly. This will listen for any matching messages on Twitter. When one appears, rather than returning it itself, it will publish it to a Redis channel, as well as persisting it. Then, the web app, which will be the index.js file, will be subscribed to the same channel, and will receive the tweet and push it to all current users using Socket.io.

This is a good example of a message queue, and it’s a common pattern. It allows you to create dedicated sections of your app for different tasks, and means that they will generally be more robust. In this case, if the worker goes down, users will still be able to see some tweets, and if the server goes down, the tweets will still be persisted to Redis. In theory, this would also allow you to scale your app more easily by allowing movement of different tasks to different servers, and several app servers could interface with a single worker process. The only downside I can think of is that on a platform like Heroku you’d need to have a separate dyno for the worker process - however, with Heroku’s pricing model changing recently, since this needs to be listening all the time it won’t be suitable for the free tier anyway.

First let’s create our gulpfile.js:

var gulp = require('gulp');
var jshint = require('gulp-jshint');
var source = require('vinyl-source-stream');
var buffer = require('vinyl-buffer');
var browserify = require('browserify');
var reactify = require('reactify');
var mocha = require('gulp-mocha');
var istanbul = require('gulp-istanbul');
var coveralls = require('gulp-coveralls');
var compass = require('gulp-compass');
var uglify = require('gulp-uglify');
var paths = {
scripts: ['components/*.jsx'],
styles: ['src/sass/*.scss']
};
gulp.task('lint', function () {
return gulp.src([
'index.js',
'components/*.js'
])
.pipe(jshint())
.pipe(jshint.reporter('jshint-stylish'));
});
gulp.task('compass', function() {
gulp.src('src/sass/*.scss')
.pipe(compass({
css: 'static/css',
sass: 'src/sass'
}))
.pipe(gulp.dest('static/css'));
});;
gulp.task('test', function () {
gulp.src('index.js')
.pipe(istanbul())
.pipe(istanbul.hookRequire())
.on('finish', function () {
gulp.src('test/test.js', {read: false})
.pipe(mocha({ reporter: 'spec' }))
.pipe(istanbul.writeReports({
reporters: [
'lcovonly',
'cobertura',
'html'
]
}))
.pipe(istanbul.enforceThresholds({ thresholds: { global: 90 } }))
.once('error', function () {
process.exit(0);
})
.once('end', function () {
process.exit(0);
});
});
});
gulp.task('coveralls', function () {
gulp.src('coverage/lcov.info')
.pipe(coveralls());
});
gulp.task('react', function () {
return browserify({ entries: ['components/index.jsx'], debug: true })
.transform(reactify)
.bundle()
.pipe(source('bundle.js'))
.pipe(buffer())
.pipe(uglify())
.pipe(gulp.dest('static/jsx/'));
});
gulp.task('default', function () {
gulp.watch(paths.scripts, ['react']);
gulp.watch(paths.styles, ['compass']);
});

I’ve added tasks for the tests and JSHint if you choose to implement them, but the only ones I’ve actually used are the compass and react tasks. The compass task compiles our Sass files into CSS, while the react task uses Browserify to take our React components and various modules installed using NPM and build them for use in the browser, as well as minifying them. Note that we installed React and lodash with NPM? We’re going to be able to use them in the browser and on the server, thanks to Browserify.

Next, let’s create our worker.js file:

/*jslint node: true */
'use strict';
// Get dependencies
var Twitter = require('twitter');
// Set up Twitter client
var client = new Twitter({
consumer_key: process.env.TWITTER_CONSUMER_KEY,
consumer_secret: process.env.TWITTER_CONSUMER_SECRET,
access_token_key: process.env.TWITTER_ACCESS_TOKEN_KEY,
access_token_secret: process.env.TWITTER_ACCESS_TOKEN_SECRET
});
// Set up connection to Redis
var redis;
if (process.env.REDIS_URL) {
redis = require('redis').createClient(process.env.REDIS_URL);
} else {
redis = require('redis').createClient();
}
client.stream('statuses/filter', {track: 'javascript', lang: 'en'}, function(stream) {
stream.on('data', function(tweet) {
// Log it to console
console.log(tweet);
// Publish it
redis.publish('tweets', JSON.stringify(tweet));
// Persist it to a Redis list
redis.rpush('stream:tweets', JSON.stringify(tweet));
});
// Handle errors
stream.on('error', function (error) {
console.log(error);
});
});

Most of this file should be fairly straightforward. We set up our connection to Twitter (you’ll need to set the various environment variables listed here using the appropriate method for your operating system), and a connection to Redis.

We then stream the Twitter statuses that match our filter. When we receive a tweet, we log it to the console (feel free to comment this out in production if desired), publish it to a Redis channel called tweets, and push it to the end of a Redis list called stream:tweets. When an error occurs, we output it to the console.

Let’s use Bootstrap to style the app. Create the following .bowerrc file:

{
"directory": "static/bower_components"
}

Then run bower init to create your bower.json file, and install Bootstrap with bower install --save sass-bootstrap.

With that done, create the file src/sass/style.scss and enter the following:

@import "compass/css3/user-interface";
@import "compass/css3";
@import "../../static/bower_components/sass-bootstrap/lib/bootstrap.scss";

This includes some dependencies from Compass, as well as Bootstrap. We won’t be using any of the Javascript features of Bootstrap, so we don’t need to worry too much about that.

Next, we need to create our view files. As React will be used to render the main part of the page, these will be very basic, with just the header, footer, and a section where the content can be rendered. First, create views/index.hbs:

{{> header }}
<div class="container">
<div class="row">
<div class="col-md-12">
<div id='view'>{{{ markup }}}</div>
</div>
</div>
</div>
<script id="initial-state" type="application/json">{{{state}}}</script>
{{> footer }}

As promised, this a very basic layout. Note the markup variable, which is where the markup generated by React will be inserted when rendered on the server, and the state variable, which will contain the JSON representation of the data used to generate that markup. By passing that data through, you can ensure that the instance of React on the client has access to the same raw data as was passed through to the view on the server side, so that when the data needs to be re-rendered, it can be done so correctly.

We’ll also define partials for the header and footer. The header should be in views/partials/header.hbs:

<!DOCTYPE html>
<!--[if lt IE 7]> <html class="no-js lt-ie9 lt-ie8 lt-ie7"> <![endif]-->
<!--[if IE 7]> <html class="no-js lt-ie9 lt-ie8"> <![endif]-->
<!--[if IE 8]> <html class="no-js lt-ie9"> <![endif]-->
<!--[if gt IE 8]><!--> <html class="no-js"> <!--<![endif]-->
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<title>Tweet Stream</title>
<meta name="description" content="">
<meta name="viewport" content="width=device-width, initial-scale=1">
<!-- Place favicon.ico and apple-touch-icon.png in the root directory -->
<link rel="stylesheet" type="text/css" href="/css/style.css">
</head>
<body>
<!--[if lt IE 7]>
<p class="browsehappy">You are using an <strong>outdated</strong> browser. Please <a href="http://browsehappy.com/">upgrade your browser</a> to improve your experience.</p>
<![endif]-->
<nav class="navbar navbar-inverse navbar-static-top" role="navigation">
<div class="container-fluid">
<div class="navbar-header">
<button type="button" class="navbar-toggle" data-toggle="collapse" data-target="#header-nav">
<span class="icon-bar"></span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
</button>
<a class="navbar-brand" href="/">Tweet Stream</a>
<div class="collapse navbar-collapse navbar-right" id="header-nav">
</div>
</div>
</div>
</nav>

The footer should be in views/partials/footer.hbs:

<script src="/jsx/bundle.js"></script>
</body>
</html>

Note that we load the Javascript file /jsx/bundle.js - this is the output from the command gulp react.

Creating the back end

The next step is to implement the back end of the website. Add the following code as index.js:

/*jslint node: true */
'use strict';
require('babel/register');
// Get dependencies
var express = require('express');
var app = express();
var compression = require('compression');
var port = process.env.PORT || 5000;
var base_url = process.env.BASE_URL || 'http://localhost:5000';
var hbs = require('hbs');
var morgan = require('morgan');
var React = require('react');
var Tweets = React.createFactory(require('./components/tweets.jsx'));
// Set up connection to Redis
var redis, subscribe;
if (process.env.REDIS_URL) {
redis = require('redis').createClient(process.env.REDIS_URL);
subscribe = require('redis').createClient(process.env.REDIS_URL);
} else {
redis = require('redis').createClient();
subscribe = require('redis').createClient();
}
// Set up templating
app.set('views', __dirname + '/views');
app.set('view engine', "hbs");
app.engine('hbs', require('hbs').__express);
// Register partials
hbs.registerPartials(__dirname + '/views/partials');
// Set up logging
app.use(morgan('combined'));
// Compress responses
app.use(compression());
// Set URL
app.set('base_url', base_url);
// Serve static files
app.use(express.static(__dirname + '/static'));
// Render main view
app.get('/', function (req, res) {
// Get tweets
redis.lrange('stream:tweets', 0, -1, function (err, tweets) {
if (err) {
console.log(err);
} else {
// Get tweets
var tweet_list = [];
tweets.forEach(function (tweet, i) {
tweet_list.push(JSON.parse(tweet));
});
// Render page
var markup = React.renderToString(Tweets({ data: tweet_list.reverse() }));
res.render('index', {
markup: markup,
state: JSON.stringify(tweet_list)
});
}
});
});
// Listen
var io = require('socket.io')({
}).listen(app.listen(port));
console.log("Listening on port " + port);
// Handle connections
io.sockets.on('connection', function (socket) {
// Subscribe to the Redis channel
subscribe.subscribe('tweets');
// Handle receiving messages
var callback = function (channel, data) {
socket.emit('message', data);
};
subscribe.on('message', callback);
// Handle disconnect
socket.on('disconnect', function () {
subscribe.removeListener('message', callback);
});
});

Let’s go through this bit by bit:

/*jslint node: true */
'use strict';
require('babel/register');

Here we’re using Babel, which is a library that allows you to use new features in Javascript even if the interpreter doesn’t support it. It also includes support for JSX, allowing us to require JSX files in the same way we would require Javascript files.

// Get dependencies
var express = require('express');
var app = express();
var compression = require('compression');
var port = process.env.PORT || 5000;
var base_url = process.env.BASE_URL || 'http://localhost:5000';
var hbs = require('hbs');
var morgan = require('morgan');
var React = require('react');
var Tweets = React.createFactory(require('./components/tweets.jsx'));

Here we include our dependencies. Most of this will be familiar if you’ve used Express before, but we also use React to create a factory for a React component called Tweets.

// Set up connection to Redis
var redis, subscribe;
if (process.env.REDIS_URL) {
redis = require('redis').createClient(process.env.REDIS_URL);
subscribe = require('redis').createClient(process.env.REDIS_URL);
} else {
redis = require('redis').createClient();
subscribe = require('redis').createClient();
}
// Set up templating
app.set('views', __dirname + '/views');
app.set('view engine', "hbs");
app.engine('hbs', require('hbs').__express);
// Register partials
hbs.registerPartials(__dirname + '/views/partials');
// Set up logging
app.use(morgan('combined'));
// Compress responses
app.use(compression());
// Set URL
app.set('base_url', base_url);
// Serve static files
app.use(express.static(__dirname + '/static'));

This section sets up the various dependencies of our app. We set up two connections to Redis - one for handling subscriptions, the other for reading from Redis in order to populate the view.

We also set up our views, logging, compression of the HTTP response, a base URL, and serving static files.

// Render main view
app.get('/', function (req, res) {
// Get tweets
redis.lrange('stream:tweets', 0, -1, function (err, tweets) {
if (err) {
console.log(err);
} else {
// Get tweets
var tweet_list = [];
tweets.forEach(function (tweet, i) {
tweet_list.push(JSON.parse(tweet));
});
// Render page
var markup = React.renderToString(Tweets({ data: tweet_list.reverse() }));
res.render('index', {
markup: markup,
state: JSON.stringify(tweet_list)
});
}
});
});

Our app only has a single view. When the root is loaded, we first of all fetch all of the tweets stored in the stream:tweets list. We then convert them into an array of objects.

Next, we render the Tweets component to a string, passing through our list of tweets, and store the resulting markup. We then pass through this markup and the string representation of the list of tweets to the template.

// Listen
var io = require('socket.io')({
}).listen(app.listen(port));
console.log("Listening on port " + port);
// Handle connections
io.sockets.on('connection', function (socket) {
// Subscribe to the Redis channel
subscribe.subscribe('tweets');
// Handle receiving messages
var callback = function (channel, data) {
socket.emit('message', data);
};
subscribe.on('message', callback);
// Handle disconnect
socket.on('disconnect', function () {
subscribe.removeListener('message', callback);
});
});

Finally, we set up Socket.io. On a connection, we subscribe to the Redis channel tweets. When we receive a tweet from Redis, we emit that tweet so that it can be rendered on the client side. We also handle disconnections by removing our Redis subscription.

Creating our React components

Now it’s time to create our first React component. We’ll create a folder called components to hold all of our component files. Our first file is components/index.jsx:

var React = require('react');
var Tweets = require('./tweets.jsx');
var initialState = JSON.parse(document.getElementById('initial-state').innerHTML);
React.render(
<Tweets data={initialState} />,
document.getElementById('view')
);

First of all, we include React and the same Tweets component we require on the server side (note that we need to specify the .jsx extension). Then we fetch the initial state from the script tag we created earlier. Finally we render the Tweets components, passing through the initial state, and specify that it should be inserted into the element with an id of view. Note that we store the initial state in data - inside the component, this can be accessed as this.props.data.

This particular component is only ever used on the client side - when we render on the server side, we don’t need any of this functionality since we insert the markup into the view element anyway, and we don’t need to specify the initial data in the same way.

Next, we define the Tweets component in components/tweets.jsx:

var React = require('react');
var io = require('socket.io-client');
var TweetList = require('./tweetlist.jsx');
var _ = require('lodash');
var Tweets = React.createClass({
componentDidMount: function () {
// Get reference to this item
var that = this;
// Set up the connection
var socket = io.connect(window.location.href);
// Handle incoming messages
socket.on('message', function (data) {
// Insert the message
var tweets = that.props.data;
tweets.push(JSON.parse(data));
tweets = _.sortBy(tweets, function (item) {
return item.created_at;
}).reverse();
that.setProps({data: tweets});
});
},
getInitialState: function () {
return {data: this.props.data};
},
render: function () {
return (
<div>
<h1>Tweets</h1>
<TweetList data={this.props.data} />
</div>
)
}
});
module.exports = Tweets;

Let’s work our way through each section in turn:

var React = require('react');
var io = require('socket.io-client');
var TweetList = require('./tweetlist.jsx');
var _ = require('lodash');

Here we include React and the Socket.io client, as well as Lodash and our TweetList component. With React.js, it’s recommend that you break up each individual part of your interface into a single component - here Tweets is a wrapper for the tweets that includes a heading. TweetList will be a list of tweets, and TweetItem will be an individual tweet.

var Tweets = React.createClass({
componentDidMount: function () {
// Get reference to this item
var that = this;
// Set up the connection
var socket = io.connect(window.location.href);
// Handle incoming messages
socket.on('message', function (data) {
// Insert the message
var tweets = that.props.data;
tweets.push(JSON.parse(data));
tweets = _.sortBy(tweets, function (item) {
return item.created_at;
}).reverse();
that.setProps({data: tweets});
});
},

Note the use of the componentDidMount method - this fires when a component has been rendered on the client side for the first time. You can therefore use it to set up events. Here, we’re setting up a callback so that when a new tweet is received, we get the existing tweets (stored in this.props.data, although we copy this to that so it works inside the callback), push the tweet to this list, sort it by the time created, and set this.props.data to the new value. This will result in the tweets being re-rendered.

getInitialState: function () {
return {data: this.props.data};
},

Here we set the initial state of the component - it sets the value of this.state to the object passed through. In this case, we pass through an object with the attribute data defined as the value of this.props.data, meaning that this.state.data is the same as this.props.data.

render: function () {
return (
<div>
<h1>Tweets</h1>
<TweetList data={this.props.data} />
</div>
)
}
});
module.exports = Tweets;

Here we define our render function. This can be thought of as our template. Note that we include TweetList inside our template and pass through the data. Afterwards, we export Tweets so it can be used elsewhere.

Next, let’s create components/tweetlist.jsx:

var React = require('react');
var TweetItem = require('./tweetitem.jsx');
var TweetList = React.createClass({
render: function () {
var that = this;
var tweetNodes = this.props.data.map(function (item, index) {
return (
<TweetItem key={index} text={item.text}></TweetItem>
);
});
return (
<ul className="tweets list-group">
{tweetNodes}
</ul>
)
}
});
module.exports = TweetList;

This component is much simpler - it only has a render method. First, we get our individual tweets and for each one define a TweetItem component. Then we create an unordered list and insert the tweet items into it. We then export it as TweetList.

Our final component is the TweetItem component. Create the following file at components/tweetitem.jsx:

var React = require('react');
var TweetItem = React.createClass({
render: function () {
return (
<li className="list-group-item">{this.props.text}</li>
);
}
});
module.exports = TweetItem;

This component is quite simple. It’s just a single list item with the text set to the value of the tweet’s text attribute.

That should be all of our components done. Time to compile our Sass and run Browserify:

$ gulp compass
$ gulp react

Now, if you make sure you have set the appropriate environment variables, and then run node worker.js in one terminal, and node index.js in another, and visit http://localhost:5000/, you should see your Twitter stream in all its glory! You can also try it with Javascript disabled, or in a text-mode browser such as Lynx, to demonstrate that it still renders the page without having to do anything on the client side - you’re only missing the constant updates.

Wrapping up

I hope this gives you some idea of how you can easily use React.js on both the client and server side to make web apps that are fast and search-engine friendly while also being easy to update dynamically. You can find the source code on GitHub.

Hopefully I’ll be able to publish some later tutorials that build on this to show you how to build more substantial web apps with React.


Learning more about React.js and Flux

$
0
0

Udemy have very kindly provided some vouchers for free access to their course, “Build Web Apps with ReactJS and Flux” for me to give away to subscribers. To redeem them, follow the link above and use the voucher code MatthewDalysBlog.

There’s only 50 in total, and they are on a first-come, first-serve basis, so I suggest you redeem them sooner rather than later.

Mocking external APIs in Python

$
0
0

It’s quite common to have to integrate an external API into your web app for some of your functionality. However, it’s a really bad idea to have requests be sent to the remote API when running your tests. At best, it means your tests may fail due to unexpected circumstances, such as a network outage. At worst, you could wind up making requests to paid services that will cost you money, or sending push notifications to clients. It’s therefore a good idea to mock these requests in some way, but it can be fiddly.

In this post I’ll show you several ways you can mock an external API so as to prevent requests being sent when running your test suite. I’m sure there are many others, but these have worked for me recently.

Mocking the client library

Nowadays many third-party services realise that providing developers with client libraries in a variety of languages is a good idea, so it’s quite common to find a library for interfacing with a third-party service. Under these circumstances, the library itself is usually already thoroughly tested, so there’s no point in you writing additional tests for that functionality. Instead, you can just mock the client library so that the request is never sent, and if you need a response, then you can specify one that will remain constant.

I recently had to integrate Stripe with a mobile app backend, and I used their client library. I needed to ensure that I got the right result back. In this case I only needed to use the Token object’s create() method. I therefore created a new MockToken class that inherited from Token, and overrode its create() method so that it only accepted one card number and returned a hard-coded response for it:

from stripe.resource import Token, convert_to_stripe_object
from stripe.error import CardError
class MockToken(Token):
@classmethod
def create(cls, api_key=None, idempotency_key=None,
stripe_account=None, **params):
if params['card']['number'] != '4242424242424242':
raise CardError('Invalid card number', None, 402)
response = {
"card": {
"address_city": None,
"address_country": None,
"address_line1": None,
"address_line1_check": None,
"address_line2": None,
"address_state": None,
"address_zip": None,
"address_zip_check": None,
"brand": "Visa",
"country": "US",
"cvc_check": "unchecked",
"dynamic_last4": None,
"exp_month": 12,
"exp_year": 2017,
"fingerprint": "49gS1c4YhLaGEQbj",
"funding": "credit",
"id": "card_17XXdZGzvyST06Z022EiG1zt",
"last4": "4242",
"metadata": {},
"name": None,
"object": "card",
"tokenization_method": None
},
"client_ip": "192.168.1.1",
"created": 1453817861,
"id": "tok_42XXdZGzvyST06Z0LA6h5gJp",
"livemode": False,
"object": "token",
"type": "card",
"used": False
}
return convert_to_stripe_object(response, api_key, stripe_account)

Much of this was lifted straight from the source code for the library. I then wrote a test for the payment endpoint and patched the Token class:

class PaymentTest(TestCase):
@mock.patch('stripe.Token', MockToken)
def test_payments(self):
data = {
"number": '1111111111111111',
"exp_month": 12,
"exp_year": 2017,
"cvc": '123'
}
response = self.client.post(reverse('payments'), data=data, format='json')
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)

This replaced stripe.Token with MockToken so that in this test, the response from the client library was always going to be the expected one.

If the response doesn’t matter and all you need to do is be sure that the right method would have been called, this is easier. You can just mock the method in question using MagicMock and assert that it has been called afterwards, as in this example:

class ReminderTest(TestCase):
def test_send_reminder(self):
# Mock PushService.create_message()
PushService.create_message = mock.MagicMock(name="create_message")
# Call reminder task
send_reminder()
# Check user would have received a push notification
PushService.create_message.assert_called_with([{'text': 'My push', 'conditions': ['UserID', 'EQ', 1]}])

Mocking lower-level requests

Sometimes, no client library is available, or it’s not worth using one as you only have to make one or two requests. Under these circumstances, there are ways to mock the actual request to the external API. If you’re using the requests module, then there’s a responses module that’s ideal for mocking the API request.

Suppose we have the following code:

import json, requests
def send_request_to_api(data):
# Put together the request
params = {
'auth': settings.AUTH_KEY,
'data': data
}
response = requests.post(settings.API_URL, data={'params': json.dumps(params)})
return response

Using responses we can mock the response from the server in our test:

class APITest(TestCase):
@responses.activate
def test_send_request(self):
# Mock the API
responses.add(responses.POST,
settings.API_URL,
status=200,
content_type="application/json",
body='{"item_id": "12345678"}')
# Call function
data = {
"surname": "Smith",
"location": "London"
}
send_request_to_api(data)
# Check request went to correct URL
assert responses.calls[0].request.url == settings.API_URL

Note the use of the @responses.activate decorator. We use responses.add() to set up each URL we want to be able to mock, and pass through details of the response we want to return. We then make the request, and check that it was made as expected.

You can find more details of the responses module here.

Summary

I’m pretty certain that there are other ways you can mock an external API in Python, but these ones have worked for me recently. If you use another method, please feel free to share it in the comments.

My experience using PHP 7 in production

$
0
0

In the last couple of weeks I’ve been working on a PHP web app. Nothing unusual there, except this was the first time we’d used PHP 7 in production. We discussed the possibility a while back, and eventually decided that for certain projects we’d use PHP 7 without waiting another year or so (or maybe longer) for a version of Debian stable with it by default. I wanted to talk about how our experience has been using it in production.

Background

We’ve never really had a fixed stack that we work with at work before until recently - it was largely based on personal preferences and experience. For many jobs, especially content-based sites, we generally used WordPress - it has its issues, but it does fine for a lot of work. For more complex websites, I tended to use CodeIgniter because I’d learned it during my previous job and knew it fairly well, but I was not terribly happy with it - it’s a bit too basic and simplistic, as well as being somewhat behind the times, and I only really kept using it through inertia. For mobile app backends, I tended to use Django, partly for the admin interface, and partly because Django REST Framework makes it easy to build a REST API quickly and easily in a way that wasn’t viable with CodeIgniter.

This state of affairs couldn’t really continue. I love Python and Django, but I was the only one at work who had ever used Python, so in the event I got hit by a bus there would have been no-one who could have taken over from me. As for CodeIgniter, it was clearly falling further and further behind the curve, and I was sick of it and looking to replace it. Ideally we needed a PHP framework as both myself and my colleague knew it.

I’d also been playing around with Laravel on a few little projects, but I didn’t get the chance to use it for a new web app until autumn last year. Around the same time, we hired a third developer, who also had some experience using Laravel. In addition, the presence of Lumen meant that we could use that for smaller apps or services that were too small to use Laravel. We therefore decided to adopt Laravel as our default framework - in future we’d only use something else if there was a particular justification for it. I was rather sad to have to abandon Django for work, but pleased to have something more modern than CodeIgniter for PHP projects.

This also enabled us to standardize our new server builds. Over the last year or so I’ve been pushing to automate what we can of our server setup using Ansible. We now have two standard stacks that we plan to use for future projects. One is for WordPress sites and consists of:

  • Debian stable
  • Apache
  • MySQL
  • PHP 5.6
  • Memcached
  • Varnish

The other is for Laravel or Lumen web apps or APIs and consists of:

  • Debian stable
  • Nginx
  • PHP 7
  • PostgreSQL
  • Redis

It took some time to decide what we wanted to settle on, and indeed we had a mobile app backend that went up around Christmas time that we wrote with Laravel, but deployed to Apache with PHP 5.6 because when we first pushed it up PHP 7 wasn’t out yet. However, given that Laravel 5 already had good support for PHP 7, we decided we’d consider it for the next app. I tend to use PostgreSQL rather than MySQL these days because it has a lot of nifty features like JSON fields and full text search, and using an ORM minimises the learning curve in switching, and Redis is much more versatile than Memcached, so they were vital parts of our stack.

Our first PHP 7 app

As it happened, we had a Laravel app in the pipeline that was ideal. In the summer of last year, we were hired to make an existing site responsive. In the end, it turned out not to be viable - it was built with Zend Framework, which none of us had ever touched before, and the front end used a lot of custom widgets and fields tied together with RequireJS. The whole thing was rather unwieldy and extremely difficult to maintain and develop. In the end, we decided to tell the client it wasn’t worth developing further and offer to rewrite the whole thing from scratch using Laravel and AngularJS, with Browserify used to handle JavaScript modules - the basic idea was quite simple, it was just the implementation that was overly complex, and AngularJS made it possible to do the same kind of thing with a fraction of the code, so a rewrite in only a few weeks was perfectly viable.

I’d already built a simple prototype to demonstrate the viability of a from-scratch rewrite using Laravel and Angular, and once the client had agreed to the rewrite, we were able to work on this further. As the web app was going to be particularly useful on mobile devices, I wanted to ensure that the performance was as good as I could possibly make it. By the time we were looking at deploying it to a server, three months had passed since PHP 7 had been first released, and I figured that was long enough for the most serious issues to be resolved, and we could definitely do with the very significant speed boost we’d get from using PHP 7 for this app.

I use Jenkins to run my unit tests, and so I decided to try installing PHP 7 on the Jenkins server and using that to run the tests. The results were encouraging - nothing broke as a result of the switch. So we therefore decided that when we deployed it, we’d try it with PHP 7, and if it failed, we’d switch to PHP 5.6.

I opted to use FPM with Nginx rather than Apache and mod_php as since the web app was purely custom we didn’t really need things like .htaccess, and while the amount of static content was limited, Nginx might well perform better for this use case. The results are fairly encouraging - the document for the home page is typically being returned in under 40ms, with the uncached homepage taking around 1.5s in total to load, despite having to load several external fonts. In its current state, the web app scores a solid 93% on YSlow, which I’m very happy with. I don’t know how much of that is down to using PHP 7, but choosing to use it was definitely a good call. I have had absolutely zero issues with it during that time.

Summary

As always, you should bear in mind that your needs may not be the same as mine, and it could well be that you need something that PHP 7 doesn’t yet provide. However, I have had a very good experience with PHP 7 in production. I may have had to jump through a few more hoops to get it up and running, and there may be some level of risk associated with using PHP 7 when it’s only been available for three months, but it’s more than justified by the speed we get from our web app. Using a configuration management system like Ansible means that even if you do have to jump through some extra hoops, it’s relatively easy to automate that process so it’s not as much of an issue as you might think. For me, using PHP 7 with a Laravel app has worked as well as I could have possibly hoped.

Building a location aware web app with GeoDjango

$
0
0

PostgreSQL has excellent support for geographical data thanks to the PostGIS extension, and Django allows you to take full advantage of it thanks to GeoDjango. In this tutorial, I’ll show you how to use GeoDjango to build a web app that allows users to search for gigs and events near them.

Requirements

I’ve made the jump to Python 3, and if you haven’t done so yet, I highly recommend it - it’s not hard, and there’s very few modules left that haven’t been ported across. As such, this tutorial assumes you’re using Python 3. You’ll also need to have Git, PostgreSQL and PostGIS installed - I’ll leave the details of doing so up to you as it varies by platform, but you can generally do so easily with a package manager on most Linux distros. On Mac OS X I recommend using Homebrew. If you’re on Windows I think your best bet is probably to use a Vagrant VM.

We’ll be using Django 1.9 - if by the time you read this a newer version of Django is out, it’s quite possible that some things may have changed and you’ll need to work around any problems caused. Generally search engines are the best place to look for this, and I’ll endeavour to keep the resulting Github repository as up to date as I can, so try those if you get stuck.

Getting started

First of all, let’s create our database. Make sure you’re running as a user that has the required privileges to create users and databases for PostgreSQL and run the following command:

$ createdb gigfinder

This creates the database. Next, we create the user:

$ createuser -s giguser -P

You’ll be prompted to enter a password for the new user. Next, we want to use the psql command-line client to interact with our new database:

$ psql gigfinder

This connects to the database. Run these commands to set up access to the database and install the PostGIS extension:

# GRANT ALL PRIVILEGES ON DATABASE gigfinder TO giguser;
# CREATE EXTENSION postgis;
# \q

With our database set up, it’s time to start work on our project. Let’s create our virtualenv in a new folder:

$ pyvenv venv

Then activate it:

$ source venv/bin/activate

Then we install Django, along with a few other production dependencies:

$ pip install django-toolbelt

And record our dependencies:

$ pip freeze > requirements.txt

Next, we create our application skeleton:

$ django-admin.py startproject gigfinder .

We’ll also create a .gitignore file:

venv/
.DS_Store
*.swp
node_modules/
*.pyc

Let’s commit our changes:

$ git init
$ git add .gitignore requirements/txt manage.py gigfinder
$ git commit -m 'Initial commit'

Next, let’s create our first app, which we will call gigs:

$ python manage.py startapp gigs

We need to add our new app to the INSTALLED_APPS setting. While we’re there we’ll also add GIS support and set up the database connection. First, add the required apps to INSTALLED_APPS:

INSTALLED_APPS = [
...
'django.contrib.gis',
'gigs',
]

Next, configure the database:

DATABASES = {
'default': {
'ENGINE': 'django.contrib.gis.db.backends.postgis',
'NAME': 'gigfinder',
'USER': 'giguser',
'PASSWORD': 'password',
},
}

Let’s run the migrations:

$ python manage.py migrate
Operations to perform:
Apply all migrations: sessions, contenttypes, admin, auth
Running migrations:
Rendering model states... DONE
Applying contenttypes.0001_initial... OK
Applying auth.0001_initial... OK
Applying admin.0001_initial... OK
Applying admin.0002_logentry_remove_auto_add... OK
Applying contenttypes.0002_remove_content_type_name... OK
Applying auth.0002_alter_permission_name_max_length... OK
Applying auth.0003_alter_user_email_max_length... OK
Applying auth.0004_alter_user_username_opts... OK
Applying auth.0005_alter_user_last_login_null... OK
Applying auth.0006_require_contenttypes_0002... OK
Applying auth.0007_alter_validators_add_error_messages... OK
Applying sessions.0001_initial... OK

And create our superuser account:

$ python manage.py createsuperuser

Now, we’ll commit our changes:

$ git add gigfinder/ gigs/
$ git commit -m 'Created gigs app'
[master e72a846] Created gigs app
8 files changed, 24 insertions(+), 3 deletions(-)
create mode 100644 gigs/__init__.py
create mode 100644 gigs/admin.py
create mode 100644 gigs/apps.py
create mode 100644 gigs/migrations/__init__.py
create mode 100644 gigs/models.py
create mode 100644 gigs/tests.py
create mode 100644 gigs/views.py

Our first model

At this point, it’s worth thinking about the models we plan for our app to have. First we’ll have a Venue model that contains details of an individual venue, which will include a name and a geographical location. We’ll also have an Event model that will represent an individual gig or event at a venue, and will include a name, date/time and a venue as a foreign key.

Before we start writing our first model, we need to write a test for it, but we also need to be able to create objects easily in our tests. We also want to be able to easily examine our objects, so we’ll install iPDB and Factory Boy:

$ pip install ipdb factory-boy
$ pip freeze > requirements.txt

Next, we write a test for the Venue model:

from django.test import TestCase
from gigs.models import Venue
from factory.fuzzy import BaseFuzzyAttribute
from django.contrib.gis.geos import Point
import factory.django, random
class FuzzyPoint(BaseFuzzyAttribute):
def fuzz(self):
return Point(random.uniform(-180.0, 180.0),
random.uniform(-90.0, 90.0))
# Factories for tests
class VenueFactory(factory.django.DjangoModelFactory):
class Meta:
model = Venue
django_get_or_create = (
'name',
'location'
)
name = 'Wembley Arena'
location = FuzzyPoint()
class VenueTest(TestCase):
def test_create_venue(self):
# Create the venue
venue = VenueFactory()
# Check we can find it
all_venues = Venue.objects.all()
self.assertEqual(len(all_venues), 1)
only_venue = all_venues[0]
self.assertEqual(only_venue, venue)
# Check attributes
self.assertEqual(only_venue.name, 'Wembley Arena')

Note that we randomly generate our location - this is done as suggested in this Stack Overflow post.

Now, running our tests brings up an expected error:

$ python manage.py test gigs
Creating test database for alias 'default'...
E
======================================================================
ERROR: gigs.tests (unittest.loader._FailedTest)
----------------------------------------------------------------------
ImportError: Failed to import test module: gigs.tests
Traceback (most recent call last):
File "/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/unittest/loader.py", line 428, in _find_test_path
module = self._get_module_from_name(name)
File "/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/unittest/loader.py", line 369, in _get_module_from_name
__import__(name)
File "/Users/matthewdaly/Projects/gigfinder/gigs/tests.py", line 2, in <module>
from gigs.models import Venue
ImportError: cannot import name 'Venue'
----------------------------------------------------------------------
Ran 1 test in 0.001s
FAILED (errors=1)
Destroying test database for alias 'default'...

Let’s create our Venue model in gigs/models.py:

from django.contrib.gis.db import models
class Venue(models.Model):
"""
Model for a venue
"""
pass

For now, we’re just creating a simple dummy model. Note that we import models from django.contrib.gis.db instead of the usual place - this gives us access to the additional geographical fields.

If we run our tests again we get an error:

$ python manage.py test gigs
Creating test database for alias 'default'...
Traceback (most recent call last):
File "/Users/matthewdaly/Projects/gigfinder/venv/lib/python3.5/site-packages/django/db/backends/utils.py", line 64, in execute
return self.cursor.execute(sql, params)
psycopg2.ProgrammingError: relation "gigs_venue" does not exist
LINE 1: SELECT "gigs_venue"."id" FROM "gigs_venue" ORDER BY "gigs_ve...
^
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/Users/matthewdaly/Projects/gigfinder/venv/lib/python3.5/site-packages/django/core/management/__init__.py", line 353, in execute_from_command_line
utility.execute()
File "/Users/matthewdaly/Projects/gigfinder/venv/lib/python3.5/site-packages/django/core/management/__init__.py", line 345, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/Users/matthewdaly/Projects/gigfinder/venv/lib/python3.5/site-packages/django/core/management/commands/test.py", line 30, in run_from_argv
super(Command, self).run_from_argv(argv)
File "/Users/matthewdaly/Projects/gigfinder/venv/lib/python3.5/site-packages/django/core/management/base.py", line 348, in run_from_argv
self.execute(*args, **cmd_options)
File "/Users/matthewdaly/Projects/gigfinder/venv/lib/python3.5/site-packages/django/core/management/commands/test.py", line 74, in execute
super(Command, self).execute(*args, **options)
File "/Users/matthewdaly/Projects/gigfinder/venv/lib/python3.5/site-packages/django/core/management/base.py", line 399, in execute
output = self.handle(*args, **options)
File "/Users/matthewdaly/Projects/gigfinder/venv/lib/python3.5/site-packages/django/core/management/commands/test.py", line 90, in handle
failures = test_runner.run_tests(test_labels)
File "/Users/matthewdaly/Projects/gigfinder/venv/lib/python3.5/site-packages/django/test/runner.py", line 532, in run_tests
old_config = self.setup_databases()
File "/Users/matthewdaly/Projects/gigfinder/venv/lib/python3.5/site-packages/django/test/runner.py", line 482, in setup_databases
self.parallel, **kwargs
File "/Users/matthewdaly/Projects/gigfinder/venv/lib/python3.5/site-packages/django/test/runner.py", line 726, in setup_databases
serialize=connection.settings_dict.get("TEST", {}).get("SERIALIZE", True),
File "/Users/matthewdaly/Projects/gigfinder/venv/lib/python3.5/site-packages/django/db/backends/base/creation.py", line 78, in create_test_db
self.connection._test_serialized_contents = self.serialize_db_to_string()
File "/Users/matthewdaly/Projects/gigfinder/venv/lib/python3.5/site-packages/django/db/backends/base/creation.py", line 122, in serialize_db_to_string
serializers.serialize("json", get_objects(), indent=None, stream=out)
File "/Users/matthewdaly/Projects/gigfinder/venv/lib/python3.5/site-packages/django/core/serializers/__init__.py", line 129, in serialize
s.serialize(queryset, **options)
File "/Users/matthewdaly/Projects/gigfinder/venv/lib/python3.5/site-packages/django/core/serializers/base.py", line 79, in serialize
for count, obj in enumerate(queryset, start=1):
File "/Users/matthewdaly/Projects/gigfinder/venv/lib/python3.5/site-packages/django/db/backends/base/creation.py", line 118, in get_objects
for obj in queryset.iterator():
File "/Users/matthewdaly/Projects/gigfinder/venv/lib/python3.5/site-packages/django/db/models/query.py", line 52, in __iter__
results = compiler.execute_sql()
File "/Users/matthewdaly/Projects/gigfinder/venv/lib/python3.5/site-packages/django/db/models/sql/compiler.py", line 848, in execute_sql
cursor.execute(sql, params)
File "/Users/matthewdaly/Projects/gigfinder/venv/lib/python3.5/site-packages/django/db/backends/utils.py", line 64, in execute
return self.cursor.execute(sql, params)
File "/Users/matthewdaly/Projects/gigfinder/venv/lib/python3.5/site-packages/django/db/utils.py", line 95, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "/Users/matthewdaly/Projects/gigfinder/venv/lib/python3.5/site-packages/django/utils/six.py", line 685, in reraise
raise value.with_traceback(tb)
File "/Users/matthewdaly/Projects/gigfinder/venv/lib/python3.5/site-packages/django/db/backends/utils.py", line 64, in execute
return self.cursor.execute(sql, params)
django.db.utils.ProgrammingError: relation "gigs_venue" does not exist
LINE 1: SELECT "gigs_venue"."id" FROM "gigs_venue" ORDER BY "gigs_ve...

Let’s update our model:

from django.contrib.gis.db import models
class Venue(models.Model):
"""
Model for a venue
"""
name = models.CharField(max_length=200)
location = models.PointField()

Then create our migration:

$ python manage.py makemigrations
Migrations for 'gigs':
0001_initial.py:
- Create model Venue

And run it:

$ python manage.py migrate
Operations to perform:
Apply all migrations: gigs, sessions, contenttypes, auth, admin
Running migrations:
Rendering model states... DONE
Applying gigs.0001_initial... OK

Then if we run our tests:

$ python manage.py test gigs
Creating test database for alias 'default'...
.
----------------------------------------------------------------------
Ran 1 test in 0.362s
OK
Destroying test database for alias 'default'...

They should pass. Note that Django may complain about needing to delete the test database before running the tests, but this should not cause any problems. Let’s commit our changes:

$ git add requirements.txt gigs/
$ git commit -m 'Venue model in place'

With our venue done, let’s turn to our Event model. Amend gigs/tests.py as follows:

from django.test import TestCase
from gigs.models import Venue, Event
from factory.fuzzy import BaseFuzzyAttribute
from django.contrib.gis.geos import Point
import factory.django, random
from django.utils import timezone
class FuzzyPoint(BaseFuzzyAttribute):
def fuzz(self):
return Point(random.uniform(-180.0, 180.0),
random.uniform(-90.0, 90.0))
# Factories for tests
class VenueFactory(factory.django.DjangoModelFactory):
class Meta:
model = Venue
django_get_or_create = (
'name',
'location'
)
name = 'Wembley Arena'
location = FuzzyPoint()
class EventFactory(factory.django.DjangoModelFactory):
class Meta:
model = Event
django_get_or_create = (
'name',
'venue',
'datetime'
)
name = 'Queens of the Stone Age'
datetime = timezone.now()
class VenueTest(TestCase):
def test_create_venue(self):
# Create the venue
venue = VenueFactory()
# Check we can find it
all_venues = Venue.objects.all()
self.assertEqual(len(all_venues), 1)
only_venue = all_venues[0]
self.assertEqual(only_venue, venue)
# Check attributes
self.assertEqual(only_venue.name, 'Wembley Arena')
class EventTest(TestCase):
def test_create_event(self):
# Create the venue
venue = VenueFactory()
# Create the event
event = EventFactory(venue=venue)
# Check we can find it
all_events = Event.objects.all()
self.assertEqual(len(all_events), 1)
only_event = all_events[0]
self.assertEqual(only_event, event)
# Check attributes
self.assertEqual(only_event.name, 'Queens of the Stone Age')
self.assertEqual(only_event.venue.name, 'Wembley Arena')

Then we run our tests:

$ python manage.py test gigs
Creating test database for alias 'default'...
E
======================================================================
ERROR: gigs.tests (unittest.loader._FailedTest)
----------------------------------------------------------------------
ImportError: Failed to import test module: gigs.tests
Traceback (most recent call last):
File "/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/unittest/loader.py", line 428, in _find_test_path
module = self._get_module_from_name(name)
File "/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/unittest/loader.py", line 369, in _get_module_from_name
__import__(name)
File "/Users/matthewdaly/Projects/gigfinder/gigs/tests.py", line 2, in <module>
from gigs.models import Venue, Event
ImportError: cannot import name 'Event'
----------------------------------------------------------------------
Ran 1 test in 0.001s
FAILED (errors=1)
Destroying test database for alias 'default'...

As expected, this fails, so create an empty Event model in gigs/models.py:

class Event(models.Model):
"""
Model for an event
"""
pass

Running the tests now will raise an error due to the table not existing:

$ python manage.py test gigs
Creating test database for alias 'default'...
Traceback (most recent call last):
File "/Users/matthewdaly/Projects/gigfinder/venv/lib/python3.5/site-packages/django/db/backends/utils.py", line 64, in execute
return self.cursor.execute(sql, params)
psycopg2.ProgrammingError: relation "gigs_event" does not exist
LINE 1: SELECT "gigs_event"."id" FROM "gigs_event" ORDER BY "gigs_ev...
^
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/Users/matthewdaly/Projects/gigfinder/venv/lib/python3.5/site-packages/django/core/management/__init__.py", line 353, in execute_from_command_line
utility.execute()
File "/Users/matthewdaly/Projects/gigfinder/venv/lib/python3.5/site-packages/django/core/management/__init__.py", line 345, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/Users/matthewdaly/Projects/gigfinder/venv/lib/python3.5/site-packages/django/core/management/commands/test.py", line 30, in run_from_argv
super(Command, self).run_from_argv(argv)
File "/Users/matthewdaly/Projects/gigfinder/venv/lib/python3.5/site-packages/django/core/management/base.py", line 348, in run_from_argv
self.execute(*args, **cmd_options)
File "/Users/matthewdaly/Projects/gigfinder/venv/lib/python3.5/site-packages/django/core/management/commands/test.py", line 74, in execute
super(Command, self).execute(*args, **options)
File "/Users/matthewdaly/Projects/gigfinder/venv/lib/python3.5/site-packages/django/core/management/base.py", line 399, in execute
output = self.handle(*args, **options)
File "/Users/matthewdaly/Projects/gigfinder/venv/lib/python3.5/site-packages/django/core/management/commands/test.py", line 90, in handle
failures = test_runner.run_tests(test_labels)
File "/Users/matthewdaly/Projects/gigfinder/venv/lib/python3.5/site-packages/django/test/runner.py", line 532, in run_tests
old_config = self.setup_databases()
File "/Users/matthewdaly/Projects/gigfinder/venv/lib/python3.5/site-packages/django/test/runner.py", line 482, in setup_databases
self.parallel, **kwargs
File "/Users/matthewdaly/Projects/gigfinder/venv/lib/python3.5/site-packages/django/test/runner.py", line 726, in setup_databases
serialize=connection.settings_dict.get("TEST", {}).get("SERIALIZE", True),
File "/Users/matthewdaly/Projects/gigfinder/venv/lib/python3.5/site-packages/django/db/backends/base/creation.py", line 78, in create_test_db
self.connection._test_serialized_contents = self.serialize_db_to_string()
File "/Users/matthewdaly/Projects/gigfinder/venv/lib/python3.5/site-packages/django/db/backends/base/creation.py", line 122, in serialize_db_to_string
serializers.serialize("json", get_objects(), indent=None, stream=out)
File "/Users/matthewdaly/Projects/gigfinder/venv/lib/python3.5/site-packages/django/core/serializers/__init__.py", line 129, in serialize
s.serialize(queryset, **options)
File "/Users/matthewdaly/Projects/gigfinder/venv/lib/python3.5/site-packages/django/core/serializers/base.py", line 79, in serialize
for count, obj in enumerate(queryset, start=1):
File "/Users/matthewdaly/Projects/gigfinder/venv/lib/python3.5/site-packages/django/db/backends/base/creation.py", line 118, in get_objects
for obj in queryset.iterator():
File "/Users/matthewdaly/Projects/gigfinder/venv/lib/python3.5/site-packages/django/db/models/query.py", line 52, in __iter__
results = compiler.execute_sql()
File "/Users/matthewdaly/Projects/gigfinder/venv/lib/python3.5/site-packages/django/db/models/sql/compiler.py", line 848, in execute_sql
cursor.execute(sql, params)
File "/Users/matthewdaly/Projects/gigfinder/venv/lib/python3.5/site-packages/django/db/backends/utils.py", line 64, in execute
return self.cursor.execute(sql, params)
File "/Users/matthewdaly/Projects/gigfinder/venv/lib/python3.5/site-packages/django/db/utils.py", line 95, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "/Users/matthewdaly/Projects/gigfinder/venv/lib/python3.5/site-packages/django/utils/six.py", line 685, in reraise
raise value.with_traceback(tb)
File "/Users/matthewdaly/Projects/gigfinder/venv/lib/python3.5/site-packages/django/db/backends/utils.py", line 64, in execute
return self.cursor.execute(sql, params)
django.db.utils.ProgrammingError: relation "gigs_event" does not exist
LINE 1: SELECT "gigs_event"."id" FROM "gigs_event" ORDER BY "gigs_ev...

So let’s populate our model:

class Event(models.Model):
"""
Model for an event
"""
name = models.CharField(max_length=200)
datetime = models.DateTimeField()
venue = models.ForeignKey(Venue)

And create our migration:

$ python manage.py makemigrations
Migrations for 'gigs':
0002_event.py:
- Create model Event

And run it:

$ python manage.py migrate
Operations to perform:
Apply all migrations: auth, admin, sessions, contenttypes, gigs
Running migrations:
Rendering model states... DONE
Applying gigs.0002_event... OK

And run our tests:

$ python manage.py test gigs
Creating test database for alias 'default'...
..
----------------------------------------------------------------------
Ran 2 tests in 0.033s
OK
Destroying test database for alias 'default'...

Again, you may be prompted to delete the test database, but this should not be an issue.

With this done, let’s commit our changes:

$ git add gigs
$ git commit -m 'Added Event model'
[master 47ba686] Added Event model
3 files changed, 67 insertions(+), 1 deletion(-)
create mode 100644 gigs/migrations/0002_event.py

Setting up the admin

For an application like this, you’d expect the curators of the site to maintain the gigs and venues stored in the database, and that’s an obvious use case for the Django admin. So let’s set our models up to be available in the admin. Open up gigs/admin.py and amend it as follows:

from django.contrib import admin
from gigs.models import Venue, Event
admin.site.register(Venue)
admin.site.register(Event)

Now, if you start up the dev server as usual with python manage.py runserver and visit http://127.0.0.1:8000/admin/, you can see that our Event and Venue models are now available. However, the string representations of them are pretty useless. Let’s fix that. First, we amend our tests:

from django.test import TestCase
from gigs.models import Venue, Event
from factory.fuzzy import BaseFuzzyAttribute
from django.contrib.gis.geos import Point
import factory.django, random
from django.utils import timezone
class FuzzyPoint(BaseFuzzyAttribute):
def fuzz(self):
return Point(random.uniform(-180.0, 180.0),
random.uniform(-90.0, 90.0))
# Factories for tests
class VenueFactory(factory.django.DjangoModelFactory):
class Meta:
model = Venue
django_get_or_create = (
'name',
'location'
)
name = 'Wembley Arena'
location = FuzzyPoint()
class EventFactory(factory.django.DjangoModelFactory):
class Meta:
model = Event
django_get_or_create = (
'name',
'venue',
'datetime'
)
name = 'Queens of the Stone Age'
datetime = timezone.now()
class VenueTest(TestCase):
def test_create_venue(self):
# Create the venue
venue = VenueFactory()
# Check we can find it
all_venues = Venue.objects.all()
self.assertEqual(len(all_venues), 1)
only_venue = all_venues[0]
self.assertEqual(only_venue, venue)
# Check attributes
self.assertEqual(only_venue.name, 'Wembley Arena')
# Check string representation
self.assertEqual(only_venue.__str__(), 'Wembley Arena')
class EventTest(TestCase):
def test_create_event(self):
# Create the venue
venue = VenueFactory()
# Create the event
event = EventFactory(venue=venue)
# Check we can find it
all_events = Event.objects.all()
self.assertEqual(len(all_events), 1)
only_event = all_events[0]
self.assertEqual(only_event, event)
# Check attributes
self.assertEqual(only_event.name, 'Queens of the Stone Age')
self.assertEqual(only_event.venue.name, 'Wembley Arena')
# Check string representation
self.assertEqual(only_event.__str__(), 'Queens of the Stone Age - Wembley Arena')

Next, we run our tests:

$ python manage.py test gigs
Creating test database for alias 'default'...
FF
======================================================================
FAIL: test_create_event (gigs.tests.EventTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/matthewdaly/Projects/gigfinder/gigs/tests.py", line 74, in test_create_event
self.assertEqual(only_event.__str__(), 'Queens of the Stone Age - Wembley Arena')
AssertionError: 'Event object' != 'Queens of the Stone Age - Wembley Arena'
- Event object
+ Queens of the Stone Age - Wembley Arena
======================================================================
FAIL: test_create_venue (gigs.tests.VenueTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/matthewdaly/Projects/gigfinder/gigs/tests.py", line 52, in test_create_venue
self.assertEqual(only_venue.__str__(), 'Wembley Arena')
AssertionError: 'Venue object' != 'Wembley Arena'
- Venue object
+ Wembley Arena
----------------------------------------------------------------------
Ran 2 tests in 0.059s
FAILED (failures=2)
Destroying test database for alias 'default'...

They fail as expected. So let’s update gigs/models.py:

from django.contrib.gis.db import models
class Venue(models.Model):
"""
Model for a venue
"""
name = models.CharField(max_length=200)
location = models.PointField()
def __str__(self):
return self.name
class Event(models.Model):
"""
Model for an event
"""
name = models.CharField(max_length=200)
datetime = models.DateTimeField()
venue = models.ForeignKey(Venue)
def __str__(self):
return "%s - %s" % (self.name, self.venue.name)

For the venue, we just use the name. For the event, we use the event name and the venue name.

Now, we run our tests again:

$ python manage.py test gigs
Creating test database for alias 'default'...
..
----------------------------------------------------------------------
Ran 2 tests in 0.048s
OK
Destroying test database for alias 'default'...

Time to commit our changes:

$ git add gigs
$ git commit -m 'Added models to admin'
[master 65d051f] Added models to admin
3 files changed, 15 insertions(+), 1 deletion(-)

Our models are now in place, so you may want to log into the admin and create a few venues and events so you can see it in action. Note that the location field for the Venue model creates a map widget that allows you to select a geographical location. It is a bit basic, however, so let’s make it better. Let’s install django-floppyforms:

$ pip install django-floppyforms

And add it to our requirements:

$ pip install -r requirements.txt

Then add it to INSTALLED_APPS in gigfinder/setttings.py:

INSTALLED_APPS = [
...
'django.contrib.gis',
'gigs',
'floppyforms',
]

Now we create a custom point widget for our admin, a custom form for the venues, and a custom venue admin:

from django.contrib import admin
from gigs.models import Venue, Event
from django.forms import ModelForm
from floppyforms.gis import PointWidget, BaseGMapWidget
class CustomPointWidget(PointWidget, BaseGMapWidget):
class Media:
js = ('/static/floppyforms/js/MapWidget.js',)
class VenueAdminForm(ModelForm):
class Meta:
model = Venue
fields = ['name', 'location']
widgets = {
'location': CustomPointWidget()
}
class VenueAdmin(admin.ModelAdmin):
form = VenueAdminForm
admin.site.register(Venue, VenueAdmin)
admin.site.register(Event)

Note in particular that we define the media for our widget so we can include some required Javascript. If you run the dev server again, you should see that the map widget in the admin is now provided by Google Maps, making it much easier to identify the correct location of the venue.

Time to commit our changes:

$ git add gigfinder/ gigs/ requirements.txt
$ git commit -m 'Customised location widget'

With our admin ready, it’s time to move on to the user-facing part of the web app.

Creating our views

We will keep the front end for this app as simple as possible for the purposes of this tutorial, but of course you should feel free to expand upon this as you see fit. What we’ll do is create a form that uses HTML5 geolocation to get the user’s current geographical coordinates. It will then return events in the next week, ordered by how close the venue is. Please note that there are plans afoot in some browsers to prevent HTML5 geolocation from working unless content is server over HTTPS, so that may complicate things.

How do we query the database to get this data? It’s not too difficult, as shown in this example:

$ python manage.py shell
Python 3.5.1 (default, Mar 25 2016, 00:17:15)
Type "copyright", "credits" or "license" for more information.
IPython 4.1.2 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.
In [1]: from gigs.models import *
In [2]: from django.contrib.gis.geos import Point
In [3]: from django.contrib.gis.db.models.functions import Distance
In [4]: location = Point(52.3749159, 1.1067473, srid=4326)
In [5]: Venue.objects.all().annotate(distance=Distance('location', location)).order_by('distance')
Out[5]: [<Venue: Diss Corn Hall>, <Venue: Waterfront Norwich>, <Venue: UEA Norwich>, <Venue: Wembley Arena>]

I’ve set up a number of venues using the admin, one round the corner, two in Norwich, and one in London. I then imported the models, the Point class, and the Distance function, and created a Point object. Note that the Point is passed three fields - the first and second are the latitude and longitude, respectively, while the srid field takes a value of 4326. This field represents the Spatial Reference System Identifier used for this query - we’ve gone for WGS 84, which is a common choice and is referred to with the SRID 4326.

Now, we want the user to be able to submit the form and get the 5 nearest events in the next week. We can get the date for this time next week as follows:

In [6]: next_week = timezone.now() + timezone.timedelta(weeks=1)

Then we can get the events we want, sorted by distance, like this:

In [7]: Event.objects.filter(datetime__gte=timezone.now()).filter(datetime__lte=next_week).annotate(distance=Distance('venue__location', location)).order_by('distance')[0:5]
Out[7]: [<Event: Primal Scream - UEA Norwich>, <Event: Queens of the Stone Age - Wembley Arena>]

With that in mind, let’s write the test for our view. The view should contain a single form that accepts a user’s geographical coordinates - for convenience we’ll autocomplete this with HTML5 geolocation. On submit, the user should see a list of the five closest events in the next week.

First, let’s test the GET request. Amend gigs/tests.py as follows:

from django.test import TestCase
from gigs.models import Venue, Event
from factory.fuzzy import BaseFuzzyAttribute
from django.contrib.gis.geos import Point
import factory.django, random
from django.utils import timezone
from django.test import RequestFactory
from django.core.urlresolvers import reverse
from gigs.views import LookupView
class FuzzyPoint(BaseFuzzyAttribute):
def fuzz(self):
return Point(random.uniform(-180.0, 180.0),
random.uniform(-90.0, 90.0))
# Factories for tests
class VenueFactory(factory.django.DjangoModelFactory):
class Meta:
model = Venue
django_get_or_create = (
'name',
'location'
)
name = 'Wembley Arena'
location = FuzzyPoint()
class EventFactory(factory.django.DjangoModelFactory):
class Meta:
model = Event
django_get_or_create = (
'name',
'venue',
'datetime'
)
name = 'Queens of the Stone Age'
datetime = timezone.now()
class VenueTest(TestCase):
def test_create_venue(self):
# Create the venue
venue = VenueFactory()
# Check we can find it
all_venues = Venue.objects.all()
self.assertEqual(len(all_venues), 1)
only_venue = all_venues[0]
self.assertEqual(only_venue, venue)
# Check attributes
self.assertEqual(only_venue.name, 'Wembley Arena')
# Check string representation
self.assertEqual(only_venue.__str__(), 'Wembley Arena')
class EventTest(TestCase):
def test_create_event(self):
# Create the venue
venue = VenueFactory()
# Create the event
event = EventFactory(venue=venue)
# Check we can find it
all_events = Event.objects.all()
self.assertEqual(len(all_events), 1)
only_event = all_events[0]
self.assertEqual(only_event, event)
# Check attributes
self.assertEqual(only_event.name, 'Queens of the Stone Age')
self.assertEqual(only_event.venue.name, 'Wembley Arena')
# Check string representation
self.assertEqual(only_event.__str__(), 'Queens of the Stone Age - Wembley Arena')
class LookupViewTest(TestCase):
"""
Test lookup view
"""
def setUp(self):
self.factory = RequestFactory()
def test_get(self):
request = self.factory.get(reverse('lookup'))
response = LookupView.as_view()(request)
self.assertEqual(response.status_code, 200)
self.assertTemplateUsed('gigs/lookup.html')

Let’s run our tests:

$ python manage.py test gigs
Creating test database for alias 'default'...
E
======================================================================
ERROR: gigs.tests (unittest.loader._FailedTest)
----------------------------------------------------------------------
ImportError: Failed to import test module: gigs.tests
Traceback (most recent call last):
File "/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/unittest/loader.py", line 428, in _find_test_path
module = self._get_module_from_name(name)
File "/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/unittest/loader.py", line 369, in _get_module_from_name
__import__(name)
File "/Users/matthewdaly/Projects/gigfinder/gigs/tests.py", line 9, in <module>
from gigs.views import LookupView
ImportError: cannot import name 'LookupView'
----------------------------------------------------------------------
Ran 1 test in 0.000s
FAILED (errors=1)
Destroying test database for alias 'default'...

Our first issue is that we can’t import the view in the test. Let’s fix that by amending gigs/views.py:

from django.shortcuts import render
from django.views.generic.base import View
class LookupView(View):
pass

Running the tests again results in the following:

$ python manage.py test gigs
Creating test database for alias 'default'...
.E.
======================================================================
ERROR: test_get (gigs.tests.LookupViewTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/matthewdaly/Projects/gigfinder/gigs/tests.py", line 88, in test_get
request = self.factory.get(reverse('lookup'))
File "/Users/matthewdaly/Projects/gigfinder/venv/lib/python3.5/site-packages/django/core/urlresolvers.py", line 600, in reverse
return force_text(iri_to_uri(resolver._reverse_with_prefix(view, prefix, *args, **kwargs)))
File "/Users/matthewdaly/Projects/gigfinder/venv/lib/python3.5/site-packages/django/core/urlresolvers.py", line 508, in _reverse_with_prefix
(lookup_view_s, args, kwargs, len(patterns), patterns))
django.core.urlresolvers.NoReverseMatch: Reverse for 'lookup' with arguments '()' and keyword arguments '{}' not found. 0 pattern(s) tried: []
----------------------------------------------------------------------
Ran 3 tests in 0.154s
FAILED (errors=1)
Destroying test database for alias 'default'...

We can’t resolve the URL for our new view, so we need to add it to our URLconf. First of all, save this as gigs/urls.py:

from django.conf.urls import url
from gigs.views import LookupView
urlpatterns = [
# Lookup
url(r'', LookupView.as_view(), name='lookup'),
]

Then amend gigfinder/urls.py as follows:

from django.conf.urls import url, include
from django.contrib import admin
urlpatterns = [
url(r'^admin/', admin.site.urls),
# Gig URLs
url(r'', include('gigs.urls')),
]

Then run the tests:

$ python manage.py test gigs
Creating test database for alias 'default'...
.F.
======================================================================
FAIL: test_get (gigs.tests.LookupViewTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/matthewdaly/Projects/gigfinder/gigs/tests.py", line 90, in test_get
self.assertEqual(response.status_code, 200)
AssertionError: 405 != 200
----------------------------------------------------------------------
Ran 3 tests in 0.417s
FAILED (failures=1)
Destroying test database for alias 'default'...

We get a 405 response because the view does not accept GET requests. Let’s resolve that:

from django.shortcuts import render_to_response
from django.views.generic.base import View
class LookupView(View):
def get(self, request):
return render_to_response('gigs/lookup.html')

If we run our tests now:

$ python manage.py test gigs
Creating test database for alias 'default'...
.E.
======================================================================
ERROR: test_get (gigs.tests.LookupViewTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/matthewdaly/Projects/gigfinder/gigs/tests.py", line 89, in test_get
response = LookupView.as_view()(request)
File "/Users/matthewdaly/Projects/gigfinder/venv/lib/python3.5/site-packages/django/views/generic/base.py", line 68, in view
return self.dispatch(request, *args, **kwargs)
File "/Users/matthewdaly/Projects/gigfinder/venv/lib/python3.5/site-packages/django/views/generic/base.py", line 88, in dispatch
return handler(request, *args, **kwargs)
File "/Users/matthewdaly/Projects/gigfinder/gigs/views.py", line 6, in get
return render_to_response('gigs/lookup.html')
File "/Users/matthewdaly/Projects/gigfinder/venv/lib/python3.5/site-packages/django/shortcuts.py", line 39, in render_to_response
content = loader.render_to_string(template_name, context, using=using)
File "/Users/matthewdaly/Projects/gigfinder/venv/lib/python3.5/site-packages/django/template/loader.py", line 96, in render_to_string
template = get_template(template_name, using=using)
File "/Users/matthewdaly/Projects/gigfinder/venv/lib/python3.5/site-packages/django/template/loader.py", line 43, in get_template
raise TemplateDoesNotExist(template_name, chain=chain)
django.template.exceptions.TemplateDoesNotExist: gigs/lookup.html
----------------------------------------------------------------------
Ran 3 tests in 0.409s
FAILED (errors=1)
Destroying test database for alias 'default'...

We see that the template is not defined. Save the following as gigs/templates/gigs/includes/base.html:

<!DOCTYPE html>
<html>
<head>
<title>Gig finder</title>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="stylesheet" type="text/css" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/css/bootstrap.min.css"></link>
</head>
<body>
<h1>Gig Finder</h1>
<div class="container">
<div class="row">
{% block content %}{% endblock %}
</div>
</div>
<script src="https://code.jquery.com/jquery-2.2.2.min.js" integrity="sha256-36cp2Co+/62rEAAYHLmRCPIych47CvdM+uTBJwSzWjI=" crossorigin="anonymous"></script>
<script language="javascript" type="text/javascript" src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/js/bootstrap.min.js"></script>
</body>
</html>

And the following as gigs/templates/gigs/lookup.html:

{% extends "gigs/includes/base.html" %}
{% block content %}
<form role="form" action="/" method="post">{% csrf_token %}
<div class="form-group">
<label for="latitude">Latitude:</label>
<input id="id_latitude" name="latitude" type="text" class="form-control"></input>
</div>
<div class="form-group">
<label for="longitude">Longitude:</label>
<input id="id_longitude" name="longitude" type="text" class="form-control"></input>
</div>
<input class="btn btn-primary" type="submit" value="Submit" />
</form>
<script language="javascript" type="text/javascript">
navigator.geolocation.getCurrentPosition(function (position) {
var lat = document.getElementById('id_latitude');
var lon = document.getElementById('id_longitude');
lat.value = position.coords.latitude;
lon.value = position.coords.longitude;
});
</script>
{% endblock %}

Note the JavaScript to populate the latitude and longitude. Now, if we run our tests:

$ python manage.py test gigs
Creating test database for alias 'default'...
...
----------------------------------------------------------------------
Ran 3 tests in 1.814s
OK
Destroying test database for alias 'default'...

Success! We now render our form as expected. Time to commit:

$ git add gigs gigfinder
$ git commit -m 'Implemented GET handler'

Handling POST requests

Now we need to be able to handle POST requests and return the appropriate results. First, let’s write a test for it in our existing LookupViewTest class:

def test_post(self):
# Create venues to return
v1 = VenueFactory(name='Venue1')
v2 = VenueFactory(name='Venue2')
v3 = VenueFactory(name='Venue3')
v4 = VenueFactory(name='Venue4')
v5 = VenueFactory(name='Venue5')
v6 = VenueFactory(name='Venue6')
v7 = VenueFactory(name='Venue7')
v8 = VenueFactory(name='Venue8')
v9 = VenueFactory(name='Venue9')
v10 = VenueFactory(name='Venue10')
# Create events to return
e1 = EventFactory(name='Event1', venue=v1)
e2 = EventFactory(name='Event2', venue=v2)
e3 = EventFactory(name='Event3', venue=v3)
e4 = EventFactory(name='Event4', venue=v4)
e5 = EventFactory(name='Event5', venue=v5)
e6 = EventFactory(name='Event6', venue=v6)
e7 = EventFactory(name='Event7', venue=v7)
e8 = EventFactory(name='Event8', venue=v8)
e9 = EventFactory(name='Event9', venue=v9)
e10 = EventFactory(name='Event10', venue=v10)
# Set parameters
lat = 52.3749159
lon = 1.1067473
# Put together request
data = {
'latitude': lat,
'longitude': lon
}
request = self.factory.post(reverse('lookup'), data)
response = LookupView.as_view()(request)
self.assertEqual(response.status_code, 200)
self.assertTemplateUsed('gigs/lookupresults.html')

If we now run this test:

$ python manage.py test gigs
Creating test database for alias 'default'...
..F.
======================================================================
FAIL: test_post (gigs.tests.LookupViewTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/matthewdaly/Projects/gigfinder/gigs/tests.py", line 117, in test_post
self.assertEqual(response.status_code, 200)
AssertionError: 405 != 200
----------------------------------------------------------------------
Ran 4 tests in 1.281s
FAILED (failures=1)
Destroying test database for alias 'default'...

We can see that it fails because the POST method is not supported. Now we can start work on implementing it. First, let’s create a form in gigs/forms.py:

from django.forms import Form, FloatField
class LookupForm(Form):
latitude = FloatField()
longitude = FloatField()

Next, edit gigs/views.py:

from django.shortcuts import render_to_response
from django.views.generic.edit import FormView
from gigs.forms import LookupForm
from gigs.models import Event
from django.utils import timezone
from django.contrib.gis.geos import Point
from django.contrib.gis.db.models.functions import Distance
class LookupView(FormView):
form_class = LookupForm
def get(self, request):
return render_to_response('gigs/lookup.html')
def form_valid(self, form):
# Get data
latitude = form.cleaned_data['latitude']
longitude = form.cleaned_data['longitude']
# Get today's date
now = timezone.now()
# Get next week's date
next_week = now + timezone.timedelta(weeks=1)
# Get Point
location = Point(longitude, latitude, srid=4326)
# Look up events
events = Event.objects.filter(datetime__gte=now).filter(datetime__lte=next_week).annotate(distance=Distance('venue__location', location)).order_by('distance')[0:5]
# Render the template
return render_to_response('gigs/lookupresults.html', {
'events': events
})

Note that we’re switching from a View to a FormView so that it can more easily handle our form. We could render the form using this as well, but as it’s a simple form I decided it wasn’t worth the bother. Also, note that the longitude goes first - this caught me out as I expected the latitude to be the first argument.

Now, if we run our tests, they should complain about our missing template:

$ python manage.py test gigs
Creating test database for alias 'default'...
..E.
======================================================================
ERROR: test_post (gigs.tests.LookupViewTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/matthewdaly/Projects/gigfinder/gigs/tests.py", line 116, in test_post
response = LookupView.as_view()(request)
File "/Users/matthewdaly/Projects/gigfinder/venv/lib/python3.5/site-packages/django/views/generic/base.py", line 68, in view
return self.dispatch(request, *args, **kwargs)
File "/Users/matthewdaly/Projects/gigfinder/venv/lib/python3.5/site-packages/django/views/generic/base.py", line 88, in dispatch
return handler(request, *args, **kwargs)
File "/Users/matthewdaly/Projects/gigfinder/venv/lib/python3.5/site-packages/django/views/generic/edit.py", line 222, in post
return self.form_valid(form)
File "/Users/matthewdaly/Projects/gigfinder/gigs/views.py", line 31, in form_valid
'events': events
File "/Users/matthewdaly/Projects/gigfinder/venv/lib/python3.5/site-packages/django/shortcuts.py", line 39, in render_to_response
content = loader.render_to_string(template_name, context, using=using)
File "/Users/matthewdaly/Projects/gigfinder/venv/lib/python3.5/site-packages/django/template/loader.py", line 96, in render_to_string
template = get_template(template_name, using=using)
File "/Users/matthewdaly/Projects/gigfinder/venv/lib/python3.5/site-packages/django/template/loader.py", line 43, in get_template
raise TemplateDoesNotExist(template_name, chain=chain)
django.template.exceptions.TemplateDoesNotExist: gigs/lookupresults.html
----------------------------------------------------------------------
Ran 4 tests in 0.506s
FAILED (errors=1)
Destroying test database for alias 'default'...

So let’s create gigs/templates/gigs/lookupresults.html:

{% extends "gigs/includes/base.html" %}
{% block content %}
<ul>
{% for event in events %}
<li>{{ event.name }} - {{ event.venue.name }}</li>
{% endfor %}
</ul>
{% endblock %}

Now, if we run our tests, they should pass:

$ python manage.py test gigs
Creating test database for alias 'default'...
....
----------------------------------------------------------------------
Ran 4 tests in 0.728s
OK
Destroying test database for alias 'default'...

However, if we try actually submitting the form by hand, we get the error CSRF token missing or incorrect. Edit views.py as follows to resolve this:

from django.shortcuts import render_to_response
from django.views.generic.edit import FormView
from gigs.forms import LookupForm
from gigs.models import Event
from django.utils import timezone
from django.contrib.gis.geos import Point
from django.contrib.gis.db.models.functions import Distance
from django.template import RequestContext
class LookupView(FormView):
form_class = LookupForm
def get(self, request):
return render_to_response('gigs/lookup.html', RequestContext(request))
def form_valid(self, form):
# Get data
latitude = form.cleaned_data['latitude']
longitude = form.cleaned_data['longitude']
# Get today's date
now = timezone.now()
# Get next week's date
next_week = now + timezone.timedelta(weeks=1)
# Get Point
location = Point(longitude, latitude, srid=4326)
# Look up events
events = Event.objects.filter(datetime__gte=now).filter(datetime__lte=next_week).annotate(distance=Distance('venue__location', location)).order_by('distance')[0:5]
# Render the template
return render_to_response('gigs/lookupresults.html', {
'events': events
})

Here we’re adding the request context so that the CSRF token is available.

If you run the dev server, add a few events and venues via the admin, and submit a search, you’ll see that you’re returning events closest to you first.

Now that we can submit searches, we’re ready to commit:

$ git add gigs/
$ git commit -m 'Can now retrieve search results'

And we’re done! Of course, you may want to expand on this by plotting each gig venue on a map, or something like that, in which case there’s plenty of methods of doing so - you can retrieve the latitude and longitude in the template and use Google Maps to display them. I’ll leave doing so as an exercise for the reader.

I can’t say that working with GeoDjango isn’t a bit of a struggle at times, but being able to make spatial queries in this fashion is very useful. With more and more people carrying smartphones, you’re more likely than ever to be asked to build applications that return data based on someone’s geographical location, and GeoDjango is a great way to do this with a Django application. You can find the source on Github.

Writing faster Laravel tests

$
0
0

Nowadays, Laravel tends to be my go-to PHP framework, to the point that we use it as our default framework at work. A big part of this is that Laravel is relatively easy to test, making practicing TDD a lot easier.

Out of the box running Laravel tests can be quite slow, which is a big issue - if your test suite takes several minutes to run, that’s a huge disruption. Also, Laravel doesn’t create a dedicated test database - instead it runs the tests against the same database you’re using normally, which is almost always not what you want. I’ll show you how to set up a dedicated test database, and how to use an in-memory SQLite database for faster tests. This results in cleaner and easier-to-maintain tests, since you can be sure the test database is restored to a clean state at the end of every test.

Setup

Our first step is to make sure that when a new test begins, the following should happen:

  • We should create a new transaction
  • We should empty and migrate our database

Then, at the end of each test:

  • We should roll back our transaction to restore the database to its prior state

To do so, we can create custom setUp() and tearDown() methods for our base TestCase class. Save this in tests/TestCase.php:

<?php
class TestCase extends Illuminate\Foundation\Testing\TestCase
{
/**
* The base URL to use while testing the application.
*
* @var string
*/
protected $baseUrl = 'http://localhost';
/**
* Creates the application.
*
* @return \Illuminate\Foundation\Application
*/
public function createApplication()
{
$app = require __DIR__.'/../bootstrap/app.php';
$app->make(Illuminate\Contracts\Console\Kernel::class)->bootstrap();
return $app;
}
public function setUp()
{
parent::setUp();
DB::beginTransaction();
Artisan::call('migrate:refresh');
}
public function tearDown()
{
DB::rollBack();
parent::tearDown();
}
}

That takes care of building up and tearing down our database for each test.

EDIT: Turns out there’s actually a much easier way of doing this already included in Laravel. Just import and add either use DatabaseMigrations; or use DatabaseTransactions; to the TestCase class. The first will roll back the database and migrate it again after each test, while the second wraps each test in a transaction.

Using an in-memory SQLite database for testing purposes

It’s not always practical to do this, especially if you rely on database features in PostgreSQL that aren’t available in SQLite, but if it is, it’s probably worth using an in-memory SQLite database for your tests. If you want to do so, here’s some example settings you might want to use in phpunit.xml:

<env name="APP_ENV" value="testing"/>
<env name="CACHE_DRIVER" value="array"/>
<env name="DB_CONNECTION" value="sqlite"/>
<env name="DB_DATABASE" value=":memory:"/>

This can result in a very significant speed boost.

I would still recommend that you test against your production database, but this can be easily handed off to a continuous integration server such as Jenkins, since that way it won’t disrupt your workflow.

During TDD, you’ll typically run your tests several times for any change you make, so if they’re too slow it can have a disastrous effect on your productivity. But with a few simple changes like this, you can ensure your tests run as quickly as possible. This approach should also be viable for Lumen apps.

Viewing all 158 articles
Browse latest View live