Server Stuff 2: REST API

This post should probably have come before MQTT, since the work took place before I installed MQTT on my server, but MQTT caught my interest and I wanted to write it up as I went along. My first foray into the server portion of this project was to write a very basic (and, by competent node standards, probably bad) REST API.

The API is based on this boilerplate, which makes setup much easier. I'm going to go through the installation of the boilerplate and the configuration of Nginx, node, express, and the rest of the plumbing. After that, I'll discuss the layout of the API itself and the Mongo database.

TLDR: I don’t care about any of the plumbing!

Plumbing

Node

The first part of the plumbing is to install node. Node and NPM installation directions can be found for all kinds of environments, but I decided to clone the github repositories and build them both, from here and here. Node must be built and installed first (NPM relies on it), after which NPM can be installed.

Nginx

Nginx was my choice for a webserver because I haven't used it before, it's one of the top 3 or 4 webservers, and I already have experience with Apache (IIS is out of the question because I am on Ubuntu). Running a separate webserver is not strictly essential, since node itself is already a webserver, but there are reasons to pair node with a general-purpose webserver. One such reason is the same one described here: it lets you run multiple node services at the same time, all reachable on port 80. In addition, a webserver optimized for static content can serve static HTML files while node handles REST APIs and dynamic content.

You can read here about how to install Nginx on Ubuntu. The main thing we want to do is set up what is known as a reverse proxy, whereby requests come into Nginx on port 80 and Nginx decides which node instance to forward the traffic to. The node instances can run on any local port, like 3000, but still be reachable on port 80 externally through the reverse proxy. In my case, the API is reached at cm.XXXX.com/api/<endpoint>, and Nginx forwards those requests to the node instance behind it. I could also host any number of other subdomains for different projects, each backed by its own node server.
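To make the pairing concrete, the node side is just a server bound to a local port. Here's a minimal throwaway server (the port and response text are placeholders, not the boilerplate's code) you could use to check the proxy before wiring up the real app:

// throwaway.js - a stand-in node service for verifying the reverse proxy
const http = require('http');

const server = http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('node saw: ' + req.method + ' ' + req.url + '\n');
});

// 3000 is an arbitrary local port; it just has to match the nginx proxy_pass target
server.listen(3000, '127.0.0.1', () => {
  console.log('listening on http://127.0.0.1:3000');
});

Once the reverse proxy below is configured, a request to the subdomain on port 80 should return the same response, which confirms Nginx is forwarding correctly.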

To make the reverse proxy work, you have to configure it in Nginx. The config file, located at /etc/nginx/conf.d/cm.<domain>.<tld>.conf, should look something like this:

server {
    listen 80;
    server_name ~cm.<domain>.<tld>$;

    location / {
        proxy_pass http://localhost:<port>;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

After writing the config file, try a >sudo nginx -t to make sure the config files are OK, and then >sudo service nginx start. At this point Nginx should be good to go. Note: I snagged those settings from elsewhere, so there may be ones I didn't need or some useful ones I don't have.

DNS

To get subdomains working (which you may or may not care about), you have to have a CNAME DNS record, along with your normal nameserver records. You can read about CNAME records here.

Boilerplate

The boilerplate can be cloned from here. After cloning it, run >npm install to pull in some of the dependencies and then >bower install for the rest. You might want to install grunt-cli and bower globally (using npm for both, >npm install grunt-cli -g). That's almost all it takes to get the boilerplate set up. The one thing you have to customize is the port in server.js; make sure it agrees with your Nginx conf file.
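For illustration only (the boilerplate's server.js is organized its own way, so treat this as a sketch), the important part is simply that the port Express listens on matches the <port> in the nginx proxy_pass line:

// Sketch of the port wiring in server.js (names here are placeholders)
const express = require('express');
const app = express();

// Must agree with the <port> in the nginx conf's proxy_pass directive
const PORT = process.env.PORT || 3000;

app.listen(PORT, function () {
  console.log('API listening on local port ' + PORT);
});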

API

After setting all of that up, the API itself is pretty straightforward. The goal of this first version was simply to make it possible to submit device readings, query for readings, query devices, and query for a device's configuration. This version of the API is as flexible as possible, making it easy to quickly add a new device (with a configuration) and start submitting readings.

Devices, readings, and configurations are stored in the NoSQL database Mongo, which allows readings to take a flexible form, so I am not locked into a specific schema.
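To show what that flexibility means in practice (the connection string, database, and collection names below are made up for illustration), two readings with completely different fields can live in the same collection:

const { MongoClient } = require('mongodb');

async function main() {
  // Connection string and database/collection names are illustrative only
  const client = new MongoClient('mongodb://localhost:27017');
  await client.connect();
  const readings = client.db('sensordata').collection('readings');

  // No schema: each reading carries whatever fields the device reports
  await readings.insertOne({ key: 'XXXX', sensor: 'temperature', value: 21.4, units: 'C' });
  await readings.insertOne({ key: 'XXXX', sensor: 'soil', moisture: 512, battery: 3.7 });

  await client.close();
}

main().catch(console.error);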

API endpoints (a sketch of matching Express routes follows the list):

GET /api/devices
Returns: [deviceid1,deviceid2…]

GET /api/config/{KEY}
Returns: 1 record associated with that unique key

POST /api/search/{max_records}
Body: any JSON search object like {sensor: "temperature"} or {} for all readings
Returns: [{…},…,{…}]

POST /api/reading
Body: {key:XXXX, …}
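Here is a rough sketch of how those endpoints map onto Express routes. Everything below is illustrative rather than the actual code: the connection string, database and collection names, field names, and the default record cap are all assumptions.

const express = require('express');
const { MongoClient } = require('mongodb');

const app = express();
app.use(express.json());

// Connection string and database name are assumptions for this sketch
const client = new MongoClient('mongodb://localhost:27017');
const db = client.db('sensordata');

// GET /api/devices -> [deviceid1, deviceid2, ...]
app.get('/api/devices', async (req, res) => {
  const ids = await db.collection('devices').distinct('deviceid');
  res.json(ids);
});

// GET /api/config/{KEY} -> the single config record for that key
app.get('/api/config/:key', async (req, res) => {
  const config = await db.collection('configs').findOne({ key: req.params.key });
  res.json(config);
});

// POST /api/search/{max_records} -> readings matching an arbitrary JSON query
app.post('/api/search/:max', async (req, res) => {
  const limit = parseInt(req.params.max, 10) || 100;
  const results = await db.collection('readings').find(req.body).limit(limit).toArray();
  res.json(results);
});

// POST /api/reading -> store a reading (key check sketched in the next snippet)
app.post('/api/reading', async (req, res) => {
  await db.collection('readings').insertOne(req.body);
  res.sendStatus(201);
});

client.connect().then(() => app.listen(3000, '127.0.0.1'));

Passing req.body straight into find() is what keeps the search endpoint schema-agnostic: whatever JSON object the client sends becomes the Mongo query.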

There are definitely security holes in this, and it will surely be revised. The only "secure" portion of the configuration is the KEY: posting readings requires a pre-defined key (the only required field in the reading object). This is by no means fully secure, but it provides some protection against basic junk requests or hijacking while keeping development of a visualization page easy. The API can eventually be hardened by enabling SSL encryption, which will protect the KEY from being sniffed over the air or on the wire.
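Continuing the sketch above (again, collection and field names are assumptions), the key check can be a small middleware that rejects readings whose key isn't in the config collection:

// Sketch: require a known key before accepting a reading
async function requireKnownKey(req, res, next) {
  const key = req.body && req.body.key;
  if (!key) {
    return res.status(400).json({ error: 'missing key' });
  }
  const known = await db.collection('configs').findOne({ key: key });
  if (!known) {
    return res.status(403).json({ error: 'unknown key' });
  }
  next();
}

// Used in place of the bare handler from the previous sketch
app.post('/api/reading', requireKnownKey, async (req, res) => {
  await db.collection('readings').insertOne(req.body);
  res.sendStatus(201);
});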

This basic API should be enough to get started on the lower-level parts of the system (like the microcontroller and the sensors), while giving me something to compare against the other server-side technologies I'm trying (like the MQTT server).
