So, yesterday I had set myself the task of putting together the GUI for the collect and compute routes. The GUI, as described in yesterday’s post, would make use of the /api/v1/api_coverage route, which uses Flask’s internal application structure to track which functions have and have not been implemented. Simple enough.

Some preliminary testing reminded me that this route depends on the sspi_metadata database, which contains useful information like the list of all the indicators and the country groups. In particular, the /api/v1/api_coverage route uses the list of all indicator codes in sspi_metadata to return a JSON object with the appropriate boolean value next to each indicator. Currently this powers a little progress tracker, but the same function could be used to implement the GUI.
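
For the curious, here’s a minimal sketch of how a coverage route like that can work — the indicator codes and route paths below are made-up stand-ins, not the real SSPI API:

```python
# Hypothetical sketch: compare the indicator codes from metadata
# against the routes Flask actually has registered.
from flask import Flask, jsonify

app = Flask(__name__)

# Stand-in for the indicator list pulled from sspi_metadata
INDICATOR_CODES = ["BIODIV", "REDLST", "NITROG"]

@app.route("/api/v1/collect/BIODIV")
def collect_biodiv():
    return "ok"

@app.route("/api/v1/api_coverage")
def api_coverage():
    # url_map is Flask's internal record of every registered route
    implemented = {str(rule) for rule in app.url_map.iter_rules()}
    return jsonify({
        code: f"/api/v1/collect/{code}" in implemented
        for code in INDICATOR_CODES
    })
```

With only the BIODIV collector defined, the route reports BIODIV as implemented and the other two as missing — exactly the booleans a progress tracker (or a GUI) needs.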

The only problem was that the sspi_metadata file had not yet been loaded onto the Linode, which meant that the /api/v1/api_coverage route would not work until I got that data onto the server. In my local environment, I had written a janky little POST route set up to receive data I was sending via the httr library from R. The main issue with it, which I had run into a number of times over the last couple of months, was that it’s troublesome to get the R session to hold onto the login cookie assigned by flask-login long enough to POST the data to the appropriate @login_required route. This resulted in numerous headaches, so I’d been avoiding hammering out these issues for a little while.

But, it seemed, today was the day. To successfully POST to the server, I’d first need to get through the HTTP Basic Authentication that I set up with .htpasswd to protect the site during development. Unfortunately, I had forgotten that I’d put this protection in place and spent a few very confused minutes wondering why even my GET requests were receiving a 401 error from R and Postman even though my browser was working just fine.

Turns out, I’d set my browser to remember my HTTP login credentials, which was convenient for checking the site but proved very inconvenient indeed when it came time to try to deal with data. Well, only about fifteen minutes and a bunch of frustrated spamming down the drain. I’ll simply pass the Basic Authorization header with my request and be on my way….
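
(That header, for reference, is just base64-encoded credentials — the username and password here are obviously made up:)

```python
# HTTP Basic Authentication: the Authorization header value is
# "Basic " followed by base64("username:password").
import base64

def basic_auth_header(username, password):
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

# e.g. requests.get(url, headers=basic_auth_header("user", "pass"))
```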

403 Error. Sigh. Even with the appropriate authorization, the server is refusing to process the request. O…K? It’s around this time I finally wise up and check the actual error logs on the server. And, sure enough, ModSecurity has flagged some admittedly suspicious traffic from my IP address.

I was sitting on a bench by South Hall, right across from the Campanile. It was getting chilly as the sun was starting to set, and I was completely frustrated, so I figured I’d walk to my office. On the way, I worked the problem in my head. I really didn’t want to fiddle with ModSecurity: I configured it months ago and don’t want to disable or weaken any protections. So posting data in this way isn’t going to work. How to get the data onto the server then? Well, it’s all just in JSON files. Instead of posting them over the internet, I could just run a route that picks them up from the filesystem. After all, this is not an operation that needs to run regularly, and storing a local JSON copy on the machine might not be the worst redundancy in the world.

And thus it was settled. It’s really amazing how changing your physical environment can completely shift your perspective on an abstract problem. A math professor of mine once advised us that most mathematicians have their greatest insights after turning something over for a while and then stepping away from their desks for a walk. That definitely held true for me here.

So anyway, I got to my office and dashed off a few routes backing a set of buttons for loading and reloading static files into the appropriate databases. It works like this:

  • /api/v1, the page which will house all the GUI controls for the backend, makes an AJAX call to the /api/v1/local route under the heading “Local Data”.
  • /api/v1/local calls a function that checks for files in the local directory and harvests their names. These are used to populate and render a Jinja2 template serverside, equipped with the harvested names and a button for each file in the directory; the rendered fragment is slotted into the HTML of the main page when the AJAX request completes.
  • /api/v1/local/reload/<database> is a POST route operated by each of the buttons. The appropriate <database> argument gets dropped into the action field of each form when the template is rendered serverside. Clicking a button submits the form, which activates the route. The input string is handled safely by a mapping function that fetches the appropriate database client; all observations are then dropped and immediately reloaded from the local file.

The only thing left to do was to scp the copies of the JSON files from my machine over to the Linode, which was easy enough. A few minor filesystem snags along the way notwithstanding, the basic idea worked like a charm.
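
The reload logic boils down to something like the sketch below — a plain list stands in for a real database client, and the file names are hypothetical, but the whitelist-mapping idea is the point:

```python
# Sketch of the reload idea: a whitelist mapping from database name to
# client means the raw <database> URL argument can never reach anything
# that isn't explicitly listed.
import json
from pathlib import Path

def reload_database(database, clients, local_dir=Path("local")):
    """Drop and reload one database from its local JSON file."""
    client = clients.get(database)
    if client is None:
        return None  # unknown name: refuse rather than guess
    path = Path(local_dir) / f"{database}.json"
    observations = json.loads(path.read_text())
    client.clear()               # drop all existing observations...
    client.extend(observations)  # ...then reload from the local file
    return len(observations)
```

Anything not in the `clients` mapping is simply rejected, so a malicious path segment never touches the filesystem or a database.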

The actual deployment site is now a fully operational demo, in the sense that there are no major features left without an implementation. There’s a backend that collects, stores, and cleans data. There’s a front end that renders charts and tables and runs smoothly enough. And it’s all running on my Linode server, there for the .htpasswd-bearing world to see. Huzzah!