It took me a while to get Ajax calls working with Angular and Django REST. This is what finally got it done.

NOTE: The implementation below is not really REST. It is more intended to illustrate some workarounds and custom methods. If the client can adhere to REST, a lot of the code below won’t be needed. See the Django REST tutorial for a simpler implementation.

First, both frameworks use templates, and both use the same default syntax {{ }}. There are a couple of ways to make them play together. The easiest is to surround Angular code with Django’s {% verbatim %} tags.

{% verbatim %}
        <tr ng-repeat="dbconn in dbconns">
            <td>{{ dbconn.name }}</td>
            <td>{{ dbconn.schemaname }}</td>
            <td>{{ dbconn.hostname }}</td>
            <td>{{ dbconn.port }}</td>
            <td>{{ dbconn.username }}</td>
            <td><button ng-click="remove()">DELETE</button></td>
        </tr>
    </table>
{% endverbatim %}

When defining your Angular app, there are some necessary configurations to use. I’ll let the comments do the explaining.

var app = angular.module('myappname', ['ngCookies']).
    config([
    '$httpProvider',
    function($httpProvider) {
        // Change content type for POST so Django gets correct request object
        $httpProvider.defaults.headers.post['Content-Type'] = 'application/x-www-form-urlencoded';
        // 2 reasons: Allows request.is_ajax() method to work in Django
        // Also, so 500 errors are returned in responses (for debugging)
        $httpProvider.defaults.headers.common['X-Requested-With'] = 'XMLHttpRequest';
    }]).
    run([
    '$http',
    '$cookies',
    function($http, $cookies) {
        // Handles the CSRF token for POST
        $http.defaults.headers.post['X-CSRFToken'] = $cookies.csrftoken;
    }]);

The bottom line in the code above has a dependency on the angular-cookies.js file (in Angular’s standard bundle). So don’t forget to load that script before you load your Angular app.

<script src="{% static "myappname/js/angular-cookies.js" %}"></script>
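That X-Requested-With header is exactly what Django's request.is_ajax() looks for. A rough pure-Python sketch of the check (a plain dict stands in for request.META):

```python
def is_ajax(meta):
    # Django stores the X-Requested-With header in request.META
    # under the key HTTP_X_REQUESTED_WITH
    return meta.get('HTTP_X_REQUESTED_WITH') == 'XMLHttpRequest'

# A request sent with the $httpProvider defaults above:
print(is_ajax({'HTTP_X_REQUESTED_WITH': 'XMLHttpRequest'}))  # True
# A plain request without the header:
print(is_ajax({}))  # False
```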

JSON Data Exchange Example

Here is an example Ajax call method on the Angular Controller.

app.controller('MyAppController', function ($scope, $http){

    /* Post example */
    $scope.add = function(formdata) {
        // Add requesttype to data object
        formdata['requesttype'] = 'ADD';
        $http(
            {method: 'POST',
            url: '',
            data: $.param(formdata)
        }).
        success(function(data, status, headers, config) {
            // this callback will be called asynchronously
            // when the response is available
            $scope.dbconns = data['current_conns'];
        }).
        error(function(data, status, headers, config) {
            // called asynchronously if an error occurs
            // or server returns response with an error status.
            alert("Status: " + status + ", Data: " + data);
        });
    };
});
NOTE: There is a jQuery dependency to use $.param()
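For reference, $.param() produces an application/x-www-form-urlencoded body, the same format Python's standard library generates, and Django parses a body like this into request.POST. A sketch with hypothetical form values:

```python
from urllib.parse import urlencode

# Hypothetical form data, matching the ADD example above
formdata = {'requesttype': 'ADD', 'name': 'mydb'}

# This is what $.param(formdata) sends on the wire
body = urlencode(formdata)
print(body)  # requesttype=ADD&name=mydb
```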

views.py

The code below shows an example of using a custom 'requesttype' POST field to handle different types of Ajax calls in the same view function.

from django.shortcuts import render
from django.http import HttpResponse
from managedb.models import DatabaseConnection
from managedb.serializers import DatabaseConnectionSerializer
from rest_framework.renderers import JSONRenderer

def example_page(request):
    # Draw initial page
    if request.method == 'GET':
        return render(request, 'homepage.html', {})

    # Handle posts
    elif request.method == 'POST':

        # Make sure request is Ajax
        if request.is_ajax():

            # Handle DELETE requests
            if request.POST['requesttype'] == 'DELETE':

                # Retrieve and then delete object by id
                current_conn = DatabaseConnection.objects.get(id=request.POST['id'])
                current_conn.delete()

                # Get updated list of connections
                context = {}
                # Use custom ModelSerializer to get data from QuerySet
                context['current_conns'] = DatabaseConnectionSerializer(DatabaseConnection.objects.all(), many=True).data

                # Return response as JSON
                return HttpResponse(JSONRenderer().render(context), content_type='application/json')

            # Handle ADD requests
            elif request.POST['requesttype'] == 'ADD':
                # TODO - Do stuff here
                pass

            else:
                # TODO - Handle wrong request types here
                pass


    else:
        # TODO - Handle wrong request types here
        pass

    return HttpResponse('error')

serializers.py

And here is the class definition of the serializer used above.

from rest_framework import serializers
from managedb.models import DatabaseConnection

class DatabaseConnectionSerializer(serializers.HyperlinkedModelSerializer):
    class Meta:
        model = DatabaseConnection

        # Optional to include certain fields
        # Here we are not returning the password field so it should never be exposed
        fields = ('id', 'name', 'hostname', 'port', 'username', 'schemaname')
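Conceptually, the fields tuple is a whitelist: only those keys are emitted for each object, so the password column never leaves the server. A hand-rolled sketch of that filtering (plain dicts stand in for model instances; the sample data is made up):

```python
FIELDS = ('id', 'name', 'hostname', 'port', 'username', 'schemaname')

def serialize_connection(conn):
    # Emit only whitelisted fields -- password is deliberately absent
    return {field: conn[field] for field in FIELDS}

conn = {
    'id': 1, 'name': 'prod', 'hostname': 'db.example.com', 'port': 3306,
    'username': 'app', 'schemaname': 'main', 'password': 'secret',
}
print(serialize_connection(conn))  # no 'password' key in the output
```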

Troubleshooting – Force CSRF Cookie

A final note on CSRF Cookies when going live. When I first pushed the project live I encountered an issue where the cookie was not being sent and ended up throwing 403 Forbidden errors on my Ajax calls. I didn’t investigate the issue too much, but the solution I found was to add the @ensure_csrf_cookie decorator to the main view function serving the core Django template.


from django.views.decorators.csrf import ensure_csrf_cookie

@ensure_csrf_cookie
def runcustomquery(request):
    # view function body...

Django has some very useful error pages built in that can help with a lot of debugging. Unfortunately sometimes you need to dig into the code a little more at run time to track down errors, etc.

Here are some additional methods for debugging Django…

Using pdb

pdb is part of the Python standard library. It allows you to set breakpoints, inspect objects, etc.

In Django, start your local server by invoking pdb

python -m pdb manage.py runserver

In your code, add this line right before the line you want to set a breakpoint on

import pdb; pdb.set_trace()

At the breakpoint, you will see the command line terminal stop and a (Pdb) prompt will show up. From here you can enter an object’s name to inspect it. You can also run simple functions to see what they output.

For example:

(Pdb) request.is_ajax()
True

Just type c to continue on from a breakpoint.

This article is a work in progress…

The Amazon Web Services (AWS) website has instructions for doing this, but they are mostly tailored to doing it on a Linux machine. Here’s how to do it on Windows 7.

Prerequisites

  • Download and unzip the Elastic Beanstalk command line tools package (eb)
  • TIP: Unzip it into a directory so that there are no spaces in the full filepath. Spaces + command line = not fun.

    Add eb to the path

    set PATH=%PATH%;C:\Users\andygilbertson\AWS-ElasticBeanstalk-CLI-2.6.0\eb\windows\
  • Python 2.7
  • Make sure Git is installed
  • Have virtualenv installed
  • To install with PIP, here’s the command:

    pip install http://pypi.python.org/packages/source/v/virtualenv/virtualenv-1.7.2.tar.gz

    Steps

    Open up a command prompt (cmd.exe, not PowerShell)

    Create a virtual environment

    cd C:\
    virtualenv --python=C:\Python27\python.exe Users\andygilbertson\envs\django_aws_hello
    NOTE: The --python argument is optional. It specifies which Python installation the virtual environment is built from.
    NOTE: The slashes are reversed from what the Amazon documentation shows. The command won’t accept the path arguments unless you use Windows-style backslashes.

    This will create a mini Python installation in the directory. You can go there and see the files to confirm.

    Go into the newly created “Scripts” directory

    cd C:\Users\andygilbertson\envs\django_aws_hello\Scripts

    Then activate the virtual environment

    activate
    NOTE: This is where you don’t want to be using PowerShell. Virtualenv’s activate script is blocked by PowerShell’s script execution policies on Windows; cmd.exe doesn’t enforce them.

    Now you’ll see your virtual environment name in parentheses to indicate you are in your virtual environment
    Inside your virtual env, install Django and MySQL-python

    pip install django==1.6.2
    easy_install MySQL-python==1.2.5
    NOTE: Getting mysql-python to install correctly was way harder than it should have been. I think the problem is that on Windows this package has a Visual C++ dependency (it needs a C compiler), as noted by the developer. Installing it via easy_install rather than pip solved the issue.

    Start the Django project locally

    cd..
    django-admin.py startproject hellodjango

    Go into the newly created root folder for your local Django site

    cd hellodjango

    Create the requirements.txt file

    pip freeze > requirements.txt

    After freezing, requirements.txt should look like this:

    Django==1.6.2
    mysql-python==1.2.5

    Initialize the Git repository

    git init .

    Make sure you’re in the project directory

    cd C:\Users\andygilbertson\envs\django_aws_hello\hellodjango

    Run eb init

    eb init

    Now you’ll have to fill in a bunch of Elastic Beanstalk settings.

    Here’s the link if you need fresh access keys

    Settings to choose

    • Environment Tier: WebServer::Standard::1.0
    • Solution stack: 32bit Amazon Linux 2013.09 running Python 2.7
    • Environment type: LoadBalanced
    • Create an RDS DB instance: y

     

    On your local computer, create an .ebextensions directory in the top-level directory of your source bundle. In this example, we use /root/mysite/.ebextensions.

    mkdir .ebextensions

    Create a configuration file (myapp.config)

    container_commands:
      01_syncdb:    
        command: "django-admin.py syncdb --noinput"
        leader_only: true
    
    option_settings:
      - namespace: aws:elasticbeanstalk:container:python
        option_name: WSGIPath
        value: mysite/wsgi.py
      - option_name: DJANGO_SETTINGS_MODULE
        value: mysite.settings
      - option_name: AWS_SECRET_KEY
        value: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
      - option_name: AWS_ACCESS_KEY_ID
        value: AKIAIOSFODNN7EXAMPLE
    
    NOTES:
    • Make sure you fill in all four of the option_settings with your data.
    • Make sure your config file doesn’t have tab characters. Use only spaces for indenting

    Save this .config file in the newly created .ebextensions directory

    Edit your settings.py file (/root/mysite/settings.py). Add this for the DATABASES object:

    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.mysql',
            'NAME': os.environ['RDS_DB_NAME'],
            'USER': os.environ['RDS_USERNAME'],
            'PASSWORD': os.environ['RDS_PASSWORD'],
            'HOST': os.environ['RDS_HOSTNAME'],
            'PORT': os.environ['RDS_PORT'],
        }
    }
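Elastic Beanstalk injects those RDS_* values as environment variables on the instance. When running locally the variables don't exist and os.environ[...] raises KeyError, so a common variation is os.environ.get() with local fallbacks (the fallback values below are made-up examples):

```python
import os

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        # Fall back to hypothetical local dev values when the RDS_* vars are absent
        'NAME': os.environ.get('RDS_DB_NAME', 'localdb'),
        'USER': os.environ.get('RDS_USERNAME', 'root'),
        'PASSWORD': os.environ.get('RDS_PASSWORD', ''),
        'HOST': os.environ.get('RDS_HOSTNAME', '127.0.0.1'),
        'PORT': os.environ.get('RDS_PORT', '3306'),
    }
}
```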

    Add these files to the Git repository and commit the changes

    git add .
    git commit -m "eb configuration"
    git aws.push

    Now you can run eb start to actually initialize Elastic Beanstalk with all the settings

    eb start

    When it’s all up and running (it takes about 15 minutes) you should be able to get some status info

    eb status --verbose

    When the server status is Green, you should be able to grab the supplied URL and plug it into your browser. If everything went according to plan, you’ll see Django’s basic welcome page.

    If you see the Elastic Beanstalk starter page (not Django’s), you may have to run git aws.push again.

    git aws.push

    Troubleshooting

    One of the first places to look is the AWS Console. Go to Elastic Beanstalk and find your environment. On the left you’ll see “Logs”. Click “Snapshot Logs” and read through them to find errors.

    Closing Up Shop

    To stop the Elastic Beanstalk server/environment

    eb stop

    To delete the Elastic Beanstalk server/environment

    eb delete

    To deactivate your virtual environment

    cd C:\Users\andygilbertson\envs\django_aws_hello\Scripts
    deactivate

This doc is a work in progress…

Using Django version: 1.6.2

Basic Ajax Example

views.py

import json

from django.shortcuts import render
from django.http import HttpResponse

def basicajax(request):
    context = {}

    # Draw initial page
    if request.method == 'GET':
        return render(request, 'basic_ajax.html', context)

    # Handle posts
    elif request.method == 'POST':

        # Make sure request is Ajax
        if request.is_ajax():
            context['textbox_data'] = request.POST['textbox']
            context['hiddenfield_data'] = request.POST['hiddenfield']

            # Return response as JSON
            return HttpResponse(json.dumps(context), content_type='application/json')
    else:
        # Handle errors here
        pass
Notes

  • We are returning an HttpResponse, not a render
  • content_type='application/json' is necessary. Otherwise the Ajax success callback returns the data as a string
  • content_type used to be mimetype. Some documentation online still shows mimetype, but it is now deprecated; use content_type
  • The data that is passed in from the hidden fields can be used to separate different Ajax requests from the same page. Just give each Ajax form an identifier in a hidden field and use that field to handle the logic in views.py
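The last bullet's idea of routing on a hidden field boils down to a dispatch table in views.py. A sketch with hypothetical field and handler names (plain dicts stand in for request.POST):

```python
def handle_search(post):
    # Hypothetical handler for the search form
    return {'textbox_data': post['textbox']}

def handle_unknown(post):
    return {'error': 'unknown form'}

# Map each hidden-field identifier to its handler
HANDLERS = {'searchform': handle_search}

def route(post):
    handler = HANDLERS.get(post.get('formid'), handle_unknown)
    return handler(post)

print(route({'formid': 'searchform', 'textbox': 'hello'}))
print(route({'formid': 'mystery'}))
```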

basic_ajax.html

<html>
<head>
    <title>Basic Ajax Example</title>
    <script src="//ajax.googleapis.com/ajax/libs/jquery/2.1.0/jquery.min.js"></script>
</head>
<body>
    <form id="form_id1">
        <input type="text" name="textbox">
        <input type="submit" value="Search">
        <input type="hidden" name="hiddenfield" value="Cant see me">
        {% csrf_token %}
    </form>
    Textbox value: <span id="textbox_val"></span>, Hidden field value: <span id="hiddenfield_val"></span>
</body>
<script>
$(document).ready(function() {
    $("#form_id1").submit(function(event){
        // Disable submit button
        $("#form_id1").find('input[type=submit]').prop("disabled",true);

        $.ajax({
            type:"POST",
            url:"",
            data: $("#form_id1").serializeArray(), // Easy way to capture all data from form
            success: function(data) {

                // Fill in our output spans
                $("span#textbox_val").html(data['textbox_data']);
                $("span#hiddenfield_val").html(data['hiddenfield_data']);

                // Enable submit button
                $("#form_id1").find('input[type=submit]').prop("disabled",false);
            },
            error: function(xhr, textStatus, errorThrown) {
                alert("Please report this error: "+errorThrown+xhr.status+xhr.responseText);
            }
        });
        return false; //<---- so page doesn't reload
    });
});</script>
</html>
Notes

  • Imports jQuery at the top
  • url="", within the Ajax call to submit to same page
  • To fill the data in the Ajax call use serializeArray(), not serialize()
  • {% csrf_token %} must be present within each form

urls.py

from django.conf.urls import patterns, include, url
from hellodjango.views import *
urlpatterns = patterns('',
    url(r'^examples/basicajax/', basicajax),
)

A couple years ago I started delving into MySQL Events as a clean way to do some cronjob-like tasks (data aggregation, preprocessing, etc). Overall I’ve been pleased with their performance. However, like any automated, scheduled process, eventually something will not work correctly. When that happens you need to know when the issue started to occur, and then dig into the “why” so you can fix it.

Here’s how I track the performance of my scheduled Events. The general idea is to create a new table to store data about each scheduled Event that runs. Technically since a single Event can consist of multiple commands, I try to record a row for each command that is issued.

Log Table

CREATE TABLE `ScheduledEventLog` (
  `ID` int(11) NOT NULL AUTO_INCREMENT,
  `InsertTime` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `SourceScript` varchar(64) DEFAULT NULL,
  `StartTime` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
  `EndTime` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
  `RunDuration` int(11) DEFAULT NULL,
  `RowsAffected` int(11) DEFAULT NULL,
  PRIMARY KEY (`ID`),
  KEY `InsertTime_idx` (`InsertTime`),
  KEY `RunDuration_idx` (`RunDuration`),
  KEY `SourceScript_idx` (`SourceScript`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

Populating the Table

The trick here is to add a couple short commands surrounding each of our scheduled Events. Prior to each aggregation, we just use a session variable to store the database time. After each aggregation we calculate how long it took and record that in our table, along with the number of rows affected.

DELIMITER $$

CREATE EVENT `DailyAggregations` 
ON SCHEDULE EVERY 1 DAY STARTS '2014-01-21 01:01:01' 
ON COMPLETION NOT PRESERVE ENABLE 
COMMENT 'Aggregates data into Logins_PerDayAndUser' 
DO BEGIN

/*** LOG START TIME ***/
SET @starttime := NOW();
/*** LOG START TIME ***/

/*** COMMAND ***/
INSERT INTO `SchemaName`.`Logins_PerDayAndUser`
(`TheDate`,`UserID`,`Logins`)
	SELECT DATE(`Time`) AS TheDate, UserID, COUNT(*) AS Logins
	FROM `SchemaName`.`Logins`
	WHERE `Time` BETWEEN CAST(CONCAT(DATE_SUB(CURDATE(), INTERVAL 1 DAY), ' 00:00:00') AS DATETIME)
		AND CAST(CONCAT(DATE_SUB(CURDATE(), INTERVAL 1 DAY), ' 23:59:59') AS DATETIME)
	GROUP BY TheDate, UserID
	ON DUPLICATE KEY UPDATE
		`Logins`=VALUES(`Logins`);
/*** COMMAND ***/

/*** LOG END TIME ***/
INSERT LOW_PRIORITY INTO `SchemaName`.`ScheduledEventLog`
(`SourceScript`, `StartTime`, `EndTime`, `RunDuration`, `RowsAffected`)
	SELECT 'DailyAggregations_1.1' AS `SourceScript`
	, CAST(@starttime AS DATETIME) AS `StartTime`
	, NOW() AS `EndTime`
	, TIMESTAMPDIFF(SECOND,CAST(@starttime AS DATETIME),NOW()) AS `RunDuration`
	, `RowsAffected`
	FROM (SELECT ROW_COUNT() AS `RowsAffected`) AS R1;
/*** LOG END TIME ***/



/*** LOG START TIME ***/
SET @starttime := NOW();
/*** LOG START TIME ***/

/*** COMMAND ***/
INSERT INTO `SchemaName`.`Logins_PerDayAndUser`
(`TheDate`,`UserID`,`Logins`)
	SELECT DATE(`Time`) AS TheDate, UserID, COUNT(*) AS Logins
	FROM `SchemaName`.`Logins`
	WHERE `Time` BETWEEN CAST(CONCAT(DATE_SUB(CURDATE(), INTERVAL 2 DAY), ' 00:00:00') AS DATETIME)
		AND CAST(CONCAT(DATE_SUB(CURDATE(), INTERVAL 2 DAY), ' 23:59:59') AS DATETIME)
	GROUP BY TheDate, UserID
	ON DUPLICATE KEY UPDATE
		`Logins`=VALUES(`Logins`);
/*** COMMAND ***/

/*** LOG END TIME ***/
INSERT LOW_PRIORITY INTO `SchemaName`.`ScheduledEventLog`
(`SourceScript`, `StartTime`, `EndTime`, `RunDuration`, `RowsAffected`)
	SELECT 'DailyAggregations_1.2' AS `SourceScript`
	, CAST(@starttime AS DATETIME) AS `StartTime`
	, NOW() AS `EndTime`
	, TIMESTAMPDIFF(SECOND,CAST(@starttime AS DATETIME),NOW()) AS `RunDuration`
	, `RowsAffected`
	FROM (SELECT ROW_COUNT() AS `RowsAffected`) AS R1;
/*** LOG END TIME ***/

END$$
DELIMITER ;
NOTE: `SourceScript` is just a text field used to store a combination of the Event name and a unique number so we can track down performance of each individual aggregation.

 

Now that we are storing data from each command within each Event we have a lot of flexibility to track the performance. I won’t go into depth, but here are a few ideas:

  • Trigger an email if any command takes longer than X seconds
  • Report an issue if a command takes more than 100% longer than the recent average duration
  • Query for missing logs to find commands that didn’t finish successfully
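As a sketch of the first idea, once log rows are pulled out of ScheduledEventLog the alert check is a simple filter (the rows and threshold below are made up; the columns match the table above):

```python
THRESHOLD_SECONDS = 60

# Each tuple: (SourceScript, RunDuration in seconds) as read from ScheduledEventLog
rows = [
    ('DailyAggregations_1.1', 12),
    ('DailyAggregations_1.2', 140),
]

# Flag any command that ran longer than the threshold
slow = [(script, dur) for script, dur in rows if dur > THRESHOLD_SECONDS]
for script, dur in slow:
    print('ALERT: %s took %d seconds' % (script, dur))
```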

Recently I was fooling around with Chrome Extension development, mostly just to satisfy my curiosity about how it worked.

This article will show a basic template that illustrates one method of communication between two core components of a Chrome Extension. I don’t expect to be doing much more Extension development, but you never know, this might be useful in the future (or if some poor soul ends up reading this article).

TIP: This is just one method of communication. There are more, and some others might be better for your particular needs.

 

The basic idea of this Extension is to create a ticker that starts as soon as the extension is loaded (generally on browser load). The ticker increments every 5 seconds on its own. The user can click the Extension’s icon, loading a small extension popup that shows the same ticker amount and stays in sync with the icon’s ticker. It also has a button to manually increase the ticker by 10 ticks. Completely useless (I think?), but it hopefully illustrates a couple basic Chrome Extension concepts.

NOTE: This Extension uses Manifest Version 2. This is the most recent version at the time of publishing. More info on version differences.

 

Background page

An extension can have an invisible background page (it is optional). The background page is loaded as soon as the Extension is loaded into Chrome (generally when the browser first starts). This example uses both HTML and JavaScript files for the background page, but you can just use a JavaScript file.

NOTE: If you only want JavaScript for the background page (no HTML), the manifest.json file changes a bit. See documentation.

 

background.html

All our background.html file does is load our JavaScript file. I mainly included it here because it contains some DOM elements, allowing us to initialize some of our JavaScript code after the DOM is loaded.

<!doctype html>
<html>
  <head>
    <script src="background.js"></script>
  </head>
  <body>
  </body>
</html>

 

background.js

The background JavaScript is the one that creates and controls our ticker. The code is hopefully straightforward and well commented.

One thing to note is the listener that is started. More types of requests can easily be added by adding another case. Incoming requests can have arbitrary data attached to the request object (ie, request.data is used in one of the cases).


var masterTicker = {
	tickerInterval : 1000 * 5, // 5 seconds, in ms
	ticker : 0,
	incrementTicker: function() {
		masterTicker.ticker++;
		masterTicker.updateTicker();
	},
	addToTicker: function(amount) {
		masterTicker.ticker += amount;
		masterTicker.updateTicker();
	},
	updateTicker: function() {
		if (masterTicker.ticker==0) {
			chrome.browserAction.setBadgeText({text: ""});
		} else {
			chrome.browserAction.setBadgeText({text: "" + masterTicker.ticker});
		}
		// Send ticker amount to popup page
		chrome.extension.sendRequest({method: 'updateTicker', data: masterTicker.ticker}, function(response) {
		});
	},
};

// Start our listener
chrome.extension.onRequest.addListener(function(request, sender, sendResponse) {
	switch (request.method) 
	{
	case 'addToMasterTicker':
		masterTicker.addToTicker(request.data);
		sendResponse({result: 'success'});
		break;
	case 'getMasterTicker':
		sendResponse({result: '' + masterTicker.ticker});
		break;
	}
});

// Run our script as soon as the document's DOM is ready.
document.addEventListener('DOMContentLoaded', function () {
	// Start the interval for updating the ticker
	setInterval(function(){masterTicker.incrementTicker()},masterTicker.tickerInterval);
});

Also note the masterTicker.updateTicker function. This contains the syntax for sending a request to the popup.js. When we get to popup.js, you’ll see a corresponding listener that handles that type of request.
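The request/response protocol itself is just a dispatch on request.method with an arbitrary data payload. Modeled in plain Python (dicts stand in for Chrome's request and response objects):

```python
ticker = {'value': 0}

def on_request(request):
    # Mirrors the switch in the background.js listener
    if request['method'] == 'addToMasterTicker':
        ticker['value'] += request['data']
        return {'result': 'success'}
    if request['method'] == 'getMasterTicker':
        return {'result': str(ticker['value'])}
    return None  # unrecognized method: no response

print(on_request({'method': 'addToMasterTicker', 'data': 10}))  # {'result': 'success'}
print(on_request({'method': 'getMasterTicker'}))                # {'result': '10'}
```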

 

Popup page

popup.html

This file is also very simple.

<!doctype html>
<html>
  <head>
    <title>Basic Popup</title>
    <!-- JavaScript and HTML must be in separate files. -->
	<script src="popup.js"></script>
	
	<!-- A local copy of jQuery is needed -->
	<script src="jquery-2.1.0.min.js"></script>
  </head>
  <body>
		<div>Current Ticker: <span id="ticker-amount"></span></div>
		<button id="add-ten-to-ticker">Add 10</button>
  </body>
</html>

 

popup.js

The popup.js version of the ticker stores a local copy of its own ticker, but the initial value is grabbed from background.js (on page load). After that it should always be getting the latest ticker value from background.js.

Notice the similar pattern of registering a listener that handles certain request.method values, and of using chrome.extension.sendRequest to initiate its own communication.

var localTicker = {
	currentTicker : 0,
	getTicker: function() {
		// Get official ticker amount from background page
		chrome.extension.sendRequest({method: 'getMasterTicker'}, function(response) {
			// Store ticker amount
			localTicker.currentTicker = response.result;
			// Update the DOM with the result
			localTicker.updateTicker();
		});
	},
	addToTicker: function(amount) {
		// Request X amount to be added to ticker
		chrome.extension.sendRequest({method: 'addToMasterTicker', data:amount}, function(response) {
			// Get new official ticker amount
			localTicker.getTicker();
		});
	},
	updateTicker: function() {
		// Update the DOM to show the ticker value
		$( "#ticker-amount" ).text("" + localTicker.currentTicker);
	},
};

// Add listener to receive messages from background page
chrome.extension.onRequest.addListener(function(request, sender, sendResponse) {
	switch (request.method) 
	{
	case 'updateTicker':
		localTicker.currentTicker = request.data;
		localTicker.updateTicker();
		sendResponse({result: 'success'});
		break;
	}
});

// Run our script as soon as the document's DOM is ready.
document.addEventListener('DOMContentLoaded', function () {
	// Grab current ticker amount from background
	localTicker.getTicker();
	
	// Attach onclick event to button
	$("#add-ten-to-ticker").click( function() {
		localTicker.addToTicker(10);
	});
});

That’s really the meat and potatoes of showing this method of communication between background.js and popup.js.

 

The rest of the Extension

I’ll briefly go through the remaining parts of the extension and how to get it up and running.

 

manifest.json

Every Chrome Extension needs a manifest.json file. Here is what ours looks like.

{
  "manifest_version": 2,

  "name": "Basic Ticker Extension",
  "description": "Demonstrates basic communication between background.js and popup.js",
  "version": "1.0",

  "browser_action": {
    "default_icon": "icon.png",
    "default_popup": "popup.html"
  },
  "background" : {"page": "background.html"}
}

 

Other files

You need 2 more files:

  • icon.png: A 19 x 19 image file
  • Local version of any JavaScript files you need (ie, jQuery)
Full file list

File list for extension

 

Loading and testing Extension

Getting the Extension working in your browser is very easy.

  1. Go to your Extensions console (via the upper right button or chrome://extensions/ in your address box).
  2. Make sure Developer mode is turned on.
  3. Click Load unpacked extension… near the top of the page.
  4. Select the folder on your hard drive containing all of the Extension files

Now you should see the icon of your Extension in the upper right and a new entry in the Extensions list.

NOTE: To debug your background page, use the Inspect views: background.html link.

SUMPRODUCT is a very useful and flexible function in Excel, but it also has one of the trickier syntaxes to get right. I use this function often, and concise reference articles have been hard to find.

NOTE: This is for Excel 2010. Other versions may work differently.

 

SUMPRODUCT Rules:

  • All arrays/ranges used must be the same length
  • Text and blank cells in the arrays are treated as zeroes
  • Any errors in the arrays will result in an error

 

Multiply Horizontally, Sum Result

The most basic usage just takes two arrays (ranges) and multiplies each row horizontally, and then adds up the total.
Basic SUMPRODUCT usage

FORMULA:
=SUMPRODUCT(C2:C8,D2:D8)
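The arithmetic here is an element-wise multiply followed by a sum. In Python terms (the column values are made up):

```python
# Hypothetical contents of C2:C8 and D2:D8
quantities = [2, 3, 1, 5, 4, 2, 6]
prices = [10, 20, 5, 8, 12, 30, 7]

# SUMPRODUCT(C2:C8, D2:D8): multiply each row pair, then total the products
total = sum(q * p for q, p in zip(quantities, prices))
print(total)  # 275
```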

 

Match Criteria, Sum Matches (Like SUMIF)

This usage matches according to cell reference. The match is translated into a boolean which is then multiplied horizontally (so non-matches are multiplied by zero).
SUMIF SUMPRODUCT usage

There are two ways to write this expression.
FORMULAS:

=SUMPRODUCT((B$2:B$8=B11)*C$2:C$8)
=SUMPRODUCT(--(B$2:B$8=B11),C$2:C$8)

 

Match Several Criteria, Sum Matches (Like SUMIFS)

Similar to above but you can match multiple criteria like SUMIFS.
SUMIFS SUMPRODUCT usage

FORMULAS:

=SUMPRODUCT((B$2:B$8=B11)*(C$2:C$8=C11)*D$2:D$8)
=SUMPRODUCT(--(B$2:B$8=B11),--(C$2:C$8=C11),D$2:D$8)

 

Match Several Criteria, Multiply Matching Rows Across

Building on the above uses, instead of just summing matching rows, we can use SUMPRODUCT for a more unique purpose (summing several multiplications at once) that would be hard to emulate with other functions.
SUMPRODUCT - multiple criteria

FORMULAS:

=SUMPRODUCT((B$2:B$8=B12)*(C$2:C$8=C12)*D$2:D$8*E$2:E$8)
=SUMPRODUCT(--(B$2:B$8=B12),--(C$2:C$8=C12),D$2:D$8,E$2:E$8)

 

Count Matching Rows

It’s worth mentioning that we can just stick to boolean evaluations of matching criteria and come up with a simple count of matching rows.
SUMPRODUCT count matching rows

FORMULAS:

=SUMPRODUCT((B$2:B$8=B12)*(C$2:C$8=C12))
=SUMPRODUCT(--(B$2:B$8=B12),--(C$2:C$8=C12))

 

Weighted Averages

SUMPRODUCT also lets us compute a weighted average all within one cell. Let’s say Kingdoms can vote on a new feature. Each Kingdom must vote as a whole, so one vote per Kingdom. However, if a Kingdom has more members we want its vote to count for more (ie, Electoral College).
SUMPRODUCT weighted average

FORMULA:

=SUMPRODUCT(B2:B8,C2:C8) / SUM(B2:B8)
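In Python terms, this is a dot product divided by the sum of the weights (the member counts and votes below are made up):

```python
members = [120, 45, 300, 80]  # weight of each Kingdom (B column)
votes = [1, 0, 1, 1]          # each Kingdom's yes/no vote (C column)

# SUMPRODUCT(B,C) / SUM(B): member-weighted share of "yes" votes
weighted = sum(m * v for m, v in zip(members, votes)) / sum(members)
print(round(weighted, 3))
```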

I was recently tasked with optimizing a table for a rather tricky query. A typical query went something like this:


SELECT  `ID` FROM `ExistingUsers`
WHERE  `EntryType` =  'NEW'
AND  `Version` =  'long.version.string'
AND 
(
	(
		`IDTypeOne` =  '4e46552376ef6ab7'
		AND  `IDTypeOne` IS NOT NULL 
	)
	OR 
	(
		`IDTypeTwo` =  '860525021806844'
		AND  `IDTypeTwo` IS NOT NULL 
	)
	OR 
	(
		`IDTypeThree` =  'g2r376ef6ab7'
		AND  `IDTypeThree` IS NOT NULL 
	)
);

The query’s purpose is to look into the `ExistingUsers` table to find whether we already have a matching entry for this user. A qualifying matching entry would satisfy both the `EntryType` and `Version` columns, and at least one of the three possible `IDTypeXXX` columns.

For this particular problem, optimizing the query was not an option, and cleaning the data in the table wasn’t either. So I was left with optimizing the table itself.

Here’s what the CREATE TABLE showed:


CREATE TABLE `ExistingUsers` (
  `ID` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `EntryType` enum('NEW','EXISTING') DEFAULT NULL,
  `Version` varchar(64) DEFAULT NULL,
  `IDTypeOne` varchar(128) DEFAULT NULL,
  `IDTypeTwo` varchar(64) DEFAULT NULL,
  `IDTypeThree` varchar(64) DEFAULT NULL,
  `Time` datetime DEFAULT NULL,
  PRIMARY KEY (`ID`),
  UNIQUE KEY `Uniq_idx` (`EntryType`,`IDTypeOne`,`IDTypeTwo`,`IDTypeThree`,`Version`),
  KEY `Time` (`Time`)
) ENGINE=InnoDB AUTO_INCREMENT=3816978 DEFAULT CHARSET=utf8

A relatively small table of fewer than 4 million rows (actually fewer still, since not every auto increment resulted in a new row). Each time this query ran it took between 2.5 and 5 seconds.

Here’s what EXPLAIN EXTENDED showed:


id  select_type  table          type  possible_keys  key        key_len  ref
1   SIMPLE       ExistingUsers  ref   Uniq_idx       Uniq_idx   2        const

rows    filtered  Extra
1894887 100       Using where; Using index

The important number here is rows. MySQL is expecting to examine about half of all rows in the table. That’s telling me the index is not doing enough of the heavy lifting.

The first thing I noticed is that our query always matches directly against two columns, and then against three more columns that are all “OR”ed together. The unique index might be better ordered with those two hard matches first, followed by the three OR columns. From this:

`Uniq_idx` (`EntryType`,`IDTypeOne`,`IDTypeTwo`,`IDTypeThree`,`Version`)

to this:

`Uniq_idx` (`EntryType`,`Version`,`IDTypeOne`,`IDTypeTwo`,`IDTypeThree`)
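Reordering the columns of an existing unique index means dropping and re-creating it. Something along these lines should do it (a sketch; rebuilding a unique index on a few million rows takes a while, so run it during a quiet period):

```sql
ALTER TABLE `ExistingUsers`
  DROP KEY `Uniq_idx`,
  ADD UNIQUE KEY `Uniq_idx` (`EntryType`,`Version`,`IDTypeOne`,`IDTypeTwo`,`IDTypeThree`);
```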

Here is EXPLAIN EXTENDED after this change:


id  select_type  table          type  possible_keys  key        key_len  ref
1   SIMPLE       ExistingUsers  ref   Uniq_idx       Uniq_idx   197      const,const

rows    filtered  Extra
78550   100.00    Using where; Using index

This change helped performance quite a bit: the number of rows expected to be examined is about 5% of what it was before. Also note the ref column shows const,const, telling us two constant values are being matched against the index.

After this change, if there was a matching row to return, the query took around 0.03 secs. Definitely acceptable. However, if there was no match to return, there was no improvement in run time at all. I would expect that roughly 10% of the time this query runs, it would not have a row to match against.

So that leaves one final optimization to try. Can we somehow create one or more indexes that work for the other three columns examined in the “OR” clauses of the query? The answer lies in a MySQL optimization called Index Merge. To take advantage of it, we index each of the columns examined by the “OR” clauses separately.

This command should do it:


ALTER TABLE `ExistingUsers`
ADD KEY `IDTypeOne_idx` (`IDTypeOne`),
ADD KEY `IDTypeTwo_idx` (`IDTypeTwo`),
ADD KEY `IDTypeThree_idx` (`IDTypeThree`);

Now let’s look at an explain of our query:


id  select_type  table          type         possible_keys  
1   SIMPLE       ExistingUsers  index_merge  Uniq_idx,IDTypeOne_idx,IDTypeTwo_idx,IDTypeThree_idx

key                                          key_len      ref
IDTypeOne_idx,IDTypeTwo_idx,IDTypeThree_idx  387,195,195  null

rows  filtered  Extra
3     100.00    Using union(IDTypeOne_idx,IDTypeTwo_idx,IDTypeThree_idx); Using where

EXPLAIN is telling us it expects to examine only three rows. That’s looking much better. We also see the type column showing index_merge.

MySQL documentation states:

The Index Merge method is used to retrieve rows with several range scans and to merge their results into one.

In possible_keys we see the index that was being used before, along with the three new indexes. In key we see that the optimizer is planning on using the three new indexes together and not even using the previous index.

Also note the Extra column is telling us it is planning on using the union flavor of the index merge optimization.

Again from MySQL documentation:

The [Union] algorithm can be employed when the table’s WHERE clause was converted to several range conditions on different keys combined with OR

There are some special cases and restrictions on MySQL’s use of the Index Merge optimization. I won’t go into any more detail here, but if you’re having a hard time getting indexes to work on a query containing multiple or nested AND/OR clauses, the Index Merge optimization might be what you’re looking for.

Just want to document some of the Excel functions I end up using fairly frequently but find time-consuming to reproduce.

Pull Content from Another Sheet by Matching Top Row and Left Column

Basic

=HLOOKUP(TOPROWCELL,'SOURCESHEET'!$1:$100000,MATCH(LEFTREFERENCECELL,'SOURCESHEET'!$A:$A,0),FALSE)

W/ Error Checking

=IFERROR(HLOOKUP(TOPROWCELL,'SOURCESHEET'!$1:$100000,MATCH(LEFTREFERENCECELL,'SOURCESHEET'!$A:$A,0),FALSE),"")

W/ “null” and Error Checking

=IFERROR(IF(HLOOKUP(TOPROWCELL,'SOURCESHEET'!$1:$100000,MATCH(LEFTREFERENCECELL,'SOURCESHEET'!$A:$A,0),FALSE)="null","",HLOOKUP(TOPROWCELL,'SOURCESHEET'!$1:$100000,MATCH(LEFTREFERENCECELL,'SOURCESHEET'!$A:$A,0),FALSE)),"")
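In plain terms, the HLOOKUP/MATCH combination finds the column whose header matches one cell, finds the row whose first column matches another, and returns the intersection; IFERROR blanks out failed lookups. A rough Python equivalent of that logic (the sheet data below is made up purely for illustration):

```python
# Rough Python equivalent of HLOOKUP + MATCH with an IFERROR fallback.
# The sheet data is illustrative only.
sheet = [
    ["",     "Q1", "Q2"],   # top (header) row
    ["east", 100,  110],
    ["west", 200,  210],
]

def lookup(top_value, left_value):
    try:
        col = sheet[0].index(top_value)   # HLOOKUP: find the matching column
    except ValueError:
        return ""                         # IFERROR: no such header
    for row in sheet[1:]:
        if row[0] == left_value:          # MATCH: find the matching row
            return row[col]
    return ""                             # IFERROR: no such row label

print(lookup("Q2", "west"))  # -> 210
```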

Using SUMPRODUCT

=SUMPRODUCT(--('SOURCESHEET'!$A$2:$A$999999=B$1),--('SOURCESHEET'!$B$2:$B$999999=$A2),'SOURCESHEET'!$G$2:$G$999999)

or

=SUMPRODUCT(('SOURCESHEET'!$A$2:$A$999999=B$1)*('SOURCESHEET'!$B$2:$B$999999=$A2)*'SOURCESHEET'!$G$2:$G$999999)

Function requirements:

  • ALL evaluation/comparison ranges need to cover exactly the same rows
  • Evaluation/comparison ranges should have valid, numeric data in all cells
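What SUMPRODUCT is doing here can be sketched in Python: each comparison yields a 1-or-0 flag per row, the flags are multiplied by the value column, and the products are summed, so only rows matching every criterion contribute their value (the column data below is made up for illustration):

```python
# SUMPRODUCT sketch: sum column G where column A and column B both match.
# Data is illustrative only.
col_a = ["red", "red", "blue", "red"]
col_b = ["sm",  "lg",  "sm",   "sm"]
col_g = [10,    20,    30,     40]

total = sum(
    (a == "red") * (b == "sm") * g   # each comparison acts as a 1/0 flag
    for a, b, g in zip(col_a, col_b, col_g)
)
print(total)  # -> 50 (rows 1 and 4 match both criteria)
```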

This post is just to document what worked for me to get a very basic implementation of a Flask API up and running with Amazon Web Services (AWS) Elastic Beanstalk. For this I was using Flask version 0.10.1.

The main Python code comes from this Flask tutorial.

 

Elastic Beanstalk set up

First create a new Elastic Beanstalk application in AWS dashboard.

Configuration Settings

Environment Type

  • Environment tier: Web Server
  • Predefined configuration: Python (2.7)
  • Environment type: Load balancing, autoscaling

Application Version

  • Application source: Sample application (for now)

Additional Resources

  • Create RDS DB instance? (not needed)
  • Create environment inside a VPC? (not needed)

Configuration Details

  • Instance type: t1.micro (free)
  • EC2 key pair: none for now

Go ahead and create the environment. It takes 5-10 minutes to start up.

After it starts you’ll see the URL of your sample application near the top of the dashboard (circled yellow). You can visit this link to verify the environment started up correctly.

URL to sample application

 

application.py

Here is Python code for our basic API:

import flask

base_url = 'flaskapitest-env.elasticbeanstalk.com'

application = flask.Flask(__name__)

# Set application.debug = True to enable tracebacks in Beanstalk log output.
# Make sure to remove this line before deploying to production.
application.debug = True

tasks = [
    {
        'id': 1,
        'title': u'Buy groceries',
        'description': u'Milk, Cheese, Pizza, Fruit, Tylenol',
        'done': False
    },
    {
        'id': 2,
        'title': u'Learn Python',
        'description': u'Need to find a good Python tutorial on the web',
        'done': False
    }
]

@application.errorhandler(400)
def bad_request(error):
    return flask.make_response(flask.jsonify( { 'error': 'Bad request' } ), 400)

@application.errorhandler(404)
def not_found(error):
    return flask.make_response(flask.jsonify( { 'error': 'Not found' } ), 404)

@application.route('/')
def hello_world():
    return "Hello world!"

@application.route('/todo/api/v1.0/tasks', methods = ['GET'])
def get_tasks():
    return flask.jsonify( { 'tasks': map(make_public_task, tasks) } )

def make_public_task(task):
    new_task = {}
    for field in task:
        if field == 'id':
            new_task['uri'] = base_url + '/todo/api/v1.0/task/' + str(task[field])
        else:
            new_task[field] = task[field]
    return new_task


if __name__ == '__main__':
    application.run(host='0.0.0.0', debug=True)

Make sure to replace the base_url variable with the URL of your application that was pointed out above.
Save this code as “application.py”.

 

requirements.txt

You’ll also need a text file called requirements.txt. Save this file in the same directory as application.py.
The contents of requirements.txt (one liner):

Flask==0.10.1

 

Zip, Upload, Deploy

When both these files exist in the same folder, select and zip both of them into the same zip archive.

NOTE: Do not zip the folder itself, zip the files inside of the folder.
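If you’d rather script this step than zip by hand, Python’s zipfile module can build the archive. This is only a sketch: the archive name is arbitrary, and the placeholder-creation loop just lets it run anywhere; in practice you’d run it next to your real application files.

```python
import os
import zipfile

# Create placeholder files if they're missing so this sketch runs anywhere;
# in a real project these are your actual application files.
for name in ("application.py", "requirements.txt"):
    if not os.path.exists(name):
        open(name, "w").close()

# Zip the files themselves, not a containing folder, so they sit at the
# archive root where Elastic Beanstalk expects them.
with zipfile.ZipFile("flask-app.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    zf.write("application.py")
    zf.write("requirements.txt")
```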

Now go into the Elastic Beanstalk dashboard and use the “Upload and Deploy” button to upload the zip archive.

After the dashboard comes up as green, we’re ready for a quick test.

 

Test with cURL

On Windows, open up a command line window. Make sure you have cURL installed and on your PATH.

This command should return “Hello world!”:

curl -i http://testflaskapp-env.elasticbeanstalk.com

This command should return the short task list:

curl -i http://testflaskapp-env.elasticbeanstalk.com/todo/api/v1.0/tasks
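Given the tasks list in application.py, the body of that second response should look something like this (field order and whitespace may differ, and the uri values depend on what you set base_url to):

```json
{
  "tasks": [
    {
      "description": "Milk, Cheese, Pizza, Fruit, Tylenol",
      "done": false,
      "title": "Buy groceries",
      "uri": "flaskapitest-env.elasticbeanstalk.com/todo/api/v1.0/task/1"
    },
    {
      "description": "Need to find a good Python tutorial on the web",
      "done": false,
      "title": "Learn Python",
      "uri": "flaskapitest-env.elasticbeanstalk.com/todo/api/v1.0/task/2"
    }
  ]
}
```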