Author: Anton Savchenko (Software Engineer)

Level: Beginner

Description: knows JavaScript, getting started with browser extension development

window.location is an object that contains information about the document URL and also exposes methods for modifying it. There is a known bug in Firefox when retrieving the location.hash property. This property returns the portion of the URL following the hash symbol (#); however, unlike in other browsers, and unlike any other location property, the returned portion of the URL is automatically decoded.

For example, loading a URL such as http://example.com/?q=Fire%20fox#Fire%20fox in Firefox results in:

  • hash=#Fire fox
  • search=?q=Fire%20fox

To simulate the same behavior as the other browsers, the hash (minus its leading #) should be passed to the encodeURIComponent method. For more information on the location property, check out the full documentation here.

To reiterate: this issue only exists in Firefox, and it's a known bug. It's likely that the behavior of the location.hash property will change in the future to match the other location properties.

Code example (apply this for Firefox only):

var hash = window.location.hash; // e.g. "#Fire fox" in Firefox
var correctHash = '#' + encodeURIComponent(hash.slice(1)); // "#Fire%20fox"
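To see the fix in isolation, here is a minimal, browser-independent sketch. The hash value is hard-coded to the decoded string Firefox would report, and normalizeHash is a hypothetical helper name introduced for illustration:

```javascript
// Firefox reports location.hash already decoded, e.g. "#Fire fox".
var firefoxHash = '#Fire fox';

// Re-encode everything after the leading '#' so the value matches
// what other browsers report for the same URL.
function normalizeHash(hash) {
  return '#' + encodeURIComponent(hash.slice(1));
}

console.log(normalizeHash(firefoxHash)); // "#Fire%20fox"
```

Note that the whole hash is not passed to encodeURIComponent directly: that would also encode the leading # itself, producing "%23Fire%20fox" rather than the form other browsers report.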

Author: Anton Savchenko (Software Engineer)

Level: Intermediate

Description: knows their way around Chrome and Firefox extension development

JavaScript provides the initMouseEvent method for initializing and firing mouse click events. Although the syntax is the same across Chrome and Firefox, the behavior is slightly different. The documentation states that all parameters are mandatory, and Firefox spews out error messages if any are omitted; Chrome, however, will silently continue, passing in defaults for the missing parameters.

Developers should be cautious of this and provide all parameters to prevent unexpected behavior and ensure cross-browser functionality.

event.initMouseEvent(type, canBubble, cancelable, view,
                     detail, screenX, screenY, clientX, clientY,
                     ctrlKey, altKey, shiftKey, metaKey,
                     button, relatedTarget);

The full documentation for initMouseEvent is available on the Mozilla Developer Network.

Example: creating a mouse event

var click = document.createEvent('MouseEvents');
click.initMouseEvent('click', true, true, window, 0, 0, 0, 0, 0,
                     false, false, false, false, 0, null);
// Fire it on the element you want to click:
// element.dispatchEvent(click);

As usual, IE keeps us in business by being a completely different story…

Author: Amir Nathoo (Co-founder)

Level: Intermediate

Description: knows their way around Chrome and Firefox extension development

Google Chrome has a built-in API call that lets extensions take screenshots of the pages users see:


In fact, there’s even a code sample showing how to do this in the Google Chrome Extension documentation.
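The call in question is chrome.tabs.captureVisibleTab. Here's a hedged sketch of how it's used; the wrapper function takes the tabs API as a parameter purely so the logic can be exercised outside the browser, and captureCurrentTab is a hypothetical helper name:

```javascript
// Capture the visible area of the current tab as a PNG data URL.
// Inside a real extension you would pass chrome.tabs as tabsApi,
// and the manifest needs the "tabs" permission.
function captureCurrentTab(tabsApi, done) {
  tabsApi.captureVisibleTab(null, { format: 'png' }, function (dataUrl) {
    // dataUrl is a string like "data:image/png;base64,..."
    done(dataUrl);
  });
}
```

In an extension's background page this would simply be called as captureCurrentTab(chrome.tabs, callback), with null meaning "the current window".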

It’s a little trickier on Firefox, but perfectly possible using a canvas element in your overlay XUL. Here’s how. First, add this element:

<html:canvas id="my-canvas" style="display: none;" />

Then, in your overlay JavaScript, listen for new document loads; this snippet will create a data URL containing a screenshot:

var canvas = document.getElementById('my-canvas');
var context = canvas.getContext('2d');

// Find the window dimensions
// (doc is the content document whose load you listened for)
canvas.height = doc.defaultView.innerHeight;
canvas.width = doc.defaultView.innerWidth;

context.drawWindow(doc.defaultView, 0, 0, canvas.width,
    canvas.height, "rgba(0,0,0,0)");

// Create a data URL from the canvas
var dataUrl = canvas.toDataURL("image/png");

You can then read about nsIIOService and nsIWebBrowserPersist to create an nsIURI from the data URL and persist it locally.

Let us know if this helps you!

Equivalent to beforeload event for Firefox extensions

Author: James Brady (Co-founder)

Level: Expert

Description: assumes detailed knowledge of Chrome and Firefox extension development

You can use the beforeload event to block loading of resources on webpages – Disconnect uses this to prevent Facebook widgets from loading and capturing your browsing data.


Here’s how you use it in Chrome. First, set up a content script to run right at the start of a new document load; here’s what a manifest.json file might look like to get that done:

{
  "name": "Disconnect",
  "version": "0.1",
  "description": "Stop major third parties and search engines from tracking your browsing and search history (built on WebMynd).",
  "permissions": [ "tabs", "http://*/*", "https://*/*" ],
  "background_page": "webmynd.html",
  "content_scripts": [
    {
      "matches": [ "http://*/*", "https://*/*" ],
      "js": [ "content.js" ],
      "run_at": "document_start"
    }
  ]
}
Second, in your content script (content.js), add an event listener for the beforeload event. Then within the event listener you can selectively use event.preventDefault() to stop a resource from loading.

document.addEventListener('beforeload', function(event) {
  if (event.url.match('facebook')) {
    // Stop this resource (e.g. a Facebook widget) from loading
    event.preventDefault();
  }
}, false);

Easy, right? You need to be careful not to put too much inside the event listener, like back-and-forth with the background page, since Chrome has pretty aggressive timeouts (presumably to stop extensions slowing down page loads too much).

But what about Firefox? The problem is, it doesn’t support the beforeload event. Luckily there’s an alternative, though it’s harder work. You can set up an observer to listen for the http-on-modify-request notification. This fires as the HTTP request is made and allows you to modify headers and even cancel the request – here’s a snippet you can add to your overlay JavaScript to get this working:

var httpObserver = {
  observe: function(aSubject, aTopic, aData) {
    if ("http-on-modify-request" == aTopic) {
      var channel = aSubject.QueryInterface(Components.interfaces.nsIHttpChannel);
      if (channel.URI.spec.match('facebook'))
        channel.cancel(Components.results.NS_BINDING_ABORTED); // block the request
    }
  }
};
// observerService is the nsIObserverService ("@mozilla.org/observer-service;1")
observerService.addObserver(httpObserver, "http-on-modify-request", false);

Learn more in Mozilla’s documentation on intercepting page loads and the observer service.

There are a few drawbacks to Mozilla’s approach that you need to watch for as compared to listening for the beforeload event in Chrome and Safari:

  • Your observer method is called for every single load request and there is no timeout, so if you write inefficient code you will significantly slow down users’ browsing experience
  • The requests are not necessarily tied to a particular document object – some http-on-modify-request events are fired even before the onLocationChange event for a document load. This makes it tricky to get to the document and browser that actually triggered the request

That said, it’s good to know that you can intercept requests to load remote resources on Firefox as well as Chrome and Safari.

We’re announcing a new product. Tabble. Go try it now…

If you find yourself looking people up on LinkedIn or Facebook, and researching companies on Crunchbase or Quora, you’ll want to try it. It shows you relevant emails from your Gmail account and profiles from all your top cloud apps in one place. This makes it easier for you to keep tabs on people and companies.

For example, here I’m looking up a friend on Crunchbase, notice the unobtrusive favicons on the right:

If I hover over the Gmail favicon, I see we have a meeting scheduled.

I can also see his company got funded recently from the SEC filings.

Without needing to go to multiple sites, I get the full picture on my friend. We call this CLOUD INTEGRATION. You don’t need to share your login details with us. We don’t crawl your data. It’s as if you went and did all that research yourself. It doesn’t matter that Gmail and Facebook aren’t open; we solve that problem for you. Like it?

Let us know what you think of this new product from WebMynd!

What do we mean by CLOUD INTEGRATION? There’s more… you’ll have to email us.

We’ve just released a new version of our sidebar for Firefox and Chrome. Upgrade your sidebar to try it now and see the new “Search Tabs” interface.

New "Search Tabs" interface

On search pages, you can see favicons on the right: these are the Tabs. Just hover over them and a small widget will appear showing you relevant search results from one of your top sources.

Search Tabs results widget

It’s a way you can get all the benefits of The Search Sidebar in a smaller interface. You can minimize the sidebar to just see the Tabs, or maximize it to access all features, just by clicking on the blue “WebMynd” tab.

In this release we’ve also made a couple of other changes:

  • Added new sources: Quora, iTunes, Crunchbase
  • Improved our recommendations of which source to try based on keywords, e.g. search for “i love cheese” and you’ll see RecipePuppy results come up
  • Fixed a clash with Feedly on Firefox: the sidebar now works just fine with it

Let us know what you think!

Sahil here, from the WebMynd team. James, Amir and I have been working like mad men over the past few months to deliver new and exciting changes to the WebMynd service.


To start, we are very excited to announce the official launch of our brand new sidebar design. It was time to give the product a facelift, and with the help of a few sketches from the team and a web design expert, Richard Kramer – we pulled it off. There are of course ongoing visual/functionality tweaks to be made, but we are looking forward to any and all feedback from you.

Download the new WebMynd Search Sidebar for Firefox here: Install Now. Or go to our homepage to install it for other browsers.

In the spirit of facelifts, we couldn’t leave our website out of the chic makeover party. With the help of a few more binder-paper sketches and another web design expert, David Kidger of Squidge Inc, we gave our website a much needed redesign to reflect our awesome new sidebar. Check out the new website here

Content Concierge:

We soft-launched Content Concierge just about three weeks ago and the reception has been incredible. With the CC platform, any publisher, or even a user with a site of their own to promote, can head over to our website, click “Create your Search Sidebar” and, in a matter of minutes, generate their own search sidebar.

When users install your sidebar, by default it will have the sources that you defined upon creation (adding multiple website verticals like Yahoo! NBA and Yahoo! MLB is pretty cool). We even generate a pretty spiffy landing page for you to promote and use as the destination for your users to download your search sidebar.

So far, we’ve had over 50 libraries from around the world use our CC platform to create a search sidebar that combines their university’s scholarly catalog sources into one tool. When a user engages in his/her normal search behavior and searches Yahoo! / Google / Bing, they will get results from all the sources the library defined during the search sidebar creation process. Generally, these are sources from their OPAC catalog, and many of them sit behind authentication servers or even proxy servers. Have no fear: the WebMynd team can help you build a sidebar with widgets that require logging in, and even proxy account logins.

To get help getting set up, just shoot us an email at: WebMynd Support

Thanks to our friends Aaron Tay (librarian at the National University of Singapore) and fellow librarian Guus Van Den Brekel, word is out on the librarian street that WebMynd Content Concierge is the biggest thing for staff/faculty/students since sliced bread. Check out their blog posts: Guus’ Post on WebMynd CC | Aaron’s Post on WebMynd CC

We have plenty of updates to come for Content Concierge. Many of the questions Aaron and Guus had around editing/deleting sidebars have been addressed (as well as other small tweaks), and the new functionality is live on the alpha. We look forward to keeping you all up to date on the latest and greatest from WebMynd, including an entirely new sidebar creation page/process with some awesome new features that will let you make even more kick-ass sidebars for your users.

As always, we look forward to answering any questions, getting any feedback and discussing cool new things WebMynd can do in the future. Please visit our new Getsatisfaction forum to post your questions/feedback/cool ideas for the future of WebMynd: WebMynd Forum

Blog comments are cool too.


We’ve partnered with to bring you the Capitalist Toolbar for Firefox and pioneer browser-based technologies that will keep the world’s business leaders informed. This first toolbar application brings you important news and commentary from wherever you are on the web. Learn more and install it now.

The Capitalist Toolbar notifies you of breaking news as it happens by showing you the headlines at the top of your browser window. You will be able to keep track of what you read and be notified about popular stories from the major channels including: Technology, Markets, Entrepreneurs, Personal Finance and Leadership. Wherever you are on the web, you will be able to search for relevant articles, stock quotes and executive biographies related to your task.

The Capitalist Toolbar

First, Internet Explorer made browser addons a focus for its v8 release; then Chrome launched its own application gallery. And we’ve seen several startups launch browser-based products in the last few weeks: Browsarity (give to charity while you browse), Rapportive (simple CRM on top of Gmail) and then Etacts, which, having previously launched its ‘personal CRM’ web application, quickly followed it with its own browser addon.

Why this growing interest in browser applications, even from Google, when all functionality is supposed to be moving to the cloud?

The power to modify

One common attribute of all those recently launched products (and indeed WebMynd’s own search applications) is that they use the power of browser-based applications to modify pages. In the case of Browsarity, they re-write links to be affiliate links. The WebMynd, Rapportive and Etacts applications modify the right hand sides of search and webmail pages to incorporate new content.

Etacts' Gmail application

But what’s the value to the user having that power through these apps?

Get it ‘to go’

It’s hard enough to get users to come to your site in the first place, let alone come back again and again. Most people can only remember to go to a certain (small) number of URLs, and will only tolerate a certain number of emails saying a friend has poked them. One of the sites they do remember is Google, so if you are high up in either the organic or paid results for a term your target audience is searching for, you’re fine. For the rest…

What if you only had to get your target audience to visit once and then they could take the information and functionality and use it where they already are?

That’s what browser applications offer. Like with food, some apps are takeaway only: WebMynd, Xobni, Browsarity and Rapportive are in this camp. Others like StumbleUpon, Delicious and Etacts are web applications in their own right with a browser application component.

Middleware for the web?

So it saves users from remembering to go back to your site or you having to remind them by sending spammy (or should I say, ‘viral’) emails. That’s great, but it’s not the full story.

Like with enterprise applications in the ’90s, the web is full of application and data silos – Gmail and Facebook just for starters. Integration is either non-existent, since the application owners want to lock in users by holding on to their data and keeping them on their site, or point-to-point, like LinkedIn including Tripit and Twitter information.

Of course it’s perfectly possible for Google to let you search Gmail on the right-hand side of their search page, or get Twitter and LinkedIn data on the right-hand side of Gmail messages. But if you want that anytime soon, you’re going to have to use WebMynd, Rapportive and Etacts. And why, when I’m looking on Yelp for somewhere to eat tonight, am I not reminded that a friend sent me an email to my Gmail account recommending me some places 6 months ago? Such a personalized web experience isn’t possible without integration of my personal data silos.

Middleware is software that glues together application silos. So could these browser apps be the start of a distributed middleware for the web?  What integrations would you like to see?

Relational databases, and the object-relational mapping layers which abstract them, are not particularly well suited to storing large blobs of data: images, videos, pictures, compressed files and so on.

Far better than streaming megabytes of binary to the database is to instead keep a reference into a separate store, better suited to the task of saving and serving files.

At WebMynd, we use SQLAlchemy as our ORM and Amazon’s Simple Storage Service (S3) to store our files. We’ve used Boto to create a convenient, transparent way to store a file through SQLAlchemy, with the file’s data actually residing in S3. These files can then be served directly from S3, decreasing database size and I/O load, and potentially reducing bandwidth costs.

Transparent changes to file content

Suppose the objects we wish to be backed by S3 have a content attribute, which is the file body itself. What we’re aiming for is to be able to do something like:

file = session.query(File).get(file_id)
file.content = "new content"

This can be achieved by creating a property on the SQLAlchemy model class:

    def _set_content(self, cont):
        s3     = boto.connect_s3(aws_id, aws_key)
        bucket = s3.get_bucket(s3_bucket)
        key    = bucket.get_key(self.key)
        if not key:
            key = Key(bucket=bucket, name=self.key)
        key.set_contents_from_string(cont)
        # if you want to serve files directly from S3:
        key.make_public()
    def _get_content(self):
        s3     = boto.connect_s3(aws_id, aws_key)
        bucket = s3.get_bucket(s3_bucket)
        key    = bucket.get_key(self.key)
        if not key:
            return None # or complain
        return key.get_contents_as_string()
    content = property(_get_content, _set_content)

Cleaning up S3 artifacts

The task of keeping S3 synchronised with the database state seems like it would be awkward, perhaps involving database triggers and queues of reconciliation tasks. I was pleasantly surprised to find that SQLAlchemy has an excellent MapperExtension class, which gives you a bunch of hooks to hang custom code off. For example, to delete an S3 key when a SQLAlchemy File object is deleted, you would do something like:

class CleanupS3(MapperExtension):
    def after_delete(self, mapper, conn, inst):
        s3     = boto.connect_s3(aws_id, aws_key)
        bucket = s3.get_bucket(s3_bucket)
        key    = bucket.get_key(inst.key)
        if key:
            key.delete()
        else:
            pass # complain
        return orm.EXT_CONTINUE

mapper(File, file_table, extension=CleanupS3())

A script with a working example can be found here. It requires Boto, SQLAlchemy and some AWS configuration. In real-world usage you’d want more error checking and handling of MIME types, and you might choose to stream in the file content with Boto’s set_contents_from_file method. You’ll also note that we connect to S3 on every method invocation; if you change file content frequently, using a connection pool for Boto might help improve performance.

Follow WebMynd on Twitter