It can be nice to access your FF extension’s variables/functions from the browser console (ctrl+shift+j) if you need some insight into its state.

It took me a while to figure this out, so I’m sharing it. Somewhere in your extension, do:

var chromewin = win_util.getMostRecentBrowserWindow();
chromewin.my_extension_state = ...;

Now in the browser console, you can access whatever variables you set in the global variable my_extension_state. In my case, I used it to assign a function that lets me evaluate code in the addon’s background page. This lets me gain insight into the background page’s variables and state straight from the browser console.
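For example, here's roughly how that hook can look. This is a sketch: I'm assuming win_util is the addon SDK's sdk/window/utils module, and evalInBg is just a name I made up for illustration.

var win_util  = require('sdk/window/utils');
var chromewin = win_util.getMostRecentBrowserWindow();

// expose a debug hook on the browser window's global object
chromewin.my_extension_state = {
  evalInBg: function(code) {
    // eval() runs here, inside the addon, so the addon's variables and
    // functions are all in scope
    return eval(code);
  }
};

// then, from the browser console (ctrl+shift+j):
//   my_extension_state.evalInBg('some_addon_variable')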

Note! This is a security hole. Only enable this when debugging your extension/addon. Disable it when you release it.

It’s true.

  • I’ve never been to the doctor with an embarrassing condition, but if I did go to the doctor, I’d be fine with my visit being live tweeted, including symptoms, conditions, diseases, and medications.
  • I’ve never searched the internet (including Craigslist) for anything other than products, popular TV shows, or information related to my work. If I did, I’d be fine with my search history being listed at the bottom of every email I send to anyone.
  • I’ve never been to an adult store, but if I did I’d be fine with the itemized receipt (including my name, address, and phone number) being copied and stapled to every lightpost and bulletin board in town.
  • I never use the bathroom, masturbate, or have sex…but if I did I’d be fine with there being a live video feed of it on CNN’s homepage.
  • I’ve never stayed home from work because I felt lazy.
  • I’ve never said anything malicious or idiotic, but if I did I’d be fine with a recording of it being played before every conversation I have.
  • I’ve never pretended to like someone even though I hate their guts, but if I did I’d be fine with a text message that says “I hate you” being sent to them from my phone each time I smiled through my teeth.
  • I’ve never gotten shitfaced and embarrassed myself in front of a large group of people, but if I did I’d be fine with a video of it being on Youtube’s homepage for a year.
  • I’ve never had sexual thoughts about one of my teachers, bosses, underlings, or co-workers…but if I did, I’d be fine with a detailed transcription of the thought being delivered to them by bike messenger in the middle of class or the work day.
  • I’ve never had thoughts of self-doubt, but if I did I’d be fine with each one being sent to my co-workers, friends, and people I don’t like.
  • I’ve never had dirty or perverted thoughts, but if I did I’d be fine with each one being emailed to all my relatives and co-workers.
  • I am fine with each of my dreams being transcribed, in its entirety, and sent to everyone I know, friend or foe.
  • I’ve never fought with my significant other, but if I did I would do so over a loudspeaker so my neighbors could hear every word.

There are two types of people in this world with nothing to hide: ones who can say every item in the list above is true and ones who are completely misinformed. The above examples may seem extreme, but this information is being collected and it is being stored forever. Every part of your life that takes place on the internet and your phone is now completely accessible by a group of people you’ve never met (and any organizations they deem fit to view it).

There is an eye following you wherever you go, judging every movement you make. This eye is not god, and this eye does not keep you safe. The eye collects and stores, and when you deviate — donating to/joining a political party other than the two main options, advocating animal rights, coming out as gay/transgender, etc — you are watched even more closely. When will this information about you be used, and by whom? All you can hope is that they have your best interests in mind.

Believe me when I say that you do have something to hide. There are things you don’t want the world to know.

This is why privacy is worth fighting for.


Hi FORKS. Tuesday I announced my new app, Turtl for Chrome (and soon Firefox). Turtl is a private Evernote alternative. It uses AES-256 encryption to obscure your notes/bookmarks before they leave the browser. What this means is that even if your data is intercepted on the way to the server or if the server itself is compromised, your data remains private.
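To make that concrete, here's a rough sketch of the client-side idea using CryptoJS. This is only an illustration, not Turtl's actual crypto code, and real key handling is more involved.

// encrypt the note locally; only ciphertext ever leaves the browser
var key        = 'a secret only you (and the people you share with) hold';
var note       = JSON.stringify({ title: 'groceries', body: 'eggs, milk' });
var ciphertext = CryptoJS.AES.encrypt(note, key).toString();

// the server only ever stores `ciphertext` -- gibberish without the key
var plaintext  = CryptoJS.AES.decrypt(ciphertext, key).toString(CryptoJS.enc.Utf8);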

Even with all of Turtl’s privacy, it’s still easy to share boards with friends and colleagues: idea boards, todo lists, youtube playlists, etc. With Turtl, only you and the people you share with can see your data. Not even the guys running the servers can see it…it’s just gibberish without the key that you hold.

One more thing: Turtl’s server and clients are open-source under the GPLv3 license, meaning anyone can review the code or use it for themselves. This means that Turtl can never be secretly compromised by the prying hands of hackers or government gag orders. The world sees everything we do.

So check out Turtl and start keeping track of your life’s data. If you want to keep up to date, follow Turtl on Twitter.

A while ago, I released cl-async, a library for non-blocking programming in Common Lisp. I’ve been updating it a lot over the past month or two to add features and fix bugs, and it’s really coming along.

My goal for this project is to create a general-purpose library for asynchronous programming in lisp. I think I have achieved this. With the futures implementation finished, not only is the library stable, but there is now a platform to build drivers on top of. This will be my next focal point over the next few months.

There are a few reasons I decided to build something new. Here’s an overview of the non-blocking libraries I could find:

  • IOLib – An IO library for lisp that has a built-in event loop, only works on *nix.
  • Hinge – General purpose, non-blocking library. Only works on *nix, requires libev and ZeroMQ.
  • Conserv – A nice layer on top of IOLib (so once again, only runs on *nix). Includes TCP client/server and HTTP server implementations. Very nice.
  • teepeedee2 – A non-blocking, performant HTTP server written on top of IOLib.

I created cl-async because, of the available libraries, each is either non-portable, not general enough, has too many dependencies, or some combination of all three. I wanted a library that worked on Linux and Windows. I wanted a portable base to start from, and I also wanted tools to help make drivers.

Keeping all this in mind, I created bindings for libevent2 and built cl-async on top of them. There were many good reasons for choosing libevent2 over other libraries, such as libev and libuv (the backend for Node.js). Libuv would have been my first choice because it supports IOCP in Windows (libevent does not); however, wrapping it in CFFI was like getting a screaming toddler to see the logic behind your decision to put them to bed. It might have happened if I’d written a compatibility layer in C, but I wanted to have a maximum of 1 (one) dependency. Libevent2 won. It’s fast, portable, easy to wrap in CFFI, and on top of that, has a lot of really nice features like an HTTP client/server, TCP buffering, DNS, etc etc etc. The list goes on. That means less programming for me.

Like I mentioned, my next goal is to build drivers. I’ve already built a few, but I don’t consider them stable enough to release yet. Drivers are the elephant in the room. Anybody can implement non-blocking IO for lisp, but the real challenge is converting everything that talks over TCP/HTTP to be async. If lisp supported coroutines, this would be trivial, but alas, we’re stuck with futures and the lovely syntax they afford.

I’m going to start with drivers I use every day: beanstalk, redis, cl-mongo, drakma, zs3, and cl-smtp. These are the packages we use at work in our queue processing system (now threaded, soon to be evented + threaded). Once a few of these are done, I’ll update the cl-async drivers page with best practices for building drivers (aka wrapping async into futures). Then I will take over the world.

Another goal I have is to build a real HTTP server on top of the bare http-server implementation provided by cl-async. This will include nice syntax around routing (allowing REST interfaces), static file serving, etc.

Cl-async is still a work in progress, but it’s starting to become stabilized (both in lack of bugs and the API itself), so check out the docs or the github project and give it a shot. All you need is a lisp and libevent =].

We’re building a queuing system for Musio written in common lisp. To be accurate, we already built a queuing system in common lisp, and I recently needed to add a worker to it that communicates with MongoDB via cl-mongo. Each worker spawns four worker threads, each thread grabbing jobs from beanstalkd via cl-beanstalk. During my testing, each worker was updating a Mongo collection with some values scraped from our site. However, after a few seconds of processing jobs, the worker threads began to spit out USOCKET errors and eventually Clozure CL entered its debugger of death (ie, lisp’s version of a segfault). SBCL didn’t fare much better, either.

The way cl-mongo’s connections work is that it keeps a global hash table that holds connections: cl-mongo::*mongo-registry*. When the threads are all running and communicating with MongoDB, they are using the same hash table without any inherent locking or synchronization. There are a few options to fix this. You can implement a connection pool that supports access from multiple threads (complicated), you can give each thread its own connection and force each thread to use its own connection when communicating, or you can take advantage of special variables in lisp (the easiest, simplest, and most elegant IMO). Let’s check out the last option.

Although it’s not in the CL spec, just about all implementations allow you to have global thread-local variables by using (defparameter) or (defvar), both of which create special variables (read: dynamic variables, as opposed to lexical). Luckily, cl-mongo uses defvar to create *mongo-registry*. This means in our worker, we can re-bind this variable above the top-level loop using (let), and all subsequent calls to MongoDB will use our new thread-local version of *mongo-registry* instead of the global one that all the threads were using (and bumping into each other over):

;; Main worker loop, using global *mongo-registry* (broken)
(defun start-worker ()
  (loop
    (let ((job (get-job)))
      (let ((results (process-job job)))
        ;; this uses the global registry. not good if running multiple threads.
        (with-mongo-connection (:db "musio")
          (db.save "scraped" results))))))

New version:

;; Replace *mongo-registry* above worker loop, creating a local version of the
;; registry for this thread.
(defun start-worker ()
  ;; setting to any value via let will re-create the variable as a local thread
  ;; variable. nil will do just fine.
  (let ((cl-mongo::*mongo-registry* nil))
    (loop
      (let ((job (get-job)))
        (let ((results (process-job job)))
          ;; with-mongo-connection now uses the local registry, which stops the
          ;; threads from touching each other.
          (with-mongo-connection (:db "musio")
            (db.save "scraped" results)))))))

BOOM everything works great after this change, and it was only a one line change. It may not be as efficient as connection pooling, but that’s a lot more prone to strange errors and synchronization issues than just segregating the connections from each other and calling it a day. One issue: *mongo-registry* is not exported by cl-mongo, which is why we access it via cl-mongo::*mongo-registry* (notice the double colon instead of single). This means in future versions, the variable name may change, breaking our above code. So, don’t update cl-mongo without testing. Not hard.

Hopefully this helps a few people out, let me know if you have better solutions to this issue!

I’ve been seeing a lot of posts on the webz lately about how we can fix email. I have to say, I think it’s a bit short-sighted.

People are saying it has outgrown its original usage, or it contains bad error messages, or it’s not smart about the messages received.

These are very smart people, with real observations. The problem is, their observations are misplaced.

What email is

Email is a distributed, asynchronous messaging protocol. It does this well. It does this very well. So well, I’m getting a boner thinking about it. You send a message and it either goes where it’s supposed to, or you get an error message back. That’s it, that’s email. It’s simple. It works.

There’s no company controlling all messages and imposing their will on the ecosystem as a whole. There’s no single point of failure. It’s beautifully distributed and functions near-perfectly.

The problem

So why does it suck so much? It doesn’t. It’s awesome. The problem is the way people view it. Most of the perceived suckiness comes from its simplicity. It doesn’t manage your TODOs. It doesn’t have built-in calendaring. It doesn’t give you oral pleasure (personally I think this should be built into the spec though). So why don’t we build all these great things into it if they don’t exist? We could add TODOs and calendaring and dick-sucking to email!!

Because that’s a terrible idea. People are viewing email as an application; one that has limited features and needs to be extended so it supports more than just stupid messages.

This is wrong.

We need to view email as a framework, not an application. It is used for sending messages. That’s it. It does this reliably and predictably.

Replacing email with “smarter” features will inevitably leave people out. I understand the desire to have email just be one huge TODO list. But sometimes I just want to send a fucking message, not “make a TODO.” Boom, I just “broke” the new email.

Email works because it does nothing but messaging.

How do we fix it then?

We fix it by building smart clients. Let’s take a look at some of our email-smart friends.

Outlook has built-in calendaring. BUT WAIT!!!!! Calendaring isn’t part of email!!1 No, it’s not.

Gmail has labels. You can categorize your messages by using tags essentially. Also, based on usage patterns, Gmail can give weight to certain messages. That’s not part of email either!! No, my friend, it’s not.

Xobni has also built incredible contact-management and intelligence features on top of email. How do they know it’s time to take your daily shit before you do? Defecation scheduling is NOT part of the email spec!!

How have these companies made so much fucking money off of adding features to email that are not part of email?

It’s all in the client

They do it by building smart clients! As I said, you can send any message your heart desires using email. You can send JSON messages with a TODO payload and attach a plaintext fallback. If both clients understand it, then BAM! Instant TODO list protocol. There, you just fixed email. Easy, no? Why, with the right client, you could fly a fucking space shuttle with email. That’s right, dude, a fucking space shuttle.

If your client can create a message and send it, and the receiving client can decode it, you can build any protocol you want on top of email.
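For instance, a hypothetical “TODO protocol” is nothing more than an ordinary multipart email: a plaintext fallback for clients that don’t care, plus a JSON part for clients that do. A rough sketch (every name and field here is made up):

// build a plain multipart/alternative message carrying a JSON TODO payload
var todo = { type: 'todo', title: 'Buy milk', due: '2012-11-02' };
var boundary = '----todo-boundary';
var message = [
  'From: me@example.com',
  'To: you@example.com',
  'Subject: TODO: ' + todo.title,
  'MIME-Version: 1.0',
  'Content-Type: multipart/alternative; boundary="' + boundary + '"',
  '',
  '--' + boundary,
  'Content-Type: text/plain; charset=utf-8',
  '',
  'TODO: ' + todo.title + ' (due ' + todo.due + ')',
  '',
  '--' + boundary,
  'Content-Type: application/json; charset=utf-8',
  '',
  JSON.stringify(todo),
  '',
  '--' + boundary + '--'
].join('\r\n');

// hand `message` to any SMTP client. A TODO-aware client renders a checkbox;
// every other client just shows the plaintext. Email itself doesn't change.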

That’s it. Use your imaginations. I’ll say it one more time:

There’s nothing to fix

Repeat after me: “There’s nothing to fix!” If you have a problem with email, fork a client or build your own! Nobody’s stopping you from “fixing” email. Many people have made a lot of cash by “fixing” email.

We don’t have to sit in fluorescent-lit university buildings deliberating for hours on end about how to change the spec to fit everyone’s new needs. We don’t need 100 stupid startups “disrupting” the “broken” email system with their new protocols, which will inevitably end up being a proprietary, non-distributed, “ad hoc, informally-specified, bug-ridden, slow implementation of half of” the current email system.

Please don’t try to fix email, you’re just going to fuck it up!! Trust me, you can’t do any better. Instead, let’s build all of our awesome new features on top of an already beautifully-working system by making smarter clients.

I’m currently doing some server management. My current favorite tool is TMUX, which, among many other things, allows you to save your session even if you are disconnected, split your screen into panes, etc etc. If it sounds great, that’s because it is. Every sysadmin would benefit from using TMUX (or its cousin, GNU screen).

There’s a security flaw though. Let’s say I log in as user “andrew” and attach to my previous TMUX session: tmux attach. Now I have to run a number of commands as root. Well, prefixing every command with sudo and manually typing in all the /sbin/ paths to each executable is a pain in the ass. I know this is a bad idea, but I’ll often spawn a root shell. Let’s say I spawn a root shell in a TMUX session, then go do something else, fully intending to log out later, but I forget. My computer disconnects, and I forget there’s a root shell sitting there.

If someone manages to compromise the machine, and gain access to my user account, getting a root shell is as easy as doing tmux attach. Oops.

Well, I just found out you can timeout a shell after X seconds of inactivity, which is perfect for this case. As root:

echo -e "\n# logout after 5 minutes of inactivity\nexport TMOUT=300\n" >> /root/.bash_profile

Now I can open root shells until my ass bleeds, and after 5 minutes of inactivity each one logs out, dropping me back into my normal user account.

A good sysadmin won’t make mistakes. A great sysadmin will make his mistakes self-correct ;-].


So my brother Jeff and I are building two Javascript-heavy applications at the moment (heavy as in all-js front-end). We needed a framework that provides loose coupling between the pieces, event/message-based invoking, and maps well to our data structures. A few choices came up, most notably Backbone.js and Spine. These are excellent frameworks. It took a while to wrap my head around the paradigms because I was so used to writing five layers deep of embedded events. Now that I have the hang of it, I can’t think of how I ever lived without it. There’s just one large problem…these libraries are for jQuery.

jQuery isn’t bad. We’ve always gravitated towards Mootools though. Mootools is a framework that makes javascript more usable; jQuery is nearly a completely new language in itself written on top of javascript (and mainly for DOM manipulation). Both have their benefits, but we were always good at javascript before the frameworks came along, so something that made that knowledge more useful was an obvious choice for us.

I’ll also say that after spending some time with these frameworks and being sold (I especially liked Backbone.js) I gave jQuery another shot. I ported all of our common libraries to jQuery and I spent a few days getting used to it and learning how to do certain things. I couldn’t stand it. The thing that got me most was that there is no distinction between a DOM node and a collection of DOM nodes. Maybe I’m just too used to Moo (4+ years).

Composer.js »

So we decided to roll our own. Composer.js was born. It merges aspects of Spine and Backbone.js into a Mootools-based MVC framework. It’s still in progress, but we’re solidifying a lot of the API so developers won’t have to worry about switching their code when v1 comes around.
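Here’s a rough taste of what it looks like. This is a sketch from memory, so treat the exact method names as assumptions and check the docs for the real API.

// models and collections extend base classes, Backbone-style
var Song = Composer.Model.extend({
    defaults: { title: '', artist: '' }
});
var Playlist = Composer.Collection.extend({
    model: Song
});

// event-based invoking: react to the collection instead of hand-wiring
// deeply-nested callbacks
var playlist = new Playlist();
playlist.bind('add', function(song) {
    console.log('added: ' + song.get('title'));
});
playlist.add(new Song({ title: 'Moonchild', artist: 'King Crimson' }));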

Read the docs, give it a shot, and let us know if you have any problems or questions.

Also, yes, we blatantly ripped off Backbone.js in a lot of places. We’re pretty open about it, and also pretty open about attributing everything we took. They did some really awesome things. It’s not so much that we wanted to do things differently; we just wanted a supported Mootools MVC framework that works like Backbone.

I was looking around for Riak information when I stumbled (not via stumble, but actually doing my own blundering) across a blog post that mentioned a Riak GUI. I checked it out. Install is simple and, oddly enough, the tool uses only javascript and Riak (no web server needed). I have to say I’m thoroughly impressed by it. Currently the tool doesn’t do a ton besides listing buckets, keys, and stats, but you can edit your data inline and delete objects. It also supports Luwak, which I have no first-hand experience with and was unable to try out.

One thing I thought that was missing was a way to run a map-reduce on the cluster via text input boxes for the functions. It would make writing and testing them a bit simpler I think, but then again it would be easy enough to write this myself in PHP or even JS, so maybe I’ll add it in. Search integration would be nice too, although going to 127.0.0.1:8098/solr/[bucket]search?… is pretty stupid easy.
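The JS version would be little more than POSTing the contents of those text boxes to Riak’s /mapred endpoint. Something like this sketch (the bucket name and map function are just examples):

// run a map-reduce against the local Riak node's HTTP interface
var query = {
    inputs: 'users',    // bucket to map over
    query: [{
        map: {
            language: 'javascript',
            // this string is what the GUI's "map function" text box would hold
            source: 'function(v) { return [v.key]; }'
        }
    }]
};
var xhr = new XMLHttpRequest();
xhr.open('POST', 'http://127.0.0.1:8098/mapred', true);
xhr.setRequestHeader('Content-Type', 'application/json');
xhr.onload = function() { console.log(JSON.parse(xhr.responseText)); };
xhr.send(JSON.stringify(query));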

All in all, a great tool.


John Resig created some incredibly wonderful Javascript code for templating. It’s so terse that it almost shouldn’t work…but it does. I’ve been using it on a lot of front-end JS apps lately, and realized I could make a few changes and improvements.

First off, I don’t like <% asp style tags %>. It reminds me of programming ASP. It reminds me of a trip through hell I’ve taken too many times. I changed it to use PHP-style tags instead:

<ul class="<?=myclass?>">
    <? for(var i = 0; i < items.length; i++) { ?>
        <li><?=items[i].name?>
    <? } ?>
</ul>

This makes it easier for me to type. I also made one further modification. Adding $ in front of your variables will check if they are undefined before using them, and if not defined will return them as null:

Hello, <?=$name?>.
<? if($user.friends) { ?>
    You have <?=$user.friends.length?> friends.
<? } ?>

The if statement above will compile to

if((typeof(user.friends) == 'undefined' ? null : user.friends)) { ...

This allows some simple usages of undefined variables, such as “if(undefined_var) { … } else { … }” which actually pops up a lot. You still can’t access non-existent properties of variables that aren’t defined, but this should catch a lot of errors that would otherwise turn your code into a bunch of if(typeof …)’s.

Here’s the code (for brevity, I left out all the caching stuff that makes this fast):

var template    =   '<h1><?=$title?></h1> ...';
template        =   template.replace(/(\r\n|\n\r)/g, "\n");
var fnstr       =
    "var ___p=[],print=function(){___p.push.apply(___p,arguments);};" +
    "with(obj) {___p.push('" +
    template
        // fix single quotes in html (escape them)
        .replace(/(^|\?>)([\s\S]*?)($|<\?)/g, function(match) {
            return match.replace(/'/g, '\\\'');
        })
        // implement safe usage of $varname
        .replace(/<\?([\s\S]*?)\?>/g, function(match) {
            return match.replace(/\$([a-z_][a-z0-9_\.]+)/gi, '(typeof($1) == "undefined" ? null : $1)').replace(/[\r\n]+/g, ' ');
        })
        .replace(/\r?\n/g, '___::NEWLINE::___')
        .split("<?").join('___::TABBBBB::___')
        .replace(/((^|\?>)(?!=___::TABBBBB::___))'/g, "$1___::SLASHR::___")
        .replace(/___::TABBBBB::___=\s*\$?(.*?)\?>/g, "',$1,'")
        .split('___::TABBBBB::___').join("');")
        .split('?>').join("___p.push('")
        .split("___::SLASHR::___").join("\\'")
        .replace(/___::NEWLINE::___/g, '\'+ "\\n" + \'') +
        "');}" + "return ___p.join('');";
var tpl_fn  =   new Function("obj", fnstr);
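And a quick usage sketch: the data key matches the template above, and #content is just a hypothetical target element.

// compile once, render as many times as you like
var html = tpl_fn({ title: 'My bookmarks' });
document.getElementById('content').innerHTML = html;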

This has been working for me for a bit now, and has saved me countless annoying declarations at the top of my templates. If you run into any problems, please let me know.