We’ve just released version 1.0 of the Composer.js MVC framework! Note that this is a near drop-in replacement for Composer v0.1.x.

There are some exciting changes in this release:

  • Composer no longer requires Mootools… jQuery can be used as a DOM backend instead. In fact, it really only needs the selector libraries from Moo/jQuery (Slick/Sizzle) and can use those directly. This means you can now use Composer in jQuery applications.
  • Controllers now have awareness of more common patterns than before. For instance, controllers can now keep track of sub-controllers as well as automatically manage bindings to other objects. This frees you up to focus on building your app instead of hand-writing boilerplate cleanup code (or worse, having rogue objects and events making your app buggy).
  • The ever-popular RelationalModel and FilterCollection are now included by default, fully documented, and considered stable.
  • New class structures in the framework expose useful objects, such as Composer.Class which gives you a class structure to build on, or Composer.Event which can be used as a standalone event bus in your app (see the sketch after this list).
  • There’s now a full test suite so people who want to hack away on Composer (including us Lyon Bros) can do so without worrying about breaking things.
  • We updated the doc site to be much better organized!
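
For the curious, here’s a minimal sketch of Composer.Event used as a standalone event bus. It assumes Composer.Event exposes the same bind/trigger API the Router examples below use, so check the docs for exact details.

// a standalone event bus (sketch; bind/trigger API assumed from the Router examples)
var bus = new Composer.Event();

// any part of the app can listen...
bus.bind('user:login', function(user) {
    console.log('logged in as', user.username);
});

// ...and any other part can fire
bus.trigger('user:login', {username: 'larry'});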

Breaking changes

Try as we might, we couldn’t let some things stay the same and keep a clear conscience. Mainly, the problems we found were in the Router object. It no longer handles hashbang (#!) fallback…it relies completely on History.js to handle this instead. We also fixed a handful of places where non-idiomatic code was used (see below).

  • Composer.Router: the on_failure option has been removed. Instead of
    var router = new Composer.Router(routes, {on_failure: fail_fn});

    you do

    var router = new Composer.Router(routes);
    router.bind('fail', fail_fn);

  • Composer.Router: The register_callback function has been removed. In order to achieve the same functionality, use router.bind('route', myfunction);.
  • Composer.Router: The “preroute” event now passes {path: path} as its argument instead of a bare path. This allows for easier URL rewriting, but may break apps that depend on the old behavior (see the example after this list).
  • Composer.Router: History.js is now a hard requirement.
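
For example, rewriting a legacy URL before it hits the routing table might look something like this (a sketch only; whether the handler rewrites by mutating the passed object is an assumption, so double-check the docs):

// sketch: rewrite old URLs before routing
router.bind('preroute', function(data) {
    // data is now {path: path} instead of a bare path string
    if(data.path == '/old/dashboard') data.path = '/dashboard';
});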

Sorry for any inconvenience this causes. However, since the rest of the framework is backwards compatible, you should be able to just use the old Composer.Router object with the new framework without any problems if you don’t wish to convert your app.

Have fun!

Check out the new Composer.js, and please open an issue if you run into any problems. Thanks!

- The Lyon Bros.


I recently embarked on a project to rebuild the main functionality of Turtl in common lisp. This requires embedding lisp (using ECL) into node-webkit (or soon, Firefox, as node-webkit is probably getting dumped).

To allow lisp and javascript to communicate, I made a simple messaging layer in C that both sides could easily hook into. While this worked, I stumbled on nanomsg and figured it couldn’t hurt to give it a shot.

So I wrote up some quick bindings for nanomsg in lisp and wired everything up. So far, it works really well. I can’t tell if it’s faster than my previous messaging layer, but one really nice thing about it is that it uses file descriptors, which can be easily monitored by an event loop (such as cl-async), making polling and strange thread <-> thread event loop locking schemes a thing of the past (although cl-async handles all this fairly well).

This simplified a lot of the Turtl code, and although right now it’s only using the nanomsg “pair” layout type, it could easily be expanded in the future to allow different pieces of the app to communicate. In other words, it’s a lot more future-proof than the old messaging system, and probably a lot more resilient (a dedicated messaging library authored by the 0MQ mastermind beats hand-rolled, hard-coded simple messaging built by a non-C expert).

Lately I’ve been neck-deep in embedding. Currently, I’m building a portable (hopefully) version of Turtl’s core features in ECL.

Problem is, when embedding turtl-core into Node-webkit or Firefox, any output that ECL writes to STDOUT triggers:

C operation (write) signaled an error. C library explanation: Bad file descriptor.

Well, it turns out Windows doesn’t let you write to STDOUT unless a console is available, and even under msys, it won’t create a console for GUI apps. So here’s a tool (in lisp, of course) that will let you convert an executable between GUI and console.
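
For the curious, the conversion boils down to flipping the Subsystem field in the exe’s PE header (2 = GUI, 3 = console). Here’s a rough sketch of that byte-flip in javascript (Node.js) rather than lisp, assuming a standard PE layout. Back up your exe first.

// sketch: toggle a Windows exe between GUI and console subsystems
// usage: node toggle-subsystem.js myapp.exe
var fs = require('fs');
var file = process.argv[2];
var buf = fs.readFileSync(file);
var pe_offset = buf.readUInt32LE(0x3c);       // e_lfanew: offset of the "PE\0\0" signature
var sub_offset = pe_offset + 4 + 20 + 68;     // signature + COFF header + Subsystem offset
var subsystem = buf.readUInt16LE(sub_offset); // 2 = GUI, 3 = console
buf.writeUInt16LE(subsystem === 2 ? 3 : 2, sub_offset);
fs.writeFileSync(file, buf);
console.log('subsystem:', subsystem, '->', subsystem === 2 ? 3 : 2);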

Seems to work great. Special thanks to death.


It can be nice to access your FF extension’s variables/functions from the browser console (ctrl+shift+j) if you need some insight into its state.

It took me a while to figure this out, so I’m sharing it. Somewhere in your extension, do:

var chromewin = win_util.getMostRecentBrowserWindow();
chromewin.my_extension_state = ...;

Now in the browser console, you can access whatever variables you set in the global variable my_extension_state. In my case, I used it to assign a function that lets me evaluate code in the addon’s background page. This lets me gain insight into the background page’s variables and state straight from the browser console.
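
As a sketch (the names here are hypothetical; adapt them to however your addon is structured):

var chromewin = win_util.getMostRecentBrowserWindow();
// expose an eval hook so the browser console can poke at the addon.
// `background_win` stands in for however you reference your background page.
chromewin.my_extension_state = function(code) {
    return background_win.eval(code);
};
// then, from the browser console (ctrl+shift+j):
//   my_extension_state('JSON.stringify(some_internal_object)');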

Note! This is a security hole. Only enable this when debugging your extension/addon. Disable it when you release it.

Hi FORKS. On Tuesday I announced my new app, Turtl for Chrome (and soon Firefox). Turtl is a private Evernote alternative. It uses AES-256 encryption to obscure your notes/bookmarks before they leave the browser. This means that even if your data is intercepted on the way to the server, or if the server itself is compromised, your data remains private.

Even with all of Turtl’s privacy, it’s still easy to share boards with friends and colleagues: idea boards, todo lists, youtube playlists, etc. With Turtl, only you and the people you share with can see your data. Not even the guys running the servers can see it…it’s just gibberish without the key that you hold.

One more thing: Turtl’s server and clients are open-source under the GPLv3 license, meaning anyone can review the code or use it for themselves. This means that Turtl can never be secretly compromised by the prying hands of hackers or government gag orders. The world sees everything we do.

So check out Turtl and start keeping track of your life’s data. If you want to keep up to date, follow Turtl on Twitter.

A while ago, I released cl-async, a library for non-blocking programming in Common Lisp. I’ve been updating it a lot over the past month or two to add features and fix bugs, and it’s really coming along.

My goal for this project is to create a general-purpose library for asynchronous programming in lisp, and I think I have achieved this. With the futures implementation finished, not only is the library stable, but there is now a platform to build drivers on top of. Drivers will be my focal point over the next few months.

There are a few reasons I decided to build something new. Here’s an overview of the non-blocking libraries I could find:

  • IOLib – An IO library for lisp that has a built-in event loop, only works on *nix.
  • Hinge – General purpose, non-blocking library. Only works on *nix, requires libev and ZeroMQ.
  • Conserv – A nice layer on top of IOLib (so once again, only runs on *nix). Includes TCP client/server and HTTP server implementations. Very nice.
  • teepeedee2 – A non-blocking, performant HTTP server written on top of IOLib.

I created cl-async because the available libraries are either non-portable, not general enough, have too many dependencies, or some combination of all three. I wanted a library that worked on Linux and Windows. I wanted a portable base to start from, and I also wanted tools to help make drivers.

Keeping all this in mind, I created bindings for libevent2 and built cl-async on top of them. There were many good reasons for choosing libevent2 over other libraries, such as libev and libuv (the backend for Node.js). Libuv would have been my first choice because it supports IOCP in Windows (libevent does not), however wrapping it in CFFI was like getting a screaming toddler to see the logic behind your decision to put them to bed. It could have maybe happened if I’d written a compatibility layer in C, but I wanted to have a maximum of 1 (one) dependency. Libevent2 won. It’s fast, portable, easy to wrap in CFFI, and on top of that, has a lot of really nice features like an HTTP client/server, TCP buffering, DNS, etc etc etc. The list goes on. That means less programming for me.

Like I mentioned, my next goal is to build drivers. I’ve already built a few, but I don’t consider them stable enough to release yet. Drivers are the elephant in the room. Anybody can implement non-blocking IO for lisp, but the real challenge is converting everything that talks over TCP/HTTP to be async. If lisp supported coroutines, this would be trivial, but alas, we’re stuck with futures and the lovely syntax they afford.

I’m going to start with drivers I use every day: beanstalk, redis, cl-mongo, drakma, zs3, and cl-smtp. These are the packages we use at work in our queue processing system (now threaded, soon to be evented + threaded). Once a few of these are done, I’ll update the cl-async drivers page with best practices for building drivers (aka wrapping async into futures). Then I will take over the world.

Another goal I have is to build a real HTTP server on top of the bare http-server implementation provided by cl-async. This will include nice syntax around routing (allowing REST interfaces), static file serving, etc.

Cl-async is still a work in progress, but it’s starting to become stabilized (both in lack of bugs and the API itself), so check out the docs or the github project and give it a shot. All you need is a lisp and libevent =].

We’re building a queuing system for Musio written in common lisp. To be accurate, we already built a queuing system in common lisp, and I recently needed to add a worker to it that communicates with MongoDB via cl-mongo. Each worker spawns four worker threads, each thread grabbing jobs from beanstalkd via cl-beanstalk. During my testing, each worker was updating a Mongo collection with some values scraped from our site. However, after a few seconds of processing jobs, the worker threads began to spit out USOCKET errors, and eventually Clozure CL entered its debugger of death (ie, lisp’s version of a segfault). SBCL didn’t fare much better, either.

The way cl-mongo’s connections work is that it has a global hash table that holds connections: cl-mongo::*mongo-registry*. When the threads are all running and communicating with MongoDB, they all use this same hash table without any locking or synchronization. There are a few ways to fix this. You can implement a connection pool that supports access from multiple threads (complicated), you can give each thread its own connection and force each thread to use that connection when communicating, or you can take advantage of special variables in lisp (the easiest, simplest, and most elegant option, IMO). Let’s check out the last option.

Although it’s not in the CL spec, just about all implementations allow you to have global thread-local variables by using (defparameter) or (defvar), both of which create special variables (read: dynamic variables, as opposed to lexical). Luckily, cl-mongo uses defvar to create *mongo-registry*. This means in our worker, we can re-bind this variable above the top-level loop using (let), and all subsequent calls to MongoDB will use our new thread-local version of *mongo-registry* instead of the global one that all the threads were bumping into each other over:

;; Main worker loop, using global *mongo-registry* (broken)
(defun start-worker ()
  (loop
    (let ((job (get-job)))
      (let ((results (process-job job)))
        ;; this uses the global registry. not good if running multiple threads.
        (with-mongo-connection (:db "musio")
          (db.save "scraped" results))))))

New version:

;; Replace *mongo-registry* above worker loop, creating a local version of the
;; registry for this thread.
(defun start-worker ()
  ;; setting to any value via let will re-create the variable as a local thread
  ;; variable. nil will do just fine.
  (let ((cl-mongo::*mongo-registry* nil))
    (loop
      (let ((job (get-job)))
        (let ((results (process-job job)))
          ;; with-mongo-connection now uses the local registry, which stops the
          ;; threads from touching each other.
          (with-mongo-connection (:db "musio")
            (db.save "scraped" results)))))))

BOOM everything works great after this change, and it was only a one-line change. It may not be as efficient as connection pooling, but pooling is a lot more prone to strange errors and synchronization issues than just segregating the connections from each other and calling it a day. One issue: *mongo-registry* is not exported by cl-mongo, which is why we access it via cl-mongo::*mongo-registry* (notice the double colon instead of a single one). This means the variable name may change in future versions, breaking the above code. So don’t update cl-mongo without testing. Not hard.

Hopefully this helps a few people out, let me know if you have better solutions to this issue!

I’ve been seeing a lot of posts on the webz lately about how we can fix email. I have to say, I think it’s a bit short-sighted.

People are saying it has outgrown its original usage, or that it has bad error messages, or that it’s not smart about the messages received.

These are very smart people, with real observations. The problem is, their observations are misplaced.

What email is

Email is a distributed, asynchronous messaging protocol. It does this well. It does this very well. So well, I’m getting a boner thinking about it. You send a message and it either goes where it’s supposed to, or you get an error message back. That’s it, that’s email. It’s simple. It works.

There’s no company controlling all messages and imposing their will on the ecosystem as a whole. There’s no single point of failure. It’s beautifully distributed and functions near-perfectly.

The problem

So why does it suck so much? It doesn’t. It’s awesome. The problem is the way people view it. Most of the perceived suckiness comes from its simplicity. It doesn’t manage your TODOs. It doesn’t have built-in calendaring. It doesn’t give you oral pleasure (personally I think this should be built into the spec though). So why don’t we build all these great things into it if they don’t exist? We could add TODOs and calendaring and dick-sucking to email!!

Because that’s a terrible idea. People are viewing email as an application; one that has limited features and needs to be extended so it supports more than just stupid messages.

This is wrong.

We need to view email as a framework, not an application. It is used for sending messages. That’s it. It does this reliably and predictably.

Replacing email with “smarter” features will inevitably leave people out. I understand the desire to have email just be one huge TODO list. But sometimes I just want to send a fucking message, not “make a TODO.” Boom, I just “broke” the new email.

Email works because it does nothing but messaging.

How do we fix it then?

We fix it by building smart clients. Let’s take a look at some of our email-smart friends.

Outlook has built-in calendaring. BUT WAIT!!!!! Calendaring isn’t part of email!!1 No, it’s not.

Gmail has labels, which essentially let you categorize your messages with tags. Also, based on usage patterns, Gmail can give weight to certain messages. That’s not part of email either!! No, my friend, it’s not.

Xobni has also built incredible contact-management and intelligence features on top of email. How do they know it’s time to take your daily shit before you do? Defecation scheduling is NOT part of the email spec!!

How have these companies made so much fucking money off of adding features to email that are not part of email?

It’s all in the client

They do it by building smart clients! As I said, you can send any message your heart desires using email. You can send JSON messages with a TODO payload and attach a plaintext fallback. If both clients understand it, then BAM! Instant TODO list protocol. There, you just fixed email. Easy, no? Why, with the right client, you could fly a fucking space shuttle with email. That’s right, dude, a fucking space shuttle.

If your client can create a message and send it, and the receiving client can decode it, you can build any protocol you want on top of email.
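
As a concrete sketch, a “TODO protocol” message could just be a multipart/alternative email: a JSON part for clients that speak the protocol, and a text/plain fallback for everyone else. The content type and payload fields below are made up for illustration.

// sketch: build a raw "TODO" email with a plaintext fallback
var boundary = 'todo-boundary-42';
var message = [
    'From: alice@example.com',
    'To: bob@example.com',
    'Subject: Buy milk',
    'MIME-Version: 1.0',
    'Content-Type: multipart/alternative; boundary="' + boundary + '"',
    '',
    '--' + boundary,
    'Content-Type: text/plain; charset=utf-8',
    '',
    'TODO: buy milk (due Friday)',
    '--' + boundary,
    'Content-Type: application/x-todo+json', // hypothetical type both clients agree on
    '',
    JSON.stringify({title: 'buy milk', due: '2012-06-15', done: false}),
    '--' + boundary + '--'
].join('\r\n');
// hand `message` to any SMTP client and BAM: instant TODO protocol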

That’s it. Use your imaginations. I’ll say it one more time:

There’s nothing to fix

Repeat after me: “There’s nothing to fix!” If you have a problem with email, fork a client or build your own! Nobody’s stopping you from “fixing” email. Many people have made a lot of cash by “fixing” email.

We don’t have to sit in fluorescent-lit university buildings deliberating for hours on end about how to change the spec to fit everyone’s new needs. We don’t need 100 stupid startups “disrupting” the “broken” email system with new protocols that will inevitably end up being a proprietary, non-distributed, “ad hoc, informally-specified, bug-ridden, slow implementation of half of” the current email system.

Please don’t try to fix email, you’re just going to fuck it up!! Trust me, you can’t do any better. Instead, let’s build all of our awesome new features on top of an already beautifully-working system by making smarter clients.

I’m currently doing some server management. My current favorite tool is TMUX, which, among many other things, allows you to save your session even if you are disconnected, split your screen into panes, etc etc. If it sounds great, that’s because it is. Every sysadmin would benefit from using TMUX (or its cousin, GNU screen).

There’s a security flaw though. Let’s say I log in as user “andrew” and attach to my previous TMUX session: tmux attach. Now I have to run a number of commands as root. Prefixing every command with sudo and manually typing the /sbin/ paths to each executable is a pain in the ass, so although I know it’s a bad idea, I’ll often spawn a root shell. Let’s say I spawn a root shell in a TMUX session, then go do something else, fully intending to log out later, but I forget. My computer disconnects, and I forget there’s a root shell sitting there.

If someone manages to compromise the machine, and gain access to my user account, getting a root shell is as easy as doing tmux attach. Oops.

Well, I just found out you can time out a shell after X seconds of inactivity, which is perfect for this case. As root:

echo -e "\n# logout after 5 minutes of inactivity\nexport TMOUT=300\n" >> /root/.bash_profile

Now I can open root shells until my ass bleeds, and after 5 minutes of inactivity they’ll log out, dropping me back into my normal user account.

A good sysadmin won’t make mistakes. A great sysadmin makes mistakes self-correct ;-].


So my brother Jeff and I are building two Javascript-heavy applications at the moment (heavy as in all-js front-end). We needed a framework that provides loose coupling between the pieces, event/message-based invoking, and maps well to our data structures. A few choices came up, most notably Backbone.js and Spine. These are excellent frameworks. It took a while to wrap my head around the paradigms because I was so used to writing events nested five layers deep. Now that I have the hang of it, I can’t think of how I ever lived without it. There’s just one large problem…these libraries are for jQuery.

jQuery isn’t bad. We’ve always gravitated towards Mootools, though. Mootools is a framework that makes javascript more usable; jQuery is nearly a completely new language in itself, written on top of javascript (and mainly for DOM manipulation). Both have their benefits, but we were always good at javascript before the frameworks came along, so something that made that knowledge more useful was the obvious choice for us.

I’ll also say that after spending some time with these frameworks and being sold (I especially liked Backbone.js), I gave jQuery another shot. I ported all of our common libraries to jQuery and spent a few days getting used to it and learning how to do certain things. I couldn’t stand it. The thing that got me most was that there is no distinction between a DOM node and a collection of DOM nodes. Maybe I’m just too used to Moo (4+ years).

Composer.js »

So we decided to roll our own. Composer.js was born. It merges aspects of Spine and Backbone.js into a Mootools-based MVC framework. It’s still in progress, but we’re solidifying a lot of the API so developers won’t have to worry about switching their code when v1 comes around.

Read the docs, give it a shot, and let us know if you have any problems or questions.

Also, yes, we blatantly ripped off Backbone.js in a lot of places. We’re pretty open about it, and also pretty open about attributing everything we took. They did some really awesome things. It’s not so much that we wanted to do things differently; we wanted a supported Mootools MVC framework that works like Backbone.