It can be nice to access your FF extension’s variables/functions from the browser console (ctrl+shift+j) if you need some insight into its state.

It took me a while to figure this out, so I’m sharing it. Somewhere in your extension, do:

var chromewin = win_util.getMostRecentBrowserWindow();
chromewin.my_extension_state = ...;

Now in the browser console, you can access whatever variables you set in the global variable my_extension_state. In my case, I used it to assign a function that lets me evaluate code in the addon’s background page. This lets me gain insight into the background page’s variables and state straight from the browser console.
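For example, the debug hook I describe might look something like this. This is a minimal sketch: the object shape and the `run` helper are my own invention, not a real addon API, and `chromewin` is the window grabbed in the snippet above:

```javascript
// Sketch of a debug object you might expose on the chrome window.
// The `run` helper name is illustrative, not part of any addon API.
var my_extension_state = {
  // evaluate code in the addon's scope so the browser console can inspect it
  run: function(code) { return eval(code); }
};

// In the addon, after getting chromewin as shown above:
//   chromewin.my_extension_state = my_extension_state;

// Then, from the browser console:
//   my_extension_state.run('some_addon_variable');
console.log(my_extension_state.run('1 + 2'));  // → 3
```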

Note! This is a security hole. Only enable this when debugging your extension/addon. Disable it when you release it.

Note that this may or may not work on your device. If you’re running into an app that works in a real browser but not in your Android’s stock browser, do this:

  1. Navigate to your app in the browser.
  2. In the same tab go to about:debug
  3. Reload (it may reload for you).
  4. Profit.

This will show you errors that even window.onerror doesn’t catch, which should help you narrow down your problem(s).

Source: This stackoverflow answer.

It’s true.

  • I’ve never been to the doctor with an embarrassing condition, but if I did go to the doctor, I’d be fine with my visit being live tweeted, including symptoms, conditions, diseases, and medications.
  • I’ve never searched the internet (including Craigslist) for anything other than products, popular TV shows, or information related to my work. If I did, I’d be fine with my search history being listed at the bottom of every email I send to anyone.
  • I’ve never been to an adult store, but if I did I’d be fine with the itemized receipt (including my name, address, and phone number) being copied and stapled to every lightpost and bulletin board in town.
  • I never use the bathroom, masturbate, or have sex…but if I did I’d be fine with there being a live video feed of it on CNN’s homepage.
  • I’ve never stayed home from work because I felt lazy.
  • I’ve never said anything malicious or idiotic, but if I did I’d be fine with a recording of it being played before every conversation I have.
  • I’ve never pretended to like someone even though I hate their guts, but if I did I’d be fine with a text message that says “I hate you” being sent to them from my phone each time I smiled through my teeth.
  • I’ve never gotten shitfaced and embarrassed myself in front of a large group of people, but if I did I’d be fine with a video of it being on Youtube’s homepage for a year.
  • I’ve never had sexual thoughts about one of my teachers, bosses, underlings, or co-workers…but if I did, I’d be fine with a detailed transcription of the thought being delivered to them by bike messenger in the middle of class or the work day.
  • I’ve never had thoughts of self-doubt, but if I did I’d be fine with each one being sent to my co-workers, friends, and people I don’t like.
  • I’ve never had dirty or perverted thoughts, but if I did I’d be fine with each one being emailed to all my relatives and co-workers.
  • I am fine with each of my dreams being transcribed, in its entirety, and sent to everyone I know, friend or foe.
  • I’ve never fought with my significant other, but if I did I would do so over a loudspeaker so my neighbors could hear every word.

There are two types of people in this world with nothing to hide: ones who can say every item in the list above is true and ones who are completely misinformed. The above examples may seem extreme, but this information is being collected and it is being stored forever. Every part of your life that takes place on the internet and your phone is now completely accessible by a group of people you’ve never met (and any organizations they deem fit to view it).

There is an eye following you wherever you go, judging every movement you make. This eye is not god, and this eye does not keep you safe. The eye collects and stores, and when you deviate — donating to/joining a political party other than the two main options, advocating animal rights, coming out as gay/transgender, etc — you are watched even more closely. When will this information about you be used, and by whom? All you can hope is that they have your best interests in mind.

Believe me when I say that you do have something to hide. There are things you don’t want the world to know.

This is why privacy is worth fighting for.

As you all know, I’m building Turtl, a browser extension for client-side encrypted note/file storage.

Well, once in a while I need to debug the release version. There are docs scattered around detailing how to do this, but as usual with this type of thing, you really need to do some digging.

By default, Firefox’s Browser Console only logs error events. You can change this to log any and all console.log() calls from your addon (and any other addon) by doing this:

  1. Go to about:config
  2. Search for the key extensions.sdk.console.logLevel
  3. If it exists, set it to “info”, otherwise add a new string with the key extensions.sdk.console.logLevel and the value “info”

Boom, all your addon’s log calls now show up in the browser console.

I use curl to test out my HTTP libraries all the time. Recently, I ran into an issue where when uploading a file (25mb) from curl in the command line to my common lisp app server, only about half the data showed up (12.5mb). I was doing this:

curl -H 'Authorization: ...' -H 'Transfer-Encoding: chunked' --data-binary @/media/large_vid.mov

Naturally, I assumed the issue was with my libraries. It could be the cl-async library dropping packets, it could be the HTTP parser having issues, or it could be the app server itself. I mean, it has to be one of those. Curl has been around for ages, and there’s no way it would just drop data. So I spent days tearing my hair out.

Finally, I ran curl with the --trace option and looked at the data. It provides a hex dump of everything it sends. It’s not formatted perfectly, but with vim’s block select and a few handy macros, I was able to get the length of the data being sent: 12.5mb. That’s right, curl was defying me. There was no error in my code at all.

I did a search online for curl not sending the full file data when using --data-binary. Nothing. So I looked over my options and found -T which looks surprisingly similar to --data-binary with the @ modifier. I tried:

curl -H 'Authorization: ...' -H 'Transfer-Encoding: chunked' -T /media/large_vid.mov

All 25mb came through (every byte accounted for).

Conclusion

If you’re uploading files, use -T /path/to/file instead of --data-binary @/path/to/file. Note that -d/-D were also “broken.”
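When you suspect a tool is eating your data, the byte-accounting approach above is worth generalizing: measure the file locally first so you have a ground truth, then compare it against what the tool’s trace says it sent. A rough sketch (the upload URL is a placeholder):

```shell
# Create a known-size payload so there's a ground truth to compare
# curl's --trace output against.
dd if=/dev/zero of=/tmp/payload.bin bs=1024 count=100 2>/dev/null
size=$(wc -c < /tmp/payload.bin)
echo "local size: ${size} bytes"

# Then upload with tracing enabled and inspect dump.txt: sum the sizes of
# the "=> Send data" sections and compare them to the local size.
# curl --trace dump.txt -H 'Transfer-Encoding: chunked' \
#      -T /tmp/payload.bin http://example.com/upload
```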

Hi FORKS. Tuesday I announced my new app, Turtl for Chrome (and soon Firefox). Turtl is a private Evernote alternative. It uses 256-bit AES encryption to obscure your notes/bookmarks before they leave the browser. What this means is that even if your data is intercepted on the way to the server or if the server itself is compromised, your data remains private.

Even with all of Turtl’s privacy, it’s still easy to share boards with friends and colleagues: idea boards, todo lists, youtube playlists, etc. With Turtl, only you and the people you share with can see your data. Not even the guys running the servers can see it…it’s just gibberish without the key that you hold.

One more thing: Turtl’s server and clients are open-source under the GPLv3 license, meaning anyone can review the code or use it for themselves. This means that Turtl can never be secretly compromised by the prying hands of hackers or government gag orders. The world sees everything we do.

So check out Turtl and start keeping track of your life’s data. If you want to keep up to date, follow Turtl on Twitter.

UPDATE: Since RethinkDB 1.4, this post is pretty irrelevant. You can now just do:

./configure --allow-fetch
make ALLOW_WARNINGS=1

This will build RethinkDB without a hitch.

UPDATE: Check out Samuel Hughes’ comment on compiling in Slack, which may make some of the below process simpler. Specifically, the section about editing the Makefile to not use static libraries (apparently you can pass RECOMMEND_STATIC=0 to the make process to do this instead).

So I’ve had a theoretical boner for RethinkDB ever since reading about it. I decided to try and give it a go, but have had problems compiling. I’m going to try and give an overview of how to get it running. These instructions are aimed at people who don’t have a linux with the targeted packaging systems that RethinkDB currently supports (in other words, you’re stuck with compiling it yourself). The build process is slightly annoying (which is why I’m writing this guide). Slava from the RethinkDB team told me that they are working on a new build system, so hopefully we’ll soon be able to just do a make.

Installing V8

See the instructions for building and installing V8. They are pretty simple, but I believe you need Python (since they use GYP for the build):

cd v8
make dependencies
make native
# do a manual installation
sudo mkdir -p /usr/local/v8/include /usr/local/v8/lib
sudo cp include/* /usr/local/v8/include
sudo cp out/native/lib.target/libv8.so /usr/local/v8/lib

Done (yeah, I know, my syntax highlighting is annoying).

Building RethinkDB

This was a bit trickier, but hey I’ve compiled alpha versions of Compiz on top of Slack before, this should be a cakewalk. It took some Makefile tweaking to get it running, so here’s how I did it. Note that you’ll need Python >= 2.7 to do the full make process (or else you’ll get “AttributeError: ‘module’ object has no attribute ‘check_output’”). Slack 13.1 comes with 2.6.x so I had to compile it. Guess I need to upgrade soon. So grab the latest source:

git clone git://github.com/rethinkdb/rethinkdb.git
cd rethinkdb

First things first, src/Makefile adds the option -Werror to the build. This is great, but causes the build to fail when it includes v8.h since there are unused variables. So for now, we’ll have to trust them that there are no other warnings/errors and remove this from the Makefile. So open src/Makefile in your favorite editor and change:

RT_CXXFLAGS+=-Wall -Wextra -Werror -Wnon-virtual-dtor

to

RT_CXXFLAGS+=-Wall -Wextra -Wnon-virtual-dtor

Also, the Makefile tries to use static versions of some of the boost libs, but I only have dynamic versions on my system. So let’s comment out that line. Find this and comment it out (UPDATE: per Samuel Hughes’ comment below, you can skip this step and pass RECOMMEND_STATIC=0 to the make command instead of hacking up the Makefile):

STATIC_RECOMMENDS_INDIFFERENT:=boost_serialization boost_program_options

becomes

#STATIC_RECOMMENDS_INDIFFERENT:=boost_serialization boost_program_options

Now let’s write a “simple” build script that wraps the make process:

#!/bin/bash

BIN="`pwd -P`/support/usr/bin"
export PATH=$PATH:$BIN
export RT_CXXFLAGS="-I./include -I./src -I/usr/local/v8/include"
export RT_LDFLAGS="-L/usr/local/v8/lib ../support/usr/lib/libprotobuf.a ../support/usr/lib/libtcmalloc_minimal.a -lboost_program_options -lboost_serialization"
export STATIC_LIBRARY_PATHS=$RT_LDFLAGS
make \
        VERBOSE=1 \
        FETCH_INTERNAL_TOOLS=1 \
        RECOMMEND_STATIC=0 \
        STATIC_LIBRARY_PATHS="$RT_LDFLAGS"

Save it to compile.sh (or just c like I do because life is just too short to be typing “ompile.sh” all over the place) and run it.

That should do it! If all the post-build stuff (like installing js modules and crap) works fine, you should be able to start the db like so:

LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/v8/lib/ ./build/release/rethinkdb

If that’s too cumbersome for you, you can either link /usr/local/v8/lib/libv8.so to /usr/local/lib or add /usr/local/v8/lib to your /etc/ld.so.conf file (and, of course, run ldconfig as root) and run rethinkdb freely, without the worries of library paths.
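The ld.so.conf route looks like this (run as root; paths assume the manual V8 install from earlier):

```shell
# Teach the dynamic linker where libv8.so lives, then refresh its cache.
echo '/usr/local/v8/lib' >> /etc/ld.so.conf
ldconfig

# Now rethinkdb runs without LD_LIBRARY_PATH gymnastics:
./build/release/rethinkdb
```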

I recently installed FreeBSD 9.1-RELEASE on a VirtualBox VM to do some cl-async testing. I wanted to get Xorg running so I could edit code at a more “comfortable” resolution. I was able to get Xorg running fairly easily just by installing Xfce from /usr/ports.

However, upon starting Xorg, my keyboard and mouse would not work. I tried many things: following the steps in the handbook, enabling/disabling hald, reconfiguring Xorg, etc. No luck. My Xorg.0.log was telling me that it couldn’t load the kbd/mouse drivers. After snooping around some forums, I found the solution:

  • Install the x11-drivers/xf86-input-keyboard port
  • Install the x11-drivers/xf86-input-mouse port

After doing this, all was right with the world. Just to clarify, I am using dbus/hald and more or less using the default configuration that Xorg -configure gave me.
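For reference, installing those two ports is the usual FreeBSD ports dance (run as root from the ports tree):

```shell
# Build and install the Xorg keyboard and mouse input drivers from ports.
cd /usr/ports/x11-drivers/xf86-input-keyboard
make install clean
cd /usr/ports/x11-drivers/xf86-input-mouse
make install clean
# Restart X afterwards so it picks up the new drivers.
```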

A while ago, I released cl-async, a library for non-blocking programming in Common Lisp. I’ve been updating it a lot over the past month or two to add features and fix bugs, and it’s really coming along.

My goal for this project is to create a general-purpose library for asynchronous programming in lisp. I think I have achieved this. With the finishing of the futures implementation, not only is the library stable, but there is now a platform to build drivers on top of. This will be my next focal point over the next few months.

There are a few reasons I decided to build something new. Here’s an overview of the non-blocking libraries I could find:

  • IOLib – An IO library for lisp that has a built-in event loop, only works on *nix.
  • Hinge – General purpose, non-blocking library. Only works on *nix, requires libev and ZeroMQ.
  • Conserv – A nice layer on top of IOLib (so once again, only runs on *nix). Includes TCP client/server and HTTP server implementations. Very nice.
  • teepeedee2 – A non-blocking, performant HTTP server written on top of IOLib.

I created cl-async because of all the available libraries, they are either non-portable, not general enough, have too many dependencies, or a combination of all three. I wanted a library that worked on Linux and Windows. I wanted a portable base to start from, and I also wanted tools to help make drivers.

Keeping all this in mind, I created bindings for libevent2 and built cl-async on top of them. There were many good reasons for choosing libevent2 over other libraries, such as libev and libuv (the backend for Node.js). Libuv would have been my first choice because it supports IOCP in Windows (libevent does not), however wrapping it in CFFI was like getting a screaming toddler to see the logic behind your decision to put them to bed. It could have maybe happened if I’d written a compatibility layer in C, but I wanted to have a maximum of 1 (one) dependency. Libevent2 won. It’s fast, portable, easy to wrap in CFFI, and on top of that, has a lot of really nice features like an HTTP client/server, TCP buffering, DNS, etc etc etc. The list goes on. That means less programming for me.

Like I mentioned, my next goal is to build drivers. I’ve already built a few, but I don’t consider them stable enough to release yet. Drivers are the elephant in the room. Anybody can implement non-blocking IO for lisp, but the real challenge is converting everything that talks over TCP/HTTP to be async. If lisp supported coroutines, this would be trivial, but alas, we’re stuck with futures and the lovely syntax they afford.
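To give a flavor of what “wrapping async into futures” means, here’s a sketch of a hypothetical driver function. The names used here (`make-future`, `finish`, `attach`, `tcp-send`, `parse-reply`) illustrate the pattern and are not necessarily cl-async’s exact exported API:

```lisp
;; Hypothetical driver wrapper: turn a callback-based request into a future
;; the caller can attach to. All names are illustrative; consult the
;; cl-async docs for the real API.
(defun redis-get (sock key)
  (let ((future (make-future)))
    ;; fire off the request; when data comes back, resolve the future
    (tcp-send sock (format nil "GET ~a~c~c" key #\return #\linefeed)
              (lambda (sock data)
                (declare (ignore sock))
                (finish future (parse-reply data))))
    future))

;; The caller attaches a callback to the returned future:
(attach (redis-get *sock* "mykey")
        (lambda (value)
          (format t "got: ~a~%" value)))
```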

I’m going to start with drivers I use every day: beanstalk, redis, cl-mongo, drakma, zs3, and cl-smtp. These are the packages we use at work in our queue processing system (now threaded, soon to be evented + threaded). Once a few of these are done, I’ll update the cl-async drivers page with best practices for building drivers (aka wrapping async into futures). Then I will take over the world.

Another goal I have is to build a real HTTP server on top of the bare http-server implementation provided by cl-async. This will include nice syntax around routing (allowing REST interfaces), static file serving, etc.

Cl-async is still a work in progress, but it’s starting to become stabilized (both in lack of bugs and the API itself), so check out the docs or the github project and give it a shot. All you need is a lisp and libevent =].

We’re building a queuing system for Musio written in common lisp. To be accurate, we already built a queuing system in common lisp, and I recently needed to add a worker to it that communicates with MongoDB via cl-mongo. Each worker spawns four worker threads, each thread grabbing jobs from beanstalkd via cl-beanstalk. During my testing, each worker was updating a Mongo collection with some values scraped from our site. However, after a few seconds of processing jobs, the worker threads begin to spit out USOCKET errors and eventually Clozure CL enters its debugger of death (ie, lisp’s version of a segfault). SBCL didn’t fare much better, either.

The way cl-mongo’s connections work is that it has a global hash table that holds connections: cl-mongo::*mongo-registry*. When the threads are all running and communicating with MongoDB, they are using the same hash table without any inherent locking or synchronization. There are a few options to fix this. You can implement a connection pool that supports access from multiple threads (complicated), you can give each thread its own connection and force each thread to use its own connection when communicating, or you can take advantage of special variables in lisp (the easiest, simplest, and most elegant IMO). Let’s check out the last option.

Although the threading behavior isn’t in the CL spec, just about all implementations allow you to have global thread-local variables by using (defparameter) or (defvar), both of which create special variables (read: dynamic variables, as opposed to lexical). Luckily, cl-mongo uses defvar to create *mongo-registry*. This means in our worker, we can re-bind this variable above the top level loop using (let), and all subsequent calls to MongoDB will use our new thread-local version of *mongo-registry* instead of the global one that all the threads were bumping into each other using:

;; Main worker loop, using global *mongo-registry* (broken)
(defun start-worker ()
  (loop
    (let ((job (get-job)))
      (let ((results (process-job job)))
        ;; this uses the global registry. not good if running multiple threads.
        (with-mongo-connection (:db "musio")
          (db.save "scraped" results))))))

New version:

;; Replace *mongo-registry* above worker loop, creating a local version of the
;; registry for this thread.
(defun start-worker ()
  ;; setting to any value via let will re-create the variable as a local thread
  ;; variable. nil will do just fine.
  (let ((cl-mongo::*mongo-registry* nil))
    (loop
      (let ((job (get-job)))
        (let ((results (process-job job)))
          ;; with-mongo-connection now uses the local registry, which stops the
          ;; threads from touching each other.
          (with-mongo-connection (:db "musio")
            (db.save "scraped" results)))))))

BOOM everything works great after this change, and it was only a one line change. It may not be as efficient as connection pooling, but that’s a lot more prone to strange errors and synchronization issues than just segregating the connections from each other and calling it a day. One issue: *mongo-registry* is not exported by cl-mongo, which is why we access it via cl-mongo::*mongo-registry* (notice the double colon instead of single). This means in future versions, the variable name may change, breaking our above code. So, don’t update cl-mongo without testing. Not hard.
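The thread-local behavior of special bindings can be demonstrated on its own. A small sketch using bordeaux-threads (assumed to be loaded):

```lisp
;; defvar creates a special (dynamic) variable with a global binding.
(defvar *registry* :global)

;; A (let) inside a thread re-binds it for that thread only.
(let ((thread (bt:make-thread
                (lambda ()
                  (let ((*registry* :thread-local))
                    ;; visible only within this thread's dynamic extent
                    (assert (eq *registry* :thread-local)))))))
  (bt:join-thread thread))

;; The global binding seen by the main thread was never touched.
(assert (eq *registry* :global))
```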

Hopefully this helps a few people out, let me know if you have better solutions to this issue!