My thoughts on programming and technology

CORS and Rack Middleware

08.09.2012 at 09:00 AM in Javascript, Ruby, Rack, Web development, Software

I was facing an issue at work today. My JavaScript needed to talk to a server. That would be fine if the server weren't on another domain. Unfortunately, things weren't as simple as just using jQuery.

The Problem and the Solution

I've met a decent number of people (mostly those with little experience with HTTP outside of web programming) who have been taught that cross-domain requests from JS are not possible. Some more initiated folks are aware of the protocol that lets us turn on cross-domain requests, but think that it requires a lot of effort. I, on the other hand, know that CORS (Cross Origin Resource Sharing) is incredibly easy (at a very basic level, it's one line in an Apache config file). I didn't want to take the easy approach, though. First, our resources were semi-private, so allowing all domains (that is, sending the header Access-Control-Allow-Origin: *) was out of the question. Second, we have people running development servers as well as testing and staging servers, so changing an Apache config file was not very scalable or convenient. This was particularly true because any non-standard ports had to be spelled out in the header.
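For reference, the wide-open version really is one line in an Apache config (assuming mod_headers is enabled) -- a sketch of exactly the approach we couldn't use, since it allows every origin:

```apache
# Let any origin read responses from this server
Header set Access-Control-Allow-Origin "*"
```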

I decided to solve my problem with Rack middleware. We're using Rails, so I considered doing the following:

class ApplicationController < ActionController::Base

  before_filter :allow_cors

  def allow_cors
    response.headers['Access-Control-Allow-Origin'] = ALLOWED_DOMAINS
  end

  # ...
end

This seemed strange, though. I wasn't comfortable with the amount of overhead that this might incur, particularly since it would be called for every request. I also thought that something this low-level and unrelated to the application logic didn't really belong in my controllers. So I decided to build some Rack middleware instead.

Rack Middleware

Rack is fairly poorly documented, unfortunately. The API and conventions for writing middleware are no exception, and there is very little formal documentation on how to do anything nontrivial with Rack middleware. Fortunately, there is a GitHub repo, rack-contrib, that is full of real-world Rack middlewares. I read the (very readable) source for a few of those examples and knew exactly what to do. This is what I came up with:

First, lib/cors_middleware.rb:

# Add Access-Control-Allow-Origin headers to every response
require 'yaml'

class CorsMiddleware
  def initialize(app, config_file)
    @@allowed_domains ||= YAML.load_file(config_file)
    @app = app
  end

  def call(env)
    status, headers, body = @app.call(env)
    # Check our list of patterns and see if any match our Origin header.
    # If so, set Access-Control-Allow-Origin to the request's Origin.
    origin = env['HTTP_ORIGIN']
    if origin && @@allowed_domains.any? { |pattern| File.fnmatch?(pattern, origin) }
      headers['Access-Control-Allow-Origin'] = origin
    end
    [status, headers, body]
  end
end

Then, config/cors.yml:

# Allowed domains for CORS. Shell style globbing is supported.
- http://localhost:*

And config/application.rb:

require 'rails/all'
require_relative '../lib/cors_middleware'

# ...

config.middleware.use CorsMiddleware, "#{Rails.root}/config/cors.yml"

I restarted the server and everything worked magnificently.

A Few Final Notes

Some final notes because I found these things to be poorly documented on the internet:

  • The header Access-Control-Allow-Origin does not support any form of globbing except a single wildcard (to allow all domains). This is very unfortunate, but can be solved with some simple code (like my File.fnmatch calls earlier). Remember, * is the only valid value with a star in it.
  • Rack HTTP request headers are kept in the env hash, but mangled: letters are upper-cased, hyphens become underscores, and an HTTP_ prefix is added. For example, the header X-Requested-With is env['HTTP_X_REQUESTED_WITH']. (Content-Type and Content-Length are exceptions: they appear without the prefix, as env['CONTENT_TYPE'] and env['CONTENT_LENGTH'].)
  • CORS is incredibly easy to implement, and everyone exposing public APIs that don't require authentication for some or all of the data should enable it.
  • Rack is a very powerful and efficient tool for doing low-level things with HTTP in any Ruby web app (particularly in Rails where everything seems to have a good bit of overhead)
  • The lack of examples in IETF/W3C's RFCs is incredibly frustrating. Yes, I can read Backus-Naur form, but I'd rather not when the behavior I want could just be modeled for me, particularly when the BNF is loaded with random escape sequences to the point where I can't tell whether they're intentional.
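Since the header itself supports no globbing, the matching has to happen in your own code. Here's the idea from my middleware distilled into a few lines of Python, using the standard library's fnmatch -- the patterns are illustrative (the example.com one is made up), not our real config:

```python
from fnmatch import fnmatch

# Shell-style glob patterns, matched in app code because the
# Access-Control-Allow-Origin header itself supports no globbing.
ALLOWED = ['http://localhost:*', 'https://*.example.com']

def allowed_origin(origin):
    """Return the origin to echo back in the header, or None if disallowed."""
    if origin and any(fnmatch(origin, pattern) for pattern in ALLOWED):
        return origin
    return None
```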

Design for Developers

07.10.2012 at 05:00 PM in Web development, Software

I have no idea what I'm doing. Really.

Disclaimer: some of the things that I suggest in this post will make people who know better cringe. I'm not a designer and this post isn't intended for designers; it's intended for developers with little design aptitude and time who still want to produce decent-looking websites. Also, as a matter of warning to more design-savvy developers: this post is intended for the most design-oblivious of them all (specifically, people like me). Read ahead with caution.

If you're like me, then you build a fair number of websites. If you're really like me, most of them look like complete and utter garbage. However, you're still a discerning consumer. You can tell why a site like Tumblr looks good and a site like Craigslist does not. The answer in that specific case might just be 15 years of improvements in web technology, but more generally the difference is that a lot of effort clearly went into the design of Tumblr, while Craigslist is almost entirely unstyled.

As a discerning consumer of web design and as a producer of web sites, I take a lot of pride in my work and I'm always concerned about its quality. I know how to produce a good site from a technological perspective -- I can make sites that load fast, sip server resources, and scale well -- but I don't know much about making a site that's graphically appealing. Fortunately, quite a few people do and they've taken the time to share their work in a format that developers can use.

Now, to the meat of the post: my goal here is to share some tools I've come across and used extensively to create websites that don't look terrible or antiquated. Without further ado, here's the list:

Twitter Bootstrap (don't judge me just yet)

This was a bold first pick. Bootstrap is controversial and a lot of designers hate it. I'm the first to admit: Bootstrap has made it too easy to put absolutely no effort into the design of your site, which has led to an army of sites that look like exact clones of its sample site:

Looks familiar, doesn't it?

I'm here to tell you that, with some customization and willpower, Bootstrap can give you a lot of scaffolding to produce a modern-looking site without having to muck around with too much CSS. Here are a few things about Bootstrap that the developer in me loves:

  • It prevents me from reinventing the wheel (I'm sorry, but no one should have to implement a CSS grid layout, ever. It's been done a million times before)
  • It gives you modern-looking inputs and form controls that look more or less the same across different OSes and browsers
  • It works on the major browsers
  • It comes with a lot of sizing stuff built-in for text, form controls, buttons, etc.
  • It provides a nice, clean typographical look that's a serious upgrade to anything's default
  • It's very well documented and maintained

Bootstrap is the jQuery of CSS for me. It does a lot of stuff that I'd just end up writing on my own (except probably in a less correct and less portable way). It's the lazy (or productive, depending on how you want to spin it) programmer's best friend.

Now, a few caveats:

  • Please, for the love of God, do not use their fixed, black nav bar with light text. I can't even remember if it ever looked good, but today it's a cliché that will make pretty much anyone in the know lose respect for you. It's just incredibly overused.
  • Don't use their default colors (more on how to pick these later). If I see another site with black nav on a white background with dark text and blue buttons (and lots of grey wells) I will kill a pandawhale.

In short, Bootstrap is great for prototyping and rapid development, and is a rock-solid foundation for the front-end design of a site. It is so overused, though, that you must be extremely deft in making use of it if you don't want to anger the design gods.

Adobe Kuler

I'm not actually sure what Kuler is intended to be. I think it's a color-theme sharing site for designers. I do certainly know what I use it for: I use it to pick colors that go together and make my sites look distinctive.

I'll be honest: I don't know the least thing about color theory. I don't even know if knowing something about color theory would help me do what Kuler does for me. All I know is that when I see a group of colors together, I can tell if they look good. In that sense, Kuler plays to my strengths by showing some vibrant and distinctive sets of colors that go well together and letting me pick. When I've chosen something that looks decent to me, I take each color's hex code and make a variable for it in my Sass stylesheet. Then, I chug away building a site that looks distinctive.
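As a sketch, that workflow ends up looking something like this in Sass (the hex values here are placeholders, not a real Kuler theme):

```scss
// Placeholder palette; swap in the hex codes from your chosen Kuler theme
$primary:   #1b676b;
$secondary: #519548;
$accent:    #88c425;

.navbar      { background-color: $primary; }
.btn-primary { background-color: $secondary; }
a            { color: $accent; }
```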

Note: this is how you should pick colors to customize your Bootstrap site.

Subtle Patterns

One of the habits of good-looking websites that I've noticed (perhaps only after being introduced to Subtle Patterns) is that a lot of them have subtle textures as background images. I have no sensible description for why this is, it just makes sites look better. Subtle Patterns is a collection of unobtrusive textures that you can use as background images for your site to immediately up its design panache.

My workflow for Subtle Patterns is typically to look at every pattern I think might work and try it on the site. Once I've found something that looks good, I stick to it.

Google Web Fonts

Web Fonts is basically a gallery of free fonts designed for the web. My process with this is similar to my Subtle Patterns process: go through fonts, find ones that look suitable, try them out, and pick which one's best.
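A sketch of what using one looks like, with Open Sans standing in for whatever font you end up picking (the family name in the URL comes from the font's page on Web Fonts):

```html
<!-- In the page head -->
<link rel="stylesheet" href="http://fonts.googleapis.com/css?family=Open+Sans">
<style>
  body { font-family: 'Open Sans', sans-serif; }
</style>
```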


Use these tools with judgement and grace. Don't expect to replace a professional designer in every case -- there are some situations where you'll have to bite the bullet and hire someone. But when you're just looking to make a decent-looking minimum viable product or building a personal project that you don't want to be ashamed of, these tools are perfect for you.


What I've Been Doing Since I Last Posted

02.23.2012 at 02:00 PM in Personal

It's been a pretty long time since I last posted on my blog. 4 months, in fact. I suppose I owe an explanation.


I've been at school. The University of Pennsylvania, to be exact. I've been studying CS, and I've been having a blast. However, school keeps me pretty busy both through coursework and related activities.

I've been working hard to get requirements and prerequisites out of the way as well as do the introductory CS track that we have here. So far I've been doing well and enjoying it a lot. I liked one class, CIS 120, so much last semester that I'm TAing it this semester. I think that the method that we're using to teach CS and programming to people with some, but not a lot, of exposure to programming paradigms and techniques is brilliant (that's likely the subject of another post).

I've also been working on web dev for the Daily Pennsylvanian, our independent student newspaper. I've transitioned a bit from writing code to managing people and designing whole applications, but I still get to do some development. It's been an interesting experience to take on more of a CTO role at a mid-sized corporation, though I'm not sure if it's something I could do for the rest of my life.


I've been working on and off at my old job from before I went to school. I think that, right now, I'd have to officially classify myself as no longer an employee, but I might return.

I've added TAing this semester as a job, and I find that I enjoy that a lot, even though it doesn't pay very well. It's great to make an impact on developing CS students.

Last, I'm looking for a summer internship. I've done interviews with some places that everyone has definitely heard of and with some places that no one has heard of. Some of them have gone well, some not so much. That's about as much detail as I can give now, I'll likely post when I've made a decision.

Side Projects

This part makes me the saddest. I've really neglected a lot of the side projects that I used to work on before I went to school. I've resolved to start revisiting these projects (including my blog) in my spare time. The last thing I want to lose is my curiosity, and I'm terribly afraid that I'll get bored with CS. So I'm working to revisit my old projects.


That's about it. Hopefully, my blog will return to its busy old state. Thanks for reading and thanks for sticking around.


Host Switch

10.20.2011 at 04:00 PM in Hosting

I've switched hosting for this site from a hosting provider that shall not be named to Amazon S3. I switched because the server that my host had serving this site was compromised; in the process, all of my data was lost, and a few other bad things probably happened that I don't want to contemplate. If you're interested in knowing how I got set up on S3: I used a post from the AWS blog and a rudimentary knowledge of how to alter DNS. Here's hoping that there's not another massive amount of downtime for AWS in the near future.


Why I Learned to Love Make

07.07.2011 at 12:00 PM in Software

I'll be honest -- I never really took the time to learn any formal build system until recently. I hadn't worked on many major projects and at my job, there were only a few people working on any project, so there was no reason to learn anything fancy like Make or Maven. I just wrote shell scripts and that worked fine. But now that I've looked into Make, it doesn't seem like I'll ever go back to plain shell scripts.

In this post, I'm going to look at a few of the things that make Make infinitely better than the primitive ways of doing things -- shell scripts or manual builds -- and explain why Make has improved my life.

My initial apprehensions

I had a long list of reasons why I'd never tried learning Make before. Here's a fairly comprehensive version:

  • I figured my projects were too small to require a specialized build system or dependency tracking
  • I thought shell scripts were just okay
  • I like writing bash scripts (Make is actually very similar to writing bash scripts, but with some domain-specific abstraction)
  • I'd heard a lot of disrespect for Make as a system and a language
  • I didn't need to work on any existing Makefiles

Nevertheless, I gave it a try when I had some free time, and it was a cathartic experience. Outlined throughout the rest of the article are a few reasons why I'll never go back to writing scripts in places where Make is king.

Argument parsing

Most of the time, when I first write a shell script to build a project, it starts off as a very simple, linear script. But then, I need to do something else (e.g. generate just one file, or generate a different distribution format, or make a patch), and the script starts to branch. Now, I have to start writing branches on the script to take different actions with different verbs or commands (e.g. installer, executable, and so on). I also have to introduce some additional complexity due to error handling -- I want to gracefully handle cases when the given verb is not a recognized command for the script. Basically, it gets very complicated fast.

In a general-purpose language like Python, there are libraries to abstract over this and make things easier (e.g. argparse or optparse). But bash has no such functionality built in. Enter Make: Make does all of the argument parsing for you. You just define targets, and error reporting and branching are handled for you. Here's a comparison (note that this Makefile is not valid as shown -- I had a tough time formatting tabs, so I used 8 spaces instead):

# Command should be either update or installer

if [ "$command" = 'installer' ]; then
    # do some stuff
elif [ "$command" = 'update' ]; then
    # do some stuff
else
    echo "Command $command not recognized."
fi

And the Make version:

installer: dependency1 dependency2 # And so on with the dependencies
        # Some stuff

update: dependency1 dependency2
        # Some other stuff

You tell me which one is better. I say having the framework already laid is miles better.


Modularity

Almost all build shell scripts are modular -- that is, what they do happens in discrete steps. At the most basic level, these steps are setup, build, and cleanup, and the build step is often made up of several components as well. In a shell script, either you perform all of the actions or none at all (unless you use the command-branching strategy I outlined above, but we've already discussed why that's bad). Make, however, is modular by nature: because a Makefile, when used as intended, is composed of small targets that combine into larger, more general targets, it's incredibly simple to build atomic portions of a project with no fussing around in your shell script. You simply specify the target that you want to build.

Make is smarter than you or your shell scripts

On a large project, builds can take a long time. It's important to build only what you need to, lest you waste valuable time and resources. Again, unless a shell script uses a branching strategy or a complex scheme to check when files were last modified (at which point you'd be reimplementing Make by hand, which is obviously unnecessary), you can't prevent a build shell script from doing unnecessary work. Because Make understands the dependencies between your files and compares modification times to determine whether any particular target needs to be rebuilt, it avoids needless rebuilding of unchanged targets (with a well-written Makefile, of course). Make is smarter and more general than you or your shell script, so you should take advantage of the potential time and resource savings.
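To make the mechanism concrete, here's the freshness check Make applies to each target, sketched in Python -- my own paraphrase of the rule, not Make's actual source:

```python
import os

def needs_rebuild(target, prerequisites):
    """Mirror Make's staleness rule: rebuild if the target is missing
    or any prerequisite has been modified more recently than it."""
    if not os.path.exists(target):
        return True
    target_mtime = os.path.getmtime(target)
    return any(os.path.getmtime(p) > target_mtime for p in prerequisites)
```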


DRY

Make, although it probably predates DRY (Don't Repeat Yourself), epitomizes it. You're encouraged, and given plenty of tools, to generalize targets so that one target can build all similar targets. You're encouraged to use variables, so that if a command or an option changes (e.g. you switch compilers, or want to use a different version of an interpreter), the change only has to be made in one place. In fact, I'd go so far as to say that in a well-written, general Makefile, any single change should require modifications in only one place.
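For example, a variable plus a pattern rule collapses what would otherwise be one near-identical recipe per object file -- a generic sketch, and note the recipe line must be indented with a tab:

```make
CC     = gcc
CFLAGS = -Wall -O2

# One pattern rule builds every .o; switching compilers is a one-line change
%.o: %.c
	$(CC) $(CFLAGS) -c $< -o $@
```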

Ease of use and standards

It's also important to note that Make is something of a standard in software development, particularly among the C and Unix communities. Pick a few major C projects (e.g. Python, Ruby, OpenSSH), and see how many use Makefiles for building. That means that when people need to change your build procedure, they'll know where to look.

Additionally, Make is a standard for users. Most somewhat-savvy users know that in order to install most software from source, you run make and then sudo make install. That convenience is a tremendous boon to your user base.

I'm not advocating conformism for conformity's sake, but conforming to standards dramatically increases your software's ease of use for both users and developers. Happy users become contributors and happy developers contribute more, and that's a good thing.


Automation

Obviously, you can automate runs of your shell scripts as well, but a lot more can go wrong with an ad hoc script than with an automated run of Make.

Conclusion and Disclaimer

This post is not an argument for everyone to use Make. It's an argument for everyone to use a formal build system of some form with particular emphasis on the benefits of Make. Most of the stuff I praised Make for can also be said for Rake, Ant, Maven, or any other popular build system. My goal is to convert people away from shell scripts/manual building for any given project in favor of something that saves time and abstracts over a lot of the difficulties of writing shell scripts.


Analyzing the LulzSec Password Leak

06.16.2011 at 11:30 AM in security

Maybe there's something wrong with me, but when I first heard about LulzSec releasing 62,000 passwords, I was actually pretty excited. I've always wanted to do a little analysis on a big leak like this, and now I finally get to do one.

So, as a brief overview, I'm going to take a look at a few different things: password frequency, password length, and password complexity, and see how the people in the leak were doing security-wise.

Getting the passwords into one place

The original text document was not perfectly formed. It took a bit of tweaking to get just the passwords. First off, there was some chatter at the top that I had to remove. Also, part of the document was formatted password | email |, part was formatted password | email, and another part was formatted number | password | email, so I had to change that as well. I first replaced all instances of | (a pipe with a space on each side) with a single space, to make the input more digestible for awk. Then I did this:

$ awk '$1 ~ /.+@.+\..+/ { print $2 } $2 ~ /.+@.+\..+/ { print $1 } $3 ~/.+@.+\..+/ { print $2 }' ~/passwords.txt > ~/justpasswords.txt

Basically, I decided which field to print based on which field looked like an email address (some text, followed by an @, followed by more text, then a ., then more text); the password is the field next to it. I lost about 13 passwords in the process, but that's not too big a deal. I also got one or two lines in justpasswords.txt that were actually emails, but that's okay as well.
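The same field-picking logic as a Python sketch, in case awk isn't your thing (the helper name is my own, not part of the original pipeline; it assumes the pipes have already been collapsed to single spaces):

```python
import re

# "Looks like an email": text, @, text, dot, text -- same pattern as the awk
EMAIL = re.compile(r'.+@.+\..+')

def extract_password(line):
    """Mirror the awk rules: find the email field, return its neighbor."""
    fields = line.split()
    if len(fields) >= 3 and EMAIL.match(fields[2]):
        return fields[1]  # format: number password email
    if len(fields) >= 2 and EMAIL.match(fields[1]):
        return fields[0]  # format: password email
    if len(fields) >= 2 and EMAIL.match(fields[0]):
        return fields[1]  # format: email password
    return None
```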

Anyway, there I had it: just the passwords. It was time to do some analyzing.

Looking at password frequency

First, I decided to figure out what the most common passwords were in the set. I tested this using the following Python:

#!/usr/bin/env python

freqs = {}
with open('justpasswords.txt') as f:
    for password in f:
        password = password.strip()
        try:
            freqs[password] += 1
        except KeyError:
            freqs[password] = 1
# Get the top 25 most common passwords.
for password, freq in sorted(freqs.items(), key=lambda x: x[1], reverse=True)[:25]:
    print "%s was used %d times" % (password, freq)

The output came out like this:

123456 was used 558 times
123456789 was used 181 times
password was used 132 times
romance was used 88 times
102030 was used 68 times
mystery was used 67 times
tigger was used 62 times
shadow was used 61 times
123 was used 55 times
ajcuivd289 was used 55 times
bookworm was used 54 times
dragon was used 53 times
sunshine was used 53 times
reader was used 52 times
12345 was used 50 times
purple was used 50 times
maggie was used 48 times
reading was used 47 times
 was used 43 times
1234 was used 42 times
vampire was used 34 times
peanut was used 34 times
angels was used 34 times
booklover was used 33 times
michael was used 32 times

Okay, so there's some usual suspects in there. No surprise that 1234, 12345, 123456, 123456789 or password are there. But some of them are a bit weird -- like ajcuivd289. Why does that show up? A quick grep of the original password file reveals that each of the emails belonging to this particular password is a generic female (mostly Anglo or Hispanic) name at either a variation on gmail.com, mail15.com, or mail333.com. I suspect we can chalk this up to one person or entity having 55 accounts.

Also interesting was that a blank password was used 43 times. This probably has to do with errors in my earlier processing of the data.

Basically, this reveals what we already know: a lot of passwords are very common and based on dictionary words. But only about 4,000 of the 62,000 passwords were ones that appeared 15 or more times in the set, so perhaps these passwords aren't as weak to dictionary attacks as we thought.

Password length

Next, let's look at how long these passwords are on average. Here's another Python script to calculate that:

with open("justpasswords.txt") as f:
    lines = f.readlines()
    average = sum(len(line.strip()) for line in lines if line) / float(len(lines))
    print "Average password length is", average

The average length of a password was 7.63 characters. That's not terrible, but it's not good either. Obviously, whatever source these passwords came from was not doing a very good job of making sure that their users used good passwords.

Password complexity

Finally, let's test to see how complex the passwords are. I tested for lowercase, uppercase, digits, and symbols using this script:

lower, upper, digits, symbols, total = 0, 0, 0, 0, 0.0
with open("justpasswords.txt") as f:
    for password in f:
        total += 1
        password = password.strip()
        if any([str.islower(x) for x in password]):
            lower += 1
        if any([str.isupper(x) for x in password]):
            upper += 1
        if any([str.isdigit(x) for x in password]):
            digits += 1
        if any([not str.isalnum(x) for x in password]):
            symbols += 1
    avg_lower, avg_upper, avg_digits, avg_symbols = \
               lower / total, upper / total, digits / total, symbols / total
    print "Percentage with lowercase: %2.3f%%" % (avg_lower * 100,)
    print "Percentage with uppercase: %2.3f%%" % (avg_upper * 100,)
    print "Percentage with digits: %2.3f%%" % (avg_digits * 100,)
    print "Percentage with symbols: %2.3f%%" % (avg_symbols * 100,)

We got this output:

Percentage with lowercase: 79.613%
Percentage with uppercase: 2.010%
Percentage with digits: 55.831%
Percentage with symbols: 1.565%

So, basically, most of the passwords have lowercase or numbers (but not necessarily both) and very few have uppercase or symbols. That does not bode well for password complexity. Let's look for passwords that only pick from one character set:

lower, upper, digits, symbols, total = 0, 0, 0, 0, 0.0
with open("justpasswords.txt") as f:
    for password in f:
        total += 1
        password = password.strip()
        if all([str.islower(x) for x in password]):
            lower += 1
        if all([str.isupper(x) for x in password]):
            upper += 1
        if all([str.isdigit(x) for x in password]):
            digits += 1
        if all([not str.isalnum(x) for x in password]):
            symbols += 1
    avg_lower, avg_upper, avg_digits, avg_symbols = \
               lower / total, upper / total, digits / total, symbols / total
    print "Percentage with all lowercase: %2.3f%%" % (avg_lower * 100,)
    print "Percentage with all uppercase: %2.3f%%" % (avg_upper * 100,)    
    print "Percentage with all digits: %2.3f%%" % (avg_digits * 100,)
    print "Percentage with all symbols: %2.3f%%" % (avg_symbols * 100,)
    mixture = (1.0 - (avg_lower + avg_upper + avg_digits + avg_symbols)) * 100
    print "Percentage with a mixture: %2.3f%%" % (mixture)

And the output:

Percentage with all lowercase: 43.108%
Percentage with all uppercase: 0.364%
Percentage with all digits: 19.536%
Percentage with all symbols: 0.078%
Percentage with a mixture: 36.914%

Shucks. That's not good. 20% of the passwords draw only from a character set with 10 possibilities. 43% draw only from lowercase letters, which offer 26 possibilities. Combined with an average password length of 7.63 characters, that's troubling. It's good to see that 37% of the passwords are mixed, but the majority are still insecure (and remember, very few passwords use uppercase or symbols, which would increase security far more dramatically than adding a number or two to a lowercase password).


Conclusion

The passwords that LulzSec gave us weren't quite as bad as we'd expected, but they weren't secure either. Clearly, the source of these passwords did not enforce password security as much as it needed to, judging by the number of passwords that were all lowercase or all digits and exceedingly short. Web developers: force your users to use long and complex passwords. It's good for them. Users: use better passwords.


Random NCAA Tourney Bracket, How Will it do?

03.14.2011 at 09:09 PM in Sports

I love March Madness. I always pick several brackets, and this year is no exception.

This year, as a control to compare my performance against as well as an experiment, I've created a completely random bracket. I entered the bracket alongside my others on ESPN. I used a TI-83 Plus calculator to make the picks. You can see what I (or rather, the random number generator) chose here.

UCLA wins in the end over Xavier, 89-77. The highest seed in the Final Four is Xavier (a 6 seed). Only one 1st seed makes it out of the first round, Duke (who falls to Texas in the Sweet Sixteen). Here's hoping that the real tourney is a bit more predictable.


Check out wpxml2blogofile

02.28.2011 at 09:30 AM in Python, Blogofile

I said in my last post that I had a tough time getting my Wordpress posts converted using the existing wordpress2blogofile script provided by the good people at blogofile.

After the fact, I decided it was worth a few hours to actually write a script to convert Wordpress XML dumps to blogofile posts. It has a GitHub repo where you can look at the code. It may eventually end up in the blogofile repo too.

As for the technical factors, I chose to make the output as similar as possible to wordpress2blogofile's. The post naming conventions and the sequence of YAML metadata mimic what wordpress2blogofile does. I chose to use lxml for HTML parsing for a few reasons:

  1. It's a dependency for blogofile, so it's likely that the script will work for the user with no extra effort

Migration to Blogofile

02.26.2011 at 12:45 PM in Uncategorized

In case you couldn't tell, I'm not on Wordpress anymore. I've switched to blogofile. It's not so much that I dislike Wordpress, but Blogofile meets my needs really well:

  • Markdown editing (and ReST, if I'm ever feeling crazy)
  • Syntax highlighting out of the box
  • Spam-proof comments from Disqus
  • Static site (no pesky PHP)
  • Written in Python, customized in Python
  • Easily versioned using Git or some other VCS

Basically, it's super awesome. My favorite part is that it's static -- Wordpress was never responsive enough, and this is as responsive as it gets. With Wordpress, you also had to worry about caching, avoiding poorly performing plugins, and databases. All those troubles melt away with blogofile.

That said, it's not perfect. Some cons:

  • Usage is non-obvious and no non-nerd could ever effectively use it
  • Documentation is no good. There's very little to read on blogofile in general, so to understand it you're going to have to read most of the source code in your controllers, filters, etc.
  • Migrating from Wordpress on shared hosting is tough. There's a Python script that does the migration, but it has several dependencies (SQLAlchemy is the big one) that make it unlikely to work on shared hosting (I'm fortunate enough to have limited shell access, but I still couldn't install SQLAlchemy, even from source). I ended up having to dump my MySQL DB, install and run MySQL on my computer, load the dump, and run the script from there. In the future, it should probably try to read Wordpress's XML dumps (I bet that would be pretty easy; maybe I'll do it myself).
  • There are definitely some bugs, and it's definitely not "mature" technology (Wordpress really isn't either)

Overall, I'm happy with my decision despite some of the hardships involved in the transfer. The coolest thing is that I can stay on $1 a month shared hosting forever.


C resources that helped me

02.18.2011 at 04:17 AM in Uncategorized

I've been picking up C a bit more lately, and it's been great so far. Along the way, I've picked up some great articles, books, and sites. I thought I'd post them for posterity and the convenience of anyone who might happen on this.

  • Essential C, by Nick Parlante -- this is really a must for anyone who wants to relearn C, or for an experienced programmer who wants a primer. It's only 50 pages, and it's quick and to the point. It's something that you could read and comprehend in a day if you wanted to. It's also very thorough -- most of the major language concepts see some coverage.
  • K&R C, by Kernighan and Ritchie -- no C programmer would be complete without it. If you have to ask, you shouldn't be using C.
  • comp.lang.c FAQ -- it's a great FAQ. Quite thorough. Even if you don't have a specific question, it's very educational to peruse: you can pick up nice bits and pieces of advice along the way.
  • High and Low Level C by Jim Larson -- sort of a guide to doing things in C that it wasn't really designed to do (closures, classes, GC), and sort of a guide to low level hacks. It's a good overview of some cool hacks in C that can come in handy for the advanced C coder.
  • The ISO C standard -- again, if you have to ask...
  • The GCC manual -- everyone's favorite C compiler has quite a few options and quite a few associated tricks. Also, GNU C is, in many ways, a superset of C itself -- it allows a number of things that the standard does not (like nested functions). It's nice to know what those features are.
  • The FreeBSD handbook's section on secure programming -- programming in C, especially with strings, can be dangerous. This is the most comprehensive guide to programming securely that I've found so far.

If I come across any other good resources, I'll add them.

