Greetings,

Recently I got a casual query from a user who was interested in what else I had going on. I do have a lot of projects that I work on, along with spending time supporting JBidwatcher.  What follows is a lightly edited version of what I told them…

(1) Most obviously there is JBidwatcher, which I spend some time supporting every day via email, forum, and the support site, and sadly a little less time developing. If you’ve poked around my site, though, you’ve noticed that I’ve got several other projects listed.  There are also one or two that are not listed, which I’ll talk about.

(2) The other project most obvious to JBidwatcher users is My JBidwatcher.  It has its own configuration tab in JBidwatcher, and it’s pretty far along in terms of features.

Unfortunately I’ve not yet implemented the ‘Add Auction’ command in My JBidwatcher, nor the ability to send snipes back to your desktop JBidwatcher instance.  All the capabilities to do it are in place; I just haven’t seen enough interest from users to implement it.  My JBidwatcher gets a very small number of occasional users, although it’s been invaluable in helping me debug JBidwatcher, especially recently with the ‘null’ priced items problems.  You can link My JBidwatcher to your desktop app just by entering your account information (user name and password) into JBidwatcher’s ‘My JBidwatcher’ configuration tab.

I really want to make it work better, but I’ve not gotten a confident feel for whether folks want that capability, or for what they’d like from a web interface to JBidwatcher, so I’m unsure about spending time on it as opposed to other projects.

(3, 4) As for Hacker’s Health (and the iPhone companion app Health Hacker)…  Heh!  In 2007, as part of learning Ruby on Rails, I built a health-tracking application for myself.  I still use it, and I’d love others to see it, but it needs some cleanup before that happens.  Fast forward to 2010, when I built an iPhone application around communicating with Google Health and storing data in their format (which is crazy complex, essentially because it had to support health data vaults).  Then I started working for Google and put it on hold, because they were talking about doing one themselves and I didn’t want to step on any toes…  Now that I’m not working for Google anymore, and Google Health has sadly gone away, I’ve revived it, updated my web application, and started making the two of them talk to each other.  This project is lots of fun to build, and very useful, as every time I actually put focus on my health it gets better.

(5) One of the last of my publicly acknowledged projects, MacBidwatcher, is an application I’m actively working on. Right now it can load eBay auctions, trigger updates, show thumbnails, and track items in folders, and you can drag and drop items between folders. I’ve added the ability to log in, but only on ebay.com so far. (No international support yet.) I’ve got some prototype code that bids, but I need to build the user interface for bidding, then start adding features to support sniping.

MacBidwatcher is a much more Mac-like application than JBidwatcher though, and my plan is (eventually) to offer it on the Mac App Store, and maybe through non-MAS sales as well.  Even folks like me need to eat.

(6) Not really discussed anywhere, is iBidwatcher which is an iPhone version of JBidwatcher. It’s already got a few cool features (including secure over-the-air sync with JBidwatcher if you’re on a wireless network with a running JBidwatcher instance) but it’s severely hampered by the inability of iPhone applications to do anything in the background. This means it can’t keep the price of a listing up to date when not running.  I’ve written code to scrape eBay’s site on the iPhone, and it works great, but it can only run when you are in the app. It also cannot snipe from the phone, because the phone would have to be on and the app running, which is counter to the idea of a sniping application, i.e. you shouldn’t need to think about it. Instead I’m considering partnering with Gixen, and allowing you to place snipes on their service through iBidwatcher if you have a Gixen Mirror subscription. Their price of $6/year seems reasonable, and I have a lot of respect for the person who runs it.

(7) The least interesting to most JBidwatcher users is Outlinr, which has gotten sidelined as I deal with other projects.  It was a blast to build, and was functional for a while, but server upgrades and the relentless progress of browsers has made it no longer work.  I love outlining as a knowledge-capture model, and think it could be done SO much better than anyone is currently doing it, but I have to focus on projects that I know others are more immediately interested in.

I’d be very interested in your thoughts on the various projects I’ve described, and what you’d like to see out of them.  I’m always looking for more insights into what would be helpful and valuable.

— Morgan Schweers, CyberFOX!


Greetings,
I am not an expert Objective C, Cocoa, or Core Data coder. There, I admitted it. I expect some of the people viewing this will be, and I encourage you to let me know where I’ve gotten things wrong. I’m also sure my MacRuby style is…let’s call it idiosyntactic.

Too long to fit in the margins of this book

My journey starts with Matt Aimonetti’s excellent MacRuby: The Definitive Guide, and its Core Data chapter. It’s a really good introduction to Core Data, but I needed something a bit deeper and more advanced. My dream, of course, would be to use Core Data with Active Record’s ease of use. Core Data is really complex, though. Books have been written on the subject, and yet my complete exposure to it has been writing a few Objective C iPhone applications for my own use.

My problem

It’s always best to start with a relatively simple action, and figure out how to build on that. My initial problem to solve was allowing the user to select a Category from an outline view on the left (iTunes style) and pull up a list of the items in that Category in a table view in the main part of the UI, like so:

MacBidwatcher screenshot with an outline view on the left and a table on the right.

I’m sure I could do this a number of ways, and it’s probably even possible to make it work with the standard NSArrayController as in MacRuby: The Definitive Guide, but there are a number of places where that won’t work as well, such as when I want to find the item ending soonest across all Categories, or to load and modify specific Auction items during updates or when the user performs actions on individual items. At that point it becomes necessary to do the Core Data handling ‘by hand’, so I felt it’d be best not to start with something I was going to have to bypass anyway.
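
For instance, the ‘ending soonest across all Categories’ case means a hand-built fetch, roughly like this (a sketch only; ‘end_date’ stands in for whatever the attribute is actually named):

request = NSFetchRequest.alloc.init
request.entity = NSEntityDescription.entityForName("Auction", inManagedObjectContext:context)
request.sortDescriptors = [NSSortDescriptor.alloc.initWithKey("end_date", ascending:true)]
request.fetchLimit = 1
error = Pointer.new(:object)
soonest = (context.executeFetchRequest(request, error:error) || []).first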

The Data Model

It may be helpful to sketch a simple data model, described using the data modeler in Xcode; some other models in the project have been elided for clarity:

MacBidwatcher's Data Model for Categories and Auctions

Because of the reciprocal relationship between Category and Auction, I can simply load a Category with a particular name and then refer to category.auctions to get the list (in Core Data form) of Auction objects associated with that category. Adding .allObjects returns the actual objects as an array, for easy referencing in a TableView. (This is not efficient, but I’m trying to get the code working right now; I’ll manage efficiency once I’ve gotten it into users’ hands.)

MacRuby vs. Objective C

The Objective C code (that I’ve written) to do a similar load takes about 80 lines; I’m not embedding it because it’s too long, and it only loads by name. You can go take a look; it’s pretty straightforward. Unlike the Objective C version, the Ruby code can load by any attribute dynamically, and has additional features like limits and offsets.

This is the core of my code when the user clicks a category/folder on the left, simplified here into a sketch (the outlet and method names are illustrative):
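
class AuctionTableDatasource
  attr_writer :app_delegate   # outlet to the AppDelegate (more on this below)
  attr_writer :table_view     # outlet to the NSTableView being fed

  # Invoked when a category/folder row is clicked in the outline view.
  def category_selected(category_name)
    context = @app_delegate.managedObjectContext
    @current_category = Category.find_first(context, by_name:category_name)
    @auctions = @current_category ? @current_category.auctions.allObjects : []
    @table_view.reloadData
  end
end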

The key code being referenced is this:
@current_category = Category.find_first(context, by_name:category_name)
It’s roughly equivalent to the Objective C code (extracted from the app that uses my ObjC code above):
NSArray *cats = [Category findByName:category fromContext:managedObjectContext];
The main difference is that the Objective C code requires a different method for each findBy* that you want to do, whereas the MacRuby code exploits the fact that its parameters are really just hash entries, using the ‘:’ syntax as faux named parameters. So, for example, while this code is doing a find_first, a find_all could also pass limit:20, offset:7 if you wanted to get 20 items starting at the 7th.

The Magic

The magic, of course, happens in entity.rb. The full version is in the gist, but its shape is roughly this (a simplified sketch, not the gist verbatim):
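
class Entity
  # The real entity.rb uses inheritable_attrs.rb so entity_name can be
  # overridden per-subclass; this sketch fakes it with a plain class ivar.
  def self.entity_name
    @entity_name ||= name   # convention: class name == Core Data entity name
  end

  def self.entity_name=(value)
    @entity_name = value
  end

  # Category.find_first(context, by_name:'Computers')
  def self.find_first(context, options = {})
    find_all(context, options.merge(limit: 1)).first
  end

  # Category.find_all(context, by_name:'Computers', limit:20, offset:7)
  def self.find_all(context, options = {})
    request = NSFetchRequest.alloc.init
    request.entity = NSEntityDescription.entityForName(entity_name,
                                                       inManagedObjectContext:context)
    request.fetchLimit  = options[:limit]  if options[:limit]
    request.fetchOffset = options[:offset] if options[:offset]

    if options[:conditions]
      # The escape hatch: raw predicate logic plus substitution arguments.
      format, *args = options[:conditions]
      request.predicate = NSPredicate.predicateWithFormat(format, argumentArray:args)
    else
      # Turn by_name:'x' into 'name == %@' (and byName:'x' into 'Name == %@').
      options.each do |key, value|
        next unless key.to_s =~ /\Aby_?(.+)\z/
        request.predicate = NSPredicate.predicateWithFormat("#{$1} == %@",
                                                            argumentArray:[value])
      end
    end

    error = Pointer.new(:object)
    context.executeFetchRequest(request, error:error) || []
  end
end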

It references inheritable_attrs.rb, which is available as a gist of its own.

One important thing any entity needs is a ‘context’, which is actually a managed object context. Fortunately, when you create a MacRuby project using the MacRuby Core Data Application template, a context is available from the AppDelegate class. You may notice that my AuctionTableDatasource contains attr_writer :app_delegate, which doesn’t appear to do anything obvious. That’s an ‘outlet’ referring to the AppDelegate; in Interface Builder, control-click-and-drag from your data source to the App Delegate and choose the app_delegate outlet to link it up. The default Core Data AppDelegate class instantiates a managed object context, which is essentially a link to your Core Data database. It needs to be passed in to any code that interacts with data from the database, which is why each of the public methods in entity.rb takes a context as its first parameter.

A MacRuby Caveat

I mildly disagree with the default MacRuby Core Data Application template, which creates an XML-based Core Data app; specifically, it creates a file with an .xml extension and passes NSXMLStoreType as the persistent store type. I believe it should use a .sqlite file extension and NSSQLiteStoreType as the storage type. For small applications it may not matter, but if you expect to store enough data that handling Core Data by hand will be necessary, you’re going to want the SQL-based storage type. The typical recommendation is to start with the XML type and switch to SQL when you’re going to release, nominally because XML is easier to read. I dispute that, though, because there are slight behavioral differences, such as the XML storage type keeping the entire object graph in memory, and reports of subtly different handling of (generally incorrectly specified) relations. If you want a SQLite-backed database, you’ll want to fix up AppDelegate.rb once you’ve created your project.
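
The fix-up amounts to something like this (a sketch; the template’s exact code varies by MacRuby version, and ‘MyApp’ is a stand-in for your application’s name):

# Where the template builds the store with an .xml file and NSXMLStoreType,
# use a .sqlite file and NSSQLiteStoreType instead:
error = Pointer.new(:object)
url = NSURL.fileURLWithPath(File.join(applicationFilesDirectory.path, "MyApp.sqlite"))
@persistentStoreCoordinator.addPersistentStoreWithType(NSSQLiteStoreType,
                                                       configuration:nil,
                                                       URL:url,
                                                       options:nil,
                                                       error:error)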

Conventions and Features

I know there’s a lot of disagreement over Active Record’s design, but I happen to like it a lot, and many of the guidelines of Rails creep into my code. I especially like the presumption that there’s an easy way to do whatever you’re doing, which the code handles for you automatically, but that if you want to fight the convention, you can; it just won’t be as clean. There’s a lot of ‘convention’ involved in the Entity class, especially in the presumption that you’ll have model classes which are named identically (including case!) to the Core Data models you’ve created, i.e. generally with the first letter uppercased. You should be able to override that by setting the entity_name in your subclasses using

require 'entity'

class OddlyNamedModel < Entity
  self.entity_name = 'oddly_named_model'
  # ...
end
but it’s not something you should do lightly, or at all if you’re creating a new project, and embarrassingly, I haven’t tested it.

Attributes

Attribute names were something else to deal with; I typically name my attributes in lower case while, as mentioned, my entities have their first letter capitalized. So my Category.find_first(context, by_name:category_name) call will look for an attribute called ‘name’. I am aware that some people capitalize the first letter of their attributes, so Category.find_first(context, byName:category_name) would work if you have an attribute called ‘Name’, and looks very Objective C-ish.

Limits and Offsets

It’s straightforward to index into your results using Category.find_all(context, by_name:common_name, offset:20, limit:10); this searches for all instances whose name attribute matches common_name and then, starting from the 20th, pulls down 10 of them. In SQLite terms, this translates into something approximately like: SELECT * FROM ZCATEGORY WHERE ZNAME = ? LIMIT 10 OFFSET 20

Conditions (the escape hatch)

Another useful feature is a conditions key, whose single entry is essentially raw predicate logic. An example of finding all categories that end in ‘nt’ would be Category.find_all(context, conditions:['name like %@', '*nt']) (note the ‘*’ wildcard in the LIKE pattern), which translates in SQLite terms to: SELECT * FROM ZCATEGORY WHERE NSCoreDataLike( ZNAME, ?, 0)

Debugging

One incredibly useful built-in capability I’ve found for debugging SQLite-based Core Data code (and the source of the above approximations of the commands issued by Core Data) is passing -com.apple.CoreData.SQLDebug 1 to the application, which shows the queries as they’re made. From the root of your app directory, that looks something like this:

bash$ build/Release/MyApp.app/Contents/MacOS/MyApp -com.apple.CoreData.SQLDebug 1

Conclusion

I’ve tried to make an extremely simple class that I can subclass for my entities, that allows me to do straightforward find operations in a way that doesn’t feel overly complex, and fits in a Ruby-ish style. It’s working so far for my code, but I can’t promise it’ll work for everybody’s. Especially if you’re facing down historical schemas, or other issues, my defaults are probably not going to be appropriate for you. It’s definitely less code than the Objective C equivalent, and feels…more comfortable for me.

I’m VERY interested in feedback, and improving this code. As I stated at the outset, the best I could be considered is a hobbyist Objective C coder, and I’m only just starting to dig into the power that MacRuby has. Please feel free to provide pointers, comments, ideas, features, or (if necessary) derision. Especially feel free to fork entity.rb and make corrections or additions as you see fit.

— Morgan Schweers, CyberFOX!

Update: A friend and quite excellent Objective C developer, David Brown, provided his own take on the Entity concept; according to him it’s untested and needs a few tweaks, but the basic concept should work equally well. Check it out in his own gist of entity.m. It’s like a Rosetta stone between MacRuby and Objective C!


Greetings,

This is a very informal, off-the-cuff survey of some web applications in the Rails ecosystem that have varying payment plans, how they present them, and some of the differences among them.  The idea is not to compare the services, but to start to get a handle on how to price services on the web.

Background

I’m working on launching a small, very niche Rails-based web application in the near future, and am looking to charge a small amount per month for it.  Partly it’s a way to make ends meet while I’m unemployed, but I’d also like to provide a few extra features to those users who already donate to the free JBidwatcher application, and the web application uses S3, so I have to defray those costs.  While I’m very grateful that my users contribute because JBidwatcher itself is useful to them, I’d like to offer a ‘premium’ value both for folks who have already donated, and for people who don’t feel comfortable just sending money and want to see something extra for it.

This led me to be curious about pricing.  I pay for several web applications already, and expect that eventually I’ll have to pay for more if Google starts actually asking for money. :)  I decided to dig into the applications that are out there, and see what kind of pricing models I should be looking at.

From my informal survey, a lot of the subscription plan models used by Rails-oriented companies grew naturally out of 37signals and their fanatical devotion to the concept that web applications should be good enough to charge for, and should be charged for.

The products I’ve looked at for this are Lighthouse, Tender, GitHub, Highrise, Basecamp, Campfire, NewRelic RPM, CrazyEgg, Hoptoad, Thunder Thimble, and Pingdom.  I’m sure there are many others out there, but those are the ones I deal with on a semi-regular basis.  I have paid accounts with GitHub and Pingdom, and have used (at other companies) paid accounts on Hoptoad and NewRelic RPM.

One specific attribute of all of these applications is that they have varying levels of paid accounts, not just the free/paid[/lifetime] split of many other services (LiveJournal, Flickr, LibraryThing, to pick a few that I personally use).  I like the spectrum approach, as it gives users a choice of how much service they’ll need and how much they’re willing to pay for it.

Raw Pricing Plans

First I want to present the ‘raw data’: a link to the plans and a brief table with the plan names and the monthly cost.  Then I’ll point out a few things I found interesting about the data.

Lighthouse*

Gold  Silver  Bronze
$100  $50     $25
* and a free plan

Tender Support

Basic  Plus  Premium
$19    $49   $99

Free trial is 30 days only.

GitHub

Open Source  Micro  Small  Medium  Large  Mega  Giga
Free         $7     $12    $22     $50    $100  $200

Highrise*

Max   Premium  Plus  Basic  Solo
$149  $99      $49   $24    $29
* and a free plan

Basecamp*

Max   Premium  Plus  Basic
$149  $99      $49   $24
* and a free plan

Campfire*

Max  Premium  Plus  Basic
$99  $49      $24   $12
* and a free plan

NewRelic RPM

Lite  Bronze  Silver  Gold
Free  $40     $85     $200

CrazyEgg

Pro  Plus  Standard  Basic
$99  $49   $19       $9

Hoptoad

Egg   Tadpole  Toad  Bullfrog
Free  $5       $15   $25

Thunder Thimble

Free Trial*  Tiny  Small  Medium  Large  Extra-Large
Free         $9    $19    $39     $79    $119

*Free trial is 30 days only

Pingdom*

Basic  Business
$9.95  $39.95
* and a free plan

So what’s interesting here?

Lighthouse, Tender, GitHub, Highrise, Basecamp and Campfire all differentiate on disk space used.

Lighthouse, Tender, GitHub, Highrise, Campfire, Hoptoad and Thunder Thimble all have some variation on the concept of ‘user’ that you can pay more for more of.

In everything except NewRelic and, to a lesser extent, Hoptoad, all the capabilities of each application are available at all user levels, just in varying quantities.  Feature distinction only exists in those two applications.  In Hoptoad’s case, the distinction is between free and non-free; if you’re paying, you get all the services.  This leaves NewRelic as the only one that deeply distinguishes between features available at different paid levels.

GitHub, Highrise, Campfire, and Hoptoad all have SSL support as an ‘add-on’ feature; it’s not part of the free accounts, and in some cases it’s not part of the basic level paid accounts either.

Discounts

When I signed up for Pingdom, they sent me a ‘70% off the first year’ invitation, which reduced the price to roughly $3/mo. for all the basic plan amenities; presumably under the theory that they will be able to re-subscribe me in a year at full price.

NewRelic is running a 20% off discount currently, which lasts as long as you have a paid account.

Pricing Plan Layout

Historically, I seem to recall most of these sites had the table style of comparison chart (still used by NewRelic, Hoptoad, Thunder Thimble, and CrazyEgg), but most have converted to the ‘box’ style of comparisons.  GitHub still has the table when you view the plan chart from your account settings, but other than that, Lighthouse, Tender, GitHub, all the 37signals products, and Pingdom are using separate boxes for each plan level.

The 37signals products and Pingdom use an outsized box to emphasize one of the plans, presumably to drive signups to that plan.  CrazyEgg does the same thing within their table style.

I kept the order in which the various products display their prices, and it’s noteworthy that all the 37signals products, CrazyEgg, and Lighthouse start with the larger plans on the left and go down from there.

It’s also interesting that all the ones marked ‘and a free plan’ de-emphasize the free plan by putting it in small text under the large table of paid options.  Two of them, Tender and Thunder Thimble, offer a 30-day free trial but no ongoing free plan.

GitHub and NewRelic are the only ones whose plan details go below the fold.  GitHub’s plan upgrade doesn’t, but their new sign-up plan list does.

Disk Usage

So for the applications which differentiate on disk usage, how much does $50/mo. get you?

Lighthouse  Tender  GitHub  Highrise  Basecamp  Campfire
2GB         5GB     6GB     10GB      10GB      10GB

This pattern is roughly the same at the $24 price point for all of them: 500MB for Lighthouse and Tender, 2.4GB for GitHub, and 3GB for the 37signals products. This suggests that disk-usage limits range from 20MB to 200MB per dollar per month, a pretty wide range. Estimating based on S3 costs suggests around 2.3GB of storage per dollar per month, but that relies on one upload, one download, and ongoing storage. If your usage is asymmetric, or storage is temporary, the S3-based cost can vary a lot.
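
For what it’s worth, here’s the back-of-the-envelope version of that S3 estimate; the per-GB prices are rough 2009-era assumptions, not quoted figures:

storage_per_gb  = 0.15          # $/GB-month, ongoing storage
transfer_per_gb = 0.10 + 0.17   # $/GB, one upload in plus one download out

gb_per_dollar = 1 / (storage_per_gb + transfer_per_gb)
puts "%.1f GB per dollar per month" % gb_per_dollar   # => ~2.4, in the 2.3GB ballpark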

Users

For the applications which vary pricing based on users, how much does around $24/mo. get you?  (I chose this number since Hoptoad doesn’t have a $50 price point.)

Lighthouse  Tender  GitHub  Highrise  Campfire  Hoptoad  Thunder Thimble
15          5       10      6         25        32       8

I believe this range is wide mainly because per-user data is usually relatively light, so the price has more to do with how complex the application’s interaction between users is than with a real per-user cost. Still, some numbers do come out of this: the cost per user ranges from $0.78 to $4. At the $50 price point, the cost per user for qualifying apps is $0.82 to $3.27.

Summary

I don’t pretend to know pricing or sales, but I like to believe that, in aggregate, the people putting these sites together do.  I see a pretty good argument here for feature parity among price points, with quantities that vary between prices.  There is clear value in users and disk space, so those are early things to look at when pricing an application.  SSL support is a common feature of paid plans, and not of free ones.

There’s a definite movement towards boxed plan details over tabular feature comparisons.  Ongoing free plans still exist in the majority of applications, but are de-emphasized in most, guiding users towards the paid plans.  Overall, the plans are simple, falling below the fold in only two cases, and relatively easily consumable in all.

The lowest paid plans run $5-$40, with a bare majority falling in the $5-$12 range, and all the rest but NewRelic falling in the $19-$25 range.

Closing

I hope this has been an interesting and potentially useful survey of a few pricing plans for applications generally in the Rails ecosystem.  Any mistakes are mine, and I’d very much like to hear about them so I can fix them.  Other data points are welcome, and points I might have missed that would be valuable to folks thinking about pricing are welcome, even encouraged!

I did this for my own edification, but I’d also very much like to know if others find it interesting!

Best of luck, and may figuring out pricing not be as much of a pain for you as it is for me!

—  Morgan Schweers, CyberFOX!


Greetings,

What alternative do you suggest for using models in migrations? I’ve been in several situations where I had to change not only the underlying DB structure, but the contained data, too.

Data changes, especially moving data around, are almost always rake task-worthy in my experience.

The other side of that, populating large amounts of seed data into new databases, is a difficult task no matter the method; seed_fu attempted to deal with it, but it’s not an optimal solution and is pretty old. I’m not even sure it works anymore. It’s worse if you need the seed data to come from a legacy database in tests (e.g. a nutritional database); reloading lots of data each time a clone_structure_to_test is done makes your tests very slow.

I break migrations down into three kinds: structural (tables, columns, indices, etc.), data (pre-populating tables, etc.), and procedural (moving data around, recalculating counts, etc.). The first is what I strive to limit migrations to. I feel like there should be a good answer for the second, and Rails 3.0 has a ‘Simplest Thing That Can Work’ feature in Rails commit #4932f7b. The third I try to relegate to rake tasks that are usually run once, on deployment of the branch.
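
That Rails 3.0 feature, for reference, is db/seeds.rb: a plain Ruby file run by rake db:seed, which is where this kind of data belongs. A placeholder example (the model and rows are invented for illustration):

# db/seeds.rb
Category.create!(:name => 'Uncategorized')
Category.create!(:name => 'Watched Items')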

The procedural tasks don’t need to be run when building a fresh database, because there’s no legacy data to correct. That’s why you can usually define the model inside the migration to force it to work even if the real model is gone or renamed; there’s no data, so the operations often don’t matter. If they don’t NEED to be run when building a fresh database, I try not to put them in the migrations.
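
As a sketch of what I mean by a run-once rake task (the task, models, and column here are hypothetical):

# lib/tasks/one_off.rake -- run once when the branch deploys, then delete it.
namespace :one_off do
  desc "Recalculate cached auction counts after the schema change"
  task :recalculate_counts => :environment do
    Category.find_each do |category|
      category.update_attribute(:auctions_count, category.auctions.count)
    end
  end
end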

It’s not ‘hard and fast’, because I usually work at startups and small companies, where dogma doesn’t work so well. Imagine, though, a large and thin piece of foam. It’s flexible, and you can make it into all sorts of shapes, and yet it’s simple. Each time you add code that makes reasonable changes in the future painful, it’s like putting a thin glass rod into the foam. It’s still flexible, but there are some bends you can’t do without breaking. Add too many and you’ve got an inflexible and brittle object, no matter how dynamic the base material is.

The fear of breaking things by changing the code is deeply demotivating for everyone.

I know I waterboarded that analogy, but hopefully it makes sense…

— Morgan Schweers, CyberFOX!


Greetings,

[Edit: Since writing this article up back in early March, I’ve moved on from this job. The folks who now maintain it at least know where the pain points are, can run migrations safely, and can deploy it locally, to dev servers, and to the main deployment area.  It’s a working app; although I never got code coverage above about 45%, the coverage was decent in the core app areas by the time I left.]

Mike Gunderloy had an interesting article entitled ‘Batting Clean-up’, which was very timely for me.  I’ve just started maintaining, and trying to improve, a Rails app developed by an ‘outsourced’ group. The only tests were the ones generated automatically by ‘restful authentication’, and they were never maintained, so they didn’t come close to passing. Swaths of the program are written in terribly complex (and sometimes computed) SQL, the migrations didn’t bring up a fresh database (poor use of acts_as_enumerated causes great hurt), and vendor/plugins should have just had one named ‘kitchen_sink’.

It hurts to see Rails abused like that; you want to take the poor application under your arm and say, ‘It’ll be okay…we’ll add some tests and get you right as rain in no time!’, but you know you’d be lying…

I did much of what Mike described (half the gems it used were config.gem’ed, the other half weren’t), vendor’ed Rails (it breaks on anything newer than 2.1.0), and brought the development database kicking and screaming to life. There was no schema.rb; it had been .gitignore’d, and the migrations added data, used models, and did everything else you can imagine doing wrong. (Including using a field on a model right after adding that column on the previous line…I don’t know what version of Rails that ever worked on…) I didn’t want a production database; who knows what’s been done to that by hand. I want to know what the database is _supposed_ to look like; I can figure out the difference from production later.

Once the clean (only data inserted by migrations) dev database was up, I brought the site up to see if it worked. Surprisingly enough, it did; apparently manual QA was their only testing methodology. I appreciate their QA a lot; it means it’s a working application, even if that’s not going to help me refactor it.

I ran flog and flay and looked at the pain points they found, to get an idea how bad things might be. I picked an innocuous join table (with some extra data and functionality) to build the first set of tests for, which gave me insight into both sides of the join without having to REALLY dig into the ball of fur on either side. I viciously stripped all the ‘test_truth’ tests. I looked for large files that flog and flay hadn’t picked up, to pore over. Custom rake tasks are worth checking out too, because those are often clear stories, easy to quickly understand in a small context.

Checking out the deployment process tells you a lot as well, although it turned out this was stock Engine Yard Capistrano.

Skimming views (sort by size!) will also tell you a lot, especially when you find SQL queries being run in them…

Use the site for a little while, and watch the log in another window. Just let it skim by; if you’ve looked at log files much, things that seem wrong will jump out even if it’s going faster than you can really read.

In my case, the code’s mine now, so it’s my responsibility to make it better before anybody else has to touch it. I’ve got about a week of ‘free fix-it-up time’ before I need to start actually implementing new features and (thankfully) stripping out old ones… At my previous company, I was the guy pushing folks to test, now I’ve inherited a codebase with zero tests. Poetic justice, I suppose… :)

Good luck!

— Morgan Schweers, CyberFOX!


Greetings,

‘@workon’ is a brutally simple time tracker; it’s mainly to scratch my own itch, but I’d like to open it up to others.  Twitter has opened it up to 20,000 API calls per hour, which is 19,800 more than I think it’ll need.  :)  It’s a Rails application, listening to email via IMAP in order to recognize followers, and using the Twitter API in order to listen for and send direct messages.

The simplest use is to follow ‘workon’, wait until it sends you a direct message, then send ‘d workon {what I’m doing}’, and ‘d workon done’ to mark the most recent task as finished.  It keeps a page of currently open tasks and the time information for recently closed tasks (by default at a semi-private URL it direct-messages to you, though the user can make it visible under their handle).

Right now it polls for direct messages and new followers once every minute.

I created this because I’ve started contracting recently, and I have to keep track of my time for the first time ever.  Since I’ve always got TwitterFox in my browser, I thought it might be a good way to keep track of my status there.  There will be more features to come; I only worked on it from Saturday at 3:45am until Monday at around 3am, and for most of Sunday I was the ‘active parent’, letting my wife have a day off, but it already does what I’ve described above.

It also has the ability to support Jabber, and I’m thinking about how to improve that cleanly…

If you find it interesting, or want any features, let me know!

—  Morgan Schweers, @cyberfox / @workon


Greetings,
The JBidwatcher home page, forums, and svn are all down for a few hours while my hosting service fixes some power problems in their data center.

I’m really sorry I didn’t set up a fallback DNS, or something else, beforehand. Offhand, I don’t know a good way to handle my sole host being powered off…

— Morgan Schweers, CyberFOX!



gotAPI Fluid Icon

Greetings,

I’m definitely not an artist, and there’s not much to work with from gotAPI: no logo on the blog or Twitter, and the site’s main logo is textual. The favicon is the only piece of abstract iconography to work from, so that’s what the Fluid icon is based on.

It works for me because I’m a heavy-duty tab user; there are often so many tabs open that icons are all that’s left, so I’m used to looking for the gotAPI favicon.

With those caveats in mind, this png is what I’m using as my Fluid icon.

gotAPI Fluid Icon

— Morgan Schweers, CyberFOX!


Greetings,
That quote is from a Treasury spokeswoman, quoted by Forbes, on why the bank bailout will need $700 billion.  A spokeswoman who has probably joined the unemployed today.

There are a lot of people suggesting that we should let them all die (including me, in a fit of deep fury, when the bailout was first proposed).  Others have suggested that simply improving the more stable banks’ ability to give mortgages would help, so folks could refi with those and leave the bad financial companies to wither.

Unfortunately it’s long past being just a mortgages issue.

The original sub-prime mortgages were securitized into investments which returned a good percentage with an aggregated low risk, and were part of the formula that many companies used to approximate future revenues.  They loaned and invested based on that approximation.  Several relatively small companies sold ‘insurance policies’ (it’s more complex than that, but it comes down to hedging against risk) that they weren’t sufficiently collateralized to back.  It turns out there was a lot more risk involved than was visible.  When the bad loans became endemic, these insurance policies were called in by the major companies to preserve capital.  The small companies folded, unable to actually provide the liquid assets needed to back the policies.

Where we are now is that there have been huge losses, and companies who offered hedges against those losses are backing out of their obligations because they don’t have the liquid funds to meet them.  Those companies, mostly small-to-medium hedge funds and independent insurers, will close, bankrupt, because nobody will ever want to do business with them again (and there’s probably quite a bit of legal action coming).

This leaves the larger companies holding the bag for billions in risky, unhedged investments.  They want to get rid of them, not because they’re all going to go to $0, but because there’s NOBODY willing to provide insurance on them right now, and in the financial market an uninsurable investment is not acceptable.

The bailout plan is to allow the government to acquire these securitized mortgages and hold onto them while the financial system rebuilds itself and companies emerge that are sufficiently capitalized to insure against the (now recognized to be higher) risks.  Then the government can re-sell them back into the system slowly, hopefully recouping a percentage of the amount that ends up being spent.

The problem right now is that EVERYBODY wants to sell, and NOBODY wants to buy.  In that kind of a market, even good quality doesn’t protect you from being pounded flat.

The reason for the government investment is to give the market time to come to its senses, and breathing room to realize that these are not universally bad investments; the risk was just underpriced.  To make up a homily on the spot: if everybody in the room is terrified into immobility by the mouse, nobody’s able to go and bring in the cat.

The reason this is far beyond mortgages is that when all these big companies hold a large amount of risky investments they can’t hedge against, they don’t lend money, because they aren’t comfortable knowing how much spare capital they have.

When these large financial institutions don’t lend money, people can’t get home loans, home improvement loans, student loans, car loans, business loans.  This trickles down to every single segment of society, from CEOs to greeters at Home Depot to startups to teachers to mechanics.

That’s the disaster that we need to avert.  And it pisses me off to no end that we’re in the situation where we HAVE to hand money over.  I’ve railed against this in public and private, but of all the insane things about it that make me deeply infuriated, the worst of all is that now that it’s gotten to this point, we have no real choice.  We’re forced to make a move like this.

I too want heads to roll.  The most common phrase around the office regarding this is ‘Heads.  On.  Pikes.’  There must be accountability, and it must be large and visible, not detail-oriented and generally annoying like Sarbanes-Oxley.  Several CEOs should lose their jobs, sans parachutes.  Several of the regulations which were eliminated in 1999 should be reinstated.  Maybe there’ll even be CEO compensation limits, and some government ownership of these companies in exchange for government assistance.

The reason that ‘700 billion’ was picked out of the air is not because there’s some knowledge of how much of these securitized mortgages is out there (there isn’t, and if there were, I bet it’d be a lot more than $1T).  It’s because what is needed, far more than anything else, is a symbol of motion.  It’s for one person to come into that room of fear-frozen people and corner the mouse for a few minutes while someone else goes and grabs the cat.  It had to be a number that shocks the conscience, because otherwise people would be asking nervously, ‘Is…that going to be enough…?’  And calming fear is, in the end, what this is really about.

That all said, one political party has de-regulation as an express political plank of their platform.  I know I’m generally preaching to the choir here, but they should learn just how out-of-touch that particular political plank is on November 4th.

—  Morgan Schweers, CyberFOX!


Greetings,

So I’ve had this in my playlist for years, but shuffle just brought it up again, and the need to push it welled up.  It may only make sense to folks who’ve lived in New York and California, but…

Dar Williams – Southern California wants to be Western New York

It’s not my usual fare (leaning more towards meaningless high energy pop, techno, etc., and the occasional story-rock), but it makes me oddly nostalgic.  :)

—  Morgan Schweers, CyberFOX!