Greetings,
Unfortunately, the John Edwards Keynote at Gnomedex was overly political. As Chris Pirillo put it, once you open up the session to the audience, the audience drives it to where they want it to go.

For the most part it was political questions, comments on the Democratic Party, and grandstanding by audience members.

Back in the day, I used to have access to the Congressional Record computer system, a VAX/VMS system. One of the interesting things it had was variations of bills as they went through various stages. One thing I learned back then was that all bills are ‘patches’ to existing government code. So the obvious question becomes how that is visualized, written, edited, and collaborated on by the Senate right now. Further, is there any way to expose those inner workings, and the steps involved, publicly?

Specifically, taking the wiki approach, adding added/deleted/changed highlighting, along with version control and version identification. It would be IMMENSELY valuable to see which Senator made individual changes in the bills, for instance. The other related question is how Senators collaborate on writing a bill. Since it’s all deltas to the government code, you have to keep track of what those changes are, and allow multiple disconnected Senators to send data back and forth in order to build the full text of the bill. This seems like a great place for wiki technology to improve the processes of government.

Anyhow, that was my ‘technical’ question, as opposed to the majority of the political questions/discussions.

We had a good chance to talk real technology with John Edwards, but it devolved more or less to a political mess, and I didn’t even get to ask my question.

Meh.

— Morgan Schweers, CyberFOX!


Greetings,
I am not an expert Objective C, Cocoa, or Core Data coder. There, I admitted it. I expect some of the people reading this will be, and I encourage you to let me know where I’ve gotten things wrong. I’m also sure my MacRuby style is…let’s call it idiosyncratic.

Too long to fit in the margins of this book

My journey starts with Matt Aimonetti’s excellent MacRuby: The Definitive Guide, and its Core Data chapter. It’s a really good introduction to Core Data, but I needed something a bit deeper and more advanced. My dream, of course, would be to use Core Data with Active Record’s ease of use. Core Data is really complex, though. Books have been written on the subject, and yet my complete exposure to it has been writing a few Objective C iPhone applications for my own use.

My problem

It’s always best to start with a relatively simple action, and figure out how to build on that. My initial problem to solve was allowing the user to select a Category from an outline view on the left (iTunes style) and pull up a list of the items in that Category in a table view in the main part of the UI, like so:
MacBidwatcher screenshot with an outline view on the left and a table on the right.

I’m sure I could do this a number of ways, and it’s probably even possible to make it work with the standard NSArrayController as in MacRuby: The Definitive Guide. But there are a number of places where that won’t work as well, such as when I want to find the item ending soonest across all Categories, or load and modify specific Auction items during updates or when the user chooses to perform actions on individual items. At that point it becomes necessary to do the Core Data handling ‘by hand’, so I felt it best not to start with something I was going to have to bypass anyway.

The Data Model

It may be helpful to sketch a simple data model, described using the data modeler in Xcode; some other models in the project have been elided for clarity:
MacBidwatcher's Data Model for Categories and Auctions

Because of the reciprocal relationship between Category and Auction, I could simply load a Category with a particular name, and then refer to category.auctions to get the list (in Core Data form) of Auction objects associated with that category. Adding .allObjects returns the actual objects as an array for easy referencing in a TableView. (This is not efficient, but I’m trying to get the code working right now; I’ll manage efficiency once I’ve gotten it into users’ hands.)

MacRuby vs. Objective C

The Objective C code (that I’ve written) to do a similar load runs about 80 lines; I’m not embedding it here because it’s too long, and it only loads by name. You can go take a look; it’s pretty straightforward. Unlike the Objective C version, the Ruby code can load by any attribute dynamically, and has additional features like limits and offsets.

The core of my code runs when the user clicks a category/folder on the left.
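
In rough shape it looks like the sketch below; this is a simplified illustration rather than the exact gist, and apart from the app_delegate outlet (discussed further down), the method and outlet names are placeholders:

# A simplified sketch of the table data source's selection handling; apart
# from app_delegate, the names here are placeholders, not the project's real ones.
class AuctionTableDatasource
  attr_writer :app_delegate          # outlet wired to the App Delegate in Interface Builder
  attr_accessor :auction_table       # outlet to the NSTableView on the right

  # Called when the user picks a category in the outline view on the left.
  def category_selected(category_name)
    context = @app_delegate.managedObjectContext
    @current_category = Category.find_first(context, by_name:category_name)
    @auctions = @current_category ? @current_category.auctions.allObjects : []
    @auction_table.reloadData
  end

  # Standard NSTableView data source methods.
  def numberOfRowsInTableView(view)
    @auctions ? @auctions.size : 0
  end

  def tableView(view, objectValueForTableColumn:column, row:row)
    @auctions[row].valueForKey(column.identifier)
  end
end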

The key code being referenced is this:
@current_category = Category.find_first(context, by_name:category_name)
It’s roughly equivalent to the Objective C code (extracted from the app that uses my ObjC code above):
NSArray *cats = [Category findByName:category fromContext:managedObjectContext];
The main difference is that the Objective C code requires a different method for each findBy* that you want to do, whereas the MacRuby code takes advantage of the fact that its parameters are really just hash entries, using the ‘:’ syntax for faux named parameters. So, for example, while this code is doing a find_first, a find_all call could also take limit:20, offset:7 if you wanted to get 20 items starting at the 7th.

The Magic

The magic, of course, happens in entity.rb.
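
Stripped down, the idea behind it is roughly the following; this is a sketch of the shape of the thing rather than the real file, and it skips the inheritable_attrs machinery and most error handling:

# A stripped-down sketch of the Entity idea (not the actual entity.rb gist):
# a base class whose find methods turn by_* hash keys into an NSPredicate and
# run an NSFetchRequest against the managed object context that's passed in.
class Entity
  def self.entity_name
    @entity_name ||= name            # defaults to the class name, e.g. "Category"
  end

  def self.entity_name=(value)
    @entity_name = value
  end

  # Category.find_first(context, by_name:"Trains")
  def self.find_first(context, opts = {})
    find_all(context, opts.merge(limit:1)).first
  end

  # Category.find_all(context, by_name:"Trains", limit:20, offset:7)
  def self.find_all(context, opts = {})
    opts       = opts.dup
    limit      = opts.delete(:limit)
    offset     = opts.delete(:offset)
    conditions = opts.delete(:conditions)

    request = NSFetchRequest.alloc.init
    request.entity = NSEntityDescription.entityForName(entity_name, inManagedObjectContext:context)
    request.fetchLimit  = limit  if limit
    request.fetchOffset = offset if offset

    # Any remaining by_* keys become attribute equality clauses: by_name => "name == %@"
    clauses = opts.keys.map { |key| "#{key.to_s.sub(/^by_/, '')} == %@" }
    values  = opts.values
    if conditions                    # the raw-predicate escape hatch described below
      clauses << conditions.first
      values  += conditions[1..-1]
    end
    unless clauses.empty?
      request.predicate = NSPredicate.predicateWithFormat(clauses.join(" AND "), argumentArray:values)
    end

    error = Pointer.new(:object)
    context.executeFetchRequest(request, error:error) || []
  end
end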

The real entity.rb references inheritable_attrs.rb, which is available as a gist of its own.

One important part that any entity needs to get is a ‘context’, which is actually a managed object context. Fortunately, when you create a MacRuby project using the MacRuby Core Data Application template, a context is available from the AppDelegate class. You may notice that in my AuctionTableDatasource there is an attr_writer :app_delegate which doesn’t appear to do anything obvious. That’s an ‘outlet’ to refer to the AppDelegate; in Interface Builder, control-click-and-drag from your data source to the App Delegate, and choose the app_delegate outlet to link it up. The default Core Data AppDelegate class instantiates a Managed Object Context, which is essentially a link to your Core Data database. It needs to be passed in to any code which is going to interact with data from the database, which is why each of the public methods in entity.rb takes a context as its first parameter.

A MacRuby Caveat

I mildly disagree with the default MacRuby Core Data Application template, which creates an XML-based Core Data app; specifically, it creates a file with an extension of .xml and passes NSXMLStoreType as the persistent store type. I believe it should use a .sqlite file extension and NSSQLiteStoreType as the storage type. For small applications it may not matter, but if you feel you’re likely to be storing enough data that handling Core Data by hand will be necessary, then you’re going to want the SQL-based storage type. The typical recommendation is to start with the XML type and switch to SQL when you’re going to release, nominally because XML is easier to read. I dispute that, though, because there are slight behavioral differences, such as the XML storage type keeping the entire object graph in memory, and reports of subtly different handling of (generally incorrectly specified) relations. If you want to use an SQLite-backed database, you’ll want to fix up AppDelegate.rb once you’ve created your project.
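
The relevant change looks roughly like the sketch below; it follows the general shape of the template's persistentStoreCoordinator method, but the store directory and the "MyApp" name are placeholders, so check it against your own AppDelegate.rb:

# A sketch of AppDelegate.rb's persistentStoreCoordinator switched to SQLite;
# the store directory and "MyApp" are placeholders, and the rest of the
# template (managedObjectModel, managedObjectContext) is assumed unchanged.
def persistentStoreCoordinator
  return @persistentStoreCoordinator if @persistentStoreCoordinator

  store_dir = File.expand_path("~/Library/Application Support/MyApp")
  Dir.mkdir(store_dir) unless File.directory?(store_dir)
  url = NSURL.fileURLWithPath(File.join(store_dir, "MyApp.sqlite"))   # .sqlite, not .xml

  @persistentStoreCoordinator = NSPersistentStoreCoordinator.alloc.initWithManagedObjectModel(managedObjectModel)
  error = Pointer.new(:object)
  added = @persistentStoreCoordinator.addPersistentStoreWithType(NSSQLiteStoreType,   # not NSXMLStoreType
                                                                 configuration:nil,
                                                                 URL:url,
                                                                 options:nil,
                                                                 error:error)
  unless added
    NSLog("Failed to add persistent store: #{error[0].description}")
    @persistentStoreCoordinator = nil
  end
  @persistentStoreCoordinator
end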

Conventions and Features

I know there’s a lot of disagreement over Active Record’s design, but I happen to like it a lot, and many of the guidelines of Rails creep into my code. I especially like the presumption that there’s an easy way to do whatever you’re doing, which the code handles for you automatically, but if you want to fight the convention, you can; it just won’t be as clean. There’s a lot of ‘convention’ involved in the Entity class, especially in the presumption that you’ll have model classes which are named identically (including case!) to the Core Data models you’ve created, i.e. generally with the first letter uppercased. You should be able to override that by setting the entity_name in your subclasses with something like:

require 'entity'

class OddlyNamedModel < Entity
  self.entity_name = 'oddly_named_model'
  # ...
end

but it’s not something you should do lightly, or at all if you’re creating a new project, and embarrassingly, I haven’t tested it.

Attributes

Attribute names were something else to deal with; I typically name my attributes in lower case while, as mentioned, my entities have their first letter capitalized. So my Category.find_first(context, by_name:category_name) call will look for an attribute called ‘name‘. I am aware that some people capitalize the first letter of their attributes, so Category.find_first(context, byName:category_name) would work if you have an attribute called ‘Name‘, and looks very Objective C-ish.

Limits and Offsets

It’s straightforward to index into your results, using Category.find_all(context, by_name:common_name, offset:20, limit:10); this would search for all instances whose name attribute matches common_name and then, starting from the 20th, pull down 10 of them. In SQLite terms, this translates into something approximately like: SELECT * FROM ZCATEGORY WHERE NAME = ? LIMIT 10 OFFSET 20

Conditions (the escape hatch)

Another useful feature is a conditions key, which provides a single entry of raw predicate logic, essentially. An example for finding all categories that end in ‘nt’ would be: Category.find_all(context, conditions:['name like %@', 'nt']) which translates in SQLite terms to: SELECT * FROM ZCATEGORY WHERE NSCoreDataLike( ZNAME, ?, 0)

Debugging

One incredibly useful built-in capability (and the source of the above approximations of the commands issued by Core Data) I’ve found for debugging SQLite-based Core Data code is to pass -com.apple.CoreData.SQLDebug 1 to the application on the command line, in order to view the queries as they’re executed. From the command line, in the root of your app directory, that looks something like this:

bash$ build/Release/MyApp.app/Contents/MacOS/MyApp -com.apple.CoreData.SQLDebug 1

Conclusion

I’ve tried to make an extremely simple class that I can subclass for my entities, that allows me to do straightforward find operations in a way that doesn’t feel overly complex, and fits in a Ruby-ish style. It’s working so far for my code, but I can’t promise it’ll work for everybody’s. Especially if you’re facing down historical schemas, or other issues, my defaults are probably not going to be appropriate for you. It’s definitely less code than the Objective C equivalent, and feels…more comfortable for me.

I’m VERY interested in feedback, and improving this code. As I stated at the outset, the best I could be considered is a hobbyist Objective C coder, and I’m only just starting to dig into the power that MacRuby has. Please feel free to provide pointers, comments, ideas, features, or (if necessary) derision. Especially feel free to fork entity.rb and make corrections or additions as you see fit.

— Morgan Schweers, CyberFOX!

Update: A friend and quite excellent Objective C developer David Brown provided his own take on the Entity concept; according to him it’s untested and needs a few tweaks, but the basic concept should work equally well; check it out on his own gist of entity.m. It’s like a Rosetta stone between MacRuby and Objective C!


Greetings,

This is a very informal, off-the-cuff survey of some web applications in the Rails ecosystem that have varying payment plans, how they present them, and some of the differences among them.  The idea is not to compare the services, but to start to get a handle on how to price services on the web.

Background

I’m working on launching a small, very niche Rails-based web application in the near future, and am looking to charge a small amount per-month for it.  Mostly as a way to make ends meet while I’m unemployed, but also because I’d like to provide a few extra features to those users who already donate to the free JBidwatcher application, and because the web application uses S3 and I have to defray those costs.  While I’m very grateful my users are contributing because JBidwatcher itself is useful to them, I’d like to offer a ‘premium’ value both for folks who have already donated, and for people who don’t feel comfortable just sending money, and want to see something extra for it.

This led me to be curious about pricing.  I pay for several web applications already, and expect that eventually I’ll have to pay for more if Google starts actually asking for money. 🙂  I decided to dig into the applications that are out there, and see what kind of pricing models I should be looking at.

From my informal survey, a lot of the subscription plan models used by Rails-oriented companies naturally grew out of 37signals and their fanatic devotion to the concept that web applications should be good enough to charge for, and should be charged for.

The products I’ve looked at for this are Lighthouse, Tender, GitHub, Highrise, Basecamp, Campfire, NewRelic RPM, CrazyEgg, Hoptoad, Thunder Thimble, and Pingdom.  I’m sure there are many others out there, but those are the ones I deal with on a semi-regular basis.  I have paid accounts with GitHub and Pingdom, and have used (at other companies) paid accounts on Hoptoad and NewRelic RPM.

One specific attribute of all of these applications is that they have varying levels of paid accounts, not just a free/paid[/lifetime] split like many other services (LiveJournal, Flickr, LibraryThing, to pick a few that I personally use).  I like the spectrum approach, as it gives users a choice as to how much service they’ll need and how much they’re willing to pay for it.

Raw Pricing Plans

First I want to present the ‘raw data’, a link to the plans and a brief table with the plan names and the monthly cost.  Then I’ll point out a few things I found interesting about the data.

Lighthouse*

Gold Silver Bronze
$100 $50 $25
* and a free plan

Tender Support

Basic Plus Premium
$19 $49 $99

Free trial is 30 days only.

GitHub

Open Source Micro Small Medium Large Mega Giga
Free $7 $12 $22 $50 $100 $200

Highrise*

Max Premium Plus Basic Solo
$149 $99 $49 $24 $29
* and a free plan

Basecamp*

Max Premium Plus Basic
$149 $99 $49 $24
* and a free plan

Campfire*

Max Premium Plus Basic
$99 $49 $24 $12
* and a free plan

NewRelic RPM

Lite Bronze Silver Gold
Free $40 $85 $200

CrazyEgg

Pro Plus Standard Basic
$99 $49 $19 $9

Hoptoad

Egg Tadpole Toad Bullfrog
Free $5 $15 $25

Thunder Thimble

Free Trial* Tiny Small Medium Large Extra-Large
Free $9 $19 $39 $79 $119

*Free trial is 30 days only

Pingdom*

Basic Business
$9.95 $39.95
* and a free plan

So what’s interesting here?

Lighthouse, Tender, GitHub, Highrise, Basecamp and Campfire all differentiate on disk space used.

Lighthouse, Tender, GitHub, Highrise, Campfire, Hoptoad and Thunder Thimble all have some variation on the concept of ‘user’ that you can pay more for more of.

In everything except NewRelic and, to a lesser extent, Hoptoad, all the capabilities of each application are available at all user levels, just in varying quantities.  Feature distinction only exists in those two applications.  In Hoptoad’s case, the distinction is between free and non-free; if you’re paying, you get all the services.  This leaves NewRelic as the only one that deeply distinguishes between features available at different paid levels.

GitHub, Highrise, Campfire, and Hoptoad all have SSL support as an ‘add-on’ feature; it’s not part of the free accounts, and in some cases it’s not part of the basic level paid accounts either.

Discounts

When I signed up for Pingdom, they sent me a ‘70% off the first year’ invitation, which reduced the price to roughly $3/mo. for all the basic plan amenities; presumably under the theory that they will be able to re-subscribe me in a year at full price.

NewRelic is running a 20% off discount currently, which lasts as long as you have a paid account.

Pricing Plan Layout

Historically I seem to recall most of these sites had the table style of comparison chart (still used by NewRelic, Hoptoad, Thunder Thimble, and CrazyEgg), but most have converted to the ‘box’ style of comparisons.  GitHub still has the table when you view the plan chart from your account settings, but other than that Lighthouse, Tender, GitHub, all the 37signals products, and Pingdom are using separate boxes for each plan level.

The 37signals products and Pingdom use an outsized box to emphasize one of the plans, presumably to drive signups to that plan.  CrazyEgg does the same thing within their table style, also.

I kept the order in which the various products displayed their prices, and it’s noteworthy that all the 37signals products, CrazyEgg, and Lighthouse start with the larger plans on the left and go down from there.

It’s also interesting that all the ones marked as ‘and a free plan’ de-emphasize the free plan by putting it in small text under the large table of paid options.  Two of them, Tender and Thunder Thimble, offer a 30-day free trial but no ongoing free plan.

GitHub and NewRelic are the only ones whose plan details go below the fold.  GitHub’s plan upgrade doesn’t, but their new sign-up plan list does.

Disk Usage

So for the applications which differentiate on disk usage, how much does $50/mo. get you?

Lighthouse Tender GitHub Highrise Basecamp Campfire
2GB 5GB 6GB 10GB 10GB 10GB

This pattern is roughly the same for the $24 price point for all of them: 500MB for Lighthouse and Tender, 2.4GB for GitHub, and 3GB for the 37signals products. This suggests that pricing on disk usage ranges from 20MB to 200MB per dollar per month, a pretty wide range. Estimating based on S3 costs suggests a per-dollar-per-month storage amount of around 2.3GB, but that relies on one upload, one download, and ongoing storage. If your usage is asymmetric, or storage is temporary, the S3-based cost can vary a lot.
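
For the curious, the back-of-the-envelope S3 arithmetic is below; the per-GB rates are my assumptions based on the published S3 pricing at the time, not exact quotes:

# Rough S3 cost check: one upload, one download, and a month of storage per GB.
# The rates are assumptions (approximate 2009 S3 pricing), not quoted figures.
storage_per_gb_month = 0.15   # $/GB-month stored
upload_per_gb        = 0.10   # $/GB transferred in
download_per_gb      = 0.17   # $/GB transferred out

cost_per_gb = storage_per_gb_month + upload_per_gb + download_per_gb
puts "GB per dollar per month: %.1f" % (1.0 / cost_per_gb)   # roughly 2.3-2.4, depending on exact rates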

Users

For the applications which vary pricing based on users, how much does around $24/mo. get you (I chose this number since Hoptoad doesn’t have a $50 price point)?

Lighthouse Tender GitHub Highrise Campfire Hoptoad Thunder Thimble
15 5 10 6 25 32 8

I believe this is a wide range mainly because per-user data is usually relatively light, and so it has more to do with how complex the application’s interaction between users is, rather than a real per-user cost. Still, some numbers do come out of this. The cost/user ranges from $0.78 to $4. At the $50 price point, the cost/user for qualifying apps is $0.82 to $3.27.

Summary

I don’t pretend to know pricing, or sales, but I like to believe that in aggregate the people putting these sites together do.  I see a pretty good argument here for feature parity among price points, with the quantities being what varies between prices.  There is clear value in the number of users and the disk space used, so those are early things to look at when pricing an application.  SSL support is a common feature of paid plans, and not of free plans.

There’s a definite movement towards boxed plan details, over tabular feature comparisons.  Ongoing free plans still exist in the majority of applications, but are de-emphasized in most, guiding users towards the paid plans.  Overall, the plans are simple, only falling below the fold in two cases, and relatively easily consumable in all.

The lowest payment point plans are $5-40, with a bare majority falling in the $5-12 range, and all the rest but NewRelic falling in the $19-$25 range.

Closing

I hope this has been an interesting and potentially useful survey of a few pricing plans for applications generally in the Rails ecosystem.  Any mistakes are mine, and I’d very much like to hear about them so I can fix them.  Other data points are welcome, and points I might have missed that would be valuable to folks thinking about pricing are welcome, and even encouraged!

I did this for my own edification, but I’d also very much like to know if others find it interesting!

Best of luck, and may figuring out pricing not be as much of a pain for you as it is for me!

—  Morgan Schweers, CyberFOX!


Greetings,

What alternative do you suggest for using models in migrations? I was in several situations where I had to not only change the underlying db structure but change the contained data, too.

Data changes, especially moving data around, are almost always rake task-worthy in my experience.

The other side of that, populating large amounts of seed data into new databases, is a difficult task no matter the method; seed_fu attempted to deal with it, but it’s not an optimal solution and pretty old. I’m not even sure if it works anymore. It’s worse if you need the seed data to be from a legacy database in tests (e.g. a nutritional database). Reloading lots of data each time a clone_structure_to_test is done makes your tests very slow.

I break down migrations into three kinds: structural (tables, columns, indices, etc.), data (pre-populating tables, etc.), and procedural (moving data around, recalculating counts, etc.). The first is what I strive to limit migrations to. I feel like there should be a good answer for the second, and Rails 3.0 has a ‘Simplest Thing That Can Work’ feature in Rails commit #4932f7b. The third I try to relegate to rake tasks that are usually run once, on deployment of the branch.
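
As I understand it, that feature is the db/seeds.rb file that rake db:seed loads; a minimal sketch (the model and values here are placeholders):

# db/seeds.rb, loaded by `rake db:seed`; a minimal sketch with placeholder data.
["Antiques", "Books", "Electronics"].each do |name|
  Category.find_or_create_by_name(name)
end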

The procedural tasks don’t need to be run when building a fresh database, because there isn’t legacy data to correct. That’s why you can usually define the model in the migration to force it to work even if the real model is gone or renamed; there’s no data, so the operations often don’t matter. If they don’t NEED to be run when building a fresh database, I try not to put them in the migrations.
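
The usual form of that trick looks something like the sketch below; the migration, model, and column names are placeholders rather than anything from a real app:

# A sketch of defining a throwaway model inside a migration so the data fix-up
# keeps working even if the real model is later renamed or removed.
class BackfillAuctionCounts < ActiveRecord::Migration
  class Auction < ActiveRecord::Base; end   # lightweight stand-in for the real model

  def self.up
    add_column :auctions, :bid_count, :integer
    Auction.reset_column_information         # pick up the column added above
    Auction.update_all("bid_count = 0")      # backfill existing rows; a no-op on a fresh database
  end

  def self.down
    remove_column :auctions, :bid_count
  end
end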

It’s not ‘hard and fast’, because I usually work in startups and small companies, where dogma doesn’t work so well. Imagine, though, a large and thin piece of foam. It’s flexible, and you can make it into all sorts of shapes, and yet it’s simple. Each time you add code that makes reasonable changes in the future painful, it’s like putting a thin glass rod into the foam. It’s still flexible, but there are some bends you can’t make without breaking. Add too many and you’ve got an inflexible and brittle object, no matter how dynamic the base material is.

The fear of breaking things by changing the code is deeply demotivating for everyone.

I know I waterboarded that analogy, but hopefully it makes sense…

— Morgan Schweers, CyberFOX!


Greetings,

[Edit: Since writing this article up back in early March, I’ve moved on from this job. The folks who are now maintaining it at least know where the pain points are, can run migrations safely, and can deploy it locally, to dev servers, and to the main deployment area.  It’s a working app; although I never got code coverage above about 45%, the coverage was decent in the core app areas by the time I left.]

Mike Gunderloy had an interesting article entitled ‘Batting Clean-up‘, which was very timely for me.  I’ve just started maintaining and trying to improve a Rails app developed by an ‘outsourced’ group. The only tests were the ones generated automatically by ‘restful authentication’, and they were never maintained, so they didn’t come close to passing. Swaths of the program are written in terribly complex (and sometimes computed) SQL, migrations didn’t bring up a fresh database (poor use of acts_as_enumerated causes great hurt), and vendor/plugins should have just had one named ‘kitchen_sink’.

It hurts to see Rails abused like that; you want to take the poor application under your arm and say, ‘It’ll be okay…we’ll add some tests and get you right as rain in no time!’, but you know you’d be lying…

I did much of what Mike described (half the gems it used were config.gem’ed, the other half weren’t), vendor’ed rails (it breaks on newer than 2.1.0), and brought the development database kicking and screaming into life. There was no schema.rb, it had been .gitignore’d, and the migrations added data, used models, and everything else you can imagine doing wrong. (Including using a field on a model after adding that column in the previous line…I don’t know what version of Rails that ever worked on…) I didn’t want a production database; who knows what’s been done to that by hand. I want to know what the database is _supposed_ to look like; I can figure out the difference with production later.
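
For reference, the Rails 2.x way to declare those gems is a config.gem line per gem in config/environment.rb; a sketch, with placeholder gem names rather than the app's actual dependency list:

# config/environment.rb (Rails 2.x); gem declarations go inside the existing
# Initializer block. The gems named here are placeholders.
Rails::Initializer.run do |config|
  config.gem "will_paginate", :version => "~> 2.3"
  config.gem "aws-s3",        :lib     => "aws/s3"
  config.gem "hpricot",       :version => ">= 0.6"
end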

Once the clean (only data inserted by migrations) dev database was up, I brought the site up to see if it worked. Surprisingly enough, it did; apparently they used manual QA as their only testing methodology. I appreciate their QA a lot; it means it’s a working application, even if it’s not going to help me refactor it.

I ran flog and flay and looked at the pain points they found to get an idea of how bad things might be. I picked an innocuous join table (with some extra data and functionality) to build the first set of tests for, which gave me insight into both sides of the join without having to REALLY dig into the ball of fur on either side. I viciously stripped all the ‘test_truth’ tests. I looked for large files that flog and flay hadn’t picked up, to pore over. Custom rake tasks are also worth checking out, because those are often clear stories and easy to understand quickly in a small context.

Checking out the deployment process tells you a lot also, although it turns out this was stock engine yard capistrano.

Skimming views (sort by size!) will tell you a lot also, especially when you find SQL queries being run in them…

Use the site for a little while, and watch the log in another window. Just let it skim by; if you’ve looked at log files much, things that seem wrong will jump out even if it’s going faster than you can really read.

In my case, the code’s mine now, so it’s my responsibility to make it better before anybody else has to touch it. I’ve got about a week of ‘free fix-it-up time’ before I need to start actually implementing new features and (thankfully) stripping out old ones… At my previous company, I was the guy pushing folks to test, now I’ve inherited a codebase with zero tests. Poetic justice, I suppose… 🙂

Good luck!

— Morgan Schweers, CyberFOX!



Greetings,
The JBidwatcher home page, forums, and svn are all down for a few hours while my hosting service fixes some power problems in their data center.

I’m really sorry I didn’t set up a fallback DNS, or something else, beforehand. Offhand, I don’t know a good way to handle my sole host being powered off…

— Morgan Schweers, CyberFOX!


March 19th, 2009

gotAPI Fluid Icon

Greetings,

I’m definitely not an artist, and there’s not much to work with from gotAPI: there’s no logo on the blog or on Twitter, and the site’s main logo is textual. The favicon is the only piece of abstract iconography to work from, so that’s what the Fluid icon is based on.

It works for me because I’m a heavy-duty tab user; there are often so many tabs open that icons are all that’s left, so I’m used to looking for the gotAPI favicon.

With those caveats in mind, this png is what I’m using as my Fluid icon.

gotAPI Fluid Icon

— Morgan Schweers, CyberFOX!


Greetings,
That quote is from a Treasury spokeswoman, quoted by Forbes, on why the bank bailout will need $700 billion.  A spokeswoman who probably has joined the unemployed today.

There are a lot of people suggesting that we should let them all die (including me, in a fit of deep fury, when the bailout was first proposed).  Others have suggested that simply improving the more stable banks’ ability to give mortgages would help, so folks could refi with those and leave the bad financial companies to wither.

Unfortunately it’s long past being just a mortgages issue.

The original sub-prime mortgages have been securitized into investments which returned a good percentage with an aggregated low risk, and were part of the formula that many companies used to approximate future revenues.  They loaned and invested based on that approximation.  Several relatively small companies sold ‘insurance policies’ (it’s more complex than that, but it comes down to hedging against risk) that they weren’t sufficiently collateralized to back.  It turns out there was a lot more risk involved than was visible.  When the bad loans became endemic, these insurance policies were called in by the major companies to preserve capital.  The small companies folded, not able to actually provide the liquid assets needed to back the policies.

Where we are now is that there have been huge losses, and companies who offered hedges against those losses are backing out of their obligations because they don’t have the liquid funds to meet them.  Those companies will go bankrupt and close, because nobody will ever want to do business with them again (and there’s probably quite a bit of legal action to come).  They are mostly small to medium-sized hedge funds and independent insurers, basically.

This leaves the larger companies holding the bag for billions in risky, unhedged investments.  They want to get rid of them, not because they are all going to go to $0, but because there’s NOBODY who’s willing to provide insurance on them right now, and in the financial market an uninsurable investment is not acceptable.

The bailout plan is to allow the government to acquire these securitized mortgages and hold onto them while the financial system rebuilds itself, and companies who are sufficiently capitalized to insure against the (now recognized to be higher) risks.  Then the government can re-sell them back into the system slowly, hopefully recouping a percentage of the amount that ends up being spent.

The problem right now is that EVERYBODY wants to sell, and NOBODY wants to buy.  In that kind of a market, even good quality doesn’t protect you from being pounded flat.

The reason for the government investment is to provide the market time to come to its senses, and breathing room to realize that these are not universally bad investments; the risk was just underpriced.  To make up a homily on the spot: if everybody’s terrified into immobility by the mouse in the room, nobody’s able to go and bring the cat in.

The reason this is far beyond mortgages is that when all these big companies have a large amount of risky investments that they can’t hedge against, they don’t lend money, because they don’t know how much spare capital they have.

When these large financial institutions don’t lend money, people can’t get home loans, home improvement loans, student loans, car loans, business loans.  This trickles down to every single segment of society, from CEOs to greeters at Home Depot to startups to teachers to mechanics.

That’s the disaster that we need to avert.  And it pisses me off to no end that we’re in the situation where we HAVE to hand money over.  I’ve railed against this in public and private, but of all the insane things about it that make me deeply infuriated, the worst of all is that now that it’s gotten to this point, we have no real choice.  We’re forced to make a move like this.

I too want heads to roll.  The most common phrase around the office regarding this is ‘Heads.  On.  Pikes.’  There must be accountability, and it must be large and visible, not detail-oriented and generally annoying like Sarbanes-Oxley.  Several CEOs should lose their jobs, sans parachutes.  Several of the regulations which were eliminated in 1999 will be reinstated.  Maybe there’ll even be CEO compensation limits, and some government ownership of these companies in exchange for government assistance.

The reason that ‘$700 billion’ was picked out of the air is not because there’s some knowledge of how many of these securitized mortgages are out there (there isn’t, and if there were, I bet it’d be a lot more than $1T).  It’s because what is needed, far more than anything else, is a symbol of motion.  It’s for one person to come into that room of fear-frozen people and corner the mouse for a few minutes while someone else goes and grabs the cat.  It had to be a number that shocks the conscience, because otherwise people would be asking nervously, ‘Is..that going to be enough…?’  And calming fear is, in the end, what this is really about.

That all said, one political party has de-regulation as an express political plank of their platform.  I know I’m generally preaching to the choir here, but they should learn just how out-of-touch that particular plank is on November 4th.

—  Morgan Schweers, CyberFOX!


Greetings,

So I’ve had this in my playlist for years,  but shuffle just brought it up again, and the need to push it welled up.  It may only make sense to folks who’ve lived in New York and California, but…

Dar Williams – Southern California wants to be Western New York

It’s not my usual fare (leaning more towards meaningless high energy pop, techno, etc., and the occasional story-rock), but it makes me oddly nostalgic.  🙂

—  Morgan Schweers, CyberFOX!