Pin Dancing: Ravi Mohan's Blog<br />
<br />
Silence and Flow - Minor updates to my preferred workstyle (2013-02-02)<br />
<br />
Recently I found myself, half by intent and half by circumstance, cut off almost completely from the internet. No email, no Twitter, no HackerNews. I don't have TV or cable. I also refused to answer my cellphone[1], handing it over to a friend to monitor. (He knew how to contact me in an emergency.)<br />
<br />
And... I loved it. There were a few withdrawal symptoms for the first few days, but I've never been so productive in my life.<br />
<br />
I'm thinking of making this kind of isolation + total focus a big part of my work going forward: alternating 4-6 weeks of focused work in isolation with 4-6 weeks of connected, 'normal' existence (with a few days of travel to random places in between).<br />
<br />
Which might be really soon. I have a pile of mathematics texts to work through. My initial efforts to learn mathematics enabled me to break free of the 'take some random US business's database, build a web front end, and call yourself a software engineer' phase of my career, and make a living writing interesting programs for interesting people doing interesting things. Now is a good time to ramp up on the math (and then write more interesting programs doing interesting things; rinse, repeat ;-)).<br />
<br />
Mostly my work (post the wasted enterprise dev years) involves a lot of programming and a little math. This year the ratio might be inverted. A scary, if exhilarating, thought - I find creating proofs way tougher than creating programs. With a good maths text, half a page is often a good day's progress. But on the other hand, it feels somehow more fundamental and satisfying. <br />
<br />
You don't really need the internet to work through math texts. You don't even need a computer. You need a pile of paper, pens, and a good place where you can work undisturbed.<br />
<br />
I did miss some of the news - I didn't get to know of Aaron Swartz's suicide, for instance (poor fellow, R.I.P.), or the latest shenanigans of India's dysfunctional government, or the French intervention in Mali. But by and large, the world didn't change all that much in 6 weeks, and not having to follow or respond to a constant stream of news and tweets and emails and phone calls is a refreshing experience.<br />
<br />
The one complicating factor in all this is the prevalence of compelling MOOCs put out by Coursera, edX et al. To 'attend' a MOOC you need the internet, and then it is all too easy to drop into checking email, or peeping at Twitter, and then you are back in the roar of the world flowing by. I don't quite have a solution yet. Working on it.<br />
<br />
[1] Phone calls are worse than any other communications medium because the damn thing actually *rings* and jerks you out of the zone to deal with (mostly) trivia. Switching into silent mode every time you want to do some work is a major pain, because (on my phone) you have to go through multiple button presses to accomplish the mode change. And then if you see a friend's missed call you feel guilty if you don't call back. But if you don't see anything, you can just get on with work and apologize later ;-).<br />
<br />
The meaning of "I don't have a good (startup) idea" (2012-11-28)<br />
<br />
I know a few dozen people who are looking for a 'good startup idea' and are seemingly held back from starting up only because they don't have one.<br />
<br />
Which is curious, because there is a notion floating around that ideas are not important - or at least not very important - to startups, and that execution is all that matters. In other words, you can have a so-so idea, execute brilliantly, and succeed.<br />
<br />
This is a slippery enough concept that nailing down exactly what is meant is difficult, leave alone arguing against it. But I've consistently encountered situations where talented, hardworking people say they have trouble coming up with startup ideas.<br />
<br />
The other day I met a friend whom I hadn't seen for years, and, like everyone who is halfway technically competent, he is also "planning a startup". One of the questions he asked me was "So, do you have any decent ideas for a startup?" My answer - "Sure, I have a few dozen" - seemed to surprise him. I suspect that 'I have plenty of ideas, sure' is a somewhat unusual answer amongst would-be startup folks (who can also code).<br />
<br />
So this friend is a sharp dev, who can build anything he can conceive, has worked in startups, and still hesitates to start his own company because he doesn't 'have a good idea'.<br />
<br />
<br />
Yet another friend, an ex-dev who works as a manager in a (time and materials) software services company, tweeted the other day about his (mild) frustration with an 'agile methodology' process bottleneck. A mutual friend, an immensely successful entrepreneur, wrote back: "Quit now. Life's too short for this nonsense. Build something, give it your best shot and you'll love every minute of it." And my manager friend replied (I am not doing a startup because) "I don't have a good idea yet".<br />
<br />
A third friend, very talented, with massive experience in business consulting, able to spot inefficiencies in half a dozen industries, still went into a loop of "What is a good idea for a startup?" and stayed there for a few weeks.<br />
<br />
When you see something repeat three times, it is probably worth investigating what the underlying dynamic is.<br />
<br />
Paul Graham, in his recent essay <a href="http://paulgraham.com/startupideas.html">"How to Get Startup Ideas"</a>, says:<br />
<br />
"The very best startup ideas tend to have three things in common: they're something the founders themselves want, that they themselves can build, and that few others realize are worth doing."<br />
<br />
Turning these into questions, an alternative to asking yourself "What is a good idea I can build a startup on?" is to ask yourself<br />
<br />
(a) What do you (personally) want (to exist in the world)?<br />
(b) Can you build (the answer to the previous question)?<br />
(c) Do other people see, or are other people working on, the same opportunity? (Ideally, few are.)<br />
<br />
These are easier questions to answer than 'What is a good idea for us to build a startup around?'. So how come people aren't attempting to answer them?<br />
<br />
The rest of the essay deals with the characteristics of good ideas vs mediocre ones, how to select between multiple ideas, and so on. (The whole essay is well worth reading.)<br />
<br />
Now let's look at someone who thinks very differently. <br />
<br />
<br />
Paras Chopra, CEO of Wingify, is pulling in millions of dollars, from an office in India with a very small team, at an age when most of us are doing entry-level jobs in IBM or Infosys or wherever. He found out the hard way that building 'cool' things doesn't necessarily bring in the money, so he decided to explicitly focus on making money. And he did (make tonnes of money).<br />
<br />
Paras, like PG, is a doer, not a self-help guru. I am fairly allergic to self-help pablum from people whose job is selling self-help pablum in the form of books or conferences - hence my dismissal of the whole "Lean Startup" (TM) idea, which is mostly tonnes of anecdotes wrapped around a sliver of a 'motherhood and apple pie' homily (hello, 'agile'!). So yeah, I don't really believe in "set up a landing page and fool people into leaving their emails so you can spam them" type approaches.<br />
<br />
Paras wrote a trio of blog posts illuminating his philosophy -- <a href="http://paraschopra.com/blog/entrepreneurship/webapp-is-not-going-to-make-money.htm">"Sorry your "cool" webapp is probably not going to make money"</a>, <a href="http://paraschopra.com/blog/entrepreneurship/how-to-find-startup-ideas-that-make-money.htm">"How to find startup ideas that make money"</a>, and <a href="http://paraschopra.com/blog/entrepreneurship/validate-startup-idea.htm">"Validate your startup idea by asking three simple questions"</a><br />
Again, his advice can be turned into the 'questions to ask yourself' format.<br />
<br />
From his second essay, a more fruitful set of questions to ask yourself (than "What is a good startup idea?") is:<br />
<br />
(a) What product is already making money for other people?<br />
(b) Do you find this product (area) interesting, or aligned with your skill set?<br />
(c) What is a niche within the product area where you can launch a competing/disruptive product?<br />
<br />
<br />
Again, these questions are more focussed, and easier to answer, than the overarching "What is a good idea for a startup?"<br />
<br />
It is way easier to answer "What product is already making money for other people?" or "What do you (personally) want (to exist in the world)?" than "What is a good startup idea?"<br />
<br />
If spending a day or two with these questions (and other 'how to do it' advice from people who have already walked this path) would generate half a dozen ideas, why do people still agonize over finding the right idea?<br />
<br />
I think the real problem is more subtle. (Putting on my cynical hat) I think most people going around with "What is a good startup idea?" have no more intention of following through if they did get one than people who keep saying "I am going to write a novel (someday)". I don't think serious writers (or even wannabe writers) greet each other with "So, do you have any nice ideas yet for a novel?". They might have rough drafts of their next novel, but I doubt they spend time agonizing over 'good ideas' over beer.<br />
<br />
Suppose you did get that 'perfect idea'. Are you really going to resign from your job? And explain to your spouse, in-laws and kids that you gave up a perfectly good job for an uncertain shot at changing the world? Then work insane hours and get into unknown territory like staffing, fund-raising, etc.?<br />
<br />
Isn't it easier to just <i>talk</i> about it? Say "I <i>would</i> be attempting a startup if only I had a good idea"? The easiest way to never start up is to dismiss any embryonic idea with some form of "Yes, but ..."<br />
<br />
My rather cynical conclusion is that a "good startup idea", for most people, is something to think about occasionally, and talk about, but not really something to execute on. Which is perfectly fine, of course.<br />
<br />
None of this is to argue that a startup is somehow a more noble endeavour than holding down a BigCorp job, or consulting. I also suspect that startups are started either by people who've never held down a job (and don't want to), like the mythical Stanford students in their dorm, or by people who've had enough of the corporate bullshit that permeates most BigCo jobs, and decide to do a lifetime's work in n years and raise their 'Fuck You money', so they don't have to work anymore. People who are ok with their jobs don't (and probably shouldn't) try.<br />
<br />
So why don't <i>I</i> do a startup? See above, and tweak the details a bit.<br />
<br />
All this applies to me just as to everyone else. My 'problem' with 'starting up' is not a lack of ideas. I don't have any trouble coming up with a few dozen ideas whenever I want to. I am a good enough programmer that I can build most things I can think of. <br />
<br />
I don't want to start a company all by myself (this could change). Most people I'd like to work with live in the USA, and I have no plans to live there, ever. I also don't want to work on technically trivial projects (I'd be very unhappy working on a Groupon clone, or doing Ruby on Rails consulting, say). Meanwhile, consulting on machine learning gives me my 'tackle hard problems' fix. What's not to like?<br />
<br />
Are any of these insurmountable obstacles? Not at all. I just choose to use these 'reasons' as excuses to go on doing what I do (vs actually living up to potential - what a scary thought ;)).<br />
<br />
So I'm stuck, just like most other people. And I know why I'm stuck. I just try not to fool myself too much ;)<br />
<br />
On the "Do you want to be a programmer at fifty?" thing (2012-10-24)<br />
<br />
Once upon a time, James Hague <a href="http://prog21.dadgum.com/154.html">asked an interesting question</a> on his blog.<br />
<br />
"When I was still a professional programmer, my office-mate once asked out of the blue, 'Do you really want to be doing this kind of work when you're fifty?'"<br />
<br />
James went on to identify two kinds of programming:<br />
<br />
Type A "work(ing) out the solutions to difficult problems. That takes careful thought, but it's the same kind of thought a novelist uses to organize a story or to write dialog that rings true. That kind of problem-solving is satisfying, even fun."<br />
<br />
<br />
Type B "what most programming is about - trying to come up with a working solution in a problem domain that you don't fully understand and don't have time to understand... skimming great oceans of APIs that you could spend years studying and learning, but the market will have moved on by then ... reading between the lines of documentation and guessing at how edge cases are handled and whether or not your assumptions will still hold true two months or two years from now.. the constant evolutionary changes that occur in the language definition, the compiler, the libraries, the application framework, and the underlying operating system, that all snowball together and keep you in maintenance mode instead of making real improvements."<br />
<br />
He went on to state that while he'll continue doing Type A programming, he isn't particularly interested in Type B (presumably at fifty).<br />
<br />
I was looking forward to some good discussion on this, but HackerNews (which, in spite of its flaws, still has no competition) <a href="http://news.ycombinator.org/item?id=4611337">went off into some tangents</a>, primarily about ageism in the software industry, and there was surprisingly little discussion about what James actually said.<br />
<br />
Now, is ageism a problem? Yes, it is. As people grow older, they are expected to do anything <i>but</i> programming. It is a cultural thing and not necessarily logical. I know someone who is a good programmer, but left Bangalore for a decade (programming all the while) and now can't get an interview (let alone a job) because "oh, you have 18 years of experience; we are looking for people with two years of experience. Sorry".<br />
<br />
So, yes, ageism <i>is</i> a problem, even in Outsourcing Land, and there is plenty to be discussed, and action to be taken, with respect to ageism. But that is a topic for another day and isn't quite the problem addressed by James Hague in his blog post.<br />
<br />
In this post, I'll try to explain what <i>I</i> think (and it is just that, my opinion ymmv etc etc) about "Do you really want to be doing this at fifty?"<br />
<br />
The essence of the question is "Do you <i>want</i> to be doing this(at a future time point)". The question addresses the (evolution of) <i>motivation</i> to program, and James goes on to state that his motivation to do a certain type of programming (unfortunately this is the more dominant type of programming worldwide) decreases with increasing age.<br />
<br />
The question of motivation with respect to career activities has been discussed by a wide variety of people and a lot of research has been conducted. One interesting insight has been articulated by Dan Pink - in his book "Drive - The Surprising Truth About What Motivates Us" he identifies 3 factors that motivate us (or demotivate us) to undertake and pursue any activity. <br />
<br />
(1) Mastery - getting better at what you are doing.<br />
<br />
(2) Autonomy - the degree to which you can direct your activity.<br />
<br />
(3) Purpose (or meaning) - doing something that really matters.<br />
<br />
If you get high scores in any of the above, ideally all three, and more importantly get more and more of all the above as your programming career progresses, of *course* you'll be programming at 50. Or 60. Why wouldn't you?<br />
<br />
The problem, of course, is that in <i>most</i> programming jobs you either hit a declining slope or, at best, plateau with respect to one or more of the above as you age. If you are on a team of 50 people, maintaining some legacy leasing system written in Java, with business analysts doing the business thinking while you convert <i>their</i> thoughts into code, you are being a scribe for other people's ideas in a rigid and ageing language, in a context where you are an expensive 'resource'.<br />
<br />
<i>In general</i>, even at many product (vs services) companies, a 'line programmer' has low levels of autonomy - other people - product managers, business analysts etc - tell him what to do. Legacy codebases constrain technology choices. His 'mastery', while not non-existent, is of a shallow and frothy kind (hey, I use Rails today instead of J2EE yesterday! Node.js vs Rails, blah), and writing the n-th business app pulling data off a database and putting it on a web page for some corporate drone to use to update his TPS reports <i>crushes</i> 'being part of a higher purpose'. Little autonomy, modest mastery, non-existent purpose. No wonder few people want to be doing <i>this</i> kind of programming at fifty.<br />
<br />
Thankfully, other kinds of programming do exist. John Carmack of Id Software is still programming in his forties because programming (and till recently, being a majority shareholder of a cutting edge games company!) helps him in maximizing all three attributes.<br />
<br />
Programming is a skill, like writing. Unlike with writing, we live in a society where most people are code-illiterate. And coding ability has (some) economic value. "Software is eating the world" etc., and so anyone who is comfortable with coding can exchange that skill for money. The deeper question is whether you can trade increasing experience in the skill of programming for <i>increasing</i> amounts of money (and mastery and autonomy and purpose) as time passes. For most people that function plateaus, and then stays steady or declines.<br />
<br />
If you were someone who knew how to write, but lived in an illiterate society, you could exchange that skill for money by being a scribe at so many cents per word. You write people's letters and wills and you get paid by word count. But if you did it for thirty years, and you are still writing letters for people when you are fifty, would you be satisfied with your career? What about when your customers move to that desperate youngster who offers a lower rate per written word?<br />
<br />
A novelist uses writing in a different manner than someone who sets up as a letter writer for illiterate people. A novelist is trying to do something that uses writing grammatically correct sentences as a base skill, but the <i>core</i> of his work - plotting, characterization, dialogue, world building, etc - lies on a plane well above deciding whether to put an i before an e, or vice versa. And you don't even need much base skill. Many people are pretty bad at grammar and still write best-selling or world-changing books.<br />
<br />
Generalizing, the (conceptually) shortest step to getting away from the 'path to ageist irrelevancy' for programmers is to find a way to make money by transcribing <i>your own ideas</i> into code. This might involve, for example, stepping away from time-and-materials services programming to product development. If not by yourself, then as part of a small team. Even if you are still technically an employee, you are much more autonomous in small teams and companies (and codebases).<br />
<br />
A second way out of a programming career dead-end track is to move to something <i>related</i> where programming skill actually helps in a major way, but isn't the core of your job.<br />
<br />
If you are a Computer Science researcher who is also an excellent programmer, your primary job is the creation of new knowledge (aka research, embodied in published papers) but your programming skills will help. <br />
<br />
If you are a (technical) startup founder using cutting-edge languages and algorithms to build a superior product, your primary job is to satisfy users and pull your company ahead of the competition, and superior programming ability can help.<br />
<br />
If you are a finance expert who can <i>also</i> code, you probably have a significant edge over your competitors who have to depend on the software people to come in after the weekend to prototype your idea.<br />
<br />
Programming skill amplifies effectiveness in almost everything you can do.<br />
<br />
Of course you could find yourself in the same situation in your new career. If you still lack money, autonomy, mastery and purpose, you are back at square one. That said, being "an excellent programmer and a good X" seems like a decent plan.<br />
<br />
<span class="comment"><span style="color: black;">The idea that a programmer always has to work in a half-understood domain, transforming someone else's ideas into code, is just that, an idea. It is a dominant idea, but nothing really stops anyone from mastering an interesting domain or acquiring a complementary skill in addition to programming.</span></span><br />
<br />
That gets me to what <i>I</i> think is the right way to go about 'career planning'.<br />
<br />
Decide what increased levels of autonomy, mastery and purpose mean to you. Figure out what you need to do to get to that point. Then do whatever it takes.<br />
<br />
If increased programming skill will move you towards increasing one or more of the three attributes, work on it. If something else (like writing skill, or knowing a domain, or getting good at sales, or going to medical school) looks more promising, work on <i>that</i>. Assembly-line programming inside 'the industry', converting other people's thoughts into code in stone-age languages, is a beginning. It need not be the end.<br />
<br />
To conclude, will <i>I</i> be programming at fifty? I think so (these days I do as much maths and stats as programming, and everything feeds very nicely into everything else), but at fifty, I'll be writing novels, not scribing letters.<br />
<br />
<br />
Why I (need to) write (more) (2012-10-14)<br />
<br />
For some context on what sparked this, see this <a href="http://jsomers.net/blog/more-people-should-write">excellent post</a> and <a href="http://news.ycombinator.com/item?id=4649524">discussion on Hacker News</a> (for those so inclined).<br />
<br />
The reason I used to blog somewhat regularly is very simple - I found that I could take some half-formed thought and flesh it out by writing about it. Sometimes this would result in something <a href="http://pindancing.blogspot.com/2011/02/repertoire-method-in-concrete.html">actually useful to people</a>. Most often, I would just end up 'emptying my head', freeing me to think new thoughts.<br />
<br />
And then something happened. I got an audience. And comments. And emails. And controversies, and flames.<br />
<br />
All good, because I mostly didn't (and don't) care what people thought of anything I wrote (or what people think of me, for that matter). But there *is* a small element of reactivity and friction when you know persons X, Y and Z will be reading what you write.<br />
<br />
"Hmm, is this too harshly worded for friend X? After all, he is a big Ruby fan; if I say Ruby is a particularly brain-dead language, would this ruin his day?" And then I have to write stuff twice. First write down what I really want to write, and then go through it and delete stuff or add more explanatory material and cautionary qualifiers and so forth.<br />
<br />
I am not the only person facing this. People who are much better at writing than me apparently face this too.<br />
<br />
Someone <a href="http://news.ycombinator.com/item?id=4497691">asked</a> Paul Graham (he of the glowing essays fame) on Hacker News<br />
<br />
"<span class="comment"><span style="color: black;">what's it like to have your every written (or spoken!) word analyzed by a bunch of people? Esp. people that you end up having some form of contact with.</span></span><br />
<br />
<span style="color: black;">It seems like it would be difficult to just have a public conversation about a topic. Do you think about that much when you write?"</span><br />
<br />
<span style="color: black;">and PG <a href="http://news.ycombinator.com/item?id=4497714">replied</a> to say</span><br />
<br />
<span style="color: black;">"</span><span style="color: black;"><span class="comment"><span style="color: black;">It's pretty grim. I think that's one of the reasons I write fewer essays now.</span></span></span><br />
<span style="color: black;"><span style="color: black;">After I wrote this one, I had to go back and armor it by pre-empting anything I could imagine anyone willfully misunderstanding to use as a weapon in comment threads. The whole of footnote 1 is such armor for example. I essentially anticipated all the "No, what I said was" type comments I'd have had to make on HN and just included them in the essay."</span></span><br />
<br />
<span style="color: black;"><span style="color: black;">If pg can't escape this fate, I sure can't.</span></span><br />
<br />
<span style="color: black;"><span style="color: black;">But OTOH I am less concerned than pg about whether someone misunderstands me, because I am (comparatively) not famous, and I am not writing essays, just spewing out (comparatively) unpolished *blog posts*. I can deal with misunderstandings just fine.</span></span><br />
<br />
<br />
<span style="color: black;"><span style="color: black;">What I found harder to deal with was -- Twitter.</span></span><br />
<br />
<span style="color: black;"><span style="color: black;">Once I started <a href="https://twitter.com/ravi_mohan">tweeting regularly</a>, I found I could distill whatever I was thinking about and just tweet it. 140 characters is pretty good as a constraint. And since I regularly purge my Twitter following of idiots and nutcases, I am fairly sure I can convey exactly what I want to, and most people following me will understand (and if not, clarifications are just one to three 140-character tweets away).</span></span><br />
<span style="color: black;"><span style="color: black;"><br /></span></span>
<span style="color: black;"><span style="color: black;"><br /></span></span>
<span style="color: black;"><span style="color: black;">But as good as Twitter is, 140-character tweets aren't as good as multi-paragraph blog posts for *exploring ideas* (vs expressing their seed forms concisely). My writing has suffered, though like riding a bicycle it should come back pretty fast, and then I'll go about improving it.</span></span><br />
<span style="color: black;"><span style="color: black;"><br /></span></span>
<span style="color: black;"><span style="color: black;"><br /></span></span>
<span style="color: black;"><span style="color: black;">The goal of my writing remains the same. I write to explore thoughts and ideas and 'empty my head'. No more. No less.</span></span><br />
<br />
<span style="color: black;"><span style="color: black;">One thing I am doing differently this time is to pay even less attention than usual to comments and reactions, and not bother clarifying what precise shade of meaning I intended to convey and so forth. This is just me writing a letter to a friend every other week or so. It just takes the form of a blog post other people can read.</span></span><br />
<span style="color: black;"> </span><br />
<br />
<br />
<span style="color: black;"><span style="color: black;">And so here goes.</span></span><br />
<span style="color: black;"><span style="color: black;"><br /></span></span>
<span style="color: black;"><span style="color: black;">I'm back.</span></span><br />
<br />
<br />
<br />
<br />
Renewal. Learning. Stuff (2012-09-24)<br />
<br />
I haven't written anything here for more than a year.<br />
<br />
That will now change.<br />
<br />
I was too busy, and <a href="https://twitter.com/ravi_mohan">twitter</a> was interesting enough - the 140-character limit is good training for conciseness - that I didn't miss blogging all that much.<br />
<br />
But writing longer pieces has its own advantages, and I hope to write at least one entry a week for the next year or so.<br />
<br />
Stay tuned.<br />
<br />
PS: I hate the changes Google has brought to Blogger. A change of platform is on the agenda. I just don't have the time right now.<br />
<br />
An unimportant person's comment on Steve Jobs's death (2011-10-08)<br />
<br />
Context: Everyone and his dog is hyperventilating on the internet about the death of Steve Jobs.<br />
<br />
Here is my opinion (which, like most opinions, isn't worth very much, but hey, this is my blog).<br />
<br />
All men are mortal.<br />
<br />
Steve Jobs was a man. (A great man, but still, a man.)<br />
<br />
[Modus Ponens] Steve Jobs was mortal too. <br />
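For the pedantic: the little syllogism above really does check out by machine. A minimal sketch in Lean 4 - the names here (`Person`, `Man`, `Mortal`, `jobs_mortal`) are my own, hypothetical choices, not anything from a library:

```lean
-- Premise 1: all men are mortal.  Premise 2: Jobs was a man.
-- Conclusion: instantiate the universal at `jobs`, then apply modus ponens.
variable (Person : Type) (Man Mortal : Person → Prop)

theorem jobs_mortal (allMenMortal : ∀ p, Man p → Mortal p)
    (jobs : Person) (hMan : Man jobs) : Mortal jobs :=
  allMenMortal jobs hMan
```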
<br />
Now he has died. The world endures. Life goes on. <br />
<br />
<b>Your</b> (and my) time to depart will soon be here. The world will still endure. And life will still go on. <br />
<br />
I read somewhere that the one regret most people have at the moment of death is about how they should have done X or Y instead of A or B. <br />
<br />
Get back to work. Do X or Y instead of A or B. Die happy, when your time comes.<br />
<br />
To the degree you admire Jobs, emulating his virtues in your own life is a more fitting tribute than another silly comment about how he was as influential as Plato and Aristotle (an idiot actually said this on HN).<br />
<br />
Update: on Stallman's comment on Steve Jobs's death. People have different ideas on whether a person's achievements were good or bad. This affects their judgement of whether a person's death was "good for the world" or not.<br />
<br />
Stallman thinks that the end of Steve's influence on computing (note: he clearly distinguished it from Steve's death itself) is a good thing. And said as much. <br />
<br />
I don't agree. <br />
<br />
I think, for all his faults (and like you, me and every human who ever lived, he had some) Steve's influence was beneficial (overall) and I wish he'd lived longer. <br />
<br />
But I also think it is ok for people (including Stallman) to express their opinions, even when I find them not in agreement with my own. <br />
<br />
I look forward to the day when everyone (including me) will shut up about how other people should think exactly like everyone else (or else we'll all get self-righteous and puffed up and hyperventilate).<br />
<br />
<br />
And now I'll go back to coding. (Thank you for reading this far. You really should be doing something useful instead!)Ravihttp://www.blogger.com/profile/03630087669712445498noreply@blogger.com5tag:blogger.com,1999:blog-8993901435573921786.post-43296066380677479782011-08-15T03:14:00.001-07:002011-08-15T03:14:22.138-07:00On Owning a Kindle<br />
<br />
I was gifted a Kindle a month or so ago. I like it for what it does.<br />
<br />
Should you buy one? If you are a book lover, one of those people who always have a book on hand, or reach for one when you have an hour to spare, you definitely should. If you read mostly technical or math books (which require a lot of flipping back and forth and good rendering of code or equations) or research papers, you won't get as much benefit as you ought to. <br />
<br />
I am well satisfied with the Kindle for allowing me to lug around about 300 books (I still have almost 3 GB left) so I can read on the bus, while waiting for someone, etc. I would have loved it if I could read math books and papers (PDF rendering on the (small) Kindle is terrible) and also scribble notes (the Kindle "make notes" functionality is awkward and unusable), but e-ink based readers are still in their early days. For what it does (enabling you to carry around a few hundred fiction books) it is awesome. For example, I have all 20 Aubrey-Maturin books and the dozen or so Jim Butcher books in a 6 inch device. (E-paper blows away the iPad's screen for reading.) <br />
<br />
If I'd received the Kindle before the release of George Martin's utterly terrible "A Dance With Dragons" (I should write a blog entry one of these days on how terrible it is - suffice to say that the man has lost his touch) I could have spent $11 on a Kindle version instead of $54 ($18 for the book, the rest for postage to India). The Kindle shines for fiction and light non-fiction books. And you can avoid paying for the Kindle editions by downloading "pirated" versions if you know where to look. I suspect it would work well for magazine subscriptions too (at least for those in which the written word is more important than glossy pictures).<br />
<br />
Somewhat tangentially, someone should write a piece of software that works like LaTeX for math but generates flowable text. TeX is (print) page oriented. If you could just take a LaTeX file and generate a Kindle-readable document out of it, I suspect a lot of math/tech papers would find their way on to e-readers very fast. <br />
<br />
After having used the Kindle for a while I am not surprised that Amazon sells more e-books now than paper books. I suspect the Kindle is a very potent weapon in Amazon's arsenal that its competitors underestimate. If they make it work in the Indian context (Amazon plans to launch in India in 2012 - I have no idea how much of a role they plan for the Kindle here), their competitors will get swatted aside like so many flies. (Hmm, I should write a post on how I see the Amazon-Flipkart battle shaping up in India. Interesting times we live in.)<br />
<br />
Meanwhile, if you are a reader and can afford to buy a Kindle, you should. It (or something like it) is the future of reading.Ravihttp://www.blogger.com/profile/03630087669712445498noreply@blogger.com2tag:blogger.com,1999:blog-8993901435573921786.post-61820274610289739912011-06-27T23:11:00.000-07:002011-06-27T23:37:07.717-07:00Two Roads Diverge - Machine Learning or IOS Dev?Now that my latest project (for those interested in such things - 30 k lines of Haskell, 200 k lines of (mostly legacy) C/C++ code, a few thousand lines of Lua, signal processing, some NLP ish things) is "done done"[1] I have to choose [2] a new project.<br />
<br />
Choosing a new project is always an exciting time, but also a mildly stressful one. For every choice made, a half dozen equally worthy alternatives have to be rejected. And I do a lot of agonizing over what is the 'right' project to take up. <br />
<br />
<br />
One way to maintain a sense of continuity among projects is to examine what could have been done better on the finished project and see if you can build a project around fixing those deficiencies. <br />
<br />
Working on the last project exposed some flaws in my dev chops - I know nothing about Network Programming and this caused me to take longer than usual to fix a few nasty bugs that cropped up. So I'd like to take a couple of months off and work through Stevens's books and close this gap and then build some customized network monitoring tools which I could have used when my hair was on fire. Our visualization and rendering subsystem used an Open Source renderer that fell down on large datasets. NLP algorithms in Haskell had to be painfully built one by one. The Computer Vision library we used (Open CV) is a friggin mess that needs serious surgery. And so on. Doing all that would multiply the existing codebase's power by a factor of 10. And also make good building blocks for new projects.<br />
<br />
And many ML projects have the strange property that completing them successfully opens up even more ambitious projects. The folks who sponsored the last project want me to do more stuff for them.<br />
<br />
<br />
Another way to choose a new project to work on is to find great people you'd like to work with and build a project around what they are doing or are interested in. My stubborn refusal to move to the USA somewhat limits my choice in this regard - not many people or companies in Bangalore are doing anything interesting in Machine Learning. But otoh a few people have bounced really (really really) interesting iOS projects (and startup plans) off me. On the one hand, this means I have to go over to the Dark Side and sell my soul to the evil but competent folks at Apple, learn Objective-C, overpay for a MacBook and the annually renewed right to put software I write on hardware I already paid for, and so on. Being a storm trooper for Darth Steve is a proposition that requires some thought. But on the other hand, I would be working with ultra competent devs again (working alone, or as the only dev on a team, is the only negative - and it is a small one - in my 'lifestyle'. Fixing that would rock). <br />
<br />
A third choice - I actually thought of sitting down and writing a book, just for a change of pace. I have a few ideas for some tech books I think are missing from the shelves, and every dev I pitched reacted with a variant of "I'd buy that RIGHT now - please please write it". What stops me is that people who have written successful tech books say that it is a pretty thankless task, and with some exceptions, financially unrewarding (though your "prestige" goes up - something I don't care a rat's ass about). If I had to choose between spending a thousand hours writing a book and a thousand hours writing code, it is somewhat hard to choose the former.<br />
<br />
Hence the "two roads diverge" tone that permeates my thoughts. I could dive deeper into Machine Learning (and allied areas) or go do mobile app stuff. Choosing promises to be interesting. <br />
<br />
Two Roads Diverge and all that jazz[3]<br />
<br />
But first, before I have to make a choice, clear the backlog of people to meet (I thank you all for your patience and suffering my erratic schedules), places to visit, things to do. (Metaphorically) lie on a beach somewhere with no computers in sight. Relax, refresh. Then decide. <br />
<br />
<br />
[1] Most projects have an official "done" date and then a later "done done" date. In this case the project was 'done' some time ago and then a rookie dev wiped out the source control repo while simultaneously trying to alter the Haskell code (vs writing a minor script in Lua, which is what the situation called for), bringing the whole cluster down and causing the (non dev) owners of the project to send an SOS to me to get on a plane pronto and put out the fire.<br />
Some fences have been built to prevent this kind of FUBAR situation from happening again, so now I am "done done".<br />
<br />
[2] One significant milestone in one's evolution as a developer is when you realize that you have more ideas than you can implement in your lifetime. You are even luckier when people pay you to implement them (vs being assigned to some Godawful Leasing System dev in some enterprise dev body shop say)<br />
<br />
[3] - From Frost's poem, of course <br />
<br />
<i>Two roads diverged in a yellow wood,<br />
And sorry I could not travel both<br />
And be one traveler, long I stood<br />
And looked down one as far as I could<br />
To where it bent in the undergrowth.<br />
<br />
...................................<br />
<br />
I shall be telling this with a sigh<br />
Somewhere ages and ages hence:<br />
Two roads diverged in a wood, and I<br />
I took the one less traveled by,<br />
<br />
And that has made all the difference.<br />
</i>Ravihttp://www.blogger.com/profile/03630087669712445498noreply@blogger.com9tag:blogger.com,1999:blog-8993901435573921786.post-35473448027458115962011-05-29T06:20:00.000-07:002011-05-29T06:21:58.046-07:00"Civil-Society Hacker" barcamp at Google GurgaonWithout extra comment, an email I received. If you are interested in this kind of thing and/or live near Gurgaon, maybe you should take a look. <br />
<br />
<br />
<i><br />
From:<br />
<br />
"Laina Emmanuel" < lemmanuel@accountabilityindia.org ><br />
<br />
Dear Ravi, <br />
<br />
I came across your blog while looking for hackers in India. I am looking for civil-society hackers who would like to use their programming skills to develop innovative solutions for governance. To facilitate a conversation between programmers and policy-makers, I am organizing (if it can be called organizing) a bar-camp at the Google Campus in Gurgaon, on "Technology, Transparency and Accountability" on the 5th of June.<br />
<br />
This bar-camp is being held by Accountability Initiative. Founded in 2008, Accountability Initiative is a research initiative that aims to improve the quality of public services in India by promoting informed and accountable governance. To this end, one of AI's key efforts is to develop innovative models for tracking government led social sector programs in India. The Centre for Policy Research, an independent and non-partisan research institute and think-tank, is the institutional anchor for this initiative. <br />
<br />
We have a wide variety of participants for the bar-camp ranging from policy-makers to technology-enthusiasts. We would be honored if you could also join us at this bar-camp and help show how hackers can contribute to governance. Also, we would really appreciate it if you could forward this invitation to others who would be interested.<br />
<br />
Thanks and regards<br />
Laina Emmanuel</i><br />
<br />
I have no affiliation with any of the organizations mentioned in the email. Write to Laina for details.Ravihttp://www.blogger.com/profile/03630087669712445498noreply@blogger.com2tag:blogger.com,1999:blog-8993901435573921786.post-71182832005666630322011-04-08T02:52:00.000-07:002011-04-09T07:50:40.533-07:00Why startups in India find it hard to hire devsMayank Sharma <a href="http://mayanks.posterous.com/hiring-in-startups">wrote an interesting blog post</a> about hiring difficulties for startups in India. He says the reasons are<br />
<br />
(1) The very good devs can work for multinationals (Google,Microsoft etc) and local startups can't match the compensation.<br />
<br />
(2) Indian services companies have lock in contracts<br />
<br />
(3) No real history of non founders making lots of money from a startup<br />
<br />
(4) Those who do want to work for startups have "unreasonable expectations" (quotes mine, not his)<br />
<br />
<br />
Manoj Govindan (who works for (startup) <a href="http://www.perfios.com/">Perfios</a> - due disclosure - I recommended him to Perfios - guilty as charged! :-) ) responded with four reasons why these startups don't hold any allure for good devs<br />
<br />
(1) very few Indian startups offer <b>significant</b> equity to early hires.<br />
<br />
(2) Many Indian startups give employees little say in strategic decision making. In the end, you are still a "coding body".<br />
<br />
(3) Signal/Noise ratio - too many "social this" "cloud that" clone startups out there than innovative ones. Noise attracts noise.<br />
<br />
(4) "Very good guys" like to make own technical decisions. Here they find them already made and locked in before they join. Often tech stacks are in place even before people actually decide what to do.<br />
<br />
I have some sympathy for both viewpoints. Is the question such a complicated one though?<br />
<br />
There is a simpler explanation. All the points above can be subsumed under "Whenever there is a demand and supply gap, and some kind of free market mediating the two, prices rise". That is it.<br />
<br />
<br />
Good devs, those with the combination of tech chops, communication skills, attitude and *ambition* (iow the exact type you want for your startup) are a scarce resource generally and particularly so in today's market. Demand is very high (this fluctuates) and supply is very low (this is constant). In such a scenario, the price varies from very high to high. Right now it is very high. And that is it. <br />
<br />
You just can't buy gold for peanuts (speaking metaphorically. In reality you can trade a few truckloads for some gold). I can't. You can't. It is a value neutral statement - just the way the market works. You can sometimes marginally route around the Iron Law of supply and demand - <a href="http://www.zoho.com/">Zoho</a>, for example, trains people who wouldn't be considered for normal dev jobs, in effect creating their own supply - but you can't escape it.<br />
<br />
Even then, the argument goes, the "price" for good devs isn't necessarily all about money. True enough. <br />
<br />
Look at how Silicon Valley's startups (or even more narrowly, YC funded companies) hire good devs - If you can't match (or exceed) market price in terms of compensation you need to make it up with (a) significant equity and/or (b) advanced tech and/or (c) a compelling business model with some traction and/or (d) a "change the world" product (think <a href="http://www.spacex.com/">SpaceX</a>) or (e) credible, proven founders. This is true *everywhere*, not just in the valley, though details and leeway differ. If you are a company trying to hire the very best developers, you have to pay the price somehow.<br />
<br />
The typical Indian "lean startup" fails on all counts - it is often doing some whacky buzzword heavy, content lite, mobile/social mini app, works on PHP and MySQL, has no money (and so offers bare minimum salaries), is often run by clueless MBA/undistinguished engineer types and will give a prospective employee 1% equity in a doomed product for an endless 18 hour working day.<br />
<br />
<br />
Why the hell would anyone halfway technically good want to work with such outfits?<br />
Devs are sometimes economically stupid but they aren't *that* stupid. <br />
<br />
If they want to do interesting tech stuff they can go work for Google (Mountain View, not Bangalore!), start their own "great tech" companies, work on Open Source projects, work from home for US based startups doing interesting things or wait for Indian startups with cutting edge tech to proliferate, meanwhile drawing a steady salary and honing their skills. <br />
<br />
If you must stay in Bangalore, but want great tech and a small company, go work for <a href="http://www.gluster.com/">Gluster</a>. All the tech and Open Source you want. Or apply to <a href="http://tachyon.in/">Tachyon</a>. Or <a href="http://www.notionink.com/">Notion Ink</a>. There are a *lot* of companies attempting technically interesting things in Bangalore these days. And there is space for many many more, if you have founder dreams.<br />
<br />
If you want to do services web dev, and work with small teams of bright people in a great office atmosphere, go work for someone like <a href="http://www.c42.in/">C42</a>. A good company I vouch for. <br />
<br />
If you just want to get out of that crappy enterprise services bodyshop, but don't want to get stressed about whether your children will starve, *and* want to do something "like a startup, but stable", go work for (a) someone with traction and funding, who isn't quite a startup any more (say <a href="http://www.slideshare.net/">SlideShare</a> or <a href="http://www.inmobi.com/">Inmobi</a> or <a href="http://www.flipkart.com/">Flipkart</a>) or (b) one of the many offshore centres of product companies, who have plenty of money (say <a href="http://www.intuit.com/">Intuit</a> or <a href="http://www.zynga.com/">Zynga</a>).<br />
<br />
If you choose to be an employee (and it is an honourable choice), always make sure you are getting what you are worth - be that in terms of money, equity or technical challenge or whatever your individual preference is. This is just plain common sense.<br />
<br />
If you want to be entrepreneurial and don't mind the stress, be a (co)founder, not an employee. That way you can still work on the latest social/mobile ripoff idea in PHP (or the blue sky idea in your own custom language!) and scrounge around for money, but you'll be in control of your destiny and won't have to factor in crazy bosses. ;-)<br />
<br />
Nutshell : The law of demand and supply explains all phenomena in a free market. If you are any good at programming, you have more options now than you ever did. If you want to hire such people, create a compelling value proposition. If you aren't getting enough applications from qualified devs your "offer" isn't good enough. <br />
<br />
The End.<br />
<br />
EDIT: someone asked about good startups in Pune. The only Pune based startup I know of is <a href="http://infinitelybeta.com/">Infinitely Beta</a> (who have no problems recruiting afaik). But I am no expert on Indian startups. Do your research!<br />
<br />
EDIT 2: I dashed this off while waiting for a build to complete. Apologies in advance for any typos/flaws.Ravihttp://www.blogger.com/profile/03630087669712445498noreply@blogger.com17tag:blogger.com,1999:blog-8993901435573921786.post-50347002039708949942011-02-04T07:01:00.000-08:002016-09-03T05:52:03.529-07:00The repertoire method in "Concrete Mathematics"<a href="http://www.amazon.com/exec/obidos/ASIN/0201558025">Concrete Mathematics</a> by Knuth et al is a great book but there are a couple of places where people learning by themselves can stumble, fall and get lost. <br />
<br />
When I originally worked through the book I couldn't make head or tail of the "repertoire method" of solving recurrences. Eventually I did figure it out and sometime later I wrote up what I understood and posted it on my (then) blog. It seems that a lot of people search for "Concrete Mathematics Repertoire method" on Google and it is my second most popular <a href="http://ravimohan.blogspot.com/2005/03/levelling-up-in-math-land.html">blog post</a> (The most popular post is <a href="http://ravimohan.blogspot.com/2007/04/learning-from-sudoku-solvers.html">this one</a>, fwiw)<br />
<br />
So I re-read the repertoire method post today and while it is correct, I can explain it better today so here goes. <br />
<br />
(What follows may not make sense to you if you are not working through CM (be warned!). Also be warned that I have no formal training in mathematics, computer science or programming and am entirely self taught. So people with such training can probably explain things better. The following reflects only *my* understanding. That said, onwards!)<br />
<br />
<br />
By the time you hit the repertoire method section in Chapter 1 of Concrete Mathematics, you have been taught a simple method to find closed forms of recurrences. The essential algorithm is <br />
<br />
(a) Make a table of the recurrence values R(n) for small values of n. <br />
(b) Eyeball the table to see if you can spot a pattern. <br />
(c) Write down the pattern. <br />
(d) Prove (or disprove) the candidate closed form's correctness by Mathematical Induction over (a subset of) the natural numbers.<br />
<br />
You used this method, for example, to solve the Josephus problem, so you know where to stand (in the circle of your idiot friends trying to commit mass suicide) so that you end up being the survivor, surrender to the Romans and become a historian for the ages.<br />
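The steps (a) - (d) above can be sanity-checked mechanically. The following Python sketch (mine, not the book's) tabulates the Josephus recurrence and brute-force checks the eyeballed closed form f(2^m + k) = 2k + 1 for small n. It is no substitute for the induction in step (d), but it is a quick way to gain confidence in a guess.<br />

```python
# Josephus recurrence: f(1) = 1, f(2n) = 2*f(n) - 1, f(2n+1) = 2*f(n) + 1.
def josephus(n):
    if n == 1:
        return 1
    half = josephus(n // 2)
    return 2 * half - 1 if n % 2 == 0 else 2 * half + 1

# Check the guessed closed form f(2^m + k) = 2k + 1 for n = 1..64.
for n in range(1, 65):
    m = n.bit_length() - 1   # largest m with 2^m <= n
    k = n - 2 ** m           # so n = 2^m + k, with 0 <= k < 2^m
    assert josephus(n) == 2 * k + 1
```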
<br />
The repertoire method makes its (first) appearance in the generalization of the Josephus recurrence. The constants 1 and -1 are replaced by alpha, beta and gamma to give the more general recurrence<br />
<br />
f(1) = alpha<br />
f(2n) = 2*f(n) + beta<br />
f(2n + 1) = 2*f(n) + gamma~~~~~~~~~~~~~~~~~~~~~~~[1]<br />
<br />
Your job is to find f(n) such that this recurrence is true for any values of alpha, beta, and gamma.<br />
<br />
So how do you do this? You use the only technique you know (at this point in the book) of making a table for small values of n and eyeballing it to spot a pattern (I am too lazy to reproduce the table here - go buy the book!). You don't spot a pattern for f(n), but you do notice that all values of f(n) follow a pattern of <br />
<br />
f(n) = A(n)*alpha + B(n)*beta + C(n)*gamma~~~~~~~~~~~~~~[2]<br />
<br />
This is a kind of template of what the final closed form will look like, depending of course on the values of A(n), B(n) etc<br />
<br />
In other words if we can find functions A(n), B(n) and C(n) such that when given n they generate the right coefficients as in Table 1.12, then we have a closed form for f(n). <br />
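One way to convince yourself of this template (a Python sketch of mine, not something from the book) is to run recurrence [1] directly on coefficient triples (A, B, C), treating alpha, beta and gamma symbolically:<br />

```python
# Compute f(n) as a coefficient triple (A(n), B(n), C(n)) so that
# f(n) = A(n)*alpha + B(n)*beta + C(n)*gamma.
def coeffs(n):
    if n == 1:
        return (1, 0, 0)                  # f(1) = alpha
    a, b, c = coeffs(n // 2)
    if n % 2 == 0:
        return (2 * a, 2 * b + 1, 2 * c)  # f(2n) = 2*f(n) + beta
    return (2 * a, 2 * b, 2 * c + 1)      # f(2n+1) = 2*f(n) + gamma
```

For instance coeffs(5) returns (4, 2, 1): every f(n) really is a linear combination of alpha, beta and gamma, which is exactly what [2] asserts.<br />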
<br />
Something very important happened here. You broke down a problem into smaller subproblems.<br />
<br />
You still don't know the value of f(n) but now you know that if you can find the values of the functions A(n), B(n), and C(n) you have solved f(n).<br />
<br />
Now you can find these values with your trusted "spot a pattern and verify with induction" (the only tool you have at this point) OR you can use the (unexplained!) repertoire method.<br />
<br />
<br />
The first bit of confusion arises because the authors do both. They guess values of all three functions A(n), B(n) and C(n) from the initial table, say (correctly) that proving these values by induction is long and tedious, and then go ahead and prove that the guessed value of A(n) is correct by induction! (To be fair, they solve for only A(n) and not all the unknowns simultaneously, but though smaller, this induction is still tedious. Try it. I did.)<br />
<br />
Or, in more detail, <br />
<br />
a value is guessed (that A(n) = 2^m, m coming from rewriting n as 2^m + k as in the original Josephus problem - the book uses the letter l instead of k; I use k to distinguish it easily from the number 1), beta and gamma are set to zero (remember the recurrence has to hold for *all* values of alpha, beta and gamma, including the selected 1, 0, 0), and then the recurrence [1] is rewritten (using [2]) as a recurrence in terms of A(n).<br />
<br />
I'll repeat that so it is clear what is happening<br />
<br />
<br />
Step 1: Guess a value for A(n), by eyeballing. We guess A(n) = 2^m<br />
<br />
Step 2: Select alpha, beta and gamma so that the other two functions of n, B(n) and C(n), get eliminated from [2]. You can do this by selecting alpha = 1 and beta = gamma = zero.<br />
<br />
Step 3: Rewrite the equation <b>[2]</b> (i.e the observed generalized form) in terms of the selected values of alpha, beta and gamma. You get f(n) = A(n).<br />
<br />
Step 4: Use this equation (ie f(n) = A(n)) and your chosen values of alpha, beta and gamma to rewrite the <b>original recurrence</b> (ie equation [1]) to get<br />
<br />
A(1) = 1 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~[3]<br />
A(2n) = 2*A(n)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~[4]<br />
A(2n + 1) = 2*A(n)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~[5]<br />
<br />
<br />
<br />
Now the problem becomes to prove that your guess (i.e, A(n) = A(2^m + k) = 2^m) satisfies this new recurrence as expressed by [3] and [4] and [5].<br />
<br />
The authors state "Sure enough it is true (by induction on m) that A(2^m + k) = 2^m "<br />
<br />
<br />
The authors don't show the induction, but you can work out this (tedious but not difficult) induction yourself. Only basic algebraic manipulation is required. <br />
<br />
Be aware that (a) the induction is <b>on m, not n</b> and (b) the predicate to be proven has the form P_m: (A(2^m +k) => [3] AND [4] AND [5]). <br />
<br />
First, prove P_zero (as m starts from zero, though n starts from 1 - we need an induction on m, not n!). Then prove that P_m => P_m+1. (Weak Induction is sufficient).<br />
<br />
So we prove (yay!) that A(n) does equal 2^m.<br />
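(If you want a quick mechanical confirmation alongside the induction, here is a small Python check - my sketch, not the book's - that A defined by [3], [4] and [5] really satisfies A(2^m + k) = 2^m on small cases.)<br />

```python
# A(1) = 1, A(2n) = 2*A(n), A(2n+1) = 2*A(n): the even and odd cases
# coincide, so a single recursive call handles both.
def A(n):
    if n == 1:
        return 1
    return 2 * A(n // 2)

for n in range(1, 65):
    m = n.bit_length() - 1   # n = 2^m + k with 0 <= k < 2^m
    assert A(n) == 2 ** m
```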
<br />
You <b>could</b> do the same for B(n) and C(n). Select values for alpha, beta and gamma to create recurrences in terms of B(n) only and then C(n) only, just as we did above for A(n), then use mathematical induction <b>over m</b> to prove your guesses correct.<br />
<br />
In the book though, the authors <i>switch to the repertoire method to find B(n) and C(n)</i>. This switch is the first confusing bit - A(n) is found using a guess + induction (the old method - eyeball, guess, use induction). But then they switch - and the repertoire method is used to find B(n) and C(n) and the initial guesses as to their values are unused!<br />
<br />
Worsening the confusion is the fact that the repertoire method is not identified or explained explicitly at this point. As a student states in a margin note (great idea btw) "Beware: the authors are expecting you to figure out the idea of the repertoire method from seats of pants examples instead of giving a top down presentation"<br />
<br />
The seat-of-pants example is actually enough if A(n),B(n) and C(n) are <b>all</b> worked out with the repertoire method.<br />
<br />
<br />
So let us solve the whole thing with the repertoire method (no induction) and see how it works. We will throw away the earlier guesses about the values of A(n), B(n) and C(n), and assume we couldn't make any guesses for them at all.<br />
<br />
All we have is the original recurrence <br />
<br />
f(1) = alpha<br />
f(2n) = 2*f(n) + beta<br />
f(2n + 1) = 2*f(n) + gamma~~~~~~~~~~~~~~~~~~~~~[1]<br />
<br />
and our observation that f(n) always has the form <br />
<br />
f(n) = A(n)*alpha + B(n)*beta + C(n)*gamma~~~~~~~~[2]<br />
<br />
OK, so we don't know (and we need to find out) the values of A(n), B(n) and C(n). <br />
<br />
The repertoire method (for this recurrence) works like this. <br />
(1) Guess a value <b>for f(n)</b> (i.e. the guess is for the whole of [2], NOT a component A(n) as we did above!). <br />
(2) See if you can validate this guess: rewrite <b>[1]</b>, the original recurrence, in terms of your guess for f(n), and see if you can find values for alpha, beta and gamma.<br />
(3) Substitute these values (of alpha, beta and gamma) back into [2]. You'll get an equation in terms of the three unknowns A(n), B(n) and C(n).<br />
(4) Repeat steps (1) - (3) till you have three <b>independent</b> equations. <br />
(5) Solve the three linear equations in three unknowns. <br />
<br />
Done. <br />
<br />
Important: If you make a "wrong" guess you will end up with a useless equation like 0 = 0, or an equation that is not independent of the already derived equations, and so on. If this happens, don't worry about it; try another guess till you do get three independent equations.<br />
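The bookkeeping in steps (1) - (5) can be sketched in Python (my illustration - the little Gaussian-elimination helper and the choice of guesses are mine; the guesses used here happen to be the ones that work out below). Fix n = 13 = 2^3 + 5, so m = 3 and k = 5; each guessed f contributes one equation A*alpha + B*beta + C*gamma = f(n), and three independent equations pin down A(13), B(13) and C(13):<br />

```python
from fractions import Fraction

def solve3(rows):
    """Solve a 3x3 linear system; each row is (cA, cB, cC, rhs)."""
    m = [[Fraction(x) for x in row] for row in rows]
    for i in range(3):
        p = next(r for r in range(i, 3) if m[r][i] != 0)  # find a pivot
        m[i], m[p] = m[p], m[i]
        piv = m[i][i]
        m[i] = [x / piv for x in m[i]]
        for r in range(3):
            if r != i:
                fac = m[r][i]
                m[r] = [a - fac * b for a, b in zip(m[r], m[i])]
    return [m[r][3] for r in range(3)]

# n = 13 = 2^3 + 5, so m = 3, k = 5.  Each guessed f(n), with its
# (alpha, beta, gamma), gives one equation A*alpha + B*beta + C*gamma = f(n):
rows = [
    (1, -1, -1, 1),   # guess f(n) = 1    => alpha = 1, beta = gamma = -1
    (1,  0,  0, 8),   # guess f(n) = 2^m  => alpha = 1, beta = gamma = 0
    (0,  0,  1, 5),   # guess f(n) = k    => alpha = 0, beta = 0, gamma = 1
]
A13, B13, C13 = solve3(rows)   # 8, 2, 5: i.e. 2^m, 2^m - k - 1, k
```

Substituting the Josephus values alpha = 1, beta = -1, gamma = 1 then gives 8*1 + 2*(-1) + 5*1 = 11 = 2k + 1, as expected.<br />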
<br />
In detail.<br />
<br />
I guess (the authors do too) that f(n) = 1. <br />
<br />
Rationale for the guess: f(n) = constant is the simplest possible formulation of f(n). (Just for fun you might want to try f(n) = 0.)<br />
<br />
Let us try substituting this in [1]<br />
<br />
f(1) = alpha becomes 1 = alpha (since f(n) is 1 for any n). <br />
<br />
similarly <br />
<br />
f(2n) = 2*f(n) + beta becomes 1 = 2*1 + beta so beta = -1<br />
f(2n + 1) = 2*f(n) + gamma becomes 1 = 2*1 + gamma so gamma = -1<br />
<br />
so we have values for alpha, beta, gamma and f(n), and when we substitute back into [2] we get <br />
<br />
A(n) - B(n) - C(n) = 1~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~[6]<br />
<br />
ok!!! we have the first equation.<br />
<br />
Now if we could get just two more independent equations like this we would have three equations in three unknowns (and so solve by Linear Algebra). <br />
<br />
The authors use f(n) = n as their second guess and get A(n) + C(n) = n as their second equation.<br />
<br />
(This is a good guess too. After f(n) = k, f(n) = n is the next rung up the complexity ladder. But in this specific example we can do better - see below)<br />
<br />
and since they already proved that A(n) = 2^m by the "guess and use induction" method they don't need a third equation. They have two equations in two unknowns and they solve to get the solution.<br />
<br />
But we don't have any guesses as to the values of A(n), so we can't plug that in. <br />
<br />
We guessed f(n) = 1 and got the equation A(n) - B(n) - C(n) = 1, and we need two more independent equations. We could reuse the authors' f(n) = n guess, and also cheat a bit by reusing the (non generalized) Josephus recurrence solution we already proved, to "guess" that f(n) = 2k + 1. Then alpha = 1, beta = -1 and gamma = 1, giving us the third equation A(n) - B(n) + C(n) = 2k + 1. Three equations, three unknowns. Solve. This gives the right answer too.<br />
<br />
But that feels like a cheat. What if we hadn't solved the Josephus recurrence before? How would we guess f(n) = 2k + 1? We could go with f(n) = n but we can do better. <br />
<br />
Since n = 2^m + k, we guess (our second guess)<br />
<br />
f(n) = 2^m<br />
<br />
and f(n) = k. (our third guess)<br />
<br />
(Rationale for these guesses: since n depends on 2^m and k, why not guess with the simpler variables rather than f(n) = n, as the example in the book does? In this case the decision pays off spectacularly, giving the solutions for A(n) and C(n) directly and B(n) trivially.)<br />
<br />
These give us the equations (just like we worked out [4] above - try it!)<br />
<br />
A(n) = 2^m~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~[7]<br />
C(n) = k~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~[8]<br />
<br />
and these two along with [6] give us B(n) = 2^m - k - 1.<br />
<br />
Look ma, no induction!<br />
<br />
So now we have the values for A(n), B(n) and C(n), and we can say that<br />
<br />
the solution to the recurrence<br />
<br />
f(1) = alpha<br />
f(2n) = 2*f(n) + beta<br />
f(2n + 1) = 2*f(n) + gamma is (given n = 2^m + k as explained earlier in the book)<br />
<br />
f(n) = (2^m)*alpha + (2^m - k - 1)*beta + k*gamma.<br />
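As a final sanity check (again my sketch, not the book's), the closed form can be tested against recurrence [1] for a few arbitrary (alpha, beta, gamma) triples:<br />

```python
# Recurrence [1], computed directly.
def f(n, alpha, beta, gamma):
    if n == 1:
        return alpha
    rest = beta if n % 2 == 0 else gamma
    return 2 * f(n // 2, alpha, beta, gamma) + rest

# The derived closed form, with n = 2^m + k.
def closed(n, alpha, beta, gamma):
    m = n.bit_length() - 1
    k = n - 2 ** m
    return (2 ** m) * alpha + (2 ** m - k - 1) * beta + k * gamma

for abg in [(1, -1, 1), (3, 0, 7), (-2, 5, 4)]:
    for n in range(1, 129):
        assert f(n, *abg) == closed(n, *abg)
```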
<br />
Double checking: when alpha = 1, beta = -1 and gamma = 1, we get<br />
<br />
the solution to <br />
<br />
f(1) = 1<br />
f(2n) = 2*f(n) - 1<br />
f(2n + 1) = 2*f(n) + 1 (note: this is the original Josephus Recurrence)<br />
<br />
is (2^m)*1 + (2^m - k - 1)*(-1) + k*(1), which resolves to <br />
<br />
2k+1 which agrees with [1.9] (in the book).<br />
<br />
Hopefully now what the authors say about the repertoire method makes more sense, though the sentence structure at this point in the book is confusing. Let us take it apart - my comments in italics.<br />
<br />
"First we find settings for parameters for which we know the solution" <i>(the parameters here are alpha, beta and gamma; the "solutions" are the various guessed values of f(n), <b>NOT</b> A(n), B(n), C(n). We make guesses for A(n), B(n) etc. when we are using the proof-by-induction method. When we use the repertoire method, we guess f(n) and *find* A(n), B(n) etc.)</i>; <br />
<br />
<br />
"this gives us a repertoire of special cases that we can solve" <i>(the special cases are the independent equations in the unknowns A(n),B(n),C(n) and solving them gives us the values of A(n) etc)</i>. <br />
<br />
<br />
"Then we obtain the general case" <i>(the solution of recurrence [1], the general value of f(n) )</i> <br />
<br />
"by combining the special cases" <i>(In this case we combine the solutions of the equations which are the "special cases")</i>.<br />
<br />
Hopefully that helped a bit. <br />
<br />
The repertoire method pops up all over CM in various contexts, and once you grasp it, it is easy to identify and use. Enjoy the rest of Concrete Mathematics (which imho is a great, great book every programmer should have on his bookshelf).Ravihttp://www.blogger.com/profile/03630087669712445498noreply@blogger.com17tag:blogger.com,1999:blog-8993901435573921786.post-3513691753141221772010-12-11T06:41:00.000-08:002010-12-11T19:47:38.190-08:00The answer to "Will you mentor me?" is No.<br />
<br />
Thanks for understanding.<br />
<br />
Ok that was the nutshell version. If that answers your question, that's great.<br />
<br />
The more detailed answer is "No, I won't mentor you, but in this blog entry I will tell you what to do instead, to get where you want to go". And I can reply with the url to this post the next time someone requests mentoring.<br />
<br />
I once wrote a comment on Hacker News about what *I* learned about ending up with awesome mentors. Here it is, slightly edited so it reads a little better.<br />
<br />
(The OP asked) <i>Recently I have tried approaching a few good developers through their blogs about various matters including advice on how to go about some projects I'm undertaking but I am surprised at the unfriendly responses I have received. Maybe I have been going about it the wrong way but it got me thinking; Shouldn't the guys whose work we look up to be keen on what some of us young aspiring developers have to contribute to the community? I mean sure, we don't have the experience or skills some of these guys have(yet) but we still have some ideas that are viable with the right technical skills to back them. If any of them want to reach out and help nurture some potential talent, it may very well benefit all them in the end, whether financially or in terms of new ideas and experiences.</i><br />
<br />
<br />
I commented thus<br />
<br />
<br />
<i>I have some experience in this, so let me try to explain a couple of things that I learned in the "school of hard knocks".<br />
<br />
Once upon a time I was in a situation where I thought I could contribute to something one of the best programmers in the world was working on, so I sent an email (I got the address from his webpage) and said something to the effect of "you say on this webpage you need this code and I have been working on something similar in my spare time and I could write the rest for you over the next few months because I am interested in what you are doing" and I got a 2 line reply which said (paraphrased) "A lot of people write to me saying they'll do this, but I've never seen any code yet so I am a little skeptical. Don't take it personally. Thanks. Bye.".<br />
<br />
So in the next email (sent <b>a minute after I received his reply</b>) I sent him a zipped file of code with an explanation that "this is what I've done so far which is about 70% of what you want" and he immediately replied saying "Whoa you are serious. That is refreshing .. ' and opened up completely, giving me a lot of useful feedback and very specific advice. He is a (very valued) mentor to this day.<br />
<br />
Another time, I was reading a paper from a (very famous) professor at Stanford, and I thought I could fill in some gaps in that paper, so I wrote a "You know your paper on X could be expanded to give results Y and Z. I could use the resulting code in my present project. Would you be interested in seeing the expanded results or code" email and I got a very dismissive one line reply along the lines of "That is an old paper and incomplete in certain respects. Thanks".<br />
<br />
So a few days later, I sent along a detailed algorithm that expanded his idea, with a formal proof of correctness and a code implementation and he suddenly switched to a more expansive mode, sending friendly emails with long and detailed corrections and ideas for me to explore.<br />
<br />
Now I am not in the league of the above two gentlemen, but perhaps because I work in AI and Robotics in India, which isn't too common, I receive frequent emails to the effect of "please mentor me", often from students. I receive too many of these emails to answer any in detail, but if I ever get an email with "I am interested in AI/Robotics. This is what I've done so far. Here is the code. I am stuck at point X. I tried A, B, C; nothing worked. What you wrote at [url] suggests you may be the right person to ask. Can you help?" I would pay much more attention than to a "please mentor me" email.<br />
<br />
<b>In other words, when you ask for a busy person's time for "mentorship" or "advice" or whatever, show (a) you are serious and have gone as far as you can by yourself, (b) you have taken concrete steps to address whatever your needs are, and (optionally, but especially with code-related efforts) (c) how helping you could benefit them/their project.</b><br />
<br />
Good developers are very busy and have so much stuff happening in their lives and more work than they could ever hope to complete that they really don't have any time to answer vague emails from some one they've never heard of before.<br />
<br />
As an (exaggerated) analogy, think of writing an email to a famous director or movie star or rock star, saying "I have these cool ideas about directing/acting/ music. Can you mentor me/give me advice?"<br />
<br />
I am replacing the words "app" and "technical" in your sentence below with "film" and "film making".<br />
<br />
"if I have an idea for a film that I want to develop, but my film making skills limit me, it would be nice to have people to bounce the idea off and have it implemented. "(so .. please mentor me/give me advice/make this film for me).<br />
<br />
Do you think a top grade director (say Spielberg) would respond to this?<br />
<br />
The fact that you at least got a 2 line response shows that the developers you wrote to are much nicer than you may think. They care enough not to completely dismiss your email, though they receive dozens of similar emails a week.<br />
<br />
As someone else advised you on this thread, just roll up your sleeves and get to work. If your work is good enough, you'll get all the "mentoring" you'll need. "Mentoring" from the best people in your field is a very rare and precious resource and like anything else in life that is precious, should be earned.<br />
<br />
My 2 cents. Fwiw. YMMV.<br />
</i><br />
<br />
That says most of what I want to say.<br />
<br />
Some minor points now, addressing some points raised in the latest emails. <br />
<br />
If you claim to be "very passionate about X" but have never done anything concrete in X, I find it difficult to take you seriously. People who are really passionate about anything don't wait for "leaders" or "mentors" before doing *concrete* work in the area of their passion, however limited. Specifically wrt programming/machine learning etc., in the days of the internet, and with sites like Amazon or the MIT OCW, you have no limits except those you impose on yourself.<br />
<br />
I hate to sound all zen master-ey but in my experience, it is <b>doing</b> the work that teaches you what you need to do next. Walking the path reveals more of the map. All the mentoring a truly devoted student needs is an occasional nudge here or an occasional brief warning there. Working with uncertainty is part of the learning. Waiting for mentorship/leadership/"community"[1]/ whatever to start working is a flaw that guarantees you will never achieve anything worthwhile. <br />
<br />
Ok, pseudo-zen-master mode off. More prosaic version - "shut up and code". Or make a movie on your webcam. Or write that novel. Whatever. Your *work* will, in time, bring you all the mentoring and community or whatever else you need. <br />
<br />
As always My 2 cents. Fwiw. YMMV. Have a nice day.<br />
<br />
[1] For some reason Bangalore is crawling with people who first want to form a community and then start learning/working/whatever. These efforts almost invariably peter out uselessly. First do the work. Then if you feel like "communing", talk to others who are also working hard. Please read <a href="http://www.teamten.com/lawrence/writings/plan05.html">this</a>, sent to me by my friend Prakash Swaminathan.Ravihttp://www.blogger.com/profile/03630087669712445498noreply@blogger.com24tag:blogger.com,1999:blog-8993901435573921786.post-81277361160065354672010-09-09T21:59:00.000-07:002010-10-31T02:21:07.216-07:00My Schedule for the rest of the year- Starting tomorrow, hack 12 hours a day as part of my current project (C & Haskell, Machine Learning, if anyone is interested). Will be traveling to places without Internet connectivity. So expect to be mostly offline.<br />
<br />
- Oct end / Beginning of Nov: Back in Bangalore. Back online. Yay!<br />
<br />
- Nov end: complete paperwork/documentation/training blah blah, Project handover.<br />
<br />
- Nov end. This (phase of this) project done. Whew.<br />
<br />
- December - somewhat free. I hope to release some Open Source code before EOY. Fairly old Scala code (so it needs to be updated to Scala 2.8, have comments added, and so on) but it should be useful to others. Paperwork for the Open Source release should come through before then.<br />
<br />
- Jan 1, 2011. New Year. No definite plans but lots of nice opportunities. Problems of plenty. Touch Wood. (Update: "No definite plans" is no longer true. A couple of VERY interesting opportunities in the air. Life is good.)Ravihttp://www.blogger.com/profile/03630087669712445498noreply@blogger.com4tag:blogger.com,1999:blog-8993901435573921786.post-76728387585004524932010-09-01T20:06:00.000-07:002010-09-03T09:14:47.499-07:00The Secret of Professional HappinessI was talking to <a href="http://www.cloudknow.com/">Prakash Swaminathan</a> the other day and he said something that I thought encapsulated the essence of having a great professional life.<br />
<br />
(a) Work with people you admire, (b) on interesting projects and (c) work from home as much as possible. <br />
<br />
I could imagine dropping (c) if the other two criteria were met (though it does make a lot of sense in today's networked world) but whenever I've compromised on (a) or (b) life has sucked, <i>without exception</i>. <br />
<br />
So children, learn from your elders. Always work with great people on great projects, avoid the corporate politics bullshit, and you'll be happy professionally.<br />
<br />
Of course this assumes you are skilled enough (or are willing to work to get there) that awesome people want you on awesome projects but that is a different post altogether.Ravihttp://www.blogger.com/profile/03630087669712445498noreply@blogger.com4tag:blogger.com,1999:blog-8993901435573921786.post-25404335863283162542010-08-20T10:20:00.000-07:002010-08-22T00:16:48.732-07:00Who (and what) I would like to see at DevCampComments, requests and suggestions on my last post are pouring in. Thanks everyone. One of the folks who sent me email asked "Who would *you* like to see speaking at DevCamp, assuming they are in India and willing to deliver a talk, and on what?"<br />
<br />
Hmm. Interesting question. I haven't really thought about this very deeply but here is a quick response (very busy day, no time to edit or link to home pages etc., sorry).<br />
<br />
In no particular order,<br />
<br />
(1) Debashish Ghosh on deep Scala programming. This guy is really good.<br />
<br />
(2) Baishampayan Ghose on the technical aspects of paisa.com<br />
<br />
(3) Bhasker Kode on Erlang at Hover.in<br />
<br />
(4) Peter Thomas - guru on things Wicket-ey, speaking of things Wicket-ey. (Due disclosure, old friend of mine)<br />
<br />
(5) Narayan Raman on *the evolution* of Sahi (and on running a company based around an open source tool he wrote. How cool is that?)<br />
<br />
(6) Anyone from c42 or ActiveSphere on the challenges of setting up an n-man (n < 7) consultancy and competing with the big boys (due disclosure: both companies were built by ex-TW-ers. I know a few of them)<br />
<br />
(7) Anyone (technical) from FlipKart. They seem to be doing good things (I am a satisfied customer) and I am interested in how they tackle the huge challenges in building (for example) recommendation systems.<br />
<br />
(8) Anyone at all in India doing serious work in Haskell (Scala would do).<br />
<br />
(9) Anyone building/working in a *technically* challenging startup (Notion Ink, say) on their *technical* challenges.<br />
<br />
(10) ThoughtWorkers hacking on stuff, on what they are hacking on. TW ers in general have all kinds of side projects going. The two Viveks (Prahalad and Singh) would be a good start.Ravihttp://www.blogger.com/profile/03630087669712445498noreply@blogger.com1tag:blogger.com,1999:blog-8993901435573921786.post-78567535384379712352010-08-17T00:54:00.000-07:002010-12-02T21:27:27.138-08:00Speaking at DevCamp 2010I'll be speaking at DevCamp 2010. <a href="http://ravimohan.blogspot.com/2008/01/dev-camp-bangalore.html">As in 2008</a>, I have a "menu" of topics that people can vote on and will select the topic at the very last minute. Since I don't use slides(in general) this isn't very hard to pull off. More on this below.<br />
<br />
Dev Camp is interesting because in India there aren't really any "developer to developer" conferences. Most conferences are either company-sponsored events (e.g. Sun/Oracle/Adobe Tech Days) or are overrun by "evangelists" hired by MegaCorps to sell their crapware to developers. DevCamp attempts to head these people off by stating "DevCamp is an annual BarCamp style unconference for hackers, by hackers and of hackers that began in Bangalore in 2008 with code and hacking as its core themes". Some of these "evangelists" are shameless enough to crash the conf anyway, but the Law of Two Feet often takes care of that.<br />
<br />
So what do you talk about at DevCamp? (everything that follows is *my* opinion. I have nothing to do with the organizing of DevCamp) <br />
<br />
If you are any kind of hacker, you have a pet project running on the side. You are learning or doing something that might be of interest to other developers. At the last DevCamp I attended (in 2008), someone was trying to replace JMeter with an equivalent Erlang tool and he gave a very interesting talk on the advantages and challenges of this approach. <br />
<br />
Bring your laptop and show us what you are working on. *Don't* make one of those slide heavy "Introduction to Blah" type talks that are prevalent at most Indian conferences (last year's PyConf India was a good example of this iirc. Hopefully this year is better). Your audience consists of professional developers who are quite comfortable with looking up stuff on the Internet. <br />
<br />
As the Dev Camp page puts it, "Assume a high level of exposure and knowledge on the part of your audience and tailor your sessions to suit. Avoid 'Hello World' and how-to sessions which can be trivially found on the net. First hand war stories, in-depth analysis of topics and live demos are best." Again some folks do try to sneak in "Introduction to Blah" where Blah is the latest "hot" topic (Clojure or Android would fit the bill these days, for example), but again "The Law of Two Feet" (mostly) takes care of them. <br />
<br />
If you want to talk about Clojure don't do "An Introduction to Clojure". In the days of YouTube, Rich Hickey can do that much better than you could. Talk about "How I built a Text Processing/WebCrawler library in Clojure" or "My startup runs on Clojure" (and show us the code). Tell us what *you* know that few others do ("in-depth analysis") and/or show us interesting code you wrote ("live demos"). If someone were to do a talk on (for example) how the Clojure *compiler* works and the trade-offs in its design, that would be interesting to me. If you are recycling "Clojure has macros, woot!" I don't care.<br />
<br />
The other interesting aspect of DevCamp is how lightweight it is. There is none of the stuffiness associated with the usual company conferences. It is an *un*conference, like BarCamp, but without the legion of SEO marketing people, "bloggers", and non-tech "founders" trawling for naive developers who'll work for free on their latest "killer idea" who swarm BarCamp. BarCamp (imo) attracts fringe lunatics. DevCamp attracts (or should attract, when it works well) competent developers.<br />
<br />
So, these are the things I could talk about at DevCamp. Since I work on Machine Learning and Compilers, the topics reflect that experience. I could talk about how to build a Leasing System in Java but I doubt I'd have anything interesting to say ;-). <br />
<br />
Send me email if you have a preference (or leave a comment here). I'll talk about whatever has the highest number of votes on Sep 4. "Customer Development" for sessions, woot! Email > comments here > twitter, but any and all forms of media are acceptable.<br />
<br />
The topics from highest to lowest number of votes registered at the time of writing are <br />
<br />
<b><i>(1)An In Depth Look at the Category Theory Bits in Haskell (expanded version of the old Monad tutorial)</i></b><br />
<br />
At DevCamp 2008, I presented a talk on "Understanding Monads" where the idea was that someone who knew nothing about Monads should come to the talk and walk out knowing how they work and when to use them. Instead of giving vague analogies ("monads are space stations/containers/elephants..."), you build monads from the ground up using first-class functions. The talk included, in its first iteration, the List, Maybe and State monads. Later versions (over the years I have given the talk a few times) broke down the Category Theory behind monads and how it helps in structuring programs. <br />
<br />
The latest version encompasses all the hairy Category Theory related bits and pieces (Applicatives, Monoids, Functors, Monad Transformers...) which impede programmers trying to learn Haskell/Scala/ML etc. I don't assume any theory/math background from the audience and introduce the required formalisms. The good news is that this is a very polished and popular topic (and is trending highest in the number of "votes"). The bad news is that I am bored of this talk (but will still use it if it scores the highest number of votes).<br />
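To give a flavor of the "from the ground up, using first-class functions" approach, here is a minimal Maybe-monad sketch in Python (names and representation are mine for illustration; the talk itself works in Haskell-style notation):<br />
<br />
```python
# A Maybe monad built from nothing but first-class functions and tuples:
# 'unit' wraps a value, 'bind' sequences computations that may fail,
# short-circuiting as soon as any step fails.
NOTHING = ('Nothing',)

def unit(x):
    return ('Just', x)

def bind(m, f):
    return f(m[1]) if m[0] == 'Just' else NOTHING

def safe_div(a, b):
    return unit(a / b) if b != 0 else NOTHING

# Chain computations; any failing step short-circuits the rest:
ok = bind(unit(10), lambda x: bind(safe_div(x, 2), lambda y: unit(y + 1)))
bad = bind(unit(10), lambda x: bind(safe_div(x, 0), lambda y: unit(y + 1)))
assert ok == ('Just', 6.0)
assert bad == NOTHING
```
<br />
The point of building it this way is that nothing is magic: the monad is just a pair of functions obeying a discipline, which is exactly the claim the "space station" analogies obscure.<br />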
<br />
<br />
<b><i>(2) Building a Type Inferencer in 45 minutes</i></b><br />
<br />
<br />
Static type systems, especially those more powerful than the Java/C# variety, are a mystery to most programmers. This can be seen, for example, in how developers with a Java background write "Java in Scala" rather than idiomatic Scala. The best way (and the hacker's way) to understand how a type inferencer works is to build one. This session builds a Hindley-Milner type checker with a couple of extensions. <br />
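To give a taste of what is under the hood: the engine of Hindley-Milner inference is unification over type terms. A toy sketch in Python (my own illustrative code, not the session's implementation; type variables are strings, constructors are tuples, and the occurs check is omitted for brevity):<br />
<br />
```python
# Toy unification: a type variable is a string like "'a"; a constructor
# type is a tuple like ('int',) or ('fn', arg_type, result_type).
def resolve(t, subst):
    """Follow substitution links until t is no longer a bound variable."""
    while isinstance(t, str) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst=None):
    """Return a substitution making a and b equal (occurs check omitted)."""
    subst = dict(subst or {})
    a, b = resolve(a, subst), resolve(b, subst)
    if a == b:
        return subst
    if isinstance(a, str):                   # a is a free type variable
        subst[a] = b
        return subst
    if isinstance(b, str):                   # b is a free type variable
        subst[b] = a
        return subst
    if a[0] == b[0] and len(a) == len(b):    # same constructor, same arity
        for x, y in zip(a[1:], b[1:]):
            subst = unify(x, y, subst)
        return subst
    raise TypeError("cannot unify %r with %r" % (a, b))

# Applying id : 'a -> 'a to an int means unifying ('a -> 'a) with (int -> 'r):
s = unify(('fn', "'a", "'a"), ('fn', ('int',), "'r"))
assert resolve("'r", s) == ('int',)          # the result type is int
```
<br />
A full HM checker adds let-polymorphism and generalization on top of this core, but unification is where all the real work happens.<br />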
<br />
<b><i>(3) WarStory: How I escaped Enterprise SW and became a Machine Learning Dev</i></b><br />
<br />
Self explanatory ;-)<br />
<br />
<b><i><br />
(4) Proof Technique for Programmers - A Developer's gateway to Mathematics (and Machine Learning)</i></b><br />
<br />
This comes out of something I observed in the Bangalore Dev community. A lot of people read "Programming Collective Intelligence" (a terrible book - read my HN "review" <a href="http://news.ycombinator.com/item?id=208811">here</a> - I am "plinkplonk". See also comments by brent) and fancy themselves "Machine Learning" people ("we aren't experts but we know the basics". Ummm . No, you don't :-P. )<br />
<br />
The sad truth is, you can't do any serious machine learning (or Computer Vision, or Robotics, or NLP, or algorithm-heavy) development without high levels of mathematics. "Pop" AI books like PCI are terrible at teaching you anything useful.<br />
<br />
To quote Peter Norvig from his <a href="http://www.amazon.com/review/RZ7FBFHHLJHYE/ref=cm_cr_rdp_perm">review</a> of Chris Bishop's Neural network book (emphasis mine)<br />
<br />
<i>"To the reviewer who said "I was looking forward to a detailed insight into neural networks in this book. Instead, almost every page is plastered up with sigma notation", that's like saying about a book on music theory "Instead, almost every page is plastered with black-and-white ovals (some with sticks on the edge)." Or to the reviewer who complains this book is limited to the mathematical side of neural nets, that's like complaining about a cookbook on beef being limited to the carnivore side. If you want a non-technical overview, you can get that elsewhere (e.g. Michael Arbib's Handbook of Brain Theory and Neural Networks or Andy Clark's Connectionism in Context or Fausett's Fundamentals of Neural Networks), but <b>if you want understanding of the techniques, you have to understand the math</b>. Otherwise, there's no beef. "</i><br />
<br />
The "if you want understanding of the techniques, you have to understand the math" bit is true for all areas of ML, not just Neural networks. The biggest stumbling block (there are many ;-)) for most developers attempting to grok the underlying mathematics is the proof based learning method most higher level Math/Machine Learning books assume.<br />
<br />
E.g. here is the *first* exercise of the *second* chapter of "Elements of Statistical Learning", a book which (unlike PCI) you *should* read if you plan to do Machine Learning-ey things:<br />
<br />
<i>"Suppose each of K classes has an associated target t_k, which is a vector of all zeros, except a one in the kth position. Show that classifying to the largest element of y amounts to choosing the closest target, min_k ||t_k − y||, if the elements of y sum to one."<br />
</i><br />
<br />
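For what it's worth, here is roughly how such a proof goes (my own sketch, not the book's solution manual; t_k is the k-th target, y the prediction vector):<br />
<br />
```latex
% Each t_k is all zeros with a single 1 in position k, so
% \|t_k\|^2 = 1 and t_k \cdot y = y_k. Expanding the squared distance:
\|t_k - y\|^2 = \|y\|^2 - 2\, t_k \cdot y + \|t_k\|^2
              = \|y\|^2 - 2\, y_k + 1
% Only the middle term depends on k, hence
\arg\min_k \|t_k - y\| = \arg\max_k y_k
% i.e. choosing the closest target is exactly classifying to the
% largest element of y.
```
<br />
(Interestingly, the "elements of y sum to one" hypothesis doesn't seem to be needed for this direction; noticing that kind of thing is part of what doing proofs teaches you.)<br />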
This "Given X, Prove Y" structure is how almost all books in the field teach things. Sure you should code up the algorithms, but doing such problems is how you get *insight* into the field. And algorithms have their own problems (pun intended). Open Cormen et al's "Introduction to Algorithms" and you'll find questions like (randomly opening the third edition)<br />
<br />
<i>Problem 20.1 (e) Prove that, under the assumption of simple uniform hashing, your RS-vEB-TREE-INSERT (Note vEB tree == van Emde Boas tree) and RS-vEB-TREE-SUCCESSOR run in O(lg lg u) expected time.</i><br />
<br />
Thus it turns out that for getting into many areas of interest, a knowledge of how to prove things is critical. You will make very slow or zero progress without that understanding. That is the bad news. The good news is, proofs are (relatively) easy for programmers to understand when presented the right way (acquiring <i>skill</i> takes a while). I wasted many years learning this stuff in inefficient ways. Don't make the same mistake. <br />
<br />
Zero math background required. Just bring some paper to write on.<br />
<br />
<br />
<b><i>(5) Trika - A Hierarchical Reinforcement Learning framework in Scala</i></b><br />
<br />
A demo and discussion on an RL framework I built. I haven't yet cleared the paperwork to Open Source this (the process is like pulling teeth, long story), but I can still show it off.<br />
<br />
<b><i>(6) Neuro genetic Algorithms - Theory and Applications </i></b><br />
<br />
<br />
An interesting branch of AI/ML with some elegant applications. Again live demo of a couple of interesting algorithms and talk about design/performance trade-offs.<br />
<br />
<b><i>(7) Denotational, Operational and Axiomatic Semantics - Designing programming languages with mathematics</i></b><br />
<br />
This is of interest to people building their own languages. Most language implementations are ad hoc "hacks". They don't have to be.<br />
<br />
If you plan to attend, let me know which of these topics strike your fancy. And if you are a reader of this blog, find me and say Hello. <br />
<br />
See you at DevCamp!Ravihttp://www.blogger.com/profile/03630087669712445498noreply@blogger.com11tag:blogger.com,1999:blog-8993901435573921786.post-35661129752176902402010-07-17T03:13:00.000-07:002010-07-17T03:55:07.056-07:00The New American MilitarismExcerpt from the preface of Andrew Bacevich's "The New American Militarism: How Americans are Seduced by War"<br />
<br />
<i>The final point concerns my understanding of history. Before moving into a career focused on teaching and writing about contemporary U.S. foreign policy, I was trained as a diplomatic historian. My graduate school mentors were scholars of great stature and enormous gifts, admirable in every way. They were also splendid teachers, and I left graduate school very much under their influence. My own abbreviated foray into serious historical scholarship bears the earmarks of their approach, ascribing to Great Men—generals, presidents, and cabinet secretaries—the status of historical prime movers.<br />
<br />
I have now come to see that view as mistaken. What seemed plausible enough when studying presidents named Wilson or Roosevelt breaks down completely when a Bush or Clinton occupies the Oval Office. Not only do present-day tendencies to elevate the president to the status of a demigod whose every move is recorded, every word parsed, and every decision scrutinized for hidden meaning fly in the face of republican precepts. They also betray a fundamental misunderstanding of how the world works.<br />
<br />
What is most striking about the most powerful man in the world is not the power that he wields. It is how constrained he and his lieutenants are by forces that lie beyond their grasp and perhaps their understanding. Rather than bending history to their will, presidents and those around them are much more likely to dance to history’s tune. Only the illusions churned out by public relations apparatchiks and perpetuated by celebrity-worshipping journalists prevent us from seeing that those inhabiting the inner sanctum of the West Wing are agents more than independent actors. Although as human beings they may be interesting, very few can claim more than marginal historical significance. So while the account that follows discusses various personalities—not only politicians but also soldiers, intellectuals, and religious leaders—it uses them as vehicles to highlight the larger processes that are afoot.<br />
<br />
Appreciating the limits of human agency becomes particularly relevant when considering remedial action. If a problem is bigger than a particular president or single administration—as I believe the problem of American militarism to be—then simply getting rid of that president will not make that problem go away. To pretend otherwise serves no purpose.<br />
<br />
..................<br />
<br />
The bellicose character of U.S. policy after 9/11, culminating with the American-led invasion of Iraq in March 2003, has, in fact, evoked charges of militarism from across the political spectrum. Prominent among the accounts advancing that charge are books such as The Sorrows of Empire: Militarism, Secrecy, and the End of the Republic, by Chalmers Johnson; Hegemony or Survival: America’s Quest for Global Dominance, by Noam Chomsky; Masters of War: Militarism and Blowback in the Era of American Empire, edited by Carl Boggs; Rogue Nation: American Unilateralism and the Failure of Good Intentions, by Clyde Prestowitz; and Incoherent Empire, by Michael Mann, with its concluding chapter called “The New Militarism.”<br />
<br />
Each of these books appeared in 2003 or 2004. Each was not only written in the aftermath of 9/11 but responded specifically to the policies of the Bush administration, above all to its determined efforts to promote and justify a war to overthrow Saddam Hussein.<br />
<br />
As the titles alone suggest and the contents amply demonstrate, they are for the most part angry books. They indict more than explain, and whatever explanations they offer tend to be ad hominem. The authors of these books unite in heaping abuse on the head of George W. Bush, said to combine in a single individual intractable provincialism, religious zealotry, and the reckless temperament of a gunslinger. Or if not Bush himself, they finger his lieutenants, the cabal of warmongers, led by Vice President Dick Cheney and senior Defense Department officials, who whispered persuasively in the president’s ear and used him to do their bidding. Thus, according to Chalmers Johnson, ever since the Persian Gulf War of 1990–1991, Cheney and other key figures from that war had “wanted to go back and finish what they started.” Having lobbied unsuccessfully throughout the Clinton era “for aggression against Iraq and the remaking of the Middle East,” they had returned to power on Bush’s coattails. After they had “bided their time for nine months,” they had seized upon the crisis of 9/11 “to put their theories and plans into action,” pressing Bush to make Saddam Hussein number one on his hit list.6 By implication, militarism becomes something of a conspiracy foisted on a malleable president and an unsuspecting people by a handful of wild-eyed ideologues.<br />
<br />
<br />
By further implication, the remedy for American militarism is self-evident: “Throw the new militarists out of office,” as Michael Mann urges, and a more balanced attitude toward military power will presumably reassert itself.<br />
<br />
As a contribution to the ongoing debate about U.S. policy, The New American Militarism rejects such notions as simplistic. It refuses to lay the responsibility for American militarism at the feet of a particular president or a particular set of advisers and argues that no particular presidential election holds the promise of radically changing it. Charging George W. Bush with responsibility for the militaristic tendencies of present-day U.S. foreign policy makes as much sense as holding Herbert Hoover culpable for the Great Depression: whatever its psychic satisfactions, it is an exercise in scapegoating that lets too many others off the hook and allows society at large to abdicate responsibility for what has come to pass.<br />
The point is not to deprive George W. Bush or his advisers of whatever credit or blame they may deserve for conjuring up the several large-scale campaigns and myriad lesser military actions comprising their war on terror. They have certainly taken up the mantle of this militarism with a verve not seen in years. Rather it is to suggest that well before September 11, 2001, and before the younger Bush’s ascent to the presidency, a militaristic predisposition was already in place both in official circles and among Americans more generally. In this regard, 9/11 deserves to be seen as an event that gave added impetus to already existing tendencies rather than as a turning point. For his part, President Bush himself ought to be seen as a player reciting his lines rather than as a playwright drafting an entirely new script.<br />
<br />
<br />
In short, the argument offered here asserts that present-day American militarism has deep roots in the American past. It represents a bipartisan project. As a result, it is unlikely to disappear anytime soon, a point obscured by the myopia and personal animus tainting most accounts of how we have arrived at this point.<br />
<br />
</i><br />
<br />
<br />
Great Book. Worth Reading.Ravihttp://www.blogger.com/profile/03630087669712445498noreply@blogger.com0tag:blogger.com,1999:blog-8993901435573921786.post-7220148986841654772010-07-07T22:50:00.000-07:002010-07-08T03:18:00.447-07:00Acquisition AftertasteMini-msft <a href="http://minimsft.blogspot.com/2010/07/kin-fusing-kin-clusion-to-kin-and-fy11.html">asks</a><br />
<br />
<i>How big was the original iPhone team? How big was the KIN team? Why did one result in a lineage of amazingly successful devices in the marketplace, and the other become a textbook extended definition for "dud" ?</i><br />
<br />
He goes on to quote an ex-Danger employee:<br />
<br />
"And finally, one Danger-employee's point of view of why they became demotivated:<br />
<i><br />
To the person who talked about the unprofessional behavior of the Palo Alto Kin (former Danger team), I need to respond because I was one of them.<br />
<br />
You are correct, the remaining Danger team was not professional nor did we show off the amazing stuff we had that made Danger such a great place. But the reason for that was our collective disbelief that we were working in such a screwed up place. Yes, we took long lunches and we sat in conference rooms and went on coffee breaks and the conversations always went something like this..."Can you believe they want us to do this?" Or "Did you hear that IM was cut, YouTube was cut? The App store was cut?" "Can you believe how mismanaged this place is?" "Why is this place so dysfunctional??"<br />
<br />
Please understand that we went from being a high functioning, extremely passionate and driven organization to a dysfunctional organization where decisions were made by politics rather than logic.<br />
<br />
Consider this, in less than 10 years with 1/10 of the budget Microsoft had for PMX, we created a fully multitasking operating system, a powerful service to support it, 12 different device models, and obsessed and supportive fans of our product. While I will grant that we did not shake up the entire wireless world (a la iPhone) we made a really good product and were rewarded by the incredible support of our userbase and our own feelings of accomplishment. If we had had more time and resources, we would have come out with newer versions, supporting touch screens and revamping our UI. But we ran out of time and were acquired and look at the results. A phone that was a complete and total failure. We all knew (Microsoft employees included) that it was a lackluster device, lacked the features the market wanted and was buggy with performance problems on top of it all.<br />
<br />
When we were first acquired, we were not taking long lunches and coffee breaks. We were committed to help this Pink project out and show our stuff. But when our best ideas were knocked down over and over and it began to dawn on us that we were not going to have any real effect on the product, we gave up. We began counting down to the 2 year point so we could get our retention bonuses and get out.<br />
<br />
I am sorry you had to witness that amazing group behave so poorly. Trust me, they were (and still are) the best group of people ever assembled to fight the cellular battle. But when the leaders are all incompetent, we just wanted out. <br />
</i><br />
<br />
I guess we need another ThinkWeek paper on how to successfully acquire companies, too. Between this and aQuantive, we only excel at taking the financial boon of Windows and Office and giving it over to leadership that totally blows it down the drain like an odds-challenged drunk in Vegas. And the shareholders continue to suffer in silence. And the drunks are looking for their next cash infusion."<br />
<br />
Hilarious. You couldn't invent this stuff. But after my last stint at MegaCorp, where my group blew through millions of dollars and delivered zilch (I got out early!), I am not surprised. Dumb-and-dumber management structures inevitably stifle any innovation or effectiveness.<br />
<br />
Something I am watching is HP's acquisition of Palm. I know a couple of good people at HP but by and large the company is bloated and dysfunctional. It will be interesting to see what they end up doing with the Palm assets.<br />
<br />
<br />
The comments on Mini's post are hilarious.<br />
<br />
<i>"If Roz and/or Andy doesn't go, what does that say about our supposed value of "accountability?" I for one am tired of accountability meaning "we move them over here and give them a smaller project and hope they resign." </i><br />
<br />
Heh heh! Something like this just happened to someone at my ex-employer. I've come to the conclusion that there is only one MegaCorp worldwide, and that the idea of separate companies is probably an illusion fostered to give us loser employee types the hope that changing jobs might make life better ;-)Ravihttp://www.blogger.com/profile/03630087669712445498noreply@blogger.com0tag:blogger.com,1999:blog-8993901435573921786.post-8191158486416586602010-07-01T22:50:00.000-07:002010-07-01T23:13:50.167-07:00Why learn Compiler Implementation?Yesterday a friend called up and asked what I was doing and, among other things, I said "I am building a compiler for a language with features X, Y and Z". He replied "But why do that?".<br />
<br />
Well, I like building compilers, interpreters, etc., but there are good reasons why "mainstream" programmers should learn this stuff. <br />
<br />
As Hal Abelson of MIT (co-author of SICP) said<br />
<br />
"If you don't understand compilers, you can still write programs - you can even be a competent programmer- but you can't be a master"<br />
<br />
Steve Yegge has an interesting, if more verbose, post at http://steve-yegge.blogspot.com/2007/06/rich-programmer-food.html<br />
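For a taste of what this involves, here is a toy evaluator in Haskell. This is an illustrative sketch of my own, not the language with features X, Y and Z mentioned above:

```haskell
-- A toy interpreter for arithmetic expressions - the seed from which
-- compilers grow. (Illustrative only.)
data Expr = Lit Int
          | Add Expr Expr
          | Mul Expr Expr

eval :: Expr -> Int
eval (Lit n)   = n
eval (Add a b) = eval a + eval b
eval (Mul a b) = eval a * eval b

main :: IO ()
main = print (eval (Add (Lit 1) (Mul (Lit 2) (Lit 3))))  -- prints 7
```

A real compiler replaces eval with code generation over the same tree, and that is where the interesting problems start.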
<br />
One minor side effect of knowing this stuff is that you are immune to language fads. The local fanboi crowd is jumping off the somewhat creaky Ruby bandwagon and onto the gleaming Clojure one. Watch out for a lot of half-baked blather on the wonders of Lisp by people with not much of a clue. But it will be amusing and help pass the time.Ravihttp://www.blogger.com/profile/03630087669712445498noreply@blogger.com3tag:blogger.com,1999:blog-8993901435573921786.post-47670908017783932252010-06-03T21:20:00.000-07:002013-11-04T05:57:59.102-08:00Work from Home job for Kernel HackersSomeone I know from Hacker News sent me this email (lightly edited) <br />
<br />
<br />
"I am likely to hire one high-class Linux kernel engineer in November<br />
time. I would be interested to have information on any possible<br />
candidates that may fit the work.<br />
<br />
I am looking for people who are:<br />
<br />
- Enthusiastic about linux kernel work<br />
- Have working knowledge of CPUs, cache internals, SMP, concurrency,<br />
memory management. ARM/Embedded knowledge would be a plus.<br />
- Ability to bring out a solution on his own, even architect a design<br />
without my intervention.<br />
- Familiar with open source work flow, i.e. using git, knows how to<br />
create a clean patch that is well tested and send it by email.<br />
- Good communication skills, i.e. the ability to write full English<br />
sentences with correct wording and no typos - you would be surprised<br />
how many people lack even this.<br />
- Past work experience on the linux kernel would be helpful.<br />
<br />
<br />
The work environment involves telecommuting from home with occasional<br />
yearly meetups. Most communication is done by email. We are all about<br />
core kernel development"<br />
<br />
When he originally talked to me about this I had someone in mind but he decided to go to grad school and will be leaving India in August for the USA. If I had any kernel dev experience I would have taken it up myself - looks to be an awesome job.<br />
<br />
<br />
So if anyone has the qualifications listed above and wants a cool job working from home, contact me with details of what you've done and I'll connect you to the hirer.<br />
<br />
NB: I've got some emails wrt this post. Just to clarify, contact me with details of <b><i>what you've done</i></b> wrt kernel hacking/systems work. Links to submitted patches would be nice, as would any links to core systems code of any kind. This is *not* a GSOC kind of mentored position and needs people with prior experience.<br />
<br />
This post is now obsolete. Do NOT repeat NOT keep sending me CVs. They go straight into the garbage folder. Ravihttp://www.blogger.com/profile/03630087669712445498noreply@blogger.com2tag:blogger.com,1999:blog-8993901435573921786.post-9589425440588172572010-04-18T10:31:00.000-07:002010-04-24T02:04:46.599-07:00Thieving Tharoor Gets His Due(Warning:- Indian Politics)<br />
<br />
My <a href="http://pindancing.blogspot.com/2009/04/voting-against-shashi-tharoor.html">judgement about Shashi Tharoor</a> is vindicated. The philandering, corrupt, "new hope" has been kicked out of the Cabinet for his clumsy attempt to reward his girlfriend with a 15 million dollar stake in an IPL team. His fanboi legion is out there on the internetz banging away with such gems as "What if he is corrupt? Others are even more corrupt" and "start a new party and you will be Prime Minister". <br />
<br />
<br />
Please Mr T, do try. <br />
<br />
His NRI following, who choose not to live in India but want a say in its politics, must be heartbroken. I mean, I can understand the appeal - spend your productive years in the West, and when the time comes to retire, persuade a party to give you a parliament seat; after a 3-week campaign in a constituency you've never seen before, without even speaking the language of your constituents, become a Member of Parliament and then immediately a minister with zero experience as a politician or an MP, thus setting you free to "contribute" to India while enriching your girlfriend by millions of dollars - what's not to like? It is the ultimate NRI fantasy. <br />
<br />
Like all fantasies projected onto reality, it does work once in a while, but invariably ends disastrously. <br />
<br />
<br />
As MJ Akbar said in <a href="http://www.mjakbar.org/siegewithin.htm">his blog entry</a><br />
<br />
<i>"Tharoor is writhing between a mistake and a misfortune. His mistake was to gatecrash a party without an invitation. He thought he could buy entry with Dubai and Gujarat money and spin out collateral political benefits by name-association with Kochi. He leapt to take the political credit when Kochi won the franchise. He is alleged to have taken financial rewards more surreptitiously. His friend Sunanda Pushkar's feeble claim that she is not a proxy is silly. You do not get sweat equity in perpetuity, which means free and forever, with a starting value of Rs 70 crore, for being an unknown executive of a Dubai company. There hasn't been a case of "cheque-payment culpability" of this order since the transactions that ended the chief ministership of A R Antulay in 1981. Nearly 30 years ago, Congress inexplicably tried to defend the indefensible before dumping Antulay so hard it virtually broke the warhorse's back. Mystery repeats itself.<br />
<br />
Tharoor's misfortune was to encounter an adversary who could out-Twitter him at high noon in the gunfight at IPL corral. Tharoor and Lalit Modi have more in common than sharp suits, sharp wits and a dogged commitment to the television cameras. Having achieved so much through effective use of the media, they were convinced their favourite weapon remained the best option. They went to war through the media. A veteran like Sharad Pawar would have told them, had they but asked, that children in glasshouse nurseries shouldn't throw stones.<br />
<br />
Modi has one advantage over Tharoor; he is in the private sector. His accountability is to fiscal laws. Tharoor affects the image of the Congress at a time when the party cannot afford a greasy controversy. Tharoor is the first Congress minister in the Sonia Gandhi-Manmohan Singh government to be publicly pilloried for alleged corruption."</i><br />
<br />
<br />
I wouldn't put it past His Sliminess to wriggle his way back into some position of power. But for now, Good Riddance.<br />
<br />
Anyway he promised to make Trivandrum a "world class city" if he got elected. As Member of Parliament he can still work towards that. I suspect he won't get a chance next time ;-). It would help if he knew some people in his constituency and their problems and/or spoke the language. Oh well, now he has the time for all that. We'll wait and watch.<br />
<br />
<br />
Mood: Delighted.<br />
<br />
Some advice for Mr Tharoor: You have about 3 friends in the Congress Party hierarchy. Fortunately for you, one of them is the Prime Minister. Get Dr Singh to have a pliable CBI officer do an "investigation" and then declare you innocent. Hey presto, get your ministry back, and then we can figure out how to make 70 crores back (with interest). It would help if you could keep your mouth shut and not tweet your usual inanities while all this is going on. Good Luck! <br />
<br />
Meanwhile (non political) temperatures are soaring in Delhi. 52 degrees centigrade the other day. Any more and people will burst into flame.<br />
<br />
PS: I don't have access to the internet these days ( I had to make a special effort for this post - the taste of "I told you so" is too sweet ) so if any of the ex-minister's fanboys (yeah you "ppl" or "tweeple" or whatever you morons call yourself these days) want to leave nasty anonymous comments like the last time I wrote about Saint Tharoor, you'll have to wait till I get back online (the middle of May, more or less).Ravihttp://www.blogger.com/profile/03630087669712445498noreply@blogger.com1tag:blogger.com,1999:blog-8993901435573921786.post-89343039131907984362010-03-21T10:05:00.000-07:002010-03-21T10:07:39.499-07:00What to do when "they" hate Indian developersSome fellow posted on proggit<br />
<br />
<i>"Currently the top rated link on proggit is<br />
<br />
"What did I do wrong? (or, how are you supposed to hire a programmer?)"<br />
<br />
<br />
And the first rated comment is<br />
<br />
What did I do wrong?<br />
In the end, we went with the Indian company<br />
Is this some kind of a clever troll?<br />
<br />
<br />
<br />
As an Indian (and a partner at an Indian software consulting company), am I not supposed to be offended by this? Would your answer have been the same had the company been an Israeli one? I see similar highly voted comments on proggit all the time, and it's very infuriating, as obv. I generally like hanging out here."</i><br />
<br />
<br />
and titled it "Dear Proggit: why the hatred for India?"<br />
<br />
My answer<br />
<br />
"(I am an Indian programmer living in India) You are just being too sensitive. Except for some xenophobic rednecks, no one hates India. There are plenty of unskilled Indian "developers" especially in the enterprise sw outsourcing companies who can't code to save their lives and it is only natural that people who have been burned once are leery about outsourcing work to Indian companies.<br />
<br />
The way to fix this is to do good work, more specifically to write great code. The Japanese did exactly this in manufacturing, turning around their reputation from a producer of cheap, low-quality gewgaws to masters of manufacturing. Whining is useless.<br />
<br />
Just do good work. Then do better. Rinse. Repeat. The reputation and "hate" will take care of themselves.<br />
"<br />
<br />
<br />
I am so tired of Indian developers being ultra "sensitive" and doing everything but write good code. <br />
<br />
<br />
An even shorter version of my answer is "Shut up and Code."<br />
<br />
<a href="http://www.reddit.com/r/programming/comments/bg5me/dear_proggit_why_the_hatred_for_india/">full thread</a>Ravihttp://www.blogger.com/profile/03630087669712445498noreply@blogger.com0tag:blogger.com,1999:blog-8993901435573921786.post-57764343325710857492010-02-23T20:02:00.000-08:002010-02-23T20:07:53.214-08:00No getting away from meAn email (lightly edited) I got from a friend working on Financial Software in London<br />
<br />
"Here I am minding my own business, doing my job testing/developing Trading strategies designed by one of the Quants + Traders and what do I hear -<br />
<br />
someone in the corner is talking about something very<br />
mathematical - it all sounds gobbledegook to me and then I hear the words "Reinforcement Learning" (I know you work on something like that) <br />
<br />
I walk over, say hi (to some French mathematicians from Ecole Poly) and casually inquire what they are talking about, comes the reply - Modern Statistical Learning for predicting market behavior and tuning algorithmic trade strategies, <br />
<br />
On the Screen - Java code - author @Ravi Mohan<br />
<br />
There is no running away from you is there :-)<br />
"<br />
<br />
No there isn't! <br />
<br />
;-)Ravihttp://www.blogger.com/profile/03630087669712445498noreply@blogger.com2tag:blogger.com,1999:blog-8993901435573921786.post-12645398924010205282010-02-18T23:31:00.000-08:002010-02-19T19:00:58.541-08:00A Haskell JourneyOver the last couple of years, I've been using Haskell (oddly enough) as a scripting/shell language, somewhat akin to bash, to tie together various bits and pieces of code written in other languages. (See http://www.cse.unsw.edu.au/~dons/data/Basics.html for some shell like utilities).<br />
<br />
One of my goals for this year is to really master Haskell so I can use it as a primary language. As I dig in, I find there are two levels of Haskell. <br />
<br />
Level 1 consists of algebraic data types, pure (and lazy) functions, typeclasses and modules. Someone fluent in another language can (relatively) easily wrap their heads around this portion of the language and start programming. There are plenty of tutorials and books (including "Real World Haskell") that teach this style of Haskell usage.<br />
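To make "Level 1" concrete, here is a toy example of my own showing an algebraic data type, pure pattern-matching functions, and a typeclass instance:

```haskell
-- "Level 1" Haskell: an algebraic data type, pure functions defined
-- by pattern matching, and a typeclass instance. (A toy example.)
data Shape = Circle Double
           | Rect Double Double

area :: Shape -> Double
area (Circle r) = pi * r * r
area (Rect w h) = w * h

instance Show Shape where
  show (Circle r) = "Circle of radius " ++ show r
  show (Rect w h) = "Rect " ++ show w ++ " x " ++ show h

main :: IO ()
main = mapM_ print [area (Circle 1.0), area (Rect 2.0 3.0)]
```

Nothing here should scare anyone who has written ML, Scala or even disciplined Java; that is exactly why this level is learnable from the standard tutorials.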
<br />
But sooner or later, one comes across code that is written in "level 2" Haskell - making heavy use of Monads, Monoids, Arrows and all the other Category Theory goodness. Monads get an abnormal mindshare among would-be Haskell developers, but there is more to Haskell than Monads. No one has really written a comprehensive guide to this bit of Haskell. The nearest we have is the <a href="http://www.haskell.org/sitewiki/images/8/85/TMR-Issue13.pdf">Typeclassopedia</a> (warning: PDF), but that is more a collection of links for further reading than a detailed exposition. The Wikibook is uneven. RWH does a hop-skip-and-jump over these bits - a correct decision given the focus and size of the book. <br />
<br />
What we really need is an "Advanced Haskell" book which assumes a knowledge of Level 1 Haskell and then lays out the CT bits in an orderly fashion. Monads, for example, are better understood from a Category Theory perspective than through some tortured analogy to Space Stations or Elephants or whatever. Some exposition of Type Theory would help too - a knowledge of kinds, for example, is very useful for decoding some of the advanced bits (TAPL's last chapters have a good exposition iirc).<br />
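Even before the Category Theory, a toy example (my own, not from any of the books mentioned) shows why monads earn their mindshare - >>= threads the "may fail" context so the failure-checking plumbing vanishes:

```haskell
import qualified Data.Map as M

-- Chained lookups in the Maybe monad: each >>= short-circuits on
-- Nothing, so no explicit failure checks are needed.
bossOfBoss :: M.Map String String -> String -> Maybe String
bossOfBoss env name = M.lookup name env >>= \boss -> M.lookup boss env

main :: IO ()
main = do
  let env = M.fromList [("ann", "bob"), ("bob", "cy")]
  print (bossOfBoss env "ann")  -- Just "cy"
  print (bossOfBoss env "cy")   -- Nothing
```

The analogy-driven tutorials stop at examples like this; the CT perspective explains why the same pattern covers State, lists, parsers and IO.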
<br />
In the absence of an "Advanced Haskell" book, the best option is to read the various papers, trawl the mailing lists and so on for answers to specific questions. I've learned some Category Theory before (and am fairly comfortable with Type Theory) so I haven't found this to be particularly hard but I can see how it could be (very) hard for someone without this background. Mastery of this level would be when I can code *fluently* with Comonads and such. I am not there yet.<br />
<br />
That said, Haskell is the most elegant language I've ever used and I plan to write a lot of code in it. I plan to open source some Haskell code over the next couple of months, so we'll see how much I've really understood!Ravihttp://www.blogger.com/profile/03630087669712445498noreply@blogger.com8tag:blogger.com,1999:blog-8993901435573921786.post-91282913448310697212010-01-15T07:57:00.000-08:002012-02-09T04:42:35.829-08:00Learning about Machine LearningBradford Cross has posted an awesome blog post (edit: removed link, since Bradford took down the post) titled "Learning about Statistical Learning". If you plan to work in ML, read the post, buy some of the books and work through them. <br />
<br />
It could save you years of work if you are systematic from the beginning (I wasn't), especially if you are self-taught (I am).<br />
<br />
<br />
I work in different domains (Robotics/Computer Vision/Simulation) than Bradford, and so have a different list of books. Please read Bradford's lists first. This is a supplement to his awesome post rather than a replacement. <br />
<br />
I assume you are a good developer and that you have a solid grip on algorithm analysis etc. (though, that said, see recommendations for Discrete Math books below)<br />
<br />
<span style="font-weight:bold;">The first step</span>..<br />
<br />
Learn proof techniques *first*. You'll make no serious progress till you do. The best book is <br />
<br />
Velleman's <a href="http://www.amazon.com/How-Prove-Structured-Daniel-Velleman/dp/0521675995/">"How to Prove It"</a> - recommended by Bradford, but I am repeating it here because this is <b><i>critical</i></b>.<br />
<br />
<span style="font-weight:bold;">Mathematics<br />
</span><br />
In my experience you need to be somewhat comfortable with 6 branches of Mathematics before you can tackle ML. Imo, it is best to take a year and get these right before venturing into ML proper. (I know, it sounds awfully boring. I wasted a lot of time trying to shorten this step. In this case, the long way is the real shortcut.) <br />
<br />
(1) Calculus - best "lite" book - <a href="http://ocw.mit.edu/ans7870/resources/Strang/strangtext.htm">Calculus</a> by Strang (free download),<br />
<br />
best "heavy" books - <a href="http://www.amazon.com/Calculus-Michael-Spivak/dp/0914098918/">Calculus</a> by Spivak,<a href="http://www.amazon.com/Principles-Mathematical-Analysis-Third-Walter/dp/007054235X"> Principles of Mathematical Analysis</a> a.k.a "Baby Rudin"<br />
<br />
<br />
(2) Some book on Discrete Math (I don't know what to recommend here - I don't like Rosen's book) plus a good algorithms book - say Introduction to Algorithms by Cormen et al - will do [*] <br />
<br />
(3) Linear Algebra (First work through <a href="http://www.amazon.com/Introduction-Linear-Algebra-Fourth-Gilbert/dp/0980232716">Strang's book</a>, then <a href="http://www.amazon.com/Linear-Algebra-Right-Sheldon-Axler/dp/0387982582">Axler's</a>) <br />
<br />
(4) Probability (Bertsekas is a good book for those with no prior exp) and <br />
<br />
(5) Statistics (I would recommend <a href="http://www.amazon.com/Introduction-Statistics-Data-Analysis-Roxy/dp/0495557838/">Devore and Peck</a> for the total beginner but it is a damn expensive book. So hit a library or get a bootlegged copy to see if it suits you before buying a copy; see Bradford's list for advanced stuff.) <br />
<br />
(6) Information Theory (<a href="http://www.inference.phy.cam.ac.uk/mackay/itila/">MacKay's book</a> is freely available online)<br />
<br />
<span style="font-weight:bold;">Basic AI</span>.<br />
<br />
Brad suggests <a href="http://www.amazon.com/Machine-Learning-Tom-M-Mitchell/dp/0070428077">Mitchell's book</a>. <br />
<br />
I think <a href="http://www.amazon.com/Artificial-Intelligence-Modern-Approach-3rd/dp/0136042597">AIMA (3rd Edition)</a> is much better (I am biased - I wrote and maintained the Java code for a long while. Children, don't do this: Java is a terrible language to develop AI algorithms in; if you need the JVM, use Scala or Clojure), and I think it covers a lot more than Mitchell does. Take a look at both. Pick one.<br />
<br />
<br />
<span style="font-weight:bold;">Machine Learning</span>.<br />
<br />
NB: you need all the linear algebra, calculus, etc. worked through before you hit this point.<br />
<br />
In order, <br />
<br />
<a href="http://www.amazon.com/Pattern-Recognition-Learning-Information-Statistics/dp/0387310738">"Pattern Recognition and Machine Learning"</a> by Christopher Bishop, <br />
<br />
<br />
*then* <a href="http://www-stat.stanford.edu/~tibs/ElemStatLearn/">"Elements of Statistical Learning"</a> (free download).<br />
<br />
<span style="font-weight:bold;">Neural Networks</span>:<br />
<br />
In order,<br />
<br />
<a href="http://www.amazon.com/Neural-Network-Design-Martin-Hagan/dp/0971732108/">Neural Network Design</a> by Hagan, Demuth and Beale,<br />
<br />
<a href="http://www.amazon.com/Neural-Networks-Comprehensive-Foundation-2nd/dp/0132733501">Neural Networks, A Comprehensive Foundation (2nd edition)</a> by Haykin (there is a newer edition out but I don't know anything about it; this is the one I used)<br />
<br />
and <a href="http://www.amazon.com/Neural-Networks-Pattern-Recognition-Christopher/dp/0198538642/">Neural Networks for Pattern Recognition</a> ( Bishop).<br />
<br />
At this point you are in good shape to read any papers in NN. My recommendations - anything by <a href="http://yann.lecun.com/">Yann LeCun</a> and <a href="http://www.cs.toronto.edu/~hinton/">Geoffrey Hinton</a>. Both do amazing research.<br />
<br />
<span style="font-weight:bold;">Reinforcement Learning</span> (again this is just stuff *I* happened to specialize in for various projects, so feel free to ignore)<br />
<br />
<a href="http://www.amazon.com/Reinforcement-Learning-Introduction-Adaptive-Computation/dp/0262193981/">Reinforcement Learning - An Introduction by Sutton and Barto</a> (follow up with <a href="http://rlai.cs.ualberta.ca/papers/barto03recent.pdf">"Recent Advances in Reinforcement Learning"</a> (PDF), which is an old paper but a GREAT introduction to *Hierarchical* Reinforcement Learning)<br />
<br />
<br />
<a href="http://www.amazon.com/Neuro-Dynamic-Programming-Optimization-Neural-Computation/dp/1886529108/ref=sr_1_2?ie=UTF8&s=books&qid=1263575843">Neuro-Dynamic Programming</a> by Bertsekas and Tsitsiklis<br />
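To give a flavor of what these books cover: the heart of Sutton and Barto is the one-step tabular Q-learning update, which fits in a few lines. This is a sketch of my own (the qUpdate name and the Map representation are illustrative choices, not from the book):

```haskell
import qualified Data.Map as M

-- One step of tabular Q-learning:
--   Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
-- Unvisited entries default to 0; an empty next-action list is
-- treated as a terminal state with value 0.
qUpdate :: (Ord s, Ord a)
        => Double -> Double         -- alpha (step size), gamma (discount)
        -> s -> a -> Double -> s    -- state, action, reward, next state
        -> [a]                      -- actions available in the next state
        -> M.Map (s, a) Double -> M.Map (s, a) Double
qUpdate alpha gamma s a r s' as q = M.insert (s, a) new q
  where
    old  = M.findWithDefault 0 (s, a) q
    best = if null as
             then 0
             else maximum [M.findWithDefault 0 (s', a') q | a' <- as]
    new  = old + alpha * (r + gamma * best - old)

main :: IO ()
main = print (M.lookup (0, 0)
                (qUpdate 0.5 0.9 (0 :: Int) (0 :: Int) 1.0 1 [0] M.empty))
       -- prints: Just 0.5
```

Everything hard about RL is in what surrounds this update - exploration, function approximation when the table won't fit, and the hierarchical decompositions the survey paper covers.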
<br />
<br />
<span style="font-weight:bold;">Computer Vision</span> <br />
<br />
<a href="http://www.amazon.com/Introductory-Techniques-3-D-Computer-Vision/dp/0132611082/">Introductory Techniques for 3-D Computer Vision</a>, by Emanuele Trucco and Alessandro Verri.<br />
<br />
<a href="http://www.amazon.com/Invitation-3-D-Vision-Yi-Ma/dp/0387008934/">An Invitation to 3-D Vision</a> by Y. Ma, S. Soatto, J. Kosecka, S.S. Sastry. (warning TOUGH!!)<br />
<br />
<span style="font-weight:bold;">Robotics</span>.<br />
<br />
I know only about the software/algorithms side of Robotics and that too only Probabilistic Robotics. I don't know anything about hardware, electronics or Physics.<br />
<br />
<a href="http://www.amazon.com/Probabilistic-Graphical-Models-Principles-Computation/dp/0262013193/">Probabilistic Graphical Models: Principles and Techniques (Adaptive Computation and Machine Learning)</a> (strictly speaking not a robotics book, but a lot of the theory in this book is behind the algorithms in the next book)<br />
<br />
<a href="http://www.amazon.com/Probabilistic-Robotics-Intelligent-Autonomous-Agents/dp/0262201623/">Probabilistic Robotics (Intelligent Robotics and Autonomous Agents) </a>by Thrun, Burgard and Fox (trivia: Thrun also wrote the Robotics chapter in AIMA - did I tell you AIMA rocks as a first introduction to AI?)<br />
<br />
And that's all folks. Happy hacking!<br />
<br />
<br />
[*] Working through Cormen et al is a humongous task and can easily consume a year or more of work. Something like Sally Goldman's new algorithms book may be more suited to programmers.<br />
<br />
<br />
PS: I have been getting a lot of email asking *how* one should learn X or Y. I have no idea really. The above is a list of books that worked for me and is provided only in the spirit of "these are good books that worked for me; I don't know if they'll work for you." <br />
<br />
As to how I learned: I just read books and papers, try to understand (a lot of banging my head against the wall at this point), and try to solve problems and code stuff. Beyond that I have no advice on how to learn effectively. I am entirely self-taught and have no idea how to teach this stuff. You probably need to talk to a good prof.Ravihttp://www.blogger.com/profile/03630087669712445498noreply@blogger.com11