Anyone who's met me for more than a few seconds knows I'm a big fan of Python, warts and all. It's not the fastest language for performance-sensitive code, but I think it's one of the most expressive -- and one of the most aesthetically pleasing, when it's done right.
That said, there are some patterns in Python that I find more enjoyable to write than others. For me, the most intuitive way to think about a program is to picture its actual control flow -- how we get from the initial data through to the end result. Maybe it's something I've shamelessly nicked from "real" functional programming, but I like the pattern of sequentially applying transformations to data.
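To illustrate what I mean by "sequentially applying transformations", here's a hypothetical sketch -- a toy config parser I've made up for the example, where the data flows through a pipeline of small, named functions:

```python
def strip_comments(lines):
    """Drop anything after a '#' on each line."""
    return [line.split("#", 1)[0].rstrip() for line in lines]

def drop_blanks(lines):
    """Discard empty lines."""
    return [line for line in lines if line]

def parse_pairs(lines):
    """Turn 'key = value' lines into a dict."""
    return dict(
        (key.strip(), value.strip())
        for key, value in (line.split("=", 1) for line in lines)
    )

raw = ["host = example.com  # production", "", "port = 8080"]
config = parse_pairs(drop_blanks(strip_comments(raw)))
# config == {"host": "example.com", "port": "8080"}
```

Each stage is independently testable, and reading the final line tells you the whole story of how the data gets from input to result.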
Many people don't think like this, and that's okay. An approach that really doesn't work for me is the practice of thinking of all the nouns in your program and making them classes. It's something I've found much more useful in Java and other languages that don't lend themselves so readily to functional representation.
The reason I think this functional-programming-lite works well for me is that it encourages top-down design. It's a natural way to solve problems: if I'm writing a machine which understands grammar, my first step should be to break that task down into stages like tokenising, lexing and parsing.
What I don't love, in the example of a grammar engine, is immediately jumping to the assumption that there's got to be a GrammarEngine object to represent the data and operations on it. Or a Lexer or Parser, for that matter. There might be! But without any knowledge of the process, it's not a reasonable assumption to make in a language which has functions as first-class objects.
The Nouns-First approach has a tendency to leave people with a multitude of classes, each of which has exactly two methods: __init__ and something named after a variant of the class name -- Tokeniser.tokenise, and so on.
Because we've jumped straight to classes for the obvious nouns, we may have missed the fact that -- for example -- the tokenising is a simple "split by spaces" on a string, and we've now got to maintain a Tokeniser object which serves exactly one purpose: to call a method. That method shouldn't be attached to a class; it should be a "tokenise" function, and it should be about two lines long.
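Concretely, if tokenising really does turn out to be "split by spaces" (an assumption for the sake of the example), the whole Tokeniser class collapses to this:

```python
def tokenise(source: str) -> list[str]:
    """Split a source string into whitespace-separated tokens."""
    return source.split()

tokenise("let x = 42")  # ['let', 'x', '=', '42']
```

Compare that with instantiating a Tokeniser just to call its one real method -- the function version has no object to construct, no state to reason about, and nothing to maintain beyond the two lines that do the work.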
It's worth remembering that methods in Python are very, very thinly veiled functions with a default first argument baked-in. And classes are slightly better-veiled dicts. Why not avoid the mental overhead when it's unnecessary, and use just that: functions and dictionaries?
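You can see both halves of that claim directly in the interpreter. Here's a small demonstration (the Greeter class is made up for the example): a bound method is just the class's function with the instance passed as the first argument, and the instance's state is literally a dict:

```python
class Greeter:
    def __init__(self, name):
        self.name = name

    def greet(self):
        return f"hello, {self.name}"

g = Greeter("world")

# A method call is the underlying function with `self` filled in:
assert g.greet() == Greeter.greet(g)

# And the instance's attributes are literally a dict:
assert g.__dict__ == {"name": "world"}

# So the function-and-dict equivalent is nothing exotic:
def greet(obj):
    return f"hello, {obj['name']}"

state = {"name": "world"}
assert greet(state) == g.greet()
```

For code this simple, the function-and-dict version carries exactly the same information with less ceremony.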
Don't get me wrong: I'm not advocating against the use of classes! Far from it, I'm a big fan of inheritance for code reuse. I think mixins and metaclasses are some of the coolest features in the language. But if you're not storing significant state across method calls in your classes, they probably don't need to be classes.
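As a rough rule of thumb for when a class does earn its keep: when there's genuine state that persists and evolves across method calls. A minimal, made-up example -- a running mean, where the accumulated count and total are exactly the kind of state a class is for:

```python
class RunningMean:
    """Tracks the mean of values seen so far.

    The count and total persist across calls to add(),
    which is precisely the state a class is good at holding.
    """
    def __init__(self):
        self.count = 0
        self.total = 0.0

    def add(self, x):
        self.count += 1
        self.total += x
        return self.total / self.count

m = RunningMean()
m.add(2)  # 2.0
m.add(4)  # 3.0
```

If add() didn't read and write self.count and self.total, there'd be no reason for the class at all -- it would just be a function wearing a costume.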
I could go on about this for longer, but it's probably a better idea to link some "further watching" from someone a lot more experienced than me: