Author : buzzard
Page : << Previous 3 Next >>
only do "good things"; the constructors and destructors happen at "times" when such things are best suited to run.
But, nonetheless, this doesn't necessarily make programs easy to comprehend. An example off the top of my head: if an object's destructor removes it from a hash table, and this introduces a bug because the hash table shrinks itself and screws up the currently executing hash iterator, you may spend a long time discovering what is going on.
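The shape of that bug can be sketched in hypothetical terms (the scenario above involved a destructor and a hash table; a std::vector stands in here, and the function name is made up): removing an element from a container invalidates live iterators over it unless the code is written very carefully.

```cpp
#include <cassert>
#include <vector>

// A minimal sketch of the hazard described above, with hypothetical names:
// removing an element from a container while iterating over it. With
// std::vector, erase() invalidates the iterator; the loop below is only
// correct because it reassigns the iterator from erase()'s return value.
int count_and_purge(std::vector<int>& v) {
    int seen = 0;
    for (std::vector<int>::iterator it = v.begin(); it != v.end(); ) {
        ++seen;
        if (*it < 0)
            it = v.erase(it);   // safe: continue from the returned iterator
        else
            ++it;               // calling erase() without the reassignment
                                // above is exactly the hard-to-find bug
    }
    return seen;
}
```

When the erase happens inside a destructor triggered elsewhere, as in the scenario above, the distance between cause and symptom is even greater.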
If we accept, though, that the constructor and destructor calls are there because that leads to better, more comprehensible semantics--that any object-oriented language is going to need something like constructors and destructors--we are only left with two syntaxes to discuss: plain function calls and overloaded operators.
Overloaded Operators
Many style guides strongly recommend disallowing overloaded operators. Some advocate allowing operator overloading for mathematical data structures, like bignums, complex numbers, vectors, matrices, and the like. (The care and handling of copy constructors and assignment operators is more complex, so I'll simply dispense with attempting to argue about them.)
The argument for avoiding overloaded operators is often this simple one: it is too easy for someone reading the code to fail to realize that there are function calls going on. An ordinary syntax that does not normally resemble a function call is suddenly potentially a function call.
The argument for allowing it for math is simple: the expediency of the syntax overwhelms the argument against it. Nothing particularly surprising is going on under the hood, except possibly the performance overhead.
I cannot argue against this philosophy. I choose not to apply it, as the amount of actual addition or subtraction of vectors in my code is so inconsequential that the typing cost is insignificant; nor do I find the shorter, simpler syntax involving overloaded operators to cause me to introduce fewer bugs. But this is surely more a matter of taste than of logic.
Clearly, one would like operator overloading to follow the principle of least surprise. Operators which are normally side-effect free should remain side-effect free. One would hope operators which are normally commutative remain commutative, and associative operators associative; but this is not always possible (e.g. matrix multiplication is not commutative). [Of course, it is not a violation of these rules if operators test for errors and exit, or collect statistics, or perform any number of other not-side-effect-free actions. The important issue is that they be side-effect free in terms of the actual computations of the program, as opposed to the above meta-computations.]
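As a sketch of that rule of thumb: a mathematical overload can be written so it visibly has no side effects--a free function taking its operands by value and mutating nothing. Vec2 here is a hypothetical example type, not from any particular library.

```cpp
#include <cassert>

// Vec2 is a hypothetical math type. The overloads are free functions that
// take their operands by value and mutate nothing, so + stays side-effect
// free and commutative, matching the built-in meaning of the symbol.
struct Vec2 {
    double x, y;
};

Vec2 operator+(Vec2 a, Vec2 b) {
    Vec2 r = { a.x + b.x, a.y + b.y };
    return r;
}

bool operator==(Vec2 a, Vec2 b) {
    return a.x == b.x && a.y == b.y;
}
```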
But in a short function, in which the types of the variables are obvious, one has trouble imagining operator overloading causing much trouble.
Idioms
The advantages of concise idiom are legion. I have an enormous number of C idioms I use without thought; idioms in the sense that if you are not familiar with them, the meaning of the code may not be immediately obvious. They are easy enough to figure out if you stop and think, but the power of the idiom comes from the lack of need to think; it is easier to understand a larger chunk of code all at once if the elements of it are idioms.
Here are two idioms I use frequently:
// n loops from n-1 ... 0
while (n--) {
...
}
// i = (i + 1) mod n
if (++i == n) i = 0;
Notably, these idioms rely on post-decrement and pre-increment, so the odds are high that a reader unfamiliar with them will have to stop and think about the meaning of the code. (The idioms would not normally have the comments describing their meaning.)
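Wrapped in functions so their effects can be stated, the two idioms behave like this (the function names are mine, purely for illustration):

```cpp
#include <cassert>

// The two idioms from the text, wrapped in functions so their effect can
// be checked. Nothing here is new: sum_countdown() demonstrates that
// while (n--) visits n-1 .. 0, and wrap() that the if computes (i+1) mod n.
int sum_countdown(int n) {
    int sum = 0;
    while (n--) {
        sum += n;        // n has already been decremented at this point
    }
    return sum;
}

int wrap(int i, int n) {
    if (++i == n) i = 0;
    return i;
}
```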
Idioms make operator overloading doubly tempting. One aspect is that it allows the use of familiar idioms in new contexts:
for (FooIter i(foo); (bool) i; ++i) {
... *i ...
}
(Something like that--I'm not very familiar with C++ operator overloading.)
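For what it's worth, a class supporting that loop might look something like this. Foo and FooIter are hypothetical, and the only point is that ++i, (bool) i, and *i can all be made to work on a user-defined type via operator overloading:

```cpp
#include <cassert>

// Foo and FooIter are hypothetical; this is one guess at a shape that
// makes the loop above legal. operator bool supports the (bool) i test,
// operator++ the ++i step, and operator* the *i access.
struct Foo {
    int data[3];
    int count;
};

class FooIter {
    const Foo& foo;
    int pos;
public:
    explicit FooIter(const Foo& f) : foo(f), pos(0) {}
    operator bool() const { return pos < foo.count; }
    FooIter& operator++() { ++pos; return *this; }
    int operator*() const { return foo.data[pos]; }
};
```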
A second aspect is that it allows the creation of new idioms. Expression syntax is much more powerful for idiomatic constructions than function call syntax. You may have seen this sort of construction in C, using a conventional return value to empower an idiom:
x = listAdd(listAdd(listAdd(listAdd(newList(), a), b), c), d);
(Specifically, I've seen code like that used for adding elements to a window.)
The indirection and nesting there is ugly, and so you can see how it would be much clearer if you could use an idiom like:
x = newList() + a + b + c + d;
I'm not suggesting that people would like this off the cuff; but they might find it tempting to allow operator overloading simply because it allows them to coin such idioms--not just to save typing, but because it becomes much more rapidly comprehensible. (The nested listAdd()s above are also an idiom, but the difference in ease of comprehension is apparent.)
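A sketch of how that hypothetical idiom could be made to work: operator+ takes the list by value, appends, and returns the copy, so the calls chain left to right. List and newList() are invented names matching the example above, not a real API.

```cpp
#include <cassert>
#include <vector>

// List and newList() are hypothetical. operator+ takes the list by value,
// appends, and returns the copy, so newList() + a + b chains left to right.
struct List {
    std::vector<int> items;
};

List newList() { return List(); }

List operator+(List lst, int v) {
    lst.items.push_back(v);   // the "addition" is really an append
    return lst;
}
```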
But this way lies madness!
Such idioms may be powerful, but they build on new, unrelated meanings of the underlying symbols.
It is (I imagine) exactly this reasoning that motivated the ubiquitous operator overloading found in the C++ stream library.
Ask a C programmer what this code does:
a << b << c << d << e;
She will tell you "nothing". None of the operators have side-effects. In C.
Do they have side-effects in C++?
It depends on what functions they call.
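To make the point concrete: nothing stops a user-defined type from giving << arbitrary side effects. Logger here is a hypothetical type that mimics the iostream convention of returning *this so the calls chain:

```cpp
#include <cassert>

// Logger is hypothetical; it mimics the iostream convention of returning
// *this so that a << b << c chains. The "shift" is really a member
// function call with a side effect.
struct Logger {
    int writes;
    Logger() : writes(0) {}
    Logger& operator<<(int) {
        ++writes;            // a hidden side effect per "shift"
        return *this;
    }
};
```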
C++ programmers swiftly adjust to the use of <<. It seems natural and perfectly reasonable. But don't be fooled by it. Most style guides recommend against coining new forms of operator overloading. That supposed power of idiom is simply too fraught with peril.
Keep this in mind: the argument by analogy to C idioms is broken, because the C idiom is constructed of unambiguous items right there on the page. Comprehending an unfamiliar C idiom just requires parsing the code--an action the reader was already doing. There's no 'secrecy' at all--it just takes a little longer.
Semantics
As noted previously, there are two semantics for a plain C function call. Determining which semantic is in operation is as easy as searching back through the file for the name, and then grepping header files for the name.
Not so for C++. C++ has both run-time indirection and compile-time indirection. In fact, it has a number of flavors of the latter.
foo(x,y);
a plain C-style function call
a plain C-style indirect function call
a call to a non-virtual method in this class, or any parent class
a call to a virtual method (again defined in any ancestor)
a call to a templated function
a call to a method in a templated class
one of several functions of any of the above types, all with the same name, but different numbers of parameters
one of several functions of any of the above types, all with the same name, the same number of parameters, but different formal parameter types
foo->bar(x,y)
a plain C-style indirect function call (e.g. bar is a public function pointer)
a call to a non-virtual method in foo's class, or any parent class
a call to a virtual method (again defined in any ancestor of foo)
a call to a method in a templated class
one of several functions of any of the above types, all with the same name, but different numbers of parameters
one of several functions of any of the above types, all with the same name, the same number of parameters, but different formal parameter types
Some of the variants above may not seem like truly distinct semantics; however, the distinction between run-time and compile-time dispatch is obvious, and the other distinctions are there to call attention to the effort required for someone to locate the implementation of the called function. Any of those cases could turn out to be the one in play, and each is defined in a different place.
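A small sketch of a few of those cases sharing one call syntax; all the names are hypothetical, and nothing at the call site foo(x, y) tells the reader which definition is chosen--only the argument types and counts select among them:

```cpp
#include <cassert>
#include <string>

// All names here are hypothetical. Each definition below is reached by a
// call that reads exactly like a plain C function call; overload
// resolution on argument types and counts selects among them.
std::string foo(double x, double y) { return "double overload"; }
std::string foo(int x, int y)       { return "int overload"; }

template <class T>
std::string foo(T x, T y, T z)      { return "template"; }
```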
Templates offer the best example of my core complaint. At their heart (ignoring the committee-driven creeping featurism), templates are there to allow you to do something like define a generic hash table class, but specialize it to be implemented "directly" for some specific class, instead of having to pay indirect dispatches at runtime.
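A toy sketch of that motivation (Table is hypothetical and does no real hashing; the "hash" is just masking the key): the template is stamped out directly for each element type, so access involves no run-time indirect dispatch.

```cpp
#include <cassert>
#include <string>

// Table is hypothetical and toy-sized: "hashing" is just masking the key.
// The point is that Table<int> and Table<std::string> are each stamped out
// directly for their element type, so get() involves no run-time indirect
// dispatch.
template <class T>
class Table {
    T    slots[8];
    bool used[8];
public:
    Table() { for (int i = 0; i < 8; ++i) used[i] = false; }
    void put(int key, const T& v) { slots[key & 7] = v; used[key & 7] = true; }
    bool has(int key) const       { return used[key & 7]; }
    const T& get(int key) const   { return slots[key & 7]; }
};
```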
However, as I've stated previously, I find this approach flawed, because it introduces an entirely new syntax and semantics. I would much prefer it if you just defined the hash table as taking some abstract base class, defined your elements to be hashed as deriving from that base class, and then used a magic 'specialize' keyword to 'instantiate the template'. (Of course, personally I'd prefer a Smalltalk-like approach where you didn't need to use abstract base classes at all; the same sort of specialization is nonetheless entirely within the realm of computability; and Java implementations may attempt to do JIT inlining to achieve the same effect, much as the academic language Self (something of a sequel to Smalltalk) did in the