Ech... exceptionally wrong! It seems to me you don't know how to use run-time languages
On the contrary, these days I spend 100% of my work time in a run-time interpreted language (Matlab). The problems that creates for me, and the work-arounds I've had to pursue for it, are exactly the basis for my views.
and seem to have decided untyped languages are inherently dynamic.
No. Moreover, my comments there didn't say anything about typing, so I'm not sure where that is coming from.
If your C program took 5 seconds and your (say) Python program took 10 seconds, you probably don't know Python or too much of your cost is system based.
I don't use Python, but the rule of thumb I have for comparing my interpreted programs vs. their compiled C equivalents is two orders of magnitude.
If Python gets closer than that, well, bully for Python. But I have plenty of friends who do use Python, and find themselves replacing the intensive functions with C in order to keep runtime under control (and/or relying on the various compiled extension libraries).
And, no, the issue is not system-based costs. It's just computationally-intensive functions. That kind of thing abounds in machine learning, what with the prolific use of iterative nonlinear optimization algorithms.
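To make concrete the kind of thing I mean, here is a minimal, hypothetical sketch (the problem size, step size, and iteration count are made up purely for illustration): a few gradient-descent steps on a least-squares fit, written in pure Python so that every multiply-add goes through the interpreter.

    # Hypothetical example of an "intensive inner loop": a few gradient-descent
    # iterations on a least-squares fit, written in pure Python so that every
    # multiply-add runs through the interpreter.
    import random
    import time

    n, d = 20000, 50
    X = [[random.random() for _ in range(d)] for _ in range(n)]
    y = [sum(row) + random.gauss(0, 0.1) for row in X]
    w = [0.0] * d

    start = time.perf_counter()
    for _ in range(10):                                   # 10 gradient steps
        grad = [0.0] * d
        for row, target in zip(X, y):
            err = sum(wj * xj for wj, xj in zip(w, row)) - target
            for j in range(d):
                grad[j] += err * row[j]
        w = [wj - 1e-4 * gj / n for wj, gj in zip(w, grad)]
    print("pure Python:", time.perf_counter() - start, "seconds")

Nothing in there is exotic; it's just arithmetic in a loop, and that is exactly where the interpreter overhead piles up.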
Numpy and Scipy libraries controlled by a dynamic language often run at near C++ speeds. Control logic rarely takes even 1% of run-time.
Right: that kind of thing - relying on compiled C/assembly-optimized libraries for the intensive stuff - is exactly one of the compromises I suggested.
It hardly goes to your contention that interpreted languages are plenty fast enough on their own. Those are exactly cases in which the interpreted performance was far too slow, and so compiled C libraries were brought in to handle the intensive stuff. That's a direct, explicit demonstration of the vastly superior speed of compiled languages for intensive numerical calculations.
Likewise, if those packages don't already do what you need to do, you're back to either accepting a huge performance penalty by using native Python, or writing and compiling your own C libraries. Just as I said above.
Alternatively, if what you mean by "using interpreted languages the right way" is "avoid doing anything computationally intensive directly with them, and instead rely on compiled libraries for that," then you are not disagreeing with me in the slightest.
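For contrast, here is the same gradient step as the sketch above pushed down into NumPy (assuming NumPy is available): the arithmetic is identical, but the loops now run inside compiled C/BLAS kernels, which is precisely the compromise I'm describing. The exact speedup will vary by machine and problem, but this is the shape of the comparison.

    # The same gradient step as the sketch above, vectorized so the heavy
    # lifting happens in NumPy's compiled C/BLAS kernels instead of the
    # Python interpreter.
    import time
    import numpy as np

    n, d = 20000, 50
    X = np.random.rand(n, d)
    y = X.sum(axis=1) + np.random.normal(0, 0.1, n)
    w = np.zeros(d)

    start = time.perf_counter()
    for _ in range(10):                  # 10 gradient steps
        err = X @ w - y                  # residuals, computed in compiled code
        grad = X.T @ err                 # gradient of the squared error
        w -= 1e-4 * grad / n
    print("NumPy:", time.perf_counter() - start, "seconds")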
So if you are running that much slower while the rest of the developer world is not, the problem is you.
I'm only running that much slower in cases where I have to rely on the interpreted language exclusively. In cases where I am able to replace all of the intensive stuff with compiled C, as you recommend, there is little issue. This is why I recommended exactly such a compromise solution in the post you are responding to.
Prototyping Python with BLAS, Numpy, Scipy, PyTables and a whole host of other libraries will usually take HALF the time of a C++ prototype.
Sure, supposing those libraries already do what you need done. If not, you're going to have to either eat a huge runtime penalty, or go ahead and write the C code and compile your own library. Either way, you're only getting the speed because somebody took the time to write the code in optimized C and compile it.
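For what it's worth, here is a rough sketch of what "compile your own library" looks like from the Python side, using ctypes; the library name (libmykernel.so) and the exported function (sum_sq) are hypothetical stand-ins for whatever intensive routine the packages don't already cover.

    # Hypothetical sketch: calling a hand-written, separately compiled C routine
    # from Python via ctypes. Assumes you have already compiled something like
    #     double sum_sq(const double *x, int n);
    # into libmykernel.so (e.g. cc -O2 -shared -fPIC -o libmykernel.so kernel.c).
    import ctypes
    import numpy as np

    lib = ctypes.CDLL("./libmykernel.so")     # load the compiled library
    lib.sum_sq.argtypes = [ctypes.POINTER(ctypes.c_double), ctypes.c_int]
    lib.sum_sq.restype = ctypes.c_double

    x = np.random.rand(1_000_000)
    result = lib.sum_sq(x.ctypes.data_as(ctypes.POINTER(ctypes.c_double)), x.size)
    print(result)

The point stands either way: the speed comes from the compiled C, not from the interpreted language wrapped around it.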
In C++ what is the chain hierarchy which is needed to make even a trivial change? A small change can have huge implications on a C++ code base, whereas in Python it practically never does. Since part of the prototype is acknowledging that there will be unexpected hurdles, this matters even more!
I agree that C can present some extra difficulties there, but have found that thoughtful architecture of the general framework up-front tends to avoid most of that.
That said, I have frequently worked in the past by doing initial first-pass prototyping in an interpreted language, and then porting it to C once the basic shape of it was in place. That way I could do the really runtime-intensive parts of the development and maintenance at a reasonable clip.
These days, though, I'm stuck entirely in the interpreted languages and so stuck with nasty runtime penalties.
Why should I always think of typing when I write code? Because you said so?
Well, you could try addressing the reasons I actually gave before complaining that I didn't give any.
The point is that your program is, ultimately, going to end up using data types. And so you might as well keep tabs on that, and ensure that the types you end up with are appropriate.
Perhaps this is my background in embedded targets showing through, though. If you aren't trying to run in real-time on a resource-limited, power-limited platform, you might not care much.
Hell... isn't that a big benefit of the typedef and the macro-def?
I'm not sure what you mean there - you're saying that typedef is a way to not think about typing? Certainly, it is useful for encapsulating platform type dependencies and such away from the whole of the source code, but it is nevertheless all about thinking carefully about typing and having fine control over such.
And what does typing have to do with bloat? You're talking absolute nonsense here.
To be clear, I'm not talking about the source code being bloated. I'm talking about ending up with target code that uses data types which are excessive for their purposes. That does not happen when you think about and specify types as you go. It tends to happen all of the time when you don't.
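A small, hypothetical illustration of what I mean (using NumPy here only because it makes the footprint easy to inspect): a buffer of 12-bit sensor samples left at the default type versus one explicitly sized to the data.

    # A million samples from a hypothetical 12-bit ADC: values 0..4095.
    import numpy as np

    adc = np.random.randint(0, 4096, size=1_000_000)   # default integer dtype (typically 64-bit)
    adc_sized = adc.astype(np.int16)                    # a type actually sized to the data

    print(adc.nbytes)         # ~8,000,000 bytes on most platforms
    print(adc_sized.nbytes)   # 2,000,000 bytes

On a desktop nobody notices; on a resource-limited embedded target, a factor of four in buffer size can be the difference between fitting and not fitting.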
Your mind seems to be stuck in 1999.
Well, again, my whole career is in the embedded target world, which indeed probably looks a lot like the big-iron world of 13-ish years ago.