What is the processing overhead of the length() function in REXX?
This is entirely implementation dependent. Do you mean REXX for OS/2, REXX for z/VM, REXX for z/OS, ooRexx for Windows, REXX/400, or Regina?
Nothing in IBM's REXX language specification dictates how the function is implemented under the covers: it could be O(n) if the implementation scans the string, or O(1) if the length is stored alongside the string.
If it really matters, write some benchmarking code and measure.
I'm not sure. I've written a fair amount of Rexx in my day, but I've never had performance issues with the length() function. How it scales probably depends on the implementation of your Rexx interpreter.
I would write a Rexx script that makes 10,000 calls to length() on a 10-character string, then on a 100-character string, and then on a 1000-character string.
Plotting the results on a graph will give you a rough idea of how performance degrades.
Having said all that, I would expect the degradation to be no worse than linear, i.e. O(n). (See http://en.wikipedia.org/wiki/Big_O_notation )
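A minimal sketch of that experiment, assuming a classic Rexx interpreter (such as Regina) that supports copies() and the elapsed-time clock via time('R')/time('E'):

```rexx
/* Sketch: time 10000 length() calls on strings of 10, 100 and 1000 chars. */
do p = 1 to 3
  s = copies('x', 10 ** p)    /* build a 10-, 100-, then 1000-char string */
  call time 'R'               /* reset the elapsed-time clock             */
  do 10000
    n = length(s)
  end
  say length(s) 'chars:' time('E') 'seconds for 10000 calls'
end
```

If the reported times stay roughly flat as the string grows, length() is effectively O(1) in that interpreter; if they grow with the string, it is scanning.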
This is specific to the language implementation. It's been a long time since I wrote any REXX; in fact, what I wrote was AREXX (an Amiga implementation), and that was 15 years ago. :-)
You can write your own test procedure: generate strings of increasing length and measure the time a batch of length() calls takes using a high-resolution timer. If you store the string length and the elapsed time as comma-separated values in a text file, you can plot them with gnuplot, and then you will see very clearly how it scales.
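A sketch of that procedure, writing one CSV row per string size; the filename, sizes, and call count are arbitrary, and it assumes an interpreter with lineout() and time('E') (e.g. Regina):

```rexx
/* Write "length,seconds" pairs to a CSV file for plotting with gnuplot. */
file = 'length_times.csv'
do p = 1 to 5
  s = copies('x', 10 ** p)       /* strings of 10 up to 100000 characters */
  call time 'R'                  /* reset the elapsed-time clock          */
  do 100000
    n = length(s)
  end
  call lineout file, length(s)','time('E')   /* one CSV row per size */
end
call lineout file                /* close the output stream */
```

In gnuplot, something like `plot "length_times.csv" with linespoints` then shows at a glance whether the curve is flat or growing with string length.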
Edit: I should have checked first, since Rolf already wrote more or less the same thing. :-)