Well, that's fine and all, but I have found the opposite to be true for me. Perhaps it is just years of doing it that way, but I find it convenient and logical in surprising ways. I can't account for all of those surprising ways off the top of my head, but to name one for the sake of naming something: I find it handy that an array's length is also the index of the would-be next item. This can be handy when ranging over indices, say array a[… a.len], but this is of course a very contrived and simple example. I have just found working with 0-based indexes to be the right way to go. This could be better reasoned on my part, and there are probably better reasons to be found. In the end there is also a large subjective factor to these kinds of things.
Yes, I too find that 0-based indexing has many nice benefits, but they're subtle little things that are hard to enumerate in the spur of the moment. One little benefit that comes to mind: when slicing (array[start .. end]), the length of the slice is always end - start.
I’d like to point out that you can always get one-based array indexing by using a member function and just subtracting one from the argument… so it’s not like you can’t have what you’re asking for.
While I am strongly in the zero-indexed camp, there are use-cases for one-based indexing that are worth mentioning. For instance, a lot of SQL work uses one-indexing (considering that the first row in a CSV is usually the header row, you can generalize this to a lot of tabular data as well).
Another good argument for zero-indexing is the use of modulus for things like computing the index of a hash bin. You could just add one to the output, but I like the strict boundary.
I also don’t know if I personally agree that it’s more natural. For counting, it is - sure, no argument there. For relative positions, I would disagree. If I am standing at the beginning of a racetrack, the starting line is zero steps from where I currently am. The finish line itself marks where the race ends, so my distance is from the starting line (zero steps) to where the finish line starts.
If you think of index i as the beginning of the array / string / whatever PLUS i (e.g. when i = 0, arr[0] means beginning of arr + 0), it becomes quite natural, IMHO.
Alternatively you can just keep in mind “we start counting with 0, not 1” mantra, IDK.
This is what I am trying to say - if we are talking about offsets we MUST start from zero! So it is more than natural - it is the only way to get a location of an element of an array.
But when we are talking about indices - well, see the Pascal examples above - you can start from any integer value you want. And under the hood these indices are translated into offsets anyway.
offsets from the beginning of an array, of course, measured in sizeof(ArrayElementType)
If it was good enough for Dijkstra, it is good enough for me.
Not at all!
Just replace the word ‘index’ with ‘offset from the beginning of the array’.
With offsets (from an array start) everything is absolutely unambiguous!
(for ex., the offset of the last element is always a.len - 1)
But with ‘indices’ (which can start from any signed integer, again see Pascal) we have a complete mess.
‘Offset’ is a stricter word than ‘index’ (it says from where), I believe.
No mantras!
Let’s name it “definition”!
What is “index”?
Let’s define “index” like this:
Index of an array element is the offset of this element from the beginning of the array in sizeof(ElementType) units (which in turn are in bytes).
This definition leaves us no choice - indices are inevitably 0 .. a.len - 1 for an array containing a.len elements.
Can anybody construct a definition of “index” which would unambiguously lead us to the conclusion that we must count from 1 and only from 1?