In a programming language, type safety prevents you from performing an operation on data that is not appropriate to its type. In so-called strongly typed languages, data is assigned a type, and there are safeguards in place to prevent operations from being performed on data if those operations are inappropriate to that data’s type.
For example, in VB.Net the variable myString1 could be declared as a string:
Dim myString1 As String
Legitimate operations include assigning a value:
myString1 = "Chickens"
and truncating the value:
myString1 = Left(myString1, 7)
but not multiplication:
Dim myString2 As String = myString1 * 7
which the compiler flags as an error (assuming Option Strict On; with Option Strict Off, VB.Net would instead attempt a numeric conversion at run time).
Unfortunately, the type systems of most common languages are not rich enough to prevent other kinds of errors that could be prevented with a richer type system.
For example, suppose we assign the members of a football team the numbers 1–11, based on the numbers on their shirts. What stops us from adding player 1 to player 2? In most common languages, the simple answer is: nothing.
Another example is an application that uses an integer to represent shoe size. Is it legitimate to divide it by 2? In most languages: yes. In reality: no.
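One way to approximate a richer type system in a mainstream language is to wrap the raw integer in a dedicated type that exposes only the operations that make sense in the domain. The sketch below is in Python rather than VB.Net, and the PlayerNumber and ShoeSize types are invented for illustration: because neither class defines arithmetic operators, the nonsensical operations above fail at run time (and a static checker such as mypy would flag them before the program runs).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PlayerNumber:
    """A shirt number, 1-11. No arithmetic is defined on it."""
    value: int

    def __post_init__(self) -> None:
        if not 1 <= self.value <= 11:
            raise ValueError("shirt numbers run from 1 to 11")

@dataclass(frozen=True)
class ShoeSize:
    """A shoe size. Stored as an integer, but division is not defined."""
    value: int

p1 = PlayerNumber(1)
p2 = PlayerNumber(2)

try:
    p1 + p2  # no __add__ defined, so "player 1 + player 2" is rejected
except TypeError as err:
    print("rejected:", err)

size = ShoeSize(10)
try:
    size / 2  # no __truediv__ defined, so halving a shoe size is rejected
except TypeError as err:
    print("rejected:", err)
```

The design choice here is simply omission: rather than forbidding operations explicitly, the wrapper type never defines them, so the language's ordinary type machinery does the enforcement.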
There appears to be a problem here. It seems reasonable to me that the operations permitted on data should in some way be related to what that data actually means in the world. After all, computer systems are fundamentally models of worlds, real or imagined.
Practically, it is probably unnecessary in most cases to go to the extreme of defining different types of integers for scenarios like these. But for high-reliability applications, it makes sense to me that a richer type system would offer advantages in terms of expressiveness and robustness.