Computers like to count in binary, but humans like to count in decimal.
To deal with this painful disagreement, we've taken a few binary numbers that are very close to decimal numbers and we've given them names that correspond to the similar decimal numbers:
|Number in Decimal|Power|Prefix|Abbreviation|System|
|---|---|---|---|---|
|1 000|10³|kilo-|K-|Decimal|
|1 024|2¹⁰|kibi-|Ki-|Binary|
|1 000 000|10⁶|mega-|M-|Decimal|
|1 048 576|2²⁰|mebi-|Mi-|Binary|
|1 000 000 000|10⁹|giga-|G-|Decimal|
|1 073 741 824|2³⁰|gibi-|Gi-|Binary|
|1 000 000 000 000|10¹²|tera-|T-|Decimal|
|1 099 511 627 776|2⁴⁰|tebi-|Ti-|Binary|
|1 000 000 000 000 000|10¹⁵|peta-|P-|Decimal|
|1 125 899 906 842 624|2⁵⁰|pebi-|Pi-|Binary|
|1 000 000 000 000 000 000|10¹⁸|exa-|E-|Decimal|
|1 152 921 504 606 846 976|2⁶⁰|exbi-|Ei-|Binary|
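The prefix tables translate directly into code. As a minimal sketch (the `human` helper and the table constants are hypothetical names, not from any standard library), here is how you might format a byte count using either system:

```python
# Prefix tables, taken from the chart above, largest first.
DECIMAL = [("E", 10**18), ("P", 10**15), ("T", 10**12),
           ("G", 10**9), ("M", 10**6), ("K", 10**3)]
BINARY = [("Ei", 2**60), ("Pi", 2**50), ("Ti", 2**40),
          ("Gi", 2**30), ("Mi", 2**20), ("Ki", 2**10)]

def human(n, table):
    """Format n bytes using the largest prefix that fits."""
    for prefix, size in table:
        if n >= size:
            return f"{n / size:.1f}{prefix}B"
    return f"{n}B"

print(human(1_500_000, DECIMAL))  # 1.5MB
print(human(1_500_000, BINARY))   # 1.4MiB
```

Note that the same byte count produces different numbers under each system, which is exactly the ambiguity this page is about.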
Combine these with byte or its abbreviation B, and you get terminology like 120 gigabyte SSD or 64KiB memory.
You may notice there's a distinction between the decimal and binary naming schemes that you've probably not heard before. You've surely heard about megabytes and gigabytes, but probably not mebibytes and gibibytes. In practice, we use the decimal-style prefixes for both decimal and binary numbers. It's force of habit, and context usually tells the reader which one we really mean.
In fact, almost all readers will be unfamiliar with the binary xxbi- prefixes, so they aren't always the right choice. Prefer to speak to laypeople in terms of megabytes, not mebibytes. If you're writing documentation for a more technical audience, the binary prefixes are more acceptable.
However, it is generally recommended to use the abbreviated prefixes. For instance, prefer to say "32KiB" over "32KB". This is still easily intuited by an average reader, while being more correct for a technical reader.
It's good to subtly link any use of a binary prefix to a page which explains the difference. (That's why this page exists!)
## Why does it matter?
In the real world, this subject mostly comes up with storage devices. Manufacturers favor decimal, as it allows them to put bigger numbers on their packaging. For instance, a 128GiB drive would be advertised as a 137GB drive, which is literally accurate. However, everywhere else in the computer world, 137GB is assumed to be 137GiB. Consumers are often disappointed to find their drives hold less data than expected.
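The drive-size arithmetic above is easy to check yourself. A quick sketch of the same math:

```python
# The same byte count viewed through both prefix systems.
GiB = 2**30   # gibibyte, the binary "gigabyte"
GB = 10**9    # gigabyte, the decimal one

capacity = 128 * GiB            # 137_438_953_472 bytes
print(capacity / GB)            # ~137.4, hence an advertised "137GB"
```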
This is less of an issue in gaming, but programmers may find that it is easier to get a point across to other programmers by using precise language. If, for instance, I tell you I have 130KB remaining in my budget, do I mean 133120 bytes, or 130000 bytes? For smaller numbers it matters very little, but as a programmer, you'll find numbers can get quite large and sometimes their exact values are important. For instance, a tebibyte is about 10% bigger than a terabyte.
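The "130KB budget" ambiguity works out like this (a quick arithmetic check, nothing more):

```python
# The two possible readings of "130KB" differ by 3120 bytes.
binary_reading = 130 * 2**10    # 133_120 bytes, i.e. 130KiB
decimal_reading = 130 * 10**3   # 130_000 bytes, i.e. 130KB
print(binary_reading - decimal_reading)  # 3120

# At larger scales the gap grows: a tebibyte is ~10% bigger than a terabyte.
print(2**40 / 10**12)  # ~1.0995
```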
It's just a good habit to develop. Not critical, but definitely good.