# Why are decimal numbers not used in computers?

Table of Contents

- 1 Why are decimal numbers not used in computers?
- 2 Can computers understand decimals?
- 3 How does a computer understand a decimal number such as 55?
- 4 Why can computers only read binary?
- 5 How do computers represent floating point numbers?
- 6 What will replace binary code?
- 7 Is it possible to use decimal numbers in computers?
- 8 Why don’t all numbers work in base 10?

## Why are decimal numbers not used in computers?

Computers use voltages, and since voltages change often, no specific voltage can be assigned to each digit of the decimal system. A computer therefore cannot use the decimal number system directly; instead it uses the binary number system, base 2, the lowest-base number system in common use.

**Why do computers use binary instead of decimal?**

Computers use voltages, and since voltages change often, no specific voltage is set for each number in the decimal system. Binary, by contrast, maps naturally onto a two-state system: on or off. Using binary also keeps the underlying circuitry and calculations simple.

### Can computers understand decimals?

Computers can understand instructions and data in binary form only. The decimal number system comprises 10 digits, 0 to 9. The base of a number system is the number of unique digits it uses.

**How do computers deal with decimals?**

Computer memory is organized into strings of bits, called words, of the same length. Decimal numbers are first converted into their binary equivalents and then represented in either integer or floating-point form. On such a machine, once zero is defined, it is redundant to use another encoding for a "minus zero".
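The decimal-to-binary conversion step can be sketched in Python. This is an illustration of the principle (repeated division by 2), not how hardware actually performs it:

```python
def to_binary(n: int) -> str:
    """Convert a non-negative integer to a binary string by repeated division by 2."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        digits.append(str(n % 2))  # the remainder is the next least-significant bit
        n //= 2
    return "".join(reversed(digits))

print(to_binary(55))  # 110111
print(bin(55))        # 0b110111 (Python's built-in, for comparison)
```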

#### How does a computer understand a decimal number such as 55?

Computers calculate with 0s and 1s. A bit can be either 0 or 1, but nothing in between. So if you ask a whole-number-only calculator for 3/2, it must return either 1 or 2, not 1.5.
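The 3/2 example corresponds to the distinction between integer and floating-point division, which Python makes explicit with two operators:

```python
# Integer (truncating) division discards the fractional part.
print(3 // 2)  # 1

# Floating-point division keeps a (binary-approximated) fraction.
print(3 / 2)   # 1.5
```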

**Why don’t we use ternary computers?**

A ternary bit is known as a trit. The reason we don't use ternary logic comes down to the way transistors are combined into logic gates, and how those gates are used to carry out arithmetic. A gate takes two inputs, executes an operation on them, and returns one output.

## Why can computers only read binary?

To make sense of complicated data, your computer has to encode it in binary. Binary is a base-2 number system. Base 2 means there are only two digits, 1 and 0, which correspond to the on and off states your computer can understand.

**How real numbers are stored by the computer?**

Real numbers are stored in the computer using a principle similar to standard form, but with a power of 2 instead of a power of 10. The significant digits of the number are known as the mantissa, and the power of 2 by which they are scaled is known as the exponent.
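Python's standard library exposes exactly this mantissa/exponent decomposition, which makes the base-2 standard form easy to see:

```python
import math

# Decompose a real number into mantissa and power-of-2 exponent,
# analogous to standard form but using base 2 instead of base 10.
mantissa, exponent = math.frexp(0.15625)  # 0.15625 = 0.625 * 2**-2
print(mantissa, exponent)                 # 0.625 -2
assert mantissa * 2 ** exponent == 0.15625
```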

### How do computers represent floating point numbers?

In one simple teaching scheme, eight digits represent a floating-point number: two for the exponent and six for the mantissa. On paper the sign of the mantissa is written as + or -, but in the computer it is a single bit: 1 means negative, 0 means positive. This representation makes numbers easy to compare.
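Real machines typically use the IEEE 754 layout rather than the simple scheme above, but the idea is the same: a sign bit, an exponent field, and a mantissa field. A sketch for inspecting those fields in a 32-bit float (`float_fields` is a hypothetical helper, not a standard API):

```python
import struct

def float_fields(x: float):
    """Unpack a 32-bit IEEE 754 float into (sign, exponent, mantissa) bit fields."""
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    sign = bits >> 31               # 1 bit: 1 means negative, 0 means positive
    exponent = (bits >> 23) & 0xFF  # 8 bits, stored with a bias of 127
    mantissa = bits & 0x7FFFFF      # 23 bits of fraction (leading 1 is implicit)
    return sign, exponent, mantissa

# -1.5 is -1.1 (binary) * 2**0: sign 1, biased exponent 127, top fraction bit set.
print(float_fields(-1.5))  # (1, 127, 4194304)
```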

**Who invented decimal digits?**

Decimal fractions were first developed and used by the Chinese at the end of the 4th century BCE, then spread to the Middle East and from there to Europe. The written Chinese decimal fractions were non-positional.

#### What will replace binary code?

A ternary computer (also called a trinary computer) is one that uses ternary logic (i.e., base 3) in its calculations instead of the more common binary system (i.e., base 2). This means it uses trits instead of the bits that most computers use.

**Is quantum a binary?**

Quantum computers do not use binary; they calculate using qubits. Unlike classical bits, which are binary (either 0 or 1), qubits can be in a superposition: 0, 1, or a mixture of both.

## Is it possible to use decimal numbers in computers?

It depends on what is meant by "use". Computers implemented internally with binary arithmetic can certainly read and output numbers represented in decimal; this happens all the time. What is less well known is that there have been digital computer designs whose internal implementation really is decimal.

**Can decimal numbers be represented exactly in binary?**

Decimal numbers can be represented exactly, given enough space, just not by binary floating-point numbers. If you use a floating decimal point type (e.g. System.Decimal in .NET), then plenty of values that cannot be represented exactly in binary floating point can be represented exactly.
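Python's `decimal` module plays the same role as .NET's System.Decimal, and it makes the difference easy to demonstrate: 0.1 has no exact binary floating-point representation, but it is exact as a decimal type.

```python
from decimal import Decimal

# Binary floating point: 0.1 and 0.2 are stored as nearby binary fractions,
# so their sum is not exactly 0.3.
print(0.1 + 0.2 == 0.3)  # False

# Floating decimal point: these values are represented exactly.
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
```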

### Why don’t all numbers work in base 10?

There are in fact number representations that do this. Binary-coded decimal (BCD) arithmetic has the computer work in base 10. The reason you rarely run into it is that it wastes space: each decimal digit of a number takes a minimum of four bits, whereas four bits could otherwise store up to 16 distinct values.
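BCD encoding can be sketched in a few lines (`to_bcd` is an illustrative helper, not a standard function). Note that each 4-bit group only ever uses 10 of its 16 possible patterns, which is exactly the wasted space described above:

```python
def to_bcd(n: int) -> str:
    """Encode a non-negative integer in binary-coded decimal: 4 bits per decimal digit."""
    return " ".join(format(int(d), "04b") for d in str(n))

print(to_bcd(55))    # 0101 0101
print(to_bcd(1620))  # 0001 0110 0010 0000
```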

**What was the first computer with a decimal number?**

In fact the first computer I ever got my hands on (in 1962) was such a computer. It was an IBM 1620. Each memory location was used to represent a single decimal digit. A computer ‘word’ could be arbitrarily long, so a decimal number could be represented to arbitrary precision.