Beyond the Basics: A Deep Dive into JavaScript's Primitives and Object Model

19 min read, Wed, 21 May 2025

[Image: JavaScript Types, from pixabay.com]

JavaScript is an object-based programming language that works within a host environment. A host environment is the environment that provides a global object and other environment-related features, allowing a script to run successfully within it. JavaScript is standardized as ECMAScript by ECMA International in the ECMA-262 document.

V8 is Google’s open-source high-performance JavaScript and WebAssembly engine written in C++. It can be embedded into any C++ application. This embeddable nature of V8 allowed JavaScript to run on the server side, eliminating the need for a web browser.

Node.js, a server-side host environment for JavaScript also written in C/C++, embeds V8 to run JavaScript code. It adds platform-specific APIs in C++ that can then be called from within JavaScript code. This development allowed JavaScript to become more of a general-purpose programming language, but it still needs a host environment.

JavaScript is generally called a scripting language. A scripting language is a programming language that can be embedded into a host environment to provide scripting features. Since the development of the Node.js runtime, the role of JavaScript has shifted to a mainstream general-purpose language that can be used on the front end in web browsers to provide interactivity to web pages, and on the back end to create microservices and backend processing.

Having covered this overview of JavaScript, let’s now turn our focus to the very basics of the language: its supported values and types.

The Primitive Values and Types

No programming language is complete without its supported primitive values and types. A primitive value is an immutable datum at the lowest level of the language implementation. A primitive type is a set of such primitive values, and this set is collectively referred to as a data type.

For example, Boolean is a data type. As explained earlier, a data type is a set of data, and the Boolean data set contains two values: false and true.

In JavaScript, when we want to refer to a primitive value, we use all lowercase letters, and to refer to a type, we capitalize the first character of the type name. We have the following primitive data types and values:

1. undefined

undefined is a primitive value that is used when a variable has not been assigned a value. This undefined value belongs to the Undefined type, and the Undefined type contains only one primitive value: undefined.

Undefined = {undefined}

Example:

let count;

console.log(count); //Output: undefined;
// because count has been declared but never initialized.

console.log(void 0 === count); //Output: true

2. null

null is a primitive value that represents the intentional absence of any value, and it belongs to the Null type. The Null type contains only one value: null.

Null = {null}
let iAmNull = null;
let iAmUndefined; // See the difference;
// this is not initialized but the above variable iAmNull has been initialized to null.

console.log(iAmNull); //Output: null;

3. true and false

true and false are the two primitive values that represent logical truth values. They belong to the Boolean type, which contains exactly these two primitive values.

Boolean = {true, false}
let iAmBoolean = false;

console.log(false === iAmBoolean); //Output: true

let notBoolean = FALSE;
// ReferenceError: FALSE is not defined.
// JavaScript tries to resolve this identifier as a variable but cannot find it.

4. string

A string value is used to represent textual data, for example, “hello”. The String type is the set of all ordered sequences of zero or more 16-bit unsigned integer values (also called elements) up to a maximum length of 2^53 − 1 elements.

In practice, no JavaScript engine can actually handle strings this large due to memory limitations; V8, for example, caps string length at a few hundred million characters (the exact limit varies by version and platform). Most browsers would crash long before reaching the limit allowed by the JavaScript specification.

Strings in JavaScript are Unicode UTF-16 encoded; that is, each element in the string is treated as a UTF-16 code unit value. Each element in the string occupies a position; for example, in the string “hello”, “h” has position 0, “e” has position 1, and so on.

The length of the string is counted by counting each UTF-16 code unit in the string. An empty string has a length of 0.

Relying on the string length in JavaScript can sometimes produce unexpected results. This is because JavaScript implements strings as sequences of UTF-16 code units. In Unicode, a character is identified by a code point, and in UTF-16 any code point outside the Basic Multilingual Plane (above U+FFFF) is encoded as two 16-bit code units. A special character, for example, an emoji 😊, therefore takes two UTF-16 code units (32 bits in total) to represent a single character, which inflates the string’s length. You can read more about Unicode in my other post.

Have a look at this example:

const str = "😊"; // Code point for this emoji is U+1F60A

console.log(str.length); // Output: 2

const str2 = "Hello";

console.log(str2.length); // Output: 5

When more than one code unit is required to represent a character, the two code units are called a surrogate pair. The first code unit in the pair is called the leading (or high) surrogate, and the second is called the trailing (or low) surrogate.

The example below illustrates the two surrogates used to represent the smiley character:

const str = "😊"; // Code point for this emoji is U+1F60A

console.log(str.charCodeAt(0).toString(16)); // d83d (leading or high surrogate)
console.log(str.charCodeAt(1).toString(16)); // de0a (trailing or low surrogate)

5. symbol

A symbol value is created using the Symbol() function and is a unique and immutable value. Each symbol has an immutable internal property called [[Description]] whose value is either undefined or a String. The Symbol type is the set of all non-String values that may be used as the key of an object property.

A key characteristic of symbols is that every symbol is unique: two separately created symbols never compare equal, even when their descriptions are the same. Consider the example below:

const sym1 = Symbol("id");
const sym2 = Symbol("id");

console.log(sym1 === sym2); // Output: false
console.log(sym1 == sym2); // Output: false
console.log(sym1 === sym1); // Output: true

Symbols are used to hold unique values and are often used as keys for Object properties.
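
As a small illustration (the property name id here is just an example), a symbol key stays out of the way of ordinary string keys:

```javascript
// A symbol used as a "hidden" object key; the description "id" is only a label.
const id = Symbol("id");
const user = { name: "Alice", [id]: 42 };

console.log(user[id]);          // 42
console.log(Object.keys(user)); // [ 'name' ] (symbol keys are not listed)
```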

6. number

JavaScript has two numeric types:

  1. Number
  2. BigInt

1. Number

The Number type is a 64-bit double-precision implementation of IEEE 754 floating-point numbers. Note that JavaScript does not have a dedicated integer type; integer values are represented using floating-point numbers as well.
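
A quick sketch of what this means in practice: integer and fractional values are the same kind of value.

```javascript
// All Numbers are IEEE 754 doubles, so there is no separate integer type.
console.log(5 === 5.0);             // true (the same value)
console.log(Number.isInteger(5.0)); // true (no fractional part)
console.log(7 / 2);                 // 3.5 (division does not truncate)
console.log(Math.trunc(7 / 2));     // 3 (truncation must be explicit)
```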

The IEEE 754 standard is used to represent real numbers, and it defines several formats, e.g., 16-bit, 32-bit, and 64-bit. The Number type can represent exactly 18,437,736,874,454,810,627 (that is, 2^64 − 2^53 + 3) values. Why are we subtracting from 2^64 to get the total number of possible Number values? To understand, we need to take at least a high-level look at how IEEE 754 works. Let’s begin with some basic information:

What are floating-point numbers?

When we think about numbers, we commonly use integers (whole numbers like 5, 100, or −20), which computers handle efficiently. However, representing numbers with decimal points, such as 3.14159, 0.000123, or even extremely large figures like Avogadro’s number (6.022 × 10^23), introduces a challenge. These are known as floating-point numbers.

Computers face a fundamental limitation: finite memory. This makes storing real numbers, like pi, to infinite precision impossible. Therefore, floating-point representation offers a clever and practical solution by approximating these numbers using a fixed number of bits.

Imagine scientific notation. A number like 123,000 can be written as 1.23 × 10^5. Here, 1.23 is the significand (the significant digits), and 5 is the exponent (indicating the decimal point’s position).

This system operates on a principle similar to scientific notation, but it leverages binary (0s and 1s) instead of decimal digits, and powers of 2 instead of powers of 10. Essentially, a floating-point number is stored as a significand (or mantissa) and an exponent, allowing computers to represent an immense range of values—from very small to very large—with a practical level of precision.

How the IEEE 754 standard actually structures these bits in computer memory.

Think of a floating-point number as being stored in a fixed-size box. This box is divided into three main sections:

  1. Sign: a single bit that indicates whether the number is positive or negative.
  2. Exponent: a biased binary exponent that determines where the “binary point” sits.
  3. Significand (mantissa): the significant digits of the number.

There are two common sizes for these “boxes” in IEEE 754:

Imagine a little diagram:

Single-precision (32 bits):
| Sign (1 bit) | Exponent (8 bits) | Significand (23 bits) |

Double-precision (64 bits):
| Sign (1 bit) | Exponent (11 bits) | Significand (52 bits) |

These different sizes allow for a trade-off between the range and precision of the numbers you can represent and the amount of memory used. For more details, read up on the IEEE 754 standard.
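
To make the 64-bit layout concrete, here is a small sketch using only the standard ArrayBuffer and DataView built-ins; doubleToBits is a hypothetical helper name that splits a double into its three sections:

```javascript
// Split a double-precision Number into its sign, exponent, and significand bits.
function doubleToBits(x) {
  const buf = new ArrayBuffer(8);
  const view = new DataView(buf);
  view.setFloat64(0, x);
  const bits = view.getBigUint64(0); // all 64 bits as a BigInt
  return {
    sign: Number(bits >> 63n),                // 1 bit
    exponent: Number((bits >> 52n) & 0x7ffn), // 11 bits (biased by 1023)
    significand: bits & 0xfffffffffffffn,     // 52 bits
  };
}

console.log(doubleToBits(1));  // { sign: 0, exponent: 1023, significand: 0n }
console.log(doubleToBits(-2)); // { sign: 1, exponent: 1024, significand: 0n }
```

Note how 1 is stored as 1.0 × 2^0: the biased exponent field holds 0 + 1023 = 1023, and because the leading 1 of the significand is implicit, the stored significand bits are all zero.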

Many NaNs in IEEE 754 and Calculation of JavaScript Total Number of Numeric Representations

There are many NaN (Not-a-Number) values in the IEEE 754 format to represent the different scenarios that can result in a mathematically undefined operation, for example, 0/0 or taking the square root of −1. Many mathematical operations like these result in an undefined, i.e., NaN, value.

Total NaN Values: NaN values in the IEEE 754 format are represented by setting all exponent bits to 1s and using the 52-bit significand to distinguish different categories of NaN. This gives 2^52 − 1 possible NaN values per sign (the all-zeros significand encodes Infinity instead). Since the sign bit can be 0 or 1, we have 2 × (2^52 − 1) = 2^53 − 2 NaN values.

2^64 Possible Combinations: With 64 bits, there are a total of 2^64 possible combinations of bits. This means a 64-bit value can represent 2^64 different states.

JavaScript’s Single NaN: JavaScript simplifies this by treating all those different IEEE 754 NaN bit patterns as a single special value, NaN. So, even though the IEEE standard allows for many different NaN representations, JavaScript consolidates them into one.

This is how we arrive at the number of numeric values in JavaScript’s Number type:

  1. Total possible bit patterns: 2^64

  2. Number of IEEE 754 NaNs: 2^53 − 2

  3. Number of JavaScript NaNs: 1

  4. Difference: (2^53 − 2) IEEE NaNs − 1 JavaScript NaN = 2^53 − 3

Therefore, the Number type has exactly 2^64 − (2^53 − 3) = 2^64 − 2^53 + 3 values.

Caveat of IEEE 754 Floating Point Numbers

While the IEEE 754 floating-point format is designed to handle decimal calculations, its precision is fundamentally limited by the fixed length of the significand (or mantissa)—typically 52 bits in a double-precision format. Any floating-point value that cannot be precisely represented within these 52 bits will be approximated. This inherent approximation can lead to subtle calculation errors when working with floating-point values.

To illustrate this, consider the notorious example of adding 0.1 + 0.2. In many programming languages using standard floating-point representation, the result of 0.1 + 0.2 will not be exactly 0.3.

While this calculation appears straightforward at a high level, it exposes an underlying limitation caused by the approximation issue inherent in binary representation. Specifically, when the decimal number 0.1 is converted to its binary equivalent, it results in an infinitely repeating fractional binary value:

(Base 10) =  0.1
(Base 2) = 0.00011001100110011...∞

Because 0.1 (and similarly 0.3) must be stored within the fixed 52-bit significand of the IEEE 754 double-precision format, the repeating binary sequence gets rounded. This rounding introduces a tiny, unavoidable error. The same principle applies when 0.3 is converted to binary. Therefore, precautions are essential when dealing with calculations requiring very high precision, especially with fractional values.
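
A short demonstration of the effect, together with one common mitigation, comparing within a small tolerance (the helper name nearlyEqual is ours):

```javascript
console.log(0.1 + 0.2);         // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3); // false

// Compare within a tolerance instead of using exact equality.
function nearlyEqual(a, b, eps = Number.EPSILON) {
  return Math.abs(a - b) < eps;
}
console.log(nearlyEqual(0.1 + 0.2, 0.3)); // true
```

Number.EPSILON is only a sensible tolerance for values near 1; for larger magnitudes, a relative tolerance is more appropriate.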

2. BigInt

In JavaScript, the Number type has a maximum limit for safely representing integers (exposed as Number.MAX_SAFE_INTEGER, i.e., 2^53 − 1). This is because the integer portion of a Number is stored in the 52 explicit bits of the significand (or mantissa) plus one implicit leading bit, giving 53 bits of integer precision.
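
The limit is easy to observe: above 2^53 − 1, adjacent integers can no longer be distinguished.

```javascript
console.log(Number.MAX_SAFE_INTEGER);       // 9007199254740991 (2^53 - 1)
console.log(2 ** 53 === 2 ** 53 + 1);       // true (precision is lost here)
console.log(Number.isSafeInteger(2 ** 53)); // false
```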

To address this, BigInt was introduced as an effort to support arbitrary-length integers. Unlike the Number type, BigInts are not based on the IEEE 754 floating-point standard. This distinction allows them to represent numbers limited only by the system’s available memory, rather than by a fixed bit allocation.

Internally, a BigInt is implemented as a sequence of fixed-size ‘chunks’ (often 64-bit digits) known as limbs, along with a separate sign bit.

The exact implementation of BigInt can vary depending on the JavaScript engine used (e.g., V8, SpiderMonkey, etc.), but most engines represent a BigInt as:

Sign: Positive (+)
Limbs: [0x1234567890ABCDEF, 0x1234567890ABCDEF, ...] // 64-bit chunks

While BigInts can certainly support much larger integer values, they’re generally slower than the Number type because their operations aren’t directly implemented at the hardware level.
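
A brief sketch of BigInt in use; note the n suffix on literals and that Number and BigInt values cannot be mixed implicitly:

```javascript
const big = 2n ** 64n; // well beyond Number.MAX_SAFE_INTEGER
console.log(big);      // 18446744073709551616n
console.log(big + 1n); // 18446744073709551617n (no rounding occurs)

// console.log(big + 1); // TypeError: Cannot mix BigInt and other types
console.log(big + BigInt(1)); // explicit conversion is required
```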

Different Number Formats

A Number value in JavaScript can be written in different bases and notations, e.g., decimal, binary, octal, hexadecimal, and scientific.

let integer = 42;
let decimal = 3.14159;
let scientific = 1.23e6; // 1.23 × 10^6 = 1230000 scientific notation
let binary = 0b1010; // 10 (0b prefix for binary values)
let octal = 0o755; // 493 (0o prefix for octal values)
let hex = 0xff; // 255 (0x prefix for hexadecimal values)

Special Numeric Values

Numbers in JavaScript have special values.

1. Infinity and -Infinity

This represents values that are too large to be represented by the finite range of floating-point numbers. Think of dividing by zero (e.g., 1.0/0.0). This operation typically results in positive or negative infinity. Infinity is represented by an exponent field of all ones and a significand of all zeros. The sign bit again determines positive or negative infinity.

When does JavaScript produce Infinity?

console.log(1 / 0); // Infinity
console.log(-1 / 0); // -Infinity

console.log(Number.MAX_VALUE * 2); // Infinity
console.log(-Number.MAX_VALUE * 2); // -Infinity

console.log(Math.pow(10, 1000)); // Infinity (exponent too large)
console.log(Math.log(0)); // -Infinity
2. +0 and -0

Yes, there’s a positive zero (+0) and a negative zero (-0) in IEEE 754. While they are numerically equal, their signs can be important in certain calculations (e.g., in complex numbers or when dealing with directional limits). They are represented by an exponent field of all zeros and a significand of all zeros. The sign bit determines whether it’s +0 or -0.

When does JavaScript produce -0?

//Division
console.log(1 / -Infinity); // -0
console.log(-1 / Infinity); // -0

//Multiply
console.log(-1 * 0); // -0
console.log(0 * -1); // -0

//Math methods
console.log(Math.round(-0.1)); // -0 (very close to zero but negative)
console.log(Math.atan2(-0, 5)); // -0 (angle calculation)
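
Since 0 === -0 evaluates to true, detecting a negative zero requires either Object.is or the classic division trick:

```javascript
console.log(0 === -0);         // true (strict equality cannot tell them apart)
console.log(Object.is(0, -0)); // false (Object.is distinguishes the sign)

console.log(1 / -0);           // -Infinity (the classic -0 detection trick)
console.log(1 / 0);            // Infinity
```
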
3. NaN

This is for results that don’t make sense mathematically or are undefined. For example, 0.0/0.0, or the square root of a negative number. NaN is represented by an exponent field of all ones and a non-zero significand.

Why NaN !== NaN?

The IEEE 754 standard defines a significant number of NaN (Not-a-Number) representations, specifically 2^53 − 2 distinct bit patterns. A key characteristic of NaN is that it is unordered: NaN compares unequal to every value, including itself, so any equality or relational comparison involving NaN evaluates to false. This design choice stems from a mathematical principle: since NaN represents an ‘indeterminate’ or ‘invalid’ numerical result, two such results cannot be considered equal.

const nan1 = 0 / 0; // NaN
const nan2 = Math.sqrt(-1); // NaN
console.log(nan1 === nan2); // false (per IEEE 754)
console.log(NaN === NaN); // false

However, a notable caveat exists: when using Object.is(NaN, NaN), the comparison returns true.

console.log(Object.is(NaN, NaN)); // true

This behavior of Object.is() was specifically designed to give developers a reliable way to check for NaN values. While the IEEE 754 standard defines multiple NaN bit patterns, JavaScript simplifies this by coalescing all of them into a single, canonical NaN value from a developer’s perspective. This pragmatic, developer-friendly choice allows for straightforward checks to determine whether a mathematical operation has yielded an invalid result.
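
In practice, Number.isNaN is the most precise check, since the older global isNaN coerces its argument first:

```javascript
console.log(Number.isNaN(NaN));     // true
console.log(Number.isNaN("hello")); // false (no coercion is performed)
console.log(isNaN("hello"));        // true ("hello" coerces to NaN first)

// NaN is also the only JavaScript value that is not equal to itself,
// which gives another reliable self-check:
const x = 0 / 0;
console.log(x !== x); // true (only NaN behaves this way)
```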

The Conclusion on Primitives

In summary, JavaScript defines 7 fundamental primitive types (and their corresponding values). These include undefined, null, string, symbol, boolean, number, and bigint.

These are considered primitive because they are fundamental, immutable values not represented as objects, and they exist at the lowest level of the language’s data structure.

Of these 7 primitive types, 5 have corresponding built-in wrapper objects (undefined and null have none). For example, the boolean primitive has a Boolean wrapper. Calling Boolean(value) as a function converts the value and returns a primitive boolean, whereas new Boolean(value) returns a Boolean wrapper object rather than a primitive.
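
The difference between calling a wrapper as a function and constructing it with new is worth seeing directly; note that the wrapper object is always truthy, a classic pitfall:

```javascript
console.log(typeof Boolean(0)); // "boolean" (a primitive is returned)
console.log(Boolean(0));        // false

const wrapped = new Boolean(false);
console.log(typeof wrapped);    // "object" (a wrapper object, not a primitive)
console.log(wrapped ? "truthy" : "falsy"); // "truthy" (objects are always truthy)
```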

Our discussion of JavaScript’s fundamental types would not be complete without addressing the fundamental Object type. In the next section, we will delve into the Object type.

Object Type

Let’s begin with a fundamental definition: a JavaScript object is essentially a collection of key-value pairs, where keys are typically strings or Symbols. It is fundamental to JavaScript’s data structure model, enabling the creation of custom data structures and more complex organizations of data.

The keys of an object are also referred to as properties, and each property falls into one of two categories:

  1. Data property: directly holds a value.
  2. Accessor property: does not hold a value itself; instead, it defines getter and/or setter functions that run when the property is read or written.
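
A small sketch of both categories (the temperature object and its properties are invented for illustration):

```javascript
const temperature = {
  celsius: 25, // data property: holds a value directly

  // accessor property: getter/setter functions run on read and write
  get fahrenheit() {
    return this.celsius * 9 / 5 + 32;
  },
  set fahrenheit(value) {
    this.celsius = (value - 32) * 5 / 9;
  },
};

console.log(temperature.fahrenheit); // 77 (computed from celsius)
temperature.fahrenheit = 212;        // runs the setter
console.log(temperature.celsius);    // 100
```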

JavaScript objects are broadly categorized into two fundamental types:

  1. Ordinary objects: exhibit the default behavior for all of the essential internal methods.
  2. Exotic objects: override the default behavior of one or more internal methods (Arrays, for example, maintain their length property automatically).

Now, let’s clarify a crucial aspect of JavaScript objects: their keys. Objects can only directly hold String and Symbol keys.

While you might assign numeric values as keys, these are implicitly converted to strings before being used as property names. Furthermore, there’s an upper limit for safely using such numeric keys (which become string keys): up to 2^53 − 1.
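
For example, a numeric key is stringified before it is stored (the record object here is illustrative):

```javascript
const record = {};
record[1] = "one"; // the numeric key 1 becomes the string "1"

console.log(record["1"]);                   // "one"
console.log(Object.keys(record));           // [ '1' ]
console.log(typeof Object.keys(record)[0]); // "string"
```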

Arrays, as exotic objects, have a specific range for their numeric indices that differs from the behavior of ordinary object properties. The maximum valid index for an array is 2^32 − 2. If an attempt is made to add an index larger than this valid range, JavaScript engines will internally optimize the array’s representation. They typically convert the array from an efficient ‘fast mode’ (optimized for dense, integer-indexed elements) to a ‘dictionary mode’ (or hash-table mode), where the larger key is added as a regular object property, not a true array index.

const arr = [];
arr[-0] = "valid";
arr[4294967299] = "invalid";

console.log(arr);
// Output: [ 'valid', '4294967299': 'invalid' ]

While this means arrays can effectively hold 2^32 − 1 elements (from index 0 to 2^32 − 2), the −2 in the maximum index is crucial. This is because array length is internally represented as an unsigned 32-bit integer. Since the length property automatically increments when an element is added, if the maximum index were 2^32 − 1, then incrementing the length property beyond that would exceed the 32-bit integer limit for length itself, leading to an overflow.
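
This limit can be observed directly; a requested length of 2^32 is rejected outright (the error message wording may vary by engine):

```javascript
try {
  new Array(2 ** 32); // one past the maximum length of 2^32 - 1
} catch (e) {
  console.log(e instanceof RangeError); // true
  console.log(e.message);               // "Invalid array length" (in V8)
}

console.log(new Array(2 ** 32 - 1).length); // 4294967295, the maximum allowed
```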

The special numeric value -0 is never preserved as an object key or array index. When -0 is used as a property key, it is coerced to the string “0”; for arrays, that string is in turn treated as the numeric index 0.

Here is a code illustration:

const obj = {};
obj[-0] = "value at -0";
console.log(obj[-0]); // Output: value at -0 because -0 is converted to string "0"
console.log(obj["0"]); // Output: Same as obj[-0]
console.log(Object.keys(obj)); // ['0']

const arr = [];
arr[-0] = "value at -0";
console.log(arr[0]); // Output: value at -0 because -0 is converted to numeric 0
console.log(arr.length); // 1

Summary

Understanding JavaScript’s core data types—both its seven immutable primitives (undefined, null, string, symbol, boolean, number, and bigint) and its versatile Object type—is foundational to mastering the language. We’ve explored how primitives, while simple at their base, come with nuances like Number’s floating-point precision issues and BigInt’s arbitrary-length capabilities. Furthermore, the Object type, with its key-value pairs and distinctions between ordinary and exotic behaviors (like Arrays’ unique indexing), provides the building blocks for all complex data structures in JavaScript.

A deep grasp of these underlying data structures and their specific characteristics—from how numeric values are stored to the unique behaviors of NaN and array indices—empowers developers to write more efficient, predictable, and robust JavaScript code. This fundamental knowledge is indispensable, whether you’re building interactive front-end experiences or robust server-side applications.