Introduction to HNSW: Hierarchical Navigable Small World

Introduction
AI innovation is happening at a breakneck pace, and one of the frontiers of this innovation is vector search engines. What are these search engines, you ask? In simple terms, they support large language models (LLMs) by going through large datasets and picking out what is relevant. Now, there are many different ways in which this indexing is done in vector databases. Among them, Hierarchical Navigable Small World (HNSW) stands out for being performant and scalable. All the major vector stores offer HNSW as an indexing method. It is fast, efficient, robust, and reliable. So, in this article, we will break down the inner workings of HNSW and learn what makes it so fast.
Learning Objectives
- Understand what embeddings and vector databases are.
- Get familiar with the different ways of indexing in vector databases.
- Learn what HNSW is and how it works.
- Understand HNSWlib, a header-only HNSW implementation.
This article was published as a part of the Data Science Blogathon.
What are Embeddings?
Embeddings are vector representations of data (text, images) in a high-dimensional vector space.
Two semantically related pieces of data will be close to each other in the vector space, while dissimilar data will be far apart. In other words, the word embeddings for Messi and football will be close together in the embedding space, while the embeddings for football and Joe Biden will be far apart.
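To make this idea of proximity concrete, here is a minimal sketch using NumPy and made-up toy vectors (not embeddings produced by any real model); it only illustrates how cosine similarity captures "close together" versus "far apart".

import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity: values near 1.0 mean the vectors point in a similar direction
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 4-dimensional "embeddings"; real models produce hundreds or thousands of dimensions
messi = np.array([0.9, 0.8, 0.1, 0.0])
football = np.array([0.8, 0.9, 0.2, 0.1])
joe_biden = np.array([0.1, 0.0, 0.9, 0.8])

print(cosine_similarity(messi, football))      # high score: semantically close
print(cosine_similarity(football, joe_biden))  # low score: semantically distant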
The dimensionality of these vectors can range from a few hundred to thousands or more, which makes them difficult to store, query, and search. Yet every Retrieval Augmented Generation (RAG) based application requires high-speed search and querying of data embeddings. This is where vector databases come into the picture.
What are Vector Databases?
Just as traditional databases store structured and unstructured data, vector databases store, search, and query high-dimensional vector embeddings. They provide user-friendly interfaces for interacting with embeddings and their associated data. Vector databases are not entirely different from traditional databases; in fact, they often use traditional databases to store serialized embeddings. For example, Chroma uses SQLite as in-memory storage, and Pgvector uses the Postgres database to store embeddings and their associated metadata. What separates a traditional database from a vector database is the underlying indexing algorithm.
Indexing in Vector Databases
Indexing refers to the process of organizing high-dimensional vectors in a way that enables efficient querying of nearest-neighbor vectors.
This is the most crucial part of building any vector database. These indexes enable fast and efficient querying of high-dimensional embeddings. There are several indexing methods for creating vector indices, such as:
- Linear search algorithm (flat indexing): This is a linear search algorithm, meaning it compares the query vector with every other vector stored in the database. It is the simplest method out there and works well with small datasets (a minimal sketch follows this list).
- Cluster-based algorithm (IVF): Inverted File is a cluster-based indexing technique. It uses k-means clustering to cluster all the vectors. When a query vector is provided, it calculates the distance between the query vector and the centroid of each cluster, then searches for the nearest neighbors only in the cluster whose centroid is closest to the query vector. This significantly reduces query time.
- Quantization (scalar and product quantization): Quantization techniques reduce the memory footprint of large embeddings by reducing their precision.
- Graph-based (HNSW): The most common indexing method. It uses a hierarchical graph architecture to index vectors, and it is what we are going to explore.
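As a baseline, here is a minimal sketch of flat indexing (linear search) in NumPy. The array names and sizes are arbitrary, and a real vector database would hide this behind its own API.

import numpy as np

def flat_search(query, vectors, k=3):
    # Flat (linear) indexing: compare the query against every stored vector
    distances = np.linalg.norm(vectors - query, axis=1)  # L2 distance to each stored vector
    return np.argsort(distances)[:k]                     # indices of the k nearest vectors

rng = np.random.default_rng(0)
stored = rng.random((1000, 16)).astype(np.float32)  # 1,000 stored "embeddings"
query = rng.random(16).astype(np.float32)
print(flat_search(query, stored, k=3))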
Understanding HNSW
Large language models (LLMs) are becoming increasingly popular, and many organizations want to implement them in their product stacks. However, there is a challenge in doing so: LLMs have a limited context window. The context window is the number of tokens an AI model can ingest at once. For example, GPT-3.5 Turbo has a context length of 4096 tokens. This is where vector search databases shine. Instead of throwing an entire book into the context of the LLM, we find the most relevant parts and feed them to the LLM to get precise results.
Now, among all the ways of indexing in vector databases discussed above, HNSW is the most robust and scalable, which also makes it the most widely used indexing method. HNSW is formed by combining two algorithms: the skip list and the navigable small world. To understand HNSW, we need to know these algorithms. So, let's dive in.
Skip List
As the name suggests, the skip list is based on the linked list data structure; we can say it is an extension of the linked list. It was invented by William Pugh in 1990 as a faster alternative to linked lists.
Why do we even need a skip list? A linked list has O(n) search time complexity, which may not be ideal for real-world use cases where speed of execution is paramount. This is why we may need a more efficient linked-list-like structure.
Skip lists have an expected search time complexity of O(log n) and perform much better at random access than linked lists. Because of the layered structure with multiple nodes at each layer, the worst-case space complexity is O(n log n), where n is the number of nodes at the bottom-most level.
How Does a Skip List Work?
A skip list maintains a layered linked architecture where the top layer holds the longest links between elements. The link lengths shrink exponentially as we move down the layers.
The bottom-most layer is a complete linked list. As we move up, the number of nodes roughly halves in each layer. The structure is called a skip list because the higher layers let you skip over nodes while traversing.
Consider the following example.

When searching for a value k, at each step we compare k with the value of the next node on the current layer:
- if the next value equals k, we have found the target element
- if k >= the next value, we move right
- if k < the next value, we move down a layer
We start from the top-left corner and move right until we either find k or overshoot it; in the latter case, we descend to the layer below and continue the process until we reach k. The search complexity is O(log n), as we skip roughly half of the remaining items at each level. A toy implementation of this layered search follows.
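Below is a toy sketch of this layered search in Python. For simplicity, the layers are built deterministically (each upper layer keeps every second node) instead of by coin tosses, so treat it as an illustration of the traversal rather than a full skip list implementation.

class Node:
    def __init__(self, value, right=None, down=None):
        self.value, self.right, self.down = value, right, down

def build_layers(values, num_layers=3):
    # Build the layers bottom-up: layer 0 holds every value, each higher layer keeps every 2nd node
    layers = [sorted(values)]
    for _ in range(num_layers - 1):
        layers.append(layers[-1][::2])
    down_map, top_head = {}, None
    for layer in layers:  # link nodes left-to-right within a layer and down to the layer below
        prev, new_map, head = None, {}, None
        for v in layer:
            node = Node(v, down=down_map.get(v))
            new_map[v] = node
            if prev is None:
                head = node
            else:
                prev.right = node
            prev = node
        down_map, top_head = new_map, head
    return top_head  # head of the top (sparsest) layer

def search(head, target):
    # Greedy skip-list search: move right while the next value <= target, otherwise drop down
    node = head
    while node is not None:
        while node.right is not None and node.right.value <= target:
            node = node.right   # skip ahead on this layer
        if node.value == target:
            return True         # found the target element
        node = node.down        # descend to a denser layer
    return False

top = build_layers([1, 3, 4, 7, 9, 12, 15, 20])
print(search(top, 12), search(top, 5))  # True False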
While random access is faster, insertion and deletion are slower, since they add extra overhead for updating nodes across multiple layers.
During insertion, we start from the bottom list and add the node at the appropriate position. Since skip lists maintain a hierarchical structure, we also need to decide whether the node appears at higher levels. The process is random, like a coin toss: the probability of a node appearing in the layer immediately above is 0.5. In an ideal skip list, the number of nodes in layer 1 will be ~n/2 and in layer 2 ~n/4, where n is the number of nodes at the bottom-most layer (the complete linked list).
Consider the following example.

We find the ideal position for insertion and insert the node at the bottom level. We then decide whether the node also appears at the level above based on a random binary outcome (heads or tails). In a perfect skip list, we get a balanced distribution of nodes across levels.
Deletion works similarly. Find the target value and delete the node; if the element is also present in a higher layer, delete it there and update the links.
Navigable Small World (NSW)
Navigable Small World is a graph-based algorithm for finding approximate nearest neighbors. The data points in the graph are called nodes, and each node is linked to a fixed set of nodes that are close to it.
It is a greedy algorithm: it starts at a pre-defined entry point in the graph and repeatedly moves to the neighbor closest to the target node. The distance between nodes can be measured with Euclidean distance or cosine similarity. This process is repeated until the nearest neighbors of the target node are reached.
The Navigable Small World algorithm is very efficient and easy to deploy. It works well for datasets ranging from a few hundred to a few thousand points; beyond that, it tends to perform worse. It suffers from premature termination when it cannot find a neighbor better than the current one, since it uses only local information to find the nearest neighbor. A minimal sketch of this greedy search is shown below.
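Here is a minimal sketch of the greedy NSW search described above. The adjacency list is hand-written purely for illustration (a real NSW builds connections from actual distances), and the stopping rule mirrors the premature-termination behaviour mentioned above.

import numpy as np

def greedy_nsw_search(graph, vectors, entry, query):
    # Greedy search: hop to whichever neighbor is closer to the query;
    # stop when no neighbor improves on the current node (possible premature stop)
    current = entry
    current_dist = np.linalg.norm(vectors[current] - query)
    while True:
        best, best_dist = current, current_dist
        for neighbor in graph[current]:  # only local connections are examined
            d = np.linalg.norm(vectors[neighbor] - query)
            if d < best_dist:
                best, best_dist = neighbor, d
        if best == current:
            return current               # no closer neighbor: stop
        current, current_dist = best, best_dist

rng = np.random.default_rng(1)
vectors = rng.random((8, 2))
# Illustrative adjacency list (not derived from the actual distances above)
graph = {0: [1, 2], 1: [0, 3], 2: [0, 4], 3: [1, 5], 4: [2, 6], 5: [3, 7], 6: [4, 7], 7: [5, 6]}
print(greedy_nsw_search(graph, vectors, entry=0, query=rng.random(2)))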
During insertion, we traverse the graph to find the nearest neighbors of the new node x and connect them to it.
In vector databases, we need to deal with many millions of embeddings. Hence, we need an algorithm that scales better and offers better searchability. While NSW performs well enough for small datasets, we need an even better algorithm to handle hundreds of millions or billions of data points. This is where HNSW comes into the picture.
Hierarchical Navigable Small World (HNSW)
HNSW extends NSW by incorporating the hierarchical architecture of skip lists, which removes the scalability bottleneck of NSW. Like skip lists, HNSW creates a multi-layer structure, but of NSW graphs instead of linked lists. The topmost layer has few data points with the longest connections, and the number of elements increases as we move down the hierarchy; at the bottom-most level, we have all the data points. As in skip lists, the probability of an element existing in a layer decreases exponentially as we move up the hierarchy.
But how do we search in HNSW?
Searching in HNSW
Now recall both the skip list and NSW. In the skip list, we start at the uppermost layer, and in NSW, we start at a pre-defined point. In HNSW, we start from a pre-defined entry point on the uppermost layer of the hierarchy and greedily traverse the graph to find the element closest to the target data point in that layer. Once we reach the closest node, we descend to the layer below and repeat the process until the K nearest neighbors of the target node are found. See the figure below.

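The heavily simplified sketch below illustrates that layered descent: greedy routing on the sparse upper layers, then gathering candidates on the dense bottom layer. Real implementations such as HNSWlib build the layer graphs themselves and maintain a dynamic candidate list (controlled by ef) instead of this naive one-hop expansion; the layer graphs here are hand-written just to show the flow.

import heapq
import numpy as np

def hnsw_search(layers, vectors, entry, query, k=3):
    current = entry
    for graph in layers[:-1]:            # upper layers: greedy routing only
        improved = True
        while improved:
            improved = False
            for nb in graph.get(current, []):
                if np.linalg.norm(vectors[nb] - query) < np.linalg.norm(vectors[current] - query):
                    current, improved = nb, True
                    break
    bottom = layers[-1]                  # bottom layer: expand one hop around the entry node
    candidates = {current, *bottom.get(current, [])}
    for nb in list(candidates):
        candidates.update(bottom.get(nb, []))
    return heapq.nsmallest(k, candidates, key=lambda i: np.linalg.norm(vectors[i] - query))

rng = np.random.default_rng(2)
vectors = rng.random((10, 4))
layers = [
    {0: [5], 5: [0]},                                      # sparse top layer (hand-written)
    {0: [2, 5], 2: [0, 7], 5: [0, 8], 7: [2], 8: [5]},     # middle layer (hand-written)
    {i: [(i + 1) % 10, (i - 1) % 10] for i in range(10)},  # dense bottom layer: a simple ring
]
print(hnsw_search(layers, vectors, entry=0, query=rng.random(4), k=3))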
Insertion and Deletion in HNSW
Insertion in HNSW follows the same principle as in skip lists. We traverse the layers, finding the nearest neighbors of the element, then move down and repeat the process until we have found all the nearest neighbors at the bottom layer.
The next task is to determine the bi-directional links to the inserted element. This is usually governed by a predefined parameter M: we connect the M nearest neighbors to the inserted node. This is one way to determine the connections of an inserted node; other heuristics can also be used. For instance, instead of only connecting to the nearest neighbor nodes of the same region, we can also connect the inserted node to the closest node of the nearest region to form a better-connected graph.
As with skip lists, the layer a node appears in is determined randomly. The function used is floor(-ln(unif(0, 1)) * mL), where unif(0, 1) is a number sampled uniformly from (0, 1] and mL is a level-normalization constant (the HNSW paper suggests mL = 1/ln(M)).
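A quick sketch of this level assignment, with the normalization constant exposed as a parameter (the variable name m_l is just for illustration):

import math
import random

def assign_level(m_l=1.0):
    # floor(-ln(u) * m_l) with u drawn uniformly from (0, 1]; higher levels are exponentially rarer
    u = random.random() or 1e-12  # guard against u == 0.0, since random.random() returns [0, 1)
    return math.floor(-math.log(u) * m_l)

levels = [assign_level() for _ in range(10000)]
print({lvl: levels.count(lvl) for lvl in sorted(set(levels))})  # counts decay roughly geometrically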
Deletion follows a similar approach: we start at the top layer and delete the target wherever it appears, down to the bottom layer.
Complexities in HNSW
The time complexities of searching, insertion, and deletion in HNSW depend on multiple factors, including the height of the architecture, the number of neighbors per node, and the distance metric. But on average, searching, insertion, and deletion all have O(log n) time complexity. Construction of the HNSW index can be expensive: we need to insert n nodes, each with O(log n) complexity, so the overall time complexity of graph construction is O(n log n).
Vector databases are built to handle hundreds of millions of embeddings. Indexing such an amount of data requires a highly efficient, robust, and scalable algorithm, and HNSW ticks all the boxes.
Cons of HNSW
Although searching, insertion, and deletion are faster in HNSW, there are a few trade-offs to know before going with it. Here are some things to keep in mind before implementing HNSW.
- Higher memory footprint: HNSW maintains a hierarchical structure of embeddings, which significantly increases memory consumption compared to algorithms like NSW. This can be problematic for resource-constrained devices.
- Parameter tuning: HNSW has several tunable parameters, and careful tuning is required to get the best performance.
- Complexity: Implementing HNSW from scratch can get complicated, so most vector databases rely on trusted pre-built solutions such as FAISS or HNSWlib.
HNSWlib
HNSWlib is a header-only C++ implementation of the HNSW algorithm with Python bindings. It was written by Yury Malkov, the lead author of the HNSW paper, and is a bare-bones implementation of the algorithm.
So, let’s get into it.
You can install HNSWlib with any Python package manager.
pip install hnswlib
Declare and initialize the HNSW index.
import hnswlib
import numpy as np
import pickle
dim = 16
num_elements = 100
hnsw_index = hnswlib.Index(space="l2", dim=dim)  # declare the index
hnsw_index.init_index(max_elements=2 * num_elements, ef_construction=200, M=16, allow_replace_deleted=True)  # capacity for two batches of 100
- The space parameter is the distance metric used to calculate the distance between nodes. The Python implementation supports squared L2, cosine, and inner (dot) product.
- The dim parameter is the dimensionality of the embedding vectors.
- The init_index method initializes the index.
- ef_construction defines the construction-time speed/accuracy trade-off.
- M is the number of bi-directional links created during the insertion of a node.
- allow_replace_deleted=True lets the index reuse the memory of deleted elements later (needed for replace_deleted in add_items below).
Now that we have created the index, let's add a few vectors.
data1 = np.float32(np.random.random((num_elements, dim)))
ids1 = np.arange(num_elements)           # labels 0-99
data2 = np.float32(np.random.random((100, dim)))
ids2 = np.arange(100, 200)               # labels 100-199 (must not clash with ids1)
data3 = np.float32(np.random.random((100, dim)))
ids3 = np.arange(200, 300)               # labels 200-299, used later with replace_deleted
hnsw_index.add_items(data1, ids1)
hnsw_index.add_items(data2, ids2)
hnsw_index.set_ef(50)                    # set query-time speed/accuracy trade-off
hnsw_index.set_num_threads(4)            # number of threads used for batch operations
Now, let's see how we can query the k approximate nearest neighbors.
labels, distances = hnsw_index.knn_query(data1, k=1)
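As an optional sanity check (not part of the original walkthrough), we can compare the approximate results against an exact brute-force search over the same stored vectors; this reuses data1, data2, ids1, ids2, and dim from the snippets above.

queries = np.float32(np.random.random((10, dim)))
labels_ann, distances_ann = hnsw_index.knn_query(queries, k=1)

# Exact (brute-force) nearest neighbors over everything currently stored in the index
stored = np.vstack([data1, data2])
stored_ids = np.concatenate([ids1, ids2])
exact_ids = stored_ids[np.argmin(np.linalg.norm(stored[:, None] - queries[None], axis=-1), axis=0)]

recall = np.mean(labels_ann.flatten() == exact_ids)
print(f"recall@1 vs exact search: {recall:.2f}")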
Serialize the index object using pickle.
hnsw_index_copy = pickle.loads(pickle.dumps(hnsw_index))
Deleting elements.
for id in ids2:
    hnsw_index.mark_deleted(id)
This frees up the 100 elements added in the second batch. If you wish, you can also reuse the memory of the deleted elements.
hnsw_index.add_items(data3, ids3, replace_deleted=True)
Conclusion
HNSW is one of the most important algorithms for vector retrieval today, and it is the leading indexing algorithm used in all major vector databases. I hope this article has helped you understand how HNSW works. As AI evolves, we will see the development of larger and more complex models, driving up the need for HNSW and growing its applications and importance.
Key Takeaways
- Vector databases are purpose-built data stores for storing high-dimensional vector embeddings.
- Indexing of embeddings allows vector stores to handle querying, insertion, and deletion of embeddings.
- There are different ways of indexing vectors, such as IVF, Annoy, quantization, and HNSW.
- HNSW is a combination of two algorithms: skip lists and Navigable Small World (NSW).
Frequently Asked Questions
Q1. What is HNSW?
A. HNSW stands for Hierarchical Navigable Small World. It is one of the top-performing vector indexing algorithms, used across all major vector databases.
Q2. How is HNSW different from NSW?
A. HNSW (Hierarchical Navigable Small World) extends NSW (Navigable Small World) by constructing a hierarchical graph over the data points. The graph is constructed such that similar data points are close together while dissimilar data points are far apart.
Q3. What is vector search?
A. Vector search is a technique for finding similar items in a dataset based on their vector representations. Vector representations are numerical representations of data objects that capture their semantic and syntactic properties.
Q4. What is approximate nearest neighbor (ANN) search?
A. Approximate nearest neighbor (ANN) search is a type of search algorithm that finds items in a dataset that are similar to a given query item, but not necessarily the exact nearest neighbors. ANN search algorithms are often used in applications where it is important to find similar items quickly, even if the results are not perfectly accurate.
Q5. What is product quantization?
A. Product quantization (PQ) is a technique for compressing high-dimensional vectors to make them more efficient to store and search. It works by dividing the vector into smaller subvectors and quantizing each subvector separately. This results in a compressed vector that is much smaller than the original but still retains most of the original information.
The media shown in this article is not owned by Analytics Vidhya and is used at the Author's discretion.