
Write Your Own


Eshaan Aggarwal @EshaanAgg

Last updated: Wednesday, June 4, 2025

This is a collection of common C++ classes and templates that we use often, and which therefore make perfect opportunities for interviewers to check whether a candidate is comfortable with the language, can write clean code, and is aware of the language's internals.

Smart Pointers

General Smart Pointer

This is a general implementation of a smart pointer that manages the memory of a raw pointer. A smart pointer is a class that wraps a raw pointer, takes ownership of it, and deletes the underlying memory when the wrapper goes out of scope. It is an application of the RAII (Resource Acquisition Is Initialization) idiom, where the resource is acquired in the constructor and released in the destructor.

template <typename T>
class AutoPtr {
    T *ptr;

public:
    AutoPtr(T *ptr = nullptr) : ptr(ptr) {}
    ~AutoPtr() {
        delete ptr;
    }

    // Copy constructor [Moves the pointer to self]
    // Takes a non-const reference, since it modifies the source object
    AutoPtr(AutoPtr &other) {
        ptr = other.ptr;
        other.ptr = nullptr;
    }

    // Copy assignment operator [Moves the pointer to self]
    AutoPtr& operator=(AutoPtr &other) {
        if (this == &other)
            return *this;

        delete ptr; // Free any existing resource
        ptr = other.ptr;
        other.ptr = nullptr;
        return *this;
    }

    // Overloading the operators for convenience
    T& operator*() const { return *ptr; }
    T* operator->() const { return ptr; }
};
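
To see why the "copy that actually moves" behaviour is surprising, here is a small usage sketch of the AutoPtr class above (the variable names are only for illustration):

#include <iostream>

int main() {
    AutoPtr<int> a(new int(10));
    AutoPtr<int> b = a; // "Copy" actually transfers ownership: a now holds nullptr

    std::cout << *b << "\n"; // OK, prints 10
    // std::cout << *a << "\n"; // Undefined behaviour: dereferences a null pointer

    return 0;
}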

While this class is a good starting point, it has some limitations:

  1. Copying silently transfers ownership, so the source object is left holding a null pointer. This is surprising behaviour for callers and makes the class unsafe to store in standard containers.
  2. It always releases memory with delete, so it cannot manage arrays (which require delete[]) and offers no support for custom deleters.

Because of these and many more such reasons, the AutoPtr class (std::auto_ptr), first introduced in C++98, was deprecated in C++11 and removed in C++17. It was replaced by std::unique_ptr, std::shared_ptr, and std::weak_ptr, which implement move and copy semantics in a more robust way.

Unique Pointer

A std::unique_ptr is a smart pointer that owns and manages another object through a pointer and disposes of that object when the std::unique_ptr goes out of scope. It is unique in the sense that it cannot be copied or assigned to another std::unique_ptr, so there is only one owner of the object at any time (ownership can be transferred using move semantics).
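
Before re-implementing it, here is a brief sketch of how the standard std::unique_ptr is typically used (all names below are from the standard library):

#include <iostream>
#include <memory>
#include <utility>

int main() {
    auto p = std::make_unique<int>(42);     // Sole owner of the int

    // auto q = p;                          // Error: unique_ptr is not copyable
    auto q = std::move(p);                  // Ownership transferred; p is now empty

    std::cout << *q << "\n";                // Prints 42
    std::cout << (p ? "p owns" : "p is empty") << "\n";

    return 0;
}

The class below mimics this behaviour in a simplified form.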

#include <iostream>
#include <utility>
using namespace std;

template <typename T>
class UniquePointer
{
    T *ptr;

public:
    UniquePointer(T *ptr = nullptr) : ptr(ptr) {}
    ~UniquePointer() { delete ptr; }

    // Disable the copy constructor and copy assignment
    UniquePointer(const UniquePointer &other) = delete;
    UniquePointer &operator=(const UniquePointer &other) = delete;

    UniquePointer(UniquePointer &&other)
    {
        ptr = other.ptr;
        other.ptr = nullptr;
    }

    UniquePointer &operator=(UniquePointer &&other)
    {
        if (this == &other)
            return *this;

        delete ptr; // Free any existing resource
        ptr = other.ptr;
        other.ptr = nullptr;
        return *this;
    }

    // Overload operators
    T &operator*() const { return *ptr; }
    T *operator->() const { return ptr; }
};

int main()
{
    UniquePointer<int> uptr(new int(42));
    cout << "Value: " << *uptr << "\n"; // Should print 42

    UniquePointer<int> uptr2 = std::move(uptr);
    cout << "Value after move: " << *uptr2 << "\n"; // Should print 42

    return 0;
}

Shared Pointer

A std::shared_ptr is a smart pointer that retains shared ownership of an object through a pointer. Several std::shared_ptr objects may own the same object. The object is destroyed and its memory deallocated when all std::shared_ptr objects that own it are destroyed or reset. The std::shared_ptr uses a reference count to keep track of the number of std::shared_ptr objects that own the object.
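
As a quick reference, the standard std::shared_ptr can be used like this (use_count reports the current number of owners):

#include <iostream>
#include <memory>

int main() {
    auto a = std::make_shared<int>(42);
    std::cout << a.use_count() << "\n"; // 1

    {
        auto b = a;                         // Shared ownership
        std::cout << a.use_count() << "\n"; // 2
    }                                       // b destroyed, count drops back

    std::cout << a.use_count() << "\n"; // 1
    return 0;
}

The class below implements the same idea with a manually managed reference count.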

#include <iostream>
#include <utility>
using namespace std;

template <typename T>
class SharedPointer
{
    T *ptr;
    int *refCount;

    void _clean()
    {
        if (refCount == nullptr)
            return;

        if (--(*refCount) == 0)
        {
            delete ptr;
            delete refCount;
        }
    }

public:
    SharedPointer(T *p = nullptr) : ptr(p)
    {
        // Initialize reference count to 1 if the pointer is not null,
        // else set it to nullptr as well
        if (p == nullptr)
            refCount = nullptr;
        else
            refCount = new int(1);
    }

    ~SharedPointer() { _clean(); }

    // Copy constructor
    SharedPointer(const SharedPointer &other)
    {
        ptr = other.ptr;
        refCount = other.refCount;
        if (refCount)
            ++(*refCount);
    }

    // Copy assignment operator
    SharedPointer &operator=(const SharedPointer &other)
    {
        if (this == &other)
            return *this;

        _clean(); // Release current resources

        ptr = other.ptr;
        refCount = other.refCount;
        if (refCount)
            ++(*refCount);

        return *this;
    }

    // Move constructor
    SharedPointer(SharedPointer &&other)
    {
        ptr = other.ptr;
        refCount = other.refCount;

        other.ptr = nullptr;
        other.refCount = nullptr;
    }

    // Move assignment operator
    SharedPointer &operator=(SharedPointer &&other)
    {
        if (this == &other)
            return *this;

        _clean();

        ptr = other.ptr;
        refCount = other.refCount;

        other.ptr = nullptr;
        other.refCount = nullptr;

        return *this;
    }

    // Dereference operators
    T &operator*() const { return *ptr; }
    T *operator->() const { return ptr; }

    int use_count() const { return refCount ? *refCount : 0; }
    T *get() const { return ptr; }
};

int main()
{
    SharedPointer<int> sp1(new int(42));
    cout << "Value: " << *sp1 << ", Use count: " << sp1.use_count() << "\n";

    SharedPointer<int> sp2 = sp1;
    cout << "Value after copy: " << *sp2 << ", Use count: " << sp2.use_count() << "\n";

    SharedPointer<int> sp3 = std::move(sp2);
    cout << "Value after move: " << *sp3 << ", Use count: " << sp3.use_count() << "\n";

    return 0;
}

Hit Counter

Base Variant

You need to implement a HitCounter class that supports the following operations:

  1. hit(timestamp) — record a hit at the given timestamp.
  2. getHits(timestamp) — return the number of hits received in the past 5 minutes (300 seconds), counting back from the given timestamp.

Each function accepts a timestamp parameter (in seconds granularity) and you may assume that calls are being made to the system in chronological order (i.e. the timestamp is monotonically increasing).

class HitCounter {
    // Front of the deque has the oldest timestamp
    // Timestamps are monotonically increasing
    deque<pair<int, int>> hits; // {timestamp, number of hits at that timestamp}
    int curCnt;

    const int MAX_TIME = 5 * 60;

    void removeOldHits(int timestamp) {
        while (!hits.empty() && hits.front().first <= timestamp - MAX_TIME) {
            curCnt -= hits.front().second;
            hits.pop_front();
        }
    }

public:
    HitCounter(): curCnt(0) {}

    void hit(int timestamp) {
        removeOldHits(timestamp);
        if (!hits.empty() && hits.back().first == timestamp)
            hits.back().second++;
        else
            hits.push_back({timestamp, 1});
        curCnt++;
    }

    int getHits(int timestamp) {
        removeOldHits(timestamp);
        return curCnt;
    }
};

This implementation uses a deque to store the hits, and a curCnt counter for the total number of hits in the current window. The removeOldHits function evicts the hits that are older than 5 minutes. One salient feature of this implementation is that it does not store a separate entry for every hit, but rather the number of hits at each second, so it gracefully handles multiple hits arriving in the same second. If that were not a requirement, a plain queue with one entry per hit would suffice.
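
A small driver illustrating the expected behaviour (assuming the class above is in scope, together with the standard headers it relies on):

#include <deque>
#include <iostream>
#include <utility>
using namespace std;

// ... HitCounter definition from above ...

int main() {
    HitCounter counter;
    counter.hit(1);
    counter.hit(2);
    counter.hit(2);                       // Two hits in the same second
    cout << counter.getHits(4) << "\n";   // 3 hits within the window
    cout << counter.getHits(301) << "\n"; // Hit at t=1 has expired: 2
    cout << counter.getHits(302) << "\n"; // Hits at t=2 have expired: 0
    return 0;
}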

Unordered Timestamps

Implement the same interface as above, but now the timestamps are not guaranteed to be in order.

class HitCounter {
    map<int, int> hits;
    const int MAX_TIME = 5 * 60;

public:
    HitCounter() {}

    void hit(int timestamp) {
        hits[timestamp]++;
    }

    int getHits(int timestamp) {
        int cnt = 0;
        for (int time = timestamp - MAX_TIME + 1; time <= timestamp; time++) {
            if (hits.find(time) != hits.end())
                cnt += hits[time];
        }
        return cnt;
    }
};

Concurrent Hit Counter

Implement the same interface as above, but now the class needs to be thread-safe. You can assume that the threads call these methods with chronologically increasing timestamps.

class HitCounter {
    mutex mtx;
    deque<pair<int, int>> hits;
    int curCnt;

    const int MAX_TIME = 5 * 60;

    void removeOldHits(int timestamp) {
        while (!hits.empty() && hits.front().first <= timestamp - MAX_TIME) {
            curCnt -= hits.front().second;
            hits.pop_front();
        }
    }

public:
    HitCounter(): curCnt(0) {}

    void hit(int timestamp) {
        lock_guard<mutex> lock(mtx);

        removeOldHits(timestamp);
        if (!hits.empty() && hits.back().first == timestamp)
            hits.back().second++;
        else
            hits.push_back({timestamp, 1});
        curCnt++;
    }

    int getHits(int timestamp) {
        lock_guard<mutex> lock(mtx);

        removeOldHits(timestamp);
        return curCnt;
    }
};

Infinite Timestamps in the Past

In this variant, we want the hit counter to be able to answer queries about any window in the past, and the solution should use constant memory. Some inaccuracy is allowed in the answer. To handle this, we make use of buckets whose sizes are powers of 2, so that older buckets aggregate more hits and are updated less frequently.

class HitCounter;

class Bucket
{
    int quantum;
    int cnt;
    int lastUpdate;

    friend class HitCounter;

public:
    Bucket(int p)
    {
        quantum = 1 << p;
        cnt = 0;
        lastUpdate = 0;
    }
};

class HitCounter
{
    int MAX_BUCKETS = 12;
    vector<Bucket> buckets;
    int curTime;

    void shiftBuckets(int deltaTime)
    {
        if (!deltaTime)
            return;

        // Last bucket represents lifetime hits, and should not be shifted
        for (int i = 0; i < MAX_BUCKETS - 1; i++)
        {
            Bucket &b = buckets[i];
            if (curTime - b.lastUpdate >= b.quantum)
            {
                buckets[i + 1].cnt += b.cnt;
                buckets[i].lastUpdate = curTime;
                buckets[i].cnt = 0;
            }
        }
    }

    void update(int timestamp)
    {
        int deltaTime = timestamp - curTime;
        curTime = timestamp;
        shiftBuckets(deltaTime);
    }

public:
    HitCounter()
    {
        curTime = 0;
        for (int i = 0; i < MAX_BUCKETS; i++)
            buckets.emplace_back(i);
    }

    void hit(int timestamp)
    {
        update(timestamp);
        buckets[0].cnt++;
    }

    // Returns (approximately) the number of hits in the last prevTime seconds
    int getHits(int timestamp, int prevTime)
    {
        update(timestamp);

        int cnt = 0;
        int idx = 0;
        for (; idx < MAX_BUCKETS; idx++)
        {
            if (prevTime >= buckets[idx].quantum)
                cnt += buckets[idx].cnt;
            else
                break;
        }

        return cnt;
    }
};

Singleton Design Pattern

A singleton class is a class of which only one instance can ever exist. This is useful when you want a single object that is shared across the application. It is typically implemented with a static member function that returns the single, statically stored instance of the class.

Singletons are useful when we want to apply some functionalities to a “global” set of data, but we do not want to pass the object or the data/member functions around. Having the same encapsulated in a singleton class makes it easier to manage and use throughout the application. They can effectively be managed using namespaces as well, but the singleton class provides a more object-oriented approach.

Typical examples of singletons are loggers, configuration managers, and wrappers around a single hardware resource. Here is a basic implementation of a singleton class:

#include <iostream>
using namespace std;

class Singleton
{
    int state;

    Singleton() : state(0) {} // Private constructor to prevent instantiation

    // Delete copy/move constructors and assignment operators
    Singleton(const Singleton &other) = delete;
    Singleton(Singleton &&other) = delete;
    Singleton &operator=(const Singleton &other) = delete;
    Singleton &operator=(Singleton &&other) = delete;

public:
    static Singleton *getInstance()
    {
        // The static local instance is created lazily on first use,
        // and its initialization is thread-safe since C++11
        static Singleton instance;
        return &instance;
    }

    int get()
    {
        return state;
    }

    int incr()
    {
        return ++state;
    }
};

int main()
{
    Singleton *a = Singleton::getInstance();
    Singleton *b = Singleton::getInstance();

    cout << "States: " << a->get() << " " << b->get() << "\n";
    cout << "Incr: " << a->incr() << " " << b->incr() << "\n";
    cout << "States: " << a->get() << " " << b->get() << "\n";
    cout << "Equal: " << (a == b) << "\n";
    cout << "Address: " << a << " " << b << "\n";

    return 0;
}

If you want to call the member functions directly on the class, you can make the member functions static as well with a bit of indirection.

class Singleton {
public:
    static Singleton& get() {
        static Singleton instance;
        return instance;
    }

    // Static wrapper that forwards to the member function of the single instance
    static void foo() {
        get().IFoo();
    }

private:
    void IFoo() {
        // Implementation of foo
    }

    Singleton() {}
    Singleton(const Singleton&) = delete;
    Singleton& operator=(const Singleton&) = delete;
};

int main() {
    Singleton::foo();
    return 0;
}

Vector

The std::vector is a dynamic array that can grow and shrink in size. It is a sequence container that encapsulates dynamic size arrays. The storage of the vector is handled automatically, being expanded and contracted as needed. Vectors usually occupy more space than static arrays, because more memory is allocated to handle future growth. This way a vector does not need to reallocate each time an element is inserted, but only when the additional memory is exhausted.

For our implementation, we set the capacity of the vector to be powers of 2, so that the reallocation is done less frequently (though other exponential growth factors can be used as well). We will make use of templates to make the vector generic, and try to provide the same interface as the std::vector. We will also provide the operator[] to access the elements of the vector, and implement iterators to traverse the vector.

Note that in the following implementation, we have used std::move to move the elements of the vector so that we can avoid copying them. To also provide support for emplace_back, we have used a parameter pack (Args &&...args) together with std::forward to forward the arguments to the constructor of the object.

#include <algorithm>
#include <utility>
using namespace std;

template <typename T>
class Vector
{
public:
    typedef T *iterator;

    // Constructors
    Vector() : capacity(1), size(0) {
        arr = new T[capacity];
    }

    Vector(int cap) : capacity(cap), size(0) {
        arr = new T[capacity];
    }

    // Destructor
    ~Vector() {
        delete[] arr;
    }

    template <typename... Args>
    void emplace_back(Args &&...args)
    {
        push_back(T(std::forward<Args>(args)...));
    }

    // Overload for lvalues (copies the element in)
    void push_back(const T &val)
    {
        if (size == capacity)
            _resize();
        arr[size++] = val;
    }

    void push_back(T &&val)
    {
        if (size == capacity)
            _resize();
        arr[size++] = std::move(val);
    }

    void pop_back()
    {
        if (size > 0)
            size--;
        if (size < capacity / 2)
            _shrink();
    }

    int len()
    {
        return size;
    }

    T front()
    {
        return arr[0];
    }

    T back()
    {
        return arr[size - 1];
    }

    // Iterators
    iterator begin()
    {
        return arr;
    }

    iterator end()
    {
        return arr + size;
    }

    // Overloading the operators
    T &operator[](int idx)
    {
        return arr[idx];
    }

private:
    int capacity;
    int size;
    T *arr;

    void _resize()
    {
        capacity *= 2;
        T *newArr = new T[capacity];
        for (int i = 0; i < size; i++)
            newArr[i] = std::move(arr[i]);
        delete[] arr;
        arr = newArr;
    }

    void _shrink()
    {
        capacity = max(1, capacity / 2);
        T *newArr = new T[capacity];
        for (int i = 0; i < size; i++)
            newArr[i] = std::move(arr[i]);
        delete[] arr;
        arr = newArr;
    }
};
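
A short usage sketch of the Vector class above (assuming the class definition is in scope):

#include <iostream>

int main()
{
    Vector<int> v;
    for (int i = 1; i <= 5; i++)
        v.push_back(i * 10);

    v.emplace_back(60);

    // Range-based for works because begin()/end() are provided
    for (int x : v)
        std::cout << x << " "; // 10 20 30 40 50 60
    std::cout << "\n";

    v.pop_back();
    std::cout << "len = " << v.len() << ", back = " << v.back() << "\n"; // len = 5, back = 50
    return 0;
}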

Thread Pool Manager

Implement a thread pool class that manages a pool of worker threads to execute submitted tasks asynchronously. The thread pool should maintain a fixed number of threads and a task queue to handle incoming tasks.

The implementation should include:

  1. A constructor that takes the number of worker threads and starts them.
  2. A submit method that accepts any callable (and its arguments), queues it as a task, and returns a std::future for its result.
  3. A destructor that signals shutdown, wakes all workers, and joins them.

Other key requirements that the implementation should follow:

  1. The task queue must be protected by a mutex, and idle workers should block on a condition variable instead of busy-waiting.
  2. Pending tasks should still be executed during shutdown, but submitting to a stopped pool should fail.
  3. The pool itself should not be copyable.

#include <bits/stdc++.h>
using namespace std;

using Task = function<void()>;

class ThreadPool
{
private:
    vector<thread> workers;
    queue<Task> tasks;

    mutex mtx;
    condition_variable cv;
    bool stop;

public:
    ThreadPool(size_t numThreads)
        : stop(false)
    {
        for (size_t i = 0; i < numThreads; ++i)
        {
            workers.emplace_back([this]() {
                do {
                    Task task;

                    {
                        // Block the thread until a task is available or stop is true
                        unique_lock lk(mtx);
                        cv.wait(lk, [this]() {
                            return stop || !tasks.empty();
                        });

                        if (stop && tasks.empty())
                            return;

                        // Get a task from the queue
                        task = move(tasks.front());
                        tasks.pop();
                    }

                    try {
                        task(); // run the task
                    } catch (const exception &e) {
                        // Exceptions are handled in the future
                        cerr << "Exception in thread: " << e.what() << endl;
                    }
                } while (true);
            });
        }
    }

    ~ThreadPool()
    {
        {
            unique_lock lk(mtx);
            stop = true;
        }

        cv.notify_all();

        // Join all the threads to the main thread
        // so that the main thread waits for all threads to finish
        for (thread &worker : workers)
            if (worker.joinable())
                worker.join();
    }

    // Submit a task to the thread pool and get a future
    template <typename F, typename... Args>
    auto submit(F &&f, Args &&...args)
        -> future<invoke_result_t<F, Args...>>
    {
        using return_type = invoke_result_t<F, Args...>;

        auto task = make_shared<packaged_task<return_type()>>(
            bind(forward<F>(f), forward<Args>(args)...));

        future<return_type> res = task->get_future();

        {
            unique_lock lk(mtx);
            if (stop)
                throw runtime_error("submit on stopped ThreadPool");
            tasks.emplace([task]()
                          { (*task)(); });
        }

        cv.notify_one(); // Notify one thread that a task is available
        return res;
    }

    // Delete the copy constructor and assignment operator
    ThreadPool(const ThreadPool &) = delete;
    ThreadPool &operator=(const ThreadPool &) = delete;
};
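
A small usage sketch, appended after the class definition above: tasks are submitted from main and their results are collected through the returned futures.

int main()
{
    ThreadPool pool(4);

    // Submit a few tasks that return values
    vector<future<int>> results;
    for (int i = 1; i <= 8; i++)
        results.push_back(pool.submit([i]() { return i * i; }));

    for (auto &f : results)
        cout << f.get() << " "; // 1 4 9 16 25 36 49 64
    cout << "\n";

    return 0; // The ThreadPool destructor drains remaining tasks and joins the workers
}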

Versioned Queue

Requirements

You have to design a data structure that supports the basic queue operations of enqueue and dequeue. In addition, whenever one of these operations is called, the version associated with the data structure is automatically incremented by 1. The history operation of the queue accepts an integer version and prints all the elements that were present in the queue at that version.

The data structure should be designed so that the total memory used by the data structure is O(n), where n is the total number of enqueue and dequeue operations that have been performed on the queue. The history operation should run in O(n) time, where n is the number of elements in the queue at that version.

Solution

We will make use of a singly linked list to store the elements of the queue. We will also maintain a vector called versionHistory, which stores the front and back nodes of the queue (in the underlying linked list) at each version. New nodes are always appended after the current back node and existing nodes are never modified or deleted, so the snapshots of older versions remain valid. The enqueue and dequeue operations manage the current queue pointers appropriately.

#include <iostream>
#include <stdexcept>
#include <utility>
#include <vector>
using namespace std;

template <typename T>
class Queue;

template <typename T>
class Node
{
    T data;
    Node *next;
    friend class Queue<T>;

public:
    Node(T val, Node *next = nullptr) : data(val), next(next) {}
};

template <typename T>
class Queue
{
    Node<T> *head; // Front node of the queue (oldest element)
    Node<T> *tail; // Back node of the queue (newest element)
    int curVersion;
    // (front, back) nodes of the queue at each version; back is inclusive
    vector<pair<Node<T> *, Node<T> *>> versionHistory;

public:
    Queue() : head(nullptr), tail(nullptr), curVersion(0) {}

    void enqueue(T val)
    {
        Node<T> *newNode = new Node<T>(val);
        curVersion++;

        if (!head)
        {
            // Initialize the queue with the current node
            head = tail = newNode;
        }
        else
        {
            // Append the element at the back of the queue; older snapshots
            // stop at their stored back node, so they are unaffected
            tail->next = newNode;
            tail = newNode;
        }

        versionHistory.push_back({head, tail});
    }

    T dequeue()
    {
        if (head == nullptr)
            throw runtime_error("Queue is empty");

        T data = head->data;

        curVersion++;
        head = head->next;
        if (head == nullptr)
            tail = nullptr;
        versionHistory.push_back({head, tail});

        return data;
    }

    int getCurrentVersion() { return curVersion; }

    void printHistory(int version)
    {
        if (version <= 0 || version > curVersion)
            throw runtime_error("Invalid version number");

        cout << "Version " << version << ": ";
        auto [frontNode, backNode] = versionHistory[version - 1];
        while (frontNode != nullptr)
        {
            cout << frontNode->data << " ";
            if (frontNode == backNode)
                break;
            frontNode = frontNode->next;
        }
        cout << "\n";
    }
};

int main()
{
    Queue<int> q;
    q.enqueue(1);
    q.enqueue(2);
    q.enqueue(3);
    q.enqueue(4);
    q.printHistory(3); // 1 2 3
    q.printHistory(4); // 1 2 3 4

    int x = q.dequeue();
    cout << "Dequeued: " << x << endl; // 1
    x = q.dequeue();
    cout << "Dequeued: " << x << endl; // 2

    q.printHistory(5); // 2 3 4
    q.printHistory(6); // 3 4

    q.enqueue(5);
    q.enqueue(6);
    q.printHistory(8); // 3 4 5 6
}

Mutex

A mutex (short for mutual exclusion) is a synchronization primitive that provides exclusive access to a shared resource. When one thread holds the mutex, other threads that try to acquire it will have to wait. This simple implementation uses busy-waiting (spinning) and the compare-and-swap (CAS) atomic primitive to ensure that only one thread can acquire the mutex at a time.

The two core operations are:

  1. lock() — spin until the calling thread atomically changes the state from unlocked to locked.
  2. unlock() — atomically change the state back to unlocked (and, here, detect an unlock without a matching lock).

This implementation uses std::atomic and avoids the need for condition variables or kernel-level blocking, but can be inefficient under contention due to busy-waiting.

#include <atomic>
#include <iostream>
#include <thread>
#include <vector>
using namespace std;

class Mutex
{
    atomic<int> locked;

public:
    Mutex() { locked.store(0); }

    void lock()
    {
        int expected = 0;
        // Spin until we atomically change locked from 0 to 1
        while (!locked.compare_exchange_strong(expected, 1, memory_order_acquire))
            expected = 0; // A failed CAS overwrites expected, so reset it
    }

    void unlock()
    {
        int expected = 1;
        if (!locked.compare_exchange_strong(expected, 0, memory_order_release))
            throw "unlock called without locking";
    }
};

int main()
{
    int cnt = 0;
    Mutex mtx;

    auto worker = [&cnt, &mtx](int id)
    {
        mtx.lock();
        cnt++;
        cout << "[" << id << "] Incremented cnt to " << cnt << "\n";
        mtx.unlock();
    };

    vector<thread> threads;
    for (int i = 1; i <= 5; i++)
        threads.emplace_back(worker, i);

    for (auto &t : threads)
        t.join();
}

compare_exchange_strong(expected, desired) atomically compares locked with expected. If they are equal, it sets locked to desired and returns true. Otherwise, it updates expected with the current value of locked and returns false. We also use std::memory_order_acquire on lock and std::memory_order_release on unlock to ensure proper ordering of memory operations across threads.
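
As a standalone illustration of the compare_exchange_strong semantics described above (independent of the Mutex class):

#include <atomic>
#include <iostream>

int main()
{
    std::atomic<int> value{5};
    int expected = 5;

    // Succeeds: value == expected, so value becomes 7 and expected is untouched
    bool ok = value.compare_exchange_strong(expected, 7);
    std::cout << ok << " " << value << " " << expected << "\n"; // 1 7 5

    // Fails: value (7) != expected (5); expected is updated to the current value
    ok = value.compare_exchange_strong(expected, 9);
    std::cout << ok << " " << value << " " << expected << "\n"; // 0 7 7

    return 0;
}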


Semaphore

A semaphore is a synchronization primitive used to control access to a shared resource in a concurrent system, so that the resource is not used by more than a certain number of processes (or threads) at a time. The semaphore has two operations, acquire and release, also known as the P and V operations respectively. acquire waits until the resource is available and then decrements the available count; release increments the available count and notifies one of the waiting threads.
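
Since C++20 the standard library ships std::counting_semaphore (and std::binary_semaphore); here is a minimal sketch of its interface, which the hand-written versions below mimic:

#include <iostream>
#include <semaphore>
#include <thread>

int main()
{
    std::counting_semaphore<2> sem(2); // At most 2 threads inside at once

    auto worker = [&sem](int id)
    {
        sem.acquire(); // P operation: wait for a free slot
        std::cout << "Thread " << id << " acquired\n";
        sem.release(); // V operation: give the slot back
    };

    std::thread t1(worker, 1), t2(worker, 2), t3(worker, 3);
    t1.join(); t2.join(); t3.join();
    return 0;
}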

With Condition Variables

This implementation uses a std::mutex and a std::condition_variable to implement the semaphore. A condition variable allows a thread to be signaled when something of interest to that thread occurs, which is useful to avoid busy waiting. Note the use of std::unique_lock to lock the mutex, instead of the usual std::lock_guard: std::condition_variable::wait needs to release and re-acquire the lock while waiting, which requires a std::unique_lock.

#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>
using namespace std;

class Semaphore
{
    // cnt denotes the number of available resources currently.
    // available denotes the maximum number of resources that can be acquired.
    // If cnt is 0, then the semaphore is locked.
    int cnt, available;
    mutex mtx;
    condition_variable cv;

public:
    Semaphore(int available = 1) : cnt(available), available(available) {}

    void acquire()
    {
        unique_lock lk(mtx);
        cv.wait(lk, [this]()
                { return cnt > 0; });
        cnt--;
    }

    void release()
    {
        unique_lock lk(mtx);
        cnt++;
        if (cnt > available)
            throw "release called without acquiring lock";
        cv.notify_one();
    }
};

int main()
{
    Semaphore sem(3);

    // Start multiple threads that sleep for a while after acquiring the semaphore
    auto worker = [&sem](int id)
    {
        cout << "[" << id << "] Trying to ACQ\n";
        sem.acquire();
        cout << "[" << id << "] ACQ success. Sleeping\n";
        this_thread::sleep_for(chrono::seconds(id));
        cout << "[" << id << "] Releasing\n";
        sem.release();
    };

    vector<thread> threads;
    for (int i = 1; i <= 5; i++)
        threads.emplace_back(worker, i);

    for (auto &t : threads)
        t.join();
    cout << "All threads completed.\n";
}

Without Condition Variables

Since we are not using condition variables, we need to busy wait until the resource is available. This is not a good practice as it wastes CPU cycles, but it is useful to understand the concept of semaphores. We now make use of std::atomic to make the operations atomic, and a std::mutex to protect the critical section.

#include <atomic>
#include <chrono>
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>
using namespace std;

class SemaphorePrimitive
{
    // cnt denotes the number of available resources currently.
    // available denotes the maximum number of resources that can be acquired.
    // If cnt is 0, then the semaphore is locked.
    int available;
    atomic<int> cnt;
    mutex mtx;

public:
    SemaphorePrimitive(int maxCnt) : available(maxCnt)
    {
        cnt.store(maxCnt);
    }

    void acquire()
    {
        while (1)
        {
            // Busy-wait until some resource looks available
            while (cnt == 0) { }

            lock_guard lk(mtx);
            if (!cnt)
                continue; // Another thread grabbed it first; spin again
            cnt--;
            return;
        }
    }

    void release()
    {
        lock_guard lk(mtx);
        cnt++;
        if (cnt > available)
            throw "release called without acquiring lock";
    }
};

int main()
{
    SemaphorePrimitive sem(3);

    // Start multiple threads that sleep for a while after acquiring the semaphore
    auto worker = [&sem](int id)
    {
        cout << "[" << id << "] Trying to ACQ\n";
        sem.acquire();
        cout << "[" << id << "] ACQ success. Sleeping\n";
        this_thread::sleep_for(chrono::seconds(id));
        cout << "[" << id << "] Releasing\n";
        sem.release();
    };

    vector<thread> threads;
    for (int i = 1; i <= 5; i++)
        threads.emplace_back(worker, i);

    for (auto &t : threads)
        t.join();
    cout << "All threads completed.\n";
}

Producer Consumer Problem

The infamous Producer-Consumer problem, also called the Bounded-Buffer problem, is one of the famous real-world scenarios of synchronization. The problem describes two processes, the producer and the consumer, who share a common, fixed-size buffer used as a queue. The producer’s job is to generate data, put it into the buffer, and start again. At the same time, the consumer is consuming the data (i.e., removing it from the buffer), one piece at a time. The problem is to make sure that the producer won’t try to add data into the buffer if it’s full and that the consumer won’t try to remove data from an empty buffer.

Without Condition Variables

We will make use of two semaphores, empty and full, to keep track of the number of empty and full slots in the buffer. The producer waits on the empty semaphore before adding an item to the buffer, and the consumer waits on the full semaphore before removing an item from the buffer.

We also use a mutex to protect access to the shared buffer (a queue): the producer pushes data into the queue, and the consumer pops data from it.

class ProducerConsumer {
    mutex mtx;
    queue<int> buffer;

    counting_semaphore<> empty; // The count of empty slots in the buffer
    counting_semaphore<> full;  // The count of full slots in the buffer

public:
    ProducerConsumer(int capacity)
        : empty(capacity), full(0) {}

    void producer(int item) {
        empty.acquire(); // Wait for an empty slot

        mtx.lock(); // Lock the mutex to protect the buffer
        buffer.push(item);
        cout << "Produced: " << item << " | Buffer Size: " << buffer.size() << endl;
        mtx.unlock();

        full.release(); // Notify that there is a new item in the buffer
    }

    void consumer() {
        full.acquire(); // Wait for a full slot

        mtx.lock(); // Lock the mutex to protect the buffer
        int item = buffer.front();
        buffer.pop();
        cout << "Consumed: " << item << " | Buffer Size: " << buffer.size() << endl;
        mtx.unlock();

        empty.release(); // Notify that there is an empty slot in the buffer
    }
};
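
A possible driver for the class above (assuming <queue>, <semaphore>, <thread>, <mutex>, and <iostream> are included and using namespace std is in effect):

int main() {
    ProducerConsumer pc(5); // Buffer capacity of 5

    thread producer([&pc]() {
        for (int i = 0; i < 20; i++)
            pc.producer(i);
    });

    thread consumer([&pc]() {
        for (int i = 0; i < 20; i++)
            pc.consumer();
    });

    producer.join();
    consumer.join();
    return 0;
}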

With Condition Variables

We make use of the following classes to implement the producer-consumer problem:

  1. std::mutex to protect the shared buffer.
  2. std::condition_variable to let the producer wait while the buffer is full, and the consumer wait while it is empty.
  3. std::queue<int> as the bounded buffer itself (bounded by the capacity member).

In the main function, we create two threads, one for the producer and one for the consumer. The producer produces data and pushes it into the buffer, and the consumer consumes the data by popping it from the buffer. The producer waits if the buffer is full, and the consumer waits if the buffer is empty.

#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

class ProducerConsumer {
    std::mutex mtx;
    std::condition_variable cv;
    std::queue<int> buffer;
    const unsigned int capacity = 10;

public:
    void producer(int item) {
        std::unique_lock<std::mutex> lk(mtx);
        cv.wait(lk, [this](){
            return buffer.size() < capacity;
        });

        buffer.push(item);
        std::cout << "Produced: " << item << " | Buffer Size: " << buffer.size() << std::endl;

        cv.notify_all();
    }

    void consumer() {
        std::unique_lock<std::mutex> lk(mtx);
        cv.wait(lk, [this](){
            return !buffer.empty();
        });

        int item = buffer.front();
        buffer.pop();
        std::cout << "Consumed: " << item << " | Buffer Size: " << buffer.size() << std::endl;

        cv.notify_all();
    }
};

int main() {
    ProducerConsumer pc;

    std::thread producer([&pc](){
        for (int i = 0; i < 100; i++)
            pc.producer(i);
    });

    std::thread consumer([&pc](){
        for (int i = 0; i < 100; i++)
            pc.consumer();
    });

    producer.join();
    consumer.join();

    return 0;
}

Reader & Writer Problem

The readers-writers problem is a classic synchronization problem in computer science that deals with coordinating access to a shared resource among multiple concurrent processes or threads. You have a shared data structure (like a database, file, or memory location) that multiple threads want to access simultaneously. There are two types of processes:

  1. Readers, which only read the shared data and can therefore safely run concurrently with other readers.
  2. Writers, which modify the shared data and therefore need exclusive access.

For the correctness of the implementation, we need to ensure the following:

  1. Any number of readers may hold the lock simultaneously, as long as no writer holds it.
  2. A writer gets exclusive access: no other writer and no reader may hold the lock at the same time.

Without Condition Variables

This is a reader’s preference version, where the writer may suffer from starvation if readers continuously acquire the lock.

We use a binary_semaphore to track whether the resource is currently held by a writer (or by the group of readers). The write_semaphore is initialized to 1, which means that a writer can acquire the lock if no readers are present. The mutex reader_mutex protects concurrent access to the reader_count variable, so that concurrent readers can increment or decrement the count safely.

class ReadWriteLock
{
private:
    int reader_count;
    mutex reader_mutex;
    binary_semaphore write_semaphore;

public:
    ReadWriteLock() : reader_count(0), write_semaphore(1) {}

    void reader_lock()
    {
        lock_guard lock(reader_mutex);

        reader_count++;
        // The first reader blocks all writers
        if (reader_count == 1)
            write_semaphore.acquire();
    }

    void reader_unlock()
    {
        lock_guard lock(reader_mutex);

        reader_count--;
        // The last reader lets writers in again
        if (reader_count == 0)
            write_semaphore.release();
    }

    void writer_lock()
    {
        write_semaphore.acquire();
    }

    void writer_unlock()
    {
        write_semaphore.release();
    }
};
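
A short driver for the lock above (assuming a C++20 compiler, the <semaphore>, <thread>, <mutex>, <vector>, and <iostream> headers, and using namespace std):

int main()
{
    ReadWriteLock rw;
    int shared_value = 0;

    auto reader = [&](int id)
    {
        rw.reader_lock();
        cout << "Reader " << id << " sees " << shared_value << "\n";
        rw.reader_unlock();
    };

    auto writer = [&](int id)
    {
        rw.writer_lock();
        shared_value += 10;
        cout << "Writer " << id << " wrote " << shared_value << "\n";
        rw.writer_unlock();
    };

    vector<thread> threads;
    threads.emplace_back(writer, 1);
    for (int i = 1; i <= 3; i++)
        threads.emplace_back(reader, i);
    threads.emplace_back(writer, 2);

    for (auto &t : threads)
        t.join();
    return 0;
}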

With Condition Variables

Using condition variables, we can additionally give preference to either reader or writer threads if that is desired.

class ReadWriteLockCV
{
private:
    int reader_count = 0;
    int waiting_writers = 0;
    bool writer_active = false;

    condition_variable reader_cv;
    condition_variable writer_cv;
    mutex mtx;

public:
    void reader_lock()
    {
        unique_lock lock(mtx);
        reader_cv.wait(lock, [this]
                       { return !writer_active && waiting_writers == 0; });

        reader_count++;
    }

    void reader_unlock()
    {
        unique_lock lock(mtx);
        reader_count--;
        if (reader_count == 0)
            writer_cv.notify_one();
    }

    void writer_lock()
    {
        unique_lock lock(mtx);
        waiting_writers++;
        writer_cv.wait(lock, [this]
                       { return reader_count == 0 && !writer_active; });

        waiting_writers--;
        writer_active = true;
    }

    void writer_unlock()
    {
        unique_lock lock(mtx);
        writer_active = false;

        // Preference: writers first or readers first
        if (waiting_writers > 0)
            writer_cv.notify_one(); // Prefer writers
        else
            reader_cv.notify_all(); // Prefer readers
    }
};

Dining Philosophers Problem

The Dining Philosophers Problem is a classic concurrency problem that illustrates the challenges of resource allocation among multiple competing threads (philosophers).

Five philosophers sit around a circular table with a single fork placed between each pair of neighbours; to eat, a philosopher needs both the fork on their left and the fork on their right. Each philosopher alternates between thinking and eating. The challenge is to design a strategy that avoids deadlock and starvation while allowing all philosophers to eat.

Naive Solution (Prone to Deadlock)

class DiningPhilosophersNaive
{
private:
    mutex forks[5];

public:
    void wants_to_eat(
        int philosopher,
        function<void()> pick_left_fork,
        function<void()> pick_right_fork,
        function<void()> eat,
        function<void()> put_left_fork,
        function<void()> put_right_fork
    ) {
        int left = philosopher;
        int right = (philosopher + 1) % 5;

        // Naively lock left then right fork
        lock_guard<mutex> left_lock(forks[left]);
        lock_guard<mutex> right_lock(forks[right]);

        pick_left_fork();
        pick_right_fork();
        eat();
        put_right_fork();
        put_left_fork();
    }
};

This can deadlock if all philosophers pick up their left fork at the same time.

Use Ordering

Avoid circular wait by ensuring that not all philosophers try to pick up forks in the same order.

class DiningPhilosophersOrdered
{
private:
    mutex forks[5];

public:
    void wants_to_eat(
        int philosopher,
        function<void()> pick_left_fork,
        function<void()> pick_right_fork,
        function<void()> eat,
        function<void()> put_left_fork,
        function<void()> put_right_fork
    ) {
        int left = philosopher;
        int right = (philosopher + 1) % 5;

        if (philosopher % 2 == 0)
        {
            lock_guard<mutex> first_lock(forks[left]);
            lock_guard<mutex> second_lock(forks[right]);

            pick_left_fork();
            pick_right_fork();
            eat();
            put_right_fork();
            put_left_fork();
        }
        else
        {
            lock_guard<mutex> first_lock(forks[right]);
            lock_guard<mutex> second_lock(forks[left]);

            pick_right_fork();
            pick_left_fork();
            eat();
            put_left_fork();
            put_right_fork();
        }
    }
};

Use a Semaphore to Limit Access

Use a semaphore to allow at most 4 philosophers to try to pick up forks at once. With at most 4 contenders for 5 forks, at least one philosopher can always obtain both forks, so circular waiting cannot occur.

class DiningPhilosophersSemaphore
{
private:
    mutex forks[5];
    // Only 4 philosophers can try to eat at once
    counting_semaphore<4> entry{4};

public:
    void wants_to_eat(
        int philosopher,
        function<void()> pick_left_fork,
        function<void()> pick_right_fork,
        function<void()> eat,
        function<void()> put_left_fork,
        function<void()> put_right_fork
    ) {
        entry.acquire();

        int left = philosopher;
        int right = (philosopher + 1) % 5;

        {
            lock_guard<mutex> left_lock(forks[left]);
            lock_guard<mutex> right_lock(forks[right]);

            pick_left_fork();
            pick_right_fork();
            eat();
            put_right_fork();
            put_left_fork();
        }

        entry.release();
    }
};

Condition Variable to Avoid Starvation

class DiningPhilosophersCV
{
private:
    mutex mtx;
    condition_variable cv;
    bool forks[5] = {false, false, false, false, false};

public:
    void wants_to_eat(
        int philosopher,
        function<void()> pick_left_fork,
        function<void()> pick_right_fork,
        function<void()> eat,
        function<void()> put_left_fork,
        function<void()> put_right_fork
    ) {
        int left = philosopher;
        int right = (philosopher + 1) % 5;

        // Acquire both forks
        {
            unique_lock<mutex> lock(mtx);
            cv.wait(lock, [&]
                    { return !forks[left] && !forks[right]; });

            forks[left] = true;
            forks[right] = true;
        }

        // Critical section (outside lock)
        pick_left_fork();
        pick_right_fork();
        eat();
        put_right_fork();
        put_left_fork();

        // Release both forks
        {
            lock_guard<mutex> lock(mtx);
            forks[left] = false;
            forks[right] = false;
            cv.notify_all(); // Notify all waiting philosophers
        }
    }
};
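
A possible driver for any of the variants above (here the CV version), with trivial stand-in callbacks for picking up and putting down forks; it assumes <thread>, <mutex>, <condition_variable>, <functional>, <vector>, <iostream> and using namespace std:

int main()
{
    DiningPhilosophersCV table;

    auto noop = []() {}; // Stand-in for picking up / putting down a fork
    auto philosopher = [&](int id)
    {
        for (int round = 0; round < 3; round++)
            table.wants_to_eat(id, noop, noop,
                               [id]() { cout << "Philosopher " << id << " eats\n"; },
                               noop, noop);
    };

    vector<thread> threads;
    for (int i = 0; i < 5; i++)
        threads.emplace_back(philosopher, i);

    for (auto &t : threads)
        t.join();
    return 0;
}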

Barbershop Problem

The Barbershop Problem is a classic synchronization problem that involves a barber, customers, and a waiting room. The operating rules of the barbershop are as follows:

  1. The shop has a single barber and a waiting room with a fixed number of chairs.
  2. If there are no customers, the barber waits until one arrives.
  3. An arriving customer takes a chair in the waiting room if one is free; otherwise they leave immediately.
  4. The barber serves waiting customers one at a time, and a customer leaves once their haircut is done.

We make use of a counting_semaphore to represent the number of available chairs in the waiting room, and two binary_semaphores to signal between the barber and the customers. The barber waits for a customer to be ready, and the customer, after signalling that they are ready for a haircut, waits for the barber to be ready.

class Barbershop {
    counting_semaphore<> waiting_room;  // Represents number of available chairs
    binary_semaphore customer_ready{0}; // Customer signals barber
    binary_semaphore barber_ready{0};   // Barber signals customer

public:
    Barbershop(int chairs)
        : waiting_room(chairs) {}

    void customer(int id) {
        // Try to get a chair in the waiting room
        if (!waiting_room.try_acquire()) {
            cout << "Customer " << id << " left due to no available chairs." << endl;
            return;
        }

        customer_ready.release(); // Notify the barber
        barber_ready.acquire();   // Wait until the barber is ready
        waiting_room.release();   // Free the chair

        cout << "Customer " << id << " is getting a haircut." << endl;
    }

    void barber_work() {
        while (true) {
            customer_ready.acquire(); // Wait for customer

            this_thread::sleep_for(chrono::seconds(1)); // Haircut time
            barber_ready.release(); // Signal to customer that haircut is done

            cout << "Barber finished a haircut." << endl;
        }
    }
};
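
A short driver for the class above (assuming <semaphore>, <thread>, <chrono>, <vector>, <iostream> and using namespace std); the barber thread is detached since it loops forever:

int main() {
    Barbershop shop(3); // 3 waiting chairs

    thread barber(&Barbershop::barber_work, &shop);
    barber.detach(); // Runs forever in the background

    vector<thread> customers;
    for (int i = 1; i <= 6; i++) {
        customers.emplace_back(&Barbershop::customer, &shop, i);
        this_thread::sleep_for(chrono::milliseconds(300));
    }

    for (auto &t : customers)
        t.join();
    return 0;
}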

The Smoking Cigarettes Problem

In this classic problem, there are three smokers, each with an infinite supply of one of the ingredients:

  1. Tobacco
  2. Paper
  3. Matches

An agent continuously places two random ingredients on the table. The smoker who has the third ingredient makes and smokes a cigarette. The agent waits until at least one of the smokers has smoked a cigarette before placing the next two items.

If we try to solve this with no synchronization, we run into a number of race conditions and deadlocks:

  1. Multiple smokers might try to access the table at the same time.
  2. A smoker might grab one of the ingredients while another smoker grabs the other, leaving no smoker able to complete a cigarette. This leads to a deadlock, as the agent waits for a smoker to finish smoking before placing the next two items.

Problem History

Originally proposed by Suhas Patil in 1971, the problem has been used to illustrate the challenges of synchronization in concurrent programming. It was first posed with two constraints:

  1. The source code of the agent cannot be changed (analogous to the source code of an OS allocating resources).
  2. The smokers (processes) cannot use mutexes, semaphores, or condition variables to synchronize their actions.

It has been shown that the given problem is impossible to solve under these constraints. Thus it is proposed that the second constraint be relaxed, as it is an artificial constraint that does not reflect real-world scenarios. Processes and software do have access to synchronization primitives, which allow the problem to be solved.

Solution

The solution involves using semaphores for synchronization, and makes use of “pusher” threads for each ingredient. The agent thread places two ingredients on the table, and the corresponding “pusher” threads work out which smoker can smoke and signal them. Each smoker waits to be signalled that the other two ingredients are on the table before smoking.

class SmokingCigarettes {
    // Table state and mutex to prevent access conflicts
    mutex mtx;
    bool tobacco_present = false, paper_present = false, matches_present = false;

    // Semaphores to signal when ingredients are ready
    // Used by AGENT
    binary_semaphore tobacco_ready, paper_ready, matches_ready;
    binary_semaphore agent_ready;

    // SMOKER semaphores to signal when a smoker is allowed to smoke
    binary_semaphore tobacco_smoker, paper_smoker, matches_smoker;

public:
    SmokingCigarettes():
        tobacco_ready(0), paper_ready(0), matches_ready(0),
        agent_ready(1), tobacco_smoker(0), paper_smoker(0), matches_smoker(0) {}

    void agent() {
        while (true) {
            agent_ready.acquire(); // Wait for the agent to be ready

            // Randomly place two ingredients on the table
            int notRelease = rand() % 3;
            if (notRelease == 0) {
                tobacco_ready.release();
                paper_ready.release();
                cout << "Agent placed Tobacco and Paper on the table." << endl;
            } else if (notRelease == 1) {
                paper_ready.release();
                matches_ready.release();
                cout << "Agent placed Paper and Matches on the table." << endl;
            } else {
                matches_ready.release();
                tobacco_ready.release();
                cout << "Agent placed Matches and Tobacco on the table." << endl;
            }
        }
    }

    // There would be three "pusher" functions, one for each ingredient
    void pusher_tobacco() {
        while (true) {
            tobacco_ready.acquire();     // Wait for tobacco to be ready
            lock_guard<mutex> lock(mtx); // Acquire the mutex to access the table

            if (paper_present) {
                // Paper is present, which means the "pusher" for paper already ran
                paper_present = false;
                // Paper & Tobacco are both on the table, so the Matches smoker can smoke
                matches_smoker.release();
            } else if (matches_present) {
                matches_present = false;
                paper_smoker.release();
            } else {
                tobacco_present = true; // Place tobacco on the table
            }
        }
    }

    // There would be three such functions, one for each smoker
    // When a smoker is allowed to smoke by the pushers, it smokes the cigarette
    // and notifies the agent to place more ingredients
    void tobacco_smoke() {
        while (true) {
            tobacco_smoker.acquire();

            cout << "Tobacco smoker is smoking a cigarette." << endl;
            this_thread::sleep_for(chrono::seconds(1)); // Simulate smoking time

            agent_ready.release(); // Notify the agent to place more ingredients
        }
    }
};

This implementation skips some details for brevity, such as the actual implementations of the pusher_paper and pusher_matches functions, which would be similar to the pusher_tobacco function. If you want a runnable C++ program, the complete code is given below.

Make sure you use C++20 or later to compile this code, as it uses std::binary_semaphore. The GCC compilation command for the same would be:

Terminal window
g++ -std=c++20 -pthread smoking.cpp -o smoking

#include <iostream>
#include <thread>
#include <mutex>
#include <semaphore>
#include <chrono>
#include <random>
using namespace std;

class SmokingCigarettes
{
    mutex mtx;
    bool tobacco_present = false, paper_present = false, matches_present = false;

    binary_semaphore tobacco_ready{0}, paper_ready{0}, matches_ready{0};
    binary_semaphore agent_ready{1};
    binary_semaphore tobacco_smoker{0}, paper_smoker{0}, matches_smoker{0};

public:
    void agent()
    {
        random_device rd;
        mt19937 gen(rd());
        uniform_int_distribution<> dist(0, 2);

        while (true)
        {
            agent_ready.acquire();

            int notRelease = dist(gen);
            if (notRelease == 0)
            {
                tobacco_ready.release();
                paper_ready.release();
                cout << "Agent placed Tobacco and Paper on the table." << endl;
            }
            else if (notRelease == 1)
            {
                paper_ready.release();
                matches_ready.release();
                cout << "Agent placed Paper and Matches on the table." << endl;
            }
            else
            {
                matches_ready.release();
                tobacco_ready.release();
                cout << "Agent placed Matches and Tobacco on the table." << endl;
            }
        }
    }

    void pusher_tobacco()
    {
        while (true)
        {
            tobacco_ready.acquire();
            lock_guard lock(mtx);

            if (paper_present)
            {
                paper_present = false;
                matches_smoker.release();
            }
            else if (matches_present)
            {
                matches_present = false;
                paper_smoker.release();
            }
            else
                tobacco_present = true;
        }
    }

    void pusher_paper()
    {
        while (true)
        {
            paper_ready.acquire();
            lock_guard lock(mtx);

            if (tobacco_present)
            {
                tobacco_present = false;
                matches_smoker.release();
            }
            else if (matches_present)
            {
                matches_present = false;
                tobacco_smoker.release();
            }
            else
                paper_present = true;
        }
    }

    void pusher_matches()
    {
        while (true)
        {
            matches_ready.acquire();
            lock_guard lock(mtx);

            if (tobacco_present)
            {
                tobacco_present = false;
                paper_smoker.release();
            }
            else if (paper_present)
            {
                paper_present = false;
                tobacco_smoker.release();
            }
            else
                matches_present = true;
        }
    }

    void tobacco_smoke()
    {
        while (true)
        {
            tobacco_smoker.acquire();
            cout << "Tobacco smoker is smoking a cigarette." << endl;
            this_thread::sleep_for(chrono::seconds(1));
            agent_ready.release();
        }
    }

    void paper_smoke()
    {
        while (true)
        {
            paper_smoker.acquire();
            cout << "Paper smoker is smoking a cigarette." << endl;
            this_thread::sleep_for(chrono::seconds(1));
            agent_ready.release();
        }
    }

    void matches_smoke()
    {
        while (true)
        {
            matches_smoker.acquire();
            cout << "Matches smoker is smoking a cigarette." << endl;
            this_thread::sleep_for(chrono::seconds(1));
            agent_ready.release();
        }
    }
};

int main()
{
    SmokingCigarettes system;

    thread agent_thread(&SmokingCigarettes::agent, &system);
    thread pusher1(&SmokingCigarettes::pusher_tobacco, &system);
    thread pusher2(&SmokingCigarettes::pusher_paper, &system);
    thread pusher3(&SmokingCigarettes::pusher_matches, &system);
    thread smoker1(&SmokingCigarettes::tobacco_smoke, &system);
    thread smoker2(&SmokingCigarettes::paper_smoke, &system);
    thread smoker3(&SmokingCigarettes::matches_smoke, &system);

    agent_thread.join();
    pusher1.join();
    pusher2.join();
    pusher3.join();
    smoker1.join();
    smoker2.join();
    smoker3.join();

    return 0;
}

Template Metaprogramming

Template metaprogramming is a powerful feature in C++ that allows you to perform computations at compile time using templates. It can be used to create generic algorithms, type traits, and even complex data structures that are resolved during compilation rather than at runtime.

Both const and constexpr are used to declare constants in C++, but they differ in when and how their values are determined:

  1. const only guarantees that the value cannot be modified after initialization; the value itself may still be computed at runtime.
  2. constexpr requires that the value (or function call) can be evaluated at compile time, which makes it usable in contexts such as template arguments, array sizes, and static_assert.

Using constexpr along with template metaprogramming allows you to create compile-time constants and perform computations that can be evaluated during compilation, leading to more efficient code.
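
A minimal illustration of the difference (the function and variable names here are only for demonstration):

#include <array>

constexpr int square(int x) { return x * x; }

int main()
{
    const int runtime_limit = square(4);     // const: value fixed after initialization
    constexpr int compile_limit = square(5); // constexpr: must be evaluable at compile time

    static_assert(compile_limit == 25, "evaluated at compile time");
    std::array<int, compile_limit> buffer{}; // Usable where a compile-time constant is required

    (void)runtime_limit;
    (void)buffer;
    return 0;
}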

GCD

The greatest common divisor (GCD) of two numbers can be computed at compile time using template metaprogramming. The Euclidean algorithm is a classic method for computing the GCD, and we can implement it using recursive templates.

// General template for GCD, implementing the Euclidean algorithm
template <int A, int B>
struct GCD
{
    static constexpr int value = GCD<B, A % B>::value;
};

// Partial template specialization for the case when B is 0
template <int A>
struct GCD<A, 0>
{
    static constexpr int value = A;
};

// Helper variable template to simplify usage
template <int A, int B>
constexpr int get_gcd = GCD<A, B>::value;

int main()
{
    static_assert(get_gcd<48, 18> == 6, "GCD of 48 and 18 should be 6");
    static_assert(get_gcd<56, 98> == 14, "GCD of 56 and 98 should be 14");
    static_assert(get_gcd<101, 10> == 1, "GCD of 101 and 10 should be 1");
}

Array Dimensions

We want to implement a template metafunction that calculates the number of dimensions in an array type at compile time. We also expect the same to work for non-array types and return 0 dimensions.

#include <cstddef>

// Base case for any type that is not an array
template <typename T>
struct ArrayDimensions
{
    static constexpr int value = 0;
};

// Recursive template specialization for array types
template <typename T, std::size_t N>
struct ArrayDimensions<T[N]>
{
    // Add 1 for the current dimension
    static constexpr int value = 1 + ArrayDimensions<T>::value;
};

// Handle arrays of unknown bound as well
template <typename T>
struct ArrayDimensions<T[]>
{
    static constexpr int value = 1 + ArrayDimensions<T>::value;
};

// Helper variable template to simplify usage
template <typename T>
constexpr int get_array_dimensions = ArrayDimensions<T>::value;

int main()
{
    static_assert(get_array_dimensions<int> == 0, "int -> 0");
    static_assert(get_array_dimensions<int[5]> == 1, "int[5] -> 1");
    static_assert(get_array_dimensions<int[3][4]> == 2, "int[3][4] -> 2");
    static_assert(get_array_dimensions<int[2][3][4]> == 3, "int[2][3][4] -> 3");
    static_assert(get_array_dimensions<int[]> == 1, "int[] -> 1");
    static_assert(get_array_dimensions<int[][6][7][8]> == 4, "int[][6][7][8] -> 4");
}

Remove Adjacent Duplicates

We want to implement a template metafunction that removes adjacent duplicates from a list of integers at compile time. For example, given the list 1, 2, 2, 3, 3, 3, 4, the result should be 1, 2, 3, 4. We would make use of the Vector type defined below to represent the list of integers.

// A simple type representing a vector of integers at compile-time
// Used as a container for a list of integers
template <int... Ints>
struct Vector;

// Prepend is a metafunction that adds a new element to the front
// of a Vector at compile-time.
// Base Definition: Prepend takes an integer NewElement and another type Vec,
// which is expected to be a Vector.
template <int NewElement, typename Vec>
struct Prepend;

// Specialization of Prepend: adds NewElement at the front of Vector<Existing...>
template <int NewElement, int... Existing>
struct Prepend<NewElement, Vector<Existing...>>
{
    using type = Vector<NewElement, Existing...>;
};

// Primary template for RemoveAdjacentDuplicates
// Accepts a variable-length list of integers
template <int... Ints>
struct RemoveAdjacentDuplicates;

// === Base Cases ===
// For the base cases, we do not need to use the "typename" keyword
// because we are not using recursive templates here.

// Empty Vector
template <>
struct RemoveAdjacentDuplicates<>
{
    using type = Vector<>;
};

// Single Element Vector
template <int Only>
struct RemoveAdjacentDuplicates<Only>
{
    using type = Vector<Only>;
};

// === Recursive Cases ===
// For these cases, we will use recursion to process the elements.
// We need the "typename" keyword to refer to types defined in the template.

// First element is the same as the second
template <int First, int... Rest>
struct RemoveAdjacentDuplicates<First, First, Rest...>
{
    using type = typename RemoveAdjacentDuplicates<First, Rest...>::type;
};

// First two elements are different
template <int First, int Second, int... Rest>
struct RemoveAdjacentDuplicates<First, Second, Rest...>
{
private:
    using TailResult = typename RemoveAdjacentDuplicates<Second, Rest...>::type;

public:
    using type = typename Prepend<First, TailResult>::type;
};

Testing your metafunction

If you want to test your implementation, you can paste your implementation before the following test runner code and check if it works as expected. Please make sure that the classes Vector and RemoveAdjacentDuplicates are defined with the same signature as shown above.

// Helper alias for easier usage of the RemoveAdjacentDuplicates template
template <int... Ints>
using RemoveAdjacentDuplicates_t = typename RemoveAdjacentDuplicates<Ints...>::type;

// Base definition for a type trait to compare two Vector types
// We expect both A and B to be Vector types.
template <typename A, typename B>
struct AreVectorSame;

template <>
struct AreVectorSame<Vector<>, Vector<>>
{
    static constexpr bool areSame = true;
};

template <int... Ints>
struct AreVectorSame<Vector<>, Vector<Ints...>>
{
    static constexpr bool areSame = false;
};

template <int... Ints>
struct AreVectorSame<Vector<Ints...>, Vector<>>
{
    static constexpr bool areSame = false;
};

template <int A, int B, int... Ints1, int... Ints2>
struct AreVectorSame<Vector<A, Ints1...>, Vector<B, Ints2...>>
{
    static constexpr bool areSame = false;
};

template <int A, int... Ints1, int... Ints2>
struct AreVectorSame<Vector<A, Ints1...>, Vector<A, Ints2...>>
{
    static constexpr bool areSame = AreVectorSame<
        Vector<Ints1...>,
        Vector<Ints2...>
    >::areSame;
};

// Test cases
int main()
{
    using R1 = RemoveAdjacentDuplicates_t<1, 1, 2, 3, 3, 4, 4, 5>;
    using E1 = Vector<1, 2, 3, 4, 5>;
    static_assert(AreVectorSame<R1, E1>::areSame, "Test 1 failed");

    using R2 = RemoveAdjacentDuplicates_t<1, 2, 3, 4, 5>;
    using E2 = Vector<1, 2, 3, 4, 5>;
    static_assert(AreVectorSame<R2, E2>::areSame, "Test 2 failed");

    using R3 = RemoveAdjacentDuplicates_t<1, 1, 1, 1, 1>;
    using E3 = Vector<1>;
    static_assert(AreVectorSame<R3, E3>::areSame, "Test 3 failed");

    using R4 = RemoveAdjacentDuplicates_t<>;
    using E4 = Vector<>;
    static_assert(AreVectorSame<R4, E4>::areSame, "Test 4 failed");

    using R5 = RemoveAdjacentDuplicates_t<1, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 7, 8, 8>;
    using E5 = Vector<1, 2, 3, 4, 5, 6, 7, 8>;
    static_assert(AreVectorSame<R5, E5>::areSame, "Test 5 failed");
}

PS. If you carefully look at the implementation of AreVectorSame, you will notice that almost every case simply reports that the two vectors are different. Thus, to shorten the implementation, we could give the primary template a default areSame = false and keep only the specializations where the output is true! Pretty neat, right?