
Write Your Own


Eshaan Aggarwal @EshaanAgg

Last updated: Tuesday, December 3, 2024

This is a collection of common C++ classes and templates that we use all the time, which makes them perfect opportunities for interviewers to check whether a candidate is comfortable with the language, can write clean code, and is aware of the language's internals.


Smart Pointers

General Smart Pointer

This is the general implementation of a smart pointer: a class that wraps a raw pointer, manages its memory, and deletes it when the wrapper goes out of scope. It is an application of the RAII (Resource Acquisition Is Initialization) idiom, where the resource is acquired in the constructor and released in the destructor.

template <typename T>
class AutoPtr {
    T *ptr;

public:
    AutoPtr(T *ptr = nullptr) : ptr(ptr) {}
    ~AutoPtr() {
        delete ptr;
    }

    // Copy constructor [Moves the pointer to self]
    AutoPtr(AutoPtr &other) {
        ptr = other.ptr;
        other.ptr = nullptr;
    }

    // Copy assignment operator [Moves the pointer to self]
    AutoPtr& operator=(AutoPtr &other) {
        if (this == &other)
            return *this;

        delete ptr; // Free any existing resource
        ptr = other.ptr;
        other.ptr = nullptr;
        return *this;
    }

    // Overloading the operators for convenience
    T& operator*() const { return *ptr; }
    T* operator->() const { return ptr; }
};

While this class is a good starting point, it has some limitations:

- The copy constructor and copy assignment operator actually move the pointer, so "copying" silently steals ownership from the source object and leaves it holding a nullptr. This is surprising and makes the class unsafe to use inside standard containers.
- It always calls delete, so it cannot manage arrays allocated with new[] or resources that require a custom deleter.
- There is no way to inspect or release the underlying pointer, and no notion of shared ownership.

For these and many more such reasons, std::auto_ptr (on which this class is modelled) was introduced in C++98 but deprecated in C++11 and removed in C++17. It was replaced by std::unique_ptr, std::shared_ptr, and std::weak_ptr, which implement ownership transfer and sharing with proper move and copy semantics.
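To see the first limitation in action, here is a small usage sketch of the class above (Widget is just a hypothetical type used for illustration):

#include <iostream>

struct Widget {
    void greet() { std::cout << "Hello from Widget\n"; }
};

int main() {
    AutoPtr<Widget> a(new Widget());
    a->greet();

    AutoPtr<Widget> b = a; // The "copy" actually moves: a now holds nullptr
    b->greet();

    // Dereferencing `a` here would be undefined behaviour, which is exactly
    // the kind of surprise that led to std::auto_ptr being deprecated.
    return 0;
}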

Unique Pointer

A std::unique_ptr is a smart pointer that owns and manages another object through a pointer, and disposes of that object when the std::unique_ptr goes out of scope. It is unique in the sense that it cannot be copied or copy-assigned, so there is always exactly one owner of the object (ownership can be transferred using move semantics).

template<typename T>
class UniquePtr {
    T *ptr;

public:
    UniquePtr(T* ptr = nullptr): ptr(ptr) {}
    ~UniquePtr() {
        delete ptr;
    }

    // Disable the copy constructor and copy assignment operator
    UniquePtr(const UniquePtr &other) = delete;
    UniquePtr& operator=(const UniquePtr &other) = delete;

    // Move constructor
    UniquePtr(UniquePtr &&other) {
        ptr = other.ptr;
        other.ptr = nullptr;
    }

    // Move assignment operator
    UniquePtr& operator=(UniquePtr &&other) {
        if (this == &other)
            return *this;

        delete ptr; // Free any existing resource
        ptr = other.ptr;
        other.ptr = nullptr;
        return *this;
    }

    // Overloading the operators
    T& operator*() const { return *ptr; }
    T* operator->() const { return ptr; }
};
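A minimal usage sketch for the UniquePtr above (assuming its definition is in scope); ownership can only be transferred explicitly with std::move:

#include <iostream>
#include <utility>

int main() {
    UniquePtr<int> p(new int(42));
    std::cout << *p << "\n";          // 42

    // UniquePtr<int> q = p;          // error: the copy constructor is deleted
    UniquePtr<int> q = std::move(p);  // ownership transferred, p now holds nullptr

    std::cout << *q << "\n";          // 42
    return 0;
}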

Shared Pointer

A std::shared_ptr is a smart pointer that retains shared ownership of an object through a pointer. Several std::shared_ptr objects may own the same object. The object is destroyed and its memory deallocated when all std::shared_ptr objects that own it are destroyed or reset. The std::shared_ptr uses a reference count to keep track of the number of std::shared_ptr objects that own the object.

template<typename T>
class SharedPtr {
    T *ptr;
    int *count;

    void cleanUp() {
        if (!count) // Moved-from object, nothing to release
            return;

        (*count)--;
        if (*count == 0) {
            delete ptr;
            delete count;
        }
    }

public:
    SharedPtr(T *ptr = nullptr): ptr(ptr) {
        if (ptr)
            count = new int(1);
        else
            count = new int(0);
    }

    ~SharedPtr() { cleanUp(); }

    // Copy constructor
    SharedPtr(SharedPtr &other) {
        ptr = other.ptr;
        count = other.count;
        if (ptr)
            (*count)++;
    }

    // Copy assignment operator
    SharedPtr& operator=(SharedPtr &other) {
        if (this == &other)
            return *this;

        cleanUp();

        ptr = other.ptr;
        count = other.count;
        if (ptr)
            (*count)++;

        return *this;
    }

    // Move constructor
    SharedPtr(SharedPtr &&other) {
        ptr = other.ptr;
        count = other.count;
        other.ptr = nullptr;
        other.count = nullptr;
    }

    // Move assignment operator
    SharedPtr& operator=(SharedPtr &&other) {
        cleanUp();

        ptr = other.ptr;
        count = other.count;
        other.ptr = nullptr;
        other.count = nullptr;

        return *this;
    }

    // Overloading operators
    T& operator*() const { return *ptr; }
    T* operator->() const { return ptr; }
};
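A short sketch, assuming the SharedPtr above, of how the reference count evolves as copies are created and destroyed:

#include <iostream>

int main() {
    SharedPtr<int> a(new int(7));    // count = 1
    {
        SharedPtr<int> b = a;        // copy: count = 2
        std::cout << *b << "\n";     // 7
    }                                // b destroyed: count = 1
    std::cout << *a << "\n";         // 7, the object is still alive
    return 0;
}                                    // a destroyed: count = 0, the int is deleted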

Hit Counter

Base Variant

You need to implement a HitCounter class that supports the following operations:

- hit(timestamp): records a hit at the given timestamp.
- getHits(timestamp): returns the number of hits received in the past 5 minutes (300 seconds), counting backwards from the given timestamp.

Each function accepts a timestamp parameter (at seconds granularity), and you may assume that calls are made to the system in chronological order (i.e. the timestamps are monotonically increasing).

class HitCounter {
    // Front of the deque has the oldest timestamp
    // (timestamps are monotonically increasing)
    deque<pair<int, int>> hits;
    int curCnt;

    const int MAX_TIME = 5 * 60;

    void removeOldHits(int timestamp) {
        while (!hits.empty() && hits.front().first <= timestamp - MAX_TIME) {
            curCnt -= hits.front().second;
            hits.pop_front();
        }
    }

public:
    HitCounter(): curCnt(0) {}

    void hit(int timestamp) {
        removeOldHits(timestamp);
        if (!hits.empty() && hits.back().first == timestamp)
            hits.back().second++;
        else
            hits.push_back({timestamp, 1});
        curCnt++;
    }

    int getHits(int timestamp) {
        removeOldHits(timestamp);
        return curCnt;
    }
};

This class uses a deque to store the hits and a curCnt to track the total number of hits in the current window; removeOldHits evicts the entries that are older than 5 minutes. One salient feature of this implementation is that it does not store one entry per hit, but rather one entry per second together with the number of hits at that second, so it gracefully handles multiple hits arriving at the same second. If that were not a requirement, we could have stored one timestamp per hit in a simple queue instead.
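A small driver, assuming the class above and the usual headers, that illustrates the 5-minute (300-second) window:

#include <iostream>

int main() {
    HitCounter counter;
    counter.hit(1);
    counter.hit(2);
    counter.hit(2);                            // second hit at the same timestamp

    std::cout << counter.getHits(300) << "\n"; // 3: all hits are within [1, 300]
    std::cout << counter.getHits(301) << "\n"; // 2: the hit at t = 1 has expired
    return 0;
}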

Unordered Timestamps

Implement the same interface as above, but now the timestamps are not guaranteed to be in order.

class HitCounter {
    map<int, int> hits;
    const int MAX_TIME = 5 * 60;

public:
    HitCounter() {}

    void hit(int timestamp) {
        hits[timestamp]++;
    }

    int getHits(int timestamp) {
        int cnt = 0;
        for (int time = timestamp - MAX_TIME + 1; time <= timestamp; time++) {
            if (hits.find(time) != hits.end())
                cnt += hits[time];
        }
        return cnt;
    }
};
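A quick sketch, assuming the class above, where hits now arrive out of order:

#include <iostream>

int main() {
    HitCounter counter;
    counter.hit(100);
    counter.hit(5);                            // an out-of-order hit
    counter.hit(100);

    std::cout << counter.getHits(100) << "\n"; // 3: the window is [-199, 100]
    std::cout << counter.getHits(310) << "\n"; // 2: the hit at t = 5 has expired
    return 0;
}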

Concurrent Hit Counter

Implement the same interface as above, but now the class needs to be thread-safe. You can assume that the threads call these methods with chronologically increasing timestamps.

class HitCounter {
    mutex mtx;
    deque<pair<int, int>> hits;
    int curCnt;

    const int MAX_TIME = 5 * 60;

    void removeOldHits(int timestamp) {
        while (!hits.empty() && hits.front().first <= timestamp - MAX_TIME) {
            curCnt -= hits.front().second;
            hits.pop_front();
        }
    }

public:
    HitCounter(): curCnt(0) {}

    void hit(int timestamp) {
        lock_guard<mutex> lock(mtx);

        removeOldHits(timestamp);
        if (!hits.empty() && hits.back().first == timestamp)
            hits.back().second++;
        else
            hits.push_back({timestamp, 1});
        curCnt++;
    }

    int getHits(int timestamp) {
        lock_guard<mutex> lock(mtx);

        removeOldHits(timestamp);
        return curCnt;
    }
};
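A small sketch, assuming the class above, with two threads recording hits concurrently; the timestamps are drawn from a shared atomic counter so that they keep increasing overall:

#include <atomic>
#include <iostream>
#include <thread>

int main() {
    HitCounter counter;
    std::atomic<int> clk{0};

    // Each thread records 100 hits. Slight interleaving of timestamps does not
    // matter here, since every hit stays within the same 300-second window.
    auto worker = [&]() {
        for (int i = 0; i < 100; i++)
            counter.hit(++clk);
    };

    std::thread t1(worker), t2(worker);
    t1.join();
    t2.join();

    std::cout << counter.getHits(clk) << "\n"; // 200
    return 0;
}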

Infinite Timestamps in the Past

In this variant, we want the hit counter to be able to answer queries about arbitrarily old windows in the past while using constant memory; some inaccuracy in the answers is allowed. To achieve this, we use buckets whose sizes are powers of 2, so that the older buckets cover longer spans of time and are updated less frequently.

class HitCounter;

class Bucket {
    int quantum;
    int cnt;
    int lastUpdate;

    friend class HitCounter;

public:
    Bucket(int p) {
        quantum = 1 << p;
        cnt = 0;
        lastUpdate = 0;
    }
};

class HitCounter {
    int MAX_BUCKETS = 12;
    vector<Bucket> buckets;
    int curTime;

    void shiftBuckets(int deltaTime) {
        if (!deltaTime)
            return;

        // The last bucket represents lifetime hits, and should not be shifted
        for (int i = 0; i < MAX_BUCKETS - 1; i++) {
            Bucket &b = buckets[i];
            if (curTime - b.lastUpdate >= b.quantum) {
                buckets[i + 1].cnt += b.cnt;
                buckets[i].lastUpdate = curTime;
                buckets[i].cnt = 0;
            }
        }
    }

    void update(int timestamp) {
        int deltaTime = timestamp - curTime;
        curTime = timestamp;
        shiftBuckets(deltaTime);
    }

public:
    HitCounter() {
        curTime = 0;
        for (int i = 0; i < MAX_BUCKETS; i++)
            buckets.emplace_back(i);
    }

    void hit(int timestamp) {
        update(timestamp);
        buckets[0].cnt++;
    }

    int getHits(int timestamp, int prevTime) {
        update(timestamp);

        int cnt = 0;
        for (int idx = 0; idx < MAX_BUCKETS; idx++) {
            if (prevTime >= buckets[idx].quantum)
                cnt += buckets[idx].cnt;
            else
                break;
        }

        return cnt;
    }
};
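A rough usage sketch, assuming the class above; since older hits get lumped into coarser buckets, the answers are only approximate by design:

#include <iostream>

int main() {
    HitCounter counter;

    for (int t = 1; t <= 10; t++)
        counter.hit(t);

    // Approximate number of hits in roughly the last 4 seconds, as seen at t = 10
    std::cout << counter.getHits(10, 4) << "\n";
    // Approximate number of hits in roughly the last 64 seconds
    std::cout << counter.getHits(10, 64) << "\n";
    return 0;
}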

Vector

The std::vector is a dynamic array that can grow and shrink in size. It is a sequence container that encapsulates dynamic size arrays. The storage of the vector is handled automatically, being expanded and contracted as needed. Vectors usually occupy more space than static arrays, because more memory is allocated to handle future growth. This way a vector does not need to reallocate each time an element is inserted, but only when the additional memory is exhausted.

For our implementation, we set the capacity of the vector to be powers of 2, so that the reallocation is done less frequently (though other exponential growth factors can be used as well). We will make use of templates to make the vector generic, and try to provide the same interface as the std::vector. We will also provide the operator[] to access the elements of the vector, and implement iterators to traverse the vector.

Note that in the following implementation, we have used std::move to move the elements of the vector so that we can avoid copying them. To also provide support for emplace_back, the member function itself is a variadic template taking Args &&...args, which forwards the arguments to the constructor of the object.

template <typename T>
class Vector {
public:
    typedef T *iterator;

    // Constructors
    Vector() : capacity(1), size(0) {
        arr = new T[capacity];
    }

    Vector(int cap) : capacity(cap), size(0) {
        arr = new T[capacity];
    }

    // Destructor
    ~Vector() {
        delete[] arr;
    }

    template <typename... Args>
    void emplace_back(Args &&...args) {
        push_back(T(std::forward<Args>(args)...));
    }

    void push_back(const T &val) {
        if (size == capacity)
            _resize();
        arr[size++] = val;
    }

    void push_back(T &&val) {
        if (size == capacity)
            _resize();
        arr[size++] = std::move(val);
    }

    void pop_back() {
        if (size > 0)
            size--;
        if (size < capacity / 2)
            _shrink();
    }

    int len() {
        return size;
    }

    T front() {
        return arr[0];
    }

    T back() {
        return arr[size - 1];
    }

    // Iterators
    iterator begin() {
        return arr;
    }

    iterator end() {
        return arr + size;
    }

    // Overloading the operators
    T &operator[](int idx) {
        return arr[idx];
    }

private:
    int capacity;
    int size;
    T *arr;

    void _resize() {
        capacity *= 2;
        T *newArr = new T[capacity];
        for (int i = 0; i < size; i++)
            newArr[i] = std::move(arr[i]);
        delete[] arr;
        arr = newArr;
    }

    void _shrink() {
        capacity /= 2;
        T *newArr = new T[capacity];
        for (int i = 0; i < size; i++)
            newArr[i] = std::move(arr[i]);
        delete[] arr;
        arr = newArr;
    }
};
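A brief usage sketch for the Vector above, exercising push_back, emplace_back, and the iterators:

#include <iostream>
#include <string>
#include <utility>

int main() {
    Vector<std::pair<int, std::string>> v;

    v.push_back({1, "one"});
    v.emplace_back(2, "two"); // constructs the pair in place from its arguments

    for (auto &p : v)
        std::cout << p.first << " -> " << p.second << "\n";

    std::cout << "len = " << v.len() << ", back = " << v.back().second << "\n";
    v.pop_back();
    return 0;
}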

Semaphore

A semaphore is a synchronization primitive used to control access to a shared resource by multiple processes or threads in a concurrent system, ensuring that the resource is used by at most a certain number of them at a time. A semaphore supports two operations, acquire and release, also known as the P and V operations respectively: acquire waits until the resource is available and then decrements the available count, while release increments the available count and notifies one of the waiting threads.

With Condition Variables

This implementation uses a std::mutex and a std::condition_variable to implement the semaphore. A condition variable allows a thread to be signaled when something of interest to that thread occurs, which avoids busy waiting. Note the use of std::unique_lock instead of the usual std::lock_guard: std::condition_variable::wait must unlock the mutex while waiting and re-lock it afterwards, which requires a std::unique_lock.

class Semaphore {
    size_t avail;
    std::mutex mtx;
    std::condition_variable cv;

public:
    Semaphore(int avail = 1) : avail(avail) {}

    void acquire() { // P(x)
        std::unique_lock<std::mutex> lk(mtx);
        // Wait until the lambda function returns true
        cv.wait(lk, [this]() {
            return avail > 0;
        });
        avail--;
    }

    void release() { // V(x)
        std::unique_lock<std::mutex> lk(mtx);
        avail++;
        cv.notify_one();
    }
};
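A short sketch, assuming the Semaphore above, where at most two of the four threads can hold the semaphore at any moment:

#include <chrono>
#include <iostream>
#include <thread>
#include <vector>

int main() {
    Semaphore sem(2); // at most 2 concurrent holders

    auto task = [&sem](int id) {
        sem.acquire();
        std::cout << "Thread " << id << " acquired the semaphore\n";
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
        sem.release();
    };

    std::vector<std::thread> threads;
    for (int i = 0; i < 4; i++)
        threads.emplace_back(task, i);
    for (auto &t : threads)
        t.join();

    return 0;
}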

Without Condition Variables

Since we are not using condition variables, we need to busy wait until the resource is available. This is not a good practice as it wastes CPU cycles, but it is useful to understand the concept of semaphores. We now make use of std::atomic to make the operations atomic, and a std::mutex to protect the critical section.

struct Semaphore {
    int size;
    atomic<int> count;
    mutex updateMutex;

    Semaphore(int n) : size(n) { count.store(0); }

    void acquire() {
        while (1) {
            // Busy wait until there might be a free slot
            while (count >= size) {}

            updateMutex.lock();
            if (count >= size) {
                // Another thread grabbed the slot first, retry
                updateMutex.unlock();
                continue;
            }
            ++count;
            updateMutex.unlock();
            break;
        }
    }

    void release() {
        updateMutex.lock();
        if (count > 0) {
            --count;
        } // Else log error or throw exception
        updateMutex.unlock();
    }
};

Producer Consumer Problem

The classic Producer-Consumer problem, also called the Bounded-Buffer problem, is one of the most common real-world synchronization scenarios. The problem describes two processes, the producer and the consumer, who share a common, fixed-size buffer used as a queue. The producer's job is to generate data, put it into the buffer, and start again. At the same time, the consumer is consuming the data (i.e., removing it from the buffer), one piece at a time. The problem is to make sure that the producer won't try to add data into the buffer if it's full and that the consumer won't try to remove data from an empty buffer.

We make use of the following classes to implement the producer-consumer problem:

- std::queue<int> as the shared, bounded buffer.
- std::mutex to protect the buffer from concurrent access.
- std::condition_variable to let the producer wait for free space and the consumer wait for available data.

In the main function, we create two threads, one for the producer and one for the consumer. The producer produces data and pushes it into the buffer, and the consumer consumes the data by popping it from the buffer. The producer waits if the buffer is full, and the consumer waits if the buffer is empty.

class ProducerConsumer {
    std::mutex mtx;
    std::condition_variable cv;
    std::queue<int> buffer;
    const unsigned int capacity = 10;

public:
    void producer(int item) {
        std::unique_lock<std::mutex> lk(mtx);
        cv.wait(lk, [this]() {
            return buffer.size() < capacity;
        });
        buffer.push(item);
        std::cout << "Produced: " << item << " | Buffer Size: " << buffer.size() << std::endl;
        cv.notify_all();
    }

    void consumer() {
        std::unique_lock<std::mutex> lk(mtx);
        cv.wait(lk, [this]() {
            return !buffer.empty();
        });
        int item = buffer.front();
        buffer.pop();
        std::cout << "Consumed: " << item << " | Buffer Size: " << buffer.size() << std::endl;
        cv.notify_all();
    }
};

int main() {
    ProducerConsumer pc;

    std::thread producer([&pc]() {
        for (int i = 0; i < 100; i++)
            pc.producer(i);
    });

    std::thread consumer([&pc]() {
        for (int i = 0; i < 100; i++)
            pc.consumer();
    });

    producer.join();
    consumer.join();

    return 0;
}

Singleton Design Pattern

A singleton is a class of which only one instance can ever exist. This is useful when you want a single object that is shared across the application. In C++ it is typically implemented with a static member function that returns a reference to a function-local static instance, while the constructors are made inaccessible from outside.

Singletons are useful when we want to apply some functionality to a "global" set of data without passing the object or its member functions around. Encapsulating this in a singleton class makes it easier to manage and use throughout the application. The same effect can be achieved with free functions in a namespace, but the singleton class provides a more object-oriented approach.

Some good examples of things that are commonly modelled as singletons are a logger, an application-wide configuration manager, and a random number generator.

class Singleton {
public:
    static Singleton& get() {
        static Singleton instance;
        return instance;
    }

    void foo() {}

private:
    // Make the constructor private so that the class cannot be instantiated
    // from outside, and delete the other constructors to avoid copying
    Singleton() {}
    Singleton(const Singleton&) = delete;
    Singleton& operator=(const Singleton&) = delete;
};

int main() {
    Singleton &s = Singleton::get();
    s.foo();

    // Or equivalently
    Singleton::get().foo();

    return 0;
}

If you want to call the member functions directly on the class, you can make the member functions static as well with a bit of indirection.

class Singleton {
public:
    static Singleton& get() {
        static Singleton instance;
        return instance;
    }

    static void foo() {
        get().IFoo();
    }

private:
    void IFoo() {
        // Implementation of foo
    }

    Singleton() {}
    Singleton(const Singleton&) = delete;
    Singleton& operator=(const Singleton&) = delete;
};

int main() {
    Singleton::foo();
    return 0;
}