By Taosu
A URL blacklist contains 10 billion URLs, each 64 bytes in size. How do you store this blacklist? How do you determine whether a URL is in the blacklist?
Hash table:
If we treat the blacklist as a set and store it in a hash map, the URLs alone take up 10 billion × 64 bytes = 640 GB of space, which is unreasonable for memory.
Bloom filter:
A bloom filter contains a long binary vector and a series of random mapping functions.
It is typically used to test whether an element is in a set. It is extremely space-efficient and supports high-efficiency queries. A Bloom filter is essentially a bit array in which each position occupies only 1 bit and holds either 0 or 1.
Besides the bit array, a Bloom filter also includes a set of K hash functions. For each incoming element, a Bloom filter performs the following operations (a minimal code sketch follows the list):
• Apply the K hash functions to the element to obtain K hash values.
• According to the calculated hash values, set the values at the corresponding indexes in the bit array to 1.
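A minimal Java sketch of this structure is shown below; the class name, the seed-based hash, and the parameters are illustrative rather than a production design (in practice, for n elements and a target false-positive rate p, one chooses roughly m ≈ -n·ln p / (ln 2)² bits and k ≈ (m/n)·ln 2 hash functions):
import java.util.BitSet;
// A minimal Bloom filter sketch; m (bits) and the seeds (one per hash function) are illustrative.
public class SimpleBloomFilter {
    private final BitSet bits;
    private final int m;          // number of bits
    private final int[] seeds;    // one seed per hash function (k = seeds.length)
    public SimpleBloomFilter(int m, int[] seeds) {
        this.m = m;
        this.seeds = seeds;
        this.bits = new BitSet(m);
    }
    // Simple seeded hash; real implementations would use stronger hashes such as MurmurHash.
    private int hash(String value, int seed) {
        int h = 0;
        for (int i = 0; i < value.length(); i++) {
            h = seed * h + value.charAt(i);
        }
        return (h & 0x7FFFFFFF) % m;
    }
    public void add(String value) {
        for (int seed : seeds) {
            bits.set(hash(value, seed));   // set the k positions to 1
        }
    }
    // Returns false if definitely absent; true means "possibly present" (false positives are possible).
    public boolean mightContain(String value) {
        for (int seed : seeds) {
            if (!bits.get(hash(value, seed))) {
                return false;
            }
        }
        return true;
    }
}
For the 10-billion-URL blacklist, each URL would be added once, and a lookup then answers either "definitely not in the blacklist" or "possibly in the blacklist" with a small, tunable false-positive rate.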
With only 2 GB of memory, find the number that appears the most often among 2 billion integers
A common practice is to use a hash table to count the frequency of each number that occurs. Each key in the hash table is an integer, and its value records the number of occurrences of that integer. In this question there are 2 billion numbers, and in the extreme case the same number could occur 2 billion times. To avoid overflow, keys in the hash table are 32-bit (4 bytes) and values are also 32-bit (4 bytes), so each entry record in the hash table occupies 8 bytes of space.
If there are 200 million entry records in the hash table, it occupies 1.6 billion bytes (8 bytes × 200 million), or roughly 1.6 GB of memory (1 GB = 2^30 bytes ≈ 1 billion bytes). Accommodating 2 billion entry records would therefore require at least 16 GB of memory, which exceeds the 2 GB limit specified in this question.
A solution is to split the 2 billion numbers across multiple files. Use a hash function to distribute the 2 billion numbers evenly into 16 files. Because a hash function always maps identical numbers to the same result, every occurrence of a given number lands in the same file, while different numbers are spread roughly evenly, so each file holds at most about 125 million distinct numbers and its hash table needs only about 1 GB. Then, for each file, use a hash table to count the occurrences of each number and record the most frequent one. This yields 16 candidates, one per file; among them, the number with the largest count is the answer.
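A sketch of the two phases under these assumptions (the file I/O is omitted; numbersOfFile stands for a hypothetical stream over one of the 16 files):
import java.util.HashMap;
import java.util.Map;
// Sketch of the "split by hash, then count each file" idea.
public class MostFrequentNumber {
    static final int FILE_COUNT = 16;
    // Phase 1: the file a number belongs to; identical numbers always map to the same file.
    static int fileIndex(int num) {
        return (num & 0x7FFFFFFF) % FILE_COUNT;
    }
    // Phase 2: count one file's numbers with a hash map and return its most frequent number.
    static int mostFrequentInFile(Iterable<Integer> numbersOfFile) {
        Map<Integer, Integer> freq = new HashMap<>();
        int best = 0, bestCount = 0;
        for (int num : numbersOfFile) {
            int c = freq.merge(num, 1, Integer::sum);
            if (c > bestCount) {
                bestCount = c;
                best = num;
            }
        }
        return best;
    }
}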
Find the missing numbers from 4 billion non-negative integers
If we use a hash table to store the 4 billion numbers and we consider the worst case where all the 4 billion numbers are different with no repetitions, then the hash table must be able to store 4 billion entry records. A 32-bit integer occupies 4 bytes of space, so 4 billion integers occupy 16 billion bytes of space. In general, 1 billion bytes of data occupies about 1 GB of space, so to store 4 billion integers would require about 16 GB of space, which exceeds the memory limit specified in this question.
Another method is to apply for a bit array with 4,294,967,296 (2^32) positions, that is, approximately 4 billion bits, or 512 million (2^32 / 8) bytes, requiring about 0.5 GB of space. The value at each position in the bit array is either 0 or 1. How do we use this bit array, then? Its length exactly covers the range of numbers specified in the question: each index corresponds to a number from 0 to 4,294,967,295. Traverse the 4 billion non-negative integers and assign values to the bit array. For example, if the number 20 occurs, set bitArray[20] = 1; if the number 666 occurs, set bitArray[666] = 1. For every number that occurs, the bit at the corresponding index is set to 1. After the traversal, every index whose bit is still 0 is a missing number.
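A sketch of the bit-array idea (a java.util.BitSet cannot address all 2^32 positions, so this sketch indexes a long[] manually and treats each value as an unsigned 32-bit number):
// Allocate a bit array covering the full unsigned 32-bit range: 2^32 bits = 512 MB.
static long[] newBitArray() {
    return new long[1 << 26];                       // 2^26 longs x 64 bits = 2^32 bits
}
// Set the bit for num (read as an unsigned 32-bit value).
static void mark(long[] bitArray, int num) {
    long v = num & 0xFFFFFFFFL;
    bitArray[(int) (v >>> 6)] |= 1L << (v & 63);
}
// After the traversal, any position whose bit is still 0 is a missing number.
static boolean occurred(long[] bitArray, long v) {
    return (bitArray[(int) (v >>> 6)] & (1L << (v & 63))) != 0;
}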
With only 10 MB of memory, find the missing number from 4 billion non-negative integers
10 MB of memory corresponds to roughly 10 million bytes, that is, about 80 million bits, so a single bit array covering all 4 billion non-negative integers cannot fit. We therefore need to split the range into at least 50 (4 billion / 80 million) pieces; for convenience, we split it into 64 equal sub-ranges, each of which needs a bit array of only 2^32 / 64 = 67,108,864 bits (about 8 MB).
This advanced solution can be summed up in a few steps as follows:
• Split the range from 0 to 2^32 - 1 into 64 equal sub-ranges, each covering 67,108,864 numbers.
• Traverse the 4 billion numbers once and count how many of them fall into each sub-range (64 counters take almost no memory).
• Because only 4 billion numbers are given while the 64 sub-ranges cover about 4.29 billion values, at least one sub-range has a count smaller than 67,108,864; that sub-range is missing at least one number.
• Apply for a bit array of 67,108,864 bits (about 8 MB) for that sub-range, traverse the numbers again, and set the bit of every number that falls into the sub-range.
• Finally, scan the bit array; every position whose bit is still 0 corresponds to a missing number.
Author's thoughts
If the task is only to find one missing number, we can also take each number modulo 64, write the numbers into 64 different files according to the result, and then perform the above marking step only on the file that received the fewest numbers, since that residue class is guaranteed to be missing at least one value.
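A sketch of the range-splitting steps above; numbers is a hypothetical helper that can stream the 4 billion values twice (for example, by re-reading the input file):
// Find one missing number with the 64-range approach.
static long findOneMissing(Iterable<Integer> numbers) {
    final long RANGE = 1L << 26;                   // 2^32 / 64 = 67,108,864 numbers per sub-range
    // Step 1: count how many input numbers fall into each of the 64 sub-ranges.
    long[] counts = new long[64];
    for (int num : numbers) {
        long v = num & 0xFFFFFFFFL;                // treat the value as unsigned
        counts[(int) (v / RANGE)]++;
    }
    // Step 2: by the pigeonhole principle, some sub-range holds fewer than RANGE numbers.
    int target = 0;
    while (counts[target] >= RANGE) {
        target++;
    }
    // Step 3: an 8 MB bit array is enough to cover that single sub-range.
    long[] bits = new long[(int) (RANGE >>> 6)];
    long base = target * RANGE;
    for (int num : numbers) {                      // second pass over the data
        long v = num & 0xFFFFFFFFL;
        if (v / RANGE == target) {
            long offset = v - base;
            bits[(int) (offset >>> 6)] |= 1L << (offset & 63);
        }
    }
    // Step 4: the first 0 bit identifies a missing number.
    for (long offset = 0; offset < RANGE; offset++) {
        if ((bits[(int) (offset >>> 6)] & (1L << (offset & 63))) == 0) {
            return base + offset;
        }
    }
    return -1;                                     // unreachable for the chosen sub-range
}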
With only 1 GB of memory, find all the numbers that occur twice among 4 billion unsigned integers
We can use a bit map to record the occurrences of numbers. Specifically, we apply for a bit array with 4,294,967,296 × 2 positions and use two bits to represent the occurrence state of each number. With one byte holding 8 bits, a bit array of length 2^32 × 2 occupies 1 GB of space. Then, what do we do with this bit array? We traverse the 4 billion unsigned numbers. If a number num occurs for the first time, set bitArr[num*2+1] and bitArr[num*2] to 01. If it occurs for the second time, set them to 10. If it occurs for the third time, set them to 11. If it occurs a fourth time or more, the two bits stay at 11. After this traversal is complete, we traverse the bit array again: if bitArr[i*2+1] and bitArr[i*2] are 10, then i is a number that occurs exactly twice.
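A sketch of the two-bits-per-number idea, again indexing a long[] manually and treating values as unsigned 32-bit integers:
// Two bits per number: 00 = not seen, 01 = seen once, 10 = seen twice, 11 = seen three or more times.
static long[] newTwoBitArray() {
    return new long[1 << 27];                      // 2^27 longs x 64 bits = 2^33 bits = 1 GB
}
static void record(long[] bits, int num) {
    long pos = (num & 0xFFFFFFFFL) * 2;
    int word = (int) (pos >>> 6);
    int shift = (int) (pos & 63);
    int state = (int) ((bits[word] >>> shift) & 3);
    if (state < 3) {                               // saturate at 11
        bits[word] &= ~(3L << shift);              // clear the two bits
        bits[word] |= ((long) (state + 1)) << shift;
    }
}
// After the traversal, value v occurred exactly twice if its two bits read 10 (decimal 2).
static boolean occurredTwice(long[] bits, long v) {
    long pos = v * 2;
    return ((bits[(int) (pos >>> 6)] >>> (pos & 63)) & 3) == 2;
}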
Find the duplicates from 10 billion URLs
We can solve this question with a conventional approach to big data problems: distribute the processing of a large file across multiple machines, or split a large file into smaller files using a hash function. This process can be repeated until the resource limits are respected. First and foremost, ask the interviewer whether there are limits on resources such as memory or computing time. Then, use a hash function to distribute the URLs across multiple machines or to split the large file into several small files; the specific number of machines or files is calculated based on those limits.
For example, use a hash function to distribute the file containing the 10 billion URLs across 100 machines, and let each machine count the duplicate URLs among those it receives. The nature of hash functions ensures that identical URLs are always sent to the same machine. Alternatively, on a single machine, use a hash function to split the large file into 1,000 small files, then traverse each small file with a hash set to find the duplicate URLs. After distributing to machines or splitting into files, you can also locate duplicates by sorting. Either way, keep in mind that the key to many big data problems is distribution: use a hash function to spread the contents of a large file across machines or split it into small files, and then process each part one by one.
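A sketch of the single-machine variant under these assumptions (class and helper names are illustrative, and reading and writing the small files is omitted):
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
public class UrlDeduplication {
    // Decide which small file (or machine) a URL belongs to.
    // Identical URLs always hash to the same value, so their duplicates land in the same file.
    static int targetFile(String url, int fileCount) {
        return (url.hashCode() & 0x7FFFFFFF) % fileCount;
    }
    // Within one small file, a plain HashSet is enough to spot duplicates.
    static List<String> duplicatesInFile(Iterable<String> urlsOfFile) {
        Set<String> seen = new HashSet<>();
        List<String> duplicates = new ArrayList<>();
        for (String url : urlsOfFile) {
            if (!seen.add(url)) {
                duplicates.add(url);
            }
        }
        return duplicates;
    }
}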
Find the top 100 searches among massive search words
This question can also be solved by using hash distribution. We can assign the processing of the vocabulary file containing tens of billions of words to different machines. The specific number of machines is determined either by the interviewer or based on other limits. For each machine, if the amount of data received is still too large due to insufficient memory or other limits, we can again use a hash function to further split the file on the machine into smaller files for processing.
To process each small file, use a hash table to count the frequency of each word, and then traverse the hash table while maintaining a min-heap of size 100 to keep the top 100 words of that file. Sorting the words in each heap by frequency yields the (sorted) top 100 words of each small file. Then, on each machine, merge the per-file top 100 lists by external sorting or by continuing to use a min-heap to obtain that machine's top 100 words. Finally, bring the per-machine top 100 lists together and again use external sorting or a min-heap to find the top 100 words among all the tens of billions of words. Besides hash distribution and hash tables for frequency statistics, common tools for top-K problems are heaps and external sorting.
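A sketch of the per-file step — count frequencies with a hash map, then keep the k most frequent words with a min-heap (here a PriorityQueue); the class and method names are illustrative:
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.PriorityQueue;
public class TopKWords {
    // Top-K words of one small file: a hash map for counting plus a min-heap of size k.
    static List<Map.Entry<String, Long>> topK(Iterable<String> words, int k) {
        Map<String, Long> freq = new HashMap<>();
        for (String w : words) {
            freq.merge(w, 1L, Long::sum);
        }
        // Min-heap ordered by count: the root is always the weakest of the current top k.
        PriorityQueue<Map.Entry<String, Long>> heap =
                new PriorityQueue<>((a, b) -> Long.compare(a.getValue(), b.getValue()));
        for (Map.Entry<String, Long> e : freq.entrySet()) {
            heap.offer(e);
            if (heap.size() > k) {
                heap.poll();                        // drop the current minimum
            }
        }
        List<Map.Entry<String, Long>> result = new ArrayList<>(heap);
        result.sort((a, b) -> Long.compare(b.getValue(), a.getValue()));  // highest count first
        return result;
    }
}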
With only 10 MB of memory, find the median of 10 billion integers
① With sufficient memory, you can just sort all the 10 billion integers (even via bubble sort) to find the median. However, your interviewer will definitely not allow that.
② With insufficient memory: The question says 10 billion integers and we suppose that they are signed integers. Each integer occupies 4 bytes, namely, 32 bits.
Assume that 10 billion numbers are stored in a large file. Read the contents of the file into memory (without exceeding the memory limit) part by part. Represent each number using a binary value and check the highest bit (the 32nd bit, also the sign bit; 0 indicates positive and 1 indicates negative) of each binary value. If the highest bit is 0, write the number into file_0. If the highest bit is 1, write the number to file_1.
This way, the 10 billion numbers are divided into two files. Assume that file_1 contains 4 billion numbers (all negative) and file_0 contains 6 billion numbers (all non-negative). Sorted in ascending order, the 4 billion negatives come first, so the 5 billionth number overall must lie in file_0; specifically, the median is the 1 billionth number of file_0 after file_0 is sorted.
Now, we only need to process file_0 further and can ignore file_1. For file_0, repeat the process: read its contents into memory part by part (without exceeding the memory limit), check the second highest bit (the 31st bit) of each number, write the number into file_0_0 if that bit is 0, and into file_0_1 if that bit is 1.
Now assume that there are 3 billion numbers in file_0_0 and 3 billion numbers in file_0_1. Since we are looking for the 1 billionth number of file_0 and file_0_0 holds the 3 billion smaller numbers, the target is the 1 billionth number of file_0_0 after sorting in ascending order.
Next, discard file_0_1 and repeat the preceding process to split file_0_0 based on the third highest bit (the 30th bit). Assume that the two resulting files are file_0_0_0, containing 0.5 billion numbers, and file_0_0_1, containing 2.5 billion numbers. Because the 0.5 billion numbers in file_0_0_0 are all smaller, the target becomes the 0.5 billionth number of file_0_0_1 after sorting.
This process can be repeated until the split files can be loaded directly into memory. You can then sort the numbers to quickly find the median.
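A sketch of one splitting step after the sign bit has been handled; reading and writing the intermediate files is omitted, and numbersOfCurrentFile stands for a hypothetical stream over the current file:
// Among the numbers of the current file, count how many have the given bit set to 0.
// If the rank k we need falls inside that group, keep the bit-0 file; otherwise keep the
// bit-1 file and subtract the group size from k.
static long countBitZero(Iterable<Integer> numbersOfCurrentFile, int bit) {
    long zeros = 0;
    for (int num : numbersOfCurrentFile) {
        if (((num >>> bit) & 1) == 0) {
            zeros++;
        }
    }
    return zeros;
}
// One iteration then updates the 1-based target rank k roughly as follows:
//   long zeros = countBitZero(currentFile, bit);
//   if (k <= zeros) { keep the bit-0 file } else { k -= zeros; keep the bit-1 file }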
Design a short domain name system to convert long URLs into short URLs
(1) Set the initial value of a number dispenser (sequence generator) to 0. Whenever a request to generate a short URL is received, increase the dispenser value and convert it to a string over the characters a-z, A-Z, and 0-9 (base 62). For example, when the first request is received, the dispenser value is 0 and the corresponding string is a. When the second request is received, the value becomes 1 and the corresponding string is b. When the 10,001st request is received, the value becomes 10,000 and the corresponding string is sBc.
(2) Concatenate the domain name of the short URL server with this base-62 string to form a short URL. Example: t.cn/sBc.
(3) Redirection: After the short URL is generated, store the mapping between the short URL and the long URL, namely sBc -> long URL. When a browser requests the short URL, the server looks up the original URL based on the path and performs a 302 redirect. The mappings can be stored in a key-value store such as Redis or Memcached.
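A sketch of the dispenser-value-to-short-code conversion; the exact ordering of the 62 characters is a design choice, and this one simply maps 0 to "a" and 1 to "b" as in the example:
// Convert the dispenser value into a short code over a-z, A-Z, 0-9 (62 characters).
static final char[] ALPHABET =
        "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789".toCharArray();
static String encode(long id) {
    if (id == 0) {
        return String.valueOf(ALPHABET[0]);
    }
    StringBuilder sb = new StringBuilder();
    while (id > 0) {
        sb.append(ALPHABET[(int) (id % 62)]);
        id /= 62;
    }
    return sb.reverse().toString();                // most significant digit first, e.g. t.cn/<code>
}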
Suppose a news report receives a vast number of comments. How would you design the comment read and write mechanisms?
Display newly posted comments to users on the frontend page immediately, and asynchronously send them to message queues to be persisted.
Implement read/write splitting, and periodically load the most-liked comments into the cache.
Solution to displaying the number of online users of a website
Use Redis data structures (for example, a set of online user IDs) to maintain statistics about online users and display the number of concurrent users of the website.
Assume there are 10 million query strings with a high duplication rate. After deduplication, the total number of unique strings does not exceed 3 million. With only 1 GB of memory, find the 10 most popular query strings. (The higher the duplication rate of a query string, the more users are querying it, and the more popular the string is.)
Since the total number of unique strings after deduplication is no more than 3 million, we can store all the strings and their occurrence counts in a hash map, which occupies at most 3 million × (255 + 4) bytes ≈ 777 MB (assuming each string takes up to 255 bytes and each count is a 4-byte integer). Obviously, this is well within the 1 GB memory limit.
First, traverse all the strings. If a string is not in the hash map, put it in the map with a value of 1; if it is already in the map, increase its value by 1. The time complexity of this step is O(N).
Then, traverse the map while maintaining a min-heap of 10 elements keyed by occurrence count. If a string's count is greater than the count at the heap top (the smallest in the heap), replace the heap top with that string and re-heapify.
After the traversal is completed, the 10 strings in the min-heap are the strings with the most occurrences. The time complexity of this step is O(N log 10).
If many strings share common prefixes, we can instead use a prefix tree (trie) to count occurrences, storing each string's count in the node where the string ends; a value of 0 means that exact string has not occurred.
Traverse all the strings, looking each one up in the prefix tree. If the string is already in the tree, increase its node value (its occurrence count) by 1; otherwise, create the nodes for the string and set the terminal node's value to 1.
Then, as above, use a min-heap to select the strings with the highest occurrence counts.
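A sketch of the prefix-tree counting idea, assuming for simplicity that the query strings consist of lowercase letters only:
// Prefix tree node: children for 'a'-'z' and a counter at the node where a string ends.
class TrieNode {
    TrieNode[] children = new TrieNode[26];
    int count;                                     // 0 means this exact string has not occurred
}
class TrieCounter {
    private final TrieNode root = new TrieNode();
    // Insert a string and increase its occurrence count.
    public void add(String word) {
        TrieNode node = root;
        for (char c : word.toCharArray()) {
            int idx = c - 'a';
            if (node.children[idx] == null) {
                node.children[idx] = new TrieNode();
            }
            node = node.children[idx];
        }
        node.count++;
    }
    public int countOf(String word) {
        TrieNode node = root;
        for (char c : word.toCharArray()) {
            node = node.children[c - 'a'];
            if (node == null) return 0;
        }
        return node.count;
    }
}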
Design an algorithm to split a red envelope (lucky money) amount randomly
Linear cutting method: make N-1 random cuts within the total amount, and the N resulting segments are the N envelope amounts; with this method, the sooner a user grabs an envelope, the more money the user tends to get.
Binary (double) mean method: each grab is drawn randomly from the range [0, remaining amount / remaining count × 2], which keeps the distribution relatively even.
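A sketch of the binary (double) mean method, working in cents to avoid floating-point issues; the class and method names are illustrative, and the last grab simply takes whatever remains:
import java.util.Random;
public class RedEnvelope {
    private static final Random RANDOM = new Random();
    // Double-average method: each grab is drawn from [1, remaining / count * 2] cents,
    // so the expected value of every grab stays roughly equal.
    static long nextGrab(long remainingCents, int remainingCount) {
        if (remainingCount == 1) {
            return remainingCents;                                  // the last user takes the rest
        }
        long max = Math.max(1, remainingCents / remainingCount * 2);
        long amount = 1 + (long) (RANDOM.nextDouble() * (max - 1));
        return Math.min(amount, remainingCents - (remainingCount - 1)); // leave at least 1 cent each
    }
}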
import java.util.Arrays;
public class QuickSort {
public static void swap(int[] arr, int i, int j) {
int tmp = arr[i];
arr[i] = arr[j];
arr[j] = tmp;
}
/* Regular quick sort */
public static void quickSort1(int[] arr, int L , int R) {
if (L > R) return;
int M = partition(arr, L, R);
quickSort1(arr, L, M - 1);
quickSort1(arr, M + 1, R);
}
public static int partition(int[] arr, int L, int R) {
if (L > R) return -1;
if (L == R) return L;
int lessEqual = L - 1;
int index = L;
while (index < R) {
if (arr[index] <= arr[R])
swap(arr, index, ++lessEqual);
index++;
}
swap(arr, ++lessEqual, R);
return lessEqual;
}
/* NetherlandsFlag */
public static void quickSort2(int[] arr, int L, int R) {
if (L > R) return;
int[] equalArea = netherlandsFlag(arr, L, R);
quickSort2(arr, L, equalArea[0] - 1);
quickSort2(arr, equalArea[1] + 1, R);
}
public static int[] netherlandsFlag(int[] arr, int L, int R) {
if (L > R) return new int[] { -1, -1 };
if (L == R) return new int[] { L, R };
int less = L - 1;
int more = R;
int index = L;
while (index < more) {
if (arr[index] == arr[R]) {
index++;
} else if (arr[index] < arr[R]) {
swap(arr, index++, ++less);
} else {
swap(arr, index, --more);
}
}
swap(arr, more, R);
return new int[] { less + 1, more };
}
// for test
public static void main(String[] args) {
int testTime = 1;
int maxSize = 10000000;
int maxValue = 100000;
boolean succeed = true;
long T1=0,T2=0;
for (int i = 0; i < testTime; i++) {
int[] arr1 = generateRandomArray(maxSize, maxValue);
int[] arr2 = copyArray(arr1);
int[] arr3 = copyArray(arr1);
// int[] arr1 = {9,8,7,6,5,4,3,2,1};
long t1 = System.currentTimeMillis();
quickSort1(arr1,0,arr1.length-1);
long t2 = System.currentTimeMillis();
quickSort2(arr2,0,arr2.length-1);
long t3 = System.currentTimeMillis();
T1 += (t2-t1);
T2 += (t3-t2);
Arrays.sort(arr3); // reference result used to verify both quick sorts
if (!isEqual(arr1, arr2) || !isEqual(arr2, arr3)) {
succeed = false;
break;
}
}
System.out.println(T1 + " " + T2);
System.out.println(succeed ? "Nice!" : "Oops!");
}
private static int[] generateRandomArray(int maxSize, int maxValue) {
int[] arr = new int[(int) ((maxSize + 1) * Math.random())];
for (int i = 0; i < arr.length; i++) {
arr[i] = (int) ((maxValue + 1) * Math.random())
- (int) (maxValue * Math.random());
}
return arr;
}
private static int[] copyArray(int[] arr) {
if (arr == null) return null;
int[] res = new int[arr.length];
for (int i = 0; i < arr.length; i++) {
res[i] = arr[i];
}
return res;
}
private static boolean isEqual(int[] arr1, int[] arr2) {
if ((arr1 == null && arr2 != null) || (arr1 != null && arr2 == null))
return false;
if (arr1 == null && arr2 == null)
return true;
if (arr1.length != arr2.length)
return false;
for (int i = 0; i < arr1.length; i++)
if (arr1[i] != arr2[i])
return false;
return true;
}
private static void printArray(int[] arr) {
if (arr == null)
return;
for (int i = 0; i < arr.length; i++)
System.out.print(arr[i] + " ");
System.out.println();
}
}
public static void merge(int[] arr, int L, int M, int R) {
int[] help = new int[R - L + 1];
int i = 0;
int p1 = L;
int p2 = M + 1;
while (p1 <= M && p2 <= R)
help[i++] = arr[p1] <= arr[p2] ? arr[p1++] : arr[p2++];
while (p1 <= M)
help[i++] = arr[p1++];
while (p2 <= R)
help[i++] = arr[p2++];
for (i = 0; i < help.length; i++)
arr[L + i] = help[i];
}
public static void mergeSort(int[] arr, int L, int R) {
if (L == R)
return;
int mid = L + ((R - L) >> 1);
mergeSort(arr, L, mid);
mergeSort(arr, mid + 1, R);
merge(arr, L, mid, R);
}
public static void main(String[] args) {
int[] arr = {9,8,7,6,5,4,3,2,1};
mergeSort(arr, 0, arr.length - 1);
printArray(arr);
}
// The extra space complexity of heap sort is O(1).
public static void heapSort(int[] arr) {
if (arr == null || arr.length < 2)
return;
for (int i = arr.length - 1; i >= 0; i--)
heapify(arr, i, arr.length);
int heapSize = arr.length;
swap(arr, 0, --heapSize);
// O(N*logN)
while (heapSize > 0) { // O(N)
heapify(arr, 0, heapSize); // O(logN)
swap(arr, 0, --heapSize); // O(1)
}
}
// Move the incoming value in the arr[index] position up.
public static void heapInsert(int[] arr, int index) {
while (arr[index] > arr[(index - 1) / 2]) {
swap(arr, index, (index - 1) / 2);
index = (index - 1) / 2;
}
}
// Determine whether the value in the arr[index] position can be moved down.
public static void heapify(int[] arr, int index, int heapSize) {
int left = index * 2 + 1; // The index of the left child.
while (left < heapSize) { // When there are still children below:
// Compare two children and assign the index to whoever has a larger value.
// 1) If there is only a left child, assign the index to the left child.
// 2) If there is a left child and a right child, and the right child's value is not greater than the left child's value, assign the index to the left child.
// 3) If there is a left child and a right child and the right child's value is greater than the left child's value, assign the index to the right child.
int largest = left+1 < heapSize && arr[left+1]> arr[left] ? left+1 : left;
// Compare the parent and the child with a larger value and assign the index to whoever has a larger value.
largest = arr[largest] > arr[index] ? largest : index;
if (largest == index)
break;
swap(arr, largest, index);
index = largest;
left = index * 2 + 1;
}
}
public static void swap(int[] arr, int i, int j) {
int tmp = arr[i];
arr[i] = arr[j];
arr[j] = tmp;
}
public static void main(String[] args) {
int[] arr1 = {9,8,7,6,5,4,3,2,1};
heapSort(arr1);
printArray(arr1);
}
public class Singleton {
private volatile static Singleton singleton;
private Singleton() {}
public static Singleton getSingleton() {
if (singleton == null) {
synchronized (Singleton.class) {
if (singleton == null) {
singleton = new Singleton();
}
}
}
return singleton;
}
}
// Based on LinkedHashMap
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Set;
public class LRUCache {
private LinkedHashMap<Integer,Integer> cache;
private int capacity; // The capacity.
public LRUCache(int capacity) {
cache = new LinkedHashMap<>(capacity);
this.capacity = capacity;
}
public int get(int key) {
// If the key does not exist in the cache, directly return.
if(!cache.containsKey(key)) {
return -1;
}
int res = cache.get(key);
cache.remove(key); // Remove the key from the linked list.
cache.put(key,res); // Put the node at the end of the linked list.
return res;
}
public void put(int key,int value) {
if(cache.containsKey(key)) {
cache.remove(key); // The key exists. Remove it from the current linked list.
}
if(capacity == cache.size()) {
// The cache is full. Delete the head position of the linked list.
Set<Integer> keySet = cache.keySet();
Iterator<Integer> iterator = keySet.iterator();
cache.remove(iterator.next());
}
cache.put(key,value); // Put the key and value at the end of the linked list.
}
}
// Implement a doubly linked list.
import java.util.HashMap;
import java.util.Map;
class LRUCache {
class DNode {
DNode prev;
DNode next;
int val;
int key;
}
Map<Integer, DNode> map = new HashMap<>();
DNode head, tail;
int cap;
public LRUCache(int capacity) {
head = new DNode();
tail = new DNode();
head.next = tail;
tail.prev = head;
cap = capacity;
}
public int get(int key) {
if (map.containsKey(key)) {
DNode node = map.get(key);
removeNode(node);
addToHead(node);
return node.val;
} else {
return -1;
}
}
public void put(int key, int value) {
if (map.containsKey(key)) {
DNode node = map.get(key);
node.val = value;
removeNode(node);
addToHead(node);
} else {
DNode newNode = new DNode();
newNode.val = value;
newNode.key = key;
addToHead(newNode);
map.put(key, newNode);
if (map.size() > cap) {
map.remove(tail.prev.key);
removeNode(tail.prev);
}
}
}
public void removeNode(DNode node) {
DNode prevNode = node.prev;
DNode nextNode = node.next;
prevNode.next = nextNode;
nextNode.prev = prevNode;
}
public void addToHead(DNode node) {
DNode firstNode = head.next;
head.next = node;
node.prev = head;
node.next = firstNode;
firstNode.prev = node;
}
}
package com.concurrent.pool;
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
public class MySelfThreadPool {
// The number of threads in the default thread pool.
private static final int WORK_NUM = 5;
// The default number of tasks to be processed.
private static final int TASK_NUM = 100;
private int workNum;// The number of threads.
private int taskNum;// The number of tasks.
private final Set<WorkThread> workThreads;// The collection used to save threads.
private final BlockingQueue<Runnable> taskQueue;// Save tasks in an ordered blocking queue.
public MySelfThreadPool() {
this(WORK_NUM, TASK_NUM);
}
public MySelfThreadPool(int workNum, int taskNum) {
if (workNum <= 0) workNum = WORK_NUM;
if (taskNum <= 0) taskNum = TASK_NUM;
taskQueue = new ArrayBlockingQueue<>(taskNum);
this.workNum = workNum;
this.taskNum = taskNum;
workThreads = new HashSet<>();
// Start a specific number of threads and obtain tasks from the queue for execution.
for (int i=0;i<workNum;i++) {
WorkThread workThread = new WorkThread("thread_" + i);
workThread.start();
workThreads.add(workThread);
}
}
public void execute(Runnable task) {
try {
taskQueue.put(task);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
public void destroy() {
System.out.println("ready close thread pool...");
if (workThreads == null || workThreads.isEmpty()) return ;
for (WorkThread workThread : workThreads) {
workThread.stopWork();
workThread = null;//help gc
}
workThreads.clear();
}
private class WorkThread extends Thread{
public WorkThread(String name) {
super();
setName(name);
}
@Override
public void run() {
while (!interrupted()) {
try {
Runnable runnable = taskQueue.take();// Obtain tasks.
if (runnable !=null) {
System.out.println(getName() + " ready to execute: " + runnable.toString());
runnable.run();// Run the tasks.
}
runnable = null;//help gc
} catch (Exception e) {
interrupt();
e.printStackTrace();
}
}
}
public void stopWork() {
interrupt();
}
}
}
package com.concurrent.pool;
public class TestMySelfThreadPool {
private static final int TASK_NUM = 50;// The number of tasks.
public static void main(String[] args) {
MySelfThreadPool myPool = new MySelfThreadPool(3,50);
for (int i=0;i<TASK_NUM;i++) {
myPool.execute(new MyTask("task_"+i));
}
}
static class MyTask implements Runnable{
private String name;
public MyTask(String name) {
this.name = name;
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
@Override
public void run() {
try {
Thread.sleep(1000);
} catch (InterruptedException e) {
e.printStackTrace();
}
System.out.println("task :"+name+" end...");
}
@Override
public String toString() {
return "name = "+name;
}
}
}
import java.util.ArrayList;
import java.util.List;
public class Storage {
private static int MAX_VALUE = 100;
private List<Object> list = new ArrayList<>();
public void produce(int num) {
synchronized (list) {
while (list.size() + num > MAX_VALUE) {
System.out.println("Unable to execute production tasks now");
try {
list.wait();
} catch (InterruptedException e) {
e.printStackTrace();
}
}
for (int i = 0; i < num; i++) {
list.add(new Object());
}
System.out.println("Number of produced products"+num+"Repository capacity"+list.size());
list.notifyAll();
}
}
public void consume(int num) {
synchronized (list) {
while (list.size() < num) {
System.out.println("Unable to execute consumption tasks now");
try {
list.wait();
} catch (InterruptedException e) {
e.printStackTrace();
}
}
for (int i = 0; i < num; i++) {
list.remove(0);
}
System.out.println("Number of consumed products"+num+"Repository capacity" + list.size());
list.notifyAll();
}
}
}
public class Producer extends Thread {
private int num;
private Storage storage;
public Producer(Storage storage) {
this.storage = storage;
}
public void setNum(int num) {
this.num = num;
}
public void run() {
storage.produce(this.num);
}
}
public class Customer extends Thread {
private int num;
private Storage storage;
public Customer(Storage storage) {
this.storage = storage;
}
public void setNum(int num) {
this.num = num;
}
public void run() {
storage.consume(this.num);
}
}
public class Test {
public static void main(String[] args) {
Storage storage = new Storage();
Producer p1 = new Producer(storage);
Producer p2 = new Producer(storage);
Producer p3 = new Producer(storage);
Customer c1 = new Customer(storage);
Customer c2 = new Customer(storage);
Customer c3 = new Customer(storage);
p1.setNum(10);
p2.setNum(20);
p3.setNum(80);
c1.setNum(50);
c2.setNum(20);
c3.setNum(20);
c1.start();
c2.start();
c3.start();
p1.start();
p2.start();
p3.start();
}
}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;
public class BlockQueue {
private List<Integer> container = new ArrayList<>();
private volatile int size;
private volatile int capacity;
private Lock lock = new ReentrantLock();
private final Condition isNull = lock.newCondition();
private final Condition isFull = lock.newCondition();
BlockQueue(int capacity) {
this.capacity = capacity;
}
public void add(int data) {
try {
lock.lock();
try {
while (size >= capacity) {
System.out.println("The blocking queue is full");
isFull.await();
}
} catch (Exception e) {
isFull.signal();
e.printStackTrace();
}
++size;
container.add(data);
isNull.signal();
} finally {
lock.unlock();
}
}
public int take() {
try {
lock.lock();
try {
while (size == 0) {
System.out.println("The blocking queue is empty");
isNull.await();
}
} catch (Exception e) {
isNull.signal();
e.printStackTrace();
}
--size;
int res = container.get(0);
container.remove(0);
isFull.signal();
return res;
} finally {
lock.unlock();
}
}
}
public static void main(String[] args) {
BlockQueue queue = new BlockQueue(5);
Thread t1 = new Thread(() -> {
for (int i = 0; i < 100; i++) {
queue.add(i);
System.out.println("Insert" + i);
try {
Thread.sleep(500);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
});
Thread t2 = new Thread(() -> {
for (; ; ) {
System.out.println("Consume"+queue.take());
try {
Thread.sleep(800);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
});
t1.start();
t2.start();
}
package com.demo.test;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;
public class syncPrinter implements Runnable{
// The number of prints.
private static final int PRINT_COUNT = 10;
private final ReentrantLock reentrantLock;
private final Condition thisCondtion;
private final Condition nextCondtion;
private final char printChar;
public syncPrinter(ReentrantLock reentrantLock, Condition thisCondtion, Condition nextCondition, char printChar) {
this.reentrantLock = reentrantLock;
this.nextCondtion = nextCondition;
this.thisCondtion = thisCondtion;
this.printChar = printChar;
}
@Override
public void run() {
// Obtain the print lock and enter the critical section.
reentrantLock.lock();
try {
// Print PRINT_COUNT times consecutively.
for (int i = 0; i < PRINT_COUNT; i++) {
// Print characters.
System.out.print(printChar);
// Use nextCondition to wake up the next thread.
// Because only one thread is waiting, either the signal() or signalAll() method can be used.
nextCondtion.signal();
// If it is not the last time, use thisCondition for the thread to wait to be woken up.
// A conditional statement must be added. Otherwise, after 10 times of printing, a deadlock will occur.
if (i < PRINT_COUNT - 1) {
try {
// The current thread releases the lock and waits to be woken up next time.
thisCondtion.await();
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
} finally {
reentrantLock.unlock();
}
}
public static void main(String[] args) throws InterruptedException {
ReentrantLock lock = new ReentrantLock();
Condition conditionA = lock.newCondition();
Condition conditionB = lock.newCondition();
Condition conditionC = lock.newCondition();
Thread printA = new Thread(new syncPrinter(lock, conditionA, conditionB,'A'));
Thread printB = new Thread(new syncPrinter(lock, conditionB, conditionC,'B'));
Thread printC = new Thread(new syncPrinter(lock, conditionC, conditionA,'C'));
printA.start();
Thread.sleep(100);
printB.start();
Thread.sleep(100);
printC.start();
}
}
// Blocking queue version
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
public class FooBar {
private int n;
private BlockingQueue<Integer> bar = new LinkedBlockingQueue<>(1);
private BlockingQueue<Integer> foo = new LinkedBlockingQueue<>(1);
public FooBar(int n) {
this.n = n;
}
public void foo(Runnable printFoo) throws InterruptedException {
for (int i = 0; i < n; i++) {
foo.put(i);
printFoo.run();
bar.put(i);
}
}
public void bar(Runnable printBar) throws InterruptedException {
for (int i = 0; i < n; i++) {
bar.take();
printBar.run();
foo.take();
}
}
}
// Use CyclicBarrier to control the sequence.
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;
class FooBar6 {
private int n;
public FooBar6(int n) {
this.n = n;
}
CyclicBarrier cb = new CyclicBarrier(2);
volatile boolean fin = true;
public void foo(Runnable printFoo) throws InterruptedException {
for (int i = 0; i < n; i++) {
while(!fin);
printFoo.run();
fin = false;
try {
cb.await();
} catch (BrokenBarrierException e) {}
}
}
public void bar(Runnable printBar) throws InterruptedException {
for (int i = 0; i < n; i++) {
try {
cb.await();
} catch (BrokenBarrierException e) {}
printBar.run();
fin = true;
}
}
}
// Spin and release CPU.
class FooBar5 {
private int n;
public FooBar5(int n) {
this.n = n;
}
volatile boolean permitFoo = true;
public void foo(Runnable printFoo) throws InterruptedException {
for (int i = 0; i < n; ) {
if(permitFoo) {
printFoo.run();
i++;
permitFoo = false;
}else{
Thread.yield();
}
}
}
public void bar(Runnable printBar) throws InterruptedException {
for (int i = 0; i < n; ) {
if(!permitFoo) {
printBar.run();
i++;
permitFoo = true;
}else{
Thread.yield();
}
}
}
}
// Reentrant lock + Condition
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;
class FooBar4 {
private int n;
public FooBar4(int n) {
this.n = n;
}
Lock lock = new ReentrantLock(true);
private final Condition foo = lock.newCondition();
volatile boolean flag = true;
public void foo(Runnable printFoo) throws InterruptedException {
for (int i = 0; i < n; i++) {
lock.lock();
try {
while(!flag) {
foo.await();
}
printFoo.run();
flag = false;
foo.signal();
}finally {
lock.unlock();
}
}
}
public void bar(Runnable printBar) throws InterruptedException {
for (int i = 0; i < n;i++) {
lock.lock();
try {
while(flag) {
foo.await();
}
printBar.run();
flag = true;
foo.signal();
}finally {
lock.unlock();
}
}
}
}
// Synchronized keyword + flag + wake-up
class FooBar3 {
private int n;
// The flag bit, which controls the execution sequence. If it is true, execute printFoo; if it is false, execute printBar.
private volatile boolean type = true;
private final Object foo= new Object(); // The lock flag.
public FooBar3(int n) {
this.n = n;
}
public void foo(Runnable printFoo) throws InterruptedException {
for (int i = 0; i < n; i++) {
synchronized (foo) {
while(!type){
foo.wait();
}
printFoo.run();
type = false;
foo.notifyAll();
}
}
}
public void bar(Runnable printBar) throws InterruptedException {
for (int i = 0; i < n; i++) {
synchronized (foo) {
while(type){
foo.wait();
}
printBar.run();
type = true;
foo.notifyAll();
}
}
}
}
// The semaphore, suitable for sequence control.
import java.util.concurrent.Semaphore;
class FooBar2 {
private int n;
private Semaphore foo = new Semaphore(1);
private Semaphore bar = new Semaphore(0);
public FooBar2(int n) {
this.n = n;
}
public void foo(Runnable printFoo) throws InterruptedException {
for (int i = 0; i < n; i++) {
foo.acquire();
printFoo.run();
bar.release();
}
}
public void bar(Runnable printBar) throws InterruptedException {
for (int i = 0; i < n; i++) {
bar.acquire();
printBar.run();
foo.release();
}
}
}
Disclaimer: The views expressed herein are for reference only and don't necessarily represent the official views of Alibaba Cloud.