
Interview Questions We've Learned Over the Years: Fundamentals of Networking and Operating System

This article is part of a series focusing on interview questions for technicians, with a specific emphasis on the fundamentals of networking and the operating system.


By Alibaba Technology and Quality

Fundamentals of Networking

TCP Three-way Handshake

The process of a three-way handshake:

The client sends a SYN packet, enters the SYN_SENT state, and waits for the acknowledgment from the server. This is the first handshake.

The server sends a SYN/ACK packet and enters the SYN_RCVD state. This is the second handshake.

The client sends an ACK packet and enters the ESTABLISHED state; once the server receives the ACK, it enters the ESTABLISHED state as well. This is the third handshake.

Why three times:

The main purpose is to establish a reliable communication channel by confirming that both the client and the server are able to send and receive data.

Why not twice:

  1. It prevents an old, invalid connection request segment that has lingered in the network from reaching the server later and creating a redundant connection, which would waste resources.
  2. Two handshakes can only confirm that one direction of the connection works. (To achieve reliable transmission, both TCP endpoints must maintain a sequence number so that each side knows which of its packets the other has received. The three-way handshake is the minimum needed for both sides to announce their initial sequence numbers and to confirm that the other side has received them. With only two handshakes, at most the initiator's initial sequence number could be confirmed; the sequence number chosen by the other side could not.) A minimal Java sketch follows.
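In practice, the handshake is carried out entirely by the operating system's TCP stack. In the hedged Java sketch below (the port 9090 and localhost addresses are arbitrary choices for illustration), new Socket(host, port) returns only after the three-way handshake completes, and ServerSocket.accept() hands the application a connection that is already in the ESTABLISHED state:

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class HandshakeDemo {
    public static void main(String[] args) throws IOException {
        // The kernel performs the three-way handshake; the application never sees SYN/ACK.
        try (ServerSocket server = new ServerSocket(9090)) {
            new Thread(() -> {
                try (Socket client = new Socket("localhost", 9090)) {
                    // The constructor returns only after SYN -> SYN/ACK -> ACK completes,
                    // i.e., the connection is already in the ESTABLISHED state.
                    System.out.println("Client connected: " + client.isConnected());
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }).start();

            try (Socket accepted = server.accept()) {
                // accept() hands the application a fully established connection.
                System.out.println("Server accepted: " + accepted.getRemoteSocketAddress());
            }
        }
    }
}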

TCP Four-way Handshake

The process of a four-way handshake:

The client sends a FIN packet to the server to close its side of the connection and enters the FIN_WAIT_1 state. This is the first handshake.

After receiving this FIN, the server sends back an ACK packet whose acknowledgment number is 1 greater than the sequence number of the received FIN, and the server enters the CLOSE_WAIT state. This is the second handshake.

When the server finishes sending its remaining data, it sends a FIN packet to the client to close its side of the connection and enters the LAST_ACK state (the client, having received the earlier ACK, is in the FIN_WAIT_2 state). This is the third handshake.

After receiving this FIN, the client sends back an ACK packet whose acknowledgment number is 1 greater than the sequence number of the received FIN, and the client enters the TIME_WAIT state. This is the fourth handshake.

Why four times:

TCP is full-duplex, so each direction of data transfer must be closed independently: after acknowledging the client's FIN, the server may still have data of its own to send, and it only sends its FIN once that data has been transmitted. This is why four steps are needed.

CLOSE_WAIT:

This state means the passive closer has acknowledged the peer's FIN and is waiting for the local application to finish its work and close the connection.

TIME_WAIT:

TIME_WAIT handles packet loss and network instability: it ensures that the final ACK reaches the peer (and can be retransmitted if lost) and that delayed segments from the old connection expire before the same port pair is reused, so the connected party can close its side of the connection within this time window.

How to view the number of links in the TIME_WAIT state?

netstat -an | grep TIME_WAIT | wc -l counts the number of connections in the TIME_WAIT state.

Why is there an overwhelming number of connections in the TIME_WAIT state? How to solve it?

Possible cause: on a TCP server handling high-concurrency, short-lived connections, the server closes each connection immediately after processing the request, and the side that closes first is the side left in TIME_WAIT.

Solution: consider the load-balancer tier: arrange for connections to be closed on the load balancer rather than on the web server, so the TIME_WAIT states accumulate on the balancer, and reuse connections (keep-alive) to reduce the number of short-lived connections.

1. OSI and TCP/IP Models

The OSI model consists of seven layers: physical layer, data link layer, network layer, transport layer, session layer, presentation layer, and application layer.

The TCP/IP model consists of five layers: physical layer, data link layer, network layer, transport layer, and application layer.

2. Common Network Services

Application layer: HTTP, SMTP, DNS, and FTP

Transport layer: TCP and UDP

Network layer: ICMP, IP, Router, and Firewall

Data link layer: Network Interface Controller, Bridge, and Switch

Physical layer: Repeater and Hub

3. Differences and Scenarios Between TCP and UDP

| Type | Features | Performance | Applied scenarios | Header length |
|------|----------|-------------|-------------------|---------------|
| TCP | Connection-oriented, reliable byte stream | Slower transmission, more resources required | File and email transfer | 20-60 bytes |
| UDP | Connectionless, unreliable datagram stream | Fast transmission, few resources required | Voice, video, and live streaming | 8 bytes |

UDP-based protocols: RIP, DNS, and SNMP

TCP-based protocols: HTTP, FTP, and SMTP

4. Congestion Control of the TCP Sliding Window

TCP: uses mechanisms such as data segmentation, packet numbering, checksums, flow control, congestion control, and timeout retransmission to ensure reliable data transmission.

Purpose: To prevent excessive data from being injected into the network and prevent overloads on routers and links in the network.

Process: TCP maintains a congestion window whose size changes dynamically with network conditions. The slow start and congestion avoidance algorithms (illustrated below) adjust this window to relieve network congestion.
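The following toy simulation (not a real TCP implementation; the initial ssthresh of 16 MSS and the loss at round 8 are assumed values) illustrates how the congestion window grows exponentially during slow start, linearly during congestion avoidance, and collapses after a Tahoe-style timeout:

public class CongestionWindowSketch {
    public static void main(String[] args) {
        int cwnd = 1;       // congestion window, in MSS units
        int ssthresh = 16;  // slow-start threshold (assumed initial value)

        for (int round = 1; round <= 12; round++) {
            System.out.printf("RTT %2d: cwnd=%d%n", round, cwnd);
            if (round == 8) {            // pretend a timeout-detected loss occurs here
                ssthresh = cwnd / 2;     // multiplicative decrease of the threshold
                cwnd = 1;                // Tahoe-style restart from slow start
            } else if (cwnd < ssthresh) {
                cwnd = Math.min(cwnd * 2, ssthresh); // slow start: exponential growth per RTT
            } else {
                cwnd += 1;               // congestion avoidance: additive increase
            }
        }
    }
}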

5. Causes and Solutions of TCP Sticking Packets

TCP packet sticking means that several pieces of data sent separately by the sender are merged into one chunk by the time the receiver reads them.

Sender-side cause:

TCP uses the Nagle algorithm by default (mainly to reduce the number of segments in the network): it collects multiple small writes and sends them together once an acknowledgment arrives, which can merge messages on the sender side.

Receiver-side cause:

TCP stores received packets in the receive buffer. If packets arrive in the buffer faster than the application reads them out, several packets accumulate there, and the application may read multiple messages merged together.

Solution: the root cause is that the receiving peer cannot distinguish the boundaries between messages, so the fix is to provide a boundary, for example:

Send fixed-length packets. The size of each message is the same, and the receiver will regard it as a message as long as the received data is accumulated to a fixed-length value.

Add a delimiter such as \r\n to the end of each packet, as the FTP protocol does. The problem is that if the data content also contains \r\n, it will be misjudged as the boundary of a message.

Prefix each packet with its length: the packet header is a fixed 4 bytes indicating the length of the packet body. The receiving peer first reads the length field and then reads exactly that many bytes as the body (see the sketch below).
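As a minimal sketch of the third approach (length-prefixed framing; the 4-byte header matches the description above), two messages written into one byte stream can still be separated cleanly on the receiving side:

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class LengthPrefixFraming {
    // Write one message: a fixed 4-byte length header followed by the body.
    static void writeFrame(DataOutputStream out, String msg) throws IOException {
        byte[] body = msg.getBytes(StandardCharsets.UTF_8);
        out.writeInt(body.length); // 4-byte header carrying the body length
        out.write(body);
    }

    // Read one message: first the 4-byte length, then exactly that many bytes.
    static String readFrame(DataInputStream in) throws IOException {
        int len = in.readInt();
        byte[] body = new byte[len];
        in.readFully(body);        // blocks until the whole body has arrived
        return new String(body, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buffer);
        writeFrame(out, "hello");
        writeFrame(out, "world");  // two messages "stuck" in one byte stream

        DataInputStream in = new DataInputStream(new ByteArrayInputStream(buffer.toByteArray()));
        System.out.println(readFrame(in)); // hello
        System.out.println(readFrame(in)); // world
    }
}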

6. TCP and UDP Segment Formats

TCP segment format:

Source Port and Destination Port: identify the sending and receiving application processes. These two values, together with the source and destination IP addresses in the IP header, uniquely identify a TCP connection.

Sequence Number: identifies the byte stream sent from the TCP sender to the TCP receiver; it represents the first data octet in this segment. If you think of the byte stream as a one-way flow between two applications, TCP numbers each byte with a sequence number. The sequence number is a 32-bit unsigned integer that wraps back to 0 after reaching 2^32 - 1.

When a new connection is established, the SYN flag changes to 1. The sequence number field contains the initial sequence number (ISN) of the connection selected by this host. The sequence number of the first data octet to be sent by the host is ISN+1 because the SYN flag consumes a sequence number.

Acknowledgment Number: If the ACK control bit is set, this field contains the value of the next sequence number that the sender of this segment is expecting. Therefore, the acknowledgment number should be the sequence number of the last successfully received data byte plus 1. The acknowledgment number is valid only when the ACK flag is 1. There is no cost to sending an ACK because the 32-bit acknowledgment number, like the ACK flag, is always part of the TCP header. Therefore, once a connection is established, this field is always set and the ACK flag is always set to 1. TCP provides full-duplex services for the application layer. This means that data can be transmitted independently in both directions. Therefore, each end of the connection must maintain the sequence number of the transmitted data in each direction.

Header Length: The length of the header in 32-bit words. This value is required because the length of the Options field is variable. This field takes up 4 bits, so TCP has a maximum of 60 bytes in the header. However, without the Options field, the normal length is 20 bytes.

Control Flags: There are 6 flag bits in the TCP header. Several of them can be set to 1 at the same time:

  • URG: the urgent pointer is valid
  • ACK: the acknowledgment number is valid
  • PSH: the receiver should pass this segment to the application layer as soon as possible
  • RST: resets the connection
  • SYN: synchronizes sequence numbers to initiate a connection
  • FIN: the sender has finished sending data

Window: Flow control for TCP is provided by the declared window size at each end of the connection. Window is the number of data octets beginning with the one indicated in the acknowledgment field which the sender of this segment is willing to accept. It is a 16-bit field, so the maximum value is 65535 bytes.

Checksum: Checksum covers the entire TCP segment: TCP header and TCP data. This is a mandatory field that must be calculated and stored by the sending end and verified by the receiving end.

Urgent Pointer: valid only when the URG flag is set to 1. The urgent pointer is a positive offset that, added to the value in the sequence number field, yields the sequence number of the last byte of urgent data. TCP's urgent mode is a way for the sending end to deliver urgent data to the other end.

Options: The most common Options field is the maximum size of the segment, also known as the MSS (Maximum Segment Size). Each connected party typically indicates this option in the first segment of the communication (the segment for which the SYN flag is set for establishing the connection). It indicates the maximum length of a segment that can be received at the local end.
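To make the field layout above concrete, here is a hedged Java sketch that packs and then unpacks a fake 20-byte TCP header. The port, sequence, and window values are arbitrary, and the checksum is left at zero, so this is purely illustrative, not a valid on-wire segment:

import java.nio.ByteBuffer;

public class TcpHeaderSketch {
    public static void main(String[] args) {
        // Build a fake 20-byte TCP header (no options) to show the field layout.
        ByteBuffer header = ByteBuffer.allocate(20);
        header.putShort((short) 12345);      // source port
        header.putShort((short) 80);         // destination port
        header.putInt(1000);                 // sequence number
        header.putInt(2000);                 // acknowledgment number
        header.put((byte) (5 << 4));         // header length = 5 words (20 bytes)
        header.put((byte) 0b00010010);       // flags: SYN + ACK set
        header.putShort((short) 65535);      // window size
        header.putShort((short) 0);          // checksum (left zero in this sketch)
        header.putShort((short) 0);          // urgent pointer
        header.flip();

        System.out.println("src port:  " + (header.getShort() & 0xFFFF));
        System.out.println("dst port:  " + (header.getShort() & 0xFFFF));
        System.out.println("seq:       " + (header.getInt() & 0xFFFFFFFFL));
        System.out.println("ack:       " + (header.getInt() & 0xFFFFFFFFL));
        int lenByte = header.get() & 0xFF;
        System.out.println("hdr bytes: " + (lenByte >> 4) * 4);
        int flags = header.get() & 0xFF;    // FIN=0x01, SYN=0x02, ..., ACK=0x10
        System.out.println("SYN=" + ((flags >> 1) & 1) + " ACK=" + ((flags >> 4) & 1));
    }
}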

UDP segment format:

Port Number: Used to indicate the sending and receiving processes. Since the IP layer has assigned IP datagrams to TCP or UDP (according to the value of the protocol field in the IP header), the TCP port number is viewed by TCP, and the UDP port number is viewed by UDP. The TCP port number is independent of the UDP port number.

Length: The UDP Length field is the length in octets of the UDP header plus the UDP data. Its minimum value is 8 bytes (sending a UDP datagram with 0 bytes of data is allowed).

Checksum: A UDP checksum is an end-to-end checksum. It is calculated by the sender and then verified by the receiver. The purpose is to discover any changes to the UDP header and data from the sender to the receiver.

IP datagram format: The normal IP header is 20 bytes long unless it contains options.

Version (4 bits): The current protocol version number is 4, so IP is often called IPv4.

Header Length (4 bits): The header length refers to the number of 32-bit words in the header, including any options. Since it is a 4-bit field, the maximum header length is 60 bytes.

Type of Service (TOS): This field contains a 3-bit precedence subfield (now ignored), a 4-bit TOS subfield, and 1 unused bit that must be 0. The 4 TOS bits stand for minimize delay, maximize throughput, maximize reliability, and minimize cost; at most one of them can be set. If all 4 bits are 0, the datagram requests general service.

Total Length: The total length field is the length of the entire IP datagram in bytes. Using the header length and total length fields, the start position and length of the data content in the IP datagram are known. Since this field is 16 bits, the maximum IP datagram can be 65535 bytes in length. When a datagram is fragmented, the value of this field also changes.

Identification: The identification field uniquely identifies each datagram sent by the host. Its value is usually incremented by 1 for every datagram sent.

Time to Live (TTL): This field sets the maximum number of routers the datagram may pass through, which bounds the datagram's lifetime. The initial TTL is set by the source host (usually 32 or 64), and each router that handles the datagram decrements it by 1. When the field reaches 0, the datagram is discarded and an ICMP message is sent to notify the source host.

Header Checksum: This field is the checksum code calculated from the IP header. It does not calculate the data after the header. ICMP, IGMP, UDP, and TCP all contain a checksum code covering both the header and the data in their respective headers.

Ethernet segment format:

Destination Address and Source Address: The hardware address of the network interface controller (also called MAC address), which is 48 bits in length and is solidified when the controller leaves the factory.

Data: The data field of an Ethernet frame must be at least 46 bytes and at most 1500 bytes long. ARP and RARP packets are shorter than 46 bytes, so padding bytes must be appended. The 1500-byte maximum is called the Ethernet maximum transmission unit (MTU); different network types have different MTUs. If a packet is routed from Ethernet onto a dial-up link and is larger than the dial-up link's MTU, it must be fragmented. The output of the ifconfig command also shows "MTU:1500". Note that the MTU refers to the maximum payload length of a data frame, excluding the frame header.

HTTP

1. HTTP/1.0/1.1/2.0

HTTP/1.0: The server disconnects the TCP connection immediately after processing (connectionless). It does not track each client or record past requests (stateless).

HTTP/1.1: Persistent connections (keep-alive) avoid the overhead of repeatedly establishing and releasing connections. Content-Length can be used to determine whether all of the requested data has been received.

HTTP/2.0: Introduces binary data frames and streams. Each frame carries a stream identifier, so frames of different requests and responses can be interleaved on one connection and the two sides can effectively transmit in parallel.

The main differences between HTTP/1.0 and HTTP/1.1 are as follows:

  1. Caching: HTTP/1.1 adds more caching control policies such as Entity tag and If-Match.
  2. Optimization of network connection: HTTP/1.1 supports resumable upload.
  3. Increase in error status codes: HTTP/1.1 adds 24 new status response codes, and the richer error codes make each state clearer.
  4. Host header: HTTP/1.1 supports the Host header field, so the IP address alone no longer identifies the request target and multiple virtual hosts can share one IP address.
  5. Persistent connections: Reduce the consumption and delay of establishing and closing connections.

The main differences between HTTP/1.1 and HTTP/2.0 are as follows:

  1. New transmission format: HTTP/2.0 uses a binary format, while HTTP/1.x uses a textual format.
  2. Multiplexing: HTTP/2.0 allows connection sharing, so different requests can be transmitted over the same connection at the same time; each frame carries a stream ID that is used to reassemble it with the correct request (see the example below).
  3. Header compression: because HTTP/1.x headers carry a large amount of information and must be transmitted repeatedly, HTTP/2.0 uses header compression (HPACK) to reduce the size of the headers that need to be transmitted.
  4. Server push: similar to Google's SPDY (the experimental protocol that HTTP/2.0 grew out of).
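Since Java 11, the standard java.net.http.HttpClient can negotiate HTTP/2.0 and fall back to HTTP/1.1, which makes it easy to check which version a server actually speaks (the URL below is a placeholder):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class Http2Demo {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_2)   // prefer HTTP/2, fall back to 1.1
                .build();

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://www.example.com"))
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // version() reports which protocol was actually negotiated.
        System.out.println("Protocol: " + response.version());
        System.out.println("Status:   " + response.statusCode());
    }
}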

2. Differences Between HTTP and HTTPS

Differences between HTTP and HTTPS are as follows:

| HTTP | HTTPS |
|------|-------|
| Default port 80 | Default port 443 |
| Plain-text, unencrypted transmission; poor security | SSL/TLS-encrypted transmission; better security |
| Fast response, low resource consumption | Slower response, higher resource consumption; requires a CA certificate |

The process of establishing an HTTPS connection:

  1. First, the client sends a request to the server.
  2. The server sends an SSL Certificate to the client, including the issuer of the certificate, the validity period, the owner, the signature, and the public key.
  3. The client verifies the certificate's authenticity, then uses the public key it contains to encrypt the chosen symmetric algorithm and the symmetric key, and sends the result to the server.
  4. The server decrypts the message with its private key and sends the client a confirmation message encrypted with the symmetric key.
  5. The client and the server then use the symmetric key for information transmission.

Symmetric encryption algorithm:

Both parties hold the same key and the encryption speed is fast. Typical symmetric encryption algorithms include DES and AES.

Asymmetric encryption algorithm:

Keys appear in pairs (the private key and the public key). The private key is kept secret by the owner and not transmitted in the network, while the public key can be shared. Compared with symmetric encryption, the speed of asymmetric encryption is slower. The typical asymmetric encryption algorithms include RSA and DSA.
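The key exchange described above can be sketched with the standard Java Cryptography Architecture: generate a random AES session key, encrypt it with the server's RSA public key, and then use the AES key for the bulk traffic. This is a simplified illustration, not real TLS (which adds certificate validation, authenticated cipher modes, and more):

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;

public class HybridEncryptionSketch {
    public static void main(String[] args) throws Exception {
        // Server side: an RSA key pair (the public key would travel in the certificate).
        KeyPairGenerator rsaGen = KeyPairGenerator.getInstance("RSA");
        rsaGen.initialize(2048);
        KeyPair rsaPair = rsaGen.generateKeyPair();

        // Client side: generate a random AES session key.
        KeyGenerator aesGen = KeyGenerator.getInstance("AES");
        aesGen.init(128);
        SecretKey sessionKey = aesGen.generateKey();

        // Client encrypts the session key with the server's public key.
        Cipher rsa = Cipher.getInstance("RSA");
        rsa.init(Cipher.ENCRYPT_MODE, rsaPair.getPublic());
        byte[] wrappedKey = rsa.doFinal(sessionKey.getEncoded());

        // Server decrypts the session key with its private key.
        rsa.init(Cipher.DECRYPT_MODE, rsaPair.getPrivate());
        SecretKey recovered = new SecretKeySpec(rsa.doFinal(wrappedKey), "AES");

        // Both sides now share the AES key and use it for the actual traffic.
        Cipher aes = Cipher.getInstance("AES");
        aes.init(Cipher.ENCRYPT_MODE, recovered);
        byte[] cipherText = aes.doFinal("hello over TLS-ish".getBytes(StandardCharsets.UTF_8));
        System.out.println("Encrypted " + cipherText.length + " bytes with the shared AES key");
    }
}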

3. Differences Between GET and POST Request Methods

HTTP request methods:

| Method | Description |
|--------|-------------|
| GET | Requests a specific resource; queries data and returns an entity |
| POST | Submits data to the specified resource for processing; may create new resources or modify existing ones |
| PUT | Uploads new content to the server |
| HEAD | Similar to GET, but the response returns only the headers, with no body |
| DELETE | Requests that the server delete the specified resource |
| OPTIONS | Sends a request to the server to test the server's functionality |
| TRACE | Echoes the received request back, for testing or diagnostics |
| CONNECT | Reserved for proxy servers; converts the connection into a tunnel (HTTP/1.1) |

Differences between GET and POST:

| | GET | POST |
|---|-----|------|
| Visibility | Data is visible to everyone in the URL | Data is not displayed in the URL |
| Security | Less secure, because the data sent is part of the URL | More secure, because parameters are not saved in browser history or web server logs |
| Data length | Limited, commonly to about 2 KB | Unlimited |
| Encoding type | application/x-www-form-urlencoded | multipart/form-data |
| Caching | Can be cached | Cannot be cached |
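A hedged sketch of the difference in practice, using the standard HttpURLConnection API (the URLs are placeholders): a GET carries its parameters in the URL, while a POST writes them into the request body after setDoOutput(true):

import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class GetVsPost {
    public static void main(String[] args) throws IOException {
        // GET: parameters travel in the URL itself.
        URL getUrl = new URL("https://www.example.com/search?q=tcp");
        HttpURLConnection get = (HttpURLConnection) getUrl.openConnection();
        get.setRequestMethod("GET");
        System.out.println("GET status: " + get.getResponseCode());
        get.disconnect();

        // POST: parameters travel in the request body, not the URL.
        URL postUrl = new URL("https://www.example.com/search");
        HttpURLConnection post = (HttpURLConnection) postUrl.openConnection();
        post.setRequestMethod("POST");
        post.setDoOutput(true); // required before writing a request body
        post.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        try (OutputStream body = post.getOutputStream()) {
            body.write("q=tcp".getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("POST status: " + post.getResponseCode());
        post.disconnect();
    }
}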

4. Common HTTP Response Status Code

  • 100: Continue --- Continue. The client should continue the request.
  • 200: OK --- The request succeeds. It is generally used for GET and POST requests.
  • 301: Moved Permanently --- Permanent redirect.
  • 302: Found --- Temporary redirect.
  • 400: Bad Request --- The server cannot understand it because the syntax of the client request is wrong.
  • 403: Forbidden --- The server understands the request from the client, but still refuses to fulfill it.
  • 404: Not Found --- The server cannot find the target resource (web page) requested by the client.
  • 500: Internal Server Error --- The server cannot fulfill the request due to internal errors.
  • 502: Bad Gateway --- The server receives an invalid response when working as a gateway or proxy server to process the request.

5. Differences Between Redirect and Forward

Redirect:

The address bar changes.

Redirect can access other sites (servers).

Redirect includes two requests. You cannot use the request object to share data.

Forward:

The address bar remains unchanged.

Forward can only access resources on the current server.

Forward includes one request. You can use the request object to share data.

6. Differences Between Cookie and Session

Both cookies and sessions are used to track the identity of the browser user, but there are differences between the two:

Cookies are saved on the client side or browser, whereas sessions are saved on the server side.

Cookies are not very secure: an attacker can inspect and tamper with cookies stored locally. For sensitive data, sessions should be used.

Cookies are usually used to save the user's information, while sessions are mainly used to record the user's state through the server.

The process of typing a URL in the browser

Process: DNS resolves the domain name, the browser initiates a TCP connection, the browser sends the HTTP request, the server processes the request and returns an HTTP response, and the browser renders the content.

| Procedure | Protocols used |
|-----------|----------------|
| 1. The browser checks its caches for a DNS entry to find the corresponding IP address (browser cache, router cache, DNS cache) | DNS: obtains the corresponding IP address |
| 2. The browser initiates a TCP connection to that IP address | TCP: establishes a connection with the server |
| 3. The browser sends the HTTP request to the server | HTTP: sends the request |
| 4. The server responds to the HTTP request | HTTP: returns the response |
| 5. The browser renders the content | |

Fundamentals of Operating System

Differences between process and thread

Process: the minimum unit of resource allocation. One process can contain many threads, which share the process's heap and method area but have their own stacks and program counter (PC) registers.

Thread: the minimum unit of task scheduling and execution. Threads running concurrently may contend for resources and trigger context switches.

Coroutine: more lightweight than a thread. Just as one process can contain many threads, one thread can contain many coroutines.
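A small Java example of the sharing rules above: an object on the heap is visible to all threads of the process, while local variables live on each thread's private stack:

import java.util.concurrent.atomic.AtomicInteger;

public class SharedHeapDemo {
    // Lives on the heap: visible to every thread in the process.
    private static final AtomicInteger sharedCounter = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            int localCount = 0; // lives on each thread's own stack
            for (int i = 0; i < 1000; i++) {
                sharedCounter.incrementAndGet(); // contended shared state
                localCount++;                    // private, no contention
            }
            System.out.println(Thread.currentThread().getName() + " local=" + localCount);
        };

        Thread t1 = new Thread(task, "t1");
        Thread t2 = new Thread(task, "t2");
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println("shared=" + sharedCounter.get()); // always 2000
    }
}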

1. Inter-Process Communication (IPC)

Pipe:

Anonymous pipes are used between related processes, while named pipes can be used between unrelated processes. Pipes deliver data in FIFO order and are half-duplex (data flows in one direction).

Signals:

Signals are a relatively complex communication method. A user can run the kill command to send a signal to another process.

Message queue:

Message queues overcome the shortcomings of the other mechanisms: signals carry very little information, while pipes carry only unformatted byte streams and have limited buffer sizes.

Shared memory:

• Multiple processes can directly read and write data from the shared memory region, which is the fastest method available for IPC. It is designed for the inefficiency of other communication mechanisms.

• Because multiple processes share the same memory, it is necessary to rely on some synchronization mechanisms (such as semaphores) to achieve synchronization and mutual exclusion between processes.

Semaphores:

Semaphores are counters used by multiple processes to coordinate access to shared data. This communication method is mainly used to solve synchronization problems and avoid race conditions.

Sockets:

Simply put, a socket is an agreement between the two communicating parties, and the communication process is completed by calling the relevant socket functions.
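Java programs cannot create OS-level anonymous pipes directly, but java.io's PipedOutputStream/PipedInputStream pair follows the same model (one-way, FIFO) between threads, which makes a convenient illustration of the pipe idea (requires Java 9+ for readAllBytes and the try-with-resources form used here):

import java.io.IOException;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;
import java.nio.charset.StandardCharsets;

public class PipeDemo {
    public static void main(String[] args) throws IOException {
        PipedOutputStream writeEnd = new PipedOutputStream();
        PipedInputStream readEnd = new PipedInputStream(writeEnd);

        // Producer thread: writes into the pipe (one-way, FIFO).
        new Thread(() -> {
            try (writeEnd) {
                writeEnd.write("message through a pipe".getBytes(StandardCharsets.UTF_8));
            } catch (IOException e) {
                e.printStackTrace();
            }
        }).start();

        // Consumer: reads bytes in the same order they were written.
        try (readEnd) {
            byte[] buf = readEnd.readAllBytes();
            System.out.println(new String(buf, StandardCharsets.UTF_8));
        }
    }
}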

2. User Mode and Kernel Mode

User mode: allows only limited access to memory and all applications operate in user mode.

Kernel mode: the operating system runs in kernel mode, where the CPU can access all memory and peripheral devices.

Necessity for user mode and kernel mode:

Restrict what different programs can access, preventing them from reading memory that belongs to other programs, or from grabbing data from peripheral devices and sending it to the network.

There are three ways to switch from the user mode to the kernel mode:

a. System calls

System calls are made actively by the user program; at their core they use an interrupt that the operating system reserves for user programs, such as the int 0x80 interrupt on Linux.

b. Exceptions

When the CPU is executing a program in user mode, an exception that cannot be known in advance may occur, such as a page fault; handling the exception switches the CPU into kernel mode.

c. Peripheral interrupts

When the peripheral device completes the operation requested by the user, it will send a corresponding interrupt signal to the CPU, and then the CPU will switch from the user mode to the kernel mode.

3. Components of a Process in the Operating System

Stack: automatically allocated and released by the compiler to store the parameter values of functions and values of local variables.

Heap: generally allocated and released by the programmer. If the programmer does not release it, the operating system reclaims it when the program ends.

Static: stores global and static variables.

Text: stores the binary code of the function body.

Threads share the heap area and static area.

Memory management in the operating system

Management methods: paging, segmentation, and segmentation-paging.

Segmentation:

The program's address space is divided into several segments, such as code, data, and stack, so that each process has a two-dimensional address space, independent of the others. The advantage of segmentation is that there is no internal fragmentation, because the segment size is variable and can be adjusted to fit the contents. However, swapping segments in and out produces external fragmentation (for example, swapping a 4K segment into the space left by a 5K segment leaves a 1K external fragment).

Paging:

In paging, the program's logical address space is divided into fixed-size pages, while physical memory is divided into page frames of the same size. When the program is loaded, any page can be placed into any free frame, and the frames need not be contiguous, which achieves discrete allocation. The advantage of paging is that there is no external fragmentation, because pages have a fixed size; there is, however, internal fragmentation, because the last page may not be filled up. A toy address translation is sketched below.
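A toy address translation under the paging scheme (the 4 KB page size and the page-table contents are made-up values): split the logical address into a page number and an offset, then look the page up in a page table:

public class PagingSketch {
    public static void main(String[] args) {
        final int PAGE_SIZE = 4096;            // 4 KB pages (a common choice)
        int[] pageTable = {7, 3, 11, 2};       // toy page table: page -> frame

        int logicalAddress = 5000;             // some address inside page 1
        int pageNumber = logicalAddress / PAGE_SIZE;   // = 1
        int offset = logicalAddress % PAGE_SIZE;       // = 904

        int frame = pageTable[pageNumber];             // look up the frame
        int physicalAddress = frame * PAGE_SIZE + offset;

        System.out.printf("logical %d -> page %d, offset %d -> physical %d%n",
                logicalAddress, pageNumber, offset, physicalAddress);
    }
}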

Segmentation-paging:

Segmentation-paging combines the advantages of segmentation and paging. Simply put, the segmentation-paging method is to divide the main memory into several segments first, and each segment is divided into several pages, that is, the different segments and internal segments are discrete in this management mechanism.

1. Page Replacement Algorithms: FIFO and LRU

Replacement algorithm: FIFO, LRU, and OPT.

FIFO:

Principle: the page that has been resident in memory the longest is evicted first.

Disadvantages: it does not take actual page-usage frequency into account, performs poorly, and does not match typical access patterns, so it is rarely used.

LRU:

Principle: it evicts the page that has not been used for the longest time, i.e., the least recently used page.

Advantages: it considers the temporal locality of program access and has a better performance, so it is usually used in practical applications.

Disadvantages: a precise implementation is expensive; cheaper alternatives such as LFU or random replacement are sometimes used instead.

import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Set;

/**
 * @program: Java
 * @description: The LRU replacement algorithm can be implemented by using LinkedHashMap.
 * @author: Mr.Li
 * @create: 2020-07-17 10:29
 **/
public class LRUCache {
    private LinkedHashMap<Integer, Integer> cache;
    private int capacity;   // Capacity size

    /**
     * Initialize the constructor
     * @param capacity
     */
    public LRUCache(int capacity) {
        cache = new LinkedHashMap<>(capacity);
        this.capacity = capacity;
    }

    public int get(int key) {
        // This key does not exist in the cache, so return directly
        if (!cache.containsKey(key)) {
            return -1;
        }

        int res = cache.get(key);
        cache.remove(key);    // Delete the entry first
        cache.put(key, res);  // Then re-insert it at the end of the access list
        return res;
    }

    public void put(int key, int value) {
        if (cache.containsKey(key)) {
            cache.remove(key); // The key exists, so remove its old entry
        }
        if (capacity == cache.size()) {
            // The cache is full, so evict the eldest entry at the head
            Set<Integer> keySet = cache.keySet();
            Iterator<Integer> iterator = keySet.iterator();
            cache.remove(iterator.next());
        }
        cache.put(key, value); // Insert the new entry at the end
    }
}
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * @program: Java
 * @description: The LRU replacement algorithm can also be implemented with the removeEldestEntry method of LinkedHashMap.
 * @author: Mr.Li
 * @create: 2020-07-17 10:59
 **/
class LRUCache {
    private Map<Integer, Integer> map;
    private int capacity;

    /**
     * Initialize the constructor
     * @param capacity
     */
    public LRUCache(int capacity) {
        this.capacity = capacity;
        // accessOrder = true keeps entries ordered from least to most recently accessed
        map = new LinkedHashMap<Integer, Integer>(capacity, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Integer, Integer> eldest) {
                return size() > capacity; // Evict the eldest entry once the size exceeds the capacity
            }
        };
    }

    public int get(int key) {
        // Return the value corresponding to the key; if the key does not exist, return -1
        return map.getOrDefault(key, -1);
    }

    public void put(int key, int value) {
        map.put(key, value);
    }
}

OPT:

Principle: Each time, it selects pages in the current physical block that will not be accessed for a long time in the future or that will not be used in the future for elimination.

Advantages: It has a better performance and can guarantee the least number of page faults.

Disadvantages: it is idealistic and impossible to implement in practice, because future page accesses cannot be predicted.

2. Deadlock Conditions and Solutions

Deadlock refers to the phenomenon of two or more processes waiting for each other due to competition for resources during execution.

Deadlock conditions:

Mutual exclusion: The process does not allow other processes to access the allocated resource. If other processes access the resource, they can only wait until the process that occupies the resource releases the resource.

Hold and wait: After obtaining certain resources, the process requests other resources, and it will not release the resources it has already occupied during the blocking process.

No preemption: resources already obtained by a process cannot be forcibly taken away before the process finishes using them; they can only be released voluntarily by the process itself after its task completes.

Circular wait: Several processes in the system form a circular chain, with each process waiting for a resource held by the next process in the chain.

Solution: break any one of the four conditions.

Optimistic lock: destroys the mutual exclusion condition, such as CAS.

One-time allocation of resources: destroys the hold and wait condition, such as tryLock.

Resource preemption: when a process's request for new resources cannot be satisfied, it releases the resources it already holds, which destroys the no preemption condition; database deadlock timeouts work this way.

Ordered resource allocation: the system assigns a sequence number to every resource type, and each process requests resources in increasing sequence-number order. This destroys the circular wait condition; ordered lock acquisition in transfer scenarios is a typical example (see the sketch below).
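The last two strategies map directly onto Java's ReentrantLock. A hedged sketch: acquiring locks in a fixed global order prevents circular wait, and tryLock with a timeout breaks hold-and-wait by backing off instead of blocking forever:

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class DeadlockAvoidance {
    private static final ReentrantLock lockA = new ReentrantLock();
    private static final ReentrantLock lockB = new ReentrantLock();

    // Ordered allocation: every thread takes lockA before lockB,
    // so a circular wait can never form.
    static void orderedTransfer() {
        lockA.lock();
        try {
            lockB.lock();
            try {
                System.out.println(Thread.currentThread().getName() + ": both locks held");
            } finally {
                lockB.unlock();
            }
        } finally {
            lockA.unlock();
        }
    }

    // tryLock with a timeout: give up instead of holding one lock
    // while waiting forever for the other (breaks hold-and-wait).
    static void timedTransfer() throws InterruptedException {
        if (lockA.tryLock(100, TimeUnit.MILLISECONDS)) {
            try {
                if (lockB.tryLock(100, TimeUnit.MILLISECONDS)) {
                    try {
                        System.out.println("acquired both, doing work");
                    } finally {
                        lockB.unlock();
                    }
                } // else: back off and possibly retry later
            } finally {
                lockA.unlock();
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        orderedTransfer();
        timedTransfer();
    }
}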

Summary

During the interview, it is important to have a clear understanding of the TCP protocol, the OSI network model, and which services belong to which layer. A key aspect is comprehending how sliding windows and congestion control work, and being able to analyze them in real-world scenarios. Special attention should be given to the specifics of the three-way handshake for TCP connection establishment and the four-way handshake for disconnection, as well as the principles underlying them. In regard to the operating system, it is essential to understand the distinctions between processes and threads, how they communicate, and how they are managed by the operating system. Familiarity with common Linux commands is crucial, along with the ability to effectively address issues such as high CPU usage, memory overflow, and process deadlock. Moreover, among the page replacement algorithms, the Least Recently Used (LRU) algorithm is a key focus in written examinations and is well worth practicing.


Disclaimer: The views expressed herein are for reference only and don't necessarily represent the official views of Alibaba Cloud.

Comments