Libuv data structure and general logic

2.1 Core structure uv_loop_s

uv_loop_s is the core data structure of Libuv; each event loop corresponds to one uv_loop_s structure, which records the core data used throughout the event loop. Let's analyze the meaning of each field; a brief usage sketch follows the list.

1 User-defined data field: void* data;

2 The number of active handles, which affects whether the event loop exits: unsigned int active_handles;

3 The handle queue, containing both active and inactive handles: void* handle_queue[2];

4 The number of requests, which affects whether the event loop exits: union { void* unused[2]; unsigned int count; } active_reqs;

5 The flag marking the end of the event loop: unsigned int stop_flag;

6 Flags for running Libuv; currently only UV_LOOP_BLOCK_SIGPROF is defined, which is mainly used to block the SIGPROF signal during epoll_wait to improve performance. SIGPROF is a signal triggered by the operating system's setitimer function: unsigned long flags;

7 The fd of epoll: int backend_fd;

8 The queue of the pending phase: void* pending_queue[2];

9 The queue of uv__io_t structures whose events need to be registered in epoll: void* watcher_queue[2];

10 Each node in the watcher_queue queue has an fd field; watchers uses that fd as an index to record the uv__io_t structure the fd belongs to: uv__io_t** watchers;

11 The size of watchers, set in the maybe_resize function: unsigned int nwatchers;

12 The number of fds in watchers, generally the number of nodes in the watcher_queue queue: unsigned int nfds;

13 After a worker thread of the thread pool finishes a task, it inserts the corresponding structure into the wq queue: void* wq[2];

14 Controls mutually exclusive access to the wq queue, otherwise simultaneous access by multiple worker threads would cause problems: uv_mutex_t wq_mutex;

15 Used for communication between the thread pool's worker threads and the main thread: uv_async_t wq_async;

16 Read-write lock: uv_rwlock_t cloexec_lock;

17 The queue processed in the close phase of the event loop, produced by uv_close: uv_handle_t* closing_handles;

18 The queue of processes created by fork: void* process_handles[2];

19 The task queue of the prepare phase of the event loop: void* prepare_handles[2];

20 The task queue of the check phase of the event loop: void* check_handles[2];

21 The task queue of the idle phase of the event loop: void* idle_handles[2];

22 The async_handles queue; in the Poll IO phase, uv__async_io traverses the async_handles queue and processes the nodes whose pending field is 1: void* async_handles[2];

23 The IO observer used to monitor whether there are async handle tasks to process: uv__io_t async_io_watcher;

24 The write end fd of the pipe used for communication between worker threads and the main thread: int async_wfd;

25 The binary heap that holds the timers: struct {
void* min;
unsigned int nelts;
} timer_heap;

26 Manages the id of timer nodes, incremented continuously: uint64_t timer_counter;

27 The current time. Libuv updates it at the beginning of each event loop iteration and in the Poll IO phase, and the cached value is used in subsequent phases to reduce system calls: uint64_t time;

28 The pipe used for communication between forked processes and the main process; it notifies the main process when a signal is received, and the main process then executes the callback registered on the corresponding child process node: int signal_pipefd[2];

29 Similar to async_io_watcher, signal_io_watcher saves the read end fd of the pipe and a callback, and registers it in epoll. When the process receives a signal, the signal handler writes to the pipe, and the callback is finally executed in the Poll IO phase: uv__io_t signal_io_watcher;

30 The handle used to manage the exit signal (SIGCHLD) of child processes: uv_signal_t child_watcher;

31 A spare fd, reserved for handling the EMFILE error: int emfile_fd;
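
The following is a minimal sketch (not from the Libuv source, using only the public API) that ties these fields to an actual event loop: with no handles or requests registered, active_handles and active_reqs.count stay at zero, so uv_run exits immediately.

```cpp
#include <stdio.h>
#include <uv.h>

int main() {
  uv_loop_t loop;        // each event loop corresponds to one uv_loop_s
  uv_loop_init(&loop);   // initializes handle_queue, timer_heap, backend_fd, etc.

  // No handles or requests are registered, so active_handles and
  // active_reqs.count stay at 0 and uv_run returns right away.
  int r = uv_run(&loop, UV_RUN_DEFAULT);
  printf("uv_run returned %d\n", r);

  uv_loop_close(&loop);
  return 0;
}
```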

2.2 uv_handle_t

In Libuv, uv_handle_t is similar to a base class in C++, and many subclasses inherit from it. Libuv mainly achieves the effect of inheritance by controlling the memory layout of its structures. A handle represents an object with a long life cycle, for example: 1 An active prepare handle, whose callback is executed once per iteration of the event loop. 2 A TCP handle, whose callback is executed every time a connection arrives.
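
As a simple illustration of this memory-layout approach (a sketch using the public API, not code from the book), every concrete handle structure begins with the uv_handle_t fields, so a pointer to, say, a uv_tcp_t can be treated as a uv_handle_t*:

```cpp
#include <stdio.h>
#include <uv.h>

int main() {
  uv_loop_t* loop = uv_default_loop();

  uv_tcp_t tcp_handle;
  uv_tcp_init(loop, &tcp_handle);

  // uv_tcp_t embeds the uv_handle_t fields at the start of the structure,
  // so this cast is how Libuv gets the effect of inheritance in C.
  uv_handle_t* handle = (uv_handle_t*) &tcp_handle;
  printf("type=%d active=%d\n", handle->type, uv_is_active(handle));

  // Close the handle and let the close phase run before destroying the loop.
  uv_close(handle, NULL);
  uv_run(loop, UV_RUN_DEFAULT);
  return uv_loop_close(loop);
}
```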

Let's take a look at the definition of uv_handle_t.

1 Custom data used to associate a context; Node.js uses it to associate the C++ object to which the handle belongs: void* data;

2 The event loop it belongs to: uv_loop_t* loop;

3 The handle type: uv_handle_type type;

4 The callback executed in the closing phase after the handle calls uv_close: uv_close_cb close_cb;

5 The forward and backward pointers used to organize the handle queue: void* handle_queue[2];

6 The file descriptor: union {
 int fd;
 void* reserved[4];
 } u;

7 When the handle is in the close queue, this field points to the next node to be closed: uv_handle_t* next_closing;

8 The handle's status and flags: unsigned int flags;

2.2.1 uv_stream_s

uv_stream_s is the structure representing a stream. In addition to inheriting the fields of uv_handle_t, it defines the following additional fields; a TCP server sketch that exercises some of them follows the list.

1 The number of bytes waiting to be sent: size_t write_queue_size;

2 The function used to allocate memory: uv_alloc_cb alloc_cb;

3 The callback executed when data is read successfully: uv_read_cb read_cb;

4 The structure corresponding to an initiated connection: uv_connect_t* connect_req;

5 The structure corresponding to shutting down the write end: uv_shutdown_t* shutdown_req;

6 The IO observer used to register read and write events in epoll: uv__io_t io_watcher;

7 The queue of data waiting to be sent: void* write_queue[2];

8 The queue of completed sends: void* write_completed_queue[2];

9 The callback executed when a connection is received: uv_connection_cb connection_cb;

10 The error code of a failed socket operation: int delayed_error;

11 The fd returned by accept: int accepted_fd;

12 When an fd has already been accepted and new fds arrive, they are stored here temporarily: void* queued_fds;
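
To show where alloc_cb, read_cb, connection_cb and accepted_fd come into play, here is a minimal TCP server sketch using the public API; the port 7000 and the buffer handling are purely illustrative.

```cpp
#include <stdlib.h>
#include <uv.h>

static void alloc_cb(uv_handle_t* handle, size_t suggested, uv_buf_t* buf) {
  buf->base = malloc(suggested);   // memory handed to read_cb
  buf->len = suggested;
}

static void read_cb(uv_stream_t* stream, ssize_t nread, const uv_buf_t* buf) {
  if (nread < 0)
    uv_close((uv_handle_t*) stream, NULL);  // EOF or error
  free(buf->base);
}

static void connection_cb(uv_stream_t* server, int status) {
  if (status < 0) return;
  uv_tcp_t* client = malloc(sizeof(uv_tcp_t));
  uv_tcp_init(server->loop, client);
  // uv_accept consumes the accepted_fd stored on the listening stream
  if (uv_accept(server, (uv_stream_t*) client) == 0)
    uv_read_start((uv_stream_t*) client, alloc_cb, read_cb);
  else
    uv_close((uv_handle_t*) client, NULL);
}

int main() {
  uv_tcp_t server;
  struct sockaddr_in addr;
  uv_tcp_init(uv_default_loop(), &server);
  uv_ip4_addr("0.0.0.0", 7000, &addr);
  uv_tcp_bind(&server, (const struct sockaddr*) &addr, 0);
  uv_listen((uv_stream_t*) &server, 128, connection_cb);
  return uv_run(uv_default_loop(), UV_RUN_DEFAULT);
}
```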

2.2.2 uv_async_s

uv_async_s is the structure that implements asynchronous communication in Libuv. It inherits from uv_handle_t and additionally defines the following fields; a usage sketch follows the list.

1 The callback executed when the asynchronous event is triggered: uv_async_cb async_cb;

2 Used to insert the handle into the async_handles queue: void* queue[2];

3 If the pending field of a node in the async_handles queue is 1, the corresponding event has been triggered: int pending;
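
Below is a minimal sketch of the typical use of an async handle, assuming only the public API: a worker thread calls uv_async_send, which marks the handle pending and wakes the loop, and async_cb then runs on the loop thread in the Poll IO phase.

```cpp
#include <stdio.h>
#include <uv.h>

static uv_async_t async_handle;

static void async_cb(uv_async_t* handle) {
  printf("woken up by another thread\n");
  uv_close((uv_handle_t*) handle, NULL);   // no active handles left, loop exits
}

static void worker(void* arg) {
  // Safe to call from another thread: sets pending and wakes the loop.
  uv_async_send(&async_handle);
}

int main() {
  uv_thread_t tid;
  uv_async_init(uv_default_loop(), &async_handle, async_cb);
  uv_thread_create(&tid, worker, NULL);
  uv_run(uv_default_loop(), UV_RUN_DEFAULT);
  uv_thread_join(&tid);
  return 0;
}
```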

2.2.3 uv_tcp_s

uv_tcp_s inherits uv_handle_s and uv_stream_s.

2.2.4 uv_udp_s

uv_udp_s inherits from uv_handle_t and additionally defines the following fields; a usage sketch follows the list.

1 The number of bytes waiting to be sent: size_t send_queue_size;

2 The number of nodes in the write queue: size_t send_queue_count;

3 Allocates memory for receiving data: uv_alloc_cb alloc_cb;

4 The callback executed after data is received: uv_udp_recv_cb recv_cb;

5 The IO observer inserted into epoll to implement data reading and writing: uv__io_t io_watcher;

6 The queue of data waiting to be sent: void* write_queue[2];

7 The queue of completed sends (successful or failed), related to the queue of data waiting to be sent: void* write_completed_queue[2];
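
A minimal receive-only UDP sketch (public API only; the port 9000 is illustrative) showing how alloc_cb, recv_cb and io_watcher are wired up:

```cpp
#include <stdio.h>
#include <stdlib.h>
#include <uv.h>

static void alloc_cb(uv_handle_t* handle, size_t suggested, uv_buf_t* buf) {
  buf->base = malloc(suggested);
  buf->len = suggested;
}

static void recv_cb(uv_udp_t* handle, ssize_t nread, const uv_buf_t* buf,
                    const struct sockaddr* addr, unsigned flags) {
  // nread == 0 with addr == NULL just means "nothing more to read"
  if (nread > 0)
    printf("received %zd bytes\n", nread);
  free(buf->base);
}

int main() {
  uv_udp_t udp;
  struct sockaddr_in addr;
  uv_udp_init(uv_default_loop(), &udp);
  uv_ip4_addr("0.0.0.0", 9000, &addr);
  uv_udp_bind(&udp, (const struct sockaddr*) &addr, 0);
  uv_udp_recv_start(&udp, alloc_cb, recv_cb);  // registers io_watcher in epoll
  return uv_run(uv_default_loop(), UV_RUN_DEFAULT);
}
```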

2.2.5 uv_tty_s

uv_tty_s inherits from uv_handle_t and uv_stream_t. The following fields are additionally defined.

1 The parameters of the terminal: struct termios orig_termios;

2 The working mode of the terminal: int mode;

2.2.6 uv_pipe_s

uv_pipe_s inherits from uv_handle_t and uv_stream_t. The following fields are additionally defined.

1 Marks whether the pipe can be used to pass file descriptors: int ipc;

2 The file path used for Unix domain communication: const char* pipe_fname;

2.2.7 uv_prepare_s, uv_check_s, uv_idle_s

The definitions of the above three structures are similar: they all inherit uv_handle_t and define two additional fields; a short example follows the list.

1 The callback of the prepare, check or idle phase: uv_xxx_cb xxx_cb;

2 Used to insert the handle into the prepare, check or idle queue: void* queue[2];
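
The three handle types share the same init/start/stop API shape. The sketch below uses an idle handle (an active idle handle also keeps the Poll IO timeout at zero, so the callback visibly runs every iteration); uv_prepare_* and uv_check_* are used the same way. This is a usage sketch, not code from the book.

```cpp
#include <stdio.h>
#include <uv.h>

static int ticks = 0;

static void idle_cb(uv_idle_t* handle) {
  // Runs once per loop iteration in the idle phase.
  if (++ticks == 3) {
    printf("stopping after %d ticks\n", ticks);
    uv_idle_stop(handle);   // handle becomes inactive, loop can exit
  }
}

int main() {
  uv_idle_t idle;
  uv_idle_init(uv_default_loop(), &idle);
  uv_idle_start(&idle, idle_cb);
  return uv_run(uv_default_loop(), UV_RUN_DEFAULT);
}
```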

2.2.8 uv_timer_s

uv_timer_s inherits uv_handle_t and additionally defines the following fields; a usage sketch follows the list.

1 The timeout callback: uv_timer_cb timer_cb;

2 The field used to insert the handle into the binary heap: void* heap_node[3];

3 The timeout: uint64_t timeout;

4 Whether to restart the timer after it expires; if so, the handle is re-inserted into the binary heap: uint64_t repeat;

5 An id mark, used for comparison when inserting into the binary heap: uint64_t start_id;
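
A minimal one-shot timer sketch (public API only; the 100 ms timeout is illustrative). Because repeat is 0, the handle is not re-inserted into the heap after firing, so the loop exits.

```cpp
#include <stdio.h>
#include <uv.h>

static void timer_cb(uv_timer_t* handle) {
  // repeat is 0, so the timer is not restarted and the loop exits afterwards.
  printf("timer fired\n");
}

int main() {
  uv_timer_t timer;
  uv_timer_init(uv_default_loop(), &timer);
  // timeout = 100 ms, repeat = 0
  uv_timer_start(&timer, timer_cb, 100, 0);
  return uv_run(uv_default_loop(), UV_RUN_DEFAULT);
}
```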

2.2.9 uv_process_s

uv_process_s inherits uv_handle_t and additionally defines the following fields; a uv_spawn sketch follows the list.

1 The callback executed when the process exits: uv_exit_cb exit_cb;

2 The process id: int pid;

3 Used to insert the handle into a queue, either the process queue or the pending queue: void* queue[2];

4 The exit code, set when the process exits: int status;
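
A minimal uv_spawn sketch using the public API; the child command echo is only an example. exit_cb runs once the loop has handled SIGCHLD for this pid.

```cpp
#include <stdio.h>
#include <string.h>
#include <uv.h>

static void exit_cb(uv_process_t* req, int64_t exit_status, int term_signal) {
  printf("child %d exited with status %lld\n", req->pid, (long long) exit_status);
  uv_close((uv_handle_t*) req, NULL);
}

int main() {
  uv_process_t child;
  uv_process_options_t options;
  char* args[] = { "echo", "hello", NULL };

  memset(&options, 0, sizeof(options));
  options.file = "echo";      // looked up in PATH
  options.args = args;
  options.exit_cb = exit_cb;  // runs after SIGCHLD has been processed

  if (uv_spawn(uv_default_loop(), &child, &options) != 0)
    return 1;
  return uv_run(uv_default_loop(), UV_RUN_DEFAULT);
}
```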

2.2.10 uv_fs_event_s

uv_fs_event_s is used to monitor file changes. It inherits uv_handle_t and additionally defines the following fields; a usage sketch follows the list.

1 The monitored file path (a file or a directory): char* path;

2 The callback executed when the file changes: uv_fs_event_cb cb;
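
A minimal file-watching sketch with the public API; the watched path /tmp is illustrative only.

```cpp
#include <stdio.h>
#include <uv.h>

static void fs_event_cb(uv_fs_event_t* handle, const char* filename,
                        int events, int status) {
  if (status < 0) return;
  // events is a bitmask of UV_RENAME and UV_CHANGE
  printf("%s changed (events=%d)\n", filename ? filename : "?", events);
}

int main() {
  uv_fs_event_t watcher;
  uv_fs_event_init(uv_default_loop(), &watcher);
  uv_fs_event_start(&watcher, fs_event_cb, "/tmp", 0);
  return uv_run(uv_default_loop(), UV_RUN_DEFAULT);
}
```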

2.2.11 uv_fs_poll_s

uv_fs_poll_s inherits uv_handle_t and additionally defines the following field.

1 poll_ctx, which points to a poll_ctx structure: void* poll_ctx;

struct poll_ctx {
  // The corresponding handle
  uv_fs_poll_t* parent_handle;
  // Marks whether polling has started, and the reason for failure while polling
  int busy_polling;
  // How often to check whether the file content has changed
  unsigned int interval;
  // The start time of each round of polling
  uint64_t start_time;
  // The event loop it belongs to
  uv_loop_t* loop;
  // The callback executed when the file changes
  uv_fs_poll_cb poll_cb;
  // The timer that drives each round of polling after the timeout
  uv_timer_t timer_handle;
  // Records the context of the poll: file path, callback, etc.
  uv_fs_t fs_req;
  // Saves the file information returned by the operating system during polling
  uv_stat_t statbuf;
  // The monitored file path; the string is appended to the end of the structure
  char path[1]; /* variable length */
};

2.2.12 uv_poll_s

uv_poll_s inherits from uv_handle_t and additionally defines the following fields.

1 The callback executed when the monitored fd has an event of interest: uv_poll_cb poll_cb;

2 The IO observer that saves the fd and the callback and is registered in epoll: uv__io_t io_watcher;
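
A minimal uv_poll sketch using the public API; it watches stdin (fd 0) purely as an illustration of handing an existing fd to the loop.

```cpp
#include <stdio.h>
#include <unistd.h>
#include <uv.h>

static void poll_cb(uv_poll_t* handle, int status, int events) {
  if (status < 0) return;
  if (events & UV_READABLE)
    printf("fd %d is readable\n", STDIN_FILENO);
  uv_poll_stop(handle);
}

int main() {
  uv_poll_t poller;
  // Watch an already existing fd; here stdin, purely as an illustration.
  uv_poll_init(uv_default_loop(), &poller, STDIN_FILENO);
  uv_poll_start(&poller, UV_READABLE, poll_cb);
  return uv_run(uv_default_loop(), UV_RUN_DEFAULT);
}
```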

2.2.13 uv_signal_s

uv_signal_s inherits uv_handle_t and additionally defines the following fields; a usage sketch follows the list.

1 The callback executed when a signal is received: uv_signal_cb signal_cb;

2 The registered signal: int signum;

3 Used to insert the handle into a red-black tree. The process encapsulates the signals of interest and their callbacks into uv_signal_s structures and inserts them into the red-black tree. When a signal arrives, the signal handler writes a notification to the pipe to notify Libuv, and Libuv executes the corresponding callback in the Poll IO phase. The red-black tree node is defined as follows:

struct {
  struct uv_signal_s* rbe_left;
  struct uv_signal_s* rbe_right;
  struct uv_signal_s* rbe_parent;
  int rbe_color;
} tree_entry;

4 The number of signals received: unsigned int caught_signals;

5 The number of signals processed: unsigned int dispatched_signals;
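
A minimal signal-watching sketch with the public API; it registers SIGINT as an example. signal_cb runs in the Poll IO phase, after the signal handler has written to signal_pipefd and signal_io_watcher has fired.

```cpp
#include <signal.h>
#include <stdio.h>
#include <uv.h>

static void signal_cb(uv_signal_t* handle, int signum) {
  printf("got signal %d\n", signum);
  uv_signal_stop(handle);   // handle becomes inactive, loop can exit
}

int main() {
  uv_signal_t sig;
  uv_signal_init(uv_default_loop(), &sig);
  uv_signal_start(&sig, signal_cb, SIGINT);
  return uv_run(uv_default_loop(), UV_RUN_DEFAULT);
}
```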

2.3 uv_req_s

The callback executed when the send completes (successfully or not): uv_udp_send_cb send_cb;


2.3.5 uv_getaddrinfo_s

uv_getaddrinfo_s represents a DNS request that resolves an IP address from a domain name. It additionally defines the following fields; a usage sketch follows the list.

1 The event loop it belongs to: uv_loop_t* loop;

2 The node used to insert the request into the thread-pool task queue during asynchronous DNS resolution: struct uv__work work_req;

3 The callback executed after DNS resolution completes: uv_getaddrinfo_cb cb;

4 The DNS query configuration: struct addrinfo* hints;
char* hostname;
char* service;

5 The DNS resolution result: struct addrinfo* addrinfo;

6 The DNS resolution return code: int retcode;
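
A minimal asynchronous lookup sketch with the public API; the host name example.com is illustrative. Passing a callback makes the lookup run in the thread pool (via work_req), and cb runs back on the loop thread.

```cpp
#include <netinet/in.h>
#include <stdio.h>
#include <uv.h>

static void getaddrinfo_cb(uv_getaddrinfo_t* req, int status, struct addrinfo* res) {
  char ip[64] = { 0 };
  if (status < 0) {
    printf("lookup failed: %s\n", uv_strerror(status));
    return;
  }
  if (res->ai_family == AF_INET)
    uv_ip4_name((struct sockaddr_in*) res->ai_addr, ip, sizeof(ip));
  printf("first result: %s\n", ip);
  uv_freeaddrinfo(res);
}

int main() {
  uv_getaddrinfo_t req;
  // A non-NULL callback makes the call asynchronous.
  uv_getaddrinfo(uv_default_loop(), &req, getaddrinfo_cb,
                 "example.com", NULL, NULL);
  return uv_run(uv_default_loop(), UV_RUN_DEFAULT);
}
```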

2.3.6 uv_getnameinfo_s

uv_getnameinfo_s represents a DNS request that resolves a domain name from an IP address. It additionally defines the following fields.

1 The event loop it belongs to: uv_loop_t* loop;

2 The node used to insert the request into the thread-pool task queue during asynchronous DNS resolution: struct uv__work work_req;

3 The callback executed when converting the socket address to a domain name completes: uv_getnameinfo_cb getnameinfo_cb;

4 The socket address structure to be converted to a domain name: struct sockaddr_storage storage;

5 Indicates what information the query should return: int flags;

6 The information returned by the query: char host[NI_MAXHOST];
char service[NI_MAXSERV];

7 The query return code: int retcode;

2.3.7 uv_work_s

uv_work_s is used to submit tasks to the thread pool. It additionally defines the following fields; a usage sketch follows the list.

1 The event loop it belongs to: uv_loop_t* loop;

2 The function that processes the task: uv_work_cb work_cb;

3 The function executed after the task has been processed: uv_after_work_cb after_work_cb;

4 Encapsulates a piece of work and inserts it into the thread-pool queue; the work and done functions of work_req wrap the work_cb and after_work_cb above: struct uv__work work_req;
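
A minimal thread-pool sketch with the public uv_queue_work API: work_cb runs on a worker thread, and after the worker inserts the request into loop->wq and wakes the loop through wq_async, after_work_cb runs on the loop thread.

```cpp
#include <stdio.h>
#include <uv.h>

static void work_cb(uv_work_t* req) {
  // Runs on a thread-pool worker thread.
  printf("heavy work on a worker thread\n");
}

static void after_work_cb(uv_work_t* req, int status) {
  // Runs back on the event-loop thread once the worker is done.
  printf("done, status=%d\n", status);
}

int main() {
  uv_work_t req;
  uv_queue_work(uv_default_loop(), &req, work_cb, after_work_cb);
  return uv_run(uv_default_loop(), UV_RUN_DEFAULT);
}
```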

2.3.8 uv_fs_s

uv_fs_s represents a file operation request. It additionally defines the following fields; a usage sketch follows the list.

1 The file operation type: uv_fs_type fs_type;

2 The event loop it belongs to: uv_loop_t* loop;

3 The callback executed when the file operation completes: uv_fs_cb cb;

4 The return code of the file operation: ssize_t result;

5 The data returned by the file operation: void* ptr;

6 The path of the file operation: const char* path;

7 The stat information of the file: uv_stat_t statbuf;

8 When the file operation involves two paths, saves the destination path: const char* new_path;

9 The file descriptor: uv_file file;

10 The file flags: int flags;

11 The operation mode: mode_t mode;

12 The data and the number of buffers passed in when writing a file: unsigned int nbufs;
uv_buf_t* bufs;

13 The file offset: off_t off;

14 Saves the uid and gid that need to be set, for example during chown: uv_uid_t uid;
uv_gid_t gid;

15 Saves the file modification and access times that need to be set, for example for fs.utimes: double atime;
double mtime;

16 When asynchronous, used to insert the request into the task queue and save the work function and callback: struct uv__work work_req;

17 Saves the read data or its length, e.g. for read and sendfile: uv_buf_t bufsml[4];
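
A minimal asynchronous file-open sketch with the public API; the path /tmp/demo.txt is illustrative. Because a callback is passed, the open runs in the thread pool via work_req, result carries the fd or error code, and cb runs on the loop thread.

```cpp
#include <fcntl.h>
#include <stdio.h>
#include <uv.h>

static void open_cb(uv_fs_t* req) {
  // result holds the fd on success or a negative error code on failure.
  if (req->result >= 0) {
    printf("opened %s, fd=%d\n", req->path, (int) req->result);
    uv_fs_t close_req;
    uv_fs_close(uv_default_loop(), &close_req, (uv_file) req->result, NULL);
    uv_fs_req_cleanup(&close_req);
  } else {
    printf("open failed: %s\n", uv_strerror((int) req->result));
  }
  uv_fs_req_cleanup(req);
}

int main() {
  uv_fs_t open_req;
  uv_fs_open(uv_default_loop(), &open_req, "/tmp/demo.txt", O_RDONLY, 0, open_cb);
  return uv_run(uv_default_loop(), UV_RUN_DEFAULT);
}
```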

2.4 IO Observer

The IO observer is a core concept and data structure in Libuv. Let's take a look at its definition.


struct uv__io_s {
  // The callback executed after the event is triggered
  uv__io_cb cb;
  // Used to insert the structure into queues
  void* pending_queue[2];
  void* watcher_queue[2];
  // Saves the events of interest this time, set when the IO observer is inserted into the queue
  unsigned int pevents;
  // Saves the currently registered events of interest
  unsigned int events;
  int fd;
};

The IO observer encapsulates a file descriptor, events and a callback, and is then inserted into the IO observer queue maintained by the loop. In the Poll IO phase, Libuv registers the file descriptor and the events of interest with the underlying event-driven module according to the information described by the IO observer. When a registered event is triggered, the callback of the IO observer is executed. Let's look at the logic of operating an IO observer.

2.4.1 Initialize IO observer


void uv__io_init(uv__io_t* w, uv__io_cb cb, int fd) {
  // Initialize the queues, the callback and the fd to monitor
  QUEUE_INIT(&w->pending_queue);
  QUEUE_INIT(&w->watcher_queue);
  w->cb = cb;
  w->fd = fd;
  // Events of interest when the fd was last added to epoll
  w->events = 0;
  // Events of interest now, set before the epoll function is executed again
  w->pevents = 0;
}

2.4.2 Register an IO observer with Libuv

void uv__io_start(uv_loop_t* loop, uv__io_t* w, unsigned int events) {
  // Record the events of interest now
  w->pevents |= events;
  // The watchers array may need to grow
  maybe_resize(loop, w->fd + 1);
  // If the events have not changed, return directly
  if (w->events == w->pevents)
    return;
  // If the IO observer is not queued anywhere yet, insert it into Libuv's IO observer queue
  if (QUEUE_EMPTY(&w->watcher_queue))
    QUEUE_INSERT_TAIL(&loop->watcher_queue, &w->watcher_queue);
  // Save the fd => IO observer mapping
  if (loop->watchers[w->fd] == NULL) {
    loop->watchers[w->fd] = w;
    loop->nfds++;
  }
}

The uv__io_start function inserts an IO observer into Libuv's observer queue and saves a mapping from the fd to the observer in the watchers array. Libuv processes the IO observer queue during the Poll IO phase.

2.4.3 Cancel an IO observer or events

uv__io_stop modifies the events the IO observer is interested in. If there are still events of interest, the IO observer stays in the queue; otherwise it is removed from the queue.


void uv__io_stop(uv_loop_t* loop, uv__io_t* w, unsigned int events) {
  if (w->fd == -1)
    return;
  assert(w->fd >= 0);
  if ((unsigned) w->fd >= loop->nwatchers)
    return;
  // Clear the cancelled events; pevents keeps the events still of interest
  w->pevents &= ~events;
  // No longer interested in any event
  if (w->pevents == 0) {
    // Remove the IO observer from the queue
    QUEUE_REMOVE(&w->watcher_queue);
    // Reset the queue node
    QUEUE_INIT(&w->watcher_queue);
  }
}
uv__handle_start sets the handle's ACTIVE mark and, if the handle is in the REF state, increases the number of active handles by one. Only handles in both the REF and ACTIVE state will affect the exit of the event loop.

2.5.4 uv__req_init

uv__req_init initializes the type of a request and increases the request count, which affects the exit of the event loop.

#define uv__req_init(loop, req, typ) \
  do { \
    (req)->type = (typ); \
    (loop)->active_reqs.count++; \
  } \
  while (0)

2.5.5 uv__req_register

uv__req_register increases the number of requests by one.


#define uv__req_register(loop, req) \
  do { \
    (loop)->active_reqs.count++; \
  } \
  while (0)

2.5.6 uv__req_unregister

uv__req_unregister decreases the number of requests by one.

#define uv__req_unregister(loop, req) \
  do { \
    assert(uv__has_active_reqs(loop)); \
    (loop)->active_reqs.count--; \
  } \
  while (0)

2.5.7 uv__handle_ref

uv__handle_ref marks the handle as being in the REF state. If the handle is in the ACTIVE state, the number of active handles is increased by one.


#define uv__handle_ref(h) \
  do { \
    if (((h)->flags & UV_HANDLE_REF) != 0) break; \
    (h)->flags |= UV_HANDLE_REF; \
    if (((h)->flags & UV_HANDLE_CLOSING) != 0) break; \
    if (((h)->flags & UV_HANDLE_ACTIVE) != 0) uv__active_handle_add(h); \
  } \
  while (0)

2.5.8 uv__handle_unref

uv__handle_unref removes the REF state of the handle. If the handle is in the ACTIVE state, the number of active handles is decreased by one.


#define uv__handle_unref(h) \
  do { \
    if (((h)->flags & UV_HANDLE_REF) == 0) break; \
    (h)->flags &= ~UV_HANDLE_REF; \
    if (((h)->flags & UV_HANDLE_CLOSING) != 0) break; \
    if (((h)->flags & UV_HANDLE_ACTIVE) != 0) uv__active_handle_rm(h); \
  } \
  while (0)
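
The REF flag is also exposed through the public uv_ref and uv_unref functions. Here is a minimal sketch (not from the book): an unreferenced but still active timer no longer counts towards active_handles, so the loop exits without waiting for it.

```cpp
#include <stdio.h>
#include <uv.h>

static void timer_cb(uv_timer_t* handle) {
  printf("this never runs\n");
}

int main() {
  uv_timer_t timer;
  uv_timer_init(uv_default_loop(), &timer);
  uv_timer_start(&timer, timer_cb, 10000, 0);
  // Remove the REF state: the timer stays ACTIVE but no longer counts
  // towards active_handles, so uv_run returns immediately.
  uv_unref((uv_handle_t*) &timer);
  int r = uv_run(uv_default_loop(), UV_RUN_DEFAULT);
  printf("uv_run returned %d\n", r);
  return 0;
}
```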