Warning: A bit of a ramble.
Over the years, on multiple occasions, I have had to write software that takes messages sent from a remote server and does something with them (e.g. inserting them into a database or executing a script). This tutorial/guide covers my current implementation and includes some tips and warnings regarding potential pitfalls.
This kind of architecture can be utilized by any system built on a central server / distributed node structure, for example VPS panels and CDN systems.
Aim: To develop a reliable way for a remote server to send a message to, or perform an action on, another server (e.g. a central server). This solution should be reliable (reasonably guaranteed delivery), fault tolerant and reasonably resistant to duplicates. Performance is also important: the system should be able to handle high rates of messages per second and be horizontally scalable as required.
Problems:
Problem 1: Reliability & Fault Tolerance
It's important that these messages be delivered reliably, which rules out most UDP based solutions. There should also be a memory & disk buffer to cover situations where the receiving central server is unavailable.
Problem 2: Performance
Previously many of my solutions involved using rsyslog and its ommysql module to insert into a temporary MySQL table, from which messages were read in chunks and processed. This solution performed very poorly (a maximum of 200-300 messages per second per worker) and was not fault tolerant (messages were read in chunks of over 5,000, so a failure mid-chunk affected thousands of messages at once).
In addition, while this solution was simple, its shared-table MySQL approach also capped its scalability.
----
Our solution:
software -> rsyslog -> RELP TRANSPORT -> rsyslog -> unix pipe -> redis -> php
In the chain above, the sending software and its local rsyslog run on the remote server; everything from the receiving rsyslog onward lives on the central server.
Worst case data loss: 25 messages
rsyslog
Now why do we use rsyslog & the syslog protocol as opposed to sending these application messages directly?
Really this is more of a personal choice, but there are many great pieces of software out there for dealing with and processing syslog messages (rsyslog, Elasticsearch & Kibana, etc.). And let's be honest, it's easier to use existing software to do the heavy lifting than to develop your own clients, servers and modules for application integration.
RELP
RELP, or the Reliable Event Logging Protocol, is a protocol developed for use in rsyslog for reliable syslog forwarding over TCP. Its aim is to provide something high performing and non duplicating, with a low chance of message loss.
More details on the protocol can be found here: http://www.rsyslog.com/doc/relp.html
For our purposes it's a great protocol: unlike UDP forwarding it has 'guaranteed' delivery, and unlike a plain TCP forwarder it performs well without needing to massively multiplex. There are also other advantages, such as the transfer window and being better able to determine the connection state during a break & recovery.
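As a rough sketch of how this is wired up (hostnames, ports and queue sizes here are illustrative, using rsyslog's legacy directive syntax with the omrelp and imrelp modules), the remote side forwards over RELP with a disk-assisted queue to cover central server outages, and the central side simply listens:

# Remote server: forward local3.* over RELP, buffering locally
# (memory, then disk) if the central server is unreachable.
$ModLoad omrelp
$WorkDirectory /var/lib/rsyslog     # where queue spool files are kept
$ActionQueueType LinkedList         # in-memory queue...
$ActionQueueFileName relp_fwd       # ...that spills to disk when needed
$ActionQueueMaxDiskSpace 1g
$ActionQueueSaveOnShutdown on
$ActionResumeRetryCount -1          # retry forever, never discard
local3.* :omrelp:central.example.com:2514

# Central server: accept RELP connections on the same port.
$ModLoad imrelp
$InputRELPServerRun 2514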
Unix FIFO Pipe
A FIFO pipe is a great POSIX way to pass data between processes (IPC). It exists as a "file" with a path and stores data in memory in a first-in-first-out manner. These FIFOs have limited space, so it is very important to read from them as fast as we are able. If we cannot keep up, rsyslog's write call will fail and rsyslog will begin writing to its on disk / memory queue (as configured).
In order to process large volumes you may need to batch multiple inserts into the redis list; this does increase the chance of losing a small number of messages if this service crashes. I have linked the code we are using for this process: it is simple and uses a 25-entry buffer (less than 1/100th of a second's worth of logs for us, and the source of the "worst case data loss: 25 messages" figure above). We have tested it handling over 10,000 log entries a second at very minor (<5%) CPU usage.
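The linked code aside, a minimal sketch of the same idea might look like the following (assuming the phpredis extension, a local redis instance and a list named log_queue; these names are placeholders, not our real service):

<?php
// Sketch: read newline-delimited entries from the FIFO and push them
// to a redis list in batches of 25 (one pipelined round trip each).
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);           // assumed local redis

$pipe = fopen('/rsyslog/pipe', 'r');          // FIFO created with mkfifo
$buffer = array();

while (($line = fgets($pipe)) !== false) {    // blocks until data arrives
    $buffer[] = rtrim($line, "\n");
    if (count($buffer) >= 25) {               // the 25-entry buffer: this is
        $tx = $redis->multi(Redis::PIPELINE); // the worst case loss on crash
        foreach ($buffer as $entry) {
            $tx->rPush('log_queue', $entry);
        }
        $tx->exec();
        $buffer = array();
    }
}
// On EOF (all writers closed) a real service would flush the remaining
// buffer and reopen the FIFO rather than exit.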
Outputting to a unix pipe (create it first with mkfifo) is as easy as adding lines like the following to your rsyslog.conf file.
$template x4b_pipe, "~%syslogfacility%|%syslogpriority%|%timegenerated%|%msg%\n"
local3.* |/rsyslog/pipe;x4b_pipe
Redis
By inserting the data from the unix pipe into redis we can process items from the list with multiple PHP workers (possibly remote) in an atomic way. We can also easily utilize multiple redis servers (for redundancy, or via sharding for scalability), or redis cluster once it is stable.
Redis provides a list construct (implemented as a linked list) that can be used as a simple queue, with great performance for pushes and pops. An example of usage in this way can be found in the Gist linked above. The CPU usage of redis is minute for this workload, so it is safe to say you can have many consumers working before you need to consider sharding this resource.
While writing to a list in memory, redis will also synchronise its datastore to disk (depending on your persistence configuration); this provides added redundancy in case of issues such as a consumer crash. The size of this datastore also provides a safe buffer to absorb any peaks that would not otherwise fit in the space allocated to a Unix FIFO pipe.
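How aggressively redis synchronises to disk is configurable; the relevant redis.conf settings look something like this (values are illustrative, tune them for your workload):

# redis.conf - persistence (illustrative values)
save 60 10000          # RDB snapshot if >= 10,000 writes in 60 seconds
appendonly yes         # enable the AOF for tighter durability
appendfsync everysec   # fsync the AOF roughly once per second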
PHP
Reading from the redis queue and performing the intended task is the next logical step. As this is most likely the most costly part of receiving the message, it makes sense that this step may require multiple workers. Redis operations are atomic, so no two workers can pop the same entry; simply calling LPOP is sufficient. Further integration with subscriptions (pub/sub) might be more efficient, however you would be trading away fault tolerance, or engineering a more complicated solution.
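A minimal worker sketch (again assuming phpredis and the placeholder log_queue list from the reader sketch above):

<?php
// Sketch: atomically pop entries and process them.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

while (true) {
    $entry = $redis->lPop('log_queue');   // atomic: no double delivery
    if ($entry === false) {
        usleep(100000);                   // queue empty, back off 100ms
        continue;
    }
    $entry = ltrim($entry, '~');          // strip the template's leading marker
    list($facility, $priority, $time, $msg) = explode('|', $entry, 4);
    // ... insert into a database, execute a script, etc.
}

phpredis also offers blPop for blocking pops, which would avoid the polling sleep at the cost of holding a connection open per worker.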
Remaining points for consideration:
Encryption - You will most likely want to implement encryption if the data you are transmitting may contain sensitive information. IPsec is a good transparent solution for this.
Authentication - If trusting the sending address is not sufficient for you then you will want to develop an authentication method. Kerberos may be able to help you with this.
--
If you are doing something similar and want to submit a better solution (or improvements), feel free. Feel free to argue with my logic too; there are a few weak points in this solution (e.g. the number of processing steps), however I feel they are justified. I hope this has interested you; I'm not really sure why I wrote about this particular topic, it just felt like it might be useful / interesting.