Crude distillation, the cornerstone of the refining process, involves heating crude oil to separate it into various components based on their boiling points. Optimizing this process is critical for enhancing yield, reducing energy consumption, and ensuring high-quality outputs. Traditional methods often struggle to adapt to the variable nature of crude inputs, particularly with the rising use of unconventional crudes.
Efficient crude distillation unit (CDU) optimization involves many interacting components. A major strategy for running the CDU at its highest economic performance is to combine complementary hardware and software (HW/SW) technologies. Deep reinforcement learning (DRL) is a powerful machine learning technique that can optimize CDU operation against different strategic goals, allowing the focus to shift intelligently and confidently as priorities change. Unlike methods that rely solely on historical data sets, reinforcement learning algorithms learn to make predictions or perform tasks through trial and error, much as humans do. A reinforcement learning agent is given a set of actions it can apply to its environment to earn rewards or reach a certain goal. Each action changes the state of the agent and the environment, and the DRL agent receives rewards based on how much closer its actions bring it to its goal.
DRL agents can start knowing nothing about their environment and selecting random actions. Because reinforcement learning systems figure things out through trial and error, they work best in situations where an action or sequence of events takes effect rapidly and feedback arrives quickly enough to guide the next course of action; there is no need for reams of historical data to crunch through. That makes DRL well suited to hydrocarbon processing optimization tasks, provided the problem is framed with established metrics in the form of inputs, actions, and rewards.
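To make that inputs/actions/rewards framing concrete, the sketch below shows the reinforcement learning loop on a toy CDU model. Everything here is illustrative: the CrudeColumnEnv class, its single controlled variable (a discretized furnace outlet temperature), and the reward (a hypothetical yield term minus an energy penalty) are stand-ins for this example, not an actual refinery model or vendor API.

```python
import random

class CrudeColumnEnv:
    """Toy stand-in for a CDU with one controlled variable and a hidden optimum.

    State:   discretized furnace outlet temperature (index 0..N_STATES-1).
    Actions: 0 = lower setpoint, 1 = hold, 2 = raise setpoint.
    Reward:  hypothetical distillate-yield term minus an energy penalty,
             maximized at an optimum the agent does not know in advance.
    """
    N_STATES = 21   # e.g. 330..370 deg C in 2 deg steps (illustrative)
    OPTIMUM = 13    # index of the (hidden) best setpoint

    def reset(self):
        self.state = random.randrange(self.N_STATES)
        return self.state

    def step(self, action):
        self.state = max(0, min(self.N_STATES - 1, self.state + action - 1))
        # Yield term peaks at the optimum; energy penalty grows with temperature.
        reward = -abs(self.state - self.OPTIMUM) - 0.05 * self.state
        return self.state, reward

def train(episodes=500, steps=50, alpha=0.1, gamma=0.95, eps=0.2):
    env = CrudeColumnEnv()
    q = [[0.0] * 3 for _ in range(env.N_STATES)]   # tabular Q-values
    for _ in range(episodes):
        s = env.reset()
        for _ in range(steps):
            # Trial and error: explore a random action with probability eps.
            a = random.randrange(3) if random.random() < eps \
                else max(range(3), key=lambda x: q[s][x])
            s2, r = env.step(a)
            # Standard Q-learning update from the observed reward.
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

if __name__ == "__main__":
    q = train()
    best = max(range(CrudeColumnEnv.N_STATES), key=lambda s: max(q[s]))
    print(f"Learned preferred setpoint index: {best}")
```

A real application would replace the lookup table with a neural network (the "deep" in DRL) and the toy reward with refinery economics, but the interaction pattern of states, actions, and rewards is the same.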
For all its power, and although it requires no historical data, DRL has one significant weakness that complicates its implementation for hydrocarbon processing across wide operating ranges: it is likely to improve performance only where the pre-trained parameters are already close to yielding the correct process stream qualities. Observed gains may stem not from the training signal itself but from incidental changes in the shape of the distribution curve. There is therefore a need for real-time monitoring of crude oil quality parameters, rather than relying solely on predictions from pre-trained models. This can be achieved with online analyzers installed in the process to determine the chemical composition or physical properties of the substances involved in hydrocarbon processing.
DRL’s effectiveness in hydrocarbon processing is enhanced by real-time monitoring, a function supported by the online MOD-4100 analyzer. This monitoring is crucial for validating the outputs of the predictive models, ensuring that adjustments to process streams are based on accurate, current data.
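One simple pattern for using analyzer readings to validate model outputs is a guard that compares each predicted stream property against its live measurement and blocks the optimizer's moves when they disagree. The sketch below is only an assumption about how such a guard could be wired up: the property names, tolerances, and example values are hypothetical and not taken from the MOD-4100 or any vendor interface.

```python
from dataclasses import dataclass

@dataclass
class QualityReading:
    """A single stream-quality comparison (hypothetical schema)."""
    property_name: str   # e.g. "naphtha_final_boiling_point_C"
    predicted: float     # value from the pre-trained model
    measured: float      # value from the online analyzer
    tolerance: float     # acceptable |predicted - measured| gap

def model_is_trustworthy(readings: list[QualityReading]) -> bool:
    """Allow DRL-proposed setpoint moves only while every predicted
    property agrees with its live analyzer measurement within tolerance."""
    return all(abs(r.predicted - r.measured) <= r.tolerance for r in readings)

# Illustrative values only -- not real process data.
readings = [
    QualityReading("naphtha_final_boiling_point_C", 182.0, 184.5, 3.0),
    QualityReading("kerosene_flash_point_C", 41.0, 47.2, 2.0),
]

if model_is_trustworthy(readings):
    print("Predictions validated; apply DRL setpoint adjustment.")
else:
    print("Prediction/measurement mismatch; hold setpoints and recalibrate.")
```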
It is common practice in refineries to apply simulation software to optimize CDU operation. These programs optimize crude blends and process conditions, helping to maximize the production of the required distillates.
Simulating crude blends whose properties enable the most efficient distillation requires software backed by a large database of assay data covering a wide range of crude oils. Crude distillation is a dynamic process, and efficient CDU operation is achieved only when the quality of the process streams is continuously monitored. The quality of the crude feed fluctuates during distillation, especially in case of crude switching, so process conditions must be adjusted on an ongoing basis whenever the measured crude quality diverges from the simulated prognosis, as in the sketch below.
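A minimal sketch of that reconciliation loop follows, under stated assumptions: the cut-point values, tolerance, gain, and the measure_cutpoint stand-in are all hypothetical placeholders for whatever the refinery's simulation package and online analyzers actually provide.

```python
# Closed-loop reconciliation of simulated vs. measured stream quality.
# All names, values, and the simple proportional correction are
# illustrative assumptions, not an actual refinery interface.

SIMULATED_CUTPOINT_C = 180.0   # cut point the blend simulation predicts
TOLERANCE_C = 1.5              # acceptable simulation/measurement gap
GAIN = 0.5                     # fraction of the error corrected per cycle

def measure_cutpoint() -> float:
    """Placeholder for an online analyzer reading of a distillation cut point."""
    return 183.2   # hypothetical measurement after a crude switch

def reconcile(setpoint: float) -> float:
    """Nudge the operating setpoint toward closing the gap between
    the measured cut point and the simulated prognosis."""
    error = measure_cutpoint() - SIMULATED_CUTPOINT_C
    if abs(error) <= TOLERANCE_C:
        return setpoint             # simulation still matches reality
    return setpoint - GAIN * error  # proportional correction

setpoint = 352.0                    # e.g. furnace outlet temperature, deg C
print(f"Adjusted setpoint: {reconcile(setpoint):.1f} C")
```

In practice the correction would go through the unit's control system rather than a direct assignment, but the principle is the same: the measured quality, not the simulated prognosis alone, drives the adjustment.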