Introduction

As globalization accelerates, societal activities are becoming more complex, leading to frequent occurrences of various emergency events. These events underscore the crucial role of emergency logistics in ensuring rapid response and timely resource allocation [1–4]. However, the current emergency logistics system faces significant challenges, such as information asymmetry, a lack of process transparency, and difficulties in assigning responsibilities, all of which severely impact its efficiency and reliability [5–7]. In response to these challenges, consortium blockchain technology, with its distinctive features of decentralization, tamper-proof data, and high transparency, offers a novel solution. In particular, it demonstrates immense potential for enhancing collaboration among multiple organizations, ensuring the accuracy of data verification, and improving the transparency of the entire process. This study explores the application of consortium blockchain technology to the traceability of emergency logistics information, focusing on optimizing the data provenance model and the consensus mechanism design to better meet the practical needs of emergency logistics. Kumar et al. (2023) highlight how advanced blockchain applications can further enhance process transparency and operational efficiency across complex logistical frameworks, making a significant impact on the management of emergency events [8]. Zhang Y and others have elaborated on the importance of quickly deploying key resources in emergency events, while also pointing out the problems of information asymmetry and lack of process transparency [9–12]. Chen B and others have further investigated how these issues lead to resource wastage and delayed responses in practical operations; blockchain technology is considered a potential solution to these problems [13–15]. Liu D and others have conducted in-depth discussions, particularly analyzing how the decentralized and immutable nature of blockchain can enhance data security across the overall supply chain [16–18]. Specifically, consortium blockchain, with its semi-public, multi-party managed model, has been shown by Liao C to be particularly effective for data sharing and verification [19, 20]. However, there are challenges in directly applying it to emergency logistics. Pan H Y has noted that in environments with high data turnover, the consensus mechanism of a consortium blockchain needs to be optimized to ensure efficiency and security [21–25].

Building upon existing research in the field of emergency logistics, this study delves into the potential of consortium blockchain technology for enhancing the traceability of emergency logistics information and optimizing the consensus mechanism. In contrast to prior research, our work focuses on how consortium blockchain technology can improve the transparency and efficiency of information flow in emergency logistics. Additionally, we have conducted an in-depth study on optimizing the consensus mechanism of the consortium blockchain to improve the efficiency and security of the emergency logistics system in high data turnover environments. These aspects have not been thoroughly explored in previous studies, making our research innovative and significant both theoretically and practically.
Not only does our study propose new solutions, but it also validates their effectiveness through simulation experiments, providing valuable references for future improvements in emergency logistics systems.

Materials and methods

Requirement quantification and model design

In the contemporary landscape, emergency logistics plays a pivotal role, especially in responding to sudden events [26–29]. From natural disasters to public health emergencies, the rapid and effective distribution and management of resources are crucial for mitigating impacts and safeguarding public safety [30–33]. Despite this, existing emergency logistics systems frequently encounter challenges such as inefficient resource allocation, resource wastage, and the potential failure of rescue missions, often exacerbated by a lack of transparency and information asymmetry. To overcome these challenges, the integration of consortium blockchain technology proves essential [34–37]. Establishing the Emergency Logistics Information Traceability Model (ELITM-CBT) based on consortium blockchain not only enhances the transparency and accuracy of information management but also significantly improves the operational speed and efficiency of emergency logistics systems. Drawing on the findings of Zhu et al., our model underscores the potential of blockchain to streamline these processes by ensuring data immutability and facilitating real-time updates, which are critical for effective decision-making during crises [38–40]. Additionally, incorporating methodologies from Korkmaz and Erkayman enhances our model's capacity to optimize resource allocation and boost the responsiveness of emergency logistics, further contributing to its efficiency [41].

Requirements quantification. Emergency logistics, as a unique form of logistics, involves multiple participating entities, various types of goods, and numerous stages of circulation and management. Efficient and accurate emergency logistics are crucial for achieving rapid response and optimal resource allocation in specific emergencies or disaster response processes. However, the transparency, accuracy, and timeliness of emergency logistics information are often affected by various factors. To tackle these challenges, we have developed the Emergency Logistics Information Traceability Model (ELITM-CBT) based on consortium blockchain technology. This model leverages the distributed, decentralized, tamper-proof, and highly transparent characteristics of a consortium blockchain to significantly enhance the accuracy and efficiency of information management. By applying consortium blockchain technology in emergency logistics, we not only address the issues of information asymmetry and process opacity but also achieve efficient and fair resource allocation, ensuring the timeliness and effectiveness of emergency logistics responses. To understand the construction needs of the information tracing model in emergency logistics under consortium blockchain technology, it is first necessary to detail the actual operational processes of emergency logistics and their key parameters.

Demand Analysis and Identification: After the occurrence of an emergency, it is necessary to accurately identify the types, quantities, timing, and locations of the required materials. Let the set of required materials be D = {D1, D2, …, Dn}, where each material Di includes quantity Qi, type Ti, demand location Pi, and demand time ti.
Resource Pool Establishment: The materials involved in emergency logistics are distributed across multiple storage points. Let the set of storage points be W = {W1, W2, …, Wm}, where each storage point Wj includes material types Tj, quantities Qj, geographic locations Pj, and available allocation times tj.

Logistics Path Planning: Based on the demands and resources, rational logistics paths need to be planned. Let the set of logistics paths be R = {R1, R2, …, Rk}, where each path Rl includes starting point Sl, endpoint El, set of waypoints Nl, estimated duration Tl, cost Cl, and risk factors Fl.

Information Recording and Verification: During the logistics process, the circulation information of materials needs to be recorded and verified in real time. Let the set of logistics information records be L = {L1, L2, …, Lo}, where each record Lp includes material ID IDp, circulation start Sp, circulation end Ep, circulation time tp, circulation cost Cp, and circulation status Stp.

Data On-Chain and Consensus: Upload the logistics information records to the blockchain and verify them through the consortium blockchain's consensus mechanism. Let the set of data blocks be B = {B1, B2, …, Bo}, where each block Br includes a set of information records Lr, generation time tr, the hash value of the previous block Hr−1, the current block hash value Hr, and consensus verification time CTr.

Real-Time Monitoring and Dispatch Decision: Based on the on-chain data, monitor the logistics process in real time and make dispatch decisions. For example, if a blockage is detected on a logistics path Rl, re-planning of the path can be initiated.

Information Traceability and Emergency Response: During or after the logistics process, information can be traced through the blockchain to respond to sudden changes or for post-event analysis. For example, if there is an issue with a material Di, its circulation path Rl, storage point Wj, and information record Lp can be traced back.
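As a concrete illustration of the entity definitions above, the following minimal sketch represents demand points, storage points, routes, logistics records, and data blocks as simple data structures, with a hash link between consecutive blocks. All class and field names are illustrative choices made for this sketch, not identifiers defined in the paper.

```python
# Illustrative data structures for the ELITM-CBT entities described above.
# Class and field names are hypothetical; comments map them to the paper's symbols.
from dataclasses import dataclass
from typing import List
import hashlib
import json


@dataclass
class Demand:                        # D_i
    demand_id: int
    quantity: float                  # Q_i
    material_type: str               # T_i
    location: str                    # P_i
    required_time: float             # t_i
    priority: int = 1                # Pr_i


@dataclass
class StoragePoint:                  # W_j
    point_id: int
    material_type: str               # T_j
    quantity: float                  # Q_j
    location: str                    # P_j
    available_time: float            # t_j


@dataclass
class Route:                         # R_l
    route_id: int
    start: str                       # S_l
    end: str                         # E_l
    waypoints: List[str]             # N_l
    duration: float                  # T_l
    cost: float                      # C_l
    risk: float                      # F_l


@dataclass
class LogisticsRecord:               # L_p
    material_id: str                 # ID_p
    start: str                       # S_p
    end: str                         # E_p
    transit_time: float              # t_p
    transit_cost: float              # C_p
    status: str                      # St_p


@dataclass
class Block:                         # B_r
    records: List[LogisticsRecord]   # L_r
    timestamp: float                 # t_r
    prev_hash: str                   # H_{r-1}
    block_hash: str = ""             # H_r

    def compute_hash(self) -> str:
        # Hash the block contents together with the previous block's hash;
        # this link is what makes the recorded logistics data tamper-evident.
        payload = json.dumps(
            {"records": [vars(r) for r in self.records],
             "timestamp": self.timestamp,
             "prev_hash": self.prev_hash},
            sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()
```

Because any verifying node can recompute a block's hash from its contents and the previous hash, a mismatch immediately reveals tampering with the recorded logistics information.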
Model assumptions explanation. To construct a viable and precise emergency logistics information traceability model based on consortium blockchain technology, it is necessary to first establish a set of assumptions and constraints. These lay the foundation for subsequent model construction and algorithm design. The assumptions and constraints are analyzed from the aspects of logistics demand, logistics resources, route planning, information flow, and data uploading to the blockchain.

Demand stability assumption. In a short time window (e.g., the initial phase of emergency response), we assume that the demand Di, in terms of quantity Qi, type Ti, location Pi, and time ti, is relatively stable.

Demand priority assumption. Depending on the urgency of the emergency response, different demands Di have different priorities Pri.

Resource schedulability assumption. The materials at storage point Wj are available within a feasible allocation time tj, and resource scheduling can be completed within a certain time frame.

Resource completeness assumption. The type Tj and quantity Qj of materials at each storage point Wj are known and accurate.

Path connectivity assumption. There is at least one logistics path Rl between any two storage points Wa and Wb.

Path cost constraint. For each logistics path Rl, its cost Cl must not exceed a predetermined cost ceiling Cmax.

On-chain timeliness assumption. Logistics information records Lp must be uploaded to the blockchain within a certain time tup after generation.

Consensus mechanism assumption. Nodes in the consortium blockchain follow a uniform consensus mechanism and can validate and reach consensus on a data block Br within a certain time CTr.

Monitoring completeness assumption. Any anomalies or changes in the logistics process can be captured by the real-time monitoring system.

Decision-making real-time assumption. When anomalies are detected during the logistics process, dispatch decisions can be made and executed swiftly.

Model construction. To address the issues mentioned above, we have developed an information traceability model for emergency logistics under the framework of consortium blockchain technology. The integration of consortium blockchain technology is pivotal in enhancing the emergency logistics system by ensuring data immutability and streamlining the information verification process. This study examines blockchain-related factors, especially how the time of recording to the blockchain, the consensus time, and the data blocks influence the efficiency of the model and the accuracy of its results, and analyzes how blockchain technology improves the traceability and reliability of logistics information in emergency logistics management.

Symbol Explanation:
Di: the ith logistics demand point;
Qi, Ti, Pi, ti, Pri: the demand quantity, type, location, time, and priority of demand point Di, respectively;
Wj: the jth warehouse point;
Qj, Tj, tj: the inventory quantity, type, and available dispatch time of warehouse point Wj, respectively;
Rl: the lth logistics route;
Tl, Cl: the estimated duration and cost of logistics route Rl, respectively;
Lp: the pth logistics information record;
IDp, Sp, Ep, tp, Cp, Stp: the item ID, starting point, endpoint, transit time, transit cost, and transit status of logistics information record Lp, respectively;
tup, CTr, Br: the time of recording to the blockchain, the consensus time, and the data block, respectively;
Xijl: a decision variable indicating whether materials are sent from warehouse point Wj to demand point Di via route Rl; Xijl = 1 if materials are sent, and 0 otherwise.

Establishing the Model. Our objective is to minimize logistics distribution costs and transit times, satisfy demand priorities, and ensure the accuracy and timeliness of information traceability. To this end, we have constructed a multi-objective optimization function, Eq (1), subject to constraints (2)–(9). In Eq (1), the weight coefficients α, β, γ, and δ balance the importance of the various objectives; Eq (2) ensures that the demands of all demand points are met; Eq (3) ensures that the supplies at the storage points are not over-allocated; Eq (4) ensures that at least one path is selected between each demand point and storage point; Eq (5) ensures that the total cost does not exceed the budget; Eq (6) ensures that logistics information is recorded on the blockchain in a timely manner; Eq (7) ensures that consensus on each data block is achieved within a certain time frame; Eq (8) ensures the integrity and authenticity of the logistics information.
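Because Eqs (1)–(9) are described above only in words, the following LaTeX sketch gives one plausible formulation consistent with those descriptions. It is an assumed reconstruction rather than the paper's exact equations: the shipped-quantity coefficients q_{ijl}, the priority-weighted time term, the on-chain recording variable y_p, the recording-delay quantity t^rec_p, and the form of constraint (9) (binary decision variables) are all illustrative assumptions.

```latex
% Illustrative reconstruction of Eqs (1)-(9); terms noted in the lead-in are assumptions.
\begin{align}
\min\; Z &= \alpha \sum_{i,j,l} C_l X_{ijl}
          + \beta  \sum_{i,j,l} T_l X_{ijl}
          + \gamma \sum_{i,j,l} Pr_i\, T_l X_{ijl}
          + \delta \sum_{p} (1 - y_p) \tag{1}\\
\text{s.t.}\quad
  \sum_{j,l} q_{ijl} X_{ijl} &\ge Q_i \quad \forall i \tag{2}\\
  \sum_{i,l} q_{ijl} X_{ijl} &\le Q_j \quad \forall j \tag{3}\\
  \sum_{l} X_{ijl} &\ge 1 \quad \forall\,(i,j)\ \text{with an assignment} \tag{4}\\
  \sum_{i,j,l} C_l X_{ijl} &\le C_{\max} \tag{5}\\
  t^{\mathrm{rec}}_{p} &\le t_{up} \quad \forall p \tag{6}\\
  CT_r &\le CT_{\max} \quad \forall r \tag{7}\\
  H_r &= \operatorname{Hash}(L_r, t_r, H_{r-1}) \quad \forall r \tag{8}\\
  X_{ijl} &\in \{0,1\}, \quad y_p \in \{0,1\} \tag{9}
\end{align}
```

In this reading, Eq (1) is a weighted sum of transport cost, transport time, priority-weighted delivery time, and an information-traceability penalty for records not placed on chain in time.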
Algorithm design. Having constructed an emergency logistics information tracing model under consortium blockchain technology, we recognized the need for a tailored algorithm to address the model's complex, multi-objective nature. We designed a Hybrid Genetic Algorithm and Simulated Annealing (HGASA) method that combines the strengths of Genetic Algorithms (GA) and Simulated Annealing (SA) to address the specific needs of the ELITM-CBT model. A Genetic Algorithm is a heuristic search method that mimics the process of natural selection to solve optimization problems, while Simulated Annealing is a probabilistic technique that simulates the cooling process of physical annealing. The hybrid algorithm combines the global search capability of GA with the local search strength of SA, aiming to escape local optima and enhance solution quality. The algorithmic steps are as follows:

Initialization. Generate an initial population of size N. Each individual comprises decision variables xijl and ypl, representing the logistics distribution and information recording strategies. Domain-specific heuristic knowledge is introduced to guide the generation of the initial population towards promising solution regions, accelerating convergence and increasing the probability of finding optimal solutions.

Fitness Evaluation. For each individual Ik, compute the value of the objective function Z(Ik) and derive its fitness. This incorporates the weight factors for cost, time, priority, and information accuracy, denoted α, β, γ, δ.

Selection. Apply roulette wheel selection: individuals are chosen based on their fitness, with fitter individuals having a greater probability of being selected.

Crossover. Randomly select pairs of individuals for crossover. Each pair undergoes single- or multi-point crossover at a given crossover probability Pc, producing new offspring. Adaptive methods dynamically adjust the crossover and mutation probabilities based on the algorithm's progress, enhancing the population's exploration and exploitation capabilities.

Mutation. Mutate gene positions of individuals at a certain mutation probability Pm to enhance population diversity.

Simulated Annealing Adjustment. Conduct a local search on selected individuals. For each Ik: randomly perturb a decision variable to generate a new individual I′k; compute the objective function value Z(I′k); then apply the simulated annealing acceptance criterion: if I′k improves the objective, accept it and replace Ik with I′k; otherwise, accept it with a probability that depends on the temperature parameter T and the difference ΔZ = Z(I′k) − Z(Ik). Finally, update the temperature parameter to T × cooling rate, where the cooling rate is typically less than 1.

Replacement. Implement an elitist strategy, retaining a certain proportion of the fittest individuals in the current population while replacing the rest with newly generated individuals. This elite retention mechanism ensures that optimal solutions are not lost across generations.

Termination Condition Check. If the maximum number of generations is reached or the change in fitness falls below a preset threshold, the algorithm stops; otherwise, repeat steps 2 to 8.

The HGASA algorithm capitalizes on GA's global search capability and SA's local search ability. Through its crossover, mutation, and simulated annealing adjustments, it addresses the multi-objective optimization problem of emergency logistics information tracing under consortium blockchain technology. The innovation lies in refining GA results with SA, improving solution quality and avoiding local optima. In terms of complexity, the algorithm's running time depends on the population size N, the number of generations G, the length of individual genes L, and the number of annealing steps S in SA; the overall time complexity is approximately O(N ⋅ G ⋅ L ⋅ S), while the space complexity is dominated by population storage, estimated as O(N ⋅ L).
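The following minimal sketch illustrates the HGASA loop described above: GA-style selection, crossover, and mutation combined with an SA-style Metropolis acceptance step and elitist retention. It assumes a user-supplied objective function Z to be minimized over flat binary vectors standing in for the decision variables xijl and ypl; all parameter names and default values are illustrative, and the fitness-change termination test is omitted for brevity.

```python
# Minimal HGASA sketch: GA global search refined by an SA acceptance step.
# Z is assumed to be a user-supplied objective to MINIMIZE over binary vectors;
# all parameters and defaults are illustrative, not those used in the paper.
import math
import random
from typing import Callable, List

Individual = List[int]


def hgasa(Z: Callable[[Individual], float],
          gene_length: int,
          pop_size: int = 50,
          generations: int = 200,
          pc: float = 0.8,          # crossover probability
          pm: float = 0.05,         # mutation probability
          T0: float = 1.0,          # initial SA temperature
          cooling: float = 0.95) -> Individual:
    # 1. Initialization: random binary individuals (heuristic seeding could go here).
    pop = [[random.randint(0, 1) for _ in range(gene_length)] for _ in range(pop_size)]
    T = T0

    def roulette(pop, fits):
        # Fitness for a minimization objective: lower Z gives a larger selection weight.
        weights = [1.0 / (1.0 + f) for f in fits]
        return random.choices(pop, weights=weights, k=2)

    best = min(pop, key=Z)
    for _ in range(generations):
        fits = [Z(ind) for ind in pop]
        new_pop = [min(pop, key=Z)[:]]          # elitist retention of the best individual
        while len(new_pop) < pop_size:
            p1, p2 = roulette(pop, fits)
            c1, c2 = p1[:], p2[:]
            if random.random() < pc:            # single-point crossover
                cut = random.randrange(1, gene_length)
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for child in (c1, c2):
                for g in range(gene_length):    # bit-flip mutation
                    if random.random() < pm:
                        child[g] ^= 1
                # SA adjustment: perturb one gene and accept by the Metropolis criterion.
                cand = child[:]
                cand[random.randrange(gene_length)] ^= 1
                dz = Z(cand) - Z(child)
                if dz <= 0 or random.random() < math.exp(-dz / max(T, 1e-9)):
                    child = cand
                new_pop.append(child)
        pop = new_pop[:pop_size]
        T *= cooling                             # temperature update
        best = min(best, *pop, key=Z)
    return best
```

In practice, Z would evaluate the weighted objective of Eq (1) and add large penalties for violations of constraints (2)–(9), so that infeasible distribution or recording strategies are driven out of the population.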
Optimization of consensus mechanisms

Applying consortium blockchain technology to emergency logistics information tracing presents several challenges, with the selection of an appropriate consensus algorithm being the most formidable. A comparative analysis of mainstream consensus mechanisms (Table 1) shows that Proof of Work (PoW) and Proof of Stake (PoS), which suit public blockchains with large numbers of nodes, are not ideal for ELITM-CBT because of their significant computational power requirements, complex network configurations, and token mechanisms.

Table 1. Comparative analysis of mainstream consensus mechanism features. https://doi.org/10.1371/journal.pone.0303143.t001

The Practical Byzantine Fault Tolerance (PBFT) and Proof of Authority (PoA) algorithms, commonly used in consortium blockchains, face constraints in terms of the number of participating nodes and data consistency, making them unsuitable for the consortium-blockchain-based ELITM-CBT. To overcome these challenges, and considering that the emergency supply entities acting as primary nodes change frequently, we have developed a consensus algorithm specifically tailored to ELITM-CBT, named C-PBFT. This algorithm enhances PBFT by integrating the primary node rotation concept of the Clique algorithm, improving adaptability to dynamic network conditions and strengthening data consistency guarantees.

Selection of the primary node. In the C-PBFT consensus mechanism, the primary node is re-selected at each target block height. In addition, C-PBFT incorporates a view mechanism into the primary node calculation to handle Byzantine faults that may arise at the primary node. The primary node is calculated according to Eq (10), where Pi denotes the primary node, h the block height, v the view number, and |R| the number of authoritative nodes.
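Eq (10) itself is not reproduced above; a plausible form, consistent with the description of block-height-driven rotation combined with a view offset over the |R| authoritative nodes, is the following. This is an assumed reconstruction for illustration, not necessarily the paper's exact formula.

```latex
% Assumed reconstruction of the primary-node selection rule of Eq (10):
% rotation by block height h, shifted by view number v, over |R| authoritative nodes.
P_i = (h + v) \bmod |R| \tag{10}
```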
Block verification. In the C-PBFT consensus mechanism, ELITM-CBT clients send a request to the primary node, and the primary node currently in rotation then proposes a block. The backup nodes go through two rounds of voting to validate and confirm the block before it is finally committed to the chain (Fig 1).

Fig 1. Block verification and commit process. The figure illustrates the process of block verification in the C-PBFT consensus mechanism. A: Client Request phase. B: Pre-Preparation phase. C: Preparation phase. D: Commit Preparation phase. E: Commit and Chain phase. https://doi.org/10.1371/journal.pone.0303143.g001

Client Request: The client c sends a request to the primary node (node 0).
Pre-Preparation: Upon receiving the request, the primary node broadcasts a pre-prepare message to the backup nodes at the current block height h.
Preparation: After receiving the pre-prepare message, the backup nodes save and validate it within the allowed time frame and then broadcast a prepare message.
Commit Preparation: During the prepare phase, if a node receives 2f + 1 matching prepare messages, it broadcasts a commit message to the other nodes and enters the commit phase.
Commit and Chain: In the final commit phase, if a node receives 2f + 1 commit messages from other nodes, it commits the block to the chain.

The C-PBFT consensus mechanism requires that the total number of nodes in the network satisfies N ≥ 3f + 1, which means f ≤ ⌊(N − 1)/3⌋. This implies that the model can tolerate up to ⌊(N − 1)/3⌋ faulty nodes. When the primary node is honest and there are at most ⌊(N − 1)/3⌋ faulty nodes, the C-PBFT consensus mechanism continues to operate normally. This ensures the fault tolerance and stability of the ELITM-CBT system.

View change mechanism. When the primary node encounters an error, a view change mechanism is triggered. With each update of the block height, all nodes start a timer. If a backup node does not receive a message from the primary node, or if the message from the primary node is erroneous and consensus cannot be reached within the allotted time, the backup node broadcasts a view-change message. Subsequently, the nodes broadcast a prepare message in the new view. If nodes collect prepare messages from different views and find that 2f + 1 nodes cannot enter the prepare phase, they do not execute the request message. At this point, the calculation of a new primary node is also initiated. The process for triggering the view change mechanism when the primary node is faulty is illustrated in Fig 2.

Fig 2. C-PBFT consensus mechanism and view change flowchart. https://doi.org/10.1371/journal.pone.0303143.g002

C-PBFT consensus process. The C-PBFT consensus process is divided into three main phases: pre-prepare, prepare, and commit. The primary node is rotated under two conditions: normal cyclical rotation, and replacement of the primary node through the view change mechanism when it is non-responsive or behaves erroneously. The main steps of the C-PBFT consensus process are as follows:

Client Request: The client (c) sends a message to the nodes with the structure 〈request, o, t, c〉, where 'request' is the content, 'o' the requested operation, 't' the timestamp, and 'c' the client's identifier, together with the client's signature.
Primary Node Broadcast: Each node calculates whether it is the primary node for the current target block height. If so, it verifies the client's signature; if the signature is invalid, the request is discarded. If it is valid, the primary node broadcasts 〈pre-prepare, h, v, d, r〉 to the backup nodes, where 'h' is the block height, 'v' the view number, and 'd' the digest of the request 'r', together with the primary node's signature.
Backup Node Validation: If a node is not the primary, it waits for a 〈pre-prepare, h, v, d, r〉 message from the primary.
Upon receipt, it validates the message (verifying the primary's signature, the consistency of 'd' with the request digest, and the consistency of 'v' across messages). If validation fails, the message is discarded; if it passes, the backup node saves and broadcasts 〈prepare, h, v, d, i〉, where 'i' is the backup node's identifier, together with the backup node's signature, and then awaits prepare votes.
Commit Broadcast: After receiving n ≥ 2f + 1 prepare votes, the node broadcasts 〈commit, h, v, d, i〉 to the other nodes and waits for their 〈commit, h, v, d, i〉 messages.
Block Commit: Upon receiving n ≥ 2f + 1 commit votes, the node proceeds with the block commitment operation.
Consensus Interruption and View Change: During the consensus process, if a backup node finds the primary non-responsive within the specified time or consensus is not achieved, it broadcasts 〈view change, h, v, i〉 to the other nodes and waits for votes. If the primary node is faulty, the view change mechanism described above is initiated. When n ≥ 2f + 1 view change votes are received, the primary node and the corresponding view number are updated.
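To tie the phases together, the sketch below simulates a single C-PBFT-style consensus round in one process, applying the N ≥ 3f + 1 requirement and the 2f + 1 quorum rule from the steps above. Signatures, message timers, and real networking are deliberately abstracted away, and all identifiers are illustrative; a faulty primary is simply reported, standing in for the point at which the view change mechanism described above would take over.

```python
# Simplified single-process simulation of one C-PBFT-style consensus round.
# Signatures, timers, and networking are abstracted away; the sketch only
# illustrates the pre-prepare / prepare / commit quorum logic described above.
from dataclasses import dataclass, field
from typing import List, Set


@dataclass
class Node:
    node_id: int
    faulty: bool = False
    prepare_votes: Set[int] = field(default_factory=set)
    commit_votes: Set[int] = field(default_factory=set)
    committed: bool = False


def primary_id(h: int, v: int, n_nodes: int) -> int:
    # Assumed rotation rule (see the note on Eq (10) above).
    return (h + v) % n_nodes


def consensus_round(nodes: List[Node], h: int, v: int) -> bool:
    n = len(nodes)
    f = (n - 1) // 3                 # max tolerable faulty nodes, from N >= 3f + 1
    quorum = 2 * f + 1
    primary = nodes[primary_id(h, v, n)]
    if primary.faulty:
        return False                 # in C-PBFT this would trigger a view change

    # Pre-prepare: the primary broadcasts the proposed block (digest omitted here).
    # Prepare: every honest node validates the proposal and broadcasts a prepare vote.
    for voter in nodes:
        if not voter.faulty:
            for receiver in nodes:
                receiver.prepare_votes.add(voter.node_id)

    # Commit: nodes holding 2f + 1 prepare votes broadcast commit votes.
    for voter in nodes:
        if not voter.faulty and len(voter.prepare_votes) >= quorum:
            for receiver in nodes:
                receiver.commit_votes.add(voter.node_id)

    # Chain: nodes holding 2f + 1 commit votes commit the block locally.
    for node in nodes:
        if not node.faulty and len(node.commit_votes) >= quorum:
            node.committed = True
    return sum(node.committed for node in nodes) >= quorum


if __name__ == "__main__":
    # Example: 7 nodes, of which 2 backups are faulty (f = (7 - 1) // 3 = 2).
    cluster = [Node(i, faulty=(i in {3, 5})) for i in range(7)]
    print(consensus_round(cluster, h=1, v=0))   # expected: True
```

Running the example with seven nodes and two faulty backups (f = 2) still reaches the 2f + 1 = 5 commit quorum, matching the fault tolerance bound discussed above.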
By applying consortium blockchain technology in emergency logistics, we not only address the issues of information asymmetry and process opacity but also achieve efficient and fair resource allocation, ensuring the timeliness and effectiveness of emergency logistics responses. To deeply understand the construction needs of the information tracing model in emergency logistics under consortium blockchain technology, it is first necessary to comprehensively detail the actual operational processes of emergency logistics and its key parameters. Demand Analysis and Identification:After the occurrence of an emergency, it is necessary to accurately identify the types, quantities, timing, and locations of the required materials. Let the set of required materials be, D = {D1, D2, …, Dn} where each materiali Di ncludes quantity Qi, type, Ti demand location, Pi and demand time ti. Resource Pool Establishment:The materials involved in emergency logistics are distributed across multiple storage points. Let the set of storage points be W = {W1, W2, …, Wm} where each storage point Wj includes material types Tj, quantities Qj, geographic locations Pj, and available allocation times tj. Logistics Path Planning:Based on the demands and resources, rational logistics paths need to be planned. Let the set of logistics paths be R = {R1, R2, …, Rk}where each path Rl includes starting point Sl, endpoint El, set of waypoints Nl estimated duration Tl cost Cl, and risk factors Fl. Information Recording and Verification:During the logistics process, the circulation information of materials needs to be recorded and verified in real-time. Let the set of logistics information records be L = {L1, L2, …, Lo} where each record Lp, includes material IDp, circulation start, Sp circulation end, Ep circulation time tp, circulation cost, Cp and circulation status Stp. Data On-Chain and Consensus:Upload the logistics information records to the blockchain and verify them through the consortium blockchain’s consensus mechanism. Let the set of data blocks be, B = {B1, B2, …, Bo} where each block Br includes a set of information records, Lr generation time tr, hash value of the previous block Hr−1, current block hash value Hr, and consensus verification time CTr. Real-Time Monitoring and Dispatch Decision: Based on the on-chain data, monitor the logistics process in real-time and make dispatch decisions. For example, if a blockage is detected in a logistics path, Rl a re-planning of the path can be initiated. Information Traceability and Emergency Response:During or after the logistics process, information can be traced through the blockchain to respond to sudden changes or for post-event analysis. For example, if there is an issue with a material, Di its circulation path, Rl storage point, Wj and information record Lp can be traced back. Model assumptions explanation. To construct a viable and precise emergency logistics information traceability model based on consortium blockchain technology, it is necessary to first establish a set of assumptions and constraints. These will lay the foundation for subsequent model construction and algorithm design. The assumptions and constraints are analyzed in detail from the aspects of logistics demand, logistics resources, route planning, information flow, and data uploading to the blockchain. Demand stability assumption. 
In a short time window (e.g., the initial phase of emergency response), we assume that the demand Di in terms of quantity Qi, type Ti, location Pi, and time ti is relatively stable. Demand priority assumption. Depending on the urgency of the emergency response, different demands Di have different priorities Pri. Resource schedulability assumption. It is assumed that the materials at the storage point Wj are available within a feasible allocation time, tj and resource scheduling can be completed within a certain time frame. Resource completeness assumption. It is assumed that the type Tj and quantity Qj of materials at each storage point kW Wj nown and accurate. Path connectivity assumption. It is assumed that there is at least one logistics path Rl between any two storage points Wa and Wb. Path cost constraint. For each logistics path Rl, its cost Cl must meet a predetermined cost ceiling Cmax. On-chain timeliness assumption. Logistics information records Lp need to be uploaded to the blockchain within a certain time tup after generation. Consensus mechanism assumption. Nodes in the consortium blockchain follow a uniform consensus mechanism, able to validate and reach consensus on data block Br within a certain time CTr. Monitoring completeness assumption. Any anomalies or changes in the logistics process can be captured by the real-time monitoring system. Decision-making real-time assumption. In the event of anomalies detected during the logistics process, dispatch decisions can be made and executed swiftly. Model construction. To address the issues mentioned above, we have developed a sophisticated information traceability model for emergency logistics under the framework of consortium blockchain technology. This integration of consortium blockchain technology is pivotal in enhancing the emergency logistics system by ensuring data immutability and streamlining the information verification process. This study delves into blockchain-related factors, especially how the time of recording to the blockchain, consensus time, and data blocks influence the efficiency of the model operation and the accuracy of the results. We meticulously analyze the application of blockchain technology to improve the traceability and reliability of logistics information, showcasing the potential of blockchain in revolutionizing emergency logistics management. Symbol Explanation: Di: The ith logistics demand point; Qi, Ti, Pi, ti, Pri: Represent the demand quantity, type, location, time, and priority of the demand point Di respectively; Wj: The jth warehouse point; Qj, Tj, tj: Represent the inventory quantity, type, and available dispatch time of the warehouse point Wj respectively; Rl: The lth logistics route; Tl, Cl: Represent the estimated duration and cost of the logistics route Rl, respectively; Lp: The pth logistics information record; IDp, Sp, Ep, tp, Cp, Stp: Represent the item ID, starting point, endpoint, transit time, transit cost, and transit status of the logistics information record Lp, respectively; tup, CTr, Br: Represent the time of recording to the blockchain, consensus time, and data block, respectively; Xijl: A decision variable, indicating whether to send materials from warehouse point Wj to demand point Di via route Rl. Xijl = 1 if materials are sent, and 0 otherwise. Establishing the Model. Our objective is to optimize logistics distribution costs, meet demand priorities, reduce transit times, and ensure the accuracy and timeliness of information traceability. 
To this end, we have constructed a complex multi-objective optimization function(1): (1) Subject to: (2) (3) (4) (5) (6) (7) (8) (9) In Eq (1), the weight coefficients α, β, γ, and δ are used to balance the importance of various objectives; Eq (2) ensures that the demands of all demand points are met; Eq (3) ensures that the supplies at the storage points are not over-allocated; Eq (4) ensures that at least one path is selected between each demand point and storage point; Eq (5) ensures that the total cost does not exceed the budget; Eq (6) ensures that logistics information is timely recorded on the blockchain; Eq (7) ensures that consensus on the data block is achieved within a certain time frame; Eq (8) ensures the integrity and authenticity of logistics information. Algorithm design. Having constructed an emergency logistics information tracing model under consortium blockchain technology, we recognized the necessity for a uniquely tailored algorithm to address this model’s complex, multi-objective challenges. We innovatively designed and utilized a Hybrid Genetic Algorithm and Simulated Annealing (HGASA), specifically combining the strengths of Genetic Algorithms (GA) and Simulated Annealing (SA) to uniquely address the specific needs of the ELITM-CBT model, showcasing a novel approach in optimizing emergency logistics processes. Genetic Algorithm is a heuristic search algorithm that mimics the process of natural selection to solve optimization problems. Simulated Annealing, on the other hand, is a probabilistic technique that simulates the cooling process of physical annealing to solve optimization issues. The hybrid algorithm amalgamates the global search capabilities of GA with the local search prowess of SA, aiming to circumvent local optima and enhance solution quality. The algorithmic steps are as follows: Initialization. Generate an initial population of size N. Each individual comprises decision variables xijl and ypl representing logistics distribution and information recording strategies. Additionally, we introduce domain-specific heuristic knowledge to guide the generation of the initial population towards potential optimal solution regions, accelerating convergence speed and increasing the probability of finding optimal solutions; Fitness Evaluation. For each individual Ik, compute the value of the objective function Z(Ik) leading to a fitness evaluation. This incorporates weight factors for cost, time, priority, and information accuracy, denoted as α, β, γ, δ; Selection. Apply the roulette wheel selection method. Individuals are chosen based on their fitness levels, with higher fitness individuals having a greater probability of being selected; Crossover. Randomly select pairs of individuals for crossover. Each pair undergoes single or multi-point crossover operations at a given crossover probability Pc producing new offspring.Adaptive methods are employed to dynamically adjust the crossover and mutation probabilities based on the algorithm’s progress, enhancing the population’s exploration and exploitation capabilities; Mutation. Mutate gene positions of individuals at a certain mutation probability Pm to enhance population diversity; Simulated Annealing Adjustment.Conduct local search on selected individuals. 
For each Ik, perform the following: Randomly tweak a decision variable to generate a new individual ; Compute the objective function value ; Determine acceptance of the new individual using simulated annealing criteria: if , accept the new individual, replacing Ik with ; if , accept the new individual with a certain probability that varies with the temperature parameter T and the difference ; Update the temperature parameter to T × cooling rate where the cooling rate is typically less than 1. Replacement. Implement an elitist strategy, retaining a certain proportion of the fittest individuals in the current population, while replacing the rest with newly generated individuals.This elite retention mechanism ensures that optimal solutions are not lost over generations. Termination Condition Check. Verify if termination criteria are met: if the maximum number of generations is reached or the change in fitness is below a preset threshold, the algorithm stops. Repeat steps 2 to 8 until these conditions are satisfied. The HGASA algorithm capitalizes on GA’s global search capability and SA’s local search ability. Through intricately designed crossover, mutation, and simulated annealing adjustments, it addresses the multi-objective optimization problems of emergency logistics information tracing under consortium blockchain technology. The innovation lies in enhancing GA results with SA optimization, improving solution quality and avoiding local optima. In terms of complexity analysis, the algorithm’s time complexity depends on population size N, number of generations G, length of individual genes L, and the number of annealing steps S in SA. Thus, the overall time complexity is approximately O(N ⋅ G ⋅ L ⋅ S) while the space complexity mainly depends on population storage, estimated as O(N ⋅ L). Requirements quantification. Emergency logistics, as a unique form of logistics, involves multiple participating entities, various types of goods, and numerous stages of circulation and management. Efficient and accurate emergency logistics are crucial for achieving rapid response and optimal resource allocation in the context of specific emergencies or disaster response processes. However, the transparency, accuracy, and timeliness of emergency logistics information are often affected by various factors. To tackle these challenges, we have developed the Emergency Logistics Information Traceability Model (ELITM-CBT) based on consortium blockchain technology. This model leverages the distributed, decentralized, tamper-proof, and highly transparent characteristics of consortium blockchain to significantly enhance the accuracy and efficiency of information management. By applying consortium blockchain technology in emergency logistics, we not only address the issues of information asymmetry and process opacity but also achieve efficient and fair resource allocation, ensuring the timeliness and effectiveness of emergency logistics responses. To deeply understand the construction needs of the information tracing model in emergency logistics under consortium blockchain technology, it is first necessary to comprehensively detail the actual operational processes of emergency logistics and its key parameters. Demand Analysis and Identification:After the occurrence of an emergency, it is necessary to accurately identify the types, quantities, timing, and locations of the required materials. 
Let the set of required materials be, D = {D1, D2, …, Dn} where each materiali Di ncludes quantity Qi, type, Ti demand location, Pi and demand time ti. Resource Pool Establishment:The materials involved in emergency logistics are distributed across multiple storage points. Let the set of storage points be W = {W1, W2, …, Wm} where each storage point Wj includes material types Tj, quantities Qj, geographic locations Pj, and available allocation times tj. Logistics Path Planning:Based on the demands and resources, rational logistics paths need to be planned. Let the set of logistics paths be R = {R1, R2, …, Rk}where each path Rl includes starting point Sl, endpoint El, set of waypoints Nl estimated duration Tl cost Cl, and risk factors Fl. Information Recording and Verification:During the logistics process, the circulation information of materials needs to be recorded and verified in real-time. Let the set of logistics information records be L = {L1, L2, …, Lo} where each record Lp, includes material IDp, circulation start, Sp circulation end, Ep circulation time tp, circulation cost, Cp and circulation status Stp. Data On-Chain and Consensus:Upload the logistics information records to the blockchain and verify them through the consortium blockchain’s consensus mechanism. Let the set of data blocks be, B = {B1, B2, …, Bo} where each block Br includes a set of information records, Lr generation time tr, hash value of the previous block Hr−1, current block hash value Hr, and consensus verification time CTr. Real-Time Monitoring and Dispatch Decision: Based on the on-chain data, monitor the logistics process in real-time and make dispatch decisions. For example, if a blockage is detected in a logistics path, Rl a re-planning of the path can be initiated. Information Traceability and Emergency Response:During or after the logistics process, information can be traced through the blockchain to respond to sudden changes or for post-event analysis. For example, if there is an issue with a material, Di its circulation path, Rl storage point, Wj and information record Lp can be traced back. Model assumptions explanation. To construct a viable and precise emergency logistics information traceability model based on consortium blockchain technology, it is necessary to first establish a set of assumptions and constraints. These will lay the foundation for subsequent model construction and algorithm design. The assumptions and constraints are analyzed in detail from the aspects of logistics demand, logistics resources, route planning, information flow, and data uploading to the blockchain. Demand stability assumption. In a short time window (e.g., the initial phase of emergency response), we assume that the demand Di in terms of quantity Qi, type Ti, location Pi, and time ti is relatively stable. Demand priority assumption. Depending on the urgency of the emergency response, different demands Di have different priorities Pri. Resource schedulability assumption. It is assumed that the materials at the storage point Wj are available within a feasible allocation time, tj and resource scheduling can be completed within a certain time frame. Resource completeness assumption. It is assumed that the type Tj and quantity Qj of materials at each storage point kW Wj nown and accurate. Path connectivity assumption. It is assumed that there is at least one logistics path Rl between any two storage points Wa and Wb. Path cost constraint. 
For each logistics path Rl, its cost Cl must meet a predetermined cost ceiling Cmax. On-chain timeliness assumption. Logistics information records Lp need to be uploaded to the blockchain within a certain time tup after generation. Consensus mechanism assumption. Nodes in the consortium blockchain follow a uniform consensus mechanism, able to validate and reach consensus on data block Br within a certain time CTr. Monitoring completeness assumption. Any anomalies or changes in the logistics process can be captured by the real-time monitoring system. Decision-making real-time assumption. In the event of anomalies detected during the logistics process, dispatch decisions can be made and executed swiftly. Model construction. To address the issues mentioned above, we have developed a sophisticated information traceability model for emergency logistics under the framework of consortium blockchain technology. This integration of consortium blockchain technology is pivotal in enhancing the emergency logistics system by ensuring data immutability and streamlining the information verification process. This study delves into blockchain-related factors, especially how the time of recording to the blockchain, consensus time, and data blocks influence the efficiency of the model operation and the accuracy of the results. We meticulously analyze the application of blockchain technology to improve the traceability and reliability of logistics information, showcasing the potential of blockchain in revolutionizing emergency logistics management. Symbol Explanation: Di: The ith logistics demand point; Qi, Ti, Pi, ti, Pri: Represent the demand quantity, type, location, time, and priority of the demand point Di respectively; Wj: The jth warehouse point; Qj, Tj, tj: Represent the inventory quantity, type, and available dispatch time of the warehouse point Wj respectively; Rl: The lth logistics route; Tl, Cl: Represent the estimated duration and cost of the logistics route Rl, respectively; Lp: The pth logistics information record; IDp, Sp, Ep, tp, Cp, Stp: Represent the item ID, starting point, endpoint, transit time, transit cost, and transit status of the logistics information record Lp, respectively; tup, CTr, Br: Represent the time of recording to the blockchain, consensus time, and data block, respectively; Xijl: A decision variable, indicating whether to send materials from warehouse point Wj to demand point Di via route Rl. Xijl = 1 if materials are sent, and 0 otherwise. Establishing the Model. Our objective is to optimize logistics distribution costs, meet demand priorities, reduce transit times, and ensure the accuracy and timeliness of information traceability. To this end, we have constructed a complex multi-objective optimization function(1): (1) Subject to: (2) (3) (4) (5) (6) (7) (8) (9) In Eq (1), the weight coefficients α, β, γ, and δ are used to balance the importance of various objectives; Eq (2) ensures that the demands of all demand points are met; Eq (3) ensures that the supplies at the storage points are not over-allocated; Eq (4) ensures that at least one path is selected between each demand point and storage point; Eq (5) ensures that the total cost does not exceed the budget; Eq (6) ensures that logistics information is timely recorded on the blockchain; Eq (7) ensures that consensus on the data block is achieved within a certain time frame; Eq (8) ensures the integrity and authenticity of logistics information. Algorithm design. 
Having constructed an emergency logistics information tracing model under consortium blockchain technology, we recognized the necessity for a uniquely tailored algorithm to address this model’s complex, multi-objective challenges. We innovatively designed and utilized a Hybrid Genetic Algorithm and Simulated Annealing (HGASA), specifically combining the strengths of Genetic Algorithms (GA) and Simulated Annealing (SA) to uniquely address the specific needs of the ELITM-CBT model, showcasing a novel approach in optimizing emergency logistics processes. Genetic Algorithm is a heuristic search algorithm that mimics the process of natural selection to solve optimization problems. Simulated Annealing, on the other hand, is a probabilistic technique that simulates the cooling process of physical annealing to solve optimization issues. The hybrid algorithm amalgamates the global search capabilities of GA with the local search prowess of SA, aiming to circumvent local optima and enhance solution quality. The algorithmic steps are as follows: Initialization. Generate an initial population of size N. Each individual comprises decision variables xijl and ypl representing logistics distribution and information recording strategies. Additionally, we introduce domain-specific heuristic knowledge to guide the generation of the initial population towards potential optimal solution regions, accelerating convergence speed and increasing the probability of finding optimal solutions; Fitness Evaluation. For each individual Ik, compute the value of the objective function Z(Ik) leading to a fitness evaluation. This incorporates weight factors for cost, time, priority, and information accuracy, denoted as α, β, γ, δ; Selection. Apply the roulette wheel selection method. Individuals are chosen based on their fitness levels, with higher fitness individuals having a greater probability of being selected; Crossover. Randomly select pairs of individuals for crossover. Each pair undergoes single or multi-point crossover operations at a given crossover probability Pc producing new offspring.Adaptive methods are employed to dynamically adjust the crossover and mutation probabilities based on the algorithm’s progress, enhancing the population’s exploration and exploitation capabilities; Mutation. Mutate gene positions of individuals at a certain mutation probability Pm to enhance population diversity; Simulated Annealing Adjustment.Conduct local search on selected individuals. For each Ik, perform the following: Randomly tweak a decision variable to generate a new individual ; Compute the objective function value ; Determine acceptance of the new individual using simulated annealing criteria: if , accept the new individual, replacing Ik with ; if , accept the new individual with a certain probability that varies with the temperature parameter T and the difference ; Update the temperature parameter to T × cooling rate where the cooling rate is typically less than 1. Replacement. Implement an elitist strategy, retaining a certain proportion of the fittest individuals in the current population, while replacing the rest with newly generated individuals.This elite retention mechanism ensures that optimal solutions are not lost over generations. Termination Condition Check. Verify if termination criteria are met: if the maximum number of generations is reached or the change in fitness is below a preset threshold, the algorithm stops. Repeat steps 2 to 8 until these conditions are satisfied. 
The HGASA algorithm capitalizes on GA’s global search capability and SA’s local search ability. Through intricately designed crossover, mutation, and simulated annealing adjustments, it addresses the multi-objective optimization problems of emergency logistics information tracing under consortium blockchain technology. The innovation lies in enhancing GA results with SA optimization, improving solution quality and avoiding local optima. In terms of complexity analysis, the algorithm’s time complexity depends on population size N, number of generations G, length of individual genes L, and the number of annealing steps S in SA. Thus, the overall time complexity is approximately O(N ⋅ G ⋅ L ⋅ S) while the space complexity mainly depends on population storage, estimated as O(N ⋅ L). Optimization of consensus mechanisms Applying consortium blockchain technology in emergency logistics information tracing presents several challenges, with the selection of an appropriate consensus algorithm being the most formidable. Through a comparative analysis of mainstream consensus mechanism features (Table 1), it becomes evident that Proof of Work (PoW) and Proof of Stake (PoS) algorithms, which are suitable for public blockchains with a large number of nodes, are not ideal for the Emergency Logistics Information Tracing Model using Consortium Blockchain Technology (ELITM-CBT). This is due to their significant computational power requirements, complex network configurations, and token mechanisms. Download: PPT PowerPoint slide PNG larger image TIFF original image Table 1. Comparative analysis of mainstream consensus mechanism features. https://doi.org/10.1371/journal.pone.0303143.t001 The Practical Byzantine Fault Tolerance (PBFT) and Proof of Authority (PoA) algorithms, commonly utilized in consortium blockchains, face constraints in terms of the number of participating nodes and data consistency, rendering them unsuitable for the Emergency Logistics Information Traceability Model (ELITM-CBT) based on consortium blockchain. To overcome these challenges, and acknowledging the frequent changes in emergency supply entities as primary nodes, we have developed a novel consensus algorithm specifically tailored for ELITM-CBT, named C-PBFT. This algorithm is an enhancement of the PBFT algorithm, integrated with the primary node rotation concept from the Clique algorithm, optimizing adaptability to dynamic network conditions and strengthening the assurance of data consistency. Selection of the primary node. In the C-PBFT consensus mechanism, the selection of the primary node is updated at the block height of each target block. Additionally, C-PBFT incorporates a view mechanism when calculating the primary node to address Byzantine problems that may arise with the primary node. The formula for calculating the primary node is as follows (10). In the formula, Pi represents the primary block node; h represents the block height; v represents the view number; |R| represents the number of authoritative nodes. (10) Block verification. In the C-PBFT consensus mechanism, the ELITM-CBT clients send a request to the primary node block, which is then followed by the proposal of the block by the primary node currently in rotation. The backup nodes will go through two rounds of voting to validate and confirm the block before it is finally committed to the chain(Fig 1). Download: PPT PowerPoint slide PNG larger image TIFF original image Fig 1. Block verification and commit process. 
The figure illustrates the process of block verification in the C-PBFT consensus mechanism. A: Client Request phase. B: Pre-Preparation phase. C: Preparation phase. D: Commit Preparation phase. E: Commit and Chain phase. https://doi.org/10.1371/journal.pone.0303143.g001 Client Request: The client c sends a request to the primary node 0; Pre-Preparation: Upon receiving the request, the primary node 0 broadcasts a pre-prepare message to the backup nodes at the current block height h; Preparation: The backup nodes, after receiving the pre-prepare message, save and validate it within the allowed time frame, and then broadcast a prepare message; Commit Preparation: During the prepare phase, if a node receives 2f + 1 matching prepare messages, it broadcasts a commit message to the other nodes and enters the commit phase; Commit and Chain: In the final commit phase, if a node receives 2f + 1 commit messages from other nodes, it proceeds with the block commitment to the chain. The C-PBFT consensus mechanism requires that the total number of honest nodes in the network satisfies N ≥ 3f + 1, which means . This implies that the model can tolerate up to faulty nodes. Clearly, when the primary node is honest and there are at most faulty nodes, the C-PBFT consensus mechanism can continue to operate normally. This also ensures the fault tolerance and stability of the ELITM-CBT system. View change mechanism. When the primary node encounters an error, a view change mechanism is triggered. With each update of the block height, all nodes start a timer. If a backup node does not receive a message from the primary node, or if the message from the primary node is erroneous and consensus cannot be reached within the allotted time, the backup node will broadcast a view-change message. Subsequently, the nodes broadcast a prepare message in the new view. If nodes collect prepare messages from different views and find that it is not possible for 2f + 1 nodes to enter the prepare phase, they will not execute the request message. At this point, the calculation for a new primary node is also initiated. The flowchart for triggering the view change mechanism in the C-PBFT consensus mechanism when the primary node is faulty is illustrated in Fig 2. Download: PPT PowerPoint slide PNG larger image TIFF original image Fig 2. C-PBFT consensus mechanism and view change flowchart. https://doi.org/10.1371/journal.pone.0303143.g002 C-PBFT consensus process. The C-PBFT consensus process is divided into three main phases: pre-prepare, prepare, and commit. There are two main conditions for rotating the primary node: one is the normal cyclical rotation, and the other is the update of the primary node using the view change mechanism due to non-response or erroneous behavior by the primary node. The main steps of the C-PBFT consensus process are as follows: Client Request: Client (c) sends a message to the nodes with the structure 〈request, o, t, c〉. Here, ‘request’ stands for the content, ‘o’ for the operation requested, ‘t’ for the timestamp, and ‘c’ for the client’s identifier, along with the client’s signature. Primary Node Broadcast: Nodes calculate whether they are the primary node for the current target block height. If so, they verify the client’s signature. If the signature is incorrect, the request is discarded. If correct, the primary node broadcasts 〈pre- prepare, h, v, d, r〉 to the backup nodes, where ‘h’ is the block height, ‘v’ is the view number, ‘d’ is the digest of the request ‘r’, with the primary node’s signature. 
C-PBFT consensus process. The C-PBFT consensus process is divided into three main phases: pre-prepare, prepare, and commit. The primary node is rotated under two conditions: normal cyclical rotation, and replacement through the view change mechanism when the primary node is unresponsive or behaves erroneously. The main steps of the C-PBFT consensus process are as follows (a simplified quorum sketch is given after the list).

Client Request: The client c sends a message to the nodes with the structure 〈request, o, t, c〉, where 'request' is the request content, 'o' the requested operation, 't' the timestamp, and 'c' the client's identifier, together with the client's signature.

Primary Node Broadcast: Each node calculates whether it is the primary node for the current target block height. If so, it verifies the client's signature; an invalid signature causes the request to be discarded. If the signature is valid, the primary node broadcasts 〈pre-prepare, h, v, d, r〉 to the backup nodes, where 'h' is the block height, 'v' the view number, and 'd' the digest of the request 'r', together with the primary node's signature.

Backup Node Validation: A node that is not the primary waits for a 〈pre-prepare, h, v, d, r〉 message from the primary. Upon receipt, it validates the message (verifying the primary's signature, the consistency of 'd' with the request digest, and the consistency of 'v' across messages). If validation fails, the message is discarded; if it passes, the backup node saves the message and broadcasts 〈prepare, h, v, d, i〉, where 'i' is the backup node's identifier, together with the backup node's signature, and then awaits prepare votes.

Commit Broadcast: After receiving n ≥ 2f + 1 prepare votes, the node broadcasts 〈commit, h, v, d, i〉 to the other nodes and waits for their 〈commit, h, v, d, i〉 messages.

Block Commit: Upon receiving n ≥ 2f + 1 commit votes, the node performs the block commitment operation.

Consensus Interruption and View Change: During the consensus process, if a backup node does not receive the expected messages within the specified time or consensus is not achieved, it broadcasts 〈view-change, h, v, i〉 to the other nodes and waits for votes; a faulty primary node triggers the same view change mechanism described above. When n ≥ 2f + 1 view-change votes are received, the primary node and the corresponding view number are updated.
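To make the quorum logic above concrete, the following sketch counts prepare or commit votes against the 2f + 1 threshold. It is a minimal illustration under the N ≥ 3f + 1 assumption; messages are simplified to Python dictionaries rather than the signed messages used in ELITM-CBT.

# Minimal illustration of the 2f + 1 quorum checks in the C-PBFT steps above.
# Messages are simplified dicts; real ELITM-CBT messages carry digital signatures.

def max_faulty(total_nodes: int) -> int:
    """With N >= 3f + 1 total nodes, at most floor((N - 1) / 3) faults are tolerated."""
    return (total_nodes - 1) // 3

def quorum(total_nodes: int) -> int:
    """A phase completes once 2f + 1 matching votes are collected."""
    return 2 * max_faulty(total_nodes) + 1

def count_matching(votes, block_height: int, view: int, digest: str) -> int:
    """Count votes that agree on (h, v, d); mismatching votes are ignored."""
    return sum(1 for m in votes
               if m["h"] == block_height and m["v"] == view and m["d"] == digest)

# Example with N = 7 nodes: f = 2, so 5 matching prepare votes are required.
prepares = [{"h": 42, "v": 0, "d": "abc", "i": i} for i in range(5)]
if count_matching(prepares, 42, 0, "abc") >= quorum(7):
    print("prepare quorum reached -> broadcast commit")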
Results and discussion

Simulation experiment

In response to an earthquake disaster in a particular region, this study constructs an emergency logistics information tracing model involving multiple locations. The model consists of several supply and demand points, with logistics transportation between points recorded and monitored using consortium blockchain technology, which ensures the effective allocation of materials as well as the transparency and traceability of information. The model assumes the involvement of the following entities: Supply Points (S): warehouses or logistics centers that provide disaster relief materials; Demand Points (D): disaster areas or relief stations that require disaster relief materials; Materials (M): various types of disaster relief materials, such as medical supplies, food, water, and tents; Transport Network: transportation routes connecting supply points to demand points. Each supply point holds a certain reserve of materials, while each demand point has a certain demand for materials. Materials are assigned different priorities based on their urgency, and logistics activities must be completed within a specific time frame to ensure a rapid response.

Case study solution. In this case study, we employed multi-objective optimization methods to solve the emergency logistics distribution problem. Our goals were to minimize total transportation time, minimize total cost, and maximize the fairness of material distribution. Considering the complexity of real emergency logistics scenarios, we introduced an improved consensus algorithm to ensure the rapid and accurate transmission of information.

Simulation Environment. The simulation experiment was conducted on a computer equipped with an Intel Core i7-10750H CPU @ 2.60GHz and 16GB RAM. All algorithms were written in Python 3.8, using the NumPy library for numerical operations, the Pandas library for data management, and the Matplotlib and Seaborn libraries for graphical displays. The simulation is divided into a baseline experiment and an improved experiment, corresponding to the consensus algorithm before and after improvement. Each part of the experiment was run dozens of times to ensure the stability and accuracy of the statistical results, and statistical analysis of the results was performed with Python's SciPy library. All experimental results were recorded in tabular form and presented graphically for easy comparison and analysis; the storage and management of experimental results used the Git version control system, ensuring the integrity and traceability of the experimental data. In this study, we conducted a detailed analysis of the ELITM-CBT model's effectiveness and its performance under two different consensus mechanisms: PBFT (the baseline experimental scheme, Table 2) and C-PBFT (the improved experimental scheme, Table 3). Each experimental scheme generated data for ten allocation plans, with performance metrics categorized into total transportation time, total cost, fairness index, consensus time, and throughput (a simplified sketch of such an experiment harness is given after Table 3).

Table 2. Baseline experiment (before improvement) results. https://doi.org/10.1371/journal.pone.0303143.t002

Table 3. Improved experiment (after improvement) results. https://doi.org/10.1371/journal.pone.0303143.t003
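By way of illustration, the following sketch shows how such a baseline-versus-improved comparison could be organized with the libraries named above. The metric names mirror the table columns, but run_allocation_plan and the values it returns are hypothetical placeholders, not the authors' simulation code or results.

# Hedged sketch of the experiment harness: run each allocation plan under both
# consensus schemes and collect the table metrics. `run_allocation_plan` is a
# hypothetical placeholder for the actual simulation step.
import numpy as np
import pandas as pd
from scipy import stats

def run_allocation_plan(plan_id: int, consensus: str) -> dict:
    """Placeholder: return metrics for one allocation plan under one consensus scheme."""
    rng = np.random.default_rng(plan_id if consensus == "PBFT" else plan_id + 100)
    return {
        "plan_id": plan_id,
        "consensus": consensus,
        "total_time": rng.uniform(40, 60),       # total transportation time
        "total_cost": rng.uniform(8000, 12000),  # total cost
        "fairness": rng.uniform(0.8, 0.9),       # fairness index
        "consensus_time": rng.uniform(0.5, 2.0),
        "throughput": rng.uniform(100, 400),
    }

records = [run_allocation_plan(p, c) for p in range(1, 11) for c in ("PBFT", "C-PBFT")]
df = pd.DataFrame(records)

# Compare consensus time between the two schemes across the ten plans.
baseline = df[df.consensus == "PBFT"]["consensus_time"]
improved = df[df.consensus == "C-PBFT"]["consensus_time"]
t_stat, p_value = stats.ttest_rel(baseline.values, improved.values)
print(df.groupby("consensus")[["consensus_time", "throughput"]].mean())
print(f"paired t-test on consensus time: t = {t_stat:.2f}, p = {p_value:.3f}")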
Performance Analysis of C-PBFT and PBFT Consensus Mechanisms. In this research, we analyze in detail the performance differences between the C-PBFT and PBFT consensus mechanisms within the Emergency Logistics Information Traceability Model (ELITM-CBT). The primary objective of this analysis is to evaluate and compare the two consensus mechanisms in terms of consensus time and throughput. To enable an effective comparison, we first define these two key performance indicators. Consensus time is the duration required to achieve consensus in the blockchain and is a crucial metric for the efficiency of a consensus mechanism; a shorter consensus time indicates higher efficiency, which, in the context of emergency logistics, means faster processing and confirmation of transactions and therefore a faster overall response. Throughput is the number of transactions the system can handle per unit of time; higher throughput denotes a greater capability to process a large volume of transactions, which is vital for meeting the extensive needs and complex resource allocation that arise during sudden events.

Through data analysis and the visual comparison in Fig 3, we highlight the optimization role of the C-PBFT consensus mechanism within the emergency logistics information system. Its shorter consensus time relative to PBFT not only reflects higher efficiency but is particularly important in the high-pressure environment of emergency logistics. Specifically, C-PBFT reduces decision-making delays by streamlining the consensus process, ensuring that critical supplies and information can be rapidly and accurately distributed and traced during an emergency. Moreover, the enhanced processing capability of C-PBFT, evidenced by the increase in throughput shown in Fig 3, demonstrates its advantages in high-load scenarios with large transaction volumes. These improvements strengthen the response capability and processing efficiency of the emergency logistics information system and, through the application of blockchain technology, enhance its transparency and reliability, providing a more secure, efficient, and transparent solution for emergency logistics management.

Fig 3. Comparative analysis of consensus time and throughput for PBFT and C-PBFT mechanisms. https://doi.org/10.1371/journal.pone.0303143.g003

Fairness Index Analysis. In our study, the Fairness Index (FI) is used as an essential metric to evaluate the fairness of resource allocation within emergency logistics. Its premise is to gauge how evenly resources are allocated across the various demand points; within emergency logistics, the equitable distribution of resources is pivotal to effective and timely relief operations. The Fairness Index is computed from the ratio of the resources allocated to each demand point to that point's requirement, as given in Eq (11), where N is the total number of demand points, Q_a,i is the quantity of resources allocated to the ith demand point, and Q_r,i is the quantity required by the ith demand point:

FI = (1/N) Σ_{i=1}^{N} (Q_a,i / Q_r,i) (11)
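As a small worked example, the function below evaluates Eq (11) as reconstructed here, i.e., the mean allocated-to-required ratio across demand points; the sample quantities are illustrative only.

# Fairness Index per the reconstructed Eq (11): the mean ratio of allocated to
# required quantities over all demand points. Sample values are illustrative.
import numpy as np

def fairness_index(allocated: np.ndarray, required: np.ndarray) -> float:
    """FI = (1/N) * sum_i (Q_a,i / Q_r,i)."""
    return float(np.mean(allocated / required))

# Three demand points: allocations of 80, 45, and 100 units against requirements
# of 100, 50, and 100 units give FI = (0.8 + 0.9 + 1.0) / 3 = 0.9.
allocated = np.array([80.0, 45.0, 100.0])
required = np.array([100.0, 50.0, 100.0])
print(f"Fairness Index: {fairness_index(allocated, required):.2f}")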
To visually analyze the impact of the two consensus mechanisms on the Fairness Index, line graphs (Fig 4) show how the Fairness Index varies across the two data sets. The horizontal axis of the graph represents the plan IDs, while the vertical axis shows the corresponding Fairness Index values. The line graph shows that, despite individual differences in the Fairness Index under the two consensus mechanisms, the overall levels remain high, consistently between 0.8 and 0.9. These findings attest to the strengths of the ELITM-CBT model in facilitating equitable resource distribution, ensuring a balanced allocation of resources in emergency scenarios and effectively meeting the distinct requirements of the various demand points.

Fig 4. Fairness index analysis. https://doi.org/10.1371/journal.pone.0303143.g004

Optimization of Total Transportation Time. To thoroughly assess the optimization of total transportation time within the emergency logistics system, we introduce an improved index that incorporates weighting factors. This index synthesizes transportation time data from both the baseline and improved plans, with weights reflecting the importance of transportation time in different scenarios, allowing a more comprehensive view of how the improved consensus mechanism affects total transportation time. The index is defined in Eq (12), where T_b,i and T_a,i represent the total transportation time of the ith plan under the baseline and improved conditions, respectively, W_i denotes the importance weight of the ith plan, and n is the total number of plans:

O_T = Σ_{i=1}^{n} W_i (T_b,i − T_a,i)^2 (12)

Formula (12) measures the degree of optimization in total transportation time by weighting each plan by its importance and computing the weighted squared difference between the baseline and improved scenarios. The line graph in Fig 5 illustrates the comparison of total transportation time between the baseline and improved schemes; the degree of optimization for each plan is quantified by this weighted squared difference, which reveals more comprehensively how the improvement strategies reduce transportation time. Notably, the total transportation time of the improved plans is consistently shorter than that of the baseline plans. This analysis underscores the effectiveness of the ELITM-CBT model in optimizing the transportation time of emergency logistics systems, particularly when employing the C-PBFT consensus mechanism; such optimization contributes significantly to the efficiency of emergency responses, ensuring that resources reach the required locations in the shortest possible time.

Fig 5. Comparison of total transportation time optimization. https://doi.org/10.1371/journal.pone.0303143.g005
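To make Eq (12) concrete as reconstructed here, the short snippet below computes the weighted squared-difference index for a handful of plans; the times and weights are purely illustrative.

# Weighted transportation-time optimization index per the reconstructed Eq (12):
# O_T = sum_i W_i * (T_b,i - T_a,i)^2. All values below are illustrative.
import numpy as np

baseline_time = np.array([52.0, 47.5, 60.0, 55.0])   # T_b,i: baseline plan times
improved_time = np.array([45.0, 43.0, 51.5, 50.0])   # T_a,i: improved plan times
weights = np.array([0.4, 0.2, 0.3, 0.1])             # W_i: importance weight of each plan

optimization_index = float(np.sum(weights * (baseline_time - improved_time) ** 2))
print(f"Transportation-time optimization index O_T = {optimization_index:.2f}")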
Cost-Benefit Analysis. Cost-benefit analysis is a key aspect of evaluating the effectiveness of improvements in an emergency logistics system. This study introduces an improved index that considers both costs and benefits; by incorporating the cost volatility index of each plan, it assesses more thoroughly the impact of the improved consensus mechanism on cost-effectiveness. The index is defined in Eq (13), where C_b,i and C_a,i represent the total cost of the ith plan under the baseline and improved conditions, respectively, V_i is the cost volatility index of the ith plan, and n is the total number of plans:

CB = Σ_{i=1}^{n} V_i · (C_b,i − C_a,i) / C_b,i (13)

Formula (13) measures the cost-benefit of each plan by calculating the proportional cost savings and weighting them by the cost volatility index. This method considers not only the absolute amount of cost savings but also the proportion of savings relative to the original cost and the cost volatility of each plan, providing a more comprehensive cost-benefit analysis and a nuanced view of the economic efficiency of the plans. The bar chart (Fig 6) compares the total costs of the baseline and improved plans; in each group of bars, blue represents the cost of the baseline plan and orange the cost of the improved plan. The comparison shows that, in most cases, the improved plan incurs lower costs than the baseline plan. In addition, by taking the cost volatility index into account, the chart also reflects the resilience of each plan to cost fluctuations: taller bars indicate that the improved plans achieve greater cost savings in a fluctuating market environment, thereby enhancing overall cost-effectiveness.

Fig 6. Cost-benefit analysis. https://doi.org/10.1371/journal.pone.0303143.g006
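As a final illustration, the snippet below evaluates Eq (13) as reconstructed here, weighting each plan's proportional cost saving by its cost volatility index; the figures are again illustrative.

# Cost-benefit index per the reconstructed Eq (13):
# CB = sum_i V_i * (C_b,i - C_a,i) / C_b,i. All figures are illustrative.
import numpy as np

baseline_cost = np.array([12000.0, 9500.0, 11000.0])  # C_b,i: baseline plan costs
improved_cost = np.array([10800.0, 9000.0, 10100.0])  # C_a,i: improved plan costs
volatility = np.array([1.2, 0.8, 1.0])                # V_i: cost volatility index per plan

savings_ratio = (baseline_cost - improved_cost) / baseline_cost
cost_benefit_index = float(np.sum(volatility * savings_ratio))
print(f"Cost-benefit index CB = {cost_benefit_index:.3f}")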
Conclusion

In this research, we constructed an Emergency Logistics Information Traceability Model (ELITM-CBT) based on consortium blockchain technology, addressing the urgent need for efficient information flow and resource allocation in emergency scenarios. The study demonstrates the transformative potential of consortium blockchain technology for emergency logistics, particularly its capabilities in ensuring data immutability, enhancing transparency, and achieving accurate traceability. By leveraging the distinctive features of blockchain, such as decentralization, security, and consensus mechanisms, we provide a comprehensive solution that overcomes the limitations of traditional emergency logistics models. Our simulation experiments showed the ELITM-CBT model's marked improvements in total transportation time, cost efficiency, and equitable resource allocation, underscoring its ability to handle the complexities of emergency logistics. These findings demonstrate the practical applicability and effectiveness of consortium blockchain technology in reshaping emergency logistics and emphasize its role in advancing the transparency and efficiency of logistical operations. The integration of the Hybrid Genetic Simulated Annealing Algorithm (HGASA) further strengthened the model's optimization capabilities, combining algorithmic precision with the robustness of blockchain. Looking ahead, the integration of consortium blockchain technology within emergency logistics holds considerable promise. Continued technological progress should yield more sophisticated consensus mechanisms and blockchain functionalities that reduce operational costs and increase data throughput, and combining blockchain with big data and artificial intelligence is likely to bring enhanced analytical capabilities and greater decision-making precision to emergency logistics. Such advances will open new avenues for research and drive the development of more resilient, transparent, and efficient emergency logistics systems.

TI - Research on emergency logistics information traceability model and resource optimization allocation strategies based on consortium blockchain JO - PLoS ONE DO - 10.1371/journal.pone.0303143 DA - 2024-05-20 UR - https://www.deepdyve.com/lp/public-library-of-science-plos-journal/research-on-emergency-logistics-information-traceability-model-and-cnQBGCKULI SP - e0303143 VL - 19 IS - 5 DP - DeepDyve ER -