TY - JOUR AU - Osman, Radwa Ahmed AB - Introduction Internet of Things (IoT)-based wireless technologies have grown significantly in a number of industries in recent years. IoT is a network that enables autonomous communication between physical objects, machinery, sensors, and other devices [1, 2]. Wireless devices are being adopted in the agriculture sector to apply modern technology and improve cost management and farming productivity [3, 4]. Smart IoT devices are used in precision agriculture to monitor crop conditions at different growth stages and for remote sensing [5, 6]. Agriculture is one of the most important economic sectors in many countries, and it demands efficient management of water resources for plants and crops and the preservation of agricultural land [7, 8]. Sensor systems are among the most widely used technologies in precision agriculture [9]. Utilising the data aggregation and transmission capabilities of sensors, remote sensing approaches have begun to communicate with Internet of Things devices for autonomous activities. Technologies spanning transportation, healthcare, the military, mobile phones, and home appliances are enabled by real-time scenarios that combine machine learning approaches with sensors [10, 11]. Many environmental changes in the modern period impact crop and field conditions; IoT-based technologies help farmers increase productivity while reducing expenditures. The growth of smart agriculture is supported by the integration of current wireless communications technologies with cloud platforms, which may raise productivity and improve product quality [12, 13]. Moreover, agriculture-related operations (sensing, identification, transmission, monitoring, and feedback) can be carried out correctly using a more dependable and sustainable method [14]. Network integrity is achieved and authentic functionalities are performed in a distributed way by secured technologies [15]. However, effective and lightweight communication paradigms require agriculture systems to have strong machine learning model functionalities. Private agricultural data must remain reliable and shielded from unwanted access until it is received on authorized storage and processing systems [16]. It is worth mentioning that fifth-generation (5G) networks usher in a strong strategy for IoT in different situations [17]. Several 5G-enabled strategies have been proposed to improve the capabilities of IoT devices in different circumstances [18, 19]. Furthermore, machine learning and artificial intelligence optimization algorithms play a vital role in increasing system performance by providing different solutions for models with a variety of characteristics and sectors [20, 21]. These advancements strengthen communication networks, assuring the rapid and dependable flow of critical data for timely and effective response. IoT devices in smart agriculture often send data to the gateway or other destinations, sharing the 5G spectrum with device-to-device (D2D) or cellular user equipment (CUE) connections. Interference may arise due to this shared spectrum, which could affect the dependability and efficiency of IoT connectivity. Interference in agricultural settings can compromise data integrity, which can impact important decisions about crop management and resource access.
In order to improve the effectiveness and reliability of IoT communication in smart agriculture, this work suggests a novel 1D-CNN and Lagrange optimization-based approach. The main goal is to reduce interference at the gateway or destination so as to provide reliable and efficient transmission of agricultural data. Among this article's major contributions are the following. A Lagrange optimization problem is formulated to determine the communication reliability between IoT devices and gateways in smart agriculture. For increased IoT communication efficiency, this method uses a one-dimensional convolutional neural network (1D-CNN). The suggested approach, which is especially designed for smart agriculture applications, aims to optimize communication inside IoT networks. To guarantee dependable data transfer under a variety of environmental circumstances, this optimization entails determining the required distance between IoT devices and gateways. Furthermore, other variables that impact system performance are taken into account as alternative parameters. These variables include path loss, the required signal-to-interference-plus-noise ratio (SINRth), transmission power, and the existence of possible interfering devices. IoT transmission devices can anticipate the ideal transmission distance between IoT devices and gateways using a deep learning model that takes channel circumstances into consideration. This predictive feature ensures that agricultural data is received accurately and reliably in smart agriculture. The usefulness of the suggested technology in smart agriculture is assessed by examining the achievable data rate and energy efficiency across various environmental factors. This evaluation took into account transmission power, the required signal-to-interference-plus-noise ratio (SINRth), and different interference transmission ranges. These findings contribute to the optimisation of IoT networks in agricultural contexts. The structure of the paper is as follows: the details of the proposed strategy are presented after the related work. Next, the analytical and experimental studies for the proposed strategy are examined. Finally, a summary concludes the paper. Related work IoT communication is important since it allows for smooth connectivity across numerous devices, sensors, and agricultural equipment. This network allows for real-time data transmission, providing farmers with vital insights for precision farming, resource optimisation, and better decision-making. In smart farming, [22] developed a machine learning-based smart optimisation model for reliable and quality-aware sustainable agriculture. It optimised network parameters using intelligent devices, performance analysis, and blockchain-based security, validated through simulations and testing. Furthermore, [23] used IoT technologies to improve smart agriculture by suggesting optimised smart irrigation systems. Simulations in Network Simulator-2 (NS2) with Hierarchy Shuffled Shepherd Clustering (HSSC) and the Emperor Penguin Jellyfish Optimizer (EPJO) revealed significant gains in energy efficiency and network lifetime compared to previous approaches. Additionally, [24] proposed an Internet of Things-based smart agricultural system for India, with a focus on autonomous irrigation and insect detection. It accurately anticipated water needs and identified plant illnesses using machine learning techniques, achieving 84% accuracy.
Moreover, [25] examined how IoT improves smart farming by monitoring soil and detecting pests with wireless sensors. For dependable information dispersion, the suggested IoT-based Wireless Sensor Network (WSN) prioritised efficient data collection and cluster head selection. In terms of energy efficiency for IoT networks, particularly smart agriculture networks, the study presented in [26] looked at rural farms' energy inefficiency as well as the sluggish adoption of renewable energy and resource management systems. Current renewable energy sources have been deemed insufficient for effective energy management. The report described a developed system, deployed on a farm in central Portugal, with an emphasis on integrated energy control. Solar harvesting and multi-access edge computing (MEC) were introduced in [27] for long-term monitoring in IoT-based smart agriculture. It improved network computations and energy efficiency by optimising resource scheduling and computation offloading to maximize capacity under solar energy constraints. Furthermore, [28] noted concerns about increased energy import dependency in agriculture and emphasised the importance of resource and energy allocation for improved productivity. The suggested naïve multi-phase resource allocation algorithm seeks to improve energy efficiency in dynamic agricultural situations. Additionally, [29] proposed a complete methodology for interference reduction in smart homes, emphasizing the confluence of deep learning and mathematical optimization to improve data reception reliability. A novel and secure method for obtaining data from Internet of Things devices is proposed in [30]. SEED enabled improved throughput and energy efficiency in contrast to current methodologies by using MD5 hashing to assure data integrity and fixing network difficulties via aggregator node upgrades. Furthermore, [31] examined data transfer challenges and offered an energy-efficient Massive MIMO-NOMA IoT network for communications beyond 5G. The proposed method outperformed previous algorithms in terms of user fairness, convergence, and energy efficiency by employing sequential convex approximation and fractional programming. In addition, [32] proposed an interference control strategy to optimise 5G cellular networks and IoT. It improved crucial QoS measures such as energy economy and system reliability by decreasing interference via Lagrange optimisation. Moreover, [33] addressed interference concerns in the coexistence of 5G and IoT. It suggested a distributed deep learning model for optimising communication distances, improving throughput and energy efficiency while decreasing interference. Furthermore, [34] suggested a method for improving IoE network performance through Lagrange optimization and deep learning. It optimized transmission power for efficiency and throughput while minimizing interference. A deep learning network predicted optimal transmission power using Lagrange optimization data, which was validated by testing. This study addresses a crucial crop monitoring challenge in order to improve the IoT network for smart agriculture. The goal is to use the Internet of Things (IoT) to improve connectivity between farmers and agricultural systems through effective communication. In particular, the ideal range for Internet of Things communication is established when there is a chance of interference from other devices using the same frequency range.
The goal is to pinpoint the necessary components and setups to enhance IoT connectivity in intelligent farming. The suggested approach combines a deep learning model with an analytical optimization technique to overcome this difficulty. The method teaches agricultural devices to dynamically modify their proximity for optimal monitoring by utilizing a distributed deep learning model within the IoT network. A comparative analysis highlighting the unique characteristics of the proposed model over previous research efforts is shown in Table 1. Table 1. Comparison between different related works and the proposed model. https://doi.org/10.1371/journal.pone.0311601.t001 Proposed model This section presents an analytical optimization technique that outlines a recommended approach to enhance the gateway-IoT sensor connection in smart farming. Next, a strong deep neural network architecture is presented, intended for real-world deployment in IoT networks and trained and validated on the dataset generated by the proposed analytical model. System model and problem formulation For the suggested approach, it is assumed that a smart farm contains numerous IoT sensors installed throughout the farm to monitor environmental factors such as humidity, temperature, and water levels. These sensors wirelessly transfer the collected data to a central IoT gateway. The IoT gateway acts as a hub that aggregates data from all IoT sensors. It then processes this data locally and transmits it to remote servers or the cloud for further analysis and decision-making. As shown in Fig 1, the spectrum where IoT-sensors send data is shared by C CUEs communicating with a base station (BS), D D2D pairs, each consisting of a transmitting device (Dtx) and a receiving device (Drx), and V V2V pairs, each consisting of a transmitting vehicle (Vtx) and a receiving vehicle (Vrx). CUEs are typical mobile phones or communication devices that link to a traditional cellular service provider's base station (BS). CUEs and IoT sensors share the same frequency range of operation. D2D communication involves devices labelled as transmitting devices (Dtx) and receiving devices (Drx) that communicate directly with each other without routing through the BS, which helps lower network latency and congestion. Similar to D2D, V2V communication involves transmitting vehicles (Vtx) and receiving vehicles (Vrx) that exchange data directly. This setup is useful for applications involving autonomous or connected vehicles within the farm. Communication scenarios include: (i) transmitting data from IoT sensors to an IoT gateway; (ii) regular cellular communication in which CUEs connect with the BS; (iii) Dtx and Drx communicating D2D; and (iv) Vtx and Vrx communicating V2V. When several devices use the same spectrum and broadcast at the same time, interference results. Interference may arise, for instance, if data is transmitted by a CUE, Vtx, or Dtx on the same frequency that IoT sensors use to communicate with the gateway. This transmission overlap has the potential to lower data communication reliability and deteriorate signal quality. The suggested model uses a one-dimensional Convolutional Neural Network (1D-CNN) in conjunction with Lagrange optimisation to dynamically modify important communication parameters such as transmission power, distances between devices, and signal-to-interference-plus-noise ratio (SINR). With the help of this adaptive technique, network performance may be optimised in real time, lowering interference and improving the dependability of IoT sensor data transfer.
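As a concrete illustration of this shared-spectrum scenario, the following minimal Python sketch estimates the gateway-side SINR for one IoT transmission in the presence of a CUE, a D2D, and a V2V interferer. It assumes a standard log-distance path-loss model with Rayleigh fading; all numeric values (powers, distances, exponent, noise) are illustrative placeholders, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def rx_power(p_tx_dbm, d_m, alpha=3.0, pl0_db=30.0):
    """Received power in mW under log-distance path loss with Rayleigh fading.
    alpha: path-loss exponent; pl0_db: constant path loss at 1 m (assumed)."""
    h = rng.exponential(1.0)                      # |Rayleigh fading|^2 ~ Exp(1)
    pl_db = pl0_db + 10 * alpha * np.log10(d_m)   # distance-dependent path loss
    return 10 ** ((p_tx_dbm - pl_db) / 10) * h

# Illustrative scenario: one IoT sensor plus one interferer of each type.
P_I = P_C = P_D = P_V = 23.0            # transmission powers in dBm
d_IG = 80.0                             # IoT-sensor -> gateway distance (m), placeholder
d_CG, d_DG, d_VG = 150.0, 120.0, 200.0  # interferer -> gateway distances (m), placeholders
noise_mw = 10 ** (-114 / 10)            # assumed noise power (-114 dBm -> mW)

signal = rx_power(P_I, d_IG)
interference = rx_power(P_C, d_CG) + rx_power(P_D, d_DG) + rx_power(P_V, d_VG)
sinr_db = 10 * np.log10(signal / (interference + noise_mw))

SINR_th_db = 15.0                       # required threshold (dB), placeholder
print(f"SINR at gateway: {sinr_db:.1f} dB; meets threshold: {sinr_db >= SINR_th_db}")
```

In the full model this check applies to every link, and the distances and powers become the decision variables of the optimization formulated next.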
The suggested methodology's main objective is to maximise the IoT system's overall performance in smart farms. This is accomplished by optimizing the transmission power and other characteristics of D2D devices, V2V networks, CUEs, and IoT sensors in order to maximise Energy Efficiency (EE). The optimisation seeks to minimise power consumption while achieving the necessary SINR. It also seeks to increase the overall achievable data throughput by optimizing the communication channels between gateways, IoT sensors, and other devices. The model accounts for a number of factors, such as interference levels, transmission power limitations, and SINR, to guarantee the best possible data throughput in a variety of environmental scenarios. The maximum energy efficiency (EE) and maximum achievable data rate (R) can be expressed as: (1) (2) Fig 1. Proposed system model. https://doi.org/10.1371/journal.pone.0311601.g001 In the context of the optimization problem, the total achievable data rate is denoted by Ri,c,d,v, while the system energy efficiency is denoted by EEi,c,d,v. These measures are related to the v-th V2V path, the d-th path between D2D devices, the k-th path between CUE and BS, and the i-th path between IoT-sensors and gateways. The symbols SINRth and SINRIG represent the required system signal-to-interference-plus-noise ratio and the signal-to-interference-plus-noise ratio of the IoT-to-gateway connection, respectively. Similarly, PC and PCmax indicate the CUE's transmission power and maximum transmission power, whereas PD and PDmax reflect the D2D communication link's transmission power and maximum transmission power. Finally, PV and PVmax signify the transmission power and maximum transmission power of the V2V communication link, respectively. Non-orthogonal multiple access (NOMA) is chosen as the appropriate access method in the proposed paradigm [35, 36] to facilitate the broad implementation of IoT-sensors, CUE, D2D, and V2V for smart farms and to allow them concurrent access to the channel. Furthermore, the proposed model assumes a Rayleigh fading channel with additive white Gaussian noise (AWGN) [37], as well as statistical independence between the channel fading coefficients of different transmission connections. As a result, the network's energy efficiency (EE) and achievable data rate (R) can be stated as follows: (3) (4) where RIG, RCB, RDD, and RVV, respectively, represent the achievable data rates for the IoT-sensor-gateway link, CUE-BS link, D2D communication link, and V2V communication link. The variables PI and Po denote the IoT transmission power and internal circuit power consumption. The expressions RIG, RCB, RDD, and RVV are: (5) (6) (7) (8) where the channel gain coefficients between IoT-sensors and gateways (G), cellular user equipment (CUE) and gateways, transmitting devices (Dtx) and gateways, and transmitting vehicles (Vtx) and gateways are HIG, HCkG, HDdG, and HVvG, respectively. The channel gain coefficients between CUE and base station (BS), IoT-sensors and BS, Dtx and BS, and Vtx and BS are represented by HCB, HIiB, HDdB, and HVvB, respectively. The channel gain coefficients between Dtx and receiving device (Drx), IoT-sensors and Drx, CUE and Drx, and Vtx and Drx are represented by HDD, HIiD, HCkD, and HVvD, respectively. The channel gain coefficients between Vtx and receiving vehicle (Vrx), IoT-sensors and Vrx, CUE and Vrx, and Dtx and Vrx are represented by HVV, HIiV, HCkV, and HDdV, respectively. In this case, N and B represent the noise power and channel system bandwidth, respectively.
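Continuing the sketch above, the following lines show one standard way to turn per-link SINRs into achievable rates and energy efficiency, in the spirit of Eqs (3)-(8), whose exact displayed forms are images in the source and are not reproduced here: a Shannon-capacity rate B·log2(1 + SINR) per link, with EE taken as the total rate divided by transmit plus circuit power. The bandwidth B, circuit power Po, and the SINR values are illustrative assumptions.

```python
import numpy as np

B = 1e6                            # channel bandwidth in Hz (assumed)
P_I_w = 10 ** ((23 - 30) / 10)     # IoT transmit power: 23 dBm -> watts
P_o_w = 0.1                        # internal circuit power in watts (assumed)

# Per-link SINRs (linear), e.g. produced by the earlier sketch for the
# IoT-gateway, CUE-BS, D2D, and V2V links respectively (placeholders).
sinr_links = {"IG": 31.6, "CB": 20.0, "DD": 12.5, "VV": 8.9}

# Shannon-capacity achievable rate per link: R = B * log2(1 + SINR).
rates = {k: B * np.log2(1 + s) for k, s in sinr_links.items()}
R_total = sum(rates.values())

# Energy efficiency as total achievable rate per watt spent (bit/J).
EE = R_total / (P_I_w + P_o_w)

for k, r in rates.items():
    print(f"R_{k}: {r / 1e6:.2f} Mbit/s")
print(f"Total R: {R_total / 1e6:.2f} Mbit/s, EE: {EE / 1e6:.1f} Mbit/J")
```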
The main goal of the proposed methodology is to maximize the total achievable data rate (R) and overall energy efficiency (EE) of the Internet of Things network in smart agriculture under different environmental scenarios, as demonstrated by Eqs 1 and 2. Energy Efficiency (EE) is optimized under the restriction that the signal-to-interference-plus-noise ratio (SINR) between Internet of Things devices and gateways (SINRIG) must meet or exceed a predetermined threshold (SINRth). Furthermore, the maximum limits (PCmax, PDmax, PVmax) on the transmission powers of the Cellular User Equipment (CUE), Device-to-Device (D2D) links, and Vehicle-to-Vehicle (V2V) links must be adhered to. Similar restrictions apply to the maximization of the achievable data rate, guaranteeing dependable communication and peak network performance. The Lagrange multipliers method was applied to the optimization problems for EE and R in order to accommodate these limitations. The Lagrangian functions for the optimization problems in Eqs 1 and 2 are formulated as follows: (9) (10) The non-negative Lagrangian multipliers are designated by the symbols λ1, λ2, λ3, λ4, μ1, μ2, μ3, and μ4. The values of λ1, λ2, λ3, and λ4 are determined by calculating the derivative of Eq 9 with respect to PI, PC, PD, and PV to fulfil the conditions of the optimization problem for energy efficiency (EE). These multipliers help to adjust the optimization problem by penalizing any violation of the constraints; an increased Lagrange multiplier value signifies a stronger impact of the associated constraint on the optimization procedure. As a result, λ1, λ2, λ3, and λ4 can be calculated as follows: (11) (12) (13) (14) The derivation of Eq 9 with respect to λ1, λ2, λ3, and λ4 yields the optimal required distance (dIG) between IoT-sensors and gateways, the optimal required CUE interfering transmission power (PC), the optimal required Dtx interfering transmission power (PD), and the optimal required Vtx interfering transmission power (PV). These can be found as: (15) where the path loss exponent and constant path loss are expressed by α and plo, respectively. (16) (17) (18) The values of μ1, μ2, μ3, and μ4 can be found using the derivative of Eq 10 with respect to PI, PC, PD, and PV in order to satisfy the constraints of the optimization problem for R. Then, μ1, μ2, μ3, and μ4 may be represented as follows: (19) (20) (21) (22) The optimal required interference distance (dIG) between IoT-sensors and the gateway, the optimal required CUE interfering transmission power (PC), the optimal required Dtx interfering transmission power (PD), and the optimal required Vtx interfering transmission power (PV) can all be obtained by deriving Eq 10 with respect to μ1, μ2, μ3, and μ4. This makes it possible to optimize the overall attainable data rate (R); the resulting expressions are: (23) (24) (25) (26)
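Since the displayed equations (9)-(26) are images in the source and do not survive this extraction, the following LaTeX block sketches only the generic structure the text describes for the EE problem: the objective augmented by one non-negative multiplier per constraint (the SINR threshold plus the three power caps). It is a reconstruction of the stated structure, not the paper's exact Eq 9.

```latex
\begin{aligned}
\mathcal{L}_{EE} ={}& EE_{i,c,d,v}
  + \lambda_1\,\bigl(\mathrm{SINR}_{IG} - \mathrm{SINR}_{th}\bigr)
  + \lambda_2\,\bigl(P_{C\max} - P_C\bigr) \\
& + \lambda_3\,\bigl(P_{D\max} - P_D\bigr)
  + \lambda_4\,\bigl(P_{V\max} - P_V\bigr),
\qquad \lambda_1,\ldots,\lambda_4 \ge 0
\end{aligned}
```

The rate problem takes the analogous form with μ1, ..., μ4. Setting the partial derivatives with respect to PI, PC, PD, and PV to zero yields the multipliers, and stationarity in the multipliers recovers the constraint-consistent values of dIG, PC, PD, and PV described above.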
Dataset generation The necessary datasets were generated through MATLAB simulations in which the equations of the proposed model, described in the previous section, were implemented. The simulation's parameter values are displayed in Table 2. To improve communication between IoT-sensors and the gateway, the datasets will be utilized to train models deployed on all transmitting devices. Table 2. Simulation parameters. https://doi.org/10.1371/journal.pone.0311601.t002 There are 44679 records in all. Each record contains a unique combination of the following variables: the distances between CUE and BS (dCB), Dtx and Drx (dDDrx), and Vtx and Vrx (dVVrx); the required signal-to-interference-plus-noise-ratio threshold (SINRth); the IoT device transmission power (PI); the CUE transmission power (PC); the D2D transmission power (PD); and the V2V transmission power (PV). Fig 2 shows the Pearson coefficients that illustrate the relationship between each input and output parameter. The graph shows that EE has a substantial negative correlation with the PI, PC, PD, and PV parameters, whereas the output dIG has a strong association with the dCB, dDDrx, and dVVrx parameters. Furthermore, R shows only a weak correlation with the input parameters. All of these variables are used to train the deep learning model, and the results section explains the significance of these associations. Fig 2. Pearson correlation coefficients of each input parameter (dCB, dDDrx, dVVrx, SINRth, PI, PC, PD and PV) and the output (dIG, EE and R). https://doi.org/10.1371/journal.pone.0311601.g002
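A minimal sketch of how such a dataset and correlation figure could be assembled, assuming the record layout just described (eight inputs, three outputs); the column names and the CSV file are hypothetical stand-ins for the MATLAB-generated data.

```python
import pandas as pd

# Hypothetical file holding the 44679 MATLAB-generated records.
df = pd.read_csv("smart_farm_dataset.csv")

inputs = ["d_CB", "d_DDrx", "d_VVrx", "SINR_th", "P_I", "P_C", "P_D", "P_V"]
outputs = ["d_IG", "EE", "R"]

# Pearson correlation of every input with every output (as in Fig 2).
corr = df[inputs + outputs].corr(method="pearson").loc[inputs, outputs]
print(corr.round(2))

# Min-max scaling of the inputs before they enter the network.
x = df[inputs]
x_scaled = (x - x.min()) / (x.max() - x.min())
```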
Proposed deep learning model In this section, the suggested deep learning model is demonstrated and explained. Before feeding the variables to the deep learning model, a normalization phase must be completed to help with the learning of the model weights. Each variable is normalized using the min-max scaling procedure before being passed to the model. The eight input variables (dCB, dDDrx, dVVrx, SINRth, PI, PC, PD, and PV) are mapped to the output parameters (dIG, EE, and R), which are produced by the final dense layer. The model has three distinct phases, namely 1D-CNN, flattening, and dense layers, as illustrated in Fig 3. The normalized input parameters are processed by three 1D-CNN layers, each with a kernel of size 1 and with 64, 64, and 128 filters, respectively. Fig 3. Proposed deep learning model. https://doi.org/10.1371/journal.pone.0311601.g003 To maintain a constant width of the output matrix, each 1D-CNN layer produces padded results. Next, a flattening layer receives the output of the third 1D-CNN layer and reshapes it for input into the dense layers. Six dense layers follow the flattening layer and produce the regression outputs. A grid search was used to test a number of options before choosing how many nodes to use for the dense layers and how many filters to employ for the 1D-CNN. The grid search also took activation function selection into consideration, testing several options for the hidden layers; while the Rectified Linear Unit (ReLU) was considered for each hidden layer, optimal results were obtained by feeding the output of each hidden layer into a parametric rectified linear unit (PReLU). The proposed model is trained with adaptive moment (Adam) optimization, targeting the root mean square error (RMSE) and mean absolute error (MAE) loss functions; Adam's adaptive learning rates allow the model to learn the required weights. Whereas RMSE is the square root of the average of the squared differences between actual and predicted values, MAE measures the average absolute difference between them. They can be expressed as \mathrm{MAE} = \frac{1}{n}\sum_{j=1}^{n}|y_j - x_j| (27) and \mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{j=1}^{n}(y_j - x_j)^2} (28), where xj is the predicted value, yj is the actual value, and n is the total number of recorded data points. The experiments that were conducted in order to develop, validate, and test the proposed model are covered in the section that follows.
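The description above maps naturally onto a Keras model. The sketch below is an illustrative reconstruction under the stated choices (three size-1-kernel Conv1D layers with 64/64/128 filters and padded outputs, a flatten layer, six dense layers, PReLU activations, Adam with MAE loss and an RMSE metric); the dense-layer widths are assumptions, since the grid-searched node counts are not given in this extraction.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(n_inputs=8, n_outputs=3):
    # The length-8 feature vector arrives shaped (steps, channels) so the
    # 1D convolutions can operate over the features.
    inp = layers.Input(shape=(n_inputs, 1))
    x = inp
    for filters in (64, 64, 128):       # three Conv1D stages, kernel size 1,
        x = layers.Conv1D(filters, kernel_size=1, padding="same")(x)  # padded output
        x = layers.PReLU()(x)           # parametric ReLU after each hidden layer
    x = layers.Flatten()(x)             # reshape for the dense stack
    for units in (256, 128, 64, 32, 16):  # five hidden dense layers (widths assumed)
        x = layers.Dense(units)(x)
        x = layers.PReLU()(x)
    out = layers.Dense(n_outputs)(x)    # sixth dense layer: d_IG, EE, R
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="mae",
                  metrics=[tf.keras.metrics.RootMeanSquaredError()])
    return model

model = build_model()
model.summary()
```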
Results and discussion This section presents the performance of the suggested deep learning and analytical models. Furthermore, the effectiveness of the suggested method was assessed in terms of achievable data rate and energy efficiency using MATLAB and Python simulations. As seen in Fig 4, the suggested deep learning model from the previous section is assessed and tested. The datasets were split into an 80% training set and a 20% test set. The training and validation mean absolute errors for the required dIG, EE, and R are shown in Fig 4(a)-4(c), respectively. Since the results were not changing noticeably beyond epoch 100, all of these graphs demonstrate that additional training was not necessary. Furthermore, Fig 4(d) shows nearly identical independent training and validation errors for each output, indicating that the proposed model was neither overfitted nor underfitted. It also shows how the training and validation losses eventually decrease and stabilise. Fig 4. Training and validation mean absolute error generated during training the proposed model. https://doi.org/10.1371/journal.pone.0311601.g004
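Assuming the dataset and model sketches above, the training procedure described here (80/20 split, roughly 100 epochs, MAE tracked on both splits) could look like the following; df, inputs, outputs, x_scaled, and build_model refer to the earlier hypothetical snippets.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# 80% train / 20% test split of the normalized records.
X = x_scaled.to_numpy()[..., np.newaxis]   # shape (n, 8, 1) for Conv1D
y = df[outputs].to_numpy()
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

model = build_model()
history = model.fit(X_train, y_train,
                    validation_data=(X_test, y_test),
                    epochs=100, batch_size=64, verbose=0)

# Training vs validation MAE, as plotted in Fig 4; similar curves on both
# splits suggest the model is neither overfitted nor underfitted.
print("final train MAE:", history.history["loss"][-1])
print("final val MAE:  ", history.history["val_loss"][-1])
```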
It has been assumed for the system evaluation that the transmission powers of IoT-sensors (PI), CUE (PC), D2D (PD), and V2V (PV) are equal, while SINRth and the transmission distance vary. Fig 5 compares the interference transmission distances between any interfering transmitter and its destination (D2D, V2V, and CUE-BS links) against the required transmission distance (dIG) between IoT-sensors and gateway (G) for the deep learning and analytical models. Various values of SINRth, namely 5 dB, 15 dB, and 20 dB, have been investigated to evaluate the efficacy of the proposed model. Furthermore, it has been assumed that the transmission power of all IoT-sensors and interfering devices is 23 dBm, and that the transmission distances between Dtx and Drx, as well as between Vtx and Vrx, are 1/5 of the transmission distance between CUE and BS. In the worst-case scenario, high interference transmission power may affect the data transferred from IoT sensors. As can be seen in Fig 5, for each SINRth and each interfering transmission distance, there exists an ideal required transmission distance between IoT-sensors and G (dIG) that fulfils the required IoT system performance for both the analytical and deep learning models. For example, when SINRth is 5 dB and the interference transmission distance is 100.7 m, the optimal required transmission distance between IoT-sensors and G (dIG) is 103.399 m for the analytical model and 104.7424 m for the deep learning model. On the other hand, with SINRth = 20 dB and an interference transmission distance of 101 m, the optimal required transmission distance (dIG) is 44.3611 m for both the analytical and deep learning models. Comparing the two scenarios, it can be concluded that increasing SINRth guarantees that the data will be sent across an efficient communication channel and that the received data is reliable enough to support decision-making, while also reducing the required transmission distance between IoT-sensors and G (dIG). Fig 5. Interference distance (m) vs required distance between IoT-sensors and gateway (dIG). https://doi.org/10.1371/journal.pone.0311601.g005 The best transmission distance between IoT-sensors and the gateway (G) is found by evaluating the system under various signal-to-interference-plus-noise-ratio thresholds (SINRth) of 5 dB, 10 dB, and 20 dB, taking into account the various transmission power levels of IoT-sensors. It was assumed that the transmission power of all interfering devices always equals the transmission power of the IoT sensors (PI). Upon closer inspection, Fig 6 displays a noteworthy trend as the transmission power of IoT-sensors increases; under this assumption, all interfering devices in the three SINRth circumstances see a simultaneous rise in transmission power. Notably, for both the analytical and deep learning models, a constant optimal transmission distance between IoT-sensors and gateways (G) maintains system performance. This highlights the system's ability to maintain performance requirements and its adaptability to variations in transmission power. It is also important to highlight that an increase in the optimal transmission distance required between IoT-sensors and gateway (G) correlates with a drop in SINRth. This phenomenon is necessary for precise and efficient information transmission. In other words, the system compensates for SINRth dips by extending the transmission distance, which maintains effective information exchange and improves the system's overall dependability and efficiency in a range of conditions. Fig 6. IoT-sensors transmission power (PI) (dBm) versus required distance between IoT-sensors and gateway (dIG) (m). https://doi.org/10.1371/journal.pone.0311601.g006 As previously noted, Fig 7 illustrates the relationship between the transmission power of IoT-sensors and the overall energy efficiency of the system for the three distinct values of SINRth. Here, the observed decrease in energy efficiency with increasing IoT-sensor transmission power, for both the analytical and deep learning models, is explained by the assumed equivalence between the IoT-sensor and interference transmission powers: the interference transmission power increases in direct proportion to each rise in IoT-sensor transmission power, so both powers rise at the same time. This correlation introduces an important trade-off in the system dynamics.
On the one hand, increasing the transmission power of IoT-sensors can help to improve signal strength and communication dependability. On the other hand, this surge also directly causes more interference, endangering the overall efficacy of the system. Upholding this trade-off necessitates a careful balance, emphasizing the need for strategic decision-making when selecting optimal transmission power levels. From a practical perspective, this highlights how important it is to consider both the benefits of increased IoT-sensor transmission power and the challenges posed by increased interference. System operators and designers are responsible for managing this trade-off and determining an equilibrium that maximizes energy savings without sacrificing the accuracy and dependability of the information sent. As a result, the observed drop in energy efficiency is a subtle signal that warrants additional research to fully understand the complex interplay between power management and interference control within the system framework. Fig 7. IoT-sensors transmission power (PI) (dBm) versus overall system energy efficiency (EE) (bit/J). https://doi.org/10.1371/journal.pone.0311601.g007 By comparing the desired distance between IoT-sensors and G (dIG) against all required SINRth values, Fig 8 illustrates the efficacy and robustness of the proposed approach. Three distinct interference distances (50, 100, and 250 metres) have been established in order to assess the effectiveness of the suggested model. Additionally, it has been assumed that all interfering devices and IoT sensors transmit data at the maximum transmission power of 23 dBm. In the worst scenario, IoT-sensor data transport may be impacted by significant amounts of interference transmission power. As shown in Fig 8, for each given interference distance there is an optimal required transmission distance between IoT-sensors and G, for both the analytical and deep learning models, in order to achieve the required SINRth. For instance, when SINRth is 12.2 dB and the interference distance is 50 m, the optimal required transmission distance between IoT-sensors and G (dIG) is 35.735 m for the analytical model and 36.562862 m for the deep learning model. For the same SINRth with an interference distance of 100 m, the optimal distances are 68.8212 m and 69.39796 m, respectively. Comparing the two cases shows that the required transmission distance between IoT-sensors and G (dIG) must be adapted to the interference distance to accomplish the desired SINRth. This guarantees that the information will be received with sufficient accuracy and reliability and that it will be delivered via an effective communication route. Fig 8. Required signal-to-interference-plus-noise-ratio (SINRth) versus required distance between IoT-sensors and gateway (dIG).
https://doi.org/10.1371/journal.pone.0311601.g008 Figs 9 and 10 show the relationship between the required signal-to-interference-plus-noise-ratio (SINRth) and the overall system energy efficiency, and between SINRth and the overall achievable data rate, respectively. As mentioned before, three distinct interference distance values have been considered. Fig 9 shows that, using either the analytical or deep learning model, there is no discernible change in the energy efficiency performance of the system when the interference distance is increased under varying values of SINRth. Moreover, the same performance is obtained for both the analytical and deep learning models, as Fig 10 illustrates, suggesting that increasing interference distance with different values of SINRth has no effect on the achievable data rate (R) of the system. The findings displayed in Figs 9 and 10 corroborate the idea presented in Fig 8 that altering the transmission distance in light of interference distance knowledge is one of the most crucial strategies for achieving the desired system performance. It is demonstrated that variations in the required signal-to-interference-plus-noise-ratio (SINRth) have minimal impact on the overall energy efficiency of the system. This resilience is proof of the proactive adjusting mechanism of the IoT sensors. By dynamically selecting transmission distances, these devices ensure that the system meets the predefined performance parameters, validating the effectiveness of the previously established adaptive transmission method. Fig 9. Required signal-to-interference-plus-noise-ratio (SINRth) versus overall system energy efficiency (EE) (bit/J). https://doi.org/10.1371/journal.pone.0311601.g009 Fig 10. Required signal-to-interference-plus-noise-ratio (SINRth) versus overall system achievable data rate (R) (bit/s). https://doi.org/10.1371/journal.pone.0311601.g010 To prove the effectiveness of the proposed model, a comparison with an existing method has been made [31]. A comparison between the proposed model and the model reported in [31], which focuses on the relationship between transmission power and overall energy efficiency as shown in Fig 11, demonstrates the superior performance of the suggested strategy. The recommended approach outperforms the alternative in terms of overall energy efficiency, and this outcome can be attributed to several significant factors. First, the recommended method uses sophisticated algorithms to determine the best transmission distance between IoT-sensors and G in order to maximize achievable data throughput and energy efficiency. The result of this optimization is improved energy efficiency, which allows for more efficient use of transmission power. Moreover, this adaptability enables the proposed approach to attain an optimal equilibrium between signal quality and transmission range, contributing to greater energy economy. Furthermore, energy efficiency may automatically be increased by the basic architecture or design principles of the proposed technique. This could involve new techniques for transmission protocols, interference management, or modulation methods that, when combined, produce a more energy-efficient system than the alternative paradigm.
In summary, the proposed technique's enhanced energy efficiency under transmission power and SINRth variations can be ascribed to its sophisticated optimization methods, versatility, and effective architecture. Fig 11. IoT-sensors transmission power (PI) vs overall energy efficiency (EE). https://doi.org/10.1371/journal.pone.0311601.g011 Conclusion This paper presents a novel communication method that combines analytical and deep learning models to enable effective communication between IoT-sensors and gateways in smart farms. The issue of maximizing energy efficiency and achievable data rate has been taken into consideration in order to guarantee dependable and accurate data transmission between IoT-sensors and gateway, as well as to lessen the impact of concurrent transmission from other devices sharing the same spectrum. Such interference can degrade the quality of the received data, affecting the data delivered to any destination. Farmers' decisions are impacted when the destination cannot obtain accurate data because of delayed data transmission. The first step in solving the challenge of maximizing energy efficiency and achievable data rate is to use the Lagrange optimization technique to determine the ideal required distance between IoT-sensors and gateways. Next, this distance is simulated using MATLAB, and the resulting data are used to build the ensuing 1D-CNN-based deep learning model. The 1D-CNN aims to reduce computational complexity, which makes it well suited for real-time applications and permits processing of energy efficiency and overall attainable data rate. Consequently, the deep learning model deployed on IoT-sensors can estimate the optimal required transmission distance, yielding a near-optimal outcome. Both approaches have thus been used to evaluate the optimal transmission distance between IoT-sensors and the gateway, and the analytical results of anticipating this ideal distance are presented in order to meet the fundamental system performance requirements. Results on the achievable data rate and system energy efficiency indicate that the proposed model may perform effectively under a range of environmental circumstances. Furthermore, it has been shown through the use of analytical and deep learning techniques that a number of factors, such as the required signal-to-interference-plus-noise ratio (SINRth), the interfering devices' transmission distance, and the IoT-sensors' transmission power (PI), can affect the required transmission distance. The findings demonstrate that, at maximum IoT-sensor and interference device power, as the required signal-to-interference-plus-noise ratio (SINRth) climbs, the necessary transmission distance between IoT-sensors and gateways decreases. This is because interference must be reduced or mitigated in order to attain the high required signal-to-interference-plus-noise ratio, which is achieved by shortening the transmission distance. In the end, the findings demonstrate that the suggested model may maintain a respectable degree of efficacy and dependability while achieving the necessary IoT performance for smart farm communication.
TI - Towards efficient IoT communication for smart agriculture: A deep learning framework JF - PLoS ONE DO - 10.1371/journal.pone.0311601 DA - 2024-11-21 UR - https://www.deepdyve.com/lp/public-library-of-science-plos-journal/towards-efficient-iot-communication-for-smart-agriculture-a-deep-OV3NgwM2S2 SP - e0311601 VL - 19 IS - 11 DP - DeepDyve ER -