• DRL-Based Resource Allocation for RAN Slicing with Multiple Base Stations

    Subjects: Computer Science >> Integration Theory of Computer Science submitted time 2022-05-10 Cooperative journals: 《计算机应用研究》

    Abstract: In fifth-generation mobile communication (5G), network slicing is used to provide an optimized network for each type of service. In the multi-base-station RAN slicing scenario, previous resource allocation methods cannot meet slice demands once the number of slices changes, and are therefore only suitable for specific scenarios. To solve this problem, this paper proposes a method that achieves optimal resource allocation independently of the number of slices. The method first uses Ape-X (a distributed deep reinforcement learning method) to allocate resources to slices, and then meets user demands through slice-to-base-station resource mapping and per-user resource allocation. Simulation results show that the proposed method allocates resources according to the state and demand of each slice, assigns the necessary number of RBs to satisfy slice requirements, and is not affected by changes in the number of slices. The method also shows good generality and scalability.
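
    As a rough illustration of the slice-level allocation loop described above (a sketch under assumptions, not the paper's implementation), the environment below exposes per-slice demand as the state, takes the RB share granted to each slice as the action, and rewards satisfied demand while penalizing wasted RBs; an Ape-X-style DQN agent would be trained against such an environment. All names and constants (NUM_RBS, SliceEnv, the reward weights) are hypothetical.

```python
import numpy as np

NUM_RBS = 100          # total RBs the RAN can allocate per scheduling interval (assumed)

class SliceEnv:
    def __init__(self, num_slices):
        self.num_slices = num_slices

    def reset(self):
        # state: normalized per-slice demand (RBs each slice needs this interval)
        self.demand = np.random.randint(5, 40, size=self.num_slices)
        return self.demand / NUM_RBS

    def step(self, action):
        # action: fraction of RBs granted to each slice; renormalize to the RB budget
        share = np.asarray(action, dtype=float)
        share = share / max(share.sum(), 1e-9)
        granted = np.floor(share * NUM_RBS)
        # reward: how well granted RBs cover slice demand, minus a penalty for waste
        satisfied = np.minimum(granted, self.demand).sum() / self.demand.sum()
        waste = np.maximum(granted - self.demand, 0).sum() / NUM_RBS
        reward = satisfied - 0.5 * waste
        return self.reset(), reward

env = SliceEnv(num_slices=3)
state = env.reset()
next_state, reward = env.step([0.3, 0.5, 0.2])   # a hand-picked action for illustration
print(reward)
```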

  • Joint Sub-channel and Power Allocation Algorithm for NOMA-Based Heterogeneous Cloud Radio Access Networks

    Subjects: Computer Science >> Integration Theory of Computer Science submitted time 2022-05-10 Cooperative journals: 《计算机应用研究》

    Abstract: In heterogeneous cloud radio access networks based on NOMA, the increase in network utility comes at the cost of energy consumption, which leads to low energy efficiency. To solve this problem, this paper proposed a joint sub-channel and power allocation scheme. The algorithm first defined the difference between network utility and grid energy cost as the system revenue, and then established an optimization problem that maximizes system revenue subject to constraints on maximum transmit power, the battery capacity of energy harvesting (EH) units, users' minimum data-rate requirements, and cross-layer interference thresholds. Next, it used a greedy algorithm to pair users and assign sub-channels, obtaining a low-complexity sub-optimal solution. Finally, it optimized the power allocation with a Lagrangian maximization method based on the alternating direction method of multipliers. Simulation results show that, compared with a NOMA system without EH units, the proposed scheme increases system revenue by about 18.8%; compared with an OFDMA system equipped with EH units, system revenue increases by about 11.8%.
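
    The greedy pairing step can be sketched as follows (an assumed, simplified version, not the paper's code): on each sub-channel, the remaining user with the strongest gain is paired with the remaining user with the weakest gain, which favors the power-domain separation NOMA relies on. The gains matrix and Rayleigh model are illustrative.

```python
import numpy as np

def greedy_pairing(gains, num_subchannels):
    """Assign one (strong, weak) user pair to each sub-channel, greedily."""
    users = list(range(gains.shape[0]))
    pairs = {}
    for c in range(num_subchannels):
        if len(users) < 2:
            break
        # strong user: highest gain on sub-channel c among unassigned users
        strong = max(users, key=lambda u: gains[u, c])
        users.remove(strong)
        # weak user: lowest gain on sub-channel c, maximizing the gap to the strong
        # user, which eases successive interference cancellation at the receiver
        weak = min(users, key=lambda u: gains[u, c])
        users.remove(weak)
        pairs[c] = (strong, weak)
    return pairs

gains = np.random.rayleigh(scale=1.0, size=(8, 4))   # 8 users, 4 sub-channels (toy data)
print(greedy_pairing(gains, num_subchannels=4))
```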

  • Intent-Based IoT Service Description and Discovery

    Subjects: Computer Science >> Integration Theory of Computer Science submitted time 2022-05-10 Cooperative journals: 《计算机应用研究》

    Abstract: In Internet of Things (IoT) service discovery, users usually express their needs as intentions, while service descriptions describe service functions; the mismatch between the two reduces the accuracy of service discovery. At the same time, accuracy decreases as the number of service types grows. To solve these problems, this paper proposed a method that introduces an intention service ontology into IoT service description and extends it with service context and QoS (Quality of Service). The extended intention service ontology was stored in OWL-S (Web Ontology Language for Services) files, which can express service functions in an intention-oriented way, enrich the semantics of IoT service descriptions, and improve the accuracy of service discovery. Simulation results showed that the proposed service description method and the corresponding service discovery algorithm improve accuracy by 6.7% compared with the traditional service discovery method.
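
    A minimal sketch of intent-aware matching (illustrative only, not the paper's matching algorithm): a service described by intention, context, and QoS terms is scored against a user request with a weighted set similarity; the weights and the Jaccard measure are assumptions.

```python
def jaccard(a, b):
    """Set similarity in [0, 1]; 0 when both term sets are empty."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def match_score(request, service, weights=(0.5, 0.3, 0.2)):
    # weight intention highest, then context, then QoS (assumed weighting)
    w_intent, w_ctx, w_qos = weights
    return (w_intent * jaccard(request["intent"], service["intent"])
            + w_ctx * jaccard(request["context"], service["context"])
            + w_qos * jaccard(request["qos"], service["qos"]))

request = {"intent": ["monitor", "temperature"], "context": ["indoor"], "qos": ["low-latency"]}
services = [
    {"name": "thermo-sensor", "intent": ["monitor", "temperature"],
     "context": ["indoor"], "qos": ["low-latency"]},
    {"name": "camera", "intent": ["monitor", "video"],
     "context": ["outdoor"], "qos": ["high-bandwidth"]},
]
best = max(services, key=lambda s: match_score(request, s))
print(best["name"])     # thermo-sensor
```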

  • Hybrid Autoscaling of Microservices Based on Workload Prediction

    Subjects: Computer Science >> Integration Theory of Computer Science submitted time 2022-04-07 Cooperative journals: 《计算机应用研究》

    Abstract: Because edge clouds have less processing capacity than the central cloud, dynamic workloads can easily trigger unexpected autoscaling or overload the available resources. This paper experimentally evaluates two synthetic and two real workloads on a microservice application deployed in a real edge-cloud environment, and proposes a hybrid autoscaling method based on workload prediction (predictive horizontal and vertical Pod autoscaling, Pre-HVPA). The method first applies machine learning to the characteristics of the workload data to obtain a workload forecast, and then feeds the forecast to a hybrid autoscaling module. Simulations show that the microservice autoscaling policy based on this method reduces scaling jitter and the number of Pod containers used, and that the method is scalable and therefore suitable for microservice applications in edge-cloud environments.
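
    A minimal sketch of the predict-then-scale idea (assumed design, not Pre-HVPA itself): a naive trend forecast of the workload drives a combined horizontal/vertical decision. The thresholds, per-Pod capacity, and the forecast model are illustrative assumptions.

```python
from statistics import mean

def forecast_next(history, window=5):
    # naive trend forecast: last value plus the mean of the recent first differences
    recent = history[-window:]
    diffs = [b - a for a, b in zip(recent, recent[1:])] or [0]
    return recent[-1] + mean(diffs)

def hybrid_decision(predicted_rps, replicas, cpu_per_pod,
                    pod_capacity_rps=100, max_cpu=2.0):
    needed = predicted_rps / pod_capacity_rps          # "pods worth" of expected load
    if needed > replicas:
        # prefer vertical scaling while Pods still have CPU headroom, else add Pods
        if cpu_per_pod < max_cpu:
            return replicas, min(max_cpu, cpu_per_pod * 1.5)
        return int(needed + 0.999), cpu_per_pod
    if needed < replicas * 0.5 and replicas > 1:
        return replicas - 1, cpu_per_pod               # scale in gently to limit jitter
    return replicas, cpu_per_pod

history = [80, 95, 110, 130, 155, 170]                 # observed requests per second
pred = forecast_next(history)
print(hybrid_decision(pred, replicas=1, cpu_per_pod=1.0))   # -> vertical scale-up here
```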

  • A Secure Terminal Access Scheme for Mobile Tactical Environments

    Subjects: Computer Science >> Integration Theory of Computer Science submitted time 2020-09-28 Cooperative journals: 《计算机应用研究》

    Abstract: Poor communication conditions and frequent terminal movement in search, rescue, military, and similar environments make it difficult for traditional security authentication systems to provide secure terminal access. To solve this problem, this paper proposed a secure terminal access scheme for mobile tactical environments. The scheme used a certificateless key management mechanism and analyzed the security authentication process after a terminal moves, as well as the secure handling procedure after terminals or gateways are damaged or compromised. Simulation results show that the scheme improves the security of authorization and authentication between gateway and terminal, resists several known attacks, and solves the problems of missing mutual authentication and of key escrow in tactical environments; the certificateless key algorithm used in this scheme offers better security and lower computational overhead than comparable algorithms, and balances the security of gateway access against energy consumption during terminal movement.
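
    The certificateless idea the scheme builds on can be illustrated with a toy example (NOT a secure or faithful implementation): the key generation center derives a partial private key from the terminal's identity, the terminal adds its own secret, and the resulting public key needs no certificate. The group parameters, hash, and combination rule below are simplifications; real schemes use elliptic-curve groups.

```python
import hashlib

# Toy parameters: a small discrete-log-style group, far too weak for real use.
P = 2**127 - 1          # toy prime modulus
G = 5                   # toy generator

def h(data: str) -> int:
    return int.from_bytes(hashlib.sha256(data.encode()).digest(), "big") % P

# Key generation center (KGC): master secret s, partial private key bound to the identity
s = 123456789
def partial_private_key(identity: str) -> int:
    return (s * h(identity)) % (P - 1)

# Terminal: combines the KGC share with its own secret, so neither party alone
# holds the full private key and no certificate authority is required.
identity = "terminal-17"
x = 987654321                                   # terminal's locally chosen secret
full_private = (partial_private_key(identity) + x) % (P - 1)
public_key = pow(G, full_private, P)            # published for gateway-side authentication
print(hex(public_key))
```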

  • Spectral Efficiency Analysis of Large-Scale MU-MISO Systems over Spatially Correlated Channels

    Subjects: Computer Science >> Integration Theory of Computer Science submitted time 2018-11-29 Cooperative journals: 《计算机应用研究》

    Abstract: This paper studies how the channel and power allocation affect the spectral efficiency of a large-scale MU-MISO system over spatially correlated channels. First, the transmit correlation matrix of each user was derived from a steering matrix determined by the path differences of the transmitted signal, and the spatially correlated channel was established under large-scale shadow fading and a Gaussian-distributed scattering environment. Then, based on this channel, precoding schemes that reduce multi-user interference were simulated and their spectral efficiency evaluated. Finally, a power allocation algorithm that maximizes spectral efficiency was proposed for the scenario in which the transmit power is limited and each user's received signal-to-interference-plus-noise ratio is constrained. Simulation results show that RZF precoding achieves higher spectral efficiency than MRT precoding in spatially correlated channels, and that, compared with equal power allocation, the proposed algorithm improves spectral efficiency and has both theoretical and practical significance.
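
    A small numerical sketch of the RZF-versus-MRT comparison (assumptions throughout: an exponential transmit-correlation model, equal power across users, and arbitrary parameter values, not the paper's exact setup):

```python
import numpy as np

M, K, snr = 64, 8, 10.0                 # transmit antennas, single-antenna users, linear SNR
rho = 0.6                               # transmit correlation coefficient (assumed)
R = rho ** np.abs(np.subtract.outer(np.arange(M), np.arange(M)))   # exponential correlation
R_sqrt = np.linalg.cholesky(R)

rng = np.random.default_rng(0)
# spatially correlated channel: H = R^{1/2} * (i.i.d. complex Gaussian)
H = (R_sqrt @ (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K)))) / np.sqrt(2)

def sum_se(W):
    W = W / np.linalg.norm(W)                       # total transmit power constraint
    G = np.abs(H.conj().T @ W) ** 2                 # |h_k^H w_j|^2 effective gains
    sig = snr * np.diag(G)
    intf = snr * (G.sum(axis=1) - np.diag(G))
    return np.sum(np.log2(1 + sig / (1 + intf)))    # sum spectral efficiency (bit/s/Hz)

W_mrt = H                                                           # MRT: match the channel
W_rzf = H @ np.linalg.inv(H.conj().T @ H + (K / snr) * np.eye(K))   # regularized zero-forcing
print("MRT:", sum_se(W_mrt), "bit/s/Hz")
print("RZF:", sum_se(W_rzf), "bit/s/Hz")
```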

  • A User Mobility Rule Mining and Location Prediction Method Based on Pattern Matching Degree

    Subjects: Computer Science >> Integration Theory of Computer Science submitted time 2018-08-13 Cooperative journals: 《计算机应用研究》

    Abstract: Predicting the location of mobile users supports the temporal and spatial allocation of system resources and can improve the resource utilization of mobile communication systems. Traditional location prediction methods suffer from low accuracy because support is computed unreasonably. To overcome this, this paper presented a method for mining user mobility rules based on pattern matching degree and for predicting location, which predicts locations for mobile users with the base-station coverage grid as the unit. The procedure contains three phases: a) mining user mobility patterns by graph traversal; b) generating user mobility rules from the mined patterns; c) predicting locations based on the mobility rules. Experiments on 10 batches of trajectory data demonstrate that, compared with traditional methods, this method produces fewer rules with higher support and confidence, and achieves higher prediction accuracy.
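
    A simplified sketch of the mine-rules-then-predict pipeline (assumed; plain support/confidence thresholds stand in for the paper's pattern matching degree): frequent cell-transition rules are mined from trajectories, and the next cell is predicted by matching the tail of the current path.

```python
from collections import Counter, defaultdict

def mine_rules(trajectories, order=2, min_support=2):
    """Mine rules (last `order` cells) -> next cell with support above a threshold."""
    counts = defaultdict(Counter)
    for traj in trajectories:
        for i in range(len(traj) - order):
            prefix = tuple(traj[i:i + order])        # antecedent: the last `order` cells
            counts[prefix][traj[i + order]] += 1     # consequent: the next visited cell
    rules = {}
    for prefix, nexts in counts.items():
        cell, support = nexts.most_common(1)[0]
        if support >= min_support:
            rules[prefix] = (cell, support / sum(nexts.values()))   # (prediction, confidence)
    return rules

def predict(rules, current_path, order=2):
    return rules.get(tuple(current_path[-order:]))

trajectories = [["A", "B", "C", "D"], ["A", "B", "C", "E"], ["B", "C", "D", "A"],
                ["A", "B", "C", "D"]]
rules = mine_rules(trajectories)
print(predict(rules, ["A", "B", "C"]))    # ('D', 0.75) on these toy trajectories
```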

  • Energy-Efficiency-Based Micro Base Station Deployment in Heterogeneous Cellular Networks

    Subjects: Computer Science >> Integration Theory of Computer Science submitted time 2018-05-20 Cooperative journals: 《计算机应用研究》

    Abstract: This paper studies the energy efficiency of heterogeneous cellular networks modeled by homogeneous Poisson point processes. First, the heterogeneous cellular network is modeled with a homogeneous Poisson point process. Second, expressions for the system's coverage probability and achievable rate are derived under Rayleigh fading, from which a closed-form expression for the energy efficiency is deduced. Finally, the density of micro base stations is optimized with a convex optimization method to maximize the network's energy efficiency. Simulation results show that the micro-base-station density has a significant effect on the system's energy efficiency: setting the density appropriately helps improve energy efficiency.
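
    A minimal sketch of the density optimization step (an illustrative saturating-throughput model, not the paper's closed-form expression): sweep the micro-base-station density and keep the value that maximizes the ratio of area throughput to area power.

```python
import numpy as np

def energy_efficiency(density, sat_density=2e-5, t_max=1e3,
                      p_area_macro=1e-4, p_micro=15.0):
    # assumed area throughput (bit/s per m^2), saturating as densification stops paying off
    area_rate = t_max * density / (density + sat_density)
    # assumed area power: fixed macro-tier consumption plus per-micro-BS consumption
    area_power = p_area_macro + density * p_micro
    return area_rate / area_power

densities = np.linspace(1e-6, 1e-4, 200)        # candidate micro-BS densities (BS per m^2)
ee = [energy_efficiency(d) for d in densities]
best = densities[int(np.argmax(ee))]
print(f"EE-maximizing micro-BS density ~ {best:.2e} BS/m^2")
```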

  • Latency Improvement of the Successive Cancellation Decoding Algorithm for Polar Codes

    Subjects: Computer Science >> Integration Theory of Computer Science submitted time 2018-05-20 Cooperative journals: 《计算机应用研究》

    Abstract: Polar codes, proposed by Arikan, have attracted much attention due to their simple structure. As a high-performance channel code, polar codes achieve good performance once the code length N exceeds 2^10. With the successive cancellation (SC) decoding algorithm, however, decoding latency grows with the code length. By analyzing the positions of the frozen bits and the SC decoder architecture, this paper proposed an improved SC decoding algorithm that effectively reduces the latency of the traditional SC decoder, cutting decoding latency by 50% compared with the original algorithm. Furthermore, it improves the error-correction capability of the decoder by applying successive cancellation single-bit flipping.
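
    One common way to cut SC latency, shown here as an illustrative sketch (the paper's exact algorithm may differ), is to collapse subtrees whose leaves are all frozen bits or all information bits into single decisions. The code below counts tree-traversal steps with and without this pruning, using a toy BEC-based frozen-set construction.

```python
def bec_reliability(n, eps=0.5):
    """Erasure probabilities of the n synthetic channels of a polar code over a BEC."""
    z = [eps]
    while len(z) < n:
        z = [zz for x in z for zz in (2 * x - x * x, x * x)]
    return z

def sc_latency(frozen_mask, prune=True):
    """Count tree-traversal steps of SC decoding for a code of length len(frozen_mask)."""
    n = len(frozen_mask)
    if n == 1:
        return 1
    if prune and (all(frozen_mask) or not any(frozen_mask)):
        return 1          # all-frozen or all-information subtree: decided in one shot
    half = n // 2
    return 1 + sc_latency(frozen_mask[:half], prune) + sc_latency(frozen_mask[half:], prune)

N, K, eps = 1024, 512, 0.5
z = bec_reliability(N, eps)
frozen_set = set(sorted(range(N), key=lambda i: z[i], reverse=True)[:N - K])
frozen = [i in frozen_set for i in range(N)]
print("plain SC steps :", sc_latency(frozen, prune=False))
print("pruned SC steps:", sc_latency(frozen, prune=True))
```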

  • A CUDA-Based Truncated-Overlap Viterbi Decoding Algorithm

    Subjects: Computer Science >> Integration Theory of Computer Science submitted time 2018-04-12 Cooperative journals: 《计算机应用研究》

    Abstract: To address the channel-decoding bottleneck in high-throughput communication systems, this paper proposes a truncated-overlap Viterbi decoder for convolutional codes based on CUDA, designed by analyzing parallel processing on the Compute Unified Device Architecture (CUDA) and exploring parallel implementations of Viterbi decoding. The algorithm performs the independent forward metric computation and the trace-back procedure in parallel by overlapping truncated sub-trellises. Experiments show that the method maintains a low BER, achieves a 1.3-3.5x throughput improvement over existing implementations, and reduces hardware consumption, so it can be used effectively in practical high-throughput communication systems.
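
    The truncated-overlap partitioning can be sketched as follows (illustrative index arithmetic only; the per-block forward-metric and trace-back computations would run as CUDA kernels): the received sequence is split into blocks that overlap by a traceback depth, each block is decoded independently, and only the non-overlapping core of each block's decisions is kept.

```python
def overlapped_blocks(num_symbols, block_len, overlap):
    """Yield (start, end, keep_from, keep_to) index ranges for each truncated sub-trellis."""
    step = block_len - 2 * overlap
    start = 0
    while True:
        end = min(start + block_len, num_symbols)
        # the first/last blocks have no left/right neighbor, so nothing is discarded there
        keep_from = start if start == 0 else start + overlap
        keep_to = end if end == num_symbols else end - overlap
        yield start, end, keep_from, keep_to
        if end == num_symbols:
            break
        start += step

# 1000 trellis stages, 256-stage blocks, 32-stage overlap (an assumed traceback depth)
for blk in overlapped_blocks(num_symbols=1000, block_len=256, overlap=32):
    print(blk)
```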