Design a Task Offloading Policy for Computational Tasks in Metaverse
Amin, Mohammad Javad | 2025
- Type of Document: M.Sc. Thesis
- Language: Farsi
- Document No: 58417 (05)
- University: Sharif University of Technology
- Department: Electrical Engineering
- Advisor(s): Hossein Khalaj, Babak; Ashtiani, Farid
- Abstract:
- This study addresses the challenge of optimizing computational task offloading in the metaverse, the next-generation platform for human communication. The main objective is to guarantee user Quality of Experience (QoE) under the strict computational and communication limits of end devices. To this end, a three-tier end–edge–cloud architecture is considered and the offloading process is modeled. To improve service quality, a task-prioritization strategy with hard and soft deadlines is adopted so that the on-time task completion rate is maximized while energy consumption on end devices is minimized. Since no comprehensive dataset covering all scenarios is available, reinforcement learning is chosen over fully supervised, data-driven methods, enabling the policy to be learned through interaction with the environment and reward feedback. For policy learning, Double DQN and Double Dueling DQN are implemented. Both are off-policy methods, so they are more sample-efficient than on-policy alternatives, work reliably in high-dimensional state spaces, and converge more quickly to useful policies in practice. Policies are designed and compared under two decision patterns: a single central agent and per-edge agents. Although a single global agent can in theory reach a better optimum, finding the optimal policy in a large state space is difficult and computationally heavy. In contrast, a multi-agent design shrinks each agent's observation space and distributes the learning load, leading to faster convergence toward near-optimal policies. Simulation results show that the proposed algorithms deliver consistently higher QoE: deadline violations drop, the success rate within hard and soft deadline windows rises, and energy consumption remains at a desirable level. Deploying agents on edge servers also outperforms the centralized scheme, since the reduced state dimension enables faster learning and more efficient decisions.
Overall, across diverse scenarios the proposed methods consume less energy than baseline algorithms and raise the task-completion rate by at least 20% in a wide range of settings. The combination of Double Dueling DQN, a multi-agent edge-centric architecture, hard/soft deadline handling, and task prioritization offers an effective and scalable approach to improving QoE in end–edge–cloud systems, particularly for metaverse applications.
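The thesis itself is in Farsi and its implementation is not reproduced here; as a minimal illustrative sketch (not the author's code), the two algorithmic ingredients named in the abstract, the Double DQN bootstrap target and the dueling Q-value aggregation, can be written as follows. All function names and toy values are hypothetical.

```python
import numpy as np

def double_dqn_targets(q_online_next, q_target_next, rewards, dones, gamma=0.99):
    """Double DQN target: the online network selects the next action,
    while the (slower-moving) target network evaluates it, which reduces
    the overestimation bias of vanilla DQN."""
    best_actions = np.argmax(q_online_next, axis=1)                 # selection
    evaluated = q_target_next[np.arange(len(best_actions)), best_actions]  # evaluation
    return rewards + gamma * (1.0 - dones) * evaluated

def dueling_q(value, advantages):
    """Dueling aggregation: Q(s,a) = V(s) + A(s,a) - mean_a A(s,a).
    Subtracting the mean advantage makes the V/A decomposition identifiable."""
    return value[:, None] + advantages - advantages.mean(axis=1, keepdims=True)

# Toy batch of two transitions:
q_online_next = np.array([[1.0, 2.0], [3.0, 0.0]])
q_target_next = np.array([[0.5, 1.5], [2.0, 4.0]])
targets = double_dqn_targets(q_online_next, q_target_next,
                             rewards=np.array([1.0, 0.0]),
                             dones=np.array([0.0, 1.0]), gamma=0.9)
# targets → [2.35, 0.0]: online net picks actions [1, 0]; target net
# evaluates them as [1.5, 2.0]; the terminal transition keeps only its reward.
```

In the per-edge variant described above, each edge server would train such a learner on its own (smaller) observation space, which is what yields the faster convergence the abstract reports.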
- Keywords:
- Metaverse ; Task Offloading ; Edge Computing ; Cloud Computing ; Deep Reinforcement Learning ; Cloud-Edge-End Computing
