Technical Tracks

Track 4: Edge and Cloud Computing
Track Co-Chairs:

Description:
Personal embedded and mobile devices are now pervasive. With the widespread adoption of the Internet of Things (IoT), an increasing number of connected devices are being deployed. Relying on cloud computing alone to handle the massive amounts of data generated and consumed by these devices has limitations, such as increased latency and substantial network traffic, which hamper support for applications requiring low response times. This has led to the emergence of edge computing, in which data processing is moved closer to the devices where the data is actually generated. The cloud, on the other hand, remains crucial for larger applications that demand processing over multiple data sources and cannot be handled at the edge. A combination of end mobile and embedded devices, edge processing devices, and the cloud is therefore needed to support applications with heterogeneous requirements. Together, devices, edge, and cloud form a continuum of computing capacity that requires new management mechanisms and algorithms to support the efficient execution of applications. As applications of Large Language Models (LLMs) gain popularity on mobile devices, resource-limited mobile terminals face challenges in efficiently performing large-model inference, and LLMs introduce significant complexities in task offloading and resource allocation within edge-cloud computing. The Edge and Cloud Computing track aims to attract research that explores networking and computing management across this computing continuum.

Track Topics:
• Resource management and allocation in Edge-Fog-Cloud
• Joint scheduling and optimization of networking and distributed computing resources
• Integration of NFV into the Edge-Fog-Cloud
• Edge/fog computing and network services
• Middleware for cloud/fog computing applications
• Resource slicing in the computing continuum
• Autonomic distributed service and network management
• Business models for the computing continuum
• QoS/QoE management for static and mobile applications
• Distributed infrastructure monitoring
• Machine learning and distributed learning for edge and cloud resource management
• Edge Intelligence: Management of distributed machine learning tasks
• Distributed learning deployment, management, and applications
• Datacenter networking
• Caching in the Edge-Fog-Cloud
• DNN Partitioning and Offloading in Edge-Fog-Cloud
• Big AI Models with collaborative edge-cloud computing
• LLM inference offloading in Edge-Fog-Cloud
• Training acceleration for LLMs in Edge-Fog-Cloud
• LLM-empowered autonomous edge intelligence
• Real-time Internet of Things (IoT) applications

TPC List:
• Aleksey Charapko; University of New Hampshire, USA
• Changxin Bai; Kettering University, USA
• Daniel Corujo; University of Aveiro, Portugal
• Daniele Tarchi; University of Florence, Italy
• G.N.V. RajaReddy; University of Saskatchewan, Canada
• Habeeb Olufowobi; University of Texas at Arlington, USA
• Lanyu Xu; Oakland University, USA
• Liangkai Liu; University of Michigan, USA
• Maximo Morales Cespedes; UC3M, Spain
• Miguel Gutiérrez Gaitán; Pontificia Universidad Católica de Chile, Chile
• Mohammad Alquraan; University of Glasgow, UK
• Nathaniel Hudson; Illinois Institute of Technology, USA
• Qiang Liu; University of Nebraska-Lincoln, USA
• Xin Zhang; Binghamton University, USA
• Yasir Saleem; Aberystwyth University, UK
• Zheng Song; University of Michigan-Dearborn, USA
• Zhengrui Qin; Northwest Missouri State University, USA