Keynote Speakers at ICCCN 2026
July 27, 2026 (Mon)
Nexus of AI and Cybersecurity
Prasant Mohapatra, University of South Florida
Abstract: Artificial Intelligence (AI) is rapidly transforming the technological landscape, redefining how systems learn, reason, and operate at scale. At the same time, cybersecurity has emerged as a critical global challenge, with cyberattacks growing in frequency, sophistication, and economic impact. This talk explores the powerful and complex intersection between AI and cybersecurity, highlighting both the opportunities and emerging risks in this evolving domain. AI offers unprecedented capabilities to strengthen cybersecurity defenses. However, these same capabilities are increasingly being weaponized by adversaries. Attackers are leveraging AI to automate cyberattacks, enhance social engineering, exploit vulnerabilities, and scale malicious operations, thereby intensifying the asymmetry between defenders and attackers. The talk delves into the emerging paradigm of Generative AI and Agentic AI systems, which introduce new security challenges. The session will also present mitigation strategies, including secure system design, runtime monitoring, governance frameworks, and the integration of traditional cybersecurity principles with AI-specific safeguards. Finally, the talk reflects on the future coexistence of real and artificial worlds, emphasizing the need for robust, ethical, and adaptive security frameworks. As AI continues to evolve faster than traditional systems, building resilient, trustworthy, and human-centric cybersecurity solutions will be essential.

July 28, 2026 (Tue)
Adaptive Application and Offloading for Video Analytics and Communications
Hang Liu, The Catholic University of America and The U.S. National Science Foundation
Abstract: Video analytics is a popular computer vision task with many applications, spanning from surveillance, autonomous driving, and AR/VR to industrial automation, smart cities, and remote healthcare. In particular, semantic communication has recently attracted considerable research interest as a new intelligent paradigm that focuses on transmitting the meaning or intent of information, which can drastically reduce network bandwidth usage and transmission latency. With the assistance of generative AI, semantic communication offers greater flexibility by enabling data modification and modality conversion during communication, e.g., generating images with different object orientations or translating text to video. However, semantic communication extracts the relevant data semantics and features at the sender, while the receiver interprets and reconstructs a meaningful version of the original message and performs a specific task with the features; this achieves data compression but may also introduce information loss and distortion. In addition, computation is offloaded to both the sender and the receiver, so resource-constrained mobile or Internet of Things (IoT) devices may suffer from heavy processing burdens and battery drain, leading to high latency and low accuracy. The sender and receiver also need to share external knowledge and context for semantic communication, and the semantic significance of data varies across tasks and over time at the receiving end. These factors introduce additional overhead and make it challenging to maintain quality of service. This talk will discuss the tradeoffs between on-device semantic information/feature coding and the transmission of raw compressed video. The impacts of video configurations and video analytics pipelines are considered, and approaches for adapting communication strategies and distributed machine learning actions are explored.

July 29, 2026 (Wed)
Principles of Machine Learning Theories and Network Applications
Nageswara S. V. Rao, Oak Ridge National Laboratory
Abstract: Machine learning (ML) computations of increasing sophistication and complexity are being developed to solve complex, data-driven problems in diverse areas. Their output is often subject to undesirable phenomena, such as overfitting and hallucinations, that are hard to detect, reducing their scientific rigor and confidence. We propose the concept of ML-solvability by combining the theories of learnability, computing, and logic; it characterizes the model space, the learning algorithm that estimates a model using samples, and the inference algorithm that utilizes the model. It provides insights into the applicability and generalization of ML codes, and into the possibility of incomplete and unsound inferences if the underlying problem is not ML-solvable. We describe a framework for ML-solvability and generalization analyses based on a combination of laws that govern the system or network and information laws that characterize the learning processes. We briefly describe the uses of smooth and non-smooth laws to develop or analyze ML solutions to problems in systems and networks. We illustrate solutions to two problems in networking: (i) estimation of concave-convex throughput profiles of data transport networks, and (ii) converting inaccurate network measurements from digital twins to match those of physical and cloud networks and their testbed emulations.
