Empirical Network Optimization via Non-uniform Sampling
Large-scale distributed optimization is a key component of the design of multi-scale network protocols. Traditionally, architecture-level hyperparameters and protocol parameters are tuned manually at run time, and this human fine-tuning step is needed to achieve the desired network performance metric. We will capitalize on the abundance of data and past design instances to automate the fine-tuning step via AI-enabled empirical optimization methods.
Architecture design is a fundamental component of TILOS research. The central goal of this TILOS project is to apply AI-enabled methods for black-box function optimization to the design and fine-tuning of large-scale networks. We have sought to automate the architecture design cycle in a data-driven manner by modeling the network performance metric as an unknown function of the architecture hyperparameters. We recently arrived at a new framework that combines adaptive sampling with Gaussian process (GP) bandit optimization and inherently accounts for reward shaping by focusing on instance-dependent performance metrics and adaptivity to hyperparameters. We are now extending these adaptive sampling approaches to account for model misspecification, resulting in an optimization framework that simultaneously adapts the sampling strategy and the underlying models based on the observed data.
Consider the problem of maximizing a black-box objective function f : X→R that can only be accessed through (possibly noisy) observations at query points. The goal is to design a query-point selection strategy that uses a finite evaluation budget of n queries to learn about the maximizer x∗ of f in a sample-efficient manner and to recommend a point once the budget is exhausted.
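As an illustration of this query-select-recommend loop, the sketch below uses a simple GP surrogate with an upper-confidence-bound (UCB) acquisition rule on a one-dimensional domain. The objective f, kernel lengthscale, exploration weight, and budget are placeholder choices for illustration, not parameters or algorithms from this project.

# Minimal sketch of budgeted black-box maximization with a GP surrogate and
# a UCB acquisition rule; all settings below are illustrative placeholders.
import numpy as np

def rbf_kernel(a, b, lengthscale=0.1):
    """Squared-exponential kernel on 1-D inputs."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(x_query, x_obs, y_obs, noise=1e-4, lengthscale=0.1):
    """GP posterior mean and standard deviation at x_query."""
    K = rbf_kernel(x_obs, x_obs, lengthscale) + noise * np.eye(len(x_obs))
    K_star = rbf_kernel(x_query, x_obs, lengthscale)
    mean = K_star @ np.linalg.solve(K, y_obs)
    var = 1.0 - np.sum(K_star * np.linalg.solve(K, K_star.T).T, axis=1)
    return mean, np.sqrt(np.clip(var, 1e-12, None))

def f(x):  # hypothetical black-box objective, observed with noise
    return np.sin(6 * x) + 0.5 * np.cos(14 * x)

rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 501)        # candidate query points in X = [0, 1]
x_obs = rng.uniform(0.0, 1.0, size=3)    # a few initial random queries
y_obs = f(x_obs) + 0.01 * rng.standard_normal(x_obs.shape)

n = 20          # total evaluation budget
beta = 2.0      # exploration weight in the UCB rule
for t in range(n - len(x_obs)):
    mean, std = gp_posterior(grid, x_obs, y_obs)
    x_next = grid[np.argmax(mean + beta * std)]   # query where UCB is largest
    y_next = f(x_next) + 0.01 * rng.standard_normal()
    x_obs = np.append(x_obs, x_next)
    y_obs = np.append(y_obs, y_next)

# Recommend the point with the highest posterior mean once the budget is spent.
mean, _ = gp_posterior(grid, x_obs, y_obs)
print("recommended x ~", grid[np.argmax(mean)])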
We have connected the problem of GP bandit optimization with the following two classes of problems: (1) Lipschitz continuous function optimization and (2) sampling theorems and full reconstruction of bandlimited functions. While initial work (Shubhanshu & Javidi, ICML 2022) provided the first instance-dependent regret analysis, subsequent work (Lee, Haddadin & Javidi, IEEE 2023) focused on an entirely novel FFT-based approach. Currently we are working to synthesize these works and bridge the gap between the problem classes by leveraging the inherent spectral decay of smooth functions. Our approach uses adaptive, incrementally updated models based on bandlimited reconstructions to optimize smooth functions.
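To illustrate the sampling-theoretic ingredient, the sketch below builds a bandlimited surrogate of a smooth periodic function by truncating its FFT to a few low-frequency modes and then maximizes that surrogate. The test function, number of samples, and bandwidth cutoff are hypothetical choices; this is not the published algorithm, only a simplified illustration of how spectral decay makes such reconstructions accurate.

# Illustrative sketch (not the published method): approximate a smooth periodic
# function from uniform samples by keeping only its lowest-frequency Fourier
# coefficients, then use that bandlimited surrogate to locate the maximizer.
import numpy as np

def f(x):  # smooth test function on [0, 1); its Fourier coefficients decay fast
    return np.exp(np.sin(2 * np.pi * x)) + 0.3 * np.cos(6 * np.pi * x)

m = 32                                   # number of uniform samples (the "budget")
bandwidth = 8                            # number of low-frequency modes retained
x_samples = np.arange(m) / m
coeffs = np.fft.rfft(f(x_samples)) / m   # one-sided spectrum of the samples

# Spectral decay: higher-order coefficients of a smooth function are tiny,
# so truncating them changes the reconstruction very little.
coeffs[bandwidth:] = 0.0

def surrogate(x):
    """Evaluate the truncated Fourier series (bandlimited model) at points x."""
    k = np.arange(len(coeffs))
    terms = coeffs[None, :] * np.exp(2j * np.pi * k[None, :] * x[:, None])
    # Double the non-DC terms because rfft stores only non-negative frequencies.
    terms[:, 1:] *= 2.0
    return terms.sum(axis=1).real

grid = np.linspace(0.0, 1.0, 2001)
x_hat = grid[np.argmax(surrogate(grid))]   # maximizer of the bandlimited model
print("surrogate maximizer ~", x_hat, " objective value there ~", f(x_hat))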
Team Members
Tara Javidi (UC San Diego)
Farinaz Koushanfar (UC San Diego)
Collaborators
Osama Haddadin (L3Harris)