Test-time adaptation (TTA) has emerged as a promising solution to address performance decay due to
unforeseen distribution shifts between training and test data. While recent TTA methods excel in
adapting to variations in test data, this very adaptability leaves a model vulnerable to malicious
examples. Indeed, previous studies have uncovered security vulnerabilities in TTA even when only a
small proportion of the test batch is maliciously manipulated. In response to this emerging threat, we
propose median batch normalization (MedBN), leveraging the robustness of the median for statistics
estimation within the batch normalization layer during test-time inference. Our method is
algorithm-agnostic, thus allowing seamless integration with existing TTA frameworks. Our experimental
results on benchmark datasets, including CIFAR10-C, CIFAR100-C, and ImageNet-C, consistently
demonstrate that MedBN outperforms existing approaches in maintaining robust performance across
different attack scenarios, encompassing both instant and cumulative attacks. Through extensive
experiments, we show that our approach maintains its performance even in the absence of attacks,
achieving a practical balance between robustness and performance.
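To make the mechanism concrete, the following is a minimal PyTorch sketch of a median-based test-time batch normalization layer. The class name `MedianBatchNorm2d` and the median-of-squared-deviations scale estimator are illustrative assumptions for exposition, not the exact MedBN implementation.

```python
import torch
import torch.nn as nn


class MedianBatchNorm2d(nn.Module):
    """Sketch: replace batch-mean/variance with median-based statistics at test time.

    Wraps a trained BatchNorm2d so its affine parameters (weight, bias) are reused,
    while the normalization statistics are computed robustly from the current test batch.
    """

    def __init__(self, bn: nn.BatchNorm2d):
        super().__init__()
        self.weight = bn.weight  # reuse trained scale
        self.bias = bn.bias      # reuse trained shift
        self.eps = bn.eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Collapse batch and spatial dimensions per channel: (N, C, H, W) -> (C, N*H*W)
        n, c, h, w = x.shape
        flat = x.permute(1, 0, 2, 3).reshape(c, -1)

        # Robust location: per-channel median over the test batch
        med = flat.median(dim=1).values  # (C,)

        # Robust scale: median of squared deviations around the median
        # (an illustrative choice; the paper's exact scale estimator may differ)
        var_med = ((flat - med[:, None]) ** 2).median(dim=1).values  # (C,)

        med = med.view(1, c, 1, 1)
        std = torch.sqrt(var_med.view(1, c, 1, 1) + self.eps)
        x_hat = (x - med) / std
        return x_hat * self.weight.view(1, c, 1, 1) + self.bias.view(1, c, 1, 1)
```

Because the layer only swaps the statistic estimators and keeps the trained affine parameters, it can in principle be substituted for standard batch normalization in any TTA pipeline that recomputes statistics from the test batch, which is what makes the approach algorithm-agnostic.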