In this article, we present a machine-learning approach to predicting the execution time of concurrent applications by leveraging isolated execution times and performance monitoring counters. Through extensive experiments across various application pairs, we explore the challenges of modeling execution time in a concurrent environment. This study presents three progressively refined models, transitioning from simple neural networks to Bayesian-optimized ensemble techniques. A key finding is the significant role of interference in execution-time variability: contention for shared resources causes deviations from isolated behavior. These insights deepen the understanding of scheduler and application dynamics in concurrent environments.