The quality of machine learning and deep learning models is highly sensitive to adjustable parameters (hyperparameters) that must be set before training begins. Efficient hyperparameter exploration is therefore of great importance to users who need to find high-quality models within practical time and cost budgets. This process is challenging due to the large search space, expensive training times, the sparsity of good configurations, and the scarcity of resources. We present two primary contributions: (1) a hyperparameter exploration framework, HyperDrive, that supports pluggable search algorithms and policies while remaining agnostic to learning frameworks and domains, and (2) a scheduling algorithm, POP, that quickly distinguishes among promising, opportunistic, and poor configurations. Our evaluation of HyperDrive with POP shows that we speed up the training process of complex and deep models by up to 6.7x compared with basic approaches such as random/grid search, and by up to 2.1x compared with state-of-the-art approaches, without sacrificing model quality.
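To make the three-way classification concrete, the sketch below illustrates one plausible way a POP-style scheduler might label running configurations. This is a minimal, hypothetical illustration, not the paper's actual algorithm: the `Configuration` class, the `projected_accuracy` and `uncertainty` fields, and the decision rule are all assumptions introduced here for exposition.

```python
# Hypothetical sketch of POP-style classification of running configurations.
# All names and the decision rule are illustrative assumptions, not the
# paper's actual implementation.
from dataclasses import dataclass


@dataclass
class Configuration:
    name: str
    projected_accuracy: float  # assumed: projected final accuracy from a learning-curve model
    uncertainty: float         # assumed: uncertainty band around that projection


def classify_pop(config: Configuration, best_so_far: float) -> str:
    """Label a configuration relative to the best accuracy observed so far.

    promising:     projected to beat the current best -> train at full priority
    opportunistic: could still beat the best within its uncertainty band ->
                   train on spare resources
    poor:          projected to fall short even optimistically -> stop early
    """
    if config.projected_accuracy > best_so_far:
        return "promising"
    if config.projected_accuracy + config.uncertainty > best_so_far:
        return "opportunistic"
    return "poor"


if __name__ == "__main__":
    best = 0.92
    for cfg in [Configuration("lr=0.1", 0.94, 0.01),
                Configuration("lr=0.01", 0.90, 0.03),
                Configuration("lr=1.0", 0.70, 0.05)]:
        print(cfg.name, "->", classify_pop(cfg, best))
```

Under such a scheme, early termination of "poor" configurations frees resources for promising ones, which is consistent with the speedups the abstract reports over random/grid search.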