The family of k-nearest neighbor (kNN) schemes comprises simple and effective algorithms for general machine learning tasks such as classification and regression. The goal of linear kNN regression is to predict the value of an unseen datum as a linear combination of its k nearest neighbors on a graph, which can be identified via a chosen distance metric. Despite the simplicity and effectiveness of kNN regression, a general problem facing this approach is the proper choice of k. Most conventional kNN algorithms simply apply the same k to all data samples, implicitly assuming that the data lie on a regular graph in which each sample is connected to exactly k neighbors. However, a constant choice of k may not be optimal for data lying in a heterogeneous feature space. On the other hand, existing algorithms that adaptively choose k usually incur high computational complexity, especially for large training datasets. To cope with this challenge, this paper introduces a novel algorithm that adaptively chooses k for each data sample while greatly reducing the training complexity by actively selecting training samples. Tests on real data corroborate the efficiency and effectiveness of the novel algorithm.
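As a point of reference for the fixed-k baseline discussed above, the following is a minimal sketch of linear kNN regression under a Euclidean distance metric with uniform weights; the function name, the uniform weighting, and the toy data are illustrative assumptions, not the paper's adaptive-k method.

```python
import numpy as np

def knn_regress(X_train, y_train, x, k):
    """Predict y(x) as a (uniformly weighted) linear combination of the
    values at its k nearest training samples, with neighbors found via
    the Euclidean distance metric. A single fixed k is used for every
    query, which is exactly the assumption the paper relaxes."""
    dists = np.linalg.norm(X_train - x, axis=1)  # distance to each training sample
    idx = np.argsort(dists)[:k]                  # indices of the k nearest neighbors
    return y_train[idx].mean()                   # uniform weights w_i = 1/k

# Toy 1-D example on y = 2x: the query 4.5 lies between the training
# points 4 and 5, so with k=2 the prediction is the mean of 8 and 10.
X_train = np.arange(10, dtype=float).reshape(-1, 1)
y_train = 2.0 * X_train.ravel()
print(knn_regress(X_train, y_train, np.array([4.5]), k=2))  # → 9.0
```

Note that in a heterogeneous feature space the best k can differ across queries (e.g., small k in dense regions, larger k in sparse ones), which is the motivation for choosing k adaptively per sample.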