In applications of tensor analysis, missing data is an important issue that is usually handled via weighted least-squares fitting, imputation, or iterative expectation-maximization. The resulting algorithms are often cumbersome and tend to fail when the percentage of missing samples is large. This paper proposes a novel and refreshingly simple approach to handling randomly missing values in big tensor analysis. The stepping stone is random multi-way tensor compression, which enables indirect tensor factorization via analysis of compressed 'replicas' of the big tensor. A Bernoulli model for the misses is adopted, and two opposite ends of the tensor modeling spectrum are considered: independent and identically distributed (i.i.d.) tensor elements, and low-rank (in particular, rank-one) tensors whose latent factors are i.i.d. In both cases, analytical results are established, showing that the tensor approximation error variance is inversely proportional to the number of available elements. Coupled with recent developments in robust CP decomposition, these results show that it is possible to ignore missing values without losing the ability to identify the underlying model.
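
As a loose numerical illustration of the idea (a sketch, not the paper's algorithm), the following Python/NumPy snippet forms one randomly compressed replica of a rank-one tensor with i.i.d. latent factors, zero-fills the Bernoulli misses, and rescales by the observation probability so the replica is unbiased in expectation. The tensor dimensions, variable names (U, V, W, rho), and the zero-fill-and-rescale choice are all assumptions made here for illustration.

    # Minimal sketch (assumed setup, not the paper's implementation):
    # multi-way random compression of a rank-one tensor with Bernoulli misses.
    import numpy as np

    rng = np.random.default_rng(0)

    I = J = K = 60          # big-tensor dimensions (illustrative)
    L = M = N = 8           # compressed-replica dimensions (illustrative)

    # Rank-one tensor X = a (outer) b (outer) c with i.i.d. latent factors.
    a = rng.standard_normal(I)
    b = rng.standard_normal(J)
    c = rng.standard_normal(K)
    X = np.einsum('i,j,k->ijk', a, b, c)

    for rho in (1.0, 0.5, 0.1):             # probability an entry is observed
        mask = rng.random((I, J, K)) < rho  # Bernoulli model for the misses
        Xz = np.where(mask, X, 0.0)         # ignore misses: zero-fill

        # Random mode-wise compression matrices defining one replica.
        U = rng.standard_normal((L, I))
        V = rng.standard_normal((M, J))
        W = rng.standard_normal((N, K))

        # Compressed replica of the zero-filled tensor, rescaled by 1/rho so
        # it is an unbiased estimate of the compressed complete tensor.
        Y_miss = np.einsum('li,mj,nk,ijk->lmn', U, V, W, Xz,
                           optimize=True) / rho
        Y_full = np.einsum('li,mj,nk,ijk->lmn', U, V, W, X, optimize=True)

        err = np.mean((Y_miss - Y_full) ** 2)
        print(f"rho={rho:4.2f}  mean squared replica error: {err:.3e}")

In expectation, the printed error shrinks as rho (and hence the number of available elements) grows, consistent with the abstract's claim that the approximation error variance is inversely proportional to the number of available elements.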