ABSTRACT Monte-Carlo estimates for skip-free Markov chains exhibit surprisingly exotic behavior. For example, simulation of the mean of a Reflected Random Walk (RRW) exhibits non-standard behavior, even for light-tailed increment distributions with negative drift. The Large Deviation Principle (LDP) holds for deviations below the mean, but for deviations above the mean, at the usual speed, the rate function is null. Similar results hold for "norm-like" functions of a skip-free Markov chain. This talk takes a deeper look at these results for the RRW. A complete sample-path LDP analysis is described that helps to explain why simulating an RRW is hard. This analysis gives rise to a non-convex rate function and elegant concave most likely paths. Similar qualitative results are obtained for more general classes of Markov models. These results suggest new ways of designing control variates to construct a pair of estimators that provide upper and lower confidence bounds in simulation.
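For readers who want to see the phenomenon numerically, the following is a minimal sketch (not part of the talk) of the estimator in question: the time-average of a reflected random walk with light-tailed increments and negative drift. The Gaussian increment distribution, the drift and variance values, and the run lengths are illustrative assumptions only. Rare upward excursions of the walk inflate the time average well above the true mean, while underestimation is comparatively well behaved; this asymmetry is what the LDP results above and below the mean quantify.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_rrw_mean(n_steps, mu=-0.5, sigma=1.0, rng=rng):
    """Time-average estimator of the steady-state mean of a reflected
    random walk X_{k+1} = max(X_k + A_{k+1}, 0), where the increments
    A_k are i.i.d. Gaussian with mean mu < 0 and standard deviation sigma
    (illustrative choices, not from the source)."""
    x = 0.0
    total = 0.0
    for _ in range(n_steps):
        x = max(x + rng.normal(mu, sigma), 0.0)  # skip-free reflection at 0
        total += x
    return total / n_steps

# Repeat the experiment to see how the estimator is distributed across runs;
# the upper tail of the estimates is noticeably heavier than the lower tail.
estimates = np.array([simulate_rrw_mean(10_000) for _ in range(200)])
print("mean of estimates:", estimates.mean())
print("std of estimates :", estimates.std())
print("min / max        :", estimates.min(), estimates.max())
```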