Testing is today the most widely used software quality assurance approach. However, it is well known that the necessarily limited set of tests developed and run in-house is not representative of the rich variety of user executions in the field. To bridge this gap between in-house tests and field executions, we need (1) a way to identify behaviors exercised in the field that were not exercised in-house and (2) a way to generate new tests that exercise such behaviors. In this context, we propose Replica, a technique that uses field execution data to guide test generation. Replica instruments the software before deploying it, so that field data collection is triggered when a user exercises an untested behavior B, currently expressed as the violation of an invariant. When it receives the field data collected in this way, Replica uses guided symbolic execution to generate one or more executions that exercise the previously untested behavior B. Our initial empirical evaluation, performed on a set of real user executions, shows that Replica can successfully generate tests that mirror field behavior and have similar fault-detection capability. Our results also show that Replica can outperform a traditional input-generation approach that does not use field-data guidance.
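As a rough illustration of the deployment-time instrumentation described above, the Java sketch below shows how a check of an inferred invariant (here, balance >= 0) could trigger field data collection when a user execution violates it. The class, invariant, file name, and logging helper are illustrative assumptions for this sketch, not Replica's actual implementation.

```java
// Hypothetical sketch: an instrumented method that checks an invariant
// inferred in-house and records the program state when a field execution
// violates it (i.e., exercises an untested behavior B).
import java.io.FileWriter;
import java.io.IOException;

public class AccountInstrumented {
    private int balance = 0;

    public void withdraw(int amount) {
        balance -= amount;
        // In-house invariant: balance >= 0. A field execution that violates
        // it represents a behavior not exercised by the existing test suite.
        if (balance < 0) {
            recordFieldData("withdraw", amount, balance);
        }
    }

    // Stores the violating inputs and state; in a technique like Replica,
    // this data would later guide symbolic execution toward reproducing
    // behavior B in-house.
    private void recordFieldData(String method, int input, int state) {
        try (FileWriter out = new FileWriter("field-data.log", true)) {
            out.write(method + "," + input + "," + state + System.lineSeparator());
        } catch (IOException e) {
            // Data collection must never disturb the user's execution.
        }
    }

    public static void main(String[] args) {
        new AccountInstrumented().withdraw(150); // triggers the invariant violation
    }
}
```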