Deep neural networks (DNNs) have a wide range of applications, and software that employs them must be thoroughly tested, especially in safety-critical domains. However, traditional software test-coverage metrics cannot be applied directly to DNNs. In this paper, inspired by the MC/DC coverage criterion, we propose four novel test criteria tailored to the structural features of DNNs and their semantics. We validate the criteria by demonstrating that test inputs generated under their guidance capture undesired behaviours in a DNN. Test cases are generated using both a symbolic approach and a gradient-based heuristic. Our experiments are conducted on state-of-the-art DNNs trained on the MNIST and ImageNet datasets.
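To give a flavour of the gradient-based heuristic mentioned above, the following is a minimal sketch (not the paper's implementation) of gradient-guided test-input generation: a seed input is perturbed in the direction that increases the model's loss, which can flip the prediction and thereby expose an undesired behaviour. The toy logistic-regression "network", its weights, and the step size `eps` are all illustrative assumptions.

```python
# Minimal sketch of gradient-guided test-input generation (FGSM-style),
# assuming a toy one-layer logistic model; all weights are illustrative.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy model: a single linear layer followed by a sigmoid.
w = np.array([2.0, -3.0, 1.0])
b = 0.5

def predict(x):
    return sigmoid(w @ x + b)

def loss_grad_wrt_input(x, y):
    # Gradient of binary cross-entropy through the sigmoid
    # simplifies to (p - y) * w for this toy model.
    p = predict(x)
    return (p - y) * w

def generate_test_input(x, y, eps=0.5):
    # Perturb the seed in the sign of the gradient to increase the loss.
    return x + eps * np.sign(loss_grad_wrt_input(x, y))

x_seed = np.array([0.2, 0.1, -0.3])
y_true = 1.0  # the seed is (correctly) classified as class 1
x_adv = generate_test_input(x_seed, y_true)
print(predict(x_seed) > 0.5, predict(x_adv) > 0.5)  # prediction flips
```

In a real DNN the gradient would be obtained by backpropagation through all layers, and the search would additionally be steered toward satisfying an uncovered test condition rather than only maximising the loss.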