We demonstrate a natural language understanding module, combining rule-based and machine learning approaches, for a question-answering dialog agent in a resource-constrained virtual patient domain. We further validate the model development work by performing a replication study with live subjects, which broadly confirms the findings obtained during development on a fixed dataset but highlights important deficits. In particular, the hybrid approach continues to show substantial improvements over either the rule-based or the machine learning approach alone, even handling unseen classes with some success; however, the system has unexpected difficulty handling out-of-domain questions. We attempt to mitigate this issue, with moderate success, and analyze the problem to suggest future improvements.
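
To make the hybrid design concrete, the following is a minimal sketch of one way a rule-plus-classifier back-off with out-of-domain thresholding could be structured. The class names, threshold value, stub classifier, and back-off order are illustrative assumptions, not the implementation described in this work.

```python
# Illustrative sketch only: names, thresholds, and the back-off scheme are
# assumptions, not the system described in this paper.
import re
from dataclasses import dataclass


@dataclass
class NLUResult:
    label: str          # predicted question class, or "out_of_domain"
    source: str         # "rule" or "ml"
    confidence: float


class HybridNLU:
    """High-precision rules backed off to a statistical classifier."""

    def __init__(self, rules, ml_classifier, ood_threshold=0.5):
        self.rules = rules                  # list of (compiled regex, label)
        self.ml = ml_classifier             # callable: question -> (label, confidence)
        self.ood_threshold = ood_threshold  # below this, flag out-of-domain

    def interpret(self, question: str) -> NLUResult:
        # 1) Try the hand-written rules first.
        for pattern, label in self.rules:
            if pattern.search(question):
                return NLUResult(label, "rule", 1.0)
        # 2) Otherwise fall back to the machine-learned classifier.
        label, score = self.ml(question)
        # 3) Treat low classifier confidence as out-of-domain.
        if score < self.ood_threshold:
            return NLUResult("out_of_domain", "ml", score)
        return NLUResult(label, "ml", score)


# Toy usage with a stub classifier standing in for a trained model.
rules = [(re.compile(r"\bpain\b", re.I), "ask_pain_location")]
stub_classifier = lambda q: ("ask_medication", 0.42)  # (label, confidence)
nlu = HybridNLU(rules, stub_classifier)
print(nlu.interpret("Where does it hurt? Any pain?"))  # rule match
print(nlu.interpret("Do you like football?"))          # low confidence -> out_of_domain
```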