Accurately assessing a student’s ability is the central task of a test. Assessments based on final responses alone are the standard practice. As testing infrastructure advances, substantially more information becomes observable. One such instance is process data, collected by computer-based interactive items, which record a student’s detailed interactive behaviors. In this paper, we show both theoretically and empirically that appropriately incorporating such information into assessment substantially improves assessment precision, measured empirically by out-of-sample test reliability.