In this paper, a statistics-based approach for extracting rules from trained neural networks is proposed from the functional point of view. The approach introduces a statistical technique to evaluate extracted rules so that the rule set covers the instance space well. It handles continuous attributes in a distinctive way that reduces the subjectivity and complexity of discretization. It adopts an ordered rule representation, which not only keeps the rules concise but also removes the need for a consistency-resolution step when the rules are applied. Moreover, the approach is independent of the network architecture and training algorithm, so it can easily be applied to diverse neural classifiers. Experimental results show that the symbolic rules extracted by this approach are comprehensible, compact, and of high fidelity.
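As a minimal sketch (not from the paper) of why an ordered rule representation needs no separate consistency-resolution step: rules are evaluated in order and the first matching rule fires, so overlapping rules can never yield conflicting predictions for the same instance. The rule conditions, attribute name, and class labels below are purely illustrative.

```python
# Hypothetical ordered rule list: (condition, predicted class) pairs.
# Earlier rules take priority; a catch-all default rule closes the list.
rules = [
    (lambda x: x["petal_len"] < 2.5, "setosa"),
    (lambda x: x["petal_len"] < 5.0, "versicolor"),
    (lambda x: True, "virginica"),  # default rule
]

def classify(instance, rule_list):
    """Return the class of the first rule whose condition matches."""
    for condition, label in rule_list:
        if condition(instance):
            return label

print(classify({"petal_len": 1.4}, rules))  # -> setosa
print(classify({"petal_len": 4.4}, rules))  # -> versicolor
print(classify({"petal_len": 6.0}, rules))  # -> virginica
```

Note that the second rule's condition overlaps the first (any value below 2.5 is also below 5.0), yet no conflict arises, because ordering alone decides which rule applies.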