The embedding and extraction of useful knowledge is a recent trend in machine learning applications, e.g., to supplement small existing datasets. Meanwhile, with the increasing use of machine learning models in security-critical applications, the embedding and extraction of malicious knowledge correspond to the notorious backdoor attack and its defence, respectively. This paper studies the embedding and extraction of knowledge in tree ensemble classifiers, focusing on knowledge expressible in a generic form of Boolean formulas, e.g., robustness properties and backdoor attacks. The embedding is required to be preservative (the original performance of the classifier is preserved), verifiable (the embedded knowledge can be attested), and stealthy (the embedding cannot be easily detected). To this end, we propose two novel and effective embedding algorithms, one for the black-box setting and the other for the white-box setting; both run in PTIME. Beyond the embedding, we develop an algorithm to extract the embedded knowledge by reducing the problem to one solvable with an SMT (satisfiability modulo theories) solver. While this algorithm successfully extracts the knowledge, the reduction entails an NP computation. Therefore, when the embedding is applied as a backdoor attack and the extraction as its defence, our results suggest a complexity gap (P vs. NP) between the attack and the defence for tree ensemble classifiers. We validate our conclusions extensively by applying our algorithms to a diverse set of datasets.