Identifying the differing ways in which political actors and groups express themselves is a key task in the study of legislatures, campaigning, and communication. A variety of computational tools exist to help find and describe these patterns, typically summarizing differences with weighted word lists that represent either lexical frequencies or semantic fields. I identify two limits to the inferences that can be drawn from this approach: the ambiguity of a word's semantic value absent wider context, and an inability to detect differences that lie outside lexical semantics. I present a combination of text annotation and deep-learning feature attribution, a set of techniques for evaluating the relative importance of individual inputs to the prediction of a neural network classifier, as an alternative means of identifying differentiating language use in political texts. Results are evaluated through comparison with existing text-as-data tools on a dataset of US presidential campaign advertisements run on Facebook between 2017 and 2020.
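To make the notion of feature attribution concrete, the sketch below illustrates one simple variant, gradient-times-input, applied to a toy text classifier. It is not the paper's implementation: the model, vocabulary, token IDs, and class labels are hypothetical placeholders, chosen only to show how per-token importance scores for a classifier's prediction can be computed.

```python
# Illustrative sketch only (not the method used in the paper): gradient-x-input
# feature attribution for a toy PyTorch text classifier. All names and data
# below are hypothetical.
import torch
import torch.nn as nn


class TinyTextClassifier(nn.Module):
    """Minimal bag-of-embeddings classifier with two output classes."""

    def __init__(self, vocab_size=1000, embed_dim=32, num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.linear = nn.Linear(embed_dim, num_classes)

    def forward(self, embedded):
        # Operates on pre-computed embeddings so gradients can be taken
        # with respect to the token-level inputs.
        pooled = embedded.mean(dim=1)
        return self.linear(pooled)


model = TinyTextClassifier()
model.eval()

token_ids = torch.tensor([[12, 54, 7, 301]])   # one hypothetical ad, 4 tokens
embedded = model.embedding(token_ids)          # shape: (1, seq_len, embed_dim)
embedded.retain_grad()                         # keep gradients on this non-leaf tensor

logits = model(embedded)
target_class = 1                               # hypothetical class of interest
logits[0, target_class].backward()

# Gradient-x-input attribution: multiply each embedding by its gradient and
# sum over embedding dimensions, yielding one importance score per token.
attributions = (embedded.grad * embedded).sum(dim=-1).squeeze(0)
print(attributions)                            # one score per input token
```

In practice, more robust attribution methods (e.g., integrated gradients) follow the same logic of tracing a prediction back to individual input tokens, producing token-level importance scores rather than corpus-level word weights.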