Current discussions on the military use of artificial intelligence (AI), particularly autonomous weapons systems, have largely focused on the challenges of attributing individual criminal responsibility for war crimes when such systems do not perform as their human operators intended. Yet recent observations point to a pressing need to broaden the responsibility-gap discussion to include the challenges raised by the intentional use of AI systems for the commission of war crimes and other international crimes. Moreover, the increasing development and use of AI systems based on data-driven learning (DDL) methods demands particular attention, because these systems' lack of predictability and explainability makes it difficult to anticipate their effects. Against this background, this article complements the existing discussion on the responsibility gap by examining the concerns that the intentional use of DDL systems for the commission of international crimes raises regarding the required mental element and, thus, the ascription of individual criminal responsibility. It concludes by proposing preliminary avenues to address these concerns.