This study examines stereotypical biases in Natural Language Processing (NLP), which have primarily been studied in English models while other languages such as German remain underexplored. By analyzing both static and contextualized word embeddings, the research investigates bias in German word representations, using datasets partly derived from a workshop with experts in human resources and machine learning in Switzerland that aimed to identify language-specific biases in the labour market. The findings show that both types of embeddings exhibit significant biases across several dimensions.
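To make the kind of bias measurement described here concrete, the sketch below shows a WEAT-style association test (Caliskan et al., 2017), a standard technique for quantifying bias in static embeddings. This is a minimal illustration, not the paper's actual method: the German word lists in the usage comment are hypothetical placeholders, not the workshop-derived datasets, and embeddings are assumed to be supplied as a plain dict of vectors.

```python
# Minimal sketch of a WEAT-style bias test for static embeddings.
# Assumes word vectors are provided as a dict mapping words to numpy arrays;
# all word lists shown are hypothetical examples, not the paper's datasets.
import numpy as np


def cosine(u, v):
    """Cosine similarity between two vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))


def association(w, A, B, emb):
    """Differential association of word w with attribute sets A and B."""
    sim_a = np.mean([cosine(emb[w], emb[a]) for a in A])
    sim_b = np.mean([cosine(emb[w], emb[b]) for b in B])
    return sim_a - sim_b


def weat_effect_size(X, Y, A, B, emb):
    """Effect size: normalized difference in the mean association of the
    target sets X and Y with the attribute sets A and B."""
    assoc_x = [association(x, A, B, emb) for x in X]
    assoc_y = [association(y, A, B, emb) for y in Y]
    pooled_std = np.std(assoc_x + assoc_y, ddof=1)
    return (np.mean(assoc_x) - np.mean(assoc_y)) / pooled_std


# Hypothetical usage with German labour-market terms:
# X = ["Ingenieur", "Arzt"]          # male-marked occupation targets
# Y = ["Ingenieurin", "Ärztin"]      # female-marked occupation targets
# A = ["Karriere", "Gehalt"]         # career-related attributes
# B = ["Familie", "Haushalt"]        # family-related attributes
# emb = {...}  # word -> vector, e.g. loaded from pre-trained German fastText
# print(weat_effect_size(X, Y, A, B, emb))
```

An effect size near zero indicates no measured association, while larger absolute values indicate a stronger stereotypical association between the target and attribute sets; analogous tests for contextualized embeddings typically replace the static vectors with sentence-level representations.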