This course is best suited for software engineers, data scientists, and graduate students in computer science or engineering fields who wish to develop expertise in building and deploying natural language processing systems to solve real-world language understanding challenges.



Skills you'll gain
- Artificial Neural Networks
- Deep Learning
- Algorithms
- Applied Machine Learning
- Machine Learning Methods
- PyTorch (Machine Learning Library)
- Natural Language Processing
- Statistical Machine Learning
- Large Language Modeling
Key details

October 2025
21 assignments

There are 7 modules in this course
This module delves into the critical preprocessing step of tokenization in NLP, where text is segmented into smaller units called tokens. You will explore various tokenization techniques, including character-based, word-level, Byte Pair Encoding (BPE), WordPiece, and Unigram tokenization. Then you’ll examine the importance of normalization and pre-tokenization processes to ensure text uniformity and improve tokenization accuracy. Through practical examples and hands-on exercises, you will learn to handle out-of-vocabulary (OOV) issues, manage large vocabularies efficiently, and understand the computational complexities involved. By the end of the module, you will be equipped to implement and optimize tokenization methods for diverse NLP applications.
What's included
1 video, 13 readings, 2 assignments, 1 app item
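To make the BPE idea concrete, here is a minimal sketch of the merge-learning loop on a toy corpus. The corpus, merge count, and function names are illustrative assumptions, not the course's actual exercise:

```python
from collections import Counter

def get_pair_counts(words):
    # Count adjacent symbol pairs, weighted by word frequency.
    pairs = Counter()
    for symbols, freq in words.items():
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs

def merge_pair(words, pair):
    # Replace every occurrence of `pair` with a single merged symbol.
    merged = {}
    for symbols, freq in words.items():
        out, i = [], 0
        while i < len(symbols):
            if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == pair:
                out.append(symbols[i] + symbols[i + 1])
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        merged[tuple(out)] = freq
    return merged

# Toy corpus: each word is a tuple of characters mapped to its frequency.
corpus = {tuple("lower"): 5, tuple("lowest"): 2, tuple("newer"): 6, tuple("wider"): 3}
for step in range(5):
    pairs = get_pair_counts(corpus)
    best = max(pairs, key=pairs.get)   # most frequent adjacent pair
    corpus = merge_pair(corpus, best)
    print(f"merge {step + 1}: {best}")
```

Production tokenizers add details such as end-of-word markers and deterministic tie-breaking, but this merge loop is the heart of BPE.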
In this module, we will explore foundational models in natural language processing (NLP), focusing on language models, feedforward neural networks (FFNNs), and Hidden Markov Models (HMMs). Language models are crucial in predicting and generating sequences of text by assigning probabilities to words or phrases within a sentence, allowing for applications such as autocomplete and text generation. FFNNs, though limited to fixed-size contexts, are foundational neural architectures used in language modeling, learning complex word relationships through non-linear transformations. In contrast, HMMs model sequences based on hidden states, which influence observable outcomes. They are particularly useful in tasks like part-of-speech tagging and speech recognition. As the module progresses, we will also examine modern advancements like neural transition-based parsing and the evolution of language models into sophisticated architectures such as transformers and large-scale pre-trained models like BERT and GPT. This module provides a comprehensive view of how language modeling has developed from statistical methods to cutting-edge neural architectures.
What's included
2 videos, 19 readings, 4 assignments
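As a taste of the statistical side of language modeling described above, the sketch below estimates add-alpha-smoothed bigram probabilities from a toy corpus; the sentences and the smoothing choice are illustrative assumptions:

```python
from collections import Counter

# Toy corpus with sentence-boundary markers.
sentences = [
    "<s> the cat sat on the mat </s>".split(),
    "<s> the cat ate the fish </s>".split(),
]

unigrams, bigrams = Counter(), Counter()
for sent in sentences:
    unigrams.update(sent)
    bigrams.update(zip(sent, sent[1:]))

def bigram_prob(w_prev, w, alpha=1.0):
    # P(w | w_prev) with add-alpha (Laplace) smoothing over the vocabulary.
    vocab_size = len(unigrams)
    return (bigrams[(w_prev, w)] + alpha) / (unigrams[w_prev] + alpha * vocab_size)

print(bigram_prob("the", "cat"))  # seen bigram: relatively high probability
print(bigram_prob("cat", "mat"))  # unseen bigram: small but nonzero
```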
In this module, we will explore Recurrent Neural Networks (RNNs), a fundamental architecture in deep learning designed for sequential data. RNNs are particularly well suited to tasks where the order of inputs matters, such as time series prediction, language modeling, and speech recognition. Unlike feedforward networks, RNNs have connections that allow them to “remember” information from previous steps by sharing parameters across time steps. This ability enables them to capture temporal dependencies in data, making them powerful for sequence-based tasks. However, RNNs come with challenges like vanishing and exploding gradients, which affect their ability to learn long-term dependencies. Throughout the module, you will explore RNN variants such as Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRUs), which address these challenges. You will also delve into advanced training techniques and applications of RNNs in real-world NLP and time series problems.
What's included
2 videos, 22 readings, 2 assignments, 1 app item
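A minimal PyTorch sketch of an LSTM language model follows; the class name, vocabulary size, and dimensions are placeholders, not the course's reference implementation:

```python
import torch
import torch.nn as nn

class TinyLSTMLM(nn.Module):
    # Embed tokens, run an LSTM over the sequence, project to vocabulary logits.
    def __init__(self, vocab_size, embed_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.proj = nn.Linear(hidden_dim, vocab_size)

    def forward(self, x, state=None):
        out, state = self.lstm(self.embed(x), state)  # one hidden state per step
        return self.proj(out), state

model = TinyLSTMLM(vocab_size=50)
tokens = torch.randint(0, 50, (8, 20))   # batch of 8 sequences, length 20
logits, _ = model(tokens)
# Next-token objective: predict token t+1 from the prefix up to t.
loss = nn.functional.cross_entropy(
    logits[:, :-1].reshape(-1, 50), tokens[:, 1:].reshape(-1)
)
print(logits.shape, loss.item())
```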
This module introduces advanced Natural Language Processing (NLP) techniques, focusing on foundational tasks such as Part-of-Speech (PoS) tagging, sentiment analysis, and sequence modeling with recurrent neural networks (RNNs). You will examine how PoS tagging helps in understanding grammatical structure, enabling applications such as machine translation and named entity recognition (NER). The module delves into sentiment analysis, highlighting approaches ranging from traditional machine learning models (e.g., Naive Bayes) to advanced deep learning techniques (e.g., bidirectional RNNs and transformers). You will learn to implement both forward and backward contextual understanding using bidirectional RNNs, which improves accuracy in tasks where sequence order impacts meaning. By the end of the module, you will have hands-on experience building NLP models for real-world applications, equipping you to handle sequential data and capture complex dependencies in text analysis.
What's included
1 video, 15 readings, 4 assignments
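One plausible bidirectional setup for sentiment analysis is sketched below in PyTorch: a GRU reads the sequence in both directions, and a linear head classifies from the two concatenated final states. All names and sizes are illustrative assumptions:

```python
import torch
import torch.nn as nn

class BiGRUClassifier(nn.Module):
    # Bidirectional GRU: classify from forward + backward sequence summaries.
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=64, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, x):
        _, h = self.gru(self.embed(x))          # h: (2, batch, hidden_dim)
        both = torch.cat([h[0], h[1]], dim=-1)  # forward and backward final states
        return self.head(both)

model = BiGRUClassifier(vocab_size=1000)
batch = torch.randint(0, 1000, (4, 12))  # 4 sentences of 12 token ids
print(model(batch).shape)                # (4, 2) class logits
```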
This module introduces you to core tasks and advanced techniques in Natural Language Processing (NLP), with a focus on structured prediction, machine translation, and sequence labeling. You will explore foundational topics such as Named Entity Recognition (NER), Part-of-Speech (PoS) tagging, and sentiment analysis, and apply neural network architectures such as Recurrent Neural Networks (RNNs), Long Short-Term Memory (LSTM) networks, and Conditional Random Fields (CRFs). The module covers key concepts in sequence modeling, such as bidirectional and multi-layer RNNs, which capture both past and future context to enhance the accuracy of tasks like NER and PoS tagging. Additionally, you will delve into Neural Machine Translation (NMT), examining encoder-decoder models with attention mechanisms to address the challenges of translating long sequences. Practical implementations will involve integrating these models into real-world applications, focusing on handling complex language structures, rare words, and sequential dependencies. By the end of this module, you will be proficient in building and optimizing deep learning models for a variety of NLP tasks.
What's included
3 videos, 18 readings, 4 assignments
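To illustrate per-token sequence labeling, here is a minimal BiLSTM tagger sketch in PyTorch; the tag count, sizes, and names are assumptions, and a CRF layer (as covered in the module) would decode whole tag sequences from these emission scores instead of taking independent per-token argmaxes:

```python
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    # One tag prediction per token, using both left and right context.
    def __init__(self, vocab_size, num_tags, embed_dim=64, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.tags = nn.Linear(2 * hidden_dim, num_tags)

    def forward(self, x):
        out, _ = self.lstm(self.embed(x))  # (batch, seq, 2*hidden_dim)
        return self.tags(out)              # per-token emission scores

model = BiLSTMTagger(vocab_size=5000, num_tags=9)  # e.g., BIO tags for NER
tokens = torch.randint(0, 5000, (2, 15))
print(model(tokens).shape)  # (2, 15, 9): one tag distribution per token
```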
In this module, we’ll focus on attention mechanisms and explore the evolution and significance of attention in neural networks, starting with its introduction in neural machine translation. We’ll cover the challenges of traditional sequence-to-sequence models and how attention mechanisms, particularly in Transformer architectures, address issues like long-range dependencies and parallelization, enhancing the model’s ability to focus dynamically on relevant parts of the input sequence. Then we’ll turn to Transformers and delve into the revolutionary architecture introduced by Vaswani et al. in 2017, which has significantly advanced natural language processing. We’ll cover the core components of Transformers, including self-attention, multi-head attention, and positional encoding, and explain how these innovations address the limitations of traditional sequence models, enabling efficient parallel processing and the handling of long-range dependencies in text.
What's included
2 videos, 25 readings, 3 assignments, 2 app items
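The core computation is compact enough to sketch directly: below is scaled dot-product attention as defined by Vaswani et al. (2017), applied in self-attention mode with toy tensor sizes:

```python
import math
import torch

def scaled_dot_product_attention(q, k, v):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)  # pairwise similarities
    weights = torch.softmax(scores, dim=-1)            # each row sums to 1
    return weights @ v, weights

# Self-attention: queries, keys, and values all come from the same sequence.
x = torch.randn(1, 5, 16)  # (batch, seq_len, model_dim)
out, w = scaled_dot_product_attention(x, x, x)
print(out.shape, w.shape)   # (1, 5, 16), (1, 5, 5)
print(w[0].sum(dim=-1))     # each position's attention weights sum to 1
```

Multi-head attention runs several such maps in parallel on learned projections of the input and concatenates the results.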
In this module, we’ll home in on pre-training and explore its foundational role in modern NLP models, highlighting how models are initially trained on large, general datasets to learn language structure and semantics. This pre-training phase, often involving tasks like masked language modeling, equips models with broad linguistic knowledge; the model can then be fine-tuned on specific tasks, enhancing performance and reducing the need for extensive task-specific data.
What's included
1 video, 19 readings, 2 assignments
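A simplified sketch of the masked-language-modeling corruption step is shown below; real BERT-style pre-training also mixes in random and unchanged replacements (the 80/10/10 rule), which this toy version omits, and the mask id and rate here are illustrative:

```python
import torch

def mask_tokens(input_ids, mask_id, mask_prob=0.15):
    # Hide a random subset of tokens; the model is trained to recover them.
    labels = input_ids.clone()
    mask = torch.rand(input_ids.shape) < mask_prob  # choose ~15% of positions
    labels[~mask] = -100                            # ignored by cross-entropy
    corrupted = input_ids.clone()
    corrupted[mask] = mask_id                       # replace with [MASK]
    return corrupted, labels

ids = torch.randint(5, 1000, (2, 10))  # toy token ids
corrupted, labels = mask_tokens(ids, mask_id=4)
print(corrupted[0])
print(labels[0])  # -100 everywhere except the masked positions
```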