Northeastern University
Applied Natural Language Processing in Engineering Part 1


Included with Coursera Plus

Gain insight into a topic and learn the fundamentals.
3 weeks to complete
under 10 hours a week
Flexible schedule
Learn at your own pace

Skills you'll gain

  • Unstructured Data
  • Supervised Learning
  • Natural Language Processing
  • Artificial Intelligence
  • Deep Learning
  • Linear Algebra
  • Machine Learning
  • Probability Distribution
  • Artificial Neural Networks

Details to know

Shareable certificate

Add to your LinkedIn profile

Recently updated!

October 2025

Assessments

22 assignments

Taught in English


There are 7 modules in this course

This module provides an in-depth exploration of Natural Language Processing (NLP), a crucial area of artificial intelligence that enables computers to understand, interpret, and generate human language. By combining computational linguistics with machine learning, NLP is applied in various technologies, from chatbots and sentiment analysis to machine translation and speech recognition. The module introduces fundamental NLP tasks such as text classification, Named Entity Recognition (NER), and neural machine translation, showcasing how these applications shape real-world interactions with AI. Additionally, it highlights the complexities of teaching language to machines, including handling ambiguity, grammar, and cultural nuances. Through the course, you will gain hands-on experience and knowledge about key techniques like word representation and distributional semantics, preparing you to solve language-related challenges in modern AI systems.
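
As a taste of the text-classification task mentioned above, here is a minimal sketch (not part of the course materials) of a bag-of-words sentiment classifier. It assumes scikit-learn is available, and the tiny training sentences and labels are invented purely for illustration.

```python
# Minimal text-classification sketch: bag-of-words features + logistic regression.
# scikit-learn is an assumed dependency; the tiny dataset is purely illustrative.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "I love this movie, it was fantastic",
    "What a great and enjoyable film",
    "Terrible plot and awful acting",
    "I hated every minute of it",
]
train_labels = [1, 1, 0, 0]  # 1 = positive sentiment, 0 = negative

# Turn raw text into word-count features, then fit a linear classifier.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

print(model.predict(["an enjoyable and fantastic film"]))  # expected: [1]
print(model.predict(["awful, I hated it"]))                # expected: [0]
```

The same pipeline shape, a vectorizer feeding a linear model, is a common baseline before the neural approaches covered in later modules.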

What's included

4 videos, 19 readings, 2 assignments, 1 app item

This module focuses on optimization techniques critical for machine learning, particularly in natural language processing (NLP) tasks. It introduces Gradient Descent (GD), a fundamental algorithm that minimizes cost functions by iteratively adjusting model parameters. You'll explore variants like Stochastic Gradient Descent (SGD) and Mini-Batch Gradient Descent and see how they trade per-step accuracy for efficiency on large datasets. Advanced methods such as Momentum and Adam are covered, showing how they speed up convergence by smoothing updates and adapting learning rates. The module also covers second-order techniques like Newton's Method and Quasi-Newton methods (e.g., BFGS), which leverage curvature information for more direct optimization, although they come with higher computational costs. Overall, this module emphasizes balancing efficiency, accuracy, and computational feasibility when optimizing machine learning models.
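
To make the gradient descent and momentum ideas concrete, here is a small NumPy sketch that minimizes a simple quadratic function with plain gradient descent and with a momentum term. The matrix, learning rate, and iteration counts are arbitrary illustrative choices, not values taken from the course.

```python
# Sketch of gradient descent vs. momentum on a quadratic bowl f(w) = 0.5 * w^T A w,
# whose gradient is A @ w. Pure NumPy; all hyperparameters are illustrative.
import numpy as np

A = np.array([[3.0, 0.0],
              [0.0, 30.0]])          # ill-conditioned quadratic to show the effect

def grad(w):
    return A @ w

def gd(w0, lr=0.02, steps=200):
    w = w0.copy()
    for _ in range(steps):
        w -= lr * grad(w)            # step against the gradient
    return w

def momentum(w0, lr=0.02, beta=0.9, steps=200):
    w, v = w0.copy(), np.zeros_like(w0)
    for _ in range(steps):
        v = beta * v + grad(w)       # accumulate a velocity of past gradients
        w -= lr * v                  # update with the smoothed direction
    return w

w0 = np.array([5.0, 5.0])
print("plain GD :", gd(w0))          # both should approach the minimum at [0, 0]
print("momentum :", momentum(w0))
```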

What's included

2 videos, 16 readings, 3 assignments

This module explores Named Entity Recognition (NER), a core task in Natural Language Processing (NLP) that identifies and classifies entities like people, locations, and organizations in text. We'll begin by examining how logistic regression can model NER as a binary classification problem, though this approach struggles to capture context and handle complex patterns. We'll then transition to more advanced techniques, such as neural networks, which excel at handling the complex patterns and large-scale data that traditional models struggle with. Neural networks' ability to learn hierarchical features makes them ideal for NER tasks, as they can capture contextual information more effectively than simpler models. Throughout the module, we compare these methods and highlight how deep learning approaches such as Recurrent Neural Networks (RNNs) and transformers like BERT improve NER accuracy and scalability.
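
The following sketch illustrates the first idea in this module, framing NER as per-token binary classification with logistic regression, before neural models enter the picture. scikit-learn is an assumed dependency, and the sentence, labels, and features are invented for illustration.

```python
# Sketch of NER as per-token binary classification (entity vs. not-entity)
# using hand-crafted features and logistic regression. Toy data only.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

sentence = ["Alice", "met", "Bob", "in", "Paris", "yesterday"]
labels   = [1, 0, 1, 0, 1, 0]           # 1 = part of a named entity

def token_features(tokens, i):
    """Very simple context features for token i."""
    tok = tokens[i]
    return {
        "lower": tok.lower(),
        "is_capitalized": tok[0].isupper(),
        "prev": tokens[i - 1].lower() if i > 0 else "<s>",
        "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "</s>",
    }

X = [token_features(sentence, i) for i in range(len(sentence))]
model = make_pipeline(DictVectorizer(), LogisticRegression())
model.fit(X, labels)

# On this toy data the capitalization feature should dominate the prediction.
test = ["Carol", "visited", "Berlin"]
print(model.predict([token_features(test, i) for i in range(len(test))]))
```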

What's included

2 videos, 14 readings, 3 assignments, 1 app item

The Word2Vec and GloVe models are popular word embedding techniques in Natural Language Processing (NLP), each offering unique advantages. Word2Vec, developed at Google, operates via two key models: Continuous Bag of Words (CBOW), which predicts a word from its context, and Skip-gram, which predicts the context from a word. GloVe, created at Stanford, instead combines count-based and predictive approaches by leveraging word co-occurrence matrices to learn word vectors. Both models represent words in a high-dimensional vector space and capture semantic relationships. Word2Vec focuses on local contexts, learning efficiently from large datasets, while GloVe emphasizes global word co-occurrence patterns across the entire corpus, revealing deeper word associations. These embeddings enable tasks like analogy-solving, semantic similarity, and other linguistic computations, making them central to modern NLP applications.
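
As an illustration of the Word2Vec side of this comparison, here is a minimal sketch that trains a tiny skip-gram model with the gensim library, which is an assumption of this example rather than something the course specifies. A realistic model would be trained on a far larger corpus.

```python
# Sketch of training a tiny skip-gram Word2Vec model with gensim (an assumed
# dependency). The toy corpus only demonstrates the API shape.
from gensim.models import Word2Vec

corpus = [
    ["the", "king", "rules", "the", "kingdom"],
    ["the", "queen", "rules", "the", "kingdom"],
    ["the", "cat", "sleeps", "on", "the", "mat"],
    ["the", "dog", "sleeps", "on", "the", "rug"],
]

model = Word2Vec(
    sentences=corpus,
    vector_size=50,   # dimensionality of the embedding space
    window=2,         # context window size
    min_count=1,      # keep every word, since the corpus is tiny
    sg=1,             # 1 = skip-gram, 0 = CBOW
    epochs=200,
)

print(model.wv["king"].shape)               # (50,) dense vector for "king"
print(model.wv.most_similar("king", topn=3))
```

Pre-trained GloVe vectors can be loaded into a similar keyed-vector interface (for example through gensim's downloader module), which makes it easy to compare the two embedding families side by side.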

What's included

3 videos, 29 readings, 4 assignments, 1 app item

This module delves into evaluation techniques for Natural Language Processing (NLP) models, covering both intrinsic and extrinsic evaluation. Intrinsic evaluation assesses a model's performance on internal criteria, such as word embedding quality, parsing accuracy, and language model perplexity. In contrast, extrinsic evaluation measures the model's effectiveness in real-world applications, including tasks like machine translation, sentiment analysis, and named entity recognition. You'll also learn the key differences between these evaluation types and why context and application matter when judging a model's utility. Additionally, you'll review specific metrics like cross-entropy loss, perplexity, BLEU, and ROUGE scores, building a comprehensive understanding of how to evaluate and improve NLP models.
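
Two of the intrinsic metrics named here, cross-entropy loss and perplexity, are easy to compute by hand. The sketch below uses made-up per-word probabilities from a hypothetical language model; BLEU and ROUGE, by contrast, compare generated text against reference text and are usually computed with an evaluation library.

```python
# Sketch of cross-entropy loss and perplexity for a toy language model,
# computed with plain NumPy. The probabilities are invented for illustration.
import numpy as np

# Probability the model assigned to each actual next word in a held-out text.
p_true_word = np.array([0.20, 0.05, 0.50, 0.10, 0.25])

cross_entropy = -np.mean(np.log(p_true_word))   # average negative log-likelihood (nats)
perplexity = np.exp(cross_entropy)              # perplexity = exp(cross-entropy)

print(f"cross-entropy: {cross_entropy:.3f} nats")
print(f"perplexity   : {perplexity:.3f}")       # lower is better for both
```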

What's included

9 readings, 2 assignments, 1 app item

This module explores techniques for topic modeling in natural language processing (NLP), focusing on Latent Semantic Analysis (LSA), Non-Negative Matrix Factorization (NMF), and Latent Dirichlet Allocation (LDA). It begins with an introduction to matrix factorization and the importance of transforming textual data into numerical representations. You'll delve into the mechanics of LSA and NMF, paying attention to their use of TF-IDF weighting and, in the case of LSA, Singular Value Decomposition (SVD) to uncover latent semantic structures. Additionally, you'll review LDA's probabilistic approach to topic modeling, which relies on Dirichlet distributions and Bayesian inference to identify hidden topics within a corpus. Through detailed examples and mathematical explanations, the module provides a comprehensive understanding of how these techniques can be applied to extract meaningful topics from large text datasets.
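
A compact way to see LSA, NMF, and LDA side by side is to run them on a toy corpus with scikit-learn, which is assumed here for illustration (TruncatedSVD standing in for LSA). The documents and topic counts are invented, so the extracted topics are only indicative.

```python
# Sketch of three topic-modeling approaches on a toy corpus with scikit-learn:
# TruncatedSVD (LSA-style), NMF, and LatentDirichletAllocation.
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.decomposition import TruncatedSVD, NMF, LatentDirichletAllocation

docs = [
    "the stock market fell as investors sold shares",
    "banks reported strong quarterly earnings and profits",
    "the team won the match with a late goal",
    "the striker scored twice in the championship game",
]

def top_words(components, feature_names, n=4):
    # For each topic, return the n highest-weighted vocabulary terms.
    return [[feature_names[i] for i in comp.argsort()[-n:][::-1]] for comp in components]

# LSA and NMF are commonly run on TF-IDF weights; LDA expects raw term counts.
tfidf = TfidfVectorizer(stop_words="english")
X_tfidf = tfidf.fit_transform(docs)
counts = CountVectorizer(stop_words="english")
X_counts = counts.fit_transform(docs)

lsa = TruncatedSVD(n_components=2, random_state=0).fit(X_tfidf)
nmf = NMF(n_components=2, random_state=0).fit(X_tfidf)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X_counts)

print("LSA topics:", top_words(lsa.components_, tfidf.get_feature_names_out()))
print("NMF topics:", top_words(nmf.components_, tfidf.get_feature_names_out()))
print("LDA topics:", top_words(lda.components_, counts.get_feature_names_out()))
```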

What's included

1 video, 16 readings, 4 assignments, 1 app item

This module delves into the essential techniques of syntactic and semantic parsing in natural language processing (NLP). You'll begin with an exploration of linguistic structures, focusing on phrase structure and dependency structure, which are fundamental for understanding sentence syntax. You'll then review various parsing methods, including transition-based and graph-based dependency parsing, highlighting their respective advantages and challenges. Additionally, you'll examine neural transition-based parsing, which leverages neural networks for improved accuracy and efficiency. Finally, the module touches on semantic parsing, emphasizing its role in mapping sentences to formal representations of meaning, which is crucial for applications like dialogue systems and information extraction.
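
To ground the transition-based parsing idea, the sketch below implements a stripped-down arc-standard transition system (no ROOT token, no arc labels) and replays a hand-written action sequence for one sentence; in a real parser each action would be chosen by a trained, often neural, classifier.

```python
# Sketch of the arc-standard transition system behind transition-based dependency
# parsing: a stack, a buffer, and three actions (SHIFT, LEFT-ARC, RIGHT-ARC).
def arc_standard(tokens, actions):
    stack, buffer, arcs = [], list(range(len(tokens))), []
    for action in actions:
        if action == "SHIFT":
            stack.append(buffer.pop(0))          # move next buffer token onto the stack
        elif action == "LEFT-ARC":               # top becomes head of second-from-top
            dep = stack.pop(-2)
            arcs.append((stack[-1], dep))
        elif action == "RIGHT-ARC":              # second-from-top becomes head of top
            dep = stack.pop()
            arcs.append((stack[-1], dep))
    return [(tokens[h], tokens[d]) for h, d in arcs]

tokens = ["She", "reads", "short", "books"]
# Hand-written actions for the tree: reads -> She, books -> short, reads -> books
actions = ["SHIFT", "SHIFT", "LEFT-ARC",         # She <- reads
           "SHIFT", "SHIFT", "LEFT-ARC",         # short <- books
           "RIGHT-ARC"]                          # reads -> books

print(arc_standard(tokens, actions))
# [('reads', 'She'), ('books', 'short'), ('reads', 'books')]
```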

What's included

2 videos, 32 readings, 4 assignments

Instructor

Ramin Mohammadi
Northeastern University
4 courses, 526 learners

Offered by

Explore more from Machine Learning
