fet_Scikit_Learn

1.2. Linear and Quadratic Discriminant Analysis

  • 1.2.1. Dimensionality reduction using Linear Discriminant Analysis
  • 1.2.2. Mathematical formulation of the LDA and QDA classifiers
  • 1.2.3. Mathematical formulation of LDA dimensionality reduction
  • 1.2.4. Shrinkage
  • 1.2.5. Estimation algorithms
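
As a quick orientation to the subsections above, the sketch below fits LinearDiscriminantAnalysis and QuadraticDiscriminantAnalysis from sklearn.discriminant_analysis, uses LDA for supervised dimensionality reduction (1.2.1), and enables shrinkage (1.2.4) via the lsqr solver (1.2.5). It is a minimal illustration: the synthetic data and parameter choices are mine, not taken from the scikit-learn page.

    # LDA/QDA as classifiers, LDA as a dimensionality reducer, shrinkage.
    from sklearn.datasets import make_classification
    from sklearn.discriminant_analysis import (
        LinearDiscriminantAnalysis,
        QuadraticDiscriminantAnalysis,
    )

    # Illustrative synthetic data: 3 classes, 10 features.
    X, y = make_classification(n_samples=300, n_features=10,
                               n_informative=5, n_classes=3, random_state=0)

    # LDA as a classifier (linear boundaries, one shared covariance matrix).
    lda = LinearDiscriminantAnalysis().fit(X, y)
    print("LDA accuracy:", lda.score(X, y))

    # 1.2.1: LDA as a supervised reducer; at most n_classes - 1 components.
    X_2d = LinearDiscriminantAnalysis(n_components=2).fit_transform(X, y)
    print("reduced shape:", X_2d.shape)  # (300, 2)

    # 1.2.4/1.2.5: shrinkage requires the 'lsqr' or 'eigen' solver.
    lda_shrunk = LinearDiscriminantAnalysis(solver="lsqr",
                                            shrinkage="auto").fit(X, y)

    # QDA: one covariance matrix per class, quadratic boundaries.
    qda = QuadraticDiscriminantAnalysis().fit(X, y)
    print("QDA accuracy:", qda.score(X, y))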
  • 1. Supervised learning
    • 1.1. Linear Models
    • 1.2. Linear and Quadratic Discriminant Analysis
    • 1.3. Kernel ridge regression
    • 1.4. Support Vector Machines
    • 1.5. Stochastic Gradient Descent
    • 1.6. Nearest Neighbors
    • 1.7. Gaussian Processes
    • 1.8. Cross decomposition
    • 1.9. Naive Bayes
    • 1.10. Decision Trees
    • 1.11. Ensemble methods
    • 1.12. Multiclass and multilabel algorithms
    • 1.13. Feature selection
    • 1.14. Semi-Supervised
    • 1.15. Isotonic regression
    • 1.16. Probability calibration
    • 1.17. Neural network models (supervised)
  • 2. Unsupervised learning
    • 2.1. Gaussian mixture models
    • 2.2. Manifold learning
    • 2.3. Clustering
    • 2.4. Biclustering
    • 2.5. Decomposing signals in components (matrix factorization problems)
    • 2.6. Covariance estimation
    • 2.7. Novelty and Outlier Detection
    • 2.8. Density Estimation
    • 2.9. Neural network models (unsupervised)
  • 3. Model selection and evaluation
    • 3.1. Cross-validation: evaluating estimator performance
    • 3.2. Tuning the hyper-parameters of an estimator
    • 3.3. Metrics and scoring: quantifying the quality of predictions
    • 3.4. Model persistence
    • 3.5. Validation curves: plotting scores to evaluate models
  • 4. Inspection
    • 4.1. Partial dependence plots
    • 4.2. Permutation feature importance
  • 5. Visualizations
    • 5.1. Available Plotting Utilities
  • 6. Dataset transformations
    • 6.1. Pipelines and composite estimators
    • 6.2. Feature extraction
    • 6.3. Preprocessing data
    • 6.4. Imputation of missing values
    • 6.5. Unsupervised dimensionality reduction
    • 6.6. Random Projection
    • 6.7. Kernel Approximation
    • 6.8. Pairwise metrics, Affinities and Kernels
    • 6.9. Transforming the prediction target (y)
  • 7. Dataset loading utilities
    • 7.1. General dataset API
    • 7.2. Toy datasets
    • 7.3. Real world datasets
    • 7.4. Generated datasets
    • 7.5. Loading other datasets
  • 8. Computing with scikit-learn
    • 8.1. Strategies to scale computationally: bigger data
    • 8.2. Computational Performance
    • 8.3. Parallelism, resource management, and configuration
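
The outline above is organized around scikit-learn's shared estimator API, so a single short sketch can tie several chapters together: preprocessing (6.3) and a classifier chained in a pipeline (6.1), loaded from a toy dataset (7.2) and scored with cross-validation (3.1). This is an illustrative example of mine, not code from the original page.

    # One pass through the User Guide's common workflow.
    from sklearn.datasets import load_iris
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = load_iris(return_X_y=True)  # toy dataset (7.2)

    # Chain scaling (6.3) and a classifier (1.2) into a pipeline (6.1).
    pipe = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())

    # Evaluate with 5-fold cross-validation (3.1).
    scores = cross_val_score(pipe, X, y, cv=5)
    print("5-fold CV accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))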
