Murphy Kevin P. (1970- )
Source of description: Books, journals and special collections
Form and type: Book; scholarly publication
Availability: on-site use only
Branch: Biblioteka Międzywydziałowa
Year of publication: 2010-2019
Period of creation: 2001-
Country of publication: United States
Language: English
Subject: Probability theory; Machine learning
Genre: Textbook
Field and approach: Computer science and information technology
1 result
Book
Machine learning : a probabilistic perspective / Kevin P. Murphy. - Cambridge, Massachusetts ; London : The MIT Press, copyright 2012. - XXIX, 1071 pages : illustrations ; 24 cm.
(Adaptive Computation and Machine Learning Series)
Bibliography: pages 1019-1050. Index.
Contents:
1. Introduction: Machine learning: what and why?; Supervised learning; Unsupervised learning; Discovering clusters; Discovering latent factors; Discovering graph structure; Matrix completion; Some basic concepts in machine learning; Parametric vs non-parametric models; A simple non-parametric classifier: K-nearest neighbors; The curse of dimensionality; Parametric models for classification and regression; Linear regression; Logistic regression; Overfitting; Model selection; No free lunch theorem.
2. Probability: Discrete random variables; Fundamental rules; Bayes' rule; Independence and conditional independence; Continuous random variables; Quantiles; Mean and variance; Some common discrete distributions; The binomial and Bernoulli distributions; The multinomial and multinoulli distributions; The Poisson distribution; The empirical distribution; Some common continuous distributions; Gaussian (normal) distribution; Degenerate pdf; The Student's t distribution; The Laplace distribution; The gamma distribution; The beta distribution; Pareto distribution; Joint probability distributions; Covariance and correlation; The multivariate Gaussian; Multivariate Student t distribution; Dirichlet distribution; Transformations of random variables; Linear transformations; General transformations; Central limit theorem; Monte Carlo approximation; Example: change of variables, the MC way; Example: estimating π by Monte Carlo integration; Accuracy of Monte Carlo approximation; Information theory; Entropy; KL divergence; Mutual information.
3. Generative models for discrete data: Bayesian concept learning; Likelihood; Prior; Posterior; Posterior predictive distribution; A more complex prior; The beta-binomial model; Posterior predictive distribution; The Dirichlet-multinomial model; Naive Bayes classifiers; Model fitting; Using the model for prediction; The log-sum-exp trick; Feature selection using mutual information; Classifying documents using bag of words.
4. Gaussian models: MLE for an MVN; Maximum entropy derivation of the Gaussian; Gaussian discriminant analysis; Quadratic discriminant analysis (QDA); Linear discriminant analysis (LDA); Two-class LDA; MLE for discriminant analysis; Strategies for preventing overfitting; Regularized LDA; Diagonal LDA; Nearest shrunken centroids classifier; Inference in jointly Gaussian distributions; Statement of the result; Information form; Linear Gaussian systems; Statement of the result; Digression: the Wishart distribution; Inverse Wishart distribution; Visualizing the Wishart distribution; Inferring the parameters of an MVN; Sensor fusion with unknown precisions.
5. Bayesian statistics: Summarizing posterior distributions; MAP estimation; Credible intervals; Inference for a difference in proportions; Bayesian model selection; Bayesian Occam's razor; Computing the marginal likelihood (evidence); Bayes factors; Jeffreys-Lindley paradox; Priors; Uninformative priors; Jeffreys priors; Robust priors; Mixtures of conjugate priors; Hierarchical Bayes; Modeling related cancer rates; Empirical Bayes; Example: beta-binomial model; Example: Gaussian-Gaussian model; Bayesian decision theory; Bayes estimators for common loss functions; The false positive vs false negative tradeoff.
6. Frequentist statistics: Sampling distribution of an estimator; Bootstrap; Large sample theory for the MLE; Frequentist decision theory; Bayes risk; Minimax risk; Admissible estimators; Desirable properties of estimators; Consistent estimators; Unbiased estimators; Minimum variance estimators; The bias-variance tradeoff; Empirical risk minimization; Regularized risk minimization; Structural risk minimization; Estimating the risk using cross validation; Upper bounding the risk using statistical learning theory; Surrogate loss functions; Pathologies of frequentist statistics; Counter-intuitive behavior of confidence intervals; p-values considered harmful; The likelihood principle; Why isn't everyone a Bayesian?
7. Linear regression: Model specification; Maximum likelihood estimation (least squares); Derivation of the MLE; Geometric interpretation; Robust linear regression; Ridge regression; Basic idea; Numerically stable computation; Connection with PCA; Regularization effects of big data; Bayesian linear regression; Computing the posterior; Computing the posterior predictive; Bayesian inference when σ² is unknown.
8. Logistic regression: Model specification; Model fitting; Steepest descent; Newton's method; Iteratively reweighted least squares (IRLS); Quasi-Newton (variable metric) methods; Multi-class logistic regression; Bayesian logistic regression; Laplace approximation; Derivation of the Bayesian information criterion (BIC); Gaussian approximation for logistic regression; Approximating the posterior predictive; Residual analysis (outlier detection); Online learning and stochastic optimization; Online learning and regret minimization; Stochastic optimization and risk minimization; The LMS algorithm; The perceptron algorithm; A Bayesian view; Generative vs discriminative classifiers; Pros and cons of each approach; Dealing with missing data; Fisher's linear discriminant analysis (FLDA).
9. Generalized linear models and the exponential family: The exponential family; Log partition function; MLE for the exponential family; Bayes for the exponential family; Maximum entropy derivation of the exponential family; Generalized linear models (GLMs); ML and MAP estimation; Bayesian inference; Probit regression; ML/MAP estimation using gradient-based optimization; Latent variable interpretation; Ordinal probit regression; Multinomial probit models; Multi-task learning; Hierarchical Bayes for multi-task learning; Application to personalized email spam filtering; Application to domain adaptation; Other kinds of prior; Generalized linear mixed models; Example: semi-parametric GLMMs for medical data; Computational issues; Learning to rank; The pointwise approach; The pairwise approach; The listwise approach; Loss functions for ranking.
10. Directed graphical models (Bayes nets): Chain rule; Conditional independence; Graphical models; Graph terminology; Directed graphical models; Naive Bayes classifiers; Markov and hidden Markov models; Medical diagnosis; Genetic linkage analysis; Directed Gaussian graphical models; Inference; Learning; Plate notation; Learning from complete data; Learning with missing and/or latent variables; Conditional independence properties of DGMs; d-separation and the Bayes Ball algorithm (global Markov properties); Other Markov properties of DGMs; Markov blanket and full conditionals; Influence (decision) diagrams.
11. Mixture models and the EM algorithm: Latent variable models; Mixture models; Mixtures of Gaussians; Mixture of multinoullis; Using mixture models for clustering; Mixtures of experts; Parameter estimation for mixture models; Unidentifiability; Computing a MAP estimate is non-convex; The EM algorithm; Basic idea; EM for GMMs; EM for mixture of experts; EM for DGMs with hidden variables; EM for the Student distribution; EM for probit regression; Theoretical basis for EM; Online EM; Other EM variants; Model selection for latent variable models; Model selection for probabilistic models; Model selection for non-probabilistic methods; Fitting models with missing data; EM for the MLE of an MVN with missing data.
12. Latent linear models: Factor analysis; FA is a low rank parameterization of an MVN; Inference of the latent factors; Unidentifiability; Mixtures of factor analysers; EM for factor analysis models; Fitting FA models with missing data; Principal components analysis (PCA); Classical PCA: statement of the theorem; Proof; Singular value decomposition (SVD); Probabilistic PCA; EM algorithm for PCA; Choosing the number of latent dimensions; Model selection for FA/PPCA; Model selection for PCA; PCA for categorical data; PCA for paired and multi-view data; Supervised PCA (latent factor regression); Partial least squares; Canonical correlation analysis; Independent component analysis (ICA); Maximum likelihood estimation; The FastICA algorithm; Using EM; Other estimation principles.
13. Sparse linear models: Bayesian variable selection; The spike and slab model; From the Bernoulli-Gaussian model to ℓ0 regularization; Algorithms; ℓ1 regularization: basics; Why does ℓ1 regularization yield sparse solutions?; Optimality conditions for lasso; Comparison of least squares, lasso, ridge and subset selection; Regularization path; Model selection; Bayesian inference for linear models with Laplace priors; ℓ1 regularization: algorithms; Coordinate descent; LARS and other homotopy methods; Proximal and gradient projection methods; EM for lasso; ℓ1 regularization: extensions; Group lasso; Fused lasso; Elastic net (ridge and lasso combined); Non-convex regularizers; Bridge regression; Hierarchical adaptive lasso; Other hierarchical priors; Automatic relevance determination (ARD)/sparse Bayesian learning (SBL); ARD for linear regression; Whence sparsity?; Connection to MAP estimation; Algorithms for ARD; ARD for logistic regression; Sparse coding; Learning a sparse coding dictionary; Results of dictionary learning from image patches; Compressed sensing; Image inpainting and denoising.
14. Kernels: Kernel functions; RBF kernels; Kernels for comparing documents; Mercer (positive definite) kernels; Linear kernels; Matérn kernels; String kernels; Pyramid match kernels; Kernels derived from probabilistic generative models; Using kernels inside GLMs; Kernel machines; L1VMs, RVMs, and other sparse vector machines; The kernel trick; Kernelized nearest neighbor classification; Kernelized K-medoids clustering; Kernelized ridge regression; Kernel PCA; Support vector machines (SVMs); SVMs for regression; SVMs for classification; Choosing C; A probabilistic interpretation of SVMs; Comparison of discriminative kernel methods; Kernels for building generative models; Smoothing kernels; Kernel density estimation (KDE); From KDE to KNN; Kernel regression; Locally weighted regression.
15. Gaussian processes: GPs for regression; Predictions using noise-free observations; Predictions using noisy observations; Effect of the kernel parameters; Estimating the kernel parameters; Computational and numerical issues; Semi-parametric GPs; GPs meet GLMs; Binary classification; Multi-class classification; GPs for Poisson regression; Connection with other methods; Linear models compared to GPs; Linear smoothers compared to GPs; SVMs compared to GPs; L1VM and RVMs compared to GPs; Neural networks compared to GPs; Smoothing splines compared to GPs; RKHS methods compared to GPs; GP latent variable model; Approximation methods for large datasets.
16. Adaptive basis function models: Classification and regression trees (CART); Growing a tree; Pruning a tree; Pros and cons of trees; Random forests; CART compared to hierarchical mixture of experts; Generalized additive models; Backfitting; Computational efficiency; Multivariate adaptive regression splines (MARS); Boosting; Forward stagewise additive modeling; L2 boosting; AdaBoost; LogitBoost; Boosting as functional gradient descent; Sparse boosting; Multivariate adaptive regression trees (MART); Why does boosting work so well?; A Bayesian view; Feedforward neural networks (multilayer perceptrons); Convolutional neural networks; Other kinds of neural networks; A brief history of the field; The backpropagation algorithm; Identifiability; Regularization; Bayesian inference; Ensemble learning; Stacking; Error-correcting output codes; Ensemble learning is not equivalent to Bayes model averaging; Experimental comparison; Low-dimensional features; High-dimensional features; Interpreting black-box models.
17. Markov and hidden Markov models: Markov models; Transition matrix; Application: language modeling; Stationary distribution of a Markov chain; Application: Google's PageRank algorithm for web page ranking; Hidden Markov models; Applications of HMMs; Inference in HMMs; Types of inference problems for temporal models; The forwards algorithm; The forwards-backwards algorithm; The Viterbi algorithm; Forwards filtering, backwards sampling; Learning for HMMs; Training with fully observed data; EM for HMMs (the Baum-Welch algorithm); Bayesian methods for "fitting" HMMs; Discriminative training; Model selection; Generalizations of HMMs; Variable duration (semi-Markov) HMMs; Hierarchical HMMs; Input-output HMMs; Auto-regressive and buried HMMs; Factorial HMM; Coupled HMM and the influence model; Dynamic Bayesian networks (DBNs).
18. State space models: Applications of SSMs; SSMs for object tracking; Robotic SLAM; Online parameter learning using recursive least squares; SSM for time series forecasting; Inference in LG-SSM; The Kalman filtering algorithm; The Kalman smoothing algorithm; Learning for LG-SSM; Identifiability and numerical stability; Training with fully observed data; EM for LG-SSM; Subspace methods; Bayesian methods for "fitting" LG-SSMs; Approximate online inference for non-linear, non-Gaussian SSMs; Extended Kalman filter (EKF); Unscented Kalman filter (UKF); Assumed density filtering (ADF); Hybrid discrete/continuous SSMs; Inference; Application: data association and multi-target tracking; Application: fault diagnosis; Application: econometric forecasting.
19. Undirected graphical models (Markov random fields): Conditional independence properties of UGMs; An undirected alternative to d-separation; Comparing directed and undirected graphical models; Parameterization of MRFs; The Hammersley-Clifford theorem; Representing potential functions; Ising model; Hopfield networks; Potts model; Gaussian MRFs; Markov logic networks; Learning; Training maxent models using gradient methods; Training partially observed maxent models; Approximate methods for computing the MLEs of MRFs; Pseudo likelihood; Stochastic maximum likelihood; Feature induction for maxent models; Iterative proportional fitting (IPF); Conditional random fields (CRFs); Chain-structured CRFs, MEMMs and the label-bias problem; Structural SVMs; SSVMs: a probabilistic view; SSVMs: a non-probabilistic view; Cutting plane methods for fitting SSVMs; Online algorithms for fitting SSVMs; Latent structural SVMs.
20. Exact inference for graphical models: Belief propagation for trees; Serial protocol; Parallel protocol; Gaussian belief propagation; Other BP variants; The variable elimination algorithm; The generalized distributive law; Computational complexity of VE; A weakness of VE; The junction tree algorithm; Creating a junction tree; Message passing on a junction tree; Computational complexity of JTA; JTA generalizations; Computational intractability of exact inference in the worst case; Approximate inference.
21. Variational inference: Variational inference; Alternative interpretations of the variational objective; Forward or reverse KL?; The mean field method; Derivation of the mean field update equations; Example: mean field for the Ising model; Structured mean field; Example: factorial HMM; Variational Bayes; Example: VB for a univariate Gaussian; Example: VB for linear regression; Variational Bayes EM; Example: VBEM for mixtures of Gaussians; Variational message passing and VIBES; Local variational bounds; Motivating applications; Bohning's quadratic bound to the log-sum-exp function; Bounds for the sigmoid function; Other bounds and approximations to the log-sum-exp function; Variational inference based on upper bounds.
22. More variational inference: Loopy belief propagation: algorithmic issues; LBP on pairwise models; LBP on a factor graph; Convergence; Accuracy of LBP; Other speedup tricks for LBP; Loopy belief propagation: theoretical issues; UGMs represented in exponential family form; The marginal polytope; Exact inference as a variational optimization problem; Mean field as a variational optimization problem; LBP as a variational optimization problem; Loopy BP vs mean field; Extensions of belief propagation; Generalized belief propagation; Convex belief propagation; Expectation propagation; EP as a variational inference problem; Optimizing the EP objective using moment matching; EP for the clutter problem; LBP is a special case of EP; Ranking players using TrueSkill; Other applications of EP; MAP state estimation; Linear programming relaxation; Max-product belief propagation; Graphcuts; Experimental comparison of graphcuts and BP; Dual decomposition.
23. Monte Carlo inference: Sampling from standard distributions; Using the cdf; Sampling from a Gaussian (Box-Muller method); Rejection sampling; Application to Bayesian statistics; Adaptive rejection sampling; Rejection sampling in high dimensions; Importance sampling; Handling unnormalized distributions; Importance sampling for a DGM: likelihood weighting; Sampling importance resampling (SIR); Particle filtering; Sequential importance sampling; The degeneracy problem; The resampling step; The proposal distribution; Application: robot localization; Application: visual object tracking; Application: time series forecasting; Rao-Blackwellised particle filtering (RBPF); RBPF for switching LG-SSMs; Application: tracking a maneuvering target; Application: FastSLAM.
24. Markov chain Monte Carlo (MCMC) inference: Gibbs sampling; Example: Gibbs sampling for the Ising model; Example: Gibbs sampling for inferring the parameters of a GMM; Collapsed Gibbs sampling; Gibbs sampling for hierarchical GLMs; BUGS and JAGS; The Imputation Posterior (IP) algorithm; Blocking Gibbs sampling; Metropolis-Hastings algorithm; Gibbs sampling is a special case of MH; Proposal distributions; Adaptive MCMC; Initialization and mode hopping; Why MH works; Reversible jump (trans-dimensional) MCMC; Speed and accuracy of MCMC; The burn-in phase; Mixing rates of Markov chains; Practical convergence diagnostics; Accuracy of MCMC; Auxiliary variable MCMC; Auxiliary variable sampling for logistic regression; Slice sampling; Swendsen-Wang; Hybrid/Hamiltonian MCMC; Annealing methods; Simulated annealing; Annealed importance sampling; Parallel tempering; Approximating the marginal likelihood; The candidate method; Harmonic mean estimate; Annealed importance sampling.
25. Clustering: Measuring (dis)similarity; Evaluating the output of clustering methods; Dirichlet process mixture models; From finite to infinite mixture models; The Dirichlet process; Applying Dirichlet processes to mixture modeling; Fitting a DP mixture model; Affinity propagation; Spectral clustering; Graph Laplacian; Normalized graph Laplacian; Hierarchical clustering; Agglomerative clustering; Divisive clustering; Choosing the number of clusters; Bayesian hierarchical clustering; Clustering datapoints and features; Biclustering; Multi-view clustering.
26. Graphical model structure learning: Structure learning for knowledge discovery; Relevance networks; Dependency networks; Learning tree structures; Directed or undirected tree?; Chow-Liu algorithm for finding the ML tree structure; Finding the MAP forest; Mixtures of trees; Learning DAG structures; Markov equivalence; Exact structural inference; Scaling up to larger graphs; Learning DAG structure with latent variables; Approximating the marginal likelihood when we have missing data; Structural EM; Discovering hidden variables; Case study: Google's Rephil; Structural equation models; Learning causal DAGs; Causal interpretation of DAGs; Using causal DAGs to resolve Simpson's paradox; Learning causal DAG structures; Learning undirected Gaussian graphical models; MLE for a GGM; Graphical lasso; Bayesian inference for GGM structure; Handling non-Gaussian data using copulas; Learning undirected discrete graphical models; Graphical lasso for MRFs/CRFs; Thin junction trees.
27. Latent variable models for discrete data: Distributed state LVMs for discrete data; Mixture models; Exponential family PCA; LDA and mPCA; GaP model and non-negative matrix factorization; Latent Dirichlet allocation (LDA); Unsupervised discovery of topics; Quantitatively evaluating LDA as a language model; Fitting using (collapsed) Gibbs sampling; Fitting using batch variational inference; Fitting using online variational inference; Determining the number of topics; Correlated topic model; Dynamic topic model; LDA-HMM; Supervised LDA; LVMs for graph-structured data; Stochastic block model; Mixed membership stochastic block model; Relational topic model; LVMs for relational data; Infinite relational model; Probabilistic matrix factorization for collaborative filtering; Restricted Boltzmann machines (RBMs); Varieties of RBMs; Learning RBMs; Applications of RBMs.
28. Deep learning: Deep generative models; Deep directed networks; Deep Boltzmann machines; Deep belief networks; Greedy layer-wise learning of DBNs; Deep neural networks; Deep multi-layer perceptrons; Deep auto-encoders; Stacked denoising auto-encoders; Applications of deep networks; Handwritten digit classification using DBNs; Data visualization and feature discovery using deep auto-encoders; Information retrieval using deep auto-encoders (semantic hashing); Learning audio features using 1d convolutional DBNs; Learning image features using 2d convolutional DBNs.
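Among the Monte Carlo topics above, the contents list "Example: estimating π by Monte Carlo integration". A minimal sketch of that idea in Python (not code from the book; the function name estimate_pi and its defaults are illustrative): draw uniform points in the unit square and count the fraction that land inside the quarter circle, which approximates π/4.

```python
import random

def estimate_pi(n_samples: int = 1_000_000, seed: int = 0) -> float:
    """Monte Carlo estimate of pi (illustrative sketch, not from the book).

    A uniform point in the unit square lands inside the quarter circle
    x^2 + y^2 <= 1 with probability pi/4, so 4 * (hit fraction) estimates pi.
    """
    rng = random.Random(seed)
    hits = sum(
        rng.random() ** 2 + rng.random() ** 2 <= 1.0
        for _ in range(n_samples)
    )
    return 4.0 * hits / n_samples

if __name__ == "__main__":
    # The standard error shrinks as O(1/sqrt(n_samples)).
    print(estimate_pi())  # prints roughly 3.14
```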
Reading room call number BMW: XIII 90 (new)
1 branch holds this item:
Biblioteka Międzywydziałowa
Copies are available on-site only: call number MZ 394 N (1 copy)
