Probabilistic Graphical Models 1
Representation

Below are the top discussions from Reddit that mention this online Coursera course from Stanford University.

Offered by Stanford University. Probabilistic graphical models (PGMs) are a rich framework for encoding probability distributions over ... Enroll for free.

Taught by
Daphne Koller
Professor
and 10 more instructors

Offered by
Stanford University

Reddit Posts and Comments

2 posts • 52 mentions • top 14 shown below

r/MachineLearning • post
102 points • dataoverflow
Stanford's Probabilistic Graphical Models class on Coursera will run again this August
r/MachineLearning • post
18 points • NicolasGuacamole
Daphne Koller's PGM course

Daphne Koller's course on probabilistic graphical models is starting again soon. I've recently become interested in this area, and will be doing the course once it comes out.

I was wondering if anybody from this sub also intends to do it, and if they would like to form an online study-group around it. Supposedly it is a very challenging course.

Link here.

edit: Looks like there is already a (dead) sub for the class. I propose we work out of there. /r/pgmclass

r/pgmclass • post
5 points • hammerheadquark
Probabilistic Graphical Models 1: Representation is up and running on Coursera.
r/MachineLearning • post
11 points • RobRomijnders
[D] What does the graphical model for a GAN look like?

I like to think in terms of graphical models (book, course, Coursera). It helped me to understand models like neural nets, the naive Bayes classifier and the VAE. It also visualizes learning for me in terms of MAP inference, variational inference and MCMC.

Over the last few weeks, I have been wondering what the graphical model for a GAN looks like. Questions like: what are the conditional distributions? Would it be directed or undirected?
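For concreteness, the generator side is often sketched as a tiny directed model: a latent root z with prior p(z) and an observed child x produced by the (deterministic) conditional x = G(z). Here is a minimal ancestral-sampling sketch of that two-node model, with a made-up toy generator standing in for a real trained network:

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_generator(z, W):
    """Stand-in for G(z); a real GAN would use a trained neural net here."""
    return np.tanh(W @ z)

# Ancestral sampling from the two-node directed model z -> x:
W = rng.normal(size=(4, 2))   # made-up "generator weights"
z = rng.normal(size=2)        # sample the latent root: z ~ N(0, I)
x = toy_generator(z, W)       # the child x is a deterministic function of z
print(z, x)
```

The discriminator doesn't appear in this generative sketch at all, which is part of why the question of "the" graphical model for a GAN is tricky.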

r/MLQuestions • comment
2 points • RidgeRegressor

I can't tell you whether they are widely applied in industry or not since I'm still a student myself. Though I would highly advise taking a PGM course, as it gave me really good insight into another very important chapter of ML.

Check out Daphne Koller's PGM course on Coursera - according to our PGM prof she is one of the leading researchers in the PGM field and does an awesome job at teaching it with very clear examples.

r/politics • comment
1 point • ryanbuck_

Also audit the first lesson of Daphne Koller's PGM course (if you can for free... not sure that's possible anymore) for an idea of what a higher-level algorithm might look like.

https://www.coursera.org/learn/probabilistic-graphical-models

Warning: I would not recommend taking PGM as a framework for thinking about these things. But the first lesson of that course is nice. There are other ways to learn the same basic mathematics that are much more flexible, although harder to get access to (other top universities). She is skilled; it's just easy to get 'trapped' by PGM: it has a very specific language and set of tools that can be hard to break free from. And it's quite inaccessible unless you already have a lot of prerequisite knowledge.

r/datascience • comment
1 point • ratterstinkle

I am also very interested in knowing “Why?”

Any thoughts on Daphne Koller’s Coursera courses on Probabilistic graphical models?

r/learnmachinelearning • comment
1 point • dan994
r/CapitalismVSocialism • comment
2 points • metalliska

> In systems theory, a system provides one or more functions.

Ok I like starting with the definition. It sounds "purpose-based", but that's fine.

> We can use this approach to analyse both natural and artificial systems.

So long as both conform to "purpose".

>We can divide a system up into subsystems in order to analyse their function in isolation from the other subsystems.

Right, especially if the isolation can be tested against a variety of pre-conditions (especially injecting randomness at one link in the chain). I finished this course on Coursera a while ago; it's really effective at this.

>A system is simple if it is just an aggregation of the subsystems, without any of the subsystems interacting with each other

Let it be known that defining things cleans up discourse. Latent variables, for example, may "show up" only once two (previously isolated) factors get introduced in a certain context. These may be separate (and undetectable) from any other error.

>An emergent property is a function of a system that arises over time as a result of interaction between the subsystems, but is not a function of any subsystem in isolation.

Ok this sounds like latent variables which react in a chamber. This can run into graph theory isomorphism problems.

>but the overall system may fail to meet the requirements if the interaction between the subsystems puts the overall system outside of the requirements.

Yes. I have some historical engineering examples I can provide here, but I'll hold off. Something to consider: why are there requirements for "function"? (Basically a Ship of Theseus problem involving the "function of being sailworthy".)

> In ages gone by if we saw something in the natural world that looked too ordered to be random we would either conclude that it was by design of some intelligent being, either human or divine.

Think about how many people concluded "We don't know yet, but don't assign it to purposeful order." That's what I would've said in those ages gone by.

>They do not require a designer, only each other and time.

and purposeful function of each subcomponent.

>the reproduction process is still not a result of the interactions between vesicles and is therefore not complex.

if this level of diagram were to include "sexual selection", would that change your answer? Additionally, if 2 vesicles become 1 "mega" vesicle, that could be the same thing.

>I would therefore conclude that the rotational motion is an emergent property that is self-organised by the interaction between water molecules when acting under an external force.

Water molecules exert gravitational forces on one another. I don't think you can label it as an "external force".

> Are there at least some methods you think are helpful or conclusions you think are justifiable? For example how did you reach this conclusion?

As for methods, ecodynamics at Washington State provides ways to take a look at this. One thing I'd advocate for in these models (compared to economic ones) is that they can take data from non-humans (vegetables, sheep, cows, whatever). Humans measuring and assessing cattle is a more universal (IMO) set of criteria than people measuring and assessing themselves. Because whether or not you re-create this type of ecological context, cows are still cows.

So a group of people can rotate through crops and living spaces without insisting that it be a trade network. You might say that a trade network would emerge. But I'd claim that these anthropological and ecological models reveal how division of labor / specialization, and other commercial-first roles, aren't elevated above any other relationship (political, gifts, families, altruistic social, war).

>For example how did you reach this conclusion?

I keep reading books about how money is created. Most of the trade history I was taught in high school (that I remember) acts as if money is some sort of grassroots movement, when in fact (I can provide examples) it comes from the State (King).

Thus trade history is shaped by a King's influence. There might be nuggets of knowledge showing that the reassembling of trade networks happened "anyway", giving more credit to commoners' decisions, and that's what I'd say is the difference between "financial" and "economic" decisions.

"The problem" for me stems from the fact that more financial records (ledgers) were kept, giving an impartial sense of power. Thus any debt holder or creditor who makes records has an unfairly louder "voice". This "voice" drowns out other possible ingenuity from someone non-involved with this.

r/learnmachinelearning • comment
3 points • IndianAmericanNerd

What sort of advanced courses are you looking for? Generally, advanced ML courses are narrowly focused on specific applications or areas of ML. Some examples:

Probabilistic Graphical Models

Deep Learning in Computer Vision

Reinforcement Learning

r/CapitalismVSocialism • comment
1 point • DebonairBud

>the military. Military always gets first crack at new toys before auction.

Historically yes, but I think the US military is somewhat behind the curve these days. In my view the government was subordinated to capital and the corporate world sometime between 1960 and 1975. I wouldn't draw a strict line here, though; governments have always been in a codetermining relationship with the mechanisms of capital. It's more of a spectrum of relative autonomy.

>on graph theory: Causality by Judea Pearl (2000) was my first real introduction; it's quite dense, but it shows how, in a model, errors between nodes can be predicted and controlled for using "random noise"
>
>https://www.coursera.org/learn/probabilistic-graphical-models
>
>was another good way to look at this.

Thanks for the link. I'll have to find some time to look into this.

r/artificial • comment
1 point • ronnyma

No experience, but I recall reading Probabilistic Graphical Models by Prof. Daphne Koller. She writes about medical diagnosis using data structures that model a probability distribution, which may take some symptoms as input and output a probability for, e.g., a "cold or hay fever" (sic, I think). Where does the AI come in? It is of course possible to make these models better with more information about symptoms -> diagnosis. No suggestions at the moment, but I guess you should check it out.
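As a rough illustration of that symptoms-to-diagnosis idea (not taken from the book; a deliberately tiny naive-Bayes-style stand-in with invented names and numbers):

```python
import numpy as np

# Hypothetical numbers for a two-diagnosis, two-symptom toy model.
diagnoses = ["cold", "hay fever"]
prior = np.array([0.6, 0.4])                 # P(diagnosis)
likelihood = np.array([[0.8, 0.2],           # P(symptom | cold):      cough, itchy eyes
                       [0.3, 0.9]])          # P(symptom | hay fever): cough, itchy eyes

def posterior(symptoms):
    """Bayes' rule: P(diagnosis | observed binary symptoms)."""
    s = np.asarray(symptoms)
    per_symptom = likelihood ** s * (1 - likelihood) ** (1 - s)
    unnormalized = prior * per_symptom.prod(axis=1)
    return unnormalized / unnormalized.sum()

print(dict(zip(diagnoses, posterior([1, 0]))))   # cough present, itchy eyes absent
```

The course generalizes this to networks of many variables where the independence assumptions are read off the graph structure rather than assumed wholesale.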

Here is the online course (first part): https://www.coursera.org/learn/probabilistic-graphical-models

Best of luck!!

r/datascience • comment
2 points • ski__

Judea Pearl's books get recommended a lot for a good reason :)

RL is really fun to play with, but again, unless you are an ML researcher, most problems are either based around tabular data, natural language, or images. Tabular data dominates the business world.

Here is the only GP book you should ever need.

http://www.gaussianprocess.org/gpml/chapters/RW.pdf

If you know the Python stack, I would recommend trying to implement a basic GP yourself, referring to

https://github.com/scikit-learn/scikit-learn/blob/a24c8b46/sklearn/gaussian_process/gpr.py#L21

when you get stuck. Note: Scikit does a lot of error checking, so it may take some effort to find the meat of the algorithm. For this reason, I would recommend looking up a basic implementation of a GP online as well.
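As one possible starting point for that exercise, here is a sketch of the standard GP regression equations (following the Rasmussen & Williams book linked above) in plain NumPy with toy data; scikit-learn's version adds the error checking and numerical safeguards mentioned:

```python
import numpy as np

def rbf(A, B, length=1.0):
    """Squared-exponential kernel k(a, b) = exp(-||a - b||^2 / (2 l^2))."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length ** 2)

def gp_predict(X, y, X_new, noise=1e-2):
    """GP posterior mean and variance at X_new (cf. Algorithm 2.1 in R&W)."""
    K = rbf(X, X) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    K_s = rbf(X, X_new)
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = np.diag(rbf(X_new, X_new)) - (v ** 2).sum(axis=0)
    return mean, var

# Toy 1-D example
X = np.linspace(0, 5, 8)[:, None]
y = np.sin(X).ravel()
X_new = np.array([[2.5], [6.0]])
mean, var = gp_predict(X, y, X_new)
print(mean, var)
```

Comparing this against the scikit-learn source should make it easier to spot where the "meat" of the algorithm sits among the bookkeeping.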

If you have gone through Axler, you know enough LA. Focus on the Stats and the Bayesian methods.

One more thing you could consider is graphical models. They are great for explainability, and they let businesses use their domain knowledge to define relationships between various variables. However, graphical models are a whole field of their own.

Fantastic Coursera course on the subject: https://www.coursera.org/learn/probabilistic-graphical-models

The book to accompany the course: https://www.amazon.com/Probabilistic-Graphical-Models-Principles-Computation/dp/0262013193
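To make the "domain knowledge as relationships" point concrete, here is a minimal sketch assuming the pgmpy library is available; the structure, variable names, and probabilities are all made up:

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Domain knowledge: a promotion influences traffic, and traffic influences sales.
model = BayesianNetwork([("Promo", "Traffic"), ("Traffic", "Sales")])

cpd_promo = TabularCPD("Promo", 2, [[0.7], [0.3]])
cpd_traffic = TabularCPD("Traffic", 2, [[0.8, 0.4],
                                        [0.2, 0.6]],
                         evidence=["Promo"], evidence_card=[2])
cpd_sales = TabularCPD("Sales", 2, [[0.9, 0.3],
                                    [0.1, 0.7]],
                       evidence=["Traffic"], evidence_card=[2])

model.add_cpds(cpd_promo, cpd_traffic, cpd_sales)
assert model.check_model()

# Explainable query: how does running a promotion change P(Sales)?
infer = VariableElimination(model)
print(infer.query(["Sales"], evidence={"Promo": 1}))
```

The appeal for business settings is that the arrows themselves are the documentation: a stakeholder can read the assumed relationships straight off the graph.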

r/CapitalismVSocialism • comment
2 points • potato_cabbage

>this is not really a good way to think about experimental design. this book goes into model development regarding controlling variables and randomizing inputs, and developing counterfactuals.

Expand on this. Why not?

Fundamentally, to test in experimental conditions is to test a part of the system in isolation. This requires knowledge of what to isolate, what it does to the overall system, and confidence that all external factors have been accounted for.

Similarly, a model only conveys what we are aware of and its level of complexity is limited by what we think is sensible given available computational capability and the requirements for the model.

Looping back to my usual point, this makes such experimentation highly unreliable when applied to the economy as it is an immensely complex system.

>This course I took around 2013 also goes into what influence different graph nodes implicate on one another. So, no, scale wise, there's not a "hard limit". Not one present in this book's examples anyways.

What do you mean by "hard limit"? To what?

Logically, that statement does not follow. It sounds like: you took a course, therefore there is no hard limit, at least according to the book.

This doesn't make sense. I can't respond to that. Expand, please.

>Nope, just that "emergent order" has no boundary to tell which is fictitious and what isn't.

We know a priori that, say, the solar system emerged and wasn't imposed by deliberate action of some conscious entity, specifically not through human action. I get what you mean: we can't prove it empirically. I argue we don't have to.

>Right, wrong, and "end up" are all functions of input by a biased humanoid. Particularly involving "errors of input", data selection, and adapting older models (knowledge) to newer.

Look, if my goal is to train an AI to identify trees, then a tree would be right and a non-tree would be wrong. The fact that we call a stick with dangling bits a "tree" has little significance in this context.

>I'm currently planning and executing a FPGA Evolvable Hardware project, where "we don't know what our ignorance looks like" on the circuitry level". Doesn't default to "emergent order"; no more than a "God of the Gaps" defaults to "Must be God".

Within the framework that I have outlined, emergent order is order that occurs naturally, without deliberate outside interference by means of human action.

It's been a while since I did anything with FPGAs/ASICs, but you essentially run a genetic algorithm of sorts to come up with the most efficient design given initial parameters and restrictions.

Either way, you clarified that you don't deny the existence of complexity as a whole, so let's just leave this point be.

>Quite the opposite. I'm an atheist, so I'm still asserting that any aspects of "order" are imposed. We don't have any "somewhat ordered", "non-ordered", "imposed ordered", "emergent ordered", "99.999% deterministic ordered", "43.2% non-deterministic ordered" (....etc...) Universes to compare against. Thus, no demarcation of repeatability and falsifiability. We're stuck with what we got, and any claims of "order" are likely made by someone with something to prove. Teleology ain't a science.

>The fact that you can't articulate this doesn't reveal insidiousness, rather, it reveals the knowledge you've learned is biased. It's like bad set theory by reusing "error", "loss", "noise", and other "wrong" variables in inappropriate contexts.

This looks to me like an argument of definitions with an ultra-empiricist twist.

Arguing definitions across frameworks is pointless. Definitions don't prove anything in their own right but are merely tools to assist in conveying a message.

Let's take a step back and, before we continue this discussion, define "emergent order" the way you see it within your framework; then let's compare it to the way I define it, to see if we are discussing the same thing to begin with.