SEARCH - LESSWRONG
A community blog devoted to refining the art of rationality.

CRITICISMS OF THE RATIONALIST MOVEMENT
Criticisms of the rationalist movement and LessWrong have existed for most of its duration, on various grounds.

CULT OF RATIONALITY
Less Wrong has been referred to as a cult, or "phyg" (rot13 of "cult"), on numerous occasions,[1][2][3] with Eliezer Yudkowsky as its leader. Eliezer's confidence in his AI safety work outside of mainstream academia, together with his self-professed intelligence, makes him highly unpopular with his critics.[4]

GROWTH STORIES
Recollections of personal progress, lessons learned, memorable experiences, and coming of age, in autobiographical form. Related pages: Postmortems & Retrospectives, Updated Beliefs (examples of), Self Improvement, Progress Studies (society level).

CRYPTOCURRENCY & BLOCKCHAIN
30 Exploiting Crypto Prediction Markets for Fun and Profit. SrdjanMiletic. 2mo.
MESA-OPTIMIZATION
Mesa-optimization is the situation that occurs when a learned model (such as a neural network) is itself an optimizer: a base optimizer optimizes and, in the process, creates a mesa-optimizer. Work under this concept was previously called Inner Optimizers or Optimization Daemons.

EXAMPLES
Natural selection is an optimization process (one that optimizes for reproductive fitness) that produced humans (who are capable …
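The base-optimizer/mesa-optimizer distinction above can be sketched in a few lines of toy code. This is purely illustrative (the policies, tasks, and scoring here are invented for the sketch, not drawn from the original text): a base optimizer searches over candidate policies for one that scores well, and one of those candidates happens to run its own search at runtime.

```python
# Toy sketch of the base-optimizer / mesa-optimizer distinction.
# The base optimizer selects among candidate policies; one candidate
# is itself an optimizer (it searches over answers at runtime).

def base_objective(policy, task):
    return policy(task)

# A fixed policy: always answers 0, regardless of the task.
def constant_policy(task):
    return -abs(0 - task["target"])  # reward = closeness to the target

# A mesa-optimizer: runs its own inner search over answers at runtime.
def searching_policy(task):
    best = max(range(-10, 11), key=lambda x: -abs(x - task["target"]))
    return -abs(best - task["target"])

tasks = [{"target": t} for t in (-7, 3, 9)]

# The base optimizer simply keeps whichever policy scores best overall.
candidates = [constant_policy, searching_policy]
winner = max(candidates, key=lambda p: sum(base_objective(p, t) for t in tasks))
print(winner.__name__)  # the inner searcher wins: "searching_policy"
```

The point of the sketch is that nothing forced the selected policy to be an optimizer; it simply outperformed the fixed policy, which mirrors how natural selection (the base optimizer) produced humans (mesa-optimizers).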
CULTS - LESSWRONG
All posts related to Cults, sorted by relevance.

BENQUO
As johnswentworth recounts in Core Pathways of Aging, as an organism ages, active transposons within its stem cells duplicate, and that mechanism might lead to an increased average transposon count in stem cells. Those transposons then produce DNA damage, which in turn leads to cell senescence. If that hypothesis is true, there is evolutionary pressure to keep the count of active transposons low.

A MAP THAT REFLECTS THE TERRITORY
LessWrong is a community blog devoted to refining the art of human rationality. This is a collection of our best essays from 2018, as determined by our 2018 Review. It contains over 40 redesigned graphs, packaged into a beautiful set of 5 books, each small enough to fit in your pocket.

A HUMAN'S GUIDE TO WORDS
A series on the use and abuse of words; why you often can't define a word any way you like; how human brains seem to process definitions. It first introduces the Mind Projection Fallacy and the concept of how an algorithm feels from inside, which makes it a basic intro to key elements of the LW zeitgeist. A guide to this sequence is available at 37 Ways That Words Can Be Wrong.
ALL QUESTIONS
A community blog devoted to refining the art of rationality.
Q. 7 Should I advocate for people to not buy Ivermectin as a treatment for COVID-19 in the Philippines for now?

VIRTUES - LESSWRONG
Virtues are traits that one ought to possess, for the benefit of the world or oneself. On LessWrong the focus is often on epistemic virtues, as in Eliezer Yudkowsky's essay Twelve Virtues of Rationality, which offers this list of virtues (roughly summarized):
* Curiosity - the burning desire to pursue truth
* Relinquishment - not being attached to mistaken beliefs
* Lightness - updating your …

LESS WRONG - LESSWRONGWIKI
Less Wrong is devoted to refining the art of human rationality: the art of thinking. The new math and science deserve to be applied to our daily lives, and heard in our public voices. Less Wrong is a partially moderated community blog that allows general authors to contribute posts as well as comments.

POMODORO TECHNIQUE
The Pomodoro Technique is a productivity technique in which you alternate between 25 minutes of work and 5 minutes of break time. It gets its name from a kitchen timer shaped like a tomato (pomodoro in Italian). The basic intuition behind the technique is that: 1. people concentrate most effectively shortly after a break, and 2. most people are not innately good at noticing when they are not …

COMPLEXITY OF VALUE
Complexity of value is the thesis that human values have high Kolmogorov complexity; that our preferences, the things we care about, cannot be summed up by a few simple rules, or compressed. Fragility of value is the thesis that losing even a small part of the rules that make up our values could lead to results that most of us would now consider unacceptable (just like dialing nine out of ten …

HARRY POTTER AND THE METHODS OF RATIONALITY
Harry Potter and the Methods of Rationality is a Harry Potter rational fanfic by Eliezer Yudkowsky, AI researcher and decision theorist. The book is also available in audiobook form. This is an Alternate Universe story, in which Petunia married a scientist.

TIMELESS DECISION THEORY
Timeless decision theory (TDT) is a decision theory, developed by Eliezer Yudkowsky, which, in slogan form, says that agents should decide as if they are determining the output of the abstract computation that they implement. The theory was developed in response to the view that rationality should be about winning (that is, about agents achieving their desired ends) rather than about behaving …

AI TAKEOFF - LESSWRONG
AI Takeoff refers to the process of an Artificial General Intelligence going from a certain threshold of capability (often discussed as "human-level") to being superintelligent and capable enough to control the fate of civilization. There has been much debate about whether AI takeoff is more likely to be slow vs. fast, i.e., "soft" vs. "hard".
LITANY OF GENDLIN
The Litany of Gendlin: What is true is already so. Owning up to it doesn't make it worse. Not being open about it doesn't make it go away. And because it's true, it is what is there to be interacted with. Anything untrue isn't there to be lived. People can stand what is true, for they are already enduring it. —Eugene Gendlin

BLOGPOSTS
* Why Truth?
LESSWRONG
134 Chapter 1: A Day of Very Low Probability. First post in Harry Potter and the Methods of Rationality. Eliezer Yudkowsky.
216 Welcome to LessWrong! Ruby, habryka, Ben Pace, Raemon, jimrandomh. 2y. 24.
170 A Year of Spaced Repetition Software in the Classroom.

ALL QUESTIONS
Q. 16 What weird beliefs do you have?
Q. 7 Is Ray Kurzweil's prediction accuracy still being tracked?
Q. 44 What are some real life Inadequate Equilibria?
Q. -33 Now that Bill Gates has been linked to Epstein, does that mean we need to stop using Microsoft products and …
NEWCOMB'S PROBLEM
Newcomb's Problem is a thought experiment in decision theory exploring problems posed by having other agents in the environment who can predict your actions.

THE PROBLEM
From Newcomb's Problem and Regret of Rationality: A superintelligence from another galaxy, whom we shall call Omega, comes to Earth and sets about playing a strange little game.
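The snippet above doesn't spell out the game, so as a worked illustration, here is the expected-value arithmetic under the classic formulation (a transparent box holding $1,000 and an opaque box holding $1,000,000 iff Omega predicted you would take only the opaque box; the dollar amounts and the predictor-accuracy parameter are assumptions from that standard statement, not from the text above):

```python
def expected_payoff(accuracy: float) -> dict:
    """Expected value of each strategy against a predictor of the given accuracy."""
    small, big = 1_000, 1_000_000  # classic box contents
    # One-boxer: gets the big box only when the predictor was right.
    one_box = accuracy * big
    # Two-boxer: gets the small box always, plus the big box when mispredicted.
    two_box = accuracy * small + (1 - accuracy) * (small + big)
    return {"one-box": one_box, "two-box": two_box}

payoffs = expected_payoff(0.99)
# With a 99%-accurate predictor, one-boxing yields ~$990,000 in expectation
# versus ~$11,000 for two-boxing; one-boxing dominates once accuracy
# exceeds roughly 50.05%.
```

The interest of the problem, of course, is that causal decision theory recommends two-boxing anyway, since the boxes are already filled; the expected-value table is just the starting point for that debate.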
DOMINICQ - LESSWRONG
68 Bureaucracy is a world of magic. 2mo. 31.
28 Buying a house and making friends in unexpected places. 2mo. 4.
11 Making friends. 2mo. 0.
2 dominicq's Shortform. 4mo. 2.
79 Taking money seriously. 4mo.

ELIEZER YUDKOWSKY
The Bayesian Conspiracy. Three Worlds Collide. Highly Advanced Epistemology 101 for Beginners. Inadequate Equilibria. The Craft and the Community. Challenging the Difficult. Yudkowsky's Coming of Age. Quantified Humanism. Value Theory.

LESS WRONG SLACK
LessWrong has a Slack group! Slack is a communication tool for exchanging and discussing various topics. The LW Slack looks and works like a chatroom, with an emphasis on topic-centered channels to enable multiple discussions running in …
THE LIBRARY
A community blog devoted to refining the art of rationality. Rationality: A-Z, by Eliezer Yudkowsky: a set of essays that serve as a long-form introduction to the formative ideas behind Less Wrong, the Machine Intelligence Research Institute, the Center for Applied Rationality, and substantial parts of the effective altruism community.
THE CODEX - LESSWRONG
The Codex is a collection of essays written by Scott Alexander that discuss how good reasoning works, how to learn from the institution of science, and different ways society has been and could be designed. It also contains several short interludes with fictional tales and real-life stories.

VALUE LEARNING
Value learning is a proposed method for incorporating human values into an AGI. It involves creating an artificial learner whose actions take into account many possible sets of values and preferences, weighed by their likelihood. Value learning could prevent an AGI from having goals detrimental to human values, hence helping in the creation of Friendly AI.
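The "many possible sets of values, weighed by their likelihood" idea reduces to expected utility over value hypotheses, which can be sketched in a few lines. Everything concrete here (the three hypotheses, their weights, the action names) is invented for the sketch; real value-learning proposals update the weights from observed human behavior rather than fixing them:

```python
# Toy sketch of value learning: the agent is uncertain which utility
# function captures human values, holds a likelihood weight on each
# hypothesis, and picks the action with the highest expected utility.

# Hypothetical (weight, utility-function) pairs; weights sum to 1.
value_hypotheses = [
    (0.6, {"help": 1.0, "ignore": 0.0, "harm": -1.0}),
    (0.3, {"help": 0.5, "ignore": 0.5, "harm": -1.0}),
    (0.1, {"help": 0.0, "ignore": 1.0, "harm": -0.5}),
]

def expected_utility(action: str) -> float:
    # Average the action's utility across hypotheses, weighted by likelihood.
    return sum(w * u[action] for w, u in value_hypotheses)

actions = ["help", "ignore", "harm"]
best = max(actions, key=expected_utility)
print(best)  # "help": every hypothesis rates "harm" poorly, most favor "help"
```

Note how uncertainty itself does safety work here: "harm" loses under every hypothesis, so the agent avoids it even without knowing which value function is correct.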
RECOMMENDATIONS
Predictably Wrong, by Eliezer Yudkowsky
Argument and Analysis, by Scott Alexander
The Methods of Rationality, by Eliezer Yudkowsky
164 Scope Insensitivity. First post in Rationality: A-Z. Eliezer Yudkowsky.
529 Eight Short Studies On Excuses. First post in The Codex. Scott Alexander.
137 Chapter 1: A Day of Very Low Probability. First post in Harry Potter and the Methods of Rationality. Eliezer Yudkowsky.
217 Welcome to LessWrong! Ruby, habryka, Ben Pace, Raemon, jimrandomh. 2y. 24.
108 A Sketch of Good Communication. Ben Pace. 3y. 34.
LATEST
130 conversations with brilliant rationalists. spencerg. 2d. 16. (Curated)
183 Core Pathways of Aging. johnswentworth. 7d. 79.
99 Alcohol, health, and the ruthless logic of the Asian flush. dynomight. 2d. 26.
18 Search-in-Territory vs Search-in-Map. johnswentworth. 9h. 0.
53 Social behavior curves, equilibria, and radicalism. UnexpectedValues. 1d. 3.
81 An Intuitive Guide to Garrabrant Induction. Mark Xu. 2d. 13.
22 Rules for Epistemic Warfare? Gentzel. 19h. 7.
128 Power dynamics as a blind spot or blurry spot in our collective world-modeling, especially around AI. Andrew_Critch. 5d. 15.
90 Attributions, Karma and better discoverability for wiki/tag features. jimrandomh, habryka. 3d. 8.
89 Selection Has A Quality Ceiling. johnswentworth. 4d. 15.
17 Hypothesis: lab mice have more active transposons than wild mice. ChristianKl. 20h. 3.
9 Restoration of energy homeostasis by SIRT6 extends healthy lifespan. AllAmericanBreakfast. 11h. 1.
52 Rogue AGI Embodies Valuable Intellectual Property. Mark Xu, CarlShulman. 2d. 4.
4 Anthropic Paradoxes and Self Reference. dadadarren. 5h. 0.
29 Finite Factored Sets: Introduction and Factorizations. Scott Garrabrant. 2d. 1.

RECENT DISCUSSION
The dumbest kid in the world (joke). 14
CronoDAS. Newcomb's Problem, Humor, Decision Theory, Fiction. Personal Blog. 5h

"THE DUMBEST KID IN THE WORLD"

> A young boy enters a barber shop and the barber whispers to his customer, "This is the dumbest kid in the world. Watch while I prove it to you."
>
> The barber puts a dollar bill in one hand and two quarters in the other, then calls the boy over and asks, "Which do you want, son?"
>
> The boy takes the quarters and leaves.
>
> "What did I tell you?" said the barber. "That kid never learns!" Later, when the customer leaves, he sees the same young boy coming out of the ice cream store.
>
> "Hey, son! May I ask you a question? Why did you take the quarters instead of the dollar bill?"
>
> The boy licked his cone and replied, "Because the day I take the dollar, the game is over!"

2 shminux 3h: Not smart enough to pretend to be dumb when asked for his reasons, is he.
3 gilch 3h: How else are we supposed to get a punchline?
Rafael Harth 3m: If you just cut everything from "Later" in the third-to-last paragraph onward, smart readers would probably still get it, but it would be less obvious.
Rules for Epistemic Warfare?22
Gentzel
Frontpage19h
In partisan contests of various forms, dishonesty, polarization, and groupthink are widespread. Political warfare creates societal collateral damage: it makes it harder for individuals to arrive at true beliefs on many subjects, because their social networks provide strong incentive to promote false beliefs. To escape this situation, improving social norms and technology may help, however if only one side of a conflict becomes more honest, the other side may exploit that as a weakness, just as conquerors could exploit countries were less violent. Coming up with rules analogous to rules of war, may help ratchet partisan contests toward higher levels of honesty and integrity over time, enabling more honest coalitions to become more competitive. What follows is a naïve shot at an ethos of what suchrules...
(See More – 380 more words)

2 G Gordon Worley III 3h: I'm kinda surprised this comment is so controversial. I'm curious what people are objecting to, resulting in downvotes.
Raemon 8m
2
I'm surprised by the degree of controversialness of the OP and... all the comments so far?
Reply
9 Raemon 12h: I haven't yet thought in detail about whether this particular set of suggestions is good, but I think dealing with the reality of "conflict incentivizes deception" and figuring out what sort of rules regarding deception can become stable Schelling points seems really important.
2 ChristianKl 16h: I think the model of a war between two sides is fundamentally flawed for epistemic warfare. For most players with power, internal struggles within their community matter more for their personal success than whether their community wins against other communities. See all the posts about the problems of moral mazes.

Reflection of Hierarchical Relationship via Nuanced Conditioning of Game Theory Approach for AI Development and Utilization 2
Kyoung-cheol Kim | Game Theory, AI
Frontpage | 2d
APPLICATION OF GAME THEORY TO AI DEVELOPMENT AND UTILIZATION

A recent research post, “Game Theory as an Engine for Large-Scale Data Analysis,” by a Google team (McWilliamson et al. 2021) provides a tremendously helpful viewpoint for thinking about AI development, and also about relevant implications for organization and governance with interventions of AI. By taking a multi-agent approach, beyond thinking about some aspects of related impacts, the theory of AI and the social sciences can share fundamental commonalities in operation. Still, however, there seem to be some limitations from the perspective of organization studies and public administration. That being said, game theory from economics conceptually worked well in this case, and we now need to additionally consider a characteristically organizational perspective, which deals with decision-making and execution... (Continue Reading – 2515 more words)

1 Justin Bullock 15h: Thank you for this post, Kyoung-Cheol. I like how you have used DeepMind's recent work to motivate the discussion of "authority as a consequence of hierarchy" and the idea that "processing information to handle complexity requires speciality, which implies hierarchy." I think there is some interesting work on this forum that captures these same types of ideas, sometimes with similar language and sometimes with slightly different language. In particular, you may find the recent post from Andrew Critch on "Power dynamics as a blind spot or blurry spot in our collective world-modeling, especially around AI" to be sympathetic to core pieces of your argument here. It also looks like Kaj Sotala is having some similar thoughts on adjustments to game-theoretic approaches that I think you would find interesting.

I wanted to share with you an idea that remains incomplete, but I think there is an interesting connection between Kaj Sotala's discussion of non-agent and multi-agent models of the mind and Andrew Critch's robust agent-agnostic processes that connects with your ideas here and the general points I make in the IBS post. Okay, finally, I had been looking for the most succinct quote from Herbert Simon...
Kyoung-cheol Kim 25m 1

Thank you very much for your valuable comments, Justin (I am pretending that I met you for the first time). Yes, I am new to this forum and am learning a lot from various viewpoints that, as you indicated, use similar or slightly different language. In doing so, I think the ideas provided here are highly aligned with unipolar/multipolar (pertaining to the configuration of the very top position level in bureaucracy) and non-agent/multi-agent (ultimately, regarding whether organizations are needed or remain with the intervention of surpassingly develop... (read more)
Reply
What to optimize for in life? 3
Henrik Karlsson 31m

I listened to an interview with Patrick Collison where he claimed that when coding one should always optimize for speed, even when speed is not an issue. (Presumably because it leads to good coding practices, clean code, less build-up of unnecessary functionality, etc.) Assuming that is correct, and I think there is something to it, it makes me wonder: is there something similar that one could optimize for in life? Life is such a multivariate thing that it can at times be hard to know what to prioritize. What parameter is a candidate for having the most positive side effects on your life when optimized?

MIRI location optimization (and related topics) discussion 137
Rob Bensinger | Machine Intelligence Research Institute (MIRI), The SF Bay Area Community
Personal Blog | 1mo
MIRI is moving (with high probability)! We haven’t finalized a location yet, but there’s a good chance we’ll make our decision in the next six weeks. I want to solicit:

* Feedback on our current top location candidates.
* Ideas for other places that might fit our criteria.

I’m also interested in a more general location-optimizing discussion. What are your general thoughts on where you’d like to live, and have they changed at all since the hub conversations Claire Wang began in September and November?
If a new rationality community hub sprang up at any of these locations, would you be tempted to join? Is there a different place you’d prefer (either personally, or for the community)? Anything from 'statements of personal preferences' to 'models of how the rationality community might make humanity's future much... (Continue Reading – 3165 more words)

Malcolm Collins 40m
1
Since my last comment here did not seem to work, I put it in a Google document. It is a deep dive on the best places in the USA to raise a family with more than two kids: https://docs.google.com/document/d/1tq9rY1TCs49XHckWtzOowYz_xHXFnRpOZq8r0lE5JSQ/edit#heading=h.cywpygwwkhbl
Reply
The Nature of Counterfactuals 9
Chris_Leong Ω 3 | Decision Theory, Counterfactuals
Frontpage | 1d
I'm finally beginning to feel that I have a clear idea of the true nature of counterfactuals. In this post I'll argue that counterfactuals are intrinsically part of how we make sense of the world. However, it would be inaccurate to present them as purely a human invention, as we were shaped by evolution in such a way as to ground these conceptions in reality. Unless you're David Lewis, you're probably going to be rather dubious of the claim that all possibilities exist (i.e., that counterfactuals are ontologically real). Instead, you'll probably be willing to concede that they're something we construct; that they're in the map rather than in the territory. Things in the map are tools; they are constructed because they are useful. In other words,... (Continue Reading – 1478 more words)

JBlack 2h
1
A "counterfactual" seems to be just any output of a model given by inputs that were not observed. That is, a counterfactual is conceptually almost identical to a prediction. Even in deterministic universes, being able to make predictions based on incomplete information is likely useful to agents, and ability to handle counterfactuals is basically free if you have anything resembling a predictive model of the world. If we have a model that Omega's behaviour requires that anyone choosing box B must receive 10 utility, then our counterfactuals (model outputs) ... (read more)Reply
1 TAG 7h:
1. All realistic agents have finite and imperfect knowledge.
2. Therefore, for any one agent, there is a set of counterfactual claims that are crazy in the sense of contradicting what they already know.
3. Likewise, for any one agent, there is a set of counterfactual claims that are sane in the sense of not contradicting what they already know.

Thoughts on the Alignment Implications of Scaling Language Models 60
leogao Ω 20 | GPT, Scaling Laws, Machine Learning, Outer Alignment, AI, World Modeling
Frontpage | 3d
This post is also available on my personal blog.
_Thanks to Gwern Branwen, Steven Byrnes, Dan Hendrycks, Connor Leahy, Adam Shimi, Kyle, and Laria for the insightful discussions and feedback._
BACKGROUND
By now, most of you have probably heard about GPT-3 and what it does. There have been a bunch of different opinions on what it means for alignment, and this post is yet another opinion from a slightly different perspective. Some background: I'm a part of EleutherAI, a decentralized research collective (read: glorified Discord server; come join us on Discord for ML, alignment, and dank memes). We're best known for our ongoing effort to... (Continue Reading – 4867 more words)

Victor Levoso 3h
1
Well, if Mary does learn something new (how it feels "from the inside" to see red, or whatever), she would notice, and her brain state would reflect that plus whatever information she learned. Otherwise it doesn't make sense to say she learned anything. And just the fact that she learned something, and might have thought something like "neat, so that's what red looks like," would be relevant to predictions of her behavior, even ignoring the possible information content of qualia.
So it seems distinguishable to me.
Reply
Anthropic Paradoxes and Self Reference 4
dadadarren | Sleeping Beauty Paradox, Anthropics, World Modeling, Rationality
Frontpage | 5h
_In this post I will explain how anthropic paradoxes are connected with self-reference._

SLEEPING BEAUTY PROBLEM

The contention is how to treat the fact that "I am awake NOW/TODAY". To briefly summarize the debate at the cost of oversimplification: SIA suggests treating today as a random sample of all days, while SSA suggests treating today as a random sample of all _awakening_ days in the experiment. This debate can be considered a dispute over the correct way of interpreting/defining TODAY. It should be noted that direct use of the word "today" is not necessary. For example, in _Technicolor Beauty_, due to Titelbaum, the experimenter randomly picks one day to paint the room blue and paints it red on the other day. Say I wake up and see the... (See More – 485 more words)

Open and Welcome Thread - May 2021 25
habryka | Open Threads
Frontpage | 1mo
If it’s worth saying, but not worth its own post, here's a place to put it.
If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don't want to write a full top-level post. If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the new Concepts section. The Open Thread tag is here. The Open Thread sequence is here.
1 GeneSmith 6h: Anyone have reading recommendations for fiction, or even just a summary description, of what a positive future with AI looks like? I've been trying to decide what to work on for the rest of my career. I really want to work on genetics, but worry that, like every other field, it's basically going to become irrelevant since AI will do everything in the future.

habryka 6h
2
I literally 2 minutes ago created the June Open Thread for this year and pinned it. So if I were you, I would probably repost this there instead of here: https://www.lesswrong.com/posts/QTyMwaezwDiYoyAop/open-and-welcome-thread-june-2021
Reply