Waves of anti-Trump protests are currently sweeping the nation, in light of his unethical and likely unconstitutional executive order restricting the safe passage of immigrants and refugees. Since the Women’s March on Washington I’ve heard two or three different people say there is no way Trump will be re-elected in 2020. Everyone who wishes Trump to be defeated should try to cultivate a realistic view of how difficult that might be. Under my current calibration, my 95% confidence interval for Trump winning in 2020 is 20%–85%. That factors in a ~10% chance he will be impeached. Here’s my thinking:
This is a piece I wrote in June 2015 for “Stony Brook Frontiers Magazine”, a graduate-student-run STEM review magazine for middle school students. I had the idea to do an article on AI, as it is a technology that will become pervasive in the next few decades. AI/machine learning is a good career path for young people, especially as many jobs are susceptible to replacement by AI.
Recently I experimented with decision trees for classification to get a better idea of how they work. First I created some two-dimensional training data with two categories, using scikit-learn:
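A minimal sketch of this setup, assuming scikit-learn’s `make_blobs` for the synthetic data and `DecisionTreeClassifier` for the model (the exact data-generation call and parameters used in the original experiment are assumptions here):

```python
from sklearn.datasets import make_blobs
from sklearn.tree import DecisionTreeClassifier

# Two-dimensional training data with two categories (illustrative parameters)
X, y = make_blobs(n_samples=200, centers=2, n_features=2, random_state=0)

# Fit a shallow decision tree so the learned splits are easy to visualize
clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(X, y)

print(clf.score(X, y))  # training accuracy on the blob data
```

Plotting the tree’s decision boundary over the scattered points is a good way to see how it partitions the plane with axis-aligned splits.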
A lot of people commented during and after the recent Ethics of AI conference at NYU that we still don’t know what the necessary conditions for consciousness are, and that this problem lingered like an elephant in the room. The implication seemed to be that this problem cast a pall over a lot of the work discussed at the conference. One commentator even summarized the conference as a “road to nowhere”, at least partially because of this issue.
The ‘conditions for consciousness problem’ is critically important, and the reasons for this were articulated especially well by the panelist Susan Schneider. Several important ‘forks in the road’ in the future development of mankind hinge on whether we think the AIs we create are conscious:
Wikipedia has a lot of problems, including some subtle but serious ones that seem difficult to fix without radical changes to how the platform operates.
I’ve watched the growth of Wikipedia since my first edit, which was in 2004. Since then, I’ve accrued 16,044 edits. 10,287 of those were in 2007, when I was very active as an anti-vandalism patroller. Over the years I’ve created 68 pages in total. Wikipedia has always had obvious problems such as vandalism, systemic bias, and link dropping, which are being addressed by a variety of concerted efforts. Lately I’ve been noticing more subtle problems with Wikipedia articles, which has caused me to seek out higher-quality sources of information. To put it bluntly, Wikipedia articles are just not very well written. They lack logical progression and consistency in their style and level of technical depth. Of course, it’s difficult for the Wikipedia platform to achieve either, since many different authors are constantly adding and subtracting sentences from every article.
Although the macroscopic properties of water have been heavily studied, there are things we don’t understand about this ubiquitous substance. In this post, I will provide an introduction to the problem of describing water’s structure. At first glance, the idea of a liquid having structure seems preposterous. Indeed, liquids cannot maintain a structural arrangement of atoms like solids can. Instead, the atoms/molecules tumble past each other in a constant state of motion. This allows for the defining property of the liquid state – the ability to fill a container.
Our paper, “The hydrogen-bond network of water supports propagating optical phonon-like modes” was published on January 4th in Nature Communications (full open access pdf). A press release about our work has been issued by the Stony Brook Newsroom and picked up by news aggregator Phys.org.
Our work shows that propagating vibrations, or phonons, can exist in water, just as in ice. The work analyzes both experimental data and the results of extensive molecular dynamics simulations performed with a rigid model (TIP4P/eps), a flexible model (TIP4P/2005f), and an ab initio-based polarizable model (TTM3F). Many of these simulations were performed on the new supercomputing cluster at Stony Brook’s Institute for Advanced Computational Science.
by Charles Stross
2006, 415 pp.
“There is an intrinsic unknowability about the technological singularity. Most writers leave it safely offstage or invent reasons why it doesn’t happen. Not Charles Stross. Accelerando lives up to its name, and is the most unflinching look into radical optimism I’ve seen.” – Vernor Vinge
During winter break I finally read Accelerando. I say “finally” because this book was first recommended to me in 2009 at the (now defunct) Rensselaer Polytechnic Institute transhumanism club. Accelerando is notable as being perhaps the first novel to have a storyline which traverses directly through a technological singularity.
How do we assign priors?
If we don’t have any prior knowledge, then the obvious solution is to use the principle of indifference. This principle says that if we have no reason for suspecting one outcome over any other, then all outcomes must be considered equally likely. Jakob Bernoulli called this the “principle of insufficient reason”, a play on the “principle of sufficient reason”, which asserts that everything must have a reason or cause. That may be so, but if we are ignorant of the reasons, we cannot say that one outcome is more likely than any other.
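The principle of indifference can be sketched in a few lines: assign equal prior probability to each hypothesis, then let the data move the posterior via Bayes’ rule. The two hypothetical coins below are illustrative assumptions, not from the post:

```python
# Two hypotheses about a coin we know nothing about:
# "fair" lands heads 50% of the time, "biased" lands heads 80% of the time.
likelihood_heads = {"fair": 0.5, "biased": 0.8}

# Principle of indifference: no reason to favor either hypothesis,
# so each gets prior probability 1/2.
prior = {"fair": 0.5, "biased": 0.5}

# Observe a single heads and apply Bayes' rule:
# posterior(h) ∝ prior(h) * P(heads | h)
unnormalized = {h: prior[h] * likelihood_heads[h] for h in prior}
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}

print(posterior)  # {'fair': 0.3846..., 'biased': 0.6153...}
```

Starting from indifference, one observation already tilts the posterior toward the biased coin, since heads is more probable under that hypothesis.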