December 21, 2020
We have previously covered the many weighty claims made by the progenitors of A.I. algorithms who say their technology can stop crime before it happens. Similar predictive A.I. is increasingly being used to stop the spread of misinformation, disinformation and general “fake news” by analyzing trends in behavior and language used across social media.
However, as we’ve also covered, these systems have more often than not failed quite spectacularly, as many artificial intelligence experts and mathematicians have highlighted. One expert in particular — Uri Gal, Associate Professor in Business Information Systems at the University of Sydney, Australia — noted that from what he has seen so far, these systems are “no better at telling the future than a crystal ball.”
Please keep this in mind as you look at the latest lofty pronouncements from the University of Sheffield below. Nevertheless, we should also be aware that — similar to their real-world counterparts in street-level pre-crime — these systems will most likely be rolled out across social media (if they haven’t been already) regardless, until their inherent flaws, biases and own brand of disinformation are further exposed.
Are there Limits to Inquiry?
Should faculty be restrained from or even punished for investigating complex and controversial events of enormous political significance?
Jim Fetzer and Stephen Francis are very pleased to present:
Academic Freedom Conference (Taped Saturday, 27-28 August 2016)
AFC II: Introduction: James H. Fetzer
WHY DOES IT MATTER?
AFC II: Session 1: Francis A. Boyle, Ph.D., noted Professor of International Law at the University of Illinois College of Law, earned his A.B. in Political Science from Chicago, his J.D. from Harvard Law School, and his A.M. and Ph.D. in Political Science also from Harvard University.