Welcome to XAIES, the expert platform for explainable AI solutions and systems.
LARGE RESOURCE OF CURATED AND UPDATED INFORMATION ON XAI
Knowledge Graph Embeddings for XAI
https://paperswithcode.com/paper/knowledge-graph-embeddings-and-explainable-ai
ARXIV.ORG
TECHNOLOGYREVIEW.COM
Our weird behavior during the pandemic is
messing with AI models
https://www.technologyreview.com/…/covid-pandemic-broken-a…/
XAI in June 2020 🙂
SCIENCEDIRECT.COM
Explainable Artificial Intelligence (XAI):
Concepts, taxonomies, opportunities and challenges toward responsible AI
https://www.sciencedirect.com/science/article/pii/S1566253519308103
http://interpretable-ml.org/icml2020workshop/
ICML
OmniTact vs. GelSight sensor
https://bair.berkeley.edu/blog/2020/05/14/omnitact/
BAIR.BERKELEY.EDU
OmniTact: A Multi-Directional High-Resolution
Touch Sensor
Complexity control by gradient descent in deep networks
https://www.nature.com/articles/s41467-020-14663-9
NATURE.COM
Complexity control by gradient descent in deep
networks
Learning with Known Operators (the free text of the paper published in Nature):
https://arxiv.org/abs/1907.01992
ARXIV.ORG
Learning with Known Operators reduces Maximum
Training Error Bounds
Do we still need Traditional Pattern Recognition, Machine Learning, and Signal Processing in the Age of Deep Learning? Comments are welcome. One answer is here: https://towardsdatascience.com/do-we-still-need-traditional-pattern-recognition-machine-learning-and-signal-processing-in-the-age-9ffe58512ff9
"Funny" AI story for a Sunday sci read.
https://arxiv.org/abs/1701.08711
ARXIV.ORG
Predicting Auction Price of Vehicle License
Plate with Deep Recurrent Neural Network
XAI in Papers with Code, Evaluation chapter:
https://paperswithcode.com/paper/evaluating-explainable-ai-which-algorithmic
Implemented in one code library.
PAPERSWITHCODE.COM
Visualisation and knowledge discovery from interpretable models (very recent paper!)
https://arxiv.org/abs/2005.03632
ARXIV.ORG
Visualisation and knowledge discovery from
interpretable models
Brain age prediction with XAI!
BIORXIV.ORG
Objectives: Define a clinically usable…
Principal Components of XAI (long read, approx. 39 pages, but very informative and up to date):
https://arxiv.org/pdf/2005.01908.pdf
ARXIV.ORG
Marketing and introduction again :)), from another perspective:
https://towardsdatascience.com/explainable-artificial-intelligence-14944563cc79
https://cd-make.net/make-explainable-ai/
CD-MAKE.NET
Fairness and machine learning. If you feel like giving feedback on this book:
FAIRMLBOOK.ORG
Carnegie Mellon's Event
https://www.cs.cmu.edu/calendar/mon-2020-05-04-1530/software-research-seminar
Fridays are good for introducing XAI tools 🙂:
Captum, Model Interpretability for PyTorch, https://captum.ai/
CAPTUM.AI
Captum · Model Interpretability for PyTorch
This paper is practically an XAIES for COVID-19. It extracts IF...THEN rules from the data after the deep neural network classification.
https://www.medrxiv.org/content/10.1101/2020.04.24.20078584v1.full.pdf
MEDRXIV.ORG
Today I introduce you to Doctor XAI 🙂!
https://dl.acm.org/doi/abs/10.1145/3351095.3372855
DL.ACM.ORG
Doctor XAI | Proceedings of the 2020
Conference on Fairness, Accountability, and Transparency
LIME was one of the major tools we used for XAI, but as you know, it was not really stable. This paper proposes an improvement of interpretability and fidelity for LIME.
https://arxiv.org/abs/2004.12277
ARXIV.ORG
An Extension of LIME with Improvement of
Interpretability and Fidelity
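Since LIME's instability comes from its random sampling step, a toy version of the algorithm makes the issue easy to see. This is only a minimal sketch of the local-surrogate idea (all names and parameter choices below are mine, not the LIME library's): perturb the input, weight the perturbations by proximity, and fit a weighted linear model whose coefficients serve as the local explanation.

```python
import numpy as np

def lime_sketch(predict, x, n_samples=2000, width=0.75, seed=0):
    """Toy LIME-style local surrogate (illustration only, not the library)."""
    rng = np.random.default_rng(seed)
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.size))  # perturb around x
    y = np.array([predict(z) for z in Z])                    # black-box outputs
    d = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(d ** 2) / width ** 2)                       # proximity kernel
    A = np.hstack([Z, np.ones((n_samples, 1))])              # add intercept col
    sw = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(sw * A, np.sqrt(w) * y, rcond=None)
    return coef[:-1]                                         # local importances

# Toy black box: feature 0 matters three times more than feature 1.
f = lambda z: 3.0 * z[0] + 1.0 * z[1]
imp = lime_sketch(f, np.array([1.0, 2.0]))
```

Re-running with a different `seed` shows how the explanation fluctuates for non-linear black boxes, which is exactly the stability problem the paper targets.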
A new XAI model from Berkeley, Neural-Backed Decision Trees (NBDTs): https://bair.berkeley.edu/blog/2020/04/23/decisions/
My comments are the following: it is possible to extract an expert system from a decision tree. It should then be possible to extract an XAIES from an NBDT.
BAIR.BERKELEY.EDU
Making Decision Trees Accurate Again:
Explaining What Explainable AI Did Not
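The decision-tree-to-expert-system idea can be sketched in a few lines: each leaf becomes one IF...THEN rule whose conditions are the splits on the path to it. The nested-dict tree encoding below is my own toy illustration, not the NBDT code.

```python
# Walk a decision tree and emit IF...THEN rules,
# i.e. the rule base of a small expert system.
def extract_rules(node, conditions=()):
    if "label" in node:                      # leaf: emit one rule
        cond = " AND ".join(conditions) or "TRUE"
        return [f"IF {cond} THEN {node['label']}"]
    feat, thr = node["feature"], node["threshold"]
    left = extract_rules(node["left"], conditions + (f"{feat} <= {thr}",))
    right = extract_rules(node["right"], conditions + (f"{feat} > {thr}",))
    return left + right

# Hand-built toy tree (hypothetical features, for illustration only).
tree = {
    "feature": "age", "threshold": 40,
    "left": {"label": "low risk"},
    "right": {
        "feature": "bp", "threshold": 130,
        "left": {"label": "low risk"},
        "right": {"label": "high risk"},
    },
}
rules = extract_rules(tree)
# e.g. "IF age > 40 AND bp > 130 THEN high risk"
```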
Yesterday was DNA Day! But can you see DNA? Yes, there is an XAI for DNA, and a deliciously good one made out of strawberries. And you can do it at home: https://unlockinglifescode.org/node/653
Saturday is for academic fun with Stanford University. Enjoy!
https://events.stanford.edu/events/877/87701/
April 25, 2020, 8:00 PM. Zoom.
EVENTS.STANFORD.EDU
Center for South Asia Comedy Night
Back to signal processing? Not quite, still another XAI perspective: sparse representations.
NOTION.SO
Explainable AI, Sparse Representations, and
Signals
Distributed knowledge is hard to explain
Facial recognition with face masks! The new challenge of today. Not easy to solve for the general case, but it works for individuals and their phones. XAI is needed to know what the network learns when recognizing a masked face.
BIOMETRICUPDATE.COM
VinAI biometric recognition technology for
phones works with protective face masks
VinAI Research has
developed a method of facial recognition technology that can accurately
identify individuals who wear surgical face masks, the Vingroup-funded research
lab announced. Claiming it…
I am interested in this Special Issue. There is also still enough time for submission.
Recently, the multimedia landscape underwent a revolution around several
technological innovations.
JOURNALS.ELSEVIER.COM
Signal Processing: Image Communication
One more reason for XAI: to understand AI-powered robots.
THESUN.CO.UK
Facebook shuts off AI experiment after two
robots begin speaking in their OWN language only they understand
A bit of XAI history!
https://arxiv.org/abs/2003.07520
ARXIV.ORG
Foundations of Explainable Knowledge-Enabled
Systems
XAI research jobs
JOBBNORGE.NO
Job title: 4 PhD
positions: Explainable Artificial Intelligence (XAI) for critical applications
(184834), Employer: NTNU - Norwegian University of Science and Technology,
Deadline: Monday, April 27, 2020
CIRCUITS for XAI, the study of the connections between neurons.
https://distill.pub/2020/circuits/zoom-in/
DISTILL.PUB
Zoom In: An Introduction to Circuits
Alexander Binder: Explaining Deep Learning for Identifying Structures and Biases in Computer Vision
https://interpretablevision.github.io/slide/iccv19_binder_slide.pdf
INTERPRETABLEVISION.GITHUB.IO
ICASSP, an A-class conference, for FREE! You can register here: https://cmsworkshops.com/ICASSP2020/Registration.asp
2020.IEEEICASSP.ORG
Signal Processing: from Sensors to Information, at the Heart of Data
Science
Master thesis in Computer Science at the KTH ROYAL INSTITUTE OF TECHNOLOGY.
Title: Explainable AI - Visualization of Neuron Functionality in Recurrent Neural Networks for Text Prediction
Link: http://www.diva-portal.org/smash/get/diva2:1394892/FULLTEXT01.pdf
DIVA-PORTAL.ORG
Three more calls for papers.
1. https://www.mdpi.com/si/BDCC/image_detection
2. https://www.mdpi.com/journal/mathematics/special_issues/New_Trends_Machine_Learning_Theory_Practice
Big Data and Cognitive Computing, an international, peer-reviewed Open
Access journal.
MDPI.COM
Big Data and Cognitive Computing
A.I Theory of Making a Horse from Two
Hundred Rabbits
https://www.ameinfo.com/…/ai-theory-human-consciousness-int…
XAI that can be used for COVID-19.
Take a look at the example given during the ICML conference by Shrikumar. Let's suppose you have already trained a model on DNA mutations causing diseases.
Quantization was the first thing I learned in digital signal processing. Now, in deep learning, quantization is about converting from floating point to fixed-point integer. Training is faster, but XAI through surrogate models like LIME could also become more stable despite the noise.
https://towardsdatascience.com/speeding-up-deep-learning-with-quantization-3fe3538cbb9
TOWARDSDATASCIENCE.COM
Speeding up Deep Learning with Quantization
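The float-to-integer conversion mentioned above can be sketched with symmetric linear quantization (a toy version of my own, not any framework's API): choose a scale so the largest weight maps to 127, round to int8, and note that the dequantization error stays bounded by half the scale.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric post-training quantization of a float32 array to int8."""
    scale = np.abs(w).max() / 127.0                      # one scale per tensor
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Map int8 values back to approximate float32 values."""
    return q.astype(np.float32) * scale

w = np.array([0.05, -1.27, 0.64, 1.0], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
err = np.abs(w - w_hat).max()    # rounding error, at most scale / 2
```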
XAI for TAI?
https://arxiv.org/abs/1912.00747
ARXIV.ORG
Defining and Unpacking Transformative AI
From Elsevier's journal Artificial Intelligence:
1. Paper: https://www.sciencedirect.com/science/article/pii/S0004370218305988?via%3Dihub
2. Special issue: https://www.journals.elsevier.com/artificial-intelligence/call-for-papers/special-issue-on-explainable-artificial-intelligence
SCIENCEDIRECT.COM
Explanation in artificial intelligence:
Insights from the social sciences
There has been a recent
resurgence in the area of explainable artificial intelligence as researchers
and practitioners seek to provide more transparen…
Search with context, not keywords!
COVID19.MENDEL.AI
COVID-19 Scholarly Articles Search
STAGES OF AI EXPLAINABILITY
https://towardsdatascience.com/the-how-of-explainable-ai-explainable-modelling-55c8c43d7bed
TOWARDSDATASCIENCE.COM
The How of Explainable AI: Explainable
Modelling
In the first part of our overview of the How of Explainable AI, we looked at pre-modelling explainability. However, the true scope of…
Towards Medical XAI
RESEARCHGATE.NET
(PDF) A Survey on Explainable Artificial
Intelligence (XAI): Towards Medical XAI
PDF | Recently,
artificial intelligence, especially machine learning has demonstrated
remarkable performances in many tasks, from image processing to... | Find, read
and cite all the research you need on ResearchGate
Today, some tech updates:
I am still a big fan of Colab and TensorFlow, but it is good to know that we have open-source alternatives like MindSpore to understand the internal algorithms and mechanisms: https://www.techradar.com/news/huawei-open-sources-tensorflow-competitor-mindspore?fbclid=IwAR0-sodYYcvuFf1n39FAC2gbA-dWgsi6m15CHtaKBpzrwiQ003o9930A0eM
There is also Colab Pro. It might be a good thing, but it is certainly not free anymore: https://colab.research.google.com/notebooks/pro.ipynb#scrollTo=SKQ4bH7qMGrA
It would be nice if, for research and science purposes, we could have access to Colab Pro.
Nice and easy reading: https://medium.com/luminovo/ai-and-the-question-of-explainability-9778ef70df7a
COWS GO TO COLLEGE? I HEARD THAT A COW WENT TO HARVARD...
THENATIONAL.AE
Google's new chatbot Meena is supposed to be the best one yet, but how human is it really?
Meena can supposedly talk about anything and can even make up bad jokes
The most recent AI debate, more to come...: https://www.technologyreview.com/s/615416/ai-debate-gary-marcus-danny-lange/?fbclid=IwAR2r9MIPphROXDQpSJ95JNgIC6ipz2DrKDp7EKL01v_acHKLErYNUOkChgE
TECHNOLOGYREVIEW.COM
A debate between AI experts shows a battle over the technology’s future
The field is in disagreement about where it should go and why.
Neural Network Receptive Field, a step forward for XAI in CNNs:
https://www.learnopencv.com/cnn-receptive-field-computation-using-backprop/
LEARNOPENCV.COM
CNN Receptive Field Computation Using Backprop
| Learn OpenCV
How to understand which
area on the input image is visible for the output…
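The linked post computes receptive fields by backprop; the same quantity also has a simple closed-form recurrence, sketched below in a small helper of my own (not code from the article). Each layer grows the receptive field by (kernel − 1) times the cumulative stride.

```python
def receptive_field(layers):
    """Receptive field of stacked convolutions.

    layers: list of (kernel_size, stride) tuples, input to output.
    Recurrence: r_out = r_in + (k - 1) * j_in,  j_out = j_in * stride.
    """
    r, j = 1, 1          # receptive field and cumulative stride ("jump")
    for k, s in layers:
        r += (k - 1) * j
        j *= s
    return r

# Three 3x3 stride-1 convs stack to a 7x7 receptive field.
rf = receptive_field([(3, 1), (3, 1), (3, 1)])  # → 7
```

With strides the field grows faster: two 3x3 stride-2 convs already reach a receptive field of 7.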
For sure interesting:
HAI.STANFORD.EDU
COVID-19 and AI: A Virtual Conference
RSVP for COVID-19 and AI: A Virtual Conference.
Just to let you know, I've created a virtual assistant for XAI. It is still in the testing phase, but soon I will upload it to the web page.
Soon after our XAIES web (see pixelatus), KAGGLE reinforced its Explainability Section. Could it be a coincidence? https://www.kaggle.com/learn/machine-learning-explainability
KAGGLE.COM
Learn Machine Learning Explainability
Tutorials
Extract human-understandable insights from any machine learning model.
Free paper from the prestigious journal NATURE, using only the link below: https://www.nature.com/articles/s42256-019-0138-9.epdf?shared_access_token=RCYPTVkiECUmc0CccSMgXtRgN0jAjWel9jnR3ZoTv0O81kV8DqPb2VXSseRmof0Pl8YSOZy4FHz5vMc3xsxcX6uT10EzEoWo7B-nZQAHJJvBYhQJTT1LnJmpsa48nlgUWrMkThFrEIvZstjQ7Xdc5g%3D%3D
NATURE.COM
From local explanations to global
understanding with explainable AI for trees
Tree-based machine
learning models are widely used in domains such as healthcare, finance and
public services. The authors present an explanation method for trees that
enables the computation of optimal local explanations for individual
predictions, and demonstrate their method on three medical data...
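The paper's actual method is TreeSHAP, which computes Shapley-optimal attributions; as a much simpler illustration of local explanations for individual tree predictions, here is a sketch of the earlier "path attribution" (Saabas) idea, with a hand-built toy tree of my own: walk the decision path and credit each split's change in node mean to the split feature.

```python
def path_contributions(node, x):
    """Per-feature contributions for one prediction of a regression tree.

    node: nested-dict tree whose nodes carry the mean target 'value';
    x: mapping from feature index to value. Returns (bias, contributions)
    with bias + sum(contributions) == the leaf prediction.
    """
    bias = node["value"]                           # root mean = baseline
    contribs = {}
    while "feature" in node:                       # descend to a leaf
        f = node["feature"]
        child = node["left"] if x[f] <= node["threshold"] else node["right"]
        contribs[f] = contribs.get(f, 0.0) + child["value"] - node["value"]
        node = child
    return bias, contribs

# Toy regression tree with node means (illustration only).
tree = {
    "feature": 0, "threshold": 0.5, "value": 10.0,
    "left": {"value": 4.0},
    "right": {
        "feature": 1, "threshold": 2.0, "value": 16.0,
        "left": {"value": 12.0},
        "right": {"value": 20.0},
    },
}
bias, c = path_contributions(tree, {0: 1.0, 1: 3.0})
# bias 10.0, feature 0 contributes +6, feature 1 contributes +4 → prediction 20.0
```

TreeSHAP improves on this by averaging over all feature orderings, which removes the path-order bias of the simple version above.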
https://www.mdpi.com/…/Explainable_Artificial_Intelligence_…
MDPI.COM
Applied Sciences, an international, peer-reviewed Open Access journal.
State of Data Science and Machine Learning 2019
Kaggle-State-of-Data-Science-and-Machine-Learning-2019.pdf
PDF
XAI is Good but Can I Trust the Explainer?
https://arxiv.org/abs/1910.02065
ARXIV.ORG
Can I Trust the Explainer? Verifying Post-hoc
Explanatory Methods
For AI systems to garner
widespread public acceptance, we must develop methods capable of explaining the
decisions of black-box models such as neural networks. In this work, we
identify two issues of current explanatory…
Stability of Interpretable Models, from the xai-project.eu people: https://arxiv.org/pdf/1810.09352.pdf
ARXIV.ORG
https://christophm.github.io/interpretable-ml-book/
CHRISTOPHM.GITHUB.IO
Interpretable Machine Learning
Machine learning
algorithms usually operate as black boxes and it is unclear how they derived a
certain decision. This book is a guide for practitioners to make machine
learning decisions interpretable.
Finally, we have XAIES (http://pixelatus.com/home.php), the first platform for Explainable Artificial Intelligence Expert Systems competitions, developed from scratch with our enthusiastic students. You need to log in to create a new XAI competition or to submit your explainable results to our proposed use-cases.
PIXELATUS.COM
the expert platform for
Explainable Artificial Intelligence solutions and systems. We use
well-established models like the versions of the LIME (Local Interpretable
Model-Agnostic Explanations) algorithm, activations…
Timely models, no XAI yet: https://onezero.medium.com/amp/p/f4ec40acdba0
ONEZERO.MEDIUM.COM
Computer Scientists Are Building Algorithms to Tackle COVID-19
Algorithms that can detect infections, differentiate COVID-19 from the common flu, and more
When interpretability is needed (and when it is not): https://arxiv.org/abs/1702.08608
ARXIV.ORG
Towards A Rigorous Science of Interpretable
Machine Learning
As machine learning
systems become ubiquitous, there has been a surge of interest in interpretable
machine learning: systems that provide explanation for their outputs. These
explanations are often used to qualitatively assess…
LAST CONFERENCE BEFORE THE VIRUS (February 2020): https://xaitutorial2020.github.io/
XAITUTORIAL2020.GITHUB.IO
What is explainable AI
(XAI for short) i.e., what are explanations from the various streams of the AI
community (Machine Learning, Logics, Constraint Programming, Diagnostics)? What
are the metrics for explanations?
https://colfaxresearch.com/canonical-stratification-for-non-mathematicians-tda/
COLFAXRESEARCH.COM
Explainable Artificial Intelligence and
Topological Data Analysis
Learn how topological
data analysis with canonical stratification can lead to better explanation and
justification of AI decisions
https://arxiv.org/abs/1903.08510
ARXIV.ORG
Topological Data Analysis in Information Space
Various kinds of data are
routinely represented as discrete probability distributions. Examples include
text documents summarized by histograms of word occurrences and images
represented as histograms of oriented…
Explainable AI in the fight against COVID-19:
MADEWITHML.COM
Made With ML - Share what you've made with ML
Share what you've made with ML.
Topological and non-topological data understanding:
https://opendatascience.com/explainable-ai-from-prediction-to-understanding/
OPENDATASCIENCE.COM
Explainable AI: From Prediction To
Understanding - Open Data Science
Dr. George Cevora
explains why the black box of AI may not always be appropriate and why we need
explainable AI to know how to look for the right answers.
Asking and answering questions! The expert systems are back:
https://arxiv.org/pdf/2001.02478.pdf
ARXIV.ORG
https://www.ayasdi.com/…/artificial-i…/going-beyond-xai-tda/
AYASDI.COM
Beyond Explainability - XAI, Research Areas +
TDA | Ayasdi
It is becoming well
understood that in order to make Artificial Intelligence broadly useful, it is
critical that humans can interact with and have confidence in the algorithms
that are being used. This observation has led to development of the notion of
explainable AI (sometimes called XAI) which wa...