
Welcome to XAIES, the expert platform for explainable AI solutions and systems.

 

LARGE RESOURCE OF CURATED AND UPDATED INFORMATION ON XAI

 

 

Knowledge Graph Embeddings for XAI

https://paperswithcode.com/paper/knowledge-graph-embeddings-and-explainable-ai

ARXIV.ORG


 

 

TECHNOLOGYREVIEW.COM

Our weird behavior during the pandemic is messing with AI models

In the week of April 12-18, the top 10 search terms on Amazon.com were: toilet paper, face mask, hand sanitizer, paper towels, Lysol spray, Clorox wipes, mask, Lysol, masks for germ protection, and N95 mask. People weren’t just searching, they were buying too—and in bulk.

https://www.technologyreview.com/…/covid-pandemic-broken-a…/


 

 

XAI in June 2020 🙂

 

SCIENCEDIRECT.COM

Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI

In the last few years, Artificial Intelligence (AI) has achieved a notable momentum that, if harnessed appropriately, may deliver the best of expectations.

https://www.sciencedirect.com/science/article/pii/S1566253519308103


 

 

http://interpretable-ml.org/icml2020workshop/

ICML WORKSHOP IN VIENNA

 

 

 

OmniTact vs. GelSight sensor

https://bair.berkeley.edu/blog/2020/05/14/omnitact/

The BAIR Blog

BAIR.BERKELEY.EDU

OmniTact: A Multi-Directional High-Resolution Touch Sensor


 

 

Complexity control by gradient descent in deep networks

https://www.nature.com/articles/s41467-020-14663-9

NATURE.COM

Complexity control by gradient descent in deep networks

Understanding the underlying mechanisms behind the successes of deep networks remains a challenge. Here, the authors demonstrate an implicit regularization in training deep networks, showing that the control of complexity during training is hidden within the optimization technique of gradient descent.
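The implicit-regularization claim is easy to see in a toy setting. A quick sketch of my own (not from the paper): gradient descent started from zero on an overparameterized linear regression converges to the minimum-norm interpolating solution, with no explicit penalty term.

```python
import numpy as np

# Illustrative only: 5 data points, 20 parameters (underdetermined system).
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 20))
y = rng.normal(size=5)

w = np.zeros(20)                       # start at zero
lr = 0.01
for _ in range(20000):
    w -= lr * X.T @ (X @ w - y)        # plain gradient descent on squared loss

# Gradient descent from zero stays in the row space of X, so it lands on
# the minimum-norm interpolant, which the pseudoinverse gives directly.
w_min_norm = np.linalg.pinv(X) @ y
print(np.allclose(w, w_min_norm, atol=1e-6))  # → True
```

No regularizer appears in the loss; the "complexity control" is entirely in the choice of optimizer and initialization.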

 


 

Learning with Known Operators (free full text of the paper published in Nature Machine Intelligence):

https://arxiv.org/abs/1907.01992

ARXIV.ORG

Learning with Known Operators reduces Maximum Training Error Bounds

We describe an approach for incorporating prior knowledge into machine learning algorithms. We aim at applications in physics and signal processing in which we know that certain operations must be embedded into the algorithm.

 


 

 

Do we still need Traditional Pattern Recognition, Machine Learning, and Signal Processing in the Age of Deep Learning? Comments are welcome. One answer is here: https://towardsdatascience.com/do-we-still-need-traditional-pattern-recognition-machine-learning-and-signal-processing-in-the-age-9ffe58512ff9


 

 

"Funny" AI story for a Sunday sci read.

https://arxiv.org/abs/1701.08711

ARXIV.ORG

Predicting Auction Price of Vehicle License Plate with Deep Recurrent Neural Network

 

In Chinese societies, superstition is of paramount importance, and vehicle license plates with desirable numbers can fetch very high prices in auctions. Unlike other valuable items, license plates are not allocated an estimated price before auction.

 


 

XAI in Papers with Code, Evaluation chapter:

https://paperswithcode.com/paper/evaluating-explainable-ai-which-algorithmic

Implemented in one code library.

 

PAPERSWITHCODE.COM

Papers with Code - Evaluating Explainable AI: Which Algorithmic Explanations Help Users Predict Model Behavior?


 

 

Visualisation and knowledge discovery from interpretable models (very recent paper!)

https://arxiv.org/abs/2005.03632

ARXIV.ORG

Visualisation and knowledge discovery from interpretable models

Increasing number of sectors which affect human lives, are using Machine Learning (ML) tools. Hence the need for understanding their working mechanism and evaluating their fairness in decision-making, are becoming paramount, ushering in the era of Explainable AI (XAI).

 


 

 

Brain age prediction with XAI!

 

BIORXIV.ORG

Brain age prediction of healthy subjects on anatomic MRI with deep learning : going beyond with an “explainable AI” mindset


Objectives: define a clinically usable preprocessing pipeline for MRI data; predict brain age using various machine learning and deep learning algorithms; caveat against common machine learning traps. Data and Methods: we used 1597 open-access T1-weighted MRI from 24 hospitals.

 


 

Principal Components of XAI (long read, approx. 39 pages, but very informative and up to date):

https://arxiv.org/pdf/2005.01908.pdf

ARXIV.ORG


 

Marketing and introduction again 🙂, from another perspective:

https://towardsdatascience.com/explainable-artificial-intelligence-14944563cc79


 

 

https://cd-make.net/make-explainable-ai/

 

CD-MAKE.NET

CD-MAKE » MAKE-xAI 2020

CD-MAKE Cross Domain Conference for Machine Learning and Knowledge Extraction co-organised with ARES 2020, August 25 – August 28, 2020

 


 

Fairness and machine learning. If you feel like giving feedback on this book:

https://fairmlbook.org/

Fairness and Machine Learning: Limitations and Opportunities, by Solon Barocas, Moritz Hardt, and Arvind Narayanan. This online textbook is an incomplete work in progress; essential chapters are still missing.

 

FAIRMLBOOK.ORG

Fairness and machine learning


 

 

Carnegie Mellon's Event

https://www.cs.cmu.edu/calendar/mon-2020-05-04-1530/software-research-seminar


 

Fridays are good for introducing XAI tools 🙂: Captum, Model Interpretability for PyTorch, https://captum.ai/


 

CAPTUM.AI

Captum · Model Interpretability for PyTorch
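For readers who wonder what attribution tools like Captum actually compute, here is a hand-rolled sketch of integrated gradients on a made-up logistic model (the model, inputs, and names here are mine, purely illustrative; Captum's own API differs):

```python
import numpy as np

# Toy "network": a logistic model with fixed weights.
w = np.array([2.0, -1.0, 0.5])

def f(x):
    return 1.0 / (1.0 + np.exp(-x @ w))       # sigmoid(w . x)

def grad_f(x):
    s = f(x)
    return s * (1.0 - s) * w                  # gradient of f at x

def integrated_gradients(x, baseline, steps=200):
    # Average the gradient along the straight path baseline -> x,
    # then scale by (x - baseline). Attributions sum to f(x) - f(baseline).
    alphas = (np.arange(steps) + 0.5) / steps # midpoint rule
    avg_grad = np.mean([grad_f(baseline + a * (x - baseline)) for a in alphas], axis=0)
    return (x - baseline) * avg_grad

x = np.array([1.0, 2.0, -1.0])
baseline = np.zeros(3)
attr = integrated_gradients(x, baseline)
# Completeness property: attributions sum to the output difference.
print(np.allclose(attr.sum(), f(x) - f(baseline), atol=1e-4))
```

The completeness check at the end is the sanity test I would always run on any attribution method.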


 

 

This paper is practically an XAIES for COVID-19. It extracts IF...THEN rules from the data after the deep neural network classification.

 

https://www.medrxiv.org/content/10.1101/2020.04.24.20078584v1.full.pdf

MEDRXIV.ORG


 

 

 

Today I introduce you to Doctor XAI 🙂!

https://dl.acm.org/doi/abs/10.1145/3351095.3372855

DL.ACM.ORG

Doctor XAI | Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency

Research article (Open Access): Doctor XAI: an ontology-based approach to black-box sequential data classification explanations

 


 

 

LIME has been one of the major tools we use for XAI, but as you know, it is not really stable. This paper proposes an improvement in interpretability and fidelity for LIME.

https://arxiv.org/abs/2004.12277

ARXIV.ORG

An Extension of LIME with Improvement of Interpretability and Fidelity

While deep learning makes significant achievements in Artificial Intelligence (AI), the lack of transparency has limited its broad application in various vertical domains. Explainability is not only a gateway between AI and real world, but also a powerful feature to detect flaw of the models and biases.
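To see the instability the paper targets, here is a minimal LIME-style sketch of my own (not the lime library and not the paper's method): a weighted local linear surrogate fitted around one instance, where different perturbation seeds give different explanations.

```python
import numpy as np

def black_box(X):
    # Stand-in for a trained model: a nonlinear decision function.
    return (np.sin(X[:, 0]) + X[:, 1] ** 2 > 1.0).astype(float)

def lime_like(x0, seed, n=500, width=0.5):
    rng = np.random.default_rng(seed)
    Xp = x0 + rng.normal(scale=width, size=(n, x0.size))      # perturb around x0
    y = black_box(Xp)
    w = np.exp(-np.sum((Xp - x0) ** 2, axis=1) / width ** 2)  # proximity weights
    A = np.hstack([Xp, np.ones((n, 1))])                      # add intercept column
    W = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A * W, y * W[:, 0], rcond=None)
    return coef[:-1]                                          # local feature weights

x0 = np.array([0.5, 0.8])
e1 = lime_like(x0, seed=1)
e2 = lime_like(x0, seed=2)
print(e1, e2)  # similar direction, but not identical: the instability in question
```

Only the random seed changes between the two calls; a stable explainer should return (nearly) the same coefficients.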

 


 

 

A new XAI model from Berkeley, Neural-Backed Decision Trees (NBDTs): https://bair.berkeley.edu/blog/2020/04/23/decisions/

 

My comments are the following: it is possible to extract an expert system from a decision tree. It should then be possible to extract an XAIES from an NBDT.
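The comment above can be sketched in a few lines: given a (toy, hand-built) decision tree, each root-to-leaf path becomes one IF...THEN rule of an expert system. The tree below is made up for illustration.

```python
# A tiny hand-built decision tree as nested dicts (illustrative only).
tree = {
    "feature": "petal_len", "threshold": 2.5,
    "left": {"leaf": "setosa"},
    "right": {
        "feature": "petal_wid", "threshold": 1.7,
        "left": {"leaf": "versicolor"},
        "right": {"leaf": "virginica"},
    },
}

def extract_rules(node, conditions=()):
    # Walk the tree; accumulate the split conditions on the path and
    # emit one IF...THEN rule per leaf.
    if "leaf" in node:
        body = " AND ".join(conditions) or "TRUE"
        return [f"IF {body} THEN class = {node['leaf']}"]
    f, t = node["feature"], node["threshold"]
    return (extract_rules(node["left"], conditions + (f"{f} <= {t}",))
            + extract_rules(node["right"], conditions + (f"{f} > {t}",)))

for rule in extract_rules(tree):
    print(rule)
# IF petal_len <= 2.5 THEN class = setosa
# IF petal_len > 2.5 AND petal_wid <= 1.7 THEN class = versicolor
# IF petal_len > 2.5 AND petal_wid > 1.7 THEN class = virginica
```

Applied to the induced tree of an NBDT, the same walk would give the rule base of an XAIES.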

The BAIR Blog

 


BAIR.BERKELEY.EDU

Making Decision Trees Accurate Again: Explaining What Explainable AI Did Not


 

Yesterday was DNA Day! But can you see DNA? Yes, there is an XAI for DNA, and a delicious one made of strawberries. And you can do it at home: https://unlockinglifescode.org/node/653


 

Saturday is for academic fun with Stanford University. Enjoy!

 

https://events.stanford.edu/events/877/87701/

April 25, 2020, 8:00 PM. Zoom.

 


EVENTS.STANFORD.EDU

Center for South Asia Comedy Night


 

 

Back to signal processing? Not quite, still another XAI perspective: sparse representations.

 

https://www.notion.so/Explainable-AI-Sparse-Representations-and-Signals-fedf1522aff4415d8f156e1f94bb80c5

 

NOTION.SO

Explainable AI, Sparse Representations, and Signals

Distributed knowledge is hard to explain

 


 

Facial recognition with face masks! Today's new challenge. It is not easy to solve in the general case, but it works for individuals and their phones. XAI is needed to know what the network learns when recognizing a masked face.

https://www.biometricupdate.com/202004/vinai-biometric-recognition-technology-for-phones-works-with-protective-face-masks

 

BIOMETRICUPDATE.COM

VinAI biometric recognition technology for phones works with protective face masks

VinAI Research has developed a method of facial recognition technology that can accurately identify individuals who wear surgical face masks, the Vingroup-funded research lab announced. Claiming it…


 

I am interested in this Special Issue. There is still enough time for submission.

 

https://www.journals.elsevier.com/signal-processing-image-communication/call-for-papers/emerging-multimedia-technologies?fbclid=IwAR0LIQpIPLBnuWrNCg1XCVFiyhvfE0yWesPr8t9nSV1C1iPciDHFbJYu7aA


 

JOURNALS.ELSEVIER.COM

Signal Processing: Image Communication

Recently, the multimedia landscape underwent a revolution around several technological innovations. Although these new innovations are not...

 


 

One more reason for XAI: to understand AI-powered robots.

FACEBOOK shut down an artificial intelligence experiment after two robots began talking in a language only they understood. The “chatbots” Alice and Bob modified English to make it easier for them to talk to each other.

THESUN.CO.UK

Facebook shuts off AI experiment after two robots begin speaking in their OWN language only they understand

 


 

A bit of XAI history!

https://arxiv.org/abs/2003.07520

ARXIV.ORG

Foundations of Explainable Knowledge-Enabled Systems

Explainability has been an important goal since the early days of Artificial Intelligence. Several approaches for producing explanations have been developed. However, many of these approaches were tightly coupled with the capabilities of the artificial intelligence systems at the time.

 


 

XAI research jobs

JOBBNORGE.NO

4 PhD positions: Explainable Artificial Intelligence (XAI) for critical applications (184834) | NTNU - Norwegian University of Science and Technology

Job title: 4 PhD positions: Explainable Artificial Intelligence (XAI) for critical applications (184834), Employer: NTNU - Norwegian University of Science and Technology, Deadline: Monday, April 27, 2020


 

 

CIRCUITS for XAI, the study of the connections between neurons.

https://distill.pub/2020/circuits/zoom-in/

By studying the connections between neurons, we can find meaningful algorithms in the weights of neural networks.

 

DISTILL.PUB

Zoom In: An Introduction to Circuits


 

Alexander Binder: Explaining Deep Learning for Identifying Structures and Biases in Computer Vision

 

https://interpretablevision.github.io/slide/iccv19_binder_slide.pdf

INTERPRETABLEVISION.GITHUB.IO


 

ICASSP, an A-class conference, for FREE!

https://2020.ieeeicassp.org/

You can register here: https://cmsworkshops.com/ICASSP2020/Registration.asp

2020.IEEEICASSP.ORG

ICASSP 2020

Signal Processing: from Sensors to Information, at the Heart of Data Science

 


 

Master's thesis in Computer Science at the KTH Royal Institute of Technology.

Title: Explainable AI - Visualization of Neuron Functionality in Recurrent Neural Networks for Text Prediction

Link: http://www.diva-portal.org/smash/get/diva2:1394892/FULLTEXT01.pdf

DIVA-PORTAL.ORG


 

Three more calls for papers.

 

1.       https://www.mdpi.com/si/BDCC/image_detection

2.       https://www.mdpi.com/journal/mathematics/special_issues/New_Trends_Machine_Learning_Theory_Practice

3.       https://www.mdpi.com/journal/energies/special_issues/Machine_Learning_and_Deep_Learning_for_Energy_Systems

Big Data and Cognitive Computing, an international, peer-reviewed Open Access journal.

 

MDPI.COM

Big Data and Cognitive Computing


 

A.I Theory of Making a Horse from Two Hundred Rabbits

https://www.ameinfo.com/…/ai-theory-human-consciousness-int…


 

 

XAI that can be used for COVID-19.

 

Take a look at the example given during the ICML conference by Shrikumar. Let's suppose you have already trained a model with DNA mutations causing diseases.

 

Quantization was the first thing I learned in digital signal processing. Now, in deep learning, quantization means converting from floating point to fixed-point integer. Training is faster, but XAI through surrogate models like LIME could also become more stable despite noise.

 

https://towardsdatascience.com/speeding-up-deep-learning-with-quantization-3fe3538cbb9

Last week, Facebook open sourced their matrix multiplication library, which you can read about here. Readers may quickly find the…

 

TOWARDSDATASCIENCE.COM

Speeding up Deep Learning with Quantization
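A minimal sketch of the float-to-fixed-point idea (my own illustration; real frameworks differ in the details of scale and zero-point handling):

```python
import numpy as np

def quantize_int8(x):
    # Uniform affine quantization: map [min, max] onto the 256 int8 levels.
    lo, hi = x.min(), x.max()
    scale = (hi - lo) / 255.0
    zero_point = np.round(-128 - lo / scale)
    q = np.clip(np.round(x / scale + zero_point), -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

w = np.random.default_rng(0).normal(size=100).astype(np.float32)
q, s, z = quantize_int8(w)
w_hat = dequantize(q, s, z)
# Round-trip error is bounded by one quantization step.
print(np.max(np.abs(w - w_hat)) <= s)  # → True
```

For a LIME-style surrogate, this bounded, deterministic rounding noise is arguably easier to handle than floating-point variability across hardware.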


 

XAI for TAI?

 

https://arxiv.org/abs/1912.00747

ARXIV.ORG

Defining and Unpacking Transformative AI

Recently the concept of transformative AI (TAI) has begun to receive attention in the AI policy space. TAI is often framed as an alternative formulation to notions of strong AI (e.g. artificial general intelligence or superintelligence).

 


 

From Elsevier's journal Artificial Intelligence:

1.       Paper: https://www.sciencedirect.com/science/article/pii/S0004370218305988?via%3Dihub

2.       Special issue: https://www.journals.elsevier.com/artificial-intelligence/call-for-papers/special-issue-on-explainable-artificial-intelligence

SCIENCEDIRECT.COM

Explanation in artificial intelligence: Insights from the social sciences


There has been a recent resurgence in the area of explainable artificial intelligence as researchers and practitioners seek to provide more transparency.

 


 

Search with context, not keywords!

 

https://covid19.mendel.ai/

COVID19.MENDEL.AI

COVID-19 Scholarly Articles Search


 


 

STAGES OF AI EXPLAINABILITY

 

https://towardsdatascience.com/the-how-of-explainable-ai-explainable-modelling-55c8c43d7bed

In the first part of our overview of the How of Explainable AI, we looked a pre-modelling explainability. However, the true scope of…

 

TOWARDSDATASCIENCE.COM

The How of Explainable AI: Explainable Modelling


 

Towards Medical XAI

https://www.researchgate.net/publication/334534363_A_Survey_on_Explainable_Artificial_Intelligence_XAI_Towards_Medical_XAI

PDF | Recently, artificial intelligence, especially machine learning has demonstrated remarkable performances in many tasks, from image processing to... | Find, read and cite all the research you need on ResearchGate

 

 

RESEARCHGATE.NET

(PDF) A Survey on Explainable Artificial Intelligence (XAI): Towards Medical XAI


 

Today, some tech updates:

I am still a big fan of Colab and TensorFlow, but it is good to know that we have open-source alternatives like MindSpore for understanding the internal algorithms and mechanisms: https://www.techradar.com/news/huawei-open-sources-tensorflow-competitor-mindspore?fbclid=IwAR0-sodYYcvuFf1n39FAC2gbA-dWgsi6m15CHtaKBpzrwiQ003o9930A0eM

 

There is also Colab Pro. It may be a good thing, but it is certainly not free: https://colab.research.google.com/notebooks/pro.ipynb#scrollTo=SKQ4bH7qMGrA

 

It would be nice if we could have access to Colab Pro for research and science purposes.


Nice and easy reading: https://medium.com/luminovo/ai-and-the-question-of-explainability-9778ef70df7a


 

COWS GO TO COLLEGE? I HEARD THAT A COW WENT TO HARVARD…

 

https://www.thenational.ae/arts-culture/google-s-new-chatbot-meena-is-supposed-to-be-the-best-one-yet-but-how-human-is-it-really-1.981652

Meena can supposedly talk about anything and can even make up bad jokes

 

 


THENATIONAL.AE

Google's new chatbot Meena is supposed to be the best one yet, but how human is it really?


 

 

The most recent AI debate, more to come...: https://www.technologyreview.com/s/615416/ai-debate-gary-marcus-danny-lange/?fbclid=IwAR2r9MIPphROXDQpSJ95JNgIC6ipz2DrKDp7EKL01v_acHKLErYNUOkChgE

The field is in disagreement about where it should go and why.

 

 


TECHNOLOGYREVIEW.COM

A debate between AI experts shows a battle over the technology’s future


 

Neural Network Receptive Field, a step forward for XAI in CNNs:

https://www.learnopencv.com/cnn-receptive-field-computation-using-backprop/

LEARNOPENCV.COM

CNN Receptive Field Computation Using Backprop | Learn OpenCV


How to understand which area on the input image is visible for the output pixel of the neural network. PyTorch code is shared.
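The receptive field can also be computed in closed form with a simple recursion; here is a sketch of mine, with a made-up layer stack (not the network from the article):

```python
def receptive_field(layers):
    # For each conv/pool layer with kernel k and stride s:
    #   r    <- r + (k - 1) * jump   (the field widens by k-1 input steps)
    #   jump <- jump * s             (stride multiplies output-pixel spacing)
    r, jump = 1, 1                   # one output pixel, unit spacing at the start
    for k, s in layers:
        r += (k - 1) * jump
        jump *= s
    return r

# Example stack: conv3x3/s1 -> conv3x3/s1 -> maxpool2x2/s2 -> conv3x3/s1
print(receptive_field([(3, 1), (3, 1), (2, 2), (3, 1)]))  # → 10
```

The backprop trick in the article empirically recovers the same (effective) region that this analytic recursion predicts.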

 


 

For sure interesting:

RSVP for COVID-19 and AI: A Virtual Conference.

 

 


HAI.STANFORD.EDU

COVID-19 and AI: A Virtual Conference


 

Just to let you know, I've created a virtual assistant for XAI. It is still in the testing phase, but soon I will upload it to the web page.


 

Soon after our XAIES web page (see pixelatus), Kaggle reinforced its explainability section. Could it be a coincidence? https://www.kaggle.com/learn/machine-learning-explainability

KAGGLE.COM

Learn Machine Learning Explainability Tutorials

Extract human-understandable insights from any machine learning model.

 


 

Free paper from the prestigious journal Nature, but only via the link below: https://www.nature.com/articles/s42256-019-0138-9.epdf?shared_access_token=RCYPTVkiECUmc0CccSMgXtRgN0jAjWel9jnR3ZoTv0O81kV8DqPb2VXSseRmof0Pl8YSOZy4FHz5vMc3xsxcX6uT10EzEoWo7B-nZQAHJJvBYhQJTT1LnJmpsa48nlgUWrMkThFrEIvZstjQ7Xdc5g%3D%3D

Tree-based machine learning models are widely used in domains such as healthcare, finance and public services. The authors present an explanation method for trees that enables the computation of optimal local explanations for individual predictions, and demonstrate their method on three medical data...

 

 


NATURE.COM

From local explanations to global understanding with explainable AI for trees
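What such a tree explainer computes are Shapley values. Here is a brute-force sketch of mine on a made-up toy model; it is exponential in the number of features, while methods like the one in the paper obtain the same values efficiently for trees.

```python
from itertools import combinations
from math import factorial

def shapley_values(value, n):
    """Exact Shapley values: value(S) is the model output when only the
    features in frozenset S are 'present'. Brute force over all coalitions."""
    phi = [0.0] * n
    for i in range(n):
        for size in range(n):
            for S in combinations([j for j in range(n) if j != i], size):
                S = frozenset(S)
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[i] += weight * (value(S | {i}) - value(S))
    return phi

# Toy model: linear in features 0 and 1, plus one interaction term.
x = [1.0, 2.0, 3.0]
def value(S):
    out = 0.0
    if 0 in S: out += 2 * x[0]
    if 1 in S: out += x[1]
    if 0 in S and 1 in S: out += 1.0   # interaction, split between 0 and 1
    return out

phi = shapley_values(value, 3)
print([round(p, 6) for p in phi])  # → [2.5, 2.5, 0.0]
```

The interaction credit is shared equally, the unused feature gets zero, and the attributions sum exactly to the full model output: the "local accuracy" property the paper builds on.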


 

https://www.mdpi.com/…/Explainable_Artificial_Intelligence_…

Applied Sciences, an international, peer-reviewed Open Access journal.

 

 

MDPI.COM

Applied Sciences


 

State of Data Science and Machine Learning 2019

Kaggle-State-of-Data-Science-and-Machine-Learning-2019.pdf


 

XAI is Good but Can I Trust the Explainer?

https://arxiv.org/abs/1910.02065

ARXIV.ORG

Can I Trust the Explainer? Verifying Post-hoc Explanatory Methods


For AI systems to garner widespread public acceptance, we must develop methods capable of explaining the decisions of black-box models such as neural networks. In this work, we identify two issues of current explanatory methods. First, we show that two prevalent perspectives on explanations --- feat...

 


 

Stability of Interpretable Models from the xai-project.eu people: https://arxiv.org/pdf/1810.09352.pdf

ARXIV.ORG


 

https://christophm.github.io/interpretable-ml-book/

Machine learning algorithms usually operate as black boxes and it is unclear how they derived a certain decision. This book is a guide for practitioners to make machine learning decisions interpretable.

 

CHRISTOPHM.GITHUB.IO

Interpretable Machine Learning


 

Finally, we have XAIES (http://pixelatus.com/home.php), the first platform for Explainable Artificial Intelligence Expert Systems competitions, developed from scratch with our enthusiastic students. You need to log in to create a new XAI competition or to submit your explainable results to our proposed use cases.

PIXELATUS.COM


the expert platform for Explainable Artificial Intelligence solutions and systems. We use well-established models like the versions of the LIME (Local Interpretable Model-Agnostic Explanations) algorithm, activations maps, deep Taylor decompositions, etc. and new models based on computational topolo...

 


 

Timely models, no XAI yet: https://onezero.medium.com/amp/p/f4ec40acdba0

Algorithms that can detect infections, differentiate COVID-19 from the common flu, and more

 

 


ONEZERO.MEDIUM.COM

Computer Scientists Are Building Algorithms to Tackle COVID-19


When interpretability is needed (and when it is not): https://arxiv.org/abs/1702.08608

ARXIV.ORG

Towards A Rigorous Science of Interpretable Machine Learning


As machine learning systems become ubiquitous, there has been a surge of interest in interpretable machine learning: systems that provide explanation for their outputs. These explanations are often used to qualitatively assess other criteria such as safety or non-discrimination. However, despite the...

 


 

LAST CONFERENCE BEFORE THE VIRUS (February 2020): https://xaitutorial2020.github.io/

What is explainable AI (XAI for short) i.e., what are explanations from the various streams of the AI community (Machine Learning, Logics, Constraint Programming, Diagnostics)? What are the metrics for explanations?

 

XAITUTORIAL2020.GITHUB.IO

Explainable AI


 

https://colfaxresearch.com/canonical-stratification-for-non-mathematicians-tda/

Learn how topological data analysis with canonical stratification can lead to better explanation and justification of AI decisions

 

 

COLFAXRESEARCH.COM

Explainable Artificial Intelligence and Topological Data Analysis


 

https://arxiv.org/abs/1903.08510

ARXIV.ORG

Topological Data Analysis in Information Space


Various kinds of data are routinely represented as discrete probability distributions. Examples include text documents summarized by histograms of word occurrences and images represented as histograms of oriented gradients. Viewing a discrete probability distribution as a point in the standard simpl...

 


 


 

 

 

Explainable AI in the fight against COVID-19:

Share what you've made with ML.

 

 

MADEWITHML.COM

Made With ML - Share what you've made with ML


 

Topological and non-topological data understanding:

https://opendatascience.com/explainable-ai-from-prediction-to-understanding/

Dr. George Cevora explains why the black box of AI may not always be appropriate and why we need explainable AI to know how to look for the right answers.

 

 

OPENDATASCIENCE.COM

Explainable AI: From Prediction To Understanding - Open Data Science


 

Asking and answering questions! The expert systems are back:

 

https://arxiv.org/pdf/2001.02478.pdf

ARXIV.ORG


 

https://www.ayasdi.com/…/artificial-i…/going-beyond-xai-tda/

It is becoming well understood that in order to make Artificial Intelligence broadly useful, it is critical that humans can interact with and have confidence in the algorithms that are being used. This observation has led to development of the notion of explainable AI (sometimes called XAI) which wa...

 

 

AYASDI.COM

Beyond Explainability - XAI, Research Areas + TDA | Ayasdi
