Arjun Kumar and Tim Oates

Abstract. Neural networks have attracted significant interest in recent years due to their exceptional performance in domains ranging from natural language processing to image identification and classification. Modern deep neural networks achieve state-of-the-art results on complex tasks such as epileptic seizure detection [1] and time series classification [2]. However, the internal structure of these networks, in terms of learned representations, remains opaque. This research takes a first step towards the long-term goal of constructing a bidirectional connection between raw input data and symbolic representations. We examined whether a denoising autoencoder can internally discover correlated principal features from input images and their symbolic representations, and whether those features can be used to reconstruct one from the other. Our results indicate that training on symbolic representations in addition to the raw inputs yields better reconstructions; the network was able to reconstruct the symbolic representations from the input and vice versa.



Deep neural networks are a class of artificial neural network consisting of many layers of hidden units between the input and output layers. These hidden layers capture a complex hierarchy of input features [3]. By stacking multiple hidden layers, deep neural networks extract input features at multiple levels of abstraction, allowing them to learn complex mappings between inputs and their expected outputs [4]. Recent advances in the architecture and training of deep neural networks have led them to replace state-of-the-art systems in many fields. However, although deep neural networks produce world-class results in many domains, the internal representations they learn remain opaque. We address this problem by connecting symbolic representations of the input to the network in order to understand and reason about what it has learned.
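The layered feature extraction described above can be illustrated with a minimal sketch, not the authors' architecture: each hidden layer re-represents the output of the layer below it, so intermediate activations form progressively more abstract encodings of the input. The layer widths here are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(0.0, z)

# Hypothetical widths: a 16-dim input, two hidden layers, a 3-dim output.
sizes = [16, 32, 8, 3]
weights = [rng.normal(0, 0.1, (a, b)) for a, b in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(b) for b in sizes[1:]]

def forward(x):
    """Return the activations at every level of the hierarchy."""
    activations = [x]
    for W, b in zip(weights, biases):
        x = relu(x @ W + b)       # each layer transforms the previous one
        activations.append(x)
    return activations

# A batch of 5 random inputs yields one representation per layer.
acts = forward(rng.normal(size=(5, 16)))
```

Inspecting the intermediate entries of `acts` is exactly the kind of access one needs in order to relate learned representations to symbols.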

An autoencoder is an artificial neural network with identical input and output layers and one or more hidden layers that act as limited-capacity channels, forcing the network to abstract complex features of the input space [5]. …
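The denoising variant of this idea can be sketched as follows. This is a toy illustration under stated assumptions, not the paper's network: the data are samples from four hypothetical 8-bit prototypes, the bottleneck has 4 hidden units, and training corrupts the input (here by randomly zeroing 20% of entries) while scoring the reconstruction against the clean original.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: 200 samples drawn from four fixed 8-bit prototypes,
# so a small bottleneck can in principle encode which prototype was seen.
prototypes = np.array([[1, 1, 0, 0, 0, 0, 1, 1],
                       [0, 0, 1, 1, 1, 1, 0, 0],
                       [1, 0, 1, 0, 1, 0, 1, 0],
                       [0, 1, 0, 1, 0, 1, 0, 1]], dtype=float)
X = prototypes[rng.integers(0, 4, size=200)]

n_in, n_hidden = 8, 4  # 4 hidden units: the limited-capacity channel
W1 = rng.normal(0, 0.1, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_hidden, n_in)); b2 = np.zeros(n_in)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def reconstruct(inputs):
    return sigmoid(sigmoid(inputs @ W1 + b1) @ W2 + b2)

initial_mse = np.mean((reconstruct(X) - X) ** 2)

lr = 1.0
for _ in range(3000):
    # Denoising: corrupt the input, but score against the clean original.
    noisy = X * (rng.random(X.shape) > 0.2)
    h = sigmoid(noisy @ W1 + b1)              # bottleneck code
    out = sigmoid(h @ W2 + b2)                # reconstruction
    # Plain backpropagation of the squared reconstruction error.
    d_out = (out - X) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * noisy.T @ d_h / len(X); b1 -= lr * d_h.mean(axis=0)

final_mse = np.mean((reconstruct(X) - X) ** 2)
```

After training, `final_mse` is well below `initial_mse`: the hidden code has captured the principal structure of the data despite the corruption, which is the property the paper exploits to tie raw inputs and symbolic representations to a shared code.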
