BNN


#1

Hey Rob! Just reaching out in response to your comment: I would love to be involved. Please let me know how I can help out or what the next steps are :slight_smile:

Thanks!

Ethan


#2

Hi Ethan, I think we’ll do this as open source on GitHub. @Mahdi.Jelodari is super excited to work on it from our side. I suggest we discuss implementation decisions here in this thread for now, and we can make a new forum here if the discussion starts getting multi-threaded!

So, what do you think would be a good place to start?


#3

Awesome! That sounds like a good plan, and I’m excited to work on it as well.

I work with Dr. Minje Kim at Indiana University, who was one of the first to research binary neural networks. We have our code in Matlab, and perhaps a good first step would be to reimplement it on the Reconfigure.io platform. I’ll ask him if he’s OK with that.

I think that if we do a few examples, such as implementations of papers, we’ll probably recognize which parts would fit well in a library. It would also be nice, as a user, to see some very fleshed-out examples for specific tasks, but also to have a library for some general-purpose networks.

Does that sound like a good initial direction?


#4

#5

#6

That sounds great! Would you mind if I make this a public conversation so @Mahdi.Jelodari can join in and other folks can track it?


#7

#8

Hi Ethan, yes, I think it makes a lot of sense to start with a high-level model in either Python or Matlab and identify the requirements step by step.

I also have a network implemented in the log domain with a 5-bit data representation. The model is in Python and it currently supports sparsity as well (Ack: USC-DNN group). The approach significantly benefits training time (what would be interesting to see here is whether cross-training is possible for BNNs).
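To make the representation concrete, here is a rough Python/NumPy sketch of the general log-domain idea (just an illustration; the bit split and exponent range below are placeholders, not the actual model):

```python
import numpy as np

def log_quantize(w, bits=5, min_exp=-15):
    """Quantize values to sign * power-of-two (log-domain) form.
    Assumes 1 sign bit plus a clipped exponent field, so multiplications
    reduce to shifts in hardware."""
    sign = np.where(w >= 0, 1.0, -1.0)
    mag = np.maximum(np.abs(w), 2.0 ** min_exp)      # avoid log2(0)
    max_exp = min_exp + 2 ** (bits - 1) - 1          # exponent range for the remaining bits
    exp = np.clip(np.round(np.log2(mag)), min_exp, max_exp)
    return sign * (2.0 ** exp)
```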

Here is a list of functions that would initially be required in this library:

  • Layer of neurons (incl. widths, activations, neuron types) implementable using 2D/3D arrays.
  • Fit/training mechanism (a parameterizable iterative loop with respect to the number of epochs and the batch size).
  • Connectivity patterns between layers and neurons.
  • Data manipulation functions including alignment etc.

Feel free to extend the list; a rough skeleton is sketched below.
Obviously NNs are dataflow-friendly, so I’m looking forward to a very efficient acceleration app (http://ieeexplore.ieee.org/document/7551407/).
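As a concrete starting point, here is a minimal Python/NumPy skeleton of those pieces (class and function names and shapes are placeholders, not a committed API):

```python
import numpy as np

def sign_activation(x):
    """Binary activation: map values to {-1, +1} (zero treated as +1)."""
    return np.where(x >= 0, 1.0, -1.0)

class DenseLayer:
    """A layer of neurons: width, activation and weights held as a 2D array."""
    def __init__(self, in_width, out_width, activation=sign_activation):
        # Real-valued master weights; binarized on the fly in forward().
        self.weights = 0.1 * np.random.randn(in_width, out_width)
        self.activation = activation

    def forward(self, x):
        return self.activation(x @ np.sign(self.weights))

def predict(layers, x):
    """Run a batch through a fully connected stack (dense connectivity pattern)."""
    for layer in layers:
        x = layer.forward(x)
    return x

def fit(layers, x, y, update_fn, epochs=10, batch_size=32):
    """Training loop parameterized by the number of epochs and the batch size.
    update_fn(layers, xb, yb) is a placeholder for whatever update rule we adopt."""
    for _ in range(epochs):
        for start in range(0, len(x), batch_size):
            update_fn(layers, x[start:start + batch_size], y[start:start + batch_size])
```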


#9

It might make sense to start with a quick-and-dirty implementation on reconfigure.io before delving into all the potential options. You may well find issues, missing functionality, or optimisations we need to address, and front-loading those would help get them into our engineering pipeline.


#10

That’s a great list! I like Rob’s idea of the quick-and-dirty implementation. When will the platform be open so I can start working on it and getting familiar with the tools?


#11

#12

@peterseo, have a look at http://docs.reconfigure.io/, which explains how you can get access to the platform. Regarding the list, I’m working on a simple implementation so that we can think about extending it…


#13

Ethan, you should now have received an onboarding mail from @josh.bohde for platform access - do drop any issues you have getting set up into the Early Access Feedback category.


#14

Thank you! I received the onboarding mail; I’m excited to use the platform. :smile:


#15

Very interested in Neural Network Binarization, along with network compression and reduction to lower bit depths.


#16

@peterseo @Folknology I’ve initiated the following open repo on GitHub to implement a reference design and capture the features we need for implementing a fully parameterisable BNN on FPGA-based clouds. Feel free to contribute :slight_smile:

I think the inference function is probably the least complicated one (vs. BP) to test in the FPGA domain. However, I have already got BP partially implemented and had to revise the associated types. I’m currently reworking the implementation against reco check to make sure that the design is compatible with our compiler.
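To give a feel for why inference is the easy part, here is a small Python/NumPy illustration of the XNOR + popcount dot product that binarized inference reduces to (not code from the repo, just a sketch):

```python
import numpy as np

def pack_pm1(v):
    """Pack a {-1, +1} vector into bits (1 for +1, 0 for -1)."""
    return np.packbits((v > 0).astype(np.uint8))

def xnor_popcount_dot(a, b):
    """Dot product of two {-1, +1} vectors via XNOR + popcount:
    dot = 2 * (number of matching positions) - length."""
    length = len(a)
    xnor = np.bitwise_not(np.bitwise_xor(pack_pm1(a), pack_pm1(b)))
    matches = int(np.unpackbits(xnor)[:length].sum())   # ignore byte-padding bits
    return 2 * matches - length

# Sanity check against the floating-point dot product
a = np.random.choice([-1.0, 1.0], size=37)
b = np.random.choice([-1.0, 1.0], size=37)
assert xnor_popcount_dot(a, b) == int(a @ b)
```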


#17

When @Mahdi.Jelodari and I discussed BNNs, another important issue came up: binarization of inputs and outputs. For example, if we want to process images in the RGB channels, we have to convert values from 0-255 into a 0/1 format! What’s the best routine for doing this while losing the least data? In this paper, fixed-point and hash codes are mentioned.
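One straightforward option along the fixed-point lines (just a sketch; the function name is hypothetical) is to expand each 8-bit channel into its bit-planes, which loses nothing but multiplies the number of binary inputs by eight:

```python
import numpy as np

def bitplane_encode(img_uint8):
    """Expand an 8-bit image of shape (H, W, C) into binary bit-planes.
    Lossless: each 0-255 value becomes eight {0, 1} planes (LSB first)."""
    planes = [(img_uint8 >> b) & 1 for b in range(8)]
    return np.stack(planes, axis=-1).astype(np.uint8)   # shape (H, W, C, 8)

# Example: a 2x2 RGB image; reconstruct to confirm nothing was lost
img = np.random.randint(0, 256, size=(2, 2, 3), dtype=np.uint8)
bits = bitplane_encode(img)
restored = (bits * (2 ** np.arange(8, dtype=np.uint16))).sum(axis=-1).astype(np.uint8)
assert np.array_equal(restored, img)
```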


#18

My thoughts on using commercial Keras/TF frameworks for implementing BNNs on FPGAs:

https://github.com/ReconfigureIO/brain/issues/1

Apparently the binary op implementation in TF suffers from accuracy loss on CPUs:

https://github.com/tensorflow/tensorflow/issues/1592
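For reference, here is a minimal sketch of how a binarized dense layer is usually expressed in TF/Keras with a straight-through estimator (the standard BinaryNet-style trick; names are hypothetical and this is not code from either issue above):

```python
import tensorflow as tf

@tf.custom_gradient
def binarize_ste(x):
    """Sign binarization with a straight-through estimator:
    forward pass uses sign(x), backward pass lets gradients through where |x| <= 1."""
    out = tf.where(x >= 0, tf.ones_like(x), -tf.ones_like(x))
    def grad(dy):
        return dy * tf.cast(tf.abs(x) <= 1.0, dy.dtype)
    return out, grad

class BinaryDense(tf.keras.layers.Layer):
    """Dense layer with binarized weights and activations (hypothetical name)."""
    def __init__(self, units, **kwargs):
        super().__init__(**kwargs)
        self.units = units

    def build(self, input_shape):
        self.w = self.add_weight(shape=(int(input_shape[-1]), self.units),
                                 initializer="glorot_uniform", trainable=True)

    def call(self, inputs):
        return binarize_ste(tf.matmul(inputs, binarize_ste(self.w)))
```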