Mihailo Isakov

4th year PhD student in the
Adaptive and Secure Computing Systems (ASCS) Lab

Hi, I'm Mihailo, a PhD student working at the intersection of computer architecture, machine learning and security. My advisor is Professor Michel Kinsy, who leads the Adaptive and Secure Computing Systems (ASCS) group here at BU.

What do I do?

Here are some of my research projects:

Enabling hardware to think

While hardware can accelerate machine learning (ML), using ML to accelerate hardware is much harder: the majority of ML algorithms cannot run (let alone train) at the speed of hardware. I explore how we can reduce deep neural network (DNN) model sizes through sparsity [4, 17] and quantization ahead of training. Small, efficient models that can be baked on-chip may allow us to reason about the microarchitecture at runtime, so that we can increase performance, reduce power, and adapt to changing tasks, environments, and goals.

This is also the topic of my PhD thesis!
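To give a flavor of what "sparsity and quantization ahead of training" means, here is a minimal numpy sketch (my own illustration, not my actual research code): a layer's connectivity mask is fixed before training ever starts, and the surviving weights are quantized to int8 so the whole model can fit in on-chip memory.

```python
import numpy as np

rng = np.random.default_rng(0)

def a_priori_sparse_layer(n_in, n_out, density=0.1):
    """Dense weight matrix plus a binary mask chosen *before* training;
    only the masked-in weights are ever stored or updated."""
    w = rng.standard_normal((n_out, n_in)) * np.sqrt(2.0 / n_in)
    mask = rng.random((n_out, n_in)) < density
    return w * mask, mask

def quantize_int8(w):
    """Symmetric linear quantization of the weights to int8."""
    scale = np.abs(w).max() / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale

w, mask = a_priori_sparse_layer(64, 32, density=0.1)
q, scale = quantize_int8(w)
# Roughly 10% of the connections survive, each stored in a single byte;
# dequantizing with q * scale recovers the weights to within one step.
```

With ~10x sparsity and 8-bit weights, the layer above needs roughly 40x less storage than its dense fp32 counterpart, which is what makes baking such models on-chip plausible.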

Securing DNNs on the edge

DNN models can require tons of data, compute power, and researcher-hours to train. Once we deploy them in the field, whether on self-driving cars, voice assistants, or suicidal robots, it is easy for attackers to steal these models (see [1]). In [2], we propose the Trusted Inference Engine (TIE), a root-of-trust-based secure DNN accelerator that can run these models without risking exposure of the underlying model.

Co-designing hardware and ML

I also explore how we can modify DNN models to better fit the hardware. In [4] I propose ClosNets, which replace dense linear layers with three sparse layers arranged in a Clos topology. The insight here is that as long as you maintain full connectivity and provide enough paths between inputs and outputs, networks can still train well, but with 5-10x fewer connections. ClosNets also map to hardware very well: since we know the topology ahead of time, they don't suffer from the problems of ordinary sparse networks (i.e., having to store weight indices, non-uniform memory access patterns, unbalanced computation, etc.).
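The Clos construction is easy to see in code. The sketch below (my own toy illustration, not the ClosNets implementation) builds the three connectivity masks for a Clos(m, n, r) replacement of one dense layer, then checks that every input can still reach every output while using about 10x fewer connections:

```python
import numpy as np

def clos_masks(n, m, r):
    """Boolean connectivity masks for three sparse layers arranged in a
    Clos(m, n, r) topology, replacing one dense (r*n x r*n) layer.
    Stage 1: r ingress blocks, each connecting n inputs to m middle ports.
    Stage 2: m middle crossbars, each fully connecting the r groups.
    Stage 3: r egress blocks, each connecting m middle ports to n outputs."""
    # Stage 1: block-diagonal; output index = g*m + s (group g, switch s).
    s1 = np.kron(np.eye(r, dtype=bool), np.ones((m, n), dtype=bool))
    # Stage 2: output (s, g') is driven by input (g, s) for every g and g',
    # i.e. connected iff the middle-switch index s matches.
    s2 = np.zeros((m * r, r * m), dtype=bool)
    for s in range(m):
        for g in range(r):
            s2[s * r:(s + 1) * r, g * m + s] = True
    # Stage 3: egress group g gathers port g of every middle switch.
    s3 = np.zeros((r * n, m * r), dtype=bool)
    for g in range(r):
        s3[g * n:(g + 1) * n, np.arange(m) * r + g] = True
    return s1, s2, s3

n, m, r = 16, 8, 16                     # a 256-input, 256-output layer
s1, s2, s3 = clos_masks(n, m, r)
reach = s3.astype(int) @ s2.astype(int) @ s1.astype(int)
dense_edges = (r * n) ** 2              # 65536 weights in the dense layer
clos_edges = int(s1.sum() + s2.sum() + s3.sum())  # 6144 weights in total
assert (reach > 0).all()                # full input-to-output connectivity
```

Since the masks are known at design time, the three stages can be laid out as dense block matrix multiplies in hardware, with none of the index storage or irregular memory traffic of unstructured sparsity.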

Currently, I'm trying to provide theoretical answers to which topology gives the best accuracy-to-memory ratio on average for any task. In NeuroFabric [ADD REF], we claim that parallel butterfly topologies give the best bang for the buck, as long as you initialize them well.

Building open-source processors

I am a part of the BRISC-V team here at the ASCS lab. We are building an open-source manycore system-on-chip: CPUs, caches, NoCs, and all. At the moment, I am building an out-of-order, multiple-issue RISC-V processor with branch speculation and register renaming, which will hopefully become the flagship of the BRISC-V project. All of our RTL is Verilog-2001, so feel free to use it!

Building Teaching Tools

For computer architecture and computer organization classes, we teach the RISC-V architecture. For use in class, we have built an in-browser, step-by-step RISC-V compiler and simulator, so that students can get used to writing and debugging assembly. Feel free to try it!

Training Swarms of DNNs

Training DNNs on clusters of machines is difficult, partly because the network is far slower than the compute units, and partly because of the straggler problem. When synchronizing updates between worker nodes, we can either wait until everyone gets everyone else's updates, which takes time, or we can use stale updates, which can hurt accuracy. Meanwhile, we have all this DRAM distributed across nodes, yet it is used only to clone the DNN model. In NoSync [3], we asked whether we can instead not sync updates at all, but (1) have each worker train a separate model, and (2) have the best-performing models pull all other models towards them. It turns out this approach tolerates stale updates really well (reducing reliance on network bandwidth and latency), and actually provides an accuracy boost!
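The swarm dynamic is simple enough to sketch on a toy problem. Below, each "worker" runs noisy local SGD on a quadratic stand-in for a DNN loss, with no gradient synchronization at all; every so often the best-performing model pulls the others toward itself. All names and hyperparameters here are my own illustration, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(w):
    """Toy quadratic loss standing in for a DNN's training loss."""
    return float(np.sum((w - 3.0) ** 2))

def grad(w):
    return 2.0 * (w - 3.0)

# Each worker trains its own model copy; gradients are never exchanged.
n_workers, dim, lr, pull = 8, 16, 0.05, 0.1
workers = [rng.standard_normal(dim) * 5 for _ in range(n_workers)]

for step in range(200):
    # Local SGD step on every worker, with noisy gradients.
    for i in range(n_workers):
        workers[i] -= lr * (grad(workers[i]) + rng.standard_normal(dim))
    # Occasionally the best model attracts the rest (the particle-swarm
    # style pull from NoSync, sketched).
    if step % 20 == 0:
        best = min(workers, key=loss).copy()
        for i in range(n_workers):
            workers[i] += pull * (best - workers[i])

best_loss = min(loss(w) for w in workers)  # every worker ends near w = 3
```

Because only whole models are exchanged, and only occasionally, the scheme is insensitive to how stale the exchanged weights are, which is exactly what makes it cheap on bandwidth and latency.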


Conference papers
  1. M. Isakov, V. Gadepally, K. M. Gettings and M. A. Kinsy: Survey of Attacks and Defenses on Edge-Deployed Neural Networks. IEEE High Performance Extreme Computing Conference (HPEC), Waltham, MA, USA, 2019, pp. 1-8. Best Student Paper Nominee.
  2. M. Isakov, L. Bu, H. Cheng, and M. A. Kinsy: Preventing Neural Network Model Exfiltration in Machine Learning Hardware Accelerators. In the 2018 Asian Hardware Oriented Security and Trust Symposium (AsianHOST), 2018.
  3. M. Isakov and M. A. Kinsy: NoSync: Particle Swarm Inspired Distributed DNN Training. In the 27th International Conference on Artificial Neural Networks (ICANN), 2018.
  4. M. Isakov, A. Ehret and M. Kinsy: ClosNets: Batchless DNN Training with On-Chip a Priori Sparse Neural Topologies. 28th International Conference on Field Programmable Logic and Applications (FPL), Dublin, 2018, pp. 55-554.
  5. E. Taheri, M. Isakov, A. Patooghy, M. A. Kinsy: Addressing a New Class of Reliability Threats in 3-Dimensional Network-on-Chips. In the Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD), 2019.
  6. M. Isakov, A. Ehret and M. Kinsy: Chameleon: A Generalized Reconfigurable Open-Source Architecture for Deep Neural Network Training. In the 2018 IEEE High Performance Extreme Computing Conference (HPEC), 2018. Best Student Paper Nominee.
  7. A. Ehret, M. Isakov and M. A. Kinsy: Towards a Generalized Reconfigurable Agent Based Architecture: Stock Market Simulation Acceleration, International Conference on Reconfigurable Computing and FPGAs (ReConFig), 2018.
  8. J. R. Doppa, R. G. Kim, M. Isakov, M. A. Kinsy, H. Kwon and T. Krishna: Adaptive Manycore Architectures for Big Data Computing. In the International Symposium on Networks-on-Chip (NOCS), October 2017.
  9. H. Hosseinzadeh, M. Isakov, M. Darabi, A. Patooghy, and M. Kinsy: Janus: An uncertain cache architecture to cope with side channel attacks. In 2017 IEEE 60th International Midwest Symposium on Circuits and Systems (MWSCAS) Aug 2017. The Myril B. Reed Best Paper Award.
  10. E. Taheri, M. Isakov, A. Patooghy, and M. Kinsy: Advertiser Elevator: a Fault Tolerant Routing Algorithm for Partially Connected 3D Network-on-Chips. In 2017 IEEE 60th International Midwest Symposium on Circuits and Systems (MWSCAS) Aug 2017.
  11. M. Kinsy, S. Khadka, M. Isakov and A. Farrukh: Hermes: Secure Heterogeneous Multicore Architecture Design. In the IEEE International Symposium on Hardware Oriented Security and Trust (HOST), May 2017.
  12. M. Kinsy, S. Khadka and M. Isakov: PreNoc: Neural Network based Predictive Routing for Network-on-Chip Architectures. In the 27th edition of the ACM Great Lakes Symposium on VLSI (GLSVLSI), May 2017.

Journal papers
  1. M. A. Kinsy, L. Bu, M. Isakov and M. Mark: Designing Secure Heterogeneous Multicore Systems from Untrusted Components. Cryptography, vol. 2, iss. 3, no. 12, 2018.
  2. L. Bu, M. Isakov and M. A. Kinsy: RASSS: A Hijack-resistant Confidential Information Management Scheme for Distributed Systems. In IET Computers and Digital Techniques, 2018.
  3. L. Bu, M. Isakov, and M. A. Kinsy: A Secure and Robust Scheme for Sharing Confidential Information in IoT Systems. In the Elsevier journal Ad Hoc Networks, 2018.

Workshops & Posters
  1. N. Boskov, M. Isakov and M. A. Kinsy: CodeTrolley: Hardware-Assisted Control Flow Obfuscation. Boston Area Architecture 2019 Workshop (BARC19).
  2. M. Isakov and M. A. Kinsy: NeuroFabric: A Priori Sparsity for Training on the Edge. In the 2019 tinyML Summit (tinyML), 2019.
  3. R. Agrawal, S. Bandara, A. Ehret, M. Isakov, M. Mark, and M. A. Kinsy: The BRISC-V Platform: A Practical Teaching Approach for Computer Architecture, In Workshop on Computer Architecture Education (WCAE), 2019.
  4. M. Isakov and M. A. Kinsy: ClosNets: a Priori Sparse Topologies for Faster DNN Training, Boston Area Architecture 2018 Workshop (BARC18), 2018.
  5. M. A. Kinsy, M. Isakov, A. Ehret and D. Kava: SAPA: Self-Aware Polymorphic Architecture, Boston Area Architecture 2018 Workshop (BARC18), 2018.