Bachelor of Engineering in Computer Science

Aug 2009 - Jun 2013
R. V. College of Engineering, Bangalore, India
GPA 9.54/10.00

Courses Algorithms, Data Structures, Operating Systems, Database Management Systems, Computer Networks, Cryptography and Network Security, Unix Network Programming, Compiler Design, Computer Architecture, Software Engineering, Computer Graphics, Multimedia Communication, Digital Electronics and Microprocessors, Operations Research and Fuzzy Logic

Professional Work Experience

Member Technical Staff - 4, Acropolis

April 2019 - Present
Nutanix, San Jose, USA

Member Technical Staff - 4, Flow

May 2018 - March 2019
Nutanix, San Jose, USA

  • Lead for Flow and Epoch integration - Network pattern visualization across individual/grouped VMs
    • Epoch, formerly known as Netsil, was acquired by Nutanix in early summer 2018.
    • I was the engineering lead for the first integration effort to bring the two engineering stacks together.
    • Conceptualised the feature end to end, including UI integration, APIs, backend, enablement, packaging, realization of all use-cases and workflows, and Epoch integration.
    • Implemented 90% of the feature's backend code.
    • Objectives
      • Render network visualization in the Prism Central UI by leveraging the capabilities of Epoch's time-series DB, Maps and Analytics sandbox.
      • Provide the user with a sneak peek into the traffic patterns/interactions between VMs and external entities.
      • Aid the user in categorizing VMs and in the creation/application of network segmentation security policies.
      • Lay a strong foundation for extensible use cases in future releases.
    • Key Learnings - The nature of this integration was extremely complex and challenging, with a steep learning curve in the following areas:
      • Understanding product requirements and translating them into user stories.
      • Mapping scope and feasibility to release versions.
      • Prioritization and planning of tasks, my own and the entire team's (UX, UI, Epoch, Quality Assurance Pipeline).
      • UX mock iterations - evaluating the feasibility of every mock.
      • Integration points - ensuring integration happens only via an API handshake between Flow and Epoch.
      • Object model design - to achieve the interweaving of components in both architectures.
      • Multiple iterations (quick turn around implementation cycles) of the API interactions between Flow and UI.
      • Multiple iterations (quick turn around implementation cycles) of the API interactions between Flow and Epoch.
      • Integration with in house MSP (Microservices Platform) to host the Epoch data collector.
    • Challenges
      • Extensive and tightly coupled Cross Team Collaboration (Product Management, UX, UI, Epoch, Auth, MSP, Flow, Functional Test, System Test, Support and Site Reliability Engineering teams).
      • Making the TRIAD working paradigm successful.
      • Stabilizing the extremely fragile datapath by making the dataplane code more resilient to corner/failure cases.
      • Ensuring the accuracy of traffic patterns even at production-level scale by iteratively optimizing code as the feature progressed through the Quality Assurance Pipeline.
    • Impact
      • Very high customer impact.
      • Many existing and new Flow customers are waiting on this feature.
      • Essential business need to address product gap.
      • Slated for release in summer 2019.
    • References: Sneak Peek into Flow+Epoch (Netsil)

Member Technical Staff - 3, Xi (Hybrid Cloud) Networking Team

Feb 2017 - April 2018
Nutanix, San Jose, USA

  • Lead for SNAT in Xi
    • Worked on prototyping, designing and implementing SNAT - Enables external network connectivity for the tenant VMs in Xi.
    • Key Learnings
      • Understanding the OVS, OVN and Neutron architectures. Implementation involved modifying OVN source code (the networking-ovn plugin).
      • The proposed design is the only way SNAT functionality can be provided at scale, as the solution works inline with the OVN pipeline.
      • Integrating the solution with the Floating IP feature to ensure Floating IP and SNAT co-exist on the same platform (OVS, OVN).
    • Challenges
      • Inception, prototyping and evaluation of multiple options before finalizing the most optimal design option.
      • Designing and realizing a complex object model involving dataplane changes across Nutanix Prism Central, Prism Element, the Xi SDN Controller, OpenStack Neutron, OVS, OVN and the Acropolis Hypervisor, including IDF watches at the PC and PE level.
      • Control plane changes such as event watchers, pollers and handlers to handle faults, cluster upgrades, node down/up, HA, etc.
      • Ensuring the feature passed the Quality Assurance Pipeline - this comprised knowledge-transfer sessions and coordination with globally distributed QA teams (functional, automation, system and contractor) to formulate, review and execute the test plan, accompanied by long hours of complex datapath debugging sessions.
    • Impact
      • Releases - Xi-Alpha, Xi-GA.
      • Design compatible with short term as well as long term Xi architecture.
      • The conceived solution is being upstreamed for inclusion in mainline OVN.
      • The final Xi-SNAT design was presented at an OpenStack Summit by our Principal Engineer. Presentation Slides / Presentation Video

  • Redesigning On-Prem Virtual Network and Subnet APIs for Xi - Nutanix's Hybrid Cloud Offering
    • Worked on full stack design and implementation of Xi networking APIs for virtual networks and subnets.
    • Key Learnings
      • Google protobufs and RPCs - Complete redesign and implementation of existing protobufs (initially meant for only On-prem use cases) to suit Xi and On-prem workflows.
      • Designed and implemented version 1 of automated functional tests for virtual network and subnet CRUD API operations for Xi Networking.
      • Extensive cross team collaboration to achieve Xi-portal launch and end-to-end Xi networking workflows.
      • Ensuring APIs worked with new intent engine in Xi SDN Controller.
    • Challenges - Integration of the Xi Portal in AWS, Xi AZ, Xi SDN Controller, Nutanix Prism Central and Prism Element (remote connections, fanout proxy, timeouts, authentication).
    • Impact
      • Releases - Xi-Alpha, Xi-GA
      • Enabled co-existence of On-Prem and Xi by ensuring common customer facing networking APIs but with different backend workflows.

Nutanix's annual conference .Next

  • London, UK
    November 2018
    • Lead for the CTO's FLOW keynote demo - Use cases demonstrated required bringing together the capabilities of Flow, Calm and Epoch.
    • Conceptualised user stories and demo script by working with Senior Engineering/Management.
    • Formed and coordinated a globally distributed team comprising UI, Engineering (Flow, Calm and Epoch), and IT for infrastructure and demo logistics.
    • Implemented the Flow + Epoch backend for the demo workflows.
  • New Orleans, USA
    May 2018
    • Speaker at the NX files customer and partner session - presented an in-house prototyped project centered around Hybrid Cybersecurity and Blockchain.
  • Nice, Europe
    November 2017
    • Refined the DB hacking application, prototypes and scripts prepared for the previous .Next, which were
      presented once again in the Europe keynote demos.
  • Washington DC, USA
    June 2017
    • Designed and implemented a new DB hacking application to demonstrate the working of microsegmentation
      in the Flow keynote demo.
    • This application has been used in multiple .Nexts, SKOs, keynote demos and in various platforms.
    • Worked on preparing and scripting parts of the backend for the networking Xi-portal APIs for Xi keynote demo.

Member Technical Staff - II, ESX Storage, Research and Development Team

Jun 2013 - Aug 2015
VMware, Bangalore, India

  • Implemented scripts to scan, detect and report data corruption of a virtual machine’s snapshot delta disks and their metadata.
  • Incorporated architectural changes in disk and object libraries across versions of ESX, virtual SAN (vSAN) and virtual volumes (vVols).
  • As the Scrum Master and Triage Lead, tracked team activity and assigned tasks to team members after analyzing feature requests and complex storage stack issues raised by customers and partners.

Software Development Intern, Media Labs

Jan 2013 - May 2013
Ittiam Systems, Bangalore, India

  • Designed and implemented the Customer Subscription and Billing modules for farmOTT, a cloud-based video transcoding service.
  • Technologies Used: Amazon EC2, PHP, FFMPEG, REST APIs.

Research Work Experience

Graduate Research Assistant under Professor Barton P. Miller

Aug 2015 - May 2016
Department of Computer Sciences, University of Wisconsin - Madison, USA

  • Designed plugins for Eclipse/IntelliJ to support SWAMP functionalities, facilitating testing code for security vulnerabilities across various development environments (Java, cURL APIs).
  • Automated the infrastructure (virtual machine creations, OS image customization and packaging of tools).
  • Technologies Used: OZ tool, RPMbuild, Linux.

Software Research Intern under Professor A. G. Ramakrishnan

Jun 2012 - Aug 2012
Medical Intelligence and Language Engineering Lab, Indian Institute of Science, Bangalore, India


Projects

Evaluating Viability of Network Functions on Lambda Architecture
  • As a part of this project, we evaluated the viability of standalone network functions (NFs) on Lambda architectures and proposed a locality-aware, event-based NF chaining system termed LENS.
  • Apart from the results discussed in Final Presentation Slides, the following additional experiments were carried out which are discussed in Supplemental Writeup:
    • Standalone NAT on AWS Lambda vs Standalone NAT on Azure Functions.
    • Measurement of Lambda Start-up time on AWS Lambda.
    • Investigation of the Lambda instances count on AWS Lambda.
  • Technologies Used: AWS Lambda, Azure Functions
NFS-Like Distributed File System
  • Built a simple distributed file system based on NFS-like protocol.
  • The file system was integrated on the Linux client side using FUSE, which allows building a file-system interface in user space.
  • All basic file system functionalities such as make/delete directories, create/delete/read/copy/write to files and listing of directory contents were supported.
  • Optimized writes by batching them and using a COMMIT protocol.
  • Built the server to recover from crashes. Server crashes are transparent to clients, which observe only a performance degradation. Assumed fail-recover behavior.
  • Technologies Used: C++, NFS v3 Specification, FUSE
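The batched-write path above can be sketched as a toy model (illustrative Python, although the project itself was in C++; the class and method names are hypothetical): writes are acknowledged while still unstable, a COMMIT flushes them, and a server verifier lets the client detect a crash and replay the batch.

```python
class ToyNFSServer:
    """Toy NFSv3-style server: WRITEs are buffered (unstable), COMMIT flushes
    them. The verifier changes on every crash so clients can detect loss."""

    def __init__(self):
        self.stable = {}      # committed bytes per file
        self.unstable = {}    # buffered, not-yet-flushed writes
        self.verifier = 0

    def write(self, fname, offset, data):
        self.unstable.setdefault(fname, []).append((offset, data))
        return self.verifier  # acked before reaching "disk"

    def commit(self, fname):
        buf = bytearray(self.stable.get(fname, b""))
        for offset, data in self.unstable.pop(fname, []):
            end = offset + len(data)
            if len(buf) < end:
                buf.extend(b"\0" * (end - len(buf)))
            buf[offset:end] = data
        self.stable[fname] = bytes(buf)
        return self.verifier

    def crash(self):
        self.unstable.clear()  # fail-recover: uncommitted writes are lost
        self.verifier += 1


class ToyNFSClient:
    """Batches writes and replays the batch if the server crashed pre-COMMIT."""

    def __init__(self, server):
        self.server = server
        self.batches = {}      # fname -> (verifier at batch start, writes)

    def write(self, fname, offset, data):
        v = self.server.write(fname, offset, data)
        _, writes = self.batches.setdefault(fname, (v, []))
        writes.append((offset, data))

    def flush(self, fname):
        seen, writes = self.batches.pop(fname, (self.server.verifier, []))
        if self.server.commit(fname) != seen:   # server rebooted: replay batch
            for offset, data in writes:
                self.server.write(fname, offset, data)
            self.server.commit(fname)
```

The verifier comparison is what makes crashes transparent to callers: the client silently retransmits, and the application only sees the extra latency.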
PageRanking, Structured Streaming and Twitter Stream processing using Spark and Apache Storm
  • Developed a PageRanking application using Python on Spark on a Hadoop cluster for the Berkeley-Stanford Web Graph dataset.
  • Experimented with custom partitioning of the RDDs, analyzed and fine tuned the performance by varying Spark Context.
  • Wrote a simple Python application that emits the number of retweets (RT), mentions (MT) and replies (RE) for an hourly window, updated every 30 minutes based on the timestamps of the tweets.
  • This was used to analyze the Higgs Twitter Dataset which was built after monitoring the spreading processes on Twitter before, during and after the announcement of the discovery of a new particle with the features of the Higgs boson.
  • Used Apache Storm to collect and process more than 500,000 English tweets on a Hadoop cluster matching conditions such as particular hashtags, friendsCount, etc., using the Twitter API.
  • Technologies Used: Python, Spark, Apache Storm, Structured Streaming and ZooKeeper.
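The rank update that the Spark job iterates can be sketched on a single machine (plain Python rather than RDDs; the damping factor 0.85 and the toy graph in the usage are illustrative, not the Berkeley-Stanford dataset):

```python
def pagerank(links, iters=20, d=0.85):
    """links: {node: [out-neighbours]}. Returns {node: rank}.
    Each iteration, every node splits its rank among its out-links,
    then ranks are recombined with the damping term (1 - d) / n."""
    nodes = set(links) | {n for outs in links.values() for n in outs}
    n = len(nodes)
    ranks = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        contribs = {v: 0.0 for v in nodes}
        for src, outs in links.items():
            share = ranks[src] / len(outs)   # split rank over out-links
            for dst in outs:
                contribs[dst] += share
        ranks = {v: (1 - d) / n + d * c for v, c in contribs.items()}
    return ranks
```

In Spark the same update is a join of the ranks RDD with the adjacency list followed by a reduceByKey; the custom partitioning experiments amount to co-partitioning those two RDDs so the join avoids a shuffle. (This sketch ignores dangling nodes, which simply leak their mass.)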
Timing and Communication
  • Used Linux timer clock_gettime() to measure Dean's numbers after determining resolution and precision of the timer.
  • Built a reliable communication library on top of raw UDP-based sockets that allows two processes to communicate via UDP packets, using a simple timeout-retry mechanism to detect when the receiver has not received a message, and then re-send that message. Measured its performance and reliability characteristics by inducing controlled message drops.
  • Google RPC and Apache Thrift
    • Measured the overhead of marshalling a message (packing an item into a protobuf, time it takes to pack an int, a double, a string of varying size, a complex structure on each platform).
    • Measured round-trip time for a small message, when both client/server are on the same machine and when on different machines.
    • Compared overhead of protobufs and RPC with that of barebone RPC library.
    • Measured bandwidth when sending large amounts of data without streaming and with client or server streaming and analysed size of messages required to reach peak line rate.
  • Technologies Used: C++, Python, Google RPC and Apache Thrift.
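The timeout-retry mechanism described above is stop-and-wait ARQ. A minimal Python sketch over raw UDP sockets (the function names and the 4-byte sequence-number packet format are assumptions for illustration, not the library's actual API) could look like:

```python
import socket
import struct

def reliable_send(sock, addr, seq, payload, timeout=0.2, retries=10):
    """Stop-and-wait sender: transmit [seq | payload], wait for an ACK
    echoing seq, retransmit on timeout. Returns True once acked."""
    pkt = struct.pack("!I", seq) + payload
    sock.settimeout(timeout)
    for _ in range(retries):
        sock.sendto(pkt, addr)
        try:
            ack, _ = sock.recvfrom(4)
            if struct.unpack("!I", ack)[0] == seq:
                return True
        except socket.timeout:
            continue  # data or ACK was dropped: resend
    return False

def reliable_recv(sock, expected_seq):
    """Receiver: ACK every packet seen (so duplicated data still gets
    acked), deliver the payload once the expected sequence arrives."""
    while True:
        pkt, addr = sock.recvfrom(65535)
        seq = struct.unpack("!I", pkt[:4])[0]
        sock.sendto(struct.pack("!I", seq), addr)
        if seq == expected_seq:
            return pkt[4:]
```

Acking duplicates is the key detail: if only the ACK was lost, the retransmitted data must still be acknowledged or the sender stalls until its retries run out.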
Benchmarking of Data Analytics Stacks - MapReduce and Tez
  • This project mainly involved benchmarking the performance of MapReduce and Apache Tez using TPC-DS and TPC-H workloads.
  • Deployed a Hadoop cluster on Microsoft Azure VMs.
  • Ran SQL query "jobs" atop Apache Hive using MapReduce and Tez.
  • Tuned and analyzed the configuration of the MapReduce and Tez jobs by varying the number of reducers, parallel shuffle copies and the slow start of reduce jobs to obtain optimal performance for certain queries.
  • Developed and deployed a simple MapReduce application in Java that groups and sorts anagrams in a large corpus of words.
  • Technologies Used: Python, Java, Hadoop, Hive, MapReduce and Tez.
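The anagram job's map/shuffle/reduce structure can be sketched in plain Python (the actual implementation was a Java MapReduce application; these helper names are illustrative):

```python
from collections import defaultdict

def map_phase(words):
    # Map: key each word by its sorted letters, so anagrams share a key.
    return [("".join(sorted(w)), w) for w in words]

def shuffle(pairs):
    # Shuffle: group values by key, as the framework does between phases.
    groups = defaultdict(list)
    for key, word in pairs:
        groups[key].append(word)
    return groups

def reduce_phase(groups):
    # Reduce: emit each anagram group, sorted for deterministic output.
    return sorted(sorted(words) for words in groups.values())

def anagram_groups(words):
    return reduce_phase(shuffle(map_phase(words)))
```

The sorted-letters key is what makes the grouping free: the framework's shuffle does the actual clustering, and the reducer only has to sort within each group.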
Jua : Customer Facing System to Improve User Experience in Public Clouds
  • Designed and developed Jua, a customer-facing system that uses machine learning to choose the appropriate VM instance type, and implements a number of placement gaming strategies to ensure customers get the best performance relative to the cost they incur.
  • Technologies Used: Python, Amazon Web Services (AWS). Project Report.
Predicting Cricket Match Outcomes
  • Designed a machine learning based prediction system by learning feature sets from CricInfo.
  • Technologies Used: Python's scikit-learn and Beautiful Soup. Project Report.
Entity Matching of Restaurant Data from Yelp and Zomato
  • Implemented the entire data science pipeline on data sets extracted from two restaurant aggregator websites: Yelp and Zomato.
  • Built a focussed web crawler to retrieve and store all the restaurant pages from Zomato and Yelp.
  • Performed data cleaning and information extraction by constructing manual wrappers over the retrieved HTML files.
  • Performed parallel and sequential blocking on several attributes, applying similarity measures such as Jaccard, TF-IDF and Levenshtein distance to scale down the data considered for entity matching.
  • Applied learning methods - Decision Trees, Linear Regression, Logistic Regression, Support Vector Machines, Random Forest and Naive Bayes - to predict entity matches, and evaluated the best method by Precision, Recall and F1 measure.
  • Technologies Used: Beautiful Soup, IPython, pandas, Magellan. Project URL.
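The Jaccard blocking step can be sketched as follows (a simplified single-attribute version over restaurant names; the threshold and the example names are illustrative, not the project's data):

```python
def jaccard(name_a, name_b):
    """Token-level Jaccard similarity between two restaurant names."""
    a, b = set(name_a.lower().split()), set(name_b.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

def block_candidates(yelp_names, zomato_names, threshold=0.3):
    """Keep only the cross-site pairs whose names overlap enough to be
    worth passing to the (more expensive) entity-matching learner."""
    return [(y, z) for y in yelp_names for z in zomato_names
            if jaccard(y, z) >= threshold]
```

Blocking trades a cheap similarity pass for a much smaller candidate set: the quadratic cross-product of the two sites is pruned before any learned matcher runs.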
Multimedia Sharing and Video Streaming over the Cloud
  • Designed and developed an application to share multimedia files and stream video over the cloud.
  • Catered to live streaming requests by splicing the contiguous video segments and storing them for subsequent forwarding.
  • Technologies Used: Java RMI, Amazon Web Services(AWS), Amazon S3, Microsoft DivX.
Projects in Computer Networks
  • Iperfer - Implementation of the iPerf tool in Java that was used to measure network bandwidth and performance of virtual networks.
  • Implementation of Virtual Switch and Router for Link and Network layer forwarding
    • Learning switch forwards packets at the link layer based on destination MAC addresses.
    • Router forwards packets at the network layer based on destination IP addresses.
    • Designed and developed the logic for packet forwarding, route lookup, longest prefix matching of the IP, MAC address lookup from the ARP cache and the generation of ICMP packets for error conditions.
  • Software defined networking (SDN)
    • Implemented two control applications for a software-defined network (SDN).
    • Implemented layer-3 routing application that installed rules in SDN switches to forward traffic to hosts using the shortest, valid path through the network.
    • Implemented a distributed load balancer application which redirected new TCP connections to hosts in a round-robin order.
  • Domain Name Server (DNS) - Designed and implemented a simple DNS server that performed recursive DNS resolutions, and appended a special annotation if an IP address belongs to an Amazon EC2 region.
  • Language Used: Java.
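The router's longest-prefix-match lookup can be sketched as a linear scan over the routing table (shown in Python for brevity, although the coursework above was in Java; a production router would use a trie rather than a scan):

```python
import ipaddress

def longest_prefix_match(table, dst):
    """table: list of (CIDR string, next hop) pairs. Returns the next hop
    of the longest prefix containing dst, or None if nothing matches."""
    addr = ipaddress.ip_address(dst)
    best = None
    for cidr, next_hop in table:
        net = ipaddress.ip_network(cidr)
        # A longer prefix is more specific, so it wins over shorter matches.
        if addr in net and (best is None or net.prefixlen > best[0]):
            best = (net.prefixlen, next_hop)
    return best[1] if best else None
```

A default route is just the /0 entry: it contains every address but loses to any more specific prefix.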
Machine Learning Projects
  • Naive Bayes and Tree Augmented Naive Bayes (TAN) Classifiers Code
    • Implemented the Naive Bayes and TAN classifiers for binary classification.
    • Used Prim's algorithm to construct the maximum spanning tree in TAN.
    • Laplace estimates were used to calculate all probabilities.
    • Tested the classifiers on data sets used to predict lymphatic cancer and to predict whether a U.S. House of Representatives Congressman is a Democrat or a Republican based on 16 key votes identified by the CQA.
  • Single-Layer Neural Network Using Stochastic Gradient Descent Code
    • Implemented an algorithm that learns a single-layer neural network using stochastic gradient descent (on-line training).
    • This algorithm was intended for binary classification problems where the output unit used a sigmoid function.
    • Stochastic gradient descent was used to minimize squared error.
    • Tested the obtained neural net on a data set that represented energy within particular frequency bands when the signal from a RADAR bounces off a given object. This data was used to determine if the object is a rock or a mine.
  • Iterative Dichotomiser 3 (ID3) Decision Tree Learner for Classification Code
    • Implemented an ID3 machine learning algorithm in Python for binary classification to predict heart disease, diabetes and moves in a Tic-Tac-Toe game. Extended it to include numeric and nominal attributes.
    • Plotted and analyzed the learning curves that characterize the predictive accuracy of the learned trees as a function of the training set size.
  • Language Used: Python.
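The Laplace-smoothed Naive Bayes described above can be sketched for nominal features (a simplified illustrative version, not the coursework code; class and method names are assumptions):

```python
import math
from collections import Counter, defaultdict

class NaiveBayes:
    """Naive Bayes over nominal features, with Laplace (add-one) estimates
    for both the class priors and the per-feature conditionals."""

    def fit(self, X, y):
        self.classes = sorted(set(y))
        self.priors = {c: (y.count(c) + 1) / (len(y) + len(self.classes))
                       for c in self.classes}
        self.counts = defaultdict(Counter)   # (class, feature idx) -> counts
        self.values = defaultdict(set)       # feature idx -> values seen
        for row, c in zip(X, y):
            for i, v in enumerate(row):
                self.counts[(c, i)][v] += 1
                self.values[i].add(v)
        return self

    def _log_posterior(self, row, c):
        lp = math.log(self.priors[c])
        for i, v in enumerate(row):
            seen = sum(self.counts[(c, i)].values())
            # Laplace estimate: add one per possible value of the feature,
            # so unseen values never produce a zero probability.
            lp += math.log((self.counts[(c, i)][v] + 1)
                           / (seen + len(self.values[i])))
        return lp

    def predict(self, row):
        return max(self.classes, key=lambda c: self._log_posterior(row, c))
```

Summing log-probabilities rather than multiplying raw probabilities avoids underflow once the vote datasets' 16 features are all factored in.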
Performance Analysis during Live Migration
  • The aim of this project was to analyze the performance of the chosen workloads during live migration.
  • Downtime and total migration time, the key considerations for choosing a live migration approach (pre-copy or post-copy), were recorded for drawing inferences.
  • Technologies Used: iSCSI protocol, Xen hypervisor, Perl.
Online Test Management System
  • Developed an online test central database coupled with a web portal - an end-to-end framework to conduct examinations online and manage details of end users comprising students and faculty.
  • Technologies Used: C#, ASP.NET, MySQL, SQL Server. Code.


Publications

Granular Computing and Network Intensive Applications: Friends or Foes?

Hotnets, 2017
My contribution is mentioned in the acknowledgement section.
Our CS-739 Distributed Systems project on Serverless Computing (Lambda architecture) and the feasibility of leveraging its architectural capabilities for network function applications was later carried forward in research by a fellow teammate at the University of Wisconsin-Madison Computer Sciences Department.

Jua : Customer Facing System to Improve User Experience in Public Clouds

Semantic Scholar, Summer 2016

Performance Analysis during Live Migration

Volume 2, Issue 4, International Journal of Engineering Research & Technology (IJERT), 22nd April, 2013.

Performance Analysis of Goldwasser-Micali Cryptosystem

Volume 2, Issue 4, International Journal of Engineering Research & Technology (IJERT), 22nd April, 2013.

Green Computing – A Case Study on the Holistic Approach of Innovative Computing

National Conference on Recent Trends in Computer Technology, 29th April, 2011.

Holy Grail of Cloud Computing – Comparison of three IT giants Amazon, Google and Microsoft

International Conference on Applications of Wireless Sensor and Ad Hoc Networks (ICWSAN), 10th March, 2013.