Hi! My name is Lev.

I'm a Full-Stack Developer from Vancouver.

About me
my photo

I'm a Full-Stack Developer experienced in building a large-scale distributed application (AWS Backup), a React + Node.js library catalog application, teaching CS to university students, and writing C++ as part of my PhD research. I taught myself JavaScript using online platforms such as edX and Udemy, and I currently work as a Software Development Engineer on the AWS Backup team. I'm passionate about new technologies, application performance, and writing clean, testable code.

My resume

My skills

Front end
React Redux
Webpack GraphQL
TypeScript d3
Back end
node.js Express
AWS MongoDB
Python PostgreSQL
Tools
Webpack npm
Immutable.js d3
git AST

Movie data visualization

Problem

Visualize movie data to see whether there is a correlation between movie budget, popularity, and average rating.

Approach
Dashboard consisting of four main sections:
  1. A scatter plot: popularity vs budget. Display movie details on hover and click
  2. Movie details: a section containing basic information about the movie (title, poster, budget, etc.)
  3. A map showing the number of movies filmed across the world. Filter by one or multiple countries by clicking the map
  4. A bar chart showing number of movies per genre. Filter by one or multiple genres by clicking the bar chart
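The scatter plot in section 1 shows the budget-popularity relationship visually; to also put a number on it, one could compute Pearson's correlation coefficient. A minimal sketch (the data shape and field names here are illustrative, not the actual TMDB schema):

```javascript
// Pearson correlation coefficient between two equal-length numeric arrays.
function pearson(xs, ys) {
  const n = xs.length;
  const meanX = xs.reduce((a, b) => a + b, 0) / n;
  const meanY = ys.reduce((a, b) => a + b, 0) / n;
  let num = 0, dx2 = 0, dy2 = 0;
  for (let i = 0; i < n; i++) {
    const dx = xs[i] - meanX;
    const dy = ys[i] - meanY;
    num += dx * dy;   // covariance numerator
    dx2 += dx * dx;   // variance of x (numerator)
    dy2 += dy * dy;   // variance of y (numerator)
  }
  return num / Math.sqrt(dx2 * dy2);
}

// Example: budget (in millions) vs popularity for a few movies
const movies = [
  { budget: 10, popularity: 5 },
  { budget: 50, popularity: 20 },
  { budget: 100, popularity: 35 },
];
const r = pearson(movies.map(m => m.budget), movies.map(m => m.popularity));
```

A value of r close to 1 or -1 indicates a strong linear relationship; close to 0, little or none.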
Choice of technologies
  • The visualizations are created using D3, an open-source library for building many types of complex graphs.
  • Due to the simple nature of the page, I decided not to use a front-end framework and stuck with vanilla JavaScript, HTML, and SASS compiled into CSS.
  • It is easier to store movie data as documents rather than as tables, so I chose MongoDB for the back end.
  • Data is pulled from the back-end API: a MongoDB driver + Express application running on Node.js.
  • The application is built using parcel.js and hosted on an AWS EC2 instance running the Nginx web server.
  • Continuous integration is set up using Jenkins and GitHub web hooks.
Challenges
  1. Data formatting
    The data were acquired from TMDB in CSV format, with individual fields wrapped in double quotation marks. Some of the fields (such as the movie plot) contained single and double quotation marks as well as commas. Additionally, object keys were wrapped in single quotes, which made it impossible to parse the file as JSON as-is. To solve this problem, I reformatted the file in four steps:
    1. Wrote a script to convert the file from CSV format to DSV, changing the field separator from comma to pipe ( | ).
    2. Using a regular expression, took out all double quotes that wrapped individual fields.
    3. Using another regular expression, replaced single quotes that wrapped object keys with double quotes, without replacing the apostrophes.
    4. Wrote a script to read the resulting DSV file and populate the database.
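Steps 1-3 above can be sketched as a single line-by-line transformation (a simplified version; the real plot fields need more careful quote handling):

```javascript
// Sketch of the three text transformations, applied per line.
function reformatLine(line) {
  // 1. Change the field separator from comma to pipe, touching only
  //    commas that sit outside double-quoted fields.
  let out = '';
  let inQuotes = false;
  for (const ch of line) {
    if (ch === '"') inQuotes = !inQuotes;
    out += ch === ',' && !inQuotes ? '|' : ch;
  }
  // 2. Drop the double quotes that wrap whole fields.
  out = out.replace(/(^|\|)"([^|]*)"(?=\||$)/g, '$1$2');
  // 3. Replace single quotes around object keys with double quotes,
  //    leaving apostrophes inside words untouched.
  out = out.replace(/'(\w+)':/g, '"$1":');
  return out;
}
```

For example, `"aaa","b,b"` becomes `aaa|b,b`, and a key like `'title':` becomes `"title":` while an apostrophe in a plot stays intact.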
  2. Performance tuning
    When a big time interval was chosen and no other filters were applied, a large amount of data had to be retrieved from the DB and transferred to the front end. The initial version of the page took almost 3 seconds to load, and requests for more data took up to 1.5 seconds. To decrease the load time, I took a number of steps:
    1. Decreased the number of fields retrieved from the database. I decided not to read the plot field up front, even though it is needed if the user clicks a movie to see details; instead, it is pulled only if and when the user actually requests the details. This introduced a slight delay in retrieving movie details, but significantly decreased page load time and improved general responsiveness.
    2. Delegated data processing to the MongoDB engine. Instead of retrieving full data about which movies were filmed in each country, count the movies per country in the DB and transfer only the final number.
    3. Set up indices on the DB to speed up reads: since new data is never written to the DB, there is no downside to adding several indices to the collection.
    4. Cache data on the front end: after each response from the back-end API, save the retrieved data in memory. For any subsequent request, if some or all of the data is cached, read it from memory and request only the missing part.
    As a result, the initial page load time decreased from almost 3 seconds to 0.5 seconds, and additional requests from 0.5-1.5 to 0.2-0.5 seconds (depending on the amount of data requested).
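The query shapes behind steps 1, 2, and 4 can be sketched as plain objects plus a tiny cache helper (collection and field names such as `production_countries` are assumptions, not the production schema):

```javascript
// 1. Projection: exclude the heavy plot field from list queries by
//    naming only the fields the scatter plot actually needs.
const listProjection = { title: 1, budget: 1, popularity: 1, vote_average: 1 };

// 2. Aggregation pipeline: count movies per country inside MongoDB
//    instead of shipping full documents to the front end.
const moviesPerCountry = [
  { $unwind: '$production_countries' },
  { $group: { _id: '$production_countries', count: { $sum: 1 } } },
];

// 4. Minimal front-end cache: remember responses keyed by request URL,
//    so repeated requests never hit the network.
const cache = new Map();
async function cachedFetch(url, fetchFn) {
  if (!cache.has(url)) cache.set(url, await fetchFn(url));
  return cache.get(url);
}
```

The projection and pipeline objects would be passed to the MongoDB driver's `find` and `aggregate` calls respectively; `cachedFetch` wraps whatever request function the front end uses.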

Memory game

Problem
Build a simple memory game to showcase my React skills.
Approach
I built a simple memory card game that can improve your memory and is easy to start playing. The game board contains cards whose colors are initially hidden. The goal of the game is to find pairs of matching colors by clicking the cards.
Choice of technologies
The game is built using React, bootstrapped with create-react-app, and hosted on an AWS EC2 instance running the Nginx web server.
Continuous integration is set up using Jenkins and GitHub web hooks.
Challenges
  1. Adjustable game difficulty. To offer an additional challenge for the memory ninjas out there, I implemented three difficulty levels: easy, medium, and hard. Switching the difficulty changes the number of cards on the board to 16, 24, and 32, respectively. Card sizes are adjusted accordingly.
  2. Responsive design (making the page mobile-friendly). Media queries increase the card size on smaller devices.
  3. Win condition. Every time the user finds a matching pair, the number of opened cards is increased by two. When it reaches the total number of cards, a win pop-up is displayed to the user.
  4. User experience. Smooth transitions and animations create a pleasant user experience.
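The state behind points 1 and 3 can be sketched framework-agnostically (the real game keeps this state in React components; the function names here are illustrative):

```javascript
// Card counts per difficulty level, as described above.
const DIFFICULTY = { easy: 16, medium: 24, hard: 32 };

function createGame(difficulty) {
  return { totalCards: DIFFICULTY[difficulty], matched: 0 };
}

// Called when the player uncovers a matching pair: the count of opened
// cards grows by two, and reaching the total triggers the win pop-up.
function recordMatch(game) {
  game.matched += 2;
  return game.matched === game.totalCards; // true => show win pop-up
}
```

On "easy", eight successful matches open all 16 cards, and only the final call reports a win.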

Let's get in touch!

E-mail LinkedIn GitHub