This is a little page where I detail my experience, decisions, and reflections on my projects and design involvement! Though slightly unserious and improper, I hope this page can prove useful to those pursuing similar projects or looking for insight into my abilities as an engineering student.
To fill space, here are some fun facts about me!
- I love to host dinner parties (friendsgiving, friendsmas, ALL of it)
- I'm currently reading the "Before the Coffee Gets Cold" series!
- I broke a blender in grade 6 because I forgot about a spoon I put in it
- I'm left handed
I coded this website myself! Using Next.js and Tailwind CSS :)
I coded this portfolio using Next.js and Tailwind CSS! Prior to this, I had little to no coding experience beyond class assignments and an introductory Python program during high school (Schulich Ignite). My motivation to make a portfolio stemmed from wanting a place to showcase my achievements and projects. I'm really proud of how it turned out given the short timeline (about a week) and the learning curve I had to overcome for this project. Originally, the design was more ambitious than my current abilities could deliver. However, I recognize there is a lot of room for improvement in the future, where those ideas may be useful :)
I made the original wireframes using Figma, all of which are linked here!
After some time, I realized it was really daunting to complete a project of this size. So I started by designing something simpler, and with the help of the internet, I built this website! I found that trial and error provided the most useful learning experience for this task. After watching a video or two on the basics of Next.js and Tailwind CSS, it was easiest to have this Tailwind cheat sheet open and actively search any questions I had regarding layout and the specific terminology for positioning and similar features.
On this website, I really wanted some kind of typewriter animation, which was a learning curve to say the least. After scouring the internet I found a few links and examples (here and here). To my understanding, the animation on the front page increases the width of the "item" (here, the text box) as the animation progresses, incrementally revealing the text. The keywords in the "className" of the desired text are (a small example follows this list):
inline-flex: ensures that everything is aligned in a single row
overflow-x-hidden: indicates that any text exceeding the width of the text box is not visible
whitespace-nowrap: makes sure all text does not wrap onto a new line
animate-type: applies the animation named "type" defined in "tailwind.config.ts"
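To show how these classes fit together, here is a rough sketch of the markup; the component name and the text are placeholders, not the actual code from this site.

```tsx
// A rough sketch of the typewriter text markup -- the component name and the
// text itself are placeholders, not the actual code from this site.
export function TypewriterLine() {
  return (
    <h1 className="inline-flex overflow-x-hidden whitespace-nowrap animate-type">
      hello, welcome to my portfolio!
    </h1>
  );
}
```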
To write out animations in "tailwind.config.ts", they must be defined under "animation" and "keyframes" (inside the exported config object, typically under theme.extend). Within the brackets of "animation", the name, duration, pacing, repetition, and when to apply certain styles are defined. It will look something like this -- type: 'type 2.7s ease-out .8s 1 normal none' -- with each animation separated by commas.
type: animation name
2.7s: duration of animation
ease-out: how the speed of the animation changes over time; here it starts quickly and slows down towards the end (see also: ease-in, linear, etc.)
0.8s: delay before the animation starts
1: number of iterations the animation should be played
normal: how the animation progresses through keyframes (see also: reverse, alternate, etc.)
none: whether the keyframe styles apply before and after the animation runs; "none" keeps them only while the animation is playing (see also: forwards, backwards and both)
The code in "tailwind.config.ts" should look similar to the example below, where the % in keyframes indicates at what point in the animation (0% being the beginning and 100% the end) to change an element, in this case the width or opacity.
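As a rough sketch (the exact keyframe values and the extra blinking-cursor animation are illustrative assumptions, not my exact config):

```ts
// tailwind.config.ts -- a minimal sketch of the animation setup.
// The "type" shorthand matches the one described above; the keyframe values
// and the "blink" cursor animation are illustrative assumptions.
import type { Config } from "tailwindcss";

const config: Config = {
  content: ["./app/**/*.{ts,tsx}"],
  theme: {
    extend: {
      animation: {
        // name: 'keyframes duration easing delay iterations direction fill-mode'
        type: "type 2.7s ease-out .8s 1 normal none",
        blink: "blink 1s step-end infinite",
      },
      keyframes: {
        type: {
          // 0% = start of the animation, 100% = end: the text box grows from
          // zero width to full width, revealing the text
          "0%": { width: "0%" },
          "100%": { width: "100%" },
        },
        blink: {
          // a cursor fading in and out by toggling opacity
          "0%, 100%": { opacity: "1" },
          "50%": { opacity: "0" },
        },
      },
    },
  },
  plugins: [],
};

export default config;
```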
Personally, I'm really pleased with how the website turned out! Though it's a bit finicky and has a couple of bugs, all of the animations and aesthetics feel very genuine and authentic to myself. Most importantly, they are all functional. I'm proud of pulling this off and I hope that the amount of effort and time put into this project can be seen by those visiting this site!
My friends (Fred and Isabella) and I are building a website that takes the experience of designing and creating your own jewelry/gifts online! We started this project for nwHacks 2025. Here is the Devpost and the Github :)
In the future we are hoping to implement online transactions using Stripe, improve the functionality, and potentially add a collaboration feature! We fully intend to participate in UBC's March 2025 Makers Market with tablets available for customers to design digitally or in person. My role in this project is similar to a product manager's! Though I offer to help with the front end, my friends benefit more from that experience than I do, as I am not looking to break into the software sector. There is still a lot for me to learn here as well, including branding and business analytics, among other things.
Feel free to check out our website! Below are some earrings I've made :)
Also a current WIP! We are looking to finish at least the first prototype in January 2025, and we are currently designing and ordering the first PCB to test a 3x3 keyboard!
Working together with my friend Fred, we started by initially understanding the inner workings of a keyboard. In this project my role was to design the PCB while Fred focused more on the firmware and coding aspects of the project. To begin, as usual, we looked on the internet and found a git repository that showed each step of how to build a keyboard (here). However, this repository didn't provide me with the proper insight to fully understand how the circuit worked. I googled some more and found this video, which made it clear how to efficiently wire a basic keyboard.
Commonly, technology follows a certain logic path to complete a task. Generally it would look something like this --
1. detect signal (something changes!) -- sometimes this requires knowing how much something has changed or in what way
2. signal changes something (we change something because of a change in the environment) -- cause and effect! We want to achieve something based on the fact that something changed
For a keyboard to work, you need to be able to detect which key is pressed and when; here we are not really bothered with the question of how much something has changed. In this case, our signal would be a reading of the voltage at a particular point in the circuit, and we only care whether it is HIGH or LOW -- essentially whether the switch is on or off. Typically, this would require a connection directly from the circuit to a pin on a microcontroller -- one for each input signal (in this case, one per key). However, most keyboards have around 100 keys, and it would be ridiculous to have a microcontroller with 100 input pins on a small device like this.
Instead, we can use diodes! Simply put, a diode only allows current to flow in one direction. In practice, that means there will only be current in the branch of the circuit containing the diode under certain conditions (when the diode is forward biased). With this knowledge, we are able to organize the keyboard like a grid, so that when a key is pressed, we can get the "coordinates" of the key using around 20 pins on the microcontroller. The signal returns a particular "row" and "column" to identify the pressed key, rather than an individual signal for each key.
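The actual firmware runs on the microcontroller, but the row-and-column idea can be sketched out in plain TypeScript. The 3x3 size matches our test board; the switch states below are made-up example data.

```ts
// In firmware, each row is driven in turn and the columns are read back; the
// per-key diodes guarantee a closed switch only connects its own row and
// column. Here the switch states are just a 2D boolean array for illustration.
type KeyCoordinate = { row: number; col: number };

function scanMatrix(switchClosed: boolean[][]): KeyCoordinate[] {
  const pressed: KeyCoordinate[] = [];
  for (let row = 0; row < switchClosed.length; row++) {
    for (let col = 0; col < switchClosed[row].length; col++) {
      if (switchClosed[row][col]) {
        // a (row, col) pair identifies exactly one key
        pressed.push({ row, col });
      }
    }
  }
  return pressed;
}

// 3 rows + 3 columns = 6 microcontroller pins to resolve 9 keys,
// instead of 9 pins with one wire per key.
const example: boolean[][] = [
  [false, true, false],
  [false, false, false],
  [true, false, false],
];
console.log(scanMatrix(example)); // [{ row: 0, col: 1 }, { row: 2, col: 0 }]
```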
tbd!
As a part of my IGEN 230 class, we had a semester to build a line-following robot to complete a series of three courses (see images below). We were provided with both an Arduino UNO and an ESP-32, along with a basic robot kit (including a chassis, two DC motors, IR sensors, and other required materials). My team was able to complete all three courses. As a part of this project, I was involved in designing and soldering the PCB, managing the power supply, and writing the PID code in Arduino/C++.
To begin, we made a couple executive decisions to determine the difficulty of the robot we wanted to build. These decisions were to use the ESP-32 rather than the Arduino Uno and to attempt to achieve PID control.
In class, we started by setting up a simple circuit connecting only one IR sensor to the Arduino UNO and testing that the readings fluctuated depending on the colour of a surface. From here we replicated the circuit four more times and organized it neatly on a board. Initially, we used a potentiometer to be able to adjust the reading values of the sensors. However, we eventually found that the readings were more stable if the potentiometer was shorted and the sensors directly connected. The resulting board had a schematic like the one below.
In order to see and use the values observed by the sensors, we needed to connect them to the ESP-32. Each sensor requires one analog input connection. The H-bridge used to control the motors also required six pins on the ESP-32, four of which are digital and two of which are analog (PWM) outputs to control the speed of the motors. The general pin-out of the ESP-32 is illustrated in the image below.
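To give a sense of how the sensor readings turn into motor speeds, here is a simplified sketch of a PID steering loop. It's written in TypeScript purely for illustration -- our actual code ran on the ESP-32 in Arduino/C++ -- and the gains, sensor weights, and speeds are placeholder values, not our tuned ones.

```ts
// Simplified PID steering loop, for illustration only. The real robot ran
// Arduino/C++ on the ESP-32; gains, weights, and speeds here are placeholders.
const KP = 0.8;
const KI = 0.0;
const KD = 0.3;
const BASE_SPEED = 150; // nominal PWM duty cycle sent to both motors
const DT = 0.01;        // loop period in seconds

let integral = 0;
let previousError = 0;

// Five IR sensors, left to right, each reading 1 on the line and 0 off it.
// Weighting them gives a signed "how far off-centre is the line" error.
function linePositionError(sensors: number[]): number {
  const weights = [-2, -1, 0, 1, 2];
  return sensors.reduce((sum, onLine, i) => sum + onLine * weights[i], 0);
}

// One iteration of the control loop: returns a PWM value for each motor.
function pidStep(sensors: number[]): { left: number; right: number } {
  const error = linePositionError(sensors);
  integral += error * DT;
  const derivative = (error - previousError) / DT;
  previousError = error;

  // steer by speeding up one wheel and slowing down the other
  const correction = KP * error + KI * integral + KD * derivative;
  return { left: BASE_SPEED + correction, right: BASE_SPEED - correction };
}

// Example: line drifting to the right of centre -> turn right to follow it.
console.log(pidStep([0, 0, 0, 1, 1]));
```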
In order to attach the sensors to the chassis, our team designed a sensor bracket in OnShape capable of screwing into the premade holes in the chassis and holding five IR sensors in a row facing the ground, with the ultrasonic sensor at the front. This design was not iterated on too much, as many of the issues could easily be fixed with post-processing after printing.
Overall, our project succeeded in completing the three courses by overcoming 90 degree turns, larger-than-90-degree turns, and gaps and intersections in the course lines (demonstrated in the videos below). I believe that further iterations on the sensor bracket would improve the reliability of our robot. If the sensors were spaced precisely so that, when the central sensor is aligned with the track, the two adjacent sensors sit just past the edges of the line on either side, any deviation from the track would be detected quickly, allowing our robot to adjust faster and run more smoothly. In addition, this would remove the occasional issue where no sensor detects the line while the robot is turning.
2024 HackCamp Winner! I attended HackCamp and met some really cool people (Jocelyn, Isabella, and Brandon), and together we built CapCap (links to Devpost), a website capable of tracking how much "studying time" a user spent focusing on their work. We built this using computer vision and facial recognition to track when and where a user would look away from their screen.
My role in this project mainly consisted of product design and completing the deliverable requirements! Our Figma and Git are linked here :) Below is our submitted pitch for HackCamp!