Design Engineering
Showcase 2020


Aida Manzano Kharman
Computer Vision
Dr Petar Kormushev
Computer Vision Engineer Intern

I worked as a Computer Vision Engineering Intern at U:CAAN Ltd, where I gained experience in image recognition algorithms, training models on custom datasets, working with Docker and APIs, and implementing image recognition applications on the Raspberry Pi.

U:CAAN Ltd. is a recycling start-up that aims to incentivise recycling by gamifying the process and targeting a younger and wider audience of users. It does so through two main products: Digital Advertising Recycling Pods (DARPs) and the LittaHunt app.

My role was to create the working technical prototypes for both products to demonstrate their viability, which I was able to successfully deliver by the end of the placement.

Figure 1: U:CAAN’s Digital Advertising Recycling Pod Ecosystem. (M. Pollen 2020)

Demonstration of Design Engineering Thinking and Skills

The two Design Engineering thinking methods I utilised the most throughout the placement were "failing fast" and "knowing what you don't know". These were crucial when working in computer vision, a highly specialised field in which I had no prior experience.

For each prototype I developed, I used the same design, make and test techniques taught in our course. I would kick off each project with a thorough brainstorming session to define its specifications, and thus the milestones to be achieved, since I was working with a relatively open brief. I would then research the existing technology in the field and the best methods to meet those specifications; this is where "knowing what you do not know" became crucial to the project's development. Once I had identified the best methods and practices, I moved on to implementation, entering a cycle of developing, testing and iterating before deploying the final application, whether that was a trained image recognition algorithm or a client-server application using a Flask API on Docker. It was at this stage that "failing fast" was pivotal.

The figures below show the projects’ evolution from first system design to final working prototype:

Figure 2: System design of the LittaHunt App. (A. Manzano, 2020)
Figure 3: Working client and server system. (A. Manzano, 2020)
Figure 4: System design of the Digital Advertising Recycling Pods. (A. Manzano, 2020)

Role and Contributions

My role was to develop the works-like prototypes of U:CAAN’s products, and my area of focus was demonstrating that the computer vision features of said products were feasible. My role included a combination of system design, research, software development and deployment.

I produced a system design proposal for the DARPs, researched existing image recognition algorithms and selected the most suitable one for the use case: YOLOv3. I then trained the algorithm on a custom dataset to successfully detect aluminium cans, glass bottles, PET bottles and HDPE bottles. Finally, I deployed it on a Raspberry Pi, which I connected to a camera, installed Darknet (the framework in which YOLOv3 is built and trained) and optimised to run neural network computations with a package called NNPACK.
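As a rough illustration of how a trained YOLOv3 model's raw output is turned into class predictions, the sketch below filters detections by confidence. The array layout and the class names are assumptions based on the four litter categories described above, not U:CAAN's actual code.

```python
import numpy as np

# Hypothetical class list matching the custom dataset described above.
CLASSES = ["aluminium_can", "glass_bottle", "PET_bottle", "HDPE_bottle"]

def filter_detections(raw_output, conf_threshold=0.5):
    """Keep detections whose combined confidence exceeds the threshold.

    `raw_output` is assumed to be an (N, 9) array in YOLOv3's layout:
    [centre_x, centre_y, width, height, objectness, score_0 .. score_3].
    """
    results = []
    for row in raw_output:
        scores = row[5:]
        class_id = int(np.argmax(scores))
        # YOLOv3 confidence = objectness * best class score.
        confidence = float(row[4] * scores[class_id])
        if confidence > conf_threshold:
            results.append((CLASSES[class_id], confidence, row[:4].tolist()))
    return results
```

In practice this step would be followed by non-maximum suppression to discard overlapping boxes for the same object.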

By the end of the first half of the placement, I had designed and implemented a system from scratch and built a working prototype in which the camera captures an image of the litter, runs the trained YOLOv3 model and returns the prediction via the Raspberry Pi.
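The capture-classify-report loop above can be sketched as a small pipeline function. Here `capture_image` and `run_yolov3` are hypothetical stand-ins for the camera driver and the trained Darknet model, which are not shown.

```python
def classify_litter(capture_image, run_yolov3):
    """Capture one frame, run inference and return the top prediction.

    `capture_image` returns a frame; `run_yolov3` returns a list of
    (label, confidence) pairs for that frame. Both are placeholders
    for the real camera and model calls on the Raspberry Pi.
    """
    frame = capture_image()
    detections = run_yolov3(frame)
    if not detections:
        return None
    # Report the most confident detection back to the caller.
    return max(detections, key=lambda d: d[1])
```

Passing the camera and model in as arguments keeps the loop testable off-device, with the real hardware wired in only at deployment.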

For the LittaHunt App I also brainstormed and produced the system design with the necessary technical specifications shown in Figure 2.

Once this was completed, I researched how best to implement the system by looking further into APIs and client-server applications, and found Flask to be the best fit. The final prototype was a client that posts an image to the server, where the inference code runs the algorithm on the received image and returns the prediction to the client. I then containerised the entire application with Docker, a program that allows developers to package an application with all the parts it needs, such as libraries and other dependencies, and deploy it.
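A minimal sketch of that client-server flow is shown below. The route name and the `run_inference` placeholder are illustrative assumptions; in the real prototype, `run_inference` would call the trained YOLOv3 model.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

def run_inference(image_bytes):
    # Placeholder for the real YOLOv3 inference call on the posted image.
    return {"label": "aluminium_can", "confidence": 0.92}

@app.route("/predict", methods=["POST"])
def predict():
    # The client POSTs raw image bytes; the server returns the prediction.
    image_bytes = request.get_data()
    return jsonify(run_inference(image_bytes))
```

A client could then post an image with, for example, `requests.post("http://localhost:5000/predict", data=open("litter.jpg", "rb").read())`, matching the port-5000 setup shown in Figure 10.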

Figure 9: Architecture of Docker. (Accenture 2018)
Figure 10: Flask client-server application running on Docker container, with port 5000 open. (A. Manzano, 2020)
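As a hedged sketch of what packaging such an application might look like, the Dockerfile below builds an image for a Flask server and exposes port 5000. The base image, file names and entry point are illustrative assumptions, not U:CAAN's actual configuration.

```dockerfile
# Hypothetical Dockerfile for the Flask inference server (illustrative only).
FROM python:3.8-slim
WORKDIR /app
# Install the Python dependencies (e.g. Flask) baked into the image.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy the application code and run the server on the exposed port.
COPY . .
EXPOSE 5000
CMD ["python", "server.py"]
```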

Finally, I carried out research on Amazon Web Services' (AWS) cloud solutions to find which was the most cost-effective for U:CAAN's applications while taking future scalability into account.

I set up a virtual machine instance on AWS Elastic Cloud Compute (EC2) with all the necessary requirements to run the trained YOLOv3, including CUDA, cuDNN, OpenCV, Darknet and of course the image recognition algorithm’s trained weights. This infrastructure will be crucial to U:CAAN as it begins deploying its products and its need for processing power and computing resources rapidly increases.

Figure 11: Basic structure of an Amazon Web Services Elastic Cloud Compute instance. (Creately, 2018)


This experience was extremely rewarding as well as challenging. Design Engineering's practical and fast-paced nature has prepared me to successfully deliver functioning technical prototypes in areas where I have little or no prior experience.

My supervisor Matt Pollen entrusted me with the full responsibility of the computer vision development of the projects, and far from this being daunting, it actually motivated me to grow as an engineer, resulting in a very positive and successful placement experience.


