hackathon_id | project_link | full_desc | title | brief_desc | team_members | prize | tags | __index_level_0__
---|---|---|---|---|---|---|---|---|
9,922 | https://devpost.com/software/1-5meters | 1.5meters
Social distancing, especially in public spaces, has been shown to be an important technique in reducing the impact of the coronavirus. Most public health authorities recommend keeping at least a 1.5 meter distance from all others to reduce the risk of becoming infected, and of infecting others. When confirmed cases are found to have been out in public, contact tracing teams must tediously work out who the infected person did, or could have, come into contact with. Use cases such as these represent the target applications of the 1.5meters system.
What does it do?
1.5meters is a CV/ML-based system for keeping track of social distancing. Using deep-learning models, the system is able to detect people in footage (such as surveillance footage) and localise their position in the world relative to those around them.
This enables the system to determine the distances between people and measure the degree to which people are socially distancing. With the information, the system is also able to track the potential spread of the virus from an infected individual, based on others they come in contact with.
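As a sketch of the distance step, assuming each detected person has already been localised to 3D world coordinates in metres (the threshold, data shape and function name below are illustrative, not the project's actual code):

```python
# Hedged sketch: given 3D positions of detected people (as a monoloco-style
# localiser might produce), flag pairs closer than the 1.5 m threshold.
from itertools import combinations
import math

MIN_DISTANCE_M = 1.5

def distancing_violations(positions):
    """positions: dict of person_id -> (x, y, z) in metres."""
    violations = []
    for (id_a, pos_a), (id_b, pos_b) in combinations(positions.items(), 2):
        dist = math.dist(pos_a, pos_b)  # Euclidean distance, Python 3.8+
        if dist < MIN_DISTANCE_M:
            violations.append((id_a, id_b, round(dist, 2)))
    return violations

print(distancing_violations({0: (0.0, 0.0, 2.0), 1: (1.0, 0.0, 2.0), 2: (5.0, 0.0, 2.0)}))
# -> [(0, 1, 1.0)]  (persons 0 and 1 are only 1.0 m apart)
```

Contact tracing then reduces to collecting, over time, everyone who appears in a violation pair with a known infected person.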
1.5meters does not use any form of facial recognition (only detection) and is intended to work within a variety of use cases that can utilise existing hardware, preserving individuals' privacy while at the same time providing useful information for tracking social distancing and performing contact tracing.
Structure
1.5meters consists of a backend and a frontend component. The backend side contains the machine learning components (in Python) and the frontend side provides a web-interface to enable rich 2D and 3D visualisation of the results, as well as useful metrics.
The frontend can connect to a backend using websockets technology for on-demand analysis, however it is also able to play back pre-computed results (recorded in JSON format).
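The JSON schema of the pre-computed results is not documented in the write-up; a plausible per-frame record that a frontend could replay without a live backend might look like this (all field names are assumptions):

```python
# Illustrative sketch only: one possible per-frame record for playback.
import json

frame = {
    "timestamp": 12.4,                      # seconds into the footage
    "people": [
        {"id": 0, "position": [0.0, 0.0, 2.0]},
        {"id": 1, "position": [1.0, 0.0, 2.0]},
    ],
    "violations": [[0, 1]],                 # pairs closer than 1.5 m
}

# Round-trip through JSON, as a recorded file would be.
recorded = json.dumps(frame)
replayed = json.loads(recorded)
assert replayed["violations"] == [[0, 1]]
```

A websocket backend could stream the same records live, so the frontend's playback and on-demand paths share one format.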
Take it for a spin
You can try out the front-end visualiser here.
Since there is no backend currently hooked up here, you can grab a pre-computed test file here to upload.
Authors
Moiz Sajid - Master of Informatics TUM
John Ridley - Master of Informatics TUM
This project was submitted to hakaTUM 2020 hackathon under the government and society stream.
Attributions
Pre-trained models (monoloco & pifpaf) and some ML boilerplate provided by the VITA lab at EPFL.
The library has additionally been modified to better facilitate real-world application in this system.
Built With
javascript
opencv
python
pytorch
Try it out
docs.google.com
github.com | 1.5meters | Keep track of social distancing without the ruler! | ['Moiz Sajid', 'John Ridley'] | [] | ['javascript', 'opencv', 'python', 'pytorch'] | 46 |
9,922 | https://devpost.com/software/risk-management-system-for-medical-department | GIF
Small GIF demonstrating the client side. We evaluate the inputted information and only make slots with low infection risk available.
Our matching system ensures that patients with low infection risk (marked in blue) don't mix with highly infectious patients (marked in red).
Inspiration
It is hard for patients to distinguish which symptom can be caused by what. Do I just have a seasonal allergy, a cold, or maybe COVID-19? Am I potentially infectious to others, or could others in the waiting areas actually infect me? A small survey we provide before booking an appointment with a family doctor gives a rough analysis of the patient's infectious status as well as an analysis of the patient's susceptibility and vulnerability to the most common infectious diseases. After that, the patient can directly book the appointment with a family doctor.
Currently, on the other hand, medical staff receive patients without any prior knowledge of their symptoms and without any indication of their risk level. This has resulted in some COVID-19 cluster infections in hospitals in the past, as well as quite common flu infections in waiting areas (see below). Therefore, to reduce infections in the facility, and more specifically in the waiting room, we also provide medical staff with information on whether a patient might be infectious and how vulnerable the patient probably is.
All appointments are then scheduled in a way that the risk of infections between patients is minimized. Moreover, we make sure that high risk patients (those with a high susceptibility and vulnerability to infectious diseases) have the least possible contact to any other patient that could infect them.
This system provides better risk management for both patients and medical departments. Patients with non-urgent symptoms can reduce their contact with other patients. Doctors and nurses can prepare the correct protection for incoming cases.
What it does
The system serves as a twofold solution for patients and medical departments. People consult the system in the form of a website (either stand-alone or embedded on the doctor's website) to answer the provided questions about their complaints. An easily adjustable decision tree is used as the basis for this.
The system works as a virtual GP, analyzing the symptoms and deciding the emergency level of the case. It can give suggestions, such as home remedies and what nutrients to take, for light cases. If data privacy is ensured, a high-risk case (for example, loss of smell in the context of COVID-19) could be reported by our system to the local Centers for Disease Control and Prevention.
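The triage idea can be sketched as a nested-dict decision tree; the questions, answers, and emergency levels below are illustrative assumptions, not the team's actual medical logic:

```python
# Hedged sketch of an easily adjustable decision tree for symptom triage.
# Editing the dict changes the questionnaire without touching the walker.
TRIAGE_TREE = {
    "question": "Do you have trouble breathing?",
    "yes": {"level": "urgent"},
    "no": {
        "question": "Have you lost your sense of smell?",
        "yes": {"level": "high-risk, report suggested"},
        "no": {"level": "routine"},
    },
}

def triage(node, answers):
    """Walk the tree with a sequence of 'yes'/'no' answers until a leaf."""
    for answer in answers:
        if "level" in node:      # already at a leaf
            break
        node = node[answer]
    return node["level"]

print(triage(TRIAGE_TREE, ["no", "yes"]))  # -> high-risk, report suggested
```

A real tree would of course be authored with medical input and carry many more branches.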
On the other hand, the system works as a risk management system for medical departments. Doctors and nurses receive information on the symptoms and risk level of incoming cases via our system. They know whether a case is potentially highly infectious or not. Thus, front-line medical staff are clear about what kind of equipment, such as an N95 mask or a protective suit, they should prepare and what protocols they should adopt.
The system is also a shared platform showing the real-time capacity of medical departments. It automatically separates low- and high-risk cases into different visiting time slots and helps reduce the number of cases sent to overloaded units. The intelligent allocation reduces cross-contact and keeps the whole medical system running smoothly.
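A minimal sketch of the separation idea, assuming fixed slot lists per risk level; the real allocator would also weigh unit capacity and urgency:

```python
# Hedged sketch: assign low- and high-risk patients to disjoint
# visiting time slots so the two groups never share a waiting room.
def allocate_slots(patients, low_risk_slots, high_risk_slots):
    """patients: list of (name, risk) with risk in {'low', 'high'}."""
    schedule = {}
    low, high = iter(low_risk_slots), iter(high_risk_slots)
    for name, risk in patients:
        schedule[name] = next(low) if risk == "low" else next(high)
    return schedule

patients = [("Anna", "low"), ("Ben", "high"), ("Cara", "low")]
print(allocate_slots(patients, ["09:00", "09:20"], ["11:00", "11:20"]))
# low-risk patients only ever get morning slots, high-risk only late ones
```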
Lastly, we made it possible to store all inputted symptom data anonymously in our database once an appointment has been set up. This can be an enormous novel source of symptoms tracked over time and linked to the area in which they occur. It could become a basis for better controlling the spread of infectious diseases, as symptom irregularities and peaks can be detected.
How We built it
Landing page and user interface for user and doctor respectively - Website by django
Symptom check form in decision tree - Python
Algorithm for disease classification and emergency level - Python
Back-end database - PostgreSQL + Python
Real-time calendar system - Coming soon
Messaging system for user and medical units - Coming soon
Risk reporting system connected to Centers for Disease Control and Prevention - Coming soon
Challenges We ran into
This is an ambitious project given the technical background of the participants. The first challenge we encountered was to build a working prototype, also called a minimum viable product, in a relatively short time. We brainstormed how to build a platform with all the core functions and also a clear user interface. We want to make sure the user experience is equally satisfying for patients and medical staff.
The second challenge was the integration and communication of all functions written by different team members. None of us had prior knowledge of the website framework django or the relational database PostgreSQL.
Accomplishments that We're proud of
A single system that fulfills the needs of both patients and medical staff. It provides:
symptom-based consultation
emergency and risk estimation
automated classification of diseases based on the database of symptoms
more understandable suggestions and booking for patients
early warning mechanisms before cases arrive
real-time intelligent allocation based on capacity of medical units
What We learned
hackaTUM_C0dev1d19 provides a great opportunity for interdisciplinary challenges. We decided to use this chance to break our old routines. We built the website entirely in Python with the django framework, which meant a complete paradigm shift. We also noticed the importance of interdisciplinary knowledge. Since every member has coding skills, establishing the workflow and visualization was quite smooth. Now we are confident we can work with many more frameworks and libraries, such as React, in limited time. We are experts in django now!
Our Next Step
Given that the level of digitization is much higher in the US and in East Asia (such as Japan, Taiwan and South Korea), we primarily target those markets. We will first cooperate with governments and persuade as many medical units as possible to join our system framework. We expect more medical units and doctors to adopt our system after they see the effectiveness of the protection among the first clients.
On the technical side of things, we still have a lot of ideas on how to improve and evolve our system. Obviously that process starts with optimizing what we achieved during the hackaTUM; this optimization is especially important because this is going to be a real-time system with clients concurrently working on shared resources. We want to ensure a safe and easy-to-use product, and our aspirations can only be fulfilled by thoroughly building up the product's base.
We especially want to focus on optimizing our matching algorithm, so that patients who are not infectious and maybe only need a vaccination are not thrown into waiting rooms with several potentially highly infectious people; this will slow down the spread of infectious diseases. Our clients are already rated on infectiousness as well as on whether they are part of a risk group (e.g. elderly people and people with medical preconditions). Our goal is to cut the contact of risk-group patients with potentially highly infectious people down to zero wherever possible, as those contacts are the ones with a higher potential to result in death.
Background information and references
The infection risk in the waiting areas of healthcare facilities is quite high. On top of that, a comparably large number of patients in the waiting areas have underlying conditions which predispose them to infections. Studies show that the most promising approaches to reducing the infection risk are to lower the number of patients in the waiting areas as well as the time they spend waiting there. Above all, patients with an immunosuppressive state or a chronic disease should not be waiting together with other infectious people (see Beggs et al. (2010), Potential for airborne transmission of infection in the waiting areas of healthcare premises: stochastic analysis using a Monte Carlo model, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2939637/).
An analysis of the flu infection rate data reported to the RKI in the 2018/2019 season showed that only about 1% of medical practices actively reported their data (source: https://influenza.rki.de/Saisonberichte/2018.pdf). With the symptom checking of our system, it would be possible to analyse the infection state of more patients, which could improve the assessment of infection spread within different regions.
Built With
bootstrap
django
postgresql
python
Try it out
github.com | Risk Management System for Medical Department | A system providing primary consultation for symptomatic complaints and estimating the risk level of each case. Thus, medical departments can adjust their protection and reduce infections in the waiting room. | ['Nicolas Röhrle', 'Hong-Mao Li', 'Martin Achenbach'] | [] | ['bootstrap', 'django', 'postgresql', 'python'] | 47 |
9,922 | https://devpost.com/software/openventsmartiotai | Inspiration
We did this project for the OpenVent Challenge from Infineon for hackaTUM C0dev1d19.
Features
monitoring the data of all vents directly on your phone
get push notifications when problems occur at patients' beds
see all changes made on the ventilators while you were off duty
Collection of resources
Before we started programming our application, we talked with an anesthetist who works with professional ventilators in his everyday work. He was able to tell us which values of the vent are the most significant ones he needs for his job and what those values mean.
How our project works
A python server collects the data from the ventilators. All necessary data is forwarded to the physician's phone through an Android application.
The Application has been built using Java and some open-source libraries to process and display the current data from the ventilators. The data is only received when the user needs it, which is only when they look at a patient profile.
The server, on the other hand, checks continuously whether a patient has a problem. After a problem is recognized, it sends an alarm in the form of a push notification, with the room number of the patient, to the doctor's phone. This technique delivers all important data to the physician while at the same time saving the phone's battery and resources.
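The server-side alarm check might be sketched as follows; the metric names, limits, and `send_push` hook are assumptions for illustration, not the actual OpenVent API:

```python
# Hedged sketch of the continuous server-side check: compare each
# ventilator reading against safe limits and push an alarm with the
# room number when one falls outside. Limits here are made up.
ALARM_LIMITS = {"pressure_cmH2O": (5, 35), "oxygen_pct": (21, 100)}

def check_vent(room, readings, send_push=print):
    """readings: dict of metric -> latest value for one patient's vent."""
    for metric, value in readings.items():
        low, high = ALARM_LIMITS[metric]
        if not (low <= value <= high):
            send_push(f"ALARM room {room}: {metric} = {value}")

check_vent(12, {"pressure_cmH2O": 38, "oxygen_pct": 60})
# -> ALARM room 12: pressure_cmH2O = 38
```

Because only alarms are pushed, the phone never has to poll, which is what saves its battery and bandwidth.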
If a doctor puts the app into break mode, they won't receive new alarms while they are off work. Nobody wants to be interrupted with work-related things while at home. When they come back, all changes that their colleagues made are displayed.
What we learned
how medical ventilators work
more about app development
a lot about networks
What's next for OpenVentSmartApplication
Tests with real ventilators
Find a better name
Built With
android-studio
java
openventapi
python
raspberry-pi
Try it out
github.com | OpenVentSmartApplication | A frontend Android App and a Backend Python Server that allow Physicians to Monitor the Open Vent System in Realtime | ['Fabian Proebstle'] | [] | ['android-studio', 'java', 'openventapi', 'python', 'raspberry-pi'] | 48 |
9,922 | https://devpost.com/software/running-help | Inspiration
helping the elderly or people in need to buy daily necessities
encouraging people who enjoy outdoor sports to help those people in need
What it does
people can make an order which includes items supplied by their nearby stores.
people can choose an order they want to take and deliver the packaged supplies to the buyer's house.
the remaining stock of items can be seen in the app.
people can directly pay for their order in the app.
the total length, the estimated travel time and a map of the planned route can be seen in the app.
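For illustration (the app itself is written in Swift and would use the Google Maps API), route length and estimated time can be approximated from waypoint coordinates with the haversine formula; the coordinates and running pace below are made up:

```python
# Hedged sketch: straight-line route length between waypoints plus a
# time estimate from an assumed pace. A real app would use road routing.
import math

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def route_estimate(waypoints, pace_min_per_km=6.0):
    length = sum(haversine_km(a, b) for a, b in zip(waypoints, waypoints[1:]))
    return round(length, 2), round(length * pace_min_per_km, 1)

km, minutes = route_estimate([(48.137, 11.575), (48.149, 11.568), (48.155, 11.582)])
print(f"{km} km, about {minutes} min at a 6 min/km running pace")
```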
How we built it
we built an iOS app written in Swift.
we use the Google Maps API.
Challenges we ran into
To mock up the data we need
To beautify our App
To realize all functions we mentioned above
Accomplishments that we're proud of
We have implemented the app we planned.
What we learned
make the mock-up as detailed as possible
What's next for Running-Help
To implement the connection with Google Map API
To separate the front- and back-end
Built With
map
swift | Running-Help | Running-Help | ['Stella Li', 'mmiiikeee Li'] | [] | ['map', 'swift'] | 49 |
9,922 | https://devpost.com/software/peermd | Inspiration
In light of recent events, it has become evident that health care systems across the world are not sufficiently prepared to provide health care in times of crisis.
Hospitals and medical staff are overwhelmed and WHO officials warn that health systems are ‘collapsing’ under the coronavirus.
Peer.md aims to connect patients in need of consultation with medical professionals available worldwide. As not all health care systems are equally strained at all times, providing access for clients to doctors in less afflicted areas helps make use of the full capacity of the systems by distributing patients efficiently. This enables patients in crisis areas who do not need acute treatment or a visit to a physician to receive professional care online. It is meant to be, in essence, a load balancer for health care systems. It can also be used to help people with no health insurance (28 million people in the US alone) or people in remote/less-developed areas, so COVID-19 is only one of a plethora of possible use cases. A side benefit of such a system for the COVID-19 pandemic is that the spread of the virus is better contained because many people with less severe illnesses do not risk contracting the disease by physically visiting a doctor.
Why Peer? Why MD?
Peer stands for peer-to-peer because the goal is to have the infrastructure work with as little input from a central authority as possible. MD is an abbreviation for Doctor of Medicine (lat. Medicinae Doctor, the American version of Dr. med.)
What it does
You can register either as a doctor or as a patient. If you are a patient, you can create medical "issues" which fall into one of the following three categories: "urgent", "normal" and "observation".
You can add "reports" to your medical issue and send "messages" to your doctor, which will be visible as part of the issue.
If the issue is "urgent", you enter a queue and match up with a doctor, who will provide you with immediate help via chat and video.
If the issue is "normal", it is assigned to a doctor who will look at the symptoms you described and ask you further questions by sending you messages. You can schedule a video call with your assigned doctor if your doctor agrees to it.
If the issue is an "observation", you are not matched up with a doctor and can add reports to your log until you believe you need to match with a doctor.
As your situation develops, you can change the category of the issue (e.g. if the medical issue you marked as an "observation" evolves to acute symptoms, you can change it to an "urgent" issue and enter a waiting room to see a doctor immediately).
After the issue is diagnosed, it is marked as "completed". If you feel worse again, you can move it into a different category and receive medical help. Your new doctor will see your previous reports.
Your medical profile stores all of your issues and general data about your health: allergies, preexisting conditions etc. all belong here.
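A minimal sketch of this issue lifecycle; the category names come from the description above, while the transition rules and class shape are simplifying assumptions:

```python
# Hedged sketch of peer.md's issue model: an issue starts in one of three
# open categories, accumulates reports, and can be recategorised at any
# time (including reopening a completed issue if symptoms return).
CATEGORIES = {"urgent", "normal", "observation", "completed"}

class Issue:
    def __init__(self, category):
        assert category in CATEGORIES - {"completed"}  # new issues are open
        self.category = category
        self.reports = []

    def add_report(self, text):
        self.reports.append(text)  # history stays visible to any new doctor

    def recategorise(self, category):
        assert category in CATEGORIES
        self.category = category

issue = Issue("observation")
issue.add_report("mild cough, day 1")
issue.recategorise("urgent")     # symptoms became acute -> queue for a doctor
issue.recategorise("completed")  # diagnosed; reports remain for later doctors
print(issue.category, len(issue.reports))  # -> completed 1
```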
How we built it
To speed up the process of building a prototype, we used Google Cloud Firestore as the database because of its flexibility and ease of use. Around it, we built a REST API in TypeScript that supports the different ways in which we need to query and store user data. For the user interface, we built a progressive web app in React.js/TypeScript. To enable direct video and audio communication between clients, we used WebRTC. In addition, we used a framework built on top of WebSockets to handle signaling (setting up peer-to-peer connections via WebRTC) between clients.
Challenges we ran into
It was not easy to get access to the webcam stream because every browser on every device handles audio/video media a little bit differently.
What we learned
Choosing a NoSQL database turned out to be a good idea because the structure of our data schemas changed multiple times during the development process.
It took a significant amount of time to set up basic functionality such as authentication/sessions, so if we decide to build a web app next time, we will probably write a boilerplate in advance so that we can focus more of our time on building the essential functionality of the application.
What's next for peermd
The eventual vision is to build an infrastructure that leverages technology to allow everyone on the planet to have access to a medical professional.
This hackathon is just the beginning. We are going to continue building the application and adding exciting features. Some of these include:
Ability to upload images and documents to a report
Confirmation of qualification for doctors (using a document)
Improved matching of patients and doctors (e.g. matching by multiple criteria such as languages spoken and type of problem)
Feedback system from patients after diagnosis
Strong data privacy infrastructure (because of the incredible sensitivity of the data we’re handling)
Review from multiple professionals for complex/ambiguous cases
Information videos on diseases, to allow doctors to focus more time on analyzing patients and answering questions
Computer-aided diagnosis by analyzing symptoms (first similarity to other diagnoses/decision tree, then more complex probabilistic models e.g. machine learning)
Prioritization of people in certain geographic areas (e.g. Italy or New York City during COVID-19 pandemic)
Financing of the platform (initially through donations and volunteering, later public funding from organizations such as the WHO and governments)
The hiring of professional doctors who get paid to consult on the service
Ship medical equipment and medication to developing nations, highly qualified doctors available online to analyze data from medical equipment and provide medical expertise necessary for accurate diagnosis
There are also already lots of bugs we need to fix, but we are looking forward to turning our vision into reality and helping more people around the globe have access to medical help. We are aware that the journey will be tough, but we believe that by continuing to work on this project we could improve and perhaps even save lots of lives.
Built With
firebase
google-cloud
node.js
react
typescript
Try it out
peer.md
peermd.org | peer.md | Peer-to-peer online medical consultations to balance load on health care systems | ['David Bonello', 'Alexander von Recum', 'Elisabeth Xia'] | [] | ['firebase', 'google-cloud', 'node.js', 'react', 'typescript'] | 50 |
9,922 | https://devpost.com/software/emergency-logistics-syx3r5 | test | test | test | ['Ramyashree Bhat'] | [] | [] | 51 |
9,922 | https://devpost.com/software/localrelief | Inspiration
Due to the shutdown of public life in many countries, many small businesses face an existential threat.
To grow closer as a community and forge stronger bonds with local businesses, LocalRelief attempts
to create a platform for businesses to outline how you can support them, enabling you to engage with
your favorite book store, while keeping the physical distance necessary to keep everyone safe.
Built With
graphql
node.js
react
typescript | LocalRelief | Support local businesses in times of lockdowns and social distancing | [] | [] | ['graphql', 'node.js', 'react', 'typescript'] | 52 |
9,927 | https://devpost.com/software/epsylon-mask-t-m-f8b7dz | mask
Epsylon Mask
Removable UVC light tubes
Valve for air inlet
Transparent PVC that empowers lips reading
Textile filter to prevent dirt and coarser particles from penetrating
air flow
Inspiration
“I want the country to know that if I end up on that ICU bed, it is because I was not given enough PPE to protect me. Why is it that when my shift ends, I peel off the same N95 mask that I have worn for 12+ hours straight? I have breathed in stale air all day on a unit rife with the dying”
— KP Morgan. Nurse at The Mount Sinai Hospital
The successful management of the COVID-19 pandemic relies on the expertise of healthcare workers, who are at high risk of occupationally acquired infection. The recommended infection control measures for healthcare workers include surgical masks to protect against droplet-spread respiratory infections and masks to protect against aerosol-spread infections.
However, every time a healthcare worker goes into a COVID patient's room, they expose themselves, putting themselves in jeopardy. It is not one patient and one exposure; it is multiple exposures, leaving workers at risk of becoming gravely ill each day without proper protection.
What it does
**Disinfection of the breathing air with UVC LEDs**
Epsylon is a reusable face mask with over 99 percent protection against infectious agents. It deactivates viruses and bacteria using UVC LED lights, which in our masks have been proven to be harmless to the body. To ensure a cleaner and safer environment, the textile filter prevents dirt and other coarse particles from entering the breathing space of the wearer.
In comparison with traditional surgical masks and the N95, Epsylon enables lip-reading, making the mask inclusive and accessible. It is light and can be worn for long hours, in addition to being durable, with an estimated lifespan of over 5 years.
Lastly, the mask is not only built for the safety of the wearer but also for those around them: the mask filters exhaled breath, making it close to impossible to infect other patients or health care workers, since the mask cleanses both the inhaled and the exhaled air.
Overall, the efficiency of Epsylon is much higher than that of the current masks on the market, for two key reasons:
1) Using silicone technology, loose points are efficiently sealed, providing more protection against the entry of unwanted particles
2) Currently, the maximum efficiency of masks on the market is 95%, whereas with Epsylon we were able to accomplish an efficiency of 99%. This number is estimated to rise to 99.99% with the use of higher-quality LED lights, raising the overall quality and hence the efficiency rates.
How I built it
The Epsylon mask is primarily built around UVC. There are currently UVC LEDs with an area of less than 5 x 5 mm that show an optimal scattering angle of UVC light. They emit disinfecting light with wavelengths between 260-270 nm, and each LED has an output of at least 80 mW. The LEDs can be operated with a voltage of less than 8.8 V.
Every virus needs a different dose of UV light to neutralize it. The reduction always scales in log10 steps. For example, if you need 10 mJ/cm^2 for a 90% reduction, then you need 20 mJ/cm^2 for a 99% reduction.
If COVID-19 were similar to the Spanish influenza virus, 3 mJ/cm^2 would suffice for a 90% reduction. For a respiratory mask with a 99.99% reduction, approx. 14 mJ/cm^2 is needed and would suffice.
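The log10 scaling above can be written as a small calculator; the inputs are the write-up's example figures, and quoted practical doses (like the approx. 14 mJ/cm^2 above) may include safety margins beyond this simple linear model:

```python
# Dose grows linearly with the number of log10 reductions:
# one log10 step (90%) needs d90, two steps (99%) need 2 * d90, etc.
import math

def dose_needed(d90_mj_cm2, reduction_pct):
    """UV dose in mJ/cm^2 for a given percentage reduction."""
    log_reductions = -math.log10(1 - reduction_pct / 100)
    return round(d90_mj_cm2 * log_reductions, 2)

print(dose_needed(10, 99))  # -> 20.0, matching the example above
print(dose_needed(3, 90))   # -> 3.0
```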
The reusability feature comes from the LEDs, which filter the air with UVC light. The LEDs in use have a lifetime of 1 year if used continuously, but can last up to 5 years otherwise, which will often be the case. Additionally, the central part of the mask is made of silicone, which boasts strong durability and reliability. The LED tubes are easily replaceable in case of damage. These properties make the mask washable, reusable, and extremely durable.
Challenges I ran into
When we started out the project, we did not have a clear direction in mind. The whole team was looking for ways to help health care workers in the most effective way possible. While researching the challenges most commonly faced by health care workers during the COVID-19 global pandemic, we identified the sub-par protection provided by the face masks currently used in the health care industry.
The first obstacle was to figure out how to beat the current stats and build a solution that combats and addresses a variety of problems at once. We started by understanding the problems with the current masks that put health workers at risk: exposure to viruses and bacteria, build-up of sweat, inability to communicate with disabled people, exposure to a harmful environment, and easy penetration of dust particles into the breathing space.
Once we had identified all the key areas where the current surgical masks and the N95 fall short, our next challenge was to pick out the best technologies to find the optimal solution for each. And so we did, by including materials such as silicone, which makes the mask extremely durable, reusable and easy to cleanse thoroughly.
Accomplishments that I'm proud of
The Epsylon mask team is proud of the current outcome; even at this initial stage, the mask has minimal limitations, which we are looking to fix soon. We are proud that the mask is able to address and check off multiple pain points of health workers at once.
Additionally, after spending days researching, we discovered the harrowing truth about the current situation of the health care industry. We found out that the real number of deaths of front-line health workers is being hidden in the media, and that the families are heavily impacted as well.
To think of it, it is just a mask, but it has the power to ensure psychological balance and to relieve health care workers of the distress of deciding whether to attend to a patient or not. Epsylon has the capability to give nurses and health care workers confidence that they are safe, and providing that safety net to our heroes has been the biggest accomplishment of this project.
What I learned
Throughout this project, there has been a large variety of learning for the entire team, starting with a greater respect, love, and understanding for health care workers and other front-line workers during these tough and uncertain times.
The use of a variety of hardware technologies helped us gain an in-depth understanding of the effect of UVC light on the human body, and we optimized the product to ensure that the UVC light is unable to pass through the specialized materials of the tubes. Additionally, silicone, although often just a replacement for plastic, has been used as a key feature; this taught us the large variety of applications of this simple material, which single-handedly made our product more durable, reliable, reusable, and more inclusive. Consequently, the team learned the importance of inclusivity, which in this situation is often overlooked and easily ignored, and made sure that this health-focused product enables lip reading to ensure the safety of all of humankind.
This has also been a huge business lesson for the team, as designing, creating, and understanding the product took not only scientific and technological knowledge but also an understanding of the market, the market size, the target audience, the unique selling point and the competitive landscape, to name a few. Researching all of the above helped us gain insight into the importance of our solution and a more in-depth understanding of the targeted problem.
What's next for Epsylon Mask
Epsylon Mask is more than just a product; it is a gesture of mutual reciprocity. As the tagline reads, "Protecting Those Who Protect Us", this face mask of the future aims to help the heroes of the modern day just as much as they have helped mankind through their tireless service. At the current stage, the product has a protection rate of 99%, and through research and in-depth knowledge of the technology, the team knows that it is very much possible to raise the number further to 99.99% through scaling and high-quality part production.
Finally, the Epsylon Mask is not only protective against the coronavirus; it is built to protect against any influenza virus, hence we want to further explore possible uses and markets for the product beyond the current target.
Built With
led
silicon
uvc
Try it out
www.figma.com | Epsylon Mask T&M | Protecting Those Who Protect Us | ['Timo H', 'Jocelyn Calderon', 'Andrés Guzmán'] | ['1st Place'] | ['led', 'silicon', 'uvc'] | 0 |
9,927 | https://devpost.com/software/test-project-fej821 | PreCon AR
Inspiration
In the current situation, people are still struggling to adapt and to carry out daily activities. The government enforces rules and gives suggestions so that the public can deal with this problem. But there are many people who still underestimate the fast transmission of COVID-19, disobey the rules, ignore the advice, or are unsure about what should not be done.
What it does
Using Augmented Reality (AR), we made PreCon (Prevent Contagious) in order to help the public in experiencing real life situations inside AR. The app helps to educate people on how to prevent this virus from being transmitted. Inside the training simulation, people will interact with the virtual environment and learn how they should act accordingly, because the important thing is to prevent something bad from happening to us rather than try to cure it. Since all the events happen inside the virtual world, this training simulation is safe to use by everyone.
Besides the AR training simulation, PreCon has several other features. Here are all the features of this app:
AR Training Simulation. Experience real life training simulation and interact with the object inside the virtual world.
Advice from WHO with easy to learn illustration step by step.
AR Selfie. Take a selfie photo with a mask and face shield.
COVID-19 status all around the world.
An intelligent assistant to answer our questions.
How I built it
Unity as our main tools for making the application.
Using C# Language.
ARFoundation to provide the Augmented Reality technology.
Postman public API.
Wolfram Alpha API as Intelligent Assistant.
echoAR to provide a flexibility for 3D object and real-time info update.
Challenges I ran into
I needed to find more information about Augmented Reality and do a lot of trial and error, which sometimes made me feel overwhelmed.
Accomplishments that I'm proud of
This project, actually. I am proud that I can be part of "helping others" in this pandemic in my own way, especially helping medical teams prevent other people from being infected. Helping people with Extended Reality technologies really changes the view of what AR/VR/MR applications can do.
What I learned
From this project, I learned new things about AR, how to solve problems, and how to adapt to new situations. I also explored COVID-19 further and, for myself, learned more about what we should and shouldn't do, along with advice for staying healthy during this pandemic.
I also learned how to integrate my application with the Wolfram and echoAR systems.
What's next for test project
Adding more training cases to the current application. If we can continue with this project, we may also try a Virtual Reality training simulation, which would make the training even more immersive.
Built With
android
api
arcore
arfoundation
c#
echoar
speech-recognition
unity-technologies
wolfram-technologies
Try it out
precon.rgplays.com | PreCon (Prevent Contagious) AR - AR Training Simulation | Educate people using Augmented Reality, more like training simulation about how to react to this current situation, to keep them healthy by give them example, virtually. This way can be more safety. | ['Maynard Lumiu', 'Steven Sean', 'Mario P'] | ['2nd Place', 'The Wolfram Award', 'Best Project using echoAR'] | ['android', 'api', 'arcore', 'arfoundation', 'c#', 'echoar', 'speech-recognition', 'unity-technologies', 'wolfram-technologies'] | 1 |
9,927 | https://devpost.com/software/gotour-vr | Landing Page
Tourism Information Detail
List of Tourism Objects Gallery
360 View (VR Mode On)
360 View (VR Mode Off)
Integrated Multiplayer VR Room
Inspiration
I'm really inspired by how AR/VR could change and evolve current technology to the next level, changing how we interact with everything: media, our daily activities, people, and even the places we travel. The true power of Virtual Reality is that we can create an infinite world with no constraints or boundaries, which is the greatest way to do things we can't do in the real world, especially in this epidemic when most places are closed.
This inspired me to build a WebVR experience called GOTOUR VR.
What it does
This WebVR creates an infinite world where we can travel to any listed tourism object, with a more engaging, interesting, and realistic (but virtual) experience.
GOTOUR VR Architecture
This WebVR is able to:
Experience tourism objects virtually but realistically.
Get tourism objects detail information.
Send message to the tourism object.
Call the tourism object.
Get a route and direction to get to the tourism object.
Set VR Mode to make the experience more realistic.
Interact with other people inside VR Room by moving their avatar and talking with their microphone.
VR Mode Flow
How I built it
I built this WebVR with these frameworks and tech-stacks :
Aframe Js
Aframe Js is the core of WebVR javascript library that i use for this project.
Networked Aframe
Networked Aframe is a powerful A-Frame component used to integrate WebRTC into our WebVR in a very easy and simple way. With it, our WebVR can send realtime data over sockets, creating a multiplayer world that many users can join, with each user's 3D avatar position and voice streamed into the scene. At this point the WebVR becomes a true virtual world we can interact with.
Firebase Realtime Database and Hosting
Since this is a Virtual Reality project, it is essential to use realtime technology so we don't have to keep refreshing the page, which would incur long load and rendering times. I use Firebase Realtime Database to read and store the data needed in this project, and I host the WebVR on Firebase Hosting; since Firebase is powerful, I can easily integrate more features with other Firebase technologies in the future.
Mozilla Web Speech API
To make the assistant speak and understand what we are saying, I use the Mozilla Web Speech API, since it is the easiest way to implement Text-To-Speech and Speech-To-Text and comes with good examples.
Figma
For the 2D assets I need to put into the 3D world of the WebVR, I use Figma, since it is really simple but powerful.
Cinema 4D
For the 3D assets, especially the assistant avatar, I use Cinema 4D, since it is really easy to use, performs well, and I have been familiar with it for a long time.
Challenges I ran into
I ran into a lot of challenges:
Rendering Time
Rendering time is one of the biggest challenges in building a WebVR, especially with a lot of assets, models, and a complex system. So far this is still a weakness of this WebVR, since I focused on building it first rather than making it lighter with optimization techniques and libraries.
Requirements and Compatibility
Since WebVR technology is still fairly new and this is a cross-platform project, there are a lot of compatibility issues; users on old browsers, phones, operating systems, or slow internet connections may hit various problems. In that case I tried a lot of browsers in various versions to get the best performance when recording the demo video.
Accomplishments that I'm proud of
Even with those challenges, I am proud that I was able to build such a WebVR. Honestly, I have never seen another WebVR like this, so I am happy that all my AR/VR experiments and hard work paid off in how successfully this project was built, even though it still needs a lot of improvements.
What I learned
Innovation is really important; that's where every success story comes from. We should not be afraid of failure, because everything has its own pace on the way to the goal. Remember that consistency is the key :D
What's next for GOTOUR VR
Performance Improvements
Compatibility Improvements
Add more Tourism Objects
Add a user authentication
Improve the multiplayer feature so that people can gather, meet, and talk together in a tourism object virtually with their own avatars.
Built With
360
ar/vr
assistant
cinema4d
css
figma
firebase
google-cloud
google-maps
html
javascript
mozilla-web-speech-api
pwa
realtime-database
speech-to-text
text-to-speech
vrcontroller
vrgear
web
webrtc
Try it out
gotour-vr.web.app | GOTOUR VR | Cross Platform Realtime Full Dive WebVR with Smart Assistant to Visualize and Experience Tourism Objects Virtually but Realistically. | ['Surahutomo Aziz Pradana', 'Suratno .'] | ['Track Winner: Travel & Mobility', '3rd Place'] | ['360', 'ar/vr', 'assistant', 'cinema4d', 'css', 'figma', 'firebase', 'google-cloud', 'google-maps', 'html', 'javascript', 'mozilla-web-speech-api', 'pwa', 'realtime-database', 'speech-to-text', 'text-to-speech', 'vrcontroller', 'vrgear', 'web', 'webrtc'] | 2 |
9,927 | https://devpost.com/software/covid-analyst | Our User Interface shown on a Laptop
Heat Map with Risk Assessment making use of US Census Data Set
Our main screen showing the Heat Map, News, and Chatbot
Inspiration
Specific, region-targeted information surrounding COVID-19 is hard to come by. At best, a search on Google can get you the number of reported cases for your state along with countless news articles with questionable accuracy. We wanted to create an information hub for everything you need to know about COVID-19 in your region.
What it does
COVID Analyst uses machine learning and spatial data analytics with a combination of reliable data sources and research publications to give you an address-level risk heatmap of COVID-19 in your area. In addition, it scrapes credible news outlets to give you a feed of news in your area and an AI-powered chatbot will answer any questions you may have about COVID-19.
Key Features:
A risk heat map of your region created using published statistics from WHO and APM Research Lab and data from JHU and the US Census through spatial data analytics and machine learning.
A web scraper that provides the user with relevant, credible COVID-19 news in their region.
A chat bot that answers questions pertaining to the website and COVID-19 using live data and can conduct a symptom-based screening.
What separates us:
Our map provides information on an address-by-address level.
We were the first ones to receive and work with a dataset including race/gender/age information for the US from APM Research Lab (American Public Media).
How I built it
Using socio-economic information from Census Tracts (data such as poverty rate, education, race-ethnicity, population pyramid, proximity to health care, and old age), we produced heatmaps to find communities that are at higher risk for COVID-19. Data is extracted and stored in a spatial data model, and we use machine learning to evaluate the risk factor, using published statistics as weights.
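As a hedged illustration of that weighting step (the feature names and weight values below are invented for the example, not the project's actual published statistics), a per-tract risk score can be sketched as:

```python
# Toy weighted risk score for one census tract. Features are assumed to be
# normalized to [0, 1]; the weights stand in for the published statistics.
tract = {"poverty_rate": 0.32, "over_65": 0.18, "healthcare_distance": 0.55}
weights = {"poverty_rate": 0.40, "over_65": 0.35, "healthcare_distance": 0.25}

risk = sum(weights[f] * tract[f] for f in weights)  # 0 = low risk, 1 = high
print(round(risk, 3))
```

Keeping the weights summing to 1 leaves the score on the same 0-1 scale as the inputs, which makes tract-to-tract comparison on a heatmap straightforward.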
Our web app is fully integrated with Google Cloud, running on App Engine, and using Places API and Dialogflow for the chat AI. We used React.js/Node.js for our web stack and conducted scraping with Node.js using Puppeteer.
Challenges I ran into
We were faced with hardware limitations (it took much more time than expected to process the data), so our app is currently limited to Wake County, North Carolina.
What's next for COVID Analyst
We started this as a sample for Wake County North Carolina (population about 1.1 million) but this work can be easily scaled to the entire US with the appropriate resources and time. We can also create a native mobile application along with the web app.
Built With
dialogflow
google-cloud
node.js
places-api
puppeteer
react
Try it out
github.com
covidanalyst.tech | COVID Analyst | AI-powered targeted analytics for the COVID-19 pandemic | ['Can Koz', 'Marina Tai', 'John Javad Roostaei'] | ['1st Place Overall Winners', 'Google Cloud 2nd Place', 'Track Winner: Health & Fitness'] | ['dialogflow', 'google-cloud', 'node.js', 'places-api', 'puppeteer', 'react'] | 3 |
9,927 | https://devpost.com/software/spark-k6txdh | Inspiration
- Needed a place to track and plan our projects.
- Frustrated over apps that are specialized to do only one thing resulting in having to keep many tabs/apps open at the same time.
- Saw a need for an all-in-one productivity app.
- Believed it had the potential to increase the productivity of those affected by COVID-19.
What it does
- Features: Calendar, Tasks, Team Administration, Project Tracking, Messages, Meetings and Zoom Integration, Discussion Board
- A cumulative productivity app that puts all the apps you need into one
- Assists teams and organizations by improving productivity and tracking the progress of projects
- Highlights team collaboration with SparkRooms to coordinate team members
How we built it
- Written in HTML/CSS/JS, Python
- Written in Visual Studio Code
- Utilized Travis for continuous deployment and autonomous configuration
- Divide and Conquer - Frontend, Backend
- Git, VSCode Live Share, and Discord
Challenges we ran into
- Responsiveness & Mobile compatibility
- Rendering iframes on the dashboard
- Saving arbitrary user data within Firestore
- Git collisions: Committing changes to the same lines at the same time
Accomplishments that we're proud of
- Firebase for hosting, database, and authentication
- Fully functional login system with Google Oauth 2.0 Authentication
- Dashboard to render iframes to show lots of content in a single page
- Travis CI/CD
- Lots of backend JavaScript to process the website
What's next for Spark
- Expand Spark for enterprise and educational usage
- Increase responsiveness of site to enable mobile usage
- Create a way to send personal messages to team members
- Create more tools for users eg. a personal File Storage Method
Built With
bootstrap
css3
flask
fullcalendar
google-cloud
html5
javascript
jquery
node.js
python
travis-ci
Try it out
sparkapp.cf
github.com
docs.google.com | Spark | An intuitive and empowering all-in-one productivity application. | ['Raadwan Masum', 'Rohan Juneja', 'Safin Singh', 'Aadit Gupta'] | ['Wolfram Honorable Mention', 'Track Winner: Work & Economy'] | ['bootstrap', 'css3', 'flask', 'fullcalendar', 'google-cloud', 'html5', 'javascript', 'jquery', 'node.js', 'python', 'travis-ci'] | 4 |
9,927 | https://devpost.com/software/ace-pa84w5 | App Logo
Meet the Team !!
ACE- Login
ACE- Registration
ACE- Interest
Student Profile
ACE- Chapter List
ACE- Chapter
ACE- Solar System
ACE- AR linkage
AR Interface
ACE- Quiz
ACE- Quiz
ACE- Recommended Activity
ACE- Description of Activity
ACE- Description of Activity
Teacher's Profile
ACE- Teacher's Section
ACE- Student Performance
ACE- Student Performance Graph
ACE- Student Performance Graph
ACE- Student Performance Graph
ACE- Student Performance Graph
Inspiration
Being university students, we feel the E-learning system is a bit tiring and boring. So how can we make E-learning fun and engaging?
Hence we came up with an application that could help students enjoy their studies rather than look at it as a task.
What it does
_ ACE _
is an E-learning application that allows students to enjoy their academics from a different point of view. The interactive augmented reality interface helps students quench their curiosity and reach a high level of learning passion and inspiration. It also improves imagination and helps in gaining the student's attention.
It also includes a journal so that they can pen down their understanding of the chapters.
The most interesting part of our app is that it uses a recommendation system to suggest activities to students according to their selected interest.
How We built it
We designed our application using Adobe XD and Adobe Photoshop and used Adobe Dimension, Adobe Aero & Adobe Creative cloud for creating AR solar system.
To implement AI in our system, we used the R programming language to create student performance graphs and the recommenderlab library to recommend activities to students.
Challenges We ran into
1. Time management
2. Difficulty in deciding what features to include and not include
3. Competitive Market
4. Working with AR:
Finding an AR platform to work with was quite hard because we wanted a platform that is easy to work with, requires little memory and storage, and can be integrated with different operating systems. Currently the app employs Adobe Aero and Dimension.
Accomplishments that We are proud of
+ Building An AR model
+ Building a recommendation system from dummy data
What We learned
We learned that as long as we are good team players and dedicated, we can achieve anything. Doing something like this virtually made us understand that no task is impossible.
It is important to choose your team wisely, making sure all your team members are open to discussion and ready to work.
What's next for ACE-
A promising future
1) Finding ways to engage students to make their own AR models and grow with the technology.
2) Adding an AR feature where any object from our environment can be copied and pasted into our presentations or assignments, making them more interactive.
3) Using student data for further analysis to predict behavior and make suggestions.
Built With
adobe-aero
adobe-creative-sdk
adobe-dimension
adobe-xd
dplyr
ggplot2
ggrepel
photoshop
purrr
r
recommenderlab
tidyr
Try it out
github.com
youtu.be | ACE | School Comes Home | ['Jumana Nagaria', 'Meera Ramadas', 'meghana karra', 'Priya Venkadesh'] | ['Track Winner: Education & Empowerment'] | ['adobe-aero', 'adobe-creative-sdk', 'adobe-dimension', 'adobe-xd', 'dplyr', 'ggplot2', 'ggrepel', 'photoshop', 'purrr', 'r', 'recommenderlab', 'tidyr'] | 5 |
9,927 | https://devpost.com/software/vr-concerts-powered-by-ai | Watching the concert
Concert VR view
Project webpage
AI metadata extraction - Azure Video Indexer
Team behind the project
Concerts are being cancelled. Artists are giving online shows. New models of engagement are coming.
Inspiration
We like music and we miss concert experiences, so we decided to create a new digital platform where concerts can be watched in Virtual Reality. The platform will allow artists to easily create concerts and share them with their fans. With the use of Artificial Intelligence, the viewing experience will be much better than on the standard video-sharing platforms we have today.
What it does
Artists upload videos to the platform and define when a certain concert should go live. Social media sharing allows the artist to share concerts with viewers. Viewers can join and watch concerts via web, mobile or in Virtual Reality.
When a video is uploaded to the platform, it is processed by Azure Video Indexer, a powerful AI tool that automatically extracts advanced metadata from the video. With the extracted data we are able to enhance your viewing experience.
How we built it
We used different technologies to build this project. The main site was built with HTML, JavaScript and PHP. For the Virtual Reality part we used Unity. Video processing is done with Microsoft Azure Video Indexer for scene and keyframe detection. The solution is hosted on Microsoft Azure.
Accomplishments that we're proud of
We are proud that we managed to build VR solution powered by AI in such a short time.
We are also proud that this project was built from home with team members located in 3 different countries: Sweden 🇸🇪, Germany 🇩🇪
and Croatia 🇭🇷.
What's next for VR Concerts powered by AI
We identified following steps for our project:
donation option in order to support the artist
different stages that are configurable by the artist (indoor, outdoor, effects...)
live streaming feature
smart video editing so that only the artist is displayed on the virtual stage
Video Copyright
Music video used in our demo project is streamed from
here
. All rights to that video belong to the awesome band Goldfinger. We would also like to say that we really appreciate their efforts in making "stay home" / "quarantine" music videos.
Built With
ai
azure
azurevideoindexer
c#
css
html
javascript
unity
webgl
Try it out
vrconcert.blob.core.windows.net | VR Concerts powered by AI | Digital platform to host online concerts with VR support, powered by AI. | ['Goran Vuksic', 'Niksa Vlahusic', 'Mohamed Bouallegue'] | ['Track Winner: Music & Entertainment'] | ['ai', 'azure', 'azurevideoindexer', 'c#', 'css', 'html', 'javascript', 'unity', 'webgl'] | 6 |
9,927 | https://devpost.com/software/staysafedonatesafee | Inspiration and whole story
This app is a blockchain-based charity app deployed on the ethereum test network.
The core purpose of this app is a transparent charity platform on the blockchain. With the rise of the coronavirus, a huge number of fake charities have emerged that loot people during these tough times. As thousands of people are diagnosed with COVID-19 and are unable to work, many are finding it hard to make ends meet and are asking for donations. At the same time, scammers are creating fake GoFundMe pages designed to tug at your heartstrings and empty your wallet.
This issue is addressed by a blockchain-based transparent charity web app that ensures people do not get cheated, since there is transparency in every step. By minimizing administrative costs through automation, providing more accountability through traceable giving milestones, and allowing donors to see more clearly where their funds are going, blockchain may help restore some of the lost credibility to charities that prove worthy of the public's trust.
This web app is backed by a smart contract; web3 is used to integrate the smart contract with a React frontend, and the DApp is deployed on the Ethereum network. The first dashboard page shows the list of open charities a user can contribute to; these are sample charities that have registered on the platform. An organization can register itself on the network by giving details about itself and setting the minimum contribution it wants from each person.
Anyone willing to donate can view a charity and get all information about it. People can contribute Ether to charities using their basic MetaMask account. Those who contribute more than the minimum contribution set for a particular charity automatically become approvers and later have a say in how their money is spent; this ensures transparency and prevents fraud. The organization can then make requests to spend the money for welfare or other purposes. All approvers who have contributed to the charity can see the request, and a majority vote of the approvers is required to execute the transaction. This whole logic is backed by a smart contract and works on trusted consensus. End-to-end transactions have been implemented with the help of MetaMask, and the whole web app is deployed on the Ethereum Rinkeby test network.
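The contribution/approval rule described above can be modeled in a few lines of Python (the real logic lives in the Solidity smart contract; the class and method names here are illustrative, not the DApp's):

```python
# Toy Python model of the approval rule. Donors above the minimum
# contribution earn voting rights; spending needs an approver majority.
class Charity:
    def __init__(self, minimum_contribution: int):
        self.minimum = minimum_contribution   # set by the organization
        self.approvers = set()                # donors with voting rights

    def contribute(self, donor: str, amount: int) -> None:
        if amount > self.minimum:
            self.approvers.add(donor)         # above-minimum donors get a vote

    def request_approved(self, votes_for: int) -> bool:
        # A spending request executes only with a majority of approvers.
        return votes_for > len(self.approvers) / 2

charity = Charity(minimum_contribution=100)
charity.contribute("alice", 150)
charity.contribute("bob", 200)
charity.contribute("carol", 50)               # below minimum: no vote
print(sorted(charity.approvers), charity.request_approved(2))
```

The on-chain version enforces the same invariant with Ether transfers instead of plain integers, so no single party can move funds without donor consensus.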
The blockchain app is deployed on Blockstack.
Another component of this web app is a page built using UiPath and hosted on MongoDB Atlas, which shows the symptoms and the precautions one has to take to stay safe. UiPath has been used to extract and scrape data about precautions, symptoms, and statistics from different places, and relevant tweets on developments and updates of the pandemic are also shown.
Domain.com
staysafeanddonatesafewithus.online
Built With
ethereum
mongodb
react
solidity
uipath
web3
Try it out
github.com | StaySafeDonateSafe | This app is a blockchain-based charity app deployed on the ethereum test network which instills transparency and prevents frauds | ['Sai Rishvanth Katragadda'] | ['MLH: Best use of Blockstack', 'Track Winner: Finance & Digital Money'] | ['ethereum', 'mongodb', 'react', 'solidity', 'uipath', 'web3'] | 7 |
9,927 | https://devpost.com/software/framble | Inspiration
Technology can be a blessing and a curse at the same time. Just look at your smartphone: an entire book library can fit in your pocket, yet we use it to look at memes and get away from the real world, forgetting the importance of living in the present moment. What is the first thing we do in the morning? We check our phone for notifications. That's a very unhealthy habit. The average user checks their device 47 times a day, 85% of smartphone users will check their device while speaking with friends and family, and the average user spends 1 hr 16 min a day on the top 5 social media apps. With the help of Framble, you can bring your Instagram pictures right onto your table and get inspired without checking your phone every 5 minutes.
On the other hand, some people lack the technical knowledge to access these social media platforms even if they wanted to. Back in the day, most grandparents lived in the same household as their families; today, thousands of miles may separate family members, and more than a few grandparents find that being far from their grandchildren is emotionally stressful. With the help of Framble, our grandparents can be supplied with fresh pictures of you and your family. It gives you the ability to stay close to your loved ones all year long. Imagine being on vacation with your partner and kids, and you manage to capture your son's first time riding a bike. If his grandmother has a WiFi-enabled Framble, she can enjoy the same experience as you, just as if she were there.
What it does
Framble is the world's first E-Paper picture frame, designed to transform your social media addiction into a daily source of inspiration based on your own memories, in the form of an interactive home decoration tool.
Framble has WiFi, Bluetooth, and a battery that lets the frame run up to six weeks on a single charge. The frame is fully handcrafted from eco-friendly recycled wood, which makes it even more special.
When we started working on Framble, we decided to create the best digital picture frame there is to inspire people’s everyday life through their own memories and help them understand the importance of the present moment.
How I built it
The problem with regular digital picture frames is that they look like tiny displays, they need power cables to run, their design doesn't fit most interiors, and manual picture upload is required via a microSD card or, at best, an online interface or app. Framble is the first digital picture frame that uses E-Paper display technology, which not only feels like real paper but can also run up to 6 weeks on a single charge. Thanks to its wonderful wood design, it looks like an authentic frame, fits most interiors, and stands out. Framble can also stream your latest pictures from your social media, including Facebook, Instagram, Google Photos, and even Pinterest.
Challenges I ran into
We have already reserved and built our website (
www.framble.io
), registered "Framble" as a trademark, built 5 prototypes, started developing our Android app, created promotional videos, and received positive feedback from two Norwegian CTOs with multiple successful companies behind them. We have sourced all the crucial electronic components, and we already have 1000 followers on Facebook and 50 early subscribers on our e-mail list, even though our campaign hasn't started yet. Our next task is to increase our audience, validate the market, launch on crowdfunding websites (Kickstarter, Indiegogo), then produce the first batch of 1000 units. If we fail to validate the market for the digital picture frame, we will sell the app separately. We have already received an offer from one of the biggest digital picture frame manufacturers of $4 per download.
Accomplishments that I'm proud of
We have tremendous advantages over our competitors, to name a few: unlimited viewing angle, glare-free display, 6 weeks of battery time, social media connectivity, and a stylish rustic frame made of real, recycled wood. Our only disadvantage is that E-Paper displays are currently only available in black & white; however, the manufacturer has already developed a color version, which is going to be available to the public around Q2 of 2020. Integrating this display into our product will give us a unique position in the digital picture frame market that is almost impossible to compete with.
What I learned
We have determined that our innovative technology would generate the most value in the intersection of two big user groups: social media users and digital photo frame users. This forecast is not exclusive, however this is where we see the biggest potential. One very good example of this intersection is families. Assuming that most of the digital picture frame users have social media as well, we can conclude that our primary target is digital photo frame users. Other targets can be interior designers (furniture shops, retailers, wholesales), Instagram fans (influencers), professionals (shop owners, artists and photographers). We plan to approach these groups in their natural habitat by ads on Facebook, Instagram and Pinterest, by influencers using these platforms and by professionals using Framble to showcase their own services or products.
What's next for Framble
Market validation
Lead generation
Crowdfunding
Fundraising
Framble+ (advaneced version of Framble)
Built With
android
c
e-ink
e-paper
python
Try it out
www.framble.io | Framble | An E-Paper picture frame for Facebook and Instagram | ['Péter Szilágyi'] | ['Track Winner: Environment & Sustainability'] | ['android', 'c', 'e-ink', 'e-paper', 'python'] | 8 |
9,927 | https://devpost.com/software/alz-vision | Updated Website: Upload Memories
Log In Screen
User Information Screen
Task Screen
Make A Memory Screen
Test Your Memory Screen
Analytics Screen
Frequently Asked Questions Screen
Did You Know Screen
Demo Of Flask API Working
Website
Website
Updated Website: Graphs
Updated Website: Home Page
Updated Website: Redescribe
Inspiration
Nearly 3 million people in the US alone fall victim to memory impairments such as dementia and Alzheimer's. We personally have met people who struggle with Alzheimer's and have forgotten critical information and cherished memories, even going as far as forgetting a loved one.
Nowadays, when doctors diagnose patients, they often don't personally know the person and have never met them before. They rely on mental tests and interviews with family members. They don't have any real, concrete historical data on the patient's memories or on the decline of their memory. We hope to provide a data-analysis tool that gives doctors a better idea of each patient.
What it does
Our app hopes to provide doctors with data on users to help them better diagnose Alzheimer's. The user can upload photos and videos to the application. For each photo and video, the user writes a one-sentence description of the event portrayed. The user will also be prompted later (after days, weeks, months, or even years) to re-write the description. Our algorithm rates (from 0 to 1) how similar the new sentence is to the original sentence. Each of these scores is plotted on a time (number of days) vs. accuracy graph. For each memory, a linear or non-linear regression is performed (based on whichever produces the larger R^2). The weights are then extracted (slope, y-intercept, and other factors) and Isolation Forest outlier detection is performed. In the end, the algorithm provides graphs and visual aids to show a user's memory decline. In addition, it searches for potential outliers in their memory loss and common keywords associated with those outliers.
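The 0-1 similarity rating can be sketched with Python's standard-library difflib (the sample descriptions below are made up for illustration):

```python
# Sketch of the 0-1 similarity rating between the original memory
# description and the later re-written one, using difflib.
from difflib import SequenceMatcher

def similarity_score(original: str, rewritten: str) -> float:
    """Return a 0-1 score for how close the new description is to the original."""
    return SequenceMatcher(None, original.lower(), rewritten.lower()).ratio()

score = similarity_score(
    "Our son rode his bike for the first time at the beach",
    "My son learned to ride a bike on the beach",
)
print(round(score, 2))
```

A perfect recollection scores 1.0, and each (days-elapsed, score) pair becomes one point on the time-vs-accuracy graph described above.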
Version 2.0 (May 23rd, 2020 - May 24th, 2020)
Accomplishments that we're proud of
We finally got the app together! We were really proud of how our application is currently more or less functional. We can successfully upload memories and display them for redescribing. Also, our machine learning analysis is integrated into our app, so we're really happy about how everything is coming together. After one more sprint, we believe that we can start pitching our ideas and continuing with the design process.
How we built it
For this MVP we used Flask only, and we redesigned our original website. We used Flask-MongoDB instead of the plain MongoDB library, together with MongoDB Atlas. For the frontend, we used HTML/CSS/JS with Bootstrap. We used the same machine learning algorithms from our last sprint.
Challenges we ran into and What we learned
It was particularly difficult for us to upload images through MongoDB. We learned a lot about handling images with MongoDB, and we discovered a library that handles MongoDB from Flask (we were originally using only the plain MongoDB library).
Next Steps
We want to work more on the UI to enhance the user experience. We hope to share our app with local doctors to get their feedback, adjust accordingly, and then continue the design process to create a finished product. We especially want their feedback on how to display the data in the way most convenient to them. Then, we will pitch this product to local hospitals and clinics and hopefully collaborate with them to make this product a tool for patients to collect data that helps doctors diagnose dementia and Alzheimer's better.
Version 1.0 (May 16th, 2020 - May 17th, 2020)
Accomplishments that we're proud of
We were able to create functional machine learning algorithms to find similarity and analyse the text. In addition, we have a start on our UI, both for a mobile application and for a Flask app.
How we built it
We built our app with a mobile app and a basic website for the front end, a Flask API to access the machine learning models, and MongoDB Atlas for the backend. We built the mobile app using React Native, and we built the website using HTML/CSS/JS and Flask. For the machine-learning analysis, we used StringMatcher and SequenceMatcher to rate each sentence, regressions and Isolation Forest to find outliers, and matplotlib.pyplot for the charts. We used MongoDB Atlas to store each user's memories and descriptions so that both front ends could use them.
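The regression step (fit a linear and a non-linear model, keep the one with the higher R²) could look like the following NumPy sketch; the sample data and the degree-2 polynomial standing in for the non-linear model are illustrative assumptions:

```python
import numpy as np

def r_squared(y, y_pred):
    """Coefficient of determination for a fitted curve."""
    ss_res = np.sum((y - y_pred) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

def best_fit(days, scores):
    """Try degree-1 (linear) and degree-2 (non-linear) fits and
    return (coefficients, r2, degree) for whichever has higher R^2."""
    best = None
    for degree in (1, 2):
        coeffs = np.polyfit(days, scores, degree)
        r2 = r_squared(scores, np.polyval(coeffs, days))
        if best is None or r2 > best[1]:
            best = (coeffs, r2, degree)
    return best

days = np.array([0, 7, 30, 90, 180], dtype=float)
scores = np.array([1.0, 0.9, 0.75, 0.5, 0.3])
coeffs, r2, degree = best_fit(days, scores)
print(degree, round(r2, 3))
```

The extracted coefficients (slope, intercept, curvature) are the features that an outlier detector such as Isolation Forest would then run over across all of a user's memories.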
Challenges we ran into
For both the outlier detection and the sentence similarity scores, we tested a few models before reaching our final decision on the model that worked best. It was our first time using MongoDB Atlas and mongod, so it took some time to learn the API. It was also a little difficult to work together online, but we used Discord as our platform and made sure to periodically check in on each other.
What we learned
First, from a non-technical standpoint, our team learned a lot about Alzheimer's. We spent two weeks researching the disease to learn more about how it is diagnosed and listening to real user stories. From a technical standpoint, two of our teammates learned how to use React Native and all of us learned MongoDB Atlas.
What's next for Dement.ai
We hope to be able to put the app together to form a final product.
Built With
ai
css3
flask
heroku
html5
javascript
machine-learning
mongodbatlas
python
react-native
sass
Try it out
github.com | Dement.ai | Helping Doctors Better Diagnose Dementia and Alzheimer's | ['Shreya C', 'Veer Gadodia', 'Dhir Kachroo', 'Mihir Kachroo'] | ['Track Winner: Solidarity & Elderly'] | ['ai', 'css3', 'flask', 'heroku', 'html5', 'javascript', 'machine-learning', 'mongodbatlas', 'python', 'react-native', 'sass'] | 9 |
9,927 | https://devpost.com/software/givelive-livestream-platform-backed-by-machine-learning | Stand Apart: Together At Home
The Covid19 world pandemic has changed life as we know it. With much of the world social distancing, we are called to distance, but not isolate. It’s difficult to avoid the negative effects of social distancing when we can no longer visit friends and family or go to cultural events like concerts, festivals, and theatre. Behind closed doors we are still social beings; we need to experience each other. Sharing culture is what makes us human.
Give On LIVE! takes immediate action by partnering with one interactive medium, available to everyone with internet connection: LIVEStream. Live-streaming is already the new norm, and we have given it a new home. Our mission is to gather livestream content in ONE place. A portal that fosters connection between humans. The ultimate interactive stage where we can share culture without borders.
Host a LIVEstream or Share your favorite LIVEstream Creator. Post an event on the calendar and share with the world. Give On Live is the guide to discovering, hosting, and playing through virtual social connection that makes us human. We may be called to stand apart for now, but through Give On LIVE, we can be UNITED through LIVEstream.
#GetLIVE & #GiveLIVE
Stand Apart: Together Your Livestream Guide
Inspiration
How can people within communities support each other, and jointly work towards reaching a balanced day?
GiveOnLIVE, a platform building a robust database of livestream resources, plans to transparently and fairly recommend content and hosts to viewers for the purpose of wellbeing and balance. Each unique viewer has their own individual needs for staying balanced in quarantine. Our hypothesis was that we could make accurate suggestions based on case studies of healthy humans in distinct categories (age, gender, tax bracket, etc.). Our challenge was to determine whether we could build an algorithm with data that contrasts an "ideal" balance of interactive experiences for wellbeing against the new alternatives offered by our platform in times of crisis and physical distancing.
Solution
We are building an application that facilitates the connection and matching between viewers that need engagement in certain areas and creators that can share their talents or resources LIVE!
Interest can be chosen as a profile setting, but the solution allows for interoperability and exchange between LIVEViewers. Viewers can also join cluster based groups. Matching of emotional needs and specific events or Live Hosts/Creators will be improved over time through Machine Learning, and takes into account various input factors based on voice and text recognition.
Once the match is found and a community member helps complete a LIVE, the user can reward the creator by giving them "LIVE Time". LIVE Times are rewarded through a system that is modeled on the concept of mutual credit, to encourage a mentality of paying it forward.
What it does
The GetLIVE algorithm will lead individuals toward daily content consumption that produces optimal emotional balance. It will also match users by interests and profile, encouraging the creation of specialty groups. Event categories like Mind, Body, Soul, Covid-19, etc. will be further separated into emotional clusters when processing the needs of unique viewers and filling their "Discover" space.
#GetPhysical, #GetThinking, #GetLaughs #GetLove #GetSkills #GetFaith #GetMusic #Concerts&Festivals
Our Impact
Our solution aims to mobilize human engagement, encourage wellness, and provide insight into the needs of unique viewers. As a safe home for livestream creators, our AI matching algorithm encourages direct exchange between community members through #GetLIVE & #GiveLIVE Challenges created by viewers, creators, and sponsors. Through gamification and non-monetary exchange of rewards, we aim to maintain and even increase the motivation of viewers and creators, creating an interactive giving culture. This is further strengthened through mechanisms for recognition and reward.
When it comes to long term impact, we hope to obtain analytics related to SDG Action to improve long term acts within social impact projects, and further encourage proximity and transparency between online marketers and viewers. Jack Dorsey, of Twitter, recently stated in a public event, “There is a clear need for transparency and fairness in the algorithms we create.”
Some of the impacts we will track within the communities using our platform:
Impact on the content choice and sentiment of viewers
Impact on the unemployment rate
Impact on mental illness cases
Impact on behavior and balance from machine learning AI recommendations
A detailed infrastructure and research strategy drafted in submission report.
Our Progress
We started with a Google Form, Public Calendar and a Dream; Bring hope into every home by sharing free resources in ONE easy to navigate guide.
The GLOBAL HACK - April 2020: During the Global Hack we gathered a small rockstar team and built a simple front end combining the Google Form and Google Calendar. We built on WordPress, following a theme with all the functions of a livestreaming platform like YouTube. We kept it very simple: a home landing page with categories, the full calendar, and a submit form. We began recruiting LIVEAdmins to directly input new events following our community rules of engagement.
http://giveonlive.com/submit
The DATA BIO HACK - April 2020: We gathered physicists and data scientists to discover what data could tell us about what activities were available, how people were feeling, and a smart data-driven solution for recommending daily livestreams to viewers we had never met. We scoured data sets and created the "Senti-me" aggregator. Using an API over Twitter, we triggered collection of keywords and posts, which let us gather, compare, and analyze the sentiment of humans in different regions (by IP address). This helped us start forming our personas: Thomas, an out-of-work creator, and Tara Rose, an exchange student living in the heart of the outbreak, Milan, Italy.
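A first cut of a "Senti-me"-style aggregator could score collected posts with a simple keyword lexicon before moving to heavier models; the lexicon and function name below are invented purely for illustration:

```python
# Toy lexicon; the real aggregator would use a proper sentiment model
# over posts collected via the Twitter API.
POSITIVE = {"hope", "together", "grateful", "love", "calm"}
NEGATIVE = {"lonely", "anxious", "afraid", "bored", "sick"}

def sentiment(text: str) -> float:
    """Score a post in [-1, 1]: +1 if all matched words are positive,
    -1 if all are negative, 0 if none match."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

print(sentiment("Feeling lonely and anxious, but still full of hope!"))
```

Averaging such scores per region is one way to compare how people in different areas are feeling.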
The EuVSVirus HACK - April 2020: We built a mobile app for Android and iOS with direct integration into Google Calendar. Users can search the many listings and add events to their personal agenda. There is an option to watch "Featured Daily" selections directly from the site. This will be connected to a smart bot that uses data sets and sociology AI to recommend a curated daily agenda for viewers (#MyPandemicSurvivalGuide), plus an option for a "Daily Livestream" email with the pick of the day.
The Global AI Hackathon - April-May 2020: We engaged in intensive research of various data sets and sentiment analysis and determined ways to use APIs to build our GetLIVE Wellness Algorithm. We had to pivot from our hypothesis, moving from categories based on event types to emotions critical to wellness. This allowed us to create more feasible data analysis and integrate it into our recommender system. Our global team of scientists and design thinkers helped identify the direction for gaining a foundation of data through a beta mobile launch with users in our target and diverse profiles.
World Hackathon Day -
Under submission and HERE
https://github.com/hammedb197/GiveLIVE_AI/tree/master/emotional_detection
S.O.S. Hackathon -
We continue to polish our product and are considering a decentralized framework and opening up payment rails. We are in the process of adding direct donations and payments, using not only fiat through merchant partners but also peer-to-peer transfers of our own custom-made cryptocurrency. We won Best Woman-Led Project in this hackathon. Thank you S.O.S. Hackathon and the great mentors from the Ethereum Foundation.
Raise-Up Buildathon- July 2020
We began to consider challenges and scaling, including ways to make our database of LIVE content available to viewers with disabilities. We have integrated Alexa Skills so that content in the LIVEstream library is searchable by voice. We have also added a chatbot to guide audiences to the content they seek. Our research team has compiled cultural institutions and city virtual tours on the web; we plan to reach out to these institutions directly, so they can access their profiles, create events, and accept donations and ticket sales. This is possible through the Events Calendar plugin we have installed. Thank you Modern Tribe (creator of the software). Our AWS account has details of Alexa Skills, Lex for the chatbot, and Amazon Pay integration. Thank you Amazon!
https://github.com/giveonliverepo/aws-raiseup-hack
How we built it
We started with a simple Google Form & Calendar and added a front-end website built on WordPress. We began recruiting LIVEAdmins to add events to the calendar and recruit users into niche Facebook groups for different categories. To start, we built a back-end aggregator that scrapes events from Twitter posts. These events integrate directly with the public Google Calendar and pass through a moderation portal where admins decide whether to push them to the system. This will soon be integrated into a Hyperledger Sawtooth blockchain infrastructure to allow decentralized storage of database content. For admins we will make a streamlined way to submit livestream recommendations through desktop and app-integrable bots connected to their own Google Calendar, Eventbrite, Instagram, and YouTube accounts.
Challenges
There are endless resources, and we need a scalable method to make sure content is appropriate. As we grow the host/creator community, we will also be challenged to maintain an open governance that adheres to the rules of engagement. Getting started configuring the web server was challenging. Integrating the different technologies and plugins took time and strain, and we had to deal with bugs. We were also working across various time zones, which created team-collaboration challenges; we are very grateful to the mentors and to Slack for helping us coordinate it all. Customizing features with Google integration was limited to Google's styling. We need to decide whether to keep these color and style limitations or later build from scratch; there are many benefits to staying with Google given how many users it has, so we are analyzing the financial cost of labor and upkeep for staying with Google versus building a custom solution.
Each step of the way we are integrating more sophisticated technology to deal with infrastructure, Blockchain, and for recommendations, wellness and ethics, AI & Machine Learning.
Accomplishments that we're proud of
In 24 days we were able to go from one person on a mission, to 5 leads, to 35 mission members.
What we learned
Livestream content, Zoom group webinars, and other platforms can take time for viewers to get used to. Both hosts and participants face a learning curve in accessing content. We understand these limitations and want to embed an easy way for guide subscribers to access their livestreams from the GiveLIVE! app or from links in their personal Google Calendar.
What's next for Give On Live
Give, Grow, and Build. We are excited to enter various Hackathons and continue to add value to the community with more features and most importantly the interactive livestream content society needs to stay uplifted in this critical time.
1) Keep filling the now LIVE and PUBLIC resource by recruiting more ambassadors, admins, and tech web scrapers to fill the calendar. Our goal is to have an offering of hope every hour in every cluster.
2) Prepare for beta group for Mobile App Pre-Launch. Integrate various games and incentives to acquire the data sets needed to feed our AI & Machine learning GetLIVE Algo
3) In synchrony with the beta launch, continue with customer validation surveys and focus groups to refine UI/UX and AI functions.
4) Launch Mobile App and seek partnerships for specialized subscription based services like conferencing, VR and immersive experiences to continually enhance and meet viewers needs.
As we work to complete the database architecture MVP, we are satisfied with our results and our mobile app. Through continued research we are confident we can create a machine-learning GetLIVE! algorithm that fully integrates with the GiveLIVE interface and content database. We plan to continue collecting data points on emotions, geographies, profiles, and historical wellness case studies to make our GetLIVE Algo even smarter. In a time when the entire world is in desperate need of engagement and relief from dangerous mental-health triggers like depression, anxiety, and loneliness, we are determined to offer this solution. As we seek greater understanding of crisis situations, we know for certain the only cure goes back to the basics of humanity. The GetLIVE Algo, through the GiveLIVE platform, will guide viewers at home to #GetPhysical, #GetThinking, #GetLaughs, #GetLove, #GetSkills, & #GetFaith. Together At Home, we will take a stand against Covid19.
Built With
amazon-alexa
amazon-pay
ascend
eventbrite
flutter
hyperledger
lex
python
sawtooth
twitter
youtube
Try it out
www.giveonlive.com
youtu.be | G.O.LIVE! LIVEstream Discovery App *STREAM *PLAY *LOVE | G.O.LIVE! The home of livestream; we match individual viewer needs with great content, encouraging users to #GetLIVE & #GiveLIVE for fun rewards. | ['Chinwendu Maduakor', 'ChristyAna Viva', 'hammedb197 Hammed', 'aradhana chaturvedi', 'Saleem Javed', 'Arjan van Eersel'] | ['The Best Women-Led Team'] | ['amazon-alexa', 'amazon-pay', 'ascend', 'eventbrite', 'flutter', 'hyperledger', 'lex', 'python', 'sawtooth', 'twitter', 'youtube'] | 10 |
9,927 | https://devpost.com/software/elderly-support | Inspiration
We wanted to help youths volunteer with the elderly and lessen the need for the elderly to leave the house.
What it does
An easy purchasing platform for the elderly to use, with speech-to-text functions and AI that can determine the order.
Built With
flask
python
Try it out
github.com | Elderly Support | Elderly are more vulnerable to the COVID-19 pandemic due to weaker immune systems. Many governments have recommended youth to help the elderly purchase groceries and this aims to connectthe 2 groups | ['joel chan'] | [] | ['flask', 'python'] | 11 |
9,927 | https://devpost.com/software/kick-out-addiction | Inspiration
Provide support
What it does
How I built it
Challenges I ran into
Accomplishments that I'm proud of
What I learned
What's next for
Built With
api
uikit | Addiction | Help for family | [] | [] | ['api', 'uikit'] | 12 |
9,927 | https://devpost.com/software/thrive-w8o69b | THRIVE BRIEF PITCH DECK 1/6
THRIVE BRIEF PITCH DECK 2/6 (PROBLEM)
THRIVE BRIEF PITCH DECK 3/6 (SMALL BUSINESS)
THRIVE BRIEF PITCH DECK 4/6 (SOLUTION)
THRIVE BRIEF PITCH DECK 5/6 (HOW IT WORKS)
THRIVE BRIEF PITCH DECK 6/6 (OUR TEAM)
Inspiration
During the coronavirus crisis, every business is affected. But big companies have the budgets and manpower for social media marketing and for moving sales of products or services online, so their income does not decrease and may even increase.
A small business or family business, however, doesn't have enough people, has a limited budget, lacks expertise in social media marketing, and can't move its sales or services from offline to online. As a result, many small businesses have seen revenue decline or have had to shut down.
What it does
Thrive therefore developed an application that allows users to upload images or videos of a product or service with its name and price, and to record audio briefly describing those products.
Our system separates images with deep neural networks, arranges images and videos into categories, and auto-generates descriptions for those images or videos. Users get a preview of the post with the pictures and the automatically generated product details.
When the user presses the confirm button, the application will post to all social media platforms that the customer has defined. The application will automatically post again every week with an updated description. Users will have to press confirm to repost or edit the description to control the quality of the post. Users can schedule a time for reposting themselves.
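The weekly repost cycle described above can be sketched as a simple schedule computation; the function name, interval default, and example date are assumptions for illustration, not taken from the actual codebase:

```python
from datetime import datetime, timedelta

def repost_schedule(last_confirmed: datetime, count: int = 4,
                    interval_days: int = 7) -> list:
    """Times at which the app should prompt the user to confirm
    the next automatic reposts (weekly by default)."""
    return [last_confirmed + timedelta(days=interval_days * i)
            for i in range(1, count + 1)]

for slot in repost_schedule(datetime(2020, 5, 24, 18, 0)):
    print(slot.isoformat())
```

Letting the user change `interval_days` covers the "schedule a time for reposting themselves" case.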
The system summarizes every post: how many likes, how many shares, which photos are popular, and which products should be posted on which days and at which times.
How I built it
Our team built the web pages with basic HTML/CSS/JavaScript using the Bulma UI framework, with a Python Flask backend.
We use the Google Speech-to-Text API to turn the user's speech on the frontend into text, and then generate copy with the Transformers library using PyTorch.
We run the pre-trained "gpt2" model to create text from the context the user spoke about, and return it to the frontend to build the preview page.
Challenges I ran into
It's hard to think of a business model for this application.
It must be carefully planned because the system is more complex than we thought.
Accomplishments that I'm proud of
Create a minimum viable product within 24 hours, along with a landing page, a demo video, and product documentation.
What I learned
We learn that when we create an application we must also think about who it was created for, what its benefits are, and how it is used.
What's next for Thrive
We will start using this application immediately after the end of World Hackathon Day with our restaurant in Bangkok.
We will collect feedback to develop applications to best meet the needs of users.
Built With
bulma
css
flask
google-cloud
google-speech-to-text-api
gpt2
html
javascript
mailchimp
pytorch
transformers
Try it out
mailchi.mp
github.com | THRIVE | Small businesses have few people and budgets not enough to compete with big company. So we build AI helping them create a social media marketing template to promote their great products and services. | ['Tanawat Horsirimanon', 'Jirayut Chatphet'] | [] | ['bulma', 'css', 'flask', 'google-cloud', 'google-speech-to-text-api', 'gpt2', 'html', 'javascript', 'mailchimp', 'pytorch', 'transformers'] | 13 |
9,927 | https://devpost.com/software/investment-by-the-innovation-market-model | Distribution of companies by efficiency
Distribution of Polish companies by efficiency
Inspiration
THE PROBLEM: Effective investment of financial assets in prospective firms
What it does
Allows you to:
Buy assets of promising high-yield companies for dividends.
Buy assets of promising companies with high returns for resale.
Identification of financial bubbles.
How I built it
Using the IMM model.
Challenges I ran into
Cases of non-economic preferences and innovations are not adequately described.
Accomplishments that I'm proud of
Distribution of companies by sales in the model coincides with the actual distribution of companies.
The model adequately describes the cases of non-economic preferences and the introduction of innovations.
What I learned
I learned the IMM model
What's next for Investment by the innovation-market model
Transactional analysis based on the IMM model
Built With
api
bigdata
forbes
matlab
python
Try it out
tod.zzz.com.ua
ec2-3-120-138-190.eu-central-1.compute.amazonaws.com
ec2-3-120-138-190.eu-central-1.compute.amazonaws.com
ec2-3-120-138-190.eu-central-1.compute.amazonaws.com | Investment by the innovation-market model | Effective investment of financial assets in prospective firms: Purchase of assets of promising companies with high profitability for dividends. Identification of financial bubbles. | ['Kamel Pawelsky', 'Nick Dubovikov', 'Nick Dubovikov'] | [] | ['api', 'bigdata', 'forbes', 'matlab', 'python'] | 14 |
9,927 | https://devpost.com/software/operational-remote-identification-of-diseases-covid-19-tfwpqg | Biohazard COVID-19
Decision - enter data: photo, temperature, pressure, etc.
Inspiration
OPERATIONAL REMOTE IDENTIFICATION OF DISEASES
What it does
OPERATIONAL REMOTE IDENTIFICATION OF DISEASES (COVID – 19)
How we built it
IDENTIFICATION OF INFORMATION BY COMPARISON OF STRUCTURED ARRAYS
Challenges we ran into
Rapid remote identification of diseases, collection, processing and analysis of information to ensure timely provision of emergency medical care
Accomplishments that we're proud of
certificate of authorship UA № 91811
What we learned
Patent applications UA а 201908606, u 201908607 Method for analyzing and identifying information
What's next for
Online remote identification of people with fever (COVID – 19)
ANALYSIS AND FORECAST OF THE MARKET
Built With
api
c#
mathlab
python
tensorflow
Try it out
defin.zzz.com.ua | OPERATIONAL REMOTE IDENTIFICATION OF DISEASES (COVID – 19) | OPERATIONAL REMOTE IDENTIFICATION OF DISEASES | ['Kamel Pawelsky', 'Nick Dubovikov'] | [] | ['api', 'c#', 'mathlab', 'python', 'tensorflow'] | 15 |
9,927 | https://devpost.com/software/real-time-social-distancing-tracker-for-covid-2hc0pl | 10 cameras monitoring social distancing at a time
Inspiration
During this COVID-19 pandemic, many industries and organisations are required to maintain social distancing among their workers, so we decided to build a system to address this situation using our knowledge.
What it does
Several CCTV cameras capturing different areas of an industrial site feed into our system. The videos are processed and the humans in them are detected. Areas where people are not maintaining social distancing trigger an alert, and the video output shows red boxes around the people who are violating social distancing.
How I built it
We completed this project in three parts:
- Human detection
- Distance detection between detected humans
- Highlighting or alerting the humans who are not following social distancing
In brief
First we detect humans in the video. For human detection we use a pretrained neural network, specifically MobileNet SSD, a single-shot detection network.
After detecting humans, we highlight the output with boxes around each person and then calculate the distances between them. We specified the safe distance as 5 feet from person to person. If the distance between a pair of people is greater than 5 feet, both are highlighted with "green boxes"; if it is less than 5 feet, both are highlighted with "red boxes". We used a threshold distance of 5 feet here, but it can be changed and monitored accordingly.
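The green/red box logic can be sketched as a pairwise distance check over detected bounding boxes; the pixel-to-feet scale below is a stand-in for the real overhead-camera calibration, and the boxes are made-up detections:

```python
from itertools import combinations
from math import hypot

def center(box):
    """Center point of an (x, y, w, h) bounding box."""
    x, y, w, h = box
    return (x + w / 2, y + h / 2)

def violating_pairs(boxes, threshold_feet=5.0, feet_per_pixel=0.05):
    """Index pairs of detected people closer than the safety threshold;
    draw red boxes for these indices, green for everyone else."""
    pairs = []
    for (i, a), (j, b) in combinations(enumerate(boxes), 2):
        (ax, ay), (bx, by) = center(a), center(b)
        if hypot(ax - bx, ay - by) * feet_per_pixel < threshold_feet:
            pairs.append((i, j))
    return pairs

boxes = [(0, 0, 40, 100), (50, 0, 40, 100), (400, 0, 40, 100)]
print(violating_pairs(boxes))  # → [(0, 1)]
```

In real footage, a perspective transform (camera calibration) replaces the constant `feet_per_pixel` factor.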
Challenges I ran into
First, this project is useful only if everything is computed in real time, so human detection, distance calculation, and alerting must not lag. That meant making the algorithm computationally inexpensive while also keeping accuracy high. Balancing those two things was difficult, but we were successful: computation takes around 1 second per frame of video.
Another challenge was that all the video came from CCTV mounted overhead, so calculating actual distances was a task in itself.
What I learned
We learned how to combine different concepts in one project. By doing this project we also saw how ML can be applied in the real world, especially today.
What's next for Real Time Social Distancing Tracker for COVID
We are trying to recognize each individual who violates social distancing and alert them.
Built With
gpu
machine-learning
opencv
python
Try it out
drive.google.com | Real Time Social Distancing Tracker for COVID-19 | A step to win against COVID-19 | ['Suyash Chougule', 'Atharva Sundge', 'NIRANJAN KAKADE'] | [] | ['gpu', 'machine-learning', 'opencv', 'python'] | 16 |
9,927 | https://devpost.com/software/reignite | Easy to Start
Landing Page
Customer Friendly
Email Delivery
Copy & Paste
Inspiration
COVID-19 has presented challenges to many different industries. One of the greatest challenges is the economic impact caused by the need to shutdown businesses. In the US alone approximately 7.5 million small/local businesses are at risk of permanently being shut down over the next five months and 3.5 million are at risk of shutting down within the next two months. An estimated 35.7 million people who are currently employed by small/local businesses are at risk for unemployment.
What it does
Reignite provides small/local businesses with a simple gift-card solution to help boost their income. By completing a quick form, businesses are instantly provided with a landing page and a plugin that can be added to their existing website. Using Marqeta, a gift card is issued for the user that can be used in the business's existing checkout process.
How I built it
Reignite is primarily built with Node.js and Firebase. I made sure the design was responsive so users could use the tool in the browser and on mobile devices. Card issuing uses Marqeta, which allows for merchant locking. Email delivery of the gift card is powered by SendGrid. Users who access the business landing page or gift-card page in the browser will see a map of the business's location built with Mapbox. Finally, the custom plugin that businesses can add to their existing website is built in vanilla JavaScript.
Challenges I ran into
The biggest challenge was the onboarding process. I wanted to provide businesses with a solution that was as easy as possible. I had to reduce as much clutter as possible and focus on the critical information that was necessary for the business to provide to get started. Once that was accomplished I built a couple of different styles for the plugin that would appear on their websites.
I took advantage of the 'live chat' solution that many larger businesses utilize and adapted the same strategy for the gift card form popup.
Accomplishments that I'm proud of
Overall I'm proud of how much of the project I was able to complete, the user experience, and the design throughout the application.
What I learned
This was the first time I used Marqeta and I was able to go through a lot of the documentation to see the many different possibilities with the platform.
What's next for Reignite
I want to make some minor adjustments to the process. I would also like users to be able to send gift cards via SMS, which will allow for SMS authentication when the recipient wants to retrieve the gift card details.
Built With
css
firebase
heroku
html5
javascript
mapbox
marqeta
node.js
sendgrid
Try it out
hellogifty.herokuapp.com
hellogifty.herokuapp.com
hellogifty.herokuapp.com
github.com | Reignite | Helping to reignite local business with gift cards. | ['AJ Rahim'] | ['Second prize'] | ['css', 'firebase', 'heroku', 'html5', 'javascript', 'mapbox', 'marqeta', 'node.js', 'sendgrid'] | 17 |
9,927 | https://devpost.com/software/community-map-gzfnqv | Inspiration
Creating a community of various projects and services: an ecosystem where projects grow and develop more easily.
What it does
A location-based software platform acting as a hub for various services, helping people in certain areas get better visibility, communicate, and collaborate more efficiently.
It's basically an open map-like system allowing people and local (or global) businesses to participate in it using some of the services it provides. Part of them are built in and developed by us; others are built by partner projects and developers, thus making the platform extendable and adaptive to various environments and conditions.
One of the biggest problems with crisis projects and initiatives is actually segmentation and the lack of a critical mass of users to get them going: the time to market is too long. That's why we shifted our focus to helping other projects develop by providing the base to build upon. It's not just a crisis project: such an ecosystem is needed for a more effective society that would be ready and more responsive to any current or future situation.
The information in the platform is anchored to certain physical locations or areas (location + radius). It can also be seen as layers tagged with category/topic.
Users are able to upvote the information they like or find important. This, together with filtering for relevant content, helps reduce the informational noise to a much more bearable level.
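An area query (location + radius) with upvote ranking could be sketched as follows; the post fields, sample data, and the flat-distance approximation in degrees are illustrative assumptions, not the platform's actual schema:

```python
from math import hypot

posts = [
    {"title": "Food delivery volunteers", "lat": 42.70, "lon": 23.32, "upvotes": 12},
    {"title": "Local crowdfunding drive", "lat": 42.71, "lon": 23.33, "upvotes": 30},
    {"title": "Far-away event", "lat": 48.85, "lon": 2.35, "upvotes": 99},
]

def area_feed(center, radius_deg, items):
    """Posts anchored within `radius_deg` of `center`, most-upvoted first."""
    nearby = [p for p in items
              if hypot(p["lat"] - center[0], p["lon"] - center[1]) <= radius_deg]
    return sorted(nearby, key=lambda p: p["upvotes"], reverse=True)

for p in area_feed((42.70, 23.32), 0.05, posts):
    print(p["title"], p["upvotes"])
```

Sorting by upvotes inside a small area is one concrete way the noise-reduction described above could work.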
The short list of relevant projects include:
Local community volunteers
Online ordering and delivery
Social games and storytelling
Business survival
Community journalism
Local crowdfunding
We now have two ways of integrating with third parties: a React-based SDK (powerful and flexible) and embedding with an iframe (easy), along with a public REST API.
We managed to create several important partnerships in the last month with projects using Open Community Map as a service provider.
One of them is the Non-Zone project, a global map for experiential and solo travelers; our platform served efficiently for storing and retrieving project data, along with a fully customizable UI: a dark map theme and their own design of all main components and controls.
How we built it
Google Maps API, Firebase, React, Typescript
Challenges we ran into
Finding the right niche to focus on.
Building reusable technical stack providing enough value.
Accomplishments that we're proud of
Growing community around the project. Finding strengths to keep going.
Creating React-based SDK allowing powerful integrations.
Built partnerships with real-world projects, fulfilling their needs for custom UI and behavior with our SDK, thus validating the idea.
What we learned
There are many like-minded people out there, trying to improve the world around them. No need to compete with them, better make friends and grow together.
What's next for Community Map
Building bigger community around the project
Building flexible enough technical solution that would work for most projects
Creating more value for partners and end users
Join our Community!
Feel free to join our
Slack workspace
- we're looking for partners, supporters and collaborators!
Built With
firebase
google
map-embedding
maps
public-api
react
Try it out
communitymap.online
github.com
www.opencommunitymap.org
docs.google.com | Open Community Map | Platform for building local community and location-based services. Partnerships with other projects. | ['Dmitry Yudakov', 'Ivan Orlović', 'George Petrov', 'Ivan Stavrev', 'Станко Йорданов', 'Mohamed Hany'] | [] | ['firebase', 'google', 'map-embedding', 'maps', 'public-api', 'react'] | 18 |
9,927 | https://devpost.com/software/buko-h4f62m | Log In Page
Home Page
Screen Views
"Knowledge" is probably the only "good" that can't be lost or taken away, and therefore the most precious thing in the world.
So we wondered: if knowledge can be obtained by reading, why are people losing this habit every day? Why do they get disappointed by literature?
We concluded that economics and practicality are key factors. We also discovered that 73.4% of people in our country have a smartphone, so we concluded that creating an app would be the best option.
How does it work?
Our project, Buko, is an app where you subscribe to lend and receive books for a limited time. For example, you can lease a book for two weeks. This way we practice the circular economy and the 3Rs: you can lend the books you aren't using to people in your area. For a limited time, your book is available for leasing, and then it returns to the original owner.
The app lets you rate books and record their condition, to make sure everyone takes proper care of them. It applies a penalty if a book shows signs of misuse, and offers extra benefits to benefactors who lend books regularly.
We offer two types of subscription, individual and institutional (in the near future), with different prices and packages according to their different necessities.
How we built it
Our first step was to choose and design mock-ups of the different screen views to use as templates. Then we used Android Studio (IDE) to program the app in Kotlin. During development we chose a tech-friendly approach by working on a native-development basis. We gave the app these functions: log in with Google, reserve books, and track delivery location.
As an extra, we decided to build VR spaces to create a new reading experience. The VR (virtual reality) environment was assembled in Unity, and its purpose is to produce a space based on the book the person is reading. For example, if you are reading about Dracula, a dark, Transylvanian environment is reproduced.
Challenges:
Right now, the most challenging part is building a better-quality database. Our customers face the risk that someone could hack in and access their information; we would like to use RDS from AWS to solve this difficulty.
We also want to use a better platform, such as Pinpoint instead of Google, because it has more options and experience in delivery services.
FOR AN APP PREVIEW PLEASE WATCH THIS VIDEO:
https://youtu.be/NiCT3ZxVFIc
FOR A PREVIEW OF HOW THE VR WOULD WORK, CLICK HERE:
https://youtu.be/2fcv7fRLRIY
*From the TRY OUT LINK you can obtain the APK for Android
Built With
android-studio
english
kotlin
unity
Try it out
drive.google.com | Buko | Books for everyone. A new reading experience... | ['Miluska Patroni', 'Rodrigo Abad', 'Pablo Mansilla', 'Nilo Sebastian Vila Zamora', 'Luis Navarro Hernandez'] | [] | ['android-studio', 'english', 'kotlin', 'unity'] | 19 |
9,927 | https://devpost.com/software/naveyegation | Vuforia library
Unity platform
Inspiration
Over the last 100 years, the system of education has not changed significantly. Though online platforms have revolutionized education, they have proven to be very time-consuming and an easy way to distract students from their normal flow of study.
Many of us have been staying at home quite a lot in the past couple of months, probably for longer than we've ever been. And although the current measures do allow us to go outside, there are still a lot of places that are not open yet - schools and universities being part of that.
This moved learning online, for most students for the first time ever, and it meant that both teachers and students had to adapt to online learning in a short period of time. But this also brought challenges: getting explanations from teachers requires more effort, and searching for information online can take a long time if you're unsure what to look for or what you don't understand. What if you could use your phone and the textbook you already have to get further explanations on a particular topic that you don't quite understand?
What it does
T Square AR books allows you to use your phone in order to display additional content on the screen of your device for the page that has been scanned. This content could be in the form of a video with further explanations, a 3D model, pictures, or even animated sequences. This will make current textbooks more engaging, but more importantly, it will bring a new way of providing information to students in order to have them better understand the subject they are learning, without making big textbooks even bigger.
How we built it
We used Unity as our base platform to create the augmented-reality application, with all the assets the app requires.
Vuforia served as the target manager; using the API license key, we imported the targets into Unity.
Challenges we ran into
So far there is only a demo created as a proof of concept with a limited number of targets. In order to produce the content for any textbook, the following should be taken into consideration.
Experts in the field: Someone with knowledge in that field must be involved in order to produce the appropriate explanations or any other information necessary.
Development team: a team of programmers and app developers is required to come up with a more passive way to run targets and data through a system in order to create a more efficient and productive software.
Content creators: A team of amateur level creators to develop the necessary content to be augmented.
Accomplishments that we're proud of
The real application and the basic concept were created in the designated 24 hours. As one member of this team is currently working on a medical textbook, with this proof of concept coming to life we will be able to implement this application in an actual medical textbook before going to the mainstream market.
What we learned
We learned about teamwork and the beautiful world of augmented reality.
What's next for T Square AR books
Because of the current pandemic, it was important to shift education toward a more self-sufficient method, and to maintain it even after the pandemic is over. The education system has gone too long without a global revolution, and this is the right moment to make that change.
Built With
unity
vuforia | T Square AR books | Paper textbooks with an AR twist for thorough explanations and better learning. | ['Tazim Khan', 'Alexandra Rusu', 'Tanvir_aspen'] | [] | ['unity', 'vuforia'] | 20 |
9,927 | https://devpost.com/software/epic-heroes-k84w5v | Inspiration
Inspired by the future of autonomous electric vehicles and unmanned COVID 19 tracking and tracing
What it does
Remotely monitors whether people nearby are wearing masks and can also check their temperature via a fitted infrared thermometer. Awareness campaigns about social distancing and the symptoms of COVID-19 can be displayed on a scrolling basis.
How we built it
Various controllers and chips were assembled into the whole unit.
Challenges we ran into
Ensuring autonomous movement, obstacle avoidance while moving, and terrain management.
Accomplishments that we're proud of
This has already been implemented in a district of Chennai, in the southern part of the Indian subcontinent (shown in the video).
What we learned
We were able to develop this in a very short time despite the constraints of lockdown.
What's next for EPIC Heroes
Improving to full vision recognition and fully autonomous movement, and integrating a drone that can take off from the rover at any given point in time.
Integration of the solution with various Service providers
Built With
iot
opencv
python | SAINT Scalable Autonomous Intelligent Navigation Transporter | Robotic Rover with inherent ability to create awareness of COVID Social Distancing, enabling | ['Karthik Ramesh', 'HARISH Vardhana', 'Arvind Y', 'Raghavender Mahalingam', 'Sudeep Suresh', 'Krishnapriyan Sridharan', 'Deepak N C'] | [] | ['iot', 'opencv', 'python'] | 21 |
9,927 | https://devpost.com/software/service-to-senior-citizen-in-covid19 | Inspiration I contribution towards my City
What it does - Providing a platform to Senior Citizen
How I built it Understanding the need of end user and leveraging the proper platform
Challenges I ran into - understanding psychological part and user experience part
Accomplishments that I'm proud of - Iam ready with some stuff and Municipal corporation is also ahead to support along with my Organization Welingkar, to support this good deed.
What I learned - Getting into details of project
What's next for Service to Senior Citizen in Covid19 - A complete health care plan with help of Govt.
Built With
html
ui
ux
Try it out
www.bmcforseniorcitizen.com | Service to Senior Citizen in Covid19 | An attempt to reach out and help Senior citizens who are living alone without a family, and facing challenges during this Covid19, by using BMC Mayor platform and Prabhag Samiti network. | ['Kaustav Paul Chowdhury'] | [] | ['html', 'ui', 'ux'] | 22 |
9,927 | https://devpost.com/software/travel-with-tec | Smart wearable
App connected with technologies
Inspiration
In the initial phase of the coronavirus outbreak in India, when the government started thermal checks at airports, I had to wait for hours due to the large crowds and the manual checking process. It was also difficult to find my way around despite the directions on the boards. Though the procedure was important for passenger safety, it was highly inconvenient. This gave me the idea to develop an app that can make the process easier and save time without compromising safety.
What it does : The app creates an ecosystem in one place, connecting smart devices and drones, all linked to the cloud with data flowing in real time. The app is used to access this data. The watch can trigger haptic feedback whenever we are about to touch a railing, escalator, or lift button, so that we become careful. Drones within the airport area will continuously monitor the crowd, counting people, creating geofences, and mapping areas by headcount. The drones send warnings and other real-time data to the app via the cloud. The app also has a built-in AR system that helps us navigate within the airport, showing directions, shops, and other places of interest.
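A toy sketch of the per-zone headcount and warning idea described above (the coordinates, zone names, and crowd limit are invented for illustration; the real system would derive positions from drone imagery):

```python
def zone_counts(people, zones):
    # people: list of (x, y) positions; zones: {name: (xmin, ymin, xmax, ymax)}.
    # Count how many detected people fall inside each mapped zone.
    counts = {name: 0 for name in zones}
    for x, y in people:
        for name, (x0, y0, x1, y1) in zones.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                counts[name] += 1
    return counts

def crowd_warnings(counts, limit):
    # Zones whose headcount exceeds the limit trigger a warning to the app.
    return [zone for zone, n in counts.items() if n > limit]
```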
How we built it : This project is currently in the ideation and design phase. We researched several sources for technology that can be implemented. Once this process is done, we will move to the prototype and final testing phases.
Challenges we ran into : One of the major challenges is working in a team that is spread far apart, using online media to communicate, share ideas, and discuss; this was a first-time experience. Also, any idea needs to pass the test of vulnerabilities, so a lot of time was spent thinking about defects that can occur. This is important for creating a robust system.
Accomplishments that we're proud of
We are proud of working together despite being at far places. The team work and how each member took a task and gave their complete effort to implement this. Projects like these are always a result of great minds working in sync and this makes us proud.
What we learned
The major learning happened during the research phase, where we learned about emerging technologies like AI, deep learning, sensors, IoT, and cloud systems, while becoming aware of current trends and problem solving.
What's next for the Resfeber app
Having completed the research and ideation phases, we will proceed with prototyping and testing our product.
Built With
ai
android
ar
augmented-reality
azure
azure-iot-suite
c#
cnn
deeplearning
dronesystem
gpu
html5
imageprocessing
iot
matplotlib
mobileapp
pandas
python
pytorch
sensorcloud
unity
videostream | Resfeber app | Improving travel through airport considering safety and health by using AI,thermal sensing, navigating AR,drone,smart wearable all connected through a mobile app having universal database of airports. | ['Nitin Yadav', 'Mitesh Nahar', 'Dixit Sankharva', 'Sayantan Jana'] | [] | ['ai', 'android', 'ar', 'augmented-reality', 'azure', 'azure-iot-suite', 'c#', 'cnn', 'deeplearning', 'dronesystem', 'gpu', 'html5', 'imageprocessing', 'iot', 'matplotlib', 'mobileapp', 'pandas', 'python', 'pytorch', 'sensorcloud', 'unity', 'videostream'] | 23 |
9,927 | https://devpost.com/software/smarttracker-covid-19 | Worldwide-1
Inspiration
Nowadays the whole world is facing the novel coronavirus. To track its spread country by country, with details of confirmed cases, deaths, and recoveries, and to spread awareness regarding COVID-19, this Android app was created.
Built With
android-studio
github
live-nation-event-data
sqlite
whoapi
Try it out
drive.google.com | Covid-19 | A Smart App that does All the Work. | ['Hrithik Sahu'] | [] | ['android-studio', 'github', 'live-nation-event-data', 'sqlite', 'whoapi'] | 24 |
9,927 | https://devpost.com/software/textcollect | So we decided to create TextCollect(our project) to concurrently meet both the requirements of social distancing plus healthcare.
This is what makes USSD unique and more effective in comparison to mobile applications to which the vulnerable don't have the access.
Aim for global expansion of this model much needed in countries with dense populations.
Inspiration
Imagine Peter. He is a 67yr old male living in Johannesburg, South Africa. He has been on the same treatment for HIV and hypertension for the past 2 years and his medication has not changed in this time. On the surface, he is healthy.
He is due for his six-monthly follow-up at the clinic soon, and he is scared: he will need to take two taxis to get there and won't be able to practice appropriate social distancing throughout, making him vulnerable to the virus due to his age, even though the visit is unlikely to change the management of his diseases. We realise the indispensable need to stay at home to curb the spread of COVID-19, while, more importantly, still looking after our health.
So we decided to create TextCollect(our project) to concurrently meet both the requirements of social distancing plus healthcare.
What it does
TextCollect accurately and reliably triages patients with chronic illnesses using a USSD model: the patient answers a questionnaire similar to one used by a doctor, the responses are relayed to our server and matched against machine-learning data, and we notify the patient whether their medication is sufficient for the disease or a change in dosage or medicine is needed. This lets them access health care while maintaining social distancing norms.
Technical Functioning
1) A USSD session is initialised by the mobile user
2) An HTTP "GET" message is sent to the third-party server address
3) An XML response string containing the menus is relayed from the server
4) The USSD menus are displayed on the mobile handset
5) The user's responses are relayed back to our server for evaluation against machine-learning data sets
6) The result of the evaluation is provided to the customer via SMS
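The six steps above can be sketched server-side as a single handler: each GET carries the session id and the user's last keypress, and each reply is an XML menu string, with the final result going out by SMS. The questions, XML shape, and triage rule here are illustrative placeholders, not TextCollect's actual protocol:

```python
QUESTIONS = [
    "Have you missed any doses this month? 1) Yes 2) No",
    "Any new symptoms since your last visit? 1) Yes 2) No",
]

sessions = {}  # session_id -> answers collected so far (in-memory, for the sketch)

def handle_ussd_get(session_id, user_input=None):
    # Called for every HTTP GET in the USSD session; returns an XML string.
    answers = sessions.setdefault(session_id, [])
    if user_input in ("1", "2"):
        answers.append(user_input)
    if len(answers) < len(QUESTIONS):
        # More menus to show: return the next question to display on the handset.
        return f"<response><menu>{QUESTIONS[len(answers)]}</menu></response>"
    # Questionnaire complete: evaluate (stubbed rule) and end the session;
    # the real system would dispatch the result via SMS here.
    advice = "see a clinician" if "1" in answers else "repeat current script"
    sessions.pop(session_id, None)
    return f"<response><end>Advice: {advice}. Details follow by SMS.</end></response>"
```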
We have done extensive case research for this model in South Africa, where the majority don't have access to smartphones, so using USSD will be most effective.
We have a team of data scientists, developers, and health-care professionals working on this; we are ready with the prototype and keen to implement it for the greater good.
Accomplishments that we're proud of
We are proud to have managed all the technicalities and logistics with minimal resources and to have progressed by leaps within minimal time as well.
We are registered as a Delaware Corporation and in the process for patenting the idea.
Extremely proud of our statistical prediction that this USSD model could prevent 800,000 visits annually with a mere 10% implementation, implying a massive curb on the spread of COVID-19 while patients can peacefully access healthcare.
Finances
We aim to break even in Year 1 and plan for payback in Year 2, as explained in the financials document.
Our conservative projection is based on a gradual adoption of our solution, starting at 2% of the target population. Our revenue model is a monthly subscription of $0.5, which can be financed by an NGO.
Our system is profitable from the first year :).
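The projection arithmetic is straightforward: subscribers times the monthly fee times twelve months. A sketch, using the pitch's own $0.5 fee but a hypothetical population figure:

```python
def annual_revenue(target_population, adoption_rate, monthly_fee=0.5):
    # Conservative-projection arithmetic: adopters x fee x 12 months.
    return target_population * adoption_rate * monthly_fee * 12
```

For instance, a hypothetical target population of one million at 2% adoption yields 20,000 subscribers and $120,000 in annual revenue.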
What's next for TextCollect
Target the South African govt. for medical response budget, look out for local NGOs like the Praekelt Foundation and foreign agencies like PEPFAR, DIFD.
Building partnerships with private pharmaceutical companies, since 26% of all South Africans take at least one medication regularly (15.8 million for HIV alone). Among those most at risk of COVID-19 complications, people above 65 years, 59% take at least one medication.
But 71% of all medications are prescribed in the public sector, and among those accessing health care in the public sector, 20% reported not being able to fill a script in the past year due to stockouts at their clinic. That is the gap we hope to fill.
Aim for global expansion of this model much needed in countries with dense populations.
Looking forward to a happy peaceful world enabled with contact List Interaction and minimal travel maintaining social distancing. | TextCollect:Unique Way to triage the chronically ill | Accurately triage chronically ill patients using USSD & ML from homes to curb COVID -19. | ['Eric Djakam', 'pratham b'] | [] | [] | 25 |
9,927 | https://devpost.com/software/castme | Main Menu
Motion capture streaming demo
Female avatar professor teaching
Male Avatar professor teaching
presentation screen
view from the back
View from the middle
Customize Character
castme.life website
Splash Screen
Inspiration
Video lectures exist in abundance, but the mocap data behind such lectures is far richer, taking the form of precise 3D measurements. High quality and a large amount of data are requirements for the best predictive ML models, so we use mocap data here. Despite the availability of such promising data, generating bone transforms from audio is extremely difficult, due in part to the technical challenge of mapping a 1D signal to 3D transform values (translation, rotation, scale), but also because humans are extremely attuned to subtle details of expressed emotion; many previous attempts at simulating a talking character have produced results that look uncanny (for example, the companies Neon and Soul Machines). In addition to generating realistic results, this project represents a first attempt to solve the audio-speech-to-bone-transform prediction problem by analyzing a large corpus of mocap data from a single person. As such, it opens the door to modeling other public figures, or any 3D character, by analyzing mocap data. Text-to-audio-to-bone-transform synthesis, aside from being interesting from a purely scientific standpoint, has a range of important practical applications. The ability to generate a high-quality textured 3D animated character from audio could significantly reduce the bandwidth needed for video coding and transmission (which makes up a large percentage of current internet bandwidth). For hearing-impaired people, animation synthesized from bone transforms could enable lip-reading of over-the-phone audio. And digital humans are central to entertainment applications like movie special effects and games.
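As a concrete, heavily simplified view of the data pairing involved, the sketch below slices a 1D audio signal into windows aligned one-to-one with mocap frames, the (audio window, bone transform) pairs a sequence model would train on. The sample rate and mocap frame rate are assumptions; the real pipeline is far more involved:

```python
def frames_for_mocap(audio, sample_rate=16000, mocap_fps=60):
    # Slice a 1D audio signal into fixed-size windows, one per mocap frame,
    # so each window can be paired with that frame's bone transforms.
    hop = sample_rate // mocap_fps  # audio samples per mocap frame
    return [audio[i:i + hop] for i in range(0, len(audio) - hop + 1, hop)]
```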
What it does
Cutting-edge technologies like ML and DL have solved many of society's problems with far better accuracy than an ideal human ever could. We are using this technology to enhance learning in the education system.
The problem every university student faces is paying a large amount of money to continue studying at a college, and needing to interact with lecturers and professors to keep getting better. We are solving the money problem. Our solution is an e-text-to-AR-character sparse-point-mapping machine learning model that stands in for professors: our AI bots teach the same material in a far more interactive and intuitive way than could ever be done with human lecturers alone. Students can even learn by themselves with the AR characters.
How we built it
This project explores the opportunities of AI and deep learning for character animation and control. Over the last two years, it has become a modular and stable framework for data-driven character animation, covering data processing, network training, and runtime control, developed in Unity3D / Unreal Engine 4 / TensorFlow / PyTorch. The project enables using neural networks for animating character locomotion, facial sparse-point movements, and character-scene interactions with objects and the environment. Further advances will continue to be added to this pipeline.
Challenges we ran into
To build a studio-like environment, we first had to collect a range of equipment, software, and prerequisites, including the following.
Mocap suit - Smartsuit Pro from
www.rokoko.com
- single: $2,495 + extra textile: $395
GPU + CPU - $5,000
Office premise – $ 2,000
Data preprocessing
Prerequisite software licenses- Unity3D, Unreal Engine-4.24, Maya, Motionbuilder
Model Building
AWS Sagemaker and AWS Lambda inferencing
Database Management System
Further, we started building.
Accomplishments that we're proud of
The very idea of joining a virtual class, hosting a class, interacting with your colleagues in real time, talking with them, asking questions, visualizing an augmented view of equipment, and building a solution around it is in itself an accomplishment.
Some of the great features that we have added in here are:
Asking your avatar professors questions,
having discussions with your colleagues,
learning on your own time with these avatar professors,
and many more. some of the detailed descriptions have been given in the submitted files.
What we learned
This section is entirely technical: all of the C++ and Blueprint aspects of multiplayer game development.
We also started developing some of the designs in MotionBuilder, having previously used Maya and Blender.
What's next for castme
1. We are looking to partner with many colleges and universities, for example Galgotias University, Abdul Kalam Technical University (AKTU), IIT Roorkee, and IIT Delhi.
2. Recording an abundant amount of lecture motion-capture data to better train our question-answering motion-capture machine learning model.
Try it out here:
Intro Demo (2 min):
https://youtu.be/Xm6KWg1YS3k
Complete Demo:
https://youtu.be/1h1ERaDKn6o
Download pipeline here:
https://www.castme.life/wp-content/uploads/2020/04/castme-life%20Win64%20v-2.1beta.zip
Documentation to use this pipeline:
https://www.castme.life/forums/topic/how-to-install-castme-life-win64-v-2-1beta/
Complete source code (1.44 GB):
https://drive.google.com/open?id=1GdTw9iONLywzPCoZbgekFFpZBLjJ3I1p
castme.life:
https://castme.life
More info
For more info on the project contact me here:
gerialworld@gmail.com
, +1626803601
Built With
blueprint
c++
php
python
pytorch
tensorflow
unreal-engine
wordpress
Try it out
castme.life
www.castme.life
github.com
www.castme.life | castme | We are revolutionizing the way the human learns. We uses the Avatar Professors to teach you in a virtual class.Talk to your professors,ask questions,have a discussion with your colleagues in realtime. | ['Md. Zeeshan', 'Rodrixx Studio'] | ['The Wolfram Award'] | ['blueprint', 'c++', 'php', 'python', 'pytorch', 'tensorflow', 'unreal-engine', 'wordpress'] | 26 |
9,927 | https://devpost.com/software/hobby-enigma | Landing Page
Categories Page
Task Page 1
Specific Task View
Task Page 2
Inspiration
The inspiration came from the ongoing COVID-19 pandemic, during which, to be honest, our lives have rather stopped. In such a time, people are unable to decide what to do; I myself am confused about it. Rather than doing something, many of us waste our time pondering "What to do?". To make this easier, we built this app so you can organise your activities and manage your time well.
We can defeat the virus!
What it does
It provides well-organised categories containing tasks that you add to your to-do list, so you can pass the time by learning something. The rest you can see in the video.
How I built it
My teammate Ayush suggested the idea of using this time to learn something. As soon as we heard it, we started work, designating tasks among ourselves. We created the front-end and back-end parts separately: for the front end we designed layouts for the landing page, category pages, and task panels; for the back end we used the Django framework. User authentication is one thing we implemented for the first time.
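A plain-Python sketch of the category-to-tasks-to-to-do flow described above (the real app keeps this in Django models; the catalog contents here are invented):

```python
# Hypothetical category catalog; in the app this would live in the database.
CATALOG = {
    "Cooking": ["Bake bread", "Make pasta from scratch"],
    "Art": ["Sketch a self-portrait"],
}

def add_to_todo(todo, category, task):
    # Move a task from a category page onto the user's personal to-do list,
    # rejecting unknown tasks and silently skipping duplicates.
    if task not in CATALOG.get(category, []):
        raise ValueError("unknown task for this category")
    if task not in todo:
        todo.append(task)
    return todo
```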
Tools Used
* HTML
* CSS
* Django Framework
* Bootstrap
Challenges I ran into
The main challenge was the integration of front end and back end, since we divided those parts between us. The rest was fine, though planning the layout was quite time-consuming.
Accomplishments that I'm proud of
This is our First hackathon. So, we are proud of everything we did.
Impact on Society
During the COVID emergency, this online support space will be a great aid to people isolated at home, as they can continue to have educational support, webinars and live sessions, and exercises and practices to keep their anxiety level low and enjoy some fun pastimes.
What I learned
We experienced the hackathon environment, got to know new people, and found the motivation to seek alternative solutions with new technologies. We learned that cooperation is essential for a successful group project, particularly during a quarantine period, when smart working is the only way to be together.
Challenges We Faced
The biggest challenge was thinking of a solution that could help children with fewer resources. Solutions such as Hobby Enigma can bring the joy of learning home to so many households.
What's next for Hobby Enigma
A mobile app is on the way; we will also integrate more categories and tabs, and add some of the fun and frolic that is currently missing.
Project Link
https://github.com/Gr8ayu/hackathon
Contributors
Ayush Kumar
Ankit Kumar Singh
Abhiroop Saha
Built With
css
django
html5
javascript
python
scss
Try it out
github.com | Hobby Enigma | Giving your hobby a new wing | ['Ankit Kumar Singh', 'Ayush Kumar', 'Victor Ojewale', 'Abhiroop Saha'] | [] | ['css', 'django', 'html5', 'javascript', 'python', 'scss'] | 27 |
9,927 | https://devpost.com/software/paint-kfo87i | Services Offered
Landing Page
Inspiration
My inspiration has always been helping the community, especially those whom the community hardly focuses on. This project started with the desire to help artists of various types appraise their artworks, and to make this facility available to any user all over the world, so I built an A.I. and open-sourced it (you can check it via the link), but presenting source code and a '.h5' file wasn't a practical solution.
So I am glad to announce that I have integrated a better version of the A.I. into a web app and a mobile app, both available for use.
What it does
The web app
It takes the image of an artwork, appraises it, gives it a price, and automatically puts it up for sale or auction
Lets you buy artworks and provides access to the artist's other artworks
You automatically get a portfolio showing all the genres of your artwork
The mobile app
Lets you appraise your artwork
Lets you put your artwork for sale or auction
Lets you buy artworks
How I built it
The A.I. was built using TensorFlow and Keras in Python. I then made a Flask app to connect to it and built a web interface around it so users can access it.
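A hedged sketch of the glue between the classifier and the listing: the genres, base prices, and pricing rule below are invented placeholders, since the actual appraisal model's logic isn't shown here.

```python
# Hypothetical base prices per genre; the real app would derive these differently.
BASE_PRICE = {"portrait": 120, "landscape": 90, "abstract": 150}

def appraise(probs):
    # probs: {genre: probability} as returned by a (stubbed) classifier.
    # Pick the top genre and scale its base price by the model's confidence.
    genre = max(probs, key=probs.get)
    return genre, round(BASE_PRICE[genre] * probs[genre], 2)
```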
Challenges I ran into
I am very proficient with Python for A.I. and algorithms, and with HTML, CSS, PHP, and MySQL for web design; however, I couldn't think of a way to integrate an A.I. built in Python with a PHP server side, so I had to learn Flask from scratch.
I also ran into usage limits while training my A.I. on Google Colab.
Accomplishments that I'm proud of
Successfully learning how to integrate an A.I in a web app
Successfully learning flask
learning multiple ways to integrate A.I and design for speed and optimized processing
Creating a platform where creatives can appraise and sell their artwork
Integrating the A.I into a mobile app
What I learned
I learnt how to deploy to heroku
I learnt how to make a flask web app
Integration of several kinds on A.i on multiple platforms
What's next for Paint
Completing the build: I need to add a payment system and a cart system for the web app
Completing the mobile app
Making it into an ecommerce system completely controlled by A.I. and related algorithms
Built With
css3
flask
flutter
heroku
html5
javascript
python
sqlite
Try it out
painter-ai.herokuapp.com | Paint | A solution for artist | ['Adefolahan Akinsola'] | [] | ['css3', 'flask', 'flutter', 'heroku', 'html5', 'javascript', 'python', 'sqlite'] | 28 |
9,927 | https://devpost.com/software/lungviewer | AR Mode
VR Mode
Inspiration
I was researching the health effects of cigarette smoking and the impact it may have on COVID-19 outcomes. I wanted to do something to discourage smokers from smoking, perhaps something that gives viewers a visual jolt that motivates them. So I thought of creating this app, in which users can see visually the kind of impact smoking is causing and the damage it has already done to their lungs.
What it does
LungViewer lets users see what their lungs look like and how smoking has progressively damaged them over the years. Users choose between AR mode and VR mode. In AR mode, the user hovers the device over an image to launch the model. Users can slide parameters like their age, how much they smoke, and how many years they have smoked, and visually see the progressive damage over time using the latest AR/VR technology. The app also educates users on the differences between a healthy lung and a damaged lung, with myths and facts, images, and more. Users can configure and pair their own Google Cardboard with the app using the settings gear icon in VR mode. The beauty of our app is that it is native and works on both iOS and Android.
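One plausible way to turn the app's sliders into a damage stage is the standard pack-years metric; the stage thresholds below are illustrative guesses, not LungViewer's actual mapping:

```python
def pack_years(cigarettes_per_day, years_smoked):
    # Standard metric: packs per day (20 cigarettes = 1 pack) x years smoked.
    return (cigarettes_per_day / 20) * years_smoked

def lung_stage(cigarettes_per_day, years_smoked):
    # Map pack-years onto a damage-model stage: 0 = healthy ... 3 = severe.
    # Thresholds are placeholders chosen only for this sketch.
    py = pack_years(cigarettes_per_day, years_smoked)
    thresholds = [1, 10, 20]
    return sum(py >= t for t in thresholds)
```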
How I built it
Using Vuforia’s image-targeting system, the app recognizes an image of the LungViewer logo, which is uploaded to Vuforia’s database. The image is then retrieved through Unity and used to display different stages of damaged lungs on top of it, with the damage shown depending on the parameters chosen by the user. Using the Google Cardboard SDK, the app carries the same functionality as the AR mode but through virtual reality, so users can see the lungs in more visual detail.
Challenges I ran into
It was difficult to tweak the model and the assets to fit the project. Importing the Google Cardboard SDK into Unity and using the prefabs that Google provides was a bit challenging too.
Accomplishments that I'm proud of
My first experience using 3D models and Vuforia in Unity was a success. I usually use Swift to create apps, so this was also my first experience making an app with Unity.
What I learned
I learned how to use Vuforia through Unity and how to project a model on top of an image using Vuforia’s image targeting feature. I also learned how to toggle the VR Mode through the google cardboard SDK.
What's next for LungViewer
The image-targeting feature gives flexibility and more possibilities to raise awareness and educate users about other parts of human anatomy. In the future, I can add more information and textures for the lungs, animations, and the ability to rotate and scale the lungs on the user’s command.
Built With
android
c#
unity
vuforia
xcode
Try it out
github.com | LungViewer | LungViewer is an app that raises awareness about smoking through AR and VR | ['Krish Malik'] | [] | ['android', 'c#', 'unity', 'vuforia', 'xcode'] | 29 |
9,927 | https://devpost.com/software/rental_property_star | Inspiration : To build a correct application
What it does : builds application on rental
How I built it - using .NET
Challenges I ran into : .NET features
Accomplishments that I'm proud of : dropdown
What I learned dropdown
What's next for Rental_Property_Star : AI
Built With
.net | Rental_Property_Star | This is for the list of rental apartments and list of apartments on sale | ['Nikhil Pandey'] | [] | ['.net'] | 30 |
9,927 | https://devpost.com/software/ar-based-health-caring-system | Inspiration- I have choose the domain of biomedical devices. Because to reduce the rate of death of patients due to human error occurs in hospitals.
2.What it does? - It is an device which is wearable by patients during surgery to record all the data of the patient like temperature, heartbeat, blood pressure and continuously transmit the data to the doctor. The doctor can see all those data in the Augmented Reality glass whenever required.
3.How I built it?- It is built by using Augmented Reality technology with an hardware to make the operating environment of the surgeons comfortable.
Challenges I ran into - Initially I ran a challenge with coding to make an Augmented Reality glass. Later on I got the output on clearing the errors.
Accomplishments that I'm proud of , that I solve the real world problems faced by the doctors in hospitals.
6.What I learned? - I have learnt how to do an project and how to manage a project in all terms.
What's next for AR based Health Caring System? - After developing a product , I decided to start an startup to launch my product in the market with a perfect team.
Built With
augmented-reality
hardware
Try it out
github.com | AR based Health Caring System | Our domain is healthcare and biomedical devices. This helps the doctors to make their operating environment more comfortable during surgeries. | ['SIVAKUMAR S'] | [] | ['augmented-reality', 'hardware'] | 31 |
9,927 | https://devpost.com/software/ugoround-public-safety-alerts-for-travellers | Inspiration
We are all looking forward to travelling again, both for work and pleasure. However, in this new COVID world we are now concerned, even afraid, to venture far from home into unfamiliar places. Travellers may not follow the news in the locations they are travelling to. Moreover, travellers are less likely to know where to find information or which social media pages or local websites to check, and they may not even speak the language. For this circumstance we have a solution that can assist.
What it does
We have built an app dedicated to the sole purpose of receiving alerts, be it a warning or a helpful notification from authorities. While this is useful and needed, it still requires a local authority to engage and start sending alerts. We have now come up with a way for the app to pick up a real-time feed of public-safety-related topics and show on the map any incidents relevant to your anonymised location. The app will use a combination of AI and machine learning to filter in, and importantly filter out, topics and situations that are not public-safety related. We will input a series of Public Safety topics; for example, COVID-19 is one topic and Active Shooter is another. If any social media posts are detected and the location is determined (by our algorithm) to be near you, this will display as a pulse on the map. The AlertPulse will pulsate more or less based on the seriousness of the situation. It will give you a contextual view of an incident that is instantly relatable to your current, anonymised location.
How I built it
We developed a web-based platform using PHP and JavaScript and integrated the Firebase API. We developed our own proprietary geofencing logic that requires no personally identifiable information (PII) from app users. We have integrated Google Firebase AI and ML and set them up to help filter and curate information.
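The proprietary geofencing logic itself is not disclosed, but the core membership test such a system needs can be sketched with the standard haversine great-circle distance. The function name and the radius parameter below are illustrative, not taken from the project:

```python
import math

def in_geofence(lat, lon, center_lat, center_lon, radius_m):
    """Return True if (lat, lon) lies within radius_m metres of the centre."""
    R = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat), math.radians(center_lat)
    dphi = p2 - p1
    dlmb = math.radians(center_lon - lon)
    # haversine formula for great-circle distance
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    distance = 2 * R * math.asin(math.sqrt(a))
    return distance <= radius_m
```

Note that no PII is involved: the check only needs coordinates, which can be evaluated against alert zones without ever tying them to an identity.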
Challenges I ran into
Location services on the mobile phone that do not require the user to allow location to run in the background are a major challenge. Whenever you use GPS in the background you drain the battery. Our solution does not require the user's location to run continuously; indeed, we state that less than 1-2% of the battery will be used over the whole day.
Accomplishments that I'm proud of
We wanted to ensure a single alert can be sent in multiple languages. We came up with a unique method that allows the Admin to send a single alert in as many languages as they choose. The app user can tap the alert language icon to select their preferred language. This will be ideal for places that heavily rely on overseas tourists that may not speak the language.
What I learned
We have discovered that you can easily deliver vital information and updates to a user based on their location, anonymously. Traditional methods such as SMS, social media and email all require the user to register and give away personal data. These are the generally accepted methods by which critical communication is disseminated.
We asked why?
And can we do it where the Citizens are totally anonymous to us and the people sending alerts. The answer was YES! This will be ideal for people who are visiting a place temporarily or will be unfamiliar with how local information is disseminated.
What's next for UgoRound Public Safety Alerts for Travellers
Our solution is ready and can be deployed; the real-time AlertPulse is under development and will be ready soon. All places must manage the re-opening of their locality in coordination with authorities. Both the city and the tourism authorities within it need to be able to send alerts to the community and travellers alike.
We developed our system so there is a hierarchy that gives authorities in the (City) access while still allowing the local (Village/Town) "alert originator" to send out community First to Know alerts. It needs to be managed at the village or municipal level because each place is different and local mayors/leadership and health authorities are ultimately on the ground.
In addition there are many other use cases such as Security, Evacuations, Weather and other public safety concerns that can benefit the City/Country once the system is being used. At this time there is no universal alerting methodology across places - each Country implements their own system. We have built UgoRound to work anywhere in the World. UgoRound is a universal Safety App that will create a secure and trusted source of information for any public safety situation.
Built With
elastic-email
firebase
google-maps
java
javascript
messagebird
php
socket.io
swift
symfony
twitter-search-api
twitter-status-filter
Try it out
ugoround.com | UgoRound Public Safety Alerts for Travellers | The UgoRound App will deliver Public Safety related alerts curated from a social media feed, and based on your anonymised location. | ['Faozul Azim', 'Islam MD Zahirul', 'Gavin Bernstein', 'Dalit Livni-Rav'] | [] | ['elastic-email', 'firebase', 'google-maps', 'java', 'javascript', 'messagebird', 'php', 'socket.io', 'swift', 'symfony', 'twitter-search-api', 'twitter-status-filter'] | 32 |
9,927 | https://devpost.com/software/test-uq1af3 | The Founders
Landing Page
Classroom environment
Our dashboard
Teachers can use our whiteboard for live lectures.
Teachers receive AI-powered feedback.
Teachers make quizzes with an easy interface.
Student sits in chair in our immersive 3d environment.
Classroom environment on two screens
Inspiration
COVID-19 has transformed distance learning. As high school students, we felt isolated from our peers and teachers, and decided to create SmartRoom: an immersive and feature rich application for students and teachers.
What it does
SmartRoom connects students with teachers through an interactive and immersive 3D classroom. Students can move around in the classroom, sit in chairs, and see each other. SmartRoom features an unique smart dashboard, which teachers may use to receive AI-powered feedback from students, administer real-time quizzes, and give lectures on a live whiteboard.
How we built it
For the front-end we used: Three.js for the 3D environment, HTML/JavaScript/CSS (SASS) for our webapp, and bundled it with Parcel. We also used Google Cloud Storage to manage personal photos uploaded by users, and Blender for animating the 3D character models.
For the back-end, we used: Node.js/Express for our server, Socket.io (websockets) for communication, and IBM Watson’s Tone Analysis API for our smart feedback.
We used Heroku to deploy our application: http://smartroomvr.herokuapp.com/
Challenges we ran into
Lighting our 3D scene was difficult since we had to balance performance with quality of lighting.
There were some issues when we were loading our HTML pages in separate chunks since we wanted to make a single page application.
Structuring our code was a challenge because we had to combine a 2D web interface with a 3D environment, as well as write significant back-end code to support it all.
Combining multiple FBX animations into a single GLTF one in Blender also took a lot of time to learn.
Google Cloud Storage was giving us a CORS error when uploading photos, so we had to manually modify the server through the Google Cloud Terminal to allow CORS with all domains.
Accomplishments that we're proud of
Kirtan:
I made a sleek and aesthetic dashboard from scratch -- no templates, no frameworks.
I built my own, custom classification between Constructive/Destructive criticism derived from IBM Watson’s 5 tone classifications (joy, anger, analytical, etc.)
Deepak:
I made an intuitive landing page that allows students and teachers to select their role and enter a room number to join a classroom.
I built the real-time 3D environment from scratch using Three.js and Socket.io.
I learned about Vector projections from 3D to 2D and how to combine multiple FBX animations into a single GLTF file.
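One way the constructive/destructive classification mentioned above could be derived from the five tone scores is a weighted split. The tone names below match Watson's categories, but the grouping and the tie-breaking rule here are illustrative assumptions, not the project's actual mapping:

```python
# Illustrative grouping of tone-analysis categories (assumed, not the
# project's actual mapping).
CONSTRUCTIVE = {"joy", "analytical", "confident"}
DESTRUCTIVE = {"anger", "sadness", "fear"}

def classify_feedback(tone_scores):
    """tone_scores: {tone_name: score in [0, 1]} from a tone-analysis call."""
    c = sum(s for t, s in tone_scores.items() if t in CONSTRUCTIVE)
    d = sum(s for t, s in tone_scores.items() if t in DESTRUCTIVE)
    return "constructive" if c >= d else "destructive"
```

A real version would tune the grouping and thresholds against labelled student feedback rather than hard-coding them.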
What we learned
We learned a lot about server side logic and the Socket.io library for websockets
We learned about 3D graphics concepts like lighting, model animation, and rendering
We learned how to use IBM Watson’s tone analysis API
We learned how to use Google’s Cloud Storage API along with modifying the bucket itself through the Google Cloud terminal
What's next for SmartRoom
We will replace the images (profile pictures) with a Web RTC video so that students and teachers feel even more connected with one another.
We will also integrate the whiteboard into the 3D environment so that the experience is much more immersive.
We will implement our 3D environment on a VR platform for an even more immersive experience.
Built With
blender
cloudstorage
css-3
express.js
google
html5
javascript
node.js
socket.io
stackoverflow
three.js
Try it out
smartroomvr.herokuapp.com | SmartRoom | SmartRoom is a 3D classroom that connects students and teachers. It features a dashboard where teachers may receive AI-powered feedback, administer realtime quizzes, and draw on a live whiteboard. | ['Deepak Ramalingam', 'Kirtan Shah'] | ['2nd Place'] | ['blender', 'cloudstorage', 'css-3', 'express.js', 'google', 'html5', 'javascript', 'node.js', 'socket.io', 'stackoverflow', 'three.js'] | 33 |
9,927 | https://devpost.com/software/primaltrack-yhvj2z | PrimalTrack App Platform
Inspiration
Looking at the trend of highly infectious diseases (SARS, swine flu, Ebola, COVID-19), within around five years there may be another virus pandemic. Current healthcare systems are not prepared for a pandemic. So the questions are: how can we prepare for the next pandemic, and how can we improve the current healthcare system?
Due to the highly infectious COVID-19, many carers are unable to provide care or reach the people who need it. Elderly people suffering from chronic diseases require routine checkups, but this may not be possible during the COVID-19 crisis.
What it does
Our solution is a two-way app platform for caregivers and patients who require routine health monitoring. The system leverages AI technology to analyze data collected daily from facial recognition, speech recognition and wearable devices through IoT, and alerts the caregivers if any risks are identified.
How we built it
Health data are collected from the smartphone camera, sensors, microphone and wearable devices. The collected data are analyzed by trained machine learning algorithms using datasets and APIs available online (Human API and Cloud Vision API). Data visualization was done using Grafana, and InfluxDB is used as a cloud-based time-series database. A secure repository is created on the server, and only doctors/caregivers can access that specific repository; it is secured by putting adequate protection in place at both the server end and the client end. We are also planning to implement multi-factor authentication on the client end, so that when doctors/caregivers access the app or data they will be required to enter an ID, a password and, most importantly, a third factor such as a code or text. To ensure the security of data, any user accessing the server to run analysis must use multi-factor authentication.
Challenges we ran into
Elderly users are not familiar with smartphone apps, and there is a technological barrier to their adopting our platform. Taking this into consideration, we designed the user interface so that impaired senses are accommodated through proper labeling and colour/font selections, so that everything is clear and legible. We also included clear indications of what actions to take, as elderly users might not easily understand what younger users do. We added more engagement to help them remember everything, because they might have a decline in memory.
What's next for PrimalTrack
We are looking for more developers and designers to join the team. Give us a shout if you're interested:) | PrimalTrack | A two-way app platform for elderlies and their caregivers | ['Ava Chan', 'Rohan Pal', 'Esraa Emad Alzaq', 'DANCIL-isecure Cecil', 'Mohammad Yusuf Mulla'] | [] | [] | 34 |
9,927 | https://devpost.com/software/providing-legit-job-postings-msk3rj | JobHunt
Team
Aerica Singla, Arushi Madan and Arun Venu
Inspiration
The COVID-19 pandemic is affecting economies on every continent. Unemployment rates are spiking every single day, with the United States reporting around 26 million people applying for unemployment benefits, the highest recorded in its long history; millions have been furloughed in the United Kingdom, and thousands have been laid off around the world.
These desperate times provide a perfect opportunity for online scammers to take advantage of the desperation and vulnerability of the thousands and millions of people looking for jobs. We have seen a steep rise in fake job postings during COVID-19. In the grand scheme of things, what may start off as a harmless fake job advert has the potential to end in human trafficking. We are trying to tackle this issue at the grassroots level.
What it does
We have designed a machine learning model that helps distinguish fake job adverts from genuine ones. We have trained six models and have drawn a comparison among them.
To show how our ML model can be integrated into any job portal, we have designed a mobile application that demonstrates the integration through the eyes of a job seeker.
Our mobile application has four features in particular:
1) Portfolio page: This page is the first page of the app post-login, which allows a job seeker to enter their employment history, much like any other job portal/app.
2) Forum: A discussion forum allowing job seekers from all around the world to share and gain advice
3) Job Finding: The main page of the app which allows job seekers to view postings that have been run through our Machine learning algorithm and have been marked as real adverts.
4) Chat feature: This feature allows job seekers to communicate with employers directly and discuss job postings and applications.
How I built it
We explored the data and provided insights into which industries are more affected and what are the critical red flags which can give away these fake postings. Then we applied machine learning models to predict how we can detect these counterfeit postings.
In further detail:
Data collection: We used an open-source dataset that contained 17,880 job post details with 900 fraudulent ones.
Data visualisation: We visualised the data to understand whether there were any key differences between real and fake job postings, such as whether fraudulent postings contained fewer words than real ones.
Data split: We then split the data into training and test sets.
Model Training: We trained various models such as Logistic regression, KNN, Random Forest etc. to see which model worked best for our data.
Model Evaluation: Using various classification parameters, we evaluated how well our models performed. For example, our Random Forest model had a roc_auc score of 0.76. We also evaluated how each model did in comparison to the others.
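As a concrete illustration of the evaluation step, the roc_auc score mentioned above is the probability that a randomly chosen fraudulent posting is ranked above a randomly chosen genuine one by the model. A minimal pure-Python version of the metric (the labels and scores in the example are made-up stand-ins, not our model's outputs):

```python
def roc_auc(y_true, y_score):
    """AUC as the fraction of (positive, negative) pairs ranked correctly.

    y_true: 1 for a fraudulent posting, 0 for a genuine one.
    y_score: the model's predicted probability of fraud.
    Ties count as half a correct ranking.
    """
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# e.g. roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]) == 0.75
```

A score of 0.5 means the model ranks no better than chance, which is why a value like 0.76 indicates genuine, if imperfect, discrimination between real and fake adverts.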
Immediate impact
Especially during, but also after, COVID-19, our application aims to relieve vulnerable job seekers of the fear of fake job adverts. By doing so, we would re-focus job seekers' time onto postings that are real and hence increase their chances of getting a job. An immediate consequence would be decreased traffic to fake job adverts, which would hopefully discourage scammers from posting them at all.
Police departments don’t have the resources to investigate these incidents, and it has to be a multi-million-dollar swindle before federal authorities get involved, so the scammers just keep getting away with it. Hence our solution saves millions of dollars and hours of investigation, while protecting workers from being scammed into fake jobs and having their information misused.
What's next for Providing legit job postings
We wish to completely automate the notification system built using 'Twilio'.
Built With
javascript
machine-learning
python
tensorflow
twilio
Try it out
github.com | Providing legit job postings | Filtering out fake job adverts from existing job portals | ['Arun Venugopal', 'Arushi Madan', 'Aerica Singla'] | [] | ['javascript', 'machine-learning', 'python', 'tensorflow', 'twilio'] | 35 |
9,927 | https://devpost.com/software/corona-guard | Corona Guard
Home Screen
Resources Page
Contact Log Page (part 2)
Contact Log Page (part 3)
Settings Page
Contact Log Page (part 1)
NFC Scanner Station Overview
NFC Scanner Station Front View
NFC Scanner Station Side View
NFC Scanner Interface
NFC Prototype
Backend
Backend Database
Inspiration
Since December 2019, the coronavirus pandemic, named COVID-19 by the World Health Organization, has spread to over 188 United Nations member countries, infecting over 4.9 million people and causing over 300,000 deaths around the globe. The coronavirus's high R0 value of 2.5, combined with its long incubation period of up to 14 days, means that the only way to control the spread of COVID-19 is social distancing. This has caused the majority of public spaces to close, wreaking havoc on the global economy. To reopen the majority of the global economy safely, robust testing and tracing infrastructure is needed to prevent a new spike in COVID-19 cases and deaths.
Most countries around the world lack robust infrastructure for tracking the spread of Covid-19 letting it spread very quickly causing unexpected spikes in cases all over the world. Contact tracing is a method of tracking personal interactions in order to preemptively warn a person before they spread Covid-19 to others who they will come into contact with in the future. By tracking personal interactions before Covid-19 actually spreads, many of the risks of being in public are reduced while healthcare providers can take a proactive approach when treating suspected cases of Covid-19 potentially saving thousands of lives. All primary, secondary, and tertiary interactions are logged which allows people to get notified even if they are at a low risk of contracting Covid-19. Contact tracing also allows health care professionals to allocate limited resources like medications and vaccines to people who need them most by finding people who are at the highest risk for contracting Covid-19.
What it does
Corona Guard is a secure contact tracing app that utilizes peer-to-peer Bluetooth communication to anonymously track the spread of COVID-19. It notifies users of daily interactions with other users of the application, gives updates regarding the number of direct, indirect, and distant interactions with people testing positive for COVID-19, and calculates the risk of having the virus. Corona Guard also features an NFC check-in that users must complete before entering public spaces, to ensure that those spaces stay within healthy capacity levels. Owners of these public spaces also have the option of barring people from entering their property if they are high-risk users. The app has a resources page giving the most up-to-date and accurate news and recommendations during the pandemic to prevent misinformation. All in all, Corona Guard aims to curb the spread of COVID-19 at its source through anonymous contact tracing and its NFC system, so that communities can collectively tackle the pandemic once and for all.
How we built it
The main mobile application for Corona Guard was made using Google’s Flutter SDK. Flutter is a mobile SDK that is compatible with Android Studio and Xcode letting mobile applications be compatible with both android and ios devices of any shape, size, and operating system version. Flutter enables Corona Guard to run on any modern smartphone with proper scaling and a responsive UI. Using the Flutter SDK, our team developed a responsive UI that enables users to get real time data about their risk of infection while receiving notifications if they have been in contact with someone who has tested positive for Covid-19. The app also provides links where users can get up to date information about Covid-19 spread and procedures in their respective areas. The user interface of our app was designed with the intention of giving users easy to access information while being as transparent as possible about how a user’s data is stored and used.
The backend of the app was built using Google Firebase. There were two main tables, one for users and one for the entire system. The user side had these fields: “uuids heard”, a boolean “infected” value, and a calculated “risk” percentage. Every user updates the uuids they heard to the system, and this is stored in the “events” table. This table has 3 fields as well: “uuid”, a “time” value, and the user who uploaded this value. Together, these two datatables work to send data to the Flutter frontend. Firebase gives our app the ability to adapt to a changing user base being fast and responsive with one or one billion users.
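Given those two tables, a user's risk value can be derived by intersecting the UUIDs their phone heard with the UUIDs broadcast by users marked infected. The data shapes and the fixed per-contact weight below are illustrative assumptions; the project's actual risk formula is not specified:

```python
def compute_risk(heard_uuids, infected_broadcasts):
    """Estimate infection risk from overlap with infected users' broadcasts.

    heard_uuids: UUIDs this user's phone heard over Bluetooth.
    infected_broadcasts: mapping of infected user id -> UUIDs they broadcast.
    Each distinct infected user whose broadcasts overlap with what this
    phone heard adds a fixed (illustrative) weight, capped at 1.0.
    """
    contacts = sum(
        1 for uuids in infected_broadcasts.values()
        if set(uuids) & set(heard_uuids)
    )
    return min(1.0, contacts * 0.5)
```

Because everything is keyed by anonymous UUIDs, this computation never needs a name or phone number, matching the privacy model described later.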
The NFC scanner we built utilizes the MIFARE NFC standard which can transmit over 1KB of data wirelessly in under a second. The scanner reads the data in blocks which each hold 16 bytes of data. There are 64 blocks meaning the scanner reads 1024 bytes of data in total. The NFC tag in the phone stores whether a person has been exposed or infected with Covid-19 in the first byte in the first block as either a 0 or a 1; 0 for negative and 1 for positive. We stored this data in the first block so that in the event the scanner gets a partial read the scanner will still be able to display a positive or negative result. Lastly, if the rest of the bytes in the block are not clear (set to 0) the scanner will read the phone’s NFC chip as invalid as the person who is scanning the phone is likely using an unsupported app. As no data is needed by the arduino microcontroller, our NFC scanner can run without an internet connection making it even easier to use for businesses of all sizes. The only potential maintenance a business would have to perform are firmware updates every few months as we continue to optimize the scanner to become faster and faster.
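The first-block convention described above can be sketched as a small parser. The function name and return labels are illustrative, but the layout follows the description: a 16-byte block, the status in byte 0, and every other byte required to be clear:

```python
BLOCK_SIZE = 16  # bytes per MIFARE block

def parse_status_block(block: bytes) -> str:
    """Decode the first MIFARE block written by the app.

    Byte 0 holds the status (0 = negative, 1 = positive); every other
    byte in the block must be zero, otherwise the tag was written by
    an unsupported app and is treated as invalid.
    """
    if len(block) != BLOCK_SIZE:
        return "invalid"
    if any(block[1:]):
        return "invalid"  # non-zero trailing bytes: unsupported app
    if block[0] == 0:
        return "negative"
    if block[0] == 1:
        return "positive"
    return "invalid"
```

Putting the status in the very first block is what lets the scanner still return a result on a partial read, as the description notes.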
Industrial Scanner Design
As part of our NFC scanner, we also prototyped a sleek and industrial scanner enclosure for use in public spaces to accompany the NFC scanner and user interface. The scanner enclosure is made out of aerospace-grade aluminum and features a stylish industrial design. The user interface includes an NFC reader/writer and a large, high resolution 12" LED display.
Challenges we ran into
Some of the challenges we ran into while building Corona Guard involved making it as private as possible. We initially thought of using geolocation or GPS, but many people are hesitant to give their every location to a private company. Thus, we decided to transmit anonymous UUIDs (Unique User IDs) between users in order to track which phones had contact with each other. These UUIDs are not connected to any private data such as someone's name, so we decided this was a secure enough way for everyone to remain completely anonymous while still accurately tracking contacts.
Accomplishments that we're proud of
The application is fully functional and is compatible with iOS 8 or newer and Android Jelly Bean, v16, 4.1.x or newer.
What we learned
We learned the fundamentals of app development and how to build a working application from the ground up. We learned how to use the Flutter SDK for the front-end UI/UX and Google Firebase for the backend.
What's next for Corona Guard
We are hopeful that this application can provide communities a tool to collectively combat the spread of COVID-19 through its accessibility and ease-of-use. Through connections with health organizations, we can provide COVID-19 testing centers with information on who to prioritize testing and give health officials valuable information to help stop the spread of the pandemic. In the future, we hope to continue expanding our knowledge of algorithms and data science techniques to make the backend of the app more efficient and scalable.
Built With
arduino
bluetooth
dart
firebase
flutter
nfc
Try it out
github.com | Corona Guard | Corona Guard is a smart contact tracing app that aims to slow the spread of COVID-19 and other infectious diseases by logging interactions between humans in a secure and private way. | ['Vikas Ummadisetty', 'Derek Xu', 'Krishna Veeragandham', 'Subash Shibu'] | [] | ['arduino', 'bluetooth', 'dart', 'firebase', 'flutter', 'nfc'] | 36 |
9,927 | https://devpost.com/software/360-virtual-assistance | Inspiration
The aging of the population and the drop in birth rate are creating a serious problem.
Elderly people will live alone, and they won't have anyone to take even minimal basic care of them, because there won't be enough social workers.
What it does
Connect the health care/social worker with the people in need through 360 Virtual Video Streaming, so they can instantly check whether the person needs help.
How I built it
Thanks to cheap 360 camera technology and built-in mobile VR technology, you can get the 360 video stream and check that everything is OK within seconds.
Challenges I ran into
Making the app as friendly as possible, so the person in need doesn't have to do anything for the remote health care/social worker to check on them.
Accomplishments that I'm proud of
Being able to use the most affordable technology on the market to provide a never-before-seen service.
What I learned
Usability, 360 streaming, Virtual Reality interaction
What's next for 360 Virtual Assistance
Find support to keep improving the project so we can deploy it for real, for consumers.
PROJECT'S GITHUB
Find all the necessary resources to deploy your custom remote assistance solution
https://github.com/EstebanGameDevelopment/RemoteRobotController
Built With
java
mysql
php
rmtp
unity | Remote Social Assistance | The instant health care assistance to millions of elderly people who life alone | ['Esteban Gallardo'] | [] | ['java', 'mysql', 'php', 'rmtp', 'unity'] | 37 |
9,927 | https://devpost.com/software/stacy-bot | Interface in FB messenger
This representation of NLP
Features which will be added more as time goes
PLEASE NOTE THIS IS A TEST BOT; AS PUBLISHING AND VALIDATION TAKE TIME, IF YOU WANT TO USE IT YOU NEED TO BE A TESTER. BUT YOU CAN USE THE PHONE CALL FACILITY.
CALL AT: +1 463-221-4880
(This is a toll-free number based in the US. If you are outside the US, only minimal international charges will apply; I am from India and it costs $0.0065/min.)
If you want to use this app in your Facebook Messenger as shown in the video, please comment your Facebook ID in this project's comment section and I will add you as a tester to this app.
IT IS JUST A WORKING DEMONSTRATION OF MY IDEA TO TACKLE THE PROBLEM; IT CAN BE ADAPTED TO THE DEMANDS OF ANY ORGANISATION. AND THE BEST THING IS THAT IT IS NOT A CONCEPTUAL IDEA; IT IS A TOTALLY REALISTIC IDEA THAT CAN BE DEPLOYED AT ANY MOMENT ACCORDING TO THE DEMANDS OF THE ORGANIZATION
Our Goal
General Perspective
Due to the COVID-19 situation, the workforce of the world is shrinking (since everyone is maintaining self-quarantine and social distancing), which is creating havoc around the world. Through this project I mainly aim to tackle this problem and help health organizations with a virtual workforce that runs 24*7 without any break and handles all kinds of matters, from guiding people through filling in forms to managing patient data automatically and all together.
Business Perspective(if required)
Bot service (it is not a company yet; I am just referring to the company we want to build or start, as we are student developers right now) that adds a virtual workforce to every client organisation to help it bloom in the market. From a business perspective, our potential targets are small businesses, NGOs and health organisations; we help them avoid human service costs and grab more users by providing 24*7 interaction with their users, thus generating more revenue for them.
Inspiration
I was really inspired to make this advanced A.I. bot by the current COVID-19 situation: because of COVID-19, people are restricted from gathering, so the workforce and user interaction of various health organisations are adversely affected. Through this project I aimed to connect health organizations with patients anywhere in the world, using any platform (not limited to Android, iOS or the Web), and to manage patient data automatically, thus reducing human effort and maintaining social distancing.
I made this project to bring a change.
How is our product different from others
1)
There are many types of A.I. bots; most are decision-tree-based models that work only through particular buttons. Our product is entirely based on NLP models, which are more advanced and in higher demand.
2)
Other A.I. bot service providers are confined to only one or two platforms, whereas we let the client choose from a wide range of platforms: Facebook Messenger, Google Assistant, Slack, LINE, website bots, and even phone calls.
3)
For the health organisations willing to buy our technology (we are also willing to donate it for free), we will also be cheaper than our competitors: where others charge about $3,300/year for the service, we charge a one-time fee in the $100-$1,500 range with more versatility.
It will be totally free for any end user; no charges apply to users.
What it does
Our bot empowers every health organisation in situations like COVID-19 by managing screening, testing, and quarantine data, and by connecting people who are willing to be tested through diverse digital platforms. Where the internet is not working (and other bots won't function), our bot still works over a phone number, providing useful results in such situations. It covers all the important aspects of an advanced A.I. bot, and it also connects health organisations with volunteers willing to donate their time as helping hands in this hour of need.
How I built it
I built it using Google Cloud A.I. solutions and the Google Cloud Dialogflow framework (which includes automatic Firebase integration). I trained the bot with NLP on large datasets from the WHO and the government, then integrated it with Facebook Messenger through a Facebook Developer account. It also supports phone calls.
Challenges I ran into
I had to face many challenges. As a solo developer, building such a complex app with all the advanced features mentioned cost me a lot of sleepless nights, but I hope my hard work pays off.
Accomplishments that I'm proud of
I am really proud of the app that I made because it itself is a big milestone for a solo developer like me.
What I learned
I learned a lot throughout the journey of developing this app: advanced use of Google Cloud A.I. solutions and Dialogflow, integrating with Facebook Messenger, making filters inside the chatbot to enhance the user experience, and connecting it with a phone number to receive phone calls.
What's next for Health Bot
If my work gets selected, then for sure I am going to work really hard to make Health Bot even bigger and to add more amazing functionalities to make my users happy.
Built With
dialogflow
facebook
google-cloud
javascript
json
Try it out
github.com | Advanced A.I Health Bot | An A.I bot with: Telephone calling,NLP,24*7 health coverage,total automatic data management,wipes rumors,Easy navigation,HD pictures,Customer service help etc | ['Udipta Koushik Das'] | ['Accessibility: Second Prize', 'Healthcare: Second Prize'] | ['dialogflow', 'facebook', 'google-cloud', 'javascript', 'json'] | 38 |
9,927 | https://devpost.com/software/clark-vision-asnobe | Clark Vision old
Clark Vision old
Clark Vision old
Clark Vision old
Clark Vision latest
Clark Vision latest
Clark Vision latest
Inspiration
With an aging workforce, a growing skills gap, and rising demands for quality and productivity, it's never been more urgent to empower the people who get the work done every day. That's why augmented reality, with smart glasses as the primary form factor, is leading the way and has gone from being the "wave of the future" to the wave of now.
From production to delivery to maintenance, ClarK Vision is bringing more power to the people—and helping global industrial companies gain a powerful competitive edge by giving extra powers to their employees.
What it does
ClarK Vision offers a solution based on augmented reality (AR) that improves workers’ efficiency and safety across industries. The AR smart glasses have features such as a camera capable of continuous QR scanning, a thermovision camera, wireless connectivity, environmental sensors and integrated head-up display. From factories and warehouses to fieldwork, ClarK Vision offers a scalable solution, being part of a modular ecosystem.
Put in use, AR glasses for industrial purposes help employees in their day-to-day jobs by enhancing their senses. Apart from being protective glasses, their main purpose is to provide explicit guidance overlaid on the work being done. Because graphic information is projected on the lens in coordination with the task being performed, human error is reduced and productivity grows.
ClarK Vision can accommodate various use cases by adapting the hardware and software according to our business partners. Its hardware and software are modular thus allowing for customization of the product according to the client’s needs. The head-up display offers real time information easing the work of the employee and allowing for a boost in productivity.
How I built it
Both the ClarK Vision product idea and the team were born in March 2018, before that year's edition of Innovation Labs, the largest Romanian startup pre-accelerator. After winning the Innovation Lab’s Startup of the Year Award and BearingPoint’s “Be an Innovator” Competition in Berlin, we decided to incorporate our startup and we created SC NUCLEUS TECHNOLOGIES SRL on 19th of June 2018. The company started as a team of 6 but restructured by the end of 2018 to its core founders: Vlad Măcelaru, Ionuț Moldovanu, Costin Costea, Alexandru Șolot - each with a 25% share of the company.
Challenges I ran into
Accomplishments that I'm proud of
The shortlist of hackathons and innovation competitions we have won or participated in is as follows:
Innovation on National Market
won Startup of the Year Award - Innovation Labs, Bucharest 2018
won Best Startup Award - 200 Seconds of Fame, Bucharest 2018
won 1st Prize - StartUp Path, Bucharest 2018
won 1st Prize - StarTech Factory, Bucharest 2018
Innovation on the European Market
Finalist, Entrepreneurship Award – European Robotics Forum, Bucharest 2019
Finalist, Best IoT Startup - Central European Startup Awards, Bucharest 2019
won 1st Prize - Be an Innovator By BearingPoint, Berlin 2018
Finalist Open Innovation - European Youth Award, Salzburg 2018
Innovation on International Market
Semifinalist NextGen Logistics - University Startup World Cup, Copenhagen 2018
Top Picks, Advanced Manufacturing & Robotics – TechCrunch Disrupt, Berlin 2018
What I learned
The beginning of 2020 and the current pandemic crisis created an opportunity for us to pivot towards a more distilled version of the product. We are currently working towards a remote assistance solution composed of the ClarK Vision glasses and a proprietary cloud platform, leaving additional sensor development on hold.
What's next for Clark Vision
ClarK Vision is a wearable augmented reality device that is especially powerful, as it delivers the right information at the right moment and in the ideal format, directly in workers’ line of sight, while leaving workers’ hands free so they can work without interruption. This dramatically reduces the time needed to complete a job because workers needn’t stop what they’re doing to flip through a paper manual or engage with a device or workstation.
Built With
3dprinting
android
angular8
java8
javascript
json
mongodb
rest
springboot
webrtc
websockets
Try it out
clark.vision | Clark Vision | Clark Vision - Augumented Reality Glasses | ['costea-costin96', 'Moldovanu Ionut', 'Vlad Macelaru', 'Alexandru Solot'] | [] | ['3dprinting', 'android', 'angular8', 'java8', 'javascript', 'json', 'mongodb', 'rest', 'springboot', 'webrtc', 'websockets'] | 39 |
9,927 | https://devpost.com/software/modulus-l83n26 | . | . | . | ['Ryan Ma'] | [] | [] | 40 |
9,927 | https://devpost.com/software/modulus-7i30cv | Inspiration
In light of the recent COVID-19 crisis, we've seen staggering demand for online courses as students grapple with a reality in which education is now delivered over the internet. But traditional e-learning platforms like Khan Academy struggle to keep up with demand, while LMS platforms like Canvas, which require teachers to sign up as part of large, wealthy organizations such as school districts, are difficult to use and lock out small, independent teachers who just want to continue teaching. On top of that, each platform relies on a single medium of teaching, such as Udemy through videos and Edmodo through text, without regard for user learning preferences.
What is Modulus?
Modulus is an online education platform, similar in concept to Canvas or Blackboard, both of which are used by schools and universities around the nation. But unlike existing platforms, Modulus directly integrates the VARK learning styles - a psychological framework for teaching - into an incredibly simple to use, modular course structure that anyone can use to teach anything. The result is a fairer, more accessible, and more equitable online education for everyone.
Modulus Features
Modulus includes VARK profiles, which are charts that display the proportions of different learning styles for a course or a user. Across the entire user interface, the colors and learning styles used in the profiles are consistent, which means you can tailor your education to your learning preferences.
Fast, responsive, and intuitive, with no bloatware, unlike other LMS solutions that disadvantage those with poor hardware, slow internet connections, and little tech-savviness.
Peer-to-peer: our platform lets anyone create, upload, and share courses, with the idea that we can recreate the Montessori model of learning in a digital environment.
How is Modulus used?
Modulus is used to create a digital classroom online, where teachers can post courses, assignments, lectures, and tests to share with students anyplace, anytime. Our goal is to recreate the best parts of modern educational methods, from VARK learning models to Montessori peer-to-peer instruction, in an online environment, so that as a society progress can continue to be made in the field of education, even from home during quarantine.
How I built it
We used React to develop the front end for the web application, while integrating with the Google Firebase service for backend database operations. For the landing page, we used Bootstrap, and React for the web app educational platform itself.
Challenges I ran into
This was the first time our team had used Firebase and Google Cloud services for user authentication and data storage, so it was difficult to integrate them into our web app, which is written in React, a framework we had learned for our first hackathon only two weeks ago. We encountered lots of issues merging these new technologies and deploying them successfully on Heroku.
Accomplishments that we’re proud of
Despite having just learned Firebase, and only having two weeks of experience with React and Bootstrap, we managed to do the following:
A fully functional web platform, with an intuitive and extremely fast design.
Full integration with a cloud-hosted database backend that tracks course enrollment for our individual users
Automated emailing for password recovery
Integrated course creation into the platform
Anti-bot services like Recaptcha
What's next for Modulus
Our team hosts a tutoring service for middle school and high school students who either want to catch up or get ahead during this difficult time, so we plan on using this platform ourselves to promote education for all.
Who we are
High School Juniors from Seven Lakes High School, in Houston, Texas
Daniel Wei - danielwei15#3016
Ryan Ma - GoblinRum#8553
Haoli Yin - Nano#4890
Built With
bootstrap
cmd
css3
express.js
firebase
google
heroku
html5
javascript
node.js
npm
react
recaptcha
research
Try it out
modulusplatform.site
github.com | Modulus | An online education platform that directly integrates VARK learning styles for efficient online learning | ['Haoli Yin', 'Daniel Wei', 'Ryan Ma', 'Mohamed Hany'] | ['Best Educational Impact'] | ['bootstrap', 'cmd', 'css3', 'express.js', 'firebase', 'google', 'heroku', 'html5', 'javascript', 'node.js', 'npm', 'react', 'recaptcha', 'research'] | 41 |
9,927 | https://devpost.com/software/augmenta | Opening Screen
Introduction Screen
Picture of the text book
Picture of the textbook when Augmenta Augments 3d Models on it.
Inspiration
Due to COVID-19, almost one in four children living under lockdowns, social restrictions, and school closures are dealing with feelings of anxiety, with many at risk of lasting psychological distress. This hinders learning for young children, who cannot see objects, words, or letters or visualize them in real time.
Feelings of helplessness, loneliness, and fear of being socially excluded, stigmatized, or separated from loved ones are common in any epidemic, while prolonged stress, boredom, and social isolation, as well as a lack of outdoor play, can lead to a higher number of mental health conditions in children, such as anxiety.
What it does
Our main aim is to create immersive visual prefabs or cues (3D models) to ease the daily tutelage of young kids, using augmented reality with the Vuforia SDK on pre-existing image targets captured through the camera and recognized in real time.
How I built it
One base of our detection system is image processing using the Vuforia SDK in Unity.
Using Unity with AR Core, it detects the image and mounts 3D models on it.
The dot mapping on the image is done using Vuforia's "feature select" feature, which maps points on the target image to help in further augmentation.
Challenges I ran into
Adding textures to the 3D model
Animating the 3D model
Accomplishments that I'm proud of
Learning to simulate models and build an app out of them in just 24 hours
What I learned
Using Unity AR core to make Augmented Reality App
What's next for Augmenta
Introducing brain stimulation to check how well the student is grasping the material.
Introducing custom, user-editable model input.
Introducing augmented screens for kids to watch educational movies.
Built With
assetstore
sketchfab
unity
unityar
vuforia
Try it out
github.com | Augmenta | Augmenta is a project which main aims to create immersive visual prefabs or cues (3D Models) with Augmented Reality using Vuforia SDK on preexisting targets using real-time image recognition | ['Anmol Srivastava', 'ISHAN KUMAR', 'Abhirup Chakraborty', 'Rishabh Chawla'] | [] | ['assetstore', 'sketchfab', 'unity', 'unityar', 'vuforia'] | 42 |
9,927 | https://devpost.com/software/sanitizing-drive-fv7oyj | CAD Design
Impact of Project
Implementation Plan
AE Algorithm
Prototype testing for Drive of the System
Fabrication of the prototype drive
Vision
To provide Innovative Solutions to solve the COVID-19 Pandemic that has affected the entire Human Community.
About Us
We are a team of 5, having expertise in Robotics and Designing which gives us an edge to provide solutions which do not require human intervention.
Problems
Faced due to COVID-19:
This virus has led to the following problems:
Widespread infection.
Improper Technology for Sanitization of Vulnerable Areas.
Technologies available are Costly.
But a question arises here: UVC radiation is very harmful to humans, so how will we ensure correct control of this drive? Read along!
Solution
Provide low cost Sanitizing Robot.
Safety is ensured without any hindrance to people working around the places the robot traverses.
Human Intervention is not required as the system is FULLY AUTONOMOUS. It will travel along the entire floor space emitting UVC radiation only along its way on the floor and the walls faced by the light.
Since, in this fight with COVID-19, we all have to stand strong and ensure that no lapses are entertained. This system will be deployed in hospitals and commercial spaces where the chances of virus spreading are higher.
Implementation Plan
It is a four-wheel mecanum drive which will be used to disinfect the Hospital Area and Commercial Space. It is fully autonomous using UVC Tubes. The UVC tubes are placed at positions which ensures efficient sanitization.
The drive consists of a microcontroller (Arduino) that controls the motor driver and is powered by the power distribution board. The motor driver drives the four wheels with the help of motors. The sensor data we receive as input is fused using an autonomous exploration algorithm, which gives us a 3D mapped image of the surroundings. This map yields the accurate distance and angle of an obstacle/object from the robot as output, so that it can be avoided easily. Because there are multiple sensors, the field of view is wide and the results are more accurate.
Impact of the Project
Why are we using Ultraviolet Light?
Ultraviolet light can be an effective measure for decontaminating surfaces that may be contaminated by the SARS-CoV-2 virus by inducing photodimers in the genomes of microorganisms.
Is it Effective against SARS-CoV-2?
It is estimated that the SARS-CoV-2 virus can survive on surfaces for up to 9 days, based on its similarity to SARS and MERS. Research shows that the average dose required to disinfect the virus and bacteria is 67 J/m^2.
Calculation
The UV dose required to disinfect the coronavirus is 67 J/m^2, an average value obtained from research papers.
Reference: Kowalsky W., “2020 COVID-19 Coronavirus Ultraviolet Susceptibility”, March 2020,
https://doi:10.13140/RG.2.2.22803.22566
If we are using a UVC tube of 15 watts, then to disinfect viruses with an efficiency of almost 99%, we only need to calculate the time for which the surface must be exposed to UV radiation.
To find the time of exposure we can use the inverse square law of radiation, which gives:
I = P/A = P/(4*pi*R^2)
D = E/A = (P*T)/A
where:
D = UV dose required to disinfect viruses (J/m^2)
A = area of exposure
P = power of the light source
T = time of exposure
R = distance between the light source and the surface
So, T = D*A/P = (D*2*pi*R^2)/P
T = (67 * 2 * pi * 0.5 * 0.5)/15 ≈ 7.01 s
Here we used the surface area of a half sphere (2*pi*R^2), since the luminous flux in our bot is concentrated within half of the spherical curvature (in fact less than that).
These calculations make it clear that within about 7 seconds of exposure to UV radiation from a distance of 0.5 m, we can have an almost 99% virus-free surface.
Note: the dose delivered by a UVC-emitting unit varies depending on the distance between the light source and the irradiated area, and on any object in between that casts a shadow.
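The exposure-time arithmetic above can be checked with a short script (a minimal sketch; the 67 J/m^2 dose, 15 W tube power, and 0.5 m distance are the figures quoted above):

```python
import math

def exposure_time(dose_j_per_m2, power_w, distance_m):
    """Seconds needed to deliver a UV dose at `distance_m` from the tube.

    Inverse square law over a half-spherical spread: intensity
    I = P / (2*pi*R^2), so the required time is T = D * 2*pi*R^2 / P.
    """
    area = 2 * math.pi * distance_m ** 2  # half-sphere surface area
    return dose_j_per_m2 * area / power_w

# Figures from the write-up: 67 J/m^2 dose, 15 W tube, 0.5 m distance.
t = exposure_time(67, 15, 0.5)
print(round(t, 2))  # ~7.02 s, matching the ≈7 s figure above
```

In practice the tube's flux is concentrated into less than a half sphere, so this is a conservative (upper-bound) estimate of the exposure time.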
Work done During Weekend
1)CAD Designing.
2)Calculations for using UVC light and its intensity (time required for disinfection).
3)Simulation in Webots.
4)A Simple Prototype design using hands-on material present with us in our Lab.
5)Completion of Programming and Control Algorithm.
Requirements for Continuing the Project
It is important to continue working on this project in order to achieve the best results. The following points explain the requirements for continuing it:
1)To ensure that there is no upsurge of cases, timely sanitization of public places is required.
2)We therefore need this project to ensure that public health is maintained.
3)The maximum cost of making this robot is 16,000 INR. This could easily be reduced if the robot were mass-produced.
4)The components required (as listed in the Bill of Materials) are readily available and easy to interface.
On average, only 4 members are needed to build this robot in a week, which helps employ people and further reduces unemployment.
Value of the Project after Crisis
Projects that provide solutions to the COVID-19 virus should remain in use even after its effects have slowed down. As per reports, scientists predict that humans may have to practise some form of social distancing till 2022. Therefore, to ensure that our environment stays clean, we need to deploy projects that help in the long run.
The following points explain the value of this project after the crisis:
1)The robot can be deployed in public places since it is fully autonomous to disinfect the area.
2)Sanitization will prevent the growth of virus and bacteria, therefore this robot can ensure timely and successful operation in killing the virus.
3)The making of the robot will generate employment that will contribute to the already crippled world economy.
Business Model
Core Value: To provide the best sanitizing robot that ensures the disinfecting of workplaces and help in continuing a better life.
Target Customers: Primarily Hospitals and Commercial Workspaces like offices, colleges, schools, etc where there is a high probability of spread of the virus.
Revenue Streams: With this product, customers will ensure timely sanitization of their workspaces, and the revenue earned from sales will be used to fight this virus in the coming years. 35% of the profit will be donated to the families of medical professionals who lost their loved ones in this fight.
What Makes Us Stand Out
The current pandemic, termed coronavirus, has brought all of humanity to its knees. Providing the frontline warriors with the right ammunition to fight this virus should be our only motto. At this time of emergency, we propose PYRO, a sanitizing robot. The robot is fully autonomous, which helps it traverse an area and clean it. Sanitization is carried out using UV-C tubes, which can destroy the virus. The robot will be deployed in vulnerable areas with the potential to become virus hotspots, such as hospitals, schools, and colleges.
To ensure the proper dosage is emitted by the UV-C unit, the distance is calculated, and proper consideration is given to shadowed and/or critical areas. Disposable indicators are used to confirm that an adequate dose has been delivered.
Video Links
Simulation Video
Prototype Video
Pitch Video
Built With
arduino.ide
c++
google-slides
solidworks
webots
Try it out
github.com | PYRO- A Sanitizing Robot | The need of the hour is to remove this virus from our environment. One of the ways in which we can do this is by sanitizing our hospitals and workplaces with the help of this robot which uses UVC tube | ['Rabindra Sah', 'ANIMESH PAL', 'Aashray Arya', 'atif akhtar', 'shashank bhati'] | [] | ['arduino.ide', 'c++', 'google-slides', 'solidworks', 'webots'] | 43 |
9,927 | https://devpost.com/software/scribr-p7heb5 | Inspiration
Taking notes by hand has been proven to increase material retention. But it also increases something else: the chance of losing your work. What if you could have the learning benefits of handwriting notes but still keep a copy as a Google or Word document and Ctrl-F through it later? We had to tackle this problem ourselves.
What it does
Scribr is a deep learning model that allows you to input pictures of your notes and have it transcribed for you.
How we built it
We trained on data from the IAM Handwriting Database using TensorFlow in the cloud, with a CNN -> LSTM -> CTC architecture.
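A CNN -> LSTM -> CTC model for handwriting can be sketched in Keras roughly as follows (an illustrative sketch, not the trained model: the layer sizes, the 32x128 input resolution, and the 80-character vocabulary are our assumptions):

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_htr_model(img_h=32, img_w=128, num_chars=80):
    """CNN feature extractor -> bidirectional LSTM -> per-timestep char logits.

    Training would pair the softmax output with a CTC loss
    (e.g. tf.nn.ctc_loss); only the forward architecture is shown here.
    """
    inp = layers.Input(shape=(img_h, img_w, 1))
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
    x = layers.MaxPooling2D((2, 2))(x)   # -> 16 x 64
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D((2, 2))(x)   # -> 8 x 32
    # Collapse height into features; keep width as the time axis for the LSTM.
    x = layers.Permute((2, 1, 3))(x)     # (width, height, channels)
    x = layers.Reshape((32, 8 * 64))(x)
    x = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(x)
    # num_chars + 1 outputs: one extra class for the CTC blank token.
    out = layers.Dense(num_chars + 1, activation="softmax")(x)
    return tf.keras.Model(inp, out)

model = build_htr_model()
print(model.output_shape)  # (None, 32, 81): 32 time steps, 80 chars + blank
```

At inference time the per-timestep distributions are collapsed into text with a CTC decoder (greedy or beam search).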
Challenges we ran into
After bricking our computers trying to download all the data, we decided to move our data aggregation and model training to Google Cloud Platform’s Cloud ML Engine. This allowed us much more time for optimizing our model.
Accomplishments that we are proud of
Figuring out how to integrate Google Cloud Platform into our workflow was a lifesaver. Our app would not be where it is without it.
What we learned
We learned a ton about Convolutional Neural Networks and Long Short Term Memory Networks while building our project.
What's next for Scribr
There's still room to improve our model through more data and better architecture, which is going to be vital going forward. We also want to create a Flask app to serve our model.
Built With
cloud
docker
flask
google
python
tensorflow | Notable | Model that generates digital notes from images | ['Sebastian Schott'] | [] | ['cloud', 'docker', 'flask', 'google', 'python', 'tensorflow'] | 44 |
9,927 | https://devpost.com/software/we-can-7tbs18 | making tourism adapt the chages to covid
solo travelling
makes Airline travelling more efficient
makes airport procedures easy and efficient
less time taking
formulating the blueprint and implementing
Built With
ai
c++
camera
etc
java
python | "We Can" | "The wings to fly again" | ['Kaustav Paul Chowdhury'] | [] | ['ai', 'c++', 'camera', 'etc', 'java', 'python'] | 45 |
9,927 | https://devpost.com/software/story-board | Inspiration
In this global lockdown, a lot of young kids are stuck at home, missing out on education during the years most important to brain development. We wanted to help the stressed-out parents who are worried sick right now about their kids' futures. Even teenagers find it hard to interact and keep their brains productive and active.
What it does
Our app contains an extensive library of rooms that you can purchase and set up in augmented reality. Once you buy a themed room, you can learn, play, solve mysteries, and more, based on the theme of the room. We also have coins, earned by completing accomplishments, which you can spend to unlock rooms or get discounts.
How I built it
We used the Vuforia package in Unity for Augmented Reality ground plane and then set up individual rooms in it.
Challenges I ran into
Accomplishments that I'm proud of
We built this app in 2 days and this would hopefully help out a lot of parents around the world.
What I learned
I learned about business strategy and product planning along with the usage of databases with unity.
What's next for Story Board
We are going to add a custom-room feature with which educators from all around the world can create their own rooms and earn from their sale. We also realize that 3D development may be a limitation for most of them, so we will set up a marketplace for 3D models as well, where designers from around the world can sell their models and earn from them.
We will also be adding multiplayer functionality so that friends can come together in a room and enjoy as if they are together.
Many more themes, stories, mission-based rooms will be designed by our team to cater to the need of every user.
Built With
blender
figma
mixamo
unity
vuforia
Try it out
drive.google.com | Story Board | Making Education Global and fun | ['Akash Jha', 'Abhijeet Swain', 'Ashwani Kottapalli', 'Aaditee Juyal'] | [] | ['blender', 'figma', 'mixamo', 'unity', 'vuforia'] | 46 |
9,927 | https://devpost.com/software/auto-mask | Inspiration
When mask-wearing became required due to COVID-19, I empathized with the doctors and nurses who have always had to wear itchy, hard-to-breathe face coverings. Realizing that face coverings will always be uncomfortable no matter the material or design, I decided to make masks easy to take on and off, so catching my breath in a grocery store would be effortless.
What it does
Auto Mask features an eye shield to protect from infected saliva, touchless control to minimize bacteria transfer from hands, and even a sneeze detector! An electrode on the abdomen activates the mask just in time to catch a cough or sneeze.
How I built it
I designed the 3D printed headpiece and combined the Arduino microcontroller with an ultrasonic sensor, muscle sensor, and a pair of servo motors.
Built With
3dprinting
arduino
c++
cad
Try it out
www.thingiverse.com | Auto Mask | Normal masks are uncomfortable to wear all day. Auto Mask features touchless mask on/off control and abdominal muscle sensing sneeze detection to catch coughs in time. | ['Taliyah Huang', 'Calista Huang'] | ['Hardware winner'] | ['3dprinting', 'arduino', 'c++', 'cad'] | 47 |
9,927 | https://devpost.com/software/knowledge-bot | Inspiration
In this lockdown, when we are not able to meet our mentors, gurus, and teachers, we attempted to make a wise bot that can answer your doubts and queries.
What it does
It currently answers your questions related to God.
How we built it
We used a React front end and a Python/Flask back end. Every time the user asks a question, the front end hits a Flask API that calls functions to find a suitable answer in the database.
Challenges we ran into
None
Accomplishments that we're proud of
We are proud of both our frontend and backend.
What we learned
We learned various document similarity measures.
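One of the simplest such measures, bag-of-words cosine similarity, is enough to match an incoming question against stored ones (a minimal pure-Python sketch; the bot's actual database and the measures it uses may differ, and the mini question/answer table here is hypothetical):

```python
import math
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between two texts as bag-of-words count vectors."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = math.sqrt(sum(c * c for c in va.values())) * \
           math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

def best_answer(question, qa_pairs):
    """Return the stored answer whose question best matches `question`."""
    return max(qa_pairs, key=lambda qa: cosine_similarity(question, qa[0]))[1]

# Hypothetical mini-database of question/answer pairs
db = [("who is god", "God is ..."), ("what is faith", "Faith is ...")]
print(best_answer("tell me who god is", db))  # 'God is ...'
```

Swapping in TF-IDF weighting (e.g. scikit-learn's `TfidfVectorizer`) is the usual next step, since it downweights common words like "is" and "the".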
What's next for Knowledge bot
Voice to Text and adding more domains
Built With
apis
flask
nltk
python
react
scikit-learn
Try it out
149.129.189.80
github.com | Knowledge bot | In this lockdown, where we are not able to meet our mentors ,gurus, and teachers, we attempt to make a wise bot which can answer your doubts and queries. | ['Ketaki Malhotra', 'Manav Sethi'] | [] | ['apis', 'flask', 'nltk', 'python', 'react', 'scikit-learn'] | 48 |
9,927 | https://devpost.com/software/tip-of-my-tongue-k3weuf | Inspiration
Every day we come across situations when we can't remember that word. We try to reach for it, churning our memory over and over. It becomes exasperating after a while.
Whether you are working on the next epic fantasy series or trudging through the report due tomorrow or trying to tell a funny story, you come to a halt and struggle to find the right words even though you feel it's there, just out of reach. In comes, Tip of my tongue: The hack to crack through the block and finally clinch the word you've been reaching for.
Obviously, a lot of personal experience served as the inspiration for this skill, and we hope it cures a chronic problem for many people.
What it does
The skill is built to cure the 'tip of my tongue' phenomenon, from which it derives its name. Users can ask for something like "find a word that means rear of boat" and the skill will respond with "I have found a list of words for you. The top results are stern, skeg, aft, astern, backwash and scull". Users can look for synonyms of a word, or synonyms that start with a specific letter.
Tip of my tongue provides the following features:
reverse dictionary
Find descriptive words for a noun
synonyms of words
synonyms related to specific topic
synonyms on the basis of starting letter
searching and suggesting words that are most likely to be used together (closely linked words; e.g., if you ask for a word most often used with wreak, Alexa will suggest words like havoc)
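Filtering synonyms by starting letter, one of the features above, reduces to a lookup plus a filter (an illustrative sketch: the in-memory thesaurus is a hypothetical stand-in for whatever word API the skill actually queries, seeded with the "rear of boat" results quoted above):

```python
# Hypothetical mini-thesaurus standing in for the skill's word-lookup API.
THESAURUS = {
    "rear of boat": ["stern", "skeg", "aft", "astern", "backwash", "scull"],
}

def synonyms_starting_with(phrase: str, letter: str, limit: int = 6):
    """Related words for `phrase` that begin with `letter` (case-insensitive)."""
    words = THESAURUS.get(phrase.lower(), [])
    return [w for w in words if w.startswith(letter.lower())][:limit]

print(synonyms_starting_with("rear of boat", "s"))  # ['stern', 'skeg', 'scull']
```

In the Alexa handler, the matched slot values (the phrase and the letter) would be passed straight into a function like this before the response is spoken.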
Built With
alexa
node.js | Tip of my tongue | Clinch the word you've been reaching for since morning! | ['Piyush Agrawal', 'Subham Banga', 'Shashwat Gulyani'] | [] | ['alexa', 'node.js'] | 49 |
9,927 | https://devpost.com/software/test-mfhncq | Updates to AR Pool:
You can see a demo of the new updates at 1:37 in the video.
Here's what's new:
We have added two new gaming modes for use when you go on Facebook Live.
You can have your audience play the game on Facebook Live by posting comments like #left, #right, #PowerUp, etc. to control the rotation and power level of the shot. We like to call this mode #FBPlaysARPool. This way you can host a game on your Facebook Live stream and your viewers play by adding comment hashtags.
Another way to have your audience be part of the game is by letting them challenge you in #ARPoolChallenge mode. This time the audience posts comments like #BottomRight, #CenterLeft, etc. to vote for the pocket you are challenged to put the ball in. If the player puts the ball into the wrong pocket, it's a foul; the player only scores for putting the ball in the pocket chosen by the audience.
Inspiration
There are times when you want to play pool with your friends, but obviously you can't carry the table with you, and during breaks the table is already occupied. None of those mobile games can bring you the joy of playing with your friends standing around while you take the shot. So we thought we could build something that comes a bit closer to that.
For the updates, we were inspired by #TwitchPlaysPokemon, a popular stream on Twitch where viewers control the gameplay by posting comments. It took off so well and we wanted to do something similar for Facebook :D. So we created the #FBPlaysARPool mode.
What it does
It brings a pool table wherever you are. Yes, you can have a pool table right in your living room, office cubicle, wherever. It gives you the ability to play the game from the facebook or instagram camera so you don't need to download another app. You can replay your previous shot while recording it when you hold down the record button on the camera. Share your feats with your friends on messenger or post them on your stories.
With the new updates, you can now host a game on your Facebook Live stream and let your viewers play the game by posting hashtags in the comments like #Left, #Right, #PowerUp, etc. You can also let your viewers challenge you to play a shot to put the ball into a specific pocket. They do this by voting on the pocket of their choice by posting comments like #BottomRight, #TopLeft, etc.
How We built it
We used Spark AR Studio to create the AR experience. We did all the game state management and setting rules in the script. The user interactions and the Scene objects were handled by the patch editor. Script to patch editor bridging helped us to divide the workflow and create a good experience with relative ease.
We have used LiveStreaming module to aggregate comments and count hashtags to control the shot or to add a rule for valid pocket (as chosen by the audience votes).
Challenges We ran into
We didn't know much about working with AR projects when we started. We learned about what the user expects and what the facebook platform offers us. We were able to figure out our dependencies and the constraints relating to what would be a good user experience as we went along after the initial grind. As pure programmers, we struggled with placing and positioning 3D objects and overall setting the scene.
To test the LiveStreaming module, you have to go live which means leaving the development environment and then testing if your code worked which breaks your flow and reduces your efficiency. So, we created a mock for the LiveStreaming module and read autogenerated comments to test our implementation efficiently. We have provided further details in the updates section.
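The hashtag-vote counting that both live modes rely on can be sketched as a small pure function, which is also what makes it easy to drive with mock comments instead of a real live stream. This is an illustrative sketch only: `tallyHashtags` and its inputs are hypothetical names, not the Spark AR LiveStreaming API.

```javascript
// Tally vote hashtags found in a batch of live comments.
// Only tags on the validTags whitelist are counted.
function tallyHashtags(comments, validTags) {
  const counts = Object.fromEntries(validTags.map(t => [t, 0]));
  for (const comment of comments) {
    const tags = comment.match(/#\w+/g) || []; // every hashtag in the comment
    for (const tag of tags) {
      if (tag in counts) counts[tag] += 1;
    }
  }
  return counts;
}

const votes = tallyHashtags(
  ["#Left please!", "#Right", "#Left #PowerUp"],
  ["#Left", "#Right", "#PowerUp"]
);
// votes is { "#Left": 2, "#Right": 1, "#PowerUp": 1 }
```

A mock comment source can then feed strings like these into the same function on a timer, so the game rules can be tested without leaving the development environment.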
Accomplishments that I'm proud of
We were able to work through and learn about Augmented reality experiences, 3D objects and user interactions on the facebook platform. We added a new skill of Spark AR Studio to our arsenal.
We created a mock for the LiveStreaming module and generated mock comments which increased our efficiency while building this effect.
What We learned
Spark AR Studio, digital 3D objects and their composition. The game creation made us do a recall of 3D geometry :D
What's next
We're looking to add a Player vs AI mode with an AI character to go along with it. We'll add an option for players to choose different tables to suit their aesthetics, make the experience more seamless, and create more engaging gameplay.
Built With
javascript
spark-ar
Try it out
ipiyush.com | AR Pool | Bring a 3D pool table with you wherever you go! | ['Piyush Agrawal', 'Shashwat Gulyani'] | ['Regional - Third Place'] | ['javascript', 'spark-ar'] | 50 |
9,927 | https://devpost.com/software/donorfu | bot
Inspiration
The inspiration for this idea came when we had need of blood for our friend. We found so many groups on facebook distributed over different areas. Some groups were highly active and thus cluttered with requests that did not correspond to the location targeted by the group. Others were dormant and requests were unanswered. We got into thinking there should be a solution to manage these groups and match potential donors with specific posts.
What it does
DonorFu is a facebook messenger bot leveraging access to groups approved by admins to match posts with registered potential donors.
Group admins log in and add specific groups whose posts are then processed and matched with donors for sending them notifications in messenger itself. The donors can follow the link to the post in the notification to declare their willingness to donate. Additionally, they can check their eligibility before contacting the poster. The eligibility check is just a basic set of questions and has no association with proper checkups done on site before donation.
How we built it
We have built a bot connecting Dialogflow NLU engine with Facebook messenger platform APIs. We are using Facebook Groups API to get access to posts in groups that are added to our app by their admins. Facebook Login is used for getting permission to access group posts. Then our app can access group posts which can be managed in the bot itself due to Account Linking with the bot. We are using cloud functions for listening to Messenger platform webhooks and process posts. Google maps API is used to check the vicinity of locations.
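The matching step described above can be illustrated with a small sketch; a haversine distance stands in for the Google Maps vicinity check, and all names (`matchDonors`, the field layout) are our own assumptions, not DonorFu's actual code.

```javascript
// Great-circle distance in km between two { lat, lon } points (haversine).
function haversineKm(a, b) {
  const R = 6371;
  const toRad = x => (x * Math.PI) / 180;
  const dLat = toRad(b.lat - a.lat);
  const dLon = toRad(b.lon - a.lon);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(a.lat)) * Math.cos(toRad(b.lat)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}

// Match a parsed post (blood group + location) with registered donors nearby.
function matchDonors(post, donors, maxKm = 25) {
  return donors.filter(
    d => d.bloodGroup === post.bloodGroup && haversineKm(d.location, post.location) <= maxKm
  );
}

const post = { bloodGroup: "O-", location: { lat: 28.61, lon: 77.21 } };
const donors = [
  { name: "A", bloodGroup: "O-", location: { lat: 28.63, lon: 77.22 } }, // nearby, same group
  { name: "B", bloodGroup: "A+", location: { lat: 28.62, lon: 77.21 } }, // nearby, wrong group
  { name: "C", bloodGroup: "O-", location: { lat: 19.08, lon: 72.88 } }, // same group, far away
];
// matchDonors(post, donors) keeps only donor "A"
```

Each matched donor would then receive the Messenger notification with a link back to the group post.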
Challenges we ran into
We faced many challenges while building this which made our journey interesting.
We learned to build a complex manual facebook login flow along with account linking for the messenger.
Another challenge was using Facebook test users with a Messenger bot, as there is no clear way to add the bot to a page created by a test user.
Accomplishments that we're proud of
We were able to extract blood group and location and match them with donor details with good accuracy.
Created a seamless way for managing groups, getting notifications, and signing up for notifications, all within the same ecosystem using a conversational interface.
What's next for DonorFu
DonorFu can easily be extended for other kinds of donations such as organ donations.
On a higher level, this kind of mechanism for matching users with groups posts can be applied to other areas where both parties reap benefits.
Built With
aws-lambda
dialogflow
facebook-graph
messenger
messenger-platform
mongodb
nginx
node.js
Try it out
www.facebook.com | DonorFu | Connecting potential donors with facebook group posts requesting blood donations. | ['Harman Singh Jolly', 'Piyush Agrawal', 'Shashwat Gulyani'] | ['First Place - Regional Round', 'Bonus Prize: Best solution to Bridge on and offline experiences'] | ['aws-lambda', 'dialogflow', 'facebook-graph', 'messenger', 'messenger-platform', 'mongodb', 'nginx', 'node.js'] | 51 |
9,927 | https://devpost.com/software/deepflight | drone
movidius stick
Inspiration
Public surveillance has been around for a very long time, but the technology currently in use is fast becoming outdated, and modern instruments such as UAVs are entering the space. UAVs are unmanned aerial vehicles that can be fitted with autonomous flight, camera coverage and a lot more if given enough computing power. UAVs are mobile and can perform actions such as target following based on the input camera feed.
What it does
In this project, we've built such a system which does active tracking and can drive itself to follow the target. But what happens if the target escapes the UAV regardless? In a public surveillance scenario, we would have several drones monitoring their respective sectors. The Ground Control Station (GCS) relays communication between these drones to re-identify an escaped target. When a target goes off the frame of one of the drones, the collected tracking information helps identify which neighbouring drone should come in to acquire the target again. This is what we call handoff!
We have achieved Multiple Object Tracking using Faster R-CNN, with which our drones are able to detect and track any and every object that enters or exits the frame. With vehicle and human re-identification we are able to re-identify a lost object so that we can keep tracking it after handoff.
We used a Movidius Neural Compute Stick with a Raspberry Pi on our drone to detect potholes. The Movidius stick is used by deploying the layers before the fully-connected layer onto it. The output of this deployed network is then transferred to the ground station, passed through the fully connected layers, and on to further tracking using Deep SORT.
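The handoff idea can be pictured as choosing the nearest idle neighbour when a target leaves a drone's frame. This is a deliberately simplified illustration, not the project's actual soft-handoff procedure, and the field names are hypothetical.

```javascript
// Pick the closest non-busy drone to reacquire a target at (x, y).
// Returns null if every neighbouring drone is already tracking something.
function pickHandoffDrone(drones, target) {
  let best = null;
  let bestDist = Infinity;
  for (const d of drones) {
    if (d.busy) continue; // drones already tracking a target are skipped
    const dist = Math.hypot(d.x - target.x, d.y - target.y);
    if (dist < bestDist) {
      bestDist = dist;
      best = d;
    }
  }
  return best;
}

const drones = [
  { id: "uav-1", x: 0, y: 0, busy: true },
  { id: "uav-2", x: 5, y: 5, busy: false },
  { id: "uav-3", x: 20, y: 1, busy: false },
];
// pickHandoffDrone(drones, { x: 4, y: 4 }) selects "uav-2"
```

A "soft" version of this would also weigh factors like battery level and sector coverage rather than raw distance alone.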
How we built it
Deep sort algorithm is used for the actual tracking assisted by faster RCNN and reidentification. Dronekit is used for simulating quadcopters and controlling them. YOLO object detector is used for pothole detection. Raspberry Pi fitted with Movidius Neural Stick is used for online inference. PyMongo used with stitch connection string pushes the data of potholes to MongoDB Atlas Cluster. Client application receives flight and tracking data on socket.io. It also receives server sent events from Atlas for changes in mongodb database using Stitch-js SDK.
Challenges we ran into
Training Faster R-CNN on the VisDrone dataset was a big challenge.
Accomplishments that we're proud of
We trained our tracker on Visdrone dataset. We also used change streams of mongo and server sent events of MongoDb Atlas. Our soft handoff procedure is able to make efficient assignments while choosing another drone for relocation of target.
What we learned
We now understand Dronekit-python library inside out. We learned to use mongodb cloud tools.
What's next for DeepFlight
DeepFlight features a soft handoff procedure for assigning a UAV for the task of relocation of target. We will be able to build it out further. Along with that, we can implement detection models for traffic violations like improper lane changes.
Built With
deep-sort
faster-rcnn
mongodb
mongodb-stitch
neural-networks
person-reidentification
pymongo
vehicle-reidentification
Try it out
deepflight.ipiyush.com | DeepFlight | Autonomous tracking of objects using a swarm of drones with soft handoff procedure | ['Piyush Agrawal', 'Shashwat Gulyani', 'Subham Banga'] | ['Best use of MongoDB Stitch'] | ['deep-sort', 'faster-rcnn', 'mongodb', 'mongodb-stitch', 'neural-networks', 'person-reidentification', 'pymongo', 'vehicle-reidentification'] | 52 |
9,927 | https://devpost.com/software/epsylon-mask | Epsylon Mask
mask
Removable UVC light tubes
Transparent PVC that empowers lips reading
air flow
Textile filter to prevent dirt and coarser particles from penetrating
Valve for air inlet
Inspiration
“I want the country to know that if I end up on that ICU bed, it is because I was not given enough PPE to protect me. Why is it that when my shift ends, I peel off the same N95 mask that I have worn for 12+ hours straight? I have breathed in stale air all day on a unit rife with the dying”
— KP Morgan. Nurse at The Mount Sinai Hospital
The successful management of a COVID-19 pandemic is reliant on the expertise of healthcare workers at high risk for occupationally acquired influenza. The recommended infection control measures for healthcare workers include surgical masks to protect against droplet-spread respiratory transmissible infections and masks to protect against aerosol-spread infections.
However, every time health care workers go into a COVID patient's room they expose themselves, putting them in jeopardy. It is not one patient and one exposure; it's multiple exposures, putting workers at risk of getting gravely ill each day without proper protection.
What it does
**Disinfection of the breathing air with UVC LEDs**
Epsylon is a reusable face mask with over 99 percent protection against infectious agents. It deactivates viruses and bacteria using UVC LED lights, which in our mask are proven to be harmless to the body. To ensure a cleaner and safer environment, the textile filter prevents dirt and other coarse particles from entering the breathing space of the wearer.
In comparison with the traditional surgical masks and N95, Epsylon enables lip-reading making the mask inclusive and accessible. It is light and can be worn for long hours in addition to being durable with an estimated lifespan of over 5 years.
Lastly, the mask is built not only for the safety of the wearer but also for those around them; the mask filters exhaled breath, making it close to impossible to infect other patients or health care workers, since the mask cleanses both the inhaled and the exhaled breath.
Overall, the efficiency of Epsylon is much higher than current masks on the market for two key reasons:
1) Using silicon technology, loose points are efficiently sealed providing more protection from entry of unwanted particles
2) Currently, the maximum efficiency of masks on the market is 95% whereas with Epsylon we were able to accomplish an efficiency of 99%. This number is estimated to rise to 99.99% with the use of higher quality LED lights giving rise to the overall quality hence the efficiency rates.
How I built it
The Epsylon mask is primarily built around UVC. There are currently UVC LEDs with an area of less than 5 x 5 mm that show an optimal scattering angle of UVC light. They emit disinfecting light with wavelengths between 260-270 nm, each LED has an output of at least 80 mW, and the LEDs can be operated with a voltage of less than 8.8 V.
Every virus needs a different dose of UV light to neutralize it. The reduction always proceeds in log10 steps. For example, if you need 10 mJ/cm^2 for a 90% reduction, then you need 20 mJ/cm^2 for a 99% reduction.
If COVID-19 were similar to the Spanish influenza virus, 3 mJ/cm^2 would suffice for a 90% reduction. For a respiratory mask with a 99.99% reduction, approx. 14 mJ/cm^2 would be needed and would suffice.
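The dose arithmetic above follows from the log10 rule: an n-log10 reduction needs n times the 1-log (90%) dose. A minimal sketch, with a hypothetical function name:

```javascript
// Dose needed for a given reduction, given the dose d90 for a 90% (1-log) cut.
function uvcDose(d90, reductionFraction) {
  const logs = -Math.log10(1 - reductionFraction); // e.g. 0.99 → 2 logs
  return d90 * logs; // same unit as d90, here mJ/cm^2
}

uvcDose(10, 0.9);   // ≈ 10 mJ/cm^2 for a 90% reduction
uvcDose(10, 0.99);  // ≈ 20 mJ/cm^2 for a 99% reduction
uvcDose(3, 0.9999); // ≈ 12 mJ/cm^2 for a 99.99% reduction
```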
The reusability feature is derived from the LEDs, which filter the air with UVC light. The LEDs in use have a lifetime of 1 year if used continuously but can last up to 5 years otherwise, which will often be the case. Additionally, the central part of the mask is made of silicon, which boasts strong durability and reliability. The LED tubes are easily replaceable in case of damage. These qualities make the mask washable, reusable, and extremely durable.
Challenges I ran into
When we started the project, we did not have a clear direction in mind. The whole team was looking for ways to help health care workers in the most optimal way possible. While researching, we found that one of the most common challenges health care workers face during the COVID-19 global pandemic is the sub-par protection provided by the face masks currently used in the health care industry.
The first obstacle was to figure out how to beat the current stats and build a solution that combats and addresses a variety of problems at once. We started by understanding the problems with current masks that put health workers at risk: exposure to viruses and bacteria, build-up from sweat, inability to communicate with the disabled, exposure to a harmful environment, and easy penetration of dust particles into the breathing space.
Once we had identified all the key areas where the current surgical mask and N95 fall short, our next challenge was to pick out the best of the technologies to find the optimal solution to each. And so we did, by including materials such as silicon which make the mask extremely durable, reusable, and easy to cleanse thoroughly.
Accomplishments that I'm proud of
The Epsylon mask team is proud of the current outcome; even in the initial stages, the mask has minimal limitations, which we are looking to fix soon. We are proud that the mask is able to address and check off multiple pain points of health workers at once.
Additionally, after spending hours and days researching, we discovered the harrowing truth of the current situation of the health care industry. We found out that the real number of deaths of front-line health workers is being hidden in the media, and that their families are heavily impacted as well.
To think of it, it's just a mask but it has the power of ensuring psychological balance and relieving the health care workers of the distress of making a decision of whether to attend to a patient or not. Epsylon has the capability to provide nurses and health care workers with confidence that they are safe and providing that safety net to our heroes has been the biggest accomplishment of this project.
What I learned
Throughout this project, there has been a large variety of learning for the entire team, starting with a larger respect, love, and understanding for health care workers and other front-line workers during these tough and uncertain times.
The use of a variety of hardware technologies helped us gain an in-depth understanding of the effect of UVC light on the human body, and we optimized the product to ensure 100% that the UVC light is unable to escape through the specialized materials of the tubes. Additionally, silicon, although often just a replacement for plastic, has been used as a key feature; this taught us the large variety of applications of this simple material, which single-handedly made our product more durable, reliable, reusable, and more inclusive. Consequently, the team learned the importance of inclusivity, which is often overlooked and easily ignored, hence making sure that this health-focused product enables lip reading to ensure the safety of all of humankind.
This has also been a huge business lesson for the team, as designing, creating, and understanding the product not only took scientific and technological knowledge but also required understanding the market, the market size, the target audience, the unique selling point, and competitive analysis, to name a few. Researching all of the above helped us gain insight into the importance of our solution and a more in-depth understanding of the targeted problem.
What's next for Epsylon Mask
Epsylon Mask is more than just a product; it is a gesture of mutual reciprocity. As the tag line reads, "Protecting Those Who Protect Us", this face mask of the future aims to help the heroes of the modern day just as much as they have helped mankind through their tireless services. In its current stage the product has a protection rate of 99%, whereas through research and in-depth knowledge of the technology the team knows that it is very much possible to raise the number to 99.99%, through scaling and high-quality part production.
Finally, Epsylon Mask is not only protective toward the coronavirus but it is built to protect against any influenza virus, hence we further want to explore the possible usages and markets for the product beyond the current target.
Built With
led
silicon
uvc
Try it out
www.figma.com | Epsylon Mask | Protecting Those Who Protect Us | ['Timo H', 'Andrés Guzmán', 'Jocelyn Calderon'] | [] | ['led', 'silicon', 'uvc'] | 53 |
9,927 | https://devpost.com/software/the-artwork | The ARtWork
Inspiration
Many people want to support local upcoming artists and have original artworks in their homes, but they cannot find the connection. At the same time, it is extremely hard for an upcoming artist to show and sell their art. The old-fashioned way: to be represented by a gallery is the most common, but galleries take high percentages of commission and only give a platform to a specific kind of art that they find interesting. The ARtWork is built to break these boundaries and systems within the art world, which is very important in our opinion. The ARtWork is a direct contact between buyer and artist, great for both worlds. Another inspiration is #supportyourlocal, very important in these times of crisis.
What it does
The ARtWork connects buyers and artists. On the platform, artists can upload original artworks they created and want to sell. The app is location based, so buyers can easily look for an artwork in their region (moving artworks can be complicated and expensive), and through this platform buyers can support local artists. Within the app, buyers can browse by their preferences, whether it is a painting, sculpture or new media work; colorful, monochrome, pop-art or else. Artists are required to upload at least one picture of the artwork, tag it accordingly and price the work. After connecting through the app, buyer and artist can be in direct contact through a chat function and, for example, make an appointment in the artist's studio to complete the exchange.
The ARtWork is perfect for young, upcoming artists that want to promote their art, as well as buyers looking for original art away from the conventional channels.
How I built it
Backend: Java. Frontend: Android.
Challenges I ran into
Categorizing the artworks sufficiently so it will give good and accurate results for the buyers, based on their art preferences.
Accomplishments that I'm proud of
What I learned
What's next for The ARtWork
The next step is implementing AR, so that the buyers can check the artwork in their own living room or environment where they want to put or show the work.
Built With
java
unity | The ARtWork | The ARtWork is a platform to connect between artists and buyers. The app let people browse for artworks in their preferred style and media, from local artist. Direct contact: a new way to buy art. | ['gilad grinberg'] | [] | ['java', 'unity'] | 54 |
9,927 | https://devpost.com/software/safeslot-getting-essentials-safely-during-crisis | Logo
Inspiration
In these days of crisis, one of the biggest problems is buying essentials: food and medicine. Wherever you go, there are big queues for stores or overcrowded stores. With less enforcement of social distancing, people are not confident about going to stores. On top of that, a lot of stores are closed or operating for shorter hours than normal.
What it does
Our solution to the above problems is to
Evenly Spread Customer Visits at Different Times
Provide Customers with correct and updated opening status and information
Providing Customers with a Proof of Essential Travel (if any law enforcement agency asks for it)
Our app, SafeSlot, helps implement these solutions. When users open the app, they can see the nearest stores based on their location. We plan to divide the store timings into various time slots with a maximum cap of registrations based on the store/counter size and let users book a slot for their essential shopping. Users can book a maximum of two slots per day for any store. We also have a DriveThru option in which users can upload their grocery list/doctor's prescription and stores can pack it by the time they arrive at the store, reducing the customer visit time to an average of 5 minutes.
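The booking rules described above (a per-slot registration cap and a two-slots-per-day limit per store) can be sketched as follows; this is an illustration with hypothetical names, not SafeSlot's actual code.

```javascript
// Try to book a slot, enforcing the slot's capacity cap and a limit of
// two bookings per user per day per store. Mutates `bookings` on success.
function bookSlot(bookings, slot, userId) {
  const inSlot = bookings.filter(b => b.slotId === slot.id).length;
  if (inSlot >= slot.capacity) return { ok: false, reason: "slot full" };
  const userToday = bookings.filter(
    b => b.userId === userId && b.storeId === slot.storeId && b.date === slot.date
  ).length;
  if (userToday >= 2) return { ok: false, reason: "daily limit reached" };
  bookings.push({ slotId: slot.id, userId, storeId: slot.storeId, date: slot.date });
  return { ok: true };
}

const bookings = [];
const morning = { id: "s1", storeId: "grocer-1", date: "2020-05-01", capacity: 25 };
bookSlot(bookings, morning, "user-42"); // { ok: true }
```

A store-side application would set `capacity` per slot based on counter size and update it live.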
How I built it
Our team built it in NodeJS and ReactJS
Challenges I ran into
The major challenge we have right now is mass adoption. We are trying to solve it by approaching various government authorities and showcasing the app in various contests.
Accomplishments that I'm proud of
We are proud of developing the solution within a short duration of 3 days.
What I learned
We learnt how to deal with a real-life crisis. We ran our idea with various people and learnt how to make a solution practical.
What's next for SafeSlot
We are in the process of creating a Store Side Application to update the slots on a live basis.
Maps integration is in process.
Upload prescription/grocery list feature is being added.
Branding in the app is being taken care of
Built With
node.js
react
Try it out
safeslot.in | SafeSlot | Getting Essentials Safely during Crisis | ['Sanket Patel', 'Shubham Jain', 'Aditi Katyal', 'Aditya Sonel', 'Hardik Gupta', 'Akshay Nagpal'] | [] | ['node.js', 'react'] | 55 |
9,927 | https://devpost.com/software/art-of-virus | 19 nationalities
Ten composers created the 10 main branches of the virus
4 easy steps
Art of Virus
ArtOfVirus
Modern Art Orchestra invites you to get infected with Music. Follow this peerless experimental project and explore the essence of music through tracking our social composing process.
Inspiration
I bet you can name melodies that are the symbols of your life's major events, relationships, etc. The Art of Virus melody is a symbol of our common experience of the pandemic. 9 notes were released and spread by the Modern Art Orchestra, Hungary, through the internet to composers worldwide, who are challenged to write a short melody from these notes. The Phonic Chat platform is the home of the thread of the musical virus. Our challenge is to keep up with the growing community of participants and provide them a user-friendly environment where they can share their contributions in several formats. Our inspiration is to contribute to online social music production. The Phonic Chat platform has been created to collect and nurture virtual fellowship among musicians and engage fans in a new way in the music production process. We think that music is a common experience of humanity, and we are bringing back this practice with the help of IT while developing its online standards.
What it does
Everything that has happened around the whole world due to COVID-19 is very sudden, unusual and outside people's comfort zone.
The ArtOfVirus project helps music artists and composers release their creative power to interpret this crisis of humanity with artistic creativity and as a community. These are the two cornerstones of music's existence and of its major role in people's lives. Technology enables us to make this virtual community workshop of music composing visible and audible for the audience.
How I built it
It has been built upon the basis of the existing Phonic Chat platform within 2 weeks.
Challenges I ran into
The tree structure evolves from the constantly growing number of composed pieces: the music was spread to 10 branch-starting musicians and has now grown to 5-6 levels per branch, with each branch doubling. Rendering this tree structure is a real challenge.
Accomplishments that I'm proud of
The whole system was established within two weeks, and it is able to deal with all the different file formats that composers use. Everyone understood the admin interface very quickly, which means it was designed as simply as it could be.
It is good to see all the artists gathering on our platform from every corner of the world. There are almost 100 registered musicians/composers from 19 nationalities on our site, and they all cooperate with one another.
What I learned
Better in a team.
Better to develop when people use your system while it is being built.
Direct feedback from user experience is like gold.
What's next for Art of Virus
international press recognition
further development towards visualizing the whole branching of the tree-system of composers
enable each branch to play back the whole thread of alterations within that branch
Scale our Phonic Chat platform and get funding for go-to-market
Built With
adobe-illustrator
apache
bootstrap
cakephp
css
design
html
html5
javascript
mysql
nginx
phonicchat
photoshop
php
Try it out
phonicchat.com | Art of Virus - worldwide online composing | The initiative parallels music composing with the spread of a pandemic, the mutation of a virus, its latency, disappearance, and re-appearance, in other words behavior. Experience social music of AOV! | ['Denes Poor', 'Kinga Tamás', 'feketekovacs'] | [] | ['adobe-illustrator', 'apache', 'bootstrap', 'cakephp', 'css', 'design', 'html', 'html5', 'javascript', 'mysql', 'nginx', 'phonicchat', 'photoshop', 'php'] | 56 |
9,927 | https://devpost.com/software/dreamy_vr_tour | Inspiration
We know how the outbreak of COVID-19 has created a global health crisis that has had a deep impact on the way we perceive our world and our everyday lives. Everyone is working from home and are advised not to go out and stay inside.
What it does
Once the site is opened, you can select from the options provided there to visit a place of your choice, and our website will provide a real-time virtual experience of being at that place, enjoying the view and its beauty. If you wish to move on after visiting a particular place and having fun there, you will have the option to switch to other destinations as well. In this way, we provide the user of our website the experience of a holiday trip from the comfort of their home, without breaking the rules of lockdown.
Features
The transformative nature of VR can offer a bridging solution for those with wanderlust.
Virtual reality in tourism allows visitors to learn the names and locations of all the significant sights in the world.
Tourists can also look inside significant locations to determine if they would indeed want a real-time walkthrough of such sights
For entertainment purposes
VR is able to activate the emotions by stimulating the users' senses. Because with virtual reality users are able to interact within the experience. This creates a great opportunity for the entire travel industry, especially for tourist destinations.
Cost-effective: It doesn't require any VR controllers. Also, it's built on google cardboard so it can be easily used on cheap VR boxes.
It can be accessed from any corner of the world.
How I built it
The whole webpage is designed using HTML, JavaScript, CSS, jQuery, Bootstrap, and most importantly A-Frame. First we created the basic 360 image layout, then we added the 360 images of the five selected destinations. Next we added gaze-pointer interaction instead of a reticle pointer. Finally, we added some UI elements like the tabs to switch from one destination to another. In the end, we added a landing webpage which explains our project and also lets the user start the live experience. Along with that, we have added a section for users to reach out to us: if a user wants a particular location to visit, we can add it to the existing VR component.
Challenges I ran into
The biggest challenge was setting up the layout and the gaze pointer. Understanding that most users don't have access to VR controllers, we chose a gaze pointer for selecting places instead. Knowing that sharp colors can cause eye strain, we edited the images with milder colors so that users' eyes don't get hurt and they can have a better experience.
Accomplishments that I'm proud of
We are proud that we were able to make a proper executable website that provides a high-class experience both on the phone and in a desktop browser. One more thing we accomplished is better interaction between the user and the developer, so that users can ask for what they want. For example, if a user wants to visit a particular location, they can communicate it to us directly through the website. For a better experience, users are recommended to install the Google Cardboard app on their phones.
What I learned
Team collaboration and time management are the two things we learned while doing this project. Along with this, we came across many new technologies such as WebVR and A-Frame, and saw how they can be an alternative to mobile VR apps. I also learned how we can build cost-effective solutions for the people who need them.
What's next for Dreamy_VR_Tour
In the future, we hope to add more destinations to the framework, as well as innovate the way we present information to our users. We are also thinking of integrating voice interaction with this project so that users can easily ask about a place and get information regarding it. The main limitation in our way would be high bandwidth consumption, which we plan to reduce by using a RESTful API.
Built With
bootstrap
css
google-cardboard
html5
javascript
jquery
photoshop
webvr
Try it out
github.com
moit-bytes.github.io | Dreamy_VR_Tour | Explore destinations across the world remotely and safely | ['Mohit Kumar', 'Gurram sahithi priya', 'Nitesh Bharti'] | [] | ['bootstrap', 'css', 'google-cardboard', 'html5', 'javascript', 'jquery', 'photoshop', 'webvr'] | 57 |
9,927 | https://devpost.com/software/automated-green-house-utcoa5 | This is the robot i mentioned
AUTOMATED GREENHOUSE.
As the technology advance , we saw it is important to bring new technology to agriculture where by products with high quality will be produce and also it helps to meet the market demand.Here in Africa we depend so much in Agriculture and automating Agriculture here will be a huge step forward.
Our greenhouse is able to manage the essential needs of the particular crops grown so they perform better. The greenhouse can then send all the conditions to the cloud, where you can log in from anywhere to see what is going on. For security, CCTV cameras can be installed and real-time images sent to the cloud; for instance, when a bulb is not functioning, you can notice it through the camera even if you are away from the greenhouse.
We used the materials that were available to bring our idea, the automated greenhouse, to the table. On the technical side we used two Arduinos, a relay, sensors, a fan, RFID, and other components, plus a NodeMCU to send information to the cloud. 5G will really help us here in Africa to deal with Industry 4.0.
We ran into many challenges, but finance is worth mentioning. We love tech and already have the skills.
The greenhouse functioned as expected, which was so cool!
Anything can be achieved; only passion and hard work are needed.
What is next for us is to add more components so that all information from the greenhouse is sent to the cloud over the internet. 5G will help here because it is faster, so we can check what is going on without any delay.
We love technology and hard work; with our passion we believe it will take us far.
Other project
We also tried to build a garbage-collector robot; I have attached an image of it along with the greenhouse. NB: the robot is still under improvement.
-----We, as Starteq Automation, if we could find investors anywhere, are going to automate Africa in whichever way we can-----
Built With
ai
c
c++
hardware
iot
programming
sensors | Automated Green house | How about food security with 5G -technology? | ['limo patrick'] | ['Best Hardware Hack presented by Digi-Key'] | ['ai', 'c', 'c++', 'hardware', 'iot', 'programming', 'sensors'] | 58 |
9,927 | https://devpost.com/software/online-courses-meta-search-team-entity | Inspiration
The idea is inspired by the famous online hotel-booking apps, and since there is no existing platform for online course meta-search, this is an opportunity that we want to grab.
What it will do
A single platform where all the courses available for any domain will be listed, with options to compare them on various parameters in just one click. Courses can be compared and filtered on the basis of syllabus and content covered, duration, fees, and popularity. Imagine being able to list your choices, compare them using your desired filters, and then select according to your objective! It will bring immense ease to pursuing the perfect course for you. And that is what we want to deliver: ease and comfort in helping you select the best available online course.
Built With
audio
photoshop
powerpoint
slides
video | Online Courses Meta-search - Team Ātman | Ātman - Online Courses Meta-search platform to ease and comfort learners helping select the best online available course | ['Rihan Momin', 'Gyayak Jain', 'Mitali Shingne', 'Sagar Jagtap'] | [] | ['audio', 'photoshop', 'powerpoint', 'slides', 'video'] | 59 |
9,927 | https://devpost.com/software/augmented-reality-periodic-table | Inspiration
Education and learning will be affected by the rise of the imminent AR revolution, as many AR-glasses manufacturers are entering the market soon.
What it does
Shows the periodic table in AR space. Just an example of the type of content that will be popular in the future
How I built it
Using Xamarin, ARKit, C# and .NET
Here is a walkthrough of the code
https://youtu.be/T34GpOnUZ7A
Challenges I ran into
Typing information for all 118 elements!
Accomplishments that I'm proud of
The overall effect
What I learned
Combining animations in Augmented Reality
What's next for Augmented Reality Periodic Table
I could add more information to selected Elements
Built With
.net
arkit
c#
xamarin
Try it out
xamarinarkit.com | Augmented Reality Periodic Table | Augmented Reality Periodic Table of Elements | ['Lee Englestone'] | [] | ['.net', 'arkit', 'c#', 'xamarin'] | 60 |
9,927 | https://devpost.com/software/bookmygig | Listing all gigs on homepage
Streaming credentials for creator
Waiting for host to start the stream
Host started the live stream
Inspiration
I saw that shows and events were called off as soon as the pandemic broke out. This has impacted creators severely, and I wanted to build something that helps content creators perform live shows online for live audiences.
What it does
This is a platform where creators perform live online shows (dance, comedy, plays, and the list goes on...) for live audiences. There is also a chat feature, where users who are part of the same show can chat in realtime while they are watching it.
How I built it
Framework/Technologies used :
ReactJS
NodeJS
Redis as an in-memory database
Node-media-server (RTMP) for video streaming
Socket.io for realtime-chat
Three main Pillars of the application :
REDIS
is used to store data, as it is an in-memory database, which makes our app incredibly fast and the process of exchanging data back and forth seamless. Our application uses a blend of
built-in data structures
to store and retrieve data in an efficient manner.
RTMP
provides a bidirectional message multiplex service over a reliable stream transport, such as TCP, intended to carry parallel streams of video, audio, and data messages, with associated timing information, between a pair of communicating peers. More about RTMP could be learned
here
.
When a creator lists a gig, he/she is given a unique streaming ID, which is used to identify the creator on the backend and allocate a separate channel where he/she can live stream; the audience of that particular show is also isolated from the rest of the channels/shows.
As soon as the creator hits
start stream
button, the video data is transported to the media server, where it is encoded into different formats. In our case, we use
flv
format which is a file format used by Adobe Flash Player to store and deliver synchronized audio and video streams over the Internet.
Later, on the client side we use a
flvjs plugin
to render the video in realtime.
REALTIME CHAT
is accomplished using socket.io, which is a library to abstract the
WebSocket
connections. It enables realtime, bi-directional communication between web clients and servers.
When a client types the message and clicks send, it is sent to server and is then broadcasted to all the connected clients in the same room.
The messages that get exchanged within a room are isolated from the outside world.
Also, we are using Redis
pubsub
indirectly as socket.io internally relies on it to achieve the realtime two-way communication.
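The room isolation described above can be sketched in plain Python (this is an illustrative stand-in, not Socket.io or BookMyGIG's actual code; the class and method names here are hypothetical):

```python
# Sketch of room-isolated broadcast: a message sent in one room reaches
# only the clients joined to that room, never other rooms.
from collections import defaultdict

class ChatRooms:
    def __init__(self):
        # room id -> {client name: inbox list of (sender, message)}
        self.rooms = defaultdict(dict)

    def join(self, room, client):
        self.rooms[room][client] = []

    def send(self, room, sender, message):
        # Broadcast to every connected client in the same room only.
        for client, inbox in self.rooms[room].items():
            inbox.append((sender, message))

rooms = ChatRooms()
rooms.join("show-42", "alice")
rooms.join("show-42", "bob")
rooms.join("show-99", "carol")
rooms.send("show-42", "alice", "great set!")

print(rooms.rooms["show-42"]["bob"])    # [('alice', 'great set!')]
print(rooms.rooms["show-99"]["carol"])  # [] - isolated from other rooms
```

Socket.io handles the WebSocket transport and Redis pub/sub handle the fan-out in the real app; the sketch only shows the isolation rule.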
Challenges I ran into
Handling creator data in the backend and storing it efficiently using built-in Redis data structures.
Fetching all the shows asynchronously(using promises), and resolving them was tricky.
Figuring out a way to load & play the live stream on the browser without clashing between others was really challenging.
Accomplishments that I'm proud of
I was able to successfully hook up all the different parts of the application and come up with a working end product.
What I learned
I picked up a lot in this process. This was the first time I got hands-on with Redis and Socket.io, and I came across terms like RTMP, learned its uses, and learned how to set one up.
Built With
node-media-server
node.js
react
redis
rtmp
socket.io
Try it out
github.com | BookMyGIG | Shows are getting cancelled all over the world due to pandemic, but with bookmygig, shows can be live streamed. | ['Manoj Kumar'] | [] | ['node-media-server', 'node.js', 'react', 'redis', 'rtmp', 'socket.io'] | 61 |
9,927 | https://devpost.com/software/activity-pathway-m6a3tc | Home page of the app
Fitness tab
Grocery Shopping Tab
Traveling Tab
Schedule Tab
Cooking Tab
Me traveling to Hawaii's Green sand beach
Me traveling to France's Eiffel Tower
Inspiration
My friend had a hard time staying fit and eating healthy during the past few months. My friend's parents travel every time my friend has a break from school, but due to COVID-19, they can't travel or visit any places. My friend's parents also have a hard time finding a store that lets you buy groceries online and delivers them to your house. This is what inspired me to make this app.
What it does
Activity Pathway is an app with several activities it can help you with, such as staying fit daily, eating a healthy diet daily, remembering all the activities you need to do, traveling across the globe virtually, buying your groceries, and tracking your order after buying groceries from the app.
How I built it
I used HTML and CSS to code the app, along with my own ideas and an online app maker.
Challenges I ran into
The challenge I ran into was trying to create a camera feature, where when you use Activity Pathway to travel virtually, you can take pictures of the places you visit and keep them in a gallery of photos. I realized that there wasn't enough time to code this feature and that it was too complicated, so I left it as something to add to my app in the future.
Accomplishments that I'm proud of
I'm proud of the grocery option in Activity Pathway and the traveling option. I feel proud because I love to buy groceries and travel to different parts of the globe. Due to the coronavirus, people have to stay home, and even going to a grocery store has become dangerous. My app helps people go to places they always dreamed of going while they are at home, and the grocery option helps people buy their groceries so they can stay safe and not go outside in this environment.
What I learned
I had forgotten most of how to code in HTML and CSS, but after this hackathon, my skills have grown far beyond what they were before it started.
What's next for Activity Pathway
Next for Activity Pathway is adding the camera feature so that people can take photos and save them in my app. After that, I would like people to be able to insert a photo of themselves or their family; my app would identify everyone in the inserted photo and composite them into the travel photos, so it looks like the photos were actually taken and your entire family traveled to that place. I would also like to add an education option where children could learn different subjects, for anyone from toddlers to high schoolers, or even college students who need to refresh skills they learned in the past. With more time, I would also have made my own live fitness and cooking videos.
Built With
ar/vr
css3
html5 | Activity Pathway | Activity Pathway is an app that can help you do several different activities to stay active while at home. | ['Tanvi Waghela'] | [] | ['ar/vr', 'css3', 'html5'] | 62 |
9,927 | https://devpost.com/software/morsel-kr0jtm | Landing page
Homepage
Creating a new listing
Pantry with listings
Placing an order
Order confirmation
Inspiration
Two problems that have resulted from COVID-19 are the inability to access food, as well as increased food waste. I decided that I wanted to create a solution that was able to tackle both problems simultaneously.
What it does
Morsel is a food-sharing app that lets users enjoy smaller portions and more cost-friendly dining options without the hassle of having to coordinate splitting a meal. Morsel's platform connects listers and buyers, both of whom can enjoy portioned food from their favorite restaurants without having to shell out a large amount of money. For listers, Morsel allows them to list half their meal in three easy steps:
Order a meal (at your favorite restaurant or using a delivery app) and be served/delivered half your meal. No need for leftovers!
Create a listing on Morsel to split or donate the other half of your food, which the restaurant will help you retain, so there is no contact or contamination.
Sit back and wait for 40% of what you paid to be refunded!
For the buyer, the process is just as simple:
Visit the in-app pantry to view your dining options.
Select and read about various options and pay 50% of the item's menu cost.
Take your confirmation to the restaurant or a delivery app and get your meal!
Morsel also makes food donations easy by allowing for users to donate meals in three clicks!
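The split described above can be checked with a quick worked example (illustrative only; the function name is hypothetical, and I'm assuming the remainder covers the platform and restaurant):

```python
def morsel_split(menu_price):
    """Morsel's pricing as described: the lister pays full menu price,
    the buyer pays 50% of the menu price, and the lister is refunded
    40% of what they paid."""
    buyer_pays = 0.5 * menu_price      # buyer's share of the meal
    lister_refund = 0.4 * menu_price   # refunded to the lister
    lister_net_cost = menu_price - lister_refund
    return buyer_pays, lister_refund, lister_net_cost

# A $20 meal: buyer pays $10, lister gets $8 back, so their half costs $12.
print(morsel_split(20.0))  # (10.0, 8.0, 12.0)
```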
How I built it
I built Morsel using Swift and incorporated Firebase for authentication.
Challenges I ran into
Designing the UI/UX to make the app clean and easy-to-use was the largest challenge, as I had to take the perspective of the user and see whether each action was fluid and apparent.
Accomplishments that I'm proud of
Building a functioning app in less than a day with an idea that I absolutely love is something that I am definitely proud of!
What's next for Morsel
I'm hoping to incorporate a social element to Morsel, such as sharing food within your friend circle and a leaderboards for competing to donate meals.
Built With
firebase
swift
Try it out
github.com | Morsel | An innovative food-sharing app that reduces food waste and your food waist. | ['Alice Yeh'] | ['Best Environmental Lifestyle Hack'] | ['firebase', 'swift'] | 63 |
9,927 | https://devpost.com/software/antigen-blockchain | Inspiration
As the economy needs to re-open, authorities require "COVID-19 free" medical certificates to allow people to travel or to work. On the other hand, the media has been littered with news about fake COVID-19 medical certificates sold illegally.
What it does
The antibody test result can be stored in a blockchain such as Ethereum. For example, the COVID-19 PCR test result is recorded in an immutable ledger, so this source of truth reduces the risk of forgery. The wider impact would be stopping the spread of a pandemic.
How I built it
The initial prototype was built on the Ethereum TestNet.
Challenges I ran into
All information stored on the Ethereum blockchain can be viewed by everyone. Therefore need to strike a balance between privacy and blockchain "perfect" transparency.
What I learned
To minimise the gas costs, I made the Solidity contract as simple as possible.
What's next for Antibody Blockchain
Need to improve the Solidity contract
Built With
solidity
Try it out
github.com | Antibody Blockchain | A digital certificate based on blockchain to verify an antibody of a person. | ['Ronald Simon'] | [] | ['solidity'] | 64 |
9,927 | https://devpost.com/software/save-beach-ventanilla-peru | Cerro Blanco, Ventanilla—Peru
Clean, conserve, and build a self-sustaining beach resort; construction of two sport-fishing piers, and implementation of fish farms to ensure the quality of life and food security of the population...
9,927 | https://devpost.com/software/chat-near-free | Login Page
Mobile Mockups
Inspiration
In a time of slowly increasing government censorship of internet content, it's important to make advances toward the true democratization of online data. One of the most important kinds of transactions between online users is messaging. We decided that a decentralized app would be the perfect way to achieve this goal, because its transactions can remain anonymous and the integrity of information can be properly vetted. It also ensures that previous transactions cannot be overwritten, thus protecting previous messages. This app will help people preserve free speech in places where it is threatened.
What it does
This decentralized messaging service can facilitate the anonymous communication between any two parties, in a way that ensures 100% confidence in the integrity of all messages sent.
How we built it
The login page was created with HTML, CSS, and Javascript. The login page is designed to redirect you to the appropriate page based on your Near login status. We followed the workshop conducted earlier in the hackathon and were able to put up a smart contract using Assembly, which allows a user to send a message to the network with authentication from the near protocol. We made use of near-api-js to set up the connection between the smart contract and the web app which we built using React js. We found the create-next-app to be a helpful starter boilerplate to make our web app. We hosted this functionality using Near on Digital Ocean and used GitHub hosting to host the whole Web app along with login.
Challenges we ran into
None of us had any experience with either Blockchain or the Near API. We had to read through content to understand the technology and come up with an effective use case and build it. We were able to find a fix to most of our errors, thanks to the really helpful mentors. We also found the near protocol documentation to be really helpful and we look forward to building DApps on a larger scale using the near protocol in the future!
Accomplishments that we're proud of
We had very limited time, as we had to attend all the workshops, revisit them, and learn to build on a completely new platform. We are glad that we were able to build, run, host, and publish a whole ready-to-use web app on blockchain, and thus took our first step into the decentralized future. In this process we learned a lot about the NEAR API, blockchain technology, and Assembly, which we believe is also a great accomplishment.
What we learned
We were all newcomers to blockchain applications, so we were able to learn a lot from this experience. Coordinating with the team members virtually also was not easy as all of us were in different time zones and it was really tough to work on coding together. It was a great learning experience and we look forward to more such hackathons in the future.
What's next for ChatterBox
We have created mockups of a mobile version of our app, this expansion into mobile is a viable way for us to increase user reach and engagement. We are hoping for our app to become more widely adopted in places where the true integrity of internet data is being threatened. This will most likely be places of harsh government censorship (Russia, Iran, China, etc.). Although this may just be a proof of concept app, we think the concepts do have an increasingly more important real-world application.
Built With
css
html
javascript
near-api
react-js
Try it out
malaw97.github.io
github.com | ChatterBox | This decentralized messaging service can facilitate the anonymous communication between any two parties, in a way that ensures 100% confidence in the integrity of all messages sent. | ['Harshak krishnaa', 'Sai Rishvanth Katragadda', 'Matthew Law', 'anilson monteiro', 'Yuming Tsang'] | ['1st Place Cash Prize'] | ['css', 'html', 'javascript', 'near-api', 'react-js'] | 66 |
9,927 | https://devpost.com/software/environar-7zpwhf | Logo
Inspiration
Following the implementation of lockdowns in countries worldwide, daily carbon emissions have decreased by up to 17%, equal to 17 million tonnes of CO2 a day. This is the first time emissions have reached such a low level since 2006. In China alone, emissions have fallen by a quarter.
In other words, lockdown has had an overall positive impact on our environment. Unfortunately, all these gains may go to waste if people go back to living like they used to. We as a team decided that people should be aware of the state of our environment and how everyone can help to improve it. This inspired us to come up with EnvironAR.
What it does
EnvironAR is an AR-powered cross-platform mobile game that educates users about environmental damage through an entertaining and educative experience. There are 4 different environments: Earth, Water, Wind and Fire. Each environment has levels that the user must complete for their avatar to progress and solve the mystery at the end of the game. It is aimed at users aged 12 and over so that both teens and adults can be inspired to maintain green habits. Our mobile game propagates the mission of Global Goal 13 - Climate Action which needs to be met by 2030.
We also have a
website
that explains more about EnvironAR, as well as a
repository
where you can find the app files in the 'Releases' tab.
How we built it
We built the game prototype in Unity, using the 'Fungus' and 'Google ARCore' packages. The website was built using Atom.
What's next for EnvironAR
Future plans for the game are as follows:
Completing our game prototype to include all levels
Creating a habit tracker specifically for green habits that promotes sustainability of eco-friendly habits.
Creating a forum where users can communicate and share how they have been improving the environment. This forum could be a part of the website and a separate app for the game
Future plans for the website are as follows:
Allowing users to donate to environmental charities
Built With
atom
augmented-reality
c#
unity
Try it out
grace-sodunke.github.io
github.com | EnvironAR | EnvironAR is an AR-integrated game that educates users about humans' impact on the environment through an immersive experience. | ['Demi Oshin', 'Mary S', 'Grace Sodunke', 'Sukhjit K'] | [] | ['atom', 'augmented-reality', 'c#', 'unity'] | 67 |
9,927 | https://devpost.com/software/scribr-sq5xot | Inspiration
Many studies have shown that taking notes by hand increases material retention. But it also increases something else--the chance of losing your work. What if you could have the learning benefits of handwriting notes but still be able to keep a copy as a Google or Word document and Ctrl-F through it later? As two students who spent the past year studying machine learning, we knew we had to create our own solution.
What it does
Scribr lets you input pictures of your handwritten notes and have them transcribed into a text document of your choice by our deep learning model.
How I built it
We built our app with a 4-tier architecture integrated into both the cloud and the browser. We aggregated data from the IAM Handwriting Database, the Bentham Manuscripts Collection, the RIMES Letter Database, and the Saint Gall Database and trained our model on Google Cloud Platform’s Cloud ML Engine. We then served our model with Docker and Flask in an easy to use web application.
Our model's training can be divided into three steps. First, our preprocessed images are fed into a five-layer convolutional neural network to extract features. Next, the output feature map is propagated through a Long Short-Term Memory network. Finally, we use CTC both to calculate the loss for the RMSProp optimizer and to decode into our final text.
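The final decoding step can be illustrated with best-path (greedy) CTC decoding: take the most likely label at each time frame, collapse consecutive repeats, then drop the blank symbol. This is a simplified sketch of the rule (our real decoder works on per-frame probability distributions, not pre-picked labels):

```python
def ctc_greedy_decode(frame_labels, blank="-"):
    """Best-path CTC decoding sketch: given the most likely label per
    time frame, collapse consecutive repeats, then remove blanks."""
    out = []
    prev = None
    for ch in frame_labels:
        # Keep a character only when it differs from the previous frame
        # and is not the blank symbol.
        if ch != prev and ch != blank:
            out.append(ch)
        prev = ch
    return "".join(out)

# Per-frame output "hh-e-l-lo-" collapses to the word "hello".
print(ctc_greedy_decode(list("hh-e-l-lo-")))  # hello
```

Note how the blank lets CTC represent genuine double letters: "l-l" decodes to "ll", while "ll" alone collapses to a single "l".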
Challenges I ran into
After bricking our computers trying to download all the data, we decided to move our data aggregation and model training to Google Cloud Platform’s Cloud ML Engine. This allowed us much more time for optimizing our model and creating our Flask interface. Also, we spent much more time than we expected preprocessing our data.
Accomplishments that I'm proud of
Figuring out how to integrate Google Cloud Platform into our workflow was a lifesaver. Our app would not be where it is without it.
What I learned
We learned a ton about Convolutional Neural Networks and Long Short-Term Memory networks while building our project, as well as about integrating machine learning with Flask to create an easy-to-use, nice-looking UI instead of a command line.
What's next for Scribr
There's still room to improve our model through more data and better architecture, which is going to be vital going forward. We also have plenty of work to do in making our quick hackathon web app into a full-fledged application/website.
Built With
tensorflow
Try it out
github.com | Scribr | Transcribe handwritten notes to text documents with a novel OCR system | ['Sebastian Schott'] | ['Best General Hack'] | ['tensorflow'] | 68 |
9,927 | https://devpost.com/software/free-psychological-help | Inspiration
Our idea aims to create an online platform where psychiatrists will get in touch with patients suffering from COVID-19 by doing online therapies with artificial intelligence.
Who We Are
We are a network of licensed therapists and psychologists committed to helping medics receive the best mental health care available. We also work in our communities to raise mental health awareness, lower stigma, and help educate people in all things mental health.
What's next for FREE PSYCHOLOGICAL HELP WITH ARTIFICIAL INTELLIGENCE
The Minimum Viable Product (MVP) that is already available ensures not only that the app provides value to end users but also that it is technically sound, guaranteeing the completed version of the product concept.
How can you join?
leader in every European country
psychologists
psychotherapists
psychiatrists
sociologist
social workers
teachers | FREE PSYCHOLOGICAL HELP WITH ARTIFICIAL INTELLIGENCE | Our Idea are aiming to create an online platform where the psychiatrists will get in touch with patients suffering from COVID-19 by doing online therapies with artificial intelligence. | ['Oleksandr Khudoteplyi'] | [] | [] | 69 |
9,927 | https://devpost.com/software/libra-65q2jf | Logo!
Example of models dictionary in client class with only 1 classification neural network.
Output of tune()
Sample outfit for generate_fit_cnn() to generate a dataset of apples, oranges, and bananas and train a CNN for it with just 3 epochs.
Example process logger
Sample of all plots generated for clustering query. Only plots for best cluster are stored.
Generated model train vs test accuracy plot for classification neural network query.
Example generated clustering plot for n_clusters = 9
Similarity Spectrum for stat_analysis()
GitHub commit history!
Check out the GitHub page if you'd like to see a working Table of Contents. Devpost disables same-page links so I couldn't get it to work.
Libra: Deep Learning fluent in one-liners
Libra is a machine learning API that allows developers to build and deploy models in fluent one-liners. It is written in Python and TensorFlow and makes training neural networks as simple as a one line function call. It was written to make machine learning as simple as possible for every software developer.
Motivation
With the recent rise of machine learning on the cloud, the developer community has
neglected
to focus their efforts on creating easy-to-use platforms that exist locally. This is necessary because in a process that has hundreds of API endpoints, it's very difficult to integrate your pre-existing workflow with a cloud based model. Libra makes it very easy to create a model in just one-line, and not have to worry about the specifics and/or the transition to the cloud.
While Keras makes it easy to use TensorFlow's features, it still requires users to understand basics like how to preprocess a dataset, how to build models for a task, and which network architectures to use. Libra takes all of this out of the hands of the developer, so that users need no machine learning knowledge in order to create and train models.
Guiding Principles
Beginner Friendly.
Libra is an API designed to be used by developers with no deep learning experience whatsoever. It is built so that users with no knowledge in preprocessing, modeling, or tuning can build high-performance models with ease without worrying about the details of implementation.
Quick Integration.
With the recent rise of machine learning on the cloud, the developer community has failed to make easy-to-use platforms that exist locally and integrate directly into workflows. Libra allows users to develop models directly in programs with hundreds of API endpoints without having to worry about the transition to the cloud.
Automation.
End-to-end pipelines containing hundreds of processes are automatically run for the user. The developer only has to consider what they want to accomplish from the task and the location of their initial dataset.
Easy Extensibility.
Queries are split into standalone modules. Under the dev-pipeline module you can pipeline both different and new modules and integrate them into the workflow directly. This allows newly developed features to be easily tested before integrating them into the main program.
Overview
Libra is split up into several main components:
Client:
the client object is where all models and information generated are stored for usage.
Queries:
How models are built and trained in Libra. They're called on client objects and are given an instruction. ALL preprocessing is handled by the queries!
Image generation:
For generating datasets and fitting them to convolutional neural networks automatically.
Model Information:
How to retrieve generated plots, and deep-tune models.
Dimensionality Reduction:
How to perform reduction and feature selection to your dataset easily.
Process Logger:
Keeping track of processes Libra is running in the background.
Pipelining for contributors:
Special module pipeline in order to contribute to Libra.
User instruction identification:
How Libra uses user instruction to determine targets and make predictions.
Table of Contents
Prediction Queries: building blocks
Regression Neural Network
Classification Neural Network
Convolutional Neural Network
K-Means Clustering
Nearest Neighbors
Support Vector Machines
Decision Tree
Image Generation
Class Wise Image Generation
Generate Dataset & Convolutional Neural Network
Model Information
Model Tuning
Plotting
Dataset Information
Dimensionality Reduction
Reduction Pipeliner
Principle Component Analysis
Feature Importances via Random Forest Regressor
Independent Component Analysis
Process Logger
Pipelining for Contributors
Providing Instructions
What's next for Libra?
Contact
Queries
Queries are how you create and train machine learning algorithms in Libra.
Generally, all queries have the same structure. You should always be passing an English instruction to the query. The information that you generate from the query will always be stored in the
client
class in the model's dictionary. When you call a query on the
client
object, an instruction will be passed. Any format will be decoded, but avoiding more complex sentence structures will yield better results. If you already know the exact target class label name, you can also provide it.
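One way to picture instruction decoding is fuzzy matching between instruction words and column names. This is a hedged sketch of the idea, not Libra's actual decoder (the function name `guess_target` and the use of `difflib` are illustrative stand-ins):

```python
# Sketch: score each column by how many of its name's words fuzzily
# appear in the English instruction, and pick the best-scoring column.
from difflib import get_close_matches

def guess_target(instruction, columns):
    words = instruction.lower().replace("_", " ").split()
    scored = []
    for col in columns:
        col_words = col.lower().split("_")
        hits = sum(1 for w in col_words
                   if get_close_matches(w, words, cutoff=0.8))
        scored.append((hits, col))
    return max(scored)[1]

cols = ["median_house_value", "ocean_proximity", "total_rooms"]
print(guess_target("Model the median house value", cols))  # median_house_value
```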
Regression Neural Network
Let's start with the most basic query. This will build a feed-forward network for a continuous label that you specify.
newClient = client('dataset')
newClient.regression_query_ann('Model the median house value')
No preprocessing is necessary. All plots, losses, and models are stored in the models field in the client class. This will be explained in the Model Information section.
Basic tuning with the number of layers is done when you call this query. If you'd like to tune more in depth you can call:
newClient.tune('regression', inplace = False)
To specify which model to tune, you must pass the type of model that you'd like to perform tuning on.
This function tunes hyperparameters like node count, layer count, learning rate, and other features. This will return the best network and if
inplace = True
it will replace the old model it in the client class under
regression_ANN
.
Now, if I want to use my model, I can do:
newClient.models['regression_ANN'].predict(new_data)
Classification Neural Network
This query will build a feed-forward neural network for a classification task. As such, your label must be a discrete variable.
newClient = client('dataset')
newClient.classification_query_ann('Predict building name')
This creates a neural network to predict building names given your dataset. Any number of classes will work for this query. By default,
categorical_crossentropy
and an
adam
optimizer are used.
Convolutional Neural Network
Creating a convolutional neural network for a dataset you already have created is as simple as:
newClient = client()
newClient.convolutional_query('path_to_class1', 'path_to_class2', 'path_to_class3')
For this query, no initial shallow tuning is performed, because of how memory-intensive CNNs can be. User-specified parameters for this query are currently being implemented. The defaults can be found in the `predictionQueries.py` file.
K-means Clustering
This query will create a k-means clustering algorithm trained on your processed dataset.
newClient = client('dataset')
newClient.kmeans_clustering_query()
It continues to grow the number of clusters until the value of `inertia` stops decreasing by at least 1000 units. This threshold was determined based on several papers and extensive testing, and can be changed by specifying `threshold = new_threshold_num`. If you'd like to fix the number of clusters it uses, you can pass `clusters = number_of_clusters`.
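As an illustration of the stopping rule described above, the cluster count can be grown until the drop in inertia falls below the threshold. This is a minimal sketch, not Libra's internal code; the `inertias` list here is a stand-in for values that would come from fitting k-means at each k.

```python
def pick_cluster_count(inertias, threshold=1000):
    """Return the cluster count k at which growing further stops paying off.

    `inertias[k-1]` is the (hypothetical) inertia after fitting with k clusters;
    we stop once the decrease between consecutive k falls below `threshold`.
    """
    for k in range(1, len(inertias)):
        if inertias[k - 1] - inertias[k] < threshold:
            return k  # the previous k was the last worthwhile increase
    return len(inertias)

# inertia typically decreases with diminishing returns as k grows
inertias = [50000, 20000, 8000, 7500, 7300]
print(pick_cluster_count(inertias))  # -> 3
```

In the real query each entry would be the `inertia_` of a freshly fitted clustering model rather than a precomputed list.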
Nearest-neighbors
This query will use scikit-learn's nearest-neighbor function to return the best nearest neighbor model on the dataset.
newClient = client('dataset')
newClient.nearest_neighbor_query()
You can specify `min_neighbors` and `max_neighbors` as keyword arguments to the function. Values are stored under the `nearest_neighbor` field in the model dictionary.
Support Vector Machine
This will use scikit-learn's SVM function to return the best support vector machine on the dataset.
newClient = client('dataset')
newClient.svm_query('Model the value of houses')
Values are stored under the `svm` field in the model dictionary.
NOTE: A linear kernel is used as the default; this can be modified by specifying your new kernel name as a keyword argument: `kernel = 'rbf_kernel'`.
Decision Tree
This will use scikit-learn's decision tree function to return the best decision tree on the dataset.
newClient = client('dataset')
newClient.decision_tree_query()
Values are stored under the `decision_tree` field in the model dictionary.
You can specify these hyperparameters by passing them as keyword arguments to the query:
max_depth = num, min_samples_split = num, max_samples_split = num, min_samples_leaf = num, max_samples_leaf= num
Image Generation
Class wise image generation
If you want to generate an image dataset to use in one of your models you can do:
generate_set('apples', 'oranges', 'bananas', 'pineapples')
This will create a separate folder in your directory for each of these class names, with ~100 images per class. An updated version of Google Chrome is required for this feature; if you'd like to use it with an older version of Chrome, please install the appropriate chromedriver.
Generate Dataset and Convolutional Neural Network
If you'd like to generate images and fit it automatically to a Convolutional Neural Network you can use this command:
newClient.generate_fit_cnn('apples', 'oranges')
This particular command will generate a dataset of apples and oranges by parsing Google Images, preprocess the dataset appropriately, and then fit it to a convolutional neural network. All images are reduced to a standard (224, 224, 3) size using a traditional OpenCV resizing algorithm. The default dataset size is the number of images on one Google Images page before having to load more, which is generally around 80-100 images.
The infrastructure to generate more images is currently being worked on.
Note: all images will be resized to (224, 224, 3). Image properties are maintained by using a geometric image transformation, explained here: OpenCV Transformation.
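To make the shape contract concrete, here is a minimal nearest-neighbour resize on a plain nested-list image. This is only a stand-in for the cv2 resize call described above, showing how an arbitrary image ends up at (224, 224, 3):

```python
def resize_nearest(img, size=(224, 224)):
    """Nearest-neighbour resize of a nested-list image (rows x cols x channels).

    A simplified stand-in for the OpenCV resize used by the library; it maps
    each output pixel back to its nearest source pixel.
    """
    h, w = len(img), len(img[0])
    return [[img[r * h // size[0]][c * w // size[1]]
             for c in range(size[1])]
            for r in range(size[0])]

img = [[(0, 0, 0)] * 150 for _ in range(100)]   # a 100x150 RGB "image"
out = resize_nearest(img)
print(len(out), len(out[0]), len(out[0][0]))    # -> 224 224 3
```

The real pipeline additionally applies the geometric transformation linked above to preserve image properties rather than naively stretching.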
Model Modifications
Model Tuning
In order to further tune your neural network models, you can call:
newClient.tune('convolutional neural network')
This will tune:
Number of Layers
Number of Nodes in every layer
Learning Rate
Activation Functions
In order to ensure that the tuned model's accuracy is robust, every model is run multiple times and the accuracies are averaged. This ensures that the chosen model configuration is optimal.
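The averaging step can be pictured with a small helper. This is a hypothetical stand-in: the real tuner averages accuracies from repeated Keras training runs, while here the "training run" is just a callable returning a score.

```python
def averaged_accuracy(train_once, runs=3):
    """Run a (stochastic) training function several times and average the
    accuracies, so the reported number is robust to run-to-run noise."""
    return sum(train_once() for _ in range(runs)) / runs

# stand-in for a stochastic training run returning a noisy accuracy
scores = iter([0.90, 0.94, 0.92])
print(round(averaged_accuracy(lambda: next(scores)), 4))  # -> 0.92
```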
You can just specify what type of network you want to tune; it will identify your target model from the `models` dictionary using another instruction algorithm.
NOTE: Tuning for CNNs is *very* memory intensive, and should not be done frequently.
Plotting
All plots are stored during runtime. This function plots all generated graphs for your current client object on one pane.
newClient.plot_all('regression')
If you'd like to extract a single plot, you can do:
newClient.show_plots('regression')
and then
newClient.getModels()['regression']['plots']['trainlossvstestloss']
No other plot-retrieval technique is currently implemented. While indexing nested dictionaries might seem tedious, this layout was kept deliberately for fluency.
Dataset Information
In depth metrics about your dataset and similarity information can be generated by calling:
newClient.stat_analysis()
An information graph, as well as the similarity spectrum shown below, will be generated. This can be found in the image gallery under "Similarity Spectrum."
This represents the 5 columns that have the smallest cosine distance between them; you might need to remove these columns because they're too similar to each other and will just act as noise. You can choose to remove them by specifying `inplace = True`. Information on cosine similarity can be found here.
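The "smallest cosine distance" idea can be sketched in a few lines. This is an illustration only, assuming columns are plain lists of numbers; Libra computes this on its own processed dataset.

```python
from math import sqrt

def cosine_distance(a, b):
    """1 - cosine similarity between two equal-length numeric vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return 1.0 - dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def most_similar_pair(columns):
    """Return the pair of column names with the smallest cosine distance.

    `columns` maps column name -> list of values (an assumption for this sketch).
    """
    names = list(columns)
    pairs = [(cosine_distance(columns[a], columns[b]), a, b)
             for i, a in enumerate(names) for b in names[i + 1:]]
    _, a, b = min(pairs)
    return a, b

cols = {"rooms": [1.0, 2.0, 3.0],
        "rooms_scaled": [2.0, 4.0, 6.1],   # nearly collinear with "rooms"
        "price": [9.0, 1.0, 4.0]}
print(most_similar_pair(cols))  # -> ('rooms', 'rooms_scaled')
```

Near-duplicate columns like these carry almost no extra information, which is why the analysis flags them as removable noise.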
If you'd like information on just one column you can do:
newClient.stat_analysis(dataset[column_name])
Dimensionality Reduction
Reduction Pipeliner
If you'd like to get the best pipeline for dimensionality reduction you can call:
dimensionality_reduc('I want to estimate the number of households', path_to_dataset)
or
newClient.dimensionality_reducer('I want to estimate the number of households')
Instructions like "I want to model x" are required by the dimensionality reduction pipeline because they identify which prediction objective's accuracy you would like to maximize. Providing this instruction helps Libra provide users with the best modification pipeline.
Libra currently supports feature importance identification using a random forest regressor, independent component analysis, and principal component analysis. The output of the dimensionality_reduc() function should look something like this:
Baseline Accuracy: 0.9752906976744186
----------------------------
Permutation --> ('RF',) | Final Accuracy --> 0.9791666666666666
Permutation --> ('PCA',) | Final Accuracy --> 0.8015988372093024
Permutation --> ('ICA',) | Final Accuracy --> 0.8827519379844961
Permutation --> ('RF', 'PCA') | Final Accuracy --> 0.3316375968992248
Permutation --> ('RF', 'ICA') | Final Accuracy --> 0.31419573643410853
Permutation --> ('PCA', 'RF') | Final Accuracy --> 0.7996608527131783
Permutation --> ('PCA', 'ICA') | Final Accuracy --> 0.8832364341085271
Permutation --> ('ICA', 'RF') | Final Accuracy --> 0.8873546511627907
Permutation --> ('ICA', 'PCA') | Final Accuracy --> 0.7737403100775194
Permutation --> ('RF', 'PCA', 'ICA') | Final Accuracy --> 0.32630813953488375
Permutation --> ('RF', 'ICA', 'PCA') | Final Accuracy --> 0.30886627906976744
Permutation --> ('PCA', 'RF', 'ICA') | Final Accuracy --> 0.311531007751938
Permutation --> ('PCA', 'ICA', 'RF') | Final Accuracy --> 0.8924418604651163
Permutation --> ('ICA', 'RF', 'PCA') | Final Accuracy --> 0.34205426356589147
Permutation --> ('ICA', 'PCA', 'RF') | Final Accuracy --> 0.9970639534883721
Best Accuracies
----------------------------
["Permutation --> ('ICA', 'PCA', 'RF') | Final Accuracy --> 0.9970639534883721"]
The baseline accuracy represents the accuracy achieved without any dimensionality reduction techniques. Then, each possible reduction-technique permutation is displayed with its respective accuracy. At the bottom is the pipeline which resulted in the highest accuracy. You can also specify which of the reduction techniques you'd like to try by passing
reducers= ['ICA', 'RF']
to the function.
If you'd like to replace the dataset with the best reduced version, you can just specify `inplace=True`.
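The permutation table above covers every ordering of every non-empty subset of the three reducers. That search space can be enumerated with itertools; this is a sketch of the search space only, not Libra's code:

```python
from itertools import permutations

reducers = ["RF", "PCA", "ICA"]

# every ordering of every non-empty subset of the reducers
search_space = [p for r in range(1, len(reducers) + 1)
                for p in permutations(reducers, r)]

print(len(search_space))  # -> 15, matching the 15 permutation rows above
for perm in search_space[:3]:
    print(perm)
```

Each tuple would then be applied left-to-right to the dataset before re-measuring accuracy, which is why `('PCA', 'RF')` and `('RF', 'PCA')` can score very differently.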
Principal Component Analysis
Performing Principal Component Analysis is as simple as:
dimensionality_PCA("Estimating median house value", path_to_dataset)
NOTE: this will select the optimal number of principal components to keep. The default search space is up to the number of columns in your dataset. If you'd like to specify the number of components, you can just pass `n_components = number_of_components`.
Feature Importances via Random Forest Regressor
Using the random forest regressor to identify feature importances is as easy as calling:
dimensionality_RF("Estimating median house value", path_to_dataset)
This will find the optimal number of features to use and will return the dataset with the best accuracy. If you'd like to manually set the number of features, you can pass `n_features = number_of_features`.
Independent Component Analysis
Performing Independent Component Analysis can be done by calling:
dimensionality_ICA("Estimating median house value", path_to_dataset)
If this does not converge, a warning message will be displayed to users by default.
Process Logger
Libra will automatically output the currently running process in a hierarchical format like this:
loading dataset...
|
|- getting most similar column from instruction...
|
|- generating dimensionality permutations...
|
|- running each possible permutation...
|
|- realigning tensors...
|
|- getting best accuracies...
A quiet mode feature is currently being implemented.
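A toy version of such a logger is sketched below, including a `quiet` flag as one way the planned quiet mode might look. This is an assumption for illustration, not Libra's implementation:

```python
class ProcessLogger:
    """Minimal hierarchical logger: first message is printed bare, later
    messages are printed under a pipe-and-dash indent, as in the docs."""

    def __init__(self, quiet=False):
        self.quiet = quiet
        self.first = True

    def log(self, msg):
        if self.quiet:
            return ""
        out = msg if self.first else "|\n|- " + msg
        self.first = False
        print(out)
        return out

log = ProcessLogger()
log.log("loading dataset...")
log.log("getting most similar column from instruction...")
```

With `ProcessLogger(quiet=True)` the same calls would print nothing, which is the behaviour a quiet mode would need.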
Pipelining for Contributors
In order to help make Libra extensible, a process pipeliner has been implemented to help contributors easily test their newly-developed modules.
Let's say you've developed a different preprocessor for data that you want to test before integrating it into Libra's primary workflow. This is the process to test it out:
First, you want to initialize your base parameters, which are your instructions, the path to your dataset, and any other information your new function might require.
init_params = {
'instruction': "Predict median house value",
'path_to_set': './data/housing.csv',
}
You can then modify the main pipeline:
single_regression_pipeline = [initializer,
                              your_own_preprocessor,   # originally just `preprocessor`
                              instruction_identifier,
                              set_splitter,
                              modeler,
                              plotter]
These pipelines can be found under the `dev-pipeliner` folder. Currently, this format is only supported for the single regression pipeline. Complete integration of pipelining into the main framework is currently being implemented.
Finally, you can run your pipeline by using:
[func(init_params) for func in single_regression_pipeline]
All model information should be stored in `init_params`. If you'd like to modify smaller details, you can copy over the module and edit it directly; a finer-grained split was not made, to maintain ease of use of the pipeline.
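A toy run of this contract makes the convention concrete: every stage is a callable that reads and writes the shared `init_params` dict, and the pipeline is just a list of those callables. The stage bodies below are stand-ins, not Libra's real modules:

```python
def initializer(params):
    params["data"] = [3.0, 1.0, 2.0]          # pretend we loaded the csv

def my_preprocessor(params):                  # the your_own_preprocessor slot
    params["data"] = sorted(params["data"])

def modeler(params):
    params["model"] = sum(params["data"]) / len(params["data"])

init_params = {"instruction": "Predict median house value"}
single_regression_pipeline = [initializer, my_preprocessor, modeler]

# run every stage in order against the shared state
[func(init_params) for func in single_regression_pipeline]
print(init_params["model"])  # -> 2.0
```

Because each stage only touches the shared dict, swapping in a new preprocessor is a one-line change to the list.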
Instructions
newClient.svm_query('Estimate household value') --> target: households, found similar in dataset ✓
Libra uses intelligent natural language processing to analyze user instructions and match them with a column in the user's dataset.
Textblob, a part-of-speech tagging library, is used to identify parts of speech.
A self-developed part-of-speech deciphering algorithm is used to extract relevant parts of a sentence.
Masks are generated to represent all words as tensors for easy comparison.
Levenshtein distances are used to match relevant parts of the sentence to a column name.
The target column is selected based on the lowest Levenshtein distance and returned.
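The final matching step can be sketched as follows. This is a simplified stand-in for Libra's matcher (which also masks and weights sentence parts); the edit-distance function itself is the classic dynamic-programming algorithm:

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[-1] + 1,                  # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def match_column(term, columns):
    """Pick the dataset column whose name is closest to the instruction term."""
    return min(columns, key=lambda c: levenshtein(term.lower(), c.lower()))

print(match_column("household value",
                   ["households", "median_house_value", "population"]))
# -> 'households', as in the svm_query example above
```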
What's next for Libra
Q-learning and Policy Gradient Queries
Make process pipeline part of main framework
Modularize the data preprocessor for structured data
Data Augmentation queries
Sentiment Analysis
Contact
If you're excited about Libra and are looking to contribute, please reach out to me via email or LinkedIn to get started with onboarding. This will begin after the hackathon is completed.
Email:
ps9cmk@virginia.edu
Linkedin:
https://www.linkedin.com/in/palash-sh/
Responsible AI considerations are attached as a Google Docs link below.
Built With
argparse
colorama
cv2
itertools
json
keras
keras-tuner
matplotlib
numpy
os
pandas
pdtabulate
pil
pprint
requests
scipy
selenium
sklearn
string
sys
tabulate
tensorflow
textblob
urllib
Try it out
github.com | Libra | A machine learning API that makes building and deploying models as simple as a one-line function call. | ['Palash Shah'] | [] | ['argparse', 'colorama', 'cv2', 'itertools', 'json', 'keras', 'keras-tuner', 'matplotlib', 'numpy', 'os', 'pandas', 'pdtabulate', 'pil', 'pprint', 'requests', 'scipy', 'selenium', 'sklearn', 'string', 'sys', 'tabulate', 'tensorflow', 'textblob', 'urllib'] | 70 |
9,927 | https://devpost.com/software/social-distancing-screen | Social Distancing Screen
Inspiration
I was inspired by World Hackathon Day for COVID19 Emergency Response.
What it does
It's a hologram camera 3d screen application. It can act like a toll-booth, and make suggestions like, "tap your phone" or "tap your card".
How I built it
I got a projection screen from Amazon and downloaded software from AIY Projects Cube with Google.
https://aiyprojects.withgoogle.com/vision/
Challenges I ran into
Like anything, I have to try and try again until I make a sale.
Accomplishments that I'm proud of
Well, it enables customers to interact with a safe Artificial Intelligence that could never be infected or dangerous.
What I learned
I learned that we can indeed, imagine and deploy solutions at the speed of thought. It's no longer a barrier, it's a portal or window... from the friendshipcube edge, to the 22core optical neural network, and into the hybrid cloud.
What's next for Social Distancing Screen
Sales and installations in a reliable partnership.
Built With
3d
camera
distancing
hologram
screen
social
Try it out
aiyprojects.withgoogle.com
www.developers.google.com
www.github.com | Social Distancing Screen | turning the sneeze guard cough screen into a hologram camera 3d portal. | ['https://www.bitchute.com/channel/friendshipcube/', 'Graeme Kilshaw'] | [] | ['3d', 'camera', 'distancing', 'hologram', 'screen', 'social'] | 71 |
9,927 | https://devpost.com/software/uberfast-corpus-browser-over-covid19-scientific-papers | Search in action. Add a few terms (simple or composite) and you will find them in specific pages of specific pdf papers of the corpus.
Inspiration
Universidad Politécnica de Madrid (UPM) and Accenture Spain have joined efforts to face current challenges that can be solved using Artificial Intelligence. Both created the AInnovation Space on Montegancedo's campus. There, UPM researchers and students work with Accenture developers to create the next generation of tools to cope with these challenges.
What it does
We have the technology to create a fast, reliable and intuitive web application to query and browse huge document repositories. This web app is suitable for mobile devices (tablets, cell phones) as well as desktop computers. In this case we will focus on the covid19 dataset, with more than 70K pdf documents (and growing).
The distinctive fact of this technology is the terminology-based way of specifying what you look for. Instead of the classical set of keywords (one word terms), our approach uses "terms" (simple or composite) that appear in the corpus (text and images). These search-terms are created in advance (automatically) for each corpus.
For instance, when a user types "pulmona" the system proposes a list of terms including "pulmona", such as:
pulmonary edema
pulmonary disease
But also longer terms like:
obstructive pulmonary
chronic obstructive pulmonary disease
The longer the term the better, because it is more specific and this results in fewer occurrences in the corpus. That is, less pages to be read by the user.
How I built it
In the back end we use the KeyQ technology, R packages and solr. The front end is created with Shiny (Bootstrap themes).
Challenges I ran into
We want responses in a blink. Our limit is 0.5 s, but we want 0.1 s. Response times in that range give users a feeling of control and a satisfying experience.
Accomplishments that I'm proud of
Successful preliminary results with corpora from technological domain (technical manuals), legal domain (Spanish legislation) and bio domain (covid19 scientific papers).
The term-extractor (KeyQ) was created by me and is "registered software" by my research group (OEG-UPM). This research topic is still an active research and development area for me.
What I learned
The combination of in-memory corpus data with in-disc search is a powerful strategy.
What's next for our tool
We have lots of improvements in mind, but we will show them up only to some privileged eyes ;-)
Acknowledgements
I would like to thank Accenture Spain for its support and confidence in us through the AInnoSpace initiative. Also UPM for their support in entrepreneurship (ActúaUPM and Innovatech programs). And last but not least, a huge hug to the UPM Artificial Intelligence master's students (Álvaro, Pedro, Lucía, JJ) who pushed all our ideas and prototypes to professional levels.
Built With
bootstrap
keyq
r
shiny
solr
Try it out
demo.inno.oeg-upm.net | Terminology-based corpus search on covid19 scientific papers | With an exponential growth of scientific publications, an intuitive & fast information finder is a must for communities like researchers or physicians . | ['Mariano Rico', 'Alvaro SDN'] | [] | ['bootstrap', 'keyq', 'r', 'shiny', 'solr'] | 72 |
9,927 | https://devpost.com/software/intellischool-2r3qml | intellisSchool Web App - Smart Quiz
intellisSchool Web App - Smart Notes
intellisSchool Web App - Leaderboard
intellisSchool Web App - Dashboard
Inspiration
In the midst of COVID-19, the whole world has realised the power of online learning and the impact it can have on millions of students. Be it K-12 or higher education, virtual learning is talk of the town. Although there are plenty of video lectures available online and usage is extravagant, often these lectures can get too long, monotonous and quite frankly, boring! People zone out frequently and lose track of what's going on. They end up rewinding the video and watch again which really is a waste of time.
In this process, there is a lack of feedback; there is no one to test you if you've really understood what you've been listening to. What if there was a mechanism to evaluate the understanding? Also, not every student like to take notes. But all of us need something to refer to before an exam. What if there is someone who can automatically generate notes for you while you concentrate on listening to the lecture in the video?
What it does
intelliSchool Web Application
Teachers can upload meeting recordings to our application and send invitations to their students.
Students can then login and subscribe to the class.
All videos that are part of the class will be downloaded in the background to the student’s device.
The application will use the video uploaded by the teacher to automatically generate smart quizzes, smart notes, flash-cards and links to related concepts using Machine Learning and Natural Language Processing.
While students are learning a concept, if they have any doubts, they can use the discussion section below the video to ask questions. Teacher, other students or our bot can answer their queries.
At the end of the quiz, they will be able to view a report that gives their score along with the questions that they answered incorrectly. Students can click on ‘Where was this question from?’ button. Our App will take the student to the point in the video from where the question was generated.
intellischool browser plugin
A web browser extension that automatically generates a quiz and smart notes for videos in YouTube and other EdTech websites using NLP (Natural Language Processing) and Machine Learning. A user can self-evaluate their understanding by taking the quiz while watching the video. Each question presents the user with an option to move back to the part of the video from where the question was generated. This will help the user in revising the concepts which they weren’t able to understand when they watched the video the first time.
We also generate smart notes and summary for the video. Our smart notes also provide meanings and examples for certain key words in the notes so that a user can easily understand the overall content of the notes. User can also click on any sentence in the notes to navigate to the point in the video from where the concept summarized in the sentence is explained.
How we built it
We used Angular to build the frontend of the web application and Javascript to build the browser plugin. Backend is built using Python. 'IBM Watson Speech to Text converter' is used to convert speech to text. We are using python packages such as nltk, gensim and our own Algorithms to automatically generate MCQ based quiz and smart notes from the speech in the video.
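As a simplified illustration of the quiz-generation idea (the real system uses nltk, gensim and its own algorithms; this cloze-style generator is a hypothetical stand-in), a transcript sentence can be turned into a fill-in-the-blank MCQ by blanking out a key word and shuffling it among distractors:

```python
import random

def make_cloze_question(sentence, distractors, rng=random.Random(0)):
    """Turn a transcript sentence into a fill-in-the-blank MCQ by removing
    its longest word -- a crude proxy for 'most informative term'."""
    words = sentence.rstrip(".").split()
    answer = max(words, key=len)
    question = sentence.replace(answer, "_____", 1)
    options = [answer] + distractors
    rng.shuffle(options)
    return question, options, answer

q, opts, ans = make_cloze_question(
    "Photosynthesis converts sunlight into chemical energy.",
    ["Osmosis", "Respiration", "Diffusion"])
print(q)    # -> _____ converts sunlight into chemical energy.
print(ans)  # -> Photosynthesis
```

A production pipeline would instead rank candidate words by NLP-derived importance and tie each question back to its timestamp in the video.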
Challenges we ran into
The process of generating a quiz, notes or a summary from a video is a complex and time-consuming machine learning problem. It was a challenging task to reduce the total time taken by this process from several minutes to a few seconds. We were able to accomplish this over the weekend.
Accomplishments that we're proud of
We consider every aspect of the project as an accomplishment as it was a new learning every step of the way.
What we learned
We were able to explore different NLP libraries, their capabilities and restrictions and also explored different open source and cloud solutions and how to use such already existing solution in our project to improve its efficiency.
What's next for intelliSchool
Capture screenshots of diagrams and formulae from the video using Computer Vision and add it to Smart Notes.
Create flash cards
Add analytics to help teachers understand the topics that weren't answered correctly by students.
Add analytics for parents to better understand how their kids are performing.
In smart notes section, we want show links to relevant websites to read more about the central topic of the video.
Generate pre-requisites - add links to learn about most important sub-topics on the video as pre-requisites
Built With
angular.js
flask
gensim
ibm-watson
javascript
nltk
python
Try it out
github.com | intelliSchool | Web Application and a browser plugin that generate quizzes, notes and flash-cards automatically using Natural Language Processing and Machine Learning for any educational video | ['Priyadarshini Murugan'] | [] | ['angular.js', 'flask', 'gensim', 'ibm-watson', 'javascript', 'nltk', 'python'] | 73 |
9,927 | https://devpost.com/software/augmented-reality-musical-theatre-karaoke-filters | Opening Title for the Rent Musical filter.
Sing a lyric from Rent's: "Finale B"
Opening Title for the Wicked Musical filter.
Sing a lyric from Wicked's: "Popular"
Opening Title for the Dear Evan Hansen Musical filter.
Sing a lyric from Dear Evan Hansen's: "Anybody have a map?"
Opening Title for the Chicago Musical filter.
Sing a lyric from: "All that Jazz"
Inspiration
I started working on these filters when I realised there are very few musical theatre games out there. The ones that exist are either trivia-based or aimed at children for singalongs. I wanted to make games that my friends and I would enjoy. COVID-19 shut down all theatres and live performance spaces; we were all out of work and we were also very demoralised. I wanted to bring joy to performers and musical theatre fans everywhere by making games that I would enjoy: games where the enjoyment is in the experience, not in whether you win or lose, where there are no rules you can't break, where you share your excitement in knowing the words to a song, but are also willing to laugh at yourself and share your experience with your friends when you don't. I realised that the analytics side is quite important, and one I downplayed the value of while making Augmented Reality filters. The data showed which musicals people enjoyed the most and which users were most popular, and I could actually track how far it reached online. This is all data that could be used to help producers and theatres realise that when they reopen, there are fans out there willing to pay to enjoy the event, and also show which shows and performers are currently popular in society.
What it does
These are a set of musical theatre instagram filters, they're created for fans to test their musical theatre knowledge and sing along to their favourite show tunes. The Title of the show appears on the user's forehead when the camera on their mobile device recognises a face, the filter then randomises through the show's musical numbers when pressing record before stopping, prompting the user to sing along. The recording is then posted on instagram, but the video can then be saved and uploaded to all social media platforms like Facebook, Youtube, TikTok, Twitter and LinkedIn.
How I built it
I have an arts degree, not a computer science or engineering one. Everything I learnt about AR is from YouTube tutorials, which I think is a testament to how a visual patch layout, as opposed to scripting, is easily accessible. The functionality of the patch is linking one function to the next: find a face > show the title > press record > delay randomising for x seconds > randomise through the assets for x seconds > stop. The whole time this occurs, it is told to track the user's face (how to identify a face is a separate patch altogether). The graphic assets are in two categories: the Title and the Slides (the musical numbers). Upon recognising the face the filter will show the Title, and upon pressing record the Slides will randomise, then stop. It sounds simple, and to a degree, if you're a visual learner, it is. But the back end of having working Augmented Reality and facial recognition is a little more complex.
Challenges I ran into
Spark AR Studio literally released an update 2 days ago that broke my test links on Android devices, and I had to contact them at 10pm to troubleshoot. So using software that regularly updates and changes caused me to lose a bit of sleep over the last couple of days. Luckily, Spark AR Studio has the tech support of the Facebook and Instagram teams.
Accomplishments that I'm proud of
About 2 years ago I decided to learn game design. I taught myself C#, Unity, Spark AR, music editing and graphic design, and started my own game design company. I published games on the Apple App Store (where one game made it to the number 1 spot in its category) as well as the Google Play Store. My Instagram filter profile currently has 24 million impressions and 2.7 million uses, which is insane for a self-taught theatre major on a 2012 MacBook.
What I learned
Not to be afraid of technology or of learning a new skill. People hear "Augmented Reality" or "coding" and are immediately intimidated. I work in theatre, and it seems like a lot of the artists I work with have the mentality of "I'm creative, not technical" and avoid new technologies, when actually, if they embraced them, more people could be exposed to their work.
What's next for Augmented Reality Musical Theatre Karaoke Filters
South Africa has a National Arts Festival every year in June, due to COVID-19 closing all possibilities of a live theatre experience they have opted to go digital. I will be teaching a workshop on AR filter design and talking about how to integrate the arts into the modern world through new technologies available.
Built With
facebook
instagram
photoshop
sparkar
Try it out
instagram.com
instagram.com
instagram.com
instagram.com | Augmented Reality Musical Theatre Karaoke Filters | I have created Augmented Reality Musical based games that track its impact and interest on social media, to provide entertainment while reminding producers that theatre is still a viable investment. | ['Luke Draper'] | [] | ['facebook', 'instagram', 'photoshop', 'sparkar'] | 74 |
9,927 | https://devpost.com/software/smart-swaps | SmartSwap
Introduction
Exchanging has been part of human civilization right from when humans began trading. In the early phase it was the barter system that was used to exchange goods between two parties; fast-forward to the modern age and we have blockchain to trade goods (represented by monetary units of tokens).
Now it's possible to trade without trusting the opposite party, thanks to the guarantees provided by blockchain.
First we had order-book-based DEXes for transacting between multiple parties. They had weaknesses such as:
No fair pricing
multiple transactions just to do a simple exchange of two tokens
Bad UI/UX and many more
Then came the introduction of the AMM by Vitalik, which led to the creation of Bancor and Uniswap. This solved quick swaps for many users.
The only problem is that we now have the Ethereum blockchain, which has:
15-second transaction confirmations
high gas prices, which make transacting whenever you want difficult
waits of hours due to high gas prices, which can affect the price you get when trading (MakerDAO's Black Thursday is one such example)
We need a place for fast exchange.
Thankfully, we have the Plasma sidechain bound to the main Ethereum network, which solves many scalability problems without sacrificing much decentralization.
We have come up with a Plasma exchange solution with fast swaps and lower gas fees.
Users can visit the test website link and log in using a wallet of their choice by clicking on Connect
They can transfer their assets from mainnet to the Matic network using the bridge
If Users want to buy Land Tokens they can select NFT
If Users want to buy any ERC20 Token they can buy using swap page by selecting tokens from dropdown
They can swap and send tokens to another address using send page
They can provide their assets to be available for exchange and collect fees
Functionality
Token(ERC20)-Token(ERC20) conversion
Dai->Cdai(Compound Dai) conversion
Token(mana)->NFT(Land) conversion
Login using multiple wallets(Portis,Torus,Fortmatic,walletconnect,metamask)
ETH-Token(ERC20,ERC721) conversion
Installation Steps
Clone the Repo
Do npm install
Navigate to ------ and start swapping :)
(Note: the addresses mentioned in the Contract Addresses section below are our own, so you need to replace them to run the dapp yourself; contact
virajm72@gmail.com
/
aveeshshetty1@gmail.com
/
snaketh4xor@protonmail.com
as there are a few files where you need to change the addresses)
Set Up NFT Market
As the
NftMarket Contract
has admin level permissions so you will need to set up your own.
Refer
this
to get started.
Note - You can use the Nft Test Contract mentioned in the above link
here
Matic Bridge
Go to depositerc20.js
Create a ropsten custom token on matic using this address
0xe2B7a0c7bC21E000B8327713513b9D4d2620A414 (TERC20)
Create a matic custom token on matic using this address
0xe2B7a0c7bC21E000B8327713513b9D4d2620A414 (TERC20)
Enter your metamask address in from field in the script
Enter your private key in SetWallet function
run node depositerc20.js in cmd
(We're facing some issues while integrating Matic.js with React.js forms without the private key; we had discussions with the Matic team and they gave the go-ahead for scripts)
Screenshots
Video
Smart Swap Demo
Website
Links with title
Sample Transaction Links For Reference
ETH<->MANA -> 0x003b9b872b81b0b9ad0f4745eb61c196a813a1ef7501c1a058aec5a817e9fad6
DAI<->ETH -> 0xb770cca7949d3a353edff6f7ade40cc32bd57a314d11b40e90cb8759089ebd7
DAI<->CDAI -> 0xbed4ff336ef8e2337b369702398d11f7881f5969c0eaa62af227c8e33694e3ea
CDAI<->DAI -> 0xe52737c9049db16ad05f21592dda422a076ec3f797273d65e480d3140f79b89c
ETH-LAND -> 0x75d4c19afa3a521fc0043b5889a5d122ce34743cfd4c5625cc543406cab00420
MANA<->LAND -> 0x645e317d67bc65daaa902d48abaa92c308a9eeb4ec693e9b03330778351d16e4
Adding Liquidity -> 0x16a0f0a1f32ba3eb6ca6c52f11dbee4400b5f78d05adb66187eb2c0cd70c79c6
Copy the Tx Hash and check out
here
[NOTE - We Would recommend checking out the demo video first]
Contract Addresses
You can find the ERC20, NFT(LAND) and Uniswap Exchange addresses
here
All Compound Contract Addresses are
here
NFT MarketPlace Address -> 0x1b5666b40f30231879f8a5dedfc78cdda7cacf77
Tools Used
Compound protocol
Uniswap protocol
Kyber protocol
NFT marketplace exchange
Matic bridge
Truffle
React
Web3react library
Authors
Viraz
snaketh4x0r
Aveesh
Built With
html
javascript
typescript
Try it out
github.com | Smart-swaps | L2 powered super fast platform for ERC swaps & buying NFTs | ['Aveesh Shetty'] | [] | ['html', 'javascript', 'typescript'] | 75 |
9,927 | https://devpost.com/software/health-bot | Inspiration
In our day to day lives, we often overlook our mental health, physical health, emotional well-being or all three at once. Even when we decide to invest more time in ourselves, we don't know where to start. So we made a website where you can find all the information in one place. We have sections for Home Workouts, Outdoor Workout (to keep you physically fit), Mental Health (for filling you up with positive thoughts), Healthy Eating (to keep your immune system in good shape) and Connecting with Loved ones (so you always have someone to talk to).
How we built it
We used Wix to design the site. We used UiPath to scrape the video links from YouTube and populate the database. The Google Action for Google Assistant was made using the Google Actions console and Dialogflow.
The Google Action allows you to get workouts and tips from your smartphone, Google Home, or any device with Google Assistant in it.
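A Google Action backed by Dialogflow usually answers through a webhook that returns a `fulfillmentText` field. A minimal sketch of such a handler follows; the intent names and tip texts are made up for illustration and are not the project's real content:

```javascript
// Minimal Dialogflow webhook handler: map the matched intent name to a reply.
const TIPS = { // illustrative content only
  'get.workout': 'Try 3 sets of 15 squats, 10 push-ups and a 1-minute plank.',
  'get.mental.tip': 'Take 5 minutes to write down three things you are grateful for.',
};

function handleWebhook(body) {
  // Dialogflow POSTs the matched intent under queryResult.intent.displayName.
  const intent =
    body.queryResult && body.queryResult.intent
      ? body.queryResult.intent.displayName
      : '';
  const text = TIPS[intent] || "Sorry, I don't have a tip for that yet.";
  // A simple reply only needs a fulfillmentText field in the response.
  return { fulfillmentText: text };
}
```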
The outdoor workouts will certainly make you feel more connected with nature 😉
Challenges we ran into
We struggled with creating follow up intents with dialogflow and getting the google action to work.
Accomplishments that we're proud of
Learning how to use Dialogflow was a great accomplishment for us. Also, this was our first time making a website, so we are pretty proud of how the UI turned out.
What we learned
Making google actions, creating websites using wix, populating data using UiPath
What's next for Health Bot
Adding a whole variety of workouts and making a skill for Alexa too.
Built With
dialogflow
google-actions
uipath
wix
Try it out
maggiefloat.wixsite.com | Health Bot | Taking care of your mental and physical well-being | ['Maggie Hou', 'Julia Ma', 'Jatin Dehmiwal', 'Victor Yau'] | ['Best UiPath Automation Hack'] | ['dialogflow', 'google-actions', 'uipath', 'wix'] | 76 |
9,927 | https://devpost.com/software/true-love-affection-23e0o1 | Inspiration
What it does
How I built it
Challenges I ran into
Accomplishments that I'm proud of
Am
What I learned
What's next for True love affection
Built With
hardwae
Try it out
www.khaliffyoyo.com | True love affection | Love is key to affection | ['khaliff meshack'] | [] | ['hardwae'] | 77 |
9,927 | https://devpost.com/software/covid-19-pandemic | Youth are the engine of change. Empowering them and providing them with the right opportunities can create an endless array of possibilities. But what happens when young people under 25—who make up 42% of the world’s population – lack safe spaces in which they can thrive?
According to the United Nations, one in 10 children in the world live in conflict zones and 24 million of them are out of school. Political instability, labor market challenges, and limited space for political and civic participation have led to increasing isolation of youth.
That's why the United Nations theme for International Youth Day focuses on “Safe Spaces for Youth.” These are spaces where young people can safely engage in governance issues, participate in sports and other leisure activities, interact virtually with anyone in the world, and find a haven, especially for the most vulnerable.
Even though creating safe spaces is a tough challenge in many parts of the world, there are many young men and women striving to do so while creating opportunities for themselves and for the disadvantaged in their communities. | The effects of COVID-19 on young people of all ages. | As impacts of the global pandemic, COVID-19, continue to spread, young people are having to worry about things like the health of their families, their workplaces/schools closing and others. | ['vincentkyambia2@gmail.com', 'Vincent Ngala'] | [] | [] | 78 |
9,927 | https://devpost.com/software/smarttracker-covid19 | Inspiration :
Nowadays the whole world is facing the novel coronavirus. This Android app was created to track the spread of the virus country-wise, with details of confirmed cases, deaths and recoveries, and to spread awareness regarding COVID-19.
What it does :
The Android app, named ‘SmartTracker-Covid-19’, was created to spread awareness about the COVID-19 virus. The app includes the following functionality:
CoronaEx Section -
This section has the following sub-components:
• News tab: carries the latest news updates. Fake news seems to be spreading just as fast as the virus, but since we integrate news from official sources, users are kept safe from fake news.
• World Statistic tab: Real-time Dashboard that tracks the recent cases of covid-19 across the world.
• India Statistic tab: Coronavirus cases across different states in India with relevant death and recovered cases.
• Prevention tab: Some Prevention to be carried out in order to defeat corona.
CoronaQuiz section - a quiz that helps people learn about the coronavirus and its effects on the human body. It chooses random questions, shows the correct answer for each question, and at the end the user gets to see their highest score.
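The quiz behaviour described above (random question order, revealing the correct answer, remembering the highest score) can be sketched like this; it is a generic sketch, since the real app stores its questions in SQLite:

```javascript
// Minimal quiz engine: random question order, answer reveal, high score.
class Quiz {
  constructor(questions) {
    // Shuffle a copy so questions come out in a random order.
    this.pool = [...questions].sort(() => Math.random() - 0.5);
    this.score = 0;
    this.highScore = 0;
  }
  next() {
    return this.pool.pop() || null; // null when the quiz is over
  }
  answer(question, choice) {
    const correct = choice === question.answer;
    if (correct) this.score += 1;
    this.highScore = Math.max(this.highScore, this.score);
    return { correct, correctAnswer: question.answer }; // reveal the answer
  }
}
```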
Helpline Section - As this application is made particularly for Indian citizens, all of India's state helpline numbers are included.
Chatbot Section - A self-assessment bot made to help people navigate the coronavirus situation.
Common Questions: Start screening, What is COVID-19?, What are the symptoms?
How we built it :
We built it using Android Studio. For the quiz section we used an SQLite database, and live news data is integrated from the News API. For the coronavirus statistics we collected data from Worldometer and Coronameter.
Challenges we ran into :
Integrating the chatbot into the application.
Accomplishments that we're proud of :
Though it was our first attempt at creating a chatbot, we tried to raise our level to some extent.
What's next for SmartTracker-COVID19 :
For better conversations, we will be looking to work more on the chatbot.
Built With
android-studio
chatbot
java
news
quiz
sqlite
Try it out
github.com | SmartTracker-COVID-19 | Android app to track the spread of Corona Virus (COVID-19). | ['Pramod Paratabadi', 'Supriya Shivanand Madiwal .'] | ['Best Use of Microsoft Azure'] | ['android-studio', 'chatbot', 'java', 'news', 'quiz', 'sqlite'] | 79 |
9,927 | https://devpost.com/software/alma-mater-alumni-relief-fund | Inspiration
My country wants to enter the new normal, although the COVID-19 death curve is still high. Many shops and malls have opened and set up minimal safety protocols for this new normal.
What it does
Safe is a website where people can see and rate how seriously local businesses take safety protocols in the new normal after COVID-19.
How I built it
I built the website using the Unicorn Platform CMS. For the design I used Adobe XD.
Challenges I ran into
I tried to build the website using Bootstrap, but there was not enough time, so I used the Unicorn Platform CMS.
Accomplishments that I'm proud of
Finishing the whole process, from design through to video making.
What I learned
There is a big opportunity to change the world, to help others during the COVID-19 crisis. I am just trying my best by contributing my creativity.
What's next for Almamater
Build the complete website and app solution
Built With
adobexd
cms
unicornplatform
Try it out
safe.unicornplatform.com | Safe | Find business who prioritize safety protocols | ['Miftakhul Farik'] | [] | ['adobexd', 'cms', 'unicornplatform'] | 80 |
9,927 | https://devpost.com/software/informavirus-9cnxek | Symptoms Screen
Heat Map
Zoomed Heat Map
Inspiration
We noticed the importance of tracking cases when it comes to illnesses such as Covid-19 or influenza. This information can be used for allocating supplies to hospitals and mandating certain policies. The current way to track cases is to look at the number of people who have tested positive for the virus. This relies on an abundance of accurate tests. Another way we could get a grasp of trends of illnesses through a population is to use what everyone has: a phone. People may not know what illness they have, but they will know what symptoms they have. With this data, we can create a real-time visual representation of spreading illnesses, and we can figure out which symptoms are correlated with which illness. This visualization will also provide public awareness in regards to the importance of sanitation.
What it does
Users log into Informavirus and are immediately prompted to click "yes" or "no" for whether they have a specific symptom, like a fever or cough. If they click yes for a symptom, their location will start being tracked and plotted on a heat map that corresponds to that symptom. After checking all of the symptoms, they will be directed to a page with a heat map for each symptom. Users can't see the map until they check the symptoms they have. Seeing the map is the incentive for checking their symptoms. They will be tracked until they either go back into the app and check "no" on the symptom or the average duration of the symptom has elapsed. We understand that everyone with a symptom will likely not report it, but the goal of Informavirus is to see general trends, not specific numbers.
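The tracking rule above (stop when the user unchecks the symptom or its average duration elapses) reduces to a small filter. A sketch follows; the duration values are illustrative placeholders, not medical figures:

```javascript
// Filter out symptom reports that have outlived the symptom's average duration.
const AVG_DURATION_DAYS = { fever: 4, cough: 14 }; // illustrative values only
const DAY_MS = 24 * 60 * 60 * 1000;

function activeReports(reports, now = Date.now()) {
  return reports.filter(r => {
    if (r.clearedByUser) return false;               // user checked "no"
    const limit = (AVG_DURATION_DAYS[r.symptom] || 7) * DAY_MS;
    return now - r.reportedAt < limit;               // still within duration
  });
}
```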
How I built it
Informavirus was designed as a web and iOS app, using the Quasar framework (built on Vue.js), as well as the related Vuex and Vue Router for multipage and state-management functionality. Google Firebase is used to securely store user data (username and password), as well as location data. We designed the application to be as secure as possible - the location data is linked with the user's ID, and can only be retrieved if the user decides that they would like to retrieve the data.
Challenges I ran into
One challenge was deciding how to structure our database. We wanted to ensure privacy for the users but also maximize the data's usefulness. If we attached each user's coordinates to that user's node, then it would be easy to delete that user's coordinates if we no longer wanted to track them, but having the user's coordinates attached to that user would be a risk to the user's privacy. Instead, we used two separate nodes, one with all of the users and one with all of the coordinates. We then attach the user's random ID to their coordinates in the other node as a way to deal with specific users' data.
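The two-node layout described above might be shaped as follows (node and field names are illustrative). Because coordinate entries carry only the user's random ID, a user's trail can be removed without touching, or exposing, their account record:

```javascript
// Illustrative shape of the two separate nodes: user records hold no
// coordinates; each coordinate entry references only the user's random ID.
const db = {
  users: {
    'uid_ab12': { username: 'alice' },
  },
  coordinates: {
    'c1': { uid: 'uid_ab12', lat: 40.75, lng: -73.99, symptom: 'cough' },
    'c2': { uid: 'uid_cd34', lat: 40.70, lng: -74.01, symptom: 'fever' },
  },
};

// Stop tracking a user: delete only the coordinate entries carrying their ID.
function deleteUserCoordinates(database, uid) {
  for (const [key, entry] of Object.entries(database.coordinates)) {
    if (entry.uid === uid) delete database.coordinates[key];
  }
  return database;
}
```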
Accomplishments that I'm proud of
I am proud of making an app that is very easy for users to use. If an app like this isn't easy to use, even with the incentive of seeing the map in the end, it wouldn't be likely that people would use it. The idea was to make a simple, clean app, and I think we accomplished that.
What I learned
I learned how important database structure is, and I learned that it will pay off greatly if things like database structure and overall structure of the app are planned before its creation. I also learned how important communication between team members is when working on a projects that has so many moving parts. Communication allows for more efficiency and a better product in the end.
What's next for Informavirus
Of course, since we only had a couple of days to get the project up, we had to compromise with our functionality. There are features that we want to add, but have not had the time to, such as merging symptoms onto one heatmap, using different colors to denote each symptom, as well implementing an algorithm that can use machine learning to see what areas are more likely to become "hot-spots" for the virus. Another functionality that we plan to implement after this algorithm is a method in which the application can send the results of the machine learning algorithms to the nearest hospitals, alerting them if there are more cases that may pop up. Of course, there would be some statistical analysis involved, as there are no perfect predictions, but if we are able to get even the slightest prediction, the extra preparation could be of help in the future. We also understand that for Informavirus to operate effectively, we need as large a user base as possible. So it may be most effective to team up with a company that already has a user base and a reason to attract users, like a social media app.
Built With
css3
firebase
geolocationapi
html5
javascript
vuegoogleheatmap
vuegooglemaps
vuejs
vuerouter
vuex
Try it out
github.com | Informavirus | Imagine seeing trends of illness in populations before testing or hospital visits occur. A user-friendly app that tracks users based off their symptoms, if any, and plots those points on a heat map. | ['John Wang', 'Daniel Schwartz'] | [] | ['css3', 'firebase', 'geolocationapi', 'html5', 'javascript', 'vuegoogleheatmap', 'vuegooglemaps', 'vuejs', 'vuerouter', 'vuex'] | 81 |
9,927 | https://devpost.com/software/iclassroom-qnimx6 | Web-Dashboard
App-Dashboard
App-chat
Login
Web-Videoconferencing
Inspiration
The current pandemic caused education institutions to shut down; we have a lot of difficulty communicating with teachers and peers and lack a good platform for it.
What it does
It is an online virtual classroom which focuses on peer learning by providing an engaging, social-media-like platform for both students and teachers to interact, clear doubts, mentor others, keep track of their courses, and conduct online classes.
How I built it
Built using webRTC
Challenges I ran into
Code optimization for mobile devices
Accomplishments that I'm proud of
First Prize Winner of Code19 Hackathon by Motwani Jadeja Foundation
Completed a proof of concept working prototype model
What I learned
Learned a lot about webRTC, backend development
What's next for iClassroom
To stabilize the working model, continue development of a scalable and strong backend using django
Built With
django
heroku
node.js
webrtc
Try it out
iclassroom.herokuapp.com
github.com | iClassroom | Learn Anywhere Anytime. An online virtual classroom which focuses on peer learning and doubt clarification by providing a engaging social media like platform to leaning communities. | ['Nandakishor M', 'Shilpa Rajeev', 'Abhinand C'] | [] | ['django', 'heroku', 'node.js', 'webrtc'] | 82 |
9,927 | https://devpost.com/software/rocket-launch | Inspiration
Many expositions, conferences and trade-shows are being cancelled.
In a post Covid-19 world, we need to think about new ways to celebrate events remotely
What it does
To celebrate the United States' return to human spaceflight, we have created this interactive simulation of a Falcon9 rocket launch in Augmented Reality
How I built it
Using a combination of 3D, voice narration, animations and particle effects, it is possible to build highly educational AR experiences that are, thanks to their small package sizes, immediately available for users to try and share with their social connections.
Challenges I ran into
Making it work for different type of smartphones.
Accomplishments that I'm proud of
Quite realistic
What I learned
Code optimization
What's next for Rocket Launch
Add more educational content
Built With
facebook
instagram
sparkar
Try it out
www.newsassim.com | Rocket Launch | Celebrating the United States' return to human spaceflight | [] | [] | ['facebook', 'instagram', 'sparkar'] | 83 |
9,927 | https://devpost.com/software/deep-learning-covid-19-chatbot | Cough detection and automated voice recognition and response
Chatbot - Android app
Chatbot - Flask web app version
Chatbot - Node-Red application
RNN noise reduction
Inspiration
The pandemic hit the world hard, and we are all worried about our health. It is important to check heart rate and mental health, and to detect any persistent coughing. We also need the live status of active cases near our locality so that we can plan our journeys accordingly. For this purpose I connected with Dr. Anjali and started working on this project.
What it does
You can ask questions without clicking a button or typing. Just ask, and she will give your health status, tell you how to care for your children, report active cases near you, and much more. For active cases we used the COVID19 India API.
How we built it
There are 4 versions of this assistant:
Built on Node-RED
Built as a Flask web app
Built on Android
Deployed on an Nvidia Jetson Nano/PC with automated cough detection, responding to your voice using RNN-based noise-reduction technology
Challenges we ran into
It has taken tremendous effort to build this during the hackathon, including training multiple AI models in a limited amount of time, configuring IBM Cloud, building the Node-RED application, a Flask RESTful app and an Android application. Installing dependencies on the Nvidia Jetson Nano and configuring the script on this single-board computer (SBC) took much of our time.
Accomplishments that we're proud of
Successfully built the system to work on SBCs such as the Nvidia Jetson Nano, which acts as a local brain for the assistant.
What we learned
Training deep learning models, IBM Cloud DevOps, and integrating AI edge devices with the cloud.
What's next for Deep learning Covid-19 chatbot
We have now almost completed our work on integrating emotion recognition and heart rate detection over the cloud, so that they can be used via API calls. This helps with integration into mobile applications, web applications, IoT devices, and other SBCs.
Built With
c++
ibm-cloud
ibm-watson
java
keras
opencv
python
tensorflow
Try it out
github.com | Deep learning Covid-19 assistant | An AI assistant to constantly check your heart-rate, emotional state,detect coughing and give active covid-19 cases near your area and help your health queries | ['Nandakishor M', 'anjali m'] | [] | ['c++', 'ibm-cloud', 'ibm-watson', 'java', 'keras', 'opencv', 'python', 'tensorflow'] | 84 |
9,927 | https://devpost.com/software/the-virus-limiter-3-0-p8hg6m | I wanted to find a solution for COVID-19 and I signed up for HackTheCrisisIndia and made it to top 300! Unfortunately, I couldn't make it to top 30.I also got a special mention in The Global Hack. I knew I wasn't gonna give up and I signed up for This Hackathon!
The Virus-Limiter has 4 ideas so far:
TVL 1.0
TVL 2.0
TVL 3.0
KleenSweep
You can view them all at:
https://sites.google.com/view/rehanraj/ideas
A very big barrier I have in my way is The Age Barrier. I am only 10 years old and have no experience in coding and all. And I have like people who are 40 going against me!
A very big accomplishment that I think that I have, is that I made it to top 300 of HackTheCrisisIndia. And got lots of attention in The Global Hack and World Hackathon Day
What I realised from HackTheCrisisIndia was that my idea was just really basic, so I decided to change that this hackathon. And my idea in The Global Hack was a little hard to spread widely.
Built With
godaddy
javascript
wordpress
Try it out
rajrehan.com
bit.ly
we.tl
forms.gle
sites.google.com
sites.google.com
sites.google.com
sites.google.com
discord.gg
discord.gg | The Virus-Limiter | A company bound to help people in times of crisis | ['Rehan Raj'] | [] | ['godaddy', 'javascript', 'wordpress'] | 85 |
9,927 | https://devpost.com/software/euconnect-cross-border-education | Problem Statement
Solution
Research Phase
Research Phase
Architecture
We are a group of people with various expertise who gathered together with one shared key goal - to keep our future generation enthusiastic about learning while getting connected globally. There is too much bad news about youths stuck at home amid the Covid-19 crisis - depressed, lonely, bored, demotivated - that affects their learning experience. The mixed-method research we carried out revealed the same issues regarding motivation and the hardships of remote learning. With one vision, we decided to come together and address this issue.
Vision and Mission
We envision a future with an accessible, borderless world for pupils to connect, inspire and motivate each other in achieving their learning goals. School practitioners are to be linked to a global community and everyone can connect internationally, learn collaboratively and help each other to accomplish learning paths through a gamified, modern yet friendly virtual environment.
With our platform, we want to support the
United Nations Sustainability Goals
on quality education, reduced inequalities, and partnerships for the goals.
We strive for quality learning paths by professionals.
A mentor is qualified through core and soft skills evaluation, including onboarding videos.
Our Platform is inclusive.
It's free of charge for pupils and schools and is accessible in all devices, online or offline.
We are Non-profit, working with global Partnerships for similar objectives.
Open Source content from certified platforms is used in our existing paths
Research Results
We conducted surveys and interviews with pupils from different countries. Most of them said that distance learning is inefficient, as there are not enough explanations and feedback from teachers. A lot of pupils answered that they feel lonely and want to find new friends to communicate with, especially from different countries, to learn about other cultures. The idea of mentoring was also supported by pupils, as mentors could help them with studying and even with personal matters. All the respondents said they would like to have the app we developed.
matchEU and Social Impacts
matchEU is a learning platform that we developed with
three
key features. It is hoped that matchEU can support pupils' social well-being and help them survive the learning journey in the ‘new normal’ situation.
1. Global connection
Study partnership
- Pupils can choose their study buddies and create their own space for discussion. While this can increase their motivation to continue learning amid Covid-19, this will also strengthen the intercultural competences among them. The cultural values inculcated within this App defines how impactful matchEU is to pupils' personal growth and development.
Mentorship
- Pupils can choose a mentor based on their preferences. This allows them to work with those whom they are comfortable with and this will keep them motivated in learning the subjects of their choice. This will encourage a long-term, sustainable tandem and mentorship among student users and mentor users.
Friendship
- The care-to-share space allows pupils to create a matching tandem among themselves and create a long-lasting friendship. It helps to maintain their social and well being as it allows them to share and care about each other, especially during the crisis.
Community relationship
- Pupils are also connected to the community around the globe. This allows them to be more sociable while keeping their distance from each other. It is hoped that users can stay emotionally and mentally healthy to continue learning remotely.
2. Worldwide Courses
Free courses
- Pupils are provided with readily-available courses created by multiple stakeholders around the world. This exposes them to a wider perspective that exists beyond a standard curriculum.
Easily accessible
- With a service and client facing mobile application & web app in mind, pupils are provided with a link to content created on easily accessible platforms such as Moodle and Google Classroom. This allows equal chances for everyone to take up any course they are interested in.
3. Gamification
Challenges
- Pupils are provided with ever-ready challenges that they could take up while competing among each other. This will increase the motivation, thus promoting sustainability in the learning journey.
Badges
- Pupils are provided with a reward system that helps keeping them motivated to stay connected in the learning journey.
Ranking
- Pupils are provided with a ranking system based on the badges they received or tracks they completed. This encourages and motivates them to accomplish their own learning paths while gaining world recognition. Mentors are also ranked based on pupils’ level of satisfaction.
Technical Complexity
We have created a Python backend app that serves the website and processes requests related to connections to other platforms such as Google Classroom and Moodle. There is also a Python app serving as an API for the mobile apps and as a message router: it matches registered users into groups, or simply lets them form groups and exchange messages. Work continues to make everything function in the web version. The website frontend is done in React, with the goal of keeping it modularised and unified so it can be adjusted dynamically when new subsites are created. Work is also being done on a React Native mobile app.
The Future of matchEU
matchEU has a powerful impact on the growth and development of youths in terms of their cognitive, emotional, and spiritual abilities. Given that this app is a much-needed tool for the current situation, it will be telling to see how useful the app remains once the crisis is over. It will be an eye-opener to see how much value this app can bring to pupils in their future endeavours. Thus, it is a necessity to keep matchEU moving forward.
We are looking for first funding, to pay for our working tools in order to create the MVP. Let's connect!
Let's connect - find out more and start with our
landing page
or stroll around on
GitHub-BackEnd
GitHub-FrontEnd
and the
video
Twitter: @eu_match
Mail:
matcheu20@gmail.com
Built With
amazon-web-services
bootstrap
elastic-beanstalk
flask
heroku
justinmind
postgresql
Try it out
match-eu.herokuapp.com
github.com
github.com
www.powtoon.com
ron-huberfeld.github.io | matchEU is a 2020 EUvsVirus Hackathon project | WE HAVE AN IDEA FOR 2022: Let’s start a new Hackathon EUforPEACE, as Ukraine needs our support. Can you help to organise? Send a message on LinkedIn to Isabel Arens | ['Nathalie Haußmann', 'Olha Onofriichuk', 'misrah mohamed', 'Japneet Singh', 'Ron Huberfeld', 'Ryan Lopes', 'Isabel Arens', 'Vanessa Guillén', 'Megan Thong', 'Arushi Madhesiya', 'Chelsea Alcinord'] | [] | ['amazon-web-services', 'bootstrap', 'elastic-beanstalk', 'flask', 'heroku', 'justinmind', 'postgresql'] | 86 |
9,927 | https://devpost.com/software/project-iejftw6br4gd |
#Corona is deadly!
Don't be afraid of corona; go for walks and do sports.
#Take corona seriously and stay at home. Don't you see the news of deaths, funerals and graves?!
What it does
How we built it
Challenges we ran into
Accomplishments that we're proud of
What we learned
What's next for کرونا له نړی هدیره جوړه کړه (Corona has turned the world into a graveyard).
Built With
dari
english
passhto
Try it out
www.facebook.com | !کرونا له نړی هدیره جوړه کړه# | #كرونا جدى ونيسى او په كور كى پاتى سى. دا د مړینی خبرونه، جنازی او قبرونه نه ویني؟# | [] | [] | ['dari', 'english', 'passhto'] | 87 |
9,927 | https://devpost.com/software/virtualprogram-org-5q8udj | Sample Event 2
Sample Event
Create Event Page
Inspiration
If you’ve ever been to a convention or conference, you know exactly this feeling. You’re sitting in the middle of a 20,000 seat convention center, the intro music ends, a video plays on the 100 foot screens and then the keynote speaker walks on to the stage. These experiences are memorable because it's exhilarating to be with like-minded people discussing something they’re passionate about.
Unfortunately, with COVID-19 these were among the first venues to close their operations. Since then we've all begun transitioning to virtual content through platforms like Zoom and Facebook Live. Yet, in the three months since, no one has truly tried to replicate that full experience and bring the conference to the home.
What it does
Never again feel like your front row seat at the conference has been downgraded to merely 25 people in gallery view on zoom. Thanks to virtualprogram.org the gap between video conferencing platforms and truly virtual programs will finally be bridged.
With virtualprogram.org, you virtually walk into the convention center along with thousands of others as music plays and slides are displayed on the screen, The VIPs are seated prominently in the front, and the screen gets dark when the speaker begins.
This solution has potential to be used for events ranging from graduations, religious ceremonies, galas, concerts and so much more. Any program that originally required a stage and audience can be fully virtualized using this product.
How I built it
As a proof of concept, I built a web app which has two components.
First, it uses YouTube to either display a live stream or a video premiere. This way the show itself is given priority over the bandwidth and it also incorporates a slight delay in order to ensure high definition quality of the stream.
The second part uses Jitsi (an open source video conferencing platform) to create a private room for the event. All participants can see each other as well as the youtube stream.
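A sketch of how the two components described above could be wired together. The `JitsiMeetExternalAPI` call and the YouTube embed URL pattern are shown in comments because they need a browser; the option-building helpers are assumptions about this project's configuration, not its actual code:

```javascript
// Build the config objects for the two halves of the page.
function jitsiOptions(roomName, parentNode) {
  return {
    roomName,                       // private room for the event
    parentNode,                     // container element for the Jitsi iframe
    width: '100%',
    height: 400,
    configOverwrite: { startWithAudioMuted: true }, // audience joins muted
  };
}

function youtubeEmbedUrl(videoId) {
  // Works for both live streams and video premieres.
  return `https://www.youtube.com/embed/${videoId}?autoplay=1`;
}

// In the browser (requires the external_api.js script from a Jitsi server):
//   const api = new JitsiMeetExternalAPI('meet.jit.si',
//     jitsiOptions('my-event-room', document.querySelector('#meet')));
//   document.querySelector('#stream').src = youtubeEmbedUrl('VIDEO_ID');
```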
There are also some other tweaks which make it feel like a conference such as branding, animations and dimming of lights by darkening the UI.
Challenges I ran into
It's clear that it takes quite a long time to fully develop a video conferencing platform that's even remotely comparable to Zoom or Google meet. Therefore, this site uses a lot of CPU in order to have more than 10 people on the call. In production this would not be a viable solution.
It would probably be best to develop a custom solution. Given that everyone in the audience is muted and that resolution or FPS isn't much of a priority, a custom solution would be able to handle hundreds and maybe even thousands of people. I had also hoped to add a VIP section, but this would also require a custom solution
Accomplishments that I'm proud of
The prototype I built is capable of being used right away and could be a very useful platform for virtual events. This project has potential to be used for so many types of events. I am looking forward to sharing with others to enhance the virtual experience.
What I learned
Before working on this project I had never worked with video conferencing technology, and I learned a lot about how it works and all the detail required to make all of our calls look so smooth.
What's next for virtualprogram.org
The next steps for this project would be to conduct more market research to narrow down the target market and to find out more precisely what the needs are of those who would use it. Depending on how well this proof of concept performs, I may also begin working on a custom video conference solution for this product as described above.
Built With
html
javascript
jitsi
youtube
Try it out
virtualprogram.org
github.com
virtualprogram.org | virtualprogram.org | You’re sitting among 20,000 at a convention. The music fades, a video plays and the keynote begins. With virtualprogram.org never again feel like your seat has been downgraded to zoom gallery view. | ['Jacob Richman'] | [] | ['html', 'javascript', 'jitsi', 'youtube'] | 88 |
9,927 | https://devpost.com/software/covid19-tracing | All you need for travel planning and assessment
Inspiration
Tourists entering any country face the same issue: travel planning requires preparing physical documents such as itineraries, travel insurance, country navigation, etc.
What it does / Your solution and what it does?
Business Case: This application serves as a digitalized platform for integrating the travel and health needs of tourists
Step 1: Tourists check in to the application and receive an automatically scheduled itinerary plan from our AI bot, Alvis.
Step 2: The itinerary and travel plans are updated with suggestions such as new places and local food / accommodation recommendations. The system also monitors your health and travel plans, providing notifications and diagnostics.
Step 3: The user's itinerary is kept up to date, with advice on new itinerary plans.
Step 4: Users can form travel communities within the application to share tips on travelling safely.
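The check-in and itinerary flow above could be modelled along these lines. This is a minimal sketch; the class and field names are hypothetical illustrations, not taken from the actual travel-alvis codebase:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ItineraryItem:
    day: int
    place: str
    note: str = ""

@dataclass
class Traveller:
    name: str
    itinerary: List[ItineraryItem] = field(default_factory=list)
    notifications: List[str] = field(default_factory=list)

    def check_in(self, suggested: List[ItineraryItem]) -> None:
        """Step 1: seed the itinerary with the bot's suggested plan."""
        self.itinerary.extend(suggested)

    def add_recommendation(self, item: ItineraryItem) -> None:
        """Step 2: append a new place / food / accommodation suggestion
        and notify the traveller about it."""
        self.itinerary.append(item)
        self.notifications.append(f"New suggestion for day {item.day}: {item.place}")

t = Traveller("Alex")
t.check_in([ItineraryItem(1, "Old Town walking tour")])
t.add_recommendation(ItineraryItem(2, "Local food market"))
```

Each later step (updated advice, community tips) would then operate on the same traveller record, which keeps the whole flow contactless and in one place.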
travel-alvis is an AI-powered digital check-in solution for tourists that enhances their travel experience by replacing physical touch with elegant diagnostics and contactless checkpoints for the post-COVID-19 world, augmented with AR experiences.
The travel problem your project solves, including TravelScrum challenge(s)
Travel: Enhancing travel experiences through immersive AR
Health and Safety: Algorithmic analysis of user health
Business Impact
This application would be useful in creating health safety measures for tourists travelling post COVID-19. The application offers the right responses and steps to take through our features:
1) AI-powered itinerary system assistant
2) echoAR modelling of travel attractions
3) Chat support
The application serves as a digital, contactless delivery of travel, whereby users are more connected and feel safe while preparing for the post-COVID-19 world.
As the economy reopens, more tourists would be encouraged to take up travel commitments by better insuring their travel. Each user's travel insurance is tracked in our system.
Originality
Yes, we are one of the few with travel and wellness elements, all in one application!
How I built it
The application was built using the Bootstrap 4 framework, echoAR, Google Firebase, and Google Cloud.
Challenges I ran into
Integrating the different technologies into a fully functional application was challenging. I overcame this through mentor discussions and help from the T/S community.
Accomplishments that I'm proud of
The implementation of AR elements for travel and health purposes is a new concept, and one I am proud of.
What I learned
I learned about new technology providers such as airline content technologies and immersive technologies like echoAR
What's next for travel-alvis
The application would be further scaled as a travel and health community for tourists to find suitable accommodation or hospitals and enable better tracking of patients' well being.
The revenue model would be pay-per-use, which allows users to customize their travel needs.
Tourists coming into any country could check in to this application for quick diagnostics for free.
Built With
echo-ar
firebase
glide
google
google-cloud
ml
uipath
Try it out
travel-alvis.glideapp.io | travel-alvis | An AI powered digital itinerary solution for tourists by providing customized travel plans in response to post COVID-19 using enhanced AR experiences | ['Xuan Wei'] | [] | ['echo-ar', 'firebase', 'glide', 'google', 'google-cloud', 'ml', 'uipath'] | 89 |
9,927 | https://devpost.com/software/semoga-apa-yang-aku-usahakan-selama-ini-akan-ada-hasil-nya |
Good job
Inspiration
What it does
How I built it
Challenges I ran into
Accomplishments that I'm proud of
What I learned
What's next for Semoga apa yang aku usahakan selama ini, akan ada hasil nya
Built With
tiktok
Try it out
www.facebook.com | Semoga apa yang aku usahakan selama ini, akan ada hasil nya | Video creator tik tok | ['Diky Rikardo'] | [] | ['tiktok'] | 90 |
9,927 | https://devpost.com/software/covidandus-com | Inspiration
The world needs to come together and be a unit, face and solve problems together.
What it does
One might say we are all in the same boat, travelling down the same river and rowing with the same oars. I say we are by no means in the same boat, but we are in it together.
For some it’s been horrific, lonely, an ordeal, full of sadness, and for others it’s been a time for reflection, time to get to know ourselves and the people around us.
For the world as a whole it’s been a time of uncertainty and sacrifice where, unfortunately, the number of deaths has become the only gauge of how close we are to ‘normal’ life as we once knew it.
Our entire existence has altered and shifted to a new acceptable ‘norm’.
With this in mind I have created this community for us all to come together and share our stories of grief, of success, of struggle, and of humility.
This period will go down in history and with it I want Covid+Us to offer a shelter, housing the stories that have been lived, created and adapted during this time.
As the economy yo-yos, financial security has become a thing of the past for a large majority of people. Governments have provided support and changed many policies to help keep society agile and adaptive.
It’s our responsibility to voice our stories to future generations, to help them appreciate, learn from, and remember the struggles and successes of COVID-19 from the front line.
The stories shared through this platform will undoubtedly unravel unique scenarios and situations that people have had to cope with – I want us and future generations to remember and be inspired by those inspirational moments and people.
Let us come together and appreciate that although we are not all in the same boat we are part of the same era and situation.
With this, we can find comfort in knowing that we are not alone, our stories can be shared, saved and used to inspire generations to come through this platform. We can stand proud knowing we are navigating through the storm of Covid-19 with our heads high.
Albert Einstein once said, “Life is like riding a bicycle. To keep your balance, you must keep moving.”
This quote encapsulates how we have all managed to embrace the situation whilst being able to quickly make changes, and adapt our very existence.
The future can never forget the lives that have been lost, the wounds people wear on their sleeves, and the worst of which are the invisible wounds which take the longest to heal. Let’s come together and share our stories as one world, one nation and one heart.
How I built it
I started the product myself: designing, conceptualising, speaking to people, and conducting interviews until I found the right balance.
I then worked with a designer and a developer to choose the right technology for the stage we were at, and one that fit the budget, as I am funding it myself.
Challenges I ran into
Costs: funding it myself meant I had to learn and do a lot myself to save on development and design time.
Speaking to people and gathering their thoughts.
Getting it ready on time with the allocated resources while still having a fully functioning and tested platform.
Accomplishments that I'm proud of
The platform and the feedback I have received. It looks and works like a global-level platform.
What I learned
Teamwork is very important, as is being open-minded. Learn from everyone and take all feedback on board.
What's next for covidandus.com
Marketing and content writing
Built With
figma
javascript
ui
ux
wordpress
Try it out
www.covidandus.com | covidandus.com | One pandemic, endless encounters | ['Nikin Chudasama'] | [] | ['figma', 'javascript', 'ui', 'ux', 'wordpress'] | 91 |
9,927 | https://devpost.com/software/home-health-care-patients-tracking-application | Home Health Care Mobile
Home Health Care Sample Decision Support System
COVID-19 Risk Prediction Tool
Salesforce
The follow-up of home health care and elderly patients is not done digitally, and home health data goes unprocessed. This makes it difficult to track an elderly patient's situation. During visits, healthcare professionals must re-learn the examinations, drugs, and patient status previously applied to the patient. With the current COVID-19 outbreak, patient visits have decreased considerably. Since patients in this group are among those at highest risk from COVID-19, hygiene requirements during visits complicate care procedures. In addition, symptom monitoring of home care patients, people in the geriatric class (65 years and older), and potential / recovering COVID-19 patients should be done remotely.
First, the information necessary for following up home health care patients was prepared for data entry in an Android app. Salesforce was used to store the data. A website was built in the RStudio environment around an AI-based model for monitoring the health status of home health care patients. For COVID-19 symptom follow-up, data from 22,000+ COVID-19 patients worldwide were processed, and a second website was built in the RStudio environment.
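In spirit, a symptom-based COVID-19 risk prediction tool like the one described maps reported symptoms and patient age to a risk score, which then drives a follow-up recommendation. The sketch below is purely illustrative: the symptom weights and thresholds are invented for the example and are not those of the actual DVESHealth model, which was trained in R on real patient data.

```python
# Illustrative only: these weights are made up, not from the real model.
SYMPTOM_WEIGHTS = {
    "fever": 2.0,
    "dry_cough": 1.5,
    "shortness_of_breath": 3.0,
    "fatigue": 1.0,
}
AGE_WEIGHT = 0.05  # extra risk per year of age over 65

def risk_score(symptoms: set, age: int) -> float:
    """Sum the weights of reported symptoms, plus an age penalty
    for geriatric patients (65 years and older)."""
    score = sum(SYMPTOM_WEIGHTS.get(s, 0.0) for s in symptoms)
    if age > 65:
        score += (age - 65) * AGE_WEIGHT
    return score

def triage(score: float) -> str:
    """Map a numeric risk score to a remote follow-up recommendation."""
    if score >= 4.0:
        return "urgent remote consultation"
    if score >= 2.0:
        return "daily remote monitoring"
    return "routine follow-up"
```

A real decision support system would learn such weights from data (e.g. logistic regression over the 22,000+ patient records mentioned above) rather than hard-coding them, but the score-then-triage shape of the pipeline is the same.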
The biggest challenge we faced was finding anonymous data to use for the decision support systems, then cleaning it and making it usable.
In later stages of the project, video calls, voice recognition, and sensor and smart watch (Apple Watch) integration will be supported.
Built With
android
api
css
flutter
html
java
r
rstudio
salesforce
Try it out
dveshealth.com
twitter.com
www.linkedin.com
www.instagram.com
dveshealthai.shinyapps.io
dveshealthai.shinyapps.io
drive.google.com | HOME HEALTH CARE PATIENTS TRACKING APPLICATION | DVESHealth provides AI based home health & elderly care decision support and monitoring mobile / web / cloud solutions. | ['Berna Kurt', 'Mustafa Aşçı', 'Asım Leblebici'] | [] | ['android', 'api', 'css', 'flutter', 'html', 'java', 'r', 'rstudio', 'salesforce'] | 92 |