hackathon_id | project_link | full_desc | title | brief_desc | team_members | prize | tags | __index_level_0__
---|---|---|---|---|---|---|---|---
10,026 | https://devpost.com/software/spotlight-6503ky | Screenshot of Video
Inspiration
In light of COVID-19's quarantine mandates, we face challenges with human interaction and, thus, social networking. Moreover, the music industry has declined severely because of societal isolation. Small-time creators of music and the arts have a hard time sharing their talents as their tours and popular venues shut down for public safety reasons. As high school orchestral musicians ourselves, we understand the need to connect with our fellow artists. That connectivity breeds collaboration, which in turn makes us better at producing music. Furthermore, for small-time artists, collaboration can be a tool for promotion: it opens doors to more venues, creates more tour dates, and reaches a larger audience. By taking advantage of our reliance on mobile devices to communicate with the rest of society, we hope to create a virtual platform that offers top-notch entertainment to the general public and, in effect, lets artists promote their work.
What it does
Spotlight essentially acts like any other social media and networking platform, except its sole purpose is for artists to share their talents with others. With a simple design and a straightforward user interface, an individual can enjoyably scroll through a "feed" and experience a multitude of musical artists. We decided to take advantage of the special bond we have with our mobile devices and therefore catered solely to a mobile audience rather than building a web application. By creating a mobile application, we hope that the special relationship we have with our phones carries over to Spotlight, increasing users and allowing musicians to spread their influence over a broader range of virtual audience members. Like any other social media platform, it lets users like videos posted by artists. There is also a link to the artist's Spotify page, again to facilitate further artist promotion and appreciation for their other works. Besides sharing content, the networking capabilities of the platform will facilitate greater connectivity among members of the music community, leading to more collaboration and the spread of music as an art form. The platform also lets users share videos of their own musical abilities through an upload feature. Because the purpose of this application is to share music with other members of the music community, the ability for users to share their own videos will hopefully bring artists and fans closer together in the COVID-19 era.
How I built it
Spotlight's UI is built with Expo/React Native. This allowed us to maintain a "controlled" environment for the application's state, giving great compatibility with server-side API calls and rendering. The backend was implemented in Node.js on top of a MongoDB Atlas instance, which lets the application automatically store, render, and serve the video files for hosted music videos and recorded performances uploaded by artists.
The video scrolling UI is built on ExpoKit's Video AV component, which renders the URIs of videos specified in the app's state. The video components are wrapped in a FlatView component with snapping enabled, allowing for a TikTok-esque scroll interface.
The backend uses a MongoDB Atlas database, interfaced with mongoose, multer, multer-gridfs-storage, send-seekable, and grid-fs-stream. The server first initializes a connection to the MongoDB server, establishes the Video schema, and creates the GridFS configuration. When a new video is created in the React Native client, the app sends a POST request to the backend as a multi-part upload, where multer and the GridFS interface decompose the file into binary chunks that are stored in MongoDB.
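The actual Spotlight backend is Node.js with multer-gridfs-storage, which isn't shown here; the minimal Python/pymongo sketch below only illustrates the same GridFS idea, that a video file is split into binary chunks inside MongoDB and can be read back incrementally. The connection URI, database name, and helper names are placeholders, not the team's code.

```python
# Illustrative GridFS sketch (Python/pymongo stand-in for the Node.js backend described above).
import gridfs
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<user>:<password>@cluster0.example.mongodb.net")  # placeholder URI
db = client["spotlight"]          # placeholder database name
fs = gridfs.GridFS(db)

def store_video(path, filename):
    """Store an uploaded video; GridFS breaks it into fixed-size chunks internally."""
    with open(path, "rb") as f:
        return fs.put(f, filename=filename, contentType="video/mp4")

def stream_video(file_id, chunk_size=256 * 1024):
    """Yield the stored video back in chunks, e.g. for a seekable HTTP response."""
    grid_out = fs.get(file_id)
    while True:
        data = grid_out.read(chunk_size)
        if not data:
            break
        yield data
```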
Challenges I ran into
We ran into significant challenges with a live-stream functionality we attempted to implement, but unfortunately, the "eject" function of Expo broke our application.
Balsamiq Sketches
What I learned
We learned a great deal about streaming and buffering video. We worked through significant issues with downloading and buffering the videos hosted in our MongoDB implementation. Specifically, we learned the difference between Buffers and Streams, and how to make buffers honor byte-range requests in the Express server.
What's next for Spotlight
We intend for Spotlight to let artists share their work with the general public and promote themselves. In future iterations, we hope it will also include a recommendation algorithm that lets artists' content "trend" based on popularity, leading to even more recognition. We also plan for the "user profile" to let individuals set their musical taste; for instance, if they only want to see orchestral music, only orchestral music will appear in their "feed." Soon, the application should also have a way to monetize artists in addition to promoting them. Besides rewarding small-time artists for displaying their talents on the platform, we can work with record labels to advance their careers and help music companies find new voices. Finally, we hope to add a live-stream capability so artists can better connect with their fanbases and share more of their work.
Express MongoDB source code:
https://repl.it/@usere/appMongoFile
Built With
expo.io
mongodb
node.js
react
reactnative
Try it out
github.com | Spotlight | A multiplatform, connected music and artist discovery platform! | ['Ethan Sayre', 'Nicholas Pham'] | ['Honorable Mention - Top 5'] | ['expo.io', 'mongodb', 'node.js', 'react', 'reactnative'] | 3 |
10,026 | https://devpost.com/software/ocular-aid-kwld3t | Logo
Main graphical interface
Settings Page
Domains
nostrain.space
Inspiration
As society becomes paralyzed by the spread of COVID-19, more and more people find themselves staring at their computer screens while working from home. It is expected that many people will soon find themselves experiencing the harmful effects of prolonged screen time. We want to create an assistant that can recognize the signs of digital eye strain and alert the user.
What it does
Ocular Aid uses computer vision technology to detect symptoms of digital eye strain. In addition to this, face detection is used to calculate a user's total screen time and users are able to set periodic alerts that remind them to rest their eyes.
How we built it
Ocular Aid is centered around the fact that humans blink at a slower rate when fatigued or experiencing eye strain. Using OpenCV, human eyes and their bounding boxes are detected with Haar cascade classifiers. The images are cropped to contain a single eye and then classified as either open or closed by a pre-trained DenseNet-121 feature extractor attached to a fully-connected classifier (trained to 96% accuracy on test data). Ocular Aid's desktop application was created using C# and its WPF UI framework.
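A hedged sketch of that detection-plus-classification pipeline is below: detect eyes with a Haar cascade, crop each eye, and run it through a DenseNet-121 backbone with a small fully-connected head. The cascade file, classifier head shape, label mapping, and preprocessing are assumptions for illustration, not the team's actual code.

```python
# Illustrative eye-strain pipeline sketch: Haar cascade eye detection + open/closed classifier.
import cv2
import torch
from torchvision import models, transforms

eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

model = models.densenet121()                      # backbone; the team fine-tuned a pre-trained one
model.classifier = torch.nn.Linear(1024, 2)       # assumed head: class 0 = open, class 1 = closed
model.eval()

preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def classify_eyes(frame_bgr):
    """Return a list of 'open'/'closed' labels, one per detected eye in the frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    labels = []
    for (x, y, w, h) in eye_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        crop = cv2.cvtColor(frame_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2RGB)
        with torch.no_grad():
            logits = model(preprocess(crop).unsqueeze(0))
        labels.append("open" if logits.argmax(dim=1).item() == 0 else "closed")
    return labels
```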
Challenges we ran into
A majority of our group had little to no experience with C# and WPF.
The Haar cascade classifiers had some issues when the user was wearing glasses.
Accomplishments that we're proud of
We hacked out a full blown Windows app that has so many real-world applications in just 24 hours! Not only that, but we learned a lot.
What we learned
We gained lots of experience with C# programming.
We learnt about Haar cascade classifiers.
We became much more experienced with OpenCV and image processing.
What's next for Ocular Aid
We'd like to have Ocular Aid take more factors than just blink frequency into account when detecting eye strain. There are also many possible optimizations that should be made. For example, we work with greyscale images of eyes while the convolutional neural network accepts 3 channel RGB images as inputs. This means that the neural network has more parameters than are actually needed.
Built With
.net
c#
css
html
javascript
opencv
python
pytorch
wpf
Try it out
github.com | Ocular Aid | A digital assistant that can "see" symptoms of eye strain and fatigue. Reminding you to take adequate breaks. | ['Rohan Shetty', 'Trevor Du', 'Kevin Gao', 'Matthews Ma'] | ['Honorable Mention - Top 5'] | ['.net', 'c#', 'css', 'html', 'javascript', 'opencv', 'python', 'pytorch', 'wpf'] | 4 |
10,026 | https://devpost.com/software/intelligo | Home Page
Results for "Machine Learning"
Inspiration
This is my first Hackathon, and with only a little experience in coding, UiPath and WordPress were a great help.
Intelligo is Latin for "Learning" and "Intelligence".
Great options today in online shopping can confuse anyone. While searching through various websites to learn a new skill, selecting the relevant course was a bit difficult for me, considering all the parameters. This project can help anyone to select a new course easily.
What it does
RPA (UIPath) and WordPress integration:
The bot opens the website https://intelligo.online/ , whose domain was taken from Domain.com using the promo code provided in the DistanceHacks Hackathon resources. The site is built with WordPress. The user inputs the name of the course they want to learn, after which the bot browses through 5 top learning websites: Coursera, Lynda/LinkedIn Learning, Codecademy, Udemy, and Udacity. The bot scrapes data from each site, stores it in Data Tables, and saves it into a .CSV file. The bot then logs in to the WordPress backend and uploads the generated .CSV file, which holds all the stored data. Finally, the bot shows the comparison data on the WordPress website in an easy tabular format, so the user can select the most relevant course based on school, description, price, etc.
How I built it
The RPA part was built on UiPath, where Data Scraping methods (Web), Data Table generation and manipulation were largely used. Website was built using WordPress tool. And Domain was taken from Domain.com.
Challenges I ran into
Navigating the various learning websites, since the behavior of each website was quite different from the others. The data scraping was a bit challenging too, but the use of Selectors in UiPath made it easier.
Accomplishments that I'm proud of
I can now easily select the most relevant course that I want to learn next.
What I learned
RPA tool usage for Web Scraping and user actions and WordPress for web development.
What's next for Intelligo
RPA can be further integrated with Machine Learning algorithms so that an algorithm further reduces the response time and returns more advanced results. This could be achieved with algorithms such as Genetic Algorithms or Particle Swarm Optimization, implemented in Python.
Built With
data
rpa
scraping
uipath
webscraping
wordpress | Intelligo | Never Stop Learning | ['Lakshay Garg'] | ['MLH - Best UiPath Automation Hack', 'First Time Hacker!'] | ['data', 'rpa', 'scraping', 'uipath', 'webscraping', 'wordpress'] | 5 |
10,026 | https://devpost.com/software/distance-hacks-submission-covid-heroes | We are applying for the best domain registered with domain.com prize (our domain name is covidheroes.space)
Inspiration
Covid has brought the best in our communities where individuals and organizations have rallied around those serving at the front lines; health care professionals, emergency personnel, police officers, mail carriers, grocery staff, and other essential professionals. We want to recognize those that are serving, inspire others to join, and become allies to those in need.
Small Acts, when performed by millions, create a movement! Through COVID Heroes, we want to spread the energy of our heroes and create a movement in our communities to ride through these times.
What it does
Displays Nominated Heroes and profiles their work (a user can Sign Up to Nominate a Hero)
Inspires Users to Signup as an Ally to Help the community
Allows Users to Signup to Request Help from an Ally
Users can Browse the List of Ally Requests and Ally Available (like a Craigslist)
How We built it
Using HTML, CSS, and JS, we built the interactive content of our website and styled it. To display the information of heroes, we used CSS Grids and Flexbox.
We host the website content and store user information in a database. We created the schema in PHPMyAdmin using XAMPP.
We used SQL to communicate with our database; update and retrieve information.
We used Table Filter JS library, which renders database content in an aesthetically pleasing way.
Challenges We ran into
Hero Image Display: When we first started sending images to our database, the images were showing up as an unreadable encoded string rather than an actual image. When we looked into our database, we saw that the image was being stored as a BLOB. After doing some research online, we concluded that in order for our images to render properly on the website, we would have to store the file path in the database instead. To make this possible, we wrote a PHP script that takes the uploaded file and converts it into a file path, which is then stored in the database.
Too Many Library Versions: When importing the Table Filter.js library into our application code, we ran into a plethora of issues. After looking at the installation folder, we found that our import link for the library was wrong. We changed the file, but that did not work either. After reading more about the Table Filter.js library, we found out that there was another version of it. After downloading this newer version, our code started to work.
Inconsistent size of Hero Grid: When we tried to display the CSS Grid of COVID-Heroes, the grid items were showing up in different sizes, which ruined the layout of our website. To fix this issue, we used the module attribute of CSS Grid, which allowed us to space the grid items out evenly. We also readjusted our max- and min-width values, which helped keep the grid items a consistent size.
Accomplishments that We're proud of
Based on our research, there is not a single website globally dedicated to recognizing the heroes of COVID-19.
Technically, we are very proud that we were able to create a fully functional PHP login system and database in 42 hours.
What We learned
Before Distance Hacks, Riya and I had never used PHP, SQL, CSS Grids, or the Table Filter.js library. When Distance Hacks kicked off on Friday evening, Riya and I started by brainstorming ideas that would enable us to be of help for the community during Covid.
After deciding on building COVID Heroes, we created a draft on paper for what our website would look like. We researched some of the technologies that we needed, decided to use PHP and XAMPP/ PHPMyAdmin for hosting our database and for the backend to retrieve the content from the database.
We then looked at the PHP documentation and learned the basic syntax and logic. With this new knowledge, we were able to create the login system for COVID-Heroes. When we first started to create the format for COVID-Heroes, we found that the basic CSS properties such as float and margin were not going to work. While researching alternate methods, we stumbled upon CSS grids.
We learned that it was best for us to let our requirements determine what technologies we need. We stayed focused on results and learned what we needed to know to get the task done at hand that was based on the draft of the Covid website we created on a piece of paper.
What's next for Distance Hacks Submission - COVID-Heroes
Technical
Create a mobile version of our website - In order to do this, we would need to implement React.js as we have currently designed our website for desktop users.
Add a feature in the search for heroes page that will bring up COVID-Heroes in nearby zip codes if the current zip code has no COVID-Heroes.
Add an “Inspired” Feature to the COVID-Heroes page, where users can click the “Inspired” icon if they like what a COVID-Hero has done.
Add advanced filters on the COVID-Allies page so users can search for people in a certain occupation.
Use the information that is tracked in the database to implement a profile feature for each user.
There are a lot more features that we want to add to COVID-Heroes to make it more user friendly (like login by Google), add error handlers, allow for more functionality, and make it more scalable (like host it into the Public Cloud).
Adoption
Release COVID-Heroes website to the public - start with adoption in Millburn. Share on Facebook, Nextdoor, and local chat groups, and encourage the community to nominate their heroes (heroes can be from any part of the World)
Create a Twitter and Facebook page for Covid-Heroes. Profile one Hero every day
Share broadly in the community with help from Millburn Township office, and Dr. Miron’s announcements
As we ensure website scales locally, we will expand outreach into other communities in NJ, other States, and Globally
Built With
css3
html
javascript
Try it out
github.com
covidheroes.space | COVID Heroes | During this time, many have stepped up to help those in need. Through COVIDHeroes, we recognize those that are serving, inspire others to follow in their footsteps, and become allies to those in need. | ['Tanish Tyagi', 'Riya Tyagi'] | ['MLH - Best Domain Registered with Domain.com'] | ['css3', 'html', 'javascript'] | 6 |
10,026 | https://devpost.com/software/dide-discord-integrated-development-environment | I've made discord bots before, but they were all simple and didn't any real functionality. I decided to make a discord bot that has a real purpose.
The bot is the ultimate development tool for discord users. It allows users to create files, edit them easily, host their html, and run their javascript. Users can develop websites and scripts without ever leaving discord.
I built it using Node.js. I used the discord.js library to interface with Discord. I used mongoosejs, which is an object modeler for MongoDB. I also used Express.js for the web server.
The biggest challenge I faced was coding alone; I should have joined another team. I also ran into challenges with MongoDB Atlas: when I finally started the bot on my server, Atlas rejected the database calls because the server's IP wasn't recognized. While this was frustrating at the time, it shows that MongoDB Atlas deeply cares about security. Giving the bot access to my website was also difficult, since I had to edit the nginx configuration, which I have very little experience with.
I am very proud of how it works. It is capable of every function I envisioned it having in the beginning. I wasn't originally planning on moving the bot to my server, but it was easier than I thought it would be.
I learned many methods for dealing with strings. The indexOf and split methods were the most useful for understanding the user's command. I learned more about nginx configuration and mongoDB. I gained experience with javascript too.
I will leave the bot running on several Discord servers I'm in. The code is open source, so anyone with a database and a computer can run their own version of it.
Using the free domain for a year offer, I bought generalintelligence.tech.
Built With
discord.js
express.js
javascript
js-interpreter
mongodb-atlas
mongoosejs
node.js
Try it out
github.com
discord.gg | DIDE, discord integrated development environment | My submission is a development environment for discord. Users can design and test websites and scripts. | ['Daniel Noguera'] | ['Best use of MongoDB Atlas'] | ['discord.js', 'express.js', 'javascript', 'js-interpreter', 'mongodb-atlas', 'mongoosejs', 'node.js'] | 7 |
10,026 | https://devpost.com/software/the-learning-curve | A
Built With
css
html | A | A | ['ayushi kate'] | [] | ['css', 'html'] | 8 |
10,026 | https://devpost.com/software/sylver-v5dz6k | Inspiration
My friends often ask on our group chats, "Hey, who's up for Netflix party tonight?" Of course, I'd like to watch too -- I just don't have Netflix...what if there were a platform that crossed between all of these different subscription services, and that was cool enough to get people to use it instead of Netflix party? There you have it -- Sylver was born.
What we've done
So far, we've prototyped the landing, catalog and theater pages, using YouTube for our Proof of Concept
What we used
We used Figma for wireframing, and HTML and CSS to build and prototype the actual site. We ran the prototype on Glitch.
Challenges
Time zones -- some members of the team are in time zones several hours ahead or behind, which naturally makes synchronous communication and work hard to do.
Achievements
We've got a strong vision for what Sylver can be for remote movie-watching, and we're proud of the fact that we've prototyped some of the essential functionality in such a brief window
What we've done
This isn't easy to do! But it's a lot of fun to work with others on a group project like this.
What's next
Up next, we'll focus on improving the UX and cleaning up what we've got, adding the other essential functionality, and acquiring a domain so we can put the site up and allow the rest of the world to watch Sylver!
Built With
css
html
Try it out
github.com
github.com | Sylver | The Remote Interactive Cinematic Experience of Tomorrow | ['Swadesh Sistla', 'Swaraag Sistla', 'Dylan Revsine'] | [] | ['css', 'html'] | 9 |
10,026 | https://devpost.com/software/covid-19-symptom-tracker-v12q5i | Inspiration
We are beginners, and weren't able to completely finish, but we learned so much from this, and that is what counts.
What it does
How we built it
Challenges we ran into
Accomplishments that we're proud of
What we learned
What's next for COVID-19 symptom tracker
Try it out
github.com
covid19symptomtracker.glitch.me | COVID-19 symptom tracker | A way to track your symptoms | ['Nayereh Hosseini'] | [] | [] | 10 |
10,026 | https://devpost.com/software/coronavirus-news-dashboard | Inspiration
I wanted to make a unified dashboard that shows new and incoming news articles, summarised and presented in one place. I also wanted a system that finds the key points and summaries of news articles, giving the lay person a good enough overview.
What it does
Summarises news articles and finds keywords.
How I built it
This project uses a lot of very interesting algorithms and logic to work.
I needed to make the summarisation system as fast and light as possible, which was a challenge but was made easier by the discovery of TextRank.
This is an extract of the code for TextRank created with the help of tutorials from various websites.
# Summarizer adapted with the help of tutorials from GeeksForGeeks and AnalyticsVidhya
import networkx as nx
from nltk.corpus import stopwords

def generate_summary(text, top_n=1):
    """Summarizer workings:
    -> Read and split the text into sentences
    -> Generate a similarity matrix
    -> Rank the sentences using networkx pagerank (the Google search algorithm used since 1998)
    -> Sort and pick the top sentences
    EXPLAINED IN GREATER DETAIL BELOW
    """
    stop_words = stopwords.words('english')
    summarize_text = []
    # read_article and build_sim_matrix are project helpers defined elsewhere
    sens = read_article(text)
    sen_sim_matrix = build_sim_matrix(sens, stop_words)
    sen_sim_graph = nx.from_numpy_array(sen_sim_matrix)
    scores = nx.pagerank(sen_sim_graph)
    ranked_sen = sorted(((scores[i], s) for i, s in enumerate(sens)), reverse=True)
    for i in range(top_n):
        summarize_text.append("".join(ranked_sen[i][1]))
    # Output the summary as a single string
    return ". ".join(summarize_text)
TextRank is simple.
The first step is reading in and splitting the text, continuing on from this it generates a similarity matrix.
By similarity I refer to cosine similarity or cosine distance.
Diving deeper into cosine similarity, it works like this:
Let's start off with two sentences: "The quick brown fox jumps over the lazy dog"
and "The fast fox hops over the relaxed dog"
The first step is removing "stopwords". Stopwords are words which contribute nothing to the sentence but are only there for the sake of grammar. These are words such as "the" and "and".
The next step is finding the vectors of these sentences.
Then we create a similarity matrix and apply TextRank.
TextRank is very similar to the Google PageRank algorithm. I chose it because of its speed and elegance. It uses networkx for this.
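A minimal sketch of the similarity step described above: strip English stopwords, turn each sentence into a word-count vector, and compare vectors with cosine similarity. This is a simplified stand-in for the build_sim_matrix helper referenced in the code, not the project's exact implementation.

```python
# Minimal cosine-similarity sketch using the two example sentences above.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sentences = [
    "The quick brown fox jumps over the lazy dog",
    "The fast fox hops over the relaxed dog",
]

vectors = CountVectorizer(stop_words="english").fit_transform(sentences)
sim_matrix = cosine_similarity(vectors)        # 2x2 matrix, diagonal entries are 1.0
print(round(sim_matrix[0, 1], 3))              # similarity between the two sentences
```

TextRank then treats this matrix as a graph of sentences and runs PageRank over it, exactly as in the generate_summary function shown earlier.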
The second part was the keyword/topic extraction system, called LDA, which stands for Latent Dirichlet Allocation. This was something I was new to but was fascinated by, so I watched videos to understand it properly.
https://www.youtube.com/watch?v=Cpt97BpI-t4
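For illustration, a hedged LDA sketch using scikit-learn is below. The topic count, the toy article list, and the "top 5 words per topic" choice are all assumptions; the dashboard's actual keyword-extraction code isn't shown in the write-up.

```python
# Hedged topic/keyword extraction sketch with Latent Dirichlet Allocation.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

articles = [
    "New vaccine trials show promising early results against the virus.",
    "Governments extend lockdowns as case numbers continue to rise.",
    "Hospitals report shortages of protective equipment and ventilators.",
]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(articles)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
words = vectorizer.get_feature_names_out()
for topic_idx, topic in enumerate(lda.components_):
    top_words = [words[i] for i in topic.argsort()[-5:]]   # five highest-weight words per topic
    print(f"topic {topic_idx}: {top_words}")
```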
Challenges I ran into
The NewsAPI only returns part of the text.
Accomplishments that I'm proud of
Learned about frontend work and making cards in css.
What I learned
Learned about APIs.
Web-design was a challenge since I don't have much experience.
What's next for Coronavirus news dashboard
Perhaps extend to better NewsAPI.
Built With
flask
networkx
newsapi
sklearn
Try it out
corona-dash-entry.adityakhanna.repl.co | CoronaNews | A news summariser and keyword extraction system to get an overview of whats happening with COVID-19 | ['Aditya Khanna'] | [] | ['flask', 'networkx', 'newsapi', 'sklearn'] | 11 |
10,026 | https://devpost.com/software/team-discover-qg7kn3 | The project is the winner of the EUvsVirus Health & Life Domain!
The problem our project solves
There are thousands of (potentially) infected people being monitored in hospitals in non-intensive rooms. These are cases that are not severe enough to be in ICU care, but if their condition worsens, they need to be relocated there. Nurses work around the clock to help and monitor them many times a day, but current practices have huge shortcomings.
There is a shortage of protective gear, and it is heavily overused, which puts nurses at high risk after so much close physical contact with patients.
Just as with the equipment, there is also a lack of human resources: it is critical that nurses stay healthy so that staff numbers do not drop.
Monitoring the vital signs of a patient takes a nurse about 5 minutes, without counting the changing of gear, which means only a small number of people can be inspected in an hour.
The measured data is rarely entered and stored online, which limits any further analysis.
What we bring to the table
We give nurses superpowers by doing 100 check-ups in the time it used to take to do one, all while staying far from the patient and out of risk.
Our solution enables a highly scalable patient monitoring system that minimizes physical contact between nurses and patients, which also reduces the shortage of protective gear. Instead of occasional visits, our device measures vital parameters in real time and uploads each patient's data to a central server. With the help of our dashboard, doctors and nurses can oversee a hundred times more patients, while our automatic alert functionality makes it possible to spot deteriorating cases instantly and reach quicker reaction times.
In the span of 48 hours, we have created a fully-functional pair of 3D printed glasses, allowing patients to initiate frequent measuring of their vital signs, all by themselves. These include body temperature, oxygen saturation and respiratory rate, the key values nurses regularly check on coronavirus patients.
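The dashboard's automatic-alert idea can be sketched as a simple threshold check over the three measured values. Every threshold below is an assumption chosen for illustration only; it is neither the team's actual rule set nor clinical guidance.

```python
# Minimal sketch of threshold-based alerting on the vitals measured by the glasses.
from dataclasses import dataclass

@dataclass
class Vitals:
    patient_id: str
    temperature_c: float
    spo2_percent: float
    respiratory_rate: int      # breaths per minute

def needs_attention(v: Vitals) -> list:
    """Return the reasons a nurse should be alerted; empty list means no alert."""
    reasons = []
    if v.temperature_c >= 38.0:
        reasons.append("fever")
    if v.spo2_percent < 94.0:
        reasons.append("low oxygen saturation")
    if v.respiratory_rate > 24:
        reasons.append("elevated respiratory rate")
    return reasons

print(needs_attention(Vitals("bed-12", 38.4, 92.0, 26)))   # all three thresholds tripped
```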
What we have done during the weekend
We improved the 3D printed prototype that we created on a previous weekend. We had to re-assemble the sensors and perform benchmark tests to measure their accuracy. We consulted with multiple medical professionals, on top of the ones we had already talked to, and came up with a better infrastructure for our solution. This time we also focused more on the supporting services, such as the dashboard, which we designed from scratch along with our pitch video.
Our solution’s impact to the crisis
Our medical device, enriched with our data analysis system, is designed without any specific infrastructural requirements, which allows universal usage in any country. Furthermore, hospitals, regions, or even countries can collaborate and share their data to find global patterns, which opens doors for new innovations to fight the virus together. Our modular sensor design and 3D printed case allow fast mass production and a short implementation time. From the medical view, we keep the medical staff at a safe distance to protect them from highly infectious patients. With our real-time, large-scale monitoring, nurses and doctors can filter out and deal with the most pressing cases while our system keeps an eye on every other patient.
We have talked with over 15 professionals, including multiple doctors, nurses, investors and manufacturers, and they were eager to hear how fast we could get this to hospitals. After further recognition and an award from EIT Health, multiple doctors reached out to us, offering their expertise and support, which gave us another huge confidence boost in the project.
The necessities in order to continue the project
For us to scale up this project, we need partners that can help us with mass manufacturing, as we lack experience in this area. For manufacturing, we would need a large quantity of sensors, plus injection-molding and assembly facilities. For fast delivery of the device, we also need the cooperation of hospitals, doctors, and nurses to help us with testing. Their feedback is invaluable for the success and impact of our product.
The value of our solution after the crisis
Although the parameters measured by our medical device are the most informative values for COVID-19 infected people, body temperature, oxygen saturation and respiratory rate are key indicators for illnesses under normal circumstances as well. Therefore, our wearable makes everyday routine check-ups faster even in normal situations.
Another key change would be digitalization. Many hospitals still don’t have a centralized medical system and database, while our solution could start a new wave of data analysis and speed-up innovative activities in the health industry.
The available data and its analysis can also boost cross-European collaboration by sharing trends and new findings between countries, leading to more efficient and smarter future detection measures.
Team
We have multiple years of experience in hackathons and real life projects. Our team combines a multi-disciplinary knowledge of full-stack development, machine learning, design and business development. We are double-degree EIT Digital students at top universities, including KTH Royal Institute of Technology, Aalto University, Technical University of Eindhoven and Technical University of Berlin.
Márton Elődi - EIT Digital MSc Student in Human-Computer Interaction Design - Several years of experience in software and product development
Kristóf Nagy - Electrical engineer and professional motion graphics designer
Péter Lakatos - EIT Digital MSc Student in Data Science - Experience in ML and business development
Miklós Knébel - EIT Digital MSc Student in Autonomous Systems - Experience in robotics, deep learning and automation
Péter Dános - EIT Digital MSc Student in Visual Computing - Expertise in 3D printing and design
Levente Mitnyik - EIT Digital MSc Student in Embedded Systems - Vast knowledge of electrical engineering, micro-controllers and embedded systems.
Built With
3dprinting
arduino
autodesk-fusion-360
infrared
microphone
pulsoximeter
Try it out
github.com | Team Discover - EUvsVirus Health & Life Domain Winner | We give nurses SUPERPOWERS! | ['Kristóf Nagy', 'Péter Dános', 'Miklós Knébel', 'Peter Lakatos', 'Levente Mitnyik', 'Márton Elődi'] | ['Grand Winner (Health & Life Domain)', 'Challenge Winner'] | ['3dprinting', 'arduino', 'autodesk-fusion-360', 'infrared', 'microphone', 'pulsoximeter'] | 12 |
10,026 | https://devpost.com/software/tracovid | Inspiration
The inspiration came to me all the way back in March, at the onset of COVID-19. When countries all around the world were beginning to realize the significance of the virus but were still reluctant to shut down, I saw this as a viable way for authorities to keep track of people who could potentially be infected, without having to employ a large number of people to track down each case by hand, which is time consuming and inefficient. I saw it as an easy and efficient way to identify potentially infected people in a community by determining whether they have been in contact with an infected person.
What it does
The way it works is by installing a QR code at the entrance and exit of crowded places such as places of worship, supermarkets, and of course hospitals. The user scans the QR code on entry and again at the exit. This logs their entry and exit times, which are uploaded to a database. As an added feature, the person also gets a reminder after spending 20 minutes in a place, advising them to leave to reduce exposure.
When a person checks in to a hospital and tests positive for the virus, the medical authorities can use a program I made to enter the person's identification number along with the number of days to search (an average person develops symptoms in 4-7 days). The user gets a file listing, by identification number, the people who could potentially be infected with the virus. Another program (included in my project) then updates the list of potentially infected people in the database so that they can be notified to stay at home.
How the program determines whether a person could be infected or not:
Upon entering the person's ID, the program checks each place in the community (within the specified date range) for that person's visits. If a place is found, the program checks for other people who could have been in the place during the same time frame (calculated from the entry and exit times).
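The overlap check described above can be sketched as follows. The in-memory dictionary is an assumption standing in for the Firebase database the app actually uses, and all IDs and timestamps are made up for illustration.

```python
# Sketch of the contact-search logic: find visitors whose entry/exit intervals
# overlap the infected person's stays within the date window.
from datetime import datetime

# place -> list of (person_id, entry_time, exit_time)
visits = {
    "supermarket-01": [
        ("A123", datetime(2020, 4, 1, 10, 0), datetime(2020, 4, 1, 10, 30)),
        ("B456", datetime(2020, 4, 1, 10, 20), datetime(2020, 4, 1, 10, 50)),
        ("C789", datetime(2020, 4, 1, 12, 0), datetime(2020, 4, 1, 12, 15)),
    ],
}

def potential_contacts(infected_id, start, end):
    contacts = set()
    for place, log in visits.items():
        infected_stays = [(t_in, t_out) for pid, t_in, t_out in log
                          if pid == infected_id and start <= t_in <= end]
        for t_in, t_out in infected_stays:
            for pid, o_in, o_out in log:
                # two intervals overlap if each starts before the other ends
                if pid != infected_id and o_in < t_out and t_in < o_out:
                    contacts.add(pid)
    return contacts

print(potential_contacts("A123", datetime(2020, 3, 28), datetime(2020, 4, 4)))   # {'B456'}
```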
Implementation:
Since databases can get quite large over time and become practically unmanageable, it is recommended that states (within a country) be assigned specific area codes, so that given a person's identification number the program can identify the area and only search places within that area, improving the efficiency and practicality of the application.
How I built it
I built the app using Xcode and swift. I built the extra programs with python.
Challenges I ran into
I am a lone team member, so it was hard for me to manage the backend too, and I had a lot of learning to do before I actually began working on the project. I also had some difficulty providing notifications, as this was the first time I was integrating notifications into an app. But my biggest difficulty was that I didn't have a team: I missed the team-making session due to time-zone limits, and that was a big bummer.
Accomplishments that I'm proud of
I'm proud of being able to finish the application along with the complementary programs on time!
What I learned
I learned quite a lot, especially in terms of backend. I am proud to say that I can now efficiently work with Firebase and other backend services, something I couldn't do before.
Improvements:
Once again, being a sole team member, I had to devote a big portion of my time to learning how backend services and notifications work, and that cut into my ability to work on something I am good at: UI. I wasn't able to build a really good UI because of time constraints, and I am really sad about that.
What's next for TraCovid
What's really interesting about TraCovid is that this can be used for any future pandemics or health hazards that we might face, so deploying something like TraCovid can be really efficient in slowing down the spread of any virus, helping us to combat future health crises.
Built With
firebase
python
swift
Try it out
github.com | TraCovid | Effectively tracking down people who could potentially be infected in a community. | ['Mishaal Kandapath'] | [] | ['firebase', 'python', 'swift'] | 13 |
10,026 | https://devpost.com/software/clashroom | Inspiration
We wanted to have something school apps don't have; collaboration. We took our inspiration from Habitica and created a concept of teams, each with their own health, competing for the last one standing through a series of quizzes. This would help improve the environment and add the fun back into school.
What it does
This web app helps students collaborate better in small teams selected by their teacher. Each unit is a new round, with many battles along the way. Each team has a unique experience: correct answers keep you safe, while wrong answers lower your team's health. All of this is stored in a database using SQLAlchemy and Flask. Users have different roles, giving different pages for teachers, students, and admins.
How we built it
We used Heroku to host a web server that uses Flask as a backend. We collaborated on this project using git and GitHub. The backend was built using SQLAlchemy and Python Flask while the front end was built using HTML, CSS, and Javascript.
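A hedged sketch of how the teams, health points, and user roles described above might be modeled with Flask-SQLAlchemy is shown below. The table names, column names, and default values are assumptions for illustration, not Clashroom's actual schema.

```python
# Illustrative Flask-SQLAlchemy models for teams with shared health and role-based users.
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///clashroom.db"   # placeholder database
db = SQLAlchemy(app)

class Team(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(64), nullable=False)
    health = db.Column(db.Integer, default=100)        # wrong quiz answers lower this

class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    username = db.Column(db.String(64), unique=True, nullable=False)
    role = db.Column(db.String(16), default="student")  # "student", "teacher", or "admin"
    team_id = db.Column(db.Integer, db.ForeignKey("team.id"))
    team = db.relationship("Team", backref="members")

with app.app_context():
    db.create_all()
```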
Challenges we ran into
This was the first time using git for all team members, so learning it in such a short time was a significant hurdle. In addition, we spent a considerable chunk of our time this weekend setting up the web server on Heroku. Once again, none of us had much experience with Heroku (or web servers in general), so that was a learning experience for all of us.
Accomplishments that we are proud of
We are quite proud that we were able to get a database running with SQLAlchemy. Despite the confusing code, we were able to understand it well enough to implement a working database; this, we think, is our proudest achievement. Some of us had minimal experience with web design, and given the short time frame, we were able to learn the necessary basics of HTML, CSS, and Javascript together to create a well-formatted, decent-looking web application.
What we learned
Similarly to the above, we learned how to code with SQLAlchemy and Python Flask. In addition, those of us who knew close to nothing about web design were able to pick up basic HTML, CSS, and Javascript quite easily. In such a short time, stress can easily hinder progress; the important things we learned were how to work efficiently under stress and how to divide our time accordingly.
What's next for Clashroom
In the future, we can see Clashroom have a working chat, better graphics, a way for students to "buy" stuff from a store, and many other features that make this app a lot more interactive.
Built With
css
flask
html
javascript
python
sqlalchemy
Try it out
github.com
tag-dh.herokuapp.com | Clashroom | Learn through collaboration and competition in an online classroom environment | ['Kevin Wang', 'George Zhang', 'Avaneesh Kulkarni'] | [] | ['css', 'flask', 'html', 'javascript', 'python', 'sqlalchemy'] | 14 |
10,026 | https://devpost.com/software/classes-on-minecraft | Inspiration - online schooling is hard since we have no one directly teaching us. Also my mother is a teacher and I hear all the time ow hard teaching online is.
What it does - It teaches is you various subjects you learn in school.
How we built it - we built the app on minecraft and we coded commands for the instructions to appear and the equipment to appear. We also coded the building to appear.
Challenges we ran into - we had no idea how to use minecraft education edition and its version of javascript was different from what we learned.
Accomplishments that we're proud of - the project actually works and we figured out how the coding works.
What we learned - we learned how to use minecraft education edition.
What's next for Classes on Minecraft - we want to add more classes and make it public for everyone to use.
Built With
javascript | Classes on Minecraft | An interactive online learning program. | ['Emily Sellers', 'Patience Mares'] | [] | ['javascript'] | 15 |
10,026 | https://devpost.com/software/generating-electricity-by-walking-g8c1po | The primary hardware components used.
A bunch of piezoelectric sensors!
An inside view of the shoe. 17 piezoelectric sensors can be seen in this side. There is an additional 16 sensors on the other side.
The top down view of the shoe (without the styrofoam)
Summary
The average American walks approximately 3,500 steps per day; each step creates mechanical energy, energy which ends up being wasted and dispersed into the environment. Tapping into this wasted energy opens the door to supplementing the user's actions. Varying numbers of piezoelectric sensors were used to generate this energy, which gets stored in a LiPo battery through the aid of the BQ25570 chip. My design used 33 piezoelectric sensors, which generated approximately 0.27 volts, or 23.625 mAh, after just 60 steps. If a user wore this shoe and walked the average number of steps per day, they would generate 1,378.125 mAh! In addition, I developed an add-on that adds an Arduino Nano with an accelerometer and gyroscope sensor. The data from these sensors is run through a neural network that predicts the behavior the user is doing; for example, if the user is jumping, it will predict they are jumping.
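The daily figure follows from linearly extrapolating the 60-step measurement to 3,500 steps, as the quick check below shows; the only assumption is that the per-step yield stays constant.

```python
# Quick check of the scaling arithmetic above.
charge_per_60_steps_mah = 23.625
steps_per_day = 3_500

charge_per_step = charge_per_60_steps_mah / 60     # 0.39375 mAh per step
print(charge_per_step * steps_per_day)              # 1378.125 mAh per day
```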
How I built it
The hardware component of this project has one layer of styrofoam on the top and bottom. This protects the piezoelectric sensors and increases comfort for the user. Then there are two layers of cardboard, each side of the cardboard has 8-9 piezoelectric sensors, connected in series. The two cardboard pieces are connected in parallel. There is then a thin piece of paper between the two cardboard pieces, to make sure no wires short out when they touch each other.
The software uses Keras with TensorFlow. I created a Google Cloud Virtual Machine instance, which runs a Python script that reads in data about the user's motion and then, with Keras and TensorFlow, builds a model of the data that can be used for prediction.
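A hedged sketch of such a Keras model is below: it maps a window of accelerometer and gyroscope readings (six channels) to a predicted behavior. The window length, layer sizes, class list, and the random placeholder data are all assumptions; the project's actual architecture isn't described in the write-up.

```python
# Illustrative activity-classification model over accelerometer + gyroscope windows.
import numpy as np
from tensorflow import keras

WINDOW = 50                                               # assumed samples per prediction window
CLASSES = ["walking", "running", "jumping", "standing"]   # assumed behavior labels

model = keras.Sequential([
    keras.layers.Input(shape=(WINDOW, 6)),                # 3 accelerometer + 3 gyroscope axes
    keras.layers.Flatten(),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(len(CLASSES), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Random placeholder data standing in for logged sensor readings.
x = np.random.randn(200, WINDOW, 6).astype("float32")
y = np.random.randint(0, len(CLASSES), size=200)
model.fit(x, y, epochs=2, verbose=0)
print(CLASSES[int(model.predict(x[:1]).argmax())])
```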
Challenges I ran into
Developing the hardware of the shoes took the bulk of my time. I have never used Piezoelectric sensors before, so I had to learn how to use them. In addition, it took me a while to optimize the energy outputted from the shoe. The green BQ25570 chip helped me do that though.
Accomplishments that I'm proud of
This is the world's most efficient shoe that generates electricity! Other solutions mostly use different means to generate electricity. My solution used Piezoelectric sensors, and then the BQ25570 chip to control the flow of electricity from the two capacitors on the chip to the battery. This minimizes the electricity wasted.
What I learned
I learned a lot! In general, I am better at software related projects, this project, being a hardware-first project, increased my skills in dealing with hardware. I got better at soldering, understanding the mathematical calculations of voltage and current, Piezoelectric sensors, Arduinos and various hardware compounds. On the software side, this was my first time using Google Cloud. I am now comfortable in creating complex Virtual Machines in the cloud that can run various advanced scripts.
What's next for Generating Electricity By Walking
I want to add a wifi/Bluetooth chip into the Arduino Nano, this will enable the data from the accelerometer and gyroscope to transfer to a web server in the cloud without the need of a wire. With this advancement, I could develop a mobile/web app that tracks various foot-related fitness activities, including jumping, running and walking.
Built With
google-cloud
keras
piezoelectric
tensorflow | Generating Electricity By Walking | Generate a lot of electricity just by walking! | ['Tarun Ravi'] | [] | ['google-cloud', 'keras', 'piezoelectric', 'tensorflow'] | 16 |
10,026 | https://devpost.com/software/distance-hacks-2020-filter | Inspiration - Filters on the apps that I like to use
What it does - It gives you and the space around you a purple tint with smiley faces floating off your head, and the MLH logo and distance hacks in the 2 top corners.
How I built it - I used the Spark AR platform to build my filter.
Challenges I ran into - The challenges I ran into were figuring out how to create some of the components of the filter. Some of these weren't as simple as the others, and you had to follow the steps correctly to get the output you wanted.
Accomplishments that I'm proud of - I'm proud of myself for persisting through and completing the project even though it took me a long time to understand how to create the filter.
What I learned - I learned how to use Spark AR, and how to create a filter.
What's next for Distance Hacks 2020 Filter - The next steps for my filter is to be able to put a location on the bottom of the screen, be able to change the color filter and finally be able to switch the hackathon you are at based on the time and date.
Built With
patches
sparkar | Distance Hacks 2020 Filter | I created a filter on Spark AR to help promote the Distance Hacks Hackathon. | [] | [] | ['patches', 'sparkar'] | 17 |
10,026 | https://devpost.com/software/ultimate-arcade | Inspiration
We were inspired by a problem that we saw in our own lives--becoming exponentially more bored with each passing day, week, and month of quarantine.
We found that our enjoyment of video games as a pastime was something we had in common, so we decided to work together to bring back some of our favorites!
What it does
Our arcade has a home page, with 5 arcade games we programmed ourselves, including Doodle Jump, a Maze, Flappy Bird, an ET Raid game, and Pong.
How I built it
In the end, we used HTML, CSS, and JS throughout our project. When needed, I looked up how to do specific things, since this was almost the first time most of us had used these languages.
Challenges I ran into
One of the biggest challenges we faced was the limitations of other languages--specifically, I had originally written my Flappy Bird code in Java (Swing), and when it came time to pool our programs together, I found out last night that it would be incredibly difficult to connect my Java program to the other games written in HTML. In the end, I rewrote the program, learning copious amounts of JS and HTML along the way. Then, this morning, I was able to make a last-minute Pong game as well!
Aside from this, an issue we all faced was being able to set aside time in our own hectic lives and communicate effectively among the four of us to make our ideas come to fruition.
Accomplishments that I'm proud of
I am proud of the amount of progress we were able to produce in these short days, even after our setback with the Java platform and the fact that for 3/4 of us, this was our first Hackathon.
What I learned
I learned a lot about the security issues that come with older languages such as Java, and the need for newer languages to emerge. I was able to learn more about how Java works and why it is considered a platform-independent language, and I learned enough about JS and HTML (languages I was not too familiar with) to make two games!
The four of us also learned about teamwork, collaboration, and the difficulties that may arise when trying to work on a project of this scale with a team.
What's next for Ultimate Arcade!
We hope to continue working on this website even after this Hackathon, because we feel that through MLH, we were able to find a group of pretty like-minded individuals who are proud of all we were able to accomplish. We want to continue working on this project, hoping to add even more games!
Built With
css
html5
javascript
Try it out
DistanceHacks-Project--athulya-ss.repl.co | Ultimate Arcade! | During this quarantine, we found that many individuals our age were becoming increasingly bored. Consequently, we decided to take it upon ourselves to build our very own arcade, packed retro games! | ['Nihal Saxena'] | [] | ['css', 'html5', 'javascript'] | 18 |
10,026 | https://devpost.com/software/stock-portfolio-allocation | Our prediction values were in yellow, and the real values were in orange for AAPL.
Inspiration
In creating this project, we decided to pursue a mission to provide more accessible and easy-to-use data without having to pay for it. This incentivizes more people to invest in the stock market, allowing companies to raise the money they need and everyone to enjoy long-term capital gains on their holdings. This is a win-win for everyone, as current investors, new investors, and companies all benefit and make more money as new investors enter the markets.
What it does
Our application prompts the user for a ticker symbol they want to investigate, with automatic completion while searching for a ticker. The request is sent to our server, then to our stock analysis code, and the results are displayed as a response in the UI. This machine learning service reaches over 94.7% accuracy with its trained models. It takes data over 10 years, splits it into training and testing sets, and gives the user a nicely time-stamped prediction of future prices based on the adjusted close price.
How we built it
We used Flutter to build the UI so that both iOS and Android have access to the application. We used Flask and the requests library to process requests and responses to and from the server. Finally, we used Python and its TensorFlow library for the machine learning and data analysis on the stock's adjusted close price. Using training and test data from the past ten years, we are able to predict and display a graph for the selected stock.
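A hedged sketch of the prediction step is below: train a small Keras model to map a window of past adjusted-close prices to the next day's price, with a chronological train/test split. The lookback length, the LSTM architecture, and the synthetic price series are assumptions; the team's exact model isn't shown in the write-up.

```python
# Illustrative next-day price prediction from a window of past adjusted closes.
import numpy as np
from tensorflow import keras

LOOKBACK = 30                 # assumed number of past days fed to the model

# Placeholder random-walk series standing in for ~10 years of adjusted closes.
prices = np.cumsum(np.random.randn(2500)).astype("float32") + 100.0

x = np.array([prices[i:i + LOOKBACK] for i in range(len(prices) - LOOKBACK)])
y = prices[LOOKBACK:]
split = int(0.8 * len(x))     # chronological train/test split, no shuffling

model = keras.Sequential([
    keras.layers.Input(shape=(LOOKBACK, 1)),
    keras.layers.LSTM(32),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(x[:split, :, None], y[:split], epochs=2, verbose=0)

predicted_next = model.predict(x[split:split + 1, :, None])[0, 0]
print(predicted_next)
```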
Challenges we ran into
Our biggest challenge was attempting to connect the server with the algorithm, as we got many errors trying to run the algorithm as a function. We still face some of those issues, but we have tried to suppress the errors as best as possible.
Accomplishments that we're proud of
We are very proud of our UI execution and server's HTTP networking, as well as our stock prediction algorithm. All were highly complex pieces of code with tons of errors, but some patience and Stack Overflow searches helped us reach the end product we reached currently.
What we learned
We all learned to troubleshoot errors and how to architect HTTP requests and servers to build the most suitable setup for a given project. The UI work taught us a lot about design and implementation, and the machine learning taught us the nature of errors and how to adjust code to work with other pieces of code.
What's next for Stock Price Prediction Mobile Application
Next, we hope to add an interactive graph, so you can see all future values, and then we hope to include more advanced algorithms and options to trade right in our application. This would take Stock Portfolio Allocation to the next level in terms of accessible algorithms for general stock traders and investors.
Built With
dataframe
flask
flutter
numpy
pandas
python
scikit-learn
tensorflow
ui/ux
Try it out
github.com | Stock Price Prediction Mobile Application | An easy and accessible mobile application for general investors and traders to view machine learning predicted values for the next days. | ['Upsham Naik', 'Shrey Jain', 'Shashank Vemuri'] | [] | ['dataframe', 'flask', 'flutter', 'numpy', 'pandas', 'python', 'scikit-learn', 'tensorflow', 'ui/ux'] | 19 |
10,026 | https://devpost.com/software/distancehacksresearcher | Inspiration
I am a High School Student who regularly has to write notes on a variety of topics, it is time consuming and takes a lot of effort. To address this problem I developed an automatic researcher.
What it does
It grabs large amounts of information (text) from the internet on a topic and generates a text file and a UI with the research notes; these notes are a summary of all the text.
How I built it
I used the Wikipedia API and a TextRank API in order to search the internet and summarise information on a topic of any size.
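A minimal sketch of that fetch-then-summarise flow is below. It uses the `wikipedia` package to pull article text; the sentence scoring is a crude word-frequency heuristic standing in for the TextRank summariser the author used, and the `research` function name, sentence count, and output path are all made up for illustration.

```python
# Illustrative fetch-and-summarise sketch for an AutoResearcher-style tool.
import re
from collections import Counter
import wikipedia

def research(topic, n_sentences=5, out_path="notes.txt"):
    text = wikipedia.page(topic).content
    sentences = re.split(r"(?<=[.!?])\s+", text)
    freqs = Counter(re.findall(r"[a-z]{4,}", text.lower()))     # crude keyword weights
    scored = sorted(
        sentences,
        key=lambda s: sum(freqs[w] for w in re.findall(r"[a-z]{4,}", s.lower())),
        reverse=True,
    )
    notes = "\n".join(scored[:n_sentences])
    with open(out_path, "w", encoding="utf-8") as f:
        f.write(notes)                                           # save the generated notes
    return notes

print(research("Photosynthesis")[:200])
```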
Challenges I ran into
The user interface was hard to develop, especially as a novice frontend programmer, but I managed to build the UI using appJar.
Accomplishments that I'm proud of
The data is read easily and summarised into a text file quickly, I am happy with the execution of my idea.
What I learned
How to use textRank and summarise large quantities of data.
What's next for AutoResearcher
I plan on expanding my team and trying to further develop the quality and structure of notes developed.
Built With
python
textrank
Try it out
github.com | AutoResearcher | Enter a Topic - Generate Notes, Fast and easy | ['Vedaangh Rungta'] | [] | ['python', 'textrank'] | 20 |
10,026 | https://devpost.com/software/combatcoronavirus-hzj965 | This app allows users to see the number of cases and deaths in their US state and worldwide via flutter.
APIs Used:
https://thevirustracker.com/free-api?global=stats
and
https://covidtracking.com/api/states
.
Problems Faced:
Getting the coordinates to translate into the correct state via Geocoder, grabbing data from the API with Flutter, and fetching from an API in general.
Built With
dart
java
kotlin
objective-c
swift
Try it out
github.com | CombatCoronavirus | This app allows users to see the number of cases and deaths in their US state and worldwide via Flutter. | ['Dylan Revsine', 'Yogesh Seenichamy', 'RoboticsQveen27 Lavin'] | [] | ['dart', 'java', 'kotlin', 'objective-c', 'swift'] | 21 |
10,026 | https://devpost.com/software/crowd-catalog | Inspiration
In this time of uncertainty, we wanted people to be aware of which stores and restaurants were safe to go to. People with conditions that make them vulnerable to the coronavirus want to make sure that a store is following social distancing guidelines and is staying clean before they go.
What it does
Our website gathers information from your community to help you make the best choice for your safety. We do this by asking for reviews for stores, and allowing you to look at them. Reviews answer questions such as:
Are people wearing masks?
How clean is it?
Are social distancing guidelines being followed?
How we built it
By using the Google Maps Javascript and Places APIs and a MySQL database, we were able to let people review stores near them.
Challenges we ran into
The biggest challenge for us was definitely time. We switched over to this project from another project that was not viable, and that left us with less than 24 hours to create this. Because of the time constraint, we were unable to implement viewing other people's reviews.
Accomplishments that we're proud of
We're proud of having a working google maps screen and the ability to get stores near a user's location and let people review them.
What we learned
We learned how to use the Google Maps API, the Spectre CSS framework, and git.
What's next for Crowd Catalog
The biggest thing is to implement viewing other people's reviews, which shouldn't be too difficult but we simply ran out of time. For other things we want to do in the future, see the README in our github repo.
Built With
css
google-cloud
html
javscript
php
spectre
Try it out
crowdcatalog.atwebpages.com
github.com | Crowd Catalog | It's like waze, but for the pandemic | ['Matthew Hershfield', 'Sharay Gao'] | [] | ['css', 'google-cloud', 'html', 'javscript', 'php', 'spectre'] | 22 |
10,026 | https://devpost.com/software/coronaconnect-svfo6h | . | . | . | ['Irfan Nafi'] | [] | [] | 23 |
10,026 | https://devpost.com/software/pneumoscan-an-ai-radiology-tool-for-covid-19-pandemics | Fig. 1: Map of Covid19 cases around the world (as of 4/30/2020)
Fig 2: Top 10 countries with most COVID-19 deaths
Fig 3: Current chest X-ray diagnosis vs. novel process with CovidScan.ai
Chart of wait-time reduction of an AI radiology tool (data from a simulation study reported in Mauro et al., 2019).
Fig. 5: Process of CovidScan development
Demo of web-app:
https://www.cv19scan.site/
(Please use Internet Explorer or Firefox; our web app currently doesn't support Chrome)
Dataset:
For the data analytics of the COVID-19 pandemic, we used data collected by the Johns Hopkins University Center for Systems Science and Engineering, updated on 4/30/2020.
For the chest X-ray detection models, we combined 2 sources of data:
The first source is the RSNA Pneumonia Detection Challenge dataset available on
Kaggle
which contains several deidentified CXRs with two class labels: pneumonia and normal.
The COVID-19 image data collection repository on
GitHub
is a growing collection of deidentified CXRs from COVID-19 cases internationally. The data is collected by Joseph Paul Cohen and his fellow collaborators at the University of Montreal.
In total, our dataset consists of 5,433 training data points, 624 validation data points and 16 test data points.
Inspiration
What will be the working situation for medical staff in hospitals during and after the COVID-19 pandemic? How can medical staff quickly and securely log in and perform a PPE safety check while dealing with a huge influx of patients in critical condition? How can we automate the process of COVID-19 diagnosis so precious time can be saved for both medical doctors and patients? How can our solution for hospitals later be scaled and implemented as an essential tool for automating daily hospital operations even after the COVID-19 pandemic is over?
To answer these core questions, we did some background research to identify the main challenges in order to develop the best solutions around those:
COVID-19 Pandemic:
Fig. 1: Map of Covid19 cases around the world (as of 4/30/2020). Our team created the map based on data collected by the Johns Hopkins University Center for Systems Science and Engineering.
As we see from the map above and the pie chart below, COVID-19, previously known as the novel Coronavirus, has killed more than 63,860 people and infected over 1,067,061 people in the United States alone, topping all other countries around the world. This number is continuing to grow every day.
Fig. 2: Top 10 countries with most COVID-19 deaths.
The three main problems occurring in the healthcare system during the pandemic are:
1. Confidentiality:
As you may see on the news, hospitals all over the U.S. (New York, Chicago, California…) and in other countries (Italy, Spain…) are flooded with a huge influx of patients in critical condition. With the increasing workload for medical staff, patients' confidential information may be put at risk if unauthorized personnel can hack into the electronic medical record system. Thus, there is a need for a fast and secure method for medical staff to log in to the electronic medical record platform, so that staff can move quickly with patient information input and still remain compliant with HIPAA (Health Insurance Portability and Accountability Act). Badge scanning is a highly secure solution to this problem.
2. PPE Safety Check:
According to the CDC, during the COVID-19 pandemic all healthcare workers should follow strict guidelines and protocols from OSHA regarding wearing PPE. All of the PPE prevents contact with the infectious agent, or body fluid that may contain the infectious agent, by creating a barrier between the worker and the infectious material. Gloves protect the hands, gowns or aprons protect the skin and/or clothing, masks and respirators protect the mouth and nose, goggles protect the eyes, and face shields protect the entire face. N95 masks are the PPE most often used to control exposure to infections transmitted via the airborne route. Therefore, checking medical staff's compliance with the PPE safety protocol is especially crucial during this pandemic.
3. Long wait time for COVID-19 chest X-ray result:
Fig 3: Current chest X-ray diagnosis vs. novel process with CovidScan.ai
Patients can first be screened for flu-like symptoms using a nasal swab to confirm their COVID-19 status. After 14 days of quarantine for confirmed cases, the hospital draws the patient's blood and takes the patient's chest X-ray. The chest X-ray is a gold standard for physicians and radiologists to check for the infection caused by the virus. X-ray imaging allows the doctor to see the lungs, heart and blood vessels to help determine whether the patient has pneumonia. When interpreting the X-ray, the radiologist looks for white spots in the lungs (called infiltrates) that identify an infection. This exam, together with other vital signs such as temperature or flu-like symptoms, also helps doctors determine whether a patient is infected with COVID-19 or another pneumonia-related disease. The standard procedure of pneumonia diagnosis involves a radiologist reviewing chest X-ray images and sending the result report to a patient's primary care physician (PCP), who then discusses the results with the patient.
Fig 4: Chart of wait-time reduction of AI radiology tool (data from a simulation study reported in Mauro et al., 2019).
A survey by the University of Michigan shows that patients usually expect the result to come back 2-3 days after a chest X-ray test for pneumonia. (Crist, 2017) However, the average wait time for patients is 11 days (about two weeks). This long delay happens because radiologists usually need at least 20 minutes to review an X-ray while the number of images keeps stacking up after each operating day of the clinic. New research has found that an artificial intelligence (AI) radiology platform such as our CovidScan.ai can dramatically reduce the patient's wait time, cutting the average delay from 11 days to less than 3 days for abnormal radiographs with critical findings. (Mauro et al., 2019) With this wait-time reduction, patients in critical cases will receive their results faster and receive appropriate care sooner.
What it does
Using the power of pretrained machine learning models from open source, CovidScan.ai was created as a full-scale AI tool for radiology clinics and hospitals. It can automate the process of secure log-in and PPE safety checks for medical staff, and assist radiologists in detecting signs of COVID-19-related pneumonia on chest X-ray images with high accuracy. This cutting-edge tool can be used to reduce the workload for clinicians and speed up patients' wait time for pneumonia lab results in this critical time of the COVID-19 pandemic.
Fig 5: Deployment process of pretrained ML model to the web-app
As explained in the figure above, the CovidScan web-app includes 3 main AI components:
1. ID Badge Scanner:
For security purposes, only authorized personnel can access the web app, which contains patients' confidential health information (name, date of birth, chest X-ray, medical history…). Hence, the web app uses a pretrained model to scan the medical staff's badge and grant them access to the software.
2. PPE Safety Check:
Due to hospitals' and clinics' strict guidelines on PPE usage, especially during this COVID-19 outbreak, the web app asks the medical staff member whether he/she is in direct contact with patients for chest X-ray taking. If yes, then the web app uses a pretrained AWS model to check the medical staff's PPE and verify that the staff member follows the safety protocols to minimize any exposure to the disease. If the medical staff member passes both the security and safety checks, he/she can move on to the next step.
3. COVID-19 Chest X-ray Testing:
In the last step, the medical staff take patients' chest X-ray images using the specialized machine and then upload the images to the web app's database to test for signs of COVID-19 infection or bacterial pneumonia.
An AI system can review, highlight the pneumonia signs in, and classify each X-ray image in less than 10 seconds (compared to the radiologist's 20 minutes mentioned earlier), and it can do that same task effortlessly for 24 hours without taking a break. This time savings is especially critical amid the COVID-19 pandemic. At this spreading rate, it would be overwhelming for radiologists to review a massive number of chest X-ray images of potentially COVID-19-infected patients. With the assistance of CovidScan.ai, suspected signs of pneumonia are automatically highlighted for the radiologists, speeding up the process of chest X-ray review. Therefore, more COVID-19 positive-tested patients will get their results back faster and receive appropriate care sooner to prevent the spread of the virus.
How we built it
Employee Badge Scanner:
We developed this feature using the open-source Python library Pyzbar. We wrote a jQuery script that sends snapshots from the live camera feed to the inference model at the backend. It can read one-dimensional barcodes and QR codes present on the employee's ID badge. We implemented this feature to work with a snapshot of the employee's ID badge.
Link:
https://pypi.org/project/pyzbar/
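As a rough illustration of what the backend decoding step could look like (a sketch under assumptions, not the project's actual code; the snapshot filename is hypothetical), Pyzbar can decode a badge image in a few lines of Python:
```python
import cv2
from pyzbar.pyzbar import decode

def read_badge(image_path):
    """Decode any 1D barcodes or QR codes found in a snapshot of an ID badge."""
    image = cv2.imread(image_path)            # load the snapshot sent from the camera feed
    badges = []
    for symbol in decode(image):              # one result per detected barcode / QR code
        badges.append({
            "type": symbol.type,                   # e.g. 'QRCODE' or 'CODE128'
            "data": symbol.data.decode("utf-8"),   # the encoded badge payload
        })
    return badges

if __name__ == "__main__":
    print(read_badge("badge_snapshot.png"))   # hypothetical filename
```
The decoded payload would then be checked against the list of authorized staff before granting access.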
PPE Safety Check:
We developed this feature using an open-source TensorFlow model for face mask detection. The backbone network has only 8 convolutional layers, and the total model has only 24 layers when the location and classification layers are counted. The dataset is composed of the WIDER Face and MAFA datasets. We wrote a jQuery script that sends snapshots from the live camera feed to the inference model at the backend. It works with live footage from any camera and detects people not wearing a face mask.
Link:
https://github.com/AIZOOTech/FaceMaskDetection
Chest X-ray Classification:
For this feature, we developed a PyTorch model. The goal is to draw class activation heatmaps over suspected signs of pneumonia and then classify chest X-ray images as "Pneumonia" or "Normal". We use a dataset available on Kaggle consisting of 5,433 training data points, 624 validation data points and 16 test data points. For the model, we load the pretrained ResNet-152 available from Torchvision for transfer learning. ResNet-152 provides state-of-the-art feature extraction since it is trained on the large ImageNet dataset. ResNet-152, as the name suggests, consists of 152 convolutional layers. Due to its very deep architecture, the layers are arranged in a series of residual blocks. These residual blocks use skip connections to help prevent the vanishing gradients that are a common problem in networks as deep as ours. ResNet also provides a global average pooling layer, which is essential for our attention layer later on. For the attention layer that draws the heatmap, we use the global average pooling approach proposed in Zhou et al.; the global average pooling layer explicitly enables the convolutional neural network (CNN) to have remarkable localization ability. We achieve 97% accuracy on the training dataset and 80% on the testing dataset.
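A minimal PyTorch sketch of this transfer-learning setup (the freezing strategy, optimizer and learning rate below are illustrative assumptions, not the project's exact training code):
```python
import torch
import torch.nn as nn
from torchvision import models

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = models.resnet152(pretrained=True)     # ImageNet weights for feature extraction
for param in model.parameters():
    param.requires_grad = False               # freeze the convolutional backbone (assumption)

# Replace the final fully connected layer with a two-class head: Normal vs. Pneumonia.
model.fc = nn.Linear(model.fc.in_features, 2)
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

def train_step(images, labels):
    """One optimization step on a batch of preprocessed chest X-ray tensors."""
    model.train()
    images, labels = images.to(device), labels.to(device)
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```
The class activation heatmap described above would additionally combine the weights of this final layer with the last convolutional feature maps, as in Zhou et al.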
Web development: The trained weights of the deep learning models are deployed in a Django backend web app, CovidScan.ai, while the minimal front end of this web app is built using HTML, CSS, jQuery and Bootstrap. In a later stage, the web app will be deployed and hosted on a Debian server.
Technical Requirements:
The packages required for this project are as follows:
Torch (torch.nn, torch.optim, torchvision, torchvision.transforms)
Django
Numpy
Matplotlib
Scipy
PIL
Tensorflow
jQuery
Challenges we ran into
This hackathon project was a very different experience for us, and it challenged us throughout with AWS SageMaker. This was the first time we all worked with AWS SageMaker and created endpoints for a pretrained TensorFlow model. Also, understanding curated models and determining their accuracy was a bit challenging for us. Even after successfully deploying the model's endpoints, calling Amazon SageMaker model endpoints using Amazon API Gateway and AWS Lambda gave us a very hard time.
Accomplishments that we're proud of
We managed to finish the project in the limited time of two weeks, working in our free time from school and work. We kept striving to submit on time while learning and developing at the same time. We are really satisfied with and proud of our final product for the hackathon.
What we learned
Through this project, we learned to implement complicated image-recognition deep learning models from the AWS marketplace. We also learned the process of developing a small data science project, from finding a dataset to training the deep learning model and finally deploying and integrating it into a web app. This project couldn't have been done without the efforts and collaboration of a team with such diverse technical backgrounds.
What's next for CovidScan:
In the next 2 months, our plan is:
We will raise funds to invest more in the R&D process.
We will partner with research labs to collect more data and find hospitals to test our solution. One of our members has published his newly collected dataset in this open-source GitHub repository:
https://github.com/nihalnihalani/COVID19-Detection-using-X-ray-images-/
Regarding our R&D, we plan on improving the performance of the platform, preferably by reading more scientific literature on state-of-the-art deep learning models implemented for radiology.
We also plan to add a bounding box around the suspected area of infection on top of the heatmap to make the output image more interpretable for radiologists. We are working to implement the multi-label COVID-CXR model on our dataset to improve our application. This model was published by the Artificial Intelligence Research and Innovation Lab at the City of London's Information Technology Services division and has an accuracy of 0.92, precision of 0.5, recall of 0.875, and AUC of 0.96.
Much of the literature mentions developing an NLP model on radiology reports together with other structured variables such as age, race, gender and temperature, and integrating it with the computer vision model of the chest X-ray to provide an expert radiologist's level of diagnosis. (Irvin et al., 2019; Mauro et al., 2019) We may try to implement that as we move further with the project in the future.
With the improved results, we will publish these findings and methodologies in a peer-reviewed journal so that they can be reviewed by expert computer scientists and radiologists in the field.
Eventually, we will expand our classes to include more pneumonia-related diseases such as atelectasis, cardiomegaly, effusion, infiltration, etc., so that this platform can be widely used by radiologists for general diagnosis even after the COVID-19 pandemic is over. Our end goal is to make this a scalable tool that can be used in radiology clinics across the globe, even in rural areas with limited internet access like those in Southeast Asia or Africa.
References:
Crist, C. (2017, November 30). Radiologists want patients to get test results faster. Retrieved from
https://www.reuters.com/article/us-radiology-results-timeliness/radiologists-want-patients-to-get-test-results-faster-idUSKBN1DH2R6
Irvin, Jeremy & Rajpurkar, Pranav & Ko, Michael & Yu, Yifan & Ciurea-Ilcus, Silviana & Chute, Chris & Marklund, Henrik & Haghgoo, Behzad & Ball, Robyn & Shpanskaya, Katie & Seekins, Jayne & Mong, David & Halabi, Safwan & Sandberg, Jesse & Jones, Ricky & Larson, David & Langlotz, Curtis & Patel, Bhavik & Lungren, Matthew & Ng, Andrew. (2019). CheXpert: A Large Chest Radiograph Dataset with Uncertainty Labels and Expert Comparison.
Kent, J. (2019, September 30). Artificial Intelligence System Analyzes Chest X-Rays in 10 Seconds. Retrieved from
https://healthitanalytics.com/news/artificial-intelligence-system-analyzes-chest-x-rays-in-10-seconds
Lambert, J. (2020, March 11). What WHO calling the coronavirus outbreak a pandemic means. Retrieved from
https://www.sciencenews.org/article/coronavirus-outbreak-who-pandemic
Mauro Annarumma, Samuel J. Withey, Robert J. Bakewell, Emanuele Pesce, Vicky Goh, Giovanni Montana. (2019). Automated Triaging of Adult Chest Radiographs with Deep Artificial Neural Networks. Radiology; 180921 DOI: 10.1148/radiol.2018180921
Wang, L., & Wong, A. (2020, March 30). COVID-Net: A Tailored Deep Convolutional Neural Network Design for Detection of COVID-19 Cases from Chest Radiography Images. Retrieved from
https://arxiv.org/abs/2003.09871
Built With
matplotlib
numpy
pil
pytorch1.0.1
torchvision0.2.2
Try it out
gitlab.com
www.cv19scan.site | CovidScan-An AI Radiology Tool For COVID-19 Pandemic | CovidScan.ai is developed to be a secured AI platform with the purpose to assist radiologists with fast and accurate pneumonia dectection amid this COVID-19 pandemic. | ['Moksh Nirvaan', 'Nihal Nihalani', 'Vi Ly'] | ['Second Place', '2nd Place - Website Feature'] | ['matplotlib', 'numpy', 'pil', 'pytorch1.0.1', 'torchvision0.2.2'] | 24 |
10,026 | https://devpost.com/software/corona-stats-website | Website interface
Inspiration
I wanted to build a simple website that can be accessed by people easily to get information about the Corona Virus pandemic.
What it does
It displays the statistics of corona Virus pandemic for the selected country and date.
How I built it
I built it using
vanilla javascript, leaflet.js
and RapidAPI.
Challenges I ran into
The major challenge I faced was to understand the concept of APIs and how to use it to access information in JSON format.
Accomplishments that I'm proud of
I learnt how to use APIs and some additional Javascript libraries. I also have successfully revised my front-end skills.
What I learned
I learnt how to create a website that displays relevant information in visually beautiful ways and can be accessed by all.
What's next for Corona Stats website
Next, I aim to integrate the
IBM Watson chatbot
so that people can ask the chatbot for relevant information about coronavirus, like symptoms, availability of medicines, etc.
Built With
javascript
leaflet.js
rapidapi
Try it out
www.covidmap.tech | Corona Stats website | Its a website that gives information about the statistics of Corona virus pandemic according to specified country and chosen date. | ['Avhijit Nair'] | [] | ['javascript', 'leaflet.js', 'rapidapi'] | 25 |
10,026 | https://devpost.com/software/support-notes-for-seniors-v2 | home
how it works
send
Track: Health
Inspiration
Two years ago, I was in quarantine for a month due to a disease. During that time, I faced severe loneliness & anxiety, so get-well-soon cards from friends meant the world to me because it showed that I wasn't alone. Knowing that thousands of
senior citizens are now experiencing social isolation
, putting them at risk of many
chronic health conditions
, inspired me to create this project.
What it does
You submit a letter through the site.
The letters then get printed out & sent to meal service centers for the elderly.
The letters are distributed into the food baskets to reach senior citizens.
Proof of Concept
Experience
: I started
Notes for Support
, a website with a very similar premise but targeting an entirely different group (COVID-19 patients & healthcare workers). So far,
I've printed & sent 2,200+ letters across 30 hospitals
in the US.
Connection
: A close family member is a volunteer at one of the senior meal service centers in CA & has confirmed that a program like this would be much welcomed.
Research
According to a fifty year study conducted by Harvard University, human connection is the single most important component of happiness. That's why the concept of sending physical, individual notes is so powerful.
How I built it
I first built out a digital prototype in PowerPoint. I then built the site with node.js & some other programming languages. I'd already had experience using this framework, so it wasn't a huge challenge, but thinking about the general format was quite difficult.
Impact
There is something so powerful in receiving a personal, physical letter -- it reminds you that you're not alone. This is something that I've experienced myself & through my other project (
Notes for Support
), had thousands of patients & healthcare workers experience as well. Loneliness can kill while a personal letter can save a life.
Challenges, Accomplishments & Lessons
The biggest challenge was definitely the time constraint -- I found out about this hackathon late & would have loved to add more features. However, I'm proud of pulling an all-nighter to finish this project. I learned to just go for it instead of contemplating whether I'd have enough time.
Budget
Raised $1,500 USD for Notes for Support, a partner program.
What's Next + Value after the Crisis
Getting a domain & putting up the site.
Printing & sending the notes received to senior meal service centers!
Even before the pandemic,
senior citizens were always at risk of health issues associated with loneliness
. It is just that shelter-in-place has exacerbated the issue. After the crisis, this program will still be continued to support the elderly population.
Design: lachlanjc.
Built With
html5
javascript
node.js
Try it out
notesforseniors.now.sh | Support Notes for Seniors [V2] | You send a letter to a senior. We'll print & send to meal service centers for distribution. | ['gina c'] | [] | ['html5', 'javascript', 'node.js'] | 26 |
10,026 | https://devpost.com/software/divoc-e0fywm | Flow chart depicting the working of the whole system.
Homepage of the application
Teacher Login
Student Login
Teacher Dashboard
Student Dashboard
Canvas as a blackboard
Asking question in middle of a lecture
Tab Change alert to gain students attention to the lecture
Inspiration
There is an old saying,
The Show Must Go On
, which kept me thinking about a way to connect teachers and students virtually, allow teachers to deliver lectures from home, and develop a completely open-source, free platform different from the major paid platforms.
What it does
This website is a completely open-source and free tool to use.
This website, whose link is provided below, allows a teacher to share his/her live screen and audio with all the students connected to the meeting via the Meeting ID and password shared by the teacher.
Also this website has a feature of Canvas, which can be used as a blackboard by the teachers.
In addition, this website contains a doubt box where students can type in their doubts or answers to the teacher's questions while the lecture is going on.
This website also has a tab-counting feature, in which each student's tab-change count is shown to the teacher. This helps ensure that every student is paying attention to the lecture.
Also, the teacher can ask questions in the middle of the lecture, similar to how a teacher asks questions in a classroom.
How I built it
1) The main component in building this is the open source tool called WebRTC i.e. Web Real Time Communication. This technology allows screen, webcam and audio sharing between browsers.
2) Secondly Vuetify a very new and modern framework was used for the front end design.
3) Last but not least, NodeJS was used at the backend to write the APIs which connect to and interact with the MongoDB database.
Challenges I ran into
The hardest part of building this website was finding an
open source
tool to achieve screen and audio sharing. This matters because the Covid crisis has affected most countries' economies due to lockdowns. Hence, it is of utmost importance that schools and colleges do not need to pay to conduct lectures.
Accomplishments that I'm proud of
I am proud of developing the complete project from scratch, and of the fact that anyone who wants to connect with students and teach them can use it freely.
What I learned
I learned a new technology called WebRTC, which I believe is going to help me more than I expect in the future.
What's next for Divoc
Integrating an exam module and allowing teachers to take exams from home.
Built With
mongodb
node.js
vue
webrtc
Try it out
divoc.herokuapp.com | Divoc | DIVOC - An Antidote For - COVID | ['Sanket Kankarej'] | [] | ['mongodb', 'node.js', 'vue', 'webrtc'] | 27 |
10,026 | https://devpost.com/software/jarvis-fmlk01 | Inspiration
The inspiration for this project came from the movie Iron Man, where AI can talk to humans with much greater bandwidth and so smoothly that it's hard to distinguish a bot from a human.
What it does
It is a chatbot which talks with the user. It is built with Google Dialogflow and it can tell you jokes, recite poems, and more! While it's still basic it is learning constantly and different features are being added and tested.
How I built it
It was built with Google Dialogflow and it was trained with different user inputs and outputs. Now it can do small conversations.
Challenges I ran into
I ran into challenges such as using it with a Raspberry Pi. I wanted to embed it in the Raspberry Pi, and I learned a lot about the board and its limitations.
Accomplishments that I'm proud of
I'm proud of getting to know the Raspberry Pi and Python. I learned the Python language and completed my first Raspberry Pi project.
What I learned
I learned a lot of the Python programming language, and how to use the Raspberry Pi (it was my first time).
What's next for Jarvis
Next I'll try to make it smarter and integrate other platforms such as Teachable Machine and other powerful boards. I'm trying to make an AI assistant which interacts with you smoothly and, instead of relying only on mics and speakers, attaches cameras too so that it can see what you're seeing, learn things visually, and perform more complex tasks such as scientific calculations in laboratories.
Built With
google-cloud
Try it out
bot.dialogflow.com | Jarvis | Jarvis an AI speech program made using Google cloud platform. | ['Aditya Krishnan Mohan'] | [] | ['google-cloud'] | 28 |
10,026 | https://devpost.com/software/aceso-the-first-feasible-sarscov2-test-trace-network | Track your Virus tests & trace statistics.
Have conversations with a personal AI driven health assistant.
Scan the QR code in order to activate the digital Health ID.
As a government, test lab or other official entity, participate in the network and create automated policies with smart contracts
The Problem.
It is generally known that extensive and widespread testing as well as contact tracing to identify infection chains is crucial for overcoming the SARSCov2 pandemic and gradually returning to normality.
Currently, however, even though the testing itself is not that complicated, the logistics around testing and investigation (infection tracing) of positively tested patients require lots of effort and man-hours and go beyond the borders of available capacities. The related processes are just not automated and digitized. As a consequence, lots of infections are not reflected in statistics, making it extremely difficult to cope with the virus and to track its spread and isolate cases, and lockdowns are inevitable in order to stay within intensive care capacity limits.
For contact tracing, the EU has decided to follow the track of controversial software architectures and apps like PEPP-PT, or now the "decentralized" approach DP3T, which not only cause privacy issues but also don't deliver any direct value-add to the users. Another issue is that tracking without integrated, optimized and automated end2end test rollout management still leads to data lagging behind the real-time state and an inefficient value chain.
To sum up, there is still no feasible end2end test management and contact tracing platform connecting governmental institutions, test labs, healthcare facilities and citizens in order to automate the prioritized rollout of tests and trace back infection chains after positive test results, without significantly attacking the personality rights of citizens.
How we solve it.
We leverage the properties of the blockchain technology, artificial intelligence and state of the art cryptography to provide an end2end SARSCov2 testing and tracing network.
But how does this work?
The solution consists of three parts:
a permissioned blockchain network for governing test rollouts / logistics and access to personal data in case of infections
a dashboard for government, testing labs and healthcare facilities
a unique Health assistant and "passport"-type health id for citizens in form of a mobile app.
Additionally, personal data is encrypted, hashed and stored off-chain in a decentralized cloud database, while solely a smart contract contains the key to decrypt and display the data to responsible entities in case of infections and direct contact with infected persons.
ACESO Healthpass
Each citizen is provided by the government with a unique digital Health ID, which he maintains in an interactive app that keeps an anonymized log of relevant events, for instance nearby contact with another person or visiting a public location such as a supermarket. Additionally, citizens get other value-added services, like conversations with a chatbot (assistant) or seeing the current load of people at public places. The health pass collects all the logs anonymously, mapped to the non-personal blockchain health pass ID, and warns the citizen if he behaves too riskily, for instance by having lots of contact with other people.
Sensors used for Contact Tracing
Instead of deploying expensive gateways, we believe there is already a mass of options available. For the purpose of not tracking personal data we do not use GPS sensors, but rather diverse options available on public places.
For people to people tracking our app leverages bluetooth technology and available WiFi Networks to register check-ins at public places.
Additionally, at public places, so-called sound beacons can be used by registering a signal through the public speakers (for example in supermarkets). We are also currently training a neural network, using IBM's Watson Studio, to identify different public places based on sound recognition.
As we want to be an open source solution, we want to offer a plug and play sensor interface for easily incorporating additional sensors. The deployment of new sensors has to be voted by the network in the blockchain.
ACESO Test & Trace Network
The blockchain network, at which governments, healthcare facilities and testing laboratories can take part, governs automated policies for data access and testing logistics through smart contracts empowered by Machine Learning and optimization algorithms in order to achieve ideal capacity planning and real time data transfer.
Even though data is anonymized outside the recognized infection chains, it can still serve as a very valuable data source for epidemiologic research.
How this will impact the crisis.
ACESO Test & Trace network provides an ideal trade-off between value add for citizens, personal data protection, and effective insights and testing / infection chain management for governments. With the help of this technology, governments can isolate the spread of the virus by real-time capacity planning and logistic automation and quickly deploy and measure new policies whilst citizens stay informed and can stay safe with the help of their personal health assistant. Additionally it could be extended to manage Intensive Care Capacities cross-border through the whole European Union.
What we have achieved during 2 days.
During this weekend we have not only elaborated the idea, but also deployed a full-scale blockchain network with already running smart contracts for privacy rules and an off-chain encrypted database, as well as created the first fully functional prototype of the ACESO health pass for citizens with an AI-driven chatbot interface and all mentioned sensors for contact tracing.
How we want to continue and what could the solution bring after the crisis.
We want to get in contact with public entities as well as healthcare facilities to establish an open-source project with a long-term goal beyond the testing & tracing use case during the pandemic. With the help of the digital health pass for each EU citizen, we could automate cross-border patient information transfer and inter-country healthcare research knowledge transfer through smart contracts on a self-governed blockchain network.
Built With
fabric
hyperledger
ibm-cloud
ibm-watson
kubernetes
node.js
react
react-native
Try it out
github.com | ACESO - the first feasible SARSCov2 Test & Trace Network | ACESO digitizes and automates rollouts of extensive testing and contact tracing in compliance with personality rights through a self governed blockchain network. | ['Tin Stribor Sohn'] | [] | ['fabric', 'hyperledger', 'ibm-cloud', 'ibm-watson', 'kubernetes', 'node.js', 'react', 'react-native'] | 29 |
10,026 | https://devpost.com/software/rsa-encryption-and-decryption-bot-f90iws | I created this project with a focus on cryptography and understanding an algorithm that deals with a common cipher. I created this bot to simulate the RSA cipher, which uses the power of modular congruence to create secure transmission of a message. Two prime numbers are kept as private keys, and keys of the power of k, the result e from congruence, and the product (p*q) are public keys. These keys are encoded and return at the end of the encoding sequence. The keys e, k, and the private keys p and q are then solved for modular congruence to reveal the message s. This is secure because the prime number keys p and q make a very large product, and an outside client trying to find possible values would take almost years to find combinations of p and q from the product alone. This is why the cipher is effective. I used this bot to fully understand RSA encryption. I used Java through Eclipse to create the bot, in a text-based fashion through the console. I had multiple challenges in creating test cases and understanding how to debug, considering how many references to methods there are. I managed to visualize all the test cases by writing down all the data and methods and managed to debug effectively this way.
-video demo in google drive
Built With
eclipse
java
Try it out
github.com
drive.google.com | RSA Encryption and Decryption Bot | Algorithm that uses RSA encryption techniques to encode messages | ['Shreyan Das'] | [] | ['eclipse', 'java'] | 30 |
10,026 | https://devpost.com/software/boda-safe | Inspiration
I wanted to fix the problem of insecurity in the motorbike 'Bodaboda' taxi, which has been a problem here. Some riders were not qualified or authorized to ride and they would also steal from customers at night.
What it does
It allows a customer/passenger to verify the rider before boarding the bike. This can be used to gain trust in the mode of transport again.
How I built it
I built a portal which can be used by Saccos to register riders; afterwards, the registered riders are 'Safe' to board. A user can then verify them by sending their plate numbers to a shortcode, and they'll get a reply indicating whether they're safe or not.
Challenges I ran into
I was using new technology (Blockstack), which I was not very familiar with, and also the limited time.
Accomplishments that I'm proud of
I built a working MVP which could do the basic tasks listed.
What I learned
I learned how to securely authenticate a user using Blockstack.
What's next for Boda Safe
I plan to improve the user interface, add more features, and try it out in real life.
Built With
africa's-talking
blockstack
css3
express.js
javascript
mongodb
mongoose
node.js
react
Try it out
github.com | Boda Safe | Creating a safe space for motorbike taxis | ['Patrick Nyatindo'] | [] | ["africa's-talking", 'blockstack', 'css3', 'express.js', 'javascript', 'mongodb', 'mongoose', 'node.js', 'react'] | 31 |
10,026 | https://devpost.com/software/draw-it-d3u8x9 | Inspiration
I was inspired to create Draw it because of the difficulties most of my teachers had faced in teaching their students virtually. One of my classmates asked a question: "could you explain how to draw and label energy diagrams?" My teacher shared his screen and opened a virtual drawing pad. Then, my teacher proceeded to draw contorted lines using the trackpad on his computer. This process not only made it hard for my teacher to explain but made it difficult for students to understand the material and the challenging concepts. This inspired me to create Draw it.
What it does
Draw it is a tool incorporating computer vision that enables teachers to draw on their screen by moving their writing implement (pen/pencil) in the air. This allows teachers to swiftly and effortlessly create diagrams or demonstrate challenging concepts to their students during the relatively short class time.
How I built it
I used Python and the OpenCV computer vision library to build Draw it. After coding the simple interface, I used OpenCV to detect blue-colored objects (for the project to work, the implement must be blue). The program then captures and draws the motion of the blue-colored object.
Challenges I ran into
I struggled a lot with finding the best data structures to work with the OpenCV library. After a lot of trial and error, I decided that a deque was the best way to handle the data.
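A minimal sketch of this approach (assuming a blue marker tip tracked in HSV space; the color bounds and radius threshold below are assumed values to tune, not the project's exact parameters):
```python
import cv2
import numpy as np
from collections import deque

# Rough HSV bounds for a blue implement tip (assumed; adjust for your lighting).
LOWER_BLUE = np.array([100, 120, 70])
UPPER_BLUE = np.array([130, 255, 255])

points = deque(maxlen=512)   # recent tracked tip positions
canvas = None

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.flip(frame, 1)
    if canvas is None:
        canvas = np.zeros_like(frame)

    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_BLUE, UPPER_BLUE)
    # OpenCV 4.x returns (contours, hierarchy).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    if contours:
        c = max(contours, key=cv2.contourArea)
        (x, y), radius = cv2.minEnclosingCircle(c)
        if radius > 10:                       # ignore small blue noise
            points.appendleft((int(x), int(y)))

    # Connect consecutive tracked positions to form the stroke.
    for i in range(1, len(points)):
        cv2.line(canvas, points[i - 1], points[i], (255, 0, 0), 4)

    cv2.imshow("Draw it", cv2.add(frame, canvas))
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```
The deque keeps only the most recent positions, which bounds memory use while still letting consecutive points be joined into smooth strokes.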
What's next for Draw it
I hope to enhance Draw it by using a machine learning model to perform image recognition and detect writing implements such as pens and pencils. Furthermore, I hope to expand the codebase to provide more tools for teachers and an even better distance learning experience for their students during COVID-19!
Built With
opencv
python
webcam | Draw it | A distance learning tool for teachers to better teach their students during COVID-19 | ['Veer Gadodia'] | ['Honorable Mention', 'Best Community Building Hack'] | ['opencv', 'python', 'webcam'] | 32 |
10,026 | https://devpost.com/software/healthmate-9vjhqt | Inspiration
So that's an interesting story. My grandmother always used to forget about the medicines prescribed to her. My late grandfather passed away because of a lack of punctuality in taking medicines and attending checkups.
Also, when the medical team reached his place, he could have survived if they had known the right details about him and the right person to contact.
What it does
The app that I built in #HackAtHome is a solution to this problem.
HealthMate helps you never forget your meds or pills again! This is a must-have medication tracker and reminder app for your health. It helps us take care of our loved ones at risk by reminding them from time to time so they don't miss doses or overdose out of forgetfulness.
HealthMate works well for medication management, tracking vitamins, supplements, birth control, conditions, medication, symptoms, nutrition, activity, daily vitals, therapies, pregnancy, baby symptoms, notes, etc.
Reasons why this is the next startup
CREATE A CARE PLAN -- Used as a treatment & pill organizer
Don’t create just a medication list. Add Drugs, Meds, Vitamins, Minerals, Natural Remedies, Therapies, Fitness & Nutrition as part of your care plan
Set dose form, dose color & set medicine reminders quickly
SCHEDULE MEDICATION REMINDERS
Set pill reminders, appointments, refill reminders, vitamin reminders or supplement trackers
Mark medication reminders as Taken or Snooze
MULTIPLE USES: TRACK SYMPTOMS, TRACK GOALS, HABITS, HEADACHES, MIGRAINES
ADD YOUR CARE TEAM
Save Caregiver information for future reference
Add Caregiver pill alerts for missed dosages, have someone help you stay on track of your goals
Allow Caregivers to view their care plan.
Giving a point of contact for emergency teams.
Save & Share Health Appointments
Chatbot based GCS
SOS button and voice-based actions
Essential for managing any chronic illness or health symptoms: Chronic Pain, Cancer, Fibromyalgia, IBD & IBS (Bowel Movements), Urine, Diabetes Care, COPD, Epilepsy, Psoriasis, Rheumatoid Arthritis, Fibromyalgia, Multiple sclerosis (MS Symptoms), IPF. Also for mental health: ADHD, Headache Tracker, Depression, Anxiety, PMS & other disorders
How I built it
The UI is built using flutter,
RadarAPI for location
DialogFlow By Google for Chatbot
Python-Django based backend
Voiceflow based actions for SOS situations on google and amazon
The idea is that, using Google Assistant and Amazon Alexa actions, when an SOS is called the device responds and gives the option to call either the local emergency number or anyone on the care team. Actions are taken accordingly.
The database is on MONGODB.
Challenges I ran into
Many APIs don't have documentation for Flutter.
Accomplishments that I'm proud of
The UI components.
What I learned
Using various APIs.
What's next for HealthMate
Some of the features need to be implemented...
DOMAIN.COM
letsteamuptofinishcorona.online
Built With
dialogflow
django
flutter
google-geocoding
heroku
mongodb
python
radar
Try it out
github.com | HealthMate | HealthMate is more than a health symptom tracker or symptoms diary app. HealthMate is your daily all-in-one Health & Wellness App that helps you measure, learn, and improve your health. | ['Arunaben Gandhi'] | [] | ['dialogflow', 'django', 'flutter', 'google-geocoding', 'heroku', 'mongodb', 'python', 'radar'] | 33 |
10,026 | https://devpost.com/software/canvas-calendar-transfer | Inspiration
More students are using online learning platforms to do their work. A popular platform is Canvas. It is often difficult to work with Canvas, as it has its own personal calendar which cannot be exported. This is the same for other applications. With teachers using different applications, it is often difficult for students to keep all the assignments together.
What it does
This program is meant to transfer the calendar activities from Canvas and other learning platforms to a personal calendar (currently Google Calendars), allowing for more convenience for the student user. As of now, the program supports Canvas and Google Classroom.
How I built it and What I learned
I learnt how to use UiPath Studio. Through a series of clicks, typing, and OCR, the program transfers the information from Canvas to a personal calendar. I learned a lot about UiPath and Workflow. I learnt how to use Computer Vision and OCR. I also learned how to create variables.
Challenges I ran into
It was difficult to set up UiPath. Once there, I had to learn about the different activities. Also, working with times and organising my workflow were difficult.
What's next for Canvas Calendar Transfer
I hope to make it more robust and less glitchy. It's also very specific to my machine right now; I want to make it broader and more general. Also, I'd like to make it export in bulk and make it compatible with more programs. In addition, I would like to make a nice UI, which I was unable to do due to time constraints.
Built With
rpa
uipath
Try it out
github.com | Canvas Calendar Transfer | Retrieves Assignments from Different Platforms and Adds it to a Google Calendar | ['Shreya C'] | [] | ['rpa', 'uipath'] | 34 |
10,026 | https://devpost.com/software/solar-powered-iot-based-charge-point-solution-ev-charging | Block Diagram
Inspiration
We are a motivated team of environmentalists and concerned about green energy
What it does
Solving Challenges in Current EV Eco-System
How I built it
In Easy EDA
Challenges I ran into
EV Charging System
Accomplishments that I'm proud of
Building a cost-effective charging solution
What I learned
Coordination and teamwork with teammates
What's next for Solar Powered IoT Based Charge Point Solution EV Charging
Built With
easyeda | Solar Powered IoT Based Charge Point Solution EV Charging | Solving Challenges in Current EV Eco System | ['Nagendra Gouthamas'] | [] | ['easyeda'] | 35 |
10,026 | https://devpost.com/software/virtual-health-checkup-modelling-of-coronavirus-technoband | Technoband
Software Modelling of Future conditions of CoronaVirus
Inspiration
Daily surge in cases, health conditions of citizens pushed me to work hard
What it does
It predicts the curve of future conditions of any country with respect to the available data set.
How I built it
I built it using the software that has been mentioned.
Challenges I ran into
Lots of challenges, but I overcame them and got the results as expected.
Accomplishments that I'm proud of
That I did something which satisfies and helps at least one citizen; then the chain will follow.
What I learned
I learned new software and skills.
What's next for Virtual Health Checkup|Modelling of CoronaVirus|Technoband
If it succeeds, I want to make it open source.
Built With
arduino
c++
embedded
matlab
python
webex | Virtual Health Checkup|Modelling of CoronaVirus|Technoband | Future prediction with Virtual checkup online and Smart electronic band | ['Shreyansh Pagaria', 'Maor Mashiaxch'] | [] | ['arduino', 'c++', 'embedded', 'matlab', 'python', 'webex'] | 36 |
10,026 | https://devpost.com/software/data-visualization-and-crowd-analysis-using-ml-techniques-uihq1o | User App - Home Screen
Splash Screen
Website
Admin App - Authentication
Admin app - Limit Entry
In recent years, the human population has been growing at an extreme rate, and this growth has indirectly increased the incidence of crowds. There is a lot of interest in scientific research on public service, security, safety and computer vision for the analysis of crowd mobility and behavior. During a crowd crisis, large crowds experience confusion, resulting in pushing, mass panic, stampedes or crowd crushes, and loss of control. To prevent these fatalities, automatic detection of critical and unusual situations in dense crowds is necessary. People visiting various malls and students studying in universities face a lot of difficulty because of the rush. So far there has not been any significant improvement to tackle this problem effectively. Our project aims to tackle this issue by providing a system for collecting, processing and visualizing crowd behaviour. The end result of our system is a web and app user interface where users can browse through a range of information related to the crowd distribution and crowd movement within a campus and a city.
This project combines the power of WiFi devices, big data, machine learning and data visualization techniques to promote smart living and management. The main idea of this project is to analyze CCTV feeds in real time and track the WiFi probe requests of users to automatically sense crowd distribution and provide statistical data to users. Big data is used to analyze and predict the level of crowdedness at various places in the city and on campus. The system also captures crowd movement to effectively locate critical and crowded spots. Furthermore, it monitors crowd conditions and waiting times at important locations such as bus stops, railway stations, airports, religious places and the campus canteen, and uses artificial intelligence techniques to predict the upcoming crowd. For example, people can check the current crowdedness and waiting time at bus stations and make smarter decisions about their mobility. Through big data analysis, people can not only compare crowdedness but also avoid peak hours via AI prediction.
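As a rough illustration of the probe-request idea only (not the project's code; it assumes a wireless interface already in monitor mode, and the interface name is hypothetical), counting unique nearby devices with scapy could look like this:
```python
from scapy.all import sniff
from scapy.layers.dot11 import Dot11, Dot11ProbeReq

seen = set()

def handle(pkt):
    """Count unique source MAC addresses from probe requests as a rough crowd proxy."""
    if pkt.haslayer(Dot11ProbeReq):
        mac = pkt[Dot11].addr2        # sender of the probe request
        if mac and mac not in seen:
            seen.add(mac)
            print(f"devices seen so far: {len(seen)}")

# Requires a wireless interface in monitor mode; "wlan0mon" is a placeholder name.
sniff(iface="wlan0mon", prn=handle, store=False)
```
A real deployment would also have to account for MAC address randomization and aggregate counts anonymously, as described above.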
Crowd detection and density estimation from crowded images have a wide range of applications such as crime detection, congestion management, data-driven smart campuses, public safety, crowd abnormality detection, visual surveillance, urban planning, bus stations, restaurants and various other places. Nowadays, crowd analysis is one of the most active research areas and trending topics in computer vision. As a result, it can definitely help in making emergency controls and appropriate decisions for security and safety. This system can be used for detecting and counting people, estimating crowd level, and raising alarms in the presence of a dense crowd.
The objectives of this project are: develop an automated system for collecting and processing input data; develop algorithms for observing the crowd size in various places and predicting the crowd; raise alarms in the case of overcrowding; design and build a database for data storage; and build an intuitive app and web user interface for visualizing crowd distribution and crowd movement information.
Thus, our project handles the difficult issue of counting the number of objects in images, a universal, fundamental problem in computer vision. While both humans and computer vision algorithms are highly error-prone at this task, our algorithms and IoT devices combine the best of their abilities to deliver high-accuracy results at relatively low cost, providing an effective solution for this imminent problem. All of this runs in the back end of a web interface and an app that allow an authorized person to sign in and gather details about the targeted location from anywhere.
Built With
ai
android-studio
app
cctv
css3
firebase
html5
java
javascript
json
location
maps
python
pytorch
regression
sdc-net
twilio
website
xml
Try it out
github.com
he-s3.s3.amazonaws.com
drive.google.com | Gods Eye | Crowd monitoring and forecast using ML techniques. Real time analysis of crowd, traffic and crops by using AI in CCTV footages | ['Rithik Jain', 'Vimal Kumar', 'Vijay Krishnaa', 'Godi SaiTeja'] | [] | ['ai', 'android-studio', 'app', 'cctv', 'css3', 'firebase', 'html5', 'java', 'javascript', 'json', 'location', 'maps', 'python', 'pytorch', 'regression', 'sdc-net', 'twilio', 'website', 'xml'] | 37 |
10,026 | https://devpost.com/software/visualize-ar | Inspiration
Augmented reality, or AR for short, is one of the most talked-about technology trends in construction. Using advanced camera and sensor technology, AR combines one's physical surroundings with computer-generated information and presents it in real time. While the technology has been used in video games for years, this "augmented" experience is now making waves in construction, offering immense opportunities to improve the project lifecycle. By combining digital and physical views, augmented reality is helping construction teams drive more efficiency, accuracy, and overall confidence in their projects. The global AR market is expected to grow to $90 billion by 2020. Rather than replacing workers in the field, AR can be used to greatly enhance the ways humans and digital machines work together. As the technology continues to mature and gain adoption, augmented reality in construction will become an invaluable tool and has the potential to change the future of building.
What it does
Visualize AR projects the pre-built 3D model onto the physical construction plan and provides different angles and methods to look at and visualize it.
How I built it
It is built in Unity3D, using the echoAR cloud. The image target is a random one chosen from Google.
Challenges I ran into
Integrating echoar with Unity3d.
Accomplishments that I'm proud of
We could make the Model responsive to the given input.
What I learned
Integrating unity and echoar.
What's next for Visualize AR
Make it a real-time app for particular construction projects using BIM; since Unity has Unity Reflect, we could use that as well and host our models on either echoAR or Reflect.
Built With
c#
echoar
unity
Try it out
github.com | Visualize AR | Feel before you build | [] | [] | ['c#', 'echoar', 'unity'] | 38 |
10,026 | https://devpost.com/software/asha-hgt9kb | NA
Built With
adobe-illustrator
firebase
flutter
google-web-speech-api
ibm-watson | Asha | Our application enables you to connect with a psychologist to seek mental support. We aim to remove the stigma behind mental health. "Mental illness" is real and it's okay to seek mental support. | ['Shashwat Agarwal'] | [] | ['adobe-illustrator', 'firebase', 'flutter', 'google-web-speech-api', 'ibm-watson'] | 39 |
10,026 | https://devpost.com/software/the-earth-datasphere | The Earth Datasphere - Enabling natural environment data research at scale
Environmental data comes from a wide variety of sources and this is increasing rapidly with new innovations in data capture.
Data capture innovations provide major opportunities for science but also key challenges.
The Earth Datasphere is a decentralized peer-to-peer (P2P) data synchronization & exchange network
Start your own datasphere network! - National Geographic Challenge
Here’s the Whole Story
Data is central to earth and environmental sciences with significant investments in techniques for managing a wide range of environmental data. The data challenge is quite distinct from many fields of science with the most striking factor being the heterogeneity of the underlying data sources and types of data, hence the inappropriateness of the term “big data” in this field.
Environmental data comes from a wide variety of sources, and this is increasing rapidly with new innovations in data capture:
Large volumes of data are collected via remote sensing where environmental phenomena are observed without contact with the phenomena, typically from satellite sensing or aircraft-borne sensing devices, including an increasing use of drones.
Other data are collected via earth monitoring systems, which consist of a range of sensor technologies more typically in close proximity with the observed phenomena. Such sensors will monitor a range of parameters around the atmosphere, lithosphere, biosphere, hydrosphere, and cryosphere. Examples include weather stations and monitoring systems for water quality.
Significant quantities of data are collected through field campaigns involving manual observation and measurement of a range of environmental phenomena and these are increasingly supplemented by citizen science data collected by enthusiasts with strong exemplars in the areas of soils data.
There are large quantities of historical records that are crucial to the field. Many of these are digitized but, equally, significant quantities of potentially important information are not, particularly at a local level.
Significantly, there is growing interest (as in many fields) of exploiting data mining, discovering data, and data patterns from the web and social media platforms, such as seeking images showing localized water levels during periods of flood or seeking evidence of air quality problems and impacts on human health. This area is in its infancy but is likely to grow massively over the next few years.
The Data Challenge
Data capture innovations provide major opportunities for science but also key challenges.
Managing the variety and heterogeneity in underlying sources of data, including achieving interoperability across data sets;
Reducing the long tail of science and making all data open and accessible through environmental data centers;
Ensuring all data are enhanced with appropriate semantic meta-data capturing rich semantic information about the data and inter-relationships;
Ensuring mechanisms are in place to both record and reason about the veracity of data;
Finding appropriate mechanisms and techniques to support integration of different data sets to enhance scientific discovery and constrain uncertainty.
The Technical Solution
(This is not a blockchain application, does not use a blockchain ledger, has no consensus algorithm and it is built from scratch.)
Each node will host a P2P server that will connect and broadcast data (as JSON, XML, plain text messages) to its peers and a HTTP server that will expose an API (Application Programming Interface) so that other local applications (mobile apps, sensors, etc.) can call with data they wish to distribute to the network.
P2P Server
The P2P server listens via port 5002 (port can be changed) for incoming messages and connects to other peers via websockets (for example ws://another-node-address:5002).
Messages can arrive to the P2P server in two ways:
via a local API call when the data (message) is stored locally and broadcasted to the network peers
data (message) is sent by a peer in which case it is stored locally in a message queue (DATASTORE) so it can be later consumed by local applications (data pipelines, dashboards, etc.)
Data Manager
The Data Manager is focused on storing data locally in a DATASTORE queue when it arrives via API call or from a peer and in a BROADCAST queue when the data needs to be distributed to the network peers (via an API call).
HTTP Server
The HTTP Server exposes an API (POST /api/v1/datasphere) that handles calls from local applications (mobile apps, sensors, etc.)
Message Broker
A message broker is required to store incoming and outgoing data. For this challenge I opted for RabbitMQ (an open-source message broker) because it's easy to set up (via a Docker image) and works very well. Any message broker that implements the AMQP 0.9 protocol should work with my solution.
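For illustration only (the actual implementation is Node.js; this Python sketch with Flask and pika just mirrors the flow described above, and everything beyond the queue names and the endpoint path is an assumption):
```python
import json

import pika
from flask import Flask, jsonify, request

app = Flask(__name__)

def publish(queue: str, payload: dict) -> None:
    """Push a message onto a local RabbitMQ queue over AMQP 0-9-1."""
    # A new connection per call keeps the sketch simple; a real service would reuse one.
    connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
    channel = connection.channel()
    channel.queue_declare(queue=queue, durable=True)
    channel.basic_publish(
        exchange="",
        routing_key=queue,
        body=json.dumps(payload),
        properties=pika.BasicProperties(delivery_mode=2),  # persist the message
    )
    connection.close()

@app.route("/api/v1/datasphere", methods=["POST"])
def datasphere():
    """Local applications submit data here; it is queued for storage and P2P broadcast."""
    message = request.get_json(force=True)
    publish("DATASTORE", message)   # keep a local copy for local consumers
    publish("BROADCAST", message)   # the P2P server later drains this queue to peers
    return jsonify({"status": "queued"}), 202

if __name__ == "__main__":
    app.run(port=5000)
```
The P2P server would consume the BROADCAST queue and push each message to its connected peers over websockets, while peer-delivered messages land in DATASTORE the same way.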
Security
Security can be implemented at both the HTTP layer (API JWT token authorization) and at the P2P WS layer. At the P2P WS layer the message has an authorization token. In case the authorization token is invalid peers can reject the message and even remove the sending peer from the network.
Why It Matters
The Earth Datasphere makes it easy to share data across a network of peers, everyone has the same view of the same data and integration costs for anyone joining the network are practically nil. Anyone can join and leave the network at will and decide how much data they share (via API interface) and what they do with the data they receive from peers.
It has a significant impact on research efforts where relevant data is scattered across multiple locations and stored in different formats (CSV files, plain text, DB tables, XML files, etc.). Building data pipelines for data science of the natural environment becomes incredibly easy.
For Datasphere networks where the same type of data is shared (see National Geographic Challenge slide example for a weather station) the benefit is in building a more complete view of the data (data can vary from location to location). This benefit cannot be achieved at scale in a centralized approach where a master (server) collects data from its slaves (servers) as the cost of integration would be too high.
Challenges
This solution scales well for real-time data synchronization. Some challenges:
Data harmonization over time -
Let's assume we have an already established network with peers that exchange data. A new peer wants to join the network to share and receive data. Currently, it can only receive data that was broadcast after the time it joined and no data before that time. Ideally, I want to do data harmonization in a decentralized manner and not have a centralized replication server that holds the historical data.
Peer discovery -
Right now, if a new peer wants to join the network, it needs to know the addresses of all the existing network peers. I want to expand the API functionality (something like GET /api/v1/peers) so that if I know the address of one peer, I can get a list of all active network peers and connect to them.
Future
I am seeking partners who are willing to try out this technology (I would love to work with the National Geographic team). Long-term, I want to make this technology a reality, so partnerships are required to test it and grow it to maturity.
In the near term I will work to further develop this concept into a production-ready solution. The Earth Datasphere has the potential to cover many use cases.
A near-term roadmap:
Work on the peer discovery functionality
Implement security at every layer
Better exception handling
Data harmonization over time
Make the technology cloud ready (serverless)
Built With
api
microservices
mq
node.js
websockets
Try it out
github.com | The Earth Datasphere | Enabling natural environment data research at scale | ['Andrei Mititelu'] | [] | ['api', 'microservices', 'mq', 'node.js', 'websockets'] | 40 |
10,026 | https://devpost.com/software/quickeats-x7fdp4 | Home Page
Information Page
Sends the restaurant's data to Firebase
Retrieves the restaurant's information from Firebase to showcase the offered products
Uses the restaurant's address from Firebase to calculate the latitude and longitude and to show where the stores are located through APIs
Firebase Portal
Inspiration
COVID-19 brought much of global economic activity to a halt, hurting businesses and causing people to lose their jobs. In particular, restaurant owners have suffered great losses due to fewer customers and wasted food. A new study suggests that one in 10 restaurants around the country has permanently closed due to COVID-19. Restaurants Canada says an estimated 800,000 jobs have been lost across the country in the past month, and more than 300,000 of those jobs are in Ontario alone.
We feel an urge to help them endure this hardship and thought of a platform for them to resell their stocked food to everyone. Although it is at a lower price, it benefits not only the restaurants in minimizing costs but also the general public in saving money, moreover the environment for not wasting resources.
What it does
QuickBites allows restaurants to post the food they are selling on the "Partners" page. The information is then stored in the database for everyone to see. Consumers can buy specific products at a discounted rate on the "Products" page. In addition, QuickBites has a "Location" page that shows all of the restaurant partners so that consumers can easily pick the one most convenient to their home. This portion is completed with the support of the Google Maps and Places APIs.
How we built it
We built this project using JavaScript, HTML/CSS, Bootstrap, React, and Firebase. We designed our website with the React framework and managed all the details with HTML, CSS, and Bootstrap. Furthermore, we implemented our database using JavaScript and Firebase, and we incorporated the Google Maps and Places APIs to show the restaurants near the users.
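As an illustration of the address-to-coordinates step mentioned above (which the project performs in JavaScript), here is a hedged Python sketch that calls the Google Geocoding API over HTTP; the API key and the example address are placeholders.

```python
# Geocode a restaurant's street address into (lat, lng) via the Google
# Geocoding API; the resulting coordinates could then be stored in Firebase
# and plotted on the "Location" page.
import requests

GOOGLE_API_KEY = "YOUR_API_KEY"  # placeholder: a key with Geocoding enabled

def geocode_address(address: str):
    """Return (lat, lng) for an address, or None if nothing was found."""
    resp = requests.get(
        "https://maps.googleapis.com/maps/api/geocode/json",
        params={"address": address, "key": GOOGLE_API_KEY},
        timeout=10,
    )
    results = resp.json().get("results", [])
    if not results:
        return None
    location = results[0]["geometry"]["location"]
    return location["lat"], location["lng"]

# Example (hypothetical address):
# print(geocode_address("290 Bremner Blvd, Toronto, ON"))
```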
Challenges we ran into
When making this project, one of our struggles was designing a visually appealing and functional website. By using Bootstrap and carefully designing the details, we were able to overcome this problem. The other issue we came across was managing different states in React and integrating everything together with Firebase. The large number of interactions our web app makes caused the issue, and we resolved it in the end through continuous debugging and checking.
Accomplishments that we're proud of
We successfully completed the project and it worked perfectly in the end. We are proud of ourselves since we did not have a lot of experience with React + Firebase; however, we persisted and completed the project.
What we learned
We reinforced our knowledge on implementing React and Firebase, and we learned to integrate Google Maps and Place API into our website which created a convenient experience for users.
What's next for QuickBites
In terms of technical aspects, we hope to implement more APIs and explore more with Google Cloud services along with other databases such as MongoDB.
QuickBites can not only help restaurant owners but also other local retail stores in the COVID-19 crisis. We hope to implement it in the near future to contribute our part to the economy amidst the pandemic.
Built With
bootstrap
css
firebase
google-maps
google-places
html
javascript
react
Try it out
github.com | QuickBites | QuickBites is proud to help local restaurants in Toronto to ensure that they are profitable despite of lack of customers due to COVID-19, by letting restaurants sell stocked food to the public. | ['Kevin Xu', 'Jiale Tom Tian', 'Dennis Bae'] | [] | ['bootstrap', 'css', 'firebase', 'google-maps', 'google-places', 'html', 'javascript', 'react'] | 41 |
10,026 | https://devpost.com/software/covid-19-test-centers-map-data | This page in our website contains a map with which users can enter their location and find the nearest Covid testing sites.
This page contains all the test centers per state through geodata and a quantity table.
This page contains a form with which users can submit their own testing locations.
The Team
We are a group of students that wanted to do our part in combatting the COVID-19 pandemic.
Inspiration
We saw that there were many drive-thru test centers for COVID 19, but there wasn’t a database that listed them all. So we wanted to become part of the solution to the COVID 19 crisis by making a website that includes all the testing sites.
What it does
The website we’ve created will compile the list of all the testing centers in the US so that the user can identify the nearest location. It also includes an interactive map, a data dashboard, a form to add more testing locations, and a contact page.
How we built it
First, we used Spreadsheets to collect the data. Then we used Wordpress.com to build the site. We used Storepoint to create the interactive map and also used Google Data Studio for the data dashboard.
Challenges we ran into
We had trouble collecting all the data because there was no single resource with all the testing locations. We had to go through various web pages and news articles to find the test center locations.
Accomplishments that we’re proud of
We’re proud of our cooperation in combining the map with the website. We are also proud of the many hours we put into data collection.
What we learned
We learned that cooperation is essential to success in a group project; if one person falls short, everyone suffers, and the whole project gets delayed.
What’s next for COVID 19 Test Centers Map & Data
The next step is to continue research and find all the test center locations in the US. However, to do this, it is necessary that we gain the public's help through crowd-sourcing; we can also work with other partners to collect more data.
Built With
css
google-data-studio
google-spreadsheets
html
iframe
storepoint
wordpress
Try it out
covidtestingnear.me | Covidtestingnear.me | A website to see the map of all the COVID-19 testing sites in the US, and find the nearest location to the user. | ['Saad Nawaz', 'Ahmed Nawaz', 'Amjad Nawaz', 'Tamjeed Nawaz', 'Zahid Nawaz', 'Tauheed Nawaz'] | ['Highlighted Project', 'Honorable Mention'] | ['css', 'google-data-studio', 'google-spreadsheets', 'html', 'iframe', 'storepoint', 'wordpress'] | 42 |
10,026 | https://devpost.com/software/covid-19-information-website | Inspiration
We wanted to make something that was useful, and right now, the most useful thing is information on what's happening.
What it does
The website provides information on how to flatten the curve and includes many sources on how to properly wear a mask (which is often overlooked).
How we built it
We used HTML and CSS, and a mix of Xcode and TextWrangler.
Challenges we ran into
At first, formatting was difficult; then we were able to reformat, which was a relief.
Accomplishments that we're proud of
Our formatting! It ended up working really well.
What we learned
We learned a lot more about the coronavirus, and also how to work together over Zoom on a code project. We also learned a lot more about HTML, and about debugging: after finding something we wanted to change, we used functions to change it rather than replacing each instance one by one.
What's next?
Even more information, maybe adding a designated live update page, a guide page?
Built With
css
html
Try it out
github.com | Covid 19 information website! | An informative website to help with social distancing efforts, providing sources (extra reading) and tips. | ['sydney cohen', 'Lucia Harrison', 'Sydney Byck'] | [] | ['css', 'html'] | 43 |
10,026 | https://devpost.com/software/borgr | The main page, click the burger to find your nearest burger restaurant!
An example of the kind of Google Maps page it'll redirect you to.
My UiPath StudioX Setup for Outlook email automation
An example of an automatically generated UiPath email
IT'S LIVE! Try it for yourself!
borgr.space (if your browser complains about that URL, go here instead!)
Domain.com Best Domain Submission: borgr.space
Inspiration
Close your eyes and imagine you're on a road trip with your best buddies. Someone's stomach grumbles; you need to find somewhere to eat. It's been a long day and you don't want to argue, but how will you pick where to go? That's the problem I'm trying to solve. Introducing borgr, my new project that sends you to the nearest burger restaurant with one click.
What it does
Using borgr is simple: go to borgr.space and hit the button, and it'll redirect you to a Google Maps page for what Radar.io thinks is the nearest burger restaurant to your location.
This works by using a pair of Radar.io's APIs. First it uses the IP Geocoding API to get your coordinates from your IP address, then it uses the Place Search API to find the nearest burger restaurant to you from those coordinates. Then it'll do some borgr magic and generate a well-formatted Google Maps search link that it'll then redirect you to, which lets you easily get directions.
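A rough Python/Flask sketch of that flow is shown below. The Radar endpoint paths, parameters, and response fields are written from memory and should be treated as assumptions (check Radar's docs before relying on them); the API key is a placeholder.

```python
# Illustrative borgr-style flow: IP -> coordinates -> nearest burger place ->
# redirect to a Google Maps search URL.
from urllib.parse import quote_plus

import requests
from flask import Flask, redirect, request

app = Flask(__name__)
RADAR_SECRET_KEY = "prj_test_sk_..."  # placeholder

@app.route("/borgr")
def borgr():
    headers = {"Authorization": RADAR_SECRET_KEY}
    ip = request.headers.get("X-Forwarded-For", request.remote_addr)

    # 1) IP geocoding: roughly where is the visitor? (assumed endpoint/fields)
    geo = requests.get("https://api.radar.io/v1/geocode/ip",
                       params={"ip": ip}, headers=headers, timeout=10).json()
    lat, lng = geo["address"]["latitude"], geo["address"]["longitude"]

    # 2) Place search: nearest burger spot to those coordinates (assumed params).
    places = requests.get(
        "https://api.radar.io/v1/search/places",
        params={"near": f"{lat},{lng}", "categories": "burger-joint", "limit": 1},
        headers=headers, timeout=10,
    ).json().get("places", [])

    # 3) Redirect to a well-formatted Google Maps search for that place.
    name = places[0]["name"] if places else "burger restaurant"
    return redirect("https://www.google.com/maps/search/?api=1&query="
                    + quote_plus(f"{name} near {lat},{lng}"))
```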
In addition, my UiPath implementation means that whenever someone uses the website, I get an email saying where the borgr button led them, in an effort to create a borgr map in the future showing where in the world people have been wanting burgers and using borgr.
How I built it
A Python 3 Flask server deployed on Heroku with Gunicorn hosts the site, which uses HTML on the frontend. The backend heavily utilizes Radar.io's IP Geocoding and Place Search APIs. I also used UiPath's StudioX to send myself the Outlook emails.
Challenges I ran into
-Learning to use Heroku
-Fighting with HTML to make the site legible
-My wifi network thinking I was in NJ and giving my devices IPs accordingly. These IPs were nowhere near any burger places :(
-Learning to use StudioX
Accomplishments that we're proud of
-This was the first hackathon I did by myself!
What's next for borgr
-Get a prettier frontend, I'm a backend guy with little to no graphic design/fancy framework knowledge
-Taking data from automated UiPath emails and doing some data visualization on them to create maps of where people are using borgr
Built With
flask
heroku
html5
python
radar.io
uipath
Try it out
borgrapp.herokuapp.com
github.com | borgr | Arguing about where to get food is a thing of the past. | ['Drew Ehrlich'] | [] | ['flask', 'heroku', 'html5', 'python', 'radar.io', 'uipath'] | 44 |
10,026 | https://devpost.com/software/shoppinglist-15qhlg | Inspiration
My team and I aren't very experienced coders so we decided this would be a relatively simple program to write for this Hackathon as it is one of our first programs.
What it does
It asks the user for a set of items, one at a time, and then outputs them all as a single full list.
Challenges I ran into
The input loop became infinite at one point, so I had to add a stopping condition and close the list once the user stopped adding to it.
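A minimal Python sketch of that loop-with-a-stopping-condition is shown below; the stop word and prompt text are illustrative assumptions, not the team's exact code.

```python
# Keep asking for items until the user types the stop word, then print the
# full, numbered list once.
def build_shopping_list(stop_word: str = "done") -> list:
    items = []
    while True:
        item = input(f"Add an item (or '{stop_word}' to finish): ").strip()
        if item.lower() == stop_word:  # stopping condition ends the loop
            break
        if item:
            items.append(item)
    return items

if __name__ == "__main__":
    for number, item in enumerate(build_shopping_list(), start=1):
        print(f"{number}. {item}")
```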
What's next for ShoppingList
I would hope that I can make this list be able to function from voice recognition so it can be a sort of hands free app.
Built With
idle
python
Try it out
github.com | ShoppingList | Convenient shopping list that quickly generates a clear easy to use list of items that the user has added. | ['Sam Sherman', '3l3ctr0n Dron', 'Aaron Chu-Carroll'] | [] | ['idle', 'python'] | 45 |
10,026 | https://devpost.com/software/covaid-53hv21 | CovAid Register Page
CovAid Login Page
CovAid Requests Page
CovAid Requests Viewer
CovAid Request Submission
CovAid Home Page
Inspiration
The world we live in has changed dramatically amidst the COVID-19 outbreak. Although some of us are safe at home with the proper equipment, a large portion of the population does not have access to essentials. In analyzing the issue, we realized the immunocompromised currently had no access to essentials as they could not simply leave their houses to go to a grocery store. We decided to provide a solution to this problem by creating a website in which we could allow users to make virtual requests for items, such as toilet paper or hand sanitizer, and then enable volunteers to accept these requests to donate supplies to them. As there is no preexisting platform that allows for direct pairings between users and volunteer deliverers, we believe this is the perfect solution to help those most impacted by COVID-19.
What it does
CovAid is a web application that connects volunteers to those in need during the COVID-19 outbreak using AI-driven intelligence. The website connects at-risk users with volunteers willing to donate necessities. Users can make requests for items to the website and volunteers can respond to those requests. These pairings are created efficiently with a machine learning algorithm that takes into account various factors such as the distance between the user and the volunteer.
How we built it
Through the development of CovAid, we were able to learn how to integrate Flask, JavaScript, and jQuery as our back-end with HTML and Bootstrap together to develop a website from scratch. We used SQL to operate the database of users and the Google API to calculate the miles and estimated time between users. These topics were new to us and we were able to truly learn how to integrate every part together to create a fully-functioning website. In order to perform the matching between users and volunteers, we developed a Machine Learning Neural Network model to sort the requests on a volunteer’s page, as we wanted requests most relevant to the volunteer to show up when a volunteer is searching for a request to accept. We used Keras, NumPy, Pandas, and a Sequential Machine Learning Neural Network model with Dense layers to develop our model before implementing it into our website.
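To illustrate the kind of ranking model described above, here is a hedged sketch of a small Keras Sequential network with Dense layers that scores how relevant an open request is to a volunteer. The input features (miles, estimated minutes, urgency) and layer sizes are assumptions for illustration, not the team's exact model.

```python
# Score each open request for one volunteer, then sort requests best-first.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Dense(16, activation="relu", input_shape=(3,)),  # miles, minutes, urgency (assumed)
    layers.Dense(8, activation="relu"),
    layers.Dense(1, activation="sigmoid"),                  # relevance score in [0, 1]
])
model.compile(optimizer="adam", loss="binary_crossentropy")
# model.fit(training_features, training_labels, epochs=...)  # trained on past pairings

request_features = np.array([[2.1, 8.0, 1.0],
                             [12.5, 30.0, 0.0],
                             [0.8, 4.0, 1.0]], dtype="float32")
scores = model.predict(request_features).ravel()
ranking = np.argsort(-scores)  # request indices, most relevant first
```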
Challenges we ran into
We faced numerous challenges when it came to properly communicating between the Flask views and the various HTML templates. Since CovAid is a dynamic site, form data had to be sent back and forth between the files and stored in a database. Using a database was something new to all of us, and understanding how to integrate it for our needs was a major roadblock for a while. Another major challenge was implementing our machine learning sorting algorithm with our Flask and HTML code to sort the requests for each volunteer, since we had to learn how to feed live user data into the model.
Accomplishments that we're proud of
We are proud of how we could efficiently push out a website while allowing everyone on our team to contribute equally. After beginning with our entire team working together to create the basic layout of our website, we split up into two teams. Shrey and Atin worked on the front-end and back-end of the website while Anirudh and Aarav worked on the machine learning aspect of the project. We also learned various CS skills while also helping our community at the same time. In addition, we are also pleased that we have created another scenario that AI can help ease our lives. We are excited to see how our project will be able to create opportunities for other people to make a positive impact on their surroundings.
What we learned
In developing CovAid, aside from exploring new software such as Bootstrap and Flask, we fully understood the broader impacts of our project — that any simple act of kindness can be influential, especially to those that are impacted the most from issues like these.
What's next for CovAid
In order to create a real difference in our community we hope for CovAid to be more widespread and have a larger impact on the world. We also want to implement a system in which users are able to be further interconnected. Our vision is that through our product everyone will have access to essentials and will stay safe as our world continues to change from COVID-19.
Built With
bootstrap
css3
flask
google
html
javascript
jquery
keras
machine-learning
numpy
pandas
python
sqlalchemy
Try it out
github.com | CovAid | CovAid is a web application that facilitates deliveries to those in need during these pressing times. The website connects at-risk users with volunteers willing to donate necessities. | ['Atin Pothiraj', 'Aarav Khanna', 'Shrey Gupta', 'Anirudh Bansal'] | ['2nd Place'] | ['bootstrap', 'css3', 'flask', 'google', 'html', 'javascript', 'jquery', 'keras', 'machine-learning', 'numpy', 'pandas', 'python', 'sqlalchemy'] | 46 |
10,026 | https://devpost.com/software/papure-2tpv60 | paPURE Setup - Angeled View - Utilizing Snorkeling Mask
paPURE Setup - Front View - Utilizing Snorkeling Mask
paPURE Setup - Side View - Utilizing Snorkeling Mask
paPURE Setup - Back View - Utilizing Snorkeling Mask
Original Prototype of paPURE Design View
paPURE Base - Top View - Inserted Compressor Fan and Fan Shroud
paPURE Base - Top View - Empty
Abstract:
The Filtrexa paPURE is an affordable, 3D-printed powered air-purifying respirator (PAPR) that provides our healthcare providers with better protection than even N95s, especially in high-risk and confined environments (e.g. ICUs, ERs). It incorporates readily available components and can be easily manufactured locally. We can thus increase the accessibility of PAPR technology by enabling hospitals to produce and purchase it as per their need, optimizing the 3D print to produce it at a cost that is over ten times cheaper than PAPRs currently offered on the market, and exchanging highly specific components for readily available and affordable ones. The Filtrexa paPURE also incorporates design changes to improve the comfort, ease of use, and longevity of PAPR technology.
Introduction
One of the most immediate and impactful effects of the COVID-19 pandemic are global shortages of proper personal protective equipment (PPE), forcing healthcare providers (HCPs) to consistently work in high-risk environments and unnecessarily place their own lives at risk. Our product is a powered air-purifying respirator (PAPR) that creates a positive pressure field with filtered air to protect frontline healthcare workers from airborne threats such as SARS, TB, measles, influenza, meningitis, and most immediately COVID-19. This technology improves upon current PAPR devices in terms of cost-efficacy, ease of access, and ease of implementability. Our solution not only serves to combat general PAPR shortages across the country, but also eases PPE shortages that arise from COVID-19 and future patient surges through an on-demand 3D printing process.
Value Proposition
Powered, air-purifying respirators (PAPRs) are currently the gold standard in medicine when treating patients diagnosed with COVID-19 and other highly infectious respiratory diseases[1] due to their positive pressure system. This system filters air extremely effectively before it reaches the airway. However, this technology package is costly, often totaling over $1800[2] and requires highly specific components which are currently in short supply. Both well-established hospitals such as the Mayo Clinic (with a ratio of 4500 physicians to 200 PAPRs)[2] and smaller county hospitals such as the Hunterdon Medical Center (where not a single PAPR is available to physicians) are facing critical shortages of personal protective equipment (PPE). Evidently, the aforementioned barriers render PAPR technology inaccessible to most frontline HCPs, leaving them far more vulnerable to infection.
Alternatives to PAPR technology include N95s, surgical masks, and currently, homemade masks due to a worldwide shortage of PPE. Although they provide a barrier against aerosols, standard and surgical N95s are easily compromised with an improper fit and have an assigned protection factor (APF) of ten[4], while PAPRs have an APF of 25 to 1000, rendering PAPRs far more effective at protecting HCPs. Additionally, physicians tend to prefer PAPRs over N95s because PAPRs are reusable, easier to breathe through, do not require fit testing, and make them feel safer[1][5].
Our Solution
In order to provide purified air to those in the most high-risk environments, we have developed a novel, inexpensive, and accessible PAPR device that is both lightweight and 3D-printable within 24 hours. Printed using readily-available filaments (e.g. PLA, ABS), paPURE is mounted to the user’s hip and assembled via on-hand motors and batteries. (See Appendix 2.5).
Through PAPR technology, HCPs are given access to filtered positive pressure air systems (in which airflow serves to seal any gaps in masks, as well as reduce respiratory fatigue in HCPs), drastically decreasing infection risk in areas such as ICUs and ERs.
Our device’s customizability allows for interoperability with existing masks, filters, and hosing (See Appendix 3.1), enabling hospitals, or possibly surrounding hobbyists/machinists (regulatory dependent), to produce PAPRs for their physicians and nurses. For images and procedures: See Appendix 1 and 2.
The system features a dual-battery setup that allows HCPs to use one or both batteries independently, as well as swap out batteries while the device is in use (such as during an extended patient procedure that a physician cannot leave). Additionally, the belt system, with the fan/chassis at the lumbar region and a battery port on each hip, gives better weight distribution for improved comfort during extended use (such as a surgeon leaning in an awkward position during an operation). The use of an inline filter means that air is pushed into a filter at the end of the device, as opposed to regular PAPRs that pull air through filters. This setup means that the risk of an imperfect seal compromising air quality is virtually nullified, as no negative pressure system exists after air filtration in our device. Additionally, the aforementioned inline filters are better at filtering biological particles without disturbing airflow than standard P100s, and they are already used extensively in anesthesiology and respiratory care departments of hospitals across the country.
After printing the device’s chassis and shroud, integration with an inline bacterial/viral filter, housing, and masks will be followed by on-site fit and efficacy testing to ensure proper device assembly.[6] Then, an HCP would don their mask, clipping the paPURE chassis and two smart power tool batteries to a provided utility belt, and connecting to the mask via a hose. At most, we expect equipping paPURE to add 1-3 minutes to a medical professional’s routine and greatly improve safety and comfort.
An Improvement from Traditional PAPRs
Our technology eliminates the need for a middle-man manufacturer. Because the only required components are readily available to hospitals and clinics, hospitals can produce the device as per their need. We anticipate working with local 3D-printing facilities to produce and assemble the product, then to distribute the Filtrexa PAPR to hospitals. Physicians and NIOSH officials (most notably Richard Metzler, the first Director of the National Personal Protective Technology Laboratory at NIOSH), have already given us promising feedback regarding the need for this technology, and we are looking into potential partnerships with PPE developers and/or motor manufacturers. Some hospital purchasing experts have additionally communicated a need for affordable PAPRs. Our solution is over 10 times cheaper than current PAPR technologies ($155; see Appendix 2, Figure 2), increasing likelihood of adoption. To allow smaller hospitals to easily obtain our technology, we plan to raise awareness of our business through phone calls and emails to hospitals throughout the country.
Implementation Plan
paPURE’s solution is implementable almost immediately. The main barrier between our tested prototype and implementation is FDA/NIOSH approval (FDA EUA Sec II/IV Approve NIOSH Certified Respirators). We have also identified conditions that will allow us to expedite the regulation and roll-out of the production (such as the IDE and 501(k) pathways suggested to us by regulatory experts).[15] Because our device is based on existing PAPR technology, this predicate nature in combination with existing precedents for 3D-printed medical technology, can help expedite its deployment.[16]
Our technology minimizes the need for middle-men. We are partnering with regional additive manufacturers to allow for quick, standardized, yet still decentralized production of the device. The only required components are readily available to hospitals and clinics, allowing HCPs to produce the device as per their need. Additionally, if regulatory approval permits, we may utilize local schools/universities/hospitals with on-site 3D printers in order to allow for fully decentralized manufacturing. After NIOSH approval, our device (and, depending on regulatory guidelines, possibly our CAD file) will be sent to those with 3D printers available, who could print and assemble the device (See Appendix 3.1).
Players involved in the production of this technology would be hospital assembly workers, but the design is easily assembled by anyone (the only limitation being that assembly be done under a fume hood to prevent contamination). Physicians we’ve already talked to have given us promising feedback regarding the need for this technology. We are currently looking into potential partnerships with PPE developers (See Appendix 3.2) and/or motor manufacturers. Our solution is over ten times cheaper than current PAPR technologies (See Appendix 3.3), increasing the likelihood of adoption.
Due especially to the length of this health crisis, hospitals are facing dire shortages of PPE. This has accelerated our timeline, but we are confident that it is feasible given the current state of emergency (See Appendix 3.4).
Since this product has yet to be implemented in hospitals, we are writing to you today to gauge your interest in paPURE. Additionally, any feedback you have relating to our product or interest in helping us with laboratory testing of paPURE would be greatly appreciated.
We anticipate our project to reach full fruition within 6-12 months. Our timeline is as follows. Our second iteration of prototyping for clinician testing will conclude in 2-3 weeks, followed by initial clinical testing, which will finish in around 1.5 months. As soon as clinical testing is finished and the product is validated, we will submit our product officially to NIOSH for regulatory approval. We anticipate receipt of regulatory approval within 1.5 months from submission. After approval is obtained, we will also apply for either a provisional patent or copyright, depending on legal advice. Within 1-2 months after regulatory approval, we plan to roll out our product to hospitals via centralized 3D-printing. During the next 1-2 months, we will continue to iterate and optimize the product. Official hospital rollout, with multiple 3D-printing partners and company partnerships, will occur around a month later. This will be around 6-7 months from now. As seen, our timeline is aggressive as we wish to equip healthcare providers with PPE as soon as possible. The prior goals mentioned in our timeline are our key goals and objectives for the project at this time.
Current Testing and Partnerships
Technical testing is being carried out at Filtrexa's primary residence and at Johns Hopkins University and includes analysis of airflow data, battery life, and filtration efficacy. For clinical testing, we have already established connections with both the Johns Hopkins Medical Institute and Stanford University. In regard to business-focused assistance, we have also partnered with FastForwardU for advising on intellectual property protection, strategic marketing, and clinical networking.
Planned Partnerships
We plan to designate one 3D-printing company (current candidates include Xometry, Protolabs, Cowtown, and Health3D) as our manufacturer during our initial launch into the market, but will continue to partner with additional 3D-printing companies as our business grows. Due to our unique manufacturing approach, all hospitals, regardless of their size, will be able to order and quickly receive PAPRs, lowering the impact of the current shortage. In order to supply the auxiliary materials such as motors, batteries, and more, we plan to initiate company partnerships with large corporations such as 3M, Dyson, Black and Decker, GE, Cuisinart, Hitachi, Makita, Shop Vac, Hoover, Bissell, Shark, iRobot, and Bosch.
Additional Video
https://youtu.be/iFMtzt52BEQ
Appendix and Citations
Click here!
Website
paPURE Website
Built With
3dprinting
cad
cpap
p100 | paPURE | paPURE is a hospital accessible PAPR Technology utilizing 3D printing and readily available hardware to give healthcare's frontline the gold standard of personal protective equipment right now. | ['Sanjana Pesari', 'Hannah Yamagata', 'Sneha Batheja', 'Joshua Devier'] | ['2nd Place Overall Winners', '1st Place', 'The Wolfram Award', 'The Best Business Idea', '3rd Place Hack', 'Best COVID-19 Hack'] | ['3dprinting', 'cad', 'cpap', 'p100'] | 47 |
10,026 | https://devpost.com/software/socialsiren | Cross-platform compatibility using Flutter.
When "Sick" is selected on one user, the other user is warned.
They are warned based on their proximity to each other using GPS coordinates.
What is Social Siren?
A social distancing tool that alerts the app user when they're in proximity of a sick person. Created using geolocation in Flutter, which livestreams GPS data to a Node.js server that uploads to MongoDB Atlas. It is a powerful application because it has the ability to instantly go cross-platform and everything is stored/backed up in the cloud.
Inspiration
With the novel coronavirus, social distancing is essential. When leaving the house for grocery shopping and other essential activities, we get exposed to others. For the at-risk population, it is essential that if they've ever been potentially exposed they are alerted immediately.
What It Does
Geolocation Live-streaming
The application monitors all users through a centralized MongoDB Atlas database and notifies them when they're around a person who is sick. The latitude and longitude of each user is captured every second, and if a healthy user happens to be near a sick user, the healthy user is notified of a potential threat to their health. It focuses on anonymity and privacy by storing random user IDs in the cloud rather than any names.
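Here is a hedged Python sketch of the proximity check such a warning could rest on: a haversine distance between a healthy user's latest coordinates and each sick user's. The 50 m alert radius and the data shapes are illustrative assumptions, not the app's actual values.

```python
# Warn a user when any sick user's last reported position is within the radius.
from math import asin, cos, radians, sin, sqrt

def haversine_m(lat1, lon1, lat2, lon2) -> float:
    """Great-circle distance between two GPS points, in metres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

def should_warn(user: dict, sick_users: list, threshold_m: float = 50.0) -> bool:
    """True if any sick user is within the alert radius of this user."""
    return any(
        haversine_m(user["lat"], user["lng"], sick["lat"], sick["lng"]) <= threshold_m
        for sick in sick_users
    )
```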
How We Built It
We used Flutter to create the frontend and tested it on both Android and iOS with no issues. We used Node.js/Express for our server, and of course the amazing MongoDB Atlas for our database. We chose Mongo and Node since JSON formatting is super intuitive and user-friendly.
What we learned
Two of our team members learned Flutter/Dart for the first time. We all learned how to collaborate remotely using design tools such as Figma and using repository tools such as Github. One of our team members also taught another how to use Git Bash and push/pull/commit to project branches. It was a very fun experience, but I’m sure it would have been more fun in person!
What's next for SocialSiren
We hope to improve the Node.js Schema model to only PATCH/PUT the longitude and latitude rather than the entire user schema and improve anonymity measures through bcryptjs. Ideally, we'd want to decentralize and let each phone communicate/find distances through Bluetooth since at scale doing all of those calculations would be impossible. Another advantage of Bluetooth would be the more accurate distances it returns rather than the standard longitude and latitude GPS positions. Calculating people within proximity for thousands of users could be very well optimized but our current model is simply a demo concept and is not meant for any sort of production or testing. In terms of computing power, we could put the Node.js server on a cloud computer like a DigitalOcean Droplet and use physical emulators to decrease our latency since everything was running on one computer.
Built With
figma
flutter
mongodb
node.js
Try it out
github.com | Social Siren | A Tool for Monitoring Social Distancing | ['Nicholas Vitebsky', 'Shreya C', 'Kanishq Kancharla', 'Austin Fitz'] | [] | ['figma', 'flutter', 'mongodb', 'node.js'] | 48 |
10,026 | https://devpost.com/software/stacy-bot | Interface in FB messenger
This representation of NLP
Features which will be added more as time goes
PLEASE NOTE THIS IS A TEST BOT. AS PUBLISHING AND VALIDATION TAKE TIME, IF YOU WANT TO USE THIS THEN YOU NEED TO BE A TESTER. BUT YOU CAN USE THE PHONE CALL FACILITY.
CALL AT: +1 463-221-4880
(This is a toll-free number based in the US; if you are outside the US, only minimal international charges will apply. I am from India and it costs $0.0065/min.)
If you want to use this app in your Facebook Messenger like shown in the video then please comment your Facebook ID in this project's comment section, I will add you as a tester to this app
IT IS JUST A WORKING DEMONSTRATION OF MY IDEA TO TACKLE THE PROBLEM; IT CAN BE ADAPTED TO THE DEMANDS OF ANY ORGANISATION. AND THE BEST THING IS THAT IT IS NOT JUST A CONCEPTUAL IDEA; IT IS A REALISTIC IDEA THAT CAN BE DEPLOYED AT ANY MOMENT ACCORDING TO THE DEMAND OF THE ORGANIZATION.
Our Goal
General Perspective
Due to the COVID-19 situation, the workforce of the world is shrinking (since everyone is maintaining self-quarantine and social distancing), which is creating havoc around the world. Through this project, I mainly aim to tackle this problem and help health organizations with a virtual workforce that runs 24*7 without any break and handles all kinds of matters, from guiding people to fill out forms to managing patient data automatically and all together.
Business Perspective(if required)
A bot service (it is not a company yet; I am just referring to what we want to build, as we are student developers right now) that adds a virtual workforce to every client organisation to help it bloom in the market. From a business perspective, our potential targets are small businesses, NGOs, and health organisations; we help them reduce human service costs and grab more users by providing 24*7 interaction with their users, thus generating more revenue for them.
Inspiration
I was really inspired to make this advanced A.I. bot by the current COVID-19 situation: because of it, people are restricted from gathering, and hence the workforce and user interaction of various health organisations are adversely affected. Through this project I aimed to connect health organizations with patients anywhere in the world, using any platform (not limited to Android, iOS, or the web), and also to manage patient data automatically, thus reducing human effort and maintaining social distancing.
MADE THIS PROJECT TO BRING A CHANGE.
How is our product different from others
1)
There are many types of A.I. bots, and most of them are decision-tree-based models that work with particular buttons only. Our product is totally based on NLP models, which are more advanced and in higher demand than the others.
2)
Other A.I. bot service providers are confined to only 1 or 2 platforms, whereas we give the client the advantage of choosing from a large range of platforms like FB Messenger, Google Assistant, Slack, Line, website bots, and even phone calls.
3)
For the health organisations that are willing to buy our technology (we are also willing to donate this tech for free), from a business perspective we will also be cheaper than our competitors: when others charge about $3300/year for the service, we are doing it for a one-time fee in the $100-$1500 range with more versatility.
It will be totally free for any user using it; no charges will be applicable for users.
What it does
Our bot empowers every health organisation in this COVID-19 situation by managing screening, testing, and quarantine data, and also connecting people who are willing to get tested with the help of diversified digital platforms. In cases where the internet is not working (where other bots won't function), our bot still works over a phone number, thus providing useful results in such situations. It basically covers all important aspects of an advanced A.I. bot. It also connects health organisations with volunteers who are willing to donate their time as helping hands in this hour of need.
How I built it
I built it using Google Cloud A.I. solutions and the Google Cloud Dialogflow framework (which includes automatic Firebase integration), where I trained the bot with NLP on large datasets from the WHO and the government, and then integrated it with Facebook Messenger through a Facebook Developer account. It also supports a phone call facility.
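For illustration, a minimal webhook fulfillment for a Dialogflow ES agent of this kind could look like the Python/Flask sketch below (the project itself uses Dialogflow's built-in Firebase/JavaScript fulfillment); the intent names and replies are made up.

```python
# Dialogflow ES sends the matched intent in queryResult; the webhook replies
# with a JSON body containing 'fulfillmentText'.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def webhook():
    body = request.get_json(force=True)
    intent = body["queryResult"]["intent"]["displayName"]

    if intent == "covid.symptoms":      # hypothetical intent name
        reply = "Common symptoms are fever, dry cough and tiredness (per WHO guidance)."
    elif intent == "book.test":         # hypothetical intent name
        reply = "I can register you for a test. Which district are you in?"
    else:
        reply = "You can ask me about symptoms, testing, or volunteering."

    return jsonify({"fulfillmentText": reply})
```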
Challenges I ran into
I had to go through many challenges. Being a solo developer, I really had to face a lot of problems making such a complex app with all the advanced features mentioned. All these things together cost me a lot of sleepless nights, but I hope my hard work pays off.
Accomplishments that I'm proud of
I am really proud of the app that I made because it itself is a big milestone for a solo developer like me.
What I learned
I learned a lot of things throughout the journey of developing this app: advanced use of Google Cloud A.I. solutions and Dialogflow, integrating with Facebook Messenger, making filters inside the chatbot to enhance the user experience, connecting it to a phone number to receive phone calls, and more.
What's next for Health Bot
If my work gets selected, then for sure I am going to work really hard to make Health Bot even bigger and to add more amazing functionalities to make my users happy.
Built With
dialogflow
facebook
google-cloud
javascript
json
Try it out
github.com | Advanced A.I Health Bot | An A.I bot with: Telephone calling,NLP,24*7 health coverage,total automatic data management,wipes rumors,Easy navigation,HD pictures,Customer service help etc | ['Udipta Koushik Das'] | ['Accessibility: Second Prize', 'Healthcare: Second Prize'] | ['dialogflow', 'facebook', 'google-cloud', 'javascript', 'json'] | 49 |
10,026 | https://devpost.com/software/masked-ai-masks-detection-and-recognition | Platform Snapshot
Input Video
Model Processing
Model Processing
Output Video Saved
Output Video Snapshot
Output Video Snapshot
Output Video Snapshot
Output Video Snapshot
Output Video Snapshot
Output Video Snapshot
Inspiration
The total number of coronavirus cases is 5,104,902 worldwide (Source: Worldometers). The cases are increasing day by day and the curve is not ready to flatten; that's really sad!! Right now the virus is in the community-transmission stage, and taking preventive measures is the only option to flatten the curve. Face masks are crucial now in the battle against COVID-19 to stop community-based transmission. But we are humans and lazy by nature; we are not used to wearing masks when we go out in public places. One of the biggest challenges is "people not wearing masks in public places and violating the orders issued by the government or local administration." That is the main reason we built this solution: to monitor people in public places via drones, CCTVs, IP cameras, etc., and detect people with or without face masks. Police and officials are working day and night, but manual surveillance is not enough to identify people who are violating rules and regulations. Our objective was to create a solution that needs less human-based surveillance to detect people who are not wearing masks in public places. An automated AI system can reduce manual investigations.
What it does
Masked AI is a real-time video analytics solution for human surveillance and face mask identification. Our main feature is identifying people wearing the masks advised by the government. Our solution is easy to deploy on drones and CCTVs to "see what really matters" in this pandemic situation of the Novel Coronavirus. It has the following features:
1. Human Detection
2. Face Masks Identification (N95, Surgical, and Cloth-based Masks)
3. Identify human with or without mask in real-time
4. Count people each second of the frame
5. Generate alarm to the local authority if not using a mask (Soon in video demo)
It runs entirely on the cloud and does detection in real-time with analysis using graphs.
How we built it
Our solution is built using the following major technologies:
1. Deep Learning and Computer Vision
2. Cloud Services (Azure in this case)
3. Microservices (Flask in this case)
4. JavaScript for the frontend features
5. Embedded technologies
I will be breaking the complete solution into the following steps:
1. Data Preparation:
We collected more than 1000 good-quality images of multiple classes of face masks (N95, surgical, and cloth-based masks). We then performed data preprocessing, labeled all the images using labeling tools, and generated PASCAL VOC and JSON annotations after labeling.
2. Model Preparation:
We used one of the most popular deep learning-based object detection algorithms, YOLOv3, for our task. Using Darknet and YOLOv3, we trained the model from scratch on a machine with 16 GB RAM and a Tesla K80 GPU. It took 10 hours to train the model. We saved the model for deploying our solution to various platforms.
3. Deployment:
After training the model, we built the frontend, which is fully client-based, using JavaScript and the Flask microservice framework. Rather than saving the input videos to our server, we send our AI to the client's side and use Microsoft Azure for deployment. We have both on-premise and cloud solutions prepared. At the moment, we are on a trial, so we can't provide the link URL.
After building the AI part and the frontend, we integrated our solution with the IP and CCTV cameras available in our house and checked the performance of our solution. Our solution works in real time on video footage with very good accuracy and performance.
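As a hedged illustration of running a trained YOLOv3 mask detector on a single video frame, here is a Python sketch using OpenCV's DNN module; the .cfg/.weights paths, class names, and thresholds are placeholders for what the training step above would produce, not the team's actual files.

```python
# Parse YOLOv3 outputs into (label, confidence, box) detections for one frame.
import cv2
import numpy as np

CLASSES = ["mask", "no_mask"]  # assumed label set
net = cv2.dnn.readNetFromDarknet("masked.cfg", "masked.weights")  # placeholder paths
layer_names = net.getUnconnectedOutLayersNames()

def detect(frame, conf_threshold=0.5):
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    detections = []
    for output in net.forward(layer_names):
        for row in output:                  # [cx, cy, bw, bh, objectness, class scores...]
            scores = row[5:]
            class_id = int(np.argmax(scores))
            confidence = float(scores[class_id])
            if confidence >= conf_threshold:
                cx, cy, bw, bh = row[0] * w, row[1] * h, row[2] * w, row[3] * h
                box = [int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)]
                detections.append((CLASSES[class_id], confidence, box))
    return detections  # per-frame counts and "no_mask" alerts come from this list
```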
Challenges we ran into
There are always a few challenges when you innovate something new. The biggest challenge is the Novel Coronavirus itself: because of it, we couldn't go outside for the hardware and embedded parts. We are working virtually to build innovative solutions, but as of now we have very limited resources. We can't go outside to buy hardware components or IP and CCTV cameras. One more challenge we faced was that we were not able to validate our solution with drones in the early days due to the lockdown, but after obtaining permission from the officials, that problem was no longer an issue.
Accomplishments that we're proud of
Good work brings appreciation and recognition. We have submitted our research paper to several conferences and international journals (awaiting publication). After developing the basic proof of concept, we went to the local government officials and submitted our proposal for a trial of our solution for better surveillance, because the lockdown is about to be lifted. Our team is also participating in several hackathons and tech events virtually to showcase our work.
What we learned
Learning is a continuous process. We mainly work in the AI domain and not with drones. The most important thing about this project was learning new things. We learned how to integrate Masked AI into drones and deploy our solution to the cloud. We added embedded skills to our profile and are now exploring more in that area. The other learning came from taking our proof of concept to the local administration for trials. All these government procedures, like writing a research proposal and meeting with officials, were new to us, and we learned several protocols for working with the government.
What's next for Masked AI: Masks Detection and Recognition
We are looking forward to collaborating with the local administration and the government to integrate our solution into drone-based surveillance (currently a trend for monitoring inner areas of cities). In parallel, improving the model is the main priority, and we are adding action recognition and object detection features to our existing solution for an even more robust result, so decision-makers can make ethical decisions, because surveillance using deep learning algorithms is always risky (bias and errors in judgment).
Built With
azure
darknet
flask
google-cloud
javascript
nvidia
opencv
python
tensorflow
twilio
yolo | Masked AI: AI Solution for Face Mask Identification | Masked AI is a cloud-based AI solution for real-time surveillance that keeps an eye on the human who violates the rule by not using face masks in public places. | [] | [] | ['azure', 'darknet', 'flask', 'google-cloud', 'javascript', 'nvidia', 'opencv', 'python', 'tensorflow', 'twilio', 'yolo'] | 50 |
10,026 | https://devpost.com/software/covnatic-covid-19-ai-diagnosis-platform | Landing Page
Login Page
Segmentation of Infected Areas in a CT Scan
Check Suspects using Unique Identification Number (New Suspect)
Check Suspects using Unique Identification Number (Old Suspect)
Suspect Data Entry
COVID-19 Suspect Detector
Upload Chest X-ray
Result: COVID-19 Negative
Upload CT Scan
Result: Suspected COVID-19
Realtime Dashboard
Realtime Dashboard
Realtime Dashboard
View all the Suspects (Keep and track the progress of suspects)
Suspect Details View
Automated Segmentation of the infected areas inside CT Scans caused by Novel Coronavirus
Process flow of locating the affected areas
U-net (VGG weights) architecture for locating the affected areas
Segmentation Results
Detected COVID-19 Positive
Detected Normal
Detected COVID-19 Positive
Detected COVID-19 Positive
GIF
Located infected areas inside lungs caused by the Novel Coronavirus
Endorsement from Govt. of Telangana, Hyderabad, India
Endorsement from Govt. of Telangana, Hyderabad, India
Generate Report: COVID-19 Possibility
Generate Report: Normal Case
Generated PDF Report
Inspiration
The total number of coronavirus cases is 2,661,506 worldwide (Source: Worldometers). The cases are increasing day by day and the curve is not ready to flatten, that's really sad!! Right now the virus is in the community-transmission stage and rapid testing is the only option to battle the virus. McMarvin took this opportunity as a challenge and built an AI solution to provide a tool to our doctors. McMarvin is a DeepTech startup in medical artificial intelligence using AI technologies to develop tools for better patient care, quality control, health management, and scientific research.
There is a current epidemic in the world due to the Novel Coronavirus, and there are limited testing kits for RT-PCR and lab testing. There have been reports that kits are showing variations in their results and false positives are heavily increasing. Early detection using chest CT can be an alternative to detect COVID-19 suspects. For this reason, our team worked day and night to develop an application that can help radiologists and doctors by automatically detecting and locating the infected areas inside the lungs using medical scans, i.e. chest CT scans.
The inspirations are as follows:
1. Limited kit-based testing due to limited resources
2. RT-PCR is not very accurate in many countries (recently in India)
3. An RT-PCR test can't exactly locate the infections inside the lungs
AI-based medical imaging screening assessment is seen as one of the promising techniques that might lift some of the heavyweights of the doctors’ shoulders.
What it does
Our COVID-19 AI diagnosis platform is a fully secured, cloud-based application to detect COVID-19 patients using chest X-rays and CT scans. Our solution has a centralized database (like a mini-EHR) for corona suspects and patients. Each and every record is saved in the database (hospital-wise).
Following are the features of our product:
Artificial intelligence to screen suspects using CT scans and chest X-rays.
AI-based detection plus segmentation and localization of infected areas inside the lungs in chest CT.
Smart analytics dashboard (hospital-wise) to view all the updated screening details.
Centralized database (only for COVID-19 suspects) to keep records of suspects and track their progress every time they get screened.
PDF reports, DICOM support, guidelines, documentation, customer support, etc.
Fully secured platform (both on-premise and cloud) with a privacy policy under healthcare data guidelines.
Get a report within seconds.
Our main objective is to provide a research-oriented tool to alleviate the pressure on doctors and assist them with an AI-enabled smart analytics platform so they can "SAVE TIME" and "SAVE LIVES" in the critical stages (Stage 3 or 4).
The following are the benefits:
1. Real-world data on risks and benefits:
The use of routinely collected data from suspect/patient allows assessment of the benefits and risks of different medical treatments, as well as the relative effectiveness of medicines in the real world.
2. Studies can be carried out quickly:
Studies based on real-world data (RWD) are faster to conduct than randomized controlled trials (RCTs). Data from patients infected with the Novel Coronavirus will help in this research and in similar future outbreaks.
3. Speed and Time:
One of the major advantages of the AI-system is speed. More conventional methods can take longer to process due to the increase in demand. However, with the AI application, radiologists can identify and prioritize the suspects.
How we built it
Our solution is built using the following major technologies:
1. Deep Learning and Computer Vision
2. Cloud Services (Azure in this case)
3. Microservices (Flask in this case)
4. DESKTOP GUIs like Tkinter
5. Docker and Kubernetes
6. JavaScript for the frontend features
7. DICOM APIs
I will be breaking the complete solution into the following steps:
1. Data Preparation:
We collected more than 2000 medical scans, i.e. chest CTs and X-rays of 500+ COVID-19 suspects, from across European countries and from open-source radiology data platforms. We then performed validation and labeling of CT findings with the help of advisors and domain experts who are doctors with 20+ years of experience. You can get more information in the team section on our site. After careful data preprocessing and labeling, we moved to model preparation.
2. Model Development:
We built and tested several model architectures. We started with a CNN classifier and checked its score across different metrics, because creating a COVID-19 classifier is not an easy task: variations in the data can cause bias in the results. We then used a U-net for segmentation and achieved very good accuracy and a good IoU score. For the detection of COVID-19 suspects we used a CNN architecture, and for segmentation we used a U-net architecture. We achieved 94% accuracy on the training dataset and 89.4% on the test data. For false positives and other metrics, please go through our files.
3. Deployment:
After training the model and validating it with our doctors, we prepared our solution in two different formats, i.e. a cloud-based solution and an on-premise solution. We are using an EC2 instance on AWS for our cloud-based solution.
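For illustration, a minimal classifier of the kind described in step 2, a small Keras CNN over resized grayscale scans with a binary suspected-COVID-19/normal output, might look like the sketch below; the layer sizes, image size, and training call are assumptions, not the production architecture.

```python
# Small CNN that maps a 256x256 grayscale scan to a probability of
# "suspected COVID-19"; trained on the labelled scans from step 1.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(256, 256, 1)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=...)  # hypothetical datasets
```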
Our platform will only help and not replace the healthcare professionals so they can make quick decisions in critical situations.
Challenges we ran into
There are always a few challenges when you innovate something new. The biggest challenge is “The Novel Coronavirus” itself.
One of the challenges is obtaining validated data from different demographics and CT machines.
Due to the lockdown in the country, we are not able to meet and discuss the solution with several other radiologists. We are working virtually to build innovative solutions, but as of now we have very limited resources.
Accomplishments that we're proud of
We are in regular touch with the state government (Government of Telangana, Hyderabad). Our team presented the project to the Health Minister's office and is helping them in stages 3 and 4.
Following accomplishments we are proud of:
1. 1 patent (IP) filed
2. 2 research papers
3. Partnerships with several startups
4. In touch with several doctors who are working with COVID-19 patients; also in discussions with research institutes for R&D
What we learned
Learning is a continuous process. Our team learnt "the art of working in lockdown". We worked virtually to develop this application to help our government and people. The other learning came from taking our proof of concept to the local administration for trials. All these government procedures, like writing a research proposal and meeting with officials, were new to us, and we learned several protocols for working with the government.
What's next for M-VIC19: McMarvin Vision Imaging for COVID19
Our research is still going on, and our solution is now endorsed by the Health Ministry of Telangana. We have presented our project to the government of Telangana for a clinical trial, so the next step is a trial with hospitals and research institutes. On the solution side, we are adding more labeled data under the supervision of doctors who are working with COVID-19 patients in India. Features like biometric verification and a trigger mechanism to send notifications to patients and a command room are under consideration. There is always scope for improvement, and AI is a technology that learns on top of data. Overall, we are dedicated to taking this solution into real-world production for our doctors and for CT and X-ray manufacturers, so they can use it to fight the deadly virus.
Built With
amazon-web-services
flask
google-cloud
javascript
keras
nvidia
opencv
python
sqlite
tensorflow
Try it out
m-vic19.com | M-VIC19: McMarvin Vision Imaging for COVID19 | M-VIC19 is an AI Diagnosis platform is to help hospitals screen suspects and automatically locate the infected areas inside the lungs caused by the Novel Coronavirus using chest radiographs. | [] | ['1st Place Overall Winners', 'Third Place - Donation to cause or non-profit organization involved in fighting the COVID crisis'] | ['amazon-web-services', 'flask', 'google-cloud', 'javascript', 'keras', 'nvidia', 'opencv', 'python', 'sqlite', 'tensorflow'] | 51 |
10,026 | https://devpost.com/software/college-major-questionnaire | Inspiration
My fellow classmates have lately been having trouble deciding on a college major, and as it is very important for planning your future and career, I thought that making this app would aid them in deciding.
What it does
It uses a question-based system to determine your area of study based on your likes and dislikes.
How I built it
I built it using code.org. My code uses a counter to index through an array of questions, plus separate counters to track each of the categories that correlate to areas of study.
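A plain-Python sketch of that mechanic, an index walking through a question array while each answer bumps a counter for its category, is shown below; the questions and categories are placeholders, not the app's real content.

```python
# Ask each question in order and tally "yes" answers per area of study.
QUESTIONS = [
    ("Do you enjoy building or fixing things?", "Engineering"),
    ("Do you like reading and writing essays?", "Humanities"),
    ("Are you curious about how the body works?", "Health Sciences"),
]

def run_quiz() -> str:
    scores = {}
    for index in range(len(QUESTIONS)):           # counter that switches questions
        question, category = QUESTIONS[index]
        answer = input(question + " (y/n): ").strip().lower()
        if answer.startswith("y"):
            scores[category] = scores.get(category, 0) + 1
    return max(scores, key=scores.get) if scores else "Undecided"

if __name__ == "__main__":
    print("Suggested area of study:", run_quiz())
```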
What's next for College Major Questionnaire
Eventually, I hope to make it into an application available for students worldwide!
Built With
code.org-local-school-database
javascript
Try it out
studio.code.org | College Major Questionnaire | This questionnaire helps prospective college students determine an area of study or major for college! | ['Sydney Dizon'] | [] | ['code.org-local-school-database', 'javascript'] | 52 |
10,026 | https://devpost.com/software/3dprinting-kxwo1b | Inspiration I saw the snake game with 3d style then I tried to turn 2d image I printed to 3d
What it
How I built it: I used Java and Stl format to print the code with the algorithm
Challenges I ran into
Accomplishments that I'm proud of
What I learned
What's next for 3DPrinting
Built With
java
stl
Try it out
github.com | 3DPrinting | Printing 3D using Stl | ['Hoang Do'] | [] | ['java', 'stl'] | 53 |
10,026 | https://devpost.com/software/solocoin | Home Page
Social-distancing timer and Challenges
Detailed Rewards
Rewards and Coupons
Detailed Leaderboards
Leaderboards, Milestones, Badges
The Inspiration
COVID-19 has severely affected the livelihoods of people around the globe. With people getting infected by COVID each passing day, consumers have learned to stay home, preserve the money they have, and consume less. However, this affects the economy. Similarly, local businesses have experienced unprecedented losses. For those that have survived this period, the possibilities for re-opening, recovery, and growth are limited and possibly bleak. SMBs don't have a sustainable solution through which they can grow their businesses again and recover their losses. Traditional ad-tech channels are broken, and without significant investment no business can get ROI. But with inbuilt game mechanics, we can motivate people to purchase and help SMBs advertise their products without any upfront investment. So gamification is the perfect way to help SMBs while making it fun for the people affected to support them.
I’m a co-founder of a Blockchain gaming company. I (Arbob) have a lot of experience in implementing game mechanics inside consumer apps. So, I thought, why not apply the same gamification techniques I implement in blockchain into a consumer app to encourage people to do a task based on their location, spend more and help SMBs in economic recovery. So I started to scout for members in hackathon channels and got an amazing team to build this idea.
What it does
SoloCoin is an app that rewards users in virtual coins based on their location. Currently, to encourage social distancing, rewards are based on the user's proximity to their home. The app uses GPS, geofencing, and the accelerometer (to track whether the phone is idle), and coins are awarded only while the user is actively using the phone. If their smartphone is within a certain radius (~20m) of a reward hotspot (a geofenced reward location), the app rewards them with virtual coins that they can later redeem for "Partner Coupons". These partners can be SMBs, local businesses, and any online B2C business in e-commerce, entertainment, lifestyle, health, etc. Our app rewards, nudges, and drives beneficial consumer behavior, which will help accelerate post-COVID economic recovery in local communities. Users can also compete with nearby players for achievements and badges, which they can later share with their friends.
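As a rough illustration of the geofence check described above, here is a minimal Python sketch. The hotspot coordinates, the 20 m radius, and the coins-per-minute rate are assumptions for demonstration, not SoloCoin's actual reward parameters.

```python
# Minimal sketch of a geofence reward check: coins are earned only while the
# phone reports a position within ~20 m of a reward hotspot. All numbers here
# (coordinates, radius, rate) are illustrative assumptions.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes, in metres."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def coins_earned(phone_pos, hotspot_pos, minutes_active, radius_m=20, rate_per_min=1):
    """Award coins only for time spent actively using the phone inside the geofence."""
    inside = haversine_m(*phone_pos, *hotspot_pos) <= radius_m
    return minutes_active * rate_per_min if inside else 0

home = (28.6139, 77.2090)        # hypothetical "home" hotspot
phone = (28.61395, 77.20905)     # a few metres away, still inside the fence
print(coins_earned(phone, home, minutes_active=30))  # -> 30
```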
The global ads market is $600B+, with digital advertising accounting for $220B+. But there is a niche in hyperlocal digital ads that no one has specifically targeted. In India alone, traditional retail shops represent a $650B market. Serving the simple, engaging advertising needs of such a big segment of the market will create an ad-tech boom in the hyperlocal sector in India, and we can be at the forefront of this market in the next 5-7 years.
How we built it?
When Arbob had the idea, he started building it on Python's BeeWare toolkit, which ports Python code to a native mobile app. Then he started to scout for members across hackathons. Since the idea was innovative and exciting, many amazing people joined his team, and the team then decided to build it natively for Android and iOS for better usability and support. We have a group of open-source collaborators from around the world, including startup founders and CXOs, people from IITs, BITS, Stanford, Microsoft, Uber, GitHub, neuro-researchers and more among our community, with expertise in frontend (Android Studio, Java, Kotlin), backend (Rails + PostgreSQL | RDS) mobile development, marketing, product and design (Adobe XD), etc. AWS is the cloud service provider, with authentication from Firebase and maps from Google. The product is built 100% remotely.
Challenges we ran into
The first challenge was finding good team members with relevant expertise. But, we got through with it thanks to our hard work and to the volunteers joining because of the innovative nature of the product.
Management of tasks. To tackle this issue, we created a Trello and GitHub task board for better collaboration.
One thing I've noticed while building our product is that the founder's vision for the product is very important. When I first pitched the idea of SoloCoin, I was alone, but the idea was innovative and had never been done before. The vision I put forward to gamify social distancing resonated with people, and that helped us get amazing people with great skills from all over the world to voluntarily work on my idea without any financial incentive. People were, and still are, excited every day they wake up to build and scale the product. And that's how it went from an idea to an MVP and then a working product.
Accomplishments that we're proud of
The clarity in our concept.
Innovative approach towards social-cause using gamification.
1000+ sign-ups for the app launch.
5000+ user visits since the website launch.
The prototype was done in 3 days.
Beta Launched in Play Store.
Positive feedback and support from across the community, both tech and non-tech. Influencers asking to promote our app for FREE. Many top-level government officials agreed to promote our product in their network.
Partnerships for reward coupons.
Partnerships with multiple influencers.
An amazing team of 20+ open-source collaborators. 100% remote.
Tackling a greater cause to eradicate COVID-19 from the face of the earth through gamified social-distancing.
Won India’s largest COVID Hackathon - CODE19.
Won World's largest COVID Hackathon - EUvsVirus.
Incubated in European Innovation Council's business accelerator and Central European University's Innovation Lab.
What we learned
Clear documentation from both product and development perspective for faster user onboarding.
The idea of making a difference motivates a person more than money.
Distribute department leads early on to avoid clashes among members. We were lucky that we did it early on.
Single design philosophy to avoid misalignment of colors, content that is being pushed on the internet for promotion and visibility.
Don't let negative people come anywhere near your extremely positive and passionate team. Take care of your team.
Regular motivation and clear communication are very important when managing a remote team.
Just like you can’t expect 9 women to give birth to a baby in 1 month, you shouldn’t expect 10 developers to build a product of a global scale in a day.
Ideas are the most valuable asset to any startup. That's why we have a dedicated #ideas channel on our Discord for people to post ideas that can help improve our product. So far we have received 20+ ideas since the project's inception.
Planned Revenue Model
Brands pay us for promoting their deals via Rewards. Sponsors will pay us to get their rewards listed on our app. Like if a new e-commerce startup wants better visibility of their product, they can pay us some fee and get their brand coupon listed on our app, which users can redeem for coins and purchase stuff from their website.
Sponsored badges and goals. For example, when someone redeems their coins for Amazon coupons they'll get an "Amazon Badge of Honor" which they can share on their social media.
Sponsored Daily/Weekly/Monthly Rewards via Challenges
Location-based targeted ads for grocery, medicine, and other necessities. Later we can open it for a broader audience and sponsors.
Subscription model for faster coin generation.
Impact of SoloCoin
SoloCoin will have a massive impact on the reduction of COVID-19 transmission and on society as a whole. Some of them are:
Impact on society - It is a “simple way” for citizens to do their part, practice social distancing, and get real economic rewards and esteem awards to show they stood up to the challenge! They can be proud of themselves.
Impact on Public Health Officials: Public Health Agencies can get access to anonymized data insights from our app to help form better messaging to inform/educate the public.
Impact on Partners-Sponsors: Coins can be exchanged for exciting "Partner coupons". These partners can be SMBs, Local businesses, and any online B2C businesses. Our app rewards, nudges, and drives beneficial consumer behavior. This will help post-COVID to accelerate economic recovery in local communities and help get their businesses on track.
Impact on People-driven Health and Wellness: having an all-access platform for behavioral change with people from all around the world will help support post-COVID measures, promoting healthier physical, mental, and social behaviors.
What's next for SoloCoin?
Making this a global app with partner support.
Rewarding for good habits like washing hands, timely self-isolation, Yoga, etc.
Determining efficient/less crowded routes for commuting and avoiding people.
Give users the possibility to chat with nearby quarantined people.
Map list of nearby available essential stores for groceries, medicines, etc.
Adding an anonymous user authentication system.
Give users the possibility to add a mask to their profile picture to tell people they are practicing social-distancing.
What after COVID ends?
We have identified many ideas on what we can do with our tech post-COVID. Some of them are:
Our app can be used for concerts/stores, basically to gather people. The more they stay the more they earn. That way sponsors will get better revenue as well.
Chat with friends, hang out, and earn rewards as you have a good time! Real-time location tracking automatically detects when you and friends are out and about, so you can passively earn points all day.
Use group messaging and location sharing to stay in touch with your friends at any time. Emergency contact features let you get in touch with your circle whenever you need a helping hand because friends watch out for each other!
At the current stage, the app is used as a social-distancing app with a "home geofence". After COVID, we can extend the geofence to multiple locations, such as rewarding attendance at concerts, malls, local stores, etc. The more people stay, the more they earn, so sponsors get better revenue as well, while our app keeps rewarding, nudging, and driving beneficial consumer behavior that accelerates post-COVID economic recovery in local communities. If you think about it, we are then venturing not just into the consumer-app space but into the ad-tech space: our app can be at the forefront of hyperlocal targeted ads, directly competing with Google's and Facebook's pay-per-click.
Shopping is fun in real life. You can go with friends, treasure hunt for bargains, discover new products, better understand a brand’s vibe, ask store reps questions, and also touch and feel items so you have more confidence you’re going to love it. The shopping revolution we bring will COMBINE ecommerce and entertainment, where both are equal in importance. Toss in gamification and boom. We may see waves of new brands and more pricing transparency for everything... products and services, alike.
We can gamify people’s entire lives, everything they do, with the tech and 20+ sensors present in smartphones.
Our vision is for a healthier, happier, COVID-19-free world and we can't wait to launch this app in the global market and help make the world a better place.
You can also look at our Product Roadmap for the present and the future to get an idea of where we are headed.
App Demo Video can be found here
Built With
amazon-web-services
android-studio
firebase
geo-fencing
google-cloud
google-maps
java
postgresql
ruby-on-rails
Try it out
xd.adobe.com
www.solocoin.app
github.com
drive.google.com
docs.google.com
play.google.com | SoloCoin | Get rewarded to shop locally with your friends. Helping SMBs and local businesses towards economic recovery and recoup their losses due to COVID. | ['Arbob Mehmood', 'Adesh Bhansali', 'Aditya Sonel', 'Aayush Patni', 'Vijay Daita', 'Narayani Modi'] | ['Challenge Winner'] | ['amazon-web-services', 'android-studio', 'firebase', 'geo-fencing', 'google-cloud', 'google-maps', 'java', 'postgresql', 'ruby-on-rails'] | 54 |
10,026 | https://devpost.com/software/smart-surf | logo
Inspiration
Whenever we have to search for something and want results from different websites, search engines, news sources, and so on, it can take a lot of time switching tabs and opening websites again and again.
What it does
This app allows you to search a keyword through various search engines at the same time so that you can compare the results. It does the same for dictionaries, music, and shopping: you can search through various platforms at the same time and compare the results.
How I built it
I built the app using Kotlin in Android Studio.
Challenges I ran into
It was difficult to manage activities and fragments at the same time, as each activity contains around 4-7 fragments. The tricky part was passing keywords from one activity to another activity and then on to the required fragment.
Accomplishments that I'm proud of
Managed to complete the project and solve errors
What I learned
I learned how to use WebView properly and how to pass information from one activity to another and on to fragments. My Kotlin programming skills also improved, and I learned some design too.
What's next for Smart Surf
Implement APIs from different websites to improve the smoothness of search results.
Built With
android-studio
kotlin
Try it out
github.com | Smart Surf | One Tap Everything | ['Sunveer Singh'] | [] | ['android-studio', 'kotlin'] | 55 |
10,026 | https://devpost.com/software/a-i-powered-digital-hospital-coronavirus-laboratory-9qsact | Diagnosis Report
A.I Generated Prescription Report
Patient & Doctor Room
Mental Health Test
Digital Nurse
Mental Relaxation Exercise (Full Screen)
A.I Coronavirus Diagnostic Test
Mental Relaxation Exercise
A.I monitored Doctor Consultation
The problem our project solves:
As the Coronavirus outbreak continues to engulf the globe, the world's scarce healthcare resources threaten to be overburdened. There is 1 doctor for every 666 patients and 2.7 hospital beds per 1,000 persons. This makes overcrowding of hospitals and burnout of healthcare workers a likely scenario. The lack of qualified health personnel in remote regions, along with the high population density, is a major concern as the world fights the coronavirus.
People need someone to guide them, assist them, listen to their problems, and help them feel relaxed. It is the need of the hour to assist hospitals and laboratories by reducing their burden, and offloading the patients to a digital hospital.
The solution we bring to the table (including technical details, architecture, tools used):
Our platform provides a COVID-19 diagnostic test that analyzes the responses to the questions through integrated technology like computer vision for facial analysis, NLP to parse the user responses, location identification to calculate the distance from the nearest COVID-19 cluster, and voice recognition. These technologies help in generating the diagnosis report and determining the probability of a person having the Coronavirus. After completing the test, patients have the option to visit the doctor's room to get their report reviewed and get a consultation from the doctor instantly. The patients are sorted according to their potential risk, and high-risk patients can book the physical COVID-19 test from our partnering labs directly from the platform. There are additional rooms like the Mental Health Room, Nutritionist Room, Digital Nurse Room, and Reception Desk, just like an actual hospital, which altogether help in assisting the patient through the current challenging time and living a balanced and healthy life.
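To make the flow above concrete, here is a purely illustrative Python sketch of how the individual signals (symptom answers, facial analysis, voice analysis, distance to the nearest cluster) might be combined into a single risk score. The weights, score ranges, and 50 km cut-off are assumptions for demonstration only and are not CovidCare's actual model.

```python
# Illustrative only: a weighted combination of the sub-signals described above.
# Field names, weights, and the 50 km proximity cut-off are assumptions,
# not the platform's real scoring logic.

def covid_risk(symptom_score, facial_score, voice_score, km_to_nearest_cluster):
    """All sub-scores are assumed to be normalised to [0, 1]; returns a risk in [0, 1]."""
    # a nearby cluster raises risk; beyond ~50 km the location signal fades out
    proximity_score = max(0.0, 1.0 - km_to_nearest_cluster / 50.0)
    weights = {"symptoms": 0.5, "facial": 0.2, "proximity": 0.2, "voice": 0.1}
    return round(
        weights["symptoms"] * symptom_score
        + weights["facial"] * facial_score
        + weights["proximity"] * proximity_score
        + weights["voice"] * voice_score,
        2,
    )

# e.g. strong symptoms, mild facial/voice signals, 5 km from a known cluster
print(covid_risk(symptom_score=0.8, facial_score=0.4, voice_score=0.3, km_to_nearest_cluster=5))  # -> 0.69
```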
The solution's impact on the crisis:
In order to ensure every COVID-19 patient receives quality treatment, it is essential to offload workload to automated systems. The A.I powered digital hospital and coronavirus laboratory will help in performing millions of COVID-19 tests each minute and provide A.I monitored real-time doctor consultation to patients.
We aim to provide quality healthcare to each and every individual across the country, which is not possible through traditional hospitals which are expensive to set up.
The digital hospital and laboratory will provide the country’s best healthcare facilities to all the patients across different states through the power of A.I
The necessities in order to continue the project:
We would need to on-board more doctors on the platform to ensure that we can provide consultation and assistance to all the patients in these tough times. Also, we would require COVID-19 positive patients to take our diagnostic test so we can fine-tune our testing algorithms.
The value of your solution(s) after the crisis:
Initially, we are focusing on COVID-19 due to the current panic and increasing risk. But in the coming weeks, we would add more tests to our digital testing laboratory, which would be powered by the same A.I technology, which we are developing for our COVID-19 test. That way, by reusing the A.I technology, we can add many more tests like a general weekly health checkup, mental health checkup, diet checkup, etc. for the users of our platform. We have planned to keep all the diagnostic tests in the digital laboratory free of cost to bring a large number of users and help them alleviate the feeling of panic through an instant diagnosis. After finishing the diagnostic test, we will redirect the patients to the doctor’s room. The doctor consultation will be chargeable, which will be very inexpensive in comparison to the regular doctor fees due to the advantage of the bulk consultation bookings as compared to traditional hospitals, which can only serve a limited area based on their location. We would monitor the consultation using automated A.I for generating prescriptions and analyzing the doctor consultation in real-time to ensure that the experience was seamless and genuine for both the doctor and patients. Both the doctors and patients will have 24x7 access to their personal dashboard on the website from where they can contact each other, and access the reports, prescriptions, and other features.
Built With
computer-vision
javascript
machine-learning
mongodb
natural-language-processing
Try it out
beta.covidcare.cloud
github.com | CovidCare - A.I powered Digital Hospital and Laboratory | An end-to-end solution for COVID-19 which helps a person from diagnosis to recovery | Millions of A.I powered diagnosis tests in a minute | Accessible using any smartphone or computer | ['Kavish Goel', 'Taruna Garg', 'Stuti Kalra'] | [] | ['computer-vision', 'javascript', 'machine-learning', 'mongodb', 'natural-language-processing'] | 56 |
10,026 | https://devpost.com/software/deep-learning-drone-delivery-system | Results of our CNN-LSTM
Accuracy after training our model on 25 epochs
MSE of our CNN-LSTM
How we preprocessed data for our model
Data preprocessing
Picture of Drone
Inspiration:
The COVID-19 pandemic has caused mass panic and is leaving everyone paranoid. In a time like this, simply leaving the house leads to a high risk of contracting a fatal disease. Survival at home is also not easy: buying groceries is frightening and online ordered necessities take ages to arrive. The current delivery system still requires a ton of human contact and is not 100% virus free. All of these issues are causing a ton of paranoia regarding how people are going to keep their necessity supply stable. We wanted to find a solution that garners both efficiency and safety. Because of this, drones came into the picture(especially since one of our group members already had a drone with a camera). Drone delivery is not only efficient and safe, but also eco friendly and can reduce traffic congestion. Although there are already existing drone delivery companies, current drone navigation systems are neither robust or adaptable due to their heavy dependence on external sensors such as depth or infrared. Because of this, we wanted to create a completely autonomous and robust drone delivery system with image navigation that can easily be used in the market without supervision. In a dire time like now, a project like this could be monumentally applied to bring social wellbeing on a grand scale.
What it does:
Our project contains two parts. The first part is a deep learning algorithm that allows the drone to navigate using images taken with a camera, a novel and robust navigation technique that has never been implemented before. The second part is implementing this algorithm in a delivery system with Firebase and an iOS e-commerce application.
Using deep learning and computer vision, we were able to train a drone to navigate by itself in crowded city streets. Our model has extremely high accuracy and can safely detect, and allow the drone to navigate around, any obstacles in the drone's surroundings. We were also able to create an app that complements the drone. The drone is integrated into this app through Firebase and is the medium through which deliveries are made. The app essentially serves as an e-commerce platform that allows companies to post their different products for sale; meanwhile, customers are able to purchase these products, and the experience is similar to that of shopping in actual stores. In addition, users of the app can track the drone's GPS location for their deliveries.
How I built it:
To implement autonomous flight and allow drones to deliver packages to people swiftly, we took a machine learning approach and created a set of novel math formulas and deep learning models that focused on imitating two key aspects of driving: speed and steering. For our steering model, we first used gaussian blurring, filtering, and kernel-based edge detection techniques to preprocess the images we obtain from the drone's built-in camera. We then coded a CNN-LSTM model to predict the steering angle of the drone. The model uses a convolutional neural network as a dimensionality reduction algorithm to output a feature vector representative of the camera image, which is then fed into a long short-term memory model. The LSTM model learns time-sensitive data (i.e. video feed) to account for spatial and temporal changes, such as that of cars and walking pedestrians. Due to the nature of predicted angles (i.e. wraparound), our LSTM outputs sine and cosine values, which we use to derive our angle to steer. As for the speed model, since we cannot perform depth perception to find the exact distances obstacles are from our drone with only one camera, we used an object detection algorithm to draw bounding boxes around all possible obstacles in an image. Then, using our novel math formulas, we define a two-dimensional probability map to map each pixel from a bounding box to a probability of collision and use Fubini's theorem to integrate and sum over the boxes. The final output is the probability of collision, which we can robustly predict in a completely unsupervised fashion.
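A minimal sketch of a CNN-LSTM steering head along the lines described above, assuming tf.keras and made-up layer sizes, sequence length, and frame shape (the team's exact architecture and training pipeline are not reproduced here). The network maps a short sequence of frames to (sin, cos) of the steering angle, which is then converted back to an angle with arctan2.

```python
# Hedged sketch: a TimeDistributed CNN turns each frame into a feature vector,
# an LSTM reads the frame sequence, and the head outputs (sin, cos) of the
# steering angle. Shapes and layer sizes are illustrative assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

SEQ_LEN, H, W, C = 8, 66, 200, 3  # assumed sequence length and frame size

def build_steering_model():
    frames = layers.Input(shape=(SEQ_LEN, H, W, C))
    frame_encoder = models.Sequential([
        layers.Conv2D(24, 5, strides=2, activation="relu", input_shape=(H, W, C)),
        layers.Conv2D(36, 5, strides=2, activation="relu"),
        layers.Conv2D(48, 3, strides=2, activation="relu"),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
    ])
    x = layers.TimeDistributed(frame_encoder)(frames)  # per-frame feature vectors
    x = layers.LSTM(64)(x)                             # temporal context across frames
    sin_cos = layers.Dense(2, activation="tanh")(x)    # (sin, cos) of the steering angle
    return models.Model(frames, sin_cos)

model = build_steering_model()
model.compile(optimizer="adam", loss="mse")
pred = model.predict(np.zeros((1, SEQ_LEN, H, W, C), dtype="float32"))
angle_deg = np.degrees(np.arctan2(pred[0, 0], pred[0, 1]))  # recover the angle from (sin, cos)
print(angle_deg)
```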
We built the app in Xcode using the Swift language. Much of our app is built around a Table View with customized cells and proper constraints to display an appropriate ordering of listings. A large part of our app was created with the Firebase Database and Storage, which acts as a remote server where we stored our data. Firebase Authentication also allowed us to enable customers and companies to create their own personal accounts. For order tracking in the app, we were able to transfer the drone's location to Firebase and ultimately display its coordinates in the app using a Python script.
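As a rough sketch of the Python-to-Firebase hand-off mentioned above (the drone's GPS fix being written to Firebase so the app can display it), something along these lines would work with the firebase_admin SDK. The service-account file, database URL, and node names are placeholders, not the project's actual configuration.

```python
# Hedged sketch: write the drone's latest GPS fix to the Firebase Realtime
# Database so the app can read and display it. Paths and node names are
# hypothetical placeholders.
import firebase_admin
from firebase_admin import credentials, db

cred = credentials.Certificate("serviceAccount.json")  # placeholder key file
firebase_admin.initialize_app(cred, {"databaseURL": "https://example-project.firebaseio.com"})

def publish_drone_location(order_id, lat, lon):
    """Store the latest coordinates under the order currently being delivered."""
    db.reference(f"orders/{order_id}/drone_location").set({"lat": lat, "lon": lon})

publish_drone_location("order_123", 37.4219, -122.0841)
```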
Challenges:
The major challenge we faced was runtime. After compiling and running all our models and scripts, we had a runtime of roughly 120 seconds. Obviously, a runtime this long would not allow our program to be applicable in real life. Before we used the MobileNet CNN in our speed model, we started off with another object detection CNN called YOLOv3. We traced most of the runtime to YOLOv3's image labeling method, which sacrificed runtime in order to increase the accuracy of predicting and labeling exactly what an object was. However, this level of accuracy was not needed for our project; for example, crashing into a tree or a car results in the same thing: failure. YOLOv3 also required a non-maximal suppression algorithm which ran in O(n^3). After switching to MobileNet and performing many math optimizations in our speed and steering models, we were able to get the runtime down to 0.29 seconds as a lower bound and 1.03 seconds as an upper bound. The average runtime was 0.66 seconds and the standard deviation was 0.18, based on 150 trials. This meant that we increased our efficiency by more than 160 times.
Accomplishments:
We are proud of creating a working, intelligent system to solve a huge problem the world is facing. Although the system definitely has its limitations, it has proven to be adaptable and relatively robust, which is a huge accomplishment given the limitations of our dataset and computational capabilities. We are also proud of our probability of collision model because we were able to create a relatively robust, adaptable model with no training data.
We are also proud of how we were able to create an app that complements the drone. We were able to create a user-friendly app that is practical, efficient, and visually pleasing for both customers and companies. We were also extremely proud of the overall integration of our drone with Firebase. It is amazing how we were able to completely connect our drone with a fully functioning app and have a project that could, as of now, instantly be implemented in the marketplace.
What I learned:
Doing this project was one of the most fun and educational experiences we have ever had. Before starting, we did not have much experience with connecting hardware to software. We never imagined it would be that hard just to upload our program onto a drone, but despite all the failed attempts and challenges we faced, we were able to do it successfully. We learned and grasped the basics of integrating software with hardware, and also the difficulty behind it. In addition, through this project we gained a lot more experience working with CNNs. We learned how different preprocessing, normalization, and post-processing methods affect the robustness and complexity of our model. We also learned to care about time complexity, as it made a huge difference in our project.
What's Next:
A self-flying drone is applicable in a nearly unlimited number of applications. We propose to use our drones, in addition to autonomous delivery systems, for conservation, data gathering, natural disaster relief, and emergency medical assistance. For conservation, our drone could be used to gather data on animals by tracking them in their habitat without human interference. As for natural disaster relief, drones could scout and take risks that volunteers are unable to, due to debris and unstable infrastructure. We hope that our drone navigation program will be useful for many future applications.
We believe that there are still a few things that we can do to further improve upon our project. To further decrease runtime, we believe using GPU acceleration or a better computer will allow the program to run even faster. This then would allow the drone to fly faster, increasing its usefulness. In addition, training the model on a larger and more varied dataset would improve the drone’s flying and adaptability, making it applicable in more situations. With our current program, if you want the drone to work in another environment all you need to do is just find a dataset for that environment.
As for the app, other than polishing it and making a script that tells the drone to fly back, we think our delivery system is ready to go and can be given to companies for their usage with customers. Companies would have to purchase their own drones and upload our algorithm but other than that, the process is extremely easy and practical.
Built With
drone
firebase
keras
opencv
python
swift
tensorflow
xcode
Try it out
github.com | Autonomous Drone Delivery System | An autonomous drone delivery system to provide efficient and virus-free deliveries. | ['Allen Ye', 'Gavin Wong', 'Michael Peng'] | ['Best COVID-19 Hack', '2nd Place Hack'] | ['drone', 'firebase', 'keras', 'opencv', 'python', 'swift', 'tensorflow', 'xcode'] | 57 |
10,026 | https://devpost.com/software/flowchart-generator | Example flowchart
Flowchart-Generator
Automatically creates Flowcharts from Pseudocode!
Installation
This project was built on Python 3.7.4
Run this to install the necessary dependencies:
pip install Pillow click
Next, clone this project.
Writing the Pseudocode
The pseudocode is entered into a .txt file. It follows strict rules which must be obeyed.
Rules
STOP and START are automatically input by the program, so do not need to be added
Indents don't affect the program, so nothing has to be indented, and incorrect indentation is allowed
The capitalization of the keywords is extremely important. If an error occurs, double check if you have capitalized the keywords like "TO" and "FOR" properly
ELSE IF is not available, but nested IFs are possible
The ENDIF, NEXT var, and ENDWHILE blocks are mandatory
Syntax Guide
Input and Output:
INPUT x
OUTPUT x
INPUT X
OUTPUT var
OUTPUT "hello"
IF statements:
IF condition THEN
ELSE
ENDIF
IF x < 3 THEN
OUTPUT X
ELSE
OUTPUT x*2
ENDIF
The else statement is optional (ENDIF is still necessary)
IF x < 3 THEN
OUTPUT X
ENDIF
Process-type blocks:
x = x + 1
y = x / 2
While loops:
WHILE condition DO
ENDWHILE
WHILE x < 5 DO
OUTPUT x
ENDWHILE
For loops:
FOR var <- start TO end
NEXT var
FOR i <- 1 TO 5
OUTPUT i
NEXT i
CLI usage
To run the code, simply execute the following command:
python Converter.py
Arguments
Arguments in the CLI are typed like so:
--fontsize=20
or
--code="enter.txt"
--fontsize
is the font size used. This controls the size of the entire flowchart as well. By default it is 20px
--font
is the font path. Default is "C:/Windows/Fonts/Arial.ttf", but can be changed for different OSs or fonts
--output
is the flowchart's image file. Default is "flowchart.png"
--code
is the file with the pseudocode. Defaults to "enter.txt"
--help
provides CLI help
For example:
python Converter.py --code="code.txt" --fontsize=30 --output="result.png"
Flowchart Image
This image contains the created flowchart, which can be shared, printed, etc. Its resolution depends directly on the size of the flowchart created, so it may even reach 10k pixels! However, if the generated flowchart is too big, the image may be unopenable, so the user should be careful with flowchart sizes.
Support
If you are having issues, please let me know. You can contact me at
[email protected]
Built With
python
Try it out
github.com | Flowchart-Generator | Automatically creates Flowcharts from Pseudocode! | ['Mugilan Ganesan'] | [] | ['python'] | 58 |
10,026 | https://devpost.com/software/covengers | Arm the doctors to fight Pandemics
Inspiration
Prepare the world by building an infrastructure to fight pandemics. The world has been prepared for wars but not for pandemics. COVID-19 isn't the first and won't be the last. We need to use this time and situation to build infrastructures that would empower us to tackle any epidemic/pandemic situation, now and in the future.
What it does
Global health data repository that informs doctors in real time about the best treatment option.
How I built it
Ideation in a hackathon
Team building - recruiting people with complementary skill sets
Survey with doctors and hospitals in various nations- to understand flexibility in data
Challenges I ran into:
This is a worldwide initiative; every country has a different healthcare system and infrastructure, some more open than others. It is therefore critical to start in pockets where we can generate the most partnerships, show positive case studies, and move on.
Accomplishments that I'm proud of
Interest from Amazon Web Services in partnership
WHO support
The framework could also be used to provide a central framework for clinical trials across the globe.
What I learned
You need to take the first step to solve a problem, and then keep walking towards a solution.
What's next for COVENGERS
Partnerships with different countries/hospitals/doctors to bring it into action.
Built With
data
python-and-javascript-for-programming
r
Try it out
covengers.netlify.app | COVENGERS | A global health data repository that informs clinicians about treatment options in real time | ['nimi vashi', 'Manish Gupta', 'Shaishav Vashi'] | [] | ['data', 'python-and-javascript-for-programming', 'r'] | 59 |
10,026 | https://devpost.com/software/hospit-ai-608itu | This is our logo!
This is part of the data that we used to build this model.
Inspiration: My (Reshma's) mother is a doctor, and she told me about the challenges that hospitals are facing. I wanted to do something about the coronavirus. I (Alice), on the other hand, was searching for ways to help with the coronavirus crisis and luckily came across this hackathon. I knew that Reshma was big into science and AI, so I asked her if we could partner up and create something. We ended up creating Hospit-AI!
What it does: Our model tells hospitals when they will reach their maximum capacity.
How we built it: We used the AutoML Tables API from the Google Cloud Platform in order to build and test our model.
Challenges we ran into: We initially were not sure which angle we wanted to pursue. We wanted to address both the economic and medical impacts of COVID-19. After much thought and discussion, we decided on a project that had elements of both. Later on, we were not sure how to go about this project. A friend recommended the Google Cloud Platform (GCP) to build, optimize, and test our model, so we decided on this. However, it was still challenging to learn how to use this as both of us were completely new to it.
Accomplishments that we're proud of: Initially, we had no idea how to go about this project. We are proud that we were able to learn how to use the GCP and successfully accomplish our project.
What we learned: We learned how to use the GCP and we learned lots about Machine Learning. Most of all, we learned how to work together as a team and had a great time doing so!
What's next for Hospit-AI: We hope to further develop Hospit-AI to reflect the changing circumstances by adding more data. We eventually hope to have it implemented.
Built With
automl
google-cloud
Try it out
console.cloud.google.com | Hospit-AI | We wanted to create a machine learning project that tells hospitals when they will reach their maximum capacity, so they can plan ahead. | ['Reshma Kosaraju', 'Alice Tao'] | [] | ['automl', 'google-cloud'] | 60 |
10,026 | https://devpost.com/software/one-touch-music | Inspiration
The goal of this project is to build a product, be it a website or an app or both, to bring the people of the world together through music. As music transcends language, achieving this goal should be simple. However, it requires some prerequisite goals to be achieved first.
Goals
Create a social media website dedicated to musicians and listeners, aspiring artists, celebrities or recruiters (the need of the hour). Enable AR/VR on the website for wider reach, for online concerts and for online training with instruments.
In this way, people can be kept engaged and helped to hone their talent, making this quarantine incredibly productive instead of worrying about the crisis and the various what-ifs.
Depending on the various talent hiring agencies in the music industry, who will conduct competitions in this platform to look for talent, not unlike a virtual Indian Idol or a virtual Sa Re Ga Ma Pa, and their sponsors, the app can be monetized, not to mention premium features for musicians, made available at a basic price.
To read more:
https://docs.google.com/document/d/1dpWo8JzNkmlMRjgRJdKpZASfm2jwEFuFxEuJql0YUb4/edit?usp=sharing
What it does
Create a social media website dedicated to musicians and listeners, aspiring artists, celebrities or recruiters.
How I built it
This is a RESTful API written in Golang. The API will be consumed by a mobile/web app.
Challenges I ran into
There were numerous challenges that I encountered during this hack, one of them being incorporating video AR into the app. Another was keeping track of live trending music and artists.
Accomplishments that I'm proud of
I am extremely proud of being able to find a solution to track trending artists and music in real time. Also, since a social media platform has so many different functionalities and features, developing the entire API within the stipulated time is commendable.
What I learned
Learnt to build an API from scratch and to consume it in apps. Also learnt about database and network interactivity.
What's next for One Touch Music
Mobile and Web support for this app coming soon!
Built With
android
api
golang
rest
Try it out
github.com | One Touch Music | Bonding the world through music | ['Sam Mitra'] | [] | ['android', 'api', 'golang', 'rest'] | 61 |
10,026 | https://devpost.com/software/efficient-farming | Inspiration
What it does
A large portion of the population of India relies on agriculture for its livelihood. Agriculture is the broadest economic sector and plays a vital role in the overall economic development of a country. Technological advances in the field of agriculture will increase the efficiency of certain farming activities: the more efficiently farmers cultivate, the more profit they make at the sale of their harvests, and they achieve more productivity through modern technologies. So here we have a platform, in the form of a mobile application, where a farmer can hire farming machines, equipment, and labourers. Nowadays getting labour for farm work is hard, so in this application workers register their mobile numbers, and when a farm owner needs them he can hire them directly from the application.
How I built it
Challenges I ran into
Accomplishments that I'm proud of
What I learned
What's next for Efficient Farming
Built With
angular.js
bootstrap
css3
html5
javascript
mysql
php | Efficient Farming | WEb based Application for Farming Online Portal for Farmers | ['Akash Joshi'] | [] | ['angular.js', 'bootstrap', 'css3', 'html5', 'javascript', 'mysql', 'php'] | 62 |
10,032 | https://devpost.com/software/project-x0rl69ekbpui | صورة توضيحية للمنتوج
Inspiration
What it does
How I built it
Challenges I ran into
Accomplishments that I'm proud of
What I learned
What's next for تقويم تشخيصي عبر فيديو تربوي
Built With
edpuzzal
Try it out
edpuzzle.com | تقويم تشخيصي عبر فيديو تربوي | تقويم تفاعلي | ['قناة أحمد لهنوني التعليمية'] | [] | ['edpuzzal'] | 0 |
10,032 | https://devpost.com/software/datathon-yp6k3s | Inspiration
What it does
How I built it
Challenges I ran into
Accomplishments that I'm proud of
What I learned
What's next for Datathon
Datathon
Built With
moodle | تقويم ودعم اللغة العربية للمستوى السادس ابتدائي | Datathon | ['Chaimae Ouidadi'] | [] | ['moodle'] | 1 |
10,032 | https://devpost.com/software/l-approche-integratrice-avec-primatice | L'interface de notre blog
The idea for our project stems from the problems learners face in assimilating their lessons, given their different learning styles and paces. On top of this comes the one-way nature of most of the digital resources that currently exist.
Inspiration: differentiated pedagogy, the flipped classroom, and the theory of multiple intelligences are the foundations that inspired our project.
"L'approche intégratrice avec Primatice" is an innovative project that, as its name indicates, aims to include all learners, whatever their pace and learning style, in order to ensure their success. Inclusion also means the sensible use of new technologies by both teacher and learner. The interactivity of these tools and their diversification towards the same learning objective are among the contributions of this promising project.
The extension of the idea is to build a range of interactive digital resources for each lesson, or even each learning objective: choices that meet the urgent needs of learners and their diverse profiles.
Built With
animaker
camtasia
filmora
h5p
kahoot
photoshop
ppt
videoscrib
wix
Try it out
primatice2020.wixsite.com
drive.google.com | L'approche intégratrice avec PrimaTice | pédagogie numérique à entrées technologiques multiples, est la source inspiratrice de notre projet | ['ام لقمان Tahri Oumaima', 'Loubna Lekrebssi', 'Sana El jamyly', 'zin achel', 'majda slimi', 'Art and Education', 'Mohamed BOUFOUS'] | [] | ['animaker', 'camtasia', 'filmora', 'h5p', 'kahoot', 'photoshop', 'ppt', 'videoscrib', 'wix'] | 2 |
10,032 | https://devpost.com/software/project-aw70gsqh8y4o | Inspiration
DATATHON 2020
What it does
How I built it
Challenges I ran into
Accomplishments that I'm proud of
What I learned
What's next for فيديو تفاعلي في مادة النشاط العلمي/ المستوى الثاني
Built With
h5p
Try it out
h5p.org | فيديو تفاعلي في مادة النشاط العلمي/ المستوى الثاني | يمكن مشروعي من التعلم الذاتي | ['ABDELHAK ZEROUAL'] | [] | ['h5p'] | 3 |
10,032 | https://devpost.com/software/khawarizmiyate | Khawaizmiyate logo
capture3
capture2
capture1
Khawarizmiyate is an educational website specialised in mathematics, aimed at sixth-year primary school pupils, since this is a certificate year and also the link between the primary and secondary cycles.
What sets us apart is innovation and a focus on interactive learning. Our goal is to raise the level of mathematics teaching and learning, as a subject that is fundamental to the learner's mental development and a cumulative learning process that continues throughout the school career. We also aim to change the prevailing view of mathematics and make pupils love it by relying on active and effective learning methods. We believe in the major role interactive technology can play in this regard, and we hope the Khawarizmiyate site will be the best companion and helper for young learners in understanding mathematics, and why not creating in it.
To date the site contains seventy-five documents, including twelve resources produced since the start of the Datathon training sessions, mainly covering the distance-learning period; they will be enriched regularly with updated documents. We took care over content quality and meeting users' needs while respecting their specificities. We also focused on communication and interaction with users by creating a set of the most widely used and effective communication channels (email, a YouTube channel, Facebook, Instagram), and we did not forget to devote a space to awareness and information.
The Khawarizmiyate site also allows direct download of the various documents, out of consideration for pupils in rural areas and/or the technical problems pupils may face, especially under the current circumstances.
Try it out
khawarizmiyate-63.webself.net | Khawarizmiyate | خوارزميات هو موقع خاص بالرياضيات للسنة السادسة من التعليم الابتدائي | ['Fatimezzahra Badri', 'Jawad Ammi', 'chaaraoui rahhal', 'essaadi mohcine'] | [] | [] | 4 |
10,032 | https://devpost.com/software/unite-didactique-6-5aep | aider les élèves à mieux apprendre la langue Française /
Built With
ispring
ppt | unité didactique 6 5aep | créer un produit interactif pour idée les élèves . | ['ado mado'] | [] | ['ispring', 'ppt'] | 5 |
10,032 | https://devpost.com/software/site-educatif-marocain-sem-6o9xms | Inspiration:COVID19 ET l'enseignement à distance
What it does:site en HTML avec leçons interactives en h5p
How I built it:par HTML code et h5p files
Challenges I ran into:on all OS
Accomplishments that I'm proud of
What I learned
What's next for Site éducatif marocain(SEM)
Built With
h5p
html
Try it out
semmaroc.neocities.org | Site éducatif marocain(SEM) | Concevoir et diffuser un environnement d'apprentissage interactif au profit des apprenants. | ['mouna1603', 'Abdelali ELMORCHID', 'Nadia Naim', 'Khadija Elidrissi', 'khadija affane'] | [] | ['h5p', 'html'] | 6 |
10,032 | https://devpost.com/software/les-regles-de-la-grammaire | Inspiration
Learning
How I built it
Challenges I ran into
Accomplishments that I'm proud of
What I learned
What's next for Les règles de la grammaire
datathon 2020
Built With
h5p | Les règles de la grammaire | Apprendre et s'évaluer | ['Youssef Daife المدون المغربي'] | [] | ['h5p'] | 7 |
10,032 | https://devpost.com/software/datainfo-site-dynamique-d-apprentissage | Inspiration
What it does
How I built it
Challenges I ran into
Accomplishments that I'm proud of
What I learned
What's next for DataInfo: Site dynamique d'apprentissage
Datathon 2020
Try it out
datainfo.maginfo.info | DataInfo: Site dynamique d'apprentissage | Site dynamique d'apprentissage de l'informatique | ['WAFAE ZAGHLOUN', 'Amine BAHHAR'] | [] | [] | 8 |
10,032 | https://devpost.com/software/project-70hi2j31zexs | درس تفاعلي في مكون التطبيقات الكتابية
المشروع غبارة عم درس تفاعلي ، به تمارين تفاعلية، وشرح لاستاذ وكاننا في قسم حقيقي، حاولت ان اقف عند الصعوبات التي تواجه المتعلمين في التطبيقات الكتابية للمستوى الثالث، وقدمت بعض الانشطة الداعمة للفئة المتعثرة
Built With
activepresenter
powerpoint | درس تفاعلي في مكون التطبيقات الكتابية للمستوى الثالث | الصعوبات التي يواجهها المتعلمين في مكون التطبيقات | ['EL BALGHITI Anas'] | [] | ['activepresenter', 'powerpoint'] | 9 |
10,032 | https://devpost.com/software/datathon-groupe-lachkar-khalid-gywqth | 1
Inspiration
What it does
How I built it
Challenges I ran into
Accomplishments that I'm proud of
What I learned
What's next for datathon groupe lachkar khalid
datathon2020
Built With
ginially
h5p
liveworksheet
quiziniere
Try it out
educat2019.blogspot.com | datathon groupe lachkar khalid | ressources numérique | ['lachkar khalid'] | [] | ['ginially', 'h5p', 'liveworksheet', 'quiziniere'] | 10 |
10,032 | https://devpost.com/software/project-u6j7ptxc5kg8 | Inspiration
What it does
How I built it
Challenges I ran into
Accomplishments that I'm proud of
What I learned
What's next for الأرض في الكون
Datathon | الأرض في الكون | الإبداع التربوي | ['MGHINEF Fatima'] | [] | [] | 11 |
10,032 | https://devpost.com/software/project-g13jbtq0yl5w | Inspiration
What it does
How I built it
Challenges I ran into
Accomplishments that I'm proud of
What I learned
What's next for
datathon2020 | بوابة التعلم الذاتي | سم الله الرحمن الرحيم والصلاة والسلام على أشرف الأنبياء والمرسلين سيدنا محمد صلى الله عليه وسلم وبعد فان بوابة التعلم الذاتي جاءت لتسهل سبل التعلم الذاتي عن طريق واجهة تفاعلية جذابة سهلة الولوج | ['hassan Kafou'] | [] | [] | 12 |
10,032 | https://devpost.com/software/projet-sur-la-terre-4jenlo | Inspiration
What it does
How I built it
Challenges I ran into
Accomplishments that I'm proud of
What I learned
What's next for projet sur la terre
Datathon 2020
Built With
scenariiopale | خاصيات الكائنات الحية | جميعا لتعليم أفضل | ['Abdelilah AIT LAHCEN'] | [] | ['scenariiopale'] | 13 |
10,032 | https://devpost.com/software/licompre-lire-et-comprendre | Inspiration
What it does
How I built it
Challenges I ran into
Accomplishments that I'm proud of
What I learned
What's next for licompre - lire et comprendre
datathon | licompre - lire et comprendre | Licompre est un ensemble des texte et exercices interactifs destiné au élevés du primaire pour améliorer leurs capacités en lecture est compréhension des textes en français. | ['Mbr Mbr'] | [] | [] | 14 |
10,032 | https://devpost.com/software/project-d3cuvqy4tebz | Inspiration
the challenge of the Datathon, the need for computer education among students in rural areas, and my own experience
What it does
How I built it
We wrote the content in sheets, copied it into PowerPoint, worked on the fonts, graphics, and animation, and published it with iSpring 8 Free.
Challenges I ran into
how to make the content interactive, how to play animations, and how to finish it in time.
Accomplishments that I'm proud of
We mastered the key features of PowerPoint and other software and became able to build digital resources.
What I learned
Working on digital resources is useful: you learn new things. We tried new things and built a digital resource.
What's next for تعلماتي في الحاسوب
Continue the project so that students learn about computers, work with them, and build with them.
Built With
computer
html
ispring8free
powerpoint | تعلماتي في الحاسوب | المشروع هو المرحلة الاولى من سلسة تعلماتي في الحاسوب | ['AHMED IDBALAHCEN'] | [] | ['computer', 'html', 'ispring8free', 'powerpoint'] | 15 |
10,032 | https://devpost.com/software/project-261g04emacbw | Inspiration
As a teacher of Arabic for the first and second primary levels, I have been preparing videos since in-person classes stopped, in order to help pupils continue their learning.
What it does
This video covers the first session of the story, through which pupils discover the story and identify its elements.
How I built it
I used PowerPoint.
Challenges I ran into
By creating the video I wanted my own pupils, and other pupils as well, to benefit, so I posted it on YouTube.
Accomplishments that I'm proud of
I feel happy when I see that my videos are being watched by groups of pupils; view counts keep rising, and one of the videos has recorded more than 6,000 views.
What I learned
I learned a number of things related to preparing digital resources and understood the importance of producing them, especially for the future.
What's next for حكايات
I am working on making more videos and preparing other teaching and learning resources.
Built With
powerpoint | حكايات | إعداد فيديو خاص بالحصة الأولى من حكاية دمية وسيارة | ['Mustapha BELHADJ'] | [] | ['powerpoint'] | 16 |
10,032 | https://devpost.com/software/video-interactive-les-verbes-pronominaux-au-present | I am Morocco primary school teacher, Microsoft innovative educator expert and a Skype Master teacher. I am interested by the use of ICT in education, because it helps me to improve my pupils' skills, and feel the happiness through their eyes!
Built With
camtasia
h5p
powerpoint | Vidéo interactive: les verbes pronominaux au présent | Vidéo qui touche l'actualité Covid19, en donnant à la fois la règle de la leçon et des conseils aux apprenants, avec des exercices interactifs qui visent l'amélioration des compétences du 21ème siècle | ['BAYLA Khalid'] | [] | ['camtasia', 'h5p', 'powerpoint'] | 17 |
10,032 | https://devpost.com/software/project-xug4frvs3be5 | Inspiration
What it does
How I built it
Challenges I ran into
Accomplishments that I'm proud of
What I learned
What's next for تدريباتي في علوم اللغة
Datathon For Education | تدريباتي في علوم اللغة | محتوى تعليمي خاص بمتعلمي الثانوي التأهيلي، يمكنهم من تطبيق مكتسباتهم حول الظواهر اللغوية بغرض التعرف على أخطائهم وتصويبها، والأهم هو الوعي بسياقاتها المختلفة. | ['Youssef Laajan'] | [] | [] | 18 |
10,032 | https://devpost.com/software/video-interactive | Inspiration
What it does
How I built it
Challenges I ran into
Accomplishments that I'm proud of
What I learned
What's next for vidéo interactive
datathon2020 | projet pour datathon 2020 | vidéo interactive | ['Abderrahman Drifi'] | [] | [] | 19 |
10,032 | https://devpost.com/software/projet-datathon | Inspiration
What it does
How I built it
Challenges I ran into
Accomplishments that I'm proud of
What I learned
What's next for projet datathon
datathon 2020 | projet datathon | الفكرة هي مساعدة المتعلمين لفهم الكلمات والنصوص المقروءة | ['Prof Ahmed'] | [] | [] | 20 |
10,032 | https://devpost.com/software/la-langues-des-signes-marocaine-au-prescolaire | Inspiration
What it does
How I built it
Challenges I ran into
Accomplishments that I'm proud of
What I learned
What's next for la langues des signes marocaine au préscolaire
datathon2020 | la langues des signes marocaine au préscolaire | enseigner la langue des signes marocaines dès un jeune âge pour bien la maîtriser après | ['FOUZIYA BOULAFTALI'] | [] | [] | 21 |
10,032 | https://devpost.com/software/plaisir-de-lire | Un trouble persistant de l'acquisition du langage écrit que j'ai constaté chez plusieurs apprenants de la 4 AEP .
De grandes difficultés dans l'acquisition et l'automatisation des mécanismes essentiels à la maîtrise de l'écrit.
Ces troubles qui persistent dans le temps m'ont inspiré et encourager à chercher une solution surtout qu'après mes diagnostiques j'ai pu remarqué qu'ils se distinguent par des problèmes affectifs.
Ma ressource numérique est destinée aux enfants dyslexiques et comporte une nouvelle option (Police open Dyslexic,Lignes colorées et texte aéré ...) pour aider les personnes souffrant de ces troubles.
Compatible avec les publication Aurora suivant: Web,Scrom,Emeraude,Pdf et Postscriptum .
Merci infiniment pour ces pertinents et fructueux partages.
What it does
How I built it
Challenges I ran into
Accomplishments that I'm proud of
What I learned
What's next for Plaisir de lire
Built With
dys
html5
mode
opale3.8
Try it out
drive.google.com | Plaisir de lire | Enfants dyslexiques | ['Bouchra. بشرى Rhouddani. الغداني'] | [] | ['dys', 'html5', 'mode', 'opale3.8'] | 22 |
10,032 | https://devpost.com/software/project-ec2b74g63hpm | Inspiration
What it does
How I built it
Challenges I ran into
Accomplishments that I'm proud of
What I learned
What's next for فيديو تفاعلي حول موضوع التوالد عند الحيوان
dathaton 2020 | موضوع التوالد عند الحيوان | مورد رقمي تعلمي تفاعلي حول موضوع التوالد عند الحيوان | ['mostafa مصطفى'] | [] | [] | 23 |
10,032 | https://devpost.com/software/ressource-numerique-cours-physique | .
Inspiration
What it does
How I built it
Challenges I ran into
Accomplishments that I'm proud of
What I learned
What's next for ressource numérique cours physique
DATATHON | ressource numérique cours physique | c'est une ressource numérique cours physique avec vidéo explicative et Auto evaluation | ['Said Messaoud'] | [] | [] | 24 |
10,032 | https://devpost.com/software/le-developpement-durable | d'après les séances que j'ai assisté je me suis inspiré par les applications et les procédures qu'il faut suivre
une ressource numérique interactif en géographie secondaire
selon le model ADDIE
Challenges I ran into
Accomplishments that I'm proud of
What I learned
What's next for Le développement durable
Built With
camtasia
en
mediator | Le développement durable | un cours interactif en géographie intitulé "Le Développement durable" au niveau de tronc commun secondaire | ['abdelouahed wahid'] | [] | ['camtasia', 'en', 'mediator'] | 25 |
10,032 | https://devpost.com/software/vedio-interactive-sous-forme-jeux-educative | Inspiration
What it does
How I built it
Challenges I ran into
Accomplishments that I'm proud of
What I learned
What's next for jeux educative sous forme vedio interactive
Datathon For Eduaction | jeux educative sous forme vedio interactive | Datathon For Eduaction | ['Toufik elbannany'] | [] | [] | 26 |
10,032 | https://devpost.com/software/project-49xjwbiumntf | تعلم تفاعلي
Datathon 2020 | تعلم تفاعلي: "أقرأ وأتزكى" | تعلم تفاعلي: "أقرأ وأتزكى" | ['Mohammed Hamri'] | [] | [] | 27 |
10,032 | https://devpost.com/software/test-9rmwn2 | Inspiration
What it does
How I built it
Challenges I ran into
Accomplishments that I'm proud of
What I learned
What's next for test
datathon 2020 | test | محتوى تعليمي تفاغلي | ['ABDELLAH MENNIOUI'] | [] | [] | 28 |
10,032 | https://devpost.com/software/datathon-2020-ebc7d1 | Page d’accueil
What made the work easier was basing it on a gameplay scenario, which results in a game that pushes children to finish it while, of course, achieving the objectives of the lesson. Learning a new environment for developing e-learning resources such as Adobe Captivate did not come easily, especially with the time constraint, but my background as a developer made the task much easier.
The game was developed on the Adobe Captivate 2019 platform, the sound was integrated into the game with Audacity, and the images were processed with PhotoFiltre, an image-editing tool.
I am very satisfied with this work and with what I learned while building this game, and I hope the DATATHON does not stop at the end of this edition; I would love for us to create a Moroccan platform for continuous training, with MOOCs to sharpen teachers' skills in educational innovation.
The game is available in HTML format and also on Android devices.
Built With
adobe
audacity
captivate
photofilter
Try it out
github.com | GRAMMAIRE FRANÇAISE | The project is a game of interactive exercises for primary school pupils to master the conjugation of the verb "aller" in the present indicative, with a modern, intuitive interface. | ['https://www.youtube.com/channel/UCOllR29UhFFZ7j4R3yUR78g', 'Karroumi Yassine'] | [] | ['adobe', 'audacity', 'captivate', 'photofilter'] | 29
10,032 | https://devpost.com/software/les-propositions-subordonnees | Inspiration
What it does
How I built it
Challenges I ran into
Accomplishments that I'm proud of
What I learned
What's next for Les propositions subordonnées
datathon
Try it out
infostioughza.wixsite.com | Les propositions subordonnées | propositions | ['Abde Hammou'] | [] | [] | 30 |
10,032 | https://devpost.com/software/cours-multimedia-complet-sur-l-accord | cours multimédia complet sur l'accord
Inspiration
A complete multimedia course on grammatical agreement for learners with and without disabilities
What it does
Helping all learners with learning difficulties
How I built it
Opale+Rubis
Challenges I ran into
design
Accomplishments that I'm proud of
Multimedia digital resources
What I learned
Varying the teaching resources
What's next for cours multimédia complet sur l'accord
Improving the creation of the resources as well as their design
Built With
html
mhtml
opale
rubis
Try it out
taalimma-my.sharepoint.com
taalimma-my.sharepoint.com
taalimma-my.sharepoint.com
taalimma-my.sharepoint.com | cours multimédia complet sur l'accord | A complete multimedia course on grammatical agreement for learners with and without disabilities | ['ADIL ANNTAR'] | [] | ['html', 'mhtml', 'opale', 'rubis'] | 31
10,032 | https://devpost.com/software/project-ow9nfej74hbx | في البداية كان التحدي من اجل ايصال المعلومة الى المتعلمين ، ومع مرور الايام وبمساعدة انشطة جمعية اليس بتيزنيت ، استطعت ان انجز هذا المشروع بوفيق من الله سبحانه وتعالى ، ورغم الصعوبات وقلة الامكانيات و الانشغالات المتزايدة فالمشروع اصبح حقيقيا ،فشكرا لكل من ساهم في هذا العمل التربوي من قريب او من بعيد خصوصا المؤطرين والموجهين والمصاحبين واعضاء الفريق كله.
Built With
ar
h5p
html5
image
paint
youtub
Try it out
abderrahmanjoidate.wixsite.com | موقع الكتروني تربوي | Due to the coronavirus pandemic, I decided to create this website to help learners learn remotely | ['ABDERRAHMAN JOIDATE'] | [] | ['ar', 'h5p', 'html5', 'image', 'paint', 'youtub'] | 32
10,032 | https://devpost.com/software/la-numerisation-a-distance-des-textes-de-l-expression-orale |
The presentation of oral-expression texts relies on images and texts that are not available in the pupils' textbooks, which weakens pupils' results in this important subject. For this reason, we decided to digitize all the oral-expression texts remotely so that pupils can view them on tablets and smartphones at home.
Built With
camtasia
e-anim
h5p | La numérisation à distance des textes de l'expression orale | Now the learner can build their learning independently | ['Ismail Bahsaine'] | [] | ['camtasia', 'e-anim', 'h5p'] | 33
10,032 | https://devpost.com/software/test-7rpw6c | What it does
How I built it
Challenges I ran into
Accomplishments that I'm proud of
What I learned
datathon 2020 | تمارين تفاعلية المستوى السادس | Interactive exercises covering Arabic, French, and mathematics | ['mokhtar ibizi'] | [] | [] | 34
10,032 | https://devpost.com/software/calcul-menta | l'interface de mon projet
مورد رقمي للمستوى الأول ابتدائي
Built With
arabe
Try it out
www.samaup.co | Calcul mental | Calcul mental_1partie | ['Mounir Taghia'] | [] | ['arabe'] | 35 |
10,032 | https://devpost.com/software/project-zm3nil4b7ewg | datathon 2020
I had always had the idea of digitizing our school and keeping it in step with globalization and the rapid changes in the world of technology. The first idea was to create a simple website that would let us communicate with the school's external environment, but the first obstacle was the time required to set up such a project... Once the lockdown was announced, I decided to get to work and put the project on the table from the start of the confinement. The needs, of course, had completely changed with the situation: we had to look for services that could support pedagogical continuity, not just a communication site. The result, after long and tiring work, was a website that brings together the basic services a pupil needs to continue learning without problems.
And of course, the challenge continues...
challenge2020
Built With
bigbluebutton
blogger
caster
casterfm
drive
fm
h5p
html
html5
javascript
mixxx
supportduweb
viloud
youtube
Try it out
www.sidi-abdelmalek.com | Plateforme éducative 2.0 | A website that brings together a set of services such as school TV, school radio, lessons, interactive exercises... | ['Mounir MOUSTIR'] | [] | ['bigbluebutton', 'blogger', 'caster', 'casterfm', 'drive', 'fm', 'h5p', 'html', 'html5', 'javascript', 'mixxx', 'supportduweb', 'viloud', 'youtube'] | 36
10,038 | https://devpost.com/software/build-an-blockchain-app | Problems It Resolves
Inspiration
1. ‘Fear of Missing Out’ Blockchain Solutions
2. Opportunistic Solutions
3. Trojan Horse Projects
4. Evolutionary Blockchain Projects
5. Blockchain-Native Solutions
How I built it: I am building a blockchain app with Ethereum smart contracts, creating a todo app in the Solidity programming language. Along the way I am learning to write tests, deploy to the blockchain, and build a client-side application.
Accomplishments that I'm proud of: learning how to create a todo app with Ethereum smart contracts using Solidity, including tests, deployment to the blockchain, and a client-side application.
What I learned: how to create a todo app with Ethereum smart contracts using the Solidity programming language.
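The entry describes the todo-app build only in prose; as a rough illustration of the client-side piece, here is a minimal Python sketch using web3.py to call a hypothetical deployed TodoList contract. The contract address, ABI, and the createTask function are placeholders assumed for the example, not taken from the project.

```python
# Minimal client-side sketch (assumptions: a local development node at
# 127.0.0.1:8545 with unlocked accounts, and an already-deployed TodoList
# contract exposing createTask(string); the address and ABI below are
# placeholders, not the project's actual contract).
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))

TODO_ADDRESS = "0x0000000000000000000000000000000000000000"  # placeholder address
TODO_ABI = [{
    "name": "createTask",
    "type": "function",
    "inputs": [{"name": "_content", "type": "string"}],
    "outputs": [],
    "stateMutability": "nonpayable",
}]

todo = w3.eth.contract(address=TODO_ADDRESS, abi=TODO_ABI)

# Send a transaction that adds a task, then wait for it to be mined.
tx_hash = todo.functions.createTask("Write tests").transact({"from": w3.eth.accounts[0]})
receipt = w3.eth.wait_for_transaction_receipt(tx_hash)
print("Task created in block", receipt.blockNumber)
```

In a full build the same contract would also be exercised from tests and from the client-side application mentioned above.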
A cryptocurrency converter in JavaScript: this is my next goal, which I am currently working on.
Built With
blockchain
django
iot
Try it out
github.com | Build an Blockchain app | A Guide to the Types of Blockchain Projects Ruling the Decentralized Economy By Sudeep Srivastav | ['Situ Dash'] | [] | ['blockchain', 'django', 'iot'] | 0 |
10,038 | https://devpost.com/software/covid19-kit | dashboard
Please check the video for the features of the app
booking an appointment with the proctor/ faculty
project and document submission
creating channels for online teaching and mentorship
Please check the GitHub repo mentioned above for the app. This app is for caretakers of patients with cerebral palsy and people in wheelchairs.
body temperature, heart rate, alarm functionality with data stored in the cloud database.
dues and assignments
messaging services
online proctored tests
Inspiration
During online classes, many students verbally harass the teachers and other students, which spoils the whole environment of the class, so we decided to block these students using speech recognition technology.
We have also seen that delivering things without contact has become a major problem, so we designed a hand-gesture-controlled delivery bot that brings items to COVID-19-infected people in care centers.
What it does
The first part is a remote-education Android app which resolves all the problems stated above. It contains all the features a student would want and tries to cover every activity we used to do in offline college. It includes video call functionality with a special feature that blocks students who use abusive or bad language during a live session; the blocked student is reported to the admin, whose app receives all the related records, and the admin can unblock the student again. The app also contains a chat room for each classroom a student is enrolled in, letting students and teachers communicate as they used to do in offline college. Next comes the appointment feature: before contacting any teacher, students make an appointment to ask for their time, which reduces chaos and keeps things running under a proper protocol. Teachers wanted a way to invigilate students during tests, so the app provides a camera-proctored examination feature: a teacher can watch all students through their webcams while they take a test and can broadcast their voice to the whole class to convey messages. Finally, the app supports assignment submission: teachers upload assignment questions along with a due date, and students upload their solutions in the app itself.
How I built it
We used Android Studio to build the remote-education app, with Firebase Realtime Database as the backend. To identify abusive words, we used IBM's Speech to Text service to convert students' speech into text, then looped over the text to check it against a list of abusive words compiled from Kaggle and GitHub datasets.
For our IoT bot, we used a hand-gesture sensor; based on the gesture, the RoboCare bot moves and delivers items to patients. It can also be used as a wheelchair.
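The description above does not include code; a minimal sketch of the abusive-word check it outlines might look like the following, assuming the transcript has already been returned by IBM Watson Speech to Text and the block list was loaded from the Kaggle/GitHub word lists (the sample words and the in-memory "blocked" set are placeholders for the real Firebase update).

```python
# Illustrative sketch of the abusive-word check described above (assumption:
# the transcript string was already produced by IBM Watson Speech to Text,
# and ABUSIVE_WORDS was loaded from the Kaggle/GitHub word lists).
ABUSIVE_WORDS = {"badword1", "badword2"}  # placeholder entries


def contains_abuse(transcript: str) -> bool:
    """Return True if any word in the transcript matches the block list."""
    words = transcript.lower().split()
    return any(word.strip(".,!?") in ABUSIVE_WORDS for word in words)


def moderate(student_id: str, transcript: str, blocked: set) -> None:
    """Flag the student for the admin app if abuse is detected."""
    if contains_abuse(transcript):
        blocked.add(student_id)  # in the real app this would update Firebase
        print(f"Student {student_id} blocked and reported to admin.")


blocked_students: set = set()
moderate("student_42", "this is a badword1 example", blocked_students)
```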
Challenges I ran into
We faced many challenges, such as detecting and blocking students who speak abusive language during live classes. We wanted to make something that everyone could relate to offline college activities, so we needed proper planning and structure; the assignment section in particular needed a well-defined structure to work.
Teachers all over the globe wanted a platform for cheat-proof examinations. Our challenge was to build a cam-proctored examination with cheat-proof features, such as not being able to re-enter a test after leaving it.
Accomplishments that I'm proud of
We are proud of our abusive-language detection system, which blocks users when they use bad words. The structure we built closely mirrors offline day-to-day activities. Our cam-proctored test system restricts users from cheating and helps the invigilator monitor a test.
What I learned
We learned how to work with the Realtime Database and how to use IBM's Speech to Text service to detect abusive words. During the pandemic we also learned to use GitHub fully and to collaborate with teammates, and we picked up some new IoT techniques that helped us build the RoboCare bot.
What's next for Covid19 Kit
Going forward, we plan to build a complete, general messaging system for private and government offices that they can use to share files and letters, assign tasks, and handle everything else people normally do during office hours.
Built With
android-studio
arduino
e-learning
education.com
firebase
ibm-watson
iot
Try it out
github.com
drive.google.com
drive.google.com | Covid19 Kit | An android app, an IoT device, and a Covid19 tracker, a complete kit for students, doctors, patients, and common people. An IoT bot to follow social distancing practices. | ['Ayush Sharma', 'Elio Jordan Lopes', 'Shaolin Kataria', 'Ritik Gupta', 'DEVANSH MEHTA'] | ['The Wolfram Award'] | ['android-studio', 'arduino', 'e-learning', 'education.com', 'firebase', 'ibm-watson', 'iot'] | 1 |
10,038 | https://devpost.com/software/home-health-care-patients-tracking-application | Home Health Care Mobile
Home Health Care Sample Decision Support System
COVID-19 Risk Prediction Tool
Salesforce
The follow-up of home health care and elderly patients is not done digitally, and home health data goes unprocessed, which makes it difficult to track these patients' condition. During visits, healthcare professionals have to work out which examinations and drugs the patient previously received and what the patient's status was. With the current COVID-19 outbreak, patient visits have decreased considerably, and since patients in this group are among those at highest risk from COVID-19, hygiene requirements during visits complicate care procedures. In addition, symptom monitoring of home care patients, people in the geriatric group (65 years and older), and potential or recovering COVID-19 patients should be done remotely.
First, the information needed for the follow-up of home health care patients was prepared for data entry in the Android environment. Development was done in the Salesforce environment to store the data. A website was built in the RStudio environment around an AI-based model for monitoring the health status of home health care patients. For COVID-19 symptom follow-up, data from more than 22,000 COVID-19 patients worldwide were processed and a second website was built in the RStudio environment.
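The project's models were built in R/RStudio and served through Shiny; purely as an illustration of the kind of symptom-based risk classifier described, here is a small Python sketch with scikit-learn. The feature names, the tiny synthetic dataset, and the labels are invented for the example and are not the project's data or model.

```python
# Illustration only: the project built its model in RStudio/Shiny; this Python
# sketch shows the same general idea of a symptom-based risk classifier.
# The features and the tiny synthetic dataset below are invented for the example.
import numpy as np
from sklearn.linear_model import LogisticRegression

# columns: age, fever (0/1), cough (0/1), shortness_of_breath (0/1)
X = np.array([
    [72, 1, 1, 1],
    [35, 0, 1, 0],
    [68, 1, 0, 1],
    [29, 0, 0, 0],
    [81, 1, 1, 0],
    [45, 0, 1, 1],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = high risk, 0 = low risk (synthetic labels)

model = LogisticRegression().fit(X, y)

new_patient = np.array([[70, 1, 0, 1]])
risk = model.predict_proba(new_patient)[0, 1]
print(f"Estimated high-risk probability: {risk:.2f}")
```

A real deployment would train on the anonymized patient records mentioned above and expose the prediction through the web dashboard.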
The biggest challenge we faced was finding anonymized data to use for the decision support systems, then cleaning it and making it usable.
In the later stages of the project, video speech, voice recognition and sensor and smart watch (Apple Watch) integration will be supported.
Built With
android
api
css
flutter
html
java
r
rstudio
salesforce
Try it out
dveshealth.com
twitter.com
www.linkedin.com
www.instagram.com
dveshealthai.shinyapps.io
dveshealthai.shinyapps.io
drive.google.com | HOME HEALTH CARE PATIENTS TRACKING APPLICATION | DVESHealth provides AI based home health & elderly care decision support and monitoring mobile / web / cloud solutions. | ['Berna Kurt', 'Mustafa Aşçı', 'Asım Leblebici'] | [] | ['android', 'api', 'css', 'flutter', 'html', 'java', 'r', 'rstudio', 'salesforce'] | 2 |