hackathon_id: int64 (values 1.57k to 23.4k)
project_link: string (lengths 30 to 96)
full_desc: string (lengths 1 to 547k)
title: string (lengths 1 to 60)
brief_desc: string (lengths 1 to 200)
team_members: string (lengths 2 to 870)
prize: string (lengths 2 to 792)
tags: string (lengths 2 to 4.47k)
__index_level_0__: int64 (values 0 to 695)
10,041
https://devpost.com/software/dr-covid-9s4vdm
Dr COVID Logo Developers We are 4 developers with very different backgrounds who came together with a common objective in mind: "Educate people about the pandemic with facts in their language." A lot of people who cannot read or write often fall prey to mass misinformation and blindly believe everything that is said to them; this creates a hurdle for community efforts to break the chain. What our app does Dr COVID is your Corona Oriented Vital Information and Details assistant, which shares facts derived from trusted sources with users in both text and audio format. We provide services like quizzes, a real-time COVID dashboard, and helpline contact information, through which users can track the situation and educate themselves from the comfort of their homes. How we built it Through strong determination and teamwork. Challenges we ran into and how we approached them We had challenges thrown at us at every turn of the development process; many of the technologies we used were new to us and demanded hours of learning. We had lots of ideas, which we had to shortlist and plan carefully, since we were racing against time and each of us had to build components separately while coordinating with the rest of the team. We used a divide-and-conquer approach to spread the work evenly and make the most of our time, and finally here we are, standing together as a team with our final submission. We do intend to keep developing it and making it better for the future ahead! What we learnt We as a team learnt that nothing is impossible when developers come together to make an impact. What's next for Dr COVID This is only a small step towards our objective: "Correct and factual information for all." We are planning to expand beyond COVID and help even those who cannot read or write understand their role in the community in breaking the chain. We want to provide many more medical services through our app on an on-demand basis and at an affordable cost. Built With css dialogflow folium google-assistant google-cloud google-maps google-places html5 java javascript material-design python Try it out github.com
Dr COVID
Our app is built with a strong motive to eliminate confusion and misinformation about the COVID-19 pandemic for the common man, in the language they speak, while providing additional services.
['anush krishna v', 'Prahitha Movva', 'Akshita Sharma', 'Manish Chandra']
[]
['css', 'dialogflow', 'folium', 'google-assistant', 'google-cloud', 'google-maps', 'google-places', 'html5', 'java', 'javascript', 'material-design', 'python']
42
10,041
https://devpost.com/software/the-impossible-tic-tac-toe-game
Inspiration We are a group of beginners to programming and we've learned the basics of Python from resources on the internet. Although we learned it, we wanted to create something that validates and advances our learning in the real world and serves as motivation to learn more. What It Does It is a basic Tic Tac Toe game with a GUI built using the tkinter module. In addition to playing with a friend, it also has an option to play against, and try to beat, the impossible computer. How We Built It In the beginning our only intention was to learn to create a GUI that could take input from mouse clicks and call functions accordingly; in other words, initially we had meant it to be a simple Tic Tac Toe game which you could only play with a friend. When we finally reached that goal, we showed it to our family and friends, who were supportive but overall wouldn't prefer it to playing on a piece of paper, i.e. it was not engaging enough. That is when we decided to create a computer to play against. Initially the computer would just choose a random box among the empty boxes. Then we added code to stop the human player from winning by choosing the third winning box if two were taken by the human player, and to choose the third box if the computer had already chosen a winning two. But still, that was nowhere near interesting. So we taught the computer to set up deadlocks (forks) that can guarantee a win, but there was always a way around that if the human could set up those deadlocks first. Next, we taught the computer to counter human deadlocks. Then we tested it again by making it play against very smart people. The computer did lose, but each time it did, we noted the moves (collected data) that ended in the computer losing and taught it how to counter them. This went on a few times until no one we knew could beat the computer. Challenges We Ran Into Whenever we taught the computer something new, it often broke the old strategies the computer was programmed to use. We had to write code in a way that didn't affect the previous algorithm. Accomplishments That We're Proud Of As this was our first real project that involved any programming, it is for us a peek at the possibilities and opportunities that programming can offer. It required hard reasoning and logical thinking, and we are really happy that we didn't give up, although there were times when we thought this wouldn't work. We used math we learned in grade 11, which many people said we would never use XD. What We Learned Learned how to create a GUI using tkinter. Learned how to take user input in the form of buttons and call functions accordingly in real time. Learned object-oriented programming (although we are nowhere near perfect, we have an idea now at the very least). What's Next For The Impossible Tic Tac Toe Game It can be developed further with a more attractive UI; we mainly focused on the back end for this project, and the front end has a lot of room for improvement. We could add an option to enter players' names and store their scores in a database, and an option for the player to choose between X and O; for now the player is X and the computer is always O. Note While trying it out, please make sure you have the 'Pillow' module installed. Built With photoshop python tkinter Try it out github.com
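A minimal Python sketch (not the team's code; the board representation and helper names are illustrative) of the rule-based move selection described above: take a winning box if two are already the computer's, otherwise block the human's third winning box, otherwise fall back to another empty square.

```python
# Illustrative sketch of the rule-based computer player described above.
# Board is a list of 9 cells containing 'X', 'O', or '' (empty).
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winning_move(board, player):
    """Return the index that completes a line for `player`, or None."""
    for a, b, c in LINES:
        cells = [board[a], board[b], board[c]]
        if cells.count(player) == 2 and cells.count('') == 1:
            return (a, b, c)[cells.index('')]
    return None

def computer_move(board, computer='O', human='X'):
    # 1. Take our own winning box if two are already ours.
    move = winning_move(board, computer)
    if move is not None:
        return move
    # 2. Block the human's third winning box.
    move = winning_move(board, human)
    if move is not None:
        return move
    # 3. Otherwise fall back to the centre, then any empty square.
    for i in [4, 0, 2, 6, 8, 1, 3, 5, 7]:
        if board[i] == '':
            return i

board = ['X', 'X', '',
         '',  'O', '',
         '',  '',  '']
print(computer_move(board))  # blocks the human at index 2
```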
The Impossible Tic Tac Toe Game
A computer that can never be beaten in a Tic Tac Toe game.
['Amal Prakash', 'Andrew Chan', 'Deep Chandra', 'Manish Varrier']
['Best Design', 'Good Project']
['photoshop', 'python', 'tkinter']
43
10,041
https://devpost.com/software/the-quarantine-gamehub
The Main GameHub Sample of the Tic-Tac-Toe Game Feature Time Played Message-box Feature 'Number of Times Played' Page Feature The About Page Inspiration During this lock down, I trusted board games and computers to keep me from boredom. This also inspired me to create The Quarantine GameHub to give others the same pleasure I derived from computers and board games and help them stay cheerful and excited throughout this time. What it does The Quarantine GameHub provides users with 4 simple yet exciting games that can keep one occupied for hours together. It also comes with a "Time Played" system to give users a mental idea on how long they have been playing so nothing goes out-of-control. It also comes with a user-friendly GUI so users can thoroughly enjoy themselves. How I built it I used Python and its vast ocean of modules, including Tkinter and the Datetime module. I built each game as a separate program and used subprocess module to call them from the main program. Challenges I ran into There were a lot of challenges I ran into. Multiple loops in the Battleship game caused multiple confusions and wrong executions and it took me almost one hour to debug. Getting the GUI to display correctly was not any walk in the park either. Getting the buttons in the right places was quite a job but I managed to get it right soon after I used the grid function Accomplishments that I'm proud of When I click Submit, I would have successfully participated in my first Hackathon , which I feel is a big achievement. I am also proud of the fact that I have created, from scratch, a program that could possibly bring a smile to people's faces. What I learned I actually learned a lot while writing my code. I learned about _ Tkinter _ and its various features to create a great-looking GUI. I gained experience regarding running a program from another and getting functions from one program into another. What's next for The Quarantine GameHub A lot of new games are on the way for The Quarantine GameHub . A new and updated GUI might also come up soon. I hope to also add a login function to keep your games to yourself. In the near future, I hope to develop The Quarantine GameHub into a truly memorable application. Built With datetime python tkinter Try it out github.com
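A minimal sketch (not the project's code; the game script names are placeholders) of the hub pattern described above: a tkinter window whose buttons launch each game as a separate program via the subprocess module, with a datetime-based "time played" read-out.

```python
# Sketch of a tkinter hub that launches each game as its own process (subprocess)
# and reports time played using datetime, as described above.
import subprocess
import sys
import tkinter as tk
from tkinter import messagebox
from datetime import datetime

GAMES = {"Tic-Tac-Toe": "tic_tac_toe.py", "Battleship": "battleship.py"}  # placeholder scripts
start_time = datetime.now()

def launch(script):
    # Run the selected game as a separate program so the hub stays responsive.
    subprocess.Popen([sys.executable, script])

def show_time_played():
    elapsed = datetime.now() - start_time
    messagebox.showinfo("Time Played", f"You have been playing for {elapsed}.")

root = tk.Tk()
root.title("The Quarantine GameHub")
for name, script in GAMES.items():
    tk.Button(root, text=name, width=25, command=lambda s=script: launch(s)).grid(padx=10, pady=5)
tk.Button(root, text="Time Played", width=25, command=show_time_played).grid(padx=10, pady=5)
root.mainloop()
```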
The Quarantine GameHub
4 Games, 1 Hub, Infinite Memories...
['Anirudh Ramesh']
[]
['datetime', 'python', 'tkinter']
44
10,041
https://devpost.com/software/safetravels-pr429f
SafeTravels Logo Arduino Hardware Wiring Diagram Constructed Arduino Hardware Circuit Mobile App Signup Page Mobile App Bus Line List Without Filters Mobile App Seating Chart with Recommendation (Blue) Mobile App RFID Setup Page Mask Detection Analysis (High Confidence True) Mask Detection Analysis (High Confidence False) Audio Spectrogram for Cough Detection Admin Website View + Add Bus Lines Inspiration Public transportation is a necessity to society. However, with the rapid spread of COVID-19 through crowded areas, especially in lines like city metros and busses, public transportation and travel have taken a massive hit. In fact, since the beginning of the pandemic, it is estimated that usage of public transportation has dropped between 70-80%. We set out to create a project that would not only make public transportation safer and more informed, but also directly reduce the threat of disease transmission through public transportation, thus restoring confidence in safe public transportation. What it does SafeTravels improves safety in public transportation by enabling users to see the aggregated risk score associated with each transportation line and optimize their seating to minimize the risk of disease transfer. A unique RFID tag is tied to each user and is used to scan users into a seat and transportation line. By linking previous user history on other transportation rides, we can calculate the overall user risk and subsequently predict the transportation line risk. Based on this data, our software can recommend the safest times to travel. Furthermore, based on seating arrangements and user data, a euclidean based algorithm is utilized to calculate the safest seat to sit in within the transportation vehicle. Video analysis for mask detection and audio analysis for cough detection are also used to contribute to overall risk scores. How we built it Mobile App A mobile app was created with Flutter using the Dart programming language. Users begin by signing up or logging in and linking their RFID tag to their account. Users are able to view public transportation schedules optimized for safety risk analysis. Seat recommendations are given within each ride based on the seat with the lowest disease transfer risk. All user and transportation data is encrypted with industry-level BCrypt protocol and transferred through a secure backend server. Administrator Website The administrator website was created with React using HTML/CSS for the user interface and JavaScript for the functionality. Administrators can add transportation lines and times, as well as view existing lines. After inputting the desired parameters, the data is transferred through the server for secure storage and public access. Arduino Hardware The Hardware was created with Arduino and programmed in C++. An MFRC522 RFID reader is used to scan user RFID tags. An ESP8266 WiFi module is utilized to cross reference the RFID tag with user IDs to fill seat charts and update risk scores for transportation lines and users. If a user does not scan an RFID tag, an ultrasonic sensor is used to update the attendance without linking the specific user information. Get requests are made with the server to securely communicate data and receive the success status to display as feedback to the user. Video Analysis (Mask Detection) Video analysis is conducted at the end of every vehicle route by taking a picture of the inside and running it through a modified Mobile Net network. 
Our system uses OpenCV and Tensorflow to first use the Res10 net to detect faces and create a bounding box around the face that is then fed into our modified and trained Mobile Net network to output 2 classes, whether something is a mask or not a mask. The number of masks are counted and sent back to the server, which also triggers the recalculating of risks for all users Audio Analysis (Cough Detection) We also conduct constant local audio analysis of the bus to detect coughs and count them as another data point into our risk calculation for that ride. Our audio analysis works by splitting each audio sample into windows, conducting STFT or Short Time Fourier Transform on that to create a 2D spectrogram of size 64 x 16. This is then fed into a custom convolutional neural network created with Tensorflow that calculates the probability of a cough (using the sigmoid activator). We pulled audio and trimmed it from Youtube according to the Google AudioSet, by getting audio labeled with cough and audio labeled as speech and background noise as non_cough. We also implemented silence detection using the root mean square of the audio and a threshold to filter out silence and noise. This works in realtime and automatically increments the number on the server for each cough so the data is ready when the server recalculates risk. Backend Server The backend was created with Node.js hosted on Amazon Web Services. The backend handles POST and GET requests from the app, hardware, and Raspberry Pi to enable full functionality and integrate each system component with one another for data transfer. All sensitive data is encrypted with BCrypt and stored on Google Firebase. Risk Calculation A novel algorithm was developed to predict the risk associated with each transportation line and user. Transportation line risk aggregates each rider’s risk, mask percentage, and the duration multiplied by a standard figure for transmission. User risk uses the number of rides and risk of each ride within the last 14 days. Because transportation line risk and user risk are connection, they create a conditional probability tree (Markov chain) that continually updates with each ride Optimal Transportation Line and Seat After the risk is calculated for each transportation line and user, algorithms were developed to pinpoint the optimal line/seat to minimize disease transmission risk. For optimal transportation lines, the lowest risk score for lines within user filters is highlighted. For optimal seat, the euclidean distance between other riders and their associated risk levels is summed for each empty seat, yielding the seat with the optimal score Challenges we ran into One challenge that we ran into when doing the audio analysis was generating the correct size of spectrogram for input into the first layer of the neural network as well as experimenting with the correct window size and first layer size to determine the best accuracy. We also ran into problems when connecting our hardware to the server through http requests. Once the RFID tag could be read using the MFRC522 reader, we needed to transfer the tag id to the server to cross reference with the user id. Connecting to a WiFi network, connecting to the server, and sending the request was challenging, but we eventually figured out the libraries to use and timing sequence to successfully send a request and parse the response. 
Accomplishments that we're proud of Within the 24 hour time period, we programmed over 3000 total lines of code and achieved full functionality in all components of the system. We are especially proud that we were able to complete the video/audio analysis for mask and cough detection. We implemented various machine learning models and analysis frameworks in python to analyze images and audio samples. We were also able to find and train the model on large data sets, yielding an accuracy of over 70%, a figure that can definitely increase with a larger data set. Lastly, we are also proud that we were able to integrate 5 distinct components of the system with one another through a central server despite working remotely with one another. What we learned One skill we really learned was how to work well as a team despite being apart. We all have experience working together in person at hackathons, but working apart was challenging, especially when we are working on so many distinct components and tying them together. We also learned how to implement machine learning and neural network models for video and audio analysis. While we specifically looked for masks and coughs, we can edit the code and train with different data sets to accomplish other tasks. What's next for SafeTravels We hope to touch up on our hardware design, improve our user experience, and strengthen our algorithms to the point where SafeTravels is commercially viable. While the core functionalities are fully functional, we still have work to do until it can be used by the public. However, we feel that SafeTravels can have massive implications in society today, especially during these challenging times. We hope to make an impact with our software and help people who truly need it. Built With c++ css dart html javascript kotlin objective-c python ruby swift Try it out github.com safetravels.macrotechsolutions.us
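The seat-recommendation step described above (summing the Euclidean distance between an empty seat and the other riders, weighted by their risk) can be sketched as follows. This is an illustrative reconstruction, not the team's code; the exact way distance and risk are combined is an assumption based on the description.

```python
# Sketch of the Euclidean seat recommendation described above.
# Seats are (row, col) grid positions; each occupied seat has a rider risk score.
import math

def seat_score(empty_seat, occupied):
    """Sum of each rider's risk-weighted Euclidean distance from `empty_seat`."""
    return sum(risk * math.dist(empty_seat, seat) for seat, risk in occupied.items())

def recommend_seat(empty_seats, occupied):
    # The "safest" seat is the one farthest, in risk-weighted terms, from other riders.
    return max(empty_seats, key=lambda s: seat_score(s, occupied))

occupied = {(0, 0): 0.8, (1, 3): 0.2}        # (row, col) -> rider risk
empty_seats = [(0, 1), (2, 0), (3, 3)]
print(recommend_seat(empty_seats, occupied))  # -> (3, 3)
```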
SafeTravels
Restore confidence in safe public transportation
['Sai Vedagiri', 'Gustav Hansen', 'Elias Wambugu', 'Arya Tschand']
['Best Hardware Hack presented by Digi-Key']
['c++', 'css', 'dart', 'html', 'javascript', 'kotlin', 'objective-c', 'python', 'ruby', 'swift']
45
10,041
https://devpost.com/software/smarttracker-covid19
Inspiration: Nowadays the whole world is facing the novel coronavirus, so we wanted to track the spread of the virus country-wise, show details of confirmed, death, and recovered cases, and raise awareness regarding COVID-19. This Android app was created to spread awareness about the COVID-19 virus. What it does: The Android app, named ‘SmartTracker-Covid-19’, was created to spread awareness about the COVID-19 virus. The app includes the following functionality: CoronaEx Section - This section has the following sub-components: • News tab: the latest news updates. Fake news seems to spread just as fast as the virus, but since we integrate news from official sources, users are kept clear of fake news. • World Statistics tab: a real-time dashboard that tracks recent COVID-19 cases across the world. • India Statistics tab: coronavirus cases across different states in India with the corresponding death and recovered cases. • Prevention tab: precautions to be carried out in order to defeat corona. CoronaQuiz Section - a quiz that helps people learn about the coronavirus and its effects on the human body. It chooses random questions, shows the correct answers, and at the end users get to know their highest score. Helpline Section - as this application is made particularly for Indian citizens, all Indian state helpline numbers are included. Chatbot Section - a self-assisted bot made to help people navigate the coronavirus situation. Common questions: Start screening, What is COVID-19?, What are the symptoms? How we built it: We built it using Android Studio. For the quiz section we used an SQLite database, and live news data is integrated from the News API. For the coronavirus statistics we collected data from worldometer and coronameter. Challenges we ran into: Integrating the chatbot into the application. Accomplishments that we're proud of: Although it was our first attempt at creating a chatbot, we managed to raise our level to some extent. What's next for SmartTracker-COVID19: We will keep working on the chatbot for better conversations. Built With android-studio chatbot java news quiz sqlite Try it out github.com
SmartTracker-COVID-19
Android app to track the spread of Corona Virus (COVID-19).
['Pramod Paratabadi', 'Supriya Shivanand Madiwal .']
['Best Use of Microsoft Azure']
['android-studio', 'chatbot', 'java', 'news', 'quiz', 'sqlite']
46
10,041
https://devpost.com/software/exercise-together
Live Video Streaming Video Room Youtube enabled Live Data Syncing Search Bar Authentication DynamoDB Home Inspiration We know that physical activity and social interaction have immense benefits*. During lockdown, many people aren't able to go to the gym or see any of their friends in person. I wanted to create an app to help people get their endorphins up and see their gym buddies across the world. * https://www.cdc.gov/physicalactivity/basics/pa-health/index.htm , https://www.mercycare.org/bhs/services-programs/eap/resources/health-benefits-of-social-interaction/ What it does Exercise Together is a web app that allows 3 people to share video while watching the same Youtube exercise class and log their exercise activity. It works like this: A user visits the website and either creates and account or logs in. Amazon Cognito is used for authentication. Once authenticated, the user is directed to a dashboard depicting the amount of time spent exercising with Exercise Together. The user clicks join room and enters a room name. Up to 3 of their friends enter the same name to join the same room. The users enter a video chat room and can search for a Youtube exercise video together by utilizing the search bar. Once everything is ready, they click start exercise to begin! When the video ends, the user returns to the dashboard and their time spent exercising is logged. Exercise Together is helpful when you want to exercise with your friends and simulates an exercise class you could do at the gym like yoga or pilates. This way people can work out with their friends that are all over the world! How I built it I used react and redux to build the front end of the project. For the backend, I used Serverless functionality like Cognito, AWS Lambda, S3, DynamoDB, and App Sync. Cognito verifies the user so that I can log exercise data for every user separately. All data is stored in DynamoDB. When people enter a room, Agora.io livestreams everyone's video to each other, so they can see each other's faces while React is used to display everyone's video. Every change you make to the search bar or clicking a Youtube video is logged to DynamoDB and is logged to all the other clients in the same room through AppSync. As a result, everyone in the room can see the same view at the same time. When you finish the workout, the data is sent to DynamoDB with the email you logged in as the key for the data. On the dashboard, a get request is made back to DynamoDB, so that you can see your exercise data for the whole week. Challenges I ran into I used a wide variety of services in order to develop the application that I wasn't experienced with previously like Agora.io, AWS Amplify, and AWS AppSync. Learning them was difficult and I went through a lot of troubleshooting with those services in the code. Moreover, syncing all these services together into one application was a large challenge, and I kept trying different pieces of code one at a time to try to get them to work together. Accomplishments that I'm proud of I was able finally learn how to use web sockets (AWS AppSync uses web sockets), which I'm really excited to use for my future projects! Web sockets are especially crucial for online games, which I want to make. What I learned I learned how to use a multitude of services and link them together. For example, I learned web sockets, Agora.io, AWS Amplify, and AWS Appsync. All these services would be immensely useful for my fire projects, so I believed that I really benefited from creating this project. 
What's next for Exercise Together Some extensions I'd like to make include: Adding Fitbit and Apple Health functionality so that users who use them can see that data logged onto the website. Adding a sidebar that people could use to see which of their friends are currently online and join a room with them; in order to implement that, I would have to use AWS Neptune, which uses the same technology that Facebook uses for Facebook Friends. Creating a phone app using React Native; I feel that more people would like to use a phone app rather than the website. There are still many bugs, especially with the video streaming, since I'm using a third-party API and a free account for it. For example: the video streaming only works in Chrome. Entering the video room with more than one person is a buggy process; the way I get it to work is by duplicating the tab for each user entering and closing the previous tab. The Cognito verification link redirects to localhost, but it will confirm the account. Built With agora.io amplify appsync cognito cookie dynamodb graphql javascript lambda materialize-css node.js react redux s3 serverless ses websocket Try it out exercisetogether.rampotham.com github.com www.youtube.com
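A small sketch of the storage pattern described above (exercise minutes keyed by the signed-in user's email, read back for the weekly dashboard). The app itself goes through AWS Amplify/AppSync from JavaScript; this uses boto3 only to illustrate the record shape, and the table and attribute names are assumptions.

```python
# Sketch of the DynamoDB access pattern described above (assumed names, not the app's code).
import boto3
from boto3.dynamodb.conditions import Key
from datetime import date

table = boto3.resource("dynamodb").Table("ExerciseTogether")  # assumed table name

def log_workout(email: str, minutes: int) -> None:
    # One item per user per day: email is the partition key, the date is the sort key.
    table.put_item(Item={"email": email, "day": date.today().isoformat(), "minutes": minutes})

def weekly_minutes(email: str) -> int:
    # Read the user's items back for the dashboard; a real query would also
    # restrict the sort key to the last seven days.
    resp = table.query(KeyConditionExpression=Key("email").eq(email))
    return sum(int(item["minutes"]) for item in resp["Items"])
```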
Exercise Together
Exercise Together is a webapp that simulates your own group fitness class online with your friends
['ram potham']
['The Wolfram Award']
['agora.io', 'amplify', 'appsync', 'cognito', 'cookie', 'dynamodb', 'graphql', 'javascript', 'lambda', 'materialize-css', 'node.js', 'react', 'redux', 's3', 'serverless', 'ses', 'websocket']
47
10,041
https://devpost.com/software/liquay
Model of the Liquay Inspiration As I was relaxing at my desk, watching a YouTube video on different Asian snacks, one part of the video got my attention. As the vlogger was talking about the mountain of snacks piled on the checkout table, I noticed that instead of directly handing money to the cashier, he placed it on a tray. The cashier then took the money and placed some coins on that same tray. Teeming with curiosity, I did a quick Google search. What it does The Liquay offers a place to put money so that the cashier and the customer don't need to directly touch each other to complete an in-person transaction. This system of putting money on trays originally comes from Japan, but I am making my own version with a few changes due to the coronavirus. In addition, it's meant to be cleaned at the end of each day because of all the money it has touched. How I built it I first made a model of the tray in Autodesk Fusion 360, then I made a simple website to display some of the information about my project. After I made a presentation, I published it to YouTube and began learning how to edit the video well. Challenges I ran into Since it's been a long time since I've used Autodesk Fusion 360, I had to relearn the basics and even some advanced techniques to bring out the best in the model. Plus, my computer's GPU isn't optimal for Fusion 360, so there were a plethora of crashes and problems that I ran into. Accomplishments that I'm proud of I'm proud of launching my first complete individual project on Devpost. Plus, I'm really proud of relearning some design and implementation techniques in Autodesk Fusion 360. What I learned I learned basic and advanced techniques in Autodesk Fusion 360. I learned how to solve some of the problems with my GPU and learned a little more about computer hardware. What's next for Liquay All I'm really looking forward to is inspiring someone more qualified to release products, and I hope that the community improves on this idea. I just hope that the negative side effects of the coronavirus become alleviated through our hard work and determination. Built With autodesk-fusion-360 css3 html5 javascript w3s-css Try it out rashstha.netlify.app
Liquay
A CAD-designed cash tray to avoid direct contact in places like stores
['Rashmit Shrestha']
[]
['autodesk-fusion-360', 'css3', 'html5', 'javascript', 'w3s-css']
48
10,041
https://devpost.com/software/breakout-pandemic-edition
This is the Pandemic-edition of the game running on a Smartphone (web-browser). This is the Pandemic-edition of the game running on PC (web-browser). This is the Standard-edition of the game running on the PC (web-browser). Inspiration Breakout and COVID-19 What it does Its purpose is to help remind yourself that there is hope. How I built it Using Javascript and Visual Studio Code Challenges I ran into Hitboxes, ball speed, High-score and changing the images into COVID-19 Accomplishments that I'm proud of None. What I learned What looks simple, is complex. What looks amazing, is way more complex. What's next for Breakout-Pandemic Edition -Pause Function -Sound effects and bonuses That's pretty much it. Note This project does not use Paraccurate or EchoAR. Built With javascript Try it out breakout-pandemic-edition--godzillar34.repl.co
Breakout-Pandemic Edition
This was made just to have some fun in the pandemic we are currently in. This game's purpose is to raise awareness about COVID-19
['Hasib H.']
[]
['javascript']
49
10,041
https://devpost.com/software/find-your-way-9y80ga
Find Your Way Inspiration I wanted to build a web app where people can have fun, something like a game, which is why I built this... What it does Find Your Way is a website which anyone can access as long as they have an internet connection and a suitable device. At Find Your Way you face challenging steps which you need to pass... It all starts with a black-and-white start page; the first challenge is to find the NEXT button, which is completely black and placed on a black background. It's difficult, but if you have good tech knowledge and you are intelligent, you can find hints and locate the NEXT button. After that you get several more steps, each different from the last, that you need to pass... To understand it you must try it out... It's fun: you will understand how stupid you are, or realize how intelligent you are, if you pass the whole game without a single failure... Remember, getting LOLs means you fail... Even at the last moment you find challenges... Most people won't be able to pass this successfully; try whether you can... How I built it I built the website with the help of website builders as well as HTML. After coding all the pages, I got a free domain at Freenom. Then a hosting account was created at InfinityFree. Uploading files to my website was done through FileZilla. After that I also used Cloudflare to secure my website. Challenges I ran into Some challenges I ran into include making the video. When creating the video I ran into so much trouble that I spent almost half of the time I spent on building my website on it. It was because my video was too large due to the screen captures I had added. Also, while building the website there were some coding challenges I ran into. Accomplishments that I'm proud of I am proud of successfully building this website. Something else I am proud of is the comments I got from my friends; they were really interested in this website (they weren't in any other project I did). What I learned I learned some coding tips and a few animation effects. What's next for Find Your Way I will be building a mobile app for Find Your Way too, and I've got to add many more steps and challenges to Find Your Way. Built With cloudflare css html Try it out www.findyourway.tech www.github.com
Find Your Way
Find the way out of this tricky web app, many steps that require intelligence and co-ordination to pass...
['Senuka Rathnayake']
['SCROOOOOOOOOOLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLL']
['cloudflare', 'css', 'html']
50
10,041
https://devpost.com/software/lynz
Landing Page View Busyness Levels Page Inspiration To flatten the COVID-19 curve, we're all doing our best to minimize social interactions. If possible, we've even barricaded ourselves, moat and drawbridge, at home, hoping to wait this storm out. Even so, there is one necessity that nobody can wait out forever: grocery shopping. Trying to socially distance and shop at the same time… Lengthy checkout lines and crowded supermarkets... Having to run a 3-hour errand at Walmart just for weekly groceries... These challenges, paired with the current global situation, have led us to develop Lynz, an easy-to-use webapp that allows people to find out how busy any particular supermarket is based on live data provided by other shoppers. The hope is that by spreading accurate and actionable information, we can shop smarter and safer. What it does Users can view the busyness levels of nearby grocery stores. The busyness level is calculated using data from other users who reported the busyness level of the given store when they visited it. By providing this data to users, they can make informed decisions about when and where to go grocery shopping. How we built it This project was built using the MERN stack. On the front end, the React library was essential to design the UI and UX. When called upon, location data taken directly from the Google Maps API enables Lynz to figure out a user’s current location and display all supermarkets within a given radius. The backend is built with Node.js and Express. The backend server sends busyness information from a MongoDB database to the user and also relays busyness reports from the user to store in the database. To ensure that the busyness shown to users is accurate, we used a regression model based on exponentially weighted moving averages. This means that database entries are depreciated based on the time elapsed, giving exponentially greater weight to more recent busyness entries. Our algorithm is geared to work in real-life situations based on the assumption of mass scale. This means that sufficient data is required before an accurate busyness is displayed to users. Challenges we ran into Working with new technologies including MongoDB, Express, React, and Node.js, and integrating them together, has been challenging. Being stuck at our homes, it has also been difficult coordinating with one another effectively. We also had difficulty deploying our webapp. Accomplishments that we're proud of Prior to this hackathon, our team had no experience working with the MERN stack. We did, however, decide that if we were to build something together, it would promote change through connecting communities in the midst of the global pandemic. Bouncing off one another's prior skills, strengths, and interests, we learned the MERN stack to build Lynz. Each one of us can resolutely say that "yes, this was a challenging experience and it has also been worth the while". What we learned We learned all about the MERN stack and how to effectively collaborate with each other despite being in our own homes. What's next for Lynz Moving past our hackathon project, we plan to spread awareness and engage people around local communities to use Lynz. With more users, the busyness level data will be more accurate and benefit everyone. We would also work on steps to build a mobile version of our platform to enable users to receive notifications on selected stores and streamline both convenience and accessibility.
Built With axios bootstrap express.js firebase google-maps heroku mongodb mongoose node.js react Try it out github.com github.com lynz-frontend.web.app
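The time-decay weighting described above (older busyness reports count exponentially less than recent ones) can be sketched like this. The decay constant and report format are illustrative assumptions, not Lynz's actual parameters.

```python
# Illustrative sketch (not Lynz's code) of an exponentially decaying weighted average
# over user busyness reports for one store.
import math
import time

HALF_LIFE_S = 30 * 60  # assumed: a report's weight halves every 30 minutes
DECAY = math.log(2) / HALF_LIFE_S

def current_busyness(reports, now=None):
    """reports: list of (unix_timestamp, busyness_level) pairs for one store."""
    if now is None:
        now = time.time()
    weights = [math.exp(-DECAY * (now - ts)) for ts, _ in reports]
    if not reports or sum(weights) == 0:
        return None  # not enough data yet to show a score
    return sum(w * level for w, (_, level) in zip(weights, reports)) / sum(weights)

now = time.time()
reports = [(now - 3600, 5), (now - 600, 2), (now - 60, 1)]  # (timestamp, level 1-5)
print(round(current_busyness(reports, now), 2))  # recent quiet reports dominate
```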
Lynz
Outsmarting lines, together
['Nicholas Tao', 'Matthew Jiao', 'Adam Lam']
[]
['axios', 'bootstrap', 'express.js', 'firebase', 'google-maps', 'heroku', 'mongodb', 'mongoose', 'node.js', 'react']
51
10,042
https://devpost.com/software/peeker-subr6c
Peeker Module with Rechargeable Battery Case Front and Back View Our Seekr Team has grown! The problem our project solves: My grandfather is visually impaired and depends on others to do everyday tasks. Given that he is reliant on touch, we are worried that he will be susceptible to COVID-19. This has led him to struggle with self-isolation because of having no companionship, also making him more prone to emotional distress during this period of self-isolation. This is the reality of 200 million people who are visually impaired around 188 countries. And research shows that the number is expected to triple within the next 4 decades. As social and emotional well-being is of the utmost importance for these marginalized people, the Peeker aims to provide life-long companionship and alleviate emotional distress. The solution we bring to the table: The Seekr is a personalized, interactive voice-assistive device with a bone-conducting earpiece. To use the Seekr, the user does not require any technical knowledge and can interact freely and independently. The user interface is designed in such a way to imitate the human interaction of able-bodied people. The bone-conducting earpiece does not block their natural senses and the rotating clip allows for convenient placement of the device There are two main features of the Peeker 1) Inbuilt thermal imaging sensors which alerts the visual impaired of other people with high body temperatures around them 2) Text and Object detection through which the Peeker will alleviate loneliness and allow the experience of live companionship during this period of self-quarantine. What we have done during the Hackathon: We have developed a working prototype of the Peeker with standard features such as object and text detection within the course of the hackathon. The working prototype also receives speech input from the user and responds using voice output, making it a complete interactive device. In addition, we have developed the 3D design of our proposed final product, keeping minimalistic design, user experience, and fashion in mind. To ensure that our features meet the needs of the visually impaired, we met some visually impaired workers in the front-line such as those working at restaurants, convenience stores, and NGOs to test our prototype and discuss which features would be most useful to them even after the COVID-19 pandemic. We have also conducted thorough market research to analyze our competitors and have realized that there is no product like the Seekr especially with its adaptable and customized features. According to research, there are many other devices within the assistive device market, however, 39.9% of differently-abled people do not like to wear such devices as it usually is very bulky and medical equipment like or is blocking some of their senses. Hence, our device offers quality services without blocking any of their natural senses. The solution’s impact on the crisis: Our solution is simple. The Peeker protects the visually impaired from people with high body temperatures who might be susceptible to COVID-19. This is to encourage social distancing and reduce the chances of getting COVID-19. Furthermore, the Seekr’s objective is to reduce reliance on touch so that the visually impaired are not in contact with the surfaces unnecessarily through its features. Finally, the interactive component of the device engaged the visually impaired to alleviate emotional distress by offering companionship and living independently. 
For instance, the Peeker allows the visually impaired to conduct activities such as reading a book, distinguishing between different objects, reading road signs, etc. which they otherwise could not do without a helping hand. The necessities to continue the project: Currently, our prototype is deployed on the Raspberry Pi. Going forward, we would like to design the integrated circuit of the final product ourselves, so the device is more compact and durable. In addition, our product will also have a silicone cover which is easily identified by the visually impaired. Our current solution is trained on the COCO-object detection dataset, restricting the number of total objects detected to 80. Going forward, we want to train the model with our dataset focusing on the common items that the visually-impaired person uses on a daily basis. Subsequently, we will partner up with manufacturing companies to mass-produce the Seekr. We are also looking to collaborate with NGOs, Governmental Organisations, Optometrists, and Eye Hospitals to develop more user-friendly features so that every Seekr is customized to the needs of the individual. The value of our solution after the crisis: The Seekr allows independence with ease to the visually impaired through Peeker. For example, they would be able to go grocery shopping with the help of text detection. They would be able to navigate where things are with the help of object detection. Other added features within the premium package such as different languages, haptic feedback accessories, heart rate, and blood pressure monitor, face recognition, and color detection ensures that the Peeker is a life-long companion that accommodates the visually impaired no matter which demographic they are from. Built With coco opencv pytesseract python pyttsx3 raspberry-pi speech-recognition yolo
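The text-detection-to-speech path described above can be sketched with the same libraries listed under "Built With" (OpenCV, pytesseract, pyttsx3). This is not the team's code, and the image path is a placeholder.

```python
# Minimal sketch of the Seekr/Peeker text-to-speech path: grab a frame, run OCR
# with pytesseract, and read the result aloud with pyttsx3.
import cv2
import pytesseract
import pyttsx3

frame = cv2.imread("sample_page.jpg")              # placeholder image path
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)     # simple preprocessing before OCR
text = pytesseract.image_to_string(gray).strip()

engine = pyttsx3.init()
engine.say(text if text else "I could not find any text.")
engine.runAndWait()
```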
Seekr
Your Visual Companion
['Turzo Bose', 'Lamia Sreya Rahman', 'Kartikay Sharma']
['2nd Place', 'Top 10', '1st Place Hack']
['coco', 'opencv', 'pytesseract', 'python', 'pyttsx3', 'raspberry-pi', 'speech-recognition', 'yolo']
0
10,042
https://devpost.com/software/deep-learning-drone-delivery-system
Results of our CNN-LSTM Accuracy after training our model on 25 epochs MSE of our CNN-LSTM How we preprocessed data for our model Data preprocessing Picture of Drone Inspiration: The COVID-19 pandemic has caused mass panic and is leaving everyone paranoid. In a time like this, simply leaving the house leads to a high risk of contracting a fatal disease. Survival at home is also not easy: buying groceries is frightening and online ordered necessities take ages to arrive. The current delivery system still requires a ton of human contact and is not 100% virus free. All of these issues are causing a ton of paranoia regarding how people are going to keep their necessity supply stable. We wanted to find a solution that garners both efficiency and safety. Because of this, drones came into the picture(especially since one of our group members already had a drone with a camera). Drone delivery is not only efficient and safe, but also eco friendly and can reduce traffic congestion. Although there are already existing drone delivery companies, current drone navigation systems are neither robust or adaptable due to their heavy dependence on external sensors such as depth or infrared. Because of this, we wanted to create a completely autonomous and robust drone delivery system with image navigation that can easily be used in the market without supervision. In a dire time like now, a project like this could be monumentally applied to bring social wellbeing on a grand scale. What it does: Our project contains two parts. The first part is a deep learning algorithm that allows the drone to navigate images taken with a camera which is a novel and robust navigation technique that has never been implemented before. The second portion is actually implementing this algorithm into a delivery system with firebase and a ios ecommerce application. Using deep learning and computer vision, we were able to train a drone to navigate by itself in crowded city streets. Our model has extremely high accuracy and can safely detect and allow the drone to navigate around any obstacles in the drone’s surroundings. We were also able to create an app that compliments the drone. The drone is integrated into this app through firebase and is the medium in which deliveries are made. The app essentially serves as an ecommerce platform that allows companies to post their different products for sale; meanwhile, customers are able to purchase these products and the experience is similar to that of shopping in actual stores. In addition, the users of the app can track the drone’s gps location of their deliveries. How I built it: To implement autonomous flight and allow drones to deliver packages to people swiftly, we took a machine learning approach and created a set of novel math formulas and deep learning models that focused on imitating two key aspects of driving: speed and steering. For our steering model, we first used gaussian blurring, filtering, and kernel-based edge detection techniques to preprocess the images we obtain from the drone's built-in camera. We then coded a CNN-LSTM model to predict both the steering angle of the drone. The model uses a convolutional neural network as a dimensionality reduction algorithm to output a feature vector representative of the camera image, which is then fed into a long short-term memory model. The LSTM model learns time-sensitive data (i.e. video feed) to account for spatial and temporal changes, such as that of cars and walking pedestrians. Due to the nature of predicted angles (i.e. 
wraparound), our LSTM outputs sine and cosine values, which we use to derive our angle to steer. As for the speed model, since we cannot perform depth perception to find the exact distances obstacles are from our drone with only one camera, we used an object detection algorithm to draw bounding boxes around all possible obstacles in an image. Then, using our novel math formulas, we define a two-dimensional probability map to map each pixel from a bounding box to a probability of collision and use Fubini's theorem to integrate and sum over the boxes. The final output is the probability of collision, which we can robustly predict in a completely unsupervised fashion. We built the app using an Xcode engine with the language swift. Much of our app is built off of creating a Table View, and customized cell with proper constraints to display an appropriate ordering of listings. A large part of our app was created with the Firebase Database and Storage, which acts as a remote server where we stored our data. The Firebase authentication also allowed us to enable customers and companies to create their own personal accounts. For order tracking in the app, we were able to transfer the drone’s location to the firebase and ultimately display it's coordinates on the app using a python script. Challenges: The major challenge we faced is runtime. After compiling and running all our models and scripts, we had a runtime of roughly 120 seconds. Obviously, a runtime this long would not allow our program to be applicable in real life. Before we used the MobileNet CNN in our speed model, we started off with another object detection CNN called YOLOv3. We sourced most of the runtime to YOLOv3’s image labeling method, which sacrificed runtime in order to increase the accuracy of predicting and labeling exactly what an object was. However, this level of accuracy was not needed for our project, for example crashing into a tree or a car results in the same thing: failure. YOLOv3 also required a non-maximal suppression algorithm which ran in O(n^3). After switching to MobileNet and performing many math optimizations in our speed and steering models, we were able to get the runtime down to 0.29 seconds as a lower bound and 1.03 as an upper bound. The average runtime was 0.66 seconds and the standard deviation was 0.18 based on 150 trials. This meant that we increased our efficiency by more than 160 times. Accomplishments: We are proud of creating a working, intelligent system to solve a huge problem the world is facing. Although the system definitely has its limitations, it has proven to be adaptable and relatively robust, which is a huge accomplishment given the limitations of our dataset and computational capabilities. We are also proud of our probability of collision model because we were able to create a relatively robust, adaptable model with no training data. We were also proud how we were able to create an app that compliments the drone. We were able to create a user-friendly app that is practical, efficient and visually pleasing for both customers and companies. We were also extremely proud of the overall integration of our drone with firebase. It is amazing how we were able to completely connect our drone with a full functioning app and have a project that could as of now, instantly be implemented in the marketplace. What I learned: Doing this project was one of the most fun and knowledgeable experiences that we have ever done. 
Before starting, we did not have much experience with connecting hardware to software. We never imagined it to be that hard just to upload our program onto a drone, but despite all the failed attempts and challenges we faced, we were able to successfully do it. We learned and grasped the basics of integrating software with hardware, and also the difficulty behind it. In addition, through this project, we also gained a lot more experience working with CNN’s. We learnt how different preprocessing, normalization, and post processing methods affect the robustness and complexity of our model. We also learnt to care about time complexity, as it made a huge difference in our project. Whats Next: A self-flying drone is applicable in nearly an unlimited amount of applications. We propose to use our drones, in addition to autonomous delivery systems, for conservation, data gathering, natural disaster relief, and emergency medical assistance. For conservation, our drone could be implemented to gather data on animals by tracking them in their habitat without human interference. As for natural disaster relief, drones could scout and take risks that volunteers are unable to, due to debris and unstable infrastructure. We hope that our drone navigation program will be useful for many future applications. We believe that there are still a few things that we can do to further improve upon our project. To further decrease runtime, we believe using GPU acceleration or a better computer will allow the program to run even faster. This then would allow the drone to fly faster, increasing its usefulness. In addition, training the model on a larger and more varied dataset would improve the drone’s flying and adaptability, making it applicable in more situations. With our current program, if you want the drone to work in another environment all you need to do is just find a dataset for that environment. As for the app, other than polishing it and making a script that tells the drone to fly back, we think our delivery system is ready to go and can be given to companies for their usage with customers. Companies would have to purchase their own drones and upload our algorithm but other than that, the process is extremely easy and practical. Built With drone firebase keras opencv python swift tensorflow xcode Try it out github.com
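A compact sketch of the steering model described above: a per-frame CNN feature extractor feeding an LSTM, with a two-unit (sine, cosine) output to handle angle wraparound, and the angle recovered with atan2. The layer sizes and frame shape are illustrative assumptions, not the team's exact architecture.

```python
# Sketch (assumed layer sizes and frame shape) of a CNN-LSTM that maps a short
# sequence of camera frames to the (sine, cosine) of the steering angle.
import numpy as np
from tensorflow.keras import layers, models

SEQ_LEN, H, W, C = 8, 66, 200, 3  # assumed: 8 frames of 66x200 RGB images

def build_cnn_lstm():
    frames = layers.Input(shape=(SEQ_LEN, H, W, C))
    # The CNN runs on every frame and reduces it to a compact feature vector.
    x = layers.TimeDistributed(layers.Conv2D(24, 5, strides=2, activation="relu"))(frames)
    x = layers.TimeDistributed(layers.Conv2D(36, 5, strides=2, activation="relu"))(x)
    x = layers.TimeDistributed(layers.GlobalAveragePooling2D())(x)
    # The LSTM learns temporal context (moving cars, pedestrians) across the sequence.
    x = layers.LSTM(64)(x)
    # Two outputs in [-1, 1]: sine and cosine of the steering angle.
    out = layers.Dense(2, activation="tanh")(x)
    return models.Model(frames, out)

model = build_cnn_lstm()
model.compile(optimizer="adam", loss="mse")

# Recover the angle from the predicted (sin, cos) pair with atan2.
pred = model.predict(np.zeros((1, SEQ_LEN, H, W, C), dtype="float32"))
angle_deg = np.degrees(np.arctan2(pred[0, 0], pred[0, 1]))
print(angle_deg)
```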
Autonomous Drone Delivery System
An autonomous drone delivery system to provide efficient and virus-free deliveries.
['Allen Ye', 'Gavin Wong', 'Michael Peng']
['Best COVID-19 Hack', '2nd Place Hack']
['drone', 'firebase', 'keras', 'opencv', 'python', 'swift', 'tensorflow', 'xcode']
1
10,042
https://devpost.com/software/papure-2tpv60
paPURE Setup - Angeled View - Utilizing Snorkeling Mask paPURE Setup - Front View - Utilizing Snorkeling Mask paPURE Setup - Side View - Utilizing Snorkeling Mask paPURE Setup - Back View - Utilizing Snorkeling Mask Original Prototype of paPURE Design View paPURE Base - Top View - Inserted Compressor Fan and Fan Shroud paPURE Base - Top View - Empty Abstract: The Filtrexa paPURE is an affordable, 3D printed powered air-purifying respirator (PAPR) that provides our healthcare providers with better protection than even N95s, especially in high-risk and confined environments (E.g. ICUs, ERs). It incorporates readily available components and can be easily manufactured locally. We can thus increase accessibility of PAPR technology by enabling hospitals to produce and purchase it as per their need, optimizing the 3D-print to produce it at a cost that is over ten times cheaper than PAPRs currently offered on the market, and using exchanging highly specific components for readily available and affordable components. The Filtrexa paPURE also has made design changes to improve comfort, ease of use, and longevity of PAPR technology. Introduction One of the most immediate and impactful effects of the COVID-19 pandemic are global shortages of proper personal protective equipment (PPE), forcing healthcare providers (HCPs) to consistently work in high-risk environments and unnecessarily place their own lives at risk. Our product is a powered air-purifying respirator (PAPR) that creates a positive pressure field with filtered air to protect frontline healthcare workers from airborne threats such as SARS, TB, measles, influenza, meningitis, and most immediately COVID-19. This technology improves upon current PAPR devices in terms of cost-efficacy, ease of access, and ease of implementability. Our solution not only serves to combat general PAPR shortages across the country, but also eases PPE shortages that arise from COVID-19 and future patient surges through an on-demand 3D printing process. Value Proposition Powered, air-purifying respirators (PAPRs) are currently the gold standard in medicine when treating patients diagnosed with COVID-19 and other highly infectious respiratory diseases[1] due to their positive pressure system. This system filters air extremely effectively before it reaches the airway. However, this technology package is costly, often totaling over $1800[2] and requires highly specific components which are currently in short supply. Both well-established hospitals such as the Mayo Clinic (with a ratio of 4500 physicians to 200 PAPRs)[2] and smaller county hospitals such as the Hunterdon Medical Center (where not a single PAPR is available to physicians) are facing critical shortages of personal protective equipment (PPE). Evidently, the aforementioned barriers render PAPR technology inaccessible to most frontline HCPs, leaving them far more vulnerable to infection. Alternatives to PAPR technology include N95s, surgical masks, and currently, homemade masks due to a worldwide shortage of PPE. Although they provide a barrier against aerosols, standard and surgical N95s are easily compromised with an improper fit and have an assigned protection factor (APF) of ten[4], while PAPRs have an APF of 25 to 1000, rendering PAPRs far more effective at protecting HCPs. Additionally, physicians tend to prefer PAPRs over N95s because PAPRs are reusable, easier to breathe through, do not require fit testing, and make them feel safer[1][5]. 
Our Solution In order to provide purified air to those in the most high-risk environments, we have developed a novel, inexpensive, and accessible PAPR device that is both lightweight and 3D-printable within 24 hours. Printed using readily-available filaments (e.g. PLA, ABS), paPURE is mounted to the user’s hip and assembled via on-hand motors and batteries. (See Appendix 2.5). Through PAPR technology, HCPs are given access to filtered positive pressure air systems (in which airflow serves to seal any gaps in masks, as well as reduce respiratory fatigue in HCPs), drastically decreasing infection risk in areas such as ICUs and ERs. Our device’s customizability allows for interoperability with existing masks, filters, and hosing (See Appendix 3.1), enabling hospitals, or possibly surrounding hobbyists/machinists (regulatory dependent), to produce PAPRs for their physicians and nurses. For images and procedures: See Appendix 1 and 2. The system features a dual battery set-up that allows HCPs to utilize one or both batteries independently, as well as swap out batteries while the device is in use (such as during an extended patient procedure that a physician cannot leave from). Additionally the belt system, with the fan/chassis on you lumbar and 2 battery on ports on both hips gives a better weight distribution for improved comfort in extended usages (such as a surgeon leaning in an awkward position during the operation). The use of an inline filter means that air is pushed into a filter at the end of the device, as opposed to regular PAPRs that pull air through filters. This setup means that the risk of an imperfect seal compromising air quality is virtually nullified as no negative pressure system exists after air filtration in our device. Additionally, the aforementioned inline filters are better at filtering biological particles without disturbing airflow than standard P100s and are already used extensively in anesthesiology and respiratory care departments of hospitals across the country. After printing the device’s chassis and shroud, integration with an inline bacterial/viral filter, housing, and masks will be followed by on-site fit and efficacy testing to ensure proper device assembly.[6] Then, an HCP would don their mask, clipping the paPURE chassis and two smart power tool batteries to a provided utility belt, and connecting to the mask via a hose. At most, we expect equipping paPURE to add 1-3 minutes to a medical professional’s routine and greatly improve safety and comfort. An Improvement from Traditional PAPRs Our technology eliminates the need for a middle-man manufacturer. Because the only required components are readily available to hospitals and clinics, hospitals can produce the device as per their need. We anticipate working with local 3D-printing facilities to produce and assemble the product, then to distribute the Filtrexa PAPR to hospitals. Physicians and NIOSH officials (most notably Richard Metzler, the first Director of the National Personal Protective Technology Laboratory at NIOSH), have already given us promising feedback regarding the need for this technology, and we are looking into potential partnerships with PPE developers and/or motor manufacturers. Some hospital purchasing experts have additionally communicated a need for affordable PAPRs. Our solution is over 10 times cheaper than current PAPR technologies ($155; see Appendix 2, Figure 2), increasing likelihood of adoption. 
To allow smaller hospitals to easily obtain our technology, we plan to raise awareness of our business through phone calls and emails to hospitals throughout the country. Implementation Plan paPURE’s solution is implementable almost immediately. The main barrier between our tested prototype and implementation is FDA/NIOSH approval (FDA EUA Sec II/IV Approve NIOSH Certified Respirators). We have also identified conditions that will allow us to expedite the regulation and roll-out of the production (such as the IDE and 510(k) pathways suggested to us by regulatory experts).[15] Because our device is based on existing PAPR technology, this predicate nature, in combination with existing precedents for 3D-printed medical technology, can help expedite its deployment.[16] Our technology minimizes the need for middlemen. We are partnering with regional additive manufacturers to allow for quick, standardized, yet still decentralized production of the device. The only required components are readily available to hospitals and clinics, allowing HCPs to produce the device as per their need. Additionally, if regulatory approval permits, we may utilize local schools/universities/hospitals with on-site 3D printers in order to allow for fully decentralized manufacturing. After NIOSH Approval, our device (and depending on regulatory guidelines, possibly our CAD file) will be sent to those with 3D printers available, who could print and assemble the device (See Appendix 3.1). Players involved in the production of this technology would be hospital assembly workers, but the design is easily assembled by anyone (the only limitation being that assembly be done under a fume hood to prevent contamination). Physicians we’ve already talked to have given us promising feedback regarding the need for this technology. We are currently looking into potential partnerships with PPE developers (See Appendix 3.2) and/or motor manufacturers. Our solution is over ten times cheaper than current PAPR technologies (See Appendix 3.3), increasing the likelihood of adoption. Due especially to the length of this health crisis, hospitals are facing dire shortages of PPE. This has accelerated our timeline, but we are confident that it is feasible given the current state of emergency (See Appendix 3.4). Since this product has yet to be implemented in hospitals, we are writing to you today to gauge your interest in paPURE. Additionally, any feedback you have relating to our product or interest in helping us with laboratory testing of paPURE would be greatly appreciated. We anticipate our project reaching full fruition within 6-12 months. Our timeline is as follows. Our second iteration of prototyping for clinician testing will conclude in 2-3 weeks, followed by initial clinical testing, which will finish in around 1.5 months. As soon as clinical testing is finished and the product is validated, we will submit our product officially to NIOSH for regulatory approval. We anticipate receipt of regulatory approval within 1.5 months from submission. After approval is obtained, we will also apply for either a provisional patent or copyright, depending on legal advice. Within 1-2 months after regulatory approval, we plan to roll out our product to hospitals via centralized 3D-printing. During the next 1-2 months, we will continue to iterate and optimize the product. Official hospital rollout, with multiple 3D-printing partners and company partnerships, will occur around a month later. This will be around 6-7 months from now. 
As seen, our timeline is aggressive, as we wish to equip healthcare providers with PPE as soon as possible. The goals mentioned in our timeline are our key goals and objectives for the project at this time. Current Testing and Partnerships Technical testing is being carried out at Filtrexa's primary residence and at Johns Hopkins University and includes analysis of airflow data, battery life, and filtration efficacy. For clinical testing, we have already established connections with both Johns Hopkins Medical Institute and Stanford University. In regard to business-focused assistance, we have also partnered with FastForwardU for advising regarding intellectual property protection, strategic marketing, and clinical networking. Planned Partnerships We plan to designate one 3D-printing company (current candidates include Xometry, Protolabs, Cowtown, and Health3D) as our manufacturer during our initial launch into the market, but will continue to partner with additional 3D-printing companies as our business grows. Due to our unique manufacturing approach, all hospitals, regardless of their size, will be able to order and quickly receive PAPRs, lowering the impact of the current shortage. In order to supply the auxiliary materials such as motors, batteries, and more, we plan to initiate company partnerships with large corporations such as 3M, Dyson, Black and Decker, GE, Cuisinart, Hitachi, Makita, Shop Vac, Hoover, Bissell, Shark, iRobot, and Bosch. Additional Video https://youtu.be/iFMtzt52BEQ Appendix and Citations Click here! Website paPURE Website Built With 3dprinting cad cpap p100
paPURE
paPURE is a hospital accessible PAPR Technology utilizing 3D printing and readily available hardware to give healthcare's frontline the gold standard of personal protective equipment right now.
['Sanjana Pesari', 'Hannah Yamagata', 'Sneha Batheja', 'Joshua Devier']
['2nd Place Overall Winners', '1st Place', 'The Wolfram Award', 'The Best Business Idea', '3rd Place Hack', 'Best COVID-19 Hack']
['3dprinting', 'cad', 'cpap', 'p100']
2
10,042
https://devpost.com/software/unity-topdownshooter
Menu Screen Intense Gaming Experience Game Backstory The cats have acquired brainwashing taser equipment and have turned your friends into foes. It is up to you to save them. Objectives Shoot the cats 3 times to kill them and shoot the brainwashed mice to convert them into allied mice. Bank these mice to increase your total score. A score of 50 banked mice is required to win the game. Controls W.A.S.D for movement. Mouse to aim. Left Click or Space to shoot. Q/E to switch mice formations. R to bank mice. Game Strategies When an allied mice collides with a cat, both will be destroyed. Allied mice can be ordered into 3 different formations to protect the player. The Follow formation makes the allied mice follow behind the player mouse. The Shield formation makes the allied mice revolve around the player mouse. The Freeze formation makes the allied mice freeze in their current positions. There are cooldowns for each of these formations, so the player needs to use them strategically. Special Gameplay Features A mini-map is included in the top-left corner of the screen to allow the player to easily see incoming cats and enemy mice. Spawn rate of cats and brainwashed mice gradually increases as the player progresses in the game. When the player reaches 10, 20, 30, or 40 banked mice (scorebar at the top of the screen), a "Swarm Incoming" warning will appear while the screen flashes red to alert the player that a swarm of cats and brainwashed mice will be approaching. These swarms will become larger and larger as the player gets closer to the winning score of 50. Immersive Gameplay Features A soothing background track is played throughout the game experience, and a special track is played when the player wins. Simple cartoon artwork for the sprites and background provides the player with an immersive gaming experience. Built With asp.net c# hlsl objective-c shaderlab unity Try it out github.com
C.U.T.E.
C.U.T.E. is a creative strategic top-down shooting game made with Unity and written in C#
['Richard Cao']
['Best UI']
['asp.net', 'c#', 'hlsl', 'objective-c', 'shaderlab', 'unity']
3
10,042
https://devpost.com/software/the-advonauts
Home page of the website Inspiration So I'm in this program called Youth & Government, where students from different high schools basically advocate to solve national and global problems. We also go to conferences here in California, where students from all over California get to meet and collaborate on making the world a better place. Since these conferences revolve around politics, I get to hear loads and loads of people's opinions and thoughts. But after these events end, so many ideas are missed out on, which in turn indirectly hurts everyone, because I believe that every problem can be solved if everyone came together, regardless of race, regardless of gender, regardless of any other differences, so that the world can be a better place for all types of life on this earth of ours. What it does You can submit your ideas and thoughts on the following pages of the website, which will be received by me and I will possibly post it into Instagram (if you decide so). How I built it I used some front-end and back-end languages, along with Google Forms to display my ideas. Challenges I ran into Since I haven't really used some of these languages I've mentioned, I had to become proficient at it before I decided to go any further. But the learning process was still really fun, despite my setbacks. Accomplishments that I'm proud of I'm proud of submitting a project for my first hackathon! (Yayyy) But I'm also proud of instantly utilizing some new things that I've learned over my time with the project. What I learned I got to learn more deeply about some of the languages mentioned somewhere up above. What's next for The Advonauts Next, I'm planning on replacing the form by Google to some php powered form, after I learn it of course. Built With aos bootstrap css3 html5 node.js npm sass w3css Try it out github.com
The Advonauts
A website to gather thoughts on common problems, then display them on Instagram @theadvonauts
['Rashmit Shrestha']
['submitted the same hack to multiple hackathons and did not realize this is not a serious hackathon?']
['aos', 'bootstrap', 'css3', 'html5', 'node.js', 'npm', 'sass', 'w3css']
4
10,042
https://devpost.com/software/providing-vulnerable-workers-with-legitimate-job-postings
Inspiration The COVID-19 pandemic is affecting economies on every continent. Unemployment rates are spiking every single day, with the United States reporting around 26 million people applying for unemployment benefits, the highest recorded in its long history, millions furloughed in the United Kingdom, and thousands laid off around the world. These desperate times provide a perfect opportunity for online scammers to take advantage of the desperation and vulnerability of the millions of people looking for jobs. We see a steep rise in these fake job postings during COVID-19. In the grand scheme of things, what may start off as a harmless fake job advert has the potential of ending in human trafficking. We are trying to tackle this issue at the grassroots level. What it does We have designed a machine learning model that helps distinguish fake job adverts from genuine ones. We have trained six models and have drawn a comparison among them. To portray how our ML model can be integrated into any job portal, we have designed a mobile application that shows the integration and can be viewed from the eyes of a job seeker. Our mobile application has four features in particular: 1) Portfolio page: This page is the first page of the app post-login, which allows a job seeker to enter their employment history, much like any other job portal/app. 2) Forum: A discussion forum allowing job seekers from all around the world to share and gain advice 3) Job Finding: The main page of the app, which allows job seekers to view postings that have been run through our machine learning algorithm and have been marked as real adverts. 4) Chat feature: This feature allows job seekers to communicate with employers directly and discuss job postings and applications. How we built it We explored the data and provided insights into which industries are more affected and what the critical red flags are which can give away these fake postings. Then we applied machine learning models to predict how we can detect these counterfeit postings. In further detail: Data collection: We used an open source dataset that contained 17,880 job post details with 900 fraudulent ones. Data visualisation: We visualised the data to understand if there were any key differences between real and fake job postings, such as whether fraudulent postings tend to contain fewer words than real ones. Data split: We then split the data into training and test sets. Model Training: We trained various models such as Logistic regression, KNN, Random Forest etc. to see which model worked best for our data. Model Evaluation: Using various classification parameters, we evaluated how well our models performed. For example, our Random Forest model had a roc_auc score of 0.76. We also evaluated how each model did in comparison to the others. Immediate Impact Especially during but also after COVID-19, our application would aim to relieve vulnerable job seekers from the fear of fake job adverts. By doing so, we would be re-focusing the time spent by job seekers onto job postings that are real, and hence increase their chances of getting a job. An immediate consequence of this would be decreasing traffic onto fake job adverts, which would hopefully discourage scammers from posting fake job adverts too. Police departments don’t have the resources to investigate these incidents, and it has to be a multi-million-dollar swindle before federal authorities get involved, so the scammers just keep getting away with it. 
Hence our solution saves millions of dollars and hours of investigation, whilst protecting workers from being scammed into fake jobs and having their information misused. Revenue generated Our revenue model is based on: 1) Premium subscriptions for job seekers applying for jobs 2) Revenue from advertisements 3) Commission from employers to post jobs Funding Split 1) Testing and Development: $ 10,000 2) Team Hire Costs: $ 2000 3) Patent Application Costs: $ 125 4) Further Licensing conversations: $ 225 TOTAL: $ 12,350 Future Goals We hope to partner with LinkedIn or other job portals in a license agreement, to be able to integrate our machine learning model as a feature on their portal. Built With adobe python Try it out github.com xd.adobe.com
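The write-up above outlines the modeling pipeline (split, train, evaluate with ROC AUC) without showing code, so here is a minimal illustrative sketch of that kind of pipeline in scikit-learn. It is not the team's actual code: the CSV file name and the "description"/"fraudulent" column names are placeholder assumptions.

```python
# Hypothetical sketch of a fake-job-posting classifier pipeline (not the team's code).
# Assumes a CSV with a free-text "description" column and a 0/1 "fraudulent" label.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

df = pd.read_csv("job_postings.csv")            # placeholder path
X, y = df["description"].fillna(""), df["fraudulent"]

# Hold out a test set, stratified because only a small fraction of postings are fraudulent.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = Pipeline([
    ("tfidf", TfidfVectorizer(max_features=5000, stop_words="english")),
    ("clf", RandomForestClassifier(n_estimators=200, class_weight="balanced")),
])
model.fit(X_train, y_train)

# Evaluate with ROC AUC, the metric quoted in the write-up.
probs = model.predict_proba(X_test)[:, 1]
print("ROC AUC:", roc_auc_score(y_test, probs))
```

Other models mentioned in the description (logistic regression, KNN, etc.) could be swapped into the same pipeline to reproduce the model comparison.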
Providing vulnerable workers with legitimate job postings
Preventing vulnerable workers from the trap of fake job posting scams
['Arushi Madan', 'Arun Venugopal', 'Aerica Singla']
[]
['adobe', 'python']
5
10,042
https://devpost.com/software/stop-covid-19-spread
Inspiration Stop Covid-19 Spread is a two-way SMS, Email and database-based application that connects Covid-19 patients, health frontliners, hospitals, and the poor and needy with their donors/volunteers, physicians and medical authorities. It's time to flatten the Covid-19 Pandemic Curve (CPC). We have seen so many news reports on the tragedy created by this Covid-19 pandemic: entire nations shutting down, restaurants, groceries and businesses in panic. Even farmers like us are in trouble, as Covid-19 cases are everywhere. The poor are hungry because they have no money to buy food. Hospitals, physicians and health front-liners are running short of PPE and other equipment needed to fight Covid-19. I was afraid when I chatted with one of my friends and he told me that their hospital is being overloaded and that they lack beds, ventilators, and in fact need help in general. While Covid-19 patients, the poor, the needy, the elderly and medical frontliners all need help, it would be awesome to partner all of them together by developing an app that could solve their needs and thereby limit the spread of the virus in order to flatten the curve. Consequently, there is a serious need to get Donors/Contributors and Volunteers on board who can help, especially the poor, the needy, the elderly etc. Finally, there is another serious need for creating awareness, as some people do not even know what this virus is, what causes it, how it is transmitted and how to prevent themselves from being affected. This simply means that there is a need for Covid-19 guidelines, tips and website materials where one can get authentic information on how to contain this virus. Hello, are you listening? Our application has a solution to all of this. To programmatically solve all the problems outlined above, Stop Covid-19 Spread was born. What the Application Does The app plays 6 major roles 1.) Save Your Life: By Filling and Sending a Covid-19 Questionnaire Medical Report A user experiencing COVID-19 symptoms will use this platform to send details regarding Covid-19 related symptoms such as sore throat, cough, pneumonia etc. via SMS and Email messages to an available health provider or to the country's emergency contacts, and receive a recommendation back from the medical authority on the next line of action, which might include whether to come for a medical test or to stay at home. This medical questionnaire is propagated via Email and SMS messaging components. In the medical questionnaire, the user is allowed to respond to all the symptoms he is having as coded in the application and then send it to the appropriate medical team for recommendations on the next line of action. This will help to prevent the hospitals from being overloaded with patients during this Covid-19 pandemic. 2.) Save the Life of Others Affected: Report Infected Persons, Friends and Relations A Covid-19 patient or sufferer, or anyone, can use this platform to send information about other people they may have come in contact with before and during their Covid-19 illness, to limit Covid-19 from further spreading. To do this, the Covid-19 sufferer will forward those people's names and contacts to any emergency health authority via SMS and Email for immediate response. Those reportees (persons being reported) will immediately be visited by the medical authority for medical evaluation. This helps support contact tracing efforts to slow the spread of the novel Covid-19 virus. 
3.) Connecting Hospitals, Medical Frontliners & their Covid-19 Resource Updates: Connecting medical frontliners with those who can help them out. As most hospitals and medical frontliners are running out of Personal Protective Equipment (PPE) and other Covid-19 equipment for taking care of patients, it would be awesome to connect them with those who can help. To this effect, the application allows various hospitals and health frontliners within the vicinity or country to use the platform to update their Covid-19 resource availability or deficiency, e.g.: A.) No. of available gloves, medical wear etc. B.) No. of available beds C.) No. of available testing kits D.) No. of available ventilators E.) No. of Covid-19 patients etc. F.) What they lack and what they need, by forwarding their request via SMS to any higher medical authority, philanthropist, the country's emergency contacts and the general public within the platform. This will help to attract positive responses and help for the medical frontliners from individuals, philanthropists, communities, government etc. Again, the manufacturers of those components can also contact those hospitals for supply. Consequently, availability and live updates/posts about these resources by various medical/hospital teams will also help Covid-19 patients decide which hospital to move to, should there be any need due to the Covid-19 pandemic. The hospitals will also provide their contact info along with their locations/addresses for easy geo-mapping, location accessibility and directions on Google Maps. 4.) Physicians & Patients Connections: Connecting physicians who are willing to help with infected Covid-19 patients. How does it work? A certified and verified physician will access the application and then update/upload their social connectivity data, such as phone no., Facebook Chat Messenger ID, WhatsApp number and email address. Then a user experiencing Covid-19 related symptoms such as sore throat, cough, pneumonia etc. can contact the available physicians through Facebook Chat Messenger, mobile contacts, Email, SMS and WhatsApp about what to do next regarding the symptoms he or she is having. The medical recommendation by the physician can be whether the user should come for a test, take some drugs etc. This remote physician-to-patient social communication will help to curtail unnecessary mobility, panic and fear, and also prevent all available hospitals from being overloaded. 5.) Help the Poor and Needy: There are always poor, needy and less advantaged people around us. As nations go into lockdown and people are quarantined at home, most of these poor people are worried about food and drugs. We can help them in any way we can to survive this pandemic and ensure that food and drugs get to the needy's homes. You can help by updating your data within the platform as either a Donor/Contributor or a Volunteer. A Donor/Contributor is the one who plays the most vital role by helping the poor and the needy through money donations. The money donated will be used to buy food and drugs that will be shared and distributed to the poor and needy. Currently a donor can only donate money to the recipient account via PayPal. A Volunteer is the one who helps the poor and the needy by managing and assisting in the food distribution. A volunteer ensures that food gets directly to the needy's homes. 
The app allows a volunteer to update his contact details within the platform for ease of connection and communication regarding assisting the poor, the needy, the elderly and child care, thus ensuring that food is properly distributed to the homes of the less privileged people who need it. The Poor & Needy: the one who is asking for help with food. The application allows the needy to update their contact details along with the number of family members who need help. These contact details will be used to contact and locate them and ensure that food gets to their doorsteps without them worrying about payment. 6.) Become Informed: Get insight and tips on how to contain the Covid-19 virus. Preventing the spread of this virus starts with knowing what causes it, how it spreads, its effects and how to contain it from escalating. This app provides one with all the medical tips and guidelines on how to avoid contamination with this deadly virus. Consequently, the app also provides useful website links on where to get first-class information about this novel virus and stay safe. How I built it It was built using Ajax, jQuery, PHP, MySQL, CSS and Bootstrap. Challenges I ran into Light/power outages. We live in a community where having electricity for 2 hours in every 2 days is like celebrating Christmas. I just borrowed a generator to get light to code this. About our SMS Gateway. We are using SMS gateways that are affordable for us. In future we will be using Twilio for wide coverage. When using the application, always ensure that your mobile contact begins with a + sign followed by the country code, e.g. +145789000000. Below is the list of countries our application can send SMS to and from: Gambia Ghana India Nigeria United States Canada Cote d’Ivoire Spain Belgium Germany France Sri Lanka South Africa Netherlands Algeria Australia United Arab Emirates United Kingdom Kenya Turkey Portugal Pakistan Vietnam China Tanzania Austria Testing the application. Register your data and log in to start accessing our app functionality. If you want to use an existing account, use this email address = [email protected] password = 123 Application Platforms Our application is highly responsive and can thus fit any screen size. It runs in all major browsers, from desktops, laptops and the web to all mobile devices. What's next for Stop Covid-19 Spread Unlimited features coming soon Built With ajax bootstrap css jquery mysql php Try it out equationdev.com
Stop Covid-19 Spread
Stop Covid-19 Spread is a two-way SMS, Email and database-based application that connects Covid-19 Patients, Health Frontliners, Hospitals and the Needy with their Donors/volunteers and Medical Authority.
[]
[]
['ajax', 'bootstrap', 'css', 'jquery', 'mysql', 'php']
6
10,042
https://devpost.com/software/alerta-para-febre-atraves-do-smartphone
The main objective is to quickly identify whether the mobile phone user has a fever or not. This feature will help identify one of the symptoms of COVID-19, and of other diseases. We know that many people sometimes do not even notice that their temperature has risen, and if they have a fever they may be sick without knowing it. Fever Alert via Smartphone will be an ally for the population if we manage to put this on the phone; it will certainly be an innovation. Built With portugues
An application for fever alerts via the smartphone
Temperature by touch
['Juanice Andrade']
[]
['portugues']
7
10,042
https://devpost.com/software/mcafee
As one of the most common pieces of "free" software installed on pre-built PCs, McAfee security products are something that many people have discovered, but not everyone wants. Although you could go to the company for information, your best bet for figuring out how to uninstall McAfee is to follow the steps below. Whether you're running McAfee LiveSafe, McAfee Antivirus, McAfee Security Scan Plus, or whatever else the company has installed, this is how to remove them. Warning: Even if you remove McAfee, it is crucial to have antivirus protection on your computer. Windows Defender is great, but adding one of the best free antivirus applications is a great second step to protecting your system. How Do I Turn Off McAfee Temporarily? To remove the software that McAfee installs, here's how to uninstall its products with Windows' built-in tools. Step 1: Open the Settings menu by clicking the Start menu option. Step 2: Open the Programs menu. Step 3: Use the search box to search for McAfee and locate everything related to McAfee on your system. Step 4: As soon as you confirm, Windows will launch the McAfee uninstaller. Each variant is somewhat different, but follow the removal instructions and it will uninstall the McAfee item. The exact same process can be used to uninstall virtually any Windows program. McAfee Consumer Product Removal tool When the Windows app menu doesn't do the job for you, and there are still some elements of McAfee apps kicking around your system, you can use MCPR. Note: This tool may request a restart, so be sure to save all work before beginning. Step 1: Download the latest version of MCPR from McAfee’s website. Step 2: Run the tool. It does not need an installation. Step 3: Accept the license agreement and then enter the requested CAPTCHA code, clicking Next as needed. Step 4: Wait for the uninstall process to finish. When finished, click View Logs for additional details on the procedure. Step 5: If you are asked to restart your machine, be sure to save everything you want, and then restart as you normally would. How To Remove McAfee on a Mac? For most Mac applications, uninstalling is as simple as locating the program in its folder and dragging it to the Trash. However, McAfee applications are more difficult to remove than that. Fortunately, there is a better method you can try. Step 1: Make sure you are logged in to an administrator account if necessary. Open your Applications folder and then choose the Utilities folder. In Utilities, open Terminal. Step 2: It is possible to run a command in Terminal to uninstall McAfee applications, but it has to be exact. If you are removing McAfee version 4.8 or earlier, you will want to enter sudo /Library/McAfee/cma/uninstall.sh. If you are using version 5.0 or later of the McAfee software, you will want to enter sudo /Library/McAfee/cma/scripts/uninstall. When done, press Enter. Step 3: Once this process is complete, restart your Mac and live your life without McAfee. Even after the Terminal command removes McAfee, people can still find leftover files on the computer. In cases like this, the best option is to use a good uninstaller to remove any remaining McAfee files. AppCleaner is a totally free and fast alternative, or you can select the uninstaller you prefer!
How to uninstall McAfee
If you wish to uninstall McAfee antivirus, you can follow these few steps to do so
['Bella Swan']
[]
[]
8
10,042
https://devpost.com/software/virtualcovid
GamePlayVeiw03 GamePlay View 04 GamePlay View 01 GamePlay View 02 Virtual Gaming: CoviFitness [Virtual Covid Game] This is an extension of the Covid Raccoon game that I developed earlier. Good news is that you can play it virtually! Kindly refer to my Github repo or Youtube video for better documentation, as some images are missing here. CoviFitness is a 2D fun, interactive and awareness-building game made for kids and individuals to play with real-time moves. Are you also getting bored and lazy this Covid? Do you have a habit of skipping morning walks? Is your exercise getting rescheduled due to your sleep cycle? Do you also want flexible timing for your work-out? Well in that case, CoviFitness is a must try! Installation Instructions Open Terminal and type the following command: pip install -r requirements.txt Once all the dependencies are installed, open the terminal and type the command: python3 CoronaIntegrated.py Calibrations When you run the code, the following text will appear on your screen: Don't move during this process; it tries to detect the face and do the calibration accordingly. Note that most of the movements are defined based on the facial orientation. Once the calibration is successful, you will see this window: This ends by popping up a cv2 window which proceeds to calibration. This is for calibrating the bending height; make sure your camera is positioned in such a way that your chin is above this blue line. These lines are added with a time delay so that they won't get changed quickly and allow for reaction time. In case you want to bring down the height even more you can modify this line of code in CoronaIntegrated.py . The popup screen looks like this: Move your right hand to the top-right box to lower the line, move your left hand to the top-left box to raise the height, and once you're satisfied, move both hands to complete the calibration process. Once the calibration is complete the following screen pops up: Now enjoy the game: jump to make your character jump and crouch to make it crouch. Note that currently these controls are calibrated and the other controls (like the boxing gesture) are in test mode. Project Components Computer Vision (OpenCV python) Pygame (for building an intuitive 2D game) Inspiration I have always been curious about learning new things, whether related to STEM or something else (though I am always inclined towards STEM). Game Dev is a booming field and seems to have a promising future if correctly used. Video games are no longer just a source of fun and entertainment; today we can use this virtual technology (especially in the fields of AR and VR) to create a real-life learning experience. From the very beginning, this has drawn my attention to the field of gaming. This project is one such attempt to demonstrate how these things can actually change the way we live. Though I am well versed with other development (App-Dev / Web-Dev / Designing) and instrumentation tech (IoT / Robotics etc.), Game-Dev was always something which I wanted to learn, and that's what led me to try it here for the first time after creating a very basic 2D game, Covid Raccoon. What it does? The game is a story about a boy (here it's me) who is sanitizing the entire city and needs to safely reach his home, which is located at the end of the city. Various types of viruses, infected hosts and bats are roaming around the city, which need to be dodged or sanitized without coming in contact with them. 
Here are some of the GamePlay Screenshots: It allows you to roam around the city, which is apparently static (unless you add your custom background :-P), and look out for the virus-infected people or viruses and escape them till you reach your home, which is at the end of the city. In this adventure, you will see different types of viruses that will try to infect you, and you're on a mission to reach your home while sanitizing the city and without getting infected. To escape from the viruses, you have to ensure that you don't come in contact with them at any cost! Also, you need to keep a distance from those who are already infected. Unless you're sanitizer-protected, any incoming virus can infect you. If you catch a sanitizer (which always happens), you will be able to spray it three times before it runs out, and then you have to protect yourself against them. For all the land viruses, you need to jump and skip over them. On the other hand, for all the air viruses, you need to bend down and let them pass by. With your speed, it takes a fixed amount of time to reach the end of the city, where your sanitization process is over and you can safely keep yourself locked in until the pandemic ends! How I built it? The whole project is built solely in Python using the OpenCV (cv2) and Pygame libraries. After finishing my first project (Covid Raccoon), I started working on integrating it with vision functionality. The characters (in the form of png images) are prepared in Microsoft PowerPoint. Purpose of this Project I have created this project mainly for the following reasons: Keeping the World Corona-fit by involving people in a fun activity. This game will help them exercise in a fun way even from home. Getting Exposure to Game Development This project has pretty much helped me take my first step towards game development; slowly I will try to dive deeper, do more customization and learn more. For the Beginners and Developers For the beginners and developers who want to learn PyGame, I am planning on converting this project into a video tutorial series. This will not just help them get started with PyGame but also let them use this project as the base template for modifications in their own projects. For the Kids and Babies to Develop their Minds and Generate Awareness about Cleanliness and Sanitization This game could lead to positive awareness in the minds of kids and create a good image of the importance of sanitation and hygiene in maintaining their health. I am not sure how well it will work, but I expect it to have some positive impact. Built With computer-vision haar-cascades image-processing opencv pygame python Try it out github.com
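The description says the jump/crouch controls are driven by detecting the player's face and comparing it against a calibrated bend line. The project's own code is not shown here, so below is a hypothetical minimal sketch of that idea using OpenCV's bundled Haar cascade; the threshold values and the printed commands are placeholders, not the game's actual interface.

```python
# Illustrative sketch of face-position-based controls (not the project's actual code).
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
cap = cv2.VideoCapture(0)
bend_line_y = 300   # placeholder: would be set during the calibration step
jump_line_y = 150   # placeholder: face above this line counts as a jump

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        chin_y = y + h                      # rough proxy for the chin position
        if chin_y > bend_line_y:
            print("CROUCH")                 # the game would trigger a crouch here
        elif y < jump_line_y:
            print("JUMP")                   # the game would trigger a jump here
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("calibration view", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

In the actual game these events would be fed into the Pygame loop rather than printed, but the sketch shows how a calibrated line turns face position into discrete controls.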
Covi Fitness
A virtual gaming experience during the Covid lockdown where the game responds to your own actions and performs them on the virtual screen.
['Shivam Sahil']
[]
['computer-vision', 'haar-cascades', 'image-processing', 'opencv', 'pygame', 'python']
9
10,042
https://devpost.com/software/v-learn
Inspiration Awareness and Education are two of the essential ingredients of developing belief. Awareness has been highlighted by many as a key indicator of success in a range of performance environments. It is arguably the most important ingredient for belief, as every other skill, quality, and task you have and undertake can be traced back to awareness. Being aware will give you an insight into your beliefs and whether they are positive or holding you back. But it takes a lot more than information to make kids understand and follow things. Education, on the other hand, is important to shape an individual. I wanted to make something that helps create awareness about the dos and don'ts of Covid-19 among kids, while also being entertaining, immersive, and educational. What better way to do this than games? As VR is the best immersive technology available out there, and keeping in mind the tendency of kids to explore new things, this application was developed. What it does It is a multiplayer Virtual Reality Quiz Application. The app has many topics to choose from to play, which also helps spread awareness about COVID-19 and other preliminary things that kids need to learn. It also has a real-time leaderboard for every topic that people choose to play. How I built it A. Unity3D - It is built on Unity3D, which is a powerful cross-platform 3D engine and a user-friendly development environment. I used Unity to build the whole game, from the UI to the realtime database system to the game itself. B. Google VR SDK - a new open-source Cardboard SDK for iOS and Android. I used the Google VR SDK to develop the VR game scenes, which is not possible without it. C. Photon PUN - Photon Unity Networking (PUN) re-implements and enhances the features of Unity's built-in networking. I used it for networking. D. Google Firebase - Firebase is Google's mobile application development platform that helps you build, improve, and grow your app. I used Firebase to manage the database system: verifying credentials, storing data, retrieving data, and updating leaderboards. E. Photoshop - I used Photoshop for the development of user interface elements. Challenges I ran into As this is a multiplayer application, to store and retrieve data in real time (real-time database) I used Google Firebase (Unity SDK); integrating it with Unity was tough work. As this was the first time I was working on networking using PUN, it was a problem, as networking is not as easy as it seems. With PUN having many internal issues in my version of Unity, I had to rebuild all the non-networking scenes in a new version that supported PUN. Accomplishments that I'm proud of I could finish the development of the application in less than a day. What I learned Integration of realtime databases with Unity apps, and networking. What's next for V- Learn The VR application currently supports Android and Windows, and hence the next goal would be to make an iOS version, redefine the UI, and release it to production so that users can have an immersive experience of modern gaming and education techniques. Built With c# firebase photoshop pun unity Try it out github.com
V- Learn
Immersive Approach To Awareness and Education.
['Vasa karthik']
[]
['c#', 'firebase', 'photoshop', 'pun', 'unity']
10
10,042
https://devpost.com/software/narad
A 3D render of a Narad Unit. The Daran Labs website. The COVID-19 Cluster tracker. How many users the Cluster is getting, according to Google Analytics. We are a team of 4 dedicated high school students who each have something unique to offer but share the same passion for changing the narrative in society. Our team consists of Somesh Kar (16), Angad Singh (16), Ashvin Verma (16) and Priyanshi Ahuja (17). Team Narad came together after a member of our team saw something that he couldn't forget. He was on a family trip and crossed multiple villages along the way, but as he was approaching the end of the trip, his car broke down near a poor farmer's house. The farmer gave them some water. Our team member tried thanking him in Hindi but the farmer couldn't comprehend. Just then, another man, who could understand both Hindi and the local language, told him about how their language isn't that well known and that they don't hear it anywhere. Intrigued, our team member talked to the man about how they get their information, and to his surprise, he learnt that the village was on the other side of a very wide information and resource gap that existed due to the seemingly mundane reason of speaking a language that was not sufficiently recognised in our massive country. We learned that life isn't as simple and superficial as it seems in the urban context. We learned that there are people out there who don't have access to resources, information and language. But at the same time we learnt that anyone, even high school students like us, can take a step towards making their lives a bit easier. Considering the many services that have come up which support the urban population, especially during the coronavirus pandemic, we need one that supports the rural as well as the urban areas, and one which can be used after we pass this time as well. We've always been fascinated by microcomputers like the Raspberry Pi, and we were amazed to discover we could underclock the CPU clock speed of one core to effectively be able to transmit at FM frequencies. With the first version of what we built, we decided to take it a step further and add a simple piece of solid-gauge wire, which acted as the antenna. With this setup, we could transmit at distances over 500m with no distinguishable loss in quality. However, we soon realised a single Raspberry Pi couldn't cover the area of an entire Indian village. As such, we decided to use a mesh (a type of network topology, akin to star networks) system consisting of multiple Raspberry Pis, with only one requiring an internet connection. We settled on a WiFi-based mesh network, since Raspberry Pi Zero Ws are inexpensive and come with WiFi antennas built in. We also tried using Zigbee and LoRa (Long Range radio) for this, but soon realised the extra cost didn't carry sufficient benefits for us, since each unit being relatively cheap is a major selling point for Narad. Initially we had the plan to have a few Narad Units built with extra features such as live broadcasting in the local area, for village Sarpanchs (the heads of Indian villages) to be able to transmit mission-critical information whenever required. However, this plan didn't work out as the units would've needed a screen (preferably touch based) and a microphone, which significantly add to the cost. This led us to build the Narad app, which gives people in local governments an intuitive way to live broadcast locally relevant information. 
The Narad app is built using react native, and communicates with the Raspberry Pis in the mesh network by joining the WiFi network they create. Built With golang nextjs node.js react react-native Try it out daranlabs.now.sh github.com cluster.covid19india.org github.com
Narad
Narad solves linguistic and socio-economic information barriers. We designed a localised broadcasting device, an optional app, and a covid cluster-graph tracker (already at a 9.8m user base per Google Analytics)
['Somesh Kar', 'Ashvin Verma', 'Priyanshi Ahuja', 'Angad Singh']
['Top 10']
['golang', 'nextjs', 'node.js', 'react', 'react-native']
11
10,042
https://devpost.com/software/remote-learning
Inspiration Due to the coronavirus pandemic, most schools have closed and moved to online learning. This led to many challenges, such as difficulty engaging students and providing materials. Students are more used to in-classroom learning and they can lose motivation to do school work. For this reason, we have built a platform that allows teachers to quiz, test and engage with students in a new way. What it does Users can create private rooms and invite others via a shareable link. No sign-up is required; the user simply enters their full name to join a room. Everyone in the room can communicate in real time, and the room creator can create challenges for others to participate in. Challenges range from questions and multiple choice to whiteboarding and sketching. Once the timer has ended, all students are able to see the correct answer, everyone's answer and how they did. You can use RemoteLearning with anyone: co-workers/students/friends etc. It benefits any group of people who are looking to learn and collaborate together. How I built it WebSockets were used as the backbone of the application; they support messaging, challenges and data syncing. Challenges I ran into At the start, the idea was to integrate a video chat similar to Zoom; however, due to some technical challenges we decided to put it aside and develop an MVP with the core features (challenges). Accomplishments that I'm proud of We have implemented the application, deployed it to production and it is ready to use. The real-time syncing works pretty well when creating challenges and interacting with other users; however, this has only been tested on devices on the same network. There could be issues when using it across the globe; we will be on it when that happens. What I learned Integrating real-time data using WebSockets and WebRTC. What's next for Remote Learning Continue adding features and improving the platform, send it to users and schools, and share it around. Built With node.js react webrtc websockets Try it out remotelearning.space
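The core mechanism described above is a WebSocket backbone that relays challenges and messages to everyone in a room. The project itself is built with Node.js, which is not shown here; purely as an illustration of the room-broadcast idea, here is a minimal sketch in Python using the third-party websockets package. The message shape ({"type": "join", "room": ...}), host and port are all placeholder assumptions.

```python
# Hypothetical sketch of room-based message relaying (the real project uses Node.js).
# Requires: pip install websockets
import asyncio
import json
from collections import defaultdict

import websockets

rooms = defaultdict(set)  # room id -> set of connected sockets


async def handler(ws, path=None):
    room = None
    try:
        async for raw in ws:
            msg = json.loads(raw)
            if msg.get("type") == "join":
                room = msg["room"]
                rooms[room].add(ws)
            elif room is not None:
                # Relay challenges/answers/chat to every other member of the room.
                peers = [p for p in rooms[room] if p is not ws]
                if peers:
                    await asyncio.gather(*(p.send(raw) for p in peers))
    finally:
        if room is not None:
            rooms[room].discard(ws)


async def main():
    async with websockets.serve(handler, "localhost", 8765):
        await asyncio.Future()  # run until cancelled


if __name__ == "__main__":
    asyncio.run(main())
```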
Remote Learning
RemoteLearning aims to put the offline classroom into the digital world and allows teachers and students to learn together even if they are not in the same room.
['Sami khalil']
[]
['node.js', 'react', 'webrtc', 'websockets']
12
10,042
https://devpost.com/software/outbrake
Inspiration The World Health Organization (WHO) publishes infectious disease outbreak news with information such as locations, dates, and cases. The information is often under-utilized, as manual reading is required to act upon it. What it does Outbrake is an artificial intelligence system that can read disease outbreak news and extract the key information. How I built it Words that correspond to key entities such as disease, location, date and case are labeled with token/tag pairs like this: Ebola/B-DISEASE virus/I-DISEASE disease/O –/O Democratic/B-LOC Republic/I-LOC of/I-LOC the/I-LOC Congo/I-LOC Since/O 17/B-DATE February/I-DATE 2020/I-DATE ,/O no/B-CASE new/I-CASE cases/I-CASE have/O been/O reported/O The training data is fed to an encoder/decoder architecture with an attention mechanism to produce a machine learning model. Challenges I ran into Detecting the number of cases is difficult due to many variations such as no new cases, one confirmed case, or 1000 probable cases. Dates are also difficult to detect due to variations such as 10 April 2020, or 15 to 20 March 2020. Accomplishments that I'm proud of Open source. Fighting epidemics and pandemics requires effort from everyone. Fair and inclusive. Data is trusted and highlights third-world and poor countries often neglected by mainstream media. User privacy is protected by data aggregation and anonymization. Diseases, locations, dates, and numbers of cases are extracted in real time from the WHO disease outbreak feed. What I learned COVID-19 gets all the attention today. But infectious diseases like Ebola and MERS-CoV are ongoing threats, especially in third-world countries. Locations where these other diseases are active should take extra precautions. Extra resources like personal protective equipment should be set aside in these locations. What's next for Outbrake Outbrake lays the foundation for future-state AI systems that prevent pandemics. An example is a case tracking application that uses data extracted from disease outbreak news feeds. Users can see case trends by location. Another use case is an early-warning system for more severe infectious diseases such as MERS-CoV. Built With python tensorflow Try it out colab.research.google.com drive.google.com
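The description names an encoder/decoder architecture with attention but does not include code. As a rough, hypothetical illustration of the underlying sequence-labeling setup (one BIO tag predicted per token), here is a simplified bidirectional LSTM tagger in TensorFlow/Keras; it is deliberately simpler than the attention model described, and the vocabulary size, tag count, sequence length and training data are all placeholders.

```python
# Simplified sequence-labeling sketch for the BIO tags shown above (not Outbrake's model).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE = 10000      # placeholder word-index size
NUM_TAGS = 9            # O plus B-/I- tags for DISEASE, LOC, DATE, CASE
MAX_LEN = 80            # padded sentence length

model = tf.keras.Sequential([
    layers.Embedding(VOCAB_SIZE, 128, mask_zero=True),
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
    layers.TimeDistributed(layers.Dense(NUM_TAGS, activation="softmax")),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Dummy data with the right shapes: word ids in, one tag id per token out.
X = np.random.randint(1, VOCAB_SIZE, size=(32, MAX_LEN))
y = np.random.randint(0, NUM_TAGS, size=(32, MAX_LEN))
model.fit(X, y, epochs=1, batch_size=8)
```

In a real run, X and y would come from tokenized WHO outbreak bulletins and their BIO labels, and the predicted tag sequence would then be decoded back into disease, location, date and case spans.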
Outbrake
Put the brakes on disease outbreaks with artificial intelligence.
['Don T']
[]
['python', 'tensorflow']
13
10,042
https://devpost.com/software/covidseek
Inspiration Since the beginning of this pandemic, many people globally have been in a state of confusion and panic. Many healthcare systems need a way to allocate resources properly based on the density of the pandemic. Furthermore, many people do not know how long this virus will keep spreading. We built COVIDSeek to answer these problems by providing an accurate visualization and predictions/forecasts of the pandemic. What it does COVIDSeek is a web application that connects people and healthcare systems through accurate information and predictive analytics. Users enter their location to see a density heatmap of the virus on an international scale, which is also useful for medical practitioners and the healthcare system. They also will see the specific number of cases and deaths in their respective area on a given day. Finally, they are provided with a forecast of what cases might rise/lower to in the next 1-2 months. How we built it On the front end, we used HTML, CSS, and JavaScript through the Bootstrap web framework. On the backend, we first use the google-maps API in Python (through gmaps) to visualize the heatmap, and we passed this into an HTML file. Furthermore, we used Flask to serve the JSON data of the cases and deaths (across the world) to our front end, and SQLAlchemy as a way of storing the data schema in our database. We use the FBProphet library to statistically forecast time-series data and future cases through Bayesian analysis, logistic growth, and predictive analytics, factoring in trend shifts as well. Challenges we ran into We ran into challenges regarding the visualization of the heatmap, as well as the creation of our forecasting algorithm, as we didn't have much experience with these areas. Furthermore, serving some parts of the data to the front-end from Flask had some errors at first. It also took time to assemble data into a consolidated file for analysis, which was a bit hard in terms of finding the right content and sources. Accomplishments that we're proud of We are proud of how much progress we've made considering how new we were to libraries such as FBProphet and Flask, and the unique, special, and effective way we learned how to implement them. We learned how to create opportunities to benefit different areas across the world through data analytics, which is something that we're very proud of doing. What we learned In terms of skills, Aryan developed his front-end skills with Bootstrap and learned different ways of styling. Shreyas also developed his front-end skills while working with Aryan to structure the front-end, as well as gaining new skills in Flask and the Gmaps API. We learnt that there are numerous ways an individual can help the world around them through computer science. What's next for COVIDSeek In the future, we want to add a user-interactive search bar that places a marker on the user's location and zooms into the map, as well as a way for users to report symptoms/cases on the map. We also want to add more features, such as nearby testing sites, hospitals, and nearby stores with the resources people might need. Overall, we want to make this web app more scalable worldwide. Built With bootstrap css3 fbprophet flask google-maps html5 javascript matplotlib numpy pandas python sqlalchemy Try it out github.com
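The forecasting step described above (FBProphet with logistic growth and trend changepoints) can be sketched in a few lines. This is not the team's actual code: the CSV path, the "date"/"cases" column names, the assumed carrying capacity and the 60-day horizon are all placeholders used only to show the shape of the approach.

```python
# Rough sketch of the FBProphet forecasting step (not the team's actual code).
import pandas as pd
from fbprophet import Prophet

df = pd.read_csv("daily_cases.csv")                 # placeholder path
history = pd.DataFrame({
    "ds": pd.to_datetime(df["date"]),               # Prophet expects ds/y columns
    "y": df["cases"],
})

# Logistic growth in Prophet needs a carrying capacity ("cap"); assumed here.
history["cap"] = history["y"].max() * 3
model = Prophet(growth="logistic")
model.fit(history)

future = model.make_future_dataframe(periods=60)    # forecast ~2 months ahead
future["cap"] = history["cap"].iloc[0]
forecast = model.predict(future)
print(forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail())
```

The yhat_lower/yhat_upper columns give the uncertainty band that a dashboard like COVIDSeek could plot alongside the central forecast.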
COVIDSeek
Serving healthcare systems and people through accurate data tracking, visualizations, and forecasting of the coronavirus
['Shreyas Chennamaraja', 'Aryan Agarwal']
[]
['bootstrap', 'css3', 'fbprophet', 'flask', 'google-maps', 'html5', 'javascript', 'matplotlib', 'numpy', 'pandas', 'python', 'sqlalchemy']
14
10,042
https://devpost.com/software/safeslot-getting-essentials-safely-during-crisis
Logo Inspiration In these days of the crisis, one of the biggest problems is buying essentials - food and medicine. Wherever you go, there are big queues for stores or overcrowded stores. With less enforcement of social distancing, people are not confident about going to stores. Over that, a lot of stores are closed or operating for a lesser duration than normal. What it does Our solution to the above problems is to Evenly Spread Customer Visits at Different Times Provide Customers with correct and updated opening status and information Providing Customers with a Proof of Essential Travel (if any law enforcement agency asks for it) Our app, SafeSlot helps in implementing these solutions. When user opens the app, they can see the nearest stores based on their location. We plan to divide the store timings into various time slots with a maximum cap of registrations based on the store/counter size and let users book the slot for their essential shopping. For the same, users can book a maximum of two slots per day for any store. We also have an option of DriveThru option in which users can upload their grocery list/doctor's prescription and stores can pack it by the time they arrive at the store. Hence, reducing the customer visit time to an average of 5 minutes. How I built it Our team built it in NodeJS and ReactJS Challenges I ran into The major challenge we have right now is mass adoption. We are trying to solve it by approaching various government authorities and showcasing the app in various contests. Accomplishments that I'm proud of We are proud of developing the solution within a short duration of 3 days. What I learned We learnt how to deal with a real-life crisis. We ran our idea with various people and learnt how to make a solution practical. What's next for SafeSlot We are in the process of creating a Store Side Application to update the slots on a live basis. Maps integration is in process. Upload prescription/grocery list feature is being added. Branding in the app is being taken care of Built With node.js react Try it out safeslot.in
SafeSlot
Getting Essentials Safely during Crisis
['Sanket Patel', 'Shubham Jain', 'Aditi Katyal', 'Aditya Sonel', 'Hardik Gupta', 'Akshay Nagpal']
[]
['node.js', 'react']
15
10,042
https://devpost.com/software/devit
Introduction Amid the current COVID-19 situation that has engulfed the whole world, everyone is facing challenges and is socially impacted. This has encouraged us to develop a project management platform on which corporate companies can manage their work with total privacy and ownership during remote work from home. Problem Statement With the World Health Organization (WHO) declaring Coronavirus (COVID-19) a global pandemic, communities around the world are forced to practice social distancing, while companies have enforced work from home policies in an effort to flatten the curve of viral infections across the population. Given the isolation currently being experienced within communities right now, we want to create an online platform or space where developers can ideate, experiment and build software during this crisis. Our Idea Due to the prevailing COVID-19 situation, companies and organisations are forced to work from home. Most of them need to use third-party platforms and software to manage their project/work online. Using such a third-party platform does not assure total ownership of the project, since the third party may also see their confidential or important project files, work or prototypes. Living in the era of Web 3, our idea is to make a controlled platform that will help developers from corporate backgrounds who are forced to work from home, and even freelancers, to develop, store and manage their ongoing projects amid this lockdown with true ownership, integrity, transparency and security. To achieve this goal, we have used blockchain-based Blockstack technology, which eliminates the involvement of any third party and provides transparency. We have used Blockstack Auth and Gaia Storage to give true ownership to users. Moreover, we have only used the Blockstack ecosystem. Team Quaran-Hack Built With blockchain blockstack css html javascript node.js Try it out devit-7cd11.web.app github.com
DevIT
An open source project secured by Blockstack with a commitment to privacy and ownership for your remote workspace.
['Devarsh Panchal', 'Naivedh Shah', 'Harsh Mauny', 'dwij patel']
['Best Open Source Project on GitHub', 'Top 10']
['blockchain', 'blockstack', 'css', 'html', 'javascript', 'node.js']
16
10,042
https://devpost.com/software/divoc-5p2g0o
Teacher Dashboard Utilities Students Joined Flowchart Inspiration There is an old saying, The Show Must Go On , which kept me thinking and finding out a way to connect teachers and students virtually and allow teachers to take lectures from home and to develop a completely open source and free platform different from the other major paid platforms. What it does This website is completely an open source and free tool to use This website whose link is provided below, allows a teacher to share his / her live screen and audio to all the students connected to meeting by the Meeting ID and Password shared by the teacher. Also this website has a feature of Canvas, which can be used as a blackboard by the teachers. Including that, this website also contains a doubtbox where students can type in their doubts or answer to teachers questions while the lecture is going on. Again this website also has a feature of tab counting, in which, tab change count of every student is shown to the teacher. This will ensure that every student is paying attention to the lecture. Also, teacher can ask questions in between the lecture, similar to how teacher asks questions in a classroom. How I built it 1) The main component in building this is the open source tool called WebRTC i.e. Web Real Time Communication. This technology allows screen, webcam and audio sharing between browsers. 2) Secondly Vuetify a very new and modern framework was used for the front end design, routes and lucid front end animations. 3) Last but not the least NodeJS was used at the backend to write the API's which connect and interact with the MongoDB database. Challenges I ran into The hardest part of building this website was to find a open source tool to achieve screen and audio sharing. This is because Covid crisis has affected most of the countries economy due to lockdown. Hence, it is of utmost important that schools and colleges do not need to pay for conducting lectures. Accomplishments that I'm proud of I am basically proud of developing the complete project from scratch and the thing that anyone who has the will to connect to students and teach them can use it freely. Also in other applications, there is no way to know if the student is actually concentrating at the other end. Here, the features like 'Tab Change Count' and 'Ask a Question' make in possible. What I learned I learned a new technology called WebRTC which I believe that is going to help me more than I expect in future. What's next for Divoc Integrating an exam module and allowing teachers to take exams from home. Built With mongodb node.js vue webrtc Try it out divoc-app.herokuapp.com github.com
Divoc
An application that equips teachers to conduct a lecture similar to a classroom lecture during lockdown.
['Sanket Kankarej']
[]
['mongodb', 'node.js', 'vue', 'webrtc']
17
10,042
https://devpost.com/software/natural-mask
Inspiration: To save the world What it does: A personal protective product How we built it: Using homemade materials like cloth and natural herbs. Challenges we ran into: Availability of materials Accomplishments that we're proud of: The disinfectant effect What we learned: Unity and saving nature What's next for Natural mask: Educating the public to prepare and use it in all homes. Built With hardware
Natural mask
A homemade mask made with natural materials that acts as a disinfectant. It can be prepared by anyone without technical knowledge and avoids the multiplication of microbes.
['Priya A.K']
[]
['hardware']
18
10,042
https://devpost.com/software/drone-as-a-virus-detector
Inspiration: To save the earth with innovations What it does: Virus detector How we built it: With an RC car Challenges we ran into: No human is involved in it; it is purely automated. Accomplishments that we're proud of: Blood samples can be collected without person-to-person contact What we learned: Challenges What's next for Drone as a Virus Detector: Implement it using a drone Built With hardware
Drone as a Virus Detector
Instead of a drone, an RC car was used for the prototype. It will detect an infected person and collect a blood sample without person-to-person contact.
['Priya A.K']
[]
['hardware']
19
10,042
https://devpost.com/software/covid-visual
GIF COVID-19 Timeline Visualization Inspiration Like many professionals these days, my work project got scrapped in mid-March 2020 due to the impact of COVID-19 on small and medium businesses. For almost a week I didn't have much to work on, and while waiting for things to get back to normal, I began working on this project. The idea is to create a visualization of COVID-19 over these few months all around the world (and in the US) by gathering data on reported cases and getting a sense of where we started and where we are going, in the hope that we'll all recover from this, well and soon. What it does It's a project showcasing data visualization of the latest COVID-19 cases reported around the world (drilling down further to the US) to provide meaningful insights on how we are doing at dealing with the virus. There is no single metric, but with the timeline we compare the rise in total cases from the previous day. We also show new cases/deaths by country on the world map. The map is color coded using the percentile distribution of case counts. It shows a timeline of this data in tabular form sorted by country and allows the user to pin the table next to the map. We also show a timeline slider which begins in early January and runs to the present day. Similar timeline data is viewable when drilling down to the USA. The data refreshes hourly to show the latest counts. How we built it The client uses react-map-gl, a wrapper on top of map-gl, to display numerical data on the world/US map. An Express-based server crawls/scrapes data from online sources; it is parsed and cleaned before being fed into Redis by the NodeJS backend. Redis stores keys for each country and date, refreshed hourly through cron jobs that trigger the scraping and merging services to update the Redis data. Nothing stops us from turning the hourly updates into minutes, but it's not something we planned on doing at the moment. Color coding and percentage calculations are computed in the client. To make our app faster, we use Webpack and also pre-fetch some of the images. Webpack reduces our client build greatly and helps us serve static assets faster on slower networks. We enable gzip compression for our service to further improve data transfer over the network. Our app is deployed on the Heroku cloud service. Challenges we ran into Data collection was our first challenge: having a reliable source and collecting data daily/hourly in a cost-efficient way, with the flexibility to mold the schema, was our first priority. Data storage for a timeline view was our second challenge: deciding on the most effective schema consumable by the client and helping it run faster, as the geo data tends to become huge in size. This can introduce lag in the client, making it jittery when calculating color coding on the fly, so it needed a schema customized to suit those needs. Filling the gaps in back-dated data and recovering from mishaps during development were also significant challenges. With respect to running the app, we run it on free dynos on Heroku and experience server cold starts, but that's what you get for $0, so we can't complain. :) Accomplishments that we are proud of Coming up with the concept, quickly putting the front-end and backend pieces together and meeting the deadlines we set for ourselves is something we celebrated after releasing our website successfully.
What we learned We learned that dedicating time and effort to a project helps big time in turning thoughts into reality. We also learned how important it is to work together and achieve more as a team. What's next for covid-visual Theming of the app to light and dark. Optimizing DB storage by compression at rest. Localization of the client. A mobile app. Built With github heroku jest mapgl node.js react redis webpack Try it out covid-visual.herokuapp.com
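The percentile-based color coding described above can be illustrated with a short sketch. The real project computes this in the React client from data stored per country and date in Redis; the snippet below is only a language-agnostic illustration in Python, and the percentile cut-offs and color values are assumptions, not the project's actual palette.

```python
# Illustrative sketch only: the actual covid-visual client does this in JavaScript.
# Bucket country case counts into color bands using percentile cut-offs.
# The cut-offs and hex colors below are hypothetical, not taken from the project.
import numpy as np

def color_for_counts(case_counts, palette=("#ffffcc", "#fd8d3c", "#bd0026")):
    """Map each country's case count to a color based on its percentile rank."""
    counts = np.asarray(case_counts, dtype=float)
    # Percentile boundaries computed over the current day's distribution.
    low, high = np.percentile(counts, [33, 66])
    colors = []
    for c in counts:
        if c <= low:
            colors.append(palette[0])      # lowest third of countries
        elif c <= high:
            colors.append(palette[1])      # middle third
        else:
            colors.append(palette[2])      # highest third
    return colors

# Example: bucket a small set of daily new-case counts into three color bands.
print(color_for_counts([10, 250, 4000, 90, 12000]))
```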
COVID-19 Timeline Visualization
We have all seen a COVID-19 visualization by now, haven't we? But did you build one yourself? Experience the journey with us.
['Rachita Bansal', 'Dixit Patel']
[]
['github', 'heroku', 'jest', 'mapgl', 'node.js', 'react', 'redis', 'webpack']
20
10,042
https://devpost.com/software/generating-electricity-by-walking-xcwimv
The primary hardware components used. A bunch of piezoelectric sensors! An inside view of the shoe. 17 piezoelectric sensors can be seen on this side, with an additional 16 sensors on the other side. Summary The average American walks approximately 3,500 steps per day; each step creates mechanical energy, energy which ends up being wasted and dispersed into the environment. Tapping into this wasted energy opens a door to opportunities to supplement the user's actions. Varying numbers of piezoelectric sensors were used to generate this energy, which gets stored in a LiPo battery with the aid of the BQ25570 chip. My design used 33 piezoelectric sensors, which generated approximately 0.27 volts, or 23.625 mAh, after just 60 steps. If a user wore this shoe and walked the average number of steps per day, they would generate 1,378.125 mAh! In addition, I developed an add-on to this project that adds an Arduino Nano with an accelerometer and gyroscope sensor. The data from these sensors is run through a neural network that predicts the behavior the user is performing. For example, if the user is jumping it will predict they are jumping. How I built it The hardware component of this project has one layer of styrofoam on the top and bottom. This protects the piezoelectric sensors and increases comfort for the user. Then there are two layers of cardboard; each side of the cardboard has 8-9 piezoelectric sensors, connected in series. The two cardboard pieces are connected in parallel. There is then a thin piece of paper between the two cardboard pieces, to make sure no wires short out when they touch each other. The software uses Keras with TensorFlow. I created a Google Cloud Virtual Machine instance, which runs a Python script that reads in data about the user's motion and then, with Keras and TensorFlow, creates a model of the data that can be used for prediction. Challenges I ran into Developing the hardware of the shoes took the bulk of my time. I had never used piezoelectric sensors before, so I had to learn how to use them. In addition, it took me a while to optimize the energy output of the shoe. The green BQ25570 chip helped me do that, though. Accomplishments that I'm proud of This is the world's most efficient electricity-generating shoe! Other solutions mostly use different means to generate electricity. My solution used piezoelectric sensors, and then the BQ25570 chip to control the flow of electricity from the two capacitors on the chip to the battery. This minimizes the electricity wasted. What I learned I learned a lot! In general, I am better at software-related projects; this project, being a hardware-first project, increased my skills in dealing with hardware. I got better at soldering, understanding the mathematical calculations of voltage and current, piezoelectric sensors, Arduinos and various hardware components. On the software side, this was my first time using Google Cloud. I am now comfortable creating complex virtual machines in the cloud that can run various advanced scripts. What's next for Generating Electricity By Walking I want to add a Wi-Fi/Bluetooth chip to the Arduino Nano; this will enable the data from the accelerometer and gyroscope to be transferred to a web server in the cloud without the need for a wire. With this advancement, I could develop a mobile/web app that tracks various foot-related fitness activities, including jumping, running and walking. Built With google-cloud keras piezoelectric tensor-flow
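The energy claim above is internally consistent: 23.625 mAh per 60 steps is about 0.394 mAh per step, so 3,500 steps per day works out to roughly 1,378 mAh, matching the quoted figure. For the activity-prediction add-on, below is a minimal Keras sketch of the kind of classifier the description refers to, trained on windows of accelerometer and gyroscope readings. The window size, channel layout and activity classes are assumptions made for illustration, not the author's actual training script.

```python
# Minimal sketch (assumptions: 50-sample windows of 6 sensor channels,
# three activity classes). Not the author's actual Google Cloud training script.
import numpy as np
from tensorflow import keras

WINDOW, CHANNELS, CLASSES = 50, 6, 3   # accel x/y/z + gyro x/y/z; walk/run/jump

model = keras.Sequential([
    keras.layers.Input(shape=(WINDOW, CHANNELS)),
    keras.layers.Conv1D(32, kernel_size=5, activation="relu"),
    keras.layers.GlobalAveragePooling1D(),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Dummy data stands in for readings streamed from the Arduino Nano.
X = np.random.randn(200, WINDOW, CHANNELS).astype("float32")
y = np.random.randint(0, CLASSES, size=200)
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print(model.predict(X[:1]).round(2))   # class probabilities for one window
```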
Generating Electricity By Walking
Generate a lot of electricity just by walking!
['Tarun Ravi']
[]
['google-cloud', 'keras', 'piezoelectric', 'tensor-flow']
21
10,042
https://devpost.com/software/divoc-e0fywm
Flow chart depicting the working of the whole system. Homepage of the application Teacher Login Student Login Teacher Dashboard Student Dashboard Canvas as a blackboard Asking a question in the middle of a lecture Tab Change alert to bring students' attention back to the lecture Inspiration There is an old saying, The Show Must Go On , which kept me thinking about a way to connect teachers and students virtually, allow teachers to deliver lectures from home, and develop a completely open-source and free platform, different from the other major paid platforms. What it does This website is a completely open-source and free tool. The website, whose link is provided below, allows a teacher to share his or her live screen and audio with all the students connected to a meeting through the Meeting ID and Password shared by the teacher. The website also has a Canvas feature, which teachers can use as a blackboard. It also contains a doubt box where students can type in their doubts or answer the teacher's questions while the lecture is going on. In addition, the website has a tab-counting feature, in which the tab-change count of every student is shown to the teacher. This helps ensure that every student is paying attention to the lecture. The teacher can also ask questions in the middle of the lecture, similar to how questions are asked in a classroom. How I built it 1) The main component in building this is the open-source technology WebRTC, i.e. Web Real-Time Communication, which allows screen, webcam and audio sharing between browsers. 2) Secondly, Vuetify, a very new and modern framework, was used for the front-end design. 3) Last but not least, NodeJS was used at the backend to write the APIs which connect to and interact with the MongoDB database. Challenges I ran into The hardest part of building this website was finding an open-source tool to achieve screen and audio sharing. This matters because the COVID crisis has affected most countries' economies due to lockdown; hence, it is of utmost importance that schools and colleges do not need to pay for conducting lectures. Accomplishments that I'm proud of I am proud of developing the complete project from scratch, and of the fact that anyone who has the will to connect with students and teach them can use it freely. What I learned I learned a new technology called WebRTC, which I believe is going to help me more than I expect in the future. What's next for Divoc Integrating an exam module and allowing teachers to conduct exams from home. Built With mongodb node.js vue webrtc Try it out divoc.herokuapp.com
Divoc
DIVOC - An Antidote For COVID
['Sanket Kankarej']
[]
['mongodb', 'node.js', 'vue', 'webrtc']
22
10,042
https://devpost.com/software/one-stop-info-covid-19-n
Example of one of my tabs. Example of my interactive Coronavirus Statistics Tab Inspiration What inspired me to create this website was noticing how frustrating and inconvenient it was to open up multiple tabs and search for a multitude of things relating to COVID-19, like how to protect myself from it, what to do if I don't have hand sanitizer, how I can help others during this time, etc. Not only that, but I also wanted to get my information from credible sources to ensure everything about COVID-19 was indeed factual and proven. This is why I decided to create a website: so people can conveniently come to one website and navigate through what they want to know, knowing everything has been proven and stated by the CDC. What it does This website serves to help others gain factual information about COVID-19, with multiple tabs ranging from mental health resources to an accurate COVID-19 tracker. How I built it I used Spotify's developer tools to embed my playlists onto the website, and everything else I coded on my own, strictly using only HTML, as I wanted to challenge myself to see how far I could go with just HTML, no CSS or JavaScript. A challenge I ran into One of the main challenges was embedding the BBC news tab. It was very difficult to figure out how to implement it in HTML because everywhere on Google, it seemed like I could only embed the recent news scroll using an API, which was something I was unfamiliar with. Although I did attempt to implement Google's API in my code, it didn't turn out successful, which is okay, because I'm still trying to learn more about Google's API so that I could embed Google News as well, and hopefully Google Maps. Accomplishments that I'm proud of I'm proud that I managed to implement the Spotify playlists and the BBC News scroll, because they were both very difficult to learn and code, especially since I was only using HTML. In fact, I was thinking of breaking my HTML-only challenge because the process for both challenges was incredibly stressful and I had spent over 4 hours trying to learn how to embed them into HTML, but eventually I figured out how to solve both challenges by watching a ton of YouTube videos and reading plenty of blogs. What I learned I learned that it's possible to code a whole interactive website with HTML. I always thought that I'd need CSS to style my fonts, alignment, color, etc., but I realized through this process that a lot of things I would usually do in CSS can be applied in HTML with some fixes and additions to the code. What's next for One-Stop Info-Covid-19 I plan on sharing my website on social media and hope others find it useful! I also plan on updating this website daily with new DIYs, resources, playlists, etc. Built With html Try it out github.com
One Stop Info-Covid-19
This website serves as a one-stop place to get factual information about COVID-19 and how people can help, stay safe, keep up with recent news, and listen to music while they're at it.
['Nora C.']
[]
['html']
23
10,042
https://devpost.com/software/covidcentral-u21txv
Landing Page Landing Page Landing Page Landing Page - Contact Us Section Signup Page Login Page Content Summarizer Comparison of 4 Types of Content Summarizer Text Insights Preprocessing Inspiration This year has been really cruel to humanity: Australia was ravaged by the worst wildfires seen in decades, Kobe Bryant passed away, and now there is this pandemic caused by the Novel Coronavirus, which originated in the Hubei province (Wuhan) of China. Coronavirus disease (COVID-19) is an infectious disease caused by a newly discovered coronavirus. More than 3 million people have been affected by this deadly virus across the globe (Source: Worldometers). There have been around 249,014 deaths already, and counting. 100+ countries have been affected by the virus so far. This is the biggest health crisis in many years. Artificial Intelligence has proved its usefulness in this time of crisis. The technology is one of the greatest soldiers the world could ever get in the fight against coronavirus. AI, along with its subsets such as Machine Learning, is driving significant innovation across several sectors to win against the pandemic. After Anacode released “The Covid-19 Public Media Dataset”, we took this as an opportunity to use Natural Language Processing on that data, which is composed of articles. According to Anacode, “It is a resource of over 40,000 online articles with full texts which were scraped from online media in the timespan since January 2020, focussed mainly on the non-medical aspects of COVID-19. The data will be updated weekly”. Anacode further says, “We are sharing this dataset to help the data community explore the non-medical impacts of Covid-19, especially in terms of the social, political, economic, and technological dimensions. We also hope that this dataset will encourage more work on information-related issues such as disinformation, rumors, and fake news that shape the global response to the situation.” Our team leveraged the power of NLP and Deep Learning and built “CovidCentral”, a PaaS (Platform as a Service). We believe our solution can help media people, researchers, content creators, and everyone else who is reading and writing articles or any other content related to COVID-19. What it does Our tagline says “Stay central with NLP powered text analytics for COVID-19”. CovidCentral is a one-of-its-kind NLP-driven platform for fast and accurate insights. It generates summaries and provides analytics of large amounts of social and editorial content related to COVID-19. STAY CENTRAL INSHORTS. It does three things: 1. The CovidCentral platform can help users understand large bodies of COVID-19-related content in a matter of minutes. Through the platform, you can get actionable insights from hundreds of thousands of lines of text in minutes. It generates an automated summary of long content and provides word-by-word analytics of the text, from total word count to the meaning of each word. The user can either enter a URL to summarize and get insights, or enter the complete content directly into the platform. 2. Large amounts of text data are hard to analyze, and manual analysis takes hours. CovidCentral can help people get insights within minutes. Media people, researchers, or anyone with internet access can use our platform and get insights related to COVID-19. 3. Humans are lazy by nature and people want to save time.
This platform can generate a summary of content within minutes from a single URL. CovidCentral uses NLP and Deep Learning technologies to provide automated summaries of texts. It is very helpful for getting short facts related to COVID-19. Why Use CovidCentral? 1. Fast 2. Ease of Use (User-friendly) 3. High Accuracy 4. Secure (No content or data is saved on the server; instead, we send the NLP to you at the frontend.) How we built it We built CovidCentral using AI technologies, cloud technologies, and web technologies. This platform uses NLP as its major technique and leverages several other tools and techniques. The major technologies are: a. Core concept: NLP (Spacy, Sumy, Gensim, NLTK) b. Programming Languages: Python and JavaScript c. Web Technologies: HTML, CSS, Bootstrap, jQuery (JS) d. Database and related tools: SQLITE3 and Firebase (Google's mobile platform) e. Cloud: AWS Below are the steps that give a high-level overview of the solution: 1. Data Collection and Preparation: CovidCentral is mainly built using the “Covid-19 Public Media Dataset” by Anacode, a dataset for exploring the non-medical impacts of Covid-19. It is a resource of over 40,000 online articles with full texts related to COVID-19. The heart of this dataset is online articles in text form. The data is continuously scraped from a range of more than 20 high-impact blogs and news websites. There are 5 topic areas - general, business, finance, tech, and science. Once we got the data, the next step was obviously “Text Preprocessing”. There are 3 main components of text preprocessing: (a) Tokenization (b) Normalization (c) Noise Removal. Tokenization is a step that splits longer strings of text into smaller pieces, or tokens. Larger chunks of text can be tokenized into sentences, sentences can be tokenized into words, etc. Further processing is generally performed after a piece of text has been appropriately tokenized. After tokenization, we performed “Normalization” because, before further processing, the text needs to be normalized. Normalization generally refers to a series of related tasks meant to put all text on a level playing field: converting all text to the same case (upper or lower), removing punctuation, converting numbers to their word equivalents, and so on. Normalization puts all words on equal footing and allows processing to proceed uniformly. In the last step of our text preprocessing, we performed “Noise Removal”. Noise removal is about removing characters, digits, and pieces of text that can interfere with your text analysis, and it is one of the most essential text preprocessing steps. 2. Model Development: We have used several NLP libraries and frameworks like Spacy, Sumy, Gensim, and NLTK. Apart from having a custom model, we also use pre-trained models for the tasks. The basic workflow of creating our COVID-related, NLP-based summarizer and analytics engine is as follows: text preprocessing (remove stopwords and punctuation); build a frequency table of words (word frequency distribution - how many times each word appears in the document); score each sentence depending on the words it contains and the frequency table; and build the summary by joining every sentence above a certain score limit (a minimal code sketch of this workflow follows this write-up). 3. Interface: CovidCentral is a responsive platform that supports both mobile and web. The frontend is built using web technologies like HTML, CSS, Bootstrap, and JavaScript (TypeScript and jQuery in this case). We have used a few libraries for validation and authentication.
On the backend, it uses the Python microframework Flask for integrating the NLP models, SQLITE3 for handling the database, and Firebase for authentication and keeping records from the user forms. 4. Deployment: After successfully integrating the backend and frontend into a platform, we deployed CovidCentral on the cloud, where it runs 24/7. We deployed our solution on Amazon Web Services (AWS) and use an EC2 instance as the system configuration. Challenges we ran into Right now, the biggest challenge is “The Novel Coronavirus”. We are taking this as a challenge and not as an opportunity. Our team is working on several verticals - medical imaging, surveillance, bioinformatics and CovidCentral - to fight this virus. There were a few major challenges: The time constraint was a big one because we had very little time to develop this, but we still pulled CovidCentral together in this short span. The data, which has more than 40K articles, is pretty messy, so we had difficulties dealing with it in the beginning, but after learning how to handle that kind of data, we eliminated that challenge to some extent. We also ran into challenges while deploying our solution to the cloud, but managed to do it, and we are still testing our platform and making it robust. Accomplishments that we're proud of Propelled by modern technological innovations, data is to this century what oil was to the previous one. Today, our world is awash in the gathering and dissemination of huge amounts of data. In fact, the International Data Corporation (IDC) projects that the total amount of digital data circulating annually around the world will grow from 4.4 zettabytes in 2013 to 180 zettabytes in 2025. That's a lot of data! With such a big amount of data circulating in the digital space, there is a need to develop machine learning algorithms that can automatically shorten longer texts and deliver accurate summaries that fluently convey the intended messages. Furthermore, applying text summarization reduces reading time, accelerates the process of researching information, and increases the amount of information that can fit in a given space. We are proud of the development of CovidCentral and of making it open source so anyone can use it for free on any kind of device to get important facts related to COVID-19. What we learned Learning is a continuous process of life, the pinnacle of one's attitude and vision of the universe. I tell my young and dynamic team (Sneha and Supriya) to keep on learning every day. In this lockdown situation we are not able to meet each other, but we learned how to work virtually. Online meeting tools (Zoom in our case), GitHub, Slack, etc. helped all of us collaborate and share our code with each other. We also strengthened our skills in NLP (BERT, Spacy, NLTK, etc.) and in integrating our models with the front-end for end users. We spent a lot of time on the interface so people can use it without getting bored. From design to deployment, there were many things that helped us improve our technical skills. We learn many things around us day by day, and going forward, we will add more relevant features to our platform by learning new concepts. What's next for CovidCentral We are adding features like a “Fake News Detector” to flag fake news related to COVID-19 very soon on our platform.
CovidCentral's aim is to help content creators, media people, researchers, and others read only what matters most, quickly. APIs will be released soon so anyone who wants to add these features to their existing workflow or website can do so; they won't need to use our platform and can simply use our APIs instead. We are also in discussion with some text analytics companies to collaborate and bring an even more feasible, robust, and accessible solution. In the near future, we will make CovidCentral a general NLP-powered text analytics platform for all kinds of text analytics, free for anyone to use from anywhere on any kind of device (mobile, web, tablet, etc.). Built With amazon-web-services bootstrap css firebase flask html javascript natural-language-processing nltk python sqlite Try it out covidcentral.herokuapp.com
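To make the Model Development workflow above concrete (word frequency table, sentence scoring, and joining sentences above a score threshold), here is a minimal extractive-summarization sketch in the same spirit, using NLTK. The threshold factor and function names are illustrative assumptions, not CovidCentral's production code.

```python
# Minimal sketch of the frequency-based extractive summarizer described above.
# The threshold factor and helper names are assumptions, not CovidCentral's code.
import nltk
from collections import defaultdict
from nltk.corpus import stopwords
from nltk.tokenize import sent_tokenize, word_tokenize

nltk.download("punkt", quiet=True)
nltk.download("stopwords", quiet=True)

def summarize(text, threshold_factor=1.0):
    stop = set(stopwords.words("english"))
    # 1. Word frequency table, skipping stopwords and punctuation.
    freq = defaultdict(int)
    for w in word_tokenize(text.lower()):
        if w.isalpha() and w not in stop:
            freq[w] += 1
    # 2. Score each sentence by the frequencies of the words it contains.
    scores = {}
    for sent in sent_tokenize(text):
        scores[sent] = sum(freq[w] for w in word_tokenize(sent.lower()) if w in freq)
    # 3. Keep sentences scoring above a multiple of the average sentence score.
    avg = sum(scores.values()) / max(len(scores), 1)
    return " ".join(s for s in sent_tokenize(text) if scores[s] > threshold_factor * avg)

sample = ("COVID-19 has disrupted economies worldwide. Lockdowns closed offices and schools. "
          "Many companies moved to remote work. Analysts expect a slow recovery.")
print(summarize(sample))
```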
CovidCentral
CovidCentral is a one-of-its-kind NLP-driven platform for fast and accurate insights. It generates summaries and provides analytics of large amounts of social and editorial content related to COVID-19.
[]
[]
['amazon-web-services', 'bootstrap', 'css', 'firebase', 'flask', 'html', 'javascript', 'natural-language-processing', 'nltk', 'python', 'sqlite']
24
10,042
https://devpost.com/software/masked-ai-masks-detection-and-recognition
Platform Snapshot Input Video Model Processing Model Processing Output Video Saved Output Video Snapshot Output Video Snapshot Output Video Snapshot Output Video Snapshot Output Video Snapshot Output Video Snapshot Inspiration The total number of Coronavirus cases is 5,104,902 worldwide (Source: Worldometers). The cases are increasing day by day and the curve is not ready to flatten; that's really sad!! Right now the virus is in the community-transmission stage, and taking preventive measures is the only option to flatten the curve. Face masks are crucial now in the battle against COVID-19 to stop community-based transmission. But we are humans and lazy by nature; we are not used to wearing masks when we go out in public places. One of the biggest challenges is people not wearing masks in public places and violating the orders issued by the government or local administration. That is the main reason we built this solution: to monitor people in public places via drones, CCTVs, IP cameras, etc., and detect people with or without face masks. Police and officials are working day and night, but manual surveillance is not enough to identify people who are violating rules and regulations. Our objective was to create a solution that reduces the need for human-based surveillance to detect people who are not wearing masks in public places. An automated AI system can reduce manual investigation. What it does Masked AI is a real-time video analytics solution for human surveillance and face mask identification. Our main feature is identifying whether people are wearing the masks advised by the government. Our solution is easy to deploy on drones and CCTVs to "see what really matters" in this pandemic situation caused by the Novel Coronavirus. It has the following features: 1. Human detection 2. Face mask identification (N95, surgical, and cloth-based masks) 3. Identify humans with or without masks in real time 4. Count people for each second of the frame 5. Generate an alarm to the local authority if someone is not wearing a mask (soon in the video demo) It runs entirely on the cloud and does detection in real time, with analysis using graphs. How we built it Our solution is built using the following major technologies: 1. Deep Learning and Computer Vision 2. Cloud Services (Azure in this case) 3. Microservices (Flask in this case) 4. JavaScript for the frontend features 5. Embedded technologies I will break the complete solution into the following steps: 1. Data Preparation: We collected more than 1000 good-quality images of multiple classes of face masks (N95, surgical, and cloth-based masks). We then performed data preprocessing, labeled all the images using labeling tools, and generated PASCAL VOC and JSON files after the labeling. 2. Model Preparation: We used one of the well-known deep learning-based object detection algorithms, YOLO v3, for our task. Using Darknet and YOLO v3, we trained the model from scratch on a machine with 16 GB of RAM and a Tesla K80 GPU. It took 10 hours to train the model. We saved the model so we could deploy our solution to various platforms. 3. Deployment: After training the model, we built the frontend, which is fully client-side, using JavaScript and the Flask microframework. Rather than saving the input videos to our server, we send our AI to the client's side, and we use Microsoft Azure for the deployment. We have both on-premise and cloud solutions prepared. At the moment, we are on a trial, so we can't provide the link URL.
After building the AI part and the frontend, we integrated our solution with the IP and CCTV cameras available in our house and checked its performance. Our solution works in real time on video footage with very good accuracy and performance. Challenges we ran into There are always a few challenges when you innovate something new. The biggest challenge is "The Novel Coronavirus" itself. Because of it, we can't go outside the home for the hardware and embedded parts. We are working virtually to build innovative solutions, but as of now we have very limited resources: we can't go outside to buy hardware components or IP and CCTV cameras. One more challenge we faced was that we were not able to validate our solution with drones in the early days due to the lockdown, but after getting permission from the officials, that was no longer a problem. Accomplishments that we're proud of Good work brings appreciation and recognition. We have submitted our research paper to several conferences and international journals (waiting for publication). After developing the basic proof of concept, we went to the local government officials and submitted our proposal for a trial of our solution for better surveillance, because the lockdown is close to being lifted. Our team is also participating virtually in several hackathons and tech events to showcase our work. What we learned Learning is a continuous process. We mainly work in the AI domain and not with drones. The most important thing about this project was learning new things. We learned how to integrate Masked AI with drones and deploy our solution to the cloud. We added embedded skills to our profile and are now exploring more features in that area. The other learning experience was taking our proof of concept to the local administration for trials. All these government procedures, like writing a research proposal and meeting with officials, were a first for us, and we learned several protocols for working with the government. What's next for Masked AI: Masks Detection and Recognition We are looking forward to collaborating with the local administration and the government to integrate our solution into drone-based surveillance (which is currently being used to monitor internal areas of cities). In parallel, improving the model is the main priority, and we are adding "Action Recognition" and "Object Detection" features to our existing solution for an even more robust offering, so decision-makers can make ethical decisions, because surveillance using Deep Learning algorithms is always risky (bias and errors in judgment). Built With azure darknet flask google-cloud javascript nvidia opencv python tensorflow twilio yolo
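Since the description above explains that the detector is a YOLO v3 model trained with Darknet, a minimal inference sketch can show how such a model is typically loaded and run; the file names, class labels and thresholds below are assumptions for illustration, not the team's actual weights or deployment code.

```python
# Minimal sketch: run a Darknet-trained YOLO v3 model on one frame with OpenCV.
# File names, class labels and thresholds are assumptions, not the project's files.
import cv2
import numpy as np

CLASSES = ["n95_mask", "surgical_mask", "cloth_mask", "no_mask"]  # assumed labels
net = cv2.dnn.readNetFromDarknet("masked_ai.cfg", "masked_ai.weights")
layer_names = net.getUnconnectedOutLayersNames()

frame = cv2.imread("street_frame.jpg")
h, w = frame.shape[:2]
blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)

boxes, confidences, class_ids = [], [], []
for output in net.forward(layer_names):
    for det in output:
        scores = det[5:]
        cls = int(np.argmax(scores))
        conf = float(scores[cls])
        if conf > 0.5:                      # keep confident detections only
            cx, cy, bw, bh = det[:4] * np.array([w, h, w, h])
            boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
            confidences.append(conf)
            class_ids.append(cls)

# Non-maximum suppression removes overlapping boxes for the same person.
for i in cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.4):
    print(CLASSES[class_ids[int(i)]], confidences[int(i)], boxes[int(i)])
```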
Masked AI: AI Solution for Face Mask Identification
Masked AI is a cloud-based AI solution for real-time surveillance that keeps an eye on people who violate the rules by not wearing face masks in public places.
[]
[]
['azure', 'darknet', 'flask', 'google-cloud', 'javascript', 'nvidia', 'opencv', 'python', 'tensorflow', 'twilio', 'yolo']
25
10,042
https://devpost.com/software/covnatic-covid-19-ai-diagnosis-platform
Landing Page Login Page Segmentation of Infected Areas in a CT Scan Check Suspects using Unique Identification Number (New Suspect) Check Suspects using Unique Identification Number (Old Suspect) Suspect Data Entry COVID-19 Suspect Detector Upload Chest X-ray Result: COVID-19 Negative Upload CT Scan Result: Suspected COVID-19 Realtime Dashboard Realtime Dashboard Realtime Dashboard View all the Suspects (Keep and track the progress of suspects) Suspect Details View Automated Segmentation of the infected areas inside CT Scans caused by the Novel Coronavirus Process flow of locating the affected areas U-net (VGG weights) architecture for locating the affected areas Segmentation Results Detected COVID-19 Positive Detected Normal Detected COVID-19 Positive Detected COVID-19 Positive GIF Located infected areas inside lungs caused by the Novel Coronavirus Endorsement from Govt. of Telangana, Hyderabad, India Endorsement from Govt. of Telangana, Hyderabad, India Generate Report: COVID-19 Possibility Generate Report: Normal Case Generated PDF Report Inspiration The total number of Coronavirus cases is 2,661,506 worldwide (Source: Worldometers). The cases are increasing day by day and the curve is not ready to flatten; that's really sad!! Right now the virus is in the community-transmission stage, and rapid testing is the only option to battle the virus. McMarvin took this as a challenge and built an AI solution to provide a tool for our doctors. McMarvin is a DeepTech startup in medical artificial intelligence, using AI technologies to develop tools for better patient care, quality control, health management, and scientific research. There is a current epidemic in the world due to the Novel Coronavirus, and there are limited testing kits for RT-PCR and lab testing. There have been reports that kits are showing variations in their results and that false positives are heavily increasing. Early detection using chest CT can be an alternative way to detect COVID-19 suspects. For this reason, our team worked day and night to develop an application which can help radiologists and doctors by automatically detecting and locating the infected areas inside the lungs using medical scans, i.e. chest CT scans. The inspirations are as below: 1. Limited kit-based testing due to limited resources 2. RT-PCR is not very accurate in many countries (recently in India) 3. The RT-PCR test can't exactly locate the infections inside the lungs AI-based medical imaging screening assessment is seen as one of the promising techniques that might lift some of the heavy weight off the doctors' shoulders. What it does Our COVID-19 AI diagnosis platform is a fully secured, cloud-based application to detect COVID-19 patients using chest X-rays and CT scans. Our solution has a centralized database (like a mini-EHR) for Corona suspects and patients. Each and every record is saved in the database (hospital-wise). The following are the features of our product: Artificial Intelligence to screen suspects using CT scans and chest X-rays. AI-based detection, segmentation and localization of infected areas inside the lungs in chest CT. Smart Analytics Dashboard (hospital-wise) to view all the updated screening details. Centralized database (only for COVID-19 suspects) to keep records of suspects and track their progress every time they get screened. PDF reports, DICOM support, guidelines, documentation, customer support, etc.
Fully secured platform (both on-premise and cloud) with a privacy policy under healthcare data guidelines. Get a report within seconds. Our main objective is to provide a research-oriented tool to alleviate the pressure on doctors and assist them with an AI-enabled smart analytics platform so they can "SAVE TIME" and "SAVE LIVES" in the critical stages (Stage 3 or 4). The following are the benefits: 1. Real-world data on risks and benefits: The use of routinely collected data from suspects/patients allows assessment of the benefits and risks of different medical treatments, as well as the relative effectiveness of medicines in the real world. 2. Studies can be carried out quickly: Studies based on real-world data (RWD) are faster to conduct than randomized controlled trials (RCTs). Data from patients infected with the Novel Coronavirus will help in research and in handling similar outbreaks in the future. 3. Speed and Time: One of the major advantages of the AI system is speed. More conventional methods can take longer to process due to the increase in demand. However, with the AI application, radiologists can identify and prioritize suspects. How we built it Our solution is built using the following major technologies: 1. Deep Learning and Computer Vision 2. Cloud Services (Azure in this case) 3. Microservices (Flask in this case) 4. Desktop GUIs like Tkinter 5. Docker and Kubernetes 6. JavaScript for the frontend features 7. DICOM APIs I will break the complete solution into the following steps: 1. Data Preparation: We collected more than 2000 medical scans, i.e. chest CT and X-rays, of 500+ COVID-19 suspects from European countries and from open-source radiology data platforms. We then performed validation and labeling of CT findings with the help of advisors and domain experts who are doctors with 20+ years of experience. You can get more information in the team section on our site. After careful data preprocessing and labeling, we moved on to model preparation. 2. Model Development: We built several algorithms to test our approach. We started with a CNN classifier and checked the score on different metrics, because creating a COVID-19 classifier is not an easy task due to variations that can bias the results. We then used U-net for segmentation and got a very impressive accuracy and a good IoU score. For the detection of COVID-19 suspects we use a CNN architecture, and for segmentation we use a U-net architecture. We achieved 94% accuracy on the training dataset and 89.4% on the test data. For false positives and other metrics, please go through our files. 3. Deployment: After training the model and validating it with our doctors, we prepared our solution in two different formats, i.e. a cloud-based solution and an on-premise solution. We are using an EC2 instance on AWS for our cloud-based solution. Our platform will only help, and not replace, healthcare professionals, so they can make quick decisions in critical situations. Challenges we ran into There are always a few challenges when you innovate something new. The biggest challenge is "The Novel Coronavirus" itself. One of the challenges is getting validated data from different demographics and CT machines. Due to the lockdown in the country, we are not able to meet and discuss it with several other radiologists. We are working virtually to build innovative solutions, but as of now we have very limited resources.
Accomplishments that we're proud of We are in regular touch with the state government (the Telangana government in Hyderabad). Our team presented the project to the Health Minister's office and is helping them in stages 3 and 4. Accomplishments we are proud of: 1. 1 patent (IP) filed 2. 2 research papers 3. Partnerships with several startups 4. In touch with several doctors who are working with COVID-19 patients, and also in discussion with research institutes for R&D What we learned Learning is a continuous process. Our team learnt "the art of working in lockdown". We worked virtually to develop this application to help our government and people. The other learning experience was taking our proof of concept to the local administration for trials. All these government procedures, like writing a research proposal and meeting with officials, were a first for us, and we learned several protocols for working with the government. What's next for M-VIC19: McMarvin Vision Imaging for COVID19 Our research is still going on, and our solution is now endorsed by the Health Ministry of Telangana. We have presented our project to the government of Telangana for a clinical trial, so the next step is trials with hospitals and research institutes. On the solution side, we are adding more labeled data under the supervision of doctors who are working with COVID-19 patients in India. Features like biometric verification and a trigger mechanism to send notifications to patients and the command room are under consideration. There is always scope for improvement, and AI is a technology that learns on top of data. Overall, we are dedicated to taking this solution into real-world production for our doctors and for CT and X-ray manufacturers so they can use it to fight the deadly virus. Built With amazon-web-services flask google-cloud javascript keras nvidia opencv python sqlite tensorflow Try it out m-vic19.com
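The description states that a CNN handles suspect classification and a U-net handles segmentation of infected regions. As a hedged illustration of the classification half only, here is a small Keras CNN for single-channel chest scans; the input resolution, layer sizes and binary output are assumptions for illustration, not the model that achieved the reported 94%/89.4% accuracy.

```python
# Illustrative sketch only: a compact binary CNN classifier for chest scans.
# Input resolution and layer sizes are assumptions, not M-VIC19's actual model.
from tensorflow import keras
from tensorflow.keras import layers

def build_classifier(input_shape=(256, 256, 1)):
    """Return a small CNN that predicts suspected COVID-19 vs. normal."""
    inputs = keras.Input(shape=input_shape)
    x = inputs
    for filters in (16, 32, 64):
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.MaxPooling2D()(x)
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dropout(0.3)(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)   # 1 = suspected COVID-19
    model = keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

model = build_classifier()
model.summary()   # prints the layer-by-layer structure of the sketch
```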
M-VIC19: McMarvin Vision Imaging for COVID19
M-VIC19 is an AI diagnosis platform that helps hospitals screen suspects and automatically locate the infected areas inside the lungs caused by the Novel Coronavirus using chest radiographs.
[]
['1st Place Overall Winners', 'Third Place - Donation to cause or non-profit organization involved in fighting the COVID crisis']
['amazon-web-services', 'flask', 'google-cloud', 'javascript', 'keras', 'nvidia', 'opencv', 'python', 'sqlite', 'tensorflow']
26
10,044
https://devpost.com/software/safeboda-on-boarding-application
Dropdown login form Dashboard with data visualisation Multi step registration form Riders list with overview of on-boarding process Riders profile with editing options Option for self registration on landing page coming soon Inspiration Bajaj motorcycles are popular amongst Boda Boda riders because of their functional reliability. This inspired us to build an application which we hope will be just as popular and reliable for Safeboda. What it does A web application that both Safeboda field and academy staff can access, and use to collect and store rider data in a central database. System users will be able to view rider profiles, edit information, and activate the rider if all requirements have been fulfilled. This web application will feature a dashboard that displays an overview of the data collected in a visual way that is easy to understand. Users will be able to view and download daily & weekly reports about the various stages of the onboarding process, such as the number of riders registered, in training, and ready for activation. How we built it We divided roles amongst us, having two frontend coders and two backend coders. We identified all the features and zeroed down to 4 key features, which were assigned to each team member to create. We collaborated on GitHub, and each team member made improvements on the code until we were all in agreement. We held daily team meetings to discuss our progress and way forward. Challenges we ran into 1) The remote working conditions slowed down our communication and decision-making process 2) An unreliable internet connection and occasional electricity blackouts made it difficult to work within the short deadline 3) We encountered errors that blocked us from successfully linking our frontend features to the database Accomplishments that we're proud of 1) The multi-step registration form that breaks down the registration process into easy to follow steps 2) Successfully set up a database and REST API server 3) Created an attractive and responsive UI prototype What we learned 1) Research is important for gaining a better understanding of programming processes 2) Teamwork and division of tasks is important to get a lot of work done with limited time 3) Dealing with bugs is part of the coding journey, so don't give up and keep trying to solve errors as they come What's next for Safeboda On-boarding Application Give riders a login feature where they can register themselves and offer a reward incentive system for riders who recommend others to sign up, in order to meet the ever-growing demand. Built With bootstarp css express.js github html javascript mongodb mongoose node.js Try it out github.com
Boda On-Board
Digitalisation of the rider on-boarding process
['Liz Kamugisha', 'marieblessed Musimenta', 'Famious Orishaba', 'Angella Nalwanga']
['Scholarship', 'Internship']
['bootstarp', 'css', 'express.js', 'github', 'html', 'javascript', 'mongodb', 'mongoose', 'node.js']
0
10,044
https://devpost.com/software/codequeenhackathon-m8le0g
Inspiration Our inspiration came from our passion to learn and explore programming What the idea is Our idea is basically to create a centralized and interconnected system that handles 3 processes in a fast and timely manner using a simple web tool that enables direct and immediate access, by the team at the academy, to the data input by the recruiter Greatest achievement Having the web application functionality working well Successfully set up a database and REST API server Created an attractive and responsive UI prototype Being able to work with a templating engine which we didn't have prior knowledge of. Our main challenge was working against a time limit What we learnt We learnt to work in a team remotely and produce results. We learnt that initial planning is a really important aspect and saves loads of time while working What's next for CodeQueenHackathon Built With axois bootstrap css3 express.js html5 javascript mongodb node.js pug Try it out github.com
REIGN TEAM
Our idea is basically to create a centralized and interconnected system that handles 3 processes in a fast and timely manner using a simple web tool
['Nyayic Fanny', 'Beatrice Akatukunda', 'Wanyana Prossy', 'Henry Sekandi', 'Saraholila Apunyo', 'Shanitah Alice']
['Scholarship']
['axois', 'bootstrap', 'css3', 'express.js', 'html5', 'javascript', 'mongodb', 'node.js', 'pug']
1
10,044
https://devpost.com/software/eran-hackathon
Inspiration To challenge ourselves beyond our knowledge comfort zones, where progress happens, and also to be part of a solution to a challenge. What it does It enables real-time data sharing between the recruiters in the field and the system administrators in the Academy, making the on-boarding process efficient. How we built it We used HTML, CSS, JavaScript and some online templates. Challenges we ran into Poor internet connection, broken-down laptops, electricity load-shedding, and the time factor (being fresh students in coding, our limited knowledge drove us to online tutorials, which took a lot of time and affected the completion of the project). Accomplishments that we're proud of Learning how to work in a dynamic team; learning how to use Zoom calls and other online ways of collaborating in order to work remotely; and the added knowledge in coding that enabled us to build the website. What we learned Teamwork, and more knowledge of coding. What's next for ERAN-HACKATHON To improve the website and give the users a better experience while using it. Built With css html javascript Try it out github.com
ERAN-HACKATHON
A website that makes the SafeBoda driver on-boarding process less time-consuming and also allows easy updating and tracking of driver status.
['RebekahIronga Ironga', 'Atuheire Elizabeth', 'hamna nuru', 'nuwasiima adrine']
[]
['css', 'html', 'javascript']
2
10,044
https://devpost.com/software/code-queen-safeboda-hackathon
Inspiration The idea of cracking the hackathon challenge to be ready for creating actual websites with real problem statements What it does It enables an administrator to create user accounts, whose holders in turn register drivers. It also enables one to update a driver's details and status, as well as view the drivers' list at the different stages of onboarding How we built it We created a back-end and front-end and split the roles between each person. We used technologies like HTML5, CSS, JavaScript, NodeJS and MongoDB. We collaborated on GitHub, and each team member had improvements to make to the code Challenges we ran into Figuring out how to debug errors while connecting the back-end to the front-end, poor internet connections and power shortages Accomplishments that we're proud of Being able to create a system that corresponds with what the hackathon required What we learned Teamwork is key in order to accomplish something; research is crucial What's next for code Queen-Safeboda Hackathon Creating a provision where the drivers can do self-registration Built With bootstrap css express.js github html5 javascript mongodb mongoose node.js
code Queen-Safeboda Hackathon
Building a responsive driver on-boarding website with a proper authentication system that enables the administrator to create user accounts, whose holders in turn register drivers
['Daphyn L', 'Diana Apolat', 'Brenda Natunga']
[]
['bootstrap', 'css', 'express.js', 'github', 'html5', 'javascript', 'mongodb', 'mongoose', 'node.js']
3
10,044
https://devpost.com/software/theaces
Inspiration Learning to code. What it does It makes recruitment easy. How I built it We were a team of four and we used HTML, CSS, JavaScript, Node and Bootstrap. Challenges I ran into Covid-19, collaborating via Zoom, Slack and WhatsApp, and data consumption. Accomplishments that I'm proud of Teamwork, collaboration, our finished product What I learned Patience, resilience, remote collaboration, JavaScript, CSS and HTML, and Google. What's next for TheAces Hopefully we win the scholarship; conquering the world with our new skills. Built With css html javascript Try it out github.com
RiTA
RiTA made easy
['Dorothy Palesa Nantagya']
[]
['css', 'html', 'javascript']
4
10,045
https://devpost.com/software/meetical-for-confluence
New meeting calendar Create a meeting page from a calendar event Agenda view of the calendar 1-on-1 meeting, created with one of the new templates Weekly meeting with the new rating macro Change template for recurring meetings Automate the creation of meeting pages for recurring events Inspiration Throughout my career as a developer and IT consultant I’ve avoided pointless meetings as much as I could, yet I’ve been learning that when done right, meetings can collectively decide whether a company succeeds or fails! They might block us from doing our jobs, or drive success by aligning individuals and teams to perform and deliver their best work. Lukas Gotter - Founder & CEO of Meetical Confluence is an amazing tool to share, prepare and document meeting notes. However, it lacks integration with modern Calendars and creating pages for meetings often feels like a waste of time. When pages start to get outdated and even lost, people become dissatisfied with the platform. Sadly, this can result in less effective collaboration and planning, making finding relevant information a challenge. Meetical’s mission is to change that, helping teams to excel at meetings and revolutionizing how people work with Confluence Meeting Notes. What it does Meetical for Confluence is a meeting management tool built by Confluence users for Confluence users. It allows teams to enhance meeting planning, documentation, and review, through seamless integration with popular third-party calendars like Google Calendar and Outlook. The App allows you to directly create meeting pages from your Calendar and automatically links the event with a confluence meeting page. In addition, recurring meetings get automatically grouped and you can fully automate the creation of pages to ensure a Confluence page exists at the right time for every key meeting. Meetical also provides users with a variety of new Confluence templates, comprised of thoughtful combinations of our custom, proprietary macros that help standardize and improve the meeting experience. What problems does it solve Have you ever lost meeting notes and had to spend time tracking down people and/or information? Have you ever wished to create a meeting page with a single click containing all basic meeting info from your calendar? Have you ever wanted to automate the creation of Confluence meeting pages? Have you ever wanted a standardized way to publish a meeting page for everyone attending? Have you ever wanted professionally maintained confluence macros providing meeting metadata at-a-glance? Have you ever wanted customizable metrics around meeting effectiveness, driven by attendees? Meetical is the solution for all of these common problems! (1) Meetical supports and integrates directly into services like Google Calendar and Outlook, with extensions that allow users to create templated Confluence pages directly in those applications, so that immediately after creating your meeting, you can add Meetical integration and seamlessly publish a meeting page to a space of your choosing. (2) Meetical adds structure to meeting notes on Confluence. By standardizing the process by which meeting notes are created, we make it simple for notes to be organized. The Confluence page is also directly linked in the meeting description, so you only have to track down the meeting to find its associated notes. 
(3) Furthermore, by automatically linking the meeting Confluence page in the meeting description, all attendees are readily afforded access to the meeting notes before, during, and after the meeting! Meetical takes on the burden of notifying meeting participants about the meeting page by adding them as watchers. (4) One of the coolest aspects of Meetical is its macros. Tens of custom-designed macros, providing different segments of meeting metadata with varying degrees of customizability, are made available to all users on a Confluence instance. (5) One of Meetical’s newest macros is its Rating macro, which allows all attendees to vote on a customizable aspect of a meeting or any question in general. User ratings are persisted in Confluence to ensure everyone only gets one vote, and a general average of all votes is also available. Choose between a variety of visual rating styles, including emojis, stars, and a scientific Likert scale! How we built it Meetical is built on Atlassian Connect and Spring Boot, providing backend services, rendering macros as individual template views and, as appropriate, integrating with React JS and Atlaskit for a cutting-edge user experience. GitLab pipelines provide continuous integration and automated testing, while Heroku supports our production and staging environments. Challenges we ran into One of the most significant challenges was overcoming limitations of Confluence Cloud: for example, the absence of a REST API to get the user’s time zone and of an API to get a Confluence user account id by email address. Setting up a Redis cache was our solution, giving us performant access and letting us match meeting participants with Confluence users. Another significant challenge was “React”ifying our macros. Recognizing the performance impact of dynamic Confluence macros, there were still some things that made a lot of sense to write in React. This meant top-down integration, to allow macros to communicate with backend services and provide distinct visual feedback using Atlaskit. Integration with the server-side Atlassian Connect app connected the dots for that problem. Since we were working on an existing product with a growing user base, we could not move at the speed other teams might have, teams that could have compromised on testing, documentation, and/or backward compatibility. In spite of that, we were still able to move forward relatively quickly and make significant improvements. Something that should not be understated is the time spent on setting up continuous integration pipelines and development environments. Before making significant changes to the codebase, we ensured that GitLab was running all of our JUnit tests and that docker-compose would eliminate the dependency on starting Postgres and Redis instances manually. None of this existed before the start of the hackathon. Our hackathon team is made up of 3 software engineers, spread across 3 countries and 4,000 miles. The time-zone differences presented unique challenges, but we made it work with JIRA, Confluence, Slack, Zoom, and even Meetical, facilitating near-daily stand-ups and constant communication. Accomplishments that we’re proud of We are very excited to be able to present Meetical as a greatly improved finished product to a wider audience.
Meetical was spawned out of a side project from our CEO, Lukas Gotter, who found that companies gravitated toward using our app organically, without advertising campaigns or anything like that, eventually becoming paying customers to support Lukas’ full-time dedication to the project. We’re proud of our existing first customers, who also helped us shape and optimize the product and gave us early feedback for planned new features and epics. Meetical was growing quickly, but in order to accommodate the demands of the Codegeist hackathon, and to deliver on most of the feature improvements to set Meetical apart from others, Lukas had to look for like-minded software engineers from all around the world who were both inspired by the product and familiar, to some degree, with the existing architecture. That Meetical has evolved so much since the start of the hackathon makes us all very proud and excited for the future of the application! What we learned Developing Cloud native Apps with the Atlassian Connect Framework Minification of React components and integration with Atlaskit Using Redis for scalable, performant, distributed caching and making microservices communicate with each other The process of facilitating reactive communication between macros and backend How to stay productive and communicate effectively when working with people around the world What's next for Meetical The Meetical hackathon team found each other during this amazing event and our dream would be to continue working on this together full time. Winning this hackathon would give Meetical both a financial and reputational boost to help make this dream become reality almost overnight. Our new App features will be deployed to production as soon as we finish the submission, and we are excited to hear our users’ feedback. In the future we want to allow custom meeting page titles, and improve the overall performance. Also, we want to develop a new Slack App to create Confluence meeting pages, which we already prototyped during the hackathon. We even have new app ideas that were discovered during the hackathon but didn’t have time to implement. One of the most critical judging criteria is the “extent to which the solution can help the most Atlassian users”, and it’s on this point that Meetical is best able to distinguish itself from other submissions. We believe that Atlassian is looking for more than just an idea, because an idea, not supported or maintained, does nothing for the platform. Meetical wants to continue to support and grow our application post-hackathon, regardless of whether we win. The fact that we were able to release a few months before the hackathon to organically grow our clientele, and prove that Meetical filled a need that companies currently have (and are willing to pay for), might just be our biggest selling point. Meetical is here to stay, and we’re excited for what the future holds. Built With atlaskit atlassian-connect heroku java javascript postgresql react redis spring spring-boot Try it out marketplace.atlassian.com
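The challenges section above mentions that Confluence Cloud offers no REST call to resolve an account id from an email address, so resolved matches are kept in a Redis cache. Meetical's real backend is Java/Spring Boot; the following TypeScript snippet is only a hypothetical sketch of that caching idea, and the `lookup` callback stands in for however the app actually matches participants to Confluence users.

```typescript
// Hypothetical sketch of an email -> accountId cache; not Meetical's actual (Java) code.
import Redis from "ioredis";

const redis = new Redis(); // assumes a reachable Redis instance

async function resolveAccountId(
  email: string,
  lookup: (email: string) => Promise<string | null> // placeholder for the real matching logic
): Promise<string | null> {
  const cached = await redis.get(`accountId:${email}`);
  if (cached) return cached;

  const accountId = await lookup(email);
  if (accountId) {
    await redis.set(`accountId:${email}`, accountId, "EX", 60 * 60 * 24); // cache for a day
  }
  return accountId;
}
```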
Meetical for Confluence Cloud
Create Confluence Meeting Notes from your Calendar! Automate recurring meeting admin tasks and make your meetings more effective with this amazing Confluence App.
['www.meetical.io', 'https://marketplace.atlassian.com/1222405?utm_source=devpost', 'Lukas Gotter', 'Lucas Jahn', 'aubrey-y Yan']
['Built with Connect - Grand Prize']
['atlaskit', 'atlassian-connect', 'heroku', 'java', 'javascript', 'postgresql', 'react', 'redis', 'spring', 'spring-boot']
0
10,045
https://devpost.com/software/lively-recorder-for-confluence-audio-video-screen
Simply insert the recorder macro to start a new recording! Using the recorder pop-up, recording everything from screencasts to audio is a breeze! After you're done, insert your recording and share it with your colleagues! Yes, we've got data locality! You can choose the region of the S3 bucket where we will store your recordings. Never fear running out of attachment space again. With the Lively Recorder you can get up to 5 TB of storage, just for your recordings! Inspiration In our internal Confluence instance we document a lot of our processes in the format of “How we do X” articles. This works great for simple stuff, but for more complicated workflows we have found screen recordings to be much more useful than plain text. As a result, we started adding screen recordings to these pages. After some time, this turned chaotic as different team members were using different operating systems, tools and settings. Less tech-savvy team members also had trouble correctly setting up a recording tool on their machine, so they weren’t able to participate. At the same time, we noticed that these videos were taking up valuable attachment storage in our Confluence instance. We were afraid that we would soon run out of attachment storage if this continued. We needed a simple unified way to create and share screen, audio and video recordings in Confluence, without using our attachment storage. This is how the idea for our Lively Recorder App was born! What it does Upon inserting the Lively Recorder macro, you can choose what type of recording you want to create. Currently our app supports the following types: Audio Video Screen After choosing a type, our recorder pop-up opens. Here you can choose the correct microphone, camera, or screen for the recording. From there, everything is quite simple: Start the recording. Stop it once you’re done. Give it a name and possibly a description. Upload your recording! Once the recording is fully uploaded, the pop-up will close automatically and insert the recording into your page! Now your colleagues can listen to your charming voice, look at your beautiful face, or watch you delete the production database - all in your Confluence page! Now, here’s the icing on the cake: for all EU-based folks and those who value data locality, we have implemented a special feature! When first using the recorder, an admin can decide where in the world your recordings should be stored. Or in more technical terms: you can choose the region of the S3 Bucket that we will create for you! As we are partly EU-based ourselves, this was quite important to us. How we built it We started by building a frontend-only proof of concept that allowed us to make simple video recordings in the browser. After we got that down, we implemented a prototype of our backend that locally stored recordings, so we could test uploading and viewing them. This was later replaced by a mechanism that stores the files in an S3 Bucket. After that we implemented screen and audio recording and focused on improving the UX. Challenges we ran into Because Connect apps live in iframes and those iframes are sandboxed strictly, we had trouble gaining access to the user’s camera and microphone. We solved this problem by using a pop-up, which also turned out to serve the usage flow of our app very well. It also wasn’t easy to come up with a good solution for storage quotas. We wanted to give every user as much storage and flexibility as possible while keeping the price of the app low. 
We initially wanted to provide different tiers/plans, so that each customer could choose exactly the amount of storage that they need for their instance. This, however, turned out to not be possible on the Atlassian Marketplace. Consequently, we decided to give every instance 100GB to start with and increase that depending on the number of paid users in the instance (up to a maximum of 5TB). Accomplishments that we're proud of Of course, we are happy that we were able to develop a cool app that solved issues we were facing internally. But we are even more proud of how we worked together as a team to make this happen. We are a small team split across Munich, Germany and Vancouver, Canada, and every member was working from home due to the current global situation. Using tools like Confluence, Jira and Slack to coordinate and synchronize ourselves across time zones, we managed to work together just as if we were in the same office. This definitely strengthened our bond as a team. :) What we learned Pop-ups can give you access to browser APIs that might otherwise be blocked in Connect iframes. With the right tools fully remote teams can have a great team experience! Don't underestimate the amount of work required for a presentation video – starting early pays off! What's next for Lively Recorder for Confluence We are super excited about our new app and have lots of ideas for it moving forward: Add a simple editing/trimming tool so unwanted parts of your recording can be removed easily. Automatically set the storage quota depending on the number of paid users (up to 5TB per instance). Page-level recordings overview page (similar to the attachments overview). Space-level settings and overview (e.g. allow/disallow recordings in this space, see storage used in space, etc.). More player options (force mute, colors, play speed, etc.). Track views of recordings and display them if desired. Volume level indicator for microphone device selection. Allow direct upload of video and audio files. Improvements to the global settings page (more details about what spaces/pages use how much storage, set max recording length, restrict usage to a group, clean-up tools, etc.). Continuous improvements to the overall look & feel of the app. Built With amazon-web-services atlassian-connect-spring-boot aui connect java javascript spring-boot typescript webrtc
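For the screen recording flow described above, a minimal browser-side sketch using the standard getDisplayMedia and MediaRecorder APIs could look like the following. The function name, stop mechanism and upload step are illustrative assumptions, not Lively Recorder's actual code.

```typescript
// Minimal sketch: capture the screen until the user stops, then return the recording as a Blob.
async function recordScreen(stopSignal: Promise<void>): Promise<Blob> {
  const stream = await navigator.mediaDevices.getDisplayMedia({ video: true, audio: true });
  const recorder = new MediaRecorder(stream, { mimeType: "video/webm" });
  const chunks: Blob[] = [];

  recorder.ondataavailable = (e) => { if (e.data.size > 0) chunks.push(e.data); };
  const stopped = new Promise<void>((resolve) => { recorder.onstop = () => resolve(); });

  recorder.start();
  await stopSignal;   // e.g. the user pressed "Stop" in the recorder pop-up
  recorder.stop();
  await stopped;

  stream.getTracks().forEach((t) => t.stop());      // release the captured screen
  return new Blob(chunks, { type: "video/webm" });  // ready to upload, e.g. via a pre-signed S3 URL
}
```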
Lively Recorder for Confluence
Create audio, video and screen recordings directly from within Confluence!
['Braxton Hall', 'Sven Schatter', 'Fabian Siegel', 'Felix Grund', 'Nick Peterson']
['Built with Connect - Second Place']
['amazon-web-services', 'atlassian-connect-spring-boot', 'aui', 'connect', 'java', 'javascript', 'spring-boot', 'typescript', 'webrtc']
1
10,045
https://devpost.com/software/zenrpa-for-jira
Inspiration As product builders, we have collectively spent the past 7 years leading product teams and using JIRA at 3 major unicorn companies representing over $1B worth of annual recurring revenue. We've found that large Enterprise B2B companies have unique challenges when it comes to customer issue triage: We tended to have large enterprise contracts that customer success managers had a huge incentive to protect We tended to have large enterprise prospects that account executives had a tremendous incentive to win We tended to have major partnerships that business development folks had a large incentive to grow and unblock The Pareto principle applied: 80% of revenue came from less than 20% of our logos, a highly uneven distribution Unsurprisingly, each stakeholder thinks the issue THEY filed in JIRA is the most important. Typically, whoever has the loudest voice wins this tug-of-war. Finally, we all know that one person who only ever chooses "P1" severity when filing their issue. Yes, you know who I'm talking about. Everything is a P1 to that person. FTW! We believe product teams everywhere are underserved when it comes to this conundrum. We believe they are missing a powerful, informative, and opinionated triaging tool within their own issue tracking systems. What it does Our product is an embedded issue triage tool for JIRA. It is a meticulously crafted and opinionated re-design of the way a Product team triages incoming issues in the Enterprise B2B context. Our product lets Product teams take a TurboTax-like approach to triaging issues within JIRA Software. For every issue that gets filed, a PM is guided through a series of simple questions and verifications. At each step, the PM is presented with just the data they need to triage, pulled from a CRM, analytics tool, or internal Admin, conveniently into the JIRA issue. Each step has a simple one-click answer. When we showed our designs to Product teams, they were excited, but we learned that their triage processes differ so much that they needed a way to actually build their own process in. So: Our product is not only the issue triage tool itself, but also a no-code app builder that lets Product teams make a triage tool of their own. How we built it We used the Atlassian Connect framework to build a triage interface for Product Managers directly into issues in JIRA. A webpack bundle carrying a React app is deployed into the JIRA environment. Since Product teams have different requirements for their triage process, and require different data inputs, we built an intuitive no-code app builder that actually re-compiles the triage app for a Product team's specific process. Some features include: Connect your CRMs, analytics tools, and internal Admins to source data from Pull your customer profiles (ex: Salesforce Account record) and analytics context right into a JIRA issue Build the connected path of TurboTax-like questions a PM can quickly answer for every issue they triage Challenges we ran into We got super constructive feedback on our tool from various Product teams. As a result, we had to re-architect some of our earlier product designs when we learned that they had such uniquely different triage processes that they needed the flexibility to re-build the triage tool for their process. Accomplishments that we're proud of We landed a proof-of-concept launch with a major unicorn company that will put our TurboTax-like triage approach to the test, so we can prove that our approach really works even at scale.
We believe the next generation of 1M users of JIRA will need simple but powerful triage capabilities, and we're very proud to deliver that for them. What we learned From interviewing customers and showing them our app designs, we learned why they would pay for our solution: Issue triage happens faster, because PMs don't need to pull up the process in a Google Doc and gather data such as customer ACV from Salesforce, usage stats from Mixpanel, etc. Issue triage becomes fair and consistent, which avoids the "loudest voice in the room" situation. Issue triage becomes measurable for the first time: because you capture the entire timeline and the specific actions PMs take while triaging, you can see whether your team is triaging right. When we learned this, we published our findings to our quick landing page for this product! What's next for ZenRPA for JIRA We're gearing up to run the POC with a potential enterprise customer and are focused on making that a success by August. Built With express.js mongodb nextjs postgresql react Try it out config.zenrpa.com
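The "connected path of TurboTax-like questions" described above is essentially a small flow data structure. The sketch below is a hypothetical model of such a flow; all field names and the example values are illustrative only, not ZenRPA's actual schema.

```typescript
// Hypothetical model: each triage step surfaces one piece of external data and offers
// one-click answers that decide the next step.
interface TriageStep {
  id: string;
  question: string;                                          // e.g. "Is this customer on an enterprise plan?"
  dataSource?: { system: "salesforce" | "mixpanel" | "admin"; field: string };
  answers: { label: string; nextStepId: string | null }[];   // null means triage is finished
}

const exampleFlow: TriageStep[] = [
  {
    id: "plan",
    question: "Is this customer on an enterprise plan?",
    dataSource: { system: "salesforce", field: "Account.Plan" },
    answers: [
      { label: "Yes", nextStepId: "usage" },
      { label: "No", nextStepId: null },
    ],
  },
];
```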
ZenRPA Triager for JIRA
TurboTax for issue triage. Build your own simple triage flow without code.
['Albert Ho', 'Tongbo Huang']
['Built with Connect - Third Place', 'Best Jira App']
['express.js', 'mongodb', 'nextjs', 'postgresql', 'react']
2
10,045
https://devpost.com/software/microsoft-teams-for-jira-pro-edition-app
Start a new Microsoft Teams conversation from Jira See live Teams chats & conversations in Jira Search and post issues from Microsoft Teams Inspiration For at least the last two years, customers have been nagging us to do a proper Microsoft Teams app for Jira. A pro edition, with a deeper integration and a flawless stream of communication between the two products. First, we were hesitant – but as our team grew, and as everyone was stuck in home office, our Microsoft Teams usage went through the roof. And we suddenly saw the hurdles our customers had to overcome. This needed to be fixed. What it does As we know from our other apps, the biggest time saver for our customers is avoiding switch costs between highly used tools of Atlassian and Microsoft. Our new Microsoft Teams for Jira app offers an easy access to the most popular, full collaboration tool Microsoft Teams, but from Jira. (Something we would have loved Stride to be, to be honest.) Imagine the following scenario: You are working in your Jira backlog and need to discuss something with a designer or stakeholder. Usually you’d have to leave Jira and jump over to Microsoft Teams to find the correct person to talk to. Our app simplifies this. You now can… …create new group chats or channel threads from Jira …see live updates to this chat / conversation in Jira to get instant feedback …get related conversation and comments shown in Jira automatically, if a Jira issue is shared in Microsoft Teams This deeply integrated collaboration allows you to see all relevant information and discussions for every Jira issue. How we built it We built it on the Atlassian Connect platform & Atlassians React Atlaskit component suite. We connect to Microsoft 365 services using Microsoft Graph. Challenges we ran into As always, if you start pushing the boundaries of what an integration does, you run into API limitations real fast. We have been working directly with Microsoft to allow us to provide as much value as possible for an initial integration and continue to work with them on advanced features for the future. Accomplishments that we're proud of We started to work on this app early this year and will be able to ship a satisfying first release this summer to our customers. Despite the challenges of working remotely we managed to create a new app with a great user experience! We worked closely with our marketing team as well, not only to validate our use cases with our business user experts, but also to build a cross-channel launch later this summer for this app. What we learned Our customers define our roadmap. This is one of our values as a company as we strive to build apps that fit the needs / that fit our users needs. At first we weren’t sure, if our new Microsoft Teams for Jira app would fit the market. But now, we’re very certain it will. What's next for Microsoft Teams for Jira - Pro Edition We will ship the first Jira Cloud release to both Marketplaces (Atlassian & Microsoft) this summer. After that, we will add support for Jira Server & Datacenter and put our focus on the next big topic: Microsoft Teams File sharing and meetings. Built With atlaskit atlassian-connect microsoft-graph node.js react redux typescript
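Since the description above mentions connecting to Microsoft 365 via Microsoft Graph, here is a hedged sketch of one building block such an integration could use: posting a message about a Jira issue into an existing Teams channel. The endpoint and payload shape follow Microsoft's public Graph documentation as we understand it; this is not the app's actual integration code.

```typescript
// Hypothetical sketch: start a channel conversation that links back to a Jira issue.
async function postIssueToChannel(
  accessToken: string,
  teamId: string,
  channelId: string,
  issueKey: string,
  issueUrl: string
): Promise<void> {
  const res = await fetch(
    `https://graph.microsoft.com/v1.0/teams/${teamId}/channels/${channelId}/messages`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${accessToken}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        body: {
          contentType: "html",
          content: `Let's discuss <a href="${issueUrl}">${issueKey}</a> here.`,
        },
      }),
    }
  );
  if (!res.ok) throw new Error(`Graph request failed: ${res.status}`);
}
```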
Microsoft Teams for Jira - Pro Edition
Connect your team. Start a Microsoft Teams conversation directly in your Jira issue. Your colleagues can easily join the discussion in Teams while you continue working in Jira!
['Tamara Braun', 'Tobias Viehweger', 'Andy Schmidt', 'Anke Viehweger']
['Built with Connect - Fourth Place']
['atlaskit', 'atlassian-connect', 'microsoft-graph', 'node.js', 'react', 'redux', 'typescript']
3
10,045
https://devpost.com/software/pair-up
Pair Up App Relevant people with expertise Relevant issues Dynamic keywords and list of people Inspiration In most vertical engineering orgs, each agile team owns a piece of the product from end-to-end. A frequent problem faced by individuals on these teams is to identify the best person in the org who can help unblock important bugs and enable them to make progress. This has become more challenging with the current pandemic related lockdowns, where teams have become remote and quick communication across orgs is becoming difficult given fluid schedules and time-zones. What it does When you launch the Pair Up issue glance, the app ranks people in your org who would be best suited to resolve the current issue based on similar issues they have resolved in the past. It also maps these individuals to the Jira issues they've fixed, that the app found relevant to the one currently being worked on. Pair Up helps users quickly connect with the right people and solve the issue more efficiently. How I built it We built the app as an issue glance using the Atlassian Connect framework. The app has a webhook module to process new issues and updates to existing issues. In the backend, the app uses AWS Lambda and IBM Watson NLU AI Framework to find relevant keywords from the summary and description of issues. Based on the analysis of keywords, we build multiple knowledge graphs using NetworkX, which in turn identifies and ranks people who have the experience to resolve the issue. Challenges I ran into Finding the right UI elements to use in the issue glance was a bit challenging. There was a bit of confusion on whether we should use Atlassian UI (AUI) or Atlaskit for the UI elements. Accomplishments that I'm proud of The app does a good job at identifying related keywords in issues, some of which might not have been explicitly mentioned by the reporter. Finding and ranking these people cannot be done purely based on keywords as most issues have multiple keywords. To overcome these issues, we built custom knowledge graphs for each issue which ranks importance of all people in the org to resolve this issue based on all issues they had resolved previously. We also had to fine tune our usage of the Watson API and our graph algorithm in order to accomplish this. Since we designed it with a cloud-based approach, the app is extremely scalable, reliable and capable of handling production load. What I learned We learned how to develop an Atlassian app and use the Atlassian UI elements in a way that maintains the look and feel of Atlassian products. What's next for Integrate with Confluence in order to make the knowledge graph more accurate and targeted Integrate with tools like Slack to make the collaboration easier and seamless. Built With amazon-web-services connect ibm-watson network-x node.js python Try it out github.com
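The app builds its knowledge graphs with Python and NetworkX, as described above; the simplified TypeScript sketch below only illustrates the underlying ranking idea of scoring people by how much the keywords of issues they resolved overlap with the keywords of the new issue, and is not the team's actual algorithm.

```typescript
// Simplified illustration of ranking "experts" for a new issue by keyword overlap.
type ResolvedIssue = { assignee: string; keywords: string[] };

function rankExperts(newIssueKeywords: string[], history: ResolvedIssue[]): [string, number][] {
  const wanted = new Set(newIssueKeywords.map((k) => k.toLowerCase()));
  const scores = new Map<string, number>();

  for (const issue of history) {
    // One point for every keyword this past issue shares with the new one.
    const overlap = issue.keywords.filter((k) => wanted.has(k.toLowerCase())).length;
    if (overlap > 0) scores.set(issue.assignee, (scores.get(issue.assignee) ?? 0) + overlap);
  }
  return [...scores.entries()].sort((a, b) => b[1] - a[1]); // best-suited people first
}
```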
Pair Up
Pair Up helps remote teams address their Jira issues efficiently by leveraging tribal knowledge held within their organization.
['Nithya Renganathan', 'Vikram Parthasarathy', 'Revanth Anireddy', 'Siddharth Subramanian', 'Gopinath Sundaramurthy']
['Built with Connect - Fifth Place']
['amazon-web-services', 'connect', 'ibm-watson', 'network-x', 'node.js', 'python']
4
10,045
https://devpost.com/software/jirami-real-time-retrospective
Jirami! Real-time Retrospective Complex projects and large teams can cause a lot of stress. Increasing workload and complexity make it difficult to see the actual progress and team members communicate problems too late. All of this causes frustration and stress, which can lead to delays and even burnouts. During the Atlassian Codegeist Hackathon 2020 we used Atlassian’s powerful Connect framework to develop Jirami: Realtime Retrospective. An app that gives insight into project status, promotes communication and brings people together as a team in a playful way. Welcome to Jirami, an unexplored island that houses marvelous wonders but also great perils! Exploring this island will be no easy task but uncovering its secrets will be worth it! Will your team take up the adventure to form an exploration party and map the entire island? Jirami provides insight into the project’s status, challenges of individual team members and their vitality. Using the fast and easy Atlassian Connect Express, each chosen sprint progress is visualized by a route across the island with the points of interest representing user stories and issues. Discover unexplored areas by completing user stories and issues but don’t fall behind on the tour guide! Jirami connects team members and makes working together the main focus. Create a quest to let your team members know you need help. Quests immediately show up in the Jira board and on the Jirami Island. Jirami promotes communication and an open environment by letting team members give an impression of their mood each day. Team members can communicate their mood during the day by assigning a grade to how they feel, picking a representative emoji and writing a short description. This is a conversation starter for addressing problems. To give the team something to work towards and further strengthen cooperation, Jirami offers the option to set goals and rewards. For example, the scrum master will bring cake or karaoke for the whole team if a goal is reached. Jirami uses story points from issues in a Jira board to create a score. This score can be used to set goals for teams to reach. When the amount of user stories in the Jira board is completed, the goal is reached, and the reward can be claimed. Exercise and vitality are important aspects of a healthy and stress-free work environment. This is why Jirami can be linked to Google Fit, to integrate real-time health data into the project status. Team members fill up a health bar during the day by doing at least 30 minutes of moderate exercise, as recommended by the World Health Organization. To motivate the team even more, team members can unlock badges for achievements. These can be for example: helping a lot of team members by solving quests, earn the most points in a sprint or building up a streak of exercising days. Each new sprint the scoreboard resets and there is a new chance to be a winner. Built With google-fit jira mongodb typescript
Jirami
Jirami offers project insights, team challenges, and vitality using Atlassian Connect Express. Sprint progress visualized on an island route.
['Stan Engels', 'Rayco Haex', 'Jeffrey van den Elshout', 'Lian Kuiper', 'Thomas Driessen']
['Built with Connect - Honorable Mentions']
['google-fit', 'jira', 'mongodb', 'typescript']
5
10,045
https://devpost.com/software/easytime-cnzqu2
Worklog created just by viewing the issue Asking for user input on an extended worklog EasyTime glance after creating a worklog Part of the Configuration Screen Be Lazy, The Machine Does It Better Instant, Automatic, Visual Inspiration EasyTime is all about taking a painful, manual process and letting the machine do the hard work. Our team had been struggling, manually inputting time data so that we could accurately bill our customers, when we finally got fed up and decided to let the machine handle it. We have been using EasyTime internally at TechTime and have seen accurate completion of timesheets skyrocket, while team members spend even less time worrying about it. What it does automatically records time based on viewing an issue, commenting on an issue or resolving an issue records time in predefined chunks aligns worklogs to a time grid produces distinct messages configurable for different events optionally limits tracking to select projects only tracks time for select user groups only automatically suspends tracking when the browser window is not active, resumes when activated records in Jira Issue View, Jira Software Boards and Jira Service Desk Queues recognises priority of events identifies clashes, shrinks or replaces low priority events merges sequential worklogs silently prompts to merge worklogs that are too far apart How we built it We already had a Server / DC version of the app, but had little experience writing apps with Connect for Jira Cloud, so we decided to use the same codebase for the business logic behind the decisions EasyTime makes, like when to merge, when to overwrite and when it's best to not do anything. We literally use the same business logic in the server and cloud versions of our app, so a lot of the work went into the translation between the data that Jira Cloud provides and the data our business logic was built to work with. Challenges we ran into The async and lossy nature of Jira Cloud webhooks means EasyTime needs to be more forgiving of the raw data provided by Jira. The limited nature of some Jira Cloud APIs and extension points meant we needed to squeeze EasyTime into the UI in a slightly unnatural way; in particular the "flags" UI, which we use extensively in the server version of our app, works in a completely different way for cloud. Accomplishments that we're proud of: Accepted to the Atlassian Marketplace and live today in production :) The first working automated worklog on Jira Cloud was a magical little moment. What we learned Sometimes adapting your solution to the platform is the hardest part of the problem Learned a huge amount about the Connect platform and Jira Cloud in general What's next for EasyTime Integrating with other tools you use every day, to log time in Jira, like Bitbucket, Confluence, Slack, IDEs etc. Polishing of the interface, to reach the standards expected of a high quality, consumer-facing application Built With atlaskit atlassian connect java jira Try it out marketplace.atlassian.com
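Two of the behaviours listed above, aligning worklogs to a time grid and merging sequential worklogs, can be sketched in a few lines. The grid size and gap threshold below are placeholder values and the code is an illustration of the idea, not EasyTime's actual business logic.

```typescript
// Hypothetical sketch: snap a worklog onto a time grid, and merge worklogs that follow
// each other closely enough to count as one continuous block of work.
type Worklog = { start: number; end: number }; // epoch milliseconds

function alignToGrid(log: Worklog, gridMs = 15 * 60 * 1000): Worklog {
  return {
    start: Math.floor(log.start / gridMs) * gridMs,
    end: Math.ceil(log.end / gridMs) * gridMs,
  };
}

function mergeSequential(logs: Worklog[], maxGapMs = 5 * 60 * 1000): Worklog[] {
  const sorted = [...logs].sort((a, b) => a.start - b.start);
  const merged: Worklog[] = [];
  for (const log of sorted) {
    const last = merged[merged.length - 1];
    if (last && log.start - last.end <= maxGapMs) {
      last.end = Math.max(last.end, log.end); // extend the previous worklog instead of adding a new one
    } else {
      merged.push({ ...log });
    }
  }
  return merged;
}
```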
EasyTime
Start tracking instantly, recording time automatically and review timesheets visually. For Relaxing Times – Make it EasyTime
['Ed Letifov', 'Poorvi Jhawar', 'Richard White', 'Richard Lapwood']
['Built with Connect - Honorable Mentions']
['atlaskit', 'atlassian', 'connect', 'java', 'jira']
6
10,045
https://devpost.com/software/zoom-recordings-for-confluence
Try it now: https://marketplace.atlassian.com/apps/1223133/zoom-recordings-for-confluence Inspiration With so many people now working from home or remote, asynchronous Zoom recordings have become a key team communication tool for many businesses. But sharing recordings among team members is quite a mess: Cloud recordings are limited to Pro accounts. To save costs most companies will have only a few employees with paid Zoom accounts which results in siloed recordings. No central location to access relevant recordings. The current workaround is to bulk forward emails with recording links or manually copy URLs into a shared spreadsheet. Recordings are password protected by default. When this setting is left on it requires finding the recording owner who then needs to search their emails for the password. What it does It's a macro that allows teams to collaboratively add, edit, display, search, sort and watch any Zoom cloud recordings without leaving Confluence. Some features: Connect with Zoom Oauth to fetch and add recordings from your Zoom account. Manually add non-password protected Zoom recording links. Edit recording topics instead of the default eg "Nathan Waters' Zoom Meeting". Add a password for password-protected recordings (easy access via iframe popup). Display thumbnails in three layouts: table, grid and list. Search, pagination and sort by date, topic or length. Recordings open into a dialog popup with a video player. Pasting a Zoom recording link into Confluence editor auto-converts to video player. No data leaves the Confluence instance (it's all within content properties). How I built it React and Atlaskit with code splitting. Confluence content properties. Serverless Cloudflare Worker for backend API. Zoom API and Oauth (awaiting app approval). HTML5 video component. Static build deployed to Netlify. Challenges I ran into There are many hacks involved in making this work: Dealing with potentially sensitive recording material I didn't want to do any external storage. However the key-value Confluence content properties are limited to 32KB chunks. So I built and tested a helper function to split data into 32KB max chunks and run a loop to get/set content properties. Would be good if that chunking was handled by the getContentProperty() and setContentProperty() functions. There were also some useful features (eg auto-add, password bypass) I had to drop because I didn't want to store Zoom access/refresh tokens externally or in content properties. Secure storage of 3rd-party access tokens within Confluence would be a neat option. Please change CONFCLOUD-62377 from a suggestion to a bug. My workaround hack for that was to use URL query parameters and a timestamp in the saveMacro data. The Zoom API is severely limited. Zoom requires all interactions happen server-side, the /recordings response doesn't return any thumbnails, or any video embed code and doesn't say which recordings are password protected. Not even in the official Zoom account dashboard do you see thumbnails for recordings. Through trial and error I discovered the returned download_url works within a HTML5 video element. So the thumbnails you see in my app are video elements skipped to 1 second in and with all controls/interactions disabled. The manual-add link feature is using a Cloudflare Worker to fetch the HTML and extract hidden input elements with the topic, date and duration details. It does the same to check if a recording is password protected. 
Accomplishments that I'm proud of Solving the challenges above piece by piece and putting the work in to get this project over the finish line. I'm a solo dev and for this app it's taken 7 days per week, ~10-12 hours per day for the past ~3 weeks. #codelife What I learned I finally made the switch to React Hooks with this app so learned a lot there (I still prefer Svelte over React). More in love with Cloudflare Workers than I was before. I built a custom API using tiny-request router. Found a few bugs and limits in the Atlassian developer platform. I figure you're running this hackathon for two reasons: more apps and to improve the DX. So hopefully some of my feedback is useful :) What's next for Zoom Recordings+ for Confluence Awaiting app approval from Zoom. Until then the Oauth connect only works for my account. Submit the app for approval on the Atlassian marketplace. Iterate based on feedback from customers. Built With atlaskit cloudflare react zoom Try it out marketplace.atlassian.com
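The 32 KB chunking of content properties described in the challenges above can be sketched as a split-and-reassemble helper. The `saveProperty` and `loadProperty` callbacks below stand in for whatever content-property call the app actually uses; names and key scheme are assumptions for illustration.

```typescript
// Hypothetical sketch: store a payload larger than 32 KB across numbered content properties.
const CHUNK_SIZE = 32 * 1024;

async function saveChunked(
  key: string,
  json: string,
  saveProperty: (key: string, value: string) => Promise<void>
): Promise<void> {
  const count = Math.ceil(json.length / CHUNK_SIZE);
  await saveProperty(`${key}-count`, String(count));
  for (let i = 0; i < count; i++) {
    await saveProperty(`${key}-${i}`, json.slice(i * CHUNK_SIZE, (i + 1) * CHUNK_SIZE));
  }
}

async function loadChunked(
  key: string,
  loadProperty: (key: string) => Promise<string>
): Promise<string> {
  const count = Number(await loadProperty(`${key}-count`));
  const parts: string[] = [];
  for (let i = 0; i < count; i++) {
    parts.push(await loadProperty(`${key}-${i}`)); // reassemble in order
  }
  return parts.join("");
}
```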
Zoom Recordings+ for Confluence
Add, display, search and watch Zoom cloud recordings directly within Confluence.
['Nathan Waters']
['Built with Connect - Honorable Mentions']
['atlaskit', 'cloudflare', 'react', 'zoom']
7
10,045
https://devpost.com/software/happee-know-how-your-employees-feel-by-collecting-feedback
Welcome to Happee! Tell us how you feel Rate how you feel about your team How happy are you with the company? What made you feel good this week? Is there anything that concerned you? All of the questions can be answered anonymously, so your feedback can be really honest. At the Happiness Dashboard you can see different kinds of metrics related to the given votes. This is your personal Happiness Index. You can compare your votes with the average votes of your colleagues. The overall company happiness shows the average results of the last weeks. Take a look at the feedback wall to see the latest feedback entries. Inspiration We have been working completely remotely for months now. We have experienced how difficult it is to get an insight into how our coworkers feel, especially how happy they are with the team and the company. There are no conversations in the tea kitchen any more, nor is there any lunch together, where people talk about the good and the bad things at work. Especially for roles such as Scrum Masters and Agile Coaches it is a special challenge to have so little personal contact in order to identify where challenges currently exist. With Happee we want to bring teammates closer together - no matter whether they work co-located or remotely. What it does Pulse survey On a regular basis, employees are asked to measure their personal happiness, happiness with the team and also with the company. They are also asked about the factors that influence their mood. The survey only takes a couple of minutes and the employee can choose whether to share his or her name or to remain anonymous. Dashboard Happee offers a simple and effective analysis of the submitted results. A dashboard provides the possibility for employees to compare their own mood with the company's average or to review how the happiness has changed over time. How we built it We built the Front-end with Vue.js – the data persistence is handled via Google Firestore and the Authentication mechanisms are implemented with Atlassian Connect. The backend is running on Cloud Run. :) Challenges we ran into iFrame resizing of the macros and dynamically generated lists can get a bit tricky when developing Confluence Cloud apps. Accomplishments that we're proud of We built the MVP in a week after work ❤️ What we learned It is worth the time to build a good development set-up. We achieved this using ngrok and nodemon to emulate our microservices locally. This way we were able to build our backend without time-intensive deployment delays. The atl.general webPanel location is quite handy when implementing an application-wide reminder functionality. Combined with a cookie within the Connect iFrame we used it to show a notification if the user can share information regarding his or her happiness. What's next for Happee - Know how your employees feel by collecting feedback Next, we want to provide the possibility to start an interaction based on the feedback. It will be possible to vote on feedback items or start a discussion with other people. Recommendations will be shown depending on the personal entries: For example, if there are negative entries, Happee suggests addressing them in the next retrospective or discussing them with a Scrum Master or any other teammate who you trust. We also want to add more metrics.
Happee - Company Happiness for Confluence
Whether your employees work in a remote team or share the same location, being happy at work means you're more engaged and productive & less likely to change your job. So Happee asks "How are you?"
['Sarah Schmitt-Bär', 'Colin Stark', 'Alex Plutta', 'Julian Wolf']
['Built with Connect - Honorable Mentions']
['atlassian-apis', 'atlassian-connect', 'firestore', 'javascript', 'love', 'scss', 'typescript', 'vue']
8
10,045
https://devpost.com/software/screenful-reports
Final report Schedule report Example chart Inspiration We wanted to create a tool that allows you to create fully customizable reports from Jira data by combining charts and text fragments. Reports can be used for sharing project status even with those who don't log in to Jira regularly. What it does You can construct a report and store it as PDF or schedule it to be sent to your colleagues via email or Slack. How I built it We first built a chart editor which allows creating custom charts such as line or bar charts. We also developed a List view which allows creating lists of issues. Finally, we created a report editor which allows combining charts and task lists into a report. Challenges I ran into There were no ready made chart libraries that would work out-of-the-box so we ended up customizing the charts quite a lot. Turning a HTML report into a PDF caused a few headaches as well. Accomplishments that I'm proud of The editor is simple yet powerful. The final reports look beautiful. What I learned With the right team, anything is possible What's next for Screenful Reports We'll add things like more complex layouts, new widget types etc. Built With node.js vue Try it out marketplace.atlassian.com
Screenful Reports for Jira Cloud
Business Intelligence reports for Project Managers
['Tuomas Tammi', 'Nairi Harutyunyan', 'Sami Linnanvuo', 'Gevorg Harutyunyan', 'Mikayel Petrosyan', 'Vilina Osilova', 'Hayk Yaghubyan', 'Ville Piiparinen']
['Built with Connect - Honorable Mentions']
['node.js', 'vue']
9
10,045
https://devpost.com/software/scrum-maister
UI navigation - from Jira issue view to ScrumMaister components Issue breakdown example and recommendations Sprint analytics generated for the active sprint that discovers potential problems AI-modelled retrospective items from the completed sprint Update - May 2021 Scrum Maister is now packed with plenty of new features and LIVE on Atlassian Marketplace - give it a go! Inspiration As a director of engineering at a product company, I spend my days building Agile development teams and departments and making them efficient. I live and breathe Scrum and spent 2 years researching how to make Scrum teams successful during my MBA. My other passion is Machine Learning and the ways that AI and machine learning models can improve development practices. When working with teams, I believe in immediate feedback and tools that shift all activities closer to the team, Scrum included. The key to driving successful changes is a persistent presence of the proposed improvements in all aspects of work and at all levels. What I noticed in most teams that practice Agile development is smaller or larger inconsistencies around the work in Jira - each team has its own way and all of them claim to be Agile. However, it became evident that, compared to successful teams, failing ones had poorer practices of describing and breaking down their work, following up on it during sprints and discussing pitfalls and potential improvements in the retrospectives. In most cases, it was not about following Scrum by the book, but rather not being present in Jira as a team. To solve that, I came up with the idea of creating an AI that lives in Jira, is a part of the team and works as a super-powered Scrum Master that spots the tiniest inefficiencies, slowdowns or blockers and informs the team about them right away. I invited my brother Maksym, who is a published scientist and a PhD student in the Machine Learning and AI program at EPFL (Switzerland), as a key expert in AI development for the project. The name for the project was easy to come up with - Scrum Master + AI = Scrum mAIster What it does Scrum Maister is a Forge add-on that actively helps teams improve their development, collaboration and SCRUM practices. It natively integrates with Jira and operates as 3 main modules at all stages of the development process - before, during and after sprint activities: "Scrummarly" - it uses AI to analyze issue descriptions and then suggests text changes to improve the issue breakdown and follow Agile and SCRUM practices. It also suggests to the user which issue fields to fill and how to generally improve the issue for other users. It computes an "SM breakdown score" which indicates how well the issue is prepared for efficient work in a SCRUM or Kanban setting. "Sprint analytics" - active discovery of potential blockers and pitfalls during active sprints across 7 different dynamic dimensions. This module performs an assessment of all issues in the sprint, finds communication, collaboration and progression problems and makes them visible in the sprint analytics dashboard. "Retrospectives" - the model collects and analyzes sprint patterns, breakdown specifics, communication and work progression and provides the baseline for team retros that includes issues not visible to the naked eye.
Scrum Maister has the potential to become a game-changer in Agile development - its feedback is available immediately, and AI text processing and modelling of potential improvements can play a significant role in improving development practices, fostering teamwork and enhancing not only Scrum but any development framework in any organization that uses Jira, as it fits into all stages of the SCRUM process as well as other less formal development methodologies (notice the blue monster icon representing Scrum Maister): How I built it I followed the Forge design spirit and utilized Azure PaaS (AppService) and FaaS (Logic apps) in the micro-service architecture pattern - where the data is sent from the Forge app and Jira into the cloud-hosted solution API and model APIs. The retrospective analytics are saved into the highly-available, geo-redundant Azure CosmosDB. The architecture is shown in the picture below: We used a Natural Language Processing model with 2-4 grams and a training set of about 2000 issues labeled according to the quality of their description. Privacy and data security are important factors for the application: we limit the data that leaves Jira to the minimum viable (issue description, numbers of disruptions) and exclude personal or sensitive information. We also do not store the processed data and follow security best practices for FaaS and PaaS solutions. Challenges I ran into The Forge UI toolset is slightly limited, with certain functions being on the roadmap. We had to find ways to use the existing functionality to achieve our goals. We expect to improve our design, UX and information representation practices as the platform develops. Another challenge was how to apply theory to solving practical problems - we did build the model fairly quickly, but getting the training data, creating the API and returning meaningful suggestions to users turned out to be hard but rewarding work. Accomplishments that I'm proud of When I apply Scrum Maister suggestions to real teams and their backlogs, I can see a huge amount of small improvements that, together, can significantly improve the quality of the product the team develops and the team work itself. If our tool helps at least one team to become more successful, it would mean the world to me. What I learned We learned how to build and prototype quickly - the whole project was created within 4 weeks - and how to turn theory into practice. We are absolutely excited for the next steps in the Scrum Maister roadmap and delivering even more advanced machine learning and AI models that will help teams become efficient. What's next for Scrum Maister We will spend the next weeks on improving product quality and reliability - while it works well for the key functions, some corners had to be cut. We need to implement more robust and user-friendly error handling and cover the code with autotests. Then, the next big step would be to switch from the N-gram model for text generation to a state-of-the-art deep transfer learning model. We will also work on improving the checks for sprint analytics and retrospective generation. Around this time we expect our first beta customers to arrive. Then, it is all about scaling and listening to customer feedback. We hope that by November, Forge apps will be added to the Atlassian marketplace so that we get access to more customers and can follow the Atlassian marketplace best practices. Also, we will work on our design, icon sets and website to make it consistent, enterprise-friendly and proprietary.
Our roadmap: Follow latest updates Check out our website and tune in for the updates on functionality and availability: https://scrummaister.com/ Built With forge javascript machine-learning natural-language-processing python Try it out scrummaister.com marketplace.atlassian.com
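Since the description above mentions a 2-4 gram NLP model over issue descriptions, the sketch below shows what the n-gram feature extraction side of such a pipeline could look like. The real model and training pipeline are not public, so this is only an illustration of the feature step, not Scrum Maister's actual code.

```typescript
// Hypothetical sketch: turn an issue description into 2-4 word n-gram features
// that a classifier could score.
function extractNgrams(text: string, minN = 2, maxN = 4): string[] {
  const tokens = text.toLowerCase().split(/\W+/).filter(Boolean);
  const ngrams: string[] = [];
  for (let n = minN; n <= maxN; n++) {
    for (let i = 0; i + n <= tokens.length; i++) {
      ngrams.push(tokens.slice(i, i + n).join(" "));
    }
  }
  return ngrams;
}

// Example: extractNgrams("As a user I want to log in")
// yields features like "as a", "a user", "as a user", "a user i want", ...
```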
Scrum Maister
Scrum Master + AI = intelligent helper that revolutionizes Agile development through AI-powered issue grooming, sprint analytics and retrospectives
['Igor Andriushchenko']
['Built with Forge - Grand Prize']
['forge', 'javascript', 'machine-learning', 'natural-language-processing', 'python']
10
10,045
https://devpost.com/software/visualize-with-aws-atlassian-forge
Visualize with AWS - app benefits Visualize with AWS - Jira issue panel with PlantUML AWS diagram Visualize with AWS - Confluence macros with Vega diagrams Visualize with AWS - Confluence macros with Vega-Lite diagrams Visualize with AWS - Confluence macros with Mermaid diagrams Visualize with AWS - Confluence macros with PlantUML C4 and AWS diagrams Visualize with AWS - Confluence macro editor (diagram source via inline JSON) Visualize with AWS - Confluence macro editor (diagram source via URL) Visualize with AWS - Confluence macro editor and preview Inspiration Our initial inspiration was the excellent Charts.xkcd library, which we both found appealing and funny, and we wanted to integrate it into Jira and/or Confluence via an AWS based rendering backend. Alas, that specific library requires a bit more work to use outside of a browser DOM, so we switched to a couple of other powerful chart and diagramming engines first, hoping to get back to the original plan and add XKCD style diagrams as another option later on. What it does Visualize with AWS allows you to use a variety of declarative diagram rendering engines to visualize any kind of data. It provides a Jira issue panel to render a single diagram (for example an architecture diagram via Mermaid or PlantUML (incl. AWS and C4 modes), and a Confluence macro to go wild with as many visualizations as you can fit on a page (notably via Vega and Vega-lite ). Diagrams can be provided and edited inline, or referenced via URL for more complex or dynamically updating charts and diagrams (down the road we want to separate the so far combined visualization and data source declaration) – meet Visualize with AWS (Atlassian Forge) . How we built it We used the Forge CLI to quickly explore the various Forge modules to determine the applicable UI components for the use case. We then started over with the app itself and have refined it since, which mostly meant finding sufficiently usable workarounds for encountered UX issues based on Forge UI limitations. The Forge CLI's excellent DX makes all this a breeze, so it is a great prototyping tool for Atlassian apps in general. The rendering engines are primarily provisioned on serverless infrastructure via Docker containers on AWS Fargate right now, though we intend to migrate most to AWS Lambda functions for even better scalability and utilization. Challenges we ran into The special circumstances of the Forge execution model, namely the combination of React style components with a Lambda based backend invocation cycle made managing state and async calls a bit irritating at first, but mentally 'translating' this to a kind of old school, page cycle based web application model helped a lot to overcome this. Also, the mentioned Forge UI limitations force some workarounds to prevent obvious usability flaws, but we are confident that the Atlassian Team will iterate quickly on providing additional layout and design options so that elements can be presented in more flexible ways in dialogs, forms, and tables. Accomplishments that we're proud of We are really happy that the app is cross-product from the get go and works conceptually similar in Jira and Confluence. We are also happy about the well scalable and easily extensible serverless API that will allow us to add new rendering engines quickly, with the most intriguing missing candidate being Charts.xkcd to visualize data via “sketchy”, “cartoony” or “hand-drawn” styled charts . 
What we learned We learnt a lot about Scalable Vector Graphics (SVG), a technology we have always been fond of, but never had the need to dive into from a development perspective. So far crafting applicable SVG images seems to be the most versatile workaround for Forge UI limitations. Of course, it still can't replace JavaScript based interactions with diagrams, for example. Based on the excellent Forge CLI DX, we also intend to use a 'Forge first' development approach for our Cloud apps going forward, because the limitations force you to stay laser focused on the customer value, while ignoring UI/UX bells and whistles during the prototyping phase. This eases developing a domain model and the backing API/SPI so that the app core remains independent from the frontend technology. Whether or not an app still requires the currently superior UI versatility of Connect then depends on how the Atlassian Cloud platform evolves over the coming months and years. What's next for Visualize with AWS (Atlassian Forge) We would very much like to make this app available to users via the Atlassian Marketplace, so depending on the ETA for the public distribution of Forge based apps, we might need to migrate the app to Atlassian Connect in the short term. That's no big deal given its static characteristics, but still a pity, because once we do, and need to create a likely more appealing AtlasKit based UI anyway, migrating back without disrupting users will be difficult. Feature-wise we primarily need to split out the so far embedded remote data sources into a separate entity so that diagrams are easier to use and maintain, and more importantly, to provide live local data via JQL and CQL. We also need to rethink the prototype domain model for Jira in terms of how to best store and present more than one diagram eventually. Finally, we would also like to add an XKCD style rendering engine via Charts.xkcd. Built With amazon-web-services atlassian confluence forge jira svg Try it out marketplace.atlassian.com marketplace.atlassian.com
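Given the architecture described above (a Forge macro handing diagram source to a remote, serverless rendering backend that returns SVG), a heavily simplified Forge UI kit sketch might look like the following. The rendering service URL, query parameter and config fields are placeholders, and the app's real UI is considerably richer than this.

```typescript
// Hypothetical Forge UI kit macro: build a URL to a remote rendering service and show the result.
import ForgeUI, { render, Macro, Image, useConfig } from "@forge/ui";

const RENDER_SERVICE = "https://example.com/render"; // placeholder for the AWS-hosted engine

const App = () => {
  const config = useConfig() ?? { source: "", engine: "plantuml" };
  const src = `${RENDER_SERVICE}/${config.engine}?diagram=${encodeURIComponent(config.source)}`;
  return <Image src={src} alt="Rendered diagram" />;
};

export const run = render(<Macro app={<App />} />);
```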
Visualize with AWS (Atlassian Forge)
Visualize data via declarative chart and diagram engines such as Vega/Vega-Lite, PlantUML, Mermaid, and Graphviz
['Steffen Opel', 'Henrik Opel']
['Built with Forge - Second Place']
['amazon-web-services', 'atlassian', 'confluence', 'forge', 'jira', 'svg']
11
10,045
https://devpost.com/software/precodegeist
How I built it (Part 1: get the data and train the neural network). How I built it (Part 2: Atlassian Forge & Jira). Use case 1. How I built it (Part 2: Atlassian Forge & Jira). Use Case 2. GIF Issues from Apache Project GIF CSV file with more than 100K Jira issues A user reads a Jira issue. A user creates an issue with "\p" (prediction) prefix. Issue created with "\p" (prediction) prefix. Issue type predictions, with tensorflow Issue priority predictions, with tensorflow Inspiration Is it possible to automatically fill in some of the fields of Jira issues, based on the title of the issue? We can be more productive and avoid human errors if we do this automatically. Can we train a neural network with hundreds of thousands of issues to achieve this? Using Deep Learning to predict some properties of Jira issues, reading the summary of the issue. At this time, we will predict the type (bug or improvement) and priority of Jira issues (high, medium, low). What it does It predicts, from the summary of a Jira issue, its type and priority. This prediction appears in the detail of the issue. Also, for users who prefer to use shortcuts, a trigger has been designed so that if the issue summary starts with "\p", then a comment is automatically written to the issue with the prediction and the prefix "\p" is deleted. This prediction is achieved thanks to a neural network that has been created and trained for this project, using Tensorflow. How I built it Part 1: get the data and train the neural network. First of all, I downloaded many real Jira issue projects, like the Apache projects ( https://issues.apache.org/jira ). After preprocessing the information, I created a CSV file with more than 100,000 real Jira issues. For each issue, we have its summary, type, priority, etc. Next, I designed 2 neural networks with Tensorflow 2 and trained them to predict the type of an issue and its priority. I've done several experiments with different hyperparameters, reaching over 75% accuracy on test issues. Afterwards, I exported the trained models so I could use them with TensorflowJS. This step is necessary because in the Forge FaaS (Function as a Service) I can execute code in javascript (node) but not code in Python (TensorFlow). However, as I explain later, TensorflowJS didn't work in the Forge FaaS, so I fixed it with a workaround (AWS Lambda). Part 2: Atlassian Forge & Jira. Use Case 1. A user reads a Jira issue. The “IssuePanel” of “Predictions for Jira” appears. If the predictions are cached in the issue property, the predictions are returned directly. If the predictions aren’t cached, the predictions are made in AWS Lambda functions created for this project. Part 2: Atlassian Forge & Jira. Use Case 2. A user creates a Jira issue. They use the "\p" prefix because they want an automatic comment with the prediction. The "issue created" event triggers a function and checks for the "\p" prefix. The predictions are made in AWS Lambda functions. A comment is automatically published with the prediction, and the prefix "\p" is removed. Challenges I ran into It is my first application for Jira, and it has been a challenge :) Also, I have not used an existing neural network; instead, I collected a real Jira issue dataset (from Apache open source projects), preprocessed the dataset and trained 2 neural networks to predict specific properties of Jira issues: type and priority. The accuracy achieved is very good. TensorFlowJS didn’t work in Forge FaaS.
A "TextEncoder is not a constructor" error appeared during deployment. As a solution, I did a workaround and set up TensorFlowJS with the trained models on AWS, using AWS Lambda and AWS API Gateway. It has also been a challenge to cache the predictions in "Issue Properties" and use the Jira API to detect the "\p" prefix in the summary and edit the summary. Accomplishments that I'm proud of I really think that deep learning is very useful in Jira. I am very proud to have built a quality dataset, trained the networks to get very good results and developed an application with Forge that is useful for work teams, because it reduces human errors and increases productivity. I am also proud of the workaround to be able to run TensorFlowJS. Finally, I am proud of the idea of using a "\p" prefix in the summary, as a "shortcut" for advanced users. And very happy to complete the project before the deadline. What I learned I've learned to create applications with Forge for Jira. I've learned more about neural networks, and I've learned a lot about networks that work with text and I've practiced the "Word Embedding" method. I've also learned how to collect a dataset and how to better pre-process data. What's next for Predictions for Jira In some scenarios it may be useful for some properties to be automatically filled in with the prediction of the neural network when the user leaves these properties empty. The neural network can be trained for specific Jira projects, so that it can even predict the person assigned to each issue. If the issue includes a "description", also use it for predictions. Keep an eye on the improvements in Forge, to be able to take advantage of them. Built With amazon-web-services deeplearning forge jira lambda tensorflow tensorflowjs Try it out github.com
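For the Lambda-side prediction step described above, a minimal sketch of loading an exported Keras model with TensorFlow.js and classifying an already-tokenized summary could look like this. The model path, tokenization (word indices padded to the model's expected length) and class labels are assumptions for illustration, not the project's actual code.

```typescript
// Hypothetical sketch: load the exported model and predict the issue type from word indices.
import * as tf from "@tensorflow/tfjs-node";

const TYPE_LABELS = ["Bug", "Improvement"]; // assumed label order

async function predictIssueType(wordIndices: number[]): Promise<string> {
  // Assumes wordIndices is already padded/truncated to the sequence length the model expects.
  const model = await tf.loadLayersModel("file://./model/model.json");
  const input = tf.tensor2d([wordIndices], [1, wordIndices.length]);
  const probs = model.predict(input) as tf.Tensor;
  const best = (await probs.argMax(-1).data())[0];
  return TYPE_LABELS[best];
}
```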
Predictions for Jira
Forge Jira app with Deep Learning to predict some properties of Jira issues
['Javier Campos']
['Built with Forge - Third Place']
['amazon-web-services', 'deeplearning', 'forge', 'jira', 'lambda', 'tensorflow', 'tensorflowjs']
12
10,045
https://devpost.com/software/ai-insights-for-jira-service-desk
Issue satisfaction timeline Agent conversation scores Customer satisfaction chart Problem tickets reported on the dashboard Inspiration There is huge potential for the use of AI within the cloud space to support and enhance business processes. Taking advantage of an opportunity to leverage the Forge framework to get a taste for what might be possible was a 'no-brainer'. What it does The app uses the IBM Watson Tone Analyzer to detect tones and emotions found in Jira Service Desk ticket conversations. Tones detected with this endpoint include frustrated, sad, satisfied, excited, polite, impolite and sympathetic. Conversations between user agents and customers are broken down and given a score according to their tone. Real-time updates and a timeline of conversation tones are displayed via the QuickChart API right within the issue panel. Tones and their scores are then stored within Jira for filtering, reporting and tracking. How we built it Forge of course... and with some help from the Jira Kanban board and Bitbucket. The Forge Slack channel was helpful. Challenges we ran into We only decided to have a go at building this proof of concept within Forge for Codegeist early in June, just 10 days before Codegeist entries were due to close! We wanted to get a feel for Forge and see if we could actually stand something up in such a short amount of time. We definitely would have liked a little more time to refine and polish things. Accomplishments that we are proud of We think our app is pretty cool. Like playing with Siri or Alexa right within Service Desk. Working out how to incorporate the QuickChart API into the Forge UI was rewarding too. What we learned We learnt that Forge has great potential, and we will keenly watch and support further development. What's next for AI Insights for Jira Service Desk We plan to utilise the winnings from Codegeist to plan and build a fully featured version of this app for the Atlassian Cloud marketplace. Built With forge ibm-watson quick-chart Try it out izymesdev.atlassian.net
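As a rough illustration of the flow described above, the sketch below scores a conversation with the IBM Watson Tone Analyzer chat endpoint and builds a QuickChart image URL for the timeline. The Watson instance URL, credentials handling, and exact response handling are assumptions rather than the app's real code.

```javascript
// Sketch: score agent/customer utterances with Watson Tone Analyzer, then build a
// QuickChart URL for the issue panel. All endpoints and keys below are placeholders.
import { fetch } from '@forge/api';

const WATSON_URL = 'https://api.us-south.tone-analyzer.watson.cloud.ibm.com/instances/INSTANCE_ID'; // placeholder
const WATSON_AUTH = process.env.WATSON_BASIC_AUTH; // placeholder: base64 of "apikey:<key>"

export async function scoreConversation(utterances) {
  // utterances: [{ text: 'I am still waiting...', user: 'customer' }, ...]
  const res = await fetch(`${WATSON_URL}/v3/tone_chat?version=2017-09-21`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: 'Basic ' + WATSON_AUTH,
    },
    body: JSON.stringify({ utterances }),
  });
  const { utterances_tone } = await res.json();
  // Each entry lists tones such as frustrated, satisfied, polite, impolite, ...
  return utterances_tone.map((u) => u.tones);
}

// Build a QuickChart image URL from one score per comment, for display in the issue panel
export function timelineChartUrl(labels, scores) {
  const config = {
    type: 'line',
    data: { labels, datasets: [{ label: 'Customer tone score', data: scores }] },
  };
  return 'https://quickchart.io/chart?c=' + encodeURIComponent(JSON.stringify(config));
}
```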
AI Insights for Jira Service Desk
Measure and report on your customer's experience from within Service Desk in real time via AI linguistic analysis. Build dialog strategies to adjust conversation patterns accordingly.
['Ulrich Kuhnhardt', 'Michael Moriarty']
['Built with Forge - Honorable Mentions']
['forge', 'ibm-watson', 'quick-chart']
13
10,045
https://devpost.com/software/comala-read-confirmations-forge
Using a simple on-page macro, assign users to confirm that they have read content. When a team member is assigned, they can acknowledge reading pages by simply pressing the "OK" button. Confluence Cloud creates amazing documents. Make sure the right people see them with Comala Read Confirmations. Use Comala Read Confirmations to make sure the right people have read the right Confluence Cloud documents. You can configure the app to show the elements that your users need to see, like due date, readers, detailed view and confirmation status. Inspiration Our users already enjoy Read Confirmations in our Comala Document Management Family of Apps. Comalatech itself has been using Read Confirmations internally as we have been implementing SOC 2 practices. The feature has proven so useful that we wanted to bring it to Confluence Cloud as a stand-alone app in Forge. We are really excited to bring this app to teams of any size in the near future when Forge is widely available for Atlassian products. What it does Assign team members to read Confluence pages, and keep track of who has read them, with Comala Read Confirmations. Users confirm reading with a simple click, and you can request new people to read the document at any time. If the content of the page changes, send out a new request for confirmation from the previous assignees. How we built it Read Confirmations exists as a feature within our existing apps, but there was plenty of work needed to bring it to the Cloud. When Codegeist 2020 was announced, our team kicked into high gear to deliver this app on both the Forge and Connect platforms. In the spirit of innovation, and as the team's first challenge, we started to play around with the Forge UI, leveraging ContentAction as an entry point for our app and enabling user interaction with the ModalDialog component. We knew that we wanted to implement the UserPicker to select users from a Confluence instance to read a page, and leveraging the AvatarStack component made it really easy to pull user account information when the app assigns users to read a page. To our surprise, Forge provides developers easy access to UI elements. Although it is still in the early stages, we see the potential to interact with all the elements that Confluence allows for Connect apps. Challenges we ran into The first challenge was getting our Cloud team up to speed on the Forge platform. Thankfully the platform is very developer-friendly, so soon we were able to focus on the development challenges ahead of us. When we did run into some issues with Forge, Atlassian's team was incredibly responsive in helping us resolve the problems through the Slack groups they set up for the hackathon. They made it really easy to follow up on the issues through a Jira board. Accomplishments that we're proud of We're very pleased to be among the first Marketplace Vendors to release a Forge app. We're big believers in Atlassian's Cloud vision, and Forge is a huge milestone in moving the ecosystem in that direction. What we learned Building a FaaS is not an easy task, and Atlassian is making an incredible effort to launch a developer-friendly environment where teams/organizations can create solutions on top of Atlassian's top products. 
We were surprised to learn that in order for us to develop on Forge we needed both Jira and Confluence active on an instance - mental note: read the docs :P Support is critical in the first 3 days when you start building on a brand new platform; this is where we really put the pressure on Atlassian, and they managed to deliver an excellent level of attention to all the information we provided and did a great follow-up - kudos to the Forge team! Being open to not having all the answers lets you and your team make quick decisions to move forward in building software. This is what hackathons are all about... so plenty of fun during the days prior to submitting this app. What's next for Comala Read Confirmations - Forge We're not stopping here with Read Confirmations. We have plans to release additional features for the app, and we've also discussed integrating it with our Comala Document Control app. The future should be exciting for both Confluence Cloud and Comala Read Confirmations! Built With ace confluence forge forgeui node.js runtimeapi
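A minimal sketch of the Forge UI pieces mentioned above (ContentAction as the entry point, a ModalDialog, a UserPicker, and an AvatarStack). The storage key and data shape are assumptions for illustration; this is not the app's actual implementation.

```javascript
// Sketch: a content action that opens a dialog, shows current readers as an
// AvatarStack, and lets you assign a new reader via UserPicker.
import ForgeUI, {
  render, ContentAction, ModalDialog, Form, UserPicker,
  AvatarStack, Avatar, Text, useState,
} from '@forge/ui';
import { storage } from '@forge/api';

const App = () => {
  // Assumed storage key "readers" holding an array of account ids
  const [assignees, setAssignees] = useState(async () => (await storage.get('readers')) || []);
  const [isOpen, setOpen] = useState(true);

  const onSubmit = async (formData) => {
    const updated = [...assignees, formData.reader];
    await storage.set('readers', updated);
    setAssignees(updated);
  };

  if (!isOpen) {
    return null;
  }
  return (
    <ModalDialog header="Read Confirmations" onClose={() => setOpen(false)}>
      <Text>Assign people who must confirm reading this page:</Text>
      <AvatarStack>
        {assignees.map((accountId) => <Avatar accountId={accountId} />)}
      </AvatarStack>
      <Form onSubmit={onSubmit}>
        <UserPicker label="Assign reader" name="reader" />
      </Form>
    </ModalDialog>
  );
};

export const run = render(<ContentAction><App /></ContentAction>);
```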
Comala Read Confirmations - Forge
Whether you are sharing key pages with team members, or using formal read acknowledgements for compliance purposes, Read Confirmations allows users to confirm they have read a page with just a click
['Naiara Martín', 'Claudio Cossio']
['Built with Forge - Honorable Mentions']
['ace', 'confluence', 'forge', 'forgeui', 'node.js', 'runtimeapi']
14
10,045
https://devpost.com/software/xyz-m0g1pw
Reach the links out of sight Inspiration We wanted to show on an Epic all issues that are related to the particular issue in any way (support requests, releases, development tasks). Many of them do not have a direct relation to the Epic. Unfortunately, there is no view available in Jira that would gather all related issues, not just the direct ones. Of course, the solution is available not only for Epics but for all issue types. For example, being on a feature request in a service desk project I can see not only the linked requirement which fulfills this issue but also the release issue in which this feature was released. What it does This app shows all linked issues up to the third depth, and it appears in the Issue Glance. It provides a quick way for a user to get an overview of information about these issues. There are three available criteria: Project Issue Type Status Thanks to presenting this data under a particular issue, a user can better understand the context and estimate the impact of related tasks on the current issue. This app also works in Next-Gen Jira projects. How we built it We used Forge UI components which helped us create the front end of our app: IssueGlance Table Text Link We also used the Forge Fetch API, which is a partial implementation of node-fetch. We manage the entire project using Jira Cloud and Confluence. We have prepared the interface graphic designs in Figma. Challenges we ran into To enjoy our full success, we lacked a few elements that are currently not supported in Forge: the ability to define the icon size in a component, such as Issue Type or Priority (now they are really huge, which looks weird) the option to format the location of components on the form (cannot be justified, right / left aligned) Limitations on interface styling can negatively impact user experience, especially if we add new elements in the glance. We reported all suggestions to the Atlassian team. Accomplishments that we're proud of Our beginnings with Forge were very difficult. That is why we consider it successful that the application works. What's more: it works correctly. The strength of our solution is the lack of configuration. The application is installed and is immediately ready for use. On the other hand, we are aware that this can be a problem for large instances. Therefore, in the future, we would like to add a module with a configuration for the administrator. We think that the design of the panel (glance) and the mechanics of the application's operation do not differ from the Jira standards. We believe that it is important that the end-user does not feel that they are using an external application. Our application fits perfectly into the product offered by Atlassian. What we learned This was our first contact with Forge, so everything was new to us. We learned how Forge works, what is currently possible to achieve and what is not yet. In the case of this app, the biggest limitation was using only Forge's components to build the interface. We cannot use HTML tags or design the UI with CSS. It is worth noting that having a React background was really helpful for creating the app quickly. None of us has ever participated in any hackathon! We work with each other every day, but it is a completely different style of work than during such a competition. We were still thinking about how little time we had left, and how much was still to be done. But we can say it was worth it and it certainly is not our last time. 
We still want to work together :) What's next for Deep Linked Issues for Jira The most important functionalities we would like to add to the application are: New criteria, such as Priority or Link Type Possibility to filter linked issues Graph view A lot depends on how much Forge will offer us and in what direction its possibilities will develop. Currently, Forge is not supported on the Atlassian Marketplace. (We are looking forward to adding such support!) For this reason, we decided to convert the application into a Connect app to be able to share it with other users and publish it on the Marketplace. Built With forge
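To make the traversal concrete, here is a sketch of a depth-limited walk over issue links using the Forge requestJira API, in the spirit of the app described above. The depth limit of 3 matches the description; everything else is an illustrative assumption.

```javascript
// Sketch: collect linked issues up to three levels away, avoiding revisits.
import api, { route } from '@forge/api';

async function fetchLinkedIssues(issueKey, depth = 3, seen = new Set()) {
  if (depth === 0 || seen.has(issueKey)) return [];
  seen.add(issueKey);

  const res = await api.asApp().requestJira(
    route`/rest/api/3/issue/${issueKey}?fields=issuelinks`
  );
  const { fields } = await res.json();

  const results = [];
  for (const link of fields.issuelinks || []) {
    // A link points either outward or inward
    const linked = link.outwardIssue || link.inwardIssue;
    if (!linked) continue;
    results.push({
      key: linked.key,
      summary: linked.fields.summary,
      status: linked.fields.status.name,
      type: linked.fields.issuetype.name,
    });
    // Recurse to collect indirect links, up to the third depth
    results.push(...await fetchLinkedIssues(linked.key, depth - 1, seen));
  }
  return results;
}
```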
Deep Linked Issues for Jira
Browse not only directly related issues with the current issue. Reach the links out of sight. Let the user delve into the connections and better understand the context of the issue.
['Krzysztof Skoropada']
['Built with Forge - Honorable Mentions']
['forge']
15
10,045
https://devpost.com/software/issues-security-for-jira
Forge - Register account Spring Boot Authentication Server - Register form Mobile App - Login Screen Mobile App - Register device and card Forge - Sign issue Mobile App - Requests list Mobile App - Completed sign issue Forge - Completed sign issue Inspiration Jira Cloud lacks options for securing issues with elements such as online signatures. Currently, our company, Transition Technologies PSC, would need such a feature in order to migrate from Jira Server to Jira Cloud. On Jira Server, we managed to solve this problem by custom development. We also wanted to test out the newest Forge technologies and check their compatibility with other external microservices, such as the ones created in Spring Boot and mobile applications (such knowledge could be useful in the future). What it does Our system allows your Jira Cloud users to register in our independent Spring Boot application and later, with such an account, to sign Jira issues using our mobile app and a contactless card. After you install our Forge application in your Jira Cloud instance, you must do the following steps: You must authorize our Forge app on your Jira Cloud. On the issue view, users will first need to register in our Spring Boot application. Continue registration by logging in on the mobile app and providing a contactless card (you must have an NFC module). Beware: you can register only one device and one card per account! Go to your Jira Cloud issue and send a request for signing the issue. Approve the request by logging in on the mobile app. And that's it! Your issue is signed! How we built it Our system is composed of 3 elements: Spring Boot application, deployed on Heroku. Mobile application on Android (in the future, iOS), written in React Native. Forge application. Challenges we ran into Connecting 3 separate, independent microservices, which depend on each other, creating one system. First experience with the new Forge technology. Accomplishments that we're proud of We can show our company the possibility of migrating from Jira Server to Jira Cloud, by providing the same level of security as currently exists in our add-on. What we learned We learned the basics of Forge, and we improved our current abilities. What's next for Issues Security for Jira Our system is in alpha. There is a lot of space for improvements, such as: Improving security measures. Better user experience. Multi-platform support. iOS mobile application. Built With forge heroku java javascript jira microservices react reactnative spring thymeleaf
Issues Security for Jira
Allowing users to use online signatures on Jira Cloud issues with personal contactless cards.
['Adrian Kruk', 'Krzysztof Gruszczyński', 'Kamil Stepien', 'Rafał Najs', 'Michał Gajewski', 'Michael Dubel']
['Built with Forge - Honorable Mentions']
['forge', 'heroku', 'java', 'javascript', 'jira', 'microservices', 'react', 'reactnative', 'spring', 'thymeleaf']
16
10,045
https://devpost.com/software/mockuper
Sample home page Inspiration Prototypes and mockups are an essential part of the Software development process. However, it is not possible to create interactive prototypes directly in Jira. This means you need to integrate third-party solutions, and you end up wasting time from changing between different software. So, what if you could quickly and easily create interactive prototypes directly in your Jira Issue? Well, that's what Prototyper is for. What it does It allows you to create interactive and responsive prototypes in your Jira Issue using a simple, yet powerful markup language. The parser transforms raw text in beautiful, interactive and responsive prototypes (made from SVG and HTML). Seventeen widgets have been developed so far, but more are coming =) Card List List Items Input Button CheckBox Radio Option Select Label Title Header Image Search Link Separator Line Goto You can customize alignment, spacing, colors, and icons. We use the FontAwesome icons, which means you have access to hundreds of icons to use in your prototypes. Input fields and forms :title My Title align=center top=60 bottom=30 :input Name :input Email :input Password :select City :checkbox I accept the terms and conditions :button Sign up goto=home :link I have an account Renders this: Cards, Tabs and Headers :header Events right=search left=bars :card title=Pink Floyd subtitle=The best show of the year image=show goto=details :card title=U2 subtitle=The 360° Tour image=show :tab Featured icon=list color=primary :tab Promos icon=percent :tab Weekend icon=calendar goto=weekend :tab Favorites icon=star Renders this: List and List items :header Pink Floyd left=chevron-left :image type=show :title Pink Floyd :label This is the best show of the year :list :item Stall subtitle=$140 right=chevron-down :item Boxes subtitle=$200 right=chevron-down :item VIP subtitle=$599 right=chevron-down :button Checkout goto=checkout Renders this: Search inputs :header Filter left=chevron-left :goto Back goto=home :search Search events :list :item Pink Floyd goto=details :item Queen :item Guns N'Roses Menu :header Events right=search left=bars :goto Dismiss Menu goto=home :menu Menu :item Profile :item Settings :item Notifications :item Logout goto=login :card title=Pink Floyd subtitle=The best show of the year image=show :tab Favorite icon=list :tab Promos icon=percent :tab Weekend icon=calendar :tab Favorites icon=star Renders this: Changing colors :color primary=green :header Events right=search left=bars :card title=Pink Floyd subtitle=The best show of the year image=show goto=details :card title=U2 subtitle=The 360° Tour image=show :tab Featured icon=list color=primary :tab Promos icon=percent :tab Weekend icon=calendar goto=weekend :tab Favorites icon=star Renders this: Responsive pages How I built it Forge only allows us to use its components to build interfaces. This means that we cannot use HTML tags to create interfaces. The secret sauce to create the prototypes was to use SVG images with a foreignObject tag inside them. This allowed me to embed arbitrary HTML elements, including even styles. Challenges I ran into The biggest challenge was finding a way to create beautiful and interactive prototypes without being able to use HTML tags directly in Forge. After trying different approaches, the final solution of using SVG images with a foreignObject inside allowed me to fulfil my vision for the app. Accomplishments that I'm proud of I'm really proud of the flexibility and interactivity it is possible to achieve with my app. 
It can be easily extended to include more widgets, more customisations, and yet everything is very simple to use. What I learned It was a great experience to learn how to use Forge and other Atlassian products. The ecosystem is amazing and allows you to create amazing products by leveraging Atlassian solutions. What's next for Prototyper Add desktop specific widgets Add more widgets Add canvas widgets (to allow drawing anything) Allow cloning a page Add more themes Add more device sizes Built With forge javascript react svg Try it out github.com
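A sketch of the SVG-plus-foreignObject trick described above: arbitrary HTML is wrapped in an SVG and handed to the Forge UI Image component as a data URI. Whether the Image component accepts data URIs depends on Forge's content rules, so treat this as an illustration of the idea rather than the app's code.

```javascript
// Sketch: wrap HTML in an SVG foreignObject and render it through Forge UI's Image.
import ForgeUI, { render, IssuePanel, Image } from '@forge/ui';

function htmlToSvgDataUri(html, width = 360, height = 640) {
  const svg = `
    <svg xmlns="http://www.w3.org/2000/svg" width="${width}" height="${height}">
      <foreignObject width="100%" height="100%">
        <div xmlns="http://www.w3.org/1999/xhtml">${html}</div>
      </foreignObject>
    </svg>`;
  return 'data:image/svg+xml;utf8,' + encodeURIComponent(svg);
}

const App = () => (
  <IssuePanel>
    <Image
      src={htmlToSvgDataUri('<h1 style="font-family:sans-serif">Sign up</h1><button>Sign up</button>')}
      alt="Prototype preview"
    />
  </IssuePanel>
);

export const run = render(<App />);
```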
Prototyper - Create Prototypes in your Jira Issue
Easily create interactive prototypes in your Jira Issue. No external tool needed.
['Gustavo Zomer']
['Built with Forge - Honorable Mentions', 'Best Open Source Forge App']
['forge', 'javascript', 'react', 'svg']
17
10,045
https://devpost.com/software/trello-board-flow
Create mirrors directly from your cards in Trello Quickly navigate between your mirrors inside of Trello Set rules to create mirrors automatically Use mirrors to distribute cards effectively across teams Create a personal master board to manage all cards that are assigned to you Simplify team meetings by raising relevant cards automatically Break down your work effectively by linking checklist items to cards. Inspiration At placker.com we help our users to bring focus to their work by making complex work simple. A key driver of complexity in work is that work needs to be split into smaller pieces, then distributed amongst teams or grouped into bigger chunks. To manage distributed work effectively, you need to link and sync cards. This is where card mirrors come into play. What it does This power-up links and syncs cards across boards in Trello, either by making mirrors manually or by making mirrors automatically, for example to mirror a card to my personal board when I'm assigned to the card, or to raise an impacted card to the stakeholders board by assigning the 'stakeholder' label to the card. Mirrors can not only be created between cards; it is also possible to create mirrors from a checklist item on one board to a card on another board, or to share cards across multiple boards. This way it is possible to create cascading work breakdown structures that help teams to manage a portfolio broken down into programs and projects, or a product backlog that breaks epics and user stories down into tasks to sync the work between the product team and the development team. How I built it The power-up is based on the Placker platform; for this power-up we added two features: 1. the ability to create mirrors by using mirror rules in Placker and 2. the ability to mirror multiple cards in one mirror group. Challenges I ran into Allowing multiple cards in a mirror group turned out to be tricky as it could happen that the update on one card triggered an update on another and a loop was introduced. Accomplishments that I'm proud of We have been testing these features already with a selected user group and the responses have been great; this power-up contains a lot of the lessons we have learned over the years with regards to why and how people mirror cards, with more features to come very soon. What I learned We have learned a lot on the technical side related to managing async updates in groups and ensuring rules get executed correctly. On the functional side, we have learned why and how our users are using mirroring and what features are needed. What's next for Board mirror (by Placker) We will release more triggers and actions for the rules, specifically the ability to automatically create item-to-card mirrors when a user is assigned to a checklist item. We will be moving the mirror overview and mirror actions to the card-back-area so they are clearer in Trello, along with some other UI/UX improvements based on feedback that we'll get from our user group. Try it out The power-up is not yet officially released; if you are already a Placker user, please reach out to us in the chat https://placker.com so we can add you to our Mirror user group and help you get set up. Until then, be sure to test our Projects by Placker Power-up in the power-up directory on Trello.com Built With javascript placker trello
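As an illustration of the loop problem mentioned above, the sketch below shows one generic way to break mirror update loops: remember which revisions the sync itself wrote and ignore changes that match them. This is not Placker's implementation; the card and revision shapes are assumptions.

```javascript
// Sketch: avoid infinite sync loops when mirrored cards update each other.
const recentlySynced = new Map(); // cardId -> last revision written by the sync itself

async function onCardChanged(card, applyToMirror) {
  const lastWritten = recentlySynced.get(card.id);
  if (lastWritten === card.revision) {
    // This change was produced by a mirror sync, not by a user; ignore it.
    return;
  }
  for (const mirror of card.mirrors) {
    // Apply the change to the mirrored card and remember the revision we wrote,
    // so the resulting change event on the mirror is not propagated back.
    const newRevision = await applyToMirror(mirror.id, card.fields);
    recentlySynced.set(mirror.id, newRevision);
  }
}
```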
Card mirror (by Placker)
Automatically Link & Sync cards across Trello boards
['Reinder Visser']
['Best App for Remote Working']
['javascript', 'placker', 'trello']
18
10,045
https://devpost.com/software/devsheds
DEVsheds Launchpad Software Engineer View Data Science View Service Modules Module Dependencies Key Benefits Elevator Pitch Removing friction between teams and working application & ML code, DEVsheds provides instant workspaces for software development and data science teams to build and run code backed by a Bitbucket repository. Major pain points in the development lifecycle include: On-boarding and empowering a new member of the team Efficiently communicating progress directly from the most tangible output - working code Fixing problems that begin with the words “it works on my machine” DEVsheds broadens support across the development lifecycle and addresses the pain points above while staying within the familiar Atlassian user experience; allowing teams to enjoy the benefits of a consistent experience, single user administration flow and integrated development workflow. Inspiration I’ve been the new person on a team and I’ve been accountable for bringing new people into teams. When joining a new team, you want to make a good impression and you’re excited about being able to contribute. Yet the first experience of many software developers, data scientists, and test analysts is often the frustrating, time consuming and demoralising one of trying to set up a local development environment to get started. I get excited about the creative aspects of projects and the buzz of a team firing on all cylinders. I want to create tools that help people focus on what they enjoy rather than get bogged down with the mechanical requirements. What it does For teams creating software and data products, working code is where “the rubber hits the road”. It’s what is showcased to stakeholders and presented to customers. DEVsheds removes the friction between teams and working code by making it easy to start working with the code, share work products with teammates, and iterate on functionality of value to customers. It does this through: Being an embedded part of the Atlassian toolset to ensure a smooth flow across the development workflow involving Jira user stories and tasks, development of source code in Bitbucket, submitting and reviewing pull requests, and releasing code to communicate progress. Supporting the research and experimentation phase of systems, especially ones involving analytics and machine learning, with Notebooks that allow immediate feedback loops of write, run, and visualise. Provisioning reproducible and automated environments that allow rapid feedback and smooth collaboration. How I built it The solution employs the Atlassian Connect Express (ACE) library to integrate with Bitbucket and uses React for the user interface. Development workspaces are created as Kubernetes containers. DEVsheds builds on the open source projects Eclipse Theia, JupyterLab, the Jupyter ecosystem, and Garden.io. Challenges I ran into The key technical challenge of the project has been in automating, securing, and ensuring scalability of the underlying infrastructure. From a design perspective, the key challenge has been turning a complex process into a simple and intuitive user experience. Accomplishments that I'm proud of Making the complex simple: enabling easy transition from source code to development environment and working application. Hiding the complexity of integration and orchestration across multiple computing clusters. 
What I learned Lessons learned along the way include: Embedded product design is about more than integrating the technology - the goal is to create a simple and productive user experience Codegeist has afforded a deep dive into the APIs and modules available to the Atlassian Developer The many challenges in securing distributed systems What's next for DEVsheds The goal is to enable a closer connection between research, development, and the user experience. For example, I want to enable product teams to be able to experiment with data and machine learning models, engineer a promising model as a software service, and then deploy this service to see the impact on the user experience – all within the same workflow and set of tools so the focus is on creativity. This would be a major leap forward compared to the disjoint set of processes and tools that exists for many teams today. Future areas of focus include: Deeper integration within Atlassian workflows including pull-requests, deep linking of Jira user stories and tasks, and integration with Bitbucket Pipelines. Improved collaboration including commenting and review of code, notebooks, and releases. Easy access to environments for background jobs and specialised workloads such as machine learning model training. Built With atlassian-connect-express bitbucket kubernetes node.js python react
DEVsheds
Removing friction between teams and working application & ML code, DEVsheds provides instant workspaces for software development and data science teams to build and run code.
['Mark Moloney']
['Best App for Remote DevOps']
['atlassian-connect-express', 'bitbucket', 'kubernetes', 'node.js', 'python', 'react']
19
10,045
https://devpost.com/software/screenful-reports-gepjwq
Reports context menu Final report Sample chart Inspiration We wanted to create a tool that allows you to create fully customizable reports from Trello data by combining charts and text fragments. What it does You can construct a report and store it as a PDF or schedule it to be sent to your colleagues via email or Slack. How I built it We first built a chart editor which allows creating custom charts such as line or bar charts. We also developed a List view which allows creating lists of issues. Finally, we created a report editor which allows combining charts and task lists into a report. Challenges I ran into There were no ready-made chart libraries that would work out-of-the-box, so we ended up customising the charts quite a lot. Turning an HTML report into a PDF caused a few headaches as well. Accomplishments that I'm proud of The editor is simple yet powerful. The final reports look beautiful. What I learned With the right team, anything is possible. What's next for Screenful Reports We'll add things like more complex layouts, new widget types etc. Built With node.js vue
Screenful Reports
Business Intelligence reports for Project Managers
['Mikayel Petrosyan', 'Tuomas Tammi', 'Nairi Harutyunyan', 'Sami Linnanvuo', 'Hayk Yaghubyan', 'Vilina Osilova', 'Ville Piiparinen']
['Best Trello Power- Up']
['node.js', 'vue']
20
10,045
https://devpost.com/software/flow-for-jira
FLOW for JIRA Adding an Activity Adding Description Marking as Done Inspiration Flow helps teams to simplify their daily work activities, creating a friendly and easy-to-use checklist to follow the progress of the story at any time. It allows them to create an independent checklist for each Status/Board Column. Flow allows teams to easily extend functionality to adjust to different team needs. WebHooks, Notifications and Workflow Validation are on the roadmap. Main problems that FLOW solves: No easy way to understand the progress of an issue quickly Each team/project is unique and has a different way to track progress. Misinterpretation of a plain-text issue description What it does Using checklists helps the team to understand and follow the progress of an Issue. How I built it I used Forge UI and the Forge API, with integration with Jira and the Storage API. Challenges I ran into Not many, really; the documentation is really good. Accomplishments that I'm proud of Building an app that looks and feels like Jira, adds so much value, and is easy to extend. What I learned A lot about the Atlassian platform, not only as an end user as before. What's next for FLOW For Jira I will add webhooks for external integrations, workflow validations, approvers and actions. Built With forge forgeapi forgeui jira storageapi Try it out bitmind-forge.atlassian.net
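A minimal sketch of how a per-status checklist could be read from the Forge Storage API and rendered in an issue panel, along the lines described above. The storage key layout and item shape are assumptions, not the app's actual code.

```javascript
// Sketch: render a checklist stored per issue and per status via the Forge Storage API.
import ForgeUI, { render, IssuePanel, Text, useProductContext, useState } from '@forge/ui';
import { storage } from '@forge/api';

const checklistKey = (issueKey, status) => `flow:${issueKey}:${status}`; // assumed layout

const App = () => {
  const { platformContext } = useProductContext();
  const issueKey = platformContext.issueKey;
  const status = 'In Progress'; // in the real app this would come from the issue's current status

  // Items are assumed to look like { label: 'Write tests', done: false }
  const [items] = useState(async () => (await storage.get(checklistKey(issueKey, status))) || []);

  return (
    <IssuePanel>
      {items.length === 0 && <Text>No activities yet for this status.</Text>}
      {items.map((item) => (
        <Text>{item.done ? '[x]' : '[ ]'} {item.label}</Text>
      ))}
    </IssuePanel>
  );
};

export const run = render(<App />);
```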
FLOW For Jira
Flow helps teams to simplify their daily work activities, creating a friendly and easy to use checklist to follow the progress of the stories
['Matias Ariel Urbano']
[]
['forge', 'forgeapi', 'forgeui', 'jira', 'storageapi']
21
10,045
https://devpost.com/software/pagebrain-ai
Inspiration I personally use Jira and Confluence at work on a daily basis and I'm aware of some of its limitations when it comes to search. And it also happens that I'm obsessed with Natural Language Understanding; specifically, I have been playing around with Question Answering (QA) for more than two years. Initially I implemented this add-on with a simple CQL query and QA. I asked myself whether I would install this. The answer was an honest no. It'll be useful but not enough to justify its position in my add-on list. At some point, somehow I recalled Elon Musk saying this in an interview: "If you're entering anything where there's an existing marketplace, against large, entrenched competitors, then your product or service needs to be much better than theirs ... It can't just be slightly better. It's got to be a lot better." In the other corner of my brain, Eric Schmidt goes: "You often hear people talk about search as a solved problem. But we are nowhere near close." My only intention was to incorporate QA into Confluence but since I'm already here I thought I might as well take a stab at it. Search is a hard problem. An incredibly hard problem if you want to do it at world-wide-web scale, but not as much if you're in a safe, contained, well-structured and well-intentioned environment like Confluence. And that's what all this is about. What it does Basically indexes information from Confluence & Jira and makes it easily accessible through various forms of search. Question Answering - Extracts answers from raw web pages for any given question People Also Ask - Finds similar questions related to the user's question Federated Search across multiple Confluence and Jira instances Access Control System for Atlassian's Permissions and Restrictions Autocomplete Custom stop words and synonyms Spell checking and typo tolerance Real-time Search Optical character recognition Image labelling Reverse image search How we built it We spent more than a year intensively researching the Question Answering system, which clearly came in handy. Also, years of our prior experience in actively studying and researching state-of-the-art machine learning models helped quickly deploy models for the People Also Ask and Reverse Image Search features. We just use Elasticsearch as our primary search engine; MongoDB and Node.js with Atlassian Connect to glue various microservices together. We use TensorFlow extensively to train and deploy models. It goes without saying that we primarily use Python for all our ML workloads. A messy combination of gRPC & REST for inter-service communication, Redis for caching and jQuery for the frontend. I chose jQuery instead of something like React as that would slow me down even further; I already had a lot of things to learn in great, painful detail. We run all our workloads inside a single Kubernetes cluster on Google Cloud. Kubernetes allowed us to dynamically scale ridiculously expensive GPU instances down to zero instances when they're not being actively used. On top of that we also use preemptible instances to reduce our operating costs even further. We mostly use TPUs for training and GPUs for inference. Challenges we ran into Familiarizing ourselves with the Atlassian ecosystem and developer toolkits Implementing the access control system Coordinating communication between a fairly large number of microservices Elasticsearch. Accomplishments that we're proud of We worked on training a machine learning model that automatically builds a Knowledge Graph from raw text. 
It basically extracts relationships with various entities in a paragraph. It was performing relatively well to our surprise! For example, given the wikipedia page of Google as input, the model can generate subject, object verb triplets like below: Google, subsidiaryOf, Alphabet Google, foundedOn, September 4, 1998 Google, foundedBy, Larry Page Google, foundedBy, Sergey Brin We never got it interfaced with the rest of our system in time to feature on our demos. I’m super excited about this! What we learned The whole is greater than the sum of its parts. Each of these features can seem incremental on their own, but when put together, they truly are impressive. And hopefully useful. What's next for Semantica Finish the Knowledge Graph generating model Improve model performances. Especially we could do a lot better in Spelling Correction Infrastructure cost optimization. GPUs account for huge margin of our operating expenses even in our current setup (preemptible & scale to zero) Analytics and data collection for insights Public Alpha - we’d really love to hear from others how we can improve! And if there's any actual commercial interest: Public Beta on Atlassian Marketplace Consider offering PageBrain as on-prem solution Built With connect kubernetes node.js python tensorflow Try it out pagebrain.ai
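For illustration, here is a sketch of the retrieval step that a setup like the one described above might use: query Elasticsearch (v7 JavaScript client) for candidate pages and return highlighted passages for the question-answering model to read. The index and field names are assumptions, not the product's actual schema.

```javascript
// Sketch: fetch candidate Confluence pages from Elasticsearch for a QA pipeline.
const { Client } = require('@elastic/elasticsearch');
const client = new Client({ node: 'http://localhost:9200' }); // placeholder node

async function retrieveCandidates(question) {
  const { body } = await client.search({
    index: 'confluence-pages',          // assumed index name
    body: {
      size: 5,
      query: {
        multi_match: {
          query: question,
          fields: ['title^2', 'body'],  // boost titles over body text
        },
      },
      highlight: { fields: { body: {} } },
    },
  });
  return body.hits.hits.map((hit) => ({
    pageId: hit._id,
    title: hit._source.title,
    passages: hit.highlight ? hit.highlight.body : [],
    score: hit._score,
  }));
}
```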
PageBrain for Atlassian
Better search for Confluence and Jira
[]
[]
['connect', 'kubernetes', 'node.js', 'python', 'tensorflow']
22
10,045
https://devpost.com/software/motivateme-lqdwp0
Inspiration This was something that constantly bugged us. We kept changing the due dates. Were we disciplined enough? Or did we underestimate the time requirement? Though this happened numerous times, we never seemed to learn from it, because we didn't keep track of it. What it does This addon keeps track of whenever you change deadlines. It tells how many times each member has changed the 'due dates' on any card, along with the reason for doing so. The next time you are about to change a due date, you will be more careful! How I built it Using Trello's developer kit. The documentation and GitHub samples were awesome enough to get started and learn about the platform quickly. Challenges I ran into Handling promises. Still a beginner at asynchronous programming Getting started with a new development platform Accomplishments that I'm proud of Building something that would actually benefit people! What I learned What's next for MotivateMe Add analytics - graphical representations of member-wise stats Smartly suggest due dates next time automatically based on past records. Built With javascript trello
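A hedged sketch of how a Trello Power-Up could count due-date changes per card, in the spirit of the add-on above. The storage keys and badge wording are assumptions; the real add-on may record reasons and members differently.

```javascript
// Sketch: a card-badges capability that notices when a card's due date differs
// from the last value it saw, and keeps a per-card change counter.
/* global TrelloPowerUp */
TrelloPowerUp.initialize({
  'card-badges': async (t) => {
    const { due } = await t.card('due');
    const previousDue = await t.get('card', 'shared', 'lastKnownDue');
    let changes = (await t.get('card', 'shared', 'dueChangeCount')) || 0;

    // If the due date differs from what we last saw, record a change
    if (previousDue !== undefined && previousDue !== due) {
      changes += 1;
      await t.set('card', 'shared', 'dueChangeCount', changes);
    }
    await t.set('card', 'shared', 'lastKnownDue', due);

    return [{ text: `Due date moved ${changes} times`, color: changes > 2 ? 'red' : 'green' }];
  },
});
```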
Due-date manager for Trello
Do you and your team keep pushing deadlines frequently? Become better at estimating and managing deadlines with this addon.
['Pankaj Kumar']
[]
['javascript', 'trello']
23
10,045
https://devpost.com/software/scrummy-8uowrh
Inspiration Corona made us all work a lot remotely. And agile project management is a lot harder in remote situations. It's difficult to reach people for pull-request reviews and updates and to stay in the loop It's very hard to manage code on the go remotely (review pull-requests, make small changes) There needs to be some other concept than pull-requests to maintain code integrity / safety and still keep developers productive. What it does Scrummy helps with project management. Daily Standup, Planning, and knowing what's going on without spending a lot of time in online meetings. How we built it With time ... lots of ... but still not enough Challenges we ran into It's not always possible to add stuff to the Atlassian UI where you would want it to be. Accomplishments that I'm proud of It works and looks cool (could be better, but given the limited time we are happy) What I learned A lot about team collaboration while building a team collaboration app :-) What's next for Scrummy Focus on collaboration and management features. Mobile app with mobile ready code editor to manage teams on the go with smartphone or tablet. Bringing development and continuous delivery to the mobile. Deeper integrations into JIRA and bitbucket. Improved AI search for the whole project (JIRA, Confluence, Commits, Code Comments) - we have a beta going, but it needs improvement More gamification Integrate different time management ideas and other fancy team management features Built With and atlassian bitbucket else even evertything jira love more node.js react Try it out www.ilovemy.coach
Scrummy
Team collaboration made easy. Built around Bitbucket and Jira, Scrummy offers an intuitive interface that allows remote teams to work together. From developers for developers.
['Chris Ly']
[]
['and', 'atlassian', 'bitbucket', 'else', 'even', 'evertything', 'jira', 'love', 'more', 'node.js', 'react']
24
10,045
https://devpost.com/software/forge-super-emojis-emoji-classifier-of-confluence-workspace
Welcome Emoji Relaxing Emoji Switch Emoji Confluence Workspace Overview We have built a cloud-based Forge app for Atlassian's Confluence platform. It uses the power of emoji to lighten up the workspace. It enhances the workspace by classification with a related emoticon. Users can add different emojis at different places according to their preferences. Get ready to lighten up your world. Built With atlassian forge javascript node.js npm react typescript Try it out github.com
Forge Super Emojis: Emoji classifier of confluence workspace
Forge Super Emojis is a cloud application built on Atlassian's forge platform to power up your confluence workspace. It enhances the workspace by classification with a related emoticon.
[]
[]
['atlassian', 'forge', 'javascript', 'node.js', 'npm', 'react', 'typescript']
25
10,045
https://devpost.com/software/git-integration-for-jira-private-servers
Inspiration Back in 2016, we launched Git Integration for Jira Cloud to connect a variety of git servers to Jira Cloud. Customers configure “integrations” with the git server url, credentials and some settings and our indexer is then able to connect to the git server to index the commits/branches/pull requests. One concern that we had (in 2016) was that the market for a Jira Cloud app that would connect to locally hosted git servers would either be small or admins would not approve of such connections. Fortunately, a sizeable number of Cloud customers have found great utility in our offering. But there are two groups that we have not been able to serve yet: Customers hosting their git server on a private network Customers who cannot (for privacy, security, and other reasons) grant access to source code directly. We have had almost 100 separate customers ask us for some kind of workaround. We expect the total market to be considerably larger. Our solution: Git Integration: Private Servers. What it does Building on our existing product, Git Integration: Private Servers, we have built the ability to configure webhooks on the git server to send basic commit, branch and pull request information that our application can index. When configuring a new Private Server integration - the Jira admin obtains a unique and secret url to be used in configuring webhooks on the git server. The git server will begin transmitting webhooks to the unique URL where our indexer can index the webhooks. The two advantages of indexing webhooks in this fashion are: Many more self-hosted git servers will be able to send development information to Jira Cloud. Webhooks sent by the major git hosting providers do not contain source code. Challenges we ran into We had two main challenges to overcome in creating the Private Servers edition of Git Integration for Jira: Durability and Completeness. When a webhook is sent to us by a customer’s git server, the webhook is sent once and only once. If our application is down for any reason (maintenance, code updates, too high a load, etc) then that webhook is lost forever. This problem is compounded by the Completeness problem. Webhooks are only ever sent for current activity - you cannot send webhooks for past activity. This means that our application (unlike our current feature set) cannot request missed activity. This means we need to have virtually 100% uptime as we aim to reach enterprise customers in addition to our traditional SMB customers. How we built it We already host Git Integration for Jira Cloud on Amazon Web Services and to solve the challenges detailed above we are employing several special AWS services. When a Private Server integration is created we generate a unique/secret url recognized by the AWS API Gateway. The webhook is then placed in an AWS Simple Queue Service (SQS) queue. By relying on these industrial strength AWS services, we are employing highly-redundant and performant services to capture the webhook traffic sent to us. Our application server can then collect the messages from SQS and dispatch them to our indexing service where they are delivered to the customer’s data store. The indexer extracts and stores Git information from the webhook payload such as: commits, authors, file names, branches, pull requests and so forth. With this information - we display commit, branch and pull request information in the Jira Cloud UI for our app. Additionally - we upload this information directly to Jira Cloud using the Development Information API. 
By uploading this information to Jira Cloud, our customers can take advantage of such powerful features such as JQL searching, workflow triggers, Release Hub, and other features offered by Atlassian. Accomplishments that we're proud of We’re proud that we’ve built an application that can process high webhook loads with enterprise quality uptime while giving us the flexibility to update the application in rapid fashion. We’re also very excited to reach out to a whole new class of customer that has not been able to use Jira Cloud for tracking development activity without this feature set. What we learned Git servers send the information that they send and sometimes the information is different between git servers. Additionally - git servers will truncate the activity sent if there are large changes. We will have to be excellent at educating our customers on the limitations of this type of integration. We will be building out some features for Jira admins to troubleshoot issues on their own without having to reach out to us. We also will need to educate our users on the difference of the two types of integrations: Private Servers (using webhooks) or traditional integrations (using APIs) as some features are not available to Private Server integrations since they do not provide source code at the indexing stage. What's next for Git Integration for Jira: Private Servers Currently the Git Integration: Private Servers app in the scope of our Codegeist submission is only supporting GitHub.com repositories. We have extensive experience supporting GitHub Enterprise, GitLab, AWS CodeCommit, all Microsoft git based servers and more and will be rolling out support for their webhook payloads in the coming weeks. We will also be setting up various customer demos to show off our progress and gather feedback on prioritizing features. We hope to launch publicly on the Atlassian Marketplace in the next month. Known issues with Codegeist submission Only GitHub.com repositories supported Not all features of our current app are supported (some due to data missing in webhooks and others will be finished in coming weeks) Built With amazon-web-services aws-api-gateway aws-sqs java
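The app itself is built in Java, but as a rough JavaScript sketch of the ingestion step described above: an AWS Lambda behind API Gateway drops each incoming webhook onto SQS so the indexer can pick it up later. The queue URL, path parameter name, and header handling are assumptions for illustration.

```javascript
// Sketch: accept a git webhook at a unique/secret URL and queue it for indexing.
const { SQSClient, SendMessageCommand } = require('@aws-sdk/client-sqs');
const sqs = new SQSClient({});

exports.handler = async (event) => {
  // The secret path segment identifies which Private Server integration sent the hook
  const integrationId = event.pathParameters.integrationId; // assumed path parameter

  await sqs.send(new SendMessageCommand({
    QueueUrl: process.env.WEBHOOK_QUEUE_URL, // placeholder
    MessageBody: JSON.stringify({
      integrationId,
      eventType: event.headers['x-github-event'], // GitHub.com webhooks only, per the submission
      payload: event.body,
      receivedAt: Date.now(),
    }),
  }));

  // Acknowledge immediately; indexing happens asynchronously from the queue
  return { statusCode: 202, body: 'queued' };
};
```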
Git Integration for Jira: Private Servers
Jira Cloud customers hosting privately hosted git servers (behind firewalls / private networks) can send commit, branch, and pull request data by configuring webhooks.
['Adam Wride', 'Chivorotkiv Sergei Shmakov', 'Nastya Dvornaya']
[]
['amazon-web-services', 'aws-api-gateway', 'aws-sqs', 'java']
26
10,045
https://devpost.com/software/inspirational-quotes-in-jira
Sample quote Inspiration Working for long hours can be tiresome and demotivating. Unfortunately, there isn't much of an option to delay it to another day given the upcoming release, major bug fix, etc. This app will make an attempt to make one feel better and inspired while working. What it does Users, while working, can get inspirational quotes at the click of a button directly in JIRA. How I built it Using Forge UI and APIs. Accomplishments that I'm proud of Working app. An app I will use myself. An attempt to make people feel better and inspired. What I learned Forge Atlassian products What's next for Inspirational Quotes in JIRA Non-textual inspirations - image, GIF, stickers, music links, video links. Built With forge javascript Try it out github.com
Inspirational Quotes in JIRA
Not feeling inspired enough while working today? This app will show you an inspirational quote in an attempt to inspire you and feel better. After all, feeling good and inspired is all that matters!
['Maansi Srivastava']
[]
['forge', 'javascript']
27
10,045
https://devpost.com/software/feature-bundle-for-jira-service-desk-bom4tz
Create banners which take your breath away Inspiration Our inspiration had two sources. The first of them came from a standard Jira Service Desk. It allows you to add announcement banners to the Help Center and Customer Portal, but its configuration is very limited. The problem for us was the fact that we wanted to display some information on other pages. The second source of motivation was one of Atlassian's products - Statuspage. It allows you to show information about incidents on defined pages, including the Customer Portal. We noticed that such messages could have any form and be presented at different times to different groups of users. To be honest, we didn't have much experience in creating applications for Jira Service Desk and for the cloud. However, we thought that since it is a hackathon, we had to take off the gloves! What it does This app allows you to create banners on the Jira Service Desk screens. However, it is fully configurable at many levels. Location: the administrator can specify on which screens the banner should appear. You can choose between global screens (Help Center, Requests, Approvals, User Profile, Request Details View) and project screens (Customer Portals, Request Forms). Calendar: the administrator chooses whether the banner is displayed in the selected location all the time, or in selected time periods or on specific days. They can also specify conditions, for example that a given banner is displayed until issue ABC-123 changes to Resolved status. Visibility: the administrator can display the banner to all logged-in users or selected users. Restrictions can be defined at the level of the user's language, membership of the organization / Jira group / project role, as well as the email domain. Appearance: the administrator can enter the banner text and edit it using the Rich Text Editor. You can also enter HTML. In addition, banners can appear in four states: Enabled - the banner is active and its display conditions are checked. Disabled - the banner is inactive. Archived - the banner is archived, moved to a separate section, maybe someday it will be used again. Deleted - the banner has been removed permanently (its content and configuration). Due to such a multitude of settings, we can, for example, show different messages for customers from two separate organizations or create messages in the languages of our clients (which is especially important in multilingual portals). How we built it The application implementation has been divided into three parts: the server-side (Java, Spring Boot, Atlassian Connect), frontend (ReactJS, Atlaskit UI), a PostgreSQL database. We have built our product in such a way that the interface for the end-user as well as the configuration is easily transferable to Jira Service Desk Server / Data Center. We manage the entire project using Jira Cloud and Confluence. We have prepared the interface graphic designs in Figma. Challenges we ran into In our work, we encountered two major challenges. The first of these is that the amount of context that we need to take into account when showing or hiding the announcement banner is impressive. For this reason, the level of complexity of the rules responsible for generating a banner on the portal is high. The second challenge was creating the extension on Jira Service Desk Cloud - this is our first app on this Atlassian product. We encountered several errors in Connect in the serviceDeskPortalHeaders module when using the "page" attribute. We reported this problem to the Atlassian team. 
Accomplishments that we're proud of We are glad that we were able to come up with and finish the application in the assumed (short) time. We are very happy because we even managed to send the application to Atlassian, where we are waiting for acceptance and publication on the Atlassian Marketplace. This application will also go down as the first in our team's history for which interface mockups were prepared in Figma before development started. We believe that our application significantly expands the possibilities of Jira Service Desk and brings real value to customers. What we learned We learned how to create extensions to Jira Service Desk Cloud (previously we created applications only for the server version). We learned the possibilities, but also the limitations, of the JSD REST API. Our team had the opportunity to delve into topics from the ITSM area in Jira Service Desk, such as Organizations, Request Participants, and the differences between the Help Center and the Customer Portal. None of us has ever participated in any hackathon! We work with each other every day, but it is a completely different style of work than during such a competition. We were still thinking about how little time we had left, and how much was still to be done. But we can say it was worth it and it certainly is not our last time. We still want to work together :) What's next for Feature Bundle for Jira Service Desk The application has been submitted to Atlassian and we are waiting for approval to publish it as a Marketplace add-on. Feedback from customers is very important to us and will definitely have a significant impact on the product's roadmap. Nevertheless, there are elements that we definitely want to add: view as a user - checking which banners the selected user sees the option of adding buttons to banners and counting the number of clicks and recording who clicked what (for example to collect consents regarding data processing, understanding of the regulations etc.) new banner types: flags, notifications, dialogs further features for Jira Service Desk, such as: showing values from different fields on the Request Details View, such as Assignee, Priority, SLA option to import / export settings of request types (as in the version for server / data center) As our app Feature Bundle for Jira Service Desk is available for Server and Data Center hosting, we would like to migrate announcement banners from the cloud version to them. Built With java react springboot Try it out marketplace.atlassian.com
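Purely as an illustration of the rule evaluation described above (the app itself is built with Java and React), here is a small JavaScript sketch of checking one banner's state, calendar, location, and visibility restrictions for a given user. The field names are assumptions.

```javascript
// Sketch: decide whether a single banner should be shown to a given user right now.
function isBannerVisible(banner, user, now = new Date()) {
  if (banner.state !== 'enabled') return false;

  // Calendar rules: always on, or only inside one of the configured time windows
  const inSchedule =
    !banner.schedule ||
    banner.schedule.some((w) => now >= new Date(w.from) && now <= new Date(w.to));
  if (!inSchedule) return false;

  // Location rules: is the current page one of the configured screens?
  if (!banner.locations.includes(user.currentScreen)) return false;

  // Visibility rules: language, organization, email domain (group/role checks would be similar)
  const v = banner.visibility || {};
  if (v.languages && !v.languages.includes(user.language)) return false;
  if (v.organizations && !v.organizations.some((o) => user.organizations.includes(o))) return false;
  if (v.emailDomains && !v.emailDomains.some((d) => user.email.endsWith('@' + d))) return false;

  return true;
}
```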
Feature Bundle for Jira Service Management
Create dismissible announcement banners across all service desk pages with just a few clicks. Define where, when, and for whom you want to display relevant information. Use HTML or Rich Text Editor.
['Krzysztof Skoropada']
[]
['java', 'react', 'springboot']
28
10,045
https://devpost.com/software/covidscan-an-forge-intergrated-ai-radiology-tool-for-covid19
Fig. 1: Map of Covid19 cases around the world (as of 4/30/2020). Our team created the map based on data collected by the Johns Hopkins University Center for Systems Science and Engineering. As we see from the map above and the pie chart below, COVID-19, previously known as the novel Coronavirus, has killed more than 63,860 people and infected over 1,067,061 people in the United States alone, topping all other countries around the world. This number is continuing to grow every day. Fig. 2: Top 10 countries with most COVID-19 deaths. The main problem occurring in the healthcare system during the pandemic is the long wait time for COVID-19 chest X-ray results: Fig 3: Current chest X-ray diagnosis vs. novel process with CovidScan.ai Patients can first be screened for flu-like symptoms using a nasal swab to confirm their COVID-19 status. After 14 days of quarantine for confirmed cases, the hospital draws the patient's blood and takes the patient's chest X-ray. The chest X-ray is a gold standard for physicians and radiologists to check for the infection caused by the virus. X-ray imaging will allow your doctor to see your lungs, heart and blood vessels to help determine if you have pneumonia. When interpreting the x-ray, the radiologist will look for white spots in the lungs (called infiltrates) that identify an infection. This exam, together with other vital signs such as temperature, or flu-like symptoms, will also help doctors determine whether a patient is infected with COVID-19 or other pneumonia-related diseases. The standard procedure of pneumonia diagnosis involves a radiologist reviewing chest x-ray images and sending the result report to a patient's primary care physician (PCP), who then will discuss the results with the patient. Fig 4: Chart of wait-time reduction of an AI radiology tool (data from a simulation study reported in Mauro et al., 2019). A survey by the University of Michigan shows that patients usually expect the result to come back within 2-3 days of a chest X-ray test for pneumonia (Crist, 2017). However, the average wait time for the patients is 11 days (2 weeks). This long delay happens because radiologists usually need at least 20 minutes to review the X-ray while the number of images keeps stacking up after each operation day of the clinic. 
New research has found that an artificial intelligence (AI) radiology platform such as our CovidScan.ai can dramatically reduce patient wait time, cutting the average delay from 11 days to less than 3 days for abnormal radiographs with critical findings (Mauro et al., 2019). With this wait-time reduction, patients in critical cases will receive their results faster and receive appropriate care sooner. What it does Using the power of pretrained machine learning models from open source, CovidScan.ai is created as a full-scale AI tool for radiology clinics and hospitals. It can automate the process of detecting signs of COVID-19 and pneumonia on chest X-ray images to assist radiologists during the pandemic. This cutting-edge tool can be used to reduce the workload for clinicians and speed up patients’ wait time for pneumonia lab results in this critical time of the COVID-19 pandemic. In summary, a patient who needs COVID-19 testing will go through the following process using our application: A user answers a series of questions using an algorithm built to identify whether they need additional screening or not. If they need additional screening/X-ray, then we proceed to use their postal code to geo-locate the nearest hospitals with testing available. Once the case reaches that point, the user just waits and it advances to a physician’s worklist. The physician opens the case, looks through the information and uploads X-ray images to identify whether the patient tests positive for pneumonia. The process of sending out for an X-ray and getting the images back is excluded from this application. The X-rays could also be part of the patient’s existing medical records, which could easily be located by the hospital’s system. Benefits of the CovidScan app: Using this application, the medical staff take patients’ chest X-ray images using the specialized machine and then upload the images to the web app’s database to test for signs of COVID-19 infection or bacterial pneumonia. This works because an AI system can review, highlight the pneumonia signs and classify each X-ray image all in less than 10 seconds (compared with the radiologist’s 20 minutes that we mentioned earlier), and it can do that same task effortlessly for 24 hours without taking a break. This time saving is especially critical amid the COVID-19 pandemic. At this spreading rate, it will be overwhelming for radiologists to review a massive number of chest X-ray images of potentially COVID-19-infected patients. CovidScan.ai can automatically highlight the suspected signs of pneumonia for the radiologists and speed up the process of chest X-ray review. Therefore, more COVID-19-positive patients will get their results back faster and receive appropriate care sooner to prevent the spread of the virus. How we built it Forge integration: First, we deployed the API of the previously built CovidScan AI model. Then, we built the Forge app with a custom button on the Jira ticket screen and integrated the AI model using a REST API. When a person clicks the “Covid Smart Scan” button, the app will: 1. Fetch all attachments from the ticket with the Get Issue REST API 2. Call the Jira attachment URL and get an arrayBuffer from the response 3. Convert the arrayBuffer to base64 with a custom function 4. Post the base64 string to the external CovidScan API 5. Display the result on the Jira screen (a sketch of this flow follows below).
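A minimal sketch of that five-step round trip as a Forge backend function, assuming a hypothetical SCAN_API_URL for the externally hosted model, a simplified request payload, and Node's Buffer being available in the app runtime; the production code may differ.

```javascript
import api, { route } from '@forge/api';

// Hypothetical endpoint for the externally hosted CovidScan model API.
const SCAN_API_URL = 'https://covidscan.example.com/predict';

// Fetch an issue's first attachment, convert it to base64 and post it to the
// external model, mirroring the five steps listed above.
export async function scanIssueAttachment(issueKey) {
  // 1. Get the issue and its attachments with the Get Issue REST API.
  const issueRes = await api
    .asApp()
    .requestJira(route`/rest/api/3/issue/${issueKey}?fields=attachment`);
  const issue = await issueRes.json();
  const attachment = issue.fields.attachment[0];

  // 2. Download the attachment content as an ArrayBuffer.
  const fileRes = await api
    .asApp()
    .requestJira(route`/rest/api/3/attachment/content/${attachment.id}`);
  const arrayBuffer = await fileRes.arrayBuffer();

  // 3. Convert the ArrayBuffer to a base64 string.
  const base64 = Buffer.from(arrayBuffer).toString('base64');

  // 4. Post the base64 image to the external CovidScan API.
  const scanRes = await api.fetch(SCAN_API_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ image: base64, filename: attachment.filename }),
  });

  // 5. Return the prediction (label plus heatmap) for display on the Jira screen.
  return scanRes.json();
}
```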
Chest X-ray Classification Model: For the deep learning model, we developed a PyTorch model. This project’s goal is to draw class activation heatmaps on suspected signs of pneumonia and then classify chest X-ray images as “Pneumonia” or “Normal”. For this project, we use a dataset available on Kaggle consisting of 5433 training data points, 624 validation data points and 16 test data points. For the model, we load the pre-trained ResNet-152 available from Torchvision for transfer learning. ResNet-152 provides state-of-the-art feature extraction since it is trained on the large ImageNet dataset. ResNet-152, as the name suggests, consists of 152 convolutional layers. Because the network is very deep, the layers are arranged in a series of residual blocks. These residual blocks use skip connections to help prevent the vanishing gradients that are a common problem in deep architectures like ours. ResNet also supports a global average pooling layer, which is essential for our attention layer later on. For the attention layer that draws the heatmap, we use the global average pooling approach proposed in Zhou et al.; the global average pooling layer explicitly enables the convolutional neural network (CNN) to have remarkable localization ability. We achieve 97% accuracy on the training dataset and 80% on the testing dataset. Component testing instructions: Demo link: https://kuafusoft.atlassian.net/secure/RapidBoard.jspa?rapidView=2&projectKey=COV&selectedIssue=COV-3 Create a ticket with the patient’s name and basic info. Move the ticket to the X-ray status. Upload an X-ray JPEG file as an attachment. Click the “Covid Smart Scan” button. Observe the returned chest X-ray result with the heatmap and classify the image as positive or negative based on the heatmap and label. The step-by-step demo is in this video: https://vimeo.com/348402764 Technical Requirements: The packages required for this project are as follows: Forge App Jira Torch (torch.nn, torch.optim, torchvision, torchvision.transforms) Django Numpy Matplotlib Scipy PIL Tensorflow jQuery Challenges we ran into This hackathon project was a very different experience for us, challenging us throughout in the Forge integration and deep learning model training parts. This is the first time any of us had created endpoints for a pre-trained deep learning model to integrate with Forge. Accomplishments that we're proud of We managed to finish the project in the limited time of 2 weeks, in our free time from school and work. We kept striving to submit on time while learning and developing at the same time. We are really satisfied with and proud of our final product for the hackathon. What we learned Through this project, we learned to deploy complicated image-recognition deep learning models and integrate them with the Forge platform. We also learned the process of developing a mini data science project, from finding a dataset to training the deep learning model and finally deploying and integrating it into a web app. This project couldn’t have been done without the efforts and collaboration of a team with such diverse technical backgrounds. What's next for CovidScan: In the next 2 months, our plan is: We will raise funds to invest more into the R&D process. We will partner with research labs to collect more data and find hospitals to test our solution.
One of our members has published his newly collected dataset in this open-source GitHub repository: https://github.com/nihalnihalani/COVID19-Detection-using-X-ray-images-/ Regarding our R&D, we plan on improving the performance of the platform, preferably by reading more scientific literature on state-of-the-art deep learning models implemented for radiology. We also plan to add a bounding box around the suspected area of infection on top of the heatmap to make the output image more interpretable for radiologists. We are working to implement the multi-label model of COVID-CXR on our dataset to improve our application. This model is published by The Artificial Intelligence Research and Innovation Lab at the City of London's Information Technology Services division and has accuracy 0.92, precision 0.5, recall 0.875 and AUC 0.96. Many pieces of literature mention developing an NLP model on radiology reports with other structured variables such as age, race, gender, temperature... and integrating it with the computer vision model of the chest X-ray to give an expert radiologist’s level of diagnosis (Irvin et al., 2019; Mauro et al., 2019). We may try to implement that as we move further with the project in the future. With the improved results, we will publish these findings and methodologies in a peer-reviewed journal so that they can be reviewed by expert computer scientists and radiologists in the field. Eventually, we will expand our classes to include more pneumonia-related diseases such as atelectasis, cardiomegaly, effusion, infiltration, etc. so that this platform can be widely used by radiologists for general diagnosis even after the COVID-19 pandemic is over. Our end goal is to make this a scalable tool that can be used in radiology clinics across the globe, even in rural areas with limited access to the internet like those in Southeast Asia or Africa. References: Crist, C. (2017, November 30). Radiologists want patients to get test results faster. Retrieved from https://www.reuters.com/article/us-radiology-results-timeliness/radiologists-want-patients-to-get-test-results-faster-idUSKBN1DH2R6 Irvin, Jeremy & Rajpurkar, Pranav & Ko, Michael & Yu, Yifan & Ciurea-Ilcus, Silviana & Chute, Chris & Marklund, Henrik & Haghgoo, Behzad & Ball, Robyn & Shpanskaya, Katie & Seekins, Jayne & Mong, David & Halabi, Safwan & Sandberg, Jesse & Jones, Ricky & Larson, David & Langlotz, Curtis & Patel, Bhavik & Lungren, Matthew & Ng, Andrew. (2019). CheXpert: A Large Chest Radiograph Dataset with Uncertainty Labels and Expert Comparison. Kent, J. (2019, September 30). Artificial Intelligence System Analyzes Chest X-Rays in 10 Seconds. Retrieved from https://healthitanalytics.com/news/artificial-intelligence-system-analyzes-chest-x-rays-in-10-seconds Lambert, J. (2020, March 11). What WHO calling the coronavirus outbreak a pandemic means. Retrieved from https://www.sciencenews.org/article/coronavirus-outbreak-who-pandemic Mauro Annarumma, Samuel J. Withey, Robert J. Bakewell, Emanuele Pesce, Vicky Goh, Giovanni Montana. (2019). Automated Triaging of Adult Chest Radiographs with Deep Artificial Neural Networks. Radiology; 180921 DOI: 10.1148/radiol.2018180921 Wang, L., & Wong, A. (2020, March 30). COVID-Net: A Tailored Deep Convolutional Neural Network Design for Detection of COVID-19 Cases from Chest Radiography Images. Retrieved from https://arxiv.org/abs/2003.09871 Built With forge matplotlib numpy pil pytorch==1.0.1 torchvision0.2.2 Try it out kuafusoft.atlassian.net github.com
CovidScan - A Forge-integrated AI Radiology App for COVID-19
CovidScan.ai is developed as a secure AI platform to assist radiologists with fast and accurate pneumonia detection amid the COVID-19 pandemic.
['Vi Ly', 'Ryan Sun', 'Moksh Nirvaan', 'Nihal Nihalani']
[]
['forge', 'matplotlib', 'numpy', 'pil', 'pytorch==1.0.1', 'torchvision0.2.2']
29
10,045
https://devpost.com/software/placeholder-j7e89a
Hello World! I'm Christina - a Software Engineer, Tech Lead and Scrum Master. I'm happy to share with you Super Scrum - a Jira plugin platform to power up your most important meetings. Inspiration With everyone working remotely due to the global pandemic, team meetings have become more important than ever. As a team lead, what's most important to me these days is making sure that everyone stays connected and energized. Over the last few months, Agile ceremonies have been the main time that the whole team is all in one 'room'. I work remotely on a cross-location team and we use the Atlassian suite to manage our work. However, I often find myself reaching for other tools (such as draw.io for 'whiteboarding' or miro for 'post-it notes') to facilitate these remote team meetings. This results in a bit of redundancy when needing to copy the notes back into Jira/Confluence for documentation. For Codegeist we took a step back to re-imagine how we run our agile ceremonies - Standup, Planning and Retrospectives. The Super Scrum add-ons create dedicated virtual team spaces for these meetings, providing contextual tools to streamline team meetings. What it does This is a platform of Jira add-ons to facilitate and streamline Agile team ceremonies. There are individual Jira Apps: Super Standup There are two challenges that this aims to solve: How can we make our standups more efficient? How can we integrate giving real-time feedback as a team? I believe that daily standups are the most important meeting for team health - it's where you see your whole team and kick off a new day. Yet, we've all been to standups that take way longer than they should, or where people go off-topic. In addition, I've been on many teams that struggled with giving real-time feedback. This Jira App structures standups, leaving room for a few minutes of daily appreciative team feedback. How it works Open up the app from the project sidebar Create a new team or select an existing one to launch the standup. When creating a new team, you can configure the duration of the standup or whether or not to enable the feedback time. Roll the dice! Once the standup begins, there's a user dice to randomly select who's speaking next See active issues assigned to the user and prompts for what to speak about The timeline at the top lets you know how much time you have left, to keep the flow going At the end of standup, it puts a few minutes on the clock so that the team can share appreciative feedback I've used this model to run several team standups, and I'm always surprised by how much it raises the level of energy at the start of the day and improves the overall health of the team. Project Map The main challenge that this aims to solve is: How can we visualize all the work and its dependencies? Whenever there's a new epic/sprint of work, it can be helpful to 'whiteboard' out all the tasks to lay out and visualize dependencies before creating tickets for them. Yet, working remotely, it's difficult to 'whiteboard' and I've found myself creating draw.io diagrams that mimic the Jira ticket dependencies, resulting in more wasted time than I want to admit. This Jira App creates a drag-and-drop space to visually lay out a sprint, epic or group of tickets and their dependencies. How it works Open up the app from the project sidebar Create a new Project map Add issues to the project map area. As issues are being added, you can move them around or re-position them to group them. You can also auto-sort the issues!
If an issue is not created yet, it can be created right from this App. Clicking on any issue brings up more details. On the issue details view, there's a component that also shows the dependencies of the issue. I've used this method of 'feature-planning' to make sure that all the tickets we need are accounted for before our sprint planning meetings, or to communicate the work with external stakeholders. Retro The main challenge that this aims to solve is: How can we make sure to follow up on our retro action items? Retros are crucial for continuously improving the team, but they only work if we follow up on action items! Too often, I've been in a retro where we still haven't finished what we set out to do before. This is due to a combination of using external tools or post-it notes to hold our retro and missing the tasks we set for ourselves. This add-on makes it so that you can run a retro right in Jira, creating issues for action items that are easy to refer back to. How it works: Open up the app from the project sidebar Create a new Retro - during creation, the team, columns and duration of the retro can be configured Retrospect! Add cards with notes, react with emoji to topics that you want to discuss, and discuss There's also a timer to help timebox conversations Create Jira issues for action items directly in the retro How I built it The core guiding principle when building this app was that it needed to be seamless, streamlined and fun - meetings shouldn't be draining, they should be energizing! The app was first designed with these requirements in mind, finding ways to integrate into the Jira flow. The coding was all done in React using NX to manage the monorepo. Challenges I ran into One of the things I wanted to do was to only use Jira for storing data for these plugins - this ensures that all the data is secure and makes it easier to manage and retire data along with the projects. However, there was no way to subscribe to changes in the project data, so I needed to work around that. It was also a little difficult developing with ngrok because the connection was a bit slow, and it would have been cool if there was a different way to emulate the Jira APIs. Accomplishments that I'm proud of We love the polished feel of the experience and intro screens. The little things really add up and make it fun! I am also proud of building something that I will use! I'm excited to introduce these to my team at work. What I learned We learned how to use the Jira APIs, and it's the first project we've used NX to manage a monorepo for - it went pretty smoothly. What's next for Super Scrum These tools were built with real teams in mind. Regardless of what happens with this hackathon, the plan is to publish them to the Atlassian Marketplace! A few todos before publishing: Adding error logging Cross-browser and accessibility testing Usability tests We would love to get some advice on what the best practices are so that the experience is smooth and seamless for users! We believe that Jira is more than just a documentation tool - it's a virtual team space. The Super Scrum add-ons bring Jira into all the team Agile ceremonies, streamlining them and making them more fun in the process!
Super Scrum
Power up your most important meetings with Super Scrum! This is a platform of Jira add-ons to facilitate and streamline Agile team ceremonies
['Christina Kayastha']
[]
['jira']
30
10,045
https://devpost.com/software/facets-dive
Inspiration Dive is a tool for interactively exploring large numbers of data points at once. What it does Dive provides an interactive interface for exploring the relationship between data points across all of the different features of a dataset. Each individual item in the visualization represents a data point. Position items by "faceting" or bucketing them in multiple dimensions by their feature values. Success stories of Dive include the detection of classifier failure, identification of systematic errors, evaluating ground truth and potential new signals for ranking. How I built it Challenges I ran into Accomplishments that I'm proud of What I learned What's next for Facets-Dive Built With connect javascript webcomponents Try it out github.com
Facets-Dive
Dive is a tool for interactively exploring large numbers of data points at once.
['Krishna Kumar']
[]
['connect', 'javascript', 'webcomponents']
31
10,045
https://devpost.com/software/tic-tac-toe-for-confluence
Inspiration Creating documents can be a lot of work, so I thought of creating a game of Tic tac toe for people to have a bit of fun in Confluence. 🙂 What it does You can play Tic tac toe with friends. How I built it I built the game with Forge. The game logic was done with JavaScript and the awesome graphics were created with SVG code attached to a Forge UI Image component (see the sketch below). Challenges I ran into Creating a game was very challenging because Confluence pages are not meant to be gaming platforms, but it was a lot of fun trying to hack a game onto a Confluence page. Accomplishments that I'm proud of Creating something that hopefully will be fun What I learned I learned how to create macros with Forge. Built With forge javascript node.js Try it out github.com
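A minimal sketch of that SVG trick, assuming the Forge UI Image component accepts an SVG data URI as its src; the real macro's move handling, state storage and win detection are omitted.

```javascript
import ForgeUI, { render, Macro, Image } from '@forge/ui';

// Draw a 3x3 grid plus the current marks as an SVG string, then hand it to the
// Forge UI Image component as a data URI (Forge UI has no canvas or CSS).
const boardSvg = (cells) => {
  const marks = cells
    .map((mark, i) => {
      const x = (i % 3) * 100 + 50;
      const y = Math.floor(i / 3) * 100 + 65;
      return mark
        ? `<text x="${x}" y="${y}" font-size="60" text-anchor="middle">${mark}</text>`
        : '';
    })
    .join('');
  const grid =
    '<path d="M100 0V300 M200 0V300 M0 100H300 M0 200H300" stroke="black" stroke-width="4"/>';
  return `<svg xmlns="http://www.w3.org/2000/svg" width="300" height="300">${grid}${marks}</svg>`;
};

const App = () => {
  const cells = ['X', 'O', '', '', 'X', '', '', '', 'O']; // example position
  const src = `data:image/svg+xml;utf8,${encodeURIComponent(boardSvg(cells))}`;
  return <Image src={src} alt="Tic tac toe board" />;
};

export const run = render(<Macro app={<App />} />);
```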
Tic tac toe for Confluence
It's Tic tac toe in Confluence! 🤷‍♂️ Have a bit of competitive fun while collaborating on your documents.
['Harry Banda']
[]
['forge', 'javascript', 'node.js']
32
10,045
https://devpost.com/software/agite-tools-connect
Inspiration Throughout our working careers, while working with many individuals, teams, and organizations, a common shortcoming keeps surfacing - lack of group focus. There are countless reasons people tend to work towards their individual goals, hindering success at higher levels, like team, department, product, or the whole company. The American sociologist Ron Westrum has developed a typology of organizational cultures. Every organization can be one of: pathological (power-oriented), bureaucratic (rule-oriented) and generative (performance-oriented). From our empirical observation, most organizations that we came in contact with fall into the first and second typologies. That fact, together with the urge to follow and fulfill the dream of Kent Beck, one of the founders of the Agile movement - to heal the wounds between business and development - is how a tool to help with the best Scrum practices started to emerge. A common goal for all team members is a powerful concept that should have all the love from the tooling we use every day. Honoring the importance of the Sprint Goal as described in the Scrum Guide, the Sprint Goal Success metric was shaped. As important as it is, it is “only” a fragment of our mission to help organizations on their agile transformation journey. We are addressing one of the obstacles on that path - how to measure the progress. Evidence-Based Management is the theory the toolset is based on. What it does The interface application collects some additional information at two Scrum events - at Sprint Planning and at Sprint Review. The additional information helps the teams stay focused on a Sprint Goal and tracks the success rate by sending the data to the SaaS product Agile Tools. Other information can be channeled from Jira to Agile Tools to further help visualize the progress (or lack thereof) in the four Key Value Areas of the Evidence-Based Management framework. How we built it The AgileTools team prepared an API for delivering the values of the sprints directly to the AgileTools Portal, and we started to set up an interface application on the development instance that would trigger the changes once a sprint was completed (a sketch of the intended flow follows below). Sadly, this is not possible with the current Jira REST API. Challenges we ran into The Jira REST API does not support notifications about changes to a sprint. It provides a possibility to control sprints from a remote application, however it does not support passing values once a sprint is changed in Jira. Also, an extension of the sprint dialog, or at least a webhook or similar trigger, is not provided. This stopped us from further implementation, but we decided to submit our project to generate some attention to the fact that Jira does not expose teams and team-based changes to the outside world as needed for EBM, nor the events needed by SAFe or other larger Scrum frameworks. Accomplishments that we're proud of We have a clear picture of the missing pieces, and we mastered the Atlassian Connect extension while searching for the solution. We plan to continue to work on these issues and, with some support from Atlassian, maybe trigger a change in the REST API to provide additional interfaces. What we learned We accumulated a lot of knowledge on the Jira integration and all the gaps we faced on the way to setting up the integration. What's next for AgileTools-Jira Connect We will continue the work on the requirements for a seamless integration between Jira and the AgileTools Suite. Built With javascript typescript Try it out agile-tools-alpha.web.app
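One possible stopgap while such triggers are missing, sketched below, is to poll the Jira Agile REST API for closed sprints from the Connect service and forward their values to the AgileTools Portal. The AgileTools endpoint and payload shape are hypothetical placeholders, not the portal's real contract.

```javascript
import fetch from 'node-fetch';

const JIRA_BASE = 'https://your-site.atlassian.net';            // Connect host base URL
const AGILETOOLS_API = 'https://api.agiletools.example/sprints'; // hypothetical endpoint

// Poll the Jira Agile REST API for closed sprints on a board and forward
// their values (name, goal, dates) to the AgileTools portal.
export async function pushClosedSprints(boardId, authHeader) {
  const res = await fetch(
    `${JIRA_BASE}/rest/agile/1.0/board/${boardId}/sprint?state=closed`,
    { headers: { Authorization: authHeader, Accept: 'application/json' } }
  );
  const { values: sprints } = await res.json();

  for (const sprint of sprints) {
    // One POST per sprint; a real integration would de-duplicate already-sent sprints.
    await fetch(AGILETOOLS_API, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        sprintId: sprint.id,
        name: sprint.name,
        goal: sprint.goal,
        startDate: sprint.startDate,
        completeDate: sprint.completeDate,
      }),
    });
  }
}
```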
AgileTools-Jira Connect
The Evidence-Based Management framework offers a very nice and consolidated way to support the effectiveness of teams. The integration of AgileTools with Jira offers a one-stop overview for managers.
['Herbert Paar', 'Jure Cekon', 'Borut Bolčina', 'Rok Bertoncelj']
[]
['javascript', 'typescript']
33
10,045
https://devpost.com/software/facets-overview
Inspiration Overview takes input feature data from any number of datasets, analyzes them feature by feature and visualizes the analysis. What it does Overview gives users a quick understanding of the distribution of values across the features of their dataset(s). Uncover several uncommon and common issues such as unexpected feature values, missing feature values for a large number of observations, training/serving skew and train/test/validation set skew. How I built it Challenges I ran into Accomplishments that I'm proud of What I learned What's next for Facets-Overview Built With connect javascript webcomponents Try it out github.com
Facets-Overview
Overview takes input feature data from any number of datasets, analyzes them feature by feature and visualizes the analysis.
['Krishna Kumar']
[]
['connect', 'javascript', 'webcomponents']
34
10,045
https://devpost.com/software/confluence-text-analytics
Appears in the context menu Macro that shows the keywords in the document Macro that shows the concepts in the document Modal that appears when the action is selected Dialog that appears when the menu item is selected Appears in the actions menu Inspiration Applying natural language understanding to draw insights from documents when working in a team. The goal of the application is to help users draw helpful insights from their documents, easily find important topics in the content, and write documentation that may be easier to read and understand. What it does Uses the IBM Watson Natural Language Processing APIs for text analytics on Confluence documents. Includes four modules: Confluence Macro for Concepts Shows the high-level concepts in the content, their relevance to the document, and a link to the DBpedia resource on the concept Confluence Macro for Keywords Shows the important keywords in the content, their relevance to the document, a sentiment score (positive, neutral, negative), and the emotion associated with the keyword in the content (sadness, joy, fear, disgust, anger) Confluence Context Menu Shows the analysis of the emotional and language tones of the selected text Confluence Content Action Shows the analysis of the emotional and language tones of the document How I built it Built using Forge for Confluence. The application is written in Node.js and makes use of @forge/ui and @forge/api for components and API calls to the IBM Watson Natural Language Understanding and IBM Watson Tone Analyzer APIs (a sketch of one of these calls follows below). Challenges I ran into Working within the limitations of Forge. As a mainly React developer, getting used to working with the Forge UI components such as <Fragment /> and <Text />, as well as using api.fetch from the Forge API to make calls to the IBM Watson APIs with Forge environment variables. Accomplishments that I'm proud of Successfully creating a Forge application that can make third-party API requests and display them in four distinct modules. What I learned Learned about Forge and more about the Atlassian platform including products like Jira and Confluence. First time using the IBM Watson APIs as well. What's next for Confluence Text Analytics More configuration options for Macro modules, i.e. the option to change the number of concepts and keywords returned by the API Extending functionality to Jira A wider set of analytics options available Built With confluence forge ibm-watson javascript node.js Try it out bitbucket.org
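A hedged sketch of one such call: posting page text to the Watson Natural Language Understanding analyze endpoint via api.fetch. The environment variable names (WATSON_URL, WATSON_APIKEY) and the version date are illustrative, not the app's actual configuration.

```javascript
import api from '@forge/api';

// Analyze text with IBM Watson Natural Language Understanding. The credentials
// are read from Forge environment variables (set with `forge variables:set`).
export async function analyzeText(text) {
  const auth = Buffer.from(`apikey:${process.env.WATSON_APIKEY}`).toString('base64');

  const response = await api.fetch(
    `${process.env.WATSON_URL}/v1/analyze?version=2020-08-01`,
    {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Authorization: `Basic ${auth}`,
      },
      body: JSON.stringify({
        text,
        features: {
          concepts: { limit: 5 },                            // high-level concepts with DBpedia links
          keywords: { limit: 10, sentiment: true, emotion: true }, // keywords with sentiment and emotion
        },
      }),
    }
  );
  return response.json();
}
```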
Confluence Text Analytics
A Forge application that uses the IBM Watson Natural Language Processing APIs for text analytics on Confluence documents.
['Alex Yu']
[]
['confluence', 'forge', 'ibm-watson', 'javascript', 'node.js']
35
10,045
https://devpost.com/software/helpful-pages
Inspiration Nabil and I (Adil) are working in a very open company where collaboration is part of our DNA. Collaborating on projects and sharing knowledge in Confluence goes without saying. Recently we hired new employees who had to go through their onboarding process remotely. We heavily relied on our documentation in Confluence and asked everyone to let us know which pages helped them (achieve a task) or confused them. We tracked their feedback in a Confluence table and revised the pages based on their comments. This inspired other teams to follow our example and ask for feedback on their own pages in their spaces. As this helped us a lot and we want to continue with that practice, we wanted to build an app that captures user feedback quickly and provides an overview for authors. What it does We have built a small feedback form, displayed just below the page title, that allows authors to gain insight into how their content is received. All users need to do is state whether they find the page helpful or not and leave a short comment. The feedback is displayed in an overview within the space administration with useful stats. How we built it We have built the app with Forge, more specifically with the latest Confluence extension points that were introduced on July 2nd, 2020: Confluence Byline and Confluence Space Settings (an illustrative storage sketch follows below). Challenges we ran into We didn't have any experience with Forge but loved the recent updates to it, which is why we wanted to start building our first cloud app. Initially, we hoped to have a more engaging feedback form with a slider to capture more precise feedback. This would have resulted in a more insightful feedback overview. We also wanted to change the text below the page title for users who have already submitted their feedback. Accomplishments that we're proud of After attending the last Atlas Camp in Vienna we were delighted by what we learned about Forge and couldn't wait to work with the new framework. As its capabilities were very limited, we only made small features just to get a feeling for it. When the Forge team announced the new extension points we immediately decided to try and build our app with Forge. This is why we are extremely proud of our result, as it wouldn't have been possible less than two weeks ago. On a side note: This was the first project we worked on without drinking buckets of coffee. #itsthelittlethings What we learned By using Forge we really don't need to worry about hosting our app, which not only comes with extra costs but also with related legal and data privacy issues. This takes a huge load off our minds. What's next for Rate My Page Depending on the capabilities of Forge we definitely want to improve the user experience: More options in the feedback form for more precise feedback (e.g. 'I am missing information', 'There are no links to related pages', etc.) Notifications for page authors (and editors) Reset rating for individual pages Link ratings to page versions Voting function for anonymous users (for documentation sites) and many more :) Built With forge
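Purely as an illustration of how such feedback could be persisted without separate hosting, the sketch below appends each rating to a Confluence content property through the Forge request helper; the property key and payload shape are illustrative, not the app's actual storage format.

```javascript
import api, { route } from '@forge/api';

// Illustrative storage only: append each rating to a content property on the page.
const KEY = 'rate-my-page-feedback';

export async function saveRating(contentId, helpful, comment) {
  // Read the existing property (if any) so we can append and bump the version.
  const res = await api
    .asApp()
    .requestConfluence(route`/wiki/rest/api/content/${contentId}/property/${KEY}`);
  const existing = res.ok ? await res.json() : null;
  const entries = existing ? existing.value.entries : [];
  const nextVersion = existing ? existing.version.number + 1 : 1;

  // Write the updated list of ratings back to the page.
  return api
    .asApp()
    .requestConfluence(route`/wiki/rest/api/content/${contentId}/property/${KEY}`, {
      method: 'PUT',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        value: { entries: [...entries, { helpful, comment, at: Date.now() }] },
        version: { number: nextVersion },
      }),
    });
}
```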
Rate My Page
When it comes to harnessing your team's collective knowledge in pages there is no better tool than Confluence. To let authors know that their content has been helpful, you can now easily share feedback.
['Adil Nasri', 'Nabil Nasri']
[]
['forge']
36
10,045
https://devpost.com/software/profanity-checker
Profanity Checker Profanity Checker - Main App Inspiration Whether it is technical documentation, a blog or an article, businesses need to make sure that no obscene content gets promoted through their brand, or else it can harm their brand image. Therefore, a profanity check needs to be run on brand content before publishing it online. Implementing profanity checks manually at large scale can be a daunting, time-consuming and costly affair. Hence, an automated content moderation solution is very much needed to save businesses cost, effort and time in managing their brand content. What it does Profanity Checker is a Content Moderator that analyzes Confluence pages for profanity and alerts the user if it finds any profane words. It does this automatically using Natural Language Processing. How we built it We built it using Atlassian Forge, Forge UI and the Confluence APIs (a sketch of the page scan follows below). Accomplishments that we're proud of We were able to build the app very fast, and using Forge was very easy. It was a great learning experience for the both of us. We would definitely love to build more apps for Atlassian products using Forge. What we learned We learnt how Atlassian Forge can be used to build apps for Atlassian Cloud products. Building the app using Forge was a great experience as it does most of the work for you. All we need to do is code! What's next for Profanity Checker Extend the capability to Jira Service Desk issues When Forge adds more capabilities, we would like to extend this app to identify profane words in the response being written by a Support Representative. Built With atlassian forge javascript Try it out bitbucket.org
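The NLP behind the real checker is not spelled out in the write-up; as a simplified sketch, the function below pulls the page body through the Confluence REST API and scans it against a small illustrative word list.

```javascript
import api, { route } from '@forge/api';

// An illustrative word list; a real moderation dictionary would be far larger.
const PROFANE_WORDS = ['badword1', 'badword2'];

// Pull the page body in storage format, strip the markup, and flag any
// profane words found so the macro can alert the user.
export async function checkPage(contentId) {
  const res = await api
    .asApp()
    .requestConfluence(route`/wiki/rest/api/content/${contentId}?expand=body.storage`);
  const page = await res.json();

  // Remove storage-format tags to leave plain text, then search for matches.
  const text = page.body.storage.value.replace(/<[^>]+>/g, ' ').toLowerCase();
  const found = PROFANE_WORDS.filter((word) => new RegExp(`\\b${word}\\b`).test(text));

  return { clean: found.length === 0, found };
}
```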
Profanity Checker
Profanity Checker is a Content Moderator that analyzes Confluence pages for Profanity and alerts the user if it finds any profane words.
['Sumanth Muni', 'Priyadarshini Murugan']
[]
['atlassian', 'forge', 'javascript']
37
10,045
https://devpost.com/software/outdo
Home page Workspace listing Repositories listing Synced issue tracker with the Bitbucket issue tracker Prototype listing screens Upload screens from your design team and build clickable prototypes Add / Create new chat channel Add a member to the channel - uses Bitbucket usernames Private and group messaging Start a new video call / video meetings Share the meeting links to peers to let them join the meeting Meetings - In the screenshot only one user joined, but multiple users can join too Inspiration Often as developers we tend to switch between different apps in our daily work. Our managers use Jira, the whole team uses Skype / Google Meet / Slack / HipChat for communication, and Trello to manage any in-house work. There was a lack of a tool which combines all of this into ONE. What it does Issue tracking, group and private chat, video calls and workflow management for Bitbucket made simple How I built it Vue CLI 3 and Vuex to build the front-end Laravel for the API back-end Twilio Chat and Video API CORS using nginx Bitbucket OAuth consumer for API integration Tools Used Laravel 7 Vue + VueRouter + Vuex + VueI18n + ESLint Pages with dynamic import and custom layouts Login, register, email verification and password reset Authentication with JWT Socialite integration Vue-Atlas + Font Awesome 5 Challenges I ran into State storage using Vuex and calling the Bitbucket API only when needed Integrating Twilio group chat and video chat Laying out the whole dashboard Accomplishments that I'm proud of Integrating the front-end and back-end to build an amazing SPA Completed the app with the minimal features as planned What I learned The Bitbucket Connect / OAuth APIs The Atlassian design system and the Vue-Atlas UI framework The Twilio client SDK and APIs for Chat and Video What's next for outdo.app Workflow prototyping features like InVision / Marvel app Better video calls - draggable video thumbnails on any page Realtime issue tracking while chatting using webhooks and bots Built With app bitbucket clickable-prototyping issue-tracking laravel protoyping twilio video-conferencing vue Try it out outdo.app github.com www.facebook.com twitter.com
Outdo
Simple solutions for complex connections
['Shankar Ganesh']
[]
['app', 'bitbucket', 'clickable-prototyping', 'issue-tracking', 'laravel', 'protoyping', 'twilio', 'video-conferencing', 'vue']
38
10,045
https://devpost.com/software/autoquiz
Add Auto Quiz Macro to your Confluence Page Click on Start Quiz button to Start the AutoQuiz Sit back, relax and take the quiz! We know you would have read the document carefully. So why fear?! :P Results of the quiz are displayed right after the quiz. Use it to show off your achievement! Inspiration Many companies use Atlassian Confluence to host their technical documentation, internal wiki and blogs, and for knowledge management. Although usage of Confluence pages is extensive, the content in these pages can often get too long, monotonous and, quite frankly, boring! While people are reading such articles, they zone out frequently and lose track of what's going on. Also, if they want to revisit the content of the page, they end up reading the whole article again, which really is a waste of time. In this process, there is a lack of feedback; there is no one to test whether you've really understood what you've been reading. What if there were a mechanism that automatically generates a quiz for any Confluence page that you read to evaluate your understanding? What it does AutoQuiz is a macro for Atlassian Confluence that generates quizzes automatically using Natural Language Processing. Users who read Confluence pages can use AutoQuiz to evaluate their understanding of the content that they read. Many organizations use Confluence to host compliance-related documents which employees are required to read. Such companies can ask employees to read the documents, take the AutoQuiz and submit the results to their Managers or HR. This process can help companies ensure that their employees understand critical policies such as Security, Privacy, IT equipment use and other HR policies. Employees who revisit a page can take the quiz before they read the page to find the topics that they are weakest in or don't understand well. After the quiz, the employee can concentrate on reading only that part of the document rather than reading the whole document. At AutoQuiz, we want to make the lives of employees and employers who use Atlassian Confluence easy! There are many use cases in which AutoQuiz can be useful other than the ones listed above. What are you waiting for? Start AutoQuizzing! Want to try AutoQuiz? Follow these steps: Step 1: Add the AutoQuiz Macro to your Confluence Page Step 2: Click the "Start Quiz" button to generate a quiz for the Confluence page that you are reading Step 3: The quiz is generated right below the page content. Take the AutoQuiz and enjoy! Demo of AutoQuiz - Quiz on Atlassian Confluence Demo of AutoQuiz - Quiz Results on Atlassian Confluence How we built it We built it using Atlassian Forge, Forge UI and the Confluence APIs. The Natural Language Processing module that generates the quiz is written in Python and is exposed to the Forge app as a web API (a sketch of that call follows below). Using AutoQuiz is easy; all you need to do is add it to your page and click a button! Accomplishments that we're proud of We were able to build the app very fast, and using Forge was very easy. It was a great learning experience for the both of us. We would definitely love to build more apps for Atlassian products using Forge. What we learned We learnt how Atlassian Forge can be used to build apps for Atlassian Cloud products. Building the app using Forge was a great experience as it does most of the work for you. All we need to do is code!
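A minimal sketch of that call from the Forge side, with a hypothetical QUIZ_API URL and response shape standing in for the real Python service.

```javascript
import api from '@forge/api';

// The Python NLP service is exposed as a web API; the URL and response shape
// here are illustrative placeholders.
const QUIZ_API = 'https://autoquiz.example.com/generate';

// Send the plain text of the current Confluence page to the quiz generator
// and get back a list of { question, options, answer } objects to render.
export async function generateQuiz(pageText) {
  const response = await api.fetch(QUIZ_API, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ text: pageText, maxQuestions: 10 }),
  });
  if (!response.ok) {
    throw new Error(`Quiz API returned ${response.status}`);
  }
  return response.json(); // e.g. [{ question, options: [...], answer }]
}
```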
What's next for AutoQuiz Add a feature to add custom questions to the list of auto-generated questions. Give users a customization option to choose the maximum number of questions they want to generate. Add a feature to generate a certificate and store the result on completion of the quiz. Add a feature to generate questions based on the images and videos used in the Confluence page. Built With atlassian forge javascript Try it out bitbucket.org
AutoQuiz
Generates quizzes automatically using Natural Language Processing for Atlassian Confluence pages.
['Sumanth Muni', 'Priyadarshini Murugan']
[]
['atlassian', 'forge', 'javascript']
39
10,045
https://devpost.com/software/summarizer-qor48y
Step 1: Add the Read In Shorts Macro to your Confluence Page Step 2: Users can click the "Generate Summary" button to generate a summary for the Confluence page that they are reading Step 3: The summary is generated right below the page content. Read and Enjoy! Inspiration How much of a document do you actually read? In our fast-paced society, research suggests that it's not much. According to a recent study, users only read approximately 20% of the words on a website. People often believe this is enough information to determine whether or not to spend more time actually reading through the details of the site. Although we don't know the average amount of a work document that is read, we can assume the amount is pretty limited. We just don't have time! That's why we need Read In Shorts. These shortened overviews of documents allow readers to decide whether or not they need to read the complete document. Read In Shorts allows readers to determine the results and recommendations of the document and whether or not the document is applicable to their business needs. What it does Read In Shorts generates summaries for Confluence pages using Natural Language Processing and Atlassian Forge (an illustrative summarization sketch follows below). Users who are reading Confluence pages can use it to quickly read a short summary of the page rather than a long document, and to determine whether or not they need to read the complete document. Want to read all Confluence pages in shorts? Follow these steps: Step 1: Add the Read In Shorts Macro to your Confluence Page Step 2: Users can click the "Generate Summary" button to generate a summary for the Confluence page that they are reading Step 3: The summary is generated right below the page content. Read In Shorts and Enjoy! How we built it We built it using Atlassian Forge, Forge UI, the Confluence APIs and Natural Language Processing. Accomplishments that we're proud of We were able to build the app very fast, and using Forge was very easy. It was a great learning experience for the both of us. We would definitely love to build more apps for Atlassian products using Forge. What we learned We learnt how Atlassian Forge can be used to build apps for Atlassian Cloud products. Building the app using Forge was a great experience as it does most of the work for you. All we need to do is code! What's next for Read In Shorts Add a feature to add custom questions to the list of auto-generated questions. Give a customization feature to the user to choose the maximum number of questions the user wants to generate. Add a feature to generate a certificate and store the result on completion of the quiz. Add a feature to generate questions based on the images and videos used in the Confluence page. Built With atlassian forge Try it out bitbucket.org
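The write-up does not say which summarization technique the macro uses; purely as an illustration, here is the kind of frequency-based extractive summarizer that is often used as a baseline for this task.

```javascript
// A toy frequency-based extractive summarizer, only to illustrate the idea;
// the actual NLP behind Read In Shorts is not described in the write-up.
function summarize(text, maxSentences = 3) {
  const sentences = text.match(/[^.!?]+[.!?]+/g) || [text];
  const words = text.toLowerCase().match(/[a-z']+/g) || [];

  // Score each word by how often it appears in the document.
  const freq = {};
  for (const w of words) freq[w] = (freq[w] || 0) + 1;

  // Score each sentence by the average frequency of its words.
  const scored = sentences.map((s, index) => {
    const sWords = s.toLowerCase().match(/[a-z']+/g) || [];
    const score = sWords.reduce((sum, w) => sum + (freq[w] || 0), 0) / (sWords.length || 1);
    return { s, index, score };
  });

  // Keep the top sentences, but put them back in document order.
  return scored
    .sort((a, b) => b.score - a.score)
    .slice(0, maxSentences)
    .sort((a, b) => a.index - b.index)
    .map((x) => x.s.trim())
    .join(' ');
}
```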
Read In Shorts
Generates summaries for Confluence pages using Natural Language Processing and Atlassian Forge.
['Sumanth Muni', 'Priyadarshini Murugan']
[]
['atlassian', 'forge']
40
10,045
https://devpost.com/software/time-tracking-for-confluence
Easy time tracking for Confluence Inspiration The concept of logging work time in Confluence was born out of the needs of our team. We often have the opportunity to work with copywriters or analysts who are employed for projects that we implement. Such people, representing the areas of business and marketing, very often do not use Jira, but Trello. Our product team has the entire process laid out in Jira, we keep the documentation in Confluence, and we do not use Trello in our daily work. The work of external people must be accounted for separately, and keeping this data in Excel was not convenient for us (another document to fill out ...). That is why we thought: since these people work in Confluence, and we also use it, why not let them log time on the Confluence pages they created? What it does This app allows you to log working time on specific pages in Confluence. The work log history on a given page can be previewed and exported to a .csv file. In addition, there is a dashboard (report) in the global view that shows the logged work time for the current user. In the list, you can display working times in a selected time period grouped by spaces or pages. From this view, you can also export data to a .csv file for further processing. How we built it The application implementation has been divided into three parts: the server side (Java, Spring Boot, Atlassian Connect), the frontend (ReactJS, Atlaskit UI), and a PostgreSQL database. We manage the entire project using Jira Cloud and Confluence. We prepared the interface graphic designs in Figma. Challenges we ran into Returning breadcrumbs turned out to be a problem. Currently, this does not work as we would like: they should be returned in the information of each page. We would like to show this information in the dialog of saved work logs on a particular Confluence page. Thanks to this, users know which page they are checking the information for. Fortunately, we know how to do it! It is laborious because it requires additional calls to the REST API, but we will introduce it in the next version of the app. Apart from that, we did not encounter any difficulties that would interfere with the implementation of our idea. Accomplishments that we're proud of We are proud that this application is really easy to use. At first, we were afraid that we would add so many functions to it that it would become complicated and difficult to understand. In addition, by talking with potential users - the people we work with on a daily basis - we learned that their needs are very simple: they want to enter how much time and when they worked on a given page and be able to add a short description. The simplicity of this application is its strength. When adding more functionality in the future, we will remember not to lose this advantage. What we learned We have gained new experience in building a dedicated application for Confluence. Earlier, we were able to release a macro that is available on the Atlassian Marketplace: Content Template Macro for Confluence. We learned how to add elements to the menu on a Confluence page and display information in dialogs. In addition, for the first time, we had the opportunity to create a global page that is available to every user. None of us had ever participated in any hackathon! We work with each other every day, but it is a completely different style of work than during such a competition. We kept thinking about how little time we had left, and how much was still to be done.
But we can say it was worth it, and it certainly is not our last time. We still want to work together :) What's next for Time Tracking for Confluence The application has been submitted to Atlassian and we are waiting for approval to publish it as a free Marketplace add-on. Feedback from customers is very important to us and will definitely have a significant impact on the product's roadmap. Nevertheless, there are elements that we definitely want to add: the ability to create teams (or to use Teams, which are available in Atlassian Cloud products) and view the work logs of the entire team; more filtering options - currently you can narrow down work logs only to time periods and switch between the list of spaces and pages. We believe that this application will open up a completely new area for Confluence (= time tracking), which until now has been overlooked. Built With atlaskit heroku java react springboot Try it out appsvio.atlassian.net marketplace.atlassian.com
Time Tracking for Confluence
Give your teams the opportunity to log the time they actually spent creating documents or meeting notes. Don’t force users to create Jira issues just to log working time - do it in Confluence!
['Krzysztof Skoropada']
[]
['atlaskit', 'heroku', 'java', 'react', 'springboot']
41
10,045
https://devpost.com/software/jira-issue-analyzer
GIF Jira Issue Analyzer - Negative Sentiment Jira Issue Analyzer - Negative Sentiment GIF Jira Issue Analyzer - Positive Sentiment Jira Issue Analyzer - Positive Sentiment Inspiration Sentiment analysis is useful, and important, for monitoring and improving customer experience. Customers’ feelings towards a brand can be influenced by a number of factors. Companies can resort to sentiment analysis to go through product or service reviews, for example, and attribute a score to each of them, allowing customer service agents to reach out to the customers with the most negative opinions first and try to defuse the bad situation as soon as possible. As for the reviews with more positive scores, these allow companies to understand what actions trigger positive emotions in customers, as a benchmark going forward. What it does Jira Issue Analyzer is an Atlassian Forge app that helps businesses analyze the sentiment of Jira issues and categorizes each issue as Positive, Negative or Neutral (an illustrative sketch of this bucketing follows below). This can help customer service agents turn their attention to the most frustrated or dissatisfied customers without having to go through each of the issues in the queue manually to assess their priority. The app can be used by support managers to measure a customer's or reporter's overall satisfaction with their support team. How we built it We built it using Atlassian Forge, Forge UI, Confluence APIs and Natural Language Processing. Accomplishments that we're proud of We were able to build the app very fast, and using Forge was very easy. It was a great learning experience for the both of us. We would definitely love to build more apps for Atlassian products using Forge. What we learned We learnt how Atlassian Forge can be used to build apps for Atlassian Cloud products. Building the app using Forge was a great experience as it does most of the work for you. All we need to do is code! What's next for Jira Issue Analyzer Extend the capability to Confluence. When Forge adds more capabilities, we would like to extend this app to analyze the sentiment of the response written by a Support Agent. Agents can use it to check if any negative words have been used and then enhance the response to be more positive. Built With atlassian forge javascript Try it out bitbucket.org
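The app's actual NLP engine is not described; the sketch below shows only the Positive / Negative / Neutral bucketing idea with a tiny illustrative lexicon.

```javascript
// A minimal lexicon-based scorer purely to illustrate the three-way bucketing;
// a real implementation would use a proper sentiment model or service.
const POSITIVE = ['thanks', 'great', 'love', 'works', 'resolved', 'happy'];
const NEGATIVE = ['broken', 'frustrated', 'angry', 'terrible', 'blocked', 'urgent'];

function sentimentOf(issueText) {
  const words = issueText.toLowerCase().match(/[a-z']+/g) || [];

  // +1 for each positive word, -1 for each negative word.
  const score = words.reduce(
    (s, w) => s + (POSITIVE.includes(w) ? 1 : 0) - (NEGATIVE.includes(w) ? 1 : 0),
    0
  );

  if (score > 0) return 'Positive';
  if (score < 0) return 'Negative';
  return 'Neutral';
}
```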
Jira Issue Analyzer
Jira Issue Analyzer is a Forge app that helps agents, support managers & other assignees understand the sentiment of Jira issues. This will help them understand the frustration faced by the customer.
['Sumanth Muni', 'Priyadarshini Murugan']
[]
['atlassian', 'forge', 'javascript']
42
10,045
https://devpost.com/software/ai-quizzer
Inspiration When our knowledge bases grow, it becomes difficult to comprehend such large amounts of data. You might miss out on some really critical info hidden in your organization's Confluence. Be it your company's knowledge base containing vital info, or, if you are a student, your notes for memorizing something effectively. What it does This add-on automatically generates fun quizzes based on your Confluence content. How I built it Built the UI using Atlassian Forge. Used the 'compromise' library for Natural Language Processing tasks - Named Entity Recognition. Generated questions from the recognized entities (a sketch of this follows below). Architecture Challenges I ran into Getting the text from Confluence pages was a challenge initially, but we sorted it out after digging through the docs. Accomplishments that I'm proud of Building a complete product that would actually be useful to people! What I learned Using reusable components. A bit of Natural Language Processing. What's next for AI Quizzer A much more enhanced NLP engine - generate tricky questions. Keep track of performance records. Analytics based on past performance. Built With compromise forge javascript node.js Try it out github.com
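A minimal sketch of that entity-blanking idea using compromise; the real add-on's question generation is more involved, and the one-entity-per-sentence rule here is a simplification.

```javascript
import nlp from 'compromise';

// Blank out one recognized entity per sentence to produce a fill-in-the-blank
// question of the form { question, answer }.
function makeQuestions(text) {
  const doc = nlp(text);
  return doc
    .sentences()
    .out('array')
    .map((sentence) => {
      // topics() collects compromise's named entities (people, places, organizations).
      const entities = nlp(sentence).topics().out('array');
      if (entities.length === 0) return null;
      const answer = entities[0];
      return {
        question: sentence.replace(answer, '_____'),
        answer,
      };
    })
    .filter(Boolean);
}
```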
Memory Master for Confluence - AI powered quizzing
Memory Master helps you master any content with AI powered quizzes.
['Pankaj Kumar']
[]
['compromise', 'forge', 'javascript', 'node.js']
43
10,045
https://devpost.com/software/covid-update
Inspiration Many people are worried about the uncertainties surrounding COVID-19. Get a quick glance at the situation so that you can plan tasks accordingly. What it does Gives you the latest COVID-19 updates How I built it JIRA, Forge, JS What's next for Covid Update A similar app for Confluence. Built With forge javascript jira
Covid Update
Worried about scheduling tasks with the uncertainty around COVID-19? Get the latest updates so that you know how to plan out your tasks.
['Maansi Srivastava']
[]
['forge', 'javascript', 'jira']
44
10,045
https://devpost.com/software/stock-tracker-dz27x3
GIF Inspiration This stock app was initially inspired by our side hobby of investing and just generally following the stock market. We thought it would be cool to see stock prices in real time at work. Once we started working on the project though, we realized that there are actually many practical use cases for a stock tracker app. The biggest use case would be for journalists and people working in the finance field. News articles about stocks often have a real-time update macro that lists the stock price the news article is talking about. Similarly, this macro enables journalists and finance workers to easily spin up a similar feature in the Confluence documents that they use in their workspace. What it does The Stock Ticker uses the Yahoo Finance API to pull stock information so it can easily be displayed within Confluence. Using the Forge infrastructure, all one has to do to display information is input their desired stock symbol. Then general information for that stock (such as price, asset profile, and price changes in both dollar and percentage amounts) will appear as a macro. How we built it The macro app is built on three states: input, success, and fail. Input is the text box screen. Success is where the correct stock is displayed. Fail is when the stock symbol does not exist. Within the Success screen, we were able to fetch the Yahoo API using Forge API functions. The JSON data is then stored within a TypeScript interface and is called upon when the UI declares the different individual elements (a sketch of this follows below). Challenges we ran into It was very tough to work without React components and CSS styling, as they are not supported by the Forge API. Therefore, we had to find an alternative way to make styling changes (such as font size, color, and font-weight). Our solution for that was creating text SVGs from scratch. They may generally be used for icons and logos, but SVGs behave very similarly to HTML + CSS styling. In the end, we were able to import the SVG as an Image, which is one of the components within the Forge API. What we learned This section heavily overlaps with the previous section because our biggest challenge forced us to learn SVGs and how they work. Once we got the hang of it, it is basically like playing around with CSS: doing minor adjustments via forge tunnel till the user interface looks right. Another learning process was getting used to React components within TypeScript, as we had not experienced this type of development before. Accomplishments that we're proud of This section also overlaps with both of the previous sections. We were pretty proud of the way it turned out despite all the limitations within the Forge UI. Using SVGs to change the colors of price changes was a big deal, as that is essential within a stock application. What's next for Stock Tracker This was a pretty cool small project to do. Since we only just started our experience with SVGs, we would like to add in a chart for the stock if that is possible in the future. Also, we could implement a live update feature for the stock. Built With confluence forge react typescript Try it out github.com
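A rough JavaScript sketch of the fetch-then-SVG flow (the real macro is written in TypeScript); the Yahoo quote endpoint, its response field names and the async useState initializer are assumptions based on the description above, not verified details of the app.

```javascript
import ForgeUI, { render, Macro, Image, useState } from '@forge/ui';
import api from '@forge/api';

// Public Yahoo Finance quote endpoint as described in the write-up (assumed).
const quoteUrl = (symbol) =>
  `https://query1.finance.yahoo.com/v7/finance/quote?symbols=${symbol}`;

// Render the price change as red/green text by building an SVG string and
// passing it to the Forge UI Image component (Forge UI has no CSS styling).
const priceSvg = (price, change) => {
  const color = change >= 0 ? 'green' : 'red';
  const svg = `<svg xmlns="http://www.w3.org/2000/svg" width="260" height="40">
    <text x="0" y="28" font-size="24" font-weight="bold" fill="${color}">
      $${price.toFixed(2)} (${change >= 0 ? '+' : ''}${change.toFixed(2)})
    </text></svg>`;
  return `data:image/svg+xml;utf8,${encodeURIComponent(svg)}`;
};

const App = () => {
  // Fetch the quote once when the macro renders (symbol hard-coded for the sketch).
  const [quote] = useState(async () => {
    const res = await api.fetch(quoteUrl('AAPL'));
    const data = await res.json();
    return data.quoteResponse.result[0];
  });
  return (
    <Image
      src={priceSvg(quote.regularMarketPrice, quote.regularMarketChange)}
      alt="Stock price"
    />
  );
};

export const run = render(<Macro app={<App />} />);
```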
Stock Tracker
Confluence Macro to track stocks in real time.
['Siddhesvar Kannan', 'Charles Chung']
[]
['confluence', 'forge', 'react', 'typescript']
45
10,045
https://devpost.com/software/link-shortener-macro
Shortened links in Confluence page Link shortener macro settings Forge link shortener for Confluence Bit.ly dashboard Inspiration I wanted an easier way to shorten links from inside a Confluence page. Shortened links (using Bit.ly and similar tools) help with analytics as well. I have used the bit.ly API to shorten the links. What it does This macro shortens any URL given by the user. It uses the bit.ly URL shortener API for shortening (a sketch of the call follows below). The link analytics (click stats) can be accessed on the bit.ly dashboard. How I built it I built it using Atlassian Forge. The application is coded in Node.js/TypeScript. The macro is deployed on a Confluence Cloud site. Accomplishments that I'm proud of I was able to achieve URL shortening as well as analytics using this macro. The user can also edit the link provided. What I learned I learned about the Forge API and UI elements. Forge helped me build applications for Atlassian products in a short period. What's next for Link Shortener Macro I want to integrate other URL shortener APIs and also deploy this macro for JIRA and other platforms as well. Built With atlassian bit.ly confluence forge node.js typescript Try it out bitbucket.org atlas-maker.atlassian.net
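A minimal sketch of the shortening call against the Bitly v4 API, assuming the access token is supplied through a Forge environment variable named BITLY_TOKEN.

```javascript
import api from '@forge/api';

// Shorten a URL with the Bitly v4 API; click stats remain visible on the
// Bitly dashboard, as described above.
export async function shorten(longUrl) {
  const response = await api.fetch('https://api-ssl.bitly.com/v4/shorten', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.BITLY_TOKEN}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ long_url: longUrl }),
  });
  const data = await response.json();
  return data.link; // e.g. a bit.ly short link for display in the macro
}
```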
Link Shortener Macro
Confluence macro to shorten links easily. This uses Bit.ly API for link shortening. The links can be edited using the macro settings. Link analytics (click stats) are available in bitly dashboard.
['Jayshree Anandakumar']
[]
['atlassian', 'bit.ly', 'confluence', 'forge', 'node.js', 'typescript']
46
10,045
https://devpost.com/software/codeflow
TEST Built With javascript
TEST
TEST
['Harry Banda']
[]
['javascript']
47
10,045
https://devpost.com/software/get-latest-tech-news-in-jira
Inspiration I love reading technical articles on everything currently happening in the world of tech. This is the reason I came up with this app, so that everyone using JIRA can get the latest trending articles directly in the issue panel and take a short break in the middle of their work. What it does This Forge app allows users to get the latest trending news from the world of technology, directly from the JIRA issue panel at the click of a button. How I built it Using Forge UI and APIs. Accomplishments that I'm proud of A working app. An app I will use myself. What I learned Forge Atlassian products FaaS What's next for Get latest Tech news in JIRA Integrate a Trello plugin to give users an option to copy the fetched tech articles to their Trello account. Built With forge javascript Try it out github.com
Latest Tech news in JIRA
Developers love reading the latest tech news. This app allows them to get the latest news on what's happening in the tech world directly in JIRA, with a short description and the article link.
['Maansi Srivastava']
[]
['forge', 'javascript']
48
10,045
https://devpost.com/software/date-fact
Inspiration A small step towards cheering up your daily standup meetings! What it does It returns a fact about the day. How I built it Forge, JIRA, JavaScript What's next for Date Fact Extend it to Confluence Built With forge javascript jira
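The description does not say where the facts come from; as an illustrative stand-in, the public Numbers API can return a date fact as plain text, which a Forge panel could then display:

```typescript
import api from '@forge/api';

// Fetch a fact about today's date from the Numbers API (a stand-in source, not necessarily the app's).
async function todaysDateFact(): Promise<string> {
  const now = new Date();
  const res = await api.fetch(`http://numbersapi.com/${now.getMonth() + 1}/${now.getDate()}/date`);
  return res.text();
}
```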
Date Fact
Cheer up your Jira Tickets by getting a random fact about the significance of the day. It may even make your ticket seem insignificant in the grand scheme of things!!
['Maansi Srivastava']
[]
['forge', 'javascript', 'jira']
49
10,045
https://devpost.com/software/random-inspiration
Inspiration I use Trello a lot - for both personal and business use cases. I also love reading inspirational quotes to keep me going through that huge list I need to work through. I thought - why not combine both? What it does Once enabled, the Trello Power-Up displays random inspirational quotes whenever you open any card. Be inspired and work that list! How I built it I built the server using Python Flask, though Node.js would also easily do. I exposed the endpoint through ngrok, which seemed great for development. I used the tutorials provided by Atlassian to build on the Trello platform and consume its APIs for various purposes. Challenges I ran into The provided tutorials were in Node.js only and I am more acquainted with Python, so I had to translate the code - and to my happiness, it all worked smoothly in the end! Accomplishments that I'm proud of This is the first time I have made a Power-Up for Trello, and now that it's done I am very excited to keep building! What I learned I learned a lot about the entire Trello ecosystem, how Power-Ups work, and how to harness their capabilities to build something awesome. What's next for Random Inspiration The main goal of this project was to open up opportunities for others to use it as a starting point for building their own Trello Power-Ups using the Flask framework. It can be considered a template and can easily be modified for many use cases. Built With flask python trello Try it out github.com
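Although the server here is Flask, a Trello Power-Up's capability handlers run as client-side JavaScript served by that server. A hedged, TypeScript-flavoured sketch of that client piece, with an illustrative quote list and the capability name taken from Trello's Power-Up client library:

```typescript
declare const TrelloPowerUp: any; // provided by Trello's power-up client script loaded on the page

// Illustrative quotes; the real Power-Up's quote source is not shown here.
const QUOTES = [
  'The secret of getting ahead is getting started.',
  'It always seems impossible until it is done.',
];

TrelloPowerUp.initialize({
  // Show a random quote on the back of any card the user opens.
  'card-detail-badges': async () => {
    const quote = QUOTES[Math.floor(Math.random() * QUOTES.length)];
    return [{ title: 'Random Inspiration', text: quote }];
  },
});
```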
Random Inspiration
Be inspired by thoughts from a great mind as you work on that great list of yours!
['Hannan Satopay']
[]
['flask', 'python', 'trello']
50
10,045
https://devpost.com/software/test-p8i92y
Logging in (JWT token sent by Flask API is stored in localStorage in the client's browser) Aneri Thakkar enters the room Jayesh Thakkar enters the room Default bidder is chosen to be Aneri Thakkar, who is assigned a bid of 125 points. Third player plays a card Player plays a card Bidding process begins Home page The bidding process ends (after a bid of 250 is sent / the bidding timer runs out), starting the trump suit selection process Second player plays a card Main gameplay begins Player views their cards The trump suit selection process ends, starting the partner card selection process Inspiration The inspiration for Triple Spades was that there was no other application currently on the market that allowed users to play the card game of triple spades in real time, so we decided to build one ourselves. What it does Triple Spades allows users to play the Indian card game of Triple Spades completely online and in real time using SocketIO, keeps track of game outcomes in a scoreboard using MongoDB, and allows users to register/log in using our Flask API with JWTs. How I built it For the backend, we made a REST API using the Python framework Flask, with Flask-SocketIO for WebSocket connections and PyMongo to connect to our MongoDB database. For the frontend, we used the Angular framework with TypeScript, HTML, and CSS, as well as the Bootstrap UI framework. We also used SocketIO in Angular to connect to the server's WebSocket and trigger and listen to events. Challenges I ran into We ran into many challenges, most being little bugs where the many elements of the game (bidding, trump card selection, partner card selection, gameplay, end game) sometimes conflicted with each other due to bugs in the code, malfunctioning sockets, etc. However, we eventually managed to overcome these bugs and came out with more knowledge about SocketIO in general. Additionally, we ran into some issues trying to implement real-time user statuses showing whether certain users are currently logged in or not, but decided to scrap the feature as it is not integral to the gameplay process in and of itself. Accomplishments that I'm proud of I am proud that we were able to accomplish such a large project and pull through with all the little details and features involved. There are many parts to the game of Triple Spades, and I am proud to say that we have implemented every single one of them in a bug-free, thorough manner. Additionally, we were able to implement features not integral to the gameplay process, such as authentication and keeping track of games using a scoreboard. What I learned I learned many new technologies such as SocketIO and Flask, but on a broader scale, I learned how to design the structure of NoSQL databases, how to architect a SocketIO app, and how to create a real-time full-stack web application. What's next for Triple Spades Next, we are planning to add support for multiple rooms. Probably the biggest disadvantage our card game currently has is not being able to host multiple games simultaneously, as there is only one game room containing only 5 players: no more players can join and no new rooms can be created. Additionally, we are planning to add the ability to add friends and see your friends list, invite them to games, chat with them, etc., creating a deeper connection between the players in a game.
Built With angular.js bootstrap connect css3 flask forge heroku html5 jira mongodb pymongo python socket.io typescript Try it out triple-spades.atlassian.net github.com
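A small sketch of the Angular-side socket wrapper implied by the description above; the event names and payload shapes are hypothetical, and the v4-style socket.io-client import may differ from the version the project used:

```typescript
import { io, Socket } from 'socket.io-client';

// Wraps the game's real-time channel; 'play_card' / 'card_played' are assumed event names.
export class GameSocketService {
  private socket: Socket;

  constructor(serverUrl: string, jwt: string) {
    // Pass the JWT issued by the Flask API so the server can identify the player (auth option is v3+).
    this.socket = io(serverUrl, { auth: { token: jwt } });
  }

  playCard(card: { suit: string; rank: string }): void {
    this.socket.emit('play_card', card);
  }

  onCardPlayed(handler: (payload: { player: string; card: { suit: string; rank: string } }) => void): void {
    this.socket.on('card_played', handler);
  }
}
```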
Triple Spades
Triple Spades is a dynamic and strategic card game web application played with 5 players, which allows users to play the Indian card game of triple spades in real time, keep track of scores, and login
['Jayesh Thakkar', 'Mahesh Natamai', 'Hemang Dwivedi', 'Rutvij Thakkar', 'Nikhil Gupta']
[]
['angular.js', 'bootstrap', 'connect', 'css3', 'flask', 'forge', 'heroku', 'html5', 'jira', 'mongodb', 'pymongo', 'python', 'socket.io', 'typescript']
51
10,045
https://devpost.com/software/notification-assistant-for-jira-cloud
Notification Assistant for Jira (NAFJ) has powered Jira notifications for organizations of all sizes and industries for over 8 years. We used Codegeist as an opportunity to start bringing the power of NAFJ to Jira Cloud. What Notification Assistant for Jira does There's a frequent refrain that you should build a product to solve your own problems. We use Jira Service Desk Cloud for our project tracking and support system. A common pain point we had was that we would attach a file to a support ticket, and then the customer would have to go to our Service Desk, log in, then click and download the file. Everyone kept asking: why isn't the file included in the email? And this is exactly what we built for Codegeist. With Notification Assistant for Jira Cloud, you can now have attachments in your customer notifications. How we built Notification Assistant for Jira The app was built using Atlassian Connect, TypeScript, Serverless, AWS, SQS, and SES. When Forge gets a bit more mature, we plan to switch some of our stack to support that. Challenges we ran into Because we're building the foundation of a much larger application, we had to invest a lot of time in various infrastructure components that are not necessary for a Codegeist-sized MVP. This includes a robust webhook retry-handling system and an architecture that can handle spikes in usage to ensure our customers always get the best experience. Accomplishments that we're proud of From the first release, we support: Jira Software Classic Jira Software Next-gen Jira Service Desk Classic Jira Service Desk Next-gen What's next for Notification Assistant for Jira Cloud We plan to extend the application to support more than Comment events. Go to the Atlassian Marketplace, scroll down, and click on Watch app to stay updated on what is happening. Built With amazon-web-services atlassian email jira typescript Try it out marketplace.atlassian.com
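A generic sketch of the queue-based intake described above (not the app's actual code): the webhook handler acknowledges the Jira event immediately and pushes it onto SQS, so a separate worker can resolve attachments and send the email via SES with retries. The queue URL environment variable is an assumption:

```typescript
import { SQSClient, SendMessageCommand } from '@aws-sdk/client-sqs';

const sqs = new SQSClient({ region: 'us-east-1' });
const QUEUE_URL = process.env.NOTIFICATION_QUEUE_URL!; // assumed configuration

// Accept the Jira "comment created" webhook quickly and hand the payload to the queue;
// delivery (and any retries) happens asynchronously in a worker that calls SES.
export async function handleCommentWebhook(event: { body: string }) {
  await sqs.send(
    new SendMessageCommand({
      QueueUrl: QUEUE_URL,
      MessageBody: event.body,
    })
  );
  return { statusCode: 202, body: 'queued' };
}
```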
Notification Assistant for Jira Cloud
Next Level Notifications for Jira Cloud
['Boris Berenberg']
[]
['amazon-web-services', 'atlassian', 'email', 'jira', 'typescript']
52
10,045
https://devpost.com/software/achievements
Start Commending your teammates now! View Commends on your Jira Issues Select team members to commend View Leaderboards and Recent Commends Inspiration We were inspired when we realised that there wasn't an easy way to commend a team member for their hard work and effort during a task. What it does On a Jira issue, users have the ability to commend a user's work by pressing the Commend button. Users can view a Leaderboard of team members to see who has the most commends from their team. On their profile, users can also see who they have recently commended and who has commended them. How we built it We built it using the Atlassian Connect Express framework, alongside a GraphQL API and a Sequelize-backed database. For the front end we used React. Challenges we ran into We struggled with initially setting up the app and working out how to host it correctly. Accomplishments that we're proud of We're proud of our functional app, which allows teams to begin praising their staff members through Jira. We think this app can be used to create a friendly, competitive edge in a company (prizes for staff with the most commends, etc.). What we learned We learned how to develop apps from the ground up, and we each got to try out developing both front-end and back-end code. What's next for Commendations We would like to integrate Commendations with AWS SES to send emails to users when they've been commended. We also want to introduce more achievements into the app, which can be created by Admins. Built With atlassian-connect connect graphql node.js react Try it out github.com commendations-for-jira.herokuapp.com gheister.atlassian.net
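A hypothetical slice of the GraphQL layer, assuming Apollo Server on top of the Connect Express app (the write-up only says "a GraphQL API"); the field names and the Sequelize model are illustrative:

```typescript
import { gql } from 'apollo-server-express';

// Assumed schema for recording a commendation against a Jira issue.
export const typeDefs = gql`
  type Commend {
    id: ID!
    issueKey: String!
    fromAccountId: String!
    toAccountId: String!
    createdAt: String!
  }
  type Mutation {
    commend(issueKey: String!, toAccountId: String!): Commend!
  }
`;

export const resolvers = {
  Mutation: {
    // context.accountId would come from the validated Connect JWT; Commend is a Sequelize model.
    commend: (_: unknown, args: { issueKey: string; toAccountId: string }, context: any) =>
      context.models.Commend.create({
        issueKey: args.issueKey,
        fromAccountId: context.accountId,
        toAccountId: args.toAccountId,
      }),
  },
};
```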
Commendations for Jira
We want to introduce a positive, friendly aspect to Jira Cloud through commendations, which users can strive to earn. Enable your teams to commend each other when good work has been done!
['Jeff Tomband', 'Thomas Ivall']
[]
['atlassian-connect', 'connect', 'graphql', 'node.js', 'react']
53
10,045
https://devpost.com/software/mesh-bayou
Inspiration Confluence is a great collaboration tool, particularly for sharing knowledge within teams. What it does Our tool (Mesh Bayou), which is built using the Connect framework, enhances Confluence by allowing teams to easily share, collaborate on, and consume 3D models. This is useful for companies working on video games, architecture, and computer-aided design, as well as educational institutions, e-commerce websites, etc. It is usually difficult to share 3D content among a group of people because, in most cases, only a select few have the 3D modelling/viewer software needed to open the document. Our tool allows anyone to view the 3D models from a browser or in Augmented Reality, on desktop (Windows, Mac, Linux) or mobile (Android, Apple), immediately and without installing any additional app. Another example where this is useful is education: especially in the post-COVID-19 world, a teacher can upload a 3D model and share it in Augmented Reality with students, who can access it remotely from their PCs, phones, or tablets. How I built it Using the Connect framework and web technologies Challenges I ran into Implementing the back end Accomplishments that I'm proud of It now works well What I learned How to optimize 3D models What's next for Mesh Bayou Adding support for animations Adding 3D annotations Adding audio User comments Built With amazon-web-services bootstrap confluence connect java javascript mxgraph python Try it out default.meshbayou.com
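The description does not say which viewer Mesh Bayou uses; Google's model-viewer web component is shown here purely as a stand-in for how a glTF model attached to a page could be rendered in the browser with an AR option:

```typescript
import '@google/model-viewer'; // registers the <model-viewer> custom element

// Mount a viewer for a glTF/GLB file (e.g. an attachment URL) inside a page container.
export function mountModelViewer(container: HTMLElement, modelUrl: string): void {
  const viewer = document.createElement('model-viewer');
  viewer.setAttribute('src', modelUrl);        // the 3D model to display
  viewer.setAttribute('ar', '');               // show the AR button on supported devices
  viewer.setAttribute('camera-controls', '');  // allow orbit/zoom with mouse or touch
  container.appendChild(viewer);
}
```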
Mesh Bayou
Easily share and consume 3D content in the browser or in Augmented Reality
[]
[]
['amazon-web-services', 'bootstrap', 'confluence', 'connect', 'java', 'javascript', 'mxgraph', 'python']
54
10,045
https://devpost.com/software/stock-ticker-7deza0
Stock ticker in Confluence page Stock ticker macro config settings Stock with positive change Stock with negative change Inspiration Stock Ticker helps you fetch live stock information right inside Confluence. This macro uses the Alpha Vantage API to fetch live stock trading information. The info is then dynamically converted into SVG and displayed. I have added different styles for stocks in positive and negative trends. This macro saves time for anyone who wants to keep track of stock prices inside Confluence. What it does It fetches live stock trading values (stock price, percentage change, etc.) using various APIs. This information is then converted into SVGs and encoded in base64 form for display. There are 2 different types of SVGs for positive and negative stock trends. How I built it I built this using Atlassian Forge. The macro is deployed to a Confluence Cloud site. The macro is coded using Node.js/TypeScript. Stock trading APIs like Alpha Vantage are used to fetch live trading values. I also created SVGs using the Sketch tool, and then used code to update them with stock prices. Challenges I ran into Forge UI elements did not allow adding custom CSS; however, I found a workaround by creating dynamic SVGs. What I learned I learnt to work with the Forge API and UI elements, and also how to deploy apps to various Atlassian platforms. What's next for Stock Ticker I want to display even more stats for the stock symbols, and also give the user the option to customize what information is displayed. Built With alphavantage atlassian confluence forge forgeui node.js typescript Try it out bitbucket.org atlas-maker.atlassian.net
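A condensed sketch of the SVG workaround and the Alpha Vantage call described above (not the macro's actual code); the field names follow Alpha Vantage's documented GLOBAL_QUOTE response, and the colors and layout are illustrative:

```typescript
import api from '@forge/api';

// Build a small ticker SVG and base64-encode it so it can be shown via an Image data URI.
function tickerSvg(symbol: string, price: string, changePercent: string): string {
  const color = changePercent.startsWith('-') ? '#FF5630' : '#36B37E'; // red for losses, green for gains
  const svg = `<svg xmlns="http://www.w3.org/2000/svg" width="240" height="40">
    <text x="8" y="25" font-family="sans-serif" font-size="16">${symbol} ${price}</text>
    <text x="160" y="25" font-family="sans-serif" font-size="16" fill="${color}">${changePercent}</text>
  </svg>`;
  return `data:image/svg+xml;base64,${Buffer.from(svg).toString('base64')}`;
}

// Fetch a live quote and return the rendered data URI for display.
async function quoteImage(symbol: string, apiKey: string): Promise<string> {
  const res = await api.fetch(
    `https://www.alphavantage.co/query?function=GLOBAL_QUOTE&symbol=${symbol}&apikey=${apiKey}`
  );
  const quote = (await res.json())['Global Quote'];
  return tickerSvg(symbol, quote['05. price'], quote['10. change percent']);
}
```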
Stock Ticker
Get information about stock values and their percentage changes right inside Confluence! This macro fetches live stock values using the Alpha Vantage API and displays them in a pretty way.
['Jayshree Anandakumar']
[]
['alphavantage', 'atlassian', 'confluence', 'forge', 'forgeui', 'node.js', 'typescript']
55
10,045
https://devpost.com/software/trophy-vx38p0
Ranking Add new action panel Users with ranks Rewards configuration panel Users configuration panel Actions configuration panel New action form (step 1) New action form (step 2) New action form (step 3) Private user's profile Public user's profile Inspiration The concept of work gamification has been generating a buzz for a while now, becoming increasingly popular. Our experience in introducing similar game-based solutions in student organizations let us see first-hand how gamification increases motivation and engagement, and changes attitudes towards work. We believe it to be a successful, cost-effective, and attractive way to boost both productivity and morale, and we hoped to introduce gamification in our software house. We were pushed to action by the implementation of task-time logging. We have noticed that gamification offers more detailed data about business processes and employee productivity, while bringing benefits to employees in the form of recognition with rewards, morale-boosting friendly rivalry, and increased pride in their work. The wish to implement gamification to improve our own work led us to creating Trophy. What it does Trophy is a gamification app designed for Jira Cloud which encourages motivation and productivity through friendly rivalry and rewards. For leaders and managers, Trophy provides aid in consolidating processes and rules deployed across teams or organizations. It can ease the introduction of new processes or the modification of existing ones. Furthermore, Trophy provides a new way to both encourage and measure efficiency. For employees, Trophy introduces a unique game-like layer of achievements for effectively performing everyday tasks, thus boosting results. Moreover, it can increase employee satisfaction and pride in their work. It also rewards loyalty and helps build attachment to the company. Through the form of a game, Trophy inspires a community of employees involved in company life. At the same time, it takes away negative perceptions of Jira actions as mundane. With Trophy, players can: score points for specific actions in Jira. receive points directly from a Jira administrator as a special recognition. be awarded ranks after crossing point thresholds. Ranks can be tailored to your company profile. You can create themes - laid back, like Wizards or Knights, or formal, like Expert. be awarded achievements for excelling in specific actions. view a global ranking of users, including points and ranks. view public profiles of other players, including ranking position, points, ranks, achievements, points history chart, and activity history. view their personal Trophy profile, including points, ranks, achievements - current and in progress, activity history, and points history chart. subscribe to native web browser notifications on desktop about their Trophy progress. How I built it In terms of creating the application, after brainstorming during an internal hackathon, we combined the most valuable ideas from two teams into Trophy. Once we established the features that a gamification app should offer to best suit the individual needs of any organization, we set out to create a PoC version to present as a pitch. After receiving feedback from business, management, UX, marketing, and technical teams, we created the ready-to-deploy version of the app, driven by the huge amount of enthusiasm for Trophy we have seen from our initial audience.
In terms of implementing the application, it can be divided into three parts: the server side, the frontend, and a PostgreSQL database. The server side was created with Java, Spring Boot, Atlassian Connect, Firebase Cloud Messaging, and Amazon S3. In the frontend, React, TypeScript, and AtlasKit UI were used. We used the benefits of hot module replacement in JavaScript and hot swap in Java, which made writing code much more efficient and enjoyable. During development, ngrok was used to make the app available in the cloud. The final deployment took place on the Heroku platform. The entire development process was managed in a next-gen Jira project integrated with Confluence. Challenges I ran into Developing Trophy provided us with both creative and technical challenges, such as: accommodating both business and technical aspects to create a flexible and coherent tool for both managers and employees. designing an optimal user interface, as it is especially crucial to us that Trophy is visually appealing and vibrant, and at the same time intuitive. defining actions so they could be scored and rewarded with achievements. We had to track the actions of individual users in Jira and provide configuration flexible enough for each company to adapt the game to their needs, but intuitive and accessible enough that a non-technical user can administer it. working under a strict deadline, made more challenging by the small size of our team. prioritising and picking a core of the most important features for the PoC application, and planning a roadmap and app versions for the future. overhauling our way of working and overcoming the obstacles in communication that come with remote work, imposed on us by the COVID-19 pandemic. Trophy is our first completely remotely developed project. Accomplishments that I'm proud of We are proud that we have created a user-experience-oriented application that non-invasively uses processes in an organization to provide a wide range of benefits, both to the organization and its employees. In terms of development, we consider our accomplishments to be: vast elasticity and freedom of configuration, letting users tailor various gamification aspects to their precise needs, from scoring actions, through custom ranks reflecting the company profile, to creating achievements. push notifications that further boost dynamic interaction with users and improve the link between the Jira user interface and user activity. a visually attractive and intuitive interface design. Inspired by Jira Cloud Next-gen, it is neat and practical while offering a wide range of actions. As Trophy is the first application that we have created from idea to release, having overcome all challenges, we are immensely proud of it and are looking forward to further developing and perfecting the app. What I learned Developing Trophy was full of new experiences for us. This is our first app for Jira Cloud! It was an extremely valuable learning experience that allowed us to greatly advance our skills and learn new technologies and solutions.
Thanks to Trophy, we explored: development for Jira Cloud; the integration possibilities offered by Jira Cloud; Java 11, as Trophy is our first project in this technology; the implementation of web push notifications; the use of AtlasKit in creating modern and user-friendly applications; and Next-gen Jira projects. In terms of the process of developing an application from start to finish, we have: polished our management and organizational skills while creating our action plan, meaning the roadmap, backlog, and versions. gained insight into all aspects of app development. learned more about the importance of the iterative nature of software implementation - thanks to this, despite dropping some features from the core version of the app, we have provided an application that offers the key solutions. found out that remote work can be as effective as regular work - a source of optimism during the uncertain times of the COVID-19 pandemic. improved our teamwork and communication skills, as we had not worked together in a team prior to this project. What's next for Trophy As Trophy is full of potential, and we are full of motivation and zeal, there is plenty on our roadmap. We plan to steadily introduce new features that will enhance the Trophy experience: teams - creating teams of players, setting intra-team goals, action scores, achievements, and rankings; team leaders will be able to create a custom Trophy game for their team. seasons - cyclic resets of point tallies. cups - a new type of reward. virtual currency and a prize catalog players can choose from. gamification onboarding of new employees. missions - a mechanism to support the implementation of individual and team goals. updating rankings in real time. integration with other ecosystem applications, such as Bitbucket and Service Desk. integration of rewards with Slack - letting users receive Slack messages about their or their coworkers' achievements and badges. Built With amazon-web-services atlaskit firebase heroku java postgresql react typescript Try it out app-trophy.herokuapp.com
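A rough browser-side sketch of the web push subscription mentioned above, written against the current modular Firebase SDK (the project may well have used an older SDK version; the config, VAPID key, and backend hand-off are placeholders):

```typescript
import { initializeApp, FirebaseOptions } from 'firebase/app';
import { getMessaging, getToken, onMessage } from 'firebase/messaging';

// Subscribe the desktop browser to Trophy progress notifications via Firebase Cloud Messaging.
export async function subscribeToTrophyUpdates(config: FirebaseOptions, vapidKey: string): Promise<string> {
  const app = initializeApp(config);
  const messaging = getMessaging(app);

  // The registration token would be sent to the Spring Boot backend, which targets it
  // through FCM whenever the player's points, ranks, or achievements change.
  const token = await getToken(messaging, { vapidKey });

  onMessage(messaging, (payload) => {
    console.log('Trophy update received:', payload.notification?.title);
  });

  return token;
}
```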
Trophy - gamification for Jira
Boost motivation and encourage productivity through friendly gamification in Jira Cloud.
['Sławomir Jezierski', 'Maciej Dziadyk', 'Mateusz O.', 'Marta Szmyd', 'Jarosław Morawski', 'Justyna Sroczyńska', 'Michał Bizoń', 'Paweł Panek']
[]
['amazon-web-services', 'atlaskit', 'firebase', 'heroku', 'java', 'postgresql', 'react', 'typescript']
56
10,045
https://devpost.com/software/sketch2code-trello-power-up
Sample Sketch2Code Inspiration What more could we accomplish if the time to test an idea was zero? I've been in awe of what computer vision can accomplish and how it can bring about positive change in our business processes. One such area is UI prototyping. Sketch2Code made me rethink the efficiency of existing accepted process by demonstrating the utility of AI and computer vision in producing prototypes for user testing. I got the idea that integration with tools such as Trello would make it readily accessible and fit seamlessly in the users' collaboration space while also making them far more efficient. What it does Sketch2Code Power up allows one to transform a UI sketch into a working web interface within seconds. If you have sketch anywhere, just click a photo and add it to your Trello card as attachment or draw it directly there with the help of inbuilt sketch editor. After this, you can simply select any of your attachments and the Sketch2Code will load the HTML of it. How we built it Trello Power Ups Trello Rest API Microsoft Azure Cloud and Customvision.ai Challenges we ran into Understanding how Trello Power Ups are designed and figuring out how to handle interactions within nested iframes was a bit of a challenge. However debugging was easy and the documents was quite clear. Accomplishments that we're proud of Training a custom vision model that performs well for a very small dataset of about 140 images with 10 classification labels. Designing an intuitive user experience with Trello power up so that it will improve productivity. What we learned Atlassian's Trello Power Up platform Azure Cloud What's next for Sketch2Code - Trello Power Up Real time multi-party collaboration on Sketches (Google docs for UI sketches). Faster code generation. Better HTML generation with more training data. Built With javascript sketch2code trello
Sketch2Code - Trello Power Up
A new way to do UI prototyping and user testing! Transform UI sketches to HTML Web pages within seconds.
['Piyush Agrawal', 'Shashwat Gulyani']
[]
['javascript', 'sketch2code', 'trello']
57