Upload 18 files
Files changed:
- .DS_Store +0 -0
- .gitattributes +1 -0
- .gitignore +2 -0
- CODE_OF_CONDUCT.md +75 -0
- LICENSE +21 -0
- README.md +221 -19
- Windows_guide.md +58 -0
- _config.yml +1 -0
- app.py +120 -0
- detect_mask_image.py +105 -0
- detect_mask_video.py +148 -0
- mask_detector.h5 +3 -0
- mask_detector.model +3 -0
- model2onnx.py +32 -0
- plot.png +0 -0
- requirements.txt +16 -3
- search.py +58 -0
- test_image.jpg +3 -0
- train_mask_detector.py +169 -0
.DS_Store
ADDED
Binary file (8.2 kB)
.gitattributes
CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+test_image.jpg filter=lfs diff=lfs merge=lfs -text
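The added line registers test_image.jpg with Git LFS, so the repository stores only a small pointer file while the actual JPEG lives in LFS storage. A line like this is typically produced by running `git lfs track "test_image.jpg"` (with the git-lfs extension installed) and then committing the updated .gitattributes.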
.gitignore
ADDED
@@ -0,0 +1,2 @@
# Python virtual environments
venv
CODE_OF_CONDUCT.md
ADDED
@@ -0,0 +1,75 @@
# Contributor Covenant Code of Conduct

## Our Pledge

In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.

## Our Standards

Examples of behavior that contributes to creating a positive environment include:

* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members

Examples of unacceptable behavior by participants include:

* The use of sexualized language or imagery and unwelcome sexual attention or advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a professional setting

## Our Responsibilities

Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.

Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.

## Scope

This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.

## Enforcement

Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at chandrikadeb7@gmail.com. All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.

Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.

## Attribution

This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, available <a href="https://www.contributor-covenant.org/version/1/4/code-of-conduct.html">here</a>.

[homepage]: https://www.contributor-covenant.org

For answers to common questions about this code of conduct, see <a href="https://www.contributor-covenant.org/faq">this FAQ</a>.
LICENSE
ADDED
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2021 chandrikadeb7

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
README.md
CHANGED
@@ -1,19 +1,221 @@
(The 19 lines of the previous README are removed; their content is not shown in this view. The new README follows.)

<h1 align="center">Face Mask Detection</h1>

<div align= "center"><img src="https://github.com/Vrushti24/Face-Mask-Detection/blob/logo/Logo/facemaskdetection.ai%20%40%2051.06%25%20(CMYK_GPU%20Preview)%20%2018-02-2021%2018_33_18%20(2).png" width="200" height="200"/>
<h4>Face Mask Detection System built with OpenCV, Keras/TensorFlow using Deep Learning and Computer Vision concepts in order to detect face masks in static images as well as in real-time video streams.</h4>
</div>

[](https://github.com/chandrikadeb7/Face-Mask-Detection/issues)
[](https://github.com/chandrikadeb7/Face-Mask-Detection/network/members)
[](https://github.com/chandrikadeb7/Face-Mask-Detection/stargazers)
[](https://github.com/chandrikadeb7/Face-Mask-Detection/issues)
[](https://www.linkedin.com/in/chandrika-deb/)

## :point_down: Support me here!
<a href="https://www.buymeacoffee.com/chandrikadeb7" target="_blank"><img src="https://www.buymeacoffee.com/assets/img/custom_images/orange_img.png" alt="Buy Me A Coffee" style="height: 41px !important;width: 174px !important;box-shadow: 0px 3px 2px 0px rgba(190, 190, 190, 0.5) !important;-webkit-box-shadow: 0px 3px 2px 0px rgba(190, 190, 190, 0.5) !important;" ></a>

## :innocent: Motivation
Amid the ongoing COVID-19 pandemic, efficient face mask detection applications are in high demand for transportation means, densely populated areas, residential districts, large-scale manufacturers and other enterprises to ensure safety, yet few such applications exist. The absence of large datasets of __‘with_mask’__ images has made this task cumbersome and challenging.

:computer: [Dev Link]

<p align="center"><img src="https://github.com/chandrikadeb7/Face-Mask-Detection/blob/master/Readme_images/Screen%20Shot%202020-05-14%20at%208.49.06%20PM.png" width="700" height="400"></p>

## :warning: TechStack/framework used

- [OpenCV](https://opencv.org/)
- [Caffe-based face detector](https://caffe.berkeleyvision.org/)
- [Keras](https://keras.io/)
- [TensorFlow](https://www.tensorflow.org/)
- [MobileNetV2](https://arxiv.org/abs/1801.04381)

## :star: Features
Our face mask detector doesn't use any morphed masked images dataset and the model is accurate. Owing to the use of MobileNetV2 architecture, it is computationally efficient, thus making it easier to deploy the model to embedded systems (Raspberry Pi, Google Coral, etc.).

This system can therefore be used in real-time applications which require face-mask detection for safety purposes due to the outbreak of Covid-19. This project can be integrated with embedded systems for application in airports, railway stations, offices, schools, and public places to ensure that public safety guidelines are followed.

## :file_folder: Dataset
The dataset used can be downloaded here - [Click to Download](https://github.com/chandrikadeb7/Face-Mask-Detection/tree/master/dataset)

This dataset consists of __4095 images__ belonging to two classes:
* __with_mask: 2165 images__
* __without_mask: 1930 images__

The images used were real images of faces wearing masks. The images were collected from the following sources:

* __Bing Search API__ ([See Python script](https://github.com/chandrikadeb7/Face-Mask-Detection/blob/master/search.py))
* __Kaggle datasets__
* __RMFD dataset__ ([See here](https://github.com/X-zhangyang/Real-World-Masked-Face-Dataset))

## :key: Prerequisites

All the dependencies and required libraries are included in the file <code>requirements.txt</code> [See here](https://github.com/chandrikadeb7/Face-Mask-Detection/blob/master/requirements.txt)

## 🚀 Installation
1. Clone the repo
```
$ git clone https://github.com/chandrikadeb7/Face-Mask-Detection.git
```

2. Change your directory to the cloned repo
```
$ cd Face-Mask-Detection
```

3. Create a Python virtual environment named 'test' and activate it
```
$ virtualenv test
```
```
$ source test/bin/activate
```

4. Now, run the following command in your Terminal/Command Prompt to install the libraries required
```
$ pip3 install -r requirements.txt
```

## :bulb: Working

1. Open terminal. Go into the cloned project directory and type the following command:
```
$ python3 train_mask_detector.py --dataset dataset
```

2. To detect face masks in an image type the following command:
```
$ python3 detect_mask_image.py --image images/pic1.jpeg
```

3. To detect face masks in real-time video streams type the following command:
```
$ python3 detect_mask_video.py
```

## :key: Results

#### Our model gave 98% accuracy for Face Mask Detection after training via <code>tensorflow-gpu==2.5.0</code>

<a href="https://colab.research.google.com/drive/1AZ0W2QAHnM3rcj0qbTmc7c3fAMPCowQ1?usp=sharing"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

#### We got the following accuracy/loss training curve plot

## Streamlit app

Face Mask Detector webapp using Tensorflow & Streamlit

Run it with the following command:
```
$ streamlit run app.py
```

## Images

<p align="center">
<img src="Readme_images/1.PNG">
</p>
<p align="center">Upload Images</p>

<p align="center">
<img src="Readme_images/2.PNG">
</p>
<p align="center">Results</p>

## :clap: And it's done!
Feel free to mail me for any doubts/query
:email: chandrikadeb7@gmail.com

---

## Internet of Things Device Setup

### Expected Hardware
* [Raspberry Pi 4 4GB with a case](https://www.canakit.com/raspberry-pi-4-4gb.html)
* [5MP OV5647 PiCamera from Arducam](https://www.arducam.com/docs/cameras-for-raspberry-pi/native-raspberry-pi-cameras/5mp-ov5647-cameras/)

### Getting Started
* Setup the Raspberry Pi case and Operating System by following the Getting Started section on page 3 at `documentation/CanaKit-Raspberry-Pi-Quick-Start-Guide-4.0.pdf` or https://www.canakit.com/Media/CanaKit-Raspberry-Pi-Quick-Start-Guide-4.0.pdf
* With NOOBS, use the recommended operating system
* Setup the PiCamera
* Assemble the PiCamera case from Arducam using `documentation/Arducam-Case-Setup.pdf` or https://www.arducam.com/docs/cameras-for-raspberry-pi/native-raspberry-pi-cameras/5mp-ov5647-cameras/
* [Attach your PiCamera module to the Raspberry Pi and enable the camera](https://projects.raspberrypi.org/en/projects/getting-started-with-picamera/2)

### Raspberry Pi App Installation & Execution

> Run these commands after cloning the project

| Commands | Time to completion |
|----------|--------------------|
| sudo apt install -y libatlas-base-dev liblapacke-dev gfortran | 1 min |
| sudo apt install -y libhdf5-dev libhdf5-103 | 1 min |
| pip3 install -r requirements.txt | 1-3 mins |
| wget "https://raw.githubusercontent.com/PINTO0309/Tensorflow-bin/master/tensorflow-2.4.0-cp37-none-linux_armv7l_download.sh" | less than 10 secs |
| ./tensorflow-2.4.0-cp37-none-linux_armv7l_download.sh | less than 10 secs |
| pip3 install tensorflow-2.4.0-cp37-none-linux_armv7l.whl | 1-3 mins |

---

## :trophy: Awards
Awarded Runners Up position in [Amdocs Innovation India ICE Project Fair](https://www.amdocs.com/)

## :raising_hand: Cited by:

1. https://osf.io/preprints/3gph4/
2. https://link.springer.com/chapter/10.1007/978-981-33-4673-4_49
3. https://ieeexplore.ieee.org/abstract/document/9312083/
4. https://link.springer.com/chapter/10.1007/978-981-33-4673-4_48
5. https://www.researchgate.net/profile/Akhyar_Ahmed/publication/344173985_Face_Mask_Detector/links/5f58c00ea6fdcc9879d8e6f7/Face-Mask-Detector.pdf

## 👏 Appreciation

### Selected in [Devscript Winter Of Code](https://devscript.tech/woc/)
<img src="Readme_images/Devscript.jpeg" height=300 width=300>

### Selected in [Script Winter Of Code](https://swoc.tech/project.html)
<img src="Readme_images/winter.jpeg" height=300 width=300>

### Selected in [Student Code-in](https://scodein.tech/)
<img src="Readme_images/sci.jpeg" height=300 width=300>

## :+1: Credits
* [https://www.pyimagesearch.com/](https://www.pyimagesearch.com/)
* [https://www.tensorflow.org/tutorials/images/transfer_learning](https://www.tensorflow.org/tutorials/images/transfer_learning)

## :handshake: Contribution

#### Please read the Contribution Guidelines [here](https://github.com/chandrikadeb7/Face-Mask-Detection/blob/master/CONTRIBUTING.md)
Feel free to **file a new issue** with a respective title and description on the [Face-Mask-Detection](https://github.com/chandrikadeb7/Face-Mask-Detection/issues) repository. If you already found a solution to your problem, **I would love to review your pull request**!

## :handshake: Our Contributors

<a href="https://github.com/chandrikadeb7/Face-Mask-Detection/graphs/contributors">
<img src="https://contributors-img.web.app/image?repo=chandrikadeb7/Face-Mask-Detection" />
</a>

## :eyes: Code of Conduct

You can find our Code of Conduct [here](/CODE_OF_CONDUCT.md).

## :heart: Owner
Made with :heart: by [Chandrika Deb](https://github.com/chandrikadeb7)

## :eyes: License
MIT © [Chandrika Deb](https://github.com/chandrikadeb7/Face-Mask-Detection/blob/master/LICENSE)
Windows_guide.md
ADDED
@@ -0,0 +1,58 @@
<h1 align="center">Setting up Face Mask Detection project in Windows</h1>

<div align= "center">
<h4>This is a guide to set up the project on Windows.</h4>
</div>

## 🚀 Installation
1. Clone the repo
```
> git clone https://github.com/chandrikadeb7/Face-Mask-Detection.git
```

2. Change your directory to the cloned repo and create a Python virtual environment named 'env' by typing the following command in Command Prompt
```
> py -m venv env
```

3. Activate the virtual environment with
```
> .\env\Scripts\activate
```

4. Now, run the following command in your Command Prompt to install the libraries required
```
> pip3 install -r requirements.txt
```

<p align="center">
<img src="Readme_images/win_cmd.png">
</p>
<p align="center">Example</p>

## Running the project

1. Open terminal. Go into the cloned project directory and type the following command:
```
> python3 train_mask_detector.py --dataset dataset
```

2. To detect face masks in an image type the following command:
```
> python3 detect_mask_image.py --image images/pic1.jpeg
```

3. To detect face masks in real-time video streams type the following command:
```
> python3 detect_mask_video.py
```

4. To detect face masks in an image on the webapp type the following command:
```
> streamlit run app.py
```

## Readme

You can find our Readme [here](/README.md).
_config.yml
ADDED
@@ -0,0 +1 @@
theme: jekyll-theme-modernist
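This single line selects the jekyll-theme-modernist theme, presumably for a GitHub Pages rendering of the repository; it has no effect on the Python code.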
app.py
ADDED
@@ -0,0 +1,120 @@
import streamlit as st
from PIL import Image, ImageEnhance
import numpy as np
import cv2
import os
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input
from tensorflow.keras.preprocessing.image import img_to_array
from tensorflow.keras.models import load_model
import detect_mask_image

# Setting custom Page Title and Icon with changed layout and sidebar state
st.set_page_config(page_title='Face Mask Detector', page_icon='😷', layout='centered', initial_sidebar_state='expanded')


def local_css(file_name):
    """ Method for reading styles.css and applying necessary changes to HTML"""
    with open(file_name) as f:
        st.markdown(f'<style>{f.read()}</style>', unsafe_allow_html=True)


def mask_image():
    global RGB_img
    # load our serialized face detector model from disk
    print("[INFO] loading face detector model...")
    prototxtPath = os.path.sep.join(["face_detector", "deploy.prototxt"])
    weightsPath = os.path.sep.join(["face_detector",
        "res10_300x300_ssd_iter_140000.caffemodel"])
    net = cv2.dnn.readNet(prototxtPath, weightsPath)

    # load the face mask detector model from disk
    print("[INFO] loading face mask detector model...")
    model = load_model("mask_detector.h5")

    # load the input image from disk and grab the image spatial
    # dimensions
    image = cv2.imread("./images/out.jpg")
    (h, w) = image.shape[:2]

    # construct a blob from the image
    blob = cv2.dnn.blobFromImage(image, 1.0, (300, 300),
        (104.0, 177.0, 123.0))

    # pass the blob through the network and obtain the face detections
    print("[INFO] computing face detections...")
    net.setInput(blob)
    detections = net.forward()

    # loop over the detections
    for i in range(0, detections.shape[2]):
        # extract the confidence (i.e., probability) associated with
        # the detection
        confidence = detections[0, 0, i, 2]

        # filter out weak detections by ensuring the confidence is
        # greater than the minimum confidence
        if confidence > 0.5:
            # compute the (x, y)-coordinates of the bounding box for
            # the object
            box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
            (startX, startY, endX, endY) = box.astype("int")

            # ensure the bounding boxes fall within the dimensions of
            # the frame
            (startX, startY) = (max(0, startX), max(0, startY))
            (endX, endY) = (min(w - 1, endX), min(h - 1, endY))

            # extract the face ROI, convert it from BGR to RGB channel
            # ordering, resize it to 224x224, and preprocess it
            face = image[startY:endY, startX:endX]
            face = cv2.cvtColor(face, cv2.COLOR_BGR2RGB)
            face = cv2.resize(face, (224, 224))
            face = img_to_array(face)
            face = preprocess_input(face)
            face = np.expand_dims(face, axis=0)

            # pass the face through the model to determine if the face
            # has a mask or not
            (mask, withoutMask) = model.predict(face)[0]

            # determine the class label and color we'll use to draw
            # the bounding box and text
            label = "Mask" if mask > withoutMask else "No Mask"
            color = (0, 255, 0) if label == "Mask" else (0, 0, 255)

            # include the probability in the label
            label = "{}: {:.2f}%".format(label, max(mask, withoutMask) * 100)

            # display the label and bounding box rectangle on the output
            # frame
            cv2.putText(image, label, (startX, startY - 10),
                cv2.FONT_HERSHEY_SIMPLEX, 0.45, color, 2)
            cv2.rectangle(image, (startX, startY), (endX, endY), color, 2)
            RGB_img = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
mask_image()

def mask_detection():
    local_css("css/styles.css")
    st.markdown('<h1 align="center">😷 Face Mask Detection</h1>', unsafe_allow_html=True)
    activities = ["Image", "Webcam"]
    #st.set_option('deprecation.showfileUploaderEncoding', False)
    st.sidebar.markdown("# Mask Detection on?")
    choice = st.sidebar.selectbox("Choose among the given options:", activities)

    if choice == 'Image':
        st.markdown('<h2 align="center">Detection on Image</h2>', unsafe_allow_html=True)
        st.markdown("### Upload your image here ⬇")
        image_file = st.file_uploader("", type=['jpg'])  # upload image
        if image_file is not None:
            our_image = Image.open(image_file)  # making compatible to PIL
            im = our_image.save('./images/out.jpg')
            saved_image = st.image(image_file, caption='', use_column_width=True)
            st.markdown('<h3 align="center">Image uploaded successfully!</h3>', unsafe_allow_html=True)
            if st.button('Process'):
                st.image(RGB_img, use_column_width=True)

    if choice == 'Webcam':
        st.markdown('<h2 align="center">Detection on Webcam</h2>', unsafe_allow_html=True)
        st.markdown('<h3 align="center">This feature will be available soon!</h3>', unsafe_allow_html=True)
mask_detection()
detect_mask_image.py
ADDED
@@ -0,0 +1,105 @@
# USAGE
# python detect_mask_image.py --image images/pic1.jpeg

# import the necessary packages
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input
from tensorflow.keras.preprocessing.image import img_to_array
from tensorflow.keras.models import load_model
import numpy as np
import argparse
import cv2
import os

def mask_image():
    # construct the argument parser and parse the arguments
    ap = argparse.ArgumentParser()
    ap.add_argument("-i", "--image", required=True,
        help="path to input image")
    ap.add_argument("-f", "--face", type=str,
        default="face_detector",
        help="path to face detector model directory")
    ap.add_argument("-m", "--model", type=str,
        default="mask_detector.model",
        help="path to trained face mask detector model")
    ap.add_argument("-c", "--confidence", type=float, default=0.5,
        help="minimum probability to filter weak detections")
    args = vars(ap.parse_args())

    # load our serialized face detector model from disk
    print("[INFO] loading face detector model...")
    prototxtPath = os.path.sep.join([args["face"], "deploy.prototxt"])
    weightsPath = os.path.sep.join([args["face"],
        "res10_300x300_ssd_iter_140000.caffemodel"])
    net = cv2.dnn.readNet(prototxtPath, weightsPath)

    # load the face mask detector model from disk
    print("[INFO] loading face mask detector model...")
    model = load_model("mask_detector.h5")

    # load the input image from disk, clone it, and grab the image spatial
    # dimensions
    image = cv2.imread(args["image"])
    orig = image.copy()
    (h, w) = image.shape[:2]

    # construct a blob from the image
    blob = cv2.dnn.blobFromImage(image, 1.0, (300, 300),
        (104.0, 177.0, 123.0))

    # pass the blob through the network and obtain the face detections
    print("[INFO] computing face detections...")
    net.setInput(blob)
    detections = net.forward()

    # loop over the detections
    for i in range(0, detections.shape[2]):
        # extract the confidence (i.e., probability) associated with
        # the detection
        confidence = detections[0, 0, i, 2]

        # filter out weak detections by ensuring the confidence is
        # greater than the minimum confidence
        if confidence > args["confidence"]:
            # compute the (x, y)-coordinates of the bounding box for
            # the object
            box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
            (startX, startY, endX, endY) = box.astype("int")

            # ensure the bounding boxes fall within the dimensions of
            # the frame
            (startX, startY) = (max(0, startX), max(0, startY))
            (endX, endY) = (min(w - 1, endX), min(h - 1, endY))

            # extract the face ROI, convert it from BGR to RGB channel
            # ordering, resize it to 224x224, and preprocess it
            face = image[startY:endY, startX:endX]
            face = cv2.cvtColor(face, cv2.COLOR_BGR2RGB)
            face = cv2.resize(face, (224, 224))
            face = img_to_array(face)
            face = preprocess_input(face)
            face = np.expand_dims(face, axis=0)

            # pass the face through the model to determine if the face
            # has a mask or not
            (mask, withoutMask) = model.predict(face)[0]

            # determine the class label and color we'll use to draw
            # the bounding box and text
            label = "Mask" if mask > withoutMask else "No Mask"
            color = (0, 255, 0) if label == "Mask" else (0, 0, 255)

            # include the probability in the label
            label = "{}: {:.2f}%".format(label, max(mask, withoutMask) * 100)

            # display the label and bounding box rectangle on the output
            # frame
            cv2.putText(image, label, (startX, startY - 10),
                cv2.FONT_HERSHEY_SIMPLEX, 0.45, color, 2)
            cv2.rectangle(image, (startX, startY), (endX, endY), color, 2)

    # show the output image
    cv2.imshow("Output", image)
    cv2.waitKey(0)

if __name__ == "__main__":
    mask_image()
detect_mask_video.py
ADDED
@@ -0,0 +1,148 @@
# USAGE
# python detect_mask_video.py

# import the necessary packages
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input
from tensorflow.keras.preprocessing.image import img_to_array
from tensorflow.keras.models import load_model
from imutils.video import VideoStream
import numpy as np
import argparse
import imutils
import time
import cv2
import os

def detect_and_predict_mask(frame, faceNet, maskNet):
    # grab the dimensions of the frame and then construct a blob
    # from it
    (h, w) = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1.0, (300, 300),
        (104.0, 177.0, 123.0))

    # pass the blob through the network and obtain the face detections
    faceNet.setInput(blob)
    detections = faceNet.forward()

    # initialize our list of faces, their corresponding locations,
    # and the list of predictions from our face mask network
    faces = []
    locs = []
    preds = []

    # loop over the detections
    for i in range(0, detections.shape[2]):
        # extract the confidence (i.e., probability) associated with
        # the detection
        confidence = detections[0, 0, i, 2]

        # filter out weak detections by ensuring the confidence is
        # greater than the minimum confidence
        if confidence > args["confidence"]:
            # compute the (x, y)-coordinates of the bounding box for
            # the object
            box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
            (startX, startY, endX, endY) = box.astype("int")

            # ensure the bounding boxes fall within the dimensions of
            # the frame
            (startX, startY) = (max(0, startX), max(0, startY))
            (endX, endY) = (min(w - 1, endX), min(h - 1, endY))

            # extract the face ROI, convert it from BGR to RGB channel
            # ordering, resize it to 224x224, and preprocess it
            face = frame[startY:endY, startX:endX]
            if face.any():
                face = cv2.cvtColor(face, cv2.COLOR_BGR2RGB)
                face = cv2.resize(face, (224, 224))
                face = img_to_array(face)
                face = preprocess_input(face)

                # add the face and bounding boxes to their respective
                # lists
                faces.append(face)
                locs.append((startX, startY, endX, endY))

    # only make predictions if at least one face was detected
    if len(faces) > 0:
        # for faster inference we'll make batch predictions on *all*
        # faces at the same time rather than one-by-one predictions
        # in the above `for` loop
        faces = np.array(faces, dtype="float32")
        preds = maskNet.predict(faces, batch_size=32)

    # return a 2-tuple of the face locations and their corresponding
    # predictions
    return (locs, preds)

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-f", "--face", type=str,
    default="face_detector",
    help="path to face detector model directory")
ap.add_argument("-m", "--model", type=str,
    default="mask_detector.model",
    help="path to trained face mask detector model")
ap.add_argument("-c", "--confidence", type=float, default=0.5,
    help="minimum probability to filter weak detections")
args = vars(ap.parse_args())

# load our serialized face detector model from disk
print("[INFO] loading face detector model...")
prototxtPath = os.path.sep.join([args["face"], "deploy.prototxt"])
weightsPath = os.path.sep.join([args["face"],
    "res10_300x300_ssd_iter_140000.caffemodel"])
faceNet = cv2.dnn.readNet(prototxtPath, weightsPath)

# load the face mask detector model from disk
print("[INFO] loading face mask detector model...")
maskNet = load_model(args["model"])

# initialize the video stream and allow the camera sensor to warm up
print("[INFO] starting video stream...")
vs = VideoStream(src=0).start()
time.sleep(2.0)

# loop over the frames from the video stream
while True:
    # grab the frame from the threaded video stream and resize it
    # to have a maximum width of 400 pixels
    frame = vs.read()
    frame = imutils.resize(frame, width=400)

    # detect faces in the frame and determine if they are wearing a
    # face mask or not
    (locs, preds) = detect_and_predict_mask(frame, faceNet, maskNet)

    # loop over the detected face locations and their corresponding
    # predictions
    for (box, pred) in zip(locs, preds):
        # unpack the bounding box and predictions
        (startX, startY, endX, endY) = box
        (mask, withoutMask) = pred

        # determine the class label and color we'll use to draw
        # the bounding box and text
        label = "Mask" if mask > withoutMask else "No Mask"
        color = (0, 255, 0) if label == "Mask" else (0, 0, 255)

        # include the probability in the label
        label = "{}: {:.2f}%".format(label, max(mask, withoutMask) * 100)

        # display the label and bounding box rectangle on the output
        # frame
        cv2.putText(frame, label, (startX, startY - 10),
            cv2.FONT_HERSHEY_SIMPLEX, 0.45, color, 2)
        cv2.rectangle(frame, (startX, startY), (endX, endY), color, 2)

    # show the output frame
    cv2.imshow("Frame", frame)
    key = cv2.waitKey(1) & 0xFF

    # if the `q` key was pressed, break from the loop
    if key == ord("q"):
        break

# do a bit of cleanup
cv2.destroyAllWindows()
vs.stop()
mask_detector.h5
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:648ae7cbac452d4e52659926e4ef6fe6c3455e1bcc7caa527e65dfd37c89a73c
size 11546872
mask_detector.model
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d0b30e2c7f8f187c143d655dee8697fcfbe8678889565670cd7314fb064eadc8
size 11490448
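Both mask_detector.h5 and mask_detector.model are committed as Git LFS pointer files: each records only the LFS spec version, a SHA-256 object id, and the payload size (roughly 11 MB), and the actual model weights are fetched by the LFS client (for example via `git lfs pull`) rather than stored directly in the Git history.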
model2onnx.py
ADDED
@@ -0,0 +1,32 @@

# import the necessary packages
from tensorflow.keras.models import load_model, save_model
import argparse
import tf2onnx
import onnx

def model2onnx():
    # construct the argument parser and parse the arguments
    ap = argparse.ArgumentParser()
    ap.add_argument("-m", "--model", type=str,
        default="mask_detector.model",
        help="path to trained face mask detector model")
    ap.add_argument("-o", "--output", type=str,
        default='mask_detector.onnx',
        help="path to the output ONNX model")
    args = vars(ap.parse_args())

    # load the face mask detector model from disk
    print("[INFO] loading face mask detector model...")
    model = load_model(args["model"])
    onnx_model, _ = tf2onnx.convert.from_keras(model, opset=13)

    # make the batch dimension dynamic on both the input and the output
    onnx_model.graph.input[0].type.tensor_type.shape.dim[0].dim_param = '?'
    onnx_model.graph.output[0].type.tensor_type.shape.dim[0].dim_param = '?'

    onnx.save(onnx_model, args['output'])


if __name__ == "__main__":
    model2onnx()
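As a quick sanity check, the exported graph can be run with ONNX Runtime. The sketch below is not part of this commit and rests on a few assumptions: `onnxruntime` is installed, `mask_detector.onnx` was produced by model2onnx.py above, the graph keeps the Keras NHWC input of shape (N, 224, 224, 3), and the output order (mask, withoutMask) matches the training labels used elsewhere in this repo.
```
# Sketch only (assumes onnxruntime is installed and mask_detector.onnx was
# produced by model2onnx.py); not part of the committed code.
import cv2
import numpy as np
import onnxruntime as ort
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input

session = ort.InferenceSession("mask_detector.onnx")
input_name = session.get_inputs()[0].name

# load an image, convert BGR -> RGB, and resize to the 224x224 training size
face = cv2.cvtColor(cv2.imread("test_image.jpg"), cv2.COLOR_BGR2RGB)
face = cv2.resize(face, (224, 224)).astype("float32")
face = preprocess_input(face)          # scale pixel values to [-1, 1]
face = np.expand_dims(face, axis=0)    # add the batch dimension

# run the graph; the first output holds the (mask, withoutMask) scores
(mask, without_mask) = session.run(None, {input_name: face})[0][0]
print("Mask" if mask > without_mask else "No Mask", mask, without_mask)
```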
plot.png
ADDED
Binary file (accuracy/loss training curve plot image)
requirements.txt
CHANGED
@@ -1,3 +1,16 @@
(The 3 previous lines are removed; their content is not shown in this view. The new requirements follow.)

tensorflow>=2.19.0,<3.0

# If you want standalone Keras, it must be 3.x:
# keras>=3.5.0,<4.0

imutils>=0.5.4
numpy>=1.24.3,<2.0
opencv-python>=4.11.0.46
matplotlib>=3.7.1,<4.0
scipy>=1.9.3,<2.0
scikit-learn>=1.0.2,<2.0
pillow>=9.4.0,<10.0
streamlit>=1.25.0,<2.0
onnx>=1.13.1,<2.0
tf2onnx>=1.14.1,<2.0
search.py
ADDED
@@ -0,0 +1,58 @@
from requests import exceptions
import argparse
import requests
import cv2
import os

ap = argparse.ArgumentParser()
ap.add_argument("-q", "--query", required=True,
    help="search query to search Bing Image API for")
ap.add_argument("-o", "--output", required=True,
    help="path to output directory of images")
args = vars(ap.parse_args())

API_KEY = "d8982f9e69a4437fa6e10715d1ed691d"
MAX_RESULTS = 500
GROUP_SIZE = 50
URL = "https://api.cognitive.microsoft.com/bing/v7.0/images/search"
EXCEPTIONS = set([IOError, FileNotFoundError,
    exceptions.RequestException, exceptions.HTTPError,
    exceptions.ConnectionError, exceptions.Timeout])

term = args["query"]
headers = {"Ocp-Apim-Subscription-Key": API_KEY}
params = {"q": term, "offset": 0, "count": GROUP_SIZE}
print("[INFO] searching Bing API for '{}'".format(term))
search = requests.get(URL, headers=headers, params=params)
search.raise_for_status()
results = search.json()
estNumResults = min(results["totalEstimatedMatches"], MAX_RESULTS)
print("[INFO] {} total results for '{}'".format(estNumResults,
    term))
total = 0

for offset in range(0, estNumResults, GROUP_SIZE):
    print("[INFO] making request for group {}-{} of {}...".format(
        offset, offset + GROUP_SIZE, estNumResults))
    params["offset"] = offset
    search = requests.get(URL, headers=headers, params=params)
    search.raise_for_status()
    results = search.json()
    print("[INFO] saving images for group {}-{} of {}...".format(
        offset, offset + GROUP_SIZE, estNumResults))
    for v in results["value"]:
        try:
            print("[INFO] fetching: {}".format(v["contentUrl"]))
            r = requests.get(v["contentUrl"], timeout=30)
            ext = v["contentUrl"][v["contentUrl"].rfind("."):]
            p = os.path.sep.join([args["output"], "{}{}".format(
                str(total).zfill(8), ext)])
            f = open(p, "wb")
            f.write(r.content)
            f.close()
        except Exception as e:
            if type(e) in EXCEPTIONS:
                print("[INFO] skipping: {}".format(v["contentUrl"]))
                continue
        image = cv2.imread(p)
        if image is None:
            print("[INFO] deleting: {}".format(p))
            os.remove(p)
            continue
        total += 1
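For reference, search.py is invoked with the two arguments defined above, for example `python3 search.py --query "face with mask" --output dataset/with_mask` (the query string and output directory here are only illustrative), and it requires a valid Bing Image Search subscription key in API_KEY.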
test_image.jpg
ADDED
Binary file (image, stored with Git LFS)
train_mask_detector.py
ADDED
@@ -0,0 +1,169 @@
# USAGE
# python train_mask_detector.py --dataset dataset

# import the necessary packages
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.layers import AveragePooling2D
from tensorflow.keras.layers import Dropout
from tensorflow.keras.layers import Flatten
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.optimizers.schedules import ExponentialDecay
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input
from tensorflow.keras.preprocessing.image import img_to_array
from tensorflow.keras.preprocessing.image import load_img
from tensorflow.keras.utils import to_categorical
from sklearn.preprocessing import LabelBinarizer
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from imutils import paths
import matplotlib.pyplot as plt
import numpy as np
import argparse
import os

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-d", "--dataset", required=True,
    help="path to input dataset")
ap.add_argument("-p", "--plot", type=str, default="plot.png",
    help="path to output loss/accuracy plot")
ap.add_argument("-m", "--model", type=str,
    default="mask_detector.model",
    help="path to output face mask detector model")
args = vars(ap.parse_args())

# initialize the initial learning rate, number of epochs to train for,
# and batch size
INIT_LR = 1e-4
EPOCHS = 20
BS = 32

# grab the list of images in our dataset directory, then initialize
# the list of data (i.e., images) and class labels
print("[INFO] loading images...")
imagePaths = list(paths.list_images(args["dataset"]))
data = []
labels = []

# loop over the image paths
for imagePath in imagePaths:
    # extract the class label from the filename
    label = imagePath.split(os.path.sep)[-2]

    # load the input image (224x224) and preprocess it
    image = load_img(imagePath, target_size=(224, 224))
    image = img_to_array(image)
    image = preprocess_input(image)

    # update the data and labels lists, respectively
    data.append(image)
    labels.append(label)

# convert the data and labels to NumPy arrays
data = np.array(data, dtype="float32")
labels = np.array(labels)

# perform one-hot encoding on the labels
lb = LabelBinarizer()
labels = lb.fit_transform(labels)
labels = to_categorical(labels)

# partition the data into training and testing splits using 80% of
# the data for training and the remaining 20% for testing
(trainX, testX, trainY, testY) = train_test_split(data, labels,
    test_size=0.20, stratify=labels, random_state=42)

# construct the training image generator for data augmentation
aug = ImageDataGenerator(
    rotation_range=20,
    zoom_range=0.15,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.15,
    horizontal_flip=True,
    fill_mode="nearest")

# load the MobileNetV2 network, ensuring the head FC layer sets are
# left off
baseModel = MobileNetV2(weights="imagenet", include_top=False,
    input_tensor=Input(shape=(224, 224, 3)))

# construct the head of the model that will be placed on top of the
# base model
headModel = baseModel.output
headModel = AveragePooling2D(pool_size=(7, 7))(headModel)
headModel = Flatten(name="flatten")(headModel)
headModel = Dense(128, activation="relu")(headModel)
headModel = Dropout(0.5)(headModel)
headModel = Dense(2, activation="softmax")(headModel)

# place the head FC model on top of the base model (this will become
# the actual model we will train)
model = Model(inputs=baseModel.input, outputs=headModel)

# loop over all layers in the base model and freeze them so they will
# *not* be updated during the first training process
for layer in baseModel.layers:
    layer.trainable = False

# compile our model
print("[INFO] compiling model...")
lr_schedule = ExponentialDecay(
    initial_learning_rate=INIT_LR,
    decay_steps=EPOCHS,  # or any number of steps you prefer
    decay_rate=0.5,      # halve the LR every decay_steps
    staircase=True       # if you want discrete drops
)

opt = Adam(learning_rate=lr_schedule)
model.compile(loss="binary_crossentropy", optimizer=opt,
    metrics=["accuracy"])

# train the head of the network
print("[INFO] training head...")
H = model.fit(
    aug.flow(trainX, trainY, batch_size=BS),
    steps_per_epoch=len(trainX) // BS,
    validation_data=(testX, testY),
    validation_steps=len(testX) // BS,
    epochs=EPOCHS)

# make predictions on the testing set
print("[INFO] evaluating network...")
predIdxs = model.predict(testX, batch_size=BS)

# for each image in the testing set we need to find the index of the
# label with corresponding largest predicted probability
predIdxs = np.argmax(predIdxs, axis=1)

# show a nicely formatted classification report
print(classification_report(testY.argmax(axis=1), predIdxs,
    target_names=lb.classes_))

# serialize the model to disk after training (Keras HDF5 format;
# save_format is no longer needed)
print("[INFO] serializing mask detector model...")
model.save("mask_detector.h5")
print("[INFO] model saved to mask_detector.h5")

# plot the training loss and accuracy
N = EPOCHS
plt.style.use("ggplot")
plt.figure()
plt.plot(np.arange(0, N), H.history["loss"], label="train_loss")
plt.plot(np.arange(0, N), H.history["val_loss"], label="val_loss")
plt.plot(np.arange(0, N), H.history["accuracy"], label="train_acc")
plt.plot(np.arange(0, N), H.history["val_accuracy"], label="val_acc")
plt.title("Training Loss and Accuracy")
plt.xlabel("Epoch #")
plt.ylabel("Loss/Accuracy")
plt.legend(loc="lower left")
plt.savefig(args["plot"])