# Secure_pixelation

*Generated from Hazel/python-project.*
Hiding faces with a mosaic filter has proven incredibly unsafe, especially in videos, because the algorithm isn't destructive: enough of the original information survives to reconstruct the face. However, if you black out the selected area, repopulate it with generative AI, and then pixelate it, the result still looks authentic but is 100% destructive, and thus safe.

I first realized that a normal mosaic algorithm isn't safe at all when I saw this project: https://github.com/KoKuToru/de-pixelate_gaV-O6NPWrI
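The idea above can be sketched in a few lines of NumPy. This is a simplified illustration, not the project's actual implementation: `inpaint` is a placeholder for a generative model such as LaMa, and the function names are hypothetical.

```python
import numpy as np

def pixelate(img, x, y, w, h, block=8):
    """Mosaic a region in place by averaging block-sized tiles."""
    region = img[y:y+h, x:x+w].astype(float)
    for by in range(0, h, block):
        for bx in range(0, w, block):
            tile = region[by:by+block, bx:bx+block]
            tile[...] = tile.mean(axis=(0, 1))  # flatten tile to its mean color
    img[y:y+h, x:x+w] = region.astype(img.dtype)

def secure_pixelate(img, x, y, w, h, inpaint, block=8):
    """Destructive variant: black out first so no original pixels survive,
    let a generative model repopulate the area, then mosaic it."""
    img[y:y+h, x:x+w] = 0             # 1. destroy the original data
    img = inpaint(img, (x, y, w, h))  # 2. repopulate, e.g. with LaMa (placeholder)
    pixelate(img, x, y, w, h, block)  # 3. pixelate for the familiar mosaic look
    return img
```

Because step 1 zeroes the region before anything else touches it, no amount of averaging across frames can recover the original face; the mosaic in step 3 only ever sees generated pixels.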
## Install

- create and activate a virtual environment
- install the local Python program with pip
- run `secure-pixelation`
```bash
# Step 1: Create and activate a virtual environment
python3.8 -m venv .venv
source .venv/bin/activate

# Step 2: Install the local Python program (add the -e flag for development)
pip install .

# Step 3: Run the secure-pixelation command
secure-pixelation
```
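For the `secure-pixelation` command to exist after `pip install .`, the package needs a console-script entry point. A minimal `pyproject.toml` along these lines would provide it; the module path and function name here are hypothetical, so check the repository's actual packaging file:

```toml
[project]
name = "secure-pixelation"
version = "0.1.0"

[project.scripts]
# maps the CLI name to an entry function (hypothetical module path)
secure-pixelation = "secure_pixelation.main:main"
```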
## Set up LaMa

This is the generative AI model used to inpaint the blacked-out areas.
```bash
# Get the pretrained model weights
mkdir -p ./big-lama
wget https://huggingface.co/smartywu/big-lama/resolve/main/big-lama.zip
unzip big-lama.zip -d ./big-lama
rm big-lama.zip

# Get the code to run the model
cd big-lama
git clone https://github.com/advimman/lama.git
cd lama
pip install -r requirements.txt
```
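LaMa's prediction script consumes a directory of images paired with binary masks, where white marks the pixels to inpaint; the exact filename pairing (e.g. `img001.png` alongside `img001_mask.png`) depends on its prediction config, so treat the naming here as an assumption and check the `lama` repo's README. Building such a mask for a blacked-out rectangle is straightforward:

```python
import numpy as np

def region_mask(shape, box):
    """Binary inpainting mask: 255 (repaint) inside the box, 0 (keep) elsewhere.
    `shape` is (height, width); `box` is (x, y, w, h) of the blacked-out area."""
    height, width = shape
    x, y, w, h = box
    mask = np.zeros((height, width), dtype=np.uint8)
    mask[y:y+h, x:x+w] = 255
    return mask
```

With the image/mask pairs saved to an input directory, inference should then follow LaMa's documented invocation, roughly `python3 bin/predict.py model.path=$(pwd)/big-lama indir=$(pwd)/input outdir=$(pwd)/output` (paths here are placeholders; verify against the cloned repo).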