Project Fur-Tress: Building a Home SOC Lab (Part 5)

Introduction

In Project Fur-Tress: Building a Home SOC Lab (Part 4), I finished setting up my domain controller, created several users, and assigned them to groups. I also set a few group policies: one to reduce the password requirements, one to grant local admin rights to the IT Ops group, and one to install software on domain computers.

In this next part of my SOC Lab journey, I will set up an endpoint running Windows 10 Enterprise and connect it to the domain. Once the endpoint is done, I will set up a Linux VM running Ubuntu Server to host Docker, Portainer, and a vulnerable web app.

Windows 10 Enterprise

I downloaded the ISO for Windows 10 Enterprise Evaluation, then created a VM with 2 vCPUs, 4 GB RAM, and a 40 GB disk, set the VLAN tag to 20 on the vmbr5 network bridge, and installed Windows.
After the installation had finished, I renamed the system, connected it to the domain, then restarted it.

Logging in, I saw the shortcut for Google Chrome on the desktop, so the software installation policy had worked. There was no internet connection, though. Checking 'ipconfig', I saw the device had an IP address in the correct range, the Ethernet adapter was showing as connected to the domain, and the gateway was set to the IP address of the domain controller.

When I double-checked the DHCP configuration on the DHCP server, I noticed that the Router option was set to the IP address of the domain controller, 10.10.20.2, rather than the firewall, 10.10.20.254. I edited the scope, restarted Windows 10, and the endpoint was now showing as connected to the internet.

However, a new issue presented itself: Windows 10 would stay connected for a little while, then disconnect for a few minutes, then reconnect. I decided to perform all the updates first before looking into this further.
After Windows had finished updating and restarted, the network connection appeared to remain stable.

The local admin permissions still did not seem to be getting applied, though. Using a user in the IT Operations group, which was a member of the local Administrators group, I still could not run elevated commands. This is something I will need to dig into more deeply. For now, I shut down the Windows 10 Enterprise system and took a snapshot, as the evaluation is only valid for 90 days.

Ubuntu Server

Another system that I wanted within the corporate environment was a web server running on Linux. The Windows Server is running IIS and hosts a web service, but I wanted a second machine to run Docker, Portainer, and some vulnerable applications for testing.

I created a VM with 4 vCPUs, 4 GB RAM, and a 120 GB disk, then installed Ubuntu Server 24.04. During the installation, I left the network settings at the pre-assigned configuration and did not install any additional software, except OpenSSH Server, so I could connect to and configure the system remotely over SSH.

Static IP Configuration

Once Ubuntu had finished installing, I checked that all current packages were up to date. Next, I wanted to give the Ubuntu server a static IP address. This would allow easier DNS lookups on the network, as I planned to run a web server and a couple of web apps from this machine.

From the console of the VM in Proxmox, I logged in to the Ubuntu server, then ran 'ip a' to get the IP address of the system. I could see that I had been given a dynamic IP address of '10.10.20.51', so I knew this device was getting its lease from the DHCP server provided by the Windows Server.

Now I wanted to give this system a static IP. With no GUI to facilitate editing the network configuration, this would mean editing a specific file. Since 20.04, Ubuntu has used Netplan for the configuration of network interfaces.
By default, a new installation of Ubuntu Server will have one or both of 01-network-manager-all.yaml and 50-cloud-init.yaml in the /etc/netplan/ directory. When I checked my installation, I had the cloud-init file, which means any changes I make would be overwritten, as this file is populated automatically. To get around this, I needed to disable the automatic network configuration.

I created /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg and put the following in the file, then saved it.

network: {config: disabled}

Now I could edit /etc/netplan/50-cloud-init.yaml and enter the following:

network:
  ethernets:
    ens18:
      dhcp4: false
      dhcp6: false
      addresses:
        - 10.10.20.5/24
      routes:
        - to: default
          via: 10.10.20.254
      nameservers:
        addresses: [10.10.20.2, 10.10.20.254]
  version: 2

Now to reboot the server and check that the settings were applied correctly. After restarting, I could SSH in to the server at the new IP address I had set. Looking good so far.

Connecting to a Domain

With a static IP address configured, I now needed to install some packages that would allow me to connect my Ubuntu server to the domain and help ensure configurations could be applied correctly.

sudo apt update
sudo apt install realmd sssd sssd-tools libnss-sss libpam-sss adcli samba-common samba-common-bin oddjob oddjob-mkhomedir packagekit krb5-user ntp -y

With the packages installed, I now wanted to change my Ubuntu server's hostname to include the Fully Qualified Domain Name (FQDN) of my domain.

sudo hostnamectl set-hostname ub-server.fur-tress.soc

Next I had to check if my server could identify the domain.

sudo realm discover fur-tress.soc

The Ubuntu server could see my Windows domain, so now I just needed to join it.

sudo realm join fur-tress.soc

During the join, I got a notification that no DNS domain was configured for the Ubuntu server, so the DNS update failed. Before fixing the DNS issue, I first wanted to check that the Linux system was appearing under Active Directory Computers on the domain controller. I could see my Linux server in the list, so I moved on.

I removed 10.10.20.254 from the nameservers in the 50-cloud-init.yaml file and restarted the server. After the restart, resolvectl showed the DNS server as the domain controller.

Before any further configuration, I wanted to check that my AD users were available in the Linux system.

getent passwd FUR-TRESS\\Administrator

From the command, I could see that the user entry was returned correctly. Now I wanted to make sure a home directory is created for any user that logs in, and that the login shell is set.
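Under the hood, getent resolves accounts through each source listed in /etc/nsswitch.conf, where the sss module is added by the realm join. A quick way to sanity-check the mechanism, using a local account that always exists:

```shell
# getent consults every passwd source in /etc/nsswitch.conf in order
# (typically "files" first, then "sss" once SSSD is configured), so local
# and domain accounts resolve through the same interface.
getent passwd root

# Show which sources the passwd database is configured to use:
grep '^passwd' /etc/nsswitch.conf
```

If 'sss' appears on the passwd line, domain lookups like the Administrator query above go through SSSD.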

To set a home directory to be created automatically when a user logs in, I used:

sudo pam-auth-update --enable mkhomedir

Restarting the server once more, I then tried to log in with one of the AD users I had previously created on the domain. I was able to log in and saw the console notify me that the user's home directory was being created.

The last step in setting up Ubuntu to connect to the domain controller was granting sudo access to some users and groups.

On the domain controller, I created a new group nixadmin and then added the AD users that would be permitted to use sudo on the Linux server. Back on the Linux system, I used visudo and added the following to my sudoers file:

%nixadmin ALL=(ALL) ALL
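An equivalent approach, which keeps the main sudoers file untouched, is a drop-in file under /etc/sudoers.d (a sketch; the filename nixadmin is arbitrary):

```
# /etc/sudoers.d/nixadmin — create with: sudo visudo -f /etc/sudoers.d/nixadmin
# Members of the AD group "nixadmin" may run any command as any user.
%nixadmin ALL=(ALL) ALL
```

Either way, visudo validates the syntax before saving, which matters here: a broken sudoers file can lock you out of elevated commands entirely.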

My logged-in user was now able to run elevated commands, and I was ready to start installing Docker and setting up Portainer.

Docker

I’m using Docker and Portainer on the Ubuntu server so that I can quickly start and stop vulnerable web applications like DVWA and WebGoat. These applications are invaluable tools for gaining practical experience with penetration testing, logging, and monitoring tools.

Why Docker?

Docker is a powerful platform that automates the deployment of applications inside lightweight, portable containers. These containers bundle everything needed to run the application, including the code, runtime, libraries, and dependencies. By using Docker, I can ensure consistency across multiple environments, streamline application development, and simplify deployment processes.
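As a concrete illustration of that bundling, a container image is described by a Dockerfile. A minimal sketch for a small Python web app (the app.py and requirements.txt names are hypothetical):

```
# Hypothetical Dockerfile: the runtime, dependencies, and code are all
# declared here, so the resulting image runs identically on any Docker host.
FROM python:3.12-slim
WORKDIR /app
# Install the app's libraries/dependencies
COPY requirements.txt .
RUN pip install -r requirements.txt
# Add the application code and define how the container starts
COPY app.py .
CMD ["python", "app.py"]
```

For the vulnerable apps used here, ready-made images already exist, so no Dockerfile is needed; the image is simply pulled and run.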

By leveraging Docker and Portainer, I can manage and deploy multiple applications efficiently. This setup allows me to:

  • Quickly Spin Up Vulnerable Apps: Launch applications like DVWA and WebGoat to simulate real-world security scenarios.
  • Practice Penetration Testing: Use these environments to test various penetration techniques and improve my skills.
  • Implement Logging and Monitoring: Integrate tools to log and monitor activities, enhancing my understanding of security breaches and their impacts.

These applications will give me a great opportunity to deepen my understanding of IT infrastructure and security practices.
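As a preview of what this makes possible, once Docker is running, an app like DVWA can be brought up and torn down from a short Compose file (a sketch, assuming the community vulnerables/web-dvwa image; the port mapping is arbitrary):

```yaml
# docker-compose.yml — start with "docker compose up -d",
# tear down again with "docker compose down"
services:
  dvwa:
    image: vulnerables/web-dvwa
    ports:
      - "8080:80"   # DVWA's web UI on port 8080 of the Docker host
    restart: unless-stopped
```

Portainer offers the same start/stop control through its web UI, which is exactly why I want it in the lab.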

Installing Docker

Docker is very quick and easy to install. Following the steps in the Docker guide, I first removed any packages that may be unofficially maintained, then added the GPG key, added the repository, and installed Docker and its required packages.

# Remove unofficial packages
for pkg in docker.io docker-doc docker-compose docker-compose-v2 podman-docker containerd runc; do sudo apt-get remove $pkg; done

# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc


# Add the repository to Apt sources:
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update

# Install latest version of Docker
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
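One detail worth noting in the repository step above: the two command substitutions fill in the CPU architecture and the Ubuntu release codename, so the same line works across machines. They can be previewed on their own:

```shell
# What gets interpolated into docker.list on this machine:
dpkg --print-architecture                      # e.g. amd64
. /etc/os-release && echo "$VERSION_CODENAME"  # e.g. noble on Ubuntu 24.04
```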

With the installation of Docker completed, I now had to check that it worked. The hello-world test would be sufficient here.

sudo docker run hello-world

Portainer

Portainer is a universal container management platform that simplifies the deployment, management, and scaling of containerised applications. It is designed to be easy to use and supports multi-cluster and multi-device management, allowing environments to be managed from anywhere.

I am using Portainer on a couple of VPS servers that run a few other applications and services, which I access through some personal domains. Being familiar with it, and finding that it suits my needs well, I wanted to implement it here in my SOC lab so that I can leverage multiple containers as and when I need them.

I wanted to install Portainer CE (Community Edition) as the root user, so once I had connected to the Ubuntu server through SSH, I switched to 'root', then created a volume so that Portainer could store its database.

docker volume create portainer_data

This creates a volume external to the Portainer container, so that if the container is stopped, there is persistent data stored for it to read from when it is restarted.

The next step was to start the Portainer container, letting Docker pull the image it required and bring the container up.

docker run -d -p 8000:8000 -p 9443:9443 --name portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce:2.21.4

Once the container was pulled and running, I could open my web browser and go to https://10.10.20.5:9443, where I was presented with the User Setup page to create the admin account. With a password set, I was logged in to Portainer and could use it to add, remove, start up, and shut down any containers I wish to use.

Challenges

Connecting to the Domain

Initially, I encountered problems when connecting to the domain using Samba/winbind, which caused issues with users being able to SSH into the Ubuntu server. To resolve this, I reset the VM and started again, using SSSD instead. This generated the /etc/sssd/sssd.conf file, and I made two key changes:

  • Home Directory Location: Set the user directory to be in a domain directory within the home location.
  • Fully Qualified Domain Names: Disabled the use of fully qualified domain names.

fallback_homedir = /home/%d/%u
use_fully_qualified_names = False

With these changes, I could SSH into the server using an AD user without needing to use the full 'user.name@domain', simplifying the login process to just the username.
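For reference, the relevant part of /etc/sssd/sssd.conf ends up looking roughly like this after the join (a sketch; realmd generates the rest of the settings, and only the last two directives are my edits):

```
[sssd]
domains = fur-tress.soc
config_file_version = 2
services = nss, pam

[domain/fur-tress.soc]
# ... settings generated by realm join ...
# Group home directories per domain: /home/<domain>/<user>
fallback_homedir = /home/%d/%u
# Allow plain usernames instead of user@fur-tress.soc
use_fully_qualified_names = False
```

SSSD needs a restart (sudo systemctl restart sssd) for edits to this file to take effect.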

DNS Issues

I also faced DNS issues when the Ubuntu server had difficulties reaching repositories. Although it could connect to the internet, it couldn’t resolve domain names. This issue was traced back to the domain controller, which also acted as a DNS server. It could ping IP addresses, but failed to resolve domain names. To fix this, I set up a forwarder in the DNS Manager on the Windows Server to point to 1.1.1.1, enabling the server to resolve DNS queries it couldn’t handle on its own.

Summary

In this phase of Project Fur-Tress, I've made significant progress by setting up a Windows 10 Enterprise endpoint and connecting it to the domain. Additionally, I configured an Ubuntu Server to join the domain, allowing domain users to log in and granting sudo access to specific users within a security group.

Docker is now installed and configured on the Ubuntu server, and Portainer CE is running as a container. I'm able to access the Portainer web interface through both the IP:Port from my management device and the FQDN:Port of the Ubuntu server.

Next, I’ll be setting up DVWA and WebGoat as containers. I also plan to enhance security by configuring the "Fake Internet" and "Corporate LAN" VLANs to communicate through pfSense, ensuring neither VLAN has direct internet access.

Resources