Got an open redirect but want to chain it into SSRF
Trusted extra eyes for stuck bug bounty findings
BugUnstuck helps bug bounty hunters post masked requests, attract the right specialists, agree on split expectations, and turn half-proven bugs into valid impact.
Digital banking platform (US neobank). Mapped GraphQL mutations for critical financial operations: draft transaction deletion, tip refunds, and peer-to-peer transfers. Each mutation accepts a target ID parameter that may not be validated against the authenticated user's ownership. Three test scripts are ready to check for IDOR on: (1) delete_draft - can you delete another user's pending transaction? (2) refund_tip - can you trigger a refund on another user's tip? (3) P2P transfer manipulation - can you alter the recipient or amount? Every mutation requires valid authenticated session cookies. QA account enrollment was declined. If any IDOR vector is confirmed, this is a P1/P2 financial-impact finding. Need someone with an active account who can execute pre-written Python scripts against the GraphQL endpoint.
Major cryptocurrency exchange. A CDN domain actively used for loading JavaScript assets across multiple exchange pages is approaching expiration or has already lapsed. If registered by an attacker, this enables arbitrary JS injection on authenticated exchange pages, leading to session cookie exfiltration, wallet address substitution, and full account takeover. The domain is confirmed referenced in production page source. I need help with: (1) verifying whether the CDN domain is currently registrable or in redemption period, (2) documenting which production pages still load scripts from this domain, (3) building a clean PoC demonstrating session hijack via injected JS. This is a high-severity supply chain vector if the domain can be claimed. Currently under triage.
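For step (2) of a post like this, the inventory work is mechanical: fetch each production page and list which script tags load from the at-risk domain. A minimal sketch using only the standard library; the domain `example-assets.net` and the sample HTML are hypothetical placeholders, not the actual target.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class ScriptSrcParser(HTMLParser):
    """Collects the src attribute of every <script> tag."""
    def __init__(self):
        super().__init__()
        self.srcs = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            for name, value in attrs:
                if name == "src" and value:
                    self.srcs.append(value)

def scripts_from_domain(html, cdn_domain):
    """Return script URLs in html whose host is cdn_domain or a subdomain of it."""
    parser = ScriptSrcParser()
    parser.feed(html)
    hits = []
    for src in parser.srcs:
        host = urlparse(src).hostname or ""
        if host == cdn_domain or host.endswith("." + cdn_domain):
            hits.append(src)
    return hits

# Hypothetical page source; in practice this would be fetched per production URL.
page = ('<script src="https://cdn.example-assets.net/app.js"></script>'
        '<script src="https://other.net/lib.js"></script>')
print(scripts_from_domain(page, "example-assets.net"))
# -> ['https://cdn.example-assets.net/app.js']
```

Running this over the site's page list yields the "still references the domain" evidence table triagers usually ask for, without touching the domain itself.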
Global payment infrastructure company. The QA subdomain of their finance portal uses Auth0 with open self-registration enabled. Creating a new account via the standard Auth0 signup flow grants immediate access to an internal finance backoffice dashboard. Navigation, partial data, and internal tooling are visible. Currently rated P3 under review. I need help with: (1) enumerating what finance operations are accessible from the dashboard, (2) determining if the QA backoffice shares a database with production, (3) building a stronger impact argument to escalate from P3. Need someone experienced with Auth0 misconfigurations and fintech backoffice assessment.
Home security IoT platform. Authenticated users can query monitoring status of arbitrary base stations by serial number - no ownership verification. A secondary endpoint (wifiCredentials) returns 500 when called with foreign serials, suggesting a crashed authorization check. The IDOR is clean and reproducible. What I need: (1) someone to help frame the physical security impact (serial-to-address mapping possibility), (2) determine if the wifiCredentials crash is exploitable beyond DoS, (3) tighten the severity argument for the triager. Currently P3 under review.
Restaurant management SaaS. I discovered that a low-privilege role (waiter) can call the generateTotp endpoint intended for admin actions. The TOTP is sent to the waiter's email. The createUser endpoint also lacks RBAC - if you supply a valid TOTP, you can create admin-level accounts. Steps 1-2 are proven (generateTotp succeeds, schema is documented). Step 3 (createUser with the received TOTP to mint an admin) needs a clean end-to-end PoC. The program is still assessing, but I think a polished full-chain recording would seal it. Need someone who has done RBAC/privilege escalation chains before.
Healthcare platform. I can store arbitrary HTML/JS in the patient address field via direct API call. The payload persists and is visible in booking confirmations. The critical question: does it fire when a doctor views the patient record on the practitioner portal? I do not have a practitioner account to test. If XSS executes in the doctor context, this is a high-impact stored XSS affecting medical staff. Need someone with a test practitioner account on this platform (or experience setting one up) to verify the render path.
I have a 3-stage SSRF chain on a major social media platform: (1) analytics subdomain has a dangling DNS reference to an expired domain, (2) I can serve a redirect from that domain, (3) the platform renderer fetches and executes the redirected content. JS execution is confirmed from multiple IPs. Internal DNS names resolve from the renderer context but not publicly. The program uses an internal SSRF validation tool (canary endpoint) that I have not been able to trigger yet. My addendum shows internal DNS resolution and port-scan timing differentials, but the triager wants the canary hit. Need someone who has experience proving internal network access through SSRF chains - specifically bypassing allowlist-based SSRF detection.
Healthcare platform with end-to-end encryption (Tanker SDK). I can enumerate encryption group identifiers for other patients and doctor agendas by iterating the subject_id parameter. Cross-boundary access (patient can reach practitioner records) is confirmed. The weak spot: I cannot yet show how possessing a tanker_group_identifier leads to actual document decryption or content access. Registration POST returns 500. Need someone familiar with Tanker SDK internals or E2E encryption group semantics to help map the path from group ID enumeration to actual data exposure.
Found an exposed Prometheus metrics endpoint on a fintech platform that reveals internal system metrics including request rates, error counts, memory usage, goroutine counts, and internal service names. The endpoint requires no authentication and is accessible from the public internet. While this is typically classified as informational, the leaked service names and error patterns could help an attacker map internal architecture and identify weak points. Looking for someone to help assess whether this has enough impact for the program or if I should chain it with other findings.
While reviewing the JavaScript bundles of a financial platform, I found references to 4 internal npm packages that are not registered on the public npm registry. The build system appears to resolve from both internal and public registries. If an attacker registers these package names on npmjs.com with a higher version number, the build system would pull the attacker-controlled package instead. This is a classic dependency confusion / substitution attack. I have confirmed the package names are available on npm. Need a collaborator who has successfully submitted dependency confusion findings before to help structure the report and PoC.
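The first artifact a dependency confusion report needs is the exact list of package specifiers pulled from the bundle. A rough sketch of that extraction step, assuming the common `require("...")` / `from "..."` specifier shapes; the `@acme-internal` scope and bundle snippet are hypothetical.

```python
import re

# Heuristic scan of a JS bundle for package specifiers; keeps only scoped
# names (@org/pkg), which are the usual candidates for internal packages.
SPEC_RE = re.compile(r'(?:require\(|from\s)["\']((?:@[\w.-]+/)?[\w.-]+)["\']')

def scoped_packages(bundle_js):
    return sorted({name for name in SPEC_RE.findall(bundle_js) if name.startswith("@")})

# Hypothetical minified bundle fragment.
bundle = 'require("@acme-internal/billing");import x from "@acme-internal/auth";require("lodash")'
print(scoped_packages(bundle))
# -> ['@acme-internal/auth', '@acme-internal/billing']
```

Each extracted name can then be checked against the public registry (a 404 from the registry's package metadata endpoint means the name is unclaimed), which is the confirmation the post says has already been done.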
Identified a GitHub Actions workflow that uses pull_request_target trigger combined with actions/checkout of the PR head SHA. This means any external contributor can submit a PR that executes arbitrary code in the context of the target repo with write permissions and access to repository secrets. The workflow is in a public repo of a well-funded crypto project. I ran the PoC and the token variable was empty at the time of testing, but the architecture is fundamentally exploitable if secrets are added later or if a different workflow in the same repo shares the token. Need someone who has experience with GitHub Actions exploitation to help assess whether the current empty-token state means it is still reportable as a design flaw.
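The dangerous combination described here (a `pull_request_target` trigger plus a checkout of the PR head) can be flagged with a naive textual check across a repo's workflow files. A sketch, deliberately string-based rather than a full YAML parse; the sample workflow is illustrative, not the target's.

```python
def risky_pr_target(workflow_text):
    """Flag the pull_request_target + PR-head-checkout anti-pattern.

    A workflow triggered by pull_request_target runs with the base repo's
    token and secrets, so checking out the PR head lets untrusted code run
    in that privileged context. Naive string check, not a YAML parser.
    """
    uses_trigger = "pull_request_target" in workflow_text
    checks_out_head = ("github.event.pull_request.head.sha" in workflow_text
                       or "github.event.pull_request.head.ref" in workflow_text)
    return uses_trigger and checks_out_head

# Hypothetical vulnerable workflow for illustration.
wf = """
on: pull_request_target
jobs:
  build:
    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.pull_request.head.sha }}
"""
print(risky_pr_target(wf))
# -> True
```

On the reportability question: even with an empty token today, showing that every workflow sharing the repo matches this pattern (or would the moment secrets are added) is the structural argument the post is after.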
On a financial platform, I found that an API key obtained from a public-facing endpoint grants access to an internal API that should not be externally reachable. The internal API responds with partial data from authenticated endpoints — not full data, but enough to confirm the API key provides elevated access beyond what was intended. The key was found embedded in a JavaScript bundle served to unauthenticated users. Looking for help determining the full scope of what the key can access and whether the exposed internal endpoints contain sensitive operations.
Set up a callback server and observed over 1000 out-of-band HTTP requests originating from a crypto exchange's monitoring infrastructure. The callbacks come from multiple distinct IPs and contain internal path information. This suggests the platform's fetch or monitoring system is making requests to attacker-controlled URLs without proper validation. The traffic pattern suggests automated health checks or URL validation that follows external links. I need help determining whether this qualifies as SSRF under the program scope, and whether the volume and IP diversity strengthens or weakens the case.
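Whichever way the scoping question lands, the callback log itself should be summarized rather than dumped. A minimal sketch that counts distinct source IPs and the most-hit paths, assuming a hypothetical `<ip> <method> <path>` log shape; adapt the split to the real log format.

```python
from collections import Counter

def summarize_callbacks(log_lines):
    """Summarize out-of-band callbacks: distinct source IPs and most-hit paths.

    Assumes each line looks like "<ip> <method> <path>" (hypothetical shape).
    """
    ips = set()
    paths = Counter()
    for line in log_lines:
        parts = line.split()
        if len(parts) < 3:
            continue  # skip malformed lines
        ip, _method, path = parts[0], parts[1], parts[2]
        ips.add(ip)
        paths[path] += 1
    return {"distinct_ips": len(ips), "top_paths": paths.most_common(3)}

# Hypothetical log excerpt.
log = ["10.0.0.1 GET /probe/abc", "10.0.0.2 GET /probe/abc", "10.0.0.1 GET /probe/xyz"]
print(summarize_callbacks(log))
# -> {'distinct_ips': 2, 'top_paths': [('/probe/abc', 2), ('/probe/xyz', 1)]}
```

A table of "N requests from M distinct IPs over T hours, hitting these internal-looking paths" answers the volume-and-diversity question concretely.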
Found a legacy authentication endpoint on a major tech company where a path parameter value is reflected inside a script tag JSON object without proper encoding. However, the WAF catches most XSS payloads and the server also does some escaping on angle brackets. I have not achieved full XSS execution yet, but the reflection is clearly there and the encoding is inconsistent. Looking for someone experienced with WAF bypass techniques and script-context XSS to help find a working payload. The endpoint also leaks internal hostnames in its CSP header and accepts arbitrary values for the application identifier parameter.
On a cryptocurrency exchange, the session cookie is set without the HttpOnly flag, making it accessible to JavaScript. Combined with a previously found IDOR on the order endpoint, an attacker with XSS can steal the session cookie and then enumerate other users' order data. The chain is: XSS reads document.cookie -> session token extracted -> IDOR on order endpoint using stolen session -> another user's order metadata leaked. I have each piece individually confirmed but need help putting together the full chain PoC and impact statement for the report.
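The missing-HttpOnly premise is worth documenting precisely in the report, since programs often ask which attributes the cookie does and does not carry. A small checker for a Set-Cookie header; the header value shown is a placeholder, not the exchange's actual cookie.

```python
def cookie_flags(set_cookie_header):
    """Report which security attributes a Set-Cookie header carries."""
    attrs = [part.strip().lower() for part in set_cookie_header.split(";")[1:]]
    return {
        "httponly": any(a == "httponly" for a in attrs),
        "secure": any(a == "secure" for a in attrs),
        "samesite": any(a.startswith("samesite=") for a in attrs),
    }

# Hypothetical header for illustration.
header = "session=abc123; Path=/; Secure"
print(cookie_flags(header))
# -> {'httponly': False, 'secure': True, 'samesite': False}
```

Note for the impact statement: missing HttpOnly is only an amplifier; the severity rides on the XSS and the IDOR, with the cookie flag explaining why the session token is reachable at all.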
Found a complete authentication bypass on a biometric challenge API for a major identity verification platform. The chain starts with an IDOR on the challenge endpoint that leaks verification images stored in cloud object storage, then pivots into stored XSS via a crafted payload in the verification metadata field. The IDOR alone exposes PII (biometric selfies). Combined with the XSS, an attacker can hijack active verification sessions. I have a working PoC for the full chain but need a second pair of eyes on the impact assessment and the race condition timing in the session hijack step. High confidence this is Critical — the image exfil alone is a privacy nightmare.
Found an API key embedded in client-side JavaScript of a delivery service. The key is completely unrestricted — no HTTP referrer check, no IP restriction — and works for 12 different billable cloud API endpoints (mapping, routing, geolocation, places, and more). Automated abuse could generate 100K+ per month in charges to the target GCP project. The key is associated with a known project ID that I confirmed via error message fingerprinting. I need help deciding whether to submit this as-is or whether the target program considers embedded map keys as intentionally public. Also looking for help writing the financial impact section.
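For the financial impact section, the arithmetic behind a "100K+ per month" claim should be shown, not asserted. A back-of-envelope sketch; the request rate and the per-1,000-call price are hypothetical inputs, and real per-SKU pricing must come from the vendor's published rate sheet.

```python
def monthly_abuse_cost(requests_per_second, price_per_1k, days=30):
    """Rough monthly billing estimate for sustained automated abuse of a
    metered API key. price_per_1k is an assumed rate per 1,000 calls.
    """
    calls = requests_per_second * 86400 * days  # 86400 seconds per day
    return calls * price_per_1k / 1000

# e.g. a sustained 10 req/s against an endpoint billed at 5.00 per 1k calls:
print(round(monthly_abuse_cost(10, 5.00), 2))
# -> 129600.0
```

Presenting the formula with conservative inputs (and noting that 12 billable endpoints multiply the exposure) makes the cost claim auditable by the triager.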
A blockchain analytics platform has a staging API endpoint reachable from the production domain. The staging endpoint has full GraphQL introspection enabled, revealing the complete schema including mutations, types, and internal field names. The schema exposes internal entities and operations that are not documented in the public API. This is informational on its own but I want to use it as supporting evidence in a larger chain. Looking for someone who can help analyze the schema for sensitive mutations or access control bypasses that would elevate the severity.
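Triaging a full introspection dump by hand is slow; a first pass can filter the standard `__schema` JSON for mutation names that sound sensitive. A sketch under stated assumptions: the keyword list and the sample schema are illustrative, not the platform's actual schema.

```python
# Filter a GraphQL introspection result (the standard __schema JSON shape)
# for mutation fields whose names suggest sensitive operations.
SENSITIVE = ("delete", "admin", "export", "impersonate", "refund", "transfer")

def sensitive_mutations(schema):
    mutation_type = schema["__schema"].get("mutationType")
    if not mutation_type:
        return []
    by_name = {t["name"]: t for t in schema["__schema"]["types"]}
    fields = by_name[mutation_type["name"]].get("fields") or []
    return sorted(f["name"] for f in fields
                  if any(k in f["name"].lower() for k in SENSITIVE))

# Hypothetical introspection excerpt.
schema = {"__schema": {
    "mutationType": {"name": "Mutation"},
    "types": [{"name": "Mutation", "fields": [
        {"name": "exportLedger"}, {"name": "updateProfile"}, {"name": "adminSetFlag"}]}],
}}
print(sensitive_mutations(schema))
# -> ['adminSetFlag', 'exportLedger']
```

The shortlist this produces is exactly the "sensitive mutations" evidence that can lift the finding above informational when paired with an access-control test on each candidate.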
During authenticated testing of a banking app, I found that the OAuth callback flow has a redirect parameter that accepts partially validated URLs. While fully external domains are blocked, I found that certain URL patterns using path traversal or subdomain tricks can redirect the callback to an attacker-controlled location, potentially leaking the OAuth authorization code or token fragment. The flow requires user interaction (clicking a crafted link) but the redirect happens after authentication. Need help crafting a reliable PoC that bypasses the current validation and demonstrating token interception.