04 Mar
Totalperform
Buenos Aires
We are seeking a Python Developer who specializes in data processing, automation, and web scraping. You will work closely with the CTO and the engineering team to extract, process, and optimize real-time travel data. If you are proficient with Python, data pipelines, and web scraping, this is an opportunity to have a direct impact on a product that sits at the intersection of travel, rewards, and fintech.
Key Responsibilities:
- Develop and maintain Python-based automation for data extraction, processing, and analysis.
- Build and optimize web scraping solutions to collect real-time travel pricing and availability data.
- Work with third-party APIs and integrate data from multiple sources.
- Design and maintain data pipelines for processing large datasets efficiently.
- Implement ETL (Extract, Transform, Load) processes to structure travel and financial data (see the sketch after this list).
- Collaborate with backend engineers to support API integrations and data workflows.
- Optimize data storage and retrieval using NoSQL (MongoDB) and relational databases.
- Use cloud services (AWS, Azure) to deploy and manage automation scripts.
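As a rough illustration of the day-to-day work, a fare-ingestion pipeline of this kind might look like the minimal sketch below. The endpoint, response shape, and MongoDB collection names are hypothetical and only stand in for the real data sources.

import requests
import pandas as pd
from pymongo import MongoClient

# Hypothetical fare-search endpoint and MongoDB connection; the real
# sources and schemas would be defined with the engineering team.
FARES_API = "https://api.example-travel.com/v1/fares"
MONGO_URI = "mongodb://localhost:27017"


def extract(origin: str, destination: str) -> list[dict]:
    """Pull raw fare records from a third-party pricing API."""
    resp = requests.get(FARES_API, params={"from": origin, "to": destination}, timeout=30)
    resp.raise_for_status()
    return resp.json()["fares"]  # assumed response shape


def transform(raw: list[dict]) -> pd.DataFrame:
    """Normalize prices and drop incomplete rows before loading."""
    df = pd.DataFrame(raw)
    df = df.dropna(subset=["price", "currency", "departure_date"])
    df["price"] = df["price"].astype(float)
    df["departure_date"] = pd.to_datetime(df["departure_date"])
    return df


def load(df: pd.DataFrame) -> None:
    """Write the cleaned records into MongoDB for downstream queries."""
    client = MongoClient(MONGO_URI)
    client["travel"]["fares"].insert_many(df.to_dict("records"))


if __name__ == "__main__":
    load(transform(extract("EZE", "MIA")))

In practice a job like this would run on a schedule (for example via Airflow) rather than as a one-off script.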
Required Skills and Qualifications:
- 5+ years of professional experience as a Python developer.
- Expertise in web scraping frameworks such as Scrapy, Selenium, or Playwright.
- Strong background in data processing and automation using Python (Pandas, NumPy, Airflow).
- Experience with API integration and working with structured/unstructured data.
- Familiarity with ETL workflows and data transformation techniques.
- Hands-on experience with NoSQL databases (MongoDB) and data storage optimization.
- Comfortable with cloud-based scripting and deployment (AWS Lambda, EC2, etc.); see the sketch after this section.
- Strong problem-solving skills and ability to work independently on data challenges.
- Bonus: Experience in the travel or financial industries is a plus.
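By way of illustration, an extraction job deployed as an AWS Lambda function might be structured like the minimal sketch below. The event fields, bucket name, and placeholder extraction step are hypothetical.

import json
import boto3  # AWS SDK, available in the Lambda runtime

s3 = boto3.client("s3")

# Hypothetical bucket; in practice this would come from configuration.
RESULTS_BUCKET = "travel-data-raw"


def handler(event, context):
    """Entry point invoked by AWS Lambda, e.g. on a schedule via EventBridge.

    `event` is assumed to carry the route to refresh,
    e.g. {"origin": "EZE", "destination": "MIA"}.
    """
    origin = event["origin"]
    destination = event["destination"]

    # Placeholder for the actual extraction step (API call or scrape).
    records = [{"origin": origin, "destination": destination, "price": None}]

    # Persist raw results to S3 for the downstream pipeline.
    key = f"fares/{origin}-{destination}.json"
    s3.put_object(Bucket=RESULTS_BUCKET, Key=key, Body=json.dumps(records))
    return {"statusCode": 200, "records": len(records)}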
Web Scraping Requirements:
- Strong ability to inspect and reverse-engineer APIs, including:
  - Analyzing browser network traffic (using Chrome DevTools, Burp Suite, Fiddler, or similar tools).
  - Understanding encrypted or obfuscated API responses.
  - Mimicking authenticated API calls and bypassing security restrictions.
- Experience in bypassing CAPTCHAs, bot protections (Cloudflare, PerimeterX, DataDome, Akamai), and anti-scraping techniques.
- Deep understanding of headless browser automation tools such as Playwright, Puppeteer, or Selenium (see the sketch after this list).
- Ability to extract, clean, and structure data from HTML, JSON, XML, and other formats.
- Experience working with residential proxies, rotating IPs, user-agent spoofing, and fingerprint evasion.
- Ability to optimize scraping scripts for speed, concurrency, and scalability while minimizing detection risks.
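As a rough illustration, a headless-browser collector for this kind of pricing data might look like the minimal sketch below, using Playwright's sync API. The URL and CSS selectors are hypothetical, and real targets would also involve proxy rotation, concurrency, and error handling.

from playwright.sync_api import sync_playwright

# Hypothetical search page and selectors; real targets depend on the
# sites being monitored.
SEARCH_URL = "https://www.example-airline.com/search?from=EZE&to=MIA"


def scrape_fares() -> list[dict]:
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        # A custom user agent; fingerprint and proxy settings would go here too.
        context = browser.new_context(
            user_agent="Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36"
        )
        page = context.new_page()
        page.goto(SEARCH_URL, wait_until="networkidle")

        # Extract structured records from the rendered results table.
        fares = []
        for row in page.locator(".fare-row").all():
            fares.append(
                {
                    "flight": row.locator(".flight-number").inner_text(),
                    "price": row.locator(".price").inner_text(),
                }
            )
        browser.close()
        return fares


if __name__ == "__main__":
    print(scrape_fares())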
Show the company your skills: complete the application form and add a personal touch to your cover letter; it will help the recruiter choose the right candidate.