Proxy Duke

PapaProxy - premium datacenter proxies at top speed. Fully unlimited traffic. Big Papa packages from 100 to 15,000 IPs.
  • Some of the lowest prices on the market, with no hidden fees;
  • Guaranteed refund within 24 hours of payment;
  • All IPv4 proxies with HTTPS and SOCKS5 support;
  • IP updates within a package at no extra charge;
  • Fully unlimited traffic included in the price;
  • No KYC for any customer at any stage;
  • Several subnets in each package;
  • Impressive connection speed;
  • And many other benefits :)
Select your tariff
Price per IP address: $0
We have over 100,000 addresses on the IPv4 network. Every package is bound to the IP address of the equipment you are going to work from. Proxy servers can be used with or without login/password authentication. Only elite, highly anonymous proxies.
Types of proxies

Datacenter proxies

Starting from $19 / month
Select tariff
  • Unlimited Traffic
  • SOCKS5 Supported
  • Over 100,000 IPv4 proxies
  • Packages from 100 proxies
  • Good discount for wholesale
Learn More

Private proxies

Starting from $2.50 / month
Select tariff
  • Unlimited Traffic
  • SOCKS5 Supported
  • Proxies just for you
  • Speed up to 200 Mbps
  • Available from 1 pc.
Learn More

Rotating proxies

Starting from $49 / month
Select tariff
  • Each request is a new IP
  • SOCKS5 Supported
  • Automatic rotation
  • Ideal for API work
  • All proxies available now
Learn More

UDP proxies

Starting from $19 / month
Select tariff
  • Unlimited traffic
  • SOCKS5 supported
  • PremiumFraud Shield
  • For games and broadcasts
  • Speed up to 200 Mbps
Learn More

Try our proxies for free

Register an account and get a proxy for testing. No payment details are required. We support most popular tasks: search engines, marketplaces, bulletin boards, online services, and more.
Available regions

PapaProxy.net offers the Proxy Duke service, tailored for Duke University students, faculty, and staff who need secure remote access to Duke's network resources. This proxy ensures that you can access library databases, academic journals, and other online university resources from anywhere, maintaining continuity in your academic and research activities. Whether you're studying remotely or accessing resources while traveling, Proxy Duke keeps you connected to Duke University's wealth of knowledge.

  • IP updates in the package at no extra charge;

  • Unlimited traffic included in the price;

  • Automatic delivery of addresses after payment;

  • All proxies are IPv4 with HTTPS and SOCKS5 support;

  • Impressive connection speed;

  • Some of the lowest prices on the market, with no hidden fees;

  • If the IP addresses don't suit you, we refund your money within 24 hours;

  • And many more perks :)

You can buy proxies at low prices and pay by any convenient method:

  • VISA, MasterCard, UnionPay

  • Tether (TRC20, ERC20)

  • Bitcoin

  • Ethereum

  • AliPay

  • WebMoney WMZ

  • Perfect Money

You can use both the HTTPS and SOCKS5 protocols at the same time. Proxies with and without authorization are available in your account dashboard; a short connection example follows the port list below.

 

Port 8080 for HTTP and HTTPS proxies with authorization.

Port 1080 for SOCKS4 and SOCKS5 proxies with authorization.

Port 8085 for HTTP and HTTPS proxies without authorization.

Port 1085 for SOCKS4 and SOCKS5 proxies without authorization.
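
For illustration, here is a minimal Python sketch of connecting through these ports with the requests library. The host name, username, and password are placeholders rather than real values, and the SOCKS example assumes the optional requests[socks] extra is installed.


# Minimal sketch: connecting through the ports listed above.
# "proxy.example.com", "user" and "pass" are placeholders - substitute the
# address and credentials issued in your account dashboard.
import requests

PROXY_HOST = "proxy.example.com"
USER, PASSWORD = "user", "pass"

# HTTP/HTTPS proxy with authorization (port 8080)
http_proxies = {
    "http": f"http://{USER}:{PASSWORD}@{PROXY_HOST}:8080",
    "https": f"http://{USER}:{PASSWORD}@{PROXY_HOST}:8080",
}

# SOCKS5 proxy with authorization (port 1080); requires `pip install requests[socks]`
socks_proxies = {
    "http": f"socks5://{USER}:{PASSWORD}@{PROXY_HOST}:1080",
    "https": f"socks5://{USER}:{PASSWORD}@{PROXY_HOST}:1080",
}

print(requests.get("https://httpbin.org/ip", proxies=http_proxies, timeout=10).json())
print(requests.get("https://httpbin.org/ip", proxies=socks_proxies, timeout=10).json())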

 

We also have a proxy list builder: you can generate your list in any convenient format. For professional users, there is an extended API for your tasks.

Free proxy list

Free Duke access proxy list

Note: these are not our test proxies, but publicly available free lists collected from open sources that you can use to test your software (a quick liveness check is shown after the list). You can request a test of our proxies here.
IP Country Port Added
50.169.222.243 us 80 27 minutes ago
115.22.22.109 kr 80 27 minutes ago
50.174.7.152 us 80 27 minutes ago
50.171.122.27 us 80 27 minutes ago
50.174.7.162 us 80 27 minutes ago
47.243.114.192 hk 8180 27 minutes ago
72.10.160.91 ca 29605 27 minutes ago
218.252.231.17 hk 80 27 minutes ago
62.99.138.162 at 80 27 minutes ago
50.217.226.41 us 80 27 minutes ago
50.174.7.159 us 80 27 minutes ago
190.108.84.168 pe 4145 27 minutes ago
50.169.37.50 us 80 27 minutes ago
50.223.246.238 us 80 27 minutes ago
50.223.246.239 us 80 27 minutes ago
50.168.72.116 us 80 27 minutes ago
72.10.160.174 ca 3989 27 minutes ago
72.10.160.173 ca 32677 27 minutes ago
159.203.61.169 ca 8080 27 minutes ago
209.97.150.167 us 3128 27 minutes ago
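
As a quick way to check whether one of these public proxies still responds, you could run a short Python script like the sketch below. The address is simply the first entry from the list above and may already be offline; public proxies go down frequently.


# Liveness check for a proxy taken from the free list above.
# Public proxies are unreliable, so expect failures.
import requests

proxy = "50.169.222.243:80"  # IP:PORT from the table; may no longer be online
proxies = {"http": f"http://{proxy}", "https": f"http://{proxy}"}

try:
    response = requests.get("http://httpbin.org/ip", proxies=proxies, timeout=5)
    print("Proxy is alive, exit IP:", response.json()["origin"])
except requests.RequestException as error:
    print("Proxy did not respond:", error)
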
Feedback

I have been using this service for a long time and I am satisfied with the quality of its services. Servers are available 24 hours a day, 7 days a week. You will hardly find cheaper proxies on the market, so I recommend this particular service.
David Gaston

I've been using proxies from this service for about a month, and my impressions are extremely positive! 👍 Easy to manage, excellent speed and reliability. Compared to previous services (I won't name them), I'm really delighted with the stable performance. Would recommend to anyone who works online!
Joe

The service performs very well. The proxies work consistently, and none of them failed. Technical support responded quickly to my questions, which also pleased me. I must give it a high five!
Gordon

I took a proxy here to promote my group on VK. Everything works smoothly and, most importantly, quickly. No bans, and tech support is on point. I have nothing to complain about; everything is at the highest level. My advice.
Thomas Howarth

Reliability and stability of proxies is what I appreciate about this service. Professionalism and politeness of tech support make interaction with the service pleasant and carefree. The prices are also reasonable, which is important.
M. J. Marc

I bought IPv6 for Instagram. Low price, ability to take large quantities (from 1 thousand). The proxies are clean and there were no problems with creating accounts. Over time, accounts remain alive and are not blocked.
KEVIN HOPF

Last month I purchased a proxy for games and was satisfied with the stability and reasonable prices. Communication with the support team went smoothly, and I don't regret my choice.
Herb

Fast integration with API

A simple tool for complete proxy management: purchase, renewal, IP list updates, binding changes, and list downloads. With easy integration into all popular programming languages, the PapaProxy API is a great choice for developers looking to optimize their systems; a short sketch follows below.

Quick and easy integration.
Full control and management of proxies via API.
Extensive documentation for a quick start.
Compatible with any programming language that supports HTTP requests.

Ready to improve your product? Explore our API and start integrating today!
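
As a rough illustration of such an integration in Python: the base URL, the /proxies path, and the api_key parameter below are hypothetical placeholders, not the documented PapaProxy API, so consult the API documentation in your account for the real routes and authentication scheme.


# Hypothetical sketch of pulling a proxy list over an HTTP API.
# The base URL, path and api_key parameter are placeholders, not the
# real PapaProxy endpoints - see the API documentation for actual routes.
import requests

API_KEY = "YOUR_API_KEY"
BASE_URL = "https://api.example.com"  # placeholder base URL

response = requests.get(
    f"{BASE_URL}/proxies",
    params={"api_key": API_KEY, "format": "json"},
    timeout=10,
)
response.raise_for_status()

for entry in response.json():
    print(entry)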

Python
Golang
C++
NodeJS
Java
PHP
React
Delphi
Assembly
Rust
Ruby
Scratch

And 500+ more programming tools and languages

F.A.Q.

What does a VPN connection do?

A VPN hides your real IP address and additionally encrypts your traffic. It is also widely used for location spoofing: for example, a user in the Russian Federation who connects through a US VPN server appears to the site as a visitor from the United States.

How to simulate a click during scraping?

To simulate a click during scraping, you can use a headless browser automation library like Puppeteer for Node.js. Puppeteer provides a high-level API to control headless browsers, allowing you to automate tasks such as clicking on elements, filling out forms, and navigating through pages.

Here's a basic example of how you can use Puppeteer to simulate a click:

  1. Install Puppeteer:

    • Install Puppeteer using npm:

npm install puppeteer

  2. Write the Scraping Script:

  • Create a Node.js script (e.g., scrape_with_click.js) with the following code:


const puppeteer = require('puppeteer');

async function scrapeWithClick() {
    const browser = await puppeteer.launch();
    const page = await browser.newPage();

    try {
        // Navigate to the target URL
        await page.goto('https://example.com');

        // Wait for a specific selector to appear (replace with the selector of the element you want to click)
        const elementSelector = 'button#exampleButton';
        await page.waitForSelector(elementSelector);

        // Simulate a click on the specified element
        await page.click(elementSelector);

        // Wait for the page to settle (replace with additional logic if needed)
        await page.waitForTimeout(2000);

        // Extract and print information after the click
        const extractedInfo = await page.evaluate(() => {
            // Replace this with your logic to extract information from the clicked page
            return document.title;
        });

        console.log('Extracted information after click:', extractedInfo);
    } catch (error) {
        console.error('Error during scraping:', error);
    } finally {
        // Close the browser
        await browser.close();
    }
}

// Run the scraping script
scrapeWithClick();
    • Replace 'https://example.com' with the URL you want to scrape.

    • Replace 'button#exampleButton' with the selector of the element you want to click.

  3. Run the Script:

    • Run the script using Node.js:

node scrape_with_click.js

This script uses Puppeteer to launch a headless browser, navigate to a specified URL, wait for a specific element to appear, simulate a click on that element, and then perform additional actions or extractions as needed.

Make sure to handle errors and adjust the script based on the structure of the website you are scraping.

How can I add my cookies from a file in Selenium with Python?

In Selenium with Python, you can add cookies to the browser session using the WebDriver's add_cookie method. If you have cookies saved in a file, read the file and add each cookie to your Selenium session. Note that Selenium only accepts a cookie for the domain the browser is currently on, so navigate to the site first, add the cookies, and then reload the page. Here's an example:


from selenium import webdriver
import pickle

# Create a new instance of the browser (e.g., Chrome)
driver = webdriver.Chrome()

# Open the target domain first - Selenium only accepts cookies
# for the domain that is currently loaded
driver.get('https://example.com')

# Read cookies from a file (replace 'cookies.pkl' with your actual file name)
with open('cookies.pkl', 'rb') as cookies_file:
    cookies = pickle.load(cookies_file)

# Add each cookie to the browser session
for cookie in cookies:
    driver.add_cookie(cookie)

# Reload the page so the site sees the added cookies
driver.get('https://example.com')

# Continue with your script...

# Close the browser when done
driver.quit()

In this example:

  1. The Selenium WebDriver (Chrome in this case) is created.
  2. The browser first navigates to https://example.com, because add_cookie only accepts cookies for the domain that is currently open.
  3. Cookies are read from a file using the pickle module. Make sure your cookies file is in the correct format (a list of dictionaries).
  4. Each cookie is added to the browser session using the add_cookie method, and the page is reloaded so the site picks up the new cookies. Adjust this part according to your specific use case.
  5. The browser is closed using driver.quit() when the script is done.

Make sure to replace 'cookies.pkl' with the actual path to your cookies file.

Note: The format of the cookies file is crucial. It should be a list of dictionaries, and each dictionary should contain at least the keys 'name', 'value', 'domain', and 'path'. If the cookies were obtained using get_cookies() in a previous Selenium session, you can directly save the result using pickle.dump(cookies, file).

Here's a simple example of how to save cookies:


from selenium import webdriver
import pickle

driver = webdriver.Chrome()
driver.get('https://example.com')

# Get cookies
cookies = driver.get_cookies()

# Save cookies to a file
with open('cookies.pkl', 'wb') as cookies_file:
    pickle.dump(cookies, cookies_file)

driver.quit()

Then, you can use the first script to load and set these cookies in a new Selenium session.

Selenium in PyCharm does not work in headless mode and raises a TimeoutException

If you are experiencing TimeoutException errors when trying to run Selenium in headless mode in PyCharm, there are several potential causes and solutions. Here are some steps to troubleshoot and address the issue; a combined headless setup example follows the list.

  1. Increase Wait Time:

    • Headless mode may introduce additional latency, and elements might take longer to load. Increase the timeout for explicit waits to give the elements enough time to become available.

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Define the headless options explicitly so the snippet is self-contained
options = webdriver.ChromeOptions()
options.add_argument('--headless=new')  # use '--headless' on older Chrome versions

driver = webdriver.Chrome(options=options)

# Increase the timeout as needed
wait = WebDriverWait(driver, 20)

# Example wait for an element to be clickable
element = wait.until(EC.element_to_be_clickable((By.ID, 'your_locator')))

  2. Use Different Locator Strategies:

    • If one locator strategy is causing timeouts, try using a different one. For example, switch from By.ID to By.XPATH or vice versa.

  3. Verify Element Identification:

    • Confirm that the element locator used in your script is correct and uniquely identifies the intended element.

  4. Check for JavaScript Errors:

    • Open the browser console and check for any JavaScript errors that might be affecting the behavior of the page.

  5. Increase Browser Window Size:

    • Some websites may behave differently in headless mode based on the window size. Try setting a larger window size.

options.add_argument('--window-size=1920,1080')

  6. Update ChromeDriver:

    • Ensure that you are using the latest version of ChromeDriver that is compatible with your Chrome browser version.

  7. Use a Custom User Agent:

    • Some websites may behave differently based on the user agent. Try setting a custom user agent.

options.add_argument('--user-agent=Your_Custom_User_Agent')

  8. Check for Captchas or Additional Security Measures:

    • Some websites may use captchas or additional security measures that could cause delays. Ensure that your script is not encountering captchas.

  9. Browser Profile:

    • In some cases, the behavior of the browser may change when running in headless mode. Experiment with different browser profiles or use a clean profile.

  10. Network Issues:

    • Ensure that there are no network-related issues that might be causing delays in loading elements.

  11. Check Proxy Settings:

    • If you are using a proxy, ensure that the proxy settings are configured correctly for headless mode.

  12. Headless Mode Compatibility:

    • Some websites may have issues with headless mode due to user agent detection or other factors. Test your script on different websites to see if the issue persists.
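
Putting several of these suggestions together, a minimal headless setup might look like the sketch below; the URL and the element locator are placeholders you would replace with your own.


# Combined sketch: headless Chrome with an explicit window size, a custom
# user agent and a generous explicit wait. URL and locator are placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

options = webdriver.ChromeOptions()
options.add_argument('--headless=new')  # use '--headless' on older Chrome versions
options.add_argument('--window-size=1920,1080')
options.add_argument('--user-agent=Your_Custom_User_Agent')

driver = webdriver.Chrome(options=options)
try:
    driver.get('https://example.com')
    wait = WebDriverWait(driver, 30)
    element = wait.until(EC.element_to_be_clickable((By.ID, 'your_locator')))
    element.click()
finally:
    driver.quit()
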
Running GUI autotests in GitLab CI/CD using Docker, Selenium, and PyTest

To run GUI autotests in GitLab CI/CD using Docker, Selenium, and PyTest, you can follow these steps:

1. Create a .gitlab-ci.yml file in the root directory of your project. This file will define the pipeline and the jobs for your CI/CD process.

2. Configure the pipeline to use the appropriate image for your tests. In this case, you can use a Python image with the required dependencies installed.

3. Define the before_script section to set up the environment for the tests, including installing the necessary packages and downloading the required drivers for Selenium.

4. Define the test job to run the PyTest tests using the Selenium WebDriver.

Here's an example of a .gitlab-ci.yml file:


stages:
  - test

variables:
  SELENIUM_CHROME_DRIVER: '102.0.5005.62'
  SELENIUM_FIREFOX_DRIVER: '0.26.0'

# Note: the runner or base image must also provide the Chrome/Firefox
# browsers themselves; this file only installs the matching driver binaries.
image: python:3.8

cache:
  paths:
    - .venv
    - requirements.txt

before_script:
  - apt-get update -qq
  - apt-get install -y --no-install-recommends build-essential wget unzip xvfb
  - pip install --upgrade pip
  - pip install --quiet --upgrade pytest
  - pip install --quiet selenium webdriver-manager
  # ChromeDriver releases below 115 are published on chromedriver.storage.googleapis.com;
  # the version must correspond to the Chrome build available on the runner
  - wget -q https://chromedriver.storage.googleapis.com/${SELENIUM_CHROME_DRIVER}/chromedriver_linux64.zip
  - unzip -o chromedriver_linux64.zip -d /usr/local/bin/
  # geckodriver is published in the mozilla/geckodriver GitHub releases
  - wget -q https://github.com/mozilla/geckodriver/releases/download/v${SELENIUM_FIREFOX_DRIVER}/geckodriver-v${SELENIUM_FIREFOX_DRIVER}-linux64.tar.gz
  - tar -xzf geckodriver-v${SELENIUM_FIREFOX_DRIVER}-linux64.tar.gz -C /usr/local/bin/
  - chmod +x /usr/local/bin/chromedriver /usr/local/bin/geckodriver

test:
  stage: test
  script:
    - pytest tests/ --junitxml=report.xml
  tags:
    - selenium
  artifacts:
    when: always
    reports:
      junit: report.xml
  only:
    - master
    - merge_requests

This .gitlab-ci.yml file defines a single stage called test that runs the PyTest tests in the tests/ directory. The before_script section installs the system packages and Python dependencies, then downloads ChromeDriver and geckodriver into /usr/local/bin so Selenium can drive Chrome and Firefox.

The tags: - selenium line ensures that the job runs on a runner with the selenium tag, which should provide the browsers themselves. The artifacts: reports: junit entry publishes the JUnit XML report produced by pytest --junitxml, so the test results appear in GitLab's pipeline and merge request UI.

The only: - master - merge_requests section specifies that the tests run on every commit to the master branch and on every merge request.

Once you've set up the .gitlab-ci.yml file, commit and push it to your repository. Then, create a new merge request or push to the master branch to trigger the CI/CD pipeline and run the GUI autotests using Docker, Selenium, and PyTest.

Our statistics

>12,000

packages sold over the past few years

8,000 TB

of traffic used by our clients per month

6 out of 10

clients upgrade their tariff after the first month of use

HTTP / HTTPS / SOCKS4 / SOCKS5

All popular proxy protocols are available and work with virtually any software and device.
With us you will receive

  • Many payment methods: VISA, MasterCard, UnionPay, WMZ, Bitcoin, Ethereum, Litecoin, USDT TRC20, AliPay, etc;
  • No-questions-asked refunds within the first 24 hours of payment;
  • Personalized prices via customer support;
  • High proxy speed and no traffic restrictions;
  • Complete privacy on SOCKS protocols;
  • Automatic payment, issuance and renewal of proxies;
  • Only live support, no chatbots.
  • Personal manager for purchases of $500 or more.

    What else…

  • Discounts for regular customers;
  • Discounts for large proxy volumes;
  • A package of documents for legal entities;
  • Stability, speed, convenience;
  • Binding a proxy only to your IP address;
  • Convenient control panel and proxy list downloads;
  • Advanced API.