IP | Country | Port | Added |
---|---|---|---|
50.169.222.243 | us | 80 | 40 minutes ago |
115.22.22.109 | kr | 80 | 40 minutes ago |
50.174.7.152 | us | 80 | 40 minutes ago |
50.171.122.27 | us | 80 | 40 minutes ago |
50.174.7.162 | us | 80 | 40 minutes ago |
47.243.114.192 | hk | 8180 | 40 minutes ago |
72.10.160.91 | ca | 29605 | 40 minutes ago |
218.252.231.17 | hk | 80 | 40 minutes ago |
62.99.138.162 | at | 80 | 40 minutes ago |
50.217.226.41 | us | 80 | 40 minutes ago |
50.174.7.159 | us | 80 | 40 minutes ago |
190.108.84.168 | pe | 4145 | 40 minutes ago |
50.169.37.50 | us | 80 | 40 minutes ago |
50.223.246.238 | us | 80 | 40 minutes ago |
50.223.246.239 | us | 80 | 40 minutes ago |
50.168.72.116 | us | 80 | 40 minutes ago |
72.10.160.174 | ca | 3989 | 40 minutes ago |
72.10.160.173 | ca | 32677 | 40 minutes ago |
159.203.61.169 | ca | 8080 | 40 minutes ago |
209.97.150.167 | us | 3128 | 40 minutes ago |
A simple tool for complete proxy management: purchase, renewal, IP list updates, binding changes, and list uploads. With easy integration into all popular programming languages, the PapaProxy API is a great choice for developers looking to optimize their systems.
Quick and easy integration.
Full control and management of proxies via API.
Extensive documentation for a quick start.
Compatible with any programming language that supports HTTP requests.
Ready to improve your product? Explore our API and start integrating today!
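For example, fetching your current proxy list might look like the short Python sketch below. This is an illustration only: the endpoint URL, parameter names, and response fields are hypothetical placeholders, not the actual PapaProxy API; consult the API documentation for the real specification.

import requests

# Hypothetical endpoint and parameters for illustration only --
# substitute the real values from the API documentation.
API_URL = "https://example.com/api/proxy-list"
API_KEY = "your_api_key_here"

response = requests.get(API_URL, params={"key": API_KEY, "format": "json"})
response.raise_for_status()

# Print each proxy returned by the (hypothetical) endpoint
for proxy in response.json():
    print(proxy["ip"], proxy["port"])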
The easiest option is to use a ready-made online proxy checker, such as Hidemy.name, which shows the protocol type in use. Alternatively, you can simply run Speedtest, which will show you the bandwidth and response time (ping).
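If you prefer to script the check yourself, a few lines of Python with the requests library will confirm that a proxy answers and measure its response time. The proxy address below is taken from the list above purely as a placeholder; substitute your own.

import time
import requests

# Placeholder proxy -- substitute one of your own
proxy = "http://50.169.222.243:80"
proxies = {"http": proxy, "https": proxy}

start = time.time()
try:
    # httpbin.org/ip echoes back the IP the request came from
    r = requests.get("http://httpbin.org/ip", proxies=proxies, timeout=10)
    elapsed = time.time() - start
    print(f"Proxy OK, exit IP: {r.json()['origin']}, response time: {elapsed:.2f} s")
except requests.RequestException as e:
    print(f"Proxy check failed: {e}")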
In PHP, you can generate JSON data using the json_encode function, and in Swift (iOS/macOS), you can parse it using JSONSerialization or Codable depending on your needs.
Here's an example of generating JSON in PHP and parsing it using JSONSerialization in Swift.
PHP (Generate JSON):
<?php
// Sample data as an associative array
$data = array(
    'name' => 'John Doe',
    'age' => 25,
    'city' => 'New York',
    'is_student' => true
);

// Encode data to JSON
$jsonData = json_encode($data);

// Output JSON
echo $jsonData;
?>
In this PHP script, the json_encode function is used to convert the PHP associative array into a JSON string.
Swift (Parse JSON using JSONSerialization):
import Foundation

// Sample JSON data as a string
let jsonString = """
{
    "name": "John Doe",
    "age": 25,
    "city": "New York",
    "is_student": true
}
"""

// Convert the JSON string to Data
if let jsonData = jsonString.data(using: .utf8) {
    do {
        // Parse JSON data using JSONSerialization
        if let jsonObject = try JSONSerialization.jsonObject(with: jsonData, options: []) as? [String: Any] {
            // Access parsed JSON values
            let name = jsonObject["name"] as? String ?? ""
            let age = jsonObject["age"] as? Int ?? 0
            let city = jsonObject["city"] as? String ?? ""
            let isStudent = jsonObject["is_student"] as? Bool ?? false

            // Print parsed data
            print("Name: \(name)")
            print("Age: \(age)")
            print("City: \(city)")
            print("Is Student: \(isStudent)")
        }
    } catch {
        print("Error parsing JSON: \(error.localizedDescription)")
    }
}
In this Swift code, the JSONSerialization class is used to parse the JSON string (converted to Data) into a Swift dictionary ([String: Any]). You can then access individual values from the parsed JSON data.
Note: Ensure that the JSON structure in your PHP script and Swift code aligns, and handle errors appropriately during parsing. Additionally, consider using Codable in Swift for a more convenient way to work with JSON data if your data structure matches your Swift model.
To scrape all HTML content from a website using Scrapy, you need to create a spider that visits each page of the website and extracts the HTML content. Here's a simple example:
Create a Scrapy Project:
If you haven't already, create a Scrapy project by running the following commands in your terminal or command prompt:
scrapy startproject myproject
cd myproject
Define a Spider:
Open the spiders directory in your project and create a spider (e.g., html_spider.py). Edit the spider file with the following content:
import scrapy

class HtmlSpider(scrapy.Spider):
    name = 'html_spider'
    allowed_domains = ['example.com']  # Keep the crawl within the target site
    start_urls = ['http://example.com']  # Start with the main page of the website

    def parse(self, response):
        # Extract the raw HTML content and yield it
        yield {
            'url': response.url,
            'html_content': response.text
        }

        # Follow links to other pages (response.follow resolves relative URLs)
        for next_page_url in response.css('a::attr(href)').extract():
            yield response.follow(next_page_url, callback=self.parse)
This spider, named html_spider, starts at the main page (start_urls) and yields the HTML content of each page it visits. It then follows links (a::attr(href)) to other pages and extracts their HTML as well; response.follow resolves relative URLs against the current page, and allowed_domains keeps the crawl from wandering off-site.
Run the Spider:
Run your spider using the following command:
scrapy crawl html_spider -o output.json
This command will execute the html_spider and save the output in a JSON file named output.json. Each item in the JSON file will contain the URL and HTML content of a page.
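If the target site throttles or blocks your IP during a large crawl, you can route the requests through a proxy. Scrapy's built-in HttpProxyMiddleware honors a proxy key in each request's meta; the sketch below shows the idea, with the proxy address taken from the list above purely as a placeholder.

import scrapy

class ProxiedHtmlSpider(scrapy.Spider):
    name = 'proxied_html_spider'
    start_urls = ['http://example.com']

    def start_requests(self):
        for url in self.start_urls:
            # HttpProxyMiddleware (enabled by default) picks up meta['proxy']
            yield scrapy.Request(url, callback=self.parse,
                                 meta={'proxy': 'http://50.169.222.243:80'})

    def parse(self, response):
        yield {'url': response.url, 'html_content': response.text}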
In the messenger settings, go to "Data and Storage". Tap "Proxy Settings", enable the "Use Proxy" toggle, and enter the server, port, username, and password in the designated fields. If you are configuring the desktop version, open the settings menu instead: under "Connection type", select "TCP with custom socks5-proxy" and enter the required data.
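Before entering the details into the app, you can verify that the SOCKS5 proxy itself works with a short Python check (this needs the PySocks package, installed via pip install requests[socks]); the address and credentials below are placeholders, with the IP taken from the list above.

import requests

# Placeholder SOCKS5 proxy -- substitute your own server, port, and credentials
proxy = "socks5://user:password@190.108.84.168:4145"
proxies = {"http": proxy, "https": proxy}

try:
    r = requests.get("http://httpbin.org/ip", proxies=proxies, timeout=10)
    print("SOCKS5 proxy works, exit IP:", r.json()["origin"])
except requests.RequestException as e:
    print("SOCKS5 check failed:", e)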
In the "System Settings" section, open the "Network" tab, and then, when you highlight the active connection, click "Advanced". Here, in the "Proxies" tab, tick only the HTTP proxy if you do not intend to use other types of proxies temporarily. Enter the address of your proxy server and its port in the designated fields and click "OK".