Search Results

Search found 6882 results on 276 pages for 'ftp proxy'.

Page 60 of 276

  • suggestions for fast reliable proxy IPs like codeen but with posting?

    - by barlop
    Hi, I am looking for a list of fast, reliable proxy servers like the one offered by CoDeeN (http://codeen.cs.princeton.edu/). I just want to be able to POST on Usenet or Yahoo Groups through them; I think the CoDeeN ones don't allow HTTP POST. I don't need them for downloading, for torrents, or even for any images; they can block images to keep browsing fast. I know it's not a list, but I did try Tor once, and it was horribly slow.

  • mod_usertrack with X-Forwarded-For (proxy) IPs, apache 2.2

    - by ripper234
    I'm using Apache 2.2 with mod_usertrack, behind a reverse proxy (load balancer). The proxy disguises the clients' real IP addresses (it keeps them in the X-Forwarded-For header) and forwards the requests along. mod_usertrack uses the client's IP (along with some noise) to generate a GUID for each client. However, because of the proxy it only sees a single IP, so the generated GUIDs for each client are very similar (even with some possible collisions). I would like to upgrade Apache to version 2.4, but it seems to be somewhat of a project. I did manage to compile it using this post and a few others, only to discover the folder structure did not resemble the one I had before (default Ubuntu), and I'm wary of tweaking it myself: I would be making my life miserable when I want to upgrade the server later on. So, what are my options? Is there a good unofficial repository that packages Apache 2.4 for Oneiric? (Please provide a short how-to; I'm not great at installing packages.) Is there an alternative route to solve this? (Upgrading just the usertrack module? Another module that works with Apache 2.2?)
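
    One possibility for the "another module" route, sketched below: mod_rpaf rewrites Apache's idea of the remote address from X-Forwarded-For before modules such as mod_usertrack run, so GUIDs are seeded from real client IPs again. Untested here, and the load balancer address and module path are placeholders:

        # Apache 2.2, e.g. in httpd.conf or a conf.d file
        LoadModule rpaf_module /usr/lib/apache2/modules/mod_rpaf-2.0.so
        RPAFenable On
        RPAFsethostname On
        RPAFproxy_ips 10.0.0.1        # placeholder: the load balancer's IP
        RPAFheader X-Forwarded-For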

  • Pushing updates to live server... FTP isn't cutting it... a better method?

    - by Jenkz
    I'm the lead developer in a team of 2. My partner has only just joined the project, and despite using Git for version control etc., we are still stuck in the dark ages when it comes to code deployment. Currently I make all site updates via FTP using FileZilla (this way I have control of, and responsibility for, everything that goes live). I've done this for years, but we now have some large PHP classes (300 KB) and a lot of traffic. So in short, every time I upload a key class ("general", for example), the site goes down until the file finishes uploading. This is only 5-6 seconds at a time, but that is increasingly unacceptable. I realise I can upload the file under a different name and then rename both files, but really, there must be a better way? I've heard about rsyncing code across from another server, but I don't see how that prevents the site from serving a half-uploaded file. We only have one server (for DB and Apache) but also use some cloud servers (for OpenX, as an example).
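
    For reference, the standard cure is to make the switchover atomic: put the new code somewhere outside the live document root, then swap a symlink, since renaming one symlink over another is a single atomic operation. A rough shell sketch (the paths and release name are hypothetical; Apache's docroot would point at /var/www/current):

        # copy the new release alongside the old ones
        rsync -a ./build/ /var/www/releases/2012-03-01/
        # build a new symlink, then atomically rename it over the live one
        ln -s /var/www/releases/2012-03-01 /var/www/current.tmp
        mv -T /var/www/current.tmp /var/www/current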

  • How to share files/media online

    - by user110744
    Can someone direct me to a how-to that explains not only how to install an FTP server but also how a family member on another coast can access it over the Internet? I have set up an old computer with the intent of sharing movies/music/photos with family members not in my house. I have been searching for over an hour and cannot find a complete guide that runs from installation through user/folder setup to access from a computer not on the same network. Any assistance in this matter would be eternally appreciated.
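
    Not a complete guide, but a minimal sketch of the usual ingredients on an Ubuntu-style machine (it assumes your router forwards port 21 to this computer, your relatives connect to your public IP or a dynamic-DNS name with any FTP client, and /srv/media is a placeholder for wherever your files live):

        sudo apt-get install vsftpd        # the FTP server itself
        sudo adduser cousin                # one login per family member
        # make the shared media visible inside that user's home directory
        sudo mkdir /home/cousin/media
        sudo mount --bind /srv/media /home/cousin/media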

  • How do I set DateTimeStamp on file after uploading to FTP?

    - by Anthony Brien
    I have an application that deploys game data files to different gaming consoles. If matching files on the user's machine and the console have identical sizes and dates, they must not be re-deployed. On Xbox, this is easily accomplished because an XDK library used to upload files on the console allows me to set the date on the uploaded files to match the dates on the user's machine. On PS3, however, I use an FTP service running on the console. I use WebClient.UploadFileAsync to upload files to the console. However, I cannot figure out how to set the uploaded file's date/time stamp, leaving me with only the file size to determine identical files, which is unsafe. I was wondering if there was a way to set a file's date/time stamp through the WebClient interface?
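
    WebClient itself has no hook for this. If the console's FTP service happens to support the MFMT extension (far from universal), the timestamp can be set after upload over a raw control connection; below is an untested sketch that ignores multi-line replies, with host, credentials, and path as placeholders:

        using System;
        using System.IO;
        using System.Net.Sockets;

        class FtpTouch
        {
            // Set a remote file's modification time via the FTP MFMT extension.
            public static void SetRemoteMTime(string host, string user,
                string pass, string remotePath, DateTime utc)
            {
                using (var tcp = new TcpClient(host, 21))
                using (var rd = new StreamReader(tcp.GetStream()))
                using (var wr = new StreamWriter(tcp.GetStream()) { AutoFlush = true, NewLine = "\r\n" })
                {
                    rd.ReadLine();                            // 220 greeting
                    wr.WriteLine("USER " + user); rd.ReadLine();
                    wr.WriteLine("PASS " + pass); rd.ReadLine();
                    // MFMT YYYYMMDDHHMMSS <path>; a 213 reply means success
                    wr.WriteLine("MFMT " + utc.ToString("yyyyMMddHHmmss") + " " + remotePath);
                    Console.WriteLine(rd.ReadLine());
                    wr.WriteLine("QUIT");
                }
            }
        }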

  • How can I make Visual Studio 2010 deploy (or FTP upload) a page on save?

    - by Isa
    I just started using Visual Studio 2010, having moved over from NetBeans. I quite liked the NetBeans upload-on-save functionality, which is useful in development environments where one is constantly making small changes and testing them: as soon as you save a file, it is synced to the FTP server. Is it possible to do this in VS? I'm pretty sure there is a way, using a macro of some sort and having it run on every save, but I have no idea how to implement it. Maybe even a keyboard shortcut to deploy the current working file would be nice.
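
    One plausible route, as an untested sketch: the VS 2010 Macros IDE pre-declares a DocumentEvents object in the EnvironmentEvents module, so a DocumentSaved handler can shell out to whatever command-line uploader you like (ftpput.exe and its arguments here are stand-ins):

        ' Macros IDE > MyMacros > EnvironmentEvents
        Private Sub DocumentEvents_DocumentSaved(ByVal doc As EnvDTE.Document) _
                Handles DocumentEvents.DocumentSaved
            ' push the just-saved file; replace with your real upload command
            Shell("ftpput.exe """ & doc.FullName & """", AppWinStyle.Hide)
        End Sub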

  • WordPress transfer to new FTP server. Help, home link doesn't work?

    - by judi
    Hi, I've transferred my WordPress site to a new FTP server, but my home link doesn't work properly. When I click on it, it goes to http://123.456.78.8/mydomain.com and I get a page-not-found message. I've discovered it needs a / at the end to work. Does anyone know a way to fix this before I put it on my live site? Could it be a database or config file issue? Thanks for all your help. Regards, Judi. P.S. Could it be the permalink structure? Will it work when I change my domain to http://mydomain.com?
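
    If it is a database issue, the usual suspects after a move are WordPress's siteurl and home options, which still hold the old absolute URL. A sketch of the standard fix, assuming the default wp_ table prefix (back up the database first):

        UPDATE wp_options
           SET option_value = 'http://mydomain.com'
         WHERE option_name IN ('siteurl', 'home');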

  • Are there any XML Editors with FTP and file-tree browsing combined?

    - by JW
    Are there any XML editors (free, preferably) that combine FTP, file-tree browsing, and project-wide find and replace? I.e., a bit like Dreamweaver MX but with fancier XML capabilities (XSLT/XSD). Perhaps even DW does this; I'm still on an older version. I'd like to keep a smooth flow between find, edit, view, and upload. Any ideas? Background: I have converted most of the HTML files of my legacy site into XML files (which match the directory structure of my 'public docs' folder), as part of a step towards turning it into completely dynamic data via an MVC / Front Controller pattern.

  • vsftpd login errors: 530 login incorrect

    - by mcktimo
    Using Ubuntu 10.04 on an AWS EC2 instance. I was happy just using SSH, but then a WordPress plugin needs FTP access. I just need FTP access for one site, www.sitebuilt.net, which lives in /home/sitebuil. I installed vsftpd and PAM and followed suggestions that got me to the following state:

    /etc/vsftpd.conf:

        listen=YES
        anonymous_enable=NO
        local_enable=YES
        write_enable=YES
        dirmessage_enable=YES
        use_localtime=YES
        xferlog_enable=YES
        connect_from_port_20=YES
        xferlog_file=/var/log/vsftpd.log
        secure_chroot_dir=/var/run/vsftpd/empty
        pam_service_name=vsftpd
        rsa_cert_file=/etc/ssl/private/vsftpd.pem
        guest_enable=YES
        user_sub_token=$USER
        local_root=/home/$USER
        chroot_local_user=YES
        hide_ids=YES
        check_shell=NO
        userlist_file=/etc/vsftpd_users

    /etc/pam.d/vsftpd:

        # Standard behaviour for ftpd(8).
        auth required pam_listfile.so item=user sense=deny file=/etc/ftpusers onerr=succeed
        # Note: vsftpd handles anonymous logins on its own. Do not enable pam_ftp.so.
        # Standard pam includes
        @include common-account
        @include common-session
        @include common-auth
        auth required pam_shells.so
        # Customized login using htpasswd file
        auth required pam_pwdfile.so pwdfile /etc/vsftpd/passwd
        account required pam_permit.so
        session optional pam_keyinit.so force revoke
        auth include system-auth
        account include system-auth
        session include system-auth
        session required pam_loginuid.so

    /etc/vsftpd_users:

        sitebuil
        tim

    /etc/passwd:

        ...
        sitebuil:x:1002:100:sitebuilt systems:/home/sitebuil:/bin/sh
        ftp:x:108:113:ftp daemon,,,:/srv/ftp:/sbin/nologin

    /etc/vsftpd/passwd:

        sitebuil:Kzencryptedpwd

    /var/log/vsftpd.log:

        Wed Feb 29 15:15:48 2012 [pid 20084] CONNECT: Client "98.217.196.12"
        Wed Feb 29 15:16:02 2012 [pid 20083] [sitebuil] FAIL LOGIN: Client "98.217.196.12"
        Wed Feb 29 16:12:33 2012 [pid 20652] CONNECT: Client "98.217.196.12"
        Wed Feb 29 16:12:45 2012 [pid 20651] [sitebuil] FAIL LOGIN: Client "98.217.196.12"

  • IIS 7.5 - Remove the pipe character from usernames for virtual hosts

    - by glasnt
    Currently I have a setup with a virtual FTP site in IIS 7.5 that requires the following authentication details for the anonymous account:

        Host: ftp.mydomain.com
        User: ftp.mydomain.com|anonymous
        Pass: <none>

    I have multiple FTP accounts set up on this same server. I know this means I need to specify the domain in the username to let IIS know which site to authenticate against, but is it possible to make the username just anonymous? Would I have to create a user by that name in the Windows Users and Groups area and specifically link it there?

  • How to know the full path of a file using SSH?

    - by Roy
    Hi, I am a beginner at SSH stuff, but I want to dump a big SQL file, and for that I need to be able to navigate to the appropriate path in my hosting account. I managed to log in over SSH and typed pwd, but it gave me a shared-hosting path like /home/content/r/o/s/roshanjonah. How can I go to the path where I upload my files? I use FTP, but the FTP path just shows /, so I cannot go any further back than that. So, using SSH, how can I get to the directory I see in FTP? Thanks, Roshan
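
    On shared hosts the FTP root is usually a directory under (or next to) the SSH home directory. One quick way to locate it, sketched below, is to search for a file you know you uploaded via FTP (index.html is just an example name):

        pwd                                    # where SSH drops you
        find ~ -name "index.html" 2>/dev/null  # look for a known FTP file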

  • Recommend an SFTP solution for Windows (Server & Client) that integrates well with Dreamweaver

    - by aaandre
    Could you please recommend a secure FTP solution for updating files on a remote Windows server from Windows workstations? We would like to replace the FTP-based workflow with a secure one. The remotely hosted webserver runs Windows Server 2003, and the workstations run WinXP. We manage the files via Dreamweaver's built-in FTP. My understanding is that Dreamweaver supports SFTP out of the box, so I guess I am looking for a good SFTP server for Windows Server 2003, ideally one that does not require Cygwin. Ideally, the solution would use authentication based on the existing Windows accounts and permissions. Thank you!

  • On demand upload server

    - by stimms
    I'm looking for a simple application which will do something like: allow a user to sign up for an FTP account -> ask an admin for approval -> create the FTP account for that user. Now, it doesn't have to be FTP; in fact, I would be happy with a web-based tool that supported upload via some sort of Java applet or something similar. I don't care what platform it runs on, although if we could avoid PHP that would be cool. Any ideas?

  • I set up vsftpd on an Ubuntu server on my EC2 instance; how do I connect using SSH?

    - by Blankman
    I connect to my EC2 instance using SSH (with a key), so I don't have to log in each time. I just installed vsftpd on the Ubuntu server, but when I connect it obviously asks for my username and password. Since I connect using the ubuntu user that my AMI comes with, I don't even know the root password. Is there a way I can log in via FTP using SSH? Or do I just create a user on the system for FTP purposes? I've locked FTP down to my IP address, and I will shut the FTP service down once I'm done, as I don't need it running 99.99999% of the time.
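
    Worth noting: SFTP is a different protocol from FTP that runs over the SSH daemon you are already using, so the same key works and vsftpd isn't needed at all, provided whatever needs "FTP access" can speak SFTP. A sketch (hostname and key file are placeholders):

        # same identity as your normal ssh login
        sftp -i ~/.ssh/my-key.pem ubuntu@ec2-xx-xx-xx-xx.compute-1.amazonaws.com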

  • Proftpd: How to set default root to a user's home directory without jailing the user?

    - by sacamano
    Hi there. I've installed ProFTPD on my Debian box, but I'm having some trouble with the configuration. In my proftpd.conf I've added:

        DefaultRoot ~ !ftp_special

    This works fine, in that all users except members of ftp_special are unable to navigate outside of their home folder. However, I want users that are members of ftp_special to land in a special home folder when logging on to the FTP server, while still being able to navigate the entire server. Right now, if a member of ftp_special logs on, his entry point is the root ( / ). Thanks in advance.
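
    One thing worth trying: ProFTPD's DefaultChdir directive accepts the same sort of group expression as DefaultRoot, and it only sets the initial directory after login, without jailing anyone. A sketch, with the landing directory as a placeholder:

        DefaultRoot  ~ !ftp_special
        DefaultChdir /srv/ftp-special ftp_special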

  • How to redirect a specific url through a proxy for multiple services?

    - by CrystalFire
    I have a website hosted on 000webhost.com for free. I am unable to connect directly to the site because Comcast has blocked a portion of 000webhost's servers for free accounts, due to other people hosting malicious content. In order to maintain my website, I cannot use my computer to connect directly to the server. I am wondering if there is a way to transparently forward attempts to access the server through a proxy. The current system I am on is Windows, but I also have systems running Mac OS X and Linux, so solutions for any system would be fine. I've found answers which work for HTTP, but I'm looking for a solution which will let me use all the other functions as well, such as FTP and SSH.
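
    One protocol-agnostic approach, assuming you have SSH access to some third host that Comcast does not block (relay.example.com and the other names below are placeholders): open a local SOCKS5 tunnel and point each client at it. Most FTP clients have SOCKS settings, and ssh itself can be chained through the tunnel with a ProxyCommand (BSD netcat syntax shown):

        # local SOCKS5 proxy on port 1080, carried over SSH to the relay
        ssh -D 1080 user@relay.example.com
        # then, e.g., reach the blocked host via that tunnel
        ssh -o ProxyCommand="nc -X 5 -x localhost:1080 %h %p" user@myserver.example.com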

  • SOCKS proxy on a Mac's shared internet

    - by AliBZ
    Hi all. I use my Mac's Internet Sharing to create a wireless network for my iPod touch. I have a Linux server and I use a SOCKS proxy on it. I want to use this proxy on my iPod, but I don't know how. I put my shared network connection behind the proxy with a localhost IP, but my iPod isn't behind the proxy. Any ideas?
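
    A possible angle, untested: iOS's Wi-Fi settings have no SOCKS field, but they do accept an auto-proxy (PAC) URL, and PAC scripts are allowed to return SOCKS proxies. Host a file like the one below somewhere the iPod can fetch it, and set the Wi-Fi proxy to "Auto" with that URL (the address and port of your Linux server's SOCKS proxy are placeholders):

        // proxy.pac
        function FindProxyForURL(url, host) {
            return "SOCKS 10.0.0.5:1080";  // placeholder: the Linux server's SOCKS endpoint
        }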

  • Why does using nginx as a reverse proxy break local links?

    - by tsvallender
    I've just set up nginx as a reverse proxy, so some sites served from the box are served directly by it and others are forwarded to a Node.js server. The site being served by Node.js, however, is displayed with no CSS or images, so I assume the links are somehow being broken, but don't know why. The following is the only file in /etc/nginx/sites-enabled:

        server {
            listen 80;               ## listen for ipv4
            listen [::]:80 default ipv6only=on; ## listen for ipv6

            server_name dev.my.site;
            access_log /var/log/nginx/localhost.access.log;

            location / {
                root /var/www;
                index index.html index.htm;
            }

            location /myNodeSite {
                proxy_pass http://127.0.0.1:8080/;
                proxy_redirect off;
                proxy_set_header Host $host;
            }
        }

    I had thought perhaps it was trying to find them in /var/www due to the first entry, but removing that doesn't seem to help.
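
    For what it's worth, the usual culprit in this kind of setup: the Node.js pages reference their assets by absolute paths such as /css/style.css, and those requests match location / and are looked up under /var/www instead of being proxied. One way out, sketched below, is to give the Node site its own hostname so that every path reaches the proxy (node.my.site is a hypothetical name):

        server {
            listen 80;
            server_name node.my.site;   # hypothetical vhost for the Node app
            location / {
                proxy_pass http://127.0.0.1:8080/;
                proxy_set_header Host $host;
            }
        }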

  • How do you cache web pages with a personalized header using a caching reverse proxy such as Squid, Varnish or Nginx?

    - by Continuation
    Pretty much every page of my website is dynamically generated. However, they don't change that frequently (kind of similar to a forum page), so I'd like to cache them using a caching reverse proxy such as Squid, Varnish or Nginx. The problem is that each of my logged-in users sees a personalized header saying "Welcome John Doe. Logout" in the upper right corner of the page (just like Server Fault), while users who aren't logged in see a header that says "Login" instead. So basically, even though every user sees the same page in general, they all see a slightly different version due to that personalized header. Is there any way I can cache the "main" part of the page and serve it from cache, while generating the personalized header dynamically for each individual user? This must be a very common problem. How is it solved in general?
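
    The general answer is usually Edge Side Includes: the page is cached with a small include tag (e.g. <esi:include src="/header"/>) where the header belongs, and the proxy assembles the page per request, fetching only the header fragment uncached. A sketch for Varnish 3 (the /header URL is a placeholder; stock nginx has no ESI, and the other common pattern is to fill in the header client-side with JavaScript):

        # VCL: parse <esi:include> tags in cached pages
        sub vcl_fetch {
            set beresp.do_esi = true;      # Varnish 3 syntax
            if (req.url == "/header") {
                set beresp.ttl = 0s;       # never cache the personalized fragment
            }
        }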

  • How to use iptables to forward outbound web traffic to a proxy?

    - by jnman
    I've been hitting my head against this for a while. The scenario is as follows: I want to forward all outbound web traffic from a browser to Tor so that it is properly anonymized. Normally, one could just set the HTTP proxy in the browser and be done with it, but this is a browser without the ability to do so: specifically, a mobile browser. So ideally, what could be done is to intercept all web/DNS traffic requests from the browser and send them to Tor. Assume for this that Tor will be running on the device too.
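
    This is what Tor's transparent proxying is for. A sketch, assuming torrc contains TransPort 9040 and DNSPort 5353, and that the browser's traffic can be matched by the UID it runs under (browseruser is a placeholder):

        # divert the browser's web traffic into Tor's transparent port
        iptables -t nat -A OUTPUT -p tcp --dport 80 \
            -m owner --uid-owner browseruser -j REDIRECT --to-ports 9040
        # send its DNS lookups to Tor's resolver
        iptables -t nat -A OUTPUT -p udp --dport 53 \
            -m owner --uid-owner browseruser -j REDIRECT --to-ports 5353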

  • How to invalidate nginx reverse proxy cache in front of other nginx servers?

    - by Olivier Lance
    I'm running a Proxmox server on a single IP address that dispatches HTTP requests to containers depending on the requested host. I am using nginx on the Proxmox side to listen to HTTP requests, and I am using the proxy_pass directive in my different server blocks to dispatch requests according to the server_name. My containers run on Ubuntu and are also running an nginx instance. I'm having trouble with caching on a particular website that is fully static: nginx keeps serving me stale content after file updates, until I either clear /var/cache/nginx/ and restart nginx, or set proxy_cache off for this server and reload the config. Here's the detail of my configuration. On the server (Proxmox), /etc/nginx/nginx.conf:

        user www-data;
        worker_processes 8;
        pid /var/run/nginx.pid;

        events {
            worker_connections 768;
            # multi_accept on;
            use epoll;
        }

        http {
            ##
            # Basic Settings
            ##
            sendfile on;
            #tcp_nopush on;
            tcp_nodelay on;
            #keepalive_timeout 65;
            types_hash_max_size 2048;
            server_tokens off;
            # server_names_hash_bucket_size 64;
            # server_name_in_redirect off;
            include /etc/nginx/mime.types;
            default_type application/octet-stream;
            client_body_buffer_size 1k;
            client_max_body_size 8m;
            large_client_header_buffers 1 1K;
            ignore_invalid_headers on;
            client_body_timeout 5;
            client_header_timeout 5;
            keepalive_timeout 5 5;
            send_timeout 5;
            server_name_in_redirect off;

            ##
            # Logging Settings
            ##
            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log;

            ##
            # Gzip Settings
            ##
            gzip on;
            gzip_disable "MSIE [1-6]\.(?!.*SV1)";
            gzip_vary on;
            gzip_proxied any;
            gzip_comp_level 6;
            # gzip_buffers 16 8k;
            gzip_http_version 1.1;
            gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
            limit_conn_zone $binary_remote_addr zone=gulag:1m;
            limit_conn gulag 50;

            ##
            # Virtual Host Configs
            ##
            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

    /etc/nginx/conf.d/proxy.conf:

        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_hide_header X-Powered-By;
        proxy_intercept_errors on;
        proxy_buffering on;
        proxy_cache_key "$scheme://$host$request_uri";
        proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=cache:10m inactive=7d max_size=700m;

    /etc/nginx/sites-available/my-domain.conf:

        server {
            listen 80;
            server_name .my-domain.com;
            access_log off;

            location / {
                proxy_pass http://my-domain.local:80/;
                proxy_cache cache;
                proxy_cache_valid 12h;
                expires 30d;
                proxy_cache_use_stale error timeout invalid_header updating;
            }
        }

    On the container (my-domain.local), nginx.conf (everything is inside the main config file -- it's been done quickly...):

        user www-data;
        worker_processes 1;
        error_log logs/error.log;

        events {
            worker_connections 1024;
        }

        http {
            include mime.types;
            default_type application/octet-stream;
            sendfile on;
            #tcp_nopush on;
            keepalive_timeout 65;
            gzip off;

            server {
                listen 80;
                server_name .my-domain.com;
                root /var/www;
                access_log logs/host.access.log;
            }
        }

    I've read many blog posts and answers before resolving to post my own question... most answers I can see suggest setting sendfile off, but that didn't work for me. I have tried many other things and double-checked my settings, and all seems fine. So I'm wondering whether I am expecting nginx's cache to do something it's not meant to...?
    Basically, I thought that if one of the static files in my container was updated, the cache in my reverse proxy would be invalidated and my browser would get the new version of the file when it requests it. But I now have the feeling I misunderstood many things. Of all things, I now wonder how nginx on the server could even know that a file in the container has changed? I have seen a directive named something like proxy_header_pass; should I use it to let the nginx instance in the container somehow inform the one on Proxmox about updated files? Is this expectation just a dream, or can I do it with nginx on my current architecture?
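
    For context on that last question: nginx's proxy cache never asks the backend whether a file changed while an entry is still within its proxy_cache_valid window, so with 12h validity, up to 12 hours of stale content is the designed behaviour, and expires 30d additionally tells browsers to cache on their own. A sketch of a softer policy on the Proxmox side (the timings are arbitrary examples):

        location / {
            proxy_pass http://my-domain.local:80/;
            proxy_cache cache;
            proxy_cache_valid 200 5m;   # re-fetch from the container after 5 minutes
            # expires 30d;              # dropped: let the backend control client caching
            proxy_cache_use_stale error timeout invalid_header updating;
        }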

  • How to rewrite the domain part of Set-Cookie in an nginx reverse proxy?

    - by Tobia
    I have a simple nginx reverse proxy:

        server {
            server_name external.domain.com;
            location / {
                proxy_pass http://backend.int/;
            }
        }

    The problem is that Set-Cookie response headers contain ;Domain=backend.int, because the backend does not know it is being reverse proxied. How can I make nginx rewrite the content of the Set-Cookie response headers, replacing ;Domain=backend.int with ;Domain=external.domain.com? Passing the Host header unchanged is not an option in this case. Apache httpd has had this feature for a while, see ProxyPassReverseCookieDomain, but I cannot seem to find a way to do the same in nginx.
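
    Depending on the nginx version, there may be a direct equivalent: the proxy_cookie_domain directive (added in nginx 1.1.15, if memory serves) rewrites the Domain attribute of Set-Cookie headers. A sketch:

        location / {
            proxy_pass http://backend.int/;
            # rewrite Domain=backend.int to Domain=external.domain.com
            proxy_cookie_domain backend.int external.domain.com;
        }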

  • DNS spoofing and XAMPP as a proxy, how to configure it?

    - by Angelo
    I have a server running Apache with mod_proxy, a module that lets my localhost act as a proxy server. When somebody on the same LAN visits my server (my localhost through my LAN IP), he/she can see only the .html pages loaded onto my server. Because the DNS on the client is spoofed to point at my localhost, if he/she clicks on a link that refers to something not on my server, Apache correctly says "Object not found": the client cannot request the page from the Internet itself. The question is: how do I configure Apache to grab the page in place of the client?
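
    Since the clients reach Apache transparently (they believe they are talking to the real sites), a plain forward proxy won't help by itself; one sketch, untested, is a catch-all virtual host that re-requests whatever Host the client asked for, assuming the server's own DNS is not spoofed (it needs mod_rewrite and mod_proxy_http, and should be restricted to the LAN so it cannot become an open relay):

        <VirtualHost *:80>
            ServerName catchall.localdomain     # hypothetical catch-all vhost
            RewriteEngine On
            # fetch the page the client asked for and relay it back
            RewriteRule ^/(.*)$ http://%{HTTP_HOST}/$1 [P]
        </VirtualHost>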
