Search Results

Search found 11748 results on 470 pages for 'webclient download'.

Page 4 of 470

  • Wget - if / else download condition?

    - by Kai
    I want wget to prefer a certain filetype over another if the files have the same basename. For example: if foo.ogg is available, don't download foo.mp3. The way I use wget so far to crawl/automatically download (in case anyone is interested):

        wget -Dfoo.com -I /folder/ -r -l 1 -nc -A.ogg,.mp3 -i http://www.foo.com/folder/

    But this, of course, gets me .mp3 AND .ogg files. It also often fetches image files like .png which I didn't want in the first place, and discards them afterwards. Any ideas? (Syntax explanation: -D: download only from this domain; -I: download only from this subfolder of the domain; -r: recursive (follow links and directory structure); -l 1: follow only 1 link deep; -nc: no clobber = download only if the file doesn't exist yet; -A: accept/download only *.ogg and *.mp3 (the HTML files needed for crawling are discarded afterwards); -i: download URL/starting point)

    Read the article

  • SQL SERVER – Download Free eBook – Introducing Microsoft SQL Server 2012

    - by pinaldave
    Database administration and business intelligence are key areas of SQL Server. My very good friends Ross Mistry and Stacia Misner have recently written a book covering SQL Server 2012, and the best part is that the book is totally FREE. It does assume a certain level of SQL Server administration and business intelligence knowledge, so if you are an absolute beginner I suggest you read Ross's other books and attend Stacia Misner's Pluralsight course first. I personally read this book over the last 10 days and found it both easy to read and very comprehensive.

    Part I: Database Administration (by Ross Mistry)
    1. SQL Server 2012 Editions and Engine Enhancements
    2. High-Availability and Disaster-Recovery Enhancements
    3. Performance and Scalability
    4. Security Enhancements
    5. Programmability and Beyond-Relational Enhancements

    Part II: Business Intelligence Development (by Stacia Misner)
    6. Integration Services
    7. Data Quality Services
    8. Master Data Services
    9. Analysis Services and PowerPivot
    10. Reporting Services

    Here are the various versions of the eBook: PDF, ePub, Mobi, Amazon.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Database, PostADay, SQL, SQL Authority, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQLAuthority News – Download SQL Server 2008 R2 Upgrade Technical Reference Guide

    - by pinaldave
    I recently came across a very interesting white paper written for Microsoft by Solid Quality Mentors. A successful upgrade to SQL Server 2008 R2 should be smooth and trouble-free, but to achieve that smooth transition you must plan the upgrade sufficiently and match the plan to the complexity of your database application. Otherwise, you risk costly and stressful errors and upgrade problems. The SQL Server 2008 R2 Upgrade Technical Reference Guide is one of the best and most comprehensive reference guides I have seen on the subject of upgrading to SQL Server 2008 R2; it discusses all the upgrade topics one would want covered. You can find the explanation of why one should upgrade here: Why upgrade to SQL Server 2008 R2. The white paper itself is here: SQL Server 2008 R2 Upgrade Guide. Here is a quick list of the white paper's contents:

    1. Upgrade Planning and Deployment
    2. Management and Development Tools
    3. Relational Databases
    4. High Availability
    5. Database Security
    6. Full-Text Search
    7. Service Broker
    8. Transact-SQL Queries
    9. Notification Services
    10. SQL Server Express
    11. Analysis Services
    12. Data Mining
    13. Integration Services
    14. Reporting Services
    15. Other Microsoft Applications and Platforms
    Appendix 1: Version and Edition Upgrade Paths
    Appendix 2: Upgrade Planning Deployment and Tasks Checklist

    This white paper is huge, with 490 pages and 151,956 words. As I said, it is one of the most comprehensive white papers ever published on the subject; just by reading it, one can learn a lot about SQL Server.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • SQL SERVER – Download PSSDIAG Data Collection Utility

    - by pinaldave
    Early in my career as a database consultant, when I was dealing with SQL Server 2000, I often needed to collect various data related to SQL Server, and my favorite tool for the job was PSSDIAG. It is a general-purpose diagnostic collection utility that Microsoft Product Support Services uses to collect various logs and data files: Performance Monitor logs, SQL Profiler traces, SQL Server blocking script output, Windows Event Logs, and SQLDIAG output. The collected data can then be used by the SQL Nexus tool, which helps you troubleshoot SQL Server performance problems. Because PSSDIAG is a wrapper around other data collection APIs and utilities, the performance impact of running PSSDIAG is generally equal to the impact of the traces it has been configured to capture. If you are still using SQL Server 2000, you need to seriously consider upgrading to SQL Server 2012. The PSSDIAG Data Collection Utility linked here was updated in August 2012. My friend and SQL Server expert Amit Benerjee has written an excellent article on this subject, and I encourage all of you to read it. Note: for SQL Server 2012 there is SQLDiag.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • WebClient The request was aborted: Could not create SSL/TLS secure channel

    - by Tomas
    I am using WebClient in an ASP.NET app to call a secured PayPal URL to create a payment button. While calling the secured PayPal URL I get the error below. How do I solve this problem? Do I need to purchase a certificate just to call a secured URL?

        The request was aborted: Could not create SSL/TLS secure channel.

    My code:

        ServicePointManager.Expect100Continue = true;
        ServicePointManager.SecurityProtocol = SecurityProtocolType.Ssl3;
        using (var client = new WebClient())
        {
            var postBytes = Encoding.ASCII.GetBytes(param);
            client.Headers.Add("Content-Type", "application/x-www-form-urlencoded");
            responseBytes = client.UploadData(_paymentProcessorCredentials.PayPalApiUrl, "POST", postBytes);
        }
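
    A minimal sketch (not from the original question) of one change that often resolves this, assuming .NET 4.5 or later where SecurityProtocolType.Tls12 exists: PayPal's endpoints no longer accept SSL 3.0, so forcing Ssl3 as above typically fails the handshake before any certificate question even arises. The endpoint URL and form data below are placeholders.

        using System;
        using System.Net;
        using System.Text;

        class PayPalButtonPostSketch
        {
            static void Main()
            {
                // Allow TLS 1.2 instead of forcing the retired SSL 3.0 protocol.
                ServicePointManager.Expect100Continue = true;
                ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12;

                using (var client = new WebClient())
                {
                    // Placeholder endpoint and form data -- substitute the real PayPal API URL and parameters.
                    var postBytes = Encoding.ASCII.GetBytes("cmd=_s-xclick&hosted_button_id=EXAMPLE");
                    client.Headers.Add("Content-Type", "application/x-www-form-urlencoded");
                    byte[] responseBytes = client.UploadData("https://www.sandbox.paypal.com/cgi-bin/webscr", "POST", postBytes);
                    Console.WriteLine(Encoding.UTF8.GetString(responseBytes));
                }
            }
        }

    For a plain HTTPS call like this, no certificate purchase is needed on the client side; the server certificate is presented by PayPal, not by the caller.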

    Read the article

  • C# WebClient only downloads partial html

    - by H4mm3rHead
    Hi, I am working on a scraping app and ran into a problem while trying to get it working. In the code below I have replaced the original scraping target with Google's web page, just for testing. It seems that my download doesn't get everything: I noticed that the body and html tags are missing their closing tags. How do I get it to download everything? What's wrong with my sample code?

        string filename = "test.html";
        WebClient client = new WebClient();
        string searchTerm = HttpUtility.UrlEncode(textBox2.Text);
        client.QueryString.Add("q", searchTerm);
        client.QueryString.Add("hl", "en");
        string data = client.DownloadString("http://www.google.com/search");
        StreamWriter writer = new StreamWriter(filename, false, Encoding.Unicode);
        writer.Write(data);
        writer.Flush();
        writer.Close();
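
    A minimal sketch (not from the original question) of one way to rule out encoding and client-detection effects, assuming the goal is simply to capture exactly what the server sent: save the raw response bytes instead of re-encoding the string, and send a browser-like User-Agent, since Google serves noticeably different markup to unknown clients. The header value and search term are placeholders.

        using System;
        using System.IO;
        using System.Net;

        class RawDownloadSketch
        {
            static void Main()
            {
                using (var client = new WebClient())
                {
                    // A browser-like User-Agent (placeholder string); unknown clients get stripped-down markup.
                    client.Headers.Add("User-Agent", "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36");
                    client.QueryString.Add("q", Uri.EscapeDataString("test search"));
                    client.QueryString.Add("hl", "en");

                    // Write the bytes exactly as served, with no re-encoding on the way to disk.
                    byte[] raw = client.DownloadData("http://www.google.com/search");
                    File.WriteAllBytes("test.html", raw);
                    Console.WriteLine("Wrote {0} bytes", raw.Length);
                }
            }
        }

    If the saved file still ends without closing body and html tags, the markup really is served that way rather than being truncated by WebClient.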

    Read the article

  • WebClient.DownloadString() Not Producing Exact HTML

    - by Ryan Fuentes
    So here's the deal. I'm creating a spider bot for a website that scans all the product pages and records the product data. I'm using C# and the WebClient class to download the HTML string. The site I'm crawling must be specially built, because the HTML received from WebClient.DownloadString() is different from the HTML I see when I view the page source in a browser. This seems intentional, because the only piece of information I can't get is the price. Does anyone know a workaround for this problem, or can anyone explain what is happening? Thanks.

    Read the article

  • System.Net.WebClient Class in .Net CompactFramework 3.5 ?

    - by Leen15
    Hi all! I need to communicate with a server that gives me asynchronous answers (a streaming connection). I found this: http://msdn.microsoft.com/en-en/library/ms144211%28v=VS.80%29.aspx which generates this event: http://msdn.microsoft.com/en-en/library/system.net.webclient.openreadcompleted%28v=VS.80%29.aspx I think this is what I need, but I don't have the WebClient class in System.Net of the Compact Framework 3.5. What can I do? Thanks. EDIT: I've asked a clearer follow-up question: httpRequest, httpResponse, send GET through Stream and Receive the Result in C#
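
    A minimal sketch of one alternative, assuming the streaming server speaks plain HTTP and that HttpWebRequest with BeginGetResponse is available on the Compact Framework 3.5 target (it lives in System.Net there, even though WebClient does not). The URL is a placeholder.

        using System;
        using System.IO;
        using System.Net;
        using System.Text;

        class StreamingClientSketch
        {
            static void Main()
            {
                // Placeholder URL -- substitute the streaming server's address.
                HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://example.com/stream");
                request.Method = "GET";

                // BeginGetResponse plays the role WebClient.OpenReadAsync would play on the desktop framework.
                request.BeginGetResponse(OnResponse, request);

                Console.ReadLine(); // keep the process alive while the asynchronous read runs
            }

            static void OnResponse(IAsyncResult ar)
            {
                HttpWebRequest request = (HttpWebRequest)ar.AsyncState;
                using (WebResponse response = request.EndGetResponse(ar))
                using (Stream stream = response.GetResponseStream())
                {
                    byte[] buffer = new byte[4096];
                    int read;
                    while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
                    {
                        // Each chunk is handed over as the server pushes it.
                        Console.Write(Encoding.UTF8.GetString(buffer, 0, read));
                    }
                }
            }
        }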

    Read the article

  • Download resume support blocked by ISP?

    - by John Doe
    Can an ISP block resume support for downloads? I'm using IDM (Internet Download Manager) to download from the Internet from resume-supported websites, yet I am unable to resume downloads. I tried different computers with the same result, and turning off the firewall didn't have any effect. I was able to download with no issues until a couple of days ago. Another thing I noticed is that IDM used to open several connections to speed up my download, but now it can only open one connection. Also, when I download over my VPN, I am able to download and resume downloads with no problem.

    Read the article

  • Webclient using download file to grab file from server - handling exceptions

    - by baron
    Hello everyone, I have a web service in which I am handling POST and GET methods to provide upload/download functionality for some files in a client/server style architecture. Basically the user is able to click a button to download a specific file, make some changes in the app, then click the upload button to send it back. The problem I am having is with the download. Say the user expects 3 files: 1.txt, 2.txt and 3.txt, except 2.txt does not exist on the server. My server-side code looks like this:

        public class HttpHandler : IHttpHandler
        {
            public void ProcessRequest(HttpContext context)
            {
                if (context.Request.HttpMethod == "GET")
                {
                    GoGetIt(context);
                }
            }

            private static void GoGetIt(HttpContext context)
            {
                var fileInfoOfWhereTheFileShouldBe = new FileInfo(......);
                if (!fileInfoOfWhereTheFileShouldBe.RefreshExists())
                {
                    throw new Exception("Oh dear the file doesn't exist");
                }
                ...

    The problem is that when I run the application and use a WebClient on the client side to call DownloadFile, which in turn hits the code above, I get: "WebException was unhandled: The remote server returned an error: (500) Internal Server Error." (while debugging). If I attach to the browser and use http://localhost:xxx/1.txt I can step through the server-side code and throw the exception as intended. So I guess I'm wondering how I can handle the internal server error on the client side properly, so I can return something meaningful like "File doesn't exist". One thought was to wrap WebClient.DownloadFile(address, filename) in a try/catch, but I'm not sure that's the only error that can occur, i.e. the file not existing.
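
    A minimal sketch (not from the original question) of the client side, assuming the server is left as-is and therefore surfaces a missing file as a 500: catching WebException lets the client inspect the HTTP status code and react differently to a missing file, a server fault, or a connection problem. A cleaner variant, also only a suggestion, is to have the handler set context.Response.StatusCode = 404 instead of throwing, so "missing" and "genuinely broken" stop sharing the same status. The URL and file names below are placeholders.

        using System;
        using System.Net;

        class DownloadWithErrorHandlingSketch
        {
            static void Main()
            {
                using (var client = new WebClient())
                {
                    try
                    {
                        // Placeholder address and local file name.
                        client.DownloadFile("http://localhost:1234/2.txt", "2.txt");
                    }
                    catch (WebException ex)
                    {
                        var response = ex.Response as HttpWebResponse;
                        if (response != null && response.StatusCode == HttpStatusCode.NotFound)
                        {
                            Console.WriteLine("The file does not exist on the server.");
                        }
                        else if (response != null && response.StatusCode == HttpStatusCode.InternalServerError)
                        {
                            Console.WriteLine("The server failed while producing the file.");
                        }
                        else
                        {
                            // Connection refused, DNS failure, timeout, etc. -- no HTTP response at all.
                            Console.WriteLine("Download failed: " + ex.Status);
                        }
                    }
                }
            }
        }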

    Read the article

  • C# WebClient OpenRead url

    - by Octopus-Paul
    So I have this program that fetches a page using a short link (I used the Google URL shortener). To build my example I used the code from "Using WebClient in C# is there a way to get the URL of a site after being redirected?":

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.IO;
        using System.Net;

        namespace ConsoleApplication1
        {
            class Program
            {
                static void Main(string[] args)
                {
                    MyWebClient client = new MyWebClient();
                    client.OpenRead("http://tinyurl.com/345yj7x");
                    Uri uri = client.ResponseUri;
                    Console.WriteLine(uri.AbsoluteUri);
                    Console.Read();
                }
            }

            class MyWebClient : WebClient
            {
                Uri _responseUri;

                public Uri ResponseUri
                {
                    get { return _responseUri; }
                }

                protected override WebResponse GetWebResponse(WebRequest request)
                {
                    WebResponse response = base.GetWebResponse(request);
                    _responseUri = response.ResponseUri;
                    return response;
                }
            }
        }

    One thing I do not understand: when I call client.OpenRead("http://tinyurl.com/345yj7x"), does this download the page that the URL points to? If it does, I only need the final URL, so if there is a method that fetches only some headers, or only the URL, please let me know.
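
    OpenRead does request the target page; its content starts transferring as soon as the returned stream is read. A minimal sketch (not from the linked answer) of one way to obtain only the destination URL, assuming the shortener answers HEAD requests: disable automatic redirects and read the Location header, so no page body is fetched at all.

        using System;
        using System.Net;

        class ResolveShortUrlSketch
        {
            static void Main()
            {
                // Ask the shortener where it points without following the redirect,
                // so the target page body is never requested.
                var request = (HttpWebRequest)WebRequest.Create("http://tinyurl.com/345yj7x");
                request.Method = "HEAD";               // headers only, no body
                request.AllowAutoRedirect = false;     // keep the 301/302 instead of following it

                using (var response = (HttpWebResponse)request.GetResponse())
                {
                    Console.WriteLine((int)response.StatusCode);        // e.g. 301
                    Console.WriteLine(response.Headers["Location"]);    // the expanded URL
                }
            }
        }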

    Read the article

  • Limit WebClient DownloadFile maximum file size

    - by Jack Juiceson
    Hi everyone, in my ASP.NET project my main page receives a URL as a parameter, which I need to download internally and then process. I know that I can use WebClient's DownloadFile method, however I want to prevent a malicious user from supplying a URL to a huge file, which would create unnecessary traffic on my server. To avoid this, I'm looking for a way to set a maximum file size beyond which DownloadFile will not download. Thank you in advance, Jack
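
    WebClient.DownloadFile itself exposes no size cap, so the sketch below (an illustration under stated assumptions, not an established API) drops down to HttpWebRequest: it rejects responses whose declared Content-Length is too large and also enforces the cap while reading, since the header can be absent or wrong. The URL and the 5 MB limit are placeholders.

        using System;
        using System.IO;
        using System.Net;

        class BoundedDownloadSketch
        {
            const long MaxBytes = 5 * 1024 * 1024; // example cap: 5 MB

            static byte[] DownloadAtMost(string url, long maxBytes)
            {
                var request = (HttpWebRequest)WebRequest.Create(url);
                using (var response = (HttpWebResponse)request.GetResponse())
                {
                    // Reject early if the server declares an oversized body.
                    if (response.ContentLength > maxBytes)
                        throw new InvalidOperationException("Remote file exceeds the allowed size.");

                    using (var stream = response.GetResponseStream())
                    using (var buffered = new MemoryStream())
                    {
                        var buffer = new byte[8192];
                        int read;
                        while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
                        {
                            // ContentLength can be -1 (chunked) or simply wrong, so enforce the cap while reading too.
                            if (buffered.Length + read > maxBytes)
                                throw new InvalidOperationException("Remote file exceeds the allowed size.");
                            buffered.Write(buffer, 0, read);
                        }
                        return buffered.ToArray();
                    }
                }
            }

            static void Main()
            {
                byte[] data = DownloadAtMost("http://example.com/somefile", MaxBytes); // placeholder URL
                Console.WriteLine("Downloaded {0} bytes", data.Length);
            }
        }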

    Read the article

  • Static file download from browser breaking in varnish but works fine in Apache

    - by Ron
    I would first like to thank everyone at Server Fault for this great website; I often end up here while searching Google for various server-related issues and setups. I have an issue today, so I am posting here and hope the seniors can help me out. I set up a website on a dedicated server a few days ago, using Varnish 3 as the frontend to Apache 2 on a Debian Lenny server because the traffic was a bit high. The website offers several static file downloads of around 10-20 MB each. Everything looked fine for the first few days: I was checking from a 5 Mbps+ broadband connection and the file downloads completed in seconds. But today I realized that on a slow Internet connection the file downloads were breaking off. When I tried to download the files from the website using a browser, the transfer broke off after a minute or so. It kept happening again and again, so it had nothing to do with the Internet connection; the connection was around 512 kbps, so it was not dial-up speed but decent speed where files should download easily, if not fast. Then I thought of testing against the Apache backend port directly to see whether the problem still occurred. When I added the Apache port to the static file download URL, the files downloaded easily and did not break even once. I tried it several times to make sure it was not a coincidence, but every time I used the Apache port in the download URL it worked fine, while it broke every time with the normal link, which is routed through Varnish. So it seems Varnish has somehow caused the broken file downloads. Could anyone give any idea as to why this is happening and how to fix the problem? For more clarification, take this example: the Apache backend is on port 8008 and the Varnish frontend is on port 80. When I download, say, http://mywebsite.com/directory/filename.extension the download breaks off after a minute or so (I cannot be sure whether it is due to time or size; I am just assuming, and it may be some other reason). But when I download using http://mywebsite.com:8008/directory/filename.extension the file download does not break at all and completes fine. So it seems that Varnish, not Apache, is somehow breaking the file downloads. Does anybody have any idea why this is happening and how it can be fixed? Any help would be highly appreciated. My Varnish default.vcl is:

        backend apache {
            set backend.host = "127.0.0.1";
            set backend.port = "8008";
        }

        sub vcl_deliver {
            remove resp.http.X-Varnish;
            remove resp.http.Via;
            remove resp.http.Age;
            remove resp.http.Server;
            remove resp.http.X-Powered-By;
        }

    Read the article

  • Resuming downloads in Firefox

    - by Kim
    Unfortunately, Firefox has still failed to add an option to resume downloads. I've run into this problem SO MANY times, and in my previous searches I found posts saying Firefox was going to fix it; as of 3.6.3 they haven't. I just tried Free Download Manager (FDM) again, having the Firefox addon Flashgot hand off to it. The download gets passed to FDM and fails, giving the error message "access denied, invalid username or password". No password was required. The site I'm trying to get the file from is turbobit.net, which limits download speeds to 100 kb/sec and has a 59-second countdown before you get the link; I guess it is transparently using a password on its end. If I just download normally (save to disk) the download starts fine, but it fails after 30 minutes to an hour (always different), my Wi-Fi connection drops briefly, and I have to start all over, so I will never be able to download a large file. I also tried DTA instead of FDM with Flashgot, and I get an "access denied" message in DTA. Again, I reloaded, waited the 59 seconds, downloaded with Firefox, and the download starts fine. The failure message in the Firefox Downloads window is "source file at http... could not be read." Any help would be greatly appreciated. When is Firefox going to finally add the ability to resume downloads? Is there some other software I haven't found via Google that will work?

    Read the article

  • [C#] WebClient - Upload data, get stream

    - by Barguast
    I have a situation where I want to asynchronously write a series of bytes with WebClient (in much the same way as UploadDataAsync) and get back a readable response stream (in the same way as OpenReadAsync). You seem to be able to do each of the two individually, but not both together. Is there a way? Thanks!
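
    A minimal sketch of one workaround, assuming dropping from WebClient to HttpWebRequest is acceptable: a single request can both write the outgoing bytes and hand the response back as a readable stream. The URL, content type and payload are placeholders; the same pattern can be made asynchronous with BeginGetRequestStream and BeginGetResponse.

        using System;
        using System.IO;
        using System.Net;
        using System.Text;

        class UploadThenReadSketch
        {
            static void Main()
            {
                byte[] payload = Encoding.UTF8.GetBytes("example request body"); // placeholder bytes

                // Placeholder URL; one HttpWebRequest both sends a body and returns the response stream.
                var request = (HttpWebRequest)WebRequest.Create("http://example.com/upload");
                request.Method = "POST";
                request.ContentType = "application/octet-stream";
                request.ContentLength = payload.Length;

                using (var requestStream = request.GetRequestStream())
                {
                    requestStream.Write(payload, 0, payload.Length);
                }

                using (var response = (HttpWebResponse)request.GetResponse())
                using (var reader = new StreamReader(response.GetResponseStream()))
                {
                    // The response arrives as a stream, not a pre-buffered byte array.
                    Console.WriteLine(reader.ReadToEnd());
                }
            }
        }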

    Read the article

  • What is the best way to download files via HTTP using C#?

    - by Shamika
    Hi, in one of my applications I'm using the WebClient class to download files from a web server. Depending on the web server, the application sometimes downloads millions of documents. It seems that when there are lots of documents, WebClient doesn't scale up well performance-wise. It also appears that WebClient doesn't immediately close the connection it opened to the web server, even after it has successfully downloaded the particular document. I would like to know what other alternatives I have. Thanks, Shamika
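
    The lingering connections are usually HTTP keep-alive pooling rather than a leak, and reusing them is exactly what you want for millions of documents. A minimal sketch of one alternative (an illustration, not a benchmark-proven answer): stream each file with HttpWebRequest and raise the per-host connection limit, which defaults to two and commonly throttles large crawls. The URLs are placeholders.

        using System;
        using System.IO;
        using System.Net;

        class BulkDownloadSketch
        {
            static void Main()
            {
                // Only two concurrent connections per host are allowed by default; raising
                // the limit is one common first step for large crawls.
                ServicePointManager.DefaultConnectionLimit = 20;

                // Placeholder URLs -- in the real application these come from the web server's listing.
                string[] urls =
                {
                    "http://example.com/doc1.pdf",
                    "http://example.com/doc2.pdf"
                };

                foreach (string url in urls)
                {
                    var request = (HttpWebRequest)WebRequest.Create(url);
                    request.KeepAlive = true; // reuse the pooled connection instead of tearing it down per document

                    using (var response = (HttpWebResponse)request.GetResponse())
                    using (var stream = response.GetResponseStream())
                    using (var file = File.Create(Path.GetFileName(new Uri(url).LocalPath)))
                    {
                        var buffer = new byte[8192];
                        int read;
                        while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
                        {
                            file.Write(buffer, 0, read);
                        }
                    }
                }
            }
        }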

    Read the article

  • C# progress bar not synced with download (WebClient class)

    - by dominiquel
    Hi, I am coding a system which has a small FTP module inside; it's not the main feature, but it is needed. I link a progress bar to the WebClient events DownloadProgressChangedEventHandler and AsyncCompletedEventHandler. The progress bar increments fine, and the AsyncCompletedEventHandler launches a MessageBox (as intended). The problem is that the progress bar seems to fill too slowly: my MessageBox pops up at 100% (launched by the event handler), BUT when the MessageBox appears, the progress bar is only at roughly 80%, even though its .Value really is 100. My first thought was that Windows Vista added a "smooth" animation effect which makes the visible bar lag behind its true value. If any of you have experienced the same problem, thanks for your help.
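
    A minimal sketch of one workaround often suggested for exactly this Vista/7 behaviour (the themed progress bar animates increases but jumps instantly on decreases), offered as an assumption rather than a guaranteed fix: step one unit past the reported value and immediately back, so the painted bar matches the value the DownloadProgressChanged handler just set. The extension-method name is made up for the example.

        using System.Windows.Forms;

        static class ProgressBarExtensions
        {
            // Stepping past the target and back skips the easing animation of the themed control.
            public static void SetValueImmediate(this ProgressBar bar, int value)
            {
                if (value < bar.Maximum)
                {
                    bar.Value = value + 1;
                    bar.Value = value;
                }
                else
                {
                    bar.Maximum = bar.Maximum + 1;
                    bar.Value = bar.Maximum;
                    bar.Value = value;
                    bar.Maximum = bar.Maximum - 1;
                }
            }
        }

    In the DownloadProgressChanged handler this would be called as progressBar1.SetValueImmediate(e.ProgressPercentage), with the bar's Maximum left at 100.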

    Read the article

  • C# WebClient - View source question

    - by Jim
    I'm using a C# WebClient to post login details to a page and read back all the results. The page I am trying to load includes Flash (which, in the browser, gets translated into HTML); I'm guessing it's Flash to avoid being picked up by search engines. The Flash content I am interested in is just text (not an image or video), and when I use "View Selection Source" in Firefox I do actually see the text, within HTML, that I want. (Interestingly, when I view the source for the whole page I do not see that text; could this be related?) Currently, after I have posted my login details and loaded the HTML back, I see the page WITHOUT the Flash-generated HTML (just as if I had viewed source for the whole page). Thanks in advance, Jim. PS: I should point out that the POST itself is working; my login is successful.

    Read the article

  • Get html that is generated via AJAX in C# webclient

    - by WebDevHobo
    I often go to a site to look stuff up. I thought to myself: "Hold on. I can program. Why am I visiting this site manually when I can write a piece of software that does it for me?" And so I started. I'm using C#, so I found WebClient and Uri, and I've managed to get the source code for the site. The problem is that the specific data I'm looking for is generated via AJAX, after the page source has loaded. So that's my problem: how can I get that data, if it has to be requested via an AJAX call first?
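
    A minimal sketch of the usual approach, where every URL and header value is a placeholder rather than something taken from the site in question: find the request the page's JavaScript makes (the browser's network tools show it), then call that endpoint directly with WebClient. The response is typically JSON or an HTML fragment that can be parsed without running any script.

        using System;
        using System.Net;

        class AjaxEndpointSketch
        {
            static void Main()
            {
                // Hypothetical endpoint: discover the real URL by watching the browser's
                // network traffic while the page loads.
                const string endpoint = "http://example.com/ajax/lookup?id=42";

                using (var client = new WebClient())
                {
                    // Some servers only answer XHR-style requests when this header is present.
                    client.Headers.Add("X-Requested-With", "XMLHttpRequest");
                    client.Headers.Add("User-Agent", "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36");

                    string payload = client.DownloadString(endpoint);
                    Console.WriteLine(payload); // typically JSON or an HTML fragment, not a full page
                }
            }
        }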

    Read the article

  • Limit download usage for clients

    - by Kumar P
    I am maintaining a few Windows XP machines behind a RHEL 5 server, and I want to set a quota on download file size. How do I do it? I mean: on the LAN, user A's maximum download file size should be 300 MB, and user B's maximum download file size should be 200 MB. I want to block the download when a user tries to fetch a file larger than their limit; a user should not be allowed to download a file above 300 MB at all. Alternatively, is it possible to set a quota for the maximum amount downloaded per day? How can I do this?

    Read the article

  • C# Persistent WebClient

    - by Nullstr1ng
    I have a class written in C# (Windows Forms) It's a WebClient class which I intent to use in some website and for Logging In and navigation. Here's the complete class pastebin.com (the class has 197 lines so I just use pastebin. Sorry if I made a little bit harder for you to read the class, also below this post) The problem is, am not sure why it's not persistent .. I was able to log in, but when I navigate to other page (without leaving the domain), I was thrown back to log in page. Can you help me solving this problem? one issue though is, the site I was trying to connect is "HTTPS" protocol. I have not yet tested this on just a regular HTTP. Thank you in advance. /* * Web Client v1.2 * --------------- * Date: 12/17/2010 * author: Jayson Ragasa */ using System; using System.Collections; using System.Collections.Specialized; using System.Collections.Generic; using System.Text; using System.IO; using System.Net; using System.Web; namespace Nullstring.Modules.WebClient { public class WebClientLibrary { #region vars string _method = string.Empty; ArrayList _params; CookieContainer cookieko; HttpWebRequest req = null; HttpWebResponse resp = null; Uri uri = null; #endregion #region properties public string Method { set { _method = value; } } #endregion #region constructor public WebClientLibrary() { _method = "GET"; _params = new ArrayList(); cookieko = new CookieContainer(); } #endregion #region methods public void ClearParameter() { _params.Clear(); } public void AddParameter(string key, string value) { _params.Add(string.Format("{0}={1}", WebTools.URLEncodeString(key), WebTools.URLEncodeString(value))); } public string GetResponse(string URL) { StringBuilder response = new StringBuilder(); #region create web request { uri = new Uri(URL); req = (HttpWebRequest)WebRequest.Create(URL); req.Method = "GET"; req.GetLifetimeService(); } #endregion #region get web response { resp = (HttpWebResponse)req.GetResponse(); Stream resStream = resp.GetResponseStream(); int bytesReceived = 0; string tempString = null; int count = 0; byte[] buf = new byte[8192]; do { count = resStream.Read(buf, 0, buf.Length); if (count != 0) { bytesReceived += count; tempString = Encoding.UTF8.GetString(buf, 0, count); response.Append(tempString); } } while (count > 0); } #endregion return response.ToString(); } public string GetResponse(string URL, bool HasParams) { StringBuilder response = new StringBuilder(); #region create web request { uri = new Uri(URL); req = (HttpWebRequest)WebRequest.Create(URL); req.MaximumAutomaticRedirections = 20; req.AllowAutoRedirect = true; req.Method = this._method; req.Accept = "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8"; req.KeepAlive = true; req.CookieContainer = this.cookieko; req.UserAgent = "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_5_8; en-US) AppleWebKit/534.10 (KHTML, like Gecko) Chrome/8.0.552.224 Safari/534.10"; } #endregion #region build post data { if (HasParams) { if (this._method.ToUpper() == "POST") { string Parameters = String.Join("&", (String[])this._params.ToArray(typeof(string))); UTF8Encoding encoding = new UTF8Encoding(); byte[] loginDataBytes = encoding.GetBytes(Parameters); req.ContentType = "application/x-www-form-urlencoded"; req.ContentLength = loginDataBytes.Length; Stream stream = req.GetRequestStream(); stream.Write(loginDataBytes, 0, loginDataBytes.Length); stream.Close(); } } } #endregion #region get web response { resp = (HttpWebResponse)req.GetResponse(); Stream resStream = resp.GetResponseStream(); int bytesReceived = 0; 
string tempString = null; int count = 0; byte[] buf = new byte[8192]; do { count = resStream.Read(buf, 0, buf.Length); if (count != 0) { bytesReceived += count; tempString = Encoding.UTF8.GetString(buf, 0, count); response.Append(tempString); } } while (count > 0); } #endregion return response.ToString(); } #endregion } public class WebTools { public static string EncodeString(string str) { return HttpUtility.HtmlEncode(str); } public static string DecodeString(string str) { return HttpUtility.HtmlDecode(str); } public static string URLEncodeString(string str) { return HttpUtility.UrlEncode(str); } public static string URLDecodeString(string str) { return HttpUtility.UrlDecode(str); } } } UPDATE Dec 22GetResponse overload public string GetResponse(string URL) { StringBuilder response = new StringBuilder(); #region create web request { //uri = new Uri(URL); req = (HttpWebRequest)WebRequest.Create(URL); req.Method = "GET"; req.CookieContainer = this.cookieko; } #endregion #region get web response { resp = (HttpWebResponse)req.GetResponse(); Stream resStream = resp.GetResponseStream(); int bytesReceived = 0; string tempString = null; int count = 0; byte[] buf = new byte[8192]; do { count = resStream.Read(buf, 0, buf.Length); if (count != 0) { bytesReceived += count; tempString = Encoding.UTF8.GetString(buf, 0, count); response.Append(tempString); } } while (count 0); } #endregion return response.ToString(); } But still I got thrown back to login page. UPDATE: Dec 23 I tried listing the cookie and here's what I get at first, I have to login to a webform and this I have this Cookie JSESSIONID=368C0AC47305282CBCE7A566567D2942 then I navigated to another page (but on the same domain) I got a different Cooke? JSESSIONID=9FA2D64DA7669155B9120790B40A592C What went wrong? I use the code updated last Dec 22
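
    The changing JSESSIONID suggests the session cookie from the login response is not being replayed on the next request, so the server starts a new session each time. A minimal sketch of the usual cure (not the poster's class; the URLs and form field names are made up): subclass WebClient and attach one shared CookieContainer in GetWebRequest, so the login cookie rides along on every subsequent call, HTTPS included.

        using System;
        using System.Net;

        // Carries one cookie jar across every request made through this client.
        class CookieAwareWebClient : WebClient
        {
            private readonly CookieContainer _cookies = new CookieContainer();

            protected override WebRequest GetWebRequest(Uri address)
            {
                WebRequest request = base.GetWebRequest(address);
                HttpWebRequest httpRequest = request as HttpWebRequest;
                if (httpRequest != null)
                {
                    httpRequest.CookieContainer = _cookies;  // same jar for the login POST and later GETs
                    httpRequest.AllowAutoRedirect = true;
                }
                return request;
            }
        }

        class Program
        {
            static void Main()
            {
                CookieAwareWebClient client = new CookieAwareWebClient();

                // Hypothetical URLs and form fields -- substitute the real login form's action and field names.
                client.Headers[HttpRequestHeader.ContentType] = "application/x-www-form-urlencoded";
                client.UploadString("https://example.com/login", "username=me&password=secret");

                string memberPage = client.DownloadString("https://example.com/members/home");
                Console.WriteLine(memberPage.Length);
            }
        }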

    Read the article

  • What is the relationship between WebProxy & IWebProxy with respect to WebClient?

    - by Streamline
    I am creating an app (.NET 2.0) that uses WebClient to connect (DownloadData, etc.) to and from an HTTP web service. I am now adding a form to let proxy information either be stored explicitly or set to use the defaults, and I am a little confused about some things. First, some of the methods and properties available on WebProxy or IWebProxy are not on both; what is the difference here with respect to setting up how WebClient will behave when it is called? Secondly, do I have to tell WebClient to use the proxy information if I set it up using a WebProxy or IWebProxy instance elsewhere, or is it picked up automatically? Thirdly, when giving the user the option to use the default proxy (whatever is set in IE) and to use the default credentials (I assume also whatever is set in IE), are these two mutually exclusive, or do you only use default credentials when you have also used the default proxy? This gets to the whole difference between WebProxy and IWebProxy: WebRequest.DefaultWebProxy is an IWebProxy, but UseDefaultCredentials is not a member of IWebProxy, only of WebProxy, so in turn, how do you set the proxy on WebRequest.DefaultWebProxy if they are two different classes? Here is my current method to read the form settings stored by the user, but I am not sure if this is correct, not enough, overkill, or just wrong because of the mix of WebProxy and IWebProxy:

        private WebProxy _proxyInfo = new WebProxy();

        private WebProxy SetProxyInfo()
        {
            if (UseProxy)
            {
                if (UseIEProxy)
                {
                    // is doing this enough to set this as default for WebClient?
                    IWebProxy iProxy = WebRequest.DefaultWebProxy;
                    if (UseIEProxyCredentials)
                    {
                        _proxyInfo.UseDefaultCredentials = true;
                    }
                    else
                    {
                        // is doing this enough to set this as default credentials for WebClient?
                        WebRequest.DefaultWebProxy.Credentials = new NetworkCredential(ProxyUsername, ProxyPassword);
                    }
                }
                else
                {
                    // is doing this enough to set this as default for WebClient?
                    WebRequest.DefaultWebProxy = new WebProxy(ProxyAddress, ParseLib.StringToInt(ProxyPort));
                    if (UseIEProxyCredentials)
                    {
                        _proxyInfo.UseDefaultCredentials = true;
                    }
                    else
                    {
                        WebRequest.DefaultWebProxy.Credentials = new NetworkCredential(ProxyUsername, ProxyPassword);
                    }
                }
            }

            // Do I need WebClient to absorb this returned proxy info if I didn't set or use defaults?
            return _proxyInfo;
        }

    Is there any reason not to just scrap storing app-specific proxy information and only allow the app to use the default proxy information and credentials for the logged-in user? Will this ever not be enough when using HTTP? Part 2 question: how can I test whether the WebClient instance is using the proxy information or not?
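
    A minimal sketch, offered as an illustration rather than a definitive answer: IWebProxy is the small interface (GetProxy, IsBypassed, Credentials) that WebClient.Proxy and WebRequest.DefaultWebProxy are typed as, while WebProxy is one concrete implementation carrying extras such as UseDefaultCredentials. Nothing built elsewhere is inherited automatically unless it is assigned to WebRequest.DefaultWebProxy, so the least surprising pattern is to build the proxy once and assign it to each WebClient explicitly. The host, port and credentials below are hypothetical settings.

        using System;
        using System.Net;

        class ProxyConfigSketch
        {
            // IWebProxy is the interface both WebClient.Proxy and WebRequest.DefaultWebProxy use;
            // WebProxy is the concrete class you construct for an explicit proxy server.
            static IWebProxy BuildProxy(bool useSystemDefaults, string host, int port,
                                        string user, string password)
            {
                if (useSystemDefaults)
                {
                    // Whatever the system (IE) proxy settings say, with the logged-in user's credentials.
                    IWebProxy systemProxy = WebRequest.GetSystemWebProxy();
                    systemProxy.Credentials = CredentialCache.DefaultCredentials;
                    return systemProxy;
                }

                WebProxy explicitProxy = new WebProxy(host, port);
                explicitProxy.Credentials = new NetworkCredential(user, password);
                return explicitProxy;
            }

            static void Main()
            {
                // Hypothetical settings -- in the real app these would come from the options form.
                IWebProxy proxy = BuildProxy(false, "proxy.internal", 8080, "jack", "secret");

                using (WebClient client = new WebClient())
                {
                    client.Proxy = proxy;  // explicit assignment: no reliance on static defaults
                    string page = client.DownloadString("http://example.com/");  // placeholder URL
                    Console.WriteLine(page.Length);
                }
            }
        }

    One quick way to check whether the proxy is actually being used (again just a suggestion) is to point the settings at a local debugging proxy such as Fiddler and watch whether the request shows up there.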

    Read the article

  • IE8 Download's Writing to C:\?

    - by Dana
    Every time I download something using IE8 (running on Windows 7) I get asked where to save the file. I choose my Downloads folder, the download continues, and when it completes I get the following message: "C:\ is not accessible. Access Denied." So I can't access any files I download. Why on earth would IE8 be trying to write to the root, and where can I change this? I've checked the cache folder location and it's correct. Dana

    Read the article

  • Download all the links from a website at once [closed]

    - by user216112
    Possible Duplicate: How can I download an entire website. Is there any software that allows you to download all the links of a website at once? E.g.: I'm using the w3school.com site and want to download all the PHP tutorials at once. Someone told me "tglepote", but I have no idea what that is and Google returns nothing for it.

    Read the article

  • One time torrent download

    - by joaoc
    I know about BitTorrent but have never used it, since it tends to be mostly used for illegal downloads of movies and software. Typically, whenever I see a legitimate use for it (Linux distros) there is also a regular download option, so I haven't needed a client until today. There is a large (500 MB) file I want to download that appears to be accessible only via BitTorrent. For a one-time use, is there an online service that converts the torrent into a normal HTTP download requiring no special client software? If not, what is an easy-to-use, lightweight client to install and then uninstall on Windows XP for a one-time use?

    Read the article
