Search Results

Search found 43 results on 2 pages for 'downloadstring'.


  • WebClient.DownloadString() Not Producing Exact HTML

    - by Ryan Fuentes
    So here's the deal. I'm creating a spider bot for a website that scans all of its product pages and records the product data. I'm using C# and the WebClient class to download the HTML as a string. The site I'm crawling must be specially made, because the HTML I receive from WebClient.DownloadString() is different from the HTML I see when I view the page source in a browser. This seems intentional, because the only piece of information I can't get is the price. Does anyone know a workaround for this problem, or can anyone explain what is happening? Thanks.
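    One thing worth trying (an editorial sketch, not from the original post): some servers vary their markup by request headers, so sending a browser-like User-Agent may bring the downloaded HTML closer to what the browser shows. If the price is injected by client-side JavaScript, WebClient alone will never see it. The URL below is a placeholder.

        using System;
        using System.Net;

        class Scraper
        {
            static void Main()
            {
                using (var client = new WebClient())
                {
                    // Pretend to be a browser; some sites serve reduced HTML to unknown clients.
                    client.Headers[HttpRequestHeader.UserAgent] =
                        "Mozilla/5.0 (Windows NT 10.0; Win64; x64)";

                    // Hypothetical product page URL.
                    string html = client.DownloadString("http://example.com/product/123");
                    Console.WriteLine(html.Length);
                }
            }
        }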

    Read the article

  • The remote server returned an error: (404) Not Found.

    - by John
    I am running this piece of code to get the source of my web page. The problem is that this function returns a 404 error. Why?

        Private Function getPageSource(ByVal URL As String) As String
            Dim webClient As New System.Net.WebClient()
            Dim strSource As String = webClient.DownloadString(URL)
            webClient.Dispose()
            Return strSource
        End Function
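    A 404 is the server's answer rather than a WebClient fault, so the first step is usually to confirm exactly which URL is being requested and what the server sends back with the error. A minimal C# sketch (editorial, with a placeholder URL) that surfaces that information:

        using System;
        using System.IO;
        using System.Net;

        class PageSource
        {
            static string GetPageSource(string url)
            {
                using (var client = new WebClient())
                {
                    try
                    {
                        return client.DownloadString(url);
                    }
                    catch (WebException ex)
                    {
                        var response = ex.Response as HttpWebResponse;
                        if (response != null)
                        {
                            // Shows whether it really is a 404 and what the server returned with it.
                            Console.WriteLine("Status: " + (int)response.StatusCode);
                            using (var reader = new StreamReader(response.GetResponseStream()))
                                Console.WriteLine(reader.ReadToEnd());
                        }
                        throw;
                    }
                }
            }

            static void Main()
            {
                // Placeholder URL; substitute the page that is failing.
                Console.WriteLine(GetPageSource("http://example.com/").Length);
            }
        }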

    Read the article

  • C# How do I make sure WebClient allows the necessary time for the download

    - by every_answer_gets_a_point
        using (var client = new WebClient())
        {
            html = client.DownloadString(some_string);
            //do something
            html = client.DownloadString(some_string1);
            //do something
            html = client.DownloadString(some_string2);
            //do something
            html = client.DownloadString(some_string3);
            //do something
            // etc.
        }

    WebClient does not seem to allow itself enough time to download the entire source of the web page. How do I force it to wait until it finishes the download?
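    For what it's worth, DownloadString is synchronous and blocks until the whole response has been read, so a partial result usually points at a timeout or at the server itself. WebClient exposes no public Timeout property, but it can be subclassed to add one (the same idea as the ExtendedWebClient further down this page); the timeout values below are arbitrary assumptions.

        using System;
        using System.Net;

        // WebClient with a configurable request timeout (in milliseconds).
        public class PatientWebClient : WebClient
        {
            public int TimeoutMs { get; set; }

            public PatientWebClient()
            {
                TimeoutMs = 300000; // assumption: five minutes; the framework default is 100,000 ms
            }

            protected override WebRequest GetWebRequest(Uri address)
            {
                WebRequest request = base.GetWebRequest(address);
                if (request != null)
                    request.Timeout = TimeoutMs;
                return request;
            }
        }

        class Demo
        {
            static void Main()
            {
                using (var client = new PatientWebClient { TimeoutMs = 600000 })
                {
                    // Placeholder URL standing in for some_string.
                    string html = client.DownloadString("http://example.com/");
                    Console.WriteLine(html.Length);
                }
            }
        }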

    Read the article

  • Page_load event firing twice. User control not properly loading

    - by Phil
    Here is the code I am using to pull my user control (content.ascx):

        Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
            'load module
            If TheModule = "content" Then
                Dim control As UserControl = LoadControl("~\Modules\Content.ascx")
                Controls.Add(control)
            End If
        End Sub

    Within the user control is the following code (data access is taken care of by the DAAB, and I've replaced the SQL statements with 'sql'):

        Imports System.Data.SqlClient
        Imports System.Data
        Imports System.Web.Configuration
        Imports Microsoft.Practices.EnterpriseLibrary.Common
        Imports Microsoft.Practices.EnterpriseLibrary.Data

        Partial Class Modules_WebUserControl
            Inherits System.Web.UI.UserControl

            Dim db As Database = DatabaseFactory.CreateDatabase()
            Dim command As SqlCommand 'database
            Dim reader As IDataReader

            'general vars
            Dim pageid As Integer
            Dim did As Integer
            Dim contentid As Integer
            Dim dotpos As String
            Dim ext As String
            Dim content As String
            Dim folder As String
            Dim downloadstring As String

            Function getimage(ByVal strin As String) As String
                If strin > "" Then
                    dotpos = InStrRev(strin, ".")
                    ext = Right(strin, Len(strin) - dotpos)
                    getimage = ext & ".gif"
                Else
                    getimage = String.Empty
                End If
                Return getimage
            End Function

            Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles MyBase.Load, Me.Load
                'test
                Response.Write("(1 Test from within page_load)")

                'get session vars
                folder = Session("folder")
                pageid = Session("pageid")
                did = Session("did")

                'main content
                command = db.GetSqlStringCommand("sql")
                db.AddInParameter(command, "@pageid", DbType.Int32, pageid)
                reader = db.ExecuteReader(command)
                While reader.Read
                    If reader("content") IsNot DBNull.Value Then
                        content = Replace(reader("content"), Chr(38) + Chr(97) + Chr(109) + Chr(112) + Chr(59) + Chr(98) + Chr(104) + Chr(99) + Chr(112) + Chr(61) + Chr(49), "")
                        If reader("id") IsNot DBNull.Value Then
                            contentid = reader("id")
                        End If
                    Else
                        contentid = -1
                        content = String.Empty
                    End If
                End While
                Outputcontent.Text = content

                'contacts info
                If did = 0 Then
                    command = db.GetSqlStringCommand("sql")
                    db.AddInParameter(command, "@contentid", DbType.Int32, contentid)
                    reader = db.ExecuteReader(command)
                    While reader.Read()
                        Contactinforepeater.DataSource = reader
                        Contactinforepeater.DataBind()
                    End While
                End If
                If Not did = 0 Then
                    command = db.GetSqlStringCommand("sql")
                    db.AddInParameter(command, "@contentid", DbType.Int32, contentid)
                    db.AddInParameter(command, "@did", DbType.Int32, did)
                    reader = db.ExecuteReader(command)
                    While reader.Read
                        Contactinforepeater.DataSource = reader
                        Contactinforepeater.DataBind()
                    End While
                End If

                'downloads box
                command = db.GetSqlStringCommand("sql")
                db.AddInParameter(command, "@contentid", DbType.Int32, contentid)
                reader = db.ExecuteReader(command)
                While reader.Read
                    If reader("filename") IsNot DBNull.Value Then
                        downloadstring += "<a href='/documents/" & folder & "/" & reader("filename") & "'>"
                        downloadstring += "<img src=images/" & getimage(reader("filename")) & " border=0 align=absmiddle />"
                    End If
                    If reader("filesize") IsNot DBNull.Value Then
                        downloadstring += Convert.ToInt32((reader("filesize") / 1000)) & "kb - "
                    End If
                    If reader("filename") IsNot DBNull.Value Then
                        downloadstring += "<a href='/documents/" & Session("folder") & "/" & reader("filename") & "'>" & reader("description") & "</a><br />"
                    End If
                End While

                Dim downloadsarray As ArrayList
                downloadsarray = New ArrayList
                If downloadstring IsNot Nothing Then
                    downloadsarray.Add(downloadstring)
                End If
                If downloadsarray.Count > 0 Then
                    DownloadsRepeater.DataSource = downloadsarray
                    DownloadsRepeater.DataBind()
                End If

                'get links
                command = db.GetSqlStringCommand("sql")
                db.AddInParameter(command, "@contentid", DbType.Int32, contentid)
                reader = db.ExecuteReader(command)
                While reader.Read
                    Linksrepeater.DataSource = reader
                    Linksrepeater.DataBind()
                End While
            End Sub
        End Class

    Now, instead of seeing my page content and what should be inside the repeaters, all I get is the output of Response.Write("(1 Test from within page_load)") twice: (1 Test from within page_load)(1 Test from within page_load). This leads me to believe the Page_Load is firing twice, but not properly displaying all the information. Please can one of you willing experts help me get this working? Thanks a lot in advance.

    Read the article

  • WebClient Lost my Session

    - by kamiar3001
    Hi folks, I have a problem. First of all, look at my web service method:

        [WebMethod(), ScriptMethod(ResponseFormat = ResponseFormat.Json)]
        public string GetPageContent(string VirtualPath)
        {
            WebClient client = new WebClient();
            string content = string.Empty;
            client.Encoding = System.Text.Encoding.UTF8;
            try
            {
                if (VirtualPath.IndexOf("______") > 0)
                    content = client.DownloadString(HttpContext.Current.Request.UrlReferrer.AbsoluteUri.Replace("Main.aspx", VirtualPath.Replace("__", ".")));
                else
                    content = client.DownloadString(HttpContext.Current.Request.UrlReferrer.AbsoluteUri.Replace("Main.aspx", VirtualPath));
            }
            catch
            {
                content = "Not Found";
            }
            return content;
        }

    As you can see, the web service method reads and buffers a page from localhost. I use it to add some Ajax functionality to my web site, and it works fine, except that Client.DownloadString(..) loses the session: all the session values related to that page come back null. To describe it further, at Page_Load in the page that I load from my web service I set a session value: HttpContext.Current.Session[E_ShopData.Constants.SessionKey_ItemList] = result; but when I click a button in the page this session value is null, meaning the session is not carried over. How can I solve this problem? The web service is called by some jQuery code like the following:

        $.ajax({
            type: "Post",
            url: "Services/NewE_ShopServices.asmx" + "/" + "GetPageContent",
            data: "{" + "VirtualPath" + ":" + mp + "}",
            contentType: "application/json; charset=utf-8",
            dataType: "json",
            complete: hideBlocker,
            success: LoadAjaxSucceeded,
            async: true,
            cache: false,
            error: AjaxFailed
        });
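    A likely explanation (editorial note, not from the post): the WebClient call is a brand-new HTTP request that carries no cookies, so the page it fetches runs in a brand-new session rather than the caller's. One hedged workaround is to forward the caller's cookie header, sketched below; note that ASP.NET serialises requests that hold a read-write session, so the inner request can block if the outer one keeps the session locked.

        using System.Net;
        using System.Web;

        public static class PageFetcher
        {
            // Sketch only: pass the caller's cookies (including the ASP.NET session cookie)
            // along to the inner request so both run in the same session.
            public static string DownloadWithCallerSession(string url)
            {
                using (var client = new WebClient())
                {
                    client.Encoding = System.Text.Encoding.UTF8;
                    string cookies = HttpContext.Current.Request.Headers["Cookie"];
                    if (!string.IsNullOrEmpty(cookies))
                        client.Headers.Add(HttpRequestHeader.Cookie, cookies);
                    return client.DownloadString(url);
                }
            }
        }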

    Read the article

  • The remote server returned an error: (401) Unauthorized.

    - by Zaidman
    Hi, I'm trying to get the HTML code of a certain web page. I have a username and a password that are correct, but I still can't get it to work. This is my code:

        private void buttondownloadfile_Click(object sender, EventArgs e)
        {
            NetworkCredential nc = new NetworkCredential("?", "?", "http://cdrs.globalpopsvoip.com/0000069/20091229/20091228_20091228.CDR");
            WebClient client = new WebClient();
            client.Credentials = nc;
            String htmlCode = client.DownloadString("http://cdrs.globalpopsvoip.com/0000069/20091229/20091228_20091228.CDR");
            MessageBox.Show(htmlCode);
        }

    The MessageBox is just to test it. The problem is that every time I get to this line: String htmlCode = client.DownloadString("http://cdrs.globalpopsvoip.com/0000069/20091229/20091228_20091228.CDR"); I get an exception: The remote server returned an error: (401) Unauthorized. How do I fix this?
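    One detail worth checking (editorial sketch): the third argument of NetworkCredential is a domain, not a URL, so the credentials above may never be applied to the request. A common alternative is a CredentialCache keyed to the URL and authentication scheme; the scheme below is assumed to be Basic, and the user name and password are placeholders.

        using System;
        using System.Net;

        class Downloader
        {
            static void Main()
            {
                var url = new Uri("http://cdrs.globalpopsvoip.com/0000069/20091229/20091228_20091228.CDR");

                // Assumption: the server uses Basic authentication; use "Digest" or "NTLM"
                // instead if that is what it actually asks for.
                var cache = new CredentialCache();
                cache.Add(url, "Basic", new NetworkCredential("user", "password"));

                using (var client = new WebClient())
                {
                    client.Credentials = cache;
                    string body = client.DownloadString(url);
                    Console.WriteLine(body);
                }
            }
        }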

    Read the article

  • How can I pause a BackgroundWorker? Or something similar...

    - by juan
    I was using a BackgroundWorker to download some web sites by calling WebClient.DownloadString inside a loop. I wanted the user to have the option to cancel in the middle of downloading, so I called CancelAsync whenever I found that CancellationPending was set in the middle of the loop. But then I noticed that DownloadString sometimes just freezes, so I decided to use DownloadStringAsync instead (all this inside the other thread created by the BackgroundWorker). And since I don't want to rewrite my whole code by having to exit the loop and the function after calling DownloadStringAsync, I made a while loop right after calling it that does nothing but check a bool Stop variable, which I set to true either when the DownloadStringCompleted event handler is called or when the user requests to cancel the operation. Now, the weird thing is that it works fine in the debug build, but in the release build the program freezes in that while loop as if it were running on the main thread.
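    Two hedged observations (editorial, not from the post): a tight loop polling a plain bool can freeze in release builds because the JIT is free to cache the flag unless it is marked volatile, and busy-waiting burns the thread for no benefit. A sketch of one alternative that waits on an event handle instead of spinning, and still supports cancellation:

        using System;
        using System.Net;
        using System.Threading;

        class CancellableDownload
        {
            // Downloads a page but lets the caller cancel; returns null if cancelled or failed.
            // Sketch only; names and structure are not from the original code.
            static string DownloadOrCancel(string url, WaitHandle cancelSignal)
            {
                using (var client = new WebClient())
                using (var done = new ManualResetEvent(false))
                {
                    string result = null;
                    client.DownloadStringCompleted += (s, e) =>
                    {
                        if (e.Error == null && !e.Cancelled)
                            result = e.Result;
                        done.Set();
                    };
                    client.DownloadStringAsync(new Uri(url));

                    // Block on real synchronisation primitives instead of polling a bool flag.
                    if (WaitHandle.WaitAny(new WaitHandle[] { done, cancelSignal }) == 1)
                    {
                        client.CancelAsync();
                        done.WaitOne(); // the completed event still fires, with Cancelled = true
                    }
                    return result;
                }
            }

            static void Main()
            {
                using (var cancel = new ManualResetEvent(false)) // set this to request cancellation
                {
                    string html = DownloadOrCancel("http://example.com/", cancel);
                    Console.WriteLine(html == null ? "cancelled or failed" : html.Length.ToString());
                }
            }
        }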

    Read the article

  • How to use a loop to download HTML with paging?

    - by Nai
    I want to loop through this URL and download the HTML: https://www.googleapis.com/customsearch/v1?key=AIzaSyAAoPQprb6aAV-AfuVjoCdErKTiJHn-4uI&cx=017576662512468239146:omuauf_lfve&q=" + searchTermFormat + "&num=10" + "&start=" + i. The start and num parameters control the paging, so if &start=2 and &num=10, it will scrape 10 results from page 2. Given that Google has a maximum of num=10, how can I write a loop that pages through the results and scrapes the first 10 pages? This is what I have so far, which only scrapes the first page:

        //input search term
        Console.WriteLine("What is your search query?:");
        string searchTerm = Console.ReadLine();

        //concatenate the strings using the + symbol to make it URL friendly for Google
        string searchTermFormat = searchTerm.Replace(" ", "+");

        //create a new instance of WebClient and use the DownloadString method to download the HTML
        WebClient client = new WebClient();
        int i = 1;
        string Json = client.DownloadString("https://www.googleapis.com/customsearch/v1?key=AIzaSyAAoPQprb6aAV-AfuVjoCdErKTiJHn-4uI&cx=017576662512468239146:omuauf_lfve&q=" + searchTermFormat + "&num=10" + "&start=" + i);

        //create a new instance of JavaScriptSerializer and deserialise the desired content
        JavaScriptSerializer js = new JavaScriptSerializer();
        GoogleSearchResults results = js.Deserialize<GoogleSearchResults>(Json);

        //output results to console
        Console.WriteLine(js.Serialize(results));
        Console.ReadLine();
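    A sketch of one way to page through the results (editorial; GoogleSearchResults is the question's own model class, and the key and cx values are copied from it). With num=10, page n begins at start = 1 + (n - 1) * 10, so the first ten pages are start = 1, 11, 21, ..., 91:

        using System;
        using System.Collections.Generic;
        using System.Net;

        class PagedSearch
        {
            static void Main()
            {
                Console.WriteLine("What is your search query?:");
                string searchTermFormat = Console.ReadLine().Replace(" ", "+");

                var pages = new List<string>();
                using (var client = new WebClient())
                {
                    for (int start = 1; start <= 91; start += 10) // pages 1 through 10
                    {
                        string json = client.DownloadString(
                            "https://www.googleapis.com/customsearch/v1?key=AIzaSyAAoPQprb6aAV-AfuVjoCdErKTiJHn-4uI" +
                            "&cx=017576662512468239146:omuauf_lfve" +
                            "&q=" + searchTermFormat + "&num=10&start=" + start);
                        pages.Add(json);
                        // Each page can then be deserialized exactly as in the question:
                        // GoogleSearchResults results = js.Deserialize<GoogleSearchResults>(json);
                    }
                }
                Console.WriteLine("Downloaded " + pages.Count + " pages of results.");
            }
        }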

    Read the article

  • Does Google not allow webclients?

    - by every_answer_gets_a_point
    I have the following:

        string html_string = "http://www.google.com/search?sourceid=chrome&ie=UTF-8&q=pharma";
        string html;
        html = new WebClient().DownloadString(html_string);

    When I check the length of html, it contains only the first 28435 characters. Is it possible that Google is not allowing WebClient access?

    Read the article

  • Web request returns "DOS"

    - by jlezard
    I am getting a "DOS" instead of the html string .... let getHtmlBasic (uri :System.Uri ) = use client = new WebClient() client.DownloadString( uri) let uri = System.Uri( "http://www.b-a-r-f.com/" ) getHtmlBasic uri This gives a string, "DOS" Lol what the ? All other websites seems to work ...

    Read the article

  • C#: adding two strings

    - by every_answer_gets_a_point
    I am doing:

        html = new WebClient().DownloadString("http://www.google.com/search?sourceid=chrome&ie=UTF-8&q=" + biocompany);

    and I am getting the error: Error 1 Operator '&' cannot be applied to operands of type 'string' and 'string'. But I am not even using the & operator! Please help!
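    For what it's worth (editorial note): that compiler error only appears when & is used between two string expressions somewhere in the file, which is VB-style concatenation; the & characters inside the quoted URL are harmless. In C#, strings are joined with +, and escaping the query value keeps it URL-safe. A sketch with a placeholder value for biocompany:

        using System;
        using System.Net;

        class Search
        {
            static void Main()
            {
                string biocompany = "some company"; // placeholder; biocompany is the poster's variable

                using (var client = new WebClient())
                {
                    // + (not VB's &) concatenates strings in C#; EscapeDataString handles
                    // spaces and symbols in the query value.
                    string html = client.DownloadString(
                        "http://www.google.com/search?sourceid=chrome&ie=UTF-8&q=" +
                        Uri.EscapeDataString(biocompany));
                    Console.WriteLine(html.Length);
                }
            }
        }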

    Read the article

  • Proxy support in VB.Net

    - by CFP
    I'm running the following code to check for updates in my software, and I wonder whether VB.Net will automatically use the computer's proxy settings:

        Dim CurrentVersion As String = (New System.Net.WebClient).DownloadString("URL/version.txt")

    If not, how can I adapt it to use proxy settings?
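    As far as I know, WebClient already uses the system proxy settings by default (WebRequest.DefaultWebProxy, taken from the machine's Internet settings), so the code above should normally go through the configured proxy. If the proxy needs credentials, or a specific proxy must be used, the Proxy property can be set explicitly; a hedged C# sketch with placeholder addresses:

        using System;
        using System.Net;

        class UpdateCheck
        {
            static void Main()
            {
                using (var client = new WebClient())
                {
                    // Reuse the system proxy but attach the logged-in user's credentials.
                    IWebProxy proxy = WebRequest.GetSystemWebProxy();
                    proxy.Credentials = CredentialCache.DefaultCredentials;
                    client.Proxy = proxy;

                    // Or point at an explicit proxy instead (hypothetical host and port):
                    // client.Proxy = new WebProxy("proxy.example.com", 8080);

                    // Placeholder for the question's "URL/version.txt".
                    string currentVersion = client.DownloadString("http://example.com/version.txt");
                    Console.WriteLine(currentVersion);
                }
            }
        }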

    Read the article

  • Is there a better way to consume an ASP.NET Web API call in an MVC controller?

    - by davidisawesome
    In a new project I am creating for my work I am creating a fairly large ASP.NET Web API. The API will be in a separate Visual Studio solution that also contains all of my business logic and database interactions, as well as the Model classes. In the test application I am creating (which is ASP.NET MVC 4), I want to be able to hit an API URL I defined from the controller and cast the returned JSON to a Model class. The reason behind this is that I want to take advantage of strongly typing my views to a Model. This is all still at a proof-of-concept stage, so I have not done any performance testing on it, but I am curious whether what I am doing is a good practice, or if I am crazy for even going down this route. Here is the code in the client controller:

        public class HomeController : Controller
        {
            protected string dashboardUrlBase = "http://localhost/webapi/api/StudentDashboard/";

            public ActionResult Index() //This view is strongly typed against User
            {
                //testing against Joe Bob
                string adSAMName = "jBob";

                WebClient client = new WebClient();
                string url = dashboardUrlBase + "GetUserRecord?userName=" + adSAMName;

                //'User' is a Model class that I have defined.
                User result = JsonConvert.DeserializeObject<User>(client.DownloadString(url));

                return View(result);
            }
            .
            .
            .
        }

    If I choose to go this route, another thing to note is that I am loading several partial views on this page (as I will also do on subsequent pages). The partial views are loaded via an $.ajax call that hits this controller and does basically the same thing as the code above: instantiate a new WebClient, define the URL to hit, deserialize the result and cast it to a Model class. So it is possible (and likely) that I could be performing the same actions 4-5 times for a single page. Is there a better method to do this that will let me keep strongly typed views and do the work on the server rather than on the client (just a preference, since I can write C# faster than I can write JavaScript)?
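    One commonly suggested refinement (editorial sketch, keeping the question's hypothetical URL, its User model and Json.NET): share a single HttpClient for the whole application and make the action asynchronous, so the MVC worker thread is not blocked while the API call is in flight.

        using System.Net.Http;
        using System.Threading.Tasks;
        using System.Web.Mvc;
        using Newtonsoft.Json;

        public class HomeController : Controller
        {
            // One shared HttpClient avoids exhausting sockets with a new client per request.
            private static readonly HttpClient ApiClient = new HttpClient();

            protected string dashboardUrlBase = "http://localhost/webapi/api/StudentDashboard/";

            public async Task<ActionResult> Index()
            {
                string adSAMName = "jBob"; // same test value as in the question
                string url = dashboardUrlBase + "GetUserRecord?userName=" + adSAMName;

                // 'User' is the Model class defined in the question's API solution.
                string json = await ApiClient.GetStringAsync(url);
                User result = JsonConvert.DeserializeObject<User>(json);

                return View(result);
            }
        }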

    Read the article

  • C# WebClient only downloads partial html

    - by H4mm3rHead
    Hi, I am working on a scraping app. I wanted to try to get it to work but ran into a problem. I have replaced the original scraping destination in the code below with Google's web page, just for testing. It seems that my download doesn't get everything; I noticed that the body and html tags are missing their closing tags. How do I get it to download everything? What's wrong with my sample code:

        string filename = "test.html";
        WebClient client = new WebClient();
        string searchTerm = HttpUtility.UrlEncode(textBox2.Text);
        client.QueryString.Add("q", searchTerm);
        client.QueryString.Add("hl", "en");
        string data = client.DownloadString("http://www.google.com/search");
        StreamWriter writer = new StreamWriter(filename, false, Encoding.Unicode);
        writer.Write(data);
        writer.Flush();
        writer.Close();

    Read the article

  • Problem running System.Net.WebClient and Process.Start off a control event

    - by Rob
    The following code causes my VS 2008 WPF project to hang, and I'm not sure why. Both Part 1 and Part 2 work perfectly fine independently, but when I run them together from a control event (clicking a button, for example) the program hangs. I've also tried ShellExecute for Part 2, with the same results. However, this code runs fine from the form's Loaded event. Any insights would be truly appreciated.

        Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.Windows.RoutedEventArgs) Handles Button1.Click
            'Part 1
            Dim myWebClient As System.Net.WebClient = New System.Net.WebClient
            Dim CurrentDataFileContents As String = myWebClient.DownloadString("http://www.xyz.com")
            myWebClient.Dispose()

            'Part 2
            System.Diagnostics.Process.Start("http://www.test.com")
        End Sub

    Read the article

  • Characters in string changed after downloading HTML from the internet.

    - by Callum Rogers
    Using the following code, I can download the HTML of a page from the internet:

        WebClient wc = new WebClient();
        // ....
        string downloadedFile = wc.DownloadString("http://www.myurl.com/");

    However, sometimes the file contains "interesting" characters like é to é, ? to ↠and ????? to フシギダãƒ. I think it may be something to do with different Unicode encodings, as each character gets changed into two new ones, perhaps each character being split in half, but I have very little knowledge in this area. What do you think is wrong?
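    Each character turning into two or three is the classic sign of UTF-8 bytes being decoded with a single-byte default encoding. WebClient.Encoding controls how DownloadString turns the response bytes into a string, so matching it to the page's actual encoding usually fixes this; the sketch below simply assumes the page is served as UTF-8.

        using System;
        using System.Net;
        using System.Text;

        class Download
        {
            static void Main()
            {
                using (var wc = new WebClient())
                {
                    // DownloadString decodes the bytes with wc.Encoding, which defaults to the
                    // system ANSI code page and therefore mangles multi-byte UTF-8 characters.
                    wc.Encoding = Encoding.UTF8; // assumption: the page really is UTF-8

                    string downloadedFile = wc.DownloadString("http://www.myurl.com/");
                    Console.WriteLine(downloadedFile.Length);
                }
            }
        }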

    Read the article

  • Avoiding a NullReferenceException

    - by Nikhil K
    I have used this code for extracting URLs from a web page, but on the 'foreach' line it throws an "Object reference not set to an instance of an object" exception. What is the problem? How can I correct it?

        WebClient client = new WebClient();
        string url = "http://www.google.co.in/search?hl=en&q=java&start=10&sa=N";
        string source = client.DownloadString(url);
        HtmlDocument doc = new HtmlDocument();
        doc.LoadHtml(source);
        foreach (HtmlNode link in doc.DocumentNode.SelectNodes("//a[@href and @rel='nofollow']"))
        {
            Console.WriteLine(link.Attributes["href"].Value);
        }
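    One likely cause (editorial note): Html Agility Pack's SelectNodes returns null, not an empty collection, when the XPath matches nothing, so iterating its result directly throws exactly this exception. A sketch with a null check:

        using System;
        using System.Net;
        using HtmlAgilityPack;

        class LinkExtractor
        {
            static void Main()
            {
                using (var client = new WebClient())
                {
                    string source = client.DownloadString(
                        "http://www.google.co.in/search?hl=en&q=java&start=10&sa=N");

                    var doc = new HtmlDocument();
                    doc.LoadHtml(source);

                    // SelectNodes yields null when nothing matches the XPath expression.
                    var links = doc.DocumentNode.SelectNodes("//a[@href and @rel='nofollow']");
                    if (links == null)
                    {
                        Console.WriteLine("No matching links found.");
                        return;
                    }

                    foreach (HtmlNode link in links)
                        Console.WriteLine(link.GetAttributeValue("href", ""));
                }
            }
        }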

    Read the article

  • The underlying connection was closed: An unexpected error occurred on a receive

    - by Dave
    Using a console app, which runs as a scheduled task on Azure, to call a long(ish) running web page on Azure. Problem after a few minutes: The underlying connection was closed: An unexpected error occurred on a receive.

        //// used as WebClient doesn't support timeout
        public class ExtendedWebClient : WebClient
        {
            public int Timeout { get; set; }

            protected override WebRequest GetWebRequest(Uri address)
            {
                HttpWebRequest request = (HttpWebRequest)base.GetWebRequest(address);
                if (request != null)
                    request.Timeout = Timeout;
                request.KeepAlive = false;
                request.ProtocolVersion = HttpVersion.Version10;
                return request;
            }

            public ExtendedWebClient()
            {
                Timeout = 1000000; // in ms.. the standard is 100,000
            }
        }

        class Program
        {
            static void Main(string[] args)
            {
                var taskUrl = "http://www.secret.com/secret.aspx";

                // create a webclient and issue an HTTP get to our url
                using (ExtendedWebClient httpRequest = new ExtendedWebClient())
                {
                    var output = httpRequest.DownloadString(taskUrl);
                }
            }
        }

    Read the article

  • C# Thread issues

    - by Mike
    What I have going on is a ListView being dynamically created from a previous button click. Then it starts a BackgroundWorker, which should clear out the ListView and populate it with new information every 30 seconds. I continuously get: Cross-thread operation not valid: Control 'listView2' accessed from a thread other than the thread it was created on.

        private void watcherprocess1Updatelist()
        {
            listView2.Items.Clear();
            string betaFilePath1 = @"C:\Alpha\watch1\watch1config.txt";
            using (FileStream fs = new FileStream(betaFilePath1, FileMode.Open))
            using (StreamReader rdr = new StreamReader((fs)))
            {
                while (!rdr.EndOfStream)
                {
                    string[] betaFileLine = rdr.ReadLine().Split(',');
                    using (WebClient webClient = new WebClient())
                    {
                        string urlstatelevel = betaFileLine[0];
                        string html = webClient.DownloadString(urlstatelevel);
                        File.AppendAllText(@"C:\Alpha\watch1\specificconfig.txt", html);
                    }
                }
            }
        }
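    The usual explanation is that Windows Forms controls may only be touched from the thread that created them, and the BackgroundWorker runs this method on a worker thread. A hedged sketch of marshalling just the UI work back with Control.Invoke (these helper methods are assumed to live in the same form as listView2; doing the UI updates from the worker's ProgressChanged or RunWorkerCompleted handlers is an equally common alternative):

        // Only the ListView manipulation is sent back to the UI thread; the downloading
        // can stay on the BackgroundWorker thread.
        private void ClearListSafely()
        {
            if (listView2.InvokeRequired)
            {
                listView2.Invoke(new MethodInvoker(ClearListSafely));
                return;
            }
            listView2.Items.Clear();
        }

        private void AddItemSafely(string text)
        {
            if (listView2.InvokeRequired)
            {
                listView2.Invoke(new MethodInvoker(() => AddItemSafely(text)));
                return;
            }
            listView2.Items.Add(text);
        }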

    Read the article

  • Question about System.Net

    - by backdoor
    Hi all. When I use some code like this:

        System.Net.WebClient objClient = new WebClient();
        string url = "http://google.com";
        objClient.DownloadString(url);

    it takes a few seconds before the connection is established and the download starts. I reinstalled Windows yesterday; before that I didn't have this problem. Sometimes when I reinstall Windows this problem occurs. Does anybody know why this happens? Thanks all, and sorry for my bad English.
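    A delay of several seconds before the first request often comes from automatic proxy detection rather than from the download itself, which would also explain why it appears or disappears after a Windows reinstall. A hedged sketch that skips the proxy lookup when no proxy is actually needed:

        using System;
        using System.Net;

        class QuickStart
        {
            static void Main()
            {
                using (var objClient = new WebClient())
                {
                    // Setting Proxy to null bypasses automatic proxy script detection,
                    // which can stall the first request on a freshly installed machine.
                    objClient.Proxy = null;

                    string page = objClient.DownloadString("http://google.com");
                    Console.WriteLine(page.Length);
                }
            }
        }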

    Read the article

  • Problem with Google AJAX Search API RESTful interface

    - by robert_d
    When I send the following query http://ajax.googleapis.com/ajax/services/search/local?v=1.0&q=coffee%20New%20York%20NY using the C# WebClient.DownloadString function or an ordinary web browser, I get JSON data that is different from the data for the same query issued through JavaScript and the Google AJAX Search API. From the REST service I get the following url field http://www.google.com/maps/place?source003duds0026q003dcoffee0026cid003d13245583795745066822 but from the JavaScript query I get this url field http://www.google.com/maps/place?source=uds&q=coffee&cid=13245583795745066822 The problem with the REST service answer is that the url it gives points to a web page with the error message "We currently do not support the location". What am I doing wrong?

    Read the article

  • WebClient - The remote server returned an error: (403) Forbidden.

    - by daniel
    Opening a public page from a browser works fine. Downloading the same page using WebClient throws (403) Forbidden. What is going on here? Here is a quick copy/paste example (used in a console app) pointing to a specific page on the web:

        try
        {
            WebClient webClient = new WebClient();
            string ss = webClient.DownloadString("http://he.wikisource.org/wiki/%D7%A9%D7%95%D7%9C%D7%97%D7%9F_%D7%A2%D7%A8%D7%95%D7%9A_%D7%90%D7%95%D7%A8%D7%97_%D7%97%D7%99%D7%99%D7%9D_%D7%90_%D7%90");
        }
        catch (Exception ex)
        {
            throw;
        }
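    A likely cause (editorial note): WebClient sends no User-Agent header by default, and many sites, the Wikimedia projects among them, answer such requests with 403 Forbidden. Adding an identifying User-Agent (the value below is a placeholder) is usually enough:

        using System;
        using System.Net;
        using System.Text;

        class WikiDownload
        {
            static void Main()
            {
                using (var webClient = new WebClient())
                {
                    // Identify the client; requests without a User-Agent are often rejected.
                    webClient.Headers[HttpRequestHeader.UserAgent] =
                        "ExampleDownloader/1.0 (contact@example.com)"; // placeholder value
                    webClient.Encoding = Encoding.UTF8; // assumption: the Hebrew page is served as UTF-8

                    string ss = webClient.DownloadString(
                        "http://he.wikisource.org/wiki/%D7%A9%D7%95%D7%9C%D7%97%D7%9F_%D7%A2%D7%A8%D7%95%D7%9A_%D7%90%D7%95%D7%A8%D7%97_%D7%97%D7%99%D7%99%D7%9D_%D7%90_%D7%90");
                    Console.WriteLine(ss.Length);
                }
            }
        }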

    Read the article
