Search Results

Search found 2747 results on 110 pages for 'foxit reader'.

Page 48 of 110

  • Render User Controls and call their Page_Load

    - by David Murdoch
    The following code will not work because the controls (page1, page2, page3) require that their "Page_Load" event gets called. using System; using System.IO; using System.Web; using System.Web.UI; using System.Web.UI.WebControls; using iTextSharp.text; using iTextSharp.text.html.simpleparser; using iTextSharp.text.pdf; public partial class utilities_getPDF : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { // add user controls to the page AddContentControl("controls/page1"); AddContentControl("controls/page2"); AddContentControl("controls/page3"); // set up the response to download the rendered PDF... Response.ContentType = "application/pdf"; Response.ContentEncoding = System.Text.Encoding.UTF8; Response.AddHeader("Content-Disposition", "attachment;filename=FileName.pdf"); Response.Cache.SetCacheability(HttpCacheability.NoCache); // get the rendered HTML of the page System.IO.StringWriter stringWrite = new StringWriter(); System.Web.UI.HtmlTextWriter htmlWrite = new HtmlTextWriter(stringWrite); this.Render(htmlWrite); StringReader reader = new StringReader(stringWrite.ToString()); // write the PDF to the OutputStream... Document doc = new Document(PageSize.A4); HTMLWorker parser = new HTMLWorker(doc); PdfWriter.GetInstance(doc, Response.OutputStream); doc.Open(); parser.Parse(reader); doc.Close(); } // the parts are probably irrelevant to the question... private const string contentPage = "~/includes/{0}.ascx"; private void AddContentControl(string page) { content.Controls.Add(myLoadControl(page)); } private Control myLoadControl(string page) { return TemplateControl.LoadControl(string.Format(contentPage, page)); } } So my question is: How can I get the controls HTML?
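
    One way to make the controls go through their normal lifecycle, sketched below rather than taken from a known-good source, is to host each one in a throwaway Page and let Server.Execute drive that page while capturing its output. RenderControlWithLifecycle is a hypothetical helper name, and the virtual path is assumed to follow the contentPage format string above; the captured HTML can then be fed to HTMLWorker in place of this.Render.

        private static string RenderControlWithLifecycle(string virtualPath)
        {
            // Host the control in a blank Page so Init/Load/PreRender fire normally.
            Page host = new Page();
            Control control = host.LoadControl(virtualPath);
            host.Controls.Add(control);

            // Run the page through the server pipeline and capture the rendered HTML.
            StringWriter html = new StringWriter();
            HttpContext.Current.Server.Execute(host, html, false);
            return html.ToString();
        }

        // e.g. string html = RenderControlWithLifecycle("~/includes/controls/page1.ascx");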

    Read the article

  • How to retain XML string as a string field during XML deserialization

    - by detale
    I got an XML input string and want to deserialize it to an object which partially retain the raw XML. <SetProfile> <sessionId>A81D83BC-09A0-4E32-B440-0000033D7AAD</sessionId> <profileDataXml> <ArrayOfProfileItem> <ProfileItem> <Name>Pulse</Name> <Value>80</Value> </ProfileItem> <ProfileItem> <Name>BloodPresure</Name> <Value>120</Value> </ProfileItem> </ArrayOfProfileItem> </profileDataXml> </SetProfile> The class definition: public class SetProfile { public Guid sessionId; public string profileDataXml; } I hope the deserialization syntax looks like string inputXML = "..."; // the above XML XmlSerializer xs = new XmlSerializer(typeof(SetProfile)); using (TextReader reader = new StringReader(inputXML)) { SetProfile obj = (SetProfile)xs.Deserialize(reader); // use obj .... } but XMLSerializer will throw an exception and won't output < profileDataXml 's descendants to "profileDataXml" field in raw XML string. Is there any way to implement the deserialization like that?
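
    One approach, sketched here with ProfileDataNode as an invented member name, is to let XmlSerializer hand the element over as a DOM node via XmlAnyElement and expose the raw string from that node, instead of asking it to map element content onto a string field (which is what triggers the exception):

        public class SetProfile
        {
            public Guid sessionId;

            // Captures the <profileDataXml> element itself (XmlElement comes from System.Xml).
            [XmlAnyElement("profileDataXml")]
            public XmlElement ProfileDataNode;

            // Raw markup of its children, e.g. "<ArrayOfProfileItem>...</ArrayOfProfileItem>".
            [XmlIgnore]
            public string profileDataXml
            {
                get { return ProfileDataNode == null ? null : ProfileDataNode.InnerXml; }
            }
        }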

    Read the article

  • Java - SAX parser on a XHTML document

    - by Peter
    Hey, I'm trying to write a SAX parser for an XHTML document that I download from the web. At first I was having a problem with the doctype declaration (I found out from here that it was because W3C have intentionally blocked access to the DTD), but I fixed that with: XMLReader reader = parser.getXMLReader(); reader.setFeature("http://apache.org/xml/features/disallow-doctype-decl",true); However, now I'm experiencing a second problem. The SAX parser throws an exception when it reaches some Javascript embedded in the XHTML document: <script type="text/javascript" language="JavaScript"> function checkForm() { answer = true; if (siw && siw.selectingSomething) answer = false; return answer; }// </script> Specifically the parser throws an error once it reaches the &&'s, as it's expecting an entity reference. The exact exception is: `org.xml.sax.SAXParseException: The entity name must immediately follow the '&' in the entity reference. at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.createSAXParseException(ErrorHandlerWrapper.java:198) at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.fatalError(ErrorHandlerWrapper.java:177) at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:391) at com.sun.org.apache.xerces.internal.impl.XMLScanner.reportFatalError(XMLScanner.java:1390) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanEntityReference(XMLDocumentFragmentScannerImpl.java:1814) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:3000) at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:624) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:486) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:810) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:740) at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:110) at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1208) at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:525) at MLIAParser.readPage(MLIAParser.java:55) at MLIAParser.main(MLIAParser.java:75)` I suspect (but I don't know) that if I hadn't disabled the DTD then I wouldn't get this error. So, how can I avoid the DTD error and avoid the entity reference error? Cheers, Pete
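
    The page simply is not well-formed XML: the bare && inside <script> would have to be escaped or wrapped in CDATA for any conforming parser to accept it. If changing the source is not an option, one route (a sketch assuming the TagSoup library is acceptable) is to parse through an HTML-tolerant SAX front end and keep the existing handler:

        // TagSoup's Parser implements org.xml.sax.XMLReader but never rejects sloppy markup,
        // so neither the DTD nor the unescaped script content is fatal.
        XMLReader reader = new org.ccil.cowan.tagsoup.Parser();
        reader.setContentHandler(handler);   // 'handler' stands in for the existing ContentHandler
        reader.parse(new InputSource(new URL("http://example.org/page.xhtml").openStream()));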

    Read the article

  • Problem parsing an atom feed using simplexml_load_file(), can't get an attribute.

    - by Craig Ward
    Hi, I am trying to create a social timeline. I pull in feeds from certain places so I have a timeline of things I have done. The problem I am having is with Google Reader shared items. I want to get the time at which I shared the item, which is contained in <entry gr:crawl-timestamp-msec="1269088723811"> Trying to get the element using $date = $xml->entry[$i]->link->attributes()->gr:crawl-timestamp-msec; fails because of the : after gr, which causes a PHP error. I couldn't figure out how to get the element, so I thought I would change the name using the code below, but it throws the following error Warning: simplexml_load_file() [function.simplexml-load-file]: I/O warning : failed to load external entity "<?xml version="1.0"?><feed xmlns:idx="urn:atom-extension:indexing" xmlns:media="http://search.yahoo.com/mrss/" xmlns <?php $get_feed = file_get_contents('http://www.google.com/reader/public/atom/user/03120403612393553979/state/com.google/broadcast'); $old = "gr:crawl-timestamp-msec"; $new = "timestamp"; $xml_file = str_replace($old, $new, $get_feed); $xml = simplexml_load_file($xml_file); $i = 0; foreach ($xml->entry as $value) { $id = $xml->entry[$i]->id; $date = date('Y-m-d H:i:s', strtotime($xml->entry[$i]->attributes()->timestamp )); $text = $xml->entry[$i]->title; $link = $xml->entry[$i]->link->attributes()->href; $source = "googleshared"; echo "date = $date<br />"; $sql="INSERT IGNORE INTO timeline (id,date,text,link, source) VALUES ('$id', '$date', '$text', '$link', '$source')"; mysql_query($sql); $i++; } Could someone point me in the right direction, please? Cheers, Craig
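
    Two details seem to matter here. The warning appears because simplexml_load_file() expects a filename or URL, so the modified feed text would have to go through simplexml_load_string() instead. More usefully, the gr: attribute can be read without renaming anything by asking attributes() for that namespace prefix; a sketch:

        $feed = simplexml_load_string($get_feed);              // the raw feed text, not a path
        foreach ($feed->entry as $entry) {
            $gr   = $entry->attributes('gr', true);            // attributes bound to the 'gr' prefix
            $msec = (string) $gr['crawl-timestamp-msec'];      // e.g. "1269088723811"
            $date = date('Y-m-d H:i:s', (int) ($msec / 1000)); // the value is in milliseconds
        }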

    Read the article

  • How to Verify Signature, Loading PUBLIC KEY From PEM file?

    - by bbirtle
    I'm posting this in the hope it saves somebody else the hours I lost on this really stupid problem involving converting formats of public keys. If anybody sees a simpler solution or a problem, please let me know! The eCommerce system I'm using sends me some data along with a signature. They also give me their public key in .pem format. The .pem file looks like this: -----BEGIN PUBLIC KEY----- MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDe+hkicNP7ROHUssGNtHwiT2Ew HFrSk/qwrcq8v5metRtTTFPE/nmzSkRnTs3GMpi57rBdxBBJW5W9cpNyGUh0jNXc VrOSClpD5Ri2hER/GcNrxVRP7RlWOqB1C03q4QYmwjHZ+zlM4OUhCCAtSWflB4wC Ka1g88CjFwRw/PB9kwIDAQAB -----END PUBLIC KEY----- Here's the magic code to turn the above into an "RSACryptoServiceProvider" which is capable of verifying the signature. Uses the BouncyCastle library, since .NET apparently (and appallingly cannot do it without some major headaches involving certificate files): RSACryptoServiceProvider thingee; using (var reader = File.OpenText(@"c:\pemfile.pem")) { var x = new PemReader(reader); var y = (RsaKeyParameters)x.ReadObject(); thingee = (RSACryptoServiceProvider)RSACryptoServiceProvider.Create(); var pa = new RSAParameters(); pa.Modulus = y.Modulus.ToByteArray(); pa.Exponent = y.Exponent.ToByteArray(); thingee.ImportParameters(pa); } And then the code to actually verify the signature: var signature = ... //reads from the packet sent by the eCommerce system var data = ... //reads from the packet sent by the eCommerce system var sha = new SHA1CryptoServiceProvider(); byte[] hash = sha.ComputeHash(Encoding.ASCII.GetBytes(data)); byte[] bSignature = Convert.FromBase64String(signature); ///Verify signature, FINALLY: var hasValidSig = thingee.VerifyHash(hash, CryptoConfig.MapNameToOID("SHA1"), bSignature);
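
    One pitfall for anyone adapting this snippet: BigInteger.ToByteArray() can prepend a sign byte, so ToByteArrayUnsigned() is the safer call when filling RSAParameters by hand. Alternatively, BouncyCastle can do the whole conversion; a sketch, assuming the DotNetUtilities helper available in the C# builds of that era:

        RSACryptoServiceProvider rsa;
        using (var reader = File.OpenText(@"c:\pemfile.pem"))
        {
            var keyParams = (RsaKeyParameters)new PemReader(reader).ReadObject();
            // Converts modulus/exponent for us, including the sign-byte handling.
            rsa = (RSACryptoServiceProvider)DotNetUtilities.ToRSA(keyParams);
        }
        byte[] bytes = Encoding.ASCII.GetBytes(data);
        byte[] bSignature = Convert.FromBase64String(signature);
        bool hasValidSig = rsa.VerifyData(bytes, "SHA1", bSignature);   // hashes internally with SHA-1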

    Read the article

  • FtpWebRequest Download File

    - by pm_2
    The following code is intended to retrieve a file via FTP. However, I'm getting an error with it. serverPath = "ftp://x.x.x.x/tmp/myfile.txt"; FtpWebRequest request = (FtpWebRequest)WebRequest.Create(serverPath); request.KeepAlive = true; request.UsePassive = true; request.UseBinary = true; request.Method = WebRequestMethods.Ftp.DownloadFile; request.Credentials = new NetworkCredential(username, password); // Read the file from the server & write to destination using (FtpWebResponse response = (FtpWebResponse)request.GetResponse()) // Error here using (Stream responseStream = response.GetResponseStream()) using (StreamReader reader = new StreamReader(responseStream)) using (StreamWriter destination = new StreamWriter(destinationFile)) { destination.Write(reader.ReadToEnd()); destination.Flush(); } The error is: The remote server returned an error: (550) File unavailable (e.g., file not found, no access) The file definitely does exist on the remote machine and I am able to perform this FTP manually (i.e. I have permissions). Can anyone tell me why I might be getting this error?
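
    A 550 with a URI like this is often a path issue rather than a permissions one: the path in an FTP URI is resolved relative to the account's home directory, so the request may not be looking in /tmp at all. A sketch of addressing the path from the server root (the %2f escape means "start at /"; the path itself is only an example):

        // "/%2ftmp/myfile.txt" asks the server for "/tmp/myfile.txt" from the root,
        // regardless of the login account's home directory.
        string serverPath = "ftp://x.x.x.x/%2ftmp/myfile.txt";
        var request = (FtpWebRequest)WebRequest.Create(serverPath);
        request.Method = WebRequestMethods.Ftp.DownloadFile;
        request.Credentials = new NetworkCredential(username, password);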

    Read the article

  • JSON.Net and Linq

    - by user1745143
    I'm a bit of a newbie when it comes to linq and I'm working on a site that parses a json feed using json.net. The problem that I'm having is that I need to be able to pull multiple fields from the json feed and use them for a foreach block. The documentation for json.net only shows how to pull just one field. I've done a few variations after checking out the linq documentation, but I've not found anything that works best. Here's what I've got so far: WebResponse objResponse; WebRequest objRequest = HttpWebRequest.Create(url); objResponse = objRequest.GetResponse(); using (StreamReader reader = new StreamReader(objResponse.GetResponseStream())) { string json = reader.ReadToEnd(); JObject rss = JObject.Parse(json); var postTitles = from p in rss["feedArray"].Children() select (string)p["item"], //These are the fields I need to also query //(string)p["title"], (string)p["message"]; //I've also tried this with console.write and labeling the field indicies for each pulled field foreach (var item in postTitles) { lbl_slides.Text += "<div class='slide'><div class='slide_inner'><div class='slide_box'><div class='slide_content'></div><!-- slide content --></div><!-- slide box --></div><div class='rotator_photo'><img src='" + item + "' alt='' /></div><!-- rotator photo --></div><!-- slide -->"; } } Has anyone seen how to pull multiple fields from a json feed and use them as part of a foreach block (or something similar?
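
    Projecting into an anonymous type is the usual way to carry several fields through the query at once; a sketch against the same feedArray structure, assuming the extra field names exist in the feed as shown:

        var posts = from p in rss["feedArray"].Children()
                    select new
                    {
                        Item    = (string)p["item"],
                        Title   = (string)p["title"],
                        Message = (string)p["message"]
                    };

        foreach (var post in posts)
        {
            // every selected field is available by name here
            lbl_slides.Text += "<img src='" + post.Item + "' alt='" + post.Title + "' />";
        }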

    Read the article

  • HttpWebRequest Timeouts After Ten Consecutive Requests

    - by Bob Mc
    I'm writing a web crawler for a specific site. The application is a VB.Net Windows Forms application that is not using multiple threads - each web request is consecutive. However, after ten successful page retrievals every successive request times out. I have reviewed the similar questions already posted here on SO, and have implemented the recommended techniques into my GetPage routine, shown below: Public Function GetPage(ByVal url As String) As String Dim result As String = String.Empty Dim uri As New Uri(url) Dim sp As ServicePoint = ServicePointManager.FindServicePoint(uri) sp.ConnectionLimit = 100 Dim request As HttpWebRequest = WebRequest.Create(uri) request.KeepAlive = False request.Timeout = 15000 Try Using response As HttpWebResponse = DirectCast(request.GetResponse, HttpWebResponse) Using dataStream As Stream = response.GetResponseStream() Using reader As New StreamReader(dataStream) If response.StatusCode <> HttpStatusCode.OK Then Throw New Exception("Got response status code: " + response.StatusCode) End If result = reader.ReadToEnd() End Using End Using response.Close() End Using Catch ex As Exception Dim msg As String = "Error reading page """ & url & """. " & ex.Message Logger.LogMessage(msg, LogOutputLevel.Diagnostics) End Try Return result End Function Have I missed something? Am I not closing or disposing of an object that should be? It seems strange that it always happens after ten consecutive requests. Notes: In the constructor for the class in which this method resides I have the following: ServicePointManager.DefaultConnectionLimit = 100 If I set KeepAlive to true, the timeouts begin after five requests. All the requests are for pages in the same domain. EDIT I added a delay between each web request of between two and seven seconds so that I do not appear to be "hammering" the site or attempting a DOS attack. However, the problem still occurs.

    Read the article

  • How can I read a DBF file with incorrectly defined column data types using ADO.NET?

    - by Jason
    I have a several DBF files generated by a third party that I need to be able to query. I am having trouble because all of the column types have been defined as characters, but the data within some of these fields actually contain binary data. If I try to read these fields using an OleDbDataReader as anything other than a string or character array, I get an InvalidCastException thrown, but I need to be able to read them as a binary value or at least cast/convert them after they are read. The columns that actually DO contain text are being returned as expected. For example, the very first column is defined as a character field with a length of 2 bytes, but the field contains a 16-bit integer. I have written the following test code to read the first column and convert it to the appropriate data type, but the value is not coming out right. The first row of the database has a value of 17365 (0x43D5) in the first column. Running the following code, what I end up getting is 17215 (0x433F). I'm pretty sure it has to do with using the ASCII encoding to get the bytes from the string returned by the data reader, but I'm not sure of another way to get the value into the format that I need, other that to write my own DBF reader and bypass ADO.NET altogether which I don't want to do unless I absolutely have to. Any help would be greatly appreciated. byte[] c0; int i0; string con = @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\ASTM;Extended Properties=dBASE III;User ID=Admin;Password=;"; using (OleDbConnection c = new OleDbConnection(con)) { c.Open(); OleDbCommand cmd = c.CreateCommand(); cmd.CommandText = "SELECT * FROM astm2007"; OleDbDataReader dr = cmd.ExecuteReader(); while (dr.Read()) { c0 = Encoding.ASCII.GetBytes(dr.GetValue(0).ToString()); i0 = BitConverter.ToInt16(c0, 0); } dr.Dispose(); }
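
    The 0x3F is the giveaway: ASCII encoding replaces every character above 0x7F with '?', which is byte 0x3F, so the 0xD5 is being lost in the Encoding.ASCII.GetBytes round trip rather than in the read itself. A sketch of re-encoding with the ANSI code page instead (the 1252 code page is an assumption worth verifying against a few known values):

        Encoding ansi = Encoding.GetEncoding(1252);   // assumed to match the code page Jet used
        c0 = ansi.GetBytes(dr.GetValue(0).ToString());
        i0 = BitConverter.ToInt16(c0, 0);             // expect 17365 (0x43D5) if the assumption holds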

    Read the article

  • Asp.net Crawler Webresponse Operation Timed out.

    - by Leon
    Hi I have built a simple threadpool based web crawler within my web application. Its job is to crawl its own application space and build a Lucene index of every valid web page and their meta content. Here's the problem. When I run the crawler from a debug server instance of Visual Studio Express, and provide the starting instance as the IIS url, it works fine. However, when I do not provide the IIS instance and it takes its own url to start the crawl process(ie. crawling its own domain space), I get hit by operation timed out exception on the Webresponse statement. Could someone please guide me into what I should or should not be doing here? Here is my code for fetching the page. It is executed in the multithreaded environment. private static string GetWebText(string url) { string htmlText = ""; HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create(url); request.UserAgent = "My Crawler"; using (WebResponse response = request.GetResponse()) { using (Stream stream = response.GetResponseStream()) { using (StreamReader reader = new StreamReader(stream)) { htmlText = reader.ReadToEnd(); } } } return htmlText; } And the following is my stacktrace: at System.Net.HttpWebRequest.GetResponse() at CSharpCrawler.Crawler.GetWebText(String url) in c:\myAppDev\myApp\site\App_Code\CrawlerLibs\Crawler.cs:line 366 at CSharpCrawler.Crawler.CrawlPage(String url, List1 threadCityList) in c:\myAppDev\myApp\site\App_Code\CrawlerLibs\Crawler.cs:line 105 at CSharpCrawler.Crawler.CrawlSiteBuildIndex(String hostUrl, String urlToBeginSearchFrom, List1 threadCityList) in c:\myAppDev\myApp\site\App_Code\CrawlerLibs\Crawler.cs:line 89 at crawler_Default.threadedCrawlSiteBuildIndex(Object threadedCrawlerObj) in c:\myAppDev\myApp\site\crawler\Default.aspx.cs:line 108 at System.Threading.QueueUserWorkItemCallback.WaitCallback_Context(Object state) at System.Threading.ExecutionContext.runTryCode(Object userData) at System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode backoutCode, Object userData) at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean ignoreSyncCtx) at System.Threading.QueueUserWorkItemCallback.System.Threading.IThreadPoolWorkItem.ExecuteWorkItem() at System.Threading.ThreadPoolWorkQueue.Dispatch() at System.Threading._ThreadPoolWaitCallback.PerformWaitCallback() Thanks and cheers, Leon.

    Read the article

  • Asynchronous crawling F#

    - by jlezard
    When crawling on webpages I need to be careful as to not make too many requests to the same domain, for example I want to put 1 s between requests. From what I understand it is the time between requests that is important. So to speed things up I want to use async workflows in F#, the idea being make your requests with 1 sec interval but avoid blocking things while waiting for request response. let getHtmlPrimitiveAsyncTimer (uri : System.Uri) (timer:int) = async{ let req = (WebRequest.Create(uri)) :?> HttpWebRequest req.UserAgent<-"Mozilla" try Thread.Sleep(timer) let! resp = (req.AsyncGetResponse()) Console.WriteLine(uri.AbsoluteUri+" got response") use stream = resp.GetResponseStream() use reader = new StreamReader(stream) let html = reader.ReadToEnd() return html with | _ as ex -> return "Bad Link" } Then I do something like: let uri1 = System.Uri "http://rue89.com" let timer = 1000 let jobs = [|for i in 1..10 -> getHtmlPrimitiveAsyncTimer uri1 timer|] jobs |> Array.mapi(fun i job -> Console.WriteLine("Starting job "+string i) Async.StartAsTask(job).Result) Is this alright ? I am very unsure about 2 things: -Does the Thread.Sleep thing work for delaying the request ? -Is using StartTask a problem ? I am a beginner (as you may have noticed) in F# (coding in general actually ), and everything envolving Threads scares me :) Thanks !!
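
    Thread.Sleep does delay the request, but it blocks whichever thread is running the workflow, and Async.StartAsTask(job).Result inside mapi waits for each job before starting the next, so nothing overlaps. A sketch of the non-blocking variant, keeping the structure of the code above:

        let getHtmlPrimitiveAsyncTimer (uri : System.Uri) (timer : int) =
            async {
                let req = WebRequest.Create(uri) :?> HttpWebRequest
                req.UserAgent <- "Mozilla"
                try
                    do! Async.Sleep(timer)                 // non-blocking delay
                    let! resp = req.AsyncGetResponse()
                    use stream = resp.GetResponseStream()
                    use reader = new StreamReader(stream)
                    return reader.ReadToEnd()
                with _ ->
                    return "Bad Link"
            }

        // stagger the delays so requests to the same domain stay about 1 s apart,
        // then run everything concurrently instead of blocking on .Result one by one
        let results =
            [| for i in 1 .. 10 -> getHtmlPrimitiveAsyncTimer uri1 (i * 1000) |]
            |> Async.Parallel
            |> Async.RunSynchronously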

    Read the article

  • Using git pull to track a remote branch without merging

    - by J Barlow
    I am using git to track content which is changed by some people and shared "read-only" with others. The "readers" may from time to time need to make a change, but mostly they will not. I want to allow for the git "writers" to rebase pushed branches** if need be, and ensure that the "readers" never accidentally get a merge. That's normally easy enough. git pull origin +master There's one case that seems to cause problems. If a reader makes a local change, the command above will merge. I want pull to be fully automatic if the reader has not made local changes, while if they have made local changes, it should stop and ask for input. I want to track any upstream changes while being careful about merging downstream changes. In a way, I don't really want to pull. I want to track the master branch exactly. ** (I know this is not a best practice, but it seems necessary in our case: we have one main branch that contains most of the work and some topic branches for specific customers with minor changes that need to be isolated. It seems easiest to frequently rebase to keep the topics up to date.)
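
    Given that the upstream branch may be rebased, a plain pull (even with +master) is the wrong tool on the reader machines. Fetching first and updating only when there is nothing local to lose keeps the update automatic in the common case and makes it stop in the rare one; a sketch, assuming the branch is master:

        git fetch origin
        # no local commits beyond origin/master, and a clean working tree?
        if [ -z "$(git rev-list origin/master..master)" ] && git diff-index --quiet HEAD --; then
            git reset --hard origin/master    # track upstream exactly, even across rebases
        else
            echo "local changes or local commits present - not updating automatically"
        fi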

    Read the article

  • Using XmlSerializer deserialize complex type elements are null

    - by Jean Bastos
    I have the following schema: <?xml version="1.0"?> <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:tipos="http://www.ginfes.com.br/tipos_v03.xsd" targetNamespace="http://www.ginfes.com.br/servico_consultar_situacao_lote_rps_resposta_v03.xsd" xmlns="http://www.ginfes.com.br/servico_consultar_situacao_lote_rps_resposta_v03.xsd" attributeFormDefault="unqualified" elementFormDefault="qualified"> <xsd:import schemaLocation="tipos_v03.xsd" namespace="http://www.ginfes.com.br/tipos_v03.xsd" /> <xsd:element name="ConsultarSituacaoLoteRpsResposta"> <xsd:complexType> <xsd:choice> <xsd:sequence> <xsd:element name="NumeroLote" type="tipos:tsNumeroLote" minOccurs="1" maxOccurs="1"/> <xsd:element name="Situacao" type="tipos:tsSituacaoLoteRps" minOccurs="1" maxOccurs="1"/> </xsd:sequence> <xsd:element ref="tipos:ListaMensagemRetorno" minOccurs="1" maxOccurs="1"/> </xsd:choice> </xsd:complexType> </xsd:element> </xsd:schema> and the following class: [System.CodeDom.Compiler.GeneratedCodeAttribute("xsd", "2.0.50727.3038")] [System.SerializableAttribute()] [System.Diagnostics.DebuggerStepThroughAttribute()] [System.ComponentModel.DesignerCategoryAttribute("code")] [System.Xml.Serialization.XmlTypeAttribute(AnonymousType = true, Namespace = "http://www.ginfes.com.br/servico_consultar_situacao_lote_rps_envio_v03.xsd")] [System.Xml.Serialization.XmlRootAttribute(Namespace = "http://www.ginfes.com.br/servico_consultar_situacao_lote_rps_envio_v03.xsd", IsNullable = false)] public partial class ConsultarSituacaoLoteRpsEnvio { [System.Xml.Serialization.XmlElementAttribute(Order = 0)] public tcIdentificacaoPrestador Prestador { get; set; } [System.Xml.Serialization.XmlElementAttribute(Order = 1)] public string Protocolo { get; set; } } Use the following code to deserialize the object: XmlSerializer respSerializer = new XmlSerializer(typeof(ConsultarSituacaoLoteRpsResposta)); StringReader reader = new StringReader(resp); ConsultarSituacaoLoteRpsResposta respModel = (ConsultarSituacaoLoteRpsResposta)respSerializer.Deserialize(reader); does not occur any error but the properties of objects are null, anyone know what is happening?
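
    When Deserialize succeeds but every property comes back null, the usual culprit is a namespace mismatch: the attributes on the class have to name the same namespace that the elements carry in the document (here the schema is the ...resposta_v03.xsd one, while the class shown targets ...envio_v03.xsd). A sketch of a response class whose namespaces match the schema; the CLR property types are assumptions:

        [XmlType(Namespace = "http://www.ginfes.com.br/servico_consultar_situacao_lote_rps_resposta_v03.xsd")]
        [XmlRoot("ConsultarSituacaoLoteRpsResposta",
                 Namespace = "http://www.ginfes.com.br/servico_consultar_situacao_lote_rps_resposta_v03.xsd")]
        public class ConsultarSituacaoLoteRpsResposta
        {
            public string NumeroLote { get; set; }   // tipos:tsNumeroLote
            public string Situacao { get; set; }     // tipos:tsSituacaoLoteRps
        }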

    Read the article

  • Querying XElements for children with child attributes.

    - by Arnej65
    Here is the XML outline: <Root> <Thing att="11"> <Child lang="e"> <record></record> <record></record> <record></record> </Child > <Child lang="f"> <record></record> <record></record> <record></record> </Child > </Thing> </Root> I have the following: TextReader reader = new StreamReader(Assembly.GetExecutingAssembly() .GetManifestResourceStream(FileName)); var data = XElement.Load(reader); foreach (XElement single in Data.Elements()) { // english records var EnglishSet = (from e in single.Elements("Child") where e.Attribute("lang").Equals("e") select e.Value).FirstOrDefault(); } But I'm getting back nothing. I want to be able to for Each "Thing" select the "Child" where the attribute "lang" equals a value. I have also tried this, which has not worked. var FrenchSet = single.Elements("Child") .Where(y => y.Attribute("lang").Equals("f")) .Select(x => x.Value).FirstOrDefault();
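
    The query comes back empty because e.Attribute("lang") returns an XAttribute object, and comparing that object to the string "e" with Equals is never true; cast the attribute (or use .Value) before comparing. A sketch:

        foreach (XElement thing in data.Elements("Thing"))
        {
            XElement englishSet = thing.Elements("Child")
                .FirstOrDefault(c => (string)c.Attribute("lang") == "e");   // the cast is null-safe

            if (englishSet != null)
            {
                var records = englishSet.Elements("record");   // the records for that language
            }
        }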

    Read the article

  • How can I get at the raw bytes of the request in WCF?

    - by Gregory Higley
    For logging purposes, I want to get at the raw request sent to my RESTful web service implemented in WCF. I have already implemented IDispatchMessageInspector. In my implementation of AfterReceiveRequest, I want to spit out the raw bytes of the message even (and especially) if the content of the message is invalid. This is for debugging purposes. My service works perfectly already, but it is often helpful when working through problems with clients who are trying to call the service to know what it was they sent, i.e., the raw bytes. For example, let's say that instead of sending a well-formed XML document, they post the string "your mama" to my service endpoint. I want to see that that's what they did. Unfortunately using MessageBuffer::CreateBufferedCopy() won't work unless the contents of the message are already well-formed XML. Here's (roughly) what I already have in my implementation of AfterReceiveRequest: // The immediately following line raises an exception if the message // does not contain valid XML. This is uncool because I want // the raw bytes regardless of whether they are valid or not. using (MessageBuffer buffer = request.CreateBufferedCopy(Int32.MaxValue)) { using (MemoryStream stream = new MemoryStream()) using (StreamReader reader = new StreamReader(stream)) { buffer.WriteMessage(stream); stream.Position = 0; Trace.TraceInformation(reader.ReadToEnd()); } request = buffer.CreateMessage(); } My guess here is that I need to get at the raw request before it becomes a Message. This will most likely have to be done at a lower level in the WCF stack than an IDispatchMessageInspector. Anyone know how to do this?

    Read the article

  • Why does the OnDeserialization not fire for XML Deserialization?

    - by Jonathan
    I have a problem which I have been bashing my head against for the better part of three hours. I am almost certain that I've missed something blindingly obvious... I have a simple XML file: <?xml version="1.0" encoding="utf-8"?> <WeightStore xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <Records> <Record actual="150" date="2010-05-01T00:00:00" /> <Record actual="155" date="2010-05-02T00:00:00" /> </Records> </WeightStore> I have a simple class structure: [Serializable] public class Record { [XmlAttribute("actual")] public double weight { get; set; } [XmlAttribute("date")] public DateTime date { get; set; } [XmlIgnore] public double trend { get; set; } } [Serializable] [XmlRoot("WeightStore")] public class SimpleWeightStore { [XmlArrayAttribute("Records")] private List<Record> records = new List<Record>(); public List<Record> Records { get { return records; } } [OnDeserialized()] public void OnDeserialized_Method(StreamingContext context) { // This code never gets called Console.WriteLine("OnDeserialized"); } } I am using these in both calling code and in the class files: using System.Xml.Serialization; using System.Runtime.Serialization; I have some calling code: SimpleWeightStore weight_store_reload = new SimpleWeightStore(); TextReader reader = new StringReader(xml); XmlSerializer deserializer = new XmlSerializer(weight_store.GetType()); weight_store_reload = (SimpleWeightStore)deserializer.Deserialize(reader); The problem is that I am expecting OnDeserialized_Method to get called, and it isn't. I suspect it might have something to do with the fact that it's XML deserialization rather than Runtime deserialization, and perhaps I am using the wrong attribute name, but I can't find out what it might be. Any ideas, folks?
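
    The callback never fires because XmlSerializer does not support the serialization callback attributes at all; [OnDeserialized] and its siblings belong to the runtime formatters and the DataContractSerializer. The options are to implement IXmlSerializable or to run the fix-up explicitly after deserializing; a sketch of the latter:

        var deserializer = new XmlSerializer(typeof(SimpleWeightStore));
        SimpleWeightStore store;
        using (TextReader reader = new StringReader(xml))
        {
            store = (SimpleWeightStore)deserializer.Deserialize(reader);
        }
        // XmlSerializer will not call this for us, so call it explicitly.
        store.OnDeserialized_Method(new StreamingContext());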

    Read the article

  • XMLStreamReader and a real stream

    - by Yuri Ushakov
    Update There is no ready XML parser in Java community which can do NIO and XML parsing. This is the closest I found, and it's incomplete: http://wiki.fasterxml.com/AaltoHome I have the following code: InputStream input = ...; XMLInputFactory xmlInputFactory = XMLInputFactory.newInstance(); XMLStreamReader streamReader = xmlInputFactory.createXMLStreamReader(input, "UTF-8"); Question is, why does the method #createXMLStreamReader() expects to have an entire XML document in the input stream? Why is it called a "stream reader", if it can't seem to process a portion of XML data? For example, if I feed: <root> <child> to it, it would tell me I'm missing the closing tags. Even before I begin iterating the stream reader itself. I suspect that I just don't know how to use a XMLStreamReader properly. I should be able to supply it with data by pieces, right? I need it because I'm processing a XML stream coming in from network socket, and don't want to load the whole source text into memory. Thank you for help, Yuri.
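
    To pin the behaviour down: a StAX reader does parse incrementally, but it pulls from a blocking stream, so it only complains about missing closing tags when the underlying stream ends mid-document (which is exactly what a StringReader over a fragment does); there is no API for hand-feeding it chunks. Reading straight from the socket works, as sketched below with a placeholder socket; for truly non-blocking feeding, the Aalto AsyncXMLStreamReader mentioned above is the closest available option.

        XMLInputFactory factory = XMLInputFactory.newInstance();
        XMLStreamReader reader = factory.createXMLStreamReader(socket.getInputStream(), "UTF-8");
        while (reader.hasNext()) {                        // blocks until the next event arrives
            if (reader.next() == XMLStreamConstants.START_ELEMENT
                    && "child".equals(reader.getLocalName())) {
                // handle each <child> as soon as its start tag has been read
            }
        }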

    Read the article

  • Java Reflection, java.lang.IllegalAccessException Error

    - by rubby
    Hi all, My goal is : Third class will read the name of the class as String from console.Upon reading the name of the class, it will automatically and dynamically (!) generate that class and call its writeout method. If that class is not read from input, it will not be initialized. And I am taking java.lang.IllegalAccessException: Class deneme.class3 can not access a member of class java.lang.Object with modifiers "protected" error. And i don't know how i can fix it.. Can anyone help me? import java.io.*; import java.lang.reflect.*; class class3 { public void run() { try { BufferedReader reader= new BufferedReader(new InputStreamReader(System.in)); String line=reader.readLine(); Class c1 = Class.forName("java.lang.String"); Object obj = new Object(); Class c2 = obj.getClass(); Method writeout = null; for( Method mth : c2.getDeclaredMethods() ) { writeout = mth; break; } Object o = c2.newInstance(); writeout.invoke( o ); } catch(Exception ee) { System.out.println("ERROR! "+ee); } } public void writeout3() { System.out.println("class3"); } } class class4 { public void writeout4() { System.out.println("class4"); } } class ornek { public static void main(String[] args) { System.out.println("Please Enter Name of Class : "); class3 example = new class3(); example.run(); } }
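
    The exception comes from reflecting over the wrong class: "Object obj = new Object(); Class c2 = obj.getClass();" inspects java.lang.Object (whose first declared method can be protected), not the class named on the console. A sketch of wiring the console input through to the lookup, assuming the target classes expose a public no-argument method whose name starts with "writeout":

        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
        String className = in.readLine();                     // e.g. "class3"
        Class<?> cls = Class.forName(className);
        Object instance = cls.newInstance();
        for (Method m : cls.getMethods()) {                   // public methods only
            if (m.getName().startsWith("writeout") && m.getParameterTypes().length == 0) {
                m.invoke(instance);                           // calls writeout3 / writeout4
                break;
            }
        }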

    Read the article

  • Understanding F# Asynchronous Programming

    - by Yin Zhu
    I kind of know the syntax of asynchronous programming in F#. E.g. let downloadUrl(url:string) = async { let req = HttpWebRequest.Create(url) // Run operation asynchronously let! resp = req.AsyncGetResponse() let stream = resp.GetResponseStream() // Dispose 'StreamReader' when completed use reader = new StreamReader(stream) // Run asynchronously and then return the result return! reader.AsyncReadToEnd() } In F# expert book (and many other sources), they say like let! var = expr simply means "perform the asynchronous operation expr and bind the result to var when the operation completes. Then continue by executing the rest of the computation body" I also know that a new thread is created when performing async operation. My original understanding was that there are two parallel threads after the async operation, one doing I/O and one continuing to execute the async body at the same time. But in this example, I am confused at let! resp = req.AsyncGetResponse() let stream = resp.GetResponseStream() What happens if resp has not started yet and the thread in the async body wants to GetResponseStream? Is this a possible error? So maybe my original understanding was wrong. The quoted sentences in the F# expert book actually means that "creating a new thread, hang the current thread up, when the new thread finishes, wake up the body thread and continue", but in this case I don't see we could save any time. In the original understanding, the time is saved when there are several independent IO operations in one async block so that they could be done at the same time without intervention with each other. But here, if I don't get the response, I cannot create the stream; only I have stream, I can start reading the stream. Where's the time gained?
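
    On the specific worry: let! resp = req.AsyncGetResponse() does not fork a second thread that races ahead. The rest of the block is a continuation that runs only after the response has arrived, so resp is always bound before GetResponseStream is called. The time is gained when several such workflows run concurrently, because no thread sits blocked while each one waits on the network; a sketch with placeholder URLs:

        // While any one download is waiting on I/O, the others proceed and no thread is blocked.
        let pages =
            [ "http://example.com/a"; "http://example.com/b"; "http://example.com/c" ]
            |> List.map downloadUrl          // the workflow from the question
            |> Async.Parallel
            |> Async.RunSynchronously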

    Read the article

  • Card emulation via software NFC

    - by user85030
    After reading a lot of questions, I decided to post this one. I read that the stock version of Android does not support APIs for card emulation. Also, we cannot write custom applications to the secure element embedded in NFC controllers due to keys managed by Google/Samsung. I need to emulate a card (MIFARE or DESFire, etc.). The option I can see is doing it via software. I have an ACR122U reader and I've tested that NFC P2P mode works fine with the Nexus S that I have. 1) I came across a site that said that the Nexus S's NFC controller (PN532) can emulate a MIFARE 4K card. If this is true, can I write/read APDU commands to this emulated card? (Probably if I use a modded ROM like CyanogenMod.) 2) Can I write an Android application that reads APDU commands sent from the reader and generates appropriate responses (if not fully, then up to some extent only)? To do so, I gathered that we need to patch the Nexus S with CyanogenMod. Has anyone tried emulating a card via this method? I see that this is possible, since there are products from access control companies offering mobile applications with which one can open doors, e.g. http://www.assaabloy.com/en/com/Products/seos-mobile-access/
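
    On question 2, this later became possible on stock Android: version 4.4 added host-based card emulation (HCE), where an ordinary app answers ISO-DEP APDUs from a reader such as the ACR122U without any secure element keys. It does not cover MIFARE Classic emulation and is not available on the Nexus S, so it only helps on newer hardware. A minimal sketch of such a service (the response bytes are placeholders, and the service still has to be declared in the manifest with an aid-filter):

        public class ApduEchoService extends HostApduService {
            @Override
            public byte[] processCommandApdu(byte[] commandApdu, Bundle extras) {
                // inspect commandApdu (e.g. a SELECT for the registered AID) and build a reply;
                // 0x90 0x00 is the ISO 7816 "success" status word
                return new byte[] { (byte) 0x90, (byte) 0x00 };
            }

            @Override
            public void onDeactivated(int reason) {
                // the reader dropped the field or selected a different AID
            }
        }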

    Read the article

  • Android: Cannot get the httpPost params but can get the httpGet from php

    - by jjLin
    Here is my android code to send request: // defaultHttpClient DefaultHttpClient httpClient = new DefaultHttpClient(); HttpPost httpPost = new HttpPost(serverUrl); List<NameValuePair> params = new ArrayList<NameValuePair>(); params.add(new BasicNameValuePair("abc", "abc2")); httpPost.setEntity(new UrlEncodedFormEntity(params)); HttpResponse httpResponse = httpClient.execute(httpPost); HttpEntity httpEntity = httpResponse.getEntity(); InputStream is = null; is = httpEntity.getContent(); BufferedReader reader = new BufferedReader(new InputStreamReader( is, "UTF-8"), 8); StringBuilder sb = new StringBuilder(); String line = null; while ((line = reader.readLine()) != null) { sb.append(line + "\n"); } is.close(); String json = ""; json = sb.toString(); Log.d("JSON", "JSON is:" + json); and here is my php code to get the request: <?php echo $_POST['abc']; ?> When I run the application, the string json is nothing. I expect to get JSON is:abc2 Then I change the some code, in android part: HttpPost httpPost = new HttpPost(serverUrl); change to: HttpPost httpPost = new HttpPost(serverUrl + "?abc=abc3"); in php part: <?php echo $_GET['abc']; ?> This time, the string json in logcat is JSON is:abc3. It is correct!! I have tried lots of time, but seems cannot send HttpPost request with params. Any one can help me to find out what wrongs with my code??
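
    The Java side looks right (UrlEncodedFormEntity sets the application/x-www-form-urlencoded content type), so the next step is to see what actually reaches PHP; one thing worth ruling out is a redirect on serverUrl (trailing slash, http to https, www rewrite), since a redirect can drop a POST body while a query string survives. A small server-side sketch for checking:

        <?php
        // log what arrived, to tell "no body at all" apart from "body present but not form-encoded"
        error_log('method: ' . $_SERVER['REQUEST_METHOD']);
        error_log('raw:    ' . file_get_contents('php://input'));
        error_log('POST:   ' . print_r($_POST, true));
        echo isset($_POST['abc']) ? $_POST['abc'] : 'abc not set';
        ?>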

    Read the article

  • sql exception arithmetic overflow?

    - by MyHeadHurts
    In my program the user imports a date and it works whenever the year is in 2011 but if i try a date in 2010 i get this error which is weird [ SqlException (0x80131904): Arithmetic overflow error converting int to data type numeric.] System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection) +1950890 System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection) +4846875 System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj) +194 System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj) +2392 System.Data.SqlClient.SqlDataReader.HasMoreRows() +157 System.Data.SqlClient.SqlDataReader.ReadInternal(Boolean setTimeout) +197 System.Data.SqlClient.SqlDataReader.Read() +9 System.Data.Common.DataAdapter.FillLoadDataRow(SchemaMapping mapping) +78 System.Data.Common.DataAdapter.FillFromReader(DataSet dataset, DataTable datatable, String srcTable, DataReaderContainer dataReader, Int32 startRecord, Int32 maxRecords, DataColumn parentChapterColumn, Object parentChapterValue) +164 System.Data.Common.DataAdapter.Fill(DataTable[] dataTables, IDataReader dataReader, Int32 startRecord, Int32 maxRecords) +282 System.Data.Common.LoadAdapter.FillFromReader(DataTable[] dataTables, IDataReader dataReader, Int32 startRecord, Int32 maxRecords) +19 System.Data.DataTable.Load(IDataReader reader, LoadOption loadOption, FillErrorEventHandler errorHandler) +222 System.Data.DataTable.Load(IDataReader reader) +14 ( @YearToGet int, @current datetime, @y int, @search datetime ) AS SET @YearToGet = 2006; WITH Years AS ( SELECT DATEPART(year, GETDATE()) [Year] UNION ALL SELECT [Year]-1 FROM Years WHERE [Year]>@YearToGet ), q_00 as ( select DIVISION , DYYYY , sum(PARTY) as asofPAX , sum(InsAmount) as asofSales from dbo.B101BookingsDetails INNER JOIN Years ON B101BookingsDetails.DYYYY = Years.Year where Booked <= CONVERT(int, DateAdd(year, (Years.Year - @y), @search)) and DYYYY = Years.Year group by DIVISION, DYYYY, years.year having DYYYY = years.year ), q_01 as ( select DIVISION , DYYYY , sum(PARTY) as YEPAX , sum(InsAmount) as YESales from dbo.B101BookingsDetails INNER JOIN Years ON B101BookingsDetails.DYYYY = Years.Year group by DIVISION, DYYYY , years.year having DYYYY = years.year ), q_02 as ( select DIVISION , DYYYY , sum(PARTY) as CurrentPAX , sum(InsAmount) as CurrentSales from dbo.B101BookingsDetails INNER JOIN Years ON B101BookingsDetails.DYYYY = Years.Year where Booked <= CONVERT(int,@current) and DYYYY = (year( getdate() )) group by DIVISION, DYYYY ) select a.DIVISION , a.DYYYY , asofPAX , asofSales , YEPAX , YESales , CurrentPAX , CurrentSales ,asofsales/ ISNULL(NULLIF(yesales,0),1) as percentsales, CAST((asofpax) AS DECIMAL(5,1))/yepax as percentpax from q_00 as a join q_01 as b on (b.DIVISION = a.DIVISION and b.DYYYY = a.DYYYY) join q_02 as c on (b.DIVISION = c.DIVISION) JOIN Years as d on (b.dyyyy = d.year) where A.DYYYY <> (year( getdate() )) order by a.DIVISION, a.DYYYY ;
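
    The likely overflow is CAST((asofpax) AS DECIMAL(5,1)): DECIMAL(5,1) tops out at 9999.9, so the cast fails as soon as a year's passenger total reaches 10,000, which would explain why a completed prior year breaks while the partial current year still fits. A sketch of the failure and the fix (the exact precision to use is an assumption):

        -- reproduces the message: 10000 does not fit in DECIMAL(5,1)
        SELECT CAST(10000 AS DECIMAL(5,1));

        -- in the final SELECT, widen the target type instead:
        -- CAST(asofpax AS DECIMAL(18,4)) / ISNULL(NULLIF(yepax, 0), 1) AS percentpax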

    Read the article

  • Java split giving opposite order of arabic characters

    - by MuhammadA
    I am splitting the following string using \\| in java (android) using the IntelliJ 12 IDE. Everything is fine except the last part, somehow the split picks them up in the opposite order : As you can see the real positioning 34,35,36 is correct and according to the string, but when it gets picked out into split part no 5 its in the wrong order, 36,35,34 ... Any way I can get them to be in the right order? My Code: public ArrayList<Book> getBooksFromDatFile(Context context, String fileName) { ArrayList<Book> books = new ArrayList<Book>(); try { // load csv from assets InputStream is = context.getAssets().open(fileName); try { BufferedReader reader = new BufferedReader(new InputStreamReader(is)); String line; while ((line = reader.readLine()) != null) { String[] RowData = line.split("\\|"); books.add(new Book(RowData[0], RowData[1], RowData[2], RowData[3], RowData[4], RowData[5])); } } catch (IOException ex) { Log.e(TAG, "Error parsing csv file!"); } finally { try { is.close(); } catch (IOException e) { Log.e(TAG, "Error closing input stream!"); } } } catch (IOException ex) { Log.e(TAG, "Error reading .dat file from assets!"); } return books; }
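
    Before changing the split, it is worth checking whether the data is really reversed or only displayed that way: right-to-left text is stored in logical order but rendered visually, so a debugger or console can legitimately show the same characters in the opposite visual order. A sketch that prints the logical (storage) order of the split part:

        String part = RowData[5];                       // the field that looks reversed
        for (int i = 0; i < part.length(); i++) {
            // logical order, independent of how the IDE renders bidirectional text
            System.out.printf("%d: U+%04X %c%n", i, (int) part.charAt(i), part.charAt(i));
        }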

    Read the article

  • Newb Question: passing objects in java?

    - by Adam Outler
    Hello, I am new at java. I am doing the following: Read from file, then put data into a variable. checkToken = lineToken.nextToken(); processlinetoken() } But then when I try to process it... public static void readFile(String fromFile) throws IOException { BufferedReader reader = new BufferedReader(new FileReader(fromFile)); String line = null; while ((line=reader.readLine()) != null ) { if (line.length() >= 2) { StringTokenizer lineToken = new StringTokenizer (line); checkToken = lineToken.nextToken(); processlinetoken() ...... But here's where I come into a problem. public static void processlinetoken() checkToken=lineToken.nextToken(); } it fails out. Exception in thread "main" java.lang.Error: Unresolved compilation problem: The method nextToken() is undefined for the type String at testread.getEngineLoad(testread.java:243) at testread.readFile(testread.java:149) at testread.main(testread.java:119) so how do I get this to work? It seems to pass the variable, but nothing after the . works.
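
    The compiler error is the clue: checkToken is a String, and String has no nextToken() method, so the tokenizer itself has to reach the processing method, either as a field of the right type or, more cleanly, as a parameter. A sketch:

        private static void processLineToken(StringTokenizer lineToken) {
            if (lineToken.hasMoreTokens()) {
                String checkToken = lineToken.nextToken();
                // ... work with checkToken here
            }
        }

        // inside readFile's loop:
        StringTokenizer lineToken = new StringTokenizer(line);
        processLineToken(lineToken);   // pass the tokenizer, not a String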

    Read the article

  • Convert XML to DataSet with rows

    - by lidermin
    Hi, I have an XML file like this: <result> <customer> <id>1</id> <name>A</name> </customer> <customer> <id>2</id> <name>B</name> </customer> </result> So I need that data filled into a DataSet; here is my code: var reader = new StringReader(xmldoc.InnerXml); dsDatos.ReadXml(reader); The problem is that it's filling the dataset with two tables, each with a single row. But I need a single table with the two rows. What am I doing wrong? PS: I'm using C#, and I don't want to iterate through the XML file; I want to use the ReadXml method. Thanks for your time.
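
    With exactly the XML shown, ReadXml should infer a single "customer" table holding one row per <customer>; extra tables usually appear only when a child element has children of its own, and even then the per-customer rows stay in the table named "customer". A sketch of reading and picking that table, no iteration required:

        var ds = new DataSet();
        using (var reader = new StringReader(xmldoc.InnerXml))
        {
            ds.ReadXml(reader);
        }
        DataTable customers = ds.Tables["customer"];   // columns id and name, one row per <customer>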

    Read the article
