Search Results

Search found 36369 results on 1455 pages for 'document write'.


  • Capturing and Transforming ASP.NET Output with Response.Filter

    - by Rick Strahl
    During one of my Handlers and Modules sessions at DevConnections this week one of the attendees asked a question that I didn't have an immediate answer for. Basically he wanted to capture response output completely and then apply some filtering to the output – effectively injecting some additional content into the page AFTER the page had completely rendered. Specifically, the output should be captured from anywhere – not just a page – and the additional content injected into it. Some time ago I posted some code that allows you to capture ASP.NET Page output by overriding the Render() method, capturing the HtmlTextWriter() and reading its content, modifying the rendered data as text, then writing it back out. I've actually used this approach on a few occasions and it works fine for ASP.NET pages. But this obviously won't work outside of the Page class environment and it's not really generic – you have to create a custom page class in order to handle the output capture. [updated 11/16/2009 – updated ResponseFilterStream implementation and a few additional notes based on comments]

Enter Response.Filter

However, ASP.NET includes a Response.Filter which can be used – well, to filter output. Basically Response.Filter is a stream through which the OutputStream is piped back to the Web Server (indirectly). As content is written into the Response object, the filter stream receives the appropriate Stream commands like Write, Flush and Close, as well as read operations, although for a Response.Filter those are uncommon. The Response.Filter can be programmatically replaced at runtime, which allows you to effectively intercept all output generation that runs through ASP.NET.

A common Example: Dynamic GZip Encoding

A rather common use of Response.Filter is hooking up code-based, dynamic GZip compression for requests, which is dead simple: apply a GZipStream (or DeflateStream) to Response.Filter. The following generic routines can be used very easily to detect the GZip capability of the client and compress response output with a single line of code – WebUtils.GZipEncodePage(); – which is handled with a few lines of reusable code and a couple of static helper methods:

    /// <summary>
    /// Sets up the current page or handler to use GZip through a Response.Filter
    /// IMPORTANT: You have to call this method before any output is generated!
    /// </summary>
    public static void GZipEncodePage()
    {
        HttpResponse Response = HttpContext.Current.Response;
        if (IsGZipSupported())
        {
            string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
            if (AcceptEncoding.Contains("deflate"))
            {
                Response.Filter = new System.IO.Compression.DeflateStream(Response.Filter,
                                          System.IO.Compression.CompressionMode.Compress);
                Response.AppendHeader("Content-Encoding", "deflate");
            }
            else
            {
                Response.Filter = new System.IO.Compression.GZipStream(Response.Filter,
                                          System.IO.Compression.CompressionMode.Compress);
                Response.AppendHeader("Content-Encoding", "gzip");
            }
        }
        // Allow proxy servers to cache encoded and unencoded versions separately
        Response.AppendHeader("Vary", "Content-Encoding");
    }

    /// <summary>
    /// Determines if GZip is supported
    /// </summary>
    public static bool IsGZipSupported()
    {
        string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
        if (!string.IsNullOrEmpty(AcceptEncoding) &&
            (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate")))
            return true;
        return false;
    }

GZipStream and DeflateStream are streams that are assigned to Response.Filter and by doing so apply the appropriate compression to the active Response.

Response.Filter content is chunked

So implementing a Response.Filter effectively requires only that you implement a custom stream and handle the Write() method to capture Response output as it's written. At first blush this seems very simple – you capture the output in Write(), transform it and write out the transformed content in one pass. And that indeed works for small amounts of content. But the problem is that output is written in small buffer chunks (a little less than 16k it appears) rather than in a single Write() call into the stream, which makes perfect sense: ASP.NET streams data back to IIS in smaller chunks to minimize memory usage en route. Unfortunately this also makes it more difficult to implement any filtering routines, since you don't directly get access to all of the response content – which is problematic especially if those filtering routines require you to look at the ENTIRE response in order to transform or capture the output, as is needed for the solution the gentleman in my session asked for. So in order to address this, a slightly different approach is required that captures all the Write() buffers into a cached stream and then makes the stream available only when it's complete and ready to be flushed. As I was thinking about the implementation I also started thinking about the few instances when I've used Response.Filter implementations. Each time I had to create a new Stream subclass and add my custom functionality, but in the end each implementation did the same thing – capturing output and transforming it. I thought there should be an easier way to do this by creating a re-usable Stream class that can handle stream transformations that are common to Response.Filter implementations.

Creating a semi-generic Response Filter Stream Class

What I ended up with is a ResponseFilterStream class that provides a handful of events that allow you to capture and/or transform Response content.
The class implements a subclass of Stream and then overrides Write() and Flush() to handle capturing and transformation operations. By exposing events it’s easy to hook up capture or transformation operations via single focused methods. ResponseFilterStream exposes the following events: CaptureStream, CaptureString Captures the output only and provides either a MemoryStream or String with the final page output. Capture is hooked to the Flush() operation of the stream. TransformStream, TransformString Allows you to transform the complete response output with events that receive a MemoryStream or String respectively and can you modify the output then return it back as a return value. The transformed output is then written back out in a single chunk to the response output stream. These events capture all output internally first then write the entire buffer into the response. TransformWrite, TransformWriteString Allows you to transform the Response data as it is written in its original chunk size in the Stream’s Write() method. Unlike TransformStream/TransformString which operate on the complete output, these events only see the current chunk of data written. This is more efficient as there’s no caching involved, but can cause problems due to searched content splitting over multiple chunks. Using this implementation, creating a custom Response.Filter transformation becomes as simple as the following code. To hook up the Response.Filter using the MemoryStream version event: ResponseFilterStream filter = new ResponseFilterStream(Response.Filter); filter.TransformStream += filter_TransformStream; Response.Filter = filter; and the event handler to do the transformation: MemoryStream filter_TransformStream(MemoryStream ms) { Encoding encoding = HttpContext.Current.Response.ContentEncoding; string output = encoding.GetString(ms.ToArray()); output = FixPaths(output); ms = new MemoryStream(output.Length); byte[] buffer = encoding.GetBytes(output); ms.Write(buffer,0,buffer.Length); return ms; } private string FixPaths(string output) { string path = HttpContext.Current.Request.ApplicationPath; // override root path wonkiness if (path == "/") path = ""; output = output.Replace("\"~/", "\"" + path + "/").Replace("'~/", "'" + path + "/"); return output; } The idea of the event handler is that you can do whatever you want to the stream and return back a stream – either the same one that’s been modified or a brand new one – which is then sent back to as the final response. The above code can be simplified even more by using the string version events which handle the stream to string conversions for you: ResponseFilterStream filter = new ResponseFilterStream(Response.Filter); filter.TransformString += filter_TransformString; Response.Filter = filter; and the event handler to do the transformation calling the same FixPaths method shown above: string filter_TransformString(string output) { return FixPaths(output); } The events for capturing output and capturing and transforming chunks work in a very similar way. By using events to handle the transformations ResponseFilterStream becomes a reusable component and we don’t have to create a new stream class or subclass an existing Stream based classed. By the way, the example used here is kind of a cool trick which transforms “~/” expressions inside of the final generated HTML output – even in plain HTML controls not HTML controls – and transforms them into the appropriate application relative path in the same way that ResolveUrl would do. 
So you can write plain old HTML like this: <a href=”~/default.aspx”>Home</a>  and have it turned into: <a href=”/myVirtual/default.aspx”>Home</a>  without having to use an ASP.NET control like Hyperlink or Image or having to constantly use: <img src=”<%= ResolveUrl(“~/images/home.gif”) %>” /> in MVC applications (which frankly is one of the most annoying things about MVC especially given the path hell that extension-less and endpoint-less URLs impose). I can’t take credit for this idea. While discussing the Response.Filter issues on Twitter a hint from Dylan Beattie who pointed me at one of his examples which does something similar. I thought the idea was cool enough to use an example for future demos of Response.Filter functionality in ASP.NET next I time I do the Modules and Handlers talk (which was great fun BTW). How practical this is is debatable however since there’s definitely some overhead to using a Response.Filter in general and especially on one that caches the output and the re-writes it later. Make sure to test for performance anytime you use Response.Filter hookup and make sure it' doesn’t end up killing perf on you. You’ve been warned :-}. How does ResponseFilterStream work? The big win of this implementation IMHO is that it’s a reusable  component – so for implementation there’s no new class, no subclassing – you simply attach to an event to implement an event handler method with a straight forward signature to retrieve the stream or string you’re interested in. The implementation is based on a subclass of Stream as is required in order to handle the Response.Filter requirements. What’s different than other implementations I’ve seen in various places is that it supports capturing output as a whole to allow retrieving the full response output for capture or modification. The exception are the TransformWrite and TransformWrite events which operate only active chunk of data written by the Response. For captured output, the Write() method captures output into an internal MemoryStream that is cached until writing is complete. So Write() is called when ASP.NET writes to the Response stream, but the filter doesn’t pass on the Write immediately to the filter’s internal stream. The data is cached and only when the Flush() method is called to finalize the Stream’s output do we actually send the cached stream off for transformation (if the events are hooked up) and THEN finally write out the returned content in one big chunk. Here’s the implementation of ResponseFilterStream: /// <summary> /// A semi-generic Stream implementation for Response.Filter with /// an event interface for handling Content transformations via /// Stream or String. /// <remarks> /// Use with care for large output as this implementation copies /// the output into a memory stream and so increases memory usage. 
/// </remarks> /// </summary> public class ResponseFilterStream : Stream { /// <summary> /// The original stream /// </summary> Stream _stream; /// <summary> /// Current position in the original stream /// </summary> long _position; /// <summary> /// Stream that original content is read into /// and then passed to TransformStream function /// </summary> MemoryStream _cacheStream = new MemoryStream(5000); /// <summary> /// Internal pointer that that keeps track of the size /// of the cacheStream /// </summary> int _cachePointer = 0; /// <summary> /// /// </summary> /// <param name="responseStream"></param> public ResponseFilterStream(Stream responseStream) { _stream = responseStream; } /// <summary> /// Determines whether the stream is captured /// </summary> private bool IsCaptured { get { if (CaptureStream != null || CaptureString != null || TransformStream != null || TransformString != null) return true; return false; } } /// <summary> /// Determines whether the Write method is outputting data immediately /// or delaying output until Flush() is fired. /// </summary> private bool IsOutputDelayed { get { if (TransformStream != null || TransformString != null) return true; return false; } } /// <summary> /// Event that captures Response output and makes it available /// as a MemoryStream instance. Output is captured but won't /// affect Response output. /// </summary> public event Action<MemoryStream> CaptureStream; /// <summary> /// Event that captures Response output and makes it available /// as a string. Output is captured but won't affect Response output. /// </summary> public event Action<string> CaptureString; /// <summary> /// Event that allows you transform the stream as each chunk of /// the output is written in the Write() operation of the stream. /// This means that that it's possible/likely that the input /// buffer will not contain the full response output but only /// one of potentially many chunks. /// /// This event is called as part of the filter stream's Write() /// operation. /// </summary> public event Func<byte[], byte[]> TransformWrite; /// <summary> /// Event that allows you to transform the response stream as /// each chunk of bytep[] output is written during the stream's write /// operation. This means it's possibly/likely that the string /// passed to the handler only contains a portion of the full /// output. Typical buffer chunks are around 16k a piece. /// /// This event is called as part of the stream's Write operation. /// </summary> public event Func<string, string> TransformWriteString; /// <summary> /// This event allows capturing and transformation of the entire /// output stream by caching all write operations and delaying final /// response output until Flush() is called on the stream. /// </summary> public event Func<MemoryStream, MemoryStream> TransformStream; /// <summary> /// Event that can be hooked up to handle Response.Filter /// Transformation. Passed a string that you can modify and /// return back as a return value. The modified content /// will become the final output. 
/// </summary> public event Func<string, string> TransformString; protected virtual void OnCaptureStream(MemoryStream ms) { if (CaptureStream != null) CaptureStream(ms); } private void OnCaptureStringInternal(MemoryStream ms) { if (CaptureString != null) { string content = HttpContext.Current.Response.ContentEncoding.GetString(ms.ToArray()); OnCaptureString(content); } } protected virtual void OnCaptureString(string output) { if (CaptureString != null) CaptureString(output); } protected virtual byte[] OnTransformWrite(byte[] buffer) { if (TransformWrite != null) return TransformWrite(buffer); return buffer; } private byte[] OnTransformWriteStringInternal(byte[] buffer) { Encoding encoding = HttpContext.Current.Response.ContentEncoding; string output = OnTransformWriteString(encoding.GetString(buffer)); return encoding.GetBytes(output); } private string OnTransformWriteString(string value) { if (TransformWriteString != null) return TransformWriteString(value); return value; } protected virtual MemoryStream OnTransformCompleteStream(MemoryStream ms) { if (TransformStream != null) return TransformStream(ms); return ms; } /// <summary> /// Allows transforming of strings /// /// Note this handler is internal and not meant to be overridden /// as the TransformString Event has to be hooked up in order /// for this handler to even fire to avoid the overhead of string /// conversion on every pass through. /// </summary> /// <param name="responseText"></param> /// <returns></returns> private string OnTransformCompleteString(string responseText) { if (TransformString != null) TransformString(responseText); return responseText; } /// <summary> /// Wrapper method form OnTransformString that handles /// stream to string and vice versa conversions /// </summary> /// <param name="ms"></param> /// <returns></returns> internal MemoryStream OnTransformCompleteStringInternal(MemoryStream ms) { if (TransformString == null) return ms; //string content = ms.GetAsString(); string content = HttpContext.Current.Response.ContentEncoding.GetString(ms.ToArray()); content = TransformString(content); byte[] buffer = HttpContext.Current.Response.ContentEncoding.GetBytes(content); ms = new MemoryStream(); ms.Write(buffer, 0, buffer.Length); //ms.WriteString(content); return ms; } /// <summary> /// /// </summary> public override bool CanRead { get { return true; } } public override bool CanSeek { get { return true; } } /// <summary> /// /// </summary> public override bool CanWrite { get { return true; } } /// <summary> /// /// </summary> public override long Length { get { return 0; } } /// <summary> /// /// </summary> public override long Position { get { return _position; } set { _position = value; } } /// <summary> /// /// </summary> /// <param name="offset"></param> /// <param name="direction"></param> /// <returns></returns> public override long Seek(long offset, System.IO.SeekOrigin direction) { return _stream.Seek(offset, direction); } /// <summary> /// /// </summary> /// <param name="length"></param> public override void SetLength(long length) { _stream.SetLength(length); } /// <summary> /// /// </summary> public override void Close() { _stream.Close(); } /// <summary> /// Override flush by writing out the cached stream data /// </summary> public override void Flush() { if (IsCaptured && _cacheStream.Length > 0) { // Check for transform implementations _cacheStream = OnTransformCompleteStream(_cacheStream); _cacheStream = OnTransformCompleteStringInternal(_cacheStream); OnCaptureStream(_cacheStream); 
OnCaptureStringInternal(_cacheStream); // write the stream back out if output was delayed if (IsOutputDelayed) _stream.Write(_cacheStream.ToArray(), 0, (int)_cacheStream.Length); // Clear the cache once we've written it out _cacheStream.SetLength(0); } // default flush behavior _stream.Flush(); } /// <summary> /// /// </summary> /// <param name="buffer"></param> /// <param name="offset"></param> /// <param name="count"></param> /// <returns></returns> public override int Read(byte[] buffer, int offset, int count) { return _stream.Read(buffer, offset, count); } /// <summary> /// Overriden to capture output written by ASP.NET and captured /// into a cached stream that is written out later when Flush() /// is called. /// </summary> /// <param name="buffer"></param> /// <param name="offset"></param> /// <param name="count"></param> public override void Write(byte[] buffer, int offset, int count) { if ( IsCaptured ) { // copy to holding buffer only - we'll write out later _cacheStream.Write(buffer, 0, count); _cachePointer += count; } // just transform this buffer if (TransformWrite != null) buffer = OnTransformWrite(buffer); if (TransformWriteString != null) buffer = OnTransformWriteStringInternal(buffer); if (!IsOutputDelayed) _stream.Write(buffer, offset, buffer.Length); } } The key features are the events and corresponding OnXXX methods that handle the event hookups, and the Write() and Flush() methods of the stream implementation. All the rest of the members tend to be plain jane passthrough stream implementation code without much consequence. I do love the way Action<t> and Func<T> make it so easy to create the event signatures for the various events – sweet. A few Things to consider Performance Response.Filter is not great for performance in general as it adds another layer of indirection to the ASP.NET output pipeline, and this implementation in particular adds a memory hit as it basically duplicates the response output into the cached memory stream which is necessary since you may have to look at the entire response. If you have large pages in particular this can cause potentially serious memory pressure in your server application. So be careful of wholesale adoption of this (or other) Response.Filters. Make sure to do some performance testing to ensure it’s not killing your app’s performance. Response.Filter works everywhere A few questions came up in comments and discussion as to capturing ALL output hitting the site and – yes you can definitely do that by assigning a Response.Filter inside of a module. If you do this however you’ll want to be very careful and decide which content you actually want to capture especially in IIS 7 which passes ALL content – including static images/CSS etc. through the ASP.NET pipeline. So it is important to filter only on what you’re looking for – like the page extension or maybe more effectively the Response.ContentType. Response.Filter Chaining Originally I thought that filter chaining doesn’t work at all due to a bug in the stream implementation code. But it’s quite possible to assign multiple filters to the Response.Filter property. 
So the following actually works to both compress the output and apply the transformed content: WebUtils.GZipEncodePage(); ResponseFilterStream filter = new ResponseFilterStream(Response.Filter); filter.TransformString += filter_TransformString; Response.Filter = filter; However the following does not work resulting in invalid content encoding errors: ResponseFilterStream filter = new ResponseFilterStream(Response.Filter); filter.TransformString += filter_TransformString; Response.Filter = filter; WebUtils.GZipEncodePage(); In other words multiple Response filters can work together but it depends entirely on the implementation whether they can be chained or in which order they can be chained. In this case running the GZip/Deflate stream filters apparently relies on the original content length of the output and chokes when the content is modified. But if attaching the compression first it works fine as unintuitive as that may seem. Resources Download example code Capture Output from ASP.NET Pages © Rick Strahl, West Wind Technologies, 2005-2010Posted in ASP.NET  
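    The module-based hookup mentioned under "Response.Filter works everywhere" isn't shown in the listing, so here is a minimal sketch of what it could look like. The module class name, the .aspx extension check, and the reuse of a FixPaths-style handler are assumptions for illustration, not code from the article's download.

        // Hypothetical module that attaches ResponseFilterStream to every page request.
        // Register it under <system.webServer>/<modules> (IIS 7 integrated) or <httpModules>.
        using System;
        using System.IO;
        using System.Web;

        public class ResponseFilterModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                app.BeginRequest += (sender, e) =>
                {
                    // Be selective: in IIS 7 integrated mode even static files
                    // run through managed modules.
                    string ext = Path.GetExtension(app.Request.Path);
                    if (!string.Equals(ext, ".aspx", StringComparison.OrdinalIgnoreCase))
                        return;

                    var filter = new ResponseFilterStream(app.Response.Filter);
                    filter.TransformString += FixPaths;   // same idea as the handler shown earlier
                    app.Response.Filter = filter;
                };
            }

            // Illustrative transformation: expand "~/" the way ResolveUrl would.
            private static string FixPaths(string output)
            {
                string path = HttpContext.Current.Request.ApplicationPath;
                if (path == "/") path = "";
                return output.Replace("\"~/", "\"" + path + "/").Replace("'~/", "'" + path + "/");
            }

            public void Dispose() { }
        }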


  • No norwegian characters in LaTeX

    - by DreamCodeR
    Hi, I have translated a document from English to Norwegian in the LaTeX format, and while using norwegian special characters, I get an error using \usepackage[utf8x]{inputenc} to try and display the norwegian (scandinavian) special characters in PostScript/PDF/DVI format, saying Package utf8x Error: MalformedUTF-8sequence. So while that didn't work, I tried out another possible solution: \usepackage{ucs} \usepackage[norsk]babel And when I tried to save that in Emacs I get this message: These default coding systems were tried to encode text in the buffer `lol.tex': (utf-8-unix (905 . 4194277) (916 . 4194245) (945 . 4194278) (950 . 4194277) (954 . 4194296) (990 . 4194277) (1010 . 4194277) (1013 . 4194278) (1051 . 4194277) (1078 . 4194296) (1105 . 4194296)) However, each of them encountered characters it couldn't encode: utf-8-unix cannot encode these: \345 \305 \346 \345 \370 \345 \345 \346 \345 \370 ... Thanks to Emacs I have the possibility to check out the properties of those characters and the first one tells me: character: \345 (4194277, #o17777745, #x3fffe5) preferred charset: eight-bit (Raw bytes 128-255) code point: 0xE5 syntax: w which means: word buffer code: #xE5 file code: not encodable by coding system utf-8-unix display: not encodable for terminal Which doesn't tell me much. When I try to build this with texi2dvi --dvipdf filename.text I get a perfectly fine PDF, all without the special norwegian characters. When I am about to save Emacs also ask me: "Select coding system (default raw-text):" And I type in utf-8 to choose its coding system. I have also tried to choose default raw-text to see if I get some different result. But nothing. At last I tried \lstset{inputencoding=utf8x, extendedchars=\true} ... a code I came over while trying to google the solution to this problem. Which gives me this error: Undefined control sequence. So basically, I have tried every encoding option I have been able to find and nothing works. I am desperately trying to make this work since the norwegian translation must be published before the deadline. As an additional information I may add that I found out later on that I only had the en_US.UTF-8 in my locale, so I added nb_NO.UTF-8 and nb_NO.ISO-8859-15 and ran locale-gen + reboot without any changes. I hope I provided enough information to get some assistance, the characters in question is æ ø å.
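    For reference, a minimal preamble along these lines usually handles Norwegian text; whether it fixes this particular file depends on the file really being saved in the encoding that inputenc is told about, which is an assumption here. Note that babel's option needs braces (\usepackage[norsk]{babel}):

        % Minimal sketch - pick ONE inputenc option and make sure the .tex file
        % is actually saved in that encoding (in Emacs: C-x RET f utf-8 RET, then save).
        \documentclass{article}
        \usepackage[utf8]{inputenc}   % or [latin1] if the file is ISO-8859-1/15
        \usepackage[T1]{fontenc}
        \usepackage[norsk]{babel}

        \begin{document}
        Norske tegn: æ ø å Æ Ø Å.
        \end{document}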


  • What is wrong with these jquery statements?

    - by Pandiya Chendur
    I cant able to get this jquery statement to work on page load but it works once when i refresh F5 the page..... <script type="text/javascript"> var itemsPerPage = 5; $(document).ready(function() { getRecordspage(0, itemsPerPage); var maxvalues = $("#HfId").val(); alert(maxvalues); $(".pager").pagination(maxvalues, { //my syntax }); }); </script> On the initial pageload alert(maxvalues); is nothing... But when i refresh it shows the value of maxvalues which is in the hidden field HfId because it is assigned in the function getRecordspage.... Why this strange behaviour.... Any suggestion... EDIT: function getRecordspage(curPage) { $.ajax({ type: "POST", url: "Default.aspx/GetRecords", data: "{'currentPage':" + (curPage + 1) + ",'pagesize':5}", contentType: "application/json; charset=utf-8", dataType: "json", success: function(jsonObj) { $("#ResultsDiv").empty(); $("#HfId").val(""); var strarr = jsonObj.d.split('##'); var jsob = jQuery.parseJSON(strarr[0]); var divs = ''; $.each(jsob.Table, function(i, employee) { divs += '<div class="resultsdiv"><br /><span class="resultName">' + employee.Emp_Name + '</span><span class="resultfields" style="padding-left:100px;">Category&nbsp;:</span>&nbsp;<span class="resultfieldvalues">' + employee.Desig_Name + '</span><br /><br /><span id="SalaryBasis" class="resultfields">Salary Basis&nbsp;:</span>&nbsp;<span class="resultfieldvalues">' + employee.SalaryBasis + '</span><span class="resultfields" style="padding-left:25px;">Salary&nbsp;:</span>&nbsp;<span class="resultfieldvalues">' + employee.FixedSalary + '</span><span style="font-size:110%;font-weight:bolder;padding-left:25px;">Address&nbsp;:</span>&nbsp;<span class="resultfieldvalues">' + employee.Address + '</span></div>'; }); $("#ResultsDiv").append(divs); $(".resultsdiv:even").addClass("resultseven"); $(".resultsdiv").hover(function() { $(this).addClass("resultshover"); }, function() { $(this).removeClass("resultshover"); }); $("#HfId").val(strarr[1]); } }); }
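    One likely explanation (an assumption, since the server side isn't shown): getRecordspage() fills #HfId asynchronously, so the $(document).ready code reads the hidden field before the AJAX call has returned; after an F5 the field still holds the previously rendered value, which is why it only appears to work then. A sketch of moving the pager setup into the success callback (the pagination options are placeholders):

        var itemsPerPage = 5;

        $(document).ready(function () {
            getRecordspage(0, itemsPerPage);
        });

        function getRecordspage(curPage) {
            $.ajax({
                type: "POST",
                url: "Default.aspx/GetRecords",
                data: "{'currentPage':" + (curPage + 1) + ",'pagesize':5}",
                contentType: "application/json; charset=utf-8",
                dataType: "json",
                success: function (jsonObj) {
                    // ... build #ResultsDiv exactly as in the original success handler ...
                    var strarr = jsonObj.d.split('##');
                    $("#HfId").val(strarr[1]);

                    // The total record count is only known now, so wire the pager here.
                    var maxvalues = $("#HfId").val();
                    $(".pager").pagination(maxvalues, {
                        items_per_page: itemsPerPage,                       // placeholder option names
                        callback: function (page) { getRecordspage(page); } // placeholder
                    });
                }
            });
        }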


  • Should we write detailed architecture design or just an outline when designing a program?

    - by EpsilonVector
    When I'm doing design for a task, I keep fighting this nagging feeling that aside from being a general outline it's going to be more or less ignored in the end. I'll give you an example: I was writing a frontend for a device that has read/write operations. It made perfect sense in the class diagram to give it a read and a write function. Yet when it came down to actually writing them I realized they were literally the same function with just one line of code changed (read vs write function call), so to avoid code duplication I ended up implementing a do_io function with a parameter that distinguishes between operations. Goodbye original design. This is not a terribly disruptive change, but it happens often and can happen in more critical parts of the program as well, so I can't help but wondering if there's a point to design more detail than a general outline, at least when it comes to the program's architecture (obviously when you are specifying an API you have to spell everything out). This might be just the result of my inexperience in doing design, but on the other hand we have agile methodologies which sort of say "we give up on planning far ahead, everything is going to change in a few days anyway", which is often how I feel. So, how exactly should I "use" design?


  • Is it possible to render and style a <title> element from within the <head> of an html document?

    - by Brian Z
    Is it possible to render and style a <title> element from within the <head> of an html document? I thought it was impossible to render information from the <head>, but the system status page for 37signals.com seems to be doing just that - http://status.37signals.com/. If you inspect the element at the very top of the page, the text that reads "37signals System Status", you'll see that the part of the DOM that is generating the text is the <head>'s <title>, and the css is as follows: title { display: block; margin: 10px auto; max-width: 840px; width: 100%; padding: 0 20px; float: left; color: black; text-rendering: optimizelegibility; -moz-box-sizing: border-box; box-sizing: border-box; } Can someone confirm that the <title> info from the <head> is indeed what is being rendered? If so, can someone point to documentation that defines this capability as I have not found any? I have applied the above css to an html document on my local web server using the same browser (chromium, os x 10.8.5) as the 37signals site was viewed on, yet my file did not display the <head>'s <title>.
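    One detail worth checking (an assumption about the local test, since only the title rule is quoted): browsers default both <head> and <title> to display: none, so styling the title alone has no visible effect unless its parent <head> is also made visible. A minimal sketch; exact behavior can vary by browser:

        <!DOCTYPE html>
        <html>
        <head>
          <style>
            /* Both elements need display:block; the title stays hidden while
               its parent <head> remains display:none. */
            head  { display: block; }
            title { display: block; margin: 10px auto; max-width: 840px; color: black; }
          </style>
          <title>System Status (demo title)</title>
        </head>
        <body>
          <p>The text rendered above this paragraph is the document's &lt;title&gt;.</p>
        </body>
        </html>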


  • Strings are UTF-16…. There is an error in XML document (1, 1).

    - by Shawn Cicoria
    I had a situation today where an xml document had a directive indicating it was utf-8.  So, the code in question was reading in the “string” of that xml then attempting to de-serialize it using an Xsd generated type. What you end up with is an exception indicating that there’s an error in the Xml document at (1,1) or something to that effect. The fix is, run it through a memory stream – which reads the string, but at utf8 bytes – if you have things that fall outside of 8 bit chars, you’ll get an exception.   //Need to read it to bytes, to undo the fact that strings are UTF-16 all the time. //We want it to handle it as UTF8. byte[] bytes = Encoding.UTF8.GetBytes(_myXmlString); TargetType myInstance = null; using (MemoryStream memStream = new MemoryStream(bytes)) { XmlSerializer tokenSerializer = new XmlSerializer(typeof(TargetType)); myInstance = (TargetType)tokenSerializer.Deserialize(memStream); }   Writing is similar – also, adding the default namespace prevents the additional xmlns additions that aren’t necessary:   XmlWriterSettings settings = new XmlWriterSettings() { Encoding = Encoding.UTF8, Indent = true, NewLineOnAttributes = true, }; XmlSerializerNamespaces xmlnsEmpty = new XmlSerializerNamespaces(); xmlnsEmpty.Add("", "http://www.wow.thisworks.com/2010/05"); MemoryStream memStr = new MemoryStream(); using (XmlWriter writer = XmlTextWriter.Create(memStr, settings)) { XmlSerializer tokenSerializer = new XmlSerializer(typeof(TargetType)); tokenSerializer.Serialize(writer, theInstance, xmlnsEmpty); }


  • Can I query DOM Document with xpath expression from multiple threads safely?

    - by Dan
    I plan to use dom4j DOM Document as a static cache in an application where multiples threads can query the document. Taking into the account that the document itself will never change, is it safe to query it from multiple threads? I wrote the following code to test it, but I am not sure that it actually does prove that operation is safe? package test.concurrent_dom; import org.dom4j.Document; import org.dom4j.DocumentException; import org.dom4j.DocumentHelper; import org.dom4j.Element; import org.dom4j.Node; /** * Hello world! * */ public class App extends Thread { private static final String xml = "<Session>" + "<child1 attribute1=\"attribute1value\" attribute2=\"attribute2value\">" + "ChildText1</child1>" + "<child2 attribute1=\"attribute1value\" attribute2=\"attribute2value\">" + "ChildText2</child2>" + "<child3 attribute1=\"attribute1value\" attribute2=\"attribute2value\">" + "ChildText3</child3>" + "</Session>"; private static Document document; private static Element root; public static void main( String[] args ) throws DocumentException { document = DocumentHelper.parseText(xml); root = document.getRootElement(); Thread t1 = new Thread(){ public void run(){ while(true){ try { sleep(3); } catch (InterruptedException e) { e.printStackTrace(); } Node n1 = root.selectSingleNode("/Session/child1"); if(!n1.getText().equals("ChildText1")){ System.out.println("WRONG!"); } } } }; Thread t2 = new Thread(){ public void run(){ while(true){ try { sleep(3); } catch (InterruptedException e) { e.printStackTrace(); } Node n1 = root.selectSingleNode("/Session/child2"); if(!n1.getText().equals("ChildText2")){ System.out.println("WRONG!"); } } } }; Thread t3 = new Thread(){ public void run(){ while(true){ try { sleep(3); } catch (InterruptedException e) { e.printStackTrace(); } Node n1 = root.selectSingleNode("/Session/child3"); if(!n1.getText().equals("ChildText3")){ System.out.println("WRONG!"); } } } }; t1.start(); t2.start(); t3.start(); System.out.println( "Hello World!" ); } }
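    The test above can only show a race if one happens to occur, so it doesn't prove safety. If the thread-safety of concurrent XPath evaluation on a shared dom4j Document is in doubt, one conservative sketch (assuming Java 8 and the same App class and xml field as above) is to give each thread its own copy of the never-changing document:

        // Each thread parses (once) and then queries its own private Document,
        // so no dom4j object is ever shared between threads.
        private static final ThreadLocal<Document> LOCAL_DOC =
            ThreadLocal.withInitial(() -> {
                try {
                    return DocumentHelper.parseText(xml);
                } catch (DocumentException e) {
                    throw new IllegalStateException(e);
                }
            });

        static Node selectSingle(String xpath) {
            return LOCAL_DOC.get().selectSingleNode(xpath);
        }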


  • Why does document.evaluate succeed in Firebug but fail in Selenium?

    - by anil
    The browser.getEval function in Selenium makes iterateNext() return null, while the same script returns a value when run in Firebug. In Firebug, document.evaluate("//button[text()='Save']", document, null, XPathResult.ANY_TYPE, null).iterateNext().disabled; returns true. But browser.getEval("document.evaluate(\"//button[text()='Save']\", document, null, XPathResult.ANY_TYPE, null).iterateNext().disabled;"); fails with: "com.thoughtworks.selenium.SeleniumException: ERROR: Threw an exception: res.iterateNext() is null"
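    A common explanation (hedged, since the Selenium version isn't stated): the script passed to getEval runs in the Selenium test-runner window, where document and XPathResult belong to the runner's page rather than the application under test, so the XPath finds nothing. In Selenium RC the usual workaround is to resolve the application window explicitly, for example from the Java client:

        // Sketch for the Selenium RC Java client; this.browserbot.getCurrentWindow()
        // is evaluated inside the browser and returns the application window.
        String script =
            "var w = this.browserbot.getCurrentWindow();" +
            "w.document.evaluate(\"//button[text()='Save']\", w.document, null," +
            " w.XPathResult.ANY_TYPE, null).iterateNext().disabled;";
        String disabled = selenium.getEval(script);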


  • Using JavaScript, how do I write the same text to multiple HTML elements, or how do I write text to all HTML elements of the same class?

    - by myfavoritenoisemaker
    I am writing this program to take a root music note and populate tables with various scales from that root note. So, many of the tables cells will have the exact same value in them. I realize I can call my "useScale" function for every single that I need to write text to but since there will be repeats, it seemed like there should be a way to run my function once and apply the results to multiple but it did not work to use the document.getElementsByClassName("").innerHTML, I had been using "ById" which worked fine but each ID must be unique so, I can't write to multiple elements. Here's my code, I'd love some suggestions. many thanks Root Note <input type="text" name="defineRootNote" id="rootNoteCapture" size="2"/> <button onclick="findScale()">Submit</button> <table id="majorTriad"> <th>Major Triad</th> <tr><td>1st</td><td class="root"> </td></tr> <tr><td>3rd</td><td class="3rd"> </td></tr> <tr><td>5th</td><td class="5th"> </td></tr> </table> <table id="minorTriad"> <th>Minor Triad</th> <tr><td>1st</td><td class="root"> </td></tr> <tr><td>3 Flat</td><td class="3Flat"> </td></tr> <tr><td>5th</td><td class="5th"> </td></tr> </table> <script type="text/javascript"> function findScale(rootNote){ var rootNote = document.getElementById("rootNoteCapture").value; rootNote = rootNote.toUpperCase(); var scaleCheck = ["A", "A#", "AB", "B", "BB", "C", "C#", "D", "D#", "DB", "E", "EB", "F", "F#", "G", "G#", "GB"]; if (scaleCheck.indexOf(rootNote) == -1) { document.getElementById("root").innerHTML = "Invalid Entry"; } else { switch(rootNote){ case "AB": rootNote = "G#"; break; case "BB": rootNote = "A#"; break; case "DB": rootNote = "C#"; break; case "EB": rootNote = "D#"; break; case "GB": rootNote = "F#"; break; rootNote = rootNote; } document.getElementsByClassName("root").innerHTML = rootNote; document.getElementsByClassName("3rd").innerHTML = useScale(rootNote, 4); document.getElementsByClassName("5th").innerHTML = useScale(rootNote, 7); document.getElementsByClassName("3Flat").innerHTML = useScale(rootNote, 3); } } function useScale(startPoint, offset){ var scale = ["A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#"]; var returnNote = null; var scalePoint = scale.indexOf(startPoint); for (var i = 0; i < offset; ){ i = i + 1; //console.log(i); //console.log(scalePoint); scalePoint ++; if (scalePoint > 11) {scalePoint = 0;} } returnNote = scale[scalePoint]; return returnNote; } </script>
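    The likely catch (an assumption based on the code shown) is that document.getElementsByClassName() returns a collection, not a single element, so assigning to .innerHTML on the collection silently does nothing. A small helper that loops over the matches keeps the one-call-per-value structure:

        // Write the same text into every element that carries the given class.
        function setTextByClass(className, text) {
            var elements = document.getElementsByClassName(className);
            for (var i = 0; i < elements.length; i++) {
                elements[i].innerHTML = text;
            }
        }

        // Inside findScale(), after the switch statement:
        setTextByClass("root",  rootNote);
        setTextByClass("3rd",   useScale(rootNote, 4));
        setTextByClass("5th",   useScale(rootNote, 7));
        setTextByClass("3Flat", useScale(rootNote, 3));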


  • How to read & write contents of a div from localstorage in chrome extensions?

    - by Minas Abovyan
    I am trying to build an extension that will allow users to put some parameters into a text box in the popup, generate a link using that information and add it to the said popup. I have all that working, but needless to say, it gets flushed every time the user opens the extension anew. I'd like the info that has been put in there to stay, but can't seem to get it to work. Here's what I have thus far: manifest.json { "manifest_version": 2, "name": "Test", "description": "Test Extension", "version": "1.0", "permissions": [ "http://*/*", "https://*/*" ], "browser_action": { "default_title": "This is a test", "default_popup": "popup.html" } } popup.html <!DOCTYPE html> <html> <head> </head> <body> <div id="linkContainer"/> <input type="text" id="catsList"/> <button type="button" id="addToList">Add</button> <script src="popup.js"></script> </body> </html> popup.js function addCats() { var a = document.createElement('a'); a.appendChild(document.createTextNode(document.getElementById('catsList').value)); a.setAttribute('href', 'http://google.com'); var p = document.createElement('p'); p.appendChild(a) document.getElementById('linkContainer').appendChild(p); indexLinks() } function indexLinks() { var links = document.getElementsByTagName("a"); for (var i = 0; i < links.length; i++) { (function () { var ln = links[i]; var location = ln.href; ln.onclick = function () { chrome.tabs.create({active: true, url: location}); }; })(); } }; document.getElementById('addToList').onclick = addCats; My guess is that I need something along the lines of localStorage['cointainer'] = document.getElementById('linkContainer'); at the end of addCats() and a call to something like function loadLocalStorage() { var container = document.getElementById('linkContainer'); container.innerHTML = localStorage['container']; } at the beginning, but doing that didn't work. Not sure what's going wrong. Also,if there is a different way to save users' additions, I'd be open to them.
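    The guess at the end is close; the main catch is that localStorage stores strings only, so it has to be the container's innerHTML (not the element object) that is saved, and the click handlers have to be re-attached after restoring because innerHTML does not preserve them. A hedged sketch of the popup.js additions:

        // Restore previously generated links when the popup opens.
        function loadSavedLinks() {
            var saved = localStorage.getItem('linkContainer');
            if (saved) {
                document.getElementById('linkContainer').innerHTML = saved;
                indexLinks();   // re-attach the chrome.tabs.create click handlers
            }
        }

        // Persist the current state; call this at the end of addCats().
        function saveLinks() {
            localStorage.setItem('linkContainer',
                document.getElementById('linkContainer').innerHTML);
        }

        loadSavedLinks();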


  • Oracle Tutor: *** CAUTION to Word .docx Users ***

    - by Emily Chorba
    Microsoft released a security update KB969604 for Office 2007 (around June 2009) This update causes document variables within Word docx files to be scrambled. This update might still be pushed out via Office 2007 updates DO NOT save files as docx using MS OFFICE 2007 until you apply the MS hotfix # 970942 available here If you are using Windows XP with Office 2003 or Office 2000 and have installed an older Office 2007 compatibility pack, documents saved as docx may also cause the scrambled document variables. Installing the 2007 compatibility pack published on 1/6/2010 (version 4) will prevent the document variables from becoming corrupt. Those on Windows 2000 may not be able to install the latest compatibility pack, or the compatibility pack may not function properly. This situation will hopefully be rectified in the coming months. What is a document variable? Document variables store data inside the document, invisible to the user. The Tutor software uses them when converting the document to HTML and when creating the flowchart, just to name a couple of uses. How will you know if a document's variables are scrambled? The difficulty in diagnosing the issue is that the symptoms can take myriad forms. There isn't a single error message or a single feature that one can point to and say, "test for the problem by doing this." The best clue about the error is seeing any kind of string in an error message that has garbage characters, question marks, xml code snippets, or just nonsense. Such as "Language ?????????????xlr;lwlerkjl could not be found." It is also possible to see the corrupted data in the footers of the Word docs. And, just because the footers look correct does not mean that the document variables are not corrupted. The corruption problem does not occur in every document variable in the document, just some of them. Often it is less than a quarter of them. What is the difference between docx files and doc files? Office 2007 uses Office Open XML formats with .docx and .docm filename extensions. - Docx is an Office Open XML word document. - Docm is a macro enabled Office Open XML document. This means the file structure behind the scenes is quite different from the binary file formats used prior to Office 2007 such as .doc, .dot, .xls, and .ppt. Solution Summary: For Windows XP and Word 2007: Install the hotfix, or save files as *.doc For Windows XP and Word 2000 and 2003: Install the latest compatibility pack or save files as *.doc For Windows 2000 with Word 2000 or 2003, do not use any compatibility pack, save files as *.doc Emily Chorba Principle Product Manager for Oracle Tutor


  • Shell command slow when using pipe, fast with intermediate file

    - by plang
    Does anyone understand this huge difference in processing time, when using an intermediate file, or when using a pipe? I'm converting tiff to pdf using standard tools on a fresh debian squeeze server. A standard way of doing this is to convert to ps first. Without pipe: root@web5:~# time tiff2ps test.tif > test.ps real 0m0.860s user 0m0.744s sys 0m0.112s root@web5:~# time ps2pdf13 -sPAPERSIZE=a4 test.ps > test.pdf real 0m0.667s user 0m0.612s sys 0m0.060s With pipe: root@web5:~# time tiff2ps test.tif | ps2pdf13 -sPAPERSIZE=a4 - > test.pdf real 1m6.098s user 0m15.861s sys 0m50.9 During the last command, gs process is at 100% all the time. Update: Here is an strace output for the ps generation: root@web5:~# strace tiff2ps test.tif > test.ps execve("/usr/bin/tiff2ps", ["tiff2ps", "test.tif"], [/* 28 vars */]) = 0 brk(0) = 0x1395000 access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory) mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fb5a1937000 access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory) open("/etc/ld.so.cache", O_RDONLY) = 3 fstat(3, {st_mode=S_IFREG|0644, st_size=21735, ...}) = 0 mmap(NULL, 21735, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7fb5a1931000 close(3) = 0 access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory) open("/usr/lib/libtiff.so.4", O_RDONLY) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0P\200\0\0\0\0\0\0"..., 832) = 832 fstat(3, {st_mode=S_IFREG|0644, st_size=405128, ...}) = 0 mmap(NULL, 2501416, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fb5a14b9000 mprotect(0x7fb5a151a000, 2093056, PROT_NONE) = 0 mmap(0x7fb5a1719000, 12288, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x60000) = 0x7fb5a1719000 close(3) = 0 access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory) open("/usr/lib/libjpeg.so.62", O_RDONLY) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\3408\0\0\0\0\0\0"..., 832) = 832 fstat(3, {st_mode=S_IFREG|0644, st_size=145048, ...}) = 0 mmap(NULL, 2240080, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fb5a1296000 mprotect(0x7fb5a12b9000, 2093056, PROT_NONE) = 0 mmap(0x7fb5a14b8000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x22000) = 0x7fb5a14b8000 close(3) = 0 access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory) open("/usr/lib/libz.so.1", O_RDONLY) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\260\"\0\0\0\0\0\0"..., 832) = 832 fstat(3, {st_mode=S_IFREG|0644, st_size=93936, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fb5a1930000 mmap(NULL, 2188976, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fb5a107f000 mprotect(0x7fb5a1096000, 2093056, PROT_NONE) = 0 mmap(0x7fb5a1295000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x16000) = 0x7fb5a1295000 close(3) = 0 access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory) open("/lib/libm.so.6", O_RDONLY) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\360>\0\0\0\0\0\0"..., 832) = 832 fstat(3, {st_mode=S_IFREG|0644, st_size=530736, ...}) = 0 mmap(NULL, 2625768, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fb5a0dfd000 mprotect(0x7fb5a0e7d000, 2097152, PROT_NONE) = 0 mmap(0x7fb5a107d000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x80000) = 0x7fb5a107d000 close(3) = 0 access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory) 
open("/lib/libc.so.6", O_RDONLY) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\240\355\1\0\0\0\0\0"..., 832) = 832 fstat(3, {st_mode=S_IFREG|0755, st_size=1437064, ...}) = 0 mmap(NULL, 3545160, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fb5a0a9b000 mprotect(0x7fb5a0bf4000, 2093056, PROT_NONE) = 0 mmap(0x7fb5a0df3000, 20480, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x158000) = 0x7fb5a0df3000 mmap(0x7fb5a0df8000, 18504, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7fb5a0df8000 close(3) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fb5a192f000 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fb5a192e000 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fb5a192d000 arch_prctl(ARCH_SET_FS, 0x7fb5a192e700) = 0 mprotect(0x7fb5a0df3000, 16384, PROT_READ) = 0 mprotect(0x7fb5a107d000, 4096, PROT_READ) = 0 mprotect(0x7fb5a1939000, 4096, PROT_READ) = 0 munmap(0x7fb5a1931000, 21735) = 0 open("test.tif", O_RDONLY) = 3 brk(0) = 0x1395000 brk(0x13b6000) = 0x13b6000 read(3, "II*\0\10\0\0\0", 8) = 8 fstat(3, {st_mode=S_IFREG|0644, st_size=1825656, ...}) = 0 mmap(NULL, 1825656, PROT_READ, MAP_SHARED, 3, 0) = 0x7fb5a176f000 open("/proc/meminfo", O_RDONLY) = 4 fstat(4, {st_mode=S_IFREG|0444, st_size=0, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fb5a1936000 read(4, "MemTotal: 2090844 kB\nMemF"..., 1024) = 1024 close(4) = 0 munmap(0x7fb5a1936000, 4096) = 0 write(2, "TIFFReadDirectory: ", 19TIFFReadDirectory: ) = 19 write(2, "Warning, ", 9Warning, ) = 9 write(2, "test.tif: wrong data type 7 for "..., 59test.tif: wrong data type 7 for "RichTIFFIPTC"; tag ignored) = 59 write(2, ".\n", 2. ) = 2 gettimeofday({1334836895, 374666}, NULL) = 0 fstat(1, {st_mode=S_IFREG|0664, st_size=0, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fb5a1936000 open("/etc/localtime", O_RDONLY) = 4 fstat(4, {st_mode=S_IFREG|0644, st_size=1892, ...}) = 0 fstat(4, {st_mode=S_IFREG|0644, st_size=1892, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fb5a1935000 read(4, "TZif2\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\4\0\0\0\4\0\0\0\0"..., 4096) = 1892 lseek(4, -1217, SEEK_CUR) = 675 read(4, "TZif2\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\6\0\0\0\6\0\0\0\0"..., 4096) = 1217 close(4) = 0 munmap(0x7fb5a1935000, 4096) = 0 write(1, "%!PS-Adobe-3.0 EPSF-3.0\n%%Creato"..., 4096) = 4096 write(1, "fffffffffffffffffffffffffffff\nff"..., 4096) = 4096 write(1, "ffffffffffffffffffff\nfffffffffff"..., 4096) = 4096 write(1, "fffffffffff\nffffffffffffffffffff"..., 4096) = 4096 write(1, "ff\nfffffffffffffffffffffffffffff"..., 4096) = 4096 write(1, "ffffffffffffffffffffffffffffffff"..., 4096) = 4096 write(1, "ffffffffffffffffffffffffffffffff"..., 4096) = 4096 write(1, "ffffffffffffffffffffffffffffffff"..., 4096) = 4096 write(1, "ffffffffffffffffffffffffffffffff"..., 4096) = 4096 write(1, "ffffffffffffffffffffffff\nfffffff"..., 4096) = 4096 Here is an strace output for the piped version: PS generation seems to be much slower when output is piped into ps2pdf13. 
root@web5:~# strace tiff2ps test.tif | ps2pdf13 -sPAPERSIZE=a4 - > test.pdf execve("/usr/bin/tiff2ps", ["tiff2ps", "test.tif"], [/* 28 vars */]) = 0 brk(0) = 0x1b97000 access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory) mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f9208bb1000 access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory) open("/etc/ld.so.cache", O_RDONLY) = 3 fstat(3, {st_mode=S_IFREG|0644, st_size=21735, ...}) = 0 mmap(NULL, 21735, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f9208bab000 close(3) = 0 access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory) open("/usr/lib/libtiff.so.4", O_RDONLY) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0P\200\0\0\0\0\0\0"..., 832) = 832 fstat(3, {st_mode=S_IFREG|0644, st_size=405128, ...}) = 0 mmap(NULL, 2501416, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f9208733000 mprotect(0x7f9208794000, 2093056, PROT_NONE) = 0 mmap(0x7f9208993000, 12288, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x60000) = 0x7f9208993000 close(3) = 0 access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory) open("/usr/lib/libjpeg.so.62", O_RDONLY) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\3408\0\0\0\0\0\0"..., 832) = 832 fstat(3, {st_mode=S_IFREG|0644, st_size=145048, ...}) = 0 mmap(NULL, 2240080, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f9208510000 mprotect(0x7f9208533000, 2093056, PROT_NONE) = 0 mmap(0x7f9208732000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x22000) = 0x7f9208732000 close(3) = 0 access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory) open("/usr/lib/libz.so.1", O_RDONLY) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\260\"\0\0\0\0\0\0"..., 832) = 832 fstat(3, {st_mode=S_IFREG|0644, st_size=93936, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f9208baa000 mmap(NULL, 2188976, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f92082f9000 mprotect(0x7f9208310000, 2093056, PROT_NONE) = 0 mmap(0x7f920850f000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x16000) = 0x7f920850f000 close(3) = 0 access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory) open("/lib/libm.so.6", O_RDONLY) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\360>\0\0\0\0\0\0"..., 832) = 832 fstat(3, {st_mode=S_IFREG|0644, st_size=530736, ...}) = 0 mmap(NULL, 2625768, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f9208077000 mprotect(0x7f92080f7000, 2097152, PROT_NONE) = 0 mmap(0x7f92082f7000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x80000) = 0x7f92082f7000 close(3) = 0 access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory) open("/lib/libc.so.6", O_RDONLY) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\240\355\1\0\0\0\0\0"..., 832) = 832 fstat(3, {st_mode=S_IFREG|0755, st_size=1437064, ...}) = 0 mmap(NULL, 3545160, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f9207d15000 mprotect(0x7f9207e6e000, 2093056, PROT_NONE) = 0 mmap(0x7f920806d000, 20480, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x158000) = 0x7f920806d000 mmap(0x7f9208072000, 18504, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f9208072000 close(3) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f9208ba9000 mmap(NULL, 4096, 
PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f9208ba8000 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f9208ba7000 arch_prctl(ARCH_SET_FS, 0x7f9208ba8700) = 0 mprotect(0x7f920806d000, 16384, PROT_READ) = 0 mprotect(0x7f92082f7000, 4096, PROT_READ) = 0 mprotect(0x7f9208bb3000, 4096, PROT_READ) = 0 munmap(0x7f9208bab000, 21735) = 0 open("test.tif", O_RDONLY) = 3 brk(0) = 0x1b97000 brk(0x1bb8000) = 0x1bb8000 read(3, "II*\0\10\0\0\0", 8) = 8 fstat(3, {st_mode=S_IFREG|0644, st_size=1825656, ...}) = 0 mmap(NULL, 1825656, PROT_READ, MAP_SHARED, 3, 0) = 0x7f92089e9000 open("/proc/meminfo", O_RDONLY) = 4 fstat(4, {st_mode=S_IFREG|0444, st_size=0, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f9208bb0000 read(4, "MemTotal: 2090844 kB\nMemF"..., 1024) = 1024 close(4) = 0 munmap(0x7f9208bb0000, 4096) = 0 write(2, "TIFFReadDirectory: ", 19TIFFReadDirectory: ) = 19 write(2, "Warning, ", 9Warning, ) = 9 write(2, "test.tif: wrong data type 7 for "..., 59test.tif: wrong data type 7 for "RichTIFFIPTC"; tag ignored) = 59 write(2, ".\n", 2. ) = 2 gettimeofday({1334836513, 114140}, NULL) = 0 fstat(1, {st_mode=S_IFIFO|0600, st_size=0, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f9208bb0000 open("/etc/localtime", O_RDONLY) = 4 fstat(4, {st_mode=S_IFREG|0644, st_size=1892, ...}) = 0 fstat(4, {st_mode=S_IFREG|0644, st_size=1892, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f9208baf000 read(4, "TZif2\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\4\0\0\0\4\0\0\0\0"..., 4096) = 1892 lseek(4, -1217, SEEK_CUR) = 675 read(4, "TZif2\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\6\0\0\0\6\0\0\0\0"..., 4096) = 1217 close(4) = 0 munmap(0x7f9208baf000, 4096) = 0 write(1, "%!PS-Adobe-3.0 EPSF-3.0\n%%Creato"..., 4096) = 4096 write(1, "fffffffffffffffffffffffffffff\nff"..., 4096) = 4096 write(1, "ffffffffffffffffffff\nfffffffffff"..., 4096) = 4096 write(1, "fffffffffff\nffffffffffffffffffff"..., 4096) = 4096 write(1, "ff\nfffffffffffffffffffffffffffff"..., 4096) = 4096 write(1, "ffffffffffffffffffffffffffffffff"..., 4096) = 4096 write(1, "ffffffffffffffffffffffffffffffff"..., 4096) = 4096 write(1, "ffffffffffffffffffffffffffffffff"..., 4096) = 4096 write(1, "ffffffffffffffffffffffffffffffff"..., 4096) = 4096 ...etc...


  • Why does Saxon evaluate the result-document URI to be the same?

    - by Jan
    My XSL source document looks like this <Topology> <Environment> <Id>test</Id> <Machines> <Machine> <Id>machine1</Id> <modules> <module>m1</module> <module>m2</module> </modules> </Machine> </Machines> </Environment> <Environment> <Id>production</Id> <Machines> <Machine> <Id>machine1</Id> <modules> <module>m1</module> <module>m2</module> </modules> </Machine> <Machine> <Id>machine2</Id> <modules> <module>m3</module> <module>m4</module> </modules> </Machine> </Machines> </Environment> </Topology> I want to create one result-document per machine, so I use the following stylesheet giving modelDir as path for the result-documents as parameter. <xsl:output method="xml" version="1.0" encoding="UTF-8" indent="yes" name="myXML" doctype-system="http://java.sun.com/dtd/properties.dtd"/> <xsl:template match="/"> <xsl:for-each-group select="/Topology/Environment/Machines/Machine" group-by="Id"> <xsl:variable name="machine" select="Id"/> <xsl:variable name="filename" select="concat($modelDir,$machine,'.xml')" /> <xsl:message terminate="no">Writing machine description to <xsl:value-of select="$filename"/></xsl:message> <xsl:result-document href="$filename" format="myXML"> <xsl:variable name="currentMachine" select="Id"/> <xsl:for-each select="current-group()/LogicalHosts/LogicalHost"> <xsl:variable name="environment" select="normalize-space(../../../../Id)"/> <xsl:message terminate="no">Module <xsl:value-of select="."/> for <xsl:value-of select="$environment"/></xsl:message> </xsl:for-each> </xsl:result-document> </xsl:for-each-group> </xsl:template> As my messages show me this seems to work fine - if saxon would not evaluate the URI of the result-document to be the same and thus give the following output. Writing machine description to target/build/model/m1.xml Module m1 for test Module m2 for test Module m1 for production Module m2 for production Writing machine description to target/build/model/m2.xml Error at xsl:result-document on line 29 of file:/C:/Projekte/.../machine.xsl: XTDE1490: Cannot write more than one result document to the same URI, or write to a URI that has been read: file:/C:/Projekte/.../$filename file:/C:/Projekte/.../machine.xsl(29,-1) : here Cannot write more than one result document to the same URI, or write to a URI that has been read: file:/C:/Projekte/.../$filename ; SystemID: file:/C:/Projekte/.../machine.xsl; Line#: 29; Column#: -1 net.sf.saxon.trans.DynamicError: Cannot write more than one result document to the same URI, or write to a URI that has been read: file:/C:/Projekte/.../$filename at net.sf.saxon.instruct.ResultDocument.processLeavingTail(ResultDocument.java:300) at net.sf.saxon.instruct.Block.processLeavingTail(Block.java:365) at net.sf.saxon.instruct.Instruction.process(Instruction.java:91) Any ideas on how to solve this?
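    The most likely culprit is href="$filename" on xsl:result-document: that attribute is an attribute value template, so without curly braces the literal string "$filename" is used as the URI for every machine, which is exactly the "same URI" Saxon complains about. A minimal sketch of the fix:

        <!-- Braces make the variable expand, so each group writes its own file. -->
        <xsl:result-document href="{$filename}" format="myXML">
            <!-- per-machine content as before -->
        </xsl:result-document>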

    Read the article

  • Where can I find a legal "permission to work on open source" document?

    - by Nathan Long
    One of the things I really like about my current job is that we developers are encouraged to make open source contributions. However, this encouragement has always been verbal. I've read some horror stories about developers having their open-source work legally claimed by their employer. I'd be more comfortable if we had something in writing from my employer saying that contributions are allowed and not owned by the company. Understanding that you are not lawyers, does anyone know where to find a boilerplate document to this effect?

    Read the article

  • Canon MG8150 - unable to scan document to PC via wireless router, but works via USB connection?

    - by Heidi
    Please help. Get error that scanner disconnected or locked "code:5,146,555" when I try to scan a document to PC or attach to an email. Can't remember this function ever working and I've had this MFD for 2-3 years now. Printer function works fine via wireless, but not scanning. Upgraded my laptop a few months ago from Windows 98 to Windows Vista and reinstalled my printer/scanner software and drivers. Any ideas?

    Read the article

  • The W3C updates "Standards for Web Applications on Mobile", a document surveying the state of the art of the specifications

    The W3C updates "Standards for Web Applications on Mobile", a document that surveys the state of the art of the specifications and how well they are supported. In recent years we have seen a clear rise in demand for mobile Web development. This is because more and more users are choosing smartphones or tablets to access the Web through a multitude of browsers of diverging maturity, a real headache for the developer. [IMG]http://idelways.developpez.com/news/images/web-plateforme.png[/IMG] The Web: an application development platform. The W3C (World Wide Web Consort...

    Read the article

  • Should back end processes be included in use cases in requirements document?

    - by bizso09
    We're writing a requirements document for our client and need to include the use cases of the system. We're following this template: ID Description Actors Precondition Basic Steps Alternate Steps Exceptions Business validations/Rules Postconditions In the Basic Steps section, should we include steps that the system performs in the back end or should we only include steps that the user directly interacts with? Example: Basic Steps for Search 1: User goes to search page User enters term User presses search System matches search term with database entries System displays results vs Basic Steps for Search 2: User goes to search page User enters term User presses search System displays results

    Read the article

  • How to properly document functionality in an agile project?

    - by RoboShop
    So recently, we've just finished the first phase of our project. We used agile with fortnightly sprints. And whilst the application turned out well, we're now turning our eyes to some of the maintenance tasks. One maintenance task is that all of our documentation appears in the form of specs. These specs describe 1 or more stories and generally are a body of work which a few devs could knock over in a week. For development, that works really well - every two weeks, the devs get handed a spec and it's a nice discrete chunk of work that they can just do. From a documentation point of view, this has become a mess. The problem with writing specs that are focused on delivering just-in-time requirements to developers is that we haven't placed much emphasis on the big picture. Specs come from all different angles - it could be describing a standard function, it could be describing parts of a workflow, it could be describing a particular screen... And now, we have business rules about our application scattered across 120 documents. Looking for the document that covers a particular business rule or function is quite hard because you don't know which document has this information, and making a change request is equally hard because, once again, we are unsure which spec to make the change in. So we have maybe a couple of weeks of lull before it's back to speccing out functionality for the next phase, but in this time I'd like to revisit our processes. I think the way we have worked so far in terms of delivering fortnightly specs works well. But we also need a way to manage our documentation so that our business rules for a given function / workflow are easy to locate / change. I have two ideas. One is that we compile all of our specs into a series of master specs broken up by a few broad functional areas. The specs describe the sprint, the master specs describe the system. The only problems I can see are 1) Our existing 120 specs are not all neatly defined into broad functional areas. Some will require breaking up, merging etc., which will take a lot of time. 2) We'll be writing specs and updating master specs in each new sprint. That seems like double the work, and then do the devs look at the spec or the master spec? My other suggestion is to concede that our documentation is too big of a mess, and manage that mess going forward. So we go through each spec, assign appropriate keywords to it, and then when we want to find a function, we search for that keyword. The problem I can see is 1) Still the problem of business rules scattered everywhere; keywords just make them easier to find. Anyway, if anyone has any decent ideas or any experience to share about how best to manage documentation, I would really appreciate it.

    Read the article

  • Simple implementation of N-Gram, tf-idf and Cosine similarity in Python

    - by seanieb
    I need to compare documents stored in a DB and come up with a similarity score between 0 and 1. The method I need to use has to be very simple: a vanilla implementation of n-grams (where it is possible to define how many grams to use), along with simple implementations of tf-idf and cosine similarity. Is there any program that can do this? Or should I start writing this from scratch?
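    For what it's worth, a vanilla version of all three pieces fits comfortably in a page of Python; the sketch below (plain Python 3, no third-party libraries, sample documents invented purely for illustration) builds word n-gram counts, weights them with tf-idf, and compares documents with cosine similarity:

        import math
        from collections import Counter

        def ngrams(text, n=2):
            # Split a document into word n-grams; n=1 gives a plain bag of words.
            words = text.lower().split()
            return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

        def tf_idf_vectors(docs, n=2):
            # Build a tf-idf weighted sparse vector (dict) for every document.
            grams_per_doc = [Counter(ngrams(d, n)) for d in docs]
            df = Counter()
            for grams in grams_per_doc:
                df.update(grams.keys())
            vectors = []
            for grams in grams_per_doc:
                total = sum(grams.values())
                vectors.append({
                    g: (count / total) * math.log(len(docs) / df[g])
                    for g, count in grams.items()
                })
            return vectors

        def cosine_similarity(a, b):
            # Cosine similarity between two sparse vectors stored as dicts.
            dot = sum(weight * b.get(g, 0.0) for g, weight in a.items())
            norm_a = math.sqrt(sum(v * v for v in a.values()))
            norm_b = math.sqrt(sum(v * v for v in b.values()))
            return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

        docs = ["the quick brown fox", "the quick brown dog", "lorem ipsum dolor sit"]
        vecs = tf_idf_vectors(docs, n=1)
        print(cosine_similarity(vecs[0], vecs[1]))  # shares terms, score > 0
        print(cosine_similarity(vecs[0], vecs[2]))  # no overlap, score 0.0

    The scores come out between 0 and 1 because all tf-idf weights here are non-negative. If a dependency is acceptable, scikit-learn's TfidfVectorizer covers the same ground with less code, but the question reads as though a from-scratch version is preferred.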

    Read the article

  • Strange problem with UIDocumentInteractionController

    - by Tùng Ð?
    I don't know what is wrong with this code, but every time I run the app, after the menu is shown, the app crashes. NSString * path = [[NSBundle mainBundle] pathForResource:@"tung" ofType:@"doc"]; UIDocumentInteractionController *docController = [UIDocumentInteractionController interactionControllerWithURL:[NSURL fileURLWithPath:path]]; docController.delegate = self; //[docController presentPreviewAnimated:YES]; CGRect rect = CGRectMake(0, 0, 300, 300); [docController presentOptionsMenuFromRect:rect inView:self.view animated:YES]; The error I got: *** Terminating app due to uncaught exception 'NSGenericException', reason: '-[UIPopoverController dealloc] reached while popover is still visible.' What should I do now?
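    The exception is consistent with the controller being released too early: interactionControllerWithURL: returns an autoreleased object, and on iPad presentOptionsMenuFromRect:inView:animated: shows the menu in a popover, so when the local docController is deallocated at the end of the method, its popover is torn down while still visible. The usual remedy is to keep an owning reference for as long as the menu is on screen; a rough sketch follows (the property name and the surrounding view controller are assumptions, adapt them to the real class):

        // Assumption: a retained property on the presenting view controller keeps
        // the interaction controller alive while its options menu / popover is visible.
        @property (nonatomic, retain) UIDocumentInteractionController *docController;

        // When presenting:
        NSString *path = [[NSBundle mainBundle] pathForResource:@"tung" ofType:@"doc"];
        self.docController = [UIDocumentInteractionController
                              interactionControllerWithURL:[NSURL fileURLWithPath:path]];
        self.docController.delegate = self;
        [self.docController presentOptionsMenuFromRect:CGRectMake(0, 0, 300, 300)
                                                inView:self.view
                                              animated:YES];

        // Let go of the reference only once the menu has been dismissed.
        - (void)documentInteractionControllerDidDismissOptionsMenu:(UIDocumentInteractionController *)controller
        {
            self.docController = nil;
        }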

    Read the article

  • What are your thoughts on Raven DB?

    - by Ronnie Overby
    What are your thoughts on Raven DB? I see this below my title: The question you're asking appears subjective and is likely to be closed. Please don't do that. I think the question is legit because: Raven DB is brand-spanking-new. RDBMSs are probably the de facto standard for data persistence for .NET developers, and Raven DB is not an RDBMS. Given these points, I would like to know your general opinions. Admittedly, the question is sort of broad. That is intentional, because I am trying to learn as much about it as possible; however, here are some of my initial concerns and questions: What about using Raven DB for data storage in a shared web hosting environment, since Raven DB is interacted with through HTTP? Are there any areas that Raven DB is particularly well or not well suited for? How does it rank among alternatives, from a .NET developer's perspective?

    Read the article

  • What resources will help me understand the data model for QC 10.0 in order to write my SQL queries?

    - by srihari
    I am a newcomer to the HP Quality Center 10.0 software testing tool. As I understand it, in order to generate reports from QC and to troubleshoot scenarios, we need to write SQL queries against the QC back-end database; in my case it is a SQL database. I downloaded the database reference help file but could not work out where to start: it just gives each table name and its information. For a starter like me, are there any online tutorials, helpful websites, hands-on exercises, or scenarios where I can better understand how to write queries for the QC data model? I am very confident about the SQL coding itself; what I want to know is how to query the QC database tables based on the scenarios that occur in the QC tool. Please suggest. Thanks, Srihari
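    Purely as an illustration of the kind of statement the database reference describes (the BUG table and its BG_ columns are the commonly cited example; verify every table and column name against your own project's schema before relying on them), a query like the following would list open defects per assignee:

        SELECT BG_RESPONSIBLE, COUNT(*) AS open_defects
        FROM BUG
        WHERE BG_STATUS = 'Open'
        GROUP BY BG_RESPONSIBLE
        ORDER BY open_defects DESC;

    Within QC itself, the same ideas can usually be tried out interactively via the Excel report feature in the Dashboard/Analysis module, which accepts raw SQL against the project database.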

    Read the article
