Search Results

Search found 35970 results on 1439 pages for 'javascript performance'.


  • Drupal incorrectly escapes tags in javascript

    - by sergdev
    I installed drupal-6.16 and applied the patch from the post http://drupal.org/node/222926#comment-930745. It works correctly in simple cases, but the following counter code is handled incorrectly and the counter is not displayed on the page, because Drupal modifies the string "alt='1Gb.ua counter' /><\/a>")</a></script> to "alt='1Gb.ua counter' />&lt;\/a>")</a></script>. The full code of the counter follows:

        <br><br> Text <br><br>
        <!-- counter.1Gb.ua -->
        <script language="javascript" type="text/javascript">
        cgb_js="1.0";
        cgb_r=""+Math.random()+"&r="+escape(document.referrer)+"&pg="+escape(window.location.href);
        document.cookie="rqbct=1; path=/";
        cgb_r+="&c="+(document.cookie?"Y":"N");
        </script>
        <script language="javascript1.1" type="text/javascript">
        cgb_js="1.1";cgb_r+="&j="+(navigator.javaEnabled()?"Y":"N")
        </script>
        <script language="javascript1.2" type="text/javascript">
        cgb_js="1.2";
        cgb_r+="&wh="+screen.width+'x'+screen.height+"&px="+(((navigator.appName.substring(0,3)=="Mic"))?screen.colorDepth:screen.pixelDepth)
        </script>
        <script language="javascript1.3" type="text/javascript">
        cgb_js="1.3"
        </script>
        <script language="javascript" type="text/javascript">
        cgb_r+="&js="+cgb_js;
        document.write("<a href='http://www.1Gb.ua?cnt=1416'>"+
            "<img src='http://counter.1Gb.ua/cnt.aspx?"+
            "u=1416&"+cgb_r+
            "&' border=0 width=88 height=31 "+
            "alt='1Gb.ua counter'><\/a>")
        </script>
        <noscript><a href='http://www.1Gb.ua?cnt=1416'>
        <img src="http://counter.1Gb.ua/cnt.aspx?u=1416" border=0 width="88" height="31" alt="1Gb.ua counter"></a>
        </noscript>
        <!-- /counter.1Gb.ua -->

    Does anybody have this code working? How can it be fixed? Thanks a lot in advance!
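
    A common workaround when an input filter mangles a literal closing tag inside document.write (a sketch, not an official Drupal fix) is to build the tag from concatenated fragments, so the filter never sees the sequence it tries to escape:

        <script type="text/javascript">
        // Build the closing tag from fragments so a filter that rewrites
        // "<\/a>" never encounters the literal sequence in the source.
        var closeTag = "<" + "/a>";
        document.write("<a href='http://www.1Gb.ua?cnt=1416'>" +
            "<img src='http://counter.1Gb.ua/cnt.aspx?u=1416' " +
            "border=0 width=88 height=31 alt='1Gb.ua counter'>" + closeTag);
        </script>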

    Read the article

  • Cross domain javascript form filling, reverse proxy

    - by Michel van Engelen
    I need a JavaScript form filler that can bypass the 'same origin policy' most modern browsers implement. I made a script that opens the desired website/form in a new window. With the handle returned by the window.open method, I want to retrieve the inputs with theWindowHandler.document.getElementById('inputx') and fill them (access denied). Is it possible to solve this problem by using Isapi Rewrite (official site) in IIS 6 acting as a reverse proxy? If so, how would I configure the reverse proxy? This is how far I got:

        RewriteEngine on
        RewriteLogLevel 9
        LogLevel debug
        RewriteRule CarChecker https://the.actualcarchecker.com/CheckCar.aspx$1 [NC,P]

    The rewrite works (http://ourcompany.com/ourapplication/CarChecker), as evident in the logging. From within our company site I can run the carchecker as if it were in our own domain. Except the 'same origin policy' is still in force. Regards, Michel
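
    If the page being filled can be modified (or a script can be injected into it through the proxy), another angle is window.postMessage, which is designed for exactly this cross-origin case. A minimal sketch, assuming you control both sides; the URLs and field IDs are illustrative:

        // In the opener page: send the values to fill once the child is ready.
        var child = window.open("https://the.actualcarchecker.com/CheckCar.aspx");
        function sendValues() {
            child.postMessage(JSON.stringify({ inputx: "ABC-123" }),
                "https://the.actualcarchecker.com");
        }

        // In the opened page: accept messages only from the trusted origin.
        window.addEventListener("message", function (ev) {
            if (ev.origin !== "http://ourcompany.com") return;
            var data = JSON.parse(ev.data);
            document.getElementById("inputx").value = data.inputx;
        }, false);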

    Read the article

  • Get lots of javascript problems when using Opera 11.00 to surf

    - by s hanley
    Sites like eBay and even Super User stop working properly when I use Opera 11.00. Menus stop working everywhere from eBay to GoDaddy: hovering on a menu item doesn't expand it, and no sub-menu slides out. This makes a large number of very popular websites unusable. Am I right in assuming this is a JavaScript issue? I use Opera for the Turbo feature (I have tested Opera with and without Turbo, so it's not Turbo's fault) because I'm on mobile broadband until I get my phone line sorted out. Turbo helps me save money, as well as allowing me to surf at a sane speed. Is there a Firefox or Chrome equivalent to Opera Turbo that doesn't cost money? I'm using Opera 11.00, build 1156.

    Read the article

  • What is the recommended minimum object size for gzip performance benefits?

    - by utt73
    I'm working on improving page display times, and one of the methods is to gzip content from the webserver. Google recommends: "Note that gzipping is only beneficial for larger resources. Due to the overhead and latency of compression and decompression, you should only gzip files above a certain size threshold; we recommend a minimum range between 150 and 1000 bytes. Gzipping files below 150 bytes can actually make them larger." We serve our content through Akamai, using their network as a proxy and CDN. What they've told me: "Following up on your question regarding what is the minimum size at which Akamai will compress the requested object when sending it to the end user: the minimum size is 860 bytes." My reply: "What is the reason(s) why Akamai's minimum size is 860 bytes? And why, for example, is this not the case for files Akamai serves for Facebook? (see below) Google recommends gzipping more aggressively. And that seems appropriate on our site, where the most frequent hits, by far, are AJAX calls that are <860 bytes." Akamai's response: "The reason 860 bytes is the minimum size for compression is twofold: (1) The overhead of compressing an object under 860 bytes outweighs the performance gain. (2) Objects under 860 bytes can be transmitted via a single packet anyway, so there isn't a compelling reason to compress them." So I'm here for some fact checking. Is the 860-byte limit due to packet size the end of this reasoning? Why would high-traffic sites push this down to the 150-byte limit - just to save on bandwidth costs (since CDNs base their charges on bandwidth offloaded from origin), or is there a performance gain in doing so?
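
    The fixed cost is easy to see directly. A minimal sketch (Node.js, using the built-in zlib module) that gzips payloads of various sizes and prints the before/after byte counts:

        // Gzip overhead demo: a gzip stream carries a ~10-byte header plus an
        // 8-byte trailer, so tiny payloads can come out larger than they went in.
        const zlib = require("zlib");

        [20, 150, 860, 10000].forEach((size) => {
            const payload = Buffer.alloc(size, "a"); // best case: repetitive data
            const gzipped = zlib.gzipSync(payload);
            console.log(size + " bytes -> " + gzipped.length + " bytes gzipped");
        });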

    Read the article

  • How do I measure performance of a virtual server?

    - by Sergey
    I've got a VPS running Ubuntu. Being a virtual server, I understand that it shares resources with an unknown number of other servers, and I'm noticing that it's considerably slower than my desktop machine. Is there some tool to measure the performance of the virtual machine? I'd be curious to see some approximate measure similar to bogomips, possibly for CPU (operations/sec), memory, and disk read/write speed. I'd like to be able to compare those numbers to my desktop machine. I'm not interested in the specs of the actual physical machine my VPS is running on - by doing cat /proc/cpuinfo I can see that it's a nice quad-core Xeon machine, but that doesn't matter to me. I'm basically interested in how fast a program would run on my VPS - how many CPU operations it can make in a second, how many bytes it can write to RAM or to disk. I only have ssh access to the machine, so the tool needs to be command-line. I could write a script which, say, does some calculations in a loop for a second and counts how many loops it was able to do, or something similar to measure disk and RAM performance. But I'm sure something like this already exists.
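
    The loop-counting idea at the end of the question is straightforward to sketch. A rough command-line micro-benchmark (Node.js; the absolute numbers are only meaningful when comparing the same script across machines, and established tools such as sysbench cover the same ground more rigorously):

        // Rough CPU benchmark: count loop iterations completed in one second.
        const fs = require("fs");

        const deadline = Date.now() + 1000;
        let ops = 0, x = 0;
        while (Date.now() < deadline) {
            for (let i = 0; i < 10000; i++) x += Math.sqrt(i);
            ops += 10000;
        }
        console.log("CPU: ~" + ops + " loop iterations/sec");

        // Crude disk benchmark: time a 100 MB sequential write.
        const buf = Buffer.alloc(100 * 1024 * 1024);
        const t0 = Date.now();
        fs.writeFileSync("bench.tmp", buf);
        const secs = Math.max(Date.now() - t0, 1) / 1000;
        console.log("Disk: ~" + (100 / secs).toFixed(1) + " MB/sec write");
        fs.unlinkSync("bench.tmp");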

    Read the article

  • How to squeeze the maximum performance out of Unity and GNOME 3?

    - by melvincv
    I see that I do not get good performance with the new Unity desktop, but I should say that Unity has improved a lot since the last release, Ubuntu 11.10. How do I squeeze the maximum performance out of 1. Unity and 2. GNOME 3? My system specs:

        -Processors-
        Intel(R) Pentium(R) Dual CPU E2180 @ 2.00GHz
        -Memory-
        Total Memory : 2049996 kB
        -PCI Devices-
        Host bridge : Intel Corporation 82G33/G31/P35/P31 Express DRAM Controller (rev 10)
        PCI bridge : Intel Corporation 82G33/G31/P35/P31 Express PCI Express Root Port (rev 10) (prog-if 00 [Normal decode])
        VGA compatible controller : Intel Corporation 82G33/G31 Express Integrated Graphics Controller (rev 10) (prog-if 00 [VGA controller])
        USB controller : Intel Corporation N10/ICH 7 Family USB UHCI Controller #1 (rev 01) (prog-if 00 [UHCI])
        USB controller : Intel Corporation N10/ICH 7 Family USB UHCI Controller #2 (rev 01) (prog-if 00 [UHCI])
        USB controller : Intel Corporation N10/ICH 7 Family USB UHCI Controller #3 (rev 01) (prog-if 00 [UHCI])
        USB controller : Intel Corporation N10/ICH 7 Family USB UHCI Controller #4 (rev 01) (prog-if 00 [UHCI])
        USB controller : Intel Corporation N10/ICH 7 Family USB2 EHCI Controller (rev 01) (prog-if 20 [EHCI])
        PCI bridge : Intel Corporation 82801 PCI Bridge (rev e1) (prog-if 01 [Subtractive decode])
        ISA bridge : Intel Corporation 82801GB/GR (ICH7 Family) LPC Interface Bridge (rev 01)
        IDE interface : Intel Corporation 82801G (ICH7 Family) IDE Controller (rev 01) (prog-if 8a [Master SecP PriP])
        IDE interface : Intel Corporation N10/ICH7 Family SATA Controller [IDE mode] (rev 01) (prog-if 8f [Master SecP SecO PriP PriO])
        SMBus : Intel Corporation N10/ICH 7 Family SMBus Controller (rev 01)
        Ethernet controller : Intel Corporation PRO/100 VE Network Connection (rev 01)

    Read the article

  • Functions registered with ExternalInterface.addCallback not available in Javascript

    - by Selene
    I'm working on a Flash game that needs to call some JavaScript on the page and get data back from it. Calling JavaScript from Flash works. Calling the Flash functions from JavaScript (often) doesn't. I'm using the Gaia framework. What happens:

    - The swf is loaded in with SWFObject.
    - There's a button in the Flash file. On click, it uses ExternalInterface.call() to call a JavaScript function. This works.
    - The JavaScript function calls a Flash function that was exposed with ExternalInterface.addCallback(). Sometimes, the JavaScript produces the following error: TypeError: myFlash.testCallback is not a function.
    - When the error happens, it affects all functions registered with addCallback(). Gaia and some of its included libraries use addCallback(), and calling those functions from JavaScript also produces the TypeError.
    - Waiting a long time before pressing the button in Flash doesn't solve the error.
    - Having Flash re-try addCallback() periodically doesn't solve the error.
    - When the error occurs, ExternalInterface.available = true and ExternalInterface.objectID contains the correct name for the Flash embed object.
    - When the error occurs, document.getElementById('myflashcontent') correctly returns the Flash embed object.

    From my Page class:

        public class MyPage extends AbstractPage {
            // declarations of stage instances and class variables
            // other functions

            override public function transitionIn():void {
                send_button.addEventListener(MouseEvent.MOUSE_UP, callJS);
                exposeCallbacks();
                super.transitionIn();
            }

            private function exposeCallbacks():void {
                trace("exposeCallbacks()");
                if (ExternalInterface.available) {
                    trace("ExternalInterface.objectID: " + ExternalInterface.objectID);
                    try {
                        ExternalInterface.addCallback("testCallback", simpleTestCallback);
                        trace("called ExternalInterface.addCallback");
                    } catch (error:SecurityError) {
                        trace("A SecurityError occurred: " + error.message + "\n");
                    } catch (error:Error) {
                        trace("An Error occurred: " + error.message + "\n");
                    }
                } else {
                    trace("exposeCallbacks() - ExternalInterface not available");
                }
            }

            private function simpleTestCallback(str:String):void {
                trace("simpleTestCallback(str=\"" + str + "\")");
            }

            private function callJS(e:Event):void {
                if (ExternalInterface.available) {
                    ExternalInterface.call("sendTest", "name", "url");
                } else {
                    trace("callJS() - ExternalInterface not available");
                }
            }
        }

    My JavaScript:

        function sendTest(text, url) {
            var myFlash = document.getElementById("myflashcontent");
            var callbackStatus = "";
            callbackStatus += '\nmyFlash[testCallback]: ' + myFlash['testCallback'];
            //console.log(callbackStatus);
            var error = false;
            try {
                myFlash.testCallback("test string");
            } catch (err) {
                alert("Error: " + err.toString());
                error = true;
            }
            if (!error) {
                alert("Success");
            }
        }

        var params = {
            quality: "high",
            scale: "noscale",
            wmode: "transparent",
            allowscriptaccess: "always",
            bgcolor: "#000000"
        };
        var flashVars = { siteXML: "xml/site.xml" };
        var attributes = { id: "myflashcontent", name: "myflashcontent" };

        // load the flash movie.
        swfobject.embedSWF("http://myurl.com/main.swf?v2", "myflashcontent",
            "728", "676", "10.0.0", serverRoot + "expressInstall.swf",
            flashVars, params, attributes, function(returnObj) {
                console.log('Returned ' + returnObj.success);
                if (returnObj.success) {
                    returnObj.ref.focus();
                }
            });

    Read the article

  • Premature-Optimization and Performance Anxiety

    - by James Michael Hare
    While writing my post analyzing the new .NET 4 ConcurrentDictionary class (here), I fell into one of the classic blunders that I myself always love to warn about. After analyzing the differences in time between a Dictionary with locking and the new ConcurrentDictionary class, I noted that the ConcurrentDictionary was faster with read-heavy multi-threaded operations. Then I made the classic blunder of thinking that because the original Dictionary with locking was faster for write-heavy uses, it was the best choice for those types of tasks. In short, I fell into the premature-optimization anti-pattern. Basically, the premature-optimization anti-pattern is when a developer codes very early for a perceived (whether rightly or wrongly) performance gain, sacrificing good design and maintainability in the process. At best, the performance gains are usually negligible; at worst, they can either negatively impact performance or degrade maintainability so much that time to market suffers or the code becomes very fragile due to the complexity. Keep in mind the distinction above. I'm not talking about valid performance decisions. There are decisions one should make when designing and writing an application that are valid performance decisions. Examples of this are knowing the best data structures for a given situation (Dictionary versus List, for example) and choosing appropriate algorithms (linear search vs. binary search). But these, in my mind, are macro optimizations. The error is not in deciding to use a better data structure or algorithm; the anti-pattern, as stated above, is when you attempt to over-optimize early on in such a way that it sacrifices maintainability. In my case, I was actually considering trading the safety and maintainability gains of the ConcurrentDictionary (no locking required) for a slight performance gain by using the Dictionary with locking. This would have been a mistake, as I would be trading maintainability (ConcurrentDictionary requires no locking, which helps readability) and safety (ConcurrentDictionary is safe for iteration even while being modified, and you don't risk a developer locking incorrectly) -- and I fell for it even when I knew to watch out for it. I think in my case, and it may be true for others as well, a large part of it was due to the time I was trained as a developer. I began college in the 90s, when C and C++ were king and hardware speed and memory were still relatively precious commodities, not to be squandered. In those days, using a long instead of a short could waste precious resources, and as such, we were taught to try to minimize space and favor performance. This is why many such early code-bases were very hard to maintain. I don't know how many times I heard back then to avoid too many function calls because of the overhead -- and in fact just last year I heard a new hire in the company where I work declare that she didn't want to refactor a long method because of function call overhead. Now back then, that may have been a valid concern, but with today's modern hardware, even if you're calling a trivial method in an extremely tight loop (which chances are the JIT compiler would optimize anyway), the results of removing method calls to speed up performance are negligible for the great majority of applications. Now, obviously, there are those applications where speed is absolutely king (for example drivers, computer games, operating systems) where such sacrifices may be made.
But I would strongly advise against such optimization because of its cost. Many folks performing an optimization think it's always a win-win: they're simply adding speed to the application, so what could possibly be wrong with that? What they don't realize is the cost of their choice. For every piece of straightforward code that you obfuscate with performance enhancements, you risk introducing bugs and adding to the long-term technical debt of the application. It will become so fragile over time that maintenance will become a nightmare. I've seen such applications in places I have worked. There are times I've seen applications where the designer was so obsessed with performance that they even designed their own memory management system for the application to try to squeeze out every ounce of performance. Unfortunately, application stability often suffers as a result, and it is very difficult for anyone other than the original designer to maintain. I've even seen this recently, when I heard a C++ developer bemoaning that in VS2010 the iterators are about twice as slow as they used to be because Microsoft added range checking (probably as part of the C++0x standard implementation). To me this was almost a joke. Twice as slow sounds bad, but it's almost never as bad as you think -- especially if you're gaining safety. The only time twice as slow is really that much slower is when once was too slow to begin with. Think about it: 2 minutes is a slow response time because 1 minute is slow. But if an iterator takes 1 microsecond to move one position and a new, safer iterator takes 2 microseconds, this is trivial! The only way you'd ever really notice this would be in iterating a collection just for the sake of iterating (i.e. no other operations). To my mind, the added safety makes the extra time worth it. Always favor safety and maintainability when you can. I know it can be a hard habit to break, especially if you started your career early or in a language such as C, where developers are very performance-conscious. But in reality, these types of micro-optimizations only end up hurting you in the long run. Remember the two laws of optimization. I'm not sure where I first heard these, but they are so true: For beginners: Do not optimize. For experts: Do not optimize yet. This is so true. If you're a beginner, resist the urge to optimize at all costs. And if you are an expert, delay that decision. As long as you have chosen the right data structures and algorithms for your task, your performance will probably be more than sufficient. Chances are it will be network, database, or disk hits that will be your slow-down, not your code. As they say, 98% of your code's bottleneck is in 2% of your code, so premature optimization may add maintenance and safety debt that won't have any measurable impact. Instead, code for maintainability and safety, and then, and only then, when you find a true bottleneck, go back and optimize further.

    Read the article

  • How to avoid Memory "Hard Fault/sec"

    - by Flavio Oliveira
    I have a problem on my Windows 2008 Server x64 machine, and I cannot understand how to solve it. Looking at Resource Monitor, I see about 100 to 200 hard faults/sec, and in general the machine is slow. As I've read, a hard fault is caused by a memory page that is no longer available in physical memory, which forces I/O (disk) operations, and that is a problem. The current hardware is an Intel Core 2 Duo E8400 (3.0 GHz) with 6 GB RAM on Windows Server Web 64-bit. The machine has about 2 GB of RAM in use, leaving 4 GB available. Why does the machine require that high a level of disk operations? What can I do to increase performance? Am I experiencing memory issues? What should be my starting point?

    Read the article

  • SQL Server performance, virtual memory usage

    - by user45641
    Hello, I have a very large DB used mostly for analytics. The performance overall is very sluggish. I just noticed that when running the query below, the amount of virtual memory reported greatly exceeds the amount of physical memory available. Currently, physical memory is 10 GB (10238 MB), whereas the virtual memory returned is significantly more: 8388607 MB. That seems really wrong, but I'm at a bit of a loss on how to proceed.

        USE [master];
        GO
        select cpu_count
             , hyperthread_ratio
             , physical_memory_in_bytes / 1048576 as 'mem_MB'
             , virtual_memory_in_bytes / 1048576 as 'virtual_mem_MB'
             , max_workers_count
             , os_error_mode
             , os_priority_class
        from sys.dm_os_sys_info

    Read the article

  • Benchmarking Java programs

    - by stefan-ock
    For university, I perform bytecode modifications and analyze their influence on the performance of Java programs. For this I need Java programs -- ideally ones used in production -- and appropriate benchmarks. For instance, I already have HyperSQL and measure its performance with the benchmark program PolePosition. The Java programs run on a JVM without a JIT compiler. Thanks for your help! P.S.: I cannot use programs that benchmark the performance of the JVM or of the Java language itself (such as Wide Finder).

    Read the article

  • Event type property lost in IE-8

    - by Channel72
    I've noticed a strange JavaScript error which only seems to happen on Internet Explorer 8. Basically, on IE-8, if you have an event handler function which captures the event object in a closure, the event "type" property seems to become invalidated from within the closure. Here's a simple code snippet which reproduces the error:

        <html>
        <head>
        <script type="text/javascript">
            function handleClickEvent(ev) {
                ev = (ev || window.event);
                alert(ev.type);
                window.setTimeout(function() {
                    alert(ev.type); // Causes error on IE-8
                }, 20);
            }

            function foo() {
                var query = document.getElementById("query");
                query.onclick = handleClickEvent;
            }
        </script>
        </head>
        <body>
            <input id="query" type="submit">
            <script type="text/javascript">
                foo();
            </script>
        </body>
        </html>

    So basically, what happens here is that within the handleClickEvent function, we have the event object ev. We call alert(ev.type) and we see the event type is "click". So far, so good. But when we capture the event object in a closure and then call alert(ev.type) again from within the closure, all of a sudden Internet Explorer 8 errors, saying "Member not found" because of the expression ev.type. It seems as though the type property of the event object is mysteriously gone after we capture the event object in a closure. I tested this code snippet on Firefox, Safari and Chrome, and none of them report an error condition. But in IE-8, the event object seems to become somehow invalidated after it's captured in the closure. Question: Why is this happening in IE-8, and is there any workaround?
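
    A widely used workaround is to copy the properties you need into local variables before the handler returns. (A sketch; the underlying cause is that IE8 exposes a single global window.event object whose members are invalidated once the handler finishes, so the closure ends up holding a stale reference.)

        function handleClickEvent(ev) {
            ev = ev || window.event;
            // Copy what we need now; IE8 invalidates the event object's
            // members after the handler returns, so the closure must not
            // touch `ev` later.
            var evType = ev.type;
            window.setTimeout(function () {
                alert(evType); // safe on IE8: a plain string captured by value
            }, 20);
        }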

    Read the article

  • Javascript loading never completes on many sites

    - by Joe
    I recently moved country and have found that on many websites the page never finishes loading. In some cases, no content is ever displayed, but the loading will never time out. Loading Developer Tools in Chrome shows me that it is the Javascript files which never load. For example, this BBC article will never load compatability.js, though will load all the other JS files perfectly. Google Maps often fails to finish loading, meaning it's impossible to make searches. There seems to be no pattern to which files will fail to load (i.e. they don't come from the same CDN). I have tried Chrome, Safari and Firefox on OSX 10.8, and Chrome on my girlfriend's OSX 10.7. I have similar issues on the iPad. In many cases, if I can go to the mobile version of the page that seems to load fine. I have run the browsers in private mode, disabled plugins, updated flash, cleared the cache, flushed the DNS cache - though it would seem that if this is happening on other devices, none of this would work anyway. Is this an ISP issue? And if so, why would it be limited to certain JS files and not all? JS files from the same domain work fine, so I'm not really sure what I should be looking for.

    Read the article

  • Common causes of slow performing jQuery and how to optimize the code?

    - by Polaris878
    Hello, This might be a bit of a vague or general question, but I figure it might serve as a good resource for other jQuery-ers. I'm interested in common causes of slow-running jQuery and how to optimize these cases. We have a good amount of jQuery/JavaScript performing actions on our page, and performance can really suffer with a large number of elements. What are some obvious performance pitfalls you know of with jQuery? What are some general optimizations a jQuery-er can do to squeeze every last bit of performance out of his/her scripts? One example: a developer may use a selector to access an element in a way that is slower than some other way. Thanks
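
    To make the selector example concrete, here is a sketch of three habits that commonly show up in profiles: caching jQuery objects instead of re-querying, batching DOM writes, and delegating events (using .on(), available in jQuery 1.7+; the selectors and IDs are illustrative):

        // Slow: re-runs selectors inside the loop, touches the DOM on every
        // pass, and binds one click handler per element (N handlers total).
        for (var i = 0; i < 100; i++) {
            $("#list").append("<li class='item'>Row " + i + "</li>");
            $(".item").last().click(function () { console.log(this.textContent); });
        }

        // Faster: cache the jQuery object, build the HTML once, append once,
        // and let one delegated handler serve every .item element.
        var $list = $("#list"); // ID selectors map to getElementById
        var html = [];
        for (var j = 0; j < 100; j++) {
            html.push("<li class='item'>Row " + j + "</li>");
        }
        $list.append(html.join("")); // single DOM touch
        $list.on("click", ".item", function () { // one delegated handler
            console.log($(this).text());
        });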

    Read the article

  • HTML form with single text field + preventing postback in Internet Explorer

    - by SudheerKovalam
    I have noticed rather strange behaviour in IE. I have an HTML form with a single input text field and a submit button. On Submit click I need to execute a client-side JavaScript function that does the necessary work. Now, to prevent the postback in the text field (on Enter key press), I have added a key press JavaScript function that looks like this:

        <input type=text onkeypress="return OnEnterKeyPress(event)" />

        function OnEnterKeyPress(event) {
            var keyNum = 0;
            if (window.event) {           // IE
                keyNum = event.keyCode;
            } else if (event.which) {     // Netscape/Firefox/Opera
                keyNum = event.which;
            } else {
                return true;
            }
            if (keyNum == 13) {           // Enter key pressed: start search
                OnButtonClick();
                return false;
            }
            return true;
        }

    Strangely, this doesn't work. But if I pass the text field to the function:

        <input type=text onkeypress="return OnEnterKeyPress(this,event);" />

        function OnEnterKeyPress(thisForm, event) {
            var keyNum = 0;
            if (window.event) {           // IE
                keyNum = event.keyCode;
            } else if (event.which) {     // Netscape/Firefox/Opera
                keyNum = event.which;
            } else {
                return true;
            }
            if (keyNum == 13) {           // Enter key pressed: start search
                OnButtonClick();
                return false;
            }
            return true;
        }

    I am able to prevent the postback. Can anyone confirm what exactly is happening here? The HTML form has just one text box and a submit button. The resulting output of the JavaScript function executed on submit is displayed in an HTML textarea in a separate div.

    Read the article

  • Is it possible to access javascript return value outside of function? [on hold]

    - by Kinnard Hockenhull
    How would one access a JavaScript function's return value outside of the function? For example, could you tell a function to return something somewhere else in the code? Theoretical example:

        milkmachine = function (argument) {
            var r;
            var k;
            // do something with arguments and variables
            return r;
        };

        var rainbow = milkmachine(); // rainbow == r
        milkmachine.return(k);
        var spectrum = milkmachine(); // spectrum == k
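
    There is no mechanism for redirecting a function's return value after the fact. Two usual patterns (a sketch with illustrative values): return both values in one object, or expose the secondary value as a property on the function:

        // Pattern 1: return both values and pick whichever you need.
        var milkmachine = function () {
            var r = "rainbow value";
            var k = "spectrum value";
            return { r: r, k: k };
        };
        var result = milkmachine();
        var rainbow = result.r;   // == r
        var spectrum = result.k;  // == k

        // Pattern 2: stash the secondary value on the function object itself.
        var milkmachine2 = function () {
            milkmachine2.k = "spectrum value"; // readable after the call
            return "rainbow value";
        };
        var rainbow2 = milkmachine2();
        var spectrum2 = milkmachine2.k;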

    Read the article

  • How to convert an html page to pdf using javascript? [closed]

    - by user1439891
    I am developing a project which has a receipt page (this is the HTML page that I want to convert into PDF), and I have to print it. While printing that page, the alignment does not come out properly. If I convert it into PDF, the PDF will take care of the alignment, and my work will become easy and effective. I am restricted to using only JavaScript or JS libraries to complete this task. Could any of you please help me?
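
    One commonly used client-side route is the jsPDF library. A sketch, assuming jsPDF v2+ and its html2canvas dependency are loaded on the page via <script> tags; the "receipt" element ID is illustrative:

        // Render the receipt element into a downloadable PDF in the browser.
        var doc = new jspdf.jsPDF({ unit: "pt", format: "a4" });
        doc.html(document.getElementById("receipt"), {
            callback: function (pdf) {
                pdf.save("receipt.pdf"); // triggers the download
            },
            x: 20,
            y: 20
        });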

    Read the article

  • Can a whitespace regex character be used to perform a javascript injection? [migrated]

    - by webose
    If I want to validate the input of a <textarea> and want it to contain, for example, only numerical values, but also want to give users the ability to insert new lines, I can select the wanted characters with a JavaScript regex that includes the whitespace characters: /[0-9\s]/. The question is: can a whitespace character be used to perform an injection or XSS attack (even if I think this last option is impossible), or any other type of attack? Thanks
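
    Whitespace itself is harmless here. The practical risk with this pattern is an unanchored test that accepts any string merely containing one valid character, rather than requiring the whole string to be built from valid characters. A short sketch of the pitfall (note that \s includes \n, so the anchored version still allows new lines):

        var input = "<script>alert(1)</script> 42";

        // Unanchored: passes because the string merely *contains* a digit.
        console.log(/[0-9\s]/.test(input));          // true  -- unsafe as a validator

        // Anchored whitelist: the whole string must be digits and whitespace.
        console.log(/^[0-9\s]+$/.test(input));       // false -- rejected
        console.log(/^[0-9\s]+$/.test("12 34\n5"));  // true  -- accepted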

    Read the article

  • Can compressing Program Files save space *and* give a significant boost to SSD performance?

    - by Christopher Galpin
    Considering solid-state disk space is still an expensive resource, compressing large folders has appeal. Thanks to VirtualStore, could Program Files be a case where it might even improve performance?

    Discovery: In particular I have been reading "SSD and NTFS Compression Speed Increase?", "Does NTFS compression slow SSD/flash performance?", and "Will somebody benchmark whole disk compression (HD,SSD) please?" (may have to scroll up). The first link is particularly dreamy, but maybe has its head a little too far in the clouds. The third link has this sexy semi-log graph (logarithmic scale!). Quote (with notes): "Using highly compressable data (IOmeter), you get at most a 30x performance increase [for reads], and at least a 49x performance DECREASE [for writes]." Assuming I interpreted and clarified that sentence correctly, this single user's benchmark has me incredibly interested. Although write performance tanks wretchedly, read performance still soars. It gave me an idea.

    Idea: VirtualStore. It so happens that, thanks to sanity-saving security features introduced in Windows Vista, write access to certain folders such as Program Files is virtualized for non-administrator processes. This means that, in normal (non-elevated) usage, a program or game's attempt to write data to its install location in Program Files (which is perhaps a poor location) is redirected to %UserProfile%\AppData\Local\VirtualStore, somewhere entirely different. Thus, to my understanding, writes to Program Files should primarily occur only when installing an application. This makes compressing it not only a huge source of space gain, but also a potential candidate for performance gain.

    Testing: The beginning of this post has me a bit timid; it suggests benchmarking NTFS compression on a whole drive is difficult because turning it off "doesn't decompress the objects". However, it seems to me the compact command is perfectly capable of doing so for both drives and individual folders. Could it be only marking them for decompression the next time the OS reads from them? I need to find the answer before I begin my own testing.

    Read the article

  • JavaScript tags, performance and W3C

    - by Thomas
    Today I was looking for website optimization content and I found an article talking about moving JavaScript scripts to the bottom of the HTML page. Is this valid under the W3C's recommendations? I learned that all JavaScript must be inside the head tag... Thank you.
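
    For reference, script elements are permitted both in head and anywhere inside body in HTML 4.01 and HTML5, so the bottom-of-page pattern is conformant markup. A sketch of the two common placements (file names are illustrative):

        <!DOCTYPE html>
        <html>
        <head>
            <!-- Option 1: script in head, but deferred until parsing finishes -->
            <script src="app.js" defer></script>
        </head>
        <body>
            <p>Page content renders without waiting for the scripts.</p>
            <!-- Option 2: script at the end of body, just before it closes -->
            <script src="analytics.js"></script>
        </body>
        </html>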

    Read the article

  • Performance question: Inverting an array of pointers in-place vs array of values

    - by Anders
    The background for asking this question is that I am solving a linearized equation system (Ax=b), where A is a matrix (typically of dimension less than 100x100) and x and b are vectors. I am using a direct method, meaning that I first invert A, then find the solution by x=A^(-1)b. This step is repeated in an iterative process until convergence. The way I'm doing it now, using a matrix library (MTL4): for every iteration I copy all coefficients of A (values) into the matrix object, then invert. This is the easiest and safest option. Using an array of pointers instead: for my particular case, the coefficients of A happen to be updated between each iteration. These coefficients are stored in different variables (some are arrays, some are not). Would there be a potential performance gain if I set up A as an array containing pointers to these coefficient variables, then inverted A in place? The nice thing about the last option is that once I have set up the pointers in A before the first iteration, I would not need to copy any values between successive iterations; the values pointed to in A would automatically be updated between iterations. So the performance question boils down to this, as I see it:

    - The matrix inversion process takes roughly the same amount of time, assuming dereferencing of pointers is inexpensive.
    - The array of pointers does not need the extra memory for a matrix A containing values.
    - The array-of-pointers option does not have to copy all NxN values of A between each iteration.
    - The values pointed to in the array-of-pointers option are generally NOT ordered in memory. Hopefully all values lie relatively close together, but *A[0][1] is generally not next to *A[0][0], etc.

    Any comments on this? Will the last remark affect performance negatively, outweighing the positive performance effects?

    Read the article

  • Performance impact: What is the optimal payload for SqlBulkCopy.WriteToServer()?

    - by Linchi Shea
    For many years, I have been using a C# program to generate the TPC-C compliant data for testing. The program relies on the SqlBulkCopy class to load the data generated by the program into the SQL Server tables. In general, the performance of this C# data loader is satisfactory. Lately however, I found myself in a situation where I needed to generate a much larger amount of data than I typically do and the data needed to be loaded within a confined time frame. So I was driven to look into the code...(read more)

    Read the article
