Search Results

Search found 16731 results on 670 pages for 'memory limit'.


  • Win32: How to crash?

    - by Ian Boyd
    I'm trying to figure out where Windows Error Reports are saved; I hit Send on some earlier today, but I forgot that I wanted to "view the details" so I could examine the memory minidumps. Now I cannot find where they are stored (and Google doesn't know). So I want to write a dummy application that will crash, show the WER dialog, and let me click "view the details" so I can get to the folder where the dumps are saved. How can I crash on Windows?

    Read the article

  • Performance tuning of a Hibernate+Spring+MySQL project operation that stores images uploaded by user

    - by Umar
    Hi, I am working on a web project that is Spring+Hibernate+MySQL based. I am stuck at a point where I have to store images uploaded by a user into the database. Although I have written some code that works well for now, I believe things will mess up when the project goes live. Here's my domain class that carries the image bytes:

        @Entity
        public class Picture implements java.io.Serializable {
            long id;
            byte[] data;
            ... // getters and setters
        }

    And here's my controller that saves the file on submit:

        public class PictureUploadFormController extends AbstractBaseFormController {
            ...
            protected ModelAndView onSubmit(HttpServletRequest request, HttpServletResponse response,
                                            Object command, BindException errors) throws Exception {
                MultipartFile file;
                // getting MultipartFile from the command object
                ...
                // beginning hibernate transaction
                ...
                Picture p = new Picture();
                p.setData(file.getBytes());
                pictureDAO.makePersistent(p); // this method simply calls getSession().saveOrUpdate(p)
                // committing hibernate transaction
                ...
            }
            ...
        }

    Obviously a bad piece of code. Is there any way I could use an InputStream or Blob to save the data, instead of first loading all the bytes from the user into memory and then pushing them into the database? I did some research on Hibernate's support for Blob, and found this in the Hibernate in Action book:

        java.sql.Blob and java.sql.Clob are the most efficient way to handle large objects in Java. Unfortunately, an instance of Blob or Clob is only usable until the JDBC transaction completes. So if your persistent class defines a property of java.sql.Clob or java.sql.Blob (not a good idea anyway), you'll be restricted in how instances of the class may be used. In particular, you won't be able to use instances of that class as detached objects. Furthermore, many JDBC drivers don't feature working support for java.sql.Blob and java.sql.Clob. Therefore, it makes more sense to map large objects using the binary or text mapping type, assuming retrieval of the entire large object into memory isn't a performance killer. Note you can find up-to-date design patterns and tips for large object usage on the Hibernate website, with tricks for particular platforms.

    Now apparently Blob cannot be used, as it is "not a good idea anyway"; what else could be used to improve the performance? I couldn't find any up-to-date design pattern or any useful information on the Hibernate website. So any help/recommendations from stackoverflowers will be much appreciated. Thanks
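
    A minimal sketch of one way to avoid buffering the whole upload in memory: map the column as a java.sql.Blob and create it from the upload's InputStream. This assumes Hibernate 3.6+ (Session.getLobHelper()) and a MySQL LONGBLOB column; the class and field names below are illustrative, not from the question, and the caveats about Blob in the quoted passage (only usable inside the transaction, patchy driver support) still apply.

        import java.io.IOException;
        import java.io.InputStream;
        import java.sql.Blob;
        import javax.persistence.Entity;
        import javax.persistence.Id;
        import javax.persistence.Lob;
        import org.hibernate.Session;
        import org.springframework.web.multipart.MultipartFile;

        @Entity
        class StreamedPicture implements java.io.Serializable {
            @Id private long id;
            @Lob private Blob data;   // mapped as a database LOB instead of a byte[]
            void setData(Blob data) { this.data = data; }
        }

        class PictureStreamDao {
            private final Session session;
            PictureStreamDao(Session session) { this.session = session; }

            // Persist the upload without materializing the whole file on the heap.
            void save(MultipartFile file) throws IOException {
                InputStream in = file.getInputStream();   // instead of file.getBytes()
                StreamedPicture p = new StreamedPicture();
                p.setData(session.getLobHelper().createBlob(in, file.getSize()));
                session.saveOrUpdate(p);                   // written when the transaction flushes
            }
        }

    Whether the bytes are actually streamed or quietly buffered anyway depends on the MySQL Connector/J version and settings, so this is worth measuring rather than assuming.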

    Read the article

  • Any difference in performance/compatibility of different languages in PostgreSQL?

    - by Igor
    Nowadays PostgreSQL offers plenty of procedural languages: pl/pgsql, pl/perl, etc. Is there any difference in the speed/memory consumption of procedures written in different languages? Has anybody done any tests? Is it true that using the native pl/pgsql is the most correct choice? And how does a procedure written in C++ and compiled into a loadable module differ in all these parameters w.r.t. a user function written in the pl/* languages?

    Read the article

  • Avoid writing SQL queries altogether in SSIS

    - by Jonn
    Working on a Data Warehouse project, the guy that gave us the tutorial advised that we stick to using SQL queries over defining a lot of data flow transformations, citing points like it'll consume a lot of memory on the ETL box, so we'd rather leave the processing to the DB box. Is this really advisable? Where's the balance between relying on GUI tools and executing a bunch of SQL scripts in your Integration package? And honestly, I'd like to avoid writing SQL queries as much as I can.

    Read the article

  • How to load the part of an HTML page which is currently on display?

    - by ganapati hegde
    Hi, I have an ebook (relatively large, say 800 pages) in HTML format. I am opening that book as a web page using webkit-gtk+. If I load the whole book at a time, it takes too much memory (RAM), so I don't want to load the whole book at once, but only the part of the book which is currently on display; when the user scrolls down, the next part should be displayed. How can I implement that?

    Read the article

  • BigInteger.pow(BigInteger) ?

    - by PeterW
    I'm playing with numbers in Java, and want to see how big a number I can make. It is my understanding that BigInteger can hold a number of infinite size, so long as my computer has enough memory to hold such a number, correct? My problem is that BigInteger.pow accepts only an int, not another BigInteger, which means I can only use a number up to 2,147,483,647 as the exponent. Is it possible to use the BigInteger class as such: BigInteger.pow(BigInteger)? Thanks.
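
    There is no BigInteger.pow(BigInteger) overload in the JDK (only pow(int) and modPow(BigInteger, BigInteger)), but a hedged sketch of square-and-multiply with a BigInteger exponent would look like the code below; the class name is made up. Note that for any base other than 0, 1, and -1, a result whose exponent does not even fit in an int would be astronomically large, so the int limit on pow is rarely the practical bottleneck.

        import java.math.BigInteger;

        public final class BigPow {
            // Exponentiation by squaring with an arbitrarily large, non-negative exponent.
            static BigInteger pow(BigInteger base, BigInteger exponent) {
                if (exponent.signum() < 0) throw new ArithmeticException("negative exponent");
                BigInteger result = BigInteger.ONE;
                while (exponent.signum() > 0) {
                    if (exponent.testBit(0)) {      // low bit set: multiply this square in
                        result = result.multiply(base);
                    }
                    base = base.multiply(base);     // square for the next bit
                    exponent = exponent.shiftRight(1);
                }
                return result;
            }

            public static void main(String[] args) {
                System.out.println(pow(BigInteger.valueOf(2), BigInteger.valueOf(10))); // prints 1024
            }
        }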

    Read the article

  • Object serialization practical uses?

    - by nash
    How many of the software projects you have worked on used object serialization? I personally never came across a scenario where object serialization was used. One use case I can think of is a server storing objects to disk to save memory. Are there other types of software where object serialization is essential or preferred over a database?
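
    For what it's worth, a minimal sketch of the disk-caching use case mentioned above, using plain java.io serialization; the Settings class and file name are made up for the example:

        import java.io.*;

        class Settings implements Serializable {
            private static final long serialVersionUID = 1L;
            String theme = "dark";
            int fontSize = 12;
        }

        public class SerializationDemo {
            public static void main(String[] args) throws IOException, ClassNotFoundException {
                Settings s = new Settings();
                // Write the object graph to a file (e.g. caching state between runs).
                try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream("settings.ser"))) {
                    out.writeObject(s);
                }
                // Read it back later, possibly in another JVM, without a database.
                try (ObjectInputStream in = new ObjectInputStream(new FileInputStream("settings.ser"))) {
                    Settings restored = (Settings) in.readObject();
                    System.out.println(restored.theme + " / " + restored.fontSize);
                }
            }
        }

    Other places Java serialization commonly appears include HTTP session replication and RMI-style remoting, where a full database would be overkill.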

    Read the article

  • Cached database

    - by radi
    Hi, in my project I need two tables, each of which has about 2000 rows. I want my application to be fast, so my DB should be loaded into memory (cached) when the app starts, and before the app closes the DB has to be saved to disk. I am using Java and I want to use SQL.
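
    One possible sketch, assuming the embeddable H2 database is acceptable: H2 can keep the tables entirely in memory and dump/restore them with its SCRIPT and RUNSCRIPT commands. The JDBC URL, file path, and error handling below are illustrative only.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        public class CachedDb {
            public static void main(String[] args) throws Exception {
                // Pure in-memory database; requires the H2 jar on the classpath.
                try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:appdb", "sa", "");
                     Statement st = conn.createStatement()) {
                    // On startup, reload the previously saved data (the file must already exist).
                    st.execute("RUNSCRIPT FROM 'data/appdb.sql'");
                    // ... normal SQL against the two in-memory tables ...
                    // Before shutdown, write the whole database back to disk.
                    st.execute("SCRIPT TO 'data/appdb.sql'");
                }
            }
        }

    HSQLDB offers a similar pattern with its MEMORY and CACHED table types, so either embedded engine should comfortably handle two tables of about 2000 rows.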

    Read the article

  • Apache mpm-itk Performance

    - by Matt Beckman
    I manage a bunch of VPSs with memory ranging from 1GB to 8GB. Most of these websites are Joomla websites, and the servers must support multiple sites/users/S-FTP. I use mpm-itk almost exclusively (mostly due to its convenience in these shared environments). However, I'm aware it isn't known for performance, so I need some advice on making it faster. Due to the lack of documentation when I first went the way of mpm-itk, I included only one setting in the config, and that was to limit each user to 50 clients (the rest I left up to defaults):

        <IfModule mpm_itk_module>
            MaxClientsVHost 50
        </IfModule>

    Are there any better alternatives available? Are there any settings supported in mpm-prefork or mpm-worker that are also supported in mpm-itk? Thanks!

    Read the article

  • How do you use technology to memorize a set of terms?

    - by user49767
    There are always a few sets of items that need to be memorized in a short span of time. Here are my cases:
    1) My job requires some set of items to be memorized.
    2) I am a developer who has to learn 150+ tags within the next 3 days.
    3) A FIX developer/support person has to remember a minimum of 125+ tags (and their sets of possible values).
    4) It is better if the team's SQL developer knows all the tables and columns in my database.
    5) When people join a new department or job, memorizing a few related items definitely gives some benefit.
    In most cases, I suggest people understand the domain better, and there is nothing wrong with using Google (but you have to remember the correct search word). But recently I came across a junior developer who took a lot of effort to memorize a set of things (150+ table structures, FIX protocol tags, almost 300+ configuration items from a property file) and was very successful in his job and swift in responding to support queries. Needless to say, he is a smart worker too (not a dumb guy). When I try to recollect some of the successful employees I have met, they were very good at remembering entire schemas, and they did it in a short span of time. I don't argue that memorizing alone gives success, but it greatly helps when the situation demands it.
    My question is this: I am not good at remembering things, but that shouldn't be a lame excuse, so I am evaluating ways of using technology to memorize a set of items. I am not very interested in memory techniques (mnemonics, photographic memory, etc.). I have recorded 100+ items and listened to them whenever I found free time, and there were definitely some fruitful results. Now I need your suggestions on ways to exploit technology to memorize. There could be many reasons why people remember a subject (passion, necessity, being the author, creator, or person responsible); I am not interested in dissecting why people remember, but rather in the ways and techniques (cheat sheets, ...) to remember a set of items.
    Note: I appreciate and encourage people who could rephrase my question better.
    Note: I have kept a couple of cheat sheets close to my monitor; honestly, they did not help me :).

    Read the article

  • Tomcat session stickiness in session replication

    - by rabbit
    Hi, I have a 2-instance, load-balanced, session-replicated Tomcat 6.0.20 cluster. Should sticky_session be set to true or false for in-memory session replication? http://tomcat.apache.org/connectors-doc/reference/workers.html mentions:

        Set sticky_session to False when Tomcat is using a Session Manager which can persist session data across multiple instances of Tomcat.

    whereas /tomcat-6.0-doc/cluster-howto.html (Cluster Basics) mentions:

        Make sure that your loadbalancer is configured for sticky session mode.

    Read the article

  • Performance issues with testing on an ADP2

    - by Stuart
    I have an Android Developer Phone with Android 1.6 installed; sometimes it takes 30 seconds for the home screen to appear after a call or after exiting an application. Why is my phone so slow? Should I replace the memory card? Also, when is 2.0 coming out for the ADP2, and how do I install it?

    Read the article

  • How can I set up OpenVPN to accept more than 60 connections?

    - by Robin
    Greetings! We're using OpenVPN and today hit an unexpected connection limit of 60 - even though max-clients is set to the source code default 1024. Server log:

        Tue Dec 21 13:49:41 2010 MULTI: new incoming connection would exceed maximum number of clients (60)

    We're slowly adding new clients to the VPN and expect to hit 200 some time next year, if we can get it working. We're running the server on a Win2003 R2. OpenVPN 2.0.9. Server config as follows:

        local 192.168.10.211
        port 1195
        proto tcp
        dev tun
        dev-node OpenVPN_Vision
        ca vision_ca.crt
        cert vision_server.crt
        key vision_server.key  # This file should be kept secret
        dh vision_dh1024.pem
        server 192.168.211.0 255.255.255.0
        ifconfig-pool-persist vision_ipp.txt
        ;server-bridge 10.8.0.4 255.255.255.0 10.8.0.50 10.8.0.100
        ;client-to-client
        keepalive 10 120
        comp-lzo
        ;max-clients 100  # Default in source code is 1024
        persist-key
        persist-tun
        status openvpn-status-vision.log
        log vision.log
        verb 3

    I would greatly appreciate any help or input on this one. Thanks! Best regards, Robin

    Read the article

  • nokogiri vs hpricot?

    - by roshan
    Which one would you choose? My important attributes are (not in order):
    - Support & future enhancements
    - Community & general knowledge base (on the Internet)
    - Comprehensiveness (i.e. proven to parse a wide range of *.*ml pages)
    - Performance
    - Memory footprint (runtime, not the code-base)

    Read the article

  • What is the need for the Collections Framework in Java?

    - by JavaUser
    Hi, what is the need for the Collections Framework in Java, since all the data operations (sorting/adding/deleting) are possible with arrays, and moreover arrays are better for memory consumption and performance compared with Collections? Can anyone point me to a real-world, data-oriented example which shows the difference between the two implementations (arrays vs. Collections)? Thanks
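
    A small illustration of one concrete difference (the data is made up for the example): arrays are fixed-size and index-only, while the Collections Framework gives resizable, type-safe containers with ready-made operations such as sorting and searching.

        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.List;

        public class ArrayVsCollection {
            public static void main(String[] args) {
                // With an array, the size must be known (or managed by hand) up front.
                int[] scores = new int[3];
                scores[0] = 70; scores[1] = 95; scores[2] = 80;

                // With a List, elements can be added as they arrive, then sorted,
                // searched, or handed to any API that accepts a Collection.
                List<Integer> scoreList = new ArrayList<>();
                scoreList.add(70);
                scoreList.add(95);
                scoreList.add(80);
                scoreList.add(60);            // no manual resize/copy needed
                Collections.sort(scoreList);  // built-in operations instead of hand-rolled loops
                System.out.println(scoreList.contains(95)); // true
            }
        }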

    Read the article

  • C array initialization.

    - by chrisdew
    Why does

        static char *opcode_str[] = { "DATA", "DATA_REQUEST_ACK", "ACK_TIMER_EXPIRED", "ACK_UNEXPECTED_SEQ", "ACK_AS_REQUESTED" };

    work, but

        static char **opcode_str = { "DATA", "DATA_REQUEST_ACK", "ACK_TIMER_EXPIRED", "ACK_UNEXPECTED_SEQ", "ACK_AS_REQUESTED" };

    fail with SEGV when opcode_str[0] is printf'd? I think it's because the second listing has not allocated memory for the five-element array of pointers, but I need a more comprehensive explanation. All the best, Chris.

    Read the article

  • Python float copy question

    - by SJA
    Hi, I'm puzzled by some behaviour I'm seeing when copying a float array member into another variable - please help! For example:

        data_entry[1] = 9.6850069951
        new_value = data_entry[1]
        # print both
        9.6850069951
        9.6850663300

    I'm aware of the problem of binary storage of floats, but I thought with a direct copy of memory we would end up with the same value. Any ideas? I need better precision than this! Thanks in advance, Stuart

    Read the article

  • Proxy Issues with Javascript Cross Domain RSS Feed Parsing

    - by Amir
    This is my JavaScript function which grabs an RSS feed via the proxy script and then spits out the 5 latest RSS items from the feed, along with a link to my stylesheet:

        function getWidget(feed, limit) {
            if (window.XMLHttpRequest) {
                xhttp = new XMLHttpRequest();
            } else {
                xhttp = new ActiveXObject("Microsoft.XMLHTTP");
            }
            xhttp.open("GET", "http://MYSITE/proxy.php?url=" + feed, false);
            xhttp.send("");
            xmlDoc = xhttp.responseXML;
            var x = 1;
            var div = document.getElementById("div");
            div.innerHTML = '<link type="text/css" href="http://MYSITE/css/widget.css" rel="stylesheet" /><div id="rss-title"></div><div id="items"></div><br /><br /><a href="http://MYSITE">Powered by MYSITE</a>';
            document.body.appendChild(div);
            content = xmlDoc.getElementsByTagName("title");
            thelink = xmlDoc.getElementsByTagName("link");
            document.getElementById("rss-title").innerHTML += content[0].childNodes[0].nodeValue;
            for (x = 1; x <= limit; x++) {
                y = x;
                y--;
                var shout = '<div class="item"><a href="' + thelink[y].childNodes[0].nodeValue + '">' + content[x].childNodes[0].nodeValue + '</a></div>';
                document.getElementById("items").innerHTML += shout;
            }
        }

    Here is the code from proxy.php:

        $session = curl_init($_GET['url']);                  // Open the cURL session
        curl_setopt($session, CURLOPT_HEADER, false);        // Don't return HTTP headers
        curl_setopt($session, CURLOPT_RETURNTRANSFER, true); // Do return the contents of the call
        $xml = curl_exec($session);                          // Make the call
        header("Content-Type: text/xml");                    // Set the content type appropriately
        echo $xml;                                           // Spit out the xml
        curl_close($session);                                // And close the session

    Now when I try to load this on any domain that's not my site, nothing loads. I get no JS errors, but in the Console tab in Firebug I get "407 Proxy Authentication Required", so I'm not really sure how to make this work. The goal is to be able to grab the RSS feed, parse it to grab the titles and links, and spit it out into some HTML on any website on the web. I'm basically making a simple RSS widget for my site's various RSS feeds.

    My JavaScript is wack. Also, I'm really a beginner with JavaScript. I know jQuery pretty well, but I wasn't able to use it in this case, because this script will be embedded on any site and I can't really rely on the jQuery library. So I decided to write some basic JavaScript relying on the default XML parsing options available. Any suggestions here would be cool. Thanks!

    What's with the x and y? The way my site creates RSS feeds is that the first title is actually the RSS feed title. The second title is the title of the first item. The first link is the link to the first item. So when using the JavaScript to get the title, I had to first grab the first title (which is the RSS title) and then start with the second title, that being the title of the first item. Sorry for the confusion, but I don't think this is related to my issue. Just wanted to clarify my code.

    Read the article

  • Set maximum requests per IP in IIS7

    - by Maxim V. Pavlov
    I have a web site deployed to IIS 7. One page on it has 15+ .js files linked to it. The last two files referenced in the <head> tag (loaded last) get a 403 Forbidden response from the server. I have enabled FailedRequestTracing and have been able to see a detailed error code, which is 403.502. I suppose that over a very short period of time I am just requesting too much and IIS blocks me. Is there a way I can configure the limit to allow a larger number of requests and get rid of the 403.502 error?

    Read the article
