Search Results

Search found 13869 results on 555 pages for 'memory dump'.

  • Good Postgres graphical client for Windows

    - by alex
    The name pretty much says it all. Right now I'm using SQuirreL; it crashes frequently and suffers from memory problems (I've tried increasing the heap size). I don't need anything particularly fancy or full-featured, just something that won't take up 2.4 GB of RAM to store a 1.5-million-row, 8-column result set.

  • How to tile a 30000 x 6000 image for a 480 x 320 screen?

    - by Horace Ho
    (This is related to another question about an iPhone implementation.) I have a large image, around 30000 (w) x 6000 (h) pixels; you can think of it as a big map. I assume I need to crop it into smaller tiles. Question: what is a good tiling strategy? Requirements: the whole image (though cropped into tiles) can be scrolled up/down/left/right by swipes; zoom in (up to pixel-to-pixel) and out (down to fit-screen-by-height) via the two-finger gesture; and memory efficiency through lazy loading of tiles. Thanks!
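
    A common strategy (the one map viewers use) is a fixed tile grid plus a small cache holding only the tiles the current viewport touches. Below is a minimal sketch of the tile arithmetic; the 256-pixel tile size, the integer viewport coordinates, and all names are illustrative assumptions:

        // Which tiles does a viewport at (x, y) of size viewW x viewH touch?
        public final class TileGrid {
            static final int TILE = 256; // assumed tile edge length

            static int[] visibleRange(int x, int y, int viewW, int viewH,
                                      int imgW, int imgH) {
                int firstCol = Math.max(0, x / TILE);
                int firstRow = Math.max(0, y / TILE);
                int lastCol  = Math.min((imgW - 1) / TILE, (x + viewW - 1) / TILE);
                int lastRow  = Math.min((imgH - 1) / TILE, (y + viewH - 1) / TILE);
                return new int[] { firstCol, firstRow, lastCol, lastRow };
            }

            public static void main(String[] args) {
                // 30000 x 6000 image on a 480 x 320 screen, scrolled to (12000, 2000)
                int[] r = visibleRange(12000, 2000, 480, 320, 30000, 6000);
                System.out.printf("cols %d..%d, rows %d..%d -> %d tiles resident%n",
                        r[0], r[2], r[1], r[3],
                        (r[2] - r[0] + 1) * (r[3] - r[1] + 1));
            }
        }

    For zooming, the usual extension is an image pyramid: the same grid repeated at each zoom level, so zooming out never has to read full-resolution tiles.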

  • SQL Server, Remote Stored Procedure, and DTC Transactions

    - by marc
    Our organization has a lot of its essential data in a mainframe Adabas database. We have ODBC access to this data and, from C#, have queried/updated it successfully using ODBC/Natural "stored procedures". What we'd like to be able to do now is query a mainframe table from within SQL Server 2005 stored procs, dump the results into a table variable, massage them, and join the result with native SQL data as a result set. Executing the Natural proc from SQL works fine when we're just selecting; however, when we insert the result into a table variable, SQL Server starts a distributed transaction that in turn wreaks havoc with our connections. Given that we're not performing updates, is it possible to turn off this DTC-escalation behavior? Any tips on getting DTC set up properly to talk to DataDirect's (formerly Neon Systems) Shadow ODBC driver?

  • Determine compile options from load module - IBM Enterprise COBOL

    - by NealB
    How can I determine the compile options used to compile an IBM Enterprise COBOL program by looking at the load module? When a dump is issued, they are listed as follows:

        Compile Options for PROGXX:
          ADV, ARITH(COMPAT), AWO, NOCICS, CODEPAGE(01140), DATA(31),
          NODATEPROC, NODBCS, NODLL, NODYNAM, NOEXPORTALL, NOFASTSRT,
          INTDATE(LILIAN), NUMPROC(NOPFD), NOOPTIMIZE, OUTDD(SYSOUT),
          PGMNAME(COMPAT), RENT, RMODE(ANY), NOSQL, SQLCCSID, SSRANGE,
          NOTEST, NOTHREAD, TRUNC(OPT), XMLPARSE(XMLSS), YEARWINDOW(1900), ZWB

    so I presume they must be tucked away somewhere in the load module. I want to scan a load library, checking that each load module was compiled with certain specific options, to ensure compliance with shop standards (e.g. SSRANGE). Any ideas would be appreciated.

  • FastMM and Dynamically loaded DLLs

    - by Vegar
    I have a host application that loads a dozen libraries at start-up. I want to switch from Delphi 7's default memory manager to the full version of FastMM4 for better memory-leak reporting. Should I include FastMM4 in the uses section of both the host application and the libraries? What about shared runtime packages? -Vegar

  • How can I translate my programmatic WCF configuration into app.config?

    - by ofer
    Hi, I have a self-hosted WCF server with hard-coded configuration. The server worked fine until I tried to implement some new functionality; the new settings did not work (urrr...) and I find it hard to locate the problems in my code. Instead of digging inside the code, I thought of a different approach: is there any way to dump that hard-coded WCF configuration (the entire ) into an app.config-like text file after all the configuration has been loaded? This would give me an easy global view of the entire settings. Mmm... by the way, does anyone know a way to do the translation in the opposite direction, config to code? Any advice will be welcomed! ofer

  • Optimum size of transaction in Postgres?

    - by Joe
    I'm running a process that does a lot of updates (> 100,000) to a table. I have the choice between putting all the updates in a single transaction or committing transactions every 1,000 or so. Ignore for the moment the case where a transaction fails and is aborted. I'm interested in the best transaction size for memory and speed efficiency.
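
    For the batched option, here is a minimal JDBC sketch of committing every 1,000 updates; the connection string, table, and column names are made up for illustration:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;

        public class BatchedUpdates {
            public static void main(String[] args) throws SQLException {
                try (Connection con = DriverManager.getConnection(
                        "jdbc:postgresql://localhost/mydb", "user", "secret")) {
                    con.setAutoCommit(false); // group updates into transactions
                    String sql = "UPDATE items SET price = price * 1.1 WHERE id = ?";
                    try (PreparedStatement ps = con.prepareStatement(sql)) {
                        for (int id = 1; id <= 100_000; id++) {
                            ps.setInt(1, id);
                            ps.addBatch();
                            if (id % 1_000 == 0) { // flush and commit each batch
                                ps.executeBatch();
                                con.commit();
                            }
                        }
                        ps.executeBatch(); // any remainder
                        con.commit();
                    }
                }
            }
        }

    Roughly speaking, in Postgres a long-running transaction mostly costs server-side resources (locks held, dead rows that cannot be vacuumed) rather than client memory, so the batch size trades per-commit overhead against those.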

  • Win32: How to crash?

    - by Ian Boyd
    I'm trying to figure out where Windows Error Reports are saved; I hit Send on some earlier today, but I forgot that I wanted to "view the details" so I could examine the memory minidumps. Now I cannot find where they are stored (and Google doesn't know). So I want to write a dummy application that will crash, show the WER dialog, and let me click "view the details" to get to the folder where the dumps are saved. How can I crash on Windows?

  • Any difference in performance/compatibility of different languages in PostgreSQL?

    - by Igor
    Nowadays PostgreSQL offers plenty of procedural languages: PL/pgSQL, PL/Perl, etc. Are there any differences in speed or memory consumption between procedures written in the different languages? Has anybody run any tests? Is it true that native PL/pgSQL is the most correct choice? And how does a procedure written in C++ and compiled into a loadable module differ, in all these respects, from a user function written in one of the PL/* languages?

  • Rails CSV import, adding to a related table

    - by Jack
    Hi, I have a CSV importing system in my app (used locally only) which parses the CSV file line by line and adds the data to the database table. This is based on a tutorial here.

        require 'csv'

        def csv_import
          @parsed_file = CSV::Reader.parse(params[:dump][:file])
          n = 0
          @parsed_file.each_with_index do |row, i|
            next if i == 0 # ignore the header row
            course = Course.new
            course.title       = row[0]
            course.unit_code   = row[1]
            course.course_type = row[2]
            course.value       = row[3]
            course.pass_mark   = row[4]
            if course.save
              n = n + 1
              GC.start if n % 50 == 0
            end
            flash.now[:message] = "CSV Import Successful, #{n} new courses added to the database."
          end
          redirect_to(courses_url)
        end

    This is all in the courses controller and works fine. Courses HABTM years and years HABTM courses, and in the CSV file (effectively in row[5] to row[8]) are the year_ids. Is there a way I can add these within the method above? I am confused about how to loop over the four items and add them to the courses_years join table. Thank you, Jack

  • What CPAN module can summarize arbitrary error logs?

    - by mithaldu
    I'm maintaining some website code that will soon dump all its errors and warnings into a log file. In order to make this a bit more pro-active, I plan to parse the log file daily, summarize the warnings and errors (i.e. count the occurrences of each specific one, grouped into warnings and errors) and then email this to the devs on the project. This would admittedly be rather trivial with a hash and some further fiddling, but I wondered if there is a suitable module on CPAN that I could use for the task; either one that summarizes Perl error/warning logs specifically, or one that summarizes arbitrary text files. Any suggestions?
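
    If no module fits, the hash-based approach alluded to above is indeed short. A language-agnostic sketch (written in Java here; the log format and the digit-collapsing normalization are assumptions):

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.util.HashMap;
        import java.util.Map;

        public class LogSummary {
            public static void main(String[] args) throws IOException {
                Map<String, Integer> counts = new HashMap<>();
                for (String line : Files.readAllLines(Path.of("error.log"))) {
                    // Bucket by severity, assuming the word appears in the line.
                    String kind = line.toLowerCase().contains("warning")
                            ? "WARNING" : "ERROR";
                    // Collapse volatile details (timestamps, line numbers)
                    // so repeated messages count as one group.
                    String msg = line.replaceAll("\\d+", "N");
                    counts.merge(kind + ": " + msg, 1, Integer::sum);
                }
                counts.entrySet().stream()
                      .sorted(Map.Entry.<String, Integer>comparingByValue().reversed())
                      .forEach(e -> System.out.println(e.getValue() + "  " + e.getKey()));
            }
        }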

  • Avoid writing SQL queries altogether in SSIS

    - by Jonn
    Working on a data warehouse project, the guy who gave us the tutorial advised that we stick to SQL queries rather than defining a lot of data flow transformations, citing points like: it'll consume a lot of memory on the ETL box, so we'd rather leave the processing to the DB box. Is this really advisable? Where's the balance between relying on the GUI tools and executing a bunch of SQL scripts in your integration package? And honestly, I'd like to avoid writing SQL queries as much as I can.

  • Reading chunked data from HttpEntity

    - by Gagan
    I have the following code:

        HttpClient FETCHER;
        HttpResponse response = FETCHER.execute(host, httpMethod);

    I'm trying to read its contents into a string like this:

        HttpEntity entity = response.getEntity();
        InputStream st = entity.getContent();
        StringWriter writer = new StringWriter();
        IOUtils.copy(st, writer);
        String content = writer.toString();

    The problem is, when I fetch the http://www.google.co.in/ page, the transfer encoding is chunked, and I get only the first chunk; it fetches up to the first "". How do I get all the chunks at once so I can dump the complete output and do some processing on it?
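
    For what it's worth, Apache HttpClient dechunks the stream itself, and (assuming the 4.x API) its stock EntityUtils helper reads the entire body in one call, picking the charset from the Content-Type header. A minimal sketch:

        import org.apache.http.HttpEntity;
        import org.apache.http.HttpResponse;
        import org.apache.http.client.methods.HttpGet;
        import org.apache.http.impl.client.CloseableHttpClient;
        import org.apache.http.impl.client.HttpClients;
        import org.apache.http.util.EntityUtils;

        public class FetchWholeBody {
            public static void main(String[] args) throws Exception {
                try (CloseableHttpClient client = HttpClients.createDefault()) {
                    HttpResponse response =
                            client.execute(new HttpGet("http://www.google.co.in/"));
                    HttpEntity entity = response.getEntity();
                    // Consumes the full, already-dechunked stream.
                    String content = EntityUtils.toString(entity);
                    System.out.println(content.length() + " characters read");
                }
            }
        }

    If the copy still stops early, the usual suspect is the stream being read after the response or connection has been closed, rather than the chunked encoding itself.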

  • how to load part of the HTML page which is currently on display ?

    - by ganapati hegde
    Hi, I have an ebook (relatively large, say 800 pages) in HTML format. I am opening the book as a web page using WebKitGTK+. If I load the whole book at once, it takes too much memory (RAM), so instead I want to load only the part of the book which is currently on display; when the user scrolls down, the next part should be displayed. How can I implement that?

  • Performance tuning of a Hibernate+Spring+MySQL project operation that stores images uploaded by user

    - by Umar
    Hi, I am working on a web project that is Spring + Hibernate + MySQL based. I am stuck at a point where I have to store images uploaded by a user into the database. Although I have written some code that works well for now, I believe that things will mess up when the project goes live. Here's my domain class that carries the image bytes:

        @Entity
        public class Picture implements java.io.Serializable {
            long id;
            byte[] data;
            ...
            // getters and setters
        }

    And here's my controller that saves the file on submit:

        public class PictureUploadFormController extends AbstractBaseFormController {
            ...
            protected ModelAndView onSubmit(HttpServletRequest request,
                    HttpServletResponse response, Object command,
                    BindException errors) throws Exception {
                MultipartFile file;
                // getting MultipartFile from the command object
                ...
                // beginning hibernate transaction
                ...
                Picture p = new Picture();
                p.setData(file.getBytes());
                pictureDAO.makePersistent(p); // simply calls getSession().saveOrUpdate(p)
                // committing hibernate transaction
                ...
            }
            ...
        }

    Obviously a bad piece of code. Is there any way I could use an InputStream or Blob to save the data, instead of first loading all the bytes from the user into memory and then pushing them into the database? I did some research on Hibernate's support for Blob, and found this in the Hibernate in Action book:

        java.sql.Blob and java.sql.Clob are the most efficient way to handle
        large objects in Java. Unfortunately, an instance of Blob or Clob is
        only usable until the JDBC transaction completes. So if your persistent
        class defines a property of java.sql.Clob or java.sql.Blob (not a good
        idea anyway), you'll be restricted in how instances of the class may be
        used. In particular, you won't be able to use instances of that class
        as detached objects. Furthermore, many JDBC drivers don't feature
        working support for java.sql.Blob and java.sql.Clob. Therefore, it
        makes more sense to map large objects using the binary or text mapping
        type, assuming retrieval of the entire large object into memory isn't
        a performance killer. Note you can find up-to-date design patterns and
        tips for large object usage on the Hibernate website, with tricks for
        particular platforms.

    Now, apparently Blob cannot be used, since it is "not a good idea anyway"; what else could be used to improve the performance? I couldn't find any up-to-date design patterns or useful information on the Hibernate website, so any help/recommendations from stackoverflowers will be much appreciated. Thanks
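
    One option, assuming a Hibernate version new enough (3.6+) to expose LobHelper: create a java.sql.Blob backed by the upload's InputStream, so the bytes are handed to the JDBC driver as a stream instead of being materialized in a byte[] first. A sketch only; the Blob-typed Picture field and the session wiring are assumptions:

        import java.io.IOException;
        import java.sql.Blob;
        import org.hibernate.Session;
        import org.springframework.web.multipart.MultipartFile;

        public class StreamingPictureSaver {
            // Assumes Picture has been remapped with a java.sql.Blob "data" field.
            public void save(Session session, MultipartFile file) throws IOException {
                Blob blob = session.getLobHelper()
                                   .createBlob(file.getInputStream(), file.getSize());
                Picture p = new Picture();
                p.setData(blob);          // hypothetical Blob-typed setter
                session.saveOrUpdate(p);
                // Whether this truly streams depends on the JDBC driver;
                // MySQL Connector/J has historically buffered LOBs, so measure.
            }
        }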

  • cached database

    - by radi
    Hi, in my project I need two tables, each with about 2000 rows. I want my application to be fast, so the DB should be loaded into memory (cached) when the app starts, and before the app closes the DB has to be saved to disk. I am using Java and I want to use SQL.
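
    One way to get exactly this in Java, assuming the embedded H2 database is acceptable: keep the tables in an in-memory database, restore them from a SQL script at startup, and dump them back before shutdown (SCRIPT and RUNSCRIPT are H2 commands; the file name and schema are made up):

        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        public class InMemoryDb {
            public static void main(String[] args) throws Exception {
                // DB_CLOSE_DELAY=-1 keeps the in-memory DB alive between connections.
                try (Connection con = DriverManager.getConnection(
                        "jdbc:h2:mem:appdb;DB_CLOSE_DELAY=-1");
                     Statement st = con.createStatement()) {

                    if (Files.exists(Path.of("appdb.sql"))) {
                        st.execute("RUNSCRIPT FROM 'appdb.sql'"); // restore last state
                    } else {
                        // First run: create the two tables (illustrative schema).
                        st.execute("CREATE TABLE a(id INT PRIMARY KEY, txt VARCHAR(255))");
                        st.execute("CREATE TABLE b(id INT PRIMARY KEY, txt VARCHAR(255))");
                    }

                    // ... normal SQL against the two ~2000-row tables ...

                    st.execute("SCRIPT TO 'appdb.sql'"); // persist before the app closes
                }
            }
        }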

  • Huge file in Clojure and Java heap space error

    - by trzewiczek
    I posted before about a huge XML file - it's a 287 GB XML Wikipedia dump that I want to put into a CSV file (revision authors and timestamps). I managed to do that up to a point; before, I got a StackOverflowError, but now, after solving the first problem, I get a java.lang.OutOfMemoryError: Java heap space error. My code (partly taken from Justin Kramer's answer) looks like this:

        (defn process-pages [page]
          (let [title     (article-title page)
                revisions (filter #(= :revision (:tag %)) (:content page))]
            (for [revision revisions]
              (let [user (revision-user revision)
                    time (revision-timestamp revision)]
                (spit "files/data.csv"
                      (str "\"" time "\";\"" user "\";\"" title "\"\n")
                      :append true)))))

        (defn open-file [file-name]
          (let [rdr (BufferedReader. (FileReader. file-name))]
            (->> (:content (data.xml/parse rdr :coalescing false))
                 (filter #(= :page (:tag %)))
                 (map process-pages))))

    I don't show the article-title, revision-user and revision-timestamp functions, because they simply take data from a specific place in the page or revision hash. Could anyone help me with this? I'm really new to Clojure and don't get the problem.

  • nokogiri vs hpricot?

    - by roshan
    Which one would you choose? My important attributes are (not in order):
    - Support & future enhancements
    - Community & general knowledge base (on the Internet)
    - Comprehensive (i.e. proven to parse a wide range of *.*ml pages)
    - Performance
    - Memory footprint (runtime, not the code-base)
