Search Results

Search found 10417 results on 417 pages for 'large'.

Page 36/417

  • python / sqlite - database locked despite large timeouts

    - by Chris Phillips
    Hi, I'm sure I'm missing something pretty obvious, but I can't for the life of me stop my pysqlite scripts crashing out with a "database is locked" error. I have two scripts, one to load data into the database and one to read data out, but both will frequently, and instantly, crash depending on what the other is doing with the database at any given time. I've got the timeout on both scripts set to 30 seconds:

        cx = sqlite.connect("database.sql", timeout=30.0)

    I think I can see some evidence of the timeouts, in that I occasionally get what appears to be a timing stamp (e.g. 0.12343827e-06 0.1, and how do I stop that being printed?) dumped in the middle of my curses-formatted output screen, but never a delay that gets remotely near the 30-second timeout; yet one or the other script still keeps crashing, again and again.

    I'm running RHEL 5.4 on a 64-bit 4-CPU HS21 IBM blade, and have heard some mention of issues with multi-threading, but I am not sure whether that is relevant here. The packages in use are sqlite-3.3.6-5 and python-sqlite-1.1.7-1.2.1, and upgrading to newer versions outside of Red Hat's official provisions is not a great option for me. Possible, but not desirable due to the environment in general.

    I previously had autocommit=1 on in both scripts, but have since disabled it in both; I am now cx.commit()ing in the inserting script and not committing in the select script. Ultimately, as I only ever have one script actually making any modifications, I don't really see why this locking should ever happen.

    I have noticed that this gets significantly worse over time as the database grows larger. It was recently at 13 MB with 3 equal-sized tables, which was about 1 day's worth of data. Creating a new file has improved this significantly, which seems understandable, but the timeout ultimately just doesn't seem to be being obeyed. Any pointers very much appreciated. Thanks, Chris
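    A minimal sketch of one way to tame the contention, assuming short write transactions plus a small retry loop on the reader are acceptable; the table name, retry counts and delays are illustrative, not from the original post:

        import time
        import sqlite3  # the post uses the older pysqlite package; the connect/timeout interface is the same

        cx = sqlite3.connect("database.sql", timeout=30.0)

        def insert_row(row):
            # the connection context manager commits immediately, keeping the write lock short-lived
            with cx:
                cx.execute("INSERT INTO samples VALUES (?, ?, ?)", row)

        def read_all(attempts=5, delay=0.5):
            # retry briefly instead of crashing if the writer happens to hold the lock
            for attempt in range(attempts):
                try:
                    return cx.execute("SELECT * FROM samples").fetchall()
                except sqlite3.OperationalError as exc:
                    if "locked" not in str(exc) or attempt == attempts - 1:
                        raise
                    time.sleep(delay)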

    Read the article

  • java: writing large files?

    - by umanga
    Greetings, I get a huge number of records from the database and write them into a file. I was wondering what the best way is to write huge files (1 GB - 10 GB). Currently I am using BufferedWriter:

        BufferedWriter mbrWriter = new BufferedWriter(new FileWriter(memberCSV));
        while (done) {
            // do writings
        }
        mbrWriter.close();
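    A minimal sketch of the same approach with an explicit, larger buffer; the 1 MB size, file name and row contents are illustrative only, not a recommendation from the original question:

        import java.io.BufferedWriter;
        import java.io.FileWriter;
        import java.io.IOException;

        public class LargeCsvWriter {
            public static void main(String[] args) throws IOException {
                // 1 MB buffer instead of the 8 KB default; adjust to taste
                try (BufferedWriter writer = new BufferedWriter(new FileWriter("member.csv"), 1 << 20)) {
                    for (int i = 0; i < 1_000_000; i++) {
                        writer.write("row-" + i);
                        writer.newLine();
                    }
                } // try-with-resources flushes and closes the writer
            }
        }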

    Read the article

  • Upload 1GB files using chunking in PHP

    - by rjha94
    I have a web application that accepts file uploads of up to 4 MB. The server-side script is PHP and the web server is NGINX. Many users have requested that this limit be increased drastically to allow uploads of video etc. However, there seems to be no easy solution for this problem with PHP.

    First, on the client side I am looking for something that would allow me to chunk files during transfer. SWFUpload does not seem to do that. I guess I can stream uploads using JavaFX (http://blogs.sun.com/rakeshmenonp/entry/javafx_upload_file), but I can not find any equivalent of request.getInputStream in PHP.

    Increasing the browser client_post limits or the php.ini upload or max_execution_time settings is not really a solution for really large files (~1 GB), because the browser may time out, and think of all those blobs stored in memory. Is there any way to solve this problem using PHP on the server side? I would appreciate your replies.
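    A minimal sketch of a PHP-side chunk receiver, assuming the client posts each chunk as the raw request body with ?name=...&chunk=N query parameters; the parameter names, target directory and chunk size are illustrative, not an established API:

        <?php
        // upload_chunk.php: append each raw chunk to a .part file
        $name   = basename($_GET['name']);
        $chunk  = (int) $_GET['chunk'];
        $target = '/var/uploads/' . $name . '.part';

        // php://input is the closest PHP equivalent of request.getInputStream
        $in  = fopen('php://input', 'rb');
        $out = fopen($target, $chunk === 0 ? 'wb' : 'ab'); // first chunk truncates, later chunks append

        while (!feof($in)) {
            fwrite($out, fread($in, 8192));
        }
        fclose($in);
        fclose($out);
        echo "chunk $chunk stored";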

    Read the article

  • Best practices for large solutions in Visual Studio (2008)

    - by Eyvind
    We have a solution with around 100+ projects, most of them C#. Naturally it takes a long time to both open and build, so I am looking for best practices for such beasts. The kinds of questions I am hoping to get answers to are:

    - how do you best handle references between projects?
    - should "copy local" be on or off?
    - should every project build to its own folder, or should they all build to the same output folder (they are all part of the same application)?
    - are solution folders a good way of organizing stuff?

    I know that splitting the solution up into multiple smaller solutions is an option, but that comes with its own set of refactoring and building headaches, so perhaps we can save that for a separate thread :-)

    Read the article

  • very large string in memory

    - by bushman
    Hi, I am writing a program for formatting 100s of MB of string data (nearing a gig) into XML, and I am required to return it as the response to an HTTP (GET) request. I am using a StringWriter/XmlWriter to build the XML of the records in a loop and returning stringWriter.ToString().

    During testing I saw a few out-of-memory exceptions and am quite clueless about how to find a solution. Do you guys have any suggestions for a memory-optimized delivery of the response? Is there a memory-efficient way of encoding the data, or maybe chunking the data? I just can not think of how to return it without building the whole thing into one HUGE string object. Thanks
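    A minimal ASP.NET sketch of streaming the XML straight into the response instead of building one giant string; GetRecords(), the element names and the page type are illustrative stand-ins, not from the original post:

        using System.Collections.Generic;
        using System.Xml;

        public partial class ReportPage : System.Web.UI.Page
        {
            protected void Page_Load(object sender, System.EventArgs e)
            {
                Response.ContentType = "text/xml";
                Response.BufferOutput = false; // send bytes as they are produced

                // Writing to Response.OutputStream means the document never exists as a single string
                using (XmlWriter writer = XmlWriter.Create(Response.OutputStream))
                {
                    writer.WriteStartElement("records");
                    foreach (string record in GetRecords())
                    {
                        writer.WriteElementString("record", record);
                    }
                    writer.WriteEndElement();
                }
            }

            private IEnumerable<string> GetRecords()
            {
                yield return "example"; // placeholder for the real record source
            }
        }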

    Read the article

  • Sharing large objects between ruby processes without a performance hit

    - by Gdeglin
    I have a Ruby hash that reaches approximately 10 megabytes if written to a file using Marshal.dump. After gzip compression it is approximately 500 kilobytes. Iterating through and altering this hash is very fast in Ruby (fractions of a millisecond). Even copying it is extremely fast.

    The problem is that I need to share the data in this hash between Ruby on Rails processes. In order to do this using the Rails cache (file_store or memcached) I need to Marshal.dump the hash first; however, this incurs a 1000-millisecond delay when serializing it and a 400-millisecond delay when deserializing it. Ideally I would want to be able to save and load this hash from each process in under 100 milliseconds.

    One idea is to spawn a new Ruby process to hold this hash and provide an API to the other processes to modify or process the data within it, but I want to avoid doing this unless I'm certain that there are no other ways to share this object quickly. Is there a way I can more directly share this hash between processes without needing to serialize or deserialize it?

    Here is the code I'm using to generate a hash similar to the one I'm working with:

        @a = []
        0.upto(500) do |r|
          @a[r] = []
          0.upto(10_000) do |c|
            if rand(10) == 0
              @a[r][c] = 1 # 10% chance of being 1
            else
              @a[r][c] = 0
            end
          end
        end

        @c = Marshal.dump(@a) # 1000 milliseconds
        Marshal.load(@c)      # 400 milliseconds
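    A minimal sketch of the "holder process" idea mentioned above, using Ruby's standard-library DRb so the Rails processes call into one resident copy instead of serializing their own; the URI and method names are illustrative:

        require 'drb/drb'

        class MatrixHolder
          def initialize(matrix)
            @matrix = matrix
          end

          def get(row, col)
            @matrix[row][col]
          end

          def set(row, col, value)
            @matrix[row][col] = value
          end
        end

        DRb.start_service('druby://localhost:8787', MatrixHolder.new(@a))
        DRb.thread.join

        # A Rails process would then talk to it with something like:
        #   holder = DRbObject.new_with_uri('druby://localhost:8787')
        #   holder.get(3, 42)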

    Read the article

  • JMS Session pooling for large numbers of Topic subscribers

    - by matthewKizoom
    I'm writing an app that will create lots of JMS topic subscribers. What is best practise regarding reusing sessions? A session per subscriber? A pool of sessions? With a session per subscriber the thread count seems unreasonable. Is this a job for something like a ServerSessionPool? What I've seen so far seems to suggest that ServerSessionPool is more geared towards one receiver consuming messages concurrently rather than lots of receivers. I'm currently working with HornetQ 2.0.0GA embedded in JBoss 4.3.0CP6.

    Read the article

  • Usage of open source libraries in high governance and risk-averse large organizations (banks, financ

    - by bart
    Does anyone have any good stories of these kinds of organizations being open to using open source dependencies (and also tools)? Many staff I've encountered have had little or no exposure to open source/systems, and open source is treated with great suspicion. Some reasons given for this are lack of support and robustness, which is ironic given the number of end-of-life, unsupported vendor products that are in production. I'm also interested in any success stories where you've seen open source go into orgs like this and deliver a real benefit!

    Read the article

  • Making a Form Input Field Large

    - by John
    Hello, for the form below, how could I make the input field big, like maybe 100 pixels in height by 400 pixels in length? Thanks in advance, John

        <form action="http://www...com/sandbox/comments/comments2.php" method="post">
          <input type="hidden" value="'.$_SESSION['loginid'].'" name="uid">
          <div class="addacomment"><label for="title">Add a comment:</label></div>
          <div class="submissionfield"><input name="title" type="title" id="title" maxlength="1000"></div>
          <div class="submissionbutton"><input name="submit" type="submit" value="Submit"></div>
        </form>
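    A minimal sketch of one way to get a field that big, assuming a multi-line <textarea> is acceptable in place of the single-line <input>; the class name is illustrative:

        <style>
          .bigcomment { width: 400px; height: 100px; }
        </style>

        <div class="submissionfield">
          <textarea name="title" id="title" class="bigcomment"></textarea>
        </div>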

    Read the article

  • Display large PDF using iPhone SDK

    - by MadJawa
    Hello, I was wondering what the best way is to display a big PDF file (it's a map, actually) using the iPhone SDK (the file is around 5 MB), because it's really slow in a UIWebView. I want to be able to scroll through the PDF and zoom in/out. Also, do you think that it would be better to convert it to a PNG? Thanks in advance

    Read the article

  • Browser Client Side Storage aka a large Cookie

    - by Ian
    Hi, I need to store about 20-30k of data on the client side when using a website. I was using a cookie, but this is too small for my needs. Is there something else that I can use? I need to be able to do this via JavaScript. Server-side storage is a last resort, but not what I am looking for. I need it to work for Chrome, IE and Firefox. Thanks, Ian
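    A minimal sketch using the Web Storage API, which holds far more than a cookie and is available in Chrome, Firefox and IE8+; the key name and payload are illustrative:

        // Persist 20-30k of state under a single key
        var payload = JSON.stringify({ items: ["your data here"] });

        if (window.localStorage) {
            localStorage.setItem('appState', payload);
            var restored = JSON.parse(localStorage.getItem('appState'));
        } else {
            // fall back to a cookie or a server round-trip for older browsers
        }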

    Read the article

  • Seeking reporting or templating tool to generate large formatted PDF reports from dataset

    - by Mr. Tacos
    Say I have some data in MySQL or a big ole CSV file. I also have a report. It's a PDF, call it 100 pages long. I need to generate variations on this PDF for slices of the data.

    More specific example: I have a CSV file with each StackOverflow user in a row, and each column contains various statistics about that user. I have a report called "Your StackOverflow Performance". It's got lots of text, always the same, but each section contains something like "You vs. the Average StackOverflow Poster on this metric". I want a table to appear there that has the average data, which is the same in every run of the PDF, in one column. In the second column, I want your data, which is different for each PDF/row in the CSV file/user of StackOverflow.

    I'm pretty sure people use things like Crystal for this? Is there something in MS SQL Server that's good for this? An open source template language? I'm not even really sure whether what I need is called a 'reporting' tool (since I don't really need to do any crunching; the data in this case is being crunched by a series of scripts and SPSS, so I don't need bands and subbands and so on) or 'templating'. Is there even such a thing as templating PDFs?

    Natch, I'd be fine with something that generates output easily scriptable to PDF, like EPS, but not something like HTML. The report formatting is fussy, done, externally determined, and handed down from on high. It's print-oriented, not webby. Thanks in advance.

    Read the article

  • Large Scale VHDL modularization techniques

    - by oxinabox.ucc.asn.au
    I'm thinking about implementing a 16-bit CPU in VHDL. A simplish CPU: ADD, MULS, NEG, bit shift, JUMP, relative jump, BREQ, relative BREQ... I don't know, something along these lines, probably all working only with 16-bit operands. I might even cut it down and use only a single operand and an accumulator, with some status registers: Carry, Zero, Neg (unless I use an accumulator).

    I know how to design all the parts from logic gates, and plan to build them up from first principles. So for my ALU I'll need to 'build' an adder, probably a carry look-ahead group adder; this adder is itself made up of a couple of parts, which are themselves made up of a couple of parts.

    Anyway, my problem is not the CPU design, or the VHDL (I know the language, more or less). It's how I should keep things organised. How should I use packages? How should I name my processes and port maps? (I've never seen the benefit of naming the port maps, or processes.)
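    A minimal sketch of one way a package can carry the shared types and component declarations for a design like this; the names and the single ALU component are illustrative, not a prescribed structure:

        library ieee;
        use ieee.std_logic_1164.all;

        package cpu_pkg is
            constant WORD_WIDTH : integer := 16;
            subtype word_t is std_logic_vector(WORD_WIDTH - 1 downto 0);

            component alu
                port (
                    a, b   : in  word_t;
                    opcode : in  std_logic_vector(3 downto 0);
                    result : out word_t;
                    carry  : out std_logic;
                    zero   : out std_logic
                );
            end component;
        end package cpu_pkg;

        -- Each architecture that needs the ALU then just adds:
        --   use work.cpu_pkg.all;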

    Read the article

  • Octave: importing a large matrix in csv format

    - by Massagran
    I'm trying to import a matrix (about 80,000 rows) from a CSV file into Octave. The obvious solution seems to be something like:

        load("-ascii", "relative_directory/the_file.csv")

    or maybe renaming the file and trying:

        load("-ascii", "relative_directory/the_file.txt")

    Yet I keep getting the error:

        load: failed to read matrix from file "relative_directory/the_file.csv"

    (or .txt) without any more details. Any tips are appreciated.
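    A minimal sketch of the usual alternative, since load -ascii expects whitespace-separated columns while dlmread handles the commas directly; the path is the one from the question:

        % comma-delimited numeric file straight into a matrix
        M = dlmread("relative_directory/the_file.csv", ",");
        size(M)   % should report roughly 80000 rows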

    Read the article

  • Building dictionary of words from large text

    - by LiorH
    I have a text file containing posts in English/Italian. I would like to read the posts into a data matrix so that each row represents a post and each column a word. The cells in the matrix are the counts of how many times each word appears in the post. The dictionary should consist of all the words in the whole file, or a non-exhaustive English/Italian dictionary.

    I know this is a common, essential preprocessing step for NLP. Does anyone know of a tool/project that can perform this task? Someone mentioned Apache Lucene; do you know if a Lucene index can be serialized to a data structure similar to my needs?
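    A minimal sketch of the term-count matrix itself, assuming one post per line and a simple letters-only tokenizer; the file name and tokenizer are illustrative, and a real pipeline would plug in a proper analyzer:

        import re
        from collections import Counter

        with open("posts.txt", encoding="utf-8") as f:
            posts = [line.strip() for line in f if line.strip()]

        # letters only (including accented Italian characters), lower-cased
        tokenized = [re.findall(r"[^\W\d_]+", post.lower(), re.UNICODE) for post in posts]

        vocabulary = sorted({word for words in tokenized for word in words})

        # rows = posts, columns = words, cells = term counts
        matrix = []
        for words in tokenized:
            counts = Counter(words)
            matrix.append([counts.get(word, 0) for word in vocabulary])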

    Read the article

  • Append a large li to ul: best way?

    - by zsharp
    I have an <li> that has numerous nested divs. I am appending it to a <ul> as follows:

        $("ul#List").append('<li><div>....many more nested divs...</li>');

    The structure of the <li> is the same as the other <li>s in the <ul>, but I have to modify some elements. My question is simply: am I doing it wrong by manually writing out the entire structure?
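    A minimal sketch of the usual alternative, cloning an existing <li> as a template and changing only the parts that differ; the selectors and class names are illustrative:

        var $item = $("ul#List li:first").clone();
        $item.find(".title").text("New title");   // tweak only what differs
        $item.find(".body").text("New body text");
        $("ul#List").append($item);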

    Read the article

  • PHP readfile() and large downloads

    - by Nirmal
    I am setting up an online file management system, and now I have hit a block. I am trying to push the file to the client using this modified version of readfile:

        function readfile_chunked($filename, $retbytes = true) {
            $chunksize = 1*(1024*1024); // how many bytes per chunk
            $buffer = '';
            $cnt = 0;
            // $handle = fopen($filename, 'rb');
            $handle = fopen($filename, 'rb');
            if ($handle === false) {
                return false;
            }
            while (!feof($handle)) {
                $buffer = fread($handle, $chunksize);
                echo $buffer;
                ob_flush();
                flush();
                if ($retbytes) {
                    $cnt += strlen($buffer);
                }
            }
            $status = fclose($handle);
            if ($retbytes && $status) {
                return $cnt; // return num. bytes delivered like readfile() does.
            }
            return $status;
        }

    But when I try to download a 13 MB file, it just breaks at 4 MB. What would be the issue here? It's definitely not a time limit of any kind, because I am working on a local network and speed is not an issue. The memory limit in PHP is set to 300 MB. Thank you for any help.

    Read the article

  • A function where small changes in input always result in large changes in output

    - by snowlord
    I would like an algorithm for a function that takes n integers and returns one integer. For small changes in the input, the resulting integer should vary greatly. Even though I've taken a number of courses in math, I have not used that knowledge very much and now I need some help...

    An important property of this function should be that if it is used with coordinate pairs as input and the result is plotted (as a grayscale value, for example) on an image, any repeating patterns should only be visible if the image is very big.

    I have experimented with various algorithms for pseudo-random numbers with little success, and finally it struck me that md5 almost meets my criteria, except that it is not for numbers (at least not from what I know). That resulted in something like this Python prototype (for n = 2; it could easily be changed to take a list of integers, of course):

        import hashlib

        def uniqnum(x, y):
            return int(hashlib.md5(str(x) + ',' + str(y)).hexdigest()[-6:], 16)

    But obviously it feels wrong to go over strings when both input and output are integers. What would be a good replacement for this implementation (in pseudo-code, Python, or whatever language)?
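    A minimal sketch of one string-free variant: hash the packed binary form of the integers instead, keeping the same 24-bit truncation as the prototype above; the little-endian 64-bit packing is an arbitrary illustrative choice:

        import hashlib
        import struct

        def uniqnum(*numbers):
            packed = struct.pack("<%dq" % len(numbers), *numbers)  # fixed-width bytes, no strings
            return int(hashlib.md5(packed).hexdigest()[-6:], 16)

        print(uniqnum(3, 4))
        print(uniqnum(3, 5))  # a tiny input change still flips the output wildly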

    Read the article

  • git-svn on subset of large svn repo

    - by an146
    Repo layout:

        a/1
        a/2
        a/3
        ...
        b/1
        b/2
        ...
        c/1
        c/2
        ...

    git-svn works perfectly for me if I work on one svn repo subdir. But right now I'm facing the need to work on several subdirs (like a/1, a/2, and b/1), and there's a lot of cruft in the repo besides them. I've managed to write a regexp for this, but git-svn with --ignore-paths seems to check each file's name against this regexp, instead of skipping entire folders, so it's too slow. /* Probably I should file a bug report about this. */

    So: any ideas for handling this? If some Mercurial svn agent can do selective clones, that's OK too, but I'd rather stick with git. My other idea was some selective svn proxy, but I haven't succeeded in googling anything like that. Thanks!

    Read the article

  • Need to check uptime on a large file being hosted

    - by trustfundbaby
    I have a dynamically generated RSS feed that is about 150 MB in size (don't ask). The problem is that it keeps crapping out sporadically, and there is no way to monitor it without downloading the entire feed to get a 200 status. Pingdom times out on it and returns a 'down' error. So my question is: how do I check that this thing is up and running?
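    A minimal monitoring sketch, assuming the server answers HEAD requests (or Range requests) without having to generate and transfer the whole feed; the URL and timeout are illustrative:

        import urllib.request

        def feed_is_up(url):
            request = urllib.request.Request(url, method="HEAD")
            # or fetch just the first kilobyte instead:
            # request = urllib.request.Request(url, headers={"Range": "bytes=0-1023"})
            try:
                with urllib.request.urlopen(request, timeout=10) as response:
                    return response.status in (200, 206)
            except OSError:
                return False

        print(feed_is_up("http://example.com/feed.rss"))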

    Read the article

  • Decompressing a very large serialized object and managing memory

    - by Mike_G
    I have an object that contains tons of data used for reports. In order to get this object from the server to the client, I first serialize the object into a memory stream, then compress it using the GZip stream of .NET. I then send the compressed object as a byte[] to the client. The problem is that on some clients, when they get the byte[] and try to decompress and deserialize the object, a System.OutOfMemory exception is thrown. I've read that this exception can be caused by new()ing a bunch of objects, or by holding on to a bunch of strings. Both of these are happening during the deserialization process.

    So my question is: how do I prevent the exception (any good strategies)? The client needs all of the data, and I've trimmed down the number of strings as much as I can.

    Edit: here is the code I am using to serialize/compress (implemented as extension methods):

        public static byte[] SerializeObject<T>(this object obj, T serializer) where T : XmlObjectSerializer
        {
            Type t = obj.GetType();
            if (!Attribute.IsDefined(t, typeof(DataContractAttribute)))
                return null;
            byte[] initialBytes;
            using (MemoryStream stream = new MemoryStream())
            {
                serializer.WriteObject(stream, obj);
                initialBytes = stream.ToArray();
            }
            return initialBytes;
        }

        public static byte[] CompressObject<T>(this object obj, T serializer) where T : XmlObjectSerializer
        {
            Type t = obj.GetType();
            if (!Attribute.IsDefined(t, typeof(DataContractAttribute)))
                return null;
            byte[] initialBytes = obj.SerializeObject(serializer);
            byte[] compressedBytes;
            using (MemoryStream stream = new MemoryStream(initialBytes))
            {
                using (MemoryStream output = new MemoryStream())
                {
                    using (GZipStream zipper = new GZipStream(output, CompressionMode.Compress))
                    {
                        Pump(stream, zipper);
                    }
                    compressedBytes = output.ToArray();
                }
            }
            return compressedBytes;
        }

        internal static void Pump(Stream input, Stream output)
        {
            byte[] bytes = new byte[4096];
            int n;
            while ((n = input.Read(bytes, 0, bytes.Length)) != 0)
            {
                output.Write(bytes, 0, n);
            }
        }

    And here is my code for decompress/deserialize:

        public static T DeSerializeObject<T, TU>(this byte[] serializedObject, TU deserializer) where TU : XmlObjectSerializer
        {
            using (MemoryStream stream = new MemoryStream(serializedObject))
            {
                return (T)deserializer.ReadObject(stream);
            }
        }

        public static T DecompressObject<T, TU>(this byte[] compressedBytes, TU deserializer) where TU : XmlObjectSerializer
        {
            byte[] decompressedBytes;
            using (MemoryStream stream = new MemoryStream(compressedBytes))
            {
                using (MemoryStream output = new MemoryStream())
                {
                    using (GZipStream zipper = new GZipStream(stream, CompressionMode.Decompress))
                    {
                        ObjectExtensions.Pump(zipper, output);
                    }
                    decompressedBytes = output.ToArray();
                }
            }
            return decompressedBytes.DeSerializeObject<T, TU>(deserializer);
        }

    The object that I am passing is a wrapper object; it just contains all the relevant objects that hold the data. The number of objects can be a lot (depending on the report's date range), but I've seen as many as 25k strings. One thing I did forget to mention is that I am using WCF, and since the inner objects are passed individually through other WCF calls, I am using the DataContract serializer, and all my objects are marked with the DataContract attribute.

    Read the article

  • Prefilling large volumes of body text in GMAIL compose getting a Request URI too long error

    - by Ali
    Hi guys, this is a follow-up to the question http://stackoverflow.com/questions/2583928/prefilling-gmail-compose-screen-with-html-text, where I was building a Google Apps application. I can call a Gmail compose-message page from my application using the URL:

        https://mail.google.com/a/domain/?view=cm&fs=1&tf=1&source=mailto&to=WHOEVER%40COMPANY.COM&su=SUBJECTHERE&cc=WHOEVER%40COMPANY.COM&bcc=WHOEVER%40COMPANY.COM&body=PREPOPULATEDBODY

    However, when I try to pass the body parameter a very long line of text (e.g. a reply message body), I get an error from Gmail stating that the request URI is too long. Is there a better way to do this, that is, a way to fill in the body text box of the Gmail compose section? Or some way to open the page and have it prefilled with JavaScript somehow...

    Read the article

  • How to manage a large dataset using Spring MySQL and RowCallbackHandler

    - by rmarimon
    I'm trying to go over each row of a table in MySQL using Spring and a JdbcTemplate. If I'm not mistaken this should be as simple as:

        JdbcTemplate template = new JdbcTemplate(datasource);
        template.setFetchSize(1);
        // template.setFetchSize(Integer.MIN_VALUE) does not work either
        template.query("SELECT * FROM cdr", new RowCallbackHandler() {
            public void processRow(ResultSet rs) throws SQLException {
                System.out.println(rs.getString("src"));
            }
        });

    I get an OutOfMemoryError because it is trying to read the whole thing. Any ideas?
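    A minimal plain-JDBC sketch of the streaming mode the MySQL Connector/J driver expects (a forward-only, read-only statement with a fetch size of Integer.MIN_VALUE); it deliberately bypasses JdbcTemplate, and the connection handling is illustrative:

        import java.sql.Connection;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.sql.Statement;

        public class CdrStreamer {
            public static void stream(Connection connection) throws SQLException {
                try (Statement statement = connection.createStatement(
                        ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)) {
                    statement.setFetchSize(Integer.MIN_VALUE); // tells Connector/J to stream row by row
                    try (ResultSet rs = statement.executeQuery("SELECT * FROM cdr")) {
                        while (rs.next()) {
                            System.out.println(rs.getString("src"));
                        }
                    }
                }
            }
        }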

    Read the article
