Search Results

Search found 38088 results on 1524 pages for 'large scale project'.


  • Java: writing large files?

    - by umanga
    Greetings, I get a huge number of records from a database and write them into a file. I was wondering what the best way is to write huge files (1 GB - 10 GB). Currently I am using BufferedWriter:

        BufferedWriter mbrWriter = new BufferedWriter(new FileWriter(memberCSV));
        while (!done) {
            // do writings
        }
        mbrWriter.close();
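
    For illustration only, here is a minimal sketch of the same pattern with a larger explicit buffer; the file name, record count, and record format below are invented. BufferedWriter defaults to an 8 KB buffer, so a bigger one reduces the number of underlying write calls when producing files in the 1 GB - 10 GB range:

        import java.io.BufferedWriter;
        import java.io.FileWriter;
        import java.io.IOException;

        public class LargeCsvWriterSketch {
            public static void main(String[] args) throws IOException {
                // 1 MB buffer instead of the 8 KB default: fewer flushes to the OS.
                try (BufferedWriter writer = new BufferedWriter(new FileWriter("members.csv"), 1 << 20)) {
                    for (int i = 0; i < 10_000_000; i++) { // stand-in for the database cursor
                        writer.write("id-" + i + ",first,last,email");
                        writer.newLine();
                    }
                } // try-with-resources flushes and closes the writer
            }
        }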

    Read the article

  • How to permanently remove xcuserdata under the project.xcworkspace and resolve uncommitted changes

    - by JeffB6688
    I am struggling with a merge conflict (see "Cannot Merge due to conflict with UserInterfaceState.xcuserstate"). Based on feedback, I needed to remove UserInterfaceState.xcuserstate using git rm. After considerable experimentation, I was able to remove the file with "git rm -rf project.xcworkspace/xcuserdata". While I was still on my working branch, the file almost immediately came back as something that needed to be committed. So I ran git rm on the file again and switched back to master. Then I performed a git rm on the file once more, and the operation again removed it. But I am still stuck: if I try to merge the branch into master, git again says that I have uncommitted changes. So I go to commit the change, but this time it shows UserInterfaceState.xcuserstate as the file to commit, and the box is unchecked and can't be checked, so I can't move forward. Is there a way to use git rm to permanently remove xcuserdata under project.xcworkspace? Help!! Any ideas?

    Read the article

  • Upload 1GB files using chunking in PHP

    - by rjha94
    I have a web application that accepts file uploads of up to 4 MB. The server-side script is PHP and the web server is NGINX. Many users have requested that this limit be increased drastically to allow uploads of video etc. However, there seems to be no easy solution to this problem in PHP. First, on the client side I am looking for something that would allow me to chunk files during transfer. SWFUpload does not seem to do that. I guess I can stream uploads using JavaFX (http://blogs.sun.com/rakeshmenonp/entry/javafx_upload_file), but I cannot find any equivalent of request.getInputStream in PHP. Increasing the browser client_post limits, the php.ini upload sizes, or max_execution times is not really a solution for really large files (~1 GB), because the browser may time out, and think of all those blobs stored in memory. Is there any way to solve this problem using PHP on the server side? I would appreciate your replies.

    Read the article

  • very large string in memory

    - by bushman
    Hi, I am writing a program for formatting hundreds of MB of string data (nearing a gig) into XML, and I am required to return it as the response to an HTTP GET request. I am using a StringWriter/XmlWriter to build the XML from the records in a loop and returning stringWriter.ToString(). During testing I saw a few out-of-memory exceptions and am quite clueless about how to find a solution. Do you have any suggestions for a memory-optimized delivery of the response? Is there a memory-efficient way of encoding the data, or maybe chunking it? I just cannot think of how to return it without building the whole thing into one huge string object. Thanks.

    Read the article

  • JMS Session pooling for large numbers of Topic subscribers

    - by matthewKizoom
    I'm writing an app that will create lots of JMS topic subscribers. What is best practice regarding reusing sessions: a session per subscriber, or a pool of sessions? With a session per subscriber the thread count seems unreasonable. Is this a job for something like a ServerSessionPool? What I've seen so far seems to suggest that ServerSessionPool is more geared towards one receiver consuming messages concurrently rather than lots of receivers. I'm currently working with HornetQ 2.0.0GA embedded in JBoss 4.3.0CP6.
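
    Not an answer from the article, just a minimal javax.jms sketch of the "session per subscriber" layout the question describes, sharing one connection across all subscribers (the connection factory and topic names are assumed to be supplied from elsewhere). JMS sessions are designed for single-threaded use, which is why they are not normally shared between consumers:

        import javax.jms.Connection;
        import javax.jms.ConnectionFactory;
        import javax.jms.JMSException;
        import javax.jms.MessageConsumer;
        import javax.jms.Session;
        import javax.jms.Topic;

        public class TopicSubscribers {
            public static void subscribeAll(ConnectionFactory factory, String[] topicNames) throws JMSException {
                // One physical connection can be shared; sessions are not thread-safe,
                // so each subscriber gets its own session and consumer.
                Connection connection = factory.createConnection();
                for (String name : topicNames) {
                    Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                    Topic topic = session.createTopic(name);
                    MessageConsumer consumer = session.createConsumer(topic);
                    consumer.setMessageListener(message -> {
                        // handle the message; runs on a provider-managed thread
                    });
                }
                connection.start(); // begin delivery once all listeners are registered
            }
        }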

    Read the article

  • Sharing large objects between ruby processes without a performance hit

    - by Gdeglin
    I have a Ruby hash that reaches approximately 10 megabytes if written to a file using Marshal.dump. After gzip compression it is approximately 500 kilobytes. Iterating through and altering this hash is very fast in Ruby (fractions of a millisecond). Even copying it is extremely fast. The problem is that I need to share the data in this hash between Ruby on Rails processes. In order to do this using the Rails cache (file_store or memcached) I need to Marshal.dump the hash first, but this incurs a 1000 millisecond delay when serializing it and a 400 millisecond delay when deserializing it. Ideally I would want to be able to save and load this hash from each process in under 100 milliseconds. One idea is to spawn a new Ruby process to hold this hash and provide an API to the other processes to modify or process the data within it, but I want to avoid doing this unless I'm certain that there are no other ways to share this object quickly. Is there a way I can more directly share this hash between processes without needing to serialize or deserialize it? Here is the code I'm using to generate a hash similar to the one I'm working with:

        @a = []
        0.upto(500) do |r|
          @a[r] = []
          0.upto(10_000) do |c|
            if rand(10) == 0
              @a[r][c] = 1 # 10% chance of being 1
            else
              @a[r][c] = 0
            end
          end
        end

        @c = Marshal.dump(@a) # 1000 milliseconds
        Marshal.load(@c)      # 400 milliseconds

    Read the article

  • Converting From Castle Windsor To StructureMap In An MVC2 Project

    - by alphadogg
    I am learning about best practices in MVC2 and I am knocking off a copy of the "Who Can Help Me" project (http://whocanhelpme.codeplex.com/) from Codeplex. In it, they use Castle Windsor for their DI container. One "learning" task I am trying to do is convert this subsystem in the project to use StructureMap. Basically, at Application_Start(), the code news up a Windsor container. Then, it goes through multiple assemblies, using MEF:

        public static void Register(IContainer container)
        {
            var catalog = new CatalogBuilder()
                .ForAssembly(typeof(IComponentRegistrarMarker).Assembly)
                .ForMvcAssembly(Assembly.GetExecutingAssembly())
                .ForMvcAssembliesInDirectory(HttpRuntime.BinDirectory, "CPOP*.dll") // Won't work in Partial trust
                .Build();

            var compositionContainer = new CompositionContainer(catalog);

            compositionContainer
                .GetExports<IComponentRegistrar>()
                .Each(e => e.Value.Register(container));
        }

    and any class in any assembly that has an IComponentRegistrar interface will get its Register() method run. For example, the controller registrar's Register() method implementation is basically:

        public void Register(IContainer container)
        {
            Assembly.GetAssembly(typeof(ControllersRegistrarMarker)).GetExportedTypes()
                .Where(IsController)
                .Each(type =>
                    container.AddComponentLifeStyle(
                        type.Name.ToLower(),
                        type,
                        LifestyleType.Transient
                    ));
        }

        private static bool IsController(Type type)
        {
            return typeof(IController).IsAssignableFrom(type);
        }

    Hopefully I am not butchering WCHM too much. I am wondering: how does one do this with StructureMap?

    Read the article

  • Usage of open source libraries in high governance and risk-averse large organizations (banks, financ

    - by bart
    Does anyone have any good stories of these kinds of organizations being open to using open-source dependencies (and also tools)? Many staff I've encountered have little or no exposure to open source/systems, and open source is treated with great suspicion. Some reasons given for this are lack of support and robustness, which is ironic given the number of end-of-life, unsupported vendor products that are in production. I'm also interested in any success stories where you've seen open source go into orgs like this and deliver a real benefit!

    Read the article

  • Making a Form Input Field Large

    - by John
    Hello, for the form below, how could I make the input field big, like maybe 100 pixels in height by 400 pixels in length? Thanks in advance, John

        <form action="http://www...com/sandbox/comments/comments2.php" method="post">
          <input type="hidden" value="'.$_SESSION['loginid'].'" name="uid">
          <div class="addacomment"><label for="title">Add a comment:</label></div>
          <div class="submissionfield"><input name="title" type="title" id="title" maxlength="1000"></div>
          <div class="submissionbutton"><input name="submit" type="submit" value="Submit"></div>
        </form>

    Read the article

  • Display large PDF using iPhone SDK

    - by MadJawa
    Hello, I was wondering what the best way is to display a big PDF file (it's a map, actually) using the iPhone SDK; the file is around 5 MB, and it's really slow in a UIWebView. I want to be able to scroll through the PDF and zoom in/out. Also, do you think that it would be better to convert it to a PNG? Thanks in advance.

    Read the article

  • Seeking reporting or templating tool to generate large formatted PDF reports from dataset

    - by Mr. Tacos
    Say I have some data in MySQL or a big ole CSV file. I also have a report. It's a PDF, call it 100 pages long. I need to generate variations on this PDF for slices of the data. A more specific example: I have a CSV file with each StackOverflow user in a row, and each column contains various statistics about that user. I have a report called "Your StackOverflow Performance". It's got lots of text, always the same, but each section contains something like "You vs. the Average StackOverflow Poster on this metric". I want a table to appear there that has the average data, which is the same in every run of the PDF, in one column. In the second column, I want your data, which is different for each PDF / row in the CSV file / user of StackOverflow. I'm pretty sure people use things like Crystal for this? Is there something in MS SQL Server that's good for this? An open-source template language? I'm not even really sure whether what I need is called a 'reporting' tool (since I don't really need to do any crunching; the data in this case is being crunched by a series of scripts and SPSS, so I don't need bands and subbands and so on) or 'templating'. Is there even such a thing as templating PDFs? Natch, I'd be fine with something that generates output easily scriptable to PDF, like EPS, but not something like HTML. The report formatting is fussy, already done, externally determined, and handed down from on high. It's print-oriented, not webby. Thanks in advance.

    Read the article

  • Browser Client Side Storage aka a large Cookie

    - by Ian
    Hi, I need to store about 20-30k of data on the client side when using a website. I was using a cookie, but this is too small for my needs. Is there something else that I can use? I need to be able to do this via JavaScript. Server-side storage is a last resort, but not what I am looking for. I need it to work for Chrome, IE and Firefox. Thanks, Ian

    Read the article

  • Question about registering a COM server and adding a reference to it in a C# project

    - by smwikipedia
    I built a COM server in raw C++. Here is the procedure: (1) write an IDL file to define the interface and library; (2) use midl.exe to compile the IDL file into the necessary .h, .c, and .tlb files; (3) implement the COM server in C++ and build a .dll file; (4) add the following registry entries:

        [HKEY_CLASSES_ROOT\RawComCarLib.ComCar.1\CurVer]
        @="RawComCarLib.ComCar.1"
        ;CLSID
        [HKEY_CLASSES_ROOT\CLSID\{6CC26343-167B-4CF2-9EDF-99368A62E91C}]
        @="RawComCarLib.ComCar.1"
        [HKEY_CLASSES_ROOT\CLSID\{6CC26343-167B-4CF2-9EDF-99368A62E91C}\InprocServer32]
        @="D:\com\Project01.dll"
        [HKEY_CLASSES_ROOT\CLSID\{6CC26343-167B-4CF2-9EDF-99368A62E91C}\ProgID]
        @="RawComCarLib.ComCar.1"
        [HKEY_CLASSES_ROOT\CLSID\{6CC26343-167B-4CF2-9EDF-99368A62E91C}\TypeLib]
        @="{E5C0EE8F-8806-4FE3-BC0E-3A56CFB38BEE}"
        ;TypeLib
        [HKEY_CLASSES_ROOT\TypeLib\{E5C0EE8F-8806-4FE3-BC0E-3A56CFB38BEE}]
        [HKEY_CLASSES_ROOT\TypeLib\{E5C0EE8F-8806-4FE3-BC0E-3A56CFB38BEE}\1.0]
        @="Car Server Type Lib"
        [HKEY_CLASSES_ROOT\TypeLib\{E5C0EE8F-8806-4FE3-BC0E-3A56CFB38BEE}\1.0\0]
        [HKEY_CLASSES_ROOT\TypeLib\{E5C0EE8F-8806-4FE3-BC0E-3A56CFB38BEE}\1.0\0\win32]
        @="D:\com\Project01.tlb"
        [HKEY_CLASSES_ROOT\TypeLib\{E5C0EE8F-8806-4FE3-BC0E-3A56CFB38BEE}\1.0\FLAGS]
        @="0"
        [HKEY_LOCAL_MACHINE\SOFTWARE\Classes\TypeLib\{E5C0EE8F-8806-4FE3-BC0E-3A56CFB38BEE}\1.0\0\win32]
        @="C:\Windows\System32\msdatsrc.tlb"

    (5) I try to add a reference to the COM server by clicking Add Reference in the C# project. (6) In the COM tab, I can see my "Car Server Type Lib"; everything is fine up to this point. But when I try to use the Object Browser to browse my COM library, Visual Studio says "the following components could not be browsed", and I notice that no new reference has been added to the References list of the C# project. I can use tlbimp.exe to generate an interop.CarCom.dll and then use the COM component through this interop DLL, but I want this interop assembly to be generated automatically when I simply add a reference to the COM component. Could someone tell me what's wrong? Many thanks.

    Read the article

  • What call from the Android Development Tools API should I use to poll project target loading?

    - by Ricardo Gladwell
    I'm writing my own Eclipse plug-in that integrates with the Eclipse Android Development Tools (ADT). However, I'm getting a CoreException ("Project target not loaded yet.") thrown when I attempt to call IProject.build on an Android project as part of a unit test:

        IProject project = importProject(...);
        project.build(IncrementalProjectBuilder.FULL_BUILD, monitor);

    Should I be waiting for the project target to load before calling the above? If so, what call should I use to poll the project target loading status?
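
    Purely as a sketch, and not taken from the article: if no public ADT call is available, one crude workaround is to retry the build shown above until it stops throwing, on the assumption that this particular CoreException only ever means the target is still loading. The class and method names below are invented:

        import org.eclipse.core.resources.IProject;
        import org.eclipse.core.resources.IncrementalProjectBuilder;
        import org.eclipse.core.runtime.CoreException;
        import org.eclipse.core.runtime.IProgressMonitor;

        public final class AndroidBuildHelper {
            /** Crude polling sketch: retry the build while ADT finishes loading the SDK target. */
            public static void buildWhenTargetLoaded(IProject project, IProgressMonitor monitor)
                    throws CoreException, InterruptedException {
                for (int attempt = 0; ; attempt++) {
                    try {
                        project.build(IncrementalProjectBuilder.FULL_BUILD, monitor);
                        return;               // the build ran, so the target had been loaded
                    } catch (CoreException e) {
                        if (attempt >= 30) {
                            throw e;          // give up after roughly 30 seconds
                        }
                        Thread.sleep(1000);   // a resource-change listener or Job would be cleaner
                    }
                }
            }
        }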

    Read the article

  • Easy GUI way to auto scale EC2 and RDS: aws console, scalr, ylastic...?

    - by Zillo
    I am managing all my instances with the AWS Management Console (the GUI web console), but now I want to use Auto Scaling, and it seems that this cannot be done with that console. Yes, there is CloudWatch, but I can only create alarms (e-mail notifications); it seems that CloudWatch needs you to add the auto-scaling policy somewhere else (via the command-line console?). I would like to use some easy GUI interface. Ylastic and Scalr seem to be good options. Which one do you think is better? Regarding Scalr, is there any difference between the open-source software Scalr and the service Scalr.net? I mean, is the GUI interface the same? I like the idea of Scalr because I do not need to give my Secret Access Key to a third party (as with Ylastic or Scalr.net). One question about the Scalr software: does it have to be installed on the instances, or must it be installed on another machine? Do I need to set up all my security permissions, AMIs, snapshots, etc. again, or can I use the AWS Management Console for everything and Scalr just for auto scaling?

    Read the article

  • Octave: importing a large matrix in csv format

    - by Massagran
    I'm trying to import a matrix (about 80,000 rows) from a CSV file into Octave. The obvious solution seems to be something like:

        load("-ascii", "relative_directory/the_file.csv")

    or maybe renaming the file and trying:

        load("-ascii", "relative_directory/the_file.txt")

    Yet I keep getting the error:

        load: failed to read matrix from file "relative_directory/the_file.csv"

    (or .txt) without any more details. Any tips are appreciated.

    Read the article

  • PHP readfile() and large downloads

    - by Nirmal
    I am setting up an online file management system, and I have hit a block. I am trying to push the file to the client using this modified version of readfile:

        function readfile_chunked($filename, $retbytes = true) {
            $chunksize = 1*(1024*1024); // how many bytes per chunk
            $buffer = '';
            $cnt = 0;
            // $handle = fopen($filename, 'rb');
            $handle = fopen($filename, 'rb');
            if ($handle === false) {
                return false;
            }
            while (!feof($handle)) {
                $buffer = fread($handle, $chunksize);
                echo $buffer;
                ob_flush();
                flush();
                if ($retbytes) {
                    $cnt += strlen($buffer);
                }
            }
            $status = fclose($handle);
            if ($retbytes && $status) {
                return $cnt; // return num. bytes delivered like readfile() does
            }
            return $status;
        }

    But when I try to download a 13 MB file, it just breaks at 4 MB. What would be the issue here? It's definitely not a time limit of any kind, because I am working on a local network and speed is not an issue. The memory limit in PHP is set to 300 MB. Thank you for any help.

    Read the article

  • Append a large li to ul: best way?

    - by zsharp
    I have a li that has numerous nested divs. I am appending it to a ul as follows:

        $("ul#List").append('<li><div>....many more nested divs...</li>');

    The structure of the li is the same as the other lis in the ul, but I have to modify some elements. My question is simply: am I doing it wrong by manually writing out the entire structure?

    Read the article

  • A function where small changes in input always result in large changes in output

    - by snowlord
    I would like an algorithm for a function that takes n integers and returns one integer. For small changes in the input, the resulting integer should vary greatly. Even though I've taken a number of courses in math, I have not used that knowledge very much and now I need some help... An important property of this function should be that if it is used with coordinate pairs as input and the result is plotted (as a grayscale value, for example) on an image, any repeating patterns should only be visible if the image is very big. I have experimented with various algorithms for pseudo-random numbers with little success, and finally it struck me that md5 almost meets my criteria, except that it is not for numbers (at least not as far as I know). That resulted in something like this Python prototype (for n = 2; it could easily be changed to take a list of integers, of course):

        import hashlib

        def uniqnum(x, y):
            return int(hashlib.md5(str(x) + ',' + str(y)).hexdigest()[-6:], 16)

    But obviously it feels wrong to go over strings when both input and output are integers. What would be a good replacement for this implementation (in pseudo-code, Python, or whatever language)?
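
    For comparison only, a rough Java rendering of the prototype above that hashes the raw bytes of the two integers instead of building a string; the class and method names are invented, and MD5 is used only because the prototype uses it. Keeping the low three bytes of the digest mirrors hexdigest()[-6:]:

        import java.nio.ByteBuffer;
        import java.security.MessageDigest;
        import java.security.NoSuchAlgorithmException;

        public class UniqNum {
            // Hash the 8 raw bytes of (x, y) and keep the last 3 bytes of the 16-byte digest.
            static int uniqnum(int x, int y) throws NoSuchAlgorithmException {
                byte[] input = ByteBuffer.allocate(8).putInt(x).putInt(y).array();
                byte[] digest = MessageDigest.getInstance("MD5").digest(input);
                int n = digest.length;
                return ((digest[n - 3] & 0xff) << 16)
                     | ((digest[n - 2] & 0xff) << 8)
                     |  (digest[n - 1] & 0xff);
            }

            public static void main(String[] args) throws NoSuchAlgorithmException {
                System.out.println(uniqnum(1, 1));
                System.out.println(uniqnum(1, 2)); // a small input change gives a very different result
            }
        }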

    Read the article

  • Decompressing a very large serialized object and managing memory

    - by Mike_G
    I have an object that contains tons of data used for reports. In order to get this object from the server to the client, I first serialize the object into a memory stream, then compress it using the GZip stream of .NET. I then send the compressed object as a byte[] to the client. The problem is that on some clients, when they get the byte[] and try to decompress and deserialize the object, a System.OutOfMemory exception is thrown. I've read that this exception can be caused by new()-ing a bunch of objects, or by holding on to a bunch of strings. Both of these happen during the deserialization process. So my question is: how do I prevent the exception (any good strategies)? The client needs all of the data, and I've trimmed down the number of strings as much as I can.

    Edit: here is the code I am using to serialize/compress (implemented as extension methods):

        public static byte[] SerializeObject<T>(this object obj, T serializer) where T : XmlObjectSerializer
        {
            Type t = obj.GetType();
            if (!Attribute.IsDefined(t, typeof(DataContractAttribute)))
                return null;

            byte[] initialBytes;
            using (MemoryStream stream = new MemoryStream())
            {
                serializer.WriteObject(stream, obj);
                initialBytes = stream.ToArray();
            }
            return initialBytes;
        }

        public static byte[] CompressObject<T>(this object obj, T serializer) where T : XmlObjectSerializer
        {
            Type t = obj.GetType();
            if (!Attribute.IsDefined(t, typeof(DataContractAttribute)))
                return null;

            byte[] initialBytes = obj.SerializeObject(serializer);
            byte[] compressedBytes;
            using (MemoryStream stream = new MemoryStream(initialBytes))
            {
                using (MemoryStream output = new MemoryStream())
                {
                    using (GZipStream zipper = new GZipStream(output, CompressionMode.Compress))
                    {
                        Pump(stream, zipper);
                    }
                    compressedBytes = output.ToArray();
                }
            }
            return compressedBytes;
        }

        internal static void Pump(Stream input, Stream output)
        {
            byte[] bytes = new byte[4096];
            int n;
            while ((n = input.Read(bytes, 0, bytes.Length)) != 0)
            {
                output.Write(bytes, 0, n);
            }
        }

    And here is my code for decompress/deserialize:

        public static T DeSerializeObject<T, TU>(this byte[] serializedObject, TU deserializer) where TU : XmlObjectSerializer
        {
            using (MemoryStream stream = new MemoryStream(serializedObject))
            {
                return (T)deserializer.ReadObject(stream);
            }
        }

        public static T DecompressObject<T, TU>(this byte[] compressedBytes, TU deserializer) where TU : XmlObjectSerializer
        {
            byte[] decompressedBytes;
            using (MemoryStream stream = new MemoryStream(compressedBytes))
            {
                using (MemoryStream output = new MemoryStream())
                {
                    using (GZipStream zipper = new GZipStream(stream, CompressionMode.Decompress))
                    {
                        ObjectExtensions.Pump(zipper, output);
                    }
                    decompressedBytes = output.ToArray();
                }
            }
            return decompressedBytes.DeSerializeObject<T, TU>(deserializer);
        }

    The object I am passing is a wrapper object; it just contains all the relevant objects that hold the data. The number of objects can be large (depending on the report's date range), but I've seen as many as 25k strings. One thing I forgot to mention is that I am using WCF, and since the inner objects are passed individually through other WCF calls, I am using the DataContract serializer, and all my objects are marked with the DataContract attribute.

    Read the article

  • git-svn on subset of large svn repo

    - by an146
    Repo layout:

        a/1
        a/2
        a/3
        ...
        b/1
        b/2
        ...
        c/1
        c/2
        ...

    git-svn works perfectly for me if I work on one svn repo subdir. But right now I'm facing the need to work on several subdirs (like a/1, a/2, and b/1), and there's a lot of cruft in the repo besides them. I've managed to write a regexp for this, but git-svn with --ignore-paths seems to check each file's name against this regexp instead of skipping entire folders, so it's too slow. (Probably I should file a bug report about this.) So: any ideas on how to handle this? If some Mercurial svn agent can do selective clones, that's OK too, but I'd rather stick with git. My other idea was some kind of selective svn proxy, but I haven't succeeded in googling anything like that. Thanks!

    Read the article

  • Need to check uptime on a large file being hosted

    - by trustfundbaby
    I have a dynamically generated RSS feed that is about 150 MB in size (don't ask). The problem is that it keeps crapping out sporadically, and there is no way to monitor it without downloading the entire feed to get a 200 status. Pingdom times out on it and returns a 'down' error. So my question is: how do I check that this thing is up and running?
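
    Not from the article, but a hedged sketch of one lightweight check: request only the first kilobyte of the feed with an HTTP Range header instead of downloading all of it. The URL below is a placeholder, and this assumes the server can answer a ranged (or plain) GET quickly even when generating the full feed is slow:

        import java.net.HttpURLConnection;
        import java.net.URL;

        public class FeedCheck {
            public static void main(String[] args) throws Exception {
                // Ask for only the first kilobyte instead of the whole ~150 MB feed.
                URL url = new URL("http://example.com/huge-feed.rss"); // placeholder URL
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setRequestMethod("GET");
                conn.setRequestProperty("Range", "bytes=0-1023");
                conn.setConnectTimeout(5000);
                conn.setReadTimeout(5000);
                int code = conn.getResponseCode();
                // 206 = server honoured the range; 200 = it ignored the range but is up.
                System.out.println(code == 206 || code == 200 ? "up" : "down (" + code + ")");
                conn.disconnect();
            }
        }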

    Read the article

  • How to manage a large dataset using Spring MySQL and RowCallbackHandler

    - by rmarimon
    I'm trying to go over each row of a table in MySQL using Spring and a JdbcTemplate. If I'm not mistaken, this should be as simple as:

        JdbcTemplate template = new JdbcTemplate(datasource);
        template.setFetchSize(1); // template.setFetchSize(Integer.MIN_VALUE) does not work either
        template.query("SELECT * FROM cdr", new RowCallbackHandler() {
            public void processRow(ResultSet rs) throws SQLException {
                System.out.println(rs.getString("src"));
            }
        });

    I get an OutOfMemoryError because it is trying to read the whole thing. Any ideas?
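
    Not from the article; for reference, MySQL Connector/J only streams rows one at a time when the statement is forward-only and read-only and the fetch size is exactly Integer.MIN_VALUE. A plain-JDBC sketch of that setup, with placeholder connection details:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.sql.Statement;

        public class StreamCdr {
            public static void main(String[] args) throws SQLException {
                String url = "jdbc:mysql://localhost:3306/mydb"; // placeholder connection details
                try (Connection conn = DriverManager.getConnection(url, "user", "password");
                     Statement stmt = conn.createStatement(ResultSet.TYPE_FORWARD_ONLY,
                                                           ResultSet.CONCUR_READ_ONLY)) {
                    // Connector/J switches to row-by-row streaming only with this exact fetch size.
                    stmt.setFetchSize(Integer.MIN_VALUE);
                    try (ResultSet rs = stmt.executeQuery("SELECT * FROM cdr")) {
                        while (rs.next()) {
                            System.out.println(rs.getString("src"));
                        }
                    }
                }
            }
        }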

    Read the article
