Search Results

Search found 602 results on 25 pages for 'chunks'.


  • JavaScript data grid for millions of rows

    - by Rudiger
    I need to present millions of rows of data to users in a grid using JavaScript. I don't want the user thinking about viewing finite pieces of data; it should appear that all of the data is available. Rather than downloading all the data at once, small chunks are downloaded as the user comes to them (by scrolling, keying in row numbers, searching). The rows will not be edited through this front end, so read-only answers are acceptable. What JavaScript data grids are best for this kind of seamless paging?
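
    For what it's worth, a minimal sketch of the seamless-paging technique described above, in plain JavaScript (the endpoint, page size, and grid hook are assumptions, not from the post):

        // Fetch rows on demand in fixed-size chunks as the user reaches them.
        var PAGE_SIZE = 100;
        var cache = {};                        // pageIndex -> array of rows

        function fetchPage(pageIndex, onReady) {
            if (cache[pageIndex]) { onReady(cache[pageIndex]); return; }
            var xhr = new XMLHttpRequest();
            xhr.open("GET", "/rows?offset=" + (pageIndex * PAGE_SIZE) +
                            "&limit=" + PAGE_SIZE);
            xhr.onload = function () {
                cache[pageIndex] = JSON.parse(xhr.responseText);
                onReady(cache[pageIndex]);     // grid redraws the visible window
            };
            xhr.send();
        }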

    Read the article

  • js regexp problem

    - by Alexander
    I have a searching system that splits the keyword into chunks and searches for them in a string like this:

        var regexp_school = new RegExp("(?=.*" + split_keywords[0] + ")(?=.*" + split_keywords[1] + ")(?=.*" + split_keywords[2] + ").*", "i");

    I would like to modify this so that it only matches at the beginning of words. For example, if the string is "Bbe be eb ebb beb" and the keyword is "be eb", then I want only "be", "ebb" and "eb" to hit. In other words, I want to combine the above regexp with this one:

        var regexp_school = new RegExp("^" + split_keywords[0], "i");

    but I'm not sure what the syntax would look like. I'm also using the split function to split the keywords, but I don't want to set a limit since I don't know how many words there are in the keyword string:

        split_keywords = school_keyword.split(" ", 3);

    If I leave the 3 out, will the result have a dynamic length or just a length of 1? I tried alert(split_keywords.length) but didn't get the desired response.
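
    A minimal sketch of one way to combine the two ideas (the \b word-boundary escape and the loop over however many keywords split returns are assumptions about the intent; it also assumes the keywords contain no regex metacharacters):

        // Build one word-boundary lookahead per keyword; split with no
        // limit returns every word, however many there are.
        var keywords = school_keyword.split(" ");
        var pattern = "";
        for (var i = 0; i < keywords.length; i++) {
            pattern += "(?=.*\\b" + keywords[i] + ")";
        }
        var regexp_school = new RegExp(pattern + ".*", "i");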

    Read the article

  • Python: Unpack arbitrary-length bits for database storage

    - by sberry2A
    I have a binary data format consisting of 18,000+ packed int64s, ints, shorts, bytes and chars. The data is packed to minimize its size, so fields don't always use byte-sized chunks. For example, a number whose min and max values are 31 and 32 respectively might be stored with a single bit, where the actual value is bitvalue + min, so 0 is 31 and 1 is 32. I am looking for the most efficient way to unpack all of these for subsequent processing and database storage. Right now I am able to read any value by using either struct.unpack or BitBuffer. I use struct.unpack for any data that starts on a bit where (bit-offset % 8 == 0 and data-length % 8 == 0), and I use BitBuffer for anything else. I know the offset and size of every packed piece of data, so what is going to be the fastest way to completely unpack them? Many thanks.
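
    As a point of reference, a minimal pure-Python sketch of pulling one field out of a bytes buffer by bit offset and width (the MSB-first bit ordering and the min-value bias are assumptions based on the description above):

        def unpack_field(buf, bit_offset, bit_width, min_value=0):
            # Read bit_width bits starting at bit_offset, MSB-first.
            value = 0
            for i in range(bit_width):
                byte_index, bit_index = divmod(bit_offset + i, 8)
                value = (value << 1) | ((buf[byte_index] >> (7 - bit_index)) & 1)
            return value + min_value   # e.g. width 1, min 31 -> 31 or 32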

    Read the article

  • How do I temporarily change the require path in Ruby ($:)?

    - by John Feminella
    I'm doing some trickery with a bunch of Rake tasks for a complex project, gradually refactoring away some of the complexity in chunks at a time. This has exposed the bizarre web of dependencies left behind by the previous project maintainer. What I'd like to be able to do is add a specific path in the project to require's list of paths to be searched, aka $:. However, I only want that path to be searched in the context of one particular method. Right now I'm doing something like this:

        def foo()
          # Save old paths, add new special path.
          paths = $:.dup
          $: << special_path
          # Do work ...
          bar()
          baz()
          quux()
          # Reset.
          $:.replace(paths)
        end

        def bar()
          require '...' # If called from within foo(), will also search special_path.
          ...
        end

    This is clearly a monstrous hack. Is there a better way?
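
    A minimal sketch of a slightly safer variant of the same hack (a block-based helper with ensure; the name is made up):

        # Temporarily extend the load path for the duration of a block.
        def with_load_path(path)
          saved = $:.dup
          $: << path
          yield
        ensure
          $:.replace(saved)   # restored even if the block raises
        end

        with_load_path(special_path) { bar; baz; quux }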

    Read the article

  • Long IF tree with strings

    - by DalGr
    I have a C program which uses Lua for scripting. In order to keep readability and avoid importing several constants into the individual Lua states, I condense a large number of functions into a simple call (such as ObjectSet(id, "ANGLE", 45)) by using an "action" string. To do this I have a large if tree comparing the action string to a list (such as if (stringcompare(action, "ANGLE")) ... else if (stringcompare(action, "X")) ... etc.). This approach works well, and within the program it's not really slow, and adding a new action is fairly quick. But I'm feeling perfectionist. Is there a better way to do this in C? And with Lua in heavy use, maybe there is a way to use it for this purpose? (Embedded "chunks" making a dictionary?) Although this part is mostly curiosity.
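
    For comparison, a minimal sketch of the table-driven alternative usually suggested for this pattern (the handler names here are hypothetical):

        #include <stddef.h>
        #include <string.h>

        typedef void (*action_fn)(int id, double value);

        static void set_angle(int id, double v) { /* ... */ }
        static void set_x(int id, double v)     { /* ... */ }

        /* One row per action; extending the dispatch is one new line. */
        static const struct { const char *name; action_fn fn; } actions[] = {
            { "ANGLE", set_angle },
            { "X",     set_x     },
        };

        static action_fn find_action(const char *name)
        {
            for (size_t i = 0; i < sizeof actions / sizeof actions[0]; i++)
                if (strcmp(actions[i].name, name) == 0)
                    return actions[i].fn;
            return NULL;   /* unknown action string */
        }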

    Read the article

  • Breaking the SQL Compact 8K Limit?

    - by David Veeneman
    I am creating a desktop application that stores rich text documents to a SQL Compact database. Documents are converted to a byte array and stored as a Binary column, and I am running into SQL Compact's 8K limit for Binary field length. Is there a simple way to get around the 8K limit? I can come up with lots of complicated ways to do it, such as parsing into 8K chunks for storage and reassembling on fetch. But before I get into something that complex, I would like to make sure I can't solve the problem more simply, such as by changing the data type. If there is no simple way of getting around the 8K limit, is there a best practice for storing documents greater than 8K? Thanks for your help.
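
    In case it helps to picture it, a minimal sketch of the chunking fallback mentioned above (the table schema and the InsertChunk helper are hypothetical):

        // Split the document's byte array into 8000-byte rows keyed by
        // document id and sequence number; concatenate the rows on fetch.
        const int ChunkSize = 8000;

        static void StoreDocument(int docId, byte[] data)
        {
            for (int offset = 0, seq = 0; offset < data.Length;
                 offset += ChunkSize, seq++)
            {
                int len = Math.Min(ChunkSize, data.Length - offset);
                byte[] chunk = new byte[len];
                Array.Copy(data, offset, chunk, 0, len);
                InsertChunk(docId, seq, chunk);   // hypothetical data-access call
            }
        }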

    Read the article

  • How to stream a WAV file?

    - by jonasb
    I'm writing an app where I record audio and upload the audio file over the web. In order to speed up the upload I want to start uploading before I've finished recording. The file I'm creating is a WAV file. My plan was to use multiple data chunks. So instead of the normal encoding (RIFF, fmt , data) I’m using (RIFF, fmt , data, data, ..., data). The first issue is that the RIFF header wants the total length of the whole file, but that is of course not known when streaming the audio (I’m now using an arbitrary number). The other problem is that I'm not sure if it's valid since Audacity doesn't recognise the file, and Windows Media Player opens the file but plays only a very small part. I've been reading WAV specs but haven’t found an answer. Any suggestions?
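
    For reference, a sketch of the canonical single-data-chunk header layout that most players expect, with the two length fields that are unknown while streaming; one common workaround (assumed here, not taken from the post) is to seek back and patch them once recording ends:

        #include <stdint.h>

        /* Canonical RIFF/WAVE header with a single PCM data chunk. */
        struct wav_header {
            char     riff[4];      /* "RIFF" */
            uint32_t riff_size;    /* total file size - 8; unknown while streaming */
            char     wave[4];      /* "WAVE" */
            char     fmt_[4];      /* "fmt " (note the trailing space) */
            uint32_t fmt_size;     /* 16 for PCM */
            uint16_t format;       /* 1 = PCM */
            uint16_t channels;
            uint32_t sample_rate;
            uint32_t byte_rate;    /* sample_rate * channels * bits / 8 */
            uint16_t block_align;  /* channels * bits / 8 */
            uint16_t bits;
            char     data[4];      /* "data" */
            uint32_t data_size;    /* unknown while streaming; patched at the end */
        };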

    Read the article

  • Which faces technology for use with GlassFish 2.1 and NetBeans 6.7?

    - by SteJav
    I'm running GlassFish 2.1 and using NetBeans 6.7. I'd like to create a web interface to my data using JSF 1.2. Trouble is, I'm not sure which 'faces' technology to learn (one with some good documentation). JBoss/RichFaces seems pretty good on documentation, but I'm using GlassFish. Any thoughts? The choices appear overwhelming: Tomahawk, Tobago, Trinidad, ICEfaces, RCFaces, Netadvantage, WebGalileoFaces, QuipuKit, BluePrints, Woodstock, JBoss RichFaces, Ajax4jsf, ILOG, Oracle ADF, G4JSF, Simplica, Backbase, jenia4faces, VisualWebPack, DynaFaces, IBM Impl, Dinamica, Mojarra, PrimeFaces, jQuery, OpenFaces, ZK, ExtJS. Has anybody had experience with any of the above and found the documentation to be clear to a beginner? Being a JSF/web beginner, I tried some ICEfaces and Mojarra tutorials and had a go at getting RichFaces working with NetBeans and GlassFish, but no luck. Lots of XML complaints. I'm clearly missing some huge chunks of configuration, but I can't find any documentation to help me. Any suggestions would be much appreciated :-)

    Read the article

  • How to keep Hibernate mapping use under control as requirements grow

    - by David Plumpton
    I've worked on a number of Java web apps where persistence is via Hibernate, and we start off with some central class (e.g. an insurance application) without any time being spent considering how to break things up into manageable chunks. Over time, as features are added, we add more mappings (rates, clients, addresses, etc.), and the amount of time spent saving and loading an insurance object and everything it connects to grows. In particular, you get close to a go-live date and performance testing with larger amounts of data in each table starts to demonstrate that it's all too slow. Obviously there are a number of ways we could attempt to partition things up, e.g. map only the client classes for the client CRUD screens, etc., which would have been better to get in place earlier rather than trying to work it in at the end of the dev cycle. I'm just wondering if there are recommendations about ways to handle/mitigate this.

    Read the article

  • Can an FLV AAC stream be played on Android?

    - by HariKJ
    Hi, I'm trying to build a radio player, and the client is providing a stream which is an FLV container with the audio being AAC. When I read the headers it shows up as audio/aacp. I have tried all possible ways, such as:

    1) Streaming through MediaPlayer (does not work)
    2) Using the NPR approach of a proxy stream (I get a broken pipe exception)
    3) Playing it in chunks (plays, but I need the SD card and the playback is not very great)
    4) Using the GPL'd FAAD2 library (but I would have to pay the royalty fee)

    Can someone help me figure this issue out? The last option I have is to ask my client to change the stream to an mp3 container (which I know works). Regards, Hari

    Read the article

  • How to transfer large files from desktop to server (.NET)

    - by rahulchandran
    I am writing a .NET 2.0 based desktop client that will send large files (well, largish: under 2GB) to a server. I need to develop the server as well, and the server can be on any technology. It should be secure, so an underlying SSL stream is needed. What are my options? Any obvious caveats etc. I should be aware of? To my mind the simplest solution is to open a TCP/IP connection over SSL to the server, send n packets each of size M bytes, have the server append the chunks to the file, and finally send an EOF packet as well. Is this horrible? Will the perf suck on the server with all these disk writes? What other clever options are there? I am limited to .NET 2.0 on the client; if I did move to a WCF client, would it buy me something magical and cool for this scenario? Thanks
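
    A minimal sketch of the simple approach described, using APIs available in .NET 2.0 (host name, buffer size and framing are assumptions):

        using System.IO;
        using System.Net.Security;
        using System.Net.Sockets;

        static void Upload(string host, int port, string path)
        {
            using (TcpClient client = new TcpClient(host, port))
            using (SslStream ssl = new SslStream(client.GetStream()))
            using (FileStream file = File.OpenRead(path))
            {
                ssl.AuthenticateAsClient(host);      // SSL handshake
                byte[] buffer = new byte[64 * 1024];
                int read;
                while ((read = file.Read(buffer, 0, buffer.Length)) > 0)
                    ssl.Write(buffer, 0, read);      // server appends each chunk
            }
        }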

    Read the article

  • C#: Redirect console application output: How to flush the output?

    - by user93422
    I am spawning an external console application and using async output redirection, as shown in this SO post. My problem is that the spawned process seems to need to produce a certain amount of output before I get the OutputDataReceived event notification. I want to receive the OutputDataReceived event as soon as possible. I have a bare-bones redirecting application, and here are some observations:

    1. When I call a simple while(true) print("X"); console application (C#), I receive the output event immediately.
    2. When I call the 3rd-party app I am trying to wrap from the command line, I see line-by-line output.
    3. When I call that 3rd-party app from my bare-bones wrapper (see 1), the output comes in chunks (about one page in size).

    What happens inside that app? FYI: the app in question is the "USBee DX Data Extractor (Async bus) v1.0".

    Read the article

  • Migrating Data to MSSQL 2008

    - by Fred Clown
    I am trying to migrate data from an Informix database to MSSQL 2008. I've got quite a lot of data to move. I've been trying multiple methods to get the data over, and so far SqlBulkCopy in multiple chunks seems to be the fastest that I can find. Does anyone know of a faster means of getting the data over? I'm trying to cut down on the transfer time so that on my cut-over date I don't run out of time to do the full cut-over. Thanks.
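
    For context, a minimal sketch of one SqlBulkCopy chunk (the connection string, table name and the reader over the Informix rows are assumptions):

        using (SqlBulkCopy bulk = new SqlBulkCopy(destConnectionString))
        {
            bulk.DestinationTableName = "dbo.TargetTable";
            bulk.BatchSize = 10000;           // rows per round trip
            bulk.BulkCopyTimeout = 0;         // don't time out on large chunks
            bulk.WriteToServer(sourceReader); // IDataReader over the source rows
        }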

    Read the article

  • Is it possible to update a SharePoint list without "ID"?

    - by Pari
    I want to upload a file to SharePoint and, while uploading, add all the properties of the uploaded document. We get the ID field only once the document has been uploaded to SharePoint. Is there any other way to update the list without passing the ID field? Example:

        <Batch OnError="Continue" ListVersion="1" ViewName="270C0508-A54F-4387-8AD0-49686D685EB2">
          <Method ID="1" Cmd="Update">
            <Field Name="ID">4</Field>
            <Field Name="Field_Name">Value</Field>
          </Method>
          <Method ID="2" Cmd="Update">
            <Field Name="ID">6</Field>
            <Field Name="Field_Name">Value</Field>
          </Method>
        </Batch>

    (Referring link.) I am using SharePoint Web Services, and uploading the document in chunks.

    Read the article

  • Which is faster in memory, ints or chars? And file-mapping or chunk reading?

    - by Nick
    Okay, so I've written a (rather unoptimized) program before to encode images to JPEGs, however, now I am working with MPEG-2 transport streams and the H.264 encoded video within them. Before I dive into programming all of this, I am curious what the fastest way to deal with the actual file is. Currently I am file-mapping the .mts file into memory to work on it, although I am not sure if it would be faster to (for example) read 100 MB of the file into memory in chunks and deal with it that way. These files require a lot of bit-shifting and such to read flags, so I am wondering that when I reference some of the memory if it is faster to read 4 bytes at once as an integer or 1 byte as a character. I thought I read somewhere that x86 processors are optimized to a 4-byte granularity, but I'm not sure if this is true... Thanks!
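
    A small sketch of the two access patterns being compared, assuming a pointer into the mapped (or chunk-read) buffer; the memcpy is one portable way to do the 4-byte read without alignment trouble:

        #include <cstdint>
        #include <cstring>

        // Four byte-wide reads, assembled big-endian by shifting
        // (transport stream fields are big-endian).
        inline uint32_t read_bytes(const uint8_t* p) {
            return (uint32_t(p[0]) << 24) | (uint32_t(p[1]) << 16) |
                   (uint32_t(p[2]) << 8)  |  uint32_t(p[3]);
        }

        // One 32-bit read; memcpy compiles to a single load on x86.
        // Note this is a native-endian load, unlike the assembly above.
        inline uint32_t read_u32(const uint8_t* p) {
            uint32_t v;
            std::memcpy(&v, p, sizeof v);
            return v;
        }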

    Read the article

  • Can log4net appenders be defined in their own XML files?

    - by ladenedge
    I want to define a handful of (ADO.NET) appenders in my library, but allow users of my library to configure the use of those appenders. Something like this seems to be what I want: XmlConfigurator.Configure(appenderStream1); XmlConfigurator.Configure(appenderStream2); XmlConfigurator.Configure(); But that doesn't seem to work, in spite of the debug output containing messages like this: Configuration update mode [Merge]. What is the right way to do this? Or, is there an alternative to asking users to duplicate large chunks of XML configuration?

    Read the article

  • Data format for content heavy iPhone app - Plist or XML?

    - by Toby
    Hello, I'm building an iPhone app that is essentially a book; it will be bundled with a lot of text-heavy content. I considered bundling the data as XML and loading it when the application starts, but the XML would contain a lot of nested structures and be a bit of a pain to parse. Would it be better to use a plist? I'm concerned about memory usage, and plists are loaded entirely into memory - can they be parsed in chunks? Is there a maximum size to a plist, and how efficient are they? I'm not sure how big the bundled content is going to be yet, but I should imagine it could be anywhere from 500k to 4MB. Thanks in advance.

    Read the article

  • IDL-like parser that turns a document definition into powerful classes?

    - by paniq
    I am looking for an IDL-like (or whatever) translator which turns a DOM- or JSON-like document definition into classes which:

    - are accessible from both C++ and Python, within the same application
    - expose document properties as ints, floats, strings, binary blobs and compounds: array, string dict (both nestable) (basically the JSON type feature set)
    - allow changes to be tracked to refresh views of an editing UI
    - provide a change history to enable undo/redo operations
    - can be serialized to and from JSON (could also be some kind of binary format)
    - allow keeping large data chunks on disk, with parts only loaded on demand
    - provide non-blocking thread-safe read/write access to exchange data with realtime threads
    - allow multiple editors in different processes (or even on different machines) to view and modify the document

    The thing that comes closest so far is the Blender 2.5 DNA/RNA system, but it's not available as a separate library, and it's badly documented. I'm most of all trying to make sure that such a lib does not exist yet, so I know my time is not wasted when I start to design and write such a thing. It's supposed to provide a great foundation for writing editing UI components.

    Read the article

  • Efficiency: what block size for kernel-mode memory allocations?

    - by Robert
    I need a big, driver-internal memory buffer of several tens of megabytes (non-paged, since it is accessed at DISPATCH_LEVEL). Since I think that allocating chunks of non-contiguous memory is more likely to succeed than allocating one single contiguous memory block (especially when memory becomes fragmented), I want to implement that memory buffer as a linked list of memory blocks. What size should the blocks have to load the memory pages efficiently? (Read: not to waste any page space.)

    - A multiple of 4096? (equal to the page size of the OS)
    - A multiple of 4000? (so as not to waste another page on OS-internal memory allocation information)
    - Another size?

    The target platform is Windows NT >= 5.1 (XP and above); target architectures are x86 and amd64 (not Itanium).
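
    A minimal sketch of such a linked list of non-paged blocks (WDK names; the block size constant is exactly the open question above):

        #include <wdm.h>

        #define BUF_BLOCK_SIZE (16 * PAGE_SIZE)   /* candidate size; see question */

        typedef struct _BUF_BLOCK {
            struct _BUF_BLOCK *Next;
            SIZE_T Used;
            UCHAR Data[BUF_BLOCK_SIZE - sizeof(PVOID) - sizeof(SIZE_T)];
        } BUF_BLOCK;

        static BUF_BLOCK *AllocBlock(void)
        {
            BUF_BLOCK *b = ExAllocatePoolWithTag(NonPagedPool,
                                                 sizeof(BUF_BLOCK), 'kblB');
            if (b != NULL) {
                b->Next = NULL;
                b->Used = 0;
            }
            return b;
        }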

    Read the article

  • send() always interrupted by EPIPE

    - by Manuel Abeledo
    I have this weird behaviour in a multithreaded server programmed in C under GNU/Linux: while it's sending data, it will eventually be interrupted by SIGPIPE. Because of this, I managed to ignore the signal in send() and check errno after each action. So, it has two individual sending methods: one that sends a large amount of data at once (or at least tries to), and another that sends a nearly similar amount but slices it into little chunks. Finally, I tried this to keep it sending data:

        do {
            total_bytes_sent += send(client_sd, output_buf + total_bytes_sent,
                                     output_buf_len - total_bytes_sent,
                                     MSG_NOSIGNAL);
        } while ((total_bytes_sent < output_buf_len) && (errno != EPIPE));

    This ugly piece of code does its work in certain situations, but not always. I'm pretty sure it's not a hardware or ISP problem, as this server is running on six European servers, four in Germany and two in France. Any ideas? Thanks in advance.
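
    One thing worth noting (a hedged sketch, not the original code): send() returns -1 on error, so adding its return value straight into the running total corrupts the byte count on the very failure the loop is trying to survive. Checking the return value first avoids that:

        #include <errno.h>
        #include <sys/socket.h>

        size_t  total = 0;
        ssize_t sent;
        while (total < output_buf_len) {
            sent = send(client_sd, output_buf + total,
                        output_buf_len - total, MSG_NOSIGNAL);
            if (sent < 0) {
                if (errno == EINTR)
                    continue;   /* interrupted: retry the same slice */
                break;          /* EPIPE etc.: the peer is gone, stop */
            }
            total += (size_t)sent;
        }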

    Read the article

  • How do I download a large file (via HTTP) in .NET?

    - by nickcartwright
    I need to download a LARGE file (2GB) over HTTP in a C# console app. Problem is, after about 1.2GB, the app runs out of memory. Here's the code I'm using:

        WebClient request = new WebClient();
        request.Credentials = new NetworkCredential(username, password);
        byte[] fileData = request.DownloadData(baseURL + fName);

    As you can see... I'm reading the file directly into memory. I'm pretty sure I could solve this if I were to read the data back from HTTP in chunks and write it to a file on disk. Does anyone know how I could do this?
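
    A minimal sketch of the chunked version described (the buffer size and local path are arbitrary; these are standard .NET 2.0 APIs):

        HttpWebRequest req = (HttpWebRequest)WebRequest.Create(baseURL + fName);
        req.Credentials = new NetworkCredential(username, password);
        using (WebResponse resp = req.GetResponse())
        using (Stream http = resp.GetResponseStream())
        using (FileStream file = File.Create(localPath))
        {
            byte[] buffer = new byte[64 * 1024];
            int read;
            while ((read = http.Read(buffer, 0, buffer.Length)) > 0)
                file.Write(buffer, 0, read);   // spill each chunk to disk
        }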

    Read the article

  • Sending a large file over network continuously

    - by David Parunakian
    Hello. We need to write software that would continuously (i.e. new data is sent as it becomes available) send very large files (several Tb) to several destinations simultaneously. Some destinations have a dedicated fiber connection to the source, while some do not. Several questions arise:

    - We plan to use TCP sockets for this task. What failover procedure would you recommend in order to handle network outages and dropped connections?
    - What should happen upon upload completion: should the server close the socket? If so, then is it a good design decision to have another daemon provide file checksums on another port?
    - Could you recommend a method to handle corrupted files, aside from downloading them again? Perhaps I could break them into 10Mb chunks and calculate checksums for each chunk separately? (See the sketch below.)

    Thanks.
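
    A small sketch of the per-chunk checksum idea from the last question (the chunk size matches the 10Mb suggestion; the hash algorithm is an assumption):

        import hashlib

        CHUNK = 10 * 1024 * 1024   # 10Mb pieces, as suggested above

        def chunk_checksums(path):
            sums = []
            with open(path, "rb") as f:
                while True:
                    block = f.read(CHUNK)
                    if not block:
                        break
                    sums.append(hashlib.sha256(block).hexdigest())
            return sums   # mismatched entries identify which chunks to re-send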

    Read the article

  • How to split and join array in C++?

    - by Richard Knop
    I have a byte array like this:

        lzo_bytep out; // my byte array
        size_t uncompressedImageSize = 921600;
        out = (lzo_bytep) malloc(uncompressedImageSize + uncompressedImageSize / 16 + 64 + 3);
        wrkmem = (lzo_voidp) malloc(LZO1X_1_MEM_COMPRESS);
        // Now the byte array has 802270 bytes
        r = lzo1x_1_compress(imageData, uncompressedImageSize, out, &out_len, wrkmem);

    How can I split it into smaller parts under 65,535 bytes (the byte array is one large packet which I want to send over UDP, which has an upper limit of 65,535 bytes) and then join those small chunks back into a continuous array?
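
    A minimal sketch of the split and join (the chunk size is an assumption; real UDP transport would also need sequence numbers so the receiver can order chunks and detect loss):

        #include <algorithm>
        #include <vector>

        const size_t kChunkSize = 60000;   // safely under the 65,535-byte limit

        std::vector<std::vector<unsigned char> >
        split(const unsigned char* data, size_t len) {
            std::vector<std::vector<unsigned char> > chunks;
            for (size_t off = 0; off < len; off += kChunkSize) {
                size_t n = std::min(kChunkSize, len - off);
                chunks.push_back(std::vector<unsigned char>(data + off, data + off + n));
            }
            return chunks;
        }

        std::vector<unsigned char>
        join(const std::vector<std::vector<unsigned char> >& chunks) {
            std::vector<unsigned char> out;
            for (size_t i = 0; i < chunks.size(); i++)
                out.insert(out.end(), chunks[i].begin(), chunks[i].end());
            return out;
        }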

    Read the article

  • How can I ensure that JavaScript inserted via AJAX will be executed after the accompanying HTML (als

    - by RenderIn
    I've got portions of pages being replaced with HTML retrieved via AJAX calls. Some of the HTML coming back has JavaScript that needs to be run once in order to initialize the accompanying HTML (setting up event handlers). Since the document has already been loaded, when I replace chunks of HTML using jQuery's .html function, a jQuery(document).ready(function() {...}); block doesn't execute: the page loaded long before, and this is just a snippet of HTML being replaced. What's the best way to attach event handlers whose code is packaged along with the HTML it's interested in, when that content is loaded via AJAX? Should I just put a procedural block of JavaScript after the HTML, so that when I insert the new HTML block, jQuery will execute the JavaScript immediately? Is the HTML definitely in the DOM and ready to be acted upon by JavaScript in the same .html call?
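
    For what it's worth, a small sketch of the timing in question (.html() inserts the new nodes synchronously, so a callback that runs right after it can safely bind handlers; the URL and function names are made up):

        $.get('/fragment.html', function (markup) {
            $('#panel').html(markup);   // the new HTML is in the DOM now
            initPanelHandlers();        // so binding event handlers here is safe
        });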

    Read the article

  • [multiple issues] Customizing Xcode [fonts, code sense, and more]

    - by wwrob
    1. How can I make code completion case-sensitive?
    2. How can I make Ctrl-K kill the content of the line and the newline character?
    3. How can I make backspace always delete only one character, no matter what it is? Right now, it deletes spaces in chunks equal to my indent level.
    4. How to change the indentation style in file templates? I like to have the opening brace on its own line.
    5. How can I make the font aliased?

    EDIT: Issues 4 and 5 are solved. 1 through 3 are still open.

    Read the article
