Search Results

Search found 21702 results on 869 pages for 'large objects'.


  • Upload large files in .NET

    - by Austin
    I've done a good bit of research to find an upload component for .NET that I can use to upload large files, that has a progress bar, and that can resume the upload of large files. I've come across components like AjaxUploader, SlickUpload, and PowUpload, to name a few. Each of these options costs money, and only PowUpload does resumable uploads, but it does so with a Java applet. I'm willing to pay for a component that does those things well, but writing it myself would be best. I have two questions:
    1. Is it possible to resume a file upload on the client without using Flash/Java/Silverlight?
    2. Does anyone have some code, or a link to an article, that explains how to write a .NET HttpHandler that allows streaming uploads and an AJAX progress bar?
    Thank you, Austin
    [Edit] I realized I do need resumable file uploads for my project; any suggestions for components that can do that?
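    On the streaming half, a minimal sketch, assuming .NET 4's HttpRequest.GetBufferlessInputStream() is available (earlier versions buffer the whole request first); the handler name and target path are illustrative:

      using System.IO;
      using System.Web;

      // Streams the request body to disk in 64 KB pieces instead of letting
      // ASP.NET buffer the whole upload in memory.
      public class StreamingUploadHandler : IHttpHandler
      {
          public bool IsReusable { get { return true; } }

          public void ProcessRequest(HttpContext context)
          {
              Stream input = context.Request.GetBufferlessInputStream();
              string target = context.Server.MapPath("~/App_Data/upload.bin");

              byte[] buffer = new byte[64 * 1024];
              using (FileStream output = File.Create(target))
              {
                  int read;
                  while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
                  {
                      output.Write(buffer, 0, read);
                      // Bytes written so far could be persisted here for an
                      // AJAX progress endpoint to poll.
                  }
              }
              context.Response.StatusCode = 200;
          }
      }

    Resuming would additionally mean the client reports its current byte offset (a header or query parameter, say) so the handler can open the target file for append instead of create.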


  • Cutting objects and applying texture to cut. Unity3d/C#

    - by Timothy Williams
    Basically, what I'm trying to do is figure out how to calculate real-time cutting of objects and apply a texture to the cut. I found some good scripts, but most of them have been abandoned and aren't fully working yet.
    Applying textures: http://forum.unity3d.com/threads/75949-Mesh-Real-Cutting?highlight=mesh+real+cutting
    Cutting: http://forum.unity3d.com/threads/78594-Object-Cutter
    Another (free) cutter (I'm not entirely sure how this one will handle cutting complex meshes): http://forum.unity3d.com/threads/69992-fake-slicer?p=449114&viewfull=1#post449114
    My plan as of right now is to combine links 1 & 2, or 1 & 3, programming-wise. What I'm asking for here is any advice on how to proceed (links to Asset Store packages, or other code showing how to accomplish something complex like this).
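    As a concrete starting point, here is the classification step that any plane-based cutter performs first, sketched against only the stock Unity API (the actual splitting of straddling triangles, and capping/texturing the cut face, are the hard parts the linked scripts deal with):

      using System.Collections.Generic;
      using UnityEngine;

      // Sorts a mesh's triangles into those entirely above the cutting plane,
      // entirely below it, or straddling it; the straddling ones must be split,
      // and their new edges outline the cut face that receives the cap texture.
      public static class CutClassifier
      {
          public static void Classify(Mesh mesh, Plane plane,
              List<int> above, List<int> below, List<int> straddling)
          {
              Vector3[] verts = mesh.vertices;
              int[] tris = mesh.triangles;
              for (int i = 0; i < tris.Length; i += 3)
              {
                  bool s0 = plane.GetSide(verts[tris[i]]);
                  bool s1 = plane.GetSide(verts[tris[i + 1]]);
                  bool s2 = plane.GetSide(verts[tris[i + 2]]);
                  if (s0 && s1 && s2) above.Add(i);
                  else if (!s0 && !s1 && !s2) below.Add(i);
                  else straddling.Add(i);   // crosses the plane: needs splitting
              }
          }
      }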


  • Large File Upload in SharePoint 2010

    - by Sahil Malik
    Okay, this is a big, BIG, B-I-G problem. And with SP2010 it's going to be more prominent, because at least on the server side, SharePoint can support large files much better than SharePoint 2007 ever did. The issues with very large files being uploaded through any browser-based API are:
    - Reliably transferring gigabyte-or-bigger files without breakage over a protocol like HTTP, which is better suited to tiny transfers like images and text.
    - Not killing your browser, because it has to load all of that in memory.
    - Not killing your web server, because:
      - Everything you upload through an HTTP POST first gets streamed into IIS memory (w3wp.exe memory) before the ENTIRE file finishes uploading and is stored. This means you cannot show an accurate, live progress bar of the upload; IIS gives you no such accurate metric, and all the counters it does give you are approximate.
      - Your w3wp.exe eats up all server memory: 4 GB of it, for a 4 GB upload.
      - A thread is kept busy for the entire duration of the upload, greatly limiting your web server's capability to serve new requests, and killing effective load balancing.
    - Not killing your content database, because as you upload a very large file, that file gets written sequentially into the DB, which for a very large file severely impacts database performance.
    I had put together another video showing RBS usage in SharePoint 2010, in which I talked about many practical ramifications of using RBS in SharePoint. Note that enabling large file support will never be a point-and-click job, simply because there are too many questions one needs to ask and too many things one needs to plan for. However, one part that remains common across all large-file upload scenarios, in SharePoint or outside of it, is doing the upload efficiently while not killing the web server. In this video, I describe using the Telerik Silverlight Upload control with SharePoint 2010 to enable efficient large file uploads. Presenting: the video.


  • How to best transfer large payloads of data using wsHttp with WCF with message security

    - by jpierson
    I have a case where I need to transfer large amounts of serialized object graphs (via NetDataContractSerializer) using WCF with wsHttp. I'm using message security and would like to continue to do so. With this setup I would like to transfer a serialized object graph that can sometimes approach around 300 MB, but when I try to do so I've started seeing an exception of type System.InsufficientMemoryException. After a little research, it appears that by default in WCF the result of a service call is contained within a single message, which contains the serialized data, and this data is buffered on the server until the whole message is completely written. Thus the memory exception is caused by the server running out of the memory resources it is allowed to allocate, because that buffer is full. The two main recommendations I've come across are to use streaming or chunking to solve this problem, but it is not clear to me what either involves, or whether either solution is possible with my current setup (wsHttp/NetDataContractSerializer/message security). So far I understand that streaming with message security would not work, because message encryption and decryption need to operate on the whole set of data, not a partial message. Chunking, however, sounds like it might be possible, but it is not clear to me how it would be done under the other constraints I've listed. If anybody could offer some guidance on what solutions are available and how to go about implementing them, I would greatly appreciate it.
    Related resources:
    - Chunking Channel
    - How to: Enable Streaming
    - Large attachments over WCF
    - Custom Message Encoder
    - Another spotting of InsufficientMemoryException
    I'm also interested in any type of compression that could be done on this data, but it looks like I would probably be best off doing that at the transport level once I can transition to .NET 4.0, so that the client will automatically support the gzip headers, if I understand this properly.
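    For what it's worth, application-level chunking can be sketched as a plain service contract: each chunk travels as a complete message, so message security signs and encrypts it normally. A hypothetical contract, not the WCF chunking-channel sample:

      using System.ServiceModel;

      // The client asks for the total size once, then pulls fixed-size chunks
      // in a loop and reassembles them into the serialized graph.
      [ServiceContract]
      public interface IChunkedTransfer
      {
          [OperationContract]
          long BeginTransfer(string graphId);   // total length in bytes

          [OperationContract]
          byte[] GetChunk(string graphId, long offset, int count);
      }

    The client loop just advances the offset by each returned chunk's length until it reaches the announced total; every individual message stays small enough to buffer and encrypt comfortably.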


  • Scalable / Parallel Large Graph Analysis Library?

    - by Joel Hoff
    I am looking for good recommendations for scalable and/or parallel large-graph analysis libraries in various languages. The problems I am working on involve significant computational analysis of graphs/networks with 1-100 million nodes and 10 million to 1+ billion edges. The largest SMP computer I am using has 256 GB memory, but I also have access to an HPC cluster with 1000 cores, 2 TB aggregate memory, and MPI for communication. I am primarily looking for scalable, high-performance graph libraries that could be used in either single- or multi-threaded scenarios, but parallel analysis libraries based on MPI or a similar protocol for communication and/or distributed memory are also of interest for high-end problems. Target programming languages include C++, C, Java, and Python. My research to date has come up with the following possible solutions for these languages:
    - C++: The most viable solutions appear to be the Boost Graph Library and the Parallel Boost Graph Library. I have looked briefly at MTGL, but it is currently slanted more toward massively multithreaded hardware architectures like the Cray XMT.
    - C: igraph and SNAP (Small-world Network Analysis and Partitioning); the latter uses OpenMP for parallelism on SMP systems.
    - Java: I have found no parallel libraries here yet, but JGraphT and perhaps JUNG are leading contenders in the non-parallel space.
    - Python: igraph and NetworkX look like the most solid options, though neither is parallel. There used to be Python bindings for BGL, but these are now unsupported; the last release, in 2005, looks stale now.
    Other topics here on SO that I've looked at have discussed graph libraries in C++, Java, Python, and other languages. However, none of those topics focused significantly on scalability. Does anyone have recommendations they can offer, based on experience with any of the above or other library packages, when applied to large graph analysis problems? Performance, scalability, and code stability/maturity are my primary concerns. Most of the specialized algorithms will be developed by my team, with the exception of any graph-oriented parallel communication or distributed-memory frameworks (where the graph state is distributed across a cluster).


  • Large Reports for MSRS

    - by Greg Lorenz
    I have a report that needs to be able to render a very large number of pages (about 4,500 in this instance) in a web browser. The total time needed to finish on the report server, from start to end, is about 30 minutes for the instance I am looking at. Does anyone know what options exist for handling the rendering of such a large report in a web browser? In terms of investigating how this can be resolved, I have already performed the following tasks. The report gets its data from a database table that already has the data flattened, to the point that the TimeDataRetrieval on the report server is 17812, or about 18 seconds. The report itself has been reformatted to use the least expensive report objects it can in order to render the data in the correct format; it basically consists of a table with about 4 nested tables, and that's it. We were trying to accomplish this on a 2005 report server but continued to run into memory issues that were not acceptable to our clients. In response, we moved this onto a 2008 report server to take advantage of the fact that it uses the file system instead of memory, and we were finally able to get this to work without running out of available memory, but of course it takes much longer.


  • Retaining Managed objects - more generally, retaining objects

    - by Luuk D. Jansen
    A quick question regarding managed objects. I created an array of managed objects (in object 1, a TableViewController), and pass one of those objects to another class/object (object 2, a TableCell). The original array should still be retained in the original caller class. When object 2 is released, does that mean that particular item in the array is released as well, since the reference to it in object 2 was released? I am trying to better understand how to work with managed objects, as I get 'object was released' errors. [Edit] After some experimenting, I came across the following scenario. I have the main AppDelegate, and in a different class I get hold of it to obtain the ManagedObjectContext:
      appDelegate = (iDomsAppDelegate *)[[UIApplication sharedApplication] delegate];
      [self setContext:[appDelegate managedObjectContext]];
    When the class is finished and I release it, the class's 'appDelegate' variable is also released. But then the ManagedObjectContext is closed, and obviously any future attempt to use it will cause a crash. So should I leave the appDelegate unreleased? This comes down to the same question as above, about when and how to release in situations where an object is used from another class. Another way of putting it: how do you know when you own an object and when you don't?


  • Should these concerns be separated into separate objects?

    - by Lewis Bassett
    I have objects which implement the interface BroadcastInterface, which represents a message that is to be broadcast to all users of a particular group. It has setter and getter methods for the Subject and Body properties, and an addRecipientRole() method, which takes a given role, finds the contact token (e.g., an email address) for each user in the role, and stores it. It also has a getContactTokens() method. BroadcastInterface objects are passed to an object that implements BroadcasterInterface. These objects are responsible for broadcasting a passed BroadcastInterface object. For example, an EmailBroadcaster implementation of BroadcasterInterface will take EmailBroadcast objects and use the mailer services to email them out. Now, depending on which BroadcasterInterface implementation is used to broadcast, a different implementation of BroadcastInterface is used by client code. The Single Responsibility Principle seems to suggest that I should have a separate BroadcastFactory object for creating BroadcastInterface objects, depending on which BroadcasterInterface implementation is used, as creating the BroadcastInterface object is a different responsibility from broadcasting it. But the class used for creating BroadcastInterface objects depends on which implementation of BroadcasterInterface is used to broadcast them. I think, because the knowledge of which method is used to send the broadcasts should only be configured once, the BroadcasterInterface object should be responsible for providing new BroadcastInterface objects. Does the responsibility of "creating and broadcasting objects that implement the BroadcastInterface interface" violate the Single Responsibility Principle? (Because the contact token for sending the broadcast out to the users will differ depending on how it is broadcast, I need different broadcast classes, though client code will not be able to tell the difference.)
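    A sketch of that resolution (C# shapes with adapted names, not the poster's actual code): the broadcaster doubles as the factory for the one broadcast type it knows how to send, so the transport decision is configured exactly once:

      // Client code asks the configured broadcaster for a blank broadcast,
      // fills it in, and hands it back; no concrete class is ever named.
      public interface IBroadcast
      {
          string Subject { get; set; }
          string Body { get; set; }
          void AddRecipientRole(string role);
      }

      public interface IBroadcaster
      {
          IBroadcast CreateBroadcast();      // creation responsibility
          void Send(IBroadcast broadcast);   // broadcasting responsibility
      }

    Whether that pairing violates the SRP is arguable; the usual test is that both members change for the same reason, namely the introduction of a new transport.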


  • Split NSData objects into other NSData objects with a given size

    - by Cedric Vandendriessche
    I have an NSData object approximately 1000 KB big. Now I want to transfer it via Bluetooth. This would work better if I had, say, 10 objects of 100 KB. It comes to mind that I should use NSData's -subdataWithRange: method. I haven't really worked with NSRange; well, I know how it works, but I have no idea how to read from a given location with a length of 'to end of file'. Some code on how to split this into multiple 100 KB NSData objects would really help me out here. (It probably involves the length method to see how many objects should be made..?) Thank you in advance.
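    The range arithmetic is language-neutral; a short C# sketch of the same splitting logic (for NSData, the Math.Min expression below becomes NSMakeRange(offset, MIN(chunkSize, [data length] - offset)) fed to -subdataWithRange:):

      using System;
      using System.Collections.Generic;

      static class Chunker
      {
          // Splits data into chunkSize-byte pieces; the final piece simply
          // holds whatever remains, which is the Math.Min trick below.
          public static List<byte[]> Split(byte[] data, int chunkSize)
          {
              var chunks = new List<byte[]>();
              for (int offset = 0; offset < data.Length; offset += chunkSize)
              {
                  int size = Math.Min(chunkSize, data.Length - offset);
                  var chunk = new byte[size];
                  Array.Copy(data, offset, chunk, 0, size);
                  chunks.Add(chunk);
              }
              return chunks;
          }
      }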


  • PHP bitwise left shifting 32 spaces problem and bad results with large numbers arithmetic operations

    - by Victor Stanciu
    Hello, I have the following problems.
    First: I am trying to do a 32-space bitwise left shift on a large number, and for some reason the number is always returned as-is. For example:
      echo(516103988 << 32); // echoes 516103988
    Because shifting the bits one space to the left is the equivalent of multiplying by 2, I tried multiplying the number by 2^32, and that works: it returns 2216649749795176448.
    Second: I have to add 9379 to the number from the above point:
      printf('%0.0f', 2216649749795176448 + 9379); // prints 2216649749795185920
    It should print: 2216649749795185827
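    For contrast, here is the same arithmetic in a language with native 64-bit integers; it also reproduces the wrong PHP sum. An IEEE double carries a 53-bit mantissa, and at this magnitude (between 2^60 and 2^61) values round to the nearest multiple of 256, so 2216649749795185827 becomes 2216649749795185920. On a 32-bit PHP build the shift fails separately: on common hardware the shift count is effectively taken modulo the word size, making <<32 a no-op:

      using System;

      class ShiftDemo
      {
          static void Main()
          {
              long shifted = 516103988L << 32;  // exact: 2216649749795176448
              long sum = shifted + 9379;        // exact: 2216649749795185827
              Console.WriteLine(sum);

              // The same sum done in floating point, as PHP does it: rounds to
              // the nearest multiple of 256, printing 2216649749795185920.
              double asDouble = 2216649749795176448d + 9379d;
              Console.WriteLine(asDouble.ToString("F0"));
          }
      }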


  • Creating C++ objects

    - by Phenom
    I noticed that there are two ways to create C++ objects:
      BTree *btree = new BTree;
    and
      BTree btree;
    From what I can tell, the only difference is in how class members are accessed (the . vs. the -> operator), and that when the first way is used, private integers get initialized to 0. Which way is better, and what's the difference? How do you know when to use one or the other?


  • dictionary interface for large data sets

    - by Richard
    I have a set of key/value pairs (all text) that is too large to load into memory at once. I would like to interact with this data via a Python dictionary-like interface. Does such a module already exist? Reading values by key should be efficient, and values should be compressed on disk to save space.
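    For illustration, the desired shape is small enough to sketch (in C# rather than Python; in Python itself, the standard library's shelve module gives the dict-like on-disk part, though without compression). One gzip-compressed file per key; all names hypothetical:

      using System.IO;
      using System.IO.Compression;
      using System.Text;

      // Dictionary-style indexer over a directory: each value lives in its own
      // gzip-compressed file, so only the value being read is ever in memory.
      public class DiskDictionary
      {
          private readonly string dir;

          public DiskDictionary(string dir)
          {
              this.dir = dir;
              Directory.CreateDirectory(dir);
          }

          private string PathFor(string key)
          {
              // Hex-encode the key so it is always a valid file name.
              var sb = new StringBuilder();
              foreach (byte b in Encoding.UTF8.GetBytes(key))
                  sb.Append(b.ToString("x2"));
              return Path.Combine(dir, sb.ToString());
          }

          public string this[string key]
          {
              get
              {
                  using (var fs = File.OpenRead(PathFor(key)))
                  using (var gz = new GZipStream(fs, CompressionMode.Decompress))
                  using (var reader = new StreamReader(gz, Encoding.UTF8))
                      return reader.ReadToEnd();
              }
              set
              {
                  using (var fs = File.Create(PathFor(key)))
                  using (var gz = new GZipStream(fs, CompressionMode.Compress))
                  using (var writer = new StreamWriter(gz, Encoding.UTF8))
                      writer.Write(value);
              }
          }
      }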


  • Fast ruby http library for large XML downloads

    - by Vlad Zloteanu
    I am consuming various XML-over-HTTP web services that return large XML files (> 2 MB). What would be the fastest Ruby HTTP library to reduce the download time? Required features:
    - both GET and POST requests
    - gzip/deflate downloads (Accept-Encoding: deflate, gzip) - very important
    I am deciding between open-uri, Net::HTTP, and curb, but other suggestions are also welcome. P.S. To parse the response, I am using a pull parser from Nokogiri, so I don't need an integrated solution like rest-client or hpricot.


  • view Large PDF file in iPhone SDK

    - by aRrAY
    I have followed some tutorials that teach how to view a PDF file in a UIWebView. However, I found that if the file size is large, zooming the PDF lags, though this doesn't occur in Mobile Safari. Does anyone know how to solve this?


  • Hadoop: Processing large serialized objects

    - by restrictedinfinity
    I am working on an application to process (and merge) several large Java serialized objects (on the order of GBs in size) using the Hadoop framework. Hadoop distributes the blocks of a file across different hosts. But as deserialization will require all the blocks to be present on a single host, it's going to hit performance drastically. How can I deal with this situation, where, unlike text files, the different blocks cannot be processed individually?


  • How to export large data to Excel

    - by mavera
    I have a criteria page in my ASP.NET application. When the user clicks the report button, the results are first bound to a DataGrid in a new page, and then the page is exported to an Excel file by changing the content type. That normally works, but when a large amount of data comes through, a System.OutOfMemoryException is thrown. Does anyone know a way to fix this problem, or another useful technique?
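    One memory-friendly alternative, sketched under stated assumptions (the handler name, query, and connection string are placeholders): skip the DataGrid and stream rows straight to the response as CSV, which Excel opens, so the full result set is never materialized as an HTML table in memory:

      using System.Data.SqlClient;
      using System.Web;

      // Writes query results to the response row by row; only one row is ever
      // held in memory. Naive CSV: assumes values contain no commas or quotes.
      public class CsvExportHandler : IHttpHandler
      {
          public bool IsReusable { get { return true; } }

          public void ProcessRequest(HttpContext context)
          {
              context.Response.ContentType = "text/csv";
              context.Response.AddHeader("Content-Disposition",
                  "attachment; filename=report.csv");

              using (var conn = new SqlConnection("...connection string..."))
              using (var cmd = new SqlCommand("SELECT Col1, Col2 FROM Report", conn))
              {
                  conn.Open();
                  using (var reader = cmd.ExecuteReader())
                  {
                      while (reader.Read())
                          context.Response.Write(reader[0] + "," + reader[1] + "\r\n");
                  }
              }
          }
      }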


  • How to count JavaScript array objects?

    - by Nikita Sumeiko
    When I have a JavaScript object like this:
      var member = {
          "mother":  { "name": "Mary", "age": "48" },
          "father":  { "name": "Bill", "age": "50" },
          "brother": { "name": "Alex", "age": "28" }
      };
    how do I count the objects in it? I mean, how do I get a count of 3, since there are only 3 objects inside: mother, father, brother?


  • Shortening large CSV on debian

    - by Unkwntech
    I have a very large CSV file, and I need to write an app that will parse it. Using the 6 GB file to test against is painful, so is there a simple way to extract the first hundred or two lines without having to load the entire file into memory? The file resides on a Debian server.
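    A sketch of one way (in C#, since the app's language isn't stated; file names are illustrative). Reading line by line keeps the 6 GB file out of memory, and on the server itself coreutils' head -n 200 big.csv > sample.csv does the same job:

      using System.IO;

      // Copies only the first n lines of a huge file into a small sample file.
      class Head
      {
          static void Main()
          {
              const int n = 200;
              using (var reader = new StreamReader("big.csv"))
              using (var writer = new StreamWriter("sample.csv"))
              {
                  string line;
                  int count = 0;
                  while (count++ < n && (line = reader.ReadLine()) != null)
                      writer.WriteLine(line);
              }
          }
      }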


  • Reading a large text file to memory in C++

    - by NoneType
    Is there a way to read a large text file (~60 MB) into memory at once (like a compiler flag to increase the program's memory limit)? Currently, ifstream's open function causes a segmentation fault while trying to read this file:
      ifstream fis;
      fis.open("my_large_file.txt"); // Segfaults here
    The file just consists of rows of the form number_1<tabspace>number_2, i.e., two numbers separated by a tab.


  • Viewing large XML files in eclipse?

    - by Paul Wicks
    I'm working on a project involving some large XML files (from 50 MB to over 1 GB), and it would be nice if I could view them in Eclipse (a simple text view is fine) without Java running out of heap space. I've tried tweaking the amount of memory available to the JVM in eclipse.ini, but haven't had much success. Any ideas?


  • How are value objects saved and loaded?

    - by yeraycaballero
    Since there aren't repositories for value objects, how can I load all value objects? Suppose we are modeling a blog application and we have these classes:
    - Post (entity)
    - Comment (value object)
    - Tag (value object)
    - PostsRepository (repository)
    I know that when I save a new post, its tags are saved with it in the same table. But how could I load all tags of all posts? Should PostsRepository have a method to load all tags? I usually do it that way, but I want to know others' opinions.
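    One common answer, sketched in C# (types illustrative, not the poster's code): value objects are reached through their aggregate root, so "all tags" is just a projection that the posts repository exposes:

      using System.Collections.Generic;
      using System.Linq;

      public class Tag { public string Name { get; set; } }

      public class Post
      {
          public IList<Tag> Tags { get; private set; }
          public Post() { Tags = new List<Tag>(); }
      }

      public class PostsRepository
      {
          private readonly IList<Post> posts = new List<Post>();

          // "Load all tags" is a query over posts, not a tags repository;
          // grouping by name collapses tags shared by many posts.
          public IEnumerable<Tag> AllTags()
          {
              return posts.SelectMany(p => p.Tags)
                          .GroupBy(t => t.Name)
                          .Select(g => g.First());
          }
      }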

