Search Results

Search found 39613 results on 1585 pages for 'large object heap'.


  • Large File Upload in SharePoint 2010

    - by Sahil Malik
    Okay, this is a big BIG B-I-G problem, and with SP2010 it's going to be more prominent, because at least on the server side SharePoint can support large files much better than SharePoint 2007 ever did. The issues with very large files being uploaded through any browser-based API are:
    - Reliably transferring gigabyte-plus files without breakage over a protocol like HTTP, which is better suited to tiny transfers like images and text.
    - Not killing your browser, which otherwise has to hold all of that in memory.
    - Not killing your web server: everything you upload through an HTTP POST first gets streamed into IIS (w3wp.exe) memory before the ENTIRE file finishes uploading and is stored. Which means you cannot show an accurate, live progress bar (IIS gives you no accurate metric of an upload; all its counters are approximate); your w3wp.exe eats up server memory, 4 GB of it for a 4 GB upload; and a thread is kept busy for the entire duration of the upload, greatly limiting your web server's capacity to serve new requests and killing effective load balancing.
    - Not killing your content database: a very large file gets written sequentially into the DB, which severely impacts database performance.
    I had put together another video showing RBS usage in SharePoint 2010, in which I talked about many practical ramifications of using RBS in SharePoint. Note that enabling large file support will never be a point-and-click job, simply because there are too many questions to ask and too many things to plan for. However, one part remains common to all large file upload scenarios, in SharePoint or outside it: doing the transfer efficiently without killing the web server (a generic sketch of that chunked approach is below). In this video, I describe using the Telerik Silverlight Upload control with SharePoint 2010 to enable efficient large file uploads. Presenting: the video.
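
    For anyone outside the Silverlight/Telerik stack, the efficient pattern is the same everywhere: slice the file on the client and send fixed-size chunks, so neither side ever buffers the whole upload and progress can be reported exactly. A minimal sketch in Python; the endpoint URL and the X-Chunk-* headers are hypothetical, and a real server would need matching reassembly logic:

        # Upload a large file in fixed-size chunks so that neither the client
        # nor the server ever holds the whole file in memory.
        import os
        import requests

        CHUNK_SIZE = 8 * 1024 * 1024  # 8 MB per request

        def upload_in_chunks(path, url):
            total = os.path.getsize(path)
            sent = 0
            with open(path, "rb") as f:
                index = 0
                while True:
                    chunk = f.read(CHUNK_SIZE)  # at most CHUNK_SIZE bytes in memory
                    if not chunk:
                        break
                    resp = requests.post(url, data=chunk, headers={
                        "X-Chunk-Index": str(index),   # hypothetical header
                        "X-Total-Size": str(total),    # hypothetical header
                    })
                    resp.raise_for_status()
                    index += 1
                    sent += len(chunk)
                    # progress is exact, because the client controls the chunking
                    print(f"{sent / total:.0%} uploaded")

        upload_in_chunks("big_video.bin", "https://example.com/upload")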

    Read the article

  • Can an object oriented program be seen as a Finite State Machine?

    - by Peretz
    This might be a philosophical/fundamental question, but I just want to clarify it. In my understanding, a Finite State Machine is a way of modeling a system in which the system's output depends not only on the current inputs but also on the current state of the system. Additionally, as the name suggests, a finite state machine can be decomposed into a finite number N of states, each with its own behavior. If this is correct, shouldn't every single object with data and function members be a state in our object-oriented model, making any object-oriented design a finite state machine? If that is not the interpretation of an FSM in object design, what exactly do people mean when they implement an FSM in software? Am I missing something? Thanks
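
    For contrast, a minimal sketch (in Python) of what implementing an FSM usually means in practice: a small, explicitly enumerated set of states plus a transition table, rather than the effectively unbounded state space that arbitrary member data gives an object:

        # The classic turnstile as an explicit finite state machine:
        # every state and every transition is enumerated up front.
        TRANSITIONS = {
            ("locked", "coin"): "unlocked",
            ("locked", "push"): "locked",
            ("unlocked", "push"): "locked",
            ("unlocked", "coin"): "unlocked",
        }

        class Turnstile:
            def __init__(self):
                self.state = "locked"

            def handle(self, event):
                self.state = TRANSITIONS[(self.state, event)]
                return self.state

        t = Turnstile()
        for event in ["push", "coin", "push"]:
            print(event, "->", t.handle(event))  # locked, unlocked, locked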

    Read the article

  • Questioning one of the arguments for dependency injection: Why is creating an object graph hard?

    - by oberlies
    Dependency injection frameworks like Google Guice give the following motivation for their usage (source): "To construct an object, you first build its dependencies. But to build each dependency, you need its dependencies, and so on. So when you build an object, you really need to build an object graph. Building object graphs by hand is labour intensive (...) and makes testing difficult." But I don't buy this argument: even without dependency injection, I can write classes which are both easy to instantiate and convenient to test. E.g. the example from the Guice motivation page could be rewritten in the following way:

        class BillingService {
            private final CreditCardProcessor processor;
            private final TransactionLog transactionLog;

            // constructor for tests, taking all collaborators as parameters
            BillingService(CreditCardProcessor processor, TransactionLog transactionLog) {
                this.processor = processor;
                this.transactionLog = transactionLog;
            }

            // constructor for production, calling the production constructors of the collaborators
            public BillingService() {
                this(new PaypalCreditCardProcessor(), new DatabaseTransactionLog());
            }

            public Receipt chargeOrder(PizzaOrder order, CreditCard creditCard) { ... }
        }

    So there may be other arguments for dependency injection (which are out of scope for this question!), but easy creation of testable object graphs is not one of them, is it?

    Read the article

  • Is there an alternative to the term "calling object"?

    - by ybakos
    Let's suppose you've got a class defined (in pseudocode):

        class Puppy {
            // ...
            string sound = "Rawr!";
            void bark() {
                print(sound);
            }
        }

    And say, given a Puppy instance, you call its bark() method:

        Puppy p;
        p.bark();

    Notice how bark() uses the member variable sound. In many contexts, I've seen folks describe sound as the member variable of the "calling object." My question is: what's a better term to use than "calling object"? To me, the object is not doing any calling. We know that member functions are in a way just functions with an implicit this or self parameter. I've come up with "receiving object," or "message recipient," which makes sense if you're down with the "messaging" paradigm. Do any of you happy hackers have a term that you like to use? I feel it should mean "the object upon which a method is called," and TOUWAMIC just doesn't cut it.

    Read the article

  • spl_object_hash for PHP < 5.2 (unique ID for object instances)

    - by Rowan
    I'm trying to get unique IDs for object instances in PHP 5+. The function spl_object_hash() is available from PHP 5.2, but I'm wondering if there's a workaround for older versions. There are a couple of functions in the comments on php.net, but they're not working for me. The first (simplified):

        function spl_object_hash($object) {
            if (is_object($object)) {
                return md5((string) $object);
            }
            return null;
        }

    does not work with native objects (such as DOMDocument), and the second:

        function spl_object_hash($object) {
            if (is_object($object)) {
                ob_start();
                var_dump($object);
                $dump = ob_get_contents();
                ob_end_clean();
                if (preg_match('/^object\(([a-z0-9_]+)\)\#(\d)+/i', $dump, $match)) {
                    return md5($match[1] . $match[2]);
                }
            }
            return null;
        }

    looks like it could be a major performance buster! Does anybody have anything up their sleeve?

    Read the article

  • PHP: How do I access child properties from a method in a base object?

    - by Nick
    I'd like for all of my objects to be able to return a JSON string of themselves. So I created a base class for all of my objects to extend, with an AsJSON() method:

        class BaseObject {
            public function AsJSON() {
                $JSON = array();
                foreach ($this as $key => $value) {
                    if (is_null($value)) continue;
                    $JSON[$key] = $value;
                }
                return json_encode($JSON);
            }
        }

    And then extend my child classes from that:

        class Package extends BaseObject { ... }

    So in my code, I expect to do this:

        $Box = new Package;
        $Box->SetID('123');
        $Box->SetName('12x8x6');
        $Box->SetBoxX('12');
        $Box->SetBoxY('8');
        $Box->SetBoxZ('6');
        echo $Box->AsJSON();

    But the JSON string it returns only contains the base class's properties, not the child's. How do I modify my AsJSON() function so that $this refers to the child's properties, not the parent's?

    Read the article

  • How to best transfer large payloads of data using wsHttp with WCF with message security

    - by jpierson
    I have a case where I need to transfer large amounts of serialized object graphs (via NetDataContractSerializer) using WCF over wsHttp. I'm using message security and would like to continue to do so. With this setup I would like to transfer a serialized object graph that can sometimes approach 300 MB or so, but when I try, I've started seeing an exception of type System.InsufficientMemoryException. After a little research, it appears that by default in WCF the result of a service call is contained within a single message holding the serialized data, and this data is buffered on the server until the whole message is completely written. The memory exception is thus caused by the server running out of the memory resources it is allowed to allocate, because that buffer is full. The two main recommendations I've come across are to use streaming or chunking to solve this problem, but it is not clear to me what that involves, or whether either solution is possible with my current setup (wsHttp/NetDataContractSerializer/message security). So far I understand that streaming would not work with message security, because message encryption and decryption need to work on the whole set of data, not a partial message. Chunking, however, sounds like it might be possible, but it is not clear to me how it would be done within the other constraints I've listed. If anybody could offer some guidance on what solutions are available and how to go about implementing them, I would greatly appreciate it. Related resources:
    - Chunking Channel
    - How to: Enable Streaming
    - Large attachments over WCF
    - Custom Message Encoder
    - Another spotting of InsufficientMemoryException
    I'm also interested in any type of compression that could be done on this data, but it looks like I would be best off doing this at the transport level once I can transition to .NET 4.0, so that the client will automatically support the gzip headers, if I understand this properly.

    Read the article

  • Scalable / Parallel Large Graph Analysis Library?

    - by Joel Hoff
    I am looking for good recommendations for scalable and/or parallel large graph analysis libraries in various languages. The problems I am working on involve significant computational analysis of graphs/networks with 1-100 million nodes and 10 million to 1+ billion edges. The largest SMP computer I am using has 256 GB memory, but I also have access to an HPC cluster with 1000 cores, 2 TB aggregate memory, and MPI for communication. I am primarily looking for scalable, high-performance graph libraries that could be used in either single or multi-threaded scenarios, but parallel analysis libraries based on MPI or a similar protocol for communication and/or distributed memory are also of interest for high-end problems. Target programming languages include C++, C, Java, and Python. My research to date has come up with the following possible solutions for these languages:
    - C++: The most viable solutions appear to be the Boost Graph Library and the Parallel Boost Graph Library. I have looked briefly at MTGL, but it is currently slanted more toward massively multithreaded hardware architectures like the Cray XMT.
    - C: igraph and SNAP (Small-world Network Analysis and Partitioning); the latter uses OpenMP for parallelism on SMP systems.
    - Java: I have found no parallel libraries here yet, but JGraphT and perhaps JUNG are leading contenders in the non-parallel space.
    - Python: igraph and NetworkX look like the most solid options, though neither is parallel. There used to be Python bindings for the BGL, but these are now unsupported; the last release, in 2005, looks stale now.
    Other topics here on SO that I've looked at have discussed graph libraries in C++, Java, Python, and other languages, but none of them focused significantly on scalability. Does anyone have recommendations they can offer based on experience with any of the above or other library packages when applied to large graph analysis problems? Performance, scalability, and code stability/maturity are my primary concerns. Most of the specialized algorithms will be developed by my team, with the exception of any graph-oriented parallel communication or distributed memory frameworks (where the graph state is distributed across a cluster).

    Read the article

  • Large Reports for MSRS

    - by Greg Lorenz
    I have a report that needs to be able to render a very large number of pages (about 4500 in this instance) in a web browser. The total time needed on the report server, from start to finish, is about 30 minutes for the instance I am looking at. Does anyone know what options exist for handling the rendering of such a large report in a web browser? In terms of looking into how this can be resolved, I have already performed the following tasks. The report gets its data from a database table that already has the data flattened, to the point that the TimeDataRetrieval on the report server is 17812 ms, or about 18 seconds. The report itself has been reformatted to use the least expensive report objects it can while still rendering the data in the correct format; it basically consists of a table with about 4 nested tables, and that's it. We were trying to accomplish this on a 2005 report server but kept running into memory issues that were not feasible for our clients. In response, we moved this onto a 2008 report server to take advantage of the fact that it uses the file system instead of memory, and we were finally able to get this to work without running out of available memory, but of course it takes much longer.

    Read the article

  • Why "object reference not set to an instance of an object" doesn't tell us which object?

    - by Saeed Neamati
    We're launching a system, and we sometimes get the famous NullReferenceException with the message "Object reference not set to an instance of an object." However, in a method where we have almost 20 objects, a log which says only that some object is null is really of no use at all. It's like telling the security agent of a seminar that one man among the 100 attendees is a terrorist: that alone doesn't help; you need more information to detect which man is the threat. Likewise, if we want to remove the bug, we need to know which object is null. Now, something has obsessed my mind for several months: why doesn't .NET give us the name, or at least the type, of the object reference which is null? Can't it determine the type from reflection or some other source? Also, what are the best practices for understanding which object is null? Should we always test objects for null in these contexts manually and log the result, or is there a better way?

    Read the article

  • Associating an object with another object for GC clearup

    - by thecoop
    Is there any way of associating an object instance (object A) with a second object (object B) in a generalised way, so that when B gets collected, A becomes eligible for collection? The same behaviour would happen if B had an instance variable pointing to A, but I want it without explicitly changing the class definition of B, and I want to be able to do this dynamically. The same sort of effect could be achieved by using the Component.Disposed event in a funky way, but I don't want to make B disposable. EDIT: I'm basically creating a cache of objects that are associated with a single 'root' object, and I don't want the cache to be static, as there can be lots of root objects using different caches, so a lot of memory would be used up when a root object is no longer alive but its cached objects are still around. So, I want a collection of cached objects to be associated with each 'root' object, without changing the root object definition - sort of like metadata of an extra object reference attached to each root object instance. That way, each collection will get collected when its root object is collected, and not hang around the way it would if a static cache were used.
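
    What is described here is exactly what weak-key tables provide: an association that becomes collectable together with its key, without touching the key's class. A sketch of the idea in Python using weakref.WeakKeyDictionary (in .NET, ConditionalWeakTable plays this role from version 4.0 onward):

        # Associate a cache (object A) with a root (object B) externally;
        # once the root is collected, the cache entry goes with it.
        import gc
        import weakref

        class Root:      # stands in for object B; its definition is untouched
            pass

        class Cache:     # stands in for object A
            pass

        caches = weakref.WeakKeyDictionary()

        root = Root()
        caches[root] = Cache()   # the table holds root only weakly
        print(len(caches))       # 1

        del root                 # drop the last strong reference to B
        gc.collect()             # (CPython usually collects immediately anyway)
        print(len(caches))       # 0 -- A is now collectable too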

    Read the article

  • PHP bitwise left shifting 32 spaces problem and bad results with large numbers arithmetic operations

    - by Victor Stanciu
    Hello, I have the following problems. First: I am trying to do a 32-space bitwise left shift on a large number, and for some reason the number is always returned as-is. For example:

        echo(516103988 << 32); // echoes 516103988

    Because shifting the bits to the left by one space is the equivalent of multiplying by 2, I tried multiplying the number by 2^32, and that works: it returns 2216649749795176448. Second: I have to add 9379 to the number from the above point:

        printf('%0.0f', 2216649749795176448 + 9379); // prints 2216649749795185920

    It should print: 2216649749795185827
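
    For reference, both symptoms share one root cause: PHP's native integer width. On a 32-bit build, integers are 32 bits and (on x86) the shift count is taken modulo 32, so a shift by 32 leaves the value unchanged; and once a value exceeds 2^53, PHP falls back to doubles, whose 53-bit mantissa can no longer represent every integer, which is why the sum comes out 93 too high. Both results can be verified exactly in a language with arbitrary-precision integers; a quick check in Python:

        # Python ints are arbitrary precision, so both steps are exact;
        # the float() line reproduces the PHP result, because a 64-bit double
        # rounds integers of this magnitude to multiples of 256.
        n = 516103988 << 32
        print(n)                     # 2216649749795176448
        print(n + 9379)              # 2216649749795185827 (the expected answer)
        print(int(float(n) + 9379))  # 2216649749795185920 (double rounding, as in PHP)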

    Read the article

  • dictionary interface for large data sets

    - by Richard
    I have a set of key/values (all text) that is too large to load into memory at once. I would like to interact with this data via a Python dictionary-like interface. Does such a module already exist? Reading values by key should be efficient, and values should be compressed on disk to save space.
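
    The standard library gets close: shelve provides a persistent dict of pickled values, and dbm plus zlib gives keyed on-disk storage with compressed values. A minimal sketch of the latter; for heavier use, key/value stores such as LMDB or LevelDB via their Python bindings would be the more robust choice:

        # Dict-like, disk-backed store with zlib-compressed text values.
        import dbm
        import zlib

        class CompressedDiskDict:
            def __init__(self, path):
                self.db = dbm.open(path, "c")   # data lives on disk, not in memory

            def __setitem__(self, key, value):
                self.db[key.encode()] = zlib.compress(value.encode())

            def __getitem__(self, key):
                return zlib.decompress(self.db[key.encode()]).decode()

            def close(self):
                self.db.close()

        d = CompressedDiskDict("bigstore")
        d["alpha"] = "some long text value " * 1000
        print(d["alpha"][:21])   # values are decompressed on demand
        d.close()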

    Read the article

  • How to send large objects using boost::asio

    - by Max
    Good day. I'm receiving large objects over the network using boost::asio, and I have this code:

        for (int i = 1; i <= num_packets; i++)
            boost::asio::async_read(socket_,
                boost::asio::buffer(Obj + packet_size * (i - 1), packet_size),
                boost::bind(...));

    where Obj is a My_Class *. I'm in doubt whether that approach is possible (because I have a pointer to an object here), or would it be better to receive the object using packets of a fixed size in bytes? Thanks in advance.

    Read the article

  • Fast ruby http library for large XML downloads

    - by Vlad Zloteanu
    I am consuming various XML-over-HTTP web services returning large XML files (over 2 MB). What would be the fastest Ruby HTTP library to reduce the download time? Required features:
    - both GET and POST requests
    - gzip/deflate downloads (Accept-Encoding: deflate, gzip) - very important
    I am deciding between open-uri, Net::HTTP, and curb, but you can also come up with other suggestions. P.S. To parse the response, I am using a pull parser from Nokogiri, so I don't need an integrated solution like rest-client or hpricot.

    Read the article

  • view Large PDF file in iPhone SDK

    - by aRrAY
    I have followed some tutorials that teach how to view a PDF file in a UIWebView. However, I found that if the file size is large, zooming the PDF lags, yet this doesn't occur in Mobile Safari. Does anyone know how to solve this?

    Read the article

  • How to export large data to Excel

    - by mavera
    I have a criteria page in my ASP.NET application. When the user clicks the report button, the results are first bound to a datagrid on a new page, and then that page is exported to an Excel file by changing the response content type. That normally works, but when a large amount of data comes through, a System.OutOfMemoryException is thrown. Does anyone know a way to fix this problem, or another useful technique for the export?
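
    Whatever the stack, the usual fix is to stream rows into the output as they are read, instead of materializing the whole grid and then converting it. As a sketch of that idea (Python with openpyxl's write-only mode; fetch_rows is a hypothetical stand-in for a database cursor):

        # Stream rows straight into an .xlsx file; the write-only workbook
        # never keeps the full sheet in memory, so memory use stays flat.
        from openpyxl import Workbook

        def fetch_rows():
            for i in range(1_000_000):       # stand-in for a DB cursor
                yield [i, f"name-{i}", i * 1.5]

        wb = Workbook(write_only=True)
        ws = wb.create_sheet(title="Report")
        ws.append(["id", "name", "value"])   # header row
        for row in fetch_rows():
            ws.append(row)                   # written out, then discarded
        wb.save("report.xlsx")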

    Read the article

  • Shortening large CSV on debian

    - by Unkwntech
    I have a very large CSV file, and I need to write an app that will parse it, but using the 6 GB file to test against is painful. Is there a simple way to extract the first hundred or two hundred lines without having to load the entire file into memory? The file resides on a Debian server.
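
    On Debian the quickest route is coreutils: head -n 200 big.csv > sample.csv emits only the lines it reads. The equivalent lazy slice in Python, in case the sample needs processing on the way out:

        # Copy the first 200 lines of a huge CSV without reading the rest:
        # the file object is a lazy line iterator, and islice stops early.
        from itertools import islice

        with open("big.csv", "r", newline="") as src, \
             open("sample.csv", "w", newline="") as dst:
            dst.writelines(islice(src, 200))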

    Read the article

  • Reading a large text file to memory in C++

    - by NoneType
    Is there a way to read a large text file (~60 MB) into memory at once (like a compiler flag to increase the program's memory limit)? Currently, ifstream's open function causes a segmentation fault when trying to read this file:

        std::ifstream fis;
        fis.open("my_large_file.txt"); // segfaults here

    The file just consists of rows of the form number_1<tabspace>number_2, i.e., two numbers separated by a tab.

    Read the article

  • How/where to run the algorithm on large dataset?

    - by niko
    I would like to run the PageRank algorithm on a graph with 4,000,000 nodes and around 45,000,000 edges. Currently I use the neo4j graph database and a classic relational database (postgres), and for software projects I mostly use C# and Java. Does anyone know what would be the best way to perform a PageRank computation on such a graph? Is there any way to modify the PageRank algorithm in order to run it on a home computer or a server (48 GB RAM), or is there any useful cloud service to push the data along with the algorithm and retrieve the results? At this stage the project is at the research stage, so if a cloud service is used, I would prefer a provider that doesn't require much administration and service setup, letting me focus on running the algorithm once and getting the results without much administrative overhead.
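
    For scale, 4 million nodes and 45 million edges fit comfortably on the 48 GB server, so no cluster is needed. A sketch of plain power-iteration PageRank over a SciPy sparse matrix, assuming the edge list has already been exported (e.g. from neo4j or postgres) into NumPy index arrays src and dst of node ids in 0..n-1:

        # Power-iteration PageRank; ~45M edges occupy only a few GB as CSR.
        import numpy as np
        import scipy.sparse as sp

        def pagerank(src, dst, n, d=0.85, tol=1e-8, max_iter=100):
            # A[j, i] = 1 for each edge i -> j
            A = sp.csr_matrix((np.ones(len(src)), (dst, src)), shape=(n, n))
            out_deg = np.asarray(A.sum(axis=0)).ravel()  # out-degree per source
            out_deg[out_deg == 0] = 1                    # dangling nodes (crude fix)
            A = A.multiply(1.0 / out_deg).tocsr()        # make columns stochastic
            r = np.full(n, 1.0 / n)
            for _ in range(max_iter):
                r_new = (1 - d) / n + d * A.dot(r)
                if np.abs(r_new - r).sum() < tol:        # L1 convergence test
                    break
                r = r_new
            return r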

    Read the article

  • Managing large downloadable content on mobile devices

    - by larromba
    This is a general question about how best to manage large downloadable content on mobile devices. Let's consider a situation whereby a mobile app needs to download a number of very large content items, like HD videos, that are over 500 MB but under 2 GB, and let's assume this content delivery system should be scalable. Would it be fair to assume that: A reputable cloud service would be needed - if so, what is a reliable and cost-effective cloud service for mobile devices, based on anyone's experience? Large content downloads should only be attempted over a wifi connection, so the end user doesn't incur large costs, e.g. when travelling. Downloads should carry on in the background if possible, as the user won't want to wait in an app for long periods. If the downloads don't finish, or the OS quits the app, they should resume when the app is next activated. Are there any other pitfalls anyone may have experienced when managing large content on mobile devices? Thanks.

    Read the article
