Search Results

Search found 21702 results on 869 pages for 'large objects'.


  • Comparing two objects that are the same in MbUnit

    - by Coppermill
    From MbUnit I am trying to check whether the values of two objects are the same, using Assert.AreSame(RawDataRow, result); however, I am getting the following failure:

        Expected Value & Actual Value : {RawDataRow: CentreID = "CentreID1", CentreLearnerRef = "CentreLearnerRef1", ContactID = 1, DOB = 2010-05-05T00:00:00.0000000, Email = "Email1", ErrorCodes = "ErrorCodes1", ErrorDescription = "ErrorDescription1", FirstName = "FirstName1"}
        Remark : Both values look the same when formatted but they are distinct instances.

    I don't want to have to go through each property; can I do this from MbUnit?
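
    As a point of reference, a minimal C# sketch of the distinction the failure message is pointing at: Assert.AreSame tests reference identity, while Assert.AreEqual goes through Equals, so one option (a sketch, with most properties omitted) is giving RawDataRow a value-based Equals override:

        // Assert.AreSame(a, b) passes only when a and b are the same instance.
        // Assert.AreEqual(a, b) calls Equals, so a value-based override makes
        // two distinct-but-identical instances compare equal.
        public class RawDataRow
        {
            public string CentreID { get; set; }
            public int ContactID { get; set; }

            public override bool Equals(object obj)
            {
                var other = obj as RawDataRow;
                if (other == null) return false;
                return CentreID == other.CentreID && ContactID == other.ContactID;
            }

            public override int GetHashCode()
            {
                return (CentreID ?? "").GetHashCode() ^ ContactID;
            }
        }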

    Read the article

  • ZipArchive memory problems on iPhone for large archive

    - by Mithin
    Hi, I am trying to compress multiple files into a single zip archive, and I am running into low-memory warnings. Since the complete zip file is loaded into memory, I guess that's the problem. Is there a way I can manage the compression/decompression better using ZipArchive, so that not all of the data is in memory at once? Thanks!

    Read the article

  • How to write a large number of nested records in JSON with Python

    - by jamesmcm
    I want to produce a JSON file, containing some initial parameters and then records of data, like this:

        {
          "measurement" : 15000,
          "imi" : 0.5,
          "times" : 30,
          "recalibrate" : false,
          { "colorlist" : [234, 431, 134], "speclist" : [0.34, 0.42, 0.45, 0.34, 0.78] }
          { "colorlist" : [214, 451, 114], "speclist" : [0.44, 0.32, 0.45, 0.37, 0.53] }
          ...
        }

    How can this be achieved using the Python json module? The data records cannot be added by hand, as there are very many.
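
    As written, that sketch is not strictly valid JSON (bare objects cannot sit inside an object without a key), so a minimal sketch of one way to do it with the json module gathers the records into a list under an invented key such as "records":

        import json

        output = {
            "measurement": 15000,
            "imi": 0.5,
            "times": 30,
            "recalibrate": False,
            "records": [],   # the repeated data records live in a list
        }

        # append each record as it is produced; records() is a stand-in
        # for however the colorlist/speclist pairs are generated
        for colorlist, speclist in records():
            output["records"].append({"colorlist": colorlist,
                                      "speclist": speclist})

        with open("out.json", "w") as f:
            json.dump(output, f, indent=2)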

    Read the article

  • Using Large Lists

    - by cam
    In an Outlook AddIn I'm working on, I use a list to grab all the messages in the current folder, then process them, then save them. First, I create a list of all messages, then I create another list from the list of messages, then finally I create a third list of messages that need to be moved. Essentially, they are all copies of each other, and I made it this way to keep things organized. Would it increase performance if I used only one list? I thought lists were just references to the actual items.
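
    For what it's worth, a minimal C# sketch (Message is a stand-in type) showing that copying a List<T> of reference types duplicates the references, not the underlying objects, so each extra list costs roughly one pointer per entry:

        using System;
        using System.Collections.Generic;

        class Message { public string Subject; }

        class Program
        {
            static void Main()
            {
                var all = new List<Message> { new Message { Subject = "original" } };
                var copy = new List<Message>(all);  // new list, same Message references

                copy[0].Subject = "changed";
                // Prints "changed": both lists point at the same object.
                Console.WriteLine(all[0].Subject);
            }
        }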

    Read the article

  • How to open a large text file in C#

    - by desmati
    I have a text file that contains about 100000 articles. The structure of the file is:

        BEGIN OF FILE
        .Document ID 42944-YEAR:5
        .Date 03\08\11
        .Cat political
        Article Content 1
        .Document ID 42945-YEAR:5
        .Date 03\08\11
        .Cat political
        Article Content 2
        END OF FILE

    I want to open this file in C# and process it line by line. I tried this code:

        String[] FileLines = File.ReadAllText(TB_SourceFile.Text)
                                 .Split(Environment.NewLine.ToCharArray());

    But it says: Exception of type 'System.OutOfMemoryException' was thrown. The question is: how can I open this file and read it line by line? File size: 564 MB (591,886,626 bytes). File encoding: UTF-8. The file contains Unicode characters.
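
    A minimal sketch of streaming the file instead of materializing all of it (File.ReadLines does the same from .NET 4 onward; the StreamReader loop below works on older frameworks too):

        using System.IO;
        using System.Text;

        class Program
        {
            static void Main()
            {
                // Only one line is held in memory at a time.
                using (var reader = new StreamReader(@"articles.txt", Encoding.UTF8))
                {
                    string line;
                    while ((line = reader.ReadLine()) != null)
                    {
                        // e.g. detect record boundaries on ".Document ID"
                        if (line.StartsWith(".Document ID"))
                        {
                            // start of a new article
                        }
                    }
                }
            }
        }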

    Read the article

  • Large Scale VHDL techniques

    - by oxinabox.ucc.asn.au
    I'm thinking about implementing a 16-bit CPU in VHDL. A simplish CPU: ADD, MULS, NEG, bit shift, JUMP, relative jump, BREQ, relative BREQ, something along those lines, probably all working only on 16-bit operands. I might even cut it down and use only a single operand and an accumulator, with some status registers: Carry, Zero, Neg (unless I use an accumulator). I know how to design all the parts from logic gates, and I plan to build them up from first principles. So for my ALU I'll need to 'build' an adder, probably a carry-lookahead group adder; this adder is itself made up of a couple of parts, which are themselves made up of a couple of parts. Anyway, my problem is not the CPU design, or the VHDL (I know the language, more or less). It's how I should keep things organised. How should I use packages? How should I name my processes and port maps? (I've never seen the benefit of naming the port maps, or processes.)
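
    One common convention (a minimal sketch; all names are invented) is a single shared package that every unit imports, so the word width and opcode encodings are declared in exactly one place:

        library ieee;
        use ieee.std_logic_1164.all;

        -- Shared declarations for the whole CPU: every entity does
        -- "use work.cpu_pkg.all;" instead of redefining these locally.
        package cpu_pkg is
          subtype word_t is std_logic_vector(15 downto 0);
          constant OP_ADD : std_logic_vector(3 downto 0) := "0000";
          constant OP_NEG : std_logic_vector(3 downto 0) := "0001";
        end package cpu_pkg;

    As for labels: named instantiations and processes (e.g. alu0 : entity work.alu port map (...)) mostly pay off in simulator wave views and synthesis reports, where unlabelled processes otherwise show up under generated names or line numbers.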

    Read the article

  • get average from set of objects in django

    - by dotty
    Hey, I have a simple rating system for a property. You give it a mark out of 5 (stars). The models are defined like this:

        class Property(models.Model):
            # stuff here

        class Rating(models.Model):
            property = models.ForeignKey(Property)
            stars = models.IntegerField()

    What I want to do is get a property, find all its Rating objects, collect them, then get the average 'stars' from them. Any ideas how to do this?
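
    A minimal sketch using Django's aggregation support (the pk is just an example):

        from django.db.models import Avg

        p = Property.objects.get(pk=1)
        stats = Rating.objects.filter(property=p).aggregate(Avg('stars'))
        average = stats['stars__avg']   # None when there are no ratings yet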

    Read the article

  • Small objects allocator

    - by Felics
    Hello, has anybody used the SmallObjectAllocator from Modern C++ Design by Andrei Alexandrescu in a big project? I want to implement this allocator, but I need some opinions about it before using it in my project. I made some tests and it seems very fast, but the tests were made in a small test environment. I want to know how fast it is when there are lots of small objects (like events, smart pointers, etc.) and how much extra memory it uses.
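
    For reference, a minimal sketch of how the Loki library's version of this allocator is typically wired in (assuming Loki's SmallObj.h; the exact template parameters vary between Loki releases):

        #include <loki/SmallObj.h>

        // Deriving from Loki::SmallObject redirects new/delete for this class
        // to the small-object allocator instead of the general-purpose heap.
        class Event : public Loki::SmallObject<>
        {
            int id_;
        public:
            explicit Event(int id) : id_(id) {}
        };

        int main()
        {
            Event* e = new Event(42);  // served by the small-object allocator
            delete e;
        }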

    Read the article

  • What is the largest file size we can transfer through air application?

    - by Naveen kumar
    Hi all, I'm trying to transfer a large file (1 GB+) using UDP (in packets) through an AIR application. I'm transferring a ByteArray by taking chunks of packets from a FileStream, but it gives 'Error #1000: The system is out of memory' at the sender side after a certain number of packets have been sent, and by this time the downloaded file size at the server side is 256 MB. I tried with other files, but after downloading 256 MB the sender gives the same error. Is it because of the file stream size? How can I solve this problem so that I can transfer files of GB size?
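
    A minimal ActionScript 3 sketch (CHUNK_SIZE and the offset handling are invented) of reading one bounded chunk per packet into a single reused ByteArray, so the amount buffered never grows with the file:

        import flash.filesystem.File;
        import flash.filesystem.FileMode;
        import flash.filesystem.FileStream;
        import flash.utils.ByteArray;

        const CHUNK_SIZE:uint = 1400;          // small enough for one UDP datagram

        var stream:FileStream = new FileStream();
        stream.open(file, FileMode.READ);      // 'file' is the File being sent
        var chunk:ByteArray = new ByteArray();

        function readChunk(offset:Number):ByteArray {
            chunk.clear();                     // reuse one buffer for every packet
            stream.position = offset;
            var len:uint = Math.min(CHUNK_SIZE, stream.bytesAvailable);
            stream.readBytes(chunk, 0, len);
            return chunk;
        }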

    Read the article

  • How to access core data objects from Javascript?

    - by Eli
    How can I gain access to Core Data objects from JavaScript/WebKit on Mac OS X? I've made custom subclasses of NSManagedObject for each of my tables, with accessors defined using @property/@dynamic for each attribute, but neither isSelectorExcludedFromWebScript: nor isKeyExcludedFromWebScript: is called for any of them, so JavaScript just stops when I try to access any of the attributes. It returns 'undefined' if I access one as a property (e.g. business.name), and JavaScript execution stops if I access one as a function (e.g. business.name()).
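
    For context, a minimal sketch of the two WebScripting informal-protocol methods the question refers to, as they would normally be implemented on the NSManagedObject subclass (a sketch of the expected setup, not a verified fix for the @dynamic case):

        #import <WebKit/WebKit.h>

        @implementation Business  // an NSManagedObject subclass

        // Return NO to expose a KVC key to JavaScript, YES to hide it.
        + (BOOL)isKeyExcludedFromWebScript:(const char *)name
        {
            return NO;
        }

        // Return NO to expose a selector, YES to hide it.
        + (BOOL)isSelectorExcludedFromWebScript:(SEL)selector
        {
            return NO;
        }

        @end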

    Read the article

  • handling large arrays with array_diff

    - by bigmac
    I have been trying to compare two arrays. Using array_intersect presents no problems. When using array_diff with arrays of ~5,000 values, it works. When I get to ~10,000 values, the script dies when I get to array_diff. Turning on error_reporting did not produce anything. I tried creating my own array_diff function:

        function manual_array_diff($arraya, $arrayb) {
            foreach ($arraya as $keya => $valuea) {
                if (in_array($valuea, $arrayb)) {
                    unset($arraya[$keya]);
                }
            }
            return $arraya;
        }

    source: http://stackoverflow.com/questions/2479963/how-does-array-diff-work

    I would expect it to be less efficient than the official array_diff, but it can handle arrays of ~10,000. Unfortunately, both array_diffs fail when I get to ~15,000. I tried the same code on a different machine and it runs fine, so it's not an issue with the code or PHP. There must be some limit set somewhere on that particular server. Any idea how I can get around that limit, or alter it, or just find out what it is?
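
    A minimal sketch of a lower-overhead variant, assuming the server limit in question is PHP's memory_limit (worth checking first); flipping the second array gives O(1) key lookups instead of in_array's linear scans:

        <?php
        // See what the server actually allows before fighting it.
        echo ini_get('memory_limit'), "\n";

        function fast_array_diff(array $a, array $b) {
            $lookup = array_flip($b);   // value => index, O(1) membership tests
            $result = array();
            foreach ($a as $value) {
                if (!isset($lookup[$value])) {
                    $result[] = $value;
                }
            }
            return $result;
        }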

    Read the article

  • Searching through large data set

    - by calccrypto
    How would I search through a list of ~5 million 128-bit (or 256-bit, depending on how you look at it) strings quickly and find the duplicates (in Python)? I can turn the strings into numbers, but I don't think that's going to help much. Since I haven't learned much information theory: is there anything about this in information theory? And since these are hashes already, there's no point in hashing them again.
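
    A minimal sketch of the usual approach: a single pass with a set, which is O(n) on average and comfortably holds 5 million short strings in memory:

        def find_duplicates(strings):
            seen = set()
            dupes = set()
            for s in strings:
                if s in seen:        # constant-time membership test
                    dupes.add(s)
                else:
                    seen.add(s)
            return dupes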

    Read the article

  • Single Large v/s Multiple Small MySQL tables for storing Options

    - by Prasad
    Hi there, I'm aware of several questions on this forum relating to this, but I'm not talking about splitting tables for the same entity (like user, for example). Suppose I have a huge options table that stores list options like Gender, Marital Status, and many more domain-specific groups with the same structure, which I plan to capture in an OPTIONS table. Another simple option is to have the field set as ENUM, but there are disadvantages to that as well: http://www.brandonsavage.net/why-you-should-replace-enum-with-something-else/

    OPTIONS table:

        option_id   <will be referred to instead of the name>
        name
        value
        group

    Query: select .. from options where group = '15'

    Since this table is expected to be multi-tenant, the number of rows could grow drastically. I believe splitting the tables instead of filtering by the group would be easier to write and faster to execute, or perhaps partitioning by the group or tenant? Please suggest. Thanks
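
    For comparison, a minimal single-table sketch (column and index names are invented) where a composite index turns the common lookup into an index range scan, which often stays fast even as the row count grows:

        CREATE TABLE options (
            option_id  INT UNSIGNED NOT NULL AUTO_INCREMENT,
            tenant_id  INT UNSIGNED NOT NULL,
            group_id   INT UNSIGNED NOT NULL,  -- 'group' is a reserved word in MySQL
            name       VARCHAR(100) NOT NULL,
            value      VARCHAR(255) NOT NULL,
            PRIMARY KEY (option_id),
            KEY idx_tenant_group (tenant_id, group_id)
        ) ENGINE=InnoDB;

        -- The common query then becomes:
        SELECT option_id, name, value
        FROM options
        WHERE tenant_id = 7 AND group_id = 15;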

    Read the article

  • Best language for scripting large scale file management

    - by Dan
    The National Park Service's Natural Sounds Program collects multiple terabytes of data each year measuring soundscapes. In your opinion, what is the best available scripting language for managing massive numbers of files and file types? We would like to easily design and run efficient, user-friendly scripts to search for and retrieve/create copies of files that may be located in different directories according to a single static hierarchy. The OS will most likely be Windows. Thanks!
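
    Python is a common answer here; as a minimal sketch (paths and the match rule are invented), walking a hierarchy and copying matches into a parallel tree takes only the standard library:

        import os
        import shutil

        def collect(src_root, dest_root, matches):
            """Copy every file under src_root accepted by matches() into
            dest_root, preserving the relative directory hierarchy."""
            for dirpath, dirnames, filenames in os.walk(src_root):
                for name in filenames:
                    if matches(name):
                        rel = os.path.relpath(dirpath, src_root)
                        target = os.path.join(dest_root, rel)
                        if not os.path.isdir(target):
                            os.makedirs(target)
                        shutil.copy2(os.path.join(dirpath, name), target)

        # example: gather all WAV recordings
        collect(r'D:\soundscapes', r'D:\wav_copies',
                lambda name: name.lower().endswith('.wav'))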

    Read the article

  • C# System.Diagnostics.Process redirecting Standard Out for large amounts of data

    - by Matt
    I'm running an exe from a .NET app and trying to redirect standard out to a StreamReader. The problem is that when I do myprocess.exe > out.txt, out.txt is close to 14 MB. The command-line version is very fast, but when I run the process from my C# app it is excruciatingly slow, because I believe the default StreamReader flushes every 4096 bytes. Is there a way to change the default stream reader for the Process object?
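
    A minimal sketch of the usual event-driven alternative: let the Process raise a callback per line and write straight to the file, instead of pulling the child's 14 MB through StandardOutput synchronously (whether this beats the 4096-byte buffering on a given machine is something to measure):

        using System.Diagnostics;
        using System.IO;

        class Runner
        {
            static void Main()
            {
                var psi = new ProcessStartInfo("myprocess.exe")
                {
                    RedirectStandardOutput = true,
                    UseShellExecute = false,   // required for redirection
                    CreateNoWindow = true,
                };

                using (var process = Process.Start(psi))
                using (var writer = new StreamWriter("out.txt"))
                {
                    // Fires once per line as the child produces output.
                    process.OutputDataReceived += (s, e) =>
                    {
                        if (e.Data != null) writer.WriteLine(e.Data);
                    };
                    process.BeginOutputReadLine();
                    process.WaitForExit();
                }
            }
        }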

    Read the article

  • Javascript expando objects

    - by xyz
    What are expando objects in JavaScript? For what purpose do we need them? Any complete example would be appreciated. I found one article here: Javascript: The red-headed stepchild of web development. Thanks
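
    For illustration, a tiny example: an expando is simply a property attached at runtime to an object (classically a DOM element) that never declared it:

        // Any object, including DOM elements, accepts new properties at runtime.
        var box = document.createElement('div');
        box.visitCount = 3;             // 'visitCount' is an expando property
        box.describe = function () {    // methods can be expandos too
            return 'visited ' + this.visitCount + ' times';
        };
        alert(box.describe());          // "visited 3 times"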

    Read the article

  • How do you handle large repeated UI elements with JQuery

    - by jpoz
    Howdy, here's the situation: you have a very complex UI element that is repeated in a list. Each one has a menu on it, buttons, subelements it hides and shows, buttons for switching its state, etc. The elements are populated via JSON, so you have to construct the elements and their behavior on the fly. What's the best way to accomplish this with jQuery? Where would you save the reusable template for the DOM structure? How would you attach the behavior: $().live? .livequery? onclick? Manually after every JSON get? I guess I just see a lot of people doing different things. What's your experience with performance? Any insight would be much appreciated. Thanks, JPoz
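
    As one data point, a minimal sketch (selectors and URL invented) of the delegation approach: a single handler on the list container covers every current and future element, so nothing needs rebinding after each JSON load:

        // jQuery 1.4+: one delegated handler instead of per-element bindings
        $('#item-list').delegate('.item .menu-button', 'click', function () {
            $(this).closest('.item').find('.submenu').toggle();
        });

        // One reusable template string, cloned per record
        var template = '<div class="item"><button class="menu-button">Menu</button>'
                     + '<div class="submenu" style="display:none"></div></div>';

        $.getJSON('/items', function (data) {
            $.each(data, function (i, item) {
                $(template).appendTo('#item-list')
                           .find('.submenu').text(item.name);
            });
        });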

    Read the article

  • LINQ to group objects according to timestamp

    - by Benny
    I have a series of objects defined as:

        public class Foo
        {
            public DateTime Time { get; set; }
        }

    Now I want to group the objects (an IEnumerable<Foo>) according to the time, e.g. I want to group them by hour, day, or month. For example (grouped into hours):

        group 1 (13:00-14:00): foo1, foo2, foo3
        group 2 (14:00-15:00): foo4, foo5

    How do I write the LINQ for this? I hope I made myself clear.
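
    A minimal sketch of hour bucketing, where foos is the IEnumerable<Foo> from the question: truncate each timestamp to the top of its hour and group on that key (day and month work the same way with f.Time.Date or an anonymous Year/Month key):

        using System;
        using System.Linq;

        var byHour = foos.GroupBy(f => new DateTime(
            f.Time.Year, f.Time.Month, f.Time.Day, f.Time.Hour, 0, 0));

        foreach (var g in byHour)
        {
            Console.WriteLine("{0:HH:mm}-{1:HH:mm}: {2} items",
                g.Key, g.Key.AddHours(1), g.Count());
        }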

    Read the article

  • Objects With No Behavior

    - by Patrick Donovan
    I've been teaching myself object-oriented programming, and I'm thinking about a situation where I have an object, "Transaction", that has quite a few properties, like account, amount, date, currency, type, etc. I never plan to mutate these data points, and the calculation logic will live in other classes. My question is: is it poor Python design to instantiate thousands of objects just to hold data? I find the data far easier to work with embedded in a class rather than trying to cram it into some combination of data structures.
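
    For scale, a minimal sketch of the two usual low-overhead shapes for behavior-free data holders in Python; both avoid the cost of a full per-instance __dict__:

        from collections import namedtuple

        # Option 1: immutable, tuple-sized records (fits "never mutate")
        Transaction = namedtuple(
            'Transaction', ['account', 'amount', 'date', 'currency', 'type'])

        t = Transaction('ACC-1', 25.00, '2010-05-05', 'USD', 'debit')

        # Option 2: a __slots__ class, still compact but mutable if ever needed
        class TransactionRec(object):
            __slots__ = ('account', 'amount', 'date', 'currency', 'type')
            def __init__(self, account, amount, date, currency, type):
                self.account = account
                self.amount = amount
                self.date = date
                self.currency = currency
                self.type = type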

    Read the article

  • How large is a "buffer" in PostgreSQL

    - by Konrad Garus
    I am using the pg_buffercache module for finding hogs eating up my RAM cache. For example, when I run this query:

        SELECT c.relname, count(*) AS buffers
        FROM pg_buffercache b
        INNER JOIN pg_class c
            ON b.relfilenode = c.relfilenode
            AND b.reldatabase IN (0, (SELECT oid FROM pg_database
                                      WHERE datname = current_database()))
        GROUP BY c.relname
        ORDER BY 2 DESC
        LIMIT 10;

    I discover that sample_table is using 120 buffers. How much is 120 buffers in bytes?
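
    For reference, one buffer is one disk block, which is 8 kB on a standard build (so 120 buffers = 983,040 bytes); the server can confirm its own block size:

        -- block_size is fixed at compile time; 8192 on stock builds
        SHOW block_size;

        -- or compute the answer directly
        SELECT 120 * current_setting('block_size')::int AS bytes;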

    Read the article
