Search Results

Search found 21702 results on 869 pages for 'large objects'.


  • Batch faulting in a to-many relationship for a collection of objects

    - by indragie
    Scenario: Let's say I have an entity called Author that has a to-many relationship called books to the Book entity (inverse relationship author). If I have an existing collection of Author objects, I want to fault in the books relationship for all of them in a single fetch request.

    Code: This is what I've tried so far:

        NSArray *authors = ... // array of `Author` objects
        NSFetchRequest *fetchRequest = [NSFetchRequest fetchRequestWithEntityName:@"Book"];
        fetchRequest.returnsObjectsAsFaults = NO;
        fetchRequest.predicate = [NSPredicate predicateWithFormat:@"author IN %@", authors];

    Executing this fetch request does not result in the books relationship of the objects in the authors array being faulted in (inspected via logging). I've also tried doing the fetch request the other way around:

        NSArray *authors = ... // array of `Author` objects
        NSFetchRequest *fetchRequest = [NSFetchRequest fetchRequestWithEntityName:@"Author"];
        fetchRequest.returnsObjectsAsFaults = NO;
        fetchRequest.predicate = [NSPredicate predicateWithFormat:@"SELF IN %@", authors];
        fetchRequest.relationshipKeyPathsForPrefetching = @[@"books"];

    This doesn't fire the faults either. What's the appropriate way of going about doing this?

    Read the article

  • The fastest way to iterate through a collection of objects

    - by Trev
    Hello all, first some background: I have some research code which performs a Monte Carlo simulation. Essentially, I iterate through a collection of objects, compute a number of vectors from their surface, then for each vector I iterate through the collection of objects again to see if the vector hits another object (similar to ray tracing). The pseudocode looks something like this:

        for each object {
            for a number of vectors {
                do some computations
                for each object {
                    check if vector intersects
                }
            }
        }

    As the number of objects can be quite large and the number of rays is even larger, I thought it would be wise to optimise how I iterate through the collection of objects. I created some test code which tests arrays, lists and vectors, and for my first test cases found that vector iterators were around twice as fast as arrays. However, when I implemented a vector in my code it was somewhat slower than the array I was using before. So I went back to the test code and increased the complexity of the object function each loop was calling (a dummy function equivalent to 'check if vector intersects'), and I found that as the complexity of the function increases, the execution-time gap between arrays and vectors shrinks until eventually the array was quicker. Does anyone know why this occurs? It seems strange that execution time inside the loop should affect the outer loop's run time.

    Read the article

  • Avoid loading unnecessary data from db into objects (web pages)

    - by GmGr
    Really newbie question coming up. Is there a standard (or good) way to deal with not needing all of the information that a database table contains loaded into every associated object? I'm thinking in the context of web pages where you're only going to use the objects to build a single page, rather than an application with longer-lived objects.

    For example, let's say you have an Article table containing id, title, author, date, summary and fullContents fields. You don't need the fullContents to be loaded into the associated objects if you're just showing a page containing a list of articles with their summaries. On the other hand, if you're displaying a specific article you might want every field loaded for that one article and maybe just the titles for the other articles (e.g. for display in a recent articles sidebar).

    Some techniques I can think of:

    1. Don't worry about it, just load everything from the database every time.
    2. Have several different, possibly inherited, classes for each table and create the appropriate one for the situation (e.g. SummaryArticle, FullArticle).
    3. Use one class but set unused properties to null at creation if that field is not needed, and be careful.
    4. Give the objects access to the database so they can load some fields on demand.
    5. Something else?

    All of the above seem to have fairly major disadvantages. I'm fairly new to programming, very new to OOP and totally new to databases, so I might be completely missing the obvious answer here. :)
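
    A minimal sketch of option 2 in Java, assuming a plain JDBC DAO and the Article columns named in the question; the class and method names are illustrative, not from the original post:

        import java.sql.*;
        import java.util.*;

        // Lightweight view: only the columns needed for a listing page.
        class ArticleSummary {
            final long id;
            final String title;
            final String summary;
            ArticleSummary(long id, String title, String summary) {
                this.id = id; this.title = title; this.summary = summary;
            }
        }

        // Full view: adds the expensive column, loaded only for the detail page.
        class ArticleFull extends ArticleSummary {
            final String fullContents;
            ArticleFull(long id, String title, String summary, String fullContents) {
                super(id, title, summary);
                this.fullContents = fullContents;
            }
        }

        class ArticleDao {
            private final Connection conn;
            ArticleDao(Connection conn) { this.conn = conn; }

            // Selects only the summary columns; fullContents never leaves the database.
            List<ArticleSummary> findSummaries() throws SQLException {
                List<ArticleSummary> result = new ArrayList<>();
                try (Statement st = conn.createStatement();
                     ResultSet rs = st.executeQuery("SELECT id, title, summary FROM Article")) {
                    while (rs.next()) {
                        result.add(new ArticleSummary(rs.getLong("id"),
                                rs.getString("title"), rs.getString("summary")));
                    }
                }
                return result;
            }

            // Loads everything, but only for the single article being displayed.
            ArticleFull findById(long id) throws SQLException {
                try (PreparedStatement ps = conn.prepareStatement(
                        "SELECT id, title, summary, fullContents FROM Article WHERE id = ?")) {
                    ps.setLong(1, id);
                    try (ResultSet rs = ps.executeQuery()) {
                        return rs.next()
                                ? new ArticleFull(rs.getLong("id"), rs.getString("title"),
                                                  rs.getString("summary"), rs.getString("fullContents"))
                                : null;
                    }
                }
            }
        }

    The listing page calls findSummaries() and the detail page calls findById(), so each page pulls only the columns it renders.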

    Read the article

  • XML/RDF to Java Objects with XSD

    - by bmucklow
    So here's the scenario... I have an XSD file describing all the objects that I need. I can create the objects in Java using JAXB, no problem. I have an XML/RDF file that I need to parse into those objects. What is the EASIEST way to do this? I have been looking into Jena and have played around with it, but can't see how to easily map the XML/RDF file to the XSD objects that were generated. Here is a snippet of the XSD file as well as the XML/RDF file:

        <?xml version="1.0" encoding="UTF-8"?>
        <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
                   xmlns:a="http://langdale.com.au/2005/Message#"
                   xmlns:sawsdl="http://www.w3.org/ns/sawsdl"
                   targetNamespace="http://iec.ch/TC57/2007/profile#"
                   elementFormDefault="qualified" attributeFormDefault="unqualified"
                   xmlns="http://langdale.com.au/2005/Message#"
                   xmlns:m="http://iec.ch/TC57/2007/profile#">
            <xs:annotation/>
            <xs:element name="Profile" type="m:Profile"/>
            <xs:complexType name="Profile">
                <xs:sequence>
                    <xs:element name="Breaker" type="m:Breaker" minOccurs="0" maxOccurs="unbounded"/>

    And the XML/RDF:

        <!-- CIM XML Output For switch783:(295854688) -->
        <cim:Switch rdf:ID="Switch_295854688">
            <cim:IdentifiedObject.mRID>Switch_295854688</cim:IdentifiedObject.mRID>
            <cim:IdentifiedObject.aliasName>Switch_295854688</cim:IdentifiedObject.aliasName>
            <cim:ConductingEquipment.phases rdf:resource="http://iec.ch/TC57/2009/CIM-schema-cim14#PhaseCode.ABC" />
            <cim:Switch.circuit2>0001406</cim:Switch.circuit2>
            <cim:Equipment.Line rdf:resource="#Line_0001406" />
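
    A minimal sketch of one way to start with the Jena RDF API, assuming the CIM namespace shown above; the import path differs between older (com.hp.hpl.jena) and Apache Jena releases, and the actual copy into the JAXB-generated classes is left as hand-written mapping:

        import org.apache.jena.rdf.model.*;   // com.hp.hpl.jena.rdf.model in older Jena
        import java.io.FileInputStream;

        public class CimRdfReader {
            public static void main(String[] args) throws Exception {
                Model model = ModelFactory.createDefaultModel();
                model.read(new FileInputStream("switches.rdf"), null);   // null base URI

                String cim = "http://iec.ch/TC57/2009/CIM-schema-cim14#";
                Property mrid = model.createProperty(cim, "IdentifiedObject.mRID");

                // Walk every resource that carries an mRID and copy its values by hand
                // into the JAXB-generated objects (mapping code omitted here).
                ResIterator it = model.listSubjectsWithProperty(mrid);
                while (it.hasNext()) {
                    Resource res = it.nextResource();
                    System.out.println("Found " + res.getLocalName()
                            + " with mRID " + res.getProperty(mrid).getString());
                }
            }
        }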

    Read the article

  • Comparing two Objects which implement the same interface for equality / equivalence - Design help

    - by gav
    Hi All, I have an interface and two objects implementing that interface, massively simplified:

        public interface MyInterface {
            public int getId();
            public int getName();
            ...
        }

        public class A implements MyInterface { ... }
        public class B implements MyInterface { ... }

    We are migrating from one implementation to the other, but I need to check that the objects of type B that are generated are equivalent to those of type A. Specifically, I mean that for all of the interface methods an object of type A and an object of type B will return the same value (I'm just checking that my code for generating these objects is correct). How would you go about this?

        Map<String, MyInterface> oldGeneratedObjects = getOldGeneratedObjects();
        Map<String, MyInterface> newGeneratedObjects = getNewGeneratedObjects();
        // TODO: Establish that for each key the values in the two maps return equivalent values.

    I'm looking for good coding practices and style here. I appreciate that I could just iterate through one key set, pull out both objects which should be equivalent, call all the methods and compare, but I'm thinking there may be a cleaner, more extensible way and I'm interested to learn what options there might be. Would it be appropriate / possible / advised to override equals or implement Comparable? Thanks in advance, Gavin
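
    A minimal sketch of one possible approach, keeping the equivalence check outside the classes themselves rather than overriding equals; the helper names are illustrative and the interface is restated as given in the question:

        import java.util.*;

        // Mirrors the simplified interface from the question.
        interface MyInterface {
            int getId();
            int getName();
        }

        final class MyInterfaceEquivalence {
            private MyInterfaceEquivalence() {}

            // True when both objects answer the interface methods identically.
            // (Use Objects.equals(...) instead of == for reference-typed getters.)
            static boolean equivalent(MyInterface a, MyInterface b) {
                return a.getId() == b.getId()
                        && a.getName() == b.getName();   // one clause per interface method
            }

            // Returns the keys whose old and new objects disagree, for easy reporting.
            static Set<String> differingKeys(Map<String, MyInterface> oldObjects,
                                             Map<String, MyInterface> newObjects) {
                Set<String> differing = new TreeSet<>();
                for (Map.Entry<String, MyInterface> entry : oldObjects.entrySet()) {
                    MyInterface candidate = newObjects.get(entry.getKey());
                    if (candidate == null || !equivalent(entry.getValue(), candidate)) {
                        differing.add(entry.getKey());
                    }
                }
                return differing;
            }
        }

    Reporting the set of differing keys, rather than a single boolean, makes the migration check easier to debug when only a handful of objects disagree.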

    Read the article

  • What is the best way to find a process's memory allocations in terms of C# objects

    - by Shantaram
    I have written various C# console-based applications, some of them long running, some not, which can over time have a large memory footprint. When looking at the Windows performance monitor via the task manager, the same question keeps cropping up in my mind: how do I get a breakdown of the number of objects, by type, that are contributing to this footprint, and which of those are f-reachable and which aren't and hence can be collected? On numerous occasions I've performed a code inspection to ensure that I am not unnecessarily holding on to objects longer than required and that I am disposing of objects with the using construct. I have also recently looked at employing the GC.Collect method when I have released a large number of objects (for example held in a collection which has just been cleared). However, I am not so sure that this made much difference, so I threw that code away. I am guessing that there are tools in the Sysinternals suite that can help to resolve these memory questions, but I am not sure which, or how to use them. The alternative would be to pay for a third-party profiling tool such as JetBrains dotTrace; but I need to make sure that I've explored the free options first before going cap in hand to my manager.

    Read the article

  • Reference to the Main Form whilst trying to Serialize objects in C#

    - by Paul Matthews
    I have a button on my main form which calls a method to serialize some objects to disk. I am trying to add these objects to an ArrayList and then serialize them using a BinaryFormatter and a FileStream.

        public void SerializeGAToDisk(string GenAlgName)
        {
            // Let's make a list that includes all the objects we
            // need to store a GA instance
            ArrayList GAContents = new ArrayList();
            GAContents.Add(GenAlgInstances[GenAlgName]);   // Structure and info for a GA
            GAContents.Add(RunningGAs[GenAlgName]);        // There may be several running GA's

            using (FileStream fStream = new FileStream(GenAlgName + ".ga",
                FileMode.Create, FileAccess.Write, FileShare.None))
            {
                BinaryFormatter binFormat = new BinaryFormatter();
                binFormat.Serialize(fStream, GAContents);
            }
        }

    When running the above code I get the following exception:

        System.Runtime.Serialization.SerializationException was unhandled
        Message=Type 'WindowsFormsApplication1.Form1' in Assembly 'GeneticAlgApp, Version=1.0.0.0,
        Culture=neutral, PublicKeyToken=null' is not marked as serializable.

    So that means that somewhere in the objects I'm trying to save there must be a reference to the main form. The only possible references I can see are 3 delegates which all point to methods in the main form code. Do delegates get serialized as well? I can't seem to apply the [NonSerialized] attribute to them. Is there anything else I might be missing? Even better, is there a quick method to find the reference(s) that are causing the problem?

    Read the article

  • Entity Framework POCO objects

    - by Dan
    I'm struggling with understanding Entity Framework and POCO objects. Here's what I'm trying to achieve:

    1) Separate the DAL from the business layer by having my business layer use an interface to my DAL. Maybe use Unity to create my context.
    2) Use Entity Framework inside my DAL.

    I have a domain model with objects that reside in my business layer. I also have a database full of tables that doesn't really represent my domain model. I set up Entity Framework and generated POCO objects by using the ADO.NET POCO Generator extension. This gave me an object for each table in my database. Now I want to be able to say context.GetAll<User>(); and have it return a list of my User objects. The User object is in my business layer. Is that possible? Does that make sense, or am I totally off and should start over? I'm guessing I need to use the repository pattern to achieve this, but I'm not sure. Can anyone help?

    Read the article

  • how do I best create a set of list classes to match my business objects

    - by ken-forslund
    I'm a bit fuzzy on the best way to solve the problem of needing a list for each of my business objects that implements some overridden functions. Here's the setup: I have a baseObject that sets up the database and has its proper Dispose() method. All my other business objects inherit from it and, if necessary, override Dispose(). Some of these classes also contain arrays (lists) of other objects, so I create a class that holds a List of these. I'm aware I could just use the generic List, but that doesn't let me add extra features like a Dispose() that loops through and cleans up.

    So if I had objects called User, Project and Schedule, I would create UserList, ProjectList and ScheduleList. In the past, I have simply had these inherit from List<> with the appropriate class named, and then written the pile of common functions I wanted each to have, like Dispose(). This meant I would verify by hand that each of these list classes had the same set of methods. Some of these classes had pretty simple versions of these methods that could have been inherited from a base list class. I could write an interface to force me to ensure that each of my list classes has the same functions, but interfaces don't let me write common base functions that SOME of the lists might override. I had tried to write a baseObjectList that inherited from List, and then make my other lists inherit from that, but there are issues with that (which is really why I came here); one of them was trying to use the Find() method with a predicate.

    I've simplified the problem down to just a discussion of the Dispose() method on the list that loops through and disposes its contents, but in reality I have several other common functions that I want all my lists to have. What's the best practice to solve this organizational matter?

    Read the article

  • WCF data services - Limiting related objects returned based on criteria

    - by Mike Morley
    I have an object graph consisting of a base employee object and a set of related message objects. I am able to return the employee objects based on search criteria on the employee properties (e.g. team). However, if I expand on the messages, I get the full collection of messages back. I would like to be able to either take the top n messages (i.e. restrict to the 10 most recent) or, ideally, use a date range on the message objects to limit how many are brought back. So far I have not been able to figure out a way of doing this: I get an error if I attempt to filter on properties of the message (&$filter=employee/message/StartDate gives the error "No property 'StartDate' exists in type 'System.Data.Objects.DataClasses.EntityCollection`1"). Attempting to use Top on the related message objects doesn't work either. I have also tried using a WebGet extension that takes a string list of employee IDs. That works until the list gets too long, and then fails due to the URL getting too long (it might be possible to set up a paging mechanism on this approach)... Unfortunately the UI control I am using requires the data to be in a fairly specific hierarchical shape, so I can't easily come at this from starting on the message side and working backwards. Outside of making multiple calls, does anyone know of a method to accomplish this with WCF Data Services? Thanks! M.

    Read the article

  • Inversion of control domain objects construction problem

    - by Andrey
    Hello! As I understand it, an IoC container is helpful for the creation of application-level objects like services and factories, but domain-level objects should be created manually. Spring's manual tells us: "Typically one does not configure fine-grained domain objects in the container, because it is usually the responsibility of DAOs and business logic to create/load domain objects."

    Well. But what if my "fine-grained" domain object depends on some application-level object? For example, I have a UserViewer(User user, UserConstants constants) class. Here user is a domain object which cannot be injected, but UserViewer also needs UserConstants, which is a high-level object injected by the IoC container. I want to inject UserConstants from the IoC container, but I also need a transient runtime parameter User here. What is wrong with the design? Thanks in advance!

    UPDATE: It seems I was not precise enough with my question. What I really need is an example of how to do this: create an instance of class UserViewer(User user, UserService service), where user is passed as a parameter and service is injected from IoC. If I inject UserViewer viewer, then how do I pass user to it? If I create UserViewer viewer manually, then how do I pass service to it?
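
    A minimal sketch of the usual factory answer, assuming plain constructor injection; the factory is the only container-managed bean, and any names beyond those in the question are illustrative:

        // Hypothetical domain and application-level types from the question.
        class User { }
        class UserService { }

        class UserViewer {
            private final User user;           // runtime parameter, never injected
            private final UserService service; // container-managed dependency

            UserViewer(User user, UserService service) {
                this.user = user;
                this.service = service;
            }
        }

        // Holds the injected dependency and accepts the runtime parameter,
        // so business code never has to touch the container.
        class UserViewerFactory {
            private final UserService service;

            UserViewerFactory(UserService service) {   // injected by the container
                this.service = service;
            }

            UserViewer create(User user) {             // called at runtime with a domain object
                return new UserViewer(user, service);
            }
        }

    Business code then asks the injected UserViewerFactory for a viewer whenever it has a User in hand, which keeps the domain object out of the container configuration.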

    Read the article

  • Avoiding duplicate objects in Java deserialization

    - by YGL
    I have two lists (list1 and list2) containing references to some objects, where some of the list entries may point to the same object. Then, for various reasons, I am serializing these lists to two separate files. Finally, when I deserialize the lists, I would like to ensure that I am not re-creating more objects than needed. In other words, it should still be possible for some entry of list1 to point to the same object as some entry in list2.

        MyObject obj = new MyObject();
        List<MyObject> list1 = new ArrayList<MyObject>();
        List<MyObject> list2 = new ArrayList<MyObject>();
        list1.add(obj);
        list2.add(obj);

        // serialize to file1.ser
        ObjectOutputStream oos = new ObjectOutputStream(...);
        oos.writeObject(list1);
        oos.close();

        // serialize to file2.ser
        oos = new ObjectOutputStream(...);
        oos.writeObject(list2);
        oos.close();

    I think that sections 3.4 and A.2 of the spec say that deserialization strictly results in the creation of new objects, but I'm not sure. If so, some possible solutions might involve:

    1. Implementing equals() and hashCode() and checking references manually.
    2. Creating a "container class" to hold everything and then serializing the container class.

    Is there an easy way to ensure that objects are not duplicated upon deserialization? Thanks.
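
    A minimal sketch of the container-class idea, assuming both lists can travel through one ObjectOutputStream: Java serialization preserves shared references within a single stream, but not across separate streams/files, so putting both lists in one serialized object keeps the identity intact:

        import java.io.*;
        import java.util.*;

        class MyObject implements Serializable { }

        // Container holding both lists, so one writeObject() call covers everything
        // and shared references are restored as shared on the way back in.
        class Snapshot implements Serializable {
            List<MyObject> list1 = new ArrayList<>();
            List<MyObject> list2 = new ArrayList<>();
        }

        public class SharedReferenceDemo {
            public static void main(String[] args) throws Exception {
                Snapshot out = new Snapshot();
                MyObject obj = new MyObject();
                out.list1.add(obj);
                out.list2.add(obj);   // same instance in both lists

                try (ObjectOutputStream oos = new ObjectOutputStream(
                        new FileOutputStream("snapshot.ser"))) {
                    oos.writeObject(out);
                }

                Snapshot in;
                try (ObjectInputStream ois = new ObjectInputStream(
                        new FileInputStream("snapshot.ser"))) {
                    in = (Snapshot) ois.readObject();
                }

                // Prints true: identity survives because everything travelled in one stream.
                System.out.println(in.list1.get(0) == in.list2.get(0));
            }
        }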

    Read the article

  • Where to place interactive objects in JavaScript?

    - by Chris
    I'm creating a web-based application (i.e. JavaScript with jQuery and lots of SVG) where the user interacts with "objects" on the screen (think of DIVs that can be dragged around, resized and connected by arrows - like a vector drawing program or a graphical programming language). As each "object" contains individual information but always belongs to a "class" of elements, it's obvious that this application should be programmed using an OOP approach. But where do I store the "objects" best? Should I create a global structure ("registry") with all (JS native) objects and tell them "draw yourself on the DOM"? Or should I avoid such a structure and think of the (relevant) DOM nodes as my objects and attach the relevant data as .data() to them? The first approach is very MVC, but I guess the attachment of all the event handlers will be non-trivial. The second approach will handle the events in a trivial way and it doesn't create a duplicate structure, but I guess the usual OO stuff (like methods) will be more complex. What do you recommend? I think the answer will be JavaScript and SVG specific, as "usual" programming languages don't have such a highly organized output "canvas".

    Read the article

  • How to synchronize access to many objects

    - by vividos
    I have a thread pool with some threads (e.g. as many as the number of cores) that work on many objects, say thousands of objects. Normally I would give each object a mutex to protect access to its internals, lock it when I'm doing work, then release it. When two threads try to access the same object, one of the threads has to wait.

    Now I want to save some resources and be scalable, as there may be thousands of objects and still only a handful of threads. I'm thinking about a class design where the thread has some sort of mutex or lock object, and assigns the lock to the object when the object should be accessed. This would save resources, as I only have as many lock objects as I have threads.

    Now comes the programming part, where I want to transfer this design into code, but don't know quite where to start. I'm programming in C++ and want to use Boost classes where possible, but self-written classes that handle these special requirements are OK. How would I implement this?

    My first idea was to have a boost::mutex object per thread, and each object has a boost::shared_ptr that is initially unset (or NULL). Now when I want to access the object, I lock it by creating a scoped_lock object and assigning it to the shared_ptr. When the shared_ptr is already set, I wait on the present lock. This idea sounds like a heap full of race conditions, so I sort of abandoned it. Is there another way to accomplish this design? A completely different way?
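
    One common pattern for this trade-off is lock striping: a small fixed pool of locks shared across many objects, with each object mapped to a lock by a hash of its identity, so memory cost grows with the stripe count rather than the object count. The question is about C++/Boost; the idea is sketched here in Java purely for illustration, with names of my own choosing:

        import java.util.concurrent.locks.ReentrantLock;

        // A fixed pool of locks shared by an arbitrary number of objects.
        final class StripedLocks {
            private final ReentrantLock[] stripes;

            StripedLocks(int stripeCount) {
                stripes = new ReentrantLock[stripeCount];
                for (int i = 0; i < stripeCount; i++) {
                    stripes[i] = new ReentrantLock();
                }
            }

            // Maps an object to its stripe by identity hash.
            private ReentrantLock lockFor(Object o) {
                int index = (System.identityHashCode(o) & 0x7fffffff) % stripes.length;
                return stripes[index];
            }

            // Runs the critical section under the lock that guards this object.
            void withLock(Object o, Runnable criticalSection) {
                ReentrantLock lock = lockFor(o);
                lock.lock();
                try {
                    criticalSection.run();
                } finally {
                    lock.unlock();
                }
            }
        }

    Two objects that happen to share a stripe briefly serialize each other, which is the price of keeping the lock count constant; the same shape can be built in C++ from a fixed array of boost::mutex indexed by a hash of the object's address.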

    Read the article

  • UIDs for data objects in MySQL

    - by Callash
    Hi there, I am using C++ and MySQL. I have data objects I want to persist to the database. They need to have a unique ID for identification purposes. The question is, how to get this unique ID? Here is what I came up with:

    1) Use the auto_increment feature of MySQL. But how do I get the ID then? I am aware that MySQL offers the "SELECT LAST_INSERT_ID()" feature, but that would be a race condition, as two objects could be inserted quite quickly after each other. Also, there is nothing else that makes the objects discernible: two objects could be created at pretty much the same time with exactly the same data.
    2) Generate the UID on the C++ side. No dice, either. There are multiple programs that will write to and read from the database, which do not know of each other.
    3) Insert with MAX(uid)+1 as the uid value. But then I basically have the same problem as in 1), because we still have the race condition.

    Now I am stumped. I am assuming that this is a problem other people have run into as well, but so far I did not find any answers. Any ideas?

    Read the article

  • Very large database, very small portion most being retrieved in real time

    - by mingyeow
    Hi folks, I have an interesting database problem. I have a DB that is 150GB in size, and my memory buffer is 8GB. Most of my data is rarely retrieved, or mainly retrieved by backend processes. I would very much prefer to keep it around, because some features require it. Some of it (namely some tables, and some identifiable parts of certain tables) is used very often in a user-facing manner. How can I make sure that the latter is always kept in memory? (There is more than enough space for it.) More info: we are on Ruby on Rails, the database is MySQL, and our tables are stored using InnoDB. We are sharding the data across 2 partitions. Because we are sharding it, we store most of our data using JSON blobs, while indexing only the primary keys.

    Read the article

  • rsync from OS X to Ubuntu failing for large (>15GB) files

    - by johnny_bgoode
    I'm trying to rsync a 15 GB file from my OS X box to a box running Ubuntu 10.04 server. rsync transfers ~300-700Mb and then closes the connection with the following error:

        Read from remote host my.host.name: Connection reset by peer
        rsync: writefd_unbuffered failed to write 4 bytes [sender]: Broken pipe (32)
        rsync: connection unexpectedly closed (397214 bytes received so far) [sender]
        rsync error: unexplained error (code 255) at /SourceCache/rsync/rsync-40/rsync/io.c(452) [sender=2.6.9]

    The exact command I am executing is:

        rsync --progress --archive --inplace my.15GB.file.tgz my.host.name:~/

    I am sure that there is enough free space on the Ubuntu box. Any ideas what could be causing the connection to drop?

    Read the article

  • Large virtual memory size of ElasticSearch JVM

    - by wfaulk
    I am running a JVM to support ElasticSearch. I am still working on sizing and tuning, so I left the JVM's max heap size at ElasticSearch's default of 1GB. After putting data in the database, I find that the JVM's process is showing 50GB in SIZE in top output. It appears that this is actually causing performance problems on the system; other processes are having trouble allocating memory. In asking the ElasticSearch community, they suggested that it's "just" filesystem caching. In my experience, filesystem caching doesn't show up as memory used by a particular process. Of course, they may have been talking about something other than the OS's filesystem cache, maybe something that the JVM or ElasticSearch itself is doing on top of the OS. But they also said that it would be released if needed, and that didn't seem to be happening. So can anyone help me figure out how to tune the JVM, or maybe ElasticSearch itself, to not use so much RAM. System is Solaris 10 x86 with 72GB RAM. JVM is "Java(TM) SE Runtime Environment (build 1.7.0_45-b18)".

    Read the article

  • amazon dynamoDB or MySQL for storing large arrays inside each row

    - by Logan Besecker
    I am trying to decide which database I should use for an application I'm making. I was leaning toward DynamoDB because of its scalability, but then I read in the documentation that "there is a limit of 64 KB on the item size", although it looks like MySQL has a similar restriction, documented here. This application will be storing a lot of data in two arrays, which could contain upwards of 10,000-100,000 strings each. I estimate that these strings will each be somewhere around 20 characters long, so each element of the array will be around 40 bytes and each array could be around 4MB. Given this predicament, what database on Amazon AWS would you use, or how would you get around the limit on size per row? Thanks in advance, Logan Besecker

    Read the article

  • Computer specs for a large database

    - by SpeksETC
    What sort of computer specs (CPU, RAM, disk speed) should I use for running queries on a database of 200+ million records? The queries are for a research project, so there is only one "user" and only one query will be running at a time. I tried it on my own laptop with SQL Server with an i3 processor, 2GB RAM and a 5400 RPM disk, and a simple query didn't finish even after 8+ hours. I have the option to connect an SSD via eSATA and upgrade to 4GB RAM, but I'm not sure if this will be enough... Thanks!

    Edit: The database is about 25 GB and the indexes are not set up properly. When I tried to add an index, I let it run for about 8 hours and it still hadn't finished, so I gave up. Should I have more patience :)? In general, the queries will run once in a while and it's OK even if one takes a couple of hours to complete... Also, the queries will produce probably about 10 million records which I need to process using Stata/Matlab, and I'm concerned that my current laptop is not strong enough, but I'm unsure of the bottleneck...

    Read the article

  • how to split a very large database on sql server

    - by ken jackson
    I have a 90 GB SQL Server database that I want to make more manageable. It stores stock data for 50+ different stocks from 2009 and 2010, and each stock is a separate table. Some tables have hundreds of millions of rows, and others have just a few million. What I want to do is somehow split the database so that I don't have a single database file that is 90 GB. I want to be able to somehow magically split all the tables so that I can back up the 2009 data once and not have to keep including it every time I back up the entire database; however, I would like the 2009 data to be included whenever I run a query. Is partitioning the database the way to go? Will it do the above for me, or will I need some other solution? I researched partitioning, but I wasn't sure if that would solve all my problems: I wasn't able to find anything that would tell me whether it would migrate pre-existing data or whether it only works for newly inserted data. Any help or pointers would be much appreciated. Thanks in advance, Ken

    Read the article

  • trouble backing up large mysql database

    - by Patrick
    I have a WordPress MU database with something like 10,000+ tables for the various users' blogs. I need to upgrade WordPress MU to the newest version, but want to back up the DB beforehand. phpMyAdmin fails to even load the page when I click export. I've tried going onto the server (Windows) and using the DOS command line:

        mysqldump -u USERNAME -p PASSWORD > BACKUP.sql

    but it hangs for a minute and gives me the error:

        error 23: out of resources when opening file '.\USERNAME\wp_1037_links.MYD' (Errorcode: 24) when using LOCK Tables

    What am I doing wrong, or what should I be doing? Is phpMyAdmin right for something this size? Is there a better way of doing this than the two methods I tried? Note that this is not my site, so any suggestions as to the setup of the DB I'll have to run by the owner. I'm just here for WP-related crap; this is kind of out of scope for what I was brought on to do.

    Read the article

  • Large mailbox in Outlook 2007 takes ages to index

    - by Reado
    In our company each user has a single mailbox and all email they have ever sent/received is in that mailbox. We don't do archiving to PST and we thought that was the way forward. The problem we now have is if someone switches to another PC for the day and opens Outlook, it has to download all emails first to that PC (cached mode) but even then when they try to search for something, Outlook says items are still being indexed. One user has over 100,000 items to be indexed and it's been saying that for about a week! As a temporary workaround I have turned off instant searching which allows them to search for anything, but it takes time to filter through, and Outlook doesn't exactly indicate if it's still searching for something, so in most cases the user thinks the search isn't working when really it is and it's just taking time to populate the results. I need a solution that allows the mailbox to be indexed really quickly if the user has to login to another PC. Are we best using Online Mode instead of Cached Mode or is there another way around this? Thanks in advance.

    Read the article

  • Any large USB sticks with integrated card readers?

    - by Al
    I have one of Kingston's DataTraveller Micro Reader USB sticks, a fantastic memory stick with an integrated micro SD and M2 card reader. However, I've gradually filled it to the brim and am looking for a larger stick. Unfortunately, Kingston don't make them any bigger than the 4GB one that I currently have and I was hoping to go to 16GB now that they've come down in price. Does anyone know if any manufacturers make something similar: a 16GB stick with a micro SD card reader integrated (I'm not bothered about the M2 reader).

    Read the article
