Search Results

Search found 12398 results on 496 pages for 'in memory oltp'.

Page 449/496 | < Previous Page | 445 446 447 448 449 450 451 452 453 454 455 456  | Next Page >

  • Creation of model in core data on the fly

    - by user1740045
    How can we create a Core Data model on the fly, i.e. get the schema of a database from somewhere and then build a Core Data object graph from it? Question: Yes, that's fine, and I agree with all the advantages. But can anybody tell me, practically, what the benefit is of integrating Core Data into a project instead of using SQL directly?
    1. No need to write SQL boilerplate code [but you need to learn the Core Data model (steep curve)].
    2. We can undo and redo changes [but practically, who needs it?].
    3. We can migrate to another schema [SQLite can do that as well; just add another field to the table].
    4. For, say, an aggregation on some field of a table, in Core Data we need to loop through Core Data objects, whereas in SQLite we need to write the SQLite boilerplate code once and then a basic aggregation SQL query, which is easy to write; only the length of the code increases. But in the case of Core Data there is a lot to learn.
    So apart from reducing the length of the code, does it actually add value to the project, in terms of memory efficiency, performance, etc.? PS: If anybody has actually worked with Core Data (model creation on the fly), please share some pointers. Thanks!

    Read the article

  • C/C++ Bit Array or Bit Vector

    - by MovieYoda
    Hi, I am learning C/C++ programming and have encountered the usage of 'bit arrays' or 'bit vectors'. I am not able to understand their purpose. Here are my doubts: Are they used as boolean flags? Can one use int arrays instead (more memory, of course, but..)? What is this concept of bit masking? If bit masking is just simple bit operations to get an appropriate flag, how does one program with them? Is it not difficult to work out in your head what the flag would be, as opposed to decimal numbers? I am looking for applications so that I can understand better. For example: You are given a file containing integers in the range 1 to 1 million. There are some duplicates, and hence some numbers are missing. Find the fastest way of finding the missing numbers. For the above question, I have read solutions telling me to use bit arrays. How would one store each integer in a bit?
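    A minimal C++ sketch of the bit-array idea for that missing-numbers exercise (the upper bound and the use of standard input are assumptions for illustration; std::vector<bool> packs one element per bit):

        #include <cstdio>
        #include <vector>

        int main() {
            const int kMax = 1000000;                  // assumed range bound from the question
            std::vector<bool> seen(kMax + 1, false);   // ~125 KB of bits instead of ~4 MB of ints

            int n;
            while (std::scanf("%d", &n) == 1)          // read the integers
                if (n >= 1 && n <= kMax)
                    seen[n] = true;                    // flip the bit for each value seen

            for (int i = 1; i <= kMax; ++i)
                if (!seen[i])
                    std::printf("%d is missing\n", i); // bits still 0 are the gaps
            return 0;
        }

    The same marking trick is what bit masking does by hand: seen[n] = true corresponds to buffer[n / 8] |= 1 << (n % 8) on a plain unsigned char array.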

    Read the article

  • Boost shared_ptr use_count function

    - by photo_tom
    My application problem is the following: I have a large structure foo. Because these are large, and for memory management reasons, we do not wish to delete them when processing of the data is complete. We are storing them in std::vector<boost::shared_ptr<foo>>. My question is related to knowing when all processing is complete. The first decision is that we do not want any of the other application code to mark a "complete" flag in the structure, because there are multiple execution paths in the program and we cannot predict which one is the last. So in our implementation, once processing is complete, we destroy all copies of the boost::shared_ptr<foo> except for the one in the vector. This drops the reference count in the shared_ptr to 1. Is it practical to call use_count() on the shared_ptr and check whether it equals 1 to know when all other parts of my app are done with the data? One additional reason I'm asking is that the Boost documentation for shared_ptr recommends not using use_count in production code.
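    A minimal sketch of that check (the structure contents are placeholders; note that use_count() is only a snapshot, so in heavily multithreaded code the value can already be stale by the time it is read):

        #include <boost/shared_ptr.hpp>
        #include <cstdio>
        #include <vector>

        struct foo { int payload[1024]; };   // stand-in for the large structure

        int main() {
            std::vector<boost::shared_ptr<foo> > pool;
            pool.push_back(boost::shared_ptr<foo>(new foo));

            boost::shared_ptr<foo> worker = pool[0];             // an execution path takes a copy
            std::printf("count = %ld\n", pool[0].use_count());   // 2 while the worker holds it

            worker.reset();                                      // the worker finishes, drops its copy
            if (pool[0].use_count() == 1)                        // equivalent to pool[0].unique()
                std::printf("all other users are done\n");
            return 0;
        }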

    Read the article

  • How can I synchronize one set of data with another?

    - by RenderIn
    I have an old database and a new database. The old records were converted to the new database recently. All our old applications continue to point to the old database, but the new applications point to the new database. Currently the old database is the only one being updated, so throughout the day the new database becomes out of sync. It is acceptable for the new database to be out of sync for a day, so until all our applications are pointed to the new database I just need to write a nightly cron job that will bring it up to date. I do not want to purge the new database and run the complete conversion script each night, as that would reduce uptime and would create a mess in our auditing of that table. I'm thinking about selecting all the data from the old database, converting it to the new database structure in memory, and then checking for the existence of each record before inserting it in the new database. After that's done, I'd select everything from the new database and check if it exists in the old one, and if not delete it. Is this the simplest way to do this?

    Read the article

  • Graph search problem with route restrictions

    - by Darcara
    I want to calculate the most profitable route, and I think this is a type of traveling salesman problem. I have a set of nodes that I can visit, a function that calculates the cost of traveling between nodes, and points for reaching the nodes. The goal is to reach a fixed, known score while minimizing the cost. The costs and rewards are not fixed and depend on the nodes visited before. The starting node is fixed. There are some restrictions on how nodes can be visited. Some simplified examples:
    - Node B can only be visited after A.
    - After node C has been visited, D or E can be visited. Visiting at least one is required; visiting both is permissible.
    - Z can only be visited after at least 5 other nodes have been visited.
    - Once 50 nodes have been visited, the nodes A-M no longer reward points.
    - Certain nodes can (and probably must) be visited multiple times.
    Currently I can think of only two ways to solve this: a) genetic algorithms, with the fitness function calculating the cost/benefit of the generated route; b) Dijkstra search through the graph, since the starting node is fixed, although the large number of nodes will probably make that infeasible memory-wise. Are there any other ways to determine the best route through the graph? It doesn't need to be perfect; an approximate path is fine, as long as its error is acceptable. Would TSP solvers be an option here?

    Read the article

  • How to optimize my game calendar in C#?

    - by MartyIX
    Hi, I've implemented a simple calendar (message system) for my game, which consists of:
    1) List<Event> calendar;
    2) the Event class:

        public class Event {
            /// <summary>When to process the event</summary>
            public Int64 when;
            /// <summary>Which object should process the event</summary>
            public GameObject who;
            /// <summary>Type of event</summary>
            public EventType what;
            public int posX;
            public int posY;
            public int EventID;
        }

    3) calendar.Add(new Event(...))
    The problem with this code is that even though the number of messages per second is not excessive, it still allocates new memory that the GC will eventually have to collect. The garbage collection may lead to a slight lag in my game, and therefore I'd like to optimize my code. My considerations: change the Event class into a structure - but the structure is not entirely small, and it takes some time to copy it wherever I need it; or reuse Event objects somehow (keep a queue of used events, and when a new event is needed just take one from this queue). Does anybody have another idea for how to solve the problem? Thanks for any suggestions!

    Read the article

  • Why doesn't this program print?

    - by Alex
    What I'm trying to do is print my two-dimensional array, but I'm lost. The first function runs perfectly; the problem is the second one, or maybe the way I'm passing the array to the print function.

        #include <stdio.h>
        #include <stdlib.h>
        #define ROW 2
        #define COL 2

        //Memory allocation and values input
        void func(int **arr)
        {
            int i, j;
            arr = (int**)calloc(ROW, sizeof(int*));
            for(i=0; i < ROW; i++)
                arr[i] = (int*)calloc(COL, sizeof(int));
            printf("Input: \n");
            for(i=0; i<ROW; i++)
                for(j=0; j<COL; j++)
                    scanf_s("%d", &arr[i][j]);
        }

        //This is where the problem begins or maybe it's in the main
        void print(int **arr)
        {
            int i, j;
            for(i=0; i<ROW; i++)
            {
                for(j=0; j<COL; j++)
                    printf("%5d", arr[i][j]);
                printf("\n");
            }
        }

        void main()
        {
            int *arr;
            func(&arr);
            print(&arr); //maybe I'm not passing the arr right ?
        }

    Read the article

  • What are proven, scalable data persistence solutions for consumer profiles?

    - by Hubbard
    Consumer profiles with analytical scores [ConsumerID, 1..n demographic variables, 1..n analytical scores, e.g. "likely to churn", "likely to buy an item worth $100", etc.] have to be fast to query if they are to be used for customizing web sites, consumer communications, etc. Well, if you have a large number of consumers and large profiles with a huge set of variables (as profiles describing human behaviour are likely to be), you are in trouble. If you really have a physical relational database to which you send a query, and a physical disk then starts to rotate someplace to give you an individual profile or a set of profiles, the profile consumer (a web site customizing a page, a recommendation engine making a recommendation) has died of boredom before getting any observable results. There is the possibility of keeping the profiles in memory, which would of course increase performance hugely. What are the most proven solutions for fast-response, scalable consumer profile storage? Is there a shootout of these somewhere?

    Read the article

  • iPhone producing strange results on 'if' statement

    - by Rob
    I have a UIPicker where the user inputs a specified time (i.e. 13:00, 13:01, 13:02, etc.), which determines their score. Once they hit the button, an alert comes up with the score that is determined through this if-else statement. Everything seems to work great MOST of the time, but I am getting some erratic behavior. This is the code:

        // Gets my value from the UIPicker and then converts it into a format
        // that can be used in the 'if' statement.
        NSInteger runRow = [runTimePicker selectedRowInComponent:2];
        NSString *runSelected = [runTimePickerData objectAtIndex:runRow];
        NSString *runSelectedFixed = [runSelected stringByReplacingOccurrencesOfString:@":" withString:@"."];

        // The actual 'if' statement.
        if ([runSelectedFixed floatValue] <= 13.00) { runScore = 100; }
        else if ([runSelectedFixed floatValue] <= 13.06) { runScore = 99; }
        else if ([runSelectedFixed floatValue] <= 13.12) { runScore = 97; }
        else if ([runSelectedFixed floatValue] <= 13.18) { runScore = 96; }
        else if ([runSelectedFixed floatValue] <= 13.24) { runScore = 94; }
        else if ([runSelectedFixed floatValue] <= 13.30) { runScore = 93; }
        else if ([runSelectedFixed floatValue] <= 13.36) { runScore = 92; }
        else if ([runSelectedFixed floatValue] <= 13.42) { runScore = 90; }
        else if ([runSelectedFixed floatValue] <= 13.48) { runScore = 89; }
        else if ([runSelectedFixed floatValue] <= 13.54) { runScore = 88; }

    Now, when I test the program, I get the expected result when I choose '13:00', which is '100'. I also get the expected result of '99' when I choose the times between '13:01' and '13:05'. BUT, when I choose '13:06' it gives me a score of '97'. I also get a score of '97' on '13:07' through '13:12', which is the desired result. Why would I get a '97' right on '13:12' but not a '99' right on '13:06'? Could this be a memory leak or something?
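    The boundary behaviour comes from single-precision floating point rather than a memory leak: floatValue returns a float, 13.06 has no exact binary representation, and the nearest float happens to sit just above the double literal 13.06 used in the comparison. A few lines of C++ reproduce the effect (a sketch; the Objective-C string handling is left out):

        #include <cstdio>

        int main() {
            float parsed = 13.06f;                  // what a floatValue-style parse of "13.06" yields
            std::printf("%.8f\n", parsed);          // 13.06000042 on IEEE 754 floats
            std::printf("%d\n", parsed <= 13.06);   // 0: the float lies slightly above the double 13.06
            std::printf("%d\n", parsed <= 13.12);   // 1: so the 97 branch is the first one that matches
            return 0;
        }

    Comparing integer values instead (e.g. minutes and seconds, or total seconds) avoids the boundary surprise entirely.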

    Read the article

  • What does the destructor do silently?

    - by zhanwu
    Considering the following code, in which it looks like the destructor doesn't do any real work: valgrind showed me clearly that there is a memory leak without the destructor. Can anybody explain to me what the destructor does in this case?

        #include <iostream>
        using namespace std;

        class A {
        private:
            int value;
            A* follower;
        public:
            A(int);
            ~A();
            void insert(int);
        };

        A::A(int n) { value = n; follower = NULL; }

        A::~A() {
            if (follower != NULL) delete follower;
            cout << "do nothing!" << endl;
        }

        void A::insert(int n) {
            if (this->follower == NULL) {
                A* f = new A(n);
                this->follower = f;
            } else
                this->follower->insert(n);
        }

        int main(int argc, char* argv[]) {
            A* objectA = new A(1);
            int i;
            for (i = 0; i < 10; i++) objectA->insert(i);
            delete objectA;
        }

    Read the article

  • How to know the type of an object in a list?

    - by nacho4d
    Hi, I want to know the type of object I have in my list, so I wrote this:

        void **list;   // list of references
        list = new void * [2];
        Foo foo = Foo();
        const char *not_table [] = {"tf", "ft", 0 };
        list[0] = &foo;
        list[1] = not_table;
        if (dynamic_cast<LogicProcessor*>(list[0])) {   // ERROR here ;(
            printf("Foo was found\n");
        }
        if (dynamic_cast<char*> (list[0])) {            // ERROR here ;(
            printf("char was found\n");
        }

    but I get:
        error: cannot dynamic_cast '* list' (of type 'void*') to type 'class Foo*' (source is not a pointer to class)
        error: cannot dynamic_cast '* list' (of type 'void*') to type 'char*' (target is not pointer or reference to class)
    Why is this? What am I doing wrong here? Is dynamic_cast what I should use here? Thanks in advance.
    EDIT: I know the above code is much like plain C and surely sucks from the C++ point of view, but it's just that I have the following situation and was trying something out before really implementing it: I have two arrays of length n, but the two arrays never have an object at the same index. Hence, either array1[i] != NULL or array2[i] != NULL. This is obviously a waste of memory, so I thought everything would be solved if I could keep both kinds of objects in a single array of length n. I am looking for something like Cocoa's (Objective-C) NSArray, where you don't care about the type of the object being put in. Not knowing the type of the object is not a problem, since you can use another method to get the class of a given object later. Is there something like that in C++ (preferably without third-party C++ libraries)? Thanks in advance ;)
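    dynamic_cast only works between pointers to polymorphic class types, so it cannot inspect a void*. One common workaround (a sketch, under the assumption that everything stored can derive from one shared base with a virtual destructor) looks like this:

        #include <cstdio>

        struct Any { virtual ~Any() {} };          // shared polymorphic base

        struct Foo : Any { void hello() { std::printf("Foo was found\n"); } };
        struct Table : Any { const char* data[3]; };

        int main() {
            Any* list[2];
            Foo foo;
            Table table;
            table.data[0] = "tf"; table.data[1] = "ft"; table.data[2] = 0;
            list[0] = &foo;
            list[1] = &table;

            if (Foo* f = dynamic_cast<Foo*>(list[0]))    // succeeds: list[0] really is a Foo
                f->hello();
            if (dynamic_cast<Foo*>(list[1]) == 0)        // fails cleanly: list[1] is a Table
                std::printf("list[1] is not a Foo\n");
            return 0;
        }

    Raw data like the char* table that cannot inherit from the base can be wrapped in a small adapter struct, as above; NSArray-style heterogeneous containers in C++ usually come down to exactly that.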

    Read the article

  • Python : How do you find the CPU consumption for a piece of code?

    - by Yugal Jindle
    Background: I have a Django application. It works and responds pretty well under low load, but under high load, like 100 users/sec, it consumes 100% CPU and then slows down due to the lack of CPU. Problem: profiling the application gives me the time taken by functions, and this time increases under high load. The time consumed may be due to complex calculation or to waiting for the CPU. So, how do I find the CPU cycles consumed by a piece of code? Reducing the CPU consumption will improve the response time. I might have written extremely efficient code and simply need to add more CPU power, OR I might have some stupid code taking the CPU and causing the slowdown. Any help is appreciated! Update: I am using JMeter to profile my webapp; it gives me a throughput of 2 requests/sec with 100 users, and an average time of 36 seconds over 100 requests vs. 1.25 seconds for 1 request. More info on the configuration: Nginx + uWSGI with 4 workers; no database used, just responses from a REST API; on the first hit the response of the REST API gets cached, so it doesn't make a difference; using ujson for JSON parsing. Curious to know: Python/Django is used by so many organizations for so many big sites, so there must be some high-end debugging / memory-CPU analysis tools. All I have found so far are casual snippets of code that perform profiling.

    Read the article

  • Sparc Assembly Call corrupts data

    - by Sigge
    I am at the moment working with some assembler code for the Sparc processor family, and i am having some truble with a piece of code.. I think the code and output explains more, but in the short.. When i do a call to the function println my varaibels that i have written to the %fp - 8 memory location is destoryed.. here is my assembler code that i am trying to run !PROCEDURE main .section ".text" .global main .align 4 main: save %sp, -96, %sp L1: set 96, %l0 mov %l0, %o0 call initObject ; nop mov %o0, %l0 mov %l0, %o0 call Test$go ; nop mov %o0, %l0 mov %l0, %o0 call println ; nop L0: ret restore !END main !PROCEDURE Test$go .section ".text" .global Test$go .align 4 Test$go: save %sp, -96, %sp L3: mov %i0, %l0 set 0, %l0 set -8, %l1 add %fp,%l1, %l1 st %l0, [%l1] set 1, %l0 mov %l0, %o0 call println ; nop set -8, %l0 add %fp,%l0, %l0 ld [%l0], %l0 mov %l0, %o0 call println ; nop set 1, %l0 mov %l0, %i0 L2: ret restore !END Test$go Here is the assembler code for the println code .global println .type println,#function println: save %sp,-96,%sp ! block 1 .L193: ! File runtime.c: ! 42 } ! 43 ! 45 /** ! 46 Prints an integer to the standard output stream. ! 47 ! 48 @param i The integer to be printed. ! 49 */ ! 50 void println(int i) { ! 51 printf("%d\n", i); sethi %hi(.L195),%o0 or %o0,%lo(.L195),%o0 call printf mov %i0,%o1 jmp %i7+8 restore This is the out put i get when i run this piece of assembler code 1 67584 1 As u can see, the data that is located at %fp - 8 has been destroyed.. please all feedback is apritiated

    Read the article

  • Linking Error Building 64bit Qt app on 32bit XP machine.

    - by photo_tom
    I'm trying to build a 64-bit version of my application (and yes, I really do need the memory) on my 32-bit XP dev box, for production testing on our Vista64 server. Previously, I built the Qt 4.6.2 DLLs in 64-bit mode without any errors; that step went very smoothly. Just to get started on building for production, I'm trying to rebuild Qt's Star Delegate demo in 64-bit mode. I converted the app from 32-bit to 64-bit by changing the application configuration and adjusting the libraries to the 64-bit versions. Now, when I go to link, I get the following error:

        1>------ Build started: Project: stardelegate, Configuration: Release x64 ------
        1>Linking...
        1>MSVCRT.lib(crtexew.obj) : error LNK2001: unresolved external symbol WinMain
        1>release64\stardelegate.exe : fatal error LNK1120: 1 unresolved externals

    Suggestions? Edit: after some more searching, I discovered that if I link as a console app it will work and run, but not as a Windows app. And I don't have this problem in 32-bit mode.

    Read the article

  • Is `super` a local variable?

    - by Michael
    // A : Parent
        @implementation A
        -(id) init {
            // change self here then return it
        }
        @end

        A *a = [[A alloc] init];

    a. Just wondering, is self a local variable or a global one? If it's local, then what is the point of self = [super init] in init? I can successfully define a local variable and use it like this, so why would I need to assign it to self?

        -(id) init {
            id tmp = [super init];
            if (tmp != nil) {
                // do stuff
            }
            return tmp;
        }

    b. If [super init] returns some other object instance and I have to overwrite self, then I will not be able to access A's methods any more, since it will be a completely new object? Am I right?
    c. super and self point to the same memory, and the major difference between them is the method lookup order. Am I right? Sorry, I don't have a Mac to try this on; I'm learning the theory for now...

    Read the article

  • Prototype or jQuery for DOM manipulation (client-side dynamic content)

    - by luiggitama
    I need to know which of these two JavaScript frameworks is better for client-side dynamic content modification for known DOM elements (by id), in terms of performance, memory usage, etc.: Prototype's $('id').update(content) jQuery's jQuery('#id').html(content) BTW, both libraries coexist with no conflict in my app, because I'm using RichFaces for JSF development, that's why I can use "jQuery" instead of "$". I have at least 20 updatable areas in my page, and for each one I prepare content (tables, option lists, etc.), based on some user-defined client-side criteria filtering or some AJAX event, etc., like this: var html = []; int idx = 0; ... html[idx++] = '<tr><td class="cell"><span class="link" title="View" onclick="myFunction('; html[idx++] = param; html[idx++] = ')"></span>'; html[idx++] = someText; html[idx++] = '</td></tr>'; ... So here comes the question, which is better to use: // Prototype's $('myId').update(html.join('')); // or jQuery's jQuery('#myId').html(html.join('')); Other needed functions are hide() and show(), which are present in both frameworks. Which is better? Also I'm needing to enable/disable form controls, and to read/set their values. Note that I know my updatable area's id (I don't need CSS selectors at this point). And I must tell that I'm saving these queried objects in some data structure for later use, so they are requested just once when the page is rendered, like this: MyData = {div1:jQuery('#id1'), div2:$('id2'), ...}; ... div1.update('content 1'); div2.html('content 2'); So, which is the best practice?

    Read the article

  • Resource allocation and automatic deallocation

    - by nabulke
    In my application I got many instances of class CDbaOciNotifier. They all share a pointer to only one instance of class OCIEnv. What I like to achieve is that allocation and deallocation of the resource class OCIEnv will be handled automatically inside class CDbaOciNotifier. The desired behaviour is, with the first instance of class CDbaOciNotifier the environment will be created, after that all following notifiers use that same environment. With the destruction of the last notifier, the environment will be destroyed too (call to custom deleter). What I've got so far (using a static factory method to create notifiers): #pragma once #include <string> #include <memory> #include "boost\noncopyable.hpp" class CDbaOciNotifier : private boost::noncopyable { public: virtual ~CDbaOciNotifier(void); static std::auto_ptr<CDbaOciNotifier> createNotifier(const std::string &tnsName, const std::string &user, const std::string &password); private: CDbaOciNotifier(OCIEnv* envhp); // All notifiers share one environment static OCIEnv* m_ENVHP; // Custom deleter static void freeEnvironment(OCIEnv *env); OCIEnv* m_envhp; }; CPP: #include "DbaOciNotifier.h" using namespace std; OCIEnv* CDbaOciNotifier::m_ENVHP = 0; CDbaOciNotifier::~CDbaOciNotifier(void) { } CDbaOciNotifier::CDbaOciNotifier(OCIEnv* envhp) :m_envhp(envhp) { } void CDbaOciNotifier::freeEnvironment(OCIEnv *env) { OCIHandleFree((dvoid *) env, (ub4) OCI_HTYPE_ENV); *env = null; } auto_ptr<CDbaOciNotifier> CDbaOciNotifier::createNotifier(const string &tnsName, const string &user, const string &password) { if(!m_ENVHP) { OCIEnvCreate( (OCIEnv **) &m_ENVHP, OCI_EVENTS|OCI_OBJECT, (dvoid *)0, (dvoid * (*)(dvoid *, size_t)) 0, (dvoid * (*)(dvoid *, dvoid *, size_t))0, (void (*)(dvoid *, dvoid *)) 0, (size_t) 0, (dvoid **) 0 ); } //shared_ptr<OCIEnv> spEnvhp(m_ENVHP, freeEnvironment); ...got so far... return auto_ptr<CDbaOciNotifier>(new CDbaOciNotifier(m_ENVHP)); } I'd like to avoid counting references (notifiers) myself, and use something like shared_ptr. Do you see an easy solution to my problem?
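    One common pattern that avoids counting notifiers by hand (a sketch only; the Env type and createEnv/freeEnv below are stand-ins for the real OCIEnv handle and the OCIEnvCreate/OCIHandleFree calls, so the example runs on its own): keep a static boost::weak_ptr to the environment, hand every notifier a boost::shared_ptr obtained from it, and let the custom deleter fire when the last notifier disappears.

        #include <boost/shared_ptr.hpp>
        #include <boost/weak_ptr.hpp>
        #include <cstdio>

        struct Env {};
        static Env* createEnv()     { std::puts("environment created");   return new Env; }
        static void freeEnv(Env* e) { std::puts("environment destroyed"); delete e; }

        class Notifier {
        public:
            Notifier() : m_env(acquireEnv()) {}         // every notifier shares the environment
        private:
            static boost::shared_ptr<Env> acquireEnv() {
                static boost::weak_ptr<Env> cache;      // observes the env, does not keep it alive
                boost::shared_ptr<Env> env = cache.lock();
                if (!env) {                             // first notifier, or first after all died
                    env.reset(createEnv(), freeEnv);    // custom deleter runs with the last owner
                    cache = env;
                }
                return env;
            }
            boost::shared_ptr<Env> m_env;
        };

        int main() {
            {
                Notifier a, b, c;   // prints "environment created" once
            }                       // last notifier gone -> "environment destroyed"
            Notifier d;             // a fresh environment is created again
            return 0;
        }

    The notifier count never has to be tracked explicitly; the shared_ptr reference count decides when the deleter runs.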

    Read the article

  • What is a truly empty std::vector in C++?

    - by RyanG
    I've got two vectors in class A that contain objects of other classes, B and C. I know exactly how many elements these vectors are supposed to hold at maximum. In the initializer list of class A's constructor, I initialize these vectors to their max sizes (constants). If I understand this correctly, I now have a vector of class B objects that have been initialized using their default constructor. Right? When I wrote this code, I thought this was the only way to deal with things. However, I've since learned about std::vector::reserve() and I'd like to achieve something different. I'd like to allocate memory for these vectors to grow as large as they will ever need to be, because adding to them is controlled by user input and I don't want frequent resizings. However, I iterate through these vectors many, many times per second, and I currently only work on objects I've flagged as "active". Having to check a boolean member of class B/C on every iteration is silly. I don't want these objects to even BE there for my iterators to see when I run through the list. Is reserving the max space ahead of time and using push_back to add new objects to the vector a solution to this?
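    A minimal sketch of the reserve-then-push_back behaviour in question (element type and maximum are placeholders): after reserve(), size() stays 0, so iteration only ever visits objects that were actually added.

        #include <cstdio>
        #include <vector>

        struct B { int id; };

        int main() {
            const std::size_t kMaxB = 1000;      // known maximum, assumed for the sketch

            std::vector<B> items;
            items.reserve(kMaxB);                // capacity allocated once, size stays 0
            std::printf("size=%lu capacity=%lu\n",
                        (unsigned long)items.size(), (unsigned long)items.capacity());

            B b = { 42 };
            items.push_back(b);                  // no reallocation while size stays within the reserved capacity
            std::printf("size=%lu capacity=%lu\n",
                        (unsigned long)items.size(), (unsigned long)items.capacity());

            // the loop only sees real elements, so no "active" flag is needed
            for (std::size_t i = 0; i < items.size(); ++i)
                std::printf("item %d\n", items[i].id);
            return 0;
        }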

    Read the article

  • localhost yes but phpmyadmin blank

    - by Giskin Leow
    WAMP users often have the problem where both localhost and phpMyAdmin load as blank pages, which is usually a port problem. In my case only phpMyAdmin is blank; sqlbuddy and phpinfo work fine. I tried uninstalling and reinstalling WAMP, and I tried XAMPP: same problem, everything works well except phpMyAdmin.
    MySQL log:

        120905 8:03:08 [Note] Plugin 'FEDERATED' is disabled.
        120905 8:03:08 InnoDB: The InnoDB memory heap is disabled
        120905 8:03:08 InnoDB: Mutexes and rw_locks use Windows interlocked functions
        120905 8:03:08 InnoDB: Compressed tables use zlib 1.2.3
        120905 8:03:09 InnoDB: Initializing buffer pool, size = 128.0M
        120905 8:03:09 InnoDB: Completed initialization of buffer pool
        120905 8:03:09 InnoDB: highest supported file format is Barracuda.
        120905 8:03:09 InnoDB: Waiting for the background threads to start
        120905 8:03:10 InnoDB: 1.1.8 started; log sequence number 1595675
        120905 8:03:11 [Note] Server hostname (bind-address): '(null)'; port: 3306
        120905 8:03:11 [Note] - '(null)' resolves to '::';
        120905 8:03:11 [Note] - '(null)' resolves to '0.0.0.0';
        120905 8:03:11 [Note] Server socket created on IP: '0.0.0.0'.
        120905 8:03:13 [Note] Event Scheduler: Loaded 0 events
        120905 8:03:13 [Note] wampmysqld: ready for connections.

    Apache log:

        [Wed Sep 05 08:03:09 2012] [notice] Apache/2.2.22 (Win32) PHP/5.4.3 configured -- resuming normal operations
        [Wed Sep 05 08:03:09 2012] [notice] Server built: May 13 2012 13:32:42
        [Wed Sep 05 08:03:09 2012] [notice] Parent: Created child process 3812
        [Wed Sep 05 08:03:09 2012] [notice] Child 3812: Child process is running
        [Wed Sep 05 08:03:09 2012] [notice] Child 3812: Acquired the start mutex.
        [Wed Sep 05 08:03:09 2012] [notice] Child 3812: Starting 64 worker threads.
        [Wed Sep 05 08:03:09 2012] [notice] Child 3812: Starting thread to listen on port 80.
        [Wed Sep 05 08:03:09 2012] [notice] Child 3812: Starting thread to listen on port 80.
        [Wed Sep 05 08:04:14 2012] [error] [client 127.0.0.1] File does not exist: C:/wamp/www/favicon.ico
        [Wed Sep 05 08:09:50 2012] [error] [client 127.0.0.1] File does not exist: C:/wamp/www/favicon.ico
        [Wed Sep 05 08:41:03 2012] [error] [client 127.0.0.1] File does not exist: C:/wamp/www/phpMyAdmin

    Read the article

  • How can I get sessions to work if I'm using Google App Engine + Django 1.1?

    - by user341642
    Is there a way for me to get sessions working? I know Django has built in session management, and GAE has some tools for it if you're using their watered down version of Django 0.96, but is there a way to get sessions to work if you're trying to use GAE w/ Django 1.1 (i.e. use_library() call). I assume using a db-backed session doesn't work, and a file system backed one won't work b/c we don't have access to the filesystem if we deploy to the Google production servers. This kinda worked (as in didn't crap out) when I used SessionMiddleware backed by a local-memory backed cache and a non-persistent cache (i.e. setting SESSION_ENGINE to django.contrib.sessions.backends.cache). But the session never seems to persist in this case, no matter how I set the timeouts. A new session key is generated on every page reload. Maybe this is b/c the GAE assumes complete statelessness with each request and blows away my local cache? Apologies in advance, I'm pretty new to Python. Any suggestions would be greatly appreciated.

    Read the article

  • How to get results efficiently out of an Octree/Quadtree?

    - by Reveazure
    I am working on a piece of 3D software that has sometimes has to perform intersections between massive numbers of curves (sometimes ~100,000). The most natural way to do this is to do an N^2 bounding box check, and then those curves whose bounding boxes overlap get intersected. I heard good things about octrees, so I decided to try implementing one to see if I would get improved performance. Here's my design: Each octree node is implemented as a class with a list of subnodes and an ordered list of object indices. When an object is being added, it's added to the lowest node that entirely contains the object, or some of that node's children if the object doesn't fill all of the children. Now, what I want to do is retrieve all objects that share a tree node with a given object. To do this, I traverse all tree nodes, and if they contain the given index, I add all of their other indices to an ordered list. This is efficient because the indices within each node are already ordered, so finding out if each index is already in the list is fast. However, the list ends up having to be resized, and this takes up most of the time in the algorithm. So what I need is some kind of tree-like data structure that will allow me to efficiently add ordered data, and also be efficient in memory. Any suggestions?
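    One way to sidestep the repeated resizing when gathering candidates (a sketch, independent of the octree code itself): accumulate the indices from every node that contains the query object into a std::set, which keeps them ordered and unique at O(log n) per insertion.

        #include <cstdio>
        #include <set>

        int main() {
            // indices stored in two octree nodes that both contain the query object
            int nodeA[] = { 3, 7, 12, 40 };
            int nodeB[] = { 7, 9, 12, 55 };

            std::set<int> candidates;              // stays sorted, drops duplicates, never reallocates a buffer
            candidates.insert(nodeA, nodeA + 4);
            candidates.insert(nodeB, nodeB + 4);

            for (std::set<int>::iterator it = candidates.begin(); it != candidates.end(); ++it)
                std::printf("%d ", *it);           // 3 7 9 12 40 55
            std::printf("\n");
            return 0;
        }

    If a contiguous array is needed at the end, building one std::vector from the set in a single pass avoids the incremental resizes described above.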

    Read the article

  • What is the best way to properly test object equality against an array of objects?

    - by radesix
    My objective is to abort the NSXMLParser when I parse an item that already exists in the cache. The basic flow of the program works like this:
    1) The program starts and downloads an XML feed. Each item in the feed is represented by a custom object (FeedItem), and each FeedItem gets added to an array.
    2) When the parsing is complete, the contents of the array (all FeedItem objects) are archived to disk. The next time the program is executed, or the feed is refreshed by the user, I begin parsing again; however, since a cache (array) now exists, as each item is parsed I want to see if the object already exists in the cache. If it does, then I know I have downloaded all the new items and no longer need to continue parsing.
    What I am learning, I think, is that I can't use indexOfObject: or indexOfObjectIdenticalTo:, because these really seem to check that the objects use the same memory address (i.e. are identical). What I want to do is see whether the contents of the objects are equal (or at least some of the contents). I've done some research and found that I can override the isEqual: method; however, I really don't want to iterate/enumerate through the entire cache for every newly parsed XML FeedItem. Is iterating through the collection and testing each one for equality the only way to do this, or is there a better technique I am not aware of? Currently I am using the following code, though I know it needs to change:

        NSUInteger index = [self.feedListCache.feedList indexOfObject:self.currentFeedItem];
        if (index == NSNotFound) { }

    Read the article

  • Good C string library

    - by chamakits
    Hello all. I recently got inspired to start a project I've been wanting to code for a while. I want to do it in C, because memory handling is key in this application. I was searching around for a good implementation of strings in C, since I know that doing it myself could lead to some messy buffer overflows, and I expect to be dealing with a fairly large number of strings. I found this article which gives details on each, but each of them seems to have a good number of cons (don't get me wrong, the article is EXTREMELY helpful, but it still worries me that even if I were to choose one of those, I wouldn't be using the best I can get). I also don't know how up to date the article is, hence my current plea. What I'm looking for is something that can hold a large amount of characters and simplifies the process of searching through the string. If it allows me to tokenize the string in any way, even better. Also, it should have some pretty good I/O performance. Printing, and formatted printing, isn't quite a top priority. I know I shouldn't expect a library to do all the work for me, but I was just wondering if there was a well-documented string library out there that could save me some time and some work. Any help is greatly appreciated. Thanks in advance! EDIT: I was asked about the license I prefer. Any sort of open-source license will do, but preferably GPL (v2 or v3). EDIT2: I found the betterString (bstring) library and it looks pretty good. Good documentation, a small yet versatile set of functions, and easy to mix with C strings. Anyone have any good or bad stories about it? The only downside I've read about is that it lacks Unicode support (again, I've only read about this, haven't seen it face to face just yet), but everything else seems pretty good. EDIT3: Also, preferably pure C.

    Read the article

  • Rails Nested Attributes Doesn't Insert ID Correctly

    - by MunkiPhD
    I'm attempting to edit a model's nested attributes, much as outlined here, replicated here:

        <%= form_for @person do |person_form| %>
          <%= person_form.text_field :name %>
          <% for address in @person.addresses %>
            <%= person_form.fields_for address, :index => address do |address_form| %>
              <%= address_form.text_field :city %>
            <% end %>
          <% end %>
        <% end %>

    In my code, I have the following:

        <%= form_for(@meal) do |f| %>
          <!-- some other stuff that's irrelevant... -->
          <% for subitem in @meal.meal_line_items %>
            <%= f.fields_for subitem, :index => subitem do |line_item_form| %>
              <%= line_item_form.label :servings %><br/>
              <%= line_item_form.text_field :servings %><br/>
              <%= line_item_form.label :food_id %><br/>
              <%= line_item_form.text_field :food_id %><br/>
            <% end %>
          <% end %>
          <%= f.submit %>
        <% end %>

    This works great, except that when I look at the HTML it creates inputs like the following, failing to insert the correct id and instead placing the in-memory representation(?) of the model:

        <input type="text" value="2" size="30" name="meal[meal_line_item][#<MealLineItem:0x00000005c5d618>][servings]" id="meal_meal_line_item_#<MealLineItem:0x00000005c5d618>_servings">

    Read the article

  • Create table class as a singleton

    - by Mark
    I have a class that I use as a table. This class holds an array of 16 row classes, and these row classes each have 6 double variables. The values of these rows are set once and never change. Would it be good practice to make this table a singleton? The advantage is that it costs less memory, but the table will be called from multiple threads, so I would have to synchronize my code, which may make the application a bit slower. However, lookups in this table are probably a very small portion of the total code that is executed. EDIT: This is my code; are there better ways to do this, or is this good practice? I removed the synchronized keyword according to the recommendations in this question.

        final class HalfTimeTable {
            private HalfTimeRow[] table = new HalfTimeRow[16];
            private static final HalfTimeTable instance = new HalfTimeTable();

            private HalfTimeTable() {
                if (instance != null) {
                    throw new IllegalStateException("Already instantiated");
                }
                table[0] = new HalfTimeRow(4.0, 1.2599, 0.5050, 1.5, 1.7435, 0.1911);
                table[1] = new HalfTimeRow(8.0, 1.0000, 0.6514, 3.0, 1.3838, 0.4295);
                //etc
            }

            @Override
            @Deprecated
            public Object clone() throws CloneNotSupportedException {
                throw new CloneNotSupportedException();
            }

            public static HalfTimeTable getInstance() {
                return instance;
            }

            public HalfTimeRow getRow(int rownumber) {
                return table[rownumber];
            }
        }

    Read the article
