Search Results

Search found 16794 results on 672 pages for 'memory usage'.

Page 607 of 672

  • Fast serialization/deserialization of structs

    - by user256890
    I have a huge amount of geographic data represented in a simple object structure consisting only of structs. All of my fields are of value type.

        public struct Child
        {
            readonly float X;
            readonly float Y;
            readonly int myField;
        }

        public struct Parent
        {
            readonly int id;
            readonly int field1;
            readonly int field2;
            readonly Child[] children;
        }

    The data is chunked up nicely into small portions of Parent[]s. Each array contains a few thousand Parent instances. I have way too much data to keep it all in memory, so I need to swap these chunks to disk back and forth. (One file would be roughly 200-300 KB.) What would be the most efficient way of serializing/deserializing a Parent[] to a byte[] for dumping to disk and reading back? Concerning speed, I am particularly interested in fast deserialization; write speed is not that critical. Would a simple BinarySerializer be good enough? Or should I hack around with StructLayout (see the accepted answer)? I am not sure whether that would work with the array field Parent.children. UPDATE: Response to comments - yes, the objects are immutable (code updated), and indeed the children field is not a value type. 300 KB does not sound like much, but I have zillions of files like that, so speed does matter.
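
    A minimal sketch of the kind of hand-rolled serializer the question is weighing against BinarySerializer/StructLayout, using BinaryWriter/BinaryReader from System.IO; the Child and Parent constructors are assumptions (the readonly fields would need them), as is access to the fields:

        // Assumes hypothetical constructors: Child(float x, float y, int myField)
        // and Parent(int id, int f1, int f2, Child[] children), plus field access.
        static void Save(string path, Parent[] parents)
        {
            using (var w = new BinaryWriter(File.Create(path)))
            {
                w.Write(parents.Length);
                foreach (var p in parents)
                {
                    w.Write(p.id); w.Write(p.field1); w.Write(p.field2);
                    w.Write(p.children.Length);
                    foreach (var c in p.children)
                    {
                        w.Write(c.X); w.Write(c.Y); w.Write(c.myField);
                    }
                }
            }
        }

        static Parent[] Load(string path)
        {
            using (var r = new BinaryReader(File.OpenRead(path)))
            {
                var parents = new Parent[r.ReadInt32()];
                for (int i = 0; i < parents.Length; i++)
                {
                    int id = r.ReadInt32(), f1 = r.ReadInt32(), f2 = r.ReadInt32();
                    var children = new Child[r.ReadInt32()];
                    for (int j = 0; j < children.Length; j++)
                        children[j] = new Child(r.ReadSingle(), r.ReadSingle(), r.ReadInt32());
                    parents[i] = new Parent(id, f1, f2, children);
                }
                return parents;
            }
        }

    Reading back is a single sequential pass with no reflection, which is usually where most of the deserialization time goes with the general-purpose serializers.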

    Read the article

  • Using the read function to read in a file.

    - by robUK
    Hello, gcc 4.4.1. I am using the read function to read in a wave file. However, when it gets to the read call, execution seems to stop and freeze. I am wondering if I am doing anything wrong here. The file test-short.wav is 514K in size. What I am aiming for is to read the file into the memory buffer in chunks at a time. Currently I am just testing this. Many thanks for any suggestions.

        #include <stdio.h>
        #include <stdlib.h>
        #include <errno.h>
        #include <fcntl.h>
        #include <string.h>
        #include <unistd.h>

        int main(void)
        {
            char buff = malloc(10240);
            int32_t fd = 0;
            int32_t bytes_read = 0;
            char *filename = "test-short.wav";

            /* open wave file */
            if((fd = (open(filename, O_RDWR)) == -1))
            {
                fprintf(stderr, "open [ %s ]\n", strerror(errno));
                return 1;
            }

            printf("Opened file [ %s ]\n", filename);
            printf("sizeof(buff) [ %d ]\n", sizeof(buff));
            printf("strlen(buff) [ %d ]\n", strlen(buff));

            bytes_read = read(fd, buff, sizeof(buff));
            printf("Bytes read [ %d ]\n", bytes_read);

            return 0;
        }
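
    For reference, a corrected sketch of the same test (not the original poster's code): the buffer needs to be a char * rather than a single char, the open() parentheses assign the comparison result to fd instead of the descriptor, and sizeof on a pointer is not the buffer size, so the read length has to be tracked explicitly:

        #include <stdio.h>
        #include <stdlib.h>
        #include <errno.h>
        #include <fcntl.h>
        #include <string.h>
        #include <unistd.h>

        int main(void)
        {
            size_t buff_size = 10240;
            char *buff = malloc(buff_size);        /* pointer, not a single char */
            ssize_t bytes_read;
            const char *filename = "test-short.wav";
            int fd = open(filename, O_RDONLY);     /* keep the descriptor itself */

            if (fd == -1) {
                fprintf(stderr, "open [ %s ]\n", strerror(errno));
                return 1;
            }
            printf("Opened file [ %s ]\n", filename);

            bytes_read = read(fd, buff, buff_size); /* pass the real buffer size */
            printf("Bytes read [ %zd ]\n", bytes_read);

            free(buff);
            close(fd);
            return 0;
        }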

    Read the article

  • Sparc Assembly Call corrupts data

    - by Sigge
    I am at the moment working with some assembler code for the SPARC processor family, and I am having some trouble with a piece of code. I think the code and output explain more, but in short: when I do a call to the function println, the variable that I have written to the %fp - 8 memory location is destroyed. Here is the assembler code that I am trying to run:

        !PROCEDURE main
        .section ".text"
        .global main
        .align 4
        main:
            save %sp, -96, %sp
        L1:
            set 96, %l0
            mov %l0, %o0
            call initObject ; nop
            mov %o0, %l0
            mov %l0, %o0
            call Test$go ; nop
            mov %o0, %l0
            mov %l0, %o0
            call println ; nop
        L0:
            ret
            restore
        !END main

        !PROCEDURE Test$go
        .section ".text"
        .global Test$go
        .align 4
        Test$go:
            save %sp, -96, %sp
        L3:
            mov %i0, %l0
            set 0, %l0
            set -8, %l1
            add %fp, %l1, %l1
            st %l0, [%l1]
            set 1, %l0
            mov %l0, %o0
            call println ; nop
            set -8, %l0
            add %fp, %l0, %l0
            ld [%l0], %l0
            mov %l0, %o0
            call println ; nop
            set 1, %l0
            mov %l0, %i0
        L2:
            ret
            restore
        !END Test$go

    Here is the assembler code for println:

        .global println
        .type println,#function
        println:
            save %sp,-96,%sp
        ! block 1
        .L193:
        ! File runtime.c:
        ! 42 }
        ! 43
        ! 45 /**
        ! 46 Prints an integer to the standard output stream.
        ! 47
        ! 48 @param i The integer to be printed.
        ! 49 */
        ! 50 void println(int i) {
        ! 51     printf("%d\n", i);
            sethi %hi(.L195),%o0
            or %o0,%lo(.L195),%o0
            call printf
            mov %i0,%o1
            jmp %i7+8
            restore

    This is the output I get when I run this piece of assembler code:

        1
        67584
        1

    As you can see, the data located at %fp - 8 has been destroyed. All feedback is appreciated.
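
    A note on the usual explanation for clobbered %fp-relative locals on SPARC (hedged, since only the posted code is visible): save %sp, -96, %sp allocates only the ABI's minimum frame, which is entirely reserved space - the 16-word register window save area, the hidden struct-return word, and the six outgoing-argument words - and %fp - 8 is the last of those argument words, which callees such as println/printf are entitled to reuse. A 96-byte frame has no room for your own locals; growing the frame (keeping it a multiple of 8) puts %fp - 8 above the reserved region:

        ! Sketch: reserve 8 extra bytes of frame for locals in Test$go.
        Test$go:
            save %sp, -104, %sp      ! 96 reserved bytes + 8 bytes for locals
            ...
            set -8, %l1
            add %fp, %l1, %l1
            st  %l0, [%l1]           ! %fp - 8 now lies in the local area,
                                     ! outside the slots callees may reuse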

    Read the article

  • DB optimization to use it as a queue

    - by anony
    We have a table called worktable which has some columns (key (primary key), ptime, aname, status, content). We have something called a producer which inserts rows into this table, and a consumer which does an order-by on the key column and fetches the first row whose status is 'pending'. The consumer does some processing on this row:

    1. updates status to "processing"
    2. does some processing using content
    3. deletes the row

    We are facing contention issues when we try to run multiple consumers (probably due to the order-by, which does a full table scan). Using Advanced Queues would be our next step, but before we go there we want to check what maximum throughput we can achieve with multiple consumers and a producer on the table. What optimizations can we do to get the best numbers possible? Can we do in-memory processing where a consumer fetches 1000 rows at a time, processes them and deletes them? Will that improve things? What are the other possibilities? Partitioning of the table? Parallelization? Index-organized tables?
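
    Since Advanced Queues are mentioned, this sounds like Oracle; one common way to cut consumer contention on a table used as a queue is SELECT ... FOR UPDATE SKIP LOCKED, so that each consumer claims a different pending row instead of all of them blocking on the same one. A sketch using the column names from the question:

        -- Each consumer locks the first unlocked pending row; rows locked by others are skipped.
        SELECT key, content
          FROM worktable
         WHERE status = 'pending'
         ORDER BY key
           FOR UPDATE SKIP LOCKED;

        -- For the row that was claimed:
        UPDATE worktable SET status = 'processing' WHERE key = :claimed_key;
        -- ... process content ...
        DELETE FROM worktable WHERE key = :claimed_key;
        COMMIT;

    An index on (status, key) would also let the 'pending' lookup avoid the full table scan suspected in the question.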

    Read the article

  • NSManagedObject How To Reload

    - by crissag
    I have a view that consists of a table of existing objects and an Add button, which allows the user to create a new object. When the user presses Add, the object is created in the list view controller, so that the object will be part of that managed object context (via the NSEntityDescription insertNewObjectForEntityForName: method). The Add view has a property for the managed object. In the list view controller, I create an Add view controller, set the property to the managed object I created, and then push the Add view onto the navigation stack. In the Add view, I have two buttons for save and cancel. In the save, I save the managed object and pass it back to the list view controller via a delegate method. If the user cancels, then I delete the object and pass nil back to the list view controller. The complication I am having in the Add view is related to a UIImagePickerController. In the Add view, I have a button which allows the user to take a photo of the object (or use an existing photo from the photo library). However, the process of transferring to the UIImagePickerController and having the user use the camera is resulting in a didReceiveMemoryWarning in the Add view controller. Further, the view was unloaded, which also caused my NSManagedObject to get clobbered. My question is, how do you go about reloading the NSManagedObject in the case where it was released because of the low-memory situation?
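
    One approach worth sketching (not necessarily how the original app is structured): keep the object's NSManagedObjectID rather than relying on the object pointer surviving the memory warning, and re-resolve it from the context when the view comes back. The objectID and managedObjectContext properties below are illustrative:

        // Before pushing the picker: remember the ID (for unsaved objects,
        // obtainPermanentIDsForObjects:error: can be used to make it permanent first).
        self.objectID = [newManagedObject objectID];

        // Later, after a memory warning may have unloaded things:
        NSError *error = nil;
        NSManagedObject *object =
            [self.managedObjectContext existingObjectWithID:self.objectID
                                                      error:&error];
        if (object == nil) {
            NSLog(@"Could not reload object: %@", error);
        }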

    Read the article

  • What are provenly scalable data persistence solutions for consumer profiles?

    - by Hubbard
    Consumer profiles with analytical scores [ConsumerID, 1..n demographic variables, 1..n analytical scores, e.g. "likely to churn", "likely to buy an item worth $100", etc.] have to be fast to query if they are to be used in customizing web sites, consumer communications, etc. Well. If you have:

    - a large number of consumers, and
    - large profiles with a huge set of variables (as profiles describing human behaviour are likely to be)

    ...you are in trouble. If you really have a physical relational database to which you target a query, and then a physical disk starts to rotate someplace to give you an individual profile or a set of profiles, the profile user (a web site customizing a page, a recommendation engine making a recommendation...) has died of boredom before getting any observable results. There is the possibility of keeping the profiles in memory, which would of course increase performance hugely. What are the most proven solutions for fast-response, scalable consumer profile storage? Is there a shootout of these someplace?

    Read the article

  • Peculiar JRE behaviour running RMI server under load, should I worry?

    - by darri
    I've been developing a minimalistic Java rich-client CRUD application framework for the past few years, mostly as a hobby but also actively using it to write applications for my current employer. The framework provides database access to clients either via a local JDBC-based connection or a lightweight RMI server. Last night I started a load-testing application, which ran 100 headless clients bombarding the server with requests, each client waiting only 1-2 seconds between running simple use cases consisting of selecting records along with associated detail records from a simple e-store database (Chinook). This morning when I looked at the telemetry results from the server profiling session, I noticed something which to me seemed strange (and made me keep the setup running for the remainder of the day); I don't really know what conclusions to draw from it. Here are the results (charts linked in the original post): memory, GC activity, threads, CPU load. Interesting, right? So the question is, is this normal or erratic? Is this simply the JRE (1.6.0_03 on Windows XP) doing its thing (perhaps related to the JRE configuration), or is my framework design somehow causing this? Running the server against MySQL as opposed to an embedded H2 database does not affect the pattern. I am leaving out the details of my server design, but I'll be happy to elaborate if this behaviour is deemed erratic.

    Read the article

  • What happens when modifying Gemfile.lock directly?

    - by Mik378
    From the second execution of bundle install onwards, dependencies are loaded from Gemfile.lock when the Gemfile hasn't changed. But I wonder how the detection of changes between those two files is made. For instance, if I add a new dependency directly to Gemfile.lock without adding it to the Gemfile (as opposed to the best practice, since Gemfile.lock is auto-generated from the Gemfile), would a bundle install consider the Gemfile changed? In other words, does the bundle install process compare the whole Gemfile and Gemfile.lock dependency trees in order to detect changes? If it does, then even if I add a dependency directly to Gemfile.lock, the Gemfile would be detected as changed (since they differ) and bundle install would regenerate Gemfile.lock (thus losing the added dependency...). What is the process of bundle install when it is launched for the second time? To be more clear, my question is: are changes detected based only on the Gemfile? That is, does bundler keep a snapshot of the Gemfile from bundle install execution N and merely compare it against execution N+1? Or is no snapshot kept, and does bundler compare against Gemfile.lock each time to decide whether the Gemfile must be considered changed?

    Read the article

  • iPhone producing strange results on 'if' statement

    - by Rob
    I have a UIPicker where the user inputs a specified time (i.e. 13:00, 13:01, 13:02, etc.) which determines their score. Once they hit the button, an alert comes up with the score that is determined through this 'if-else' statement. Everything seems to work great MOST of the time, but I am getting some erratic behavior. This is the code:

        // Gets my value from the UIPicker and then converts it into a format
        // that can be used in the 'if' statement.
        NSInteger runRow = [runTimePicker selectedRowInComponent:2];
        NSString *runSelected = [runTimePickerData objectAtIndex:runRow];
        NSString *runSelectedFixed = [runSelected stringByReplacingOccurrencesOfString:@":" withString:@"."];

        // The actual 'if' statement.
        if ([runSelectedFixed floatValue] <= 13.00) {
            runScore = 100;
        } else if ([runSelectedFixed floatValue] <= 13.06) {
            runScore = 99;
        } else if ([runSelectedFixed floatValue] <= 13.12) {
            runScore = 97;
        } else if ([runSelectedFixed floatValue] <= 13.18) {
            runScore = 96;
        } else if ([runSelectedFixed floatValue] <= 13.24) {
            runScore = 94;
        } else if ([runSelectedFixed floatValue] <= 13.30) {
            runScore = 93;
        } else if ([runSelectedFixed floatValue] <= 13.36) {
            runScore = 92;
        } else if ([runSelectedFixed floatValue] <= 13.42) {
            runScore = 90;
        } else if ([runSelectedFixed floatValue] <= 13.48) {
            runScore = 89;
        } else if ([runSelectedFixed floatValue] <= 13.54) {
            runScore = 88;
        }

    Now, when I test the program, I get the expected result when I choose '13:00', which is '100'. I also get the expected result of '99' when I choose all of the times between '13:01' and '13:05'. BUT, when I choose '13:06' it gives me a score of '97'. I also get a score of '97' on '13:07' through '13:12' - which is the desired result. Why would I get a '97' right on '13:12' but not get a '99' right on '13:06'? Could this be a memory leak or something?
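
    This is almost certainly floating-point rounding rather than a memory issue: [@"13.06" floatValue] yields a float slightly above 13.06, while the literal 13.06 in the comparison is a double slightly below it, so the <= 13.06 branch fails and the value falls through to the 13.12 bracket; "13.12" happens to round the other way, which is why that boundary behaves. One workaround is to compare whole minutes and seconds as integers; a sketch reusing the question's variable names and assuming the picker strings look like "13:06":

        NSArray *parts = [runSelected componentsSeparatedByString:@":"];
        NSInteger minutes = [[parts objectAtIndex:0] integerValue];
        NSInteger seconds = [[parts objectAtIndex:1] integerValue];
        NSInteger total   = minutes * 60 + seconds;   // 13:06 -> 786 seconds

        if (total <= 13 * 60) {
            runScore = 100;
        } else if (total <= 13 * 60 + 6) {
            runScore = 99;
        } else if (total <= 13 * 60 + 12) {
            runScore = 97;
        } // ... and so on for the remaining brackets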

    Read the article

  • Aliasing `T*` with `char*` is allowed. Is it also allowed the other way around?

    - by StackedCrooked
    Note: This question has been renamed and reduced to make it more focused and readable. Most of the comments refer to the old text. According to the standard, objects of different types may not share the same memory location. So this would not be legal:

        int i = 0;
        short * s = reinterpret_cast<short*>(&i); // BAD!

    The standard however allows an exception to this rule: any object may be accessed through a pointer to char or unsigned char:

        int i = 0;
        char * c = reinterpret_cast<char*>(&i); // OK

    However, it is not clear to me whether this is also allowed the other way around. For example:

        char * c = read_socket(...);
        unsigned * u = reinterpret_cast<unsigned*>(c); // huh?

    Summary of the answers. The answer is NO, for two reasons:

    1. You can only access an existing object as char*. There is no object in my sample code, only a byte buffer.
    2. The pointer address may not have the right alignment for the target object. In that case, dereferencing it would result in undefined behavior. On the Intel and AMD platforms misaligned access results in performance overhead; on ARM it will trigger a CPU trap and your program will be terminated!

    This is a simplified explanation. For more detailed information see the answers by @Luc Danton, @Cheers and hth. - Alf and @David Rodríguez.
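
    The usual well-defined alternative for the socket case is to copy the bytes into a real object of the target type; compilers generally optimize the copy away, and the result is correctly aligned. A minimal sketch:

        #include <cstring>

        unsigned read_unsigned(const char *buffer)
        {
            unsigned u;
            std::memcpy(&u, buffer, sizeof u);  // u is a real, properly aligned unsigned
            return u;                           // (byte order is a separate concern)
        }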

    Read the article

  • What happens when you create an instance of an object containing no state in C#?

    - by liquorice
    I am, I think, OK at algorithmic programming, if that is the right term. I used to play with Turbo Pascal and 8086 assembly language back in the 1980s as a hobby, but only on very small projects, and I haven't really done any programming in the twenty-ish years since then. So I am struggling for understanding like a drowning swimmer. Maybe this is a very naive question, or I'm just making no sense at all, but say I have an object kind of like this:

        class Something : IDoer
        {
            void Do(ISomethingElse x)
            {
                x.DoWhatEverYouWant(42);
            }
        }

    And then I do:

        var Thing1 = new Something();
        var Thing2 = new Something();
        Thing1.Do(blah);
        Thing2.Do(blah);

    Does Thing1 equal Thing2? Does "new Something()" create anything? Or is it not much different from having a static class, except that I can pass it around and swap it out, etc.? Is the "Do" procedure in the same location in memory for both the Thing1 and Thing2 objects? I mean, when executing it, does it mean there are two Something.Do procedures or just one?

    Read the article

  • Reading and writing to files simultaneously?

    - by vipersnake005
    Moved the question here. Suppose I want to store 1,000,000,000 integers and cannot fit them in memory. I would use a file (which can easily handle so much data). How can I let it read and write at the same time? Using

        fstream file("file.txt", ios::out | ios::in);

    doesn't create a file in the first place. But supposing the file exists, I am unable to do reading and writing simultaneously. What I mean is this: let the contents of the file be 111111. Then if I run:

        #include <fstream>
        #include <iostream>
        using namespace std;

        int main()
        {
            fstream file("file.txt", ios::in | ios::out);
            char x;
            while (file >> x)
            {
                file << '0';
            }
            return 0;
        }

    Shouldn't the file's contents now be 101010? Read one character and then overwrite the next one with 0? Or, in case the entire contents were read at once into some buffer, should there not be at least one 0 in the file, i.e. 1111110? But the contents remain unaltered. Please explain. Thank you.
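
    For reference, a sketch of the usual fix for the 101010 version (on top of opening an existing file with ios::in | ios::out): a file stream follows C-stdio update-mode rules, so a seek is needed every time the code switches between reading and writing:

        #include <fstream>
        using namespace std;

        int main()
        {
            fstream file("file.txt", ios::in | ios::out);
            char x;
            while (file >> x)                 // read one character
            {
                file.seekp(file.tellg());     // switch to writing at the current offset
                file << '0';                  // overwrite the next character with '0'
                file.seekg(file.tellp());     // switch back to reading
            }
            return 0;                         // stream flushed and closed on destruction
        }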

    Read the article

  • How to construct objects based on XML code?

    - by the_drow
    I have XML files that are a representation of a portion of HTML code. Those XML files also have widget declarations. Example XML file:

        <message id="msg">
          <p>
            <Widget name="foo" type="SomeComplexWidget" attribute="value">
              inner text here, sets another attribute or inserts another widget
              into the tree if needed...
            </Widget>
          </p>
        </message>

    I have a main Widget class that all of my widgets inherit from. The question is how I would create it. Here are my options:

    1. Create a compile-time tool that will parse the XML file and create the necessary code to bind the widgets to the needed objects.
       Advantages: no extra run-time overhead induced on the system; it's easy to bind setters.
       Disadvantages: adds another step to the build chain; hard to maintain, as every widget in the system has to be added to the parser; use of macros to bind the widgets; complex code.
    2. Find a method to register all widgets into a factory automatically.
       Advantages: all of the binding is done completely automatically; easier to maintain than option 1, as every new widget only needs to call a WidgetFactory method that registers it.
       Disadvantages: no idea how to bind setters without introducing a maintainability nightmare; adds memory and run-time overhead; complex code.

    What do you think is better? Can you guys suggest a better solution?
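
    A minimal sketch of the self-registration idea from option 2 (names are illustrative, not from the question's code base): each widget translation unit registers a creator function with the factory through a static object, so no central list has to be maintained:

        #include <functional>
        #include <map>
        #include <memory>
        #include <string>

        class Widget { public: virtual ~Widget() {} };

        class WidgetFactory {
        public:
            using Creator = std::function<std::unique_ptr<Widget>()>;
            static WidgetFactory& instance() { static WidgetFactory f; return f; }
            void registerType(const std::string& type, Creator c) { creators_[type] = std::move(c); }
            std::unique_ptr<Widget> create(const std::string& type) const {
                auto it = creators_.find(type);
                return it != creators_.end() ? it->second() : nullptr;
            }
        private:
            std::map<std::string, Creator> creators_;
        };

        // One static registrar object per widget type performs the registration.
        template <class T>
        struct WidgetRegistrar {
            explicit WidgetRegistrar(const std::string& type) {
                WidgetFactory::instance().registerType(type, [] { return std::make_unique<T>(); });
            }
        };

        class SomeComplexWidget : public Widget { /* ... */ };
        static WidgetRegistrar<SomeComplexWidget> someComplexWidgetReg("SomeComplexWidget");

    The XML walker would then call WidgetFactory::instance().create(typeAttribute) for each Widget element; binding setters from the remaining attributes is a separate problem this sketch does not address.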

    Read the article

  • Overlaying several CLR reference fields with each other in explicit struct?

    - by thr
    Edit: I'm well aware that this works very well with value types; my specific question is about using this for reference types. I've been tinkering around with structs in .NET/C#, and I just found out that you can do this:

        using System;
        using System.Runtime.InteropServices;

        namespace ConsoleApplication1
        {
            class Foo { }
            class Bar { }

            [StructLayout(LayoutKind.Explicit)]
            struct Overlaid
            {
                [FieldOffset(0)] public object AsObject;
                [FieldOffset(0)] public Foo AsFoo;
                [FieldOffset(0)] public Bar AsBar;
            }

            class Program
            {
                static void Main(string[] args)
                {
                    var overlaid = new Overlaid();

                    overlaid.AsObject = new Bar();
                    Console.WriteLine(overlaid.AsBar);

                    overlaid.AsObject = new Foo();
                    Console.WriteLine(overlaid.AsFoo);

                    Console.ReadLine();
                }
            }
        }

    Basically this circumvents having to do dynamic casting at runtime by using a struct that has an explicit field layout and then accessing the object inside as its correct type. Now my question is: can this lead to memory leaks somehow, or to any other undefined behavior inside the CLR? Or is this a fully supported convention that is usable without any issues? I'm aware that this is one of the darker corners of the CLR, and that this technique is only a viable option in very few specific cases.

    Read the article

  • How to change a UIButton title at runtime in Objective-C?

    - by Sarah
    Hello, I know this question has been asked many times, and I am implementing the same idea for changing the title of a UIButton, I guess. Let me clarify my problem first. I have a UIButton named btnType; on clicking it, a picker pops up, and after selecting one value I hit a Done button to hide the picker. At the same time I change the title of the button with this code:

        [btnType setTitle:btnTitle forState:UIControlEventTouchUpInside];
        [btnType setTitleColor:[UIColor redColor] forState:UIControlEventAllEvents];

    But to my surprise, the title is not changed and the application crashes with signal EXC_BAD_ACCESS. I am not getting where I am making a mistake. I have allocated memory to btnType in viewDidLoad. Also, I am using

        -(IBAction)pressAddType
        {
            toolBar.hidden = FALSE;
            dateTypePicker.hidden = FALSE;
        }

    on pressing the button to open the picker. I would also like to mention that I have made the connection in IB with the TouchUpInside event for pressAddType. Any guesses? I will be grateful if you could help me. Thanks.
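
    Two things stand out, offered as guesses since only this snippet is visible: setTitle:forState: and setTitleColor:forState: expect a UIControlState value (e.g. UIControlStateNormal), not UIControlEvents values, so the title is being set for a state that never applies; and if btnType is an IBOutlet wired up in Interface Builder, alloc/init-ing it again in viewDidLoad replaces the on-screen button and can lead to over-release crashes like EXC_BAD_ACCESS (the same applies if btnTitle is an autoreleased string that has already gone away). A sketch of the usual call:

        // Set the title for the normal control state of the IB-connected button.
        [btnType setTitle:btnTitle forState:UIControlStateNormal];
        [btnType setTitleColor:[UIColor redColor] forState:UIControlStateNormal];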

    Read the article

  • Question about array subscripting in C#

    - by Michael J
    Back in the old days of C, one could use array subscripting to address storage in very useful ways. For example, one could declare an array like this, representing an EEPROM image with 8-bit words:

        BYTE eepromImage[1024] = { ... };

    and later refer to that array as if it were really multi-dimensional storage:

        BYTE mpuImage[2][512] = eepromImage;

    I'm sure I have the syntax wrong, but I hope you get the idea. Anyway, this projected a two-dimensional image of what is really single-dimensional storage. The two-dimensional projection represents the EEPROM image when loaded into the memory of an MPU with 16-bit words. In C one could reference the storage multi-dimensionally and change values, and the changed values would show up in the real (single-dimension) storage almost as if by magic. Is it possible to do this same thing using C#? Our current solution uses multiple arrays and event handlers to keep things synchronized. This kind of works, but it is additional complexity that we would like to avoid if there is a better way.
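
    One way to get a similar effect in C# without a second array or synchronization events is to keep a single backing array and expose the multi-dimensional view as a thin wrapper with indexers; a sketch (names are illustrative):

        // 16-bit MPU "bank" view over an 8-bit EEPROM image, backed by the same array.
        class EepromImage
        {
            private readonly byte[] data = new byte[1024];

            // Flat 8-bit view.
            public byte this[int index]
            {
                get { return data[index]; }
                set { data[index] = value; }
            }

            // Two-bank view: bank 0..1, offset 0..511.
            public byte this[int bank, int offset]
            {
                get { return data[bank * 512 + offset]; }
                set { data[bank * 512 + offset] = value; }
            }
        }

    Writes through either indexer are visible through the other, since both address the same storage.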

    Read the article

  • Any workarounds for non-static member array initialization?

    - by TomiJ
    In C++, it's not possible to initialize array members in the initialization list, thus member objects should have default constructors and they should be properly initialized in the constructor. Is there any (reasonable) workaround for this, apart from not using arrays? [Anything that can be initialized using only the initialization list is in our application far preferable to using the constructor, as that data can be allocated and initialized by the compiler and linker, and every CPU clock cycle counts, even before main. However, it is not always possible to have a default constructor for every class, and besides, reinitializing the data again in the constructor rather defeats the purpose anyway.] E.g. I'd like to have something like this (but this one doesn't work):

        class OtherClass
        {
        private:
            int data;
        public:
            OtherClass(int i) : data(i) {}; // No default constructor!
        };

        class Foo
        {
        private:
            OtherClass inst[3]; // Array size fixed and known ahead of time.
        public:
            Foo(...) : inst[0](0), inst[1](1), inst[2](2) {};
        };

    The only workaround I'm aware of is the non-array one:

        class Foo
        {
        private:
            OtherClass inst0;
            OtherClass inst1;
            OtherClass inst2;
            OtherClass *inst[3];
        public:
            Foo(...) : inst0(0), inst1(1), inst2(2)
            {
                inst[0] = &inst0;
                inst[1] = &inst1;
                inst[2] = &inst2;
            };
        };

    Edit: It should be stressed that OtherClass has no default constructor, and that it is very desirable to have the linker be able to allocate any memory needed (one or more static instances of Foo will be created); using the heap is essentially verboten. I've updated the examples above to highlight the first point.
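
    If a C++11 (or later) compiler is an option, the array member can be list-initialized directly in the mem-initializer list, which sidesteps the problem; a sketch:

        class Foo
        {
        private:
            OtherClass inst[3];
        public:
            // Each element is constructed with OtherClass(int);
            // no default constructor is required.
            Foo() : inst{ OtherClass(0), OtherClass(1), OtherClass(2) } {}
        };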

    Read the article

  • NoClassDefFoundError for JUnit test on Android

    - by J Bellamy
    I am having some very bizarre behaviour. I have a large number of test cases for my Android application, and they all work except for one. When I run this one I get a java.lang.NoClassDefFoundError: org.JUnit.test. Yes, I have the JUnit 4 library imported into the project, and my other JUnit tests are running without any problems. What is particularly bizarre is that before I hit this problem I had an error in my code - basically, I tried writing a file to a read-only folder. When that occurred, the JUnit test would execute up to the point where it hit an IO exception for accessing a location it could not access. I fixed this problem, and suddenly the Android emulator doesn't seem to know what org.JUnit.test is. I have examined the run configuration for this test class, and it is the same as my others. It is in the same folder as the other tests as well. It also uses the same import statements. Any idea what is going on? I am using the Android 10 emulator and Eclipse version 3.7.2. Edit: To clarify, the error I get appears in Logcat and not in my Eclipse project.
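
    One thing worth checking, offered as a guess from the error text alone: the JUnit 4 classes live in the lower-case org.junit package, and when a test runs on the emulator as an Android instrumentation test the platform only bundles its JUnit 3 based android.test classes, so org.junit.* is only found if the JUnit 4 jar is actually packaged into the test APK (e.g. marked as exported on the build path or placed in libs/), not merely present on the Eclipse project classpath. For reference, a plain JUnit 4 test class looks like this:

        import org.junit.Test;
        import static org.junit.Assert.assertEquals;

        public class ExampleTest {
            @Test
            public void addsNumbers() {
                assertEquals(4, 2 + 2);
            }
        }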

    Read the article

  • What is a truly empty std::vector in C++?

    - by RyanG
    I've got two vectors in class A that contain objects of other classes B and C. I know exactly how many elements these vectors are supposed to hold at maximum. In the initializer list of class A's constructor, I initialize these vectors to their max sizes (constants). If I understand this correctly, I now have a vector of objects of class B that have been initialized using their default constructor. Right? When I wrote this code, I thought this was the only way to deal with things. However, I've since learned about std::vector::reserve() and I'd like to achieve something different. I'd like to allocate the memory for these vectors ahead of time, because adding to them is controlled by user input and I don't want frequent resizings. However, I iterate through these vectors many, many times per second, and I currently only work on objects I've flagged as "active". Having to check a boolean member of class B/C on every iteration is silly. I don't want those objects to even BE there for my iterators to see when I run through the list. Is reserving the max space ahead of time and using push_back to add a new object to the vector a solution to this?
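
    A short sketch of the distinction being asked about (maxElements stands in for the known maximum): constructing with a size creates that many default-constructed elements, while reserve() only allocates capacity and leaves the vector empty, so iteration only ever visits elements that were actually pushed back:

        #include <vector>

        std::vector<B> v1(maxElements);   // size() == maxElements, default-constructed Bs
        std::vector<B> v2;
        v2.reserve(maxElements);          // size() == 0, capacity() >= maxElements
        v2.push_back(B(/* args */));      // no reallocation until capacity is exceeded
        // Iterating v2 visits only the objects added so far - no "active" flag needed.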

    Read the article

  • Is `super` a local variable?

    - by Michael
    Consider:

        // A : Parent
        @implementation A
        -(id) init
        {
            // change self here then return it
        }
        @end

        A *a = [[A alloc] init];

    a. Just wondering, is self a local variable or a global? If it's local, then what is the point of self = [super init] in init? I can successfully define some local variable and use it like this; why would I need to assign it to self?

        -(id) init
        {
            id tmp = [super init];
            if (tmp != nil)
            {
                // do stuff
            }
            return tmp;
        }

    b. If [super init] returns some other object instance and I have to overwrite self, then I will not be able to access A's methods any more, since it will be a completely new object? Am I right?

    c. super and self point to the same memory, and the major difference between them is method lookup order. Am I right?

    Sorry, I don't have a Mac to try this on; learning the theory for now...
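
    For reference, a sketch of the conventional init pattern the question is circling around (manual reference counting era style): assigning the result of [super init] back to self keeps later ivar accesses and method calls pointing at the object that will actually be returned, even if the superclass substitutes a different instance or returns nil:

        -(id)init
        {
            self = [super init];   // may return a different instance, or nil
            if (self != nil)
            {
                // set up ivars on the object that will actually be returned
            }
            return self;
        }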

    Read the article

  • Resource allocation and automatic deallocation

    - by nabulke
    In my application I have many instances of class CDbaOciNotifier. They all share a pointer to only one instance of class OCIEnv. What I would like to achieve is that allocation and deallocation of the resource class OCIEnv are handled automatically inside class CDbaOciNotifier. The desired behaviour is: with the first instance of class CDbaOciNotifier the environment is created, and after that all following notifiers use that same environment. With the destruction of the last notifier, the environment is destroyed too (via a call to a custom deleter). What I've got so far (using a static factory method to create notifiers):

        #pragma once

        #include <string>
        #include <memory>
        #include "boost\noncopyable.hpp"

        class CDbaOciNotifier : private boost::noncopyable
        {
        public:
            virtual ~CDbaOciNotifier(void);

            static std::auto_ptr<CDbaOciNotifier> createNotifier(const std::string &tnsName,
                                                                 const std::string &user,
                                                                 const std::string &password);

        private:
            CDbaOciNotifier(OCIEnv* envhp);

            // All notifiers share one environment
            static OCIEnv* m_ENVHP;

            // Custom deleter
            static void freeEnvironment(OCIEnv *env);

            OCIEnv* m_envhp;
        };

    CPP:

        #include "DbaOciNotifier.h"

        using namespace std;

        OCIEnv* CDbaOciNotifier::m_ENVHP = 0;

        CDbaOciNotifier::~CDbaOciNotifier(void)
        {
        }

        CDbaOciNotifier::CDbaOciNotifier(OCIEnv* envhp)
            : m_envhp(envhp)
        {
        }

        void CDbaOciNotifier::freeEnvironment(OCIEnv *env)
        {
            OCIHandleFree((dvoid *) env, (ub4) OCI_HTYPE_ENV);
            *env = null;
        }

        auto_ptr<CDbaOciNotifier> CDbaOciNotifier::createNotifier(const string &tnsName, const string &user, const string &password)
        {
            if(!m_ENVHP)
            {
                OCIEnvCreate( (OCIEnv **) &m_ENVHP, OCI_EVENTS|OCI_OBJECT, (dvoid *)0,
                              (dvoid * (*)(dvoid *, size_t)) 0,
                              (dvoid * (*)(dvoid *, dvoid *, size_t))0,
                              (void (*)(dvoid *, dvoid *)) 0,
                              (size_t) 0, (dvoid **) 0 );
            }
            //shared_ptr<OCIEnv> spEnvhp(m_ENVHP, freeEnvironment); ...got so far...
            return auto_ptr<CDbaOciNotifier>(new CDbaOciNotifier(m_ENVHP));
        }

    I'd like to avoid counting references (notifiers) myself, and use something like shared_ptr. Do you see an easy solution to my problem?
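
    One common sketch for this pattern, assuming boost::shared_ptr/boost::weak_ptr are available (the commented-out line suggests the project is already close to this): keep a static weak_ptr to the environment and give every notifier a shared_ptr member, so the custom deleter runs exactly when the last notifier goes away and a later notifier recreates the environment. The constructor and members would be changed to take the shared_ptr; OCI arguments are abbreviated as in the question:

        // Members (sketch): static boost::weak_ptr<OCIEnv> s_env;  (non-owning)
        //                   boost::shared_ptr<OCIEnv> m_env;       (one strong ref per notifier)
        std::auto_ptr<CDbaOciNotifier> CDbaOciNotifier::createNotifier(const std::string &tnsName,
                                                                       const std::string &user,
                                                                       const std::string &password)
        {
            boost::shared_ptr<OCIEnv> env = s_env.lock();   // still alive from another notifier?
            if (!env)
            {
                OCIEnv *raw = 0;
                OCIEnvCreate(&raw, OCI_EVENTS | OCI_OBJECT, 0, 0, 0, 0, 0, 0);
                env.reset(raw, &freeEnvironment);           // deleter runs with the last notifier
                s_env = env;
            }
            return std::auto_ptr<CDbaOciNotifier>(new CDbaOciNotifier(env));
        }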

    Read the article

  • Graph search problem with route restrictions

    - by Darcara
    I want to calculate the most profitable route, and I think this is a type of traveling salesman problem. I have a set of nodes that I can visit, and a function to calculate the cost of traveling between nodes and the points gained for reaching the nodes. The goal is to reach a fixed known score while minimizing the cost. The costs and rewards are not fixed and depend on the nodes visited before. The starting node is fixed. There are some restrictions on how nodes can be visited. Some simplified examples include:

    - Node B can only be visited after A.
    - After node C has been visited, D or E can be visited. Visiting at least one is required, visiting both is permissible.
    - Z can only be visited after at least 5 other nodes have been visited.
    - Once 50 nodes have been visited, the nodes A-M will no longer reward points.
    - Certain nodes can (and probably must) be visited multiple times.

    Currently I can think of only two ways to solve this:

    a) genetic algorithms, with the fitness function calculating the cost/benefit of the generated route
    b) Dijkstra search through the graph, since the starting node is fixed, although the large number of nodes will probably make that infeasible memory-wise

    Are there any other ways to determine the best route through the graph? It doesn't need to be perfect; an approximated path is perfectly fine, as long as its error is acceptable. Would TSP solvers be an option here?

    Read the article

  • GTK+: How do I process RadioMenuItem choice without marking it chosen? And vice versa

    - by eugene.shatsky
    In my program, I've got a menu with a group of RadioMenuItem entries. Choosing one of them should trigger a function which can either succeed or fail. If it fails, that RadioMenuItem shouldn't be marked chosen (the previous one should persist). Besides, sometimes I want to set the marked item without running the choice-processing function. Here is my current code:

        # Update seat menu list
        def update_seat_menu(self, seats, selected_seat=None):
            seat_menu = self.builder.get_object('seat_menu')
            # Delete seat menu items
            for menu_item in seat_menu:  # TODO: is it a good way? does remove() delete obsolete menu_item from memory?
                if menu_item.__class__.__name__ == 'RadioMenuItem':
                    seat_menu.remove(menu_item)
            # Fill menu with new items
            group = []
            for seat in seats:
                menu_item = Gtk.RadioMenuItem.new_with_label(group, str(seat[0]))
                group = menu_item.get_group()
                seat_menu.append(menu_item)
                if str(seat[0]) == selected_seat:
                    menu_item.activate()
                menu_item.connect("activate", self.choose_seat, str(seat[0]))
                menu_item.show()

        # Process item choice
        def choose_seat(self, entry, seat_name):
            # Looks like this is called when an item is deselected, too; must check if active
            if entry.get_active():
                # This can either succeed or fail
                self.logind.AttachDevice(seat_name, '/sys' + self.device_syspath, True)

    The chosen RadioMenuItem gets marked irrespective of the choose_seat() execution result, and the only way to set the marked item without triggering choose_seat() is to re-run update_seat_menu() with the selected_seat argument, which is overkill. I tried to connect choose_seat() to 'button-release-event' instead of 'activate' and to call entry.activate() in choose_seat() if AttachDevice() succeeds, but this resulted in a lockup of the whole X desktop until AttachDevice() timed out, and the chosen item still got marked.
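
    A sketch of one common way to handle both requirements in PyGObject: remember the handler id returned by connect(), and wrap set_active() in handler_block()/handler_unblock() when the state should change without running choose_seat(); on failure, restore the previous item the same way. The seat_items and current_seat attributes below are illustrative bookkeeping, not from the question's code, and the sketch assumes the D-Bus AttachDevice call raises on failure:

        # When creating each item, keep the handler id:
        handler_id = menu_item.connect("activate", self.choose_seat, str(seat[0]))
        self.seat_items[str(seat[0])] = (menu_item, handler_id)

        # Mark an item programmatically without triggering choose_seat():
        def set_active_silently(self, seat_name):
            menu_item, handler_id = self.seat_items[seat_name]
            menu_item.handler_block(handler_id)
            menu_item.set_active(True)
            menu_item.handler_unblock(handler_id)

        # In choose_seat(), roll back on failure:
        def choose_seat(self, entry, seat_name):
            if not entry.get_active():
                return
            try:
                self.logind.AttachDevice(seat_name, '/sys' + self.device_syspath, True)
                self.current_seat = seat_name
            except Exception:
                # AttachDevice failed: silently re-mark the previously active item
                self.set_active_silently(self.current_seat)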

    Read the article

  • The best way to predict performance without actually porting the code?

    - by ardiyu07
    I believe there are people who have had the same experience as me, where one must give an (estimated) performance report for porting a program from sequential to parallel code on some designated multicore hardware, with very little time given. For instance, if a 10K LoC sequential program runs on an Intel i7-3770K (not vectorized) in 100 ms, how long would it take to run if one parallelized the code for a Tesla C2075 with NVIDIA CUDA, given that all kinds of parallelization optimization techniques were applied? (But you're only given 2-4 days to report the performance, and assume that you don't know the algorithm at all. Or perhaps it'd be safer to just assume that finishing the job is an impossible situation.) Therefore, I'm wondering what would most likely be the fastest way to produce such a performance report. Is it safe to calculate solely from the hardware's capability, such as peak GFLOPS and memory bandwidth? Is there a mathematical way to calculate it? If there is, please prove your method with the corresponding problem description and algorithm, and also the target hardware's specifications. Or perhaps there already exists such a tool to (roughly) estimate code porting? (Please don't answer: 'kill yourself is the fastest way.')
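
    For a first-order, back-of-the-envelope bound of the kind hinted at here, the roofline model is the usual tool: estimate the kernel's floating-point work and its DRAM traffic, and take the larger of the compute-limited and bandwidth-limited times as a best-case runtime. A sketch; the workload figures are placeholders and the C2075 numbers are the commonly quoted peaks, so treat all of them as assumptions:

        flops = 2.0e12          # total floating-point operations in the kernel (estimate)
        bytes_moved = 4.0e11    # total DRAM traffic in bytes (estimate)

        peak_flops = 1.03e12    # Tesla C2075, ~1.03 TFLOP/s single precision (quoted peak)
        peak_bw = 1.44e11       # Tesla C2075, ~144 GB/s memory bandwidth (quoted peak)

        t_bound = max(flops / peak_flops, bytes_moved / peak_bw)
        print("best-case runtime: %.3f s" % t_bound)

    Real code rarely reaches either peak, so this gives a lower bound on runtime rather than a prediction.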

    Read the article
