Search Results

Search found 8440 results on 338 pages for 'wms implementation'.

Page 92/338

  • How to call a function from another class file

    - by Guy Parker
    I am very familiar with writing VB-based applications but am new to Xcode (and Objective-C). I have gone through numerous tutorials on the web and understand the basics and how to interact with Interface Builder etc. However, I am really struggling with some basic concepts of the C language and would be grateful for any help you can offer. Here's my problem: I have a simple iPhone app which has a view controller (FirstViewController) and a subview (SecondViewController) with associated header and class files. In FirstViewController.m I have a function defined:

        @implementation FirstViewController

        - (void)writeToServer:(const uint8_t *)buf {
            [oStream write:buf maxLength:strlen((char *)buf)];
        }

    It doesn't really matter what the function is. I want to use this function in my SecondViewController, so in SecondViewController.m I import FirstViewController.h:

        #import "SecondViewController.h"
        #import "FirstViewController.h"

        @implementation SecondViewController

        - (IBAction)SetButton:(id)sender {
            NSString *s = [@"Fill:" stringByAppendingString:FillLevelValue.text];
            NSString *strToSend = [s stringByAppendingString:@":"];
            const uint8_t *str = (uint8_t *)[strToSend cStringUsingEncoding:NSASCIIStringEncoding];
            FillLevelValue.text = strToSend;
            [FirstViewController writeToServer:str];
        }

    This last line is where my problem is. Xcode tells me that FirstViewController may not respond to writeToServer, and when I try to run the application it crashes when this function is called. I guess I don't fully understand how to share functions and, more importantly, the relationship between classes. In an ideal world I would create a global class to place my functions in and call them as required. Any advice gratefully received.

    Read the article

  • JBoss 6 clustered singleton EJB

    - by DanC
    I am trying to set up JBoss 6 in a clustered environment and use it to host clustered stateful singleton EJBs. So far we have successfully installed a singleton EJB within the cluster, where different entry points to our application (through a website deployed on each node) point to a single environment on which the EJB is hosted (thus maintaining the state of static variables). We achieved this using the following configuration.

    Bean interface:

        @Remote
        public interface IUniverse { ... }

    Bean implementation:

        @Clustered
        @Stateful
        public class Universe implements IUniverse {
            private static Vector<String> messages = new Vector<String>();
            ...
        }

    jboss-beans.xml configuration:

        <deployment xmlns="urn:jboss:bean-deployer:2.0">
            <!-- This bean is an example of a clustered singleton -->
            <bean name="Universe" class="Universe"> </bean>
            <bean name="UniverseController" class="org.jboss.ha.singleton.HASingletonController">
                <property name="HAPartition"><inject bean="HAPartition"/></property>
                <property name="target"><inject bean="Universe"/></property>
                <property name="targetStartMethod">startSingleton</property>
                <property name="targetStopMethod">stopSingleton</property>
            </bean>
        </deployment>

    The main problem with this implementation is that after the master node (the one that holds the state of the singleton EJB) shuts down gracefully, the singleton's state is lost and reset to its defaults. Please note that everything was built following the JBoss 5 clustering documents, as no JBoss 6 documents were found on this subject. Any information on how to solve this problem, or on where to find JBoss 6 clustering documentation, is appreciated.

    Read the article

  • How to efficiently compare the sign of two floating-point values while handling negative zeros

    - by François Beaune
    Given two floating-point numbers, I'm looking for an efficient way to check whether they have the same sign, with the convention that if either value is zero (+0.0 or -0.0), it is considered to have the same sign as anything. For instance:

        SameSign(1.0, 2.0)   should return true
        SameSign(-1.0, -2.0) should return true
        SameSign(-1.0, 2.0)  should return false
        SameSign(0.0, 1.0)   should return true
        SameSign(0.0, -1.0)  should return true
        SameSign(-0.0, 1.0)  should return true
        SameSign(-0.0, -1.0) should return true

    A naive but correct implementation of SameSign in C++ would be:

        bool SameSign(float a, float b)
        {
            if (fabs(a) == 0.0f || fabs(b) == 0.0f)
                return true;
            return (a >= 0.0f) == (b >= 0.0f);
        }

    Assuming the IEEE floating-point model, here's a variant of SameSign that compiles to branchless code (at least with Visual C++ 2008):

        bool SameSign(float a, float b)
        {
            int ia = binary_cast<int>(a);
            int ib = binary_cast<int>(b);
            int az = (ia & 0x7FFFFFFF) == 0;
            int bz = (ib & 0x7FFFFFFF) == 0;
            int ab = (ia ^ ib) >= 0;
            return (az | bz | ab) != 0;
        }

    with binary_cast defined as follows:

        template <typename Target, typename Source>
        inline Target binary_cast(Source s)
        {
            union { Source m_source; Target m_target; } u;
            u.m_source = s;
            return u.m_target;
        }

    I'm looking for two things: a faster, more efficient implementation of SameSign, using bit tricks, FPU tricks or even SSE intrinsics; and an efficient extension of SameSign to three values.
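
    One illustrative (though not necessarily optimal) three-value extension, a minimal sketch reusing the two-argument SameSign above rather than an answer to the efficiency question:

        #include <cassert>

        // Sketch only: all three pairs must be checked, because the
        // "zero matches either sign" rule makes the relation non-transitive
        // (consider a = 1.0f, b = 0.0f, c = -1.0f).
        bool SameSign(float a, float b, float c)
        {
            return SameSign(a, b) && SameSign(b, c) && SameSign(a, c);
        }

        int main()
        {
            assert(SameSign(1.0f, 2.0f, 3.0f));
            assert(!SameSign(1.0f, 0.0f, -1.0f)); // the zero does not bridge +1 and -1
            return 0;
        }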

    Read the article

  • How is a referencing environment generally implemented for closures?

    - by Alexandr Kurilin
    Let's say I have a statically/lexically scoped language with deep binding and I create a closure. The closure will consist of the statements I want executed plus the so-called referencing environment, or, to quote this post, the collection of variables which can be used. What does this referencing environment actually look like implementation-wise?

    I was recently reading about Objective-C's implementation of blocks, and the author suggests that behind the scenes you get a copy of all of the variables on the stack and also of all the references to heap objects. The explanation claims that you get a "snapshot" of the referencing environment at the point in time of the closure's creation. Is that more or less what happens, or did I misread that? Is anything done to "freeze" a separate copy of the heap objects, or is it safe to assume that if they get modified between closure creation and closure execution, the closure will no longer be operating on the original version of the object?

    If there is indeed copying being made, are there memory usage considerations in situations where one might want to create plenty of closures and store them somewhere? I think that misunderstanding some of these concepts might lead to tricky issues like the ones Eric Lippert mentions in this blog post. It's interesting because you'd think it wouldn't make sense to keep a reference to a value type that might be gone by the time the closure is called, but I'm guessing that in C# the compiler will figure out that the variable is needed later and put it on the heap instead. It seems that in most memory-managed languages everything is a reference, and thus Objective-C is in a somewhat unique situation in having to deal with copying what's on the stack.
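
    For comparison (C++ rather than the Objective-C/C# discussed above), lambdas expose the two strategies directly: a by-value capture behaves as the creation-time "snapshot" described in the post, while a by-reference capture sees later writes. A minimal sketch:

        #include <iostream>

        int main()
        {
            int x = 1;
            auto byValue = [x]  { return x; }; // copies x now: a snapshot
            auto byRef   = [&x] { return x; }; // refers to the original x
            x = 2;
            std::cout << byValue() << "\n"; // prints 1: the creation-time copy
            std::cout << byRef()   << "\n"; // prints 2: the later write is visible
            return 0;
        }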

    Read the article

  • How to update UI controls in a Cocoa application from a background thread

    - by AmitSri
    The following is the .m code:

        #import "ThreadLabAppDelegate.h"

        @interface ThreadLabAppDelegate()
        - (void)processStart;
        - (void)processCompleted;
        @end

        @implementation ThreadLabAppDelegate

        @synthesize isProcessStarted;

        - (void)awakeFromNib {
            //Set levelindicator's maximum value
            [levelIndicator setMaxValue:1000];
        }

        - (void)dealloc {
            //Never called while debugging ????
            [super dealloc];
        }

        - (IBAction)startProcess:(id)sender {
            //Set process flag to true
            self.isProcessStarted = YES;
            //Start Animation
            [spinIndicator startAnimation:nil];
            //Perform selector in background thread
            [self performSelectorInBackground:@selector(processStart) withObject:nil];
        }

        - (IBAction)stopProcess:(id)sender {
            //Stop Animation
            [spinIndicator stopAnimation:nil];
            //Set process flag to false
            self.isProcessStarted = NO;
        }

        - (void)processStart {
            int counter = 0;
            while (counter != 1000) {
                NSLog(@"Counter : %d", counter);
                //Sleep background thread to reduce CPU usage
                [NSThread sleepForTimeInterval:0.01];
                //Set the level indicator value to show progress
                [levelIndicator setIntValue:counter];
                //Increment counter
                counter++;
            }
            //Notify main thread that processing has completed
            [self performSelectorOnMainThread:@selector(processCompleted) withObject:nil waitUntilDone:NO];
        }

        - (void)processCompleted {
            //Stop Animation
            [spinIndicator stopAnimation:nil];
            //Set process flag to false
            self.isProcessStarted = NO;
        }

        @end

    I need to clear up the following things about the code above:

    1. How do I interrupt/cancel the processStart while loop from a UI control?
    2. I also need to show the counter value in the main UI, which I suppose I can do with performSelectorOnMainThread and a passed argument. I just want to know, is there any other way to do that?
    3. When my app starts, Activity Monitor shows 1 thread, but when processStart() runs in the background two new threads are created, making 3 threads in total until the loop finishes; after the loop completes I see 2 threads. My understanding is that one thread is created when I call performSelectorInBackground, but where does the third thread come from? And what if the thread count increases on every call of the selector: how do I control that, or is my implementation bad for this kind of requirement?

    Thanks

    Read the article

  • VB6 Game Development

    - by CVS-2600Hertz-wordpress-com
    Hi all, I am developing a game in VB6 (please don't ask me why :) ). The storyboard is ready and a rough implementation is underway. I am following a pure-software-rendering approach (i.e. no DirectX, no OpenGL, etc.). Amongst many others, the following "serious" problems exist:

    1. 2D alpha transparency required to implement overlays.
    2. Parallax implementation to give a depth-of-field illusion.
    3. Capturing mouse-scroll events globally (as in FPSes; mapping them to changing weapons).
    4. Asynchronous sound playback with absolutely near-zero lag.

    Any ideas, anyone? Please suggest any well-documented library/OCX or sample code, and please suggest solutions with good performance and as little overhead as possible. Also, anyone who has developed any games and would be open to sharing her/his code would be highly appreciated (any well-acknowledged VB games whose source code I can study?).

    UPDATE: Here is a screenshot of GearHead Garage. This picture ought to describe what I was attempting in words above... :) Thank you

    Read the article

  • Android depth buffer issue: advice for anyone experiencing this problem

    - by Andrew Smith
    I've wasted around 30 hours this week writing and re-writing code, believing that I had misunderstood how the OpenGL depth buffer works. Everything I tried failed. I have now resolved my problem by finding what may be an error in the Android implementation of OpenGL. See this API entry: http://www.opengl.org/sdk/docs/man/xhtml/glClearDepth.xml

        void glClearDepth(GLclampd depth);

    "Specifies the depth value used when the depth buffer is cleared. The initial value is 1."

    Android's implementation has two versions of this command:

    - glClearDepthx, which takes an integer value, clamped 0-1
    - glClearDepthf, which takes a floating-point value, clamped 0-1

    If you use glClearDepthf(1) then you get the results you would expect. If you use glClearDepthx(1), as I was doing, then you get different results. (Note that 1 is the default value, but calling the command with the argument 1 produces different results than not calling it at all.) Quite what is happening I do not know, but the depth buffer was being cleared to a value different from what I had specified.
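
    A plausible explanation, an assumption not stated in the report above: in OpenGL ES, the x-suffixed entry points take GLfixed/GLclampx, a 16.16 fixed-point type in which 1.0 is represented as 0x10000, so glClearDepthx(1) actually requests a clear depth of 1/65536. A minimal sketch under that assumption:

        #include <GLES/gl.h>

        /* Sketch, assuming the OpenGL ES fixed-point convention: GLclampx
         * encodes reals in 16.16 fixed point, so 1.0 == 0x10000 (65536). */
        void clearDepthToOne(void)
        {
            glClearDepthf(1.0f);    /* clears the depth buffer to 1.0 */
            glClearDepthx(0x10000); /* same effect: 65536 is 1.0 in 16.16 fixed point */
            /* glClearDepthx(1) would request 1/65536, i.e. nearly 0.0 */
        }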

    Read the article

  • Need help implementing this algorithm with Hadoop MapReduce

    - by Julia
    Hi all! I have an algorithm that goes through a large data set, reads some text files, and searches for specific terms in those lines. I have it implemented in Java, but I didn't want to post the code so that it doesn't look like I am searching for someone to implement it for me - but it is true, I really need a lot of help! This was not planned for my project, but the data set turned out to be huge, so the teacher told me I have to do it like this.

    EDIT (I did not clarify this in the previous version): the data set I have is on a Hadoop cluster, and I should make a MapReduce implementation of it. I was reading about MapReduce and thought that I would first do the standard implementation and then it would be more or less easy to do it with MapReduce. But that didn't happen, since the algorithm is quite stupid and nothing special, and MapReduce... I can't wrap my mind around it.

    So here, shortly, is the pseudocode of my algorithm:

        LIST termList (there is a method that creates this list from a Lucene index)
        FOLDER topFolder

        INPUT topFolder
        IF it is a folder and not empty
            list files (there are 30 subfolders inside)
            FOR EACH subfolder
                GET file "CheckedFile.txt"
                analyze(CheckedFile)
            ENDFOR
        END IF

        Method ANALYZE(CheckedFile)
            read CheckedFile
            WHILE CheckedFile has next line
                GET line
                FOR (loops through termList)
                    GET third word from line
                    IF third word = term from list
                        append whole line to string buffer
                    ENDIF
                ENDFOR
            END WHILE
            OUTPUT string buffer to file

    Also, as you can see, each time "analyze" is called a new file has to be created, and I understood that MapReduce has difficulty writing to many outputs??? I understand the MapReduce intuition, and my example seems perfectly suited for MapReduce, but when it comes to doing it, obviously I do not know enough and I am STUCK! Please please help.

    Read the article

  • Is there a pattern for initializing objects created with a DI container?

    - by Igor Zevaka
    I am trying to get Unity to manage the creation of my objects, and I want to have some initialization parameters that are not known until run time. At the moment the only way I could think of to do it is to have an Init method on the interface:

        interface IMyIntf {
            void Initialize(string runTimeParam);
            string RunTimeParam { get; }
        }

    Then to use it (in Unity) I would do this:

        var myIntf = unityContainer.Resolve<IMyIntf>();
        myIntf.Initialize("somevalue");

    In this scenario runTimeParam is determined at run time based on user input. The trivial case here simply returns the value of runTimeParam, but in reality the parameter will be something like a file name, and the initialize method will do something with the file.

    This creates a number of issues, namely that the Initialize method is available on the interface and can be called multiple times. Setting a flag in the implementation and throwing an exception on a repeated call to Initialize seems way clunky. At the point where I resolve my interface I don't want to know anything about the implementation of IMyIntf. What I do want, though, is the knowledge that this interface needs certain one-time initialization parameters. Is there a way to somehow annotate (attributes?) the interface with this information and pass those to the framework when the object is created?

    Edit: Described the interface a bit more.

    Read the article

  • stealing inside the move constructor

    - by FredOverflow
    During the implementation of the move constructor of a toy class, I noticed a pattern:

        array2D(array2D&& that)
        {
            data_ = that.data_;
            that.data_ = 0;
            height_ = that.height_;
            that.height_ = 0;
            width_ = that.width_;
            that.width_ = 0;
            size_ = that.size_;
            that.size_ = 0;
        }

    The pattern obviously being:

        member = that.member;
        that.member = 0;

    So I wrote a preprocessor macro to make stealing less verbose and error-prone:

        #define STEAL(member) member = that.member; that.member = 0;

    Now the implementation looks as follows:

        array2D(array2D&& that)
        {
            STEAL(data_);
            STEAL(height_);
            STEAL(width_);
            STEAL(size_);
        }

    Are there any downsides to this? Is there a cleaner solution that does not require the preprocessor?
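
    One macro-free alternative, a sketch assuming C++14 is available (the question predates it): std::exchange stores the replacement and returns the old value, expressing the steal-and-reset pattern once per member:

        #include <cstddef>
        #include <utility> // std::exchange (C++14)

        class array2D {
            int* data_ = nullptr;
            std::size_t height_ = 0, width_ = 0, size_ = 0;
        public:
            array2D() = default;
            // Each member is initialized with the old value of that.member,
            // and that.member is reset to its empty state in the same expression.
            array2D(array2D&& that) noexcept
              : data_(std::exchange(that.data_, nullptr)),
                height_(std::exchange(that.height_, 0)),
                width_(std::exchange(that.width_, 0)),
                size_(std::exchange(that.size_, 0))
            {}
        };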

    Read the article

  • Android - Basic CRUD (REST/RPC client) to remote server

    - by bsreekanth
    There is a lot of discussion about REST client implementation in Android, based on the talk at Google I/O 2010 by Virgil Dobjanschi. My requirements may not be that involved, as I have some freedom to choose the configuration:

    - I only target tablets.
    - Configuration changes can be prevented if there is no other easy way (fix at landscape mode, etc.).

    What I am trying to achieve: basic CRUD operations against my server (JSON-RPC/REST), basically mimicking an AJAX request from an Android app (no WebView; I need a native app). Based on the above-mentioned talk and some reading, I see these options:

    1. Implement any of the 3 patterns mentioned in the Google I/O talk. In particular, the last pattern may be more suitable, as I don't care much about caching; but I am not sure how "real time" the sync implementation is.
    2. Use an HTTP request in an AsyncTask. Simplest, but I need to avoid re-sending the request during a change in device configuration (orientation change, etc.). Even if I fix at one orientation, recreation of the activity still happens, so I need to handle it elegantly.
    3. Use a service to handle the HTTP requests. So far, the advice is to use a service for long-running requests; I am not sure whether it is a good approach for simple GET/POST/PUT requests.

    Please share your experience on what could be the best way.

    Read the article

  • Is learning C++ a good idea?

    - by chang
    The more I hear and read about C++ (e.g. this: http://lwn.net/Articles/249460/), the more I get the impression that I'd waste my time learning C++. I once wrote a network routing algorithm in C++ for a simulator, and it was a pain (as expected, especially coming from a Perl/Python/Java background...). I'm never happy about giving up on some technology, but I would be happy if I could limit my knowledge of C-family languages to just C, C# and Objective-C (even OS X's Cocoa, which is huge and takes a lot of time to learn, looks like joy compared to C++...).

    Do I need to consider myself dumb or unwilling just because I'm not partial to the pain involved in learning this stuff? Technologies advance, and there will be options other than C++ when deciding on implementation languages, or not? And as for speed: if speed were that critical, I'd go for a plain C implementation instead, or write C extensions for much more productive languages like Ruby or Python...

    The one-line version of the above: will C++ stay such a relevant language that every committed programmer should be familiar with it?

    [edit / thank you very much for your interesting and useful answers so far]
    [edit / I am accepting the top-rated answer; thanks again for all answers!]

    Read the article

  • How are declared private ivars different from synthesized ivars?

    - by lemnar
    I know that the modern Objective-C runtime can synthesize ivars. I thought that synthesized ivars behaved exactly like declared ivars that are marked @private, but they don't. As a result, some code compiles only under the modern runtime that I expected would work on either. For example, a superclass:

        @interface A : NSObject {
        #if !__OBJC2__
        @private
            NSString *_c;
        #endif
        }
        @property (nonatomic, copy) NSString *d;
        @end

        @implementation A
        @synthesize d=_c;
        - (void)dealloc {
            [_c release];
            [super dealloc];
        }
        @end

    and a subclass:

        @interface B : A {
        #if !__OBJC2__
        @private
            NSString *_c;
        #endif
        }
        @property (nonatomic, copy) NSString *e;
        @end

        @implementation B
        @synthesize e=_c;
        - (void)dealloc {
            [_c release];
            [super dealloc];
        }
        @end

    A subclass can't have a declared ivar with the same name as one of its superclass's declared ivars, even if the superclass's ivar is private. This seems to me like a violation of the meaning of @private, since the subclass is affected by the superclass's choice of something private. What I'm more concerned about, however, is how I should think about synthesized ivars. I thought they acted like declared private ivars, but without the fragile base class problem. Maybe that's right, and I just don't understand the fragile base class problem. Why does the above code compile only in the modern runtime? Does the fragile base class problem exist when all superclass instance variables are private?

    Read the article

  • CCNet exception during build of VS2010 project

    - by sonee
    We have two build machines. Lately, we've migrated our projects from VS2005 to VS2010. But the problem is that one of the machines gets an error during the build. The other machine works well; just one machine shows the error. The differences between the machines are the OS and the hardware spec: the machine which works well runs Windows Server 2003, and the other runs Windows 7. The error message is:

        unhandled exception: System.NullReferenceException:
        Microsoft.VisualStudio.Shell.ThreadHelper.InvokeOnUIThread(InvokableBase invokable)
        Microsoft.VisualStudio.Shell.ThreadHelper.Invoke(Action action)
        Microsoft.VisualStudio.Project.VS.Implementation.VSShellServices.InvokeOnUIThread(Action method)
        Microsoft.VisualStudio.Project.VisualC.VCProjectEngine.ApartmentMarshaler.Invoke(Action method)
        Microsoft.VisualStudio.Project.VisualC.VCProjectEngine.VCConfigBuildJob.BuildCompleted(BuildSubmission ar)
        Microsoft.VisualStudio.Project.Contracts.Implementation.BuildProjectBase.BuildCompletedCallbackManager.BuildCompleted(BuildSubmission buildSubmission)
        Microsoft.Build.Execution.BuildSubmission.<CheckForCompletion>b__0(Object state)
        System.Threading.QueueUserWorkItemCallback.WaitCallback_Context(Object state)
        System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean ignoreSyncCtx)
        System.Threading.QueueUserWorkItemCallback.System.Threading.IThreadPoolWorkItem.ExecuteWorkItem()
        System.Threading.ThreadPoolWorkQueue.Dispatch()
        System.Threading._ThreadPoolWaitCallback.PerformWaitCallback()

    Curiously enough, when I run the project build from the command line on the machine which shows the error, it works well. The machine only shows the error when the build is launched by CCNet. I've installed the latest version of CCNet on all machines. Has anybody faced a problem like this?

    Read the article

  • Abstract base class puzzle

    - by 0x80
    In my class design I ran into the following problem:

        class MyData {
            int foo;
        };

        class AbstraktA {
        public:
            virtual void A() = 0;
        };

        class AbstraktB : public AbstraktA {
        public:
            virtual void B() = 0;
        };

        template<class T>
        class ImplA : public AbstraktA {
        public:
            void A() { cout << "ImplA A()"; }
        };

        class ImplB : public ImplA<MyData>, public AbstraktB {
        public:
            void B() { cout << "ImplB B()"; }
        };

        void TestAbstrakt() {
            AbstraktB *b = (AbstraktB *) new ImplB;
            b->A();
            b->B();
        };

    The problem with the code above is that the compiler will complain that AbstraktA::A() is not defined. Interface A is shared by multiple objects, but the implementation of A depends on the template argument. Interface B is the one seen by the outside world, and it needs to be abstract.

    The reason I would like this is that it would allow me to define object C like this: define the interface C inheriting from abstract A, then define the implementation of C using a different data type for template A. I hope I'm clear. Is there any way to do this, or do I need to rethink my design?
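
    One conventional fix for this diamond (a sketch, not from the question itself): inherit AbstraktA virtually on both paths, so ImplB contains a single AbstraktA subobject that ImplA<T>::A() satisfies:

        #include <iostream>

        struct MyData { int foo; };

        class AbstraktA {
        public:
            virtual void A() = 0;
            virtual ~AbstraktA() {}
        };

        // virtual inheritance: both paths now share one AbstraktA subobject
        class AbstraktB : virtual public AbstraktA {
        public:
            virtual void B() = 0;
        };

        template<class T>
        class ImplA : virtual public AbstraktA {
        public:
            void A() { std::cout << "ImplA A()"; }
        };

        class ImplB : public ImplA<MyData>, public AbstraktB {
        public:
            void B() { std::cout << "ImplB B()"; }
        };

        int main()
        {
            AbstraktB *b = new ImplB; // no cast needed: the conversion is unambiguous
            b->A();                   // dispatches to ImplA<MyData>::A()
            b->B();
            delete b;
            return 0;
        }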

    Read the article

  • How to generically copy superclass instances to subclass instances?

    - by gerry
    Hi all, I have a class hierarchy / inheritance like this:

        public class A {
            private String name; // with getters & setters
            public void doAWithName() { ... }
        }

        public class B extends A {
            public void doBWithName() {
                // a different implementation to what I do in class A
            }
        }

        public class C extends B {
            public void doCWithName() {
                // a different implementation to what I do in classes A and B
            }
        }

    So at one time there is an instance of class A with the initialized field "name". Later I want this instance of A to get wrapped into an instance of B or C - the superclass should get wrapped by a subclass! How can I do this most efficiently with respect to DRY? I've thought about a constructor that does some copying with the getters/setters, but in that case I have to repeat myself, which doesn't respect my initial requirement of DRY!

    So, how can I wrap A into B by just initializing B's new fields (with default values) and delegating the rest to a method in A (which knows more than B about which fields of A should be accessed...)? In the same way: if A should be wrapped into C, only a method in C should init C's 'new' fields, delegate to B's wrap method (which therefore inits B's 'new' fields in C), and at last B delegates to A, which copies its fields to the fields of C. In the end I have a new instance of C which has the values of A wrapped (and some default init values for the fields which the inheritance hierarchy has added).

    Read the article

  • Why is TRest in Tuple<T1... TRest> not constrained?

    - by Anthony Pegram
    In a Tuple, if you have more than 7 items, you can provide an 8th item that is another tuple and define up to 7 items, and then another tuple as the 8th and on and on down the line. However, there is no constraint on the 8th item at compile time. For example, this is legal code for the compiler: var tuple = new Tuple<int, int, int, int, int, int, int, double> (1, 1, 1, 1, 1, 1, 1, 1d); Even though the intellisense documentation says that TRest must be a Tuple. You do not get any error when writing or building the code, it does not manifest until runtime in the form of an ArgumentException. You can roughly implement a Tuple in a few minutes, complete with a Tuple-constrained 8th item. I just wonder why it was left off the current implementation? Is it possibly a forward-compatibility issue where they could add more elements with a hypothetical C# 5? Short version of rough implementation interface IMyTuple { } class MyTuple<T1> : IMyTuple { public T1 Item1 { get; private set; } public MyTuple(T1 item1) { Item1 = item1; } } class MyTuple<T1, T2> : MyTuple<T1> { public T2 Item2 { get; private set; } public MyTuple(T1 item1, T2 item2) : base(item1) { Item2 = item2; } } class MyTuple<T1, T2, TRest> : MyTuple<T1, T2> where TRest : IMyTuple { public TRest Rest { get; private set; } public MyTuple(T1 item1, T2 item2, TRest rest) : base(item1, item2) { Rest = rest; } } ... var mytuple = new MyTuple<int, int, MyTuple<int>>(1, 1, new MyTuple<int>(1)); // legal var mytuple2 = new MyTuple<int, int, int>(1, 2, 3); // illegal

    Read the article

  • Indices instead of pointers in STL containers?

    - by zvrba
    Due to specific requirements [*], I need a singly-linked list implementation that uses integer indices instead of pointers to link the nodes. The indices are always interpreted with respect to a vector containing the list nodes. I thought I might achieve this by defining my own allocator, but looking into GCC's implementation of <list>, they explicitly use pointers for the link fields in the list nodes (i.e., they do not use the pointer type provided by the allocator):

        struct _List_node_base
        {
            _List_node_base* _M_next;  ///< Self-explanatory
            _List_node_base* _M_prev;  ///< Self-explanatory
            ...
        }

    (For this purpose, the allocator interface is also deficient in that it does not define a dereference function; "dereferencing" an integer index always needs a pointer to the underlying storage.)

    Do you know of a library of STL-like data structures (I am mostly in need of singly- and doubly-linked lists) that uses indices (with respect to a base vector) instead of pointers to link the nodes?

    [*] Saving space: the lists will contain many 32-bit integers. With two pointers per node (the STL list is doubly-linked), the overhead is 200%, or 400% on a 64-bit platform, not counting the overhead of the default allocator.
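
    Absent such a library, the core idea fits in a short sketch (an illustration only, not a drop-in STL replacement): keep the nodes in a vector and use a 32-bit sentinel index as the null link:

        #include <cstdint>
        #include <vector>

        // Singly-linked list whose links are 32-bit indices into a vector:
        // one 4-byte link per node on both 32- and 64-bit platforms.
        struct IndexList {
            static const std::uint32_t npos = 0xFFFFFFFFu; // the "null pointer"

            struct Node {
                std::uint32_t value;
                std::uint32_t next; // index of the next node, or npos
            };

            std::vector<Node> nodes; // owns all nodes contiguously
            std::uint32_t head;

            IndexList() : head(npos) {}

            void push_front(std::uint32_t v)
            {
                nodes.push_back(Node{v, head});
                head = static_cast<std::uint32_t>(nodes.size() - 1);
            }
        };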

    Read the article

  • Can't add to NSMutableArray from SecondaryView

    - by Antonio
    Hi guys, I've searched and read and still haven't found a concrete answer. In brief: I have an application where I declare an NSMutableArray in my AppDelegate and initialize it when the application loads (code below). I have a SecondaryViewController call this function, and I have it output a string to let me know what the array size is. Every time I run it, it executes but it does not add any objects to the array. How do I fix this?

    AppDelegate.h file:

        #import <UIKit/UIKit.h>

        @class arrayTestViewController;

        @interface arrayTestAppDelegate : NSObject <UIApplicationDelegate> {
            UIWindow *window;
            arrayTestViewController *viewController;
            NSMutableArray *myArray3;
        }

        @property (nonatomic, retain) NSMutableArray *myArray3;
        @property (nonatomic, retain) IBOutlet UIWindow *window;
        @property (nonatomic, retain) IBOutlet arrayTestViewController *viewController;

        -(void)addToArray3;

        @end

    AppDelegate.m file:

        #import "arrayTestAppDelegate.h"
        #import "arrayTestViewController.h"

        @implementation arrayTestAppDelegate

        @synthesize window;
        @synthesize viewController;
        @synthesize myArray3;

        #pragma mark -
        #pragma mark Application lifecycle

        - (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
            myArray3 = [[NSMutableArray alloc] init];
            // Override point for customization after application launch.
            // Add the view controller's view to the window and display.
            [window addSubview:viewController.view];
            [window makeKeyAndVisible];
            return YES;
        }

        -(void)addToArray3 {
            NSLog(@"Array Count: %d", [myArray3 count]);
            [myArray3 addObject:@"Test"];
            NSLog(@"Array triggered from SecondViewController");
            NSLog(@"Array Count: %d", [myArray3 count]);
        }

    SecondViewController.m file:

        #import "SecondViewController.h"
        #import "arrayTestAppDelegate.h"

        @implementation SecondViewController

        -(IBAction)addToArray {
            arrayTestAppDelegate *object = [[arrayTestAppDelegate alloc] init];
            [object addToArray3];
        }

    Read the article

  • Creating a RESTful API - HELP!

    - by Martin Cox
    Hi chaps, over the last few weeks I've been learning about iOS development, which has naturally led me into the world of APIs. Searching around on the Internet, I've come to the conclusion that using the REST architecture is very much recommended, due to its supposed simplicity and ease of implementation. However, I'm really struggling with the implementation side of REST.

    I understand the concept: using HTTP methods as verbs to describe the action of a request, responding with suitable response codes, and so on. It's just, I don't understand how to code it. I don't get how I map a URI to an object. I understand that a GET request for domain.com/api/user/address?user_id=999 would return the address of user 999 - but I don't understand where or how that mapping from /user/address to some method that queries a database has taken place. Is this all coded in one PHP script? Would I just have a method that grabs the URI like so:

        $array = explode("/", ltrim(rtrim($_SERVER['REQUEST_URI'], "/"), "/"));

    and then cycle through that array? First I would have a request for a "user", so the PHP script would direct my request to the user object and then invoke the address method. Is that what actually happens? I've probably not explained my thought process very well there. The main thing I'm not getting is how the URI /user/address?id=999 somehow is broken down and executed - does it actually resolve to code like this?

        class user(id) {
            address() {
                // get user address
            }
        }

    I doubt I'm making sense now, so I'll call it a day trying to explain further. I hope someone out there can understand what I'm trying to say! Thanks chaps, look forward to your responses. Martin

    p.s. - I'm not a developer yet, I'm learning :)

    Read the article

  • Custom UIProgressView drawing weirdness

    - by Werner
    I am trying to create my own custom UIProgressView by subclassing it and then overriding the drawRect function. Everything works as expected except the progress filling bar: I can't get the height and image right. The images are both in Retina resolution and the Simulator is in Retina mode. The images are called "progressBar@2x.png" (28px high) and "progressBarTrack@2x.png" (32px high).

    CustomProgressView.h:

        #import <UIKit/UIKit.h>

        @interface CustomProgressView : UIProgressView
        @end

    CustomProgressView.m:

        #import "CustomProgressView.h"

        @implementation CustomProgressView

        - (id)initWithFrame:(CGRect)frame {
            self = [super initWithFrame:frame];
            if (self) {
                // Initialization code
            }
            return self;
        }

        // Only override drawRect: if you perform custom drawing.
        // An empty implementation adversely affects performance during animation.
        - (void)drawRect:(CGRect)rect {
            // Drawing code
            self.frame = CGRectMake(self.frame.origin.x, self.frame.origin.y, self.frame.size.width, 16);

            UIImage *progressBarTrack = [[UIImage imageNamed:@"progressBarTrack"] resizableImageWithCapInsets:UIEdgeInsetsZero];
            UIImage *progressBar = [[UIImage imageNamed:@"progressBar"] resizableImageWithCapInsets:UIEdgeInsetsMake(4, 4, 5, 4)];

            [progressBarTrack drawInRect:rect];

            NSInteger maximumWidth = rect.size.width - 2;
            NSInteger currentWidth = floor([self progress] * maximumWidth);
            CGRect fillRect = CGRectMake(rect.origin.x + 1, rect.origin.y + 1, currentWidth, 14);

            [progressBar drawInRect:fillRect];
        }

        @end

    The resulting progress view has the right height and width. It also fills at the right percentage (currently set at 80%). But the progress fill image isn't drawn correctly. Does anyone see where I've gone wrong?

    Read the article

  • Why doesn't java.lang.Number implement Comparable?

    - by Julien Chastang
    Does anyone know why java.lang.Number does not implement Comparable? This means that you cannot sort Numbers with Collections.sort, which seems to me a little strange.

    Post-discussion update: Thanks for all the helpful responses. I ended up doing some more research on this topic. The simplest explanation for why java.lang.Number does not implement Comparable is rooted in mutability concerns.

    For a bit of review, java.lang.Number is the abstract super-type of AtomicInteger, AtomicLong, BigDecimal, BigInteger, Byte, Double, Float, Integer, Long and Short. On that list, AtomicInteger and AtomicLong do not implement Comparable. Digging around, I discovered that it is not a good practice to implement Comparable on mutable types, because the objects can change during or after comparison, rendering the result of the comparison useless. Both AtomicLong and AtomicInteger are mutable. The API designers had the forethought not to have Number implement Comparable because it would have constrained the implementation of future subtypes. Indeed, AtomicLong and AtomicInteger were added in Java 1.5, long after java.lang.Number was initially implemented.

    Apart from mutability, there are probably other considerations here too. A compareTo implementation in Number would have to promote all numeric values to BigDecimal, because that is the only type capable of accommodating all the Number subtypes. The implication of that promotion in terms of mathematics and performance is a bit unclear to me, but my intuition finds that solution kludgy.

    Read the article

  • iOS: click counter for row in tableview

    - by roger.arliner21
    I am developing a tableView in which each row must display a click counter and the row number. Each cell must have an initial click counter value of 0 when the application is initialized, and the click counter value within a cell should increment whenever a user clicks on the cell. I have 26 fixed rows. I have a tableData.plist file, as in the attached image, and I am initializing self.dataArray with the plist. Now I want to do the implementation in the didSelectRowAtIndexPath delegate method: if any row is tapped, that row's click counter should increment.

        - (void)viewDidLoad {
            [super viewDidLoad];
            NSString *path = [[NSBundle mainBundle] pathForResource:@"tableData" ofType:@"plist"];
            self.dataArray = [NSMutableArray arrayWithContentsOfFile:path]; // array initialized with plist
        }

        - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
            return [self.dataArray count];
        }

        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            static NSString *kCustomCellID = @"MyCellID";

            UILabel *label1 = [[UILabel alloc] initWithFrame:CGRectMake(160.0f, 2.0f, 30.0f, 20.0f)];
            UILabel *label2 = [[UILabel alloc] initWithFrame:CGRectMake(160.0f, 24.0f, 30.0f, 30.0f)];
            label2.textColor = [UIColor grayColor];

            UITableViewCell *cell = (UITableViewCell *)[tableView dequeueReusableCellWithIdentifier:kCustomCellID];
            if (cell == nil) {
                cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleSubtitle reuseIdentifier:kCustomCellID] autorelease];
                cell.selectionStyle = UITableViewCellSelectionStyleNone;
            }

            if ([self.dataArray count] > indexPath.row) {
                label1.text = [[self.dataArray objectAtIndex:indexPath.row] objectForKey:@"Lable1"];
                label2.text = [[self.dataArray objectAtIndex:indexPath.row] objectForKey:@"Lable2"];
            } else {
                label1.text = @"";
                label2.text = @"";
            }

            [cell.contentView addSubview:label1];
            [cell.contentView addSubview:label2];
            [label1 release];
            [label2 release];

            return cell;
        }

        #pragma mark -
        #pragma mark Table view delegate

        - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
            // implementation for the click counter incrementer goes here
        }

    Read the article

  • Boost multi_index_container crash in release mode

    - by Zan Lynx
    I have a program that I just changed to use a boost::multi_index_container collection. After I did that and tested my code in debug mode, I was feeling pretty good about myself. However, I then compiled a release build with NDEBUG set, and the code crashed - not immediately, but sometimes in single-threaded tests and often in multi-threaded tests. The segmentation faults happen deep inside Boost insert and rotate functions related to the index updates, and they are happening because a node has NULL left and right pointers. My code looks a bit like this:

        struct Implementation {
            typedef std::pair<uint32_t, uint32_t> update_pair_type;

            struct watch {};
            struct update {};

            typedef boost::multi_index_container<
                update_pair_type,
                boost::multi_index::indexed_by<
                    boost::multi_index::ordered_unique<
                        boost::multi_index::tag<watch>,
                        boost::multi_index::member<update_pair_type, uint32_t, &update_pair_type::first> >,
                    boost::multi_index::ordered_non_unique<
                        boost::multi_index::tag<update>,
                        boost::multi_index::member<update_pair_type, uint32_t, &update_pair_type::second> >
                >
            > update_map_type;

            typedef std::vector<update_pair_type> update_list_type;

            update_map_type update_map;
            update_map_type::iterator update_hint;

            void register_update(uint32_t watch_offset, uint32_t update_offset);
            void do_updates(uint32_t start, uint32_t end);
        };

        void Implementation::register_update(uint32_t watch_offset, uint32_t update_offset)
        {
            update_pair_type new_pair(watch_offset, update_offset);
            update_hint = update_map.insert(update_hint, new_pair);
            if (update_hint->second != update_offset) {
                bool replaced _unused_ = update_map.replace(update_hint, new_pair);
                assert(replaced);
            }
        }
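
    One thing worth noting (an assumption about the failure mode, not a confirmed diagnosis, and it would not account for the occasional single-threaded crash): boost::multi_index_container performs no internal locking, so the multi-threaded tests need external synchronization. A minimal sketch guarding the writer with a C++11 mutex:

        #include <mutex>

        static std::mutex update_mutex; // serializes access to update_map and update_hint

        void Implementation::register_update(uint32_t watch_offset, uint32_t update_offset)
        {
            std::lock_guard<std::mutex> lock(update_mutex); // one writer at a time
            update_pair_type new_pair(watch_offset, update_offset);
            update_hint = update_map.insert(update_hint, new_pair);
            if (update_hint->second != update_offset)
                update_map.replace(update_hint, new_pair);
        }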

    Read the article

  • In Varnish, how can I read the Set-Cookie response header?

    - by Adam Friedman
    I am trying to detect whether my application has set a cookie that holds an "alert message" for the user on the next page, where the JavaScript displays it if detected. In my vcl_fetch(), I need to detect whether the specific cookie value "alert_message" appears anywhere in the Set-Cookie header (presumably in the VCL variable beresp.http.Set-Cookie). If detected, then I do not want to cache that next page (since Varnish strips the Set-Cookie header by default, which would obliterate the alert message before it makes it back to the browser). So here is my simple test:

        if (beresp.http.Set-Cookie ~ "alert_message") {
            set req.http.no_cache = 1;
        }

    Strangely, it fails to evaluate to true. So I put the variable into the Server header to see what it looks like:

        set beresp.http.Server = " MyApp Varnish implementation - test reading set-cookie: " + beresp.http.Set-Cookie;

    But for some reason this only displays the FIRST Set-Cookie line in the response. Here are the relevant response headers:

        Server: MyApp Varnish implementation - test reading cookie: elstats_session=7d7279kjmsnkel31lre3s0vu24; expires=Wed, 10-Oct-2012 00:03:32 GMT; path=/; HttpOnly
        Set-Cookie: app_session=7d7279kjmsnkel31lre3s0vu24; expires=Wed, 10-Oct-2012 00:03:32 GMT; path=/; HttpOnly
        Set-Cookie: alert_message=Too+many+results.; expires=Tue, 09-Oct-2012 20:13:32 GMT; path=/; domain=.my.app.com
        Set-Cookie: alert_key=flash_error; expires=Tue, 09-Oct-2012 20:13:32 GMT; path=/; domain=.my.app.com
        Vary: Accept-Encoding

    How do I read and run string detection on ALL Set-Cookie header lines?

    Read the article
