Search Results

Search found 9353 results on 375 pages for 'implementation phase'.


  • Why does my iOS app crash when calling presentModalViewController?

    - by user555807
    I've been banging my head against this all day; it seems like something simple but I can't figure it out. I've got an iOS app that I created using the "View-based Application" template in Xcode. Here is essentially the code I have: AppDelegate.h: #import <UIKit/UIKit.h> @interface AppDelegate : NSObject <UIApplicationDelegate> { UIWindow *window; MainViewController *viewController; } @property (nonatomic, retain) IBOutlet UIWindow *window; @property (nonatomic, retain) IBOutlet MainViewController *viewController; @end AppDelegate.m: #import "AppDelegate.h" #import "MainViewController.h" @implementation AppDelegate @synthesize window, viewController; - (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions { [self.window addSubview:viewController.view]; [self.window makeKeyAndVisible]; return YES; } - (void)dealloc { [viewController release]; [window release]; [super dealloc]; } @end MainViewController.h: #import <UIKit/UIKit.h> @interface MainViewController : UIViewController { } -(IBAction) button:(id)sender; @end MainViewController.m: #import "MainViewController.h" #import "ModalViewController.h" @implementation MainViewController ... -(IBAction) button:(id)sender { ModalViewController *mvc = [[[ModalViewController alloc] initWithNibName:NSStringFromClass([ModalViewController class]) bundle:nil] autorelease]; [self presentModalViewController:mvc animated:YES]; } @end There's nothing of interest in the ModalViewController. So the modal view should display when the button is pressed. When I press the button, it hangs for a second then crashes back to the home screen with no error message. I am stumped - please show me what I'm doing wrong!

    Read the article

  • Is there a way to automatically call all versions of an inherited method?

    - by Eric
    I'm writing a plug-in for a 3D modeling program. I have a custom class that wraps instances of elements in the 3D model, and in turn derives its properties from the element it wraps. When the element in the model changes I want my class(es) to update their properties based on the new geometry. In the simplified example below I have classes AbsCurveBased, Extrusion, and Shell, each derived from the previous one. Each of these classes implements a RefreshFromBaseShape() method which updates specific properties based on the current baseShape the class is wrapping. I can call base.RefreshFromBaseShape() in each implementation of RefreshFromBaseShape() to ensure that all the properties are updated, but I'm wondering if there is a better way, where I don't have to remember to do this in every implementation of RefreshFromBaseShape(). For example, because AbsCurveBased does not have a parameterless constructor, the code won't even compile unless the constructors call the base class constructors. public abstract class AbsCurveBased { internal Curve baseShape; double Area{get;set;} public AbsCurveBased(Curve baseShape) { this.baseShape = baseShape; RefreshFromBaseShape(); } public virtual void RefreshFromBaseShape() { //sets the Area property from the baseShape } } public class Extrusion : AbsCurveBased { double Volume{get;set;} double Height{get;set;} public Extrusion(Curve baseShape):base(baseShape) { this.baseShape = baseShape; RefreshFromBaseShape(); } public override void RefreshFromBaseShape() { base.RefreshFromBaseShape(); //sets the Volume property based on the area and the height } } public class Shell : Extrusion { double ShellVolume{get;set;} double ShellThickness{get;set;} public Shell(Curve baseShape): base(baseShape) { this.baseShape = baseShape; RefreshFromBaseShape(); } public void RefreshFromBaseShape() { base.RefreshFromBaseShape(); //sets this Shell Volume from the Extrusion properties and ShellThickness property } }

    Read the article

  • Inserting bits into byte

    - by JB_SO
    I was looking at an example of reading bits from a byte and the implementation looked simple and easy to understand. I was wondering if anyone has a similar example of how to insert bits into a byte or byte array that is as easy to understand and to implement as the example below. Here is the example I found of reading bits from a byte (http://bytes.com/topic/c-sharp/answers/505085-reading-bits-byte-file): static int GetBits3(byte b, int offset, int count) { return (b >> offset) & ((1 << count) - 1); } Here is what I'm trying to do, and this is my current implementation. I'm just a little confused with the bit masking/shifting, etc., which is why I'm trying to find out if there is an easier way to do it: BYTE Msg[2]; Msg_Id = 3; Msg_Event = 1; Msg_Ready = 2; Msg[0] = ( ( Msg_Event << 4 ) & 0xF0 ) | ( Msg_Id & 0x0F ) ; Msg[1] = Msg_Ready & 0x0F; //MsgReady & Unused Thanks for your help!
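
    A mirror image of GetBits3 for writing is one way to generalize the masking and shifting above. The sketch below is only an illustration, written in C++ with a hypothetical set_bits helper (the field is cleared first, then the new value is masked and OR-ed in at the given offset):

        #include <cstdint>

        // Write 'count' bits of 'value' into 'target' starting at bit 'offset'.
        // Assumes offset + count <= 8 and that 'value' fits in 'count' bits.
        inline uint8_t set_bits(uint8_t target, int offset, int count, uint8_t value)
        {
            const uint8_t mask = static_cast<uint8_t>(((1u << count) - 1u) << offset); // field to overwrite
            return static_cast<uint8_t>((target & ~mask) | ((value << offset) & mask));
        }

        // Roughly equivalent to the Msg[0] line in the question:
        //   uint8_t msg0 = 0;
        //   msg0 = set_bits(msg0, 0, 4, 3); // Msg_Id in the low nibble
        //   msg0 = set_bits(msg0, 4, 4, 1); // Msg_Event in the high nibble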

    Read the article

  • VBA: Difference in two ways of declaring a new object? (Trying to understand why my solution works)

    - by Matt
    I was creating a new object within a loop, and adding that object to a collection; but when I read back the collection afterwards, it was always filled entirely with the last object I had added. I've come up with two ways around this, but I simply do not understand why my initial implementation was wrong. Original: Dim oItem As Variant Dim sOutput As String Dim i As Integer Dim oCollection As New Collection For i = 0 To 10 Dim oMatch As New clsMatch oMatch.setLineNumber i oCollection.Add oMatch Next For Each oItem In oCollection sOutput = sOutput & "[" & oItem.lineNumber & "]" Next MsgBox sOutput This resulted in every lineNumber being 10; I was obviously not creating new objects, but instead using the same one each time through the loop, despite the declaration being inside of the loop. So, I added Set oMatch = Nothing immediately before the Next line, and this fixed the problem: it was now 0 to 10. So if the old object was explicitly destroyed, then it was willing to create a new one? I would have thought the next iteration through the loop would cause anything declared within the loop to be destroyed due to scope? Curious, I tried another way of declaring a new object: Dim oMatch As clsMatch: Set oMatch = New clsMatch. This, too, results in 0 to 10. Can anyone explain to me why the first implementation was wrong?

    Read the article

  • UIImagePickerController random behavior

    - by pion
    I have the following code snippets: @interface PinRecordNewTableViewController : UITableViewController { } ... @implementation PinRecordNewTableViewController ... - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath { ... PinRecordNewPicture *pinRecordNewPicture = [[PinRecordNewPicture alloc] initWithNibName:@"PinRecordNewPicture" bundle:nil]; pinRecordNewPicture.delegate = self; [self.navigationController pushViewController:pinRecordNewPicture animated:YES]; [pinRecordNewPicture release]; ... } @interface PinRecordNewPicture : UIViewController ... @implementation PinRecordNewPicture ... - (void)picturePicker:(UIImagePickerControllerSourceType)theSource { UIImagePickerController *picker = [[UIImagePickerController alloc] init]; picker.delegate = self; picker.sourceType = theSource; picker.allowsEditing = YES; [self presentModalViewController:picker animated:YES]; [picker release]; } - (IBAction) takePicture:(id)sender { UIImagePickerControllerSourceType source = UIImagePickerControllerSourceTypeCamera; if ([UIImagePickerController isSourceTypeAvailable:source]) { [self picturePicker:source]; } It works 80% of the time -- I can get the picture correctly. The problem is that it occasionally fails to take a picture. When tracing the code, I found out that it occasionally goes to -[PinRecordNewTableViewController viewDidUnload]. This is where it fails, because that method sets all the ivars to nil. What did I do wrong? Did I miss something that makes it behave "randomly"? Thanks in advance for your help.

    Read the article

  • Android's DateFormat replacement - missing the format() overload with FieldPosition

    - by user331244
    Hi, I need to split a date string into pieces, and I'm doing it using public final StringBuffer format(Object object, StringBuffer buffer, FieldPosition field) from the java.text.DateFormat class. However, the implementation of this function is really slow, hence Android has its own implementation in android.text.format.DateFormat. BUT, in my case, I want to extract the different pieces of the date string (year, minute and so on). Since I need to be locale independent, I cannot use SimpleDateFormat and custom strings. I do it as follows: Calendar c = ... // find out what field to extract int field = getField(); // Create a date string Field calendarField = DateFormat.Field.ofCalendarField(field); FieldPosition fieldPosition = new FieldPosition(calendarField); StringBuffer label = new StringBuffer(); label = getDateFormat().format(c.getTime(), label, fieldPosition); // Find the piece that we are looking for int beginIndex = fieldPosition.getBeginIndex(); int endIndex = fieldPosition.getEndIndex(); String asString = label.substring(beginIndex, endIndex); For some reason, the format() overload with the FieldPosition argument is not included in the Android platform. Any ideas of how to do this in another way? Is there any easy way to tokenize the pattern string? Any other ideas?

    Read the article

  • What is the PastryKit Framework?

    - by Kerrick
    I'm trying to find any information I can on the PastryKit Javascript Framework. It appears to be in use on the iPhone User Guide that is displayed on the iPhone itself in Mobile Safari, but I cannot find any documentation or API. If you want to see it in action, open Safari 4, set your user agent to iPhone 3 (In the Develop menu) and check out the guide. Overall, it seems to be a way to write an HTML/CSS/Javascript application that acts like a native iPhone app. When it comes to Javascript, I used the JS Beautifier on (what I assume to be) the framework file and it was over 3,400 lines! Beautified, (again what I assume to be) their implementation of it was over 1,200 lines. On the CSS side, I used Clean CSS on (again what I assume to be) the framework CSS, and it came out to over 700 lines. Their implementation was shy of 500. Does anybody have, or know where to find, any information, documentation, or APIs on PastryKit? Or, can anybody figure out how to implement it?

    Read the article

  • Erlang design of a card game [closed]

    - by user601836
    I would like to discuss with you a possible implementation for a card game in Erlang. The only full example I found online is OpenPoker. I would like to create one myself, so here is the implementation I have in mind:
    - A gen_server to represent a deck: when started it creates a shuffled deck of cards and stores it in its state; it provides a handle_call (draw_card).
    - A gen_server to represent the chat room: stores in its state the registered name of a player process (e.g. player1, player2, luke, etc.). Exports a handle_cast to join the chat (executed by default when somebody successfully joins the game) and one to broadcast a chat message to all users by calling a handle_cast on the gen_server representing a player.
    - A gen_fsm to represent a game instance: has two states (wait_join and turn). Exports join/1 to join the game, plus play_card/2 and send_msg/2. One parameter is the pid of the player process.
    - A gen_server to represent the player: exports only start_link/1, where the parameter is the name to use to register the process (inside the init I call the join method of the gen_fsm). Has different handle_calls (e.g. get_hand, draw_card) and handle_casts (e.g. play_card, deliver_msg, and send_msg).
    - A gen_server to represent the main process: exports join_game/1 (which calls player:start_link/1), send_msg/2 (to call the player's send_msg), and play_card/3 (to call the player's play_card).
    What do you think of this architecture? Thanks in advance

    Read the article

  • Calling a subclass method from a superclass

    - by Shaun
    Preface: This is in the context of a Rails application. The question, however, is specific to Ruby. Let's say I have a Media object. class Media < ActiveRecord::Base end I've extended it in a few subclasses: class Image < Media def show # logic end end class Video < Media def show # logic end end From within the Media class, I want to call the implementation of show from the proper subclass. So, from Media, if self is a Video, then it would call Video's show method. If self is instead an Image, it would call Image's show method. Coming from a Java background, the first thing that popped into my head was 'create an abstract method in the superclass'. However, I've read in several places (including Stack Overflow) that abstract methods aren't the best way to deal with this in Ruby. With that in mind, I started researching typecasting and discovered that this is also a relic of Java thinking that I need to banish from my mind when dealing with Ruby. Defeated, I started coding something that looked like this: def superclass_method # logic this_media = self.type.constantize.find(self.id) this_media.show end I've been coding in Ruby/Rails for a while now, but since this was my first time trying out this behavior and existing resources didn't answer my question directly, I wanted to get feedback from more-seasoned developers on how to accomplish my task. So, how can I call a subclass's implementation of a method from the superclass in Rails? Is there a better way than what I ended up (almost) implementing?
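
    In Ruby, as in most object-oriented languages, an ordinary call to show from inside a Media instance method already dispatches to the subclass's implementation at run time, provided the object really is an instance of the subclass - so the constantize/find round trip should not be needed. As a sketch of the same idea (expressed in C++ rather than Ruby, with hypothetical Media/Video names mirroring the question), virtual dispatch plays the role of Ruby's dynamic dispatch:

        #include <iostream>

        struct Media {
            virtual ~Media() = default;
            virtual void show() { std::cout << "generic media\n"; }
            void superclass_method() {
                show(); // calls the most-derived override, even from base-class code
            }
        };

        struct Video : Media {
            void show() override { std::cout << "video\n"; }
        };

        int main() {
            Video v;
            Media& m = v;
            m.superclass_method(); // prints "video"
        }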

    Read the article

  • NUnit integration programmatically with spring

    - by harkon
    Hi! I have designed a component-based architecture framework and I use NUnit for isolated testing - okay so far. Now I want to enable integration tests, so the tests should use real implementations of the existing components. Each element of a component has a life cycle (init, start and stop), and I created an NUnit component: in its start section the NUnit console runner is executed. Okay - now if I have a test fixture class in my DLLs in the execution path, the runner executes it - fine! But, and this is crucial: each implementation to be tested already exists in the process, and I want to use those instances for testing. If I use the NUnit runner in the current way, each instance will be created twice. And above all: I have a Spring container and an implementation registry, and via this registry I can get access to all instances in the process. But how do I give the test fixture access to the existing registry? Granted, I could start the component architecture framework in the startup of the NUnit runner - but this is not what I want. My guide is the Apache Cactus framework (with JUnit and Tomcat, JBoss, etc.). Can someone help? Thanks a lot! Check: http://cone.codeplex.com

    Read the article

  • How to update UI controls in a Cocoa application from a background thread

    - by AmitSri
    The following is the .m code: #import "ThreadLabAppDelegate.h" @interface ThreadLabAppDelegate() - (void)processStart; - (void)processCompleted; @end @implementation ThreadLabAppDelegate @synthesize isProcessStarted; - (void)awakeFromNib { //Set levelindicator's maximum value [levelIndicator setMaxValue:1000]; } - (void)dealloc { //Never called while debugging ???? [super dealloc]; } - (IBAction)startProcess:(id)sender { //Set process flag to true self.isProcessStarted=YES; //Start Animation [spinIndicator startAnimation:nil]; //perform selector in background thread [self performSelectorInBackground:@selector(processStart) withObject:nil]; } - (IBAction)stopProcess:(id)sender { //Stop Animation [spinIndicator stopAnimation:nil]; //set process flag to false self.isProcessStarted=NO; } - (void)processStart { int counter = 0; while (counter != 1000) { NSLog(@"Counter : %d",counter); //Sleep background thread to reduce CPU usage [NSThread sleepForTimeInterval:0.01]; //set the level indicator value to showing progress [levelIndicator setIntValue:counter]; //increment counter counter++; } //Notify main thread for process completed [self performSelectorOnMainThread:@selector(processCompleted) withObject:nil waitUntilDone:NO]; } - (void)processCompleted { //Stop Animation [spinIndicator stopAnimation:nil]; //set process flag to false self.isProcessStarted=NO; } @end I need to clarify the following things about the above code. How do I interrupt/cancel the processStart while loop from a UI control? I also need to show the counter value in the main UI, which I suppose I should do with performSelectorOnMainThread, passing the counter as an argument. I just want to know, is there any other way to do that? When my app starts, Activity Monitor shows 1 thread, but when I start processStart() in a background thread it creates two new threads, which makes 3 threads in total until the loop finishes. After the loop completes I can see 2 threads. So my understanding is that 2 threads were created when I called performSelectorInBackground, but what about the third thread - where did it come from? What if the thread count increases on every call of the selector? How do I control that, or is my implementation bad for this kind of requirement? Thanks

    Read the article

  • Google app engine: Poor Performance with JDO + Datastore

    - by Bosh
    I have a simple data model that includes USERS: store basic information (key, name, phone # etc) RELATIONS: describe, e.g. a friendship between two users (supplying a relationship_type + two user keys) I'm getting very poor performance, for instance, if I try to print the first names of all of a user's friends. Say the user has 500 friends: I can fetch the list of friend user_ids very easily in a single query. But then, to pull out first names, I have to do 500 back-and-forth trips to the Datastore, each of which seems to take on the order of 30 ms. If this were SQL, I'd just do a JOIN and get the answer out fast. I understand there are rudimentary facilities for performing joins across un-owned relations in a relaxed implementation of JDO (as described at http://gae-java-persistence.blogspot.com) but they sound experimental and non-standard (e.g. my code won't work in any other JDO implementation). Is this really my best bet? Otherwise, how do people extract satisfactory performance from JDO/Datastore in this kind of (very common) situation? -Bosh

    Read the article

  • Problems with CGPoint/touches event

    - by Jason
    I'm having some problems with storing variables from my touch events. The warning I get when I run this is that coord and icoord are unused, but I used them in the viewDidLoad implementation, is there a reason why this does not work? Any suggestions? #import "iGameViewController.h" @implementation iGameViewController @synthesize player; -(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event { UITouch *touch = [[event allTouches] anyObject]; CGPoint icoord = [touch locationInView:touch.view]; } -(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event { UITouch *touch = [[event allTouches] anyObject]; CGPoint coord = [touch locationInView:touch.view]; } - (void)viewDidLoad { if (coord.x > icoord.x) { player.center = CGPointMake(player.center.x + 5, player.center.y); } if (coord.x < icoord.x) { player.center = CGPointMake(player.center.x - 5, player.center.y); } if (coord.y > icoord.y) { player.center = CGPointMake(player.center.x, player.center.y - 5); } if (coord.y < icoord.y) { player.center = CGPointMake(player.center.x, player.center.y + 5); } } Thanks.

    Read the article

  • ASP.NET MVC - How to Unit Test boundaries in the Repository pattern?

    - by JK
    Given a basic repository interface: public interface IPersonRepository { void AddPerson(Person person); List<Person> GetAllPeople(); } With a basic implementation: public class PersonRepository: IPersonRepository { public void AddPerson(Person person) { ObjectContext.AddObject(person); } public List<Person> GetAllPeople() { return ObjectSet.AsQueryable().ToList(); } } How can you unit test this in a meaningful way? Since it crosses the boundary and physically updates and reads from the database, that's not a unit test, it's an integration test. Or is it wrong to want to unit test this in the first place? Should I only have integration tests on the repository? I've been googling the subject, and blogs often say to make a stub that implements IPersonRepository: public class PersonRepositoryTestStub: IPersonRepository { private List<Person> people = new List<Person>(); public void AddPerson(Person person) { people.Add(person); } public List<Person> GetAllPeople() { return people; } } But that doesn't unit test PersonRepository, it tests the implementation of PersonRepositoryTestStub (not very helpful).

    Read the article

  • How can I release this NSXMLParser without crashing my app?

    - by prendio2
    Below is the @interface for an MREntitiesConverter object I use to strip all html tags from a string using an NSXMLParser. @interface MREntitiesConverter : NSObject { NSMutableString* resultString; NSString* xmlStr; NSData *data; NSXMLParser* xmlParser; } @property (nonatomic, retain) NSMutableString* resultString; - (NSString*)convertEntitiesInString:(NSString*)s; @end And this is the implementation: @implementation MREntitiesConverter @synthesize resultString; - (id)init { if([super init]) { self.resultString = [NSMutableString string]; } return self; } - (NSString*)convertEntitiesInString:(NSString*)s { xmlStr = [NSString stringWithFormat:@"<data>%@</data>", s]; data = [xmlStr dataUsingEncoding:NSUTF8StringEncoding allowLossyConversion:YES]; xmlParser = [[NSXMLParser alloc] initWithData:data]; [xmlParser setDelegate:self]; [xmlParser parse]; return [resultString autorelease]; } - (void)dealloc { [data release]; //I want to release xmlParser here but it crashes the app [super dealloc]; } - (void)parser:(NSXMLParser *)parser foundCharacters:(NSString *)s { [self.resultString appendString:s]; } @end If I release xmlParser in the dealloc method I am crashing my app but without releasing I am quite obviously leaking memory. I am new to Instruments and trying to get the hang of optimising this app. Any help you can offer on this particular issue will likely help me solve other memory issues in my app. Yours in frustrated anticipation: ) Oisin

    Read the article

  • Need help implementing this algorithm with Hadoop MapReduce

    - by Julia
    Hi all! I have an algorithm that goes through a large data set, reads some text files and searches for specific terms in those lines. I have it implemented in Java, but I didn't want to post code so that it doesn't look like I am searching for someone to implement it for me - but it is true, I really need a lot of help! This was not planned for my project, but the data set turned out to be huge, so the teacher told me I have to do it like this. EDIT (I did not clarify this in the previous version): the data set I have is on a Hadoop cluster, and I should make a MapReduce implementation of it. I was reading about MapReduce and thought that I would first do the standard implementation and then it would be more or less easy to do it with MapReduce. But that didn't happen, since the algorithm is quite stupid and nothing special, and MapReduce... I can't wrap my mind around it. So here, shortly, is the pseudo code of my algorithm: LIST termList (there is a method that creates this list from a Lucene index) FOLDER topFolder INPUT topFolder IF it is folder and not empty list files (there are 30 sub folders inside) FOR EACH sub folder GET file "CheckedFile.txt" analyze(CheckedFile) ENDFOR END IF Method ANALYZE(CheckedFile) read CheckedFile WHILE CheckedFile has next line GET line FOR(loops through termList) GET third word from line IF third word = term from list append whole line to string buffer ENDIF ENDFOR END WHILE OUTPUT string buffer to file Also, as you can see, each time "analyze" is called a new file has to be created; I understood that it is difficult for MapReduce to write to many outputs? I understand the MapReduce intuition, and my example seems perfectly suited for MapReduce, but when it comes to doing it, obviously I do not know enough and I am STUCK! Please please help.

    Read the article

  • JBoss 6 Clustered Singleton

    - by DanC
    I am trying to set up JBoss 6 in a clustered environment, and use it to host clustered stateful singleton EJBs. So far we successfully installed a singleton EJB within the cluster, where different entry points to our application (through a website deployed on each node) point to a single environment on which the EJB is hosted (thus maintaining the state of static variables). We achieved this using the following configuration: Bean interface: @Remote public interface IUniverse { ... } Bean implementation: @Clustered @Stateful public class Universe implements IUniverse { private static Vector<String> messages = new Vector<String>(); ... } jboss-beans.xml configuration: <deployment xmlns="urn:jboss:bean-deployer:2.0"> <!-- This bean is an example of a clustered singleton --> <bean name="Universe" class="Universe"> </bean> <bean name="UniverseController" class="org.jboss.ha.singleton.HASingletonController"> <property name="HAPartition"><inject bean="HAPartition"/></property> <property name="target"><inject bean="Universe"/></property> <property name="targetStartMethod">startSingleton</property> <property name="targetStopMethod">stopSingleton</property> </bean> </deployment> The main problem with this implementation is that, after the master node (the one that contains the state of the singleton EJB) shuts down gracefully, the singleton's state is lost and reset to its default. Please note that everything was constructed following the JBoss 5 clustering documents, as no JBoss 6 documents were found on this subject. Any information on how to solve this problem or where to find JBoss 6 documentation on clustering is appreciated.

    Read the article

  • VB6 Game Development

    - by CVS-2600Hertz-wordpress-com
    Hi All, I am developing a game in VB6 (please don't ask me why :) ). The storyboard is ready and a rough implementation is underway. I am following a "pure software rendering" approach (i.e. no DirectX, no OpenGL, etc.). Amongst many others, the following "serious" problems exist:
    - 2D alpha transparency required to implement overlays.
    - Parallax implementation to give a depth-of-field illusion.
    - Capturing mouse-scroll events globally (as in FPSes; mapping them to changing weapons).
    - Async sound playback with absolute "near-zero lag".
    Any ideas, anyone? Please suggest any well-documented library/OCX or sample code, with good performance and as little overhead as possible. Also, anyone who has developed any games and would be open to sharing her/his code would be highly appreciated (any well-acknowledged VB games whose source code I can study?). UPDATE: Here is a screen shot of GearHead Garage. This picture ought to describe what I was attempting in words above... :) Thank You

    Read the article

  • How to efficiently compare the sign of two floating-point values while handling negative zeros

    - by François Beaune
    Given two floating-point numbers, I'm looking for an efficient way to check if they have the same sign, given that if either of the two values is zero (+0.0 or -0.0), they should be considered to have the same sign. For instance: SameSign(1.0, 2.0) should return true; SameSign(-1.0, -2.0) should return true; SameSign(-1.0, 2.0) should return false; SameSign(0.0, 1.0) should return true; SameSign(0.0, -1.0) should return true; SameSign(-0.0, 1.0) should return true; SameSign(-0.0, -1.0) should return true. A naive but correct implementation of SameSign in C++ would be: bool SameSign(float a, float b) { if (fabs(a) == 0.0f || fabs(b) == 0.0f) return true; return (a >= 0.0f) == (b >= 0.0f); } Assuming the IEEE floating-point model, here's a variant of SameSign that compiles to branchless code (at least with Visual C++ 2008): bool SameSign(float a, float b) { int ia = binary_cast<int>(a); int ib = binary_cast<int>(b); int az = (ia & 0x7FFFFFFF) == 0; int bz = (ib & 0x7FFFFFFF) == 0; int ab = (ia ^ ib) >= 0; return (az | bz | ab) != 0; } with binary_cast defined as follows: template <typename Target, typename Source> inline Target binary_cast(Source s) { union { Source m_source; Target m_target; } u; u.m_source = s; return u.m_target; } I'm looking for two things: (1) a faster, more efficient implementation of SameSign, using bit tricks, FPU tricks or even SSE intrinsics, and (2) an efficient extension of SameSign to three values.
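
    On the second point, a plain comparison-based extension to three values is easy to state, even if it is not the bit-trick or SSE version being sought; the sketch below is only a correctness baseline under the question's convention that zeros share a sign with everything:

        // Three values "share a sign" unless the set contains both a strictly
        // positive and a strictly negative number; +0.0 and -0.0 never count
        // toward either side.
        inline bool SameSign(float a, float b, float c)
        {
            const bool hasNeg = (a < 0.0f) || (b < 0.0f) || (c < 0.0f);
            const bool hasPos = (a > 0.0f) || (b > 0.0f) || (c > 0.0f);
            return !(hasNeg && hasPos);
        }

        // Equivalently, reusing the two-argument version: all three pairs must agree,
        // i.e. SameSign(a, b) && SameSign(b, c) && SameSign(a, c).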

    Read the article

  • Loading Views dynamically

    - by Dann
    Case 1: I created a View-based sample application and tried to execute the code below. When I press the "Job List" button it should load another view having "Back Btn" on it. In the test function, if I use [self.navigationController pushViewController:jbc animated:YES]; nothing gets loaded, but if I use [self presentModalViewController:jbc animated:YES]; it loads another view having "Back Btn" on it. Case 2: I created another Navigation-based Application and used [self.navigationController pushViewController:jbc animated:YES]; it worked as I wanted. Can someone please explain why it was not working in Case 1? Does it have something to do with the type of project that is selected? @interface MWViewController : UIViewController { } -(void) test; @end @interface JobViewCtrl : UIViewController { } @end @implementation MWViewController - (void)viewDidLoad { UIButton* btn = [UIButton buttonWithType:UIButtonTypeRoundedRect]; btn.frame = CGRectMake(80, 170, 150, 35); [btn setTitle:@"Job List!" forState:UIControlStateNormal]; [btn addTarget:self action:@selector(test) forControlEvents:UIControlEventTouchUpInside]; [self.view addSubview:btn]; [super viewDidLoad]; } -(void) test { JobViewCtrl* jbc = [[JobViewCtrl alloc] init]; [self.navigationController pushViewController:jbc animated:YES]; //[self presentModalViewController:jbc animated:YES]; [jbc release]; } - (void)dealloc { [super dealloc]; } @end @implementation JobViewCtrl -(void) loadView { self.view = [[UIView alloc] initWithFrame:[[UIScreen mainScreen] applicationFrame]]; self.view.backgroundColor = [UIColor grayColor]; UIButton* btn = [UIButton buttonWithType:UIButtonTypeRoundedRect]; btn.frame = CGRectMake(80, 170, 150, 35); [btn setTitle:@"Back Btn!" forState:UIControlStateNormal]; [self.view addSubview:btn]; } @end

    Read the article

  • How to call a function from another class file

    - by Guy Parker
    I am very familiar with writing VB based applications but am new to Xcode (and Objective-C). I have gone through numerous tutorials on the web and understand the basics and how to interact with Interface Builder etc. However, I am really struggling with some basic concepts of the C language and would be grateful for any help you can offer. Here's my problem… I have a simple iPhone app which has a view controller (FirstViewController) and a subview (SecondViewController) with associated header and class files. In FirstViewController.m I have a function defined: @implementation FirstViewController - (void) writeToServer:(const uint8_t *) buf { [oStream write:buf maxLength:strlen((char *)buf)]; } It doesn't really matter what the function is. I want to use this function in my SecondViewController, so in SecondViewController.m I import FirstViewController.h: #import "SecondViewController.h" #import "FirstViewController.h" @implementation SecondViewController -(IBAction) SetButton: (id) sender { NSString *s = [@"Fill:" stringByAppendingString: FillLevelValue.text]; NSString *strToSend = [s stringByAppendingString: @":"]; const uint8_t *str = (uint8_t *) [strToSend cStringUsingEncoding:NSASCIIStringEncoding]; FillLevelValue.text = strToSend; [FirstViewController writeToServer:str]; } This last line is where my problem is. Xcode tells me that FirstViewController may not respond to writeToServer. And when I try to run the application it crashes when this function is called. I guess I don't fully understand how to share functions and, more importantly, the relationship between classes. In an ideal world I would create a global class to place my functions in and call them as required. Any advice gratefully received.

    Read the article

  • How is a referencing environment generally implemented for closures?

    - by Alexandr Kurilin
    Let's say I have a statically/lexically scoped language with deep binding and I create a closure. The closure will consist of the statements I want executed plus the so called referencing environment, or, to quote this post, the collection of variables which can be used. What does this referencing environment actually look like implementation-wise? I was recently reading about ObjectiveC's implementation of blocks, and the author suggests that behind the scenes you get a copy of all of the variables on the stack and also of all the references to heap objects. The explanation claims that you get a "snapshot" of the referencing environment at the point in time of the closure's creation. Is that more or less what happens, or did I misread that? Is anything done to "freeze" a separate copy of the heap objects, or is it safe to assume that if they get modified between closure creation and the closure executing, the closure will no longer be operating on the original version of the object? If indeed there's copying being made, are there memory usage considerations in situations where one might want to create plenty of closures and store them somewhere? I think that misunderstanding of some of these concepts might lead to tricky issues like the ones Eric Lippert mentions in this blog post. It's interesting because you'd think that it wouldn't make sense to keep a reference to a value type that might be gone by the time the closure is called, but I'm guessing that in C# the compiler will figure out that the variable is needed later and put it into the heap instead. It seems that in most memory-managed languages everything is a reference and thus ObjectiveC is a somewhat unique situation with having to deal with copying what's on the stack.
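
    As a concrete illustration of the "snapshot vs. reference" distinction the post is asking about, here is a small sketch in C++ (not Objective-C blocks, but the capture semantics make the same trade-off explicit):

        #include <functional>
        #include <iostream>

        int main() {
            int x = 1;

            // Capture by value: the closure stores a snapshot of x at creation time.
            std::function<int()> byValue = [x]() { return x; };

            // Capture by reference: the closure keeps referring to the original x.
            std::function<int()> byRef = [&x]() { return x; };

            x = 2;

            std::cout << byValue() << "\n"; // prints 1 (snapshot)
            std::cout << byRef()   << "\n"; // prints 2 (sees the later modification)
        }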

    Read the article

  • Is there a pattern for initializing objects created with a DI container?

    - by Igor Zevaka
    I am trying to get Unity to manage the creation of my objects, and I want to have some initialization parameters that are not known until run-time. At the moment the only way I can think of to do it is to have an Init method on the interface. interface IMyIntf { void Initialize(string runTimeParam); string RunTimeParam { get; } } Then to use it (in Unity) I would do this: var IMyIntf = unityContainer.Resolve<IMyIntf>(); IMyIntf.Initialize("somevalue"); In this scenario the runTimeParam parameter is determined at run-time based on user input. The trivial case here simply returns the value of runTimeParam, but in reality the parameter will be something like a file name, and the Initialize method will do something with the file. This creates a number of issues, namely that the Initialize method is available on the interface and can be called multiple times. Setting a flag in the implementation and throwing an exception on a repeated call to Initialize seems way clunky. At the point where I resolve my interface I don't want to know anything about the implementation of IMyIntf. What I do want, though, is the knowledge that this interface needs certain one-time initialization parameters. Is there a way to somehow annotate (attributes?) the interface with this information and pass those to the framework when the object is created? Edit: Described the interface a bit more.
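
    One common way around a two-phase Initialize is to resolve a factory from the container and let the factory pass the run-time value into the constructor, so the object is never observable in a half-initialized state. The question is about C#/Unity; the sketch below only illustrates the shape of that approach, written in C++ with hypothetical names that mirror IMyIntf:

        #include <memory>
        #include <string>

        // The run-time value goes into the constructor, so no Initialize method is needed.
        struct IMyIntf {
            virtual ~IMyIntf() = default;
            virtual std::string runTimeParam() const = 0;
        };

        // The container hands out the factory; calling code supplies the run-time value.
        struct IMyIntfFactory {
            virtual ~IMyIntfFactory() = default;
            virtual std::unique_ptr<IMyIntf> create(const std::string& runTimeParam) = 0;
        };

        class MyImpl : public IMyIntf {
        public:
            explicit MyImpl(std::string runTimeParam) : param_(std::move(runTimeParam)) {}
            std::string runTimeParam() const override { return param_; }
        private:
            std::string param_;
        };

        class MyImplFactory : public IMyIntfFactory {
        public:
            std::unique_ptr<IMyIntf> create(const std::string& runTimeParam) override {
                return std::make_unique<MyImpl>(runTimeParam);
            }
        };

        // Usage, roughly: auto obj = factory->create("somevalue");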

    Read the article

  • Android depth buffer issue: Advice for anyone experiencing problem

    - by Andrew Smith
    I've wasted around 30 hours this week writing and re-writing code, believing that I had misunderstood how the OpenGL depth buffer works. Everything I tried, failed. I have now resolved my problem by finding what may be an error in the Android implementation of OpenGL. See this API entry: http://www.opengl.org/sdk/docs/man/xhtml/glClearDepth.xml void glClearDepth(GLclampd depth); Specifies the depth value used when the depth buffer is cleared. The initial value is 1. Android's implementation has two versions of this command: glClearDepthx which takes an integer value, clamped 0-1 glClearDepthf which takes a floating point value, clamped 0-1 If you use glClearDepthf(1) then you get the results you would expect. If you use glClearDepthx(1), as I was doing then you get different results. (Note that 1 is the default value, but calling the command with the argument 1 produces different results than not calling it at all.) Quite what is happening I do not know, but the depth buffer was being cleared to a value different from what I had specified.
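
    For anyone hitting the same behaviour, one plausible explanation (an assumption on my part, not something confirmed in the post) is that glClearDepthx takes a GLfixed value in 16.16 fixed-point format rather than a plain integer, so 1.0 has to be written as 65536 (0x10000); a literal 1 would clear the depth buffer to roughly 1/65536, which would match the symptoms described. A minimal sketch:

        #include <GLES/gl.h>

        // Two equivalent ways to clear the depth buffer to its default value of 1.0.
        void clearDepthToOne()
        {
            glClearDepthf(1.0f);    // floating-point entry point
            glClearDepthx(0x10000); // fixed-point entry point: 1.0 in 16.16 format
            // glClearDepthx(1) would instead set the clear depth to 1/65536.
        }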

    Read the article

  • How are declared private ivars different from synthesized ivars?

    - by lemnar
    I know that the modern Objective-C runtime can synthesize ivars. I thought that synthesized ivars behaved exactly like declared ivars that are marked @private, but they don't. As a result, some code that I expected would work on either compiles only under the modern runtime. For example, a superclass: @interface A : NSObject { #if !__OBJC2__ @private NSString *_c; #endif } @property (nonatomic, copy) NSString *d; @end @implementation A @synthesize d=_c; - (void)dealloc { [_c release]; [super dealloc]; } @end and a subclass: @interface B : A { #if !__OBJC2__ @private NSString *_c; #endif } @property (nonatomic, copy) NSString *e; @end @implementation B @synthesize e=_c; - (void)dealloc { [_c release]; [super dealloc]; } @end A subclass can't have a declared ivar with the same name as one of its superclass's declared ivars, even if the superclass's ivar is private. This seems to me like a violation of the meaning of @private, since the subclass is affected by the superclass's choice of something private. What I'm more concerned about, however, is how I should think about synthesized ivars. I thought they acted like declared private ivars, but without the fragile base class problem. Maybe that's right, and I just don't understand the fragile base class problem. Why does the above code compile only in the modern runtime? Does the fragile base class problem exist when all superclass instance variables are private?

    Read the article
