Search Results

Search found 15913 results on 637 pages for 'screen'.

Page 168/637

  • Wizard style navigation, dismissing a view and showing another one

    - by Robin Jamieson
    I'm making a set of screens similar to a wizard and I'd like to know how to make a view dismiss itself and its parent view and immediately show a 'DoneScreen' without worrying about resource leaks. My views look like the following: Base -> Level1 -> DoneScreen -> Level2 -> DoneScreen. The Level1 controller is a navigation controller created with a view.xib and shown with [self presentModalViewController ...] by the Base controller. The Level1 controller is also responsible for creating the 'DoneScreen', which may be shown instead of the Level2 screen based on certain criteria. When the user taps a button on the screen, the Level1 controller instantiates the Level2 controller and displays it via [self.navigationController pushViewController ..]; the Level2 controller's view has a 'Next' button. When the user hits the 'Next' button in the Level2 screen, I need to dismiss the current Level2 view as well as Level1's view and display the 'DoneScreen', which would have been created and passed in to the Level2 controller from Level1 (partly to reduce code duplication, and partly to separate responsibilities among the controllers). In the Level2 controller, if I show the 'DoneScreen' first and then dismiss Level2 with [self.navigationController popViewControllerAnimated:YES]; the Level1 controller's modal view is still present above the 'Base' but under the Done screen. What's a good way to clear out all of these views except the Base and then show the 'DoneScreen'? Any good suggestions on how to get this done in a simple but elegant manner?

    Read the article

  • C# Vector maths questions

    - by Mark
    I'm working in a screen coordinate space that is different to the classical X/Y coordinate space, in that my Y direction goes down in the positive instead of up. I'm also trying to figure out how to make a circle on my screen always face away from the center point of the screen. If the center point of my screen is at x(200) y(300) and the point of my circle's center is at x(150) and y(380), then I would like to calculate the angle that the circle should be facing. At the moment I have this:

      Point centerPoint = new Point(200, 300);
      Point middleBottom = new Point(200, 400);
      Vector middleVector = new Vector(centerPoint.X - middleBottom.X, centerPoint.Y - middleBottom.Y);
      Vector vectorOfCircle = new Vector(centerPoint.X - 150, centerPoint.Y - 400);
      middleVector.Normalize();
      vectorOfCircle.Normalize();
      var angle = Math.Acos(Vector.CrossProduct(vectorOfCircle, middleVector));
      Console.WriteLine("Angle: {0}", angle * (180/Math.PI));

    I'm not getting what I would expect. I would say that when I enter in x(150) and y(300) for my circle, I would expect to see a rotation of 90 degrees, but I'm not getting that... I'm getting 180! Any help here would be greatly appreciated. Cheers, Mark
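
    A side note on the math: for two 2D unit vectors, acos of the cross product returns acos(sin θ), not θ, and it throws away the sign; the usual way to get a full signed angle is atan2(cross, dot). A minimal sketch of that arithmetic, in Python rather than C# purely for brevity (the "straight down the screen" reference direction and the sign convention are choices you may need to flip for your setup):

      import math

      def facing_angle(center, circle):
          """Signed angle (degrees) from 'down the screen' to the center->circle
          direction, in a y-down screen coordinate space."""
          ref = (0.0, 1.0)                                  # y grows downward
          d = (circle[0] - center[0], circle[1] - center[1])  # direction away from the center
          cross = ref[0] * d[1] - ref[1] * d[0]
          dot = ref[0] * d[0] + ref[1] * d[1]
          return math.degrees(math.atan2(cross, dot))       # atan2 keeps the sign

      print(facing_angle((200, 300), (150, 300)))   # 90.0 - circle straight left of center
      print(facing_angle((200, 300), (150, 380)))   # the question's actual circle position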

    Read the article

  • What information should a SVN/Versioned file commit comment contain?

    - by RenderIn
    I'm curious what kind of content should be in a versioned file commit comment. Should it describe generally what changed (e.g. "The widget screen was changed to display only active widgets"), or should it be more specific (e.g. "A new condition was added to the where clause of the fetchWidget query to retrieve only active widgets by default")? How atomic should a single commit be? Just the file containing the updated query in a single commit (e.g. "Updated the widget screen to display only active widgets by default"), or should that and several other changes + interface changes to a screen share the same commit with a more general description like "Updated the widget screen: A) display only active widgets by default B) added button to toggle showing inactive widgets"? I see subversion commit comments being used very differently and was wondering what others have had success with. Some comments are as brief as "updated files", while others are many paragraphs long, and others are formatted in a way that they can be queried and associated with some external system such as JIRA. I used to be extremely descriptive of the reason for the change as well as the specific technical changes. Lately I've been scaling back and just giving a general "This is what I changed on this page" kind of comment.

    Read the article

  • Changing the direction of a Combo box dropdown in SWT

    - by Kris
    Hi, I'm building an Eclipse plugin in SWT, and I have the following problem: one of my fields is a combo box, and in some cases it may have fairly long items as selection options. My plugin runs on the right side of the screen, so when you go to use the combo box, the right side of the combo box is cut off. So, my question is: is there any way to change the dropdown's alignment relative to the combo control? It seems to be permanently left-aligned... and I'm pretty sure you can change the direction in Swing (though the only place I've seen it done is in the Substance UI demo; the Combo Box tab has boxes with North, South, East, and West flyout directions... for my application, I need something like the West flyout). Note: Setting actual text limits is a last-case-scenario option; it would be quite a bit of guesswork to set the text limit dynamically (since the widget's view can be resized). Here's a picture (sorry, I can only have one link and no images :( ... I need some more rep :p). Left side of the line: Proper width - the view is wide enough for the combo dropdown to display all the text; you can see the scrollbars on the right side. Right side of the line: Too small - here, the view has been resized, and the combo dropdown scrollbar (as well as some of the text) is cut off by the right side of the screen. I always have more screen space available to the left (since this is always on the right hand side of the screen), but the combo dropdown always appears to the lower right. Hopefully this is clear enough.

    Read the article

  • Graphical glitches when adding cells and scrolling with UITableView

    - by Daniel I-S
    I am using a UITableView to display the results of a series of calculations. When the user hits 'calculate', I wish to add the latest result to the screen. This is done by adding a new cell to a 'results' section. The UITableViewCell object is added to an array, and then I use the following code to add this new row to what is displayed on the screen: [thisView beginUpdates]; [thisView insertRowsAtIndexPaths:[NSArray arrayWithObject:newIndexPath] withRowAnimation: UITableViewRowAnimationFade]; [thisView endUpdates]; This results in the new cell being displayed. However, I then want to immediately scroll the screen down so that the new cell is the lowermost cell on-screen. I use the following code: [thisView scrollToRowAtIndexPath:newIndexPath atScrollPosition:UITableViewScrollPositionBottom animated:YES]; This almost works great. However, the first time a cell is added and scrolled to, it appears onscreen only briefly before vanishing. The view scrolls down to the correct place, but the cell is not there. Scrolling the view by hand until this invisible new cell's position is offscreen, then back again, causes the cell to appear - after which it behaves normally. This only happens the first time a cell is added; subsequent cells don't have this problem. It also happens regardless of the combination of scrollToRowAtIndexPath and insertRowsAtIndexPath animation settings. There is also a problem where, if new cells are added repeatedly and quickly, the new cells stop 'connecting up'. The lowermost cell in a group is supposed to have rounded corners, and when a new cell is added these turn into square corners so that there is a clean join with the next cell in the group. In this case, however, a cell often does not lose its rounded edges despite not being the last cell anymore. This also gets corrected once the affected area moves offscreen and back. This method of adding and scrolling would be perfect for my application if it weren't for these weird glitches. Any ideas as to what I may be doing wrong?

    Read the article

  • NES Programming - Nametables?

    - by Jeffrey Kern
    Hello everyone, I'm wondering about how the NES displays its graphical muscle. I've researched stuff online and read through it, but I'm wondering about one last thing: nametables. Basically, from what I've read, each 8x8 block in a NES nametable points to a location in the pattern table, which holds graphic memory. In addition, the nametable also has an attribute table which sets a certain color palette for each 16x16 block. They're linked up together like this (assuming 16 8x8 blocks). Nametable, with A B C D = pointers to sprite data:

      ABBB
      CDCC
      DDDD
      DDDD

    Attribute table, with 1 2 3 = pointers to color palette data, with < referencing the value to the left, ^ the one above, and ' the one to the left and above:

      1<2<
      ^'^'
      3<3<
      ^'^'

    So, in the example above, the blocks would be colored as so:

      1A 1B 2B 2B
      1C 1D 2C 2C
      3D 3D 3D 3D
      3D 3D 3D 3D

    Now, if I have this on a fixed screen it works great, because the NES resolution is 256x240 pixels. Now, how do these tables get adjusted for scrolling? Nametable 0 can scroll into Nametable 1, and if you keep scrolling, Nametable 0 will wrap around again. That I get. But what I don't get is how the attribute table wraps around as well. From what I've read online, the 16x16 blocks it assigns attributes to will cause color distortions on the edge tiles of the screen (as seen when you scroll left to right and vice-versa in SMB3). My concern is that I understand how to scroll the nametables, but how do you scroll the attribute table? For instance, if I have a green block on the left side of the screen, moving the screen to the right should in theory cause the tiles to the right to be green as well until they move more into frame, at which point they'll revert to their normal colors.
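
    For what it's worth, the coarse granularity is easier to see in code. Each of a nametable's 64 attribute bytes covers a 4x4-tile (32x32 pixel) area and packs one 2-bit palette index per 2x2-tile (16x16 pixel) quadrant, which is why a fine horizontal scroll cannot recolor a single 8-pixel column at the edge. A minimal sketch of the lookup in Python (the hardware does the same bit-slicing internally):

      def palette_index(attr_table, tile_x, tile_y):
          """attr_table: the 64-byte attribute table of one nametable.
          tile_x in 0..31, tile_y in 0..29 (tiles are 8x8 pixels)."""
          byte = attr_table[(tile_y // 4) * 8 + (tile_x // 4)]   # one byte per 4x4-tile block
          shift = ((tile_y & 2) << 1) | (tile_x & 2)             # 0, 2, 4 or 6: selects the 2x2-tile quadrant
          return (byte >> shift) & 0b11                          # 2-bit palette index for a 16x16 area

      # every tile in the same 2x2 group shares a palette, so a one-tile scroll
      # can expose a 16x16 block whose neighbours want a different palette:
      attr = bytes([0b11_10_01_00] * 64)   # TL=0, TR=1, BL=2, BR=3 in every block
      print([palette_index(attr, x, 0) for x in range(8)])   # -> [0, 0, 1, 1, 0, 0, 1, 1]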

    Read the article

  • Navigating through code with keyboard shortcuts

    - by MarceloRamires
    I'm starting to feel the need to move quickly through code with keyboard shortcuts, to arrive faster where I want to make changes (avoiding use of the mouse or long times holding [up], [left], [right] and [down]). I'm already using some:

      [home] - first position in current line
      [end] - last position in current line
      [ctrl] + [home] - first line of the entire code
      [ctrl] + [end] - last line of the entire code
      [pageup] - same vertical position, one screen above
      [pagedown] - same vertical position, one screen below
      [ctrl] + [pageup] - first line in current screen
      [ctrl] + [pagedown] - last line in current screen
      [ctrl] + [left/right] - skipping word per word

    What have you got? I use Visual Studio (but I'm open to any answer, as I may use others soon). Obs: I've searched through stackoverflow and didn't find a nice question with this content, nor a list of code-navigation shortcuts. If it's repeated, I'm sorry for not finding it; I'm here with my best intentions. This question is NOT about just any shortcuts, and not only about Visual Studio; it's about moving through code with shortcuts. Answers that suit the question so far:

      [Ctrl] + [-] - jumps to last cursor position
      [Ctrl] + [F3] - jumps to next occurrence of the word the cursor is in
      [Shift] + [F3] - same as the above, backwards
      [F12] - goes to definition of method/variable the cursor is in
      [Ctrl] + [ ] ] - jumps to matching brace and select

    I'll add more as there are answers.

    Read the article

  • targeting sprites from a method in the document class - null object reference

    - by Freddyk
    Hi, I am trying to code a Flash app entirely in the document class. I am using GestureWorks with a touch screen. When a user presses a button, it calls a method that should hide a specific graphic, but not the graphic they touched. Essentially I need a way to refer to a graphic on the screen from a method, besides 'e.target'.

      //This code works because it can access 'e.target'.
      private function photo1SpriteFlickHandler(e:GestureEvent):void {
          var openTween:Tween = new Tween(e.target, "x", Strong.easeOut, 232, 970, 5, true);
      }

      //This code gives me a null object reference because I am using 'photo1Sprite' rather than 'e.target'.
      private function photo1SpriteFlickHandler(e:GestureEvent):void {
          var openTween:Tween = new Tween(photo1Sprite, "x", Strong.easeOut, 232, 970, 5, true);
      }

      //photo1Sprite has already been programmatically added to the screen as so:
      var photo1Sprite = new TouchSprite();
      var photo1Loader = new Loader();
      photo1Loader.load(new URLRequest("media/photos1/photo1.jpg"));
      photo1Loader.contentLoaderInfo.addEventListener(Event.COMPLETE, loaderComplete);
      photo1Sprite.x = 232;
      photo1Sprite.y = 538;
      photo1Sprite.scaleX = .3;
      photo1Sprite.scaleY = .3;
      photo1Sprite.blobContainerEnabled = true;
      photo1Sprite.addEventListener(TouchEvent.TOUCH_DOWN, startDrag_Press);
      photo1Sprite.addEventListener(TouchEvent.TOUCH_UP, stopDrag_Release);
      photo1Sprite.addChild(photo1Loader);
      addChild(photo1Sprite);

    So I can make photo1Sprite react if my method is attached to it directly using 'e.target', but not if I am trying to call it from a method that was called from another element on the screen.

    Read the article

  • How to intercept touch events globally?

    - by mystify
    I have a view which is sometimes covered by some other views. However, if the user slides a finger across the screen, I want to slide that underlying view across the screen, too. I could start making custom views for all those covering subviews and forward all kinds of touch events, but that's somewhat cumbersome. Maybe there's some kind of notification or another way that a UIView or UIControl subclass can be aware of touch events happening right now, no matter where they are. In short: I need a UIView subclass or UIControl subclass which knows about any touch events happening on the entire screen. Or, at least if that's not possible, knowing about any touch events happening above itself in the same underlying superview. Another description: There are 20 views, all residing inside the same superview. The first view is covered by 19 others. But if the user slides across the screen, that first view must slide too, so it must be aware of touch events. Is there any better solution than making all 19 views forward touch events? (Yes, all 19 views respond to touch events in this example.)

    Read the article

  • Method in RootViewController not Storing Array

    - by Antonio
    I have an array initialized in my RootViewController and a method that adds objects to the array. I created a RootViewController object in my SecondViewController. The method runs (outputs a message) but it doesn't add anything to the array, and the array seems empty. Code is below, any suggestions?

    RootViewController.m

      #import "RootViewController.h"
      #import "SecondViewController.h"

      @implementation RootViewController

      - (void)viewDidLoad {
          [super viewDidLoad];
          myArray2 = [[NSMutableArray alloc] init];
          NSLog(@"View was loaded");
      }

      -(void)addToArray2 {
          NSLog(@"Array triggered from SecondViewController");
          [myArray2 addObject:@"Test"];
          [self showArray2];
      }

      -(void)showArray2 {
          NSLog(@"Array Count: %d", [myArray2 count]);
      }

      -(IBAction)switchViews {
          SecondViewController *screen = [[SecondViewController alloc] initWithNibName:nil bundle:nil];
          screen.modalTransitionStyle = UIModalTransitionStyleCoverVertical;
          [self presentModalViewController:screen animated:YES];
          [screen release];
      }

      @end

    SecondViewController.m

      #import "SecondViewController.h"
      #import "RootViewController.h"

      @implementation SecondViewController

      -(IBAction)addToArray {
          RootViewController *object = [[RootViewController alloc] init];
          [object addToArray2];
      }

      -(IBAction)switchBack {
          [self dismissModalViewControllerAnimated:YES];
      }

      @end

    EDIT: With Matt's code I got the following error: "expected specifier-qualifier-list before 'RootViewController'"
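
    A note on what's happening here: alloc/init inside addToArray creates a brand-new, unrelated RootViewController (whose myArray2 hasn't even been set up by viewDidLoad), so the controller that is actually presenting the screen never sees the change. The usual fix is to hand SecondViewController a reference to the existing controller (a property, delegate, or init parameter). The idea is language-agnostic; a tiny Python sketch of the difference, with made-up class names purely for illustration:

      class RootController:
          def __init__(self):
              self.items = []          # the array that lives on the visible controller

          def add_item(self):
              self.items.append("Test")

      class SecondController:
          def __init__(self, root):
              self.root = root         # keep a reference to the *existing* controller

          def add_via_new_instance(self):
              RootController().add_item()     # wrong: mutates a throwaway object

          def add_via_reference(self):
              self.root.add_item()            # right: mutates the controller on screen

      root = RootController()
      second = SecondController(root)
      second.add_via_new_instance()
      print(len(root.items))   # 0 - the on-screen controller was never touched
      second.add_via_reference()
      print(len(root.items))   # 1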

    Read the article

  • Get active window title in X

    - by dutt
    I'm trying to get the title of the active window. The application is a background task, so if the user has Eclipse open the function returns "Eclipse - blabla"; it's not getting the window title of my own window. I'm developing this in Python 2.6 using PyQt4. My current solution, borrowed and slightly modified from an old answer here at SO, looks like this:

      def get_active_window_title():
          title = ''
          root_check = ''
          root = Popen(['xprop', '-root'], stdout=PIPE)
          if root.stdout != root_check:
              root_check = root.stdout
              for i in root.stdout:
                  if '_NET_ACTIVE_WINDOW(WINDOW):' in i:
                      id_ = i.split()[4]
                      id_w = Popen(['xprop', '-id', id_], stdout=PIPE)
                      for j in id_w.stdout:
                          if 'WM_ICON_NAME(STRING)' in j:
                              if title != j.split()[2]:
                                  return j.split("= ")[1].strip(' \n\"')

    It works for most windows, but not all. For example it can't find my Kopete chat windows, or the name of the application I'm currently developing. My next try looks like this:

      def get_active_window_title(self):
          screen = wnck.screen_get_default()
          if screen == None:
              return "Could not get screen"
          window = screen.get_active_window()
          if window == None:
              return "Could not get window"
          title = window.get_name()
          return title

    But for some reason window is always None. Does somebody have a better way of getting the current window title, or a way to modify one of mine, that works for all windows?

    Edit: In case anybody is wondering, this is the way I found that seems to work for all windows:

      def get_active_window_title(self):
          root_check = ''
          root = Popen(['xprop', '-root'], stdout=PIPE)
          if root.stdout != root_check:
              root_check = root.stdout
              for i in root.stdout:
                  if '_NET_ACTIVE_WINDOW(WINDOW):' in i:
                      id_ = i.split()[4]
                      id_w = Popen(['xprop', '-id', id_], stdout=PIPE)
                      id_w.wait()
                      buff = []
                      for j in id_w.stdout:
                          buff.append(j)
                      for line in buff:
                          match = re.match("WM_NAME\((?P<type>.+)\) = (?P<name>.+)", line)
                          if match != None:
                              type = match.group("type")
                              if type == "STRING" or type == "COMPOUND_TEXT":
                                  return match.group("name")
          return "Active window not found"

    Read the article

  • Mobile detection - Meta tag and max-device-width vs. php user agent?

    - by nimmbl
    Which form of mobile detection should I use and why?

      <meta name="viewport" content="width=320,initial-scale=1,maximum-scale=1.0,user-scalable=no" />
      <link media="only screen and (max-device-width: 480px) and (min-device-width: 320px)" href="css/mobile.css" type="text/css" rel="stylesheet">
      <link media="handheld, only screen and (max-device-width: 319px)" href="css/mobile_simple.css" type="text/css" rel="stylesheet" />

    Or:

      include('mobile_device_detect.php');
      $mobile = mobile_device_detect();

    And why on earth would this:

      <?php if(strpos($_SERVER['HTTP_USER_AGENT'], 'iPhone') !== false) { ?>
      <meta name="viewport" content="width=320,initial-scale=1,maximum-scale=1.0,user-scalable=no" />
      <link media="screen" href="css/mobile.css" type="text/css" rel="stylesheet">
      <?php } else { ?>
      <link media="screen" href="css/mobile_simple.css" type="text/css" rel="stylesheet">
      <?php } ?>

    ignore this css?

      body {
          background: -webkit-gradient(linear, left top, left bottom, from(#555), to(#000));
      }

    Read the article

  • CSS to specify positions over a scanned document

    - by itsols
    I'm trying to write the CSS rules to position text over a scanned document.

    Reason: The document is a pre-printed form. I am trying to position the text on-screen so that it relates to the 'spaces' on the actual form.

    Issue: Although I position the values using centimeters, they don't seem to get aligned with the ones on the actual page. I can see this misalignment since my scanned image is in the background of the page.

    What I've tried: I used a ruler to physically measure the locations and specify them with CSS, but on-screen it doesn't tally. I used the scanned image to position the CSS values, but then the printout is not correct. I even scaled the scanned page using Inkscape to the exact dimensions in centimeters and took into account all margins, etc.

    What I need: I am trying to correctly show the output values on-screen AND have them print in the correct manner as well. I know that using two CSS sheets (one for print) is an option, but I'm developing this program away from where the actual printing is to be done. So is there a convenient way of matching the exact screen locations with those on the actual/final printout? Thanks!
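
    Part of what makes this confusing is that CSS centimeters are not physical centimeters on screen: CSS fixes 1in = 2.54cm = 96px, and the browser maps those pixels onto the monitor with whatever DPI it assumes, so 1cm only measures 1cm under a ruler if that assumption matches the real panel. Print output, by contrast, does honour physical units. A small sketch of the arithmetic (Python, purely illustrative):

      CSS_PX_PER_INCH = 96          # fixed by the CSS spec
      CM_PER_INCH = 2.54

      def cm_to_css_px(cm):
          return cm * CSS_PX_PER_INCH / CM_PER_INCH

      def on_screen_cm(css_cm, actual_screen_dpi):
          """How long a CSS length of `css_cm` centimeters ends up on a real panel."""
          px = cm_to_css_px(css_cm)
          inches_on_screen = px / actual_screen_dpi
          return inches_on_screen * CM_PER_INCH

      print(round(cm_to_css_px(1.0), 2))        # 37.8 CSS px per cm
      print(round(on_screen_cm(10.0, 96), 2))   # 10.0 cm if the panel really is 96 dpi
      print(round(on_screen_cm(10.0, 120), 2))  # 8.0 cm on a denser 120 dpi panel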

    Read the article

  • How to call a method from another class that's been instantiated within the current class

    - by Pavan
    My screen has a few views like so:

       __________________
      | _____            |    // viewX is a video screen
      | |     |          |    // viewY (vY) is a custom UIView I created.
      | |viewX|    vY    |    // It contains a method I would like to call that toggles
      | |_____|          |    // the hidden property of this view, and when it hides, a little
      |__________________|    // button is placed on the top right corner, on top of viewX
      |                  |    // (the video layer).
      |      viewZ       |
      |__________________|    // viewZ is a view containing many square views - thumbnails.

    My question is: I don't know how to register for touch events so that any touch is recognised no matter which view of the screen the user touches. At the moment I'm handling the touch events for each view inside it, so all works well... However, what I'm trying to do is that when the user taps anywhere on the screen but on viewY, viewY should disappear by calling that method in the viewY class. This viewY class is instantiated and has no xib file attached to it; the UIView is created programmatically in the viewY class. This whole class for viewY behaviour is instantiated in viewX - the video view. My boss says add delegates... although I have no clue how to do that. Any help? Is there any way I can just make it really simple and be able to say REMOVE VIEW no matter which class I'm calling from? Also, I've seen other people achieve this by using these funky arrows - ... <- etc., although I'm not sure if that's what I need or how to implement such a thing. Ah, I think I've made my question quite complicated, but I really mean it to be a simple one, and I know it can be done in an easy way!

    Read the article

  • smart phone UI limitations

    - by Manny
    I would like to know what limitations there are on how far one can go in replacing UI components of current touch screen smartphones, in particular the iPhone, BlackBerry and Android-based phones. What I would like to do is create a custom UI for dialing out and incoming calls. I have some experience with BlackBerry development. The theme builder for it can be used to customize certain items on the incoming call screen, but it doesn't look like you can increase the size of the answer button. I know BlackBerry also gives you access to all the phone APIs, but I'm not sure that you can create your own UI that takes preference over the BlackBerry incoming call screen. And if you try to customize the incoming call screen by adding any buttons to it, they would be rendered as pictures. I could possibly design a complete UI for Android, since different manufacturers have different UIs for Android-based phones. Can I do what I want to do on the iPhone, BlackBerry or Android? Or any other phone for that matter? I am guessing maybe Nokia phones using Qt, but I prefer the 3 platforms I listed. Thanks for all your help.

    Read the article

  • Efficient mapping of game entity positions in Java

    - by byte
    In Java (Swing), say I've got a 2D game where I have various types of entities on the screen, such as a player, bad guys, powerups, etc. When the player moves across the screen, in order to do efficient checking of what is in the immediate vicinity of the player, I would think I'd want indexed access to the things that are near the character based on their position. For example, if player 'P' steps onto element 'E' in the following example...

      | | | | |
      | | | | |
      |P| | | |
      |E| | | |
      | | | | |

    ... I would want to do something like:

      if (player.getPosition().x == entity.getPosition().x && player.getPosition().y == entity.getPosition().y) {
          // do something
      }

    And that's fine, but that implies that the entities hold their positions, and therefore if I had MANY entities on the screen I would have to loop through all available entities and check each one's position against the player position. This seems really inefficient, especially if you start getting tons of entities. So, I would suspect I'd want some sort of map like

      Map<Point, Entity> map = new HashMap<Point, Entity>();

    and store my point information there, so that I could access these entities in constant time. The only problem with that approach is that, if I want to move an entity to a different point on the screen, I'd have to search through the values of the HashMap for the entity I want to move (inefficient since I don't know its Point position ahead of time), and then once I've found it, remove it from the HashMap and re-insert it with the new position information. Any suggestions or advice on what sort of data structure / storage format I ought to be using here in order to have efficient access to entities based on their position, as well as positions based on the entity?
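
    One common shape for this is to keep the position in both places: the entity remembers its own cell, and a map keyed by position points back at the entity, so lookups and moves are both O(1) (the old key is known from the entity itself, so there is nothing to search). A sketch in Python for brevity - the same shape works with a HashMap<Point, Entity> in Java as long as the key type implements equals and hashCode - and, like the Map in the question, it stores at most one entity per cell:

      class Entity:
          def __init__(self, name, x, y):
              self.name = name
              self.pos = (x, y)        # the entity remembers its own cell...

      class Grid:
          def __init__(self):
              self.by_pos = {}         # ...and the grid indexes entities by cell

          def add(self, entity):
              self.by_pos[entity.pos] = entity

          def entity_at(self, x, y):
              return self.by_pos.get((x, y))   # O(1) lookup, no scan over all entities

          def move(self, entity, x, y):
              del self.by_pos[entity.pos]      # old key comes straight from the entity
              entity.pos = (x, y)
              self.by_pos[entity.pos] = entity

      grid = Grid()
      powerup = Entity("powerup", 2, 3)
      grid.add(powerup)
      print(grid.entity_at(2, 3).name)   # powerup
      grid.move(powerup, 2, 4)
      print(grid.entity_at(2, 3))        # None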

    Read the article

  • How to convert a byte array of 19200 bytes in size where each byte represents 4 pixels (2 bits per pixel) to a bitmap

    - by Klinger
    I am communicating with an instrument (remote controlling it) and one of the things I need to do is to draw the instrument screen. In order to get the screen I issue a command and the instrument replies with an array of bytes that represents the screen. Below is what the instrument manual has to say about converting the response to the actual screen: "The command retrieves the framebuffer data used for the display. It is 19200 bytes in size, 2 bits per pixel, 4 pixels per byte, arranged as 320x240 characters. The data is sent in RLE encoded form. To convert this data into a BMP for use in Windows, it needs to be turned into 4BPP. Also note that BMP files are upside down relative to this data, i.e. the top display line is the last line in the BMP." I managed to unpack the data, but now I am stuck on how to actually go from the unpacked byte array to a bitmap. My background on this is pretty close to zero and my searches have not revealed much either. I am looking for directions and/or articles I could use to help me understand how to get this done. Any code or even pseudo code would also help. :-) So, just to summarize it all: How to convert a byte array of 19200 bytes in size, where each byte represents 4 pixels (2 bits per pixel), to a bitmap arranged as 320x240 characters. Thanks in advance.
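
    Once the RLE layer is undone, the unpacking itself is just bit-slicing: 19200 bytes x 4 pixels = 76800 = 320x240 palette indices, and the rows get reversed for a bottom-up BMP. A sketch in Python; the bit order within each byte (leftmost pixel in the most significant bits here) and the grey palette at the end are assumptions to check against the instrument's documentation:

      WIDTH, HEIGHT = 320, 240

      def unpack_2bpp(data):
          """data: 19200 bytes, already RLE-decoded; returns HEIGHT rows of WIDTH
          palette indices (0..3), flipped into BMP bottom-up row order."""
          assert len(data) == WIDTH * HEIGHT // 4
          pixels = []
          for byte in data:
              for shift in (6, 4, 2, 0):              # 4 pixels per byte, 2 bits each
                  pixels.append((byte >> shift) & 0b11)
          rows = [pixels[y * WIDTH:(y + 1) * WIDTH] for y in range(HEIGHT)]
          return rows[::-1]    # BMP stores the bottom line first, so flip vertically

      # each 2-bit value then maps through a 4-entry palette to an actual colour,
      # e.g. grey levels for a monochrome instrument display:
      palette = [(255, 255, 255), (170, 170, 170), (85, 85, 85), (0, 0, 0)]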

    Read the article

  • How to convert X/Y position to Canvas Left/Top properties when using ItemsControl

    - by kshahar
    I am trying to use a Canvas to display objects that have "world" location (rather than "screen" location). The canvas is defined like this:

      <Canvas Background="AliceBlue">
          <ItemsControl Name="myItemsControl" ItemsSource="{Binding MyItems}">
              <ItemsControl.ItemTemplate>
                  <DataTemplate>
                      <Canvas>
                          <TextBlock Canvas.Left="{Binding WorldX}" Canvas.Top="{Binding WorldY}"
                                     Text="{Binding Text}" Width="Auto" Height="Auto" Foreground="Red" />
                      </Canvas>
                  </DataTemplate>
              </ItemsControl.ItemTemplate>
          </ItemsControl>
      </Canvas>

    MyItem is defined like this:

      public class MyItem
      {
          public MyItem(double worldX, double worldY, string text)
          {
              WorldX = worldX;
              WorldY = worldY;
              Text = text;
          }

          public double WorldX { get; set; }
          public double WorldY { get; set; }
          public string Text { get; set; }
      }

    In addition, I have a method to convert between world and screen coordinates:

      Point worldToScreen(double worldX, double worldY)
      {
          // return screen coordinates using the canvas properties and an internal MapData object
      }

    With the current implementation, the items are positioned in the wrong location, because their location is not converted to screen coordinates. How can I apply the worldToScreen method on the MyItem objects before they are added to the canvas?
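
    Whatever the WPF wiring ends up being (exposing computed ScreenX/ScreenY properties on MyItem, or running the bindings through a value converter, are the usual routes), the conversion itself is just an offset, a scale, and typically a Y flip. A sketch of that math in Python; the view/MapData field names are made up for illustration:

      def world_to_screen(world_x, world_y, view):
          """view: dict describing the visible world window and the canvas size.
          Mirrors what a worldToScreen/MapData helper usually does."""
          scale_x = view["canvas_width"] / view["world_width"]
          scale_y = view["canvas_height"] / view["world_height"]
          screen_x = (world_x - view["world_left"]) * scale_x
          # world Y usually grows upward while Canvas.Top grows downward, hence the flip:
          screen_y = view["canvas_height"] - (world_y - view["world_bottom"]) * scale_y
          return screen_x, screen_y

      view = {"world_left": 0, "world_bottom": 0, "world_width": 100, "world_height": 100,
              "canvas_width": 800, "canvas_height": 600}
      print(world_to_screen(50, 50, view))   # (400.0, 300.0) - world centre lands at canvas centre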

    Read the article

  • Bitmap issue in Samsung Galaxy S3

    - by user1531240
    I wrote a method to change a Bitmap from a camera shot:

      public Bitmap bitmapChange(Bitmap bm) {
          /* get original image size */
          int w = bm.getWidth();
          int h = bm.getHeight();
          /* check the image's orientation */
          float scale = w / h;
          if (scale < 1) {
              /* if the orientation is portrait then scale it and show it on the screen */
              float scaleWidth = (float) 90 / (float) w;
              float scaleHeight = (float) 130 / (float) h;
              Matrix mtx = new Matrix();
              mtx.postScale(scaleWidth, scaleHeight);
              Bitmap rotatedBMP = Bitmap.createBitmap(bm, 0, 0, w, h, mtx, true);
              return rotatedBMP;
          } else {
              /* if the orientation is landscape then rotate 90 */
              float scaleWidth = (float) 130 / (float) w;
              float scaleHeight = (float) 90 / (float) h;
              Matrix mtx = new Matrix();
              mtx.postScale(scaleWidth, scaleHeight);
              mtx.postRotate(90);
              Bitmap rotatedBMP = Bitmap.createBitmap(bm, 0, 0, w, h, mtx, true);
              return rotatedBMP;
          }
      }

    It works fine on other Android devices, even the Galaxy Nexus, but on the Samsung Galaxy S3 the scaled image doesn't show on screen. I tried skipping the bitmapChange method and letting it show the original-size Bitmap on screen, but the S3 also shows nothing on screen. The information of the variables in Eclipse is here. The information for the Sony Xperia is here. The Xperia and other devices work fine.
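
    Unrelated to the S3-specific symptom, one detail worth knowing about the snippet: in Java, w / h on two ints truncates before the assignment to float, so scale is 0.0 for every portrait frame and 1.0, 2.0, ... for landscape; the orientation check still happens to work, but the variable never holds the real aspect ratio, and the two independent scale factors will distort the image. A quick illustration of the arithmetic, plus the aspect-preserving fit most thumbnails want (Python, illustration only, not the Android API):

      w, h = 1080, 1920                      # a portrait camera frame

      scale_int = w // h                     # what Java's  float scale = w / h;  computes: 0
      scale_real = w / h                     # the actual aspect ratio: 0.5625

      # aspect-preserving fit into the 90x130 box used in the question:
      target_w, target_h = 90, 130
      fit = min(target_w / w, target_h / h)  # one uniform factor instead of two independent ones
      print(scale_int, round(scale_real, 4), round(fit, 4), round(w * fit), round(h * fit))
      # -> 0 0.5625 0.0677 73 130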

    Read the article

  • C++ game loop example

    - by David
    Can someone write up a source for a program that just has a "game loop", which keeps looping until you press Esc while the program shows a basic image? Here's the source I have right now, but I have to use SDL_Delay(2000); to keep the program alive for 2 seconds, during which the program is frozen.

      #include "SDL.h"

      int main(int argc, char* args[])
      {
          SDL_Surface* hello = NULL;
          SDL_Surface* screen = NULL;
          SDL_Init(SDL_INIT_EVERYTHING);
          screen = SDL_SetVideoMode(640, 480, 32, SDL_SWSURFACE);
          hello = SDL_LoadBMP("hello.bmp");
          SDL_BlitSurface(hello, NULL, screen, NULL);
          SDL_Flip(screen);
          SDL_Delay(2000);
          SDL_FreeSurface(hello);
          SDL_Quit();
          return 0;
      }

    I just want the program to stay open until I press Esc. I know how the loop works; I just don't know whether to implement it inside the main() function or outside of it. I've tried both, and both times it failed. If you could help me out that would be great :P
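
    The missing piece is usually an event-pumping loop inside main(): draw, then poll events every frame until a quit or Escape event arrives - in SDL that's a while loop around SDL_PollEvent. A sketch of the same loop shape using pygame (Python's SDL wrapper), assuming a hello.bmp next to the script; the structure carries over to the C version one-to-one:

      import pygame

      pygame.init()
      screen = pygame.display.set_mode((640, 480))
      hello = pygame.image.load("hello.bmp")

      running = True
      while running:                                   # the "game loop" lives inside main
          for event in pygame.event.get():             # poll everything that happened this frame
              if event.type == pygame.QUIT:
                  running = False
              elif event.type == pygame.KEYDOWN and event.key == pygame.K_ESCAPE:
                  running = False
          screen.blit(hello, (0, 0))                   # redraw every frame
          pygame.display.flip()
          pygame.time.delay(16)                        # ~60 fps; keeps the CPU from spinning

      pygame.quit()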

    Read the article

  • Should we EVER use dp values for width/height?

    - by sandalone
    I've come across a project done by some other team which I have to fix. They used dp values for images' width/height. When I tried to adapt the layout for some tablets and/or mobiles, I faced a lot of trouble. For example, an image of 40x40 dp has a top padding of 15dp; when such an image is loaded on some new mobile screen, the image is not where it was supposed to be - it's either shifted or distorted or of the wrong size. Now I need to propose a redesign of the whole app and I need some advice from the more experienced community. Should I abandon such a layout policy (described above) and do it like this:

      - make the image with a size of 40x40 px
      - position the image for the mdpi screen
      - set its height/width to wrap_content
      - do the same for the other images
      - after I finish the layout for mdpi, resize each image for ldpi, hdpi and xhdpi screens
      - in case of a special mobile/tablet, make a special set of images + xml files

    Is there a case where you would advise using explicit sizes for some images? Do you advise setting the size of images in the xml layout, or setting the size via Photoshop or similar graphics tools and then resizing the images for other screen sizes or screen densities?
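
    For context, dp is itself a density formula rather than a fixed size: the framework converts it as px = dp * (screen dpi / 160), so mdpi is the 1:1 baseline and the same dp value grows on denser screens. A quick illustration of that arithmetic (Python, illustrative only):

      DENSITY_DPI = {"ldpi": 120, "mdpi": 160, "hdpi": 240, "xhdpi": 320}

      def dp_to_px(dp, bucket):
          # Android's conversion: px = dp * (dpi / 160); mdpi is the 1:1 baseline
          return round(dp * DENSITY_DPI[bucket] / 160)

      for bucket in ("ldpi", "mdpi", "hdpi", "xhdpi"):
          print(bucket, dp_to_px(40, bucket), "px")   # 30, 40, 60, 80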

    Read the article

  • Web UI element to represent two different micro-views of data in the same spot?

    - by Chris McCall
    I've been tasked with laying out a portion of a screen for a customer care (call center) app that serves as sort of a header/summary block at the top of the screen. Here's what it looks like: The important part is in the red box. That little tooltip is the biz's vision for how to represent both the numeric SiteId and the textual Site Name in the same piece of screen real estate. I asked, and the business thinks the Name is more important than the ID, but lists the Id by default, because the Name can't be truncated in the display and there's only so much horizontal room to put the data. So they go with the Id, because it's fewer characters, and then they have the user mouse-over the Id to display the name (presumably because the tooltip can be of unlimited width and, since it's floating over the rest of the screen, the full name will always be displayed). So, here's my question: Is there some better UI metaphor that I don't know about that could get this job done, while meeting the following constraints?

      - Does not require the mouse (uses a keyboard shortcut to do the "reveal")
      - Allows the user to copy and paste the name
      - Will not truncate the name
      - Provides for the display of both the ID and name in the same spot
      - Works with IE7

    Read the article

  • how to fetch data from Plist in Label

    - by SameSung Vs Iphone
    I have a RegistrationController screen that stores email-id, password, DOB, height and weight, and a LoginViewController screen that matches the email-id and password for login purposes. Now, in a third screen, I have to fetch only the height and weight of the logged-in user from the plist, to display them in a label. If I store the email-id and password values from the LoginViewController in a string and match them again in the new screen, and they match, how do I then fetch the height and weight of that same user from the stored plist? The code I used to match them in the LoginController:

      -(NSArray*)readFromPlist {
          NSArray *documentPaths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
          NSString *documentsDirectory = [documentPaths objectAtIndex:0];
          NSString *documentPlistPath = [documentsDirectory stringByAppendingPathComponent:@"XYZ.plist"];
          NSDictionary *dict = [NSDictionary dictionaryWithContentsOfFile:documentPlistPath];
          NSArray *valueArray = [dict objectForKey:@"title"];
          return valueArray;
      }

      - (void)authenticateCredentials {
          NSMutableArray *plistArray = [NSMutableArray arrayWithArray:[self readFromPlist]];
          for (int i = 0; i < [plistArray count]; i++) {
              id object = [plistArray objectAtIndex:i];
              if ([object isKindOfClass:[NSDictionary class]]) {
                  NSDictionary *objDict = (NSDictionary *)object;
                  if ([[objDict objectForKey:@"pass"] isEqualToString:emailTextFeild.text] && [[objDict objectForKey:@"title"] isEqualToString:passwordTextFeild.text]) {
                      NSLog(@"Correct credentials");
                      return;
                  }
                  NSLog(@"INCorrect credentials");
              } else {
                  NSLog(@"Error! Not a dictionary");
              }
          }
      }

    Read the article

  • How to make a piece of WPF content take up the entire application window

    - by Bojin Li
    I'm working on an application that contains a number of content areas. I want to implement a behavior such that in response to user input, any of these content areas can be toggled to fit the entire application window, and optionally back to its original position again. I experimented with several approaches and none of them seem optimal for me. Here's what I tried to do: Use the ClipToBoundsProperty on the content I want to make "Full Screen": Doesn't work because only the CanvasPanel seems to fully respect this property. The application need to be localized so I would really like to avoid the CanvasPanel. Use a Grid and collapse the other content areas, such that only the one I want to see is visible, hence taking up the entire screen: This will probably work but doesn't seem easy to implement nor maintain. The "Full Screen" content area could be several levels deep, for example residing inside a Tabcontrol, so I would have to hide the tab headers too etc. Reconstruct the content area in a separate view and display it while hiding the rest: Seems easy enough to do with DataTemplates and my ViewModel objects, but any GUI/View only states are not preserved using this approach. Somehow "lift" the GUI/View I want to "Full Screen" into the separate view and display it while hiding the rest: I don't know how to do this or even if this is possible. Anyway if anyone knows a better approach I would love to know about it. Thanks a lot!

    Read the article

  • BlackBerry Field class extension will not paint.

    - by jlindenbaum
    Using JRE 5.0.0; the simulator device is an 8520. On a screen I am using a FlowFieldManager(Manager.VERTICAL_SCROLL) and adding Fields to it to show data. When I do

      this.flowManager = new FlowFieldManager(Manager.VERTICAL_SCROLL);
      Field field = new Field() {
          protected void paint(Graphics graphics) {
              graphics.drawText("Test", 0, 0);
          }

          protected void layout(int width, int height) {
              this.setExtent(300, 300); // just testing
          }
      };
      this.flowManager.add(field);

    the screen renders correctly and 'Test' appears on the screen. If, on the other hand, I try to abstract this into a class called CustomField with the same properties and add it to the flow manager, the render does not happen. Debugging shows that the device enters the object's layout function, but not the paint function. I can't figure out why the paint function is not called when I extend Field. The 4.5 API says that layout and paint are the only functions I really need to override (getPreferredWidth and getPreferredHeight will be used to calculate screen sizes etc.). Thanks in advance.

    Read the article
