Search Results

Search found 16009 results on 641 pages for 'screen saver'.


  • NES Programming - Nametables?

    - by Jeffrey Kern
    Hello everyone, I'm wondering about how the NES displays its graphical muscle. I've researched stuff online and read through it, but I'm wondering about one last thing: Nametables. Basically, from what I've read, each 8x8 block in an NES nametable points to a location in the pattern table, which holds graphic memory. In addition, the nametable also has an attribute table which sets a certain color palette for each 16x16 block. They're linked up together like this (assuming 16 8x8 blocks): Nametable, with A B C D = pointers to sprite data: ABBB CDCC DDDD DDDD Attribute table, with 1 2 3 = pointers to color palette data, with < referencing the value to the left, ^ the one above, and ' the one to the left and above: 1<2< ^'^' 3<3< ^'^' So, in the example above, the blocks would be colored like so: 1A 1B 2B 2B 1C 1D 2C 2C 3D 3D 3D 3D 3D 3D 3D 3D Now, if I have this on a fixed screen - it works great! Because the NES resolution is 256x240 pixels. Now, how do these tables get adjusted for scrolling? Because Nametable 0 can scroll into Nametable 1, and if you keep scrolling, Nametable 0 will wrap around again. That I get. But what I don't get is how the attribute table wraps around as well when scrolling. From what I've read online, the 16x16 blocks it assigns attributes for will cause color distortions on the edge tiles of the screen (as seen when you scroll left to right and vice-versa in SMB3). The concern I have is that I understand how to scroll the nametables, but how do you scroll the attribute table? For instance, if I have a green block on the left side of the screen, moving the screen to the right should in theory cause the tiles to the right to be green as well until they move more into frame, at which point they'll revert to their normal colors.
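
    Not from the question, but as a worked illustration of the arithmetic the attribute table implies (Java is used here only as pseudo-code; tileX runs 0-31, tileY runs 0-29, and each attribute byte covers a 4x4-tile area split into four 16x16-pixel quadrants of 2 bits each):

      // Illustrative sketch only: which 2-bit field selects the palette for a background tile.
      class AttributeTableDemo {
          static int paletteForTile(byte[] attributeTable, int tileX, int tileY) {
              int attrIndex = (tileY / 4) * 8 + (tileX / 4);      // 8 attribute bytes per row of 32x32 px areas
              int shift = ((tileY & 2) << 1) | (tileX & 2);       // 0, 2, 4 or 6: quadrant within that area
              return (attributeTable[attrIndex] >> shift) & 0x3;  // 2-bit palette number for that 16x16 block
          }
      }

    Each nametable carries its own 64-byte attribute table, so when scrolling crosses into the next nametable the same lookup simply runs against that nametable's table; the coarse 16x16 granularity, not the lookup itself, is what produces the edge-color artifacts mentioned above.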


  • Navigating through code with keyboard shortcuts

    - by MarceloRamires
    I'm starting to feel the need to move quickly through code with keyboard shortcuts, to arrive faster where I want to make any changes (avoiding use of the mouse or long times holding [up], [left], [right] and [down]). I'm already using some: [home] - first position in current line [end] - last position in current line [ctrl] + [home] - first line of the entire code [ctrl] + [end] - last line of the entire code [pageup] - same vertical position, one screen above [pagedown] - same vertical position, one screen below [ctrl] + [pageup] - first line in current screen [ctrl] + [pagedown] - last line in current screen [ctrl] + [left/right] - skipping word per word What have you got? I use Visual Studio (but I'm open to any answer, as I may use others soon). obs: I've searched through Stack Overflow and didn't find a nice question with this content, nor a list of shortcuts for navigating code. If it's a duplicate, I'm sorry for not finding it; I'm here with my best intentions. This question is NOT about just any shortcuts, and not only about Visual Studio; it's about moving through code with shortcuts. Answers that suit the question so far: [Ctrl] + [-] - jumps to last cursor position [Ctrl] + [F3] - Jumps to next occurrence of the word the cursor is in [Shift] + [F3] - Same as the above, backwards. [F12] - Goes to definition of method/variable the cursor is in [Ctrl] + [ ] ] - Jumps to matching brace and selects I'll add more as answers come in.


  • targeting sprites from a method in the document class - null object reference

    - by Freddyk
    Hi, I am trying to code a Flash app entirely in the document class. I am using GestureWorks with a touch screen. When a user essentially presses a button, it calls a method that should hide a specific graphic but not the graphic they touched. Essentially I need a way to refer to a graphic on the screen from a method, besides using 'e.target'. //This code works because it can access 'e.target'. private function photo1SpriteFlickHandler(e:GestureEvent):void { var openTween:Tween = new Tween(e.target, "x", Strong.easeOut, 232, 970, 5, true); } //this code gives me a null object reference because I am using 'photo1Sprite' rather than 'e.target' private function photo1SpriteFlickHandler(e:GestureEvent):void { var openTween:Tween = new Tween(photo1Sprite, "x", Strong.easeOut, 232, 970, 5, true); } //photo1Sprite has already been programmatically added to the screen like so: var photo1Sprite = new TouchSprite(); var photo1Loader=new Loader(); photo1Loader.load(new URLRequest("media/photos1/photo1.jpg")); photo1Loader.contentLoaderInfo.addEventListener(Event.COMPLETE,loaderComplete); photo1Sprite.x = 232; photo1Sprite.y = 538; photo1Sprite.scaleX = .3; photo1Sprite.scaleY = .3; photo1Sprite.blobContainerEnabled = true; photo1Sprite.addEventListener(TouchEvent.TOUCH_DOWN, startDrag_Press); photo1Sprite.addEventListener(TouchEvent.TOUCH_UP, stopDrag_Release); photo1Sprite.addChild(photo1Loader); addChild(photo1Sprite); So I can make photo1Sprite react if my method is attached to it directly using 'e.target', but not if I am trying to call it from a method that was called from another element on the screen.


  • How to intercept touch events globally?

    - by mystify
    I have a view which is sometimes covered by some other views. However, if the user slides a finger across the screen, I want to slide that underlying view across the screen too. I could start making custom views for all those covering subviews and forward all kinds of touch events, but that's somewhat cumbersome. Maybe there's some kind of notification or another way that a UIView or UIControl subclass can be aware of touch events happening right now, no matter where they are. In short: I need a UIView subclass or UIControl subclass which knows about any touch events happening on the entire screen. Or at least, if that's not possible, knowing about any touch events happening above itself in the same underlying superview. Another description: There are 20 views, all residing inside the same superview. The first view is covered by 19 others. But if the user slides across the screen, that first view must slide too, so it must be aware of touch events. Is there any better solution than making all 19 views forward touch events? (yes, all 19 views respond to touch events in this example)


  • Method in RootViewController not Storing Array

    - by Antonio
    I have an array initialized in my RootViewController and a method that adds objects to it. I created a RootViewController object in my SecondViewController. The method runs (it outputs a message) but it doesn't add anything to the array, and the array seems empty. Code is below; any suggestions? RootViewController.h #import "RootViewController.h" #import "SecondViewController.h" @implementation RootViewController - (void)viewDidLoad { [super viewDidLoad]; myArray2 = [[NSMutableArray alloc] init]; NSLog(@"View was loaded"); } -(void)addToArray2{ NSLog(@"Array triggered from SecondViewController"); [myArray2 addObject:@"Test"]; [self showArray2]; } -(void)showArray2{ NSLog(@"Array Count: %d", [myArray2 count]); } -(IBAction)switchViews{ SecondViewController *screen = [[SecondViewController alloc] initWithNibName:nil bundle:nil]; screen.modalTransitionStyle = UIModalTransitionStyleCoverVertical; [self presentModalViewController:screen animated:YES]; [screen release]; } SecondViewController.m #import "SecondViewController.h" #import "RootViewController.h" @implementation SecondViewController -(IBAction)addToArray{ RootViewController *object = [[RootViewController alloc] init]; [object addToArray2]; } -(IBAction)switchBack{ [self dismissModalViewControllerAnimated:YES]; } EDIT***** With Matt's code I got the following error: " expected specifier-qualifier-list before 'RootViewController' "


  • Get active window title in X

    - by dutt
    I'm trying to get the title of the active window. The application is a background task so if the user has Eclipse open the function returns "Eclipse - blabla", so it's not getting the window title of my own window. I'm developing this in Python 2.6 using PyQt4. My current solution, borrowed and slightly modified from an old answer here at SO, looks like this: def get_active_window_title(): title = '' root_check = '' root = Popen(['xprop', '-root'], stdout=PIPE) if root.stdout != root_check: root_check = root.stdout for i in root.stdout: if '_NET_ACTIVE_WINDOW(WINDOW):' in i: id_ = i.split()[4] id_w = Popen(['xprop', '-id', id_], stdout=PIPE) for j in id_w.stdout: if 'WM_ICON_NAME(STRING)' in j: if title != j.split()[2]: return j.split("= ")[1].strip(' \n\"') It works for most windows, but not all. For example it can't find my kopete chat windows, or the name of the application i'm currently developing. My next try looks like this: def get_active_window_title(self): screen = wnck.screen_get_default() if screen == None: return "Could not get screen" window = screen.get_active_window() if window == None: return "Could not get window" title = window.get_name() return title; But for some reason window is always None. Does somebody have a better way of getting the current window title, or how to modify one of my ways, that works for all windows? Edit: In case anybody is wondering this is the way I found that seems to work for all windows. def get_active_window_title(self): root_check = '' root = Popen(['xprop', '-root'], stdout=PIPE) if root.stdout != root_check: root_check = root.stdout for i in root.stdout: if '_NET_ACTIVE_WINDOW(WINDOW):' in i: id_ = i.split()[4] id_w = Popen(['xprop', '-id', id_], stdout=PIPE) id_w.wait() buff = [] for j in id_w.stdout: buff.append(j) for line in buff: match = re.match("WM_NAME\((?P<type>.+)\) = (?P<name>.+)", line) if match != None: type = match.group("type") if type == "STRING" or type == "COMPOUND_TEXT": return match.group("name") return "Active window not found"


  • Mobile detection - Meta tag and max-device-width vs. php user agent?

    - by nimmbl
    Which form of mobile detection should I use and why? <meta name="viewport" content="width=320,initial-scale=1,maximum-scale=1.0,user-scalable=no" /> <link media="only screen and (max-device-width: 480px) and (min-device-width: 320px)" href="css/mobile.css" type= "text/css" rel="stylesheet"> <link media="handheld, only screen and (max-device-width: 319px)" href="css/mobile_simple.css" type="text/css" rel="stylesheet" /> Or include('mobile_device_detect.php'); $mobile = mobile_device_detect(); And why on earth would this: <?php if(strpos($_SERVER['HTTP_USER_AGENT'], 'iPhone') !== false) { ?> <meta name="viewport" content="width=320,initial-scale=1,maximum-scale=1.0,user-scalable=no" /> <link media="screen" href="css/mobile.css" type= "text/css" rel="stylesheet"> <?php } else { ?> <link media="screen" href="css/mobile_simple.css" type= "text/css" rel="stylesheet"> <?php } ?> ignore this css? body { background: -webkit-gradient(linear, left top, left bottom, from(#555), to(#000)); }


  • CSS to specify positions over a scanned document

    - by itsols
    I'm trying to write the CSS rules to position text over a scanned document. Reason: The document is a pre-printed form. I am trying to position the text on-screen so that it relates to the 'spaces' on the actual form. Issue: Although I position the values using centimeters, they don't seem to be aligned with the ones on the actual page. I can see this misalignment since my scanned image is in the background of the page. What I've tried: I used a ruler to physically measure the locations and specify them with CSS. But on-screen, it doesn't tally. I used the scanned image to position the CSS values. Then the printout is not correct. I even scaled the scanned page using Inkscape to the exact dimensions in centimeters and took into account all margins, etc... What I need: I am trying to correctly show the output values on-screen AND have them print correctly as well. I know that using two CSS sheets (one for print) is an option. But I'm developing this program away from where the actual printing is to be done. So is there a convenient way of matching the exact screen locations with those on the actual/final printout? Thanks!


  • How to call a method from another class that's been instantiated within the current class

    - by Pavan
    My screen has a few views like such: __________________ | _____ | | | | | //viewX is a video screen | | | | | viewX | vY | | //viewY is a custom UIView I created. | |____| | //it contains a method which I would like to call that toggles |_________________| //the hidden property of this view. and when it hides, a little | | //button is placed on the top right corner on top of viewX | viewZ | //the video layer | | |_________________| //viewZ is a view containing many square views - thumbnails. My question is, I don't know how to register for touch events so that it recognises any touch event no matter which view the user touches on the screen. At the moment I'm handling the touch events for each view inside it, so all works well... however, what I'm trying to do is that when the user taps anywhere else on the screen but on viewY, viewY should disappear by calling that method in the viewY class. This viewY class is instantiated and has no xib file attached to it. The UIView is created programmatically in the viewY class. This whole class for viewY behaviour is instantiated in viewX - the video view. My boss says add delegates... although I have no clue how to do that... any help? Is there any way I can just make it really simple and be able to say REMOVE VIEW no matter which class I'm calling from? Also I've seen other people achieve this by using these funky arrows - ... <- etc., although I'm not sure if that's what I need or how to implement such a thing. Ah, I think I've made my question quite complicated but I really mean it to be a simple one, and know it can be done in an easy way!


  • smart phone UI limitations

    - by Manny
    I would like to know what limitations there are for how far one can go in terms of replacing UI components of current touch screen smart phones, in particular iPhone, BlackBerry and Android based phones. What I would like to do is create a custom UI for dialing out and incoming calls. I have some experience with BlackBerry development. The theme builder for it can be used to customize certain items on the incoming call screen, but it doesn't look like you can increase the size of the answer button. I know BlackBerry also gives you access to all the phone APIs, but I'm not sure that you can create your own UI that can gain preference over the BlackBerry incoming call screen. And if you try to customize the incoming call screen by adding any buttons to it, they would be rendered as pictures. I could possibly design a complete UI for Android, since different manufacturers have different UIs for Android based phones. Can I do what I want to do using iPhone, BlackBerry or Android? Or any other phone for that matter? I am guessing maybe Nokia phones using Qt, but I prefer the 3 platforms I listed. Thanks for all your help.


  • Efficient mapping of game entity positions in Java

    - by byte
    In Java (Swing), say I've got a 2D game where I have various types of entities on the screen, such as a player, bad guys, powerups, etc. When the player moves across the screen, in order to do efficient checking of what is in the immediate vicinity of the player, I would think I'd want indexed access to the things that are near the character, based on their position. For example, if player 'P' steps onto element 'E' in the following example... | | | | | | | | | |P| | | | |E| | | | | | | | | ...one way would be to do something like: if(player.getPosition().x == entity.getPosition().x && entity.getPosition().y == player.getPosition().y) { //do something } And that's fine, but that implies that the entities hold their positions, and therefore if I had MANY entities on the screen I would have to loop through all available entities and check each one's position against the player's position. This seems really inefficient, especially if you start getting tons of entities. So, I would suspect I'd want some sort of map like Map<Point, Entity> map = new HashMap<Point, Entity>(); and store my point information there, so that I could access these entities in constant time. The only problem with that approach is that, if I want to move an entity to a different point on the screen, I'd have to search through the values of the HashMap for the entity I want to move (inefficient since I don't know its Point position ahead of time), and then once I've found it remove it from the HashMap, and re-insert it with the new position information. Any suggestions or advice on what sort of data structure / storage format I ought to be using here in order to have efficient access to Entities based on their position, as well as Positions based on the Entity?
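
    Not part of the question, but the usual way around the reverse-lookup problem described above is to index both directions and update them together. A minimal sketch (E stands in for whatever entity type the game uses; java.awt.Point already has value-based equals/hashCode):

      import java.awt.Point;
      import java.util.HashMap;
      import java.util.Map;

      // Sketch: O(1) lookups both ways, kept in sync through place().
      class EntityIndex<E> {
          private final Map<Point, E> byPosition = new HashMap<Point, E>();
          private final Map<E, Point> byEntity   = new HashMap<E, Point>();

          E entityAt(Point p)   { return byPosition.get(p); }
          Point positionOf(E e) { return byEntity.get(e); }

          void place(E e, Point p) {
              Point old = byEntity.put(e, new Point(p)); // copy, since Point is mutable
              if (old != null) byPosition.remove(old);   // free the cell it used to occupy
              byPosition.put(new Point(p), e);
          }
      }

    The same idea scales up to range queries ("everything within a few tiles of the player") by bucketing positions into a coarse grid of cells instead of exact points.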


  • How to convert X/Y position to Canvas Left/Top properties when using ItemsControl

    - by kshahar
    I am trying to use a Canvas to display objects that have "world" location (rather than "screen" location). The canvas is defined like this: <Canvas Background="AliceBlue"> <ItemsControl Name="myItemsControl" ItemsSource="{Binding MyItems}"> <ItemsControl.ItemTemplate> <DataTemplate> <Canvas> <TextBlock Canvas.Left="{Binding WorldX}" Canvas.Top="{Binding WorldY}" Text="{Binding Text}" Width="Auto" Height="Auto" Foreground="Red" /> </Canvas> </DataTemplate> </ItemsControl.ItemTemplate> </ItemsControl> </Canvas> MyItem is defined like this: public class MyItem { public MyItem(double worldX, double worldY, string text) { WorldX = worldX; WorldY = worldY; Text = text; } public double WorldX { get; set; } public double WorldY { get; set; } public string Text { get; set; } } In addition, I have a method to convert between world and screen coordinates: Point worldToScreen(double worldX, double worldY) { // return screen coordinates using the canvas properties and an internal MapData object } With the current implementation, the items are positioned in the wrong location, because their location is not converted to screen coordinates. How can I apply the worldToScreen method on the MyItem objects before they are added to the canvas?


  • C++ game loop example

    - by David
    Can someone write up a source for a program that just has a "game loop", which keeps looping until you press Esc, while the program shows a basic image? Here's the source I have right now, but I have to use SDL_Delay(2000); to keep the program alive for 2 seconds, during which the program is frozen. #include "SDL.h" int main(int argc, char* args[]) { SDL_Surface* hello = NULL; SDL_Surface* screen = NULL; SDL_Init(SDL_INIT_EVERYTHING); screen = SDL_SetVideoMode(640, 480, 32, SDL_SWSURFACE); hello = SDL_LoadBMP("hello.bmp"); SDL_BlitSurface(hello, NULL, screen, NULL); SDL_Flip(screen); SDL_Delay(2000); SDL_FreeSurface(hello); SDL_Quit(); return 0; } I just want the program to stay open until I press Esc. I know how the loop works, I just don't know if I should implement it inside the main() function or outside of it. I've tried both, and both times it failed. If you could help me out that would be great :P


  • How to convert a byte array of 19200 bytes in size where each byte represents 4 pixels (2 bits per pixel) to a 320x240 bitmap

    - by Klinger
    I am communicating with an instrument (remote controlling it) and one of the things I need to do is to draw the instrument screen. In order to get the screen I issue a command and the instrument replies with an array of bytes that represents the screen. Below is what the instrument manual has to say about converting the response to the actual screen: The command retrieves the framebuffer data used for the display. It is 19200 bytes in size, 2-bits per pixel, 4 pixels per byte, arranged as 320x240 characters. The data is sent in RLE encoded form. To convert this data into a BMP for use in Windows, it needs to be turned into a 4BPP. Also note that BMP files are upside down relative to this data, i.e. the top display line is the last line in the BMP. I managed to unpack the data, but now I am stuck on how to actually go from the unpacked byte array to a bitmap. My background on this is pretty close to zero and my searches have not revealed much either. I am looking for directions and/or articles I could use to help me understand how to get this done. Any code or even pseudo code would also help. :-) So, just to summarize it all: How to convert a byte array of 19200 bytes in size, where each byte represents 4 pixels (2 bits per pixel), to a bitmap arranged as 320x240 characters. Thanks in advance.
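
    A sketch of the unpacking step, not the vendor's code: it assumes the RLE has already been decoded to 19200 raw bytes and that the most significant bit pair of each byte is the leftmost pixel (if the picture comes out interleaved, reverse that shift). The resulting 0-3 values still need to be mapped to palette entries when the 4BPP BMP is written.

      // Illustrative only: expand 2-bit packed pixels into one value per pixel,
      // flipping rows so the array matches BMP's bottom-up line order.
      class FramebufferDemo {
          static int[] unpack2bpp(byte[] packed, int width, int height) {
              int[] pixels = new int[width * height];
              for (int i = 0; i < packed.length; i++) {
                  int b = packed[i] & 0xFF;
                  for (int p = 0; p < 4; p++) {
                      int value = (b >> (6 - 2 * p)) & 0x3;         // one 2-bit pixel (0-3)
                      int index = i * 4 + p;
                      int x = index % width;
                      int y = index / width;
                      pixels[(height - 1 - y) * width + x] = value; // BMP stores the bottom line first
                  }
              }
              return pixels;
          }
      }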


  • Should we EVER use dp values for width/height?

    - by sandalone
    I've come across a project done by some other team which I have to fix. They used dp values for images' width/height. When I tried to adapt the layout for some tablets and/or mobiles, I faced a lot of trouble. For example, an image of 40x40 dp has a top padding of 15dp. When such an image is loaded on some new mobile screen, the image is not where it was supposed to be - it's either shifted or distorted or of the wrong size. Now I need to propose a redesign of the whole app and I need some advice from the more experienced community. Should I abandon such a layout policy (described above) and do it like this: make the image with the size of 40x40 px; position the image for the mdpi screen; set its height/width to wrap_content; do the same for other images; after I finish the layout for mdpi, resize each image for ldpi, hdpi and xhdpi screens; in case of a special mobile/tablet, make a special set of images + xml files. Are there cases where you would advise using the explicit size of some images? Do you advise setting the size of images in the xml layout, or setting the size via Photoshop or similar graphics tools and then resizing the images for other screen sizes and densities?
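
    For what it's worth, the dp unit is defined so that it scales with screen density at runtime; a rough sketch of the relationship (standard Android APIs, helper and class names purely illustrative):

      import android.content.Context;

      // dp -> physical pixels: density is 1.0 on mdpi, 1.5 on hdpi, 2.0 on xhdpi, etc.
      class DensityDemo {
          static int dpToPx(Context context, float dp) {
              float density = context.getResources().getDisplayMetrics().density;
              return Math.round(dp * density);
          }
      }

    A fixed px size therefore only looks right on one density bucket, whereas a dp size (or a per-density drawable set) is what keeps a 40x40 image the same physical size across devices.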


  • Bitmap issue in Samsung Galaxy S3

    - by user1531240
    I wrote a method to change a Bitmap from a camera shot: public Bitmap bitmapChange(Bitmap bm) { /* get original image size */ int w = bm.getWidth(); int h = bm.getHeight(); /* check the image's orientation */ float scale = w / h; if(scale < 1) { /* if the orientation is portrait then scale it and show it on the screen */ float scaleWidth = (float) 90 / (float) w; float scaleHeight = (float) 130 / (float) h; Matrix mtx = new Matrix(); mtx.postScale(scaleWidth, scaleHeight); Bitmap rotatedBMP = Bitmap.createBitmap(bm, 0, 0, w, h, mtx, true); return rotatedBMP; } else { /* if the orientation is landscape then rotate 90 */ float scaleWidth = (float) 130 / (float) w; float scaleHeight = (float) 90 / (float) h; Matrix mtx = new Matrix(); mtx.postScale(scaleWidth, scaleHeight); mtx.postRotate(90); Bitmap rotatedBMP = Bitmap.createBitmap(bm, 0, 0, w, h, mtx, true); return rotatedBMP; } } It works fine on other Android devices, even the Galaxy Nexus, but on the Samsung Galaxy S3 the scaled image doesn't show on screen. I tried bypassing the bitmapChange method and letting it show the original-size Bitmap on screen, but the S3 also shows nothing on screen. The information about the variables in Eclipse is here. The information for the Sony Xperia is here. The Xperia and the other devices work fine.


  • Web UI element to represent two different micro-views of data in the same spot?

    - by Chris McCall
    I've been tasked with laying out a portion of a screen for a customer care (call center) app that serves as sort of a header/summary block at the top of the screen. Here's what it looks like: The important part is in the red box. That little tooltip is the biz's vision for how to represent both the numeric SiteId and the textual Site Name all in the same piece of screen real estate. I asked, and the business thinks the Name is more important than the ID, but lists the Id by default, because the Name can't be truncated in the display, and there's only so much horizontal room to put the data. So they go with the Id, because it's fewer characters, and then they have the user mouse-over the Id to display the name (presumably because the tooltip can be of unlimited width and since it's floating over the rest of the screen, the full name will always be displayed. So, here's my question: Is there some better UI metaphor that I don't know about that could get this job done, while meeting the following constraints?: Does not require the mouse (uses a keyboard shortcut to do the "reveal") Allows the user to copy and paste the name Will not truncate the name Provides for the display of both the ID and name in the same spot Works with IE7


  • how to fetch data from Plist in Label

    - by SameSung Vs Iphone
    I have a RegistrationController screen that stores email-id, password, DOB, Height and Weight, and a login controller screen that matches the email-id and password for logging in. Now, in a third screen, I have to fetch only the Height and Weight of the logged-in user from the plist and display them in a label. If I store the email-id and password from LoginViewController in strings and use them in the new screen to find the matching entry, how do I then fetch that user's Height and Weight from the stored plist into strings? The code which I used to match in LoginController: -(NSArray*)readFromPlist { NSArray *documentPaths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES); NSString *documentsDirectory = [documentPaths objectAtIndex:0]; NSString *documentPlistPath = [documentsDirectory stringByAppendingPathComponent:@"XYZ.plist"]; NSDictionary *dict = [NSDictionary dictionaryWithContentsOfFile:documentPlistPath]; NSArray *valueArray = [dict objectForKey:@"title"]; return valueArray; } - (void)authenticateCredentials { NSMutableArray *plistArray = [NSMutableArray arrayWithArray:[self readFromPlist]]; for (int i = 0; i< [plistArray count]; i++) { id object = [plistArray objectAtIndex:i]; if ([object isKindOfClass:[NSDictionary class]]) { NSDictionary *objDict = (NSDictionary *)object; if ([[objDict objectForKey:@"pass"] isEqualToString:emailTextFeild.text] && [[objDict objectForKey:@"title"] isEqualToString:passwordTextFeild.text]) { NSLog(@"Correct credentials"); return; } NSLog(@"INCorrect credentials"); } else { NSLog(@"Error! Not a dictionary"); } } }


  • How to make a piece of WPF content take up the entire application window

    - by Bojin Li
    I'm working on an application that contains a number of content areas. I want to implement a behavior such that, in response to user input, any of these content areas can be toggled to fit the entire application window, and optionally back to its original position again. I experimented with several approaches and none of them seems optimal to me. Here's what I tried to do: Use the ClipToBoundsProperty on the content I want to make "Full Screen": Doesn't work because only the CanvasPanel seems to fully respect this property. The application needs to be localized, so I would really like to avoid the CanvasPanel. Use a Grid and collapse the other content areas, such that only the one I want to see is visible, hence taking up the entire screen: This will probably work but doesn't seem easy to implement or maintain. The "Full Screen" content area could be several levels deep, for example residing inside a TabControl, so I would have to hide the tab headers too, etc. Reconstruct the content area in a separate view and display it while hiding the rest: Seems easy enough to do with DataTemplates and my ViewModel objects, but any GUI/View-only states are not preserved using this approach. Somehow "lift" the GUI/View I want to "Full Screen" into the separate view and display it while hiding the rest: I don't know how to do this or even if this is possible. Anyway, if anyone knows a better approach I would love to know about it. Thanks a lot!


  • BlackBerry Field class extension will not paint.

    - by jlindenbaum
    Using JRE 5.0.0, the simulator device is an 8520. On a screen I am using a FlowFieldManager(Manager.VERTICAL_SCROLL) and adding Fields to it to show data. When I do this.flowManager = new FlowFieldManager(Manager.VERTICAL_SCROLL); Field field = new Field() { protected void paint(Graphics graphics) { graphics.drawText("Test", 0, 0); } protected void layout(int width, int height) { this.setExtent(300, 300); // just testing } }; this.flowManager.add(field); the screen renders correctly and 'Test' appears on the screen. If, on the other hand, I try to abstract this into a class called CustomField with the same properties and add it to the flow manager, the render will not happen. Debugging shows that the device enters the object and its layout function, but not the paint function. I can't figure out why the paint function is not called when I extend Field. The 4.5 API says that layout and paint are the only functions that I really need to override. (getPreferredWidth and getPreferredHeight will be used to calculate screen sizes, etc.) Thanks in advance.
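
    For comparison, a self-contained Field subclass conventionally looks roughly like the sketch below. This is based on the standard Field contract rather than the poster's missing CustomField source, and the fixed 300x300 size is only for illustration.

      import net.rim.device.api.ui.Field;
      import net.rim.device.api.ui.Graphics;

      // Hedged sketch: report a preferred size, claim it in layout(), draw in paint().
      class CustomField extends Field {
          public int getPreferredWidth()  { return 300; }
          public int getPreferredHeight() { return 300; }

          protected void layout(int width, int height) {
              // never claim more room than the manager offers
              setExtent(Math.min(width, getPreferredWidth()),
                        Math.min(height, getPreferredHeight()));
          }

          protected void paint(Graphics graphics) {
              graphics.drawText("Test", 0, 0);
          }
      }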


  • Create a Bootable Ubuntu 9.10 USB Flash Drive

    - by Trevor Bekolay
    The Ubuntu Live CD isn’t just useful for trying out Ubuntu before you install it, you can also use it to maintain and repair your Windows PC. Even if you have no intention of installing Linux, every Windows user should have a bootable Ubuntu USB drive on hand in case something goes wrong in Windows. Creating a bootable USB flash drive is surprisingly easy with a small self-contained application called UNetbootin. It will even download Ubuntu for you! Note: Ubuntu will take up approximately 700 MB on your flash drive, so choose a flash drive with at least 1 GB of free space, formatted as FAT32. This process should not remove any existing files on the flash drive, but to be safe you should backup the files on your flash drive. Put Ubuntu on your flash drive UNetbootin doesn’t require installation; just download the application and run it. Select Ubuntu from the Distribution drop-down box, then 9.10_Live from the Version drop-down box. If you have a 64-bit machine, then select 9.10_Live_x64 for the Version. At the bottom of the screen, select the drive letter that corresponds to the USB drive that you want to put Ubuntu on. If you select USB Drive in the Type drop-down box, the only drive letters available will be USB flash drives. Click OK and UNetbootin will start doing its thing. First it will download the Ubuntu Live CD. Then, it will copy the files from the Ubuntu Live CD to your flash drive. The amount of time it takes will vary depending on your Internet speed, an when it’s done, click on Exit. You’re not planning on installing Ubuntu right now, so there’s no need to reboot. If you look at the USB drive now, you should see a bunch of new files and folders. If you had files on the drive before, they should still be present. You’re now ready to boot your computer into Ubuntu 9.10! How to boot into Ubuntu When the time comes that you have to boot into Ubuntu, or if you just want to test and make sure that your flash drive works properly, you will have to set your computer to boot off of the flash drive. The steps to do this will vary depending on your BIOS – which varies depending on your motherboard. To get detailed instructions on changing how your computer boots, search for your motherboard’s manual (or your laptop’s manual for a laptop). For general instructions, which will suffice for 99% of you, read on. Find the important keyboard keys When your computer boots up, a bunch of words and numbers flash across the screen, usually to be ignored. This time, you need to scan the boot-up screen for a few key words with some associated keys: Boot menu and Setup. Typically, these will show up at the bottom of the screen. If your BIOS has a Boot Menu, then read on. Otherwise, skip to the Hard: Using Setup section. Easy: Using the Boot Menu If your BIOS offers a Boot Menu, then during the boot-up process, press the button associated with the Boot Menu. In our case, this is ESC. Our example Boot Menu doesn’t have the ability to boot from USB, but your Boot Menu should have some options, such as USB-CDROM, USB-HDD, USB-FLOPPY, and others. Try the options that start with USB until you find one that works. Don’t worry if it doesn’t work – you can just restart and try again. Using the Boot Menu does not change the normal boot order on your system, so the next time you start up your computer it will boot from the hard drive as normal. Hard: Using Setup If your BIOS doesn’t offer a Boot Menu, then you will have to change the boot order in Setup. 
    Note: There are some options in BIOS Setup that can affect the stability of your machine. Take care to only change the boot order options. Press the button associated with Setup. In our case, this is F2. If your BIOS Setup has a Boot tab, then switch to it and change the order such that one of the USB options occurs first. There may be several USB options, such as USB-CDROM, USB-HDD, USB-FLOPPY, and others; try them out to see which one works for you. If your BIOS does not have a boot tab, boot order is commonly found in Advanced CMOS Options. Note that this changes the boot order permanently until you change it back. If you plan on only plugging in a bootable flash drive when you want to boot from it, then you could leave the boot order as it is, but you may find it easier to switch the order back to the previous order when you reboot from Ubuntu. Booting into Ubuntu If you set the right boot option, then you should be greeted with the UNetbootin screen. Press enter to start Ubuntu with the default options, or wait 10 seconds for this to happen automatically. Ubuntu will start loading. It should go straight to the desktop with no need for a username or password. And that’s it! From this live desktop session, you can try out Ubuntu, and even install software that is not included in the live CD. Installed software will only last for the duration of your session – the next time you start up the live CD it will be back to its original state. Download UNetbootin from sourceforge.net


  • Silverlight 5 Hosting :: Features in Silverlight 5 and Release Date

    - by mbridge
    Silverlight 5 is finally announced in the Silverlight FireStarter Event on the 2nd December, 2010. This new version of Silverlight which was earlier labeled as 'Future of Microsoft Silverlight' has now come much closer to go live as the first Silverlight 5 Beta version is expected to be shipped during the early months of 2011. However for the full fledged and the final release of Silverlight 5, we have to wait many more months as the same is likely to be made available within the Q3 2011. As would have been usually expected, this latest edition would feature many new capabilities thereby extending the developer productivity to a whole new dimension of premium media experience and feature-rich business applications. It comes along with many new feature updates as well as the inclusion of new technologies to improve the standard of the Silverlight applications which are now fine-tuned to produce next generation business and media solutions that is capable to meet the requirements of the advanced web-based app development. The Silverlight 5 is all set to replace the previous fourth version which now includes more than forty new features while also dropping various deprecated elements that was prevalent earlier. It has brought around some major performance enhancements and also included better support for various other tools and technologies. Following are some of the changes that are registered to be available under the Silverlight 5 Beta edition which is scheduled to be launched during the Q1 2011. Silverlight 5 : Premium Media Experiences The media features of Silverlight 5 has seen some major enhancements with a lot of optimizations being made to deliver richer solutions. It's capability has now been extended to make things easier, faster and capable of performing the desired tasks in the most efficient manner. The Silverlight media solutions has already been a part of many companies in the recent days where various on-demand Silverlight services were featured but with the arrival of the next generation premium media solution of Silverlight 5, it is expected to register new heights of success and global user acclamation for using it with many esteemed web-based projects and media solutions. - The most happening element in the new Silverlight 5 will be its support for utilizing the GPU based hardware acceleration which is intended to lower down the CPU load to a significant extent and thereby allowing faster rendering of media contents without consuming much resources. This feature is believed to be particularly helpful for low configured machines to run full HD media content without any lagging caused due to processor load. It will hence be one great feature to revolutionize the new generation high quality media contents to be available within the web in a more efficient manner with its hardware decoded video playback capabilities. - With the inclusion of hardware video decoding to minimize the processor load, the Silverlight 5 also comes with another optimization enhancement to also reduce the power consumption level by making new methods to deal with the power-saver settings. With this optimization in effect, the computer would be automatically allowed to switch to sleep mode while no video playback is in progress and also to prevent any screensavers to popup and cause annoyances during any video playback. There would also be other power saver options which will be made available to best suit the users requirements and purpose. 
- The Silverlight trickplay feature is another great way to tweak any silverlight powered media content as is used for many video tutorial sites or for dealing with any sort of presentations. This feature enables the user to modify the playback speed to either slowdown or speedup during the playback durations based on the requirements without compromising on the quality of output. Normally such manipulations always makes the content's audio to go off-pitch, but the same will not be the case with TrickPlay and the audio would seamlessly progress with the video without skipping any of its part. - In addition to all of the above, the new Silverlight 5 will be featuring wireless control of all the media contents by making use of remote controllers. With the use of such remote devices, it will be easier to handle the various media playback controls thereby providing more freedom while experiencing the premium media services. Silverlight 5 : Business Application Development The application development standard has been extended with more possibilities by bringing forth new and useful technologies and also reviving the existing methods to work better than what it was used to. From the UI improvements to advanced technical aspects, the Silverlight 5 scores high on all grounds to produce great next generation business delivered applications by putting in more creativity and resourceful touch to all the apps being produced with it. - The WPF feature of Silverlight is made more effective by introducing new standards of Databinding which is intended to improve the productivity standards of the Silverlight application developer. It brings in a lot of convenience in debugging the databinding components or expressions and hence making things work in a flawless manner. Some additional features related to databinding includes that of Ancestor RelativeSource, Implicit DataTemplates and Model View ViewModel (MVVM) support with DataContextChanged event and many other new features relating it. - It now comes with a refined text and printing service which facilitates better clarity of the text rendering and also many positive changes which are being applied to the layout pattern. New supports has been added to include OpenType font, multi-column text, linked-text containers and character leading support to name a few among the available features.This also includes some important printing aspects like that of Postscript Vector Printing API which allows to program our printing tasks in a user defined way and Pivot functionality for visualization concerns of informations. - The Graphics support is the key improvements being incorporated which now enables to utilize three dimensional graphics pattern using GPU acceleration. It can manage to provide some really cool visualizations being curved to provide media contents within the business apps with also the support for full HD contents at 1080p quality. - Silverlight 5 includes the support for 64-bit operating systems and relevant browsers and is also optimized to provide better performance. It can support the background thread for the networking which can reduce the latency of the network to a considerable extent. The Out-of-Browser functionality adds the support for utilizing various libraries and also the Win32 API. It also comes with testing support with VS 2010 which is mostly an automated procedure and has also enabled increased security aspects of all the Silverlight 5 developed applications by using the improved version of group policy support.


  • Hey, Google: It’s Time to Add Multi-Window Multitasking To Android

    - by Chris Hoffman
    In 2012, Google’s Dianne Hackborn threatened to revoke CyanogenMod’s access to the Android Market if they moved forward with adding “Cornerstone” multitasking to their custom ROM. Samsung has since created their own multi-window multitasking feature. Dianne Hackborn said this “is something that needs to be done at the mainline platform level” so apps wouldn’t break. She was right — Android needs this as a standard feature and it’s time for Google to provide it. Doesn’t Android Have Multitasking? Android originally stood out from Apple’s iOS with its powerful multitasking. Applications can continue running in the background while you’re using another application. This makes Android powerful — you can even have BitTorrent clients downloading files in the background while using another app. Android still kept the design of a single app on screen at a time. This made a lot of sense when Android only ran on smartphones with small screens. Today, Android runs on everything from smaller smartphones all the way up to huge “phablets” like the Galaxy Note. Android has gone beyond phones and runs on 12-inch tablets, convertibles with keyboard docks, laptops, and even Android desktops. Android isn’t just a phone operating system. Samsung’s Multi-Window Isn’t Good Enough Samsung has tried to add value to Android by adding a multi-window feature. When you’re using a high-end phone like the Galaxy Note or Galaxy S, or a Galaxy tablet, you have the ability to run certain apps side-by-side with each other. There are big problems here. This only works on Samsung devices, and only on specific Samsung devices. To add support for this feature in a way that doesn’t break other apps, Samsung’s multi-window feature also only works with specific apps. You can’t just run any app in multi-window view, only the apps on the Multi Window bar Samsung provides. This prevents third-party apps from breaking, which is what Google was worried about with CyanogenMod’s Cornerstone feature. A feature that only works with a handful of apps on specific devices from a single manufacturer isn’t good enough. This feature needs to work on every Android device — or at least ones with suitably large screens and powerful enough internals. It needs to be an Android platform feature so application developers can ensure their apps will work properly with it on every device. Android developers shouldn’t have to add support for each manufacturer’s own multi-window feature if other manufacturers decide to copy Samsung. Floating Apps Are a Dirty Hack Floating apps also enable real multitasking. Remember that Android allows apps to run in the background while you’re using an app in the foreground. These apps can present interfaces that appear floating above the current app — think of it like using “always on top” to make a window always appear over every other app on a desktop operating system. You can install floating apps to browse the web, take notes, chat, and watch videos while using any app. Only apps specifically designed to run as floating apps will work, so you have to seek them out. Floating apps are also awkward to use because they float over the app you’re using, blocking parts of its interface. Microsoft added floating-window support to Skype for Android. You can have a video conversation and the other person’s face will always appear on your screen, even when you leave the Skype app. Microsoft is using more of Android’s multi-window multitasking power than Google is. 
Custom ROMs and Root-Only Tweaks Aren’t Acceptable Some custom ROMs are adding this feature to Android. Google threatened to revoke CyanogenMod’s access to the Android Market (now known as Google Play) if they added this feature because it could potentially break third-party apps. Today, other custom ROMs are working on split-screen multitasking. Samsung added their own version to their own devices. You can also get this feature by using a root-only Xposed Framework tweak known as XMultiWindow. If you have root access, you can get multi-window multitasking or any app on your device. This shouldn’t require rooting your device or installing a custom ROM. These third-party solutions often have awkward interfaces and bugs. We need an integrated, supported solution that works the same on every device. Why Multi-Window is Important Microsoft’s Windows 8.1 stands out among tablet operating systems for its powerful multitasking support, allowing you to view several apps side-by-side at the same time. Apple is also reported to be working on adding side-by-side apps to the iPad with iOS 8. On every competitor’s operating system, you’ll be able to view a web page while you write an email, watch a video while you browse the web, or chat with someone while you do anything else. But Android’s still remained frozen in time. Despite all Android’s underlying power — and despite the way Android allows apps to adapt to different screen sizes — Google is resisting adding this feature. Large-screen Android tablets like the Nexus 10 (remember that tablet Google hasn’t updated in over 18 months?) need this feature. So do huge phones, convertibles, laptops, and Android desktops. If tablets are the future of personal computing, we should be able to do more than one thing at a time on our tablets’ big screens. Microsoft, Samsung, and even Apple are realizing this — now it’s Google’s turn. Image Credit: Sergey Galyonkin on Flickr, Karlis Dambrans on Flickr


  • Transparency and AlphaBlending

    - by TechTwaddle
    In this post we'll look at the AlphaBlend() api and how it can be used for semi-transparent blitting. AlphaBlend() takes a source device context and a destination device context (DC) and combines the bits in such a way that it gives a transparent effect. Follow the links for the msdn documentation. So lets take a image like, and AlphaBlend() it on our window. The code to do so is below, (under the WM_PAINT message of WndProc) HBITMAP hBitmap=NULL, hBitmapOld=NULL; HDC hMemDC=NULL; BLENDFUNCTION bf; hdc = BeginPaint(hWnd, &ps); hMemDC = CreateCompatibleDC(hdc); hBitmap = LoadBitmap(g_hInst, MAKEINTRESOURCE(IDB_BITMAP1)); hBitmapOld = SelectObject(hMemDC, hBitmap); bf.BlendOp = AC_SRC_OVER; bf.BlendFlags = 0; bf.SourceConstantAlpha = 80; //transparency value between 0-255 bf.AlphaFormat = 0;    AlphaBlend(hdc, 0, 25, 240, 100, hMemDC, 0, 0, 240, 100, bf); SelectObject(hMemDC, hBitmapOld); DeleteDC(hMemDC); DeleteObject(hBitmap); EndPaint(hWnd, &ps);   The code above creates a memory DC (hMemDC) using CreateCompatibleDC(), loads a bitmap onto the memory DC and AlphaBlends it on the device DC (hdc), with a transparency value of 80. The result is: Pretty simple till now. Now lets try to do something a little more exciting. Lets get two images involved, each overlapping the other, giving a better demonstration of transparency. I am also going to add a few buttons so that the user can increase or decrease the transparency by clicking on the buttons. Since this is the first time I played around with GDI apis, I ran into something that everybody runs into sometime or the other, flickering. When clicking the buttons the images would flicker a lot, I figured out why and used something called double buffering to avoid flickering. We will look at both my first implementation and the second implementation just to give the concept a little more depth and perspective. A few pre-conditions before I dive into the code: - hBitmap and hBitmap2 are handles to the two images obtained using LoadBitmap(), these variables are global and are initialized under WM_CREATE - The two buttons in the application are labeled Opaque++ (make more opaque, less transparent) and Opaque-- (make less opaque, more transparent) - DrawPics(HWND hWnd, int step=0); is the function called to draw the images on the screen. This is called from under WM_PAINT and also when the buttons are clicked. When Opaque++ is clicked the 'step' value passed to DrawPics() is +20 and when Opaque-- is clicked the 'step' value is -20. 
The default value of 'step' is 0 Now lets take a look at my first implementation: //this funciton causes flicker, cos it draws directly to screen several times void DrawPics(HWND hWnd, int step) {     HDC hdc=NULL, hMemDC=NULL;     BLENDFUNCTION bf;     static UINT32 transparency = 100;     //no point in drawing when transparency is 0 and user clicks Opaque--     if (transparency == 0 && step < 0)         return;     //no point in drawing when transparency is 240 (opaque) and user clicks Opaque++     if (transparency == 240 && step > 0)         return;         hdc = GetDC(hWnd);     if (!hdc)         return;     //create a memory DC     hMemDC = CreateCompatibleDC(hdc);     if (!hMemDC)     {         ReleaseDC(hWnd, hdc);         return;     }     //while increasing transparency, clear the contents of screen     if (step < 0)     {         RECT rect = {0, 0, 240, 200};         FillRect(hdc, &rect, (HBRUSH)GetStockObject(WHITE_BRUSH));     }     SelectObject(hMemDC, hBitmap2);     BitBlt(hdc, 0, 25, 240, 100, hMemDC, 0, 0, SRCCOPY);         SelectObject(hMemDC, hBitmap);     transparency += step;     if (transparency >= 240)         transparency = 240;     if (transparency <= 0)         transparency = 0;     bf.BlendOp = AC_SRC_OVER;     bf.BlendFlags = 0;     bf.SourceConstantAlpha = transparency;     bf.AlphaFormat = 0;            AlphaBlend(hdc, 0, 75, 240, 100, hMemDC, 0, 0, 240, 100, bf);     DeleteDC(hMemDC);     ReleaseDC(hWnd, hdc); }   In the code above, we first get the window DC using GetDC() and create a memory DC using CreateCompatibleDC(). Then we select hBitmap2 onto the memory DC and Blt it on the window DC (hdc). Next, we select the other image, hBitmap, onto memory DC and AlphaBlend() it over window DC. As I told you before, this implementation causes flickering because it draws directly on the screen (hdc) several times. The video below shows what happens when the buttons were clicked rapidly: Well, the video recording tool I use captures only 15 frames per second and so the flickering is not visible in the video. So you're gonna have to trust me on this, it flickers (; To solve this problem we make sure that the drawing to the screen happens only once and to do that we create an additional memory DC, hTempDC. We perform all our drawing on this memory DC and finally when it is ready we Blt hTempDC on hdc, and the images are displayed in one go. 
Here is the code for our new DrawPics() function: //no flicker void DrawPics(HWND hWnd, int step) {     HDC hdc=NULL, hMemDC=NULL, hTempDC=NULL;     BLENDFUNCTION bf;     HBITMAP hBitmapTemp=NULL, hBitmapOld=NULL;     static UINT32 transparency = 100;     //no point in drawing when transparency is 0 and user clicks Opaque--     if (transparency == 0 && step < 0)         return;     //no point in drawing when transparency is 240 (opaque) and user clicks Opaque++     if (transparency == 240 && step > 0)         return;         hdc = GetDC(hWnd);     if (!hdc)         return;     hMemDC = CreateCompatibleDC(hdc);     hTempDC = CreateCompatibleDC(hdc);     hBitmapTemp = CreateCompatibleBitmap(hdc, 240, 150);     hBitmapOld = (HBITMAP)SelectObject(hTempDC, hBitmapTemp);     if (!hMemDC)     {         ReleaseDC(hWnd, hdc);         return;     }     //while increasing transparency, clear the contents     if (step < 0)     {         RECT rect = {0, 0, 240, 150};         FillRect(hTempDC, &rect, (HBRUSH)GetStockObject(WHITE_BRUSH));     }     SelectObject(hMemDC, hBitmap2);     //Blt hBitmap2 directly to hTempDC     BitBlt(hTempDC, 0, 0, 240, 100, hMemDC, 0, 0, SRCCOPY);         SelectObject(hMemDC, hBitmap);     transparency += step;     if (transparency >= 240)         transparency = 240;     if (transparency <= 0)         transparency = 0;     bf.BlendOp = AC_SRC_OVER;     bf.BlendFlags = 0;     bf.SourceConstantAlpha = transparency;     bf.AlphaFormat = 0;            AlphaBlend(hTempDC, 0, 50, 240, 100, hMemDC, 0, 0, 240, 100, bf);     //now hTempDC is ready, blt it directly on hdc     BitBlt(hdc, 0, 25, 240, 150, hTempDC, 0, 0, SRCCOPY);     SelectObject(hTempDC, hBitmapOld);     DeleteObject(hBitmapTemp);     DeleteDC(hMemDC);     DeleteDC(hTempDC);     ReleaseDC(hWnd, hdc); }   This function is very similar to the first version, except for the use of hTempDC. Another point to note is the use of CreateCompatibleBitmap(). When a memory device context is created using CreateCompatibleDC(), the context is exactly one monochrome pixel high and one monochrome pixel wide. So in order for us to draw anything onto hTempDC, we first have to set a bitmap on it. We use CreateCompatibleBitmap() to create a bitmap of required dimension (240x150 above), and then select this bitmap onto hTempDC. Think of it as utilizing an extra canvas, drawing everything on the canvas and finally transferring the contents to the display in one scoop. And with this version the flickering is gone, video follows:   If you want the entire solutions source code then leave a message, I will share the code over SkyDrive.


  • Testing Mobile Websites with Adobe Shadow

    - by dwahlin
    It’s no surprise that mobile development is all the rage these days. With all of the new mobile devices being released nearly every day the ability for developers to deliver mobile solutions is more important than ever. Nearly every developer or company I’ve talked to recently about mobile development in training classes, at conferences, and on consulting projects says that they need to find a solution to get existing websites into the mobile space. Although there are several different frameworks out there that can be used such as jQuery Mobile, Sencha Touch, jQTouch, and others, how do you test how your site renders on iOS, Android, Blackberry, Windows Phone, and the variety of mobile form factors out there? Although there are different virtual solutions that can be used including Electric Plum for iOS, emulators, browser plugins for resizing the laptop/desktop browser, and more, at some point you need to test on as many physical devices as possible. This can be extremely challenging and quite time consuming though especially when you consider that you have to manually enter URLs into devices and click links on each one to drill-down into sites. Adobe Labs just released a product called Adobe Shadow (thanks to Kurt Sprinzl for letting me know about it) that significantly simplifies testing sites on physical devices, debugging problems you find, and even making live modifications to HTML and CSS content while viewing a site on the device to see how rendering changes. You can view a page in your laptop/desktop browser and have it automatically pushed to all of your devices without actually touching the device (a huge time saver). See a problem with a device? Locate it using the free Chrome extension, pull up inspection tools (based on the Chrome Developer tools) and make live changes through Chrome that appear on the respective device so that it’s easy to identify how problems can be resolved. I’ve been using Adobe Shadow and am very impressed with the amount of time saved and the different features that it offers. In the rest of the post I’ll walk through how to get it installed, get it started, and use it to view and debug pages.   Getting Adobe Shadow Installed The following steps can be used to get Adobe Shadow installed: 1. Download and install Adobe Shadow on your laptop/desktop 2. Install the Adobe Shadow extension for Chrome 3. Install the Adobe Shadow app on all of your devices (you can find it in various app stores) 4. Connect your devices to Wifi. Make sure they’re on the same network that your laptop/desktop machine is on   Getting Adobe Shadow Started Once Adobe Shadow is installed, you’ll need to get it running on your laptop/desktop and on all your mobile devices. The following steps walk through that process: 1. Start the Adobe Shadow application on your laptop/desktop 2. Start the Adobe Shadow app on each of your mobile devices 3. Locate the laptop/desktop name in the list that’s shown on each mobile device: 4. Select the laptop/desktop name and a passcode will be shown: 5. Open the Adobe Shadow Chrome extension on the laptop/desktop and enter the passcode for the given device: Using Adobe Shadow to View and Modify Pages Once Adobe Shadow is up and running on your laptop/desktop and on all of your mobile devices you can navigate to a page in Chrome on the laptop/desktop and it will automatically be pushed out to all connected mobile devices. If you have 5 mobile devices setup they’ll all navigate to the page displayed in Chrome (pretty awesome!). 
    This makes it super easy to see how a given page looks on your iPad, Android device, etc. without having to touch the device itself. If you find a problem with a page on a device you can select the device in the Chrome Adobe Shadow extension on your laptop/desktop and select the remote inspector icon (it's the < > icon): This will pull up the Adobe Shadow remote debugging window which contains the standard Chrome Developer tool tabs such as Elements, Resources, Network, etc. Click on the Elements tab to see the HTML rendered for the target device and then drill into the respective HTML content, CSS styles, etc. As HTML elements are selected in the Adobe Shadow debugging tool they'll be highlighted on the device itself just like they would if you were debugging a page directly in Chrome with the developer tools. Here's an example from my Android device that shows how the page looks on the device as I select different HTML elements on the laptop/desktop: Conclusion I'm really impressed with what I've seen to this point from Adobe Shadow. Controlling pages that display on devices directly from my laptop/desktop is a big time saver and the ability to remotely see changes made through the Chrome Developer Tools (on my laptop/desktop) really pushes the tool over the top. If you're developing mobile applications it's definitely something to check out. It's currently free to download and use. For additional details check out the video below:  

