Search Results

Search found 5213 results on 209 pages for 'integrated camera'.


  • iPhone - ModalViewController not rising to the top of the screen

    - by Oliver
    Hello, I have a UIImagePickerController that is shown with:

        [self presentModalViewController:self.picker animated:NO];

    Later in the code, I let the user display a preferences panel:

        PreferencesController *nextWindow = [[[PreferencesController alloc] initWithNibName:@"Preferences" bundle:nil] autorelease];
        UINavigationController *navController = [[[UINavigationController alloc] initWithRootViewController:nextWindow] autorelease];
        [self presentModalViewController:navController animated:YES];

    At this point the new controller rises on the screen, but it doesn't go all the way to the top. Some space is left "transparent" at the top (I can see the camera view behind), and the bottom of the view is hidden off the screen. The space I am talking about is about the height of a status bar, yet the status bar is not present on the screen. The navigation bar is hidden (self.navigationController.navigationBarHidden = YES;), there is a toolbar at the top of the view, and nothing special in the view itself. The height of the view is defined as 480, all simulated elements are set off in IB, and the autoresize properties are all set on. I had a previous xib (I rebuilt it from scratch) that worked very well. I don't see what I missed on this one (I have only changed the xib, which replaces the previous one). I've cleaned the cache to be sure there was nothing left. No change... I've deleted everything in the new view to prevent conflicts. No change... What did I miss? How can I remove this empty space?
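
    A status-bar-height gap under a modal presented over the camera picker often means the presented controller is being laid out against the application frame (which reserves 20 pt for a status bar) instead of the full screen. A minimal sketch of one thing worth trying - the property is standard UIKit of this era, but whether it matches this setup is an assumption:

        // Sketch: ask the presented controller to lay out under where the
        // status bar would normally sit, instead of leaving a 20 pt gap.
        nextWindow.wantsFullScreenLayout = YES;
        // If the gap persists, re-assert the frame once the view is on screen:
        // navController.view.frame = [[UIScreen mainScreen] bounds];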


  • Reusing Windows Picture and Fax Viewer process to load a new image from FileSystemWatcher

    - by Cory Larson
    So for an idea for my birthday party I'm setting up a photo booth. I've got software to remotely control the camera and all that, but I need to write a little application to monitor the folder where the pictures get saved and display them. Here's what I've got so far. The issue is that I don't want to launch a new Windows Photo Viewer process every time the FileSystemWatcher sees a new file; I just want to load the latest image into the current instance of the Windows Photo Viewer (or start a new one if one isn't running).

        class Program
        {
            static void Main(string[] args)
            {
                new Program().StartWatching();
            }

            public void StartWatching()
            {
                FileSystemWatcher incoming = new FileSystemWatcher();
                incoming.Path = @"G:\TempPhotos\";
                incoming.NotifyFilter = NotifyFilters.LastWrite | NotifyFilters.FileName;
                incoming.Filter = "*.jpg";
                incoming.Created += new FileSystemEventHandler(ShowImage);
                incoming.EnableRaisingEvents = true;

                Console.WriteLine("Press 'q' to quit.");
                while (Console.Read() != 'q') ;
            }

            private void ShowImage(object source, FileSystemEventArgs e)
            {
                string s1 = Environment.ExpandEnvironmentVariables("%windir%\\system32\\rundll32.exe ");
                string s2 = Environment.ExpandEnvironmentVariables("%windir%\\system32\\shimgvw.dll,ImageView_Fullscreen " + e.FullPath);
                Process.Start(s1, s2);
                Console.WriteLine("{0}: \"{1}\" at {2:t}.", e.ChangeType, e.FullPath, DateTime.Now);
            }
        }

    If you don't have a tried and true solution, a simple push in the right direction would be just as valuable. And FYI, this will be running on a 64-bit Windows 7 machine. Thanks!
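
    Windows Photo Viewer (shimgvw.dll hosted by rundll32) doesn't expose an API for pushing a new image into a running instance, so one pragmatic workaround is to keep a handle to the last process you started and kill it before launching the next image. A minimal sketch of that tactic, folded into ShowImage (the single-window behavior is the goal; kill-and-restart is the assumption):

        private Process _viewer;  // last viewer instance we launched

        private void ShowImage(object source, FileSystemEventArgs e)
        {
            // Close the previous viewer, if any, so only one window ever exists.
            if (_viewer != null && !_viewer.HasExited)
            {
                try { _viewer.Kill(); } catch (InvalidOperationException) { /* already gone */ }
            }

            string exe = Environment.ExpandEnvironmentVariables(@"%windir%\system32\rundll32.exe");
            string args = Environment.ExpandEnvironmentVariables(
                @"%windir%\system32\shimgvw.dll,ImageView_Fullscreen " + e.FullPath);
            _viewer = Process.Start(exe, args);
        }

    The visible cost is a brief flicker as one window closes and the next opens; a tiny WinForms/WPF window that simply swaps the displayed Image would avoid that entirely.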


  • Trying to attach a file from SD Card to email

    - by Chrispix
    I am trying to launch an Intent to send an email. All of that works, but when I try to actually send the email, a couple of 'weird' things happen. Here is the code:

        Intent sendIntent = new Intent(Intent.ACTION_SEND);
        sendIntent.setType("image/jpeg");
        sendIntent.putExtra(Intent.EXTRA_SUBJECT, "Photo");
        sendIntent.putExtra(Intent.EXTRA_STREAM, Uri.parse("file://sdcard/dcim/Camera/filename.jpg"));
        sendIntent.putExtra(Intent.EXTRA_TEXT, "Enjoy the photo");
        startActivity(Intent.createChooser(sendIntent, "Email:"));

    If I launch it via the Gmail menu context, it shows the attachment, lets me type who the email is to, and lets me edit the body and subject. No big deal. I hit send, and it sends. The only thing is the attachment does NOT get sent. So I figured, why not try it with the Email menu context (for the backup email account on my phone). It shows the attachment, but no text at all in the body or subject. When I send it, the attachment sends correctly. That would lead me to believe something is quite wrong. Do I need a new permission in the Manifest to launch an intent to send email with an attachment? What am I doing wrong?
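
    One thing worth double-checking: "file://sdcard/..." has only two slashes, so "sdcard" is parsed as the URI's authority rather than as part of the path. Building the Uri from an actual File sidesteps that whole class of problem. A hedged sketch (the exact storage path on a given device is an assumption):

        // Resolve the file through the external-storage API instead of a
        // hand-written string; Uri.fromFile() produces a proper file:///... URI.
        File photo = new File(Environment.getExternalStorageDirectory(),
                              "dcim/Camera/filename.jpg");
        sendIntent.putExtra(Intent.EXTRA_STREAM, Uri.fromFile(photo));

    No extra manifest permission is needed just to hand another app a file:// Uri on this era of Android; the receiving app only has to be able to read the path.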


  • Self-describing file format for gigapixel images?

    - by Adam Goode
    In medical imaging, there appear to be two ways of storing huge gigapixel images:

    1. Use lots of JPEG images (either packed into files or individually) and cook up some bizarre index format to describe what goes where. Tack on some metadata in some other format.
    2. Use TIFF's tile and multi-image support to cleanly store the images as a single file, and provide downsampled versions for zooming speed. Then abuse various TIFF tags to store metadata in non-standard ways. Also, store tiles with overlapping boundaries that must be individually translated later.

    In both cases, the reader must understand the format well enough to know how to draw things and read the metadata. Is there a better way to store these images? Is TIFF (or BigTIFF) still the right format for this? Does XMP solve the problem of metadata? The main issues are:

    - Storing images in a way that allows for rapid random access (tiling)
    - Storing downsampled images for rapid zooming (pyramid)
    - Handling cases where tiles are overlapping or sparse (scanners often work by moving a camera over a slide in 2D and capturing only where there is something to image)
    - Storing important metadata, including associated images like a slide's label and thumbnail
    - Support for lossy storage

    What kind of (hopefully non-proprietary) formats do people use to store large aerial photographs or maps? These images have similar properties.


  • How to draw an unfilled square on top of a video stream using a mouse, and track the enclosed object

    - by Haxed
    Hi, I am making an object tracking application. I have used Emgu CV 2.1.0.0 to load a video file into a picturebox, and I have also taken the video stream from a web camera. Now, I want to draw an unfilled square on the video stream using a mouse and then track the object enclosed by it as the video continues to stream. This is what people have suggested so far:

    1. .NET video overlay drawing (DirectX) - but this is for C++ users; the suggester said that there are .NET wrappers, but I had a hard time finding any.
    2. The DxLogo sample - a sample application showing how to superimpose a logo on a data stream. It uses a capture device for the video source and outputs the result to a file. Sadly, this does not use a mouse.
    3. GDI+ and mouse handling - an area where I do not have a clue.

    For tracking the object in the square, I would appreciate it if someone could give me some research paper links to read. Any help with using the mouse to draw on a video is greatly appreciated. Thank you for taking the time to read this. Many thanks!
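
    For option 3, the GDI+ part is mostly bookkeeping: remember where the mouse went down, update the opposite corner as it moves, and draw the rectangle in the PictureBox's Paint handler so it is repainted over every new frame. A minimal sketch, under the assumption that the Emgu frames are already being pushed into pictureBox.Image:

        private Point _start;
        private Rectangle _selection = Rectangle.Empty;

        private void pictureBox_MouseDown(object sender, MouseEventArgs e)
        {
            _start = e.Location;
        }

        private void pictureBox_MouseMove(object sender, MouseEventArgs e)
        {
            if (e.Button != MouseButtons.Left) return;
            // Normalize so the rectangle is valid whichever way the user drags.
            _selection = new Rectangle(
                Math.Min(_start.X, e.X), Math.Min(_start.Y, e.Y),
                Math.Abs(e.X - _start.X), Math.Abs(e.Y - _start.Y));
            pictureBox.Invalidate();  // repaint with the new rectangle
        }

        private void pictureBox_Paint(object sender, PaintEventArgs e)
        {
            if (_selection.IsEmpty) return;
            using (Pen pen = new Pen(Color.Red, 2))
                e.Graphics.DrawRectangle(pen, _selection);  // unfilled square/rectangle
        }

    Once the selection is fixed, that rectangle (mapped back into image coordinates) can seed a tracker; Emgu CV's wrappers around OpenCV's mean-shift/CamShift are the usual starting point for that half of the problem.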


  • Optimizing quality for available bandwidth in Flash/RTMFP

    - by Artem M.
    I'm developing a simple one-on-one P2P video chat using ActionScript, and I'd like to ensure the best video quality for the peers given their bandwidth. This means: setting the best quality given the available bandwidth when the chat starts, and responding to network congestion during the chat by decreasing the quality. The task is similar to dynamic stream switching, but P2P has its specifics that make dynamic streaming approaches not work. For example, the maxBytesPerSecond metric monitored in dynamic stream switching is pretty useless in P2P, where the receiving NetStream's buffer size is set to 0 to minimize latency. So far, it looks like the most reliable QoS metric for P2P is SRTT. In my simulated tests on a local network, it shoots up to 500 ms and more when a bandwidth limit is introduced. However, it gives no hint as to how best to adjust the bandwidth value in Camera.setQuality(bandwidth, 0) to respond to the congestion. I've done lots of experiments, and I still don't see a clear and simple solution to the problem. I'm also wondering how this issue is addressed (if at all) in other RTMFP chat solutions.
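
    One control loop that maps naturally onto an SRTT signal is additive-increase/multiplicative-decrease: poll NetStream.info.SRTT on a timer, cut the Camera bandwidth cap sharply when SRTT crosses a threshold, and creep it back up while the connection stays healthy. A sketch of that idea - the thresholds, step sizes, and limits are assumptions to tune, not measured values:

        // AIMD-style adjustment of outgoing camera bandwidth, driven by SRTT.
        var maxBw:int = 500 * 1024 / 8;   // upper bound, bytes/sec (assumed)
        var minBw:int =  32 * 1024 / 8;   // floor so video never stalls entirely
        var bw:int = maxBw;

        var timer:Timer = new Timer(2000); // re-evaluate every 2 s (assumed)
        timer.addEventListener(TimerEvent.TIMER, function(e:TimerEvent):void {
            var srtt:Number = sendStream.info.SRTT;  // smoothed round-trip time, ms
            if (srtt > 400) {
                bw = Math.max(minBw, bw / 2);        // congestion: back off hard
            } else if (srtt < 150) {
                bw = Math.min(maxBw, bw + minBw);    // healthy: probe upward gently
            }
            camera.setQuality(bw, 0);                // quality 0 = let it float
        });
        timer.start();

    The same loop also answers the startup half of the question: begin at a conservative bw and let the additive increase find the ceiling during the first seconds of the call.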


  • Zoom image to pixel level

    - by zaf
    For an art project, one of the things I'll be doing is zooming in on an image to a particular pixel. I've been rubbing my chin and would love some advice on how to proceed. Here are the input parameters:

    Screen:
    - sw - screen width
    - sh - screen height

    Image:
    - iw - image width
    - ih - image height

    Pixel:
    - px - x position of pixel in image
    - py - y position of pixel in image

    Zoom:
    - zf - zoom factor (0.0 to 1.0)

    Background colour:
    - bc - background colour to use when screen and image aspect ratios are different

    Outputs:
    - The zoomed image (no anti-aliasing)
    - The screen position/dimensions of the pixel we are zooming to

    When zf is 0, the image must fit the screen with the correct aspect ratio. When zf is 1, the selected pixel fills the screen with the correct aspect ratio. One idea I had was to use something like POV-Ray and move the camera towards a big image texture, or use some library (e.g. pygame) to do the zooming. Can anyone think of something more clever, with simple pseudocode? To keep it simpler, you can make the image and screen have the same aspect ratio; I can live with that. I'll update with more info as it's required.
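
    The whole zoom reduces to one scale factor plus a pan that keeps the target pixel centred. At zf = 0 the scale is the fit-to-screen scale s0 = min(sw/iw, sh/ih); at zf = 1 it is s1 = max(sw, sh), where one source pixel spans the screen; interpolating the scale geometrically rather than linearly makes the zoom feel constant-speed. A Python sketch of just the math, assuming the equal-aspect simplification allowed above (the actual blitting is left to pygame or whatever you use):

        def zoom_view(sw, sh, iw, ih, px, py, zf):
            s0 = min(sw / iw, sh / ih)      # zf=0: whole image fits the screen
            s1 = max(sw, sh)                # zf=1: one source pixel fills the screen
            s = s0 * (s1 / s0) ** zf        # geometric interpolation between them

            # Scaled image size (integer truncation, no anti-aliasing).
            zw, zh = int(iw * s), int(ih * s)

            # Offset so the centre of pixel (px, py) sits at the screen centre,
            # blended back toward plain centring as zf -> 0.
            cx, cy = (px + 0.5) * s, (py + 0.5) * s
            ox = sw / 2 - (cx * zf + (zw / 2) * (1 - zf))
            oy = sh / 2 - (cy * zf + (zh / 2) * (1 - zf))

            # Screen rectangle covered by the target pixel:
            pixel_rect = (ox + px * s, oy + py * s, s, s)
            return (ox, oy, zw, zh), pixel_rect

    Everything else - the bc background fill, nearest-neighbour scaling via pygame.transform.scale - hangs off those two rectangles.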


  • UIImagePickerControllerDelegate Returns Blank "editingInfo" Dictionary Object

    - by Leachy Peachy
    Hi there, I have an iPhone app that calls upon the UIImagePickerController to offer folks a choice between selecting images via the camera or via their photo library on the phone. The problem is that sometimes (I can't always get it to replicate) the editingInfo dictionary object that is supposed to be returned by the didFinishPickingImage delegate message comes back blank or (null). Has anyone else seen this before? I am implementing the UIImagePickerControllerDelegate in my .h file and I am correctly implementing the two delegate methods: didFinishPickingImage and imagePickerControllerDidCancel. Any help would be greatly appreciated. Thank you in advance! Here is my code.

    My .h file:

        @interface AddPhotoController : UIViewController <UIImagePickerControllerDelegate, UINavigationControllerDelegate>
        {
            IBOutlet UIImageView *imageView;
            IBOutlet UIButton *snapNewPictureButton;
            IBOutlet UIButton *selectFromPhotoLibraryButton;
        }

        @property (nonatomic, retain) UIImageView *imageView;
        @property (nonatomic, retain) UIButton *snapNewPictureButton;
        @property (nonatomic, retain) UIButton *selectFromPhotoLibraryButton;

    My .m file:

        @implementation AddPhotoController

        @synthesize imageView, snapNewPictureButton, selectFromPhotoLibraryButton;

        - (IBAction)getCameraPicture:(id)sender
        {
            UIImagePickerController *picker = [[UIImagePickerController alloc] init];
            picker.delegate = self;
            picker.sourceType = UIImagePickerControllerSourceTypeCamera;
            picker.allowsImageEditing = YES;
            [self presentModalViewController:picker animated:YES];
            [picker release];
        }

        - (void)imagePickerController:(UIImagePickerController *)picker
                didFinishPickingImage:(UIImage *)image
                          editingInfo:(NSDictionary *)editingInfo
        {
            NSLog(@"Image Meta Info.: %@", editingInfo);
            UIImage *selectedImage = image;
            imageView.image = selectedImage;
            self._havePictureData = YES;
            [self.useThisPhotoButton setEnabled:YES];
            [picker dismissModalViewControllerAnimated:YES];
        }

        - (void)imagePickerControllerDidCancel:(UIImagePickerController *)picker
        {
            [picker dismissModalViewControllerAnimated:YES];
        }
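
    For what it's worth, didFinishPickingImage:editingInfo: was deprecated in favour of didFinishPickingMediaWithInfo:, which hands everything back in a single dictionary and so has no separate editingInfo parameter to come back (null). A hedged sketch of the replacement delegate method - the keys are the documented UIKit constants; the fallback logic is an assumption about the desired behaviour:

        - (void)imagePickerController:(UIImagePickerController *)picker
            didFinishPickingMediaWithInfo:(NSDictionary *)info
        {
            // Prefer the user's crop if editing produced one, else the original.
            UIImage *picked = [info objectForKey:UIImagePickerControllerEditedImage];
            if (picked == nil) {
                picked = [info objectForKey:UIImagePickerControllerOriginalImage];
            }
            imageView.image = picked;
            [picker dismissModalViewControllerAnimated:YES];
        }

    With this variant the crop rectangle, if any, travels in the same dictionary under UIImagePickerControllerCropRect.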


  • Bilinear interpolation - DirectX vs. GDI+

    - by holtavolt
    I have a C# app for which I've written GDI+ code that uses Bitmap/TextureBrush rendering to present 2D images, which can have various image-processing functions applied. This code is a new path in an application that mimics existing DX9 code, and they share a common library to perform all vector and matrix (e.g. ViewToWorld/WorldToView) operations. My test bed consists of DX9 output images that I compare against the output of the new GDI+ code. A simple test case that renders to a viewport matching the Bitmap dimensions (i.e. no zoom or pan) matches pixel-perfect (no binary diff), but as soon as the image is zoomed up (magnified), I get very minor differences in 5-10% of the pixels. The magnitude of the difference is 1 (occasionally 2)/256. I suspect this is due to interpolation differences.

    Question: for a DX9 ortho projection (and identity world space), with a camera perpendicular to and centered on a textured quad, is it reasonable to expect DirectX.Direct3D.TextureFilter.Linear to generate identical output to a GDI+ TextureBrush-filled rectangle/polygon when using the System.Drawing.Drawing2D.InterpolationMode.Bilinear setting? For this (magnification) case, the DX9 code is using this (MinFilter and MipFilter set similarly):

        Device.SetSamplerState(0, SamplerStageStates.MagFilter, (int)TextureFilter.Linear);

    and the GDI+ path is using:

        g.InterpolationMode = InterpolationMode.Bilinear;

    I thought that "bilinear interpolation" was a fairly specific filter definition, but then I noticed that there is another option in GDI+ for HighQualityBilinear (which I've tried, with no difference - which makes sense given the description's "added prefiltering for shrinking").

    Follow-up question: is it reasonable to expect pixel-perfect output matching between DirectX and GDI+ (assuming all external coordinates passed in are equal)? If not, why not? Finally, there are a number of other APIs I could be using (Direct2D, WPF, GDI, etc.), and this question generally applies to comparing the output of "equivalent" bilinear-interpolated images across any two of these. Thanks!
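
    One concrete knob worth checking before chasing filter-kernel differences: GDI+ and Direct3D disagree by default about where a pixel's sampling point sits (pixel corner vs. pixel centre), which can produce exactly this kind of 1/256-level drift under magnification. A hedged suggestion - the enum is standard System.Drawing; whether it closes this particular diff is an assumption:

        // Sample at pixel centres, matching D3D9's half-texel convention.
        g.PixelOffsetMode = System.Drawing.Drawing2D.PixelOffsetMode.Half;

    Beyond that, bilinear weights are computed at different fixed-point precisions by different rasterizers (D3D9 only guarantees a handful of fraction bits in the filter), so exact equality across APIs is not something either vendor actually promises.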


  • How to upload image to remote server in iphone?

    - by Atulkumar V. Jain
    Hi everybody, I am trying to upload an image which I am capturing with the camera. I am trying the following code to upload the image to the remote server:

        -(void)searchAction:(UIImage*)theImage
        {
            UIDevice *dev = [UIDevice currentDevice];
            NSString *uniqueId = dev.uniqueIdentifier;

            NSData *imageData = UIImagePNGRepresentation(theImage);
            NSString *postLength = [NSString stringWithFormat:@"%d", [imageData length]];

            NSString *urlString = [@"http://www.amolconsultants.com/im.jsp?" stringByAppendingString:@"imagedata=iPhoneV0&mcid="];
            urlString = [urlString stringByAppendingString:uniqueId];
            urlString = [urlString stringByAppendingString:@"&lang=en_US.UTF-8"];
            NSLog(@"The URL of the image is :- %@", urlString);

            NSMutableURLRequest *request = [[[NSMutableURLRequest alloc] init] autorelease];
            [request setURL:[NSURL URLWithString:urlString]];
            [request setHTTPMethod:@"POST"];
            [request setValue:postLength forHTTPHeaderField:@"Content-Length"];
            [request setHTTPBody:imageData];

            NSURLConnection *conn = [[NSURLConnection alloc] initWithRequest:request delegate:self];
            if (conn == nil) {
                NSLog(@"Failed to create the connection");
            }
        }

    But nothing is getting posted, and nothing comes in the console window either. I am calling this method from the action sheet: when the user taps its first button, this method is called to post the image. Can anyone help me with this? Any code would be very helpful. Thanks in advance.
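
    Two things stand out: the request never declares a Content-Type, and with no NSURLConnection delegate methods implemented there is nothing to print to the console even when the connection does run. A hedged sketch of both fixes - whether the server expects a raw body or a multipart form is an assumption only the server's docs can settle:

        // 1) Before setHTTPBody:, tell the server what the body is;
        //    many servlets silently drop untyped bodies.
        [request setValue:@"application/octet-stream" forHTTPHeaderField:@"Content-Type"];

        // 2) Minimal delegate methods so success/failure shows in the console.
        - (void)connection:(NSURLConnection *)connection
            didReceiveResponse:(NSURLResponse *)response
        {
            NSLog(@"Response: %@", response);
        }

        - (void)connection:(NSURLConnection *)connection
              didFailWithError:(NSError *)error
        {
            NSLog(@"Upload failed: %@", error);
        }

    If the JSP side expects a browser-style upload, the body instead has to be multipart/form-data with a boundary, which is a larger change than the snippet above.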


  • OpenGL: fit a quad to the screen, given the value of Z

    - by hytparadisee
    Short version of the question: I will place a quad. I know the width and height of the screen in window coordinates, I know the Z-coordinate of the quad in 3D, I know the FOVY, and I know the aspect. The quad will be placed along the Z-axis, and my camera doesn't move (it is placed at 0, 0, 0). I want to find out the width and height of the quad IN 3D COORDINATES that will fit exactly onto my screen.

    Long version of the question: I would like to place a quad along the Z-axis at the specified offset Z, and I would like to find out the width and height of the quad that will exactly fill the entire screen. I used to have a post on gamedev.net that uses a formula similar to the following:

        dist = Z * tan(FOV / 2)

    Now I can never find the post! Though it's similar, it is still different, because I remember that in that working formula they make use of screenWidth and screenHeight, which are the width and height of the screen in window coordinates. I am not really familiar with concepts like frustum, FOV and aspect, so that's why I can't work out the formula on my own. Besides, I am sure I don't need gluUnProject (I tried it, but the results are way off). It's not some GL calls; it's just a math formula that can find out the width and height in 3D space that fill the entire screen, if the Z offset, the width in window coordinates, and the height in window coordinates are known. Thanks all in advance.
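
    For a symmetric perspective projection the answer falls straight out of the frustum geometry: at distance |Z| from the camera, the visible half-height is |Z| * tan(fovy/2), so the full height is twice that, and the visible width is the height times the aspect ratio. The window-pixel sizes only matter if the quad should cover part of the screen rather than all of it. A small C sketch (fovy in degrees, as gluPerspective takes it; full-screen case, so sw and sh cancel out):

        #include <math.h>

        /* Width/height in world units of the region visible at depth z
           (camera at the origin looking down -Z, symmetric frustum). */
        void quad_fit_screen(float fovy_deg, float aspect, float z,
                             float *out_w, float *out_h)
        {
            float half_fov = fovy_deg * 0.5f * (float)(3.14159265358979 / 180.0);
            *out_h = 2.0f * fabsf(z) * tanf(half_fov);
            *out_w = *out_h * aspect;   /* aspect = sw / sh */
        }

    The half-remembered dist = Z * tan(FOV/2) is the half-height; doubling it and multiplying by aspect gives the full quad.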


  • Expression Blend doesn't recognize command objects declared in code behind file

    - by Brian Ensink
    I have a WPF UserControl. The code-behind file declares some RoutedUICommand objects which are referenced in the XAML. The application builds and runs just fine. However, Expression Blend 3 cannot load the XAML in the designer and gives errors like this one:

        The member "ResetCameraCommand" is not recognized or accessible.

    The class and the member are both public. Building and rebuilding the project in Blend and restarting Blend hasn't helped. Any ideas what the problem is? Here are fragments of my XAML...

        <UserControl x:Class="CAP.Visual.CameraAndLightingControl"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            xmlns:local="clr-namespace:CAP.Visual;assembly=VisualApp"
            Height="100" Width="700">
            <UserControl.CommandBindings>
                <CommandBinding Command="local:CameraAndLightingControl.ResetCameraCommand"
                                Executed="ResetCamera_Executed"
                                CanExecute="ResetCamera_CanExecute"/>
            </UserControl.CommandBindings>
            ....

    ...and the code-behind C#:

        namespace CAP.Visual
        {
            public partial class CameraAndLightingControl : UserControl
            {
                public readonly static RoutedUICommand ResetCameraCommand;

                static CameraAndLightingControl()
                {
                    ResetCameraCommand = new RoutedUICommand("Reset Camera", "ResetCamera", typeof(CameraAndLightingControl));
                }
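
    One pattern worth trying: XAML designers tend to be happier when the command is assigned inline at the declaration rather than in a static constructor, since the designer inspects types in a limited way. A hedged sketch - same command, just initialized differently; that Blend 3's loader is the actual culprit here is an assumption:

        public partial class CameraAndLightingControl : UserControl
        {
            // Inline initialization instead of a static constructor.
            public static readonly RoutedUICommand ResetCameraCommand =
                new RoutedUICommand("Reset Camera", "ResetCamera",
                                    typeof(CameraAndLightingControl));
        }

    Also double-check the assembly name in xmlns:local - when the XAML lives in the same assembly as the class, clr-namespace:CAP.Visual without the ;assembly= part is what designers usually expect, and a stale or mismatched assembly name produces exactly this "not recognized or accessible" error.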


  • How can you transform a set of numbers into mostly whole ones?

    - by Alice
    Small amount of background: I am working on a converter that bridges between a map maker (Tiled) that outputs in XML, and an engine (Angel2D) that inputs Lua tables. Most of this is straightforward. However, Tiled outputs in pixel offsets (integers of absolute values), while Angel2D inputs OpenGL units (floats of relative values); a conversion factor between these two is needed (for example, 32px = 1gu). Since OpenGL units are abstract, and the camera can zoom in or out if the objects are too small or big, the actual conversion factor isn't important; I could use a random number, and the user would merely have to zoom in or out. But it would be best if the conversion factor were selected such that most numbers outputted were small and whole (or fractions of small whole numbers), because that makes them easier to work with (and the whole point of the OpenGL units is that they are easy to work with). How would I find such a conversion factor reliably? My first attempt was to use the smallest number given; this resulted in no fractions below 1, but often led to lots of decimal places where the factors didn't line up. Then I tried the mode of the sequence, which led to the largest number of 1's possible, but often produced very long floats for background images. My current approach gets the GCD of the whole sequence, which, when it works, works great, but can easily be thrown off course by a single bad apple. Note that while I could easily just pass the numbers I am given along, or pick some fixed factor, or use one of the conversions I specified above, I am looking for a method to reliably scale this list of integers to small, whole numbers or simple fractions, because this would most likely be unsurprising to the end user; this is not a one-off conversion. The end users tend to use 1.0 as their "base" for manipulations (because it's simple and obvious), so it would make more sense for the sizes of entities to cluster around this.
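
    Since an exact GCD is brittle in the presence of one odd-sized object, a more forgiving approach is to score candidate factors by how many of the scaled values land on "nice" numbers, and pick the best-scoring candidate; an outlier then costs one point of score instead of derailing the answer. A Python sketch of the idea - the candidate set (powers of two, matching how tile sets are usually authored) and the niceness test (denominators up to 4) are assumptions to adjust:

        from fractions import Fraction

        def pick_factor(values, candidates=(8, 16, 32, 64, 128)):
            """Return the divisor that makes the most values small and 'nice'."""
            def niceness(v):
                # v is 'nice' if it equals a fraction with denominator <= 4.
                f = Fraction(v).limit_denominator(4)
                return 1.0 if abs(float(f) - v) < 1e-9 else 0.0

            def score(c):
                scaled = [v / c for v in values]
                return (sum(niceness(s) for s in scaled)  # mostly whole/quarters
                        - 0.01 * max(scaled))             # tie-break: keep numbers small

            return max(candidates, key=score)

        # e.g. pick_factor([32, 64, 96, 48, 1000]) -> 32, even though the
        # 1000-pixel background drags the exact GCD down to 8.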


  • Barcode to Product Name Converter

    - by spagetticode
    Hi all, we are designing a mobile shopping system. The camera on the phone will read the barcode, and then we have to convert the barcode to a standard product name in order to save it to our database. We save it to our database because we connect to the web services of local e-commerce sites to get their prices for the related product. We send the product name to get the price from them, so that the user can see the prices, compare, and buy. We cannot send the barcode number to get the data from the e-commerce sites, because some sites do not have barcode-number info. So I have to somehow get the product name knowing only the barcode. Google returns results when the barcode number is searched, but how am I going to parse that data? Or how am I going to know which Google result best suits my input? Is there a site that sells matched barcode and product-name data? We are designing the system with C#. Thanks a lot.


  • How to load an ImageView from a png file?

    - by Peter vdL
    I take a picture with the camera using:

        Intent intent = new Intent(android.provider.MediaStore.ACTION_IMAGE_CAPTURE);
        startActivityForResult(intent, 22);

    When the activity completes, I write the bitmap picture out to a PNG file:

        java.io.FileOutputStream out = openFileOutput("myfile.png", Context.MODE_PRIVATE);
        bmp.compress(Bitmap.CompressFormat.PNG, 90, out);

    That goes OK, and I can see the file is created in my app's private data space. I'm having difficulty when I later want to display that image using an ImageView. Can anyone suggest code to do this? If I try to create a File with path separators in it, it fails. If I try to create a Uri from a name without separators, that fails. I can open the file OK using:

        java.io.FileInputStream in = openFileInput("myfile.png");

    But that doesn't give me the Uri I need to set an image with iv.setImageURI(u). Summary: I have the picture in a PNG file in private app data. What's the code to set that into an ImageView? Thanks.
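
    Since the stream already opens fine, the shortest path skips Uris entirely and decodes the stream back into a Bitmap. A small sketch (standard android.graphics calls; error handling trimmed):

        // Decode the private PNG straight into the ImageView.
        FileInputStream in = openFileInput("myfile.png");
        Bitmap bmp = BitmapFactory.decodeStream(in);
        in.close();
        iv.setImageBitmap(bmp);

    If a Uri really is required, getFileStreamPath() returns the File behind a private file, and Uri.fromFile() on that works within your own process:

        iv.setImageURI(Uri.fromFile(getFileStreamPath("myfile.png")));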


  • How to run OpenGL code without compiling?

    - by Ole Jak
    So I have some OpenGL code (such code, for example):

        /* FUNCTION:     YCamera::CalculateWorldCoordinates
           ARGUMENTS:    x    mouse x coordinate
                         y    mouse y coordinate
                         vec  where to store coordinates
           RETURN:       n/a
           DESCRIPTION:  Convert mouse coordinates into world coordinates
        */
        void YCamera::CalculateWorldCoordinates(float x, float y, YVector3 *vec)
        {
            // START
            GLint viewport[4];
            GLdouble mvmatrix[16], projmatrix[16];
            GLint real_y;
            GLdouble mx, my, mz;

            glGetIntegerv(GL_VIEWPORT, viewport);
            glGetDoublev(GL_MODELVIEW_MATRIX, mvmatrix);
            glGetDoublev(GL_PROJECTION_MATRIX, projmatrix);
            real_y = viewport[3] - (GLint) y - 1;  // viewport[3] is height of window in pixels
            gluUnProject((GLdouble) x, (GLdouble) real_y, 1.0,
                         mvmatrix, projmatrix, viewport, &mx, &my, &mz);

            /* 'mouse' is the point where the mouse projection reaches FAR_PLANE.
               World coordinates is the intersection of line(camera->mouse) with
               plane(z=0) (see LaMothe 306).
               Equation of line in 3D:            (x-x0)/a = (y-y0)/b = (z-z0)/c
               Intersection of line with plane:   z = 0
               x-x0 = a(z-z0)/c  <=>  x = x0 + a(0-z0)/c  <=>  x = x0 - a*z0/c
               y = y0 - b*z0/c
            */
            double lx = fPosition.x - mx;
            double ly = fPosition.y - my;
            double lz = fPosition.z - mz;
            double sum = lx*lx + ly*ly + lz*lz;
            double normal = sqrt(sum);
            double z0_c = fPosition.z / (lz/normal);

            vec->x = (float) (fPosition.x - (lx/normal)*z0_c);
            vec->y = (float) (fPosition.y - (ly/normal)*z0_c);
            vec->z = 0.0f;
        }

    I want to run it, but without precompiling. Is there any way to do such a thing?
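
    Interpreters for C-family code do exist: Cling (the LLVM-based interpreter) can execute C++ like the method above, and TCC's -run mode does the same for plain C, compiling in memory with no output binary. Both still need the GL/GLU headers and libraries present at run time. A hedged usage sketch - flags, paths, and availability vary by platform:

        # Tiny C Compiler: compile in memory and run immediately (C only).
        tcc -lGL -lGLU -run main.c

    Either way, the code isn't "not compiled" so much as compiled invisibly on each run, which is usually what this question actually needs.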


  • iPhone code for UIImagePickerControllerSourceTypeCamera

    - by aman-gupta
        - (IBAction)pickAndDecode:(id)sender
        {
            UIImagePickerControllerSourceType sourceType;
            int i = [sender tag];
            switch (i) {
                case 0:
                    sourceType = UIImagePickerControllerSourceTypeCamera;
                    break;
                case 1:
                    sourceType = UIImagePickerControllerSourceTypeSavedPhotosAlbum;
                    break;
                case 2:
                    sourceType = UIImagePickerControllerSourceTypePhotoLibrary;
                    break;
                default:
                    sourceType = UIImagePickerControllerSourceTypeSavedPhotosAlbum;
            }
            [self pickAndDecodeFromSource:sourceType];
        }

        - (void)updateToolbar
        {
            self.cameraBarItem.enabled = [UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera];
            self.savedPhotosBarItem.enabled = [UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeSavedPhotosAlbum];
            self.libraryBarItem.enabled = [UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypePhotoLibrary];
            self.archiveBarItem.enabled = true;
            self.actionBarItem.enabled = (self.result != nil) && ([self.result actions] != nil) && ([self.result actions].count > 0);
        }

        - (void)pickAndDecodeFromSource:(UIImagePickerControllerSourceType)sourceType
        {
            [self reset];
            // Create the Image Picker
            if ([UIImagePickerController isSourceTypeAvailable:sourceType]) {
                UIImagePickerController *picker = [[UIImagePickerController alloc] init];
                picker.sourceType = sourceType;
                picker.delegate = self;
                picker.allowsImageEditing = YES;  // [[NSUserDefaults standardUserDefaults] boolForKey:@"allowEditing"];
                // Picker is displayed asynchronously.
                [self presentModalViewController:picker animated:YES];
            } else {
                NSLog(@"Attempted to pick an image with illegal source type '%d'", sourceType);
            }
        }

    Where do I put this line in the code above?

        [picker setShowsCameraControls:FALSE];

    Please help me, so that I can change the real view of the iPhone camera according to the view I have designed.
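
    showsCameraControls is only meaningful when the picker's sourceType is the camera, so the natural place is inside pickAndDecodeFromSource:, after the picker is created and configured but before it is presented. A sketch of that stretch of the method (myOverlay is a placeholder for whatever custom UI you designed):

        UIImagePickerController *picker = [[UIImagePickerController alloc] init];
        picker.sourceType = sourceType;
        picker.delegate = self;

        if (sourceType == UIImagePickerControllerSourceTypeCamera) {
            picker.showsCameraControls = NO;          // hide the stock shutter UI
            // picker.cameraOverlayView = myOverlay;  // your own controls go here
        }

        [self presentModalViewController:picker animated:YES];

    With the stock controls hidden, you also take over triggering the shutter yourself, via [picker takePicture].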


  • C# Minimize all running windows when application runs

    - by Derek
    I am working on a C# Windows Forms application. How can I edit my code so that when more than 2 faces are detected by my webcam - more specifically, when "FaceDetectedLabel.Text = "Faces Detected : " + cam.facesdetected.ToString();" becomes "Faces Detected: 2" or more - I can do the following:

    1. Minimize all running programs except my application.
    2. Log out of my computer.

    Here is my code:

        namespace PBD
        {
            public partial class MainPage : Form
            {
                // declaring global variables
                private Capture capture;  // takes images from camera as image frames

                public MainPage()
                {
                    InitializeComponent();
                }

                private void ProcessFrame(object sender, EventArgs arg)
                {
                    Wrapper cam = new Wrapper();
                    // show the image in the EmguCV ImageBox
                    WebcamPictureBox.Image = cam.start_cam(capture).Resize(390, 243, Emgu.CV.CvEnum.INTER.CV_INTER_CUBIC).ToBitmap();
                    FaceDetectedLabel.Text = "Faces Detected : " + cam.facesdetected.ToString();
                }

                private void MainPage_Load(object sender, EventArgs e)
                {
                    #region if capture is not created, create it now
                    if (capture == null)
                    {
                        try
                        {
                            capture = new Capture();
                        }
                        catch (NullReferenceException excpt)
                        {
                            MessageBox.Show(excpt.Message);
                        }
                    }
                    #endregion

                    Application.Idle += ProcessFrame;
                }
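
    Both actions go through the Windows shell rather than WinForms itself: LockWorkStation is a documented user32 export, and "minimize all" is reachable through the Shell.Application COM object; re-activating your own form afterwards un-minimizes it. A hedged sketch (how this hooks into your Wrapper class is an assumption):

        using System.Reflection;
        using System.Runtime.InteropServices;

        [DllImport("user32.dll")]
        private static extern bool LockWorkStation();  // returns to the logon screen

        private void OnTooManyFaces()
        {
            // "Minimize all" via the Windows shell COM object.
            Type shellType = Type.GetTypeFromProgID("Shell.Application");
            object shell = Activator.CreateInstance(shellType);
            shellType.InvokeMember("MinimizeAll", BindingFlags.InvokeMethod, null, shell, null);

            // Bring this form back to the front.
            this.WindowState = FormWindowState.Normal;
            this.Activate();

            // Or lock the machine outright:
            // LockWorkStation();
        }

    In ProcessFrame you would call OnTooManyFaces() when cam.facesdetected >= 2, ideally guarded by a flag so it fires once per episode rather than on every frame. Note that LockWorkStation locks the session; a true log-off is ExitWindowsEx territory and would kill your app along with everything else.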


  • Cocos2d - smooth sprite movement in tile map RPG

    - by Lendo92
    I've been working on a 2-D Gameboy-style RPG for a while now, and the game logic is mostly all done, so I'm starting to try to make things look good. One thing I've noticed is that the walking movement / screen movement is a little bit choppy. Technically it should work fine, but it seems to have some quirks, either from taking up a lot of processing power or from timing inconsistencies between moving the screen and moving the sprite. To move the sprite, once I know where I want to move it, I call:

        tempPos.y += 3*theHKMap.tileSize.width/5;
        id actionMove = [CCMoveTo actionWithDuration:0.1 position:tempPos];
        id actionMoveDone = [CCCallFuncN actionWithTarget:self selector:@selector(orientOneMove)];
        [guy runAction:[CCSequence actions:actionMove, actionMoveDone, nil]];
        [self setCenterOfScreen:position];

    Then, in orientOneMove, I call:

        // the walking picture - I change the texture back at the end of the movement
        [self.guy setTexture:[[CCTextureCache sharedTextureCache] addImage:@"guysprite07.png"]];
        id actionMove = [CCMoveTo actionWithDuration:0.15 position:self.tempLocation2];
        id actionMoveDone = [CCCallFuncN actionWithTarget:self selector:@selector(toggleTouchEnabled)];
        [guy runAction:[CCSequence actions:actionMove, actionMoveDone, nil]];

    The code for the concurrently running setCenterOfScreen: method is:

        id actionMove = [CCMoveTo actionWithDuration:0.25 position:difference];
        [self runAction:[CCSequence actions:actionMove, nil, nil]];

    So setCenterOfScreen moves the camera in one clean move, while the guy's movement is chopped into two actions to animate it (which I believe might be inefficient). It's hard to tell what makes the movement not perfectly clean just by looking at it, but essentially the guy isn't always perfectly in the center of the screen - during movement he's often a pixel or two off for an instant. Any ideas / solutions?
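
    A one-or-two-pixel drift is what you'd expect from running two independent CCMoveTo timelines (0.1 s + 0.15 s on the sprite vs. 0.25 s on the layer): both are sampled on the same ticks, but rounding plus the action boundary in the middle lets them disagree briefly. A common fix is to animate only the sprite and have the layer track it every frame. One hedged option is cocos2d's built-in follow action (mapWidth/mapHeight stand in for your tile map's pixel size):

        // Let the layer track the sprite each frame instead of animating the
        // camera separately; worldBoundary clamps scrolling at the map edges.
        [self runAction:[CCFollow actionWithTarget:guy
                                     worldBoundary:CGRectMake(0, 0, mapWidth, mapHeight)]];

    The alternative is a scheduled update method that recomputes self.position from guy.position once per frame, which gives the same lock-step effect with more control over easing.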


  • Font Awesome Not Working In Chrome

    - by Connor Black
    So I'm trying to prototype a marketing page, and I'm using Bootstrap and the new Font Awesome file. The problem is that when I try to use an icon, all that gets rendered on the page is a big square. Here's how I include the files in the head:

        <head>
          <title> </title>
          <link rel="stylesheet" href="css/bootstrap.css">
          <link rel="stylesheet" href="css/bootstrap-responsive.css">
          <link rel="stylesheet" href="css/font-awesome.css">
          <link rel="stylesheet" href="css/app.css">
          <!--[if IE 7]>
            <link rel="stylesheet" href="css/font-awesome-ie7.min.css">
          <![endif]-->
        </head>

    And here's an example of me trying to use an icon:

        <i class="icon-camera-retro"></i>

    But all that gets rendered is a big square. Does anyone know what could be going on?
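
    A square glyph usually means the icon font itself never loaded, not that the class name is wrong. font-awesome.css references its font files with relative URLs, so the usual checklist is: the font directory sits next to css/ exactly where the stylesheet expects, and the files come back 200 rather than 404 in the network tab. The @font-face block it ships with looks roughly like this (Font Awesome 3 layout assumed):

        /* inside css/font-awesome.css - paths resolve relative to the CSS file */
        @font-face {
          font-family: 'FontAwesome';
          src: url('../font/fontawesome-webfont.eot');
          src: url('../font/fontawesome-webfont.woff') format('woff'),
               url('../font/fontawesome-webfont.ttf') format('truetype');
        }

    With the stylesheet at css/font-awesome.css, Chrome therefore looks for a sibling font/ directory; a missing folder, a renamed file, or a server that refuses to serve .woff all produce exactly this empty-box rendering.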


  • Is it possible to create an Android Service that listens for hardware key presses?

    - by VoteBrian
    I'd like to run an Android background service that will act as a key listener from the home screen or when the phone is asleep. Is this possible? From semi-related examples online, I put together the following service, but I get the error "onKeyDown is undefined for the type Service". Does this mean it can't be done without rewriting Launcher, or is there something obvious I'm missing?

        public class ServiceName extends Service {
            @Override
            public void onCreate() {
                // Stuff
            }

            public IBinder onBind(Intent intent) {
                // Stuff
                return null;
            }

            @Override
            public boolean onKeyDown(int keyCode, KeyEvent event) {
                if (event.getAction() == KeyEvent.ACTION_DOWN) {
                    switch (keyCode) {
                        case KeyEvent.KEYCODE_A:
                            // Stuff
                            return true;
                        case KeyEvent.KEYCODE_B:
                            // Stuff
                            return true;
                        // etc.
                    }
                }
                return super.onKeyDown(keyCode, event);
            }
        }

    I realize Android defaults to the search bar when you type from the home screen, but this really is just for a very particular use. I don't really expect anyone but me to want this. I just think it'd be nice, for example, to use the camera button to wake the phone.
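
    The compile error is telling the truth: onKeyDown belongs to KeyEvent.Callback implementers like Activity and View, and a Service never receives raw key events - only the focused window does. The camera-button example specifically is reachable a different way, because Android broadcasts it. A hedged sketch of that narrower approach:

        // Receives the dedicated camera button, even when your UI isn't up.
        public class CameraButtonReceiver extends BroadcastReceiver {
            @Override
            public void onReceive(Context context, Intent intent) {
                if (Intent.ACTION_CAMERA_BUTTON.equals(intent.getAction())) {
                    // Stuff - e.g. start your service or activity here.
                    abortBroadcast();  // optionally stop the stock camera app opening
                }
            }
        }

    registered in AndroidManifest.xml with:

        <receiver android:name=".CameraButtonReceiver">
            <intent-filter android:priority="1000">
                <action android:name="android.intent.action.CAMERA_BUTTON" />
            </intent-filter>
        </receiver>

    Arbitrary keys (A, B, ...) have no equivalent broadcast, so listening for those from the home screen genuinely would require being the launcher or an input method.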


  • Android: Resolution issue while saving image into gallery

    - by Luca D'Amico
    I've managed to programmatically take a photo with the camera, display it in an ImageView, and then, after pressing a button, save it to the gallery. It works, but the problem is that the saved photos are low resolution. WHY?! I took the photo with this code:

        Intent cameraIntent = new Intent(android.provider.MediaStore.ACTION_IMAGE_CAPTURE);
        startActivityForResult(cameraIntent, CAMERA_PIC_REQUEST);

    Then I save the photo to a variable using:

        protected void onActivityResult(int requestCode, int resultCode, Intent data) {
            if (requestCode == CAMERA_PIC_REQUEST) {
                thumbnail = (Bitmap) data.getExtras().get("data");
            }
        }

    And then, after displaying it in an ImageView, I save it to the gallery with this function:

        public void SavePicToGallery(Bitmap picToSave, File savePath) {
            String JPEG_FILE_PREFIX = "PIC";
            String JPEG_FILE_SUFFIX = ".JPG";
            String timeStamp = new SimpleDateFormat("yyyyMMdd_HHmmss").format(new Date());
            String imageFileName = JPEG_FILE_PREFIX + timeStamp + "_";

            File filePath = null;
            try {
                filePath = File.createTempFile(imageFileName, JPEG_FILE_SUFFIX, savePath);
            } catch (IOException e) {
                e.printStackTrace();
            }

            FileOutputStream out = null;
            try {
                out = new FileOutputStream(filePath);
            } catch (FileNotFoundException e) {
                e.printStackTrace();
            }

            picToSave.compress(CompressFormat.JPEG, 100, out);
            try {
                out.flush();
                out.close();
            } catch (IOException e) {
                e.printStackTrace();
            }

            // Add the pic to the Android gallery
            String mCurrentPhotoPath = filePath.getAbsolutePath();
            MediaScannerConnection.scanFile(this,
                    new String[]{ mCurrentPhotoPath }, null,
                    new MediaScannerConnection.OnScanCompletedListener() {
                        public void onScanCompleted(String path, Uri uri) {
                        }
                    });
        }

    I really can't figure out why it loses so much quality when saved. Any help, please? Thanks.
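
    The quality is lost before the save: the "data" extra that ACTION_IMAGE_CAPTURE returns is deliberately a small thumbnail bitmap (the variable name in the onActivityResult above is the clue). To get the full-resolution image, pass the camera app a file to write into via EXTRA_OUTPUT and read that file back afterwards. A hedged sketch - the output location is an assumption; it has to be somewhere the camera app can write:

        // Tell the camera where to put the full-size JPEG.
        File photoFile = new File(Environment.getExternalStorageDirectory(), "full_pic.jpg");
        Intent cameraIntent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
        cameraIntent.putExtra(MediaStore.EXTRA_OUTPUT, Uri.fromFile(photoFile));
        startActivityForResult(cameraIntent, CAMERA_PIC_REQUEST);

        // Later, in onActivityResult: ignore the "data" thumbnail, decode the file.
        Bitmap fullSize = BitmapFactory.decodeFile(photoFile.getAbsolutePath());

    Note that when EXTRA_OUTPUT is supplied, many camera apps return a null data intent, so the onActivityResult code has to stop relying on data.getExtras().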


  • Trouble parsing self closing XML tags using SAX parser

    - by sandesh
    Hi, I am having trouble parsing self-closing XML tags using SAX. I am trying to extract the link tag from the Google Base API, and I am having reasonable success parsing regular tags. Here is a snippet of the XML:

        <entry>
          <id>http://www.google.com/base/feeds/snippets/15802191394735287303</id>
          <published>2010-04-05T11:00:00.000Z</published>
          <updated>2010-04-24T19:00:07.000Z</updated>
          <category scheme='http://base.google.com/categories/itemtypes' term='Products'/>
          <title type='text'>En-el1 Li-ion Battery+charger For Nikon Digital Camera</title>
          <link rel='alternate' type='text/html' href='http://rover.ebay.com/rover/1/711-67261-24966-0/2?ipn=psmain&amp;icep_vectorid=263602&amp;kwid=1&amp;mtid=691&amp;crlp=1_263602&amp;icep_item_id=170468125748&amp;itemid=170468125748'/>

    ...and so on. I can parse the updated and published tags, but not the link and category tags. Here are my startElement and endElement overrides:

        public void startElement(String uri, String localName, String qName, Attributes attributes) throws SAXException {
            if (qName.equals("title") && xmlTags.peek().equals("entry")) {
                insideEntryTitle = true;
            }
            xmlTags.push(qName);
        }

        public void endElement(String uri, String localName, String qName) throws SAXException {
            // If a "title" element is closed, we start a new line, to prepare
            // printing the new title.
            xmlTags.pop();
            if (insideEntryTitle) {
                insideEntryTitle = false;
                System.out.println();
            }
        }

    The declaration for xmlTags:

        private Stack<String> xmlTags = new Stack<String>();

    Any help, guys? This is my first post here; I hope I have followed the posting rules! Thanks a ton.
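
    Self-closing tags are not special to SAX: <link .../> still fires startElement followed immediately by endElement. What they never fire is characters(), because an empty element has no text content - everything interesting about link and category lives in the attributes, which are only available inside startElement. A sketch of pulling the href out there, keeping the stack bookkeeping as-is:

        public void startElement(String uri, String localName, String qName,
                                 Attributes attributes) throws SAXException {
            if (qName.equals("title") && xmlTags.peek().equals("entry")) {
                insideEntryTitle = true;
            } else if (qName.equals("link")) {
                // Attributes exist only for the duration of this callback.
                String rel  = attributes.getValue("rel");
                String href = attributes.getValue("href");
                if ("alternate".equals(rel)) {
                    System.out.println("link: " + href);
                }
            }
            xmlTags.push(qName);
        }

    The same attributes.getValue(...) pattern covers category's scheme and term.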


  • UIImagePickerController crashes on rapid scrolling, slower than photos app

    - by vvanhee
    Most of the time, my image picker works perfectly (iOS 4.2.1). However, if I scroll very rapidly up and down about 4-6 times through my camera roll of about 300 photos, I get a crash. This never happens with the Photos app on the same iPhone 3GS. Also, I'm noticing that the stock Photos app scrolls much more smoothly than my image picker. Has anyone else noticed this behavior? I'd be interested if others could attempt this in their own apps and see if they crash. I don't think it's related to other objects hogging memory on my iPhone, because it's a simple app and this happens right after I start it. It also doesn't seem to be related to messages sent to released objects or over-releasing of objects in viewDidUnload, based on my crash logs and the fact that the simulator responds well to simulated memory warnings. I think it might be a bug in the internal implementation of the UIImagePickerController...

    This is how I start the picker. I've done this multiple ways (including setting a retain property for the UIImagePickerController in my header and releasing in dealloc). This seems to be the best way (crashes least):

        UIImagePickerController *picker = [[UIImagePickerController alloc] init];
        picker.delegate = self;
        picker.sourceType = UIImagePickerControllerSourceTypeSavedPhotosAlbum;
        picker.allowsEditing = YES;
        [self presentModalViewController:picker animated:YES];
        [picker release];

    This is the crashed thread (I get various exception types):

        Exception Type:  SIGSEGV
        Exception Codes: SEGV_ACCERR at 0xfffffffff4faafa4
        Crashed Thread:  8
        ...
        Thread 8 Crashed:
        0  CoreFoundation     0x000494ea  -[__NSArrayM replaceObjectAtIndex:withObject:] + 98
        1  PhotoLibrary       0x00008e0f  -[PLImageTable _segmentAtIndex:] + 527
        2  PhotoLibrary       0x00008a21  -[PLImageTable _mappedImageDataAtIndex:] + 221
        3  PhotoLibrary       0x0000893f  -[PLImageTable dataForEntryAtIndex:] + 15
        4  PhotoLibrary       0x000087e7  PLThumbnailManagerImageDataAtIndex + 35
        5  PhotoLibrary       0x00008413  -[PLThumbnailManager _dataForPhoto:format:width:height:bytesPerRow:dataWidth:dataHeight:imageDataOffset:imageDataFormat:preheat:] + 299
        6  PhotoLibrary       0x000b6c13  __-[PLThumbnailManager preheatImageDataForImages:withFormat:]_block_invoke_1 + 159
        7  libSystem.B.dylib  0x000d6680  _dispatch_call_block_and_release + 20
        8  libSystem.B.dylib  0x000d6ba0  _dispatch_worker_thread2 + 128
        9  libSystem.B.dylib  0x0007b251  _pthread_wqthread + 265


  • PhotoChooserTask + Navigation.

    - by user562405
    I have two images and have added a MouseButtonDown event to each of them. The first image handles the event to open the gallery; the second image handles the event to open the camera. When the user has chosen an image from the gallery, I want to navigate to the next page. It navigates, but before completing the navigation it displays MainPage and then moves to the next page. I don't want to display MainPage once the user chooses the image from the gallery. Please help. Thanks in advance.

        public partial class MainPage : PhoneApplicationPage
        {
            PhotoChooserTask objPhotoChooser;
            CameraCaptureTask cameraCaptureTask;

            // Constructor
            public MainPage()
            {
                InitializeComponent();
                objPhotoChooser = new PhotoChooserTask();
                objPhotoChooser.Completed += new EventHandler<PhotoResult>(objPhotoChooser_Completed);
                cameraCaptureTask = new CameraCaptureTask();
                cameraCaptureTask.Completed += new EventHandler<PhotoResult>(objCameraCapture_Completed);
            }

            void objPhotoChooser_Completed(object sender, PhotoResult e)
            {
                if (e != null && e.TaskResult == TaskResult.OK)
                {
                    // Take JPEG stream and decode into a WriteableBitmap object
                    App.CapturedImage = PictureDecoder.DecodeJpeg(e.ChosenPhoto);
                    // Delay navigation until the first navigated event
                    NavigationService.Navigated += new NavigatedEventHandler(navigateCompleted);
                }
            }

            void navigateCompleted(object sender, EventArgs e)
            {
                // Do the delayed navigation from the main page
                this.NavigationService.Navigate(new Uri("/ImageViewer.xaml", UriKind.RelativeOrAbsolute));
                NavigationService.Navigated -= new NavigatedEventHandler(navigateCompleted);
            }

            void objCameraCapture_Completed(object sender, PhotoResult e)
            {
                if (e.TaskResult == TaskResult.OK)
                {
                    // Take JPEG stream and decode into a WriteableBitmap object
                    App.CapturedImage = PictureDecoder.DecodeJpeg(e.ChosenPhoto);
                    // Delay navigation until the first navigated event
                    NavigationService.Navigated += new NavigatedEventHandler(navigateCompleted);
                }
            }

            protected override void OnBackKeyPress(System.ComponentModel.CancelEventArgs e)
            {
                e.Cancel = true;
            }

            private void image1_MouseLeftButtonDown(object sender, MouseButtonEventArgs e)
            {
                objPhotoChooser.Show();
            }

            private void image2_MouseLeftButtonDown(object sender, MouseButtonEventArgs e)
            {
                cameraCaptureTask.Show();
            }
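
    On Windows Phone 7 a chooser always returns to the page that launched it, so MainPage will come back no matter where the navigation is hooked; the usual pattern is to make its reappearance invisible rather than fight the sequence - set a flag in the Completed handler and navigate from OnNavigatedTo, which runs as MainPage returns. A hedged sketch:

        private bool navigateToViewer;  // set when the chooser succeeds

        void objPhotoChooser_Completed(object sender, PhotoResult e)
        {
            if (e != null && e.TaskResult == TaskResult.OK)
            {
                App.CapturedImage = PictureDecoder.DecodeJpeg(e.ChosenPhoto);
                navigateToViewer = true;  // defer the navigation
            }
        }

        protected override void OnNavigatedTo(NavigationEventArgs e)
        {
            base.OnNavigatedTo(e);
            if (navigateToViewer)
            {
                navigateToViewer = false;
                // Dispatcher hop: navigating directly inside OnNavigatedTo
                // can be ignored by the frame.
                Dispatcher.BeginInvoke(() =>
                    NavigationService.Navigate(new Uri("/ImageViewer.xaml", UriKind.Relative)));
            }
        }

    Pairing this with a plain loading screen (or an opacity-0 root) on MainPage hides the single frame where it is still visible.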

