Search Results

Search found 19546 results on 782 pages for 'image recognition'.

Page 118/782

  • Adding a dynamic image to a UITableViewCell

    - by Graeme
    Hi, I have a table in which I want a dynamic image to load in at the left-hand side. The table needs to use an IF statement to select the appropriate image from the "Resources" folder, based upon [dog types]. The [dog types] value is extracted from an RSS feed, so the image in each table cell needs to match that cell's [dog types] tag. Update: see the code below for what I'm looking to do (only below it's for earthquakes; for me it's pictures of dogs). I suspect I need to use something like - (UIImage *)imageForTypes:(NSString *)types to do such a thing. Thanks. Updated code: this is from Apple's Earthquake sample but does exactly what I need; it matches an image to the magnitude (2.0, 3.0, 4.0, 5.0, etc.) of each earthquake:

        - (UIImage *)imageForMagnitude:(CGFloat)magnitude {
            if (magnitude >= 5.0) { return [UIImage imageNamed:@"5.0.png"]; }
            if (magnitude >= 4.0) { return [UIImage imageNamed:@"4.0.png"]; }
            if (magnitude >= 3.0) { return [UIImage imageNamed:@"3.0.png"]; }
            if (magnitude >= 2.0) { return [UIImage imageNamed:@"2.0.png"]; }
            return nil;
        }

    and, under - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath:

        magnitudeImage.image = [self imageForMagnitude:earthquake.magnitude];

    Read the article

  • Using android gesture on top of menu buttons

    - by chriacua
    What I want is to have an options menu where the user can choose to navigate between: 1) touching a button and then pressing down on the trackball to select it, and 2) drawing predefined gestures from Gestures Builder. As it stands now, I have created my buttons with OnClickListener and the gestures with GestureOverlayView. Then I start a new Activity depending on whether the user pressed a button or executed a gesture. However, when I attempt to draw a gesture, it is not picked up. Only pressing the buttons is recognized. The following is my code:

        public class Menu extends Activity implements OnClickListener, OnGesturePerformedListener {

            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);

                // create TextToSpeech
                myTTS = new TextToSpeech(this, this);
                myTTS.setLanguage(Locale.US);

                // create Gestures
                mLibrary = GestureLibraries.fromRawResource(this, R.raw.gestures);
                if (!mLibrary.load()) {
                    finish();
                }

                // Set up click listeners for all the buttons.
                View playButton = findViewById(R.id.play_button);
                playButton.setOnClickListener(this);
                View instructionsButton = findViewById(R.id.instructions_button);
                instructionsButton.setOnClickListener(this);
                View modeButton = findViewById(R.id.mode_button);
                modeButton.setOnClickListener(this);
                View statsButton = findViewById(R.id.stats_button);
                statsButton.setOnClickListener(this);
                View exitButton = findViewById(R.id.exit_button);
                exitButton.setOnClickListener(this);

                GestureOverlayView gestures = (GestureOverlayView) findViewById(R.id.gestures);
                gestures.addOnGesturePerformedListener(this);
            }

            public void onGesturePerformed(GestureOverlayView overlay, Gesture gesture) {
                ArrayList<Prediction> predictions = mLibrary.recognize(gesture);
                // We want at least one prediction
                if (predictions.size() > 0) {
                    Prediction prediction = predictions.get(0);
                    // We want at least some confidence in the result
                    if (prediction.score > 1.0) {
                        // Show the gesture
                        Toast.makeText(this, prediction.name, Toast.LENGTH_SHORT).show();
                        // User drew symbol for PLAY
                        if (prediction.name.equals("Play")) {
                            myTTS.shutdown();
                            // connect to game
                        // User drew symbol for INSTRUCTIONS
                        } else if (prediction.name.equals("Instructions")) {
                            myTTS.shutdown();
                            startActivity(new Intent(this, Instructions.class));
                        // User drew symbol for MODE
                        } else if (prediction.name.equals("Mode")) {
                            myTTS.shutdown();
                            startActivity(new Intent(this, Mode.class));
                        // User drew symbol to QUIT
                        } else {
                            finish();
                        }
                    }
                }
            }

            @Override
            public void onClick(View v) {
                switch (v.getId()) {
                    case R.id.instructions_button:
                        startActivity(new Intent(this, Instructions.class));
                        break;
                    case R.id.mode_button:
                        startActivity(new Intent(this, Mode.class));
                        break;
                    case R.id.exit_button:
                        finish();
                        break;
                }
            }
        }

    Any suggestions would be greatly appreciated!

    Read the article

  • mysql image disable print download

    - by Vish
    Hi, We use a Flex AIR client and a WAMP server. TIFF images are stored in MySQL. Currently, I can download the image from the AIR client and it prompts for a download dialog. Things are fine up to this point. We got a new requirement: only some users may print the image which gets downloaded; the other users should not be able to print the TIFF image. I'm wondering how to accomplish this. One idea, not sure if it's efficient, is to convert the requested image to PDF on the server side, disable the print option there (I hope there are APIs available) and send back the PDF. Please let me know of better ideas. Also, is there a way to prevent the file download dialog from popping up every time the file is requested for download? Can we just get the file stream to the client and manipulate it to open with a particular viewer or write it to PDF... Please help.
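
    For illustration, a minimal sketch of the "convert to a no-print PDF on the server" idea is below, in Python rather than the poster's PHP/WAMP stack; the file names and password are placeholders, and it assumes the Pillow and pikepdf libraries are available. Note that PDF permission flags are only honored by cooperating viewers, so they deter casual printing rather than guarantee it.

        # Sketch: render a TIFF into a PDF whose permissions disallow printing.
        # Assumes `pip install pillow pikepdf`; paths and passwords are placeholders.
        from PIL import Image
        import pikepdf

        def tiff_to_noprint_pdf(tiff_path, pdf_path, owner_password):
            # Render the TIFF into an intermediate, unrestricted PDF.
            Image.open(tiff_path).convert("RGB").save("_tmp.pdf")
            # Re-save it with an encryption dictionary that denies printing.
            no_print = pikepdf.Permissions(print_lowres=False, print_highres=False)
            with pikepdf.open("_tmp.pdf") as pdf:
                pdf.save(pdf_path,
                         encryption=pikepdf.Encryption(user="", owner=owner_password,
                                                       allow=no_print))

        tiff_to_noprint_pdf("scan.tif", "scan_noprint.pdf", owner_password="change-me")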

    Read the article

  • webcam refreshing image in iframe not working IE

    - by Nix
    I'm trying to show images from this webcam http://regacam.usz.ch/cgi-bin/guestimage.html on an HTML page by putting in an iframe that points to another page containing an image tag. But it isn't working in IE (I just see a little cross where the image should be); Firefox is OK. Here's the page with the image in it:

        <html>
          <head>
            <meta http-equiv="refresh" content="5">
          </head>
          <body>
            <img src="http://regacam.usz.ch/cgi-bin/faststream.jpg?stream=full&fps=0.01667&rand=295543" width="200" height="150"/>
          </body>
        </html>

    and the iframe:

        <div id="rightcontainer">
          <iframe id="rightframe" src="zurich.html" name="content" frameborder="0"></iframe>
        </div>

    I tried a different webcam elsewhere and that works OK in both browsers. Is there something odd about the one in my example? I got the link in src by right-clicking and copying the image location - is that the right way to go about it?

    Read the article

  • how to tile 3D mesh with image brush in XAML

    - by MC9000
    I have a 2D square in a Viewport3D onto which I want to tile an image (like a checkerboard, or flooring with a "tiles" effect). I've created an ImageBrush (the image is 50x50 pixels, the surface 250x550 pixels) and a viewport (trying to follow MS's site - though their example is for 2D), but only one of the colors in the "tile" image shows up and no tiling is seen. I can't find a single example on the Internet and MS's site has no info (that I can find) on 3D XAML anywhere, so I'm stumped as to how to actually do this.

        <Viewport3D>
          <Viewport3D.Camera>
            <PerspectiveCamera Position="125,790,120" LookDirection="0,-.7,-0.25" UpDirection="0,0,1" />
          </Viewport3D.Camera>
          <ModelVisual3D>
            <ModelVisual3D.Content>
              <Model3DGroup>
                <AmbientLight Color="white" />
                <GeometryModel3D>
                  <GeometryModel3D.Geometry>
                    <MeshGeometry3D Positions="0,0,0 250,0,0 250,550,0 0,550,0"
                                    TriangleIndices="0 1 3 1 2 3" />
                  </GeometryModel3D.Geometry>
                  <GeometryModel3D.Material>
                    <DiffuseMaterial>
                      <DiffuseMaterial.Brush>
                        <ImageBrush ViewportUnits="Absolute" TileMode="Tile"
                                    ImageSource="testsquare.gif" Viewport="0,0,50,50"
                                    Stretch="None" ViewboxUnits="Absolute" />
                      </DiffuseMaterial.Brush>
                    </DiffuseMaterial>
                  </GeometryModel3D.Material>
                </GeometryModel3D>
              </Model3DGroup>
            </ModelVisual3D.Content>
          </ModelVisual3D>
        </Viewport3D>

    Read the article

  • Android 2.1 fling gesture captured on textview but still a contextmenu opens

    - by hermo
    The following problem seems unique to 2.1; it happens both on an emulator and on a Nexus. The same example works fine on the other platforms I've tested (1.5, 1.6 and 2.0 emulators). I've created a gestureListener as described in this post. The difference is that I've added the listener to a TextView which also has a context menu registered, i.e. something like the following:

        onCreate(...) {
            ...
            // Layout contains a large TextView on which I want to add a context menu
            tv = findViewById(R.id.text_view);
            tv.registerForContextMenu(this);

            // create the gestureListener according to the above-mentioned post
            gestureListener = ...

            // set the listener on the text view
            tv.setOnTouchListener(gestureListener);
            ...
        }

    When testing it, the correct gesture is recognized alright, but every other time it also causes the context menu to be opened. As the same example works on non-2.1 platforms, I've got a feeling it is not my code that is the problem... Thankful for any suggestions.

    Read the article

  • Cursor image to the center on mouseout

    - by Milla
    Hi all! I'm a real noobie with Flash and I was wondering if somebody could help me with this one. I have this ActionScript 3 code, where the cursor image "ball_mc" follows the mouse's position with a slight delay:

        stage.addEventListener(Event.ENTER_FRAME, followBall);

        function followBall(event:Event):void {
            var dx:int = ball_mc.x - mouseX;
            var dy:int = ball_mc.y - mouseY;
            ball_mc.x -= dx / 5;
            ball_mc.y -= dy / 5;
        }

    1) How can I get the cursor image to automatically return to the center of the stage on mouseout? As of now, it stays at the position where the mouse leaves the stage. 2) How can I reverse the movement of the mouse, so that when I, for example, move the mouse to the right, the cursor image moves to the left? And when moving the mouse up, the image would go down. The stage is 800 x 250 pixels, in case that makes any difference.
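
    The arithmetic behind both requests is language-agnostic, so here is a minimal sketch of it in Python rather than ActionScript (the stage size comes from the question; the function and variable names are placeholders, not Flash API). Mirroring the motion means easing toward the mouse position reflected across the stage, and returning to the center on mouseout just swaps the target for the stage center.

        # Sketch of the easing math only; call update_ball once per frame.
        STAGE_W, STAGE_H = 800, 250
        EASE = 5  # same divisor as the original dx / 5

        def update_ball(ball_x, ball_y, mouse_x, mouse_y, mouse_on_stage):
            if mouse_on_stage:
                # Reversed movement: aim at the mouse position mirrored across
                # the stage, so moving the mouse right sends the ball left.
                target_x = STAGE_W - mouse_x
                target_y = STAGE_H - mouse_y
            else:
                # On mouseout, drift back to the center of the stage.
                target_x, target_y = STAGE_W / 2, STAGE_H / 2
            # Same delayed-follow easing as the original followBall().
            ball_x += (target_x - ball_x) / EASE
            ball_y += (target_y - ball_y) / EASE
            return ball_x, ball_y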

    Read the article

  • Recognize Dates In A String

    - by Tim Scott
    I want a class something like this:

        public interface IDateRecognizer
        {
            DateTime[] Recognize(string s);
        }

    The dates might exist anywhere in the string and might be any format. For now, I could limit to U.S. culture formats. The dates would not be delimited in any way. They might have arbitrary amounts of whitespace between parts of the date. The ideas I have are:

        ANTLR
        Regex
        Hand rolled

    I have never used ANTLR, so I would be learning from scratch. I wonder if there are libraries or code samples out there that do something similar that could jump start me. Is ANTLR too heavy for such a narrow use? I have used Regex a lot before, but I hate it for all the reasons that most people hate it. I could certainly hand roll it but I'd rather not re-solve a solved problem. Suggestions? UPDATE: Here is an example. Given this input: This is a date 11/3/63. Here is another one: November 03, 1963; and another one Nov 03, 63 and some more (11/03/1963). The dates could be in any U.S. format. They might have dashes like 11-2-1963 or weird extra whitespace inside like this: Nov   3,   1963, and even maybe the comma is missing like [Nov 3 63] but that's an edge case. The output should be an array of seven DateTimes. Each date would be the same: 11/03/1963 00:00:00.
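
    Since only U.S.-style formats are in scope, a hedged sketch of the "regex plus parse" route is below, in Python rather than C#, purely to show the shape of the approach; the pattern list is deliberately incomplete and the two-digit-year rule is an assumption.

        # Sketch: pull a few U.S.-style dates out of free text.
        import re
        from datetime import datetime

        MONTH = (r"(?:Jan(?:uary)?|Feb(?:ruary)?|Mar(?:ch)?|Apr(?:il)?|May|Jun(?:e)?|"
                 r"Jul(?:y)?|Aug(?:ust)?|Sep(?:tember)?|Oct(?:ober)?|Nov(?:ember)?|Dec(?:ember)?)")

        PATTERNS = [
            # 11/3/63, 11-03-1963, (11/03/1963)
            (re.compile(r"\b(\d{1,2})[/-](\d{1,2})[/-](\d{2,4})\b"),
             lambda m: (int(m.group(3)), int(m.group(1)), int(m.group(2)))),
            # November 03, 1963 / Nov 3 63 (comma and extra whitespace optional)
            (re.compile(r"\b(" + MONTH + r")\s+(\d{1,2})\s*,?\s+(\d{2,4})\b", re.IGNORECASE),
             lambda m: (int(m.group(3)),
                        datetime.strptime(m.group(1)[:3], "%b").month,
                        int(m.group(2)))),
        ]

        def recognize(text):
            found = []
            for pattern, parts in PATTERNS:
                for m in pattern.finditer(text):
                    year, month, day = parts(m)
                    if year < 100:              # naive two-digit-year handling
                        year += 1900 if year > 30 else 2000
                    try:
                        found.append(datetime(year, month, day))
                    except ValueError:          # e.g. 31/45/63 is not a date
                        pass
            return found

        print(recognize("This is a date 11/3/63, also November 03, 1963 and 11-2-1963."))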

    Read the article

  • Question SpeechSynthesizer.SetOutputToAudioStream audio format problem

    - by Chris Kugler
    Hi, I'm currently working on an application which requires transmission of speech encoded to a specific audio format.

        System.Speech.AudioFormat.SpeechAudioFormatInfo synthFormat =
            new System.Speech.AudioFormat.SpeechAudioFormatInfo(
                System.Speech.AudioFormat.EncodingFormat.Pcm, 8000, 16, 1, 16000, 2, null);

    This states that the audio is in PCM format, 8000 samples per second, 16 bits per sample, mono, 16000 average bytes per second, block alignment of 2. When I attempt to execute the following code there is nothing written to my MemoryStream instance; however, when I change from 8000 samples per second up to 11025 the audio data is written successfully.

        SpeechSynthesizer synthesizer = new SpeechSynthesizer();
        waveStream = new MemoryStream();

        PromptBuilder pbuilder = new PromptBuilder();
        PromptStyle pStyle = new PromptStyle();
        pStyle.Emphasis = PromptEmphasis.None;
        pStyle.Rate = PromptRate.Fast;
        pStyle.Volume = PromptVolume.ExtraLoud;
        pbuilder.StartStyle(pStyle);
        pbuilder.StartParagraph();
        pbuilder.StartVoice(VoiceGender.Male, VoiceAge.Teen, 2);
        pbuilder.StartSentence();
        pbuilder.AppendText("This is some text.");
        pbuilder.EndSentence();
        pbuilder.EndVoice();
        pbuilder.EndParagraph();
        pbuilder.EndStyle();

        synthesizer.SetOutputToAudioStream(waveStream, synthFormat);
        synthesizer.Speak(pbuilder);
        synthesizer.SetOutputToNull();

    There are no exceptions or errors recorded when using a sample rate of 8000, and I couldn't find anything useful in the documentation regarding SetOutputToAudioStream and why it succeeds at 11025 samples per second and not 8000. I have a workaround involving a wav file that I generated and converted to the correct sample rate using some sound editing tools, but I would like to generate the audio from within the application if I can. One particular point of interest was that the SpeechRecognitionEngine accepts that audio format and successfully recognized the speech in my synthesized wave file... Update: Recently discovered that this audio format succeeds for certain installed voices, but fails for others. It fails specifically for LH Michael and LH Michelle, and failure varies for certain voice settings defined in the PromptBuilder.

    Read the article

  • How to engineer features for machine learning

    - by Ivo Danihelka
    Do you have any advice or reading on how to engineer features for a machine learning task? Good input features are important even for a neural network. The chosen features will affect the needed number of hidden neurons and the needed number of training examples. The following is an example problem, but I'm interested in feature engineering in general. A motivating example: what would be a good input when looking at a puzzle (e.g., 15-puzzle or Sokoban)? Would it be possible to recognize which of two states is closer to the goal?
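
    To make the 15-puzzle case concrete, here is a rough sketch of hand-engineered state features in Python; the board encoding and the particular feature choices are assumptions for illustration, not something taken from the question.

        # Sketch: features for a 4x4 sliding-puzzle state.
        # A board is a tuple of 16 ints, 0 = blank, goal = (1, 2, ..., 15, 0).
        GOAL = tuple(range(1, 16)) + (0,)

        def features(board):
            # How many tiles sit on the wrong square.
            misplaced = sum(1 for i, tile in enumerate(board)
                            if tile != 0 and tile != GOAL[i])
            # Total Manhattan distance of every tile from its goal square.
            manhattan = 0
            for i, tile in enumerate(board):
                if tile == 0:
                    continue
                goal_index = tile - 1
                manhattan += abs(i // 4 - goal_index // 4) + abs(i % 4 - goal_index % 4)
            blank = board.index(0)
            return [misplaced, manhattan, blank // 4, blank % 4]

        # "Which of two states is closer to the goal?" then becomes a
        # comparison (or a classifier) over two such feature vectors.
        solved = GOAL
        almost = (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 15, 14, 0)
        print(features(solved), features(almost))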

    Read the article

  • Issue binding Image Source dependency property

    - by Archana R
    Hello, I have created a custom control for ImageButton as:

        <Style TargetType="{x:Type Button}">
          <Setter Property="Template">
            <Setter.Value>
              <ControlTemplate TargetType="{x:Type Local:ImageButton}">
                <StackPanel Height="Auto" Orientation="Horizontal">
                  <Image Margin="0,0,3,0" Source="{Binding ImageSource}" />
                  <TextBlock Text="{TemplateBinding Content}" />
                </StackPanel>
              </ControlTemplate>
            </Setter.Value>
          </Setter>
        </Style>

    The ImageButton class looks like:

        public class ImageButton : Button
        {
            public ImageButton() : base() { }

            public ImageSource ImageSource
            {
                get { return base.GetValue(ImageSourceProperty) as ImageSource; }
                set { base.SetValue(ImageSourceProperty, value); }
            }

            public static readonly DependencyProperty ImageSourceProperty =
                DependencyProperty.Register("Source", typeof(ImageSource), typeof(ImageButton));
        }

    However, I'm not able to bind the ImageSource to the image as follows (this code is in the UI folder and the image is in the Resource folder):

        <Local:ImageButton x:Name="buttonBrowse1" Width="100" Margin="10,0,10,0"
                           Content="Browse ..." ImageSource="../Resources/BrowseFolder.bmp"/>

    But if I take a simple Image, it gets displayed when the same source is specified. Can anyone tell me what should be done?

    Read the article

  • Why doesn't SetNotifyWindowMessage() call my WndProc()?

    - by manuel
    I'm using WinForms, and I'm trying to get SetNotifyWindowMessage() to call the WndProc, but it does not do so. The function call:

        HRESULT initSAPI(HWND hWnd)
        {
            ...
            if (FAILED(g_cpRecoCtxt->SetNotifyWindowMessage(hWnd, WM_RECOEVENT, 0, 0)))
                MessageBoxW(hWnd, L"Error sending window message",
                            L"SAPI Initialization Error", 0);
            ...
        }

    The WndProc:

        LRESULT WndProc(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam)
        {
            switch (message)
            {
            case WM_RECOEVENT:
                ProcessRecoEvent(hWnd);
                break;
            default:
                return DefWindowProc(hWnd, message, wParam, lParam);
            }
            return 0;
        }

    Note: initSAPI() is called on a mouse click event.

    Read the article

  • Improving the efficiency of Kinect for Windows DTWGestureRecognition Application

    - by Ray
    Currently I am using the DTWGestureRecognition open source tool for Kinect SDK v1.5. I have recorded a few gestures and use them to navigate through Windows 7. I also have implemented voice control for simple things such as opening PowerPoint, Chrome, etc. My main issue is that the application uses quite a bit of my CPU power which causes it to become slow. During gestures and voice commands, the CPU usage sometimes spikes to 80-90%, which causes the application to be unresponsive for a few seconds. I am running it on a 64 bit Windows 7 machine with an i5 processor and 8 GB of RAM. I was wondering if anyone with any experience using this tool or Kinect in general has made it more efficient and less performance hogging. Right now I removed sections which display the RGB video and the Depth video but even doing that did not make a big impact. Any help is appreciated, thanks!

    Read the article

  • JQuery - Fade in new Background Image?

    - by JasonS
    Hi, I think I may be trying something that isn't possible. I was recently tasked to create an HTML version of a Flash site. On the Flash site, when you click on the body the image changes. I have done this with jQuery. It's brilliant! However... it isn't "flash" enough. The powers that be want a fade effect between images. This is where I have come completely undone. This is how my script works at the moment. Images are stored in a div called photos, which is hidden:

        <div id="photos">
          <img src="image.jpg" alt="Some caption" style="#page-bg-color" />
        </div>

    When the page is loaded, jQuery checks to see if the first image is loaded. If it isn't, then a loading symbol spins. When the image is loaded, it becomes the body background:

        $("body").css('background', 'url(' + photos[currentImage]["url"] + ') no-repeat 50% 50% fixed ' + photos[currentImage]["background"]);

    I have tried the following. I have tried animate; however, this doesn't work. I have tried this plugin: http://plugins.jquery.com/project/bgImageTransition - this doesn't work either. It appears to clone the tag and do something. I assume it isn't working because you cannot clone the body tag, or it is really old and is no longer compatible with the current version of jQuery. I fear that I have coded my way down a dead end. Does anybody know how I could make this work?

    Read the article

  • Drawing an image is completely out

    - by ct2k7
    Hi, I'm using the code below to let a user "draw" their signature on a UIView component in my app:

        UIGraphicsBeginImageContext(signature.frame.size);
        [drawImage.image drawInRect:CGRectMake(0, 0, signature.frame.size.width, signature.frame.size.height)];
        CGContextSetLineCap(UIGraphicsGetCurrentContext(), kCGLineCapRound);
        CGContextSetLineWidth(UIGraphicsGetCurrentContext(), 5.0);
        CGContextSetRGBStrokeColor(UIGraphicsGetCurrentContext(), 1.0, 0.0, 0.0, 1.0);
        CGContextMoveToPoint(UIGraphicsGetCurrentContext(), lastPoint.x, lastPoint.y);
        CGContextAddLineToPoint(UIGraphicsGetCurrentContext(), lastPoint.x, lastPoint.y);
        CGContextStrokePath(UIGraphicsGetCurrentContext());
        CGContextFlush(UIGraphicsGetCurrentContext());
        drawImage.image = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

    The issue I am having is that not only is the drawing out of line with the pointer when used in the simulator, but the image also goes outside of the designated UIView component and gets smaller/more out of focus the more I draw, as you can see in this image: And after a few lines, showing the boundaries of the exact area where I can draw: Any ideas on what's happening here? lastPoint.y is defined as:

        lastPoint.y -= 20;

    Any ideas on what on earth is happening here?

    Read the article

  • Image upload - Latin chars problem

    - by Holian
    Dear Gods! I use this script to upload images to the server:

        <?php
        if (($_FILES["image_upload_box"]["type"] == "image/jpeg" || $_FILES["image_upload_box"]["type"] == "image/pjpeg")
            && ($_FILES["image_upload_box"]["size"] < 2000000)) {

            $max_upload_width = 450;
            $max_upload_height = 450;

            if (isset($_REQUEST['max_width_box']) and $_REQUEST['max_width_box'] != '' and $_REQUEST['max_width_box'] <= $max_upload_width) {
                $max_upload_width = $_REQUEST['max_width_box'];
            }
            if (isset($_REQUEST['max_height_box']) and $_REQUEST['max_height_box'] != '' and $_REQUEST['max_height_box'] <= $max_upload_height) {
                $max_upload_height = $_REQUEST['max_height_box'];
            }

            if ($_FILES["image_upload_box"]["type"] == "image/jpeg" || $_FILES["image_upload_box"]["type"] == "image/pjpeg") {
                $image_source = imagecreatefromjpeg($_FILES["image_upload_box"]["tmp_name"]);
            }

            $remote_file = $directory."/".$_FILES["image_upload_box"]["name"];
            imagejpeg($image_source, $remote_file, 100);
            chmod($remote_file, 0644);

            list($image_width, $image_height) = getimagesize($remote_file);
            if ($image_width > $max_upload_width || $image_height > $max_upload_height) {
                $proportions = $image_width / $image_height;
                if ($image_width > $image_height) {
                    $new_width = $max_upload_width;
                    $new_height = round($max_upload_width / $proportions);
                } else {
                    $new_height = $max_upload_height;
                    $new_width = round($max_upload_height * $proportions);
                }
                $new_image = imagecreatetruecolor($new_width, $new_height);
                $image_source = imagecreatefromjpeg($remote_file);
                imagecopyresampled($new_image, $image_source, 0, 0, 0, 0, $new_width, $new_height, $image_width, $image_height);
                imagejpeg($new_image, $remote_file, 100);
                imagedestroy($new_image);
            }
            imagedestroy($image_source);
        } else {
            // something....
        }
        ?>

    This works well until I upload a photo with Latin characters in the filename. For example, the filename kék hegyek.jpg becomes KA©k hegyek.jpg after upload. How can I solve this? Thank you
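
    The usual cure is to normalize the uploaded filename to plain ASCII (or generate your own name) before writing it to disk, instead of trusting the client's bytes; in PHP this is typically done with iconv('UTF-8', 'ASCII//TRANSLIT', $name) plus a regex. A minimal sketch of the same idea in Python (an illustration of the approach, not the poster's upload script; the function name is a placeholder):

        # Sketch: turn "kék hegyek.jpg" into a safe ASCII filename before saving.
        import re
        import unicodedata

        def safe_filename(name):
            # Decompose accented characters (é -> e + combining accent) ...
            decomposed = unicodedata.normalize("NFKD", name)
            # ... then drop the combining marks, keeping only ASCII bytes.
            ascii_only = decomposed.encode("ascii", "ignore").decode("ascii")
            # Replace anything else suspicious with underscores.
            return re.sub(r"[^A-Za-z0-9._-]+", "_", ascii_only)

        print(safe_filename("kék hegyek.jpg"))   # -> kek_hegyek.jpg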

    Read the article

  • Implementing tracing gestures on iPhone

    - by bmoeskau
    I'd like to create an iPhone app that supports tracing of arbitrary shapes using your finger (with accuracy detection). I have seen references to an Apple sample app called "GestureMatch" that supposedly implemented exactly that, but it was removed from the SDK at some point and I cannot find the source anywhere via Google. Does anyone know of a current official sample that demonstrates tracing like this? Or any solid suggestions on other resources to look at? I've done some iPhone programming, but not really anything with the graphics APIs or custom handling of touch gestures, so I'm not sure where to start.

    Read the article

  • Calculating probability that a string has been randomized? - Python

    - by RadiantHex
    Hi folks, this is correlated to a question I asked earlier (question). I have a list of manually created strings such as:

        lucy87
        gordan_king
        fancy_unicorn77
        joplucky_kanga90
        base_belong_to_narwhals

    and a list of randomized strings:

        johnkdf
        pancake90kgjd
        fancy_jagookfk
        manhattanljg

    What gives away that the last set of strings is randomized is that they contain sequences such as 'kjg', 'jgf', 'lkd', ... . Any clever way I could separate the strings that contain these apparently randomized sequences from the crowd? I guess that this plays a lot on the fact that certain characters are more likely to be placed next to others (e.g. 'co', 'ka', 'ja', ...). Any ideas on this one? Kylotan mentioned Reverend, but I am not sure if it can be used for such a purpose. Help would be much appreciated!
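
    The "certain characters are more likely to be placed next to others" intuition is exactly what a character-bigram model captures: score each string by how probable its adjacent letter pairs are under a model trained on real names, then rank (or threshold) the scores. A rough sketch in Python follows; the tiny training list is an assumption pulled from the question's own examples, and a real corpus of genuine names would be needed in practice.

        # Sketch: rank strings by average letter-bigram log-probability under a
        # Laplace-smoothed model; the most negative scores look the most random.
        import math
        import re
        from collections import Counter

        TRAINING = ["lucy", "gordan", "king", "fancy", "unicorn", "lucky",
                    "kanga", "base", "belong", "narwhals", "pancake",
                    "manhattan", "john"]

        def letter_bigrams(s):
            # Look only at runs of letters, ignoring digits and underscores.
            return [chunk[i:i + 2]
                    for chunk in re.findall(r"[a-z]+", s.lower())
                    for i in range(len(chunk) - 1)]

        counts = Counter(b for word in TRAINING for b in letter_bigrams(word))
        total = sum(counts.values())
        vocab = 26 * 26

        def avg_logprob(s):
            grams = letter_bigrams(s)
            if not grams:
                return 0.0
            return sum(math.log((counts[b] + 1) / (total + vocab))
                       for b in grams) / len(grams)

        names = ["lucy87", "gordan_king", "fancy_unicorn77",
                 "johnkdf", "pancake90kgjd", "manhattanljg"]
        for name in sorted(names, key=avg_logprob):
            print(round(avg_logprob(name), 2), name)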

    Read the article

  • How to have a UISwipeGestureRecognizer AND UIPanGestureRecognizer work on the same view

    - by Shizam
    How would you set up the gesture recognizers so that you could have a UISwipeGestureRecognizer and a UIPanGestureRecognizer work at the same time? Such that if you touch and move quickly (a quick swipe) it detects the gesture as a swipe, but if you touch and then move (a short delay between touch and move) it detects it as a pan? I've tried various permutations of requireGestureRecognizerToFail and that didn't help exactly: it made it so that if the swipe gesture was set to left, then my pan gesture would work up, down and right, but any movement to the left was detected by the swipe gesture.

    Read the article

  • Flex 3 - Image flickers at first load

    - by BS_C3
    Hello Community! I have an application with different components that are accessible through a ViewStack in the main application. The main application looks like this:

        <Application>
          <Viewstack>
            <myComponent1/>
            <myComponent2/>
            <myComponent3/>
            ...
          </Viewstack>
        </Application>

    In myComponent1, I have a HorizontalList where the user can select a product. In myComponent2, I have two containers inside the component: a left container with a larger image of the product selected in myComponent1, and a right container with all the characteristics of the product. Both containers have an embedded background image. When I select a product in myComponent1, the application displays myComponent2. When the component is displayed, I first see the page without the large image of the product, then both containers flicker and the product image is displayed. How could I avoid this flickering? It's really annoying _< Thanks in advance for your help =) Regards. BS_C3

    Read the article
