Search Results

Search found 19745 results on 790 pages for 'touch event'.

Page 19/790 | < Previous Page | 15 16 17 18 19 20 21 22 23 24 25 26  | Next Page >

  • C# event or delegate or other solution?

    - by user295734
    Looking for some help, programming ideas, or maybe a pattern that would help. I have an application that needs to fire a lot of events sequentially; it could be up to 100 or more unique events, and it will be dynamic depending on the situation. I'm trying to find the best practice for doing this. My main idea right now is to create a list of objects, iterate through them, and fire each event. This seems wrong, or bad practice. Or maybe have one object and pass a list or params into one event? Or am I missing some feature in .NET that I could be using or implementing?
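
    A minimal sketch of the list-and-iterate idea described above, assuming each step can be modelled as an Action<string> (the EventSequence class and the string payload are hypothetical, purely for illustration):

        using System;
        using System.Collections.Generic;

        public class EventSequence
        {
            // Each entry is one step to fire, kept in registration order.
            private readonly List<Action<string>> steps = new List<Action<string>>();

            public void Add(Action<string> step)
            {
                steps.Add(step);
            }

            public void FireAll(string context)
            {
                // Fire each step sequentially; a multicast delegate (built with +=)
                // behaves the same way but cannot be inspected or reordered as easily.
                foreach (var step in steps)
                {
                    step(context);
                }
            }
        }

    Whether this beats a single multicast delegate mostly depends on whether the steps need to be added, removed, or reordered at run time.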

    Read the article

  • Javascript Prototype Best Practice Event Handlers

    - by nahum
    Hi, this question is more a request for best practice. Sometimes when I'm building a complete Ajax application I add elements dynamically. For example, when adding a list of items, I do something like: var template = new Template("<li id='list#{id}'>#{value}</li>"); var arrayTemplate = []; arrayOfItem.each(function(item, index){ arrayTemplate.push(template.evaluate({ id : index, value : item })); }); After this there are two options: add the list via "update" or "insert": $("elementToUpdate").update("<ul>" + arrayTemplate.join("") + "</ul>"); The question is how I can add the event handler without repeating the loop over the array, because if you try to add an event before the update or insert you will get an error, since the element isn't in the DOM yet. So what I'm doing for now, after the insert or update, is: arrayOfItem.each(function(item, index){ $("list" + index).observe("click", function(){ alert("I see the world"); }); }); So the question is: is there a better way of doing this?

    Read the article

  • C# property in a form class only accessible after the form load event

    - by Spooky2010
    Using VS2008, C#. Howdy. I'm instantiating and calling Form B from Form A. Form B has some custom properties, to allow me to pass it things like SQL adapters and DataSet instances. When I instantiate and show Form B from Form A as a dialog form with a using statement, it all works fine, but I find the properties I pass are not available in Form B until after the Form_Load event has fired. I was under the impression that properties passed to an instantiated class should be available from the constructor, but this is not the case. If I try to access the properties before the Form_Load event I get a null reference exception. Is this correct behavior? It is very annoying. Thanks for any help.
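
    A minimal sketch of one common workaround, not the poster's code (FormB and the DataSet parameter are illustrative names): pass the dependencies through a constructor overload so they exist before any form event fires:

        using System.Data;
        using System.Windows.Forms;

        public partial class FormB : Form
        {
            private readonly DataSet _data;

            public FormB(DataSet data)
            {
                InitializeComponent();
                // The dependency is available here, before Load ever fires.
                _data = data;
            }
        }

        // Caller, roughly as described in the question:
        // using (var dialog = new FormB(myDataSet))
        // {
        //     dialog.ShowDialog();
        // }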

    Read the article

  • Child service not writing to event log

    - by Tommy Fisk
    I am having a very simple (but incredibly frustrating) issue with a parent service and the child service. Let's call the parent service "Service A" and the child "Service B". Both services reside on the same box. Using WCF Storm, when I send Service B a message, I see lots of entries in the event viewer for it. However, if I send a message to Service A, which calls Service B, I only see entries from Service A. So for some reason, the child logs are not written when the parent calls it, but if I call it myself, the logs do indeed show up, so I know it's not a problem with the logging or anything like that. Here is what I am using to write to the event log. // in some random static class private static EventLog el = new EventLog(); el.WriteEntry("In " + location + ", " + message); // params passed in Does anyone know why this is happening? And more importantly what I can do about it?
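
    One thing worth checking, sketched here under the assumption that the child service runs under a different identity when the parent calls it, is to give the EventLog an explicit, registered source instead of relying on a default EventLog instance:

        using System.Diagnostics;

        public static class ServiceLogger
        {
            private const string Source = "ServiceB";   // hypothetical source name
            private const string LogName = "Application";

            public static void Write(string location, string message)
            {
                // Creating a source needs admin rights, so it is normally done
                // once at install time rather than on every call.
                if (!EventLog.SourceExists(Source))
                {
                    EventLog.CreateEventSource(Source, LogName);
                }
                EventLog.WriteEntry(Source, "In " + location + ", " + message);
            }
        }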

    Read the article

  • Google Maps event problem with Flex ActionScript

    - by DEH
    I am able to render a Google map on a Flex canvas. I create the map using the code below and then place markers on it in the onMapReady method (not shown): var map:com.google.maps.Map=new com.google.maps.Map(); map.id="map"; map.key="bla bla"; _mapCanvas.addChild(map); map.addEventListener(MapEvent.MAP_READY,onMapReady); It all works fine. However, if I remove the map and then set _mapCanvas to null, then run exactly the same code again, the onMapReady event does not fire. It is weird, but once a map has been created and deleted, the onMapReady event never seems to fire again. Anyone got any ideas? Thanks.

    Read the article

  • Basic Qt event handling / threading questions?

    - by umanga
    Greetings, I am new to Qt (4.6) and have some basic questions regarding its event mechanism. I come from a Swing background, so I am trying to compare it with Qt. 1) Does the event-processing loop run in a separate thread (like the EventDispatch thread in Swing)? 2) If we open several QMainWindow instances, do they run in several threads? 3) What's the best way to run an intensive process in a separate thread (like SwingWorker in Swing)? 4) If the intensive process runs in a separate thread, is it possible to call UI methods like update() and repaint() from that process? Thanks in advance.

    Read the article

  • WPF UserControl event called only once?

    - by 742
    Hi everyone, I need to bind, two-way, a property of a user control to a property of a containing user control. I also need to set a default value for the property from code in the child (it cannot be done easily from XAML tags). If I call my code from the child constructor, the value is set in the parent but the change callback routine is not triggered (my understanding is that the parent doesn't exist yet at the time the child is created). My current workaround is to catch the child's Loaded event and call the code from the handler. However, as Loaded is called more than once, I need to set a flag so that the property is set only the first time. I don't like this way, but I don't know if there is a single-shot event that could be used, or if this can be done otherwise. Any feedback based on your experience?
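
    One way to avoid the flag, a sketch assuming the initialization only has to run once per control lifetime, is to detach the Loaded handler from inside itself so it acts as a single-shot event (ChildControl is a hypothetical name):

        using System.Windows;
        using System.Windows.Controls;

        public partial class ChildControl : UserControl
        {
            public ChildControl()
            {
                InitializeComponent();
                Loaded += OnFirstLoaded;
            }

            private void OnFirstLoaded(object sender, RoutedEventArgs e)
            {
                // Detach immediately so later Loaded events (e.g. after the control
                // is re-added to the visual tree) do not repeat the work.
                Loaded -= OnFirstLoaded;
                // Set the default value of the bound property here, once the
                // containing control and its bindings exist.
            }
        }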

    Read the article

  • How does Windows 7 determine that a system was not shut down correctly (Kernel-Power Event ID 41)?

    - by Erik
    I have a strange situation where Windows 7 thinks it was not shut down correctly and gives me a warning in the system event log like this: http://support.microsoft.com/kb/2028504/de What the KB article does not tell is how Windows actually determines that situation. Does it parse its own system event log after reboot? Or where does it get that information from? I am currently investigating an issue where I believe the system fails to write the system event log correctly (it stops having entries, although other logs, like the application event log, still have entries), and after a reboot Windows thinks it was not shut down correctly. Does anybody have any experience with this? And can you confirm that Windows determines whether the previous system shutdown was correct by parsing its own system event log on startup?

    Read the article

  • Why would an iPod Touch and PS3 respond to netdiscover as 0.0.0.0?

    - by Iszi
    Only slightly related to this question, because it was discovered in the same scan: can someone explain how this happened? When running netdiscover on my home network, my iPod Touch (identified by its DHCP-reserved IP address and MAC address) responded both to its own IP address on our 10.x.x.x subnet and to 0.0.0.0. Why would this have happened? EDIT: After letting netdiscover run a while longer, it seems one of the PS3s on the network is also answering to 0.0.0.0. Why is this?

    Read the article

  • DataGridView's SelectionChanged event firing twice on data binding even after removing the event binding

    - by Shantanu Gupta
    This code triggers the SelectionChanged event twice; how can I prevent that? Currently I'm using a flag or the Focused property to prevent it, but what is the actual way? I am using it on WinForms. EDIT: My mistake in writing the question; here is the correct code I wanted to ask about: private void frmGuestInfo_Load(object sender, EventArgs e) { this.dgvGuestInfo.SelectionChanged -= new System.EventHandler(this.dgvGuestInfo_SelectionChanged); dgvGuestInfo.DataSource=dsFillControls.Tables["tblName"]; this.dgvGuestInfo.SelectionChanged += new System.EventHandler(this.dgvGuestInfo_SelectionChanged); } private void dgvGuestInfo_SelectionChanged(object sender, EventArgs e) { // this handler is raised twice; I was expecting that it would not be raised }
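
    A variation that is sometimes suggested, sketched here under the assumption that the extra events are raised while the grid is still applying the binding, is to attach SelectionChanged only after DataBindingComplete has fired, instead of detaching and reattaching around the DataSource assignment (the method names reuse those from the question; only the wiring differs):

        private void frmGuestInfo_Load(object sender, EventArgs e)
        {
            dgvGuestInfo.DataBindingComplete += dgvGuestInfo_DataBindingComplete;
            dgvGuestInfo.DataSource = dsFillControls.Tables["tblName"];
        }

        private void dgvGuestInfo_DataBindingComplete(object sender, DataGridViewBindingCompleteEventArgs e)
        {
            // Attach once, after the grid has finished applying the binding,
            // which is when the spurious SelectionChanged events are raised.
            dgvGuestInfo.DataBindingComplete -= dgvGuestInfo_DataBindingComplete;
            dgvGuestInfo.SelectionChanged += dgvGuestInfo_SelectionChanged;
        }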

    Read the article

  • Event Capturing vs Event Bubbling

    - by Rajat
    I just wanted to get the general consensus on which is the better mode of event delegation in JS, bubbling or capturing. I understand that, depending on the particular use case, one might want to use the capturing phase over bubbling or vice versa, but I want to understand which delegation mode is preferred for most general cases and why (to me it seems to be bubbling). Or, to put it in other words, is there a reason behind the W3C addEventListener implementation favoring the bubbling mode? (Capturing is initiated only when you specify the 3rd parameter and it's true; however, you can omit that 3rd param and bubbling mode kicks in.) I looked up jQuery's bind function to get an answer to my question and it seems it doesn't even support events in the capture phase (it seems to me because IE does not support capturing mode). So it looks like bubbling mode is the default choice, but why?

    Read the article

  • Updating games for iOS 6 and new iPhone/iPod Touch

    - by SundayMonday
    Say I have a game that runs full-screen on the iPhone 4S and older devices. The balance of the game is just right for the 480 x 320 screen and its associated aspect ratio. Now I want to update my game to run full-screen on the new iPhone/iPod Touch, where the aspect ratio of the screen is different. It seems like this can be challenging for some games in terms of maintaining the balance. For example, if the extra screen space were just tacked onto the right side of Jet Pack Joyride, the balance would be thrown off since the user now has more time to see and react to obstacles. It could also be challenging in terms of code maintenance. Perhaps Jet Pack Joyride would slightly increase the speed of approaching obstacles when the game is played on newer devices; however, this quickly becomes messy when extra conditional statements are added all over the code. One solution is to have some parameters that are set in one place at start-up depending on the device type. What are some strategies for updating iOS games to run on the new iPhone and iPod Touch?

    Read the article

  • Efficient coding in Visual Studio (or another IDE), with touch typing

    - by cheeesus
    Moving the cursor to another position in the code is one of the most frequent actions when coding. I don't write my programs from beginning to end, like a letter. However, moving the cursor requires me to move my right hand to the arrow keys or to the mouse, which feels like an interruption to my writing rhythm, since I'm touch typing. I want my hands to rest on the keyboard. It's difficult to explain what I mean, but I think every coder who touch types knows what I mean. I tried many things, like defining some shortcuts as surrogate arrow keys (Shift+Alt+J, K, L, I), or buying a keyboard with a TrackPoint, trackpad, or trackball on it, but I have not yet found a satisfying solution to the problem. What is the best solution you know of, regardless of which IDE you use? Edit: Thank you for your answers. I am using a lot of shortcut keys, but I think using a Vim plugin in Visual Studio would interfere too much with the shortcuts I am used to. Also, I have a keyboard with a built-in mouse, but I'm still looking for a better solution.

    Read the article

  • Touch detection of non-rectangular sprites (cocos2d)

    - by hogni89
    What is the correct way to implement a non-rectangular sprite in cocos2d? I am working on a jigsaw puzzle, and therefore our sprites have some strange shapes (jigsaw puzzle pieces). As of now, we have implemented the detection this way: - (void)selectSpriteForTouch:(CGPoint)touchLocation { CCSprite * newSprite = nil; // Loop array of sprites for (CCSprite *sprite in movableSprites) { // Check if sprite is hit. // TODO: Swap if with something better. if (CGRectContainsPoint(sprite.boundingBox, touchLocation)) { newSprite = sprite; break; } } if (newSprite != selSprite) { // Move along, nothing to see here // Not the problem } } - (BOOL)ccTouchBegan:(UITouch *)touch withEvent:(UIEvent *)event { CGPoint touchLocation = [self convertTouchToNodeSpace:touch]; [self selectSpriteForTouch:touchLocation]; return TRUE; } I know that the problem is in the use of sprite.boundingBox. Is there a better way of implementing this, or is it a limitation when using sprites based on .png files? If so, how should I proceed? I'm new to iPhone and game development :D

    Read the article

  • Rhythmbox in Ubuntu 12.04 and iPod touch

    - by leousa
    I don't know if anyone else is experiencing something similar to this. I have an iPod touch bought 3.5 years ago; it received its last software update a year ago. After that update I had no problems syncing it in Ubuntu 11.04 and 11.10 (two different laptops). Transfer of music albums was flawless with Banshee, and it would even convert files to the right format automatically. Now, the same iPod touch, without any further software or firmware update, does not work with Rhythmbox in Ubuntu 12.04. The device is mounted and recognized there, but when you drag and drop an album onto the device it does nothing. Before you tell me to try it: yes, I have tried with Banshee and gtkpod. The first brings up an error message saying that the iPod does not support MP3 files, and gtkpod simply crashes all the time. The result is the same. What is going on here? Why does a device that worked before not work now in 12.04? I purchased some music in the Ubuntu One store and would love to have it transferred to my iPod. Please, no links to outdated online manuals with older versions of Rhythmbox or Banshee (I've read them all). And again, nothing is wrong with the iPod, as it worked in 11.04 and 11.10 and has not been updated since then. I would strongly appreciate any help provided. Thank you.

    Read the article

  • Ubuntu for Phones / Touch vs Android, iOS and BlackBerry OS

    - by Ome Noes
    Currently I have an LG Google Nexus 4 with lots of issues because of the latest Android 4.3 update. Since the update my battery drains within 7 hours when the phone is in standby/idle, and even faster when I use it normally! Before the Nexus 4 I had an iPhone, but got sick of iOS because for me it's too much of a closed operating system, and I dislike having to work with either Windows or iTunes. At this point neither Google nor LG is willing to provide me (and all the others that have similar Nexus 4 problems) with a solution or even a reaction... Also, I'm not very fond of the idea that the NSA (and maybe others) can and is currently monitoring millions of Android, iOS and BlackBerry OS devices all over the world. Since I've been using Ubuntu very happily for almost 5 years now, I see Ubuntu for Phones / Touch as the only remedy for all this BS. Please be so kind as to let me know when you will have a fully functioning version of Ubuntu for Phones / Touch ready for consumer use. I'm really sad that the Ubuntu Edge campaign didn't work out and hope to see lots and lots of future smartphones outfitted with Ubuntu as soon as possible! Keep up the good work!

    Read the article

  • Rotating a sprite to touch

    - by user1691659
    I know this has been asked before, and I've looked through people's questions and answers all over the internet, but I just can't get it to work. I'm sure it's down to my inexperience. I've got an arrow attached to a sprite; the sprite appears when touched, but I want the arrow to point where the sprite will launch. I have had the arrow moving, but it was very erratic and not pointing as it should. It's not moving at the moment, but this is what I have: - (void)update:(ccTime)delta { arrow.rotation = dialRotation; } -(void) ccTouchMoved:(UITouch*)touch withEvent:(UIEvent *)event { pt1 = [self convertTouchToNodeSpace:touch]; CGPoint firstVector = ccpSub(pt, arrow.position); CGFloat firstRotateAngle = -ccpToAngle(firstVector); CGFloat previousTouch = CC_RADIANS_TO_DEGREES(firstRotateAngle); CGPoint vector = ccpSub(pt1, arrow.position); CGFloat rotateAngle = -ccpToAngle(vector); CGFloat currentTouch = CC_RADIANS_TO_DEGREES(rotateAngle); dialRotation += currentTouch - previousTouch; } I got pt from ccTouchBegan the same way as pt1. Thanks

    Read the article

  • C# - Using a delegate to raise an event from one class to another

    - by Bill Osuch
    Even though this may be a relatively common task for many people, I’ve had to show it to enough new developers that I figured I’d immortalize it… MSDN says “Events enable a class or object to notify other classes or objects when something of interest occurs. The class that sends (or raises) the event is called the publisher and the classes that receive (or handle) the event are called subscribers.” Any time you add a button to a Windows Form or Web app, you can subscribe to the OnClick event, and you can also create your own event handlers to pass events between classes. Here I’ll show you how to raise an event from a separate class to a console application (or Windows Form). First, create a console app project (you could create a Windows Form, but this is easier for this demo). Add a class file called MyEvent.cs (it doesn’t really need to be a separate file, this is just for clarity) with the following code: public delegate void MyHandler1(object sender, MyEvent e); public class MyEvent : EventArgs {     public string message; } Your event can have whatever public properties you like; here we’re just got a single string. Next, add a class file called WorkerDLL.cs; this will simulate the class that would be doing all the work in the project. Add the following code: class WorkerDLL {     public event MyHandler1 Event1;     public WorkerDLL()     {     }     public void DoWork()     {         FireEvent("From Worker: Step 1");         FireEvent("From Worker: Step 5");         FireEvent("From Worker: Step 10");     }     private void FireEvent(string message)     {         MyEvent e1 = new MyEvent();         e1.message = message;         if (Event1 != null)         {             Event1(this, e1);         }         e1 = null;     } } Notice that the FireEvent method creates an instance of the MyEvent class and passes it to the Event1 handler (which we’ll create in just a second). Finally, add the following code to Program.cs: static void Main(string[] args) {     Program p = new Program(args); } public Program(string[] args) {     Console.WriteLine("From Console: Creating DLL");     WorkerDLL wd = new WorkerDLL();     Console.WriteLine("From Console: Wiring up event handler");     WireEventHandlers(wd);     Console.WriteLine("From Console: Doing the work");     wd.DoWork();     Console.WriteLine("From Console: Done - press any key to finish.");     Console.ReadLine(); } private void WireEventHandlers(WorkerDLL wd) {     MyHandler1 handler = new MyHandler1(OnHandler1);     wd.Event1 += handler; } public void OnHandler1(object sender, MyEvent e) {     Console.WriteLine(e.message); } The OnHandler1 method is called any time the event handler “hears” an event matching the specified signature – you could have it log to a file, write to a database, etc. Run the app in debug mode and you should see output like this: You can distinctly see which lines were written by the console application itself (Program.cs) and which were written by the worker class (WorkerDLL.cs). Technorati Tags: Csharp
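
    As a side note (not from the original article), the same publisher/subscriber wiring can be written without declaring a custom delegate type by using the built-in generic EventHandler<TEventArgs>; a minimal sketch:

        using System;

        public class MyEvent : EventArgs
        {
            public string Message { get; set; }
        }

        class WorkerDLL
        {
            // EventHandler<MyEvent> replaces the hand-written MyHandler1 delegate.
            public event EventHandler<MyEvent> Event1;

            public void DoWork()
            {
                Event1?.Invoke(this, new MyEvent { Message = "From Worker: Step 1" });
            }
        }

        // Subscriber:
        // var wd = new WorkerDLL();
        // wd.Event1 += (sender, e) => Console.WriteLine(e.Message);
        // wd.DoWork();

    The null-conditional Invoke requires C# 6 or later; on older compilers the explicit null check from the article applies.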

    Read the article

  • Stop event bubbling in Javascript

    - by Kartik Rao
    I have an HTML structure like: <div onmouseover="enable_dropdown(1);" onmouseout="disable_dropdown(1);"> My Groups <a href="#">(view all)</a> <ul> <li><strong>Group Name 1</strong></li> <li><strong>Longer Group Name 2</strong></li> <li><strong>Longer Group Name 3</strong></li> </ul> <hr /> Featured Groups <a href="#">(view all)</a> <ul> <li><strong>Group Name 1</strong></li> <li><strong>Longer Group Name 2</strong></li> <li><strong>Longer Group Name 3</strong></li> </ul> </div> I want the onmouseout event to be triggered only by the main div, not by the 'a', 'ul' or 'li' tags within the div! My onmouseout function is as follows: function disable_dropdown(d) { document.getElementById(d).style.visibility = "hidden"; } Can someone please tell me how I can stop the event from bubbling up? I tried the solutions (stopPropagation etc.) provided on other sites, but I'm not sure how to implement them in this context. Any help will be appreciated. Thanks a lot!

    Read the article

  • MKMapView loading all annotation views at once (including those that are outside the current rect)

    - by jmans
    Has anyone else run into this problem? Here's the code: - (MKAnnotationView *)mapView:(MKMapView *)mapView viewForAnnotation:(WWMapAnnotation *)annotation { // Only return an Annotation view for the placemarks. Ignore for the current location--the iPhone SDK will place a blue ball there. NSLog(@"Request for annotation view"); if ([annotation isKindOfClass:[WWMapAnnotation class]]){ MKPinAnnotationView *browse_map_annot_view = (MKPinAnnotationView *)[mapView dequeueReusableAnnotationViewWithIdentifier:@"BrowseMapAnnot"]; if (!browse_map_annot_view) { browse_map_annot_view = [[[MKPinAnnotationView alloc] initWithAnnotation:annotation reuseIdentifier:@"BrowseMapAnnot"] autorelease]; NSLog(@"Creating new annotation view"); } else { NSLog(@"Recycling annotation view"); browse_map_annot_view.annotation = annotation; } ... As soon as the view is displayed, I get 2009-08-05 13:12:03.332 xxx[24308:20b] Request for annotation view 2009-08-05 13:12:03.333 xxx[24308:20b] Creating new annotation view 2009-08-05 13:12:03.333 xxx[24308:20b] Request for annotation view 2009-08-05 13:12:03.333 xxx[24308:20b] Creating new annotation view and on and on, for every annotation (~60) I've added. The map (correctly) only displays the two annotations in the current rect. I am setting the region in viewDidLoad: if (center_point.latitude == 0) { center_point.latitude = 35.785098; center_point.longitude = -78.669899; } if (map_span.latitudeDelta == 0) { map_span.latitudeDelta = .001; map_span.longitudeDelta = .001; } map_region.center = center_point; map_region.span = map_span; NSLog(@"Setting initial map center and region"); [browse_map_view setRegion:map_region animated:NO]; The log entry for the region being set is printed to the console before any annotation views are requested. The problem here is that since all of the annotations are being requested at once, [mapView dequeueReusableAnnotationViewWithIdentifier] does nothing, since there are unique MKAnnotationViews for every annotation on the map. This is leading to memory problems for me. One possible issue is that these annotations are clustered in a pretty small space (~1 mile radius). Although the map is zoomed in pretty tight in viewDidLoad (latitude and longitude delta .001), it still loads all of the annotation views at once. Thanks...

    Read the article
