Search Results

Search found 19699 results on 788 pages for 'touch screen'.


  • iPhone drag/drop

    - by Farid
    Trying to get some basic drag/drop functionality happening for an iPhone application. My current code for trying to do this is as follows:

        - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
            UITouch *touch = [touches anyObject];
            CGPoint location = [touch locationInView:self];
            self.center = location;
        }

    This code makes the touched UIView flicker while it follows the touch around. After playing around with it a bit, I also noticed that the UIView seemed to flicker from the 0,0 position on the screen to the currently touched location. Any ideas what I'm doing wrong?
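    A note on the likely cause, with a minimal sketch of the usual fix (assuming the dragged view sits inside a stationary superview): [touch locationInView:self] reports the point in the dragged view's own coordinate space, which shifts every time self.center is reassigned, so the position oscillates between the old and new frames. Asking the superview for the location gives a stable reference frame:

        - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
            UITouch *touch = [touches anyObject];
            // The superview's coordinate space does not move with the
            // dragged view, so the center no longer jumps around.
            self.center = [touch locationInView:self.superview];
        }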


  • Can't write the SysWow64 value to registry with vbscript for Screensaver

    - by Valentein
    Scripts, registries, screen-savers, oh my! I'm trying to use a screen-saver on a Windows XP 64-bit machine which uses a .NET app which makes an interop call relying on some x86 Shockwave DLLs (some Shockwave animation). Everything should be in the %systemroot%\WINNT\SysWOW64 directory. When the timeout for the screensaver occurs, the process should look like this: Screensaver.scr -> .NET app -> Shockwave animation. During installation I want a vbscript to copy Screensaver.scr to the SysWOW64 directory and then point the proper registry key at this file so Windows will launch the screen-saver. The code is something like this:

        Dim sScreenSaver, tScreenSaver
        sScreenSaver = "C:\SourceFiles\bin\ScreenSaver.scr"  'screensaver
        tScreenSaver = "C:\winnt\SysWOW64\"
        Set WshShell = WScript.CreateObject("WScript.Shell")  'script shell to run objects
        Set FSO = CreateObject("Scripting.FileSystemObject")  'file system object
        'copy screensaver
        FSO.CopyFile sScreenSaver, tScreenSaver, True
        'set screen saver
        Dim p1
        p1 = "HKEY_CURRENT_USER\Control Panel\Desktop\"
        WshShell.RegWrite p1 & "SCRNSAVE.EXE", (tScreenSaver & "ScreenSaver.scr")

    After installation, I can verify that the Screensaver exists in the correct directory. (It actually seems to be in both the system32 and the SysWOW64 directories---whether that's the install script or something I did post-install, I'm in the process of verifying.) However, the registry entry is not correct. In both the 32- and 64-bit regedit I see that HKCU\Control Panel\Desktop\SCRNSAVE.EXE is set to: C:\WINNT\system32\Screensaver.scr. This isn't right. The screen-saver won't run from this directory; it only runs from SysWOW64. If I manually edit the registry with regedit to the correct SysWOW64 path, everything works fine. Is this a problem with using the script, or is this a Windows registry redirection or filesystem redirection problem? You'd think this would be simple...
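    If the culprit turns out to be WOW64 registry redirection from a 32-bit script host, one documented workaround, sketched here with the paths from the question, is to ask WMI's StdRegProv for the 64-bit registry view explicitly (alternatively, run the script under the 64-bit cscript.exe):

        Const HKEY_CURRENT_USER = &H80000001

        ' Connect to WMI while explicitly requesting the 64-bit provider,
        ' so the write is not redirected by WOW64.
        Set ctx = CreateObject("WbemScripting.SWbemNamedValueSet")
        ctx.Add "__ProviderArchitecture", 64
        ctx.Add "__RequiredArchitecture", True

        Set locator = CreateObject("WbemScripting.SWbemLocator")
        Set services = locator.ConnectServer("", "root\default", "", "", , , , ctx)
        Set reg = services.Get("StdRegProv")

        Set inParams = reg.Methods_("SetStringValue").InParameters.SpawnInstance_()
        inParams.hDefKey = HKEY_CURRENT_USER
        inParams.sSubKeyName = "Control Panel\Desktop"
        inParams.sValueName = "SCRNSAVE.EXE"
        inParams.sValue = "C:\winnt\SysWOW64\ScreenSaver.scr"
        reg.ExecMethod_ "SetStringValue", inParams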


  • How can I get the correct DisplayMetrics from an AppWidget in Android?

    - by Gary
    I need to determine the screen density at runtime in an Android AppWidget. I've set up an HDPI emulator device (avd). If I set up a regular executable project and insert this code into the onCreate method:

        DisplayMetrics dm = getResources().getDisplayMetrics();
        Log.d("MyTag", "screen density " + dm.densityDpi);

    it outputs "screen density 240" as expected. However, if I set up an AppWidget project and insert this code into the onUpdate method:

        DisplayMetrics dm = context.getResources().getDisplayMetrics();
        Log.d("MyTag", "screen density " + dm.densityDpi);

    it outputs "screen density 160". I noticed, hooking up the debugger, that the mDefaultDisplay member of the Resources object here is null in the AppWidget case. Similarly, if I get a resource at runtime using the Resources object obtained from context.getResources() in the AppWidget, it returns the wrong resource based on screen density. For instance, I have a 60x60px drawable for mdpi and an 80x80px drawable for hdpi. If I get this Drawable object using context.getResources().getDrawable(...), it returns the 60x60 version. Is there any way to correctly deal with resources at runtime from the context of an AppWidget? Thanks!
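    One avenue worth trying, as a sketch rather than a guaranteed fix: query the display through the WindowManager system service instead of relying on the Resources object the widget was handed:

        // Ask the window manager for the real display metrics instead of
        // trusting the Resources object passed to the AppWidget.
        WindowManager wm = (WindowManager) context.getSystemService(Context.WINDOW_SERVICE);
        DisplayMetrics dm = new DisplayMetrics();
        wm.getDefaultDisplay().getMetrics(dm);
        Log.d("MyTag", "screen density " + dm.densityDpi);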


  • Scrolling down to next element via keypress & scrollTo plugin - jQuery

    - by lyrae
    I am using jQuery's scrollTo plugin to scroll up and down my page using the UP and DOWN arrow keys. I have a bunch of divs with class "screen", each inside a wrapper, like so:

        <div class="screen-wrapper">...</div>

    What I am trying to do is, when I press UP or DOWN, scroll the window to the next or previous div with class "screen-wrapper". I have the keypresses taken care of. According to the plugin docs, to scroll a window you use $.scrollTo(...); Here's the code I have:

        $(document).keypress(function(e){
            switch (e.keyCode) {
                case 40: // down
                    n = $('.screen-wrapper').next();
                    $.scrollTo( n, 800 );
                    break;
                case 38: // up
                    break;
                case 37: // left
                    break;
                case 39: // right
                    break;
            }
        });

    And if it helps, here's the HTML. I have a few of these on the page, and essentially I am trying to scroll to the next one by pressing the down arrow:

        <div class='screen-wrapper'>
            <div class='screen'>
                <div class="sections">
                    <ul>
                        <li><img src="images/portfolio/sushii-1.png" /></li>
                        <li><img src="images/portfolio/sushii-2.png" /></li>
                        <li><img src="images/portfolio/sushii-3.png" /></li>
                    </ul>
                </div>
                <div class="next"></div>
                <div class="prev"></div>
            </div>
        </div>

    If needed, I can provide a link to where this is being used, if it'll help someone get a better idea. Edit: I forgot to mention what the real question is. The problem is that it won't scroll down past the first element, as Seth mentioned.
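    For what it's worth, a sketch of one way past this, using the element names from the question: $('.screen-wrapper').next() asks for the sibling after every wrapper at once, so it never advances beyond the first screen. Keeping an index of the current screen sidesteps that (note also that arrow keys reliably fire keydown, not keypress, in most browsers):

        var current = 0;

        $(document).keydown(function (e) {
            var screens = $('.screen-wrapper');
            if (e.keyCode === 40 && current < screens.length - 1) {
                current++;                            // advance to the next screen
                $.scrollTo(screens.eq(current), 800);
            } else if (e.keyCode === 38 && current > 0) {
                current--;                            // back up to the previous screen
                $.scrollTo(screens.eq(current), 800);
            }
        });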


  • Virtual/soft buttons for (Home, Menu, Back, Search) always on top?

    - by Ken
    How can I make an app, or maybe a service, that provides something like the Nexus One touch buttons for the navigation keys (Home, Menu, Back, Search)? The buttons should always be visible, always stay on top, and send the command to the app that's running. Does anyone have ideas or sample code for how to do that?
    * I've seen an app called Smart Taskmanager which detects when you touch the right side of the screen and then slide your finger to the left. So I think it's possible, and with that technique it should be possible to simulate the (Home, Menu, Back, Search) buttons.
    * I've also seen and tested an app which shows a "cracked display" image always on top, so that technique might be useful for keeping the buttons/bitmap on top.
    Showing the buttons, catching the touch event, and sending the event to the active program: that's the part I can't figure out how to do. That's my thought! I hope there is some experienced developer who knows the solution! Regards, Ken
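    The always-on-top half is the feasible part: a view added through the WindowManager with a system-alert window type floats over other apps (this needs the SYSTEM_ALERT_WINDOW permission). A sketch under those assumptions, from inside a Service; note that injecting Home/Back key events into other apps is not something ordinary third-party apps are allowed to do:

        // Requires <uses-permission android:name="android.permission.SYSTEM_ALERT_WINDOW"/>
        WindowManager wm = (WindowManager) getSystemService(Context.WINDOW_SERVICE);
        WindowManager.LayoutParams lp = new WindowManager.LayoutParams(
                WindowManager.LayoutParams.WRAP_CONTENT,
                WindowManager.LayoutParams.WRAP_CONTENT,
                WindowManager.LayoutParams.TYPE_SYSTEM_ALERT,   // draw over other apps
                WindowManager.LayoutParams.FLAG_NOT_FOCUSABLE,  // don't steal key input
                PixelFormat.TRANSLUCENT);
        lp.gravity = Gravity.BOTTOM;
        // R.layout.floating_buttons is a hypothetical layout with the four buttons.
        View buttonBar = View.inflate(this, R.layout.floating_buttons, null);
        wm.addView(buttonBar, lp);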


  • Calling a method in a view controller from a view

    - by Lakshmie
    I have to invoke a method present in a view controller whose reference is available in the view. When I try to call the method like any other method, for some reason the iPhone just ignores the call. Can somebody explain why this happens, and also how I can go about invoking this method? In the view I have this method:

        - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
            NSArray *mySubViews = [self subviews];
            for (UITouch *touch in touches) {
                int i = 0;
                for (; i < [mySubViews count]; i++) {
                    if (CGRectContainsPoint([[mySubViews objectAtIndex:i] frame],
                                            [touch locationInView:self])) {
                        break;
                    }
                }
                if (i < [mySubViews count]) {
                    // viewController is the reference to the View Controller.
                    [viewController pointToSummary:[touch locationInView:self].y];
                    NSLog(@"Helloooooo");
                    break;
                }
            }
        }

    Whenever the touches event is triggered, "Helloooooo" gets printed in the console, but the method before it is simply ignored.
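    A likely explanation, judging from the symptoms alone: in Objective-C, sending a message to nil is a silent no-op, so a call that "does nothing" while the next line still runs usually means the receiver was never set (for example, an outlet or property that was never assigned). A quick way to confirm, just before the call:

        // Messages to nil are silently dropped in Objective-C.
        NSLog(@"viewController = %@", viewController);  // prints (null) if unset
        [viewController pointToSummary:[touch locationInView:self].y];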


  • Malloc to a CGPoint Pointer throwing EXC_BAD_ACCESS when accessing

    - by kdbdallas
    I am trying to use a snippet of code from an Apple programming guide, and I am getting an EXC_BAD_ACCESS when trying to pass a pointer to a function, right after doing a malloc. (For reference: iPhone Application Programming Guide: Event Handling - Listing 3-6.) The code in question is really simple:

        CFMutableDictionaryRef touchBeginPoints;
        UITouch *touch;
        ....
        CGPoint *point = (CGPoint *)CFDictionaryGetValue(touchBeginPoints, touch);
        if (point == NULL) {
            point = (CGPoint *)malloc(sizeof(CGPoint));
            CFDictionarySetValue(touchBeginPoints, touch, point);
        }

    Now when the program goes into the if statement, it assigns the 'output' of malloc into the point variable/pointer. Then when it tries to pass point into the CFDictionarySetValue function, it crashes the application with: Program received signal: "EXC_BAD_ACCESS". Someone suggested not doing the malloc and passing the point var/pointer as &point, but that still gave me an EXC_BAD_ACCESS. What am I (and, it seems, Apple) doing wrong??? Thanks in advance.
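    One thing to verify, as a guess consistent with the crash site: the dictionary itself must exist before CFDictionarySetValue is called on it. An uninitialized CFMutableDictionaryRef is a garbage pointer, and dereferencing it inside CFDictionarySetValue would fault exactly like this. The guide creates it along these lines, once, before any touch handler runs:

        // NULL key/value callbacks are fine here because raw UITouch
        // pointers are used as keys and malloc'd CGPoints as values.
        touchBeginPoints = CFDictionaryCreateMutable(kCFAllocatorDefault, 0, NULL, NULL);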


  • Cocos2D TouchesEnded not allowing me to access sprites?

    - by maiko
    Hey guys! Thanks so much for reading!

        - (void)ccTouchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
            UITouch *touch = [touches anyObject];
            CGPoint location = [[CCDirector sharedDirector] convertToGL:[touch locationInView:touch.view]];
            CGRect myRect = CGRectMake(100, 120, 75, 113);
            int tjx = sprite.position.x;
            if (CGRectContainsPoint(myRect, location)) {
                tjx++;
            }
        }

    For some reason, ccTouchesEnded isn't allowing me to access my "sprite". I also tried to use CGRectMake like so:

        CGRectMake(sprite.position.x, sprite.position.y, sprite.contentSize.width, sprite.contentSize.height)

    But I couldn't access my sprite's position or height. I keep getting "sprite" undeclared, when it is declared in the init method and added to the child. Please help!! I'm sure I'm missing something really simple here.
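    "Undeclared" at compile time in one method but not in init usually means the variable was declared as a local inside init rather than as an instance variable. A sketch of the usual layout (the layer name here is hypothetical):

        // In the @interface: declare sprite as an instance variable so
        // every method of the layer can see it, not just init.
        @interface GameLayer : CCLayer {
            CCSprite *sprite;
        }
        @end

        // In init: assign to the ivar, without redeclaring a local.
        sprite = [CCSprite spriteWithFile:@"sprite.png"];
        [self addChild:sprite];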


  • Getting x/y coordinate of a UITouch...

    - by Tarek
    Hi, I have been trying to get the x/y coordinates from a touch on any iDevice. When getting the touch locations, everything looks OK if the touch is in the middle of the screen. But if I drag my finger to the bottom of the screen, I can only get a y coordinate of 1015; it should be getting to 1023. Same thing for dragging my finger to the top of the screen: I get -6 when it should be 0. I have explicitly set the window and views to an origin of 0,0 and the width and height of the device's screen. Still nothing. I am really lost on what might be going on. Is something shifted? Am I not reading the x/y coordinates properly? Does something need to be transformed or converted? Any help would be much appreciated. T
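    A detail worth checking, offered as a guess: [touch locationInView:someView] reports the point in that view's coordinate space, so if the view is offset or transformed relative to the window, even slightly, the readings at the edges come out shifted, which would explain the -6 at the top and the 1015 at the bottom. Passing nil asks for window coordinates instead:

        - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
            UITouch *touch = [touches anyObject];
            // nil means "in the window's coordinate system", bypassing any
            // offset or transform applied to intermediate views.
            CGPoint windowPoint = [touch locationInView:nil];
            NSLog(@"x: %f y: %f", windowPoint.x, windowPoint.y);
        }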


  • Reloading the model of a TTTableViewController

    - by user341338
    My problem is that I have a Register controller and a Login controller. The Login screen displays either a Login screen or a Logout screen, depending on whether a user is logged in. Now when a user registers, does not close the app, and then goes to the Login screen, it will still display a Login screen, although there is a logged-in user already. This is because the screen is created when the application loads and does not change afterwards. I tried doing this:

        - (id)init {
            if (self = [super init]) {
                [self invalidateModel];
                [self reload];

    but that did not work, since it is only called on the first init. Then I tried:

        - (void)viewDidLoad {
            [self invalidateModel];
            [self reload];
        }

    But that method had the same problem. Then I found this method:

        - (TTNavigationMode)navigationModeForURL:(NSString*)URL;

    with the following options:

        typedef enum {
            TTNavigationModeNone,
            TTNavigationModeCreate,   // a new view controller is created each time
            TTNavigationModeShare,    // a new view controller is created, cached and re-used
            TTNavigationModeModal,    // a new view controller is created and presented modally
            TTNavigationModeExternal, // an external app will be opened
        } TTNavigationMode;

    It seems like TTNavigationModeCreate would be the right thing to use, but I have no clue how to use it. Any help? Thnx.
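    One approach that fits the symptom, sketched without knowing the rest of the app: init and viewDidLoad each run once per controller instance, but viewWillAppear: runs every time the screen is shown, so invalidating the model there forces a fresh query on each visit:

        - (void)viewWillAppear:(BOOL)animated {
            [super viewWillAppear:animated];
            // Re-check the login state every time this screen comes back.
            [self invalidateModel];
            [self reload];
        }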


  • touchesBegan doesn't get detected

    - by Muniraj
    I have a view controller like the following, but touchesBegan doesn't get detected. Can anyone please tell me what is wrong?

        - (id)init {
            if (self = [super init])
                self.view = [[[UIView alloc] initWithFrame:[[UIScreen mainScreen] applicationFrame]] autorelease];
            return self;
        }

        - (void)viewWillAppear:(BOOL)animated {
            overlay = [[[UIImageView alloc] initWithImage:[UIImage imageNamed:@"overlay.png"]] autorelease];
            [self.view addSubview:overlay];
        }

        - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
            // Detect touch anywhere
            UITouch *touch = [touches anyObject];
            // Where is the point touched
            CGPoint point = [touch locationInView:self.view];
            NSLog(@"pointx: %f pointy: %f", point.x, point.y);
            // Was a tab touched, if so, which one...
            if (CGRectContainsPoint(CGRectMake(1, 440, 106, 40), point))
                NSLog(@"tab 1 touched");
            else if (CGRectContainsPoint(CGRectMake(107, 440, 106, 40), point))
                NSLog(@"tab 2 touched");
            else if (CGRectContainsPoint(CGRectMake(214, 440, 106, 40), point))
                NSLog(@"tab 3 touched");
        }
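    One structural suspect, offered as a hedged observation: a UIViewController is expected to create its root view in loadView, not in init; assigning self.view inside init can interact badly with the view-loading machinery. A sketch of the conventional arrangement:

        - (void)loadView {
            // UIKit calls loadView lazily when the view is first needed.
            UIView *root = [[[UIView alloc] initWithFrame:
                [[UIScreen mainScreen] applicationFrame]] autorelease];
            root.userInteractionEnabled = YES;  // the default, but worth confirming
            self.view = root;
        }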


  • Browser relative positioning with jQuery and CutyCapt

    - by Acoustic
    I've been using CutyCapt to take screen shots of several web pages with great success. My challenge now is to paint a few dots on those screen shots that represent where a user clicked. CutyCapt goes through a process of resizing the web page to the scroll width before taking a screen shot. That's extremely useful because you only get content and not much (if any) of the page's background. The difficulty is mapping a user's mouse X coordinates onto the screen shot. Obviously users have different screen resolutions and have their browser window open to different sizes. The image below shows 3 examples with the same logo. Assume, for example, that the logo is 10 pixels from the left edge of the content area (in red). In each of these cases, and for any resolution, I need a JavaScript routine that will calculate that the logo's X coordinate is 10. Again, the challenge (I think) is differing resolutions. In the center-aligned examples, the logo's position, as measured from the left edge of the browser (in black), differs with changing browser size. The left-aligned example should be simple, as the logo never moves as the screen resizes. Can anyone think of a way to calculate the scrollable width of a page? In other words, I'm looking for a JavaScript solution to calculate the minimum width of the browser window before a horizontal scroll bar shows up. Thanks for your help!
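    A sketch of one way to measure this, assuming the goal is the page's content width rather than the viewport: document.documentElement.scrollWidth reports the full width the laid-out content needs, which is exactly the minimum window width at which no horizontal scroll bar appears, and comparing it with clientWidth tells you whether the bar is showing right now:

        function contentWidth() {
            var doc = document.documentElement;
            var body = document.body;
            // scrollWidth: full width of the laid-out content;
            // clientWidth: width of the visible viewport area.
            return Math.max(doc.scrollWidth, body ? body.scrollWidth : 0);
        }

        var hasHorizontalScrollbar =
            document.documentElement.scrollWidth > document.documentElement.clientWidth;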


  • iPhone: how do I set up a clear window-size "blocker view"?

    - by Ben
    I feel like this should be obvious to me, but for some reason I can't figure this out. I have a navigation interface with a nav bar, toolbar, and primary view. Sometimes the user takes an action that causes a progress indicator to appear in the middle of the view. While the progress indicator (which is a custom UIView) is spinning in the middle, I want no touch input to reach any of the underlying interface (main view, nav bar, toolbar, etc.). But this doesn't seem trivial. I've tried (and failed) to create a simple view whose only job is to swallow touch input and use it as a window subview -- no dice, it never gets the touch events (and yes, it does have userInteractionEnabled). I've tried to bolt it on as a transparent modal view controller, but those don't seem to ever be transparent. Thoughts? What am I missing? Thanks!
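    For reference, a sketch of the window-overlay approach, with the two details that most often make it fail silently: the blocker must be the frontmost subview of the window, and it must keep a non-zero alpha, or hit-testing skips it entirely (a clear backgroundColor is fine):

        UIWindow *window = [[UIApplication sharedApplication] keyWindow];
        UIView *blocker = [[[UIView alloc] initWithFrame:window.bounds] autorelease];
        blocker.backgroundColor = [UIColor clearColor];  // invisible but still hit-testable
        blocker.userInteractionEnabled = YES;            // swallow all touches
        [window addSubview:blocker];
        [window bringSubviewToFront:blocker];            // must sit above the nav interface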


  • Determining Long Tap (Long Press, Tap Hold) on Android with jQuery

    - by Volomike
    I've been able to successfully play with the touchstart, touchmove, and touchend events on Android using jQuery and an HTML page. Now I'm trying to see what the trick is to determine a long-tap event, where one taps and holds for 3 seconds. I can't seem to figure this out yet. I want to do this purely in jQuery, without Sencha Touch, jQTouch, jQuery Mobile, etc. I like the concept of jQTouch, although it doesn't provide me a whole lot, and some of my code breaks with it. With Sencha Touch, I'm not a fan of moving away from jQuery into Ext.js and some new way of doing Javascript abstraction, especially when jQuery is so capable. So, I want to figure this out with jQuery alone. I've been able to do many jQTouch and Sencha Touch things on my own using jQuery. And jQuery Mobile is still too beta and not directed enough at Android yet.
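    A sketch of the usual timer-based approach, assuming a 3-second threshold and a target element #target: start a timer on touchstart and cancel it if the finger moves or lifts first; if the timer survives, it was a long tap (handleLongTap is a hypothetical handler):

        var pressTimer = null;

        $('#target').bind('touchstart', function () {
            pressTimer = setTimeout(function () {
                pressTimer = null;
                handleLongTap();  // 3 seconds elapsed with no move/end
            }, 3000);
        }).bind('touchmove touchend touchcancel', function () {
            clearTimeout(pressTimer);  // finger moved or lifted too soon
        });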


  • Qt/PyQt dialog with togglable fullscreen mode - problem on Windows

    - by Guard
    I have a dialog created in PyQt. Its purpose and functionality don't matter. The init is:

        class MyDialog(QWidget, ui_module.Ui_Dialog):
            def __init__(self, parent=None):
                super(MyDialog, self).__init__(parent)
                self.setupUi(self)
                self.installEventFilter(self)
                self.setWindowFlags(Qt.Dialog | Qt.WindowTitleHint)
                self.showMaximized()

    Then I have the event-filtering method:

        def eventFilter(self, obj, event):
            if event.type() == QEvent.KeyPress:
                key = event.key()
                if key == Qt.Key_F11:
                    if self.isFullScreen():
                        self.setWindowFlags(self._flags)
                        if self._state == 'm':
                            self.showMaximized()
                        else:
                            self.showNormal()
                            self.setGeometry(self._geometry)
                    else:
                        self._state = 'm' if self.isMaximized() else 'n'
                        self._flags = self.windowFlags()
                        self._geometry = self.geometry()
                        self.setWindowFlags(Qt.Tool | Qt.FramelessWindowHint)
                        self.showFullScreen()
                    return True
                elif key == Qt.Key_Escape:
                    self.close()
            return QWidget.eventFilter(self, obj, event)

    As can be seen, Esc is used for hiding the dialog, and F11 for toggling full-screen. In addition, if the user changed the dialog mode from the initial maximized to normal and possibly moved the dialog, its state and position are restored after exiting full-screen. Finally, the dialog is created when the MainWindow action is triggered:

        d = MyDialog(self)
        d.show()

    It works fine on Linux (Ubuntu Lucid), but quite strangely on Windows 7: if I go to full-screen from maximized mode, I can't exit full-screen (on F11 the dialog disappears and appears in full-screen mode again). If I change the dialog's mode to Normal (by double-clicking its title), then go to full-screen and then return back, the dialog is shown in normal mode, in the correct position, but without the title line. Most probably the reason for both cases is the same: setWindowFlags doesn't take effect. But why? Could it also be a bug in the recent PyQt version? On Ubuntu I have 4.6.x from apt, and on Windows the latest installer from the Riverbank site.
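    A workaround worth sketching, given that the flags-plus-show dance is the fragile part on Windows: showFullScreen() already hides the window frame by itself, so the flag juggling can often be dropped entirely (only the F11 path is shown here):

        def eventFilter(self, obj, event):
            if event.type() == QEvent.KeyPress and event.key() == Qt.Key_F11:
                if self.isFullScreen():
                    # Restore whichever mode we left from; no flag changes needed.
                    if self._state == 'm':
                        self.showMaximized()
                    else:
                        self.showNormal()
                        self.setGeometry(self._geometry)
                else:
                    self._state = 'm' if self.isMaximized() else 'n'
                    self._geometry = self.geometry()
                    self.showFullScreen()  # hides the frame on its own
                return True
            return QWidget.eventFilter(self, obj, event)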


  • Interface Builder only allows one button to be "touchable"

    - by STLMikey
    I'm making a clone of the classic game Simon, the memory matching game. My (iPad) app will load fine, I tap start, the game screen loads, and only one of my four buttons will respond to touch commands. To troubleshoot, I tried creating a second, unrelated nib and just populating it with four buttons not linked to anything. However, only one of those four buttons would respond to touch! There are no IBActions being called, nothing. Both the view itself and all four buttons are touch-enabled in the inspector... I'm stumped. I'm mostly asking if anyone has encountered anything similar, as I'd rather not burden potential help with app-specific questions if it's avoidable. Thank you!


  • JavaScript and rendering pause and stay paused on scroll in the Android browser

    - by user357303
    Hi. I've found some weird behaviour related to scrolling, rendering, and javascript. How to make it happen: on any webpage that is long enough to scroll, start to scroll pretty fast (fling the page), then release the touch. Now, while the page is still scrolling because of the momentum, tap the screen to stop the scroll. This makes the browser enter a weird mode. On the Nexus One it behaves like this: the updating of what's shown on the screen stops. You can still click on links, and they go where they are supposed to, but what's shown on the screen stays the same. If you then scroll the screen a bit, the updating kicks in again, and what you were supposed to see all along is shown. On all phones with HTC Sense I've tried (Hero, Desire, Legend) this happens: the updating of the screen is stopped just like on the Nexus One, but the execution of any javascript is stopped too. If you click on a link that takes you to another page, however, things return to normal again. The way I tested this was to create a page like this: http://pastebin.ca/1881620 The changeColor function simply changes the background color of 'container' to a few different colors. So before the error, when you click any link the color changes. After the error this happens: Nexus One: when you click on the links nothing happens (except the "orange link selected rounded corner box thing" is shown as if the link is clicked). Then when you scroll a bit, you can see the color has changed (an equal number of times to the number of times I clicked the link). On Sense: the links take me to google.com. Has anyone else noticed this problem? Is there any way to work around it? Thanks.


  • How to have a UISwipeGestureRecognizer AND UIPanGestureRecognizer work on the same view

    - by Shizam
    How would you set up the gesture recognizers so that a UISwipeGestureRecognizer and a UIPanGestureRecognizer work at the same time? Such that if you touch and move quickly (quick swipe) it detects the gesture as a swipe, but if you touch and then move (short delay between touch and move) it detects it as a pan? I've tried various permutations of requireGestureRecognizerToFail: and they didn't help exactly; with them, if the swipe gesture's direction was left, my pan gesture would work up, down, and right, but any movement left was detected by the swipe gesture.
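    A sketch of the delegate-based route, which is the documented hook for letting two recognizers run together: assign the same delegate to both and return YES from the simultaneous-recognition callback, then decide in the action handlers which gesture wins:

        // In the view controller, which adopts UIGestureRecognizerDelegate:
        swipe.delegate = self;
        pan.delegate = self;

        - (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer
            shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)other {
            // Let the swipe and the pan both observe the same touch stream.
            return YES;
        }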


  • Is there any open-source browser for touch-screen devices?

    - by Wallah
    I need an internet browser for my device, which has a 4.3-inch screen with 480x272 resolution. I am using embedded Qt 4.6.2 on embedded Linux. The micro-controller is an ARM9 at 450 MHz. Requirements for the browser are:
    - Touch-screen support, with panning (no scroll bars)
    - Single-touch zooming (no multi-touch available)
    - Fit-to-screen-width support (no horizontal scrolling)
    - Acid3 standards compliance
    - Page loading that displays all visible text first, then loads and shows images gradually
    Is there any browser which comes close to these requirements?


  • Learning Objective-C 2.0 and ASP.NET 4.0 simultaneously?

    - by Sahat
    (HOBBY) I own a Macbook Pro and iPod Touch so developing iPhone/iPod/iPad apps seems like a logical thing to do in order to get some experience in the programming field. Besides I want to write a new application similar to the Capsuleer (Character skills monitor app for EVE Online MMO) but with more features. It's something I'd love to have on my own iPod Touch and I am sure other people will welcome a new EVE Online app for their iPhone or iPod Touch. (CAREER) I want to learn ASP.NET (and possibly Silverlight later on) for my potential future job. I plan to work in the .NET field, so it's a good idea for me to start learning C# and ASP.NET ASAP. Is it a good idea to learn completely unrelated technologies at the same time? Or would it be better to learn one thing at a time? Objective-C first, and ASP.NET second. Or vice versa. Thanks, Sahat


  • NSFetchedResultsController on secondary UITableView - how to query data?

    - by Jason
    I am creating a Core Data based navigation iPhone app with multiple screens. Let's say it is a flash-card application. The data model is very simple, with only two entities: Language and CardSet. There is a one-to-many relationship between the Language entity and the CardSet entities, so each Language may contain multiple CardSets. In other words, Language has a one-to-many relationship Language.cardSets which points to the list of CardSets, and CardSet has a relationship CardSet.language which points to the Language. There are two screens: (1) an initial TableView screen, which displays the list of languages; and (2) a secondary TableView screen, which displays the list of CardSets in the Language. On the initial screen, which lists the languages, I am using NSFetchedResultsController to keep the list of languages up to date. That screen passes the selected Language to the secondary screen. On the secondary screen, I am trying to figure out whether I should again use an NSFetchedResultsController to maintain the list of CardSets, or if I should work through Language.cardSets to simply pull the list out of the object model. The latter makes the most sense programmatically, because I already have the Language - but then it would not automatically be updated on changes. I have looked at the NSFetchedResultsController documentation, and it seems like I can easily create predicates based on attributes - but not relationships. I.e., I can create the following:

        NSPredicate *predicate = [NSPredicate predicateWithFormat:@"name LIKE[c] 'Chuck Norris'"];

    How can I access my data through the direct relationship - Language.cardSets - and also have the table auto-update using NSFetchedResultsController? Is this possible?
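    For what it's worth, predicate syntax traverses relationships exactly as it does attributes, so a fetched results controller scoped to one Language's card sets is possible. A sketch, assuming the entity and relationship names from the question (the "name" sort key is an assumption; NSFetchedResultsController requires at least one sort descriptor):

        NSFetchRequest *request = [[[NSFetchRequest alloc] init] autorelease];
        request.entity = [NSEntityDescription entityForName:@"CardSet"
                                     inManagedObjectContext:context];
        // Relationship comparison: only card sets belonging to this language.
        request.predicate = [NSPredicate predicateWithFormat:@"language == %@", language];
        request.sortDescriptors = [NSArray arrayWithObject:
            [[[NSSortDescriptor alloc] initWithKey:@"name" ascending:YES] autorelease]];

        NSFetchedResultsController *frc = [[NSFetchedResultsController alloc]
            initWithFetchRequest:request
            managedObjectContext:context
              sectionNameKeyPath:nil
                       cacheName:nil];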


  • Delete object[i] from table or group in Corona SDK

    - by Rober Dote
    I have a problem (obviously :P). I'm creating a mini game, and when I touch an Object-A, it creates an Object-B. If I touch N times, it creates N Object-Bs. (Object-Bs are bubbles in my game.) So, when I touch a bubble (Object-B), I want it to disappear or perform some action. I tried adding each Object-B to an array:

        local t = {}
        ...
        bur = display.newImage("burbuja.png")
        table.insert(t, bur)

    and where I have my event listeners I wrote:

        for i = 1, #t do
            bur[i]:addEventListener("tap", reventar(i))
        end

    and my function 'reventar':

        local function reventar(event, id)
            table.remove(t, id)
        end

    I'm lost, and I only want the bubbles to disappear.
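    Two things stand out, sketched here as the usual Corona pattern: addEventListener expects a function reference (reventar(i) calls the function immediately and registers its return value instead), and a tap listener receives the tapped object as event.target, so no index bookkeeping is needed:

        -- Tap listener: the tapped bubble arrives as event.target.
        local function reventar(event)
            event.target:removeSelf()  -- remove the bubble from the display
            return true                -- mark the tap as handled
        end

        -- When a bubble is created, register the listener by reference:
        local bur = display.newImage("burbuja.png")
        bur:addEventListener("tap", reventar)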


  • Dynamic GUI Framework Designing

    - by user575715
    There is a scenario to be developed for a 3-tier application. We need to design a framework, or a utility of sorts. In the traditional approach to GUI design, we either create a static GUI page and code the elements on it along with their other properties (disabled/enabled, image source, name, id, which function is called on the onclick event, and so on), or we drag and drop the elements from the control palette provided by one of a variety of GUI frameworks. I need to build a POC around the following points:
    1) There must be a utility such that, during creation of a screen layout, the screen is saved in the database (RDBMS) with a screen number.
    2) All the events related to each control should be saved in another table, which will be mapped dynamically when the user requests the screen by its number.
    3) When the user calls that screen, a generic function should be invoked which loads the screen definition from the database, applies all the properties and events at runtime, and displays the final output to the user.
    This POC will let us customise screens according to our usage; also, all the code will be separated, so it can easily be reused in other development processes. Thanks, Amit Kalra


  • Using Rails and Rspec, how do you test that the database is not touched by a method

    - by Will Tomlins
    So I'm writing a test for a method which, for performance reasons, should achieve what it needs to achieve without using SQL queries. I'm thinking all I need to know is what to stub:

        describe SomeModel do
          describe 'a_getter_method' do
            it 'should not touch the database' do
              thing = SomeModel.create
              something_inside_rails.should_not_receive(:a_method_querying_the_database)
              thing.a_getter_method
            end
          end
        end

    EDIT: to provide a more specific example:

        class Publication < ActiveRecord::Base
        end

        class Book < Publication
        end

        class Magazine < Publication
        end

        class Student < ActiveRecord::Base
          has_many :publications

          def publications_of_type(type)
            # this is the method I am trying to test.
            # The test should show that when I do the following, the database is queried.
            self.publications.find_all_by_type(type)
          end
        end

        describe Student do
          describe "publications_of_type" do
            it 'should not touch the database' do
              Student.create()
              student = Student.first(:include => :publications)
              # the publications relationship is already loaded, so no need to touch the DB
              lambda {
                student.publications_of_type(:magazine)
              }.should_not touch_the_database
            end
          end
        end

    So the test should fail in this example, because the Rails 'find_all_by' method relies on SQL.
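    One way to make a matcher along the lines of touch_the_database real, sketched under the assumption of Rails 3+, where ActiveRecord publishes 'sql.active_record' notifications: collect the SQL events fired while the method runs and assert there were none:

        it 'should not touch the database' do
          student = Student.first(:include => :publications)

          queries = []
          subscriber = ActiveSupport::Notifications.subscribe('sql.active_record') do |*args|
            queries << args  # one entry per SQL statement executed
          end

          student.publications_of_type(:magazine)

          ActiveSupport::Notifications.unsubscribe(subscriber)
          queries.should be_empty
        end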

