Search Results

Search found 22641 results on 906 pages for 'use case'.


  • Android game logic problem

    - by semajhan
    I'm currently creating a game and have a problem. I think I know why it is occurring, but I'm not entirely sure, and even if I knew, I don't know how to solve it. I have a 10 x 10 2D array and a "player" class that takes up a tile. I have created 2 instances of the player and move them around via swiping. Around the edges I have put "walls" that the player cannot walk through, and everything works fine until I remove a wall. Once I remove a wall and move the character/player to the edge of the screen, the player cannot go any further. The problem occurs when the second instance of the player is not at the edge of the screen but, say, 2 tiles from the first instance of "player", who is at the edge. If I try moving them further in the direction of the edge, I understand that the first instance of player wouldn't move or do anything, but the second instance of player should still move; yet it won't. This is the code that executes when the user swipes:

        if (player.getArrayX() - 1 != player2.getArrayX()) {
            player.moveLeft();
        } else if (player.getArrayX() - 1 == player2.getArrayX()
                && player.getArrayY() != player2.getArrayY()) {
            player.moveLeft();
        }
        if (player2.getArrayX() - 1 != player.getArrayX()) {
            player2.moveLeft();
        } else if (player2.getArrayX() - 1 == player.getArrayX()
                && player2.getArrayY() != player.getArrayY()) {
            player2.moveLeft();
        }

    In the player class I have:

        public void moveLeft() {
            if (alive) {
                switch (levelMaster.getLevel1(getArrayX() - 1, getArrayY())) {
                    case 0:
                        break;
                    case 1:
                        subX(); // basically moves player left
                        setArrayX(getArrayX() - 1); // shifts x coord of player 1 within tilemap
                        Log.d("semajhan", "x: " + getArrayX());
                        break;
                    case 9:
                        subX();
                        setArrayX(getArrayX() - 1);
                        setAlive(false);
                        break;
                }
            }
        }

    Any help on the matter or further insight would be greatly appreciated, thanks.
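
    One pattern that often avoids this class of cross-check bug is to test the destination tile once per player instead of testing the players against each other. A minimal sketch, not the poster's code, reusing the names from the question (levelMaster, the 0 wall code, and the accessor methods) and assuming the two player fields are visible here:

        // Sketch: a tile is free if it is not a wall and the other player is not on it.
        private boolean isFree(int x, int y, Player other) {
            if (levelMaster.getLevel1(x, y) == 0) return false; // 0 = wall, as in the switch above
            return !(other.getArrayX() == x && other.getArrayY() == y);
        }

        public void onSwipeLeft() {
            // Each player's move is now decided independently of the other's success.
            if (isFree(player.getArrayX() - 1, player.getArrayY(), player2)) player.moveLeft();
            if (isFree(player2.getArrayX() - 1, player2.getArrayY(), player)) player2.moveLeft();
        }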


  • Give the mount point of a path

    - by Charles Stewart
    The following, very non-robust shell code will give the mount point of $path:

        (for i in $(df | cut -c 63-99); do case $path in $i*) echo $i;; esac; done) | tail -n 1

    Is there a better way to do this?

    Postscript: This script is really awful, but has the redeeming quality that it Works On My Systems. Note that several mount points may be prefixes of $path.

    Examples. On a Linux system:

        cas@txtproof:~$ path=/sys/block/hda1
        cas@txtproof:~$ for i in $(df -a | cut -c 57-99); do case $path in $i*) echo $i;; esac; done | tail -1
        /sys

    On a Mac OS X system:

        cas local$ path=/dev/fd/0
        cas local$ for i in $(df -a | cut -c 63-99); do case $path in $i*) echo $i;; esac; done | tail -1
        /dev

    Note the need to vary cut's parameters because of the way df's output differs; indeed, awk is better.

    Answer: It looks like munging tabular output is the only way within the shell, but

        df /dev/fd/impossible | tail -1 | awk '{ print $NF }'

    is a big improvement on what I had. Note two differences in semantics: firstly, df $path insists that $path names an existing file, whereas the script above doesn't care; secondly, there are no worries about dereferencing symlinks. It's not difficult to write Python code to do the job.
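
    For what it's worth, a GNU userland can skip the column munging entirely; neither option below exists in the BSD tools that ship with OS X, so treat this as a GNU-only sketch:

        df --output=target -- "$path" | tail -n 1   # GNU coreutils df (>= 8.21)
        stat -c %m -- "$path"                       # GNU stat: %m prints the mount point

    Both share the df $path semantics above: $path must name an existing file.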


  • Xcode enum woes

    - by Raconteur
    Hi gang, I thought I had this sorted, but I am still missing something. Very simply, I have a Settings class that holds a DAO (sitting on a plist). I want to have a couple of enums for the settings for convenience and readability, such as GamePlayType and DifficultyLevel. Right now I am defining them in the Settings.h file above the @interface line, as such:

        typedef enum { EASY, NORMAL, HARD } DifficultyLevel;

    and

        typedef enum { SET_NUMBER_OF_MOVES, TO_COMPLETION } GamePlayType;

    If I access them from within the Settings class like:

        - (int)gridSizeForLOD {
            switch ([self difficultyLevel]) {
                case EASY:   return GRID_SIZE_EASY;
                case NORMAL: return GRID_SIZE_NORMAL;
                case HARD:   return GRID_SIZE_HARD;
                default:     return GRID_SIZE_NORMAL;
            }
        }

    everything is fine. But if I try to access them outside of the Settings class, let's say in my main view controller class, like this:

        if (([settings gameType] == SET_NUMBER_OF_MOVES) && (numMoves == [settings numMovesForLOD])) {
            [self showLoseScreen];
        }

    I get errors (like EXC_BAD_ACCESS) or the condition always fails. Am I doing something incorrectly? Also, I should point out that I have this code for the call to gameType (which lives in the Settings class):

        - (GamePlayType)gameType {
            return [dao gameType];
        }

    and the DAO implements gameType like this:

        - (int)gameType {
            return (settingsContent != nil) ? [[settingsContent objectForKey:@"Game Type"] intValue] : 0;
        }

    I know I have the DAO returning an int instead of a GamePlayType, but A) the problem I am describing arose there when I tried to use the "proper" data type, and B) I did not think it would matter, since the enum is just a bunch of named ints, right? Any help, greatly appreciated. I really want to understand this thoroughly, and something is eluding me... Cheers, Chris
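
    Since a C enum is just an integer type, one low-risk cleanup (a sketch, not a diagnosis of the crash) is to make the DAO's accessor return the typedef'd type itself, so every caller compares values of the same declared type:

        - (GamePlayType)gameType {
            // The cast is safe: the enum's underlying representation is an int.
            return (settingsContent != nil)
                ? (GamePlayType)[[settingsContent objectForKey:@"Game Type"] intValue]
                : SET_NUMBER_OF_MOVES; // value 0, the same default as before
        }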


  • Void* array casting to float, int32, int16, etc.

    - by Griffin
    Hey guys, I've got an array of PCM data. It could be 16 bit, 24 bit packed, 32 bit, etc. It could be signed or unsigned, and it could be 32 or 64 bit floating point. It is currently stored as a "void**" matrix, indexed by channel, then by frame. The goal is to allow my library to take in any PCM format and buffer it, without requiring manipulation of the data to fit a designated structure. If the A/D converter spits out 24 bit packed arrays of interleaved PCM, I need to accept it gracefully. I also need to support 16 bit non-interleaved, as well as any permutation of the above formats. I know the bit depth and other information at runtime, and I'm trying to code efficiently while not duplicating code. What I need is an effective way to cast the matrix, put PCM data into the matrix, and then pull it out later. I can cast the matrix to int32_t or int16_t for the 32 and 16 bit signed PCM respectively; I'll probably have to store the 24 bit PCM in an int32_t on 32 bit, 8 bit byte systems as well. Can anyone recommend a good way to put data into this array and pull it out later? I'd like to avoid large sections of code which look like:

        switch( mFormat ) {
            case 1: // unsigned 8 bit
                for( int i = 0; i < mChannels; i++ )
                    framesArray = (uint8_t*)pcm[i];
                break;
            case 2: // signed 8 bit
                for( int i = 0; i < mChannels; i++ )
                    framesArray = (int8_t*)pcm[i];
                break;
            case 3: // unsigned 16 bit
            ...

    Limitations: I'm working in C/C++ with no templates, no RTTI, and no STL. Think embedded. Things get trickier when I have to port this to a DSP with 16 bit bytes. Does anybody have any useful macros they might be willing to share? Thanks, -Griff
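
    One macro-based shape for this, sketched under the question's own names (mFormat codes, pcm, framesArray, mChannels); indexing framesArray per channel is an assumption about the intent. The point is that the per-channel loop is written exactly once, so the format cases cannot drift apart:

        #include <stdint.h>

        /* Sketch only: expand the per-channel binding from a single macro. */
        #define BIND_CHANNELS(T)                       \
            for (int i = 0; i < mChannels; i++)        \
                framesArray[i] = (T *)pcm[i];

        static void bind_frames(void **pcm, void **framesArray,
                                int mChannels, int mFormat)
        {
            switch (mFormat) {
                case 1: BIND_CHANNELS(uint8_t);  break; /* unsigned 8 bit  */
                case 2: BIND_CHANNELS(int8_t);   break; /* signed 8 bit    */
                case 3: BIND_CHANNELS(uint16_t); break; /* unsigned 16 bit */
                /* ... remaining formats ... */
            }
        }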


  • "No keyboard for id 0"?

    - by Mellon
    I am new to Android app development, and I have encountered a strange problem with the Menu button. Here is the thing: I have two activities, "ActivityOne" and "ActivityTwo", where "ActivityTwo" is the child activity of "ActivityOne". In both activities, I have defined the menu button options as follows:

        @Override
        public boolean onCreateOptionsMenu(Menu menu) {
            super.onCreateOptionsMenu(menu);
            MenuItem insertMenuItem = menu.add(0, INSERT_ID, 0, R.string.menu_insert);
            insertMenuItem.setIcon(R.drawable.ic_menu_add);
            MenuItem settingMenuItem = menu.add(0, SETTING_ID, 0, R.string.menu_setting);
            settingMenuItem.setIcon(R.drawable.ic_menu_settings);
            MenuItem aboutMenuItem = menu.add(0, ABOUT_ID, 0, R.string.menu_about);
            aboutMenuItem.setIcon(R.drawable.ic_menu_about);
            logPrinter.println("creating menu options...");
            return true;
        }

        @Override
        public boolean onMenuItemSelected(int featureId, MenuItem item) {
            switch (item.getItemId()) {
                case INSERT_ID:
                    doInsert();
                    return true;
                case SETTING_ID:
                    return true;
                case ABOUT_ID:
                    showAbout();
                    return true;
            }
            return super.onMenuItemSelected(featureId, item);
        }

    In "ActivityOne", when I click the physical Menu button, no menu options pop up from the screen bottom. When I check the LogCat console, there are two warning messages: "No keyboard for id 0" and "Using default keymap: /system/usr/keychars/qwerty.kcm.bin". BUT in "ActivityTwo" the menu button works fine; it shows me the menu options I defined. Why does the menu button not work in "ActivityOne"? What do the warning messages mean?


  • Finding What You Need in R: function arguments/parameters from outside the function's package

    - by doug
    Often in R there are a dozen functions scattered across as many packages, all of which have the same purpose but of course differ in accuracy, performance, theoretical rigor, and so on. How do you gather all of these in one place before you start your task? For instance: the generic plot function. Setting secondary ticks is much easier (IMHO) using a function outside of the base package, minor.tick(nx=n, ny=n, tick.ratio=n), found in Hmisc. Of course, that doesn't show up in plot's docstring. Likewise, the data-input arguments to plot can be supplied by an object returned from the function hexbin, again from a library outside of the base installation (where plot resides). What would obviously be great is a programmatic way to gather these function arguments from the various libraries and put them in a single namespace.

    Edit (trying to re-state the example above more clearly): the arguments to plot supplied in the base package for, e.g., setting the axis tick frequency are xaxp/yaxp; however, one can also set the axis tick frequency via a function outside of the base package, as in the minor.tick function from the Hmisc package, but you wouldn't know that just from looking at the plot method signature. Is there a meta function in R for this? So far, as I come across them, I've been manually gathering them in a TextMate 'snippet' (along with the attendant library imports). This isn't that difficult or time consuming, but I can only update my snippet as I find out about these additional arguments/parameters. Is there a canonical R way to do this, or at least an easier way?

    Just in case that wasn't clear, I am not talking about the case where multiple packages provide functions directed at the same statistic or view (e.g., boxplot in the base package, boxplot.matrix in gplots, and bplots in Rlab). What I am talking about is the case in which the function name is the same across two or more packages.


  • parse.json of an authenticated Play request

    - by niklassaers
    I've set up authentication in my application like this: always allow when a username is supplied and the API key is 123.

        object Auth {
          def IsAuthenticated(block: => String => Request[AnyContent] => Result) = {
            Security.Authenticated(RetrieveUser, HandleUnauthorized) { user =>
              Action { request =>
                block(user)(request)
              }
            }
          }

          def RetrieveUser(request: RequestHeader) = {
            val auth = new String(base64Decode(request.headers.get("AUTHORIZATION").get.replaceFirst("Basic", "")))
            val split = auth.split(":")
            val user = split(0)
            val pass = split(1)
            Option(user)
          }

          def HandleUnauthorized(request: RequestHeader) = {
            Results.Forbidden
          }

          def APIKey(apiKey: String)(f: => String => Request[AnyContent] => Result) = IsAuthenticated { user => request =>
            if (apiKey == "123") f(user)(request)
            else Results.Forbidden
          }
        }

    I then want to define a method in my controller (testOut in this case) that consumes the request as application/json only. Before I added authentication, I'd write "def testOut = Action(parse.json) {...}", but now that I'm using authentication, how can I add parse.json into the mix and make this work?

        def testOut = Auth.APIKey("123") { username => implicit request =>
          var props: Map[String, JsValue] = Map[String, JsValue]()
          request.body match {
            case JsObject(fields) => { props = fields.toMap }
            case _ => {} // Ok("received something else: " + request.body + '\n')
          }
          if (!props.contains("UUID"))
            props.+("UUID" -> UniqueIdGenerator.uuid)
          if (!props.contains("entity"))
            props.+("entity" -> "unset")
          props.+("username" -> username)
          Ok(props.toString)
        }

    As a bonus question, why is only UUID added to the props map, and not entity and username? Sorry about the noob factor; I'm trying to learn Scala and Play at the same time. :-) Cheers, Nik
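
    One way to get parse.json into the mix (a sketch against the Play 2.0-era API, built only from the names already used above) is to make the wrappers generic in the body type and thread a BodyParser through them:

        object Auth {
          def IsAuthenticated[A](parser: BodyParser[A])(block: String => Request[A] => Result) =
            Security.Authenticated(RetrieveUser, HandleUnauthorized) { user =>
              Action(parser) { request => block(user)(request) }
            }

          def APIKey[A](apiKey: String)(parser: BodyParser[A])(f: String => Request[A] => Result) =
            IsAuthenticated(parser) { user => request =>
              if (apiKey == "123") f(user)(request) else Results.Forbidden
            }
        }

        // Usage sketch: request.body is now a JsValue.
        def testOut = Auth.APIKey("123")(parse.json) { username => implicit request =>
          Ok(request.body)
        }

    On the bonus question: Scala's immutable Map returns a new map from +, so each props.+(...) whose result is discarded leaves props unchanged; reassign with props = props + (...) instead.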


  • Problem comparing pointers and integers in C

    - by Dimitri
    Hi, I have a problem with this code. When I use this version of the function, I get no warnings:

        void handler(int sig) {
            switch (sig) {
                case SIGINT: {
                    click++;
                    fprintf(stdout, "SIGINT recu\n");
                    if (click == N) {
                        exit(0);
                    }
                }
                case SIGALRM:
                    fprintf(stdout, "SIGALRM received\n");
                    exit(0);
                case SIGTERM:
                    fprintf(stdout, "SIGTERM received\n");
                    exit(0);
            }
        }

    But when I rewrite the function as this new version, I get a "comparison between pointer and integer" warning on the if statements:

        void handler(int sig) {
            printf("Signal recu\n");
            if (signal == SIGINT) {
                click++;
                fprintf(stdout, "SIGINT received; Click = %d\n", click);
                if (click == N) {
                    fprintf(stdout, "Exiting with SIGINT\n");
                    exit(0);
                }
            }
            else if (signal == SIGALRM) {
                fprintf(stdout, "SIGALRM received\n");
                exit(0);
            }
            else if (signal == SIGTERM) {
                fprintf(stdout, "SIGTERM received\n");
                exit(0);
            }
        }

    Can someone tell me where the problem is?


  • Need Help with .NET OpenId HttpHandler

    - by Mark E
    I'm trying to use a single HttpHandler to authenticate a user's OpenID and receive a claims response. The initial authentication works, but the claims response does not. The error I receive is "This webpage has a redirect loop." What am I doing wrong?

        public class OpenIdLogin : IHttpHandler
        {
            private HttpContext _context = null;

            public void ProcessRequest(HttpContext context)
            {
                _context = context;
                var openid = new OpenIdRelyingParty();
                var response = openid.GetResponse();
                if (response == null)
                {
                    // Stage 2: user submitting Identifier
                    openid.CreateRequest(context.Request.Form["openid_identifier"]).RedirectToProvider();
                }
                else
                {
                    // Stage 3: OpenID Provider sending assertion response
                    switch (response.Status)
                    {
                        case AuthenticationStatus.Authenticated:
                            //FormsAuthentication.RedirectFromLoginPage(response.ClaimedIdentifier, false);
                            string identifier = response.ClaimedIdentifier;
                            //****** TODO only proceed if we don't have the user's extended info in the database ******
                            ClaimsResponse claim = response.GetExtension<ClaimsResponse>();
                            if (claim == null)
                            {
                                //IAuthenticationRequest req = openid.CreateRequest(identifier);
                                IAuthenticationRequest req = openid.CreateRequest(Identifier.Parse(identifier));
                                var fields = new ClaimsRequest();
                                fields.Email = DemandLevel.Request;
                                req.AddExtension(fields);
                                req.RedirectingResponse.Send(); // Is this correct?
                            }
                            else
                            {
                                context.Response.ContentType = "text/plain";
                                context.Response.Write(claim.Email); // claim.FullName;
                            }
                            break;
                        case AuthenticationStatus.Canceled:
                            // TODO
                            break;
                        case AuthenticationStatus.Failed:
                            // TODO
                            break;
                    }
                }
            }
        }
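
    A plausible source of the loop, for what it's worth: once authenticated, the handler creates a brand-new request and redirects to the provider again, which comes back authenticated but still without the extension, and so on. The usual DotNetOpenAuth shape (a sketch built only from the calls already used above) is to attach the ClaimsRequest to the stage-2 request, before the first redirect:

        // Stage 2: user submitting Identifier; ask for the claims up front so the
        // assertion that comes back already carries the extension.
        var request = openid.CreateRequest(context.Request.Form["openid_identifier"]);
        var fields = new ClaimsRequest();
        fields.Email = DemandLevel.Request;
        request.AddExtension(fields);
        request.RedirectToProvider();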


  • Is the C++ compiler optimizer allowed to break my destructor ability to be called multiple times?

    - by sharptooth
    We once had an interview with a very experienced C++ developer who couldn't answer the following question: is it necessary to call the base class destructor from the derived class destructor in C++? Obviously the answer is no: C++ will call the base class destructor automagically anyway. But what if we attempt to do the call? As I see it, the result will depend on whether the base class destructor can be called twice without invoking erroneous behavior. For example, in this case:

        class BaseSafe {
        public:
            ~BaseSafe() { }
        private:
            int data;
        };

        class DerivedSafe : public BaseSafe {
        public:
            ~DerivedSafe() { BaseSafe::~BaseSafe(); }
        };

    everything will be fine: the BaseSafe destructor can be called twice safely and the program will run all right. But in this case:

        class BaseUnsafe {
        public:
            BaseUnsafe() { buffer = new char[100]; }
            ~BaseUnsafe() { delete[] buffer; }
        private:
            char* buffer;
        };

        class DerivedUnsafe : public BaseUnsafe {
        public:
            ~DerivedUnsafe() { BaseUnsafe::~BaseUnsafe(); }
        };

    the explicit call will run fine, but then the implicit (automagic) call to the destructor will trigger a double delete and undefined behavior. It looks easy to avoid the UB in the second case: just set buffer to a null pointer after delete[]. But will this help? I mean, the destructor is expected to be run only once on a fully constructed object, so the optimizer could decide that setting buffer to a null pointer makes no sense and eliminate that code, exposing the program to the double delete. Is the compiler allowed to do that?


  • What's the best way to communicate the purpose of a string parameter in a public API?

    - by Dave
    According to the guidance published in New Recommendations for Using Strings in Microsoft .NET 2.0, the data in a string may exhibit one of the following types of behavior:

        1. A non-linguistic identifier, where bytes match exactly.
        2. A non-linguistic identifier, where case is irrelevant, especially a piece of data stored in most Microsoft Windows system services.
        3. Culturally-agnostic data, which still is linguistically relevant.
        4. Data that requires local linguistic customs.

    Given that, I'd like to know the best way to communicate which behavior is expected of a string parameter in a public API. I wasn't able to find an answer in the Framework Design Guidelines. Consider the following methods:

        f(string this_is_a_linguistic_string)
        g(string this_is_a_symbolic_identifier_so_use_ordinal_compares)

    Is variable naming and XML documentation the best I can do? Could I use attributes in some way to mark the requirements of the string? Now consider the following case:

        h(Dictionary<string, object> dictionary)

    Note that the dictionary instance is created by the caller. How do I communicate that the callee expects the IEqualityComparer<string> object held by the dictionary to perform, for example, a case-insensitive ordinal comparison?
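
    Beyond naming and XML docs, one enforceable option for the dictionary case (a sketch; this checks the convention at runtime rather than expressing it in the signature) is to fail fast on an unexpected comparer:

        void h(Dictionary<string, object> dictionary)
        {
            // Dictionary<TKey,TValue> exposes the comparer it was constructed with.
            if (!Equals(dictionary.Comparer, StringComparer.OrdinalIgnoreCase))
                throw new ArgumentException(
                    "dictionary must be created with StringComparer.OrdinalIgnoreCase",
                    "dictionary");
        }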


  • This is my first time asking here: I want to create a linked list while keeping it sorted, thanks

    - by user2738718
        package practise;

        public class Node {
            // node contains data and next
            public int data;
            public Node next;

            // constructor
            public Node(int data, Node next) {
                this.data = data;
                this.next = next;
            }

            // list insert method
            public void insert(Node list, int s) {
                // case 1: only one element in the list
                if (list.next == null && list.data > s) {
                    Node T = new Node(s, list);
                } else if (list.next == null && list.data < s) {
                    Node T = new Node(s, null);
                    list.next = T;
                }
                // case 2: more than 1 element in the list
                // (3 elements present) I set prev to null and curr to list, then performed the while loop
                if (list.next.next != null) {
                    Node curr = list;
                    Node prev = null;
                    while (curr != null) {
                        prev = curr;
                        curr = curr.next;
                        if (curr.data > s && prev.data < s) {
                            Node T = new Node(s, curr);
                            prev.next = T;
                        }
                    }
                    // case 3: at the end, check the data
                    if (prev.data < s) {
                        Node T = new Node(s, null);
                        prev.next = T;
                    }
                }
            }
        }

    This is a homework problem. I created the insert method so I can check each value and place it in the correct position, so my list stays sorted. This is how far I got; please correct me if I am wrong. I keep inserting nodes in the main method: Node root = new Node(); and root.insert() to add.
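
    For comparison, a common shape for a sorted insert (a sketch, not the assignment's required structure) returns the new head, which removes the special cases for an empty list and for inserting before the first node:

        // Sketch: sorted insert that returns the (possibly new) head of the list.
        static Node insertSorted(Node head, int s) {
            if (head == null || head.data >= s) {
                return new Node(s, head); // a new smallest element becomes the head
            }
            Node prev = head;
            while (prev.next != null && prev.next.data < s) {
                prev = prev.next; // advance to the last node smaller than s
            }
            prev.next = new Node(s, prev.next);
            return head;
        }

        // Usage sketch: Node root = null; root = insertSorted(root, 5);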


  • Advice on Linq to SQL mapping object design

    - by fearofawhackplanet
    I hope the title and following text are clear; I'm not very familiar with the correct terms, so please correct me if I get anything wrong. I'm using the Linq ORM for the first time and am wondering how to address the following. Say I have two DB tables:

        User
        ----
        Id
        Name

        Phone
        -----
        Id
        UserId
        Model

    The Linq code generator produces a bunch of entity classes. I then write my own classes and interfaces which wrap these Linq classes:

        class DatabaseUser : IUser
        {
            public DatabaseUser(User user)
            {
                _user = user;
            }

            public Guid Id { get { return _user.Id; } }
            ... etc
        }

    So far so good. Now, it's easy enough to find a user's phones from Phones.Where(p => p.User == user), but surely consumers of the API shouldn't need to write their own Linq queries to get at data, so I should wrap this query in a function or property somewhere. So the question is: in this example, would you add a Phones property to IUser or not? In other words, should my interface specifically be modelling my database objects (in which case Phones doesn't belong in IUser), or is it actually simply providing a set of functions and properties which are conceptually associated with a User (in which case it does)? There seem to be drawbacks to both views, but I'm wondering if there is a standard approach to the problem. Or just any general words of wisdom you could share. My first thought was to use extension methods, but in fact that doesn't work in this case.


  • Elegant and 'correct' multiton implementation in Objective C?

    - by submachine
    Would you call this implementation of a multiton in Objective-C 'elegant'? I have programmatically 'disallowed' use of alloc and allocWithZone: because the decision to allocate or not allocate memory needs to be made based on a key. I know for sure that I need to work with only two instances, so I'm using 'switch-case' instead of a map.

        #import "Multiton.h"

        static Multiton *firstInstance = nil;
        static Multiton *secondInstance = nil;

        @implementation Multiton

        + (Multiton *) sharedInstanceForDirection:(char)direction {
            return [[self allocWithKey:direction] init];
        }

        + (id) allocWithKey:(char)key {
            return [self allocWithZone:nil andKey:key];
        }

        + (id) allocWithZone:(NSZone *)zone andKey:(char)key {
            Multiton **sharedInstance;
            @synchronized(self) {
                switch (key) {
                    case KEY_1:
                        sharedInstance = &firstInstance;
                        break;
                    case KEY_2:
                        sharedInstance = &secondInstance;
                        break;
                    default:
                        [NSException raise:NSInvalidArgumentException format:@"Invalid key"];
                        break;
                }
                if (*sharedInstance == nil)
                    *sharedInstance = [super allocWithZone:zone];
            }
            return *sharedInstance;
        }

        + (id) allocWithZone:(NSZone *)zone {
            // Do not allow use of alloc and allocWithZone
            [NSException raise:NSObjectInaccessibleException format:@"Use allocWithZone:andKey: or allocWithKey:"];
            return nil;
        }

        - (id) copyWithZone:(NSZone *)zone { return self; }
        - (id) retain { return self; }
        - (unsigned) retainCount { return NSUIntegerMax; }
        - (void) release { return; }
        - (id) autorelease { return self; }

        - (id) init {
            [super init];
            return self;
        }

    PS: I've not yet tried whether this works, but it's compiling cleanly :)


  • Loading views dynamically

    - by Dan
    Case 1: I created a View-based sample application and tried to execute the code below. When I press the "Job List" button, it should load another view with a "Back Btn" on it. In the test function, if I use [self.navigationController pushViewController:jbc animated:YES]; nothing gets loaded, but if I use [self presentModalViewController:jbc animated:YES]; it loads the other view with "Back Btn" on it.

    Case 2: I created another Navigation-based application and used [self.navigationController pushViewController:jbc animated:YES]; it worked as I wanted.

    Can someone please explain why it was not working in Case 1? Does it have something to do with the type of project that is selected?

        @interface MWViewController : UIViewController {
        }
        - (void) test;
        @end

        @interface JobViewCtrl : UIViewController {
        }
        @end

        @implementation MWViewController

        - (void)viewDidLoad {
            UIButton* btn = [UIButton buttonWithType:UIButtonTypeRoundedRect];
            btn.frame = CGRectMake(80, 170, 150, 35);
            [btn setTitle:@"Job List!" forState:UIControlStateNormal];
            [btn addTarget:self action:@selector(test) forControlEvents:UIControlEventTouchUpInside];
            [self.view addSubview:btn];
            [super viewDidLoad];
        }

        - (void) test {
            JobViewCtrl* jbc = [[JobViewCtrl alloc] init];
            [self.navigationController pushViewController:jbc animated:YES];
            //[self presentModalViewController:jbc animated:YES];
            [jbc release];
        }

        - (void)dealloc {
            [super dealloc];
        }

        @end

        @implementation JobViewCtrl

        - (void) loadView {
            self.view = [[UIView alloc] initWithFrame:[[UIScreen mainScreen] applicationFrame]];
            self.view.backgroundColor = [UIColor grayColor];
            UIButton* btn = [UIButton buttonWithType:UIButtonTypeRoundedRect];
            btn.frame = CGRectMake(80, 170, 150, 35);
            [btn setTitle:@"Back Btn!" forState:UIControlStateNormal];
            [self.view addSubview:btn];
        }

        @end
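
    For context: pushViewController:animated: only does something when the receiver is non-nil, and self.navigationController is nil unless an ancestor placed the controller inside a UINavigationController. The navigation-based template sets that up for you; the view-based template does not. A sketch of the missing wiring, with the app-delegate variable names (window, viewController) being assumptions:

        // In the view-based project's app delegate (names assumed):
        UINavigationController *nav = [[UINavigationController alloc]
                initWithRootViewController:viewController];
        [window addSubview:nav.view]; // instead of adding viewController.view directly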


  • Writing functions of tuples conveniently in Scala

    - by Alexey Romanov
    Quite a few functions on Map take a function on a key-value tuple as the argument, e.g. def foreach(f: ((A, B)) => Unit): Unit. So I looked for a short way to write an argument to foreach:

        > val map = Map(1 -> 2, 3 -> 4)
        map: scala.collection.immutable.Map[Int,Int] = Map(1 -> 2, 3 -> 4)

        > map.foreach((k, v) => println(k))
        error: wrong number of parameters; expected = 1
               map.foreach((k, v) => println(k))
                           ^

        > map.foreach({(k, v) => println(k)})
        error: wrong number of parameters; expected = 1
               map.foreach({(k, v) => println(k)})
                           ^

        > map.foreach(case (k, v) => println(k))
        error: illegal start of simple expression
               map.foreach(case (k, v) => println(k))
                           ^

    I can do:

        > map.foreach(_ match { case (k, v) => println(k) })
        1
        3

    Any better alternatives?
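
    One standard alternative (plain Scala, not from the post): with braces instead of parentheses, a pattern-matching anonymous function is accepted directly, so the _ match wrapper is unnecessary:

        map.foreach { case (k, v) => println(k) }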


  • ListView setOnItemClickListener in ListFragment not working

    - by Siddarth Kaki
    I am developing an app that uses ActionBar tabs to display a list of options through a ListFragment. The list (and ListFragment) display without a problem, but the ListView's setOnItemClickListener doesn't seem to work, as nothing happens when an item in the list is clicked. Here's the code for the ListFragment class:

        package XXX.XXX;

        public class AboutFrag extends SherlockListFragment {

            @Override
            public View onCreateView(LayoutInflater inflater, ViewGroup container,
                    Bundle savedInstanceState) {
                View view = inflater.inflate(R.layout.aboutfrag, container, false);
                ListView lv = (ListView) view.findViewById(android.R.id.list);
                String[] items = new String[] { "About 1", "About 2", "About 3" };
                lv.setAdapter(new ArrayAdapter<String>(getActivity(), R.layout.list_item, items));
                lv.setOnItemClickListener(new OnItemClickListener() {
                    public void onItemClick(AdapterView<?> parent, View view, int position, long id) {
                        switch (position) {
                            case 0:
                                Intent browserIntent = new Intent(Intent.ACTION_VIEW, Uri.parse("http://google.com"));
                                startActivityForResult(browserIntent, 0);
                                break;
                            case 1:
                                Intent browserIntent2 = new Intent(Intent.ACTION_VIEW, Uri.parse("http://wikipedia.org"));
                                startActivityForResult(browserIntent2, 0);
                                break;
                            case 2:
                                Intent browserIntent3 = new Intent(Intent.ACTION_VIEW, Uri.parse("http://android.com"));
                                startActivityForResult(browserIntent3, 0);
                                break;
                        }
                    }
                });
                return view;
            }
        }

    I'm assuming it does not work because the class returns the view object, so the FragmentActivity can't run the listener code. Does anyone know how to make this work? By the way, I am using ActionBarSherlock. Thanks in advance!!!
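
    For what it's worth, ListFragment already wires up the list's item clicks; a sketch of the built-in route, which avoids attaching a listener by hand in onCreateView:

        @Override
        public void onListItemClick(ListView l, View v, int position, long id) {
            // Same switch on position as above; the fragment delivers the click here.
        }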


  • Multi-client C# ODBC (Sybase/Oracle/MSSQL) table access question.

    - by Hamish Grubijan
    I am working on a feature that would allow clients to pick a unique identifier (ci_name). The code below is a generic version that gets expanded to the right SQL depending on the vendor. Hopefully it makes sense.

        #include "sql.h"

        create table client_identification
        (
            ci_id   TYPE_ID IDENTITY,
            ci_name varchar(64) not null,
            constraint ci_pk primary key (ci_name)
        );
        go

        CREATE_SEQUENCE(ci_id)

    There will be simple stored procedures for adding, retrieving, and deleting these user records. This will be used by several admins. This will not happen very frequently, but there is still a possibility that something will be added or deleted since the list was initially retrieved. I have not yet decided if I need to detect the case of a double delete, but the user name cannot be created twice: the primary key constraint will object. I want to be able to detect this particular case and display something like: "you snooze - you loose." :) I would like to leverage the pk constraint instead of doing some extra SQL gymnastics. So, how can I detect this case cleanly, so that it works in MS SQL 2008, Sybase, and Oracle? I hope to do better than catch a general ODBC exception and parse out the text, looking for what Sybase, Oracle, and MSSQL would give me back. Oracle is a little different: we actually prepend these variables to the Oracle version of stored procedures, because they are not available otherwise:

        Vret_val out number,
        Vtran_count in out number,
        Vmessage_count in out number,

    Thanks. General helpful tips and comments are welcome, except for naming convention ones (I do not have a choice here, plus I mangled the actual names a bit).
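
    One vendor-neutral angle worth considering (a sketch; the command setup is assumed): ODBC reports a standard SQLSTATE alongside the vendor error text, and drivers generally map duplicate-key failures into SQLSTATE class 23, "integrity constraint violation", so the catch can test the class rather than parse message text:

        // using System.Data.Odbc; (sketch)
        try
        {
            insertCommand.ExecuteNonQuery(); // the proc that inserts ci_name
        }
        catch (OdbcException ex)
        {
            string state = ex.Errors.Count > 0 ? ex.Errors[0].SQLState : "";
            if (state.StartsWith("23"))      // integrity constraint violation
                Console.WriteLine("you snooze - you loose.");
            else
                throw;
        }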


  • How do I structure my tests with Python unittest module?

    - by persepolis
    I'm trying to build a test framework for automated web testing in Selenium and unittest, and I want to structure my tests into distinct scripts. So I've organised it as follows.

    base.py - this will contain, for now, the base Selenium test case class for setting up a session:

        import unittest
        from selenium import webdriver

        # Base Selenium Test class from which all test cases inherit.
        class BaseSeleniumTest(unittest.TestCase):
            def setUp(self):
                self.browser = webdriver.Firefox()

            def tearDown(self):
                self.browser.close()

    main.py - I want this to be the overall test suite from which all the individual tests are run:

        import unittest
        import test_example

        if __name__ == "__main__":
            SeTestSuite = test_example.TitleSpelling()
            unittest.TextTestRunner(verbosity=2).run(SeTestSuite)

    test_example.py - an example test case; it might be nice to make these run on their own too:

        from base import BaseSeleniumTest

        # Test the spelling of the title
        class TitleSpelling(BaseSeleniumTest):
            def test_a(self):
                self.assertTrue(False)

            def test_b(self):
                self.assertTrue(True)

    The problem is that when I run main.py I get the following error:

        Traceback (most recent call last):
          File "H:\Python\testframework\main.py", line 5, in <module>
            SeTestSuite = test_example.TitleSpelling()
          File "C:\Python27\lib\unittest\case.py", line 191, in __init__
            (self.__class__, methodName))
        ValueError: no such test method in <class 'test_example.TitleSpelling'>: runTest

    I suspect this is due to the very special way in which unittest runs, and I must have missed a trick on how the docs expect me to structure my tests. Any pointers?
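
    The direct cause of that error: a TestCase instance represents one named test, so constructing it with no arguments makes unittest look for a method called runTest. Building the suite through a loader avoids that; a sketch of main.py:

        import unittest
        import test_example

        if __name__ == "__main__":
            # Collect every test_* method of the class into a suite.
            loader = unittest.TestLoader()
            suite = loader.loadTestsFromTestCase(test_example.TitleSpelling)
            unittest.TextTestRunner(verbosity=2).run(suite)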


  • Populate table fields on query execution

    - by Jason
    I'm trying to build an ASP site that populates a table based on a selection in a ListBox. In order to do this, I've created a GridView table inside a div element. Currently the default behavior is to show all the items of the specified table in sortable order. However, I'd like to refine this further to allow display of matches based on the ListBox selection, but am not sure how to execute this. The ListBox fires an OnSelectionChanged event to the method defined below, and the GridView element is defined as:

        <asp:GridView ID="dataListings" runat="server" AllowSorting="True"
            AutoGenerateColumns="False" DataSourceID="LinqDataSource1"
            OnDataBinding="ListBox1_SelectedIndexChanged">

        protected void ListBox1_SelectedIndexChanged(object sender, EventArgs e)
        {
            int itemSelected = selectTopics.SelectedIndex;
            string[] listing = null;
            switch (itemSelected) // assign listing the array of course numbers
            {
                case 0:
                    break;
                case 1:
                    listing = arts;
                    break;
                case 2:
                    listing = currentEvents;
                    break;
                .... More cases here
                default:
                    listing = arts;
                    break;
            }

            using (OLLIDBDataContext odb = new OLLIDBDataContext())
            {
                var q = from c in odb.tbl_CoursesAndWorkshops
                        where listing.Contains(c.tbl_Course_Description.tbl_CoursesAndWorkshops.course_workshop_number)
                        select c;
                dataListings.DataSource = q;
                dataListings.DataBind();
            }
        }

    However, this method never gets fired. I can see a request being made when changing the selection, but setting a breakpoint at the method declaration does nothing at all. Based on this setup, I have three related questions:

        1. What do I need to modify to get the OnSelectionChanged event handler to fire?
        2. How can I alter the GridView area to be empty on page load?
        3. How do I send the results from the dataListings.DataBind() execution to show in the GridView?
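
    On question 1, one thing worth checking (a sketch; note the markup above wires the handler to the GridView's OnDataBinding, not to the ListBox): a ListBox raises SelectedIndexChanged on the server only when it posts back, so the declaration would look something like:

        <asp:ListBox ID="selectTopics" runat="server" AutoPostBack="True"
            OnSelectedIndexChanged="ListBox1_SelectedIndexChanged" />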


  • How not to lose binding source updates?

    - by Fyodor Soikin
    Suppose I have a modal dialog with a textbox and OK/Cancel buttons, and it is built on MVVM: it has a ViewModel object with a string property that the textbox is bound to. Say I enter some text in the textbox, then grab my mouse and click "OK". Everything works fine: at the moment of the click, the textbox loses focus, which causes the binding engine to update the ViewModel's property. I get my data; everybody's happy.

    Now suppose I don't use my mouse. Instead, I just hit Enter on the keyboard. This also causes the "OK" button to "click", since it is marked as IsDefault="True". But guess what? The textbox does not lose focus in this case, and therefore the binding engine remains innocently ignorant, and I don't get my data. Dang!

    Another variation of the same scenario: suppose I have a data entry form right in the main window, enter some data into it, and then hit Ctrl+S for "Save". Guess what? My latest entry doesn't get saved!

    This may be somewhat remedied by using UpdateSourceTrigger=PropertyChanged, but that is not always possible. One obvious case is the use of StringFormat with binding: the text keeps jumping back into the "formatted" state as I'm trying to enter it. Another case, which I have encountered myself, is when I have some time-consuming processing in the viewmodel's property setter, and I only want to perform it when the user is "done" entering text.

    This seems like an eternal problem: I remember trying to solve it systematically ages ago, ever since I started working with interactive interfaces, but I've never quite succeeded. In the past, I always ended up using some sort of hacks, like adding an "EnsureDataSaved" method to every "presenter" (as in "MVP") and calling it at "critical" points, or something like that... But with all the cool technologies (as well as empty hype) of WPF, I expected they'd have come up with some good solution.
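
    The workaround many WPF code bases settle on (a sketch, not an official solution): before running the Save/OK logic, explicitly flush the focused element's pending edit into its binding source:

        // Inside the Save/OK handler, before using the ViewModel's data.
        // TextBox-only for brevity; other editors need their own property.
        var focused = Keyboard.FocusedElement as TextBox;
        if (focused != null)
        {
            var binding = focused.GetBindingExpression(TextBox.TextProperty);
            if (binding != null)
                binding.UpdateSource(); // push the pending text to the ViewModel
        }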


  • Why would using a Temp table be faster than a nested query?

    - by Mongus Pong
    We are trying to optimise some of our queries. One query does the following:

        SELECT t.TaskID, t.Name as Task, '' as Tracker, t.ClientID, (<complex subquery>) Date,
        INTO [#Gadget]
        FROM task t

        SELECT TOP 500 TaskID, Task, Tracker, ClientID, dbo.GetClientDisplayName(ClientID) as Client
        FROM [#Gadget]
        ORDER BY CASE WHEN Date IS NULL THEN 1 ELSE 0 END, Date ASC

        DROP TABLE [#Gadget]

    (I have removed the complex subquery because I don't think it's relevant, other than to explain why this query was done as a two-stage process.) Now, I would have thought it would be far more efficient to merge this down into a single query using subqueries, as:

        SELECT TOP 500 TaskID, Task, Tracker, ClientID, dbo.GetClientDisplayName(ClientID)
        FROM
        (
            SELECT t.TaskID, t.Name as Task, '' as Tracker, t.ClientID, (<complex subquery>) Date,
            FROM task t
        ) as sub
        ORDER BY CASE WHEN Date IS NULL THEN 1 ELSE 0 END, Date ASC

    This would give the optimiser better information to work out what is going on and avoid any temporary tables. It should be faster. But it turns out it is a lot slower: 8 seconds vs under 5 seconds. I can't work out why this would be the case, as all my knowledge of databases implies that subqueries should always be faster than using temporary tables. Can anyone explain what could be going on?


  • iPhone -- maintaining a list of strings and a corresponding typedef enum

    - by William Jockusch
    Suppose I have the following:

        typedef enum functionType {ln, sin, sqrt} functionType;
        NSArray *functions = [NSArray arrayWithObjects: @"ln", @"sin", @"sqrt", nil];

    Suppose further that *functions will not change at runtime. Question: is there any way to set up a single structure which updates both of these, so that I only have to keep track of one list instead of two?

    To explain what is going on: the idea is that string input from the user will be stored in a variable of type functionType. Later on, I will have code like this:

        - (double)valueOfFunction:(functionType)function withInput:(double)input {
            switch (function) {
                case ln:   return ln(input);
                case sin:  return sin(input);
                case sqrt: return sqrt(input);
                // etc... could grow to include a lot of functions.
            }
        }

    And valueOfFunction needs to be fast, so I don't want to be doing string comparisons there.
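
    One standard single-source-of-truth trick for exactly this is an X-macro. A sketch (note the enumerators are renamed with an fn_ prefix here, since plain ln/sin/sqrt would collide with the C math functions being called in the switch):

        #define FUNCTION_LIST \
            X(ln)             \
            X(sin)            \
            X(sqrt)

        typedef enum {
        #define X(name) fn_##name,
            FUNCTION_LIST
        #undef X
        } functionType;

        // Expands to @"ln", @"sin", @"sqrt": one edit point updates both lists.
        static NSString *const functionNames[] = {
        #define X(name) @#name,
            FUNCTION_LIST
        #undef X
        };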


  • What is the jQuery equivalent of Document.createElementNS()?

    - by C.W.Holeman II
    What is the jQuery equivalent of Document.createElementNS()?

        function emleGraphicToSvg(aGraphicNode) {
            var lu = function luf(aPrefix) {
                switch (aPrefix) {
                    case 'xhtml': return 'http://www.w3.org/1999/xhtml';
                    case 'math':  return 'http://www.w3.org/1998/Math/MathML';
                    case 'svg':   return 'http://www.w3.org/2000/svg';
                }
                return '';
            };

            var svg = document.evaluate("svg:svg", aGraphicNode, lu,
                    XPathResult.FIRST_ORDERED_NODE_TYPE, null).singleNodeValue;
            $(svg).children().remove();

            rect = document.createElementNS(lu('svg'), "rect");
            rect.setAttribute("x", "35");
            rect.setAttribute("y", "25");
            rect.setAttribute("width", "200");
            rect.setAttribute("height", "50");
            rect.setAttribute("class", "emleGraphicOutline");
            svg.appendChild(rect);
        }

    The code is a simplified fragment from the Emle (Electronic Mathematics Laboratory Equipment) JavaScript file emle_lab.js. The look-up function, luf(), maps a short prefix to the complete namespace reference, for use both in the XPath expression and in createElementNS(). The existing svg:svg element is located, its children are removed, and a new rectangle is added.
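
    jQuery has no createElementNS wrapper; the usual idiom (a sketch, not part of Emle) is to create the namespaced element with the DOM API and hand the result to jQuery for everything else:

        // Create with the DOM, decorate and insert with jQuery.
        var rect = document.createElementNS('http://www.w3.org/2000/svg', 'rect');
        $(rect).attr({ x: 35, y: 25, width: 200, height: 50, 'class': 'emleGraphicOutline' })
               .appendTo(svg);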


  • http post request with cross-origin in javascript

    - by Calamarico
    I have a problem with an HTTP POST call in Firefox. I know that when a request is cross-origin, Firefox first does an OPTIONS request before the POST to learn the Access-Control-Allow-* headers. With this code I don't have any problem:

        Net.requestSpeech.prototype.post = function(url, data) {
            if (this.xhr != null) {
                this.xhr.open("POST", url);
                this.xhr.onreadystatechange = Net.requestSpeech.eventFunction;
                this.xhr.setRequestHeader("Content-Type", "application/json; charset=utf-8");
                this.xhr.send(data);
            }
        }

    I test this code with a simple HTML page that invokes the function. Everything is OK: I get the responses to both the OPTIONS and the POST, and I process the response. But when I integrate this code into an existing application that uses jQuery (I don't know if that is relevant), then when send(data) executes, the browser (Firefox) does the same thing: it first makes the OPTIONS request, but in this case it doesn't receive the response from the server, and it puts this message in the console:

        [18:48:13.529] OPTIONS http://localhost:8111/ [undefined 31ms]

    The "undefined" is there because no response is received, yet the code is the same. I don't know why the OPTIONS request gets no response in this case. Does someone have an idea? I debugged my server app and the OPTIONS arrives at the server OK, but it seems like the browser doesn't wait for the response.

    Edit, later: I think the problem is this. When I run from a simple HTML page with a SCRIPT tag that invokes the method doing the request, it runs fine. In the app that doesn't receive the response, I have a form that fires an onsubmit event, and I think the submit returns very quickly, so the browser doesn't have time to get the OPTIONS response.

    Edit, later still: WTF, I "resolved" the problem by making the POST request synchronous:

        this.xhr.open("POST", url, false);

    The submit completes very quickly and can't wait for the OPTIONS response in the browser. Any idea other than this?
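
    If the submit navigation really is tearing down the page before the preflight answer arrives, the usual fix (a sketch; the form id and the request construction are assumptions) is to cancel the default submit and let the asynchronous POST complete:

        // Sketch: keep the page alive so OPTIONS + POST can finish.
        $('#speechForm').submit(function (e) {
            e.preventDefault(); // stop the navigation that kills the request
            new Net.requestSpeech().post('http://localhost:8111/',
                                         JSON.stringify({ text: 'hello' }));
        });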

