Search Results

Search found 22641 results on 906 pages for 'use case'.


  • Why does my IMultiValueConverter get an array of strings when used to set TextBlock.Text?

    - by mcohen75
    Hi- I'm trying to use a MultiBinding with a converter where the child elements also have a converter. The XAML looks like so: <TextBlock> <TextBlock.Text> <MultiBinding Converter="{StaticResource localizedMessageConverter}" ConverterParameter="{x:Static res:Resources.RecordsFound}" > <Binding Converter="{StaticResource localizedMessageParameterConverter}" ConverterParameter="ALIAS" Path="Alias" Mode="OneWay" /> <Binding Converter="{StaticResource localizedMessageParameterConverter}" ConverterParameter="COUNT" Path="Count" Mode="OneWay" /> </MultiBinding> </TextBlock.Text> The problem I'm facing here is, whenever this is used with a TextBlock to specify the Text property, my IMultiValueConverter implementation gets an object collection of strings instead of the class returned by the IValueConverter. It seems that the ToString() method is called on the result of the inner converter and passed to the IMultiValueConverter. If used to specify the Content property of Label, all is well. It seems to me that the framework is assuming that the return type will be string, but why? I can see this for the MultiBinding since it should yield a result that is compatible with TextBlock.Text, but why would this also be the case for the Bindings inside a MultiBinding? If I remove the converter from the inner Binding elements, the native types are returned. In my case string and int.

    Read the article

  • Test multiple domains using ASP.NET development server

    - by Pete Lunenfeld
    I am developing a single web application that will dynamically change its content depending on which domain name is used to reach the site. Multiple domains will point to the same application. I wish to use the following code (or something close) to detect the domain name and perform the customizations: string theDomainName = Request.Url.Host; switch (theDomainName) { case "www.clientone.com": // do stuff break; case "www.clienttwo.com": // do other stuff break; } I would like to test the functionality of the above using the ASP.NET development server. I created mappings in the local HOSTS file to map www.clientone.com to 127.0.0.1, and www.clienttwo.com to 127.0.0.1. I then browse to the application with the browser using www.clientone.com (etc.). When I try to test this code using the ASP.NET development server, the URL always says localhost. It does NOT capture the host entered in the browser, only localhost. Is there a way to test the URL detection functionality using the development server? Thanks.

    Read the article

  • Combine regular expressions for splitting camelCase string into words

    - by stou
    I managed to implement a function that converts camel case to words, by using the solution suggested by @ridgerunner in this question: Split camelCase word into words with php preg_match (Regular Expression) However, I want to also handle embedded abbreviations like this: 'hasABREVIATIONEmbedded' translates to 'Has ABREVIATION Embedded' I came up with this solution: <?php function camelCaseToWords($camelCaseStr) { // Convert: "TestASAPTestMore" to "TestASAP TestMore" $abreviationsPattern = '/' . // Match position between UPPERCASE "words" '(?<=[A-Z])' . // Position is after group of uppercase, '(?=[A-Z][a-z])' . // and before group of lowercase letters, except the last upper case letter in the group. '/x'; $arr = preg_split($abreviationsPattern, $camelCaseStr); $str = implode(' ', $arr); // Convert "TestASAP TestMore" to "Test ASAP Test More" $camelCasePattern = '/' . // Match position between camelCase "words". '(?<=[a-z])' . // Position is after a lowercase, '(?=[A-Z])' . // and before an uppercase letter. '/x'; $arr = preg_split($camelCasePattern, $str); $str = implode(' ', $arr); $str = ucfirst(trim($str)); return $str; } $inputs = array( 'oneTwoThreeFour', 'StartsWithCap', 'hasConsecutiveCAPS', 'ALLCAPS', 'ALL_CAPS_AND_UNDERSCORES', 'hasABREVIATIONEmbedded', ); echo "INPUT"; foreach($inputs as $val) { echo "'" . $val . "' translates to '" . camelCaseToWords($val). "'\n"; } The output is: INPUT'oneTwoThreeFour' translates to 'One Two Three Four' 'StartsWithCap' translates to 'Starts With Cap' 'hasConsecutiveCAPS' translates to 'Has Consecutive CAPS' 'ALLCAPS' translates to 'ALLCAPS' 'ALL_CAPS_AND_UNDERSCORES' translates to 'ALL_CAPS_AND_UNDERSCORES' 'hasABREVIATIONEmbedded' translates to 'Has ABREVIATION Embedded' It works as intended. My question is: Can I combine the 2 regular expressions $abreviationsPattern and $camelCasePattern so I can avoid running the preg_split() function twice?
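
    One possible way to combine them is to alternate the two zero-width assertions in a single pattern, so preg_split() runs once. A minimal sketch (checked against the inputs above, but not exhaustively tested):

        <?php
        // Combined pattern: split either at a lower->UPPER boundary, or before the
        // last capital of an UPPERCASE run that is followed by a lowercase letter.
        function camelCaseToWordsCombined($camelCaseStr)
        {
            $pattern = '/(?<=[a-z])(?=[A-Z])|(?<=[A-Z])(?=[A-Z][a-z])/';
            $str = implode(' ', preg_split($pattern, $camelCaseStr));
            return ucfirst(trim($str));
        }

        echo camelCaseToWordsCombined('hasABREVIATIONEmbedded') . "\n"; // Has ABREVIATION Embedded
        echo camelCaseToWordsCombined('oneTwoThreeFour') . "\n";        // One Two Three Four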

    Read the article

  • what exactly is the danger of an uninitialized pointer in C

    - by akh2103
    I am trying to get a handle on C as I work my way through Trevor Jim's "Cyclone: A safe dialect of C" for a PL class. Jim and his co-authors are trying to make a safe version of C, so they eliminate uninitialized pointers in their language. Googling around a bit on uninitialized pointers, it seems like un-initialized pointers point to random locations in memory. It seems like this alone makes them unsafe. If you dereference an un-initialized pointer, you jump to an unsafe part of memory. Period. But the way Jim talks about them seems to imply that it is more complex. He cites the following code, and explains that when the function FrmGetObjectIndex dereferences f, it isn’t accessing a valid pointer, but rather an unpredictable address — whatever was on the stack when the space for f was allocated. What does Jim mean by "whatever was on the stack when the space for f was allocated"? Are "un-initialized" pointers initialized to random locations in memory by default? Or does their "random" behavior have to do with the memory allocated for these pointers getting filled with strange values (that are then referenced) because of unexpected behavior on the stack? Form *f; switch (event->eType) { case frmOpenEvent: f = FrmGetActiveForm(); ... case ctlSelectEvent: i = FrmGetObjectIndex(f, field); ... }
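
    What he most likely means is the latter: nothing initializes f at declaration time, so it holds whatever bytes an earlier call happened to leave in that stack slot. A hedged sketch of the effect (reading an uninitialized variable is formally undefined behaviour, so the output depends on compiler and optimization level):

        #include <stdio.h>

        void leave_value_on_stack(void)
        {
            void *planted = (void *)0x1234;   /* writes a recognisable value into a stack slot */
            printf("planted  %p\n", planted);
        }

        void read_uninitialized(void)
        {
            void *p;                          /* never assigned: undefined behaviour to read it */
            printf("p equals %p\n", p);       /* often shows the stale bytes left by the caller */
        }

        int main(void)
        {
            leave_value_on_stack();
            read_uninitialized();
            return 0;
        }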

    Read the article

  • Getting a connection from a Sybase datasource in WAS 6.1 fails with message "User name property missing"

    - by Abel Morelos
    I have a standalone application that needs to connect to a Sybase database via a datasource. I'm trying to call getConnection() and get the connection from this Sybase datasource, which is hosted in WAS 6.1, but sadly I'm getting error JZ004. According to the Sybase(R) jConnect for JDBC(TM) Programmer's Reference (SQL Exception and Warning Messages), the JZ004 error message is: "User name property missing in DriverManager.getConnection(..., Properties). Action: Provide the required user property." As you can see, this is not a connectivity problem (so we can discard JNDI or lookup problems), but rather a configuration problem. For my Sybase datasource in WAS 6.1 I have set up the proper authentication alias (Component-managed Authentication Alias), and I know the credentials are alright; "Test Connection" is successful for this datasource. Somebody had a similar problem and it was because of the authentication alias: http://forum.springsource.org/showthread.php?t=39915 Next, I tried calling getConnection() but this time I provided the credentials, like getConnection(user, password)... and this time it worked!!! So I suspect that somehow WAS 6.1 is not picking up the authentication info I set on the datasource as mentioned before. If you think that maybe getConnection(user, password) should be OK for my case, well, that's not the case, since I have a requirement to keep the credentials on the server; the standalone application only needs to know the JNDI information to look up the datasource. Please let me know if you have faced a similar problem, or what you would suggest I do. Thanks.

    Read the article

  • Will rel=canonical break site: queries?

    - by Justin Grant
    Our company publishes our software product's documentation using a custom-built content management system with a dynamic URL namespace like this: http://ourproduct.com/documentation/version/pageid Where "version" is the version number to which the documentation applies, and "pageid" is a unique string which identifies that page in our back-end content management system. For example, if content (e.g. a page about configuration best practices) is unchanged between versions 3.0 and 4.0 of our product, it'd be reachable by two different URLs: http://ourproduct.com/documentation/3.0/configuration-best-practices http://ourproduct.com/documentation/4.0/configuration-best-practices This URL scheme allows us to scope Google search results to see only documentation for a particular product version, like this: configuration site:ourproduct.com/documentation/4.0 But when the user is searching across all versions, we don't want Google to arbitrarily choose one of the URLs to show in results. Instead, we always want the latest version to show up. Hence our planned use of rel=canonical so we can prescriptively tell Google which URL we want to show up if multiple versions are being searched. (Users who do oddball things like searching 2 versions but not all of them are a corner case, so we don't care which version(s) show up in that case -- the primary use cases we care about are searching one version or searching all versions.) But what will happen to scoped searches if we do this? If my rel=canonical URL points to version 4.0, but my search is scoped to 3.0, will Google return a result? Even if you don't know the answer offhand, do you know a site which uses rel=canonical to redirect across folders in a URL namespace? If so, I could run a few Google searches and figure out the answer.
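
    For reference, the plan described above amounts to emitting a tag like the following on every versioned copy of the page (URLs are the hypothetical ones from the question); the open question is how Google treats it inside a site:-scoped search:

        <link rel="canonical"
              href="http://ourproduct.com/documentation/4.0/configuration-best-practices" />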

    Read the article

  • Django + jQuery: Sometimes AJAX, but always DRY?

    - by Justin Myles Holmes
    Let's say I have an app (in Django) for which I want to sometimes (but not always) load content via ajax. An easy example is logging in. When the user logs in, I don't want to refresh the page, just change things around. Yet, if they are already logged in, and then arrive at (or refresh) the same page, I want it to show the same content. So, in the first case, obviously I do some sort of ajax login and load changes to the page accordingly. Easy enough. But what about in the second case? Do I go back through and add {% if user.authenticated %} all over the place? This seems cold, dark, and WET. On the other hand, I can just wrap all the ajaxy stuff in a javascript function, called loggedIn(), and run that if the user is authenticated. But then I'm faced with two http requests instead of one. Also undesirable. So what's the standard solution here?
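
    One commonly used pattern, sketched below with made-up view and template names (Django 1.3+ shortcuts assumed): keep the logged-in markup in a single partial template, include it from the full page template on a normal request, and have the AJAX login view return just that rendered partial for jQuery to insert. That keeps the markup in one place without a second request after login.

        from django.contrib.auth import authenticate, login
        from django.shortcuts import render

        def page(request):
            # Normal request: page.html does {% include "logged_in_panel.html" %}
            # inside an {% if user.is_authenticated %} block, written exactly once.
            return render(request, "page.html")

        def ajax_login(request):
            user = authenticate(username=request.POST["username"],
                                password=request.POST["password"])
            if user is not None:
                login(request, user)
            # The jQuery success handler swaps this rendered partial into the page.
            return render(request, "logged_in_panel.html")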

    Read the article

  • Serializing a part of object graph

    - by Felix
    Hi all, I have a problem regarding Java custom serialization. I have a graph of objects and want to configure where to stop when I serialize a root object from client to server. Let's make it a bit more concrete by giving a sample scenario. I have classes of type Company, Employee (abstract), Manager extends Employee, Secretary extends Employee, Analyst extends Employee, and Project. Here are the relations: Company(1)---(n)Employee, Manager(1)---(n)Project, Analyst(1)---(n)Project. Imagine I'm on the client side and I want to create a new company, assign it 10 employees (new or some existing) and send this new company to the server. What I expect in this scenario is to serialize the company and all of the bound employees to the server side, because I'll save the relations in the database. So far no problem, since the default Java serialization mechanism serializes the whole object graph, excluding the fields which are static or transient. My goal is about the following scenario. Imagine I loaded a company and its 1000 employees from the server to the client side. Now I only want to rename the company's name (or some other field that directly belongs to the company) and update this record. This time, I want to send only the company object to the server side and not the whole list of employees (I just update the name; the employees are irrelevant in this use case). My aim also includes the configurability of saying: transfer the company AND the employees, but not the Project relations, you must stop there. Do you know any possibility of achieving this in a generic way, without implementing writeObject/readObject for every single entity object? What would be your suggestions? I would really appreciate your answers. I'm open to any ideas and am ready to answer your questions in case something is not clear.
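
    One generic mechanism that avoids per-entity writeObject/readObject is a custom ObjectOutputStream whose replaceObject() prunes the graph at configurable classes. A rough sketch (the class name and the "write pruned references as null" policy are made up for illustration):

        import java.io.IOException;
        import java.io.ObjectOutputStream;
        import java.io.OutputStream;
        import java.util.Set;

        public class PruningObjectOutputStream extends ObjectOutputStream {

            private final Set<Class<?>> stopAt;   // e.g. a set containing Project.class to cut Project relations

            public PruningObjectOutputStream(OutputStream out, Set<Class<?>> stopAt) throws IOException {
                super(out);
                this.stopAt = stopAt;
                enableReplaceObject(true);        // ask the stream to consult replaceObject()
            }

            @Override
            protected Object replaceObject(Object obj) {
                for (Class<?> cls : stopAt) {
                    if (cls.isInstance(obj)) {
                        return null;              // the reference is written as null, cutting the graph here
                    }
                }
                return obj;
            }
        }

    The client would then serialize with something like new PruningObjectOutputStream(out, cutClasses).writeObject(company); the server-side code has to expect that pruned references arrive as null.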

    Read the article

  • QUnit Unit Testing: Test Mouse Click

    - by Ngu Soon Hui
    I have the following HTML code: <div id="main"> <form Id="search-form" action="/ViewRecord/AllRecord" method="post"> <div> <fieldset> <legend>Search</legend> <p> <label for="username">Staff name</label> <input id="username" name="username" type="text" value="" /> <label for="softype"> software type</label> <input type="submit" value="Search" /> </p> </fieldset> </div> </form> </div> And the following JavaScript code (with jQuery as the library): $(function() { $("#username").click(function() { $.getJSON("ViewRecord/GetSoftwareChoice", {}, function(data) { // use data to manipulate other controls }); }); }); Now, how do I test $("#username").click so that, for a given input, it calls the correct URL (in this case ViewRecord/GetSoftwareChoice) and the callback (in this case function(data)) behaves correctly? Any idea how to do this with QUnit? Edit: I read the QUnit examples, but they seem to deal with a simple scenario with no AJAX interaction. And although there are ASP.NET MVC examples, I think they are really testing the output of the server to an AJAX call, i.e., still testing the server response, not the AJAX response. What I want is how to test the client-side response.
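
    One way to keep the test purely client-side is to stub $.getJSON for the duration of the test, record the URL it was called with, and drive the success callback with canned data. A sketch against the QUnit 1.x API (the canned data shape is made up):

        test("clicking #username requests the software choices", function () {
            var realGetJSON = $.getJSON;
            var requestedUrl = null;

            $.getJSON = function (url, params, callback) {
                requestedUrl = url;
                callback({ choices: ["Type A", "Type B"] });   // canned server response
            };

            $("#username").trigger("click");

            ok(requestedUrl === "ViewRecord/GetSoftwareChoice", "correct URL requested");
            // ...assert here on whatever the real callback changes in the DOM...

            $.getJSON = realGetJSON;                           // restore the real jQuery method
        });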

    Read the article

  • Printing HTML blocks

    - by Lem0n
    I have a page with a few tables that I want to be printable. I want the following behaviour: 1) Add a page break if the next table fits in a single page, but won't fit in the current page (because of other stuff already printed on this page) 2) Print the "table header" again in case a table does need to be broken (I guess this is the default behaviour) Any ideas, especially on the first issue? Maybe some CSS can help? I'll give an example. I have a page with 4 tables, all of them with 10 lines, except the third one, which has 50 lines. The first and second go on the first page. Since the third one won't fit on the same page, but will fit on a page alone, it's printed on a page alone... and then the fourth table is printed on the third page (in case it doesn't fit on the second page). But if the third table had 300 lines and would be broken anyway, it could have started printing on the first page.
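
    A minimal print-stylesheet sketch for both points (browser support varies, particularly for page-break-inside on tables):

        @media print {
          table { page-break-inside: avoid; }      /* start a new page rather than split a table that would fit */
          thead { display: table-header-group; }   /* repeat the header rows when a table must break anyway */
        }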

    Read the article

  • Passing Variable Length Arrays to a function

    - by David Bella
    I have a variable length array that I am trying to pass into a function. The function will shift the first value off and return it, and move the remaining values over to fill in the missing spot, putting, let's say, a -1 in the newly opened spot. I have no problem passing an array declared like so: int framelist[128]; shift(framelist); However, I would like to be able to use a VLA declared in this manner: int *framelist; framelist = malloc(size * sizeof(int)); shift(framelist); I can populate the arrays the same way outside the function call without issue, but as soon as I pass them into the shift function, the one declared in the first case works fine, but the one in the second case immediately gives a segmentation fault. Here is the code for the shift function, which doesn't do anything except try to grab the value from the first part of the array... int shift(int array[]) { int value = array[0]; return value; } Any ideas why it won't accept the VLA? I'm still new to C, so if I am doing something fundamentally wrong, let me know.
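
    A heap-allocated int* passes to shift() exactly like the fixed-size array, so the crash is more likely a failed or undersized malloc (or size being 0) than the parameter passing itself. A minimal self-contained check, sketched with made-up values:

        #include <stdio.h>
        #include <stdlib.h>

        int shift(int array[])            /* identical to: int shift(int *array) */
        {
            return array[0];
        }

        int main(void)
        {
            size_t size = 8;
            int *framelist = malloc(size * sizeof *framelist);
            if (framelist == NULL) {      /* passing an unchecked NULL to shift() would segfault */
                return 1;
            }
            framelist[0] = 42;
            printf("%d\n", shift(framelist));
            free(framelist);
            return 0;
        }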

    Read the article

  • How do I define a Calculated Measure in MDX based on a Dimension Attribute?

    - by ShaneD
    I would like to create a calculated measure that sums up only a specific subset of records in my fact table, based on a dimension attribute. Given:
    Dimensions: Date; LedgerLineItem {Charge, Payment, Write-Off, Copay, Credit}
    Measures: LedgerAmount
    Relationships: LedgerLineItem is a degenerate dimension of FactLedger
    If I break down LedgerAmount by LedgerLineItem.Type I can easily see how much is charged, paid, credited, etc., but when I do not break it down by LedgerLineItem.Type I cannot easily add the charges, payments, credits, etc. into a pivot table. I would like to create separate calculated measures that sum only a specific type (or multiple types) of ledger facts. An example of the desired output would be:
      | Year  | Charged | Total Paid | Amount - Ledger |
      | 2008  | $1000   | $600       | -$400           |
      | 2009  | $2000   | $1500      | -$500           |
      | Total | $3000   | $2100      | -$900           |
    I have tried to create the calculated measure a couple of ways, and each one works in some circumstances but not in others. Now, before anyone says do this in ETL: I have already done it in ETL and it works just fine. What I am trying to do, as part of learning to understand MDX better, is to figure out how to duplicate what I have done in the ETL in MDX, and so far I am unable to do that. Here are two attempts I have made and the problems with them. This works only when ledger type is in the pivot table. It returns the correct amount of the ledger entries (although in this case it is identical to [amount - ledger]), but when I try to remove type and just get the sum of all ledger entries it returns unknown. CASE WHEN ([Ledger].[Type].currentMember = [Ledger].[Type].&[Credit]) OR ([Ledger].[Type].currentMember = [Ledger].[Type].&[Paid]) OR ([Ledger].[Type].currentMember = [Ledger].[Type].&[Held Money: Copay]) THEN [Measures].[Amount - ledger] ELSE 0 END This works only when ledger type is not in the pivot table. It always returns the total payment amount, which is incorrect when I am slicing by type, as I would only expect to see the credit portion under credit, the paid portion under paid, $0 under charge, etc. sum({([Ledger].[Type].&[Credit]), ([Ledger].[Type].&[Paid]), ([Ledger].[Type].&[Held Money: Copay])}, [Measures].[Amount - ledger]) Is there any way to make this return the correct numbers regardless of whether Ledger.Type is included in my pivot table or not?

    Read the article

  • Poor execution plans when using a filter and CONTAINSTABLE in a query

    - by Paul McLoughlin
    We have an interesting problem that I was hoping someone could help to shed some light on. At a high level the problem is as below: The following query executes quickly (1 second): SELECT SA.* FROM cg.SEARCHSERVER_ACTYS AS SA JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1 ON T1.[Key]=SA.UNIQUE_ID but if we add a filter to the query, then it takes approximately 2 minutes to return: SELECT SA.* FROM cg.SEARCHSERVER_ACTYS AS SA JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1 ON T1.[Key]=SA.UNIQUE_ID WHERE SA.CHG_DATE > '19 Feb 2010' Looking at the execution plan for the two queries, I can see that in the second case there are two places where there are huge differences between the actual and estimated number of rows, these being: 1) For the FulltextMatch table valued function, where the estimate is approx 22,000 rows and the actual is 29 million rows (which are then filtered down to 1670 rows before the join), and 2) For the index seek on the full text index, where the estimate is 1 row and the actual is 13,000 rows. As a result of the estimates, the optimiser is choosing to use a nested loops join (since it assumes a small number of rows), hence the plan is inefficient. We can work around the problem by either (a) parameterising the query and adding an OPTION (OPTIMIZE FOR UNKNOWN) to the query or (b) forcing a HASH JOIN to be used. In both of these cases the query returns in sub 1 second and the estimates appear reasonable. My question really is 'why are the estimates used in the poorly performing case so wildly inaccurate and what can be done to improve them'? Statistics are up to date on the indexes on the indexed view being used here. Any help greatly appreciated.
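
    For completeness, the hash-join workaround mentioned above looks like this (query copied from the question; the hint syntax is standard T-SQL):

        SELECT SA.*
        FROM cg.SEARCHSERVER_ACTYS AS SA
        JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1
            ON T1.[Key] = SA.UNIQUE_ID
        WHERE SA.CHG_DATE > '19 Feb 2010'
        OPTION (HASH JOIN);  -- or parameterise the date and use OPTION (OPTIMIZE FOR UNKNOWN)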

    Read the article

  • Problem with fork exec kill when redirecting output in perl

    - by Edu
    I created a script in perl to run programs with a timeout. If the program being executed takes longer then the timeout than the script kills this program and returns the message "TIMEOUT". The script worked quite well until I decided to redirect the output of the executed program. When the stdout and stderr are being redirected, the program executed by the script is not being killed because it has a pid different than the one I got from fork. It seems perl executes a shell that executes my program in the case of redirection. I would like to have the output redirection but still be able to kill the program in the case of a timeout. Any ideas on how I could do that? A simplified code of my script is: #!/usr/bin/perl use strict; use warnings; use POSIX ":sys_wait_h"; my $timeout = 5; my $cmd = "very_long_program 1>&2 > out.txt"; my $pid = fork(); if( $pid == 0 ) { exec($cmd) or print STDERR "Couldn't exec '$cmd': $!"; exit(2); } my $time = 0; my $kid = waitpid($pid, WNOHANG); while ( $kid == 0 ) { sleep(1); $time ++; $kid = waitpid($pid, WNOHANG); print "Waited $time sec, result $kid\n"; if ($timeout > 0 && $time > $timeout) { print "TIMEOUT!\n"; #Kill process kill 9, $pid; exit(3); } } if ( $kid == -1) { print "Process did not exist\n"; exit(4); } print "Process exited with return code $?\n"; exit($?); Thanks for any help.
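
    One way to keep both the redirection and the kill is to do the redirection inside the child with open() and then exec the program directly (no shell metacharacters in the command), so $pid really is the program's pid. A sketch of just the child branch:

        my $pid = fork();
        die "fork failed: $!" unless defined $pid;

        if ( $pid == 0 ) {
            open STDOUT, '>', 'out.txt' or die "can't redirect stdout: $!";
            open STDERR, '>&', \*STDOUT or die "can't redirect stderr: $!";
            # A command with no shell metacharacters is exec'd directly,
            # so no intermediate /bin/sh process is created.
            exec('very_long_program') or print STDERR "Couldn't exec: $!";
            exit(2);
        }
        # ...the existing waitpid/kill loop now signals very_long_program itself.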

    Read the article

  • How should I read from a buffered reader?

    - by Roman
    I have the following example of reading from a buffered reader: while ((inputLine = input.readLine()) != null) { System.out.println("I got a message from a client: " + inputLine); } The code in the loop (the println) will be executed whenever something appears in the buffered reader (input in this case). In my case, if a client application writes something to the socket, the code in the loop (in the server application) will be executed. But I do not understand how it works. inputLine = input.readLine() waits until something appears in the buffered reader, and when something appears there it returns the line (so the != null check passes) and the code in the loop is executed. But when can null be returned? There is another question. The above code was taken from a method which throws Exception, and I use this code in the run method of a Thread. When I try to put throws Exception before run, the compiler complains: overridden method does not throw exception. Without the throws Exception I get another complaint from the compiler: unreported exception. So, what can I do?
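
    readLine() blocks until a full line arrives and returns null once the peer closes the connection (end of stream), so the null check is the normal way the loop ends. Because run() cannot declare checked exceptions, the IOException has to be caught inside run(). A sketch, assuming the socket is a field of the handler class:

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;
        import java.net.Socket;

        class ClientHandler implements Runnable {

            private final Socket socket;

            ClientHandler(Socket socket) { this.socket = socket; }

            @Override
            public void run() {
                try {
                    BufferedReader input =
                        new BufferedReader(new InputStreamReader(socket.getInputStream()));
                    String inputLine;
                    while ((inputLine = input.readLine()) != null) {   // null => client closed the stream
                        System.out.println("I got a message from a client: " + inputLine);
                    }
                } catch (IOException e) {
                    e.printStackTrace();   // run() cannot rethrow a checked exception
                }
            }
        }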

    Read the article

  • Core data migration failing with "Can't find model for source store" but managedObjectModel for source is present

    - by Ira Cooke
    I have a cocoa application using core-data, which is now at the 4th version of its managed object model. My managed object model contains abstract entities but so far I have managed to get migration working by creating appropriate mapping models and creating my persistent store using addPersistentStoreWithType:configuration:options:error and with the NSMigratePersistentStoresAutomaticallyOption set to YES. NSDictionary *optionsDictionary = [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES] forKey:NSMigratePersistentStoresAutomaticallyOption]; NSURL *url = [NSURL fileURLWithPath: [applicationSupportFolder stringByAppendingPathComponent: @"MyApp.xml"]]; NSError *error=nil; [theCoordinator addPersistentStoreWithType:NSXMLStoreType configuration:nil URL:url options:optionsDictionary error:&error] This works fine when I migrate from model version 3 to 4, which is a migration that involves adding attributes to several entities. Now when I try to add a new model version (version 5), the call to addPersistentStoreWithType returns nil and the error remains empty. The migration from 4 to 5 involves adding a single attribute. I am struggling to debug the problem and have checked all the following; The source database is in fact at version 4 and the persistentStoreCoordinator's managed object model is at version 5. The 4-5 mapping model as well as managed object models for versions 4 and 5 are present in the resources folder of my built application. I've tried various model upgrade paths. Strangely I find that upgrading from an early version 3 - 5 works .. but upgrading from 4 - 5 fails. I've tried adding a custom entity migration policy for migration of the entity whose attributes are changing ... in this case I overrode the method beginEntityMapping:manager:error: . Interestingly this method does get called when migration works (ie when I migrate from 3 to 4, or from 3 to 5 ), but it does not get called in the case that fails ( 4 to 5 ). I'm pretty much at a loss as to where to proceed. Any ideas to help debug this problem would be much appreciated.

    Read the article

  • How do I process the configure file when cross-compiling with mingw?

    - by vy32
    I have a small open source program that builds with an autoconf configure script. I ran configure, then tried to compile with: make CC="/opt/local/bin/i386-mingw32-g++" That didn't work because the configure script found include files that were not available to the mingw system. So then I tried: ./configure CC="/opt/local/bin/i386-mingw32-g++" But that didn't work; the configure script gives me this error: ./configure: line 5209: syntax error near unexpected token `newline' ./configure: line 5209: ` *_cv_*' Because of this code: # The following way of writing the cache mishandles newlines in values, # but we know of no workaround that is simple, portable, and efficient. # So, we kill variables containing newlines. # Ultrix sh set writes to stderr and can't be redirected directly, # and sets the high bit in the cache file unless we assign to the vars. ( for ac_var in `(set) 2>&1 | sed -n 's/^\([a-zA-Z_][a-zA-Z0-9_]*\)=.*/\1/p'`; do eval ac_val=\$$ac_var case $ac_val in #( *${as_nl}*) case $ac_var in #( *_cv_* fi Which is generated when AC_OUTPUT is called. Any thoughts? Is there a correct way to do this?
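
    The usual autoconf route for cross-compiling is to declare the target triplet with --host, so configure picks up the whole cross toolchain, and to point CC at the cross C compiler rather than the C++ driver. A sketch (the -gcc path is a guess based on the -g++ path above):

        ./configure --host=i386-mingw32 CC=/opt/local/bin/i386-mingw32-gcc
        make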

    Read the article

  • Avoid the problem with BigDecimal when migrating from Java 1.4 to Java 1.5+

    - by romaintaz
    Hello, I've recently migrated a Java 1.4 application to a Java 6 environment. Unfortunately, I encountered a problem with BigDecimal storage in an Oracle database. To summarize, when I try to store a "7.65E+7" BigDecimal value (76,500,000.00) in the database, Oracle actually stores the value 7,650,000.00. This defect is due to the rewriting of the BigDecimal class in Java 1.5 (see here). In my code, the BigDecimal was created from a double using this kind of code: BigDecimal myBD = new BigDecimal("" + someDoubleValue); someObject.setAmount(myBD); // Now let Hibernate persist my object in DB... In more than 99% of the cases, everything works fine. Except that in a few rare cases, the bug mentioned above occurs. And that's quite annoying. If I change the previous code to avoid the use of the String constructor of BigDecimal, then I do not encounter the bug in my use cases: BigDecimal myBD = new BigDecimal(someDoubleValue); someObject.setAmount(myBD); // Now let Hibernate persist my object in DB... However, how can I be sure that this solution is the correct way to handle the use of BigDecimal? So my question is how I should manage my BigDecimal values to avoid this issue: Should I not use the new BigDecimal(String) constructor and use new BigDecimal(double) directly? Should I force Oracle to use toPlainString() instead of the toString() method when dealing with BigDecimal (and in this case, how do I do that)? Any other solution? Environment information: Java 1.6.0_14 Hibernate 2.1.8 (yes, it is a quite old version) Oracle JDBC 9.0.2.0 and also tested with 10.2.0.3.0 Oracle database 10.2.0.3.0
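
    The three representations in play, as a small demo (the output comments reflect Java 5+ behaviour; on Java 1.4, toString() never produced exponential notation, which is why the old code behaved differently):

        import java.math.BigDecimal;

        public class BigDecimalNotationDemo {

            public static void main(String[] args) {
                double someDoubleValue = 7.65E+7;                   // 76,500,000

                BigDecimal fromString = new BigDecimal("" + someDoubleValue);
                BigDecimal fromDouble = new BigDecimal(someDoubleValue);

                System.out.println(fromString.toString());          // 7.65E+7  (what an old driver can mis-parse)
                System.out.println(fromString.toPlainString());     // 76500000
                System.out.println(fromDouble.toString());          // 76500000 (exact here, but beware values like 0.1)
            }
        }

    Note that BigDecimal.valueOf(someDoubleValue) is the usual replacement for new BigDecimal("" + someDoubleValue), but it produces the same exponential toString() here, so by itself it would not work around the driver behaviour.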

    Read the article

  • WM_NCHITTEST and secondary monitor to left of primary monitor

    - by AlanKley
    The described setup with 2nd monitor to left of primary causes WM_NCHITTEST to send negative values which is apparently not supported according to this post. I have a custom control written in win32 that is like a Group control. It has a small clickable area. No MOUSE events are coming to my control when the window containing the custom control lies on a second monitor to the left of the primary monitor. SPY++ shows WM_NCHITTEST messages but no Mouse messages. When window is moved to primary monitor or secondary monitor is positioned to right of primary (all points are positive) then everything works fine. Below is how the WM_NCHITTEST is handled in my custom control. In general I need it to return HTTRANSPARENT so as not to obscure other controls placed inside of it. Anybody have any suggestions what funky coordinate translation I need to do and what to return in response to WM_NCHITTEST to get Mouse messages translated and sent to my control in the case where it is on a 2nd monitor placed to the left of the primary monitor? case WM_NCHITTEST: { POINT Pt = {LOWORD(lP), HIWORD(lP)}; int i; ScreenToClient (hWnd, &Pt); if (PtInRect (&rClickableArea, Pt)) { return(DefWindowProc( hWnd, Msg, wP, lP )); } } lReturn = HTTRANSPARENT; break;
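
    The usual fix is in the coordinate extraction itself: LOWORD/HIWORD treat the packed coordinates as unsigned, so any point on a monitor with negative screen coordinates is already wrong before ScreenToClient runs. GET_X_LPARAM/GET_Y_LPARAM preserve the sign. A sketch of the handler with only that change:

        /* requires <windowsx.h> for GET_X_LPARAM / GET_Y_LPARAM */
        case WM_NCHITTEST:
        {
            POINT Pt = { GET_X_LPARAM(lP), GET_Y_LPARAM(lP) };  /* signed, safe for negative coordinates */
            ScreenToClient(hWnd, &Pt);
            if (PtInRect(&rClickableArea, Pt))
            {
                return DefWindowProc(hWnd, Msg, wP, lP);
            }
            lReturn = HTTRANSPARENT;
            break;
        }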

    Read the article

  • Flex AS3: ComboBox set visible to false doesn't hide

    - by jolierouge
    I have a combobox in a view that receives information about application state changes, and then is supposed to show or hide it's children based on the whole application state. It receives state change messages, it traces the correct values, it does what it's supposed to do, however, it just doesn't seem to work. Essentially, all it needs to do is hide a combobox during one state, and show it again during another state. Here is the code: public function updateState(event:* = null):void { trace("Project Panel Updating State"); switch(ApplicationData.getSelf().currentState) { case 'login': this.visible = false; break; case 'grid': this.visible = true; listProjects.includeInLayout = false; listProjects.visible = false; trace("ListProjects: " + listProjects.visible); listLang.visible = true; break; default: break; } } Here is the MXML: <mx:HBox> <mx:Button id="btnLoad" x="422" y="84" label="Load" enabled="true" click="loadProject();"/> <mx:ComboBox id="listProjects" x="652" y="85" editable="true" change="listChange()" color="#050CA8" fontFamily="Arial" /> <mx:Label x="480" y="86" text="Language:" id="label3" fontFamily="Arial" /> <mx:ComboBox id="listLang" x="537" y="84" editable="true" dataProvider="{langList}" color="#050CA8" fontFamily="Arial" width="107" change="listLangChange(event)"/> <mx:CheckBox x="830" y="84" label="Languages in English" id="langCheckbox" click='toggleLang()'/> </mx:HBox>

    Read the article

  • How to read spring-application-context.xml and AnnotationConfigWebApplicationContext both in spring mvc

    - by Suvasis
    In case I want to read bean definitions from spring-application-context.xml, I would do this in the web.xml file: <context-param> <param-name>contextConfigLocation</param-name> <param-value> /WEB-INF/applicationContext.xml </param-value> </context-param> <listener> <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class> </listener> In case I want to read bean definitions through a Java configuration class (AnnotationConfigWebApplicationContext), I would do this in web.xml: <servlet> <servlet-name>appServlet</servlet-name> <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class> <init-param> <param-name>contextClass</param-name> <param-value> org.springframework.web.context.support.AnnotationConfigWebApplicationContext </param-value> </init-param> <init-param> <param-name>contextConfigLocation</param-name> <param-value> org.package.MyConfigAnnotatedClass </param-value> </init-param> </servlet> How do I use both in my application, i.e. read beans from both the configuration XML file and the annotated class? Is there a way to load Spring beans from the XML file while we are using the annotated configuration class to instantiate/use the rest of the beans?
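
    One way to have the single AnnotationConfigWebApplicationContext from the second web.xml variant load both styles is to let the annotated configuration class import the XML definitions, so every bean lands in the same context. A sketch (the class name mirrors the one in the question):

        import org.springframework.context.annotation.Configuration;
        import org.springframework.context.annotation.ImportResource;

        @Configuration
        @ImportResource("/WEB-INF/applicationContext.xml")   // pulls in the XML-defined beans
        public class MyConfigAnnotatedClass {
            // @Bean methods and component scanning for the annotation-style beans go here
        }

    With this in place only the servlet configuration is needed; the ContextLoaderListener/contextConfigLocation pair for the XML file can be dropped, or kept if a separate root context is wanted.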

    Read the article

  • Java method keyword "final" and its use

    - by Lukas Eder
    When I create complex type hierarchies (several levels, several types per level), I like to use the final keyword on methods implementing some interface declaration. An example: interface Garble { int zork(); } interface Gnarf extends Garble { /** * This is the same as calling {@link #zblah(0)} */ int zblah(); int zblah(int defaultZblah); } And then abstract class AbstractGarble implements Garble { @Override public final int zork() { ... } } abstract class AbstractGnarf extends AbstractGarble implements Gnarf { // Here I absolutely want to fix the default behaviour of zblah // No Gnarf shouldn't be allowed to set 1 as the default, for instance @Override public final int zblah() { return zblah(0); } // This method is not implemented here, but in a subclass @Override public abstract int zblah(int defaultZblah); } I do this for several reasons: It helps me develop the type hierarchy. When I add a class to the hierarchy, it is very clear, what methods I have to implement, and what methods I may not override (in case I forgot the details about the hierarchy) I think overriding concrete stuff is bad according to design principles and patterns, such as the template method pattern. I don't want other developers or my users do it. So the final keyword works perfectly for me. My question is: Why is it used so rarely in the wild? Can you show me some examples / reasons where final (in a similar case to mine) would be very bad?

    Read the article

  • backbone marionette - displaying a view in a layout region which has already been rendered

    - by Justin Wyllie
    I have something like this: //render the layout Layout.layout = new Layout.Screen(); MyApp.SomeRegion.show(Layout.layout); //render a view into it var mView = new CompositeView({collection: data}); Layout.layout.SomeRegion.show(mView); That all works fine. The layout has 3 regions. The above has rendered a view into one of them. Now, later, I want (in response to some user interaction) to render another view into one of the other regions. So I try: var mView2 = new CompositeView({collection: data}); Layout.layout.SomeOtherRegion.show(mView); But 'SomeOtherRegion' no longer exists on Layout.layout. I looked at this in Firebug. In the first case Layout.layout has my three regions defined on it. In the second case, not. Though they are still there in the regions object. This is the same layout instance. It looks like once a layout has been rendered you cannot render into it again? There must be a way. Nb. I don't want to re-render the layout. I hope that makes sense --Justin Wyllie

    Read the article

  • NSArray/NSMutableArray : Passed by ref or by value???

    - by wgpubs
    Totally confused here. I have a PARENT UIViewController that needs to pass an NSMutableArray to a CHILD UIViewController. I'm expecting it to be passed by reference so that changes made in the CHILD will be reflected in the PARENT and vice-versa. But that is not the case. Both have a property declared as .. @property (nonatomic, retain) NSMutableArray *photos; Example: In PARENT: self.photos = [[NSMutableArray alloc] init]; ChildViewController *c = [[ChildViewController alloc] init ...]; c.photos = self.photos; ... ... ... In CHILD: [self.photos addObject:obj1]; [self.photos addObject:obj2]; NSLog(@"Count:%d", [self.photos count]) // Equals 2 as expected ... Back in PARENT: NSLog(@"Count:%d", [self.photos count]) // Equals 0 ... NOT EXPECTED I thought they'd both be accessing the same memory. Is this not the case? If it isn't ... how do I keep the two NSMutableArrays in sync?

    Read the article

  • Xcode: iPhone Swipe Gesture crash

    - by David DelMonte
    I have an app in which I'd like a swipe gesture to flip to a second view. The app is all set up with buttons that work. The swipe gesture, though, causes a crash (EXC_BAD_ACCESS). The gesture code is: - (void)handleSwipe:(UISwipeGestureRecognizer *)recognizer { NSLog(@"%s", __FUNCTION__); switch (recognizer.direction) { case (UISwipeGestureRecognizerDirectionRight): [self performSelector:@selector(flipper:)]; break; case (UISwipeGestureRecognizerDirectionLeft): [self performSelector:@selector(flipper:)]; break; default: break; } } and "flipper" looks like this: - (IBAction)flipper:(id)sender { FlashCardsAppDelegate *mainDelegate = (FlashCardsAppDelegate *)[[UIApplication sharedApplication] delegate]; [mainDelegate flipToFront]; } flipToBack (and flipToFront) look like this: - (void)flipToBack { NSLog(@"%s", __FUNCTION__); BackViewController *theBackView = [[BackViewController alloc] initWithNibName:@"BackView" bundle:nil]; [self setBackViewController:theBackView]; [UIView beginAnimations:nil context:NULL]; [UIView setAnimationDuration:1.0]; [UIView setAnimationTransition:UIViewAnimationTransitionFlipFromLeft forView:window cache:YES]; [frontViewController.view removeFromSuperview]; [self.window addSubview:[backViewController view]]; [UIView commitAnimations]; [frontViewController release]; frontViewController = nil; [theBackView release]; // NSLog (@" FINISHED "); } Maybe I'm going about this the wrong way... All ideas are welcome...

    Read the article
