Search Results

Search found 23265 results on 931 pages for 'justin case'.

  • How to read bean definitions from both spring-application-context.xml and AnnotationConfigWebApplicationContext in Spring MVC

    - by Suvasis
    In case I want to read bean definitions from spring-application-context.xml, I would do this in the web.xml file:

        <context-param>
            <param-name>contextConfigLocation</param-name>
            <param-value>/WEB-INF/applicationContext.xml</param-value>
        </context-param>
        <listener>
            <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
        </listener>

    In case I want to read bean definitions from a Java configuration class (AnnotationConfigWebApplicationContext), I would do this in web.xml instead:

        <servlet>
            <servlet-name>appServlet</servlet-name>
            <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
            <init-param>
                <param-name>contextClass</param-name>
                <param-value>org.springframework.web.context.support.AnnotationConfigWebApplicationContext</param-value>
            </init-param>
            <init-param>
                <param-name>contextConfigLocation</param-name>
                <param-value>org.package.MyConfigAnnotatedClass</param-value>
            </init-param>
        </servlet>

    How do I use both in my application, i.e. read beans from both the XML configuration file and the annotated class? Is there a way to load the beans defined in the XML file while using MyConfigAnnotatedClass to instantiate the rest of the beans?
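
    One way to get both (a sketch of my own, not from the original question) is to keep the servlet pointed at the annotated class and pull the XML definitions into it with Spring's @ImportResource, so a single AnnotationConfigWebApplicationContext serves beans from both sources:

        // Hypothetical combined configuration: web.xml keeps
        // AnnotationConfigWebApplicationContext as the contextClass and
        // contextConfigLocation pointing at this class; @ImportResource
        // merges the XML-defined beans into the same container.
        package org.package;

        import org.springframework.context.annotation.Configuration;
        import org.springframework.context.annotation.ImportResource;

        @Configuration
        @ImportResource("/WEB-INF/applicationContext.xml")
        public class MyConfigAnnotatedClass {
            // @Bean methods for the annotation-driven beans go here
        }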

  • Problem with fork exec kill when redirecting output in perl

    - by Edu
    I created a script in Perl to run programs with a timeout. If the program being executed takes longer than the timeout, the script kills it and returns the message "TIMEOUT". The script worked quite well until I decided to redirect the output of the executed program. When stdout and stderr are being redirected, the program executed by the script is not killed, because it has a pid different from the one I got from fork. It seems Perl executes a shell that in turn executes my program in the case of redirection. I would like to keep the output redirection but still be able to kill the program in the case of a timeout. Any ideas on how I could do that? A simplified version of my script:

        #!/usr/bin/perl
        use strict;
        use warnings;
        use POSIX ":sys_wait_h";

        my $timeout = 5;
        my $cmd = "very_long_program 1>&2 > out.txt";

        my $pid = fork();
        if( $pid == 0 ) {
            exec($cmd) or print STDERR "Couldn't exec '$cmd': $!";
            exit(2);
        }

        my $time = 0;
        my $kid = waitpid($pid, WNOHANG);
        while ( $kid == 0 ) {
            sleep(1);
            $time++;
            $kid = waitpid($pid, WNOHANG);
            print "Waited $time sec, result $kid\n";
            if ($timeout > 0 && $time > $timeout) {
                print "TIMEOUT!\n";
                # Kill process
                kill 9, $pid;
                exit(3);
            }
        }
        if ( $kid == -1) {
            print "Process did not exist\n";
            exit(4);
        }
        print "Process exited with return code $?\n";
        exit($?);

    Thanks for any help.

  • Serializing a part of an object graph

    - by Felix
    Hi all, I have a problem regarding Java custom serialization. I have a graph of objects and want to configure where to stop when I serialize a root object from client to server. Let's make it concrete with a sample scenario. I have these classes:

        Company
        Employee (abstract)
        Manager extends Employee
        Secretary extends Employee
        Analyst extends Employee
        Project

    Here are the relations:

        Company(1)---(n)Employee
        Manager(1)---(n)Project
        Analyst(1)---(n)Project

    Imagine I'm on the client side and I want to create a new company, assign it 10 employees (new or existing), and send this new company to the server. What I expect in this scenario is that the company and all bound employees are serialized to the server side, because I'll save the relations in the database. So far no problem, since the default Java serialization mechanism serializes the whole object graph, excluding fields that are static or transient.

    My goal concerns the following scenario. Imagine I loaded a company and its 1000 employees from the server to the client side. Now I only want to rename the company (or change some other field that belongs directly to the company) and update that record. This time I want to send only the company object to the server side, not the whole list of employees (I just update the name; the employees are irrelevant in this use case). My aim also includes the configurability of saying: transfer the company AND the employees, but not the Project relations; you must stop there. Do you know any possibility of achieving this in a generic way, without implementing writeObject/readObject for every single entity object? What would be your suggestions? I would really appreciate your answers. I'm open to any ideas and am ready to answer your questions in case something is not clear.
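
    One pattern that partially avoids hand-writing the full serialized form per entity (my own sketch, not from the thread; names and element types are placeholders) is to keep the relation fields transient and gate them on a thread-local depth switch inside writeObject/readObject:

        // Illustrative only: a ThreadLocal switch decides per call whether
        // the relation list is written at all; shallow reads get an empty list.
        import java.io.IOException;
        import java.io.ObjectInputStream;
        import java.io.ObjectOutputStream;
        import java.io.Serializable;
        import java.util.ArrayList;
        import java.util.List;

        class Company implements Serializable {
            static final ThreadLocal<Boolean> DEEP =
                    ThreadLocal.withInitial(() -> Boolean.TRUE);

            String name;
            transient List<String> employees = new ArrayList<>(); // really List<Employee>

            private void writeObject(ObjectOutputStream out) throws IOException {
                out.defaultWriteObject();           // writes 'name' and other plain fields
                boolean deep = DEEP.get();
                out.writeBoolean(deep);
                if (deep) {
                    out.writeObject(new ArrayList<>(employees));
                }
            }

            @SuppressWarnings("unchecked")
            private void readObject(ObjectInputStream in)
                    throws IOException, ClassNotFoundException {
                in.defaultReadObject();
                employees = in.readBoolean()
                        ? (List<String>) in.readObject()
                        : new ArrayList<>();        // shallow transfer: relation left empty
            }
        }

    Calling Company.DEEP.set(false) before a rename-only update sends just the company, and the same idea extends to per-relation flags (include employees but stop at projects). It still costs one writeObject per class with relations, so a fully generic cut-off usually ends up in a custom ObjectOutputStream with replaceObject, or in a DTO layer instead.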

  • Avoid the problem with BigDecimal when migrating from Java 1.4 to Java 1.5+

    - by romaintaz
    Hello, I've recently migrated a Java 1.4 application to a Java 6 environment. Unfortunately, I encountered a problem with BigDecimal storage in an Oracle database. To summarize, when I try to store a "7.65E+7" BigDecimal value (76,500,000.00) in the database, Oracle actually stores the value 7,650,000.00. This defect is due to the rewriting of the BigDecimal class in Java 1.5 (see here). In my code, the BigDecimal was created from a double using this kind of code:

        BigDecimal myBD = new BigDecimal("" + someDoubleValue);
        someObject.setAmount(myBD);
        // Now let Hibernate persist my object in DB...

    In more than 99% of the cases everything works fine, but in a really few cases the bug mentioned above occurs, and that's quite annoying. If I change the previous code to avoid the String constructor of BigDecimal, then I do not encounter the bug in my use cases:

        BigDecimal myBD = new BigDecimal(someDoubleValue);
        someObject.setAmount(myBD);
        // Now let Hibernate persist my object in DB...

    However, how can I be sure that this solution is the correct way to handle the use of BigDecimal? So my question is how I should manage my BigDecimal values to avoid this issue:

        - Do not use the new BigDecimal(String) constructor and use new BigDecimal(double) directly?
        - Force Oracle to use the toPlainString() instead of the toString() method when dealing with BigDecimal (and in this case, how to do that)?
        - Any other solution?

    Environment information:

        - Java 1.6.0_14
        - Hibernate 2.1.8 (yes, it is a quite old version)
        - Oracle JDBC 9.0.2.0, also tested with 10.2.0.3.0
        - Oracle database 10.2.0.3.0
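
    To make the difference concrete, here is a small illustration I'm adding (behaviour per the Java 5+ BigDecimal javadoc): the String constructor keeps the scientific notation that Double.toString produces, while toPlainString(), or the double constructor for an exactly representable value like this one, yields the plain digits an old driver parses correctly.

        import java.math.BigDecimal;

        public class BigDecimalNotationDemo {
            public static void main(String[] args) {
                double amount = 7.65E+7; // 76,500,000

                // Double.toString(7.65E7) is "7.65E7", so the String
                // constructor produces a value with a negative scale
                // whose toString() stays in scientific notation.
                BigDecimal fromString = new BigDecimal("" + amount);
                System.out.println(fromString.toString());      // 7.65E+7
                System.out.println(fromString.toPlainString()); // 76500000

                // 76,500,000 is an integer below 2^53, so the double
                // constructor happens to be exact here and toString() is plain.
                BigDecimal fromDouble = new BigDecimal(amount);
                System.out.println(fromDouble.toString());      // 76500000
            }
        }

    Note that the double constructor only looks safe here because the value happens to be exact; for something like 0.1 it expands to the full binary fraction, which is why converting with toPlainString() at the persistence boundary is usually the more defensible fix.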

  • QUnit Unit Testing: Test Mouse Click

    - by Ngu Soon Hui
    I have the following HTML code:

        <div id="main">
            <form Id="search-form" action="/ViewRecord/AllRecord" method="post">
                <div>
                    <fieldset>
                        <legend>Search</legend>
                        <p>
                            <label for="username">Staff name</label>
                            <input id="username" name="username" type="text" value="" />
                            <label for="softype"> software type</label>
                            <input type="submit" value="Search" />
                        </p>
                    </fieldset>
                </div>
            </form>
        </div>

    And the following JavaScript code (with jQuery as the library):

        $(function() {
            $("#username").click(function() {
                $.getJSON("ViewRecord/GetSoftwareChoice", {}, function(data) {
                    // use data to manipulate other controls
                });
            });
        });

    Now, how do I test $("#username").click so that, for a given input, it calls the correct URL (in this case ViewRecord/GetSoftwareChoice), and so that the expected output (in this case, the behaviour of function(data)) is correct? Any idea how to do this with QUnit?

    Edit: I read the QUnit examples, but they seem to deal with simple scenarios with no AJAX interaction. And although there are ASP.NET MVC examples, I think they are really testing the output of the server to an AJAX call, i.e. they still test the server response, not the AJAX response. What I want is how to test the client-side response.

  • NSArray/NSMutableArray: passed by reference or by value?

    - by wgpubs
    Totally confused here. I have a PARENT UIViewController that needs to pass an NSMutableArray to a CHILD UIViewController. I'm expecting it to be passed by reference, so that changes made in the CHILD will be reflected in the PARENT and vice versa. But that is not the case. Both have a property declared as:

        @property (nonatomic, retain) NSMutableArray *photos;

    Example, in PARENT:

        self.photos = [[NSMutableArray alloc] init];
        ChildViewController *c = [[ChildViewController alloc] init ...];
        c.photos = self.photos;
        ...

    In CHILD:

        [self.photos addObject:obj1];
        [self.photos addObject:obj2];
        NSLog(@"Count:%d", [self.photos count]); // Equals 2 as expected
        ...

    Back in PARENT:

        NSLog(@"Count:%d", [self.photos count]); // Equals 0 ... NOT EXPECTED

    I thought they'd both be accessing the same memory. Is this not the case? If it isn't, how do I keep the two NSMutableArrays in sync?

  • How do I define a Calculated Measure in MDX based on a Dimension Attribute?

    - by ShaneD
    I would like to create a calculated measure that sums up only a specific subset of records in my fact table, based on a dimension attribute. Given:

        Dimensions:
            Date
            LedgerLineItem {Charge, Payment, Write-Off, Copay, Credit}
        Measures:
            LedgerAmount
        Relationships:
            LedgerLineItem is a degenerate dimension of FactLedger

    If I break down LedgerAmount by LedgerLineItem.Type, I can easily see how much is charged, paid, credited, etc., but when I do not break it down by LedgerLineItem.Type, I cannot easily add the charges, payments, credits, etc. into a pivot table. I would like to create separate calculated measures that sum only specific types (or multiple types) of ledger facts. An example of the desired output would be:

        | Year  | Charged | Total Paid | Amount - Ledger |
        | 2008  | $1000   | $600       | -$400           |
        | 2009  | $2000   | $1500      | -$500           |
        | Total | $3000   | $2100      | -$900           |

    I have tried to create the calculated measure a couple of ways, and each one works in some circumstances but not in others. Now, before anyone says do this in ETL: I have already done it in ETL and it works just fine. What I am trying to do, as part of learning to understand MDX better, is to duplicate in MDX what I have done in the ETL, and so far I am unable to do that. Here are two attempts I have made and the problems with them.

    This first one works only when the ledger type is in the pivot table. It returns the correct amount for the ledger entries (although in this case it is identical to [Amount - ledger]), but when I try to remove type and just get the sum of all ledger entries, it returns unknown:

        CASE
            WHEN ([Ledger].[Type].currentMember = [Ledger].[Type].&[Credit])
                OR ([Ledger].[Type].currentMember = [Ledger].[Type].&[Paid])
                OR ([Ledger].[Type].currentMember = [Ledger].[Type].&[Held Money: Copay])
            THEN [Measures].[Amount - ledger]
            ELSE 0
        END

    This second one works only when the ledger type is not in the pivot table. It always returns the total payment amount, which is incorrect when I am slicing by type, as I would only expect to see the credit portion under credit, the paid portion under paid, $0 under charge, etc.:

        SUM(
            { ([Ledger].[Type].&[Credit]),
              ([Ledger].[Type].&[Paid]),
              ([Ledger].[Type].&[Held Money: Copay]) },
            [Measures].[Amount - ledger]
        )

    Is there any way to make this return the correct numbers regardless of whether Ledger.Type is included in my pivot table or not?

  • Poor execution plans when using a filter and CONTAINSTABLE in a query

    - by Paul McLoughlin
    We have an interesting problem that I was hoping someone could help shed some light on. At a high level the problem is as follows. The following query executes quickly (1 second):

        SELECT SA.*
        FROM cg.SEARCHSERVER_ACTYS AS SA
        JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1
            ON T1.[Key] = SA.UNIQUE_ID

    but if we add a filter to the query, then it takes approximately 2 minutes to return:

        SELECT SA.*
        FROM cg.SEARCHSERVER_ACTYS AS SA
        JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1
            ON T1.[Key] = SA.UNIQUE_ID
        WHERE SA.CHG_DATE > '19 Feb 2010'

    Looking at the execution plans for the two queries, I can see that in the second case there are two places with huge differences between the actual and estimated number of rows, these being:

        1) The FulltextMatch table-valued function, where the estimate is approx 22,000 rows and the actual is 29 million rows (which are then filtered down to 1670 rows before the join), and
        2) The index seek on the full-text index, where the estimate is 1 row and the actual is 13,000 rows.

    As a result of the estimates, the optimiser chooses a nested loops join (since it assumes a small number of rows), hence the plan is inefficient. We can work around the problem either by (a) parameterising the query and adding OPTION (OPTIMIZE FOR UNKNOWN), or (b) by forcing a HASH JOIN to be used. In both of these cases the query returns in under 1 second and the estimates appear reasonable. My question really is: why are the estimates in the poorly performing case so wildly inaccurate, and what can be done to improve them? Statistics are up to date on the indexes of the indexed view being used here. Any help greatly appreciated.

  • Xcode: iPhone swipe gesture crash

    - by David DelMonte
    I have an app in which I'd like a swipe gesture to flip to a second view. The app is all set up with buttons that work. The swipe gesture, though, causes a crash (EXC_BAD_ACCESS). The gesture code is:

        - (void)handleSwipe:(UISwipeGestureRecognizer *)recognizer {
            NSLog(@"%s", __FUNCTION__);
            switch (recognizer.direction) {
                case (UISwipeGestureRecognizerDirectionRight):
                    [self performSelector:@selector(flipper:)];
                    break;
                case (UISwipeGestureRecognizerDirectionLeft):
                    [self performSelector:@selector(flipper:)];
                    break;
                default:
                    break;
            }
        }

    and "flipper" looks like this:

        - (IBAction)flipper:(id)sender {
            FlashCardsAppDelegate *mainDelegate =
                (FlashCardsAppDelegate *)[[UIApplication sharedApplication] delegate];
            [mainDelegate flipToFront];
        }

    flipToBack (and flipToFront) look like this:

        - (void)flipToBack {
            NSLog(@"%s", __FUNCTION__);
            BackViewController *theBackView =
                [[BackViewController alloc] initWithNibName:@"BackView" bundle:nil];
            [self setBackViewController:theBackView];
            [UIView beginAnimations:nil context:NULL];
            [UIView setAnimationDuration:1.0];
            [UIView setAnimationTransition:UIViewAnimationTransitionFlipFromLeft
                                   forView:window cache:YES];
            [frontViewController.view removeFromSuperview];
            [self.window addSubview:[backViewController view]];
            [UIView commitAnimations];
            [frontViewController release];
            frontViewController = nil;
            [theBackView release];
            // NSLog (@" FINISHED ");
        }

    Maybe I'm going about this the wrong way... All ideas are welcome...

  • Printing HTML blocks

    - by Lem0n
    I have a page with a few tables that I want to be printable. I want the following behaviour:

        1) Add a page break if the next table fits on a single page but won't fit on the current page (because of other content already printed on this page).
        2) Print the "table header" again in case a table does have to be broken across pages (I guess this is the default behaviour).

    Any ideas, especially on the first issue? Maybe some CSS can help? I'll give an example. I have a page with four tables, all of them with 10 lines except the third one, which has 50 lines. The first and second go on the first page. Since the third one won't fit on the same page, but will fit on a page alone, it is printed on a page alone... and then the fourth table is printed on the third page (in case it doesn't fit together with the third on the second page). But if the third table had 300 lines and would be broken anyway, it could have started printing on the first page.

  • Passing Variable Length Arrays to a function

    - by David Bella
    I have a variable-length array that I am trying to pass into a function. The function will shift the first value off and return it, moving the remaining values over to fill in the missing spot and putting, let's say, a -1 in the newly opened spot. I have no problem passing an array declared like so:

        int framelist[128];
        shift(framelist);

    However, I would like to be able to use an array allocated in this manner:

        int *framelist;
        framelist = malloc(size * sizeof(int));
        shift(framelist);

    I can populate the arrays the same way outside the function call without issue, but as soon as I pass them into the shift function, the one declared in the first case works fine, while the one in the second case immediately gives a segmentation fault. Here is the code for the shift function, which doesn't do anything yet except try to grab the value from the first slot of the array:

        int shift(int array[]) {
            int value = array[0];
            return value;
        }

    Any ideas why it won't accept the dynamically allocated array? I'm still new to C, so if I am doing something fundamentally wrong, let me know.

  • How should I read from a buffered reader?

    - by Roman
    I have the following example of reading from a buffered reader:

        while ((inputLine = input.readLine()) != null) {
            System.out.println("I got a message from a client: " + inputLine);
        }

    The code in the loop (the println) is executed whenever something appears in the buffered reader (input, in this case). In my case, if a client application writes something to the socket, the code in the loop (in the server application) is executed. But I do not understand how it works: inputLine = input.readLine() waits until something appears in the buffered reader, and when something appears there, the loop condition is true and the code in the loop is executed. But when can null be returned?

    There is another question. The above code was taken from a method which throws Exception, and I use this code in the run method of a Thread. When I try to put throws Exception before run, the compiler complains: "overridden method does not throw exception". Without the throws Exception I get another complaint from the compiler: "unreported exception". So, what can I do?
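
    For what it's worth, the usual shape of the answer in one sketch (the wrapper class and its names are my own, hypothetical): readLine() returns null at end of stream, i.e. when the client closes the connection, and since run() cannot declare checked exceptions, the IOException has to be caught inside it:

        import java.io.BufferedReader;
        import java.io.IOException;

        class ClientHandler implements Runnable {   // hypothetical wrapper class
            private final BufferedReader input;

            ClientHandler(BufferedReader input) {
                this.input = input;
            }

            @Override
            public void run() {
                try {
                    String inputLine;
                    while ((inputLine = input.readLine()) != null) {
                        System.out.println("I got a message from a client: " + inputLine);
                    }
                    // Reaching here means end-of-stream: the client closed the socket.
                } catch (IOException e) {
                    e.printStackTrace(); // run() cannot throw checked exceptions
                }
            }
        }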

  • How do I process the configure file when cross-compiling with mingw?

    - by vy32
    I have a small open source program that builds with an autoconf configure script. I ran configure, then tried to compile with:

        make CC="/opt/local/bin/i386-mingw32-g++"

    That didn't work, because the configure script had found include files that were not available to the mingw system. So then I tried:

        ./configure CC="/opt/local/bin/i386-mingw32-g++"

    But that didn't work either; the configure script gives me this error:

        ./configure: line 5209: syntax error near unexpected token `newline'
        ./configure: line 5209: ` *_cv_*'

    because of this code, which is generated when AC_OUTPUT is called:

        # The following way of writing the cache mishandles newlines in values,
        # but we know of no workaround that is simple, portable, and efficient.
        # So, we kill variables containing newlines.
        # Ultrix sh set writes to stderr and can't be redirected directly,
        # and sets the high bit in the cache file unless we assign to the vars.
        (
          for ac_var in `(set) 2>&1 | sed -n 's/^\([a-zA-Z_][a-zA-Z0-9_]*\)=.*/\1/p'`; do
            eval ac_val=\$$ac_var
            case $ac_val in #(
            *${as_nl}*)
              case $ac_var in #(
              *_cv_*

    Any thoughts? Is there a correct way to do this?

  • Core data migration failing with "Can't find model for source store" but managedObjectModel for source is present

    - by Ira Cooke
    I have a Cocoa application using Core Data which is now at the 4th version of its managed object model. My managed object model contains abstract entities, but so far I have managed to get migration working by creating appropriate mapping models and creating my persistent store using addPersistentStoreWithType:configuration:options:error: with the NSMigratePersistentStoresAutomaticallyOption set to YES:

        NSDictionary *optionsDictionary =
            [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES]
                                        forKey:NSMigratePersistentStoresAutomaticallyOption];
        NSURL *url = [NSURL fileURLWithPath:
            [applicationSupportFolder stringByAppendingPathComponent:@"MyApp.xml"]];
        NSError *error = nil;
        [theCoordinator addPersistentStoreWithType:NSXMLStoreType
                                     configuration:nil
                                               URL:url
                                           options:optionsDictionary
                                             error:&error];

    This works fine when I migrate from model version 3 to 4, a migration that involves adding attributes to several entities. Now, when I try to add a new model version (version 5), the call to addPersistentStoreWithType returns nil and the error remains empty. The migration from 4 to 5 involves adding a single attribute. I am struggling to debug the problem and have checked all of the following:

        - The source database is in fact at version 4, and the persistent store coordinator's managed object model is at version 5.
        - The 4-to-5 mapping model, as well as the managed object models for versions 4 and 5, are present in the resources folder of my built application.
        - I've tried various model upgrade paths. Strangely, I find that upgrading from an early version 3 to 5 works, but upgrading from 4 to 5 fails.
        - I've tried adding a custom entity migration policy for the migration of the entity whose attributes are changing; in this case I overrode the method beginEntityMapping:manager:error:. Interestingly, this method does get called when migration works (i.e. when I migrate from 3 to 4, or from 3 to 5), but it does not get called in the case that fails (4 to 5).

    I'm pretty much at a loss as to where to proceed. Any ideas to help debug this problem would be much appreciated.

  • Flex AS3: ComboBox set visible to false doesn't hide

    - by jolierouge
    I have a ComboBox in a view that receives information about application state changes and is then supposed to show or hide its children based on the whole application state. It receives state-change messages, it traces the correct values, it does what it's supposed to do; however, it just doesn't seem to work. Essentially, all it needs to do is hide a combobox during one state and show it again during another state. Here is the code:

        public function updateState(event:* = null):void {
            trace("Project Panel Updating State");
            switch(ApplicationData.getSelf().currentState) {
                case 'login':
                    this.visible = false;
                    break;
                case 'grid':
                    this.visible = true;
                    listProjects.includeInLayout = false;
                    listProjects.visible = false;
                    trace("ListProjects: " + listProjects.visible);
                    listLang.visible = true;
                    break;
                default:
                    break;
            }
        }

    Here is the MXML:

        <mx:HBox>
            <mx:Button id="btnLoad" x="422" y="84" label="Load" enabled="true" click="loadProject();"/>
            <mx:ComboBox id="listProjects" x="652" y="85" editable="true" change="listChange()" color="#050CA8" fontFamily="Arial" />
            <mx:Label x="480" y="86" text="Language:" id="label3" fontFamily="Arial" />
            <mx:ComboBox id="listLang" x="537" y="84" editable="true" dataProvider="{langList}" color="#050CA8" fontFamily="Arial" width="107" change="listLangChange(event)"/>
            <mx:CheckBox x="830" y="84" label="Languages in English" id="langCheckbox" click='toggleLang()'/>
        </mx:HBox>

  • Excel VBA Macro for Pivot Table with Dynamic Data Range

    - by John Ziebro
    CODE IS WORKING! THANKS FOR THE HELP!

    I am attempting to create a dynamic pivot table that will work on data that varies in the number of rows. Currently I have 28,300 rows, but this may change daily. Example of the data format:

        Case Number   Branch   Driver
        1342          NYC      Bob
        4532          PHL      Jim
        7391          CIN      John
        8251          SAN      John
        7211          SAN      Mary
        9121          CLE      John
        7424          CIN      John

    Example of the finished table:

        Driver   NYC   PHL   CIN   SAN   CLE
        Bob      1     0     0     0     0
        Jim      0     1     0     0     0
        John     0     0     2     1     1
        Mary     0     0     0     1     0

    Code as follows:

        Sub CreateSummaryReportUsingPivot()
            ' Use a Pivot Table to create a static summary report
            ' with model going down the rows and regions across
            Dim WSD As Worksheet
            Dim PTCache As PivotCache
            Dim PT As PivotTable
            Dim PRange As Range
            Dim FinalRow As Long
            Dim FinalCol As Long
            Set WSD = Worksheets("PivotTable")

            ' Name active worksheet as "PivotTable"
            ActiveSheet.Name = "PivotTable"

            ' Delete any prior pivot tables
            For Each PT In WSD.PivotTables
                PT.TableRange2.Clear
            Next PT

            ' Define input area and set up a Pivot Cache
            FinalRow = WSD.Cells(Application.Rows.Count, 1).End(xlUp).Row
            FinalCol = WSD.Cells(1, Application.Columns.Count). _
                End(xlToLeft).Column
            Set PRange = WSD.Cells(1, 1).Resize(FinalRow, FinalCol)
            Set PTCache = ActiveWorkbook.PivotCaches.Add(SourceType:= _
                xlDatabase, SourceData:=PRange)

            ' Create the Pivot Table from the Pivot Cache
            Set PT = PTCache.CreatePivotTable(TableDestination:=WSD. _
                Cells(2, FinalCol + 2), TableName:="PivotTable1")

            ' Turn off updating while building the table
            PT.ManualUpdate = True

            ' Set up the row fields
            PT.AddFields RowFields:="Driver", ColumnFields:="Branch"

            ' Set up the data fields
            With PT.PivotFields("Case Number")
                .Orientation = xlDataField
                .Function = xlCount
                .Position = 1
            End With
            With PT
                .ColumnGrand = False
                .RowGrand = False
                .NullString = "0"
            End With

            ' Calc the pivot table
            PT.ManualUpdate = False
            PT.ManualUpdate = True
        End Sub

  • How to use a remote service?

    - by LEE YONGGUN
    Hi, I'm trying to use a remote service between two simple applications, but it's not easy for me, so any advice would help. Here's my case. I made one app which plays music in a service. It has two components: one is an Activity controlling the service through three buttons (play, pause, stop), and it is working fine. The other is just a simple Activity in a second app, which has four buttons: bind, play, stop, unbind. When I click bind, the binding is confirmed by a Toast message, but when I click the play button, an error occurs. I want to control the first app's music-playing service from the second Activity, so I'm trying to use a remote service. I made the same .aidl file in each app project. In the .aidl file I defined the methods "playing" and "stoping", and I implemented those methods in the music service class; the implementation simply uses intents and startService/stopService. In DDMS there is "java.lang.SecurityException: Binder invocation to an incorrect interface". That's my situation. Please tell me what the problem is; any advice could help. Thanks, Gun.
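
    For reference, here is the usual client-side shape of such a binding (my own sketch; IMusicService, the action string, and the package names are hypothetical). The common cause of that exact SecurityException is the .aidl file not being byte-for-byte identical, and in the identical package, in both projects:

        // Client side of the binding. IMusicService.Stub is generated by the
        // build from the shared IMusicService.aidl; a package or interface
        // mismatch between the two apps produces "Binder invocation to an
        // incorrect interface".
        import android.content.ComponentName;
        import android.content.Context;
        import android.content.Intent;
        import android.content.ServiceConnection;
        import android.os.IBinder;
        import android.os.RemoteException;

        public class MusicClient {
            private IMusicService service;   // generated from IMusicService.aidl

            private final ServiceConnection conn = new ServiceConnection() {
                @Override
                public void onServiceConnected(ComponentName name, IBinder binder) {
                    service = IMusicService.Stub.asInterface(binder);
                }

                @Override
                public void onServiceDisconnected(ComponentName name) {
                    service = null;
                }
            };

            public void bind(Context ctx) {
                Intent i = new Intent("com.example.player.MUSIC_SERVICE"); // action exported by the first app
                i.setPackage("com.example.player");                        // hypothetical package
                ctx.bindService(i, conn, Context.BIND_AUTO_CREATE);
            }

            public void play() throws RemoteException {
                if (service != null) service.playing();  // AIDL method named in the post
            }
        }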

  • Java method keyword "final" and its use

    - by Lukas Eder
    When I create complex type hierarchies (several levels, several types per level), I like to use the final keyword on methods implementing some interface declaration. An example:

        interface Garble {
            int zork();
        }

        interface Gnarf extends Garble {
            /**
             * This is the same as calling {@link #zblah(0)}
             */
            int zblah();

            int zblah(int defaultZblah);
        }

    And then:

        abstract class AbstractGarble implements Garble {
            @Override
            public final int zork() { ... }
        }

        abstract class AbstractGnarf extends AbstractGarble implements Gnarf {
            // Here I absolutely want to fix the default behaviour of zblah.
            // No Gnarf should be allowed to set 1 as the default, for instance.
            @Override
            public final int zblah() {
                return zblah(0);
            }

            // This method is not implemented here, but in a subclass
            @Override
            public abstract int zblah(int defaultZblah);
        }

    I do this for several reasons:

        - It helps me develop the type hierarchy. When I add a class to the hierarchy, it is very clear which methods I have to implement and which methods I may not override (in case I forgot the details about the hierarchy).
        - I think overriding concrete stuff is bad according to design principles and patterns, such as the template method pattern. I don't want other developers or my users to do it.

    So the final keyword works perfectly for me. My question is: why is it used so rarely in the wild? Can you show me some examples / reasons where final (in a case similar to mine) would be very bad?
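
    The template-method connection in one compact illustration (my own example, not from the post): the final method freezes the algorithm skeleton while subclasses supply only the steps:

        // Template method: the skeleton is final, the steps are overridable.
        abstract class Report {
            public final String render() {          // subclasses cannot change the skeleton
                return header() + body() + "\n-- end --";
            }

            protected String header() {             // optional hook with a default
                return "REPORT\n";
            }

            protected abstract String body();       // required step
        }

        class SalesReport extends Report {
            @Override
            protected String body() {
                return "sales: 42";
            }
            // Attempting to override render() here would be a compile error,
            // which is exactly the guarantee the poster is after.
        }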

  • WM_NCHITTEST and secondary monitor to left of primary monitor

    - by AlanKley
    The described setup, with the second monitor to the left of the primary, causes WM_NCHITTEST to be sent negative coordinate values, which is apparently not supported according to this post. I have a custom control written in Win32 that is like a group control; it has a small clickable area. No mouse events reach my control when the window containing the custom control lies on a second monitor to the left of the primary monitor. Spy++ shows WM_NCHITTEST messages but no mouse messages. When the window is moved to the primary monitor, or the secondary monitor is positioned to the right of the primary (so all points are positive), then everything works fine. Below is how WM_NCHITTEST is handled in my custom control. In general it needs to return HTTRANSPARENT so as not to obscure other controls placed inside of it:

        case WM_NCHITTEST:
            {
                POINT Pt = { LOWORD(lP), HIWORD(lP) };
                int i;
                ScreenToClient(hWnd, &Pt);
                if (PtInRect(&rClickableArea, Pt))
                {
                    return (DefWindowProc(hWnd, Msg, wP, lP));
                }
            }
            lReturn = HTTRANSPARENT;
            break;

    Does anybody have any suggestions as to what funky coordinate translation I need to do, and what to return in response to WM_NCHITTEST, to get mouse messages translated and sent to my control in the case where it is on a second monitor placed to the left of the primary monitor?

  • Safe way to set computed environment variables

    - by sfink
    I have a bash script that I am modifying to accept key=value pairs from stdin (it is spawned by xinetd). How can I safely convert those key=value pairs into environment variables for subprocesses? I plan to only allow keys that begin with a predefined prefix "CMK_", to avoid IFS or any other "dangerous" variable getting set. But the simplistic approach:

        function import () {
            local IFS="="
            while read key val; do
                case "$key" in
                    CMK_*) eval "$key=$val";;
                esac
            done
        }

    is horribly insecure, because $val could contain all sorts of nasty stuff. This seems like it would work:

        shopt -s extglob
        function import () {
            NORMAL_IFS="$IFS"
            local IFS="="
            while read key val; do
                case "$key" in
                    CMK_*([a-zA-Z_]) )
                        IFS="$NORMAL_IFS"
                        eval $key='$val'
                        IFS="="
                        ;;
                esac
            done
        }

    but (1) it uses the funky extglob thing that I've never used before, and (2) it's complicated enough that I can't be comfortable that it's secure. My goal, to be specific, is to allow key=value settings to pass through the bash script into the environment of called processes. It is up to the subprocesses to deal with potentially hostile values getting set. I am modifying someone else's script, so I don't want to just convert it to Perl and be done with it. I would also rather not change it around to invoke the subprocesses differently, something like:

        #!/bin/sh
        ...start of script...
        perl -nle '($k,$v)=split(/=/,$_,2); $ENV{$k}=$v if $k =~ /^CMK_/; END { exec("subprocess") }'
        ...end of script...

  • The worker process calls OpenSubKey, but it returns null when accessing the Remote Registry service

    - by Cary
    My web server is deployed in IIS 6. The web server successfully starts the Remote Registry service on the remote machine by creating a process that runs some remote operation commands. The first line below runs successfully, but the second line returns null:

        /* #1 */ RegistryKey remoteRegKey = RegistryKey.OpenRemoteBaseKey(RegistryHive.LocalMachine, "139.24.185.27");
        /* #2 */ RegistryKey targetKey = remoteRegKey.OpenSubKey(@"SOFTWARE\Wow6432Node\XXXX\XXXX\Config\Modality", true);

    I tried to find the reason on MSDN. It describes only one case in which null is returned: when the subkey does not exist. If the caller does not have sufficient permission, an exception is thrown instead. But the subkey really does exist. I switched to another machine to debug my code with Visual Studio 2008, and there both lines run successfully. If the process has enough permission to open LocalMachine, it should be able to open not only LocalMachine itself but also any of its subkeys. I am quite confused about this.

  • linux new/delete, malloc/free large memory blocks

    - by brian_mk
    Hi folks, we have a Linux system (Kubuntu 7.10) that runs a number of CORBA server processes. The server software uses the glibc libraries for memory allocation. The Linux PC has 4 GB of physical memory; swap is disabled for speed reasons.

    Upon receiving a request to process data, one of the server processes allocates a large data buffer (using the standard C++ operator new). The buffer size varies depending upon a number of parameters, but is typically around 1.2 GB and can be up to about 1.9 GB. When the request has completed, the buffer is released using delete.

    This works fine for several consecutive requests that allocate buffers of the same size, or when a request allocates a smaller size than the previous one. The memory appears to be freed OK; otherwise buffer allocation attempts would eventually fail after just a couple of requests. In any case, we can see the buffer memory being allocated and freed for each request using tools such as KSysGuard.

    The problem arises when a request requires a buffer larger than the previous one. In this case, operator new throws an exception. It's as if the memory freed by the first allocation cannot be re-allocated, even though there is sufficient free physical memory available. If I kill and restart the server process after the first operation, then the second request for a larger buffer size succeeds; i.e. killing the process appears to fully release the freed memory back to the system.

    Can anyone offer an explanation as to what might be going on here? Could it be some kind of fragmentation or mapping table size issue? I am thinking of replacing new/delete with malloc/free and using mallopt to tune the way the memory is released to the system. BTW, I'm not sure if it's relevant to our problem, but the server uses pthreads that get created and destroyed on each processing request. Cheers, Brian.

  • convincing C# compiler that execution will stop after a member returns

    - by Sarah Vessels
    I don't think this is currently possible, or even that it's a good idea, but it's something I was thinking about just now. I use MSTest for unit testing my C# project. In one of my tests, I do the following:

        MyClass instance;
        try {
            instance = getValue();
        } catch (MyException ex) {
            Assert.Fail("Caught MyException");
        }
        instance.doStuff(); // Use of unassigned local variable 'instance'

    To make this code compile, I have to assign a value to instance either at its declaration or in the catch block. However, Assert.Fail will never, to the best of my knowledge, allow execution to proceed past it, hence instance will never be used without a value. Why is it then that I must assign a value to it? If I change the Assert.Fail to something like throw ex, the code compiles fine, I assume because the compiler knows that the exception will disallow execution from proceeding to a point where instance would be used uninitialized.

    So is it a case of runtime versus compile-time knowledge about where execution will be allowed to proceed? Would it ever be reasonable for C# to have some way of saying that a member, in this case Assert.Fail, will never allow execution to continue after it returns? Maybe that could be in the form of a method attribute. Would this be useful, or an unnecessary complexity for the compiler?

  • UICollectionView with one static cell and N dynamic ones from a fetchresultscontroller exception

    - by nflacco
    I'm trying to make a UITableView that shows a blog post and the comments for that post. My setup is a table view in a storyboard with two dynamic prototype cells. The first cell is for the post and should never change. The second cell represents the 0 to N comments. My cellForRowAtIndexPath method shows the post cell properly, but fails to get the comment at the given index path (though if I comment out the fetch, I get the appropriate number of comment cells with the green background that I set as a visual debug thing):

        let comment = fetchedResultController.objectAtIndexPath(indexPath) as Comment

    I get the following exception on this line:

        2014-08-24 15:06:40.712 MessagePosting[21767:3266409] *** Terminating app due to uncaught exception 'NSRangeException', reason: '*** -[__NSArrayM objectAtIndex:]: index 1 beyond bounds [0 .. 0]'
        *** First throw call stack:
        (
            0   CoreFoundation      0x0000000101aa43e5 __exceptionPreprocess + 165
            1   libobjc.A.dylib     0x00000001037f9967 objc_exception_throw + 45
            2   CoreFoundation      0x000000010198f4c3 -[__NSArrayM objectAtIndex:] + 227
            3   CoreData            0x00000001016e4792 -[NSFetchedResultsController objectAtIndexPath:] + 162

    Section and cell setup:

        override func tableView(tableView: UITableView!, numberOfRowsInSection section: Int) -> Int {
            // #warning Incomplete method implementation.
            // Return the number of rows in the section.
            switch section {
            case 0:
                return 1
            default:
                if let realPost: Post = post {
                    return fetchedResultController.sections[0].numberOfObjects
                } else {
                    return 0
                }
            }
        }

        override func tableView(tableView: UITableView!, cellForRowAtIndexPath indexPath: NSIndexPath!) -> UITableViewCell! {
            switch indexPath.section {
            case 0:
                let cell = tableView.dequeueReusableCellWithIdentifier(postViewCellIdentifier, forIndexPath: indexPath) as UITableViewCell
                cell.backgroundColor = lightGrey
                if let realPost: Post = self.post {
                    cell.textLabel.text = realPost.text
                }
                return cell
            default:
                let cell = tableView.dequeueReusableCellWithIdentifier(commentCellIdentifier, forIndexPath: indexPath) as UITableViewCell
                cell.backgroundColor = UIColor.greenColor()
                let comment = fetchedResultController.objectAtIndexPath(indexPath) as Comment // <---- :(
                cell.textLabel.text = comment.text
                return cell
            }
        }

    FRC:

        func controllerDidChangeContent(controller: NSFetchedResultsController!) {
            tableView.reloadData()
        }

        func getFetchedResultController() -> NSFetchedResultsController {
            fetchedResultController = NSFetchedResultsController(fetchRequest: taskFetchRequest(), managedObjectContext: managedObjectContext, sectionNameKeyPath: nil, cacheName: nil)
            return fetchedResultController
        }

        func taskFetchRequest() -> NSFetchRequest {
            if let realPost: Post = self.post {
                let fetchRequest = NSFetchRequest(entityName: "Comment")
                let sortDescriptor = NSSortDescriptor(key: "date", ascending: false)
                fetchRequest.predicate = NSPredicate(format: "post = %@", realPost)
                fetchRequest.sortDescriptors = [sortDescriptor]
                return fetchRequest
            } else {
                return NSFetchRequest(entityName: "")
            }
        }

  • Phonegap/Cordova geolocation not working on Android

    - by Kreeki
    I'm having trouble getting geolocation to work on Android, both in the emulator (even when I geo fix over telnet) and on a device. It works on iOS, WP8, and in the browser. When I ask the device for its location using the following code, I always get an error (in my case the custom "Retrieving your position failed for unknown reason." with a null error code and a null error message). Related code:

        successHandler = (position) ->
            resolve App.Location.create
                lat: position.coords.latitude
                lng: position.coords.longitude

        errorHandler = (error) ->
            error = switch error.code
                when 1
                    App.LocationError.create message: 'You haven\'t shared your location.'
                when 2
                    App.LocationError.create message: 'Couldn\'t detect your current location.'
                when 3
                    App.LocationError.create message: 'Retrieving your position timeouted.'
                else
                    App.LocationError.create message: 'Retrieving your position failed for unknown reason. Error code: ' + error.code + '. Error message: ' + error.message
            reject(error)

        options =
            maximumAge: Infinity # I also tried with 0
            timeout: 60000
            enableHighAccuracy: true

        navigator.geolocation.getCurrentPosition(successHandler, errorHandler, options)

    platforms/android/AndroidManifest.xml:

        <uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />
        <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
        <uses-permission android:name="android.permission.ACCESS_LOCATION_EXTRA_COMMANDS" />

    www/config.xml (just in case):

        <feature name="Geolocation">
            <param name="android-package" value="org.apache.cordova.GeoBroker" />
        </feature>

    I'm using Cordova 3.1.0 and testing on Android 4.2. The plugin is installed, and cordova.js is included in index.html (other plugins like InAppBrowser work fine):

        $ cordova plugins ls
        [ 'org.apache.cordova.console',
          'org.apache.cordova.device',
          'org.apache.cordova.dialogs',
          'org.apache.cordova.geolocation',
          'org.apache.cordova.inappbrowser',
          'org.apache.cordova.vibration' ]

    I'm clueless. Am I missing something?
