Search Results

Search found 22900 results on 916 pages for 'pascal case'.


  • How do I define a Calculated Measure in MDX based on a Dimension Attribute?

    - by ShaneD
    I would like to create a calculated measure that sums up only a specific subset of records in my fact table, based on a dimension attribute. Given:

        Dimensions: Date, LedgerLineItem {Charge, Payment, Write-Off, Copay, Credit}
        Measures: LedgerAmount
        Relationships: LedgerLineItem is a degenerate dimension of FactLedger

    If I break down LedgerAmount by LedgerLineItem.Type I can easily see how much is charged, paid, credited, etc., but when I do not break it down by LedgerLineItem.Type I cannot easily add the charged, paid, credited, etc. amounts into a pivot table. I would like to create separate calculated measures that sum only a specific type (or multiple types) of ledger facts. An example of the desired output would be:

        | Year  | Charged | Total Paid | Amount - Ledger |
        | 2008  | $1000   | $600       | -$400           |
        | 2009  | $2000   | $1500      | -$500           |
        | Total | $3000   | $2100      | -$900           |

    I have tried to create the calculated measure a couple of ways, and each one works in some circumstances but not in others. Now, before anyone says do this in ETL: I have already done it in ETL and it works just fine. What I am trying to do, as part of learning to understand MDX better, is to figure out how to duplicate in MDX what I have done in the ETL; so far I am unable to do that. Here are two attempts I have made and the problems with them.

    This works only when ledger type is in the pivot table. It returns the correct amount for the ledger entries (although in this case it is identical to [Amount - Ledger]), but when I try to remove type and just get the sum of all ledger entries, it returns unknown:

        CASE
            WHEN ([Ledger].[Type].currentMember = [Ledger].[Type].&[Credit])
              OR ([Ledger].[Type].currentMember = [Ledger].[Type].&[Paid])
              OR ([Ledger].[Type].currentMember = [Ledger].[Type].&[Held Money: Copay])
            THEN [Measures].[Amount - ledger]
            ELSE 0
        END

    This works only when ledger type is not in the pivot table. It always returns the total payment amount, which is incorrect when I am slicing by type, as I would only expect to see the credit portion under credit, the paid portion under paid, $0 under charge, etc.:

        sum({([Ledger].[Type].&[Credit]),
             ([Ledger].[Type].&[Paid]),
             ([Ledger].[Type].&[Held Money: Copay])},
            [Measures].[Amount - ledger])

    Is there any way to make this return the correct numbers regardless of whether Ledger.Type is included in my pivot table or not?
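    One technique worth trying here is the Existing operator in SSAS MDX, which forces a set to be evaluated in the current query context: with Type on the pivot it restricts the set to the current member, and with Type absent it covers all three types. A hedged sketch (the measure name is illustrative, not from the question):

        CREATE MEMBER CURRENTCUBE.[Measures].[Total Paid] AS
            Aggregate(
                Existing {[Ledger].[Type].&[Credit],
                          [Ledger].[Type].&[Paid],
                          [Ledger].[Type].&[Held Money: Copay]},
                [Measures].[Amount - ledger]
            );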


  • Poor execution plans when using a filter and CONTAINSTABLE in a query

    - by Paul McLoughlin
    We have an interesting problem that I was hoping someone could help to shed some light on. At a high level, the following query executes quickly (1 second):

        SELECT SA.*
        FROM cg.SEARCHSERVER_ACTYS AS SA
        JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1
            ON T1.[Key] = SA.UNIQUE_ID

    but if we add a filter to the query, it takes approximately 2 minutes to return:

        SELECT SA.*
        FROM cg.SEARCHSERVER_ACTYS AS SA
        JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1
            ON T1.[Key] = SA.UNIQUE_ID
        WHERE SA.CHG_DATE > '19 Feb 2010'

    Looking at the execution plans for the two queries, I can see that in the second case there are two places with huge differences between the actual and estimated number of rows:

    1) For the FulltextMatch table-valued function, where the estimate is approx 22,000 rows and the actual is 29 million rows (which are then filtered down to 1,670 rows before the join), and
    2) For the index seek on the full-text index, where the estimate is 1 row and the actual is 13,000 rows.

    As a result of the estimates, the optimiser chooses a nested loops join (since it assumes a small number of rows), hence the plan is inefficient. We can work around the problem either by (a) parameterising the query and adding OPTION (OPTIMIZE FOR UNKNOWN) to it, or (b) forcing a HASH JOIN to be used. In both of these cases the query returns in under 1 second and the estimates appear reasonable. My question really is: why are the estimates in the poorly performing case so wildly inaccurate, and what can be done to improve them? Statistics are up to date on the indexes on the indexed view being used here. Any help greatly appreciated.
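    For reference, workaround (b) can be written as a statement-level query hint; a sketch against the same tables as above (OPTION (HASH JOIN) forces every join in the statement to hash, which is harmless here with a single join):

        SELECT SA.*
        FROM cg.SEARCHSERVER_ACTYS AS SA
        JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1
            ON T1.[Key] = SA.UNIQUE_ID
        WHERE SA.CHG_DATE > '19 Feb 2010'
        OPTION (HASH JOIN)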


  • Django + jQuery: Sometimes AJAX, but always DRY?

    - by Justin Myles Holmes
    Let's say I have an app (in Django) for which I want to sometimes (but not always) load content via ajax. An easy example is logging in. When the user logs in, I don't want to refresh the page, just change things around. Yet, if they are already logged in, and then arrive at (or refresh) the same page, I want it to show the same content. So, in the first case, obviously I do some sort of ajax login and load changes to the page accordingly. Easy enough. But what about in the second case? Do I go back through and add {% if user.authenticated %} all over the place? This seems cold, dark, and WET. On the other hand, I can just wrap all the ajaxy stuff in a javascript function, called loggedIn(), and run that if the user is authenticated. But then I'm faced with two http requests instead of one. Also undesirable. So what's the standard solution here?
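    One DRY-preserving pattern that is often suggested for this: keep the logged-in markup in a single template fragment, {% include %} it on full page loads, and have the AJAX login response return the same rendered fragment for jQuery to swap in. A rough sketch, assuming a recent Django and a made-up fragment path:

        # views.py -- a sketch; the template name is an assumption
        from django.shortcuts import render

        def account_box(request):
            # Render the same fragment the full-page template includes;
            # jQuery fetches this after an AJAX login and swaps it in.
            return render(request, "partials/account_box.html")

    That way the {% if user.is_authenticated %} test lives in exactly one template, whether the page is rendered whole or patched by script.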


  • How to read both spring-application-context.xml and AnnotationConfigWebApplicationContext in Spring MVC

    - by Suvasis
    In case I want to read bean definitions from spring-application-context.xml, I would do this in the web.xml file:

        <context-param>
            <param-name>contextConfigLocation</param-name>
            <param-value>/WEB-INF/applicationContext.xml</param-value>
        </context-param>
        <listener>
            <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
        </listener>

    In case I want to read bean definitions through a Java configuration class (AnnotationConfigWebApplicationContext), I would do this in web.xml:

        <servlet>
            <servlet-name>appServlet</servlet-name>
            <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
            <init-param>
                <param-name>contextClass</param-name>
                <param-value>org.springframework.web.context.support.AnnotationConfigWebApplicationContext</param-value>
            </init-param>
            <init-param>
                <param-name>contextConfigLocation</param-name>
                <param-value>org.package.MyConfigAnnotatedClass</param-value>
            </init-param>
        </servlet>

    How do I use both in my application, i.e. read beans from both the configuration XML file and the annotated class? Is there a way to load the Spring beans in the XML file while we are using the annotated configuration class to instantiate/use the rest of the beans?
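    One way this is commonly handled is Spring's @ImportResource annotation, which merges XML bean definitions into an annotation-based configuration; a sketch (the classpath location is an assumption):

        // a sketch: keep the servlet pointed at the annotated class and pull
        // the XML-defined beans in from there
        import org.springframework.context.annotation.Configuration;
        import org.springframework.context.annotation.ImportResource;

        @Configuration
        @ImportResource("classpath:applicationContext.xml")
        public class MyConfigAnnotatedClass {
            // @Bean methods as before; the XML beans land in the same context
        }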


  • Passing Variable Length Arrays to a function

    - by David Bella
    I have a variable length array that I am trying to pass into a function. The function will shift the first value off and return it, then move the remaining values over to fill in the missing spot, putting, let's say, a -1 in the newly opened spot. I have no problem passing an array declared like so:

        int framelist[128];
        shift(framelist);

    However, I would like to be able to use an array declared in this manner:

        int *framelist;
        framelist = malloc(size * sizeof(int));
        shift(framelist);

    I can populate the arrays the same way outside the function call without issue, but as soon as I pass them into the shift function, the one declared in the first case works fine, but the one in the second case immediately gives a segmentation fault. Here is the code for the shift function, which doesn't do anything yet except try to grab the value from the first slot of the array:

        int shift(int array[])
        {
            int value = array[0];
            return value;
        }

    Any ideas why it won't accept the heap-allocated array? I'm still new to C, so if I am doing something fundamentally wrong, let me know.
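    For what it's worth, the snippets as shown are valid C; a segfault here usually points at something outside them, for example an unchecked malloc failure or a size of 0. A minimal self-contained version that runs cleanly (the population loop is illustrative):

        #include <stdio.h>
        #include <stdlib.h>

        int shift(int array[])
        {
            return array[0];
        }

        int main(void)
        {
            size_t size = 128;
            int *framelist = malloc(size * sizeof *framelist);
            if (framelist == NULL)     /* an unchecked malloc failure would */
                return 1;              /* crash exactly like this on use    */
            for (size_t i = 0; i < size; i++)
                framelist[i] = (int)i;
            printf("first value: %d\n", shift(framelist));
            free(framelist);
            return 0;
        }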


  • Xcode: iPhone Swipe Gesture crash

    - by David DelMonte
    I have an app in which I'd like a swipe gesture to flip to a second view. The app is all set up with buttons that work. The swipe gesture, though, causes a crash (EXC_BAD_ACCESS). The gesture code is:

        - (void)handleSwipe:(UISwipeGestureRecognizer *)recognizer
        {
            NSLog(@"%s", __FUNCTION__);
            switch (recognizer.direction) {
                case (UISwipeGestureRecognizerDirectionRight):
                    [self performSelector:@selector(flipper:)];
                    break;
                case (UISwipeGestureRecognizerDirectionLeft):
                    [self performSelector:@selector(flipper:)];
                    break;
                default:
                    break;
            }
        }

    and "flipper" looks like this:

        - (IBAction)flipper:(id)sender
        {
            FlashCardsAppDelegate *mainDelegate =
                (FlashCardsAppDelegate *)[[UIApplication sharedApplication] delegate];
            [mainDelegate flipToFront];
        }

    flipToBack (and flipToFront) look like this:

        - (void)flipToBack
        {
            NSLog(@"%s", __FUNCTION__);
            BackViewController *theBackView =
                [[BackViewController alloc] initWithNibName:@"BackView" bundle:nil];
            [self setBackViewController:theBackView];
            [UIView beginAnimations:nil context:NULL];
            [UIView setAnimationDuration:1.0];
            [UIView setAnimationTransition:UIViewAnimationTransitionFlipFromLeft
                                   forView:window cache:YES];
            [frontViewController.view removeFromSuperview];
            [self.window addSubview:[backViewController view]];
            [UIView commitAnimations];
            [frontViewController release];
            frontViewController = nil;
            [theBackView release];
            // NSLog (@" FINISHED ");
        }

    Maybe I'm going about this the wrong way... All ideas are welcome.
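    Two details stand out, offered as guesses rather than a diagnosis: flipper: takes an argument but plain performSelector: sends none, and flipToBack releases frontViewController, which is only safe if this object owns it. The selector call, at least, can be made unambiguous by invoking the method directly; a sketch:

        case (UISwipeGestureRecognizerDirectionRight):
            [self flipper:recognizer];  // call directly with a real argument
            break;                      // instead of performSelector: with a
                                        // one-argument selector and no object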


  • Printing HTML blocks

    - by Lem0n
    I have a page with a few tables that I want to be printable. I want the following behaviour:

    1) Add a page break if the next table fits on a single page but won't fit on the current page (because of other stuff already printed on this page).
    2) Print the table header again in case a table does have to be broken (I guess this is the default behaviour).

    Any ideas, especially on the first issue? Maybe some CSS can help? I'll give an example. I have a page with 4 tables, all of them with 10 lines except the third one, which has 50 lines. The first and second go on the first page. Since the third one won't fit on the same page, but will fit on a page alone, it is printed on a page alone... and then the fourth table is printed on the third page (in case it doesn't fit together with the third on the second page). But if the third table had 300 lines and would be broken anyway, it could have started printing on the first page.
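    Both behaviours map onto standard print CSS, though support in older engines varies; a sketch:

        /* 1) keep a table on one page when it fits; engines that honour
           this push the whole table to the next page */
        table { page-break-inside: avoid; }

        /* 2) mark header rows as a header group so engines repeat them
           after each forced break */
        thead { display: table-header-group; }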


  • How do I process the configure file when cross-compiling with mingw?

    - by vy32
    I have a small open source program that builds with an autoconf configure script. I ran configure, then tried to compile with:

        make CC="/opt/local/bin/i386-mingw32-g++"

    That didn't work, because the configure script had found include files that were not available to the mingw system. So then I tried:

        ./configure CC="/opt/local/bin/i386-mingw32-g++"

    But that didn't work either; the configure script gives me this error:

        ./configure: line 5209: syntax error near unexpected token `newline'
        ./configure: line 5209: ` *_cv_*'

    because of this code:

        # The following way of writing the cache mishandles newlines in values,
        # but we know of no workaround that is simple, portable, and efficient.
        # So, we kill variables containing newlines.
        # Ultrix sh set writes to stderr and can't be redirected directly,
        # and sets the high bit in the cache file unless we assign to the vars.
        (
          for ac_var in `(set) 2>&1 | sed -n 's/^\([a-zA-Z_][a-zA-Z0-9_]*\)=.*/\1/p'`; do
            eval ac_val=\$$ac_var
            case $ac_val in #(
            *${as_nl}*)
              case $ac_var in #(
              *_cv_*
        fi

    which is generated when AC_OUTPUT is called. Any thoughts? Is there a correct way to do this?
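    For cross-compiling, autoconf's supported route is the --host triplet rather than overriding CC, and certainly rather than pointing CC at a C++ compiler; a sketch, reusing the toolchain prefix from the question (adjust to whatever triplet is installed):

        # tell configure it is cross-compiling; it will then look for
        # i386-mingw32-gcc, i386-mingw32-g++, i386-mingw32-ar, etc. on PATH
        PATH=/opt/local/bin:$PATH ./configure --host=i386-mingw32
        make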


  • backbone marionette - displaying a view in a layout region which has already been rendered

    - by Justin Wyllie
    I have something like this:

        // render the layout
        Layout.layout = new Layout.Screen();
        MyApp.SomeRegion.show(Layout.layout);

        // render a view into it
        var mView = new CompositeView({collection: data});
        Layout.layout.SomeRegion.show(mView);

    That all works fine. The layout has 3 regions, and the above has rendered a view into one of them. Now, later, I want (in response to some user interaction) to render another view into one of the other regions. So I try:

        var mView2 = new CompositeView({collection: data});
        Layout.layout.SomeOtherRegion.show(mView2);

    But SomeOtherRegion no longer exists on Layout.layout. I looked at this in Firebug. In the first case Layout.layout has my three regions defined on it; in the second case it does not, though they are still there in the regions object. This is the same layout instance. It looks like once a layout has been rendered you cannot render into it again? There must be a way. Nb. I don't want to re-render the layout. I hope that makes sense. --Justin Wyllie


  • How should I read from a buffered reader?

    - by Roman
    I have the following example of reading from a buffered reader:

        while ((inputLine = input.readLine()) != null) {
            System.out.println("I got a message from a client: " + inputLine);
        }

    The code in the loop (the println) will be executed whenever something appears in the buffered reader (input, in this case). In my case, if a client application writes something to the socket, the code in the loop (in the server application) will be executed. But I do not understand how it works. inputLine = input.readLine() waits until something appears in the buffered reader, and when something appears there, the loop condition holds and the code in the loop is executed. But when can null be returned?

    There is another question. The above code was taken from a method which throws Exception, and I use this code in the run method of a Thread. When I try to put throws Exception before the run, the compiler complains: overridden method does not throw exception. Without the throws exception, I get another complaint from the compiler: unreported exception. So, what can I do?
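    On the first question: readLine() returns null at end of stream, i.e. when the peer closes the connection, and that is what ends the loop. On the second: run() in Runnable/Thread declares no checked exceptions, so an override cannot add any; the usual move is to catch inside run(). A sketch, assuming a socket field (hypothetical) and the java.io/java.net imports:

        @Override
        public void run() {
            try {
                BufferedReader input = new BufferedReader(
                        new InputStreamReader(socket.getInputStream()));
                String inputLine;
                while ((inputLine = input.readLine()) != null) { // null = peer closed
                    System.out.println("I got a message from a client: " + inputLine);
                }
            } catch (IOException e) {
                e.printStackTrace(); // or log it / wrap in a RuntimeException
            }
        }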


  • Core data migration failing with "Can't find model for source store" but managedObjectModel for source is present

    - by Ira Cooke
    I have a Cocoa application using Core Data, which is now at the 4th version of its managed object model. My managed object model contains abstract entities, but so far I have managed to get migration working by creating appropriate mapping models and creating my persistent store using addPersistentStoreWithType:configuration:URL:options:error: with the NSMigratePersistentStoresAutomaticallyOption set to YES:

        NSDictionary *optionsDictionary =
            [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES]
                                        forKey:NSMigratePersistentStoresAutomaticallyOption];
        NSURL *url = [NSURL fileURLWithPath:
            [applicationSupportFolder stringByAppendingPathComponent:@"MyApp.xml"]];
        NSError *error = nil;
        [theCoordinator addPersistentStoreWithType:NSXMLStoreType
                                     configuration:nil
                                               URL:url
                                           options:optionsDictionary
                                             error:&error];

    This works fine when I migrate from model version 3 to 4, a migration that involves adding attributes to several entities. Now when I try to add a new model version (version 5), the call to addPersistentStoreWithType: returns nil and the error remains empty. The migration from 4 to 5 involves adding a single attribute. I am struggling to debug the problem and have checked all the following:

    • The source database is in fact at version 4, and the persistent store coordinator's managed object model is at version 5.
    • The 4-to-5 mapping model, as well as the managed object models for versions 4 and 5, are present in the resources folder of my built application.
    • I've tried various model upgrade paths. Strangely, I find that upgrading from an early version 3 to 5 works... but upgrading from 4 to 5 fails.
    • I've tried adding a custom entity migration policy for migration of the entity whose attributes are changing; in this case I overrode the method beginEntityMapping:manager:error:. Interestingly, this method does get called when migration works (i.e. when I migrate from 3 to 4, or from 3 to 5), but it does not get called in the case that fails (4 to 5).

    I'm pretty much at a loss as to where to proceed. Any ideas to help debug this problem would be much appreciated.


  • WM_NCHITTEST and secondary monitor to left of primary monitor

    - by AlanKley
    The described setup, with the 2nd monitor to the left of the primary, causes WM_NCHITTEST to receive negative coordinate values, which is apparently not supported according to this post. I have a custom control written in Win32 that is like a group control. It has a small clickable area. No mouse events are coming to my control when the window containing the custom control lies on a second monitor to the left of the primary monitor. Spy++ shows WM_NCHITTEST messages but no mouse messages. When the window is moved to the primary monitor, or the secondary monitor is positioned to the right of the primary (all points are positive), then everything works fine.

    Below is how WM_NCHITTEST is handled in my custom control. In general I need it to return HTTRANSPARENT so as not to obscure other controls placed inside of it. Anybody have any suggestions what funky coordinate translation I need to do, and what to return in response to WM_NCHITTEST, to get mouse messages translated and sent to my control in the case where it is on a 2nd monitor placed to the left of the primary monitor?

        case WM_NCHITTEST:
        {
            POINT Pt = {LOWORD(lP), HIWORD(lP)};
            int i;
            ScreenToClient (hWnd, &Pt);
            if (PtInRect (&rClickableArea, Pt))
            {
                return(DefWindowProc( hWnd, Msg, wP, lP ));
            }
        }
        lReturn = HTTRANSPARENT;
        break;
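    If the linked post is the usual one on this topic, the culprit is the coordinate unpacking: LOWORD/HIWORD produce unsigned values, so negative screen coordinates come out as large positive numbers and ScreenToClient then maps them outside the client area. The sign-preserving macros from <windowsx.h> are the standard fix; a sketch of the same handler:

        #include <windowsx.h>   /* for GET_X_LPARAM / GET_Y_LPARAM */

        case WM_NCHITTEST:
        {
            /* GET_X_LPARAM/GET_Y_LPARAM keep the sign, unlike LOWORD/HIWORD,
               so monitors positioned left of the primary work correctly */
            POINT Pt = { GET_X_LPARAM(lP), GET_Y_LPARAM(lP) };
            ScreenToClient(hWnd, &Pt);
            if (PtInRect(&rClickableArea, Pt))
                return DefWindowProc(hWnd, Msg, wP, lP);
        }
        lReturn = HTTRANSPARENT;
        break;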


  • Flex AS3: ComboBox set visible to false doesn't hide

    - by jolierouge
    I have a combobox in a view that receives information about application state changes, and is then supposed to show or hide its children based on the whole application state. It receives state change messages, it traces the correct values, it does what it's supposed to do; however, it just doesn't seem to work. Essentially, all it needs to do is hide a combobox during one state and show it again during another state. Here is the code:

        public function updateState(event:* = null):void
        {
            trace("Project Panel Updating State");
            switch (ApplicationData.getSelf().currentState)
            {
                case 'login':
                    this.visible = false;
                    break;
                case 'grid':
                    this.visible = true;
                    listProjects.includeInLayout = false;
                    listProjects.visible = false;
                    trace("ListProjects: " + listProjects.visible);
                    listLang.visible = true;
                    break;
                default:
                    break;
            }
        }

    Here is the MXML:

        <mx:HBox>
            <mx:Button id="btnLoad" x="422" y="84" label="Load" enabled="true" click="loadProject();"/>
            <mx:ComboBox id="listProjects" x="652" y="85" editable="true" change="listChange()" color="#050CA8" fontFamily="Arial" />
            <mx:Label x="480" y="86" text="Language:" id="label3" fontFamily="Arial" />
            <mx:ComboBox id="listLang" x="537" y="84" editable="true" dataProvider="{langList}" color="#050CA8" fontFamily="Arial" width="107" change="listLangChange(event)"/>
            <mx:CheckBox x="830" y="84" label="Languages in English" id="langCheckbox" click='toggleLang()'/>
        </mx:HBox>


  • Java method keyword "final" and its use

    - by Lukas Eder
    When I create complex type hierarchies (several levels, several types per level), I like to use the final keyword on methods implementing some interface declaration. An example:

        interface Garble {
            int zork();
        }

        interface Gnarf extends Garble {
            /**
             * This is the same as calling {@link #zblah(0)}
             */
            int zblah();
            int zblah(int defaultZblah);
        }

    And then:

        abstract class AbstractGarble implements Garble {
            @Override
            public final int zork() { ... }
        }

        abstract class AbstractGnarf extends AbstractGarble implements Gnarf {
            // Here I absolutely want to fix the default behaviour of zblah.
            // No Gnarf should be allowed to set 1 as the default, for instance.
            @Override
            public final int zblah() {
                return zblah(0);
            }

            // This method is not implemented here, but in a subclass
            @Override
            public abstract int zblah(int defaultZblah);
        }

    I do this for several reasons:

    • It helps me develop the type hierarchy. When I add a class to the hierarchy, it is very clear what methods I have to implement and what methods I may not override (in case I forgot the details about the hierarchy).
    • I think overriding concrete stuff is bad according to design principles and patterns, such as the template method pattern. I don't want other developers or my users to do it.

    So the final keyword works perfectly for me. My question is: why is it used so rarely in the wild? Can you show me some examples / reasons where final (in a similar case to mine) would be very bad?


  • Excel VBA Macro for Pivot Table with Dynamic Data Range

    - by John Ziebro
    CODE IS WORKING! THANKS FOR THE HELP!

    I am attempting to create a dynamic pivot table that will work on data that varies in the number of rows. Currently I have 28,300 rows, but this may change daily. Example of the data format as follows:

        Case Number   Branch   Driver
        1342          NYC      Bob
        4532          PHL      Jim
        7391          CIN      John
        8251          SAN      John
        7211          SAN      Mary
        9121          CLE      John
        7424          CIN      John

    Example of the finished table:

        Driver   NYC   PHL   CIN   SAN   CLE
        Bob      1     0     0     0     0
        Jim      0     1     0     0     0
        John     0     0     2     1     1
        Mary     0     0     0     1     0

    Code as follows:

        Sub CreateSummaryReportUsingPivot()
            ' Use a Pivot Table to create a static summary report
            ' with model going down the rows and regions across
            Dim WSD As Worksheet
            Dim PTCache As PivotCache
            Dim PT As PivotTable
            Dim PRange As Range
            Dim FinalRow As Long
            Dim FinalCol As Long

            Set WSD = Worksheets("PivotTable")

            ' Name active worksheet as "PivotTable"
            ActiveSheet.Name = "PivotTable"

            ' Delete any prior pivot tables
            For Each PT In WSD.PivotTables
                PT.TableRange2.Clear
            Next PT

            ' Define input area and set up a Pivot Cache
            FinalRow = WSD.Cells(Application.Rows.Count, 1).End(xlUp).Row
            FinalCol = WSD.Cells(1, Application.Columns.Count).End(xlToLeft).Column
            Set PRange = WSD.Cells(1, 1).Resize(FinalRow, FinalCol)
            Set PTCache = ActiveWorkbook.PivotCaches.Add(SourceType:=xlDatabase, SourceData:=PRange)

            ' Create the Pivot Table from the Pivot Cache
            Set PT = PTCache.CreatePivotTable(TableDestination:=WSD.Cells(2, FinalCol + 2), TableName:="PivotTable1")

            ' Turn off updating while building the table
            PT.ManualUpdate = True

            ' Set up the row fields
            PT.AddFields RowFields:="Driver", ColumnFields:="Branch"

            ' Set up the data fields
            With PT.PivotFields("Case Number")
                .Orientation = xlDataField
                .Function = xlCount
                .Position = 1
            End With

            With PT
                .ColumnGrand = False
                .RowGrand = False
                .NullString = "0"
            End With

            ' Calc the pivot table
            PT.ManualUpdate = False
            PT.ManualUpdate = True
        End Sub


  • How to use a Remote Service?

    - by LEE YONGGUN
    Hi, I'm trying to use a remote service between two simple applications, but it's not easy for me, so any advice will help. Here's my case. I made one app which plays music in a service. There are two components: one is an Activity controlling the service using three buttons (play, pause, stop), and it is working fine. The other one is just a simple Activity which has four buttons: bind, play, stop, unbind. When I click bind, it's confirmed by a Toast message, but when I click the play button, it causes an error. I want to control the first activity's music-playing service from the second Activity, so I'm trying to use a remote service. I made the same .aidl file in each app project. In the aidl file I defined the methods "playing" and "stoping", and I implemented those methods in the Music service class; the implementation simply uses an intent plus startService & stopService. In DDMS there is "java.lang.SecurityException : Binder invocation to an incorrect interface". That's my situation, so please tell me what the problem is. Any advice could help. Thanks, Gun.
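    That particular SecurityException is typically raised when the two processes disagree about the interface descriptor, i.e. the .aidl file is not identical on both sides (same package, same file name, same methods in the same order). A sketch of what both projects would need to share, with made-up package and interface names:

        // src/com/example/music/IMusicService.aidl -- identical copy in BOTH apps
        package com.example.music;

        interface IMusicService {
            void playing();
            void stoping();
        }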


  • Safe way to set computed environment variables

    - by sfink
    I have a bash script that I am modifying to accept key=value pairs from stdin. (It is spawned by xinetd.) How can I safely convert those key=value pairs into environment variables for subprocesses? I plan to only allow keys that begin with a predefined prefix "CMK_", to avoid IFS or any other "dangerous" variable getting set. But the simplistic approach

        function import () {
            local IFS="="
            while read key val; do
                case "$key" in
                    CMK_*) eval "$key=$val";;
                esac
            done
        }

    is horribly insecure, because $val could contain all sorts of nasty stuff. This seems like it would work:

        shopt -s extglob
        function import () {
            NORMAL_IFS="$IFS"
            local IFS="="
            while read key val; do
                case "$key" in
                    CMK_*([a-zA-Z_]) )
                        IFS="$NORMAL_IFS"
                        eval $key='$val'
                        IFS="="
                        ;;
                esac
            done
        }

    but (1) it uses the funky extglob thing that I've never used before, and (2) it's complicated enough that I can't be comfortable that it's secure.

    My goal, to be specific, is to allow key=value settings to pass through the bash script into the environment of called processes. It is up to the subprocesses to deal with potentially hostile values getting set. I am modifying someone else's script, so I don't want to just convert it to Perl and be done with it. I would also rather not change it around to invoke the subprocesses differently, something like:

        #!/bin/sh
        ...start of script...
        perl -nle '($k,$v)=split(/=/,$_,2); $ENV{$k}=$v if $k =~ /^CMK_/; END { exec("subprocess") }'
        ...end of script...
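    One route that avoids eval entirely, offered as a hedged sketch: export takes a single NAME=VALUE word, so $val is passed as data and never re-parsed by the shell, and the key can be validated with plain (non-extglob) patterns:

        import () {
            local key val
            while IFS='=' read -r key val; do
                case $key in
                    CMK_*[!A-Za-z_]*) ;;          # reject keys with odd characters
                    CMK_?*) export "$key=$val" ;; # value is data here, not code
                esac
            done
        }

    The first pattern catches any key containing a character outside [A-Za-z_] after the prefix; the second only matches keys with at least one valid character after CMK_, so a bare "CMK_" is ignored too.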


  • int considered harmful?

    - by Chris Becke
    Working on code meant to be portable between Win32, Win64, and Cocoa, I am really struggling to get to grips with what the @#$% the various standards committees involved over the past decades were thinking when they first came up with, and then perpetuated, the crime against humanity that is the C native typeset: char, short, int and long.

    On the one hand, as an old-school C++ programmer, there are few statements as elegant and/or as simple as

        for (int i = 0; i < some_max; i++)

    but now it seems that, in the general case, this code can never be correct. Oh sure, given a particular version of MSVC or GCC, with specific targets, the size of 'int' can be safely assumed. But in the case of writing very generic C/C++ code that might one day be used on 16-bit hardware, or 128-bit, or just be exposed to a particularly weirdly set-up 32/64-bit compiler, how does one use int in C++ code in a way that gives the resulting program predictable behaviour in any and all possible C++ compilers that implement C++ according to spec?

    To resolve these unpredictabilities, C99 and C++98 introduced size_t, uintptr_t, ptrdiff_t, int8_t, int16_t, int32_t, int64_t and so on. Which leaves me thinking that a raw int, anywhere in pure C++ code, should really be considered harmful, as there is some (completely conforming) compiler that is going to produce an unexpected or incorrect result with it (and probably be an attack vector as well).


  • convincing C# compiler that execution will stop after a member returns

    - by Sarah Vessels
    I don't think this is currently possible, or even that it's a good idea, but it's something I was thinking about just now. I use MSTest for unit testing my C# project. In one of my tests, I do the following:

        MyClass instance;
        try
        {
            instance = getValue();
        }
        catch (MyException ex)
        {
            Assert.Fail("Caught MyException");
        }
        instance.doStuff(); // Use of unassigned local variable 'instance'

    To make this code compile, I have to assign a value to instance either at its declaration or in the catch block. However, Assert.Fail will never, to the best of my knowledge, allow execution to proceed past it, hence instance will never be used without a value. Why is it then that I must assign a value to it? If I change the Assert.Fail to something like throw ex, the code compiles fine; I assume because the compiler then knows that the exception will disallow execution from proceeding to a point where instance would be used uninitialized.

    So is it a case of runtime versus compile-time knowledge about where execution will be allowed to proceed? Would it ever be reasonable for C# to have some way of saying that a member, in this case Assert.Fail, will never allow execution to continue after it returns? Maybe that could be in the form of a method attribute. Would this be useful, or an unnecessary complexity for the compiler?
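    Definite assignment is a purely compile-time analysis, so the usual workaround is to end the catch block with something the compiler can see terminates the path; rethrowing after the assert is a minimal sketch. (As an aside, much later C#/.NET versions did add a [DoesNotReturn] attribute that the compiler's flow analysis can honour, which is essentially the attribute mused about above.)

        MyClass instance;
        try
        {
            instance = getValue();
        }
        catch (MyException)
        {
            Assert.Fail("Caught MyException");
            throw;  // unreachable at runtime, but proves to the compiler
                    // that this path never falls through to doStuff()
        }
        instance.doStuff(); // compiles: instance is definitely assigned here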


  • The worker process calls OpenSubKey via the Remote Registry service but gets null back

    - by Cary
    My web server is deployed in IIS 6. The web server successfully starts the Remote Registry service on the remote machine by creating a process to run some remote operation commands. The first line below runs successfully, but the second line returns null:

        // #1
        RegistryKey remoteRegKey = RegistryKey.OpenRemoteBaseKey(RegistryHive.LocalMachine, "139.24.185.27");
        // #2
        RegistryKey targetKey = remoteRegKey.OpenSubKey(@"SOFTWARE\Wow6432Node\XXXX\XXXX\Config\Modality", true);

    I tried to find the reason on MSDN. It describes only one case in which OpenSubKey returns null: when the subkey does not exist. If the caller lacks permission, it throws an exception instead. But the subkey really exists. I switched to another machine to debug my code with Visual Studio 2008, and there both lines run successfully. If the process has enough permission, it should be able to open not only LocalMachine but also any of its subkeys. I am quite confused about this.
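    One hedged guess, given that the path goes through Wow6432Node: WOW64 registry redirection can make the same path resolve differently depending on the bitness of the process on each machine. On .NET 4 or later, you could request the 32-bit view explicitly and drop Wow6432Node from the path (the overload below does not exist on .NET 3.5 and earlier):

        // a sketch, .NET 4+: open the 32-bit registry view explicitly so the
        // path no longer depends on the worker process's own bitness
        RegistryKey remoteRegKey = RegistryKey.OpenRemoteBaseKey(
            RegistryHive.LocalMachine, "139.24.185.27", RegistryView.Registry32);
        RegistryKey targetKey = remoteRegKey.OpenSubKey(
            @"SOFTWARE\XXXX\XXXX\Config\Modality", true);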


  • UICollectionView with one static cell and N dynamic ones from a fetchresultscontroller exception

    - by nflacco
    I'm trying to make a UITableView that shows a blog post and the comments for that post. My setup is a table view in a storyboard with two dynamic prototype cells. The first cell is for the post and should never change. The second cell represents the 0 to N comments. My cellForRowAtIndexPath method shows the post cell properly, but fails to get the comment at the given index path (though if I comment out the fetch I get the appropriate number of comment cells, with a green background that I set as a visual debug thing):

        let comment = fetchedResultController.objectAtIndexPath(indexPath) as Comment

    I get the following exception on this line:

        2014-08-24 15:06:40.712 MessagePosting[21767:3266409] *** Terminating app due to
        uncaught exception 'NSRangeException', reason: '*** -[__NSArrayM objectAtIndex:]:
        index 1 beyond bounds [0 .. 0]'
        *** First throw call stack:
        (
            0   CoreFoundation    0x0000000101aa43e5 __exceptionPreprocess + 165
            1   libobjc.A.dylib   0x00000001037f9967 objc_exception_throw + 45
            2   CoreFoundation    0x000000010198f4c3 -[__NSArrayM objectAtIndex:] + 227
            3   CoreData          0x00000001016e4792 -[NSFetchedResultsController objectAtIndexPath:] + 162

    Section and cell setup:

        override func tableView(tableView: UITableView!, numberOfRowsInSection section: Int) -> Int {
            switch section {
            case 0:
                return 1
            default:
                if let realPost:Post = post {
                    return fetchedResultController.sections[0].numberOfObjects
                } else {
                    return 0
                }
            }
        }

        override func tableView(tableView: UITableView!, cellForRowAtIndexPath indexPath: NSIndexPath!) -> UITableViewCell! {
            switch indexPath.section {
            case 0:
                let cell = tableView.dequeueReusableCellWithIdentifier(postViewCellIdentifier, forIndexPath: indexPath) as UITableViewCell
                cell.backgroundColor = lightGrey
                if let realPost:Post = self.post {
                    cell.textLabel.text = realPost.text
                }
                return cell
            default:
                let cell = tableView.dequeueReusableCellWithIdentifier(commentCellIdentifier, forIndexPath: indexPath) as UITableViewCell
                cell.backgroundColor = UIColor.greenColor()
                let comment = fetchedResultController.objectAtIndexPath(indexPath) as Comment // <---- :(
                cell.textLabel.text = comment.text
                return cell
            }
        }

    FRC:

        func controllerDidChangeContent(controller: NSFetchedResultsController!) {
            tableView.reloadData()
        }

        func getFetchedResultController() -> NSFetchedResultsController {
            fetchedResultController = NSFetchedResultsController(fetchRequest: taskFetchRequest(), managedObjectContext: managedObjectContext, sectionNameKeyPath: nil, cacheName: nil)
            return fetchedResultController
        }

        func taskFetchRequest() -> NSFetchRequest {
            if let realPost:Post = self.post {
                let fetchRequest = NSFetchRequest(entityName: "Comment")
                let sortDescriptor = NSSortDescriptor(key: "date", ascending: false)
                fetchRequest.predicate = NSPredicate(format: "post = %@", realPost)
                fetchRequest.sortDescriptors = [sortDescriptor]
                return fetchRequest
            } else {
                return NSFetchRequest(entityName: "")
            }
        }
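    The exception text points at the likely mismatch: the fetched results controller only knows about its own section 0, while the table hands over index paths from section 1 ("index 1 beyond bounds [0 .. 0]"). Translating the index path before the lookup is the usual fix; a sketch:

        // the FRC's rows live in its section 0, so map the table's section-1
        // index path down before asking it for an object
        let frcIndexPath = NSIndexPath(forRow: indexPath.row, inSection: 0)
        let comment = fetchedResultController.objectAtIndexPath(frcIndexPath) as Comment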


  • Core Plot bar chart does not work

    - by user355068
    Hello~ I am trying to draw a bar chart, but numberForPlot: is not working well; only CPBarPlotFieldBarLength gets set:

        -(NSNumber *)numberForPlot:(CPPlot *)plot field:(NSUInteger)fieldEnum recordIndex:(NSUInteger)index
        {
            NSDecimalNumber *num = nil;
            NSString *key = (fieldEnum == CPScatterPlotFieldX) ? @"x" : @"y";
            if ( [plot isKindOfClass:[CPBarPlot class]] ) {
                switch ( fieldEnum ) {
                    NSLog(@"fieldEnum = %d",fieldEnum);
                    case CPBarPlotFieldBarLocation:
                        num = (NSDecimalNumber *)[NSDecimalNumber numberWithUnsignedInteger:index];
                        NSLog(@"CPBarPlotFieldBarLocation return num = %@",num);
                        break;
                    case CPBarPlotFieldBarLength:
                        num = (NSDecimalNumber *)[NSDecimalNumber numberWithUnsignedInteger:(index+1)*(index+1)];
                        if ( [plot.identifier isEqual:Plot3Identity] )
                            num = [[self.ExForPlot objectAtIndex:index] valueForKey:key];
                        NSLog(@"CPBarPlotFieldBarLength return num = %@",num);
                        break;
                }
            } else {
                NSLog(@"...??..");
            }
            return num;
        }

    The log shows:

        2010-06-01 02:43:19.424 myHealth[5071:207] CPBarPlotFieldBarLength return num = 0
        2010-06-01 02:43:19.425 myHealth[5071:207] CPBarPlotFieldBarLength return num = 0
        2010-06-01 02:43:19.425 myHealth[5071:207] CPBarPlotFieldBarLength return num = 30

    And here is how ExForPlot gets set:

        // Add some initial data
        NSMutableArray *contentArray = [NSMutableArray arrayWithCapacity:100];
        int prevWeight = 0;
        int prevBmi = 0;
        if (sqlite3_prepare_v2(database, sql, -1, &statement, NULL) == SQLITE_OK) {
            while (sqlite3_step(statement) == SQLITE_ROW) {
                id x = [NSNumber numberWithInt:graphTimeStamp];
                id y = [NSNumber numberWithInt:sqlite3_column_int(statement, 2)];
                [contentArray addObject:[NSMutableDictionary dictionaryWithObjectsAndKeys:x, @"x", y, @"y", nil]];
            }
        }
        self.ExForPlot = contentArray;
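    One observable oddity, noted as a diagnostic hint rather than a fix: the first NSLog sits between "switch (" and the first "case" label, so it is unreachable and never reports which field is actually being requested. Hoisting it above the switch makes the field values visible; a sketch:

        // statements placed between "switch (...) {" and the first "case"
        // label never execute, so log before entering the switch instead
        NSLog(@"fieldEnum = %lu", (unsigned long)fieldEnum);
        switch ( fieldEnum ) {
            case CPBarPlotFieldBarLocation:
                /* ... as before ... */
                break;
            case CPBarPlotFieldBarLength:
                /* ... as before ... */
                break;
        }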


  • linux new/delete, malloc/free large memory blocks

    - by brian_mk
    Hi folks, we have a Linux system (Kubuntu 7.10) that runs a number of CORBA server processes. The server software uses the glibc libraries for memory allocation. The Linux PC has 4G physical memory; swap is disabled for speed reasons.

    Upon receiving a request to process data, one of the server processes allocates a large data buffer (using the standard C++ operator 'new'). The buffer size varies depending upon a number of parameters, but is typically around 1.2G bytes; it can be up to about 1.9G bytes. When the request has completed, the buffer is released using 'delete'.

    This works fine for several consecutive requests that allocate buffers of the same size, or if a request allocates a smaller size than the previous one. The memory appears to be freed OK; otherwise buffer allocation attempts would eventually fail after just a couple of requests. In any case, we can see the buffer memory being allocated and freed for each request using tools such as KSysGuard, etc.

    The problem arises when a request requires a buffer larger than the previous one. In this case, operator 'new' throws an exception. It's as if the memory freed by the first allocation cannot be re-allocated, even though there is sufficient free physical memory available. If I kill and restart the server process after the first operation, then the second request for a larger buffer size succeeds; i.e. killing the process appears to fully release the freed memory back to the system.

    Can anyone offer an explanation as to what might be going on here? Could it be some kind of fragmentation or mapping table size issue? I am thinking of replacing new/delete with malloc/free and using mallopt to tune the way the memory is being released to the system.

    BTW, I'm not sure if it's relevant to our problem, but the server uses pthreads that get created and destroyed on each processing request.

    Cheers, Brian.
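    If the process is 32-bit, the likelier culprit is address-space fragmentation rather than physical memory: a 1.9G buffer must be one contiguous range inside roughly 3G of user address space, so anything allocated in the middle of the freed hole blocks the bigger request. Since the question already points at mallopt, here is a hedged sketch of the relevant glibc knobs (the threshold values are illustrative; M_MMAP_THRESHOLD makes big allocations use mmap, whose pages go straight back to the kernel on free):

        #include <malloc.h>

        int main(void)
        {
            /* serve anything over 1 MiB via mmap so free() returns it to the OS */
            mallopt(M_MMAP_THRESHOLD, 1024 * 1024);
            /* trim the heap back to the OS once 4 MiB sit free at its top */
            mallopt(M_TRIM_THRESHOLD, 4 * 1024 * 1024);
            /* ... run the server ... */
            return 0;
        }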


  • Web application date time localization best practice at 201x

    - by Hieu Lam
    Hi all, I have worked on various web projects, but correct date/time localization has never been considered thoroughly in them, so I want to ask about this very typical problem here and hear comments from experts on it.

    1. What is the correct strategy for storing a date/time value from client to server? As I understand it, because of locales and timezones we have to do a conversion. I have heard about GMT and UTC time, and after doing some searching it seems that UTC is more accurate. So we convert from client time to UTC+0 when saving, and when we read the value from the server back to the client, we convert from server time to client time again? However, at the bottom of some websites I see the sentence "All times are in UTC", "All times are in GMT", or even "All times are in your local time". So maybe not all sites do the conversion back and forth? And in that case the user has to do the date/time conversion manually?

    2. How do we display the date/time conveniently for a user, based on his locale and region? How do we provide personalization of date/time values? At one time I depended on VBScript for the display, and the format was read automatically from the Windows regional and format settings. But without VBScript, how can we determine a date/time pattern for a user of a specific locale? Do we have to store a mapping between locales and patterns somewhere and do the conversion on the server side?

    3. Although date/time conversion is needed in most cases, there are situations where only the date matters. For example, if my birthday is 2 Feb 1980, it should be the same in all locales, and no conversion should be done. How can we address this issue?


  • SQL Query: How to determine "seen during hour N" given two DateTime time stamps?

    - by efess
    Hello all. I'm writing a statistics application based on an SQLite database. There is a table which records when users log in and log out (SessionStart, SessionEnd datetimes). What I'm looking for is a query that can show during which hours users have been logged in, sort of like a line graph: between the hours of 12:00 and 1:00 AM there were 60 users logged in (at any point), between the hours of 1:00 and 2:00 AM there were 54 users logged in, etc. And I want to be able to run a SUM of this, which is why I can't bring the records into .NET and iterate through them that way.

    I've come up with a rather primitive approach: a subquery for each hour of the day. However, this approach has proved to be slow, and I need to be able to calculate this for a couple hundred thousand records in a split second:

        SELECT
            case when (strftime('%s',datetime(date(sessionstart), '+0 hours')) > strftime('%s',sessionstart)
                   AND strftime('%s',datetime(date(sessionstart), '+0 hours')) < strftime('%s',sessionend))
                OR (strftime('%s',datetime(date(sessionstart), '+1 hours')) > strftime('%s',sessionstart)
                   AND strftime('%s',datetime(date(sessionstart), '+1 hours')) < strftime('%s',sessionend))
                OR (strftime('%s',datetime(date(sessionstart), '+0 hours')) < strftime('%s',sessionstart)
                   AND strftime('%s',datetime(date(sessionstart), '+1 hours')) > strftime('%s',sessionend))
            then 1 else 0 end as hour_zero,
            ... hour_one,
            ... hour_two,
            ........ hour_twentythree
        FROM UserSession

    I'm wondering: what is a better way to determine whether two datetimes span a particular hour (best-case scenario, how many times a session has crossed an hour if it was logged in over multiple days, but that's not necessary)? The only other idea I had is an "hour" table specific to this, and just tallying up the hours the user has been seen at runtime, but I feel like that is more of a hack than the previous SQL. Any help would be greatly appreciated!
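    For what it's worth, the "hour table" idea maps to a single join rather than 24 hand-written CASE columns; a sketch, assuming a helper table hours(hour) holding 0-23 and ISO-formatted datetime columns (so plain string comparison orders correctly):

        -- a session is "seen" during hour h if it starts before the end of h
        -- and ends after the start of h (counted on the day the session started,
        -- matching the original query's simplification for multi-day sessions)
        SELECT h.hour,
               COUNT(*) AS sessions_seen
        FROM hours AS h
        JOIN UserSession AS s
          ON s.SessionStart < datetime(date(s.SessionStart), '+' || (h.hour + 1) || ' hours')
         AND s.SessionEnd   > datetime(date(s.SessionStart), '+' || h.hour       || ' hours')
        GROUP BY h.hour
        ORDER BY h.hour;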

