Search Results

Search found 7097 results on 284 pages for 'calls'.

  • Transaction Isolation Level of Serializable not working for me

    - by Shahriar
    I have a website which is used by all branches of a store; it records customer purchases into a table called myTransactions. The myTransactions table has a column named SerialNumber. For each purchase I create a record in the transactions table and assign a serial to it. The stored procedure that does this calls a UDF to get a new SerialNumber before inserting the record, like below:

        CREATE PROCEDURE mytransaction_Insert AS
        BEGIN
            INSERT INTO myTransactions (column1, column2, column3, ..., SerialNumber)
            VALUES (Value1, Value2, Value3, ..., getTransactionNSerialNumber())
        END

        CREATE FUNCTION getTransactionNSerialNumber() AS
        BEGIN
            RETURN ISNULL((SELECT TOP (1) SerialNumber
                           FROM myTransactions (READUNCOMMITTED)
                           ORDER BY SerialNumber DESC), 0) + 1
        END

    The website is being used by many users in different stores at the same time, and it is creating many duplicate SerialNumbers. So I wrapped the insert in a SQL transaction at the READ COMMITTED level and still got duplicate serial numbers. I changed it to SERIALIZABLE in order to lock the resources, and I not only still got duplicate numbers (how?!) but also sporadic deadlocks between calls to the same stored procedure. This is what I tried (omitting the TRY/CATCH blocks and rollbacks):

        CREATE PROCEDURE mytransaction_Insert AS
        BEGIN
            SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
            BEGIN TRANSACTION ins
            INSERT INTO myTransactions (column1, column2, column3, ..., SerialNumber)
            VALUES (Value1, Value2, Value3, ..., getTransactionNSerialNumber())
            COMMIT TRANSACTION ins
            SET TRANSACTION ISOLATION LEVEL READ COMMITTED
        END

    I even copied the serial-number logic directly into the stored procedure instead of calling the UDF, and still got duplicate SerialNumbers. So how can a stored procedure create something like the C# lock() {} block? By the way, I have to implement the transaction serial number using this same pattern: I can't change SerialNumber to an identity field or anything similar, and for various reasons I need to generate the SerialNumber inside the database rather than at the application level. Thank you.
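
    A likely culprit is the READUNCOMMITTED hint inside the UDF: a table hint overrides the session's isolation level, so even under SERIALIZABLE the SELECT TOP (1) reads without the range locks that would serialize callers, and two sessions can compute the same "next" number. One common workaround is a dedicated single-row counter, read and incremented in a single atomic statement; a minimal sketch (the SerialCounter table is an assumption, not part of the original schema):

        CREATE TABLE SerialCounter (LastSerial int NOT NULL)
        INSERT INTO SerialCounter VALUES (0)

        CREATE PROCEDURE mytransaction_Insert AS
        BEGIN
            BEGIN TRANSACTION
            DECLARE @serial int
            -- read-and-increment in one statement; the exclusive lock is held to
            -- commit, so concurrent calls queue here instead of duplicating numbers
            UPDATE SerialCounter SET @serial = LastSerial = LastSerial + 1
            INSERT INTO myTransactions (column1, SerialNumber) VALUES (Value1, @serial)
            COMMIT TRANSACTION
        END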

  • Loading animation Memory leak

    - by Ayaz Alavi
    Hi, I have written a network class that manages all network calls for my application. There are two methods, showLoadingAnimationView and hideLoadingAnimationView, that show a UIActivityIndicatorView in a view over my current view controller with a faded background. I am getting memory leaks somewhere in these two methods. Here is the code:

        -(void)showLoadingAnimationView {
            textmeAppDelegate *textme = (textmeAppDelegate *)[[UIApplication sharedApplication] delegate];
            [[UIApplication sharedApplication] setNetworkActivityIndicatorVisible:YES];
            if (wrapperLoading != nil) {
                [wrapperLoading release];
            }
            wrapperLoading = [[UIView alloc] initWithFrame:CGRectMake(0.0, 0.0, 320.0, 480.0)];
            wrapperLoading.backgroundColor = [UIColor clearColor];
            wrapperLoading.alpha = 0.8;
            UIView *_loadingBG = [[UIView alloc] initWithFrame:CGRectMake(0.0, 0.0, 320.0, 480.0)];
            _loadingBG.backgroundColor = [UIColor blackColor];
            _loadingBG.alpha = 0.4;
            circlingWheel = [[UIActivityIndicatorView alloc]
                initWithActivityIndicatorStyle:UIActivityIndicatorViewStyleWhiteLarge];
            CGRect wheelFrame = circlingWheel.frame;
            circlingWheel.frame = CGRectMake(((320.0 - wheelFrame.size.width) / 2.0),
                                             ((480.0 - wheelFrame.size.height) / 2.0),
                                             wheelFrame.size.width, wheelFrame.size.height);
            [wrapperLoading addSubview:_loadingBG];
            [wrapperLoading addSubview:circlingWheel];
            [circlingWheel startAnimating];
            [textme.window addSubview:wrapperLoading];
            [_loadingBG release];
            [circlingWheel release];
        }

        -(void)hideLoadingAnimationView {
            [[UIApplication sharedApplication] setNetworkActivityIndicatorVisible:NO];
            wrapperLoading.alpha = 0.0;
            [self.wrapperLoading removeFromSuperview];
            //[NSTimer scheduledTimerWithTimeInterval:0.8 target:wrapperLoading
            //    selector:@selector(removeFromSuperview) userInfo:nil repeats:NO];
        }

    Here is how I call these two methods:

        [NSThread detachNewThreadSelector:@selector(showLoadingAnimationView) toTarget:self withObject:nil];

    and then somewhere later in the code I hide the animation with:

        [self hideLoadingAnimationView];

    I get memory leaks when I call showLoadingAnimationView. Is anything wrong in the code, or is there a better technique to show a loading animation while we do network calls?
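
    Two things stand out, offered as a sketch of likely fixes rather than a confirmed diagnosis: wrapperLoading is alloc'd on every show but never released after the final hide, and the view is being built on a background thread, which UIKit does not support. A minimal adjustment:

        -(void)hideLoadingAnimationView {
            [[UIApplication sharedApplication] setNetworkActivityIndicatorVisible:NO];
            [wrapperLoading removeFromSuperview];
            [wrapperLoading release];   // balance the alloc in showLoadingAnimationView
            wrapperLoading = nil;       // keeps the nil check in show... valid
        }

        // and drive the UI work on the main thread instead of a detached thread:
        [self performSelectorOnMainThread:@selector(showLoadingAnimationView)
                               withObject:nil
                            waitUntilDone:NO];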

  • How can I find out what is stopping my object from disposing properly?

    - by SLC
    I have a rather large and complex game in C#, and in it are various monsters. The monsters are created in the game by a MonsterCreator, and each monster has various plugins that are loaded from external DLLs. When my monsters die, they call a method in the MonsterCreator, and the MonsterCreator removes them from the game map, removes them from its own internal list of monsters, and then finally calls the Dispose() method on the monster itself. The dispose method calls the dispose method of each plugin, then cleans up its own state.

    This seems to work fine with lots of monsters, but somewhere there is a bug that crops up after a while, where a monster dies but has already been removed - it seems the callback telling the MonsterCreator is being called over and over, when the monster should have been removed on the first call. The likely candidate is that some of the monster's plugins register themselves with an event that pulses them every X seconds so they can perform logic. Stepping through, I can see that they unregister from the event when they die, but something is still getting through and I don't know what it is.

    Do you have any advice for debugging the problem? I can't really post code because it's split across a ton of libraries and plugin DLLs, so it's more a case of figuring out the best way to debug it. I threw an exception where the monster-died callback is raised and the monster cannot be found on the map to be removed, so I have the misbehaving monster in hand - is there a way I can see what is still linked to it?
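
    One cheap diagnostic, sketched here with a hypothetical callback name: make the death handler idempotent and log the stack of every extra call, so the trace names whichever plugin or event is still holding a reference.

        // inside the monster (or keyed per-monster in the MonsterCreator)
        private bool deathHandled;

        public void OnMonsterDied()
        {
            if (deathHandled)
            {
                // every duplicate death lands here; the trace identifies the caller
                System.Diagnostics.Debug.WriteLine(Environment.StackTrace);
                return;
            }
            deathHandled = true;
            // ... remove from map, remove from list, Dispose() ...
        }

    For the "what still links to it" question, a memory debugger helps: break after the first death, then WinDbg/SOS's !gcroot on the monster's address lists every reference chain keeping it alive.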

  • Issue with clipping rectangles and back to front rendering

    - by Milo
    Here is my problem: my rendering algorithm renders from back to front, but logically, clipping rectangles need to be applied from front to back. Hence the following does not work:

        void AguiWidgetManager::recursiveRender(const AguiWidget *root)
        {
            // recursively calls itself to render widgets from back to front
            AguiWidget* nonConstRoot = (AguiWidget*)root;
            if (!nonConstRoot->isVisable()) {
                return;
            }

            // push the clipping rectangle
            if (nonConstRoot->isClippingChildren()) {
                graphicsContext->pushClippingRect(nonConstRoot->getClippingRectangle());
            }

            if (nonConstRoot->isEnabled()) {
                nonConstRoot->paint(AguiPaintEventArgs(true, graphicsContext));
                for (std::vector<AguiWidget*>::const_iterator it = root->getPrivateChildBeginIterator();
                     it != root->getPrivateChildEndIterator(); ++it) {
                    recursiveRender(*it);
                }
                for (std::vector<AguiWidget*>::const_iterator it = root->getChildBeginIterator();
                     it != root->getChildEndIterator(); ++it) {
                    recursiveRender(*it);
                }
            } else {
                nonConstRoot->paint(AguiPaintEventArgs(false, graphicsContext));
                for (std::vector<AguiWidget*>::const_iterator it = root->getPrivateChildBeginIterator();
                     it != root->getPrivateChildEndIterator(); ++it) {
                    recursiveRenderDisabled(*it);
                }
                for (std::vector<AguiWidget*>::const_iterator it = root->getChildBeginIterator();
                     it != root->getChildEndIterator(); ++it) {
                    recursiveRenderDisabled(*it);
                }
            }

            // release the clipping rectangle
            if (nonConstRoot->isClippingChildren()) {
                graphicsContext->popClippingRect();
            }
        }

    I could of course go to the top of the tree and apply clipping rectangles inward until I reach the currently rendered widget, but that would mean pushing lots of clipping rectangles at 60 frames per second. I want to minimize calls to pushing and popping rectangles. What could I do? Thanks
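
    One way to keep push/pop cheap, sketched on hypothetical names since the real GraphicsContext internals aren't shown: have pushClippingRect intersect the new rectangle with the current top of a stack. Each widget then pays one O(1) intersection during the existing traversal, the effective clip is correct regardless of render order, and no walk back up the tree is needed:

        void GraphicsContext::pushClippingRect(const Rectangle& r)
        {
            // clip to the intersection of this rect and everything already pushed
            Rectangle top = clipStack.empty() ? screenRect : clipStack.back();
            clipStack.push_back(top.intersect(r));   // hypothetical intersect helper
            applyHardwareClip(clipStack.back());
        }

        void GraphicsContext::popClippingRect()
        {
            clipStack.pop_back();
            applyHardwareClip(clipStack.empty() ? screenRect : clipStack.back());
        }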

  • Possible Data Execution Prevention (DEP) problem in Windows 7

    - by Joel in Gö
    I have a serious problem with my .NET program. It calls a native DLL and then crashes instantly because it can't find a native method. This is behaviour we have seen before, whereby the C# compiler, in its infinite wisdom, sets the flag saying the program is DEP compatible even if it calls a native DLL which patently is not. We have the standard workaround for this, where the flag is set to Not DEP Compatible in a post-build step, and this works fine - everywhere except on my machine.

    I have Windows 7 32-bit, and the program works fine on the Win 7 64-bit machines that we have, as well as on Vista and XP; we have not yet been able to check on another Win 7 32-bit machine. However, on my machine the DataExecutionPolicy_SupportPolicy is 0, i.e. we have successfully switched DEP off. Does anyone know whether there is some situation in which it can still act? Or any other mechanism which could have the same effect? The DLL in question also works fine when called from a native program. We are running out of ideas... any help would be much appreciated!
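
    For reference, the usual post-build step of this kind clears the NXCOMPAT flag in the executable's PE header with editbin, and dumpbin can verify what actually shipped - worth running on the binary on the machine that crashes, in case the post-build step is skipped in that build configuration (a guess, not a diagnosis):

        editbin.exe /NXCOMPAT:NO MyProgram.exe
        dumpbin.exe /headers MyProgram.exe | findstr /i "NX"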

  • Tailing 'Jobs' with Perl under mod_perl

    - by Matthew
    Hi everyone, I've got this project running under mod_perl that shows some information on a host. On the page is a text box with a dropdown that allows users to ping/nslookup/traceroute the host. The output is shown in the text box like a tail -f. It works great under CGI: when the user requests a ping, the page makes an AJAX call to the server, which starts the ping with the output going to a temp file; subsequent AJAX calls 'tail' the file so that the output is updated until the ping finishes, and once the job finishes the temp file is removed.

    However, under mod_perl, no matter what I do I can't stop it from creating zombie processes. I've tried everything - double forking, using IPC::Run, etc. In the end, system calls are not encouraged under mod_perl. So my question is: maybe there's a better way to do this? Is there a CPAN module available for creating command-line jobs and tailing their output that will work under mod_perl? I'm just looking for suggestions. I know I could probably create some sort of 'job' daemon that I signal with details and get updates from - it would run the commands and keep track of their status etc. But is there a simpler way? Thanks in advance.
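
    For comparison, the canonical detach-without-zombies pattern looks like the sketch below. Since double forking has already been tried, this is shown mainly to pin down the two details that commonly go wrong: the intermediate child must be reaped with waitpid, and the grandchild must be detached from the session with setsid so init inherits it.

        use POSIX qw(setsid);

        sub run_detached {
            my ($outfile, @cmd) = @_;
            defined(my $pid = fork) or die "fork failed: $!";
            if ($pid == 0) {                            # first child
                setsid or die "setsid failed: $!";
                defined(my $pid2 = fork) or die "fork failed: $!";
                exit 0 if $pid2;                        # first child exits immediately
                open STDOUT, '>', $outfile or die $!;   # grandchild: reparented to init
                open STDERR, '>&', \*STDOUT or die $!;
                exec @cmd or die "exec failed: $!";
            }
            waitpid $pid, 0;                            # reap the first child right away
        }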

  • What's the best way to develop a debugging window for an ajax ASP.Net MVC application

    - by KallDrexx
    While developing my ASP.NET MVC app, I have started to see the need for a debugging console window to assist in figuring out what is going right and wrong in my code. I read the last few chapters of the Pro ASP.NET MVC book, where the author details how to use HTTP modules to show page load/creation times and LINQ to SQL query logs, both of which I definitely want to be able to see. However, since I am loading a lot of small sections of my page individually with AJAX, I don't want the debug information right there in the middle of my screen.

    So the idea I came up with was to have a separate browser window (openable by a link or some JavaScript) with a console log that can contain entries logged both from JavaScript and from the ASP.NET MVC run. The former should be relatively easy, but I'm having trouble coming up with a way to log the ASP.NET information for AJAX requests. The direction I have been thinking of is to create an HttpModule (like the Pro MVC book does) and have that module append the server-side messages to the JavaScript console log. The issue I see with this is finding a way to get the log messages from the controller's action methods to the HttpModule's methods. The only way I see to do this is with a singleton, but I'm not sure if singletons are bad practice for a stateless web application. Furthermore, it seems that if I return JSON from my AJAX calls (instead of pure HTML) then that won't work at all, unless there is a way to add data to an existing JSON structure inside the HttpModule. How does everyone else handle this type of debugging in heavily AJAX applications? For reference, the JavaScript library I am using is jQuery.
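
    One per-request alternative to a singleton, sketched in outline: ASP.NET's HttpContext.Items dictionary lives exactly as long as one request, so controller actions and an HttpModule can share a log through it without any global state, and the module can append the collected entries to an HTML response or fold them into a JSON payload at EndRequest.

        using System.Collections.Generic;
        using System.Web;

        public static class RequestLog
        {
            // per-request: Items is discarded when the request ends
            public static void Write(string message)
            {
                var items = HttpContext.Current.Items;
                if (items["__log"] == null)
                    items["__log"] = new List<string>();
                ((List<string>)items["__log"]).Add(message);
            }

            public static IList<string> Entries
            {
                get
                {
                    return (IList<string>)(HttpContext.Current.Items["__log"]
                                           ?? new List<string>());
                }
            }
        }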

  • video calling (center)

    - by rrejc
    We are starting to develop a new application and I'm searching for information/tips/guides on application architecture. The application should:

        1. read data from an external (USB) device
        2. send the data to a remote server (over the internet)
        3. receive data from the remote server
        4. perform a video call to the calling (support) center
        5. receive a video call from the calling (support) center
        6. support touch screens
        7. in addition, some of the data should also be visible through a web page

    So I was thinking about the following. On the server side:

        - use a database (probably MS SQL)
        - use an ORM (NHibernate) to map the data from the DB to domain objects
        - create a layer with business logic in C#
        - create web (WCF) services for the client application
        - create an ASP.NET MVC application (for item 7) to enable data view through the browser

    On the client side I would use a WPF 4 application which will communicate with the external device and the WCF services on the server. So far so good. Now the problem begins: I have no idea how to create the video call (outgoing or incoming) part of the application. I believe there is no problem communicating with the microphone, speakers and camera from WPF/C#. But how do I communicate with the call center? What protocol and encoding should be used? I think I will need to create some kind of server which will:

        - keep a list of operators in the calling center and track which operator is occupied and which is free
        - keep a list of connected end users
        - receive incoming calls from end users and delegate each call to a free operator
        - delegate calls from the calling center to the end user

    Any info, link, anything on where to start would be much appreciated. Many thanks!

  • Getting svn: E170000: Unrecognized URL scheme for my custom Svn Gradle plugin

    - by Ip Doh
    I wrote a custom Gradle plugin in Groovy to do basic svn tasks like checkout, clean, tag etc. The Groovy class calls the svn command-line client to do these operations. It works fine when I run it on my Windows system, but the same plugin gives the following error when I run it on a Linux system (CentOS):

        svn: E170000: Unrecognized URL scheme for '%22https://source.mycompany.net/svn/MyProject/trunk%22'

    I am able to make the same calls to the command-line client through the command prompt or a shell script without any issues. So what is the difference? Here is my code sample:

        String command = String.format(
            "svn co -r %d --non-interactive --trust-server-cert --username %s --password %s --depth infinity \"%s\" \"%s\"",
            getRevision(), getUserName(), getUserPassword(), getSrcUrl(), getDir())
        Process svnProcess = Runtime.getRuntime().exec(command)
        BufferedReader stdInput = new BufferedReader(new InputStreamReader(svnProcess.getInputStream()))
        BufferedReader stdError = new BufferedReader(new InputStreamReader(svnProcess.getErrorStream()))
        String statusOutputLine = ""
        while ((statusOutputLine = stdInput.readLine()) != null) {
            logger.quiet(" " + statusOutputLine)
        }
        while ((statusOutputLine = stdError.readLine()) != null) {
            logger.error(statusOutputLine)
            throw new Exception(statusOutputLine)
        }
        logger.quiet("Successfully checked out the work space")

    I do have Neon installed on the system:

        -bash-4.1$ svn --version
        svn, version 1.6.11 (r934486)
        compiled Jun 25 2011, 11:30:15
        Copyright (C) 2000-2009 CollabNet.
        Subversion is open source software, see http://subversion.tigris.org/
        This product includes software developed by CollabNet (http://www.Collab.Net/).

        The following repository access (RA) modules are available:
        ra_neon : Module for accessing a repository via WebDAV protocol using Neon.
          handles 'http' scheme
          handles 'https' scheme
        ra_svn : Module for accessing a repository using the svn network protocol.
          with Cyrus SASL authentication
          handles 'svn' scheme
        ra_local : Module for accessing a repository on local disk.
          handles 'file' scheme
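
    The %22 in the error message is a strong hint about the cause, offered here as a suggestion rather than a confirmed diagnosis: Runtime.exec(String) only splits the command on whitespace and does not interpret quotes the way a shell does, so the escaped \" characters stay attached to the URL token and svn sees "https://..., which it URL-encodes as %22https://... (On Windows the child process re-parses its own command line, which is likely why the quotes disappear there.) Passing the arguments as a list sidesteps quoting entirely:

        // each list element reaches svn exactly as-is; no quoting needed
        List<String> cmd = ["svn", "co", "-r", getRevision().toString(),
                            "--non-interactive", "--trust-server-cert",
                            "--username", getUserName(), "--password", getUserPassword(),
                            "--depth", "infinity", getSrcUrl(), getDir()]
        Process svnProcess = new ProcessBuilder(cmd).redirectErrorStream(true).start()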

  • Optimizing drawing on UITableViewCell

    - by Brian
    I am drawing content to a UITableViewCell and it is working well, but I'm trying to understand whether there is a better way of doing this. Each cell has the following components:

        - a thumbnail on the left side, which could come from a server, so it is loaded asynchronously
        - a title string of variable length, so each cell could be a different height
        - a timestamp string
        - a gradient background running from the top of the cell to the bottom, semi-transparent so that background colors shine through with a gloss

    It currently works well. The drawing happens as follows:

        1. The UITableViewController inits/reuses a cell, sets the needed data, and calls [cell setNeedsDisplay].
        2. The cell has a CALayer for the thumbnail - thumbnailLayer.
        3. In the cell's drawRect it draws the gradient background and the two strings.
        4. The cell's drawRect then calls setIcon, which gets the thumbnail and sets the image as the contents of the thumbnailLayer. If the image is not found locally, it sets a loading image as the contents of the thumbnailLayer and asynchronously fetches the thumbnail. Once the thumbnail is received, setIcon is called again and resets thumbnailLayer.contents.

    This all currently works, but using Instruments I see that the thumbnail is compositing with the gradient. I have tried the following to fix this:

        - Setting the cell's backgroundView to a view whose drawRect draws the gradient, so that the cell's drawRect could draw just the thumbnail, and using setNeedsDisplayInRect to redraw only the thumbnail after it loads - but this resulted in the backgroundView's drawing (the gradient) covering the cell's drawing (the text).
        - Just drawing the thumbnail in the cell's drawRect - but when setNeedsDisplay is called, drawRect would draw one image over another and the loading image might show through. I could clear the rect, but then I would have to redraw the gradient.
        - Drawing the gradient in a CAGradientLayer and keeping a reference to it so I can quickly redraw it - but I figured I'd have to redraw the gradient whenever the cell's height changes.

    Any ideas? I'm sure I'm missing something, so any help would be great.
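
    On the last point, resizing a stored CAGradientLayer is cheap and does not require redrawing the gradient - the layer re-renders its colors for the new bounds itself. A sketch, assuming the cell keeps a gradientLayer ivar inserted at the bottom of its layer hierarchy:

        // in the cell's init: create the gradient once
        gradientLayer = [[CAGradientLayer alloc] init];
        gradientLayer.colors = [NSArray arrayWithObjects:
                                   (id)[[UIColor colorWithWhite:1.0 alpha:0.5] CGColor],
                                   (id)[[UIColor colorWithWhite:0.0 alpha:0.5] CGColor], nil];
        [self.layer insertSublayer:gradientLayer atIndex:0];

        // height changes are handled by resizing, not redrawing
        - (void)layoutSubviews {
            [super layoutSubviews];
            gradientLayer.frame = self.bounds;
        }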

  • Autorotation and multiple view controllers

    - by alku83
    I'm creating an iPad application which should work only in portrait and portrait-upside-down modes. For performance reasons, in my applicationDidFinishLaunching method I create several view controllers and add their views to my main window as subviews, then hide the ones I don't want to see straight away. There is no tab bar or navigation controller.

    My problem is that only the first view controller seems to receive the rotation calls. I have verified this by swapping the order in which I add the subviews to the main window and watching NSLogs. Is there some way I can force all the controllers to receive the calls? Some of my views are designed to lie over the top of another view, but the view behind will not always be the same one - so it seems to make sense to keep the overlay view in a separate view controller. Am I doing something fundamentally wrong, and is that why it's not behaving as I would expect?

    EDIT: The accepted answer for this question seems to describe exactly the problem I'm facing: http://stackoverflow.com/questions/548142/uiviewcontroller-rotate-methods
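
    Since only the controller behind the window's first subview gets the rotation callbacks, one workaround is to forward them manually from the controller that does receive them. A sketch - overlayControllers is a hypothetical array holding the other view controllers:

        - (void)willAnimateRotationToInterfaceOrientation:(UIInterfaceOrientation)orientation
                                                 duration:(NSTimeInterval)duration {
            // relay the callback UIKit only delivers to this controller
            for (UIViewController *vc in self.overlayControllers) {
                [vc willAnimateRotationToInterfaceOrientation:orientation duration:duration];
            }
        }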

  • Forcing file redirection on x64 for a 32-bit application

    - by Paul Alexander
    The silent redirection of 64-bit system files to their 32-bit equivalents can be turned off and reverted with Wow64DisableWow64FsRedirection and Wow64RevertWow64FsRedirection. We use this for certain file-identity checks in our application. The problem is that while performing some of these tasks, we might call a framework or Windows API in a DLL that has not yet been loaded. If redirection is still disabled at that time, the wrong version of the DLL may be loaded, resulting in an "XXX is not a valid Win32 application" error.

    I've identified the few API calls in question, and what I'd like to do is force redirection on for the duration of those calls and then revert it - just the opposite of the provided Win32 APIs. Unfortunately these calls do not provide any sort of WOW64 compatibility flag like some of the registry methods do. The obvious alternative is to use Wow64EnableWow64FsRedirection, passing TRUE for Wow64FsEnableRedirection. However, there are a variety of warnings about the use of this method, and a note that it is not compatible with the Disable/Revert pair that has replaced it.

    Is there a safe way to force redirection on for a given Win32 call? The docs state that redirection is thread-specific, so I've considered spinning up a new thread for the specific call with appropriate locks and waits, but I was hoping for a simpler solution.
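
    Since redirection is already on by default for a 32-bit process, one angle worth considering (a sketch, with the file-check call as a placeholder) is inverting the structure: keep each disabled window as narrow as possible and revert before any call that might pull in a new DLL, rather than forcing redirection back on mid-task.

        PVOID oldState = NULL;

        if (Wow64DisableWow64FsRedirection(&oldState)) {
            CheckFileIdentity(L"C:\\Windows\\System32\\some.dll");  /* placeholder check */
            Wow64RevertWow64FsRedirection(oldState);  /* restore before anything else runs */
        }
        /* any API that may trigger a DLL load runs here, with redirection
           back in its default (enabled) state */
        CallApiThatMayLoadADll();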

  • Why does this cast to Base class in virtual function give a segmentation fault?

    - by dehmann
    I want to print out a derived class using operator<<. When I print the derived class, I want to first print its base and then its own content. But I ran into some trouble (see the segfault below):

        class Base {
          public:
            friend std::ostream& operator<<(std::ostream&, const Base&);
            virtual void Print(std::ostream& out) const { out << "BASE!"; }
        };

        std::ostream& operator<<(std::ostream& out, const Base& b) {
            b.Print(out);
            return out;
        }

        class Derived : public Base {
          public:
            virtual void Print(std::ostream& out) const {
                out << "My base: ";
                //((const Base*)this)->Print(out); // infinite, calls this fct recursively
                //((Base*)this)->Print(out);       // segfault (from infinite loop?)
                ((Base)*this).Print(out);          // OK
                out << " ... and myself.";
            }
        };

        int main(int argc, char** argv) {
            Derived d;
            std::cout << d;
            return 0;
        }

    Why can't I cast in one of these ways?

        ((const Base*)this)->Print(out); // infinite, calls this fct recursively
        ((Base*)this)->Print(out);       // segfault (from infinite loop?)
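
    For what it's worth, the behavior follows from the dispatch rules: a pointer cast never changes the object's dynamic type, so the virtual call still dispatches to Derived::Print and recurses until the stack overflows - the "segfault" is that overflow. The (Base)*this version works only because it slices off a temporary copy whose dynamic type really is Base. The usual way to write this without a copy is a qualified call, which suppresses virtual dispatch:

        virtual void Print(std::ostream& out) const {
            out << "My base: ";
            Base::Print(out);   // qualified name: static dispatch, no recursion, no slicing
            out << " ... and myself.";
        }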

  • Poor LLVM JIT performance

    - by Paul J. Lucas
    I have a legacy C++ application that constructs a tree of C++ objects. I want to use LLVM to call the class constructors to create said tree. The generated LLVM code is fairly straightforward and looks like repeated sequences of:

        ; ...
        %11 = getelementptr [11 x i8*]* %Value_array1, i64 0, i64 1
        %12 = call i8* @T_string_M_new_A_2Pv(i8* %heap, i8* getelementptr inbounds ([10 x i8]* @0, i64 0, i64 0))
        %13 = call i8* @T_QueryLoc_M_new_A_2Pv4i(i8* %heap, i8* %12, i32 1, i32 1, i32 4, i32 5)
        %14 = call i8* @T_GlobalEnvironment_M_getItemFactory_A_Pv(i8* %heap)
        %15 = call i8* @T_xs_integer_M_new_A_Pvl(i8* %heap, i64 2)
        %16 = call i8* @T_ItemFactory_M_createInteger_A_3Pv(i8* %heap, i8* %14, i8* %15)
        %17 = call i8* @T_SingletonIterator_M_new_A_4Pv(i8* %heap, i8* %2, i8* %13, i8* %16)
        store i8* %17, i8** %11, align 8
        ; ...

    where each T_ function is a C "thunk" that calls some C++ constructor, e.g.:

        void* T_string_M_new_A_2Pv( void *v_value ) {
            string *const value = static_cast<string*>( v_value );
            return new string( value );
        }

    The thunks are necessary, of course, because LLVM knows nothing about C++. The T_ functions are added to the ExecutionEngine in use via ExecutionEngine::addGlobalMapping(). When this code is JIT'd, the performance of the JIT'ing itself is very poor. I've generated a call graph using kcachegrind. I don't understand all the numbers (and this PDF seems not to include commas where it should), but if you look at the left fork, the bottom two ovals, Schedule... is called 16K times and setHeightToAtLeas... is called 37K times. On the right fork, RAGreed... is called 35K times. Those are far too many calls to anything for what's mostly a simple sequence of call LLVM instructions. Something seems horribly wrong. Any ideas on how to improve the performance of the JIT'ing?
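
    The hot functions in the profile look like codegen-optimization work (instruction scheduling and register allocation), so one thing worth trying, sketched below against the EngineBuilder API of that era, is JIT'ing at CodeGenOpt::None; for straight-line sequences of calls the fast codegen path is dramatically cheaper and the emitted code is barely worse:

        // sketch: build the JIT with optimization turned off
        llvm::ExecutionEngine *ee = llvm::EngineBuilder(module)
            .setEngineKind(llvm::EngineKind::JIT)
            .setOptLevel(llvm::CodeGenOpt::None)   // skip scheduler/regalloc effort
            .create();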

  • NSThread terminating too early

    - by JustinXXVII
    I have an app that uploads to Google Spreadsheets via the GData Objective-C client for Mac/iPhone. It works fine as is. Now I'm trying to move the upload portion onto its own thread, and I'm attempting to call the upload method on a new thread. Look:

        -(void)establishNewThreadToUpload {
            [NSThread detachNewThreadSelector:@selector(uploadToGoogle) toTarget:self withObject:nil];
        }

        -(void)uploadToGoogle {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            // works fine
            [helper setNewServiceWithName:username password:password];
            // works fine
            [helper fetchUserSpreadsheetFeed];
            // inside the helper class, fetchUserSpreadsheetFeed calls ANOTHER method,
            // which calls ANOTHER method, and so on, until the object is either
            // uploaded or fails. However, once the class gets to the end of
            // fetchUserSpreadsheetFeed, control is passed back to this method, and
            [pool release];
            // is called. The thread terminates and nothing ever happens.
        }

    If I forget about using a separate thread, everything works like it's supposed to. I'm new to thread programming, so if there's something I'm missing, please clue me in! Thanks!
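
    A likely explanation, offered as a guess consistent with the symptoms: the GData fetches are asynchronous and deliver their callbacks via the run loop of the thread that started them, and a detached thread's run loop never spins on its own, so the thread falls off the end of the method and dies before any fetch completes. A sketch of keeping the thread alive (the uploadFinished flag is hypothetical and would be set by the fetch's completion callback):

        -(void)uploadToGoogle {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            [helper setNewServiceWithName:username password:password];
            [helper fetchUserSpreadsheetFeed];
            // pump the run loop so the async GData callbacks can fire on this thread
            while (!helper.uploadFinished &&
                   [[NSRunLoop currentRunLoop] runMode:NSDefaultRunLoopMode
                                            beforeDate:[NSDate dateWithTimeIntervalSinceNow:0.25]]) {
                // keep waiting
            }
            [pool release];
        }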

  • Using Pragma in Oracle Package Body

    - by asalamon74
    I'd like to create an Oracle package with two functions in it: a public function (function_public) and a private one (function_private). The public function calls the private one. I'd like to add the same pragma to both functions: WNDS, WNPS. Without the pragma I can write code like this:

        CREATE OR REPLACE PACKAGE PRAGMA_TEST AS
          FUNCTION function_public(x IN VARCHAR2) RETURN VARCHAR2;
        END PRAGMA_TEST;

        CREATE OR REPLACE PACKAGE BODY PRAGMA_TEST AS
          FUNCTION function_private(y IN VARCHAR2) RETURN VARCHAR2 IS
          BEGIN
            -- code
          END;

          FUNCTION function_public(x IN VARCHAR2) RETURN VARCHAR2 IS
          BEGIN
            -- code
            -- here is a call to function_private
            -- code
          END;
        END PRAGMA_TEST;

    If I want to add the WNDS, WNPS pragma to function_public, I must also add the same pragma to function_private, because function_public calls it. It seems to me the pragma can be used only in the package declaration and not in the package body, so I have to declare function_private in the package spec as well:

        CREATE OR REPLACE PACKAGE PRAGMA_TEST AS
          FUNCTION function_private(y IN VARCHAR2) RETURN VARCHAR2;
          PRAGMA RESTRICT_REFERENCES(function_private, WNDS, WNPS);
          FUNCTION function_public(x IN VARCHAR2) RETURN VARCHAR2;
          PRAGMA RESTRICT_REFERENCES(function_public, WNDS, WNPS);
        END PRAGMA_TEST;

        CREATE OR REPLACE PACKAGE BODY PRAGMA_TEST AS
          -- body unchanged from above
        END PRAGMA_TEST;

    This solution makes function_private public as well. Is there a way to add the pragma to a function that exists only in the package body?
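
    One option worth checking against the Oracle docs for the version in use - this is a pointer, not something verified here: RESTRICT_REFERENCES accepts the keyword DEFAULT in place of a subprogram name, asserting the purity levels for every subprogram in the package, which would let function_private stay out of the spec:

        CREATE OR REPLACE PACKAGE PRAGMA_TEST AS
          FUNCTION function_public(x IN VARCHAR2) RETURN VARCHAR2;
          -- DEFAULT asserts WNDS/WNPS for all subprograms in the package
          PRAGMA RESTRICT_REFERENCES(DEFAULT, WNDS, WNPS);
        END PRAGMA_TEST;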

  • silverlight 3: long running wcf call triggers 401.1 (access denied)

    - by sympatric greg
    I have a WCF service consumed by a Silverlight 3 control. The Silverlight client uses a basicHttpBinding that is constructed at runtime from the control's initialization parameters, like this:

        public static T GetServiceClient<T>(string serviceURL)
        {
            BasicHttpBinding binding = new BasicHttpBinding(
                Application.Current.Host.Source.Scheme.Equals("https", StringComparison.InvariantCultureIgnoreCase)
                    ? BasicHttpSecurityMode.Transport
                    : BasicHttpSecurityMode.None);
            binding.MaxReceivedMessageSize = int.MaxValue;
            binding.MaxBufferSize = int.MaxValue;
            binding.Security.Mode = BasicHttpSecurityMode.TransportCredentialOnly;
            return (T)Activator.CreateInstance(typeof(T),
                new object[] { binding, new EndpointAddress(serviceURL) });
        }

    The service implements Windows security. Calls returned as expected until the result set grew to several thousand rows, at which point HTTP 401.1 (access denied) errors appeared. The service's httpBinding defines closeTimeout, openTimeout, receiveTimeout and sendTimeout values of 10 minutes. If I limit the size of the result set, the call succeeds.

    Additional observations from Fiddler. When Method2 is modified to return a smaller result set (avoiding the problem), control initialization consists of four calls:

        Service1/Method1 -- result: 401
        Service1/Method1 -- result: 401 (this time the header includes "Authorization: Negotiate TlRMTV...")
        Service1/Method1 -- result: 200
        Service1/Method2 -- result: 200 (1.25 seconds)

    When Method2 is configured to return the larger result set, we get:

        Service1/Method1 -- result: 401
        Service1/Method1 -- result: 401 (this time the header includes "Authorization: Negotiate TlRMTV...")
        Service1/Method1 -- result: 200
        Service1/Method2 -- result: 401.1 (7.5 seconds)
        Service1/Method2 -- result: 401.1 (15 ms)
        Service1/Method2 -- result: 401.1 (7.5 seconds)

  • Strange behavior using getchar() and -O3

    - by Eduardo
    I have these two functions:

        void set_dram_channel_width(int channel_width) {
            printf("one\n");
            getchar();
        }

        void set_dram_transaction_granularity(int cacheline_size) {
            printf("two\n");
            getchar();
        }

    Output (lines marked <- are my keyboard input):

        one
        f        <- my keyboard input
        two
        one
        f        <- keyboard input
        two
        one
        f        <- keyboard input
        (no more calls)

    Then I change the functions to:

        void set_dram_channel_width(int channel_width) {
            printf("one\n");
        }

        void set_dram_transaction_granularity(int cacheline_size) {
            printf("two\n");
            getchar();
        }

    Output:

        one
        two
        f        <- keyboard input
        (no more calls)

    Both functions are called by external code. The code for both programs is the same; just by changing the getchar() I get those two different outputs. Is this possible, or is there something really wrong in my code? Thanks.

    This is the output I get with GDB. For the first version:

        (gdb) break mem-dram.c:374
        Breakpoint 1 at 0x71c810: file build/ALPHA_FS/mem/dramsim/mem-dram.c, line 374.
        (gdb) break mem-dram.c:381
        Breakpoint 2 at 0x71c7b0: file build/ALPHA_FS/mem/dramsim/mem-dram.c, line 381.
        (gdb) run -d ./tmp/MyBench2/
        one
        f
        [Switching to Thread 47368811512112 (LWP 17507)]
        Breakpoint 1, set_dram_channel_width (channel_width=64)
        (gdb) c
        Continuing.
        two
        one
        f
        Breakpoint 2, set_dram_transaction_granularity (cacheline_size=64)
        (gdb) c
        Continuing.
        Breakpoint 1, set_dram_channel_width (channel_width=8)
        374     void set_dram_channel_width(int channel_width){
        (gdb) c
        Continuing.
        two
        one
        f

    For the second version:

        (gdb) break mem-dram.c:374
        Breakpoint 1 at 0x71c7b6: file build/ALPHA_FS/mem/dramsim/mem-dram.c, line 374.
        (gdb) break mem-dram.c:380
        Breakpoint 2 at 0x71c7f0: file build/ALPHA_FS/mem/dramsim/mem-dram.c, line 380.
        (gdb) run
        one
        two
        f
        [Switching to Thread 46985688772912 (LWP 17801)]
        Breakpoint 1, set_dram_channel_width (channel_width=64)
        (gdb) c
        Continuing.
        Breakpoint 2, set_dram_transaction_granularity (cacheline_size=64)
        (gdb) c
        Continuing.
        Breakpoint 1, set_dram_channel_width (channel_width=8)
        (gdb) c
        Continuing.
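
    One detail worth ruling out before blaming -O3, since it matches the interleaving in the first trace exactly: typing f and Enter puts two characters on stdin, so every second getchar() returns the leftover '\n' immediately instead of waiting. Draining to end of line makes each pause consume exactly one line of input - a sketch:

        #include <stdio.h>

        /* read one key, then discard the rest of the line (including the '\n') */
        static int wait_for_key(void) {
            int key = getchar();
            int c;
            while ((c = getchar()) != '\n' && c != EOF) { /* discard */ }
            return key;
        }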

  • Microsoft Detours - DetourUpdateThread?

    - by pault543
    Hi, I have a few quick questions about the Microsoft Detours library. I have used it before (successfully), but I just had a thought about this function:

        LONG DetourUpdateThread(HANDLE hThread);

    I read elsewhere that this function will actually suspend the thread until the transaction completes. This seems odd, since most sample code calls:

        DetourUpdateThread(GetCurrentThread());

    Anyway, apparently this function "enlists" threads so that, when the transaction commits (and the detours are made), their instruction pointers are modified if they lie "within the rewritten code in either the target function or the trampoline function." My questions are:

        1. When the transaction commits, is the current thread's instruction pointer going to be within the DetourTransactionCommit function? If so, why should we bother enlisting it to be updated?
        2. If the enlisted threads are suspended, how can the current thread continue executing (given that most sample code calls DetourUpdateThread(GetCurrentThread()))?
        3. Finally, could you suspend all threads for the current process, avoiding race conditions (considering that threads could be created and destroyed at any time)? Perhaps this is done when the transaction begins? That would allow us to enumerate threads more safely (as it seems less likely that new threads could be created), although what about CreateRemoteThread()?

    Thanks, Paul

    For reference, here is an extract from the simple sample:

        // DllMain function attaches and detaches the TimedSleep detour to the
        // Sleep target function. The Sleep target function is referred to
        // through the TrueSleep target pointer.
        BOOL WINAPI DllMain(HINSTANCE hinst, DWORD dwReason, LPVOID reserved)
        {
            if (dwReason == DLL_PROCESS_ATTACH) {
                DetourTransactionBegin();
                DetourUpdateThread(GetCurrentThread());
                DetourAttach(&(PVOID&)TrueSleep, TimedSleep);
                DetourTransactionCommit();
            }
            else if (dwReason == DLL_PROCESS_DETACH) {
                DetourTransactionBegin();
                DetourUpdateThread(GetCurrentThread());
                DetourDetach(&(PVOID&)TrueSleep, TimedSleep);
                DetourTransactionCommit();
            }
            return TRUE;
        }
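
    On questions 1 and 2, my reading of the Detours sources is that attempts to suspend the current thread are silently dropped, which would be why the samples can pass GetCurrentThread() safely - take that as an interpretation to verify against the source, not gospel. For question 3, the usual pattern is to snapshot the process's threads with the Toolhelp API and enlist everything except the committing thread; a sketch with error handling elided (needs <tlhelp32.h>):

        HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPTHREAD, 0);
        THREADENTRY32 te;
        te.dwSize = sizeof(te);
        if (Thread32First(snap, &te)) {
            do {
                if (te.th32OwnerProcessID == GetCurrentProcessId() &&
                    te.th32ThreadID != GetCurrentThreadId()) {
                    HANDLE hThread = OpenThread(THREAD_ALL_ACCESS, FALSE, te.th32ThreadID);
                    if (hThread != NULL) {
                        DetourUpdateThread(hThread);  // suspended until the commit
                    }
                }
            } while (Thread32Next(snap, &te));
        }
        CloseHandle(snap);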

  • Database access through collections

    - by Mike
    Hi All, I have a 3-tiered application where I need to get database results and populate the UI. I have a MessagesCollection class that deals with messages. I load my user from the database. On instantiation of a user (i.e. new User()), a MessageCollection Messages = new MessageCollection(this) is performed; MessageCollection accepts a user as a parameter.

        User user = User.LoadUser("bob");

    I want to get the messages for Bob:

        user.Messages.GetUnreadMessages();

    GetUnreadMessages calls my business data provider, which in turn calls the data access layer. The business data provider returns a List of messages.

    My question is, I am not sure what the best practice is here. If I hold the messages in an array inside the MessagesCollection class, I could implement ICollection to provide GetEnumerator() and the ability to traverse the messages. But what happens if the messages change and the user has old messages loaded? What about big message collections? What if my user had 10,000 unread messages? I don't think accessing the database and returning 10,000 Message objects would be efficient.
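
    On the 10,000-message concern, the usual answer is to page at the data layer so only one screenful of rows ever crosses the tiers; a sketch with hypothetical names:

        // inside MessagesCollection: fetch one page on demand instead of everything
        public IList<Message> GetUnreadMessages(int pageIndex, int pageSize)
        {
            return businessProvider.GetUnreadMessages(user.Id, pageIndex, pageSize);
        }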

  • Correct Interactive Website System Design Concepts / Methods?

    - by Xandel
    Hi all, I hope this question isn't too open-ended, but a nudge in the right direction is all I need! I am currently building an online accounting system - the idea is that users can register, log in, and then create customers, generate invoices and other documents, and eventually print or email those documents. I am a Java programmer but unfortunately haven't had much experience with web projects and their design concepts...

    This is what I have so far: a Tomcat web server which loads Spring. Spring handles my DAOs and the classes required for the business logic. Tomcat serves JSPs containing the pages which make up the website. To make it interactive I have used JavaScript in the pages (jQuery and its AJAX calls) to send and receive JSON data (this is done by posting to a page which calls a handleAction() method in one of my classes).

    My question is: am I tackling this project in the right way? Am I using the right tools and methods? I understand there are literally countless ways of tackling any project, but I would really love feedback with regards to tried and tested methods, general practices etc. Thanks in advance! Xandel

  • silverlight security with WCF service, Forms Authentication and Custom Form Ticket

    - by user74825
    I have a Silverlight application with a login on the Silverlight page. It uses Forms Authentication with the WCF authentication service and a custom membership provider - something like: http://blogs.msdn.com/phaniraj/archive/2009/09/10/using-the-ado-net-data-services-silverlight-client-library-in-x-domain-and-out-of-browser-scenarios-ii-forms-authentication.aspx

    So the SL login page calls the WCF authentication service, which validates against the DB and brings back the username and password. In each subsequent call (in Global.asax, in Authenticate_Request), I get HttpContext.User.IsAuthenticated and HttpContext.User.UserName. I have all this working properly. But I don't want just the username; I want more information about the user, like UserId, UserAddress, UserAssociateCustomer etc. I have tried a couple of different approaches:

        1. Use HttpContext.Cache as a dictionary to save the item and fetch it based on HttpContext.User.UserName. The problem is that the cache can be evicted when memory is being used heavily.
        2. Use a custom forms authentication ticket: when forms authentication writes a ticket, I intercept the CreatingCookie method and write additional info into the forms authentication ticket so that I can read it in subsequent requests. I am having problems with this approach - I don't find the ticket in subsequent requests.

    I read about how we should use Response.Redirect, but where do I redirect the user to from a WCF call? How do you implement the above scenario? Any best practices? Any issues you see with going to HTTPS? All examples (or most of them) just explain simple forms authentication with an "I am logged in" message... Any suggestions?
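
    For the CreatingCookie interception, the piece that usually goes missing is writing the custom ticket back as the forms cookie and decrypting it on later requests; a sketch with a hypothetical payload format:

        // issue: pack extra user info into the ticket's UserData when creating the cookie
        var ticket = new FormsAuthenticationTicket(
            1, userName, DateTime.Now, DateTime.Now.AddMinutes(30),
            false, userId + "|" + customerId);          // hypothetical payload
        var cookie = new HttpCookie(FormsAuthentication.FormsCookieName,
                                    FormsAuthentication.Encrypt(ticket));
        HttpContext.Current.Response.Cookies.Add(cookie);

        // read it back on later requests (e.g., in PostAuthenticateRequest)
        var auth = HttpContext.Current.Request.Cookies[FormsAuthentication.FormsCookieName];
        if (auth != null) {
            string data = FormsAuthentication.Decrypt(auth.Value).UserData;
        }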

  • Why does setting this member in C fail?

    - by Lee Crabtree
    I'm writing a Python wrapper for a C++ library, and I'm getting a really weird error when trying to set a struct's field in C. If I have a struct like this:

        struct Thing {
            PyObject_HEAD
            unsigned int val;
        };

    and two functions like these:

        static PyObject* Thing_GetBit(Thing* self, PyObject* args)
        {
            unsigned int mask;
            if (!PyArg_ParseTuple(args, "I", &mask))
                Py_RETURN_FALSE;
            if ((self->val & mask) != 0)
                Py_RETURN_TRUE;
            Py_RETURN_FALSE;
        }

        static PyObject* Thing_SetBit(Thing* self, PyObject* args)
        {
            unsigned int mask;
            bool on;
            if (!PyArg_ParseTuple(args, "Ii", &mask, &on))
                Py_RETURN_FALSE;
            if (on)
                self->val |= mask;
            else
                self->val &= ~mask;
            Py_RETURN_TRUE;
        }

    Python code that calls the first method works just fine, giving back the value of the struct member. Calls to the SetBit method give an error about an object at address foo accessing memory at address bar, which couldn't be "written". I've poked around the code, and it's like I can look at the value all I want, both from C and Python, but the instant I try to set it, it blows up in my face. Am I missing something fundamental here?
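
    A likely cause, flagged as a strong suspicion rather than a certainty: the "i" format in PyArg_ParseTuple writes a full int through the supplied pointer, but on is a 1-byte C++ bool, so the parse scribbles over adjacent stack memory. The conventional fix is to parse into an int, and return NULL on parse failure instead of swallowing the error:

        static PyObject* Thing_SetBit(Thing* self, PyObject* args)
        {
            unsigned int mask;
            int on;                                   /* int, not bool: "i" stores sizeof(int) bytes */
            if (!PyArg_ParseTuple(args, "Ii", &mask, &on))
                return NULL;                          /* let Python raise the TypeError */
            if (on)
                self->val |= mask;
            else
                self->val &= ~mask;
            Py_RETURN_TRUE;
        }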

  • ClassCastException When Calling an EJB Remotely that Exists on Same Server

    - by aaronvargas
    I have two EJBs: Ejb-A, which calls Ejb-B. They are not in the same EAR. For portability, Ejb-B may or may not exist on the same server as Ejb-A. (There is an external property file that has the provider URLs of Ejb-B; I have no control over this.) Example code in Ejb-A:

        EjbBDelegate delegateB = EjbBDelegateHelper.getRemoteDelegate(); // lookup from list of URLs from props...
        BookOfMagic bom = delegateB.getSomethingInteresting();

    Use cases/outcomes:

        1. When Ejb-B DOES NOT EXIST on the same server as Ejb-A, everything works correctly (it round-robins through the URLs).
        2. When Ejb-B DOES EXIST on the same server, and Ejb-A happens to call Ejb-B on the same server, everything works correctly.
        3. When Ejb-B DOES EXIST on the same server, and Ejb-A calls Ejb-B on a different server, I get:

        javax.ejb.EJBException: nested exception is: java.lang.ClassCastException: $Proxy126
        java.lang.ClassCastException: $Proxy126

    I'm using WebLogic 10.0, Java 5, EJB 3. Basically, if Ejb-B exists on the server, it must be called ONLY on that server. This leads me to believe that the class is being loaded by a local classloader (on deployment?), and then when it is called remotely, a different classloader loads it, causing the exception. But it should work, as it should be serialized into the destination classloader... What am I doing wrong?

    Also, when reproducing this locally, Ejb-A would favor the Ejb-B on the same server, so it was difficult to reproduce - but this wasn't the case on other machines. NOTE: This all worked correctly for EJB 2.

  • Can a destructor be recursive?

    - by Cubbi
    Is this program well-defined, and if not, why exactly?

        #include <iostream>
        #include <new>

        struct X {
            int cnt;
            X (int i) : cnt(i) {}
            ~X() {
                std::cout << "destructor called, cnt=" << cnt << std::endl;
                if ( cnt-- > 0 )
                    this->X::~X(); // explicit recursive call to dtor
            }
        };

        int main() {
            char* buf = new char[sizeof(X)];
            X* p = new(buf) X(7);
            p->X::~X(); // explicit call to dtor
            delete[] buf;
        }

    My reasoning: although invoking a destructor twice is undefined behavior, per 12.4/14, what it says exactly is this:

        the behavior is undefined if the destructor is invoked for an object whose lifetime has ended

    which does not seem to prohibit recursive calls. While the destructor for an object is executing, the object's lifetime has not yet ended, thus it's not UB to invoke the destructor again. On the other hand, 12.4/6 says:

        After executing the body [...] a destructor for class X calls the destructors for X's direct members, the destructors for X's direct base classes [...]

    which means that after the return from a recursive invocation of a destructor, all member and base class destructors will have been called, and calling them again when returning to the previous level of recursion would be UB. Therefore, a class with no base and only POD members can have a recursive destructor without UB. Am I right?
