Search Results


  • How do I permanently disable Linux's console screen saver, system-wide?

    - by raldi
    I've got an Ubuntu server that boots up in text mode. It rarely has a screen or keyboard attached to it, but when I do attach a screen, I usually have to attach a keyboard too, because the darn console mode screen saver will be on and I'll need to hit a key to see what's going on. I'm aware that the setterm command can disable this, but it's a per-session thing. How can I make it so the machine never ever blanks the screen in text mode, even when it's first booted up and sitting at the login prompt?

    Read the article

  • IIS FTP server not working after purchase of SSL certificate

    - by Chris
    I've been connecting to my web server with active mode in FileZilla with no problems. Over the weekend, an SSL certificate was dropped into a folder that I access with FTP, and which contains files for the website. Now I am receiving a 425 error in active mode on the FTP root, so I can't really do anything but log in. In passive mode, I can connect and move around in the directory tree, but the connection seems shaky. Occasionally I'll time out, and I can't get access at all to the folder containing the SSL certificate. My question is how does the SSL certificate affect my FTP connection (if at all)? Does its presence demand the use of FTP over SSL? Note: As far as I know, the only change which occurred was the placement of the SSL certificate. Firewall settings, FTP client and server settings should all be the same as before, when everything was working.

    Read the article

  • Time Capsule + Ubee Router?

    - by Charlie
    I can't for the life of me figure this out. I recently had TWC installed in my house, and wanted to disable the NAT and router functions of it. I have a Time Capsule hooked up to it from LAN1 (on the Ubee) to the WAN port on the TC. The problems started occurring here. I figured the settings would be these:

    Ubee
    Configuration mode: Bridge
    DHCP: Off

    TC
    IPv4: 192.168.100.2
    Subnet Mask: 255.255.255.0
    Router Address: 192.168.100.1
    DNS Servers: 8.8.8.8, 8.8.4.4
    Router Mode: DHCP and NAT

    But using those settings, my TC says "Double NAT", so I have to change it all around to the default settings of the Ubee using NAT. This leads me to believe bridge mode doesn't actually turn off NAT...

    Read the article

  • Question about getting information off an old HD...

    - by user37983
    OK, so my old computer was a Sony Vaio laptop. It's kind of old and had XP on it. My girlfriend trashed it: the CD drive died and she tried fixing it by messing with the configuration, BIOS, etc. The laptop itself is really fried (the keyboard doesn't work, etc.), but the hard drive is still intact. I tried putting the HD in my new computer (a Toshiba running Win7) and it looks like it's going to boot up: it gets to the screen with the logo and the status bar starts to load, then it flashes a blue screen for a split second and goes to the black screen that says Windows did not shut down properly, with options to start Windows normally, in safe mode, safe mode with networking, or safe mode with command prompt. I've tried every option, but it always goes back to this screen. I need to get into the HD because I have very important files. Is there any way?

    Read the article

  • W3WP crashes when initializing a collection

    - by asbjornu
    I've created an ASP.NET MVC application that has an initializer attached via the PreApplicationStartMethodAttribute. When initializing, a collection is instantiated that implements an interface I've defined. When I instantiate this collection, w3wp.exe crashes with the following two incomprehensible entries in the event log:

    Faulting application name: w3wp.exe, version: 7.5.7600.16385, time stamp: 0x4a5bd0eb
    Faulting module name: clr.dll, version: 4.0.30319.1, time stamp: 0x4ba21eeb
    Exception code: 0xc00000fd
    Fault offset: 0x0000000000001177
    Faulting process id: 0x1348
    Faulting application start time: 0x01cb0224882f4723
    Faulting application path: c:\windows\system32\inetsrv\w3wp.exe
    Faulting module path: C:\Windows\Microsoft.NET\Framework64\v4.0.30319\clr.dll
    Report Id: c6a0941e-6e17-11df-864d-000acd16dcdb

    And:

    Fault bucket , type 0
    Event Name: APPCRASH
    Response: Not available
    Cab Id: 0

    Problem signature:
    P1: w3wp.exe
    P2: 7.5.7600.16385
    P3: 4a5bd0eb
    P4: clr.dll
    P5: 4.0.30319.1
    P6: 4ba21eeb
    P7: c00000fd
    P8: 0000000000001177
    P9:
    P10:

    Attached files:
    These files may be available here:
    Analysis symbol:
    Rechecking for solution: 0
    Report Id: c6a0941e-6e17-11df-864d-000acd16dcdb
    Report Status: 0

    If I remove the instantiation of the collection, the application starts normally. If I leave the instantiation in, w3wp crashes. If I modify the interface, w3wp still crashes. I've tried every variation I could come up with on the theme of keeping the instantiation but doing everything else differently, and w3wp still crashes. My biggest issue here is that I have absolutely no idea why w3wp is crashing. It's not a StackOverflowException or anything concrete like that; all I get is the unintelligible junk cited above. I've tried to use DebugDiag and IISState to debug the w3wp process, but DebugDiag is only available for post-dump analysis on x64 (I'm running Windows 7 x64, so the w3wp process is 64-bit), and IISState says the following when I try to run it:

    D:\Programs\iisstate>IISState.exe -p 9204 -d
    Symbol search path is: SRV*D:\Programs\iisstate\symbols*http://msdl.microsoft.com/download/symbols
    IISState is limited to processes associated with IIS. If you require a generic debugger, please use WinDBG or CDB. They are available for download from http://www.microsoft.com/ddk/debugging.
    This error may also occur if a debugger is already attached to the process being checked.
    Incorrect Process Attachment

    I've double-checked ten times that the process ID of my w3wp process is correct. I suspect that IISState, too, can only debug x86 processes. Setting a breakpoint anywhere in the application does absolutely nothing: the breakpoint isn't hit, and w3wp crashes as soon as the request comes through to IIS from the browser. Starting the application with F5 in Visual Studio 2010, or starting another application to get the w3wp process up and running, attaching the VS2010 debugger to it, and then visiting the faulting application, doesn't help either. I've also tried to add an HTTP module as described in KB 911816, as well as add this to my web.config file:

    <configuration>
      <runtime>
        <legacyUnhandledExceptionPolicy enabled="true" />
      </runtime>
    </configuration>

    Needless to say, it makes absolutely no difference. So I'm left with no way to debug the w3wp process, no way to extract any information from it, and complete garbage dumped in my event log. If anybody has any idea how to debug this problem, please let me know!
    Update: My collection was initialized based on RouteTable.Routes, which might have thrown an exception (perhaps because it was not yet initialized itself at such an early stage of the ASP.NET lifecycle). Postponing the communication with RouteTable.Routes until a later stage solved the problem. While I don't really need an answer to this question anymore, I still find it so obscure that I'll leave it open for anyone to comment on and answer; I found no existing posts on this problem anywhere, so it might be a good reference in the future.
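    For reference, exception code 0xc00000fd in the first event log entry above is the NT status code for a stack overflow, which fits a component faulting or recursing while routing is still being set up. Below is a minimal sketch of the workaround the update describes: keep the PreApplicationStartMethod hook, but defer anything that touches RouteTable.Routes to Application_Start. The class and namespace names are hypothetical, not from the original post.

    using System.Web;
    using System.Web.Routing;

    [assembly: PreApplicationStartMethod(typeof(MyApp.Startup), "PreStart")]

    namespace MyApp
    {
        public static class Startup
        {
            public static void PreStart()
            {
                // Runs very early in the ASP.NET lifecycle. Safe for tasks like
                // registering modules, but do not consume RouteTable.Routes here:
                // it may not be initialized yet.
            }
        }

        public class Global : HttpApplication
        {
            protected void Application_Start()
            {
                // By this point routing is initialized, so a collection built
                // from RouteTable.Routes can be created safely.
                int routeCount = RouteTable.Routes.Count;
            }
        }
    }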

    Read the article

  • Adding Core Data to existing iPhone project

    - by swalkner
    I'd like to add Core Data to an existing iPhone project, but I still get a lot of compile errors:

    NSManagedObjectContext undeclared
    Expected specifier-qualifier-list before 'NSManagedObjectModel'
    ...

    I already added the Core Data framework to the target (right-click on my project under "Targets", "Add" - "Existing Frameworks", "CoreData.framework"). My header file:

    NSManagedObjectModel *managedObjectModel;
    NSManagedObjectContext *managedObjectContext;
    NSPersistentStoreCoordinator *persistentStoreCoordinator;

    [...]

    @property (nonatomic, retain, readonly) NSManagedObjectModel *managedObjectModel;
    @property (nonatomic, retain, readonly) NSManagedObjectContext *managedObjectContext;
    @property (nonatomic, retain, readonly) NSPersistentStoreCoordinator *persistentStoreCoordinator;

    What am I missing? Starting a new project is not an option... Thanks a lot!

    Edit: Sorry, I do have those implementations... but it seems like the library is missing... the implementation methods are full of compile errors like "managedObjectContext undeclared" and "NSPersistentStoreCoordinator undeclared", but also "Expected ')' before NSManagedObjectContext" (although it seems like the parentheses are correct)...

    #pragma mark -
    #pragma mark Core Data stack

    /**
     Returns the managed object context for the application.
     If the context doesn't already exist, it is created and bound to the persistent store coordinator for the application.
     */
    - (NSManagedObjectContext *)managedObjectContext {
        if (managedObjectContext != nil) {
            return managedObjectContext;
        }
        NSPersistentStoreCoordinator *coordinator = [self persistentStoreCoordinator];
        if (coordinator != nil) {
            managedObjectContext = [[NSManagedObjectContext alloc] init];
            [managedObjectContext setPersistentStoreCoordinator:coordinator];
        }
        return managedObjectContext;
    }

    /**
     Returns the managed object model for the application.
     If the model doesn't already exist, it is created by merging all of the models found in the application bundle.
     */
    - (NSManagedObjectModel *)managedObjectModel {
        if (managedObjectModel != nil) {
            return managedObjectModel;
        }
        managedObjectModel = [[NSManagedObjectModel mergedModelFromBundles:nil] retain];
        return managedObjectModel;
    }

    /**
     Returns the persistent store coordinator for the application.
     If the coordinator doesn't already exist, it is created and the application's store added to it.
     */
    - (NSPersistentStoreCoordinator *)persistentStoreCoordinator {
        if (persistentStoreCoordinator != nil) {
            return persistentStoreCoordinator;
        }
        NSURL *storeUrl = [NSURL fileURLWithPath:[[self applicationDocumentsDirectory] stringByAppendingPathComponent:@"Core_Data.sqlite"]];
        NSError *error = nil;
        persistentStoreCoordinator = [[NSPersistentStoreCoordinator alloc] initWithManagedObjectModel:[self managedObjectModel]];
        if (![persistentStoreCoordinator addPersistentStoreWithType:NSSQLiteStoreType configuration:nil URL:storeUrl options:nil error:&error]) {
            /*
             Replace this implementation with code to handle the error appropriately.

             abort() causes the application to generate a crash log and terminate. You should not use this function in a shipping application, although it may be useful during development. If it is not possible to recover from the error, display an alert panel that instructs the user to quit the application by pressing the Home button.

             Typical reasons for an error here include:
             * The persistent store is not accessible
             * The schema for the persistent store is incompatible with the current managed object model
             Check the error message to determine what the actual problem was.
             */
            NSLog(@"Unresolved error %@, %@", error, [error userInfo]);
            abort();
        }
        return persistentStoreCoordinator;
    }

    Read the article

  • SQL server deadlock between INSERT and SELECT statement

    - by dtroy
    Hi! I've got a problem with multiple deadlocks on SQL Server 2005. This one is between an INSERT and a SELECT statement. There are two tables, Table1 and Table2. Table2 has Table1's PK (table1_id) as a foreign key, and the index on table1_id is clustered. The INSERT inserts a single row into Table2 at a time. The SELECT joins the two tables (it's a long query which might take up to 12 seconds to run). According to my understanding (and experiments), the INSERT should acquire an IS lock on Table1 to check referential integrity, which should not cause a deadlock. But in this case it acquired an IX page lock. The deadlock report:

    <deadlock-list>
     <deadlock victim="process968898">
      <process-list>
       <process id="process8db1f8" taskpriority="0" logused="2424" waitresource="OBJECT: 5:789577851:0 " waittime="12390" ownerId="61831512" transactionname="user_transaction" lasttranstarted="2010-04-16T07:10:13.347" XDES="0x222a8250" lockMode="IX" schedulerid="1" kpid="3764" status="suspended" spid="52" sbid="0" ecid="0" priority="0" transcount="2" lastbatchstarted="2010-04-16T07:10:13.350" lastbatchcompleted="2010-04-16T07:10:13.347" clientapp=".Net SqlClient Data Provider" hostname="VIDEV01-B-ME" hostpid="3040" loginname="DatabaseName" isolationlevel="read uncommitted (1)" xactid="61831512" currentdb="5" lockTimeout="4294967295" clientoption1="671088672" clientoption2="128056">
        <executionStack>
         <frame procname="DatabaseName.dbo.prcTable2_Insert" line="18" stmtstart="576" stmtend="1148" sqlhandle="0x0300050079e62d06e9307f000b9d00000100000000000000">
          INSERT INTO dbo.Table2 ( f1, table1_id, f2 ) VALUES ( @p1, @p_DocumentVersionID, @p1 )
         </frame>
        </executionStack>
        <inputbuf>
         Proc [Database Id = 5 Object Id = 103671417]
        </inputbuf>
       </process>
       <process id="process968898" taskpriority="0" logused="0" waitresource="PAGE: 5:1:46510" waittime="7625" ownerId="61831406" transactionname="INSERT" lasttranstarted="2010-04-16T07:10:12.717" XDES="0x418ec00" lockMode="S" schedulerid="2" kpid="1724" status="suspended" spid="53" sbid="0" ecid="0" priority="0" transcount="2" lastbatchstarted="2010-04-16T07:10:12.713" lastbatchcompleted="2010-04-16T07:10:12.713" clientapp=".Net SqlClient Data Provider" hostname="VIDEV01-B-ME" hostpid="3040" loginname="DatabaseName" isolationlevel="read committed (2)" xactid="61831406" currentdb="5" lockTimeout="4294967295" clientoption1="671088672" clientoption2="128056">
        <executionStack>
         <frame procname="DatabaseName.dbo.prcGetList" line="64" stmtstart="3548" stmtend="11570" sqlhandle="0x03000500dbcec17e8d267f000b9d00000100000000000000">
          <!-- SELECT statement with multiple joins, including both Table2 and Table1 -->
         </frame>
        </executionStack>
        <inputbuf>
         Proc [Database Id = 5 Object Id = 2126630619]
        </inputbuf>
       </process>
      </process-list>
      <resource-list>
       <pagelock fileid="1" pageid="46510" dbid="5" objectname="DatabaseName.dbo.table1" id="lock6236bc0" mode="IX" associatedObjectId="72057594042908672">
        <owner-list>
         <owner id="process8db1f8" mode="IX"/>
        </owner-list>
        <waiter-list>
         <waiter id="process968898" mode="S" requestType="wait"/>
        </waiter-list>
       </pagelock>
       <objectlock lockPartition="0" objid="789577851" subresource="FULL" dbid="5" objectname="DatabaseName.dbo.Table2" id="lock970a240" mode="S" associatedObjectId="789577851">
        <owner-list>
         <owner id="process968898" mode="S"/>
        </owner-list>
        <waiter-list>
         <waiter id="process8db1f8" mode="IX" requestType="wait"/>
        </waiter-list>
       </objectlock>
      </resource-list>
     </deadlock>
    </deadlock-list>

    Can anyone explain why the INSERT gets the IX page lock? Am I not reading the deadlock report properly? By the way, I have not managed to reproduce this issue. Thanks!

    Read the article

  • Should we develop a custom membership provider in this case?

    - by Allen
    I'll be adding a bounty to this, probably 200, more if you guys think it's appropriate. I won't accept an answer until I can add a bounty, so feel free to go ahead and answer now.

    Summary

    Long story short, we've been tasked with gutting the authentication and authorization parts of a fairly old and bloated ASP.NET application that previously had all of these components written from scratch. Since our application isn't a typical one, and none of us have experience with ASP.NET's built-in membership provider functionality, we're not sure if we should roll our own authentication and authorization again or if we should try to work within the ASP.NET membership provider mindset and develop our own membership provider.

    Our Application

    We have a fairly old ASP.NET application that gets installed at customer locations to service clients on a LAN. Admins create users (users do not sign up), and depending on the install, we may have the software integrated with LDAP. Currently, the LDAP integration bulk-imports the users to our database, and when they log in, it authenticates against LDAP so we don't have to manage their passwords. Nothing amazing there. Admins can assign users to one group, and they can change the authorization of that group to manage access to various parts of the software. Groups are maintained by admins (web-based UI) and, as said earlier, granted or denied permissions to certain functionality within the application. All this was completely written from the ground up without using any of the built-in .NET authorization or authentication. We literally have IsLoggedIn() methods that check for login and redirect to our login page if the user isn't.

    Our Rewrite

    We've been tasked to integrate more tightly with LDAP. They want us to tie groups in our application to groups (or whatever type of container LDAP uses) in LDAP, so that when a customer opts to use our LDAP integration, they don't have to manage their users in LDAP AND in our application. The new way, they will simply create users in LDAP and add them to groups in LDAP, and our application will see that they belong to the appropriate LDAP group and authenticate and authorize them. In addition, we've been granted the go-ahead to completely rip out the user authentication and authorization code and completely re-do it.

    Our Problem

    The problem is that none of us have any experience with ASP.NET membership provider functionality. The little bit of exposure I have to it makes me worry that it was not intended to be used for an application such as ours. Though developing our own ASP.NET membership provider and role manager sounds like it would be a great experience and most likely the appropriate thing to do. Basically, I'm looking for advice: should we be using the ASP.NET membership provider and role management API, or should we continue to roll our own? I know this decision will be influenced by our requirements, so I'm going over them below.

    Our Requirements

    Just a quick and dirty list:

    - Maintain the ability to have a DB of users, authenticate them, and give admins (only, not users) the ability to CRUD users
    - Allow the site to integrate with LDAP; when this is chosen, they don't want any users stored in the DB, only the relationship between groups as they exist in our app/DB and the groups/containers as they exist in LDAP
    - .NET 3.5 is being used (mix of ASP.NET WebForms and ASP.NET MVC)
    - Has to work in ASP.NET and ASP.NET MVC (shouldn't be a problem, I'm guessing)
    - This can't be user-centric; administrators need to be the only ones that CRUD (or import via LDAP) users and groups
    - We have to be able to authenticate via LDAP when it's configured to do so

    I always try to monitor my questions closely, so feel free to ask for more info. Also, as a general summary of what I'm looking for in an answer: "You should/shouldn't use xyz; here's why." Links regarding the ASP.NET membership provider and role management functionality are very welcome; most of the stuff I'm finding is 5+ years old.

    Edit: Added some stuff to "Our Rewrite".
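    Whichever way the decision goes, the LDAP piece is the part the stock providers won't give you for free; a custom MembershipProvider or RoleProvider mostly wraps directory calls like the ones sketched below. This is a minimal sketch assuming an Active Directory-style domain and .NET 3.5's System.DirectoryServices.AccountManagement; the class and method names are illustrative, not from the original question.

    using System.DirectoryServices.AccountManagement;

    // The two directory operations the rewrite needs: validating credentials
    // and checking LDAP group membership. A custom MembershipProvider's
    // ValidateUser and a custom RoleProvider's IsUserInRole would delegate here.
    public static class LdapDirectory
    {
        public static bool ValidateCredentials(string domain, string userName, string password)
        {
            using (var context = new PrincipalContext(ContextType.Domain, domain))
            {
                // True only if the directory accepts the credentials.
                return context.ValidateCredentials(userName, password);
            }
        }

        public static bool IsMemberOf(string domain, string userName, string groupName)
        {
            using (var context = new PrincipalContext(ContextType.Domain, domain))
            using (var user = UserPrincipal.FindByIdentity(context, userName))
            using (var group = GroupPrincipal.FindByIdentity(context, groupName))
            {
                // Maps the app's group concept onto an LDAP group.
                return user != null && group != null && user.IsMemberOf(group);
            }
        }
    }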

    Read the article

  • HttpWebRequest response produces HTTP 422. Why?

    - by Simon
    Hi there. I'm trying to programmatically send a POST request to a web server in order to log in and then perform other requests that require a login. This is my code:

    byte[] data = Encoding.UTF8.GetBytes(
        String.Format("login={0}&password={1}&authenticity_token={2}",
        username, password, token));

    //Create HTTP request for login
    HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create("http://www.xxx.xx/xx/xx");
    request.Method = "POST";
    request.ContentType = "application/x-www-form-urlencoded";
    request.ContentLength = data.Length;
    request.CookieContainer = new CookieContainer();
    request.Accept = "application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5";
    request.Referer = "http://www.garzantilinguistica.it/it/session";
    request.Headers.Add("Accept-Language", "de-DE");
    request.Headers.Add("Origin", "http://www.xxx.xx");
    request.UserAgent = "C#";
    request.Headers.Add("Accept-Encoding", "gzip, deflate");

    After sending the request:

    //Send post request
    var requestStream = request.GetRequestStream();
    requestStream.Write(data, 0, data.Length);
    requestStream.Flush();
    requestStream.Close();

    ... I want to get the server's response:

    //Get response
    StreamReader responseStreamReader = new StreamReader(
        request.GetResponse().GetResponseStream()); //WebException: HTTP 422!
    string content = responseStreamReader.ReadToEnd();

    This piece of code fires the WebException, which tells me the server responded with HTTP 422 (unprocessable entity due to semantic errors). I then compared (using a TCP/IP sniffer) the requests of my program and of the browser (which of course produces a valid POST request and gets the right response).

    (1) My program's request:

    POST /it/session HTTP/1.1
    Content-Type: application/x-www-form-urlencoded
    Accept: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
    Referer: http://www.garzantilinguistica.it/it/session
    Accept-Language: de-DE
    Origin: http://www.garzantilinguistica.it
    User-Agent: C#
    Accept-Encoding: gzip, deflate
    Host: www.garzantilinguistica.it
    Content-Length: 111
    Expect: 100-continue
    Connection: Keep-Alive

    HTTP/1.1 100 Continue

    [email protected]&password=machivui&authenticity_token=4vLgtwP3nFNg4NeuG4MbUnU7sy4z91Wi8WJXH0POFmg=

    HTTP/1.1 422 Unprocessable Entity

    (2) The browser's request:

    POST /it/session HTTP/1.1
    Host: www.garzantilinguistica.it
    Referer: http://www.garzantilinguistica.it/it/session
    Accept: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
    Accept-Language: de-DE
    Origin: http://www.garzantilinguistica.it
    User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; de-DE) AppleWebKit/531.22.7 (KHTML, like Gecko) Version/4.0.5 Safari/531.22.7
    Accept-Encoding: gzip, deflate
    Content-Type: application/x-www-form-urlencoded
    Cookie: __utma=244184339.652523587.1275208707.1275208707.1275211298.2; __utmb=244184339.20.10.1275211298; __utmc=244184339; __utmz=244184339.1275208707.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); _garzanti2009_session=BAh7CDoPc2Vzc2lvbl9pZCIlZDg4MWZjNjg2YTRhZWE0NDQ0ZTJmMTU2YWY4ZTQ1NGU6EF9jc3JmX3Rva2VuIjFqRWdLdll3dTYwOTVVTEpNZkt6dG9jUCtaZ0o4V0FnV2V5ZnpuREx6QUlZPSIKZmxhc2hJQzonQWN0aW9uQ29udHJvbGxlcjo6Rmxhc2g6OkZsYXNoSGFzaHsGOgplcnJvciIVbG9naW4gbm9uIHZhbGlkbwY6CkB1c2VkewY7CFQ%3D--4200fa769898dd156faa49e457baf660cf068d08
    Content-Length: 144
    Connection: keep-alive

    authenticity_token=jEgKvYwu6095ULJMfKztocP%2BZgJ8WAgWeyfznDLzAIY%3D&login=thespider14%40hotmail.com&password=machivui&remember_me=1&commit=Entra

    HTTP/1.1 302 Found

    Can someone help me understand which part of the request I am missing, or what the main difference between the browser's request and mine is? Why am I getting that 422?
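    Comparing the two dumps, the browser's request differs in two visible ways: it sends the Rails _garzanti2009_session cookie (so its authenticity_token belongs to that session), and it includes two extra form fields, remember_me and commit. A plausible fix, sketched below, is to GET the login page first with the same CookieContainer, extract the token from that response, and then POST every field the browser sends. The URL handling and the token-parsing step are placeholders; this is a sketch of the approach, not a verified solution.

    using System;
    using System.IO;
    using System.Net;
    using System.Text;

    static class LoginSketch
    {
        public static string PostLogin(string loginUrl, string user, string password)
        {
            // One cookie jar for both requests, so the session cookie issued
            // along with the authenticity_token is sent back with the POST.
            var cookies = new CookieContainer();

            // 1) GET the login page and pull the authenticity_token out of
            //    the returned HTML (parsing omitted here).
            var get = (HttpWebRequest)WebRequest.Create(loginUrl);
            get.CookieContainer = cookies;
            string token;
            using (var reader = new StreamReader(get.GetResponse().GetResponseStream()))
            {
                string html = reader.ReadToEnd();
                token = ExtractToken(html); // placeholder
            }

            // 2) POST with every field the browser sends, URL-encoded.
            string body = String.Format(
                "authenticity_token={0}&login={1}&password={2}&remember_me=1&commit=Entra",
                Uri.EscapeDataString(token), Uri.EscapeDataString(user),
                Uri.EscapeDataString(password));
            byte[] data = Encoding.UTF8.GetBytes(body);

            var post = (HttpWebRequest)WebRequest.Create(loginUrl);
            post.Method = "POST";
            post.ContentType = "application/x-www-form-urlencoded";
            post.ContentLength = data.Length;
            post.CookieContainer = cookies; // same cookie jar as the GET
            using (var stream = post.GetRequestStream())
            {
                stream.Write(data, 0, data.Length);
            }

            using (var reader = new StreamReader(post.GetResponse().GetResponseStream()))
            {
                return reader.ReadToEnd();
            }
        }

        static string ExtractToken(string html)
        {
            // Placeholder: locate the hidden authenticity_token input in the HTML.
            throw new NotImplementedException();
        }
    }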

    Read the article

  • Delete Stored Proc Deadlock in Sql Server

    - by Chris
    I am having the following deadlock in SQL Server 2005 with a specific delete stored proc, and I can't figure out what I need to do to remedy it.

    <deadlock-list>
     <deadlock victim="processf3a868">
      <process-list>
       <process id="processcae718" taskpriority="0" logused="0" waitresource="KEY: 7:72057594340311040 (b50041b389fe)" waittime="62" ownerId="1678098" transactionguid="0x950057256032d14db6a2c553a39a8279" transactionname="user_transaction" lasttranstarted="2010-05-26T13:45:23.517" XDES="0x8306c370" lockMode="RangeS-U" schedulerid="1" kpid="2432" status="suspended" spid="59" sbid="0" ecid="0" priority="0" transcount="2" lastbatchstarted="2010-05-26T13:45:23.717" lastbatchcompleted="2010-05-26T13:45:23.717" clientapp=".Net SqlClient Data Provider" hostname="DEVELOPER01" hostpid="28104" loginname="DEVELOPER01\ServiceUser" isolationlevel="serializable (4)" xactid="1678098" currentdb="7" lockTimeout="4294967295" clientoption1="673185824" clientoption2="128056">
        <executionStack>
         <frame procname="DB.dbo.sp_DeleteSecuritiesRecords" line="13" stmtstart="708" stmtend="918" sqlhandle="0x030007008b6b662229b10c014f9d00000100000000000000">
          DELETE FROM tSecuritiesRecords WHERE [FilingID] = @filingID AND [AccountID] = @accountID
         </frame>
        </executionStack>
        <inputbuf>
         Proc [Database Id = 7 Object Id = 577137547]
        </inputbuf>
       </process>
       <process id="processf3a868" taskpriority="0" logused="0" waitresource="KEY: 7:72057594340311040 (4f00409af90f)" waittime="93" ownerId="1678019" transactionguid="0xb716547a8f7fdd40b342e5db6b3699fb" transactionname="user_transaction" lasttranstarted="2010-05-26T13:45:21.543" XDES="0x92617130" lockMode="X" schedulerid="3" kpid="13108" status="suspended" spid="57" sbid="0" ecid="0" priority="0" transcount="2" lastbatchstarted="2010-05-26T13:45:23.717" lastbatchcompleted="2010-05-26T13:45:23.717" clientapp=".Net SqlClient Data Provider" hostname="DEVELOPER01" hostpid="28104" loginname="DEVELOPER01\ServiceUser" isolationlevel="serializable (4)" xactid="1678019" currentdb="7" lockTimeout="4294967295" clientoption1="673185824" clientoption2="128056">
        <executionStack>
         <frame procname="DB.dbo.sp_DeleteSecuritiesRecords" line="13" stmtstart="708" stmtend="918" sqlhandle="0x030007008b6b662229b10c014f9d00000100000000000000">
          DELETE FROM tSecuritiesRecords WHERE [FilingID] = @filingID AND [AccountID] = @accountID
         </frame>
        </executionStack>
        <inputbuf>
         Proc [Database Id = 7 Object Id = 577137547]
        </inputbuf>
       </process>
      </process-list>
      <resource-list>
       <keylock hobtid="72057594340311040" dbid="7" objectname="DB.dbo.tSecuritiesRecords" indexname="PK_tTransactions" id="lock82416380" mode="RangeS-U" associatedObjectId="72057594340311040">
        <owner-list>
         <owner id="processf3a868" mode="RangeS-U"/>
        </owner-list>
        <waiter-list>
         <waiter id="processcae718" mode="RangeS-U" requestType="convert"/>
        </waiter-list>
       </keylock>
       <keylock hobtid="72057594340311040" dbid="7" objectname="DB.dbo.tSecuritiesRecords" indexname="PK_tTransactions" id="lock825fd380" mode="RangeS-U" associatedObjectId="72057594340311040">
        <owner-list>
         <owner id="processcae718" mode="RangeS-S"/>
        </owner-list>
        <waiter-list>
         <waiter id="processf3a868" mode="X" requestType="convert"/>
        </waiter-list>
       </keylock>
      </resource-list>
     </deadlock>
    </deadlock-list>
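    Two details in the report are worth noting when hunting for a remedy: both sessions run at the serializable isolation level, which is what produces the RangeS-U key-range locks, and both execute the same DELETE, so two concurrent deletes can each hold a range and wait on the other's. Independent of the root-cause fix, a .NET caller can at least survive being chosen as the victim, since SQL Server reports deadlock victims with error number 1205. A minimal retry sketch follows; the proc name and parameters come from the report, but the wrapper itself is hypothetical:

    using System;
    using System.Data;
    using System.Data.SqlClient;
    using System.Threading;

    static class DeadlockRetry
    {
        // Executes the delete proc, retrying when SQL Server picks this
        // session as the deadlock victim (SqlException.Number == 1205).
        public static void DeleteSecuritiesRecords(string connectionString, int filingId, int accountId)
        {
            const int maxAttempts = 3;
            for (int attempt = 1; ; attempt++)
            {
                try
                {
                    using (var connection = new SqlConnection(connectionString))
                    using (var command = new SqlCommand("dbo.sp_DeleteSecuritiesRecords", connection))
                    {
                        command.CommandType = CommandType.StoredProcedure;
                        command.Parameters.AddWithValue("@filingID", filingId);
                        command.Parameters.AddWithValue("@accountID", accountId);
                        connection.Open();
                        command.ExecuteNonQuery();
                    }
                    return; // success
                }
                catch (SqlException ex)
                {
                    if (ex.Number != 1205 || attempt == maxAttempts)
                        throw; // not a deadlock, or out of attempts
                    Thread.Sleep(100 * attempt); // brief backoff before retrying
                }
            }
        }
    }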

    Read the article

  • Push notification not received on iOS 5 device

    - by Surender Rathore
    I am sending push notifications to a device running iOS 5.0.1, but the notification is not received on the iOS 5 device. If I send a notification to an iOS 4.3.3 device, it is received there. My server is developed in C#, and the Apple push notification certificate is used in .p12 format. My code to register for notifications is:

    - (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
    {
        // Override point for customization after application launch.
        // Set the navigation controller as the window's root view controller and display.
        deviceToken = nil;
        self.window.rootViewController = self.navigationController;
        [self.window makeKeyAndVisible];
        [[UIApplication sharedApplication] enabledRemoteNotificationTypes];
        [[UIApplication sharedApplication] registerForRemoteNotificationTypes:
            UIRemoteNotificationTypeBadge | UIRemoteNotificationTypeSound | UIRemoteNotificationTypeAlert];
        return YES;
    }

    - (void)application:(UIApplication *)application didRegisterForRemoteNotificationsWithDeviceToken:(NSData *)devToken
    {
        UIAlertView *alertView = [[UIAlertView alloc] initWithTitle:@"Succeeded in registering for APNS"
                                                            message:@"Success"
                                                           delegate:self
                                                  cancelButtonTitle:@"Ok"
                                                  otherButtonTitles:nil];
        [alertView show];
        [alertView release];
        deviceToken = [devToken retain];
        NSMutableString *dev = [[NSMutableString alloc] init];
        NSRange r;
        r.length = 1;
        unsigned char c;
        for (int i = 0; i < [deviceToken length]; i++) {
            r.location = i;
            [deviceToken getBytes:&c range:r];
            if (c < 10) {
                [dev appendFormat:@"0%x", c];
            } else {
                [dev appendFormat:@"%x", c];
            }
        }
        NSUserDefaults *def = [NSUserDefaults standardUserDefaults];
        [def setObject:dev forKey:@"DeviceToken"];
        [def synchronize];
        NSLog(@"Registered for APNS %@\n%@", deviceToken, dev);
        [dev release];
    }

    - (void)application:(UIApplication *)application didFailToRegisterForRemoteNotificationsWithError:(NSError *)error
    {
        NSLog(@"Failed to register %@", [error localizedDescription]);
        UIAlertView *alertView = [[UIAlertView alloc] initWithTitle:@"FAILED"
                                                            message:@"Fail"
                                                           delegate:self
                                                  cancelButtonTitle:@"Ok"
                                                  otherButtonTitles:nil];
        [alertView show];
        [alertView release];
        deviceToken = nil;
    }

    - (void)application:(UIApplication *)application didReceiveRemoteNotification:(NSDictionary *)userInfo
    {
        NSLog(@"Received remote notification %@", userInfo);
        NSDictionary *aps = [userInfo objectForKey:@"aps"];
        NSDictionary *alert = [aps objectForKey:@"alert"];
        //NSString *actionLocKey = [alert objectForKey:@"action-loc-key"];
        NSString *body = [alert objectForKey:@"body"];

        // NSMutableArray *viewControllers = [NSMutableArray arrayWithArray:[self.navigationController viewControllers]];
        //
        // if ([viewControllers count] > 2) {
        //     NSRange r;
        //     r.length = [viewControllers count] - 2;
        //     r.location = 2;
        //     [viewControllers removeObjectsInRange:r];
        //
        //     MapViewController *map = (MapViewController *)[viewControllers objectAtIndex:1];
        //     [map askDriver];
        // }
        //
        // [self.navigationController setViewControllers:viewControllers];
        // [viewControllers release];

        //NewBooking,"BookingId",Passenger_latitude,Passenger_longitude,Destination_latitude,Destination_longitude,Distance,Time,comments
        NSArray *arr = [body componentsSeparatedByString:@","];
        if ([arr count] > 0) {
            MapViewController *map = (MapViewController *)[[self.navigationController viewControllers] objectAtIndex:1];
            [map askDriver:arr];
            [self.navigationController popToViewController:[[self.navigationController viewControllers] objectAtIndex:1] animated:YES];
        }

        //NSString *sound = [userInfo objectForKey:@"sound"];
        //UIAlertView *alertV = [[UIAlertView alloc] initWithTitle:actionLocKey
        //                                                 message:body delegate:nil
        //                                       cancelButtonTitle:@"Reject"
        //                                       otherButtonTitles:@"Accept", nil];
        //
        // [alertV show];
        // [alertV release];
    }

    Can anyone help me out with this issue? Why is the notification received on the iOS 4.3 device but not on iOS 5? Thank you very much in advance!!!
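    Two things worth double-checking in a "works on one device, silent on another" situation: device tokens are unique per device, so the server must use the token reported by the iOS 5 device rather than a stored one, and the .p12 certificate must match the APNs gateway the app build is entitled for (sandbox for development builds, production for App Store/ad hoc builds). Since the server here is written in C#, below is a hedged sketch of the environment/certificate pairing only; the framing of the actual notification bytes is omitted, and the file path and password are placeholders.

    using System.Net.Security;
    using System.Net.Sockets;
    using System.Security.Authentication;
    using System.Security.Cryptography.X509Certificates;

    static class ApnsConnectionSketch
    {
        // Opens an SSL connection to the APNs gateway that matches the build:
        // development builds produce tokens for the sandbox gateway,
        // distribution builds for the production gateway.
        public static SslStream Connect(bool sandbox, string p12Path, string p12Password)
        {
            string host = sandbox
                ? "gateway.sandbox.push.apple.com"
                : "gateway.push.apple.com";

            var client = new TcpClient(host, 2195);
            var certificate = new X509Certificate2(p12Path, p12Password);
            var ssl = new SslStream(client.GetStream());
            ssl.AuthenticateAsClient(host,
                new X509Certificate2Collection(certificate),
                SslProtocols.Tls, false);
            return ssl; // write the binary notification frame to this stream
        }
    }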

    Read the article

  • Running ASP.NET Webforms and ASP.NET MVC side by side

    - by rajbk
    One of the nice things about ASP.NET MVC and its older brother ASP.NET WebForms is that they are both built on top of the ASP.NET runtime environment. The advantage of this is that you can still run them side by side even though MVC and WebForms are different frameworks. Another point to note is that with the release of ASP.NET routing in .NET 3.5 SP1, we are able to create SEO-friendly URLs that do not map to specific files on disk. The routing is part of the core runtime environment and can therefore be used by both WebForms and MVC. To run both frameworks side by side, we could easily create a separate folder in your MVC project for all our WebForm files and be good to go. What this post shows you instead is how to have an MVC application with WebForm pages that both use a common master page and common routing for SEO-friendly URLs. A sample project that shows WebForms and MVC running side by side is attached at the bottom of this post.

    So why would we want to run WebForms and MVC in the same project? WebForms come with a lot of nice server controls that provide a lot of functionality. One example is the ReportViewer control. Using this control and client report definition files (RDLC), we can create rich interactive reports (with charting controls). I show you how to use the ReportViewer control in a WebForm project here: Creating an ASP.NET report using Visual Studio 2010. We can create even more advanced reports by using SQL Reporting Services, which can also be rendered by the ReportViewer control. Now, consider the sample MVC application I blogged about called ASP.NET MVC Paging/Sorting/Filtering using the MVCContrib Grid and Pager. Assume you were given the requirement to add a UI to the MVC application where users could interact with a report and be given the option to export the report to Excel, PDF or Word. How do you go about doing it? This is a perfect scenario to use the ReportViewer control and RDLCs. As you saw in the post on creating the ASP.NET report, the ReportViewer control is a Web Control and is designed to be run in a WebForm project with dependencies on, amongst others, a ScriptManager control and the beloved ViewState. Since MVC and WebForms both run under the same runtime, the easiest thing to do is to add the WebForm application files (index.aspx, rdlc, related class files) into our MVC project. You can copy the files over from the WebForm project into the MVC project.

    Create a new folder in our MVC application called CommonReports. Add the index.aspx and rdlc file from the WebForm project. Right-click on the Index.aspx file and convert it to a web application. This will add the index.aspx.designer.cs file (this step is not required if you are manually adding a WebForm aspx file into the MVC project). Verify that all the type names for the ObjectDataSources in the code-behind point to the correct ProductRepository and fix any compiler errors. Right-click on Index.aspx and select "View in browser". You should see a screen like the one below:

    There are two issues with our page: it does not use our site master page, and the URL is not SEO-friendly.

    Common Master Page

    The easiest way to use master pages with both MVC and WebForm pages is to have a common master page that each inherits from, as shown below. The reason for this is that most WebForm controls require them to be inside a Form control and require ControlState or ViewState.
    ViewMasterPages used in MVC, on the other hand, are designed to be used with content pages that derive from ViewPage, with ViewState turned off. By having a separate master page for MVC and WebForm that inherit from the Root master page, we can set properties that are specific to each. For example, in the WebForm master we can turn on ViewState, add a form tag, etc. Another point worth noting is that if you set a WebForm page to use an MVC site master page, you may run into errors like the following:

    A ViewMasterPage can be used only with content pages that derive from ViewPage or ViewPage<TViewItem>

    or

    Control 'MainContent_MyButton' of type 'Button' must be placed inside a form tag with runat=server.

    Since the ViewMasterPage inherits from MasterPage, as seen below, we make our Root.master inherit from MasterPage, MVC.master inherit from ViewMasterPage, and WebForm.master inherit from MasterPage. We define the attributes on the master pages like so:

    Root.master
    <%@ Master Inherits="System.Web.UI.MasterPage" … %>

    MVC.master
    <%@ Master MasterPageFile="~/Views/Shared/Root.Master" Inherits="System.Web.Mvc.ViewMasterPage" … %>

    WebForm.master
    <%@ Master MasterPageFile="~/Views/Shared/Root.Master" Inherits="NorthwindSales.Views.Shared.Webform" %>

    Code-behind:
    public partial class Webform : System.Web.UI.MasterPage {}

    We make changes to our report's aspx file to use Webform.master. See the source of the master pages in the sample project for a better understanding of how they are connected.

    SEO-Friendly Links

    We want to create SEO-friendly links that point to our report. A request to /Reports/Products should render the report located in ~/CommonReports/Products.aspx. Similarly, to support future reports, a request to /Reports/Sales should render a report in ~/CommonReports/Sales.aspx. Let's start by renaming our index.aspx file to Products.aspx, to be consistent with the routing criteria above. As mentioned earlier, since routing is part of the core runtime environment, we can easily create a custom route for our reports by adding an entry in Global.asax:

    public static void RegisterRoutes(RouteCollection routes)
    {
        routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

        //Custom route for reports
        routes.MapPageRoute(
            "ReportRoute",                       // Route name
            "Reports/{reportname}",              // URL
            "~/CommonReports/{reportname}.aspx"  // File
        );

        routes.MapRoute(
            "Default",                    // Route name
            "{controller}/{action}/{id}", // URL with parameters
            new { controller = "Home", action = "Index", id = UrlParameter.Optional } // Parameter defaults
        );
    }

    With our custom route in place, a request to Reports/Employees will render the page at ~/CommonReports/Employees.aspx. We make this custom route the first entry, since the routing system walks the table from top to bottom and the first route to match wins. Note that it is highly recommended that you write unit tests for your routes to ensure that the mappings you defined are correct.

    Common Menu Structure

    The master page in our original MVC project had a menu structure like so:

    <ul id="menu">
        <li><%=Html.ActionLink("Home", "Index", "Home") %></li>
        <li><%=Html.ActionLink("Products", "Index", "Products") %></li>
        <li><%=Html.ActionLink("Help", "Help", "Home") %></li>
    </ul>

    We want this menu structure to be common to all pages/views, and hence it should reside in Root.master. Unfortunately the Html.ActionLink helpers will not work, since Root.master inherits from MasterPage, which does not have the helper methods available. The quickest way to resolve this issue is to use RouteUrl expressions.
    Using RouteUrl expressions, we can programmatically generate URLs that are based on route definitions. By specifying parameter values and a route name if required, we get back a URL string that corresponds to a matching route. We move our menu structure to Root.master and change it to use RouteUrl expressions:

    <ul id="menu">
        <li><asp:HyperLink ID="hypHome" runat="server" NavigateUrl="<%$RouteUrl:routename=default,controller=home,action=index%>">Home</asp:HyperLink></li>
        <li><asp:HyperLink ID="hypProducts" runat="server" NavigateUrl="<%$RouteUrl:routename=default,controller=products,action=index%>">Products</asp:HyperLink></li>
        <li><asp:HyperLink ID="hypReport" runat="server" NavigateUrl="<%$RouteUrl:routename=ReportRoute,reportname=products%>">Product Report</asp:HyperLink></li>
        <li><asp:HyperLink ID="hypHelp" runat="server" NavigateUrl="<%$RouteUrl:routename=default,controller=home,action=help%>">Help</asp:HyperLink></li>
    </ul>

    We are done adding the common navigation to our application. The application now uses a common theme, routing, and navigation structure.

    Conclusion

    We have seen how to do the following through this post:

    - Add a WebForm page from a WebForm project to an existing ASP.NET MVC application
    - Use a common master page for both WebForm and MVC pages
    - Use routing for SEO-friendly links
    - Use a common menu structure for both WebForm and MVC

    The sample project is attached below.
    Version: VS 2010 RTM
    Remember to change your connection string to point to your Northwind database
    NorthwindSalesMVCWebform.zip
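    On the route unit tests the article recommends: one lightweight way to exercise the ReportRoute without a web server is to stub HttpContextBase by hand, as sketched below. The fake context and the MvcApplication.RegisterRoutes call are illustrative (a mocking framework would do the same job), and the assertions assume MSTest-style syntax.

    using System.Web;
    using System.Web.Routing;

    // Minimal fake HTTP context: routing only needs the app-relative path.
    public class FakeHttpContext : HttpContextBase
    {
        private readonly HttpRequestBase _request;

        public FakeHttpContext(string appRelativeUrl)
        {
            _request = new FakeRequest(appRelativeUrl);
        }

        public override HttpRequestBase Request
        {
            get { return _request; }
        }

        private class FakeRequest : HttpRequestBase
        {
            private readonly string _url;

            public FakeRequest(string url) { _url = url; }

            public override string AppRelativeCurrentExecutionFilePath
            {
                get { return _url; }
            }

            public override string PathInfo
            {
                get { return string.Empty; }
            }
        }
    }

    // In a test method:
    // var routes = new RouteCollection();
    // MvcApplication.RegisterRoutes(routes);
    // RouteData data = routes.GetRouteData(new FakeHttpContext("~/Reports/Products"));
    // Assert.IsNotNull(data);
    // Assert.AreEqual("Products", data.Values["reportname"]);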

    Read the article

  • July 2013 Release of the Ajax Control Toolkit

    - by Stephen.Walther
    I’m super excited to announce the July 2013 release of the Ajax Control Toolkit. You can download the new version of the Ajax Control Toolkit from CodePlex (http://ajaxControlToolkit.CodePlex.com) or install the Ajax Control Toolkit from NuGet. With this release, we have completely rewritten the way the Ajax Control Toolkit combines, minifies, gzips, and caches JavaScript files. The goal of this release was to improve the performance of the Ajax Control Toolkit and make it easier to create custom Ajax Control Toolkit controls.

    Improving Ajax Control Toolkit Performance

    Previous releases of the Ajax Control Toolkit optimized performance for a single page but not multiple pages. When you visited each page in an app, the Ajax Control Toolkit would combine all of the JavaScript files required by the controls in the page into a new JavaScript file. So, even if every page in your app used the exact same controls, visitors would need to download a new combined Ajax Control Toolkit JavaScript file for each page visited. Downloading new scripts for each page that you visit does not lead to good performance. In general, you want to make as few requests for JavaScript files as possible and take maximum advantage of caching. For most apps, you would get much better performance if you could specify all of the Ajax Control Toolkit controls that you need for your entire app and create a single JavaScript file which could be used across your entire app. What a great idea!

    Introducing Control Bundles

    With this release of the Ajax Control Toolkit, we introduce the concept of Control Bundles. You define a Control Bundle to indicate the set of Ajax Control Toolkit controls that you want to use in your app. You define Control Bundles in a file located in the root of your application named AjaxControlToolkit.config. For example, the following AjaxControlToolkit.config file defines two Control Bundles:

    <ajaxControlToolkit>
        <controlBundles>
            <controlBundle>
                <control name="CalendarExtender" />
                <control name="ComboBox" />
            </controlBundle>
            <controlBundle name="CalendarBundle">
                <control name="CalendarExtender"></control>
            </controlBundle>
        </controlBundles>
    </ajaxControlToolkit>

    The first Control Bundle in the file above does not have a name. When a Control Bundle does not have a name, it becomes the default Control Bundle for your entire application. The default Control Bundle is used by the ToolkitScriptManager by default. For example, the default Control Bundle is used when you declare the ToolkitScriptManager like this:

    <ajaxToolkit:ToolkitScriptManager runat="server" />

    The default Control Bundle defined in the file above includes all of the scripts required for the CalendarExtender and ComboBox controls. All of the scripts required for both of these controls are combined, minified, gzipped, and cached automatically. The AjaxControlToolkit.config file above also defines a second Control Bundle with the name CalendarBundle. Here's how you would use the CalendarBundle with the ToolkitScriptManager:

    <ajaxToolkit:ToolkitScriptManager runat="server">
        <ControlBundles>
            <ajaxToolkit:ControlBundle Name="CalendarBundle" />
        </ControlBundles>
    </ajaxToolkit:ToolkitScriptManager>

    In this case, only the JavaScript files required by the CalendarExtender control, and not the ComboBox, would be downloaded, because the CalendarBundle lists only the CalendarExtender control. You can use multiple named Control Bundles with the ToolkitScriptManager, and you will get all of the scripts from both bundles.
    Support for Control Bundles is a new feature of the ToolkitScriptManager that we introduced with this release. We extended the ToolkitScriptManager to support the Control Bundles that you define in the AjaxControlToolkit.config file. Let me be explicit about the rules for Control Bundles:

    1. If you do not create an AjaxControlToolkit.config file, then the ToolkitScriptManager will download all of the JavaScript files required for all of the controls in the Ajax Control Toolkit. This is the easy but low-performance option.

    2. If you create an AjaxControlToolkit.config file and create a Control Bundle without a name, then the ToolkitScriptManager uses that Control Bundle by default. For example, if you plan to use only the CalendarExtender and ComboBox controls in your application, then you should create a default bundle that lists only these two controls.

    3. If you create an AjaxControlToolkit.config file and create one or more named Control Bundles, then you can use these named Control Bundles with the ToolkitScriptManager. For example, you might want to use different subsets of the Ajax Control Toolkit controls in different sections of your app.

    I should also mention that you can use the AjaxControlToolkit.config file with custom Ajax Control Toolkit controls, that is, new controls that you write. For example, here is how you would register a set of custom controls from an assembly named MyAssembly:

    <ajaxControlToolkit>
        <controlBundles>
            <controlBundle name="CustomBundle">
                <control name="MyAssembly.MyControl1" assembly="MyAssembly" />
                <control name="MyAssembly.MyControl2" assembly="MyAssembly" />
            </controlBundle>
        </controlBundles>
    </ajaxControlToolkit>

    What about ASP.NET Bundling and Minification?

    The idea of Control Bundles is similar to the idea of Script Bundles used in ASP.NET Bundling and Minification. You might be wondering why we didn't simply use Script Bundles with the Ajax Control Toolkit. There were several reasons. First, ASP.NET Bundling does not work with scripts embedded in an assembly. Because all of the scripts used by the Ajax Control Toolkit are embedded in the AjaxControlToolkit.dll assembly, ASP.NET Bundling was not an option. Second, Web Forms developers typically think at the level of controls and not at the level of individual scripts. We believe that it makes more sense for a Web Forms developer to specify the controls that they need in an app (CalendarExtender, ToggleButton) instead of the individual scripts that they need in an app (the 15 or so scripts required by the CalendarExtender). Finally, ASP.NET Bundling does not work with older versions of ASP.NET. The Ajax Control Toolkit needs to support ASP.NET 3.5, ASP.NET 4.0, and ASP.NET 4.5, so using ASP.NET Bundling was not an option. There is nothing wrong with using Control Bundles and Script Bundles side by side: the ASP.NET 4.0 and 4.5 ToolkitScriptManager supports both approaches to bundling scripts.

    Using the AjaxControlToolkit.CombineScriptsHandler

    Browsers cache JavaScript files by URL. For example, if you request the exact same JavaScript file from two different URLs, then the exact same JavaScript file must be downloaded twice. However, if you request the same JavaScript file from the same URL more than once, then it only needs to be downloaded once. With this release of the Ajax Control Toolkit, we have introduced a new HTTP handler named the AjaxControlToolkit.CombineScriptsHandler.
    If you register this handler in your web.config file, then the Ajax Control Toolkit can cache your JavaScript files for up to one year in the future automatically. You should register the handler in two places in your web.config file: in the <httpHandlers> section and in the <system.webServer> section (don't forget to register the handler for the AjaxFileUpload while you are there!).

    <httpHandlers>
        <add verb="*" path="AjaxFileUploadHandler.axd" type="AjaxControlToolkit.AjaxFileUploadHandler, AjaxControlToolkit" />
        <add verb="*" path="CombineScriptsHandler.axd" type="AjaxControlToolkit.CombineScriptsHandler, AjaxControlToolkit" />
    </httpHandlers>

    <system.webServer>
        <validation validateIntegratedModeConfiguration="false" />
        <handlers>
            <add name="AjaxFileUploadHandler" verb="*" path="AjaxFileUploadHandler.axd" type="AjaxControlToolkit.AjaxFileUploadHandler, AjaxControlToolkit" />
            <add name="CombineScriptsHandler" verb="*" path="CombineScriptsHandler.axd" type="AjaxControlToolkit.CombineScriptsHandler, AjaxControlToolkit" />
        </handlers>
    </system.webServer>

    The handler is only used in release mode and not in debug mode. You can enable release mode in your web.config file like this:

    <compilation debug="false">

    You can also override the web.config setting with the ToolkitScriptManager like this:

    <act:ToolkitScriptManager ScriptMode="Release" runat="server" />

    In release mode, scripts are combined, minified, gzipped, and cached with a far-future cache header automatically. When the handler is not registered, scripts are requested from the page that contains the ToolkitScriptManager. When the handler is registered in the web.config file, scripts are requested from the handler instead. If you want the best performance, always register the handler; that way, the Ajax Control Toolkit can cache the bundled scripts across page requests with a far-future cache header. If you don't register the handler, then a new JavaScript file must be downloaded whenever you travel to a new page.

    Dynamic Bundling and Minification

    Previous releases of the Ajax Control Toolkit used a Visual Studio build task to minify the JavaScript files used by the Ajax Control Toolkit controls. The disadvantage of this approach to minification is that it made it difficult to create custom Ajax Control Toolkit controls. Starting with this release of the Ajax Control Toolkit, we support dynamic minification: the JavaScript files in the Ajax Control Toolkit are minified at runtime instead of at build time. Scripts are minified only when in release mode. You can specify release mode with the web.config file or with the ToolkitScriptManager ScriptMode property. Because of this change, the Ajax Control Toolkit now depends on the Ajax Minifier. You must include a reference to AjaxMin.dll in your Visual Studio project or you cannot take advantage of runtime minification. If you install the Ajax Control Toolkit from NuGet, then AjaxMin.dll is added to your project as a NuGet dependency automatically. If you download the Ajax Control Toolkit from CodePlex, then AjaxMin.dll is included in the download. This change means that you no longer need to do anything special to create a custom Ajax Control Toolkit control. As an open source project, we hope more people will contribute to the Ajax Control Toolkit (yes, I am looking at you). We have been working hard on making it much easier to create new custom controls. More on this subject with the next release of the Ajax Control Toolkit.
    A Single Visual Studio Solution

    We also made substantial changes to the Visual Studio solution and projects used by the Ajax Control Toolkit with this release. This change will matter to you only if you need to work directly with the Ajax Control Toolkit source code. In previous releases of the Ajax Control Toolkit, we maintained separate solution and project files for ASP.NET 3.5, ASP.NET 4.0, and ASP.NET 4.5. Starting with this release, we support a single Visual Studio 2012 solution that takes advantage of multi-targeting to build ASP.NET 3.5, ASP.NET 4.0, and ASP.NET 4.5 versions of the toolkit. This change means that you need Visual Studio 2012 to open the Ajax Control Toolkit project downloaded from CodePlex. For details on how we set up multi-targeting, please see Budi Adiono's blog post: http://www.budiadiono.com/2013/07/25/visual-studio-2012-multi-targeting-framework-project/

    Summary

    You can take advantage of this release of the Ajax Control Toolkit to significantly improve the performance of your website. You need to do two things: 1) create an AjaxControlToolkit.config file which lists the controls used in your app, and 2) register the AjaxControlToolkit.CombineScriptsHandler in the web.config file. We made substantial changes to the Ajax Control Toolkit with this release. We think these changes will result in much better performance for multipage apps and make the process of building custom controls much easier. As always, we look forward to hearing your feedback.

    Read the article

  • How To Restore Firefox Options To Default Without Uninstalling

    - by Gopinath
    Firefox plugins are awesome, and they are one of the pillars of the huge success of the Firefox browser. Plugins vary from simple ones like changing the color scheme of the browser to powerful ones like changing the behavior of the browser itself. Recently I installed one of the powerful Firefox plugins and played around with it to tweak the behavior of the browser. At the end of my half hour of play, Firefox had become completely useless and stopped rendering web pages properly. To continue using Firefox I had to restore it to its default settings. But I didn't want to uninstall and then install it again, as that is a time-consuming process and I would also lose all the plugins I'm using. So how did I restore the default settings in a single click?

    Default Settings Restore Through Safe Mode Options

    It's very easy to restore the default settings of Firefox with the safe mode options. All we need to do is:

    1. Close all the Firefox browser windows that are open
    2. Launch Firefox in safe mode
    3. Choose the option "Reset all user preferences to Firefox defaults"
    4. Click on the "Make Changes and Restart" button

    Note: When Firefox restores the default settings, it erases all the stored passwords, browser history, and other settings you have changed.

    That's all. This excellent feature of Firefox saved me from great pain, and I hope it's going to help you too. Join us on Facebook to read all our stories right inside your Facebook news feed.

    Read the article

  • How to Reuse Your Old Wi-Fi Router as a Network Switch

    - by Jason Fitzpatrick
    Just because your old Wi-Fi router has been replaced by a newer model doesn't mean it needs to gather dust in the closet. Read on as we show you how to take an old and underpowered Wi-Fi router and turn it into a respectable network switch (saving yourself $20 in the process). Image by mmgallan.

    Why Do I Want To Do This?

    Wi-Fi technology has changed significantly in the last ten years, but Ethernet-based networking has changed very little. As such, a Wi-Fi router with 2006-era guts is lagging significantly behind current Wi-Fi router technology, but the Ethernet networking component of the device is just as useful as ever; aside from potentially being only 100Mbps instead of 1000Mbps capable (which for 99% of home applications is irrelevant), Ethernet is Ethernet. What does this matter to you, the consumer? It means that even though your old router doesn't hack it for your Wi-Fi needs any longer, the device is still a perfectly serviceable (and high-quality) network switch. When do you need a network switch? Any time you want to share an Ethernet cable among multiple devices, you need a switch. For example, let's say you have a single Ethernet wall jack behind your entertainment center, but you have four devices that you want to link to your local network via hardline, including your smart HDTV, DVR, Xbox, and a little Raspberry Pi running XBMC. Instead of spending $20-30 to purchase a brand new switch of comparable build quality to your old Wi-Fi router, it makes financial sense (and is environmentally friendly) to invest five minutes of your time tweaking the settings on the old router to turn it from a Wi-Fi access point and routing tool into a network switch, perfect for dropping behind your entertainment center so that your DVR, Xbox, and media center computer can all share an Ethernet connection.

    What Do I Need?

    For this tutorial you'll need a few things, all of which you likely have readily on hand or are free for download. To follow the basic portion of the tutorial, you'll need the following:

    1 Wi-Fi router with Ethernet ports
    1 Computer with an Ethernet jack
    1 Ethernet cable

    For the advanced tutorial, you'll need all of those things, plus:

    1 copy of DD-WRT firmware for your Wi-Fi router

    We're conducting the experiment with a Linksys WRT54GL Wi-Fi router. The WRT54 series is one of the best-selling Wi-Fi router series of all time, and there's a good chance a significant number of readers have one (or more) of them stuffed in an office closet. Even if you don't have one of the WRT54 series routers, however, the principles we're outlining here apply to all Wi-Fi routers; as long as your router administration panel allows the necessary changes, you can follow right along with us.

    A quick note on the difference between the basic and advanced versions of this tutorial before we proceed. Your typical Wi-Fi router has 5 Ethernet ports on the back: 1 labeled "Internet", "WAN", or a variation thereof, intended to be connected to your DSL/cable modem, and 4 labeled 1-4, intended to connect Ethernet devices like computers, printers, and game consoles directly to the Wi-Fi router. When you convert a Wi-Fi router to a switch, in most situations you'll lose two ports, as the "Internet" port cannot be used as a normal switch port and one of the switch ports becomes the input port for the Ethernet cable linking the switch to the main network. This means, referencing the diagram above, you'd lose the WAN port and LAN port 1, but retain LAN ports 2, 3, and 4 for use.
If you only need the switch for 2-3 devices this may be satisfactory. However, for those of you who would prefer a more traditional switch setup where all of the regular ports are accessible, you’ll need to flash a third-party router firmware like the powerful DD-WRT onto your device. Doing so opens up the router to a greater degree of modification and allows you to assign the previously reserved WAN port to the switch, thus opening up LAN ports 1-4. Even if you don’t intend to use that extra port, DD-WRT offers you so many more options that it’s worth the extra few steps.

Preparing Your Router for Life as a Switch

Before we jump right in to shutting down the Wi-Fi functionality and repurposing your device as a network switch, there are a few important prep steps to attend to. First, you want to reset the router (if you just flashed a new firmware to your router, skip this step). Follow the reset procedures for your particular router, or go with what is known as the “Peacock Method”, wherein you hold down the reset button for thirty seconds, unplug the router and wait (while still holding the reset button) for thirty seconds, and then plug it back in while, again, continuing to hold down the reset button. Over the life of a router a variety of changes are made, big and small, so it’s best to wipe them all back to the factory defaults before repurposing the router as a switch. Second, after resetting, we need to change the IP address of the device on the local network to an address which does not directly conflict with the new router. The typical default IP address for a home router is 192.168.1.1; if you ever need to get back into the administration panel of the router-turned-switch to check on things or make changes, it will be a real hassle if the IP address of the device conflicts with the new home router. The simplest way to deal with this is to assign an address close to the actual router address but outside the range of addresses that your router will assign via DHCP; a good pick then is 192.168.1.2. Once the router is reset (or re-flashed) and has been assigned a new IP address, it’s time to configure it as a switch.

Basic Router to Switch Configuration

If you don’t want to (or need to) flash new firmware onto your device to open up that extra port, this is the section of the tutorial for you: we’ll cover how to take a stock router, our previously mentioned WRT54 series Linksys, and convert it to a switch. Hook the Wi-Fi router up to the network via one of the LAN ports (consider the WAN port as good as dead from this point forward; unless you start using the router in its traditional function again or later flash a more advanced firmware to the device, the port is officially retired at this point). Open the administration control panel via a web browser on a connected computer. Before we get started, two things: first, anything we don’t explicitly instruct you to change should be left in the default factory-reset setting as you find it; second, change the settings in the order we list them, as some settings can’t be changed after certain features are disabled. To start, let’s navigate to Setup -> Basic Setup. Here you need to change the following things:

Local IP Address: [different than the primary router, e.g. 192.168.1.2]
Subnet Mask: [same as the primary router, e.g. 255.255.255.0]
DHCP Server: Disable

Save with the “Save Settings” button and then navigate to Setup -> Advanced Routing:

Operating Mode: Router

This particular setting is very counterintuitive. The “Operating Mode” toggle tells the device whether or not it should enable the Network Address Translation (NAT) feature. Because we’re turning a smart piece of networking hardware into a relatively dumb one, we don’t need this feature, so we switch from Gateway mode (NAT on) to Router mode (NAT off). Our next stop is Wireless -> Basic Wireless Settings:

Wireless SSID Broadcast: Disable
Wireless Network Mode: Disabled

After disabling the wireless we’re going to, again, do something counterintuitive. Navigate to Wireless -> Wireless Security and set the following parameters:

Security Mode: WPA2 Personal
WPA Algorithms: TKIP+AES
WPA Shared Key: [select some random string of letters, numbers, and symbols like JF#d$di!Hdgio890]

Now you may be asking yourself: why on Earth are we setting a rather secure Wi-Fi configuration on a Wi-Fi router we’re not going to use as a Wi-Fi node? On the off chance that something strange happens, say, after a power outage when your router-turned-switch cycles on and off a bunch of times and the Wi-Fi functionality is activated, we don’t want to be running the Wi-Fi node wide open and granting unfettered access to your network. While the chances of this are next to nonexistent, it takes only a few seconds to apply the security measure, so there’s little reason not to. Save your changes and navigate to Security -> Firewall:

Uncheck everything but Filter Multicast
Firewall Protect: Disable

At this point you can save your changes again, review the changes you’ve made to ensure they all stuck, and then deploy your “new” switch wherever it is needed.

Advanced Router to Switch Configuration

For the advanced configuration, you’ll need a copy of DD-WRT installed on your router. Although doing so takes an extra few steps, it gives you a lot more control over the process and liberates an extra port on the device. Hook the Wi-Fi router up to the network via one of the LAN ports (later you can switch the cable to the WAN port). Open the administration control panel via a web browser on the connected computer. Navigate to the Setup -> Basic Setup tab to get started. In the Basic Setup tab, ensure the following settings are adjusted. These setting changes are not optional and are required to turn the Wi-Fi router into a switch:

WAN Connection Type: Disabled
Local IP Address: [different than the primary router, e.g. 192.168.1.2]
Subnet Mask: [same as the primary router, e.g. 255.255.255.0]
DHCP Server: Disable

In addition to disabling the DHCP server, also uncheck all the DNSMasq boxes at the bottom of the DHCP sub-menu. If you want to activate the extra port (and why wouldn’t you?), in the WAN port section:

Assign WAN Port to Switch [X]

At this point the router has become a switch and you have access to the WAN port, so the LAN ports are all free. Since we’re already in the control panel, however, we might as well flip a few optional toggles that further lock down the switch and prevent something odd from happening. The optional settings are arranged by the menu you find them in. Remember to save your settings with the save button before moving on to a new tab. While still in the Setup -> Basic Setup menu, change the following:

Gateway/Local DNS: [IP address of primary router, e.g. 192.168.1.1]
NTP Client: Disable

The next step is to turn off the radio completely (which not only kills the Wi-Fi but actually powers the physical radio chip off). Navigate to Wireless -> Advanced Settings -> Radio Time Restrictions:

Radio Scheduling: Enable
Select “Always Off”

There’s no need to create a potential security problem by leaving the Wi-Fi radio on; the above toggle turns it completely off. Under Services -> Services:

DNSMasq: Disable
ttraff Daemon: Disable

Under the Security -> Firewall tab, uncheck every box except “Filter Multicast”, as seen in the screenshot above, and then disable SPI Firewall. Once you’re done here, save and move on to the Administration tab. Under Administration -> Management:

Info Site Password Protection: Enable
Info Site MAC Masking: Disable
CRON: Disable
802.1x: Disable
Routing: Disable

After this final round of tweaks, save and then apply your settings. Your router has now been, strategically, dumbed down enough to plod along as a very dependable little switch. Time to stuff it behind your desk or entertainment center and streamline your cabling.

    Read the article

  • Queued Loadtest to remove Concurrency issues using Shared Data Service in OpenScript

    - by stefan.thieme(at)oracle.com
Queued Processing to remove Concurrency issues in Loadtest Scripts

Some scripts act on information returned by the server, e.g. they act on the first item in the returned list of pending tasks/actions. This may lead to concurrency issues if the virtual users simulated in a load test scenario are not synchronized in some way. As load test cases should be carried out in a comparable and straightforward manner, simply cancelling a transaction when a collision occurs is clearly not an option. If you increase the number of virtual users, this approach would lead to a high number of requests for the early steps in your transaction (e.g. login, retrieve list of action points, assign an action point to the virtual user), but later steps would rarely, if ever, be completed successfully, depending on the application logic.

A way to tackle this problem is to enqueue the virtual users in a Shared Data Service queue. Only the first virtual user in this queue is allowed to carry out the critical steps (retrieve list of action points, assign an action point to the virtual user) of the transaction at any one time. Once a virtual user has passed the critical path, it dequeues itself from the head of the queue and continues with its actions. In theory this allows virtual users to run all steps of the transaction that are not part of the critical path in parallel. In practice this is rarely the case, and the approach does not allow adding more than N users to a transaction without causing delays due to virtual users waiting in the queue, N being the total transaction time divided by the sum of the times of all critical steps in the transaction. For example, if the whole transaction takes 30 seconds and its critical steps take 3 seconds combined, queuing delays set in beyond N = 30/3 = 10 virtual users.

This limit can be circumvented by allowing multiple queues to act on individual segments of the list of actions, e.g. a per-country filter, an "ends with 0..9" filter, etc. That would require additional handling of these queues (or of multiple slots at the head of a queue) in order to maintain mutually exclusive access to the first element of the list returned by the server at any one time during the load test. Such improved handling of multiple queues and/or multiple slots is beyond the scope of this paper.

Shared Data Services Pre-Requisites

Start WebLogic Server to host Shared Data Services

You will have to make sure that your WebLogic Server is installed and started. Shared Data Services may not work if you installed only the minimal installation package for OpenScript. If, however, you installed the default package including OLT and OTM, you may follow the instructions below to start and verify the WebLogic installation. To start the WebLogic Server deployed underneath Oracle Load Testing and/or Oracle Test Manager, go to your Start menu, Oracle Application Testing Suite, and select the Restart Oracle Application Testing Suite Application Service entry from the Tools submenu. To verify the service has been started, run the Microsoft Management Console for Services by selecting Run from the Start menu and entering services.msc. Look for the entry that reads Oracle Application Testing Suite Application Service; once it has changed its status from Starting to Started, you can proceed to verify the login. Please note that this may take several minutes, up to 10 minutes depending on the strength of your CPU horsepower.

Verify WebLogic Server user credentials

You will have to make sure that your WebLogic Server is installed and started.
Next, open the Oracle WebLogic Server Administration Console at http://localhost:8088/console. It may take a while until the application is deployed and started; it may display the following until the Administration Console has been deployed on the fly. Afterwards you can log in using the username oats and the password that you selected at install time for your Application Testing Suite administrative purposes. This will bring up the Home page of your WebLogic Server. You have actually already verified that you are able to log in with these credentials. However, if you want to check the details, navigate to Security Realms, myrealm, Users and Groups tab. Here you could add users to your WebLogic Server which could be used in the later steps. Details on the Groups required for such a custom user to work exceed this quick overview; consult the WebLogic Server Administration Guide.

Shared Data Services pre-requisites for Load testing

OpenScript Preferences have to be set to enable Encryption and to provide a default Shared Data Service Connection for Playback. These are the pre-requisites for load testing with Shared Data Services. Please note that the usage of Connection Parameters (an individual directive in the script) for Shared Data Services did not play back reliably in the current version 9.20.0370 of Oracle Load Testing (OLT), and encryption of credentials still seemed to be mandatory as well.

General Encryption settings

Select OpenScript Preferences from the View menu and navigate to the General, Encryption entry in the tree on the left. Select the Encrypt script data option from the list and enter the same password that you used for securing your WebLogic Server Administration Console.

Enable global shared data access credentials

Select OpenScript Preferences from the View menu and navigate to the Playback, Shared Data entry in the tree on the left. Enable the global shared data access credentials and enter the Address, User name and Password determined for the WebLogic Server hosting Shared Data Services. Please note that you may want to replace localhost in Address with the host's real name in case you plan to run load tests with Loadtest Agents running on remote systems.

Queued Processing of Transactions

Enable Shared Data Services Module in Script Properties

The Shared Data Services Module has to be enabled for each script that wants to employ the Shared Data Service queue functionality in OpenScript. It can be enabled under the Script menu by selecting Script Properties. On the Script Properties dialog, select the Modules section and check Shared Data to enable the Shared Data Service Module for your script.
Checking the Shared Data option will effectively add a line to your script code that adds the sharedData ScriptService to your script class of IteratingVUserScript:

@ScriptService oracle.oats.scripting.modules.sharedData.api.SharedDataService sharedData;

Record your script

Record your script as usual and then add the following pieces for queue handling in the Initialize code block, before the first step and after the last step of your critical path, and in the Finalize code block. The Java code to be added at the individual locations is explained in the following sections in full detail.

Create a Shared Data Queue in Initialize

To create a Shared Data Queue, go to the Java view of your script and enter the following statements in the initialize() code block:

info("Create queueA with life time of 120 minutes");
sharedData.createQueue("queueA", 120);

This will create an instantiation of the Shared Data Queue object named queueA, which is maintained for up to 120 minutes. If you want to use the code for multiple scripts, make sure to use a different queue name for each one here and in the subsequent steps. You may even consider a dynamic queue name based on the filters of the result list being concurrently accessed.

Prepare a unique id for each Iteration

In order to keep track of individual virtual users in our queue, we need to create a unique identifier from the virtual user id and the used username right after retrieving the next record from our databank file:

getDatabank("Usernames").getNextDatabankRecord();
getVariables().set("usernameValue1", "VU_{{@vuid}}_{{@iterationnum}}_{{db.Usernames.Username}}_{{@timestamp}}_{{@random(10000)}}");
String usernameValue = getVariables().get("usernameValue1");
info("Now running virtual user " + usernameValue);

As you can see from the above code block, we have set the OpenScript variable usernameValue1 to VU_{{@vuid}}_{{@iterationnum}}_{{db.Usernames.Username}}_{{@timestamp}}_{{@random(10000)}}, a concatenation of the virtual user id and the iteration number for general uniqueness, as well as the username from our databank, the timestamp, and a random number to make it further unique and ease spotting of errors. Not all of these fields are actually required to make it really unique, but adding the queue name may also be considered to help troubleshoot multiple queues. The value is then retrieved with the getVariables().get() method call and assigned to the usernameValue String used throughout the script. Please note that moving the getDatabank("Usernames").getNextDatabankRecord(); call to the initialize block was later considered, to remove concurrency between multiple virtual users running with the same user id and therefore accessing the same "My Inbox" in step 6. This will effectively give each virtual user a user id from the databank file; make sure you have enough user ids to remove this second hurdle.

Enqueue and attend Queue before Critical Path

To maintain the right order of virtual users being allowed into the critical path of the transaction, the following pseudo step has to be added in front of the first critical step.
In the case of this example, that is right in front of the step where we retrieve the list of actions from which we select the first to be assigned to us.

beginStep("[0] Waiting in the Queue", 0);
{
    info("Enqueued virtual user " + usernameValue + " at the end of queueA");
    sharedData.offerLast("queueA", usernameValue);
    info("Wait until the user is the first in queueA");
    String queueValue1 = null;
    do {
        // we wait for at least 0.7 seconds before we check the head of the
        // queue. This is the time it takes one user to move through the
        // critical path, i.e. pass steps [5] Enter country and [6] Assign to me
        Thread.sleep(700);
        queueValue1 = (String) sharedData.peekFirst("queueA");
        info("The first user in queueA is currently: '" + queueValue1 + "' " + queueValue1.getClass() + " length " + queueValue1.length());
        info("The current user is '" + usernameValue + "' " + usernameValue.getClass() + " length " + usernameValue.length() + ": indexOf " + usernameValue.indexOf(queueValue1) + " equals " + usernameValue.equals(queueValue1));
    } while (queueValue1.indexOf(usernameValue) < 0);
    info("Now the user is the first in queueA");
}
endStep();

This will enqueue the username at the tail of our queue. The loop waits for at least 700 milliseconds, the time it takes for one user to exit the critical path, and then compares the head of our queue with the virtual user's username. This last step is repeated while the two are not equal (indexOf less than zero). Once they are equal, indexOf yields a value of zero or larger and the virtual user proceeds with the critical steps.

Dequeue after Critical Path

After the virtual user has left the critical path and completed its last step, the following code block needs to dequeue the virtual user. In the case of our example, this is right after the action has actually been assigned to the virtual user. This allows the next virtual user to retrieve the list of actions still available and in turn make its selection/assignment.

info("Get and remove the current user from the head of queueA");
String pollValue1 = (String) sharedData.pollFirst("queueA");

The current user is removed from the head of the queue. The next one will now be able to match its username against the head of the queue.

Clear and Destroy Queue for Finish

When the script has completed, it should clear and destroy the queue. This code block can be put in the finish block of your script and/or in a separate script, in order to clear and remove the queue in case you have spotted an error or want to reset the queue for some reason.

info("Clear queueA");
sharedData.clearQueue("queueA");
info("Destroy queueA");
sharedData.destroyQueue("queueA");

The users waiting in queueA are cleared and the queue is destroyed. If you have scripts still executing, they will be caught in a loop. I found it better to maintain a separate Reset Queue script which contains only the following code in the initialize() block. I call this script to make sure the queue is cleared in between multiple load test runs. This script could even be added as the first in a larger scenario, which would execute it only once at the very start of the load test and make sure the queues do not contain any stale entries.

info("Create queueA with life time of 120 minutes");
sharedData.createQueue("queueA", 120);
info("Clear queueA");
sharedData.clearQueue("queueA");

This will create a Shared Data Queue instance of queueA and clear all entries from this queue.

Monitoring Queue

While creating the scripts it was useful to monitor the contents of the queue, i.e. the current first user.
The following code block will make sure the Shared Data Queue is accessible in the initialize() block:

info("Create queueA with life time of 120 minutes");
sharedData.createQueue("queueA", 120);

In the run() block, the following code will continuously monitor the first element of the queue and write an informational message with the current username value to the Result window:

info("Monitor the first users in queueA");
String queueValue1 = null;
do {
    queueValue1 = (String) sharedData.peekFirst("queueA");
    if (queueValue1 != null)
        info("The first user in queueA is currently: '" + queueValue1 + "' " + queueValue1.getClass() + " length " + queueValue1.length());
} while (true);

This script can be run from OpenScript in parallel to a load test performed by Oracle Load Testing. However, it is not recommended to run it in a production load test, as the performance impact is unknown. Accessing the queue's head with the peekFirst() method has been reported to take about 2 seconds by both OpenScript and OLT. It is advised to log a Service Request to see if this could be lowered in future releases of Application Testing Suite, as pollFirst(), and even offerLast() writing to the tail of the queue, usually returned after an average of 0.1 seconds.

Debugging Queue

While debugging the scripts, the following was useful to remove single entries from the head of the queue, i.e. the current first user. The following code block will make sure the Shared Data Queue is accessible in the initialize() block:

info("Create queueA with life time of 120 minutes");
sharedData.createQueue("queueA", 120);

In the run() block, the following code will remove the first element of the queue and write an informational message with the current username value to the Result window:

info("Get and remove the current user from the head of queueA");
String pollValue1 = (String) sharedData.pollFirst("queueA");
info("The first user in queueA was currently: '" + pollValue1 + "' " + pollValue1.getClass() + " length " + pollValue1.length());

References

Oracle Functional Testing OpenScript User's Guide Version 9.20 [E15488-05], Chapter 17 "Using the Shared Data Module": http://download.oracle.com/otn/nt/apptesting/oats-docs-9.21.0030.zip
Oracle Fusion Middleware Oracle WebLogic Server Administration Console Online Help 11g Release 1 (10.3.4) [E13952-04], "Manage users and groups": http://download.oracle.com/docs/cd/E17904_01/apirefs.1111/e13952/taskhelp/security/ManageUsersAndGroups.htm

    Read the article

  • CodePlex Daily Summary for Wednesday, March 17, 2010

New Projects

chaosreader: A simple RSS reader.
CRM 4.0 Customization GUID Update: The CRM 4.0 customization GUID update is an open source C# console application that automatically replaces GUID values in your exported workflow cu...
DotNetNuke® Skin Bright: A DotNetNuke Design Challenge skin package submitted to the "Modern Business" category by Noel Jerke of SiteToolset. This simple and clean business...
DotNetNuke® Skin Go: A DotNetNuke Design Challenge skin package submitted to the "Modern Business" category by DnnGo Corporation. The skin uses web standard DIV+CSS tec...
DotNetNuke® Skin J10blend: A DotNetNuke Design Challenge skin package submitted to the "Out of the box" category by Timthy Maler of 2M Studio Design. J10-Black v01.00.00 inc...
DotNetNuke® Skin Recipe: A DotNetNuke Design Challenge skin package submitted to the "Standards" category by dnnprofis.at. For mobile devices the skin changes to a mobile...
DotNetNuke® Skin SpaceSmurfs: A DotNetNuke Design Challenge skin package submitted to the "Personal" category by Eric Johnson of Personify Design. This fun personal skin was ins...
ERDOS6 - Web: A web project about ERDOS 6
Flickrlight: Flickrlight is a personal fun project out of love of Flickr and Silverlight. You can experience it here: http://www.flickrlight.net.
GsGrid: Extracting data from Gaussian grid files and grid file calculation
iLocator: iLocator is a collaborative educational mapping game for children developed on Microsoft Surface. This game encourages players to collaborate with ...
Javascript CallObject SOAP AJAX Helper: CallObject is a Javascript-based AJAX helper; it facilitates wrapping of basic SOAP calls (as long as simple data types are used), asynchronous ret...
kbTrainer: kbTrainer is a simple to use HTML application for typing speed training. A lot of features completed in basic. 2 learning keyboard layouts -- engli...
Laboratório de Engenharia de Software - Projeto: Created to study and apply new web technologies.
Maxilds Powershell Scripts: Repo of my powershell scripts
Namespacifier: Namespacifier is a C#.NET library and console application to fix XML documents containing multiple default namespaces. It gives prefixes to defaul...
OData SDK for Objective-C: This is a CTP of the OData SDK for Objective-C. The library targets iPhone devices and Mac OS X and it is designed to facilitate the connection wit...
Open Data App Framework (ODAF): The Open Data Application Framework (ODAF) is a framework that allows cities to easily map existing civic Open Data landmarks, and allow users to r...
QuickieB2B: QuickieB2B is a web application whose main goal is to provide quick info about products. It's designed for small companies who have a big number of...
RayView: Rayview is an easy-to-use raytracing framework based on Microsoft XNA.
Robotics Studio application to navigate Lego Mindstorms robot through labyrinth: A project for the Software Systems Analysis and Design Tools subject at the Kaunas University of Technology. The main point of the project is to code L...
SharePoint Icon Integration: SharePoint Icon Integration makes it easier for SharePoint Administrators / Developers to add an icon (pdf) to the SharePoint farm. You will no long...
TestVersion: Testing versioning
Timecard: SoftSource Timecard project.
T-shirt Cannon: So the Coding4Fun team had two weeks to build two robots able to drive, aim, and shoot t-shirts with a Windows Phone during a MIX10 Keynote demo of...
USTF: This project is a bit secretive right now.
Windows Azure Command-line Tools for PHP Developers: The "Windows Azure Command-line tools for PHP" provide a command-line experience to developers who wish to develop, package, and deploy PHP applic...

New Releases

Caramel Engine: CaramelEngine Alpha Build 0.0.0.1a: This is an early alpha release of the engine and its functionality. Be sure to have the using CaramelEngine statement. This release is for people...
Coot: Preview: Basic preview. On the first use you have to click Create New Session and Login. After this you can just click Screen Saver each time. Settings sho...
CycleMania Starter Kit EAP - ASP.NET 4.0 Problem - Design - Solution: Cyclemania 0.08.32: The latest alpha release.
DeepZoomContainer, Expanded DeepZoom for Silverlight & Windows Phone 7 Series: Release ver. 1.20 for Windows Phone 7 Series: Solution: merge PathAnimation solution into one; MouseWheel elimination; PathAnimationWP7 port; DeepZoomContainer project rebuilt for WP7 support; De...
Desktop Google Reader: 1.3 (the social release): News: Sharing, Liking, Mail item, Labels / Tags, Send to Twitter, Read It Later http://readitlaterlist.com/, Instapaper http://www.instapaper.com/, Favicons...
DotNetNuke® Blog: 04.00.00 RC 2: PLEASE NOTE: Please do not upgrade previous versions of the Beta releases - please start from 03.05.01. This is a RELEASE CANDIDATE, and as such ...
DotNetNuke® Community Edition: 05.03.00: New Features: Templated User Profiles - user profile pages are now publicly viewable; Photo field in User Profile - users can upload a photo to thei...
DotNetNuke® Skin Bright: Bright Package 1.0.0: A DotNetNuke Design Challenge skin package submitted to the "Modern Business" category by Noel Jerke of SiteToolset. This simple and clean business...
DotNetNuke® Skin Go: Go Package 1.0.0: A DotNetNuke Design Challenge skin package submitted to the "Modern Business" category by DnnGo Corporation. The skin uses web standard DIV+CSS tec...
DotNetNuke® Skin J10blend: J10 Blend Package 1.0.0: A DotNetNuke Design Challenge skin package submitted to the "Out of the box" category by Timthy Maler of 2M Studio Design. J10-Black v01.00.00 incl...
DotNetNuke® Skin Recipe: Recipe Package 1.0.0: A DotNetNuke Design Challenge skin package submitted to the "Standards" category by dnnprofis.at. For mobile devices the skin changes to a mobile f...
DotNetNuke® Skin SpaceSmurfs: Space Smurfs Package 1.0.0: A DotNetNuke Design Challenge skin package submitted to the "Personal" category by Eric Johnson of Personify Design. This fun personal skin was ins...
Dynamo: Dynamo v0.1 Beta: The following is included: Dynamo dlls, Antlr dlls, Hello world, Simple Plugin example Application, Dependency injection, Singleton Management ...
ExtremeML: ExtremeML v1.0 Beta 1: Timed to accompany the RTM release of the OpenXML SDK v2.0, this is the first Beta release of ExtremeML (it was previously classified as a preview ...
Family Tree Analyzer: Version 1.1.1.1: Version 1.1.1.1. Lots of Gedcom parsing fixes; it should crash a whole lot less often and tolerate more "interesting" or "quirky" Gedcom entries. Add...
Family Tree Analyzer: Version 1.2.0.1: Version 1.2.0.1. Added option to treat residence facts as Census Facts. IGI Search now permits default country selection, i.e. what to use if it doesn...
Flickrlight: Flickrlight: Current release is for idea sharing. There are not many design patterns being used. Please bear with the mess. :-) In order to run the applicat...
Gherkin editor: Alpha 0.1: Most of the code at this point is the same as the Avalon.Sample from Code Project; just changed the name, removed extra languages and added syntax ...
GsGrid: gsgrid1.6.4: gsgrid1.6.4
GsGrid: gsgrid1.6.4-src: gsgrid1.6.4-src
HTML Template Repeater Module: Version 01.00.02: General: The HTML Template Repeater Module is a direct replacement for the core DotNetNuke Text/HTML module. Use it where you need to repeat the form...
Images Compiler: Release 0.1: Last alpha build
Javascript CallObject SOAP AJAX Helper: Beta Release, 0.2.1: Beta Release, 0.2.1. Contains only core objects
kbTrainer: kbtrainer 1.25u: kbTrainer is a simple to use speed typing training HTML application. A lot of features. All other info available on http://code.google.com/p/kbtr...
MapWindow6: MapWindow 6.0 msi (March 16): This version fixes a bug where selected points were not drawing correctly.
Mesopotamia Experiment: Mesopotamia 1.2.43: Release Notes. New Features: Scenario Name on title bar; Show organisms in Scenarios with simple stats. Bug Fixes: Removed app domain recycling an...
MFCMAPI: March 2010 Release: If you just want to run the tool, get the executable. If you want to debug it, get the symbol file and the source. Build: 6.0.0.1018. The 64 bit bu...
MVVM Light Toolkit: MVVM Light Toolkit V3: Download the Zip file and extract it to a local folder. Then, follow the instructions on the Installation page http://www.galasoft.ch/mvvm/installi...
NETXPF: 1.0.2: Changes: Added a class "IOUtils" with methods for reading streams and GZip-compressing HTTP responses; Fixed a bug in the size formatter (excep...
OData SDK for Objective-C: OData SDK for Objective-C CTP: The current release supports read-only operations only and it has been tested on a limited set of scenarios. The download includes a sample iPhone a...
Open Data App Framework (ODAF): ODAF 1.0: Initial beta release.
Selection Maker: Selection Maker 1.1: New Features: Context Menu for ListView added. Bug Fixes: Fixed: if the user presses the Copy/Cut button when no item is selected in the ListView, the ListView cl...
Selection Maker: Selection Maker 1.2: Bug Fixes: a minor bug fixed
Simple.NET: Simple.Mocking 1.0.0.5: Initial version of a new mocking framework for .NET. Revision 1: Expect.AnyInocationOn<T>(T target) changed to Expect.AnyInocationOn(object target...
SQL Server Extended Properties Quick Editor: New release 1.5.4: What's new: Move preferences to application settings and add a form to edit preferences. Support for add, modify and delete operations could be made ...
SuperModel - A Dynamic View-Model Generator: 1.0.0.0 - Tyra: The final 1.0 release, now less intrusive! If you don't want to implement ISuperModel, simply implement INotifyPropertyChanged.
Timecard: Timecard Initial Release: The zipped version of the initial checkin.
Transparent Persistence.Net: TP.Net 0.1.0: This is the initial alpha release. It's working for a small set of use-cases (basic access to Cassandra).
VCC: Latest build, v2.1.30316.0: Automatic drop of latest build
VFPnfe: Projeto Ajuda PAF-ECF: This project aims to help developers with PAF-ECF certification, under the GNU/GPL public license; for more details, watch the vi...
Visual Studio DSite: Gif Animator: This program will make an animated gif. (Program written in Vb.Net 2008)

Most Popular Projects: MetaSharp, LiveUpload to Facebook, Skype Voice Changer, LiveUpload to YouTube, SIPSorcery, ChartPart for SharePoint, TFS Branching Guide 2.0, TouchFlo Detacher, NPanday, Snippet Editor

Most Active Projects: LINQ to Twitter, Rawr, OData SDK for PHP, DirectQ, patterns & practices – Enterprise Library, BlogEngine.NET, N2 CMS, Open Data App Framework (ODAF), NB_Store - Free DotNetNuke Ecommerce Catalog Module, MapWindow6

    Read the article

  • system overrides my mount parameters in /etc/fstab

    - by valya
[.../~]$ mount
/dev/sda4 on / type ext4 (rw,commit=60,commit=0)
[.../~]$ cat /etc/fstab
# UNCONFIGURED FSTAB FOR BASE SYSTEM
UUID=70739c04-fcb6-4747-803c-824f9c894f41 / ext4 defaults,commit=60 0 1

What can I do about it? It seems strange. I want to be able to set any commit time I want.

Edit: added /proc/mounts contents

[.../~]$ cat /proc/mounts
rootfs / rootfs rw 0 0
none /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
none /proc proc rw,nosuid,nodev,noexec,relatime 0 0
none /dev devtmpfs rw,relatime,size=886332k,nr_inodes=221583,mode=755 0 0
none /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
fusectl /sys/fs/fuse/connections fusectl rw,relatime 0 0
/dev/disk/by-uuid/70739c04-fcb6-4747-803c-824f9c894f41 / ext4 rw,relatime,barrier=1,data=ordered 0 0
none /sys/kernel/debug debugfs rw,relatime 0 0
none /sys/kernel/security securityfs rw,relatime 0 0
none /dev/shm tmpfs rw,nosuid,nodev,relatime 0 0
none /var/run tmpfs rw,nosuid,relatime,mode=755 0 0
none /var/lock tmpfs rw,nosuid,nodev,noexec,relatime 0 0
/dev/sda3 /media/megahard fuseblk rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other,blksize=4096 0 0
cgroup /dev/cgroup/cpu cgroup rw,relatime,cpu,release_agent=/usr/local/sbin/cgroup_clean 0 0
gvfs-fuse-daemon /home/va1en0k/.gvfs fuse.gvfs-fuse-daemon rw,nosuid,nodev,relatime,user_id=1000,group_id=1000 0 0

    Read the article

  • Reminder: True WCF Asynchronous Operation

    - by Sean Feldman
A true asynchronous service operation is not one that returns void, but one that is marked IsOneWay=true and implemented using the BeginX/EndX asynchronous operations (thanks Krzysztof). To support this sort of fire-and-forget invocation, Windows Communication Foundation offers one-way operations. After the client issues the call, Windows Communication Foundation generates a request message, but no correlated reply message will ever return to the client. As a result, one-way operations can't return values, and any exception thrown on the service side will not make its way to the client. One-way calls do not equate to asynchronous calls. When one-way calls reach the service, they may not be dispatched all at once; they may be queued up on the service side to be dispatched one at a time, all according to the service's configured concurrency mode and session mode. How many messages (whether one-way or request-reply) the service is willing to queue up is a product of the configured channel and the reliability mode. If the number of queued messages has exceeded the queue's capacity, the client will block, even when issuing a one-way call. However, once the call is queued, the client is unblocked and can continue executing while the service processes the operation in the background. This usually gives the appearance of asynchronous calls.
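For reference, here is a minimal sketch of what such a one-way contract looks like in code; the interface and operation names are illustrative, not taken from the post:

using System.ServiceModel;

[ServiceContract]
public interface IAuditLog
{
    // IsOneWay = true: WCF generates a request message but never a reply,
    // so the operation must return void, cannot use out/ref parameters,
    // and service-side exceptions never propagate back to the caller.
    [OperationContract(IsOneWay = true)]
    void Record(string entry);
}

Whether the call actually behaves asynchronously still depends on the service's concurrency mode, session mode, and channel queue capacity, exactly as described above.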

    Read the article

  • Experience your music in a whole new way with Zune for PC

    - by Matthew Guay
Tired of the standard Media Player look and feel, and want something new and innovative? Zune offers a fresh, new way to enjoy your music, videos, pictures, and podcasts, whether or not you own a Zune device. Microsoft started out on a new multimedia experience for PCs and mobile devices with the launch of the Zune several years ago. The Zune devices have been well received and noted for their innovative UI, and the Zune HD's fluid interface is the foundation for the widely anticipated Windows Phone 7. But regardless of whether or not you have a Zune device, you can still use the exciting new UI and services directly from your PC. Zune for Windows is a very nice media player that offers a music and video store and wide support for multimedia formats, including those used in Apple products. And if you enjoy listening to a wide variety of music, it also offers the Zune Pass, which lets you stream an unlimited number of songs to your computer and download 10 songs for keeps per month for $14.99/month; or you can do a pre-paid music card as well. It does all this using the new Metro UI, which beautifully shows information using text in a whole new way. Here's a quick look at setting up and using Zune on your PC.

Getting Started

Download the installer (link below), and run it to begin setup. Please note that Zune offers a separate version for computers running the 64 bit version of Windows Vista or 7, so choose that version if your computer is running one of these. Once your download is finished, run the installer to set up Zune on your computer. Accept the EULA when prompted. If there are any updates available, they will automatically download and install during the setup. So, if you're installing Zune from a disk (for example, one packaged with a Zune device), you don't have to worry about whether you have the latest version. Zune will proceed to install on your computer. It may prompt you to restart your computer after installation; click Restart Now so you can proceed with your Zune setup. The reboot appears to be for Zune device support, and the program ran fine otherwise without rebooting, so you could possibly skip this step if you're not using a Zune device. However, to be on the safe side, go ahead and reboot.

After rebooting, launch Zune. It will play a cute introduction video on first launch; press Skip if you don't want to watch it. Zune will now ask you if you want to keep the default settings or change them. Choose Start to keep the defaults, or Settings to customize them to your wishes. Do note that the default settings will set Zune as your default media player, so click Settings if you wish to change this. If you choose to change the default settings, you can change how Zune finds and stores media on your computer. In Windows 7, Zune will by default use your Windows 7 Libraries to manage your media, and will in fact add a new Podcasts library to Windows 7. If your media is stored in another location, such as on a server, then you can add this to the Library. Please note that this adds the location to your system-wide library, not just the Zune player. There's one last step: enter three of your favorite artists, and Zune will add Smart DJ mixes to your Quickplay list based on these. Some less famous or popular artists may not be recognized, so you may have to try another if your choice isn't available. Or, you can click Skip if you don't want to do this right now.

Welcome to Zune! This is the default first page, QuickPlay, where you can easily access your pinned and new items.
If you have a Zune account, or would like to create a new one, click Sign In at the top. Creating a new account is quick and simple, and if you're new to Zune, you can try out a 14 day trial of Zune Pass for free if you want. Zune allows you to share your listening habits and favorites with friends or the world, but you can turn this off or change it if you like.

Using Zune for Windows

To access your media, click the Collection link at the top left. Zune will show all the media you already have stored on your computer, organized by artist and album. Right-click on any album, and you can choose to have Zune find album art or do a variety of other tasks with the media. When playing media, you can view it in several unique ways. First, the default Mix view will show tracks related to the music you're playing, via Smart DJ. You can play these fully if you're a Zune Pass subscriber; otherwise, you can play 30 second previews. Then, for many popular artists, Zune will change the player background to show pictures and information in a unique way while the music is playing. The information may range from history about the artist to the popularity of the song being played. Zune also works as a nice viewer for the pictures on your computer. Start a slideshow, and Zune will play your pictures with nice transition effects and music from your library.

Zune Store

The Zune Store offers a wide variety of music, TV shows, and videos for purchase. If you're a Zune Pass subscriber, you can listen to or download any song without purchasing it; otherwise, you can preview a 30 second clip first. Zune also offers a wide selection of podcasts you can subscribe to for free.

Using Zune for PC with a Zune Device

If you have a Zune device attached to your computer, you can easily add media files to it by simply dragging them to the Zune device icon in the left corner. In the future, this will also work with Windows Phone 7 devices. If you have a Zune HD, you can also download and add apps to your device. Here's the detailed information window for the weather app; click Download to add it to your device.

Mini Mode

The Zune player generally takes up a large portion of your screen, and is actually most impressive when run maximized. However, if you simply want to enjoy your tunes while you're using your computer, you can use Mini mode to still view music info and control Zune in a smaller window. Click the Mini Player button near the window control buttons in the top right to activate it. Now Zune will take up much less of your desktop. This window will stay on top of other windows so you can still easily view and control it. Zune will display an image of the artist if one is available, and this shows up in Mini mode more often than it does in the full mode. And, in Windows 7, you could simply minimize Zune, as you can control it directly from the taskbar thumbnail preview. Even more controls are available from Zune's jumplist in Windows 7. You can directly access your Quickplay links or choose to shuffle all music without leaving the taskbar.

Settings

Although Zune is designed to be used without confusing menus and settings, you can tweak the program to your liking from the settings panel. Click Settings near the top left of the window. Here you can change file storage, file types, burn, metadata, and many more settings. You can also set up Zune to stream media to your XBOX 360 if you have one. You can also customize Zune's look with a variety of modern backgrounds and gradients.
Conclusion If you’re ready for a fresh way to enjoy your media, Zune is designed for you.  It’s innovative UI definitely sets it apart from standard media players, and is very pleasing to use.  Zune is especially nice if your computer is using XP, Vista Home Basic, or 7 Starter as these versions of Windows don’t include Media Center.  Additionally, the mini player mode is a nice touch that brings a feature of Windows 7’s Media Player to XP and Vista.  Zune is definitely one of our favorite music apps.  Try it out, and get a fresh view of your music today! Link Download Zune for Windows Similar Articles Productive Geek Tips Redeem Pre-paid Zune Card Points for Zune Marketplace MediaUpdate Your Zune Player SoftwaredoubleTwist is an iTunes Alternative that Supports Several DevicesFind Free or Cheap Indie Music at Amie StreetAmie Street Downloader Makes Purchasing Music Easier TouchFreeze Alternative in AutoHotkey The Icy Undertow Desktop Windows Home Server – Backup to LAN The Clear & Clean Desktop Use This Bookmarklet to Easily Get Albums Use AutoHotkey to Assign a Hotkey to a Specific Window Latest Software Reviews Tinyhacker Random Tips DVDFab 6 Revo Uninstaller Pro Registry Mechanic 9 for Windows PC Tools Internet Security Suite 2010 The Ultimate Guide For YouTube Lovers Will it Blend? iPad Edition Penolo Lets You Share Sketches On Twitter Visit Woolyss.com for Old School Games, Music and Videos Add a Custom Title in IE using Spybot or Spyware Blaster When You Need to Hail a Taxi in NYC

    Read the article

  • That Escalated Quickly

    - by Jesse Taber
Originally posted on: http://geekswithblogs.net/GruffCode/archive/2014/05/17/that-escalated-quickly.aspx

I have been working remotely out of my home for over 4 years now. All of my coworkers during that time have also worked remotely. Lots of folks have written about the challenges inherent in facilitating communication on remote teams and strategies for overcoming them. A popular theme around this topic is the notion of “escalating communication”. In this context “escalating” means taking a conversation from one mode of communication to a different, higher fidelity mode of communication. Here are the five modes of communication I use at work, in order of increasing fidelity:

Email – This is the “lowest fidelity” mode of communication that I use. I usually only check it a few times a day (and I’m trying to check it even less frequently than that) and I only keep items in my inbox if they represent an item I need to take action on that I haven’t tracked anywhere else.

Forums / Message boards – Being a developer, I’ve gotten into the habit of having other people look over my code before it becomes part of the product I’m working on. These code reviews often happen in “real time” via screen sharing, but I also always have someone else give all of the changes another look using pull requests. A pull request takes my code and lets someone else see the changes I’ve made side-by-side with the existing code so they can see if I did anything dumb. Pull requests can facilitate a conversation about the code changes in an online-forum-like style. Some teams I’ve worked on also liked using tools like Trello or Google Groups to have ongoing conversations about a topic or task that was being worked on.

Chat & Instant Messaging – Chat and instant messaging are the real workhorses for communication on the remote teams I’ve been a part of. I know some teams that are co-located that also use it pretty extensively for quick messages that don’t warrant walking across the office to talk with someone but require more immediacy than an e-mail. For the purposes of this post I think it’s important to note that the terms “chat” and “instant messaging” might insinuate that the conversation is happening in real time, but that’s not always true. Modern chat and IM applications maintain a searchable history so people can easily see what might have been discussed while they were away from their computers.

Voice, Video and Screen sharing – Everyone’s got a camera and microphone on their computers now, and there are an abundance of services that will let you use them to talk to other people who have cameras and microphones on their computers. I’m including screen sharing here as well because, in my experience, these discussions typically involve one or more people showing the other participants something that’s happening on their screen. Obviously, this mode of communication is much higher-fidelity than any of the ones listed above. Scheduled meetings are typically conducted using this mode of communication.

In Person – No matter how great communication tools become, there’s no substitute for meeting with someone face-to-face. However, opportunities for this kind of communication are few and far between when you work on a remote team.

When a conversation gets escalated, that usually means it moves up one or more positions on this list. A lot of people advocate jumping to #4 sooner rather than later.
Like them, I used to believe that, if it was possible, organizing a call with voice and video was automatically better than any kind of text-based communication could be. Lately, however, I’m becoming less convinced that escalating is always the right move.

Working Asynchronously

Last year I attended a talk at our local code camp given by Drew Miller. Drew works at GitHub and was talking about how they use GitHub internally. Many of the folks at GitHub work remotely, so communication was one of the main themes in Drew’s talk. During the talk Drew used the phrase “asynchronous communication” to describe their use of chat and pull request comments. That phrase stuck in my head because I hadn’t heard it before, but I think it perfectly describes the way in which remote teams often need to communicate. You don’t always know when your co-workers are at their computers or what hours (if any) they are working that day. In order to work this way you need to assume that the person you’re talking to might not respond right away. You can’t always afford to wait until everyone required is online and available to join a voice call, so you need to use text-based, persistent forms of communication so that people can receive and respond to messages when they are available. Going back to my list from the beginning of this post for a second, I characterize items #1-3 as “asynchronous” modes of communication, while we could call items #4 and #5 “synchronous”. When communication gets escalated it’s almost always moving from an asynchronous mode of communication to a synchronous one. Now, to the point of this post: I’ve become increasingly reluctant to escalate from asynchronous to synchronous communication for two primary reasons: 1 – you can often find a higher fidelity way to convey your message without holding a synchronous conversation, and 2 – asynchronous modes of communication are (usually) persistent and searchable.

You Don’t Have to Broadcast Live

Let’s start with the first reason I’ve listed. A lot of times you feel like you need to escalate to synchronous communication because you’re having difficulty describing something that you’re seeing in words. You want to provide the people you’re conversing with some audio-visual aids to help them understand the point that you’re trying to make, and you think that getting on Skype and sharing your screen with them is the best way to do that. Firing up a screen sharing session does work well, but you can usually accomplish the same thing in an asynchronous manner. For example, you could take a screenshot and annotate it with some text and drawings to illustrate what it is you’re seeing. If a screenshot won’t work, taking a short screen recording while you narrate over it and posting the video to your forum or chat system, along with a text-based description of what’s in the recording that can be searched for later, can be a great way to effectively communicate with your team asynchronously.

I Said What?!?

Now for the second reason I listed: most asynchronous modes of communication provide a transcript of what was said and what decisions might have been made during the conversation. There have been many occasions where I’ve used the search feature of my team’s chat application to find a conversation that happened several weeks or months ago to remember what was decided.
Unfortunately, I think the benefits associated with the persistence of communicating asynchronously often get overlooked when people decide to escalate to an in-person meeting or voice/video call. I’m becoming much more reluctant to suggest a voice or video call if I suspect that it might lead to codifying some kind of design decision, because everyone involved is going to hang up the call and immediately forget what was decided. I recognize that you can record and archive these types of interactions, but without being able to search them the recordings aren’t terribly useful.

When and How To Escalate

I don’t mean to imply that communicating via voice/video or in person is never a good idea. I probably jump on a Skype call with a co-worker at least once a day to quickly hash something out or show them a bit of code that I’m working on. Also, meeting in person periodically is really important for remote teams. There’s no way around the fact that sometimes it’s easier to jump on a call and show someone my screen so they can see what I’m seeing. So when is it right to escalate? I think the simplest way to answer that is: when the communication starts to feel painful. Everyone’s tolerance for that pain is different, but I think you need to let it hurt a little bit before jumping to synchronous communication. When you do escalate from asynchronous to synchronous communication, there are a couple of things you can do to maximize the effectiveness of the communication:

Take notes – This is huge, and yet I’ve found that a lot of teams don’t do this. If you’re holding a meeting with more than two people, you should have someone taking notes. Taking notes while participating in a meeting can be difficult, but there are a few strategies to deal with this challenge that probably deserve a short post of their own. After the meeting, make sure the notes are posted to a place where all concerned parties (including those that might not have attended the meeting) can review and search them.

Persist decisions made ASAP – If any decisions were made during the meeting, persist those decisions to a searchable medium as soon as possible following the conversation. All the teams I’ve worked on used a web-based system for tracking the on-going work and a backlog of work to be done in the future. I always try to make sure that all of the cards/stories/tasks/whatever in these systems reflect the latest decisions that were made as the work was being planned and executed. If you held a quick call with your team lead and decided that it wasn’t worth the effort to build real-time validation into that new UI you were working on, go and codify that decision in the story associated with that work immediately after you hang up. Even better, write it up in the story while you are both still on the phone. That way, when the folks from your QA team pick up the story to test a few days later, they’ll know why the real-time validation isn’t there without having to invoke yet another conversation about the work.

Communicating Well is Hard

At this point you might be thinking that communicating asynchronously is more difficult than having a live conversation. You’re right: it is more difficult. In order to communicate effectively this way you need to very carefully think about the message that you’re trying to convey and craft it in a way that’s easy for your audience to understand. This is almost always harder than just talking through a problem in real time with someone; this is why escalating communication is such a popular idea.
Why wouldn’t we want to do the thing that’s easier? Easier isn’t always better. If you and your team can get in the habit of communicating effectively in an asynchronous manner you’ll find that, over time, all of your communications get less painful because you don’t need to re-iterate previously made points over and over again. If you communicate right the first time, you often don’t need to rehash old conversations because you can go back and find the decisions that were made laid out in plain language. You’ll also find that you get better at doing things like writing useful comments in your code, creating written documentation about how the feature that you just built works, or persuading your team to do things in a certain way.

    Read the article

  • Refactor This (Ugly Code)!

    - by Alois Kraus
Ayende has put some ugly code on his blog to refactor. First and foremost, it is nearly impossible to reason about other people's code without knowing the driving forces behind it. It is certainly possible to make it much cleaner when potential sources of errors cannot happen in the first place due to good design. I can see what the intention of the code is, but I do not know every brittle detail, or whether I am allowed to reorder things here and there to simplify them. So I decided to make the code much simpler by identifying the different responsibilities of the method and encapsulating them in different classes. The code we need to refactor seems to deal with a handler that runs after a message has been sent to a message queue. The handler completes the current transaction, if there is one, and handles any errors happening there. If errors occur during the completion of the transaction, the transaction is at least disposed. We can also enter the handler already in a faulty state, in which case we try to deliver the completion event anyway, signal a failure event, and try to resend the message to the queue if it was not inside a transaction. All of this is decorated with many try/catch blocks, duplicated code, and some state variables that route the program flow. It is hard to understand and difficult to reason about. In other words: this code is a mess, and it could have been written by me if I was under pressure. Here is the code we want to refactor:

    private void HandleMessageCompletion(
        Message message,
        TransactionScope tx,
        OpenedQueue messageQueue,
        Exception exception,
        Action<CurrentMessageInformation, Exception> messageCompleted,
        Action<CurrentMessageInformation> beforeTransactionCommit)
    {
        var txDisposed = false;

        if (exception == null)
        {
            try
            {
                if (tx != null)
                {
                    if (beforeTransactionCommit != null)
                        beforeTransactionCommit(currentMessageInformation);
                    tx.Complete();
                    tx.Dispose();
                    txDisposed = true;
                }
                try
                {
                    if (messageCompleted != null)
                        messageCompleted(currentMessageInformation, exception);
                }
                catch (Exception e)
                {
                    Trace.TraceError("An error occured when raising the MessageCompleted event, the error will NOT affect the message processing" + e);
                }
                return;
            }
            catch (Exception e)
            {
                Trace.TraceWarning("Failed to complete transaction, moving to error mode" + e);
                exception = e;
            }
        }

        try
        {
            if (txDisposed == false && tx != null)
            {
                Trace.TraceWarning("Disposing transaction in error mode");
                tx.Dispose();
            }
        }
        catch (Exception e)
        {
            Trace.TraceWarning("Failed to dispose of transaction in error mode." + e);
        }

        if (message == null)
            return;

        try
        {
            if (messageCompleted != null)
                messageCompleted(currentMessageInformation, exception);
        }
        catch (Exception e)
        {
            Trace.TraceError("An error occured when raising the MessageCompleted event, the error will NOT affect the message processing" + e);
        }

        try
        {
            var copy = MessageProcessingFailure;
            if (copy != null)
                copy(currentMessageInformation, exception);
        }
        catch (Exception moduleException)
        {
            Trace.TraceError("Module failed to process message failure: " + exception.Message + moduleException);
        }

        if (messageQueue.IsTransactional == false) // put the item back in the queue
        {
            messageQueue.Send(message);
        }
    }

You can see quite some processing and handling going on there. Yes, this looks like real-world code someone put together to make things work, written by an author who does not trust his callbacks. I guess these are optional event handlers, and the delegates were extracted from an event so they could be called back later when necessary. Let's see what the author of this code intended:

    private void HandleMessageCompletion(
        TransactionHandler transactionHandler,
        MessageCompletionHandler handler,
        CurrentMessageInformation messageInfo,
        ErrorCollector errors)
    {
        // commit current pending transaction
        transactionHandler.CallHandlerAndCommit(messageInfo, errors);

        // We have an error for a null message, do not send completion event
        if (messageInfo.CurrentMessage == null)
            return;

        // Send completion event in any case, regardless of errors
        handler.OnMessageCompleted(messageInfo, errors);

        // put message back if queue is not transactional
        transactionHandler.ResendMessageOnError(messageInfo.CurrentMessage, errors);
    }

I did not bother to write the intention down again, since the code should be pretty self-explanatory by now. I have used comments to explain the still nontrivial procedure step by step, revealing the real intention behind all this complex program flow. The original complexity of the problem domain does not go away, but by applying the Single Responsibility Principle (SRP) and some functional style, we can hide the necessary complexity behind useful abstractions that make the code much easier to reason about. Since most of the method seems to deal with errors, I thought it was a good idea to encapsulate the error state of the current message in an ErrorCollector object, which stores all exceptions in a list along with a description, carried in the exception itself, of what each error was about. We can log it later or not, depending on the log level or whatever. It is really just a simple list that encapsulates the current error state.
    class ErrorCollector
    {
        List<Exception> _Errors = new List<Exception>();

        public void Add(Exception ex, string description)
        {
            ex.Data["Description"] = description;
            _Errors.Add(ex);
        }

        public Exception Last
        {
            get { return _Errors.LastOrDefault(); }
        }

        public bool HasError
        {
            get { return _Errors.Count > 0; }
        }
    }

Since the error state is global, we have two choices: store a reference in the other helper objects (TransactionHandler and MessageCompletionHandler), or pass it to the method calls when necessary. I chose the latter, because a second argument does not hurt and it makes it easier to reason about the overall state, while the helper objects remain stateless and immutable, which makes them much easier to understand and, as a bonus, thread safe as well. This does not mean that the stored member variables are stateless or thread safe, but at least our helper classes are. Most of the complexity is located in the transaction handling, which I consider a separate responsibility and delegate to the TransactionHandler. It does nothing if there is no transaction; otherwise it will:

- Call the Before Commit handler
- Commit the transaction
- Dispose the transaction if the commit did throw

In fact it has a second responsibility: resending the message if the transaction failed. I saw a good fit there, since it deals with transaction failures.

    class TransactionHandler
    {
        TransactionScope _Tx;
        Action<CurrentMessageInformation> _BeforeCommit;
        OpenedQueue _MessageQueue;

        public TransactionHandler(TransactionScope tx,
                                  Action<CurrentMessageInformation> beforeCommit,
                                  OpenedQueue messageQueue)
        {
            _Tx = tx;
            _BeforeCommit = beforeCommit;
            _MessageQueue = messageQueue;
        }

        public void CallHandlerAndCommit(CurrentMessageInformation currentMessageInfo, ErrorCollector errors)
        {
            if (_Tx != null && !errors.HasError)
            {
                try
                {
                    if (_BeforeCommit != null)
                    {
                        _BeforeCommit(currentMessageInfo);
                    }

                    _Tx.Complete();
                    _Tx.Dispose();
                }
                catch (Exception ex)
                {
                    errors.Add(ex, "Failed to complete transaction, moving to error mode");
                    Trace.TraceWarning("Disposing transaction in error mode");
                    try
                    {
                        _Tx.Dispose();
                    }
                    catch (Exception ex2)
                    {
                        errors.Add(ex2, "Failed to dispose of transaction in error mode.");
                    }
                }
            }
        }

        public void ResendMessageOnError(Message message, ErrorCollector errors)
        {
            if (errors.HasError && !_MessageQueue.IsTransactional)
            {
                _MessageQueue.Send(message);
            }
        }
    }

If we need to change the handling in the future, we have a much easier time reasoning about our application flow than before. After we have completed our transaction and called our callback, we can call the completion handler, which is the main purpose of the HandleMessageCompletion method after all. The responsibility of the MessageCompletionHandler is to call the completion callback, and the failure callback when some error has occurred.

    class MessageCompletionHandler
    {
        Action<CurrentMessageInformation, Exception> _MessageCompletedHandler;
        Action<CurrentMessageInformation, Exception> _MessageProcessingFailure;

        public MessageCompletionHandler(Action<CurrentMessageInformation, Exception> messageCompletedHandler,
                                        Action<CurrentMessageInformation, Exception> messageProcessingFailure)
        {
            _MessageCompletedHandler = messageCompletedHandler;
            _MessageProcessingFailure = messageProcessingFailure;
        }

        public void OnMessageCompleted(CurrentMessageInformation currentMessageInfo, ErrorCollector errors)
        {
            try
            {
                if (_MessageCompletedHandler != null)
                {
                    _MessageCompletedHandler(currentMessageInfo, errors.Last);
                }
            }
            catch (Exception ex)
            {
                errors.Add(ex, "An error occured when raising the MessageCompleted event, the error will NOT affect the message processing");
            }

            if (errors.HasError)
            {
                SignalFailedMessage(currentMessageInfo, errors);
            }
        }

        void SignalFailedMessage(CurrentMessageInformation currentMessageInfo, ErrorCollector errors)
        {
            try
            {
                if (_MessageProcessingFailure != null)
                    _MessageProcessingFailure(currentMessageInfo, errors.Last);
            }
            catch (Exception moduleException)
            {
                errors.Add(moduleException, "Module failed to process message failure");
            }
        }
    }

If for some reason I screwed up the logic and we need to call the completion handler from our TransactionHandler, we can simply pass the MessageCompletionHandler as a third argument to the CallHandlerAndCommit method and we are fine again. If the logic becomes even more complex and we need to ensure that the completed event is triggered only once, we now have one place, the completion handler, in which to capture that state. During this refactoring I simply put things together that belong together and came up with useful abstractions.
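As a quick standalone illustration of the ErrorCollector idea, here is a minimal sketch of how collecting and inspecting errors looks in isolation. This is my own addition, not part of Ayende's original or the refactored code; the demo class and the simulated InvalidOperationException are purely hypothetical, and the snippet assumes the ErrorCollector class shown above is compiled into the same project:

    using System;

    class ErrorCollectorDemo
    {
        static void Main()
        {
            var errors = new ErrorCollector();

            try
            {
                // Simulate a failing commit.
                throw new InvalidOperationException("Commit failed");
            }
            catch (Exception ex)
            {
                // The description travels with the exception instead of
                // going straight to a trace listener.
                errors.Add(ex, "Failed to complete transaction, moving to error mode");
            }

            Console.WriteLine(errors.HasError);                 // True
            Console.WriteLine(errors.Last.Data["Description"]); // Failed to complete transaction, ...
        }
    }

The point of the design is that the caller decides later what to do with the accumulated errors (log them, rethrow, signal a failure event), instead of every method tracing ad hoc.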
If you look at the original argument list of the HandleMessageCompletion method, you can see how many things I have put together:

- Message message → CurrentMessageInformation messageInfo (encapsulates Message message)
- TransactionScope tx, Action<CurrentMessageInformation> beforeTransactionCommit, OpenedQueue messageQueue → TransactionHandler transactionHandler (encapsulates the transaction scope, the queue, and the before-commit callback)
- Exception exception → ErrorCollector errors
- Action<CurrentMessageInformation, Exception> messageCompleted → MessageCompletionHandler handler (encapsulates messageCompleted together with the Action<CurrentMessageInformation, Exception> messageProcessingFailure callback)

The reason is simple: put the things that have relationships together and you will find useful abstractions almost automatically. I hope this makes sense to you. If you see a way to make it even simpler, you can show Ayende your improved version as well.
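To tie the mapping above back to code, here is a minimal sketch of how the original argument list could be turned into the new one at the call site. This is my own illustration, not from the original post; currentMessageInformation and MessageProcessingFailure are assumed to be members of the surrounding class, exactly as in the original method:

    // Build the encapsulated arguments from the original ones.
    var errors = new ErrorCollector();
    if (exception != null)
        errors.Add(exception, "Handler was entered in a faulty state");

    var transactionHandler = new TransactionHandler(tx, beforeTransactionCommit, messageQueue);
    var completionHandler = new MessageCompletionHandler(messageCompleted, MessageProcessingFailure);

    // The message itself now travels inside CurrentMessageInformation.
    HandleMessageCompletion(transactionHandler, completionHandler, currentMessageInformation, errors);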

    Read the article

  • SQL SERVER – Database in RESTORING State for Long Time

    - by Pinal Dave
A very interesting question I received the other day: "Our database has been in the restoring state for a long time. We have already restored all the necessary files. After restoring the files we expected the database to be operational; however, it stays in restoring mode. Any suggestions?" This question is very common. I sent the user a few follow-up emails to understand what was actually going on. It turned out that, after restoring their bak file and log files, the database stayed in the restoring state because they had not restored the latest log file with the RECOVERY option. As they had already completed the whole restore sequence (bak and log files, in order), all they really needed was to recover the database out of the NORECOVERY state. You can restore log files only while the database is in NORECOVERY mode. Once the database is recovered, it is operational and normal database activity can continue, but no further logs can be restored: the log chain after the point of recovery is meaningless. This is why the database has to stay in the NORECOVERY state while it is being restored. There are three different ways to recover the database:

1) Recover the database manually with the following command:
RESTORE DATABASE database_name WITH RECOVERY

2) Recover the database while restoring the last log file:
RESTORE LOG database_name FROM backup_device WITH RECOVERY

3) Recover the database while restoring the bak file:
RESTORE DATABASE database_name FROM backup_device WITH RECOVERY

To understand how the backup/restore timeline works, read Backup Timeline and Understanding of Database Restore Process in Full Recovery Model. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
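If you prefer to issue the final recovery step from application code rather than from SSMS, a minimal sketch could look like the following. This is my illustration, not from the original article; the connection string and the database name MyDatabase are placeholders, and the command is the same RESTORE ... WITH RECOVERY statement shown above:

    using System.Data.SqlClient;

    class RecoverDatabase
    {
        static void Main()
        {
            // Connect to master, not to the database that is being recovered.
            using (var connection = new SqlConnection(
                "Server=.;Database=master;Integrated Security=true"))
            {
                connection.Open();

                // Brings the database out of the RESTORING state; after this,
                // no further log backups can be applied.
                using (var command = new SqlCommand(
                    "RESTORE DATABASE [MyDatabase] WITH RECOVERY", connection))
                {
                    command.ExecuteNonQuery();
                }
            }
        }
    }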

    Read the article

  • CodePlex Daily Summary for Monday, June 07, 2010

New Projects

- Agile Personal Development Methodology: (translated from Chinese) This is not a software development project but one focused on developing personal abilities. Through everyone's active participation and practice, the content is continuously enriched and improved on the basis of values, principles, and best practices. It mainly guides individuals on how to quickly improve their abilities in three areas: life, work, and the personal. On the work side, the initial focus is on IT professionals. Hopefully the continued refinement of this methodology will help people at different sta...
- Altairis Mail Toolkit: Altairis Mail Toolkit is a component for .NET developers making it easy to send templated and localized e-mail messages. Uses standard resource mechan...
- AmazonPriceTicker: (translated from German) AmazonPriceTicker is a small ASP.NET project for our studies.
- CC.Hearts Screen Saver: A complete screensaver that draws pretty hearts. Supports all standard screensaver functionality (preview window, options, multi-monitor). Written...
- Controle Financeiro com DDD: (translated from Portuguese) A project for studying these technologies: DDD, TDD, ASP.NET MVC.
- DebugWriterTextBox: A modified TextBox that can capture Debug.Write() output and display the log. It can also write the log data to a file - all you need to do is set the file name!
- DEIConsultingDev: DEI Consulting DEV
- Easy Augmented Reality Suite for Silverlight: Easy Augmented Reality Suite for Silverlight
- Event Broker: Simple event broker with a hierarchical implementation.
- fleet It: A WPF client for Team Space. Developed using WPF and C#. Various URL-shortening services integrated.
- Gest-Bus: Gest-Bus
- HNV Project: Projects created by Hoàng, Ngọc, Vũ
- Hotel Management System: Aims to build a complete website for a hotel - everything in the hotel in just one web application: finance, managemen...
- ISEN découverte majeure application mobile: (translated from French) Control of a PC from a mobile phone, on the Windows 7 platform
- LiveSequence: Based on the sequence diagramming control by Nauman Leghari.
- NHash: A simple project that integrates with Explorer and computes MD5 and SHA1.
- NHibernate Sidekick Library: NHibernate Sidekick is a library intended to assist in the development of multi-tiered applications using the NHibernate ORM framework.
- ScrewTurn ASP.NET Proxy Membership Provider: Plugin for the ScrewTurn Wiki system to use standard ASP.NET Membership and Role providers.
- Silverlight SNS: We are going to develop an SNS web application based on Silverlight.
- SilverPoiMap: SilverPoiMap provides interactive searching and management of Points of Interest. It is a Facebook client application which allows you to connect to...
- Simple Resource Localization Editor: Simple editor that simplifies localization and synchronization of .NET resource files (.resx).
- SimpleBlog.NET MVC: A very simple blog engine developed in ASP.NET 3.5 MVC2 and C#.
- Siverlight Project: Hope that everybody continues to develop it.
- SpaceConquest: Incorporated standard design patterns to build a peer-to-peer game in Java. The game rules were similar in complexity to games like Civilization an...
- Yasts: Yasts - Yet Another Space Trading Sim - This is a learning environment project.

New Releases

- Agile Personal Development Methodology: 敏捷个人-认识自我,管理自我.pdf: (translated from Chinese) Last year I wrote a series of blog posts about personal management, some of which were very popular. The idea for the series came from thinking, while implementing the agile Scrum method, about whether the method demands a high skill level from the people applying it. Self-organizing teams are built on agile individuals; without the individual there is no team. Implementing Scrum does not demand much of people, but implementing it well certainly does. ...
- Altairis Mail Toolkit: 1.0.0 RTM: Initial release
- Andrew's XNA Helpers: Andrew's Xna Helpers V1.1: Currently only supports x86 projects, since portions of the code have to be reworked to work with the Xbox. I do plan to code it for the 360 though...
- C#Mail: Higuchi.Mail.dll (2010.6.6 ver): Higuchi.Mail.dll (2010.4.30 ver)
- Community Forums NNTP bridge: Community Forums NNTP Bridge V31: Release of the Community Forums NNTP Bridge to access the social and answers MS forums with a single, open source NNTP bridge. This release has ad...
- Html Agility Pack: Experimental Xpath Updates: In an effort to bring Html Agility Pack's XPath support closer to the System.Xml.XPath implementation, I have updated HAP to have all nodes...
- imdb movie downloader: myImdb 0.9.2: myImdb 0.9.2 Fully changed... and added some more features.. working with XML movie list... Used BackgroundWorker, more clever results and guess m...
- InfoService: InfoService v1.6 - MPE1 or RAR Package: InfoService Release v1.6.0.136. Please read Plugin installation for installation instructions.
- ISEN découverte majeure application mobile: appli traitement d'image: (translated from French) Goal: capture and resize the image of a PC's screen.
- Jeremi Stadler: Clipboard Manager: Version 1.0.5.14. It's finally here! I have been working on this one the whole night but it's worth it ;) The program catches clipboard changes an...
- mesoBoard: mesoBoard 0.9 - Beta: mesoBoard version 0.9 beta. Released under the New BSD License. http://mesoboard.codeplex.com/license http://mesoboard.com
- MiniTwitter: 1.13: (translated from Japanese) Changes - Fixed: a bug where retrieving direct messages failed; a bug where editing the timeline crashed the application; an issue where importing a list retrieved only the first 20 people. Added: image preview on URL mouse-over.
- MiniTwitter: 1.13.1: (translated from Japanese) Changes - Fixed: a bug where choosing Cancel in the timeline add/edit dialog crashed the application.
- MSTS Editors & Tools: Simis Editor v0.4: Simis Editor v0.4. Open and Save dialogs support full filename filters from BNFs (e.g. "tsection.dat"). Added statusbar and menu help text. Adde...
- NHibernate Sidekick Library: 0.6.0: v0.6.0 - Initial Release TODO
- NLog - Advanced .NET Logging: Nightly Build 2010.06.06.001: Changes since the last build: 2010-06-05 21:50:21 Jarek Kowalski fixed doc for ${document-uri} 2010-06-05 20:21:53 Jarek Kowalski removed Layout re...
- NQ - Component-oriented Framework: NQ Core 0.90: Main changes in the 0.90 release: introduction of an additional built-in component loader to load component and service information from XML files ins...
- PhluffyFotos: v4 Windows Azure: This release has been updated to Visual Studio 2010 as well as the latest StorageClient library. Make sure you run Provision.cmd in order to b...
- RIA Services Essentials: Book Club Application (June 6, 2010): The initial release of the BookClub application based on the MIX presentation, with a few changes: 1. Some bug fixes 2. Added the ability to Like a ...
- Samurai.Workflow: 1.1 Stable Release: Removed references to WPF assemblies so non-WPF applications can use the workflow. For WPF apps, the workflow will use reflection to seek out UI thr...
- SilverPoiMap: SilverPoiMap Beta: Beta version
- Simple Resource Localization Editor: Release 1: First release.
- Siverlight Project: Auto Arrange Panel Project: Auto Arrange Panel Project
- Smart Voice: Smart Voice 0.2: Changelog: Corrected a few bugs
- Smart Voice: Smart Voice 0.2.1: Changelog: Fixed a major bug that was slowing the application. Added opt-in for usage data. In order to contribute with user data, please change t...
- Spider Compiler: Release 0.1: Contains the setup for the spider compiler. This release includes the changeset #66980.
- Spider Compiler: Release 0.2: This release includes the changeset #67003.
- The Lounge Repository: Lounge Repo Binary Release: All-in-one binary download of the Lounge Repository. Improvements: more tolerant towards schema changes; fixed a bug regarding array normalization ...
- UrzaGatherer: UrzaGatherer v2.0.1: This release integrates support for full database backup.
- Web/Cloud Applications Development Framework | Visual WebGui: 6.4 beta 3: This is the last beta version before the official release of Visual WebGui, including support for Visual Studio 2010, and it is fully featured with ...
- WinGet: Alpha 2: This is the second release of WinGet (0.5.0.2). It is alpha quality, not suitable for use in a production environment, and it has bugs (see Known Iss...
- WoW Character Viewer: WoW Viewer: Newly redesigned layout of the original WoW Character Viewer. Faster access, cleaner layout.

Most Popular Projects

Community Forums NNTP bridge; ASP.NET MVC Time Planner; MoonyDesk (windows desktop widgets); NeatUpload; OutSync; Fileupload AJAX; ViperWorks Ignition; AgUnit - Silverlight unit testing with ReSharper; SQL Nexus Tool; Smith Async .NET Memcached Client

Most Active Projects

Community Forums NNTP bridge; patterns & practices – Enterprise Library; Rawr; GMap.NET - Great Maps for Windows Forms & Presentation; StyleCop; N2 CMS; smark C# Library; Farseer Physics Engine; Ionics Isapi Rewrite Filter; NB_Store - Free DotNetNuke Ecommerce Catalog Module

    Read the article

< Previous Page | 451 452 453 454 455 456 457 458 459 460 461 462  | Next Page >