Search Results

Search found 7669 results on 307 pages for 'dealing with clients'.


  • Pass a Delphi class to a C++ function/method that expects a class with __thiscall methods.

    - by Alan G.
    I have some MSVC++ compiled DLL's for which I have created COM-like (lite) interfaces (abstract Delphi classes). Some of those classes have methods that need pointers to objects. These C++ methods are declared with the __thiscall calling convention (which I cannot change), which is just like __stdcall, except a this pointer is passed on the ECX register. I create the class instance in Delphi, then pass it on to the C++ method. I can set breakpoints in Delphi and see it hitting the exposed __stdcall methods in my Delphi class, but soon I get a STATUS_STACK_BUFFER_OVERRUN and the app has to exit. Is it possible to emulate/deal with __thiscall on the Delphi side of things? If I pass an object instantiated by the C++ system then all is good, and that object's methods are called (as would be expected), but this is useless - I need to pass Delphi objects. Edit 2010-04-19 18:12 This is what happens in more detail: The first method called (setLabel) exits with no error (though its a stub method). The second method called (init), enters then dies when it attempts to read the vol parameter. C++ Side #define SHAPES_EXPORT __declspec(dllexport) // just to show the value class SHAPES_EXPORT CBox { public: virtual ~CBox() {} virtual void init(double volume) = 0; virtual void grow(double amount) = 0; virtual void shrink(double amount) = 0; virtual void setID(int ID = 0) = 0; virtual void setLabel(const char* text) = 0; }; Delphi Side IBox = class public procedure destroyBox; virtual; stdcall; abstract; procedure init(vol: Double); virtual; stdcall; abstract; procedure grow(amount: Double); virtual; stdcall; abstract; procedure shrink(amount: Double); virtual; stdcall; abstract; procedure setID(val: Integer); virtual; stdcall; abstract; procedure setLabel(text: PChar); virtual; stdcall; abstract; end; TMyBox = class(IBox) protected FVolume: Double; FID: Integer; FLabel: String; // public constructor Create; destructor Destroy; override; // BEGIN Virtual Method implementation procedure destroyBox; override; stdcall; // empty - Dont need/want C++ to manage my Delphi objects, just call their methods procedure init(vol: Double); override; stdcall; // FVolume := vol; procedure grow(amount: Double); override; stdcall; // Inc(FVolume, amount); procedure shrink(amount: Double); override; stdcall; // Dec(FVolume, amount); procedure setID(val: Integer); override; stdcall; // FID := val; procedure setLabel(text: PChar); override; stdcall; // Stub method; empty. // END Virtual Method implementation property Volume: Double read FVolume; property ID: Integer read FID; property Label: String read FLabel; end; I would have half expected using stdcall alone to work, but something is messing up, not sure what, perhaps something to do with the ECX register being used? Help would be greatly appreciated. Edit 2010-04-19 17:42 Could it be that the ECX register needs to be preserved on entry and restored once the function exits? Is the this pointer required by C++? I'm probably just reaching at the moment based on some intense Google searches. I found something related, but it seems to be dealing with the reverse of this issue.

    Read the article

  • How can a single disk in a hardware SATA RAID-10 array bring the entire array to a screeching halt?

    - by Stu Thompson
    Prelude: I'm a code-monkey that's increasingly taken on SysAdmin duties for my small company. My code is our product, and increasingly we provide the same app as SaaS. About 18 months ago I moved our servers from a premium hosting-centric vendor to a barebones rack pusher in a tier IV data center. (Literally across the street.) This meant doing much more ourselves--things like networking, storage and monitoring. As part of the big move, to replace our leased direct attached storage from the hosting company, I built a 9TB two-node NAS based on SuperMicro chassis, 3ware RAID cards, Ubuntu 10.04, two dozen SATA disks, DRBD and . It's all lovingly documented in three blog posts: Building up & testing a new 9TB SATA RAID10 NFSv4 NAS: Part I, Part II and Part III. We also set up a Cacti monitoring system. Recently we've been adding more and more data points, like SMART values. I could not have done all this without the awesome boffins at ServerFault. It's been a fun and educational experience. My boss is happy (we saved bucket loads of $$$), our customers are happy (storage costs are down), I'm happy (fun, fun, fun). Until yesterday.
    Outage & Recovery: Some time after lunch we started getting reports of sluggish performance from our application, an on-demand streaming media CMS. About the same time our Cacti monitoring system sent a blizzard of emails. One of the more telling alerts was a graph of iostat await. Performance became so degraded that Pingdom began sending "server down" notifications. The overall load was moderate; there was no traffic spike. After logging onto the application servers, NFS clients of the NAS, I confirmed that just about everything was experiencing highly intermittent and insanely long IO wait times. And once I hopped onto the primary NAS node itself, the same delays were evident when trying to navigate the problem array's file system. Time to fail over; that went well. Within 20 minutes everything was confirmed to be back up and running perfectly.
    Post-Mortem: After any and all system failures I perform a post-mortem to determine the cause of the failure. First thing I did was ssh back into the box and start reviewing logs. It was offline, completely. Time for a trip to the data center. Hardware reset, back up and running. In /var/syslog I found this scary-looking entry:
    Nov 15 06:49:44 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_00], 6 Currently unreadable (pending) sectors
    Nov 15 06:49:44 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_07], SMART Prefailure Attribute: 1 Raw_Read_Error_Rate changed from 171 to 170
    Nov 15 06:49:45 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_10], 16 Currently unreadable (pending) sectors
    Nov 15 06:49:45 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_10], 4 Offline uncorrectable sectors
    Nov 15 06:49:45 umbilo smartd[2827]: Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error
    Nov 15 06:49:45 umbilo smartd[2827]: # 1 Short offline Completed: read failure 90% 6576 3421766910
    Nov 15 06:49:45 umbilo smartd[2827]: # 2 Short offline Completed: read failure 90% 6087 3421766910
    Nov 15 06:49:45 umbilo smartd[2827]: # 3 Short offline Completed: read failure 10% 5901 656821791
    Nov 15 06:49:45 umbilo smartd[2827]: # 4 Short offline Completed: read failure 90% 5818 651637856
    Nov 15 06:49:45 umbilo smartd[2827]:
    So I went to check the Cacti graphs for the disks in the array. Here we see that, yes, disk 7 is slipping away just like syslog says it is.
But we also see that disk 8's SMART Read Errors are fluctuating. There are no messages about disk 8 in syslog. More interesting is that the fluctuating values for disk 8 directly correlate to the high IO wait times! My interpretation is that disk 8 is experiencing an odd hardware fault that results in intermittent long operation times, and somehow this fault condition on the disk is locking up the entire array. Maybe there is a more accurate or correct description, but the net result has been that the one disk is impacting the performance of the whole array. The Question(s): How can a single disk in a hardware SATA RAID-10 array bring the entire array to a screeching halt? Am I being naïve to think that the RAID card should have dealt with this? How can I prevent a single misbehaving disk from impacting the entire array? Am I missing something?

    Read the article

  • CSC folder data access AND roaming profiles issues (Vista with Server 2003, then 2008)

    - by Alex Jones
    I'm a junior sysadmin for an IT contractor that helps small, local government agencies, like little towns and the like. One of our clients, a public library with ~ 50 staff users, was recently migrated from Server 2003 Standard to Server 2008 R2 Standard in a very short timeframe; our senior employee, the only network engineer, had suddenly put in his two weeks notice, so management pushed him to do this project before quitting. A bit hasty on management's part? Perhaps. Could we do anything about that? Nope. Do I have to fix this all by myself? Pretty much. The network is set up like this: a) 50ish staff workstations, all running Vista Business SP2. All staff use MS Outlook, which uses RPC-over-HTTPS ("Outlook Anywhere") for cached Exchange access to an offsite location. b) One new (virtualized) Server 2008 R2 Standard instance, running atop a Server 2008 R2 host via Hyper-V. The VM is the domain's DC, and also the site's one and only file server. Let's call that VM "NEWBOX". c) One old physical Server 2003 Standard server, running the same roles. Let's call it "OLDBOX". It's still on the network and accessible, but it's been demoted, and its shares have been disabled. No data has been deleted. c) Gigabit Ethernet everywhere. The organization's only has one domain, and it did not change during the migration. d) Most users were set up for a combo of redirected folders + offline files, but some older employees who had been with the organization a long time are still on roaming profiles. To sum up: the servers in question handle user accounts and files, nothing else (eg, no TS, no mail, no IIS, etc.) I have two major problems I'm hoping you can help me with: 1) Even though all domain users have had their redirected folders moved to the new server, and loggin in to their workstations and testing confirms that the Documents/Music/Whatever folders point to the new paths, it appears some users (not laptops or anything either!) had been working offline from OLDBOX for a long time, and nobody realized it. Here's the ugly implication: a bunch of their data now lives only in their CSC folders, because they can't access the share on OLDBOX and sync with it finally. How do I get this data out of those CSC folders, and onto NEWBOX? 2) What's the best way to migrate roaming profile users to non-roaming ones, without losing vital data like documents, any lingering PSTs, etc? Things I've thought about trying: For problem 1: a) Reenable the documents share on OLDBOX, force an Offline Files sync for ALL domain users, then copy OLDBOX's share's data to the equivalent share on NEWBOX. Reinitialize the Offline Files cache for every user. With this: How do I safely force a domain-wide Offline Files sync? Could I lose data by reenabling the share on OLDBOX and forcing the sync? Afterwards, how can I reinitialize the Offline Files cache for every user, without doing it manually, workstation by workstation? b) Determine which users have unsynced changes to OLDBOX (again, how?), search each user's CSC folder domain-wide via workstation admin shares, and grab the unsynched data. Reinitialize the Offline Files cache for every user. With this: How can I detect which users have unsynched changes with a script? How can I search each user's CSC folder, when the ownership and permissions set for CSC folders are so restrictive? Again, afterwards, how can I reinitialize the Offline Files cache for every user, without doing it manually, workstation by workstation? 
c) Manually visit each workstation, copy the contents of the CSC folder, and manually copy that data onto NEWBOX. Reinitialize the Offline Files cache for every user. With this: Again, how do I 'break into' the CSC folder and get to its data? As an experiment, I took one workstation's HD offsite, imaged it for safety, and then tried the following with one of our shop PCs, after attaching the drive: grant myself full control of the folder (failed), grant myself ownership of the folder (failed), run chkdsk on the whole drive to make sure nothing's messed up (all OK), try to take full control of the entire drive (failed), try to take ownership of the entire drive (failed) MS KB articles and Googling around suggests there's a utility called CSCCMD that's meant for this exact scenario...but it looks like it's available for XP, not Vista, no? Again, afterwards, how can I reinitialize the Offline Files cache for every user, without doing it manually, workstation by workstation? For problem 2: a) Figure out which users are on roaming profiles, and where their profiles 'live' on the server. Create new folders for them in the redirected folders repository, migrate existing data, and disable the roaming. With this: Finding out who's roaming isn't hard. But what's the best way to disable the roaming itself? In AD Users and Computers, or on each user's workstation? Doing it centrally on the server seems more efficient; that said, all of the KB research I've done turns up articles on how to go from local to roaming, not the other way around, so I don't have good documentation on this. In closing: we have good backups of NEWBOX and OLDBOX, but not of the workstations themselves, so anything drastic on the client side would need imaging and testing for safety. Thanks for reading along this far! Hopefully you can help me dig us out of this mess.

    Read the article

  • Java client listening to WebSphere MQ Server?

    - by user595234
    I need to write a Java client listening to WebSphere MQ Server. Message is put into a queue in the server. I developed this code, but am not sure it is correct or not. If correct, then how can I test it? This is a standalone Java project, no application server support. Which jars I should put into classpath? I have the MQ settings, where I should put into my codes? Standard JMS can skip these settings? confusing .... import javax.jms.Destination; import javax.jms.Message; import javax.jms.MessageConsumer; import javax.jms.MessageListener; import javax.jms.Queue; import javax.jms.QueueConnection; import javax.jms.QueueConnectionFactory; import javax.jms.QueueReceiver; import javax.jms.QueueSession; import javax.jms.Session; import javax.naming.Context; import javax.naming.InitialContext; import javax.naming.NamingException; public class Main { Context jndiContext = null; QueueConnectionFactory queueConnectionFactory = null; QueueConnection queueConnection = null; QueueSession queueSession = null; Queue controlQueue = null; QueueReceiver queueReceiver = null; private String queueSubject = ""; private void start() { try { queueConnection.start(); queueSession = queueConnection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE); Destination destination = queueSession.createQueue(queueSubject); MessageConsumer consumer = queueSession.createConsumer(destination); consumer.setMessageListener(new MyListener()); } catch (Exception e) { e.printStackTrace(); } } private void close() { try { queueSession.close(); queueConnection.close(); } catch (Exception e) { e.printStackTrace(); } } private void init() { try { jndiContext = new InitialContext(); queueConnectionFactory = (QueueConnectionFactory) this.jndiLookup("QueueConnectionFactory"); queueConnection = queueConnectionFactory.createQueueConnection(); queueConnection.start(); } catch (Exception e) { System.err.println("Could not create JNDI API " + "context: " + e.toString()); System.exit(1); } } private class MyListener implements MessageListener { @Override public void onMessage(Message message) { System.out.println("get message:" + message); } } private Object jndiLookup(String name) throws NamingException { Object obj = null; if (jndiContext == null) { try { jndiContext = new InitialContext(); } catch (NamingException e) { System.err.println("Could not create JNDI API " + "context: " + e.toString()); throw e; } } try { obj = jndiContext.lookup(name); } catch (NamingException e) { System.err.println("JNDI API lookup failed: " + e.toString()); throw e; } return obj; } public Main() { } public static void main(String[] args) { new Main(); } } MQ Queue setting <queue-manager> <name>AAA</name> <port>1423</port> <hostname>ddd</hostname> <clientChannel>EEE.CLIENTS.00</clientChannel> <securityClass>PKIJCExit</securityClass> <transportType>1</transportType> <targetClientMatching>1</targetClientMatching> </queue-manager> <queues> <queue-details id="queue-1"> <name>GGGG.NY.00</name> <transacted>false</transacted> <acknowledgeMode>1</acknowledgeMode> <targetClient>1</targetClient> </queue-details> </queues>
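
    A standalone client has no application server to provide JNDI, so one common alternative (shown as a rough sketch below, not tested against this particular queue manager) is to build the IBM-specific connection factory in code and then use only the standard JMS interfaces. It assumes the WebSphere MQ client JARs (typically com.ibm.mqjms.jar and com.ibm.mq.jar plus their dependencies; exact names vary by MQ version) and a JMS API JAR are on the classpath, and the host, port, queue manager, channel and queue names are copied from the settings block above. To test it, run the class and put a message on GGGG.NY.00 from any other tool; the listener should print it.

        import javax.jms.JMSException;
        import javax.jms.Message;
        import javax.jms.MessageListener;
        import javax.jms.Queue;
        import javax.jms.QueueConnection;
        import javax.jms.QueueReceiver;
        import javax.jms.QueueSession;
        import javax.jms.Session;
        import com.ibm.mq.jms.MQQueueConnectionFactory;

        public class MqListener {
            public static void main(String[] args) throws JMSException {
                // Build the factory directly from the queue-manager settings
                // instead of looking it up through JNDI.
                MQQueueConnectionFactory factory = new MQQueueConnectionFactory();
                factory.setHostName("ddd");            // <hostname>
                factory.setPort(1423);                 // <port>
                factory.setQueueManager("AAA");        // <name>
                factory.setChannel("EEE.CLIENTS.00");  // <clientChannel>
                factory.setTransportType(1);           // 1 = TCP/IP client, matching <transportType>

                QueueConnection connection = factory.createQueueConnection();
                QueueSession session = connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
                Queue queue = session.createQueue("GGGG.NY.00"); // <queue-details><name>
                QueueReceiver receiver = session.createReceiver(queue);
                receiver.setMessageListener(new MessageListener() {
                    public void onMessage(Message message) {
                        System.out.println("got message: " + message);
                    }
                });
                connection.start(); // start delivery only after the listener is registered
                // A real client would now block on a latch or loop to keep the JVM alive.
            }
        }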

    Read the article

  • Help with OpenVPN setup on Windows Server 2003

    - by Bill Johnson
    Hi all, Just wondering if someone can assist me further with the set-up of OpenVPN on my Windows Server 2003. I have configured Win Server as per the following guide: http://tinyurl.com/kxusv and I'm now at the stage of Creating the config files. I have a few questions that I need some assistance with. My server IP is 192.168.1.10 and my routers IP address is 192.168.1.1 (the router is a Netgear DGN2000). I have edited the server.ovpn file as per the following: push "dhcp-option DNS X.X.X.X" # Replace the Xs with the IP address of the DNS for your home network (usually your ISP's DNS) push "dhcp-option DNS X.X.X.X" # A second DNS server if you have one to include my ISP DNS and I have not edited anything else. Now my issue is with the client1.opvpn file as per the below: client dev tap #dev-node MyTAP #If you renamed your TAP interface or have more than one TAP interface then remove the # at the beginning and change "MyTAP" to its name proto udp remote YOURHOST.dyndns.org 1194 #You will need to enter you dyndns account or static IP address here. The number following it is the port you set in the server's config route 192.168.1.0 255.255.255.0 vpn_gateway 3 #This it the IP address scheme and subnet of your normal network your server is on. Your router would usually be 192.168.1.1 resolv-retry infinite nobind persist-key persist-tun ca "C:\\Program Files\\OpenVPN\\easy-rsa\\keys\\ca.crt" cert "C:\\Program Files\\OpenVPN\\easy-rsa\\keys\\client1.crt" # Change the next two lines to match the files in the keys directory. This should be be different for each client. key "C:\\Program Files\\OpenVPN\\easy-rsa\\keys\\client1.key" # This file should be kept secret ns-cert-type server cipher BF-CBC # Blowfish (default) encrytion comp-lzo verb 1 To me it looks like I will need to amend the following: remote YOURHOST.dyndns.org 1194 #You will need to enter you dyndns account or static IP address here. The number following it is the port you set in the server's config route 192.168.1.0 255.255.255.0 vpn_gateway 3 #This it the IP address scheme and subnet of your normal network your server is on. Your router would usually be 192.168.1.1 So, should the first line be the static IP of the machine that I'm applying this to? The IP address of the server (192.168.1.10) or something else? I'm also stuck on the second part 'route 192.168.1.0 255.255.255.0 vpn_gateway 3' Should this be the router IP which is 192.168.1.1 and the subnet is 255.255.255.0 and that is all I need to alter? The final part that I'm stuggling with is Configuring the router. Basically I have a Netgear DGN2000 and as it mentions that the router should be configured to port forward port 1194 to the server’s IP address of 192.168.1.150 all I have been able to do is in 'Firewall Rules' and on 'Inbound Services', set the Service to 'Any(ALL) and Send to LAN Server point to 1923.168.1.150. I'm not sure if this is correct? It is the following stage of the help guide that I'm struggling with and really need some help with: You need to make sure the port you configured OpenVPN to listen on is forwarded on the router to the IP address of your server. On the WRT54G, port forwarding is configured in the “Applications & Gaming” section. Enter 1194 for the port, UDP for the protocol, and 192.168.1.150 for the IP address. Make sure the entry is enabled and then save the setting. Next, you need to add an entry to the router’s Routing Table. This will enable the router to properly route requests from the clients to the TAP interface of the server. 
On the WRT54G you would go to the “Setup” page and then the “Advanced Routing” section. Enter the following info to make the entry: Route Name: openVPN, Destination LAN IP: 192.168.10.0, Subnet Mask: 255.255.255.252, Default Gateway: 192.168.1.150, Interface: LAN & Wireless. Once the info has been typed in, make sure you save the setting. Can anyone possibly guide me through setting this part up with my Netgear router? I see that once I have these 2 parts complete I'm there, so I would really appreciate someone walking me through what is required to complete this. Much appreciated.

    Read the article

  • Self-relation messes up contents in fetching

    - by holographix
    Hi folks, I'm dealing with an annoying problem in core data I've got a table named Character, which is made as follows I'm filling the table in various steps: 1) fill the attributes of the table 2) fill the Character Relation (charRel) FYI charRel is defined as follows I'm feeding the contents by pulling the data from an xml, the feeding code is this curStr = [[NSMutableString stringWithString:[curStr stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceAndNewlineCharacterSet]]] retain]; NSLog(@"Parsing relation within these keys %@, in order to get'em associated",curStr); NSArray *chunks = [curStr componentsSeparatedByString: @","]; for( NSString *relId in chunks ) { NSLog(@"Associating %@ with id %@",[currentCharacter valueForKey:@"character_id"], relId); NSFetchRequest *request = [[NSFetchRequest alloc] init]; NSPredicate *predicate = [NSPredicate predicateWithFormat:@"character_id == %@", relId]; [request setEntity:[NSEntityDescription entityForName:@"Character" inManagedObjectContext:[self managedObjectContext] ]]; [request setPredicate:predicate]; NSerror *error = nil; NSArray *results = [[self managedObjectContext] executeFetchRequest:request error:&error]; // error handling code if(error != nil) { NSLog(@"[SYMBOL CORRELATION]: retrieving correlated symbol error: %@", [error localizedDescription]); } else if([results count] > 0) { Character *relatedChar = [results objectAtIndex:0]; // grab the first result in the stack, could be done better! [currentCharacter addCharRelObject:relatedChar]; //VICE VERSA RELATIONS NSArray *charRels = [relatedChar valueForKey:@"charRel"]; BOOL alreadyRelated = NO; for(Character *charRel in charRels) { if([[charRel valueForKey:@"character_id"] isEqual:[currentCharacter valueForKey:@"character_id"]]) { alreadyRelated = YES; break; } } if(!alreadyRelated) { NSLog(@"\n\t\trelating %@ with %@", [relatedChar valueForKey:@"character_id"], [currentCharacter valueForKey:@"character_id"]); [relatedChar addCharRelObject:currentCharacter]; } } else { NSLog(@"[SYMBOL CORRELATION]: related symbol was not found! ##SKIPPING-->"); } [request release]; } NSLog(@"\t\t### TOTAL OF REALTIONS FOR ID %@: %d\n%@", [currentCharacter valueForKey:@"character_id"], [[currentCharacter valueForKey:@"charRel"] count], currentCharacter); error = nil; /* SAVE THE CONTEXT */ if (![managedObjectContext save:&error]) { NSLog(@"Whoops, couldn't save the symbol record: %@", [error localizedDescription]); NSArray* detailedErrors = [[error userInfo] objectForKey:NSDetailedErrorsKey]; if(detailedErrors != nil && [detailedErrors count] > 0) { for(NSError* detailedError in detailedErrors) { NSLog(@"\n################\t\tDetailedError: %@\n################", [detailedError userInfo]); } } else { NSLog(@" %@", [error userInfo]); } } at this point when I print out the values of the currentCharacter, everything looks perfect. every relation is in its place. 
in example in this log we can clearly see that this element has got 3 items in charRel: <Character: 0x5593af0> (entity: Character; id: 0x55938c0 <x-coredata://67288D50-D349-4B19-B7CB-F7AC4671AD61/Character/p86> ; data: { catRel = "<relationship fault: 0x9a29db0 'catRel'>"; charRel = ( "0x9a1f870 <x-coredata://67288D50-D349-4B19-B7CB-F7AC4671AD61/Character/p74>", "0x9a14bd0 <x-coredata://67288D50-D349-4B19-B7CB-F7AC4671AD61/Character/p109>", "0x558ba00 <x-coredata://67288D50-D349-4B19-B7CB-F7AC4671AD61/Character/p5>" ); "character_id" = 254; examplesRel = "<relationship fault: 0x9a29df0 'examplesRel'>"; meaning = "\n Left"; pinyin = "\n zu\U01d2"; "pronunciation_it" = "\n zu\U01d2"; strokenumber = 5; text = "\n \n <p>The most ancient form of this symbol"; unicodevalue = "\n \U5de6"; }) then when I'm in need of retrieving this item I perform an extraction, like this: // at first I get the single Character record NSFetchRequest *request = [[NSFetchRequest alloc] init]; NSError *error; NSPredicate *predicate = [NSPredicate predicateWithFormat:@"character_id == %@", self.char_id ]; [request setEntity:[NSEntityDescription entityForName:@"Character" inManagedObjectContext:_context ]]; [request setPredicate:predicate]; NSArray *fetchedObjs = [_context executeFetchRequest:request error:&error]; when, for instance, I print out in NSLog the contents of charRel NSArray *correlations = [singleCharacter valueForKey:@"charRel"]; NSLog(@"CHARACTER OBJECT \n%@", correlations); I get this Relationship fault for (<NSRelationshipDescription: 0x5568520>), name charRel, isOptional 1, isTransient 0, entity Character, renamingIdentifier charRel, validation predicates (), warnings (), versionHashModifier (null), destination entity Character, inverseRelationship (null), minCount 1, maxCount 99 on 0x6937f00 hope that I made myself clear. this thing is driving me insane, I've googled all over world, but I couldn't find a solution (and this make me think to as issue related to bad coding somehow :P). thank you in advance guys. k

    Read the article

  • Center footer fixed at the bottom IE

    - by Mirko
    I am coding a web interface for a University project and I have been dealing with this issue: I want my footer fixed at the bottom so it is in place no matter which screen I am using or if I toggle the full screen mode It works in all the other browsers except IE7 (I do not have to support previous versions) HTML <div id="menu"> <a href="information.html" rel="shadowbox;height=500;width=650" title="INFORMATION" > <img src="images/info.png" alt="information icon" /> </a> <a href="images/bricks_of_destiny.jpg" rel="shadowbox[gallery]" title="IMAGES" > <img src="images/image.png" alt="image icon" /> </a> <a href="music_player.swf" title="MUSIC" rel="shadowbox;height=400;width=600" > <img src="images/music.png" alt="music icon" /> </a> <a href="#" title="MOVIES"><img src="images/television.png" alt="movies icon" /></a> <a href="quotes.html" title="QUOTES" rel="shadowbox;height=300;width=650" > <img src="images/male_user.png" alt="male user icon" /> </a> <a href="#" title="REFERENCES"> <img src="images/search_globe.png" alt="search globe icon" /> </a> </div> <a href="images/destiny_1.jpg" rel="shadowbox[gallery]" title="IMAGES"></a> <a href="images/destiny_carma_jewell.jpg" rel="shadowbox[gallery]" title="IMAGES"></a> <a href="images/destiny-joan-marie.jpg" rel="shadowbox[gallery]" title="IMAGES"></a> <a href="images/pursuing_destiny.jpg" rel="shadowbox[gallery]" title="IMAGES"></a> <div class="clear"></div> <div id="destiny"> Discover more about the word <span class="strong">DESTINY </span>! Click one of the icon above! (F11 Toggle Full / Standard screen) </div> <div id="footer"> <ul id="breadcrumbs"> <li>Disclaimer</li> <li> | Icons by: <a href="http://dryicons.com/" rel="shadowbox">dryicons.com</a></li> <li> | Website by: <a href="http://www.eezzyweb.com/" rel="shadowbox">eezzyweb</a></li> <li> | <a href="http://jquery.com/" rel="shadowbox">jQuery</a></li> </ul> </div> </div> CSS: #wrapper{ text-align:center; margin:0 auto; width:750px; height:430px; border:1px solid #fff; } #menu{ position:relative; margin:0 auto; top:350px; width:450px; height:60px; } #destiny{ position:relative; top:380px; color:#FFF; font-size:1.5em; font-weight:bold; border:1px solid #fff; } #breadcrumbs{ list-style:none; } #breadcrumbs li{ display:inline; color:#FFF; } #footer{ position:absolute; width:750px; height:60px; margin:0 auto; text-align:center; border:1px solid #fff; bottom:0; } .clear{ clear:both; } The white borders are there only for debugging purposes The application is hosted at http://www.eezzyweb.com/destiny/ Any suggestion is appreciated

    Read the article

  • Put/Post json not working with ODataController if Model has Int64

    - by daryl
    I have this Data Object with an Int64 column: [TableAttribute(Name="dbo.vw_RelationLineOfBusiness")] [DataServiceKey("ProviderRelationLobId")] public partial class RelationLineOfBusiness { #region Column Mappings private System.Guid _Lineofbusiness; private System.String _ContractNumber; private System.Nullable<System.Int32> _ProviderType; private System.String _InsuredProviderType; private System.Guid _ProviderRelationLobId; private System.String _LineOfBusinessDesc; private System.String _CultureCode; private System.String _ContractDesc; private System.Nullable<System.Guid> _ProviderRelationKey; private System.String _ProviderRelationNbr; **private System.Int64 _AssignedNbr;** When I post/Put object through my OData controller using HttpClient and NewtsonSoft: partial class RelationLineOfBusinessController : ODataController { public HttpResponseMessage PutRelationLineOfBusiness([FromODataUri] System.Guid key, Invidasys.VidaPro.Model.RelationLineOfBusiness entity) the entity object is null and the error in my modelstate : "Cannot convert a primitive value to the expected type 'Edm.Int64'. See the inner exception for more details." I noticed when I do a get on my object using the below URL: Invidasys.Rest.Service/VidaPro/RelationLineOfBusiness(guid'c6824edc-23b4-4f76-a777-108d482c0fee') my json looks like the following - I noticed that the AssignedNbr is treated as a string. { "odata.metadata":"Invidasys.Rest.Service/VIDAPro/$metadata#RelationLineOfBusiness/@Element", "Lineofbusiness":"ba129c95-c5bb-4e40-993e-c28ca86fffe4","ContractNumber":null,"ProviderType":null, "InsuredProviderType":"PCP","ProviderRelationLobId":"c6824edc-23b4-4f76-a777-108d482c0fee", "LineOfBusinessDesc":"MEDICAID","CultureCode":"en-US","ContractDesc":null, "ProviderRelationKey":"a2d3b61f-3d76-46f4-9887-f2b0c8966914","ProviderRelationNbr":"4565454645", "AssignedNbr":"1000000045","Ispar":true,"ProviderTypeDesc":null,"InsuredProviderTypeDesc":"Primary Care Physician", "StartDate":"2012-01-01T00:00:00","EndDate":"2014-01-01T00:00:00","Created":"2014-06-13T10:59:33.567", "CreatedBy":"Michael","Updated":"2014-06-13T10:59:33.567","UpdatedBy":"Michael" } When I do a PUT with httpclient the JSON is showing up in my restful services as the following and the json for the AssignedNbr column is not in quotes which results in the restful services failing to build the JSON back to an object. I played with the JSON and put the AssignedNbr in quotes and the request goes through correctly. {"AssignedNbr":1000000045,"ContractDesc":null,"ContractNumber":null,"Created":"/Date(1402682373567-0700)/", "CreatedBy":"Michael","CultureCode":"en-US","EndDate":"/Date(1388559600000-0700)/","InsuredProviderType":"PCP", "InsuredProviderTypeDesc":"Primary Care Physician","Ispar":true,"LineOfBusinessDesc":"MEDICAID", "Lineofbusiness":"ba129c95-c5bb-4e40-993e-c28ca86fffe4","ProviderRelationKey":"a2d3b61f-3d76-46f4-9887-f2b0c8966914", "ProviderRelationLobId":"c6824edc-23b4-4f76-a777-108d482c0fee","ProviderRelationNbr":"4565454645","ProviderType":null, "ProviderTypeDesc":null,"StartDate":"/Date(1325401200000-0700)/","Updated":"/Date(1408374995760-0700)/","UpdatedBy":"ED"} The reason we wanted to expose our business model as restful services was to hide any data validation and expose all our databases in format that is easy to develop against. I looked at the DataServiceContext to see if it would work and it does but it uses XML to communicate between the restful services and the client. 
Which would work, but DataServiceContext does not give the level of messaging that HttpRequestMessage/HttpResponseMessage gives me for informing users about errors/missing information in their post. We are planning on supporting multiple devices from our restful services platform, but that requires that I can use Newtonsoft Json as well as Microsoft's DataContractJsonSerializer if need be. My question, from a restful service standpoint: is there a way I can configure/code the restful services to take in the AssignedNbr in JSON without the quotes? Or, from a JSON standpoint, is there a way I can get the JSON built without getting into the serializing business? Nor do I want our clients to have to deal with custom serializers if they want to write their own apps against our restful services. Any suggestions? Thanks.

    Read the article

  • Resumable upload from Java client to Grails web application?

    - by dersteps
    After almost 2 workdays of Googling and trying several different possibilities I found throughout the web, I'm asking this question here, hoping that I might finally get an answer. First of all, here's what I want to do: I'm developing a client and a server application with the purpose of exchanging a lot of large files between multiple clients on a single server. The client is developed in pure Java (JDK 1.6), while the web application is done in Grails (2.0.0). As the purpose of the client is to allow users to exchange a lot of large files (usually about 2GB each), I have to implement it in a way, so that the uploads are resumable, i.e. the users are able to stop and resume uploads at any time. Here's what I did so far: I actually managed to do what I wanted to do and stream large files to the server while still being able to pause and resume uploads using raw sockets. I would send a regular request to the server (using Apache's HttpClient library) to get the server to send me a port that was free for me to use, then open a ServerSocket on the server and connect to that particular socket from the client. Here's the problem with that: Actually, there are at least two problems with that: I open those ports myself, so I have to manage open and used ports myself. This is quite error-prone. I actually circumvent Grails' ability to manage a huge amount of (concurrent) connections. Finally, here's what I'm supposed to do now and the problem: As the problems I mentioned above are unacceptable, I am now supposed to use Java's URLConnection/HttpURLConnection classes, while still sticking to Grails. Connecting to the server and sending simple requests is no problem at all, everything worked fine. The problems started when I tried to use the streams (the connection's OutputStream in the client and the request's InputStream in the server). Opening the client's OutputStream and writing data to it is as easy as it gets. But reading from the request's InputStream seems impossible to me, as that stream is always empty, as it seems. Example Code Here's an example of the server side (Groovy controller): def test() { InputStream inStream = request.inputStream if(inStream != null) { int read = 0; byte[] buffer = new byte[4096]; long total = 0; println "Start reading" while((read = inStream.read(buffer)) != -1) { println "Read " + read + " bytes from input stream buffer" //<-- this is NEVER called } println "Reading finished" println "Read a total of " + total + " bytes" // <-- 'total' will always be 0 (zero) } else { println "Input Stream is null" // <-- This is NEVER called } } This is what I did on the client side (Java class): public void connect() { final URL url = new URL("myserveraddress"); final byte[] message = "someMessage".getBytes(); // Any byte[] - will be a file one day HttpURLConnection connection = url.openConnection(); connection.setRequestMethod("GET"); // other methods - same result // Write message DataOutputStream out = new DataOutputStream(connection.getOutputStream()); out.writeBytes(message); out.flush(); out.close(); // Actually connect connection.connect(); // is this placed correctly? 
// Get response BufferedReader in = new BufferedReader(new InputStreamReader(connection.getInputStream())); String line = null; while((line = in.readLine()) != null) { System.out.println(line); // Prints the whole server response as expected } in.close(); } As I mentioned, the problem is that request.inputStream always yields an empty InputStream, so I am never able to read anything from it (of course). But as that is exactly what I'm trying to do (so I can stream the file to be uploaded to the server, read from the InputStream and save it to a file), this is rather disappointing. I tried different HTTP methods, different data payloads, and also rearranged the code over and over again, but did not seem to be able to solve the problem. What I hope to find I hope to find a solution to my problem, of course. Anything is highly appreciated: hints, code snippets, library suggestions and so on. Maybe I'm even having it all wrong and need to go in a totally different direction. So, how can I implement resumable file uploads for rather large (binary) files from a Java client to a Grails web application without manually opening ports on the server side?
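
    For what it's worth, the client snippet above never calls setDoOutput(true) and sends a GET, and on the server side the body can already have been consumed as form parameters before the controller reads request.inputStream if the content type looks like a form post. The sketch below is an illustration only, with a hypothetical Content-Range header that the Grails action would have to interpret; it shows the client side of one resumable approach: send each chunk as a streamed POST with an octet-stream content type and tell the server where the chunk belongs.

        import java.io.OutputStream;
        import java.io.RandomAccessFile;
        import java.net.HttpURLConnection;
        import java.net.URL;

        public class ChunkUploader {
            /** Uploads one chunk of a file, starting at offset, and returns the bytes sent. */
            public static long uploadChunk(String endpoint, RandomAccessFile file,
                                           long offset, int chunkSize) throws Exception {
                long total = file.length();
                byte[] buffer = new byte[(int) Math.min(chunkSize, total - offset)];
                file.seek(offset);
                file.readFully(buffer);

                HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
                conn.setRequestMethod("POST");          // or PUT; GET bodies are often discarded
                conn.setDoOutput(true);                 // without this, getOutputStream() fails
                conn.setFixedLengthStreamingMode(buffer.length); // stream, don't buffer in memory
                conn.setRequestProperty("Content-Type", "application/octet-stream");
                // Hypothetical resume header the server reads to know where this chunk goes.
                conn.setRequestProperty("Content-Range",
                        "bytes " + offset + "-" + (offset + buffer.length - 1) + "/" + total);

                OutputStream out = conn.getOutputStream();
                out.write(buffer);
                out.close();

                int status = conn.getResponseCode();    // forces the request to be sent
                if (status != HttpURLConnection.HTTP_OK) {
                    throw new IllegalStateException("Upload failed with HTTP " + status);
                }
                return buffer.length;
            }
        }

    On the Grails side the action would read request.inputStream and append the bytes at the offset given by the header; because the content type is application/octet-stream, nothing should parse the body into params before the controller sees it.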

    Read the article

  • OpenVPN IPv6 over IPv4 tunnel

    - by user66779
    Today I installed OpenVPN 2.3rc2 on both my windows 7 client machine and centos 6 server. This new version of OpenVPN provides full compatibility for IPv6. The Problem: I am currently able to connect to the server (through the IPv4 tunnel) and ping the IPv6 address which is assigned to my client and I can also ping the tun0 interface on the server. However, I cannot browse to any IPv6 websites. My vps provider has given me this: 2607:f840:0044:0022:0000:0000:0000:0000/64 is routed to this server (2607:f840:0:3f:0:0:0:eda). This is ifconfig after setup with OpenVPN running: eth0 Link encap:Ethernet HWaddr 00:16:3E:12:77:54 inet addr:208.111.39.160 Bcast:208.111.39.255 Mask:255.255.255.0 inet6 addr: 2607:f740:0:3f::eda/64 Scope:Global inet6 addr: fe80::216:3eff:fe12:7754/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:2317253 errors:0 dropped:7263 overruns:0 frame:0 TX packets:1977414 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:1696120096 (1.5 GiB) TX bytes:1735352992 (1.6 GiB) Interrupt:29 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 b) TX bytes:0 (0.0 b) tun0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet addr:10.8.0.1 P-t-P:10.8.0.2 Mask:255.255.255.255 inet6 addr: 2607:f740:44:22::1/64 Scope:Global UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1500 Metric:1 RX packets:739567 errors:0 dropped:0 overruns:0 frame:0 TX packets:1218240 errors:0 dropped:1542 overruns:0 carrier:0 collisions:0 txqueuelen:100 RX bytes:46512557 (44.3 MiB) TX bytes:1559930874 (1.4 GiB) So OpenVPN is sucessfully creating a tun0 interface and assigning clients IPv6 addresses using 2607:f840:44:22::/64. The first client to connect is getting 2607:f840:44:22::1000 and the second 2607:f840:44:22::1001, and so on... plus 1 each time. After connecting as the first client, I can ping from my windows client machine 2607:f740:44:22::1 and 2607:f740:44:22::1000. However, I have no access to IPv6 websites. I believe the problem is that the tun0 IPv6 addressees are not being forwarded to the eth0 interface. 
This is the firewall running on the server: #!/bin/sh # # iptables configuration script # # Flush all current rules from iptables # iptables -F iptables -t nat -F # # Allow SSH connections on tcp port 22 # iptables -A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT iptables -A OUTPUT -o eth0 -p tcp --sport 22 -j ACCEPT # # Set access for localhost # iptables -A INPUT -i lo -j ACCEPT # # Accept connections on 1195 for vpn access from client # iptables -A INPUT -i eth0 -p udp --dport 1195 -m state --state NEW,ESTABLISHED -j ACCEPT iptables -A OUTPUT -o eth0 -p udp --sport 1195 -m state --state ESTABLISHED -j ACCEPT # # Apply forwarding for OpenVPN Tunneling # iptables -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT iptables -A FORWARD -s 10.8.0.0/24 -j ACCEPT iptables -t nat -A POSTROUTING -o eth0 -j SNAT --to 209.111.39.160 iptables -A FORWARD -j REJECT # # Enable forwarding # echo 1 > /proc/sys/net/ipv4/ip_forward # # Set default policies for INPUT, FORWARD and OUTPUT chains # iptables -P INPUT ACCEPT iptables -P FORWARD ACCEPT iptables -P OUTPUT ACCEPT # # IPv6 # IP6TABLES=/sbin/ip6tables $IP6TABLES -F INPUT $IP6TABLES -F FORWARD $IP6TABLES -F OUTPUT echo -n "1" >/proc/sys/net/ipv6/conf/all/forwarding echo -n "1" >/proc/sys/net/ipv6/conf/all/proxy_ndp echo -n "0" >/proc/sys/net/ipv6/conf/all/autoconf echo -n "0" >/proc/sys/net/ipv6/conf/all/accept_ra $IP6TABLES -A INPUT -i eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT $IP6TABLES -A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT $IP6TABLES -A INPUT -i eth0 -p icmpv6 -j ACCEPT $IP6TABLES -P INPUT ACCEPT $IP6TABLES -P FORWARD ACCEPT $IP6TABLES -P OUTPUT ACCEPT Server.conf: server-ipv6 2607:f840:44:22::/64 server 10.8.0.0 255.255.255.0 port 1195 proto udp dev tun ca ca.crt cert server.crt key server.key dh dh2048.pem ifconfig-pool-persist ipp.txt push "redirect-gateway def1 bypass-dhcp" push "dhcp-option DNS 208.67.222.222" push "dhcp-option DNS 208.67.220.220" keepalive 10 60 tls-auth ta.key 0 cipher AES-256-CBC comp-lzo user nobody group nobody persist-key persist-tun status openvpn-status.log log-append openvpn.log verb 5 Client.conf: client dev tun nobind keepalive 10 60 hand-window 15 remote 209.111.39.160 1195 udp persist-key persist-tun ca ca.crt key client1.key cert client1.crt remote-cert-tls server tls-auth ta.key 1 comp-lzo verb 3 cipher AES-256-CBC I'm not sure where I am going wrong, it could be the firewall, or something missing from server or client.conf. This version of OpenVPN was only released yesterday, and there's little info on the internet about how to setup an IPv6 over IPv4 vpn tunnel. I've read the manual for this new version of OpenVPN (parts pertaining to IPv6) and it provides very little info too. Thanks for any help.

    Read the article

  • How to manage maintenance/bug-fix branches in Subversion when third-party installers are involved?

    - by Mike Spross
    We have a suite of related products written in VB6, with some C# and VB.NET projects, and all the source is kept in a single Subversion repository. We haven't been using branches in Subversion (although we do tag releases now), and simply do all development in trunk, creating new releases when the trunk is stable enough. This causes no end of grief when we release a new version, issues are found with it, and we have already begun working on new features or major changes to the trunk. In the past, we would address this in one of two ways, depending on the severity of the issues and how stable we thought the trunk was: Hurry to stabilize the trunk, fix the issues, and then release a maintenance update based on the HEAD revision, but this had the side effect of releases that fixed the bugs but introduced new issues because of half-finished features or bugfixes that were in trunk. Make customers wait until the next official release, which is usually a few months. We want to change our policies to better deal with this situation. I was considering creating a "maintenance branch" in Subversion whenever I tag an official release. Then, new development would continue in trunk, and I can periodically merge specific fixes from trunk into the maintenance branch, and create a maintenance release when enough fixes are accumulated, while we continue to work on the next major update in parallel. I know we could also have a more stable trunk and create a branch for new updates instead, but keeping current development in trunk seems simpler to me. The major problem is that while we can easily branch the source code from a release tag and recompile it to get the binaries for that release, I'm not sure how to handle the setup and installer projects. We use QSetup to create all of our setup programs, and right now when we need to modify a setup project, we just edit the project file in-place (all the setup projects and any dependencies that we don't compile ourselves are stored on a separate server, and we make sure to always compile the setup projects on that machine only). However, since we may add or remove files to the setup as our code changes, there is no guarantee that today's setup projects will work with yesterday's source code. I was going to put all the QSetup projects in Subversion to deal with this, but I see some problems with this approach. I want the creation of setup programs to be as automated as possible, and at the very least, I want a separate build machine where I can build the release that I want (grabbing the code from Subversion first), grab the setup project for that release from Subversion, recompile the setup, and then copy the setup to another place on the network for QA testing and eventual release to customers. However, when someone needs to change a setup project (to add a new dependency that trunk now requires or to make other changes), there is a problem. If they treat it like a source file and check it out on their own machine to edit it, they won't be able to add files to the project unless they first copy the files they need to add to the build machine (so they are available to other developers), then copy all the other dependencies from the build machine to their machine, making sure to match the folder structure exactly. The issue here is that QSetup uses absolute paths for any files added to a setup project. 
However, this means installing a bunch of setup dependencies onto development machines, which seems messy (and which could destabilize the development environment if someone accidentally runs the setup project on their machine). Also, how do we manage third-party dependencies? For example, if the current maintenance branch used MSXML 3.0 and the trunk now requires MSXML 4.0, we can't go back and create a maintenance release if we have already replaced the MSXML library on the build machine with the latest version (assuming both versions have the same filename). The only solutions I can think of are to either put all the third-party dependencies in Subversion along with the source code, or to make sure we put different library versions in separate folders (e.g. C:\Setup\Dependencies\MSXML\v3.0 and C:\Setup\Dependencies\MSXML\v4.0). Is one way "better" or more common than the other? Are there any best practices for dealing with this situation? Basically, if we release v2.0 of our software, we want to be able to release v2.0.1, v2.0.2, and v2.0.3 while we work on v2.1, but the whole setup/installation project and setup dependency issue is making this more complicated than the typical "just create a branch in Subversion and recompile as needed" answer.

    Read the article

  • non blocking client server chat application in java using nio

    - by Amith
    I built a simple chat application using nio channels. I am very much new to networking as well as threads. This application is for communicating with server (Server / Client chat application). My problem is that multiple clients are not supported by the server. How do I solve this problem? What's the bug in my code? public class Clientcore extends Thread { SelectionKey selkey=null; Selector sckt_manager=null; public void coreClient() { System.out.println("please enter the text"); BufferedReader stdin=new BufferedReader(new InputStreamReader(System.in)); SocketChannel sc = null; try { sc = SocketChannel.open(); sc.configureBlocking(false); sc.connect(new InetSocketAddress(8888)); int i=0; while (!sc.finishConnect()) { } for(int ii=0;ii>-22;ii++) { System.out.println("Enter the text"); String HELLO_REQUEST =stdin.readLine().toString(); if(HELLO_REQUEST.equalsIgnoreCase("end")) { break; } System.out.println("Sending a request to HelloServer"); ByteBuffer buffer = ByteBuffer.wrap(HELLO_REQUEST.getBytes()); sc.write(buffer); } } catch (IOException e) { e.printStackTrace(); } finally { if (sc != null) { try { sc.close(); } catch (Exception e) { e.printStackTrace(); } } } } public void run() { try { coreClient(); } catch(Exception ej) { ej.printStackTrace(); }}} public class ServerCore extends Thread { SelectionKey selkey=null; Selector sckt_manager=null; public void run() { try { coreServer(); } catch(Exception ej) { ej.printStackTrace(); } } private void coreServer() { try { ServerSocketChannel ssc = ServerSocketChannel.open(); try { ssc.socket().bind(new InetSocketAddress(8888)); while (true) { sckt_manager=SelectorProvider.provider().openSelector(); ssc.configureBlocking(false); SocketChannel sc = ssc.accept(); register_server(ssc,SelectionKey.OP_ACCEPT); if (sc == null) { } else { System.out.println("Received an incoming connection from " + sc.socket().getRemoteSocketAddress()); printRequest(sc); System.err.println("testing 1"); String HELLO_REPLY = "Sample Display"; ByteBuffer buffer = ByteBuffer.wrap(HELLO_REPLY.getBytes()); System.err.println("testing 2"); sc.write(buffer); System.err.println("testing 3"); sc.close(); }}} catch (IOException e) { e.printStackTrace(); } finally { if (ssc != null) { try { ssc.close(); } catch (IOException e) { e.printStackTrace(); } } } } catch(Exception E) { System.out.println("Ex in servCORE "+E); } } private static void printRequest(SocketChannel sc) throws IOException { ReadableByteChannel rbc = Channels.newChannel(sc.socket().getInputStream()); WritableByteChannel wbc = Channels.newChannel(System.out); ByteBuffer b = ByteBuffer.allocate(1024); // read 1024 bytes while (rbc.read(b) != -1) { b.flip(); while (b.hasRemaining()) { wbc.write(b); System.out.println(); } b.clear(); } } public void register_server(ServerSocketChannel ssc,int selectionkey_ops)throws Exception { ssc.register(sckt_manager,selectionkey_ops); }} public class HelloClient { public void coreClientChat() { Clientcore t=new Clientcore(); new Thread(t).start(); } public static void main(String[] args)throws Exception { HelloClient cl= new HelloClient(); cl.coreClientChat(); }} public class HelloServer { public void coreServerChat() { ServerCore t=new ServerCore(); new Thread(t).start(); } public static void main(String[] args) { HelloServer st= new HelloServer(); st.coreServerChat(); }}
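
    One likely reason only a single client is served: the server registers the ServerSocketChannel with a brand-new selector on every loop iteration, never calls select(), and closes each socket right after one reply. A minimal selector-driven loop usually looks like the sketch below (it just echoes bytes back; a real chat server would iterate over the registered keys and write to every connected client, and would also handle partial writes). This is an illustrative rewrite, not a drop-in replacement for the classes above.

        import java.net.InetSocketAddress;
        import java.nio.ByteBuffer;
        import java.nio.channels.SelectionKey;
        import java.nio.channels.Selector;
        import java.nio.channels.ServerSocketChannel;
        import java.nio.channels.SocketChannel;
        import java.util.Iterator;

        public class SelectorServer {
            public static void main(String[] args) throws Exception {
                Selector selector = Selector.open();
                ServerSocketChannel server = ServerSocketChannel.open();
                server.socket().bind(new InetSocketAddress(8888));
                server.configureBlocking(false);
                server.register(selector, SelectionKey.OP_ACCEPT);

                while (true) {
                    selector.select(); // blocks until at least one channel is ready
                    Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
                    while (keys.hasNext()) {
                        SelectionKey key = keys.next();
                        keys.remove();
                        if (key.isAcceptable()) {
                            // The new channel stays registered with the same selector,
                            // so many clients can be connected at the same time.
                            SocketChannel client = ((ServerSocketChannel) key.channel()).accept();
                            client.configureBlocking(false);
                            client.register(selector, SelectionKey.OP_READ);
                        } else if (key.isReadable()) {
                            SocketChannel client = (SocketChannel) key.channel();
                            ByteBuffer buffer = ByteBuffer.allocate(1024);
                            int read = client.read(buffer);
                            if (read == -1) {          // client disconnected
                                key.cancel();
                                client.close();
                                continue;
                            }
                            buffer.flip();
                            client.write(buffer);      // echo; a chat server would broadcast instead
                        }
                    }
                }
            }
        }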

    Read the article

  • jQuery doesn't work after an Ajax post

    - by user1758979
    I'm using jQuery to sort a list of entries, between <LI></LI> tags, and then an Ajax post to validate the order and 'update' the page with the content returned. $.ajax({url: "./test.php?id=<?php echo $id; ?>&action=modify", contenttype: "application/x-www-form-urlencoded;charset=utf-8", data: {myJson: data}, type: 'post', success: function(data) { $('html').html(data); OnloadFunction (); } }); Then, I lose the ability to sort the list (I'm not sure if clear...). I tried to move the content of the $(document).ready inside the OnloadFunction (), and call it with <script>OnloadFunction ();</script> inside the block dealing with the modifications to do : $action= $_GET['action']; if ($action == "modify") { // Code here } but it doesn't work... I can't figure out how to do that. Could anyone help ? I stripped out the main part of the code to keep only the essential (filename: test.php) <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"> <script type="text/javascript" src="jquery-1.8.2.min.js"></script> <script type="text/javascript" src="jquery-ui-1.9.0.custom.min.js"></script> <script> $(document).ready(function(){ //alert("I am ready"); OnloadFunction (); }); function OnloadFunction () { $(function() { $("#SortColumn ul").sortable({ opacity: 0.6, cursor: 'move', update: function() {} }); }); //alert('OnloadFunction ends'); } function valider(){ var SortedId = new Array(); SortIdNb = 0; $('#SortColumn ul li').each(function() { SortedId.push(this.id); }); var data = { /* Real code contains an array with the <li> id */ CheckedId: "CheckedId", SortedId: SortedId, }; data = JSON.stringify(data); $.ajax({url: "./test.php?id=<?php echo $id; ?>&action=modify", contenttype: "application/x-www-form-urlencoded;charset=utf-8", data: {myJson: data}, type: 'post', success: function(data) { //alert(data); $('html').html(data); OnloadFunction (); } }); } </script> </head> <body> <? $action= $_GET['action']; $id = $_GET['id']; if ($id == 0) {$id=1;} $id += 1; if ($action == "modify") { echo "action: modify<br>"; echo "id (àvèc aççént$): ".$id."<br>"; // "(àvèc aççént$)" to check characters because character set is incorrect after the ajax post $data = json_decode($_POST['myJson'], true); // PHP code here to treat the new list send via the post and update the database print_r($data); } ?> <!-- PHP code here to get the following list from the database --> <div id="SortColumn"> <ul> <li id="recordsArray_1">recordsArray_1</li> <li id="recordsArray_2">recordsArray_2</li> <li id="recordsArray_3">recordsArray_3</li> <li id="recordsArray_4">recordsArray_4</li> <li id="recordsArray_5">recordsArray_5</li> </ul> </div> <input type="button" value="Modifier" onclick="valider();"> </body> </html>

    Read the article

  • Is it normal that .Net combobox postback always contains data items?

    - by BlueFox
    I have a dynamically populated ComponentArt:ComboBox with Autopostback property set to true. So every time the selection changes, a postback to server would be fired. Everything seems to work fine except that the lists of all available items are also posted back to the server. from Firebug: ComboBox1_Data %3Cr%3E%3Cc%3E%3Cr%3E%3Cc%3E%3Cr%3E%3Cc%3EText%3C%2Fc%3E%3Cc%3E150%20Mile%20House%3C%2Fc%3E%3C%2Fr%3E%3C%2Fc%3E%3C%2Fr%3E%3C%2Fc%3E%3Cc%3E%3Cr%3E%3Cc%3E%3Cr%3E%3Cc%3EText%3C%2Fc%3E%3Cc%3EAach%3C%2Fc%3E%3C%2Fr%3E%3C%2Fc%3E%3C%2Fr%3E%3C%2Fc%3E%3Cc%3E%3Cr%3E%3Cc%3E%3Cr%3E%3Cc%3EText%3C%2Fc%3E%3Cc%3EAachen%3C%2Fc%3E%3C%2Fr%3E%3C%2Fc%3E%3C%2Fr%3E%3C%2Fc%3E%3Cc%3E%3Cr%3E%3Cc%3E%3Cr%3E%3Cc%3EText%3C%2Fc%3E%3Cc%3EAaheim%3C%2Fc%3E%3C%2Fr%3E%3C%2Fc%3E%3C%2Fr%3E%3C%2Fc%3E%3Cc%3E%3Cr%3E%3Cc%3E%3Cr%3E%3Cc%3EText%3C%2Fc%3E%3Cc%3EAakrehamn%3C%2Fc%3E%3C%2Fr%3E%3C%2Fc%3E%3C%2Fr%3E%3C%2Fc%3E%3Cc%3E%3Cr%3E%3Cc%3E%3Cr%3E%3Cc%3EText%3C%2Fc%3E%3Cc%3EAalbeke%3C%2Fc%3E%3C%2Fr%3E%3C%2Fc%3E%3C%2Fr%3E%3C%2Fc%3E%3Cc%3E%3Cr%3E%3Cc%3E%3Cr%3E%3Cc%3EText%3C%2Fc%3E%3Cc%3EAalen%3C%2Fc%3E%3C%2Fr%3E%3C%2Fc%3E%3C%2Fr%3E%3C%2Fc%3E%3Cc%3E%3Cr%3E%3Cc%3E%3Cr%3E%3Cc%3EText%3C%2Fc%3E%3Cc%3EAalst%3C%2Fc%3E%3C%2Fr%3E%3C%2Fc%3E%3C%2Fr%3E%3C%2Fc%3E%3Cc%3E%3Cr%3E%3Cc%3E%3Cr%3E%3Cc%3EText%3C%2Fc%3E%3Cc%3EAalter%3C%2Fc%3E%3C%2Fr%3E%3C%2Fc%3E%3C%2Fr%3E%3C%2Fc%3E%3Cc%3E%3Cr%3E%3Cc%3E%3Cr%3E%3Cc%3EText%3C%2Fc%3E%3Cc%3E%C3%84%C3%A4nekoski%3C%2Fc%3E%3C%2Fr%3E%3C%2Fc%3E%3C%2Fr%3E%3C%2Fc%3E%3Cc%3E%3Cr%3E%3Cc%3E%3Cr%3E%3Cc%3EText%3C%2Fc%3E%3Cc%3EAarau%3C%2Fc%3E%3C%2Fr%3E%3C%2Fc%3E%3C%2Fr%3E%3C%2Fc%3E%3Cc%3E%3Cr%3E%3Cc%3E%3Cr%3E%3Cc%3EText%3C%2Fc%3E%3Cc%3EAarberg%3C%2Fc%3E%3C%2Fr%3E%3C%2Fc%3E%3C%2Fr%3E%3C%2Fc%3E%3Cc%3E%3Cr%3E%3Cc%3E%3Cr%3E%3Cc%3EText%3C%2Fc%3E%3Cc%3EAarbergen%3C%2Fc%3E%3C%2Fr%3E%3C%2Fc%3E%3C%2Fr%3E%3C%2Fc%3E%3Cc%3E%3Cr%3E%3Cc%3E%3Cr%3E%3Cc%3EText%3C%2Fc%3E%3Cc%3EAarburg%3C%2Fc%3E%3C%2Fr%3E%3C%2Fc%3E%3C%2Fr%3E%3C%2Fc%3E%3Cc%3E%3Cr%3E%3Cc%3E%3Cr%3E%3Cc%3EText%3C%2Fc%3E%3Cc%3EAarebakken%3C%2Fc%3E%3C%2Fr%3E%3C%2Fc%3E%3C%2Fr%3E%3C%2Fc%3E%3Cc%3E%3Cr%3E%3Cc%3E%3Cr%3E%3Cc%3EText%3C%2Fc%3E%3Cc%3EAarschot%3C%2Fc%3E%3C%2Fr%3E%3C%2Fc%3E%3C%2Fr%3E%3C%2Fc%3E%3Cc%3E%3Cr%3E%3Cc%3E%3Cr%3E%3Cc%3EText%3C%2Fc%3E%3Cc%3EAarsele%3C%2Fc%3E%3C%2Fr%3E%3C%2Fc%3E%3C%2Fr%3E%3C%2Fc%3E%3Cc%3E%3Cr%3E%3Cc%3E%3Cr%3E%3Cc%3EText%3C%2Fc%3E%3Cc%3EAartrijke%3C%2Fc%3E%3C%2Fr%3E%3C%2Fc%3E%3C%2Fr%3E%3C%2Fc%3E%3Cc%3E%3Cr%3E%3Cc%3E%3Cr%3E%3Cc%3EText%3C%2Fc%3E%3Cc%3EAartselaar%3C%2Fc%3E%3C%2Fr%3E%3C%2Fc%3E%3C%2Fr%3E%3C%2Fc%3E%3Cc%3E%3Cr%3E%3Cc%3E%3Cr%3E%3Cc%3EText%3C%2Fc%3E%3Cc%3EAavasaksa%3C%2Fc%3E%3C%2Fr%3E%3C%2Fc%3E%3C%2Fr%3E%3C%2Fc%3E%3C%2Fr%3E ComboBox1_Input Aalst ComboBox1_SelectedIndex 7 As most my clients are using slow connections, this amount of postback has a huge impact on their user experience. Since I'm storing the viewstates in session already, there's really no need for any of the component to restore states from the client. So I'm wondering if this is normal for ComponentArt:ComboBox to do this and not other controls, or this is the normal way of doing things?

    Read the article

  • Can I create different observables and different corresponding observers in Java?

    - by mithun1538
    Hello everyone, Currently, I have one observable and many observers. What i need is different observables, and depending on the observable, different observers. How do I achieve this? ( For understanding, assume I have different apples - say apple1 apple2... I have observer_1 observing apple1, observer_2 observing apple2, observer_3 observing apple 2 and so on..). I tried creating different objects of the Observable class, but since observers are observing the same class of observable, I don't know how to access a particular instance of the Observable. I have included the following servlet code that contains Observer and Observable classes: public class CustomerServlet extends HttpServlet { public String getNextMessage() { // Create a message sink to wait for a new message from the // message source. return new MessageSink().getNextMessage(source); } @Override protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { ObjectOutputStream dout = new ObjectOutputStream(response.getOutputStream()); String recMSG = getNextMessage(); dout.writeObject(recMSG); dout.flush(); } public void broadcastMessage(String message) { // Send the message to all the HTTP-connected clients by giving the // message to the message source source.sendMessage(message); } @Override protected void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { try { ObjectInputStream din= new ObjectInputStream(request.getInputStream()); String message = (String)din.readObject(); ObjectOutputStream dout = new ObjectOutputStream(response.getOutputStream()); dout.writeObject("1"); dout.flush(); if (message != null) { broadcastMessage(message); } // Set the status code to indicate there will be no response response.setStatus(response.SC_NO_CONTENT); } catch (Exception e) { e.printStackTrace(); } } @Override public String getServletInfo() { return "Short description"; }// </editor-fold> MessageSource source = new MessageSource(); } class MessageSource extends Observable { public void sendMessage(String message) { setChanged(); notifyObservers(message); } } class MessageSource extends Observable { public void sendMessage(String message) { setChanged(); notifyObservers(message); } } class MessageSink implements Observer { String message = null; // set by update() and read by getNextMessage() // Called by the message source when it gets a new message synchronized public void update(Observable o, Object arg) { // Get the new message message = (String)arg; // Wake up our waiting thread notify(); } // Gets the next message sent out from the message source synchronized public String getNextMessage(MessageSource source) { // Tell source we want to be told about new messages source.addObserver(this); // Wait until our update() method receives a message while (message == null) { try { wait(); } catch (Exception e) { System.out.println("Exception has occured! ERR ERR ERR"); } } // Tell source to stop telling us about new messages source.deleteObserver(this); // Now return the message we received // But first set the message instance variable to null // so update() and getNextMessage() can be called again. String messageCopy = message; message = null; return messageCopy; } }
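    A standalone sketch of the per-instance approach: give each "apple" its own Observable object, register observers on the instance they care about, and use the Observable argument of update() to tell which instance fired. Class and variable names here (ObserverDemo, apple1, observer_1) are illustrative and not taken from the servlet code above:

        import java.util.Observable;
        import java.util.Observer;

        class MessageSource extends Observable {
            public void sendMessage(String message) {
                setChanged();
                notifyObservers(message);
            }
        }

        class MessageSink implements Observer {
            private final String name;

            MessageSink(String name) { this.name = name; }

            @Override
            public void update(Observable o, Object arg) {
                // 'o' is the particular MessageSource instance that sent this notification
                System.out.println(name + " got \"" + arg + "\" from " + o);
            }
        }

        public class ObserverDemo {
            public static void main(String[] args) {
                MessageSource apple1 = new MessageSource();
                MessageSource apple2 = new MessageSource();

                apple1.addObserver(new MessageSink("observer_1"));
                apple2.addObserver(new MessageSink("observer_2"));
                apple2.addObserver(new MessageSink("observer_3"));

                apple1.sendMessage("from apple1");  // only observer_1 is notified
                apple2.sendMessage("from apple2");  // observer_2 and observer_3 are notified
            }
        }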

    Read the article

  • SOAP and NHibernate Session in C#

    - by Anonymous Coward
    In a set of SOAP web services the user is authenticated with custom SOAP header (username/password). Each time the user call a WS the following Auth method is called to authenticate and retrieve User object from NHibernate session: [...] public Services : Base { private User user; [...] public string myWS(string username, string password) { if( Auth(username, password) ) { [...] } } } public Base : WebService { protected static ISessionFactory sesFactory; protected static ISession session; static Base { Configuration conf = new Configuration(); [...] sesFactory = conf.BuildSessionFactory(); } private bool Auth(...) { session = sesFactory.OpenSession(); MembershipUser user = null; if (UserCredentials != null && Membership.ValidateUser(username, password)) { luser = Membership.GetUser(username); } ... try { user = (User)session.Get(typeof(User), luser.ProviderUserKey.ToString()); } catch { user = null; throw new [...] } return user != null; } } When the WS work is done the session is cleaned up nicely and everything works: the WSs create, modify and change objects and Nhibernate save them in the DB. The problems come when an user (same username/password) calls the same WS at same time from different clients (machines). The state of the saved objects are inconsistent. How do I manage the session correctly to avoid this? I searched and the documentation about Session management in NHibernate is really vast. Should I Lock over user object? Should I set up a "session share" management between WS calls from same user? Should I use Transaction in some savvy way? Thanks Update1 Yes, mSession is 'session'. Update2 Even with a non-static session object the data saved in the DB are inconsistent. The pattern I use to insert/save object is the following: var return_value = [...]; try { using(ITransaction tx = session.Transaction) { tx.Begin(); MyType obj = new MyType(); user.field = user.field - obj.field; // The fields names are i.e. but this is actually what happens. session.Save(user); session.Save(obj); tx.Commit(); return_value = obj.another_field; } } catch ([...]) { // Handling exceptions... } finally { // Clean up session.Flush(); session.Close(); } return return_value; All new objects (MyType) are correctly saved but the user.field status is not as I would expect. Even obj.another_field is correct (the field is an ID with generated=on-save policy). It is like 'user.field = user.field - obj.field;' is executed more times then necessary.
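    The usual cure is to stop sharing a static ISession across requests: keep the ISessionFactory static (it is thread-safe and expensive to build), but open a fresh session and transaction for every call. A minimal sketch of that pattern as a rework of the Base class above - the WithSession helper name is mine, and the configuration details are omitted:

        using System;
        using System.Web.Services;
        using NHibernate;
        using NHibernate.Cfg;

        public class Base : WebService
        {
            // One factory for the whole application...
            protected static readonly ISessionFactory SessionFactory;

            static Base()
            {
                var conf = new Configuration();
                // ... mappings/configuration omitted ...
                SessionFactory = conf.BuildSessionFactory();
            }

            // ...but a brand new session and transaction per call, so concurrent
            // requests from the same user no longer stomp on each other's state.
            protected static T WithSession<T>(Func<ISession, T> work)
            {
                using (ISession session = SessionFactory.OpenSession())
                using (ITransaction tx = session.BeginTransaction())
                {
                    T result = work(session);
                    tx.Commit();
                    return result;
                }
            }

            // Usage inside a web method (field names are illustrative):
            // return WithSession(session =>
            // {
            //     var user = session.Get<User>(userKey);
            //     user.Field -= amount;
            //     session.Save(user);
            //     return "done";
            // });
        }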

    Read the article

  • Adapting non-iterable containers to be iterated via custom templatized iterator

    - by DAldridge
    I have some classes, which for various reasons out of scope of this discussion, I cannot modify (irrelevant implementation details omitted): class Foo { /* ... irrelevant public interface ... */ }; class Bar { public: Foo& get_foo(size_t index) { /* whatever */ } size_t size_foo() { /* whatever */ } }; (There are many similar 'Foo' and 'Bar' classes I'm dealing with, and it's all generated code from elsewhere and stuff I don't want to subclass, etc.) [Edit: clarification - although there are many similar 'Foo' and 'Bar' classes, it is guaranteed that each "outer" class will have the getter and size methods. Only the getter method name and return type will differ for each "outer", based on whatever it's "inner" contained type is. So, if I have Baz which contains Quux instances, there will be Quux& Baz::get_quux(size_t index), and size_t Baz::size_quux().] Given the design of the Bar class, you cannot easily use it in STL algorithms (e.g. for_each, find_if, etc.), and must do imperative loops rather than taking a functional approach (reasons why I prefer the latter is also out of scope for this discussion): Bar b; size_t numFoo = b.size_foo(); for (int fooIdx = 0; fooIdx < numFoo; ++fooIdx) { Foo& f = b.get_foo(fooIdx); /* ... do stuff with 'f' ... */ } So... I've never created a custom iterator, and after reading various questions/answers on S.O. about iterator_traits and the like, I came up with this (currently half-baked) "solution": First, the custom iterator mechanism (NOTE: all uses of 'function' and 'bind' are from std::tr1 in MSVC9): // Iterator mechanism... template <typename TOuter, typename TInner> class ContainerIterator : public std::iterator<std::input_iterator_tag, TInner> { public: typedef function<TInner& (size_t)> func_type; ContainerIterator(const ContainerIterator& other) : mFunc(other.mFunc), mIndex(other.mIndex) {} ContainerIterator& operator++() { ++mIndex; return *this; } bool operator==(const ContainerIterator& other) { return ((mFunc.target<TOuter>() == other.mFunc.target<TOuter>()) && (mIndex == other.mIndex)); } bool operator!=(const ContainerIterator& other) { return !(*this == other); } TInner& operator*() { return mFunc(mIndex); } private: template<typename TOuter, typename TInner> friend class ContainerProxy; ContainerIterator(func_type func, size_t index = 0) : mFunc(func), mIndex(index) {} function<TInner& (size_t)> mFunc; size_t mIndex; }; Next, the mechanism by which I get valid iterators representing begin and end of the inner container: // Proxy(?) to the outer class instance, providing a way to get begin() and end() // iterators to the inner contained instances... template <typename TOuter, typename TInner> class ContainerProxy { public: typedef function<TInner& (size_t)> access_func_type; typedef function<size_t ()> size_func_type; typedef ContainerIterator<TOuter, TInner> iter_type; ContainerProxy(access_func_type accessFunc, size_func_type sizeFunc) : mAccessFunc(accessFunc), mSizeFunc(sizeFunc) {} iter_type begin() const { size_t numItems = mSizeFunc(); if (0 == numItems) return end(); else return ContainerIterator<TOuter, TInner>(mAccessFunc, 0); } iter_type end() const { size_t numItems = mSizeFunc(); return ContainerIterator<TOuter, TInner>(mAccessFunc, numItems); } private: access_func_type mAccessFunc; size_func_type mSizeFunc; }; I can use these classes in the following manner: // Sample function object for taking action on an LMX inner class instance yielded // by iteration... 
template <typename TInner> class SomeTInnerFunctor { public: void operator()(const TInner& inner) { /* ... whatever ... */ } }; // Example of iterating over an outer class instance's inner container... Bar b; /* assume populated which contained items ... */ ContainerProxy<Bar, Foo> bProxy( bind(&Bar::get_foo, b, _1), bind(&Bar::size_foo, b)); for_each(bProxy.begin(), bProxy.end(), SomeTInnerFunctor<Foo>()); Empirically, this solution functions correctly (minus any copy/paste or typos I may have introduced when editing the above for brevity). So, finally, the actual question: I don't like requiring the use of bind() and _1 placeholders, etcetera by the caller. All they really care about is: outer type, inner type, outer type's method to fetch inner instances, outer type's method to fetch count inner instances. Is there any way to "hide" the bind in the body of the template classes somehow? I've been unable to find a way to separately supply template parameters for the types and inner methods separately... Thanks! David
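    One way to hide the bind()/placeholder plumbing is to let the proxy store the outer object and plain member-function pointers itself, so the call site only names the two methods. Here is a sketch of that idea as a simplified variation of the ContainerProxy above - make_container_proxy and at() are names I'm introducing, and the existing ContainerIterator can wrap at()/size() exactly the way it currently wraps the bound functions:

        #include <cstddef>

        template <typename TOuter, typename TInner>
        class ContainerProxy
        {
        public:
            typedef TInner& (TOuter::*AccessFn)(size_t);
            typedef size_t (TOuter::*SizeFn)();

            ContainerProxy(TOuter& outer, AccessFn access, SizeFn size)
                : mOuter(outer), mAccess(access), mSize(size) {}

            size_t size() const { return (mOuter.*mSize)(); }                   // e.g. Bar::size_foo
            TInner& at(size_t index) const { return (mOuter.*mAccess)(index); } // e.g. Bar::get_foo

        private:
            TOuter&  mOuter;
            AccessFn mAccess;
            SizeFn   mSize;
        };

        // Deduces TOuter/TInner from the member-function pointers, so the caller
        // never mentions bind, placeholders or even the template arguments.
        template <typename TOuter, typename TInner>
        ContainerProxy<TOuter, TInner> make_container_proxy(TOuter& outer,
                                                            TInner& (TOuter::*access)(size_t),
                                                            size_t (TOuter::*size)())
        {
            return ContainerProxy<TOuter, TInner>(outer, access, size);
        }

        // Usage:
        //   Bar b;
        //   ContainerProxy<Bar, Foo> proxy = make_container_proxy(b, &Bar::get_foo, &Bar::size_foo);
        //   for (size_t i = 0; i < proxy.size(); ++i) { /* ... use proxy.at(i) ... */ }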

    Read the article

  • How can I get more than one .jpg or .txt file from any folder?

    - by Phsika
    Dear Sirs; i have two Application to listen network Stream : Server.cs on the other hand; send file Client.cs. But i want to send more files on a stream from any folder. For example. i have C:/folder whish has got 3 jpg files. My client must run. Also My server.cs get files on stream: Client.cs: private void btn_send2_Click(object sender, EventArgs e) { string[] paths= null; paths= System.IO.Directory.GetFiles(@"C:\folder" + @"\", "*.jpg", System.IO.SearchOption.AllDirectories); byte[] Dizi; TcpClient Gonder = new TcpClient("127.0.0.1", 51124); FileStream Dosya; FileInfo Dos; NetworkStream Akis; foreach (string path in paths) { Dosya = new FileStream(path , FileMode.OpenOrCreate); Dos = new FileInfo(path ); Dizi = new byte[(int)Dos.Length]; Dosya.Read(Dizi, 0, (int)Dos.Length); Akis = Gonder.GetStream(); Akis.Write(Dizi, 0, (int)Dosya.Length); Gonder.Close(); Akis.Flush(); Dosya.Close(); } } Also i have Server.cs void Dinle() { TcpListener server = null; try { Int32 port = 51124; IPAddress localAddr = IPAddress.Parse("127.0.0.1"); server = new TcpListener(localAddr, port); server.Start(); Byte[] bytes = new Byte[1024 * 250000]; // string ReceivedPath = "C:/recieved"; while (true) { MessageBox.Show("Waiting for a connection... "); TcpClient client = server.AcceptTcpClient(); MessageBox.Show("Connected!"); NetworkStream stream = client.GetStream(); if (stream.CanRead) { saveFileDialog1.ShowDialog(); // burasi degisecek string pathfolder = saveFileDialog1.FileName; StreamWriter yaz = new StreamWriter(pathfolder); string satir; StreamReader oku = new StreamReader(stream); while ((satir = oku.ReadLine()) != null) { satir = satir + (char)13 + (char)10; yaz.WriteLine(satir); } oku.Close(); yaz.Close(); client.Close(); } } } catch (SocketException e) { Console.WriteLine("SocketException: {0}", e); } finally { // Stop listening for new clients. server.Stop(); } Console.WriteLine("\nHit enter to continue..."); Console.Read(); } Please look Client.cs: icollected all files from "c:\folder" paths= System.IO.Directory.GetFiles(@"C:\folder" + @"\", "*.jpg", System.IO.SearchOption.AllDirectories); My Server.cs how to get all files from stream?
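    A common way to move several binary files over a single connection is a small length-prefixed protocol: for each file, write the name and the byte count before the raw bytes, and have the server loop until it has read everything that was announced. Below is a minimal sketch of both ends with BinaryWriter/BinaryReader (the folder, host and port follow the question's values, the target folder is illustrative, and error handling is omitted). Note the server reads binary data directly - reading with StreamReader.ReadLine() would corrupt .jpg content:

        using System.IO;
        using System.Net.Sockets;

        static class FileTransfer
        {
            // Client: one connection, many files
            public static void SendFiles(string folder, string host, int port)
            {
                string[] paths = Directory.GetFiles(folder, "*.jpg", SearchOption.AllDirectories);
                using (var client = new TcpClient(host, port))
                using (var writer = new BinaryWriter(client.GetStream()))
                {
                    writer.Write(paths.Length);                 // how many files follow
                    foreach (string path in paths)
                    {
                        byte[] bytes = File.ReadAllBytes(path);
                        writer.Write(Path.GetFileName(path));   // length-prefixed file name
                        writer.Write(bytes.Length);             // file size in bytes
                        writer.Write(bytes);                    // raw file data
                    }
                }
            }

            // Server: read exactly what the client announced
            public static void ReceiveFiles(TcpClient client, string targetFolder)
            {
                using (var reader = new BinaryReader(client.GetStream()))
                {
                    int fileCount = reader.ReadInt32();
                    for (int i = 0; i < fileCount; i++)
                    {
                        string name  = reader.ReadString();
                        int    size  = reader.ReadInt32();
                        byte[] bytes = reader.ReadBytes(size);
                        File.WriteAllBytes(Path.Combine(targetFolder, name), bytes);
                    }
                }
            }
        }

        // e.g. FileTransfer.SendFiles(@"C:\folder", "127.0.0.1", 51124); on the client side.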

    Read the article

  • .NET WebRequest.PreAuthenticate not quite what it sounds like

    - by Rick Strahl
    I’ve run into the  problem a few times now: How to pre-authenticate .NET WebRequest calls doing an HTTP call to the server – essentially send authentication credentials on the very first request instead of waiting for a server challenge first? At first glance this sound like it should be easy: The .NET WebRequest object has a PreAuthenticate property which sounds like it should force authentication credentials to be sent on the first request. Looking at the MSDN example certainly looks like it does: http://msdn.microsoft.com/en-us/library/system.net.webrequest.preauthenticate.aspx Unfortunately the MSDN sample is wrong. As is the text of the Help topic which incorrectly leads you to believe that PreAuthenticate… wait for it - pre-authenticates. But it doesn’t allow you to set credentials that are sent on the first request. What this property actually does is quite different. It doesn’t send credentials on the first request but rather caches the credentials ONCE you have already authenticated once. Http Authentication is based on a challenge response mechanism typically where the client sends a request and the server responds with a 401 header requesting authentication. So the client sends a request like this: GET /wconnect/admin/wc.wc?_maintain~ShowStatus HTTP/1.1 Host: rasnote User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506) Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en,de;q=0.7,en-us;q=0.3 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 300 Connection: keep-alive and the server responds with: HTTP/1.1 401 Unauthorized Cache-Control: private Content-Type: text/html; charset=utf-8 Server: Microsoft-IIS/7.5 WWW-Authenticate: basic realm=rasnote" X-AspNet-Version: 2.0.50727 WWW-Authenticate: Negotiate WWW-Authenticate: NTLM WWW-Authenticate: Basic realm="rasnote" X-Powered-By: ASP.NET Date: Tue, 27 Oct 2009 00:58:20 GMT Content-Length: 5163 plus the actual error message body. The client then is responsible for re-sending the current request with the authentication token information provided (in this case Basic Auth): GET /wconnect/admin/wc.wc?_maintain~ShowStatus HTTP/1.1 Host: rasnote User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506) Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en,de;q=0.7,en-us;q=0.3 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 300 Connection: keep-alive Cookie: TimeTrakker=2HJ1998WH06696; WebLogCommentUser=Rick Strahl|http://www.west-wind.com/|[email protected]; WebStoreUser=b8bd0ed9 Authorization: Basic cgsf12aDpkc2ZhZG1zMA== Once the authorization info is sent the server responds with the actual page result. Now if you use WebRequest (or WebClient) the default behavior is to re-authenticate on every request that requires authorization. This means if you look in  Fiddler or some other HTTP client Proxy that captures requests you’ll see that each request re-authenticates: Here are two requests fired back to back: and you can see the 401 challenge, the 200 response for both requests. If you watch this same conversation between a browser and a server you’ll notice that the first 401 is also there but the subsequent 401 requests are not present. 
WebRequest.PreAuthenticate And this is precisely what the WebRequest.PreAuthenticate property does: It’s a caching mechanism that caches the connection credentials for a given domain in the active process and resends it on subsequent requests. It does not send credentials on the first request but it will cache credentials on subsequent requests after authentication has succeeded: string url = "http://rasnote/wconnect/admin/wc.wc?_maintain~ShowStatus"; HttpWebRequest req = HttpWebRequest.Create(url) as HttpWebRequest; req.PreAuthenticate = true; req.Credentials = new NetworkCredential("rick", "secret", "rasnote"); req.AuthenticationLevel = System.Net.Security.AuthenticationLevel.MutualAuthRequested; req.UserAgent = ": Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)"; WebResponse resp = req.GetResponse(); resp.Close(); req = HttpWebRequest.Create(url) as HttpWebRequest; req.PreAuthenticate = true; req.Credentials = new NetworkCredential("rstrahl", "secret", "rasnote"); req.AuthenticationLevel = System.Net.Security.AuthenticationLevel.MutualAuthRequested; req.UserAgent = ": Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)"; resp = req.GetResponse(); which results in the desired sequence: where only the first request doesn’t send credentials. This is quite useful as it saves quite a few round trips to the server – bascially it saves one auth request request for every authenticated request you make. In most scenarios I think you’d want to send these credentials this way but one downside to this is that there’s no way to log out the client. Since the client always sends the credentials once authenticated only an explicit operation ON THE SERVER can undo the credentials by forcing another login explicitly (ie. re-challenging with a forced 401 request). Forcing Basic Authentication Credentials on the first Request On a few occasions I’ve needed to send credentials on a first request – mainly to some oddball third party Web Services (why you’d want to use Basic Auth on a Web Service is beyond me – don’t ask but it’s not uncommon in my experience). This is true of certain services that are using Basic Authentication (especially some Apache based Web Services) and REQUIRE that the authentication is sent right from the first request. No challenge first. Ugly but there it is. Now the following works only with Basic Authentication because it’s pretty straight forward to create the Basic Authorization ‘token’ in code since it’s just an unencrypted encoding of the user name and password into base64. As you might guess this is totally unsecure and should only be used when using HTTPS/SSL connections (i’m not in this example so I can capture the Fiddler trace and my local machine doesn’t have a cert installed, but for production apps ALWAYS use SSL with basic auth). 
The idea is that you simply add the required Authorization header to the request on your own along with the authorization string that encodes the username and password: string url = "http://rasnote/wconnect/admin/wc.wc?_maintain~ShowStatus"; HttpWebRequest req = HttpWebRequest.Create(url) as HttpWebRequest; string user = "rick"; string pwd = "secret"; string domain = "www.west-wind.com"; string auth = "Basic " + Convert.ToBase64String(System.Text.Encoding.Default.GetBytes(user + ":" + pwd)); req.PreAuthenticate = true; req.AuthenticationLevel = System.Net.Security.AuthenticationLevel.MutualAuthRequested;req.Headers.Add("Authorization", auth); req.UserAgent = ": Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)"; WebResponse resp = req.GetResponse(); resp.Close(); This works and causes the request to immediately send auth information to the server. However, this only works with Basic Auth because you can actually create the authentication credentials easily on the client because it’s essentially clear text. The same doesn’t work for Windows or Digest authentication since you can’t easily create the authentication token on the client and send it to the server. Another issue with this approach is that PreAuthenticate has no effect when you manually force the authentication. As far as Web Request is concerned it never sent the authentication information so it’s not actually caching the value any longer. If you run 3 requests in a row like this: string url = "http://rasnote/wconnect/admin/wc.wc?_maintain~ShowStatus"; HttpWebRequest req = HttpWebRequest.Create(url) as HttpWebRequest; string user = "ricks"; string pwd = "secret"; string domain = "www.west-wind.com"; string auth = "Basic " + Convert.ToBase64String(System.Text.Encoding.Default.GetBytes(user + ":" + pwd)); req.PreAuthenticate = true; req.Headers.Add("Authorization", auth); req.UserAgent = ": Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)"; WebResponse resp = req.GetResponse(); resp.Close(); req = HttpWebRequest.Create(url) as HttpWebRequest; req.PreAuthenticate = true; req.Credentials = new NetworkCredential(user, pwd, domain); req.UserAgent = ": Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)"; resp = req.GetResponse(); resp.Close(); req = HttpWebRequest.Create(url) as HttpWebRequest; req.PreAuthenticate = true; req.Credentials = new NetworkCredential(user, pwd, domain); req.UserAgent = ": Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)"; resp = req.GetResponse(); you’ll find the trace looking like this: where the first request (the one we explicitly add the header to) authenticates, the second challenges, and any subsequent ones then use the PreAuthenticate credential caching. In effect you’ll end up with one extra 401 request in this scenario, which is still better than 401 challenges on each request. Getting Access to WebRequest in Classic .NET Web Service Clients If you’re running a classic .NET Web Service client (non-WCF) one issue with the above is how do you get access to the WebRequest to actually add the custom headers to do the custom Authentication described above? 
One easy way is to implement a partial class that allows you add headers with something like this: public partial class TaxService { protected NameValueCollection Headers = new NameValueCollection(); public void AddHttpHeader(string key, string value) { this.Headers.Add(key,value); } public void ClearHttpHeaders() { this.Headers.Clear(); } protected override WebRequest GetWebRequest(Uri uri) { HttpWebRequest request = (HttpWebRequest) base.GetWebRequest(uri); request.Headers.Add(this.Headers); return request; } } where TaxService is the name of the .NET generated proxy class. In code you can then call AddHttpHeader() anywhere to add additional headers which are sent as part of the GetWebRequest override. Nice and simple once you know where to hook it. For WCF there’s a bit more work involved by creating a message extension as described here: http://weblogs.asp.net/avnerk/archive/2006/04/26/Adding-custom-headers-to-every-WCF-call-_2D00_-a-solution.aspx. FWIW, I think that HTTP header manipulation should be readily available on any HTTP based Web Service client DIRECTLY without having to subclass or implement a special interface hook. But alas a little extra work is required in .NET to make this happen Not a Common Problem, but when it happens… This has been one of those issues that is really rare, but it’s bitten me on several occasions when dealing with oddball Web services – a couple of times in my own work interacting with various Web Services and a few times on customer projects that required interaction with credentials-first services. Since the servers determine the protocol, we don’t have a choice but to follow the protocol. Lovely following standards that implementers decide to ignore, isn’t it? :-}© Rick Strahl, West Wind Technologies, 2005-2010Posted in .NET  CSharp  Web Services  

    Read the article

  • .NET HTML Sanitation for rich HTML Input

    - by Rick Strahl
    Recently I was working on updating a legacy application to MVC 4 that included free form text input. When I set up the new site my initial approach was to not allow any rich HTML input, only simple text formatting that would respect a few simple HTML commands for bold, lists etc. and automatically handles line break processing for new lines and paragraphs. This is typical for what I do with most multi-line text input in my apps and it works very well with very little development effort involved. Then the client sprung another note: Oh by the way we have a bunch of customers (real estate agents) who need to post complete HTML documents. Oh uh! There goes the simple theory. After some discussion and pleading on my part (<snicker>) to try and avoid this type of raw HTML input because of potential XSS issues, the client decided to go ahead and allow raw HTML input anyway. There has been lots of discussions on this subject on StackOverFlow (and here and here) but to after reading through some of the solutions I didn't really find anything that would work even closely for what I needed. Specifically we need to be able to allow just about any HTML markup, with the exception of script code. Remote CSS and Images need to be loaded, links need to work and so. While the 'legit' HTML posted by these agents is basic in nature it does span most of the full gamut of HTML (4). Most of the solutions XSS prevention/sanitizer solutions I found were way to aggressive and rendered the posted output unusable mostly because they tend to strip any externally loaded content. In short I needed a custom solution. I thought the best solution to this would be to use an HTML parser - in this case the Html Agility Pack - and then to run through all the HTML markup provided and remove any of the blacklisted tags and a number of attributes that are prone to JavaScript injection. There's much discussion on whether to use blacklists vs. whitelists in the discussions mentioned above, but I found that whitelists can make sense in simple scenarios where you might allow manual HTML input, but when you need to allow a larger array of HTML functionality a blacklist is probably easier to manage as the vast majority of elements and attributes could be allowed. Also white listing gets a bit more complex with HTML5 and the new proliferation of new HTML tags and most new tags generally don't affect XSS issues directly. Pure whitelisting based on elements and attributes also doesn't capture many edge cases (see some of the XSS cheat sheets listed below) so even with a white list, custom logic is still required to handle many of those edge cases. The Microsoft Web Protection Library (AntiXSS) My first thought was to check out the Microsoft AntiXSS library. Microsoft has an HTML Encoding and Sanitation library in the Microsoft Web Protection Library (formerly AntiXSS Library) on CodePlex, which provides stricter functions for whitelist encoding and sanitation. Initially I thought the Sanitation class and its static members would do the trick for me,but I found that this library is way too restrictive for my needs. Specifically the Sanitation class strips out images and links which rendered the full HTML from our real estate clients completely useless. I didn't spend much time with it, but apparently I'm not alone if feeling this library is not really useful without some way to configure operation. 
To give you an example of what didn't work for me with the library here's a small and simple HTML fragment that includes script, img and anchor tags. I would expect the script to be stripped and everything else to be left intact. Here's the original HTML:var value = "<b>Here</b> <script>alert('hello')</script> we go. Visit the " + "<a href='http://west-wind.com'>West Wind</a> site. " + "<img src='http://west-wind.com/images/new.gif' /> " ; and the code to sanitize it with the AntiXSS Sanitize class:@Html.Raw(Microsoft.Security.Application.Sanitizer.GetSafeHtmlFragment(value)) This produced a not so useful sanitized string: Here we go. Visit the <a>West Wind</a> site. While it removed the <script> tag (good) it also removed the href from the link and the image tag altogether (bad). In some situations this might be useful, but for most tasks I doubt this is the desired behavior. While links can contain javascript: references and images can 'broadcast' information to a server, without configuration to tell the library what to restrict this becomes useless to me. I couldn't find any way to customize the white list, nor is there code available in this 'open source' library on CodePlex. Using Html Agility Pack for HTML Parsing The WPL library wasn't going to cut it. After doing a bit of research I decided the best approach for a custom solution would be to use an HTML parser and inspect the HTML fragment/document I'm trying to import. I've used the HTML Agility Pack before for a number of apps where I needed an HTML parser without requiring an instance of a full browser like the Internet Explorer Application object which is inadequate in Web apps. In case you haven't checked out the Html Agility Pack before, it's a powerful HTML parser library that you can use from your .NET code. It provides a simple, parsable HTML DOM model to full HTML documents or HTML fragments that let you walk through each of the elements in your document. If you've used the HTML or XML DOM in a browser before you'll feel right at home with the Agility Pack. Blacklist based HTML Parsing to strip XSS Code For my purposes of HTML sanitation, the process involved is to walk the HTML document one element at a time and then check each element and attribute against a blacklist. There's quite a bit of argument of what's better: A whitelist of allowed items or a blacklist of denied items. While whitelists tend to be more secure, they also require a lot more configuration. In the case of HTML5 a whitelist could be very extensive. For what I need, I only want to ensure that no JavaScript is executed, so a blacklist includes the obvious <script> tag plus any tag that allows loading of external content including <iframe>, <object>, <embed> and <link> etc. <form>  is also excluded to avoid posting content to a different location. I also disallow <head> and <meta> tags in particular for my case, since I'm only allowing posting of HTML fragments. There is also some internal logic to exclude some attributes or attributes that include references to JavaScript or CSS expressions. The default tag blacklist reflects my use case, but is customizable and can be added to. 
Here's my HtmlSanitizer implementation:using System.Collections.Generic; using System.IO; using System.Xml; using HtmlAgilityPack; namespace Westwind.Web.Utilities { public class HtmlSanitizer { public HashSet<string> BlackList = new HashSet<string>() { { "script" }, { "iframe" }, { "form" }, { "object" }, { "embed" }, { "link" }, { "head" }, { "meta" } }; /// <summary> /// Cleans up an HTML string and removes HTML tags in blacklist /// </summary> /// <param name="html"></param> /// <returns></returns> public static string SanitizeHtml(string html, params string[] blackList) { var sanitizer = new HtmlSanitizer(); if (blackList != null && blackList.Length > 0) { sanitizer.BlackList.Clear(); foreach (string item in blackList) sanitizer.BlackList.Add(item); } return sanitizer.Sanitize(html); } /// <summary> /// Cleans up an HTML string by removing elements /// on the blacklist and all elements that start /// with onXXX . /// </summary> /// <param name="html"></param> /// <returns></returns> public string Sanitize(string html) { var doc = new HtmlDocument(); doc.LoadHtml(html); SanitizeHtmlNode(doc.DocumentNode); //return doc.DocumentNode.WriteTo(); string output = null; // Use an XmlTextWriter to create self-closing tags using (StringWriter sw = new StringWriter()) { XmlWriter writer = new XmlTextWriter(sw); doc.DocumentNode.WriteTo(writer); output = sw.ToString(); // strip off XML doc header if (!string.IsNullOrEmpty(output)) { int at = output.IndexOf("?>"); output = output.Substring(at + 2); } writer.Close(); } doc = null; return output; } private void SanitizeHtmlNode(HtmlNode node) { if (node.NodeType == HtmlNodeType.Element) { // check for blacklist items and remove if (BlackList.Contains(node.Name)) { node.Remove(); return; } // remove CSS Expressions and embedded script links if (node.Name == "style") { if (string.IsNullOrEmpty(node.InnerText)) { if (node.InnerHtml.Contains("expression") || node.InnerHtml.Contains("javascript:")) node.ParentNode.RemoveChild(node); } } // remove script attributes if (node.HasAttributes) { for (int i = node.Attributes.Count - 1; i >= 0; i--) { HtmlAttribute currentAttribute = node.Attributes[i]; var attr = currentAttribute.Name.ToLower(); var val = currentAttribute.Value.ToLower(); span style="background: white; color: green">// remove event handlers if (attr.StartsWith("on")) node.Attributes.Remove(currentAttribute); // remove script links else if ( //(attr == "href" || attr== "src" || attr == "dynsrc" || attr == "lowsrc") && val != null && val.Contains("javascript:")) node.Attributes.Remove(currentAttribute); // Remove CSS Expressions else if (attr == "style" && val != null && val.Contains("expression") || val.Contains("javascript:") || val.Contains("vbscript:")) node.Attributes.Remove(currentAttribute); } } } // Look through child nodes recursively if (node.HasChildNodes) { for (int i = node.ChildNodes.Count - 1; i >= 0; i--) { SanitizeHtmlNode(node.ChildNodes[i]); } } } } } Please note: Use this as a starting point only for your own parsing and review the code for your specific use case! If your needs are less lenient than mine were you can you can make this much stricter by not allowing src and href attributes or CSS links if your HTML doesn't allow it. You can also check links for external URLs and disallow those - lots of options.  The code is simple enough to make it easy to extend to fit your use cases more specifically. It's also quite easy to make this code work using a WhiteList approach if you want to go that route. 
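    For reference, a quick usage sketch of the HtmlSanitizer class just shown (the sample HTML string is made up for illustration):

        // Default blacklist: strips <script> and friends, keeps links and images
        string dirty = "<div><b>Hello</b><script>alert('xss')</script>" +
                       "<a href=\"javascript:alert('xss')\">click</a></div>";
        string clean = HtmlSanitizer.SanitizeHtml(dirty);

        // Passing an explicit blacklist replaces the default list entirely
        string stricter = HtmlSanitizer.SanitizeHtml(dirty, "script", "iframe", "object", "embed", "img");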
The code above is semi-generic for allowing full featured HTML fragments that only disallow script related content. The Sanitize method walks through each node of the document and then recursively drills into all of its children until the entire document has been traversed. Note that the code here uses an XmlTextWriter to write output - this is done to preserve XHTML style self-closing tags which are otherwise left as non-self-closing tags. The sanitizer code scans for blacklist elements and removes those elements not allowed. Note that the blacklist is configurable either in the instance class as a property or in the static method via the string parameter list. Additionally the code goes through each element's attributes and looks for a host of rules gleaned from some of the XSS cheat sheets listed at the end of the post. Clearly there are a lot more XSS vulnerabilities, but a lot of them apply to ancient browsers (IE6 and versions of Netscape) - many of these glaring holes (like CSS expressions - WTF IE?) have been removed in modern browsers. What a Pain To be honest this is NOT a piece of code that I wanted to write. I think building anything related to XSS is better left to people who have far more knowledge of the topic than I do. Unfortunately, I was unable to find a tool that worked even closely for me, or even provided a working base. For the project I was working on I had no choice and I'm sharing the code here merely as a base line to start with and potentially expand on for specific needs. It's sad that Microsoft Web Protection Library is currently such a train wreck - this is really something that should come from Microsoft as the systems vendor or possibly a third party that provides security tools. Luckily for my application we are dealing with a authenticated and validated users so the user base is fairly well known, and relatively small - this is not a wide open Internet application that's directly public facing. As I mentioned earlier in the post, if I had my way I would simply not allow this type of raw HTML input in the first place, and instead rely on a more controlled HTML input mechanism like MarkDown or even a good HTML Edit control that can provide some limits on what types of input are allowed. Alas in this case I was overridden and we had to go forward and allow *any* raw HTML posted. Sometimes I really feel sad that it's come this far - how many good applications and tools have been thwarted by fear of XSS (or worse) attacks? So many things that could be done *if* we had a more secure browser experience and didn't have to deal with every little script twerp trying to hack into Web pages and obscure browser bugs. 
So much time wasted building secure apps, so much time wasted by others trying to hack apps… We're a funny species - no other species manages to waste as much time, effort and resources as we humans do :-) Resources: Code on GitHub, Html Agility Pack, XSS Cheat Sheet, XSS Prevention Cheat Sheet, Microsoft Web Protection Library (AntiXss). StackOverflow Links: http://stackoverflow.com/questions/341872/html-sanitizer-for-net http://blog.stackoverflow.com/2008/06/safe-html-and-xss/ http://code.google.com/p/subsonicforums/source/browse/trunk/SubSonic.Forums.Data/HtmlScrubber.cs?r=61 © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Security, HTML, ASP.NET, JavaScript

    Read the article

  • Using an alternate JSON Serializer in ASP.NET Web API

    - by Rick Strahl
    The new ASP.NET Web API that Microsoft released alongside MVC 4.0 Beta last week is a great framework for building REST and AJAX APIs. I've been working with it for quite a while now and I really like the way it works and the complete set of features it provides 'in the box'. It's about time that Microsoft gets a decent API for building generic HTTP endpoints into the framework. DataContractJsonSerializer sucks As nice as Web API's overall design is one thing still sucks: The built-in JSON Serialization uses the DataContractJsonSerializer which is just too limiting for many scenarios. The biggest issues I have with it are: No support for untyped values (object, dynamic, Anonymous Types) MS AJAX style Date Formatting Ugly serialization formats for types like Dictionaries To me the most serious issue is dealing with serialization of untyped objects. I have number of applications with AJAX front ends that dynamically reformat data from business objects to fit a specific message format that certain UI components require. The most common scenario I have there are IEnumerable query results from a database with fields from the result set rearranged to fit the sometimes unconventional formats required for the UI components (like jqGrid for example). Creating custom types to fit these messages seems like overkill and projections using Linq makes this much easier to code up. Alas DataContractJsonSerializer doesn't support it. Neither does DataContractSerializer for XML output for that matter. What this means is that you can't do stuff like this in Web API out of the box:public object GetAnonymousType() { return new { name = "Rick", company = "West Wind", entered= DateTime.Now }; } Basically anything that doesn't have an explicit type DataContractJsonSerializer will not let you return. FWIW, the same is true for XmlSerializer which also doesn't work with non-typed values for serialization. The example above is obviously contrived with a hardcoded object graph, but it's not uncommon to get dynamic values returned from queries that have anonymous types for their result projections. Apparently there's a good possibility that Microsoft will ship Json.NET as part of Web API RTM release.  Scott Hanselman confirmed this as a footnote in his JSON Dates post a few days ago. I've heard several other people from Microsoft confirm that Json.NET will be included and be the default JSON serializer, but no details yet in what capacity it will show up. Let's hope it ends up as the default in the box. Meanwhile this post will show you how you can use it today with the beta and get JSON that matches what you should see in the RTM version. What about JsonValue? To be fair Web API DOES include a new JsonValue/JsonObject/JsonArray type that allow you to address some of these scenarios. JsonValue is a new type in the System.Json assembly that can be used to build up an object graph based on a dictionary. It's actually a really cool implementation of a dynamic type that allows you to create an object graph and spit it out to JSON without having to create .NET type first. JsonValue can also receive a JSON string and parse it without having to actually load it into a .NET type (which is something that's been missing in the core framework). This is really useful if you get a JSON result from an arbitrary service and you don't want to explicitly create a mapping type for the data returned. 
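    As a small aside, the parsing direction described above looks roughly like this - the JSON literal is made up, and while JsonValue.Parse is part of System.Json, treat the exact conversion syntax as an assumption rather than gospel:

        using System.Json;

        string json = "{\"name\":\"Rick\",\"company\":\"West Wind\"}";
        JsonValue value = JsonValue.Parse(json);    // no mapping type required
        string name = (string)value["name"];        // pull fields straight out of the parsed graph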
For serialization you can create an object structure on the fly and pass it back as part of an Web API action method like this:public JsonValue GetJsonValue() { dynamic json = new JsonObject(); json.name = "Rick"; json.company = "West Wind"; json.entered = DateTime.Now; dynamic address = new JsonObject(); address.street = "32 Kaiea"; address.zip = "96779"; json.address = address; dynamic phones = new JsonArray(); json.phoneNumbers = phones; dynamic phone = new JsonObject(); phone.type = "Home"; phone.number = "808 123-1233"; phones.Add(phone); phone = new JsonObject(); phone.type = "Home"; phone.number = "808 123-1233"; phones.Add(phone); //var jsonString = json.ToString(); return json; } which produces the following output (formatted here for easier reading):{ name: "rick", company: "West Wind", entered: "2012-03-08T15:33:19.673-10:00", address: { street: "32 Kaiea", zip: "96779" }, phoneNumbers: [ { type: "Home", number: "808 123-1233" }, { type: "Mobile", number: "808 123-1234" }] } If you need to build a simple JSON type on the fly these types work great. But if you have an existing type - or worse a query result/list that's already formatted JsonValue et al. become a pain to work with. As far as I can see there's no way to just throw an object instance at JsonValue and have it convert into JsonValue dictionary. It's a manual process. Using alternate Serializers in Web API So, currently the default serializer in WebAPI is DataContractJsonSeriaizer and I don't like it. You may not either, but luckily you can swap the serializer fairly easily. If you'd rather use the JavaScriptSerializer built into System.Web.Extensions or Json.NET today, it's not too difficult to create a custom MediaTypeFormatter that uses these serializers and can replace or partially replace the native serializer. 
Here's a MediaTypeFormatter implementation using the ASP.NET JavaScriptSerializer:using System; using System.Net.Http.Formatting; using System.Threading.Tasks; using System.Web.Script.Serialization; using System.Json; using System.IO; namespace Westwind.Web.WebApi { public class JavaScriptSerializerFormatter : MediaTypeFormatter { public JavaScriptSerializerFormatter() { SupportedMediaTypes.Add(new System.Net.Http.Headers.MediaTypeHeaderValue("application/json")); } protected override bool CanWriteType(Type type) { // don't serialize JsonValue structure use default for that if (type == typeof(JsonValue) || type == typeof(JsonObject) || type== typeof(JsonArray) ) return false; return true; } protected override bool CanReadType(Type type) { if (type == typeof(IKeyValueModel)) return false; return true; } protected override System.Threading.Tasks.Taskobject OnReadFromStreamAsync(Type type, System.IO.Stream stream, System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext) { var task = Taskobject.Factory.StartNew(() = { var ser = new JavaScriptSerializer(); string json; using (var sr = new StreamReader(stream)) { json = sr.ReadToEnd(); sr.Close(); } object val = ser.Deserialize(json,type); return val; }); return task; } protected override System.Threading.Tasks.Task OnWriteToStreamAsync(Type type, object value, System.IO.Stream stream, System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext, System.Net.TransportContext transportContext) { var task = Task.Factory.StartNew( () = { var ser = new JavaScriptSerializer(); var json = ser.Serialize(value); byte[] buf = System.Text.Encoding.Default.GetBytes(json); stream.Write(buf,0,buf.Length); stream.Flush(); }); return task; } } } Formatter implementation is pretty simple: You override 4 methods to tell which types you can handle and then handle the input or output streams to create/parse the JSON data. Note that when creating output you want to take care to still allow JsonValue/JsonObject/JsonArray types to be handled by the default serializer so those objects serialize properly - if you let either JavaScriptSerializer or JSON.NET handle them they'd try to render the dictionaries which is very undesirable. 
If you'd rather use Json.NET here's the JSON.NET version of the formatter:// this code requires a reference to JSON.NET in your project #if true using System; using System.Net.Http.Formatting; using System.Threading.Tasks; using System.Web.Script.Serialization; using System.Json; using Newtonsoft.Json; using System.IO; using Newtonsoft.Json.Converters; namespace Westwind.Web.WebApi { public class JsonNetFormatter : MediaTypeFormatter { public JsonNetFormatter() { SupportedMediaTypes.Add(new System.Net.Http.Headers.MediaTypeHeaderValue("application/json")); } protected override bool CanWriteType(Type type) { // don't serialize JsonValue structure use default for that if (type == typeof(JsonValue) || type == typeof(JsonObject) || type == typeof(JsonArray)) return false; return true; } protected override bool CanReadType(Type type) { if (type == typeof(IKeyValueModel)) return false; return true; } protected override System.Threading.Tasks.Taskobject OnReadFromStreamAsync(Type type, System.IO.Stream stream, System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext) { var task = Taskobject.Factory.StartNew(() = { var settings = new JsonSerializerSettings() { NullValueHandling = NullValueHandling.Ignore, }; var sr = new StreamReader(stream); var jreader = new JsonTextReader(sr); var ser = new JsonSerializer(); ser.Converters.Add(new IsoDateTimeConverter()); object val = ser.Deserialize(jreader, type); return val; }); return task; } protected override System.Threading.Tasks.Task OnWriteToStreamAsync(Type type, object value, System.IO.Stream stream, System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext, System.Net.TransportContext transportContext) { var task = Task.Factory.StartNew( () = { var settings = new JsonSerializerSettings() { NullValueHandling = NullValueHandling.Ignore, }; string json = JsonConvert.SerializeObject(value, Formatting.Indented, new JsonConverter[1] { new IsoDateTimeConverter() } ); byte[] buf = System.Text.Encoding.Default.GetBytes(json); stream.Write(buf,0,buf.Length); stream.Flush(); }); return task; } } } #endif   One advantage of the Json.NET serializer is that you can specify a few options on how things are formatted and handled. You get null value handling and you can plug in the IsoDateTimeConverter which is nice to product proper ISO dates that I would expect any Json serializer to output these days. Hooking up the Formatters Once you've created the custom formatters you need to enable them for your Web API application. To do this use the GlobalConfiguration.Configuration object and add the formatter to the Formatters collection. Here's what this looks like hooked up from Application_Start in a Web project:protected void Application_Start(object sender, EventArgs e) { // Action based routing (used for RPC calls) RouteTable.Routes.MapHttpRoute( name: "StockApi", routeTemplate: "stocks/{action}/{symbol}", defaults: new { symbol = RouteParameter.Optional, controller = "StockApi" } ); // WebApi Configuration to hook up formatters and message handlers // optional RegisterApis(GlobalConfiguration.Configuration); } public static void RegisterApis(HttpConfiguration config) { // Add JavaScriptSerializer formatter instead - add at top to make default //config.Formatters.Insert(0, new JavaScriptSerializerFormatter()); // Add Json.net formatter - add at the top so it fires first! 
// This leaves the old one in place so JsonValue/JsonObject/JsonArray still are handled config.Formatters.Insert(0, new JsonNetFormatter()); } One thing to remember here is the GlobalConfiguration object which is Web API's static configuration instance. I think this thing is seriously misnamed given that GlobalConfiguration could stand for anything and so is hard to discover if you don't know what you're looking for. How about WebApiConfiguration or something more descriptive? Anyway, once you know what it is you can use the Formatters collection to insert your custom formatter. Note that I insert my formatter at the top of the list so it takes precedence over the default formatter. I also am not removing the old formatter because I still want JsonValue/JsonObject/JsonArray to be handled by the default serialization mechanism. Since they process in sequence and I exclude processing for these types, JsonValue et al. still get properly serialized/deserialized. Summary Currently DataContractJsonSerializer in Web API is a pain, but at least we have the ability with relatively limited effort to replace the MediaTypeFormatter and plug in our own JSON serializer. This is useful for many scenarios - if you have existing client applications that used MVC JsonResult or ASP.NET AJAX results from ASMX AJAX services you can plug in the JavaScript serializer and get exactly the same serializer you used in the past, so your results will be the same and won't potentially break clients. JSON serializers do vary a bit in how they serialize some of the more complex types (like Dictionaries and dates for example), so if you're migrating it might be helpful to ensure your client code doesn't break when you switch to ASP.NET Web API. Going forward it looks like Microsoft is planning on plugging Json.Net into Web API and making it the default. I think that's an awesome choice since Json.Net has been around forever, is fast and easy to use and provides a ton of functionality as part of this great library. I just wish Microsoft would have figured this out sooner instead of integrating with it now at the last minute, especially given that Json.Net has a similar set of lower-level JSON objects (JsonValue/JsonObject etc.) which will now end up being duplicated by the native System.Json stuff. It's not like we don't already have enough confusion regarding which JSON serializer to use (JavaScriptSerializer, DataContractJsonSerializer, JsonValue/JsonObject/JsonArray and now Json.Net). For years I've been using my own JSON serializer because the built-in choices were just too limited. However, with an official endorsement of Json.Net I'm happily moving on to use that in my applications. Let's see and hope Microsoft gets this right before ASP.NET Web API goes gold. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web Api, AJAX, ASP.NET

    Read the article

  • Where does ASP.NET Web API Fit?

    - by Rick Strahl
    With the pending release of ASP.NET MVC 4 and the new ASP.NET Web API, there has been a lot of discussion of where the new Web API technology fits in the ASP.NET Web stack. There are a lot of choices to build HTTP based applications available now on the stack - we've come a long way from when WebForms and Http Handlers/Modules where the only real options. Today we have WebForms, MVC, ASP.NET Web Pages, ASP.NET AJAX, WCF REST and now Web API as well as the core ASP.NET runtime to choose to build HTTP content with. Web API definitely squarely addresses the 'API' aspect - building consumable services - rather than HTML content, but even to that end there are a lot of choices you have today. So where does Web API fit, and when doesn't it? But before we get into that discussion, let's talk about what a Web API is and why we should care. What's a Web API? HTTP 'APIs' (Microsoft's new terminology for a service I guess)  are becoming increasingly more important with the rise of the many devices in use today. Most mobile devices like phones and tablets run Apps that are using data retrieved from the Web over HTTP. Desktop applications are also moving in this direction with more and more online content and synching moving into even traditional desktop applications. The pending Windows 8 release promises an app like platform for both the desktop and other devices, that also emphasizes consuming data from the Cloud. Likewise many Web browser hosted applications these days are relying on rich client functionality to create and manipulate the browser user interface, using AJAX rather than server generated HTML data to load up the user interface with data. These mobile or rich Web applications use their HTTP connection to return data rather than HTML markup in the form of JSON or XML typically. But an API can also serve other kinds of data, like images or other binary files, or even text data and HTML (although that's less common). A Web API is what feeds rich applications with data. ASP.NET Web API aims to service this particular segment of Web development by providing easy semantics to route and handle incoming requests and an easy to use platform to serve HTTP data in just about any content format you choose to create and serve from the server. But .NET already has various HTTP Platforms The .NET stack already includes a number of technologies that provide the ability to create HTTP service back ends, and it has done so since the very beginnings of the .NET platform. From raw HTTP Handlers and Modules in the core ASP.NET runtime, to high level platforms like ASP.NET MVC, Web Forms, ASP.NET AJAX and the WCF REST engine (which technically is not ASP.NET, but can integrate with it), you've always been able to handle just about any kind of HTTP request and response with ASP.NET. The beauty of the raw ASP.NET platform is that it provides you everything you need to build just about any type of HTTP application you can dream up from low level APIs/custom engines to high level HTML generation engine. ASP.NET as a core platform clearly has stood the test of time 10+ years later and all other frameworks like Web API are built on top of this ASP.NET core. However, although it's possible to create Web APIs / Services using any of the existing out of box .NET technologies, none of them have been a really nice fit for building arbitrary HTTP based APIs. 
Sure, you can use an HttpHandler to create just about anything, but you have to build a lot of plumbing to build something more complex like a comprehensive API that serves a variety of requests, handles multiple output formats and can easily pass data up to the server in a variety of ways. Likewise you can use ASP.NET MVC to handle routing and creating content in various formats fairly easily, but it doesn't provide a great way to automatically negotiate content types and serve various content formats directly (it's possible to do with some plumbing code of your own but not built in). Prior to Web API, Microsoft's main push for HTTP services has been WCF REST, which was always an awkward technology that had a severe personality conflict, not being clear on whether it wanted to be part of WCF or purely a separate technology. In the end it didn't do either WCF compatibility or WCF agnostic pure HTTP operation very well, which made for a very developer-unfriendly environment. Personally I didn't like any of the implementations at the time, so much so that I ended up building my own HTTP service engine (as part of the West Wind Web Toolkit), as have a few other third party tools that provided much better integration and ease of use. With the release of Web API for the first time I feel that I can finally use the tools in the box and not have to worry about creating and maintaining my own toolkit as Web API addresses just about all the features I implemented on my own and much more. ASP.NET Web API provides a better HTTP Experience ASP.NET Web API differentiates itself from the previous Microsoft in-box HTTP service solutions in that it was built from the ground up around the HTTP protocol and its messaging semantics. Unlike WCF REST or ASP.NET AJAX with ASMX, it’s a brand new platform rather than bolted on technology that is supposed to work in the context of an existing framework. The strength of the new ASP.NET Web API is that it combines the best features of the platforms that came before it, to provide a comprehensive and very usable HTTP platform. Because it's based on ASP.NET and borrows a lot of concepts from ASP.NET MVC, Web API should be immediately familiar and comfortable to most ASP.NET developers. Here are some of the features that Web API provides that I like: Strong Support for URL Routing to produce clean URLs using familiar MVC style routing semantics Content Negotiation based on Accept headers for request and response serialization Support for a host of supported output formats including JSON, XML, ATOM Strong default support for REST semantics but they are optional Easily extensible Formatter support to add new input/output types Deep support for more advanced HTTP features via HttpResponseMessage and HttpRequestMessage classes and strongly typed Enums to describe many HTTP operations Convention based design that drives you into doing the right thing for HTTP Services Very extensible, based on MVC like extensibility model of Formatters and Filters Self-hostable in non-Web applications  Testable using testing concepts similar to MVC Web API is meant to handle any kind of HTTP input and produce output and status codes using the full spectrum of HTTP functionality available in a straight forward and flexible manner. Looking at the list above you can see that a lot of functionality is very similar to ASP.NET MVC, so many ASP.NET developers should feel quite comfortable with the concepts of Web API. 
The Routing and core infrastructure of Web API are very similar to how MVC works providing many of the benefits of MVC, but with focus on HTTP access and manipulation in Controller methods rather than HTML generation in MVC. There’s much improved support for content negotiation based on HTTP Accept headers with the framework capable of detecting automatically what content the client is sending and requesting and serving the appropriate data format in return. This seems like such a little and obvious thing, but it's really important. Today's service backends often are used by multiple clients/applications and being able to choose the right data format for what fits best for the client is very important. While previous solutions were able to accomplish this using a variety of mixed features of WCF and ASP.NET, Web API combines all this functionality into a single robust server side HTTP framework that intrinsically understands the HTTP semantics and subtly drives you in the right direction for most operations. And when you need to customize or do something that is not built in, there are lots of hooks and overrides for most behaviors, and even many low level hook points that allow you to plug in custom functionality with relatively little effort. No Brainers for Web API There are a few scenarios that are a slam dunk for Web API. If your primary focus of an application or even a part of an application is some sort of API then Web API makes great sense. HTTP ServicesIf you're building a comprehensive HTTP API that is to be consumed over the Web, Web API is a perfect fit. You can isolate the logic in Web API and build your application as a service breaking out the logic into controllers as needed. Because the primary interface is the service there's no confusion of what should go where (MVC or API). Perfect fit. Primary AJAX BackendsIf you're building rich client Web applications that are relying heavily on AJAX callbacks to serve its data, Web API is also a slam dunk. Again because much if not most of the business logic will probably end up in your Web API service logic, there's no confusion over where logic should go and there's no duplication. In Single Page Applications (SPA), typically there's very little HTML based logic served other than bringing up a shell UI and then filling the data from the server with AJAX which means the business logic required for data retrieval and data acceptance and validation too lives in the Web API. Perfect fit. Generic HTTP EndpointsAnother good fit are generic HTTP endpoints that to serve data or handle 'utility' type functionality in typical Web applications. If you need to implement an image server, or an upload handler in the past I'd implement that as an HTTP handler. With Web API you now have a well defined place where you can implement these types of generic 'services' in a location that can easily add endpoints (via Controller methods) or separated out as more full featured APIs. Granted this could be done with MVC as well, but Web API seems a clearer and more well defined place to store generic application services. This is one thing I used to do a lot of in my own libraries and Web API addresses this nicely. Great fit. Mixed HTML and AJAX Applications: Not a clear Choice  For all the commonality that Web API and MVC share they are fundamentally different platforms that are independent of each other. A lot of people have asked when does it make sense to use MVC vs. 
Web API when you're dealing with typical Web application that creates HTML and also uses AJAX functionality for rich functionality. While it's easy to say that all 'service'/AJAX logic should go into a Web API and all HTML related generation into MVC, that can often result in a lot of code duplication. Also MVC supports JSON and XML result data fairly easily as well so there's some confusion where that 'trigger point' is of when you should switch to Web API vs. just implementing functionality as part of MVC controllers. Ultimately there's a tradeoff between isolation of functionality and duplication. A good rule of thumb I think works is that if a large chunk of the application's functionality serves data Web API is a good choice, but if you have a couple of small AJAX requests to serve data to a grid or autocomplete box it'd be overkill to separate out that logic into a separate Web API controller. Web API does add overhead to your application (it's yet another framework that sits on top of core ASP.NET) so it should be worth it .Keep in mind that MVC can generate HTML and JSON/XML and just about any other content easily and that functionality is not going away, so just because you Web API is there it doesn't mean you have to use it. Web API is not a full replacement for MVC obviously either since there's not the same level of support to feed HTML from Web API controllers (although you can host a RazorEngine easily enough if you really want to go that route) so if you're HTML is part of your API or application in general MVC is still a better choice either alone or in combination with Web API. I suspect (and hope) that in the future Web API's functionality will merge even closer with MVC so that you might even be able to mix functionality of both into single Controllers so that you don't have to make any trade offs, but at the moment that's not the case. Some Issues To think about Web API is similar to MVC but not the Same Although Web API looks a lot like MVC it's not the same and some common functionality of MVC behaves differently in Web API. For example, the way single POST variables are handled is different than MVC and doesn't lend itself particularly well to some AJAX scenarios with POST data. Code Duplication I already touched on this in the Mixed HTML and Web API section, but if you build an MVC application that also exposes a Web API it's quite likely that you end up duplicating a bunch of code and - potentially - infrastructure. You may have to create authentication logic both for an HTML application and for the Web API which might need something different altogether. More often than not though the same logic is used, and there's no easy way to share. If you implement an MVC ActionFilter and you want that same functionality in your Web API you'll end up creating the filter twice. AJAX Data or AJAX HTML On a recent post's comments, David made some really good points regarding the commonality of MVC and Web API's and its place. One comment that caught my eye was a little more generic, regarding data services vs. HTML services. David says: I see a lot of merit in the combination of Knockout.js, client side templates and view models, calling Web API for a responsive UI, but sometimes late at night that still leaves me wondering why I would no longer be using some of the nice tooling and features that have evolved in MVC ;-) You know what - I can totally relate to that. 
On the last Web based mobile app I worked on, we decided to serve HTML partials to the client via AJAX for many (but not all!) things, rather than sending down raw data to inject into the DOM on the client via templating or direct manipulation. While there are definitely more bytes on the wire with this approach, the overhead ended up being fairly small if you keep the 'data' requests small and atomic, and the cost was often made up for by not having to render HTML on the client. Server rendered HTML for AJAX templating gives so much better infrastructure support without having to screw around with 20 mismatched client libraries. Especially with MVC and partials it's pretty easy to break out your HTML logic into very small, atomic chunks, so it's actually easy to create small rendering islands that can be used via composition on the server, or via AJAX calls to small, tight partials that return HTML to the client. Although this approach is often frowned upon as too 'heavy', it worked really well in terms of developer effort as well as providing surprisingly good performance on devices. There's still plenty of jQuery and AJAX logic happening on the client, but it's more manageable in small doses rather than trying to do the entire UI composition with JavaScript and/or 'not-quite-there-yet' template engines that are very difficult to debug. This is not an issue directly related to Web API of course, but something to think about especially for AJAX or SPA style applications.

Summary

Web API is a great new addition to the ASP.NET platform and it addresses a serious need for consolidation of a lot of half-baked HTTP service API technologies that came before it. Web API feels 'right' and hits the right combination of usability and flexibility, at least for me, and it's a good fit for true API scenarios. However, just because a new platform is available doesn't mean that other tools or tech that came before it should be discarded, or everything upgraded to the new platform. There's nothing wrong with continuing to use MVC controller methods to handle API tasks if that's what your app is running now - there's very little to be gained by upgrading to Web API just because. But going forward, Web API clearly is the way to go when building HTTP data interfaces, and it's good to see that Microsoft got this one right - it was sorely needed!

Resources

ASP.NET Web API
AspConf Ask the Experts Session (first 5 minutes)

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in Web Api

    Read the article

  • Use IIS Application Initialization for keeping ASP.NET Apps alive

    - by Rick Strahl
    I've been working quite a bit with Windows Services in the recent months, and well, it turns out that Windows Services are quite a bear to debug, deploy, update and maintain. The process of getting services set up,  debugged and updated is a major chore that has to be extensively documented and or automated specifically. On most projects when a service is built, people end up scrambling for the right 'process' to use for administration. Web app deployment and maintenance on the other hand are common and well understood today, as we are constantly dealing with Web apps. There's plenty of infrastructure and tooling built into Web Tools like Visual Studio to facilitate the process. By comparison Windows Services or anything self-hosted for that matter seems convoluted.In fact, in a recent blog post I mentioned that on a recent project I'd been using self-hosting for SignalR inside of a Windows service, because the application is in fact a 'service' that also needs to send out lots of messages via SignalR. But the reality is that it could just as well be an IIS application with a service component that runs in the background. Either way you look at it, it's either a Windows Service with a built in Web Server, or an IIS application running a Service application, neither of which follows the standard Service or Web App template.Personally I much prefer Web applications. Running inside of IIS I get all the benefits of the IIS platform including service lifetime management (crash and restart), controlled shutdowns, the whole security infrastructure including easy certificate support, hot-swapping of code and the the ability to publish directly to IIS from within Visual Studio with ease.Because of these benefits we set out to move from the self hosted service into an ASP.NET Web app instead.The Missing Link for ASP.NET as a Service: Auto-LoadingI've had moments in the past where I wanted to run a 'service like' application in ASP.NET because when you think about it, it's so much easier to control a Web application remotely. Services are locked into start/stop operations, but if you host inside of a Web app you can write your own ticket and control it from anywhere. In fact nearly 10 years ago I built a background scheduling application that ran inside of ASP.NET and it worked great and it's still running doing its job today.The tricky part for running an app as a service inside of IIS then and now, is how to get IIS and ASP.NET launched so your 'service' stays alive even after an Application Pool reset. 7 years ago I faked it by using a web monitor (my own West Wind Web Monitor app) I was running anyway to monitor my various web sites for uptime, and having the monitor ping my 'service' every 20 seconds to effectively keep ASP.NET alive or fire it back up after a reload. I used a simple scheduler class that also includes some logic for 'self-reloading'. Hacky for sure, but it worked reliably.Luckily today it's much easier and more integrated to get IIS to launch ASP.NET as soon as an Application Pool is started by using the Application Initialization Module. The Application Initialization Module basically allows you to turn on Preloading on the Application Pool and the Site/IIS App, which essentially fires a request through the IIS pipeline as soon as the Application Pool has been launched. This means that effectively your ASP.NET app becomes active immediately, Application_Start is fired making sure your app stays up and running at all times. 
All the other features like Application Pool recycling and auto-shutdown after idle time still work, but IIS will then always immediately re-launch the application.

Getting started with Application Initialization

As of IIS 8, Application Initialization is part of the IIS feature set. For IIS 7 and 7.5 there's a separate download available via the Web Platform Installer. In IIS 8, Application Initialization is an optional install component in Windows or the Windows Server Role Manager, so make sure you explicitly select it.

IIS Configuration for Application Initialization

Initialization needs to be applied on the Application Pool as well as at the IIS Application level. As of IIS 8 these settings can be made through the IIS Administration console. Start with the Application Pool: here you need to set both Start Automatically, which is usually already set, and StartMode, which should be set to AlwaysRunning. Both have to be set - the Start Automatically flag is true by default and controls the starting of the application pool itself, while the Always Running flag is required in order to launch the application. Without the latter flag set, the site settings have no effect. Now on the Site/Application level you can specify whether the site should preload: set the Preload Enabled flag to true. At this point ASP.NET apps should auto-load. This is all that's needed to pre-load the site if all you want is to get your site launched automatically.

If you want a little more control over the load process you can add a few more settings to your web.config file that allow you to show a static page while the app is starting up. This can be useful if startup is really slow, so rather than displaying a blank screen while the user is twiddling their thumbs you can display a static HTML page instead:

<system.webServer>
  <applicationInitialization remapManagedRequestsTo="Startup.htm" skipManagedModules="true">
    <add initializationPage="ping.ashx" />
  </applicationInitialization>
</system.webServer>

This allows you to specify a page to execute in a dry run. IIS basically fakes a request and pushes it directly into the IIS pipeline without hitting the network. You specify a page and IIS will fake a request to that page - in this case ping.ashx, which just returns a simple OK string, i.e. a fast pipeline request. This request is run immediately after an Application Pool restart, and while this request is running and your app is warming up, IIS can display an alternate static page - Startup.htm above. So instead of showing users an empty loading page when clicking a link on your site, you can optionally show some sort of static status page that says, "we'll be right back". I'm not sure if that's such a brilliant idea since it can be pretty disruptive in some cases. Personally I think I prefer letting people wait, but at least get the response they were supposed to get back rather than a random page. But it's there if you need it. Note that the web.config stuff is optional. If you don't provide it, IIS hits the default site link (/) and even if there's no matching request at the end of that request it'll still fire the request through the IIS pipeline.
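The ping.ashx handler itself isn't shown in the post; any cheap managed endpoint will do. A minimal generic handler along these lines (the class name is just a placeholder) is enough to exercise the ASP.NET pipeline and return the OK string:

<%@ WebHandler Language="C#" Class="Ping" %>

using System.Web;

// Cheap warm-up endpoint: runs through the managed pipeline and returns a tiny response.
public class Ping : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/plain";
        context.Response.Write("OK");
    }

    public bool IsReusable
    {
        get { return true; }
    }
}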
Ideally though you want to make sure that an ASP.NET endpoint is hit either with your default page, or by specify the initializationPage to ensure ASP.NET actually gets hit since it's possible for IIS fire unmanaged requests only for static pages (depending how your pipeline is configured).What about AppDomain Restarts?In addition to full Worker Process recycles at the IIS level, ASP.NET also has to deal with AppDomain shutdowns which can occur for a variety of reasons:Files are updated in the BIN folderWeb Deploy to your siteweb.config is changedHard application crashThese operations don't cause the worker process to restart, but they do cause ASP.NET to unload the current AppDomain and start up a new one. Because the features above only apply to Application Pool restarts, AppDomain restarts could also cause your 'ASP.NET service' to stop processing in the background.In order to keep the app running on AppDomain recycles, you can resort to a simple ping in the Application_End event:protected void Application_End() { var client = new WebClient(); var url = App.AdminConfiguration.MonitorHostUrl + "ping.aspx"; client.DownloadString(url); Trace.WriteLine("Application Shut Down Ping: " + url); }which fires any ASP.NET url to the current site at the very end of the pipeline shutdown which in turn ensures that the site immediately starts back up.Manual Configuration in ApplicationHost.configThe above UI corresponds to the following ApplicationHost.config settings. If you're using IIS 7, there's no UI for these flags so you'll have to manually edit them.When you install the Application Initialization component into IIS it should auto-configure the module into ApplicationHost.config. Unfortunately for me, with Mr. Murphy in his best form for me, the module registration did not occur and I had to manually add it.<globalModules> <add name="ApplicationInitializationModule" image="%windir%\System32\inetsrv\warmup.dll" /> </globalModules>Most likely you won't need ever need to add this, but if things are not working it's worth to check if the module is actually registered.Next you need to configure the ApplicationPool and the Web site. The following are the two relevant entries in ApplicationHost.config.<system.applicationHost> <applicationPools> <add name="West Wind West Wind Web Connection" autoStart="true" startMode="AlwaysRunning" managedRuntimeVersion="v4.0" managedPipelineMode="Integrated"> <processModel identityType="LocalSystem" setProfileEnvironment="true" /> </add> </applicationPools> <sites> <site name="Default Web Site" id="1"> <application path="/MPress.Workflow.WebQueueMessageManager" applicationPool="West Wind West Wind Web Connection" preloadEnabled="true"> <virtualDirectory path="/" physicalPath="C:\Clients\…" /> </application> </site> </sites> </system.applicationHost>On the Application Pool make sure to set the autoStart and startMode flags to true and AlwaysRunning respectively. On the site make sure to set the preloadEnabled flag to true.And that's all you should need. You can still set the web.config settings described above as well.ASP.NET as a Service?In the particular application I'm working on currently, we have a queue manager that runs as standalone service that polls a database queue and picks out jobs and processes them on several threads. The service can spin up any number of threads and keep these threads alive in the background while IIS is running doing its own thing. These threads are newly created threads, so they sit completely outside of the IIS thread pool. 
In order for this service to work all it needs is a long running reference that keeps it alive for the life time of the application.In this particular app there are two components that run in the background on their own threads: A scheduler that runs various scheduled tasks and handles things like picking up emails to send out outside of IIS's scope and the QueueManager. Here's what this looks like in global.asax:public class Global : System.Web.HttpApplication { private static ApplicationScheduler scheduler; private static ServiceLauncher launcher; protected void Application_Start(object sender, EventArgs e) { // Pings the service and ensures it stays alive scheduler = new ApplicationScheduler() { CheckFrequency = 600000 }; scheduler.Start(); launcher = new ServiceLauncher(); launcher.Start(); // register so shutdown is controlled HostingEnvironment.RegisterObject(launcher); }}By keeping these objects around as static instances that are set only once on startup, they survive the lifetime of the application. The code in these classes is essentially unchanged from the Windows Service code except that I could remove the various overrides required for the Windows Service interface (OnStart,OnStop,OnResume etc.). Otherwise the behavior and operation is very similar.In this application ASP.NET serves two purposes: It acts as the host for SignalR and provides the administration interface which allows remote management of the 'service'. I can start and stop the service remotely by shutting down the ApplicationScheduler very easily. I can also very easily feed stats from the queue out directly via a couple of Web requests or (as we do now) through the SignalR service.Registering a Background Object with ASP.NETNotice also the use of the HostingEnvironment.RegisterObject(). This function registers an object with ASP.NET to let it know that it's a background task that should be notified if the AppDomain shuts down. RegisterObject() requires an interface with a Stop() method that's fired and allows your code to respond to a shutdown request. Here's what the IRegisteredObject::Stop() method looks like on the launcher:public void Stop(bool immediate = false) { LogManager.Current.LogInfo("QueueManager Controller Stopped."); Controller.StopProcessing(); Controller.Dispose(); Thread.Sleep(1500); // give background threads some time HostingEnvironment.UnregisterObject(this); }Implementing IRegisterObject should help with reliability on AppDomain shutdowns. Thanks to Justin Van Patten for pointing this out to me on Twitter.RegisterObject() is not required but I would highly recommend implementing it on whatever object controls your background processing to all clean shutdowns when the AppDomain shuts down.Testing it outI'm still in the testing phase with this particular service to see if there are any side effects. But so far it doesn't look like it. With about 50 lines of code I was able to replace the Windows service startup to Web start up - everything else just worked as is. An honorable mention goes to SignalR 2.0's oWin hosting, because with the new oWin based hosting no code changes at all were required, merely a couple of configuration file settings and an assembly directive needed, to point at the SignalR startup class. Sweet!It also seems like SignalR is noticeably faster running inside of IIS compared to self-host. 
Startup feels faster because of the preload.Starting and Stopping the 'Service'Because the application is running as a Web Server, it's easy to have a Web interface for starting and stopping the services running inside of the service. For our queue manager the SignalR service and front monitoring app has a play and stop button for toggling the queue.If you want more administrative control and have it work more like a Windows Service you can also stop the application pool explicitly from the command line which would be equivalent to stopping and restarting a service.To start and stop from the command line you can use the IIS appCmd tool. To stop:> %windir%\system32\inetsrv\appcmd stop apppool /apppool.name:"Weblog"and to start> %windir%\system32\inetsrv\appcmd start apppool /apppool.name:"Weblog"Note that when you explicitly force the AppPool to stop running either in the UI (on the ApplicationPools page use Start/Stop) or via command line tools, the application pool will not auto-restart immediately. You have to manually start it back up.What's not to like?There are certainly a lot of benefits to running a background service in IIS, but… ASP.NET applications do have more overhead in terms of memory footprint and startup time is a little slower, but generally for server applications this is not a big deal. If the application is stable the service should fire up and stay running indefinitely. A lot of times this kind of service interface can simply be attached to an existing Web application, or if scalability requires be offloaded to its own Web server.Easier to work withBut the ultimate benefit here is that it's much easier to work with a Web app as opposed to a service. While developing I can simply turn off the auto-launch features and launch the service on demand through IIS simply by hitting a page on the site. If I want to shut down an IISRESET -stop will shut down the service easily enough. I can then attach a debugger anywhere I want and this works like any other ASP.NET application. Yes you end up on a background thread for debugging but Visual Studio handles that just fine and if you stay on a single thread this is no different than debugging any other code.SummaryUsing ASP.NET to run background service operations is probably not a super common scenario, but it probably should be something that is considered carefully when building services. Many applications have service like features and with the auto-start functionality of the Application Initialization module, it's easy to build this functionality into ASP.NET. Especially when combined with the notification features of SignalR it becomes very, very easy to create rich services that can also communicate their status easily to the outside world.Whether it's existing applications that need some background processing for scheduling related tasks, or whether you just create a separate site altogether just to host your service it's easy to do and you can leverage the same tool chain you're already using for other Web projects. 
If you have lots of service projects it's worth considering… give it some thought…

© Rick Strahl, West Wind Technologies, 2005-2013
Posted in ASP.NET  SignalR  IIS

    Read the article

  • Inheritance Mapping Strategies with Entity Framework Code First CTP5 Part 1: Table per Hierarchy (TPH)

    - by mortezam
    A simple strategy for mapping classes to database tables might be “one table for every entity persistent class.” This approach sounds simple enough and, indeed, works well until we encounter inheritance. Inheritance is such a visible structural mismatch between the object-oriented and relational worlds because object-oriented systems model both “is a” and “has a” relationships. SQL-based models provide only "has a" relationships between entities; SQL database management systems don’t support type inheritance—and even when it’s available, it’s usually proprietary or incomplete. There are three different approaches to representing an inheritance hierarchy: Table per Hierarchy (TPH): Enable polymorphism by denormalizing the SQL schema, and utilize a type discriminator column that holds type information. Table per Type (TPT): Represent "is a" (inheritance) relationships as "has a" (foreign key) relationships. Table per Concrete class (TPC): Discard polymorphism and inheritance relationships completely from the SQL schema.I will explain each of these strategies in a series of posts and this one is dedicated to TPH. In this series we'll deeply dig into each of these strategies and will learn about "why" to choose them as well as "how" to implement them. Hopefully it will give you a better idea about which strategy to choose in a particular scenario. Inheritance Mapping with Entity Framework Code FirstAll of the inheritance mapping strategies that we discuss in this series will be implemented by EF Code First CTP5. The CTP5 build of the new EF Code First library has been released by ADO.NET team earlier this month. EF Code-First enables a pretty powerful code-centric development workflow for working with data. I’m a big fan of the EF Code First approach, and I’m pretty excited about a lot of productivity and power that it brings. When it comes to inheritance mapping, not only Code First fully supports all the strategies but also gives you ultimate flexibility to work with domain models that involves inheritance. The fluent API for inheritance mapping in CTP5 has been improved a lot and now it's more intuitive and concise in compare to CTP4. A Note For Those Who Follow Other Entity Framework ApproachesIf you are following EF's "Database First" or "Model First" approaches, I still recommend to read this series since although the implementation is Code First specific but the explanations around each of the strategies is perfectly applied to all approaches be it Code First or others. A Note For Those Who are New to Entity Framework and Code-FirstIf you choose to learn EF you've chosen well. If you choose to learn EF with Code First you've done even better. To get started, you can find a great walkthrough by Scott Guthrie here and another one by ADO.NET team here. In this post, I assume you already setup your machine to do Code First development and also that you are familiar with Code First fundamentals and basic concepts. You might also want to check out my other posts on EF Code First like Complex Types and Shared Primary Key Associations. A Top Down Development ScenarioThese posts take a top-down approach; it assumes that you’re starting with a domain model and trying to derive a new SQL schema. Therefore, we start with an existing domain model, implement it in C# and then let Code First create the database schema for us. However, the mapping strategies described are just as relevant if you’re working bottom up, starting with existing database tables. 
I’ll show some tricks along the way that help you dealing with nonperfect table layouts. Let’s start with the mapping of entity inheritance. -- The Domain ModelIn our domain model, we have a BillingDetail base class which is abstract (note the italic font on the UML class diagram below). We do allow various billing types and represent them as subclasses of BillingDetail class. As for now, we support CreditCard and BankAccount: Implement the Object Model with Code First As always, we start with the POCO classes. Note that in our DbContext, I only define one DbSet for the base class which is BillingDetail. Code First will find the other classes in the hierarchy based on Reachability Convention. public abstract class BillingDetail  {     public int BillingDetailId { get; set; }     public string Owner { get; set; }             public string Number { get; set; } } public class BankAccount : BillingDetail {     public string BankName { get; set; }     public string Swift { get; set; } } public class CreditCard : BillingDetail {     public int CardType { get; set; }                     public string ExpiryMonth { get; set; }     public string ExpiryYear { get; set; } } public class InheritanceMappingContext : DbContext {     public DbSet<BillingDetail> BillingDetails { get; set; } } This object model is all that is needed to enable inheritance with Code First. If you put this in your application you would be able to immediately start working with the database and do CRUD operations. Before going into details about how EF Code First maps this object model to the database, we need to learn about one of the core concepts of inheritance mapping: polymorphic and non-polymorphic queries. Polymorphic Queries LINQ to Entities and EntitySQL, as object-oriented query languages, both support polymorphic queries—that is, queries for instances of a class and all instances of its subclasses, respectively. For example, consider the following query: IQueryable<BillingDetail> linqQuery = from b in context.BillingDetails select b; List<BillingDetail> billingDetails = linqQuery.ToList(); Or the same query in EntitySQL: string eSqlQuery = @"SELECT VAlUE b FROM BillingDetails AS b"; ObjectQuery<BillingDetail> objectQuery = ((IObjectContextAdapter)context).ObjectContext                                                                          .CreateQuery<BillingDetail>(eSqlQuery); List<BillingDetail> billingDetails = objectQuery.ToList(); linqQuery and eSqlQuery are both polymorphic and return a list of objects of the type BillingDetail, which is an abstract class but the actual concrete objects in the list are of the subtypes of BillingDetail: CreditCard and BankAccount. Non-polymorphic QueriesAll LINQ to Entities and EntitySQL queries are polymorphic which return not only instances of the specific entity class to which it refers, but all subclasses of that class as well. On the other hand, Non-polymorphic queries are queries whose polymorphism is restricted and only returns instances of a particular subclass. In LINQ to Entities, this can be specified by using OfType<T>() Method. 
For example, the following query returns only instances of BankAccount: IQueryable<BankAccount> query = from b in context.BillingDetails.OfType<BankAccount>() select b; EntitySQL has OFTYPE operator that does the same thing: string eSqlQuery = @"SELECT VAlUE b FROM OFTYPE(BillingDetails, Model.BankAccount) AS b"; In fact, the above query with OFTYPE operator is a short form of the following query expression that uses TREAT and IS OF operators: string eSqlQuery = @"SELECT VAlUE TREAT(b as Model.BankAccount)                       FROM BillingDetails AS b                       WHERE b IS OF(Model.BankAccount)"; (Note that in the above query, Model.BankAccount is the fully qualified name for BankAccount class. You need to change "Model" with your own namespace name.) Table per Class Hierarchy (TPH)An entire class hierarchy can be mapped to a single table. This table includes columns for all properties of all classes in the hierarchy. The concrete subclass represented by a particular row is identified by the value of a type discriminator column. You don’t have to do anything special in Code First to enable TPH. It's the default inheritance mapping strategy: This mapping strategy is a winner in terms of both performance and simplicity. It’s the best-performing way to represent polymorphism—both polymorphic and nonpolymorphic queries perform well—and it’s even easy to implement by hand. Ad-hoc reporting is possible without complex joins or unions. Schema evolution is straightforward. Discriminator Column As you can see in the DB schema above, Code First has to add a special column to distinguish between persistent classes: the discriminator. This isn’t a property of the persistent class in our object model; it’s used internally by EF Code First. By default, the column name is "Discriminator", and its type is string. The values defaults to the persistent class names —in this case, “BankAccount” or “CreditCard”. EF Code First automatically sets and retrieves the discriminator values. TPH Requires Properties in SubClasses to be Nullable in the Database TPH has one major problem: Columns for properties declared by subclasses will be nullable in the database. For example, Code First created an (INT, NULL) column to map CardType property in CreditCard class. However, in a typical mapping scenario, Code First always creates an (INT, NOT NULL) column in the database for an int property in persistent class. But in this case, since BankAccount instance won’t have a CardType property, the CardType field must be NULL for that row so Code First creates an (INT, NULL) instead. If your subclasses each define several non-nullable properties, the loss of NOT NULL constraints may be a serious problem from the point of view of data integrity. TPH Violates the Third Normal FormAnother important issue is normalization. We’ve created functional dependencies between nonkey columns, violating the third normal form. Basically, the value of Discriminator column determines the corresponding values of the columns that belong to the subclasses (e.g. BankName) but Discriminator is not part of the primary key for the table. As always, denormalization for performance can be misleading, because it sacrifices long-term stability, maintainability, and the integrity of data for immediate gains that may be also achieved by proper optimization of the SQL execution plans (in other words, ask your DBA). 
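Before looking at the SQL that gets generated, here is a small end-to-end usage sketch with the classes defined earlier - the sample values are made up and the database is created by Code First conventions:

using System;
using System.Linq;

class Program
{
    static void Main()
    {
        using (var context = new InheritanceMappingContext())
        {
            // Both concrete types end up in the single BillingDetails table,
            // distinguished by the Discriminator column.
            context.BillingDetails.Add(new BankAccount
            {
                Owner = "Morteza",
                Number = "123",
                BankName = "Acme Bank",
                Swift = "ACMEUS33"
            });
            context.BillingDetails.Add(new CreditCard
            {
                Owner = "Morteza",
                Number = "456",
                CardType = 1,
                ExpiryMonth = "12",
                ExpiryYear = "2012"
            });
            context.SaveChanges();

            // Polymorphic query: returns both BankAccount and CreditCard instances.
            foreach (var detail in context.BillingDetails)
                Console.WriteLine("{0}: {1}", detail.GetType().Name, detail.Owner);

            // Non-polymorphic query: only BankAccount instances.
            var accounts = context.BillingDetails.OfType<BankAccount>().ToList();
            Console.WriteLine("{0} bank account(s)", accounts.Count);
        }
    }
}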
Generated SQL Query

Let's take a look at the SQL statements that EF Code First sends to the database when we write queries in LINQ to Entities or EntitySQL. For example, the polymorphic query for BillingDetails that you saw generates the following SQL statement:

SELECT
  [Extent1].[Discriminator] AS [Discriminator],
  [Extent1].[BillingDetailId] AS [BillingDetailId],
  [Extent1].[Owner] AS [Owner],
  [Extent1].[Number] AS [Number],
  [Extent1].[BankName] AS [BankName],
  [Extent1].[Swift] AS [Swift],
  [Extent1].[CardType] AS [CardType],
  [Extent1].[ExpiryMonth] AS [ExpiryMonth],
  [Extent1].[ExpiryYear] AS [ExpiryYear]
FROM [dbo].[BillingDetails] AS [Extent1]
WHERE [Extent1].[Discriminator] IN ('BankAccount','CreditCard')

Or the non-polymorphic query for the BankAccount subclass generates this SQL statement:

SELECT
  [Extent1].[BillingDetailId] AS [BillingDetailId],
  [Extent1].[Owner] AS [Owner],
  [Extent1].[Number] AS [Number],
  [Extent1].[BankName] AS [BankName],
  [Extent1].[Swift] AS [Swift]
FROM [dbo].[BillingDetails] AS [Extent1]
WHERE [Extent1].[Discriminator] = 'BankAccount'

Note how Code First adds a restriction on the discriminator column and also how it only selects those columns that belong to the BankAccount entity.

Change Discriminator Column Data Type and Values With Fluent API

Sometimes, especially in legacy schemas, you need to override the conventions for the discriminator column so that Code First can work with the schema. The following fluent API code will change the discriminator column name to "BillingDetailType" and the values to "BA" and "CC" for BankAccount and CreditCard respectively:

protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder)
{
    modelBuilder.Entity<BillingDetail>()
                .Map<BankAccount>(m => m.Requires("BillingDetailType").HasValue("BA"))
                .Map<CreditCard>(m => m.Requires("BillingDetailType").HasValue("CC"));
}

Also, changing the data type of the discriminator column is interesting. In the above code we passed strings to the HasValue method, but this method is defined to accept a parameter of type object:

public void HasValue(object value);

Therefore, if for example we pass a value of type int to it, then Code First not only uses our desired values (i.e. 1 & 2) in the discriminator column but also changes the column type to (INT, NOT NULL):

modelBuilder.Entity<BillingDetail>()
            .Map<BankAccount>(m => m.Requires("BillingDetailType").HasValue(1))
            .Map<CreditCard>(m => m.Requires("BillingDetailType").HasValue(2));

Summary

In this post we learned about Table per Hierarchy as the default mapping strategy in Code First. The disadvantages of the TPH strategy may be too serious for your design—after all, denormalized schemas can become a major burden in the long run. Your DBA may not like it at all. In the next post, we will learn about the Table per Type (TPT) strategy, which doesn't expose you to this problem.

References
ADO.NET team blog
Java Persistence with Hibernate book

    Read the article

  • When should I use Areas in TFS instead of Team Projects

    - by Martin Hinshelwood
    Well, it depends…. If you are a small company that creates a finite number of internal projects then you will find it easier to create a single project for each of your products and have TFS do the heavy lifting with reporting, SharePoint sites and Version Control. But what if you are not… Update 9th March 2010 Michael Fourie gave me some feedback which I have integrated. Ed Blankenship via @edblankenship offered encouragement and a nice quote. Ewald Hofman gave me a couple of Cons, and maybe a few more soon. Ewald’s company, Avanade, currently uses Areas, but it looks like the manual management is getting too much and the project is getting cluttered. What if you are likely to have hundreds of projects, possibly with a multitude of internal and external projects? You might have 1 project for a customer or 10. This is the situation that most consultancies find themselves in and thus they need a more sustainable and maintainable option. What I am advocating is that we should have 1 “Team Project” per customer, and use areas to create “sub projects” within that single “Team Project”. "What you describe is what we generally do internally and what we recommend. We make very heavy use of area path to categorize the work within a larger project." - Brian Harry, Microsoft Technical Fellow & Product Unit Manager for Team Foundation Server   "We tend to use areas to segregate multiple projects in the same team project and it works well." - Tiago Pascoal, Visual Studio ALM MVP   "In general, I believe this approach provides consistency [to multi-product engagements] and lowers the administration and maintenance costs. All good." - Michael Fourie, Visual Studio ALM MVP   “@MrHinsh BTW, I'm very much a fan of very large, if not huge, team projects in TFS. Just FYI :) Use Areas & Iterations.” Ed Blankenship, Visual Studio ALM MVP   This would mean that SSW would have a single Team Project called “SSW” that contains all of our internal projects and consequently all of the Areas and Iteration move down one hierarchy to accommodate this. Where we would have had “\SSW\Sprint 1” we now have “\SSW\SqlDeploy\Sprint1” with “SqlDeploy” being our internal project. At the moment SSW has over 70 internal projects and more than 170 total projects in TFS. This method has long term benefits that help to simplify the support model for companies that often have limited internal support time and many projects. But, there are implications as TFS does not provide this model “out-of-the-box”. These implications stretch across Areas, Iterations, Queries, Project Portal and Version Control. Michael made a good comment, he said: I agree with your approach, assuming that in a multi-product engagement with a client, they are happy to adopt the same process template across all products. If they are not, then it’ll either be easy to convince them or there is a valid reason for having a different template - Michael Fourie, Visual Studio ALM MVP   At SSW we have a standard template that we use and this is applied across the board, to all of our projects. We even apply any changes to the core process template to all of our existing projects as well. If you have multiple projects for the same clients on multiple templates and you want to keep it that way, then this approach will not work for you. However, if you want to standardise as we have at SSW then this approach may benefit you as well. Implications around Areas Areas should be used for topological classification/isolation of work items. 
You can think of this as architecture areas, organisational areas or even the main features of your application. In our scenario there is an additional top level item that represents the Project / Product that we want to chop our Team Project into. Figure: Creating a sub area to represent a product/project is easy. <teamproject> <teamproject>\<Functional Area/module whatever> Becomes: <teamproject> <teamproject>\<ProjectName>\ <teamproject>\<ProjectName>\<Functional Area/module whatever> Implications around Iterations Iterations should be used for chronological classification/isolation of work items. This could include isolated time boxes, milestones or release timelines and really depends on the logical flow of your project or projects. Due to the new level in Area we need to add the same level to Iteration. This is primarily because it is unlikely that the sprints in each of your projects/products will start and end at the same time. This is just a reality of managing multiple projects. Figure: Adding the same Area value to Iteration as the top level item adds flexibility to Iteration. <teamproject>\Sprint 1 Or <teamproject>\Release 1\Sprint 1 Becomes: <teamproject>\<ProjectName>\Sprint 1 Or <teamproject>\<ProjectName>\Release 1\Sprint 1 Implications around Queries Queries are used to filter your work items based on a specified level of granularity. There are a number of queries that are built into a project created using the MSF Agile 5.0 template, but we now have multiple projects and it would be a pain to have to edit all of the work items every time we changed project, and that would only allow one team to work on one project at a time.   Figure: The Queries that are created in a normal MSF Agile 5.0 project do not quite suit our new needs. In order for project contributors to be able to query based on their project we need a couple of things. The first thing I did was to create an “_Area Template” folder that has a copy of the project layout with all the queries setup to filter based on the “_Area Template” Area and the “_Sprint template” you can see in the Area and Iteration views. Figure: The template is currently easily drag and drop, but you then need to edit the queries to point at the right Area and Iteration. This needs a tool. I then created an “Areas” folder to hold all of the area specific queries. So, when you go to create a new TFS Sub-Project you just drag “_Area Template” while holding “Ctrl” and drop it onto “Areas”. There is a little setup here. That said I managed it in around 10 minutes which is not so bad, and I can imagine it being quite easy to build a tool to create these queries Figure: These new queries can be configured in around 10 minutes, which includes setting up the Area and Iteration as well. Version Control What about your source code? Well, that is the easiest of the lot. Just create a sub folder for each of your projects/products.   Figure: Creating sub folders in source control is easy as “Right click | Create new folder”. <teamproject>\DEV\Main\ Becomes: <teamproject>\<ProjectName>\DEV\Main\ Conclusion I think it is up to each company to make a call on how you want to configure your Team Projects and it depends completely on how many projects/products you are going to have for each customer including yourself. If we decide to utilise this route it will require some configuration to get our 170+ projects into this format, and I will probably be writing some tools to help. 
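As a rough starting point for such a tool, the TFS API can create the Area nodes for a new sub-project programmatically. The snippet below is only an untested sketch - the collection URL, project name and area names are placeholders, and the queries, permissions and SharePoint sub sites would still need to be scripted separately:

using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.Server;

class CreateSubProjectAreas
{
    static void Main()
    {
        // Placeholder collection URL and Team Project name.
        var collection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
            new Uri("http://tfs:8080/tfs/DefaultCollection"));

        var css = collection.GetService<ICommonStructureService>();

        // The area root node of the Team Project, e.g. "\SSW\Area".
        NodeInfo areaRoot = css.GetNodeFromPath("\\SSW\\Area");

        // Create the "SqlDeploy" sub-project area plus a couple of functional sub-areas.
        string productUri = css.CreateNode("SqlDeploy", areaRoot.Uri);
        css.CreateNode("UI", productUri);
        css.CreateNode("Database", productUri);

        Console.WriteLine("Area nodes created under \\SSW\\Area\\SqlDeploy");
    }
}

The same service could be used to add the matching Iteration nodes under "\SSW\Iteration" so each sub-project gets its own sprint hierarchy.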
Pros

- You only have one project to upgrade when a process template changes – After going through an upgrade of over 170 projects prior to the changes in the RC I can tell you that that many projects is no fun.
- Standardises your Process Template – You will always have the same Process implementation across projects/products without exception.
- You get tighter control over the permissions – Yes, you can do this on a standard Team Project, but it gets a lot easier with practice.
- You can "move" work items from one "product" to another – Have we not always wanted to do that.
- You can rename your projects – Wahoo: everyone wants to do this, now you can.
- One set of Reporting Services reports to manage – You set an area and iteration to run reports anyway, so you may as well set both.
- Simplified Check-In Policies – There is only one set of check-in policies per client. This simplifies administration of policies.
- Simplified Alerts – As alerts are applied across multiple projects this simplifies your alert rules as per client.

Cons

All of these cons could be mitigated by a custom tool that helps automate creation of "Sub-projects" within Team Projects. This custom tool could create areas, iterations, permissions, SharePoint and queries. It just does not exist yet :)

- You need to configure the Areas and Iterations.
- You need to configure the permissions.
- You may need to configure sub sites for SharePoint (depends on your requirement) – If you have two projects/products in the same Team Project then you will not see the burn down for each one out-of-the-box, but rather a cumulative for the Team Project. This is not really that much of a problem as you would have to configure your burndown graphs for your current iteration anyway. note: When you create a sub site to a TFS linked portal it will inherit the settings of its parent site :) This is fantastic as it means that you can easily create sub sites and then set the Area and Iteration path in each of the reports to be the correct one.
- Every team wants their own customization (via Ewald Hofman) - small teams of 2 persons against teams of 30 – or even outsourcing – need their own process, and you cannot allow that because everybody gets the same work item types. note: Luckily at SSW this is not a problem as our template is standardised across all projects and customers.
- Large list of builds (via Ewald Hofman) – As the build list in Team Explorer is just a flat list it can get very cluttered. note: I would mitigate this by removing any build that has not been run in over 30 days. The build template and workflow will still be available in version control, but it will clean the list.

Feedback

Now that I have explained this method, what do you think? What other pros and cons can you see? What do you think of this approach? Will you be using it? What tools would you like to support you?

Technorati Tags: Visual Studio ALM,TFS Administration,TFS,Team Foundation Server,Project Planning,TFS Customisation

    Read the article

< Previous Page | 294 295 296 297 298 299 300 301 302 303 304 305  | Next Page >