Search Results

Search found 21168 results on 847 pages for 'microsoft onenote 2007'.


  • Excessive CPU Utilization for Bind 9.8.1 `named` processes

    - by justinzane
    I just noticed that named is eating vast amounts of CPU time for a very small network with only a few domains. Can someone help me determine what is misconfigured, please? Or how to debug this. top top - 14:13:08 up 25 days, 14:16, 1 user, load average: 1.04, 1.04, 1.05 Tasks: 149 total, 1 running, 148 sleeping, 0 stopped, 0 zombie %Cpu(s): 17.3 us, 4.3 sy, 0.0 ni, 78.2 id, 0.1 wa, 0.0 hi, 0.0 si, 0.0 st KiB Mem: 2042776 total, 1347916 used, 694860 free, 249396 buffers KiB Swap: 3976080 total, 30552 used, 3945528 free, 574164 cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 17445 bind 20 0 244m 42m 3124 S 99.4 2.2 2345:03 named rndc stats +++ Statistics Dump +++ (1352931389) ++ Incoming Requests ++ 65869 QUERY ++ Incoming Queries ++ 31809 A 241 NS 3 CNAME 27455 SOA 276 PTR 123 MX 462 TXT 5400 AAAA 7 A6 1 DS 14 DNSKEY 15 SPF 55 AXFR 8 ANY ++ Outgoing Queries ++ [View: internal] 22206 A 509 NS 10 SOA 25 PTR 12 MX 524 TXT 4851 AAAA 62 DNSKEY 19 SPF 3157 DLV [View: external] 87 A 2 NS 80 AAAA 120 DNSKEY 7 DLV [View: _bind] ++ Name Server Statistics ++ 65869 IPv4 requests received 27670 requests with EDNS(0) received 112 TCP requests received 65652 responses sent 20 truncated responses sent 27670 responses with EDNS(0) sent 62920 queries resulted in successful answer 37117 queries resulted in authoritative answer 28482 queries resulted in non authoritative answer 7 queries resulted in referral answer 591 queries resulted in nxrrset 53 queries resulted in SERVFAIL 2081 queries resulted in NXDOMAIN 14530 queries caused recursion 162 duplicate queries received 55 requested transfers completed ++ Zone Maintenance Statistics ++ 109536 IPv4 notifies sent ++ Resolver Statistics ++ [Common] [View: internal] 29362 IPv4 queries sent 2013 IPv6 queries sent 28531 IPv4 responses received 4209 NXDOMAIN received 6 SERVFAIL received 31 FORMERR received 32 EDNS(0) query failures 3359 query retries 836 query timeouts 5348 IPv4 NS address fetches 3271 IPv6 NS address fetches 83 IPv4 NS address fetch failed 2779 IPv6 NS address fetch failed 17421 DNSSEC validation attempted 12731 DNSSEC validation succeeded 4690 DNSSEC NX validation succeeded 21104 queries with RTT 10-100ms 7418 queries with RTT 100-500ms 3 queries with RTT 500-800ms 1 queries with RTT 800-1600ms [View: external] 192 IPv4 queries sent 104 IPv6 queries sent 192 IPv4 responses received 2 NXDOMAIN received 104 query retries 44 IPv4 NS address fetches 44 IPv6 NS address fetches 1 IPv4 NS address fetch failed 1 IPv6 NS address fetch failed 4 DNSSEC validation attempted 3 DNSSEC validation succeeded 1 DNSSEC NX validation succeeded 152 queries with RTT 10-100ms 40 queries with RTT 100-500ms [View: _bind] ++ Cache DB RRsets ++ [View: internal (Cache: internal)] 2007 A 652 NS 131 CNAME 1 MX 32 TXT 421 AAAA 28 DS 244 RRSIG 110 NSEC 3 DNSKEY 2 !A 2 !TXT 89 !AAAA 2 !SPF 14 !DLV 148 NXDOMAIN [View: external (Cache: external)] 55 A 12 NS 34 AAAA 2 DS 10 RRSIG 1 DNSKEY [View: _bind (Cache: _bind)] ++ Socket I/O Statistics ++ 82958 UDP/IPv4 sockets opened 2118 UDP/IPv6 sockets opened 4 TCP/IPv4 sockets opened 1 TCP/IPv6 sockets opened 82956 UDP/IPv4 sockets closed 2117 UDP/IPv6 sockets closed 58 TCP/IPv4 sockets closed 15 UDP/IPv4 socket bind failures 2117 UDP/IPv6 socket connect failures 29554 UDP/IPv4 connections established 59 TCP/IPv4 connections accepted 2117 UDP/IPv6 send errors 5 UDP/IPv4 recv errors ++ Per Zone Query Statistics ++ --- Statistics Dump --- (1352931389)
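
    A few debugging commands that may help narrow down what named is actually doing (a sketch, assuming rndc is already configured, which it appears to be since rndc stats works; dump and log file locations depend on named.conf):

        # Toggle query logging so each incoming query is written to the configured log channel
        rndc querylog

        # Dump the queries that are currently recursing (written to named.recursing in named's working directory)
        rndc recursing

        # Dump the cache to named_dump.db to look for zones that are churning or unexpectedly large
        rndc dumpdb -cache

        # Watch the named threads to see whether a single worker is spinning at 100%
        top -H -p $(pidof named)

    Comparing two rndc stats snapshots taken a minute apart also shows which counters (for example the IPv4 notifies sent) are growing fastest.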

    Read the article

  • Abnormal hangs and restarts Ubuntu 8.04

    - by jai-ho
    Hi, I am using Ubuntu 8.04 LTS and seeing the following behaviors: The system hangs after a while and becomes completely unresponsive. The system sometimes restarts itself ! Can you please help me identify what is the problem? Also please mention where should I look for the possible cause of this error. Thanks. EDIT: Got the following from the dmesg output (the system got hung and had to restart) [ 15.452015] Driver 'sr' needs updating - please use bus_type methods [ 15.456882] Driver 'sd' needs updating - please use bus_type methods [ 15.457987] sr0: scsi3-mmc drive: 52x/52x writer cd/rw xa/form2 cdda tray [ 15.457993] Uniform CD-ROM driver Revision: 3.20 [ 15.458058] sr 0:0:1:0: Attached scsi CD-ROM sr0 [ 15.463028] sd 1:0:0:0: [sda] 156301488 512-byte hardware sectors (80026 MB) [ 15.463051] sd 1:0:0:0: [sda] Write Protect is off [ 15.463055] sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00 [ 15.463083] sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA [ 15.463151] sd 1:0:0:0: [sda] 156301488 512-byte hardware sectors (80026 MB) [ 15.463167] sd 1:0:0:0: [sda] Write Protect is off [ 15.463171] sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00 [ 15.463197] sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA [ 15.463202] sda:<5sr 0:0:1:0: Attached scsi generic sg0 type 5 [ 15.464634] sd 1:0:0:0: Attached scsi generic sg1 type 0 [ 15.470120] sda1 sda2 < sda5 [ 15.495536] sd 1:0:0:0: [sda] Attached SCSI disk [ 15.759549] Attempting manual resume [ 15.759554] swsusp: Resume From Partition 8:5 [ 15.759556] PM: Checking swsusp image. [ 15.759742] PM: Resume from disk failed. [ 15.779964] EXT3-fs: INFO: recovery required on readonly filesystem. [ 15.779970] EXT3-fs: write access will be enabled during recovery. [ 19.904204] kjournald starting. 
Commit interval 5 seconds [ 19.904235] EXT3-fs: sda1: orphan cleanup on readonly fs [ 19.904245] ext3_orphan_cleanup: deleting unreferenced inode 303260 [ 19.904304] ext3_orphan_cleanup: deleting unreferenced inode 303329 [ 19.932763] ext3_orphan_cleanup: deleting unreferenced inode 3801871 [ 19.932785] ext3_orphan_cleanup: deleting unreferenced inode 3801874 [ 19.932798] ext3_orphan_cleanup: deleting unreferenced inode 3801910 [ 19.951253] ext3_orphan_cleanup: deleting unreferenced inode 3801912 [ 19.951266] ext3_orphan_cleanup: deleting unreferenced inode 3801914 [ 19.951278] ext3_orphan_cleanup: deleting unreferenced inode 3959212 [ 19.951299] ext3_orphan_cleanup: deleting unreferenced inode 3959213 [ 19.960335] ext3_orphan_cleanup: deleting unreferenced inode 3959215 [ 19.963531] ext3_orphan_cleanup: deleting unreferenced inode 3801875 [ 19.963545] ext3_orphan_cleanup: deleting unreferenced inode 3663727 [ 19.963565] ext3_orphan_cleanup: deleting unreferenced inode 3663708 [ 19.963577] ext3_orphan_cleanup: deleting unreferenced inode 4072122 [ 19.963597] ext3_orphan_cleanup: deleting unreferenced inode 4072157 [ 19.968616] ext3_orphan_cleanup: deleting unreferenced inode 4072159 [ 19.970252] ext3_orphan_cleanup: deleting unreferenced inode 4072160 [ 19.970264] ext3_orphan_cleanup: deleting unreferenced inode 4072161 [ 19.992889] ext3_orphan_cleanup: deleting unreferenced inode 4072264 [ 19.992903] ext3_orphan_cleanup: deleting unreferenced inode 4072267 [ 19.999585] ext3_orphan_cleanup: deleting unreferenced inode 4072268 [ 20.008329] ext3_orphan_cleanup: deleting unreferenced inode 4072270 [ 20.008343] ext3_orphan_cleanup: deleting unreferenced inode 4072123 [ 20.008360] ext3_orphan_cleanup: deleting unreferenced inode 4072452 [ 20.008374] ext3_orphan_cleanup: deleting unreferenced inode 4072453 [ 20.008385] ext3_orphan_cleanup: deleting unreferenced inode 4072124 [ 20.008398] ext3_orphan_cleanup: deleting unreferenced inode 311574 [ 20.008413] ext3_orphan_cleanup: deleting unreferenced inode 967890 [ 20.008420] EXT3-fs: sda1: 28 orphan inodes deleted [ 20.008423] EXT3-fs: recovery complete. [ 20.082622] EXT3-fs: mounted filesystem with ordered data mode. [ 29.025379] input: PC Speaker as /devices/platform/pcspkr/input/input2 [ 29.187133] Linux agpgart interface v0.102 [ 29.225338] iTCO_vendor_support: vendor-support=0 [ 29.259662] iTCO_wdt: Intel TCO WatchDog Timer Driver v1.02 (26-Jul-2007)

    Read the article

  • Deleting windows.edb and unchecking Indexing service lead to hard drive file records swapping

    - by linni
    I followed the instructions listed here: http://www.mydigitallife.info/2007/09/18/turn-off-and-disable-search-indexing-service-in-windows-xp/ to free up space on the hard drive by deleting the windows.edb indexing file. I also stopped the Windows Search service as mentioned in the comments following the article. In addition to unchecking the "Allow Indexing Service to index this disk for fast file searching" check box on the properties dialog for the C:\ drive, I did the same for two USB-connected hard drives (J:\ and I:\ ). I'm not sure why I did that; I thought it might shrink the windows.edb file so I wouldn't have to delete it (which sounded a bit risky to me at the time). The file of course didn't shrink, so I ended up deleting it and freeing up over 3 GB of space, yeehaw.

    However, as soon as I had done this I could not access the USB-connected hard drives anymore. The error I got was "I:\photos is not accessible" "The file or directory is corrupted and unreadable" when I tried to open the photos directory on I:\

    Here is where I enter the twilight zone... I try disconnecting the I:\ USB hard drive, but XP shows me that the J:\ drive has disconnected instead and I:\ is still there. So I disconnect both drives and restart the computer. I then connect one drive, but it lists the contents of the other drive at the root level. I tried connecting the drives vice versa and the same thing happens. I tried taking one of the hard drives to another computer, and when I connect it there it lists not its own contents but the contents of the other hard drive, and gives the same error as above when I try to access any of the folders (even folders on the root that have the same name as folders on the other drive, e.g. J:\photos and I:\photos)??? And no, this is not me mixing up my drive letters. Computer Management - Disk Management shows the same result as Explorer: the drive size is correct (one is 500GB, the other is 640GB) but the drive name is that of the opposite drive, as are the contents. Also, one drive was full of data and the other almost empty, but each incorrectly shows the free space status of the other drive. Somehow the USB drives seem to have switched file tables, file records, boot records or something, extremely weird!

    Even weirder, if I try to create a text file or folder on this drive, it works fine - accessing them, saving, whatever, all good - but accessing any other data on the drive gives me an error. Does anyone have a clue what is going on and, more importantly, how I can restore the correct folder listings to access my family photos??? cheers, linni

    Read the article

  • My server is slower than the average user's computer, should I still offload Access queries to SQL Server? [closed]

    - by andrewb
    Possible Duplicate: How do you do Load Testing and Capacity Planning for Databases

    I have a database set up with MS Access 2007 front ends and an SQL Server 2005 back end. At the moment, all the queries are saved in the front end as I've only recently moved to an SQL Server back end. I'm wondering how many of those queries I should save as stored procedures/views on SQL Server.

    About the system: The number of concurrent users is only a handful, though it could be as high as 25 at one time (very unlikely). The average computer has an Intel i3-2120 CPU running at 3.3 GHz, which gets a PassMark score of 3,987, whilst the server has an Intel Xeon E5335 running at 2.0 GHz, which gets a PassMark score of 2,637. Always an awkward situation when an i3 outperforms a Xeon... though the i3 is from Q1 2011 and the Xeon is Q2 2009. There is potential for a server upgrade in the future, though it wouldn't come easy.

    I'm inclined to move the queries to the back end, as they are beginning to take noticeable time and I figure that is a better way of doing things. I like the idea of throwing everything at the server, then pushing for a server upgrade. It makes more sense in my mind to be upgrading one server rather than 30 PCs. Or am I being overzealous?

    Why my question isn't a duplicate: It seems that my question has been misinterpreted and labelled a duplicate of quite a different question, one about testing and capacity planning. I'll try to explain how my question is very different from the linked question. The crux of my question is something like "Even though my server is technically slower, is it better to have it doing more of the queries?" There are two ways that people could have answered this: (1) I agree the server is going to be slower, but the extra benefits of such and such (like the less Access the better) mean you should move most to the server anyway (OR no, it doesn't outweigh the benefit, keep them in Access); (2) Actually the server will be faster because of such and such. I'm hoping that people out there could provide some answers like this, and the question in the dupe link doesn't really provide either of these answers. OK sure, I suppose I could do extensive performance testing to compare Access queries running on a local machine to SQL Server queries running on the server, but that sounds like a very hard task (particularly performance testing of Access) compared to someone giving some quick general guidance, and again, my question is looking for a lot more than immediate performance benefit.
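
    For context, "moving a query to the back end" can be as small as the sketch below: the Access saved query becomes a view or a parameterised stored procedure on SQL Server, and the front end binds to it or calls it through a pass-through query. Table and column names here are made up for illustration only.

        -- Hypothetical server-side replacement for an Access saved query
        CREATE VIEW dbo.vwOpenOrders
        AS
        SELECT o.OrderID, o.OrderDate, c.CustomerName
        FROM dbo.Orders AS o
        JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID
        WHERE o.Closed = 0;
        GO

        -- Or a parameterised stored procedure, so filtering happens on the server rather than in Access
        CREATE PROCEDURE dbo.uspOpenOrdersForCustomer @CustomerID int
        AS
        SELECT OrderID, OrderDate
        FROM dbo.Orders
        WHERE CustomerID = @CustomerID AND Closed = 0;
        GO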

    Read the article

  • Attempting to ping RPC endpoint 6001/6004 (Exchange Information Store) on server on Exchange2010

    - by MadBoy
    I have Exchange 2010 in a hosting setup like this:
      TMG 2010 as load balancer
      Exchange 2010 x 2 (CAS, MAILBOX, HUB on each server)
      AD1, AD2 machines
      File witness
    All people currently connect through OWA or POP3/SMTP and that works fine. The problem is that autodiscovery doesn't work, and RPC (in terms of setting up Outlook) doesn't work either. It doesn't work whether I am connected over VPN or not.

    The thing is, it used to work. Before a reinstall of my machine 2 days ago I was able to get mail successfully through Outlook, which had been set up using autodiscovery (but I was getting reports that setting up new clients wasn't working - so I'm not sure why my Outlook continued to work). I used https://www.testexchangeconnectivity.com to track it down and basically the message is more or less this:

      Attempting to ping RPC endpoint 6004 (NSPI Proxy Interface) on server autodiscover.domain.pl. The attempt to ping the endpoint failed.
      Additional Details: The RPC_S_SERVER_UNAVAILABLE error (0x6ba) was thrown by the RPC Runtime process.

    I tried different solutions like disabling IPv6, followed a couple of links and did all they proposed, and it's still at the very same point:

      C:\Users\admin>netstat -a | find "6001"
      TCP 0.0.0.0:6001 EXCHANGE2:0 LISTENING
      TCP [::]:6001 EXCHANGE2:0 LISTENING
      C:\Users\admin>netstat -a | find "6002"
      C:\Users\admin>netstat -a | find "6003"
      C:\Users\admin>netstat -a | find "6004"

    I followed these (and a few others):
      http://helewix.com/blog/index.php/Microsoft-Solutions/2011/02/10/exchange-2010-how-to-open-ports-6001-6002-and-6004-on-your-server-for-telnet-to-work-and-rpc-to-be-able-to-connect-2
      http://blogs.technet.com/b/exchange/archive/2008/06/20/3405633.aspx
      http://messagexchange.blogspot.com/2008/12/outlook-anywhere-failing-rpc-end-points.html
    Most relate to Exchange 2007 and I have Exchange 2010, and there's not much I can find on Exchange 2010 for this particular problem. After applying all of those solutions, error 6004 changed into error 6001, which doesn't bring me any closer to solving my problem. At this point, even though the reported error was now 6001 rather than 6004, port 6004 was still closed while 6001 stayed open.

      Attempting to ping RPC endpoint 6001 (Exchange Information Store) on server autodiscover.domain.pl. The attempt to ping the endpoint failed.
      Additional Details: The RPC_S_SERVER_UNAVAILABLE error (0x6ba) was thrown by the RPC Runtime process.

      C:\Users\admin>netstat -a | find "6001"
      TCP 0.0.0.0:6001 EXCHANGE2:0 LISTENING
      TCP [::]:6001 EXCHANGE2:0 LISTENING
      C:\Users\admin>netstat -a | find "6002"
      C:\Users\admin>netstat -a | find "6003"
      C:\Users\admin>netstat -a | find "6004"

    So I reverted back to square one. I suspect it's a problem with TMG but I really can't be sure. I tried multiple combinations but all fail.

    Read the article

  • Disable Acer eRecovery system

    - by Joel Coehoorn
    The meat of this question is that I'm looking for a way to either require a password before using a recovery partition or "break" the recovery partition (specifically, Acer eRecovery) in a way that I can later "unbreak" only by booting normally into Windows first.

    Here's the full details: I have a set of new Acer Veriton n260g machines in a computer lab. A lot of work went into setting up this lab to work well - for example, Office 2007 and other programs needed by the students were installed, all Windows updates are applied, and a default desktop is set up. All in all it's several hours' work to fully set up one machine. Unfortunately, I don't currently have the ability to easily image these machines, and even if I did I would want to avoid downtime even while an image is restored. Therefore, I've taken steps to lock them down - namely DeepFreeze and a BIOS password to prevent booting from anywhere but the frozen hard drive. DeepFreeze is an amazing product - as long as you boot from the frozen hard drive, there is no way to actually make permanent changes to that hard drive. Anything you do is wiped after the machine restarts. It lets me give students the leeway to do what they want on lab computers without worrying about them breaking something.

    The problem is that even with the BIOS locked and set to only boot from the hard drive, these Acers still have a simple way to choose a different boot source: shut them down and put a paper clip in a little hole at the top while you turn it on again. This puts them into the "Acer eRecovery" mode. This by itself is no big deal - you can still power cycle with no impact. But if you then click through the menu to reset the machine (we're now past the point of curiosity and on to intent) it will wipe the hard drive and restore it to the original state. Of course, a few students have already figured this out and reset a couple of machines. That's unfortunate, but inevitable.

    I don't want to destroy the ability to do this entirely (which I could by repartitioning the drives to remove the recovery partition) but I would like a way to require a password first, or "break" the recovery system in a way that I can "unbreak" only if I first un-freeze the hard drive in DeepFreeze. Any ideas?

    Read the article

  • Outlook refuses to connect to Exchange

    - by wfaulk
    Outlook 2007 under Windows XP connecting to Exchange 2003 SP2: when started, it flips back and forth between "Connecting to Exchange Server" and "Disconnected" three or four times, then gives up and stays disconnected. I tried deleting the ost file (which was nearly 2GB), turning Cached mode on and off, recreating the account inside the Mail control panel, changing the account to use HTTP, and probably some other things. None of it seemed to make any difference, until … After fiddling with it for a while, I got this absurd error message dialog at startup, and it exits after I click OK: Cannot start Microsoft Office Outlook. Cannot open the Outlook window. The set of folders cannot be opened. Microsoft Exchange is not available. Either there are network problems or the Exchange server is down for maintenance. (I'm not sure if I can even trust that message. It's so long, it just feels like a random offset into Outlook's stack of error messages.) Either way, the Exchange server is available to everyone else, and is available via OWA from that computer. I ran Process Explorer against Outlook and it showed 5 or so ESTABLISHED connections to our Exchange server, plus listening on two UDP ports, and two CLOSE_WAIT connections to localhost. If I managed to look at Outlook's IP connections while it was doing its Connecting/Disconnected dance, it had a huge number of connections open to the Exchange server. It more than filled ProcExp's dialog box; I'm guessing at least 20, probably more. The only other odd thing is that our network admin at some point added a wildcard DNS record to the domain name that we use for email, and now Outlook will sometimes (always?) start by complaining about autodiscover.example.com's SSL certificate. There is a web server there, but it doesn't have any sort of email autodiscover anything on it. It doesn't make any difference if I click "OK" or "Cancel" (or whatever the buttons are). I also added a bogus entry for the hostname to Windows' hosts file, pointing it at 127.0.0.2, and it stopped complaining about the certificate. (The CLOSE_WAIT sockets above were from before I made this change, and went away after.) I don't think this is related, as the same problem should exist for everyone, but it might be. This is the second time this user has had this problem. The first time, I never found a solution other than reinstalling Outlook. Now that it's a pattern, I'd like to find a permanent solution, rather than assume it's a random glitch.

    Read the article

  • centos 6 ps aux hangs up

    - by Guntis
    I have problem with my server. Server is running centos 6 (CloudLinux Server release 6.2). uname -a = 2.6.32-320.4.1.lve1.1.4.el6.x86_64 That is a kvm guest. On host is debian 6. If i run command ps aux, it stuck on random process (shows some processes only), top command is working fine. htop doesn't work too (black screen). top - 12:11:51 up 34 min, 1 user, load average: 4.26, 6.71, 16.15 Tasks: 201 total, 7 running, 192 sleeping, 0 stopped, 2 zombie Cpu(s): 7.9%us, 2.8%sy, 0.0%ni, 87.5%id, 1.6%wa, 0.0%hi, 0.2%si, 0.0%st Mem: 9862044k total, 2359484k used, 7502560k free, 171720k buffers Swap: 10485720k total, 0k used, 10485720k free, 1336872k cached server has one Intel(R) Xeon(R) CPU E5606 @ 2.13GHz, free -m total used free shared buffers cached Mem: 9630 2336 7293 0 170 1324 -/+ buffers/cache: 841 8789 Swap: 10239 0 10239 php -v PHP 5.3.19 (cli) (built: Nov 28 2012 10:03:07) Copyright (c) 1997-2012 The PHP Group Zend Engine v2.3.0, Copyright (c) 1998-2012 Zend Technologies with the ionCube PHP Loader v4.2.2, Copyright (c) 2002-2012, by ionCube Ltd., and with Zend Guard Loader v3.3, Copyright (c) 1998-2010, by Zend Technologies with Suhosin v0.9.33, Copyright (c) 2007-2012, by SektionEins GmbH mysql Server version: 5.1.63-cll php -i disable_functions => apache_child_terminate, apache_setenv, define_syslog_variables, escapeshellarg, escapeshellcmd, eval, exec, fp, fput, ftp_connect, ftp_e xec, ftp_get, ftp_login, ftp_nb_fput, ftp_put, ftp_raw, ftp_rawlist, highlight_file, ini_alter, ini_get_all, ini_restore, inject_code, openlog, passthru, php _uname, phpAds_remoteInfo, phpAds_XmlRpc, phpAds_xmlrpcDecode, phpAds_xmlrpcEncode, popen, posix_getpwuid, posix_kill, posix_mkfifo, posix_setpgid, posix_set sid, posix_setuid, posix_setuid, posix_uname, proc_close, proc_get_status, proc_nice, proc_open, proc_terminate, shell_exec, syslog, system, xmlrpc_entity_de code, xmlrpc_server_create, putenv, show_source,mail => apache_child_terminate, apache_setenv, define_syslog_variables, escapeshellarg, escapeshellcmd, eval, exec, fp, fput, ftp_connect, ftp_exec, ftp_get, ftp_login, ftp_nb_fput, ftp_put, ftp_raw, ftp_rawlist, highlight_file, ini_alter, ini_get_all, ini_restore, inject_code, openlog, passthru, php_uname, phpAds_remoteInfo, phpAds_XmlRpc, phpAds_xmlrpcDecode, phpAds_xmlrpcEncode, popen, posix_getpwuid, posix_kill, pos ix_mkfifo, posix_setpgid, posix_setsid, posix_setuid, posix_setuid, posix_uname, proc_close, proc_get_status, proc_nice, proc_open, proc_terminate, shell_exe c, syslog, system, xmlrpc_entity_decode, xmlrpc_server_create, putenv, show_source,mail ... 
suhosin.executor.disable_eval => Off => Off suhosin.executor.eval.blacklist => include,include_once,require,require_once,curl_init,fpassthru,base64_encode,base64_decode,mail,exec,system,proc_open,leak, syslog,pfsockopen,shell_exec,ini_restore,symlink,stream_socket_server,proc_nice,popen,proc_get_status,dl, pcntl_exec, pcntl_fork, pcntl_signal,pcntl_waitpid, pcntl_wexitstatus, pcntl_wifexited, pcntl_wifsignaled,pcntl_wifstopped, pcntl_wstopsig, pcntl_wtermsig, socket_accept,socket_bind, socket_connect, socket_cr eate, socket_create_listen,socket_create_pair,link,register_shutdown_function,register_tick_function,gzinflate => include,include_once,require,require_once,c url_init,fpassthru,base64_encode,base64_decode,mail,exec,system,proc_open,leak,syslog,pfsockopen,shell_exec,ini_restore,symlink,stream_socket_server,proc_nic e,popen,proc_get_status,dl, pcntl_exec, pcntl_fork, pcntl_signal,pcntl_waitpid, pcntl_wexitstatus, pcntl_wifexited, pcntl_wifsignaled,pcntl_wifstopped, pcntl _wstopsig, pcntl_wtermsig, socket_accept,socket_bind, socket_connect, socket_create, socket_create_listen,socket_create_pair,link,register_shutdown_function, register_tick_function,gzinflate Sometimes i cannot kill httpd process. I run kill -9 PID even several times, and nothing happens. php runs via suphp. I learned somewhere that it can be trojan. I ran strace ps aux and it stops on open("/proc/PID/cmdline", O_RDONLY) If i reboot server, problem is gone but after some time it is back again .. :( Thanks.
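
    Since strace shows ps blocking on open("/proc/PID/cmdline"), a small probe loop like this (a sketch, run as root) may identify which process is wedged; if the loop itself stalls, the last "probing" line printed names the culprit. A process stuck in uninterruptible sleep (state D) would also explain why kill -9 has no effect on it.

        # Probe each process's cmdline with a 1-second timeout and report the ones that do not respond
        for d in /proc/[0-9]*; do
            echo "probing $d"
            timeout -s KILL 1 cat "$d/cmdline" > /dev/null 2>&1 || echo "  no response: $d"
        done

        # List processes in uninterruptible sleep; this usually still works because it does not read cmdline
        ps -eo pid,stat,comm | awk '$2 ~ /D/'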

    Read the article

  • Why does NX Client for Windows silently closes after connection?

    - by pavel
    Hey! I connect remotely to my Ubuntu server from a Vista machine. Now I need to run a GUI application on the server (Wireshark), so I decided to use FreeNX server/client to view the Ubuntu GUI on Vista. I have successfully installed FreeNX on Ubuntu and the NX Client on Vista. I was following this guide. Unfortunately, now I find myself stuck with the following problem: at the client, the !M logo window appears, but after a few seconds that window just closes, without even showing any error message. Guys, I'm really stuck, please help! Maybe I should have installed some graphical environment on the server?

    These are the details from the NX client; it seems there are no errors.

      -----------------
      Info: Display running with pid '7768' and handler '0x670d24'.
      NXPROXY - Version 3.4.0
      Copyright (C) 2001, 2007 NoMachine.
      See http://www.nomachine.com/ for more information.
      Info: Proxy running in client mode with pid '2168'.
      Session: Starting session at 'Sat Dec 19 10:58:35 2009'.
      Warning: Connected to remote version 3.3.0 with local version 3.4.0.
      Info: Connection with remote proxy completed.
      Info: Using WAN link parameters 768/24/1/0.
      Info: Using cache parameters 4/4096KB/16384KB/16384KB.
      Info: Using pack method 'adaptive-9' with session 'kde'.
      Info: Using ZLIB data compression 1/1/32.
      Info: Using ZLIB stream compression 1/1.
      Info: No suitable cache file found.
      Info: Forwarding X11 connections to display ':0'.
      Info: Listening to font server connections on port '11000'.
      Session: Session started at 'Sat Dec 19 10:58:35 2009'.
      Info: Established X server connection.
      Info: Using shared memory parameters 0/0K.
      Session: Terminating session at 'Sat Dec 19 10:58:37 2009'.
      Session: Session terminated at 'Sat Dec 19 10:58:37 2009'.
      -----------

    Read the article

  • Event ID 9331 MSExchangeSA & Event ID 9335 MSExchangeSA

    - by George
    I get these two Exchange 2010 Global Address Book-related event IDs:

      Event ID 9331 MSExchangeSA: OABGen encountered error 80004005 (internal ID 50101f1) accessing the public folder database while generating the offline address list for address list '/'. -\Default Offline Address List

      Event ID 9335 MSExchangeSA: OABGen encountered error 80004005 while cleaning the offline address list public folders under /o=xxxxx xxxx/cn=addrlists/cn=oabs/cn=Default Offline Address List. Please make sure the public folder database is mounted and replicas exist of the offline address list folders. No offline address lists have been generated. Please check the event log for more information. -\Default Offline Address List

    It is Exchange 2010 SP2 sitting on Windows 2008 Enterprise Edition. Essentially the issue is that the Global Address Book is not being updated on Outlook clients. We are using Outlook 2007 and 2010. So far I have tried running the following command: Update-FileDistributionService -Identity ExchangeServer -Type "OAB" And I tried this solution as well:

    1) Make sure the Microsoft Exchange System Attendant is running. It will be set to start automatically by default, but it doesn't. This is a known issue. Start this service manually. When running, you will not get an error when trying to update the GAL.
    2) "Apply" any changes made to any address lists before the GAL will update Outlook properly. In Organization Configuration - Mailbox in EMC, view the properties of the Default Global Address Book in the Offline Address Book tab. In the properties window, select the Address Lists tab. This shows which address lists make up the GAL.
    3) Close the properties window and select the Address Lists tab in Organization Configuration - Mailbox. Right-click each address list used by the Def GAL and click "Apply" (make sure the "Immediately" radio button is checked).
    4) Last, go back to the Offline Address Book tab, right-click the GAL and select "Update". After a few send/receives in the Outlook clients, their Global Address List should update to show the latest changes.

    Neither one of those solutions helped, so I am not really sure what to do here. Also, I am aware of changing the registry on each local computer, but that would be close to impossible as we have 8 offices in 3 different countries. Any suggestions?

    EDIT 7.XII.2012 @ 10.35: I forgot to mention that we did rebuild the address book and that didn't help.
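
    A few Exchange Management Shell checks may also be worth running, since 9331/9335 point at OAB generation failing against the public folder database (these are standard Exchange 2010 cmdlets; the OAB and server names below are placeholders for your own):

        # Which server generates the OAB, and whether public folder distribution is still enabled
        Get-OfflineAddressBook "Default Offline Address List" | fl Server,PublicFolderDistributionEnabled,VirtualDirectories

        # Confirm a public folder database exists, is mounted, and is reachable from the generating server
        Get-PublicFolderDatabase -Status | fl Name,Server,Mounted

        # Force a rebuild, then push the result out to the CAS distribution point
        Update-OfflineAddressBook "Default Offline Address List"
        Update-FileDistributionService -Identity ExchangeServer -Type "OAB"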

    Read the article

  • How should I convert a physical drive to a VHD for use with VirtualPC?

    - by RBerteig
    I have the hard disks from a PC that was happily running Windows Me until it suffered an unknown hardware failure. The drives are intact, and can be mounted and read on other PCs. We have data backups, but there is licensed software installed that may not be possible to migrate to newer versions running on a more modern platform, making the idea of just booting a virtual image attractive. Is it possible to make VHDs from the drives such that I can boot them in VirtualPC? If not VirtualPC, would it be possible in any other virtualization tool?

    Edit: Some more details.... The system was running Windows Me, but upgraded from Windows 95 (or possibly 98). It can't have been more than a Pentium II, but I will have to look at the motherboard to confirm that. There were no "exotic" devices installed, and nothing beyond the usual legacy stuff that would need to survive into a virtual machine. The licensed software did not have a dongle, so I won't need to worry about virtualizing a physical dongle of some kind. Licenses were probably tied to the disk serial number. There were two HDs, both IDE. The boot disk is about 6GB, and the spare data disk is 12GB, but nearly empty. I have a small bias in favor of VirtualPC just because it's free and I've used it successfully in the past. But this is a good excuse to revisit the state of the art. I do know from direct experience that it is possible to install and boot DOS 5.0 and Win95 in VirtualPC, but the VM extensions weren't available so the experience isn't as seamless as I would have liked. A very old DirectX game that failed miserably under XP SP2 runs really nicely on that VM, and actually plays better in a lot of ways than it did on period hardware, so that gives me hope that this is possible.

    Edit 2: Well, I'm closer than I was when I asked... so thanks to all for helpful suggestions and hints as to what I should be trying. I used WinImage to copy the disks, and VirtualPC 2007 to attempt to boot. So far, I have it booting in safe mode, but hanging with a black screen otherwise. I strongly suspect that the copy of Artisoft Lantastic 8.0 (anyone else remember them?) that is still installed for networking with even older PCs that mostly don't exist any more is the culprit there. In my infinite free time, I will try to resolve the differences between a Safe Mode boot and a normal boot, and I feel that it is likely to yield to pressure. I'd accept more than one answer if I could... this isn't as black and white a question as the one-accepted-answer convention assumes.
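
    One possible route, offered only as an illustration (I have not tried it against Windows Me/FAT32 volumes): Sysinternals Disk2vhd can be run on the PC where the old IDE disks are attached and pointed at the source drive letters and a target VHD file, which can then be attached to a Virtual PC machine.

        :: Run on the machine the old disks are mounted on; E: and F: are example drive letters for the old volumes
        disk2vhd.exe E: F: D:\vhds\winme.vhd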

    Read the article

  • Suggestions for splitting server roles amongst Hyper-V virtual servers / RAID6 or RAID10? / AppAssure

    - by Anon
    We have 2 Hyper-V hosts at present running 1 virtual server that was converted from a physical box running all roles. My plan is to split the roles over various virtual machines, upgrading to the latest software versions as I go, and use the backup server as a standby in case the main server fails. AppAssure backup software has a feature called Virtual Standby, so the VHD's can be ready to be fired up on the backup server if necessary. Off-site backups will be done via external USB drive for now. I'm just seeking some input/suggestions into how I'm planning to split the roles out amongst various virtual servers. Also, I'm curious how to setup the storage on the servers. We do not have any NAS's, SAN'S or any budget for this. What would the best RAID level be to use? I'm thinking either RAID6 (which is currently used) however I'm concerned about the write speeds, or RAID10 but again I'm worried that I can only lose 1 drive (from the same mirror) as opposed to any 2 with RAID6. I realise I have a hot swap for this, but what if a further drive fails during a rebuild? Is the write penalty of RAID6 worth the extra reliability over RAID10? Or will it be too slow with all the roles I am planning, therefore RAID10 is my only real option? The reason for the needed redundancy is I am the only technician and I'm not always on-site. Options I've considered: 1) 5 drives in RAID6 set, 200gb for host OS, rest for VM storage. 1 drive for hot swap - this is how it is currently setup 2) 4 drives in RAID10 set, 200gb for host OS, rest for VM storage. 2 drives for hot swap 3) 4 drives in RAID10 set for VM storage, 2 drives in RAID1 set for host OS. No drives for hot swap - While this is probably the best option with the amount of drives I have, I don't like the idea of having no hot swap 4) 3 drives in RAID6 set for VM storage, 2 drives in RAID1 set for host OS. 1 drive for hot swap All options give us enough storage capacity for our files, etc. We don't have any budget for extra drives or extra hot swap HD chassis for the servers. We have about 70 clients and about 150 users. MAIN SERVER Intel Xeon 5520 @ 2.27 GHz (2 processors) 16GB RAM 6 x 1TB Seagate Barracuda ES.2 Enterprise SATA drives Intel SRCSATAWB RAID controller Virtual machine workload using Hyper-V on Windows Server 2008 R2: DC01 - Active Directory Domain Controller / DNS server / Global catalog - 1GB RAM DC02 - Active Directory Domain Controller / DNS server / Global catalog - 1GB RAM Member Server - DHCP server, File server, Print server - 1GB RAM SCCM Member Server - 4GB RAM Third Party Software Member Server - A/V server, Ticketing software, etc - 4GB RAM Exchange 2007 - 4GB RAM - however we are probably migrating to a hosted solution, therefore freeing up resources BACKUP SERVER Intel Xeon E5410 @ 2.33GHz (2 processors) 16GB RAM 6 x 2TB WD RE4 SATA drives Intel SRCSASRB RAID controller Virtual machine workload using Hyper-V on Windows Server 2008 R2: AppAssure backup software - 8GB RAM

    Read the article

  • Exchange 2010 Internal Auto Discover Migrate away from current .local DNS name

    - by Bryan
    We have an Exchange 2010 Server, running within our Active Directory domain, with an internal hostname of server.example.local. The server is configured for Exchange anywhere, but currently has a self signed certificate with a name of server.example.local installed. Internally, clients connect and work fine, but externally, we are having certificate errors as you would expect. I'm about to purchase a UCC SSL Certificate to install on the server with all the relevant SANs on the certificate to correct this, but due to obvious problem obtaining a trusted cert with .local as a subject alternative name, I'm looking to configure clients on the internal network so that they don't use any reference to the .local hostname. I've configured our external DNS name for the server as exchange.example.com, and have created an CNAME for autodiscover.example.com which also (correctly) points to exchange.example.com. I've also configured internal DNS records for these two hostnames which point to the internal interface of the same server. I don't anticipate any problems here. I'm now trying to reconfigure Auto Discover internally, so that Outlook attempts to connect to exchange.example.com. I've followed the steps in KB940726 to prepare for this, and this appeared to work fine. No errors were generated and I was able to verify the CAS name in AD using ADSI edit. I've just tried testing this with a newly created test user account complete with a new Exchange mailbox, and Outlook 2007 connects fine on the internal network, but looking deeper in the Exchange profile, Outlook is still resolving the server name as server.example.local. Could it be the self signed cert, that is causing Outlook to display the server name as server.example.local, or is there still something wrong with my internal autodiscover configuration? Edit I've proven it isn't the certificate that is responsible for outlook returning server.example.local, by installing another self certified certificate with a name of test.example.com. When creating a new outlook profile, I get the mismatch error I'm expceting, but after accepting the cert, and finishing the config of the Outlook profile, again it still shows server.example.local as the server name. This means that if I were to purchase the UCC cert now, that external client would work fine, but internal clients would show a certificate name mismatch. Any ideas where to start diagnosing this?
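
    It may be worth checking where the internal name is actually coming from: the autodiscover SCP and the internal virtual directory URLs control what autodiscover hands out, while the server name shown in the Outlook profile comes from the RPC endpoint on the mailbox database, not from the certificate. A sketch using standard Exchange 2010 cmdlets (identities below are placeholders):

        # The autodiscover endpoint Outlook finds on the internal network (the SCP in Active Directory)
        Get-ClientAccessServer | fl Identity,AutoDiscoverServiceInternalUri
        Set-ClientAccessServer -Identity SERVER -AutoDiscoverServiceInternalUri https://autodiscover.example.com/Autodiscover/Autodiscover.xml

        # Internal URLs that autodiscover hands out (EWS and OAB shown; repeat for the other virtual directories)
        Set-WebServicesVirtualDirectory -Identity "SERVER\EWS (Default Web Site)" -InternalUrl https://exchange.example.com/EWS/Exchange.asmx
        Set-OABVirtualDirectory -Identity "SERVER\OAB (Default Web Site)" -InternalUrl https://exchange.example.com/OAB

        # The server name displayed in the Outlook profile comes from here
        Get-MailboxDatabase | fl Name,RpcClientAccessServer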

    Read the article

  • Custom url rewrite stop working after 20 seconds?

    - by 101224863727594634919
    Hi all, I have a simple question about using the custom URL rewrite module described here: http://weblogs.asp.net/scottgu/archive/2007/02/26/tip-trick-url-rewriting-with-asp-net.aspx. After a period of time the redirects stop working. When I trace the non-working requests I see the following:

      URL_CACHE_ACCESS_END
        PhysicalPath
        URLInfoFromCache true
        URLInfoAddedToCache false
        ErrorCode 0
        ErrorCode The operation completed successfully. (0x0)

    Is there any option to disable URL cache access for a website? Thanks in advance.

    Read the article

  • Windows 7 & Virtual PC and Internet (gateway) problems on host PC

    - by Mufasa
    I upgraded to Windows 7 on a PC that is a few years old. The CPU was one revision away from having Hyper-V on it, so I had to install Microsoft Virtual PC 2007 (v6.0.156.0) to run full XP instances instead of the seamless XP virtualization that is advertised so much. That's fine though; the 'older' version is useful since I use it to run different versions of the whole XP/IE stack for testing. (I'm a web developer.) ...And for the one 16-bit application we still use at the office for scheduling. * sigh *

    The virtual instances work fine, including networking. My issue is that after a reboot or coming out of sleep mode, my host Windows 7 won't connect to the Internet. It will connect to the local network fine. If I disable the "Virtual Machine Network Services" item (I'll call it "VMNS" from here on) in the LAN Connection properties box, it starts working. But then the Virtual PC instances lose their network connectivity. If I re-enable VMNS again in the same instance, everything works (Internet on the host and in the virtualized instances). But after the next reboot/sleep cycle this starts over.

    The route table gave me a clue though. When doing a cycle with VMNS enabled:

      IPv4 Route Table
      ===========================================================================
      Active Routes:
      Network Destination        Netmask          Gateway       Interface  Metric
                0.0.0.0          0.0.0.0          On-link       10.0.3.51      20
                0.0.0.0          0.0.0.0       10.0.10.10       10.0.3.51     276
      ...

    After VMNS is disabled, the first route goes away. I assume that is for VMNS to intercept the virtualized instances' network connections and forward them correctly? Just a guess though.

    More info: I checked my Firewall settings and Services (because I'm sort of a control nazi and turn off a lot) but couldn't find anything that made sense and, if turned on, changed anything. So it might be something there I'm missing, but I don't know what.

    My current hacked solution: So, I figured I'd mess with the routes myself to see if that helped, and it did. If I run a route delete 0.0.0.0 on the universal (0.0.0.0) gateway routes, and add back in just the 2nd line with route add 0.0.0.0 mask 0.0.0.0 10.0.10.10 -- the one that points to my actual gateway (10.0.10.10) -- then I don't have to mess with the disable/enable cycle of VMNS, and everything works. Running those two commands is faster than bringing up connection options and disabling and re-enabling VMNS, but I still don't want to have to use that hack script every boot either. (Oh, and I also tried messing with hard-coding TCP/IP settings in my network adapter, including setting high metrics, etc., but that didn't help either.) Any suggestions on the right way to fix this?
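
    To avoid typing the two route commands after every boot, they could be wrapped in a small batch file run at startup (a scheduled task set to run elevated, or the Startup folder with a UAC prompt); the gateway address below is the one from the question.

        @echo off
        :: Remove the default routes, including the On-link one that Virtual Machine Network Services adds
        route delete 0.0.0.0
        :: Restore the real default route via the actual gateway (10.0.10.10 in this setup)
        route add 0.0.0.0 mask 0.0.0.0 10.0.10.10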

    Read the article

  • System won't boot: Gigabyte HD 7790 1GB OC GPU issue or Corsair VS550 PSU issue?

    - by MGOwen
    Installed a new GPU, and PC won't boot. Turn it on and: No monitor signal at all (tried HDMI and VGA via DVI, on 2 working monitors). CPU and GPU fans DO spin, but No system beeps, no sounds from drives (they might make a small noise in the first 1 second or so, but there's definitely no OS loading or anything like that) If hit "power off" button it turns off immediately (no holding down for 3 seconds like usual) If I put my old HD 5670 GPU back in, everything works fine. But (plot twist!) card is not totally dead. My friend put it in his PC, and it works fine (he even played a game for 15 minutes, no issues). He has a Corsair TX850 850W and a Gigabyte MB. So my main theory is: the GPU isn't getting enough power from the PSU. But is it: Bad PSU? Seems unlikely, since it works fine with the other GPU. Also, the PSU Is brand new and 550W (single 42A/504W 12V rail). Overkill for this GPU. Corsair is a decent brand, but maybe just mine is faulty? Bad GPU? Could it be drawing more power than it should be, somehow, or something? Supposedly HD 7790 needs only 21A/75W on the 12v rail, though this one is factory overclocked a bit... but should that triple the power requirement? Something else? Could there be a motherboard incompatibility somehow? Both MB and GPU are less than a year old and PCI Express 3.0 x16. Things I've tried: Re-seating the video card Testing PC with old GPU (works fine, same PCIe slot). Checked AMD's stated amp/watt requirements of a 7790 and my PSU (see above). My PSU can output twice the amps (single rail) and 5x the Wattage a 7790 needs. Here are the full specs: Gigabyte HD 7790 1GB OC GPU Corsair VS550 550W PSU 4GB RAM AsRock H61M U3S3 motherboard i3-2100 500GB SATA HDD (2007-ish) blu-ray drive (new) PCI 802.11g card Edit: Motherboard BIOS Update seems to have fixed it. (If anyone has same problem and it doesn't work, comment here).

    Read the article

  • Why am I unable to send an attachment with Outlook via SMTP that I am able to send via Gmail / Google Apps?

    - by cwd
    I have Google Apps installed and I have tried to set up Outlook 2007 to send messages via SMTP. I followed the guide, selecting what I believe are all the correct settings. Yes, I am using POP for incoming, that is intentional but I don't believe it should affect outgoing messages. When I log into gmail (google apps) for my company, I can send a message that has an 8MB attachment (pdf file, not zipped or anything) and it sends fine. However, when I send the same message in Outlook with that same 8mb attachment it fails. Why am I unable to send an attachment with Outlook via SMTP that I am able to send via Gmail / Google Apps? The message headers are (some info omitted for privacy): Technical details of permanent failure: Google tried to deliver your message, but it was rejected by the recipient domain. We recommend contacting the other email provider for further information about the cause of this error. The error that the other server returned was: 552 552 #5.3.4 message size exceeds limit (state 17). ----- Original message ----- DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=company.com; s=google; h=from:to:cc:references:in-reply-to:subject:date:message-id :mime-version:content-type:x-mailer:thread-index:content-language; bh=7d4i/Cbt0v0sY3zt5lN6y5CdvxjbRmTBG4AuBuMxtF4=; b=IJwwxuIEdg1E4zXuGjeDod+1w3RYBBCNzSsqpuX77ih36HSiq++s3ZCQXPeU9CIZVg K8JPJQu9xjivYYjrRaYwyeowLIu0GIdR2h4kKEkFM/GNC2RFF3VwVgj+gvi5eqVZIuWn osT5/VEm10IED6B54NPOtGMgFTci6a57zzVKE= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20120113; h=from:to:cc:references:in-reply-to:subject:date:message-id :mime-version:content-type:x-mailer:thread-index:content-language :x-gm-message-state; bh=7d4i/Cbt0v0sY3zt5lN6y5CdvxjbRmTBG4AuBuMxtF4=; b=LjTecjok5K71Bymp6tZqAL2XCz03hWROV1mTK8Vf2AeEJwtel9ACu9kE5jW5iJqckb upYKPzoqYLBwAPOzMb9asWoTAZPzC7LMG65eDUc2/ZEvGqXrZs3ziUxwhF4t169yRVuy /6nm/aAt5uPMLPdobxGTJ8ahOIku1Z3gW+OcvZ6ERk1Av/bvuln09vcnyJIrHGh7eK8n cbGVxmK0aecgSPgIj2NALbHkyuxwj+LEBRV6uiz3THDjxAiNfsO5UFjV59sD+lVSBT3z ThOGE8WEXRnKHuP3FuKXyeUxKBZ2CxpWJpvDuS9EsFkln7zkISYEsRA0nUA6GSGi2Z/n 8YUg== Received: by 10.60.169.197 with SMTP id ag5mr12254920oec.137.1351036287413; Tue, 23 Oct 2012 16:51:27 -0700 (PDT) References: Date: Tue, 23 Oct 2012 19:51:16 -0400 Message-ID: <003a01cdb179$4bb2ca60$e3185f20$@com> MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_NextPart_000_003B_01CDB157.C4A12A60" X-Mailer: Microsoft Office Outlook 12.0 Thread-Index: Ac2xVCHGxoC7DDOkQBK3JSXowHb0EQAEB7agAAA/YKAAAIGcQAAAngfQAABAAPAAAFe7gAAAadvw AALgvLA= Content-Language: en-us X-Gm-Message-State: ALoCoQniMq7Fnh+NlfoWjTJPvKWbkhEaftSaFo9ZVvtRpWufTmhlRDx1a9Jf+wmYcbRh896gygNr The company I am sending email to is a company that uses Google Apps for Teams. This is their apps admin login. Should I be worried about that message? My Settings On the Google apps side I have set my SPF record and set / verified my DKIM key. Here are my outlook settings: Why am I unable to send an attachment with Outlook via SMTP that I am able to send via Gmail / Google Apps?

    Read the article

  • problem installing netbeans on freeBSD

    - by solomon
    Hi guys, I want to install NetBeans on my FreeBSD 7.2 machine. I followed the instructions listed in http://typo.submonkey.net/articles/2007/7/13/netbeans-on-freebsd but when I try the third step, which is running NetBeans using "./netbeans/bin/netbeans", I get the following error: the "etc/netbeans.cluster" file can't be found. What do you think could be the reason? I can't figure it out, please help me. Thanks in advance.

    Read the article

  • OpenSSH (Windows) does not forward X11

    - by Shulhi Sapli
    I'm running Ubuntu 13.04 in VM and I wanted to do X11 forwarding to my host (Win 8), so far it works fine using PuTTY and XMing server for Windows. But I am curious why it doesn't work if I use OpenSSH binaries (it comes together with Git for windows). This is what I've done so far: ssh -X [email protected] (also tried with -Y) then gedit but received error of Cannot open display. echo $DISPLAY came out as empty. So, I try to export DISPLAY=localhost:0.0 but it still won't work. The DISPLAY environment that I set is exactly as when it runs with Putty. I also try changing the DISPLAY to 192.168.2.3:0.0 and other display number as well, but still it won't work. Of course I could just use Putty to make it work, but I was wondering why OpenSSH binaries does not work. I have enabled all settings required in both /etc/ssh/ssh_config and /etc/ssh/sshd_config. If I run with -v option, this is what I get F:\SkyDrive\Projects> ssh -X -v [email protected] OpenSSH_4.6p1, OpenSSL 0.9.8e 23 Feb 2007 debug1: Connecting to 192.168.2.3 [192.168.2.3] port 22. debug1: Connection established. debug1: identity file /c/Users/Shulhi/.ssh/identity type -1 debug1: identity file /c/Users/Shulhi/.ssh/id_rsa type -1 debug1: identity file /c/Users/Shulhi/.ssh/id_dsa type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_6.1p1 Debian-4 debug1: match: OpenSSH_6.1p1 Debian-4 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_4.6 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-cbc hmac-md5 none debug1: kex: client->server aes128-cbc hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Host '192.168.2.3' is known and matches the RSA host key. debug1: Found key in /c/Users/Shulhi/.ssh/known_hosts:2 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey,password debug1: Next authentication method: publickey debug1: Trying private key: /c/Users/Shulhi/.ssh/identity debug1: Trying private key: /c/Users/Shulhi/.ssh/id_rsa debug1: Next authentication method: password [email protected]'s password: It seems that there is no request for X11 (I'm not sure if there is should be one too here). Any pointers why it doesn't work?
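
    One thing that stands out in the -v output is that the client never requests X11 forwarding. With the OpenSSH command-line client, DISPLAY has to point at the local X server (XMing on the Windows side) before ssh is started; otherwise -X/-Y appears to be quietly skipped. A sketch, assuming XMing is listening on local display :0:

        # In the Git Bash shell on the Windows client, point DISPLAY at the local XMing server first
        export DISPLAY=localhost:0.0
        ssh -X -v [email protected]

        # With forwarding active, the -v output should include a "Requesting X11 forwarding" line,
        # and inside the session $DISPLAY will be set by the server to something like localhost:10.0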

    Read the article

  • Using jQuery to Insert a New Database Record

    - by Stephen Walther
    The goal of this blog entry is to explore the easiest way of inserting a new record into a database using jQuery and .NET. I’m going to explore two approaches: using Generic Handlers and using a WCF service (In a future blog entry I’ll take a look at OData and WCF Data Services). Create the ASP.NET Project I’ll start by creating a new empty ASP.NET application with Visual Studio 2010. Select the menu option File, New Project and select the ASP.NET Empty Web Application project template. Setup the Database and Data Model I’ll use my standard MoviesDB.mdf movies database. This database contains one table named Movies that looks like this: I’ll use the ADO.NET Entity Framework to represent my database data: Select the menu option Project, Add New Item and select the ADO.NET Entity Data Model project item. Name the data model MoviesDB.edmx and click the Add button. In the Choose Model Contents step, select Generate from database and click the Next button. In the Choose Your Data Connection step, leave all of the defaults and click the Next button. In the Choose Your Data Objects step, select the Movies table and click the Finish button. Unfortunately, Visual Studio 2010 cannot spell movie correctly :) You need to click on Movy and change the name of the class to Movie. In the Properties window, change the Entity Set Name to Movies. Using a Generic Handler In this section, we’ll use jQuery with an ASP.NET generic handler to insert a new record into the database. A generic handler is similar to an ASP.NET page, but it does not have any of the overhead. It consists of one method named ProcessRequest(). Select the menu option Project, Add New Item and select the Generic Handler project item. Name your new generic handler InsertMovie.ashx and click the Add button. Modify your handler so it looks like Listing 1: Listing 1 – InsertMovie.ashx using System.Web; namespace WebApplication1 { /// <summary> /// Inserts a new movie into the database /// </summary> public class InsertMovie : IHttpHandler { private MoviesDBEntities _dataContext = new MoviesDBEntities(); public void ProcessRequest(HttpContext context) { context.Response.ContentType = "text/plain"; // Extract form fields var title = context.Request["title"]; var director = context.Request["director"]; // Create movie to insert var movieToInsert = new Movie { Title = title, Director = director }; // Save new movie to DB _dataContext.AddToMovies(movieToInsert); _dataContext.SaveChanges(); // Return success context.Response.Write("success"); } public bool IsReusable { get { return true; } } } } In Listing 1, the ProcessRequest() method is used to retrieve a title and director from form parameters. Next, a new Movie is created with the form values. Finally, the new movie is saved to the database and the string “success” is returned. Using jQuery with the Generic Handler We can call the InsertMovie.ashx generic handler from jQuery by using the standard jQuery post() method. 
The following HTML page illustrates how you can retrieve form field values and post the values to the generic handler: Listing 2 – Default.htm <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Add Movie</title> <script src="http://ajax.microsoft.com/ajax/jquery/jquery-1.4.2.js" type="text/javascript"></script> </head> <body> <form> <label>Title:</label> <input name="title" /> <br /> <label>Director:</label> <input name="director" /> </form> <button id="btnAdd">Add Movie</button> <script type="text/javascript"> $("#btnAdd").click(function () { $.post("InsertMovie.ashx", $("form").serialize(), insertCallback); }); function insertCallback(result) { if (result == "success") { alert("Movie added!"); } else { alert("Could not add movie!"); } } </script> </body> </html>     When you open the page in Listing 2 in a web browser, you get a simple HTML form: Notice that the page in Listing 2 includes the jQuery library. The jQuery library is included with the following SCRIPT tag: <script src="http://ajax.microsoft.com/ajax/jquery/jquery-1.4.2.js" type="text/javascript"></script> The jQuery library is included on the Microsoft Ajax CDN so you can always easily include the jQuery library in your applications. You can learn more about the CDN at this website: http://www.asp.net/ajaxLibrary/cdn.ashx When you click the Add Movie button, the jQuery post() method is called to post the form data to the InsertMovie.ashx generic handler. Notice that the form values are serialized into a URL encoded string by calling the jQuery serialize() method. The serialize() method uses the name attribute of form fields and not the id attribute. Notes on this Approach This is a very low-level approach to interacting with .NET through jQuery – but it is simple and it works! And, you don’t need to use any JavaScript libraries in addition to the jQuery library to use this approach. The signature for the jQuery post() callback method looks like this: callback(data, textStatus, XmlHttpRequest) The second parameter, textStatus, returns the HTTP status code from the server. I tried returning different status codes from the generic handler with an eye towards implementing server validation by returning a status code such as 400 Bad Request when validation fails (see http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html ). I finally figured out that the callback is not invoked when the textStatus has any value other than “success”. Using a WCF Service As an alternative to posting to a generic handler, you can create a WCF service. You create a new WCF service by selecting the menu option Project, Add New Item and selecting the Ajax-enabled WCF Service project item. Name your WCF service InsertMovie.svc and click the Add button. 
Modify the WCF service so that it looks like Listing 3: Listing 3 – InsertMovie.svc using System.ServiceModel; using System.ServiceModel.Activation; namespace WebApplication1 { [ServiceBehavior(IncludeExceptionDetailInFaults=true)] [ServiceContract(Namespace = "")] [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)] public class MovieService { private MoviesDBEntities _dataContext = new MoviesDBEntities(); [OperationContract] public bool Insert(string title, string director) { // Create movie to insert var movieToInsert = new Movie { Title = title, Director = director }; // Save new movie to DB _dataContext.AddToMovies(movieToInsert); _dataContext.SaveChanges(); // Return movie (with primary key) return true; } } }   The WCF service in Listing 3 uses the Entity Framework to insert a record into the Movies database table. The service always returns the value true. Notice that the service in Listing 3 includes the following attribute: [ServiceBehavior(IncludeExceptionDetailInFaults=true)] You need to include this attribute if you want to get detailed error information back to the client. When you are building an application, you should always include this attribute. When you are ready to release your application, you should remove this attribute for security reasons. Using jQuery with the WCF Service Calling a WCF service from jQuery requires a little more work than calling a generic handler from jQuery. Here are some good blog posts on some of the issues with using jQuery with WCF: http://encosia.com/2008/06/05/3-mistakes-to-avoid-when-using-jquery-with-aspnet-ajax/ http://encosia.com/2008/03/27/using-jquery-to-consume-aspnet-json-web-services/ http://weblogs.asp.net/scottgu/archive/2007/04/04/json-hijacking-and-how-asp-net-ajax-1-0-mitigates-these-attacks.aspx http://www.west-wind.com/Weblog/posts/896411.aspx http://www.west-wind.com/weblog/posts/324917.aspx http://professionalaspnet.com/archive/tags/WCF/default.aspx The primary requirement when calling WCF from jQuery is that the request use JSON: The request must include a content-type:application/json header. Any parameters included with the request must be JSON encoded. Unfortunately, jQuery does not include a method for serializing JSON (Although, oddly, jQuery does include a parseJSON() method for deserializing JSON). Therefore, we need to use an additional library to handle the JSON serialization. The page in Listing 4 illustrates how you can call a WCF service from jQuery. 
Listing 4 – Default2.aspx <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Add Movie</title> <script src="http://ajax.microsoft.com/ajax/jquery/jquery-1.4.2.js" type="text/javascript"></script> <script src="Scripts/json2.js" type="text/javascript"></script> </head> <body> <form> <label>Title:</label> <input id="title" /> <br /> <label>Director:</label> <input id="director" /> </form> <button id="btnAdd">Add Movie</button> <script type="text/javascript"> $("#btnAdd").click(function () { // Convert the form into an object var data = { title: $("#title").val(), director: $("#director").val() }; // JSONify the data data = JSON.stringify(data); // Post it $.ajax({ type: "POST", contentType: "application/json; charset=utf-8", url: "MovieService.svc/Insert", data: data, dataType: "json", success: insertCallback }); }); function insertCallback(result) { // unwrap result result = result["d"]; if (result === true) { alert("Movie added!"); } else { alert("Could not add movie!"); } } </script> </body> </html> There are several things to notice about Listing 4. First, notice that the page includes both the jQuery library and Douglas Crockford’s JSON2 library: <script src="Scripts/json2.js" type="text/javascript"></script> You need to include the JSON2 library to serialize the form values into JSON. You can download the JSON2 library from the following location: http://www.json.org/js.html When you click the button to submit the form, the form data is converted into a JavaScript object: // Convert the form into an object var data = { title: $("#title").val(), director: $("#director").val() }; Next, the data is serialized into JSON using the JSON2 library: // JSONify the data var data = JSON.stringify(data); Finally, the form data is posted to the WCF service by calling the jQuery ajax() method: // Post it $.ajax({   type: "POST",   contentType: "application/json; charset=utf-8",   url: "MovieService.svc/Insert",   data: data,   dataType: "json",   success: insertCallback }); You can’t use the standard jQuery post() method because you must set the content-type of the request to be application/json. Otherwise, the WCF service will reject the request for security reasons. For details, see the Scott Guthrie blog post: http://weblogs.asp.net/scottgu/archive/2007/04/04/json-hijacking-and-how-asp-net-ajax-1-0-mitigates-these-attacks.aspx The insertCallback() method is called when the WCF service returns a response. This method looks like this: function insertCallback(result) {   // unwrap result   result = result["d"];   if (result === true) {       alert("Movie added!");   } else {     alert("Could not add movie!");   } } When we called the jQuery ajax() method, we set the dataType to JSON. That causes the jQuery ajax() method to deserialize the response from the WCF service from JSON into a JavaScript object automatically. The following value is passed to the insertCallback method: {"d":true} For security reasons, a WCF service always returns a response with a “d” wrapper. 
The following line of code removes the “d” wrapper: // unwrap result result = result["d"]; To learn more about the “d” wrapper, I recommend that you read the following blog posts: http://encosia.com/2009/02/10/a-breaking-change-between-versions-of-aspnet-ajax/ http://encosia.com/2009/06/29/never-worry-about-asp-net-ajaxs-d-again/ Summary In this blog entry, I explored two methods of inserting a database record using jQuery and .NET. First, we created a generic handler and called the handler from jQuery. This is a very low-level approach. However, it is a simple approach that works. Next, we looked at how you can call a WCF service using jQuery. This approach required a little more work because you need to serialize objects into JSON. We used the JSON2 library to perform the serialization. In the next blog post, I want to explore how you can use jQuery with OData and WCF Data Services.

    Read the article

  • Where does ASP.NET Web API Fit?

    - by Rick Strahl
    With the pending release of ASP.NET MVC 4 and the new ASP.NET Web API, there has been a lot of discussion of where the new Web API technology fits in the ASP.NET Web stack. There are a lot of choices to build HTTP based applications available now on the stack - we've come a long way from when WebForms and Http Handlers/Modules where the only real options. Today we have WebForms, MVC, ASP.NET Web Pages, ASP.NET AJAX, WCF REST and now Web API as well as the core ASP.NET runtime to choose to build HTTP content with. Web API definitely squarely addresses the 'API' aspect - building consumable services - rather than HTML content, but even to that end there are a lot of choices you have today. So where does Web API fit, and when doesn't it? But before we get into that discussion, let's talk about what a Web API is and why we should care. What's a Web API? HTTP 'APIs' (Microsoft's new terminology for a service I guess)  are becoming increasingly more important with the rise of the many devices in use today. Most mobile devices like phones and tablets run Apps that are using data retrieved from the Web over HTTP. Desktop applications are also moving in this direction with more and more online content and synching moving into even traditional desktop applications. The pending Windows 8 release promises an app like platform for both the desktop and other devices, that also emphasizes consuming data from the Cloud. Likewise many Web browser hosted applications these days are relying on rich client functionality to create and manipulate the browser user interface, using AJAX rather than server generated HTML data to load up the user interface with data. These mobile or rich Web applications use their HTTP connection to return data rather than HTML markup in the form of JSON or XML typically. But an API can also serve other kinds of data, like images or other binary files, or even text data and HTML (although that's less common). A Web API is what feeds rich applications with data. ASP.NET Web API aims to service this particular segment of Web development by providing easy semantics to route and handle incoming requests and an easy to use platform to serve HTTP data in just about any content format you choose to create and serve from the server. But .NET already has various HTTP Platforms The .NET stack already includes a number of technologies that provide the ability to create HTTP service back ends, and it has done so since the very beginnings of the .NET platform. From raw HTTP Handlers and Modules in the core ASP.NET runtime, to high level platforms like ASP.NET MVC, Web Forms, ASP.NET AJAX and the WCF REST engine (which technically is not ASP.NET, but can integrate with it), you've always been able to handle just about any kind of HTTP request and response with ASP.NET. The beauty of the raw ASP.NET platform is that it provides you everything you need to build just about any type of HTTP application you can dream up from low level APIs/custom engines to high level HTML generation engine. ASP.NET as a core platform clearly has stood the test of time 10+ years later and all other frameworks like Web API are built on top of this ASP.NET core. However, although it's possible to create Web APIs / Services using any of the existing out of box .NET technologies, none of them have been a really nice fit for building arbitrary HTTP based APIs. 
Sure, you can use an HttpHandler to create just about anything, but you have to build a lot of plumbing to build something more complex like a comprehensive API that serves a variety of requests, handles multiple output formats and can easily pass data up to the server in a variety of ways. Likewise you can use ASP.NET MVC to handle routing and creating content in various formats fairly easily, but it doesn't provide a great way to automatically negotiate content types and serve various content formats directly (it's possible to do with some plumbing code of your own but not built in). Prior to Web API, Microsoft's main push for HTTP services has been WCF REST, which was always an awkward technology that had a severe personality conflict, not being clear on whether it wanted to be part of WCF or purely a separate technology. In the end it didn't do either WCF compatibility or WCF agnostic pure HTTP operation very well, which made for a very developer-unfriendly environment. Personally I didn't like any of the implementations at the time, so much so that I ended up building my own HTTP service engine (as part of the West Wind Web Toolkit), as have a few other third party tools that provided much better integration and ease of use. With the release of Web API for the first time I feel that I can finally use the tools in the box and not have to worry about creating and maintaining my own toolkit as Web API addresses just about all the features I implemented on my own and much more. ASP.NET Web API provides a better HTTP Experience ASP.NET Web API differentiates itself from the previous Microsoft in-box HTTP service solutions in that it was built from the ground up around the HTTP protocol and its messaging semantics. Unlike WCF REST or ASP.NET AJAX with ASMX, it’s a brand new platform rather than bolted on technology that is supposed to work in the context of an existing framework. The strength of the new ASP.NET Web API is that it combines the best features of the platforms that came before it, to provide a comprehensive and very usable HTTP platform. Because it's based on ASP.NET and borrows a lot of concepts from ASP.NET MVC, Web API should be immediately familiar and comfortable to most ASP.NET developers. Here are some of the features that Web API provides that I like: Strong Support for URL Routing to produce clean URLs using familiar MVC style routing semantics Content Negotiation based on Accept headers for request and response serialization Support for a host of supported output formats including JSON, XML, ATOM Strong default support for REST semantics but they are optional Easily extensible Formatter support to add new input/output types Deep support for more advanced HTTP features via HttpResponseMessage and HttpRequestMessage classes and strongly typed Enums to describe many HTTP operations Convention based design that drives you into doing the right thing for HTTP Services Very extensible, based on MVC like extensibility model of Formatters and Filters Self-hostable in non-Web applications  Testable using testing concepts similar to MVC Web API is meant to handle any kind of HTTP input and produce output and status codes using the full spectrum of HTTP functionality available in a straight forward and flexible manner. Looking at the list above you can see that a lot of functionality is very similar to ASP.NET MVC, so many ASP.NET developers should feel quite comfortable with the concepts of Web API. 
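To make that feature list a little more concrete, here is a minimal sketch of what a Web API controller can look like. This is illustrative only – the Album type, the controller name and the in-memory data are made up, and it assumes the default "api/{controller}/{id}" route that the project template registers:

using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Web.Http;

public class Album
{
    public int Id { get; set; }
    public string Title { get; set; }
}

public class AlbumsController : ApiController
{
    // Stand-in data store for the sketch only
    private static readonly List<Album> Albums = new List<Album>
    {
        new Album { Id = 1, Title = "Dirt" },
        new Album { Id = 2, Title = "Nevermind" }
    };

    // GET api/albums
    // The response format (JSON, XML, ...) is picked by content negotiation
    // based on the request's Accept header.
    public IEnumerable<Album> GetAlbums()
    {
        return Albums;
    }

    // GET api/albums/1
    public Album GetAlbum(int id)
    {
        var album = Albums.FirstOrDefault(a => a.Id == id);
        if (album == null)
            throw new HttpResponseException(HttpStatusCode.NotFound);
        return album;
    }

    // POST api/albums
    public HttpResponseMessage PostAlbum(Album album)
    {
        Albums.Add(album);
        return Request.CreateResponse(HttpStatusCode.Created, album);
    }
}

Note how the controller only deals with typed objects and HTTP status codes – routing, parameter binding and serialization are handled by the framework.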
The Routing and core infrastructure of Web API are very similar to how MVC works providing many of the benefits of MVC, but with focus on HTTP access and manipulation in Controller methods rather than HTML generation in MVC. There’s much improved support for content negotiation based on HTTP Accept headers with the framework capable of detecting automatically what content the client is sending and requesting and serving the appropriate data format in return. This seems like such a little and obvious thing, but it's really important. Today's service backends often are used by multiple clients/applications and being able to choose the right data format for what fits best for the client is very important. While previous solutions were able to accomplish this using a variety of mixed features of WCF and ASP.NET, Web API combines all this functionality into a single robust server side HTTP framework that intrinsically understands the HTTP semantics and subtly drives you in the right direction for most operations. And when you need to customize or do something that is not built in, there are lots of hooks and overrides for most behaviors, and even many low level hook points that allow you to plug in custom functionality with relatively little effort. No Brainers for Web API There are a few scenarios that are a slam dunk for Web API. If your primary focus of an application or even a part of an application is some sort of API then Web API makes great sense. HTTP ServicesIf you're building a comprehensive HTTP API that is to be consumed over the Web, Web API is a perfect fit. You can isolate the logic in Web API and build your application as a service breaking out the logic into controllers as needed. Because the primary interface is the service there's no confusion of what should go where (MVC or API). Perfect fit. Primary AJAX BackendsIf you're building rich client Web applications that are relying heavily on AJAX callbacks to serve its data, Web API is also a slam dunk. Again because much if not most of the business logic will probably end up in your Web API service logic, there's no confusion over where logic should go and there's no duplication. In Single Page Applications (SPA), typically there's very little HTML based logic served other than bringing up a shell UI and then filling the data from the server with AJAX which means the business logic required for data retrieval and data acceptance and validation too lives in the Web API. Perfect fit. Generic HTTP EndpointsAnother good fit are generic HTTP endpoints that to serve data or handle 'utility' type functionality in typical Web applications. If you need to implement an image server, or an upload handler in the past I'd implement that as an HTTP handler. With Web API you now have a well defined place where you can implement these types of generic 'services' in a location that can easily add endpoints (via Controller methods) or separated out as more full featured APIs. Granted this could be done with MVC as well, but Web API seems a clearer and more well defined place to store generic application services. This is one thing I used to do a lot of in my own libraries and Web API addresses this nicely. Great fit. Mixed HTML and AJAX Applications: Not a clear Choice  For all the commonality that Web API and MVC share they are fundamentally different platforms that are independent of each other. A lot of people have asked when does it make sense to use MVC vs. 
Web API when you're dealing with a typical Web application that creates HTML and also uses AJAX calls for rich client functionality. While it's easy to say that all 'service'/AJAX logic should go into a Web API and all HTML related generation into MVC, that can often result in a lot of code duplication. Also MVC supports JSON and XML result data fairly easily as well so there's some confusion over where that 'trigger point' is of when you should switch to Web API vs. just implementing functionality as part of MVC controllers. Ultimately there's a tradeoff between isolation of functionality and duplication. A good rule of thumb I think works is that if a large chunk of the application's functionality serves data, Web API is a good choice, but if you have a couple of small AJAX requests to serve data to a grid or autocomplete box it'd be overkill to separate out that logic into a separate Web API controller. Web API does add overhead to your application (it's yet another framework that sits on top of core ASP.NET) so it should be worth it. Keep in mind that MVC can generate HTML and JSON/XML and just about any other content easily and that functionality is not going away, so just because Web API is there it doesn't mean you have to use it. Web API is not a full replacement for MVC obviously either since there's not the same level of support to feed HTML from Web API controllers (although you can host a RazorEngine easily enough if you really want to go that route), so if your HTML is part of your API or application in general, MVC is still a better choice either alone or in combination with Web API. I suspect (and hope) that in the future Web API's functionality will merge even closer with MVC so that you might even be able to mix functionality of both into single Controllers so that you don't have to make any trade-offs, but at the moment that's not the case. Some Issues To think about Web API is similar to MVC but not the Same Although Web API looks a lot like MVC it's not the same and some common functionality of MVC behaves differently in Web API. For example, the way single POST variables are handled is different than MVC and doesn't lend itself particularly well to some AJAX scenarios with POST data. Code Duplication I already touched on this in the Mixed HTML and Web API section, but if you build an MVC application that also exposes a Web API it's quite likely that you end up duplicating a bunch of code and - potentially - infrastructure. You may have to create authentication logic both for an HTML application and for the Web API which might need something different altogether. More often than not though the same logic is used, and there's no easy way to share. If you implement an MVC ActionFilter and you want that same functionality in your Web API you'll end up creating the filter twice. AJAX Data or AJAX HTML On a recent post's comments, David made some really good points regarding the commonality of MVC and Web API and their place. One comment that caught my eye was a little more generic, regarding data services vs. HTML services. David says: I see a lot of merit in the combination of Knockout.js, client side templates and view models, calling Web API for a responsive UI, but sometimes late at night that still leaves me wondering why I would no longer be using some of the nice tooling and features that have evolved in MVC ;-) You know what - I can totally relate to that.
On the last Web based mobile app I worked on, we decided to serve HTML partials to the client via AJAX for many (but not all!) things, rather than sending down raw data to inject into the DOM on the client via templating or direct manipulation. While this approach definitely puts more bytes on the wire, the overhead ended up being fairly small if you keep the 'data' requests small and atomic. Performance was often made up by the lack of client side rendering of HTML. Server rendered HTML for AJAX templating gives so much better infrastructure support without having to screw around with 20 mismatched client libraries. Especially with MVC and partials it's pretty easy to break out your HTML logic into very small, atomic chunks, so it's actually easy to create small rendering islands that can be used via composition on the server, or via AJAX calls to small, tight partials that return HTML to the client. Although this is often frowned upon as too 'heavy', it worked really well in terms of developer effort as well as providing surprisingly good performance on devices. There's still plenty of jQuery and AJAX logic happening on the client but it's more manageable in small doses rather than trying to do the entire UI composition with JavaScript and/or 'not-quite-there-yet' template engines that are very difficult to debug. This is not an issue directly related to Web API of course, but something to think about especially for AJAX or SPA style applications. Summary Web API is a great new addition to the ASP.NET platform and it addresses a serious need for consolidation of a lot of half-baked HTTP service API technologies that came before it. Web API feels 'right', and hits the right combination of usability and flexibility at least for me and it's a good fit for true API scenarios. However, just because a new platform is available it doesn't mean that other tools or tech that came before it should be discarded or even upgraded to the new platform. There's nothing wrong with continuing to use MVC controller methods to handle API tasks if that's what your app is running now - there's very little to be gained by upgrading to Web API just because. But going forward Web API clearly is the way to go when building HTTP data interfaces and it's good to see that Microsoft got this one right - it was sorely needed! Resources ASP.NET Web API AspConf Ask the Experts Session (first 5 minutes) © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web Api

    Read the article

  • SQL SERVER – Securing TRUNCATE Permissions in SQL Server

    - by pinaldave
Download the Script of this article from here. On December 11, 2010, Vinod Kumar, a Databases & BI technology evangelist from Microsoft Corporation, graced Ahmedabad by spending some time with the Community during the Community Tech Days (CTD) event. As he was running through a few demos, Vinod asked the audience one of the most fundamental and common interview questions – “What is the difference between a DELETE and TRUNCATE?“ Ahmedabad SQL Server User Group Expert Nakul Vachhrajani has come up with an excellent solution to the same. I must congratulate Nakul for this excellent solution, and as an encouragement to User Group members, I am publishing the same article over here. Nakul Vachhrajani is a Software Specialist and systems development professional with Patni Computer Systems Limited. He has functional experience spanning legacy code deprecation, system design, documentation, development, implementation, testing, maintenance and support of complex systems, providing business intelligence solutions, database administration, performance tuning, optimization, product management, release engineering, process definition and implementation. He has a comprehensive grasp of Database Administration, Development and Implementation with MS SQL Server and C, C++, Visual C++/C#. He has about 6 years of total experience in information technology. Nakul is a member of the Ahmedabad and Gandhinagar SQL Server User Groups, and contributes to the community by actively participating in multiple forums and websites like SQLAuthority.com, BeyondRelational.com, SQLServerCentral.com and many others. Please note: The opinions expressed herein are Nakul's own personal opinions and do not represent his employer's view in any way. All data everywhere on Earth goes through a series of four distinct operations, identified by the words CREATE, READ, UPDATE and DELETE, or simply, CRUD. Putting it in Microsoft SQL Server terms, the process goes like this: INSERT, SELECT, UPDATE and DELETE/TRUNCATE. Quite a few interesting responses were received and evaluated live during the session. To summarize them, the most important similarity that came out was that both DELETE and TRUNCATE participate in transactions. The major differences (not all) that came out of the exercise were: DELETE: DELETE supports a WHERE clause; DELETE removes rows from a table, row-by-row; because DELETE works row-by-row, it acquires a row-level lock; depending upon the recovery model of the database, DELETE is a fully-logged operation; because DELETE works row-by-row, it can fire off triggers. TRUNCATE: TRUNCATE does not support a WHERE clause; TRUNCATE works by directly removing the individual data pages of a table; TRUNCATE directly acquires a table-level lock (because a lock is acquired, and because TRUNCATE can also participate in a transaction, it has to be a logged operation); TRUNCATE is, therefore, a minimally-logged operation; again, this depends upon the recovery model of the database; triggers are not fired when TRUNCATE is used (because individual row deletions are not logged). Finally, Vinod popped the big homework question that must be critically analyzed: “We know that we can restrict a DELETE operation to a particular user, but how can we restrict the TRUNCATE operation to a particular user?” After returning home and having a nice cup of coffee, I noticed that my gray cells immediately started to work. Below is the result of my research. As is always said, the devil is in the details.
Upon looking at the Permissions section for the TRUNCATE statement in Books On Line, the following jumps right out: “The minimum permission required is ALTER on table_name. TRUNCATE TABLE permissions default to the table owner, members of the sysadmin fixed server role, and the db_owner and db_ddladmin fixed database roles, and are not transferable. However, you can incorporate the TRUNCATE TABLE statement within a module, such as a stored procedure, and grant appropriate permissions to the module using the EXECUTE AS clause.“ Now, what does this mean? Unlike DELETE, one cannot directly assign permissions to a user/set of users allowing or revoking TRUNCATE rights. However, there is a way to circumvent this. It is important to recall that in Microsoft SQL Server, database engine security surrounds the concept of a “securable”, which is any object like a table, stored procedure, trigger, etc. Rights are assigned to a principal on a securable. Refer to the image below (taken from the SQL Server Books On Line). SETTING UP THE ENVIRONMENT – (01A_Truncate Table Permissions.sql) Script Provided at the end of the article. By the end of this demo, one will be able to do all the CRUD operations, except the TRUNCATE, and the other will only be able to execute the TRUNCATE. All you will need for this test is any edition of SQL Server 2008. (With minor changes, these scripts can be made to work with SQL 2005.) We begin by creating the following: 1.       A test database 2.        Two database roles: associated logins and users 3.       Switch over to the test database and create a test table. Then, add some data into it. I am using row constructors, which is new to SQL 2008. Creating the modules that will be used to enforce permissions 1.       We have already created one of the modules that we will be assigning permissions to. That module is the table: TruncatePermissionsTest 2.       We will now create two stored procedures; one is for the DELETE operation and the other for the TRUNCATE operation. Please note that for all practical purposes, the end result is the same – all data from the table TruncatePermissionsTest is removed Assigning the permissions Now comes the most important part of the demonstration – assigning permissions. A permissions matrix can be worked out as under: To apply the security rights, we use the GRANT and DENY clauses, as under: That’s it! We are now ready for our big test! THE TEST (01B_Truncate Table Test Queries.sql) Script Provided at the end of the article. I will now need two separate SSMS connections, one with the login AllowedTruncate and the other with the login RestrictedTruncate. Running the test is simple; all that’s required is to run through the script – 01B_Truncate Table Test Queries.sql. What I will demonstrate here via screen-shots is the behavior of SQL Server when logged in as the AllowedTruncate user. There are a few other combinations than what are highlighted here. I will leave the reader the right to explore the behavior of the RestrictedTruncate user and these additional scenarios, as a form of self-study. 1.       Testing SELECT permissions 2.       Testing TRUNCATE permissions (Remember, “deny by default”?) 3.       
Trying to circumvent security by trying to TRUNCATE the table using the stored procedure Hence, we have now proved that a user can indeed be assigned permissions to specifically assign TRUNCATE permissions. I also hope that the above has sparked curiosity towards putting some security around the probably “destructive” operations of DELETE and TRUNCATE. I would like to wish each and every one of the readers a very happy and secure time with Microsoft SQL Server. (Please find the scripts – 01A_Truncate Table Permissions.sql and 01B_Truncate Table Test Queries.sql that have been used in this demonstration. Please note that these scripts contain purely test-level code only. These scripts must not, at any cost, be used in the reader’s production environments). 01A_Truncate Table Permissions.sql /* ***************************************************************************************************************** Developed By          : Nakul Vachhrajani Functionality         : This demo is focused on how to allow only TRUNCATE permissions to a particular user How to Use            : 1. Run through, step-by-step through the sequence till Step 08 to create a test database 2. Switch over to the "Truncate Table Test Queries.sql" and execute it step-by-step in two different SSMS windows, one where you have logged in as 'RestrictedTruncate', and the other as 'AllowedTruncate' 3. Come back to "Truncate Table Permissions.sql" 4. Execute Step 10 to cleanup! Modifications         : December 13, 2010 - NAV - Updated to add a security matrix and improve code readability when applying security December 12, 2010 - NAV - Created ***************************************************************************************************************** */ -- Step 01: Create a new test database CREATE DATABASE TruncateTestDB GO USE TruncateTestDB GO -- Step 02: Add roles and users to demonstrate the security of the Truncate operation -- 2a. Create the new roles CREATE ROLE AllowedTruncateRole; GO CREATE ROLE RestrictedTruncateRole; GO -- 2b. Create new logins CREATE LOGIN AllowedTruncate WITH PASSWORD = 'truncate@2010', CHECK_POLICY = ON GO CREATE LOGIN RestrictedTruncate WITH PASSWORD = 'truncate@2010', CHECK_POLICY = ON GO -- 2c. Create new Users using the roles and logins created aboave CREATE USER TruncateUser FOR LOGIN AllowedTruncate WITH DEFAULT_SCHEMA = dbo GO CREATE USER NoTruncateUser FOR LOGIN RestrictedTruncate WITH DEFAULT_SCHEMA = dbo GO -- 2d. 
Add the newly created login to the newly created role sp_addrolemember 'AllowedTruncateRole','TruncateUser' GO sp_addrolemember 'RestrictedTruncateRole','NoTruncateUser' GO -- Step 03: Change over to the test database USE TruncateTestDB GO -- Step 04: Create a test table within the test databse CREATE TABLE TruncatePermissionsTest (Id INT IDENTITY(1,1), Name NVARCHAR(50)) GO -- Step 05: Populate the required data INSERT INTO TruncatePermissionsTest VALUES (N'Delhi'), (N'Mumbai'), (N'Ahmedabad') GO -- Step 06: Encapsulate the DELETE within another module CREATE PROCEDURE proc_DeleteMyTable WITH EXECUTE AS SELF AS DELETE FROM TruncateTestDB..TruncatePermissionsTest GO -- Step 07: Encapsulate the TRUNCATE within another module CREATE PROCEDURE proc_TruncateMyTable WITH EXECUTE AS SELF AS TRUNCATE TABLE TruncateTestDB..TruncatePermissionsTest GO -- Step 08: Apply Security /* *****************************SECURITY MATRIX*************************************** =================================================================================== Object                   | Permissions |                 Login |             | AllowedTruncate   |   RestrictedTruncate |             |User:NoTruncateUser|   User:TruncateUser =================================================================================== TruncatePermissionsTest  | SELECT,     |      GRANT        |      (Default) | INSERT,     |                   | | UPDATE,     |                   | | DELETE      |                   | -------------------------+-------------+-------------------+----------------------- TruncatePermissionsTest  | ALTER       |      DENY         |      (Default) -------------------------+-------------+----*/----------------+----------------------- proc_DeleteMyTable | EXECUTE | GRANT | DENY -------------------------+-------------+-------------------+----------------------- proc_TruncateMyTable | EXECUTE | DENY | GRANT -------------------------+-------------+-------------------+----------------------- *****************************SECURITY MATRIX*************************************** */ /* Table: TruncatePermissionsTest*/ GRANT SELECT, INSERT, UPDATE, DELETE ON TruncateTestDB..TruncatePermissionsTest TO NoTruncateUser GO DENY ALTER ON TruncateTestDB..TruncatePermissionsTest TO NoTruncateUser GO /* Procedure: proc_DeleteMyTable*/ GRANT EXECUTE ON TruncateTestDB..proc_DeleteMyTable TO NoTruncateUser GO DENY EXECUTE ON TruncateTestDB..proc_DeleteMyTable TO TruncateUser GO /* Procedure: proc_TruncateMyTable*/ DENY EXECUTE ON TruncateTestDB..proc_TruncateMyTable TO NoTruncateUser GO GRANT EXECUTE ON TruncateTestDB..proc_TruncateMyTable TO TruncateUser GO -- Step 09: Test --Switch over to the "Truncate Table Test Queries.sql" and execute it step-by-step in two different SSMS windows: --    1. one where you have logged in as 'RestrictedTruncate', and --    2. 
the other as 'AllowedTruncate' -- Step 10: Cleanup sp_droprolemember 'AllowedTruncateRole','TruncateUser' GO sp_droprolemember 'RestrictedTruncateRole','NoTruncateUser' GO DROP USER TruncateUser GO DROP USER NoTruncateUser GO DROP LOGIN AllowedTruncate GO DROP LOGIN RestrictedTruncate GO DROP ROLE AllowedTruncateRole GO DROP ROLE RestrictedTruncateRole GO USE MASTER GO DROP DATABASE TruncateTestDB GO 01B_Truncate Table Test Queries.sql /* ***************************************************************************************************************** Developed By          : Nakul Vachhrajani Functionality         : This demo is focused on how to allow only TRUNCATE permissions to a particular user How to Use            : 1. Switch over to this from "Truncate Table Permissions.sql", Step #09 2. Execute this step-by-step in two different SSMS windows a. One where you have logged in as 'RestrictedTruncate', and b. The other as 'AllowedTruncate' 3. Return back to "Truncate Table Permissions.sql" 4. Execute Step 10 to cleanup! Modifications         : December 12, 2010 - NAV - Created ***************************************************************************************************************** */ -- Step 09A: Switch to the test database USE TruncateTestDB GO -- Step 09B: Ensure that we have valid data SELECT * FROM TruncatePermissionsTest GO -- (Expected: Following error will occur if logged in as "AllowedTruncate") -- Msg 229, Level 14, State 5, Line 1 -- The SELECT permission was denied on the object 'TruncatePermissionsTest', database 'TruncateTestDB', schema 'dbo'. --Step 09C: Attempt to Truncate Data from the table without using the stored procedure TRUNCATE TABLE TruncatePermissionsTest GO -- (Expected: Following error will occur) --  Msg 1088, Level 16, State 7, Line 2 --  Cannot find the object "TruncatePermissionsTest" because it does not exist or you do not have permissions. -- Step 09D:Regenerate Test Data INSERT INTO TruncatePermissionsTest VALUES (N'London'), (N'Paris'), (N'Berlin') GO -- (Expected: Following error will occur if logged in as "AllowedTruncate") -- Msg 229, Level 14, State 5, Line 1 -- The INSERT permission was denied on the object 'TruncatePermissionsTest', database 'TruncateTestDB', schema 'dbo'. --Step 09E: Attempt to Truncate Data from the table using the stored procedure EXEC proc_TruncateMyTable GO -- (Expected: Will execute successfully with 'AllowedTruncate' user, will error out as under with 'RestrictedTruncate') -- Msg 229, Level 14, State 5, Procedure proc_TruncateMyTable, Line 1 -- The EXECUTE permission was denied on the object 'proc_TruncateMyTable', database 'TruncateTestDB', schema 'dbo'. -- Step 09F:Regenerate Test Data INSERT INTO TruncatePermissionsTest VALUES (N'Madrid'), (N'Rome'), (N'Athens') GO --Step 09G: Attempt to Delete Data from the table without using the stored procedure DELETE FROM TruncatePermissionsTest GO -- (Expected: Following error will occur if logged in as "AllowedTruncate") -- Msg 229, Level 14, State 5, Line 2 -- The DELETE permission was denied on the object 'TruncatePermissionsTest', database 'TruncateTestDB', schema 'dbo'. 
-- Step 09H:Regenerate Test Data INSERT INTO TruncatePermissionsTest VALUES (N'Spain'), (N'Italy'), (N'Greece') GO --Step 09I: Attempt to Delete Data from the table using the stored procedure EXEC proc_DeleteMyTable GO -- (Expected: Following error will occur if logged in as "AllowedTruncate") -- Msg 229, Level 14, State 5, Procedure proc_DeleteMyTable, Line 1 -- The EXECUTE permission was denied on the object 'proc_DeleteMyTable', database 'TruncateTestDB', schema 'dbo'. --Step 09J: Close this SSMS window and return back to "Truncate Table Permissions.sql" Thank you Nakul to take up the challenge and prove that Ahmedabad and Gandhinagar SQL Server User Group has talent to solve difficult problems. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, Pinal Dave, Readers Contribution, Readers Question, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Security, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • What’s new in ASP.NET 4.0: Core Features

    - by Rick Strahl
    Microsoft released the .NET Runtime 4.0 and with it comes a brand spanking new version of ASP.NET – version 4.0 – which provides an incremental set of improvements to an already powerful platform. .NET 4.0 is a full release of the .NET Framework, unlike version 3.5, which was merely a set of library updates on top of the .NET Framework version 2.0. Because of this full framework revision, there has been a welcome bit of consolidation of assemblies and configuration settings. The full runtime version change to 4.0 also means that you have to explicitly pick version 4.0 of the runtime when you create a new Application Pool in IIS, unlike .NET 3.5, which actually requires version 2.0 of the runtime. In this first of two parts I'll take a look at some of the changes in the core ASP.NET runtime. In the next edition I'll go over improvements in Web Forms and Visual Studio. Core Engine Features Most of the high profile improvements in ASP.NET have to do with Web Forms, but there are a few gems in the core runtime that should make life easier for ASP.NET developers. The following list describes some of the things I've found useful among the new features. Clean web.config Files Are Back! If you've been using ASP.NET 3.5, you probably have noticed that the web.config file has turned into quite a mess of configuration settings between all the custom handler and module mappings for the various web server versions. Part of the reason for this mess is that .NET 3.5 is a collection of add-on components running on top of the .NET Runtime 2.0 and so almost all of the new features of .NET 3.5 where essentially introduced as custom modules and handlers that had to be explicitly configured in the config file. Because the core runtime didn't rev with 3.5, all those configuration options couldn't be moved up to other configuration files in the system chain. With version 4.0 a consolidation was possible, and the result is a much simpler web.config file by default. A default empty ASP.NET 4.0 Web Forms project looks like this: <?xml version="1.0"?> <configuration> <system.web> <compilation debug="true" targetFramework="4.0" /> </system.web> </configuration> Need I say more? Configuration Transformation Files to Manage Configurations and Application Packaging ASP.NET 4.0 introduces the ability to create multi-target configuration files. This means it's possible to create a single configuration file that can be transformed based on relatively simple replacement rules using a Visual Studio and WebDeploy provided XSLT syntax. The idea is that you can create a 'master' configuration file and then create customized versions of this master configuration file by applying some relatively simplistic search and replace, add or remove logic to specific elements and attributes in the original file. 
To give you an idea, here's the example code that Visual Studio creates for a default web.Release.config file, which replaces a connection string, removes the debug attribute and replaces the CustomErrors section: <?xml version="1.0"?> <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform"> <connectionStrings> <add name="MyDB" connectionString="Data Source=ReleaseSQLServer;Initial Catalog=MyReleaseDB;Integrated Security=True" xdt:Transform="SetAttributes" xdt:Locator="Match(name)"/> </connectionStrings> <system.web> <compilation xdt:Transform="RemoveAttributes(debug)" /> <customErrors defaultRedirect="GenericError.htm" mode="RemoteOnly" xdt:Transform="Replace"> <error statusCode="500" redirect="InternalError.htm"/> </customErrors> </system.web> </configuration> You can see the XSL transform syntax that drives this functionality. Basically, only the elements listed in the override file are matched and updated – all the rest of the original web.config file stays intact. Visual Studio 2010 supports this functionality directly in the project system so it's easy to create and maintain these customized configurations in the project tree. Once you're ready to publish your application, you can then use the Publish <yourWebApplication> option on the Build menu which allows publishing to disk, via FTP or to a Web Server using Web Deploy. You can also create a deployment package as a .zip file which can be used by the WebDeploy tool to configure and install the application. You can manually run the Web Deploy tool or use the IIS Manager to install the package on the server or other machine. You can find out more about WebDeploy and Packaging here: http://tinyurl.com/2anxcje. Improved Routing Routing provides a relatively simple way to create clean URLs with ASP.NET by associating a template URL path and routing it to a specific ASP.NET HttpHandler. Microsoft first introduced routing with ASP.NET MVC and then they integrated routing with a basic implementation in the core ASP.NET engine via a separate ASP.NET routing assembly. In ASP.NET 4.0, the process of using routing functionality gets a bit easier. First, routing is now rolled directly into System.Web, so no extra assembly reference is required in your projects to use routing. The RouteCollection class now includes a MapPageRoute() method that makes it easy to route to any ASP.NET Page requests without first having to implement an IRouteHandler implementation. It would have been nice if this could have been extended to serve *any* handler implementation, but unfortunately for anything but a Page derived handlers you still will have to implement a custom IRouteHandler implementation. ASP.NET Pages now include a RouteData collection that will contain route information. Retrieving route data is now a lot easier by simply using this.RouteData.Values["routeKey"] where the routeKey is the value specified in the route template (i.e., "users/{userId}" would use Values["userId"]). 
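To illustrate, here is roughly what the new page routing support looks like in code. This is only a sketch – the route name, the "users/{userId}" template and the UserDetail.aspx page are made-up examples:

// Global.asax.cs – register the route at application startup
using System;
using System.Web;
using System.Web.Routing;

public class Global : HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        // MapPageRoute() sends the URL template straight to a Web Forms page,
        // no custom IRouteHandler implementation needed.
        RouteTable.Routes.MapPageRoute("users", "users/{userId}", "~/UserDetail.aspx");
    }
}

// UserDetail.aspx.cs – read the captured segment from RouteData
using System;
using System.Web.UI;

public partial class UserDetail : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // A request for "users/ricks" makes RouteData.Values["userId"] == "ricks"
        string userId = (string)RouteData.Values["userId"];
    }
}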
The Page class also has a GetRouteUrl() method that you can use to create URLs with route data values rather than hardcoding the URL: <%= this.GetRouteUrl("users",new { userId="ricks" }) %> You can also use the new Expression syntax using <%$RouteUrl %> to accomplish something similar, which can be easier to embed into Page or MVC View code: <a runat="server" href='<%$RouteUrl:RouteName=user, id=ricks %>'>Visit User</a> Finally, the Response object also includes a new RedirectToRoute() method to build a route url for redirection without hardcoding the URL. Response.RedirectToRoute("users", new { userId = "ricks" }); All of these routines are helpers that have been integrated into the core ASP.NET engine to make it easier to create routes and retrieve route data, which hopefully will result in more people taking advantage of routing in ASP.NET. To find out more about the routing improvements you can check out Dan Maharry's blog which has a couple of nice blog entries on this subject: http://tinyurl.com/37trutj and http://tinyurl.com/39tt5w5. Session State Improvements Session state is an often used and abused feature in ASP.NET and version 4.0 introduces a few enhancements geared towards making session state more efficient and to minimize at least some of the ill effects of overuse. The first improvement affects out of process session state, which is typically used in web farm environments or for sites that store application sensitive data that must survive AppDomain restarts (which in my opinion is just about any application). When using OutOfProc session state, ASP.NET serializes all the data in the session statebag into a blob that gets carried over the network and stored either in the State server or SQL Server via the Session provider. Version 4.0 provides some improvement in this serialization of the session data by offering an enableCompression option on the web.Config <Session> section, which forces the serialized session state to be compressed. Depending on the type of data that is being serialized, this compression can reduce the size of the data travelling over the wire by as much as a third. It works best on string data, but can also reduce the size of binary data. In addition, ASP.NET 4.0 now offers a way to programmatically turn session state on or off as part of the request processing queue. In prior versions, the only way to specify whether session state is available is by implementing a marker interface on the HTTP handler implementation. In ASP.NET 4.0, you can now turn session state on and off programmatically via HttpContext.Current.SetSessionStateBehavior() as part of the ASP.NET module pipeline processing as long as it occurs before the AquireRequestState pipeline event. Output Cache Provider Output caching in ASP.NET has been a very useful but potentially memory intensive feature. The default OutputCache mechanism works through in-memory storage that persists generated output based on various lifetime related parameters. While this works well enough for many intended scenarios, it also can quickly cause runaway memory consumption as the cache fills up and serves many variations of pages on your site. ASP.NET 4.0 introduces a provider model for the OutputCache module so it becomes possible to plug-in custom storage strategies for cached pages. 
One of the goals also appears to be to consolidate some of the different cache storage mechanisms used in .NET in general to a generic Windows AppFabric framework in the future, so various different mechanisms like OutputCache, the non-Page specific ASP.NET cache and possibly even session state eventually can use the same caching engine for storage of persisted data in both in-memory and out-of-process scenarios. For developers, the OutputCache provider feature means that you can now extend caching on your own by implementing a custom Cache provider based on the System.Web.Caching.OutputCacheProvider class. You can find more info on creating an Output Cache provider in Gunnar Peipman's blog at: http://tinyurl.com/2vt6g7l, and a minimal provider skeleton is sketched at the end of this section. Response.RedirectPermanent ASP.NET 4.0 includes features to issue a permanent redirect that issues as an HTTP 301 Moved Permanently response rather than the standard 302 Redirect response. In pre-4.0 versions you had to manually create your permanent redirect by setting the Status and StatusCode properties – Response.RedirectPermanent() makes this operation more obvious and discoverable. There's also a Response.RedirectToRoutePermanent() which provides permanent redirection of route Urls. Preloading of Applications ASP.NET 4.0 provides a new feature to preload ASP.NET applications on startup, which is meant to provide a more consistent startup experience. If your application has a lengthy startup cycle it can appear very slow to serve data to clients while the application is warming up and loading initial resources. So rather than serve these startup requests slowly in ASP.NET 4.0, you can force the application to initialize itself first before even accepting requests for processing. This feature works only on IIS 7.5 (Windows 7 and Windows Server 2008 R2) and works in combination with IIS. You can set up a worker process in IIS 7.5 to always be running, which starts the Application Pool worker process immediately. ASP.NET 4.0 then allows you to specify site-specific settings by setting the serviceAutoStartEnabled attribute on a particular site along with an optional serviceAutoStartProvider class that can be used to receive "startup events" when the application starts up. This event in turn can be used to configure the application and optionally pre-load cache data and other information required by the app on startup. The configuration settings need to be made in applicationhost.config: <sites> <site name="WebApplication2" id="1"> <application path="/" serviceAutoStartEnabled="true" serviceAutoStartProvider="PreWarmup" /> </site> </sites> <serviceAutoStartProviders> <add name="PreWarmup" type="PreWarmupProvider,MyAssembly" /> </serviceAutoStartProviders> Hooking up a warm up provider is optional so you can omit the provider definition and reference. If you do define it, here's what it looks like: public class PreWarmupProvider : System.Web.Hosting.IProcessHostPreloadClient { public void Preload(string[] parameters) { // initialization for app } } This code fires and while it's running, ASP.NET/IIS will hold requests from hitting the pipeline. So until this code completes the application will not start taking requests. The idea is that you can perform any pre-loading of resources and cache values so that the first request will be ready to perform at optimal performance level without lag.
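Coming back to the Output Cache provider point from earlier in this section, the extensibility surface is small: you derive from System.Web.Caching.OutputCacheProvider and implement four members. The class below is only a toy sketch with a made-up name – it keeps entries in a static in-memory dictionary, which is exactly what a real provider would not do (the point of the feature is to move storage somewhere smarter), but it shows the shape:

using System;
using System.Collections.Concurrent;
using System.Web.Caching;

public class DictionaryOutputCacheProvider : OutputCacheProvider
{
    // Toy storage only – a real provider would write to disk, a database
    // or a distributed cache instead of process memory.
    private static readonly ConcurrentDictionary<string, Tuple<object, DateTime>> Entries =
        new ConcurrentDictionary<string, Tuple<object, DateTime>>();

    public override object Get(string key)
    {
        Tuple<object, DateTime> entry;
        if (!Entries.TryGetValue(key, out entry))
            return null;

        if (entry.Item2 < DateTime.UtcNow)
        {
            // Entry has expired – drop it and report a cache miss
            Entries.TryRemove(key, out entry);
            return null;
        }
        return entry.Item1;
    }

    public override object Add(string key, object entry, DateTime utcExpiry)
    {
        // Per the provider contract, Add returns the existing entry
        // if one is already cached under this key.
        var existing = Get(key);
        if (existing != null)
            return existing;

        Set(key, entry, utcExpiry);
        return entry;
    }

    public override void Set(string key, object entry, DateTime utcExpiry)
    {
        Entries[key] = Tuple.Create(entry, utcExpiry);
    }

    public override void Remove(string key)
    {
        Tuple<object, DateTime> removed;
        Entries.TryRemove(key, out removed);
    }
}

A provider like this is then registered in web.config under the system.web/caching/outputCache section (defaultProvider plus an add entry in providers), after which the OutputCache module routes its storage calls through your class.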
Runtime Performance Improvements According to Microsoft, there have also been a number of invisible performance improvements in the internals of the ASP.NET runtime that should make ASP.NET 4.0 applications run more efficiently and use less resources. These features come without any change requirements in applications and are virtually transparent, except that you get the benefits by updating to ASP.NET 4.0. Summary The core feature set changes are minimal which continues a tradition of small incremental changes to the ASP.NET runtime. ASP.NET has been proven as a solid platform and I'm actually rather happy to see that most of the effort in this release went into stability, performance and usability improvements rather than a massive amount of new features. The new functionality added in 4.0 is minimal but very useful. A lot of people are still running pure .NET 2.0 applications these days and have stayed off of .NET 3.5 for some time now. I think that version 4.0 with its full .NET runtime rev and assembly and configuration consolidation will make an attractive platform for developers to update to. If you're a Web Forms developer in particular, ASP.NET 4.0 includes a host of new features in the Web Forms engine that are significant enough to warrant a quick move to .NET 4.0. I'll cover those changes in my next column. Until then, I suggest you give ASP.NET 4.0 a spin and see for yourself how the new features can help you out. © Rick Strahl, West Wind Technologies, 2005-2010Posted in ASP.NET  

    Read the article

  • Customize the SimpleMembership in ASP.NET MVC 4.0

    - by thangchung
As we know, .NET 4.5 has arrived, and it comes along with a lot of interesting new features. Visual Studio 2012 was also introduced a few days ago, and performance when loading the code editor is very good at the moment (it opens almost immediately after clicking on the solution). I have explored some of the cool features over these days: Json.NET integrated in ASP.NET MVC 4.0, improvements on asynchronous actions, the new lightweight theme in Visual Studio, much better support for mobile development, improvements on authentication… Reviewing them, I found that in this version of .NET Microsoft not only developed new features suggested by the community but also focused on improving the performance of existing features and components. Besides that, they also open sourced more projects, like Entity Framework, Reactive Extensions and the ASP.NET Web Stack… At the moment, I feel Microsoft wants to open source more and more of their projects. Today, I am going to dive deep into the new SimpleMembership model. It is really good because in this security model Microsoft actually focuses on development needs. As we know, in the past they introduced providers for coding security such as MembershipProvider and RoleProvider… Everyone who has ever used them knows that they were actually hard to use, and not easy to maintain or unit test. Why? Because every time you inherit from one, you need to override all the methods inside it. Some people try to abstract this by introducing more virtual methods and implementing basic behavior, so that in the subclass you only need to override the methods needed for your business. But to me, that is only a workaround. The ASP.NET and Web Matrix teams knew about this, so they built the new features based on existing components in the .NET framework. And one of the components that comes to us is SimpleMembership and SimpleRole. They implemented the Façade pattern on top of those and called it WebSecurity. In the web application, we can call WebSecurity anywhere we want and it makes the call into the wrapped providers. I read a lot about this on blogs, on technical news sites and on MSDN as well. Matthew Osborn had an excellent article about it at his blog. Jon Galloway had an article like this at here. He analyzed why the old membership provider did not fit well with ASP.NET MVC and how to get over it. Those were very helpful to me. They showed me how SimpleMembership works and how to use it in a new ASP.NET MVC web application. But one thing those didn't tell me was how to use it with an existing security model (meaning we already have Users and Roles in a legacy system and need to integrate them with this new model), and that's the reason I am introducing it today. I spent a couple of hours to see what's inside this and tried to make one example to clarify my concern. Luckily, I got it working well. The first thing we need to do is create a new ASP.NET MVC application in Visual Studio 2012. We need to choose the Internet template for this web application. ASP.NET MVC actually creates all the needed components for basic membership and basic roles. A cool feature is that DotNetOpenAuth comes along with it, which means we can log in using Facebook, Twitter or Windows Live if we want. But it's only wired up for LocalDb, so we need to change it to fit an existing database model on SQL Server.
The next step we have to make SimpleMembership can understand which database we use and show it which column need to point to for the ID and UserName. I really like this feature because SimpleMembership on need to know about the ID and UserName, and they don’t care about rest of it. I assume that we have an existing database model like So we will point it in code like The codes for it, we put on InitializeSimpleMembershipAttribute like [AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = false, Inherited = true)]     public sealed class InitializeSimpleMembershipAttribute : ActionFilterAttribute     {         private static SimpleMembershipInitializer _initializer;         private static object _initializerLock = new object();         private static bool _isInitialized;         public override void OnActionExecuting(ActionExecutingContext filterContext)         {             // Ensure ASP.NET Simple Membership is initialized only once per app start             LazyInitializer.EnsureInitialized(ref _initializer, ref _isInitialized, ref _initializerLock);         }         private class SimpleMembershipInitializer         {             public SimpleMembershipInitializer()             {                 try                 {                     WebSecurity.InitializeDatabaseConnection("DefaultDb", "User", "Id", "UserName", autoCreateTables: true);                 }                 catch (Exception ex)                 {                     throw new InvalidOperationException("The ASP.NET Simple Membership database could not be initialized. For more information, please see http://go.microsoft.com/fwlink/?LinkId=256588", ex);                 }             }         }     }And decorating it in the AccountController as below [Authorize]     [InitializeSimpleMembership]     public class AccountController : ControllerIn this case, assuming that we need to override the ValidateUser to point this to existing User database table, and validate it. 
We have to add one more class, a custom membership provider:

public class CustomAdminMembershipProvider : SimpleMembershipProvider
{
    // TODO: replace this string-formatted SQL with a parameterized query
    private const string SELECT_ALL_USER_SCRIPT = "select * from [dbo].[User] where UserName = '{0}'";

    private readonly IEncrypting _encryptor;
    private readonly SimpleSecurityContext _simpleSecurityContext;

    public CustomAdminMembershipProvider(SimpleSecurityContext simpleSecurityContext)
        : this(new Encryptor(), new SimpleSecurityContext("DefaultDb"))
    {
    }

    public CustomAdminMembershipProvider(IEncrypting encryptor, SimpleSecurityContext simpleSecurityContext)
    {
        _encryptor = encryptor;
        _simpleSecurityContext = simpleSecurityContext;
    }

    public override bool ValidateUser(string username, string password)
    {
        if (string.IsNullOrEmpty(username))
        {
            throw new ArgumentException("Argument cannot be null or empty", "username");
        }
        if (string.IsNullOrEmpty(password))
        {
            throw new ArgumentException("Argument cannot be null or empty", "password");
        }

        // Hash the supplied password and compare it with the stored hash
        var hash = _encryptor.Encode(password);

        using (_simpleSecurityContext)
        {
            var users = _simpleSecurityContext.Users.SqlQuery(
                string.Format(SELECT_ALL_USER_SCRIPT, username));

            if (users == null || !users.Any())
            {
                return false;
            }

            return users.FirstOrDefault().Password == hash;
        }
    }
}

SimpleSecurityContext looks like this:

public class SimpleSecurityContext : DbContext
{
    public DbSet<User> Users { get; set; }

    public SimpleSecurityContext(string connStringName)
        : base(connStringName)
    {
        this.Configuration.LazyLoadingEnabled = true;
        this.Configuration.ProxyCreationEnabled = false;
    }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        base.OnModelCreating(modelBuilder);

        modelBuilder.Configurations.Add(new UserMapping());
    }
}

And the mapping for User is as below:

public class UserMapping : EntityMappingBase<User>
{
    public UserMapping()
    {
        this.Property(x => x.UserName);
        this.Property(x => x.DisplayName);
        this.Property(x => x.Password);
        this.Property(x => x.Email);
        this.ToTable("User");
    }
}

One important thing: you need to modify web.config to point to our customized SimpleMembership provider:

<membership defaultProvider="AdminMemberProvider" userIsOnlineTimeWindow="15">
  <providers>
    <clear/>
    <add name="AdminMemberProvider" type="CIK.News.Web.Infras.Security.CustomAdminMembershipProvider, CIK.News.Web.Infras" />
  </providers>
</membership>
<roleManager enabled="false">
  <providers>
    <clear />
    <add name="AdminRoleProvider" type="CIK.News.Web.Infras.Security.AdminRoleProvider, CIK.News.Web.Infras" />
  </providers>
</roleManager>

The good thing here is that we do not need to modify the code in AccountController; we only need to change SimpleMembership and SimpleRole (if needed). Now build the solution and run it.
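This is also why the AccountController can stay untouched: the Login action generated by the Internet template already goes through WebSecurity, which ends up at whichever provider is registered in web.config. Roughly, and reproduced here from memory of the template rather than from the original project, it looks like this:

[HttpPost]
[AllowAnonymous]
[ValidateAntiForgeryToken]
public ActionResult Login(LoginModel model, string returnUrl)
{
    // WebSecurity.Login delegates to the configured membership provider,
    // so CustomAdminMembershipProvider.ValidateUser is what actually runs here
    if (ModelState.IsValid && WebSecurity.Login(model.UserName, model.Password, persistCookie: model.RememberMe))
    {
        // RedirectToLocal is a private helper generated by the template
        return RedirectToLocal(returnUrl);
    }

    ModelState.AddModelError("", "The user name or password provided is incorrect.");
    return View(model);
}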
We should see the login screen. If I click the Twitter button at the bottom of the page, we are transferred to the Twitter authentication page; after waiting a moment, it transfers us back to the admin screen. You can find all of the source code in my MSDN code sample. I will be really happy if you leave comments below; they will help me improve my code in the future. Thanks for reading.

