Search Results

Search found 13537 results on 542 pages for 'installation failure'.


  • Postfix not sending/allowing receiving of messages after server (hardware) changed

    - by 537mfb
    We had an old notebook running Ubuntu 12.04 as a web/FTP/mail server. It worked, but since the notebook was old and unreliable, a desktop was bought to replace it before it stopped working altogether. Due to issues with the new desktop's video card we couldn't use Ubuntu 12.04, so we installed Ubuntu 13.10 and went about configuring it. Since we removed the notebook from the network, we kept the same computer name and local IP address to make things as close to the old server as possible, configuration-wise. However, something has gone wrong: Postfix throws error 451 4.3.0 lookup failure on every attempt to send a mail, and no email can be received either. Our main.cf is a copy of the one we were using (and which worked) on the old server (note that we use EHCP):

        # See /usr/share/postfix/main.cf.dist for a commented, more complete version
        # Debian specific: Specifying a file name will cause the first
        # line of that file to be used as the name. The Debian default
        # is /etc/mailname.
        #myorigin = /etc/mailname
        smtpd_banner = $myhostname ESMTP $mail_name powered by Easy Hosting Control Panel (ehcp) on Ubuntu, www.ehcp.net
        biff = no
        # appending .domain is the MUA's job.
        append_dot_mydomain = no
        # Uncomment the next line to generate "delayed mail" warnings
        #delay_warning_time = 4h
        readme_directory = no
        myhostname = m21-traducoes.com.pt
        relayhost =
        mydestination = localhost, 89.152.248.139
        mynetworks = 127.0.0.0/8, 192.168.0.0/16, 172.16.0.0/16, 10.0.0.0/8, 89.152.248.0/24
        virtual_alias_domains =
        virtual_alias_maps = proxy:mysql:/etc/postfix/mysql-virtual_forwardings.cf, proxy:mysql:/etc/postfix/mysql-virtual_email2email.cf
        transport_maps = proxy:mysql:/etc/postfix/mysql-virtual_transports.cf
        virtual_mailbox_domains = proxy:mysql:/etc/postfix/mysql-virtual_domains.cf
        virtual_mailbox_maps = proxy:mysql:/etc/postfix/mysql-virtual_mailboxes.cf
        virtual_mailbox_base = /home/vmail
        virtual_uid_maps = static:5000
        virtual_gid_maps = static:5000
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_security_options = noanonymous
        broken_sasl_auth_clients = yes
        smtpd_recipient_restrictions = permit_mynetworks,permit_sasl_authenticated,check_client_access hash:/var/lib/pop-before-smtp/hosts,reject_unauth_destination
        smtp_use_tls = yes
        smtpd_use_tls = yes
        smtpd_tls_auth_only = no
        smtpd_tls_CAfile = /etc/postfix/cacert.pem
        smtpd_tls_cert_file = /etc/postfix/smtpd.cert
        smtpd_tls_key_file = /etc/postfix/smtpd.key
        smtpd_tls_loglevel = 1
        smtpd_tls_received_header = yes
        smtpd_tls_session_cache_timeout = 3600s
        tls_random_source = dev:/dev/urandom
        virtual_create_maildirsize = yes
        virtual_mailbox_extended = yes
        virtual_mailbox_limit_maps = proxy:mysql:/etc/postfix/mysql-virtual_mailbox_limit_maps.cf
        virtual_mailbox_limit_override = yes
        virtual_maildir_limit_message = "The user you are trying to reach is over quota."
        virtual_overquota_bounce = yes
        debug_peer_list =
        sender_canonical_maps =
        debug_peer_level = 1
        proxy_read_maps = $local_recipient_maps $mydestination $virtual_alias_maps $virtual_alias_domains $virtual_mailbox_maps $virtual_mailbox_domains $relay_recipient_maps $canonical_maps $sender_canonical_maps $recipient_canonical_maps $relocated_maps $mynetworks $virtual_mailbox_limit_maps $transport_maps
        alias_maps = hash:/etc/aliases
        smtpd_relay_restrictions = permit_mynetworks, permit_sasl_authenticated,check_client_access hash:/var/lib/pop-before-smtp/hosts,reject_unauth_destination
        smtpd_destination_concurrency_limit = 2
        smtpd_destination_rate_delay = 1s
        smtpd_extra_recipient_limit = 10
        disable_vrfy_command = yes
        smtpd_delay_reject = yes
        smtpd_helo_required = yes
        smtpd_error_sleep_time = 1s
        smtpd_soft_error_limit = 10
        smtpd_hard_error_limit = 20

    This configuration was working before, but now every time I try to send a mail in SquirrelMail it reports:

        Message not sent. Server replied: Requested action aborted: error in processing
        451 4.3.0 <[email protected]>: Temporary lookup failure

    And I can't send mail to it from outside either. Any ideas?

    EDIT: Here are the issues MXToolBox reports for my domain (hopefully answering @Teun Vink):

                   BlackList   Mail Server   Web Server   DNS
        Error      4           0             2            0
        Warnings   0           0             0            3
        Passed     0           6             3            12

    So the domain is on some blacklist, but that doesn't explain the error at all. No mail server issues were found (other than it not working). The two web server errors are because I don't have HTTPS working (no SSL certificate), so that test fails. The three DNS warnings were already there when it was working on the other machine and relate to things I can't control:

        SOA Refresh Value is outside of the recommended range
        SOA Expire Value out of recommended range
        SOA NXDOMAIN Value too high

    I've searched, and as far as I can tell only the people who sold us the domain can change those values, and they won't.

    EDIT 2: I have half-solved the issue. On the new machine postfix was installed but postfix-mysql wasn't, so Postfix couldn't connect to the database (rookie mistake). After fixing that I can now send mail to the outside without any issues, but I still can't receive mail from outside. The sender doesn't get any non-delivery warning, but the message never lands in the inbox and the log shows:

        Nov 13 15:11:57 m21-traducoes postfix/smtpd[5872]: NOQUEUE: reject: RCPT from relay4.ptmail.sapo.pt[212.55.154.24]: 451 4.3.5 <relay4.ptmail.sapo.pt[212.55.154.24]>: Client host rejected: Server configuration error; from=<[email protected]> to=<[email protected]> proto=SMTP helo=<sapo.pt>
        Nov 13 15:11:57 m21-traducoes postfix/smtpd[5872]: disconnect from relay4.ptmail.sapo.pt[212.55.154.24]
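    The 451 4.3.5 "Server configuration error" on inbound connections means smtpd hit a lookup table it could not open, rather than a normal policy rejection. A minimal way to exercise each table from the shell is sketched below; this is a suggestion rather than part of the original post, the mailbox address is a placeholder, and the paths simply mirror the main.cf above.

        # Verify the MySQL maps answer queries (requires the postfix-mysql package)
        sudo postmap -q m21-traducoes.com.pt mysql:/etc/postfix/mysql-virtual_domains.cf
        sudo postmap -q someuser@m21-traducoes.com.pt mysql:/etc/postfix/mysql-virtual_mailboxes.cf

        # check_client_access hash:/var/lib/pop-before-smtp/hosts needs a compiled .db file;
        # if pop-before-smtp was never set up on the new box, a missing map alone is enough
        # to defer every inbound client with "Server configuration error"
        ls -l /var/lib/pop-before-smtp/hosts.db || sudo postmap /var/lib/pop-before-smtp/hosts

        # Double-check the restrictions Postfix is actually running with
        postconf smtpd_recipient_restrictions smtpd_relay_restrictions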


  • can't connect 2 subnets through RRAS 2008 r2

    - by mcdwight6
    I'm working on a project for a networking class. In VMware Workstation, I have to set up a 2008 R2 server with DHCP reservations for 2 clients on separate subnets and have them ping each other. Here is the output of the route print command:

        ===========================================================================
        Interface List
         13 ...00 50 56 2a e7 11 ...... Intel(R) PRO/1000 MT Network Connection #3
         10 ...00 0c 29 66 88 dd ...... Intel(R) PRO/1000 MT Network Connection
          1 ........................... Software Loopback Interface 1
         24 ...00 00 00 00 00 00 00 e0  Microsoft ISATAP Adapter
         11 ...02 00 54 55 4e 01 ...... Teredo Tunneling Pseudo-Interface
         14 ...00 00 00 00 00 00 00 e0  6TO4 Adapter
         16 ...00 00 00 00 00 00 00 e0  Microsoft ISATAP Adapter #2
         17 ...00 00 00 00 00 00 00 e0  isatap.{5B8FB196-616F-4168-A020-03E63A309CEC}
        ===========================================================================

        IPv4 Route Table
        ===========================================================================
        Active Routes:
        Network Destination        Netmask          Gateway       Interface  Metric
                  0.0.0.0          0.0.0.0          On-link        10.0.0.2     266
                  0.0.0.0          0.0.0.0          On-link       223.6.6.2     266
                 10.0.0.0        255.0.0.0          On-link        10.0.0.2     266
                 10.0.0.2  255.255.255.255          On-link        10.0.0.2     266
           10.255.255.255  255.255.255.255          On-link        10.0.0.2     266
                127.0.0.0        255.0.0.0          On-link       127.0.0.1     306
                127.0.0.1  255.255.255.255          On-link       127.0.0.1     306
          127.255.255.255  255.255.255.255          On-link       127.0.0.1     306
                128.6.0.0      255.255.0.0          On-link        10.0.0.2      11
            128.6.255.255  255.255.255.255          On-link        10.0.0.2     266
                223.6.6.0    255.255.255.0          On-link        10.0.0.2      11
                223.6.6.0    255.255.255.0          On-link       223.6.6.2     266
                223.6.6.2  255.255.255.255          On-link       223.6.6.2     266
              223.6.6.255  255.255.255.255          On-link        10.0.0.2     266
              223.6.6.255  255.255.255.255          On-link       223.6.6.2     266
                224.0.0.0        240.0.0.0          On-link       127.0.0.1     306
                224.0.0.0        240.0.0.0          On-link        10.0.0.2     266
                224.0.0.0        240.0.0.0          On-link       223.6.6.2     266
          255.255.255.255  255.255.255.255          On-link       127.0.0.1     306
          255.255.255.255  255.255.255.255          On-link        10.0.0.2     266
          255.255.255.255  255.255.255.255          On-link       223.6.6.2     266
        ===========================================================================
        Persistent Routes:
          Network Address          Netmask  Gateway Address  Metric
                  0.0.0.0          0.0.0.0         10.0.0.2  Default
                  0.0.0.0          0.0.0.0        128.6.0.2  Default
                  0.0.0.0          0.0.0.0        223.6.6.2  Default
                128.6.0.0      255.255.0.0         10.0.0.2       1
                223.6.6.0    255.255.255.0         10.0.0.2       1
        ===========================================================================

        IPv6 Route Table
        ===========================================================================
        Active Routes:
         If Metric Network Destination      Gateway
          1    306 ::1/128                  On-link
         14   1010 2002::/16                On-link
         14    266 2002:8006:2::8006:2/128  On-link
          1    306 ff00::/8                 On-link
        ===========================================================================
        Persistent Routes:
          None

    My problem is that although I have set up both dynamic and persistent static routes on my R2 server, neither of the clients can ping even the NIC outside its own subnet. For example, Client A can ping the NIC at 10.0.0.2 and vice-versa, but it gets a general transmit failure when it tries to ping the card at 223.6.6.2, let alone trying to ping the other client. I have completely disabled the firewalls on all machines and anything else I could think of, without success. What am I missing?

    Edit: Since posting this, I also noticed that the default gateways on my 2 NICs keep getting zeroed out. Does anyone know a fix for this?
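    For reference, a couple of command-line checks related to the routing setup described above. These are illustrative suggestions rather than part of the original question, and the addresses are simply the ones already shown in the route table:

        :: Confirm the server is actually forwarding IPv4 (RRAS normally sets this to 1)
        reg query HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v IPEnableRouter

        :: Persistent static routes of the kind listed above are created with route -p,
        :: e.g. re-creating the 128.6.0.0/16 entry (values from the table, not a recommendation)
        route -p add 128.6.0.0 mask 255.255.0.0 10.0.0.2 metric 1

        :: Show only the IPv4 table after changes
        route print -4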


  • "Service Unavailable" when browsing to static HTML page in non-application IIS website on Windows 2003 (possibly SharePoint WSS 2.0 related?)

    - by Jordan Rieger
    Background: My client has an old Pentium III Windows 2003 server whose 16/36 GB disks are dying. On it he has a database-driven web site and email application that needs further customization by a developer (me). First we need to get it working on the new server. The original developer is no longer available to provide a system setup guide, so my client got a tech who imaged the old drives over to the new server and managed to get it booting. But the IIS-driven site no longer works. In fact, it seems that IIS itself does not work.

    Problem: "Service Unavailable" when attempting to browse from the server itself to the URL for a local web site called "test", which I set up in IIS to serve a single static index.htm file. I did this to isolate the problem and eliminate the client's application from the equation. The site is set up on port 80 with the host header "test.myclientsdomain.com", and I used the etc\hosts file to point that host at the local IP. I know the host entry took effect because I can ping it.

    When doing an iisreset, I get:

        Attempting start...
        Restart attempt failed.
        IIS Admin Service or a service dependent on IIS Admin is not active.
        It most likely failed to start, which may mean that it's disabled.

    Despite this message, the services all stay in the Started state. The only relevant System event logs I found are:

        Event Type: Error
        Event Source: W3SVC
        Event Category: None
        Event ID: 1002
        Date: 11/4/2012
        Time: 11:04:47 PM
        User: N/A
        Computer: ALPHA1
        Description: Application pool 'DefaultAppPool' is being automatically disabled due to a series of failures in the process(es) serving that application pool.

        Event Type: Error
        Event Source: W3SVC
        Event Category: None
        Event ID: 1039
        Date: 11/4/2012
        Time: 11:13:12 PM
        User: N/A
        Computer: ALPHA1
        Description: A process serving application pool 'DefaultAppPool' reported a failure. The process id was '5636'. The data field contains the error number.
        Data: 0000: 7e 00 07 80

    And one Application event log:

        Event Type: Error
        Event Source: Windows SharePoint Services 2.0
        Event Category: None
        Event ID: 1000
        Date: 11/4/2012
        Time: 11:34:04 PM
        User: N/A
        Computer: ALPHA1
        Description: #50070: Unable to connect to the database STS_Config on ALPHA2\SharePoint. Check the database connection information and make sure that the database server is running.

    That last log tells me that the tech may have initially tried to have both the old and the new server running, by renaming the new server from ALPHA1 to ALPHA2. Perhaps SharePoint grabbed onto that change and now can't tell that the machine name has been switched back to the old ALPHA1. But why would SharePoint interfere with a static IIS web site serving a single HTML file? The test site is not even within an application pool (I clicked the Remove button).

    What I have tried/eliminated:
    No relevant services seem to be disabled: IIS Admin, WWW Publishing, SharePoint Timer.
    Giving Full Control to All Users/Everyone on the c:\inetpub\test folder serving my test site.
    I can connect to and query the local SharePoint config database (ALPHA1\SHAREPOINT\STS_CONFIG) from SSMS. But when I try to do stsadm -o setconfigdb -connect -databaseserver ALPHA1\SHAREPOINT, it tells me "The SharePoint administration port does not exist. Please use stsadm.exe to create it." And when I do that, using the port 9487 specified in the IIS SharePoint Admin site config, it tells me the port is already in use. Needless to say, simply browsing to the admin site gives me a similar error about being unable to reach the config database.

    I didn't want to go further down the SharePoint path, as it may be completely unrelated to my IIS issue, and I don't even know yet whether SharePoint is required for this application to work. The app itself is ASP.NET/C#/Silverlight with a little MS Word integration (maybe that's where the SharePoint stuff comes in).
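    One detail in the event logs above that may be worth decoding: the data field of event 1039 is an HRESULT stored little-endian. The sketch below is a suggested next check rather than anything from the original post, and it assumes the default IIS 6 adsutil.vbs location; on an imaged machine, a globally registered ISAPI filter DLL whose path no longer exists is a common way for a worker process to die before it can serve even static files.

        REM 7e 00 07 80 read little-endian is 0x8007007E,
        REM i.e. "The specified module could not be found"
        REM List the globally registered ISAPI filters and their load order:
        cscript %SystemDrive%\Inetpub\AdminScripts\adsutil.vbs ENUM /P W3SVC/Filters
        cscript %SystemDrive%\Inetpub\AdminScripts\adsutil.vbs GET W3SVC/Filters/FilterLoadOrder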


  • x11vnc working in Ubuntu 10.10

    - by pablorc
    I'm trying to start x11vnc on an Ubuntu 10.10 server (it runs in Amazon EC2), but I get the following error:

        $ sudo x11vnc -forever -usepw -httpdir /usr/share/vnc-java/ -httpport 5900 -auth /usr/sbin/gdm
        25/11/2010 13:29:51 passing arg to libvncserver: -httpport
        25/11/2010 13:29:51 passing arg to libvncserver: 5900
        25/11/2010 13:29:51 -usepw: found /home/ubuntu/.vnc/passwd
        25/11/2010 13:29:51 x11vnc version: 0.9.10 lastmod: 2010-04-28  pid: 3504
        25/11/2010 13:29:51 XOpenDisplay(":0.0") failed.
        25/11/2010 13:29:51 Trying again with XAUTHLOCALHOSTNAME=localhost ...
        25/11/2010 13:29:51 ***************************************
        25/11/2010 13:29:51 *** XOpenDisplay failed (:0.0)

        *** x11vnc was unable to open the X DISPLAY: ":0.0", it cannot continue.
        *** There may be "Xlib:" error messages above with details about the failure.

        Some tips and guidelines:

        ** An X server (the one you wish to view) must be running before x11vnc is started: x11vnc does not start the X server. (however, see the -create option if that is what you really want).

        ** You must use -display <disp>, -OR- set and export your $DISPLAY environment variable to refer to the display of the desired X server.
         - Usually the display is simply ":0" (in fact x11vnc uses this if you forget to specify it), but in some multi-user situations it could be ":1", ":2", or even ":137". Ask your administrator or a guru if you are having difficulty determining what your X DISPLAY is.

        ** Next, you need to have sufficient permissions (Xauthority) to connect to the X DISPLAY. Here are some Tips:
         - Often, you just need to run x11vnc as the user logged into the X session. So make sure to be that user when you type x11vnc.
         - Being root is usually not enough because the incorrect MIT-MAGIC-COOKIE file may be accessed. The cookie file contains the secret key that allows x11vnc to connect to the desired X DISPLAY.
         - You can explicitly indicate which MIT-MAGIC-COOKIE file should be used by the -auth option, e.g.:
               x11vnc -auth /home/someuser/.Xauthority -display :0
               x11vnc -auth /tmp/.gdmzndVlR -display :0
           you must have read permission for the auth file. See also '-auth guess' and '-findauth' discussed below.

        ** If NO ONE is logged into an X session yet, but there is a greeter login program like "gdm", "kdm", "xdm", or "dtlogin" running, you will need to find and use the raw display manager MIT-MAGIC-COOKIE file. Some examples for various display managers:
               gdm:     -auth /var/gdm/:0.Xauth
                        -auth /var/lib/gdm/:0.Xauth
               kdm:     -auth /var/lib/kdm/A:0-crWk72
                        -auth /var/run/xauth/A:0-crWk72
               xdm:     -auth /var/lib/xdm/authdir/authfiles/A:0-XQvaJk
               dtlogin: -auth /var/dt/A:0-UgaaXa
           Sometimes the command "ps wwwwaux | grep auth" can reveal the file location. Starting with x11vnc 0.9.9 you can have it try to guess by using: -auth guess (see also the x11vnc -findauth option.) Only root will have read permission for the file, and so x11vnc must be run as root (or copy it). The random characters in the filenames will of course change and the directory the cookie file resides in is system dependent.

        See also: http://www.karlrunge.com/x11vnc/faq.html

    I've already tried some -auth options but the error persists. I have gdm running. Thank you in advance.
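    A few variations that follow directly from the hints in that output. These are suggestions to try rather than a known fix, and the gdm cookie path shown is only an example of what to look for (the real path varies per system):

        # Let x11vnc locate the display manager's auth cookie itself
        sudo x11vnc -display :0 -auth guess -forever -usepw

        # Or find the cookie file by hand and pass it explicitly
        ps wwwwaux | grep auth
        sudo x11vnc -display :0 -auth /var/run/gdm/auth-for-gdm-XXXXXX/database -forever -usepw

        # If this EC2 instance has no real X session at all, -create starts one
        # (it needs a virtual X server such as Xvfb installed)
        sudo x11vnc -create -forever -usepw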


  • Mysqld not starting due to apparent db corruption

    - by pitosalas
    I am very new to administering MySQL and, unluckily for me, something caused the database to get clobbered. There are many error messages in the log and I am not sure how to proceed safely. Can you give some tips? Here's the log:

        110107 15:07:15  mysqld started
        110107 15:07:15  InnoDB: Database was not shut down normally!
        InnoDB: Starting crash recovery.
        InnoDB: Reading tablespace information from the .ibd files...
        InnoDB: Restoring possible half-written data pages from the doublewrite
        InnoDB: buffer...
        110107 15:07:15  InnoDB: Starting log scan based on checkpoint at
        InnoDB: log sequence number 35 515914826.
        InnoDB: Doing recovery: scanned up to log sequence number 35 515915839
        InnoDB: 1 transaction(s) which must be rolled back or cleaned up
        InnoDB: in total 1 row operations to undo
        InnoDB: Trx id counter is 0 1697553664
        110107 15:07:15  InnoDB: Starting an apply batch of log records to the database...
        InnoDB: Progress in percents: 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99
        InnoDB: Apply batch completed
        InnoDB: Starting rollback of uncommitted transactions
        InnoDB: Rolling back trx with id 0 1697553198, 1 rows to undo
        InnoDB: Error: trying to access page number 3522914176 in space 0,
        InnoDB: space name ./ibdata1,
        InnoDB: which is outside the tablespace bounds.
        InnoDB: Byte offset 0, len 16384, i/o type 10
        110107 15:07:15  InnoDB: Assertion failure in thread 3086403264 in file fil0fil.c line 3922
        InnoDB: We intentionally generate a memory trap.
        InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
        InnoDB: If you get repeated assertion failures or crashes, even
        InnoDB: immediately after the mysqld startup, there may be
        InnoDB: corruption in the InnoDB tablespace. Please refer to
        InnoDB: http://dev.mysql.com/doc/mysql/en/Forcing_recovery.html
        InnoDB: about forcing recovery.
        mysqld got signal 11;
        This could be because you hit a bug. It is also possible that this binary
        or one of the libraries it was linked against is corrupt, improperly built,
        or misconfigured. This error can also be caused by malfunctioning hardware.
        We will try our best to scrape up some info that will hopefully help diagnose
        the problem, but since we have already crashed, something is definitely wrong
        and this may fail.

        key_buffer_size=0
        read_buffer_size=131072
        max_used_connections=0
        max_connections=100
        threads_connected=0
        It is possible that mysqld could use up to
        key_buffer_size + (read_buffer_size + sort_buffer_size)*max_connections = 217599 K bytes of memory
        Hope that's ok; if not, decrease some variables in the equation.

        thd=(nil)
        Attempting backtrace. You can use the following information to find out
        where mysqld died. If you see no messages after this, something went
        terribly wrong...
        Cannot determine thread, fp=0xbffc55ac, backtrace may not be correct.
        Stack range sanity check OK, backtrace follows:
        0x8139eec 0x83721d5 0x833d897 0x833db71 0x832aa38 0x835f025 0x835f7a3 0x830a77e 0x8326b57 0x831c825 0x8317b8d 0x82a9e66 0x8315732 0x834fc9a 0x828d7c3 0x81c29dd 0x81b5620 0x813d9fe 0x40fdf3 0x80d5ff1
        New value of fp=(nil) failed sanity check, terminating stack trace!
        Please read http://dev.mysql.com/doc/mysql/en/Using_stack_trace.html and follow instructions on how to resolve the stack trace.
        Resolved stack trace is much more helpful in diagnosing the problem, so please do resolve it
        The manual page at http://www.mysql.com/doc/en/Crashing.html contains
        information that should help you find out what is causing the crash.
        110107 15:07:15  mysqld ended
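    The log itself points at the "Forcing InnoDB Recovery" procedure it links to. A minimal sketch of that approach is below; it is generic advice rather than part of the original post, the recovery level is something to raise one step at a time, and a file-level copy of the data directory should come first.

        # 1. Take a raw backup of the data directory before touching anything
        sudo cp -a /var/lib/mysql /var/lib/mysql.bak

        # 2. In my.cnf, under [mysqld], start the server in forced-recovery mode:
        #      innodb_force_recovery = 1     # try 1, then 2, ... up to 6 until mysqld stays up

        # 3. With mysqld running in that mode, dump whatever is readable
        mysqldump --all-databases > all-databases.sql

        # 4. Stop mysqld, move the damaged ibdata1/ib_logfile* files aside, remove the
        #    innodb_force_recovery line, restart, and reload the dump.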


  • Windows 7 is stuck at "Starting Windows" when I attempt to boot computer

    - by Eli
    Basically, whenever I turn on my computer, it gets to the "Starting Windows" phase and just stays there. The startup animation still plays, yet it gets nowhere. I have tried booting into safe mode, but it gets stuck loading CLASSPNP.SYS; it freezes there and doesn't continue booting. I have tried booting into recovery mode from the hard drive, and it freezes after displaying the background image. I have tried booting from a recovery CD, which works, and I was able to use System Restore. However, System Restore did not fix it, and it is still stuck at the "Starting Windows" screen. I have tried booting a Windows CD (a Windows 8 retail installer) to see if I could upgrade my way out of this issue, but that froze at a blank screen after the boot logo. I have tried changing BIOS settings (including resetting them), to no avail. I have re-plugged the internal PSU cables (this is a custom-built desktop), yet this changed nothing.

    I can boot into a loopback Ubuntu install on the same drive, which works fine, other than having issues with some of the USB ports and the network card. This system has worked fine for the past few months, completely stable, and nothing in the configuration changed before this error started happening. Startup Repair on the Windows recovery CD doesn't find any issues. Unplugging my secondary hard drive or swapping around memory doesn't change anything. The hard drive itself is fine: it hasn't shown any signs of failure and, once again, boots my other OS fine. If anyone could help with this, that would be great. I can't seem to find any possible solution.

    If it makes any difference, my system specs are as follows:
    AMD FX-8320
    Gigabyte GA-970A-D3
    4GB of DDR3
    Radeon HD 6870
    550W PSU

    I'd rather not reinstall Windows, because I have more than a terabyte of data that I would have to back up if that becomes the only option.

    EDIT: I have since tried the following:
    Tried the solution involving restoring files from RegBackup, which changed nothing.
    Tried testing everything with Hiren's Boot CD; everything comes back as fine.
    Tried disabling everything unnecessary in the BIOS and unplugging everything unneeded; it still hangs.
    Tried swapping out every possible combination of RAM, with the same result, so the RAM does not seem to be at fault.
    Tried every GPU I own (which is many!) and it still hangs at the exact same place.
    Tried minimizing the power consumption as much as possible, even using an old PCI graphics card. It still hangs at the same place in the same way, which suggests the PSU is not at fault.
    Tried resetting the BIOS again; still nothing.
    Tried every possible combination of BIOS options, even downclocking everything; it still hangs in the same spot.
    Tried upgrading the BIOS from version FB to FD, which changed nothing.

    Based on this, I would conclude the motherboard to be at fault. Are there any other possibilities? I don't want to spend $150 on a new motherboard.

    EDIT 2: This is what it gets stuck at when I try to boot into safe mode: note the slight graphical corruption at the top of the screen. No matter how I set up the system, this seems to be there. In addition, either it has stopped booting into safe mode now, or it takes upwards of 2+ hours, and I haven't left it running for that long.
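    Since the recovery CD's command prompt is reachable even though the installed system hangs, one further low-risk check is an offline disk and system-file scan. This is a generic suggestion, not something from the original post, and the drive letter inside the recovery environment may differ:

        REM From the System Recovery Options command prompt
        chkdsk C: /f
        sfc /scannow /offbootdir=C:\ /offwindir=C:\Windows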


  • Monitor randomly shutting down, computer accepting no input, need to restart to get working

    - by Sebastian Lamerichs
    First off, the spec list:

        OS: Windows 7 Ultimate 64-bit SP1
        CPU: i7-4820k @ 3.7GHz (stock)
        GPU: Two 3GB Radeon HD 7970s @ 1.05GHz
        Mobo: AsRock X79 Extreme6
        HDD: 2TB Seagate Barracuda 7200rpm
        RAM: 16GB quad-channel Kingston 1600MHz
        PSU: Antec HCG 900W
        Monitors: Acer S220HQL 1920x1080 + ViewSonic VA2251 1920x1080, plugged into different GPUs

    My problem is that, on a daily-ish basis, my monitors will turn off and not turn back on. My computer will still be running, GPU/CPU/case fans all still going, but the monitors will not turn back on. Additionally, it seems to cease all network activity. It doesn't seem to log any errors at all. I've verified that this is not a monitor issue, as when I press the num/caps/scroll lock buttons on my keyboard, the lights don't change, so the computer is clearly not accepting input.

    I have noticed a few other people on the internet with this problem, and some have claimed that it was solved by disabling PCI-Express Link State Power Management, but the issue still occurs for me after this. Whilst my CPU and GPUs both run at 100% 24/7, the temperatures are certainly not at dangerous levels, with the CPU averaging 65°C and the GPUs at 70°C and 78°C average. All components are brand new. I have tried forcing MSI Afterburner to start when Windows starts and to force a constant voltage, as this fixed the issue for a few days for another user, but he reported back saying that it had stopped working properly again, so I'm not putting too much faith in this working. Many people have said to adjust display sleep mode settings, but this will clearly not work, as the keyboard lights would still work if the monitors were the issue.

    The closest I can get to a log file for this issue is the following Folding@Home log:

        14:45:21:WU01:FS00:0x17:Completed 1120000 out of 2000000 steps (56%)
        14:46:43:WU00:FS01:0x17:Completed 480000 out of 2000000 steps (24%)
        14:46:49:WU01:FS00:0x17:Completed 1140000 out of 2000000 steps (57%)
        14:48:30:WU01:FS00:0x17:Completed 1160000 out of 2000000 steps (58%)
        14:49:55:WU01:FS00:0x17:Completed 1180000 out of 2000000 steps (59%)

    As you can see, the second GPU (FS01) stops computation approximately three and a half minutes before the issue occurs (it should be completing 1% every 80-120 seconds), and the first GPU (FS00) continues for a few minutes more before the logs just end. As far as I can tell, the computer has a network failure at the time the first GPU stops working; the latest IRC message I received from this time was at 14:47:58. That being said, there could have just not been any messages between then and 14:50:00, so I'm going to be connecting a laptop to the same bouncer to double-check if it happens again.

    The GPUs functioned perfectly well in another computer for a significant period of time, so I'm fairly confident that they aren't the issue, which means that this is being caused by either software or the motherboard, or possibly RAM. I really hope it's software. I heard from a forum board that there was a patch from Microsoft that fixed this problem, but "I've forgot which KB it was or the google search terms I used to find the patch, LOL.", so that's not much help. Haven't seen it mentioned by anyone else on about a dozen threads about this issue either. The computer is plugged in via a surge-protected power board, and I've run several other computers and pieces of hardware through it with no issues, so that is not the cause. I have just set the hard disk to never turn off, although I don't believe that that will solve the issue.

    Strangely, this has only happened when I'm not at the computer (which is actually a minority of the time). Until today it had only happened when I had not been actively using the computer for 6 hours, but today it happened within 10-30 minutes of me last using the computer actively. I have enabled file logging from MSI Afterburner, so hopefully this will shed some light on the issue, but I'm not too optimistic. I've heard that it could be a motherboard problem, but I figured I should ask around before RMAing it. Any help?


  • Can not open port 3306 on Ubuntu using iptables

    - by user94626
    I am trying to open port 3306 (for remote MySQL connections) on my Ubuntu 12.04 server machine, but for the life of me I can't get the damned thing to work! Here is what I did:

    1) List the current firewall rules:

        $> sudo iptables -nL -v

        Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target        prot opt in   out  source      destination
          225 16984 fail2ban-ssh  tcp  --  *    *    0.0.0.0/0   0.0.0.0/0    multiport dports 22
          220 69605 ACCEPT        all  --  lo   *    0.0.0.0/0   0.0.0.0/0
            0     0 REJECT        all  --  lo   *    0.0.0.0/0   127.0.0.0/8  reject-with icmp-port-unreachable
          486 54824 ACCEPT        all  --  *    *    0.0.0.0/0   0.0.0.0/0    state RELATED,ESTABLISHED
            1    60 ACCEPT        tcp  --  *    *    0.0.0.0/0   0.0.0.0/0    tcp dpt:80
           19   988 ACCEPT        tcp  --  *    *    0.0.0.0/0   0.0.0.0/0    tcp dpt:443
            1    52 ACCEPT        tcp  --  *    *    0.0.0.0/0   0.0.0.0/0    state NEW tcp dpt:22
            0     0 ACCEPT        icmp --  *    *    0.0.0.0/0   0.0.0.0/0    icmptype 8
            4   208 LOG           all  --  *    *    0.0.0.0/0   0.0.0.0/0    limit: avg 5/min burst 5 LOG flags 0 level 7 prefix "iptables denied: "
            4   208 REJECT        all  --  *    *    0.0.0.0/0   0.0.0.0/0    reject-with icmp-port-unreachable

        Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target  prot opt in  out  source      destination
            0     0 REJECT  all  --  *   *    0.0.0.0/0   0.0.0.0/0   reject-with icmp-port-unreachable

        Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target  prot opt in  out  source      destination
          735  182K ACCEPT  all  --  *   *    0.0.0.0/0   0.0.0.0/0

        Chain fail2ban-ssh (1 references)
         pkts bytes target  prot opt in  out  source      destination
          225 16984 RETURN  all  --  *   *    0.0.0.0/0   0.0.0.0/0

    2) Try to connect from a remote machine:

        $> mysql -u root -p -h x.x.x.x

    Output: timeout, failed to connect.

    3) Try to add a new rule to iptables:

        iptables -A INPUT -i eth0 -p tcp -m tcp --dport 3306 -j ACCEPT

    4) Make sure the new rule was added:

        $> sudo iptables -nL -v

        Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target        prot opt in    out  source      destination
          359 25972 fail2ban-ssh  tcp  --  *     *    0.0.0.0/0   0.0.0.0/0    multiport dports 22
          251 78665 ACCEPT        all  --  lo    *    0.0.0.0/0   0.0.0.0/0
            0     0 REJECT        all  --  lo    *    0.0.0.0/0   127.0.0.0/8  reject-with icmp-port-unreachable
          628 64420 ACCEPT        all  --  *     *    0.0.0.0/0   0.0.0.0/0    state RELATED,ESTABLISHED
            1    60 ACCEPT        tcp  --  *     *    0.0.0.0/0   0.0.0.0/0    tcp dpt:80
           19   988 ACCEPT        tcp  --  *     *    0.0.0.0/0   0.0.0.0/0    tcp dpt:443
            1    52 ACCEPT        tcp  --  *     *    0.0.0.0/0   0.0.0.0/0    state NEW tcp dpt:22
            0     0 ACCEPT        icmp --  *     *    0.0.0.0/0   0.0.0.0/0    icmptype 8
            5   260 LOG           all  --  *     *    0.0.0.0/0   0.0.0.0/0    limit: avg 5/min burst 5 LOG flags 0 level 7 prefix "iptables denied: "
            5   260 REJECT        all  --  *     *    0.0.0.0/0   0.0.0.0/0    reject-with icmp-port-unreachable
            0     0 ACCEPT        tcp  --  eth0  *    0.0.0.0/0   0.0.0.0/0    tcp dpt:3306

        Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target  prot opt in  out  source      destination
            0     0 REJECT  all  --  *   *    0.0.0.0/0   0.0.0.0/0   reject-with icmp-port-unreachable

        Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target  prot opt in  out  source      destination
          919  213K ACCEPT  all  --  *   *    0.0.0.0/0   0.0.0.0/0

        Chain fail2ban-ssh (1 references)
         pkts bytes target  prot opt in  out  source      destination
          359 25972 RETURN  all  --  *   *    0.0.0.0/0   0.0.0.0/0

    which appears to be the case (last line in the "Chain INPUT" section).

    5) Try to connect again from the remote machine:

        $> mysql -u root -p -h x.x.x.x

    Output: timeout, failed to connect. It fails again.

    6) Try to flush all rules:

        $> sudo iptables -F

    7) This time I CAN connect.

    8) Reboot the server and try to connect: FAILURE.

    I suspect that since the new rule is appended at the end, it has no effect, because there is a "reject all" sort of rule before it. If that is the case, how do I make sure the new rule is added in the right order? Otherwise, what am I missing? Please help.
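    That suspicion matches the listing: the appended dpt:3306 rule sits after the catch-all LOG/REJECT pair, so it is never reached. A sketch of the usual fix is below; it is a suggestion based on the rules shown above (rule numbers count from 1 within the INPUT chain), not part of the original question:

        # Remove the appended rule and re-insert it above the LOG/REJECT pair
        sudo iptables -D INPUT -i eth0 -p tcp -m tcp --dport 3306 -j ACCEPT
        sudo iptables -I INPUT 9 -p tcp -m tcp --dport 3306 -j ACCEPT

        # Verify the order, with rule numbers shown
        sudo iptables -nL INPUT -v --line-numbers

        # Rules are lost on reboot unless they are saved and restored, e.g.:
        sudo sh -c 'iptables-save > /etc/iptables.rules'
        # (and restore them at boot, for example via the iptables-persistent package)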


  • Week in Geek: New Security Flaw Confirmed for Internet Explorer Edition

    - by Asian Angel
    This week we learned how to use a PC to stay entertained while traveling for the holidays, create quality photo prints with free software, share links between any browser and any smartphone, create perfect Christmas photos using How-To Geek’s 10 best how-to photo guides, and had fun decorating Firefox with a collection of Holiday 2010 Personas themes. Photo by Repoort.

    Random Geek Links
    Photo by Asian Angel.

    Critical 0-Day Flaw Affects All Internet Explorer Versions, Microsoft Warns
    Microsoft has confirmed a zero-day vulnerability affecting all supported versions of Internet Explorer, including IE8, IE7 and IE6. Note: Article contains link to Microsoft Security Advisory detailing two work-arounds until a security update is released.

    Hackers targeting human rights, indie media groups
    Hackers are increasingly hitting the Web sites of human rights and independent media groups in an attempt to silence them, says a new study released this week by Harvard University’s Berkman Center for Internet & Society.

    OpenBSD: audits give no indication of back doors
    So far, the analyses of OpenBSD’s crypto and IPSec code have not provided any indication that the system contains back doors for listening to encrypted VPN connections. But the developers have already found two bugs during their current audits.

    Sophos: Beware Facebook’s new facial-recognition feature
    Facebook’s new facial recognition software might result in undesirable photos of users being circulated online, warned a security expert, who urged users to keep abreast with the social network’s privacy settings to prevent the abovementioned scenario from becoming a reality.

    Microsoft withdraws flawed Outlook update
    Microsoft has withdrawn update KB2412171 for Outlook 2007, released last Patch Tuesday, after a number of user complaints.

    Skype: Millions still without service
    Skype was still working to right itself going into the holiday weekend from a major outage that began this past Wednesday.

    Mozilla improves sync setup and WebGL in Firefox 4 beta 8
    Firefox 4.0 beta 8 brings better support for WebGL and introduces an improved setup process for Firefox Sync that simplifies the steps for configuring the synchronization service across multiple devices.

    Chrome OS the litmus test for cloud
    The success or failure of Google’s browser-oriented Chrome OS will be the litmus test to decide if the cloud is capable of addressing user needs for content and services, according to a new Ovum report released Monday.

    FCC Net neutrality rules reach mobile apps
    The Federal Communications Commission (FCC) finally released its long-expected regulations on Thursday and the related explanations total a whopping 194 pages. One new item that was not previously disclosed: mobile wireless providers can’t block “applications that compete with the provider’s” own voice or video telephony services.

    KDE and the Document Foundation join Open Invention Network
    The KDE e.V. and the Document Foundation (TDF) have both joined the Open Invention Network (OIN) as licensees, expanding the organization’s roster of supporters.

    Report: SEC looks into Hurd’s ousting from HP
    The scandal surrounding Mark Hurd’s departure from the world’s largest technology company in August has officially drawn attention from the U.S. Securities and Exchange Commission.

    Report: Google requests delay of new Google TVs
    Google TV is apparently encountering a bit of static that has resulted in a programming change.

    Geek Video of the Week
    This week we have a double dose of geeky video goodness for you with the original Mac vs PC video and the trailer for the sequel.
    Photo courtesy of Peacer.
    Mac vs PC
    Photo courtesy of Peacer.
    Mac vs PC 2 Trailer

    Random TinyHacker Links
    Awesome Tools To Extract Audio From Video
    Here’s a list of really useful, and free tools to rip audio from videos.
    Getting Your iPhone Out of Recovery Mode
    Is your iPhone stuck in recovery mode? This tutorial will help you get it out of that state.
    Google Shared Spaces
    Quickly create a shared space and collaborate with friends online.
    McAfee Internet Security 2011 – Upgrade not worthy of a version change
    McAfee has released their 2011 version of security products. And as this review details, the upgrades are minimal when compared to their 2010 products. For more information, check out the review.
    200 Countries Plotted
    Hans Rosling’s famous lectures combine enormous quantities of public data with a sport’s commentator’s style to reveal the story of the world’s past, present and future development. Now he explores stats in a way he has never done before – using augmented reality animation.

    Super User Questions
    Enjoy looking through this week’s batch of popular questions and answers from Super User.
    How to restore windows 7 to a known working state every time it boots?
    Is there an easy way to mass-transfer all files between two computers?
    Coffee spilled inside computer, damaged hard drive
    Computer does not boot after ram upgrade
    Keyboard not detected when trying to install Ubuntu 10.10

    How-To Geek Weekly Article Recap
    Have you had a super busy week while preparing for the holiday weekend? Then here is your chance to get caught up on your reading with our five hottest articles for the week.
    Ask How-To Geek: Rescuing an Infected PC, Installing Bloat-free iTunes, and Taming a Crazy Trackpad
    How to Use the Avira Rescue CD to Clean Your Infected PC
    Eight Geektacular Christmas Projects for Your Day Off
    VirtualBox 4.0 Rocks Extensions and a Simplified GUI
    Ask the Readers: How Many Monitors Do You Use with Your Computer?

    One Year Ago on How-To Geek
    Here are more great articles from one year ago for you to read and enjoy during the holiday break.
    Enjoy Distraction-Free Writing with WriteMonkey
    Shutter is a State of Art Screenshot Tool for Ubuntu
    Get Hex & RGB Color Codes the Easy Way
    Find User Scripts for Your Favorite Websites the Easy Way
    Access Your Unsorted Bookmarks the Easy Way (Firefox)

    The Geek Note
    That “wraps” things up for this week and we hope that everyone enjoys the rest of their holiday break! Found a great tip during the break? Then be sure to send it in to us at [email protected]. Photo by ArSiSa7.

    Latest Features How-To Geek ETC
    How to Use the Avira Rescue CD to Clean Your Infected PC
    The Complete List of iPad Tips, Tricks, and Tutorials
    Is Your Desktop Printer More Expensive Than Printing Services?
    20 OS X Keyboard Shortcuts You Might Not Know
    HTG Explains: Which Linux File System Should You Choose?
    HTG Explains: Why Does Photo Paper Improve Print Quality?
    Simon’s Cat Explores the Christmas Tree! [Video]
    The Outdoor Lights Scene from National Lampoon’s Christmas Vacation [Video]
    The Famous Home Alone Pizza Delivery Scene [Classic Video]
    Chronicles of Narnia: The Voyage of the Dawn Treader Theme for Windows 7
    Cardinal and Rabbit Sharing a Tree on a Cold Winter Morning Wallpaper
    An Alternate Star Wars Christmas Special [Video]


  • ActiveX component can't create Object Error? Check 64 bit Status

    - by Rick Strahl
    If you're running on IIS 7 and a 64-bit operating system you might run into the following error using ASP classic or ASP.NET with COM interop. In classic ASP applications the error will show up as:

        ActiveX component can't create object   (Error 429)

    (actually, without error handling the error just shows up as a 500 error page)

    In my case the code that's been giving me problems has been a FoxPro COM object I'd been using to serve banner ads to some of my pages. The code basically looks up banners from a database table and displays them at random. The ASP classic code that uses it looks like this:

        <%
        Set banner = Server.CreateObject("wwBanner.aspBanner")
        banner.BannerFile = "wwsitebanners"
        Response.Write(banner.GetBanner(-1))
        %>

    Originally this code had no specific error checking, as above, so the ASP pages just failed with 500 error pages from the Web server. To find out what the problem is, this code is more useful, at least for debugging:

        <%
        ON ERROR RESUME NEXT
        Set banner = Server.CreateObject("wwBanner.aspBanner")
        Response.Write(err.Number & " - " & err.Description)
        banner.BannerFile = "wwsitebanners"
        Response.Write(banner.GetBanner(-1))
        %>

    which results in:

        429 - ActiveX component can't create object

    which at least gives you a slight clue. In ASP.NET, invoking the same COM object with code like this:

        <%
        dynamic banner = wwUtils.CreateComInstance("wwBanner.aspBanner") as dynamic;
        banner.cBANNERFILE = "wwsitebanners";
        Response.Write(banner.getBanner(-1));
        %>

    results in:

        Retrieving the COM class factory for component with CLSID {B5DCBB81-D5F5-11D2-B85E-00600889F23B} failed due to the following error: 80040154 Class not registered (Exception from HRESULT: 0x80040154 (REGDB_E_CLASSNOTREG)).

    The class is in fact registered, though, and the COM server loads fine from a command prompt or another COM client. This error can be caused by a COM server that doesn't load. It looks like a COM registration error. There are a number of traditional reasons why this error can crop up:

        The server isn't registered (run regsvr32 to register a DLL server, or run an EXE server with the /regserver switch)
        Access permissions aren't set on the COM server (the Web account, e.g. Network Service, has to be able to read the DLL)
        The COM server fails to load during initialization, i.e. it fails during startup

    One thing I always do to check for COM errors is fire up the server in a COM client outside of IIS and ensure that it works there first - it's almost always easier to debug a server outside of the Web environment. In my case I tried the server in Visual FoxPro on the server with:

        loBanners = CREATEOBJECT("wwBanner.aspBanner")
        loBanners.cBannerFile = "wwsitebanners"
        ? loBanners.GetBanner(-1)

    and it worked just fine. If you don't have a full dev environment on the server you can also use VBScript to do the same thing and run the .vbs file from the command prompt:

        Set banner = CreateObject("wwBanner.aspBanner")
        banner.BannerFile = "wwsitebanners"
        MsgBox(banner.getBanner(-1))

    Since both of these work, it tells me the server is registered and working properly. This leaves startup failures or permissions as the problem. I double-checked permissions for the Application Pool and the permissions of the folder where the DLL lives, and both are properly set to allow access by the Application Pool's impersonated user. Just to be sure I assigned an Admin user to the Application Pool, but still no go. So now what?

    64 bit Servers Ahoy

    A couple of weeks back I had set up a few of my Application Pools to 64 bit mode. My server is Server 2008 64 bit and by default Application Pools run 64 bit. Originally when I installed the server I set up most of my Application Pools to 32 bit, mainly for backwards compatibility. But as more of my code migrates to 64 bit OSs I figured it'd be a good idea to see how well code runs under 64 bit. The transition has been mostly painless. Until today, when I noticed the problem with the code above after scrolling through my IIS logs and noticing a lot of 500 errors on many of my ASP classic pages. The code in question in most of these pages deals with this single simple COM object.

    It took a while to figure out that the problem is caused by the Application Pool running in 64 bit mode. The issue is that 32 bit COM objects (i.e. my old Visual FoxPro COM component) cannot be loaded in a 64 bit Application Pool. The ASP pages using this COM component broke on the day I switched my main Application Pool into 64 bit mode, but I didn't find the problem until I searched my logs for errors by pure chance. Fixing this is easy enough once you know what the problem is: switch the Application Pool to Enable 32-bit Applications. Once this is done the COM objects start working correctly again.

    64 bit ASP and ASP.NET with DCOM Servers

    This is kind of off topic, but incidentally it's possible to load 32 bit DCOM (out of process) servers from ASP.NET and ASP classic even if those applications run in 64 bit application pools. In fact, in West Wind Web Connection I use this capability to run a 64 bit ASP.NET handler that talks to a 32 bit FoxPro COM server, which allows West Wind Web Connection to run in native 64 bit mode without custom configuration (which is actually quite useful). It's probably not a common usage scenario but it's good to know that you can actually access 32 bit COM objects this way from ASP.NET. For West Wind Web Connection this works out well, as the DCOM interface only makes one non-chatty call to the backend server that handles all the rest of the request processing.

    Application Pool Isolation is your Friend

    For me the recent incident of failure in the classic ASP pages has just been another reminder to be very careful with moving applications to 64 bit operation. There are many little traps when switching to 64 bit that are very difficult to track and test for. I described one issue I had a couple of months ago where one of the default ASP.NET filters was loading the wrong version (32 bit instead of 64 bit), which was extremely difficult to track down and was caused by a very sneaky configuration switch error (basically 3 different entries for the same ISAPI filter, all with different bitness settings). It took me almost a full day to track this down.

    Recently I've taken to isolating individual applications into separate Application Pools rather than my past practice of combining many apps into shared AppPools. This is a good practice assuming you have enough memory to make it work. Application Pool isolation provides more modularity and allows me to selectively move applications to 64 bit. The error above came about precisely because I moved one of my most populous app pools to 64 bit and forgot about the minimal COM object use in some of my old pages. It's easy to forget.

    To 64bit or Not

    Is it worth it to move to 64 bit? Currently I'd say: not really. In my - admittedly limited - testing I don't see any significant performance increases. In fact, 64 bit apps just seem to consume considerably more memory (30-50% more in my pools on average) and performance is minimally improved (less than 5% at the very best) in the load testing I've performed on a couple of sites in both modes. The only real incentive for 64 bit would be applications that require huge data spaces that exceed the 32 bit 4 gigabyte memory limit. However, I have a hard time imagining an application that needs 4 gigs of memory in a single Application Pool :-). Curious to hear other opinions on the benefits of 64 bit operation.

    © Rick Strahl, West Wind Technologies, 2005-2011. Posted in COM, ASP.NET, FoxPro
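    The post flips the Enable 32-bit Applications switch in IIS Manager; the same setting can also be changed from the command line. This is a generic sketch rather than part of the article, and the application pool name is a placeholder to replace with your own:

        %windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /enable32BitAppOnWin64:true
        %windir%\system32\inetsrv\appcmd.exe list apppool "MyAppPool" /text:enable32BitAppOnWin64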


  • Enabling Kerberos Authentication for Reporting Services

    - by robcarrol
    Recently, I’ve helped several customers with Kerberos authentication problems with Reporting Services and Analysis Services, so I’ve decided to write this blog post and pull together some useful resources in one place (there are 2 whitepapers in particular that I found invaluable when configuring Kerberos authentication, and these can be found in the references section at the bottom of this post). In most of these cases, the problem has manifested itself with the Login failed for user ‘NT Authority\Anonymous’ (“double-hop”) error.

    By default, Reporting Services uses Windows Integrated Authentication, which includes the Kerberos and NTLM protocols for network authentication. Additionally, Windows Integrated Authentication includes the negotiate security header, which prompts the client to select Kerberos or NTLM for authentication. The client can access reports which have the appropriate permissions by using Kerberos for authentication. Servers that use Kerberos authentication can impersonate those clients and use their security context to access network resources. You can configure Reporting Services to use both Kerberos and NTLM authentication; however this may lead to a failure to authenticate. With negotiate, if Kerberos cannot be used, the authentication method will default to NTLM. When negotiate is enabled, the Kerberos protocol is always used except when:

        Clients/servers that are involved in the authentication process cannot use Kerberos.
        The client does not provide the information necessary to use Kerberos.

    An in-depth discussion of Kerberos authentication is beyond the scope of this post. However, when users execute reports that are configured to use Windows Integrated Authentication, their logon credentials are passed from the report server to the server hosting the data source. Delegation needs to be set on the report server and Service Principal Names (SPNs) set for the relevant services. When a user processes a report, the request must go through a Web server on its way to a database server for processing. Kerberos authentication enables the Web server to request a service ticket from the domain controller, impersonate the client when passing the request to the database server, and then restrict the request based on the user’s permissions. Each time a server is required to pass the request to another server, the same process must be used.

    Kerberos authentication is supported in both native and SharePoint integrated mode, but I’ll focus on native mode for the purpose of this post (I’ll explain configuring SharePoint integrated mode and Kerberos authentication in a future post). Configuring Kerberos avoids the authentication failures due to double-hop issues. These double-hop errors occur when a user’s Windows domain credentials can’t be passed to another server to complete the user’s request. In the case of my customers, users were executing Reporting Services reports that were configured to query Analysis Services cubes on a separate machine using Windows Integrated security. The double-hop issue occurs because NTLM credentials are valid for only one network hop; subsequent hops result in anonymous authentication.

    The client attempts to connect to the report server by making a request from a browser (or some other application), and the connection process begins with authentication. With NTLM authentication, client credentials are presented to Computer 2. However, Computer 2 can’t use the same credentials to access Computer 3 (so we get the Anonymous login error). To access Computer 3 it is necessary to configure the connection string with stored credentials, which is what a number of customers I have worked with have done to work around the double-hop authentication error. However, to get the benefits of Windows Integrated security, a better solution is to enable Kerberos authentication. Again, the connection process begins with authentication. With Kerberos authentication, the client and the server must demonstrate to one another that they are genuine, at which point authentication is successful and a secure client/server session is established.

    In the illustration above, the tiers represent the following:
        Client tier (computer 1): The client computer from which an application makes a request.
        Middle tier (computer 2): The Web server or farm where the client’s request is directed. Both the SharePoint and Reporting Services server(s) comprise the middle tier (but we’re only concentrating on native deployments just now).
        Back end tier (computer 3): The database/Analysis Services server or cluster where the requested data is stored.

    In order to enable Kerberos authentication for Reporting Services it’s necessary to configure the relevant SPNs, configure trust for delegation for the server accounts, configure Kerberos with full delegation, and configure the authentication types for Reporting Services.

    Service Principal Names (SPNs) are unique identifiers for services and identify the account’s type of service. If an SPN is not configured for a service, a client account will be unable to authenticate to the servers using Kerberos. You need to be a domain administrator to add an SPN, which can be added using the SetSPN utility. For Reporting Services in native mode, the following SPNs need to be registered:

        -- SQL Server service
        SETSPN -S mssqlsvc/servername:1433 Domain\SQL

    For named instances, or if the default instance is running under a different port, the specific port number should be used.

        -- Reporting Services service
        SETSPN -S http/servername Domain\SSRS
        SETSPN -S http/servername.domain.com Domain\SSRS

    The SPN should be set for the NetBIOS name of the server and the FQDN. If you access the reports using a host header or DNS alias, then that should also be registered:

        SETSPN -S http/www.reports.com Domain\SSRS

        -- Analysis Services service
        SETSPN -S msolapsvc.3/servername Domain\SSAS

    Next, you need to configure trust for delegation, which refers to enabling a computer to impersonate an authenticated user to services on another computer:

        Client:
        1. The requesting application must support the Kerberos authentication protocol.
        2. The user account making the request must be configured on the domain controller. Confirm that the following option is not selected: “Account is sensitive and cannot be delegated”.

        Servers:
        1. The service accounts must be trusted for delegation on the domain controller.
        2. The service accounts must have SPNs registered on the domain controller. If the service account is a domain user account, the domain administrator must register the SPNs.

    In Active Directory Users and Computers, verify that the domain user accounts used to access reports have been configured for delegation (the “Account is sensitive and cannot be delegated” option should not be selected). We then need to configure the Reporting Services service account and computer to use Kerberos with full delegation. We also need to do the same for the SQL Server or Analysis Services service accounts and computers (depending on what type of data source you are connecting to in your reports).

    Finally, and this is the part that sometimes gets overlooked, we need to configure the authentication type correctly for Reporting Services to use Kerberos authentication. This is configured in the Authentication section of the RSReportServer.config file on the report server:

        <Authentication>
            <AuthenticationTypes>
                <RSWindowsNegotiate/>
            </AuthenticationTypes>
            <EnableAuthPersistence>true</EnableAuthPersistence>
        </Authentication>

    This will enable Kerberos authentication for Internet Explorer. For other browsers, see the link below. The report server instance must be restarted for these changes to take effect.

    Once these changes have been made, all that’s left to do is test that Kerberos authentication is working properly by running a report from Report Manager that is configured to use Windows Integrated authentication (connecting to either an Analysis Services or SQL Server back end).

    Resources:
        Manage Kerberos Authentication Issues in a Reporting Services Environment
        http://download.microsoft.com/download/B/E/1/BE1AABB3-6ED8-4C3C-AF91-448AB733B1AF/SSRSKerberos.docx
        Configuring Kerberos Authentication for Microsoft SharePoint 2010 Products
        http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=23176
        How to: Configure Windows Authentication in Reporting Services
        http://msdn.microsoft.com/en-us/library/cc281253.aspx
        RSReportServer Configuration File
        http://msdn.microsoft.com/en-us/library/ms157273.aspx#Authentication
        Planning for Browser Support
        http://msdn.microsoft.com/en-us/library/ms156511.aspx
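    After registering the SPNs it is worth confirming that they, and the resulting tickets, look right. A short verification sketch follows; the account names simply reuse the placeholders from the SETSPN examples above, and klist ships with Windows 7 / Server 2008 R2 and later:

        REM List the SPNs registered against each service account
        setspn -L Domain\SQL
        setspn -L Domain\SSRS
        setspn -L Domain\SSAS

        REM On a client, after opening a report, confirm a Kerberos service ticket was issued
        klist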


  • Parallelism in .NET – Part 5, Partitioning of Work

    - by Reed
    When parallelizing any routine, we start by decomposing the problem.  Once the problem is understood, we need to break our work into separate tasks, so each task can be run on a different processing element.  This process is called partitioning. Partitioning our tasks is a challenging feat.  There are opposing forces at work here: too many partitions adds overhead, too few partitions leaves processors idle.  Trying to work the perfect balance between the two extremes is the goal for which we should aim.  Luckily, the Task Parallel Library automatically handles much of this process.  However, there are situations where the default partitioning may not be appropriate, and knowledge of our routines may allow us to guide the framework to making better decisions. First off, I’d like to say that this is a more advanced topic.  It is perfectly acceptable to use the parallel constructs in the framework without considering the partitioning taking place.  The default behavior in the Task Parallel Library is very well-behaved, even for unusual work loads, and should rarely be adjusted.  I have found few situations where the default partitioning behavior in the TPL is not as good or better than my own hand-written partitioning routines, and recommend using the defaults unless there is a strong, measured, and profiled reason to avoid using them.  However, understanding partitioning, and how the TPL partitions your data, helps in understanding the proper usage of the TPL. I indirectly mentioned partitioning while discussing aggregation.  Typically, our systems will have a limited number of Processing Elements (PE), which is the terminology used for hardware capable of processing a stream of instructions.  For example, in a standard Intel i7 system, there are four processor cores, each of which has two potential hardware threads due to Hyperthreading.  This gives us a total of 8 PEs – theoretically, we can have up to eight operations occurring concurrently within our system. In order to fully exploit this power, we need to partition our work into Tasks.  A task is a simple set of instructions that can be run on a PE.  Ideally, we want to have at least one task per PE in the system, since fewer tasks means that some of our processing power will be sitting idle.  A naive implementation would be to just take our data, and partition it with one element in our collection being treated as one task.  When we loop through our collection in parallel, using this approach, we’d just process one item at a time, then reuse that thread to process the next, etc.  There’s a flaw in this approach, however.  It will tend to be slower than necessary, often slower than processing the data serially. The problem is that there is overhead associated with each task.  When we take a simple foreach loop body and implement it using the TPL, we add overhead.  First, we change the body from a simple statement to a delegate, which must be invoked.  In order to invoke the delegate on a separate thread, the delegate gets added to the ThreadPool’s current work queue, and the ThreadPool must pull this off the queue, assign it to a free thread, then execute it.  If our collection had one million elements, the overhead of trying to spawn one million tasks would destroy our performance. The answer, here, is to partition our collection into groups, and have each group of elements treated as a single task.  
By adding a partitioning step, we can break our total work into small enough tasks to keep our processors busy, but large enough tasks to avoid overburdening the ThreadPool.  There are two clear, opposing goals here: Always try to keep each processor working, but also try to keep the individual partitions as large as possible. When using Parallel.For, the partitioning is always handled automatically.  At first, partitioning here seems simple.  A naive implementation would merely split the total element count up by the number of PEs in the system, and assign a chunk of data to each processor.  Many hand-written partitioning schemes work in this exactly manner.  This perfectly balanced, static partitioning scheme works very well if the amount of work is constant for each element.  However, this is rarely the case.  Often, the length of time required to process an element grows as we progress through the collection, especially if we’re doing numerical computations.  In this case, the first PEs will finish early, and sit idle waiting on the last chunks to finish.  Sometimes, work can decrease as we progress, since previous computations may be used to speed up later computations.  In this situation, the first chunks will be working far longer than the last chunks.  In order to balance the workload, many implementations create many small chunks, and reuse threads.  This adds overhead, but does provide better load balancing, which in turn improves performance. The Task Parallel Library handles this more elaborately.  Chunks are determined at runtime, and start small.  They grow slowly over time, getting larger and larger.  This tends to lead to a near optimum load balancing, even in odd cases such as increasing or decreasing workloads.  Parallel.ForEach is a bit more complicated, however. When working with a generic IEnumerable<T>, the number of items required for processing is not known in advance, and must be discovered at runtime.  In addition, since we don’t have direct access to each element, the scheduler must enumerate the collection to process it.  Since IEnumerable<T> is not thread safe, it must lock on elements as it enumerates, create temporary collections for each chunk to process, and schedule this out.  By default, it uses a partitioning method similar to the one described above.  We can see this directly by looking at the Visual Partitioning sample shipped by the Task Parallel Library team, and available as part of the Samples for Parallel Programming.  When we run the sample, with four cores and the default, Load Balancing partitioning scheme, we see this: The colored bands represent each processing core.  You can see that, when we started (at the top), we begin with very small bands of color.  As the routine progresses through the Parallel.ForEach, the chunks get larger and larger (seen by larger and larger stripes). Most of the time, this is fantastic behavior, and most likely will out perform any custom written partitioning.  However, if your routine is not scaling well, it may be due to a failure in the default partitioning to handle your specific case.  With prior knowledge about your work, it may be possible to partition data more meaningfully than the default Partitioner. There is the option to use an overload of Parallel.ForEach which takes a Partitioner<T> instance.  The Partitioner<T> class is an abstract class which allows for both static and dynamic partitioning.  
By overriding Partitioner<T>.SupportsDynamicPartitions, you can specify whether a dynamic approach is available.  If not, your custom Partitioner<T> subclass would override GetPartitions(int), which returns a list of IEnumerator<T> instances.  These are then used by the Parallel class to split work up amongst processors.  When dynamic partitioning is available, GetDynamicPartitions() is used, which returns an IEnumerable<T> for each partition.  If you do decide to implement your own Partitioner<T>, keep in mind the goals and tradeoffs of different partitioning strategies, and design appropriately. The Samples for Parallel Programming project includes a ChunkPartitioner class in the ParallelExtensionsExtras project.  This provides example code for implementing your own, custom allocation strategies, including a static allocator of a given chunk size.  Although implementing your own Partitioner<T> is possible, as I mentioned above, this is rarely required or useful in practice.  The default behavior of the TPL is very good, often better than any hand written partitioning strategy.
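    As a rough sketch of the chunking idea described above, the built-in System.Collections.Concurrent.Partitioner can hand Parallel.ForEach fixed-size index ranges instead of individual elements. The array, the chunk size of 1000 and the ProcessItem method below are illustrative assumptions, not code from the article:

    using System;
    using System.Collections.Concurrent;
    using System.Threading.Tasks;

    class RangePartitioningSketch
    {
        static double ProcessItem(int i)
        {
            return Math.Sqrt(i);   // stand-in for the real per-element work
        }

        static void Main()
        {
            double[] data = new double[1000000];

            // Partitioner.Create(fromInclusive, toExclusive, rangeSize) yields
            // Tuple<int, int> ranges of roughly 1000 elements each, so the
            // delegate-invocation overhead is paid once per chunk rather than
            // once per element.
            var ranges = Partitioner.Create(0, data.Length, 1000);

            Parallel.ForEach(ranges, range =>
            {
                for (int i = range.Item1; i < range.Item2; i++)
                {
                    data[i] = ProcessItem(i);
                }
            });
        }
    }

    Note that a fixed range size gives up the load balancing of the default growing chunks, which is exactly the trade-off the article describes.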

    Read the article

  • WinInet Apps failing when Internet Explorer is set to Offline Mode

    - by Rick Strahl
    Ran into a nasty issue last week when all of a sudden many of my old applications that are using WinInet for HTTP access started failing. Specifically, the WinInet HttpSendRequest() call started failing with an error of 2, which when retrieving the error boils down to: WinInet Error 2: The system cannot find the file specified Now this error can pop up in many legitimate scenarios with WinInet such as when no Internet connection is available or the HTTP configuration (usually configured in Internet Explorer’s options) is misconfigured. The error typically means that the server in question cannot be found or more specifically an Internet connection can’t be established. In this case the problem started suddenly and was causing some of my own applications (old Visual FoxPro apps using my own wwHttp library) and all Adobe Air applications (which apparently uses WinInet for its basic HTTP stack) along with a few more oddball applications to fail instantly when trying to connect via HTTP. Most other applications – all of my installed browsers, email clients, various social network updaters all worked just fine. It seems it was only WinInet apps that were failing. Yet oddly Internet Explorer appeared to be working. So the problem seemed to be isolated to those ‘classic’ applications using WinInet. WinInet’s base configuration uses the Internet Explorer options dialog. To check this out I typically go to the Internet Explorer options and find the Connection tab, and check out the LAN Setup. Make sure there are no rogue proxy settings or configuration scripts that are invalid. Trying with Auto-configuration on and off also can often fix ‘real’ configuration errors. This time however this wasn’t a problem – nothing in the LAN configuration was set (all default). I also played with the Automatic detection of settings which also had no effect. I also tried to use Fiddler to see if that would tell me something. Fiddler has a few additional WinInet configuration options in its configuration. Running Fiddler and hitting an HTTP request using WinInet would never actually hit Fiddler – the failure would occur before WinInet ever fired up the HTTP connection to go through the Fiddler HTTP proxy. And the Culprit is: Internet Explorer’s Work Offline Option The culprit in this situation was Internet Explorer which at some point, unknown to me switched into Offline Mode and was then shut down: When this Offline mode is checked when IE is running *or* if IE gets shut down with this flag set, all applications using WinInet by default assume that it’s running in offline mode. Depending on your caching HTTP headers and whether the page was cached previously you may or may not get a response or an error. For an independent non-browser application this will be highly unpredictable and likely result in failures getting online – especially if the application forces requests to always reload by disabling HTTP caching (as I do on most of my dynamic HTTP clients). What makes this especially tricky is that even when IE is in offline mode in the browser, you can still browse around the Web *if* you have a connection. IE will try to load anything it has cached from the local cache, but as soon as you hit a URL that isn’t cached it will automatically try to access that URL and uncheck the Work Offline option. Conversely if you get knocked off the Internet and browse in IE 9, IE will automatically go into offline mode. 
I never explicitly set offline mode – it just automatically sets itself on and off depending on the connection. Problem is if you’re not using IE all the time (as I do – rarely and just for testing so usually a few commonly used URLs) and you left it in offline mode when you exit, offline mode stays set which results in the above head scratcher. Ack. This isn’t new behavior in IE 9 BTW – this behavior has always been there, but I think what’s different is that IE now automatically switches between online and offline modes without notifying you at all, so it’s hard to tell when you are offline. Fixing the Issue in your Code If you have an application that is using WinInet, there’s a WinInet option called INTERNET_OPTION_IGNORE_OFFLINE. I just checked this out in my own applications and Internet Explorer 9 and it works, but apparently it’s been broken for some older releases (I can’t confirm how far back though) – lots of posts seem to suggest the flag doesn’t work. However, in IE 9 at least it does seem to work if you call InternetSetOption before you call HttpOpenRequest with the Http Session handle. In FoxPro code I use: DECLARE INTEGER InternetSetOption ;    IN WININET.DLL ;    INTEGER HINTERNET,;    INTEGER dwFlags,;    INTEGER @dwValue,;    INTEGER cbSize lnOptionValue = 1   && BOOL TRUE pass by reference   *** Set needed SSL flags lnResult=InternetSetOption(this.hHttpSession,;    INTERNET_OPTION_IGNORE_OFFLINE ,;  && 77    @lnOptionValue ,4)   DECLARE INTEGER HttpOpenRequest ;    IN WININET.DLL ;    INTEGER hHTTPHandle,;    STRING lpzReqMethod,;    STRING lpzPage,;    STRING lpzVersion,;    STRING lpzReferer,;    STRING lpzAcceptTypes,;    INTEGER dwFlags,;    INTEGER dwContextw     hHTTPResult=HttpOpenRequest(THIS.hHttpsession,;    lcVerb,;    tcPage,;    NULL,NULL,NULL,;    INTERNET_FLAG_RELOAD + ;    IIF(THIS.lsecurelink,INTERNET_FLAG_SECURE,0) + ;    this.nHTTPServiceFlags,0) …  And this fixes the issue at least for IE 9… In my FoxPro wwHttp class I now call this by default to never get bitten by this again… This solves the problem permanently for my HTTP client. I never want to see offline operation in an HTTP client API – it’s just too unpredictable in handling errors and the last thing you want is getting unpredictably stale data. Problem solved but this behavior is – well ugly. But then that’s to be expected from an API that’s based on Internet Explorer, eh?© Rick Strahl, West Wind Technologies, 2005-2011Posted in HTTP  Windows  
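    For .NET clients that call WinInet through P/Invoke, the same INTERNET_OPTION_IGNORE_OFFLINE flag (value 77, as used in the FoxPro code above) can be applied before HttpOpenRequest. The following C# translation is a hedged sketch rather than code from the article; the session handle is assumed to come from your own InternetOpen/InternetConnect calls:

    using System;
    using System.Runtime.InteropServices;

    static class WinInetOfflineFix
    {
        const int INTERNET_OPTION_IGNORE_OFFLINE = 77;

        [DllImport("wininet.dll", SetLastError = true)]
        static extern bool InternetSetOption(IntPtr hInternet, int dwOption,
                                             ref int lpBuffer, int dwBufferLength);

        // Call with the HTTP session handle, before HttpOpenRequest, so requests
        // ignore Internet Explorer's Work Offline state.
        public static bool IgnoreOfflineMode(IntPtr hHttpSession)
        {
            int optionValue = 1;   // BOOL TRUE
            return InternetSetOption(hHttpSession, INTERNET_OPTION_IGNORE_OFFLINE,
                                     ref optionValue, sizeof(int));
        }
    }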

    Read the article

  • Top 10 Reasons SQL Developer is Perfect for Oracle Beginners

    - by thatjeffsmith
    Learning new technologies can be daunting. If you’ve never used a Mac before, you’ll probably be a bit baffled at first. But, you’re probably at least coming from a desktop computing background (Windows), so you common frame of reference. But what if you’re just now learning to use a relational database? Yes, you’ve played with Access a bit, but now your employer or college instructor has charged you with becoming proficient with Oracle database. Here’s 10 reasons why I think Oracle SQL Developer is the perfect vehicle to help get you started. 1. It’s free No need to break into one of these… No start-up costs, no need to wrangle budget dollars from your company. Students don’t have any money after books and lab fees anyway. And most employees don’t like having to ask for ‘special’ software anyway. So avoid all of that and make sure the free stuff doesn’t suit your needs first. Upgrades are available on a regular base, also at no cost, and support is freely available via our public forums. 2. It will run pretty much anywhere Windows – check. OSX (Apple) – check. Unix – check. Linux – check. No need to start up a windows VM to run your Windows-only software in your lab machine. 3. Anyone can install it There’s no installer, no registry to be updated, no admin privs to be obtained. If you can download and extract files to your machine or USB storage device, you can run it. You can be up and running with SQL Developer in under 5 minutes. Here’s a video tutorial to see how to get started. 4. It’s ubiquitous I admit it, I learned a new word yesterday and I wanted an excuse to use it. SQL Developer’s everywhere. It’s had over 2,500,000 downloads in the past year, and is the one of the most downloaded items from OTN. This means if you need help, there’s someone sitting nearby you that can assist, and since they’re in the same tool as you, they’ll be speaking the same language. 5. Simple User Interface Up-up-down-down-Left-right-left-right-A-B-A-B-START will get you 30 lives, but you already knew that, right? You connect, you see your objects, you click on your objects. Or, you can use the worksheet to write your queries and programs in. There’s only one toolbar, and just a few buttons. If you’re like me, video games became less fun when each button had 6 action items mapped to it. I just want the good ole ‘A’, ‘B’, ‘SELECT’, and ‘START’ controls. If you’re new to Oracle, you shouldn’t have the double-workload of learning a new complicated tool as well. 6. It’s not a ‘black box’ Click through your objects, but also get the SQL that drives the GUI As you use the wizards to accomplish tasks for you, you can view the SQL statement being generated on your behalf. Just because you have a GUI, doesn’t mean you’re ceding your responsibility to learn the underlying code that makes the database work. 7. It’s four tools in one It’s not just a query tool. Maybe you need to design a data model first? Or maybe you need to migrate your Sybase ASE database to Oracle for a new project? Or maybe you need to create some reports? SQL Developer does all of that. So once you get comfortable with one part of the tool, the others will be much easier to pick up as your needs change. 8. Great learning resources available Videos, blogs, hands-on learning labs – you name it, we got it. Why wait for someone to train you, when you can train yourself at your own pace? 9. 
You can use it to teach yourself SQL Instead of being faced with the white-screen-of-panic, you can visually build your queries by dragging and dropping tables and views into the Query Builder. Yes, ‘just like Access’ – only better. And as you build your query, toggle to the Worksheet panel and see the SQL statement. Again, SQL Developer is not a black box. If you prefer to learn by trial and error, the worksheet will attempt to suggest the next bit of your SQL statement with it’s completion insight feature. And if you have syntax errors, those will be highlighted – just like your misspelled words in your favorite word processor. 10. It scales to match your experience level You won’t be a n00b forever. In 6-8 months, when you’re ready to tackle something a bit more complicated, like XML DB or Oracle Spatial, the tool is already there waiting on you. No need to go out and find the ‘advanced’ tool. 11. Wait, you said this was a ‘Top 10′ list? Yes. Yes, I did. I’m using this ‘trick’ to get you to continue reading because I’m going to say something you might not want to hear. Are you ready? Tools won’t replace experience, failure, hard work, and training. Just because you have the keys to the car, doesn’t mean you’re ready to head out on the race track. While SQL Developer reduces the barriers to entry, it does not completely remove them. Many experienced folks simply do not like tools. Rather, they don’t like the people that pick up tools without the know-how to properly use them. If you don’t understand what ‘TRUNCATE’ means, don’t try it out. Try picking up a book first. Of course, it’s very nice to have your own sandbox to play in, so you don’t upset the other children. That’s why I really like our Dev Days Database Virtual Box image. It’s your own database to learn and experiment with.

    Read the article

  • Web Browser Control – Specifying the IE Version

    - by Rick Strahl
    I use the Internet Explorer Web Browser Control in a lot of my applications to display document type layout. HTML happens to be one of the most common document formats and displaying data in this format – even in desktop applications, is often way easier than using normal desktop technologies. One issue the Web Browser Control has that it’s perpetually stuck in IE 7 rendering mode by default. Even though IE 8 and now 9 have significantly upgraded the IE rendering engine to be more CSS and HTML compliant by default the Web Browser control will have none of it. IE 9 in particular – with its much improved CSS support and basic HTML 5 support is a big improvement and even though the IE control uses some of IE’s internal rendering technology it’s still stuck in the old IE 7 rendering by default. This applies whether you’re using the Web Browser control in a WPF application, a WinForms app, a FoxPro or VB classic application using the ActiveX control. Behind the scenes all these UI platforms use the COM interfaces and so you’re stuck by those same rules. Rendering Challenged To see what I’m talking about here are two screen shots rendering an HTML 5 doctype page that includes some CSS 3 functionality – rounded corners and border shadows - from an earlier post. One uses IE 9 as a standalone browser, and one uses a simple WPF form that includes the Web Browser control. IE 9 Browser:   Web Browser control in a WPF form: The IE 9 page displays this HTML correctly – you see the rounded corners and shadow displayed. Obviously the latter rendering using the Web Browser control in a WPF application is a bit lacking. Not only are the new CSS features missing but the page also renders in Internet Explorer’s quirks mode so all the margins, padding etc. behave differently by default, even though there’s a CSS reset applied on this page. If you’re building an application that intends to use the Web Browser control for a live preview of some HTML this is clearly undesirable. Feature Delegation via Registry Hacks Fortunately starting with Internet Explore 8 and later there’s a fix for this problem via a registry setting. You can specify a registry key to specify which rendering mode and version of IE should be used by that application. These are not global mind you – they have to be enabled for each application individually. There are two different sets of keys for 32 bit and 64 bit applications. 32 bit: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_BROWSER_EMULATION Value Key: yourapplication.exe 64 bit: HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_BROWSER_EMULATION Value Key: yourapplication.exe The value to set this key to is (taken from MSDN here) as decimal values: 9999 (0x270F) Internet Explorer 9. Webpages are displayed in IE9 Standards mode, regardless of the !DOCTYPE directive. 9000 (0x2328) Internet Explorer 9. Webpages containing standards-based !DOCTYPE directives are displayed in IE9 mode. 8888 (0x22B8) Webpages are displayed in IE8 Standards mode, regardless of the !DOCTYPE directive. 8000 (0x1F40) Webpages containing standards-based !DOCTYPE directives are displayed in IE8 mode. 7000 (0x1B58) Webpages containing standards-based !DOCTYPE directives are displayed in IE7 Standards mode.   
The added key looks something like this in the Registry Editor: With this in place my Html Html Help Builder application which has wwhelp.exe as its main executable now works with HTML 5 and CSS 3 documents in the same way that Internet Explorer 9 does. Incidentally I accidentally added an ‘empty’ DWORD value of 0 to my EXE name and that worked as well giving me IE 9 rendering. Although not documented I suspect 0 (or an invalid value) will default to the installed browser. Don’t have a good way to test this but if somebody could try this with IE 8 installed that would be great: What happens when setting 9000 with IE 8 installed? What happens when setting 0 with IE 8 installed? Don’t forget to add Keys for Host Environments If you’re developing your application in Visual Studio and you run the debugger you may find that your application is still not rendering right, but if you run the actual generated EXE from Explorer or the OS command prompt it works. That’s because when you run the debugger in Visual Studio it wraps your application into a debugging host container. For this reason you might want to also add another registry key for yourapp.vshost.exe on your development machine. If you’re developing in Visual FoxPro make sure you add a key for vfp9.exe to see the rendering adjustments in the Visual FoxPro development environment. Cleaner HTML - no more HTML mangling! There are a number of additional benefits to setting up rendering of the Web Browser control to the IE 9 engine (or even the IE 8 engine) beyond the obvious rendering functionality. IE 9 actually returns your HTML in something that resembles the original HTML formatting, as opposed to the IE 7 default format which mangled the original HTML content. If you do the following in the WPF application: private void button2_Click(object sender, RoutedEventArgs e) { dynamic doc = this.webBrowser.Document; MessageBox.Show(doc.body.outerHtml); } you get different output depending on the rendering mode active. With the default IE 7 rendering you get: <BODY><DIV> <H1>Rounded Corners and Shadows - Creating Dialogs in CSS</H1> <DIV class=toolbarcontainer><A class=hoverbutton href="./"><IMG src="../../css/images/home.gif"> Home</A> <A class=hoverbutton href="RoundedCornersAndShadows.htm"><IMG src="../../css/images/refresh.gif"> Refresh</A> </DIV> <DIV class=containercontent> <FIELDSET><LEGEND>Plain Box</LEGEND><!-- Simple Box with rounded corners and shadow --> <DIV style="BORDER-BOTTOM: steelblue 2px solid; BORDER-LEFT: steelblue 2px solid; WIDTH: 550px; BORDER-TOP: steelblue 2px solid; BORDER-RIGHT: steelblue 2px solid" class="roundbox boxshadow"> <DIV style="BACKGROUND: khaki" class="boxcontenttext roundbox">Simple Rounded Corner Box. </DIV></DIV></FIELDSET> <FIELDSET><LEGEND>Box with Header</LEGEND> <DIV style="BORDER-BOTTOM: steelblue 2px solid; BORDER-LEFT: steelblue 2px solid; WIDTH: 550px; BORDER-TOP: steelblue 2px solid; BORDER-RIGHT: steelblue 2px solid" class="roundbox boxshadow"> <DIV class="gridheaderleft roundbox-top">Box with a Header</DIV> <DIV style="BACKGROUND: khaki" class="boxcontenttext roundbox-bottom">Simple Rounded Corner Box. 
</DIV></DIV></FIELDSET> <FIELDSET><LEGEND>Dialog Style Window</LEGEND> <DIV style="POSITION: relative; WIDTH: 450px" id=divDialog class="dialog boxshadow" jQuery16107208195684204002="2"> <DIV style="POSITION: relative" class=dialog-header> <DIV class=closebox></DIV>User Sign-in <DIV class=closebox jQuery16107208195684204002="3"></DIV></DIV> <DIV class=descriptionheader>This dialog is draggable and closable</DIV> <DIV class=dialog-content><LABEL>Username:</LABEL> <INPUT name=txtUsername value=" "> <LABEL>Password</LABEL> <INPUT name=txtPassword value=" "> <HR> <INPUT id=btnLogin value=Login type=button> </DIV> <DIV class=dialog-statusbar>Ready</DIV></DIV></FIELDSET> </DIV> <SCRIPT type=text/javascript>     $(document).ready(function () {         $("#divDialog")             .draggable({ handle: ".dialog-header" })             .closable({ handle: ".dialog-header",                 closeHandler: function () {                     alert("Window about to be closed.");                     return true;  // true closes - false leaves open                 }             });     }); </SCRIPT> </DIV></BODY> Now lest you think I’m out of my mind and create complete whacky HTML rooted in the last century, here’s the IE 9 rendering mode output which looks a heck of a lot cleaner and a lot closer to my original HTML of the page I’m accessing: <body> <div>         <h1>Rounded Corners and Shadows - Creating Dialogs in CSS</h1>     <div class="toolbarcontainer">         <a class="hoverbutton" href="./"> <img src="../../css/images/home.gif"> Home</a>         <a class="hoverbutton" href="RoundedCornersAndShadows.htm"> <img src="../../css/images/refresh.gif"> Refresh</a>     </div>         <div class="containercontent">     <fieldset>         <legend>Plain Box</legend>                <!-- Simple Box with rounded corners and shadow -->             <div style="border: 2px solid steelblue; width: 550px;" class="roundbox boxshadow">                              <div style="background: khaki;" class="boxcontenttext roundbox">                     Simple Rounded Corner Box.                 </div>             </div>     </fieldset>     <fieldset>         <legend>Box with Header</legend>         <div style="border: 2px solid steelblue; width: 550px;" class="roundbox boxshadow">                          <div class="gridheaderleft roundbox-top">Box with a Header</div>             <div style="background: khaki;" class="boxcontenttext roundbox-bottom">                 Simple Rounded Corner Box.             
</div>         </div>     </fieldset>       <fieldset>         <legend>Dialog Style Window</legend>         <div style="width: 450px; position: relative;" id="divDialog" class="dialog boxshadow">             <div style="position: relative;" class="dialog-header">                 <div class="closebox"></div>                 User Sign-in             <div class="closebox"></div></div>             <div class="descriptionheader">This dialog is draggable and closable</div>                    <div class="dialog-content">                             <label>Username:</label>                 <input name="txtUsername" value=" " type="text">                 <label>Password</label>                 <input name="txtPassword" value=" " type="text">                                 <hr/>                                 <input id="btnLogin" value="Login" type="button">                        </div>             <div class="dialog-statusbar">Ready</div>         </div>     </fieldset>     </div> <script type="text/javascript">     $(document).ready(function () {         $("#divDialog")             .draggable({ handle: ".dialog-header" })             .closable({ handle: ".dialog-header",                 closeHandler: function () {                     alert("Window about to be closed.");                     return true;  // true closes - false leaves open                 }             });     }); </script>        </div> </body> IOW, in IE9 rendering mode IE9 is much closer (but not identical) to the original HTML from the page on the Web that we’re reading from. As a side note: Unfortunately, the browser feature emulation can't be applied against the Html Help (CHM) Engine in Windows which uses the Web Browser control (or COM interfaces anyway) to render Html Help content. I tried setting up hh.exe which is the help viewer, to use IE 9 rendering but a help file generated with CSS3 features will simply show in IE 7 mode. Bummer - this would have been a nice quick fix to allow help content served from CHM files to look better. HTML Editing leaves HTML formatting intact In the same vane, if you do any inline HTML editing in the control by setting content to be editable, IE 9’s control does a much more reasonable job of creating usable and somewhat valid HTML. It also leaves the original content alone other than the text your are editing or adding. No longer is the HTML output stripped of excess spaces and reformatted in IEs format. So if I do: private void button3_Click(object sender, RoutedEventArgs e) { dynamic doc = this.webBrowser.Document; doc.body.contentEditable = true; } and then make some changes to the document by typing into it using IE 9 mode, the document formatting stays intact and only the affected content is modified. The created HTML is reasonably clean (although it does lack proper XHTML formatting for things like <br/> <hr/>). This is very different from IE 7 mode which mangled the HTML as soon as the page was loaded into the control. Any editing you did stripped out all white space and lost all of your existing XHTML formatting. In IE 9 mode at least *most* of your original formatting stays intact. This is huge! In Html Help Builder I have supported HTML editing for a long time but the HTML mangling by the Web Browser control made it very difficult to edit the HTML later. Previously IE would mangle the HTML by stripping out spaces, upper casing all tags and converting many XHTML safe tags to its HTML 3 tags. 
Now IE leaves most of my document alone while editing, and creates cleaner and more compliant markup (with exception of self-closing elements like BR/HR). The end result is that I now have HTML editing in place that's much cleaner and actually capable of being manually edited. Caveats, Caveats, Caveats It wouldn't be Internet Explorer if there weren't some major compatibility issues involved in using this various browser version interaction. The biggest thing I ran into is that there are odd differences in some of the COM interfaces and what they return. I specifically ran into a problem with the document.selection.createRange() function which with IE 7 compatibility returns an expected text range object. When running in IE 8 or IE 9 mode however. I could not retrieve a valid text range with this code where loEdit is the WebBrowser control: loRange = loEdit.document.selection.CreateRange() The loRange object returned (here in FoxPro) had a length property of 0 but none of the other properties of the TextRange or TextRangeCollection objects were available. I figured this was due to some changed security settings but even after elevating the Intranet Security Zone and mucking with the other browser feature flags pertaining to security I had no luck. In the end I relented and used a JavaScript function in my editor document that returns a selection range object: function getselectionrange() { var range = document.selection.createRange(); return range; } and call that JavaScript function from my host applications code: *** Use a function in the document to get around HTML Editing issues loRange = loEdit.document.parentWindow.getselectionrange(.f.) and that does work correctly. This wasn't a big deal as I'm already loading a support script file into the editor page so all I had to do is add the function to this existing script file. You can find out more how to call script code in the Web Browser control from a host application in a previous post of mine. IE 8 and 9 also clamp down the security environment a little more than the default IE 7 control, so there may be other issues you run into. Other than the createRange() problem above I haven't seen anything else that is breaking in my code so far though and that's encouraging at least since it uses a lot of HTML document manipulation for the custom editor I've created (and would love to replace - any PROFESSIONAL alternatives anybody?) Registry Key Installation for your Application It’s important to remember that this registry setting is made per application, so most likely this is something you want to set up with your installer. Also remember that 32 and 64 bit settings require separate settings in the registry so if you’re creating your installer you most likely will want to set both keys in the registry preemptively for your application. I use Tarma Installer for all of my application installs and in Tarma I configure registry keys for both and set a flag to only install the latter key group in the 64 bit version: Because this setting is application specific you have to do this for every application you install unfortunately, but this also means that you can safely configure this setting in the registry because it is after only applied to your application. Another problem with install based installation is version detection. If IE 8 is installed I’d want 8000 for the value, if IE 9 is installed I want 9000. I can do this easily in code but in the installer this is much more difficult. 
I don’t have a good solution for this at the moment, but given that the app works with IE 7 mode now, IE 9 mode is just a bonus for the moment. If IE 9 is not installed and 9000 is used the default rendering will remain in use.   It sure would be nice if we could specify the IE rendering mode as a property, but I suspect the ActiveX container has to know before it loads what actual version to load up and once loaded can only load a single version of IE. This would account for this annoying application level configuration… Summary The registry feature emulation has been available for quite some time, but I just found out about it today and started experimenting around with it. I’m stoked to see that this is available as I’d pretty much given up in ever seeing any better rendering in the Web Browser control. Now at least my apps can take advantage of newer HTML features. Now if we could only get better HTML Editing support somehow <snicker>… ah can’t have everything.© Rick Strahl, West Wind Technologies, 2005-2011Posted in .NET  FoxPro  Windows  
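    If you prefer to write the emulation key from application startup code rather than from the installer, a sketch along these lines works; it assumes the process is allowed to write to HKLM and it hard-codes the 9000 value (IE9 mode for standards doctypes) discussed above:

    using System;
    using System.Diagnostics;
    using System.IO;
    using Microsoft.Win32;

    static class BrowserEmulationSetup
    {
        // Registers this EXE under FEATURE_BROWSER_EMULATION so the Web Browser
        // control renders standards-based pages in IE9 mode.
        public static void EnableIe9Rendering()
        {
            string exeName = Path.GetFileName(
                Process.GetCurrentProcess().MainModule.FileName);

            const string keyPath =
                @"SOFTWARE\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_BROWSER_EMULATION";

            // A 32-bit process on 64-bit Windows is redirected to the Wow6432Node
            // branch automatically, so one call covers both key locations above.
            using (RegistryKey key = Registry.LocalMachine.CreateSubKey(keyPath))
            {
                key.SetValue(exeName, 9000, RegistryValueKind.DWord);
            }
        }
    }

    Remember to register the vshost.exe name as well on your development machine, as described above.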

    Read the article

  • Monitor your Hard Drive’s Health with Acronis Drive Monitor

    - by Matthew Guay
    Are you worried that your computer’s hard drive could die without any warning?  Here’s how you can keep tabs on it and get the first warning signs of potential problems before you actually lose your critical data. Hard drive failures are one of the most common ways people lose important data from their computers.  As more of our memories and important documents are stored digitally, a hard drive failure can mean the loss of years of work.  Acronis Drive Monitor helps you avert these disasters by warning you at the first signs your hard drive may be having trouble.  It monitors many indicators, including heat, read/write errors, total lifespan, and more. It then notifies you via a taskbar popup or email that problems have been detected.  This early warning lets you know ahead of time that you may need to purchase a new hard drive and migrate your data before it’s too late. Getting Started Head over to the Acronis site to download Drive Monitor (link below).  You’ll need to enter your name and email, and then you can download this free tool. Also, note that the download page may ask if you want to include a trial of their for-pay backup program.  If you wish to simply install the Drive Monitor utility, click Continue without adding. Run the installer when the download is finished.  Follow the prompts and install as normal. Once it’s installed, you can quickly get an overview of your hard drives’ health.  Note that it shows 3 categories: Disk problems, Acronis backup, and Critical Events.  On our computer, we had Seagate DiskWizard, an image backup utility based on Acronis Backup, installed, and Acronis detected it. Drive Monitor stays running in your tray even when the application window is closed.  It will keep monitoring your hard drives, and will alert you if there’s a problem. Find Detailed Information About Your Hard Drives Acronis’ simple interface lets you quickly see an overview of how the drives on your computer are performing.  If you’d like more information, click the link under the description.  Here we see that one of our drives have overheated, so click Show disks to get more information. Now you can select each of your drives and see more information about them.  From the Disk overview tab that opens by default, we see that our drive is being monitored, has been running for a total of 368 days, and that it’s health is good.  However, it is running at 113F, which is over the recommended max of 107F.   The S.M.A.R.T. parameters tab gives us more detailed information about our drive.  Most users wouldn’t know what an accepted value would be, so it also shows the status.  If the value is within the accepted parameters, it will report OK; otherwise, it will show that has a problem in this area. One very interesting piece of information we can see is the total number of Power-On Hours, Start/Stop Count, and Power Cycle Count.  These could be useful indicators to check if you’re considering purchasing a second hand computer.  Simply load this program, and you’ll get a better view of how long it’s been in use. Finally, the Events tab shows each time the program gave a warning.  We can see that our drive, which had been acting flaky already, is routinely overheating even when our other hard drive was running in normal temperature ranges. Monitor Acronis Backups And Critical Errors In addition to monitoring critical stats of your hard drives, Acronis Drive Monitor also keeps up with the status of your backup software and critical events reported by Windows.  
    You can access these from the front page, or via the links on the left-hand sidebar.  If you have any edition of any Acronis Backup product installed, it will show that it was detected.  Note that it can only monitor the backup status of the newest versions of Acronis Backup and True Image. If no Acronis backup software was installed, it will show a warning that the drive may be unprotected and will give you a link to download Acronis backup software.   If you have another backup utility installed that you wish to monitor yourself, click Configure backup monitoring, and then disable monitoring on the drives you’re monitoring yourself. Finally, you can view any detected Critical events from the Critical events tab on the left. Get Emailed When There’s a Problem One of Drive Monitor’s best features is the ability to send you an email whenever there’s a problem.  Since this program can run on any version of Windows, including the Server and Home Server editions, you can use this feature to stay on top of your hard drives’ health even when you’re not nearby.  To set this up, click Options in the top left corner. Select Alerts on the left, and then click the Change settings link to set up your email account. Enter the email address at which you wish to receive alerts, and a name for the program.  Then, enter the outgoing mail server settings for your email.  If you have a Gmail account, enter the following information: Outgoing mail server (SMTP): smtp.gmail.com Port: 587 Username and Password: Your Gmail address and password Check the Use encryption box, and then select TLS from the encryption options.   It will now send a test message to your email account, so check and make sure it sent OK. Now you can choose to have the program automatically email you when warnings and critical alerts appear, and also to have it send regular disk status reports.   Conclusion Whether you’ve got a brand new hard drive or one that’s seen better days, knowing the real health of your hard drive is one of the best ways to be prepared before disaster strikes.  It’s no substitute for regular backups, but it can help you avert problems.  Acronis Drive Monitor is a nice tool for this, and although we wish it wasn’t so centered around their backup offerings, we still found it well worth installing. Link Download Acronis Drive Monitor (registration required)

    Read the article

  • ParallelWork: Feature rich multithreaded fluent task execution library for WPF

    - by oazabir
    ParallelWork is an open source free helper class that lets you run multiple work in parallel threads, get success, failure and progress update on the WPF UI thread, wait for work to complete, abort all work (in case of shutdown), queue work to run after certain time, chain parallel work one after another. It’s more convenient than using .NET’s BackgroundWorker because you don’t have to declare one component per work, nor do you need to declare event handlers to receive notification and carry additional data through private variables. You can safely pass objects produced from different thread to the success callback. Moreover, you can wait for work to complete before you do certain operation and you can abort all parallel work while they are in-flight. If you are building highly responsive WPF UI where you have to carry out multiple job in parallel yet want full control over those parallel jobs completion and cancellation, then the ParallelWork library is the right solution for you. I am using the ParallelWork library in my PlantUmlEditor project, which is a free open source UML editor built on WPF. You can see some realistic use of the ParallelWork library there. Moreover, the test project comes with 400 lines of Behavior Driven Development flavored tests, that confirms it really does what it says it does. The source code of the library is part of the “Utilities” project in PlantUmlEditor source code hosted at Google Code. The library comes in two flavors, one is the ParallelWork static class, which has a collection of static methods that you can call. Another is the Start class, which is a fluent wrapper over the ParallelWork class to make it more readable and aesthetically pleasing code. ParallelWork allows you to start work immediately on separate thread or you can queue a work to start after some duration. You can start an immediate work in a new thread using the following methods: void StartNow(Action doWork, Action onComplete) void StartNow(Action doWork, Action onComplete, Action<Exception> failed) For example, ParallelWork.StartNow(() => { workStartedAt = DateTime.Now; Thread.Sleep(howLongWorkTakes); }, () => { workEndedAt = DateTime.Now; }); Or you can use the fluent way Start.Work: Start.Work(() => { workStartedAt = DateTime.Now; Thread.Sleep(howLongWorkTakes); }) .OnComplete(() => { workCompletedAt = DateTime.Now; }) .Run(); Besides simple execution of work on a parallel thread, you can have the parallel thread produce some object and then pass it to the success callback by using these overloads: void StartNow<T>(Func<T> doWork, Action<T> onComplete) void StartNow<T>(Func<T> doWork, Action<T> onComplete, Action<Exception> fail) For example, ParallelWork.StartNow<Dictionary<string, string>>( () => { test = new Dictionary<string,string>(); test.Add("test", "test"); return test; }, (result) => { Assert.True(result.ContainsKey("test")); }); Or, the fluent way: Start<Dictionary<string, string>>.Work(() => { test = new Dictionary<string, string>(); test.Add("test", "test"); return test; }) .OnComplete((result) => { Assert.True(result.ContainsKey("test")); }) .Run(); You can also start a work to happen after some time using these methods: DispatcherTimer StartAfter(Action onComplete, TimeSpan duration) DispatcherTimer StartAfter(Action doWork,Action onComplete,TimeSpan duration) You can use this to perform some timed operation on the UI thread, as well as perform some operation in separate thread after some time. 
ParallelWork.StartAfter( () => { workStartedAt = DateTime.Now; Thread.Sleep(howLongWorkTakes); }, () => { workCompletedAt = DateTime.Now; }, waitDuration); Or, the fluent way: Start.Work(() => { workStartedAt = DateTime.Now; Thread.Sleep(howLongWorkTakes); }) .OnComplete(() => { workCompletedAt = DateTime.Now; }) .RunAfter(waitDuration);   There are several overloads of these functions to have a exception callback for handling exceptions or get progress update from background thread while work is in progress. For example, I use it in my PlantUmlEditor to perform background update of the application. // Check if there's a newer version of the app Start<bool>.Work(() => { return UpdateChecker.HasUpdate(Settings.Default.DownloadUrl); }) .OnComplete((hasUpdate) => { if (hasUpdate) { if (MessageBox.Show(Window.GetWindow(me), "There's a newer version available. Do you want to download and install?", "New version available", MessageBoxButton.YesNo, MessageBoxImage.Information) == MessageBoxResult.Yes) { ParallelWork.StartNow(() => { var tempPath = System.IO.Path.Combine( Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData), Settings.Default.SetupExeName); UpdateChecker.DownloadLatestUpdate(Settings.Default.DownloadUrl, tempPath); }, () => { }, (x) => { MessageBox.Show(Window.GetWindow(me), "Download failed. When you run next time, it will try downloading again.", "Download failed", MessageBoxButton.OK, MessageBoxImage.Warning); }); } } }) .OnException((x) => { MessageBox.Show(Window.GetWindow(me), x.Message, "Download failed", MessageBoxButton.OK, MessageBoxImage.Exclamation); }); The above code shows you how to get exception callbacks on the UI thread so that you can take necessary actions on the UI. Moreover, it shows how you can chain two parallel works to happen one after another. Sometimes you want to do some parallel work when user does some activity on the UI. For example, you might want to save file in an editor while user is typing every 10 second. In such case, you need to make sure you don’t start another parallel work every 10 seconds while a work is already queued. You need to make sure you start a new work only when there’s no other background work going on. Here’s how you can do it: private void ContentEditor_TextChanged(object sender, EventArgs e) { if (!ParallelWork.IsAnyWorkRunning()) { ParallelWork.StartAfter(SaveAndRefreshDiagram, TimeSpan.FromSeconds(10)); } } If you want to shutdown your application and want to make sure no parallel work is going on, then you can call the StopAll() method. ParallelWork.StopAll(); If you want to wait for parallel works to complete without a timeout, then you can call the WaitForAllWork(TimeSpan timeout). It will block the current thread until the all parallel work completes or the timeout period elapses. result = ParallelWork.WaitForAllWork(TimeSpan.FromSeconds(1)); The result is true, if all parallel work completed. If it’s false, then the timeout period elapsed and all parallel work did not complete. For details how this library is built and how it works, please read the following codeproject article: ParallelWork: Feature rich multithreaded fluent task execution library for WPF http://www.codeproject.com/KB/WPF/parallelwork.aspx If you like the article, please vote for me.
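    For comparison, here is roughly what the simple StartNow<T> example above looks like when written against the stock BackgroundWorker component. This is a sketch only, intended to show the per-work component and event-handler plumbing that the library removes:

    using System.Collections.Generic;
    using System.ComponentModel;
    using System.Windows;

    public partial class MainWindow : Window
    {
        private readonly BackgroundWorker worker = new BackgroundWorker();

        public MainWindow()
        {
            InitializeComponent();

            // One component and two event handlers per distinct piece of work.
            worker.DoWork += (s, e) =>
            {
                var test = new Dictionary<string, string>();
                test.Add("test", "test");
                e.Result = test;                       // hand the object to the completion handler
            };

            worker.RunWorkerCompleted += (s, e) =>     // runs back on the UI thread
            {
                if (e.Error != null)
                {
                    MessageBox.Show(e.Error.Message);  // the equivalent of the failed callback
                    return;
                }
                var result = (Dictionary<string, string>)e.Result;
                // ... use result ...
            };
        }

        private void StartButton_Click(object sender, RoutedEventArgs e)
        {
            if (!worker.IsBusy)
                worker.RunWorkerAsync();
        }
    }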

    Read the article

  • Why won't fetchmail work all of a sudden?

    - by SirCharlo
    I ran a chmod 777 * on my home folder. (I know, I know. I'll never do it again.) Ever since then, fetchmail seems to be broken. I use it to fetch mail from an Exchange 2003 mailbox through DAVMail and OWA. The problem is that fetchmail complains about an "expunge mismatch" whenever I get a new message. It deletes the message from the Exchange mailbox, yet it never forwards it. There seems to be a problem somwhere along the mail processing, but I haven't been able to pinpoint where. Any help would be appreciated. Here are the relevant config files. ~/fetchmailrc: set no bouncemail defaults: antispam -1 batchlimit 100 poll localhost with protocol imap and port 1143 user domain\\user password Password is root no rewrite mda "/usr/bin/procmail -f %F -d %T"; ~/procmailrc: :0 * ^Subject.*ack | expand | sed -e 's/[ ]*$//g' | sed -e 's/^/ /' > /usr/local/nagios/libexec/mail_acknowledgement ~/.forward: | "/usr/bin/procmail" And here is the output when I run fetchmail -f /root/.fetchmailrc -vv: fetchmail: WARNING: Running as root is discouraged. Old UID list from localhost: <empty> Scratch list of UIDs: <empty> fetchmail: 6.3.19 querying localhost (protocol IMAP) at Tue 03 Jul 2012 09:46:36 AM EDT: poll started Trying to connect to 127.0.0.1/1143...connected. fetchmail: IMAP< * OK [CAPABILITY IMAP4REV1 AUTH=LOGIN] IMAP4rev1 DavMail 3.9.7-1870 server ready fetchmail: IMAP> A0001 CAPABILITY fetchmail: IMAP< * CAPABILITY IMAP4REV1 AUTH=LOGIN fetchmail: IMAP< A0001 OK CAPABILITY completed fetchmail: Protocol identified as IMAP4 rev 1 fetchmail: GSSAPI error gss_inquire_cred: Unspecified GSS failure. Minor code may provide more information fetchmail: GSSAPI error gss_inquire_cred: fetchmail: No suitable GSSAPI credentials found. Skipping GSSAPI authentication. fetchmail: If you want to use GSSAPI, you need credentials first, possibly from kinit. fetchmail: IMAP> A0002 LOGIN "domain\\user" * fetchmail: IMAP< A0002 OK Authenticated fetchmail: selecting or re-polling default folder fetchmail: IMAP> A0003 SELECT "INBOX" fetchmail: IMAP< * 1 EXISTS fetchmail: IMAP< * 1 RECENT fetchmail: IMAP< * OK [UIDVALIDITY 1] fetchmail: IMAP< * OK [UIDNEXT 344] fetchmail: IMAP< * FLAGS (\Answered \Deleted \Draft \Flagged \Seen $Forwarded Junk) fetchmail: IMAP< * OK [PERMANENTFLAGS (\Answered \Deleted \Draft \Flagged \Seen $Forwarded Junk)] fetchmail: IMAP< A0003 OK [READ-WRITE] SELECT completed fetchmail: 1 message waiting after first poll fetchmail: IMAP> A0004 EXPUNGE fetchmail: IMAP< A0004 OK EXPUNGE completed fetchmail: 1 message waiting after expunge fetchmail: IMAP> A0005 SEARCH UNSEEN fetchmail: IMAP< * SEARCH 1 fetchmail: 1 is unseen fetchmail: IMAP< A0005 OK SEARCH completed fetchmail: 1 is first unseen 1 message for domain\user at localhost. fetchmail: IMAP> A0006 FETCH 1 RFC822.SIZE fetchmail: IMAP< * 1 FETCH (UID 343 RFC822.SIZE 1350) fetchmail: IMAP< A0006 OK FETCH completed fetchmail: IMAP> A0007 FETCH 1 RFC822.HEADER fetchmail: IMAP< * 1 FETCH (UID 343 RFC822.HEADER {1350} reading message domain\user@localhost:1 of 1 (1350 header octets) fetchmail: about to deliver with: /usr/bin/procmail -f '[email protected]' -d 'root' # fetchmail: IMAP< fetchmail: IMAP< fetchmail: IMAP< Bonne journ=E9e.. 
fetchmail: IMAP< fetchmail: IMAP< Company Name fetchmail: IMAP< My Name fetchmail: IMAP< IT fetchmail: IMAP< Tel: (XXX) XXX-XXXX xXXX fetchmail: IMAP< www.domain.com=20 fetchmail: IMAP< fetchmail: IMAP< fetchmail: IMAP< -----Message d'origine----- fetchmail: IMAP< De=A0: User [mailto:[email protected]]=20 fetchmail: IMAP< Envoy=E9=A0: 2 juillet 2012 15:50 fetchmail: IMAP< =C0=A0: Informatique fetchmail: IMAP< Objet=A0: PROBLEM: photo fetchmail: IMAP< fetchmail: IMAP< Notification Type: PROBLEM fetchmail: IMAP< Author:=20 fetchmail: IMAP< Comment:=20 fetchmail: IMAP< fetchmail: IMAP< Host: Photos fetchmail: IMAP< Hostname: photo fetchmail: IMAP< State: DOWN fetchmail: IMAP< Address: XXX.XX.X.XX fetchmail: IMAP< fetchmail: IMAP< Date/Time: Mon Jul 2 15:49:38 EDT 2012 fetchmail: IMAP< fetchmail: IMAP< Info: CRITICAL - XXX.XX.X.XX: rta nan, lost 100% fetchmail: IMAP< fetchmail: IMAP< fetchmail: IMAP< ) fetchmail: IMAP< A0007 OK FETCH completed fetchmail: IMAP> A0008 FETCH 1 BODY.PEEK[TEXT] fetchmail: IMAP< * 1 FETCH (UID 343 BODY[TEXT] {539} (539 body octets) ******************************* fetchmail: IMAP< ) fetchmail: IMAP< A0008 OK FETCH completed flushed fetchmail: IMAP> A0009 STORE 1 +FLAGS (\Seen \Deleted) fetchmail: IMAP< * 1 FETCH (UID 343 FLAGS (\Seen \Deleted)) fetchmail: IMAP< * 1 EXPUNGE fetchmail: IMAP< A0009 OK STORE completed fetchmail: IMAP> A0010 EXPUNGE fetchmail: IMAP< A0010 OK EXPUNGE completed fetchmail: mail expunge mismatch (0 actual != 1 expected) fetchmail: IMAP> A0011 LOGOUT fetchmail: IMAP< * BYE Closing connection fetchmail: IMAP< A0011 OK LOGOUT completed fetchmail: client/server synchronization error while fetching from domain\user@localhost fetchmail: 6.3.19 querying localhost (protocol IMAP) at Tue 03 Jul 2012 09:46:36 AM EDT: poll completed Merged UID list from localhost: <empty> fetchmail: Query status=7 (ERROR) fetchmail: normal termination, status 7

    Read the article

  • A pseudo-listener for AlwaysOn Availability Groups for SQL Server virtual machines running in Azure

    - by MikeD
    I am involved in a project that is implementing SharePoint 2013 on virtual machines hosted in Azure. The back end data tier consists of two Azure VMs running SQL Server 2012, with the SharePoint databases contained in an AlwaysOn Availability Group. I used this "Tutorial: AlwaysOn Availability Groups in Windows Azure (GUI)" to help me implement this setup.Because Azure DHCP will not assign multiple unique IP addresses to the same VM, having an AG Listener in Azure is not currently supported.  I wanted to figure out another mechanism to support a "pseudo listener" of some sort. First, I created a CNAME (alias) record in the DNS zone with a short TTL (time to live) of 5 minutes (I may yet make this even shorter). The record represents a logical name (let's say the alias is SPSQL) of the server to connect to for the databases in the availability group (AG). When Server1 was hosting the primary replica of the AG, I would set the CNAME of SPSQL to be SERVER1. When the AG failed over to Server1, I wanted to set the CNAME to SERVER2. Seemed simple enough.(It's important to point out that the connection strings for my SharePoint services should use the CNAME alias, and not the actual server name. This whole thing falls apart otherwise.)To accomplish this, I created identical SQL Agent Jobs on Server1 and Server2, with two steps:1. Step 1: Determine if this server is hosting the primary replica.This is a TSQL step using this script:declare @agName sysname = 'AGTest'set nocount on declare @primaryReplica sysnameselect @primaryReplica = agState.primary_replicafrom sys.dm_hadr_availability_group_states agState   join sys.availability_groups ag on agstate.group_id = ag.group_id   where ag.name = @AGname if not exists(   select *    from sys.dm_hadr_availability_group_states agState   join sys.availability_groups ag on agstate.group_id = ag.group_id   where @@Servername = agstate.primary_replica    and ag.name = @AGname)begin   raiserror ('Primary replica of %s is not hosted on %s, it is hosted on %s',17,1,@Agname, @@Servername, @primaryReplica) endThis script determines if the primary replica value of the AG group is the same as the server name, which means that our server is hosting the current AG (you should update the value of the @AgName variable to the name of your AG). If this is true, I want the DNS alias to point to this server. If the current server is not hosting the primary replica, then the script raises an error. Also, if the script can't be executed because it cannot connect to the server, that also will generate an error. For the job step settings, I set the On Failure option to "Quit the job reporting success". The next step in the job will set the DNS alias to this server name, and I only want to do that if I know that it is the current primary replica, otherwise I don't want to do anything. I also include the step output in the job history so I can see the error message.Job Step 2: Update the CNAME entry in DNS with this server's name.I used a PowerShell script to accomplish this:$cname = "SPSQL.contoso.com"$query = "Select * from MicrosoftDNS_CNAMEType"$dns1 = "dc01.contoso.com"$dns2 = "dc02.contoso.com"if ((Test-Connection -ComputerName $dns1 -Count 1 -Quiet) -eq $true){    $dnsServer = $dns1}elseif ((Test-Connection -ComputerName $dns2 -Count 1 -Quiet) -eq $true) {   $dnsServer = $dns2}else{  $msg = "Unable to connect to DNS servers: " + $dns1 + ", " + $dns2   Throw $msg}$record = Get-WmiObject -Namespace "root\microsoftdns" -Query $query -ComputerName $dnsServer  | ? 
{ $_.Ownername -match $cname }$thisServer = [System.Net.Dns]::GetHostEntry("LocalHost").HostName + "."$currentServer = $record.RecordData if ($currentServer -eq $thisServer ) {     $cname + " CNAME is up to date: " + $currentServer}else{    $cname + " CNAME is being updated to " + $thisServer + ". It was " + $currentServer    $record.RecordData = $thisServer    $record.put()}This script does a few things:finds a responsive domain controller (Test-Connection does a ping and returns a Boolean value if you specify the -Quiet parameter)makes a WMI call to the domain controller to get the current CNAME record value (Get-WmiObject)gets the FQDN of this server (GetHostEntry)checks if the CNAME record is correct and updates it if necessary(You should update the values of the variables $cname, $dns1 and $dns2 for your environment.)Since my domain controllers are also hosted in Azure VMs, either one of them could be down at any point in time, so I need to find a DC that is responsive before attempting the DNS call. The other little thing here is that the CNAME record contains the FQDN of a machine, plus it ends with a period. So the comparison of the CNAME record has to take the trailing period into account. When I tested this step, I was getting ACCESS DENIED responses from PowerShell for the Get-WmiObject cmdlet that does a remote lookup on the DC. This occurred because the SQL Agent service account was not a member of the Domain Admins group, so I decided to create a SQL Credential to store the credentials for a domain administrator account and use it as a PowerShell proxy (rather than give the service account Domain Admins membership).In SQL Management Studio, right click on the Credentials node (under the server's Security node), and choose New Credential...Then, under SQL Agent-->Proxies, right click on the PowerShell node and choose New Proxy...Finally, in the job step properties for the PowerShell step, select the new proxy in the Run As drop down.I created this two step Job on both nodes of the Availability Group, but if you had more than two nodes, just create the same job on all the servers. I set the schedule for the job to execute every minute.When the server that is hosting the primary replica is running the job, the job history looks like this:The job history on the secondary server looks like this: When a failover occurs, the SQL Agent job on the new primary replica will detect that the CNAME needs to be updated within a minute. Based on the TTL of the CNAME (which I said at the beginning was 5 minutes), the SharePoint servers will get the new alias within five minutes and should be able to reconnect. I may want to shorten up the TTL to reduce the time it takes for the client connections to use the new alias. Using a DNS CNAME and a SQL Agent Job on all servers hosting AG replicas, I was able to create a pseudo-listener to automatically change the name of the server that was hosting the primary replica, for a scenario where I cannot use a regular AG listener (in this case, because the servers are all hosted in Azure).    
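    On the client side the only moving parts are the alias in the connection string and a short retry window while the CNAME change propagates. The sketch below is illustrative only: the alias reuses the article's example name, the database name and the retry policy are assumptions, not part of the original solution:

    using System;
    using System.Data.SqlClient;
    using System.Threading;

    static class AliasConnectionSketch
    {
        public static SqlConnection OpenWithRetry()
        {
            // Connect through the CNAME (SPSQL.contoso.com), never the physical server name.
            const string connectionString =
                "Data Source=SPSQL.contoso.com;Initial Catalog=SharePoint_Config;Integrated Security=SSPI";

            for (int attempt = 1; ; attempt++)
            {
                try
                {
                    var connection = new SqlConnection(connectionString);
                    connection.Open();
                    return connection;
                }
                catch (SqlException)
                {
                    if (attempt >= 5)
                        throw;
                    // Allow time for the agent job (runs every minute) and the
                    // 5 minute CNAME TTL before giving up.
                    Thread.Sleep(TimeSpan.FromSeconds(30));
                }
            }
        }
    }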

    Read the article

  • WP7 Tips–Part I– Media File Coding Techniques to help pass the Windows Phone 7 Marketplace Certification Requirements

    - by seaniannuzzi
    Overview

Developing an application that plays media files on a Windows Phone 7 device seems fairly straightforward. However, what can make it a bit frustrating are the requirements you need to meet in order to pass the WP7 marketplace certification so that your application can be published. If you are new to this kind of development, be aware of the common challenges below. What follows are some techniques and recommendations on how to optimize an application that plays MP3 and/or WMA files so that it adheres to the marketplace requirements.

Windows Phone 7 Certification Requirements
Windows Phone 7 Developers Blog

Some common challenges are:
- Not prompting the user if another media file is playing in the background before playing your media file
- Not allowing the user to control the volume
- Not allowing the user to mute the sound
- Not allowing the media to be interrupted by a phone call

To keep this as simple as possible I am only going to focus on what "not to do" and what "to do" in order to implement a simple media solution. Things you will need, or that may be useful to you, before you begin:
- Visual Studio 2010
- Visual Studio 2010 Feature Packs
- Windows Phone 7 Developer Tools
- Visual Studio 2010 Express for Windows Phone
- Windows Phone Emulator Resources
- Silverlight 4 Tools For Visual Studio
- XNA Game Studio 4.0
- Microsoft Expression Blend for Windows Phone

Note: Please keep in mind you do not need all of these downloaded and installed; it is just easier to have everything you need now rather than add it later.

Objective Summary

Create a Windows Phone 7 – Windows Media Sample Application. The application will implement many of the features required to pass the WP7 marketplace certification requirements so that it can be published to WP7's marketplace. (Disclaimer: I am not trying to indicate that this application will always pass, as the requirements may change or be updated.)

Step 1: – Create a New Windows Phone 7 Project

Step 2: – Update the Title and Application Name of your WP7 Application

For this example I changed the Title to "DOTNETNUZZI WP7 MEDIA SAMPLE - v1.00" and the Page Title to "media magic". Note: I also updated the background.

Step 3: – XAML - Media Element Preparation and Best Practice

Before we begin the next step I just want to point out a few things that, as a best practice, you should not do when developing a WP7 application that plays music. Please keep in mind that these requirements are geared towards playing media in the background and are not the same if you are only playing sound effects. If you have coded the media element with a hard-coded source and AutoPlay enabled, be prepared to change it: to avoid a failure from the marketplace, remove all of your media source elements from your XAML or simply create them dynamically. To keep this simple we will remove the source and set the AutoPlay property to false to ensure that no media element is active when the application is started.

Proper example of the media element with no Source:

Some Additional Settings - Add XAML Support for a Mute Button

Step 4: – Boolean to handle toggle of Mute Feature
Step 5: – Add Event Handler for Main Page Load
Step 6: – Add Reference to the XNA Framework
Step 7: – Add two Using Statements to Resolve the Namespace of Media and the Application Bar

using Microsoft.Xna.Framework.Media;
using Microsoft.Phone.Shell;

Step 8: – Add the Method to Check the Media State as Shown Below
Step 9: – Add Code to Mute the Media File
Step 10: – Add Code to Play the Media File

//if the state of the media has been checked you are good to go.
media_sample.Play();

Note: If we try to perform this operation at this point, we will receive the following error:

System.InvalidOperationException was unhandled
Message=FrameworkDispatcher.Update has not been called. Regular FrameworkDispatcher.Update calls are necessary for fire and forget sound effects and framework events to function correctly. See http://go.microsoft.com/fwlink/?LinkId=193853 for details.
StackTrace:
   at Microsoft.Xna.Framework.FrameworkDispatcher.AddNewPendingCall(ManagedCallType callType, UInt32 arg)
   at Microsoft.Xna.Framework.UserAsyncDispatcher.HandleManagedCallback(ManagedCallType managedCallType, UInt32 managedCallArgs)
   at Microsoft.Xna.Framework.UserAsyncDispatcher.AsyncDispatcherThreadFunction()

It is not recommended that you simply add a FrameworkDispatcher.Update() call before playing the media file. Instead, add the following class to your solution and register it in the app.xaml.cs file.

Step 11: – Add FrameworkDispatcher Features

I recommend creating a class named XNAAsyncDispatcher and adding the code for it (a sketch appears at the end of this post). After you have added the code, register the class in your app.xaml.cs file. Note: if your application's sound file is not playing, make sure you have the proper Build Action set, such as Content.

Running the Sample

Now that we have some of the foundation created you should be able to run the application successfully. When the application launches, the sound options are set when the "checkMediaState" method is called. As a result the application will properly set up the media options and/or alert the user accordingly, per the certification requirements. In addition, the sample also shows a quick way to mute the sound in your application by simply removing the URI source of the media file. If everything compiled successfully, the application should look similar to the screenshot below.

Summary

At this point we have a fully functional application that demonstrates how to avoid some common challenges when working with media files and developing applications for Windows Phone 7. The techniques mentioned above should make it a little easier to get your WP7 application approved and published on the Marketplace. The next blog post will be titled: WP7 Tips–Part II - How to write code that will pass the Windows Phone 7 Marketplace Requirements for Themes (light and dark). If anyone has any questions or comments please comment on this blog.
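The code for steps 8 through 11 was published as screenshots in the original post, so here is a minimal C# sketch of the same ideas. It assumes a MediaElement named media_sample in the page XAML and a sample file named sample.mp3 (both placeholders); MediaPlayer.GameHasControl from Microsoft.Xna.Framework.Media is the usual way to detect whether the user already has music playing, and the XNAAsyncDispatcher class pumps FrameworkDispatcher.Update() as the error message above requires.

using System;
using System.Windows;
using System.Windows.Threading;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Media;

namespace MediaSample
{
    public partial class MainPage
    {
        // Assumed to exist in the XAML: <MediaElement x:Name="media_sample" AutoPlay="False" />
        // Steps 8 and 10: only take over playback if the user agrees (certification requirement).
        private void PlayIfAllowed()
        {
            if (!MediaPlayer.GameHasControl &&
                MessageBox.Show("Music is already playing. Stop it and play this application's media?",
                                "Media", MessageBoxButton.OKCancel) != MessageBoxResult.OK)
            {
                return; // leave the user's background music alone
            }

            media_sample.Source = new Uri("sample.mp3", UriKind.Relative); // Build Action: Content
            media_sample.Play();
        }
    }

    // Step 11: pumps FrameworkDispatcher.Update() on a timer so the XNA media APIs work from
    // a Silverlight application. Register it in the App constructor:
    //   this.ApplicationLifetimeObjects.Add(new XNAAsyncDispatcher(TimeSpan.FromMilliseconds(50)));
    public class XNAAsyncDispatcher : IApplicationService
    {
        private readonly DispatcherTimer timer;

        public XNAAsyncDispatcher(TimeSpan dispatchInterval)
        {
            timer = new DispatcherTimer { Interval = dispatchInterval };
            timer.Tick += (sender, args) => FrameworkDispatcher.Update();
        }

        void IApplicationService.StartService(ApplicationServiceContext context) { timer.Start(); }
        void IApplicationService.StopService() { timer.Stop(); }
    }
}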

    Read the article

  • Wrapping ASP.NET Client Callbacks

    - by Ricardo Peres
    Client Callbacks are probably the least known (and, I dare say, least loved) of all the AJAX options in ASP.NET, which also include the UpdatePanel, Page Methods and Web Services. The reason for that, I believe, is their relative complexity:
- Get a reference to a JavaScript function;
- Dynamically register a function that calls the above reference;
- Have a JavaScript handler call the registered function.

However, they have the nice advantage of being self-contained: they don't need additional files, such as web services or JavaScript libraries, static methods declared on a page, or any kind of attributes. So, here's what I want to do:
- Have a DOM element which exposes a method that is executed server side, passing it a string and returning a string;
- Have a server-side event that handles the client-side call;
- Have two client-side user-supplied callback functions for handling the success and error results.

I'm going to develop a custom control without a user interface that does the registration of the client JavaScript method, as well as a server-side event that can be hooked by some handler on a page. My markup will look like this:

<script type="text/javascript">

function onCallbackSuccess(result, context)
{
}

function onCallbackError(error, context)
{
}

</script>
<my:CallbackControl runat="server" ID="callback" SendAllData="true" OnCallback="OnCallback"/>

The control itself looks like this:

public class CallbackControl : Control, ICallbackEventHandler
{
    #region Public constructor
    public CallbackControl()
    {
        this.SendAllData = false;
        this.Async = true;
    }
    #endregion

    #region Public properties and events
    public event EventHandler<CallbackEventArgs> Callback;

    [DefaultValue(true)]
    public Boolean Async
    {
        get;
        set;
    }

    [DefaultValue(false)]
    public Boolean SendAllData
    {
        get;
        set;
    }
    #endregion

    #region Protected override methods
    protected override void Render(HtmlTextWriter writer)
    {
        writer.AddAttribute(HtmlTextWriterAttribute.Id, this.ClientID);
        writer.RenderBeginTag(HtmlTextWriterTag.Span);

        base.Render(writer);

        writer.RenderEndTag();
    }

    protected override void OnInit(EventArgs e)
    {
        String reference = this.Page.ClientScript.GetCallbackEventReference(this, "arg", "onCallbackSuccess", "context", "onCallbackError", this.Async);
        String script = String.Concat("\ndocument.getElementById('", this.ClientID, "').callback = function(arg, context, onCallbackSuccess, onCallbackError){", ((this.SendAllData == true) ? "__theFormPostCollection.length = 0; __theFormPostData = ''; WebForm_InitCallback(); " : String.Empty), reference, ";};\n");

        this.Page.ClientScript.RegisterStartupScript(this.GetType(), String.Concat("callback", this.ClientID), script, true);

        base.OnInit(e);
    }
    #endregion

    #region Protected virtual methods
    protected virtual void OnCallback(CallbackEventArgs args)
    {
        EventHandler<CallbackEventArgs> handler = this.Callback;

        if (handler != null)
        {
            handler(this, args);
        }
    }
    #endregion

    #region ICallbackEventHandler Members
    String ICallbackEventHandler.GetCallbackResult()
    {
        CallbackEventArgs args = new CallbackEventArgs(this.Context.Items["Data"] as String);

        this.OnCallback(args);

        return (args.Result);
    }

    void ICallbackEventHandler.RaiseCallbackEvent(String eventArgument)
    {
        this.Context.Items["Data"] = eventArgument;
    }
    #endregion
}

And the event argument class:

[Serializable]
public class CallbackEventArgs : EventArgs
{
    public CallbackEventArgs(String argument)
    {
        this.Argument = argument;
        this.Result = String.Empty;
    }

    public String Argument
    {
        get;
        private set;
    }

    public String Result
    {
        get;
        set;
    }
}

You will notice two properties on the CallbackControl:
- Async: indicates if the call should be made asynchronously (the default) or synchronously;
- SendAllData: indicates if the callback call will include the view and control state of all of the controls on the page, so that, on the server side, they will have their properties set when the Callback event is fired.

The CallbackEventArgs class exposes two properties:
- Argument: the read-only argument passed to the client-side function;
- Result: the result to return to the client-side callback function, set from the Callback event handler.

An example of a handler for the Callback event would be:

protected void OnCallback(Object sender, CallbackEventArgs e)
{
    e.Result = String.Join(String.Empty, e.Argument.Reverse());
}

Finally, in order to fire the Callback event from the client, you only need this:

<input type="text" id="input"/>
<input type="button" value="Get Result" onclick="document.getElementById('callback').callback(document.getElementById('input').value, 'context', onCallbackSuccess, onCallbackError)"/>

The syntax of the callback function is:
- arg: some string argument;
- context: some context that will be passed to the callback functions (success or failure);
- callbackSuccessFunction: some function that will be called when the callback succeeds;
- callbackFailureFunction: some function that will be called if the callback fails for some reason.

Give it a try and see if it helps!

    Read the article

  • CodePlex Daily Summary for Sunday, June 06, 2010

    CodePlex Daily Summary for Sunday, June 06, 2010New ProjectsActive Worlds Dot Net Wrapper (Based on AwSdk): Active Worlds Dot Net Wrapper (Based on AwSdk)Combina: Smart calculator for large combinatorial calculations.Concurrent Cache: ConcurrentCache is a smart output cache library extending OutputCacheProvider. It consists of in memory, cache files and compressed files modes and...Decay: Personal use. For learningFazTalk: FazTalk is a suite of tools and products that are designed to improve collaboration and workflow interactions. FazTalk takes an innovative approach...grouped: A peer to peer text editor, written in C# [update] I wrote this little thing a while back and even forgot about it, I stopped coding for more tha...HitchARide MVC 2 Sample: An MVC 2 sample written as part of the Microsoft 2010 London Web Camp based on the wireframes at http://schematics.earthware.co.uk/hitcharide. Not...Inspiration.Web: Description: A simple (but entertaining) ASP.NET MVC (C#) project to suggest random code names for projects. Intended audience: People who ne...NetFileBrowser - TinyMCE: tinyMCE file plugin with asp.netOil Slick Live Feeds: All live feeds from BP's Remotely Operated VehiclesParticle Lexer: Parser and Tokenizer libraryPdf Form Tool: Pdf Form Tool demonstrates how the iTextSharp library could be used to fill PDF forms. The input data is provided as a csv file. The application ...Planning Poker Windows Mobile 7: This project is a Planning Poker application for Windows Mobile 7 (and later?). RandomRat: RandomRat is a program for generating random sets that meet specific criteriaScience.NET: A scientific library written in managed code. It supports advanced mathematics (algebra system, sequences, statistics, combinatorics...), data stru...Spider Compiler: Spider Compiler parses the input of a spider programming source file and compiles it (with help of csc.exe; the C#-Compiler) to an exe-file. This p...Sununpro: sunun's project for study by team foundation server.TFS Buddy: An application that manipulates your I-Buddy whenever something happens in your Team Foundation ServerValveSoft: ValveSysWiiMote Physics: WiiMote Physics is an application that allows you to retrieve data from your WiiMote or Balance Board and display it in real-time. It has a number...WinGet: WinGet is a download manager for Windows. You can drag links onto the WinGet Widget and it will download a file on the selected folder. It is dev...XProject.NET: A project management and team collaboration platformNew Releases.NET DiscUtils: Version 0.9 Preview: This release is still under development. New features available in this release: Support for accessing short file names stored in WIM files Incr...Active Worlds Dot Net Wrapper (Based on AwSdk): Active World Dot Net Wrapper (0.0.1.85): Based on AwSdk 85AwSdk UnOfficial Wrapper Howto Use: C# using AwWrapper; VB.Net Import AwWrapperAjaxControlToolkit additional extenders: ZhecheAjaxControls for .NET3.5: Used AJAX Control Toolkit Release Notes - April 12th 2010 Release Version 40412. Fixed deadlock in long operation canceling Some other fixesAnyCAD: AnyCAD.v1.2.ENU.Install: http://www.anycad.net Parametric Modeling *3D: Sphere, Box, Cylinder, Cone •2D: Line, Rectangle, Arc, Arch, Circle, Spline, Polygon •Feature: Extr...Community Forums NNTP bridge: Community Forums NNTP Bridge V29: Release of the Community Forums NNTP Bridge to access the social and anwsers MS forums with a single, open source NNTP bridge. 
This release has ad...Concurrent Cache: 1.0: This is the first release for the ConcurrentCache library.Configuration Section Designer: 2.0.0: This is the first Beta Release for VS 2010 supportDoxygen Browser Addin for VS: Doxygen Browser Addin - v0.1.4 Beta: Support for Visual Studio 2010 improved the logging of errors (Event Logs) Fixed some issues/bugs Hot key for navigation "Control + F1, Contr...Folder Bookmarks: Folder Bookmarks 1.6.2: The latest version of Folder Bookmarks (1.6.2), with new UI changes. Once you have extracted the file, do not delete any files/folders. They are n...HERB.IQ: Beta 0.1 Source code release 5: Beta 0.1 Source code release 5Inspiration.Web: Initial release (deployment package): Initial release (deployment package)NetFileBrowser - TinyMCE: Demo Project: Demo ProjectNetFileBrowser - TinyMCE: NetFileBrowser: NetImageBrowserNLog - Advanced .NET Logging: Nightly Build 2010.06.05.001: Changes since the last build:2010-06-04 23:29:42 Jarek Kowalski Massive update to documentation generator. 2010-05-28 15:41:42 Jarek Kowalski upda...Oil Slick Live Feeds: Oil Slick Live Feeds 0.1: A the first release, with feeds from the MS Skandi, Boa Deep C, Enterprise and Q4000. They are live streams from the ROV's monitoring the damaged...Pcap.Net: Pcap.Net 0.7.0 (46671): Pcap.Net - June 2010 Release Pcap.Net is a .NET wrapper for WinPcap written in C++/CLI and C#. It Features almost all WinPcap features and includes...sqwarea: Sqwarea 0.0.289.0 (alpha): API supportTFS Buddy: TFS Buddy First release (Beta 1): This is the first release of the TFS Buddy.Visual Studio DSite: Looping Animation (Visual C++ 2008): A solider firing a bullet that loops and displays an explosion everytime it hits the edge of the form.WiiMote Physics: WiiMote Physics v4.0: v4.0.0.1 Recovered from existing compiled assembly after hard drive failure Now requires .NET 4.0 (it seems to make it run faster) Added new c...WinGet: Alpha 1: First Alpha of WinGet. It includes all the planned features but it contains many bugs. Packaged using 7-Zip and ClickOnce.Most Popular ProjectsWBFS ManagerRawrAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)PHPExcelpatterns & practices – Enterprise LibraryMicrosoft SQL Server Community & SamplesASP.NETMost Active ProjectsCommunity Forums NNTP bridgeRawrpatterns & practices – Enterprise LibraryGMap.NET - Great Maps for Windows Forms & PresentationN2 CMSIonics Isapi Rewrite FilterStyleCopsmark C# LibraryFarseer Physics Enginepatterns & practices: Composite WPF and Silverlight

    Read the article

  • C#: Optional Parameters - Pros and Pitfalls

    - by James Michael Hare
    When Microsoft rolled out Visual Studio 2010 with C# 4, I was very excited to learn how I could apply all the new features and enhancements to help make me and my team more productive developers. Default parameters have been around forever in C++, and were intentionally omitted in Java in favor of using overloading to satisfy that need, as it was thought that having too many default parameters could introduce code safety issues. To some extent I can understand that move, as I've been bitten by default parameter pitfalls before, but at the same time I feel like Java threw out the baby with the bathwater, and I'm glad to see C# now has them.

This post briefly discusses the pros and pitfalls of using default parameters. I'm avoiding saying cons, because I really don't believe using default parameters is a negative thing; I just think there are things you must watch for and guard against to avoid abuses that can cause code safety issues.

Pro: Default Parameters Can Simplify Code

Let's start out with the positives. Consider how much cleaner it is to reduce all the overloads in methods or constructors that simply exist to give the semblance of optional parameters. For example, we could have a Message class defined which allows for all possible initializations of a Message:

public class Message
{
    // can either cascade these like this or duplicate the defaults (which can introduce risk)
    public Message()
        : this(string.Empty)
    {
    }

    public Message(string text)
        : this(text, null)
    {
    }

    public Message(string text, IDictionary<string, string> properties)
        : this(text, properties, -1)
    {
    }

    public Message(string text, IDictionary<string, string> properties, long timeToLive)
    {
        // ...
    }
}

Now consider the same code with default parameters:

public class Message
{
    public Message(string text = "", IDictionary<string, string> properties = null, long timeToLive = -1)
    {
        // ...
    }
}

Much more clean and concise, and no repetitive coding! In addition, in the past if you wanted to cleanly supply timeToLive and accept the default on text and properties above, you would need to either create another overload or pass in the defaults explicitly. With named parameters, though, we can do this easily:

var msg = new Message(timeToLive: 100);

Pro: Named Parameters Can Improve Readability

I must say one of my favorite things with the default parameters addition in C# 4.0 is the named parameters. It lets code be a lot easier to understand visually with no comments. Think how many times you've run across a TimeSpan declaration with 4 arguments and wondered if they were passing in days/hours/minutes/seconds or hours/minutes/seconds/milliseconds. A novice running through your code may wonder what it is. Named arguments can help resolve the visual ambiguity:

// is this days/hours/minutes/seconds (yes) or hours/minutes/seconds/milliseconds (no)?
var ts = new TimeSpan(1, 2, 3, 4);

// this however is visually very explicit
var ts = new TimeSpan(days: 1, hours: 2, minutes: 3, seconds: 4);

Or think of the times you've run across something passing a Boolean literal and wondered what it was:

// what is false here?
var sub = CreateSubscriber(hostname, port, false);

// aha! Much more visibly clear
var sub = CreateSubscriber(hostname, port, isBuffered: false);

Pitfall: Don't Insert New Default Parameters In Between Existing Defaults

Now let's consider two potential pitfalls. The first is really an abuse. It's not a fault of the default parameters themselves, but a fault in the use of them. Let's consider that Message constructor again with defaults. Let's say you want to add a messagePriority to the message and you think this is more important than a timeToLive value, so you decide to put messagePriority before it in the parameter list. This gives you:

public class Message
{
    public Message(string text = "", IDictionary<string, string> properties = null, int priority = 5, long timeToLive = -1)
    {
        // ...
    }
}

Oh boy have we set ourselves up for failure! Why? Think of all the code out there that could already be using the library and already specifying the timeToLive, such as this possible call:

var msg = new Message("An error occurred", myProperties, 1000);

Before, this specified a message with a TTL of 1000; now it specifies a message with a priority of 1000 and a time to live of -1 (infinite). All of this with NO compiler errors or warnings. So the rule to take away is: if you are adding new default parameters to a method that's currently in use, make sure you add them to the end of the list or create a brand new method or overload.

Pitfall: Beware of Default Parameters in Inheritance and Interface Implementation

Now, the second potential pitfall has to do with inheritance and interface implementation. I'll illustrate with a puzzle:

public interface ITag
{
    void WriteTag(string tagName = "ITag");
}

public class BaseTag : ITag
{
    public virtual void WriteTag(string tagName = "BaseTag") { Console.WriteLine(tagName); }
}

public class SubTag : BaseTag
{
    public override void WriteTag(string tagName = "SubTag") { Console.WriteLine(tagName); }
}

public static class Program
{
    public static void Main()
    {
        SubTag subTag = new SubTag();
        BaseTag subByBaseTag = subTag;
        ITag subByInterfaceTag = subTag;

        // what happens here?
        subTag.WriteTag();
        subByBaseTag.WriteTag();
        subByInterfaceTag.WriteTag();
    }
}

What happens? Well, even though the object in each case is a SubTag whose tag is "SubTag", you will get:

SubTag
BaseTag
ITag

Why? Because default parameters are resolved at compile time, not runtime! This means that the default does not belong to the object being called, but to the reference type it's being called through. Since the SubTag instance is being called through an ITag reference, it will use the default specified in ITag.

So the moral of the story here is to be very careful how you specify defaults in interfaces or inheritance hierarchies. I would suggest avoiding repeating them, and instead concentrating on the layer of classes or interfaces you most likely expect your caller to be calling from. For example, if you have a messaging factory that returns an IMessage which can be either an MsmqMessage or JmsMessage, it only makes sense to put the defaults at the IMessage level, since chances are your user will be using the interface only (a small sketch of this appears at the end of this post).

So let's sum up. In general, I really love default and named parameters in C# 4.0. I think they're a great tool to help make your code easier to read and maintain when used correctly.

On the plus side, default parameters:
- Reduce redundant overloading for the sake of providing optional calling structures.
- Improve readability by being able to name an ambiguous argument.

But remember to make sure you:
- Do not insert new default parameters in the middle of an existing set of default parameters; this may cause unpredictable behavior that may not necessarily throw a syntax error. Add to the end of the list or create a new method.
- Be extremely careful how you use default parameters in inheritance hierarchies and interfaces; choose the most appropriate level to add the defaults based on expected usage.

Technorati Tags: C#,.NET,Software,Default Parameters
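To make that recommendation concrete, here is a small sketch of keeping the defaults only at the interface layer that callers are expected to program against. The type and member names are made up for illustration; the point is that the implementation repeats no default values, so there is a single source of truth and nothing to drift out of sync.

using System;

public interface IMessageSender
{
    // Defaults live only here, at the layer callers actually use.
    void Send(string text = "", int priority = 5, long timeToLive = -1);
}

public class MsmqMessageSender : IMessageSender
{
    // No defaults repeated here; callers going through IMessageSender get the interface's defaults,
    // and callers holding a concrete reference must supply every argument explicitly.
    public void Send(string text, int priority, long timeToLive)
    {
        Console.WriteLine("{0} (priority {1}, ttl {2})", text, priority, timeToLive);
    }
}

public static class DefaultsDemo
{
    public static void Main()
    {
        IMessageSender sender = new MsmqMessageSender();
        sender.Send("hello");               // uses the interface defaults: priority 5, ttl -1
        sender.Send("urgent", priority: 1); // named argument for the one value we override
    }
}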

    Read the article

  • Replication Services as ETL extraction tool

    - by jorg
    In my last blog post I explained the principles of Replication Services and the possibilities it offers in a BI environment. One of the possibilities I described was the use of snapshot replication as an ETL extraction tool:

"Snapshot Replication can also be useful in BI environments: if you don't need a near real-time copy of the database, you can choose this form of replication. Next to being an alternative for Transactional Replication, it can be used to stage data so it can be transformed and moved into the data warehousing environment afterwards. In many solutions I have seen developers create multiple SSIS packages that simply copy data from one or more source systems to a staging database that serves as the source for the ETL process. The creation of these packages takes a lot of (boring) time, while Replication Services can do the same in minutes. It is possible to filter out columns and/or records and it can even apply schema changes automatically, so I think it offers enough features here. I don't know how the performance will be and if it really works as well for this purpose as I expect, but I want to try this out soon!"

Well, I have tried it out and I must say it worked well. I was able to let Replication Services do the work in a fraction of the time it would have cost me to do the same in SSIS. What I did was the following:
- Configure snapshot replication for some Adventure Works tables; this was quite simple and straightforward.
- Create an SSIS package that executes the snapshot replication on demand and waits for its completion. This is something that you can't do with out-of-the-box functionality.

While configuring the snapshot replication, two SQL Agent jobs are created: one for the creation of the snapshot and one for the distribution of the snapshot. Unfortunately these jobs are asynchronous, which means that if you execute them they immediately report back whether the job started successfully or not; they do not wait for completion and report the result afterwards. So I had to create an SSIS package that executes the jobs and waits for their completion before the rest of the ETL process continues. Fortunately I was able to create the SSIS package with the desired functionality. I have made a step-by-step guide that will help you configure the snapshot replication, and I have uploaded the SSIS package you need to execute it.

Configure snapshot replication

The first step is to create a publication on the database you want to replicate.
- Connect to SQL Server Management Studio, right-click Replication and choose New.. Publication…
- The New Publication Wizard appears; click Next.
- Choose your "source" database and click Next.
- Choose Snapshot publication and click Next.
- You can now select tables and other objects that you want to publish. Expand Tables and select the tables that are needed in your ETL process.
- In the next screen you can add filters on the selected tables, which can be very useful. Think about selecting only the last x days of data, for example. It's possible to filter out rows and/or columns. In this example I did not apply any filters.
- Schedule the Snapshot Agent to run at a desired time. By doing this a SQL Agent job is created, which we need to execute from an SSIS package later on.
- Next you need to set the Security Settings for the Snapshot Agent. Click on the Security Settings button. In this example I ran the Agent under the SQL Server Agent service account. This is not recommended as a security best practice.

Fortunately there is an excellent article on TechNet which tells you exactly how to set up the security for Replication Services. Read it here and make sure you follow the guidelines!
- On the next screen, choose to create the publication at the end of the wizard.
- Give the publication a name (SnapshotTest) and complete the wizard.

The publication is created and the articles (tables in this case) are added. Now that the publication is created successfully, it's time to create a new subscription for it.
- Expand the Replication folder in SSMS, right-click Local Subscriptions and choose New Subscriptions.
- The New Subscription Wizard appears.
- Select the publisher on which you just created your publication and select the database and publication (SnapshotTest).
- You can now choose where the Distribution Agent should run. If it runs at the distributor (push subscriptions) it causes extra processing overhead. If you use a separate server for your ETL process and databases, choose to run each agent at its subscriber (pull subscriptions) to reduce the processing overhead at the distributor.
- Of course we need a database for the subscription, and fortunately the wizard can create it for you. Choose New database, give the database the desired name, set the desired options and click OK.
- You can now add multiple SQL Server subscribers, which is not necessary in this case but can be very useful.
- You now need to set the security settings for the Distribution Agent. Click on the …. button. Again, in this example I ran the Agent under the SQL Server Agent service account. Read the security best practices here.
- Click Next.
- Make sure you create a synchronization job schedule again. This job is also needed in the SSIS package later on.
- Initialize the subscription at first synchronization: select the first box to create the subscription when finishing this wizard.
- Complete the wizard by clicking Finish.

The subscription will be created. In SSMS you can see that a new database, the subscriber, has been created. There are no tables or other objects available in the database yet, because the replication jobs have not run yet.

Now expand the SQL Server Agent, go to Jobs and search for the job that creates the snapshot. Rename this job to "CreateSnapshot". Then search for the job that distributes the snapshot and rename it to "DistributeSnapshot".

Create an SSIS package that executes the snapshot replication

We now need an SSIS package that will take care of the execution of both jobs. The CreateSnapshot job needs to execute and finish before the DistributeSnapshot job runs, and after the DistributeSnapshot job has started the package needs to wait until it is finished before the package execution completes. The Execute SQL Server Agent Job Task is designed to execute SQL Agent jobs from SSIS. Unfortunately this SSIS task only executes the job and reports back whether the job started successfully or not; it does not report whether the job actually completed with success or failure. This is because these jobs are asynchronous. The SSIS package I've created does the following:
- It runs the CreateSnapshot job.
- It checks every 5 seconds, with a For Loop, whether the job has completed.
- When the CreateSnapshot job is completed, it starts the DistributeSnapshot job.
- And again it waits until the snapshot is delivered before the package finishes successfully (see the C# sketch at the end of this post for the same start-and-poll idea).

Quite simple, and the package is ready to use as a standalone extract mechanism. After executing the package, the replicated tables are added to the subscriber database and are filled with data.

Download the SSIS package here (SSIS 2008)

Conclusion

In this example I only replicated 5 tables; I could create an SSIS package that does the same in approximately the same amount of time. But if I replicated all the 70+ AdventureWorks tables, I would save a lot of time and boring work! With Replication Services you also benefit from the fact that schema changes are applied automatically, which means your entire extract phase won't break. Because a snapshot is created using the bcp (bulk copy) utility, it's also quite fast, so the performance will be quite good. A disadvantage of using snapshot replication as an extraction tool is the limitation on source systems: you can only choose SQL Server or Oracle databases to act as a publisher. So if you plan to build an extract phase for your ETL process that involves a lot of tables, think about Replication Services; it can save you a lot of time, and thanks to the extract SSIS package I've created you can fit it neatly into your usual SSIS ETL process.
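The SSIS package linked above uses an Execute SQL Server Agent Job Task plus a For Loop to wait for each job. The same start-and-poll idea can be sketched in a few lines of C#, for example inside a Script Task. This is only an illustration, assuming the job names CreateSnapshot and DistributeSnapshot from this post and a local default instance; it uses msdb's documented sp_start_job and sp_help_job procedures, and it does not check the final run outcome.

using System;
using System.Data;
using System.Data.SqlClient;
using System.Threading;

class SnapshotJobRunner
{
    const string ConnectionString = "Data Source=.;Initial Catalog=msdb;Integrated Security=SSPI";

    static void Main()
    {
        RunJobAndWait("CreateSnapshot");
        RunJobAndWait("DistributeSnapshot");
    }

    static void RunJobAndWait(string jobName)
    {
        using (var connection = new SqlConnection(ConnectionString))
        {
            connection.Open();

            // Start the job; this is asynchronous and returns as soon as the request is queued.
            using (var start = new SqlCommand("msdb.dbo.sp_start_job", connection))
            {
                start.CommandType = CommandType.StoredProcedure;
                start.Parameters.AddWithValue("@job_name", jobName);
                start.ExecuteNonQuery();
            }

            // Poll every 5 seconds until the job goes idle again (current_execution_status = 4).
            while (true)
            {
                Thread.Sleep(TimeSpan.FromSeconds(5));

                using (var check = new SqlCommand("msdb.dbo.sp_help_job", connection))
                {
                    check.CommandType = CommandType.StoredProcedure;
                    check.Parameters.AddWithValue("@job_name", jobName);
                    check.Parameters.AddWithValue("@job_aspect", "JOB");

                    using (SqlDataReader reader = check.ExecuteReader())
                    {
                        if (reader.Read() && Convert.ToInt32(reader["current_execution_status"]) == 4)
                        {
                            Console.WriteLine(jobName + " finished.");
                            return;
                        }
                    }
                }
            }
        }
    }
}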

    Read the article

  • General monitoring for SQL Server Analysis Services using Performance Monitor

    - by Testas
    A recent customer engagement required the setup of a monitoring solution for SSAS. Due to the time restrictions placed upon this, the native Windows Performance Monitor (Perfmon) and SQL Server Profiler tools were used, as a third-party tool would have meant the customer providing an additional monitoring server that was not available. I wanted to outline the performance monitoring counters that were used to monitor the system on which SSAS was running. Because slow query performance was occurring in certain scenarios, Perfmon was used to establish whether any pressure was being placed on the disk, CPU or memory subsystem when concurrent connections ran the same query, and Profiler was used to pinpoint how the query was being managed within SSAS. Profiler I will leave for another blog post.

This guide is not designed to provide a definitive list of what should be used when monitoring SSAS; different situations may require the addition or removal of counters. However, I hope that it serves as a good basis for starting your monitoring of SSAS. I would also like to acknowledge Chris Webb's awesome chapters from "Expert Cube Development" that also helped shape my monitoring strategy: http://cwebbbi.spaces.live.com/blog/cns!7B84B0F2C239489A!6657.entry

Simulating Connections

To simulate additional connections to the SSAS server whilst monitoring, I used ascmd to run multiple concurrent executions of the typical and worst-performing queries that were identified by the customer (a small ADOMD.NET sketch of the same idea appears at the end of this post). A similar script can be downloaded from CodePlex at http://www.codeplex.com/SQLSrvAnalysisSrvcs. File name: ASCMD_StressTestingScripts.zip.

Performance Monitor

Within Performance Monitor, a counter log was created that contained the list of counters below. The important point to note when running the counter log is that the RUN AS property within the counter log properties should be changed to an account that has rights to the SSAS instance when monitoring MSAS counters. Failure to do so means that the counter log runs under the system account; no errors or warnings are given while the counter log runs, and it is only when you need to view the MSAS counters that you discover they were not collected, because the default account has no rights to SSAS. If your connection simulation takes hours, this could prove quite frustrating if not done beforehand.

The counters used (Object\Counter, instance, and justification):

System\Processor Queue Length (N/A): Indicates how many threads are waiting for execution against the processor. If this counter is consistently higher than around 5 when processor utilization approaches 100%, this is a good indication that there is more work (active threads) available (ready for execution) than the machine's processors are able to handle.

System\Context Switches/sec (N/A): Measures how frequently the processor has to switch from user mode to kernel mode to handle a request from a thread running in user mode. The heavier the workload running on your machine, the higher this counter will generally be, but over the long term the value of this counter should remain fairly constant. If this counter suddenly starts increasing, however, it may be an indication of a malfunctioning device, especially if the Processor\Interrupts/sec (_Total) counter on your machine shows a similar unexplained increase.

Process\% Processor Time (sqlservr): Should definitely be used if Processor\% Processor Time (_Total) is maxing out at 100%, to assess the effect of the SQL Server process on the processor.

Process\% Processor Time (msmdsrv): Should definitely be used if Processor\% Processor Time (_Total) is maxing out at 100%, to assess the effect of the Analysis Services process on the processor.

Process\Working Set (sqlservr): If the Memory\Available MBytes counter is decreasing, this counter can be used to indicate whether the process is consuming larger and larger amounts of RAM. Process(instance)\Working Set measures the size of the working set for each process, which indicates the number of allocated pages the process can address without generating a page fault.

Process\Working Set (msmdsrv): As above, but for the Analysis Services process.

Processor\% Processor Time (_Total and individual cores): Measures the total utilization of your processors by all running processes. If the machine has multiple processors, be mindful that only an average is provided.

Processor\% Privileged Time (_Total): Shows how the OS is handling basic IO requests. If kernel-mode utilization is high, your machine is likely underpowered, as it is too busy handling basic OS housekeeping functions to be able to effectively run other applications.

Processor\% User Time (_Total): Shows how the applications are behaving from a processor perspective; consistently high utilization suggests the server is dealing with too many applications and may require more hardware or scaling out.

Processor\Interrupts/sec (_Total): The average rate, in incidents per second, at which the processor received and serviced hardware interrupts. This should be consistent over time, but a sudden unexplained increase could indicate a device malfunction, which can be confirmed using the System\Context Switches/sec counter.

Memory\Pages/sec (N/A): Indicates the rate at which pages are read from or written to disk to resolve hard page faults. This counter is a primary indicator of the kinds of faults that cause system-wide delays, and the primary counter to watch for any indication of insufficient RAM to meet your server's needs. A good idea here is to configure a Perfmon alert that triggers when the number of pages per second exceeds 50 per paging disk on your system. You may also want to review the configuration of the page file on the server.

Memory\Available MBytes (N/A): The amount of physical memory available to processes running on the computer. If this counter is greater than 10% of the actual RAM in your machine then you probably have more than enough RAM. Monitor it regularly to see if any downward trend develops, and set an alert to trigger if it drops below 2% of the installed RAM.

Physical Disk\Disk Transfers/sec (each physical disk): If it goes above 10 disk I/Os per second then you have poor response time for your disk.

Physical Disk\% Idle Time (_Total): If Disk Transfers/sec is above 25 disk I/Os per second, use this counter, which measures the percentage of time that your hard disk is idle during the measurement interval. If you see this counter fall below 20% then you have likely got read/write requests queuing up for a disk that is unable to service them in a timely fashion.

Physical Disk\Disk Queue Length (the OLAP and SQL physical disks): A value that is consistently less than 2 means that the disk system is handling the IO requests against the physical disk.

Network Interface\Bytes Total/sec (the NIC): Should be monitored over a period of time to see if there is an increase or decrease in network utilisation.

Network Interface\Current Bandwidth (the NIC): An estimate of the current bandwidth of the network interface in bits per second (bps).

MSAS 2005: Memory\Memory Limit High KB (N/A): Shows (as a percentage) the high memory limit configured for SSAS in C:\Program Files\Microsoft SQL Server\MSAS10.MSSQLSERVER\OLAP\Config\msmdsrv.ini.

MSAS 2005: Memory\Memory Limit Low KB (N/A): Shows (as a percentage) the low memory limit configured for SSAS in C:\Program Files\Microsoft SQL Server\MSAS10.MSSQLSERVER\OLAP\Config\msmdsrv.ini.

MSAS 2005: Memory\Memory Usage KB (N/A): Displays the memory usage of the server process.

MSAS 2005: Memory\File Store KB (N/A): Displays the amount of memory that is reserved for the cache. Note that if the total memory limit in msmdsrv.ini is set to 0, no memory is reserved for the cache.

MSAS 2005: Storage Engine Query\Queries from Cache Direct/sec (N/A): Displays the rate of queries answered directly from the cache.

MSAS 2005: Storage Engine Query\Queries from Cache Filtered/sec (N/A): Displays the rate of queries answered by filtering an existing cache entry.

MSAS 2005: Storage Engine Query\Queries from File/sec (N/A): Displays the rate of queries answered from files.

MSAS 2005: Storage Engine Query\Average time/query (N/A): Displays the average time of a query.

MSAS 2005: Connection\Current connections (N/A): Displays the number of connections against the SSAS instance.

MSAS 2005: Connection\Requests/sec (N/A): Displays the rate of query requests per second.

MSAS 2005: Locks\Current Lock Waits (N/A): Displays the number of connections waiting on a lock.

MSAS 2005: Threads\Query pool job queue length (N/A): The number of queries in the job queue.

MSAS 2005: Proc Aggregations\Temp file bytes written/sec (N/A): Shows the number of bytes of data processed in a temporary file.

MSAS 2005: Proc Aggregations\Temp file rows written/sec (N/A): Shows the number of rows of data processed in a temporary file.
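As a rough illustration of the connection-simulation step, the following C# sketch fires a number of concurrent MDX queries at an instance using ADOMD.NET. It is only a stand-in for the ascmd stress-testing scripts mentioned above; the server name, database and query text are placeholders you would replace with the customer's worst-performing queries.

using System;
using System.Diagnostics;
using System.Threading.Tasks;
using Microsoft.AnalysisServices.AdomdClient;

class SsasLoadSimulator
{
    static void Main()
    {
        // Placeholders: point these at the instance, database and query under test.
        const string connectionString = "Data Source=localhost;Catalog=Adventure Works DW";
        const string mdx = "SELECT {[Measures].DefaultMember} ON COLUMNS FROM [Adventure Works]";
        const int concurrentUsers = 20;

        // Each "user" opens its own connection and runs the same query, so Perfmon can be
        // watched while the concurrent load is applied.
        Parallel.For(0, concurrentUsers, user =>
        {
            using (var connection = new AdomdConnection(connectionString))
            {
                connection.Open();
                using (var command = new AdomdCommand(mdx, connection))
                {
                    var watch = Stopwatch.StartNew();
                    using (var reader = command.ExecuteReader())
                    {
                        while (reader.Read()) { } // drain the result set
                    }
                    Console.WriteLine("User {0} finished in {1} ms", user, watch.ElapsedMilliseconds);
                }
            }
        });
    }
}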

    Read the article

< Previous Page | 484 485 486 487 488 489 490 491 492 493 494 495  | Next Page >