Search Results

Search found 22078 results on 884 pages for 'composite primary key'.

Page 569/884 | < Previous Page | 565 566 567 568 569 570 571 572 573 574 575 576  | Next Page >

  • RDP with multiple monitors, display preferences get reset?

    - by Martijn Kooij
    Problem: When I connect to my pc at the office via RDP all the application windows I had previously carefully placed on either monitor 1 or 2 will be "scrambled". Either all applications show on monitor 1 and monitor 2 is empty, or they have switched 1 <- 2. Expected behaviour: When I connect I see all the application windows on exactly the same position and in the exact same size as I left them the night before. I have the exact same monitors at home as I have at work: Primary 2560x1440, Secondary 900x1440. Yesterday I tried switching the physical cables on the host machine hoping that the hardware order of the monitors was the difference. But this morning my secondary monitor was completely blank, not even the taskbar (which I had set to ONLY show on the secondary). Somewhere there must be something to help Windows understand which physical monitor is which virtual RDP monitor is which RDP "server" monitor... Are there more options than switching the cables? This one has been bothering me for a long long time now, I hope someone has a solution or workaround for me. Edit I want to use both monitors, so I have checked the "Use all monitors" setting in the RDP client. For example I leave my mail and total commander on the right monitor, and visual studio and Firefox on the left monitor. When I connect to RDP I want to see those applications on the same positions and sizes.

    Read the article

  • Is it possible to recover from the Windows 8 refresh feature?

    - by Warren P
    The intention of the Windows 8 RTM (released version) Refresh feature is to restore the system to the way it was when I first installed. It didn't though. Almost everything that came in the start-screen (not a menu any more) is gone, not just third party apps I installed, but EVERYTHING other than the icon for internet explorer, and the icon for the store, and the desktop, were wiped. Out of the box Windows 8 had a pretty large list of things installed, and it seems that the Refresh feature wipes all of them out. Is it possible to really get the system back to a fresh install state, or should I just re-install from the DVD I made? (I have access to Windows 8 RTM, legally through the MS Action Pack subscription.) I suspect that if I create a new account, I might get a new desktop with a default set of icons, but I'm hoping it might be possible to do this without using a different login. The second problem with Windows 8 refresh is it seems to trash the ACL's on my C: drive, taking all permissions away for access to the C: drive, making nothing on it visible or readable to my primary and only login account. I believe it might be possible to undo the damage done by the "refresh" with some judicious use of icacls from the command prompt.

    Read the article

  • How to prevent response to who-has requests on virtual eth interface?

    - by user42881
    Hi, we use small embedded X86 linux servers equipped with a single physical ethernet port as a gateway for an IP video surveillance application. Each downstream IP cam is mapped to a separate virtual IP address like this: real eth0 IP address= 192.168.1.1, camera 1 (eth0:1) =192.168.1.61, camera 2 (eth0:2) =192.168.1.62, etc. etc. all on the same eth0 physical port. This approach works well, except that a specific third-party windows video recording application running on a separate PC on the same LAN, automatically pings the virtual IPs looking for unique who-has responses on system startup and, when it gets back the same eth0 MAC address for each virtual interface, freaks out and won't allow us to subsequently manually enter those addresses. The windows app doesn't mind, tho, if it receives no answer to the who-has ping. My question - how can we either (a) shut off the who-has responses just for the virtual eth0:x interfaces while keeping them for the primary physical eth0 port, or, in the alternative, spoof a valid but different MAC address for each virtual interface? Thanks!

    Read the article

  • What happened to 'Copy Public Gallery Link' for folders in Photos in Dropbox?

    - by Transgenic
    I've been trying to figure out how to use the 'Photos' folder properly, but it does not seem to contain the functionality that is mentioned on various websites. From what I've read, you are supposed to create a folder within the 'Photos' folder. At that point, it becomes a public gallery for which you can access the context menu item Copy Public Gallery Link. However, I do not have this context menu item. I found a topic on the DropBox forum, but it is 2 years old. Even the various website information mentioned is over a year old. I found a topic from this year on SuperUser that mentions about the Share link menu item. I understand that this functionality is basically the same as Copy Public Gallery Link, however, the key difference is that it opens the website for which I need to click 'Copy link to this page' link, which then adds it to the clipboard. My primary questions are: Did they phase out the Copy Public Gallery Link from the context menu entirely? If not, is there a way to re-enable this menu item? If not, am I using the Photos folder properly? Lastly, if there is no way to get this menu item, is there some kind of script or hack-around that will let me simplify the Shark link + Copy link to this page functionality without requiring it to open my web browser? Note: I had to edit my post to remove other website links that I mention since I'm considered a 'New User' here and am only able to post 2 hyperlinks. Sorry.

    Read the article

  • Linux: Three default gateways?

    - by Daniel
    My server has three default gateways, how can that be? Shouldn't there be one default gw? I have three NICs, each attached to a separate subnet: server1:~# route Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 10.5.0.0 * 255.255.255.224 U 0 0 0 eth3 localnet * 255.255.255.224 U 0 0 0 eth0 192.168.8.0 * 255.255.255.192 U 0 0 0 eth1 default 10.5.0.1 0.0.0.0 UG 0 0 0 eth3 default 192.168.8.1 0.0.0.0 UG 0 0 0 eth1 default 10.1.0.1 0.0.0.0 UG 0 0 0 eth0 Sometimes, I can't ping a host on the Internet, sometimes I can. What I want is traffic to the Internet (0.0.0.0) routed through a specific NIC. Can I just add a route for 0.0.0.0 and default gw to one of the eth0-3 interfaces? Will it break my connection? I'm using Debian, here is my /etc/network/interfaces: # This file describes the network interfaces available on your system # and how to activate them. For more information, see interfaces(5). # The loopback network interface auto lo iface lo inet loopback # The primary network interface allow-hotplug eth0 iface eth0 inet static address 10.1.0.4 netmask 255.255.255.224 network 10.1.0.0 broadcast 10.1.0.31 gateway 10.1.0.1 allow-hotplug eth1 iface eth1 inet static address 192.168.8.4 netmask 255.255.255.192 network 192.168.8.0 broadcast 192.168.8.63 gateway 192.168.8.1 allow-hotplug eth3 iface eth3 inet static address 10.5.0.4 netmask 255.255.255.224 network 10.5.0.0 broadcast 10.5.0.31 gateway 10.5.0.1

    Read the article

  • Make call with alternate provider if NOANSWER

    - by adaptive
    I have two voip providers, one free an the other paid. The free provider only allows local calls to certain area codes, so I need to fall back to the the paid provider if a call fails. At the moment, I have the following context in my extensions.conf file: [globals] ; freephoneline.ca PRIMARY_PROVIDER=fpl ; voip.ms SECONDARY_PROVIDER=voipms [local] exten => _NXXNXXXXXX,1,Set(CALLERID(name)=${OUTGOING_NAME}) exten => _NXXNXXXXXX,n,Dial(SIP/${EXTEN}@${PRIMARY_PROVIDER}) exten => _NXXNXXXXXX,n,Set(CALLERID(num)=${OUTGOING_NUMBER}) exten => _NXXNXXXXXX,n,Dial(SIP/1${EXTEN}@${SECONDARY_PROVIDER}) exten => _NXXNXXXXXX,n,Hangup() I checked the logs and noticed that the free provider responds with NOANSWER if a call is not allowed (Even though it plays a message). What I want is to: Try calling the ${PRIMARY_PROVIDER} first. If NOANSWER is returned by provider (not that the callee did not answer), then call with ${SECONDARY_PROVIDER} How can I modify my dial plan to get the desired results? EDIT : The primary provider is freephoneline.ca, and I'm using asterisk v1.8.2.3-2

    Read the article

  • My system is always disk-bound (the disk light is always on). Why is this?

    - by Scoobie
    I have been given a laptop by the good folks at my company on which to do my work (Java development). I usually use eclipse as my primary development platform. The laptop is a Dell D830 and runs Windows 7 - 32 bit. Although the processor supports a 64 bit instruction-set, licensing limits me to running the 32 bit OS. The HDD is a WD1600BEVT (Western Digital). I have noticed that my disk is always very slow. Windows start up is usually pretty quick, however as soon as I log on, my disk light stays on and usually, the laptop takes about 4 minutes (after logging in -- immediately upon getting the prompt to press Ctrl + Alt + Del to log in) before it's usable. Questions: Is this expected behavior? What can I do to examine the disk and determine the cause of the problem? What can I do to improve my disk's performance? Any optimizations you may be able to suggest? Other Questions: Some have suggested running Process Monitor (from sysinternals), but how would i get the log since start up? Instead of trying to fix this myself, should I simply push this onto the system administrator? Thanks all.

    Read the article

  • Mail server DNS failed to resolved by Mac clients

    - by Concordus Applications
    We have two internal DNS servers. One is located on a linux server box and the other is the router's DNS management. We set the linux box as primary DNS via DHCP and the router as secondary. We have a few Mac clients that are accessing our internal mail server (hostnamed "mail" internally). When using IMAP or SMTP against the mail server internally, the mac boxes will sometimes fail to locate the server. If I use NSLOOKUP I can see that "mail" is pointed to the correct IP address and is being resolved via the correct DNS server, but if I ping "mail" it fails. ~ (bash)$ nslookup mail Server: 254.254.254.206 Address: 254.254.254.206#53 Name: mail.example.com Address: 254.254.254.205 Note: I replaced our actual internal IP address with 254.254.254.* If I wait a few minutes (3-5 minutes), somehow it resolves itself and sends successfully. This happens multiple times a day. The /etc/hosts file on the mac boxes is the default config. ## # Host Database # # localhost is used to configure the loopback interface # when the system is booting. Do not change this entry. ## 127.0.0.1 localhost 255.255.255.255 broadcasthost ::1 localhost fe80::1%lo0 localhost Is there something about Mac clients I should know to prevent this failed DNS resolution? Client boxes are: OSX 10.7.4, 8GB RAM, i5 MacBooks Server is: Ubuntu 12.04 Server

    Read the article

  • Is it safe to format this partition?

    - by xanesis4
    On a ubuntu server I own, I am running out of space. When I ran sudo parted /dev/sda -l to find all available drives, I got this: Model: ATA ST31000528AS (scsi) Disk /dev/sda: 1000GB Sector size (logical/physical): 512B/512B Partition Table: msdos Number Start End Size Type File system Flags 1 1049kB 256MB 255MB primary ext2 boot 2 257MB 1000GB 1000GB extended 5 257MB 1000GB 1000GB logical lvm Model: Linux device-mapper (linear) (dm) Disk /dev/mapper/server--vg-swap_1: 2135MB Sector size (logical/physical): 512B/512B Partition Table: loop Number Start End Size File system Flags 1 0.00B 2135MB 2135MB linux-swap(v1) Model: Linux device-mapper (linear) (dm) Disk /dev/mapper/server--vg-root: 998GB Sector size (logical/physical): 512B/512B Partition Table: loop Number Start End Size File system Flags 1 0.00B 998GB 998GB ext4 I understand /dev/mapper/server--vg-root is the filesystem, and /dev/sda1 has some stuff related to GRUB. But, what about /dev/sda2 and /dev/sda5? When I tried to mount /dev/sda2, it said that I needed to specify the file system, which according to the table, is nonexistent. So, is it safe to format this with, say ext4 and mount it? Also, when I tried to mount /dev/sd5, it gave me this error: mount: unknown filesystem type 'LVM2_member' I assume it is NOT save to reformat this. If I'm wrong, then that would be great, because I could save some space. Please let me know either way. Thanks in advance! UPDATE: Here is the result of mount: /dev/mapper/server--vg-root on / type ext4 (rw,errors=remount-ro) proc on /proc type proc (rw,noexec,nosuid,nodev) sysfs on /sys type sysfs (rw,noexec,nosuid,nodev) none on /sys/fs/fuse/connections type fusectl (rw) none on /sys/kernel/debug type debugfs (rw) none on /sys/kernel/security type securityfs (rw) udev on /dev type devtmpfs (rw,mode=0755) devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620) tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755) none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880) none on /run/shm type tmpfs (rw,nosuid,nodev) /dev/sda1 on /boot type ext2 (rw,acl) /dev/sda1 on /media/hd2 type ext2 (rw)

    Read the article

  • Downloading Emails locally with Thunderbird

    - by r_honey
    I am using Gmail (web interface) for sometime now, and have well over 20 labels and some thousand mails there archived to different labels in Gmail. Now I want to have a local copy of all my mails and following points are important in the context: The Primary mail access mechanism would continue to be Gmail web for me. I just want a backup of my mail account locally. Ideally the mails should download locally in folders named after Gmail labels (I know this is possible via IMAP but probably not by POP) After all my mails are available locally, I will delete most of them in Gmail to free up space and because I want to archive them. The mails should continue to exist locally and should not be deleted when I delete when from Gmail web interface. I would be syncing my gmail account locally let's say every month. So, the new mails that I have sent/received during this period should come over to my local mailbox in the folders named after Gmail labels. I do understand that Gmail maintains a single copy of email having 2 different labels and such email would get duplicated locally in the 2 folders and I am okay with that. Essentially you can see I just want to archive my mails from the Gmail server to a local backup and then sync (one way from Gmail to locally) new mails at regular intervals. For some points above, POP seems to be the option while IMAP seems for the others. I am really confused and need help in deciding which of POP or IMAP would suit me best. I have currently chosen Thunderbird to be my local email client but would not have a problem switching to Outlook or anything else as long as I get my desired archiving functionality.

    Read the article

  • oracle access on vmware fusion

    - by gaudi_br
    Hello, I'm running snow leopard and I'm doing some development that requires some network knowledge. I've installed vmware fusion 3.0 and I've set up a virtual machine with windows 2003 server. I need to mimic the exact configuration of another server in the network, so I really need to run the versions I'll be mentioning here. Besides, I set up two network configurations on the VM: one NAT config (so that I can have internet access) and one host-only config (because I need to use another server's mac adress and my local area network might have a problem with it) From the installation of windows 2003, I then installed oracle 10.2.0.1. During the installation I received a warning about the primary ip-address of the system being dhcp assigned, but I ignored it (maybe it was a mistake)... Now, from experience, unless the DHCP assigned address changes, I should be able to access the guest system's database from the host system, so I went to safari and tried to access the oracle em. As it turns out, because my computer is on a company network, the company's DNS doesn't know about the virtual machine, unless of course I switch to a bridged network config. However, I don't want to do that because I don't to mix up the domains. So I guess the question is, how can I define my own dns or router, or whatever it is that I need to define so that whenever I try the guest system's ip address form the host, it will use the vmnet1 or vmnet8 interface define by vmware and bypass the dns configuration of my local area network. I'd also like to know what to do incase I want to change ip addresses on the guest machine without having oracle go haywire (I've noticed a few folders on the structure which are specific for the very first IP Address)... Any help would be appreciated. Thanks in advance.

    Read the article

  • Can't upgrade MySQL Server on new Ubuntu 12.04 install

    - by user179627
    After freshly installing Ubuntu server 12.04, I did the usual apt-get update / apt-get upgrade, which failed for mysql-server-5.5: Setting up mysql-server-5.5 (5.5.31-0ubuntu0.12.04.2) ... start: Job failed to start invoke-rc.d: initscript mysql, action "start" failed. dpkg: error processing mysql-server-5.5 (--configure): subprocess installed post-installation script returned error exit status 1 dpkg: dependency problems prevent configuration of mysql-server: mysql-server depends on mysql-server-5.5; however: Package mysql-server-5.5 is not configured yet. dpkg: error processing mysql-server (--configure): dependency problems - leaving unconfigured I tried a wide variety a approaches suggested by googling, which involved various combinations of apt-get remove/purge/install -f/reinstall, etc., with no luck. I also tried downloading the package directly from launchpad.net and running dpkg -i on it (this had worked for a similar issue with a kernel upgrade), but to no avail. I'm not actually particularly interested in what's going on with mysql, per se (though I will need to figure it out at some time); at this point, my primary concern is that I am unable to apt-get install other packages! What to do?

    Read the article

  • I can get in, but I can't get out

    - by robwilkerson
    Like most technical folks, I suppose, I'm my family's primary source of tech support. I'm a developer--not a sysadmin--by trade and tonight I bumped into something I've never seen before. I'm hoping someone here has. In order to better help my Mom, I have her set up on a home network behind a Linksys router (WRT54G). She's got a Mac, so I have her router set up to forward SSH requests to her laptop's internal IP. I also have her router running DDNS through DynDns. Tonight she called to tell me that she can't access the Internet. Assuming it was one of the many simple, stupid problems most of us encounter with parents, I logged into the router admin remotely and took a look around. Everything looked normal. Then I SSH'd into her machine to check out her IP, DNS, etc. settings. Everything still looked fine. Then I noticed something weird. When SSH'd into her machine, I can't ping her router. In other words, I seem to be able to access her computer through her router, but not access her router from her computer. A traceroute dies immediately as well. Any ideas what I might try next? I've bounced her computer and even unplugged her router (it was plugged back in, of course). Thanks.

    Read the article

  • I can get in, but I can't get out

    - by robwilkerson
    Like most technical folks, I suppose, I'm my family's primary source of tech support. I'm a developer--not a sysadmin--by trade and tonight I bumped into something I've never seen before. I'm hoping someone here has. In order to better help my Mom, I have her set up on a home network behind a Linksys router (WRT54G). She's got a Mac, so I have her router set up to forward SSH requests to her laptop's internal IP. I also have her router running DDNS through DynDns. Tonight she called to tell me that she can't access the Internet. Assuming it was one of the many simple, stupid problems most of us encounter with parents, I logged into the router admin remotely and took a look around. Everything looked normal. Then I SSH'd into her machine to check out her IP, DNS, etc. settings. Everything still looked fine. Then I noticed something weird. When SSH'd into her machine, I can't ping her router. In other words, I seem to be able to access her computer through her router, but not access her router from her computer. A traceroute dies immediately as well. Any ideas what I might try next? I've bounced her computer and even unplugged her router (it was plugged back in, of course). Thanks.

    Read the article

  • Display on secondary video card (Nvidia 8400 GS): horrible refresh, bogs system

    - by minameismud
    This is my work computer, but it's a small shop. We do business software development. The most hardcore thing we create is some web animations with html5 and fancy javascript/css. The base machine is a Dell Precision T3500 - Xeon W3550 (3.07GHz quad), 6GB ram, pair of 500GB harddrives, and Win 7 x64 Enterprise SP1. My primary video card is an ATI FirePro V4800 1GB in a PCIe slot of some speed driving a pair of 23s at 1920x1080 through DisplayPort-HDMI adapters. The secondary card is an NVidia GeForce 8400GS in a PCI slot driving a single 17" at 1280x1024 through DVI. On either of the 23" monitors, windows move smoothly, scroll quickly, and are generally very responsive. On the 17", it's slow, chunky, and when I'm trying to scroll a ton of content, Windows will occasionally suggest I drop to the Windows Basic theme. I've updated drivers for both cards, and I've gotten every Windows update relating to video. Specifically: ATI FirePro Provider: Advanced Micro Devices, Inc Date: 6/22/2014 Version: 13.352.1014.0 NVidia 8400 GS Provider: NVIDIA Date: 7/2/2014 Version: 9.18.13.4052 Unfortunately, new hardware isn't really an option. Is there anything I can do software-wise to speed up the NVidia-driven monitor?

    Read the article

  • Duplicate monitor on highest resolution in Windows 7

    - by AlexanderMP
    I have a monitor with a native resolution of 2560x1440, connected through display port. I also have an AV Receiver connected to the video card via HDMI, to have surround sound in games. All using Radeon HD 5670 (will upgrade soon to HD 7850). The problem is that my computer detects the receiver as a separate monitor, with the highest available resolution of 1920x1080. I have 3 options: Disconnect the second display. But then the sound (digital audio output through video card) also disappears. Duplicate displays. But then my primary monitor resolution is reduced to a maximum of just 1920x1080, that being the maximum of the second monitor. Extend desktop. This is the solution I picked so far, it being the least evil. The problems I face in this situations are 2: I have a blank part of the desktop where I sometimes lose my mouse pointer, so I made the extension small, 640x480, and placed it in a corner; when I turn off the main display, all windows resize to 640x480. In Kubuntu I had the option to duplicate the displays, while keeping the higher resolution. Which was great. I tried overriding using the Win7 netbook hack, but it's not available on non-netbooks. Is there a similar solution for this problem in Windows 7?

    Read the article

  • How to ensure local file is up-to-date or ahead (dropbox sync) before truecrypt auto-mount it?

    - by user620965
    There are a lot tutorials out there that states that dropbox build-in encryption is not secure enought. That tutorials recommands to sync a truecrypt container file to have all files in it securely encrypted. This setup is know to be limited. You can NOT have that truecrypt container file mounted on the same time on more than one location - if you have inserted changes to the contents of the container in more then one location at a time then this setup produces a conflict on the container file in the dropbox system - resulting in one container file for each location. In my case that issue is not relevant - i do not use my data on more than one location at a time. I want to use the auto-mount feature of truecrypt on startup of windows 7 to have a zero configuration environment - and start working right away. But i want to ensure that the local truecrypt container file is up-to-date before truecrypt mounts it automatically - imagine you updated the contents of the container on your primary location and your secondary location was off for a long time. In that case it can take "a long time" till dropbox sync is complete (e.g. depending on your internet connection and the size of the container file). There is a option in truecrypt that ensures that truecrypt do not update the timestamp of the container file - which speeds up the sync, because dropbox client is doing a differential sync then instead of a time consuming full-sync. That is an improvement to that setup, but this do not fix my issue. The question is how to make the auto-mount function wait for the container file to be up-to-date (updated by dropbox)? In contrast: if the file was changed local, but remote file (in the dropbox cloud system) is still old (not jet updated by the sync process / or process is progress), should not make truecrypt to wait for the sync. Suggestions?

    Read the article

  • Error on 64 Bit Install of IIS &ndash; LoadLibraryEx failed on aspnet_filter.dll

    - by Rick Strahl
    I’ve been having a few problems with my Windows 7 install and trying to get IIS applications to run properly in 64 bit. After installing IIS and creating virtual directories for several of my applications and firing them up I was left with the following error message from IIS: Calling LoadLibraryEx on ISAPI filter “c:\windows\Microsoft.NET\Framework\v4.0.30319\aspnet_filter.dll” failed This is on Windows 7 64 bit and running on an ASP.NET 4.0 Application configured for running 64 bit (32 bit disabled). It’s also on what is essentially a brand new installation of IIS and Windows 7. So it failed right out of the box. The problem here is that IIS is trying to loading this ISAPI filter from the 32 bit folder – it should be loading from Framework64 folder note the Framework folder. The aspnet_filter.dll component is a small Win32 ISAPI filter used to back up the cookieless session state for ASP.NET on IIS 7 applications. It’s not terribly important because of this focus, but it’s a default loaded component. After a lot of fiddling I ended up with two solutions (with the help and support of some Twitter folks): Switch IIS to run in 32 bit mode Fix the filter listing in ApplicationHost.config Switching IIS to allow 32 Bit Code This is a quick fix for the problem above which enables 32 bit code in the Application Pool. The problem above is that IIS is trying to load a 32 bit ISAPI filter and enabling 32 bit code gets you around this problem. To configure your Application Pool, open the Application Pool in IIS Manager bring up Advanced Options and Enable 32 Bit Applications: And voila the error message above goes away. Fix Filters Enabling 32 bit code is a quick fix solution to this problem, but not an ideal one. If you’re running a pure .NET application that doesn’t need to do COM or pInvoke Interop with 32 bit apps there’s usually no need for enabling 32 bit code in an Application Pool as you can run in native 64 bit code. So trying to get 64 bit working natively is a pretty key feature in my opinion :-) So what’s the problem – why is IIS trying to load a 32 bit DLL in a 64 bit install, especially if the application pool is configured to not allow 32 bit code at all? The problem lies in the server configuration and the fact that 32 bit and 64 bit configuration settings exist side by side in IIS. If I open my Default Web Site (or any other root Web Site) and go to the ISAPI filter list here’s what I see: Notice that there are 3 entries for ASP.NET 4.0 in this list. Only two of them however are specifically scoped to the specifically to 32 bit or 64 bit. As you can see the 64 bit filter correctly points at the Framework64 folder to load the dll, while both the 32 bit and the ‘generic’ entry point at the plain Framework 32 bit folder. Aha! Hence lies our problem. You can edit ApplicationHost.config manually, but I ran into the nasty issue of not being able to easily edit that file with the 32 bit editor (who ever thought that was a good idea???? WTF). You have to open ApplicationHost.Config in a 64 bit native text editor – which Visual Studio is not. Or my favorite editor: EditPad Pro. Since I don’t have a native 64 bit editor handy Notepad was my only choice. Or as an alternative you can use the IIS 7.5 Configuration Editor which lets you interactively browse and edit most ApplicationHost settings. You can drill into the configuration hierarchy visually to find your keys and edit attributes and sub values in property editor type interface. 
I had no idea this tool existed prior to today and it’s pretty cool as it gives you some visual clues to options available – especially in absence of an Intellisense scheme you’d get in Visual Studio (which doesn’t work). To use the Configuration Editor go the Web Site root and use the Configuration Editor option in the Management Group. Drill into System.webServer/isapiFilters and then click on the Collection’s … button on the right. You should now see a display like this: which shows all the same attributes you’d see in ApplicationHost.config (cool!). These entries correspond to these raw ApplicationHost.config entries: <filter name="ASP.Net_4.0" path="C:\Windows\Microsoft.NET\Framework\v4.0.30319\aspnet_filter.dll" enableCache="true" preCondition="runtimeVersionv4.0" /> <filter name="ASP.Net_4.0_64bit" path="C:\Windows\Microsoft.NET\Framework64\v4.0.30319\aspnet_filter.dll" enableCache="true" preCondition="runtimeVersionv4.0,bitness64" /> <filter name="ASP.Net_4.0_32bit" path="C:\Windows\Microsoft.NET\Framework\v4.0.30319\aspnet_filter.dll" enableCache="true" preCondition="runtimeVersionv4.0,bitness32" /> The key attribute we’re concerned with here is the preCondition and the bitness subvalue. Notice that the ‘generic’ version – which comes first in the filter list – has no bitness assigned to it, so it defaults to 32 bit and the 32 bit dll path. And this is where our problem comes from. The simple solution to fix the startup problem is to remove the generic entry from this list here or in the filters list shown earlier and leave only the bitness specific versions active. The preCondition attribute acts as a filter and as you can see here it filters the list by runtime version and bitness value. This is something to keep an eye out in general – if a bitness values are missing it’s easy to run into conflicts like this with any settings that are global and especially those that load modules and handlers and other executable code. On 64 bit systems it’s a good idea to explicitly set the bitness of all entries or remove the non-specific versions and add bit specific entries. So how did this get misconfigured? I installed IIS before everything else was installed on this machine and then went ahead and installed Visual Studio. I suspect the Visual Studio install munged this up as I never saw a similar problem on my live server where everything just worked right out of the box. In searching about this problem a lot of solutions pointed at using aspnet_regiis –r from the Framework64 directory, but that did not fix this extra entry in the filters list – it adds the required 32 bit and 64 bit entries, but it doesn’t remove the errand un-bitness set entry. Hopefully this post will help out anybody who runs into a similar situation without having to trouble shoot all the way down into the configuration settings and noticing the bitness settings. It’s a good lesson learned for me – this is my first desktop install of a 64 bit OS and things like this are what I was reluctant to find. Now that I ran into this I have a good idea what to look for with 32/64 bit misconfigurations in IIS at least.© Rick Strahl, West Wind Technologies, 2005-2011Posted in IIS7   ASP.NET  

    Read the article

  • Ajax Control Toolkit Now Supports jQuery

    - by Stephen.Walther
    I’m excited to announce the September 2013 release of the Ajax Control Toolkit, which now supports building new Ajax Control Toolkit controls with jQuery. You can download the latest release of the Ajax Control Toolkit from http://AjaxControlToolkit.CodePlex.com or you can install the Ajax Control Toolkit directly within Visual Studio by executing the following NuGet command: The New jQuery Extender Base Class This release of the Ajax Control Toolkit introduces a new jQueryExtender base class. This new base class enables you to create Ajax Control Toolkit controls with jQuery instead of the Microsoft Ajax Library. Currently, only one control in the Ajax Control Toolkit has been rewritten to use the new jQueryExtender base class (only one control has been jQueryized). The ToggleButton control is the first of the Ajax Control Toolkit controls to undergo this dramatic transformation. All of the other controls in the Ajax Control Toolkit are written using the Microsoft Ajax Library. We hope to gradually rewrite these controls as jQuery controls over time. You can view the new jQuery ToggleButton live at the Ajax Control Toolkit sample site: http://www.asp.net/ajaxLibrary/AjaxControlToolkitSampleSite/ToggleButton/ToggleButton.aspx Why are we rewriting Ajax Control Toolkits with jQuery? There are very few developers actively working with the Microsoft Ajax Library while there are thousands of developers actively working with jQuery. Because we want talented developers in the community to continue to contribute to the Ajax Control Toolkit, and because almost all JavaScript developers are familiar with jQuery, it makes sense to support jQuery with the Ajax Control Toolkit. Also, we believe that the Ajax Control Toolkit is a great framework for Web Forms developers who want to build new ASP.NET controls that use JavaScript. The Ajax Control Toolkit has great features such as automatic bundling, minification, caching, and compression. We want to make it easy for ASP.NET developers to build new controls that take advantage of these features. Instantiating Controls with data-* Attributes We took advantage of the new JQueryExtender base class to change the way that Ajax Control Toolkit controls are instantiated. In the past, adding an Ajax Control Toolkit to a page resulted in inline JavaScript being injected into the page. For example, adding the ToggleButton control to a page injected the following HTML and script: <input id="ctl00_SampleContent_CheckBox1" name="ctl00$SampleContent$CheckBox1" type="checkbox" checked="checked" /> <script type="text/javascript"> //<![CDATA[ Sys.Application.add_init(function() { $create(Sys.Extended.UI.ToggleButtonBehavior, {"CheckedImageAlternateText":"Check", "CheckedImageUrl":"ToggleButton_Checked.gif", "ImageHeight":19, "ImageWidth":19, "UncheckedImageAlternateText":"UnCheck", "UncheckedImageUrl":"ToggleButton_Unchecked.gif", "id":"ctl00_SampleContent_ToggleButtonExtender1"}, null, null, $get("ctl00_SampleContent_CheckBox1")); }); //]]> </script> Notice the call to the JavaScript $create() method at the bottom of the page. When using the Microsoft Ajax Library, this call to the $create() method is necessary to create the Ajax Control Toolkit control. This inline script looks pretty ugly to a modern JavaScript developer. Inline script! Horrible! 
The jQuery version of the ToggleButton injects the following HTML and script into the page: <input id="ctl00_SampleContent_CheckBox1" name="ctl00$SampleContent$CheckBox1" type="checkbox" checked="checked" data-act-togglebuttonextender="imageWidth:19, imageHeight:19, uncheckedImageUrl:'ToggleButton_Unchecked.gif', checkedImageUrl:'ToggleButton_Checked.gif', uncheckedImageAlternateText:'I don&#39;t understand why you don&#39;t like ASP.NET', checkedImageAlternateText:'It&#39;s really nice to hear from you that you like ASP.NET'" /> Notice that there is no script! There is no call to the $create() method. In fact, there is no inline JavaScript at all. The jQuery version of the ToggleButton uses an HTML5 data-* attribute instead of an inline script. The ToggleButton control is instantiated with a data-act-togglebuttonextender attribute. Using data-* attributes results in much cleaner markup (You don’t need to feel embarrassed when selecting View Source in your browser). Ajax Control Toolkit versus jQuery So in a jQuery world why is the Ajax Control Toolkit needed at all? Why not just use jQuery plugins instead of the Ajax Control Toolkit? For example, there are lots of jQuery ToggleButton plugins floating around the Internet. Why not just use one of these jQuery plugins instead of using the Ajax Control Toolkit ToggleButton control? There are three main reasons why the Ajax Control Toolkit continues to be valuable in a jQuery world: Ajax Control Toolkit controls run on both the server and client jQuery plugins are client only. A jQuery plugin does not include any server-side code. If you need to perform any work on the server – think of the AjaxFileUpload control – then you can’t use a pure jQuery solution. Ajax Control Toolkit controls provide a better Visual Studio experience You don’t get any design time experience when you use jQuery plugins within Visual Studio. Ajax Control Toolkit controls, on the other hand, are designed to work with Visual Studio. For example, you can use the Visual Studio Properties window to set Ajax Control Toolkit control properties. Ajax Control Toolkit controls shield you from working with JavaScript I like writing code in JavaScript. However, not all developers like JavaScript and some developers want to completely avoid writing any JavaScript code at all. The Ajax Control Toolkit enables you to take advantage of JavaScript (and the latest features of HTML5) in your ASP.NET Web Forms websites without writing a single line of JavaScript. Better ToolkitScriptManager Documentation With this release, we have added more detailed documentation for using the ToolkitScriptManager. In particular, we added documentation that describes how to take advantage of the new bundling, minification, compression, and caching features of the Ajax Control Toolkit. The ToolkitScriptManager documentation is part of the Ajax Control Toolkit sample site and it can be read here: http://www.asp.net/ajaxLibrary/AjaxControlToolkitSampleSite/ToolkitScriptManager/ToolkitScriptManager.aspx Other Fixes This release of the Ajax Control Toolkit includes several important bug fixes. For example, the Ajax Control Toolkit Twitter control was completely rewritten with this release. Twitter is in the process of retiring the first version of their API. You can read about their plans here: https://dev.twitter.com/blog/planning-for-api-v1-retirement We completely rewrote the Ajax Control Toolkit Twitter control to use the new Twitter API. 
To take advantage of the new Twitter API, you must get a key and access token from Twitter and add the key and token to your web.config file. Detailed instructions for using the new version of the Ajax Control Toolkit Twitter control can be found here: http://www.asp.net/ajaxLibrary/AjaxControlToolkitSampleSite/Twitter/Twitter.aspx   Summary We’ve made some really great changes to the Ajax Control Toolkit over the last two releases to modernize the toolkit. In the previous release, we updated the Ajax Control Toolkit to use a better bundling, minification, compression, and caching system. With this release, we updated the Ajax Control Toolkit to support jQuery. We also continue to update the Ajax Control Toolkit with important bug fixes. I hope you like these changes and I look forward to hearing your feedback.

    Read the article

  • Update information outdated

    - by Achim Krause
    I have a warning triangle that my update information is outdated, the last update was 12 days ago. I use Ubuntu 11.10. A run of sudo apt-get update produces the following output: Ign http://ppa.launchpad.net oneiric InRelease Ign http://de.archive.ubuntu.com oneiric InRelease Ign http://de.archive.ubuntu.com oneiric-updates InRelease Ign http://de.archive.ubuntu.com oneiric-backports InRelease Ign http://security.ubuntu.com oneiric-security InRelease Ign http://extras.ubuntu.com oneiric InRelease Hit http://ppa.launchpad.net oneiric Release.gpg Get:1 http://de.archive.ubuntu.com oneiric Release.gpg [198 B] Hit http://security.ubuntu.com oneiric-security Release.gpg Get:2 http://extras.ubuntu.com oneiric Release.gpg [72 B] Hit http://ppa.launchpad.net oneiric Release Hit http://de.archive.ubuntu.com oneiric-updates Release.gpg Hit http://security.ubuntu.com oneiric-security Release Hit http://extras.ubuntu.com oneiric Release Err http://extras.ubuntu.com oneiric Release Hit http://ppa.launchpad.net oneiric/main Sources Hit http://de.archive.ubuntu.com oneiric-backports Release.gpg Hit http://security.ubuntu.com oneiric-security/main Sources Hit http://ppa.launchpad.net oneiric/main i386 Packages Ign http://ppa.launchpad.net oneiric/main TranslationIndex Ign http://linux.dropbox.com oneiric InRelease Hit http://security.ubuntu.com oneiric-security/restricted Sources Hit http://security.ubuntu.com oneiric-security/universe Sources Hit http://security.ubuntu.com oneiric-security/multiverse Sources Hit http://security.ubuntu.com oneiric-security/main i386 Packages Hit http://security.ubuntu.com oneiric-security/restricted i386 Packages Hit http://security.ubuntu.com oneiric-security/universe i386 Packages Hit http://security.ubuntu.com oneiric-security/multiverse i386 Packages Hit http://security.ubuntu.com oneiric-security/main TranslationIndex Hit http://security.ubuntu.com oneiric-security/multiverse TranslationIndex Hit http://security.ubuntu.com oneiric-security/restricted TranslationIndex Hit http://security.ubuntu.com oneiric-security/universe TranslationIndex Hit http://de.archive.ubuntu.com oneiric Release Hit http://de.archive.ubuntu.com oneiric-updates Release Ign http://de.archive.ubuntu.com oneiric Release Hit http://security.ubuntu.com oneiric-security/main Translation-en Hit http://security.ubuntu.com oneiric-security/multiverse Translation-en Hit http://linux.dropbox.com oneiric Release.gpg Hit http://security.ubuntu.com oneiric-security/restricted Translation-en Hit http://de.archive.ubuntu.com oneiric-backports Release Hit http://security.ubuntu.com oneiric-security/universe Translation-en Ign http://de.archive.ubuntu.com oneiric/main Sources/DiffIndex Ign http://de.archive.ubuntu.com oneiric/restricted Sources/DiffIndex Ign http://de.archive.ubuntu.com oneiric/universe Sources/DiffIndex Ign http://de.archive.ubuntu.com oneiric/multiverse Sources/DiffIndex Ign http://de.archive.ubuntu.com oneiric/main i386 Packages/DiffIndex Ign http://de.archive.ubuntu.com oneiric/restricted i386 Packages/DiffIndex Ign http://de.archive.ubuntu.com oneiric/universe i386 Packages/DiffIndex Ign http://de.archive.ubuntu.com oneiric/multiverse i386 Packages/DiffIndex Hit http://linux.dropbox.com oneiric Release Get:3 http://de.archive.ubuntu.com oneiric/main TranslationIndex [3,289 B] Get:4 http://de.archive.ubuntu.com oneiric/multiverse TranslationIndex [2,265 B] Hit http://de.archive.ubuntu.com oneiric/restricted TranslationIndex Get:5 http://de.archive.ubuntu.com oneiric/universe 
TranslationIndex [2,640 B] Hit http://de.archive.ubuntu.com oneiric-updates/restricted i386 Packages Hit http://de.archive.ubuntu.com oneiric-updates/universe i386 Packages Hit http://de.archive.ubuntu.com oneiric-updates/multiverse i386 Packages Hit http://de.archive.ubuntu.com oneiric-updates/main TranslationIndex Hit http://de.archive.ubuntu.com oneiric-updates/multiverse TranslationIndex Hit http://de.archive.ubuntu.com oneiric-updates/restricted TranslationIndex Hit http://de.archive.ubuntu.com oneiric-updates/universe TranslationIndex Hit http://de.archive.ubuntu.com oneiric-backports/main Sources Hit http://linux.dropbox.com oneiric/main i386 Packages Ign http://ppa.launchpad.net oneiric/main Translation-en_US Ign http://ppa.launchpad.net oneiric/main Translation-en Ign http://linux.dropbox.com oneiric/main TranslationIndex Hit http://de.archive.ubuntu.com oneiric-backports/restricted Sources Hit http://de.archive.ubuntu.com oneiric-backports/universe Sources Hit http://de.archive.ubuntu.com oneiric-backports/multiverse Sources Hit http://de.archive.ubuntu.com oneiric-backports/main i386 Packages Hit http://de.archive.ubuntu.com oneiric-backports/restricted i386 Packages Hit http://de.archive.ubuntu.com oneiric-backports/universe i386 Packages Hit http://de.archive.ubuntu.com oneiric-backports/multiverse i386 Packages Hit http://de.archive.ubuntu.com oneiric-backports/main TranslationIndex Hit http://de.archive.ubuntu.com oneiric-backports/multiverse TranslationIndex Hit http://de.archive.ubuntu.com oneiric-backports/restricted TranslationIndex Hit http://de.archive.ubuntu.com oneiric-backports/universe TranslationIndex Hit http://de.archive.ubuntu.com oneiric/main Sources Hit http://de.archive.ubuntu.com oneiric/restricted Sources Hit http://de.archive.ubuntu.com oneiric/universe Sources Hit http://de.archive.ubuntu.com oneiric/multiverse Sources Hit http://de.archive.ubuntu.com oneiric/main i386 Packages Hit http://de.archive.ubuntu.com oneiric/restricted i386 Packages Hit http://de.archive.ubuntu.com oneiric/universe i386 Packages Hit http://de.archive.ubuntu.com oneiric/multiverse i386 Packages Hit http://de.archive.ubuntu.com oneiric/restricted Translation-en Hit http://de.archive.ubuntu.com oneiric-updates/main Translation-en Hit http://de.archive.ubuntu.com oneiric-updates/multiverse Translation-en Hit http://de.archive.ubuntu.com oneiric-updates/restricted Translation-en Hit http://de.archive.ubuntu.com oneiric-updates/universe Translation-en Hit http://de.archive.ubuntu.com oneiric-backports/main Translation-en Hit http://de.archive.ubuntu.com oneiric-backports/multiverse Translation-en Hit http://de.archive.ubuntu.com oneiric-backports/restricted Translation-en Hit http://de.archive.ubuntu.com oneiric-backports/universe Translation-en Err http://de.archive.ubuntu.com oneiric-updates/main Sources 416 Requested Range Not Satisfiable [IP: 141.30.13.30 80] Err http://de.archive.ubuntu.com oneiric-updates/restricted Sources 416 Requested Range Not Satisfiable [IP: 141.30.13.30 80] Err http://de.archive.ubuntu.com oneiric-updates/universe Sources 416 Requested Range Not Satisfiable [IP: 141.30.13.30 80] Err http://de.archive.ubuntu.com oneiric-updates/multiverse Sources 416 Requested Range Not Satisfiable [IP: 141.30.13.30 80] Err http://de.archive.ubuntu.com oneiric-updates/main i386 Packages 416 Requested Range Not Satisfiable [IP: 141.30.13.30 80] Ign http://linux.dropbox.com oneiric/main Translation-en_US Ign http://linux.dropbox.com oneiric/main Translation-en Fetched 273 B 
in 2s (91 B/s) Reading package lists... Done W: A error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: http://extras.ubuntu.com oneiric Release: The following signatures were invalid: BADSIG 16126D3A3E5C1192 Ubuntu Extras Archive Automatic Signing Key <[email protected]> W: GPG error: http://de.archive.ubuntu.com oneiric Release: The following signatures were invalid: BADSIG 40976EAF437D05B5 Ubuntu Archive Automatic Signing Key <[email protected]> W: Failed to fetch http://extras.ubuntu.com/ubuntu/dists/oneiric/Release W: Failed to fetch http://de.archive.ubuntu.com/ubuntu/dists/oneiric/main/i18n/Index No Hash entry in Release file /var/lib/apt/lists/partial/de.archive.ubuntu.com_ubuntu_dists_oneiric_main_i18n_Index W: Failed to fetch http://de.archive.ubuntu.com/ubuntu/dists/oneiric/multiverse/i18n/Index No Hash entry in Release file /var/lib/apt/lists/partial/de.archive.ubuntu.com_ubuntu_dists_oneiric_multiverse_i18n_Index W: Failed to fetch http://de.archive.ubuntu.com/ubuntu/dists/oneiric/universe/i18n/Index No Hash entry in Release file /var/lib/apt/lists/partial/de.archive.ubuntu.com_ubuntu_dists_oneiric_universe_i18n_Index W: Failed to fetch http://de.archive.ubuntu.com/ubuntu/dists/oneiric-updates/main/source/Sources 416 Requested Range Not Satisfiable [IP: 141.30.13.30 80] W: Failed to fetch http://de.archive.ubuntu.com/ubuntu/dists/oneiric-updates/restricted/source/Sources 416 Requested Range Not Satisfiable [IP: 141.30.13.30 80] W: Failed to fetch http://de.archive.ubuntu.com/ubuntu/dists/oneiric-updates/universe/source/Sources 416 Requested Range Not Satisfiable [IP: 141.30.13.30 80] W: Failed to fetch http://de.archive.ubuntu.com/ubuntu/dists/oneiric-updates/multiverse/source/Sources 416 Requested Range Not Satisfiable [IP: 141.30.13.30 80] W: Failed to fetch http://de.archive.ubuntu.com/ubuntu/dists/oneiric-updates/main/binary-i386/Packages 416 Requested Range Not Satisfiable [IP: 141.30.13.30 80] W: Some index files failed to download. They have been ignored, or old ones used instead. There are some questions with similar problems, but no one seems to get these "Range Not Satisfiable" errors. I do not use a proxy, and the network configuration should not have changed since it worked the last time.

    Read the article

  • CodePlex Daily Summary for Friday, February 19, 2010

    CodePlex Daily Summary for Friday, February 19, 2010New ProjectsApplication Management Library: Application Management makes your application life easier. It will automatic do memory management, handle and log unhandled exceptions, profiling y...Audio Service - Play Wave Files From Windows Service: This is a windows service that Check a registry key, when the key is updated with a new wave file path the service plays the wave file.Aviamodels: 3d drawing AviamodelsControl of payment proofs program for Greek citizens: This is a program that is used for Greek citizens who want to keep track of their payment proofs.Cover Creator: Cover Creator gives you the possibility to create and print CD covers. Content of CD is taken from http://www.freedb.org/ or can be added/modyfied ...DevBoard: DevBoard is a webbased scrum tool that helps developers/team get a clear overview of the project progress. It's developed in C# and silverlight.Flex AdventureWorks: The is mostly a skunk-works application to help me get acclimated to CodePlex. The long term goal is to integrate a Flex UI with the AdventureWor...GRE Wordlist: An intuitive and customizable word list for GRE aspirants. Developed in Java using a word list similar to Barron's.Indexer: A desktop file Index and Search tool which allows you to choose a list of folders to index, and then search on later. It is based on Lucene.net an...Project Management Office (PMO) for SharePoint: Sample web part for the Code Mastery event in Boston, February 11, 2010.Restart SQL Audit Policy and Job: Resolve SQL 2008 Audit Network Connectivity Issue.Rounded Corners / DIV Container: The RoundedDiv round corners container is a skin-able, CSS compliant UI control. Select which corners should be rounded, collapse and expand the c...Silverlight Google Search Application: The Silverlight Google Search Application uses Google Search API and behaves like Internet Search Application with option to preview desired page i...Weather Forecast Control: MyWeather forecast control pulls up to date weather forecast information from The Weather Channel for your website.New ReleasesApplication Management Library: ApplicationManagement v1.0: First ReleaseAudio Service - Play Wave Files From Windows Service: Audio Service v1.0: This is a working version of the Audio Service. Please use as you need to.AutoMapper: 1.0.1 for Silverlight 3.0 Alpha: AutoMapper for Silverlight 3.0. Features not supported: IDataReader mapping IListSource mapping All other features are supported.Buzz Dot Net: Buzz Dot Net v.1.10219: Buzz Dot Net Library (Parser & Objects) + WPF Example (using MVVM & Threading)Canvas VSDOC Intellisense: V 1.0.0.0a: This release contains two JavaScript files: canvas-utils.js (can be referenced in both runtime and development environment) canvas-vsdoc.js (must ...Control of payment proofs program for Greek citizens: Payment Proofs: source codeCourier: Beta 2: Added Rx Framework support and re-factored how message registration and un-registration works Blog post explaining the updates and re-factoring c...Cover Creator: Initial release: This is initial stable release. For now only in Polish language.Employee Scheduler: Employee Scheduler 2.2: Small Bug found. Small total hour calculation bug. 

  • Getting Started with Boxee

    - by DigitalGeekery
    Boxee is a free Media PC application that runs on Windows, Mac, and Ubuntu Linux. With Boxee, you can integrate online video, music, and pictures with your own local media and social networking. Today we are going to take a closer look at Boxee and some of its features. Note: We used Windows 7 for this tutorial. Your experience on a Mac or Ubuntu Linux build may vary slightly.

    Hardware Requirements: an x86 (Intel/AMD processor) based system running at 1.0GHz or greater, 512MB system memory (RAM) or more, and a video card capable of OpenGL 1.4 / DirectX 9.0.

    Software Requirements: Mac OS X 10.4+ (Intel based processor), Ubuntu Linux 9.04+ (x86 only), or Windows XP / Vista / 7 (64-bit on Vista or 7).

    Installing Boxee. Before downloading and installing Boxee, you'll need to register for a free account. (See link below.) Once your account is registered and verified, you'll be able to log in and download the application. Installation is pretty straightforward; just take the defaults. Boxee will open in full screen mode and you'll be prompted to log in with your username and password. Before you log in, you may want to take a moment to click on the "Guide" icon and learn a bit about navigating in Boxee. Some basic keyboard navigation is as follows: move right, left, up, and down with the arrow keys; hit "Enter" to make a selection, the forward slash key "\" to toggle between full screen and windowed mode, and "Esc" to go back to the previous screen. For playback, the volume is controlled by the plus and minus (+/-) keys, you can play/pause using the spacebar, and skip using the arrow keys. Boxee will also work with any infrared remote. If you have an iPhone or iPod Touch, you can download software to enable it as a Boxee remote. If you're using a mouse and keyboard, hover over the username and password boxes to enter your login credentials. If using a remote, click your OK button and enter credentials with the on-screen keyboard. Click "Done" when finished. When you are ready to log in, enter your credentials and click "Login." On first login, you'll be prompted to calibrate your screen. If you choose "Skip" you can always calibrate your screen later under Settings > Appearance > Screen.

    When Boxee opens, you'll be greeted by the Home screen. To the left will be your Feeds: recommended content from friends on Boxee and from social networks such as Facebook and Twitter (although, when you first log in, it will mainly be info from the Boxee staff). You'll have "Featured" content in the center, your Queue on the right, and the Menu along the top.

    Pop Up Menu. The Pop Up Menu can be accessed by hitting the "Esc" key, or Back on your remote. Depending on where you are located in Boxee, you may have to hit it a few times to "back out" to the Pop Up Menu. From the Pop Up Menu, you can easily access any of the resources, settings, and favorites.

    Queue. The Queue is your playlist of TV shows, movies, or Internet videos you wish to watch. When you find an offering you'd like to watch, select it and then click "Add to Queue." The selected item will be added to your Queue and can be accessed at any time from the Menu.

    TV Show Library. The TV Show Library can contain files from your local hard drive or streaming content from the Web. Boxee pulls content from a variety of online locations such as Hulu and TV network sites. Click on a show to see which specific episodes are currently available. To search for your favorite shows, click on the yellow arrow to the left, or navigate to the left with your keyboard or remote, and enter your selection into the search box.

    My Apps. By default, the "My Apps" section includes a list of the most popular apps, such as Netflix, Pandora, and YouTube. You can remove apps from "My Apps," or add new apps from the Apps Library. To access all the available apps, click on the left arrow button, or click on the yellow arrow at the left, then select "App Library." Choose an app from the Library and click it to open, and then select "Add to My Apps." Or, you can click Start to play the app if you don't wish to add it to "My Apps."

    Music, Pictures, and Movies. Boxee will scan your PC for movies, pictures, and music. You can choose to scan specific folders by clicking "Scan Media Folders…", or from the Pop Up Menu by selecting Settings > Media and then browsing for your media.

    Conclusion. Boxee is a great way to integrate your local media with online streaming content. It can be run as an application on your home PC, or as a stand-alone media PC. It should also be noted, however, that your access to online content will vary depending on your country. If you are a Windows Media Center user and want to add the additional features of Boxee, check out our article on integrating Boxee with Windows 7 Media Center. Download Boxee (link below).

    Read the article

  • HttpContext.Items and Server.Transfer/Execute

    - by Rick Strahl
    A few days ago my buddy Ben Jones pointed out that he ran into a bug in the ScriptContainer control in the West Wind Web and Ajax Toolkit. The problem was basically that when a Server.Transfer call was applied, the script container (and also various ClientScriptProxy script embedding routines) would potentially fail to load up the specified scripts. It turns out the problem is due to the fact that the various components in the toolkit use request-specific singletons via a Current property. I use a static Current property tied to a Context.Items[] entry to handle this type of operation, which looks something like this:

    /// <summary> /// Current instance of this class which should always be used to /// access this object. There are no public constructors to /// ensure the reference is used as a Singleton to further /// ensure that all scripts are written to the same clientscript /// manager. /// </summary> public static ClientScriptProxy Current { get { if (HttpContext.Current == null) return new ClientScriptProxy(); ClientScriptProxy proxy = null; if (HttpContext.Current.Items.Contains(STR_CONTEXTID)) proxy = HttpContext.Current.Items[STR_CONTEXTID] as ClientScriptProxy; else { proxy = new ClientScriptProxy(); HttpContext.Current.Items[STR_CONTEXTID] = proxy; } return proxy; } }

    The proxy is attached to a Context.Items[] item which makes the instance request specific. This works perfectly fine in most situations EXCEPT when you're dealing with Server.Transfer/Execute requests. Server.Transfer doesn't cause Context.Items to be cleared, so both the current transferred request and the original request's Context.Items collection apply. For the ClientScriptProxy this causes a problem because script references are tracked on a per-request basis in Context.Items to check for script duplication. Once a script is rendered, an ID is written into the Context collection and it is considered 'rendered':

    // No dupes - ref script include only once if (HttpContext.Current.Items.Contains( STR_SCRIPTITEM_IDENTITIFIER + fileId ) ) return; HttpContext.Current.Items.Add(STR_SCRIPTITEM_IDENTITIFIER + fileId, string.Empty);

    where the fileId is the script name or unique identifier. The problem is that on the transferred page the item will already exist in Context, and the script will fail to render because the code thinks the script has already rendered based on the Context item. Bummer.

    The workaround for this is simple once you know what's going on, but in this case it was a bitch to track down because the context items are used in many places throughout this class. The trick is to determine when a request is transferred and then remove the specific keys. The first issue is to determine if the current request is in a Transfer or Execute call:

    if (HttpContext.Current.CurrentHandler != HttpContext.Current.Handler)

    Context.Handler is the original handler and CurrentHandler is the actual currently executing handler that is running when a Transfer/Execute is active. You can also use Context.PreviousHandler to get the last handler and chain through the whole list of handlers applied if Transfer calls are nested (God help us all for the person debugging that).

    For the ClientScriptProxy the full logic to check for a transfer and remove the keys looks like this:

    /// <summary> /// Clears all the request specific context items which are script references /// and the script placement index.
    /// </summary> public void ClearContextItemsOnTransfer() { if (HttpContext.Current != null) { // Check for Server.Transfer/Execute calls - we need to clear out Context.Items if (HttpContext.Current.CurrentHandler != HttpContext.Current.Handler) { List<string> Keys = HttpContext.Current.Items.Keys.Cast<string>().Where(s => s.StartsWith(STR_SCRIPTITEM_IDENTITIFIER) || s == STR_ScriptResourceIndex).ToList(); foreach (string key in Keys) { HttpContext.Current.Items.Remove(key); } } } }

    along with a small update to the Current property getter that sets a global flag to indicate whether the request was transferred:

    if (!proxy.IsTransferred && HttpContext.Current.Handler != HttpContext.Current.CurrentHandler) { proxy.ClearContextItemsOnTransfer(); proxy.IsTransferred = true; } return proxy;

    I know this is pretty ugly, but it works, and it's actually minimal fuss without affecting the behavior of the rest of the class. Ben had a different solution that involved explicitly clearing out the Context items and replacing the collection with a manually maintained list of items, which also works, but it required changes throughout the code. In hindsight, it would have been better to use a single object that encapsulates all the 'persisted' values and store that object in Context instead of all these individual small morsels. Hindsight is always 20/20 though :-}.

    If possible use Page.Items. ClientScriptProxy is a generic component that can be used from anywhere in ASP.NET, so there are various methods on this component that are not Page specific, which is why I used Context.Items rather than the Page.Items collection. Page.Items would be a better choice since it sidesteps the above Server.Transfer nightmares: the Page is reloaded completely, so any new Page gets a new Items collection. No fuss there.

    So for the ScriptContainer control, which has to live on the page, the behavior is a little different. It is attached to Page.Items (since it's a control):

    /// <summary> /// Returns a current instance of this control if an instance /// is already loaded on the page. Otherwise a new instance is /// created, added to the Form and returned. /// /// It's important this function is not called too early in the /// page cycle - it should not be called before Page.OnInit(). /// /// This property is the preferred way to get a reference to a /// ScriptContainer control that is either already on a page /// or needs to be created. Controls in particular should always /// use this property. /// </summary> public static ScriptContainer Current { get { // We need a context for this to work! if (HttpContext.Current == null) return null; Page page = HttpContext.Current.CurrentHandler as Page; if (page == null) throw new InvalidOperationException(Resources.ERROR_ScriptContainer_OnlyWorks_With_PageBasedHandlers); ScriptContainer ctl = null; // Retrieve the current instance ctl = page.Items[STR_CONTEXTID] as ScriptContainer; if (ctl != null) return ctl; ctl = new ScriptContainer(); page.Form.Controls.Add(ctl); return ctl; } }

    The biggest issue with this approach is that you have to explicitly retrieve the page in the static Current property. Notice again the use of CurrentHandler (rather than Handler, which was my original implementation) to ensure you get the latest page, including the one that Server.Transfer fired.

    Server.Transfer and Server.Execute are Evil. All that said, this fix is probably for the 2 people who are crazy enough to rely on Server.Transfer/Execute.
    :-} There are so many weird behavior problems with these commands that I avoid them at all costs. I don't think I have a single application that uses either of these commands…

    Related Resources: Full source of ClientScriptProxy.cs (repository) | Part of the West Wind Web Toolkit | Static Singletons for ASP.NET Controls

    Post © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET.
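    As a postscript, here is a minimal sketch of the behavior the article describes (the page names and the "marker" key are hypothetical, not from the toolkit): Context.Items survives a Server.Transfer while Page.Items does not, and comparing CurrentHandler to Handler reveals the transfer.

        using System;
        using System.Web;
        using System.Web.UI;

        public class SourcePage : Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                // Per-request storage: survives Server.Transfer because the
                // same HttpContext instance keeps executing.
                Context.Items["marker"] = "set on SourcePage";

                // Per-page storage: the transfer creates a brand new Page
                // with a fresh Items collection, so this does not carry over.
                Page.Items["marker"] = "set on SourcePage";

                Server.Transfer("TargetPage.aspx");
            }
        }

        public class TargetPage : Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                object fromContext = Context.Items["marker"]; // still set
                object fromPage = Page.Items["marker"];       // null here

                // Handler stays the original page; CurrentHandler is this one.
                // Inequality is how code can detect a Transfer/Execute.
                bool transferred = Context.CurrentHandler != Context.Handler;
            }
        }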

    Read the article

  • New features of C# 4.0

    This article covers the new features of C# 4.0 and is divided into the following sections: Introduction, Dynamic Lookup, Named and Optional Arguments, Features for COM interop, Variance, Relationship with Visual Basic, and Resources. Other interesting reads: 22 New Features of Visual Studio 2008 for .NET Professionals, 50 New Features of SQL Server 2008, IIS 7.0 New features.

    Introduction. It is now close to a year since Microsoft Visual C# 3.0 shipped as part of Visual Studio 2008. In the VS Managed Languages team we are hard at work on creating the next version of the language (with the unsurprising working title of C# 4.0), and this document is a first public description of the planned language features as we currently see them. Please be advised that all this is in early stages of production and is subject to change. Part of the reason for sharing our plans in public so early is precisely to get the kind of feedback that will cause us to improve the final product before it rolls out.

    Simultaneously with the publication of this whitepaper, a first public CTP (community technology preview) of Visual Studio 2010 is going out as a Virtual PC image for everyone to try. Please use it to play and experiment with the features, and let us know of any thoughts you have. We ask for your understanding and patience working with very early bits, where especially new or newly implemented features do not have the quality or stability of a final product. The aim of the CTP is not to give you a productive work environment but to give you the best possible impression of what we are working on for the next release. The CTP contains a number of walkthroughs, some of which highlight the new language features of C# 4.0. Those are excellent for getting a hands-on guided tour through the details of some common scenarios for the features. You may consider this whitepaper a companion document to these walkthroughs, complementing them with a focus on the overall language features and how they work, as opposed to the specifics of the concrete scenarios.

    C# 4.0. The major theme for C# 4.0 is dynamic programming. Increasingly, objects are "dynamic" in the sense that their structure and behavior is not captured by a static type, or at least not one that the compiler knows about when compiling your program. Some examples include:
    a. objects from dynamic programming languages, such as Python or Ruby
    b. COM objects accessed through IDispatch
    c. ordinary .NET types accessed through reflection
    d. objects with changing structure, such as HTML DOM objects
    While C# remains a statically typed language, we aim to vastly improve the interaction with such objects. A secondary theme is co-evolution with Visual Basic. Going forward we will aim to maintain the individual character of each language, but at the same time important new features should be introduced in both languages at the same time. They should be differentiated more by style and feel than by feature set. The new features in C# 4.0 fall into four groups:
    Dynamic lookup. Dynamic lookup allows you to write method, operator and indexer calls, property and field accesses, and even object invocations which bypass the C# static type checking and instead get resolved at runtime.
    Named and optional parameters. Parameters in C# can now be specified as optional by providing a default value for them in a member declaration. When the member is invoked, optional arguments can be omitted. Furthermore, any argument can be passed by parameter name instead of position.
    COM-specific interop features. Dynamic lookup as well as named and optional parameters both help make programming against COM less painful than today. On top of that, however, we are adding a number of other small features that further improve the interop experience.
    Variance. It used to be that an IEnumerable<string> wasn't an IEnumerable<object>. Now it is: C# embraces type-safe "co- and contravariance" and common BCL types are updated to take advantage of that.

    Dynamic Lookup. Dynamic lookup offers you a unified approach to invoking things dynamically. With dynamic lookup, when you have an object in your hand you do not need to worry about whether it comes from COM, IronPython, the HTML DOM or reflection; you just apply operations to it and leave it to the runtime to figure out what exactly those operations mean for that particular object. This affords you enormous flexibility, and can greatly simplify your code, but it does come with a significant drawback: static typing is not maintained for these operations. A dynamic object is assumed at compile time to support any operation, and only at runtime will you get an error if it wasn't so. Oftentimes this will be no loss, because the object wouldn't have a static type anyway; in other cases it is a tradeoff between brevity and safety. In order to facilitate this tradeoff, it is a design goal of C# to allow you to opt in or opt out of dynamic behavior on every single call.

    The dynamic type. C# 4.0 introduces a new static type called dynamic. When you have an object of type dynamic you can "do things to it" that are resolved only at runtime: dynamic d = GetDynamicObject(…); d.M(7); The C# compiler allows you to call a method with any name and any arguments on d because it is of type dynamic. At runtime the actual object that d refers to will be examined to determine what it means to "call M with an int" on it. The type dynamic can be thought of as a special version of the type object, which signals that the object can be used dynamically. It is easy to opt in or out of dynamic behavior: any object can be implicitly converted to dynamic, "suspending belief" until runtime. Conversely, there is an "assignment conversion" from dynamic to any other type, which allows implicit conversion in assignment-like constructs: dynamic d = 7; // implicit conversion int i = d; // assignment conversion

    Dynamic operations. Not only method calls, but also field and property accesses, indexer and operator calls and even delegate invocations can be dispatched dynamically: dynamic d = GetDynamicObject(…); d.M(7); // calling methods d.f = d.P; // getting and setting fields and properties d["one"] = d["two"]; // getting and setting through indexers int i = d + 3; // calling operators string s = d(5,7); // invoking as a delegate The role of the C# compiler here is simply to package up the necessary information about "what is being done to d", so that the runtime can pick it up and determine what the exact meaning of it is given an actual object d. Think of it as deferring part of the compiler's job to runtime. The result of any dynamic operation is itself of type dynamic.

    Runtime lookup. At runtime a dynamic operation is dispatched according to the nature of its target object d:
    COM objects. If d is a COM object, the operation is dispatched dynamically through COM IDispatch. This allows calling into COM types that don't have a Primary Interop Assembly (PIA), and relying on COM features that don't have a counterpart in C#, such as indexed properties and default properties.
    Dynamic objects. If d implements the interface IDynamicObject, d itself is asked to perform the operation. Thus by implementing IDynamicObject a type can completely redefine the meaning of dynamic operations. This is used intensively by dynamic languages such as IronPython and IronRuby to implement their own dynamic object models. It will also be used by APIs, e.g. by the HTML DOM to allow direct access to the object's properties using property syntax.
    Plain objects. Otherwise d is a standard .NET object, and the operation will be dispatched using reflection on its type and a C# "runtime binder" which implements C#'s lookup and overload resolution semantics at runtime. This is essentially a part of the C# compiler running as a runtime component to "finish the work" on dynamic operations that was deferred by the static compiler.

    Example. Assume the following code: dynamic d1 = new Foo(); dynamic d2 = new Bar(); string s; d1.M(s, d2, 3, null); Because the receiver of the call to M is dynamic, the C# compiler does not try to resolve the meaning of the call. Instead it stashes away information for the runtime about the call. This information (often referred to as the "payload") is essentially equivalent to: "Perform an instance method call of M with the following arguments: 1. a string 2. a dynamic 3. a literal int 3 4. a literal object null". At runtime, assume that the actual type Foo of d1 is not a COM type and does not implement IDynamicObject. In this case the C# runtime binder picks up the job of finishing overload resolution based on runtime type information, proceeding as follows:
    1. Reflection is used to obtain the actual runtime types of the two objects, d1 and d2, that did not have a static type (or rather had the static type dynamic). The result is Foo for d1 and Bar for d2.
    2. Method lookup and overload resolution is performed on the type Foo with the call M(string,Bar,3,null) using ordinary C# semantics.
    3. If the method is found it is invoked; otherwise a runtime exception is thrown.

    Overload resolution with dynamic arguments. Even if the receiver of a method call is of a static type, overload resolution can still happen at runtime. This can happen if one or more of the arguments have the type dynamic: Foo foo = new Foo(); dynamic d = new Bar(); var result = foo.M(d); The C# runtime binder will choose between the statically known overloads of M on Foo, based on the runtime type of d, namely Bar. The result is again of type dynamic.

    The Dynamic Language Runtime. An important component in the underlying implementation of dynamic lookup is the Dynamic Language Runtime (DLR), which is a new API in .NET 4.0. The DLR provides most of the infrastructure behind not only C# dynamic lookup but also the implementation of several dynamic programming languages on .NET, such as IronPython and IronRuby. Through this common infrastructure a high degree of interoperability is ensured, but just as importantly the DLR provides excellent caching mechanisms which serve to greatly enhance the efficiency of runtime dispatch. To the user of dynamic lookup in C#, the DLR is invisible except for the improved efficiency. However, if you want to implement your own dynamically dispatched objects, the IDynamicObject interface allows you to interoperate with the DLR and plug in your own behavior. This is a rather advanced task, which requires you to understand a good deal more about the inner workings of the DLR.
    For API writers, however, it can definitely be worth the trouble in order to vastly improve the usability of, e.g., a library representing an inherently dynamic domain.

    Open issues. There are a few limitations and things that might work differently than you would expect.
    · The DLR allows objects to be created from objects that represent classes. However, the current implementation of C# doesn't have syntax to support this.
    · Dynamic lookup will not be able to find extension methods. Whether extension methods apply or not depends on the static context of the call (i.e. which using clauses occur), and this context information is not currently kept as part of the payload.
    · Anonymous functions (i.e. lambda expressions) cannot appear as arguments to a dynamic method call. The compiler cannot bind (i.e. "understand") an anonymous function without knowing what type it is converted to.
    One consequence of these limitations is that you cannot easily use LINQ queries over dynamic objects: dynamic collection = …; var result = collection.Select(e => e + 5); If the Select method is an extension method, dynamic lookup will not find it. Even if it is an instance method, the above does not compile, because a lambda expression cannot be passed as an argument to a dynamic operation. There are no plans to address these limitations in C# 4.0.

    Named and Optional Arguments. Named and optional parameters are really two distinct features, but are often useful together. Optional parameters allow you to omit arguments to member invocations, whereas named arguments are a way to provide an argument using the name of the corresponding parameter instead of relying on its position in the parameter list. Some APIs, most notably COM interfaces such as the Office automation APIs, are written specifically with named and optional parameters in mind. Up until now it has been very painful to call into these APIs from C#, with sometimes as many as thirty arguments having to be explicitly passed, most of which have reasonable default values and could be omitted. Even in APIs for .NET, however, you sometimes find yourself compelled to write many overloads of a method with different combinations of parameters in order to provide maximum usability to the callers. Optional parameters are a useful alternative for these situations.

    Optional parameters. A parameter is declared optional simply by providing a default value for it: public void M(int x, int y = 5, int z = 7); Here y and z are optional parameters and can be omitted in calls: M(1, 2, 3); // ordinary call of M M(1, 2); // omitting z - equivalent to M(1, 2, 7) M(1); // omitting both y and z - equivalent to M(1, 5, 7)

    Named and optional arguments. C# 4.0 does not permit you to omit arguments between commas as in M(1,,3). This could lead to highly unreadable comma-counting code. Instead any argument can be passed by name. Thus if you want to omit only y from a call of M you can write: M(1, z: 3); // passing z by name or M(x: 1, z: 3); // passing both x and z by name or even M(z: 3, x: 1); // reversing the order of arguments All forms are equivalent, except that arguments are always evaluated in the order they appear, so in the last example the 3 is evaluated before the 1. Optional and named arguments can be used not only with methods but also with indexers and constructors.
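    To tie the pieces together, here is a small self-contained sketch (a hypothetical console program, not from the whitepaper) that exercises the calls discussed above:

        using System;

        class Demo
        {
            // y and z are optional because they declare default values.
            static void M(int x, int y = 5, int z = 7)
            {
                Console.WriteLine("x={0}, y={1}, z={2}", x, y, z);
            }

            static void Main()
            {
                M(1, 2, 3);     // ordinary positional call: x=1, y=2, z=3
                M(1, 2);        // omitting z:               x=1, y=2, z=7
                M(1);           // omitting y and z:         x=1, y=5, z=7
                M(1, z: 3);     // skipping y by naming z:   x=1, y=5, z=3
                M(z: 3, x: 1);  // named, in any order:      x=1, y=5, z=3
            }
        }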
    Overload resolution. Named and optional arguments affect overload resolution, but the changes are relatively simple:
    A signature is applicable if all its parameters are either optional or have exactly one corresponding argument (by name or position) in the call which is convertible to the parameter type.
    Betterness rules on conversions are only applied for arguments that are explicitly given; omitted optional arguments are ignored for betterness purposes.
    If two signatures are equally good, one that does not omit optional parameters is preferred.
    M(string s, int i = 1); M(object o); M(int i, string s = "Hello"); M(int i); M(5); Given these overloads, we can see the working of the rules above. M(string,int) is not applicable because 5 doesn't convert to string. M(int,string) is applicable because its second parameter is optional, and so, obviously, are M(object) and M(int). M(int,string) and M(int) are both better than M(object) because the conversion from 5 to int is better than the conversion from 5 to object. Finally M(int) is better than M(int,string) because no optional arguments are omitted. Thus the method that gets called is M(int).

    Features for COM interop. Dynamic lookup as well as named and optional parameters greatly improve the experience of interoperating with COM APIs such as the Office Automation APIs. In order to remove even more of the speed bumps, a couple of small COM-specific features are also added to C# 4.0.

    Dynamic import. Many COM methods accept and return variant types, which are represented in the PIAs as object. In the vast majority of cases, a programmer calling these methods already knows the static type of a returned object from context, but explicitly has to perform a cast on the returned value to make use of that knowledge. These casts are so common that they constitute a major nuisance. In order to facilitate a smoother experience, you can now choose to import these COM APIs in such a way that variants are instead represented using the type dynamic. In other words, from your point of view, COM signatures now have occurrences of dynamic instead of object in them. This means that you can easily access members directly off a returned object, or you can assign it to a strongly typed local variable without having to cast. To illustrate, you can now say excel.Cells[1, 1].Value = "Hello"; instead of ((Excel.Range)excel.Cells[1, 1]).Value2 = "Hello"; and Excel.Range range = excel.Cells[1, 1]; instead of Excel.Range range = (Excel.Range)excel.Cells[1, 1];

    Compiling without PIAs. Primary Interop Assemblies are large .NET assemblies generated from COM interfaces to facilitate strongly typed interoperability. They provide great support at design time, where your experience of the interop is as good as if the types were really defined in .NET. However, at runtime these large assemblies can easily bloat your program, and also cause versioning issues because they are distributed independently of your application. The no-PIA feature allows you to continue to use PIAs at design time without having them around at runtime. Instead, the C# compiler will bake the small part of the PIA that a program actually uses directly into its assembly. At runtime the PIA does not have to be loaded.

    Omitting ref. Because of a different programming model, many COM APIs contain a lot of reference parameters. Contrary to refs in C#, these are typically not meant to mutate a passed-in argument for the subsequent benefit of the caller, but are simply another way of passing value parameters.
    It therefore seems unreasonable that a C# programmer should have to create temporary variables for all such ref parameters and pass these by reference. Instead, specifically for COM methods, the C# compiler will allow you to pass arguments by value to such a method, and will automatically generate temporary variables to hold the passed-in values, subsequently discarding these when the call returns. In this way the caller sees value semantics, and will not experience any side effects, but the called method still gets a reference.

    Open issues. A few COM interface features still are not surfaced in C#. Most notably these include indexed properties and default properties. As mentioned above these will be respected if you access COM dynamically, but statically typed C# code will still not recognize them. There are currently no plans to address these remaining speed bumps in C# 4.0.

    Variance. An aspect of generics that often comes across as surprising is that the following is illegal: IList<string> strings = new List<string>(); IList<object> objects = strings; The second assignment is disallowed because strings does not have the same element type as objects. There is a perfectly good reason for this. If it were allowed you could write: objects[0] = 5; string s = strings[0]; allowing an int to be inserted into a list of strings and subsequently extracted as a string. This would be a breach of type safety. However, there are certain interfaces where the above cannot occur, notably where there is no way to insert an object into the collection. Such an interface is IEnumerable<T>. If instead you say: IEnumerable<object> objects = strings; there is no way we can put the wrong kind of thing into strings through objects, because objects doesn't have a method that takes an element in. Variance is about allowing assignments such as this in cases where it is safe. The result is that a lot of situations that were previously surprising now just work.

    Covariance. In .NET 4.0 the IEnumerable<T> interface will be declared in the following way: public interface IEnumerable<out T> : IEnumerable { IEnumerator<T> GetEnumerator(); } public interface IEnumerator<out T> : IEnumerator { bool MoveNext(); T Current { get; } } The "out" in these declarations signifies that the T can only occur in output position in the interface; the compiler will complain otherwise. In return for this restriction, the interface becomes "covariant" in T, which means that an IEnumerable<A> is considered an IEnumerable<B> if A has a reference conversion to B. As a result, any sequence of strings is also, e.g., a sequence of objects. This is useful in many LINQ methods. Using the declarations above: var result = strings.Union(objects); // succeeds with an IEnumerable<object> This would previously have been disallowed, and you would have had to do some cumbersome wrapping to get the two sequences to have the same element type.

    Contravariance. Type parameters can also have an "in" modifier, restricting them to occur only in input positions. An example is IComparer<T>: public interface IComparer<in T> { public int Compare(T left, T right); } The somewhat baffling result is that an IComparer<object> can in fact be considered an IComparer<string>! It makes sense when you think about it: if a comparer can compare any two objects, it can certainly also compare two strings. This property is referred to as contravariance.
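    The following sketch makes the rules concrete. Since, as noted below, the CTP does not yet ship the variant BCL interfaces, it declares its own covariant and contravariant interfaces (all names here are illustrative, not from the whitepaper):

        using System;

        // 'out' restricts T to output positions, so the interface is covariant.
        interface IProducer<out T>
        {
            T Produce();
        }

        // 'in' restricts T to input positions, so the interface is contravariant.
        interface IConsumer<in T>
        {
            void Consume(T item);
        }

        class StringProducer : IProducer<string>
        {
            public string Produce() { return "hello"; }
        }

        class ObjectConsumer : IConsumer<object>
        {
            public void Consume(object item) { Console.WriteLine(item); }
        }

        class Program
        {
            static void Main()
            {
                // Covariance: string has a reference conversion to object,
                // so an IProducer<string> may be used as an IProducer<object>.
                IProducer<object> producer = new StringProducer();
                Console.WriteLine(producer.Produce()); // "hello", typed as object

                // Contravariance: a consumer of arbitrary objects can certainly
                // consume strings, so an IConsumer<object> works as an IConsumer<string>.
                IConsumer<string> consumer = new ObjectConsumer();
                consumer.Consume("world");
            }
        }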
    A generic type can have both in and out modifiers on its type parameters, as is the case with the Func<…> delegate types: public delegate TResult Func<in TArg, out TResult>(TArg arg); Obviously the argument only ever comes in, and the result only ever comes out. Therefore a Func<object,string> can in fact be used as a Func<string,object>.

    Limitations. Variant type parameters can only be declared on interfaces and delegate types, due to a restriction in the CLR. Variance only applies when there is a reference conversion between the type arguments. For instance, an IEnumerable<int> is not an IEnumerable<object> because the conversion from int to object is a boxing conversion, not a reference conversion. Also please note that the CTP does not contain the new versions of the .NET types mentioned above. In order to experiment with variance you have to declare your own variant interfaces and delegate types.

    COM Example. Here is a larger Office automation example that shows many of the new C# features in action. using System; using System.Diagnostics; using System.Linq; using Excel = Microsoft.Office.Interop.Excel; using Word = Microsoft.Office.Interop.Word; class Program { static void Main(string[] args) { var excel = new Excel.Application(); excel.Visible = true; excel.Workbooks.Add(); // optional arguments omitted excel.Cells[1, 1].Value = "Process Name"; // no casts; Value dynamically excel.Cells[1, 2].Value = "Memory Usage"; // accessed var processes = Process.GetProcesses() .OrderByDescending(p => p.WorkingSet) .Take(10); int i = 2; foreach (var p in processes) { excel.Cells[i, 1].Value = p.ProcessName; // no casts excel.Cells[i, 2].Value = p.WorkingSet; // no casts i++; } Excel.Range range = excel.Cells[1, 1]; // no casts Excel.Chart chart = excel.ActiveWorkbook.Charts. Add(After: excel.ActiveSheet); // named and optional arguments chart.ChartWizard( Source: range.CurrentRegion, Title: "Memory Usage in " + Environment.MachineName); // named + optional chart.ChartStyle = 45; chart.CopyPicture(Excel.XlPictureAppearance.xlScreen, Excel.XlCopyPictureFormat.xlBitmap, Excel.XlPictureAppearance.xlScreen); var word = new Word.Application(); word.Visible = true; word.Documents.Add(); // optional arguments word.Selection.Paste(); } } The code is much more terse and readable than the C# 3.0 counterpart. Note especially how the Value property is accessed dynamically. This is actually an indexed property, i.e. a property that takes an argument; something which C# does not understand. However the argument is optional. Since the access is dynamic, it goes through the runtime COM binder which knows to substitute the default value and call the indexed property. Thus, dynamic COM allows you to avoid accesses to the puzzling Value2 property of Excel ranges.

    Relationship with Visual Basic. A number of the features introduced to C# 4.0 already exist or will be introduced in some form or other in Visual Basic:
    · Late binding in VB is similar in many ways to dynamic lookup in C#, and can be expected to make more use of the DLR in the future, leading to further parity with C#.
    · Named and optional arguments have been part of Visual Basic for a long time, and the C# version of the feature is explicitly engineered with maximal VB interoperability in mind.
    · NoPIA and variance are both being introduced to VB and C# at the same time. VB in turn is adding a number of features that have hitherto been a mainstay of C#.
    As a result future versions of C# and VB will have much better feature parity, for the benefit of everyone.

    Resources. All available resources concerning C# 4.0 can be accessed through the C# Dev Center. Specifically, this white paper and other resources can be found at the Code Gallery site. Enjoy!

    Read the article

  • How to Reduce the Size of Your WinSXS Folder on Windows 7 or 8

    - by Chris Hoffman
    The WinSXS folder at C:\Windows\WinSXS is massive and continues to grow the longer you have Windows installed. This folder builds up unnecessary files over time, such as old versions of system components. It also contains files for uninstalled, disabled Windows components. Even if you don't have a Windows component installed, it will be present in your WinSXS folder, taking up space.

    Why the WinSXS Folder Gets So Big. The WinSXS folder contains all Windows system components. In fact, component files elsewhere in Windows are just links to files contained in the WinSXS folder. The WinSXS folder contains every operating system file. When Windows installs updates, it drops the new Windows component in the WinSXS folder and keeps the old component there as well. This means that every Windows update you install increases the size of your WinSXS folder. It also allows you to uninstall operating system updates from the Control Panel, which can be useful in the case of a buggy update, but it's a feature that's rarely used. Windows 7 dealt with this by including a feature that allows Windows to clean up old Windows update files after you install a new Windows service pack. The idea was that the system could be cleaned up regularly along with service packs. However, Windows 7 only saw one service pack, Service Pack 1, released in 2010, and Microsoft has no intention of launching another. This means that, for more than three years, Windows update uninstallation files have been building up on Windows 7 systems and couldn't be easily removed.

    Clean Up Update Files. To fix this problem, Microsoft recently backported a feature from Windows 8 to Windows 7. They did this without much fanfare: it was rolled out in a typical minor operating system update, the kind that doesn't generally add new features. To clean up such update files, open the Disk Cleanup wizard (tap the Windows key, type "disk cleanup" into the Start menu, and press Enter). Click the Clean up System Files button, enable the Windows Update Cleanup option, and click OK. If you've been using your Windows 7 system for a few years, you'll likely be able to free several gigabytes of space. The next time you reboot after doing this, Windows will take a few minutes to clean up system files before you can log in and use your desktop. If you don't see this feature in the Disk Cleanup window, you're likely behind on your updates; install the latest updates from Windows Update.

    Windows 8 and 8.1 include built-in features that do this automatically. In fact, there's a StartComponentCleanup scheduled task included with Windows that will automatically run in the background, cleaning up components 30 days after you've installed them. This 30-day period gives you time to uninstall an update if it causes problems. If you'd like to manually clean up updates, you can also use the Windows Update Cleanup option in the Disk Cleanup window, just as you can on Windows 7. (To open it, tap the Windows key, type "disk cleanup" to perform a search, and click the "Free up disk space by removing unnecessary files" shortcut that appears.) Windows 8.1 gives you more options, allowing you to forcibly remove all previous versions of uninstalled components, even ones that haven't been around for more than 30 days. These commands must be run in an elevated Command Prompt; in other words, start the Command Prompt window as Administrator.
    For example, the following command will uninstall all previous versions of components without the scheduled task's 30-day grace period:
    DISM.exe /online /Cleanup-Image /StartComponentCleanup
    The following command will remove files needed for uninstallation of service packs. You won't be able to uninstall any currently installed service packs after running this command:
    DISM.exe /online /Cleanup-Image /SPSuperseded
    The following command will remove all old versions of every component. You won't be able to uninstall any currently installed service packs or updates after this completes:
    DISM.exe /online /Cleanup-Image /StartComponentCleanup /ResetBase

    Remove Features on Demand. Modern versions of Windows allow you to enable or disable Windows features on demand. You'll find a list of these features in the Windows Features window you can access from the Control Panel. Even features you don't have installed (that is, the features you see unchecked in this window) are stored on your hard drive in your WinSXS folder. If you choose to install them, they'll be made available from your WinSXS folder. This means you won't have to download anything or provide Windows installation media to install these features. However, these features take up space. While this shouldn't matter on typical computers, users with extremely low amounts of storage or Windows server administrators who want to slim their Windows installs down to the smallest possible set of system files may want to get these files off their hard drives. For this reason, Windows 8 added a new option that allows you to remove these uninstalled components from the WinSXS folder entirely, freeing up space. If you choose to install the removed components later, Windows will prompt you to download the component files from Microsoft.

    To do this, open a Command Prompt window as Administrator. Use the following command to see the features available to you:
    DISM.exe /Online /English /Get-Features /Format:Table
    You'll see a table of feature names and their states. To remove a feature from your system, use the following command, replacing NAME with the name of the feature you want to remove (you can get the feature name from the table above):
    DISM.exe /Online /Disable-Feature /featurename:NAME /Remove
    If you run the /Get-Features command again, you'll now see that the feature has a status of "Disabled with Payload Removed" instead of just "Disabled." That's how you know it's not taking up space on your computer's hard drive.

    If you're trying to slim down a Windows system as much as possible, be sure to check out our lists of ways to free up disk space on Windows and reduce the space used by system files.
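    As a closing aside, if you need to run this cleanup on many machines, the same command can be scripted. Here is a minimal C# sketch (the class name is arbitrary, and this is only one way to do it) that shells out to the StartComponentCleanup command shown above and requests elevation via UAC:

        using System.Diagnostics;

        class WinSxsCleanup
        {
            static void Main()
            {
                var psi = new ProcessStartInfo
                {
                    FileName = "DISM.exe",
                    // Exactly the command line discussed above.
                    Arguments = "/online /Cleanup-Image /StartComponentCleanup",
                    UseShellExecute = true,
                    Verb = "runas" // prompt for elevation; DISM requires admin rights
                };

                using (var process = Process.Start(psi))
                {
                    process.WaitForExit();
                }
            }
        }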

    Read the article

< Previous Page | 565 566 567 568 569 570 571 572 573 574 575 576  | Next Page >