Search Results

Search found 12907 results on 517 pages for 'todd little'.


  • Add a small RAID card? Will it help overall stability and performance of my nine hard drives?

    - by Ray
    Hi. Will I get any genuine extra performance and RAID stability if I insert a basic RAID card into a PCI-E x1 slot? I am considering the Adaptec 1220SA: 2-port SATA, PCI-Express (x1), RAID 0/1. OK, it only supports two SATA drives. The purpose is to help support the eight internal hard drives (1TB each), a DVD drive and an external e-SATA connected 2TB hard drive - by dealing with two of the internal hard drives.
    My current configuration of eight internal 1TB Barracuda (7200.12) SATA hard drives, one external 2TB SATA Western Digital Green Drive (e-SATA) and one DVD drive can already be supported by the Intel P55 and JMicron controllers on the ASUS motherboard: the Intel P55 controls six HDDs (configured as three x RAID 1), and the JMicron controls two HDDs as one RAID 1, as well as the DVD drive and the external SATA drive via the motherboard's e-SATA port.
    Bigger picture details: I have an ASUS motherboard designed for the LGA1156 type processor and it includes the Intel P55 Express Chipset and JMicron. I am using the Intel Core i7-870 processor, and have 8GB DDR3 (1333) memory (four x 2GB Corsair DIMMs). Enough overall power. The power supply, a Corsair AX850, is more than sufficient for the system; it will never need the full 850 watts (future: second graphics card).
    The RAID card would provide hardware RAID 1 for two of the eight internal drives. It would either reduce the load on the Intel P55 firmware RAID support, or replace the JMicron controller's RAID 1 set. I am busy installing the above configuration using Windows 7 Ultimate 64-bit as the OS. The RAID card is a last-minute addition to the plan. Is it worth spending the extra R700 - R900 on the Adaptec 1220SA, or an equivalent RAID card? I cannot afford to spend yet another R2000 - R3000 on a RAID card that would support many SATA2 hard drives with a better RAID level, for example RAID 5.
    My issue and assumptions: I am trusting that the Intel P55 chipset can properly handle six drives, configured as three x RAID 1. I am assuming that the JMicron can handle, using its red SATA ports, one RAID 1 (two HDDs). The DVD drive connects to the JMicron optical SATA port 1 (white port 1); white port 2 is not used. The e-SATA connection is from the JMicron straight through the motherboard to an on-board (rear panel) e-SATA port. Am I being a little hopeful in only using the on-board Intel P55 and the JMicron? Is it a waste of money to install a RAID card that handles two SATA2 drives, or is it wise to take a little pressure off the Intel P55? Obviously I am interested in data security, hence RAID 1, not RAID 0. RAID 5 would be nice. The CPU, an Intel Core i7-870, will provide the clout.
    Context for nine drives: I am using virtualisation with Windows 7 Ultimate, with bootable VMs. The operating system gets a mirror, loaded apps get a mirror, the current design data is kept in another mirror, and another mirror is for backups and/or VM territory. The external 2TB drive (via e-SATA) is the next layer of data security, and finally I use off-site data security. Thanks.

    Read the article

  • Windows 7 sometimes boots in VGA mode

    - by TuxRug
    I have an Asus G50VT-x5 laptop with nVidia GeForce9800M-GS graphics. Most of the time Windows boots normally, but about 20% of the time (rough estimate) it will boot with the fallback VGA driver, maxing out at 800x600 with no Aero. I've checked the system logs and there is nothing indicating an error loading the nVidia driver. The logs even state that the Nvidia Display Driver service started successfully, even though it has booted in safe graphics mode. This has been happening for a while, but it's happening a little more often now than it was before. Since the first time my system exhibited this behavior, I have updated my graphics driver a handful of times.
    I used System Information for Windows to check for problems there, but the only thing that stood out was the following: Core Temperature 4486449 °C (8075639 °F), Shaders Temperature 1171513530 °C (2108724330 °F). I know this reading is incorrect, because my laptop is nowhere near the surface of the sun and my desk has not burst into flames. When it's operating normally, I get a sane reading like [Core Temperature 58 °C (136 °F)] with no Shaders Temperature listed.
    All I have to do to resolve the issue is reboot. I have seen no stability issues with the graphics or anything else. A long time ago, I had an issue with this computer where my framerate would suddenly drop during a 3D game from 40fps to <1fps, but after looking at the temperature readout immediately after quitting a game, I removed the bottom panel and blew the dust out of the vent and heatsink. Since then I have had no drops in framerate under any situation.
    I have uploaded a zip containing the SIW reports for when the problem is occurring and when the computer is operating normally. I don't have a paid account so it can only be downloaded 10 times, so please only download the reports if you think you can use them. If you try to download the reports and they are no longer available, please comment and I will re-upload them. If you want to look at the files, they are on Rapidshare.
    EDIT: It happened again, and I looked a little deeper into the System logs. When this happens, there are a lot of errors about other device drivers being unable to start. All of these errors are for PnP drivers. Also, my USB keyboard and mouse take a few moments before they actually start working, although this sometimes happens on a normal boot as well. I am quite sure this is related, so I am adding the pnp tag. Also, CHKDSK will not run on boot. Even if a check is scheduled or a volume is manually set as dirty, CHKDSK will be skipped entirely, not even leaving an entry in the System logs. I tried running CHKNTFS /D, which did not work. I then manually changed my HKLM\System\CurrentControlSet\Control\Session Manager BootExecute value to the default listed on Microsoft's website. That did not work either. I ended up booting to repair mode and running CHKDSK there, which found a number of minor inconsistencies on my system drive, but none on my data drive. I have no idea if this is related.
    Some more information for those who don't download my SIW report file: antivirus and firewall are ESET Smart Security, and I have three different virtualization programs installed: VMware Player, Windows Virtual PC, and VirtualBox. The network adapters for these show up in the log of failed device starts.
    EDIT 2: I tried running sfc /scannow, which reported that it found corrupted files that could not be fixed. The CBS log is extremely cryptic.
I tried booting to my install disk, launching repair mode, and doing an offline sfc from there, which produced the same result.
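    For reference, the boot-time check plumbing mentioned above can be exercised from an elevated command prompt; this is a generic sketch using standard Windows tools (fsutil and chkntfs), not something specific to this machine:
        rem report whether the volume is currently flagged dirty
        fsutil dirty query C:
        rem flag it dirty so autochk should run at the next boot
        fsutil dirty set C:
        rem restore the default boot-time checking behaviour
        chkntfs /d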

    Read the article

  • Centos 7. Freeradius fails to start on boot

    - by Alex
    I was messing around with FreeRADIUS and MySQL (MariaDB) and it seems FreeRADIUS service can't start properly on startup. But it starts fine using root user or in debug mode (radiusd -X) and works just fine! Debug mode shows no errors. systemctl command shows that radiusd.service has failed to start. /var/log/messages output: Aug 21 15:52:29 nexus-test systemd: Starting The Apache HTTP Server... Aug 21 15:52:29 nexus-test systemd: Starting MariaDB database server... Aug 21 15:52:29 nexus-test systemd: Starting FreeRADIUS high performance RADIUS server.... Aug 21 15:52:29 nexus-test systemd: Started OpenSSH server daemon. Aug 21 15:52:29 nexus-test mysqld_safe: 140821 15:52:29 mysqld_safe Logging to '/var/log/mariadb/mariadb.log'. Aug 21 15:52:29 nexus-test mysqld_safe: 140821 15:52:29 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql Aug 21 15:52:30 nexus-test systemd: Started Postfix Mail Transport Agent. Aug 21 15:52:30 nexus-test avahi-daemon[604]: Registering new address record for fe80::250:56ff:fe85:e4af on eth0.*. Aug 21 15:52:30 nexus-test systemd: radiusd.service: control process exited, code=exited status=1 Aug 21 15:52:30 nexus-test systemd: Failed to start FreeRADIUS high performance RADIUS server.. Aug 21 15:52:30 nexus-test systemd: Unit radiusd.service entered failed state. Aug 21 15:52:31 nexus-test kdumpctl: kexec: loaded kdump kernel Aug 21 15:52:31 nexus-test kdumpctl: Starting kdump: [OK] Aug 21 15:52:31 nexus-test systemd: Started Crash recovery kernel arming. Aug 21 15:52:31 nexus-test systemd: Started The Apache HTTP Server. Aug 21 15:52:31 nexus-test systemd: Started MariaDB database server. /var/log/radius/radius.log output: Thu Aug 21 15:24:16 2014 : Info: rlm_sql (sql): Driver rlm_sql_mysql (module rlm_sql_mysql) loaded and linked Thu Aug 21 15:24:16 2014 : Info: rlm_sql (sql): Attempting to connect to database "radius" Thu Aug 21 15:24:16 2014 : Info: rlm_sql (sql): Opening additional connection (0) Thu Aug 21 15:24:16 2014 : Error: rlm_sql_mysql: Couldn't connect socket to MySQL server radius@localhost:radius Thu Aug 21 15:24:16 2014 : Error: rlm_sql_mysql: Mysql error 'Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)' Thu Aug 21 15:24:16 2014 : Error: rlm_sql (sql): Opening connection failed (0) Thu Aug 21 15:24:16 2014 : Error: /etc/raddb/mods-enabled/sql[47]: Instantiation failed for module "sql" After seeing this I tried to replicate the problem, killed mariadb.service and started to run debug mode again. And it spits out the same problem as in the radius.log. I tried disabling iptables and firewalld and rebooting, but no luck: systemctl disable iptables systemctl disable firewalld So maybe the problem is in the process startup order or delay of some kind. Maybe FreeRADIUS's SQL module can't connect to not yet started MariaDB? If it, how can I fix this? In earlier versions of RHEL/CENTOS I know you easily see service start order in like rc.d or stuff, now IDK. I am new to this fancy "systemd", "systemctl", "firewalld" stuff Centos 7 introduced so sorry I'm a little bit confused. Also this new FreeRADIUS 3 structure... PS. MariaDB is enabled on startup, credentials in FR DB configuration are correct A little update: cat /etc/systemd/system/multi-user.target.wants/radiusd.service output: [Unit] Description=FreeRADIUS high performance RADIUS server. 
    After=syslog.target network.target
    [Service]
    Type=forking
    PIDFile=/var/run/radiusd/radiusd.pid
    ExecStartPre=-/bin/chown -R radiusd.radiusd /var/run/radiusd
    ExecStartPre=/usr/sbin/radiusd -C
    ExecStart=/usr/sbin/radiusd -d /etc/raddb
    ExecReload=/usr/sbin/radiusd -C
    ExecReload=/bin/kill -HUP $MAINPID
    [Install]
    WantedBy=multi-user.target
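    If the cause really is start-up ordering (radiusd coming up before MariaDB), a systemd drop-in along these lines would express the dependency. The unit name mariadb.service is taken from the log output above, but verify it with systemctl list-unit-files; this is a sketch, not a tested fix:
        # /etc/systemd/system/radiusd.service.d/wait-for-mariadb.conf
        [Unit]
        # start radiusd only after MariaDB has been started
        After=mariadb.service
        Wants=mariadb.service
    Follow it with systemctl daemon-reload and a reboot to confirm the ordering takes effect.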

    Read the article

  • Why does my computer run slowly and freeze sometimes?

    - by Brae
    EDIT: I disabled sound in the BIOS, rebooted, and it hung. I removed the (previously) faulty HDD, rebooted, and it hung. I have managed to get my Realtek audio manager open again after its mysterious disappearance, and subsequently my microphone is now working again; to fix it I had to uninstall the audio drivers, disable audio in the BIOS, install the audio drivers, then enable audio in the BIOS. Access via the network (with the faulty HDD in) seems not to be triggering hangs at the moment. I think with the sound problem fixed it might play a little nicer, but I think it's still going to hang. If it does, then I'm fairly sure it's been narrowed down to the mobo.
    EDIT: Pretty convinced my motherboard is the culprit, because nothing else seems to have any obvious problems (bar the hard drive, and the PC still hangs without it being plugged in). So thank you all for helping; once I get more rep I'll upvote a few of the answers.
    My PC is doing some weird things... Sometimes when I open up a program, let's say Adobe Photoshop, it will load everything normally, nice and quick, no problems, and I can use it fine. Other times it's a little odd, and it loads the program as if it's only using half of the CPU. It's pretty obvious when it does it: normally the loading screen skims past, but when it does this weird load you see it slowly tick through each thing, and the program itself becomes incredibly slow. Even Google Chrome does it sometimes. Yet when I exit and reopen the program without doing anything else, it typically opens fine without lagging. I think this problem is probably a symptom of something bigger, because of other problems I'm having.
    Random hangs, no matter what I have open or what I'm doing: my PC will sort of freeze up for a few seconds. If music is playing it will either loop the last second or two, or it will buzz. This only lasts for a few seconds, then it returns to normal without having to restart. During this time all programs lock up and freeze, and the mouse and keyboard are useless.
    I am also having a weird issue with my audio jacks: without touching my PC at all, sometimes I will see a popup saying that I have unplugged something or plugged something in, neither of which has actually happened. Pretty sure this is caused by the motherboard. I recently had a 'Pink Screen of Death' (yes, pink) which pointed to my audio drivers.
    The lockups seem to happen with some consistency when someone is accessing my shared files via my home LAN, which leads me to believe either one of my hard drives is acting up or, more likely, the controller. One of my drives had a bit of a crash before; I used SpinRite and managed to recover my stuff, and now the drive appears to be working okay. This is possibly adding to the problems.
    My best guess is that something has gone wrong with my motherboard, possibly a power issue or a chip has died; I really don't know. So what I would like to know is: have I missed some obvious diagnostic to help figure out what it is, or should I just bite the bullet, assume it's my motherboard acting up, and buy a new system?
    dxdiag [64-bit] - http://pastebin.com/G30kb2TL
    PSOD (minidump) - http://pastebin.com/aZsv0H56
    HWiNFO64 (system info / specs) - http://pastebin.com/X6h3K8g6

    Read the article

  • Alias wordpress folder from within another website

    - by Bretticus
    I have a little dilemma. I wrote a custom PHP MVC framework and built a CMS on top of it. I decided to give nginx+fpm a whirl too. Which is the root of my dilemma. I was asked to incorporate a wordpress blog into my website (yah.) It has much content and it's not feasible in the short amount of time I have to bring all the content into my CMS. Because of using Apache for years, I'm, admittedly, a little lost using nginx. My website has the file path: /opt/directories/mysite/public/ The wordpress files are located at: /opt/directories/mysite/news/ I know I just need to setup location(s) that targets /news[/*] and then forces all matching URI's to the index.php within. Can someone point me in the right direction perhaps? My configuration is below: server { listen 80; server_name staging.mysite.com index index.php; root /opt/directories/mysite/public; access_log /var/log/nginx/mysite/access.log; error_log /var/log/nginx/mysite/error.log; add_header X-NodeName directory01; location = /favicon.ico { log_not_found off; access_log off; } location = /robots.txt { allow all; log_not_found off; access_log off; } location / { try_files $uri $uri/ /index.php?route=$uri&$args; } location ~ /news { try_files $uri $uri/ @news; } location @news { fastcgi_pass unix:/tmp/php-fpm.sock; fastcgi_split_path_info ^(/news)(/.*)$; fastcgi_param SCRIPT_FILENAME /opt/directories/mysite/news/index.php; fastcgi_param PATH_INFO $fastcgi_path_info; } include fastcgi_params; include php.conf; location ~* ^.+.(jpg|jpeg|gif|css|png|js|ico|xml)$ { access_log off; expires 30d; } ## Disable viewing .htaccess & .htpassword location ~ /\.ht { deny all; } } My php.conf file: location ~ \.php { fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT $document_root; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param GATEWAY_INTERFACE CGI/1.1; fastcgi_param SERVER_SOFTWARE nginx; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; fastcgi_pass unix:/tmp/php-fpm.sock; # If you must use PATH_INFO and PATH_TRANSLATED then add # the following within your location block above # (make sure $ does not exist after \.php or /index.php/some/path/ will not match): #fastcgi_split_path_info ^(.+\.php)(/.+)$; #fastcgi_param PATH_INFO $fastcgi_path_info; #fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info; } fastcgi_params file: fastcgi_connect_timeout 60; fastcgi_send_timeout 180; fastcgi_read_timeout 180; fastcgi_buffer_size 128k; fastcgi_buffers 4 256k; fastcgi_busy_buffers_size 256k; fastcgi_temp_file_write_size 256k; fastcgi_intercept_errors on; Thanks, in large part, to @Kromey, I have adjusted my location /news/ but I am still not getting the desired result. I was able to learn to tack a ~ my /news location as I discovered that my php location was being matched first. With this setup, I now get a 200 status, but the page is blank. Any ideas?
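    For comparison, a common pattern for serving WordPress from a folder that sits outside the main document root looks roughly like this; it reuses the paths and php.conf from the question, and the ^~ modifier keeps the top-level PHP location from grabbing /news requests first. Treat it as a sketch to adapt, not a known-good config:
        location ^~ /news {
            # root + the /news prefix resolves to /opt/directories/mysite/news
            root /opt/directories/mysite;
            index index.php;
            try_files $uri $uri/ /news/index.php?$args;

            # nest the existing PHP handler so that
            # $document_root$fastcgi_script_name points into the WordPress folder
            include php.conf;
        }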

    Read the article

  • Reconstructing the disk order in RAID 6 with 7 disks

    - by rkotulla
    A little background to this question first: I am running a RAID-6 within a QNAP TS869L external RAID/NAS system. I started with 5 disks of 3 TB each back in the day, and later added another 2 disks of 3 TB to the RAID. The QNAP internals handled the growing and re-syncing etc., and everything seemed to be perfectly fine.
    About 2 weeks ago, I had one of the disks (disk #5; disk #2 has gone bad in the meantime) fail, and somehow (I have no idea why) disks 1 and 2 also got kicked out of the array. I replaced disk #5, but the RAID didn't start working again. After some calls to QNAP technical support, they re-created the array (using mdadm --create --force --assume-clean ...), but the resulting array couldn't find a filesystem, and I was kindly referred to contact a data recovery company that I can't afford.
    After some digging through old log files, resetting the disk to factory default, etc., I found a few errors that were made during this re-create - I wish I still had some of the original metadata, but unfortunately I don't (I definitely learned that lesson). I'm currently at the point where I know the correct chunk size (64K) and metadata version (1.0; the factory default was 0.9, but from what I read 0.9 doesn't handle disks over 2 TB, and mine are 3 TB), and I can now find the ext4 filesystem that should be on the disks. The only variable left to determine is the right disk order!
    I started using the description found in answer #4 of "Recover RAID 5 data after created new array instead of re-using", but I am a little confused about what the order should be for a proper RAID-6. RAID-5 is pretty well documented in a number of places, but RAID-6 much less so. Also, does the layout, i.e. the distribution of parity and data chunks across the disks, change after growing the array from 5 to 7 disks, or does the re-sync re-organize them in such a way that it matches what a native 7-disk RAID-6 would have been?
Thanks some more mdadm output that might be helpful: mdadm version: [~] # mdadm --version mdadm - v2.6.3 - 20th August 2007 mdadm details from one of the disks in the array: [~] # mdadm --examine /dev/sda3 /dev/sda3: Magic : a92b4efc Version : 1.0 Feature Map : 0x0 Array UUID : 1c1614a5:e3be2fbb:4af01271:947fe3aa Name : 0 Creation Time : Tue Jun 10 10:27:58 2014 Raid Level : raid6 Raid Devices : 7 Used Dev Size : 5857395112 (2793.02 GiB 2998.99 GB) Array Size : 29286975360 (13965.12 GiB 14994.93 GB) Used Size : 5857395072 (2793.02 GiB 2998.99 GB) Super Offset : 5857395368 sectors State : clean Device UUID : 7c572d8f:20c12727:7e88c888:c2c357af Update Time : Tue Jun 10 13:01:06 2014 Checksum : d275c82d - correct Events : 7036 Chunk Size : 64K Array Slot : 0 (0, 1, failed, 3, failed, 5, 6) Array State : Uu_u_uu 2 failed mdadm details for the array in the current disk-order (based on my best guess reconstructed from old log-files) [~] # mdadm --detail /dev/md0 /dev/md0: Version : 01.00.03 Creation Time : Tue Jun 10 10:27:58 2014 Raid Level : raid6 Array Size : 14643487680 (13965.12 GiB 14994.93 GB) Used Dev Size : 2928697536 (2793.02 GiB 2998.99 GB) Raid Devices : 7 Total Devices : 5 Preferred Minor : 0 Persistence : Superblock is persistent Update Time : Tue Jun 10 13:01:06 2014 State : clean, degraded Active Devices : 5 Working Devices : 5 Failed Devices : 0 Spare Devices : 0 Chunk Size : 64K Name : 0 UUID : 1c1614a5:e3be2fbb:4af01271:947fe3aa Events : 7036 Number Major Minor RaidDevice State 0 8 3 0 active sync /dev/sda3 1 8 19 1 active sync /dev/sdb3 2 0 0 2 removed 3 8 51 3 active sync /dev/sdd3 4 0 0 4 removed 5 8 99 5 active sync /dev/sdg3 6 8 83 6 active sync /dev/sdf3 output from /proc/mdstat (md8, md9, and md13 are internally used RAIDs holding swap, etc; the one I'm after is md0) [~] # more /proc/mdstat Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] md0 : active raid6 sdf3[6] sdg3[5] sdd3[3] sdb3[1] sda3[0] 14643487680 blocks super 1.0 level 6, 64k chunk, algorithm 2 [7/5] [UU_U_UU] md8 : active raid1 sdg2[2](S) sdf2[3](S) sdd2[4](S) sdc2[5](S) sdb2[6](S) sda2[1] sde2[0] 530048 blocks [2/2] [UU] md13 : active raid1 sdg4[3] sdf4[4] sde4[5] sdd4[6] sdc4[2] sdb4[1] sda4[0] 458880 blocks [8/7] [UUUUUUU_] bitmap: 21/57 pages [84KB], 4KB chunk md9 : active raid1 sdg1[6] sdf1[5] sde1[4] sdd1[3] sdc1[2] sda1[0] sdb1[1] 530048 blocks [8/7] [UUUUUUU_] bitmap: 37/65 pages [148KB], 4KB chunk unused devices: <none>
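    For what it's worth, the brute-force approach people usually take when only the order is unknown is to loop over candidate orders with --assume-clean (so no data blocks are rewritten) and check each attempt read-only. A rough sketch along those lines; the device names and the two "missing" slots follow the --detail output above, and --create still rewrites superblocks, so treat this as a starting point to adapt, not a recipe:
        #!/bin/sh
        # Try a few candidate disk orders; slots 2 and 4 stay "missing",
        # matching the degraded 7-disk array shown above.
        for order in \
            "/dev/sda3 /dev/sdb3 missing /dev/sdd3 missing /dev/sdg3 /dev/sdf3" \
            "/dev/sda3 /dev/sdb3 missing /dev/sdg3 missing /dev/sdd3 /dev/sdf3"
        do
            mdadm --stop /dev/md0 2>/dev/null
            mdadm --create /dev/md0 --run --force --assume-clean \
                  --level=6 --raid-devices=7 --metadata=1.0 --chunk=64 $order
            # -n: check only, never write to the filesystem
            if fsck.ext4 -n /dev/md0 >/dev/null 2>&1; then
                echo "order looks plausible: $order"
            fi
        done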

    Read the article

  • In Linux, what's the best way to delegate administration responsibilities, like for Apache, a database, or some other application?

    - by Andrew Banks
    In Linux, what's the best way to delegate administration responsibilities for Apache and other "applications"? File permissions? Sudo? A mix of both? Something else? At work we have two tiers of "administrators" Operating system administrators. These are your run-of-the-mill "server administrators." They are responsible for just the operating system. Application administrators. The people who build the web site. This includes not only writing the SQL, PHP, and HTML, but also setting up and running Apache and PostgreSQL or MySQL. The aforementioned OS admins will install this stuff, but it's mainly up to the app admins to edit all the config files, start and stop processes when needed, and so on. I am one of the app admins. This is different than what I am used to. I used to just write code. The sysadmin took care not only of the OS but also installing, setting up, and keeping up the server software. But he left. Now I'm in charge of setting up Apache and the database. The new sysadmins say they just handle the operating system. It's no problem. I welcome learning new stuff. But there is a learning curve, even for the OS admins. Apache, by default, seems to be set up for administration by root directly. All the config files and scripts are 644 and owned by root:root. I'm not given the root password, naturally, so the OS admins must somehow give my ordinary OS user account all the rights necessary to edit Apache's config files, start and stop it, read its log files, and so on. Right now they're using a mix of: (1) giving me certain sudo rights, (2) adding me to certain groups, and (3) changing the file permissions of various directories, to make them writable by one of the groups I'm in. This never goes smoothly. There's always a back-and-forth between me and the sysadmins. They say it's ready. Then I try certain things, and half of them I still can't do. So they make some more changes. Then finally I seem to be independent and can administer Apache and the database without pestering them anymore. It's the sheer complication and amount of changes that make me uncomfortable. Even though it finally works, more or less, it seems hackneyed. I feel like we're doing it wrong. It seems like the makers of the software would have anticipated this scenario (someone other than root administering it) and have a clean two- or three-step program to delegate responsibility to me. But it feels like we are really chewing up the filesystem and making it far and away from the default set-up. Any suggestions? Are we doing it the recommended way? P.S. For PostgreSQL it seems a little better. Its files are owned by a system user named postgres. So giving me the right to run sudo su - postgres gives me just about everything. I'm just now getting into MySQL, but it seems to be set up similarly. But it seems a little weird doing all my work as another user.
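    For what it's worth, the usual way to keep this manageable is a group that owns the config tree plus a short, auditable sudoers policy rather than ad-hoc permission changes. A hypothetical sketch (the group, paths and service names are invented; edit with visudo):
        # /etc/sudoers.d/appadmins  -- hypothetical group and command names
        Cmnd_Alias APACHE_CTL = /sbin/service httpd *, /usr/sbin/apachectl *
        %appadmins ALL = (root) NOPASSWD: APACHE_CTL
        # work as the database service account without handing out root
        %appadmins ALL = (postgres) NOPASSWD: ALL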

    Read the article

  • Network update solutions for a company of ~20 (5 local, 15 remote)?

    - by Margaret
    Hi all. This is probably going to be a bit up in the air, because we're still in the "reaching towards solutions" phase, but I figured I'd see what you guys had to say. Plus I honestly know very little about systems and what is good and bad practice.
    My organisation has always more or less worked on the concept of local machines; since it primarily employed contractors who were working from home, each of those people was largely responsible for their own machine and backup procedures and the like. We're now expanding, though we're still reasonably small (we're up to about 20 staff members). Most people still work remotely, but we have a central office where about five people are working. We're getting large enough that we're starting to think it would be a good idea to have a central file server, and things like that - if someone gets hit by a bus, we want someone else to know where to look for the files to continue their work.
    A lot of the people who work for us remotely work on projects for other companies as well, so I don't want to force them to log in to our server whenever they're on a network. But I do want to make connecting as painless as possible, to improve utilisation. The other thing is that we're getting more people who would like to remote into the office server and do their work there. Our current remote connection application is an SSH install that allows people access to the network; the problem is, it's a black box to me, and I've never understood how to even connect to it (despite supposedly being the de facto sysadmin). Thus far I've been able to bounce questions about how to get it working to the guy who does know it well, but he's leaving the company soon. So we probably need a solution for this that I actually understand. We were knocking around the idea of implementing a VPN with some form of remote desktop, and someone mentioned that this was largely a matter of purchasing a router capable of it; I'm not sure of the truth of that statement.
    This is what we have in the office:
    - Two shiny new i7 servers, each running Windows Server 2008. The precise eventual layout is still being debated a little, but the current suggestion is that one does the primary database crunching, while the other is a warm backup of the databases, along with running Reporting Services. They currently have SQL Server 2008 installed on them, which is being connected to via the 'sa' account. We're hoping to make each person use their own account (preferably one tied to the 'central' password we set up, so we can use Windows Authentication).
    - An older server, running XP Pro, that we are currently using as a test bed for a project that requires access to older versions of software. This machine is also being used to take backups, but I'm thinking of moving that functionality elsewhere.
    - A spare desktop from a guy who left the company (XP Pro). We're thinking of bumping up the hard disk space and using it as the magical file server that's going to solve everything.
    - Assorted desktops, laptops, etc., at least one for each person in the office (a mix of Win XP and Win 7; occasionally a person who normally works remotely might drop in to the office and bring a laptop bearing Vista, but it's pretty rare). All are set up as local user accounts at the moment; I don't know if it's the best arrangement.
    Purchasing more hardware is not a big problem, but we figure we might as well make use of what we've got first.
Is Active Directory a big magic wand that's going to solve all the world's problems? Is there some other arrangement we should be looking to instead?

    Read the article

  • How do I make an external hard drive keep the same drive letter permanently?

    - by andygrunt
    I have a desktop PC (2002 vintage) running Windows XP that I turn on about 2 or 3 times per week. I have a mains-powered 250Gb Western Digital hard disk connected to it via USB. I always turn the hard disk on before the PC so it's up and running as the PC boots. When I first connected the external hard disk, the PC assigned it a letter ('i' if it matters) and I've installed software to it and created shortcuts to various files and folders on the disk using that letter.
    For years everything was fine; then one day I booted the PC and the hard disk was assigned a different letter. I'd then have to go into 'my computer/manage/disk management' and manually change the letter back to 'i'. If I then rebooted the PC, the hard disk would usually still be 'i', but after the next reboot it would be some other random letter and I would have to manually change it back to 'i'. This would go on for some time, then there'd be periods when it would always be 'i', then, for no apparent reason (no new devices added, for example), the drive letter would start changing again. At the moment it's in random drive letter mood, so I thought I'd ask the following question... How do I assign the external hard disk to be 'i' permanently?
    Answer: Thanks Molly, that seems to have done the trick (after a little fiddling) - slightly disappointed there wasn't a way to do it within Windows without installing something else, though. For anyone else trying this, it wasn't completely straightforward, so here's what happened with me. I installed USBDLM as per the instructions on its website. I guessed that I had to assign the first USB letter to 'i', so I replaced the 'Letter1=' lines with 'Letter=I' in the ini file. To test it, I rebooted the PC, only to find it came back up with the display set to 640x480 in 16 colours. After some investigation, I re-installed the display drivers, rebooted, and set the display back to its usual setting. The external hard disk now gets set to 'i', but I found I had to re-apply sharing status to it so it was seen from my laptop, which is on the same network.
    The end result of all this is that it now does what I wanted, although it does act as though the hard drive has just been plugged in a few seconds after the Windows desktop appears, i.e. the little box appears with a progress bar as it searches through the contents of the 'new' hard drive, and I eventually get a dialogue box saying 'This disk or device contains more than one type of content. What do you want Windows to do?' listing options such as play media files, print the pictures or open folder to view the files. This is a tiny pain I wish didn't happen, but not exactly a huge price to pay. Other than that - it seems to work fine :)
    Looks like I spoke too soon... Every time I reboot, I have to re-share the 'i' drive (which I didn't have to do before) so it can be seen by my laptop on the same network. Any ideas how to make that permanent?
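    For anyone following the same route, the edit described above amounts to something like this in usbdlm.ini; the post mentions both 'Letter1=' and 'Letter=I', and the key names vary between USBDLM versions, so treat this as a sketch and check the bundled help file for the exact syntax:
        ; usbdlm.ini
        [DriveLetters]
        ; force the first USB drive letter assignment to I:
        Letter1=I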

    Read the article

  • Network config / gear question

    - by mcgee1234
    I have been tasked with setting up a fairly straightforward rack in a data center (we do not even need a whole rack, but this is the smallest allotment available). In a nutshell, 4 to 6 servers need to be able to reach 2 (maybe 3) vendors. The servers need to be reachable over the internet. A little more detail: the networks the servers need to reach are inside the data center, and are "trusted". Connections to these networks will be achieved through intra-data-center cross connects. It is kind of like a manufacturing line where we receive data from one vendor (burstable up to 200 Mbits), churn through it on the servers, and then send out data to another vendor (bursts up to 20 Mbits). This series of events is very latency sensitive, so much so that it is common practice not to use NAT or a firewall on these segments (or so I hear). To reach the servers over the internet, I plan to use a site-to-site VPN. (This part is only relevant as far as hardware selection goes.)
    I have 2 configurations in mind:
    1. Cisco 2911 (2921) (with the additional WAN ports module) and a layer 2 switch - in this scenario, I would also use the router for VPN.
    2. Cisco 3560 layer 3 switch to interconnect the networks inside the data center, and an ASA 5510 (which is total overkill, but the 5505 is not rack mountable) as a firewall for the WAN side (internet) and VPN. I envision the setup to be as follows:
    Internet - ASA - 3560
    Vendors - 3560 - Servers
    The general idea is that the ASA acts as a firewall and VPN device and the 3560 does all the heavy lifting. The first is a fairly traditional setup, but my concern is performance. The second is somewhat unorthodox in that the vendors are directly connected to the layer 3 switch without passing through a firewall. Based on my understanding, however, a layer 3 switch will perform substantially better as it will do hardware (ASIC) rather than software switching. (Note that number 2 is a little over the budget, but not unworkable (double negative, ugh).)
    Since this is my first time dealing with a data center, I am not sure what the IP space is going to look like. I suspect I will retain a block (or blocks) of public IPs, VLAN them to individual interfaces for the vendor connections and the servers (which will not be reachable from the WAN side, of course) and set up routing on the switch. So here are my questions:
    1. Is there a substantial performance difference between 1 and 2, i.e. hardware-based switching on a layer 3 switch vs. software-based switching on the 2911? I have trolled the internet and found a lot of Cisco literature, but nothing that I could really use to get a good handle on it.
    2. The vendors we connect to are secure and trusted (famous last words) and, as I understand it, it is common practice not to NAT or firewall these connections (because of the aforementioned latency sensitivity). But what kind of latency are we really talking about if I push the data through a router (or even the ASA, for that matter)? For our purposes, 5 ms will not kill us; 20 or 30 can be very costly. Others measure in microseconds, but they are out of our league.
    3. Are there any issues with using public IPs on a layer 3 switch?
    I am certainly not married to either of these configs, and I am totally open to any ideas. My knowledge (and I use the term loosely) is largely from books, so I welcome any advice / insight. Thanks in advance.
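    To make the layer-3-switch option concrete, inter-VLAN routing on a 3560 is just ip routing plus an SVI per segment. This is a generic sketch with invented VLAN numbers, interface names and documentation addresses, not a recommendation for the actual IP plan:
        ip routing
        !
        interface Vlan10
         description vendor A cross-connect
         ip address 192.0.2.1 255.255.255.252
        !
        interface Vlan20
         description server segment (public block)
         ip address 198.51.100.1 255.255.255.240
        !
        interface GigabitEthernet0/1
         description physical port to the vendor A cross-connect
         switchport mode access
         switchport access vlan 10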

    Read the article

  • Windows 7 Phone Database – Querying with Views and Filters

    - by SeanMcAlinden
    I’ve just added a feature to Rapid Repository to greatly improve how the Windows 7 Phone Database is queried for performance (This is in the trunk not in Release V1.0). The main concept behind it is to create a View Model class which would have only the minimum data you need for a page. This View Model is then stored and retrieved rather than the whole list of entities. Another feature of the views is that they can be pre-filtered to even further improve performance when querying. You can download the source from the Microsoft Codeplex site http://rapidrepository.codeplex.com/. Setting up a view Lets say you have an entity that stores lots of data about a game result for example: GameScore entity public class GameScore : IRapidEntity {     public Guid Id { get; set; }     public string GamerId {get;set;}     public string Name { get; set; }     public Double Score { get; set; }     public Byte[] ThumbnailAvatar { get; set; }     public DateTime DateAdded { get; set; } }   On your page you want to display a list of scores but you only want to display the score and the date added, you create a View Model for displaying just those properties. GameScoreView public class GameScoreView : IRapidView {     public Guid Id { get; set; }     public Double Score { get; set; }     public DateTime DateAdded { get; set; } }   Now you have the view model, the first thing to do is set up the view at application start up. This is done using the following syntax. View Setup public MainPage() {     RapidRepository<GameScore>.AddView<GameScoreView>(x => new GameScoreView { DateAdded = x.DateAdded, Score = x.Score }); } As you can see, using a little bit of lambda syntax, you put in the code for constructing a single view, this is used internally for mapping an entity to a view. *Note* you do not need to map the Id property, this is done automatically, a view model id will always be the same as it’s corresponding entity.   Adding Filters One of the cool features of the view is that you can add filters to limit the amount of data stored in the view, this will dramatically improve performance. You can add multiple filters using the fluent syntax if required. In this example, lets say that you will only ever show the scores for the last 10 days, you could add a filter like the following: Add single filter public MainPage() {     RapidRepository<GameScore>.AddView<GameScoreView>(x => new GameScoreView { DateAdded = x.DateAdded, Score = x.Score })         .AddFilter(x => x.DateAdded > DateTime.Now.AddDays(-10)); } If you wanted to further limit the data, you could also say only scores above 100: Add multiple filters public MainPage() {     RapidRepository<GameScore>.AddView<GameScoreView>(x => new GameScoreView { DateAdded = x.DateAdded, Score = x.Score })         .AddFilter(x => x.DateAdded > DateTime.Now.AddDays(-10))         .AddFilter(x => x.Score > 100); }   Querying the view model So the important part is how to query the data. This is done using the repository, there is a method called Query which accepts the type of view as a generic parameter (you can have multiple View Model types per entity type) You can either use the result of the query method directly or perform further querying on the result is required. 
Querying the View public void DisplayScores() {     RapidRepository<GameScore> repository = new RapidRepository<GameScore>();     List<GameScoreView> scores = repository.Query<GameScoreView>();       // display logic } Further Filtering public void TodaysScores() {     RapidRepository<GameScore> repository = new RapidRepository<GameScore>();     List<GameScoreView> todaysScores = repository.Query<GameScoreView>().Where(x => x.DateAdded > DateTime.Now.AddDays(-1)).ToList();       // display logic }   Retrieving the actual entity Retrieving the actual entity can be done easily by using the GetById method on the repository. Say for example you allow the user to click on a specific score to get further information, you can use the Id populated in the returned View Model GameScoreView and use it directly on the repository to retrieve the full entity. Get Full Entity public void GetFullEntity(Guid gameScoreViewId) {     RapidRepository<GameScore> repository = new RapidRepository<GameScore>();     GameScore fullEntity = repository.GetById(gameScoreViewId);       // display logic } Synchronising The View If you are upgrading from Rapid Repository V1.0 and are likely to have data in the repository already, you will need to perform a synchronisation to ensure the views and entities are fully in sync. You can either do this as a one off during the application upgrade or if you are a little more cautious, you could run this at each application start up. Synchronise the view public void MyUpgradeTasks() {     RapidRepository<GameScore>.SynchroniseView<GameScoreView>(); } It’s worth noting that in normal operation, the view keeps itself in sync with the entities so this is only really required if you are upgrading from V1.0 to V2.0 when it gets released shortly.   Summary I really hope you like this feature, it will be great for performance and I believe supports good practice by promoting the use of View Models for specific pages. I’m hoping to produce a beta for this over the next few days, I just want to add some more tests and hopefully iron out any bugs. I would really appreciate any thoughts on this feature and would really love to know of any bugs you find. You can download the source from the following : http://rapidrepository.codeplex.com/ Kind Regards, Sean McAlinden.

    Read the article

  • SharePoint - Summing Calculated Columns By Groups (DVWP)

    - by Mark Rackley
    I had a problem… okay.. okay.. so I have many problems… but let’s focus on one in particular or this blog post would never end… okay? Thank you…. So, I had an electronic timesheet where users entered hours for each day of the week. It also had a “Week Total” column which was a calculated column of the sum. The calculated column looked like this: Pretty easy.. nothing spectacular. So, what’s the problem? WELL……………….. There is a row in the timesheet for each task a person worked on in a given week. So, if you worked on 4 tasks, you would have 4 rows of data, and 4 week totals for that week: This is all fine and dandy, but I want to know what the total was for the entire week. Yes.. I realize the answer is 24 from my example… I mean, I know how to add! I just want SharePoint to display it for me for the executives (we all know, they have math problems).  You may be thinking, hey genius (in a sarcastic tone of course), why don’t you just go to the view and total on the “Week Total” field. What a brilliant idea! Why didn’t I think of that… let’s go to the view and do just that…. Ohhhhhh… you can’t total on a Calculated Column.. it’s not even an option…  Yeah… I had the same moment. So, what do you do? Well… what do you think I did? 1) Googled “SharePoint total calculated column” 2) Said it couldn’t be done 3) Took a nap 4) Asked the question on twitter? The correct answer of course is number 4… followed by number 3… although I may have told my boss number 2 so that I look more brilliant than I am? It’s safe to say I did NOT try to find the solution on my own doing step 1… that would be just WAY to easy… So, anyway, I posted the question on Twitter and it turns out several people had suggestions from using jQuery to using DVWPs. I tend to be a big fan of the DVWP except for the disgusting process of deploying them to another farm.. ugh… just shoot me…. so, that is the solution I went with. Laura Rogers (@WonderLaura) has a super duper easy to follow video on the subject over at EndUserSharePoint.com: SharePoint: Displaying Calculated Column SUMS in a View (Screencast) Laura’s video was very easy to follow and was ALMOST exactly what I needed. She does a great job walking you through every step of summing up a calculated field which was PART of my problem. The other part was my list is grouped by date! So, I wanted to see for a given week, the summed “Week Total” of hours. Laura got me on the right track with her video and I dug a little deeper into the DVWP to accomplish my task. So, here are the steps you follow: 1. Click on the "chevron” (I didn’t know it was actually called that until I heard Laura say it).. I always call it the “little-button-in-the-top-right-corner-with-the-greater-than-sign”.. but “chevron” is much shorter. So, click on the chevron, click on “Sort and Group”. The Add the field you want to group by, in my example it is the “Monday Date” of the timesheet entry. Make sure to check the check boxes for “Show Group Header” AND “Show Group Footer”. Click “OK”. The view now shows the count of each grouped set of data: Interesting, this looks very similar to Laura’s video… right? So, let’s take a look at the code for the Count: Count : <xsl:value-of select="count($nodeset)" /> Wow, also very similar… except in Laura’s video it looks like: Count : <xsl:value-of select="count($Rows)" /> So.. the only difference is that instead of $Rows we have $nodeset. It turns out the $nodeset will go through each Row in the group just like $Rows goes through each row in the entire view. 
So, using the exact same logic as in Laura’s blog except replacing $Rows with $nodeset we get the functionality of being able to sum up the values for a group. So, I want to replace “Count: #” with the total hours, this is done using the following changes to the above code: Week Total : <xsl:value-of select="sum($nodeset/@Monday)+sum($nodeset/@Tuesday) +sum($nodeset/@Wednesday)+sum($nodeset/@Thursday)+sum($nodeset/@Friday) +sum($nodeset/@Saturday)+sum($nodeset/@Sunday)" /> Our final output has the summed hours for each group! So… long story short… follow Laura’s blog, then group your list, then replace “$Rows” with “$nodeset”. One caveat, this will not work if you group by a person field. For some reason the person field does not go through each row in the group. I haven’t dug into this much yet. Maybe if I find some time… whatever that is… Anyway, Laura did all the work, I just took it one small step forward… as always, feel free to leave any additional insights you may have. We’re all learning here!

    Read the article

  • Displaying an image on a LED matrix with a Netduino

    - by Bertrand Le Roy
    In the previous post, we’ve been flipping bits manually on three ports of the Netduino to simulate the data, clock and latch pins that a shift register expected. We did all that in order to control one line of a LED matrix and create a simple Knight Rider effect. It was rightly pointed out in the comments that the Netduino has built-in knowledge of the sort of serial protocol that this shift register understands through a feature called SPI. That will of course make our code a whole lot simpler, but it will also make it a whole lot faster: writing to the Netduino ports is actually not that fast, whereas SPI is very, very fast. Unfortunately, the Netduino documentation for SPI is severely lacking. Instead, we’ve been reliably using the documentation for the Fez, another .NET microcontroller. To send data through SPI, we’ll just need  to move a few wires around and update the code. SPI uses pin D11 for writing, pin D12 for reading (which we won’t do) and pin D13 for the clock. The latch pin is a parameter that can be set by the user. This is very close to the wiring we had before (data on D11, clock on D12 and latch on D13). We just have to move the latch from D13 to D10, and the clock from D12 to D13. The code that controls the shift register has slimmed down considerably with that change. Here is the new version, which I invite you to compare with what we had before: public class ShiftRegister74HC595 { protected SPI Spi; public ShiftRegister74HC595(Cpu.Pin latchPin) : this(latchPin, SPI.SPI_module.SPI1) { } public ShiftRegister74HC595(Cpu.Pin latchPin, SPI.SPI_module spiModule) { var spiConfig = new SPI.Configuration( SPI_mod: spiModule, ChipSelect_Port: latchPin, ChipSelect_ActiveState: false, ChipSelect_SetupTime: 0, ChipSelect_HoldTime: 0, Clock_IdleState: false, Clock_Edge: true, Clock_RateKHz: 1000 ); Spi = new SPI(spiConfig); } public void Write(byte buffer) { Spi.Write(new[] {buffer}); } } All we have to do here is configure SPI. The write method couldn’t be any simpler. Everything is now handled in hardware by the Netduino. We set the frequency to 1MHz, which is largely sufficient for what we’ll be doing, but it could potentially go much higher. The shift register addresses the columns of the matrix. The rows are directly wired to ports D0 to D7 of the Netduino. The code writes to only one of those eight lines at a time, which will make it fast enough. The way an image is displayed is that we light the lines one after the other so fast that persistence of vision will give the illusion of a stable image: foreach (var bitmap in matrix.MatrixBitmap) { matrix.OnRow(row, bitmap, true); matrix.OnRow(row, bitmap, false); row++; } Now there is a twist here: we need to run this code as fast as possible in order to display the image with as little flicker as possible, but we’ll eventually have other things to do. In other words, we need the code driving the display to run in the background, except when we want to change what’s being displayed. Fortunately, the .NET Micro Framework supports multithreading. In our implementation, we’ve added an Initialize method that spins a new thread that is tied to the specific instance of the matrix it’s being called on. public LedMatrix Initialize() { DisplayThread = new Thread(() => DoDisplay(this)); DisplayThread.Start(); return this; } I quite like this way to spin a thread. As you may know, there is another, built-in way to contextualize a thread by passing an object into the Start method. 
For the method to work, the thread must have been constructed with a ParameterizedThreadStart delegate, which takes one parameter of type object. I like to use object as little as possible, so instead I’m constructing a closure with a Lambda, currying it with the current instance. This way, everything remains strongly-typed and there’s no casting to do. Note that this method would extend perfectly to several parameters. Of note as well is the return value of Initialize, a common technique to add some fluency to the API and enabling the matrix to be instantiated and initialized in a single line: using (var matrix = new LedMS88SR74HC595().Initialize()) The “using” in the previous line is because we have implemented IDisposable so that the matrix kills the thread and clears the display when the user code is done with it: public void Dispose() { Clear(); DisplayThread.Abort(); } Thanks to the multi-threaded version of the matrix driver class, we can treat the display as a simple bitmap with a very synchronous programming model: matrix.Set(someimage); while (button.Read()) { Thread.Sleep(10); } Here, the call into Set returns immediately and from the moment the bitmap is set, the background display thread will constantly continue refreshing no matter what happens in the main thread. That enables us to wait or read a button’s port on the main thread knowing that the current image will continue displaying unperturbed and without requiring manual refreshing. We’ve effectively hidden the implementation of the display behind a convenient, synchronous-looking API. Pretty neat, eh? Before I wrap up this post, I want to talk about one small caveat of using SPI rather than driving the shift register directly: when we got to the point where we could actually display images, we noticed that they were a mirror image of what we were sending in. Oh noes! Well, the reason for it is that SPI is sending the bits in a big-endian fashion, in other words backwards. Now sure you could fix that in software by writing some bit-level code to reverse the bits we’re sending in, but there is a far more efficient solution than that. We are doing hardware here, so we can simply reverse the order in which the outputs of the shift register are connected to the columns of the matrix. That’s switching 8 wires around once, as compared to doing bit operations every time we send a line to display. All right, so bringing it all together, here is the code we need to write to display two images in succession, separated by a press on the board’s button: var button = new InputPort(Pins.ONBOARD_SW1, false, Port.ResistorMode.Disabled); using (var matrix = new LedMS88SR74HC595().Initialize()) { // Oh, prototype is so sad! var sad = new byte[] { 0x66, 0x24, 0x00, 0x18, 0x00, 0x3C, 0x42, 0x81 }; DisplayAndWait(sad, matrix, button); // Let's make it smile! var smile = new byte[] { 0x42, 0x18, 0x18, 0x81, 0x7E, 0x3C, 0x18, 0x00 }; DisplayAndWait(smile, matrix, button); } And here is a video of the prototype running: The prototype in action I’ve added an artificial delay between the display of each row of the matrix to clearly show what’s otherwise happening very fast. This way, you can clearly see each of the two images being displayed line by line. Next time, we’ll do no hardware changes, focusing instead on building a nice programming model for the matrix, with sprites, text and hardware scrolling. Fun stuff. By the way, can any of my reader guess where we’re going with all that? 
The code for this prototype can be downloaded here: http://weblogs.asp.net/blogs/bleroy/Samples/NetduinoLedMatrixDriver.zip

    Read the article

  • DropDownList and SelectListItem Array Item Updates in MVC

    - by Rick Strahl
    So I ran into an interesting behavior today as I deployed my first MVC 4 app tonight. I have a list form that has a filter drop down that allows selection of categories. This list is static and rarely changes so rather than loading these items from the database each time I load the items once and then cache the actual SelectListItem[] array in a static property. However, when we put the site online tonight we immediately noticed that the drop down list was coming up with pre-set values that randomly changed. Didn't take me long to trace this back to the cached list of SelectListItem[]. Clearly the list was getting updated - apparently through the model binding process in the selection postback. To clarify the scenario here's the drop down list definition in the Razor View:@Html.DropDownListFor(mod => mod.QueryParameters.Category, Model.CategoryList, "All Categories") where Model.CategoryList gets set with:[HttpPost] [CompressContent] public ActionResult List(MessageListViewModel model) { InitializeViewModel(model); busEntry entryBus = new busEntry(); var entries = entryBus.GetEntryList(model.QueryParameters); model.Entries = entries; model.DisplayMode = ApplicationDisplayModes.Standard; model.CategoryList = AppUtils.GetCachedCategoryList(); return View(model); } The AppUtils.GetCachedCategoryList() method gets the cached list or loads the list on the first access. The code to load up the list is housed in a Web utility class. The method looks like this:/// <summary> /// Returns a static category list that is cached /// </summary> /// <returns></returns> public static SelectListItem[] GetCachedCategoryList() { if (_CategoryList != null) return _CategoryList; lock (_SyncLock) { if (_CategoryList != null) return _CategoryList; var catBus = new busCategory(); var categories = catBus.GetCategories().ToList(); // Turn list into a SelectItem list var catList= categories .Select(cat => new SelectListItem() { Text = cat.Name, Value = cat.Id.ToString() }) .ToList(); catList.Insert(0, new SelectListItem() { Value = ((int)SpecialCategories.AllCategoriesButRealEstate).ToString(), Text = "All Categories except Real Estate" }); catList.Insert(1, new SelectListItem() { Value = "-1", Text = "--------------------------------" }); _CategoryList = catList.ToArray(); } return _CategoryList; } private static SelectListItem[] _CategoryList ; This seemed normal enough to me - I've been doing stuff like this forever caching smallish lists in memory to avoid an extra trip to the database. This list is used in various places throughout the application - for the list display and also when adding new items and setting up for notifications etc.. Watch that ModelBinder! However, it turns out that this code is clearly causing a problem. It appears that the model binder on the [HttpPost] method is actually updating the list that's bound to and changing the actual entry item in the list and setting its selected value. If you look at the code above I'm not setting the SelectListItem.Selected value anywhere - the only place this value can get set is through ModelBinding. Sure enough when stepping through the code I see that when an item is selected the actual model - model.CategoryList[x].Selected - reflects that. This is bad on several levels: First it's obviously affecting the application behavior - nobody wants to see their drop down list values jump all over the place randomly. 
But it's also a problem because the array is getting updated by multiple ASP.NET threads which likely would lead to odd crashes from time to time. Not good! In retrospect the modelbinding behavior makes perfect sense. The actual items and the Selected property is the ModelBinder's way of keeping track of one or more selected values. So while I assumed the list to be read-only, the ModelBinder is actually updating it on a post back producing the rather surprising results. Totally missed this during testing and is another one of those little - "Did you know?" moments. So, is there a way around this? Yes but it's maybe not quite obvious. I can't change the behavior of the ModelBinder, but I can certainly change the way that the list is generated. Rather than returning the cached list, I can return a brand new cloned list from the cached items like this:/// <summary> /// Returns a static category list that is cached /// </summary> /// <returns></returns> public static SelectListItem[] GetCachedCategoryList() { if (_CategoryList != null) { // Have to create new instances via projection // to avoid ModelBinding updates to affect this // globally return _CategoryList .Select(cat => new SelectListItem() { Value = cat.Value, Text = cat.Text }) .ToArray(); } …}  The key is that newly created instances of SelectListItems are returned not just filtered instances of the original list. The key here is 'new instances' so that the ModelBinding updates do not update the actual static instance. The code above uses LINQ and a projection into new SelectListItem instances to create this array of fresh instances. And this code works correctly - no more cross-talk between users. Unfortunately this code is also less efficient - it has to reselect the items and uses extra memory for the new array. Knowing what I know now I probably would have not cached the list and just take the hit to read from the database. If there is even a possibility of thread clashes I'm very wary of creating code like this. But since the method already exists and handles this load in one place this fix was easy enough to put in. Live and learn. It's little things like this that can cause some interesting head scratchers sometimes… © Rick Strahl, West Wind Technologies, 2005-2012. Posted in MVC  ASP.NET  .NET

    Read the article

  • 2 Days of Share & Point

    - by Mark Rackley
    Groovy man… SharePoint Saturday Ozarks is back for 2010, bigger and better than before. Join us for a far out time and learn more about SharePoint in one day than you could in a year from the man… Yes! SharePoint Saturday Ozarks is back! SharePoint Saturday Ozarks is the largest SharePoint conference in Arkansas, Southern Missouri, and the very north east tip of Oklahoma. Last year we had a great turn out with 20 speakers, 5 MVPs, and attendees coming from Arkansas, Texas, Oklahoma, Missouri, Kansas, Nebraska, Indiana, Ohio, Alabama, Michigan, and Washington. Hey Man… what’s SharePoint Saturday anyway? Sounds like a conspiracy man… Not to worry, SharePoint Saturday is not an arm of the government bent on mind control or any attempt what-so-ever to bring you down man. SharePoint Saturday is grass roots effort started by Michael Lotter (http://www.sharepointsaturday.org/pages/about.aspx). It is a FREE one day event where the best SharePoint speakers gather to present their love, hatred, and frustrations of SharePoint to those lucky individuals who attend. Lessons are learned, contacts are made, prizes are won, food is eaten, assorted beverages are consumed until wee hours of the morning. SharePoint Saturday started with just a few sporadic one day events here and there. However, over the past year SharePoint Saturday has exploded and it’s hard to find a weekend where there is NOT a SharePoint Saturday event happing in some corner of the globe. There are even occasions where there are two SharePoint Saturdays on the same day! Many people are pleasantly surprised at the caliber of speakers at these SharePoint Saturday events. For the most part, these speakers are more eloquent, practiced, and practical than those speakers you find at the major multi-day conferences. These guys aren’t even paid to speak.. they do it out of love man… SharePoint Saturday Ozarks 2009 Alumni We had a star studded cast last year with many returning this year! Just check out the fun that they had… John Ferringer – Admin rockstar… I can still sense the awesomeness   SharePoint poster children Mike Watson & Laura Rogers     Lori Gowin spreading the SharePoint Love Eric Shupps is a little bit country and a little bit rock and roll       Cathy Dew, Sean McDonough, and JD Wade relaxing between gigs Actually, you can see real photos from last year’s SharePoint Saturday ozarks here:  picasaweb.google.com/mrackley/SharePointSaturdayOzarks#    What’s new for SharePoint Saturday Ozarks 2010 SharePoint Saturday Ozarks 2010 will totally blow your mind man. We’re getting the band back to together with many returning speakers and few new faces. Joel Oleson will be speaking this year, maybe he’ll grace us with his song stylings. Sadly, once again, Andrew Connell will not be able to attend SharePoint Saturday Ozarks, however he did feel the need to show his support in his own way. Prizes this year currently include books, software, a Zune HD, and much more! Wait Man… You said 2 days? I thought it was a one day event? Correct you are my herbal smelling friend… SharePoint Saturday Ozarks 2010 will spread the love an additional day this year. The first day will be all about the SharePoint love, on day 2 we will be taking a leisurely float down the Buffalo National River for those interested in a truly unique experience (no banjos allowed please).   Here are the details: WHAT 4 – 5 hour float down the Buffalo National River WHEN & WHERE Sunday June 13th. 
We will be leaving at 10am from the Parking Lot of: Gordon’s Motel & Canoe Rental Old Highway 7 Jasper, AR 72641 (870) 446-5252 Jasper is about 30 minutes south of Harrison, AR on Highway 7 South. You are responsible for bumming a ride to/from Gordon’s Motel, but they will be shuttling us to/from the river and providing canoes and a boxed lunch. WHAT ELSE? The float trip is dependent on the weather of course, we won’t be floating down the river in a thunderstorm, however I planned SPS Ozarks around a time of year ideal for floating. We aren’t talking class 5 rapids here, you don’t need any real skill, but you need to be okay with possibly tipping your canoe over once or twice. You can bring your own assorted beverages with you, but glass containers are not allowed on the river. I suggest a small cooler with extra snacks and drinks. Also bring clothing you can get wet in (these SharePoint people can get ornery). HOW DO I SIGN UP? When you register for SharePoint Saturday Ozarks, you will have the option to also sign up for the float trip. Seats are limited though! If you do not intend to go, please do not take someone else’s place.  The cost for the float trip will be about $35 dollars per person (which you are responsible for unless we find a sponsor). The price includes shuttle to/from river, canoe, life jackets, paddles, and boxed lunch. Far out man… how do I register??? You can register for SharePoint Saturday Ozarks by going to http://spsozarks.eventbrite.com/ We are limited to 200 people for the conference and 50 people for the float trip, so register today before we are sold out. Lodging for SharePoint Saturday Ozarks will once again take place at the Hotel Seville: Annex Suites are available for $103.20 This is So Groovy.. How can I help? I’m glad you asked! We are still looking for a few sponsors and one or two more speakers. If you are interested please let me know!  You can find out more information at http://www.sharepointsaturday.org/ozarks Hey… wait a minute…. what exactly IS SharePoint man??? Come to SharePoint Saturday Ozarks and find out!!  See you guys there!

    Read the article

  • SSIS Lookup component tuning tips

    - by jamiet
    Yesterday evening I attended a London meeting of the UK SQL Server User Group at Microsoft’s offices in London Victoria. As usual it was both a fun and informative evening and in particular there seemed to be a few questions arising about tuning the SSIS Lookup component; I rattled off some comments and figured it would be prudent to drop some of them into a dedicated blog post, hence the one you are reading right now. Scene setting A popular pattern in SSIS is to use a Lookup component to determine whether a record in the pipeline already exists in the intended destination table or not and I cover this pattern in my 2006 blog post Checking if a row exists and if it does, has it changed? (note to self: must rewrite that blog post for SSIS2008). Fundamentally the SSIS lookup component (when using FullCache option) sucks some data out of a database and holds it in memory so that it can be compared to data in the pipeline. One of the big benefits of using SSIS dataflows is that they process data one buffer at a time; that means that not all of the data from your source exists in the dataflow at the same time and is why a SSIS dataflow can process data volumes that far exceed the available memory. However, that only applies to data in the pipeline; for reasons that are hopefully obvious ALL of the data in the lookup set must exist in the memory cache for the duration of the dataflow’s execution which means that any memory used by the lookup cache will not be available to be used as a pipeline buffer. Moreover, there’s an obvious correlation between the amount of data in the lookup cache and the time it takes to charge that cache; the more data you have then the longer it will take to charge and the longer you have to wait until the dataflow actually starts to do anything. For these reasons your goal is simple: ensure that the lookup cache contains as little data as possible. General tips Here is a simple tick list you can follow in order to tune your lookups: Use a SQL statement to charge your cache, don’t just pick a table from the dropdown list made available to you. (Read why in SELECT *... or select from a dropdown in an OLE DB Source component?) Only pick the columns that you need, ignore everything else Make the database columns that your cache is populated from as narrow as possible. If a column is defined as VARCHAR(20) then SSIS will allocate 20 bytes for every value in that column – that is a big waste if the actual values are significantly less than 20 characters in length. Do you need DT_WSTR typed columns or will DT_STR suffice? DT_WSTR uses twice the amount of space to hold values that can be stored using a DT_STR so if you can use DT_STR, consider doing so. Same principle goes for the numerical datatypes DT_I2/DT_I4/DT_I8. Only populate the cache with data that you KNOW you will need. In other words, think about your WHERE clause! Thinking outside the box It is tempting to build a large monolithic dataflow that does many things, one of which is a Lookup. Often though you can make better use of your available resources by, well, mixing things up a little and here are a few ideas to get your creative juices flowing: There is no rule that says everything has to happen in a single dataflow. If you have some particularly resource intensive lookups then consider putting that lookup into a dataflow all of its own and using raw files to pass the pipeline data in and out of that dataflow. Know your data. 
If you think, for example, that the majority of your incoming rows will match with only a small subset of your lookup data then consider chaining multiple lookup components together; the first would use a FullCache containing that data subset and the remaining data that doesn’t find a match could be passed to a second lookup that perhaps uses a NoCache lookup thus negating the need to pull all of that least-used lookup data into memory. Do you need to process all of your incoming data all at once? If you can process different partitions of your data separately then you can partition your lookup cache as well. For example, if you are using a lookup to convert a location into a [LocationId] then why not process your data one region at a time? This will mean your lookup cache only has to contain data for the location that you are currently processing and with the ability of the Lookup in SSIS2008 and beyond to charge the cache using a dynamically built SQL statement you’ll be able to achieve it using the same dataflow and simply loop over it using a ForEach loop. Taking the previous data partitioning idea further … a dataflow can contain more than one data path so why not split your data using a conditional split component and, again, charge your lookup caches with only the data that they need for that partition. Lookups have two uses: to (1) find a matching row from the lookup set and (2) put attributes from that matching row into the pipeline. Ask yourself, do you need to do these two things at the same time? After all once you have the key column(s) from your lookup set then you can use that key to get the rest of attributes further downstream, perhaps even in another dataflow. Are you using the same lookup data set multiple times? If so, consider the file caching option in SSIS 2008 and beyond. Above all, experiment and be creative with different combinations. You may be surprised at what works. Final  thoughts If you want to know more about how the Lookup component differs in SSIS2008 from SSIS2005 then I have a dedicated blog post about that at Lookup component gets a makeover. I am on a mini-crusade at the moment to get a BULK MERGE feature into the database engine, the thinking being that if the database engine can quickly merge massive amounts of data in a similar manner to how it can insert massive amounts using BULK INSERT then that’s a lot of work that wouldn’t have to be done in the SSIS pipeline. If you think that is a good idea then go and vote for BULK MERGE on Connect. If you have any other tips to share then please stick them in the comments. Hope this helps! @Jamiet Share this post: email it! | bookmark it! | digg it! | reddit! | kick it! | live it!
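To make the partitioning and "narrow cache" tips above a little more concrete, here is a small sketch - the table, column and variable names are assumptions, not from the post - of the kind of statement you would charge the Lookup cache with when looping over regions. In a package you would typically build this string into an SSIS variable (via an expression or a Script Task) and point the Lookup's SQL query at that variable.

using System;

// Sketch only: builds the narrow, filtered SELECT that charges the Lookup cache
// for a single partition. Only the business key and surrogate key are cached,
// and only rows for the region currently being processed.
class LookupQueryBuilder
{
    static string BuildLocationLookupQuery(string regionCode)
    {
        return string.Format(
            "SELECT LocationCode, LocationId FROM dbo.DimLocation WHERE RegionCode = '{0}'",
            regionCode.Replace("'", "''"));
    }

    static void Main()
    {
        // In a package this string would be assigned to a variable such as
        // User::LookupQuery and consumed by the Lookup component.
        Console.WriteLine(BuildLocationLookupQuery("EMEA"));
    }
}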

    Read the article

  • ASP.NET and WIF: Showing custom profile username as User.Identity.Name

    - by DigiMortal
    I am building ASP.NET MVC application that uses external services to authenticate users. For ASP.NET users are fully authenticated when they are redirected back from external service. In system they are logically authenticated when they have created user profiles. In this posting I will show you how to force ASP.NET MVC controller actions to demand existence of custom user profiles. Using external authentication sources with AppFabric Suppose you want to be user-friendly and you don’t force users to keep in mind another username/password when they visit your site. You can accept logins from different popular sites like Windows Live, Facebook, Yahoo, Google and many more. If user has account in some of these services then he or she can use his or her account to log in to your site. If you have community site then you usually have support for user profiles too. Some of these providers give you some information about users and other don’t. So only thing in common you get from all those providers is some unique ID that identifies user in service uniquely. Image above shows you how new user joins your site. Existing users who already have profile are directed to users homepage after they are authenticated. You can read more about how to solve semi-authorized users problem from my blog posting ASP.NET MVC: Using ProfileRequiredAttribute to restrict access to pages. The other problem is related to usernames that we don’t get from all identity providers. Why is IIdentity.Name sometimes empty? The problem is described more specifically in my blog posting Identifying AppFabric Access Control Service users uniquely. Shortly the problem is that not all providers have claim called http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name. The following diagram illustrates what happens when user got token from AppFabric ACS and was redirected to your site. Now, when user was authenticated using Windows Live ID then we don’t have name claim in token and that’s why User.Identity.Name is empty. Okay, we can force nameidentifier to be used as name (we can do it in web.config file) but we have user profiles and we want username from profile to be shown when username is asked. Modifying name claim Now let’s force IClaimsIdentity to use username from our user profiles. You can read more about my profiles topic from my blog posting ASP.NET MVC: Using ProfileRequiredAttribute to restrict access to pages and you can find some useful extension methods for claims identity from my blog posting Identifying AppFabric Access Control Service users uniquely. Here is what we do to set User.Identity.Name: we will check if user has profile, if user has profile we will check if User.Identity.Name matches the name given by profile, if names does not match then probably identity provider returned some name for user, we will remove name claim and recreate it with correct username, we will add new name claim to claims collection. All this stuff happens in Application_AuthorizeRequest event of our web application. The code is here. 
protected void Application_AuthorizeRequest() {     if (string.IsNullOrEmpty(User.Identity.Name))     {         var identity = User.Identity;         var profile = identity.GetProfile();         if (profile != null)         {             if (profile.UserName != identity.Name)             {                 identity.RemoveName();                   var claim = new Claim("http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name", profile.UserName);                 var claimsIdentity = (IClaimsIdentity)identity;                 claimsIdentity.Claims.Add(claim);             }         }     } } RemoveName extension method is simple – it looks for name claims of IClaimsIdentity claims collection and removes them. public static void RemoveName(this IIdentity identity) {     if (identity == null)         return;       var claimsIndentity = identity as ClaimsIdentity;     if (claimsIndentity == null)         return;       for (var i = claimsIndentity.Claims.Count - 1; i >= 0; i--)     {         var claim = claimsIndentity.Claims[i];         if (claim.ClaimType == "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name")             claimsIndentity.Claims.RemoveAt(i);     } } And we are done. Now User.Identity.Name returns the username from user profile and you can use it to show username of current user everywhere in your site. Conclusion Mixing AppFabric Access Control Service and Windows Identity Foundation with custom authorization logic is not impossible but a little bit tricky. This posting finishes my little series about AppFabric ACS and WIF for this time and hopefully you found some useful tricks, tips, hacks and code pieces you can use in your own applications.
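For reference, the GetProfile() call used above is defined elsewhere; below is a rough sketch of what such an extension could look like. UserProfile and ProfileRepository are hypothetical placeholders for an application's profile store - the only WIF-specific part is reading the nameidentifier claim that ACS supplies for every identity provider.

using System.Linq;
using System.Security.Principal;
using Microsoft.IdentityModel.Claims; // WIF 1.0 (Microsoft.IdentityModel.dll)

// Hypothetical application types standing in for a real profile store.
public class UserProfile
{
    public string UserName { get; set; }
}

public static class ProfileRepository
{
    public static UserProfile GetByNameIdentifier(string nameId)
    {
        // Stand-in for a database lookup keyed on the ACS name identifier.
        return new UserProfile { UserName = "user-" + nameId };
    }
}

// Sketch of a GetProfile-style extension: find the unique name identifier claim
// and use it to load the matching profile, or return null when there is none.
public static class IdentityProfileExtensions
{
    private const string NameIdClaimType =
        "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier";

    public static UserProfile GetProfile(this IIdentity identity)
    {
        var claimsIdentity = identity as IClaimsIdentity;
        if (claimsIdentity == null)
            return null;

        var nameId = claimsIdentity.Claims
            .Where(c => c.ClaimType == NameIdClaimType)
            .Select(c => c.Value)
            .FirstOrDefault();

        return nameId == null ? null : ProfileRepository.GetByNameIdentifier(nameId);
    }
}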

    Read the article

  • C# in Depth, Third Edition by Jon Skeet, Manning Publications Co. Book Review

    - by Compudicted
    Originally posted on: http://geekswithblogs.net/Compudicted/archive/2013/10/24/c-in-depth-third-edition-by-jon-skeet-manning-publications.aspx I started reading this ebook on September 28, 2013, the same day it was sent my way by Manning Publications Co. for review while it still being fresh off the press. So 1st thing – thanks to Manning for this opportunity and a free copy of this must have on every C# developer’s desk book! Several hours ago I finished reading this book (well, except a for a large portion of its quite lengthy appendix). I jumped writing this review right away while still being full of emotions and impressions from reading it thoroughly and running code examples. Before I go any further I would like say that I used to program on various platforms using various languages starting with the Mainframe and ending on Windows, and I gradually shifted toward dealing with databases more than anything, however it happened with me to program in C# 1 a lot when it was first released and then some C# 2 with a big leap in between to C# 5. So my perception and experience reading this book may differ from yours. Also what I want to tell is somewhat funny that back then, knowing some Java and seeing C# 1 released, initially made me drawing a parallel that it is a copycat language, how wrong was I… Interestingly, Jon programs in Java full time, but how little it was mentioned in the book! So more on the book: Be informed, this is not a typical “Recipes”, “Cookbook” or any set of ready solutions, it is rather targeting mature, advanced developers who do not only know how to use a number of features, but are willing to understand how the language is operating “under the hood”. I must state immediately, at the same time I am glad the author did not go into the murky depths of the MSIL, so this is a very welcome decision on covering a modern language as C# for me, thank you Jon! Frankly, not all was that rosy regarding the tone and structure of the book, especially the the first half or so filled me with several negative and positive emotions overpowering each other. To expand more on that, some statements in the book appeared to be bias to me, or filled with pre-justice, it started to look like it had some PR-sole in it, but thankfully this was all gone toward the end of the 1st third of the book. Specifically, the mention on the C# language popularity, Java is the #1 language as per https://sites.google.com/site/pydatalog/pypl/PyPL-PopularitY-of-Programming-Language (many other sources put C at the top which I highly doubt), also many interesting functional languages as Clojure and Groovy appeared and gained huge traction which run on top of Java/JVM whereas C# does not enjoy such a situation. If we want to discuss the popularity in general and say how fast a developer can find a new job that pays well it would be indeed the very Java, C++ or PHP, never C#. Or that phrase on language preference as a personal issue? We choose where to work or we are chosen because of a technology used at a given software shop, not vice versa. The book though it technically very accurate with valid code, concise examples, but I wish the author would give more concrete, real-life examples on where each feature should be used, not how. 
Another point to realize before you get the book is that it is almost a live book which started to be written when even C# 3 wasn’t around so a lot of ground is covered (nearly half of the book) on the pre-C# 3 feature releases so if you already have a solid background in the previous releases and do not plan to upgrade, perhaps half of the book can be skipped, otherwise this book is surely highly recommended. Alas, for me it was a hard read, most of it. It was not boring (well, only may be two times), it was just hard to grasp some concepts, but do not get me wrong, it did made me pause, on several occasions, and made me read and re-read a page or two. At times I even wondered if I have any IQ at all (LOL). Be prepared to read A LOT on generics, not that they are widely used in the field (I happen to work as a consultant and went thru a lot of code at many places) I can tell my impression is the developers today in best case program using examples found at OpenStack.com. Also unlike the Java world where having the most recent version is nearly mandated by the OSS most companies on the Microsoft platform almost never tempted to upgrade the .Net version very soon and very often. As a side note, I was glad to see code recently that included a nullable variable (myvariable? notation) and this made me smile, besides, I recommended that person this book to expand her knowledge. The good things about this book is that Jon maintains an active forum, prepared code snippets and even a small program (Snippy) that is happy to run the sample code saving you from writing any plumbing code. A tad now on the C# language itself – it sure enjoyed a wonderful road toward perfection and a very high adoption, especially for ASP development. But to me all the recent features that made this statically typed language more dynamic look strange. Don’t we have F#? Which supposed to be the dynamic language? Why do we need to have a hybrid language? Now the developers live their lives in dualism of the static and dynamic variables! And LINQ to SQL, it is covered in depth, but wasn’t it supposed to be dropped? Also it seems that very little is being added, and at a slower pace, e.g. Roslyn will come in late 2014 perhaps, and will be probably the only main feature. Again, it is quite hard to read this book as various chapters, C# versions mentioned every so often only if I only could remember what was covered exactly where! So the fact it has so many jumps/links back and forth I recommend the ebook format to make the navigations easier to perform and I do recommend using software that allows bookmarking, also make sure you have access to plenty of coffee and pizza (hey, you probably know this joke – who a programmer is) ! In terms of closing, if you stuck at C# 1 or 2 level, it is time to embrace the power of C# 5! Finally, to compliment Manning, this book unlike from any other publisher so far, was the only one as well readable (put it formatted) on my tablet as in Adobe Reader on a laptop.

    Read the article

  • New <%: %> Syntax for HTML Encoding Output in ASP.NET 4 (and ASP.NET MVC 2)

    - by ScottGu
    [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu] This is the nineteenth in a series of blog posts I’m doing on the upcoming VS 2010 and .NET 4 release. Today’s post covers a small, but very useful, new syntax feature being introduced with ASP.NET 4 – which is the ability to automatically HTML encode output within code nuggets.  This helps protect your applications and sites against cross-site script injection (XSS) and HTML injection attacks, and enables you to do so using a nice concise syntax. HTML Encoding Cross-site script injection (XSS) and HTML encoding attacks are two of the most common security issues that plague web-sites and applications.  They occur when hackers find a way to inject client-side script or HTML markup into web-pages that are then viewed by other visitors to a site.  This can be used to both vandalize a site, as well as enable hackers to run client-script code that steals cookie data and/or exploits a user’s identity on a site to do bad things. One way to help mitigate against cross-site scripting attacks is to make sure that rendered output is HTML encoded within a page.  This helps ensures that any content that might have been input/modified by an end-user cannot be output back onto a page containing tags like <script> or <img> elements.  ASP.NET applications (especially those using ASP.NET MVC) often rely on using <%= %> code-nugget expressions to render output.  Developers today often use the Server.HtmlEncode() or HttpUtility.Encode() helper methods within these expressions to HTML encode the output before it is rendered.  This can be done using code like below: While this works fine, there are two downsides of it: It is a little verbose Developers often forget to call the HtmlEncode method New <%: %> Code Nugget Syntax With ASP.NET 4 we are introducing a new code expression syntax (<%:  %>) that renders output like <%= %> blocks do – but which also automatically HTML encodes it before doing so.  This eliminates the need to explicitly HTML encode content like we did in the example above.  Instead you can just write the more concise code below to accomplish the same thing: We chose the <%: %> syntax so that it would be easy to quickly replace existing instances of <%= %> code blocks.  It also enables you to easily search your code-base for <%= %> elements to find and verify any cases where you are not using HTML encoding within your application to ensure that you have the correct behavior. Avoiding Double Encoding While HTML encoding content is often a good best practice, there are times when the content you are outputting is meant to be HTML or is already encoded – in which case you don’t want to HTML encode it again.  ASP.NET 4 introduces a new IHtmlString interface (along with a concrete implementation: HtmlString) that you can implement on types to indicate that its value is already properly encoded (or otherwise examined) for displaying as HTML, and that therefore the value should not be HTML-encoded again.  The <%: %> code-nugget syntax checks for the presence of the IHtmlString interface and will not HTML encode the output of the code expression if its value implements this interface.  This allows developers to avoid having to decide on a per-case basis whether to use <%= %> or <%: %> code-nuggets.  Instead you can always use <%: %> code nuggets, and then have any properties or data-types that are already HTML encoded implement the IHtmlString interface. 
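As a small illustration of the escape mechanism (this helper is made up for the example and is not part of ASP.NET): a method that returns HtmlString is rendered as-is by a <%: %> nugget, while the user-supplied portion of the markup is still explicitly encoded before being wrapped.

using System.Web;

// Illustrative helper: because the return type implements IHtmlString, a <%: %>
// nugget renders the markup without encoding it again. Anything user-supplied is
// still encoded before being wrapped in the tag.
public static class HtmlBadge
{
    public static IHtmlString Badge(string userText)
    {
        string encoded = HttpUtility.HtmlEncode(userText);
        return new HtmlString("<span class=\"badge\">" + encoded + "</span>");
    }
}

In a view, <%: HtmlBadge.Badge(Model.UserName) %> then emits the span markup intact, while a plain <%: Model.UserName %> would have encoded any angle brackets the value contained.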
Using ASP.NET MVC HTML Helper Methods with <%: %> For a practical example of where this HTML encoding escape mechanism is useful, consider scenarios where you use HTML helper methods with ASP.NET MVC.  These helper methods typically return HTML.  For example: the Html.TextBox() helper method returns markup like <input type=”text”/>.  With ASP.NET MVC 2 these helper methods now by default return HtmlString types – which indicates that the returned string content is safe for rendering and should not be encoded by <%: %> nuggets.  This allows you to use these methods within both <%= %> code nugget blocks: As well as within <%: %> code nugget blocks: In both cases above the HTML content returned from the helper method will be rendered to the client as HTML – and the <%: %> code nugget will avoid double-encoding it. This enables you to default to always using <%: %> code nuggets instead of <%= %> code blocks within your applications.  If you want to be really hardcore you can even create a build rule that searches your application looking for <%= %> usages and flags any cases it finds as an error to enforce that HTML encoding always takes place. Scaffolding ASP.NET MVC 2 Views When you use VS 2010 (or the free Visual Web Developer 2010 Express) you’ll find that the views that are scaffolded using the “Add View” dialog now by default always use <%: %> blocks when outputting any content.  For example, below I’ve scaffolded a simple “Edit” view for an article object.  Note the three usages of <%: %> code nuggets for the label, textbox, and validation message (all output with HTML helper methods): Summary The new <%: %> syntax provides a concise way to automatically HTML encode content and then render it as output.  It allows you to make your code a little less verbose, and to easily check/verify that you are always HTML encoding content throughout your site.  This can help protect your applications against cross-site script injection (XSS) and HTML injection attacks.  Hope this helps, Scott

    Read the article

  • Transparency and AlphaBlending

    - by TechTwaddle
    In this post we'll look at the AlphaBlend() api and how it can be used for semi-transparent blitting. AlphaBlend() takes a source device context and a destination device context (DC) and combines the bits in such a way that it gives a transparent effect. Follow the links for the msdn documentation. So lets take a image like, and AlphaBlend() it on our window. The code to do so is below, (under the WM_PAINT message of WndProc) HBITMAP hBitmap=NULL, hBitmapOld=NULL; HDC hMemDC=NULL; BLENDFUNCTION bf; hdc = BeginPaint(hWnd, &ps); hMemDC = CreateCompatibleDC(hdc); hBitmap = LoadBitmap(g_hInst, MAKEINTRESOURCE(IDB_BITMAP1)); hBitmapOld = SelectObject(hMemDC, hBitmap); bf.BlendOp = AC_SRC_OVER; bf.BlendFlags = 0; bf.SourceConstantAlpha = 80; //transparency value between 0-255 bf.AlphaFormat = 0;    AlphaBlend(hdc, 0, 25, 240, 100, hMemDC, 0, 0, 240, 100, bf); SelectObject(hMemDC, hBitmapOld); DeleteDC(hMemDC); DeleteObject(hBitmap); EndPaint(hWnd, &ps);   The code above creates a memory DC (hMemDC) using CreateCompatibleDC(), loads a bitmap onto the memory DC and AlphaBlends it on the device DC (hdc), with a transparency value of 80. The result is: Pretty simple till now. Now lets try to do something a little more exciting. Lets get two images involved, each overlapping the other, giving a better demonstration of transparency. I am also going to add a few buttons so that the user can increase or decrease the transparency by clicking on the buttons. Since this is the first time I played around with GDI apis, I ran into something that everybody runs into sometime or the other, flickering. When clicking the buttons the images would flicker a lot, I figured out why and used something called double buffering to avoid flickering. We will look at both my first implementation and the second implementation just to give the concept a little more depth and perspective. A few pre-conditions before I dive into the code: - hBitmap and hBitmap2 are handles to the two images obtained using LoadBitmap(), these variables are global and are initialized under WM_CREATE - The two buttons in the application are labeled Opaque++ (make more opaque, less transparent) and Opaque-- (make less opaque, more transparent) - DrawPics(HWND hWnd, int step=0); is the function called to draw the images on the screen. This is called from under WM_PAINT and also when the buttons are clicked. When Opaque++ is clicked the 'step' value passed to DrawPics() is +20 and when Opaque-- is clicked the 'step' value is -20. 
The default value of 'step' is 0 Now lets take a look at my first implementation: //this funciton causes flicker, cos it draws directly to screen several times void DrawPics(HWND hWnd, int step) {     HDC hdc=NULL, hMemDC=NULL;     BLENDFUNCTION bf;     static UINT32 transparency = 100;     //no point in drawing when transparency is 0 and user clicks Opaque--     if (transparency == 0 && step < 0)         return;     //no point in drawing when transparency is 240 (opaque) and user clicks Opaque++     if (transparency == 240 && step > 0)         return;         hdc = GetDC(hWnd);     if (!hdc)         return;     //create a memory DC     hMemDC = CreateCompatibleDC(hdc);     if (!hMemDC)     {         ReleaseDC(hWnd, hdc);         return;     }     //while increasing transparency, clear the contents of screen     if (step < 0)     {         RECT rect = {0, 0, 240, 200};         FillRect(hdc, &rect, (HBRUSH)GetStockObject(WHITE_BRUSH));     }     SelectObject(hMemDC, hBitmap2);     BitBlt(hdc, 0, 25, 240, 100, hMemDC, 0, 0, SRCCOPY);         SelectObject(hMemDC, hBitmap);     transparency += step;     if (transparency >= 240)         transparency = 240;     if (transparency <= 0)         transparency = 0;     bf.BlendOp = AC_SRC_OVER;     bf.BlendFlags = 0;     bf.SourceConstantAlpha = transparency;     bf.AlphaFormat = 0;            AlphaBlend(hdc, 0, 75, 240, 100, hMemDC, 0, 0, 240, 100, bf);     DeleteDC(hMemDC);     ReleaseDC(hWnd, hdc); }   In the code above, we first get the window DC using GetDC() and create a memory DC using CreateCompatibleDC(). Then we select hBitmap2 onto the memory DC and Blt it on the window DC (hdc). Next, we select the other image, hBitmap, onto memory DC and AlphaBlend() it over window DC. As I told you before, this implementation causes flickering because it draws directly on the screen (hdc) several times. The video below shows what happens when the buttons were clicked rapidly: Well, the video recording tool I use captures only 15 frames per second and so the flickering is not visible in the video. So you're gonna have to trust me on this, it flickers (; To solve this problem we make sure that the drawing to the screen happens only once and to do that we create an additional memory DC, hTempDC. We perform all our drawing on this memory DC and finally when it is ready we Blt hTempDC on hdc, and the images are displayed in one go. 
Here is the code for our new DrawPics() function: //no flicker void DrawPics(HWND hWnd, int step) {     HDC hdc=NULL, hMemDC=NULL, hTempDC=NULL;     BLENDFUNCTION bf;     HBITMAP hBitmapTemp=NULL, hBitmapOld=NULL;     static UINT32 transparency = 100;     //no point in drawing when transparency is 0 and user clicks Opaque--     if (transparency == 0 && step < 0)         return;     //no point in drawing when transparency is 240 (opaque) and user clicks Opaque++     if (transparency == 240 && step > 0)         return;         hdc = GetDC(hWnd);     if (!hdc)         return;     hMemDC = CreateCompatibleDC(hdc);     hTempDC = CreateCompatibleDC(hdc);     hBitmapTemp = CreateCompatibleBitmap(hdc, 240, 150);     hBitmapOld = (HBITMAP)SelectObject(hTempDC, hBitmapTemp);     if (!hMemDC)     {         ReleaseDC(hWnd, hdc);         return;     }     //while increasing transparency, clear the contents     if (step < 0)     {         RECT rect = {0, 0, 240, 150};         FillRect(hTempDC, &rect, (HBRUSH)GetStockObject(WHITE_BRUSH));     }     SelectObject(hMemDC, hBitmap2);     //Blt hBitmap2 directly to hTempDC     BitBlt(hTempDC, 0, 0, 240, 100, hMemDC, 0, 0, SRCCOPY);         SelectObject(hMemDC, hBitmap);     transparency += step;     if (transparency >= 240)         transparency = 240;     if (transparency <= 0)         transparency = 0;     bf.BlendOp = AC_SRC_OVER;     bf.BlendFlags = 0;     bf.SourceConstantAlpha = transparency;     bf.AlphaFormat = 0;            AlphaBlend(hTempDC, 0, 50, 240, 100, hMemDC, 0, 0, 240, 100, bf);     //now hTempDC is ready, blt it directly on hdc     BitBlt(hdc, 0, 25, 240, 150, hTempDC, 0, 0, SRCCOPY);     SelectObject(hTempDC, hBitmapOld);     DeleteObject(hBitmapTemp);     DeleteDC(hMemDC);     DeleteDC(hTempDC);     ReleaseDC(hWnd, hdc); }   This function is very similar to the first version, except for the use of hTempDC. Another point to note is the use of CreateCompatibleBitmap(). When a memory device context is created using CreateCompatibleDC(), the context is exactly one monochrome pixel high and one monochrome pixel wide. So in order for us to draw anything onto hTempDC, we first have to set a bitmap on it. We use CreateCompatibleBitmap() to create a bitmap of required dimension (240x150 above), and then select this bitmap onto hTempDC. Think of it as utilizing an extra canvas, drawing everything on the canvas and finally transferring the contents to the display in one scoop. And with this version the flickering is gone, video follows:   If you want the entire solutions source code then leave a message, I will share the code over SkyDrive.

    Read the article

  • Run Your Tests With Any NUnit Version

    - by Alois Kraus
    I always thought that the NUnit test runners and the test assemblies need to reference the same NUnit.Framework version. I wanted to be able to run my test assemblies with the newest GUI runner (currently 2.5.3). Ok so all I need to do is to reference both NUnit versions the newest one and the official for the current project. There is a nice article form Kent Bogart online how to reference the same assembly multiple times with different versions. The magic works by referencing one NUnit assembly with an alias which does prefix all types inside it. Then I could decorate my tests with the TestFixture and Test attribute from both NUnit versions and everything worked fine except that this was ugly. After playing a little bit around to make it simpler I found that I did not need to reference both NUnit.Framework assemblies. The test runners do not require the TestFixture and Test attribute in their specific version. That is really neat since the test runners are instructed by attributes what to do in a declarative way there is really no need to tie the runners to a specific version. At its core NUnit has this little method hidden to find matching TestFixtures and Tests   public bool CanBuildFrom(Type type) {     if (!(!type.IsAbstract || type.IsSealed))     {         return false;     }     return (((Reflect.HasAttribute(type,           "NUnit.Framework.TestFixtureAttribute", true) ||               Reflect.HasMethodWithAttribute(type, "NUnit.Framework.TestAttribute"       , true)) ||               Reflect.HasMethodWithAttribute(type, "NUnit.Framework.TestCaseAttribute"   , true)) ||               Reflect.HasMethodWithAttribute(type, "NUnit.Framework.TheoryAttribute"     , true)); } That is versioning and backwards compatibility at its best. I tell NUnit what to do by decorating my tests classes with NUnit Attributes and the runner executes my intent without the need to bind me to a specific version. The contract between NUnit versions is actually a bit more complex (think of AssertExceptions) but this is also handled nicely by using not the concrete type but simply to check for the catched exception type by string. What can we learn from this? Versioning can be easy if the contract is small and the users of your library use it in a declarative way (Attributes). Everything beyond it will force you to reference several versions of the same assembly with all its consequences. Type equality is lost between versions so none of your casts will work. That means that you cannot simply use IBigInterface in two versions. You will need a wrapper to call the correct versioned one. To get out of this mess you can use one (and only one) version agnostic driver to encapsulate your business logic from the concrete versions. This is of course more work but as NUnit shows it can be easy. Simplicity is therefore not a nice thing to have but also requirement number one if you intend to make things more complex in version two and want to support any version (older and newer). Any interaction model above easy will not be maintainable. There are different approached to versioning. Below are my own personal observations how versioning works within the  .NET Framwork and NUnit.   Versioning Models 1. Bug Fixing and New Isolated Features When you only need to fix bugs there is no need to break anything. This is especially true when you have a big API surface. 
Microsoft did this with the .NET Framework 3.0 which did leave the CLR as is but delivered new assemblies for the features WPF, WCF and Windows Workflow Foundations. Their basic model was that the .NET 2.0 assemblies were declared as red assemblies which must not change (well mostly but each change was carefully reviewed to minimize the risk of breaking changes as much as possible) whereas the new green assemblies of .NET 3,3.5 did not have such obligations since they did implement new unrelated features which did not have any impact on the red assemblies. This is versioning strategy aimed at maximum compatibility and the delivery of new unrelated features. If you have a big API surface you should strive hard to do the same or you will break your customers code with every release. 2. New Breaking Features There are times when really new things need to be added to an existing product. The .NET Framework 4.0 did change the CLR in many ways which caused subtle different behavior although the API´s remained largely unchanged. Sometimes it is possible to simply recompile an application to make it work (e.g. changed method signature void Func() –> bool Func()) but behavioral changes need much more thought and cannot be automated. To minimize the impact .NET 2.0,3.0,3.5 applications will not automatically use the .NET 4.0 runtime when installed but they will keep using the “old” one. What is interesting is that a side by side execution model of both CLR versions (2 and 4) within one process is possible. Key to success was total isolation. You will have 2 GCs, 2 JIT compilers, 2 finalizer threads within one process. The two .NET runtimes cannot talk  (except via the usual IPC mechanisms) to each other. Both runtimes share nothing and run independently within the same process. This enables Explorer plugins written for the CLR 2.0 to work even when a CLR 4 plugin is already running inside the Explorer process. The price for isolation is an increased memory footprint because everything is loaded and running two times.   3. New Non Breaking Features It really depends where you break things. NUnit has evolved and many different Assert, Expect… methods have been added. These changes are all localized in the NUnit.Framework assembly which can be easily extended. As long as the test execution contract (TestFixture, Test, AssertException) remains stable it is possible to write test executors which can run tests written for NUnit 10 because the execution contract has not changed. It is possible to write software which executes other components in a version independent way but this is only feasible if the interaction model is relatively simple.   Versioning software is hard and it looks like it will remain hard since you suddenly work in a severely constrained environment when you try to innovate and to keep everything backwards compatible at the same time. These are contradicting goals and do not play well together. The easiest way out of this is to carefully watch what your customers are doing with your software. Minimizing the impact is much easier when you do not need to guess how many people will be broken when this or that is removed.
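To make the "declarative contract" point a little more tangible, here is a minimal sketch - mirroring the spirit of the CanBuildFrom method shown earlier, not NUnit's actual implementation - of a runner-side scan that identifies test fixtures purely by attribute name, so it works regardless of which NUnit.Framework version the test assembly references.

using System;
using System.Linq;
using System.Reflection;

// Sketch only: matching attributes by full type name means the scanner needs no
// reference to any particular NUnit.Framework version.
static class VersionAgnosticScanner
{
    static bool HasAttribute(MemberInfo member, string fullAttributeName)
    {
        return member.GetCustomAttributes(true)
                     .Any(a => a.GetType().FullName == fullAttributeName);
    }

    static void Main(string[] args)
    {
        Assembly testAssembly = Assembly.LoadFrom(args[0]);

        var fixtures = testAssembly.GetTypes()
            .Where(t => HasAttribute(t, "NUnit.Framework.TestFixtureAttribute") ||
                        t.GetMethods().Any(m => HasAttribute(m, "NUnit.Framework.TestAttribute")));

        foreach (Type fixture in fixtures)
            Console.WriteLine("Test fixture: " + fixture.FullName);
    }
}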

    Read the article

  • Developer’s Life – Disaster Lessons – Notes from the Field #039

    - by Pinal Dave
    [Note from Pinal]: This is a 39th episode of Notes from the Field series. What is the best solution do you have when you encounter a disaster in your organization. Now many of you would answer that in this scenario you would have another standby machine or alternative which you will plug in. Now let me ask second question – What would you do if you as an individual faces disaster?  In this episode of the Notes from the Field series database expert Mike Walsh explains a very crucial issue we face in our career, which is not technical but more to relate to human nature. Read on this may be the best blog post you might read in recent times. Howdy! When it was my turn to share the Notes from the Field last time, I took a departure from my normal technical content to talk about Attitude and Communication.(http://blog.sqlauthority.com/2014/05/08/developers-life-attitude-and-communication-they-can-cause-problems-notes-from-the-field-027/) Pinal said it was a popular topic so I hope he won’t mind if I stick with Professional Development for another of my turns at sharing some information here. Like I said last time, the “soft skills” of the IT world are often just as important – sometimes more important – than the technical skills. As a consultant with Linchpin People – I see so many situations where the professional skills I’ve gained and use are more valuable to clients than knowing the best way to tune a query. Today I want to continue talking about professional development and tell you about the way I almost got myself hit by a train – and why that matters in our day jobs. Sometimes we can learn a lot from disasters. Whether we caused them or someone else did. If you are interested in learning about some of my observations in these lessons you can see more where I talk about lessons from disasters on my blog. For now, though, onto how I almost got my vehicle hit by a train… The Train Crash That Almost Was…. My family and I own a little schoolhouse building about a 10 mile drive away from our house. We use it as a free resource for families in the area that homeschool their children – so they can have some class space. I go up there a lot to check in on the property, to take care of the trash and to do work on the property. On the way there, there is a very small Stop Sign controlled railroad intersection. There is only two small freight trains a day passing there. Actually the same train, making a journey south and then back North. That’s it. This road is a small rural road, barely ever a second car driving in the neighborhood there when I am. The stop sign is pretty much there only for the train crossing. When we first bought the building, I was up there a lot doing renovations on the property. Being familiar with the area, I am also familiar with the train schedule and know the tracks are normally free of trains. So I developed a bad habit. You see, I’d approach the stop sign and slow down as I roll through it. Sometimes I’d do a quick look and come to an “almost” stop there but keep on going. I let my impatience and complacency take over. And that is because most of the time I was going there long after the train was done for the day or in between the runs. This habit became pretty well established after a couple years of driving the route. The behavior reinforced a bit by the success ratio. I saw others doing it as well from the neighborhood when I would happen to be there around the time another car was there. Well. You already know where this ends up by the title and backstory here. 
A few months ago I came to that little crossing, and I started to do the normal routine. I’d pretty much stopped looking in some respects because of the pattern I’d gotten into.  For some reason I looked and heard and saw the train slowly approaching and slammed on my brakes and stopped. It was an abrupt stop, and it was close. I probably would have made it okay, but I sat there thinking about lessons for IT professionals from the situation once I started breathing again and watched the cars loaded with sand and propane slowly labored down the tracks… Here are Those Lessons… It’s easy to get stuck into a routine – That isn’t always bad. Except when it’s a bad routine. Momentum and inertia are powerful. Once you have a habit and a routine developed – it’s really hard to break that. Make sure you are setting the right routines and habits TODAY. What almost dangerous things are you doing today? How are you almost messing up your production environment today? Stop doing that. Be Deliberate – (Even when you are the only one) – Like I said – a lot of people roll through that stop sign. Perhaps the neighbors or other drivers think “why is he fully stopping and looking… The train only comes two times a day!” – they can think that all they want. Through deliberate actions and forcing myself to pay attention, I will avoid that oops again. Slow down. Take a deep breath. Be Deliberate in your job. Pay attention to the small stuff and go out of your way to be careful. It will save you later. Be Observant – Keep your eyes open. By looking around, observing the situation and understanding what your servers, databases, users and vendors are doing – you’ll notice when something is out of place. But if you don’t know what is normal, if you don’t look to make sure nothing has changed – that train will come and get you. Where can you be more observant? What warning signs are you ignoring in your environment today? In the IT world – trains are everywhere. Projects move fast. Decisions happen fast. Problems turn from a warning sign to a disaster quickly. If you get stuck in a complacent pattern of “Everything is okay, it always has been and always will be” – that’s the time that you will most likely get stuck in a bad situation. Don’t let yourself get complacent, don’t let your team get complacent. That will lead to being proactive. And a proactive environment spends less money on consultants for troubleshooting problems you should have seen ahead of time. You can spend your money and IT budget on improving for your customers. If you want to get started with performance analytics and triage of virtualized SQL Servers with the help of experts, read more over at Fix Your SQL Server. Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Customize the Default Screensavers in Windows 7 and Vista

    - by Matthew Guay
    Windows 7 and Vista include a nice set of backgrounds, but unfortunately most of them aren’t configurable by default.  Thanks to a free app and some registry changes, however, you can make the default screensavers uniquely yours! Customize the default screensavers If you’ve ever pressed the Customize button on most of the default screensavers in Windows 7 and Vista, you were probably greeted with this message: A little digging in the registry shows that this isn’t fully correct.  The default screensavers in Vista and 7 do have options you can set, but they’re not obvious.  With the help of an app or some registry tips, you can easily customize the screensavers to be uniquely yours.  Here’s how you can do it with an app or in the registry. Customize Windows Screensavers with System Screensavers Tweaker Download the System Screensavers Tweaker (link below), and unzip the folder.  Run nt6srccfg.exe in the folder to tweak your screensavers.  This application lets you tweak the screensavers’ registry settings graphically, and it works great in all editions of Windows Vista and 7, including x64 versions. Change any of the settings you want in the screensaver tweaker, and click Apply. To preview the changes to your screensaver, open the Screen Saver settings window as normal by right-clicking on the desktop, and selecting Personalize. Click on the Screensaver button on the bottom right. Now, select your modified screensaver, and click Preview to see your changes. You can change a wide variety of settings for the Bubbles, Ribbons, and Mystify screensavers in Windows 7 and Vista, as well as the Aurora screensaver in Windows Vista.  The tweaks to the Bubbles screensaver are especially nice.  Here’s how the Bubbles look without transparency. And, by tweaking a little more, you get a screensaver that looks more like a screen full of marbles. Ribbons and Mystify each have less settings, but still can produce some unique effects.   How’s that for a brilliant screensaver? And, if you want to return your screensavers to their default settings, simply run the System Screensavers Tweaker and select Reset to defaults on any screensaver you wish to reset. Customize Windows Screensavers in the Registry If you prefer to roll up your sleeves and tweak Windows under-the-hood, then here’s how you can customize the screensavers yourself in the Registry.  Type regedit into the search box in the Start menu, browse to the key for each screensaver, and add or modify the DWORD values listed for that screensaver using the Decimal base. Please Note: Tweaking the Registry can be difficult, so if you’re unsure, just use the tweaking application above. Also, you’ll probably want to create a System Restore Point.   Bubbles To edit the Bubbles screensaver, browse to the following in regedit: HKEY_CURRENT_USER\Software\Microsoft\Windows\Current Version\Screensavers\Bubbles Now, add or modify the following DWORD values to tweak the screensaver: MaterialGlass – enter 0 for solid or 1 for transparent bubbles Radius – enter a number between 1090000000 and 1130000000; the larger the number, the larger the bubbles’ radius ShowBubbles – enter 0 to show a black background or 1 to show the current desktop behind the bubbles ShowShadows – enter 0 for no shadow or 1 for shadows behind the bubbles SphereDensity – enter a number from 1000000000 to 2100000000; the higher the number, the more bubbles on the screen. TurbulenceNumOctaves – enter a number from 1 to 255; the higher the number, the faster the bubble colors will change. 
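If you would rather script these tweaks than click through regedit, the same DWORD values can be written with a few lines of code. The sketch below applies a couple of the Bubbles values just listed; it assumes the usual CurrentVersion spelling of the key, and as with any registry edit you should back the key up first.

using Microsoft.Win32;

// Sketch: writes a few of the Bubbles screensaver values described above.
// These are per-user (HKCU) settings; preview the screensaver afterwards to see the effect.
class BubblesTweak
{
    static void Main()
    {
        const string keyPath =
            @"Software\Microsoft\Windows\CurrentVersion\Screensavers\Bubbles";

        using (RegistryKey key = Registry.CurrentUser.CreateSubKey(keyPath))
        {
            key.SetValue("MaterialGlass", 0, RegistryValueKind.DWord);          // solid bubbles
            key.SetValue("ShowBubbles", 0, RegistryValueKind.DWord);            // black background
            key.SetValue("ShowShadows", 1, RegistryValueKind.DWord);            // draw shadows
            key.SetValue("SphereDensity", 2000000000, RegistryValueKind.DWord); // lots of bubbles
        }
    }
}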
Ribbons To edit the Ribbons screensaver, browse to the following in regedit: HKEY_CURRENT_USER\Software\Microsoft\Windows\Current Version\Screensavers\Ribbons Now, add or modify the following DWORD values to tweak the screensaver: Blur – enter 0 to prevent ribbons from fading, or 1 to have them fade away after a few moments. Numribbons – enter a number from 1 to 100; the higher the number, the more ribbons on the screen. RibbonWidth – enter a number from 1000000000 to 1080000000; the higher the number, the thicker the ribbons. Mystify To edit the Mystify screensaver, browse to the following in regedit: HKEY_CURRENT_USER\Software\Microsoft\Windows\Current Version\Screensavers\Mystify Now, add or modify the following DWORD values to tweak the screensaver: Blur – enter 0 to prevent lines from fading, or 1 to have them fade away after a few moments. LineWidth – enter a number from 1000000000 to 1080000000; the higher the number, the wider the lines. NumLines – enter a number from 1 to 100; the higher the value, the more lines on the screen. Aurora – Windows Vista only To edit the Aurora screensaver in Windows Vista, browse to the following in regedit: HKEY_CURRENT_USER\Software\Microsoft\Windows\Current Version\Screensavers\Aurora Now, add or modify the following DWORD values to tweak the screensaver: Amplitude – enter a value from 500000000 to 2000000000; the higher the value, the slower the motion. Brightness – enter a value from 1000000000 to 1050000000; the higher the value, the brighter the affect. NumLayers – enter a value from 1 to 15; the higher the value, the more aurora layers displayed. Speed – enter a value from 1000000000 to 2100000000; the higher the value, the faster the cycling. Conclusion Although the default screensavers are nice, they can be boring after awhile with their default settings.  But with these tweaks, you can create a variety of vibrant screensavers that should keep your desktop fresh and interesting. Link Download the System Screensavers Tweaker Similar Articles Productive Geek Tips Create Icons to Start the Screensaver on Windows 7 or VistaMake Your Windows XP Logon Screen Look Like Windows VistaSpeed up Windows Vista Start Menu Search By Limiting ResultsRoundup: 16 Tweaks to Windows Vista Look & FeelSet XP as the Default OS in a Windows Vista Dual-Boot Setup TouchFreeze Alternative in AutoHotkey The Icy Undertow Desktop Windows Home Server – Backup to LAN The Clear & Clean Desktop Use This Bookmarklet to Easily Get Albums Use AutoHotkey to Assign a Hotkey to a Specific Window Latest Software Reviews Tinyhacker Random Tips DVDFab 6 Revo Uninstaller Pro Registry Mechanic 9 for Windows PC Tools Internet Security Suite 2010 NachoFoto Searches Images in Real-time Office 2010 Product Guides Google Maps Place marks – Pizza, Guns or Strip Clubs Monitor Applications With Kiwi LocPDF is a Visual PDF Search Tool Download Free iPad Wallpapers at iPad Decor

    Read the article

  • Customize the Five Windows Folder Templates

    - by Mark Virtue
    Are you’re particular about the way Windows Explorer presents each folder’s contents? Here we show you how to take advantage of Explorer’s built-in templates, which cuts down the time it takes to do customizations. Note: The techniques in this article apply to Windows XP, Vista, and Windows 7. When opening a folder for the first time in Windows Explorer, we are presented with a standard default view of the files and folders in that folder. It may be that the items are presented are perfectly fine, but on the other hand, we may want to customize the view.  The aspects of it that we can customize are the following: The display type (list view, details, tiles, thumbnails, etc) Which columns are displayed, and in which order The widths of the visible columns The order in which the files and folders are sorted Any file groupings Thankfully, Windows offers us a shortcut.  A particular folder’s settings can be used as a “template” for other, similar folders.  In fact, we can store up to five separate sets of folder presentation configurations.  Once we save the settings for a particular template, that template can then be applied to other folders. Customize Your First Folder We’ll start by setting up the first of our templates – the default one.  Once we create this template and apply it, the vast majority of the folders in our file system will change to match it, so it’s important that we set it up very carefully.  The first step in creating and applying the template is to customize one folder with the settings that all the rest will have. Choose a folder that is typical of the folders that you wish to have this default template.  Select it in Windows Explorer.  To ensure that it is a suitable candidate, right-click the folder name and select Properties, then go to the Customize tab.  Ensure that this folder is marked as General Items.  If it is not, either choose a different folder or select General Items from the list. Click OK.  Now we’re ready to customize our first folder. Changing the way one single folder is presented is straightforward.  We start with the folder’s display type.  Click the Change your view button in the top-right corner of every Explorer window. Each time you click the button, the folder’s view cycles to the next view type.  Alternatively you can click the little down-arrow next to the button to see all the display types at once, and select the one you want. Click the view you want, or drag the slider next to the one you want. If you have chosen Details, then the next thing you may wish to change is which columns are displayed, and the order of these.  To choose which columns are displayed, simply right-click on any column heading.  A list of the columns currently being display appears. Simply uncheck a column if you don’t want it displayed, and check the columns that you want displayed.  If you want some information displayed about your files that is not listed here, then click the More… button for a full list of file attributes. There’s a lot of them! To change the order of the columns that are currently being displayed, simply click on a column heading and drag it to where you think it should be.  To change the width of a column, click the line that represents the right-hand edge of the column and drag it left or right. To sort by a column, click once on that column.  To reverse the sort-order, click that same column again. 
To change the groupings of the files in the folder, right-click in a blank area of the folder, select Group by, and select the appropriate column.

Apply This Default Template to All Similar Folders
Once you have the folder exactly the way you want it, we can use it as our default template for most of the folders in our file system. To do this, ensure that you are still in the folder you just customized, and then, from the Organize menu in Explorer, click on Folder and search options. Then select the View tab and click the Apply to Folders button. After you’ve clicked OK, visit some of the other folders in your file system. You should see that most have taken on these new settings.

What we’ve just done, in effect, is customize the General Items template. This is one of five templates that Windows Explorer uses to display folder contents. The five templates are called (in Windows 7):
General Items
Documents
Pictures
Music
Videos

When a folder is opened, Windows Explorer examines its contents to see if it can automatically determine which folder template to use to display them. If it is not obvious that the folder contents fall into any of the last four templates, Windows Explorer chooses the General Items template. That’s why most of the folders in your file system are shown using the General Items template.

Changing the Other Four Templates
If you want to adjust the other four templates, the process is very similar to what we’ve just done. If you wanted to change the Music template, for example, the steps would be as follows:
Select a folder that contains music items
Apply the existing Music template to the folder (even if it doesn’t look like you want it to)
Customize the folder to your personal preferences
Apply the new template to all “Music” folders
A fifth step would be: when you open a folder that contains music items but is not automatically displayed using the Music template, manually select the Music template for that folder.

First, select a folder that contains music items. It will probably be displayed using the existing Music template. Next, ensure that it is using the Music template; if it’s not, manually select the Music template. Then customize the folder to suit your personal preferences (here we’ve added a couple of columns and sorted by Artist). Now we can set this view to be our Music template: choose Organize, then the View tab, and click the Apply to Folders button.

Note: The only folders that will inherit these settings are the ones that are currently (or will soon be) using the Music template.

Now, if you have any folder that contains music items and you want it to inherit all of these settings, right-click the folder name, choose Properties, and select that this folder should use the Music template. You can also check the box entitled “Also apply this template to all subfolders” if you want to save yourself even more time with all the sub-folders.

Conclusion
It’s neat to be able to set up templates for your folder views like this. It’s a shame that Microsoft didn’t take the concept just a little further and allow you to create as many templates as you want.
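As a side note, the per-folder template choice that the Customize tab makes is stored in a hidden desktop.ini file inside the folder. The Python sketch below illustrates that mechanism as I understand it, assuming the [ViewState] FolderType entry that Vista and Windows 7 write (with names such as Documents, Pictures, Music, Videos, or Generic); the folder path is purely hypothetical, and the Properties dialog described above remains the safer route.

import os
import subprocess

def apply_folder_template(folder, template="Music"):
    # Write a desktop.ini telling Explorer which template to use for this folder.
    # Assumes the file does not already exist as hidden/system (delete it first if it does).
    ini_path = os.path.join(folder, "desktop.ini")
    with open(ini_path, "w", newline="") as ini:
        ini.write("[ViewState]\r\nMode=\r\nVid=\r\nFolderType={}\r\n".format(template))
    # Explorer only honors desktop.ini when the file is hidden+system and the
    # folder itself carries the read-only (or system) attribute.
    subprocess.run(["attrib", "+h", "+s", ini_path], check=True)
    subprocess.run(["attrib", "+r", folder], check=True)

# Hypothetical example: treat a mixtapes folder as a Music folder.
apply_folder_template(r"C:\Users\Public\Music\Mixtapes", "Music")

You may need to reopen the folder (or restart Explorer) before the new template takes effect.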

    Read the article

  • Creating A SharePoint Parent/Child List Relationship – SharePoint 2010 Edition

    - by Mark Rackley
    Hey blog readers… It has been almost two years since I posted my most-read blog on creating a parent/child list relationship in SharePoint 2007: Creating a SharePoint List Parent / Child Relationship - Out of the Box. And then a year ago I improved on my method and redid the blog post, still for SharePoint 2007: Creating a SharePoint List Parent/Child Relationship – VIDEO REMIX.

Since then many of you have been asking me how to get this to work in SharePoint 2010, and frankly I have just not had time to look into it. I wish I could have jumped into this sooner, but I have only recently begun to look at it. Well… after all this time I have actually come up with two solutions that work. Neither of them is as clean as I’d like it to be, but I wanted to get something in your hands that you can start using today. Hopefully in the coming weeks and months I’ll be able to improve upon this further and give you some better options.

For the most part, the process is identical to the 2007 process, but you have probably found out that the list view web parts in 2010 behave differently, and getting the parent ID to your new child form can be a pain in the rear (at least that’s what I’ve discovered). Anyway, like I said, I have found a couple of solutions that work. If you know of a better one, please let us know; it bugs me that this is not as elegant as my 2007 implementation.

Getting on the same page
The first thing I’d recommend is recreating that earlier post, Creating a SharePoint List Parent/Child Relationship – VIDEO REMIX, in SharePoint 2010. There are some vague differences, but it’s basically the same. Here’s a quick video of me doing this in SP 2010: Creating Lists necessary for this blog post.

Now that you have the lists created, let’s set up the New Time form to use a query string variable to populate the Parent ID field: Creating parameters in Child’s new item form to set parent ID. Did I talk fast enough through both of those videos? Hopefully by now that stuff is old hat to you, but I wanted to make sure everyone could get on the same page. Okay… let’s get started.

Solution 1 – XSLTListView with JavaScript
This solution is the more elegant of the two, but it does require a little JavaScript. The other solution does not use JavaScript, but it also doesn’t use the pretty new SP 2010 pop-ups. I’ll let you decide which you like better. The basic steps of this solution are:
Insert a Related Item View
Insert a ContentEditorWebPart
Insert script in the ContentEditorWebPart that pulls the ID from the query string and calls the method to insert a new item on the child entry form
Hide the toolbar on the data view to remove the “add new item” link
Again, you don’t HAVE to use a CEWP; you could just put the JavaScript directly in the page using SPD.
Anyway, here is how I did it (video): Using Related Item View / JavaScript

Here’s the JavaScript I used in my Content Editor Web Part:

<script type="text/javascript">
function NewTime() {
    // Get the Query String values and split them out into the vals array
    var vals = new Object();
    var qs = location.search.substring(1, location.search.length);
    var args = qs.split("&");
    for (var i = 0; i < args.length; i++) {
        var nameVal = args[i].split("=");
        var temp = unescape(nameVal[1]).split('+');
        nameVal[1] = temp.join(' ');
        vals[nameVal[0]] = nameVal[1];
    }
    var issueID = vals["ID"];
    // use this to bring up the pretty pop up
    NewItem2(event, "http://sp2010dev:1234/Lists/Time/NewForm.aspx?IssueID=" + issueID);
    // use this to open a new window
    //window.location = "http://sp2010dev:1234/Lists/Time/NewForm.aspx?IssueID=" + issueID;
}
</script>

Solution 2 – DataFormWebPart and the Same 2007 Process
This solution is a little more of a hack, but it is also much closer to the process we used in SP 2007. So, if you don’t mind losing the pretty pop-up and prefer the comfort of what you are used to, you can give this one a try. The basic steps are:
Insert a DataFormWebPart instead of the List Data View
Create a Parameter on the DataFormWebPart to store the “ID” query string variable
Filter the DataFormWebPart using the Parameter
Insert a link at the bottom of the DataForm Web Part that points to the child’s new item form and passes in the parent ID using the Parameter

See… like I told you, the exact same process as in 2007 (except using the DataFormWebPart). The DataFormWebPart also requires a lot more work to make it look “pretty”, but it’s just table rows and cells, and can be configured pretty painlessly. Here is that video: Using DataForm Web Part

One quick update… if you change the link in this solution from:

<tr>
  <td><a href="http://sp2010dev:1234/Lists/Time/NewForm.aspx?IssueID={$IssueIDParam}">Click here to create new item...</a>
  </td>
</tr>

to:

<tr>
  <td>
    <a href="javascript:NewItem2(event,'http://sp2010dev:1234/Lists/Time/NewForm.aspx?IssueID={$IssueIDParam}');">Click here to create new item...</a>
  </td>
</tr>

it will open in the pretty pop-up and act the same as Solution 1. So both solutions will now behave the same for the end user; it just depends on which you want to implement.

That’s all for now… Remember, in both solutions, once you have them working you can make the “IssueID” field invisible to users by using the “ms-hidden” class (it’s covered in my previous blog post on the subject, linked above). That’s basically all there is to it! No pithy or witty closing this time… I am sorry it took me so long to dive into this and I hope your questions are answered. As I become more polished myself I will try to come up with a cleaner solution that will make everyone happy… As always, thanks for taking the time to stop by.

    Read the article

< Previous Page | 160 161 162 163 164 165 166 167 168 169 170 171  | Next Page >