Search Results

Search found 2671 results on 107 pages for 'cool rr'.

  • ImgBurn fails to burn data CD-R disk due to "Layouts do not match" error

    - by 0xAether
    I have a reoccurring problem with the program ImgBurn. Whenever I try and burn anything to a CD-R using ImgBurn it burns just fine, except for when I go and verify the disk. It tells me that the "Layouts do not match". Windows 7 shows the disk as completely blank. Although, I see on the bottom of the disk it has been written to. I can burn ISO files to DVD-R's just fine. This only seems to happen with CD-R's. The CD-R's I'm using are Memorex Cool Colors 52x CD-R's. I have looked on Google, and it seems like I'm not the only one this happens to. Unfortunately, no one is able to provide an explanation. I have included the log file from the last CD I just burnt. If you need anything else to better diagnose this problem, I will gladly provide it. ; //****************************************\\ ; ImgBurn Version 2.5.7.0 - Log ; Monday, 19 November 2012, 16:11:57 ; \\****************************************// ; ; I 16:04:55 ImgBurn Version 2.5.7.0 started! I 16:04:55 Microsoft Windows 7 Ultimate x64 Edition (6.1, Build 7601 : Service Pack 1) I 16:04:55 Total Physical Memory: 4,156,380 KB - Available: 3,317,144 KB I 16:04:55 Initialising SPTI... I 16:04:55 Searching for SCSI / ATAPI devices... I 16:04:56 -> Drive 1 - Info: Optiarc DVD RW AD-7560S SH03 (D:) (SATA) I 16:04:56 Found 1 DVD±RW/RAM! I 16:05:37 Operation Started! I 16:05:37 Source File: C:\Users\Aaron\Desktop\VMware Workstation 9.iso I 16:05:37 Source File Sectors: 223,057 (MODE1/2048) I 16:05:37 Source File Size: 456,820,736 bytes I 16:05:37 Source File Volume Identifier: VMwareWorksta9 I 16:05:37 Source File Volume Set Identifier: 20121119_2102 I 16:05:37 Source File File System(s): ISO9660, Joliet I 16:05:37 Destination Device: [1:0:0] Optiarc DVD RW AD-7560S SH03 (D:) (SATA) I 16:05:37 Destination Media Type: CD-R (Disc ID: 97m17s06f, Moser Baer India) I 16:05:37 Destination Media Supported Write Speeds: 10x, 16x, 20x, 24x I 16:05:37 Destination Media Sectors: 359,847 I 16:05:37 Write Mode: CD I 16:05:37 Write Type: SAO I 16:05:37 Write Speed: 6x I 16:05:37 Lock Volume: Yes I 16:05:37 Test Mode: No I 16:05:37 OPC: No I 16:05:37 BURN-Proof: Enabled W 16:05:37 Write Speed Miscompare! - MODE SENSE: 1,764 KB/s (10x), GET PERFORMANCE: 11,080 KB/s (63x) W 16:05:37 Write Speed Miscompare! - MODE SENSE: 1,764 KB/s (10x), GET PERFORMANCE: 11,080 KB/s (63x) W 16:05:37 Write Speed Miscompare! - MODE SENSE: 1,764 KB/s (10x), GET PERFORMANCE: 11,080 KB/s (63x) W 16:05:37 Write Speed Miscompare! - MODE SENSE: 1,764 KB/s (10x), GET PERFORMANCE: 11,080 KB/s (63x) W 16:05:37 Write Speed Miscompare! - MODE SENSE: 1,764 KB/s (10x), GET PERFORMANCE: 11,080 KB/s (63x) W 16:05:37 Write Speed Miscompare! - Wanted: 1,058 KB/s (6x), Got: 1,764 KB/s (10x) / 11,080 KB/s (63x) W 16:05:37 The drive only supports writing these discs at 10x, 16x, 20x, 24x. I 16:05:38 Filling Buffer... (80 MB) I 16:05:40 Writing LeadIn... I 16:06:07 Writing Session 1 of 1... (1 Track, LBA: 0 - 223056) I 16:06:07 Writing Track 1 of 1... (MODE1/2048, LBA: 0 - 223056) I 16:11:00 Synchronising Cache... I 16:11:18 Exporting Graph Data... I 16:11:18 Graph Data File: C:\Users\Aaron\AppData\Roaming\ImgBurn\Graph Data Files\Optiarc_DVD_RW_AD-7560S_SH03_MONDAY-NOVEMBER-19-2012_4-05_PM_97m17s06f_6x.ibg I 16:11:18 Export Successfully Completed! I 16:11:18 Operation Successfully Completed! - Duration: 00:05:41 I 16:11:18 Average Write Rate: 1,522 KB/s (10.1x) - Maximum Write Rate: 1,544 KB/s (10.3x) I 16:11:18 Cycling Tray before Verify... W 16:11:23 Waiting for device to become ready... 
I 16:11:47 Device Ready! E 16:11:47 CompareImageFileLayouts Failed! - Session Count Not Equal (1/0) E 16:11:47 Verify Failed! - Reason: Layouts do not match. I 16:11:57 Close Request Acknowledged I 16:11:57 Closing Down... I 16:11:57 Shutting down SPTI... I 16:11:57 ImgBurn closed!

    Read the article

  • nginx php-fpm empty site

    - by Katafalkas
    I am writing this after being stuck trying to fix it the whole night. We have recently migrated to a new server. Before the migration we were running our site on nginx + cgi (script). After the migration we decided to try Apache + mod_php. It was rather terrible, and I would like to migrate back to nginx, but this time I want it with php-fpm (as people say it's cool). So I followed a number of guides and I think I did everything correctly. I also have our old config files for the "server" section, which I reviewed and placed into the new config. Still, I end up with an empty site when I enter our URL. (By empty I mean a blank page: no text, no errors, nothing at all.) In the access log there are some weird entries like:

    123.242.148.54 - - [22/Mar/2012:06:08:11 +0200] "-" 400 0 "-" "-"

    My guess is that php-fpm is not working, but I am not sure how to confirm it. Maybe someone could give me some pointers? I would much appreciate it. My nginx config:

    user nginx;
    worker_processes 12;
    error_log /var/log/nginx/error.log;
    #error_log /var/log/nginx/error.log notice;
    #error_log /var/log/nginx/error.log info;
    pid /var/run/nginx.pid;

    events {
      worker_connections 4096;
    }

    http {
      include /etc/nginx/mime.types;
      #default_type application/octet-stream;
      log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
      access_log /var/log/nginx/access.log main;
      sendfile on;
      #tcp_nopush on;
      #keepalive_timeout 0;
      keepalive_timeout 65;
      tcp_nodelay on;
      gzip on;
      gzip_disable "MSIE [1-6]\.(?!.*SV1)";
      fastcgi_buffer_size 256k;
      fastcgi_buffers 4 256k;

      server {
        listen 80;
        server_name www.example.com;
        access_log /var/log/nginx/example.access.log;
        error_log /var/log/nginx/error.log;
        root /home/www/example;
        index index.php;
        client_max_body_size 50M;
        #error_page 404 /404.html;

        # phpMyAdmin
        location /phpmyadmin {
          root /usr/share/;
          error_log /var/log/nginx/phpmyadmin.log;
          try_files $uri $uri/ /index.php;
        }
        location ~ ^/phpmyadmin/.*\.php$ {
          root /usr/share/;
          error_log /var/log/nginx/phpmyadmin.log;
          include /etc/nginx/fastcgi_params;
          fastcgi_pass 127.0.0.1:9000;
        }

        # Munin
        location /monitoring {
          root /var/www/;
          auth_basic "Restricted";
          auth_basic_user_file /etc/nginx/conf.d/monitoring_users;
          error_log /var/log/nginx/monitoring.log;
          index index.html;
        }

        # The site
        location / {
          try_files $uri $uri/ /index.php/?$uri&$args;
        }

        # PHP interpreter
        location ~ \.php {
          include /etc/nginx/fastcgi_params;
          fastcgi_pass 127.0.0.1:9000;
        }
      }
    }
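    A few quick checks can usually confirm whether php-fpm is reachable and whether nginx is handing it the script path it needs. This is only a rough sketch: the socket address and web root are taken from the config above, while the log paths are assumptions about a typical layout, not anything verified on this server.

      # Is php-fpm actually listening where nginx expects it (127.0.0.1:9000 above)?
      ss -tlnp | grep ':9000'        # or: netstat -tlnp | grep ':9000'

      # Blank pages are often caused by the script path never reaching php-fpm.
      # If the grep prints nothing, add this line to the PHP location block:
      #   fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
      grep SCRIPT_FILENAME /etc/nginx/fastcgi_params

      # Drop a trivial script into the site root (path from the config above), request it,
      # and watch the error log in another terminal while doing so.
      echo '<?php phpinfo();' > /home/www/example/info.php
      curl -i http://localhost/info.php
      tail -f /var/log/nginx/error.log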

    Read the article

  • nginx 502 Bad Gateway on every external site

    - by Leandros
    I just installed nginx and followed the guides on the official site to set it up with php5-fpm, but it just won't work. Not even the default site, without PHP, is reachable from outside my server. I tried both listen = 127.0.0.1:7777 and listen = /var/run/php5-fpm.sock; neither works. I can access http://localhost with lynx on the server itself, but not from somewhere else (using the external IP, obviously). Yes, the php5-fpm daemons are running, and yes, the ports (80 and 7777) are open. It doesn't work with php-cgi either. My config:

    user www-data;
    worker_processes 4;
    pid /var/run/nginx.pid;

    events {
      worker_connections 768;
      # multi_accept on;
    }

    http {
      ##
      # Basic Settings
      ##
      sendfile on;
      tcp_nopush on;
      tcp_nodelay on;
      keepalive_timeout 65;
      types_hash_max_size 2048;
      # server_tokens off;
      # server_names_hash_bucket_size 64;
      # server_name_in_redirect off;
      include /etc/nginx/mime.types;
      default_type application/octet-stream;

      ##
      # Logging Settings
      ##
      access_log /var/log/nginx/access.log;
      error_log /var/log/nginx/error.log;

      ##
      # Gzip Settings
      ##
      gzip on;
      gzip_disable "msie6";
      # gzip_vary on;
      # gzip_proxied any;
      # gzip_comp_level 6;
      # gzip_buffers 16 8k;
      # gzip_http_version 1.1;
      # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

      ##
      # nginx-naxsi config
      ##
      # Uncomment it if you installed nginx-naxsi
      ##
      #include /etc/nginx/naxsi_core.rules;

      ##
      # nginx-passenger config
      ##
      # Uncomment it if you installed nginx-passenger
      ##
      #passenger_root /usr;
      #passenger_ruby /usr/bin/ruby;

      ##
      # Virtual Host Configs
      ##
      include /etc/nginx/conf.d/*.conf;
      include /etc/nginx/sites-enabled/*;

      proxy_buffers 16 16k;
      proxy_buffer_size 32k;
      fastcgi_buffers 16 16k;
      fastcgi_buffer_size 32k;
      fastcgi_connect_timeout 300;
      fastcgi_send_timeout 300;
      fastcgi_read_timeout 300;
    }

    Server config (symlinked to sites-enabled):

    server {
      server_name skilloverflow.de *.skilloverflow.de;
      root /var/www/blog.skilloverflow.de/htdocs;
      index index.php;
      error_log /var/log/nginx/skilloverflow.error.log;
      access_log /var/log/nginx/skilloverflow.access.log;

      location = /favicon.ico {
        log_not_found off;
        access_log off;
      }

      location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
      }

      location / {
        # This is cool because no php is touched for static content.
        # include the "?$args" part so non-default permalinks doesn't break when using query string
        try_files $uri $uri/ /index.php?$args;
      }

      location ~ [^/]\.php(/|$) {
        fastcgi_split_path_info ^(.+?\.php)(/.*)$;
        if (!-f $document_root$fastcgi_script_name) {
          return 404;
        }
        fastcgi_pass 127.0.0.1:7777;
        fastcgi_index index.php;
        include fastcgi_params;
      }

      location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
        expires max;
        log_not_found off;
      }

      # deny access to apache .htaccess files
      location ~ /\.ht {
        deny all;
      }

      # deny access to apache .htaccess files
      location ~ /\.ht {
        deny all;
      }
    }

    PHP version: 5.4.17-1
    nginx version: 1.2.1
    Debian 6.0.7, Linux 2.6.32

    Edit: Lighttpd is still installed, does that matter? It's not running though.
    Edit 2: No error or access log is generated. They're all empty.
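    Since even the non-PHP default site cannot be reached from outside and the logs stay empty, the requests are most likely never reaching nginx at all. A rough checklist, assuming shell access on this Debian box (the external IP is a placeholder, and ss can be swapped for netstat on older systems):

      # What is bound to port 80? nginx should be listening on 0.0.0.0:80 (or the public IP), not only 127.0.0.1:80
      ss -tlnp | grep ':80'

      # Any firewall rules dropping inbound traffic to port 80?
      iptables -L -n -v

      # From the server itself, hit the site through the external address instead of localhost
      curl -I http://<external-ip>/

      # And confirm php-fpm is listening where the vhost points (127.0.0.1:7777 above)
      ss -tlnp | grep ':7777'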

    Read the article

  • bash: per-command history. How does it work?

    - by romainl
    OK. I have an old G5 running Leopard and a Dell running Ubuntu 10.04 at home and a MacPro also running Leopard at work. I use Terminal.app/bash a lot. On my home G5 it exhibits a nice feature: using ? to navigate history I get the last command starting with the few letters that I've typed. This is what I mean (| represents the caret): $ ssh user@server $ vim /some/file/just/to/populate/history $ ss| So, I've typed the two first letters of "ssh", hitting ? results in this: $ ssh user@server instead of this, which is the behaviour I get everywhere else : $ vim /some/file/just/to/populate/history If I keep on hitting ? or ?, I can navigate through the history of ssh like this: $ ssh otheruser@otherserver $ ssh user@server $ ssh yetanotheruser@yetanotherserver It works the same for any command like cat, vim or whatever. That's really cool. Except that I have no idea how to mimic this behaviour on my other machines. Here is my .profile: export PATH=/Developer/SDKs/flex_sdk_3.4/bin:/opt/local/bin:/opt/local/sbin:/usr/local/bin:/sw/bin:/sw/sbin:/bin:/sbin:/bin:/sbin:/usr/bin:/usr/sbin:$HOME/Applications/bin:/usr/X11R6/bin export MANPATH=/usr/local/share/man:/usr/local/man:opt/local/man:sw/share/man export INFO=/usr/local/share/info export PERL5LIB=/opt/local/lib/perl5 export PYTHONPATH=/opt/local/bin/python2.7 export EDITOR=/opt/local/bin/vim export VISUAL=/opt/local/bin/vim export JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Home export TERM=xterm-color export GREP_OPTIONS='--color=auto' GREP_COLOR='1;32' export CLICOLOR=1 export LS_COLORS='no=00:fi=00:di=01;34:ln=target:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:*.tar=00;31:*.tgz=00;31:*.arj=00;31:*.taz=00;31:*.lzh=00;31:*.zip=00;31:*.z=00;31:*.Z=00;31:*.gz=00;31:*.bz2=00;31:*.deb=00;31:*.rpm=00;31:*.TAR=00;31:*.TGZ=00;31:*.ARJ=00;31:*.TAZ=00;31:*.LZH=00;31:*.ZIP=00;31:*.Z=00;31:*.Z=00;31:*.GZ=00;31:*.BZ2=00;31:*.DEB=00;31:*.RPM=00;31:*.jpg=00;35:*.png=00;35:*.gif=00;35:*.bmp=00;35:*.ppm=00;35:*.tga=00;35:*.xbm=00;35:*.xpm=00;35:*.tif=00;35:*.png=00;35:*.fli=00;35:*.gl=00;35:*.dl=00;35:*.psd=00;35:*.JPG=00;35:*.PNG=00;35:*.GIF=00;35:*.BMP=00;35:*.PPM=00;35:*.TGA=00;35:*.XBM=00;35:*.XPM=00;35:*.TIF=00;35:*.PNG=00;35:*.FLI=00;35:*.GL=00;35:*.DL=00;35:*.PSD=00;35:*.mpg=00;36:*.avi=00;36:*.mov=00;36:*.flv=00;36:*.divx=00;36:*.qt=00;36:*.mp4=00;36:*.m4v=00;36:*.MPG=00;36:*.AVI=00;36:*.MOV=00;36:*.FLV=00;36:*.DIVX=00;36:*.QT=00;36:*.MP4=00;36:*.M4V=00;36:*.txt=00;32:*.rtf=00;32:*.doc=00;32:*.odf=00;32:*.rtfd=00;32:*.html=00;32:*.css=00;32:*.js=00;32:*.php=00;32:*.xhtml=00;32:*.TXT=00;32:*.RTF=00;32:*.DOC=00;32:*.ODF=00;32:*.RTFD=00;32:*.HTML=00;32:*.CSS=00;32:*.JS=00;32:*.PHP=00;32:*.XHTML=00;32:' export LC_ALL=C export LANG=C stty cs8 -istrip -parenb bind 'set convert-meta off' bind 'set meta-flag on' bind 'set output-meta on' alias ip='curl http://www.whatismyip.org | pbcopy' alias ls='ls -FhLlGp' alias la='ls -AFhLlGp' alias couleurs='$HOME/Applications/bin/colors2.sh' alias td='$HOME/Applications/bin/todo.sh' alias scale='$HOME/Applications/bin/scale.sh' alias stree='$HOME/Applications/bin/tree' alias envoi='$HOME/Applications/bin/envoi.sh' alias unfoo='$HOME/Applications/bin/unfoo' alias up='cd ..' 
alias size='du -sh' alias lsvn='svn list -vR' alias jsc='/System/Library/Frameworks/JavaScriptCore.framework/Versions/A/Resources/jsc' alias asl='sudo rm -f /private/var/log/asl/*.asl' alias trace='tail -f $HOME/Library/Preferences/Macromedia/Flash\ Player/Logs/flashlog.txt' alias redis='redis-server /opt/local/etc/redis.conf' source /Users/johncoltrane/Applications/bin/git-completion.sh export GIT_PS1_SHOWUNTRACKEDFILES=1 export GIT_PS1_SHOWUPSTREAM="verbose git" export GIT_PS1_SHOWDIRTYSTATE=1 export PS1='\n\[\033[32m\]\w\[\033[0m\] $(__git_ps1 "[%s]")\n\[\033[1;31m\]\[\033[31m\]\u\[\033[0m\] $ \[\033[0m\]' mkcd () { mkdir -p "$*" cd "$*" } function cdl { cd $1 la } n() { $EDITOR ~/Dropbox/nv/"$*".txt } nls () { ls -c ~/Dropbox/nv/ | grep "$*" } copy(){ curl -s -F 'sprunge=<-' http://sprunge.us | pbcopy } if [ -f /opt/local/etc/profile.d/cdargs-bash.sh ]; then source /opt/local/etc/profile.d/cdargs-bash.sh fi if [ -f /opt/local/etc/bash_completion ]; then . /opt/local/etc/bash_completion fi Any idea?
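    What the poster describes is readline's prefix history search: type a few characters, then the arrow key recalls only commands starting with them. It is a stock readline feature, so a minimal sketch (not the exact Leopard setup, which likely ships these bindings by default) is just two key bindings; the escape sequences are the common xterm-style ones and may differ on other terminals:

      # Add to ~/.bashrc or ~/.profile (the profile above already uses bind like this),
      # or put the quoted parts in ~/.inputrc without the bind wrapper.
      bind '"\e[A": history-search-backward'   # Up arrow: previous command matching what is already typed
      bind '"\e[B": history-search-forward'    # Down arrow: next matching command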

    Read the article

  • Server Cabinet/Room Cooling

    - by user37226
    Hello all. I currently have two desktops and three servers in my office sitting on the floor (I know this is bad). With that many servers the ambient temperature in the room goes up quickly. I am located in Dallas, TX, so during the winter, if the heat is kept low, it is not a problem, but during the summer it easily raises the room +10 degrees. I have found a free 42U server cabinet that a hosting company was throwing away and decided to house all of these systems in it. One server is in a rack mount case while the other four systems are housed in mid-tower cases. I have purchased shelves for each computer and plan to lay the towers sideways on these shelves (as replacing the cases costs a heck of a lot of money). I like the idea of housing all of these systems in the cabinet because it will save a lot of room and clean up all of the cabling currently lying all over the office floor.

    When putting this setup together over the next couple of weeks, I want to address issues with dust and cooling. The server cabinet has a fan on top, a front plexiglass door and a rear metal door with vent holes on the bottom.

    First, the cooling issues. I know I am going to want cool air to enter the bottom of the cabinet and exit the top. I do not want the room heating up, though, as this will make my work area hot and then make the servers warmer as the air eventually re-enters the cabinet. I had an idea to fix this problem, but am unsure if it will work. I was thinking of taking flexible piping and adapting it to the back fans of the computers, with the other end of the pipe at the top, close to the cabinet's top-mounted fan. I was then thinking of creating a duct around the top fan into the attic. Now I am very concerned that the attic will cause issues with this type of setup, because during the July/August time frame the attic is easily 120 degrees F. I could also use the flexible pipe to take it to an attic exhaust vent, if it would be better to vent it into the 100-degree air outside (at least there may be wind). The other option would be to buy a small portable air conditioner. This may be a possibility, but do I want to spend the extra money on power? I bet this increases the noise. Plus they are around $250 on Amazon. What would you all recommend?

    Depending on the solution I end up running with above, I would also like to limit the dust that gets into the cabinet. If I were to cut a hole and mount a second cabinet fan on the bottom of the rear door, could I possibly mount a standard home air filter on the other side of that hole?

    Thanks in advance for your recommendations. I look forward to reading your interesting ideas.

    Read the article

  • Publishing an Excel spreadsheet using Microsoft SBS 2008 to a web page that is viewable by mobile phones

    - by Dave Heath
    I am getting well out of my “superuser” depth here and would love some support. At work we have an Excel workbook (*.xls format circa Office 2003) which maintains our “engineers” timesheet. This handles what events we are doing across the year and how many “work units” it is. As far as a workbook goes, it is fairly simple with just a few =SUM(range) cells and some linked across sheets (12 sheets, one for each month) It is stored on a server, in a folder that provides “management” with full access and “engineers” with read-only access. The workbook itself is read-only for “engineers” and full access for “management”. I think these permissions are controlled through Active Directory. The workbook is protected with a password, assumingly to allow “management” to edit it even if they are working at a terminal logged in as an “engineer”. This protection prevents “engineers” from going to certain cells to see formulae and therefore editing them. The workbook has a macro which saves and closes it ten minutes after opening. This is to stop other “management” from being locked out by any one person who has logged in with editing privileges. I hope this is making sense to someone... :S Now then, we have Microsoft Small Business Server 2008. We have a shiny new web-based login for when we are offsite so we can get to Exchange webmail and our internal site (which uses Sharepoint 3.0). “Management” would like to be able to publish this timesheet automatically after changes (they don’t want to have to do anything different to what they are currently doing) so that using an iPhone “engineers” can check on it while out of the office. I am currently having a look at “Excel Services” for Office 2007 on TechNet but I am not sure if I am running down the right garden path at the moment. < EDIT This seems to suggest that I have to have Sharepoint Server 2007, with no mention of Sharepoint 3.0... ... "MOSS builds on WSS by adding both core features as well as end user web parts" - Wikipedia entry for Microsoft Office SharePoint Server (MOSS) this is not good news... "...and using the ASP.NET APIs, web parts can be written to extend the functionality of WSS." Wikipedia entry for Windows Sharepoint Services. Could this bring back what I need? Is this good news? Do I need to start learning ASP.NET? This link here implies that we need MOSS to do what I want and the bosses say we aint' getting it. http://serverfault.com/questions/20198/what-is-some-cool-things-you-can-do-with-sharepoint-2007/22128#22128 Back to the drawing board. < /EDIT Please could someone suggest some “further reading” for me to help point me in the right direction or to put me back on the right track. Many thanks. I will try to keep this up to date with how I get on.

    Read the article

  • Linux: Apple Wireless A1314 Fn key not registered, looks like software bug

    - by ramplank
    I'm trying to set up my Apple Wireless Keyboard with my Kubuntu systems. These are PC hardware powered by Intel Atom and Intel i5 respectively. The keyboard has a US keyboard layout and has model number A1314 written on the back. It takes two AA batteries. I'm saying that because it appears there are multiple types of model A1314. I have tried this on a 10.04, 11.04, 11.10 and 12.04 system with no success. Every time using a bluetooth dongle and the KDE bluetooth notification tray applet, the keyboard can be connected. In both cases it shows up as "Apple Wireless Keyboard". Almost everything works as expected, in fact, I'm typing on it right now. But one thing doesn't: The Fn key. I'd like to use Fn + Down Arrow as PgDn / Page Down, I understand this is default behaviour on Apple keyboards. And of course I'd like the same for Page Up, Home and End. I'll stick to Page Down in my example. I used the xev tool to see the keycodes the system receives, and if I press on Fn nothing happens, and nothing is registered. If I press Fn + Down Arrow, xev only registers the down arrow. Here's the output from my 11.04 system to illustrate: Press just the Fn key: no output Press Down Arrow key: KeyPress event, serial 36, synthetic NO, window 0x4400001, root 0x15d, subw 0x4400002, time 2699773, (44,45), root:(1352,298), state 0x10, keycode 116 (keysym 0xff54, Down), same_screen YES, XLookupString gives 0 bytes: XmbLookupString gives 0 bytes: XFilterEvent returns: False KeyRelease event, serial 36, synthetic NO, window 0x4400001, root 0x15d, subw 0x4400002, time 2699860, (44,45), root:(1352,298), state 0x10, keycode 116 (keysym 0xff54, Down), same_screen YES, XLookupString gives 0 bytes: XFilterEvent returns: False Press Fn+Down Arrow Keys together: KeyPress event, serial 36, synthetic NO, window 0x4400001, root 0x15d, subw 0x4400002, time 2701548, (44,45), root:(1352,298), state 0x10, keycode 116 (keysym 0xff54, Down), same_screen YES, XLookupString gives 0 bytes: XmbLookupString gives 0 bytes: XFilterEvent returns: False KeyRelease event, serial 36, synthetic NO, window 0x4400001, root 0x15d, subw 0x4400002, time 2701623, (44,45), root:(1352,298), state 0x10, keycode 116 (keysym 0xff54, Down), same_screen YES, XLookupString gives 0 bytes: XFilterEvent returns: False I've been searching this forum and other Linux-related forums for hours but I still have not found a solution. I mostly found advice on how to fix this when using an actual apple laptop or desktop, but I don't have that. They said to try something like the following echo 2 > /sys/module/hid_apple/ ... But since there's no hid_apple directory present on my systems, I've needed to modprobe hid_apple first. That didn't help either. I'm cool with changing some config files, or compiling my own patched kernel if that's necessary. I currently have a 10.04 and 12.04 system available to test. The same issue occurs when hooked up to Windows 7. Fn key still does nothing, not by itself or in combination with other keys. With some AutoHotkey fiddling, I was able to confirm the key is registered as pressed, but ignored by default. A custom AutoHotkey script can fix that. But AutoHotkey is only for Windows, I want my problem fixed on Linux. Hooked up to an iPad 2 it only works in combination with the F1-F12 keys. Not with the arrow keys. If the screen of the ipad is off, and I press just the Fn key, the screen will come on, so the key itself is registered as pressed. 
So to sum up my question: Can anyone help me get Page Up, Page Down, Home and End to work on this keyboard, when that requires me to use an Fn key which is currently not registered?
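    For reference, the hid_apple advice the poster quotes normally targets the driver's fnmode parameter. Whether it applies to this particular Bluetooth keyboard revision is not verified here, and the paths below are the commonly documented ones; treat them as assumptions for this setup:

      # Requires the hid_apple module to be loaded (modprobe hid_apple, as already tried)
      echo 2 | sudo tee /sys/module/hid_apple/parameters/fnmode

      # To persist across reboots (the file name is an arbitrary choice):
      echo 'options hid_apple fnmode=2' | sudo tee /etc/modprobe.d/hid_apple.conf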

    Read the article

  • My 3D Games crashing in these scenarios

    - by desaivv
    I have a situation here which I am unable to solve. I bought a PC last year March, here are my specs: Intel Core i3 550 @ 3GHz 4 GB RAM @400 MHz XFX GeForce 9500 GT graphics card 1GB @550MHz 500 GB HDD Lately as soon as I load my save game of Skyrim, it crashes. I have been playing Skyrim since I joined Gaming.SE site. Crashes as in the entire scene gets red lines. I can not ALT + TAB back or even CTRL + ALT + DEL either. My only recourse is a hard reset via the power button. Can not take a screen shot either. I have the latest Forceware 296.10 drivers also. This has been happening since the last 2 weeks. I always use Driver Sweeper to clean my old drivers, since that is what XFXForce recommends before installing new drivers. I installed MSI Afterburner lately to see my GPU temperature. My GPU is default, never over-clocked it. In MSI's Afterburner, I can not adjust fan speed. It is greyed out. Also in settings there is no fan tab. With normal Internet browsing it stays at 51 C. Ran Memtest86 over night with level 11. Took about 13 hours, but no errors in my RAM. I even re-installed my OS, with just the 296 drivers. The fan for the GPU does come on. I can play Diablo 2. I can not get past Warcraft 3's menu selection. There WAS some dust in my machine, but I always try and keep everything clean, since in my home town dust is an issue. Always keep cool my entire PC cabinet. My friend came with his functioning graphics card, we bought our PCs at the same time with exact same specifications. His card did not work either. Same problem with the scene freezing with red lines. I did do my research before posting here. That is how I was able to learn about MSI Afterburner, Driver Sweeper, SpeedFan etc. I followed posts on Tom's Hardware too regarding people that had similar problems. One person suggested and was followed by worked as well the suggestion to "Bake the card in an oven". Since I bought it, played Diablo 2 for months, Starcraft 2 campaign for months and Skyrim recently for months. Bought ME3 also. I am at my wits end. I do not know what else to do. I can go out and buy a card, but my friend's card did not work either. I can use the machine for Eclipse or VS2010 development just fine. Just not with 3D gaming. I originally posted this question on Gaming.SE But I was directed here. I have browsed the SU database for my problem and found this, this, and this. But none of these cover my question. My machine is only one year old. Can some experienced superuser(s) shed some light on this scenario? Is it a 3D graphics card problem? Will a brand new card work? What else can I try to pin point the problem? Can it be the Motherboard? Thanks.

    Read the article

  • Windows 7 Phone Database – Querying with Views and Filters

    - by SeanMcAlinden
    I’ve just added a feature to Rapid Repository to greatly improve how the Windows 7 Phone Database is queried for performance (This is in the trunk not in Release V1.0). The main concept behind it is to create a View Model class which would have only the minimum data you need for a page. This View Model is then stored and retrieved rather than the whole list of entities. Another feature of the views is that they can be pre-filtered to even further improve performance when querying. You can download the source from the Microsoft Codeplex site http://rapidrepository.codeplex.com/. Setting up a view Lets say you have an entity that stores lots of data about a game result for example: GameScore entity public class GameScore : IRapidEntity {     public Guid Id { get; set; }     public string GamerId {get;set;}     public string Name { get; set; }     public Double Score { get; set; }     public Byte[] ThumbnailAvatar { get; set; }     public DateTime DateAdded { get; set; } }   On your page you want to display a list of scores but you only want to display the score and the date added, you create a View Model for displaying just those properties. GameScoreView public class GameScoreView : IRapidView {     public Guid Id { get; set; }     public Double Score { get; set; }     public DateTime DateAdded { get; set; } }   Now you have the view model, the first thing to do is set up the view at application start up. This is done using the following syntax. View Setup public MainPage() {     RapidRepository<GameScore>.AddView<GameScoreView>(x => new GameScoreView { DateAdded = x.DateAdded, Score = x.Score }); } As you can see, using a little bit of lambda syntax, you put in the code for constructing a single view, this is used internally for mapping an entity to a view. *Note* you do not need to map the Id property, this is done automatically, a view model id will always be the same as it’s corresponding entity.   Adding Filters One of the cool features of the view is that you can add filters to limit the amount of data stored in the view, this will dramatically improve performance. You can add multiple filters using the fluent syntax if required. In this example, lets say that you will only ever show the scores for the last 10 days, you could add a filter like the following: Add single filter public MainPage() {     RapidRepository<GameScore>.AddView<GameScoreView>(x => new GameScoreView { DateAdded = x.DateAdded, Score = x.Score })         .AddFilter(x => x.DateAdded > DateTime.Now.AddDays(-10)); } If you wanted to further limit the data, you could also say only scores above 100: Add multiple filters public MainPage() {     RapidRepository<GameScore>.AddView<GameScoreView>(x => new GameScoreView { DateAdded = x.DateAdded, Score = x.Score })         .AddFilter(x => x.DateAdded > DateTime.Now.AddDays(-10))         .AddFilter(x => x.Score > 100); }   Querying the view model So the important part is how to query the data. This is done using the repository, there is a method called Query which accepts the type of view as a generic parameter (you can have multiple View Model types per entity type) You can either use the result of the query method directly or perform further querying on the result is required. 
Querying the View public void DisplayScores() {     RapidRepository<GameScore> repository = new RapidRepository<GameScore>();     List<GameScoreView> scores = repository.Query<GameScoreView>();       // display logic } Further Filtering public void TodaysScores() {     RapidRepository<GameScore> repository = new RapidRepository<GameScore>();     List<GameScoreView> todaysScores = repository.Query<GameScoreView>().Where(x => x.DateAdded > DateTime.Now.AddDays(-1)).ToList();       // display logic }   Retrieving the actual entity Retrieving the actual entity can be done easily by using the GetById method on the repository. Say for example you allow the user to click on a specific score to get further information, you can use the Id populated in the returned View Model GameScoreView and use it directly on the repository to retrieve the full entity. Get Full Entity public void GetFullEntity(Guid gameScoreViewId) {     RapidRepository<GameScore> repository = new RapidRepository<GameScore>();     GameScore fullEntity = repository.GetById(gameScoreViewId);       // display logic } Synchronising The View If you are upgrading from Rapid Repository V1.0 and are likely to have data in the repository already, you will need to perform a synchronisation to ensure the views and entities are fully in sync. You can either do this as a one off during the application upgrade or if you are a little more cautious, you could run this at each application start up. Synchronise the view public void MyUpgradeTasks() {     RapidRepository<GameScore>.SynchroniseView<GameScoreView>(); } It’s worth noting that in normal operation, the view keeps itself in sync with the entities so this is only really required if you are upgrading from V1.0 to V2.0 when it gets released shortly.   Summary I really hope you like this feature, it will be great for performance and I believe supports good practice by promoting the use of View Models for specific pages. I’m hoping to produce a beta for this over the next few days, I just want to add some more tests and hopefully iron out any bugs. I would really appreciate any thoughts on this feature and would really love to know of any bugs you find. You can download the source from the following : http://rapidrepository.codeplex.com/ Kind Regards, Sean McAlinden.

    Read the article

  • Week in Geek: LastPass Rescues Xmarks Edition

    - by Asian Angel
    This week we learned how to breathe new life into an aging Windows Mobile 6.x device, use filters in Photoshop, backup and move VirtualBox machines, use the BitDefender Rescue CD to clean an infected PC, and had fun setting up a pirates theme on our computers. Photo by _nash. Weekly Feature Do you love using the Faenza icon set on your Ubuntu system but feel that there are a few much needed icons missing (or you desire a different version of a particular icon)? Then you may want to take a look at the Faenza Variants icon pack. The icons are available in the following sizes: 16px, 22px, 32px, 48px and scalable sizes. Photo by Asian Angel. Faenza Variants Random Geek Links Another week with extra link goodness to help keep you on top of the news. Photo by Asian Angel. LastPass acquires Xmarks, premium service announced Xmarks announced that it has been acquired by LastPass, a cross-platform password management service. This also means that Xmarks is now in transition from a “free” to a “freemium” business model. WikiLeaks reappears on European Net domains WikiLeaks has re-emerged on a Swiss Internet domain followed by domains in Germany, Finland, and the Netherlands, sidestepping a move that had in effect taken the controversial site off the Internet. Iran: Yes, Stuxnet hurt our nuclear program The Stuxnet worm got some big play from Iranian President Mahmoud Ahmadinejad, who acknowledged that the malware dinged his nuclear program. More Windows Rogues than Just AV – Fake Defragmenter Check Disk Don’t think for a second that rogues are limited to scareware, because as so-called products such as “System Defragmenter”, “Scan Disk” “Check Disk” prove, they’re not. Internet Explorer’s Protected Mode can be bypassed Researchers from Verizon Business have now described a way of bypassing Protected Mode in IE 7 and 8 in order to gain access to user accounts. Can you really see who viewed your Facebook profile? Rogue application spreads virally Once again, a rogue application is spreading virally between Facebook users pretending to offer you a way of seeing who has viewed your profile. More holes in Palm’s WebOS Researchers Orlando Barrera and Daniel Herrera, who both work for security firm SecTheory, have discovered a gaping security hole in Palm’s WebOS smartphone operating system. Next-gen banking Trojans hit APAC With the proliferation of banking Trojans, Web and smartphone users of online banking services have to be on constant alert to avoid falling prey to fraud schemes, warned Etay Maor, project manager for RSA Fraud Action. AVG update cripples 64-bit computers A signature update automatically deployed by the AVG virus scanner Thursday has crippled numerous computers. Article includes link to forums to fix computers affected after a restart. Congress moves to outlaw ‘mystery charges’ for Web shoppers Legislation that makes it illegal for Web merchants and so-called post-transaction marketers to charge credit cards without the card owners’ say-so came closer to becoming law this week. Ballmer Set to “Look Into” Windows Home Server Drive Extender Fiasco Tuesday’s announcement from Microsoft regarding the removal of Drive Extender from Windows Home Server has sent shock waves across the web. Google tweaks search recipe to ding scam artists Google has changed its search algorithm to penalize sites deemed to provide an “extremely poor user experience” following a New York Times story on a merchant who justified abusive behavior towards customers as a search-engine optimization tactic. 
Geek Video of the Week Watch as our two friends debate back and forth about the early adoption of new technology through multiple time periods (Stone Age to the far future). Will our reluctant friend finally succumb to the temptation? Photo by CollegeHumor. Early Adopters Through History Random TinyHacker Links Fix Issues in Windows 7 Using Reliability Monitor Learn how to analyze Windows 7 errors and then fix them using the built-in reliability monitor. Learn About IE Tab Groups Tab groups is a useful feature in IE 8. Here’s a detailed guide to what it is all about. Google’s Book Helps You Learn About Browsers and Web A cool new online book by the Google Chrome team on browsers and the web. TrustPort Internet Security 2011 – Good Security from a Less Known Provider TrustPort is not exactly a well-known provider of security solutions. At least not in the consumer space. This review tests in detail their latest offering. How the World is Using Cell phones An infographic showing the shocking demographics of cell phone use. Super User Questions See the great answers to these questions from Super User. I am unable to access my C drive. It says it is unable to display current owner. List of Windows special directories/shortcuts like ‘%TEMP%’ Is using multiple passes for wiping a disk really necessary? How can I view two files side by side in Notepad++ Is there any tool that automatically puts screenshots to my Dropbox? How-To Geek Weekly Article Recap Look through our hottest articles from this past week at How-To Geek. How to Create a Software RAID Array in Windows 7 9 Alternatives for Windows Home Server’s Drive Extender Why Doesn’t Disk Cleanup Delete Everything from the Temp Folder? Ask the Readers: How Much Do You Customize Your Operating System? How to Upload Really Large Files to SkyDrive, Dropbox, or Email One Year Ago on How-To Geek Enjoy reading through these awesome articles from one year ago. How To Upgrade from Vista to Windows 7 Home Premium Edition How To Fix No Aero Transparency in Windows 7 Troubleshoot Startup Problems with Startup Repair Tool in Windows 7 & Vista Rename the Guest Account in Windows 7 for Enhanced Security Disable Error Reporting in XP, Vista, and Windows 7 The Geek Note That wraps things up here for this week. Regardless of the weather wherever you may be, we hope that you have an opportunity to get outside and have some fun! Remember to keep sending those great tips in to us at [email protected]. Photo by Tony the Misfit. Latest Features How-To Geek ETC The How-To Geek Guide to Learning Photoshop, Part 8: Filters Get the Complete Android Guide eBook for Only 99 Cents [Update: Expired] Improve Digital Photography by Calibrating Your Monitor The How-To Geek Guide to Learning Photoshop, Part 7: Design and Typography How to Choose What to Back Up on Your Linux Home Server How To Harmonize Your Dual-Boot Setup for Windows and Ubuntu Hang in There Scrat! – Ice Age Wallpaper How Do You Know When You’ve Passed Geek and Headed to Nerd? On The Tip – A Lamborghini Theme for Chrome and Iron What if Wile E. Coyote and the Road Runner were Human? [Video] Peaceful Winter Cabin Wallpaper Store Tabs for Later Viewing in Opera with Tab Vault

    Read the article

  • How To Activate Your Free Office 2007 to 2010 Tech Guarantee Upgrade

    - by Matthew Guay
    Have you purchased Office 2007 since March 5th, 2010?  If so, here’s how you can activate and download your free upgrade to Office 2010! Microsoft Office 2010 has just been released, and today you can purchase upgrades from most retail stores or directly from Microsoft via download.  But if you’ve purchased a new copy of Office 2007 or a new computer that came with Office 2007 since March 5th, 2010, then you’re entitled to an absolutely free upgrade to Office 2010.  You’ll need enter information about your Office 2007 and then download the upgrade, so we’ll step you through the process. Getting Started First, if you’ve recently purchased Office 2007 but haven’t installed it, you’ll need to go ahead and install it before you can get your free Office 2010 upgrade.  Install it as normal.   Once Office 2007 is installed, run any of the Office programs.  You’ll be prompted to activate Office.  Make sure you’re connected to the internet, and then click Next to activate. Get your Free Upgrade to Office 2010 Now you’re ready to download your upgrade to Office 2010.  Head to the Office Tech Guarantee site (link below), and click Upgrade now. You’ll need to enter some information about your Office 2007.  Check that you purchased your copy of Office 2007 after March 5th, select your computer manufacturer, and check that you agree to the terms. Now you’re going to need the Product ID number from Office 2007.  To find this, open Word or any other Office 2007 application.  Click the Office Orb, and select Options on the bottom. Select the Resources button on the left, and then click About. Near the bottom of this dialog, you’ll see your Product ID.  This should be a number like: 12345-123-1234567-12345   Go back to the Office Tech Guarantee signup page in your browser, and enter this Product ID.  Select the language of your edition of Office 2007, enter the verification code, and then click Submit. It may take a few moments to validate your Product ID. When it is finished, you’ll be taken to an order page that shows the edition of Office 2010 you’re eligible to receive.  The upgrade download is free, but if you’d like to purchase a backup DVD of Office 2010, you can add it to your order for $13.99.  Otherwise, simply click Continue to accept. Do note that the edition of Office 2010 you receive may be different that the edition of Office 2007 you purchased, as the number of editions has been streamlined in the Office 2010 release.  Here’s a chart you can check to see what edition you’ll receive.  Note that you’ll still be allowed to install Office on the same number of computers; for example, Office 2007 Home and Student allows you to install it on up to 3 computers in the same house, and your Office 2010 upgrade will allow the same. Office 2007 Edition Office 2010 Upgrade You’ll Receive Office 2007 Home and Student Office Home and Student 2010 Office Basic 2007Office Standard 2007 Office Home and Business 2010 Office Small Business 2007Office Professional 2007Office Ultimate 2007 Office Professional 2010 Office Professional 2007 AcademicOffice Ultimate 2007 Academic Office Professional Academic 2010 Sign in with your Windows Live ID, or create a new one if you don’t already have one. Enter your name, select your country, and click Create My Account.  Note that Office will send Office 2010 tips to your email address; if you don’t wish to receive them, you can unsubscribe from the emails later.   Finally, you’re ready to download Office 2010!  
Click the Download Now link to start downloading Office 2010.  Your Product Key will appear directly above the Download link, so you can copy it and then paste it in the installer when your download is finished.  You will additionally receive an email with the download links and product key, so if your download fails you can always restart it from that link. If your edition of Office 2007 included the Office Business Contact Manager, you will be able to download it from the second Download link.  And, of course, even if you didn’t order a backup DVD, you can always burn the installers to a DVD for a backup.   Install Office 2010 Once you’re finished downloading Office 2010, run the installer to get it installed on your computer.  Enter your Product Key from the Tech Guarantee website as above, and click Continue. Accept the license agreement, and then click Upgrade to upgrade to the latest version of Office.   The installer will remove all of your Office 2007 applications, and then install their 2010 counterparts.  If you wish to keep some of your Office 2007 applications instead, click Customize and then select to either keep all previous versions or simply keep specific applications. By default, Office 2010 will try to activate online automatically.  If it doesn’t activate during the install, you’ll need to activate it when you first run any of the Office 2010 apps.   Conclusion The Tech Guarantee makes it easy to get the latest version of Office if you recently purchased Office 2007.  The Tech Guarantee program is open through the end of September, so make sure to grab your upgrade during this time.  Actually, if you find a great deal on Office 2007 from a major retailer between now and then, you could also take advantage of this program to get Office 2010 cheaper. And if you need help getting started with Office 2010, check out our articles that can help you get situated in your new version of Office! Link Activate and Download Your free Office 2010 Tech Guarantee Upgrade Similar Articles Productive Geek Tips Remove Office 2010 Beta and Reinstall Office 2007Upgrade Office 2003 to 2010 on XP or Run them Side by SideCenter Pictures and Other Objects in Office 2007 & 2010Change the Default Color Scheme in Office 2010Show Two Time Zones in Your Outlook 2007 Calendar TouchFreeze Alternative in AutoHotkey The Icy Undertow Desktop Windows Home Server – Backup to LAN The Clear & Clean Desktop Use This Bookmarklet to Easily Get Albums Use AutoHotkey to Assign a Hotkey to a Specific Window Latest Software Reviews Tinyhacker Random Tips HippoRemote Pro 2.2 Xobni Plus for Outlook All My Movies 5.9 CloudBerry Online Backup 1.5 for Windows Home Server Windows Media Player Plus! – Cool WMP Enhancer Get Your Team’s World Cup Schedule In Google Calendar Backup Drivers With Driver Magician TubeSort: YouTube Playlist Organizer XPS file format & XPS Viewer Explained Microsoft Office Web Apps Guide

    Read the article

  • Silverlight 4 Tools for VS 2010 and WCF RIA Services Released

    - by ScottGu
    The final release of the Silverlight 4 Tools for Visual Studio 2010 and WCF RIA Services is now available for download.  Download and Install If you already have Visual Studio 2010 installed (or the free Visual Web Developer 2010 Express), then you can install both the Silverlight 4 Tooling Support as well as WCF RIA Services support by downloading and running this setup package (note: please make sure to uninstall the preview release of the Silverlight 4 Tools for VS 2010 if you have previously installed that).  The Silverlight 4 Tools for VS 2010 package extends the Silverlight support built into Visual Studio 2010 and enables support for Silverlight 4 applications as well.  It also installs WCF RIA Services application templates and libraries: Today’s release includes the English edition of the Silverlight 4 Tooling – localized versions will be available next month for other Visual Studio languages as well. Silverlight Tooling Support Visual Studio 2010 includes rich tooling support for building Silverlight and WPF applications. It includes a WYSIWYG designer surface that enables you to easily use controls to construct UI – including the ability to take advantage of layout containers, and apply styles and resources: The VS 2010 designer enables you to leverage the rich data binding support within Silverlight and WPF, and easily wire-up bindings on controls.  The Data Sources window within Silverlight projects can be used to reference POCO objects (plain old CLR objects), WCF Services, WCF RIA Services client proxies or SharePoint Lists.  For example, let’s assume we add a “Person” class like below to our project: We could then add it to the Data Source window which will cause it to show up like below in the IDE: We can optionally customize the default UI control types that are associated for each property on the object.  For example, below we’ll default the BirthDate property to be represented by a “DatePicker” control: And then when we drag/drop the Person type from the Data Sources onto the design-surface it will automatically create UI controls that are bound to the properties of our Person class: VS 2010 allows you to optionally customize each UI binding further by selecting a control, and then right-click on any of its properties within the property-grid and pull up the “Apply Bindings” dialog: This will bring up a floating data-binding dialog that enables you to easily configure things like the binding path on the data source object, specify a format convertor, specify string-format settings, specify how validation errors should be handled, etc: In addition to providing WYSIWYG designer support for WPF and Silverlight applications, VS 2010 also provides rich XAML intellisense and code editing support – enabling a rich source editing environment. Silverlight 4 Tool Enhancements Today’s Silverlight 4 Tooling Release for VS 2010 includes a bunch of nice new features.  These include: Support for Silverlight Out of Browser Applications and Elevated Trust Applications You can open up a Silverlight application’s project properties window and click the “Enable Running Application Out of Browser” checkbox to enable you to install an offline, out of browser, version of your Silverlight 4 application.  
You can then customize a number of “out of browser” settings of your application within Visual Studio: Notice above how you can now indicate that you want to run with elevated trust, with hardware graphics acceleration, as well as customize things like the Window style of the application (allowing you to build a nice polished window style for consumer applications). Support for Implicit Styles and “Go to Value Definition” Support: Silverlight 4 now allows you to define “implicit styles” for your applications.  This allows you to style controls by type (for example: have a default look for all buttons) and avoid you having to explicitly reference styles from each control.  In addition to honoring implicit styles on the designer-surface, VS 2010 also now allows you to right click on any control (or on one of it properties) and choose the “Go to Value Definition…” context menu to jump to the XAML where the style is defined, and from there you can easily navigate onward to any referenced resources.  This makes it much easier to figure out questions like “why is my button red?”: Style Intellisense VS 2010 enables you to easily modify styles you already have in XAML, and now you get intellisense for properties and their values within a style based on the TargetType of the specified control.  For example, below we have a style being set for controls of type “Button” (this is indicated by the “TargetType” property).  Notice how intellisense now automatically shows us properties for the Button control (even within the <Setter> element): Great Video - Watch the Silverlight Designer Features in Action You can see all of the above Silverlight 4 Tools for Visual Studio 2010 features (and some more cool ones I haven’t mentioned) demonstrated in action within this 20 minute Silverlight.TV video on Channel 9: WCF RIA Services Today we also shipped the V1 release of WCF RIA Services.  It is included and automatically installed as part of the Silverlight 4 Tools for Visual Studio 2010 setup. WCF RIA Services makes it much easier to build business applications with Silverlight.  It simplifies the traditional n-tier application pattern by bringing together the ASP.NET and Silverlight platforms using the power of WCF for communication.  WCF RIA Services provides a pattern to write application logic that runs on the mid-tier and controls access to data for queries, changes and custom operations. It also provides end-to-end support for common tasks such as data validation, authentication and authorization based on roles by integrating with Silverlight components on the client and ASP.NET on the mid-tier. Put simply – it makes it much easier to query data stored on a server from a client machine, optionally manipulate/modify the data on the client, and then save it back to the server.  It supports a validation architecture that helps ensure that your data is kept secure and business rules are applied consistently on both the client and middle-tiers. WCF RIA Services uses WCF for communication between the client and the server  It supports both an optimized .NET to .NET binary serialization format, as well as a set of open extensions to the ATOM format known as ODATA and an optional JavaScript Object Notation (JSON) format that can be used by any client. You can hear Nikhil and Dinesh talk a little about WCF RIA Services in this 13 minutes Channel 9 video. 
Putting it all Together – the Silverlight 4 Training Kit Check out the Silverlight 4 Training Kit to learn more about how to build business applications with Silverlight 4, Visual Studio 2010 and WCF RIA Services. The training kit includes 8 modules, 25 videos, and several hands-on labs that explain Silverlight 4 and WCF RIA Services concepts and walks you through building an end-to-end application with them.    The training kit is available for free and is a great way to get started. Summary I’m really excited about today’s release – as they really complete the Silverlight development story and deliver a great end to end runtime + tooling story for building applications.  All of the above features are available for use both in VS 2010 as well as the free Visual Web Developer 2010 Express Edition – making it really easy to get started building great solutions. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Ajax Control Toolkit and Superexpert

    - by Stephen Walther
    Microsoft has asked my company, Superexpert Consulting, to take ownership of the development and maintenance of the Ajax Control Toolkit moving forward. In this blog entry, I discuss our strategy for improving the Ajax Control Toolkit. Why the Ajax Control Toolkit? The Ajax Control Toolkit is one of the most popular projects on CodePlex. In fact, some have argued that it is among the most successful open-source projects of all time. It consistently receives over 3,500 downloads a day (not weekends -- workdays). A mind-boggling number of developers use the Ajax Control Toolkit in their ASP.NET Web Forms applications. Why does the Ajax Control Toolkit continue to be such a popular project? The Ajax Control Toolkit fills a strong need in the ASP.NET Web Forms world. The Toolkit enables Web Forms developers to build richly interactive JavaScript applications without writing any JavaScript. For example, by taking advantage of the Ajax Control Toolkit, a Web Forms developer can add modal dialogs, popup calendars, and client tabs to a web application simply by dragging web controls onto a page. The Ajax Control Toolkit is not for everyone. If you are comfortable writing JavaScript then I recommend that you investigate using jQuery plugins instead of the Ajax Control Toolkit. However, if you are a Web Forms developer and you don’t want to get your hands dirty writing JavaScript, then the Ajax Control Toolkit is a great solution. The Ajax Control Toolkit is Vast The Ajax Control Toolkit consists of 40 controls. That’s a lot of controls (For the sake of comparison, jQuery UI consists of only 8 controls – those slackers J). Furthermore, developers expect the Ajax Control Toolkit to work on browsers both old and new. For example, people expect the Ajax Control Toolkit to work with Internet Explorer 6 and Internet Explorer 9 and every version of Internet Explorer in between. People also expect the Ajax Control Toolkit to work on the latest versions of Mozilla Firefox, Apple Safari, and Google Chrome. And, people expect the Ajax Control Toolkit to work with different operating systems. Yikes, that is a lot of combinations. The biggest challenge which my company faces in supporting the Ajax Control Toolkit is ensuring that the Ajax Control Toolkit works across all of these different browsers and operating systems. Testing, Testing, Testing Because we wanted to ensure that we could easily test the Ajax Control Toolkit with different browsers, the very first thing that we did was to set up a dedicated testing server. The dedicated server -- named Schizo -- hosts 4 virtual machines so that we can run Internet Explorer 6, Internet Explorer 7, Internet Explorer 8, and Internet Explorer 9 at the same time (We also use the virtual machines to host the latest versions of Firefox, Chrome, Opera, and Safari). The five developers on our team (plus me) can each publish to a separate FTP website on the testing server. That way, we can quickly test how changes to the Ajax Control Toolkit affect different browsers. QUnit Tests for the Ajax Control Toolkit Introducing regressions – introducing new bugs when trying to fix existing bugs – is the concern which prevents me from sleeping well at night. There are so many people using the Ajax Control Toolkit in so many unique scenarios, that it is difficult to make improvements to the Ajax Control Toolkit without introducing regressions. 
In order to avoid regressions, we decided early on that it was extremely important to build good test coverage for the 40 controls in the Ajax Control Toolkit. We’ve been focusing a lot of energy on building automated JavaScript unit tests which we can use to help us discover regressions. We decided to write the unit tests with the QUnit test framework. We picked QUnit because it is quickly becoming the standard unit testing framework in the JavaScript world. For example, it is the unit testing framework used by the jQuery team, the jQuery UI team, and many jQuery UI plugin developers. We had to make several enhancements to the QUnit framework in order to test the Ajax Control Toolkit. For example, QUnit does not support tests which include postbacks. We modified the QUnit framework so that it works with IFrames so we could perform postbacks in our automated tests. At this point, we have written hundreds of QUnit tests. For example, we have written 135 QUnit tests for the Accordion control. The QUnit tests are included with the Ajax Control Toolkit source code in a project named AjaxControlToolkit.Tests. You can run all of the QUnit tests contained in the project by opening the Default.aspx page. Automating the QUnit Tests across Multiple Browsers Automated tests are useless if no one ever runs them. In order for the QUnit tests to be useful, we needed an easy way to run the tests automatically against a matrix of browsers. We wanted to run the unit tests against Internet Explorer 6, Internet Explorer 7, Internet Explorer 8, Internet Explorer 9, Firefox, Chrome, and Safari automatically. Expecting a developer to run QUnit tests against every browser after every check-in is just too much to expect. It takes 20 seconds to run the Accordion QUnit tests. We are testing against 8 browsers. That would require the developer to open 8 browsers and wait for the results after each change in code. Too much work. Therefore, we built a JavaScript Test Server. Our JavaScript Test Server project was inspired by John Resig’s TestSwarm project. The JavaScript Test Server runs our QUnit tests in a swarm of browsers (running on different operating systems) automatically. Here’s how the JavaScript Test Server works: 1. We created an ASP.NET page named RunTest.aspx that constantly polls the JavaScript Test Server for a new set of QUnit tests to run. After the RunTest.aspx page runs the QUnit tests, the RunTest.aspx records the test results back to the JavaScript Test Server. 2. We opened the RunTest.aspx page on instances of Internet Explorer 6, Internet Explorer 7, Internet Explorer 8, Internet Explorer 9, FireFox, Chrome, Opera, Google, and Safari. Now that we have the JavaScript Test Server setup, we can run all of our QUnit tests against all of the browsers which we need to support with a single click of a button. A New Release of the Ajax Control Toolkit Each Month The Ajax Control Toolkit Issue Tracker contains over one thousand five hundred open issues and feature requests. So we have plenty of work on our plates J At CodePlex, anyone can vote for an issue to be fixed. Originally, we planned to fix issues in order of their votes. However, we quickly discovered that this approach was inefficient. Constantly switching back and forth between different controls was too time-consuming. It takes time to re-familiarize yourself with a control. Instead, we decided to focus on two or three controls each month and really focus on fixing the issues with those controls. 
This way, we can fix sets of related issues and avoid the randomization caused by context switching. Our team works in monthly sprints, and we plan to do another release of the Ajax Control Toolkit each and every month. So far, we have completed one release of the Ajax Control Toolkit, which shipped on April 1, 2011, and we plan to release a new version in early May. Conclusion Fortunately, I work with a team of smart developers. We currently have 5 developers working on the Ajax Control Toolkit (not full-time – they are also building two very cool ASP.NET MVC applications). All the developers who work on our team are required to have strong JavaScript, jQuery, and ASP.NET MVC skills. In the interest of being as transparent as possible about our work on the Ajax Control Toolkit, I plan to blog frequently about our team’s ongoing work. In my next blog entry, I plan to write about the two Ajax Control Toolkit controls which are the focus of our work for the next release.

    Read the article

  • GZip/Deflate Compression in ASP.NET MVC

    - by Rick Strahl
    A long while back I wrote about GZip compression in ASP.NET. In that article I describe two generic helper methods that I've used in all sorts of ASP.NET application from WebForms apps to HttpModules and HttpHandlers that require gzip or deflate compression. The same static methods also work in ASP.NET MVC. Here are the two routines:/// <summary> /// Determines if GZip is supported /// </summary> /// <returns></returns> public static bool IsGZipSupported() { string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"]; if (!string.IsNullOrEmpty(AcceptEncoding) && (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate"))) return true; return false; } /// <summary> /// Sets up the current page or handler to use GZip through a Response.Filter /// IMPORTANT: /// You have to call this method before any output is generated! /// </summary> public static void GZipEncodePage() { HttpResponse Response = HttpContext.Current.Response; if (IsGZipSupported()) { string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"]; if (AcceptEncoding.Contains("gzip")) { Response.Filter = new System.IO.Compression.GZipStream(Response.Filter, System.IO.Compression.CompressionMode.Compress); Response.Headers.Remove("Content-Encoding"); Response.AppendHeader("Content-Encoding", "gzip"); } else { Response.Filter = new System.IO.Compression.DeflateStream(Response.Filter, System.IO.Compression.CompressionMode.Compress); Response.Headers.Remove("Content-Encoding"); Response.AppendHeader("Content-Encoding", "deflate"); } } // Allow proxy servers to cache encoded and unencoded versions separately Response.AppendHeader("Vary", "Content-Encoding"); } The first method checks whether the client sending the request includes the accept-encoding for either gzip or deflate, and if if it does it returns true. The second function uses IsGzipSupported() to decide whether it should encode content and uses an Response Filter to do its job. Basically response filters look at the Response output stream as it's written and convert the data flowing through it. Filters are a bit tricky to work with but the two .NET filter streams for GZip and Deflate Compression make this a snap to implement. In my old code and even now in MVC I can always do:public ActionResult List(string keyword=null, int category=0) { WebUtils.GZipEncodePage(); …} to encode my content. And that works just fine. The proper way: Create an ActionFilterAttribute However in MVC this sort of thing is typically better handled by an ActionFilter which can be applied with an attribute. So to be all prim and proper I created an CompressContentAttribute ActionFilter that incorporates those two helper methods and which looks like this:/// <summary> /// Attribute that can be added to controller methods to force content /// to be GZip encoded if the client supports it /// </summary> public class CompressContentAttribute : ActionFilterAttribute { /// <summary> /// Override to compress the content that is generated by /// an action method. 
/// </summary> /// <param name="filterContext"></param> public override void OnActionExecuting(ActionExecutingContext filterContext) { GZipEncodePage(); } /// <summary> /// Determines if GZip is supported /// </summary> /// <returns></returns> public static bool IsGZipSupported() { string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"]; if (!string.IsNullOrEmpty(AcceptEncoding) && (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate"))) return true; return false; } /// <summary> /// Sets up the current page or handler to use GZip through a Response.Filter /// IMPORTANT: /// You have to call this method before any output is generated! /// </summary> public static void GZipEncodePage() { HttpResponse Response = HttpContext.Current.Response; if (IsGZipSupported()) { string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"]; if (AcceptEncoding.Contains("gzip")) { Response.Filter = new System.IO.Compression.GZipStream(Response.Filter, System.IO.Compression.CompressionMode.Compress); Response.Headers.Remove("Content-Encoding"); Response.AppendHeader("Content-Encoding", "gzip"); } else { Response.Filter = new System.IO.Compression.DeflateStream(Response.Filter, System.IO.Compression.CompressionMode.Compress); Response.Headers.Remove("Content-Encoding"); Response.AppendHeader("Content-Encoding", "deflate"); } } // Allow proxy servers to cache encoded and unencoded versions separately Response.AppendHeader("Vary", "Content-Encoding"); } } It's basically the same code wrapped into an ActionFilter attribute, which intercepts requests MVC requests to Controller methods and lets you hook up logic before and after the methods have executed. Here I want to override OnActionExecuting() which fires before the Controller action is fired. With the CompressContentAttribute created, it can now be applied to either the controller as a whole:[CompressContent] public class ClassifiedsController : ClassifiedsBaseController { … } or to one of the Action methods:[CompressContent] public ActionResult List(string keyword=null, int category=0) { … } The former applies compression to every action method, while the latter is selective and only applies it to the individual action method. Is the attribute better than the static utility function? Not really, but it is the standard MVC way to hook up 'filter' content and that's where others are likely to expect to set options like this. In fact,  you have a bit more control with the utility function because you can conditionally apply it in code, but this is actually much less likely in MVC applications than old WebForms apps since controller methods tend to be more focused. Compression Caveats Http compression is very cool and pretty easy to implement in ASP.NET but you have to be careful with it - especially if your content might get transformed or redirected inside of ASP.NET. A good example, is if an error occurs and a compression filter is applied. ASP.NET errors don't clear the filter, but clear the Response headers which results in some nasty garbage because the compressed content now no longer matches the headers. Another issue is Caching, which has to account for all possible ways of compression and non-compression that the content is served. Basically compressed content and caching don't mix well. I wrote about several of these issues in an old blog post and I recommend you take a quick peek before diving into making every bit of output Gzip encoded. 
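If, after weighing those caveats, you do decide you want compression applied broadly, ASP.NET MVC 3's global filters offer one way to wire it up. The following is a minimal sketch – not from the original post – that assumes the CompressContentAttribute shown above is available in the project:

using System.Web.Mvc;

public class MvcApplication : System.Web.HttpApplication
{
    public static void RegisterGlobalFilters(GlobalFilterCollection filters)
    {
        filters.Add(new HandleErrorAttribute());
        // Apply the article's CompressContentAttribute to every controller action.
        filters.Add(new CompressContentAttribute());
    }

    protected void Application_Start()
    {
        AreaRegistration.RegisterAllAreas();
        RegisterGlobalFilters(GlobalFilters.Filters);
        // Route registration omitted for brevity.
    }
}

Registering the attribute globally trades the per-action control described above for less repetition, so it only makes sense if compressing every response is genuinely what you want.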
None of these caveats are show stoppers, but you have to be aware of the issues. Related Posts: GZip Compression with ASP.NET Content · ASP.NET GZip Encoding Caveats. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in ASP.NET, MVC.

    Read the article

  • ASP.NET MVC 3: Implicit and Explicit code nuggets with Razor

    - by ScottGu
    This is another in a series of posts I’m doing that cover some of the new ASP.NET MVC 3 features: New @model keyword in Razor (Oct 19th) Layouts with Razor (Oct 22nd) Server-Side Comments with Razor (Nov 12th) Razor’s @: and <text> syntax (Dec 15th) Implicit and Explicit code nuggets with Razor (today) In today’s post I’m going to discuss how Razor enables you to both implicitly and explicitly define code nuggets within your view templates, and walkthrough some code examples of each of them.  Fluid Coding with Razor ASP.NET MVC 3 ships with a new view-engine option called “Razor” (in addition to the existing .aspx view engine).  You can learn more about Razor, why we are introducing it, and the syntax it supports from my Introducing Razor blog post. Razor minimizes the number of characters and keystrokes required when writing a view template, and enables a fast, fluid coding workflow. Unlike most template syntaxes, you do not need to interrupt your coding to explicitly denote the start and end of server blocks within your HTML. The Razor parser is smart enough to infer this from your code. This enables a compact and expressive syntax which is clean, fast and fun to type. For example, the Razor snippet below can be used to iterate a collection of products and output a <ul> list of product names that link to their corresponding product pages: When run, the above code generates output like below: Notice above how we were able to embed two code nuggets within the content of the foreach loop.  One of them outputs the name of the Product, and the other embeds the ProductID within a hyperlink.  Notice that we didn’t have to explicitly wrap these code-nuggets - Razor was instead smart enough to implicitly identify where the code began and ended in both of these situations.  How Razor Enables Implicit Code Nuggets Razor does not define its own language.  Instead, the code you write within Razor code nuggets is standard C# or VB.  This allows you to re-use your existing language skills, and avoid having to learn a customized language grammar. The Razor parser has smarts built into it so that whenever possible you do not need to explicitly mark the end of C#/VB code nuggets you write.  This makes coding more fluid and productive, and enables a nice, clean, concise template syntax.  Below are a few scenarios that Razor supports where you can avoid having to explicitly mark the beginning/end of a code nugget, and instead have Razor implicitly identify the code nugget scope for you: Property Access Razor allows you to output a variable value, or a sub-property on a variable that is referenced via “dot” notation: You can also use “dot” notation to access sub-properties multiple levels deep: Array/Collection Indexing: Razor allows you to index into collections or arrays: Calling Methods: Razor also allows you to invoke methods: Notice how for all of the scenarios above how we did not have to explicitly end the code nugget.  Razor was able to implicitly identify the end of the code block for us. 
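The original post illustrates these scenarios with screenshots that are not reproduced here. As a rough sketch of the kind of markup being described – the Model/Product property names are assumptions for illustration, not taken from the post – an implicit code-nugget loop might look like this:

@* Implicit code nuggets: Razor infers where the C# ends without explicit delimiters. *@
<ul>
    @foreach (var product in Model.Products)
    {
        <li>
            <a href="/Products/@product.ProductID">@product.Name</a>   @* property access *@
            <span>@product.Category.Name</span>                        @* nested "dot" access *@
            <span>@Model.Products[0].Name</span>                       @* collection indexing *@
            <span>@product.Name.ToUpper()</span>                       @* method call *@
        </li>
    }
</ul>

In each case Razor treats the surrounding markup as static content and only the @-prefixed expression as C#.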
Razor’s Parsing Algorithm for Code Nuggets The below algorithm captures the core parsing logic we use to support “@” expressions within Razor, and to enable the implicit code nugget scenarios above: Parse an identifier - As soon as we see a character that isn't valid in a C# or VB identifier, we stop and move to step 2 Check for brackets - If we see "(" or "[", go to step 2.1., otherwise, go to step 3  Parse until the matching ")" or "]" (we track nested "()" and "[]" pairs and ignore "()[]" we see in strings or comments) Go back to step 2 Check for a "." - If we see one, go to step 3.1, otherwise, DO NOT ACCEPT THE "." as code, and go to step 4 If the character AFTER the "." is a valid identifier, accept the "." and go back to step 1, otherwise, go to step 4 Done! Differentiating between code and content Step 3.1 is a particularly interesting part of the above algorithm, and enables Razor to differentiate between scenarios where an identifier is being used as part of the code statement, and when it should instead be treated as static content: Notice how in the snippet above we have ? and ! characters at the end of our code nuggets.  These are both legal C# identifiers – but Razor is able to implicitly identify that they should be treated as static string content as opposed to being part of the code expression because there is whitespace after them.  This is pretty cool and saves us keystrokes. Explicit Code Nuggets in Razor Razor is smart enough to implicitly identify a lot of code nugget scenarios.  But there are still times when you want/need to be more explicit in how you scope the code nugget expression.  The @(expression) syntax allows you to do this: You can write any C#/VB code statement you want within the @() syntax.  Razor will treat the wrapping () characters as the explicit scope of the code nugget statement.  Below are a few scenarios where we could use the explicit code nugget feature: Perform Arithmetic Calculation/Modification: You can perform arithmetic calculations within an explicit code nugget: Appending Text to a Code Expression Result: You can use the explicit expression syntax to append static text at the end of a code nugget without having to worry about it being incorrectly parsed as code: Above we have embedded a code nugget within an <img> element’s src attribute.  It allows us to link to images with URLs like “/Images/Beverages.jpg”.  Without the explicit parenthesis, Razor would have looked for a “.jpg” property on the CategoryName (and raised an error).  By being explicit we can clearly denote where the code ends and the text begins. Using Generics and Lambdas Explicit expressions also allow us to use generic types and generic methods within code expressions – and enable us to avoid the <> characters in generics from being ambiguous with tag elements. One More Thing….Intellisense within Attributes We have used code nuggets within HTML attributes in several of the examples above.  One nice feature supported by the Razor code editor within Visual Studio is the ability to still get VB/C# intellisense when doing this. Below is an example of C# code intellisense when using an implicit code nugget within an <a> href=”” attribute: Below is an example of C# code intellisense when using an explicit code nugget embedded in the middle of a <img> src=”” attribute: Notice how we are getting full code intellisense for both scenarios – despite the fact that the code expression is embedded within an HTML attribute (something the existing .aspx code editor doesn’t support).  
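Again, the screenshots for the explicit @(expression) scenarios and the attribute-embedded nuggets are not included here. A small hypothetical sketch of the cases described above (the item property names are illustrative):

<span>Total: @(item.UnitPrice * item.Quantity)</span>                      @* arithmetic inside an explicit nugget *@
<img src="/Images/@(item.CategoryName).jpg" alt="@item.CategoryName" />   @* ".jpg" stays static text *@
<a href="/Products/Details/@item.ProductID">Details</a>                    @* implicit nugget inside an attribute *@

The wrapping parentheses make the scope of the code explicit, so Razor does not try to interpret the trailing “.jpg” as a property access.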
This makes writing code even easier, and ensures that you can take advantage of intellisense everywhere. Summary Razor enables a clean and concise templating syntax that enables a very fluid coding workflow.  Razor’s ability to implicitly scope code nuggets reduces the amount of typing you need to perform, and leaves you with really clean code. When necessary, you can also explicitly scope code expressions using a @(expression) syntax to provide greater clarity around your intent, as well as to disambiguate code statements from static markup. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • 17 new features in Visual Studio 2010

    - by vik20000in
    Visual studio 2010 has been released to RTM a few days back. This release of Visual studio 2010 comes with a big number of improvements on many fronts. In this post I will try and point out some of the major improvements in Visual Studio 2010. 1)      Visual studio IDE Improvement. Visual studio IDE has been rewritten in WPF. The look and feel of the studio has been improved for improved readability. Start page has been redesigned and template so that anyone can change the start page as they wish. 2)      Multiple Monitor - Support for Multiple Monitor was already there in Visual studio. But in this edition it has been improved as much that we can now place the document, design and code window outside the IDE in another monitor. 3)      ZOOM in Code Editor – Making the editors in WPF has made significant improvement for them. The best one that I like is the ZOOM feature. We can now zoom in the code editor with the help of the ctrl + Mouse scroll. The zoom feature does not work on the Design surface or windows with icon like solution view and toolbox. 4)      Box Selection - Another Important improvement in the Visual studio 2010 is the box selection. We can select a rectangular by holding down the Alt Key and selecting with mouse.  Now in the rectangular selection we can insert text, Paste same code in different line etc. This is helpful if you want to convert a number of variables from public to private etc… 5)      New Improved Search – One of the best productivity improvements in Visual studio 2010 is its new search as you type support. This has been done in the Navigate To window which can be brought up by pressing (Ctrl + ,). The navigate To windows also take help of the Camel casing and will be able to search with the help of camel casing when character is entered in upper case. For example we can search AOH for AddOrederHeader. 6)      Call Hierarchy – This feature is only available to the Visual C# and Visual C++ editor. The call hierarchy windows displays the calls made to and from (yes both to and from) a selected method property or a constructor. The call hierarchy also shows the implementation of interface and the overrides of virtual or abstract methods. This window is very helpful in understanding the code flow, and evaluating the effect of making changes. The best part is it is available at design time and not at runtime only like a debugger. 7)      Highlighting references – One of the very cool stuff in Visual Studio 2010 is the fact if you select a variable then all the use of that variable will be highlighted alongside. This should work for all the result of symbols returned by Find all reference. This also works for Name of class, objects variable, properties and methods. We can also use the Ctrl + Shift + Down Arrow or Up Arror to move through them. 8)      Generate from usage - The Generate from usage feature lets you use classes and members before you define them. You can generate a stub for any undefined class, constructor, method, property, field, or enum that you want to use but have not yet defined. You can generate new types and members without leaving your current location in code, This minimizes interruption to your workflow.9)      IntelliSense Suggestion Mode - IntelliSense now provides two alternatives for IntelliSense statement completion, completion mode and suggestion mode. Use suggestion mode for situations where classes and members are used before they are defined. 
In suggestion mode, when you type in the editor and then commit the entry, the text you typed is inserted into the code as-is. When you commit an entry in completion mode, the editor inserts the entry that is highlighted in the members list. When an IntelliSense window is open, you can press CTRL+ALT+SPACEBAR to toggle between completion mode and suggestion mode. 10) Application Lifecycle Management – A client application for managing the application lifecycle – version control, work item tracking, build automation, team portal and so on – is available for free (this is not available for the Express edition). 11) Start Page – The start page has been redesigned in WPF with new functionality and a new look. Tabbed areas provide content from different sources, including MSDN. Once you open a project the start page closes automatically. The list of recent projects also lets you remove projects from the list. And above all, the start page is customizable enough to be changed to suit individual requirements. 12) Extension Manager – Visual Studio 2010 provides good ways to be extended, and we can also use MEF to extend most features of Visual Studio. The new Extension Manager can go to the Visual Studio Gallery and install extensions without even opening a browser. 13) Code snippets – Visual Studio 2010 now provides code snippets for HTML, JScript and ASP.NET as well. 14) Improved IntelliSense for JavaScript – JavaScript IntelliSense has been improved vastly (around 2-5 times), and it now also shows XML documentation comments as you type. 15) Web Deployment – Web deployment has been vastly improved; we can package and publish a web application in one click. The three major supported deployment scenarios are web packages, one-click deployment, and web configuration transformations. 16) SharePoint – Visual Studio 2010 also brings a vastly improved development experience for SharePoint. We can create, edit, debug, package, deploy and activate SharePoint projects from within Visual Studio, and deploying a site is as easy as hitting F5. 17) Azure – Visual Studio 2010 also comes with handy improvements for developing for the Windows Azure environment. Vikram

    Read the article

  • Install Ubuntu Netbook Edition with Wubi Installer

    - by Matthew Guay
    Ubuntu is one of the most popular versions of Linux, and their Netbook Remix edition is especially attractive for netbook owners.  Here we’ll look at how you can easily try out Ubuntu on your netbook without a CD/DVD drive. Netbooks, along with the growing number of thin, full powered laptops, lack a CD/DVD drive.  Installing software isn’t much of a problem since most programs, whether free or for-pay, are available for download.  Operating systems, however, are usually installed from a disk.  You can easily install Windows 7 from a flash drive with our tutorial, but installing Ubuntu from a USB flash drive is more complicated.  However, using Wubi, a Windows installer for Ubuntu, you can easily install it directly on your netbook and even uninstall it with only a few clicks. Getting Started Download and run the Wubi installer for Ubuntu (link below).  In the installer, select the drive you where you wish to install Ubuntu, the size of the installation (this is the amount dedicated to Ubuntu; under 20Gb should be fine), language, username, and desired password.  Also, from the Desktop environment menu, select Ubuntu Netbook to install the netbook edition.  Click Install when your settings are correct. Wubi will automatically download the selected version of Ubuntu and install it on your computer. Windows Firewall may ask if you want to unblock Wubi; select your network and click Allow access. The download will take around an hour on broadband, depending on your internet connection speed.  Once the download is completed, it will automatically install to your computer.  If you’d prefer to have everything downloaded before you start the install, download the ISO of Ubuntu Netbook edition (link below) and save it in the same folder as Wubi. Then, when you run Wubi, select the netbook edition as before and click Install.  Wubi will verify that your download is valid, and will then proceed to install from the downloaded ISO.  This install will only take about 10 minutes. Once the install is finished you will be asked to reboot your computer.  Save anything else you’re working on, and then reboot to finish setting up Ubuntu on your netbook. When your computer reboots, select Ubuntu at the boot screen.  Wubi leaves the default OS as Windows 7, so if you don’t select anything it will boot into Windows 7 after a few seconds. Ubuntu will automatically finish the install when you boot into it the first time.  This took about 12 minutes in our test. When the setup is finished, your netbook will reboot one more time.  Remember again to select Ubuntu at the boot screen.  You’ll then see a second boot screen; press your Enter key to select the default.   Ubuntu only took less than a minute to boot in our test.  When you see the login screen, select your name and enter your password you setup in Wubi.  Now you’re ready to start exploring Ubuntu Netbook Remix. Using Ubuntu Netbook Remix Ubuntu Netbook Remix offers a simple, full-screen interface to take the best advantage of netbooks’ small screens.  Pre-installed applications are displayed in the application launcher, and are organized by category.  Click once to open an application. The first screen on the application launcher shows your favorite programs.  If you’d like to add another application to the favorites pane, click the plus sign beside its icon. Your files from Windows are still accessible from Ubuntu Netbook Remix.  
From the home screen, select Files & Folders on the left menu, and then click the icon that says something like 100GB Filesystem under the Volumes section. Now you’ll be able to see all of your files from Windows.  Your user files such as documents, music, and pictures should be located in Documents and Settings in a folder with your user name. You can also easily install a variety of free applications via the Software Installer. Connecting to the internet is also easy, as Ubuntu Netbook Remix automatically recognized the WiFi adaptor on our test netbook, a Samsung N150.  To connect to a wireless network, click the wireless icon on the top right of the screen and select the network’s name from the list. And, if you’d like to customize your screen, right-click on the application launcher and select Change desktop background. Choose a background picture you’d like. Now you’ll see it through your application launcher.  Nice! Most applications are opened full-screen.  You can close them by clicking the x on the right of the program’s name. You can also switch to other applications from their icons on the top left.  Open the home screen by clicking the Ubuntu logo in the far left. Changing Boot Options By default, Wubi will leave Windows as the default operating system, and will give you 10 seconds at boot to choose to boot into Ubuntu.  To change this, boot into Windows and enter Advanced system settings in your start menu search. In this dialog, click Settings under Startup and Recovery. From this dialog, you can select the default operating system and the time to display list of operating systems.  You can enter a lower number to make the boot screen appear for less time. And if you’d rather make Ubuntu the default operating system, select it from the drop-down list.   Uninstalling Ubuntu Netbook Remix If you decide you don’t want to keep Ubuntu Netbook Remix on your computer, you can uninstall it just like you uninstall any normal application.  Boot your computer into Windows, open Control Panel, click Uninstall a Program, and enter ubuntu in the search box.  Select it, and click Uninstall. Click Uninstall at the prompt.  Ubuntu uninstalls very quickly, and removes the entry from the bootloader as well, so your computer is just like it was before you installed it.   Conclusion Ubuntu Netbook Remix offers an attractive Linux interface for netbooks.  We enjoyed trying it out, and found it much more user-friendly than most Linux distros.  And with the Wubi installer, you can install it risk-free and try it out on your netbook.  Or, if you’d like to try out another alternate netbook operating system, check out our article on Jolicloud, another new OS for netbooks. 
Links: Download Wubi Installer for Windows · Download Ubuntu Netbook Edition

    Read the article

  • Remote Desktop to Your Azure Virtual Machine

    - by Shaun
The Windows Azure team published their new development portal and SDK 1.3 this week. This release includes a lot of cool features; the one I have been looking forward to is Remote Desktop access to your running Windows Azure virtual machines.   Configuring Remote Desktop Access It is very simple to enable remote desktop access for an Azure service. First, let’s create a new Windows Azure project in Visual Studio. In this example I just created a normal MVC 2 web role without any modifications. Then we right-click the Azure project node in Solution Explorer and select “Publish”, and choose the “Deploy your Windows Azure project to Windows Azure” radio button at the top. Next, select the credential, deployment service/slot, storage account and label as usual. You must have the Management API certificate uploaded to your Windows Azure account, and the certificate installed on your machine, in order to use this one-click deployment feature. If you are familiar with this dialog you will notice a link named “Configure Remote Desktop connections”. This is where you enable the remote desktop feature for the service. After clicking this link we set the remote desktop access authorization information. There are four things we need to configure. Certificates: we either create or select a certificate file in order to encrypt the access credentials. In this example I will use the certificate file for my Management API. Username: the remote desktop user name used to access the virtual machine. Password: the password for that account. Expiration: the access credentials expire after one month by default, but we can change that here. After that we click the OK button to go back to the publish dialog.   The next step is to go back to the new Windows Azure portal and navigate to the hosted services list. I created a new hosted service and uploaded the certificate file to it. The user name and password used to access the Azure machine are encrypted on the local machine, sent to the Windows Azure platform, and then decrypted on the Azure side using the same certificate. This is why we need to upload the certificate file to Azure. We navigate to “Hosted Services, Storage Accounts & CDN” in the left panel, create a new hosted service named “SDK13”, and select the “Certificates” node. Then we click the “Add Certificates” button and select the local certificate file and its password to install it into this Azure service.   The final step is to go back to Visual Studio and, in the publish dialog, just click the OK button. Visual Studio will upload our package and configuration to the service with the remote desktop settings.   Remote Desktop Access to the Azure Virtual Machine With all of that done, let’s look back at the Windows Azure Developer Portal. If I select the web role that I just published, there is a section named “Remote Access” on the toolbar. In this section the Enable checkbox is checked, which means this role has the Remote Desktop Access feature enabled. If we want to modify the access credentials we can simply click the Configure button and update the user name, password, certificate and expiration date.   Let’s select the instance node under the web role. In this case I created just one instance for this demo.
We can see that when we select the instance node, the Connect button becomes enabled. After clicking this button an RDP file is downloaded. This is a Remote Desktop configuration file that we can use to access our Azure virtual machine. Let’s download it to our local machine and run it. We enter the user name and password we specified when we published our application to Azure and then click OK. A certificate warning dialog may appear; this is because the certificate we use for encryption is not signed by a trusted provider. Just select OK in these cases, since we know the certificate is safe. Finally, the Windows Azure virtual machine appears.   A Quick Look into the Azure Virtual Machine Let’s have a very quick look around our virtual machine. There are three disks available: C, D and E. Disk C: stores the local resources, diagnostics information, etc. Disk D: the system disk, which contains the OS, IIS, the .NET Frameworks, etc. Disk E: stores our application code. We can also see the IIS instance hosting our website on Azure and the IP configuration of the Azure virtual machine.   Summary In this post I covered one of the new features of the Azure SDK 1.3 – Remote Desktop Access. We can enable access per service, and all of the instances of that service can then be reached through the remote desktop tool. With this feature we can dig into the virtual machines behind our instances to see information such as the system event log, IIS logs, system information, etc. But we should be careful about modifying system settings, for two reasons that I know of so far: 1. If we have more than one instance in our service we must ensure that any system settings we modify are applied to all instances/virtual machines; otherwise, because the machines sit behind the Azure load balancer, our application may not work correctly due to the different settings between the instances. 2. When a virtual machine encounters a problem and needs to be moved to another physical machine, all the settings we made by hand will be lost.   Hope this helps, Shaun All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Learning AngularJS by Example – The Customer Manager Application

    - by dwahlin
    I’m always tinkering around with different ideas and toward the beginning of 2013 decided to build a sample application using AngularJS that I call Customer Manager. It’s not exactly the most creative name or concept, but I wanted to build something that highlighted a lot of the different features offered by AngularJS and how they could be used together to build a full-featured app. One of the goals of the application was to ensure that it was approachable by people new to Angular since I’ve never found overly complex applications great for learning new concepts. The application initially started out small and was used in my AngularJS in 60-ish Minutes video on YouTube but has gradually had more and more features added to it and will continue to be enhanced over time. It’ll be used in a new “end-to-end” training course my company is working on for AngularjS as well as in some video courses that will be coming out. Here’s a quick look at what the application home page looks like: In this post I’m going to provide an overview about how the application is organized, back-end options that are available, and some of the features it demonstrates. I’ve already written about some of the features so if you’re interested check out the following posts: Building an AngularJS Modal Service Building a Custom AngularJS Unique Value Directive Using an AngularJS Factory to Interact with a RESTful Service Application Structure The structure of the application is shown to the right. The  homepage is index.html and is located at the root of the application folder. It defines where application views will be loaded using the ng-view directive and includes script references to AngularJS, AngularJS routing and animation scripts, plus a few others located in the Scripts folder and to custom application scripts located in the app folder. The app folder contains all of the key scripts used in the application. There are several techniques that can be used for organizing script files but after experimenting with several of them I decided that I prefer things in folders such as controllers, views, services, etc. Doing that helps me find things a lot faster and allows me to categorize files (such as controllers) by functionality. My recommendation is to go with whatever works best for you. Anyone who says, “You’re doing it wrong!” should be ignored. Contrary to what some people think, there is no “one right way” to organize scripts and other files. As long as the scripts make it down to the client properly (you’ll likely minify and concatenate them anyway to reduce bandwidth and minimize HTTP calls), the way you organize them is completely up to you. Here’s what I ended up doing for this application: Animation code for some custom animations is located in the animations folder. In addition to AngularJS animations (which are defined using CSS in Content/animations.css), it also animates the initial customer data load using a 3rd party script called GreenSock. Controllers are located in the controllers folder. Some of the controllers are placed in subfolders based upon the their functionality while others are placed at the root of the controllers folder since they’re more generic:   The directives folder contains the custom directives created for the application. The filters folder contains the custom filters created for the application that filter city/state and product information. The partials folder contains partial views. This includes things like modal dialogs used in the application. 
The services folder contains AngularJS factories and services used for various purposes in the application. Most of the scripts in this folder provide data functionality. The views folder contains the different views used in the application. Like the controllers folder, the views are organized into subfolders based on their functionality:   Back-End Services The Customer Manager application (grab it from Github) provides two different options on the back-end including ASP.NET Web API and Node.js. The ASP.NET Web API back-end uses Entity Framework for data access and stores data in SQL Server (LocalDb). The other option on the back-end is Node.js, Express, and MongoDB.   Using the ASP.NET Web API Back-End To run the application using ASP.NET Web API/SQL Server back-end open the .sln file at the root of the project in Visual Studio 2012 or higher (the free Express 2013 for Web version is fine). Press F5 and a browser will automatically launch and display the application. Using the Node.js Back-End To run the application using the Node.js/MongoDB back-end follow these steps: In the CustomerManager directory execute 'npm install' to install Express, MongoDB and Mongoose (package.json). Load sample data into MongoDB by performing the following steps: Execute 'mongod' to start the MongoDB daemon Navigate to the CustomerManager directory (the one that has initMongoCustData.js in it) then execute 'mongo' to start the MongoDB shell Enter the following in the mongo shell to load the seed files that handle seeding the database with initial data: use custmgr load("initMongoCustData.js") load("initMongoSettingsData.js") load("initMongoStateData.js") Start the Node/Express server by navigating to the CustomerManager/server directory and executing 'node app.js' View the application at http://localhost:3000 in your browser. Key Features The Customer Manager application certainly doesn’t cover every feature provided by AngularJS (as mentioned the intent was to keep it as simple as possible) but does provide insight into several key areas: Using factories and services as re-useable data services (see the app/services folder) Creating custom directives (see the app/directives folder) Custom paging (see app/views/customers/customers.html and app/controllers/customers/customersController.js) Custom filters (see app/filters) Showing custom modal dialogs with a re-useable service (see app/services/modalService.js) Making Ajax calls using a factory (see app/services/customersService.js) Using Breeze to retrieve and work with data (see app/services/customersBreezeService.js). Switch the application to use the Breeze factory by opening app/services.config.js and changing the useBreeze property to true. Intercepting HTTP requests to display a custom overlay during Ajax calls (see app/directives/wcOverlay.js) Custom animations using the GreenSock library (see app/animations/listAnimations.js) Creating custom AngularJS animations using CSS (see Content/animations.css) JavaScript patterns for defining controllers, services/factories, directives, filters, and more (see any JavaScript file in the app folder) Card View and List View display of data (see app/views/customers/customers.html and app/controllers/customers/customersController.js) Using AngularJS validation functionality (see app/views/customerEdit.html, app/controllers/customerEditController.js, and app/directives/wcUnique.js) More… Conclusion I’ll be enhancing the application even more over time and welcome contributions as well. 
Tony Quinn contributed the initial Node.js/MongoDB code which is very cool to have as a back-end option. Access the standard application here and a version that has custom routing in it here. Additional information about the custom routing can be found in this post.
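To give a feel for what the ASP.NET Web API back-end described above exposes to the Angular factories, here is a minimal, hypothetical sketch of a customers controller – the class, property and route names are assumptions for illustration, and the real implementation lives in the GitHub repository:

using System.Linq;
using System.Net;
using System.Net.Http;
using System.Web.Http;
using System.Data.Entity;

// Illustrative model and EF Code First context (not the actual repo classes).
public class Customer
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string City { get; set; }
}

public class CustomerManagerContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }
}

public class CustomersController : ApiController
{
    private readonly CustomerManagerContext _db = new CustomerManagerContext();

    // GET api/customers – the kind of endpoint an Angular factory would call via $http.
    public IQueryable<Customer> Get()
    {
        return _db.Customers;
    }

    // GET api/customers/5
    public Customer Get(int id)
    {
        var customer = _db.Customers.Find(id);
        if (customer == null)
            throw new HttpResponseException(HttpStatusCode.NotFound);
        return customer;
    }

    // POST api/customers
    public HttpResponseMessage Post(Customer customer)
    {
        _db.Customers.Add(customer);
        _db.SaveChanges();
        return Request.CreateResponse(HttpStatusCode.Created, customer);
    }
}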

    Read the article

  • On Her Majesty's Secret Source Code: .NET Reflector 7 Early Access Builds Now Available

    - by Bart Read
    Dodgy Bond references aside, I'm extremely happy to be able to tell you that we've just released our first .NET Reflector 7 Early Access build. We're going to make these available over the coming weeks via the main .NET Reflector download page at: http://reflector.red-gate.com/Download.aspx Please have a play and tell us what you think in the forum we've set up. Also, please let us know if you run into any problems in the same place. The new version so far comes with numerous decompilation improvements including (after 5 years!) support for iterator blocks - i.e., the yield statement first seen in .NET 2.0. We've also done a lot of work to solidify the support for .NET 4.0. Clive's written about the work he's done to support iterator blocks in much more detail here, along with the odd problem he's encountered when dealing with compiler generated code: http://www.simple-talk.com/community/blogs/clivet/96199.aspx. On the UI front we've started what will ultimately be a rewrite of the entire front-end, albeit broken into stages over two or three major releases. The most obvious addition at the moment is tabbed browsing, which you can see in Figure 1. Figure 1. .NET Reflector's new tabbed decompilation feature. Use CTRL+Click on any item in the assembly browser tree, or any link in the source code view, to open it in a new tab. This isn't by any means finished. I'll be tying up loose ends for the next few weeks, with a major focus on performance and resource usage. .NET Reflector has historically been a largely single-threaded application which has been fine up until now but, as you might expect, the addition of browser-style tabbing has pushed this approach somewhat beyond its limit. You can see this if you refresh the assemblies list by hitting F5. This shows up another problem: we really need to make Reflector remember everything you had open before you refreshed the list, rather than just the last item you viewed - I discovered that it's always done the latter, but it used to hide all panes apart from the treeview after a Refresh, including the decompiler/disassembler window. Ultimately I've got plans to add the whole VS/Chrome/Firefox style ability to drag a tab into the middle of nowhere to spawn a new window, but I need to be mindful of the add-ins, amongst other things, so it's possible that might slip to a 7.5 or 8.0 release. You'll also notice that .NET Reflector 7 now needs .NET 3.5 or later to run. We made this jump because we wanted to offer ourselves a much better chance of adding some really cool functionality to support newer technologies, such as Silverlight and Windows Phone 7. We've also taken the opportunity to start using WPF for UI development, which has frankly been a godsend. The learning curve is practically vertical but, I kid you not, it's just a far better world. Really. Stop using WinForms. Now. Why are you still using it? I had to go back and work on an old WinForms dialog for an hour or two yesterday and it really made me wince. The point is we'll be able to move the UI in some exciting new directions that will make Reflector easier to use whilst continuing to develop its functionality without (and this is key) cluttering the interface. The 3.5 language enhancements should also enable us to be much more productive over the longer term. 
I know most of you have .NET Fx 3.5 or 4.0 already but, if you do need to install a new version, I'd recommend you jump straight to 4.0 because, for one thing, it's faster, and if you're starting afresh there's really no reason not to. Despite the Fx version jump the Visual Studio add-in should still work fine in Visual Studio 2005, and obviously will continue to work in Visual Studio 2008 and 2010. If you do run into problems, again, please let us know here. As before, we continue to support every edition of Visual Studio exception the Express Editions. Speaking of Visual Studio, we've also been improving the add-in. You can now open and explore decompiled code for any referenced assembly in any project in your solution. Just right-click on the reference, then click Decompile and Explore on the context menu. Reflector will pop up a progress box whilst it decompiles your assembly (Figure 2) - you can move this out of the way whilst you carry on working. Figure 2. Decompilation progress. This isn't modal so you can just move it out of the way and carry on working. Once it's done you can explore your assembly in the Reflector treeview (Figure 3), also accessible via the .NET Reflector Explore Decompiled Assemblies main menu item. Double-click on any item to open decompiled source in the Visual Studio source code view. Use right-click and Go To Definition on the source view context menu to navigate through the code. Figure 3. Using the .NET Reflector treeview within Visual Studio. Double-click on any item to open decompiled source in the source code view. There are loads of other changes and fixes that have gone in, often under the hood, which I don't have room to talk about here, and plenty more to come over the next few weeks. I'll try to keep you abreast of new functionality and changes as they go in. There are a couple of smaller things worth mentioning now though. Firstly, we've reorganised the menus and toolbar in Reflector itself to more closely mirror what you might be used to in other applications. Secondly, we've tried to make some of the functionality more discoverable. For example, you can now switch decompilation target framework version directly from the toolbar - and the default is now .NET 4.0. I think that about covers it for the moment. As I said, please use the new version, and send us your feedback. Here's that download URL again: http://reflector.red-gate.com/Download.aspx. Until next time! Technorati Tags: .net reflector,7,early access,new version,decompilation,tabbing,visual studio,software development,.net,c#,vb

    Read the article

  • Class-Level Model Validation with EF Code First and ASP.NET MVC 3

    - by ScottGu
    Earlier this week the data team released the CTP5 build of the new Entity Framework Code-First library.  In my blog post a few days ago I talked about a few of the improvements introduced with the new CTP5 build.  Automatic support for enforcing DataAnnotation validation attributes on models was one of the improvements I discussed.  It provides a pretty easy way to enable property-level validation logic within your model layer. You can apply validation attributes like [Required], [Range], and [RegularExpression] – all of which are built-into .NET 4 – to your model classes in order to enforce that the model properties are valid before they are persisted to a database.  You can also create your own custom validation attributes (like this cool [CreditCard] validator) and have them be automatically enforced by EF Code First as well.  This provides a really easy way to validate property values on your models.  I showed some code samples of this in action in my previous post. Class-Level Model Validation using IValidatableObject DataAnnotation attributes provides an easy way to validate individual property values on your model classes.  Several people have asked - “Does EF Code First also support a way to implement class-level validation methods on model objects, for validation rules than need to span multiple property values?”  It does – and one easy way you can enable this is by implementing the IValidatableObject interface on your model classes. IValidatableObject.Validate() Method Below is an example of using the IValidatableObject interface (which is built-into .NET 4 within the System.ComponentModel.DataAnnotations namespace) to implement two custom validation rules on a Product model class.  The two rules ensure that: New units can’t be ordered if the Product is in a discontinued state New units can’t be ordered if there are already more than 100 units in stock We will enforce these business rules by implementing the IValidatableObject interface on our Product class, and by implementing its Validate() method like so: The IValidatableObject.Validate() method can apply validation rules that span across multiple properties, and can yield back multiple validation errors. Each ValidationResult returned can supply both an error message as well as an optional list of property names that caused the violation (which is useful when displaying error messages within UI). Automatic Validation Enforcement EF Code-First (starting with CTP5) now automatically invokes the Validate() method when a model object that implements the IValidatableObject interface is saved.  You do not need to write any code to cause this to happen – this support is now enabled by default. This new support means that the below code – which violates one of our above business rules – will automatically throw an exception (and abort the transaction) when we call the “SaveChanges()” method on our Northwind DbContext: In addition to reactively handling validation exceptions, EF Code First also allows you to proactively check for validation errors.  Starting with CTP5, you can call the “GetValidationErrors()” method on the DbContext base class to retrieve a list of validation errors within the model objects you are working with.  GetValidationErrors() will return a list of all validation errors – regardless of whether they are generated via DataAnnotation attributes or by an IValidatableObject.Validate() implementation.  
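The Product class itself appears in the original post only as a screenshot. As a hedged sketch of what such a class might look like – the property names are assumptions based on the description of the two business rules above – it could be written along these lines:

using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;

public class Product : IValidatableObject
{
    public int ProductID { get; set; }

    [Required]
    public string ProductName { get; set; }

    public bool Discontinued { get; set; }
    public int UnitsInStock { get; set; }
    public int UnitsOnOrder { get; set; }

    // Class-level validation rules that span multiple property values.
    public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
    {
        if (Discontinued && UnitsOnOrder > 0)
        {
            yield return new ValidationResult(
                "New units cannot be ordered for a discontinued product.",
                new[] { "UnitsOnOrder" });
        }

        if (UnitsInStock > 100 && UnitsOnOrder > 0)
        {
            yield return new ValidationResult(
                "New units cannot be ordered when more than 100 units are already in stock.",
                new[] { "UnitsOnOrder" });
        }
    }
}

Passing “UnitsOnOrder” as the member name is what lets MVC highlight the right field in the form, as described later in the post.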
Below is an example of proactively using the GetValidationErrors() method to check (and handle) errors before trying to call SaveChanges(): ASP.NET MVC 3 and IValidatableObject ASP.NET MVC 2 included support for automatically honoring and enforcing DataAnnotation attributes on model objects that are used with ASP.NET MVC’s model binding infrastructure.  ASP.NET MVC 3 goes further and also honors the IValidatableObject interface.  This combined support for model validation makes it easy to display appropriate error messages within forms when validation errors occur.  To see this in action, let’s consider a simple Create form that allows users to create a new Product: We can implement the above Create functionality using a ProductsController class that has two “Create” action methods like below: The first Create() method implements a version of the /Products/Create URL that handles HTTP-GET requests - and displays the HTML form to fill-out.  The second Create() method implements a version of the /Products/Create URL that handles HTTP-POST requests - and which takes the posted form data, ensures that is is valid, and if it is valid saves it in the database.  If there are validation issues it redisplays the form with the posted values.  The razor view template of our “Create” view (which renders the form) looks like below: One of the nice things about the above Controller + View implementation is that we did not write any validation logic within it.  The validation logic and business rules are instead implemented entirely within our model layer, and the ProductsController simply checks whether it is valid (by calling the ModelState.IsValid helper method) to determine whether to try and save the changes or redisplay the form with errors. The Html.ValidationMessageFor() helper method calls within our view simply display the error messages our Product model’s DataAnnotations and IValidatableObject.Validate() method returned.  We can see the above scenario in action by filling out invalid data within the form and attempting to submit it: Notice above how when we hit the “Create” button we got an error message.  This was because we ticked the “Discontinued” checkbox while also entering a value for the UnitsOnOrder (and so violated one of our business rules).  You might ask – how did ASP.NET MVC know to highlight and display the error message next to the UnitsOnOrder textbox?  It did this because ASP.NET MVC 3 now honors the IValidatableObject interface when performing model binding, and will retrieve the error messages from validation failures with it. The business rule within our Product model class indicated that the “UnitsOnOrder” property should be highlighted when the business rule we hit was violated: Our Html.ValidationMessageFor() helper method knew to display the business rule error message (next to the UnitsOnOrder edit box) because of the above property name hint we supplied: Keeping things DRY ASP.NET MVC and EF Code First enables you to keep your validation and business rules in one place (within your model layer), and avoid having it creep into your Controllers and Views.  Keeping the validation logic in the model layer helps ensure that you do not duplicate validation/business logic as you add more Controllers and Views to your application.  It allows you to quickly change your business rules/validation logic in one single place (within your model layer) – and have all controllers/views across your application immediately reflect it.  
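To make the “keep the controllers thin” point concrete, here is a hypothetical sketch of the two Create() actions described earlier in the post – note that they contain no validation logic of their own; the DbContext name is an assumption:

using System.Web.Mvc;

public class ProductsController : Controller
{
    private readonly NorthwindContext db = new NorthwindContext(); // hypothetical EF Code First context

    // GET: /Products/Create – renders the empty form.
    public ActionResult Create()
    {
        return View();
    }

    // POST: /Products/Create – saves if valid, otherwise redisplays the form with errors.
    [HttpPost]
    public ActionResult Create(Product product)
    {
        if (ModelState.IsValid)
        {
            db.Products.Add(product);
            db.SaveChanges();
            return RedirectToAction("Index");
        }

        // DataAnnotation and IValidatableObject failures surface here through ModelState.
        return View(product);
    }
}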
    This helps keep your application code clean and easily maintainable, and makes it much easier to evolve and update your application in the future.

    Summary

    EF Code First (starting with CTP5) now has built-in support for both DataAnnotations and the IValidatableObject interface.  This allows you to easily add validation and business rules to your models, and have EF automatically ensure that they are enforced anytime someone tries to persist changes to the database.  ASP.NET MVC 3 now supports both DataAnnotations and IValidatableObject as well, which makes it even easier to use them with your EF Code First model layer – and then have the controllers/views within your web layer automatically honor and support them too.  This makes it easy to build clean and highly maintainable applications. You don’t have to use DataAnnotations or IValidatableObject to perform your validation/business logic.  You can always roll your own custom validation architecture and/or use other more advanced validation frameworks/patterns if you want.  But for a lot of applications this built-in support will probably be sufficient – and provide a highly productive way to build solutions.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Monitor your Hard Drive’s Health with Acronis Drive Monitor

    - by Matthew Guay
    Are you worried that your computer’s hard drive could die without any warning?  Here’s how you can keep tabs on it and get the first warning signs of potential problems before you actually lose your critical data. Hard drive failures are one of the most common ways people lose important data from their computers.  As more of our memories and important documents are stored digitally, a hard drive failure can mean the loss of years of work.  Acronis Drive Monitor helps you avert these disasters by warning you at the first signs your hard drive may be having trouble.  It monitors many indicators, including heat, read/write errors, total lifespan, and more. It then notifies you via a taskbar popup or email that problems have been detected.  This early warning lets you know ahead of time that you may need to purchase a new hard drive and migrate your data before it’s too late.

    Getting Started

    Head over to the Acronis site to download Drive Monitor (link below).  You’ll need to enter your name and email, and then you can download this free tool. Also, note that the download page may ask if you want to include a trial of their for-pay backup program.  If you wish to simply install the Drive Monitor utility, click Continue without adding. Run the installer when the download is finished.  Follow the prompts and install as normal. Once it’s installed, you can quickly get an overview of your hard drives’ health.  Note that it shows three categories: Disk problems, Acronis backup, and Critical Events.  On our computer, we had Seagate DiskWizard, an image backup utility based on Acronis Backup, installed, and Acronis detected it. Drive Monitor stays running in your tray even when the application window is closed.  It will keep monitoring your hard drives, and will alert you if there’s a problem.

    Find Detailed Information About Your Hard Drives

    Acronis’ simple interface lets you quickly see an overview of how the drives on your computer are performing.  If you’d like more information, click the link under the description.  Here we see that one of our drives has overheated, so click Show disks to get more information. Now you can select each of your drives and see more information about them.  From the Disk overview tab that opens by default, we see that our drive is being monitored, has been running for a total of 368 days, and that its health is good.  However, it is running at 113F, which is over the recommended max of 107F.  The S.M.A.R.T. parameters tab gives us more detailed information about our drive.  Most users wouldn’t know what an accepted value would be, so it also shows the status.  If the value is within the accepted parameters, it will report OK; otherwise, it will show that there is a problem in this area. One very interesting piece of information we can see is the total number of Power-On Hours, Start/Stop Count, and Power Cycle Count.  These could be useful indicators to check if you’re considering purchasing a second-hand computer.  Simply load this program, and you’ll get a better view of how long it’s been in use. Finally, the Events tab shows each time the program gave a warning.  We can see that our drive, which had been acting flaky already, is routinely overheating even when our other hard drive was running in normal temperature ranges.

    Monitor Acronis Backups And Critical Errors

    In addition to monitoring critical stats of your hard drives, Acronis Drive Monitor also keeps up with the status of your backup software and critical events reported by Windows.
    You can access these from the front page, or via the links on the left-hand sidebar.  If you have any edition of any Acronis Backup product installed, it will show that it was detected.  Note that it can only monitor the backup status of the newest versions of Acronis Backup and True Image. If no Acronis backup software was installed, it will show a warning that the drive may be unprotected and will give you a link to download Acronis backup software.  If you have another backup utility installed that you wish to monitor yourself, click Configure backup monitoring, and then disable monitoring on the drives you’re monitoring yourself. Finally, you can view any detected Critical events from the Critical events tab on the left.

    Get Emailed When There’s a Problem

    One of Drive Monitor’s best features is the ability to send you an email whenever there’s a problem.  Since this program can run on any version of Windows, including the Server and Home Server editions, you can use this feature to stay on top of your hard drives’ health even when you’re not nearby.  To set this up, click Options in the top left corner. Select Alerts on the left, and then click the Change settings link to set up your email account. Enter the email address at which you wish to receive alerts, and a name for the program.  Then, enter the outgoing mail server settings for your email.  If you have a Gmail account, enter the following information:

    Outgoing mail server (SMTP): smtp.gmail.com
    Port: 587
    Username and Password: your Gmail address and password

    Check the Use encryption box, and then select TLS from the encryption options.  It will now send a test message to your email account, so check and make sure it sent OK. Now you can choose to have the program automatically email you when warnings and critical alerts appear, and also to have it send regular disk status reports.

    Conclusion

    Whether you’ve got a brand new hard drive or one that’s seen better days, knowing the real health of your drives is one of the best ways to be prepared before disaster strikes.  It’s no substitute for regular backups, but it can help you avert problems.  Acronis Drive Monitor is a nice tool for this, and although we wish it wasn’t so centered around Acronis’ backup offerings, we still found it useful.

    Link

    Download Acronis Drive Monitor (registration required)

    Read the article

  • WebSocket and Java EE 7 - Getting Ready for JSR 356 (TOTD #181)

    - by arungupta
    WebSocket is developed as part of the HTML5 specification and provides a bi-directional, full-duplex communication channel over a single TCP socket. It provides a dramatic improvement over the traditional approaches of Polling, Long-Polling, and Streaming for two-way communication. There is no latency from establishing new TCP connections for each HTTP message. There is a WebSocket API and the WebSocket Protocol. The Protocol defines "handshake" and "framing". The handshake defines how a normal HTTP connection can be upgraded to a WebSocket connection. The framing defines the wire format of the message. The design philosophy is to keep the framing minimal to avoid overhead. Both text and binary data can be sent using the API. WebSocket may look like a competing technology to Server-Sent Events (SSE), but they are not. Here are the key differences:

    WebSocket can send and receive data from a client. A typical example of WebSocket is a two-player game or a chat application. Server-Sent Events can only push data to the client. A typical example of SSE is a stock ticker or news feed. With SSE, XMLHttpRequest can be used to send data to the server.

    For server-only updates, WebSockets have extra overhead and programming can be unnecessarily complex. SSE provides a simple and easy-to-use model that is much better suited.

    SSEs are sent over traditional HTTP and so no modification is required on the server side. WebSocket requires servers that understand the protocol.

    SSE has several features that are missing from WebSocket such as automatic reconnection, event IDs, and the ability to send arbitrary events. The client automatically tries to reconnect if the connection is closed. The default wait before trying to reconnect is 3 seconds and can be configured by including a "retry: XXXX\n" header where XXXX is the number of milliseconds to wait before trying to reconnect. The event stream can include a unique event identifier. This allows the server to determine which events need to be fired to each client in case the connection is dropped in between. The data can span multiple lines and can be of any text format as long as the EventSource message handler can process it.

    WebSockets provide true real-time updates; SSE can be configured to provide close to real-time updates by setting appropriate timeouts.

    OK, so you're all excited about WebSocket? Want to convert your POJOs into WebSocket endpoints? websocket-sdk and GlassFish 4.0 are here to help! The complete source code shown in this project can be downloaded here. On the server side, the WebSocket SDK converts a POJO into a WebSocket endpoint using simple annotations. Here is what a WebSocket endpoint looks like:

    @WebSocket(path="/echo")
    public class EchoBean {

        @WebSocketMessage
        public String echo(String message) {
            return message + " (from your server)";
        }
    }

    In this code, "@WebSocket" is a class-level annotation that declares a POJO to accept WebSocket messages. The path at which the messages are accepted is specified in this annotation. "@WebSocketMessage" indicates the Java method that is invoked when the endpoint receives a message. This method implementation echoes the received message concatenated with an additional string. The client-side HTML page looks like:

    <div style="text-align: center;">
        <form action="">
            <input onclick="send_echo()" value="Press me" type="button">
            <input id="textID" name="message" value="Hello WebSocket!" type="text"><br>
        </form>
    </div>
    <div id="output"></div>

    WebSocket allows full-duplex communication.
    So the client, a browser in this case, can send a message to a server, a WebSocket endpoint in this case. And the server can send a message to the client at the same time. This is unlike HTTP, which follows a "request" followed by a "response". In this code, the "send_echo" method in the JavaScript is invoked on the button click. There is also a <div> placeholder to display the response from the WebSocket endpoint. The JavaScript looks like:

    <script language="javascript" type="text/javascript">
        var wsUri = "ws://localhost:8080/websockets/echo";
        var websocket = new WebSocket(wsUri);
        websocket.onopen = function(evt) { onOpen(evt) };
        websocket.onmessage = function(evt) { onMessage(evt) };
        websocket.onerror = function(evt) { onError(evt) };

        function init() {
            output = document.getElementById("output");
        }

        function send_echo() {
            websocket.send(textID.value);
            writeToScreen("SENT: " + textID.value);
        }

        function onOpen(evt) {
            writeToScreen("CONNECTED");
        }

        function onMessage(evt) {
            writeToScreen("RECEIVED: " + evt.data);
        }

        function onError(evt) {
            writeToScreen('<span style="color: red;">ERROR:</span> ' + evt.data);
        }

        function writeToScreen(message) {
            var pre = document.createElement("p");
            pre.style.wordWrap = "break-word";
            pre.innerHTML = message;
            output.appendChild(pre);
        }

        window.addEventListener("load", init, false);
    </script>

    In this code:

    The URI to connect to on the server side is of the format ws://<HOST>:<PORT>/websockets/<PATH>. "ws" is a new URI scheme introduced by the WebSocket protocol. <PATH> is the path on the endpoint where the WebSocket messages are accepted. In our case, it is ws://localhost:8080/websockets/echo. WEBSOCKET_SDK-1 will ensure that the context root is included in the URI as well.

    The WebSocket is created as a global object so that the connection is created only once. This object establishes a connection with the given host, port and the path at which the endpoint is listening.

    The WebSocket API defines several callbacks that can be registered on specific events. The "onopen", "onmessage", and "onerror" callbacks are registered in this case. The callbacks print a message in the browser indicating which one is called and additionally also print the data sent/received.

    On the button click, the WebSocket object is used to transmit text data to the endpoint. Binary data can be sent as one blob or using buffering.

    The HTTP request headers sent for the WebSocket call are:

    GET ws://localhost:8080/websockets/echo HTTP/1.1
    Origin: http://localhost:8080
    Connection: Upgrade
    Sec-WebSocket-Extensions: x-webkit-deflate-frame
    Host: localhost:8080
    Sec-WebSocket-Key: mDbnYkAUi0b5Rnal9/cMvQ==
    Upgrade: websocket
    Sec-WebSocket-Version: 13

    And the response headers received are:

    Connection: Upgrade
    Sec-WebSocket-Accept: q4nmgFl/lEtU2ocyKZ64dtQvx10=
    Upgrade: websocket
    (Challenge Response): 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00

    The headers are shown in Chrome as below: The complete source code shown in this project can be downloaded here. The builds from websocket-sdk are integrated into GlassFish 4.0 builds. Would you like to live on the bleeding edge? Then follow the instructions below to check out the workspace and install the latest SDK:

    Check out the source code: svn checkout https://svn.java.net/svn/websocket-sdk~source-code-repository
    Build and install the trunk in your local repository: mvn install
    Copy "./bundles/websocket-osgi/target/websocket-osgi-0.3-SNAPSHOT.jar" to "glassfish3/glassfish/modules/websocket-osgi.jar" in your GlassFish 4 latest promoted build.
    Notice that you need to overwrite the JAR file. Anybody interested in building a cool application using WebSocket and getting it running on GlassFish? :-) This work will also feed into JSR 356 - Java API for WebSocket. On a lighter note, there seems to be less agreement on the name. Here are some of the options that are prevalent:

    WebSocket (W3C API, the URL is www.w3.org/TR/websockets though)
    Web Socket (HTML5 Demos - html5demos.com/web-socket)
    Websocket (Jenkins Plugin - wiki.jenkins-ci.org/display/JENKINS/Websocket%2BPlugin)
    WebSockets (used by Mozilla - developer.mozilla.org/en/WebSockets, but WebSocket is used as well)
    Web sockets (HTML5 Working Group - www.whatwg.org/specs/web-apps/current-work/multipage/network.html)
    Web Sockets (Chrome Blog - blog.chromium.org/2009/12/web-sockets-now-available-in-google.html)

    I prefer "WebSocket" as that seems to be the most common usage and is used by the W3C API as well. What do you use?

    Read the article

  • I Hereby Resolve… (T-SQL Tuesday #14)

    - by smisner
    It’s time for another T-SQL Tuesday, hosted this month by Jen McCown (blog|twitter), on the topic of resolutions. Specifically, “what techie resolutions have you been pondering, and why?” I like that word – pondering – because I ponder a lot. And while there are many things that I do already because of my job, there are many more things that I ponder about doing…if only I had the time. Then I ponder about making time, but then it’s back to work! In 2010, I was moderately more successful in making time for things that I ponder about than I had been in years past, and I hope to continue that trend in 2011. If Jen hadn’t settled on this topic, I could keep my ponderings to myself and no one would ever know the outcome, but she’s egged me on (and everyone else that chooses to participate)! So here goes… For me, having resolve to do something means that I wouldn’t be doing that something as part of my ordinary routine. It takes extra effort to make time for it. It’s not something that I do once and check off a list, but something that I need to commit to over a period of time. So with that in mind, I hereby resolve…

    To Learn Something New…

    One of the things I love about my job is that I get to do a lot of things outside of my ordinary routine. It’s a veritable smorgasbord of opportunity! So what more could I possibly add to that list of things to do? Well, the more I learn, the more I realize I have so much more to learn. It would be much easier to remain in ignorant bliss, but I was born to learn. Constantly. (And apparently to teach, too – my father will tell you that as a small child, I had the neighborhood kids gathered together to play school – in the summer. I’m sure they loved that – but they did it!) These are some of the things that I want to dedicate some time to learning this year:

    Spatial data. I have a good understanding of how maps in Reporting Services work, and I can cobble together a simple T-SQL spatial query, but I know I’m only scratching the surface here. Rob Farley (blog|twitter) posted interesting examples of combining maps and PivotViewer, and I think there are so many more creative possibilities. I’ve always felt that pictures (including charts and maps) really help people get their minds wrapped around data better, and because a lot of data has a geographic aspect to it, I believe developing some expertise here will be beneficial to my work.

    PivotViewer. Not only is PivotViewer combined with maps a useful way to visualize data, but it’s an interesting way to work with data. If you haven’t seen it yet, check out this interactive demonstration using the Netflix OData feed. According to Rob Farley, learning how to work with PivotViewer isn’t trivial. Just the type of challenge I like!

    Security. You’ve heard of the accidental DBA? Well, I am the accidental security person – is there a word for that role? My eyes used to glaze over when having to study security, or when reading anything about it. Then I had a problem long ago that no one could figure out – not even the vendor’s tech support – until I rolled up my sleeves and painstakingly worked through the myriad of potential problems to resolve a very thorny security issue. I learned a lot in the process, and have been able to share what I’ve learned with a lot of people. But I’m not convinced their eyes weren’t glazing over, too. I don’t take it personally – it’s just a very dry topic!
So in addition to deepening my understanding about security, I want to find a way to make the subject as it relates to SQL Server and business intelligence more accessible and less boring. Well, there’s actually a lot more that I could put on this list, and a lot more things I have plans to do this coming year, but I run the risk of overcommitting myself. And then I wouldn’t have time… To Have Fun! My name is Stacia and I’m a workaholic. When I love what I do, it’s difficult to separate out the work time from the fun time. But there are some things that I’ve been meaning to do that aren’t related to business intelligence for which I really need to develop some resolve. And they are techie resolutions, too, in a roundabout sort of way! Photography. When my husband and I went on an extended camping trip in 2009 to Yellowstone and the Grand Tetons, I had a nice little digital camera that took decent pictures. But then I saw the gorgeous cameras that other tourists were toting around and decided I needed one too. So I bought a Nikon D90 and have started to learn to use it, but I’m definitely still in the beginning stages. I traveled so much in 2010 and worked on two book projects that I didn’t have a lot of free time to devote to it. I was very inspired by Kimberly Tripp’s (blog|twitter) and Paul Randal’s (blog|twitter) photo-adventure in Alaska, though, and plan to spend some dedicated time with my camera this year. (And hopefully before I move to Alaska – nothing set in stone yet, but we hope to move to a remote location – with Internet access – later this year!) Astronomy. I have this cool telescope, but it suffers the same fate as my camera. I have been gone too much and busy with other things that I haven’t had time to work with it. I’ll figure out how it works, and then so much time passes by that I forget how to use it. I have this crazy idea that I can actually put the camera and the telescope together for astrophotography, but I think I need to start simple by learning how to use each component individually. As long as I’m living in Las Vegas, I know I’ll have clear skies for nighttime viewing, but when we move to Alaska, we’ll be living in a rain forest. I have no idea what my opportunities will be like there – except I know that when the sky is clear, it will be far more amazing than anything I can see in Vegas – even out in the desert - because I’ll be so far away from city light pollution. I’ve been contemplating putting together a blog on these topics as I learn. As many of my fellow bloggers in the SQL Server community know, sometimes the best way to learn something is to sit down and write about it. I’m just stumped by coming up with a clever name for the new blog, which I was thinking about inaugurating with my move to Alaska. Except that I don’t know when that will be exactly, so we’ll just have to wait and see which comes first!

    Read the article

< Previous Page | 96 97 98 99 100 101 102 103 104 105 106 107  | Next Page >