Search Results

Search found 2964 results on 119 pages for 'lucky cool'.


  • Twisted + SQLAlchemy and the best way to do it.

    - by Khorkrak
    So I'm writing yet another Twisted-based daemon. It'll have an XML-RPC interface as usual, so I can easily communicate with it and have other processes exchange data with it as needed. This daemon needs to access a database. We've been using SQLAlchemy in place of hard-coded SQL strings for our latest projects, mostly web apps in Pylons, and we'd like to do the same for this app and re-use the library code that's built on SQLAlchemy. So what to do? Since that library was written for use in a Pylons app, it's all the straightforward blocking-style code that everyone is accustomed to, and all of the non-blocking is magically handled by Pylons via threading, thread locals, scoped sessions and so on. So for Twisted I'm a bit stuck. I could:

    1. Just write the SQL I need directly, if it's minimal, and use Twisted's adbapi connection pool (runInteraction etc.) when I need to hit the db.
    2. Use the objects and inherently blocking methods in our library, and just block now and then in my Twisted daemon. Bah.
    3. Use sAsync, which was last updated in 2008. It would let me sort of reuse the models we have defined already, but not really, and it doesn't address code that also needs to work in Pylons. Does it even work with the latest version of SQLAlchemy? Who knows. The project looked great, though; why was it apparently abandoned?
    4. Spawn a separate subprocess, have it deal with the library code and all its blocking, and return the results to my daemon when ready, as objects marshalled via YAML over XML-RPC.
    5. Use deferToThread and then expunge the returned objects, having made sure to do eager loads so that I have everything I might need. Seems kind of ugly to me.

    I'm also stuck using Python 2.5.4 at the moment, so no 2.6 yet, and I don't think an import from __future__ will get me the cool new multiprocessing module. That's OK, though, as we've got interprocess communication down pretty well. So I'm leaning towards option 4, mostly because it avoids the mortal sin of logic duplication that comes with option 1, while also staying the heck away from threads. Any better ideas?
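
    If option 5 wins out, a minimal sketch of the deferToThread approach, assuming a configured scoped_session factory named Session and an illustrative User model with a roles relationship (all three names are hypothetical, not from the question's library):

        from twisted.internet import threads
        from sqlalchemy.orm import joinedload

        def _blocking_fetch(user_id):
            session = Session()  # hypothetical scoped_session factory
            try:
                user = (session.query(User)
                               .options(joinedload(User.roles))  # eager-load up front
                               .filter_by(id=user_id)
                               .first())
                if user is not None:
                    session.expunge(user)  # detach so it is safe outside this thread
                return user
            finally:
                session.close()

        def fetch_user(user_id):
            # Returns a Deferred that fires with a detached User instance (or None).
            return threads.deferToThread(_blocking_fetch, user_id)

    The expunge is the important part: once detached and fully loaded, the object can be handed back to the reactor thread without touching the session again.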

    Read the article

  • How do I defer execution of some Ruby code until later and run it on demand in this scenario?

    - by Kyle Kaitan
    I've got some code that looks like the following. First, there's a simple Parser class for parsing command-line arguments with options.

        class Parser
          def initialize(&b); ...; end      # Create new parser.
          def parse(args = ARGV); ...; end  # Consume command-line args.
          def opt(...); ...; end            # Declare supported option.
          def die(...); ...; end            # Validation handler.
        end

    Then I have my own Parsers module which holds some metadata about parsers that I want to track.

        module Parsers
          ParserMap = {}

          def self.make_parser(kind, desc, &b)
            b ||= lambda {}
            module_eval {
              ParserMap[kind] = { :desc => "", :validation => lambda {} }
              ParserMap[kind][:desc] = desc
              # Create new parser identified by `<Kind>Parser`. Making a Parser is very
              # expensive, so we defer its creation until it's actually needed later
              # by wrapping it in a lambda and calling it when we actually need it.
              const_set(name_for_parser(kind), lambda { Parser.new(&b) })
            }
          end

          # ...
        end

    Now when you want to add a new parser, you can call make_parser like so:

        make_parser :db, "login to database" do
          # Options that this parser knows how to parse.
          opt :verbose, "be verbose with output messages"
          opt :uid, "user id"
          opt :pwd, "password"
        end

    Cool. But there's a problem. We want to optionally associate validation with each parser, so that we can write something like:

        validation = lambda { |parser, opts|
          parser.die unless opts[:uid] && opts[:pwd]  # Must provide login.
        }

    The interface contract with Parser says that we can't do any validation until after Parser#parse has been called. So, we want to do the following: associate an optional block with every Parser we make with make_parser. We also want to be able to run this block, ideally as a new method called Parser#validate, but any on-demand method is equally suitable. How do we do that?
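
    One hedged way to wire this up, reusing the question's ParserMap (the define_singleton_method attachment, which needs Ruby 1.9+, is my assumption rather than part of the original design): store the validation block next to the deferred factory, then bolt it onto the parser at the moment the lambda finally builds it.

        def self.make_parser(kind, desc, validation = nil, &b)
          b ||= lambda {}
          ParserMap[kind] = { :desc => desc,
                              :validation => validation || lambda { |_parser, _opts| } }
          const_set(name_for_parser(kind), lambda do
            parser = Parser.new(&b)
            check = ParserMap[kind][:validation]
            # Parser#validate becomes available on demand, for use after #parse.
            parser.define_singleton_method(:validate) { |opts| check.call(parser, opts) }
            parser
          end)
        end

        # Usage sketch (constant and call shape assumed from the question):
        # parser = Parsers::DbParser.call
        # opts   = parser.parse(ARGV)
        # parser.validate(opts)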

    Read the article

  • pointer to preallocated memory as an input parameter and have the function fill it

    - by djones2010
    Test code:

        void modify_it(char * mystuff)
        {
            char test[7] = "123456"; // last element is null, I presume, for C-style strings here

            // static char test[] = "123123";
            // When I do this, I thought I should be able to keep access to this bit of
            // memory after the function is destroyed, but that does not seem to be the case.

            // char * test = new char[7];
            // This is also creating memory on the stack and not the heap, I reckon,
            // and it gets destroyed once the function is done.

            strcpy_s(mystuff, 7, test); // this does the job, as long as memory for
                                        // mystuff has been allocated outside the function

            mystuff = test; // this does not work. I know with C-style strings you can't just
            // do string assignments; they have to be actually copied. In this case I was
            // using this in conjunction with the static char test, thinking that by making
            // it static the memory would not get destroyed and I could then simply point
            // mystuff at test and be done with it. I would later have to address the memory
            // cleanup in the main function. But anyway, this never worked.
        }

        int main(void)
        {
            char * mystuff = new char[7]; // allocate memory on the heap where the pointer will point
            modify_it(mystuff);
            std::string test_case(mystuff);
            std::cout << test_case.c_str(); // this is the only way I know how to use cout:
                                            // by making it into a C++ string
            delete [] mystuff;
            return 0;
        }

    My questions:
    1. In the case of a static array in the function, why would it not work?
    2. When I allocate memory using new in the function, is it created on the stack or the heap?
    3. I have a string which needs to be copied into char* form. Everything I see usually requires const char* instead of just char*. I know I could use a reference to take care of this easily, or char** to send in the pointer and do it that way, but I just wanted to know if I could do it with just char*.

    Anyway, your thoughts and comments plus any examples would be very helpful.
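
    A hedged sketch of the two patterns the question is circling (function names are illustrative): the pointer parameter is itself passed by value, so `mystuff = test;` only rebinds the local copy; to repoint the caller's pointer you need its address (char**) or a reference.

        #include <cstring>
        #include <iostream>

        // Pattern 1: the caller owns the buffer; the callee just fills it.
        void fill_buffer(char *out, std::size_t out_size)
        {
            const char src[] = "123456";
            std::strncpy(out, src, out_size - 1);
            out[out_size - 1] = '\0'; // strncpy does not always null-terminate
        }

        // Pattern 2: repoint the caller's pointer via char**.
        // static gives the array static storage duration, so it outlives the call.
        void repoint(char **out)
        {
            static char storage[] = "123123";
            *out = storage;
        }

        int main()
        {
            char buffer[7];
            fill_buffer(buffer, sizeof buffer);
            std::cout << buffer << '\n';

            char *p = 0;
            repoint(&p);
            std::cout << p << '\n'; // prints 123123: the static array survived the call
            return 0;
        }

    This also answers question 2 in passing: `new char[7]` always allocates on the heap, wherever it is called; it is the plain local array `char test[7]` that lives on the stack and dies with the function.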

    Read the article

  • Extend base class properties

    - by user1888033
    I need your help to extend my base class. Here is the structure I have:

        public class ShowRoomA
        {
            public audi AudiModelA { get; set; }
            public benz BenzModelA { get; set; }
        }

        public class audi
        {
            public string Name { get; set; }
            public string AC { get; set; }
            public string PowerStearing { get; set; }
        }

        public class benz
        {
            public string Name { get; set; }
            public string AC { get; set; }
            public string AirBag { get; set; }
            public string MusicSystem { get; set; }
        }

        // My implementation class looks like this:
        class Main
        {
            private void UpdateDetails()
            {
                ShowRoomA ojbMahi = new ShowRoomA();
                GetDetails(ojbMahi); // this works fine
            }

            private void GetDetails(ShowRoomA objShowRoom)
            {
                objShowRoom.AudiModelA = new audi();
                objShowRoom.AudiModelA.Name = "AUDIMODEL94CD698";
                objShowRoom.AudiModelA.AC = "6 TON";
                objShowRoom.AudiModelA.PowerStearing = "Electric";

                objShowRoom.BenzModelA = new benz();
                objShowRoom.BenzModelA.Name = "BENZMODEL34LCX";
                objShowRoom.BenzModelA.AC = "8 TON";
                objShowRoom.BenzModelA.AirBag = "Two (1+1)";
                objShowRoom.BenzModelA.MusicSystem = "Poineer 3500W";
            }
        }

    Up to this point, cool. Now I have a requirement for ShowRoomB, replacing the old audi and benz with new models, and with other new brands of cars added as well. I don't want to modify the GetDetails() method; by reusing it, I want to apply additional logic to my new, extended model. Here is where I'm stuck designing my new ShowRoomB model (based on ShowRoomA). I have tried something like the following, but I'm not sure:

        public class audiModelB : audi
        {
            public string JetEngine { get; set; }
        }

        public class benzModelB : benz
        {
            public string JetEngine { get; set; }
        }

        public class ShowRoomB
        {
            public audiModelB AudiModelB { get; set; }
            public benzModelB BenzModelB { get; set; }
        }

        // My new implementation class:
        class Main
        {
            private void UpdateDetails()
            {
                ShowRoomB ojbNahi = new ShowRoomB();
                GetDetails(ojbNahi); // this is NOT working! I know this object does not
                // derive from the base class directly, but I still somehow want to fill
                // my new model with the old properties. Kindly suggest something here.
            }
        }

    Can anyone please give me a solution for how to achieve my requirement of extending the base class "ShowRoomA"? I really appreciate your time and suggestions. Thanks in advance.
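
    A hedged sketch of one common fix (the restructuring and names like ShowRoomBase are mine, not the asker's): move the shared car slots into a showroom base class, write GetDetails against that base type, and have ShowRoomB populate the same slots with the derived models. Because audiModelB is-an audi, the old fill logic keeps working unchanged.

        // Hypothetical restructuring; car class names mirror the question.
        public class ShowRoomBase
        {
            public audi Audi { get; set; }
            public benz Benz { get; set; }
        }

        public class ShowRoomA : ShowRoomBase { }
        public class ShowRoomB : ShowRoomBase { }

        class Program
        {
            // The old fill logic, now written once against the base type.
            static void GetDetails(ShowRoomBase objShowRoom)
            {
                objShowRoom.Audi.Name = "AUDIMODEL94CD698";
                objShowRoom.Audi.AC = "6 TON";
                objShowRoom.Benz.Name = "BENZMODEL34LCX";
                objShowRoom.Benz.AC = "8 TON";
            }

            static void Main()
            {
                var b = new ShowRoomB { Audi = new audiModelB(), Benz = new benzModelB() };
                GetDetails(b);                                    // reuses the old method as-is
                ((audiModelB)b.Audi).JetEngine = "Model B extra"; // then apply the new logic
            }
        }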

    Read the article

  • Pass variables between separate instances of ruby (without writing to a text file or database)

    - by boulder_ruby
    Let's say I'm running a long worker script in one of several open interactive Rails consoles. The script is updating columns in a very, very, very large table of records. I've muted the ActiveRecord logger to speed up the process, and I instruct the script to output some record of progress so I know roughly how long the process is going to take. That is what I am currently doing, and it looks something like this:

        ModelName.all.each_with_index do |r, i|
          puts i if (i % 250).zero?
          # ...runs some process...
          r.save
        end

    Sometimes there are two nested loops running, such that there are multiple iterators and other things going on at once. Is there a way I could do something like the following and access that variable from a separate Rails console? (The variable would be overwritten every time the process runs, without much slowdown.)

        records = ModelName.all
        $total = records.count
        records.each_with_index do |r, i|
          $i = i
          # ...runs some process...
          r.save
        end

    Meanwhile, mid-process in the other console:

        puts "#{($i / $total * 100).round(2)}% complete"
        #=> 67.43% complete

    I know passing global variables from one separate instance of Ruby to the next doesn't work. I also just tried this, to no effect:

        # unix console 1
        X=5
        echo $X  #=> 5

        # unix console 2
        echo $X  #=> ""

    Lastly, I know that using global variables like this is a major software-design no-no. I think that's reasonable, but I'd still like to know how to break that rule if I want to. Writing to a text file would obviously work, and so would writing to a separate database table or something; that's not a bad idea. But the really cool trick would be sharing a variable between two instances without writing to a text file or database column. What would this even be called? Tunneling? I don't quite know how to tag this question. Maybe bad-idea is one of them. But honestly, design-patterns isn't what this question is about.
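
    A hedged sketch of one answer to the question as asked, using DRb from Ruby's standard library (the port number and the hash shape are my choices): the worker console exposes a plain hash over a druby URI, and the observer console reads it live, with no file or database involved.

        # Console 1 (the worker):
        require 'drb/drb'

        progress = { :done => 0, :total => 0 }
        DRb.start_service('druby://localhost:8787', progress)

        records = ModelName.all
        progress[:total] = records.count
        records.each_with_index do |r, i|
          progress[:done] = i
          # ...runs some process...
          r.save
        end

        # Console 2 (the observer), any time mid-process:
        require 'drb/drb'

        remote = DRbObject.new_with_uri('druby://localhost:8787')
        pct = (remote[:done].to_f / remote[:total] * 100).round(2)
        puts "#{pct}% complete"

    The observer's [] calls are proxied over the wire to the worker's hash, so every read reflects the loop's current state.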

    Read the article

  • Else without if

    - by user2808951
    I'm trying to write a program for my computer programming class, for a project due Monday, and I'm pretty new to Java. The program should first determine whether a number the user inputs is even or odd, and then determine whether the number is prime or not. I'm not sure if I got the algorithm right, so if anyone has corrections to my algorithm or anything else, please say so. But my real issue is that the program refuses to compile: every time I try, it reports an "else without if" problem. Here's a link to my command box: http://s1341.photobucket.com/user/Emi_Nightshade/media/Capture_zps45f9a2ea.png.html

    Here's my code:

        import java.io.*;
        import java.util.*;

        public class Lesson9p1_ThuotteEmily
        {
            public static void main(String args[])
            {
                Scanner kbReader0=new Scanner(System.in);
                System.out.print("\n\nPlease enter an integer. An integer is whole number, and it can be either negative or positive. Please enter your number: ");
                long num=kbReader0.nextLong();
                if(num%2==0) //if and else with braces
                {
                    System.out.println("Your integer " + num + " is even.");
                }
                else
                {
                    System.out.println("Your integer " + num + " is odd.");
                }
                Scanner kbReader1=new Scanner(System.in);
                System.out.print("\n\nWould you like to know if your number is prime? Please enter yes or no: ");
                String yn=kbReader1.nextLine();
                if(yn.equals.IgnoreCase("Yes"))
                {
                    System.out.println("Okay. Give me a moment.");
                    {
                        if(num%2==0)
                        {
                            System.out.println("Your number isn't prime.");
                        }
                        else if(num==2)
                        {
                            System.out.println("Your number is 2, which is the only even prime number in existence. Cool, right?");
                        }
                        for(int i=3;i*i<=n;i+=2)
                        {
                            if(n%1==0)
                            {
                                System.out.println("Your number isn't prime.");
                            }
                        }
                        else
                        {
                            System.out.println("Your number is prime!");
                        }
                    }
                }
                if(yn.equals.IgnoreCase("No"))
                {
                    System.out.println("Okay.");
                }
            }
        }

    If anyone could help me out with this, and also with any problems I may have made elsewhere in the program, I'd be very grateful! Thanks.
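
    For reference, a hedged rewrite of the prime-checking section (it assumes num and yn from the surrounding program): the stray for-loop sits between the else-if chain and the final else, which is exactly what produces "else without if"; yn.equals.IgnoreCase is not valid syntax (the method is equalsIgnoreCase); and the loop's n and n%1 look like slips for num and num%i. One way to restructure it:

        if (yn.equalsIgnoreCase("yes")) {
            System.out.println("Okay. Give me a moment.");
            boolean isPrime = num > 1;        // 0, 1 and negatives are not prime
            if (num % 2 == 0) {
                isPrime = (num == 2);          // 2 is the only even prime
            } else {
                for (long i = 3; i * i <= num; i += 2) {
                    if (num % i == 0) {        // found a divisor: not prime
                        isPrime = false;
                        break;
                    }
                }
            }
            System.out.println(isPrime ? "Your number is prime!"
                                       : "Your number isn't prime.");
        } else if (yn.equalsIgnoreCase("no")) {
            System.out.println("Okay.");
        }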

    Read the article

  • ImgBurn fails to burn data CD-R disk due to "Layouts do not match" error

    - by 0xAether
    I have a recurring problem with the program ImgBurn. Whenever I try to burn anything to a CD-R using ImgBurn, it burns just fine until I go to verify the disc, at which point it tells me that the "Layouts do not match". Windows 7 shows the disc as completely blank, although I can see on the bottom of the disc that it has been written to. I can burn ISO files to DVD-Rs just fine; this only seems to happen with CD-Rs. The CD-Rs I'm using are Memorex Cool Colors 52x CD-Rs. I have looked on Google, and it seems I'm not the only one this happens to; unfortunately, no one has been able to provide an explanation. I have included the log file from the last CD I just burnt. If you need anything else to better diagnose this problem, I will gladly provide it.

        ; //****************************************\\
        ;  ImgBurn Version 2.5.7.0 - Log
        ;  Monday, 19 November 2012, 16:11:57
        ; \\****************************************//
        ;
        I 16:04:55 ImgBurn Version 2.5.7.0 started!
        I 16:04:55 Microsoft Windows 7 Ultimate x64 Edition (6.1, Build 7601 : Service Pack 1)
        I 16:04:55 Total Physical Memory: 4,156,380 KB - Available: 3,317,144 KB
        I 16:04:55 Initialising SPTI...
        I 16:04:55 Searching for SCSI / ATAPI devices...
        I 16:04:56 -> Drive 1 - Info: Optiarc DVD RW AD-7560S SH03 (D:) (SATA)
        I 16:04:56 Found 1 DVD±RW/RAM!
        I 16:05:37 Operation Started!
        I 16:05:37 Source File: C:\Users\Aaron\Desktop\VMware Workstation 9.iso
        I 16:05:37 Source File Sectors: 223,057 (MODE1/2048)
        I 16:05:37 Source File Size: 456,820,736 bytes
        I 16:05:37 Source File Volume Identifier: VMwareWorksta9
        I 16:05:37 Source File Volume Set Identifier: 20121119_2102
        I 16:05:37 Source File File System(s): ISO9660, Joliet
        I 16:05:37 Destination Device: [1:0:0] Optiarc DVD RW AD-7560S SH03 (D:) (SATA)
        I 16:05:37 Destination Media Type: CD-R (Disc ID: 97m17s06f, Moser Baer India)
        I 16:05:37 Destination Media Supported Write Speeds: 10x, 16x, 20x, 24x
        I 16:05:37 Destination Media Sectors: 359,847
        I 16:05:37 Write Mode: CD
        I 16:05:37 Write Type: SAO
        I 16:05:37 Write Speed: 6x
        I 16:05:37 Lock Volume: Yes
        I 16:05:37 Test Mode: No
        I 16:05:37 OPC: No
        I 16:05:37 BURN-Proof: Enabled
        W 16:05:37 Write Speed Miscompare! - MODE SENSE: 1,764 KB/s (10x), GET PERFORMANCE: 11,080 KB/s (63x)
        W 16:05:37 Write Speed Miscompare! - MODE SENSE: 1,764 KB/s (10x), GET PERFORMANCE: 11,080 KB/s (63x)
        W 16:05:37 Write Speed Miscompare! - MODE SENSE: 1,764 KB/s (10x), GET PERFORMANCE: 11,080 KB/s (63x)
        W 16:05:37 Write Speed Miscompare! - MODE SENSE: 1,764 KB/s (10x), GET PERFORMANCE: 11,080 KB/s (63x)
        W 16:05:37 Write Speed Miscompare! - MODE SENSE: 1,764 KB/s (10x), GET PERFORMANCE: 11,080 KB/s (63x)
        W 16:05:37 Write Speed Miscompare! - Wanted: 1,058 KB/s (6x), Got: 1,764 KB/s (10x) / 11,080 KB/s (63x)
        W 16:05:37 The drive only supports writing these discs at 10x, 16x, 20x, 24x.
        I 16:05:38 Filling Buffer... (80 MB)
        I 16:05:40 Writing LeadIn...
        I 16:06:07 Writing Session 1 of 1... (1 Track, LBA: 0 - 223056)
        I 16:06:07 Writing Track 1 of 1... (MODE1/2048, LBA: 0 - 223056)
        I 16:11:00 Synchronising Cache...
        I 16:11:18 Exporting Graph Data...
        I 16:11:18 Graph Data File: C:\Users\Aaron\AppData\Roaming\ImgBurn\Graph Data Files\Optiarc_DVD_RW_AD-7560S_SH03_MONDAY-NOVEMBER-19-2012_4-05_PM_97m17s06f_6x.ibg
        I 16:11:18 Export Successfully Completed!
        I 16:11:18 Operation Successfully Completed! - Duration: 00:05:41
        I 16:11:18 Average Write Rate: 1,522 KB/s (10.1x) - Maximum Write Rate: 1,544 KB/s (10.3x)
        I 16:11:18 Cycling Tray before Verify...
        W 16:11:23 Waiting for device to become ready...
        I 16:11:47 Device Ready!
        E 16:11:47 CompareImageFileLayouts Failed! - Session Count Not Equal (1/0)
        E 16:11:47 Verify Failed! - Reason: Layouts do not match.
        I 16:11:57 Close Request Acknowledged
        I 16:11:57 Closing Down...
        I 16:11:57 Shutting down SPTI...
        I 16:11:57 ImgBurn closed!

    Read the article

  • nginx php-fpm empty site

    - by Katafalkas
    I am writing this after being stuck trying to fix it the whole night. We have recently migrated to a new server. Before the migration we had been running our site on nginx + cgi (script). After the migration we decided to try apache + mod_php. It was rather terrible, and I would like to migrate back to nginx, but this time with php-fpm (which people say is cool). So I followed a number of guides, and I think I did everything correctly. I also have our old config files for the "server" section, which I reviewed and placed into the new config. Yet I end up with an empty site when I enter our URL (by empty I mean a blank page: no text, errors, or anything at all). In the access log there are some weird entries like:

        123.242.148.54 - - [22/Mar/2012:06:08:11 +0200] "-" 400 0 "-" "-"

    My guess is that php-fpm is not working, but I'm not sure how to confirm it. Maybe someone could help? I would much appreciate it. My nginx config:

        user nginx;
        worker_processes 12;

        error_log /var/log/nginx/error.log;
        #error_log /var/log/nginx/error.log notice;
        #error_log /var/log/nginx/error.log info;

        pid /var/run/nginx.pid;

        events {
            worker_connections 4096;
        }

        http {
            include /etc/nginx/mime.types;
            #default_type application/octet-stream;

            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';

            access_log /var/log/nginx/access.log main;

            sendfile on;
            #tcp_nopush on;

            #keepalive_timeout 0;
            keepalive_timeout 65;
            tcp_nodelay on;

            gzip on;
            gzip_disable "MSIE [1-6]\.(?!.*SV1)";

            fastcgi_buffer_size 256k;
            fastcgi_buffers 4 256k;

            server {
                listen 80;
                server_name www.example.com;

                access_log /var/log/nginx/example.access.log;
                error_log /var/log/nginx/error.log;

                root /home/www/example;
                index index.php;

                client_max_body_size 50M;
                #error_page 404 /404.html;

                # phpMyAdmin
                location /phpmyadmin {
                    root /usr/share/;
                    error_log /var/log/nginx/phpmyadmin.log;
                    try_files $uri $uri/ /index.php;
                }

                location ~ ^/phpmyadmin/.*\.php$ {
                    root /usr/share/;
                    error_log /var/log/nginx/phpmyadmin.log;
                    include /etc/nginx/fastcgi_params;
                    fastcgi_pass 127.0.0.1:9000;
                }

                # Munin
                location /monitoring {
                    root /var/www/;
                    auth_basic "Restricted";
                    auth_basic_user_file /etc/nginx/conf.d/monitoring_users;
                    error_log /var/log/nginx/monitoring.log;
                    index index.html;
                }

                # The site
                location / {
                    try_files $uri $uri/ /index.php/?$uri&$args;
                }

                # PHP interpreter
                location ~ \.php {
                    include /etc/nginx/fastcgi_params;
                    fastcgi_pass 127.0.0.1:9000;
                }
            }
        }
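
    To confirm whether php-fpm is the problem, a hedged first check (the assumption about the distro's fastcgi_params file is mine): make sure SCRIPT_FILENAME actually reaches PHP, since the stock fastcgi_params on some distros omits it, and a missing script path yields exactly this kind of blank response.

        # Inside the server block, as a test:
        location ~ \.php$ {
            include /etc/nginx/fastcgi_params;
            # Some distros' fastcgi_params lack this line; without it, php-fpm
            # receives no script path and returns an empty response.
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_pass 127.0.0.1:9000;
        }

    If the page is still blank, something like `netstat -lnp | grep 9000` should confirm whether the php-fpm daemon is actually listening where fastcgi_pass points.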

    Read the article

  • nginx 502 Bad Gateway on every external site

    - by Leandros
    I just installed nginx and followed the guides on the official site to set it up with php5-fpm, but it just won't work. Not even the default site, without PHP, works from outside my server. I tried both

        listen = 127.0.0.1:7777
        listen = /var/run/php5-fpm.sock

    and neither works. I can access http://localhost with lynx on the server itself, but not from somewhere else (with the external IP, obviously). Yes, the php5-fpm daemons are running; yes, the ports (80 and 7777) are open. It doesn't work with php-cgi either. My config:

        user www-data;
        worker_processes 4;
        pid /var/run/nginx.pid;

        events {
            worker_connections 768;
            # multi_accept on;
        }

        http {
            ##
            # Basic Settings
            ##
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 65;
            types_hash_max_size 2048;
            # server_tokens off;
            # server_names_hash_bucket_size 64;
            # server_name_in_redirect off;

            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            ##
            # Logging Settings
            ##
            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log;

            ##
            # Gzip Settings
            ##
            gzip on;
            gzip_disable "msie6";
            # gzip_vary on;
            # gzip_proxied any;
            # gzip_comp_level 6;
            # gzip_buffers 16 8k;
            # gzip_http_version 1.1;
            # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

            ##
            # nginx-naxsi config (uncomment if you installed nginx-naxsi)
            ##
            #include /etc/nginx/naxsi_core.rules;

            ##
            # nginx-passenger config (uncomment if you installed nginx-passenger)
            ##
            #passenger_root /usr;
            #passenger_ruby /usr/bin/ruby;

            ##
            # Virtual Host Configs
            ##
            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;

            proxy_buffers 16 16k;
            proxy_buffer_size 32k;
            fastcgi_buffers 16 16k;
            fastcgi_buffer_size 32k;
            fastcgi_connect_timeout 300;
            fastcgi_send_timeout 300;
            fastcgi_read_timeout 300;
        }

    Server config (symlinked to sites-enabled):

        server {
            server_name skilloverflow.de *.skilloverflow.de;
            root /var/www/blog.skilloverflow.de/htdocs;
            index index.php;

            error_log /var/log/nginx/skilloverflow.error.log;
            access_log /var/log/nginx/skilloverflow.access.log;

            location = /favicon.ico {
                log_not_found off;
                access_log off;
            }

            location = /robots.txt {
                allow all;
                log_not_found off;
                access_log off;
            }

            location / {
                # This is cool because no php is touched for static content.
                # Include the "?$args" part so non-default permalinks don't
                # break when using a query string.
                try_files $uri $uri/ /index.php?$args;
            }

            location ~ [^/]\.php(/|$) {
                fastcgi_split_path_info ^(.+?\.php)(/.*)$;
                if (!-f $document_root$fastcgi_script_name) {
                    return 404;
                }
                fastcgi_pass 127.0.0.1:7777;
                fastcgi_index index.php;
                include fastcgi_params;
            }

            location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
                expires max;
                log_not_found off;
            }

            # Deny access to apache .htaccess files
            location ~ /\.ht {
                deny all;
            }
        }

    PHP version: 5.4.17-1. nginx version: 1.2.1. Debian 6.0.7, Linux 2.6.32.

    Edit: Lighttpd is still installed; does that matter? It's not running, though.
    Edit 2: No error or access log entries are generated. They're all empty.
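
    Since localhost works but the external IP does not, the failure is probably a binding or firewall issue in front of nginx rather than php-fpm itself. A hedged diagnostic pass, assuming standard Debian tooling:

        # Is nginx bound to all interfaces (0.0.0.0:80) or only 127.0.0.1?
        netstat -lntp | grep -E ':80|:7777'

        # Is a firewall rule (perhaps left over from the old setup) blocking port 80?
        iptables -L -n -v | head -n 20

        # Request the site from the server itself via the external IP,
        # to separate local routing from remote reachability:
        curl -v http://<external-ip>/

        # Watch the global log for backend connect() errors while reproducing the 502:
        tail -f /var/log/nginx/error.log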

    Read the article

  • bash: per-command history. How does it work?

    - by romainl
    OK. I have an old G5 running Leopard and a Dell running Ubuntu 10.04 at home, and a MacPro also running Leopard at work. I use Terminal.app/bash a lot. On my home G5 it exhibits a nice feature: using the Up arrow to navigate history, I get the last command starting with the few letters that I've typed. This is what I mean (| represents the caret):

        $ ssh user@server
        $ vim /some/file/just/to/populate/history
        $ ss|

    So, I've typed the first two letters of "ssh"; hitting Up results in this:

        $ ssh user@server

    instead of this, which is the behaviour I get everywhere else:

        $ vim /some/file/just/to/populate/history

    If I keep hitting Up or Down, I can navigate through the history of ssh like this:

        $ ssh otheruser@otherserver
        $ ssh user@server
        $ ssh yetanotheruser@yetanotherserver

    It works the same for any command, like cat, vim or whatever. That's really cool. Except that I have no idea how to mimic this behaviour on my other machines. Here is my .profile:

        export PATH=/Developer/SDKs/flex_sdk_3.4/bin:/opt/local/bin:/opt/local/sbin:/usr/local/bin:/sw/bin:/sw/sbin:/bin:/sbin:/bin:/sbin:/usr/bin:/usr/sbin:$HOME/Applications/bin:/usr/X11R6/bin
        export MANPATH=/usr/local/share/man:/usr/local/man:opt/local/man:sw/share/man
        export INFO=/usr/local/share/info
        export PERL5LIB=/opt/local/lib/perl5
        export PYTHONPATH=/opt/local/bin/python2.7
        export EDITOR=/opt/local/bin/vim
        export VISUAL=/opt/local/bin/vim
        export JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Home
        export TERM=xterm-color
        export GREP_OPTIONS='--color=auto' GREP_COLOR='1;32'
        export CLICOLOR=1
        export LS_COLORS='no=00:fi=00:di=01;34:ln=target:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:*.tar=00;31:*.tgz=00;31:*.arj=00;31:*.taz=00;31:*.lzh=00;31:*.zip=00;31:*.z=00;31:*.Z=00;31:*.gz=00;31:*.bz2=00;31:*.deb=00;31:*.rpm=00;31:*.TAR=00;31:*.TGZ=00;31:*.ARJ=00;31:*.TAZ=00;31:*.LZH=00;31:*.ZIP=00;31:*.Z=00;31:*.Z=00;31:*.GZ=00;31:*.BZ2=00;31:*.DEB=00;31:*.RPM=00;31:*.jpg=00;35:*.png=00;35:*.gif=00;35:*.bmp=00;35:*.ppm=00;35:*.tga=00;35:*.xbm=00;35:*.xpm=00;35:*.tif=00;35:*.png=00;35:*.fli=00;35:*.gl=00;35:*.dl=00;35:*.psd=00;35:*.JPG=00;35:*.PNG=00;35:*.GIF=00;35:*.BMP=00;35:*.PPM=00;35:*.TGA=00;35:*.XBM=00;35:*.XPM=00;35:*.TIF=00;35:*.PNG=00;35:*.FLI=00;35:*.GL=00;35:*.DL=00;35:*.PSD=00;35:*.mpg=00;36:*.avi=00;36:*.mov=00;36:*.flv=00;36:*.divx=00;36:*.qt=00;36:*.mp4=00;36:*.m4v=00;36:*.MPG=00;36:*.AVI=00;36:*.MOV=00;36:*.FLV=00;36:*.DIVX=00;36:*.QT=00;36:*.MP4=00;36:*.M4V=00;36:*.txt=00;32:*.rtf=00;32:*.doc=00;32:*.odf=00;32:*.rtfd=00;32:*.html=00;32:*.css=00;32:*.js=00;32:*.php=00;32:*.xhtml=00;32:*.TXT=00;32:*.RTF=00;32:*.DOC=00;32:*.ODF=00;32:*.RTFD=00;32:*.HTML=00;32:*.CSS=00;32:*.JS=00;32:*.PHP=00;32:*.XHTML=00;32:'
        export LC_ALL=C
        export LANG=C
        stty cs8 -istrip -parenb
        bind 'set convert-meta off'
        bind 'set meta-flag on'
        bind 'set output-meta on'
        alias ip='curl http://www.whatismyip.org | pbcopy'
        alias ls='ls -FhLlGp'
        alias la='ls -AFhLlGp'
        alias couleurs='$HOME/Applications/bin/colors2.sh'
        alias td='$HOME/Applications/bin/todo.sh'
        alias scale='$HOME/Applications/bin/scale.sh'
        alias stree='$HOME/Applications/bin/tree'
        alias envoi='$HOME/Applications/bin/envoi.sh'
        alias unfoo='$HOME/Applications/bin/unfoo'
        alias up='cd ..'
        alias size='du -sh'
        alias lsvn='svn list -vR'
        alias jsc='/System/Library/Frameworks/JavaScriptCore.framework/Versions/A/Resources/jsc'
        alias asl='sudo rm -f /private/var/log/asl/*.asl'
        alias trace='tail -f $HOME/Library/Preferences/Macromedia/Flash\ Player/Logs/flashlog.txt'
        alias redis='redis-server /opt/local/etc/redis.conf'
        source /Users/johncoltrane/Applications/bin/git-completion.sh
        export GIT_PS1_SHOWUNTRACKEDFILES=1
        export GIT_PS1_SHOWUPSTREAM="verbose git"
        export GIT_PS1_SHOWDIRTYSTATE=1
        export PS1='\n\[\033[32m\]\w\[\033[0m\] $(__git_ps1 "[%s]")\n\[\033[1;31m\]\[\033[31m\]\u\[\033[0m\] $ \[\033[0m\]'
        mkcd () {
            mkdir -p "$*"
            cd "$*"
        }
        function cdl {
            cd $1
            la
        }
        n() {
            $EDITOR ~/Dropbox/nv/"$*".txt
        }
        nls () {
            ls -c ~/Dropbox/nv/ | grep "$*"
        }
        copy(){
            curl -s -F 'sprunge=<-' http://sprunge.us | pbcopy
        }
        if [ -f /opt/local/etc/profile.d/cdargs-bash.sh ]; then
            source /opt/local/etc/profile.d/cdargs-bash.sh
        fi
        if [ -f /opt/local/etc/bash_completion ]; then
            . /opt/local/etc/bash_completion
        fi

    Any idea?
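
    What the G5 is almost certainly doing is readline's prefix search: history-search-backward and history-search-forward bound to the arrow keys, typically via that machine's /etc/inputrc. A hedged way to replicate it on the other boxes is to add the bindings to ~/.inputrc (or the equivalent bind commands in .profile):

        # ~/.inputrc: make Up/Down search history for the prefix already typed
        "\e[A": history-search-backward
        "\e[B": history-search-forward

        # Or, equivalently, in .bashrc/.profile:
        # bind '"\e[A": history-search-backward'
        # bind '"\e[B": history-search-forward'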

    Read the article

  • Server Cabinet/Room Cooling

    - by user37226
    Hello all. I currently have two desktops and three servers in my office, sitting on the floor (I know this is bad). With that many servers, the ambient temperature in the room goes up quickly. I am located in Dallas, TX, so during the winter, if the heat is kept low, it is not a problem, but during the summer it easily raises the room +10 degrees. I have found a free 42U server cabinet that a hosting company was throwing away, and have decided to house all of these systems in it. One server is in a rack-mount case, while the other four machines are housed in mid-tower cases. I have purchased shelves for each computer and plan to lay the towers sideways on these shelves (as replacing the cases costs a heck of a lot of money). I like the idea of housing all of these systems in the cabinet because it will save a lot of room and clean up all of the cabling currently lying all over the office floor.

    When putting this setup together over the next couple of weeks, I want to address issues with dust and cooling. The server cabinet has a fan on top, a front plexiglass door, and a rear metal door with vent holes on the bottom.

    First, the cooling issues. I know I am going to want cool air to enter the bottom of the cabinet and exit the top. I do not want the room heating up, though, as this will make my work area hot and then make the servers warmer as the air eventually re-enters the cabinet. I had an idea to fix this problem, but am unsure whether it will work. I was thinking of taking flexible piping, adapting it to the back fans of each computer, and running the other end of the pipe to the top, close to the cabinet's top-mounted fan, then creating a duct around the top fan into the attic. I am very concerned that the attic will cause issues with this type of setup, because during the July/August time frame the attic is easily 120 degrees F. I could also use the flexible pipe to reach an attic exhaust vent, if it would be better to vent into the 100-degree air outside (at least there may be wind). The other option would be to buy a small portable air conditioner. This may be a possibility, but do I want to spend the extra money on power? I bet it increases the noise, too, and they are around $250 on Amazon. What would you all recommend?

    Depending on the solution I end up running with above, I would also like to limit the dust that gets into the cabinet. If I were to cut a hole and mount a second cabinet fan on the bottom of the rear door, could I mount a standard home air filter on the other side of that hole?

    Thanks in advance for your recommendations. I look forward to reading your ideas.

    Read the article

  • Publishing an Excel spreadsheet using Microsoft SBS 2008 to a web page that is viewable by mobile phones

    - by Dave Heath
    I am getting well out of my "superuser" depth here and would love some support. At work we have an Excel workbook (*.xls format, circa Office 2003) which maintains our "engineers" timesheet. It tracks what events we are doing across the year and how many "work units" each is. As workbooks go, it is fairly simple, with just a few =SUM(range) cells and some cells linked across sheets (12 sheets, one for each month). It is stored on a server, in a folder that gives "management" full access and "engineers" read-only access. The workbook itself is read-only for "engineers" and editable for "management"; I think these permissions are controlled through Active Directory. The workbook is protected with a password, presumably to allow "management" to edit it even when working at a terminal logged in as an "engineer". This protection prevents "engineers" from going to certain cells to see the formulae and therefore editing them. The workbook has a macro which saves and closes it ten minutes after opening; this is to stop any one person who has logged in with editing privileges from locking out other "management". I hope this is making sense to someone... :S

    Now then, we have Microsoft Small Business Server 2008. We have a shiny new web-based login for when we are offsite, so we can get to Exchange webmail and our internal site (which uses SharePoint 3.0). "Management" would like to publish this timesheet automatically after changes (they don't want to do anything different from what they are currently doing) so that "engineers" can check it on an iPhone while out of the office. I am currently having a look at "Excel Services" for Office 2007 on TechNet, but I am not sure if I am running down the right garden path at the moment.

    EDIT: This seems to suggest that I have to have SharePoint Server 2007, with no mention of SharePoint 3.0...

    "MOSS builds on WSS by adding both core features as well as end user web parts" - Wikipedia entry for Microsoft Office SharePoint Server (MOSS). This is not good news...

    "...and using the ASP.NET APIs, web parts can be written to extend the functionality of WSS." - Wikipedia entry for Windows SharePoint Services. Could this bring back what I need? Is this good news? Do I need to start learning ASP.NET?

    This link implies that we need MOSS to do what I want, and the bosses say we ain't getting it: http://serverfault.com/questions/20198/what-is-some-cool-things-you-can-do-with-sharepoint-2007/22128#22128

    Back to the drawing board. END EDIT

    Please could someone suggest some "further reading" to help point me in the right direction or put me back on the right track. Many thanks. I will try to keep this up to date with how I get on.

    Read the article

  • Linux: Apple Wireless A1314 Fn key not registered, looks like software bug

    - by ramplank
    I'm trying to set up my Apple Wireless Keyboard with my Kubuntu systems, running on PC hardware powered by an Intel Atom and an Intel i5 respectively. The keyboard has a US layout and has model number A1314 written on the back. It takes two AA batteries; I mention that because it appears there are multiple types of model A1314. I have tried this on 10.04, 11.04, 11.10 and 12.04 systems with no success. Every time, using a Bluetooth dongle and the KDE Bluetooth notification tray applet, the keyboard can be connected, and in each case it shows up as "Apple Wireless Keyboard". Almost everything works as expected; in fact, I'm typing on it right now. But one thing doesn't: the Fn key.

    I'd like to use Fn + Down Arrow as PgDn / Page Down. I understand this is default behaviour on Apple keyboards, and of course I'd like the same for Page Up, Home and End, but I'll stick to Page Down in my example. I used the xev tool to see the keycodes the system receives: if I press Fn, nothing happens and nothing is registered; if I press Fn + Down Arrow, xev only registers the down arrow. Here's the output from my 11.04 system to illustrate.

    Press just the Fn key: no output.

    Press the Down Arrow key:

        KeyPress event, serial 36, synthetic NO, window 0x4400001,
            root 0x15d, subw 0x4400002, time 2699773, (44,45), root:(1352,298),
            state 0x10, keycode 116 (keysym 0xff54, Down), same_screen YES,
            XLookupString gives 0 bytes:
            XmbLookupString gives 0 bytes:
            XFilterEvent returns: False

        KeyRelease event, serial 36, synthetic NO, window 0x4400001,
            root 0x15d, subw 0x4400002, time 2699860, (44,45), root:(1352,298),
            state 0x10, keycode 116 (keysym 0xff54, Down), same_screen YES,
            XLookupString gives 0 bytes:
            XFilterEvent returns: False

    Press the Fn + Down Arrow keys together:

        KeyPress event, serial 36, synthetic NO, window 0x4400001,
            root 0x15d, subw 0x4400002, time 2701548, (44,45), root:(1352,298),
            state 0x10, keycode 116 (keysym 0xff54, Down), same_screen YES,
            XLookupString gives 0 bytes:
            XmbLookupString gives 0 bytes:
            XFilterEvent returns: False

        KeyRelease event, serial 36, synthetic NO, window 0x4400001,
            root 0x15d, subw 0x4400002, time 2701623, (44,45), root:(1352,298),
            state 0x10, keycode 116 (keysym 0xff54, Down), same_screen YES,
            XLookupString gives 0 bytes:
            XFilterEvent returns: False

    I've been searching this forum and other Linux-related forums for hours, but I still have not found a solution. I mostly found advice on how to fix this when using an actual Apple laptop or desktop, but I don't have one. They said to try something like:

        echo 2 > /sys/module/hid_apple/ ...

    But since there was no hid_apple directory present on my systems, I needed to modprobe hid_apple first. That didn't help either. I'm fine with changing some config files, or compiling my own patched kernel if that's necessary. I currently have a 10.04 and a 12.04 system available to test.

    The same problem occurs when hooked up to Windows 7: the Fn key still does nothing, by itself or in combination with other keys. With some AutoHotkey fiddling, I was able to confirm the key is registered as pressed but ignored by default, and a custom AutoHotkey script can fix it there; but AutoHotkey is only for Windows, and I want my problem fixed on Linux. Hooked up to an iPad 2, Fn only works in combination with the F1-F12 keys, not with the arrow keys. If the screen of the iPad is off and I press just the Fn key, the screen comes on, so the key itself is registered as pressed.
    So to sum up my question: can anyone help me get Page Up, Page Down, Home and End to work on this keyboard, when that requires me to use an Fn key which is currently not registered?
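
    For reference, on systems where the hid_apple module actually binds the keyboard, the full path the truncated command above is reaching for is usually the module's fnmode parameter. A hedged sketch (whether hid_apple drives a Bluetooth A1314 on a given kernel is exactly the open question here):

        # 0 = Fn disabled, 1 = media keys by default, 2 = function keys by default
        echo 2 | sudo tee /sys/module/hid_apple/parameters/fnmode

        # Make it persistent across reboots:
        echo "options hid_apple fnmode=2" | sudo tee /etc/modprobe.d/hid_apple.conf
        sudo update-initramfs -u   # Debian/Ubuntu, so the option applies at boot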

    Read the article

  • My 3D Games crashing in these scenarios

    - by desaivv
    I have a situation here which I am unable to solve. I bought a PC last March; here are my specs:

        Intel Core i3 550 @ 3GHz
        4 GB RAM @ 400 MHz
        XFX GeForce 9500 GT graphics card, 1GB @ 550MHz
        500 GB HDD

    Lately, as soon as I load my saved game in Skyrim, it crashes. I have been playing Skyrim since I joined the Gaming.SE site. By "crashes" I mean the entire scene fills with red lines; I cannot ALT+TAB out or even CTRL+ALT+DEL. My only recourse is a hard reset via the power button. I cannot take a screenshot either. I have the latest ForceWare 296.10 drivers. This has been happening for the last two weeks. I always use Driver Sweeper to clean out my old drivers, since that is what XFX recommends before installing new ones.

    I installed MSI Afterburner recently to watch my GPU temperature. My GPU is at stock settings; I never overclocked it. In MSI Afterburner I cannot adjust the fan speed: it is greyed out, and in the settings there is no fan tab. With normal Internet browsing the GPU stays at 51 C. I ran Memtest86 overnight at level 11; it took about 13 hours, but found no errors in my RAM. I even re-installed my OS, with just the 296 drivers.

    The GPU fan does come on. I can play Diablo 2, but I cannot get past Warcraft 3's menu selection. There WAS some dust in my machine, but I always try to keep everything clean, since dust is an issue in my home town, and I keep the entire PC cabinet cool. My friend came over with his working graphics card; we bought our PCs at the same time with the exact same specifications. His card did not work either: the same problem, with the scene freezing with red lines.

    I did do my research before posting here; that is how I learned about MSI Afterburner, Driver Sweeper, SpeedFan, etc. I followed posts on Tom's Hardware from people who had similar problems. One suggestion, which reportedly worked for one person, was to bake the card in an oven. Since I bought the machine I have played Diablo 2 for months, the StarCraft 2 campaign for months, and recently Skyrim for months; I bought ME3 as well. I am at my wits' end and do not know what else to do. I could go out and buy a card, but my friend's card did not work either. I can use the machine for Eclipse or VS2010 development just fine, just not for 3D gaming.

    I originally posted this question on Gaming.SE, but I was directed here. I have browsed the SU database for my problem and found this, this, and this, but none of them cover my question. My machine is only one year old. Can some experienced superuser(s) shed light on this scenario? Is it a 3D graphics card problem? Will a brand-new card work? What else can I try to pinpoint the problem? Could it be the motherboard? Thanks.

    Read the article

  • Windows 7 Phone Database – Querying with Views and Filters

    - by SeanMcAlinden
    I've just added a feature to Rapid Repository to greatly improve how the Windows 7 Phone database is queried, for performance (this is in the trunk, not in release V1.0). The main concept behind it is to create a View Model class which has only the minimum data you need for a page. This View Model is then stored and retrieved rather than the whole list of entities. Another feature of the views is that they can be pre-filtered to improve query performance even further. You can download the source from the Microsoft CodePlex site: http://rapidrepository.codeplex.com/.

    Setting up a view

    Let's say you have an entity that stores lots of data about a game result, for example:

    GameScore entity

        public class GameScore : IRapidEntity
        {
            public Guid Id { get; set; }
            public string GamerId { get; set; }
            public string Name { get; set; }
            public Double Score { get; set; }
            public Byte[] ThumbnailAvatar { get; set; }
            public DateTime DateAdded { get; set; }
        }

    On your page you want to display a list of scores, but you only want to show the score and the date added, so you create a View Model for displaying just those properties.

    GameScoreView

        public class GameScoreView : IRapidView
        {
            public Guid Id { get; set; }
            public Double Score { get; set; }
            public DateTime DateAdded { get; set; }
        }

    Now that you have the view model, the first thing to do is set up the view at application start-up. This is done using the following syntax.

    View setup

        public MainPage()
        {
            RapidRepository<GameScore>.AddView<GameScoreView>(x => new GameScoreView { DateAdded = x.DateAdded, Score = x.Score });
        }

    As you can see, using a little bit of lambda syntax, you put in the code for constructing a single view; this is used internally for mapping an entity to a view. *Note* you do not need to map the Id property; this is done automatically, and a view model id will always be the same as its corresponding entity's.

    Adding filters

    One of the cool features of the view is that you can add filters to limit the amount of data stored in the view, which will dramatically improve performance when querying. You can add multiple filters using the fluent syntax if required. In this example, let's say that you will only ever show the scores for the last 10 days; you could add a filter like the following:

    Add single filter

        public MainPage()
        {
            RapidRepository<GameScore>.AddView<GameScoreView>(x => new GameScoreView { DateAdded = x.DateAdded, Score = x.Score })
                .AddFilter(x => x.DateAdded > DateTime.Now.AddDays(-10));
        }

    If you wanted to limit the data further, you could also say only scores above 100:

    Add multiple filters

        public MainPage()
        {
            RapidRepository<GameScore>.AddView<GameScoreView>(x => new GameScoreView { DateAdded = x.DateAdded, Score = x.Score })
                .AddFilter(x => x.DateAdded > DateTime.Now.AddDays(-10))
                .AddFilter(x => x.Score > 100);
        }

    Querying the view model

    So the important part is how to query the data. This is done using the repository. There is a method called Query which accepts the type of view as a generic parameter (you can have multiple View Model types per entity type). You can either use the result of the Query method directly or perform further querying on the result if required.

    Querying the view

        public void DisplayScores()
        {
            RapidRepository<GameScore> repository = new RapidRepository<GameScore>();
            List<GameScoreView> scores = repository.Query<GameScoreView>();

            // display logic
        }

    Further filtering

        public void TodaysScores()
        {
            RapidRepository<GameScore> repository = new RapidRepository<GameScore>();
            List<GameScoreView> todaysScores = repository.Query<GameScoreView>().Where(x => x.DateAdded > DateTime.Now.AddDays(-1)).ToList();

            // display logic
        }

    Retrieving the actual entity

    Retrieving the actual entity can be done easily by using the GetById method on the repository. Say, for example, you allow the user to click on a specific score to get further information: you can take the Id populated in the returned GameScoreView and use it directly on the repository to retrieve the full entity.

    Get full entity

        public void GetFullEntity(Guid gameScoreViewId)
        {
            RapidRepository<GameScore> repository = new RapidRepository<GameScore>();
            GameScore fullEntity = repository.GetById(gameScoreViewId);

            // display logic
        }

    Synchronising the view

    If you are upgrading from Rapid Repository V1.0 and are likely to have data in the repository already, you will need to perform a synchronisation to ensure the views and entities are fully in sync. You can either do this as a one-off during the application upgrade or, if you are a little more cautious, run it at each application start-up.

    Synchronise the view

        public void MyUpgradeTasks()
        {
            RapidRepository<GameScore>.SynchroniseView<GameScoreView>();
        }

    It's worth noting that in normal operation the view keeps itself in sync with the entities, so this is only really required if you are upgrading from V1.0 to V2.0 when it gets released shortly.

    Summary

    I really hope you like this feature. It will be great for performance, and I believe it supports good practice by promoting the use of View Models for specific pages. I'm hoping to produce a beta for this over the next few days; I just want to add some more tests and hopefully iron out any bugs. I would really appreciate any thoughts on this feature, and would really love to know of any bugs you find. You can download the source from the following: http://rapidrepository.codeplex.com/

    Kind regards,
    Sean McAlinden.

    Read the article

  • Week in Geek: LastPass Rescues Xmarks Edition

    - by Asian Angel
    This week we learned how to breathe new life into an aging Windows Mobile 6.x device, use filters in Photoshop, back up and move VirtualBox machines, use the BitDefender Rescue CD to clean an infected PC, and had fun setting up a pirates theme on our computers. Photo by _nash.

    Weekly Feature

    Do you love using the Faenza icon set on your Ubuntu system but feel that there are a few much-needed icons missing (or you want a different version of a particular icon)? Then you may want to take a look at the Faenza Variants icon pack. The icons are available in the following sizes: 16px, 22px, 32px, 48px and scalable sizes. Photo by Asian Angel.

    Random Geek Links

    Another week with extra link goodness to help keep you on top of the news. Photo by Asian Angel.

    LastPass acquires Xmarks, premium service announced: Xmarks announced that it has been acquired by LastPass, a cross-platform password management service. This also means that Xmarks is now in transition from a "free" to a "freemium" business model.

    WikiLeaks reappears on European Net domains: WikiLeaks has re-emerged on a Swiss Internet domain, followed by domains in Germany, Finland, and the Netherlands, sidestepping a move that had in effect taken the controversial site off the Internet.

    Iran: Yes, Stuxnet hurt our nuclear program: The Stuxnet worm got some big play from Iranian President Mahmoud Ahmadinejad, who acknowledged that the malware dinged his nuclear program.

    More Windows rogues than just AV - fake Defragmenter Check Disk: Don't think for a second that rogues are limited to scareware, because as so-called products such as "System Defragmenter", "Scan Disk" and "Check Disk" prove, they're not.

    Internet Explorer's Protected Mode can be bypassed: Researchers from Verizon Business have now described a way of bypassing Protected Mode in IE 7 and 8 in order to gain access to user accounts.

    Can you really see who viewed your Facebook profile? Rogue application spreads virally: Once again, a rogue application is spreading virally between Facebook users, pretending to offer a way of seeing who has viewed your profile.

    More holes in Palm's WebOS: Researchers Orlando Barrera and Daniel Herrera, who both work for security firm SecTheory, have discovered a gaping security hole in Palm's WebOS smartphone operating system.

    Next-gen banking Trojans hit APAC: With the proliferation of banking Trojans, web and smartphone users of online banking services have to be on constant alert to avoid falling prey to fraud schemes, warned Etay Maor, project manager for RSA Fraud Action.

    AVG update cripples 64-bit computers: A signature update automatically deployed by the AVG virus scanner Thursday crippled numerous computers. The article includes a link to forums with a fix for computers affected after a restart.

    Congress moves to outlaw "mystery charges" for Web shoppers: Legislation that makes it illegal for web merchants and so-called post-transaction marketers to charge credit cards without the card owners' say-so came closer to becoming law this week.

    Ballmer set to "look into" Windows Home Server Drive Extender fiasco: Tuesday's announcement from Microsoft regarding the removal of Drive Extender from Windows Home Server has sent shock waves across the web.

    Google tweaks search recipe to ding scam artists: Google has changed its search algorithm to penalize sites deemed to provide an "extremely poor user experience", following a New York Times story on a merchant who justified abusive behavior towards customers as a search-engine optimization tactic.

    Geek Video of the Week

    Watch as our two friends debate back and forth about the early adoption of new technology through multiple time periods (Stone Age to the far future). Will our reluctant friend finally succumb to the temptation? Photo by CollegeHumor.

    Early Adopters Through History

    Random TinyHacker Links

    Fix Issues in Windows 7 Using Reliability Monitor: Learn how to analyze Windows 7 errors and then fix them using the built-in Reliability Monitor.

    Learn About IE Tab Groups: Tab groups is a useful feature in IE 8. Here's a detailed guide to what it is all about.

    Google's Book Helps You Learn About Browsers and Web: A cool new online book by the Google Chrome team on browsers and the web.

    TrustPort Internet Security 2011 - Good Security from a Less Known Provider: TrustPort is not exactly a well-known provider of security solutions, at least not in the consumer space. This review tests their latest offering in detail.

    How the World is Using Cell Phones: An infographic showing the shocking demographics of cell phone use.

    Super User Questions

    See the great answers to these questions from Super User:

    I am unable to access my C drive. It says it is unable to display current owner.
    List of Windows special directories/shortcuts like '%TEMP%'
    Is using multiple passes for wiping a disk really necessary?
    How can I view two files side by side in Notepad++?
    Is there any tool that automatically puts screenshots to my Dropbox?

    How-To Geek Weekly Article Recap

    Look through our hottest articles from this past week at How-To Geek:

    How to Create a Software RAID Array in Windows 7
    9 Alternatives for Windows Home Server's Drive Extender
    Why Doesn't Disk Cleanup Delete Everything from the Temp Folder?
    Ask the Readers: How Much Do You Customize Your Operating System?
    How to Upload Really Large Files to SkyDrive, Dropbox, or Email

    One Year Ago on How-To Geek

    Enjoy reading through these awesome articles from one year ago:

    How To Upgrade from Vista to Windows 7 Home Premium Edition
    How To Fix No Aero Transparency in Windows 7
    Troubleshoot Startup Problems with Startup Repair Tool in Windows 7 & Vista
    Rename the Guest Account in Windows 7 for Enhanced Security
    Disable Error Reporting in XP, Vista, and Windows 7

    The Geek Note

    That wraps things up here for this week. Regardless of the weather wherever you may be, we hope that you have an opportunity to get outside and have some fun! Remember to keep sending those great tips in to us at [email protected]. Photo by Tony the Misfit.

    Read the article

  • How To Activate Your Free Office 2007 to 2010 Tech Guarantee Upgrade

    - by Matthew Guay
    Have you purchased Office 2007 since March 5th, 2010?  If so, here’s how you can activate and download your free upgrade to Office 2010! Microsoft Office 2010 has just been released, and today you can purchase upgrades from most retail stores or directly from Microsoft via download.  But if you’ve purchased a new copy of Office 2007 or a new computer that came with Office 2007 since March 5th, 2010, then you’re entitled to an absolutely free upgrade to Office 2010.  You’ll need enter information about your Office 2007 and then download the upgrade, so we’ll step you through the process. Getting Started First, if you’ve recently purchased Office 2007 but haven’t installed it, you’ll need to go ahead and install it before you can get your free Office 2010 upgrade.  Install it as normal.   Once Office 2007 is installed, run any of the Office programs.  You’ll be prompted to activate Office.  Make sure you’re connected to the internet, and then click Next to activate. Get your Free Upgrade to Office 2010 Now you’re ready to download your upgrade to Office 2010.  Head to the Office Tech Guarantee site (link below), and click Upgrade now. You’ll need to enter some information about your Office 2007.  Check that you purchased your copy of Office 2007 after March 5th, select your computer manufacturer, and check that you agree to the terms. Now you’re going to need the Product ID number from Office 2007.  To find this, open Word or any other Office 2007 application.  Click the Office Orb, and select Options on the bottom. Select the Resources button on the left, and then click About. Near the bottom of this dialog, you’ll see your Product ID.  This should be a number like: 12345-123-1234567-12345   Go back to the Office Tech Guarantee signup page in your browser, and enter this Product ID.  Select the language of your edition of Office 2007, enter the verification code, and then click Submit. It may take a few moments to validate your Product ID. When it is finished, you’ll be taken to an order page that shows the edition of Office 2010 you’re eligible to receive.  The upgrade download is free, but if you’d like to purchase a backup DVD of Office 2010, you can add it to your order for $13.99.  Otherwise, simply click Continue to accept. Do note that the edition of Office 2010 you receive may be different that the edition of Office 2007 you purchased, as the number of editions has been streamlined in the Office 2010 release.  Here’s a chart you can check to see what edition you’ll receive.  Note that you’ll still be allowed to install Office on the same number of computers; for example, Office 2007 Home and Student allows you to install it on up to 3 computers in the same house, and your Office 2010 upgrade will allow the same. Office 2007 Edition Office 2010 Upgrade You’ll Receive Office 2007 Home and Student Office Home and Student 2010 Office Basic 2007Office Standard 2007 Office Home and Business 2010 Office Small Business 2007Office Professional 2007Office Ultimate 2007 Office Professional 2010 Office Professional 2007 AcademicOffice Ultimate 2007 Academic Office Professional Academic 2010 Sign in with your Windows Live ID, or create a new one if you don’t already have one. Enter your name, select your country, and click Create My Account.  Note that Office will send Office 2010 tips to your email address; if you don’t wish to receive them, you can unsubscribe from the emails later.   Finally, you’re ready to download Office 2010!  
Click the Download Now link to start downloading Office 2010.  Your Product Key will appear directly above the Download link, so you can copy it and then paste it into the installer when your download is finished.  You will additionally receive an email with the download links and product key, so if your download fails you can always restart it from that link. If your edition of Office 2007 included the Office Business Contact Manager, you will be able to download it from the second Download link.  And, of course, even if you didn’t order a backup DVD, you can always burn the installers to a DVD for a backup.

Install Office 2010
Once you’re finished downloading Office 2010, run the installer to get it installed on your computer.  Enter your Product Key from the Tech Guarantee website as above, and click Continue. Accept the license agreement, and then click Upgrade to upgrade to the latest version of Office.  The installer will remove all of your Office 2007 applications, and then install their 2010 counterparts.  If you wish to keep some of your Office 2007 applications instead, click Customize and then select either to keep all previous versions or to keep only specific applications. By default, Office 2010 will try to activate online automatically.  If it doesn’t activate during the install, you’ll need to activate it when you first run any of the Office 2010 apps.

Conclusion
The Tech Guarantee makes it easy to get the latest version of Office if you recently purchased Office 2007.  The Tech Guarantee program is open through the end of September, so make sure to grab your upgrade during this time.  Actually, if you find a great deal on Office 2007 from a major retailer between now and then, you could also take advantage of this program to get Office 2010 cheaper. And if you need help getting started with Office 2010, check out our articles that can help you get situated in your new version of Office!

Link: Activate and Download Your free Office 2010 Tech Guarantee Upgrade

    Read the article

  • Silverlight 4 Tools for VS 2010 and WCF RIA Services Released

    - by ScottGu
    The final release of the Silverlight 4 Tools for Visual Studio 2010 and WCF RIA Services is now available for download.

Download and Install
If you already have Visual Studio 2010 installed (or the free Visual Web Developer 2010 Express), then you can install both the Silverlight 4 Tooling Support as well as WCF RIA Services support by downloading and running this setup package (note: please make sure to uninstall the preview release of the Silverlight 4 Tools for VS 2010 if you have previously installed that).  The Silverlight 4 Tools for VS 2010 package extends the Silverlight support built into Visual Studio 2010 and enables support for Silverlight 4 applications as well.  It also installs WCF RIA Services application templates and libraries. Today’s release includes the English edition of the Silverlight 4 Tooling – localized versions will be available next month for other Visual Studio languages as well.

Silverlight Tooling Support
Visual Studio 2010 includes rich tooling support for building Silverlight and WPF applications. It includes a WYSIWYG designer surface that enables you to easily use controls to construct UI – including the ability to take advantage of layout containers, and apply styles and resources. The VS 2010 designer enables you to leverage the rich data binding support within Silverlight and WPF, and easily wire up bindings on controls.  The Data Sources window within Silverlight projects can be used to reference POCO objects (plain old CLR objects), WCF Services, WCF RIA Services client proxies or SharePoint Lists.  For example, let’s assume we add a “Person” class like below to our project. We could then add it to the Data Sources window, which will cause it to show up in the IDE. We can optionally customize the default UI control types that are associated with each property on the object.  For example, below we’ll default the BirthDate property to be represented by a “DatePicker” control. And then when we drag/drop the Person type from the Data Sources window onto the design surface, it will automatically create UI controls that are bound to the properties of our Person class. VS 2010 allows you to optionally customize each UI binding further by selecting a control, and then right-clicking on any of its properties within the property grid to pull up the “Apply Bindings” dialog. This brings up a floating data-binding dialog that enables you to easily configure things like the binding path on the data source object, specify a format converter, specify string-format settings, specify how validation errors should be handled, etc. In addition to providing WYSIWYG designer support for WPF and Silverlight applications, VS 2010 also provides rich XAML intellisense and code editing support – enabling a rich source editing environment.

Silverlight 4 Tool Enhancements
Today’s Silverlight 4 Tooling Release for VS 2010 includes a bunch of nice new features.  These include:

Support for Silverlight Out of Browser Applications and Elevated Trust Applications
You can open up a Silverlight application’s project properties window and click the “Enable Running Application Out of Browser” checkbox to enable you to install an offline, out of browser, version of your Silverlight 4 application.
You can then customize a number of “out of browser” settings of your application within Visual Studio. Notice above how you can now indicate that you want to run with elevated trust, with hardware graphics acceleration, as well as customize things like the Window style of the application (allowing you to build a nice polished window style for consumer applications).

Support for Implicit Styles and “Go to Value Definition” Support
Silverlight 4 now allows you to define “implicit styles” for your applications.  This allows you to style controls by type (for example: have a default look for all buttons) and avoids having to explicitly reference styles from each control.  In addition to honoring implicit styles on the designer surface, VS 2010 also now allows you to right-click on any control (or on one of its properties) and choose the “Go to Value Definition…” context menu to jump to the XAML where the style is defined, and from there you can easily navigate onward to any referenced resources.  This makes it much easier to figure out questions like “why is my button red?”.

Style Intellisense
VS 2010 enables you to easily modify styles you already have in XAML, and now you get intellisense for properties and their values within a style based on the TargetType of the specified control.  For example, below we have a style being set for controls of type “Button” (this is indicated by the “TargetType” property).  Notice how intellisense now automatically shows us properties for the Button control (even within the <Setter> element).

Great Video – Watch the Silverlight Designer Features in Action
You can see all of the above Silverlight 4 Tools for Visual Studio 2010 features (and some more cool ones I haven’t mentioned) demonstrated in action within this 20-minute Silverlight.TV video on Channel 9.

WCF RIA Services
Today we also shipped the V1 release of WCF RIA Services.  It is included and automatically installed as part of the Silverlight 4 Tools for Visual Studio 2010 setup. WCF RIA Services makes it much easier to build business applications with Silverlight.  It simplifies the traditional n-tier application pattern by bringing together the ASP.NET and Silverlight platforms using the power of WCF for communication.  WCF RIA Services provides a pattern to write application logic that runs on the mid-tier and controls access to data for queries, changes and custom operations. It also provides end-to-end support for common tasks such as data validation, authentication and authorization based on roles by integrating with Silverlight components on the client and ASP.NET on the mid-tier. Put simply – it makes it much easier to query data stored on a server from a client machine, optionally manipulate/modify the data on the client, and then save it back to the server.  It supports a validation architecture that helps ensure that your data is kept secure and business rules are applied consistently on both the client and middle tiers. WCF RIA Services uses WCF for communication between the client and the server.  It supports both an optimized .NET-to-.NET binary serialization format, as well as a set of open extensions to the ATOM format known as OData and an optional JavaScript Object Notation (JSON) format that can be used by any client. You can hear Nikhil and Dinesh talk a little about WCF RIA Services in this 13-minute Channel 9 video.
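To make the mid-tier pattern described above concrete, here is a minimal hedged sketch of what a WCF RIA Services domain service typically looks like. The Person entity and PeopleContext Entity Framework model are illustrative assumptions (they are not part of this release announcement); the EnableClientAccess attribute, LinqToEntitiesDomainService base class, and convention-named update method are the standard WCF RIA Services shapes:

[EnableClientAccess]
public class PeopleDomainService : LinqToEntitiesDomainService<PeopleContext>
{
    // Exposed to the Silverlight client as a query it can further filter, sort, and page
    public IQueryable<Person> GetPeople()
    {
        return this.ObjectContext.People.OrderBy(p => p.LastName);
    }

    // Convention-named update method; validation attributes on Person run on
    // both the client and here on the mid-tier before changes are persisted
    public void UpdatePerson(Person person)
    {
        this.ObjectContext.People.AttachAsModified(person, this.ChangeSet.GetOriginal(person));
    }
}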
Putting it all Together – the Silverlight 4 Training Kit
Check out the Silverlight 4 Training Kit to learn more about how to build business applications with Silverlight 4, Visual Studio 2010 and WCF RIA Services. The training kit includes 8 modules, 25 videos, and several hands-on labs that explain Silverlight 4 and WCF RIA Services concepts and walk you through building an end-to-end application with them.  The training kit is available for free and is a great way to get started.

Summary
I’m really excited about today’s release – it completes the Silverlight development story and delivers a great end-to-end runtime + tooling story for building applications.  All of the above features are available for use both in VS 2010 as well as the free Visual Web Developer 2010 Express Edition – making it really easy to get started building great solutions.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Q1 2010 New Feature: Paging with RadGridView for Silverlight and WPF

    We are glad to announce that the Q1 2010 Release has added another weapon to RadGridView's growing arsenal of features. This is the brand new RadDataPager control, which provides the user interface for paging through a collection of data. The good news is that RadDataPager can be used to page any collection. It does not depend on RadGridView in any way, so you will be free to use it with the rest of your ItemsControls if you choose to do so. Before you read on, you might want to download the samples solution that I have attached. It contains a sample project for every scenario that I will discuss later on. Looking at the code while reading will make things much easier for you. There is something for everyone among the 10 Visual Studio projects that are included in the solution. So go and grab it.

I. Paging essentials
The single most important piece of software concerning paging in Silverlight is the System.ComponentModel.IPagedCollectionView interface. Those of you who are on the WPF front need not worry though. As you might already know, Telerik's Silverlight and WPF controls share the same code base. Since WPF does not contain a similar interface, Telerik has provided its own Telerik.Windows.Data.IPagedCollectionView. The IPagedCollectionView interface contains several important members which are used by RadGridView to perform the actual paging. Silverlight provides a default implementation of this interface which, naturally, is called PagedCollectionView. You should definitely take a look at its source code in case you are interested in what is going on under the hood. But this is not a prerequisite for our discussion. The WPF default implementation of the interface is Telerik's QueryableCollectionView which, among many other interfaces, implements IPagedCollectionView.

II. No Paging
In order to gradually build up my case, I will start with a very simple example that lacks paging whatsoever. It might sound stupid, but this will help us build on top of this paging-devoid example. Let us imagine that we have the simplest possible scenario: a simple IEnumerable and an ItemsControl that shows its contents. This will look like this:

No Paging

IEnumerable itemsSource = Enumerable.Range(0, 1000);
this.itemsControl.ItemsSource = itemsSource;

XAML

<Border Grid.Row="0" BorderBrush="Black" BorderThickness="1" Margin="5">
    <ListBox Name="itemsControl"/>
</Border>
<Border Grid.Row="1" BorderBrush="Black" BorderThickness="1" Margin="5">
    <TextBlock Text="No Paging"/>
</Border>

Nothing special for now. Just some data displayed in a ListBox. The two sample projects in the solution that I have attached are:

NoPaging_WPF
NoPaging_SL3

With every next sample those two projects will evolve in some way or another.

III. Paging simple collections
The single most important property of RadDataPager is its Source property. This is where you pass in your collection of data for paging. More often than not your collection will not be an IPagedCollectionView. It will either be a simple List<T>, or an ObservableCollection<T>, or anything that is simply IEnumerable. Unless you had paging in mind when you designed your project, it is almost certain that your data source will not be pageable out of the box. So what are the options?

III. 1. Wrapping the simple collection in an IPagedCollectionView
If you look at the constructors of PagedCollectionView and QueryableCollectionView you will notice that you can pass in a simple IEnumerable as a parameter.
Those two classes will wrap it and provide paging capabilities over your original data. In fact, this is what RadGridView does internally. It wraps your original collection in a QueryableCollectionView in order to easily perform many useful tasks such as filtering, sorting, and others, but in our case the most important one is paging. So let us start our series of examples with the most simplistic one. Imagine that you have a simple IEnumerable which is the source for an ItemsControl. Here is how to wrap it in order to enable paging:

Silverlight

IEnumerable itemsSource = Enumerable.Range(0, 1000);
var pagedSource = new PagedCollectionView(itemsSource);
this.radDataPager.Source = pagedSource;
this.itemsControl.ItemsSource = pagedSource;

WPF

IEnumerable itemsSource = Enumerable.Range(0, 1000);
var pagedSource = new QueryableCollectionView(itemsSource);
this.radDataPager.Source = pagedSource;
this.itemsControl.ItemsSource = pagedSource;

XAML

<Border Grid.Row="0" BorderBrush="Black" BorderThickness="1" Margin="5">
    <ListBox Name="itemsControl"/>
</Border>
<Border Grid.Row="1" BorderBrush="Black" BorderThickness="1" Margin="5">
    <telerikGrid:RadDataPager Name="radDataPager"
                              PageSize="10"
                              IsTotalItemCountFixed="True"
                              DisplayMode="All"/>
</Border>

This will do the trick. It is quite simple, isn't it? The two sample projects in the solution that I have attached are:

PagingSimpleCollectionWithWrapping_WPF
PagingSimpleCollectionWithWrapping_SL3

III. 2. Binding to RadDataPager.PagedSource
In case you do not like this approach, there is a better one. When you assign an IEnumerable as the Source of a RadDataPager, it will automatically wrap it in a QueryableCollectionView and expose it through its PagedSource property. From then on, you can attach any number of ItemsControls to the PagedSource and they will be automatically paged. Here is how to do this entirely in XAML:

Using RadDataPager.PagedSource

<Border Grid.Row="0" BorderBrush="Black" BorderThickness="1" Margin="5">
    <ListBox Name="itemsControl"
             ItemsSource="{Binding PagedSource, ElementName=radDataPager}"/>
</Border>
<Border Grid.Row="1" BorderBrush="Black" BorderThickness="1" Margin="5">
    <telerikGrid:RadDataPager Name="radDataPager"
                              Source="{Binding ItemsSource}"
                              PageSize="10"
                              IsTotalItemCountFixed="True"
                              DisplayMode="All"/>
</Border>

The two sample projects in the solution that I have attached are:

PagingSimpleCollectionWithPagedSource_WPF
PagingSimpleCollectionWithPagedSource_SL3

IV. Paging collections implementing IPagedCollectionView
Those of you who are using WCF RIA Services should feel very lucky. A quick look with Reflector or the debugger shows that the DomainDataSource.Data property is in fact an instance of the DomainDataSourceView class. This class implements a handful of useful interfaces:

ICollectionView
IEnumerable
INotifyCollectionChanged
IEditableCollectionView
IPagedCollectionView
INotifyPropertyChanged

Luckily, IPagedCollectionView is among them, which lets you do the whole paging on the server. So let's do this. We will add a DomainDataSource control to our page/window and connect the items control and the pager to it.
Here is how to do this:

MainPage

<riaControls:DomainDataSource x:Name="invoicesDataSource"
                              AutoLoad="True"
                              QueryName="GetInvoicesQuery">
    <riaControls:DomainDataSource.DomainContext>
        <services:ChinookDomainContext/>
    </riaControls:DomainDataSource.DomainContext>
</riaControls:DomainDataSource>
<Border Grid.Row="0" BorderBrush="Black" BorderThickness="1" Margin="5">
    <ListBox Name="itemsControl"
             ItemsSource="{Binding Data, ElementName=invoicesDataSource}"/>
</Border>
<Border Grid.Row="1" BorderBrush="Black" BorderThickness="1" Margin="5">
    <telerikGrid:RadDataPager Name="radDataPager"
                              Source="{Binding Data, ElementName=invoicesDataSource}"
                              PageSize="10"
                              IsTotalItemCountFixed="True"
                              DisplayMode="All"/>
</Border>

By the way, you can replace the ListBox from the above code snippet with any other ItemsControl. It can be RadGridView, it can be the MS DataGrid, you name it. Essentially, RadDataPager is sending paging commands to the DomainDataSource.Data. It does not care who, what, or how many different controls are bound to this same Data property of the DomainDataSource control. So if you would like to experiment with this, you can throw in any number of other ItemsControls next to the ListBox, bind them in the same manner, and all of them will be paged by our single RadDataPager. Furthermore, you can throw in any number of RadDataPagers and bind them to the same property. Paging with any one of them will then automatically update all of the rest. The whole picture is simply beautiful, and we can do all of this thanks to WCF RIA Services. The two sample projects (Silverlight only) in the solution that I have attached are:

PagingIPagedCollectionView
PagingIPagedCollectionView.Web

V. Paging RadGridView
While you can replace the ListBox in any of the above examples with a RadGridView, RadGridView offers something extra. Similar to the DomainDataSource.Data property, the RadGridView.Items collection implements the IPagedCollectionView interface. So you are already thinking: "Then why not bind the Source property of RadDataPager to RadGridView.Items?" Well, that's exactly what you can do, and you will start paging RadGridView out of the box. It is as simple as that; no code-behind is involved:

MainPage

<Border Grid.Row="0" BorderBrush="Black" BorderThickness="1" Margin="5">
    <telerikGrid:RadGridView Name="radGridView"
                             ItemsSource="{Binding ItemsSource}"/>
</Border>
<Border Grid.Row="1" BorderBrush="Black" BorderThickness="1" Margin="5">
    <telerikGrid:RadDataPager Name="radDataPager"
                              Source="{Binding Items, ElementName=radGridView}"
                              PageSize="10"
                              IsTotalItemCountFixed="True"
                              DisplayMode="All"/>
</Border>

The two sample projects in the solution that I have attached are:

PagingRadGridView_SL3
PagingRadGridView_WPF

With this last example I think I have covered every possible paging combination. In case you would like to see an example of something that I have not covered, please let me know.
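One more illustration before the links below – a hedged sketch rather than one of the attached samples: because everything above ultimately drives the IPagedCollectionView members, you can also page entirely from code, without any RadDataPager UI at all (Silverlight's PagedCollectionView shown; Telerik's QueryableCollectionView exposes the same members in WPF):

// Wrap a plain collection, hand it to any ItemsControl, and page it programmatically
var pagedSource = new PagedCollectionView(Enumerable.Range(0, 1000).ToList());
pagedSource.PageSize = 10;                    // 100 pages of 10 items each
this.itemsControl.ItemsSource = pagedSource;

if (pagedSource.CanChangePage)
    pagedSource.MoveToNextPage();             // now showing items 10..19

pagedSource.MoveToPage(5);                    // jump straight to page index 5
// at this point pagedSource.PageIndex == 5 and pagedSource.TotalItemCount == 1000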
Also, make sure you check out those great online examples:

WCF RIA Services with DomainDataSource
Paging Configurator
Endless Paging
Paging Any Collection
Paging RadGridView

Happy Paging!

Download Full Source Code

    Read the article

  • Ajax Control Toolkit and Superexpert

    - by Stephen Walther
    Microsoft has asked my company, Superexpert Consulting, to take ownership of the development and maintenance of the Ajax Control Toolkit moving forward. In this blog entry, I discuss our strategy for improving the Ajax Control Toolkit.

Why the Ajax Control Toolkit?
The Ajax Control Toolkit is one of the most popular projects on CodePlex. In fact, some have argued that it is among the most successful open-source projects of all time. It consistently receives over 3,500 downloads a day (not weekends – workdays). A mind-boggling number of developers use the Ajax Control Toolkit in their ASP.NET Web Forms applications. Why does the Ajax Control Toolkit continue to be such a popular project? The Ajax Control Toolkit fills a strong need in the ASP.NET Web Forms world. The Toolkit enables Web Forms developers to build richly interactive JavaScript applications without writing any JavaScript. For example, by taking advantage of the Ajax Control Toolkit, a Web Forms developer can add modal dialogs, popup calendars, and client tabs to a web application simply by dragging web controls onto a page. The Ajax Control Toolkit is not for everyone. If you are comfortable writing JavaScript, then I recommend that you investigate using jQuery plugins instead of the Ajax Control Toolkit. However, if you are a Web Forms developer and you don't want to get your hands dirty writing JavaScript, then the Ajax Control Toolkit is a great solution.

The Ajax Control Toolkit is Vast
The Ajax Control Toolkit consists of 40 controls. That's a lot of controls (for the sake of comparison, jQuery UI consists of only 8 controls – those slackers :). Furthermore, developers expect the Ajax Control Toolkit to work on browsers both old and new. For example, people expect the Ajax Control Toolkit to work with Internet Explorer 6 and Internet Explorer 9 and every version of Internet Explorer in between. People also expect the Ajax Control Toolkit to work on the latest versions of Mozilla Firefox, Apple Safari, and Google Chrome. And, people expect the Ajax Control Toolkit to work with different operating systems. Yikes, that is a lot of combinations. The biggest challenge which my company faces in supporting the Ajax Control Toolkit is ensuring that the Ajax Control Toolkit works across all of these different browsers and operating systems.

Testing, Testing, Testing
Because we wanted to ensure that we could easily test the Ajax Control Toolkit with different browsers, the very first thing that we did was to set up a dedicated testing server. The dedicated server – named Schizo – hosts 4 virtual machines so that we can run Internet Explorer 6, Internet Explorer 7, Internet Explorer 8, and Internet Explorer 9 at the same time (we also use the virtual machines to host the latest versions of Firefox, Chrome, Opera, and Safari). The five developers on our team (plus me) can each publish to a separate FTP website on the testing server. That way, we can quickly test how changes to the Ajax Control Toolkit affect different browsers.

QUnit Tests for the Ajax Control Toolkit
Introducing regressions – introducing new bugs when trying to fix existing bugs – is the concern which prevents me from sleeping well at night. There are so many people using the Ajax Control Toolkit in so many unique scenarios that it is difficult to make improvements to the Ajax Control Toolkit without introducing regressions.
In order to avoid regressions, we decided early on that it was extremely important to build good test coverage for the 40 controls in the Ajax Control Toolkit. We've been focusing a lot of energy on building automated JavaScript unit tests which we can use to help us discover regressions. We decided to write the unit tests with the QUnit test framework. We picked QUnit because it is quickly becoming the standard unit testing framework in the JavaScript world. For example, it is the unit testing framework used by the jQuery team, the jQuery UI team, and many jQuery UI plugin developers. We had to make several enhancements to the QUnit framework in order to test the Ajax Control Toolkit. For example, QUnit does not support tests which include postbacks. We modified the QUnit framework so that it works with IFrames, so we could perform postbacks in our automated tests. At this point, we have written hundreds of QUnit tests. For example, we have written 135 QUnit tests for the Accordion control. The QUnit tests are included with the Ajax Control Toolkit source code in a project named AjaxControlToolkit.Tests. You can run all of the QUnit tests contained in the project by opening the Default.aspx page.

Automating the QUnit Tests across Multiple Browsers
Automated tests are useless if no one ever runs them. In order for the QUnit tests to be useful, we needed an easy way to run the tests automatically against a matrix of browsers. We wanted to run the unit tests against Internet Explorer 6, Internet Explorer 7, Internet Explorer 8, Internet Explorer 9, Firefox, Chrome, and Safari automatically. Expecting a developer to run QUnit tests against every browser after every check-in is just too much to expect. It takes 20 seconds to run the Accordion QUnit tests. We are testing against 8 browsers. That would require the developer to open 8 browsers and wait for the results after each change in code. Too much work. Therefore, we built a JavaScript Test Server. Our JavaScript Test Server project was inspired by John Resig's TestSwarm project. The JavaScript Test Server runs our QUnit tests in a swarm of browsers (running on different operating systems) automatically. Here's how the JavaScript Test Server works (a rough sketch of the server side of this loop appears at the end of this post):

1. We created an ASP.NET page named RunTest.aspx that constantly polls the JavaScript Test Server for a new set of QUnit tests to run. After the RunTest.aspx page runs the QUnit tests, RunTest.aspx records the test results back to the JavaScript Test Server.
2. We opened the RunTest.aspx page on instances of Internet Explorer 6, Internet Explorer 7, Internet Explorer 8, Internet Explorer 9, Firefox, Chrome, Opera, and Safari.

Now that we have the JavaScript Test Server set up, we can run all of our QUnit tests against all of the browsers which we need to support with a single click of a button.

A New Release of the Ajax Control Toolkit Each Month
The Ajax Control Toolkit Issue Tracker contains over one thousand five hundred open issues and feature requests, so we have plenty of work on our plates :) At CodePlex, anyone can vote for an issue to be fixed. Originally, we planned to fix issues in order of their votes. However, we quickly discovered that this approach was inefficient. Constantly switching back and forth between different controls was too time-consuming. It takes time to re-familiarize yourself with a control. Instead, we decided to focus on two or three controls each month and really focus on fixing the issues with those controls.
This way, we can fix sets of related issues and avoid the randomization caused by context switching. Our team works in monthly sprints. We plan to do another release of the Ajax Control Toolkit each and every month. So far, we have completed one release of the Ajax Control Toolkit, which was released on April 1, 2011. We plan to release a new version in early May.

Conclusion
Fortunately, I work with a team of smart developers. We currently have 5 developers working on the Ajax Control Toolkit (not full-time; they are also building two very cool ASP.NET MVC applications). All the developers who work on our team are required to have strong JavaScript, jQuery, and ASP.NET MVC skills. In the interest of being as transparent as possible about our work on the Ajax Control Toolkit, I plan to blog frequently about our team's ongoing work. In my next blog entry, I plan to write about the two Ajax Control Toolkit controls which are the focus of our work for the next release.
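To make the polling design mentioned above concrete, here is a hedged C# sketch of what the server half of such a test swarm could look like. Every name here is illustrative – the actual JavaScript Test Server code was not published in this post:

using System.Collections.Generic;
using System.Web;

public class JavaScriptTestServerHandler : IHttpHandler
{
    // QUnit test pages to hand out to each polling browser (illustrative URLs)
    private static readonly string[] TestPages =
    {
        "/AjaxControlToolkit.Tests/Accordion.aspx",
        "/AjaxControlToolkit.Tests/Calendar.aspx",
        "/AjaxControlToolkit.Tests/Tabs.aspx"
    };

    private static readonly Dictionary<string, int> NextTestForBrowser = new Dictionary<string, int>();
    private static readonly object Sync = new object();

    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        string browser = context.Request["browser"] ?? "unknown";

        if (context.Request.HttpMethod == "POST")
        {
            // RunTest.aspx posts its QUnit results back when a suite finishes
            LogResult(browser,
                      context.Request.Form["testPage"],
                      context.Request.Form["passed"],
                      context.Request.Form["failed"]);
            return;
        }

        // A polling browser asks for its next suite; an empty response
        // means "nothing new yet, keep polling"
        lock (Sync)
        {
            int next;
            NextTestForBrowser.TryGetValue(browser, out next);
            if (next < TestPages.Length)
            {
                NextTestForBrowser[browser] = next + 1;
                context.Response.Write(TestPages[next]);
            }
        }
    }

    private static void LogResult(string browser, string testPage, string passed, string failed)
    {
        // A real implementation would persist results per browser and test page
    }
}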

    Read the article

  • GZip/Deflate Compression in ASP.NET MVC

    - by Rick Strahl
    A long while back I wrote about GZip compression in ASP.NET. In that article I describe two generic helper methods that I've used in all sorts of ASP.NET applications, from WebForms apps to HttpModules and HttpHandlers that require gzip or deflate compression. The same static methods also work in ASP.NET MVC. Here are the two routines:

/// <summary>
/// Determines if GZip is supported
/// </summary>
/// <returns></returns>
public static bool IsGZipSupported()
{
    string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
    if (!string.IsNullOrEmpty(AcceptEncoding) &&
        (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate")))
        return true;
    return false;
}

/// <summary>
/// Sets up the current page or handler to use GZip through a Response.Filter
/// IMPORTANT: You have to call this method before any output is generated!
/// </summary>
public static void GZipEncodePage()
{
    HttpResponse Response = HttpContext.Current.Response;
    if (IsGZipSupported())
    {
        string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
        if (AcceptEncoding.Contains("gzip"))
        {
            Response.Filter = new System.IO.Compression.GZipStream(Response.Filter,
                System.IO.Compression.CompressionMode.Compress);
            Response.Headers.Remove("Content-Encoding");
            Response.AppendHeader("Content-Encoding", "gzip");
        }
        else
        {
            Response.Filter = new System.IO.Compression.DeflateStream(Response.Filter,
                System.IO.Compression.CompressionMode.Compress);
            Response.Headers.Remove("Content-Encoding");
            Response.AppendHeader("Content-Encoding", "deflate");
        }
    }

    // Allow proxy servers to cache encoded and unencoded versions separately
    Response.AppendHeader("Vary", "Content-Encoding");
}

The first method checks whether the client sending the request includes an accept-encoding for either gzip or deflate, and if it does it returns true. The second function uses IsGZipSupported() to decide whether it should encode content, and uses a Response Filter to do its job. Basically, response filters look at the Response output stream as it's written and convert the data flowing through it. Filters are a bit tricky to work with, but the two .NET filter streams for GZip and Deflate compression make this a snap to implement. In my old code, and even now in MVC, I can always do:

public ActionResult List(string keyword = null, int category = 0)
{
    WebUtils.GZipEncodePage();
    …
}

to encode my content. And that works just fine.

The proper way: Create an ActionFilterAttribute
However, in MVC this sort of thing is typically better handled by an ActionFilter which can be applied with an attribute. So to be all prim and proper I created a CompressContentAttribute ActionFilter that incorporates those two helper methods and which looks like this:

/// <summary>
/// Attribute that can be added to controller methods to force content
/// to be GZip encoded if the client supports it
/// </summary>
public class CompressContentAttribute : ActionFilterAttribute
{
    /// <summary>
    /// Override to compress the content that is generated by
    /// an action method.
    /// </summary>
    /// <param name="filterContext"></param>
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        GZipEncodePage();
    }

    /// <summary>
    /// Determines if GZip is supported
    /// </summary>
    /// <returns></returns>
    public static bool IsGZipSupported()
    {
        string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
        if (!string.IsNullOrEmpty(AcceptEncoding) &&
            (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate")))
            return true;
        return false;
    }

    /// <summary>
    /// Sets up the current page or handler to use GZip through a Response.Filter
    /// IMPORTANT: You have to call this method before any output is generated!
    /// </summary>
    public static void GZipEncodePage()
    {
        HttpResponse Response = HttpContext.Current.Response;
        if (IsGZipSupported())
        {
            string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
            if (AcceptEncoding.Contains("gzip"))
            {
                Response.Filter = new System.IO.Compression.GZipStream(Response.Filter,
                    System.IO.Compression.CompressionMode.Compress);
                Response.Headers.Remove("Content-Encoding");
                Response.AppendHeader("Content-Encoding", "gzip");
            }
            else
            {
                Response.Filter = new System.IO.Compression.DeflateStream(Response.Filter,
                    System.IO.Compression.CompressionMode.Compress);
                Response.Headers.Remove("Content-Encoding");
                Response.AppendHeader("Content-Encoding", "deflate");
            }
        }

        // Allow proxy servers to cache encoded and unencoded versions separately
        Response.AppendHeader("Vary", "Content-Encoding");
    }
}

It's basically the same code wrapped into an ActionFilter attribute, which intercepts MVC requests to controller methods and lets you hook up logic before and after the methods have executed. Here I want to override OnActionExecuting(), which fires before the controller action is fired. With the CompressContentAttribute created, it can now be applied either to the controller as a whole:

[CompressContent]
public class ClassifiedsController : ClassifiedsBaseController
{
    …
}

or to one of the action methods:

[CompressContent]
public ActionResult List(string keyword = null, int category = 0)
{
    …
}

The former applies compression to every action method, while the latter is selective and only applies it to the individual action method. Is the attribute better than the static utility function? Not really, but it is the standard MVC way to hook up 'filter' content and that's where others are likely to expect to set options like this. In fact, you have a bit more control with the utility function because you can conditionally apply it in code, but this is actually much less likely in MVC applications than in old WebForms apps, since controller methods tend to be more focused.

Compression Caveats
Http compression is very cool and pretty easy to implement in ASP.NET, but you have to be careful with it - especially if your content might get transformed or redirected inside of ASP.NET. A good example is if an error occurs and a compression filter is applied: ASP.NET errors don't clear the filter, but they do clear the Response headers, which results in some nasty garbage because the compressed content no longer matches the headers. Another issue is caching, which has to account for all possible ways of compression and non-compression that the content is served. Basically, compressed content and caching don't mix well. I wrote about several of these issues in an old blog post and I recommend you take a quick peek before diving into making every bit of output Gzip encoded.
None of these are show stoppers, but you have to be aware of the issues.

Related Posts
GZip Compression with ASP.NET Content
ASP.NET GZip Encoding Caveats

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in ASP.NET, MVC
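One closing usage note on the attribute above – a hedged sketch, not something from the original post: since this is ASP.NET MVC 3, the same attribute could also be registered as a global filter in global.asax, compressing every action without attributing each controller (the caching caveats above apply all the more in that case):

protected void Application_Start()
{
    // ... routes, areas, and other startup registration ...

    // Compress the output of every controller action
    GlobalFilters.Filters.Add(new CompressContentAttribute());
}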

    Read the article

  • Adding Client Validation To DataAnnotations DataType Attribute

    - by srkirkland
    The System.ComponentModel.DataAnnotations namespace contains a validation attribute called DataTypeAttribute, which takes an enum specifying what data type the given property conforms to.  Here are a few quick examples:

public class DataTypeEntity
{
    [DataType(DataType.Date)]
    public DateTime DateTime { get; set; }

    [DataType(DataType.EmailAddress)]
    public string EmailAddress { get; set; }
}

This attribute comes in handy when using ASP.NET MVC, because the type you specify will determine what "template" MVC uses.  Thus, for the DateTime property, if you create a partial in Views/[loc]/EditorTemplates/Date.ascx (or cshtml for razor), that view will be used to render the property when using any of the Html.EditorFor() methods. One thing that the DataType() validation attribute does not do is any actual validation.  To see this, let's take a look at the EmailAddress property above.  It turns out that regardless of the value you provide, the entity will be considered valid:

//valid
new DataTypeEntity { EmailAddress = "Foo" };

Hmmm.  Since DataType() doesn't validate, that leaves us with two options: (1) create our own attributes for each data type to validate, like [Date], or (2) add validation into the DataType attribute directly.  In this post, I will show you how to hook up client-side validation to the existing DataType() attribute for a desired type.  From there, adding server-side validation would be a breeze, and even writing a custom validation attribute would be simple (more on that in future posts).

Validation All The Way Down
Our goal will be to leave our DataTypeEntity class (from above) untouched, requiring no reference to System.Web.Mvc.  Then we will make an ASP.NET MVC project that allows us to create a new DataTypeEntity and hook up automatic client-side date validation using the suggested "out-of-the-box" jquery.validate bits that are included with ASP.NET MVC 3.  For simplicity I'm going to focus only on the DateTime field, but the concept is generally the same for any other DataType.

Building a DataTypeAttribute Adapter
To start, we will need to build a new validation adapter that we can register using ASP.NET MVC's DataAnnotationsModelValidatorProvider.RegisterAdapter() method.
This method takes two Type parameters: the first is the attribute we are looking to validate with, and the second is an adapter that should subclass System.Web.Mvc.ModelValidator. Since we are extending DataAnnotations, we can use the subclass of ModelValidator called DataAnnotationsModelValidator<>.  This takes a generic argument of type DataAnnotations.ValidationAttribute, which lucky for us means the DataTypeAttribute will fit in nicely. So starting from there and implementing the required constructor, we get:

public class DataTypeAttributeAdapter : DataAnnotationsModelValidator<DataTypeAttribute>
{
    public DataTypeAttributeAdapter(ModelMetadata metadata, ControllerContext context, DataTypeAttribute attribute)
        : base(metadata, context, attribute) { }
}

Now you have a full-fledged validation adapter, although it doesn't do anything yet.  There are two methods you can override to add functionality: IEnumerable<ModelValidationResult> Validate(object container) and IEnumerable<ModelClientValidationRule> GetClientValidationRules().  Adding logic to the server-side Validate() method is pretty straightforward (see the sketch after this section), and for this post I'm going to focus on GetClientValidationRules().

Adding a Client Validation Rule
Adding client validation is now incredibly easy because jquery.validate is very powerful and already comes with a ton of validators (including date and regular expressions for our email example).  Teamed with the new unobtrusive validation javascript support, we can make short work of our ModelClientValidationDateRule:

public class ModelClientValidationDateRule : ModelClientValidationRule
{
    public ModelClientValidationDateRule(string errorMessage)
    {
        ErrorMessage = errorMessage;
        ValidationType = "date";
    }
}

If your validation has additional parameters, you can use the ValidationParameters IDictionary<string, object> to include them.  There is a little bit of conventions magic going on here, but the distilled version is that we are defining a "date" validation type, which will be included as html5 data-* attributes (specifically data-val-date).  Then jquery.validate.unobtrusive takes this attribute and basically passes it along to jquery.validate, which knows how to handle date validation.
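As an aside, and as a hedged sketch rather than part of the original post: the server-side half set aside above could look roughly like this on the same adapter. The string check is an assumption for string-bound values; for the strongly-typed DateTime property in our model, model binding itself already rejects bad input:

public override IEnumerable<ModelValidationResult> Validate(object container)
{
    // Only handle the Date case here; everything else falls through untouched
    if (Attribute.DataType == DataType.Date && Metadata.Model is string)
    {
        DateTime parsed;
        if (!DateTime.TryParse((string)Metadata.Model, out parsed))
        {
            yield return new ModelValidationResult
            {
                Message = Attribute.FormatErrorMessage(Metadata.GetDisplayName())
            };
            yield break;
        }
    }

    // Defer to the default DataAnnotations behavior for anything else
    foreach (var result in base.Validate(container))
        yield return result;
}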
Finishing our DataTypeAttribute Adapter
Now that we have a model client validation rule, we can return it in the GetClientValidationRules() method of our DataTypeAttributeAdapter created above.  Basically I want to say: if DataType.Date was provided, then return the date rule with a given error message (using ValidationAttribute.FormatErrorMessage()).  The entire adapter is below:

public class DataTypeAttributeAdapter : DataAnnotationsModelValidator<DataTypeAttribute>
{
    public DataTypeAttributeAdapter(ModelMetadata metadata, ControllerContext context, DataTypeAttribute attribute)
        : base(metadata, context, attribute) { }

    public override System.Collections.Generic.IEnumerable<ModelClientValidationRule> GetClientValidationRules()
    {
        if (Attribute.DataType == DataType.Date)
        {
            return new[] { new ModelClientValidationDateRule(Attribute.FormatErrorMessage(Metadata.GetDisplayName())) };
        }

        return base.GetClientValidationRules();
    }
}

Putting it all together
Now that we have an adapter for the DataTypeAttribute, we just need to tell ASP.NET MVC to use it.  The easiest way to do this is to use the built-in DataAnnotationsModelValidatorProvider by calling RegisterAdapter() in your global.asax startup method:

DataAnnotationsModelValidatorProvider.RegisterAdapter(typeof(DataTypeAttribute), typeof(DataTypeAttributeAdapter));

Show and Tell
Let's see this in action using a clean ASP.NET MVC 3 project.  First make sure to reference the jquery, jquery.validate and jquery.validate.unobtrusive scripts that you will need for client validation. Next, let's make a model class (note we are using the same built-in DataType() attribute that comes with System.ComponentModel.DataAnnotations).
public class DataTypeEntity
{
    [DataType(DataType.Date, ErrorMessage = "Please enter a valid date (ex: 2/14/2011)")]
    public DateTime DateTime { get; set; }
}

Then we make a create page with a strongly-typed DataTypeEntity model; the form section is shown below (notice we are just using EditorForModel):

@using (Html.BeginForm())
{
    @Html.ValidationSummary(true)
    <fieldset>
        <legend>Fields</legend>

        @Html.EditorForModel()

        <p>
            <input type="submit" value="Create" />
        </p>
    </fieldset>
}

The final step is to register the adapter in our global.asax file:

DataAnnotationsModelValidatorProvider.RegisterAdapter(typeof(DataTypeAttribute), typeof(DataTypeAttributeAdapter));

Now we are ready to run the page. Looking at the datetime field's html, we see that our adapter added some data-* validation attributes:

<input type="text" value="1/1/0001" name="DateTime" id="DateTime"
       data-val-required="The DateTime field is required."
       data-val-date="Please enter a valid date (ex: 2/14/2011)"
       data-val="true" class="text-box single-line valid">

Here data-val-required was added automatically because DateTime is non-nullable, and data-val-date was added by our validation adapter.  Now if we try to enter an invalid date, our custom error message is displayed via client-side validation as soon as we tab out of the box.  If we didn't include a custom validation message, the default DataTypeAttribute "The field {0} is invalid" would have been shown (of course we can change the default as well).  Note we did not specify server-side validation, but in this case we don't have to because an invalid date will cause a server-side error during model binding.
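Since the post opened with an EmailAddress example but only wires up Date, here is a hedged sketch of how the same GetClientValidationRules() override could branch for DataType.EmailAddress using MVC's built-in ModelClientValidationRegexRule (which maps to the "regex" adapter in jquery.validate.unobtrusive). The pattern below is illustrative, not a complete email grammar:

public override IEnumerable<ModelClientValidationRule> GetClientValidationRules()
{
    string errorMessage = Attribute.FormatErrorMessage(Metadata.GetDisplayName());

    if (Attribute.DataType == DataType.Date)
    {
        return new ModelClientValidationRule[] { new ModelClientValidationDateRule(errorMessage) };
    }

    if (Attribute.DataType == DataType.EmailAddress)
    {
        // Emits data-val-regex / data-val-regex-pattern attributes on the input
        return new ModelClientValidationRule[]
        {
            new ModelClientValidationRegexRule(errorMessage, @"^[^@\s]+@[^@\s]+\.[^@\s]+$")
        };
    }

    return base.GetClientValidationRules();
}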
Conclusion
I really like how easy it is to register new data annotations model validators, whether they are your own or, as in this post, supplements to existing validation attributes.  I'm still debating whether adding the validation directly in the DataType attribute is the correct place to put it versus creating a dedicated "Date" validation attribute, but it's nice to know either option is available and, as we've seen, simple to implement. I'm also working through the nascent stages of an open source project that will create validation attribute extensions to the existing data annotations providers using similar techniques as seen above (examples: Email, Url, EqualTo, Min, Max, CreditCard, etc.).  Keep an eye on this blog and subscribe to my twitter feed (@srkirkland) if you are interested in announcements.

    Read the article

  • How Expedia Made My New Bride Cry

    - by Lance Robinson
    Tweet this? Email Expedia and ask them to give me and my new wife our honeymoon? When Expedia followed up their failure with our honeymoon trip with a complete and total lack of acknowledgement of any responsibility for the problem, and endless loops of explaining the issue over and over again, I swore that they would make it right. When they brought my new bride to tears, I got an immediate and endless supply of motivation. I hope you will help me make them make it right by posting our story on Twitter, Facebook, your blog, on Expedia itself, and when talking to your friends in person about their own travel plans. If you are considering using them now for an important trip - reconsider.

Short summary: We arrived early for a flight - but Expedia had made a mistake with the data they supplied to JetBlue and Emirates, which resulted in us not being able to check in (one leg of our trip was missing)!  At the time of this post, three people (myself, my wife, and an exceptionally patient JetBlue employee named Mary) each spent hours on the phone with Expedia.  I myself spent right at 3 hours (according to iPhone records), Lauren spent an hour and a half or so, and poor Mary was probably on the phone for a good 3.5 hours.  This is after 5 hours total at the airport.  If you add up our phone time, that is nearly 8 hours of phone time over a 5 hour period with little or no help, stall tactics (?), run-around, denial, shifting of blame, and holding. Details below (times are approximate):

First, my wife and I were married yesterday - June 18th, the 3 year anniversary of our first date. She is awesome. She is the nicest person I have ever known, a ton of fun, absolutely beautiful in every way. Ok, enough mushy - here are the dirty details.

2:30 AM - Early Check-in Attempt - we attempted to check in for our flight online. Some sort of technology error on the website; we were instructed to check in at the desk.

4:30 AM - Arrive at airport. Try to check in at a kiosk, get the same error. We got to the JetBlue desk at RDU International Airport, where Mary helped us. Mary discovered that the Expedia-provided itinerary does not match the Expedia-provided tickets. We were informed that when that happens, American, JetBlue, and others that use the same software cannot check you in for the flight. Why? Because the itinerary was missing a leg of our flight! Basically, we were not shown in the system as definitely being able to make it home. Mary called Expedia and was put on hold by their automated system.

4:55 AM - Mary, myself, and my brand new bride all waited for about 25 minutes, when finally I decided I would make a call myself on my iPhone while Mary was on the airport phone. In their automated system, I chose "make a new reservation", thinking they might answer a little more quickly than "customer service". Not surprisingly, I was connected to an Expedia person within 1 minute. They informed me that they would have to forward me to a customer service specialist. I explained to them that we were already on hold for that and had been for nearly half an hour, that we were going on our honeymoon, and that our flight would be leaving soon - could they please help us. "Yes, I will help you". I hand the phone to JetBlue Mary, who explains the situation 3 or 4 times. Obviously I couldn't hear both ends of the conversation at this point, but the Expedia person explained what the problem was by stating exactly what Mary had just spent 15 minutes explaining.
Mary calmly confirms that this is the problem, and asks Expedia to re-issue the itinerary. Expedia tells Mary that they'll have to transfer her to customer service. Mary asks for someone specific so that we get an answer this time, and goes on hold. Mary gets connected, explains the situation, and then Mary's connection gets terminated.

5:10 AM - Mary calls back to the Expedia automated system again, and we wait for about 5 minutes on hold this time before I pick up my iPhone and call Expedia again myself. Again I go to sales; a person picks up the phone in less than a minute. I explain the situation and let them know that we are now very close to missing our flight for our honeymoon - could they please help us. "Yes, I will help you". Again I give the phone to Mary, who provides them with a callback number in case we get disconnected again and explains the situation again. More back and forth, with Expedia doing nothing but repeating the same questions, Mary answering the questions with the same information she provided in the original explanation, and Expedia simply restating the problem. Mary again asks them to re-issue the itinerary, and explains that doing so will fix the problem. Expedia again repeats the problem instead of fixing it, and Mary's connection gets terminated.

5:20 AM - Mary again calls back to Expedia. My beautiful bride also calls on her own phone. At this point she is struggling to hold back her tears, stumbling through an explanation of all that has happened and that we are about to miss our flight. Please help us. "Yes, I will help". My beautiful bride's connection gets terminated. Ok, maybe this disconnection isn't an accident. We've now been disconnected 3 times on two different phones.

5:45 AM - I walk away and pleadingly beg a person to help me. They "escalate" the issue to "Rosy" (sp?) at Expedia. I go through the whole song and dance again with Rosy, who gives me the same treatment Mary was given. Rosy blames JetBlue for not having the correct data. Meanwhile, Mary is on the phone with Emirates Air (the airline for the second leg of our trip), who agrees with JetBlue that Expedia's data isn't up to date. We are informed by two airport employees that issues like this with Expedia are not uncommon, and that the fix is simple. On the phone with Rosy, I ask her to re-issue the itinerary because we are about to miss our flight. She again explains the problem to me. At this point, I am standing at the window, pleading with Rosy to help us get to our honeymoon, watching our airplane. Then our airplane leaves without us.

6:03 AM - At this point we have missed our flight. Re-issuing the itinerary is no longer a solution. I ask Rosy to start from the beginning and work us up a new trip. She says that she cannot do that. She says that she needs to talk to JetBlue and Emirates and find out why we cannot check in for our flight. I remind Rosy that our flight has already left - I just watched it taxi away - so it no longer matters why (not to mention the fact that we already knew why, and had known the solution, since 4:30 AM). Rosy, can you please book a new trip? Yes, but it will cost $400. Excuse me? Now you can, but it will cost ME to fix your mistake? Rosy says that she can escalate the situation to her supervisor, but that will take 1.5 hours.
6:15 AM - I told Rosy that if they had re-issued the itinerary as JetBlue asked (at 4:30 AM), my new wife and I might be on the airplane now instead of dealing with this on the phone and missing the beginning (and how much more?) of our honeymoon. Rosy said that it was not necessary to re-issue the itinerary. Out of curiosity, I asked Rosy if there was some financial burden on them to re-issue the itinerary. "No", said Rosy. I asked her if it was a large time burden on Expedia to re-issue the itinerary. "No", said Rosy. I directly asked Rosy: Why wouldn't Expedia have re-issued the itinerary when JetBlue asked? No answer. I asked Rosy: If you had re-issued the itinerary at 4:30, isn't it possible that I would be on that flight right now? She actually surprised me by answering "Yes" to that question. So I pointed out that it followed that Expedia was responsible for the fact that we missed our flight, and she immediately went into more about how the problem was with JetBlue - except that now it was an Emirates Air problem as well. I tell Rosy to go ahead and escalate the issue again, and to please call me back in that 1.5 hours (which is now about 1 hour and 10 minutes away).

6:30 AM - I start tweeting my frustration with my iPhone. It's now pretty much impossible for us to make it to The Maldives by 3pm, which is the time at which we would need to arrive in order to be allowed service to the actual island where we are staying. Expedia has now given me the run-around for 2 hours, caused me to miss my flight, and worst of all caused my amazing new wife Lauren to miss our honeymoon. You think I was mad? No. Furious. It's ok to make mistakes - but to refuse to fix them and to ruin our honeymoon? No, not ok, Expedia. I swore right then that Expedia would make this right.

7:45 AM - JetBlue Mary is still talking her tail off to other people in JetBlue and Emirates Air. Mary works it out so that if Expedia simply books a new trip, JetBlue and Emirates will both waive all the fees. Now we just have to convince Expedia to fix their mistake and get us on our way! Around this time, Expedia Rosy calls me back! I inform her of the excellent work of JetBlue Mary - that JetBlue and Emirates will both waive the fees so Expedia can fix their mistake and get us going on our way. She says that she sees documentation of this in her system and that she needs to put me on hold "for 1 to 10 minutes" to talk to Emirates Air (why, I'm not exactly sure). I say ok.

8:45 AM - After an hour on hold, Rosy comes on the line and asks me to hold more. I ask her to call me back.

9:35 AM - I put down the iPhone Twitter app and pick up the laptop. You think I made some noise with my iPhone? Heh.

11:25 AM - Expedia follows me and sends a canned "We're sorry, DM us the details". If you look at their Twitter feed, 16 out of the most recent 20 tweets are exactly the same canned response. The other 4? Ads. Um - #MultiFAIL?

To Expedia: You have now had (as explained above) 8 hours of 3 different people explaining our situation. You know the email address of our Expedia account, you know my web blog, you know my Twitter address, and you know my phone number. You also know how upset you have made both me and my new bride by treating us in such a non-caring, scripted, uncooperative, argumentative, and possibly even deceitful manner. In the wise words of the great Kenan Thompson of SNL: "FIX IT!". And no, I'm NOT going away until you make this right. Period.

11:45 AM - Expedia corporate office called.
The woman I spoke to was very nice and apologetic. She listened to me tell the story again, said she understands the problem, and said she is going to work to resolve it. I don't have any details on what exactly that resolution might be; she said she would call me back in 20 minutes. She found out about the problem via Twitter. Thank you, Twitter, and all of you who helped. Hopefully social media will win my wife and me our honeymoon, and hopefully Expedia will encourage their customer service teams to treat their customers properly.

12:22 PM - Spoke to Fran again from the Expedia corporate office. She has a flight for us tonight. She is booking it now. We will arrive at our honeymoon destination of beautiful Veligandu Island Resort only 1 day late. She cannot confirm today, but she expects that Expedia will pay for the lost honeymoon night. Thank you everyone for your help. I will reflect more on this whole situation and confirm its resolution after our flight is 100% confirmed. For now, I'm going to take a breather and go kiss my wonderful wife!

1:50 PM - Have not yet received the promised phone call. We did receive an email with a new itinerary for a flight, but the booking is not for specific seats, so there is no guarantee that my wife and I will be able to sit together. With the original booking, I carefully selected our seats for every segment of our trip. I decided to call the phone number that Fran from the Expedia corporate office gave me. Its automated voice system identified itself as "Tier 3 Support". I am currently still on hold with them; I have not gotten through to a human yet.

1:55 PM - Fran from Expedia called me back. She confirmed us as booked. She called the airlines to confirm. Unfortunately, Expedia was unwilling or unable to allow us any type of seat selection. It is possible that I won't get to sit next to the woman I married less than a day ago on our 40 total hours of flight time (there and back). In addition, our seats could be the worst seats on the planes, with no reclining seat back or right next to the restroom. Despite this fact (which in my opinion is huge), the horrible inconvenience, the hours at the airport, and the negative Internet publicity that Expedia is receiving, Expedia declined to offer us any kind of upgrade or to mark us as SFU (suitable for upgrade). Since they didn't offer, I asked - and was rejected. I am grateful to finally be heading in the right direction, but not only did Expedia horribly botch this job from the very beginning, they followed that botch job with near-zero customer service and a verbally apologetic but otherwise half-hearted resolution. If this works out favorably for us, great. If not - I'm not done making noise, Expedia. You owe us, and I expect you to make it right. You haven't quite done that yet.

Thanks - Thank you to Twitter. Thanks to all those who sympathized with us and helped us get the attention of Expedia, since three people (one of them an airline employee) using Expedia's normal channels of communication for many hours didn't help. Thanks especially to my PowerShell and SharePoint friends, my local friends, and those connectors who encouraged me and spread my story.

5:15 PM - Love Wins. After all this, Lauren and I are exhausted. We both took a short nap, and when we woke up we talked about the last 24 hours. It was a big, amazing, story-filled 24 hours. I said that Expedia won, but Lauren said no. She pointed out how lucky we are. We are in love and married.
We have wonderful family and friends. We are both hard-working, successful people who love what we do. We get to go to an amazing, exotic destination for our honeymoon - Veligandu in The Maldives... That's a lot of good. Expedia didn't win. This was (is) a big loss for Expedia. It is a public blemish for all to see. But Lauren and I did win, big time. Expedia may not have made things right - but things are right for us.

Post in progress... I will relay any further comments (or lack thereof) from Expedia soon, as well as an update confirming their repayment of our lost resort room rates. I'll also post a picture of us on our honeymoon as soon as I can!

    Read the article

  • ASP.NET MVC 3: Implicit and Explicit code nuggets with Razor

    - by ScottGu
This is another in a series of posts I'm doing that cover some of the new ASP.NET MVC 3 features:

- New @model keyword in Razor (Oct 19th)
- Layouts with Razor (Oct 22nd)
- Server-Side Comments with Razor (Nov 12th)
- Razor's @: and <text> syntax (Dec 15th)
- Implicit and Explicit code nuggets with Razor (today)

In today's post I'm going to discuss how Razor enables you to both implicitly and explicitly define code nuggets within your view templates, and walk through some code examples of each of them.

Fluid Coding with Razor

ASP.NET MVC 3 ships with a new view-engine option called "Razor" (in addition to the existing .aspx view engine). You can learn more about Razor, why we are introducing it, and the syntax it supports from my Introducing Razor blog post.

Razor minimizes the number of characters and keystrokes required when writing a view template, and enables a fast, fluid coding workflow. Unlike most template syntaxes, you do not need to interrupt your coding to explicitly denote the start and end of server blocks within your HTML. The Razor parser is smart enough to infer this from your code. This enables a compact and expressive syntax which is clean, fast and fun to type.

For example, a Razor snippet like the one sketched below can iterate a collection of products and output a <ul> list of product names that link to their corresponding product pages. Notice how we are able to embed two code nuggets within the content of the foreach loop: one outputs the name of the Product, and the other embeds the ProductID within a hyperlink. We don't have to explicitly wrap these code nuggets - Razor is smart enough to implicitly identify where the code begins and ends in both situations.

How Razor Enables Implicit Code Nuggets

Razor does not define its own language. Instead, the code you write within Razor code nuggets is standard C# or VB. This allows you to re-use your existing language skills, and avoid having to learn a customized language grammar.

The Razor parser has smarts built into it so that whenever possible you do not need to explicitly mark the end of the C#/VB code nuggets you write. This makes coding more fluid and productive, and enables a nice, clean, concise template syntax. Below are a few scenarios where you can avoid having to explicitly mark the beginning/end of a code nugget, and instead have Razor implicitly identify the code nugget scope for you (each is demonstrated in the sketch after this list):

- Property Access: Razor allows you to output a variable value, or a sub-property on a variable that is referenced via "dot" notation. You can also use "dot" notation to access sub-properties multiple levels deep.
- Array/Collection Indexing: Razor allows you to index into collections or arrays.
- Calling Methods: Razor also allows you to invoke methods.

Notice how in all of these scenarios we do not have to explicitly end the code nugget - Razor is able to implicitly identify the end of the code block for us.
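As a minimal sketch of these implicit-nugget scenarios (the Model shape - a Products collection whose items expose Name, ProductID, and a Category property - is assumed here for illustration, not taken from the original code):

    @* Iterating a collection; two implicit code nuggets inside the foreach body *@
    <ul>
        @foreach (var product in Model.Products) {
            <li><a href="/Products/@product.ProductID">@product.Name</a></li>
        }
    </ul>

    @* Property access via "dot" notation, including multiple levels deep *@
    <p>Featured: @Model.FeaturedProduct.Name (@Model.FeaturedProduct.Category.Name)</p>

    @* Array/collection indexing *@
    <p>First product: @Model.Products[0].Name</p>

    @* Calling methods *@
    <p>@Model.FeaturedProduct.Name.ToUpper()</p>

In each case the surrounding markup is what ends the nugget: the closing </li> and </p> tags, and the space before the opening parenthesis, all terminate a code nugget without any explicit delimiter.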
Razor's Parsing Algorithm for Code Nuggets

The algorithm below captures the core parsing logic we use to support "@" expressions within Razor, and to enable the implicit code nugget scenarios above:

1. Parse an identifier - as soon as we see a character that isn't valid in a C# or VB identifier, we stop and move to step 2.
2. Check for brackets - if we see "(" or "[", go to step 2.1; otherwise, go to step 3.
   2.1. Parse until the matching ")" or "]" (we track nested "()" and "[]" pairs and ignore "()[]" we see in strings or comments).
   2.2. Go back to step 2.
3. Check for a "." - if we see one, go to step 3.1; otherwise, go to step 4.
   3.1. If the character AFTER the "." is a valid identifier, accept the "." and go back to step 1; otherwise, do NOT accept the "." as code, and go to step 4.
4. Done!

Differentiating between code and content

Step 3.1 is a particularly interesting part of the above algorithm, and enables Razor to differentiate between scenarios where an identifier is being used as part of the code statement, and when it should instead be treated as static content. Consider markup like "Hello @user.Name! Ready to travel, @user.Name?" where code nuggets end with ? and ! characters. Both characters can legally appear within C# expressions - but Razor is able to identify that here they should be treated as static string content, as opposed to being part of the code expression, because there is whitespace after them. This is pretty cool and saves us keystrokes.

Explicit Code Nuggets in Razor

Razor is smart enough to implicitly identify a lot of code nugget scenarios. But there are still times when you want/need to be more explicit in how you scope the code nugget expression. The @(expression) syntax allows you to do this. You can write any C#/VB code statement you want within the @() syntax; Razor will treat the wrapping () characters as the explicit scope of the code nugget statement. Below are a few scenarios where we could use the explicit code nugget feature (each appears in the sketch after this section):

- Perform Arithmetic Calculation/Modification: you can perform arithmetic calculations within an explicit code nugget.
- Appending Text to a Code Expression Result: you can use the explicit expression syntax to append static text at the end of a code nugget without having to worry about it being incorrectly parsed as code. For example, we can embed a code nugget within an <img> element's src attribute and link to images with URLs like "/Images/Beverages.jpg". Without the explicit parentheses, Razor would look for a ".jpg" property on the CategoryName (and raise an error). By being explicit we can clearly denote where the code ends and the text begins.
- Using Generics and Lambdas: explicit expressions also allow us to use generic types and generic methods within code expressions - and keep the <> characters in generics from being ambiguous with tag elements.

One More Thing... Intellisense within Attributes

We have used code nuggets within HTML attributes in several of the examples in this post. One nice feature supported by the Razor code editor within Visual Studio is the ability to still get VB/C# intellisense when doing this - whether the code nugget is implicit (for example, within an <a> href="" attribute) or explicit (for example, embedded in the middle of an <img> src="" attribute). You get full code intellisense for both scenarios - despite the fact that the code expression is embedded within an HTML attribute (something the existing .aspx code editor doesn't support).
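A matching sketch of the explicit @(expression) scenarios - the item variable and its UnitPrice, CategoryName, and Tags properties are assumed names for illustration, building on the Model sketched earlier:

    @{ var item = Model.Products[0]; } @* assumed sample item from the Model sketched earlier *@

    @* Arithmetic calculation/modification inside an explicit nugget *@
    <p>Price with tax: @(item.UnitPrice * 1.08)</p>

    @* Appending static text to a code expression result: without the (),
       Razor would look for a ".jpg" property on CategoryName and raise an error *@
    <img src="/Images/@(item.CategoryName).jpg" alt="@item.CategoryName" />

    @* Generic types: the wrapping () keeps <string> from being read as a tag element *@
    <p>Tag count: @(new List<string>(item.Tags).Count)</p>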
Having intellisense inside attributes makes writing code even easier, and ensures that you can take advantage of it everywhere.

Summary

Razor enables a clean and concise templating syntax that supports a very fluid coding workflow. Razor's ability to implicitly scope code nuggets reduces the amount of typing you need to perform, and leaves you with really clean code. When necessary, you can also explicitly scope code expressions using the @(expression) syntax to provide greater clarity around your intent, as well as to disambiguate code statements from static markup.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

< Previous Page | 107 108 109 110 111 112 113 114 115 116 117 118  | Next Page >