Search Results

Search found 13293 results on 532 pages for 'small ticket'.


  • Methods and practices for managing a network that has no internet connection

    - by FaultyJuggler
    Originally asked on Super User, but I realized this belongs here. Long story short, I am setting up a network with 32 servers of varying specs that will be used for testing and development. We will be using Red Hat Linux. We do not have a router yet and are looking into making one of the servers act as our router/DHCP server, etc. The small cluster will be on an isolated network with no internet. I can use external hard drives and discs to transfer anything from external sources onto machines on the network, so this isn't a locked-down secure network; it just won't have a direct connection to the outside world. I've worked on such setups before, but always long after they were set up, so I'm reaching out to see what everyone knows about how groups have handled the initial setup and maintenance of such a situation. What is the best way to get them all configured and up to date? What are the best ways to automate updates, network-wide installs, etc.? Given that I have large multi-terabyte external hard drives that would be used to drop whatever files are needed onto a central server, how do I then distribute those files and install their contents? I've done Perl scripting and some teammates have played with Puppet, so we aren't completely in the dark; I just wanted to avoid reinventing the wheel since this is a common challenge.
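
    One common approach (offered only as a sketch, with hypothetical paths and hostnames) is to copy the packages from the external drive onto the central server, publish them as a local yum repository, and point every node at it:

        # on the central server, assuming createrepo and a web server are installed
        cp -r /mnt/external/rpms/* /var/www/html/localrepo/
        createrepo /var/www/html/localrepo/

        # /etc/yum.repos.d/local.repo on each node ("central-server" is a placeholder)
        [localrepo]
        name=Local package mirror
        baseurl=http://central-server/localrepo/
        enabled=1
        gpgcheck=0

        # then, on each node
        yum clean all && yum update -y

    Pushing that .repo file out and running the update on all 32 machines is exactly the kind of job Puppet, or even a for loop over ssh, handles well.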

    Read the article

  • Display on secondary video card (Nvidia 8400 GS): horrible refresh, bogs system

    - by minameismud
    This is my work computer, but it's a small shop. We do business software development. The most hardcore thing we create is some web animations with html5 and fancy javascript/css. The base machine is a Dell Precision T3500 - Xeon W3550 (3.07GHz quad), 6GB ram, pair of 500GB harddrives, and Win 7 x64 Enterprise SP1. My primary video card is an ATI FirePro V4800 1GB in a PCIe slot of some speed driving a pair of 23s at 1920x1080 through DisplayPort-HDMI adapters. The secondary card is an NVidia GeForce 8400GS in a PCI slot driving a single 17" at 1280x1024 through DVI. On either of the 23" monitors, windows move smoothly, scroll quickly, and are generally very responsive. On the 17", it's slow, chunky, and when I'm trying to scroll a ton of content, Windows will occasionally suggest I drop to the Windows Basic theme. I've updated drivers for both cards, and I've gotten every Windows update relating to video. Specifically: ATI FirePro Provider: Advanced Micro Devices, Inc Date: 6/22/2014 Version: 13.352.1014.0 NVidia 8400 GS Provider: NVIDIA Date: 7/2/2014 Version: 9.18.13.4052 Unfortunately, new hardware isn't really an option. Is there anything I can do software-wise to speed up the NVidia-driven monitor?

    Read the article

  • Map Subdomains to Folders Owned/Run by Other Apache/PHP/Cpanel Users

    - by kristofferR
    I run a small service for Norwegian customers where they get automatically installed and configured Wordpress blogs on their own domains, ready immediately after payment is finished. It's quite similar to Page.ly and WPEngine, just aimed at Norwegian customers with Norwegian Wordpress, support, billing, etc. The backend is WHM/CPanel (Apache, PHP, MySQL), with a script running immediately after payment that installs and configures Wordpress and sends the customer an email with their username and password. Newly registered domains take some time to propagate, though, so for a day or two my customers unfortunately have to use a temporary URL before I can switch them over to their own domains. Right now my system uses mod_userdir ("serverIP/~cpanelusername"). However, it's not an optimal solution. It looks unprofessional, is confusing, and is quite problematic for both my customers and me. I'd prefer the temporary URL for their blogs to be "theirdomainwithoutextension.host.no", with "host.no" being a domain I own, served from the same server as the customer sites. I can easily modify the script to create the subdomains on my "host.no" domain, but how can I seamlessly map the subdomains to folders owned and run by different CPanel/Apache/PHP users?
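
    One possible direction (only a sketch; it assumes the subdomain label matches the CPanel username and that mod_vhost_alias is available) is a wildcard virtual host that maps the first hostname label onto the user's document root:

        <VirtualHost *:80>
            ServerAlias *.host.no
            # %1 is the first dotted part of the requested hostname
            VirtualDocumentRoot /home/%1/public_html
        </VirtualHost>

    This only takes care of the document root; making PHP actually run as each CPanel user still depends on whatever suEXEC/FastCGI setup the server already uses, so treat it as a starting point rather than a drop-in answer.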

    Read the article

  • Duplicate monitor on highest resolution in Windows 7

    - by AlexanderMP
    I have a monitor with a native resolution of 2560x1440, connected through DisplayPort. I also have an AV receiver connected to the video card via HDMI, to have surround sound in games. All of this runs on a Radeon HD 5670 (which I will soon upgrade to an HD 7850). The problem is that my computer detects the receiver as a separate monitor, with a highest available resolution of 1920x1080. I have 3 options: Disconnect the second display. But then the sound (digital audio output through the video card) also disappears. Duplicate displays. But then my primary monitor's resolution is reduced to a maximum of just 1920x1080, that being the maximum of the second monitor. Extend the desktop. This is the solution I have picked so far, it being the least evil. The problems I face in this situation are two: I have a blank part of the desktop where I sometimes lose my mouse pointer, so I made the extension small, 640x480, and placed it in a corner; and when I turn off the main display, all windows resize to 640x480. In Kubuntu I had the option to duplicate the displays while keeping the higher resolution, which was great. I tried the Win7 netbook resolution-override hack, but it's not available on non-netbooks. Is there a similar solution for this problem in Windows 7?

    Read the article

  • Can I remove the original file while running "sort"?

    - by Spaceman
    I'm sorting a huge file, around 400 gigabytes. I'm running out of disk space and I must do something quickly. Let's assume the original file is called original_file. So I execute it (simplified) as "sort original_file | gzip -c > output_file". I use /home/tmp as a temporary dir. From what I see, there are a lot of intermediate files, like so: tmpA465 tmpB154 ... and so on. The smallest ones are 12 megabytes. The largest are ~182 megabytes. So it seems that the "sort" command has already split the original file into small pieces, sorted them, and is now merging them into bigger parts (which will eventually be sorted as well). Please correct me if I'm wrong. Can I remove the original file right now without terminating the sort process? I've been waiting a few days for this, and it's important that the "sort" command does not fail and that I finally get the result file. The OS is Ubuntu Server 13.04, x64. Thanks!
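
    Independent of whether deleting the input is safe (note that removing a file sort still holds open will not free its blocks until sort closes it), GNU sort has two options that attack the disk-space problem directly: -T points the temporary files at another filesystem and --compress-program compresses them. A sketch, with the mount point as a placeholder:

        sort -T /mnt/other_disk --compress-program=gzip original_file | gzip -c > output_file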

    Read the article

  • Where are the default ulimit values set? (Linux, CentOS)

    - by nomercysir
    I have two CentOS (5) servers with nearly identical specs. When I log in and run ulimit -u, on one machine I get unlimited, and on the other, 77824. When I run a cron job like * * * * * ulimit -u > ulimit.txt I get the same results (unlimited, 77824). I am trying to determine where these are set so that I can alter them. They are not set in any of my profiles (.bashrc, /etc/profile, etc. -- these wouldn't affect cron anyway) nor in /etc/security/limits.conf (which is empty). I have scoured Google and even gone so far as to do grep -Ir 77824 /, but nothing has turned up so far. I don't understand how these machines could have come preset with different limits. I am actually wondering not for these machines, but for a different (CentOS 6) machine which has a limit of 1024, which is far too small. I need to run cron jobs with a higher limit, and the only way I know how to set that is in the cron job itself. That's OK, but I'd rather set it system-wide so it's not as hacky. Thanks for any help. This seems like it should be easy (NOT). EDIT -- SOLVED: OK, I figured this out. It seems to be an issue either with CentOS 6 or perhaps my machine configuration. On the CentOS 5 configuration, I can set in /etc/security/limits.conf: * - nproc unlimited and that effectively updates the account and cron limits. However, this does not work on my CentOS 6 box. Instead, I must do: myname1 - nproc unlimited myname2 - nproc unlimited ... and things work as expected. Maybe the UID specification works too, but the wildcard (*) definitely DOES NOT work here. Oddly, wildcards DO work for the 'nofile' limit. I still would love to know where the default values are actually coming from, because by default this file is empty, and I couldn't see why I had different defaults for the two CentOS boxes, which had identical hardware and were from the same provider.
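
    For anyone hitting the same thing: on CentOS 6 it is worth checking /etc/security/limits.d/ as well as limits.conf itself -- the stock 90-nproc.conf file there typically ships with a soft nproc cap, which would explain a default appearing even though limits.conf is empty. The per-user syntax that worked above looks like this (usernames are placeholders):

        # /etc/security/limits.conf, or a file under /etc/security/limits.d/
        myname1    -    nproc    unlimited
        myname2    -    nproc    unlimited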

    Read the article

  • File gone or altered after MySQL[HY000][2002] error [on hold]

    - by Psyberion
    I'm working on a rather small project, and today I got an SQLSTATE[HY000][2002]: Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' error. After a bit of googling and a few attempts to restart the mysqld service, I gave up and tried rebooting the computer. This did the trick; MySQL was now running fine. I did, however, get a more serious issue: some files were missing, others were altered. Also, a few posts in the MySQL database were gone. It's really strange; it's like the whole project has been reset back two or three days, and I have no clue why. Some additional details about this: I save my files after every line of code. I'm religious about this. So I haven't lost the files that way. I was accessing the server via SSH when the error occurred, so I did the programming and the reboot over SSH. The server is a Raspberry Pi, model B, running Raspbian, on which I run Apache2. I was viewing the site and had an active session when I rebooted the system. The pages I lost did work just before this all happened. The MySQL fault occurred when I tried to add a text NOT NULL column to a table which had entries. Now, the amount of lost work isn't really that much, so I'm not really looking for help recovering the files. The reason I'm posting this is that I wonder how this happened, and why.

    Read the article

  • Two video cards on one mobo

    - by InfinityKing
    I recently purchased a new HP Pavilion p6-2265eo. It was only then that I realised it has just one DVI and one HDMI output. I will connect my TV to the HDMI port, so I am left with only one DVI output. I need to have two monitors. Please help. Should I purchase a new video card and install it? My knowledge is limited. The specifications of my computer say that there is 1 x PCI-E x16 and 3 x PCI-E x1. 1) I assume that the video card already present in my purchase is connected to the PCI-E x16 slot. Am I right? I don't want to open my desktop right now and check for myself, as it can void the warranty, so I need an experienced person to tell me that. 2) I have an old NVIDIA GeForce 7200 GT. Is it possible for me to connect it to my leftover PCI-E x1? I searched for PCI-E x1 on the net, and as far as I can understand the slot is too small for my old NVIDIA GeForce 7200 GT graphics card. 3) What are my options? Please help this dummy :) Thanking you in advance.

    Read the article

  • Ubuntu 9.10 won't reboot after replacing a failed drive

    - by user149041
    Hello Server Fault community. I hope someone can shed light on a peculiar problem I am having with an Ubuntu 9.10 server install. I am not a Linux expert but have the responsibility of fixing the box if something goes wrong. DOH! I have Ubuntu 9.10 Server installed on a desktop platform: a Compaq Presario SR5027CL. There are two 1TB SATA drives configured in a RAID 1 array; I use the box as an email backup server for a small group of users. Last week one of the drives failed and was replaced with a new drive of the same type. The problem I have been having is getting the box to reboot after a restart or a shutdown/halt. The OS is on the same drives that make up the RAID 1 array. The replacement drive (sda) was added to the box and the partitions were created to match the existing good drive (sdb). The array is made up of sda1 and sdb1. I found an interesting point while checking the BIOS settings: there is a "HDD Boot Group Priority" section, and the new drive was selected as the "1. 3rd master"; the server wouldn't boot configured like that, but when I set the old drive to be "1. 4th master", the box will reboot. I'm checking some more things, but I would certainly appreciate any useful information. Thanks in advance.
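
    A common cause of this exact symptom (offered only as a guess from the description) is that the replacement drive has no boot loader in its MBR, so the BIOS can only boot when the old drive is first in the boot order. If that is the case here, putting GRUB on both members of the array should let either drive boot (device names assumed; verify with fdisk -l, and check the resync in /proc/mdstat first):

        cat /proc/mdstat
        sudo grub-install /dev/sda
        sudo grub-install /dev/sdb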

    Read the article

  • Deploying Memcached as 32bit or 64bit?

    - by rlotun
    I'm curious about how people deploy memcached on 64 bit machines. Do you compile a 64bit (standard) memcached binary and run that, or do people compile it in 32bit mode and run N instances (where N = machine_RAM / 4GB)? Consider a recommended deployment of Redis (from the Redis FAQ): Redis uses a lot more memory when compiled for 64 bit target, especially if the dataset is composed of many small keys and values. Such a database will, for instance, consume 50 MB of RAM when compiled for the 32 bit target, and 80 MB for 64 bit! That's a big difference. You can run 32 bit Redis binaries in a 64 bit Linux and Mac OS X system without problems. For OS X just use make 32bit. For Linux instead, make sure you have libc6-dev-i386 installed, then use make 32bit if you are using the latest Git version. Instead for Redis <= 1.2.2 you have to edit the Makefile and replace "-arch i386" with "-m32". If your application is already able to perform application-level sharding, it is very advisable to run N instances of Redis 32bit against a big 64 bit Redis box (with more than 4GB of RAM) instead than a single 64 bit instance, as this is much more memory efficient. Would not the same recommendation also apply to a memcached cluster?
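
    If you do go the N-instances route, the mechanics are the same as for Redis: each memcached process just needs its own port. A sketch (the flags are the standard ones -- -d daemonize, -u run-as user, -m memory limit in MB, -p TCP port -- and the numbers are purely illustrative):

        for port in 11211 11212 11213 11214; do
            memcached -d -u memcache -m 3072 -p $port
        done

    The clients then have to hash keys across the four endpoints, which most memcached client libraries already do once they are given the full server list.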

    Read the article

  • BYOD (accessing files) on a domain without joining?

    - by Philip White
    I run a Samba 4 instance at a small private school. This makes a regular Linux server appear as a directory controller. There are two relevant benefits to this: I have a Samba share for people's documents, and I use the Redirected Folders feature to allow any employee to sit down at any PC, log in with their domain credentials, and their My Documents points to network storage. Everyone has a mapped drive (using Group Policy Preferences) to a share specific to their account type. Students can access one share (one share for all students), teachers have another, and office staff have another. However, I would like to allow BYOD (Bring Your Own Device). Some employees are already asking for it with their personal laptops, and I know eventually most everyone will want to. Is there any way to replicate the two features above without having to join PCs to the domain? Joining personal PCs is impractical if only because only professional editions of Windows support this. Ideally, any operating system (including mobile) could access the relevant shares, but of course Windows is key. Offline caching is optional. (I could set up OpenVPN for teachers who want to access their files from home.) The problem with simply giving SSH access to the relevant shares is primarily that Samba 4 relies on ext4 ACLs and ext4 extended attributes to maintain NTFS permissions. Writing files directly to the Linux server would bypass this and would (probably) not be interoperable with Samba4. Right now I am completely flexible. I am even fine with scrapping the whole domain and using some other software for the two features above. How can I allow school employees and students freedom to securely share files without requiring everyone to have specific editions of Windows?
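
    For what it's worth, the mapped-drive half of this can usually be had without joining: a non-domain Windows machine can still authenticate against the Samba shares explicitly, for example (server, share, and account names are placeholders):

        net use S: \\fileserver\staff /user:SCHOOL\jsmith /persistent:yes

    Redirected Folders, on the other hand, is applied through Group Policy and really does assume domain membership, so that part is the harder one to replicate for personal devices.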

    Read the article

  • Multiple IF functions

    - by user2948699
    I have a problem I'm hoping someone could help me with. I have five different text values in 5 cells. I am trying to combine these values into one cell with a comma between each. However, the trick is that if there is no value in (H6), then it must place the word "and" between cells (F6) and (G6). If there is a value in (H6), then it must place the word "and" between (G6) and (H6). In the same statement I must also include: if there is no value in (G6), then it must place the word "and" between cells (E6) and (F6). Please see the image attached. I am trying to get the highlighted statements into one cell, so multiple IF statements in one cell. Anyone? =IF(G8=0,(D8)&", "&(E8)&" and "&(F8),(D8)&", "&(E8)&", "&(F8)&" and "&(G8)=IF(H8=0,(D8)&", "&(E8)&", "&(F8)&" and "&(G8),(D8)&", "&(E8)&", "&(F8)&", "&(G8)&" and "&(H8))) I can't figure out the formula. Many thanks. Alex Edit: The original image can be found here if the size of the inline one is too small.
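
    A hedged suggestion (untested against the actual sheet, and it assumes "no value" means a blank cell): nesting the second IF inside the first, instead of chaining them with "=", gives one formula that covers both cases --

        =IF(G8="", D8&", "&E8&" and "&F8, IF(H8="", D8&", "&E8&", "&F8&" and "&G8, D8&", "&E8&", "&F8&", "&G8&" and "&H8))

    If the "empty" cells actually contain 0, replace G8="" and H8="" with G8=0 and H8=0, matching the original attempt.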

    Read the article

  • SVG as CSS background for website navigation-bar

    - by Irfan Mir
    I drew a small (horizontal / in width) svg to be the background of my website's navigation. My website's navigation takes place a 100% of the browser's viewport and I want the svg image to fill that 100% space. So, using css I set the background of the navigation (.nav) to nav.svg but then I saw (whenI opened the html file in a browser) that the svg was not the full-width of the nav, but at the small width I drew it at. How can I get the SVG to stretch and fill the entire width of the navigation (100% of the page) ? Here is the code for the html file where the navigation is in: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd"> <html lang="en"><head><meta http-equiv="Content-Type" content="text/html; charset=utf-8"> <title>Distributed Horizontal Menu</title> <meta name="generator" content="PSPad editor, www.pspad.com"> <style type="text/css"> *{ margin:0; padding:0; } .nav { margin:0; padding:0; min-width:42em; width:100%; height:47px; overflow:hidden; background:transparent url(nav.svg) no-repeat; text-align:justify; font:bold 88%/1.1 verdana; } .nav li { display:inline; list-style:none; } .nav li.last { margin-right:100%; } .nav li a { display:inline-block; padding:13px 4px 0; height:31px; color:#fff; vertical-align:middle; text-decoration:none; } .nav li a:hover { color:#ff6; background:#36c; } @media screen and (max-width:322px){ /* styling causing first break will go here*/ /* but in the meantime, a test */ body{ background:#ff0000; } } </style></head><body> <ul class="nav"> <!--[test to comment out random items] <li>&nbsp; <a href="#">netscape&nbsp;9</a></li> [the spacing should be distributed]--> <li>&nbsp; <a href="#">internet&nbsp;explorer&nbsp;6-8</a></li> <li>&nbsp; <a href="#">opera&nbsp;10</a></li> <li>&nbsp; <a href="#">firefox&nbsp;3</a></li> <li>&nbsp; <a href="#">safari&nbsp;4</a></li> <li class="last">&nbsp; <a href="#">chrome&nbsp;2</a> &nbsp; &nbsp;</li> </ul> </body></html> and Here is the code for the svg: <?xml version="1.0" encoding="utf-8"?> <!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd"> <svg version="1.1" id="Layer_1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" x="0px" y="0px" width="321.026px" height="44.398px" viewBox="39.487 196.864 321.026 44.398" enable-background="new 39.487 196.864 321.026 44.398" xml:space="preserve"> <linearGradient id="SVGID_1_" gradientUnits="userSpaceOnUse" x1="280" y1="316.8115" x2="280" y2="275.375" gradientTransform="matrix(1 0 0 1 -80 -77)"> <stop offset="0" style="stop-color:#5A4A6A"/> <stop offset="0.3532" style="stop-color:#605170"/> <stop offset="0.8531" style="stop-color:#726382"/> <stop offset="1" style="stop-color:#796A89"/> </linearGradient> <path fill="url(#SVGID_1_)" d="M360,238.721c0,1.121-0.812,2.029-1.812,2.029H41.813c-1.001,0-1.813-0.908-1.813-2.029v-39.316 c0-1.119,0.812-2.027,1.813-2.027h316.375c1.002,0,1.812,0.908,1.812,2.027V238.721z"/> <path opacity="0.1" fill="#FFFFFF" enable-background="new " d="M358.188,197.376H41.813c-1.001,0-1.813,0.908-1.813,2.028 v39.316c0,1.12,0.812,2.028,1.813,2.028h316.375c1,0,1.812-0.908,1.812-2.028v-39.316C360,198.284,359.189,197.376,358.188,197.376z M358.75,238.721c0,0.415-0.264,0.779-0.562,0.779H41.813c-0.3,0-0.563-0.363-0.563-0.779v-39.316c0-0.414,0.263-0.777,0.563-0.777 h316.375c0.301,0,0.562,0.363,0.562,0.777V238.721z"/> <path opacity="0.5" fill="#FFFFFF" enable-background="new " d="M358.188,197.376H41.813c-1.001,0-1.813,0.908-1.813,2.028v1.461 
c0-1.12,0.812-2.028,1.813-2.028h316.375c1.002,0,1.812,0.908,1.812,2.028v-1.461C360,198.284,359.189,197.376,358.188,197.376z"/> <g id="seperators"> <line fill="none" stroke="#000000" stroke-width="1.0259" stroke-miterlimit="10" x1="104.5" y1="197.375" x2="104.5" y2="240.75"/> <line opacity="0.1" fill="none" stroke="#FFFFFF" stroke-width="1.0259" stroke-miterlimit="10" enable-background="new " x1="103.5" y1="197.375" x2="103.5" y2="240.75"/> <line opacity="0.1" fill="none" stroke="#FFFFFF" stroke-width="1.0259" stroke-miterlimit="10" enable-background="new " x1="105.5" y1="197.375" x2="105.5" y2="240.75"/> <line fill="none" stroke="#000000" stroke-width="1.0259" stroke-miterlimit="10" x1="167.5" y1="197.375" x2="167.5" y2="240.75"/> <line opacity="0.1" fill="none" stroke="#FFFFFF" stroke-width="1.0259" stroke-miterlimit="10" enable-background="new " x1="166.5" y1="197.375" x2="166.5" y2="240.75"/> <line opacity="0.1" fill="none" stroke="#FFFFFF" stroke-width="1.0259" stroke-miterlimit="10" enable-background="new " x1="168.5" y1="197.375" x2="168.5" y2="240.75"/> <line fill="none" stroke="#000000" stroke-width="1.0259" stroke-miterlimit="10" x1="231.5" y1="197.375" x2="231.5" y2="240.75"/> <line opacity="0.1" fill="none" stroke="#FFFFFF" stroke-width="1.0259" stroke-miterlimit="10" enable-background="new " x1="232.5" y1="197.375" x2="232.5" y2="240.75"/> <line opacity="0.1" fill="none" stroke="#FFFFFF" stroke-width="1.0259" stroke-miterlimit="10" enable-background="new " x1="230.5" y1="197.375" x2="230.5" y2="240.75"/> <line fill="none" stroke="#000000" stroke-width="1.0259" stroke-miterlimit="10" x1="295.5" y1="197.375" x2="295.5" y2="240.75"/> <line opacity="0.1" fill="none" stroke="#FFFFFF" stroke-width="1.0259" stroke-miterlimit="10" enable-background="new " x1="294.5" y1="197.375" x2="294.5" y2="240.75"/> <line opacity="0.1" fill="none" stroke="#FFFFFF" stroke-width="1.0259" stroke-miterlimit="10" enable-background="new " x1="296.5" y1="197.375" x2="296.5" y2="240.75"/> </g> <path fill="none" stroke="#000000" stroke-width="1.0259" stroke-miterlimit="10" d="M360,238.721c0,1.121-0.812,2.029-1.812,2.029 H41.813c-1.001,0-1.813-0.908-1.813-2.029v-39.316c0-1.119,0.812-2.027,1.813-2.027h316.375c1.002,0,1.812,0.908,1.812,2.027 V238.721z"/> </svg> I appreciate and welcome any and all comments, help, and suggestions. Thanks in Advance!
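
    Two changes usually get an SVG background to stretch; this is only a sketch, and browser support is the big caveat (several of the browsers in that nav list, notably IE 6-8, understand neither SVG backgrounds nor background-size). First, let the SVG itself scale by swapping the fixed pixel dimensions on the root element for percentages and telling it not to preserve its aspect ratio:

        <svg version="1.1" xmlns="http://www.w3.org/2000/svg"
             width="100%" height="100%"
             viewBox="39.487 196.864 321.026 44.398"
             preserveAspectRatio="none">

    Second, size the background in the CSS:

        .nav {
            background: transparent url(nav.svg) no-repeat;
            background-size: 100% 47px;
        }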

    Read the article

  • current_user and Comments on Posts - Create another association or loop posts? - Ruby on Rails

    - by bgadoci
    I have created a blog application using Ruby on Rails and have just added an authentication piece and it is working nicely. I am now trying to go back through my application to adjust the code such that it only shows information that is associated with a certain user. Currently, Users has_many :posts and Posts has_many :comments. When a post is created I am successfully inserting the user_id into the post table. Additionally I am successfully only displaying the posts that belong to a certain user upon their login in the /views/posts/index.html.erb view. My problem is with the comments. For instance on the home page, when logged in, a user will see only posts that they have written, but comments from all users on all posts. Which is not what I want and need some direction in correcting. I want only to display the comments written on all of the logged in users posts. Do I need to create associations such that comments also belong to user? Or is there a way to adjust my code to simply loop through post to display this data. I have put the code for the PostsController, CommentsController, and /posts/index.html.erb below and also my view code but will post more if needed. class PostsController < ApplicationController before_filter :authenticate auto_complete_for :tag, :tag_name auto_complete_for :ugtag, :ugctag_name def index @tag_counts = Tag.count(:group => :tag_name, :order => 'count_all DESC', :limit => 20) conditions, joins = {}, :votes @ugtag_counts = Ugtag.count(:group => :ugctag_name, :order => 'count_all DESC', :limit => 20) conditions, joins = {}, :votes @vote_counts = Vote.count(:group => :post_title, :order => 'count_all DESC', :limit => 20) conditions, joins = {}, :votes unless(params[:tag_name] || "").empty? conditions = ["tags.tag_name = ? ", params[:tag_name]] joins = [:tags, :votes] end @posts= current_user.posts.paginate( :select => "posts.*, count(*) as vote_total", :joins => joins, :conditions=> conditions, :group => "votes.post_id, posts.id ", :order => "created_at DESC", :page => params[:page], :per_page => 5) @popular_posts=Post.paginate( :select => "posts.*, count(*) as vote_total", :joins => joins, :conditions=> conditions, :group => "votes.post_id, posts.id", :order => "vote_total DESC", :page => params[:page], :per_page => 3) respond_to do |format| format.html # index.html.erb format.xml { render :xml => @posts } format.json { render :json => @posts } format.atom end end def show @post = Post.find(params[:id]) respond_to do |format| format.html # show.html.erb format.xml { render :xml => @post } end end def new @post = Post.new respond_to do |format| format.html # new.html.erb format.xml { render :xml => @post } end end def edit @post = Post.find(params[:id]) end def create @post = current_user.posts.create(params[:post]) respond_to do |format| if @post.save flash[:notice] = 'Post was successfully created.' format.html { redirect_to(@post) } format.xml { render :xml => @post, :status => :created, :location => @post } else format.html { render :action => "new" } format.xml { render :xml => @post.errors, :status => :unprocessable_entity } end end end def update @post = Post.find(params[:id]) respond_to do |format| if @post.update_attributes(params[:post]) flash[:notice] = 'Post was successfully updated.' 
format.html { redirect_to(@post) } format.xml { head :ok } else format.html { render :action => "edit" } format.xml { render :xml => @post.errors, :status => :unprocessable_entity } end end end def destroy @post = Post.find(params[:id]) @post.destroy respond_to do |format| format.html { redirect_to(posts_url) } format.xml { head :ok } end end end CommentsController class CommentsController < ApplicationController before_filter :authenticate, :except => [:show, :create] def index @comments = Comment.find(:all, :include => :post, :order => "created_at DESC").paginate :page => params[:page], :per_page => 5 respond_to do |format| format.html # index.html.erb format.xml { render :xml => @comments } format.json { render :json => @comments } format.atom end end def show @comment = Comment.find(params[:id]) respond_to do |format| format.html # show.html.erb format.xml { render :xml => @comment } end end # GET /posts/new # GET /posts/new.xml # GET /posts/1/edit def edit @comment = Comment.find(params[:id]) end def update @comment = Comment.find(params[:id]) respond_to do |format| if @comment.update_attributes(params[:comment]) flash[:notice] = 'Comment was successfully updated.' format.html { redirect_to(@comment) } format.xml { head :ok } else format.html { render :action => "edit" } format.xml { render :xml => @comment.errors, :status => :unprocessable_entity } end end end def create @post = Post.find(params[:post_id]) @comment = @post.comments.build(params[:comment]) respond_to do |format| if @comment.save flash[:notice] = "Thanks for adding this comment" format.html { redirect_to @post } format.js else flash[:notice] = "Make sure you include your name and a valid email address" format.html { redirect_to @post } end end end def destroy @comment = Comment.find(params[:id]) @comment.destroy respond_to do |format| format.html { redirect_to Post.find(params[:post_id]) } format.js end end end View Code for Comments <% Comment.find(:all, :order => 'created_at DESC', :limit => 3).each do |comment| -%> <div id="side-bar-comments"> <p> <div class="small"><%=h comment.name %> commented on:</div> <div class="dark-grey"><%= link_to h(comment.post.title), comment.post %><br/></div> <i><%=h truncate(comment.body, :length => 100) %></i><br/> <div class="small"><i> <%= time_ago_in_words(comment.created_at) %> ago</i></div> </p> </div> <% end -%>
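
    One common way to do this (a sketch only -- it assumes the authentication piece exposes current_user to the view, just as the posts index above already relies on) is to give User a has_many :through association and then drive the sidebar from it instead of from Comment directly:

        class User < ActiveRecord::Base
          has_many :posts
          has_many :comments, :through => :posts
        end

    and in the sidebar view, replace the Comment.find(:all, ...) call with something along the lines of:

        <% current_user.comments.find(:all, :order => 'comments.created_at DESC', :limit => 3).each do |comment| -%>
          ...
        <% end -%>

    That returns only comments left on the logged-in user's posts, no matter who wrote them, without looping over the posts by hand.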

    Read the article

  • What is the fastest cyclic synchronization in Java (ExecutorService vs. CyclicBarrier vs. X)?

    - by Alex Dunlop
    Which Java synchronization construct is likely to provide the best performance for a concurrent, iterative processing scenario with a fixed number of threads like the one outlined below? After experimenting on my own for a while (using ExecutorService and CyclicBarrier) and being somewhat surprised by the results, I would be grateful for some expert advice and maybe some new ideas. Existing questions here do not seem to focus primarily on performance, hence this new one. Thanks in advance! The core of the app is a simple iterative data processing algorithm, parallelized to the spread the computational load across 8 cores on a Mac Pro, running OS X 10.6 and Java 1.6.0_07. The data to be processed is split into 8 blocks and each block is fed to a Runnable to be executed by one of a fixed number of threads. Parallelizing the algorithm was fairly straightforward, and it functionally works as desired, but its performance is not yet what I think it could be. The app seems to spend a lot of time in system calls synchronizing, so after some profiling I wonder whether I selected the most appropriate synchronization mechanism(s). A key requirement of the algorithm is that it needs to proceed in stages, so the threads need to sync up at the end of each stage. The main thread prepares the work (very low overhead), passes it to the threads, lets them work on it, then proceeds when all threads are done, rearranges the work (again very low overhead) and repeats the cycle. The machine is dedicated to this task, Garbage Collection is minimized by using per-thread pools of pre-allocated items, and the number of threads can be fixed (no incoming requests or the like, just one thread per CPU core). V1 - ExecutorService My first implementation used an ExecutorService with 8 worker threads. The program creates 8 tasks holding the work and then lets them work on it, roughly like this: // create one thread per CPU executorService = Executors.newFixedThreadPool( 8 ); ... // now process data in cycles while( ...) { // package data into 8 work items ... // create one Callable task per work item ... // submit the Callables to the worker threads executorService.invokeAll( taskList ); } This works well functionally (it does what it should), and for very large work items indeed all 8 CPUs become highly loaded, as much as the processing algorithm would be expected to allow (some work items will finish faster than others, then idle). However, as the work items become smaller (and this is not really under the program's control), the user CPU load shrinks dramatically: blocksize | system | user | cycles/sec 256k 1.8% 85% 1.30 64k 2.5% 77% 5.6 16k 4% 64% 22.5 4096 8% 56% 86 1024 13% 38% 227 256 17% 19% 420 64 19% 17% 948 16 19% 13% 1626 Legend: - block size = size of the work item (= computational steps) - system = system load, as shown in OS X Activity Monitor (red bar) - user = user load, as shown in OS X Activity Monitor (green bar) - cycles/sec = iterations through the main while loop, more is better The primary area of concern here is the high percentage of time spent in the system, which appears to be driven by thread synchronization calls. As expected, for smaller work items, ExecutorService.invokeAll() will require relatively more effort to sync up the threads versus the amount of work being performed in each thread. 
But since ExecutorService is more generic than it would need to be for this use case (it can queue tasks for threads if there are more tasks than cores), I though maybe there would be a leaner synchronization construct. V2 - CyclicBarrier The next implementation used a CyclicBarrier to sync up the threads before receiving work and after completing it, roughly as follows: main() { // create the barrier barrier = new CyclicBarrier( 8 + 1 ); // create Runable for thread, tell it about the barrier Runnable task = new WorkerThreadRunnable( barrier ); // start the threads for( int i = 0; i < 8; i++ ) { // create one thread per core new Thread( task ).start(); } while( ... ) { // tell threads about the work ... // N threads + this will call await(), then system proceeds barrier.await(); // ... now worker threads work on the work... // wait for worker threads to finish barrier.await(); } } class WorkerThreadRunnable implements Runnable { CyclicBarrier barrier; WorkerThreadRunnable( CyclicBarrier barrier ) { this.barrier = barrier; } public void run() { while( true ) { // wait for work barrier.await(); // do the work ... // wait for everyone else to finish barrier.await(); } } } Again, this works well functionally (it does what it should), and for very large work items indeed all 8 CPUs become highly loaded, as before. However, as the work items become smaller, the load still shrinks dramatically: blocksize | system | user | cycles/sec 256k 1.9% 85% 1.30 64k 2.7% 78% 6.1 16k 5.5% 52% 25 4096 9% 29% 64 1024 11% 15% 117 256 12% 8% 169 64 12% 6.5% 285 16 12% 6% 377 For large work items, synchronization is negligible and the performance is identical to V1. But unexpectedly, the results of the (highly specialized) CyclicBarrier seem MUCH WORSE than those for the (generic) ExecutorService: throughput (cycles/sec) is only about 1/4th of V1. A preliminary conclusion would be that even though this seems to be the advertised ideal use case for CyclicBarrier, it performs much worse than the generic ExecutorService. V3 - Wait/Notify + CyclicBarrier It seemed worth a try to replace the first cyclic barrier await() with a simple wait/notify mechanism: main() { // create the barrier // create Runable for thread, tell it about the barrier // start the threads while( ... ) { // tell threads about the work // for each: workerThreadRunnable.setWorkItem( ... ); // ... now worker threads work on the work... // wait for worker threads to finish barrier.await(); } } class WorkerThreadRunnable implements Runnable { CyclicBarrier barrier; @NotNull volatile private Callable<Integer> workItem; WorkerThreadRunnable( CyclicBarrier barrier ) { this.barrier = barrier; this.workItem = NO_WORK; } final protected void setWorkItem( @NotNull final Callable<Integer> callable ) { synchronized( this ) { workItem = callable; notify(); } } public void run() { while( true ) { // wait for work while( true ) { synchronized( this ) { if( workItem != NO_WORK ) break; try { wait(); } catch( InterruptedException e ) { e.printStackTrace(); } } } // do the work ... // wait for everyone else to finish barrier.await(); } } } Again, this works well functionally (it does what it should). blocksize | system | user | cycles/sec 256k 1.9% 85% 1.30 64k 2.4% 80% 6.3 16k 4.6% 60% 30.1 4096 8.6% 41% 98.5 1024 12% 23% 202 256 14% 11.6% 299 64 14% 10.0% 518 16 14.8% 8.7% 679 The throughput for small work items is still much worse than that of the ExecutorService, but about 2x that of the CyclicBarrier. Eliminating one CyclicBarrier eliminates half of the gap. 
V4 - Busy wait instead of wait/notify Since this app is the primary one running on the system and the cores idle anyway if they're not busy with a work item, why not try a busy wait for work items in each thread, even if that spins the CPU needlessly. The worker thread code changes as follows: class WorkerThreadRunnable implements Runnable { // as before final protected void setWorkItem( @NotNull final Callable<Integer> callable ) { workItem = callable; } public void run() { while( true ) { // busy-wait for work while( true ) { if( workItem != NO_WORK ) break; } // do the work ... // wait for everyone else to finish barrier.await(); } } } Also works well functionally (it does what it should). blocksize | system | user | cycles/sec 256k 1.9% 85% 1.30 64k 2.2% 81% 6.3 16k 4.2% 62% 33 4096 7.5% 40% 107 1024 10.4% 23% 210 256 12.0% 12.0% 310 64 11.9% 10.2% 550 16 12.2% 8.6% 741 For small work items, this increases throughput by a further 10% over the CyclicBarrier + wait/notify variant, which is not insignificant. But it is still much lower-throughput than V1 with the ExecutorService. V5 - ? So what is the best synchronization mechanism for such a (presumably not uncommon) problem? I am weary of writing my own sync mechanism to completely replace ExecutorService (assuming that it is too generic and there has to be something that can still be taken out to make it more efficient). It is not my area of expertise and I'm concerned that I'd spend a lot of time debugging it (since I'm not even sure my wait/notify and busy wait variants are correct) for uncertain gain. Any advice would be greatly appreciated.
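
    One more candidate worth benchmarking as a "V5" (with the caveat that it needs Java 7, whereas the setup described above is on 1.6.0_07) is java.util.concurrent.Phaser, a reusable barrier that can stand in for the pair of CyclicBarrier awaits. A sketch of the structure only:

        final Phaser phaser = new Phaser(8 + 1);   // 8 workers + the main thread

        // worker loop
        while (true) {
            phaser.arriveAndAwaitAdvance();        // wait for main to publish work
            // ... process this thread's work item ...
            phaser.arriveAndAwaitAdvance();        // report completion of the stage
        }

        // main loop
        while (moreCycles) {
            // ... hand out the 8 work items ...
            phaser.arriveAndAwaitAdvance();        // release the workers
            phaser.arriveAndAwaitAdvance();        // wait for them all to finish
        }

    Whether it actually beats ExecutorService.invokeAll() for small work items is exactly the kind of thing the tables above would have to answer.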

    Read the article

  • How to assign an RSS feed to an ImageButton

    - by L?c Song
    I have a feed_layout.xml <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="fill_parent" android:layout_height="fill_parent" android:orientation="vertical" > <LinearLayout android:baselineAligned="false" android:layout_width="fill_parent" android:layout_height="wrap_content" android:layout_marginTop="10dp" android:orientation="horizontal" > <LinearLayout android:layout_width="0dp" android:layout_height="wrap_content" android:layout_weight="1" android:orientation="vertical" > <ImageButton android:layout_width="138dp" android:layout_height="138dp" android:layout_gravity="center" android:onClick="homeImageButton" android:scaleType="fitStart" android:src="@drawable/home" android:tag="1" /> </LinearLayout> <LinearLayout android:layout_width="0dp" android:layout_height="wrap_content" android:layout_weight="1" android:orientation="vertical" > <ImageButton android:layout_width="138dp" android:layout_height="138dp" android:layout_gravity="center" android:scaleType="centerCrop" android:onClick="thegioiImageButton" android:src="@drawable/home" android:tag="2" /> </LinearLayout> </LinearLayout> <LinearLayout android:baselineAligned="false" android:layout_width="fill_parent" android:layout_height="wrap_content" android:layout_marginTop="10dp" android:orientation="horizontal" > <LinearLayout android:layout_width="0dp" android:layout_height="wrap_content" android:layout_weight="1" android:orientation="vertical" > <ImageButton android:layout_width="138dp" android:layout_height="138dp" android:layout_gravity="center" android:scaleType="centerCrop" android:onClick="giaitriImageButton" android:src="@drawable/home" android:tag="3" /> </LinearLayout> <LinearLayout android:layout_width="0dp" android:layout_height="wrap_content" android:layout_weight="1" android:orientation="vertical" > <ImageButton android:layout_width="138dp" android:layout_height="138dp" android:layout_gravity="center" android:scaleType="centerCrop" android:onClick="thethaoImageButton" android:src="@drawable/home" android:tag="4" /> </LinearLayout> </LinearLayout> <LinearLayout android:baselineAligned="false" android:layout_width="fill_parent" android:layout_height="wrap_content" android:layout_marginTop="10dp" android:orientation="horizontal" > <LinearLayout android:layout_width="0dp" android:layout_height="wrap_content" android:layout_weight="1" android:orientation="vertical" > <ImageButton android:layout_width="138dp" android:layout_height="138dp" android:layout_gravity="center" android:scaleType="centerCrop" android:onClick="khoahocImageButton" android:src="@drawable/home" android:tag="5" /> </LinearLayout> <LinearLayout android:layout_width="0dp" android:layout_height="wrap_content" android:layout_weight="1" android:orientation="vertical" > <ImageButton android:layout_width="138dp" android:layout_height="138dp" android:layout_gravity="center" android:scaleType="centerCrop" android:onClick="xeImageButton" android:src="@drawable/home" android:tag="6" /> </LinearLayout> </LinearLayout> </LinearLayout> and feedActivity.java package com.dqh.vnexpressrssreader; import android.R.string; import android.app.Activity; import android.content.Intent; import android.os.Bundle; import android.util.Log; import android.view.View; import android.widget.ImageButton; import android.widget.Toast; public class FeedActivity extends Activity { public String tagImg; private static final String TAG = "FeedActivity"; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); 
setContentView(R.layout.feed_layout); } public void homeImageButton(View v) { ImageButton imageButtonClicked = (ImageButton)v; tagImg = imageButtonClicked.getTag().toString(); setTagImg(tagImg); String tt = getTagImg(); Log.d(TAG, "FeedId: " + tt); Intent intent = new Intent(getApplicationContext(), ItemsActivity.class); startActivityForResult(intent, 0); } public void thegioiImageButton(View v) { ImageButton imageButtonClicked = (ImageButton)v; tagImg = imageButtonClicked.getTag().toString(); //Log.d(TAG, "FeedId: " + imageButtonClicked.getTag()); Log.d(TAG, "FeedId: " + tagImg); Intent intent = new Intent(getApplicationContext(), ItemsActivity.class); startActivityForResult(intent, 0); } } and RssReader.java /** * */ package com.dqh.vnexpressrssreader.reader; import java.util.ArrayList; import java.util.List; import org.json.JSONException; import org.json.JSONObject; import com.dqh.vnexpressrssreader.FeedActivity; import com.dqh.vnexpressrssreader.NewsRssReaderDB; import com.dqh.vnexpressrssreader.util.RSSHandler; import com.dqh.vnexpressrssreader.util.Tintuc; import android.content.Context; import android.text.Html; import android.util.Log; /** * @author rob * */ public class RssReader { private final static String TAG = "RssReader"; private final static String BOLD_OPEN = "<B>"; private final static String BOLD_CLOSE = "</B>"; private final static String BREAK = "<BR>"; private final static String ITALIC_OPEN = "<I>"; private final static String ITALIC_CLOSE = "</I>"; private final static String SMALL_OPEN = "<SMALL>"; private final static String SMALL_CLOSE = "</SMALL>"; /** * This method defines a feed URL and then calles our SAX Handler to read the tintuc list * from the stream * * @return List<JSONObject> - suitable for the List View activity */ public static List<JSONObject> getLatestRssFeed(Context context) { NewsRssReaderDB newsRssReaderDB = new NewsRssReaderDB(context); List<Tintuc> tintucsFromDB = newsRssReaderDB.getLists(); return fillData(tintucsFromDB); } public static void getLatestRssFeed(Context context, String feed) { NewsRssReaderDB newsRssReaderDB = new NewsRssReaderDB(context); feed = "http://vnexpress.net/rss/the-gioi.rss"; //RSS 2 feed = "http://vnexpress.net/rss/the-thao.rss"; //RSS 3 feed = "http://vnexpress.net/rss/home.rss"; RSSHandler rh = new RSSHandler(); List<Tintuc> tintucs = rh.getLatestTintucs(feed); if ((tintucs != null) && (tintucs.size() > 0)) { for (Tintuc tintuc : tintucs) { if ((tintuc.getUrl() != null) && !newsRssReaderDB.checkUrlExist(tintuc.getUrl().toString())) { long tintucId = newsRssReaderDB.insertTintuc(tintuc); if (tintucId > 0) { Log.d(TAG, "saved tintucId: " + tintucId); } else { Log.e(TAG, "saved tintucId fail"); } } else { Log.e(TAG, "tintucs exist!"); } } } } /** * This method takes a list of Tintuc objects and converts them in to the * correct JSON format so the info can be processed by our list view * * @param tintucs - list<Tintuc> * @return List<JSONObject> - suitable for the List View activity */ private static List<JSONObject> fillData(List<Tintuc> tintucs) { List<JSONObject> items = new ArrayList<JSONObject>(); for (Tintuc tintuc : tintucs) { JSONObject current = new JSONObject(); try { buildJsonObject(tintuc, current); } catch (JSONException e) { Log.e("RSS ERROR", "Error creating JSON Object from RSS feed"); } items.add(current); } return items; } /** * This method takes a single Tintuc Object and converts it in to a single JSON object * including some additional HTML formating so they can be displayed nicely * * @param 
tintuc * @param current * @throws JSONException */ private static void buildJsonObject(Tintuc tintuc, JSONObject current) throws JSONException { String title = tintuc.getTieude(); String description = tintuc.getMota(); ///////////////////////// //////// 2 ///////////// String date = tintuc.getPubDate(); String imgLink = tintuc.getImgLink(); StringBuffer sb = new StringBuffer(); sb.append(BOLD_OPEN).append(title).append(BOLD_CLOSE); sb.append(BREAK); sb.append(description); sb.append(BREAK); sb.append(SMALL_OPEN).append(ITALIC_OPEN).append(date).append(ITALIC_CLOSE).append(SMALL_CLOSE); current.put("text", Html.fromHtml(sb.toString())); current.put("imageLink", imgLink); current.put("url", tintuc.getUrl().toString()); current.put("title", tintuc.getTieude()); } } I have 1 array RSS and I want each ImageButton is assigned a Rss??. I have attempt to call to FeedActivity from RSSReader but not be help me !
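
    If the goal is "each ImageButton opens a different feed", the usual pattern (sketched here with a placeholder extra name) is to put the feed URL into the Intent and read it back in ItemsActivity, rather than having RssReader reach back into FeedActivity:

        // in FeedActivity
        public void thegioiImageButton(View v) {
            Intent intent = new Intent(this, ItemsActivity.class);
            intent.putExtra("feedUrl", "http://vnexpress.net/rss/the-gioi.rss");
            startActivity(intent);
        }

        // in ItemsActivity.onCreate(), after setContentView(...)
        String feedUrl = getIntent().getStringExtra("feedUrl");
        // ... then pass feedUrl on to RssReader.getLatestRssFeed(context, feedUrl)

    One handler per button (or a single handler that switches on the view's tag) keeps the mapping between buttons and feed URLs in one place.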

    Read the article

  • Questions related to writing your own file downloader using multiple threads in Java

    - by Shekhar
    Hello In my current company, i am doing a PoC on how we can write a file downloader utility. We have to use socket programming(TCP/IP) for downloading the files. One of the requirements of the client is that a file(which will be large in size) should be transfered in chunks for example if we have a file of 5Mb size then we can have 5 threads which transfer 1 Mb each. I have written a small application which downloads a file. You can download the eclipe project from http://www.fileflyer.com/view/QM1JSC0 A brief explanation of my classes FileSender.java This class provides the bytes of file. It has a method called sendBytesOfFile(long start,long end, long sequenceNo) which gives the number of bytes. import java.io.File; import java.io.IOException; import java.util.zip.CRC32; import org.apache.commons.io.FileUtils; public class FileSender { private static final String FILE_NAME = "C:\\shared\\test.pdf"; public ByteArrayWrapper sendBytesOfFile(long start,long end, long sequenceNo){ try { File file = new File(FILE_NAME); byte[] fileBytes = FileUtils.readFileToByteArray(file); System.out.println("Size of file is " +fileBytes.length); System.out.println(); System.out.println("Start "+start +" end "+end); byte[] bytes = getByteArray(fileBytes, start, end); ByteArrayWrapper wrapper = new ByteArrayWrapper(bytes, sequenceNo); return wrapper; } catch (IOException e) { throw new RuntimeException(e); } } private byte[] getByteArray(byte[] bytes, long start, long end){ long arrayLength = end-start; System.out.println("Start : "+start +" end : "+end + " Arraylength : "+arrayLength +" length of source array : "+bytes.length); byte[] arr = new byte[(int)arrayLength]; for(int i = (int)start, j =0; i < end;i++,j++){ arr[j] = bytes[i]; } return arr; } public static long fileSize(){ File file = new File(FILE_NAME); return file.length(); } } Second Class is FileReceiver.java - This class receives the file. Small Explanation what this file does This class finds the size of the file to be fetched from Sender Depending upon the size of the file it finds the start and end position till the bytes needs to be read. It starts n number of threads giving each thread start,end, sequence number and a list which all the threads share. Each thread reads the number of bytes and creates a ByteArrayWrapper. ByteArrayWrapper objects are added to the list Then i have while loop which basically make sure that all threads have done their work finally it sorts the list based on the sequence number. then the bytes are joined, and a complete byte array is formed which is converted to a file. 
Code of File Receiver package com.filedownloader; import java.io.File; import java.io.IOException; import java.util.ArrayList; import java.util.Collections; import java.util.Comparator; import java.util.List; import java.util.zip.CRC32; import org.apache.commons.io.FileUtils; public class FileReceiver { public static void main(String[] args) { FileReceiver receiver = new FileReceiver(); receiver.receiveFile(); } public void receiveFile(){ long startTime = System.currentTimeMillis(); long numberOfThreads = 10; long filesize = FileSender.fileSize(); System.out.println("File size received "+filesize); long start = filesize/numberOfThreads; List<ByteArrayWrapper> list = new ArrayList<ByteArrayWrapper>(); for(long threadCount =0; threadCount<numberOfThreads ;threadCount++){ FileDownloaderTask task = new FileDownloaderTask(threadCount*start,(threadCount+1)*start,threadCount,list); new Thread(task).start(); } while(list.size() != numberOfThreads){ // this is done so that all the threads should complete their work before processing further. //System.out.println("Waiting for threads to complete. List size "+list.size()); } if(list.size() == numberOfThreads){ System.out.println("All bytes received "+list); Collections.sort(list, new Comparator<ByteArrayWrapper>() { @Override public int compare(ByteArrayWrapper o1, ByteArrayWrapper o2) { long sequence1 = o1.getSequence(); long sequence2 = o2.getSequence(); if(sequence1 < sequence2){ return -1; }else if(sequence1 > sequence2){ return 1; } else{ return 0; } } }); byte[] totalBytes = list.get(0).getBytes(); byte[] firstArr = null; byte[] secondArr = null; for(int i = 1;i<list.size();i++){ firstArr = totalBytes; secondArr = list.get(i).getBytes(); totalBytes = concat(firstArr, secondArr); } System.out.println(totalBytes.length); convertToFile(totalBytes,"c:\\tmp\\test.pdf"); long endTime = System.currentTimeMillis(); System.out.println("Total time taken with "+numberOfThreads +" threads is "+(endTime-startTime)+" ms" ); } } private byte[] concat(byte[] A, byte[] B) { byte[] C= new byte[A.length+B.length]; System.arraycopy(A, 0, C, 0, A.length); System.arraycopy(B, 0, C, A.length, B.length); return C; } private void convertToFile(byte[] totalBytes,String name) { try { FileUtils.writeByteArrayToFile(new File(name), totalBytes); } catch (IOException e) { throw new RuntimeException(e); } } } Code of ByteArrayWrapper package com.filedownloader; import java.io.Serializable; public class ByteArrayWrapper implements Serializable{ private static final long serialVersionUID = 3499562855188457886L; private byte[] bytes; private long sequence; public ByteArrayWrapper(byte[] bytes, long sequenceNo) { this.bytes = bytes; this.sequence = sequenceNo; } public byte[] getBytes() { return bytes; } public long getSequence() { return sequence; } } Code of FileDownloaderTask import java.util.List; public class FileDownloaderTask implements Runnable { private List<ByteArrayWrapper> list; private long start; private long end; private long sequenceNo; public FileDownloaderTask(long start,long end,long sequenceNo,List<ByteArrayWrapper> list) { this.list = list; this.start = start; this.end = end; this.sequenceNo = sequenceNo; } @Override public void run() { ByteArrayWrapper wrapper = new FileSender().sendBytesOfFile(start, end, sequenceNo); list.add(wrapper); } } Questions related to this code 1) Does file downloading becomes fast when multiple threads is used? In this code i am not able to see the benefit. 2) How should i decide how many threads should i create ? 
3) Are their any opensource libraries which does that 4) The file which file receiver receives is valid and not corrupted but checksum (i used FileUtils of common-io) does not match. Whats the problem? 5) This code gives out of memory when used with large file(above 100 Mb) i.e. because byte array which is created. How can i avoid? I know this is a very bad code but i have to write this in one day -:). Please suggest any other good way to do this? Thanks Shekhar
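
    On question 5 in particular: the out-of-memory error comes from holding every chunk, and then the concatenated whole file, in byte arrays. A sketch of an alternative (file name and variables taken from the example above; exception handling omitted) is to have each task write its chunk straight to its offset in the destination file, so nothing larger than one chunk ever sits in memory:

        RandomAccessFile out = new RandomAccessFile("c:\\tmp\\test.pdf", "rw");
        out.setLength(filesize);                 // pre-size the target file once

        // inside each FileDownloaderTask, instead of adding to the shared list:
        byte[] chunk = new FileSender().sendBytesOfFile(start, end, sequenceNo).getBytes();
        synchronized (out) {
            out.seek(start);                     // this chunk's offset in the file
            out.write(chunk);
        }

    A CountDownLatch (or just ExecutorService.invokeAll) would then replace the while(list.size() != numberOfThreads) busy-wait as the way to know when every chunk has been written.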

    Read the article

  • Authoritative sources about Database vs. Flatfile decision

    - by FastAl

  • How do I prove I should put a table of values in source code instead of a database table?

    - by FastAl
    <tldr>looking for a reference to a book or other undeniably authoritative source that gives reasons when you should choose a database vs. when you should choose other storage methods. I have provided an un-authoritative list of reasons about 2/3 of the way down this post.</tldr> I have a situation at my company where a database is being used where it would be better to use another solution (in this case, an auto-generated piece of source code that contains a static lookup table, searched by binary sort). Normally, a database would be an OK solution even though the problem does not require a database, e.g, none of the elements of ACID are needed, as it is read-only data, updated about every 3-5 years (also requiring other sourcecode changes), and fits in memory, and can be keyed into via binary search (a tad faster than db, but speed is not an issue). The problem is that this code runs on our enterprise server, but is shared with several PC platforms (some disconnected, some use a central DB, etc.), and parts of it are managed by multiple programming units, parts by the DBAs, parts even by mathematicians in another department, etc. These hit their own platform’s version of their databases (containing their own copy of the static data). What happens is that every implementation, every little change, something different goes wrong. There are many other issues as well. I can’t even use a flatfile, because one mode of running on our enterprise server does not have permission to read files (only databases, and of course, its own literal storage, e.g., in-source table). Of course, other parts of the system use databases in proper, less obscure manners; there is no problem with those parts. So why don’t we just change it? I don’t have administrative ability to force a change. But I’m affected because sometimes I have to help fix the problems, but mostly because it causes outages and tons of extra IT time by other programmers and d*mmit that makes me mad! The reason neither management, nor the designers of the system, can see the problem is that they propose a solution that won’t work: increase communication; implement more safeguards and standards; etc. But every time, in a different part of the already-pared-down but still multi-step processes, a few different diligent, hard-working, top performing IT personnel make a unique subtle error that causes it to fail, sometimes after the last round of testing! And in general these are not single-person failures, but understandable miscommunications. And communication at our company is actually better than most. People just don't think that's the case because they haven't dug into the matter. However, I have it on very good word from somebody with extensive formal study of sociology and psychology that the relatively small amount of less-than-proper database usage in this gigantic cross-platform multi-source, multi-language project is bureaucratically un-maintainable. Impossible. No chance. At least with Human Beings in the loop, and it can’t be automated. In addition, the management and developers who could change this, though intelligent and capable, don’t understand the rigidity of this ‘how humans are’ issue, and are not convincible on the matter. 
The reason putting the static data in sourcecode will solve the problem is, although the solution is less sexy than a database, it would function with no technical drawbacks; and since the sharing of sourcecode already works very well, you basically erase any database-related effort from this section of the project, along with all the drawbacks of it that are causing problems. OK, that’s the background, for the curious. I won’t be able to convince management that this is an unfixable sociological problem, and that the real solution is coding around these limits of human nature, just as you would code around a bug in a 3rd party component that you can’t change. So what I have to do is exploit the unsuitableness of the database solution, and not do it using logic, but rather authority. I am aware of many reasons, and posts on this site giving reasons for one over the other; I’m not looking for lists of reasons like these (although you can add a comment if I've miss a doozy): WHY USE A DATABASE? instead of flatfile/other DB vs. file: if you need... Random Read / Transparent search optimization Advanced / varied / customizable Searching and sorting capabilities Transaction/rollback Locks, semaphores Concurrency control / Shared users Security 1-many/m-m is easier Easy modification Scalability Load Balancing Random updates / inserts / deletes Advanced query Administrative control of design, etc. SQL / learning curve Debugging / Logging Centralized / Live Backup capabilities Cached queries / dvlp & cache execution plans Interleaved update/read Referential integrity, avoid redundant/missing/corrupt/out-of-sync data Reporting (from on olap or oltp db) / turnkey generation tools [Disadvantages:] Important to get right the first time - professional design - but only b/c it's meant to last s/w & h/w cost Usu. over a network, speed issue (best vs. best design vs. local=even then a separate process req's marshalling/netwk layers/inter-p comm) indicies and query processing can stand in the way of simple processing (vs. flatfile) WHY USE FLATFILE: If you only need... Sequential Row processing only Limited usage append only (no reading, no master key/update) Only Update the record you're reading (fixed length recs only) Too big to fit into memory If Local disk / read-ahead network connection Portability / small system Email / cut & Paste / store as document by novice - simple format Low design learning curve but high cost later WHY USE IN-MEMORY/TABLE (tables, arrays, etc.): if you need... Processing a single db/ff record that was imported Known size of data Static data if hardcoding the table Narrow, unchanging use (e.g., one program or proc) -includes a class that will be shared, but encapsulates its data manipulation Extreme speed needed / high transaction frequency Random access - but search is dependent on implementation Following are some other posts about the topic: http://stackoverflow.com/questions/1499239/database-vs-flat-text-file-what-are-some-technical-reasons-for-choosing-one-over http://stackoverflow.com/questions/332825/are-flat-file-databases-any-good http://stackoverflow.com/questions/2356851/database-vs-flat-files http://stackoverflow.com/questions/514455/databases-vs-plain-text/514530 What I’d like to know is if anybody could recommend a hard, authoritative source containing these reasons. I’m looking for a paper book I can buy, or a reputable website with whitepapers about the issue (e.g., Microsoft, IBM), not counting the user-generated content on those sites. 
This will have a greater chance of eliciting the change that I’m looking for: less wasted programmer time, and more reliable programs. Thanks very much for your help. You win a prize for reading such a large post!
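To make the proposed alternative concrete, a minimal sketch of such a generated, in-source lookup table might look like the following (the names and data are hypothetical, purely for illustration):

// Hypothetical auto-generated, read-only lookup table kept in source code.
// Regenerated by a build step whenever the static data changes (every few years).
using System;

public static class RateTable
{
    // Keys must be kept sorted so Array.BinarySearch works.
    private static readonly int[] Keys       = { 1001, 1004, 1010, 1025 };
    private static readonly decimal[] Values = { 0.10m, 0.15m, 0.22m, 0.31m };

    public static bool TryLookup(int key, out decimal value)
    {
        int i = Array.BinarySearch(Keys, key);   // O(log n), no database round trip
        value = i >= 0 ? Values[i] : default(decimal);
        return i >= 0;
    }
}

Because the table ships with the source, every platform that compiles the code automatically gets the same data - which is exactly the property being argued for above.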

    Read the article

  • Use IIS Application Initialization for keeping ASP.NET Apps alive

    - by Rick Strahl
    I've been working quite a bit with Windows Services in the recent months, and well, it turns out that Windows Services are quite a bear to debug, deploy, update and maintain. The process of getting services set up,  debugged and updated is a major chore that has to be extensively documented and or automated specifically. On most projects when a service is built, people end up scrambling for the right 'process' to use for administration. Web app deployment and maintenance on the other hand are common and well understood today, as we are constantly dealing with Web apps. There's plenty of infrastructure and tooling built into Web Tools like Visual Studio to facilitate the process. By comparison Windows Services or anything self-hosted for that matter seems convoluted.In fact, in a recent blog post I mentioned that on a recent project I'd been using self-hosting for SignalR inside of a Windows service, because the application is in fact a 'service' that also needs to send out lots of messages via SignalR. But the reality is that it could just as well be an IIS application with a service component that runs in the background. Either way you look at it, it's either a Windows Service with a built in Web Server, or an IIS application running a Service application, neither of which follows the standard Service or Web App template.Personally I much prefer Web applications. Running inside of IIS I get all the benefits of the IIS platform including service lifetime management (crash and restart), controlled shutdowns, the whole security infrastructure including easy certificate support, hot-swapping of code and the the ability to publish directly to IIS from within Visual Studio with ease.Because of these benefits we set out to move from the self hosted service into an ASP.NET Web app instead.The Missing Link for ASP.NET as a Service: Auto-LoadingI've had moments in the past where I wanted to run a 'service like' application in ASP.NET because when you think about it, it's so much easier to control a Web application remotely. Services are locked into start/stop operations, but if you host inside of a Web app you can write your own ticket and control it from anywhere. In fact nearly 10 years ago I built a background scheduling application that ran inside of ASP.NET and it worked great and it's still running doing its job today.The tricky part for running an app as a service inside of IIS then and now, is how to get IIS and ASP.NET launched so your 'service' stays alive even after an Application Pool reset. 7 years ago I faked it by using a web monitor (my own West Wind Web Monitor app) I was running anyway to monitor my various web sites for uptime, and having the monitor ping my 'service' every 20 seconds to effectively keep ASP.NET alive or fire it back up after a reload. I used a simple scheduler class that also includes some logic for 'self-reloading'. Hacky for sure, but it worked reliably.Luckily today it's much easier and more integrated to get IIS to launch ASP.NET as soon as an Application Pool is started by using the Application Initialization Module. The Application Initialization Module basically allows you to turn on Preloading on the Application Pool and the Site/IIS App, which essentially fires a request through the IIS pipeline as soon as the Application Pool has been launched. This means that effectively your ASP.NET app becomes active immediately, Application_Start is fired making sure your app stays up and running at all times. 
All the other features like Application Pool recycling and auto-shutdown after idle time still work, but IIS will then always immediately re-launch the application.
Getting started with Application Initialization
As of IIS 8, Application Initialization is part of the IIS feature set. For IIS 7 and 7.5 there's a separate download available via Web Platform Installer. On IIS 8, Application Initialization is an optional install component in Windows or the Windows Server Role Manager. It is an optional component, so make sure you explicitly select it.
IIS Configuration for Application Initialization
Initialization needs to be applied on the Application Pool as well as the IIS Application level. As of IIS 8 these settings can be made through the IIS Administration console.
Start with the Application Pool. Here you need to set both Start Automatically, which is always set, and the StartMode, which should be set to AlwaysRunning. Both have to be set - the Start Automatically flag is true by default and controls the starting of the application pool itself, while the Always Running flag is required in order to launch the application. Without the latter flag set, the site settings have no effect.
Now on the Site/Application level you can specify whether the site should preload: set the Preload Enabled flag to true.
At this point ASP.NET apps should auto-load. This is all that's needed to pre-load the site if all you want is to get your site launched automatically.
If you want a little more control over the load process you can add a few more settings to your web.config file that allow you to show a static page while the app is starting up. This can be useful if startup is really slow, so rather than displaying a blank screen while the user is twiddling their thumbs you can display a static HTML page instead:
<system.webServer> <applicationInitialization remapManagedRequestsTo="Startup.htm" skipManagedModules="true"> <add initializationPage="ping.ashx" /> </applicationInitialization> </system.webServer>
This allows you to specify a page to execute in a dry run. IIS basically fakes a request and pushes it directly into the IIS pipeline without hitting the network. You specify a page and IIS will fake a request to that page - in this case ping.ashx, which just returns a simple OK string, i.e. a fast pipeline request. This request is run immediately after the Application Pool restart, and while this request is running and your app is warming up, IIS can display an alternate static page - Startup.htm above. So instead of showing users an empty loading page when clicking a link on your site you can optionally show some sort of static status page that says, "we'll be right back". I'm not sure if that's such a brilliant idea since this can be pretty disruptive in some cases. Personally I think I prefer letting people wait, but at least get the response they were supposed to get back rather than a random page. But it's there if you need it.
Note that the web.config stuff is optional. If you don't provide it IIS hits the default site link (/) and even if there's no matching request at the end of that request it'll still fire the request through the IIS pipeline.
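The post doesn't show the ping.ashx endpoint itself; a minimal sketch of what such a warm-up handler might look like (my own assumption, not the original file) is simply a handler that answers with a short string:

<%@ WebHandler Language="C#" Class="Ping" %>
using System.Web;

// Minimal warm-up endpoint: returns a tiny response so the faked
// initialization request runs the full ASP.NET pipeline cheaply.
public class Ping : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/plain";
        context.Response.Write("OK");
    }

    public bool IsReusable
    {
        get { return true; }
    }
}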
Ideally though you want to make sure that an ASP.NET endpoint is hit either with your default page, or by specify the initializationPage to ensure ASP.NET actually gets hit since it's possible for IIS fire unmanaged requests only for static pages (depending how your pipeline is configured).What about AppDomain Restarts?In addition to full Worker Process recycles at the IIS level, ASP.NET also has to deal with AppDomain shutdowns which can occur for a variety of reasons:Files are updated in the BIN folderWeb Deploy to your siteweb.config is changedHard application crashThese operations don't cause the worker process to restart, but they do cause ASP.NET to unload the current AppDomain and start up a new one. Because the features above only apply to Application Pool restarts, AppDomain restarts could also cause your 'ASP.NET service' to stop processing in the background.In order to keep the app running on AppDomain recycles, you can resort to a simple ping in the Application_End event:protected void Application_End() { var client = new WebClient(); var url = App.AdminConfiguration.MonitorHostUrl + "ping.aspx"; client.DownloadString(url); Trace.WriteLine("Application Shut Down Ping: " + url); }which fires any ASP.NET url to the current site at the very end of the pipeline shutdown which in turn ensures that the site immediately starts back up.Manual Configuration in ApplicationHost.configThe above UI corresponds to the following ApplicationHost.config settings. If you're using IIS 7, there's no UI for these flags so you'll have to manually edit them.When you install the Application Initialization component into IIS it should auto-configure the module into ApplicationHost.config. Unfortunately for me, with Mr. Murphy in his best form for me, the module registration did not occur and I had to manually add it.<globalModules> <add name="ApplicationInitializationModule" image="%windir%\System32\inetsrv\warmup.dll" /> </globalModules>Most likely you won't need ever need to add this, but if things are not working it's worth to check if the module is actually registered.Next you need to configure the ApplicationPool and the Web site. The following are the two relevant entries in ApplicationHost.config.<system.applicationHost> <applicationPools> <add name="West Wind West Wind Web Connection" autoStart="true" startMode="AlwaysRunning" managedRuntimeVersion="v4.0" managedPipelineMode="Integrated"> <processModel identityType="LocalSystem" setProfileEnvironment="true" /> </add> </applicationPools> <sites> <site name="Default Web Site" id="1"> <application path="/MPress.Workflow.WebQueueMessageManager" applicationPool="West Wind West Wind Web Connection" preloadEnabled="true"> <virtualDirectory path="/" physicalPath="C:\Clients\…" /> </application> </site> </sites> </system.applicationHost>On the Application Pool make sure to set the autoStart and startMode flags to true and AlwaysRunning respectively. On the site make sure to set the preloadEnabled flag to true.And that's all you should need. You can still set the web.config settings described above as well.ASP.NET as a Service?In the particular application I'm working on currently, we have a queue manager that runs as standalone service that polls a database queue and picks out jobs and processes them on several threads. The service can spin up any number of threads and keep these threads alive in the background while IIS is running doing its own thing. These threads are newly created threads, so they sit completely outside of the IIS thread pool. 
In order for this service to work all it needs is a long running reference that keeps it alive for the life time of the application.In this particular app there are two components that run in the background on their own threads: A scheduler that runs various scheduled tasks and handles things like picking up emails to send out outside of IIS's scope and the QueueManager. Here's what this looks like in global.asax:public class Global : System.Web.HttpApplication { private static ApplicationScheduler scheduler; private static ServiceLauncher launcher; protected void Application_Start(object sender, EventArgs e) { // Pings the service and ensures it stays alive scheduler = new ApplicationScheduler() { CheckFrequency = 600000 }; scheduler.Start(); launcher = new ServiceLauncher(); launcher.Start(); // register so shutdown is controlled HostingEnvironment.RegisterObject(launcher); }}By keeping these objects around as static instances that are set only once on startup, they survive the lifetime of the application. The code in these classes is essentially unchanged from the Windows Service code except that I could remove the various overrides required for the Windows Service interface (OnStart,OnStop,OnResume etc.). Otherwise the behavior and operation is very similar.In this application ASP.NET serves two purposes: It acts as the host for SignalR and provides the administration interface which allows remote management of the 'service'. I can start and stop the service remotely by shutting down the ApplicationScheduler very easily. I can also very easily feed stats from the queue out directly via a couple of Web requests or (as we do now) through the SignalR service.Registering a Background Object with ASP.NETNotice also the use of the HostingEnvironment.RegisterObject(). This function registers an object with ASP.NET to let it know that it's a background task that should be notified if the AppDomain shuts down. RegisterObject() requires an interface with a Stop() method that's fired and allows your code to respond to a shutdown request. Here's what the IRegisteredObject::Stop() method looks like on the launcher:public void Stop(bool immediate = false) { LogManager.Current.LogInfo("QueueManager Controller Stopped."); Controller.StopProcessing(); Controller.Dispose(); Thread.Sleep(1500); // give background threads some time HostingEnvironment.UnregisterObject(this); }Implementing IRegisterObject should help with reliability on AppDomain shutdowns. Thanks to Justin Van Patten for pointing this out to me on Twitter.RegisterObject() is not required but I would highly recommend implementing it on whatever object controls your background processing to all clean shutdowns when the AppDomain shuts down.Testing it outI'm still in the testing phase with this particular service to see if there are any side effects. But so far it doesn't look like it. With about 50 lines of code I was able to replace the Windows service startup to Web start up - everything else just worked as is. An honorable mention goes to SignalR 2.0's oWin hosting, because with the new oWin based hosting no code changes at all were required, merely a couple of configuration file settings and an assembly directive needed, to point at the SignalR startup class. Sweet!It also seems like SignalR is noticeably faster running inside of IIS compared to self-host. 
Startup feels faster because of the preload.Starting and Stopping the 'Service'Because the application is running as a Web Server, it's easy to have a Web interface for starting and stopping the services running inside of the service. For our queue manager the SignalR service and front monitoring app has a play and stop button for toggling the queue.If you want more administrative control and have it work more like a Windows Service you can also stop the application pool explicitly from the command line which would be equivalent to stopping and restarting a service.To start and stop from the command line you can use the IIS appCmd tool. To stop:> %windir%\system32\inetsrv\appcmd stop apppool /apppool.name:"Weblog"and to start> %windir%\system32\inetsrv\appcmd start apppool /apppool.name:"Weblog"Note that when you explicitly force the AppPool to stop running either in the UI (on the ApplicationPools page use Start/Stop) or via command line tools, the application pool will not auto-restart immediately. You have to manually start it back up.What's not to like?There are certainly a lot of benefits to running a background service in IIS, but… ASP.NET applications do have more overhead in terms of memory footprint and startup time is a little slower, but generally for server applications this is not a big deal. If the application is stable the service should fire up and stay running indefinitely. A lot of times this kind of service interface can simply be attached to an existing Web application, or if scalability requires be offloaded to its own Web server.Easier to work withBut the ultimate benefit here is that it's much easier to work with a Web app as opposed to a service. While developing I can simply turn off the auto-launch features and launch the service on demand through IIS simply by hitting a page on the site. If I want to shut down an IISRESET -stop will shut down the service easily enough. I can then attach a debugger anywhere I want and this works like any other ASP.NET application. Yes you end up on a background thread for debugging but Visual Studio handles that just fine and if you stay on a single thread this is no different than debugging any other code.SummaryUsing ASP.NET to run background service operations is probably not a super common scenario, but it probably should be something that is considered carefully when building services. Many applications have service like features and with the auto-start functionality of the Application Initialization module, it's easy to build this functionality into ASP.NET. Especially when combined with the notification features of SignalR it becomes very, very easy to create rich services that can also communicate their status easily to the outside world.Whether it's existing applications that need some background processing for scheduling related tasks, or whether you just create a separate site altogether just to host your service it's easy to do and you can leverage the same tool chain you're already using for other Web projects. 
If you have lots of service projects it's worth considering… give it some thought…
© Rick Strahl, West Wind Technologies, 2005-2013
Posted in ASP.NET  SignalR  IIS
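As an aside not covered in the post (so verify against your IIS version): the same flags that are hand-edited into ApplicationHost.config above can usually be set from the command line with appCmd, which is handy for scripted deployments. For example, using the pool and application names from the post's config:

REM Assumed pool/app names taken from the ApplicationHost.config sample above
%windir%\system32\inetsrv\appcmd set apppool "West Wind West Wind Web Connection" /autoStart:true /startMode:AlwaysRunning
%windir%\system32\inetsrv\appcmd set app "Default Web Site/MPress.Workflow.WebQueueMessageManager" /preloadEnabled:true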

    Read the article

  • Capturing and Transforming ASP.NET Output with Response.Filter

    - by Rick Strahl
    During one of my Handlers and Modules session at DevConnections this week one of the attendees asked a question that I didn’t have an immediate answer for. Basically he wanted to capture response output completely and then apply some filtering to the output – effectively injecting some additional content into the page AFTER the page had completely rendered. Specifically the output should be captured from anywhere – not just a page and have this code injected into the page. Some time ago I posted some code that allows you to capture ASP.NET Page output by overriding the Render() method, capturing the HtmlTextWriter() and reading its content, modifying the rendered data as text then writing it back out. I’ve actually used this approach on a few occasions and it works fine for ASP.NET pages. But this obviously won’t work outside of the Page class environment and it’s not really generic – you have to create a custom page class in order to handle the output capture. [updated 11/16/2009 – updated ResponseFilterStream implementation and a few additional notes based on comments] Enter Response.Filter However, ASP.NET includes a Response.Filter which can be used – well to filter output. Basically Response.Filter is a stream through which the OutputStream is piped back to the Web Server (indirectly). As content is written into the Response object, the filter stream receives the appropriate Stream commands like Write, Flush and Close as well as read operations although for a Response.Filter that’s uncommon to be hit. The Response.Filter can be programmatically replaced at runtime which allows you to effectively intercept all output generation that runs through ASP.NET. A common Example: Dynamic GZip Encoding A rather common use of Response.Filter hooking up code based, dynamic  GZip compression for requests which is dead simple by applying a GZipStream (or DeflateStream) to Response.Filter. The following generic routines can be used very easily to detect GZip capability of the client and compress response output with a single line of code and a couple of library helper routines: WebUtils.GZipEncodePage(); which is handled with a few lines of reusable code and a couple of static helper methods: /// <summary> ///Sets up the current page or handler to use GZip through a Response.Filter ///IMPORTANT:  ///You have to call this method before any output is generated! 
/// </summary> public static void GZipEncodePage() {     HttpResponse Response = HttpContext.Current.Response;     if(IsGZipSupported())     {         stringAcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];         if(AcceptEncoding.Contains("deflate"))         {             Response.Filter = newSystem.IO.Compression.DeflateStream(Response.Filter,                                        System.IO.Compression.CompressionMode.Compress);             Response.AppendHeader("Content-Encoding", "deflate");         }         else        {             Response.Filter = newSystem.IO.Compression.GZipStream(Response.Filter,                                       System.IO.Compression.CompressionMode.Compress);             Response.AppendHeader("Content-Encoding", "gzip");                            }     }     // Allow proxy servers to cache encoded and unencoded versions separately    Response.AppendHeader("Vary", "Content-Encoding"); } /// <summary> /// Determines if GZip is supported /// </summary> /// <returns></returns> public static bool IsGZipSupported() { string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"]; if (!string.IsNullOrEmpty(AcceptEncoding) && (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate"))) return true; return false; } GZipStream and DeflateStream are streams that are assigned to Response.Filter and by doing so apply the appropriate compression on the active Response. Response.Filter content is chunked So to implement a Response.Filter effectively requires only that you implement a custom stream and handle the Write() method to capture Response output as it’s written. At first blush this seems very simple – you capture the output in Write, transform it and write out the transformed content in one pass. And that indeed works for small amounts of content. But you see, the problem is that output is written in small buffer chunks (a little less than 16k it appears) rather than just a single Write() statement into the stream, which makes perfect sense for ASP.NET to stream data back to IIS in smaller chunks to minimize memory usage en route. Unfortunately this also makes it a more difficult to implement any filtering routines since you don’t directly get access to all of the response content which is problematic especially if those filtering routines require you to look at the ENTIRE response in order to transform or capture the output as is needed for the solution the gentleman in my session asked for. So in order to address this a slightly different approach is required that basically captures all the Write() buffers passed into a cached stream and then making the stream available only when it’s complete and ready to be flushed. As I was thinking about the implementation I also started thinking about the few instances when I’ve used Response.Filter implementations. Each time I had to create a new Stream subclass and create my custom functionality but in the end each implementation did the same thing – capturing output and transforming it. I thought there should be an easier way to do this by creating a re-usable Stream class that can handle stream transformations that are common to Response.Filter implementations. Creating a semi-generic Response Filter Stream Class What I ended up with is a ResponseFilterStream class that provides a handful of Events that allow you to capture and/or transform Response content. 
The class implements a subclass of Stream and then overrides Write() and Flush() to handle capturing and transformation operations. By exposing events it’s easy to hook up capture or transformation operations via single focused methods. ResponseFilterStream exposes the following events: CaptureStream, CaptureString Captures the output only and provides either a MemoryStream or String with the final page output. Capture is hooked to the Flush() operation of the stream. TransformStream, TransformString Allows you to transform the complete response output with events that receive a MemoryStream or String respectively and can you modify the output then return it back as a return value. The transformed output is then written back out in a single chunk to the response output stream. These events capture all output internally first then write the entire buffer into the response. TransformWrite, TransformWriteString Allows you to transform the Response data as it is written in its original chunk size in the Stream’s Write() method. Unlike TransformStream/TransformString which operate on the complete output, these events only see the current chunk of data written. This is more efficient as there’s no caching involved, but can cause problems due to searched content splitting over multiple chunks. Using this implementation, creating a custom Response.Filter transformation becomes as simple as the following code. To hook up the Response.Filter using the MemoryStream version event: ResponseFilterStream filter = new ResponseFilterStream(Response.Filter); filter.TransformStream += filter_TransformStream; Response.Filter = filter; and the event handler to do the transformation: MemoryStream filter_TransformStream(MemoryStream ms) { Encoding encoding = HttpContext.Current.Response.ContentEncoding; string output = encoding.GetString(ms.ToArray()); output = FixPaths(output); ms = new MemoryStream(output.Length); byte[] buffer = encoding.GetBytes(output); ms.Write(buffer,0,buffer.Length); return ms; } private string FixPaths(string output) { string path = HttpContext.Current.Request.ApplicationPath; // override root path wonkiness if (path == "/") path = ""; output = output.Replace("\"~/", "\"" + path + "/").Replace("'~/", "'" + path + "/"); return output; } The idea of the event handler is that you can do whatever you want to the stream and return back a stream – either the same one that’s been modified or a brand new one – which is then sent back to as the final response. The above code can be simplified even more by using the string version events which handle the stream to string conversions for you: ResponseFilterStream filter = new ResponseFilterStream(Response.Filter); filter.TransformString += filter_TransformString; Response.Filter = filter; and the event handler to do the transformation calling the same FixPaths method shown above: string filter_TransformString(string output) { return FixPaths(output); } The events for capturing output and capturing and transforming chunks work in a very similar way. By using events to handle the transformations ResponseFilterStream becomes a reusable component and we don’t have to create a new stream class or subclass an existing Stream based classed. By the way, the example used here is kind of a cool trick which transforms “~/” expressions inside of the final generated HTML output – even in plain HTML controls not HTML controls – and transforms them into the appropriate application relative path in the same way that ResolveUrl would do. 
So you can write plain old HTML like this: <a href=”~/default.aspx”>Home</a>  and have it turned into: <a href=”/myVirtual/default.aspx”>Home</a>  without having to use an ASP.NET control like Hyperlink or Image or having to constantly use: <img src=”<%= ResolveUrl(“~/images/home.gif”) %>” /> in MVC applications (which frankly is one of the most annoying things about MVC especially given the path hell that extension-less and endpoint-less URLs impose). I can’t take credit for this idea. While discussing the Response.Filter issues on Twitter a hint from Dylan Beattie who pointed me at one of his examples which does something similar. I thought the idea was cool enough to use an example for future demos of Response.Filter functionality in ASP.NET next I time I do the Modules and Handlers talk (which was great fun BTW). How practical this is is debatable however since there’s definitely some overhead to using a Response.Filter in general and especially on one that caches the output and the re-writes it later. Make sure to test for performance anytime you use Response.Filter hookup and make sure it' doesn’t end up killing perf on you. You’ve been warned :-}. How does ResponseFilterStream work? The big win of this implementation IMHO is that it’s a reusable  component – so for implementation there’s no new class, no subclassing – you simply attach to an event to implement an event handler method with a straight forward signature to retrieve the stream or string you’re interested in. The implementation is based on a subclass of Stream as is required in order to handle the Response.Filter requirements. What’s different than other implementations I’ve seen in various places is that it supports capturing output as a whole to allow retrieving the full response output for capture or modification. The exception are the TransformWrite and TransformWrite events which operate only active chunk of data written by the Response. For captured output, the Write() method captures output into an internal MemoryStream that is cached until writing is complete. So Write() is called when ASP.NET writes to the Response stream, but the filter doesn’t pass on the Write immediately to the filter’s internal stream. The data is cached and only when the Flush() method is called to finalize the Stream’s output do we actually send the cached stream off for transformation (if the events are hooked up) and THEN finally write out the returned content in one big chunk. Here’s the implementation of ResponseFilterStream: /// <summary> /// A semi-generic Stream implementation for Response.Filter with /// an event interface for handling Content transformations via /// Stream or String. /// <remarks> /// Use with care for large output as this implementation copies /// the output into a memory stream and so increases memory usage. 
/// </remarks> /// </summary> public class ResponseFilterStream : Stream { /// <summary> /// The original stream /// </summary> Stream _stream; /// <summary> /// Current position in the original stream /// </summary> long _position; /// <summary> /// Stream that original content is read into /// and then passed to TransformStream function /// </summary> MemoryStream _cacheStream = new MemoryStream(5000); /// <summary> /// Internal pointer that that keeps track of the size /// of the cacheStream /// </summary> int _cachePointer = 0; /// <summary> /// /// </summary> /// <param name="responseStream"></param> public ResponseFilterStream(Stream responseStream) { _stream = responseStream; } /// <summary> /// Determines whether the stream is captured /// </summary> private bool IsCaptured { get { if (CaptureStream != null || CaptureString != null || TransformStream != null || TransformString != null) return true; return false; } } /// <summary> /// Determines whether the Write method is outputting data immediately /// or delaying output until Flush() is fired. /// </summary> private bool IsOutputDelayed { get { if (TransformStream != null || TransformString != null) return true; return false; } } /// <summary> /// Event that captures Response output and makes it available /// as a MemoryStream instance. Output is captured but won't /// affect Response output. /// </summary> public event Action<MemoryStream> CaptureStream; /// <summary> /// Event that captures Response output and makes it available /// as a string. Output is captured but won't affect Response output. /// </summary> public event Action<string> CaptureString; /// <summary> /// Event that allows you transform the stream as each chunk of /// the output is written in the Write() operation of the stream. /// This means that that it's possible/likely that the input /// buffer will not contain the full response output but only /// one of potentially many chunks. /// /// This event is called as part of the filter stream's Write() /// operation. /// </summary> public event Func<byte[], byte[]> TransformWrite; /// <summary> /// Event that allows you to transform the response stream as /// each chunk of bytep[] output is written during the stream's write /// operation. This means it's possibly/likely that the string /// passed to the handler only contains a portion of the full /// output. Typical buffer chunks are around 16k a piece. /// /// This event is called as part of the stream's Write operation. /// </summary> public event Func<string, string> TransformWriteString; /// <summary> /// This event allows capturing and transformation of the entire /// output stream by caching all write operations and delaying final /// response output until Flush() is called on the stream. /// </summary> public event Func<MemoryStream, MemoryStream> TransformStream; /// <summary> /// Event that can be hooked up to handle Response.Filter /// Transformation. Passed a string that you can modify and /// return back as a return value. The modified content /// will become the final output. 
/// </summary> public event Func<string, string> TransformString; protected virtual void OnCaptureStream(MemoryStream ms) { if (CaptureStream != null) CaptureStream(ms); } private void OnCaptureStringInternal(MemoryStream ms) { if (CaptureString != null) { string content = HttpContext.Current.Response.ContentEncoding.GetString(ms.ToArray()); OnCaptureString(content); } } protected virtual void OnCaptureString(string output) { if (CaptureString != null) CaptureString(output); } protected virtual byte[] OnTransformWrite(byte[] buffer) { if (TransformWrite != null) return TransformWrite(buffer); return buffer; } private byte[] OnTransformWriteStringInternal(byte[] buffer) { Encoding encoding = HttpContext.Current.Response.ContentEncoding; string output = OnTransformWriteString(encoding.GetString(buffer)); return encoding.GetBytes(output); } private string OnTransformWriteString(string value) { if (TransformWriteString != null) return TransformWriteString(value); return value; } protected virtual MemoryStream OnTransformCompleteStream(MemoryStream ms) { if (TransformStream != null) return TransformStream(ms); return ms; } /// <summary> /// Allows transforming of strings /// /// Note this handler is internal and not meant to be overridden /// as the TransformString Event has to be hooked up in order /// for this handler to even fire to avoid the overhead of string /// conversion on every pass through. /// </summary> /// <param name="responseText"></param> /// <returns></returns> private string OnTransformCompleteString(string responseText) { if (TransformString != null) TransformString(responseText); return responseText; } /// <summary> /// Wrapper method form OnTransformString that handles /// stream to string and vice versa conversions /// </summary> /// <param name="ms"></param> /// <returns></returns> internal MemoryStream OnTransformCompleteStringInternal(MemoryStream ms) { if (TransformString == null) return ms; //string content = ms.GetAsString(); string content = HttpContext.Current.Response.ContentEncoding.GetString(ms.ToArray()); content = TransformString(content); byte[] buffer = HttpContext.Current.Response.ContentEncoding.GetBytes(content); ms = new MemoryStream(); ms.Write(buffer, 0, buffer.Length); //ms.WriteString(content); return ms; } /// <summary> /// /// </summary> public override bool CanRead { get { return true; } } public override bool CanSeek { get { return true; } } /// <summary> /// /// </summary> public override bool CanWrite { get { return true; } } /// <summary> /// /// </summary> public override long Length { get { return 0; } } /// <summary> /// /// </summary> public override long Position { get { return _position; } set { _position = value; } } /// <summary> /// /// </summary> /// <param name="offset"></param> /// <param name="direction"></param> /// <returns></returns> public override long Seek(long offset, System.IO.SeekOrigin direction) { return _stream.Seek(offset, direction); } /// <summary> /// /// </summary> /// <param name="length"></param> public override void SetLength(long length) { _stream.SetLength(length); } /// <summary> /// /// </summary> public override void Close() { _stream.Close(); } /// <summary> /// Override flush by writing out the cached stream data /// </summary> public override void Flush() { if (IsCaptured && _cacheStream.Length > 0) { // Check for transform implementations _cacheStream = OnTransformCompleteStream(_cacheStream); _cacheStream = OnTransformCompleteStringInternal(_cacheStream); OnCaptureStream(_cacheStream); 
OnCaptureStringInternal(_cacheStream); // write the stream back out if output was delayed if (IsOutputDelayed) _stream.Write(_cacheStream.ToArray(), 0, (int)_cacheStream.Length); // Clear the cache once we've written it out _cacheStream.SetLength(0); } // default flush behavior _stream.Flush(); } /// <summary> /// /// </summary> /// <param name="buffer"></param> /// <param name="offset"></param> /// <param name="count"></param> /// <returns></returns> public override int Read(byte[] buffer, int offset, int count) { return _stream.Read(buffer, offset, count); } /// <summary> /// Overriden to capture output written by ASP.NET and captured /// into a cached stream that is written out later when Flush() /// is called. /// </summary> /// <param name="buffer"></param> /// <param name="offset"></param> /// <param name="count"></param> public override void Write(byte[] buffer, int offset, int count) { if ( IsCaptured ) { // copy to holding buffer only - we'll write out later _cacheStream.Write(buffer, 0, count); _cachePointer += count; } // just transform this buffer if (TransformWrite != null) buffer = OnTransformWrite(buffer); if (TransformWriteString != null) buffer = OnTransformWriteStringInternal(buffer); if (!IsOutputDelayed) _stream.Write(buffer, offset, buffer.Length); } } The key features are the events and corresponding OnXXX methods that handle the event hookups, and the Write() and Flush() methods of the stream implementation. All the rest of the members tend to be plain jane passthrough stream implementation code without much consequence. I do love the way Action<t> and Func<T> make it so easy to create the event signatures for the various events – sweet. A few Things to consider Performance Response.Filter is not great for performance in general as it adds another layer of indirection to the ASP.NET output pipeline, and this implementation in particular adds a memory hit as it basically duplicates the response output into the cached memory stream which is necessary since you may have to look at the entire response. If you have large pages in particular this can cause potentially serious memory pressure in your server application. So be careful of wholesale adoption of this (or other) Response.Filters. Make sure to do some performance testing to ensure it’s not killing your app’s performance. Response.Filter works everywhere A few questions came up in comments and discussion as to capturing ALL output hitting the site and – yes you can definitely do that by assigning a Response.Filter inside of a module. If you do this however you’ll want to be very careful and decide which content you actually want to capture especially in IIS 7 which passes ALL content – including static images/CSS etc. through the ASP.NET pipeline. So it is important to filter only on what you’re looking for – like the page extension or maybe more effectively the Response.ContentType. Response.Filter Chaining Originally I thought that filter chaining doesn’t work at all due to a bug in the stream implementation code. But it’s quite possible to assign multiple filters to the Response.Filter property. 
So the following actually works to both compress the output and apply the transformed content:
WebUtils.GZipEncodePage();
ResponseFilterStream filter = new ResponseFilterStream(Response.Filter);
filter.TransformString += filter_TransformString;
Response.Filter = filter;
However the following does not work, resulting in invalid content encoding errors:
ResponseFilterStream filter = new ResponseFilterStream(Response.Filter);
filter.TransformString += filter_TransformString;
Response.Filter = filter;
WebUtils.GZipEncodePage();
In other words multiple Response filters can work together, but it depends entirely on the implementation whether they can be chained or in which order they can be chained. In this case running the GZip/Deflate stream filters apparently relies on the original content length of the output and chokes when the content is modified. But if you attach the compression first it works fine, as unintuitive as that may seem.
Resources
Download example code
Capture Output from ASP.NET Pages
© Rick Strahl, West Wind Technologies, 2005-2010
Posted in ASP.NET
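For completeness, the capture-only events described above hook up the same way as the transform events; a rough sketch based on the events defined in the class (the OutputLog helper is hypothetical, not part of the post):

// Capture the final rendered output without altering the response.
ResponseFilterStream filter = new ResponseFilterStream(Response.Filter);
filter.CaptureString += output =>
{
    // e.g. log or cache the full page HTML; OutputLog is an assumed helper.
    OutputLog.Save(Request.RawUrl, output);
};
Response.Filter = filter;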

    Read the article

  • Unable to Create New Incidents in Dynamics CRM with Java and Axis2

    - by Lutz
    So I've been working on trying to figure this out, oddly when I ran it one machine I got a generic Axis Fault with no description, but now on another machine I'm getting a different error message, but I'm still stuck. Basically I'm just trying to do what I thought would be a fairly trivial task of creating a new incident in Microsoft Dynamics CRM 4.0 via a web services call. I started by downloading the XML from http://hostname/MSCrmServices/2007/CrmService.asmx and generating code from it using Axis2. Anyway, here's my program, any help would be greatly appreciated, as I've been stuck on this for way longer than I thought I'd be and I'm really out of ideas here. public class TestCRM { private static String endpointURL = "http://theHost/MSCrmServices/2007/CrmService.asmx"; private static String userName = "myUserNameHere"; private static String password = "myPasswordHere"; private static String host = "theHostname"; private static int port = 80; private static String domain = "theDomain"; private static String orgName = "theOrganization"; public static void main(String[] args) { CrmServiceStub stub; try { stub = new CrmServiceStub(endpointURL); setOptions(stub._getServiceClient().getOptions()); RetrieveMultipleDocument rmd = RetrieveMultipleDocument.Factory.newInstance(); com.microsoft.schemas.crm._2007.webservices.RetrieveMultipleDocument.RetrieveMultiple rm = com.microsoft.schemas.crm._2007.webservices.RetrieveMultipleDocument.RetrieveMultiple.Factory.newInstance(); QueryExpression query = QueryExpression.Factory.newInstance(); query.setColumnSet(AllColumns.Factory.newInstance()); query.setEntityName(EntityName.INCIDENT.toString()); rm.setQuery(query); rmd.setRetrieveMultiple(rm); TargetCreateIncident tinc = TargetCreateIncident.Factory.newInstance(); Incident inc = tinc.addNewIncident(); inc.setDescription("This is a test of ticket creation through a web services call."); CreateDocument cd = CreateDocument.Factory.newInstance(); Create create = Create.Factory.newInstance(); create.setEntity(inc); cd.setCreate(create); Incident test = (Incident)cd.getCreate().getEntity(); CrmAuthenticationTokenDocument catd = CrmAuthenticationTokenDocument.Factory.newInstance(); CrmAuthenticationToken token = CrmAuthenticationToken.Factory.newInstance(); token.setAuthenticationType(0); token.setOrganizationName(orgName); catd.setCrmAuthenticationToken(token); //The two printlns below spit back XML that looks okay to me? System.out.println(cd); System.out.println(catd); /* stuff that doesn't work */ CreateResponseDocument crd = stub.create(cd, catd, null, null); //this line throws the error CreateResponse cr = crd.getCreateResponse(); System.out.println("create result: " + cr.getCreateResult()); /* End stuff that doesn't work */ System.out.println(); System.out.println(); System.out.println(); boolean fetchNext = true; while(fetchNext){ RetrieveMultipleResponseDocument rmrd = stub.retrieveMultiple(rmd, catd, null, null); //This retrieve using the CRMAuthenticationToken catd works just fine RetrieveMultipleResponse rmr = rmrd.getRetrieveMultipleResponse(); BusinessEntityCollection bec = rmr.getRetrieveMultipleResult(); String pagingCookie = bec.getPagingCookie(); fetchNext = bec.getMoreRecords(); ArrayOfBusinessEntity aobe = bec.getBusinessEntities(); BusinessEntity[] myEntitiesAtLast = aobe.getBusinessEntityArray(); for(int i=0; i<myEntitiesAtLast.length; i++){ //cast to whatever you asked for... 
Incident myEntity = (Incident) myEntitiesAtLast[i]; System.out.println("["+(i+1)+"]: " + myEntity); } } } catch (Exception e) { e.printStackTrace(); } } private static void setOptions(Options options){ HttpTransportProperties.Authenticator auth = new HttpTransportProperties.Authenticator(); List authSchemes = new ArrayList(); authSchemes.add(HttpTransportProperties.Authenticator.NTLM); auth.setAuthSchemes(authSchemes); auth.setUsername(userName); auth.setPassword(password); auth.setHost(host); auth.setPort(port); auth.setDomain(domain); auth.setPreemptiveAuthentication(false); options.setProperty(HTTPConstants.AUTHENTICATE, auth); options.setProperty(HTTPConstants.REUSE_HTTP_CLIENT, "true"); } } Also, here's the error message I receive: org.apache.axis2.AxisFault: com.ctc.wstx.exc.WstxUnexpectedCharException: Unexpected character 'S' (code 83) in prolog; expected '<' at [row,col {unknown-source}]: [1,1] at org.apache.axis2.AxisFault.makeFault(AxisFault.java:430) at org.apache.axis2.transport.TransportUtils.createSOAPMessage(TransportUtils.java:123) at org.apache.axis2.transport.TransportUtils.createSOAPMessage(TransportUtils.java:67) at org.apache.axis2.description.OutInAxisOperationClient.handleResponse(OutInAxisOperation.java:354) at org.apache.axis2.description.OutInAxisOperationClient.send(OutInAxisOperation.java:417) at org.apache.axis2.description.OutInAxisOperationClient.executeImpl(OutInAxisOperation.java:229) at org.apache.axis2.client.OperationClient.execute(OperationClient.java:165) at com.spanlink.crm.dynamics4.webservice.CrmServiceStub.create(CrmServiceStub.java:618) at com.spanlink.crm.dynamics4.runtime.TestCRM.main(TestCRM.java:82) Caused by: org.apache.axiom.om.OMException: com.ctc.wstx.exc.WstxUnexpectedCharException: Unexpected character 'S' (code 83) in prolog; expected '<' at [row,col {unknown-source}]: [1,1] at org.apache.axiom.om.impl.builder.StAXOMBuilder.next(StAXOMBuilder.java:260) at org.apache.axiom.soap.impl.builder.StAXSOAPModelBuilder.getSOAPEnvelope(StAXSOAPModelBuilder.java:161) at org.apache.axiom.soap.impl.builder.StAXSOAPModelBuilder.<init>(StAXSOAPModelBuilder.java:110) at org.apache.axis2.builder.BuilderUtil.getSOAPBuilder(BuilderUtil.java:682) at org.apache.axis2.transport.TransportUtils.createDocumentElement(TransportUtils.java:215) at org.apache.axis2.transport.TransportUtils.createSOAPMessage(TransportUtils.java:145) at org.apache.axis2.transport.TransportUtils.createSOAPMessage(TransportUtils.java:108) ... 7 more Caused by: com.ctc.wstx.exc.WstxUnexpectedCharException: Unexpected character 'S' (code 83) in prolog; expected '<' at [row,col {unknown-source}]: [1,1] at com.ctc.wstx.sr.StreamScanner.throwUnexpectedChar(StreamScanner.java:623) at com.ctc.wstx.sr.BasicStreamReader.nextFromProlog(BasicStreamReader.java:2047) at com.ctc.wstx.sr.BasicStreamReader.next(BasicStreamReader.java:1069) at javax.xml.stream.util.StreamReaderDelegate.next(StreamReaderDelegate.java:60) at org.apache.axiom.om.impl.builder.SafeXMLStreamReader.next(SafeXMLStreamReader.java:183) at org.apache.axiom.om.impl.builder.StAXOMBuilder.parserNext(StAXOMBuilder.java:597) at org.apache.axiom.om.impl.builder.StAXOMBuilder.next(StAXOMBuilder.java:172) ... 13 more

    Read the article

  • How to group using XSLT

    - by AdRock
    I'm having trouble grouping a set of nodes. I've found an article that does work with grouping and i have tested it and it works on a small test stylesheet i have I now need to use it in my stylesheet where I only want to select node sets that have a specific value. What I want to do in my stylesheet is select all users who have a userlevel of 2 then to group them by the volunteer region. What happens at the minute is that it gets the right amount of users with userlevel 2 but doesn't print them. It just repeats the first user in the xml file. <?xml version="1.0" encoding="ISO-8859-1"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:key name="volunteers-by-region" match="volunteer" use="region" /> <xsl:template name="hoo" match="/"> <html> <head> <title>Registered Volunteers</title> <link rel="stylesheet" type="text/css" href="volunteer.css" /> </head> <body> <h1>Registered Volunteers</h1> <h3>Ordered by the username ascending</h3> <xsl:for-each select="folktask/member[user/account/userlevel='2']"> <xsl:for-each select="volunteer[count(. | key('volunteers-by-region', region)[1]) = 1]"> <xsl:sort select="region" /> <xsl:for-each select="key('volunteers-by-region', region)"> <xsl:sort select="folktask/member/user/personal/name" /> <div class="userdiv"> <xsl:call-template name="member_userid"> <xsl:with-param name="myid" select="/folktask/member/user/@id" /> </xsl:call-template> <xsl:call-template name="volunteer_volid"> <xsl:with-param name="volid" select="/folktask/member/volunteer/@id" /> </xsl:call-template> <xsl:call-template name="volunteer_role"> <xsl:with-param name="volrole" select="/folktask/member/volunteer/roles" /> </xsl:call-template> <xsl:call-template name="volunteer_region"> <xsl:with-param name="volloc" select="/folktask/member/volunteer/region" /> </xsl:call-template> </div> </xsl:for-each> </xsl:for-each> </xsl:for-each> <xsl:if test="position()=last()"> <div class="count"><h2>Total number of volunteers: <xsl:value-of select="count(/folktask/member/user/account/userlevel[text()=2])"/></h2></div> </xsl:if> </body> </html> </xsl:template> <xsl:template name="member_userid"> <xsl:param name="myid" select="'Not Available'" /> <div class="heading bold"><h2>USER ID: <xsl:value-of select="$myid" /></h2></div> </xsl:template> <xsl:template name="volunteer_volid"> <xsl:param name="volid" select="'Not Available'" /> <div class="heading2 bold"><h2>VOLUNTEER ID: <xsl:value-of select="$volid" /></h2></div> </xsl:template> <xsl:template name="volunteer_role"> <xsl:param name="volrole" select="'Not Available'" /> <div class="small bold">ROLES:</div> <div class="large"> <xsl:choose> <xsl:when test="string-length($volrole)!=0"> <xsl:value-of select="$volrole" /> </xsl:when> <xsl:otherwise> <xsl:text> </xsl:text> </xsl:otherwise> </xsl:choose> </div> </xsl:template> <xsl:template name="volunteer_region"> <xsl:param name="volloc" select="'Not Available'" /> <div class="small bold">REGION:</div> <div class="large"><xsl:value-of select="$volloc" /></div> </xsl:template> </xsl:stylesheet> here is my full xml file <?xml version="1.0" encoding="ISO-8859-1" ?> <?xml-stylesheet type="text/xsl" href="volunteers.xsl"?> <folktask xmlns:xs="http://www.w3.org/2001/XMLSchema-instance" xs:noNamespaceSchemaLocation="folktask.xsd"> <member> <user id="1"> <personal> <name>Abbie Hunt</name> <sex>Female</sex> <address1>108 Access Road</address1> <address2></address2> <city>Wells</city> <county>Somerset</county> <postcode>BA5 8GH</postcode> 
<telephone>01528927616</telephone> <mobile>07085252492</mobile> <email>[email protected]</email> </personal> <account> <username>AdRock</username> <password>269eb625e2f0cf6fae9a29434c12a89f</password> <userlevel>4</userlevel> <signupdate>2010-03-26T09:23:50</signupdate> </account> </user> <volunteer id="1"> <roles></roles> <region>South West</region> </volunteer> </member> <member> <user id="2"> <personal> <name>Aidan Harris</name> <sex>Male</sex> <address1>103 Aiken Street</address1> <address2></address2> <city>Chichester</city> <county>Sussex</county> <postcode>PO19 4DS</postcode> <telephone>01905149894</telephone> <mobile>07784467941</mobile> <email>[email protected]</email> </personal> <account> <username>AmbientExpert</username> <password>8e64214160e9dd14ae2a6d9f700004a6</password> <userlevel>2</userlevel> <signupdate>2010-03-26T09:23:50</signupdate> </account> </user> <volunteer id="2"> <roles>Van Driver,gas Fitter</roles> <region>South Central</region> </volunteer> </member> <member> <user id="3"> <personal> <name>Skye Saunders</name> <sex>Female</sex> <address1>31 Anns Court</address1> <address2></address2> <city>Cirencester</city> <county>Gloucestershire</county> <postcode>GL7 1JG</postcode> <telephone>01958303514</telephone> <mobile>07260491667</mobile> <email>[email protected]</email> </personal> <account> <username>BigUndecided</username> <password>ea297847f80e046ca24a8621f4068594</password> <userlevel>2</userlevel> <signupdate>2010-03-26T09:23:50</signupdate> </account> </user> <volunteer id="3"> <roles>Scaffold Erector</roles> <region>South West</region> </volunteer> </member> <member> <user id="4"> <personal> <name>Connor Lawson</name> <sex>Male</sex> <address1>12 Ash Way</address1> <address2></address2> <city>Swindon</city> <county>Wiltshire</county> <postcode>SN3 6GS</postcode> <telephone>01791928119</telephone> <mobile>07338695664</mobile> <email>[email protected]</email> </personal> <account> <username>iTuneStinker</username> <password>3a1f5fda21a07bfff20c41272bae7192</password> <userlevel>3</userlevel> <signupdate>2010-03-26T09:23:50</signupdate> </account> </user> <festival id="1"> <event> <eventname>Oxford Folk Festival</eventname> <url>http://www.oxfordfolkfestival.com/</url> <datefrom>2010-04-07</datefrom> <dateto>2010-04-09</dateto> <location>Oxford</location> <eventpostcode>OX1 9BE</eventpostcode> <additional>Oxford Folk Festival is going into it's third year in 2006. As well as needing volunteers to steward for the event on the weekend itself, we would be delighted to hear from people willing to help in year round festival work such as stuffing envelopes for mailings, poster and leaflet distribution, and stewarding duties at festival pre-events.</additional> <coords> <lat>51.735640</lat> <lng>-1.276136</lng> </coords> </event> <contact> <conname>Stuart Vincent</conname> <conaddress1>P.O. 
Box 642</conaddress1> <conaddress2></conaddress2> <concity>Oxford</concity> <concounty>Bedfordshire</concounty> <conpostcode>OX1 3BY</conpostcode> <contelephone>01865 79073</contelephone> <conmobile></conmobile> <fax></fax> <conemail>[email protected]</conemail> </contact> </festival> </member> <member> <user id="5"> <personal> <name>Lewis King</name> <sex>Male</sex> <address1>67 Arbors Way</address1> <address2></address2> <city>Sherborne</city> <county>Dorset</county> <postcode>DT9 0GS</postcode> <telephone>01446139701</telephone> <mobile>07292614033</mobile> <email>[email protected]</email> </personal> <account> <username>Runninglife</username> <password>98fab0a27c34ddb2b0618bc184d4331d</password> <userlevel>2</userlevel> <signupdate>2010-03-26T09:23:50</signupdate> </account> </user> <volunteer id="4"> <roles>Van Driver</roles> <region>South West</region> </volunteer> </member> <member> <user id="6"> <personal> <name>Cameron Lee</name> <sex>Male</sex> <address1>77 Arrington Road</address1> <address2></address2> <city>Solihull</city> <county>Warwickshire</county> <postcode>B90 6FG</postcode> <telephone>01435158624</telephone> <mobile>07789503179</mobile> <email>[email protected]</email> </personal> <account> <username>love2Mixer</username> <password>1df752d54876928639cea07ce036a9c0</password> <userlevel>2</userlevel> <signupdate>2010-03-26T09:23:50</signupdate> </account> </user> <volunteer id="5"> <roles>Fire Warden</roles> <region>Midlands</region> </volunteer> </member> <member> <user id="7"> <personal> <name>Lexie Dean</name> <sex>Female</sex> <address1>38 Bloomfield Court</address1> <address2></address2> <city>Windermere</city> <county>Westmorland</county> <postcode>LA23 8BM</postcode> <telephone>01781207188</telephone> <mobile>07127461231</mobile> <email>[email protected]</email> </personal> <account> <username>MailNetworker</username> <password>0e070701839e612bf46af4421db4f44b</password> <userlevel>3</userlevel> <signupdate>2010-03-26T09:23:50</signupdate> </account> </user> <festival id="2"> <event> <eventname>Middlewich Folk And Boat Festival</eventname> <url>http://midfest.org.uk/mfab/</url> <datefrom>2010-06-16</datefrom> <dateto>2010-06-18</dateto> <location>Middlewich</location> <eventpostcode>CW10 9BX</eventpostcode> <additional>We welcome stewards staying on campsite or boats.</additional> <coords> <lat>53.190562</lat> <lng>-2.444926</lng> </coords> </event> <contact> <conname>Festival Committee</conname> <conaddress1>PO Box 141</conaddress1> <conaddress2></conaddress2> <concity>Winsford</concity> <concounty>Cheshire</concounty> <conpostcode>CW10 9WB</conpostcode> <contelephone>07092 39050</contelephone> <conmobile>07092 39050</conmobile> <fax></fax> <conemail>[email protected]</conemail> </contact> </festival> </member> <member> <user id="8"> <personal> <name>Liam Chapman</name> <sex>Male</sex> <address1>99 Black Water Drive</address1> <address2></address2> <city>St.Austell</city> <county>Cornwall</county> <postcode>PL25 3GF</postcode> <telephone>01835629418</telephone> <mobile>07695179069</mobile> <email>[email protected]</email> </personal> <account> <username>GreenWimp</username> <password>1fe3df99a841dc4f723d21af89e0990f</password> <userlevel>1</userlevel> <signupdate>2010-03-26T09:23:50</signupdate> </account> </user> </member> <member> <user id="9"> <personal> <name>Brandon Harrison</name> <sex>Male</sex> <address1>41 Arlington Way</address1> <address2></address2> <city>Dorchester</city> <county>Dorset</county> <postcode>DT1 3JS</postcode> 
<telephone>01293626735</telephone> <mobile>07277145760</mobile> <email>[email protected]</email> </personal> <account> <username>LovelyStar</username> <password>8b53b66f323aa5e6a083edb4fd44456b</password> <userlevel>1</userlevel> <signupdate>2010-03-26T09:23:50</signupdate> </account> </user> </member> <member> <user id="10"> <personal> <name>Samuel Young</name> <sex>Male</sex> <address1>102 Bailey Hill Road</address1> <address2></address2> <city>Wolverhampton</city> <county>Staffordshire</county> <postcode>WV7 8HS</postcode> <telephone>01594531382</telephone> <mobile>07544663654</mobile> <email>[email protected]</email> </personal> <account> <username>GuruSassy</username> <password>00da02da6c143c3d136bf60b8bfcf43e</password> <userlevel>2</userlevel> <signupdate>2010-03-26T09:23:50</signupdate> </account> </user> <volunteer id="6"> <roles>Fire Warden</roles> <region>Midlands</region> </volunteer> </member> <member> <user id="11"> <personal> <name>Alexander Harris</name> <sex>Male</sex> <address1>93 Beguine Drive</address1> <address2></address2> <city>Winchester</city> <county>Hampshire</county> <postcode>S23 2FD</postcode> <telephone>01452496582</telephone> <mobile>07353867291</mobile> <email>[email protected]</email> </personal> <account> <username>GuitarExpert</username> <password>0102ad3740028e155925e9918ead3bde</password> <userlevel>2</userlevel> <signupdate>2010-03-26T09:23:50</signupdate> </account> </user> <volunteer id="7"> <roles>Scaffold Erector</roles> <region>North East</region> </volunteer> </member> <member> <user id="12"> <personal> <name>Tyler Mcdonald</name> <sex>Male</sex> <address1>44 Baker Road</address1> <address2></address2> <city>Bromley</city> <county>Kent</county> <postcode>BR1 2GD</postcode> <telephone>01918704546</telephone> <mobile>07314062451</mobile> <email>[email protected]</email> </personal> <account> <username>WildWish</username> <password>073220bb5e9a12ad202bb7d94dcc86f7</password> <userlevel>1</userlevel> <signupdate>2010-03-26T09:23:50</signupdate> </account> </user> </member> <member> <user id="13"> <personal> <name>Skye Mason</name> <sex>Female</sex> <address1>56 Cedar Creek Church Road</address1> <address2></address2> <city>Bracknell</city> <county>Berkshire</county> <postcode>RG12 1AQ</postcode> <telephone>01787607618</telephone> <mobile>07540218868</mobile> <email>[email protected]</email> </personal> <account> <username>PizzaDork</username> <password>74c54937ee7051ee7f4ebc11296ed531</password> <userlevel>1</userlevel> <signupdate>2010-03-26T09:23:50</signupdate> </account> </user> </member> <member> <user id="14"> <personal> <name>Maryam Rose</name> <sex>Female</sex> <address1>98 Baptist Circle</address1> <address2></address2> <city>Newbury</city> <county>Berkshire</county> <postcode>RG14 8DF</postcode> <telephone>01691317999</telephone> <mobile>07212477154</mobile> <email>[email protected]</email> </personal> <account> <username>SexTech</username> <password>f1c21f9f1e999da97d7dc460bb876fcf</password> <userlevel>3</userlevel> <signupdate>2010-03-26T09:23:50</signupdate> </account> </user> <festival id="3"> <event> <eventname>Birdsedge Village Festival</eventname> <url>http://www.birdsedge.co.uk/</url> <datefrom>2010-07-08</datefrom> <dateto>2010-07-09</dateto> <location>Birdsedge</location> <eventpostcode>HD8 8XT</eventpostcode> <additional></additional> <coords> <lat>53.565644</lat> <lng>-1.696196</lng> </coords> </event> <contact> <conname>Jacey Bedford</conname> <conaddress1>Penistone Road</conaddress1> <conaddress2>Birdsedge</conaddress2> 
<concity>Huddersfield</concity> <concounty>West Yorkshire</concounty> <conpostcode>HD8 8XT</conpostcode> <contelephone>01484 60623</contelephone> <conmobile></conmobile> <fax></fax> <conemail>[email protected]</conemail> </contact> </festival> </member> <member> <user id="15"> <personal> <name>Lexie Rogers</name> <sex>Female</sex> <address1>38 Bishop Road</address1> <address2></address2> <city>Matlock</city> <county>Derbyshire</county> <postcode>DE4 1BX</postcode> <telephone>01961168823</telephone> <mobile>07170855351</mobile> <email>[email protected]</email> </personal> <account> <username>ShipBurglar</username> <password>cc190488a95667cb117e20bc6c7c330e</password> <userlevel>2</userlevel> <signupdate>2010-03-26T09:23:50</signupdate> </account> </user> <volunteer id="8"> <roles>Gas Fitter</roles> <region>Midlands</region> </volunteer> </member> <member> <user id="16"> <personal> <name>Noah Parker</name> <sex>Male</sex> <address1>112 Canty Road</address1> <address2></address2> <city>Keswick</city> <county>Cumberland</county> <postcode>CA12 4TR</postcode> <telephone>01931272522</telephone> <mobile>07610026576</mobile> <email>[email protected]</email> </personal> <account> <username>AwsomeMoon</username> <password>50b770539bdf08543f15778fc7a6f188</password> <userlevel>2</userlevel> <signupdate>2010-03-26T09:23:50</signupdate> </account> </user> <volunteer id="9"> <roles>Van Driver</roles> <region>North West</region> </volunteer> </member> <member> <user id="17"> <personal> <name>Elliot Mitchell</name> <sex>Male</sex> <address1>102 Brown Loop</address1> <address2></address2> <city>Grimsby</city> <county>Lincolnshire</county> <postcode>OX16 4QP</postcode> <telephone>01212971319</telephone> <mobile>07544663654</mobile> <email>[email protected]</email> </personal> <account> <username>msBasher</username> <password>c38fad85badcdff6e3559ef38656305d</password> <userlevel>1</userlevel> <signupdate>2010-03-26T09:23:50</signupdate> </account> </user> </member> <member> <user id="18"> <personal> <name>Scarlett Rose</name> <sex>Female</sex> <address1>93 Cedar Lane</address1> <address2></address2> <city>Stourbridge</city> <county>Warminster</county> <postcode>DY8 4NX</postcode> <telephone>01537477435</telephone> <mobile>07353867291</mobile> <email>[email protected]</email> </personal> <account> <username>MakeupWimp</username> <password>16a9b7910fc34304c1d1a6a1b0c31502</password> <userlevel>1</userlevel> <signupdate>2010-03-26T09:23:50</signupdate> </account> </user> </member> <member> <user id="19"> <personal> <name>Katie Butler</name> <sex>Female</sex> <address1>44 Boulder Crest Road</address1> <address2></address2> <city>Bungay</city> <county>Suffolk</county> <postcode>NR35 1LT</postcode> <telephone>01419124094</telephone> <mobile>07314062451</mobile> <email>[email protected]</email> </personal> <account> <username>TomatoCrunch</username> <password>d7eba53443ec4ddcee69ed71b2023fc0</password> <userlevel>1</userlevel> <signupdate>2010-03-26T09:23:50</signupdate> </account> </user> </member> <member> <user id="20"> <personal> <name>Jayden Richards</name> <sex>Male</sex> <address1>56 Corson Trail</address1> <address2></address2> <city>Sandy</city> <county>Bedfordshire</county> <postcode>SG19 6DF</postcode> <telephone>01882134438</telephone> <mobile>07540218868</mobile> <email>[email protected]</email> </personal> <account> <username>nightmareTwig</username> <password>8a9c08c7b6473493e8a5da15dd541025</password> <userlevel>3</userlevel> <signupdate>2010-03-26T09:23:50</signupdate> </account> </user> <festival 
id="4"> <event> <eventname>East Barnet Festival</eventname> <url>http://www.eastbarnetfestival.org.uk</url> <datefrom>2010-07-01</datefrom> <dateto>2010-07-03</dateto> <location>East Barnet</location> <eventpostcode>EN4 8TB</eventpostcode> <additional></additional> <coords> <lat>51.641556</lat> <lng>-0.163018</lng> </coords> </event> <contact> <conname>East Barnet Festival Commitee</conname> <conaddress1>Oak Hill Park</conaddress1> <conaddress2>Church Hill Road</conaddress2> <concity>East Barnet</concity> <concounty>Hertfordshire</concounty> <conpostcode>EN4 8TB</conpostcode> <contelephone>07071781745</contelephone> <conmobile>07071781745</conmobile> <fax></fax> <conemail>[email protected]</conemail> </contact> </festival> </member> <member> <user id="21"> <personal> <name>Abbie Jackson</name> <sex>Female</sex> <address1>98 Briarwood Lane</address1> <address2></address2> <city>Weymouth</city> <county>Dorset</county> <postcode>DT3 6TS</postcode> <telephone>01575629969</telephone> <mobile>07212477154</mobile> <email>[email protected]</email> </personal> <account> <username>CrazyBlockhead</username> <password>4ce56fb13d043be605037ace4fbd9fa5</password> <userlevel>2</u

    Read the article

  • Using HTML 5 SessionState to save rendered Page Content

    - by Rick Strahl
    HTML 5 SessionState and LocalStorage are very useful and super easy to use to manage client side state. For building rich client side or SPA style applications it's a vital feature to be able to cache user data as well as HTML content in order to swap pages in and out of the browser's DOM. What might not be so obvious is that you can also use the sessionState and localStorage objects even in classic server rendered HTML applications to provide caching features between pages. These APIs have been around for a long time and are supported by most relatively modern browsers, even all the way back to IE8, so you can use them safely in your Web applications.

    SessionState and LocalStorage are easy

    The APIs that make up sessionState and localStorage are very simple. Both objects feature the same API interface, which is a simple, string based key value store that has getItem, setItem, removeItem, clear and key methods. The objects are also pseudo array objects and so can be iterated like an array with a length property, and you have array indexers to set and get values with. Basic usage for storing and retrieval looks like this (using sessionStorage, but the syntax is the same for localStorage - just switch the objects):

    // set
    var lastAccess = new Date().getTime();
    if (sessionStorage)
        sessionStorage.setItem("myapp_time", lastAccess.toString());

    // retrieve in another page or on a refresh
    var time = null;
    if (sessionStorage)
        time = sessionStorage.getItem("myapp_time");
    if (time)
        time = new Date(time * 1);
    else
        time = new Date();

    sessionStorage stores data that is browser session specific and that has a lifetime of the active browser session or window. Shut down the browser or tab and the storage goes away. localStorage uses the same API interface, but the data is stored permanently in the browser's storage area until deleted via code or by clearing out browser cookies (not the cache). Both sessionStorage and localStorage space is limited. The spec is ambiguous about this - supposedly sessionStorage should allow for unlimited size, but it appears that most WebKit browsers support only 2.5mb for either object. This means you have to be careful what you store, especially since other applications might be running on the same domain and also use the storage mechanisms. That said, 2.5mb worth of character data is quite a bit and would go a long way. The easiest way to get a feel for how sessionState and localStorage work is to look at a simple example. You can go check out the following example online in Plunker: http://plnkr.co/edit/0ICotzkoPjHaWa70GlRZ?p=preview. Plunker is an online HTML/JavaScript editor that lets you write and run JavaScript code and is similar to JsFiddle, but a bit cleaner to work in IMHO (thanks to John Papa for turning me on to it). The sample has two text boxes with counts that update session/local storage every time you click the related button. The counts are 'cached' in Session and Local storage. The point of these examples is that both counters survive full page reloads, and the LocalStorage counter survives a complete browser shutdown and restart. Go ahead and try it out by clicking the Reload button after updating both counters and then shutting down the browser completely and going back to the same URL (with the same browser). What you should see is that reloads leave both counters intact at the counted values, while a browser restart will leave only the local storage counter intact.
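    (Not from the article - just a minimal sketch of the kind of wrapper you might put around these calls given the quota limits mentioned above. The helper names cacheSet/cacheGet are made up for illustration.)

    ```javascript
    // Wrap sessionStorage with JSON serialization and guard against a missing
    // or full store (setItem throws a QuotaExceededError when the quota is hit).
    function cacheSet(key, value) {
        if (!window.sessionStorage) return false;
        try {
            sessionStorage.setItem(key, JSON.stringify(value));
            return true;
        } catch (e) {
            return false;   // quota exceeded or storage disabled
        }
    }

    function cacheGet(key, fallback) {
        if (!window.sessionStorage) return fallback;
        var raw = sessionStorage.getItem(key);
        return raw ? JSON.parse(raw) : fallback;
    }

    // usage
    cacheSet("myapp_user", { name: "Rick", level: 8 });
    var cachedUser = cacheGet("myapp_user", null);
    ```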
    The code to deal with the SessionStorage (and LocalStorage, not shown here) in the example is isolated into a couple of wrapper methods to simplify the code:

    function getSessionCount() {
        var count = 0;
        if (sessionStorage) {
            count = sessionStorage.getItem("ss_count");
            count = !count ? 0 : count * 1;
        }
        $("#txtSession").val(count);
        return count;
    }

    function setSessionCount(count) {
        if (sessionStorage)
            sessionStorage.setItem("ss_count", count.toString());
    }

    These two functions essentially load and store a session counter value. The two key methods used here are: sessionStorage.getItem(key); sessionStorage.setItem(key, stringVal); Note that the value given to setItem and returned by getItem has to be a string. If you pass another type you get an error. Don't let that limit you though - you can easily enough store JSON data in a variable, so it's quite possible to pass complex objects and store them into a single sessionStorage value:

    var user = { name: "Rick", id: "ricks", level: 8 };
    sessionStorage.setItem("app_user", JSON.stringify(user));

    To retrieve it:

    var user = sessionStorage.getItem("app_user");
    if (user)
        user = JSON.parse(user);

    Simple! If you're using the Chrome Developer Tools (F12) you can also check out the session and local storage state on the Resource tab. You can also use this tool to refresh or remove entries from storage. What we just looked at is a purely client side implementation where a couple of counters are stored. For rich client centric AJAX applications sessionStorage and localStorage provide a very nice and simple API to store application state while the application is running. But you can also use these storage mechanisms to manage server centric HTML applications when you combine server rendering with some JavaScript to perform client side data caching. You can both store some state information and data on the client (ie. store a JSON object and carry it forth between server rendered HTML requests) or you can use it for good old HTTP based caching where some rendered HTML is saved and then restored later. Let's look at the latter with a real life example.

    Why do I need Client-side Page Caching for Server Rendered HTML?

    I don't know about you, but in a lot of my existing server driven applications I have lists that display a fair amount of data. Typically these lists contain links to then drill down into more specific data either for viewing or editing. You can then click on a link and go off to a detail page that provides more concise content. So far so good. But now you're done with the detail page and need to get back to the list, so you click on a 'bread crumbs trail' or an application level 'back to list' button and… …you end up back at the top of the list - the scroll position, the current selection, in some cases even filter conditions - all gone with the wind. You've left behind the state of the list and are starting from scratch in your browsing of the list from the top. Not cool! Sound familiar? This is a pretty common scenario with server rendered HTML content where it's so common to display lists to drill into, only to lose state in the process of returning back to the original list. Look at just about any traditional forums application, or even StackOverFlow, to see what I mean here. Scroll down a bit to look at a post or entry, drill in, then use the bread crumbs or tab to go back… In some cases returning to the top of a list is not a big deal.
    On StackOverFlow that sort of works because content is turning around so quickly you probably want to actually look at the top posts. Not always though - if you're browsing through a list of search topics you're interested in and drill in, there's no way back to that position. Essentially anytime you're actively browsing the items in the list, that's when state becomes important, and if it's not handled the user experience can be really disrupting.

    Content Caching

    If you're building client centric SPA style applications this is a fairly easy to solve problem - you tend to render the list once and then update the page content to overlay the detail content, only hiding the list temporarily until it's used again later. It's relatively easy to accomplish this simply by hiding content on the page and later making it visible again. But if you use server rendered content, hanging on to all the detail like filters, selections and scroll position is not quite as easy. Or is it??? This is where sessionStorage comes in handy. What if we just save the rendered content of a previous page, and then restore it when we return to this page based on a special flag that tells us to use the cached version? Let's see how we can do this.

    A Real World Use Case

    Recently my local ISP asked me to help out with updating an ancient classifieds application. They had a very busy, local classifieds app that was originally an ASP classic application. The old app was - wait for it: frames based - and even though I lobbied against it, the decision was made to keep the frames based layout to allow rapid browsing of the hundreds of posts that are made on a daily basis. The primary reason they wanted this was precisely for the ability to quickly browse content item by item. While I personally hate working with Frames, I have to admit that the UI actually works well with the frames layout as long as you're running on a large desktop screen. You can check out the frames based desktop site here: http://classifieds.gorge.net/ However when I rebuilt the app I also added a secondary view that doesn't use frames. The main reason for this of course was for mobile displays, which work horribly with frames. So there's a somewhat mobile-friendly version of the interface, which ditches the frames and uses some responsive design tweaking for mobile capable operation: http://classifeds.gorge.net/mobile (or browse the base url with your browser width under 800px). Here's what the mobile, non-frames view looks like: As you can see, this means that the list of classifieds posts now is a list and there's a separate page for drilling down into the item. And of course… originally we ran into that usability issue I mentioned earlier where the browse, view detail, go back to the list cycle resulted in lost list state. Originally in mobile mode you scrolled through the list, found an item to look at and drilled in to display the item detail. Then you clicked back to the list and BAM - you've lost your place. Because there are so many items added on a daily basis the full list is never fully loaded, but rather there's a "Load Additional Listings" entry at the bottom. Not only did we originally lose our place when coming back to the list, but any 'additionally loaded' items are no longer there because the list was now rendering as if it was the first page hit. The additional listings, any filters, and the selection of an item were all lost. Major Suckage!
Using Client SessionStorage to cache Server Rendered Content To work around this problem I decided to cache the rendered page content from the list in SessionStorage. Anytime the list renders or is updated with Load Additional Listings, the page HTML is cached and stored in Session Storage. Any back links from the detail page or the login or write entry forms then point back to the list page with a back=true query string parameter. If the server side sees this parameter it doesn't render the part of the page that is cached. Instead the client side code retrieves the data from the sessionState cache and simply inserts it into the page. It sounds pretty simple, and the overall the process is really easy, but there are a few gotchas that I'll discuss in a minute. But first let's look at the implementation. Let's start with the server side here because that'll give a quick idea of the doc structure. As I mentioned the server renders data from an ASP.NET MVC view. On the list page when returning to the list page from the display page (or a host of other pages) looks like this: https://classifieds.gorge.net/list?back=True The query string value is a flag, that indicates whether the server should render the HTML. Here's what the top level MVC Razor view for the list page looks like:@model MessageListViewModel @{ ViewBag.Title = "Classified Listing"; bool isBack = !string.IsNullOrEmpty(Request.QueryString["back"]); } <form method="post" action="@Url.Action("list")"> <div id="SizingContainer"> @if (!isBack) { @Html.Partial("List_CommandBar_Partial", Model) <div id="PostItemContainer" class="scrollbox" xstyle="-webkit-overflow-scrolling: touch;"> @Html.Partial("List_Items_Partial", Model) @if (Model.RequireLoadEntry) { <div class="postitem loadpostitems" style="padding: 15px;"> <div id="LoadProgress" class="smallprogressright"></div> <div class="control-progress"> Load additional listings... </div> </div> } </div> } </div> </form> As you can see the query string triggers a conditional block that if set is simply not rendered. The content inside of #SizingContainer basically holds  the entire page's HTML sans the headers and scripts, but including the filter options and menu at the top. In this case this makes good sense - in other situations the fact that the menu or filter options might be dynamically updated might make you only cache the list rather than essentially the entire page. In this particular instance all of the content works and produces the proper result as both the list along with any filter conditions in the form inputs are restored. Ok, let's move on to the client. On the client there are two page level functions that deal with saving and restoring state. Like the counter example I showed earlier, I like to wrap the logic to save and restore values from sessionState into a separate function because they are almost always used in several places.page.saveData = function(id) { if (!sessionStorage) return; var data = { id: id, scroll: $("#PostItemContainer").scrollTop(), html: $("#SizingContainer").html() }; sessionStorage.setItem("list_html",JSON.stringify(data)); }; page.restoreData = function() { if (!sessionStorage) return; var data = sessionStorage.getItem("list_html"); if (!data) return null; return JSON.parse(data); }; The data that is saved is an object which contains an ID which is the selected element when the user clicks and a scroll position. These two values are used to reset the scroll position when the data is used from the cache. 
Finally the html from the #SizingContainer element is stored, which makes for the bulk of the document's HTML. In this application the HTML captured could be a substantial bit of data. If you recall, I mentioned that the server side code renders a small chunk of data initially and then gets more data if the user reads through the first 50 or so items. The rest of the items retrieved can be rather sizable. Other than the JSON deserialization that's Ok. Since I'm using SessionStorage the storage space has no immediate limits. Next is the core logic to handle saving and restoring the page state. At first though this would seem pretty simple, and in some cases it might be, but as the following code demonstrates there are a few gotchas to watch out for. Here's the relevant code I use to save and restore:$( function() { … var isBack = getUrlEncodedKey("back", location.href); if (isBack) { // remove the back key from URL setUrlEncodedKey("back", "", location.href); var data = page.restoreData(); // restore from sessionState if (!data) { // no data - force redisplay of the server side default list window.location = "list"; return; } $("#SizingContainer").html(data.html); var el = $(".postitem[data-id=" + data.id + "]"); $(".postitem").removeClass("highlight"); el.addClass("highlight"); $("#PostItemContainer").scrollTop(data.scroll); setTimeout(function() { el.removeClass("highlight"); }, 2500); } else if (window.noFrames) page.saveData(null); // save when page loads $("#SizingContainer").on("click", ".postitem", function() { var id = $(this).attr("data-id"); if (!id) return true; if (window.noFrames) page.saveData(id); var contentFrame = window.parent.frames["Content"]; if (contentFrame) contentFrame.location.href = "show/" + id; else window.location.href = "show/" + id; return false; }); … The code starts out by checking for the back query string flag which triggers restoring from the client cache. If cached the cached data structure is read from sessionStorage. It's important here to check if data was returned. If the user had back=true on the querystring but there is no cached data, he likely bookmarked this page or otherwise shut down the browser and came back to this URL. In that case the server didn't render any detail and we have no cached data, so all we can do is redirect to the original default list view using window.location. If we continued the page would render no data - so make sure to always check the cache retrieval result. Always! If there is data the it's loaded and the data.html data is restored back into the document by simply injecting the HTML back into the document's #SizingContainer element:$("#SizingContainer").html(data.html); It's that simple and it's quite quick even with a fully loaded list of additional items and on a phone. The actual HTML data is stored to the cache on every page load initially and then again when the user clicks on an element to navigate to a particular listing. The former ensures that the client cache always has something in it, and the latter updates with additional information for the selected element. For the click handling I use a data-id attribute on the list item (.postitem) in the list and retrieve the id from that. That id is then used to navigate to the actual entry as well as storing that Id value in the saved cached data. The id is used to reset the selection by searching for the data-id value in the restored elements. 
    The overall save/restore process is pretty straightforward and it doesn't require a bunch of code, yet it yields a huge improvement in the usability of the site on mobile devices (or for anybody who uses the non-frames view).

    Some things to watch out for

    As easy as it conceptually seems to simply store and retrieve cached content, you have to be quite aware of what type of content you are caching. The code above is all that's specific to the cache/restore cycle and it works, but it took a few tweaks to the rest of the script code and server code to make it all work. There were a few gotchas that weren't immediately obvious. Here are a few things to pay attention to:

    - Event handling logic
    - Timing of manipulating DOM elements
    - Inline script code
    - Bookmarking the cache URL when no cache exists

    Do you have inline script code in your HTML? That script code isn't going to run if you restore from cache and simply assign the HTML, or it may not run at the time you think it would normally in the DOM rendering cycle.

    JavaScript Event Hookups

    The biggest issue I ran into with this approach almost immediately is that originally I had various static event handlers hooked up to various UI elements that are now cached. If you have an event handler like:

    $("#btnSearch").click( function() {…});

    that works fine when the page loads with server rendered HTML, but that code breaks when you now load the HTML from cache. Why? Because the elements you're trying to hook those events to may not actually be there - yet. Luckily there's an easy workaround for this by using deferred events. With jQuery you can use the .on() event handler instead:

    $("#SelectionContainer").on("click", "#btnSearch", function() {…});

    which monitors a parent element for the events and checks for the inner selector elements to handle events on. This effectively defers to runtime event binding, so as more items are added to the document bindings still work. For any cached content use deferred events.

    Timing of manipulating DOM Elements

    Along the same lines, make sure that your DOM manipulation code follows the code that loads the cached content into the page so that you don't manipulate DOM elements that don't exist just yet. Ideally you'll want to check for the condition to restore cached content towards the top of your script code, but that can be tricky if you have components or other logic that might not all run in a straight line.

    Inline Script Code

    Here's another small problem I ran into: I use a DateTime Picker widget I built a while back that relies on the jQuery date time picker. I also created a helper function that allows keyboard date navigation into it that uses JavaScript logic. Because of MVC's limited 'object model', the only way to embed widget content into the page is through inline script. This code broke when I inserted the cached HTML into the page because the script code was not available when the component actually got injected into the page. As with the last bullet, it's a matter of timing. There's no good workaround for this - in my case I pulled out the jQuery date picker and relied on native <input type="date" /> logic instead - a better choice these days anyway, especially since this view is meant primarily to serve mobile devices, which actually support date input through the browser (unlike desktop browsers, of which only WebKit seems to support it).

    Bookmarking Cached Urls

    When you cache HTML content you have to make a decision whether you cache on the client and also not render that same content on the server.
    In the Classifieds app I didn't render server side content, so if the user comes to the page with back=True and there is no cached content I have to have a Plan B. Typically this happens when somebody ends up bookmarking the back URL. The easiest and safest solution for this scenario is to ALWAYS check the cache result to make sure it exists and if not have a safe URL to go back to - in this case the plain uncached list URL, which amounts to effectively redirecting. This seems really obvious in hindsight, but it's easy to overlook and not see a problem until much later, when it's not obvious at all why the page is not rendering anything.

    Don't use <body> to replace Content

    Since we're practically replacing all the HTML in the page it may seem tempting to simply replace the HTML content of the <body> tag. Don't. The body tag usually contains key things that should stay in the page and be there when it loads - specifically script tags and elements and possibly other embedded content. It's best to create a top level DOM element specifically as a placeholder container for your cached content and wrap it just around the actual content you want to replace. In the app above the #SizingContainer is that container.

    Other Approaches

    The approach I've used for this application is kind of specific to the existing server rendered application we're running, and so it's just one approach you can take with caching. However for server rendered content caching this is a pattern I've used in a few apps to retrofit some client caching into list displays. In this application I took the path of least resistance to the existing server rendering logic. Here are a few other ways that come to mind:

    Using Partial HTML Rendering via AJAX: Instead of rendering the page initially on the server, the page would load empty and the client would render the UI by retrieving the respective HTML and embedding it into the page from a Partial View. This effectively makes the initial rendering and the cached rendering logic identical and removes the server having to decide whether this request needs to be rendered or not (ie. not checking for a back=true switch). All the logic related to caching is handled on the client in this case. A sketch of this variant follows this list.

    Using JSON Data and Client Rendering: The hardcore client option is to do the whole UI SPA style and pull data from the server, then use client rendering or databinding to pull the data down and render using templates or client side databinding with knockout/angular et al. As with the Partial Rendering approach, the advantage is that there's no difference in the logic between pulling the data from cache or rendering from scratch, other than the initial check for the cache request. Of course if the app is a full-on SPA app, then caching may not even be required - the list could just stay in memory and be hidden and reactivated.

    I'm sure there are a number of other ways this can be handled as well, especially using AJAX. AJAX rendering might simplify the logic, but it also complicates search engine optimization since there's no content loaded initially. So there are always tradeoffs and it's important to look at all angles before deciding on any sort of caching solution in general.
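    (A hedged sketch of the partial-rendering alternative, not code from the article: the list container starts out empty, and the client either restores the cached HTML or fetches a server rendered partial view. The "/list/partial" URL and the cache key are made up for illustration.)

    ```javascript
    // Render the list on the client: use cached HTML if we have it,
    // otherwise fetch the server-rendered partial and cache it.
    function loadList() {
        var cached = window.sessionStorage ? sessionStorage.getItem("list_html") : null;
        if (cached) {
            $("#SizingContainer").html(JSON.parse(cached).html);
            return;
        }
        $.get("/list/partial", function (html) {
            $("#SizingContainer").html(html);
            if (window.sessionStorage)
                sessionStorage.setItem("list_html", JSON.stringify({ html: html }));
        });
    }

    $(loadList);   // run on document ready
    ```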
    State of the Session

    SessionState and LocalStorage are easy to use in client code and can be integrated even with server centric applications to provide nice caching features of content and data. In this post I've shown a very specific scenario of storing HTML content for the purpose of remembering list view data and state and making the browsing experience for lists a bit more friendly, especially if there's dynamically loaded content involved. If you haven't played with sessionStorage or localStorage I encourage you to give it a try. There's a lot of cool stuff that you can do with this beyond the specific scenario I've covered here…

    Resources: Overview of localStorage (also applies to sessionStorage), Web Storage Compatibility, Modernizr Test Suite

    © Rick Strahl, West Wind Technologies, 2005-2013. Posted in JavaScript, HTML5, ASP.NET, MVC

    Read the article

  • Automating Solaris 11 Zones Installation Using The Automated Install Server

    - by Orgad Kimchi
    Introduction

    This document shows how to use the Oracle Solaris 11 Automated Install server to automate the installation of Solaris 11 Zones. I will demonstrate how to set up the Automated Install server to provide a hands-off installation process for the Global Zone and two Non-Global Zones located on the same system.

    Architecture layout: Figure 1. Architecture layout

    Prerequisite: Set up the Automated Install server (AI) using the following instructions: "How to Set Up Automated Installation Services for Oracle Solaris 11". The first step in this setup will be creating two Solaris 11 Zones configuration files.

    Step 1: Create the Solaris 11 Zones configuration files

    The Solaris Zones configuration files should be in the format of the zonecfg export command.

    # zonecfg -z zone1 export > /var/tmp/zone1
    # cat /var/tmp/zone1
    create -b
    set brand=solaris
    set zonepath=/rpool/zones/zone1
    set autoboot=true
    set ip-type=exclusive
    add anet
    set linkname=net0
    set lower-link=auto
    set configure-allowed-address=true
    set link-protection=mac-nospoof
    set mac-address=random
    end

    Create a backup copy of this file under a different name, for example, zone2.

    # cp /var/tmp/zone1 /var/tmp/zone2

    Modify the second configuration file with the zone2 configuration information. You should change the zonepath, for example: set zonepath=/rpool/zones/zone2

    Step 2: Copy and share the Zones configuration files

    Create the NFS directory for the Zones configuration files:
    # mkdir /export/zone_config

    Share the directory for the Zones configuration files:
    # share -o ro /export/zone_config

    Copy the Zones configuration files into the NFS shared directory:
    # cp /var/tmp/zone1 /var/tmp/zone2 /export/zone_config

    Verify that the NFS share has been created using the following command:
    # share
    export_zone_config      /export/zone_config     nfs     sec=sys,ro

    Step 3: Add the Global Zone as client to the Install Service

    Use the installadm create-client command to associate the client (Global Zone) with the install service. To find the MAC address of a system, use the dladm command as described in the dladm(1M) man page. The following command adds the client (Global Zone) with MAC address 0:14:4f:2:a:19 to the s11x86service install service.

    # installadm create-client -e "0:14:4f:2:a:19" -n s11x86service

    You can verify the client creation using the following command:
    # installadm list -c
    Service Name  Client Address     Arch   Image Path
    ------------  --------------     ----   ----------
    s11x86service 00:14:4F:02:0A:19  i386   /export/auto_install/s11x86service

    We can see the client install service name (s11x86service), MAC address (00:14:4F:02:0A:19) and architecture (i386).

    Step 4: Global Zone manifest setup

    First, get a list of the installation services and the manifests associated with them:
    # installadm list -m
    Service Name   Manifest        Status
    ------------   --------        ------
    default-i386   orig_default   Default
    s11x86service  orig_default   Default

    Then probe the s11x86service and the default manifest associated with it. The -m switch reflects the name of the manifest associated with a service. Since we want to capture that output into a file, we redirect the output of the command as follows:
    # installadm export -n s11x86service -m orig_default > /var/tmp/orig_default.xml

    Create a backup copy of this file under a different name, for example, orig-default2.xml, and edit the copy.
# cp /var/tmp/orig_default.xml /var/tmp/orig_default2.xml Use the configuration element in the AI manifest for the client system to specify non-global zones. Use the name attribute of the configuration element to specify the name of the zone. Use the source attribute to specify the location of the config file for the zone.The source location can be any http:// or file:// location that the client can access during installation. The following sample AI manifest specifies two Non-Global Zones: zone1 and zone2 You should replace the server_ip with the ip address of the NFS server. <!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1"> <auto_install>   <ai_instance>     <target>       <logical>         <zpool name="rpool" is_root="true">           <filesystem name="export" mountpoint="/export"/>           <filesystem name="export/home"/>           <be name="solaris"/>         </zpool>       </logical>     </target>     <software type="IPS">       <source>         <publisher name="solaris">           <origin name="http://pkg.oracle.com/solaris/release"/>         </publisher>       </source>       <software_data action="install">         <name>pkg:/entire@latest</name>         <name>pkg:/group/system/solaris-large-server</name>       </software_data>     </software>     <configuration type="zone" name="zone1" source="file:///net/server_ip/export/zone_config/zone1"/>     <configuration type="zone" name="zone2" source="file:///net/server_ip/export/zone_config/zone2"/>   </ai_instance> </auto_install> The following example adds the /var/tmp/orig_default2.xml AI manifest to the s11x86service install service # installadm create-manifest -n s11x86service -f /var/tmp/orig_default2.xml -m gzmanifest You can verify the manifest creation using the following command # installadm list -n s11x86service  -m Service/Manifest Name  Status   Criteria ---------------------  ------   -------- s11x86service    orig_default        Default  None    gzmanifest          Inactive None We can see from the command output that the new manifest named gzmanifest has been created and associated with the s11x86service install service. Step 5: Non Global Zone manifest setup The AI manifest for non-global zone installation is similar to the AI manifest for installing the global zone. If you do not provide a custom AI manifest for a non-global zone, the default AI manifest for Zones is used The default AI manifest for Zones is available at /usr/share/auto_install/manifest/zone_default.xml. In this example we should use the default AI manifest for zones The following sample default AI manifest for zones # cat /usr/share/auto_install/manifest/zone_default.xml <?xml version="1.0" encoding="UTF-8"?> <!--  Copyright (c) 2011, 2012, Oracle and/or its affiliates. All rights reserved. --> <!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1"> <auto_install>     <ai_instance name="zone_default">         <target>             <logical>                 <zpool name="rpool">                     <!--                       Subsequent <filesystem> entries instruct an installer                       to create following ZFS datasets:                           <root_pool>/export         (mounted on /export)                           <root_pool>/export/home    (mounted on /export/home)                       Those datasets are part of standard environment                       and should be always created.                       
In rare cases, if there is a need to deploy a zone                       without these datasets, either comment out or remove                       <filesystem> entries. In such scenario, it has to be also                       assured that in case of non-interactive post-install                       configuration, creation of initial user account is                       disabled in related system configuration profile.                       Otherwise the installed zone would fail to boot.                     -->                     <filesystem name="export" mountpoint="/export"/>                     <filesystem name="export/home"/>                     <be name="solaris">                         <options>                             <option name="compression" value="on"/>                         </options>                     </be>                 </zpool>             </logical>         </target>         <software type="IPS">             <destination>                 <image>                     <!-- Specify locales to install -->                     <facet set="false">facet.locale.*</facet>                     <facet set="true">facet.locale.de</facet>                     <facet set="true">facet.locale.de_DE</facet>                     <facet set="true">facet.locale.en</facet>                     <facet set="true">facet.locale.en_US</facet>                     <facet set="true">facet.locale.es</facet>                     <facet set="true">facet.locale.es_ES</facet>                     <facet set="true">facet.locale.fr</facet>                     <facet set="true">facet.locale.fr_FR</facet>                     <facet set="true">facet.locale.it</facet>                     <facet set="true">facet.locale.it_IT</facet>                     <facet set="true">facet.locale.ja</facet>                     <facet set="true">facet.locale.ja_*</facet>                     <facet set="true">facet.locale.ko</facet>                     <facet set="true">facet.locale.ko_*</facet>                     <facet set="true">facet.locale.pt</facet>                     <facet set="true">facet.locale.pt_BR</facet>                     <facet set="true">facet.locale.zh</facet>                     <facet set="true">facet.locale.zh_CN</facet>                     <facet set="true">facet.locale.zh_TW</facet>                 </image>             </destination>             <software_data action="install">                 <name>pkg:/group/system/solaris-small-server</name>             </software_data>         </software>     </ai_instance> </auto_install> (optional) We can customize the default AI manifest for Zones Create a backup copy of this file under a different name, for example, zone_default2.xml and edit the copy # cp /usr/share/auto_install/manifest/zone_default.xml /var/tmp/zone_default2.xml Edit the copy (/var/tmp/zone_default2.xml) The following example adds the /var/tmp/zone_default2.xml AI manifest to the s11x86service install service and specifies that zone1 and zone2 should use this manifest. 
# installadm create-manifest -n s11x86service -f /var/tmp/zone_default2.xml -m zones_manifest -c zonename="zone1 zone2" Note: Do not use the following elements or attributes in a non-global zone AI manifest:     The auto_reboot attribute of the ai_instance element     The http_proxy attribute of the ai_instance element     The disk child element of the target element     The noswap attribute of the logical element     The nodump attribute of the logical element     The configuration element Step 6: Global Zone profile setup We are going to create a global zone configuration profile which includes the host information for example: host name, ip address name services etc… # sysconfig create-profile –o /var/tmp/gz_profile.xml You need to provide the host information for example:     Default router     Root password     DNS information The output should eventually disappear and be replaced by the initial screen of the System Configuration Tool (see Figure 2), where you can do the final configuration. Figure 2. Profile creation menu You can validate the profile using the following command # installadm validate -n s11x86service –P /var/tmp/gz_profile.xml Validating static profile gz_profile.xml...  Passed Next, instantiate a profile with the install service. In our case, use the following syntax for doing this # installadm create-profile -n s11x86service  -f /var/tmp/gz_profile.xml -p  gz_profile You can verify profile creation using the following command # installadm list –n s11x86service  -p Service/Profile Name  Criteria --------------------  -------- s11x86service    gz_profile         None We can see that the gz_profie has been created and associated with the s11x86service Install service. Step 7: Setup the Solaris Zones configuration profiles The step should be similar to the Global zone profile creation on step 6 # sysconfig create-profile –o /var/tmp/zone1_profile.xml # sysconfig create-profile –o /var/tmp/zone2_profile.xml You can validate the profiles using the following command # installadm validate -n s11x86service -P /var/tmp/zone1_profile.xml Validating static profile zone1_profile.xml...  Passed # installadm validate -n s11x86service -P /var/tmp/zone2_profile.xml Validating static profile zone2_profile.xml...  Passed Next, associate the profiles with the install service The following example adds the zone1_profile.xml configuration profile to the s11x86service  install service and specifies that zone1 should use this profile. # installadm create-profile -n s11x86service  -f  /var/tmp/zone1_profile.xml -p zone1_profile -c zonename=zone1 The following example adds the zone2_profile.xml configuration profile to the s11x86service  install service and specifies that zone2 should use this profile. # installadm create-profile -n s11x86service  -f  /var/tmp/zone2_profile.xml -p zone2_profile -c zonename=zone2 You can verify the profiles creation using the following command # installadm list -n s11x86service -p Service/Profile Name  Criteria --------------------  -------- s11x86service    zone1_profile      zonename = zone1    zone2_profile      zonename = zone2    gz_profile         None We can see that we have three profiles in the s11x86service  install service     Global Zone  gz_profile     zone1            zone1_profile     zone2            zone2_profile. 
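    (Not part of the original walkthrough - just a hedged convenience sketch. Since Step 7 repeats the same validate/create-profile pair for each zone, the two calls can be wrapped in a small loop. It uses only the installadm options already shown above and assumes the profile XML files exist under /var/tmp.)

    ```sh
    #!/bin/sh
    # Validate and register one configuration profile per zone in a single pass.
    for z in zone1 zone2; do
        installadm validate -n s11x86service -P /var/tmp/${z}_profile.xml && \
        installadm create-profile -n s11x86service \
            -f /var/tmp/${z}_profile.xml \
            -p ${z}_profile -c zonename=${z}
    done
    ```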
    Step 8: Global Zone setup

    Associate the global zone client with the manifest and the profile that we created in the previous steps. The following example adds the manifest and profile to the client (global zone), where: gzmanifest is the name of the manifest; gz_profile is the name of the configuration profile; mac="0:14:4f:2:a:19" is the client (global zone) MAC address; s11x86service is the install service name.

    # installadm set-criteria -m gzmanifest -p gz_profile -c mac="0:14:4f:2:a:19" -n s11x86service

    You can verify the manifest and profile association using the following command:
    # installadm list -n s11x86service -p -m
    Service/Manifest Name  Status   Criteria
    ---------------------  ------   --------
    s11x86service
       gzmanifest                   mac  = 00:14:4F:02:0A:19
       orig_default        Default  None

    Service/Profile Name  Criteria
    --------------------  --------
    s11x86service
       gz_profile         mac      = 00:14:4F:02:0A:19
       zone2_profile      zonename = zone2
       zone1_profile      zonename = zone1

    Step 9: Provision the host with the Non-Global Zones

    The next step is to boot the client system off the network and provision it using the Automated Install service that we just set up. First, boot the client system. Figure 3 shows the network boot attempt (when done on an x86 system): Figure 3. Network Boot. Then you will be prompted by a GRUB menu, with a timer, as shown in Figure 4. The default selection (the "Text Installer and command line" option) is highlighted. Press the down arrow to highlight the second option labeled Automated Install, and then press Enter. The reason we need to do this is that we want to prevent a system from being automatically re-installed if it were to be booted from the network accidentally. Figure 4. GRUB Menu. What follows is the continuation of a networked boot from the Automated Install server. The client downloads a mini-root (a small set of files in which to successfully run the installer), identifies the location of the Automated Install manifest on the network, retrieves that manifest, and then processes it to identify the address of the IPS repository from which to obtain the desired software payload. Non-Global Zones are installed and configured on the first reboot after the Global Zone is installed.

    You can list the status of all the Solaris Zones using the following command:
    # zoneadm list -civ

    Once the Zones are in the running state you can log in to a zone using the following command (zlogin takes the zone name directly):
    # zlogin zone1

    Troubleshooting Automated Installations

    If an installation to a client system failed, you can find the client log at /system/volatile/install_log.

    NOTE: Zones are not installed if any of the following errors occurs:
    - A zone config file is not syntactically correct.
    - A collision exists among zone names, zone paths, or delegated ZFS datasets in the set of zones to be installed.
    - Required datasets are not configured in the global zone.

    For more troubleshooting information see "Installing Oracle Solaris 11 Systems".

    Conclusion

    This paper demonstrated the benefits of using the Automated Install server to simplify the Non-Global Zones setup, including the creation and configuration of the global zone manifest and the Solaris Zones profiles.

    Read the article

< Previous Page | 399 400 401 402 403 404 405 406 407 408 409 410  | Next Page >